
Release 0.4.61: JSON structured output, streaming, nested steps, updates, new dependencies #53

Merged: 2 commits into main on Jan 25, 2025

Conversation

@Zochory (Member) commented on Jan 25, 2025

User description

This pull request includes:

Multi-step visualization and word-by-word streaming in the Chainlit app.

Multi-step Visualization:

  • Hierarchical step tracking with proper nesting
  • Clear step types and metadata
  • Progress tracking through TaskList

Word-by-Word Streaming:

  • Smooth text output with a 0.01-second delay between words
  • Proper word spacing and formatting
  • Integration with Chainlit's streaming capabilities
  • Maintains message attribution and context

Combined Features:

  • Each streamed message is properly tracked in its step
  • Step metadata shows the full input/output
  • Maintains all existing functionality (task tracking, error handling, etc.)

The implementation now provides both a clear visualization of the reasoning process through steps and a smooth, readable output through word-by-word streaming. This makes the agent interactions more dynamic and easier to follow.
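A minimal sketch of how these pieces fit together: the stream_text generator is the one added in src/agentic_fleet/app.py, while the stream_reply consumer is hypothetical wiring shown only to illustrate how the words feed into a Chainlit message.

    import asyncio
    from typing import AsyncGenerator

    import chainlit as cl

    STREAM_DELAY = 0.01  # the per-word delay described above


    async def stream_text(text: str) -> AsyncGenerator[str, None]:
        """Stream text content word by word (as added in this PR)."""
        words = text.split()
        for i, word in enumerate(words):
            await asyncio.sleep(STREAM_DELAY)
            yield word + (" " if i < len(words) - 1 else "")


    async def stream_reply(text: str, author: str) -> None:
        # Hypothetical consumer: push each word into a Chainlit message via
        # stream_token(), then finalize with send() so attribution is kept.
        msg = cl.Message(content="", author=author)
        async for token in stream_text(text):
            await msg.stream_token(token)
        await msg.send()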

The PR also includes several updates to the documentation, configuration files, and dependencies of the agentic-fleet project. The most important changes are README.md updates that improve clarity and add new sections, dependency adjustments in pyproject.toml, and the removal of an example script.

Documentation updates:

  • README.md: Added a new "Core Components" section describing the roles of specialized agents, updated the "Key Features" section, and included a new "System Architecture" diagram and "Error Handling" section.

Dependency updates:

  • pyproject.toml: Updated project version to 0.4.61, added new dependencies (autogen-ext[langchain], seaborn, scikit-learn, ipykernel, authlib, starlette, semantic-kernel, tiktoken), and updated existing dependencies (chainlit to 2.0.6).

Configuration file updates:

  • .chainlit/config.toml: Updated the generated_by version to 2.0.6.

Example script removal:

  • examples/quickstart/web_example.py: Removed the deprecated web example script and its requirements.txt.

New experimental configuration:

  • src/agentic_fleet/_experimental_config/planner/: Added planner_memory.yaml and planner_prompt.yaml for structured planning and error resolution.

PR Type

Enhancement, Documentation, Tests


Description

  • Introduced multi-step visualization and word-by-word streaming for enhanced interaction.
    • Added stream_text function for streaming text word-by-word.
    • Enhanced response processing with step visualization using cl.Step.
  • Added experimental configuration files for planner behavior and error handling.
    • Included planner_memory.yaml and planner_prompt.yaml for structured planning and error resolution.
  • Updated documentation with detailed system architecture and core components.
    • Expanded the README with new sections on error handling, architecture, and agent roles.
  • Updated dependencies and configuration for improved functionality and compatibility.
    • Upgraded chainlit to version 2.0.6 and added new dependencies like seaborn and semantic-kernel.

Changes walkthrough 📝

Relevant files:

Miscellaneous

  examples/quickstart/web_example.py: Removed deprecated web example script (+0/-99)
  • Removed the deprecated web example script.
  • Simplified the project structure by eliminating unused examples.

Enhancement

  src/agentic_fleet/app.py: Enhanced response processing and streaming (+182/-106)
  • Added stream_text function for word-by-word text streaming.
  • Enhanced response and message processing with step visualization.
  • Improved task handling with structured steps and error handling.

  src/agentic_fleet/_experimental_config/planner/planner_memory.yaml: Added planner memory configuration (+27/-0)
  • Added configuration for summarizing error resolutions and preferences.
  • Defined structure for analyzing chat history errors.

  src/agentic_fleet/_experimental_config/planner/planner_prompt.yaml: Added planner prompt configuration (+192/-0)
  • Added detailed instructions for planner behavior and the planning process.
  • Defined a JSON schema for planner responses.

Configuration changes

  .chainlit/config.toml: Updated Chainlit configuration version (+1/-1)
  • Updated the generated_by version to 2.0.6.

Documentation

  README.md: Expanded documentation with architecture and features (+77/-19)
  • Added detailed system architecture and core components.
  • Expanded sections on error handling and agent roles.
  • Updated setup instructions and added a star history chart.

Dependencies

  examples/quickstart/requirements.txt: Removed outdated requirements file (+0/-3)
  • Removed the deprecated requirements file.

  pyproject.toml: Updated project version and dependencies (+13/-4)
  • Updated the project version to 0.4.61.
  • Added new dependencies like seaborn, semantic-kernel, and tiktoken.
  • Upgraded chainlit to version 2.0.6.

@Zochory self-assigned this on Jan 25, 2025
This was linked to issues on Jan 25, 2025

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
    🧪 No relevant tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Error Handling

    The new streaming functionality lacks proper error handling for network interruptions and timeouts during word-by-word streaming. This could lead to incomplete or stuck messages.

    async def stream_text(text: str) -> AsyncGenerator[str, None]:
        """Stream text content word by word.
    
        Args:
            text: Text to stream
    
        Yields:
            Each word of the text with a delay
        """
        words = text.split()
        for i, word in enumerate(words):
            await asyncio.sleep(STREAM_DELAY)
            yield word + (" " if i < len(words) - 1 else "")
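
One possible shape for that hardening (a sketch under assumptions, not the PR's code: SEND_TIMEOUT and the msg object with a Chainlit-style stream_token() are illustrative) is to bound each send and finalize the message on failure so it is never left stuck:

    import asyncio

    SEND_TIMEOUT = 5.0  # hypothetical per-token send budget


    async def stream_reply_safely(msg, text: str) -> None:
        try:
            # stream_text is the generator shown above.
            async for token in stream_text(text):
                # A network stall on send raises TimeoutError instead of hanging.
                await asyncio.wait_for(msg.stream_token(token), timeout=SEND_TIMEOUT)
        except asyncio.TimeoutError:
            # Finalize with whatever was already streamed, then surface the error.
            await msg.send()
            raise
        else:
            await msg.send()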
    Resource Management

    The nested steps implementation may lead to resource leaks if steps are not properly closed in case of exceptions. Consider using try-finally blocks.

    async with cl.Step(name="Response Processing", type="process") as main_step:
        main_step.input = str(response)
    
        # Handle TaskResult objects
        if isinstance(response, TaskResult):
            async with cl.Step(name="Task Execution", type="task") as task_step:
                task_step.input = getattr(response, 'task', 'Task execution')
    
                for msg in response.messages:
                    await process_message(msg, collected_responses)
    
                if response.stop_reason:
                    task_step.output = f"Task stopped: {response.stop_reason}"
                    await cl.Message(
                        content=f"🛑 {task_step.output}",
                        author="System"
                    ).send()
    
        # Handle TextMessage objects directly
        elif isinstance(response, TextMessage):
            async with cl.Step(name=f"Agent: {response.source}", type="message") as msg_step:
                msg_step.input = response.content
                await process_message(response, collected_responses)
    
        # Handle chat messages
        elif hasattr(response, 'chat_message'):
            async with cl.Step(name="Chat Message", type="message") as chat_step:
                chat_step.input = str(response.chat_message)
                await process_message(response.chat_message, collected_responses)
    
        # Handle inner thoughts and reasoning
        elif hasattr(response, 'inner_monologue'):
            async with cl.Step(name="Inner Thought", type="reasoning") as thought_step:
                thought_step.input = response.inner_monologue
                await cl.Message(
                    content=f"💭 Inner thought: {response.inner_monologue}",
                    author="System",
                    indent=1
                ).send()
                thought_step.output = "Processed inner thought"
    
        # Handle function calls
        elif hasattr(response, 'function_call'):
            async with cl.Step(name="Function Call", type="function") as func_step:
                func_step.input = str(response.function_call)
                await cl.Message(
                    content=f"🛠️ Function call: {response.function_call}",
                    author="System",
                    indent=1
                ).send()
                func_step.output = "Function call processed"
    
        # Handle multimodal messages (images, etc.)
        elif isinstance(response, (list, tuple)):
            async with cl.Step(name="Multimodal Content", type="media") as media_step:
                media_step.input = "Processing multimodal content"
                await _process_multimodal_message(response)
                media_step.output = "Multimodal content processed"
    
        # Handle any other type of response
        else:
            async with cl.Step(name="Generic Response", type="other") as generic_step:
                content = str(response)
                generic_step.input = content
                await cl.Message(content=content, author="System").send()
                collected_responses.append(content)
                generic_step.output = "Response processed"
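
Note that the async with blocks above do close each step on exit; what an explicit try/except adds is a recorded failure output before the exception propagates. A sketch of that pattern (guarded_step and its arguments are illustrative names, not code from this PR; assumes `import chainlit as cl` as in app.py):

    async def guarded_step(name: str, step_type: str, handler, payload) -> None:
        # Illustrative wrapper: the step always gets an output, even when the
        # handler raises, so failed steps stay visible in the UI.
        async with cl.Step(name=name, type=step_type) as step:
            step.input = str(payload)
            try:
                await handler(payload)
            except Exception as exc:
                step.output = f"Failed: {exc}"
                raise
            else:
                step.output = "Processed"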
    Performance

    The fixed STREAM_DELAY of 0.01 seconds could cause performance issues with large text blocks. Consider making this configurable or adaptive based on content length.

    STREAM_DELAY = 0.01
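
A sketch of one adaptive alternative (the constants are invented for illustration): keep the 0.01 s pause for short replies but cap the total streaming time so long texts do not crawl.

    def adaptive_delay(word_count: int,
                       base_delay: float = 0.01,
                       max_total_seconds: float = 2.0) -> float:
        # For a 100-word reply this returns the full 0.01 s; for 10,000 words
        # it drops to 0.0002 s, instead of adding 100 s of fixed delay.
        if word_count == 0:
            return 0.0
        return min(base_delay, max_total_seconds / word_count)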


    PR Code Suggestions ✨

    Explore these optional code suggestions:

Category: Possible issue

Suggestion: Prevent infinite step nesting

    The process_response function should handle potential infinite recursion by adding a
    maximum depth limit for nested steps.

    src/agentic_fleet/app.py [306-314]

    -async def process_response(response: Any, collected_responses: List[str]) -> None:
    +async def process_response(response: Any, collected_responses: List[str], depth: int = 0, max_depth: int = 5) -> None:
    +    if depth >= max_depth:
    +        logger.warning("Maximum step nesting depth reached")
    +        return
         try:
             async with cl.Step(name="Response Processing", type="process") as main_step:
    -            # ... nested steps processing ...
    +            # ... nested steps processing with depth + 1 ...
    Suggestion importance[1-10]: 8

    Why: Adding a depth limit for nested steps is crucial to prevent potential infinite recursion and stack overflow errors in complex response processing scenarios.

Suggestion: Handle empty input validation

    The stream_text function should handle potential empty text input to avoid splitting
    on empty string. Add input validation to handle this edge case.

    src/agentic_fleet/app.py [57-61]

     async def stream_text(text: str) -> AsyncGenerator[str, None]:
    +    if not text:
    +        return
         words = text.split()
         for i, word in enumerate(words):
             await asyncio.sleep(STREAM_DELAY)
             yield word + (" " if i < len(words) - 1 else "")
    Suggestion importance[1-10]: 7

    Why: The suggestion adds important input validation to prevent potential errors when empty text is passed to the stream_text function. This improves robustness and error handling.


@Zochory linked an issue on Jan 25, 2025 that may be closed by this pull request

qodo-merge-pro-for-open-source bot commented on Jan 25, 2025

    CI Feedback 🧐

    (Feedback updated until commit 8366454)

    A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

    Action: test

    Failed stage: Run tests with coverage [❌]

    Failed test name: tests/integration/test_app.py

    Failure summary:

    The action failed because of missing Azure OpenAI credentials. The tests could not be executed
    because the required authentication credentials were not provided. Specifically:

  • The code requires either an API key, Azure AD token, or token provider
  • None of the required environment variables (AZURE_OPENAI_API_KEY or AZURE_OPENAI_AD_TOKEN) were set
  • This affected multiple test files: test_app.py, test_cli.py, and test_models.py

  • Relevant error logs:
    1:  ##[group]Operating System
    2:  Ubuntu
    ...
    
    1175:  Python_ROOT_DIR: /opt/hostedtoolcache/Python/3.12.8/x64
    1176:  Python2_ROOT_DIR: /opt/hostedtoolcache/Python/3.12.8/x64
    1177:  Python3_ROOT_DIR: /opt/hostedtoolcache/Python/3.12.8/x64
    1178:  LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.12.8/x64/lib
    1179:  ##[endgroup]
    1180:  /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/pytest_asyncio/plugin.py:207: PytestDeprecationWarning: The configuration option "asyncio_default_fixture_loop_scope" is unset.
    1181:  The event loop scope for asynchronous fixtures will default to the fixture caching scope. Future versions of pytest-asyncio will default the loop scope for asynchronous fixtures to function scope. Set the default fixture loop scope explicitly in order to avoid unexpected behavior in the future. Valid fixture loop scopes are: "function", "class", "module", "package", "session"
    1182:  warnings.warn(PytestDeprecationWarning(_DEFAULT_FIXTURE_LOOP_SCOPE_UNSET))
    1183:  ==================================== ERRORS ====================================
    1184:  ________________ ERROR collecting tests/integration/test_app.py ________________
    ...
    
    1188:  from .app import handle_message, initialize_session, update_settings  # noqa: F401
    1189:  src/agentic_fleet/app.py:72: in <module>
    1190:  az_model_client = AzureOpenAIChatCompletionClient(
    1191:  /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/autogen_ext/models/openai/_openai_client.py:1173: in __init__
    1192:  client = _azure_openai_client_from_config(copied_args)
    1193:  /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/autogen_ext/models/openai/_openai_client.py:102: in _azure_openai_client_from_config
    1194:  return AsyncAzureOpenAI(**azure_config)
    1195:  /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/openai/lib/azure.py:435: in __init__
    1196:  raise OpenAIError(
    1197:  E   openai.OpenAIError: Missing credentials. Please pass one of `api_key`, `azure_ad_token`, `azure_ad_token_provider`, or the `AZURE_OPENAI_API_KEY` or `AZURE_OPENAI_AD_TOKEN` environment variables.
    1198:  ___________________ ERROR collecting tests/unit/test_cli.py ____________________
    ...
    
    1202:  from .app import handle_message, initialize_session, update_settings  # noqa: F401
    1203:  src/agentic_fleet/app.py:72: in <module>
    1204:  az_model_client = AzureOpenAIChatCompletionClient(
    1205:  /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/autogen_ext/models/openai/_openai_client.py:1173: in __init__
    1206:  client = _azure_openai_client_from_config(copied_args)
    1207:  /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/autogen_ext/models/openai/_openai_client.py:102: in _azure_openai_client_from_config
    1208:  return AsyncAzureOpenAI(**azure_config)
    1209:  /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/openai/lib/azure.py:435: in __init__
    1210:  raise OpenAIError(
    1211:  E   openai.OpenAIError: Missing credentials. Please pass one of `api_key`, `azure_ad_token`, `azure_ad_token_provider`, or the `AZURE_OPENAI_API_KEY` or `AZURE_OPENAI_AD_TOKEN` environment variables.
    1212:  __________________ ERROR collecting tests/unit/test_models.py __________________
    ...
    
    1216:  from .app import handle_message, initialize_session, update_settings  # noqa: F401
    1217:  src/agentic_fleet/app.py:72: in <module>
    1218:  az_model_client = AzureOpenAIChatCompletionClient(
    1219:  /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/autogen_ext/models/openai/_openai_client.py:1173: in __init__
    1220:  client = _azure_openai_client_from_config(copied_args)
    1221:  /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/autogen_ext/models/openai/_openai_client.py:102: in _azure_openai_client_from_config
    1222:  return AsyncAzureOpenAI(**azure_config)
    1223:  /opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/openai/lib/azure.py:435: in __init__
    1224:  raise OpenAIError(
    1225:  E   openai.OpenAIError: Missing credentials. Please pass one of `api_key`, `azure_ad_token`, `azure_ad_token_provider`, or the `AZURE_OPENAI_API_KEY` or `AZURE_OPENAI_AD_TOKEN` environment variables.
    ...
    
    1238:  src/agentic_fleet/cli.py           39     39      0      0     0%   3-69
    1239:  src/agentic_fleet/config.py        30     30      0      0     0%   3-58
    1240:  src/agentic_fleet/run.py           18     18      0      0     0%   3-31
    1241:  src/agentic_fleet/scripts.py        9      9      0      0     0%   2-14
    1242:  ---------------------------------------------------------------------------
    1243:  TOTAL                             416    373     92      0     8%
    1244:  Coverage XML written to file coverage.xml
    1245:  =========================== short test summary info ============================
    1246:  ERROR tests/integration/test_app.py - openai.OpenAIError: Missing credentials. Please pass one of `api_key`, `azure_ad_token`, `azure_ad_token_provider`, or the `AZURE_OPENAI_API_KEY` or `AZURE_OPENAI_AD_TOKEN` environment variables.
    1247:  ERROR tests/unit/test_cli.py - openai.OpenAIError: Missing credentials. Please pass one of `api_key`, `azure_ad_token`, `azure_ad_token_provider`, or the `AZURE_OPENAI_API_KEY` or `AZURE_OPENAI_AD_TOKEN` environment variables.
    1248:  ERROR tests/unit/test_models.py - openai.OpenAIError: Missing credentials. Please pass one of `api_key`, `azure_ad_token`, `azure_ad_token_provider`, or the `AZURE_OPENAI_API_KEY` or `AZURE_OPENAI_AD_TOKEN` environment variables.
    1249:  !!!!!!!!!!!!!!!!!!! Interrupted: 3 errors during collection !!!!!!!!!!!!!!!!!!!!
    1250:  2 warnings, 3 errors in 8.92s
    1251:  ##[error]Process completed with exit code 2.
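
The collection errors all trace back to the module-level client construction at src/agentic_fleet/app.py:72, which runs at import time, before any test executes. A hedged sketch of the conventional fix, deferring construction to first use (the wrapper, the environment variable names, and the constructor arguments are illustrative, not this repository's code):

    import os
    from functools import lru_cache

    from autogen_ext.models.openai import AzureOpenAIChatCompletionClient


    @lru_cache(maxsize=1)
    def get_model_client() -> AzureOpenAIChatCompletionClient:
        # Built on first call rather than at import time, so pytest can
        # collect the test modules even when CI has no Azure credentials.
        return AzureOpenAIChatCompletionClient(
            model=os.environ["AZURE_OPENAI_MODEL"],
            azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT"],
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            api_version=os.environ.get("AZURE_OPENAI_API_VERSION", "2024-06-01"),
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
        )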
    

@Zochory merged commit 7ba753c into main on Jan 25, 2025
2 of 5 checks passed