Merge branch 'main' into save_load_state_vd
victordibia authored Nov 30, 2024
2 parents 176b453 + f02aac7 commit 5d7da2b
Showing 49 changed files with 1,823 additions and 945 deletions.
1 change: 1 addition & 0 deletions .github/workflows/docs.yml
@@ -40,6 +40,7 @@ jobs:
{ ref: "v0.4.0.dev5", dest-dir: "0.4.0.dev5" },
{ ref: "v0.4.0.dev6", dest-dir: "0.4.0.dev6" },
{ ref: "v0.4.0.dev7", dest-dir: "0.4.0.dev7" },
{ ref: "v0.4.0.dev8", dest-dir: "0.4.0.dev8" },
]
steps:
- name: Checkout
40 changes: 20 additions & 20 deletions CONTRIBUTING.md
@@ -49,8 +49,8 @@ We will update version numbers according to the following rules:

1. Create a PR that updates the version numbers across the codebase ([example](https://github.com/microsoft/autogen/pull/4359))
2. The docs CI will fail for the PR, but this is expected and will be resolved in the next step
2. After merging the PR, create and push a tag that corresponds to the new version. For example, for `0.4.0.dev7`:
   - `git tag 0.4.0.dev7 && git push origin 0.4.0.dev7`
2. After merging the PR, create and push a tag that corresponds to the new version. For example, for `0.4.0.dev8`:
   - `git tag 0.4.0.dev8 && git push origin 0.4.0.dev8`
3. Restart the docs CI by finding the failed [job corresponding to the `push` event](https://github.com/microsoft/autogen/actions/workflows/docs.yml) and restarting all jobs
4. Run [this](https://github.com/microsoft/autogen/actions/workflows/single-python-package.yml) workflow for each of the packages that need to be released and get an approval for the release for it to run

@@ -59,27 +59,27 @@ We will update version numbers according to the following rules:
To help ensure the health of the project and community, the AutoGen committers have a weekly triage process to ensure that all issues and pull requests are reviewed and addressed in a timely manner. The following documents the responsibilities while on triage duty:

- Issues
  - Review all new issues - these will be tagged with [`needs-triage`](https://github.com/microsoft/autogen/issues?q=is%3Aissue%20state%3Aopen%20label%3Aneeds-triage).
  - Apply appropriate labels:
    - One of the `proj-*` labels, based on the project the issue is related to
    - `documentation`: related to documentation
    - `x-lang`: related to cross-language functionality
    - `dotnet`: related to .NET
  - Add the issue to a relevant milestone if necessary.
  - If you can resolve the issue or reply to the OP, please do.
  - If you cannot resolve the issue, assign it to the appropriate person.
  - If awaiting a reply, add the tag `awaiting-op-response` (this will be auto-removed when the OP replies).
  - Bonus: there is a backlog of old issues that need to be reviewed - if you have time, review these as well and close or refresh as many as you can.
- PRs
  - The UX on GitHub flags all recently updated PRs. Draft PRs can be ignored; otherwise, review all recently updated PRs.
  - If a PR is ready for review and you can provide one, please go ahead. If you can't, please assign someone. You can quickly spin up a codespace with the PR to test it out.
  - If a PR needs a reply from the OP, please tag it `awaiting-op-response`.
  - If a PR is approved and passes CI, it's ready to merge; please do so.
  - If it looks like there is a possibly transient CI failure, re-run the failed jobs.
- Discussions
  - Look for recently updated discussions and reply as needed or find someone on the team to reply.
- Security
  - Look through any security alerts and file issues or dismiss them as needed.

## Becoming a Reviewer

6 changes: 3 additions & 3 deletions README.md
@@ -4,14 +4,14 @@
<img src="https://microsoft.github.io/autogen/0.2/img/ag.svg" alt="AutoGen Logo" width="100">

[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40pyautogen)](https://twitter.com/pyautogen) [![GitHub Discussions](https://img.shields.io/badge/Discussions-Q%26A-green?logo=github)](https://github.com/microsoft/autogen/discussions) [![0.2 Docs](https://img.shields.io/badge/Docs-0.2-blue)](https://microsoft.github.io/autogen/0.2/) [![0.4 Docs](https://img.shields.io/badge/Docs-0.4-blue)](https://microsoft.github.io/autogen/dev/)
[![PyPi autogen-core](https://img.shields.io/badge/PyPi-autogen--core-blue?logo=pypi)](https://pypi.org/project/autogen-core/0.4.0.dev7/) [![PyPi autogen-agentchat](https://img.shields.io/badge/PyPi-autogen--agentchat-blue?logo=pypi)](https://pypi.org/project/autogen-agentchat/0.4.0.dev7/) [![PyPi autogen-ext](https://img.shields.io/badge/PyPi-autogen--ext-blue?logo=pypi)](https://pypi.org/project/autogen-ext/0.4.0.dev7/)

[![PyPi autogen-core](https://img.shields.io/badge/PyPi-autogen--core-blue?logo=pypi)](https://pypi.org/project/autogen-core/0.4.0.dev8/) [![PyPi autogen-agentchat](https://img.shields.io/badge/PyPi-autogen--agentchat-blue?logo=pypi)](https://pypi.org/project/autogen-agentchat/0.4.0.dev8/) [![PyPi autogen-ext](https://img.shields.io/badge/PyPi-autogen--ext-blue?logo=pypi)](https://pypi.org/project/autogen-ext/0.4.0.dev8/)

</div>

# AutoGen

> [!IMPORTANT]
>
> - (11/14/24) ⚠️ In response to a number of asks to clarify and distinguish between official AutoGen and its forks that created confusion, we issued a [clarification statement](https://github.com/microsoft/autogen/discussions/4217).
> - (10/13/24) Interested in the standard AutoGen as a prior user? Find it at the actively-maintained *AutoGen* [0.2 branch](https://github.com/microsoft/autogen/tree/0.2) and `autogen-agentchat~=0.2` PyPi package.
> - (10/02/24) [AutoGen 0.4](https://microsoft.github.io/autogen/dev) is a from-the-ground-up rewrite of AutoGen. Learn more about the history, goals and future at [this blog post](https://microsoft.github.io/autogen/blog). We’re excited to work with the community to gather feedback, refine, and improve the project before we officially release 0.4. This is a big change, so AutoGen 0.2 is still available, maintained, and developed in the [0.2 branch](https://github.com/microsoft/autogen/tree/0.2).
@@ -104,7 +104,7 @@ We look forward to your contributions!
First install the packages:

```bash
pip install 'autogen-agentchat==0.4.0.dev7' 'autogen-ext[openai]==0.4.0.dev7'
pip install 'autogen-agentchat==0.4.0.dev8' 'autogen-ext[openai]==0.4.0.dev8'
```

The following code uses OpenAI's GPT-4o model and you need to provide your API key to run it.
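
The quickstart code itself is collapsed in this diff view. As a rough sketch of what a minimal example can look like (assembled from the `AssistantAgent` docstring examples later in this commit, not the README's exact code):

```python
# Minimal sketch, assuming the AssistantAgent API shown in this commit's docstrings.
import asyncio

from autogen_core.base import CancellationToken
from autogen_ext.models import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        # api_key="your_openai_api_key",
    )
    agent = AssistantAgent(name="assistant", model_client=model_client)
    response = await agent.on_messages(
        [TextMessage(content="What is the capital of France?", source="user")], CancellationToken()
    )
    print(response)


asyncio.run(main())
```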
7 changes: 6 additions & 1 deletion docs/switcher.json
@@ -46,7 +46,12 @@
{
"name": "0.4.0.dev7",
"version": "0.4.0.dev7",
"url": "/autogen/0.4.0.dev7/",
"url": "/autogen/0.4.0.dev7/"
},
{
"name": "0.4.0.dev8",
"version": "0.4.0.dev8",
"url": "/autogen/0.4.0.dev8/",
"preferred": true
}
]
9 changes: 0 additions & 9 deletions dotnet/Directory.Packages.props
@@ -24,15 +24,6 @@
<PackageVersion Include="Azure.ResourceManager.ContainerInstance" Version="1.2.1" />
<PackageVersion Include="Azure.Storage.Files.Shares" Version="12.21.0" />
<PackageVersion Include="CloudNative.CloudEvents.SystemTextJson" Version="2.7.1" />
<PackageVersion Include="Elsa" Version="3.1.3" />
<PackageVersion Include="Elsa.EntityFrameworkCore" Version="3.1.3" />
<PackageVersion Include="Elsa.EntityFrameworkCore.Sqlite" Version="3.1.3" />
<PackageVersion Include="Elsa.Http" Version="3.1.3" />
<PackageVersion Include="Elsa.Identity" Version="3.1.3" />
<PackageVersion Include="Elsa.Workflows.Api" Version="3.1.3" />
<PackageVersion Include="Elsa.Workflows.Core" Version="3.1.3" />
<PackageVersion Include="Elsa.Workflows.Designer" Version="3.0.0-preview.727" />
<PackageVersion Include="Elsa.Workflows.Management" Version="3.1.3" />
<PackageVersion Include="Grpc.AspNetCore" Version="2.67.0" />
<PackageVersion Include="Grpc.Core" Version="2.46.6" />
<PackageVersion Include="Grpc.Net.ClientFactory" Version="2.67.0" />
3 changes: 1 addition & 2 deletions protos/agent_worker.proto
@@ -117,12 +117,11 @@ message Message {
oneof message {
RpcRequest request = 1;
RpcResponse response = 2;
Event event = 3;
cloudevent.CloudEvent cloudEvent = 3;
RegisterAgentTypeRequest registerAgentTypeRequest = 4;
RegisterAgentTypeResponse registerAgentTypeResponse = 5;
AddSubscriptionRequest addSubscriptionRequest = 6;
AddSubscriptionResponse addSubscriptionResponse = 7;
cloudevent.CloudEvent cloudEvent = 8;
}
}

9 changes: 7 additions & 2 deletions python/README.md
@@ -1,8 +1,7 @@
# AutoGen Python packages

[![0.4 Docs](https://img.shields.io/badge/Docs-0.4-blue)](https://microsoft.github.io/autogen/dev/)
[![PyPi autogen-core](https://img.shields.io/badge/PyPi-autogen--core-blue?logo=pypi)](https://pypi.org/project/autogen-core/0.4.0.dev7/) [![PyPi autogen-agentchat](https://img.shields.io/badge/PyPi-autogen--agentchat-blue?logo=pypi)](https://pypi.org/project/autogen-agentchat/0.4.0.dev7/) [![PyPi autogen-ext](https://img.shields.io/badge/PyPi-autogen--ext-blue?logo=pypi)](https://pypi.org/project/autogen-ext/0.4.0.dev7/)

[![PyPi autogen-core](https://img.shields.io/badge/PyPi-autogen--core-blue?logo=pypi)](https://pypi.org/project/autogen-core/0.4.0.dev8/) [![PyPi autogen-agentchat](https://img.shields.io/badge/PyPi-autogen--agentchat-blue?logo=pypi)](https://pypi.org/project/autogen-agentchat/0.4.0.dev8/) [![PyPi autogen-ext](https://img.shields.io/badge/PyPi-autogen--ext-blue?logo=pypi)](https://pypi.org/project/autogen-ext/0.4.0.dev8/)

This directory works as a single `uv` workspace containing all project packages. See [`packages`](./packages/) to discover all project packages.

@@ -17,10 +17,13 @@ poe check
```

### Setup

`uv` is a package manager that assists in creating the necessary environment and installing packages to run AutoGen.

- [Install `uv`](https://docs.astral.sh/uv/getting-started/installation/).

### Virtual Environment

During development, you may need to test changes made to any of the packages.\
To do so, create a virtual environment where the AutoGen packages are installed based on the current state of the directory.\
Run the following commands at the root level of the Python directory:
@@ -29,11 +31,14 @@ Run the following commands at the root level of the Python directory:
uv sync --all-extras
source .venv/bin/activate
```

- `uv sync --all-extras` will create a `.venv` directory at the current level and install packages from the current directory along with any other dependencies. The `all-extras` flag adds optional dependencies.
- `source .venv/bin/activate` activates the virtual environment.

### Common Tasks

To create a pull request (PR), ensure the following checks are met. You can run each check individually:

- Format: `poe format`
- Lint: `poe lint`
- Test: `poe test`
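
For reference, a full pre-PR pass can be sketched as follows (a non-exhaustive example assembled from the commands above; the check list continues beyond this excerpt):

```bash
# Assumed workflow: create and activate the synced environment, then run the individual checks.
uv sync --all-extras
source .venv/bin/activate
poe format
poe lint
poe test
```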
4 changes: 2 additions & 2 deletions python/packages/autogen-agentchat/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

[project]
name = "autogen-agentchat"
version = "0.4.0.dev7"
version = "0.4.0.dev8"
license = {file = "LICENSE-CODE"}
description = "AutoGen agents and teams library"
readme = "README.md"
Expand All @@ -15,7 +15,7 @@ classifiers = [
"Operating System :: OS Independent",
]
dependencies = [
"autogen-core==0.4.0.dev7",
"autogen-core==0.4.0.dev8",
]

[tool.uv]
Original file line number Diff line number Diff line change
@@ -23,6 +23,7 @@
AgentMessage,
ChatMessage,
HandoffMessage,
MultiModalMessage,
TextMessage,
ToolCallMessage,
ToolCallResultMessage,
@@ -114,7 +115,10 @@ class AssistantAgent(BaseChatAgent):
async def main() -> None:
model_client = OpenAIChatCompletionClient(model="gpt-4o")
model_client = OpenAIChatCompletionClient(
model="gpt-4o",
# api_key = "your_openai_api_key"
)
agent = AssistantAgent(name="assistant", model_client=model_client)
response = await agent.on_messages(
@@ -145,7 +149,10 @@ async def get_current_time() -> str:
async def main() -> None:
model_client = OpenAIChatCompletionClient(model="gpt-4o")
model_client = OpenAIChatCompletionClient(
model="gpt-4o",
# api_key = "your_openai_api_key"
)
agent = AssistantAgent(name="assistant", model_client=model_client, tools=[get_current_time])
await Console(
@@ -157,6 +164,39 @@ async def main() -> None:
asyncio.run(main())
The following example shows how to use the `o1-mini` model with the assistant agent.
.. code-block:: python
import asyncio
from autogen_core.base import CancellationToken
from autogen_ext.models import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
async def main() -> None:
model_client = OpenAIChatCompletionClient(
model="o1-mini",
# api_key = "your_openai_api_key"
)
# The system message is not supported by the o1 series model.
agent = AssistantAgent(name="assistant", model_client=model_client, system_message=None)
response = await agent.on_messages(
[TextMessage(content="What is the capital of France?", source="user")], CancellationToken()
)
print(response)
asyncio.run(main())
.. note::
The `o1-preview` and `o1-mini` models do not support system messages or function calling,
so `system_message` should be set to `None` and `tools` and `handoffs` should not be set.
See `o1 beta limitations <https://platform.openai.com/docs/guides/reasoning#beta-limitations>`_ for more details.
"""

def __init__(
@@ -167,13 +207,19 @@ def __init__(
tools: List[Tool | Callable[..., Any] | Callable[..., Awaitable[Any]]] | None = None,
handoffs: List[Handoff | str] | None = None,
description: str = "An agent that provides assistance with ability to use tools.",
system_message: str = "You are a helpful AI assistant. Solve tasks using your tools. Reply with TERMINATE when the task has been completed.",
system_message: str
| None = "You are a helpful AI assistant. Solve tasks using your tools. Reply with TERMINATE when the task has been completed.",
):
super().__init__(name=name, description=description)
self._model_client = model_client
self._system_messages = [SystemMessage(content=system_message)]
if system_message is None:
self._system_messages = []
else:
self._system_messages = [SystemMessage(content=system_message)]
self._tools: List[Tool] = []
if tools is not None:
if model_client.capabilities["function_calling"] is False:
raise ValueError("The model does not support function calling.")
for tool in tools:
if isinstance(tool, Tool):
self._tools.append(tool)
Expand All @@ -193,6 +239,8 @@ def __init__(
self._handoff_tools: List[Tool] = []
self._handoffs: Dict[str, Handoff] = {}
if handoffs is not None:
if model_client.capabilities["function_calling"] is False:
raise ValueError("The model does not support function calling, which is needed for handoffs.")
for handoff in handoffs:
if isinstance(handoff, str):
handoff = Handoff(target=handoff)
@@ -230,6 +278,8 @@ async def on_messages_stream(
) -> AsyncGenerator[AgentMessage | Response, None]:
# Add messages to the model context.
for msg in messages:
if isinstance(msg, MultiModalMessage) and self._model_client.capabilities["vision"] is False:
raise ValueError("The model does not support vision.")
self._model_context.append(UserMessage(content=msg.content, source=msg.source))

# Inner messages.
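
The new capability checks above turn model/feature mismatches into early, explicit errors. A hypothetical sketch of the caller-side effect (assumption: the `o1-mini` client reports `capabilities["function_calling"]` as `False`, as the docstring note in this commit suggests):

```python
# Hypothetical sketch, not part of this diff: the new guard raises early
# when tools are passed to a model without function-calling support.
from autogen_ext.models import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent


async def get_current_time() -> str:
    return "The current time is 12:00 PM."


model_client = OpenAIChatCompletionClient(
    model="o1-mini",  # assumed to report function_calling=False in its capabilities
    # api_key = "your_openai_api_key"
)

try:
    # Construction fails immediately instead of erroring at the first model call.
    AssistantAgent(name="assistant", model_client=model_client, tools=[get_current_time])
except ValueError as e:
    print(e)  # "The model does not support function calling."
```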
Original file line number Diff line number Diff line change
@@ -15,12 +15,38 @@


class UserProxyAgent(BaseChatAgent):
"""An agent that can represent a human user in a chat."""
"""An agent that can represent a human user through an input function.
This agent can be used to represent a human user in a chat system by providing a custom input function.
Args:
name (str): The name of the agent.
description (str, optional): A description of the agent.
input_func (Optional[Callable[[str], str]], Callable[[str, Optional[CancellationToken]], Awaitable[str]]): A function that takes a prompt and returns a user input string.
.. note::
Using :class:`UserProxyAgent` puts a running team in a temporarily blocked
state until the user responds. So it is important to time out the user input
function and cancel using the :class:`~autogen_core.base.CancellationToken` if the user does not respond.
The input function should also handle exceptions and return a default response if needed.
For typical use cases that involve
slow human responses, it is recommended to use termination conditions
such as :class:`~autogen_agentchat.task.HandoffTermination` or :class:`~autogen_agentchat.task.SourceMatchTermination`
to stop the running team and return the control to the application.
You can run the team again with the user input. This way, the state of the team
can be saved and restored when the user responds.
See `Pause for User Input <https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/tutorial/teams.html#pause-for-user-input>`_ for more information.
"""

def __init__(
self,
name: str,
description: str = "a human user",
*,
description: str = "A human user",
input_func: Optional[InputFuncType] = None,
) -> None:
"""Initialize the UserProxyAgent."""
@@ -34,10 +60,12 @@ def produced_message_types(self) -> List[type[ChatMessage]]:
return [TextMessage, HandoffMessage]

def _get_latest_handoff(self, messages: Sequence[ChatMessage]) -> Optional[HandoffMessage]:
"""Find the most recent HandoffMessage in the message sequence."""
for message in reversed(messages):
if isinstance(message, HandoffMessage):
return message
"""Find the HandoffMessage in the message sequence that addresses this agent."""
if len(messages) > 0 and isinstance(messages[-1], HandoffMessage):
if messages[-1].target == self.name:
return messages[-1]
else:
raise RuntimeError(f"Handoff message target does not match agent name: {messages[-1].source}")
return None

async def _get_input(self, prompt: str, cancellation_token: Optional[CancellationToken]) -> str:
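
To illustrate the input-function contract documented in the docstring above, here is a hypothetical sketch of a time-limited input function; the 60-second timeout and fallback string are assumptions, not part of this commit:

```python
# Hypothetical sketch: an async input function that times out instead of
# blocking the team indefinitely, per the note in the UserProxyAgent docstring.
import asyncio
from typing import Optional

from autogen_core.base import CancellationToken
from autogen_agentchat.agents import UserProxyAgent


async def timed_input(prompt: str, cancellation_token: Optional[CancellationToken] = None) -> str:
    # cancellation_token is accepted per the documented async signature; it is unused in this sketch.
    try:
        # Run the blocking input() call off the event loop and wait up to 60 seconds.
        return await asyncio.wait_for(asyncio.to_thread(input, prompt), timeout=60)
    except asyncio.TimeoutError:
        return "No response from user."  # default response instead of blocking forever


user_proxy = UserProxyAgent(name="user", input_func=timed_input)
```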