Execute tool after/from generate_oai_reply #2218
-
My code:

```python
from autogen import AssistantAgent

# config_list, calculator and user_msg are defined elsewhere
llm_config = {"config_list": config_list}
analyst = AssistantAgent("analyst", llm_config=llm_config)

report = """Multiply 110 * 73282 and 90 * 9292"""

analyst.register_for_llm(name="calculator", description="A simple calculator")(calculator)
analysis = analyst.generate_oai_reply([user_msg(report)])
```

This prints a reply containing the tool calls, so I was able to use generate_oai_reply to mimic the OpenAI tool call feature. Is there a way to use the analyst's response to execute those tools now, or do I have to do it on my own? I don't plan to use user_proxy or agent chats.
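For the "do it on my own" route, here is a minimal sketch (assuming pyautogen 0.2). The `register_for_execution` and `execute_function` calls and the handling of the OpenAI-style reply dict are assumptions added for illustration, not part of the snippet above:

```python
# register_for_llm only advertises the tool schema to the LLM; to actually
# run the tool, it also has to be registered for execution on some agent.
analyst.register_for_execution(name="calculator")(calculator)

final, reply = analysis  # generate_oai_reply returns a (flag, reply) tuple

# When the model decides to call tools, the reply is a dict in the OpenAI
# chat-completion format with a "tool_calls" list.
if isinstance(reply, dict):
    for tool_call in reply.get("tool_calls") or []:
        # tool_call["function"] holds {"name": ..., "arguments": "<json string>"}
        success, result = analyst.execute_function(tool_call["function"])
        print(success, result)
```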
-
Check out this tutorial section: https://microsoft.github.io/autogen/docs/tutorial/tool-use#how-to-hide-tool-usage-and-code-execution-within-a-single-agent
-
You can use `register_nested_chats` to register an inner chat between an AssistantAgent and a UserProxyAgent which will perform the tool call + tool execution in the chat. cc @qingyun-wu
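A minimal sketch of that pattern, assuming the `calculator` function and `config_list` from the question. The agent names, the termination lambda, and the trigger are illustrative choices, and the `"sender"` key in the chat queue needs a reasonably recent pyautogen 0.2 release:

```python
from autogen import AssistantAgent, UserProxyAgent, register_function

llm_config = {"config_list": config_list}

# Outer-facing agent: the one the rest of the application talks to.
analyst = AssistantAgent("analyst", llm_config=llm_config)

# Inner executor: runs the tool calls proposed by the analyst.
tool_executor = UserProxyAgent(
    "tool_executor",
    human_input_mode="NEVER",
    code_execution_config=False,
    # End the inner chat once the analyst replies without tool calls.
    is_termination_msg=lambda msg: not msg.get("tool_calls"),
)

# Register the calculator for calling (analyst) and for execution (executor).
register_function(
    calculator,
    caller=analyst,
    executor=tool_executor,
    name="calculator",
    description="A simple calculator",
)

# Nested chat between the UserProxyAgent and the analyst: it runs whenever
# the analyst receives a message from anyone but the executor, and its last
# message becomes the analyst's reply.
analyst.register_nested_chats(
    chat_queue=[
        {
            "sender": tool_executor,
            "recipient": analyst,
            "summary_method": "last_msg",
        }
    ],
    trigger=lambda sender: sender is not tool_executor,
)

report = "Multiply 110 * 73282 and 90 * 9292"
reply = analyst.generate_reply(messages=[{"role": "user", "content": report}])
print(reply)
```

From the outside this looks like a single agent: the tool call and its execution happen inside the nested chat. The final `generate_reply` call is one way to drive it without an outer agent chat; having another agent send the analyst a message should trigger the nested chat as well.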
I think this is a good example to add in the notebooks.