Facing difficulty using Gemini Flash 2.0 with function calling #4878
Comments
Gemini has added support for an OpenAI-style endpoint. Can you try that without specifying the `api_type`?
@ekzhu If I don't specify `api_type`, I guess it expects an OpenAI API key. I am using the OAI config list method. My `OAI_CONFIG_LIST` file:

```json
[
    {
        "model": "gemini-2.0-flash-exp",
        "api_key": "KEY",
        "api_type": "google"
    }
]
```

At runtime:

```python
config_list_gemini_flash = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": ["gemini-2.0-flash-exp"]},
)
gemini_flash_config = {
    "cache_seed": 42,
    "max_retries": 5,
    "config_list": config_list_gemini_flash,
    "timeout": 30000,
}
```

Is there any configuration issue in this workflow?
You need to set the `base_url` to Gemini's OpenAI-compatible endpoint.
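For reference, a config entry targeting Gemini's OpenAI-compatible endpoint might look like the sketch below. The `base_url` is Google's documented OpenAI-compatibility endpoint; the rest of the entry is illustrative, not necessarily the exact config used in this thread:

```json
[
    {
        "model": "gemini-2.0-flash-exp",
        "api_key": "GEMINI_API_KEY",
        "base_url": "https://generativelanguage.googleapis.com/v1beta/openai/"
    }
]
```

With no `api_type`, autogen's regular OpenAI client handles the request, sending the key as a bearer token to `base_url`.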
@ekzhu Thanks, I followed this approach but am still getting an error. Updated `OAI_CONFIG_LIST`:

Getting this error:

I guess there is some autogen/Gemini integration error here, in how the text is being passed to the Gemini endpoint?
What is the code that led to this error? Gemini seems to reject an empty `text` parameter in a message.
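To illustrate the kind of payload that can trip this check, here is a sketch of an OpenAI-format transcript. The exact messages autogen builds may differ, and the tool-call ID and arguments below are made up:

```python
# An assistant turn that makes a tool call often carries no text content:
messages = [
    {"role": "user", "content": "Find context on topic X."},
    {
        "role": "assistant",
        "content": "",  # empty -- a naive conversion maps this to an empty Gemini text part
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {"name": "retreive_from_internet",
                         "arguments": "{\"query\": \"topic X\"}"},
        }],
    },
]
# If the Gemini client forwards the empty content verbatim as {"text": ""},
# the API rejects the request with a 400 about an empty text parameter.
```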
I guess I'll have to try again and keep a log of the messages. Will update you shortly.
@ekzhu I am just trying to use gemini-2.0-flash-exp for speaker selection. Here's my code:

```python
config_list_gemini_flash = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": ["gemini-2.0-flash-exp"]},
)
gemini_flash_config = {
    "cache_seed": 42,
    "max_retries": 5,
    "config_list": config_list_gemini_flash,
    "timeout": 30000,
}

def setup_groupchat(self):
    self.groupchat = autogen.GroupChat(
        agents=[],
        messages=[],
        max_round=250,
        select_speaker_prompt_template="Read the above conversation. Then select the next role from {agentlist} to play. Only return the role.",
    )
    manager = autogen.GroupChatManager(groupchat=self.groupchat, llm_config=gemini_flash_config)
```

Still getting the same error.
Where is the rest of the code? It looks like there are no agents. BTW, we are releasing v0.4 this week. Your problem might be fixed by simply upgrading to v0.4. Migration doc: https://aka.ms/autogen-migrate.
I had just redacted the agent names for sharing here. The code works; as mentioned above, Gemini Flash is the only model causing an issue. For clarity, I am not using Gemini Flash anywhere other than speaker selection, where it works fine. The function-calling error only appears when I use it as the LLM for an agent that has a registered function. Here's an example of how I am creating the agents:

```python
def create_agent(name, system_message, llm_config):
    return autogen.AssistantAgent(name=name, system_message=system_message, llm_config=llm_config)

self.executor = autogen.UserProxyAgent(
    name="Executor",
    system_message=executor_system_message,
    human_input_mode="NEVER",
    code_execution_config={"last_n_messages": 2, "executor": self.executor},  # presumably a separate code executor in the unredacted code
)
self.agentOne = create_agent("AgentOne", system_message=agent_one_system_message, llm_config=gemini_flash_config)
autogen.register_function(
    retreive_from_internet,
    caller=self.agentOne,
    executor=self.executor,
    name="retreive_from_internet",
    description="Search internet and find context from internet.",
)

def setup_groupchat(self):
    self.groupchat = autogen.GroupChat(
        agents=[self.agentOne, self.executor],
        messages=[],
        max_round=250,
        select_speaker_prompt_template="Read the above conversation. Then select the next role from {agentlist} to play. Only return the role.",
    )
    manager = autogen.GroupChatManager(groupchat=self.groupchat, llm_config=gemini_flash_config)
```
I don't see any problem in how the agents are set up. Likely one of the messages sent to the Gemini endpoint contains an empty text field. It would be good to root out which message it is by debugging and inspecting the output. How did the group chat start? Was there any initial message?
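One low-tech way to find the offending message, sketched against the snippets above (this assumes the run fails after some messages have accumulated in `self.groupchat.messages`):

```python
# Scan the accumulated group-chat transcript for messages whose content
# is empty or missing -- those are the candidates Gemini would reject.
for i, msg in enumerate(self.groupchat.messages):
    if not msg.get("content"):
        print(f"message {i} from {msg.get('name')} has empty content: {msg}")
```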
@ekzhu Yes, I use the `initiate_chat` function to start the group chat with an input message. Example:

```python
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=gemini_flash_config)
user_proxy.initiate_chat(manager, message=INITIAL_MESSAGE)
```

The message does pass through, and function-calling agents work as expected when the LLM is not Gemini Flash. But when any Gemini model is the LLM for an agent, I get this error.
Okay, thanks for narrowing it down. So the issue happens when (1) the model is Gemini Flash and (2) function calling is involved? This is likely because the function-calling messages contain an empty field, which Gemini Flash rejects. A minimal reproduction with function calling would be great if you want to continue working on v0.2. v0.4 will not have this problem, because function calls happen within each agent in a group chat.
Yes, let me try to reproduce it with a minimal version, although I suspect the issue will persist, since the empty message is not under my control and I don't know how to alter that in autogen v0.2. v0.4 sounds appealing, but the core issue is that there are so many dependencies that migration would be very time-consuming!
What happened?
Whenever there's a function call with Gemini Flash 2.0 in autogen, I get this error:
```
Google GenAI exception occurred while calling Gemini API: 400 * GenerateContentRequest.tools[0].function_declarations[0].parameters.properties: should be non-empty for OBJECT type
```
What did you expect to happen?
It should be able to use the tool call properly.
How can we reproduce it (as minimally and precisely as possible)?
An AssistantAgent with a registered function call should be enough to reproduce it.
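For what it's worth, a minimal v0.2-style repro might look like the sketch below. One known way to trigger this particular 400 is a tool that takes no arguments, which yields an empty `parameters.properties` object in the generated function declaration; the tool name and messages here are made up, and the actual failing tool in this thread may well take arguments:

```python
import datetime
import autogen

config_list = [{
    "model": "gemini-2.0-flash-exp",
    "api_key": "KEY",
    "api_type": "google",
}]

# A zero-argument tool produces {"type": "object", "properties": {}} in its
# schema, which the Gemini API rejects as an empty OBJECT type.
def get_time() -> str:
    """Return the current time."""
    return datetime.datetime.now().isoformat()

assistant = autogen.AssistantAgent("assistant", llm_config={"config_list": config_list})
user = autogen.UserProxyAgent("user", human_input_mode="NEVER", code_execution_config=False)
autogen.register_function(get_time, caller=assistant, executor=user,
                          name="get_time", description="Get the current time.")
user.initiate_chat(assistant, message="What time is it?", max_turns=2)
```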
AutoGen version
0.2
Which package was this bug in
Core
Model used
gemini-2.0-flash-exp
Python version
3.11
Operating system
Ubuntu 22.04
Any additional info you think would be helpful for fixing this bug
No response