
[BUG/ERROR] Can only concatenate str (not "dict") to str #246

Open
j3ramy opened this issue Jan 13, 2025 · 2 comments

Comments


j3ramy commented Jan 13, 2025

Hi,

I ran into an interesting bug/error (?) when trying to execute the holmes ask command. I installed holmesgpt 0.7.2 via brew on macOS with Python 3.9.6. The installation was successful, and I can execute the holmes version command.

I am using Ollama as the LLM backend (even though your documentation says it may be buggy).

Every time I execute holmes ask "Which pod is not running?" --model=ollama_chat/llama3.2:1b, it runs into the following error:

in completion:2804
in get_ollama_response:366
in token_counter:1638

Failed to execute script 'holmes' due to unhandled exception!
TypeError: can only concatenate str (not "dict") to str

During handling of the above exception, another exception occurred:
in ask:281
in prompt_call:80
in call:122
in completion:148
in wrapper:960
in wrapper:849
in completion:3065
...
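
If it helps, the failure pattern looks like the snippet below. This is only a minimal sketch of my own (not holmes or litellm code, and the tool-call content is made up) showing how a token counter that string-concatenates message contents would hit exactly this TypeError when one content is a dict:

```python
# Minimal sketch of the failure mode (assumption, not actual holmes/litellm code):
# concatenating message contents into one string breaks when a content is a dict.
messages = [
    {"role": "user", "content": "Which pod is not running?"},                        # str content
    {"role": "assistant", "content": {"name": "kubectl_get", "arguments": {}}},      # dict content (hypothetical tool call)
]

prompt = ""
for m in messages:
    prompt += m["content"]  # TypeError: can only concatenate str (not "dict") to str
```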

Is this because I am using Ollama, or because of the Python version? Have you seen this error before?

Thanks in advance!


aantn commented Jan 13, 2025

Hi, I suspect this is a bug with LiteLLM (one of our dependencies) and how they implement Ollama support - maybe this bug? BerriAI/litellm#6958

Are you able to verify it doesn't happen with other models?
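
If it helps isolate things, here is a rough sketch (an assumption on my part, not an official repro) of calling LiteLLM directly against the local Ollama server, bypassing holmes entirely; if the same TypeError shows up here, the problem is in LiteLLM's Ollama handling rather than in HolmesGPT:

```python
# Hedged repro sketch: call litellm directly to see whether the TypeError
# originates in litellm's Ollama support rather than in holmes itself.
import litellm

response = litellm.completion(
    model="ollama_chat/llama3.2:1b",
    messages=[{"role": "user", "content": "Which pod is not running?"}],
    api_base="http://localhost:11434",  # default local Ollama endpoint (assumption)
)
print(response.choices[0].message.content)
```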


j3ramy commented Jan 13, 2025

> Hi, I suspect this is a bug with LiteLLM (one of our dependencies) and how they implement Ollama support - maybe this bug? BerriAI/litellm#6958
>
> Are you able to verify it doesn't happen with other models?

Actually it happens with llama3.2:latest as well...

If I use gemma2:2b, this is the console output:

holmes ask "Why ist my cluster not running?" --model=ollama_chat/gemma2:2b  
User: Why ist my cluster not running?
Couldn't find model's name ollama_chat/gemma2:2b in litellm's model list, fallback to 128k tokens for max_input_tokens    llm.py:140
Couldn't find model's name ollama_chat/gemma2:2b in litellm's model list, fallback to 4096 tokens for max_output_tokens   llm.py:174
AI: {}

It's not finding the LLM even though it exists:

ollama list
NAME               ID              SIZE      MODIFIED
llama3.2:1b        baf6a787fdff    1.3 GB    2 days ago
gemma2:2b          8ccf136fdd52    1.6 GB    2 days ago
llama3.2:latest    a80c4f17acd5    2.0 GB    2 days ago
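
As far as I understand, that "Couldn't find model's name ... in litellm's model list" warning refers to LiteLLM's own model metadata table rather than to what ollama list shows. Here is a rough check I put together (my assumption being that litellm.model_cost is the list the warning means):

```python
# Sketch (assumption): check whether litellm has metadata for the model name,
# which is roughly what the "fallback to 128k tokens" warning is about.
import litellm

name = "ollama_chat/gemma2:2b"
info = litellm.model_cost.get(name)
if info:
    print(info.get("max_input_tokens"), info.get("max_output_tokens"))
else:
    print(f"{name} is not in litellm's model list, so the caller falls back to defaults")
```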
