I ran into an interesting bug/error (?) when trying to execute the holmes ask command. I installed holmesgpt 0.7.2 via brew on macOS with Python 3.9.6. The installation was successful, and I can execute the holmes version command.
I am using Ollama as the LLM backend (even though your documentation says it may be buggy).
Every time I try to execute holmes ask "Which pod is not running?" --model=ollama_chat/llama3.2:1b, it runs into the following error:
in completion:2804
in get_ollama_response:366
in token_counter:1638
Failed to execute script 'holmes' due to unhandled exception!
TypeError: can only concatenate str (not "dict") to str
During handling of the above exception, another exception occurred:
in ask:281
in prompt_call:80
in call:122
in completion:148
in wrapper:960
in wrapper:849
in completion:3065
...
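For reference, the TypeError above suggests a token counter somewhere in the stack is concatenating message contents as plain strings, while an ollama_chat message can carry a dict there. The following is a minimal, hypothetical simplification (not litellm's actual code) that reproduces the same failure mode:

```python
def naive_token_counter(messages):
    # Hypothetical sketch: assumes every message "content" is a string.
    # If a backend puts a dict there instead, the concatenation fails
    # with the exact error seen in the traceback.
    text = ""
    for message in messages:
        text = text + message["content"]  # TypeError if content is a dict
    return len(text.split())

# A message whose content is a dict rather than a string:
messages = [{"role": "user", "content": {"type": "text", "text": "hi"}}]
try:
    naive_token_counter(messages)
except TypeError as exc:
    print(exc)  # can only concatenate str (not "dict") to str
```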
Is this because I am using Ollama, or because of the Python version? Have you encountered this error before?
Thanks in advance!
Hi, I suspect this is a bug with LiteLLM (one of our dependencies) and how they implement Ollama support - maybe this bug? BerriAI/litellm#6958
Are you able to verify it doesn't happen with other models?
Actually it happens with llama3.2:latest as well...
If I use gemma2:2b then this is the console output:
holmes ask "Why is my cluster not running?" --model=ollama_chat/gemma2:2b
User: Why is my cluster not running?
Couldn't find model's name ollama_chat/gemma2:2b in litellm's model list, fallback to 128k tokens for max_input_tokens (llm.py:140)
Couldn't find model's name ollama_chat/gemma2:2b in litellm's model list, fallback to 4096 tokens for max_output_tokens (llm.py:174)
AI: {}
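Those warnings are consistent with a fallback pattern like the one below. This is a hedged sketch based only on the log messages, not holmes's actual llm.py; the function and registry names are assumptions:

```python
# Sketch of the fallback the warnings describe: when a model name is
# missing from litellm's model registry, hard-coded default token
# limits are used instead of model-specific ones.
DEFAULT_MAX_INPUT_TOKENS = 128_000  # "fallback to 128k tokens"
DEFAULT_MAX_OUTPUT_TOKENS = 4096    # "fallback to 4096 tokens"

def context_limits(model_name, model_registry):
    info = model_registry.get(model_name)
    if info is None:
        # This branch would produce the "Couldn't find model's name ..."
        # warning; it is harmless on its own.
        return DEFAULT_MAX_INPUT_TOKENS, DEFAULT_MAX_OUTPUT_TOKENS
    return info["max_input_tokens"], info["max_output_tokens"]

# ollama_chat/* names are typically absent from such a registry, so the
# defaults apply. The empty "AI: {}" reply would come from elsewhere.
print(context_limits("ollama_chat/gemma2:2b", {}))
```

If so, the warnings only mean the model's context-size metadata is unknown; they do not imply Ollama itself was unreachable.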
It's not finding the LLM even though it exists:
ollama list
NAME ID SIZE MODIFIED
llama3.2:1b baf6a787fdff 1.3 GB 2 days ago
gemma2:2b 8ccf136fdd52 1.6 GB 2 days ago
llama3.2:latest a80c4f17acd5 2.0 GB 2 d