System Info

Information

🐛 Describe the bug

Working with @heyjustinai to refactor the coding agent example meta-llama/llama-stack-apps#150 to use the Llama Stack agent framework, but I'm running into this issue.

Code is here: https://github.com/aidando73/llama-stack-apps/pull/8/files#diff-0f147b774a4f5541b23f469ab2cd04a32b7c996c608ee0b09afb085e805ab019

I'm getting a lot of code_interpreter calls. Specifically, I'm getting a good tool call on the first iteration, but then afterwards I get only code_interpreter calls.

This is the compiled system prompt:

```
<|start_header_id|>system<|end_header_id|>
You are an expert in composing functions. You are given a question and a set of possible functions.
Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
If none of the function can be used, point it out. If the given question lacks the parameters required by the function,
also point it out. You should only return the function call in tools call sections.
If you decide to invoke any of the function(s), you MUST put it in the format of [func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)]
You SHOULD NOT include any other text in the response.
Here is a list of functions in JSON format that you can invoke.
{tool_defs}
You are an expert software engineer. You are given the following problem:
<problem_statement>
{problem_statement}
</problem_statement>
The repo is called django.
Here is the file tree of the repository:
<file_tree>
{file_tree}
</file_tree>
Your task is to locate the relevant file to the problem statement by making one or more function/tool calls.
If you have located the relevant file, call the `pick_file` function with the path to the file. E.g., `pick_file(path="src/file.py")`
<|eot_id|><|start_header_id|>user<|end_header_id|>
Turn 0<|eot_id|><|start_header_id|>assistant<|end_header_id|>
<|python_tag|>{"type": "function", "name": "list_files", "parameters": {"path": "django/"}}<|eot_id|><|start_header_id|>ipython<|end_header_id|>
FILE_CONTENT<|eot_id|><|start_header_id|>user<|end_header_id|>
Turn 1<|eot_id|><|start_header_id|>assistant<|end_header_id|>
<|python_tag|>pick_file(path="django/docs/sources/releases/1.9.0.txt")
```
^ Any future turn returns `<|python_tag|>function_name(...)`; I'm expecting regular tool calls in the `[function_name(...)]` format.
I suspect it's because tool calls from the LLM are being rendered back into the prompt in the code_interpreter style.
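That is, presumably something like the assistant turn shown in the compiled prompt above (my reconstruction; the exact rendered turn may differ):

```
<|start_header_id|>assistant<|end_header_id|>
<|python_tag|>{"type": "function", "name": "list_files", "parameters": {"path": "django/"}}<|eot_id|>
```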
So the LLM thinks that's how you make tool calls. When I was using the raw completions API, I rendered the assistant response differently.
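Something along the lines of the bracketed format the system prompt asks for (a sketch, assuming the rendering followed that instruction):

```
<|start_header_id|>assistant<|end_header_id|>
[list_files(path="django/")]<|eot_id|>
```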
And it worked pretty well; the LLM rendered future tool calls in the same way. You still get some hallucinations with `<|python_tag|>`, but that's not a big deal. In this case, though, if I have 10 turns, the first one is a good tool call and the remaining 9 are code_interpreter calls.
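As a stopgap for the occasional hallucination, here is a minimal post-processing sketch (a hypothetical helper, not code from the linked PR) that tolerates a leaked `<|python_tag|>` prefix before parsing the bracketed tool-call format:

```python
# Hypothetical helper: normalize a model response so that both
# '<|python_tag|>pick_file(path="x")' and 'pick_file(path="x")'
# end up in the bracketed '[pick_file(path="x")]' form the
# system prompt asks for.
PYTHON_TAG = "<|python_tag|>"

def normalize_tool_call(raw: str) -> str:
    text = raw.strip()
    # Drop a hallucinated <|python_tag|> prefix if present.
    if text.startswith(PYTHON_TAG):
        text = text[len(PYTHON_TAG):].lstrip()
    # Wrap bare calls in brackets so a single parser handles both forms.
    if not (text.startswith("[") and text.endswith("]")):
        text = f"[{text}]"
    return text

assert normalize_tool_call('<|python_tag|>pick_file(path="src/file.py")') == '[pick_file(path="src/file.py")]'
```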
Is this behaviour intended?
Error logs
As above
Expected behavior
I expect the agent not to use code_interpreter unless I explicitly specify it.
---

As far as I know, the official Llama models documentation states that if ipython is mentioned anywhere in the instructions, the model is trained to invoke code_interpreter for the query that was asked. Could you try this once without adding ipython?

This is present in `llama_models/models/llama_3_1/prompt_format.md`.
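For reference, the ipython role does appear in the compiled prompt above, where tool results are fed back to the model:

```
<|python_tag|>{"type": "function", "name": "list_files", "parameters": {"path": "django/"}}<|eot_id|><|start_header_id|>ipython<|end_header_id|>
FILE_CONTENT<|eot_id|>
```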