Do I just run the mlx_lm command or the streamlit command in another terminal? If so, those commands open a page served on a port different from the omni server's default port. What is the difference?
@liconge I'm not quite sure if I fully understand your question.
If you're not clear on how to use the project after it's launched, it's actually very simple: you just need to add a line of code to your current project, pointing the OpenAI SDK's base URL to http://localhost:10240/v1.
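For instance, here is a minimal sketch using the official openai Python package; the model id is just the one from the example further down, and any model the server can load should work the same way:

from openai import OpenAI

# Point the standard OpenAI SDK at the local mlx-omni-server endpoint
client = OpenAI(
    base_url="http://localhost:10240/v1",
    api_key="mlx-omni-server",  # the local server does not check the key
)

response = client.chat.completions.create(
    model="mlx-community/Qwen2.5-3B-Instruct-4bit",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)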
If you're not sure how to build products on top of this project, you can refer to the examples in the examples directory. You can also quickly try it from other libraries that support configuring an OpenAI client, such as the tool-use example I added in #6:
from openai import OpenAI
from phi.agent import Agent
from phi.model.openai import OpenAIChat
from phi.tools.duckduckgo import DuckDuckGo

# Use mlx-omni-server to provide a local OpenAI-compatible service
client = OpenAI(
    base_url="http://localhost:10240/v1",
    api_key="mlx-omni-server",  # not-needed
)

web_agent = Agent(
    model=OpenAIChat(
        client=client,
        id="mlx-community/Qwen2.5-3B-Instruct-4bit",
    ),
    tools=[DuckDuckGo()],
    instructions=["Always include sources"],
    show_tool_calls=True,
    markdown=True,
)

web_agent.print_response("Tell me about Apple MLX?", stream=False)
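Since the server exposes an OpenAI-compatible API, the phi agent above needs no MLX-specific code; any library that lets you pass a custom OpenAI client (or base URL) can talk to mlx-omni-server the same way.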