Replies: 2 comments
-
I have the same issue; only the output is different: `<title>404 Not Found</title> 404 Not Found drogon/1.9.2`
-
The URL has been changed; Cortex now serves the load-model endpoint at http://localhost:3000/inferences/server/loadmodel.
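To check whether the updated route exists on your server, you can probe it and print only the HTTP status code. This is a minimal sketch, not a confirmed Cortex API call: the JSON body and its `model` field are hypothetical placeholders, and the route is simply the one quoted above.

```shell
# Probe the updated load-model route and print only the HTTP status code.
# A 404 means the route is still wrong; anything else means it is registered.
# NOTE: the request body below is a hypothetical placeholder, not a
# documented Cortex payload -- adjust it to match the official docs.
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST http://localhost:3000/inferences/server/loadmodel \
  -H "Content-Type: application/json" \
  -d '{"model": "my-model"}'
```

The `-w "%{http_code}"` trick is handy for quickly testing several candidate routes without wading through full error pages.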
-
I'm currently running the following command, copied from the documentation except for a new localhost address:

I'm getting a 404 error stating that `inferences/llamacpp/loadmodel` is not an available route. How should I proceed, and how can I test which other load-model URLs might be correct?