Pass more tokens #42
The context size is already set to the maximum (2048).
Is 2048 a hard limit with llama.cpp, or is that a function of the model? I know GPT-3.5 is somewhere around 4000, but it seems to keep a better memory for longer before it goes senile. I'm not sure how they achieve that; I suspect they might be feeding a summarised version of the previous posts back in to keep the bot on track.
Yes, 2048 seems to be the hard limit. It does let you set the context size above 2048, but it warns that performance may be negatively impacted.
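(For context: the 2048 figure is really a property of the model rather than the UI; the original LLaMA weights were trained with a 2048-token context window, which is why pushing past it tends to degrade output. As a rough illustration of where that knob usually lives, here is a minimal sketch using the llama-cpp-python bindings; the backend choice and model path are assumptions on my part, and the actual project may wire this up differently.)

```python
from llama_cpp import Llama

# n_ctx is the context window handed to llama.cpp when the model is loaded.
# Values above 2048 are accepted, but the original LLaMA models were trained
# with a 2048-token context, so quality usually drops beyond that point.
llm = Llama(model_path="models/7B/ggml-model.bin", n_ctx=2048)
```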
Are you planning on implementing context? By "context", I mean compressing previous messages and placing them in the prompt, the way GPT-3/4 does. The Chatbot-UI really caught my attention, and I'm fascinated by the idea of combining it with other UIs like Next.js and Electron. How challenging would this be, and is it even possible?
In theory, I could do that. But it would make the performance very poor on most computers. OpenAI can do this because they have a bunch of beefy GPUs at their disposal. But this runs locally, sometimes on near-potato hardware.
I don't know how to use Next.js because I hate HTML frameworks (e.g. Bootstrap, Vue, Angular, React). It probably wouldn't be hard to turn it into an Electron app, though. If it runs in the web browser, you could just embed that very same page into an Electron app and that's it.
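For anyone curious, a rolling-summary scheme like the one discussed above could look roughly like the sketch below. This is only an illustration: `summarize()` is a hypothetical helper (in practice it would be another call to the model), and the 4-characters-per-token estimate is a crude approximation.

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return len(text) // 4

def summarize(text: str) -> str:
    # Hypothetical helper: in practice this would be another model call that
    # condenses the older turns into a short paragraph.
    return "Summary of earlier conversation: " + text[:200]

def build_prompt(history: list[str], new_message: str, ctx_limit: int = 2048) -> str:
    """Keep recent turns verbatim and fold older turns into a summary."""
    recent, older = [], list(history)
    budget = ctx_limit - estimate_tokens(new_message)
    # Walk backwards through the history, keeping recent turns up to about
    # half the budget; the rest is left for the summary and the model's reply.
    while older and estimate_tokens("\n".join([older[-1]] + recent)) < budget // 2:
        recent.insert(0, older.pop())
    summary = summarize("\n".join(older)) if older else ""
    return "\n".join(part for part in [summary, *recent, new_message] if part)
```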
I haven't tried this yet, but it might help increase the prompt size by compressing the prompt.
I'll take a look at how it works later. If it's not too complicated, I'll try to implement something similar.
I'm not sure how it affects performance, but it might be good to be aware of the possibility. At least it's implemented as a drop-in replacement, which is quite cool, imho.
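Without knowing exactly which project is meant here, the usual "drop-in" trick is just a wrapper that shortens the message history before it reaches the model while keeping the same call signature. A purely hypothetical sketch of the simplest (trimming, not summarising) variant:

```python
def fit_to_budget(messages: list[str], max_tokens: int = 2048) -> list[str]:
    """Drop the oldest messages until a rough token estimate fits the window."""
    trimmed = list(messages)
    # ~4 characters per token is a crude approximation.
    while len(trimmed) > 1 and sum(len(m) for m in trimmed) // 4 > max_tokens:
        trimmed.pop(0)  # discard the oldest message first
    return trimmed
```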
Hi. If I can ask, what have you set as the default number of tokens? Also, is there a settings file I can tinker with to give it more tokens for context and replies? I don't mind making it a bit slower (still faster than trying to run it on my GPU), but sometimes, when you really get it going with just the right prompt, it writes gold. Then, when it cuts itself off mid-sentence in the middle of a story after running out of tokens, I can't make it remember the context of the previous reply and continue where it left off.
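For reference, in llama.cpp-based backends the two knobs being asked about are usually the context size and the number of tokens generated per reply (the `-c`/`--ctx-size` and `-n`/`--n-predict` options in llama.cpp's command-line example, if I recall correctly). Whether they are exposed in a settings file depends on the project; as a hedged illustration only, here is how they map onto the llama-cpp-python bindings, with a placeholder model path:

```python
from llama_cpp import Llama

# The context window: how much conversation the model can "remember" at once.
llm = Llama(model_path="models/7B/ggml-model.bin", n_ctx=2048)

# max_tokens caps the length of a single reply; raising it lets the model
# finish its sentence instead of cutting off, at the cost of slower replies.
out = llm("Once upon a time,", max_tokens=512)
print(out["choices"][0]["text"])
```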