-
I would like to ask for help as well.
-
The "Generic OpenAI Compatible API" backend is compatible with the LocalAI API |
-
This is correct! And here are the LocalAI logs when running "Home-3B-v3.q5_k_m" and calling it.
Thanks for reading!
-
Have you specified a chat format? (zephyr)
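For reference, the "zephyr" chat format mentioned above wraps each turn in special tokens. A rough sketch of what that template produces (based on the Zephyr models' prompt layout; the exact template shipped with a given GGUF build may differ):

```python
def format_zephyr(system, user):
    """Render one system + user turn in the zephyr chat format,
    leaving the prompt open for the assistant's reply."""
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

prompt = format_zephyr("You control a smart home.", "Turn on the lamp.")
```

If the backend is not told which format to use, the model receives a prompt in a layout it was never trained on, which is a common cause of gibberish output.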
-
That's a lot of entities exposed. If I had to guess, you are likely using too much context space for the model, which confuses it about what is going on and makes it output gibberish. You might have better luck using a model that is natively 16k or 32k context length and configuring In-Context-Learning with the model.
-
> You might have better luck using a model that is natively 16k or 32k context length and configuring In-Context-Learning with the model.

I switched to one. Thanks for the answer!
-
It worked perfectly for a few hours, but now I get bad answers and actions are not executed. Llama3 gives me the right commands, but they are not acted upon:
-
But how can I load the model into LocalAI? For me it doesn't work with llama.cpp. Best regards
-
Hi,
how can I configure LocalAI to use this model? Since I am already running LocalAI, I don't want to host another app in parallel to it.
Thanks and best regards!