How to get raw system prompt that actually gets sent to the LLM #77
Unanswered · tannisroot asked this question in Q&A
Maybe I'm missing something, but is there a way to get the raw system prompt message that gets sent to the LLM when you use the Assist pipeline, with all the devices and services expanded into it rather than just "{{ services }}" and "{{ devices }}"?
The reason I'm asking is that I want to look at how different parameters and changes in the prompt affect the model's response, and doing so through the Assist pipeline is not very convenient (plus it hides some of the output), so I figured I would just simulate it with a Modelfile.
I tried looking at Ollama's logs hoping it prints the prompt there, but unfortunately it does not :(
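In the meantime, one workaround is to reproduce the rendering step yourself: the prompt is a Jinja template, so you can expand it with sample data and paste the result into a Modelfile's SYSTEM block. Below is a minimal sketch; the template text and the device/service line formats are my own assumptions, not the integration's actual output, so adjust them to match your configured prompt.

```python
# Sketch: render a prompt template the way the integration might,
# so the result can be dropped into an Ollama Modelfile for testing.
# PROMPT_TEMPLATE and the example data are hypothetical placeholders.
from jinja2 import Template

PROMPT_TEMPLATE = """You are an assistant controlling a smart home.
Services:
{{ services }}
Devices:
{{ devices }}"""

# Stand-ins for the real device and service lists Home Assistant would inject.
devices = "\n".join([
    "light.living_room 'Living Room Light' = off",
    "switch.coffee_maker 'Coffee Maker' = on",
])
services = "\n".join([
    "light.turn_on(), light.turn_off()",
    "switch.turn_on(), switch.turn_off()",
])

rendered = Template(PROMPT_TEMPLATE).render(devices=devices, services=services)
print(rendered)
```

The printed text can then be placed after a SYSTEM directive in a Modelfile, which lets you vary parameters and prompt wording without going through the Assist pipeline.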
Replies: 1 comment

If you turn on |