LLM plugin for models hosted by OpenRouter
First, install the LLM command-line utility.
Now install this plugin in the same environment as LLM:
llm install llm-openrouter
You will need an API key from OpenRouter. You can obtain one from https://openrouter.ai/keys.
You can set that as an environment variable called LLM_OPENROUTER_KEY, or add it to the llm set of saved keys using:
llm keys set openrouter
Enter key: <paste key here>
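The environment-variable route looks like this in a POSIX shell (placeholder value shown; substitute your real key):

```shell
# Make the key available to llm for the current shell session
export LLM_OPENROUTER_KEY="your-openrouter-key"
```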
To list available models, run:
llm models list
You should see a list that looks something like this:
OpenRouter: openrouter/openai/gpt-3.5-turbo
OpenRouter: openrouter/anthropic/claude-2
OpenRouter: openrouter/meta-llama/llama-2-70b-chat
...
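Since the full list is long, you can narrow it with standard shell tools; for example, to show only the Anthropic models:

```shell
# Filter the model list down to entries containing "anthropic"
llm models list | grep anthropic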
To run a prompt against a model, pass its full model ID to the -m option, like this:
llm -m openrouter/anthropic/claude-2 "Five spooky names for a pet tarantula"
You can set a shorter alias for a model using the llm aliases command like so:
llm aliases set claude openrouter/anthropic/claude-2
Now you can prompt Claude using:
cat llm_openrouter.py | llm -m claude -s 'write some pytest tests for this'
Some models support image attachments too:
llm -m openrouter/anthropic/claude-3.5-sonnet 'describe this image' -a https://static.simonwillison.net/static/2024/pelicans.jpg
llm -m openrouter/anthropic/claude-3-haiku 'extract text' -a page.png
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd llm-openrouter
python3 -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
llm install -e '.[test]'
To run the tests:
pytest
To add new recordings and snapshots, run:
pytest --record-mode=once --inline-snapshot=create