feat(examples/nvim): add nvim-lsp config based option
Robzz committed Jun 10, 2024
1 parent ad51913 commit e1bddde
59 changes: 53 additions & 6 deletions examples/nvim.md

## Register the language server with neovim

Until a configuration for `lsp-ai` is merged into
[nvim-lspconfig](https://github.com/neovim/nvim-lspconfig), you have two options: use [my
fork](https://github.com/Robzz/nvim-lspconfig) (recommended), or use the raw nvim LSP API directly if you'd rather not
trust a random fork.

Both configurations set up `lsp-ai` with the `llama_cpp` backend, running an 8-bit quant of the CodeGemma v1.1
model using fill-in-the-middle (FIM) completion, fully offloaded to the GPU.

Note: the model configuration is provided as an example/starting point only and I do not vouch for the quality of the
generations. Adjust to taste.

### Using nvim-lspconfig

Add the following snippet to your LSP configuration:

```lua
lspconfig.lsp_ai.setup {
  -- Uncomment the following line if using nvim-cmp with the LSP source
  -- (note the trailing comma):
  -- capabilities = require('cmp_nvim_lsp').default_capabilities(),

  cmd_env = {
    -- Add any environment variables you require here, e.g. for CUDA device selection
    -- CUDA_VISIBLE_DEVICES = "1",
  },
  init_options = {
    models = {
      model1 = {
        type = "llama_cpp",
        repository = "mmnga/codegemma-1.1-2b-gguf",
        name = "codegemma-1.1-2b-Q8_0.gguf",
        n_ctx = 2048,
        n_gpu_layers = 999
      }
    },
    completion = {
      model = "model1",
      parameters = {
        fim = {
          start = "<|fim_prefix|>",
          middle = "<|fim_suffix|>",
          ["end"] = "<|fim_middle|>"
        },
        max_context = 2000,
        max_new_tokens = 32
      }
    }
  },
}
```
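After restarting Neovim and opening a file, you can sanity-check that the server attached to the current buffer (a
quick illustration using Neovim's built-in LSP client API; with the fork above, the client is registered under the
name `lsp_ai`):

```lua
-- Print the names of the LSP clients attached to the current buffer;
-- lsp_ai should be among them once the setup above is active.
local names = {}
for _, client in ipairs(vim.lsp.get_active_clients({ bufnr = 0 })) do
  table.insert(names, client.name)
end
print(table.concat(names, ', '))
```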

### Using the nvim LSP API

This configuration is not recommended for serious use: it does not properly manage the LSP server lifecycle and simply
registers the server with every buffer, even ones you might not want, like terminal buffers.
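That naive registration can be sketched with the built-in `vim.lsp.start` API (an illustration only, not the original
code: `lsp_ai_config` refers to the table defined in the snippet below, and the `cmd` field is an assumption about the
`lsp-ai` binary being on `$PATH`):

```lua
-- Start (or reuse) the lsp-ai server and attach it to every buffer
-- entered, with no filtering on buffer or file type.
vim.api.nvim_create_autocmd('BufEnter', {
  callback = function()
    vim.lsp.start(vim.tbl_extend('force', lsp_ai_config, {
      name = 'lsp-ai',
      cmd = { 'lsp-ai' }, -- assumed to be on $PATH
    }))
  end,
})
```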

```lua
local lsp_ai_config = {
  -- Uncomment the following line if you use nvim-cmp with the cmp_nvim_lsp source.
  -- capabilities = require('cmp_nvim_lsp').default_capabilities(),
  cmd_env = {
    -- Add required environment variables here, e.g. for CUDA device selection.
    -- CUDA_VISIBLE_DEVICES = "1"
  },
  root_dir = nil,
  init_options = {
    -- lsp-ai configuration goes here.
    memory = {
      file_store = {}
    },
    -- ...
  },
}
```

For a copilot-like ghost-text experience, here is an example configuration using
[cmp-nvim-lsp](https://github.com/hrsh7th/cmp-nvim-lsp) source. This is **not a full configuration**, please refer to
the nvim-cmp documentation for a full starter config without ghost text if you need one.

This configuration enables ghost-text in nvim-cmp, and registers a custom comparator that puts `lsp-ai` suggestions
at the top so that they're the ones being drawn with ghost text.

```lua
-- ...
```
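A minimal sketch of such a setup follows. This is not a full configuration, and the comparator's field accesses
(`entry.source.source.client.name`) and the `'lsp-ai'` client name are assumptions about nvim-cmp and lsp-ai
internals; adjust them to match how you registered the server.

```lua
local cmp = require('cmp')

-- Rank completion entries coming from the lsp-ai client above all others,
-- so they are the ones rendered as ghost text.
local function lsp_ai_first(entry1, entry2)
  local function is_lsp_ai(entry)
    local ok, name = pcall(function()
      return entry.source.source.client.name
    end)
    return ok and name == 'lsp-ai'
  end
  local a, b = is_lsp_ai(entry1), is_lsp_ai(entry2)
  if a == b then
    return nil -- no preference, defer to the next comparator
  end
  return a
end

cmp.setup({
  experimental = {
    ghost_text = true,
  },
  sorting = {
    comparators = {
      lsp_ai_first,
      cmp.config.compare.offset,
      cmp.config.compare.exact,
      cmp.config.compare.score,
    },
  },
})
```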
