This is parrot.nvim, the ultimate stochastic parrot to support your text editing inside Neovim.
Features • Demo • Getting Started • Commands • Configuration • Roadmap • FAQ
> [!WARNING]
> Version 0.4.x introduces breaking changes, including the removal of agents and modifications to some hook functions. Please review the updated documentation and adjust your configuration accordingly.
> [!NOTE]
> This repository is still a work in progress, as large parts of the code are still being simplified and restructured. It is based on the brilliant work gp.nvim by https://github.com/Robitx.
I started this repository because a Perplexity subscription provides $5 of API credits every month for free. Instead of letting them go to waste, I modified my favorite GPT plugin, gp.nvim, to meet my needs - a new Neovim plugin was born! 🔥
Unlike gp.nvim, parrot.nvim prioritizes a seamless out-of-the-box experience by simplifying functionality and focusing solely on text generation, excluding the integration of DALL-E and Whisper.
- Persistent conversations as markdown files stored within the Neovim standard path or a user-defined location
- Custom hooks for inline text editing and to start chats with predefined prompts
- Support for multiple providers:
- Anthropic API
- perplexity.ai API
- OpenAI API
- Mistral API
- Gemini API
- Local and offline serving via ollama
- Groq API
- Flexible support for providing API credentials from various sources, such as environment variables, bash commands, and your favorite password manager CLI (lazy evaluation)
- Provide repository-specific instructions via a `.parrot.md` file with the command `PrtContext`
- No autocompletion and no hidden requests in the background to analyze your files
Seamlessly switch between providers and models.
Trigger code completions based on comments.
Let the parrot fix your bugs.
```lua
{
  "frankroeder/parrot.nvim",
  tag = "v0.4.2",
  dependencies = { "ibhagwan/fzf-lua", "nvim-lua/plenary.nvim" },
  -- optionally include "rcarriga/nvim-notify" for beautiful notifications
  config = function()
    require("parrot").setup {
      -- Providers must be explicitly added to make them available.
      providers = {
        pplx = {
          api_key = os.getenv "PERPLEXITY_API_KEY",
          -- OPTIONAL: lazy evaluation via an external command
          -- gpg command
          -- api_key = { "gpg", "--decrypt", vim.fn.expand("$HOME") .. "/pplx_api_key.txt.gpg" },
          -- macOS security tool
          -- api_key = { "/usr/bin/security", "find-generic-password", "-s pplx-api-key", "-w" },
        },
        openai = {
          api_key = os.getenv "OPENAI_API_KEY",
        },
        anthropic = {
          api_key = os.getenv "ANTHROPIC_API_KEY",
        },
        mistral = {
          api_key = os.getenv "MISTRAL_API_KEY",
        },
        gemini = {
          api_key = os.getenv "GEMINI_API_KEY",
        },
        groq = {
          api_key = os.getenv "GROQ_API_KEY",
        },
        ollama = {}, -- provide an empty table to make the provider available
      },
    }
  end,
}
```
Below are the available commands that can be configured as keybindings. These commands are included in the default setup. Additional useful commands are implemented through hooks (see my example configuration).
| Command | Description |
| --- | --- |
| `PrtChatNew <target>` | Open a new chat |
| `PrtChatToggle <target>` | Toggle the chat (open the last chat or a new one) |
| `PrtChatPaste <target>` | Paste the visual selection into the latest chat |
| `PrtInfo` | Print the plugin configuration |
| `PrtContext <target>` | Edit the local context file |
| `PrtChatFinder` | Fuzzy-search chat files using fzf |
| `PrtChatDelete` | Delete the current chat file |
| `PrtChatRespond` | Trigger a chat response (in a chat file) |
| `PrtStop` | Interrupt the ongoing response |
| `PrtProvider <provider>` | Switch the provider (an empty argument triggers fzf) |
| `PrtModel <model>` | Switch the model (an empty argument triggers fzf) |
| **Interactive** | |
| `PrtRewrite` | Rewrite the visual selection based on a provided prompt |
| `PrtAppend` | Append text to the visual selection based on a provided prompt |
| `PrtPrepend` | Prepend text to the visual selection based on a provided prompt |
| `PrtNew` | Prompt the model to respond in a new window |
| `PrtEnew` | Prompt the model to respond in a new buffer |
| `PrtVnew` | Prompt the model to respond in a vertical split |
| `PrtTabnew` | Prompt the model to respond in a new tab |
| **Example Hooks** | |
| `PrtImplement` | Take the visual selection as a prompt to generate code |
| `PrtAsk` | Ask the model a question |
With `<target>`, we indicate the target location where the command opens the chat (defaults to `toggle_target`):

- `popup`: open a popup window, configurable via the options provided below
- `split`: open the chat in a horizontal split
- `vsplit`: open the chat in a vertical split
- `tabnew`: open the chat in a new tab
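For example, a target can be passed directly when binding a command to a key. The mapping below is illustrative and not part of the default setup:

```lua
-- open a new chat in a vertical split regardless of toggle_target
vim.keymap.set("n", "<leader>pn", "<cmd>PrtChatNew vsplit<cr>", { desc = "parrot.nvim: new chat in vsplit" })
```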
All chat commands (`PrtChatNew`, `PrtChatToggle`) and custom hooks support including the visual selection in the chat when triggered. Interactive commands require the template placeholders to include a visual selection in the API request.
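As a sketch, an interactive hook that forwards the visual selection through the `{{selection}}` placeholder could look like this (the hook name `Explain` and its prompt are illustrative, not part of the default setup):

```lua
require("parrot").setup {
  hooks = {
    Explain = function(prt, params)
      -- {{filetype}} and {{selection}} are filled in by parrot.nvim at request time
      local template = [[
      Briefly explain the following {{filetype}} code:
      {{selection}}
      ]]
      local model_obj = prt.get_model("command")
      prt.Prompt(params, prt.ui.Target.popup, model_obj, "🤖 Explain ~ ", template)
    end,
  },
}
```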
```lua
{
  -- The provider definitions include endpoints, API keys, default parameters,
  -- and topic model arguments for chat summarization, with an example provided for Anthropic.
  providers = {
    anthropic = {
      api_key = os.getenv "ANTHROPIC_API_KEY",
      endpoint = "https://api.anthropic.com/v1/messages",
      topic_prompt = "You only respond with 3 to 4 words to summarize the past conversation.",
      topic = {
        model = "claude-3-haiku-20240307",
        params = { max_tokens = 32 },
      },
      params = {
        chat = { max_tokens = 4096 },
        command = { max_tokens = 4096 },
      },
    },
    ...
  },
  -- default system prompts used for the chat sessions and the command routines
  system_prompt = {
    chat = ...,
    command = ...,
  },
  -- the prefix used for all commands
  cmd_prefix = "Prt",
  -- optional parameters for curl
  curl_params = {},
  -- The directory to store persisted state information like the
  -- current provider and the selected models
  state_dir = vim.fn.stdpath("data"):gsub("/$", "") .. "/parrot/persisted",
  -- The directory to store the chats (searched with PrtChatFinder)
  chat_dir = vim.fn.stdpath("data"):gsub("/$", "") .. "/parrot/chats",
  -- Chat user prompt prefix
  chat_user_prefix = "🗨:",
  -- llm prompt prefix
  llm_prefix = "🦜:",
  -- Explicitly confirm the deletion of a chat file
  chat_confirm_delete = true,
  -- Local chat buffer shortcuts
  chat_shortcut_respond = { modes = { "n", "i", "v", "x" }, shortcut = "<C-g><C-g>" },
  chat_shortcut_delete = { modes = { "n", "i", "v", "x" }, shortcut = "<C-g>d" },
  chat_shortcut_stop = { modes = { "n", "i", "v", "x" }, shortcut = "<C-g>s" },
  chat_shortcut_new = { modes = { "n", "i", "v", "x" }, shortcut = "<C-g>c" },
  -- Option to move the cursor to the end of the file after a finished response
  chat_free_cursor = false,
  -- use prompt buftype for chats (:h prompt-buffer)
  chat_prompt_buf_type = false,
  -- Default target for PrtChatToggle, PrtChatNew, PrtContext and the chats opened from the ChatFinder
  -- values: popup / split / vsplit / tabnew
  toggle_target = "vsplit",
  -- The interactive user input can be "native" for vim.ui.input or
  -- "buffer" to query the input within a native nvim buffer
  -- (see video demonstrations below)
  user_input_ui = "native",
  -- Popup window layout
  -- border: "single", "double", "rounded", "solid", "shadow", "none"
  style_popup_border = "single",
  -- margins are number of characters or lines
  style_popup_margin_bottom = 8,
  style_popup_margin_left = 1,
  style_popup_margin_right = 2,
  style_popup_margin_top = 2,
  style_popup_max_width = 160,
  -- Prompt used for interactive LLM calls like PrtRewrite where {{llm}} is
  -- a placeholder for the llm name
  command_prompt_prefix_template = "🤖 {{llm}} ~ ",
  -- auto select command response (easier chaining of commands)
  -- if false it also frees up the buffer cursor for further editing elsewhere
  command_auto_select_response = true,
  -- fzf_lua options for PrtModel and PrtChatFinder when the plugin is installed
  fzf_lua_opts = {
    ["--ansi"] = true,
    ["--sort"] = "",
    ["--info"] = "inline",
    ["--layout"] = "reverse",
    ["--preview-window"] = "nohidden:right:75%",
  },
  -- Enables the query spinner animation
  enable_spinner = true,
  -- Type of spinner animation to display while loading
  -- Available options: "dots", "line", "star", "bouncing_bar", "bouncing_ball"
  spinner_type = "star",
}
```
With `user_input_ui = "buffer"`, your input is simply a buffer. All of its content is passed to the API when the buffer is closed.
This plugin provides the following default key mappings:
| Keymap | Description |
| --- | --- |
| `<C-g>c` | Open a new chat via `PrtChatNew` |
| `<C-g><C-g>` | Trigger the API to generate a response via `PrtChatRespond` |
| `<C-g>s` | Stop the current text generation via `PrtStop` |
| `<C-g>d` | Delete the current chat file via `PrtChatDelete` |
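Beyond these defaults, any of the commands above can be bound to your own keys, for instance in a lazy.nvim plugin spec. The mappings below are illustrative, not defaults:

```lua
keys = {
  { "<leader>pc", "<cmd>PrtChatToggle<cr>", desc = "parrot.nvim: toggle chat" },
  { "<leader>pr", ":<C-u>'<,'>PrtRewrite<cr>", mode = "v", desc = "parrot.nvim: rewrite selection" },
},
```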
```lua
require("parrot").setup {
  -- ...
  hooks = {
    Ask = function(parrot, params)
      local template = [[
      In light of your existing knowledge base, please generate a response that
      is succinct and directly addresses the question posed. Prioritize accuracy
      and relevance in your answer, drawing upon the most recent information
      available to you. Aim to deliver your response in a concise manner,
      focusing on the essence of the inquiry.
      Question: {{command}}
      ]]
      local model_obj = parrot.get_model("command")
      parrot.logger.info("Asking model: " .. model_obj.name)
      parrot.Prompt(params, parrot.ui.Target.popup, model_obj, "🤖 Ask ~ ", template)
    end,
  },
  -- ...
}
```
```lua
require("parrot").setup {
  -- ...
  hooks = {
    SpellCheck = function(prt, params)
      local chat_prompt = [[
      Your task is to take the text provided and rewrite it into a clear,
      grammatically correct version while preserving the original meaning
      as closely as possible. Correct any spelling mistakes, punctuation
      errors, verb tense issues, word choice problems, and other
      grammatical mistakes.
      ]]
      prt.ChatNew(params, chat_prompt)
    end,
  },
  -- ...
}
```
Refer to my personal lazy.nvim setup for further hooks and key bindings: https://github.com/frankroeder/dotfiles/blob/master/nvim/lua/plugins/parrot.lua
Users can utilize the following placeholders in their templates to inject specific content into the user messages or custom system prompts:

| Placeholder | Content |
| --- | --- |
| `{{selection}}` | Current visual selection |
| `{{filetype}}` | Filetype of the current buffer |
| `{{filepath}}` | Full path of the current file |
| `{{filecontent}}` | Full content of the current buffer |
| `{{multifilecontent}}` | Full content of all open buffers |
Below is an example of how to use these placeholders in a completion hook, which receives the full file context and the selected code snippet as input.
````lua
require("parrot").setup {
  -- ...
  hooks = {
    CompleteFullContext = function(prt, params)
      local template = [[
      I have the following code from {{filepath}}:

      ```{{filetype}}
      {{filecontent}}
      ```

      Please look at the following section specifically:

      ```{{filetype}}
      {{selection}}
      ```

      Please finish the code above carefully and logically.
      Respond just with the snippet of code that should be inserted.
      ]]
      local model_obj = prt.get_model()
      prt.Prompt(params, prt.ui.Target.append, model_obj, nil, template)
    end,
  },
  -- ...
}
````
The placeholders `{{filetype}}` and `{{filecontent}}` can also be used in the `chat_prompt` when creating custom hooks calling `prt.ChatNew(params, chat_prompt)` to directly inject the whole file content.
````lua
require("parrot").setup {
  -- ...
  hooks = {
    CodeConsultant = function(prt, params)
      local chat_prompt = [[
      Your task is to analyze the provided {{filetype}} code and suggest
      improvements to optimize its performance. Identify areas where the
      code can be made more efficient, faster, or less resource-intensive.
      Provide specific suggestions for optimization, along with explanations
      of how these changes can enhance the code's performance. The optimized
      code should maintain the same functionality as the original code while
      demonstrating improved efficiency.

      Here is the code:

      ```{{filetype}}
      {{filecontent}}
      ```
      ]]
      prt.ChatNew(params, chat_prompt)
    end,
  },
  -- ...
}
````
- Add status line integration/notifications for a summary of tokens used or money spent
- Improve the documentation
- Create a tutorial video
- Reduce overall code complexity and improve robustness
- **I am encountering errors related to the state.**
  If the state is corrupted, simply delete the file `~/.local/share/nvim/parrot/persisted/state.json`.
- **The completion feature is not functioning, and I am receiving errors.**
  Ensure that you have an adequate amount of API credits and examine the log file `~/.local/state/nvim/parrot.nvim.log` for any errors.
- **I have discovered a bug, have a feature suggestion, or possess a general idea to enhance this project.**
  Everyone is invited to contribute to this project! If you have any suggestions, ideas, or bug reports, please feel free to submit an issue.