
vim-ai's People

Contributors

chrissound, cposture, duylam, gordiandziwis, jiangyinzuo, jkoelker, juodumas, kennypete, konfekt, madox2, misterbuckley


vim-ai's Issues

Why not use the edits endpoint?

I noticed that completion uses text-davinci, whereas the edits endpoint has up to 8k context length and seems far more appropriate. I was curious as to why :)
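
For reference, a rough sketch of what a call to the edits endpoint looked like, using plain urllib in the same spirit as the plugin's other requests (the payload follows OpenAI's edits API; nothing here is vim-ai code):

import json
import os
import urllib.request

# The edits API took an "input" text plus an "instruction" instead of a
# single prompt.
req = urllib.request.Request(
    'https://api.openai.com/v1/edits',
    data=json.dumps({
        'model': 'text-davinci-edit-001',
        'input': 'What day of the wek is it?',
        'instruction': 'Fix the spelling mistakes',
    }).encode('utf-8'),
    headers={
        'Content-Type': 'application/json',
        'Authorization': 'Bearer ' + os.environ['OPENAI_API_KEY'],
    },
)
with urllib.request.urlopen(req) as response:
    print(json.load(response)['choices'][0]['text'])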

Enable use of virtual text rather than buffer for inserting streaming completions

Hi Martin, thanks again for this plugin. It is a dream to work with.

One issue I face: when a streaming completion runs in a code file whose syntax I have auto-linters set up for, the new text streams into the buffer quickly, but the linters then try to process each intermediate version of the file after every inserted token, clobbering my Vim for a while until it has all been processed. The quickfix window screams through a bunch of syntax errors, since tokens are emitted mid-word and so on.

The other GPT Vim plugin that I know of, but don't like as much, solves this by using a virtual-text indicator: issue, resolution.

Unfortunately this is far more sophisticated vimscript than I can understand at a glance. I would love to learn how to implement it, but it will probably be a while.
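
For what it's worth, here is a minimal Neovim-only sketch of the idea, as it might look inside the plugin's pynvim context (the namespace and function names are mine, not the plugin's):

ns = vim.api.nvim_create_namespace('vim_ai_stream')

def show_chunk_as_virtual_text(text, row):
    # Hypothetical: render a streamed chunk as virtual text at the end of a
    # line instead of inserting it into the buffer, so linters never see the
    # intermediate states ('vim' is the pynvim handle the plugin already has).
    vim.api.nvim_buf_set_extmark(
        0, ns, row, 0,
        {'virt_text': [[text, 'Comment']], 'virt_text_pos': 'eol'},
    )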

Hello! + feature request: Azure OpenAI API integration + notes

Hi Martin,

First of all, thank you for sharing your project!
I recently came across vim-ai while contemplating how to implement my idea of automating interaction with a large language model (LLM) web playground, similar to the OpenAI web playground.
In line with this concept, I posted a thought-provoking question on this link: https://vi.stackexchange.com/questions/42827/exploring-the-potential-of-vim-editor-as-an-efficient-multi-prompt-playground-fo

Before delving into my perspective and the potential integration with your project, I would like to submit a minor yet crucial feature request: enabling integration with the Microsoft Azure OpenAI service, as an alternative API endpoint.

As you might be aware, the OpenAI APIs are now accessible on the Microsoft Azure platform. However, as is often the case with Microsoft, they tend to introduce complexity into the process... The API interface differs slightly in this context. Here are two links that highlight the distinctions:

Essentially, Microsoft requires the following setup:

export OPENAI_API_KEY=<AZUREKEY>
export OPENAI_API_BASE="https://<AZUREPROJECTNAME>.openai.azure.com/openai/deployments/text-davinci-003/completions?api-version=2022-12-01"

By the way, the API_BASE in Azure parlance is also called the endpoint.

All in all, to call the Azure APIs (instead of the OpenAI APIs) you need to hit the specific endpoint given in the OPENAI_API_BASE variable. I may have found the line involved: https://github.com/madox2/vim-ai/blob/main/py/chat.py#L72

        response = openai_request('https://api.openai.com/v1/chat/completions', request, http_options)

A possible solution would be to avoid hardcoding the URL 'https://api.openai.com/v1/chat/completions' and instead read it from the OPENAI_API_BASE environment variable.
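
A minimal sketch of that idea (the helper name is mine; openai_request, request and http_options are the existing names from py/chat.py quoted above):

import os

def chat_completions_url():
    # Fall back to the public OpenAI endpoint unless OPENAI_API_BASE points
    # at an Azure (or otherwise compatible) deployment.
    return os.environ.get(
        'OPENAI_API_BASE',
        'https://api.openai.com/v1/chat/completions',
    )

response = openai_request(chat_completions_url(), request, http_options)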

Besides the above point, I'm looking forward to discussing my perspective and the potential integration further.

Best regards,
giorgio

Wrapping selections for code highlighting

I have wrapped AIChatRun calls in these functions to get a nicer prompt:

function! CodeReviewFn(range) range
  let l:lines = trim(join(getline(a:firstline, a:lastline), "\n"))
  let l:initial_prompt = ['>>> system', 'As a clean coding expert, review the code with the highest standards. Make the code more expressive and concise, using comments only when necessary. Also, consider the best practices for ' . &filetype . '.']
  let l:prompt = '```' . &filetype  . "\n" . l:lines . "\n```"
  let l:config = {
  \  'options': {
  \    'initial_prompt': l:initial_prompt,
  \  },
  \}
  '<,'>call vim_ai#AIChatRun(v:false, l:config, l:prompt)
endfunction
command! -range AIChatCodeReview <line1>,<line2>call CodeReviewFn(<range>)

function! g:vim_ai_mod.AIChatWrapper(range, ...) range
  let l:lines = trim(join(getline(a:firstline, a:lastline), "\n"))
  let l:instruction = a:0 ? a:1 : ''
  let l:prompt = l:instruction . "\n```" . &filetype  . "\n" . l:lines . "\n```"
  '<,'>call vim_ai#AIChatRun(v:false, {}, l:prompt)
endfunction
command! -range -nargs=? AIChatWrapper <line1>,<line2>call g:vim_ai_mod.AIChatWrapper(<range>, <f-args>)

And by the way, with treesitter there is no need to define embedded syntaxes; this is already included.

gpt-4 model is not working

I tried the GPT-4 model with vim-ai, but the following error occurred.
The message says that the GPT-4 model is not supported.


How can I configure it to use the GPT-4 API?

Settings:

OS: Windows 11
Vim: gVim 9.0 (patches 1-1221)

let g:vim_ai_edit = {
			\  "options": {
			\    "model": "gpt-4",
			\    "max_tokens": 1000,
			\    "temperature": 0.1,
			\    "request_timeout": 10,
			\  },
			\}

:AIE makes the line just disappear


I have tried the AIE command to create the variables username and password and use them in the conn statement, but the target line just disappears with no error given. (I have verified that the plugin works with other commands.)

If at that position I run :'<,'>AIE make variables, it will actually start producing some lines.

[Feature Request] Redo

Dear Developer,

I wanted to start by expressing my appreciation for your exceptional work on the plugin. As a user, it has been tremendously helpful for my text processing work.

Specifically, I have been using the :AIEdit function in visual mode to optimize sentence expressions. However, there have been instances when I was not satisfied with the result and had to press u to undo the operation and repeat it. This becomes tiresome when repeated multiple times. Therefore, I believe that adding a redo feature would significantly simplify users' work and save time.

Finally, I want to commend you on creating such a remarkable plugin. It has made a significant impact in my workflow, and I am grateful for your dedication and hard work.

Thank you again for all your efforts.

Best regards.

E903: Process failed to start: too many open files

I'm getting an error that I'm not sure has its root cause in vim-ai, but it happens when using the plugin.

I'm getting the error from the title when the plugin outputs lines to the editor, generally after the 14th line.

Here's an example prompt:

:AI write an rspec file template so I can start coding on it. the class name is SomeJob

It starts outputting code, and when it finishes line 14, it errors out with the following:

Error detected while processing function vim_ai#AIRun[19]..provider#python3#Call:
line   18:
Error invoking 'python_execute_file' on channel 2254 (python3-script-host):
Traceback (most recent call last):
  File "/Users/alex/.vim/plugins/vim-ai/py/complete.py", line 55, in <module>
    handle_completion_error(error)
  File "/Users/alex/.vim/plugins/vim-ai/py/utils.py", line 172, in handle_completion_error
    raise error
  File "/Users/alex/.vim/plugins/vim-ai/py/complete.py", line 52, in <module>
    render_text_chunks(text_chunks)
  File "/Users/alex/.vim/plugins/vim-ai/py/utils.py", line 57, in render_text_chunks
    vim.command("normal! a" + text)
  File "/opt/homebrew/lib/python3.11/site-packages/pynvim/api/nvim.py", line 287, in command
    return self.request('nvim_command', string, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/pynvim/api/nvim.py", line 182, in request
    res = self._session.request(name, *args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/pynvim/msgpack_rpc/session.py", line 102, in request
    raise self.error_wrapper(err)
pynvim.api.common.NvimError: function vim_ai#AIRun[19]..provider#python3#Call[18]..vim_ai#AIRun[19]..InsertLeave Autocommands for "*"..function ale#Queue[33]..<SNR>137_Lint[20]..ale#engine#RunLinters[4]..<SNR>173_GetLintFileValues[27]..<lambda>4966[1]..<SNR>173_RunLinters[12]..<SNR>173_RunLinter[6]..<SNR>173_RunIfExecutable[43]..<SNR>173_RunJob[27]..ale#command#Run[65]..ale#job#Start, line 21: Vim(let):E903: Process failed to start: too many open files: "/bin/bash"
Press ENTER or type command to continue

Ask original buffer modification from AIChat

I'm not sure if it is possible but I would like to be able to modify the buffer content from the AIChat buffer.

Currently if I want to edit a buffer, I have to select the portion I want to edit then use AIEdit.

Unfortunately, the AIEdit prompt happens on the Vim command line, which is not as nice as a Vim buffer.

Would it be possible to run an AIEdit prompt directly from the AIChat buffer?

Trouble using the AIEdit command

Hello,

First, thanks for this awesome VIM plugin.

I am having some trouble using the AIEdit command.
I am trying something similar to the demo GIF in the readme, but it seems that the visually selected text I am asking OpenAI to manipulate is not sent.

Here are the steps I am taking:

  • Visually selecting a list of items (Shift+V)
  • Typing :AIEdit make this data JSON
  • Pressing Enter

The AI returns what seems to be a "basic" JSON structure with some random example data.

Am I doing something wrong when I try to use the AIEdit command?

Best regards,

tvvmongeek

Every command returning HTTP status 429

Hi,

Everything seems to be working with my install, but no matter what command I run I get a 429:

Error invoking 'python_execute_file' on channel 5 (python3-script-host):
Traceback (most recent call last):
  File "/Users/s/.vim/plugged/vim-ai/py/complete.py", line 54, in <module>
    handle_completion_error(error)
  File "/Users/s/.vim/plugged/vim-ai/py/utils.py", line 149, in handle_completion_error
    raise error
  File "/Users/s/.vim/plugged/vim-ai/py/complete.py", line 52, in <module>
    render_text_chunks(text_chunks)
  File "/Users/s/.vim/plugged/vim-ai/py/utils.py", line 41, in render_text_chunks
    for text in chunks:
  File "/Users/s/.vim/plugged/vim-ai/py/utils.py", line 123, in openai_request
    with urllib.request.urlopen(req, timeout=request_timeout) as response:
  File "/opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 216, in urlopen
    return opener.open(url, data, timeout)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 525, in open
    response = meth(req, response)
               ^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 634, in http_response
    response = self.parent.error(
               ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 563, in error
    return self._call_chain(*args)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 496, in _call_chain
    result = func(*args)
             ^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 643, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 429: Too Many Requests

Any idea why that might be happening? I have plenty of credits and a paid account with OpenAI.

aichat files storage / history / backup

First of all, I love your plugin; it was the cherry on top of the OpenAI cake. I installed it today and I think I have used it hundreds of times already. Impossible to go back to copy-pasting stuff now.

I immediately started thinking about storing all my chats and saving selected ones, but I find writing full paths or saving in the current directory kind of cumbersome.

Do you have any ideas, an established workflow, or best practices on where to save .aichat files? How do you do it?
I would love to hack something up to make this plugin even better.

Cheers!

doc: missing example for presets

Hi,

Just wanted to point out that I don't find the preset feature well explained in the documentation. Is the syntax :AINewChat preset_below, :AINewChat {preset below}, etc.? I think just one example in the README would clear things up for those unfamiliar with Vim syntax :)

Cheers

Bold Text Formatting.

First off, big fan of your Vim plugin here. It's been a game changer for me.

I've been using it for a while now and noticed something. When the AI uses double asterisks to mark text as bold (markdown style), the plugin doesn't actually show the text in bold.

For instance, if the AI spits out a list like:

  • Item 1
  • Item 2
  • Item 3

The items don't pop out in bold in Vim; they just show the double asterisks instead: - ** Item 1 **

It'd be super cool if the plugin could support bold text formatting when double asterisks are used (and double underscores __ text __ in case the AI decides to use them instead, and maybe italic with single * text * and _ text _ too?). I think it'd make the AI's output easier to read, especially for titles and lists.

I get that this might not be a quick fix, but I wanted to throw the idea out there. Thanks for considering it, and keep up the awesome work on the plugin!

Edit: A workaround is to create a file at ~/.vim/syntax/aichat.vim and populate it with the following:

syntax region markdownItalic matchgroup=ItalicDelimiter start="\*" end="\*" concealends
highlight ItalicDelimiter cterm=italic gui=italic
highlight markdownItalic cterm=italic gui=italic

syntax region markdownBold matchgroup=BoldDelimiter start="\*\*" end="\*\*" concealends
highlight BoldDelimiter cterm=bold gui=bold
highlight markdownBold cterm=bold gui=bold

Make sure to have set conceallevel=2 in your .vimrc.

How to continue `<SELECTION> :AIChat` using an `.aichat` file?

Thanks for this amazing plugin! With the .aichat file, I can easily continue my chat conversation. However, when I select some text in another file and execute :'<,'>AIChat, a new chat session is opened. Is there a way to make it continue in the already opened .aichat file?

Feature suggestion: :AICommand

:AICommand would create Vim commands based on your description and let you see them before executing.

Inspiration:
I'm loving https://github.com/TheR1D/shell_gpt, and how you can run sgpt --shell "Install ripgrep" and have it return:

sudo apt-get install ripgrep
Execute shell command? [y/N]:

[Bug Report] Empty reply when using :AIChat

When using :AIChat, there are cases where GPT provides empty answers:


I would be happy to help; please let me know what information you need. Perhaps a debug module would allow me to help you better.

Best plugin so far

Hi Martin,

I think this plugin is probably the best GPT Vim plugin so far. The others are too heavy; this one is lightweight.

Almost like working directly with the API.

It would be nice to have more documentation about getting the most out of it (for example, using system prompts).

What is your experience so far in experimenting with the API? Do you have any insights you would be willing to share?

We could do a live stream, if you want, where you share some great ideas.

Allow dynamic setting of max_tokens based on input text

During completion or editing, an error reliably occurs when max_tokens + [length of selected text] + [length of instruction given] exceeds the model's context length limit, even if the model would have generated a very short completion.

For example, setting max_tokens to 4000 in my config with text-davinci-003 will error on AIE when my selected text is realistically long (say, 100 tokens). To avoid this, I have to write constraining configuration that assumes the worst case (it has been reasonable to assume I will never insert a long text and request back a longer one, so half the model's capacity does OK).

However, it would be nice to be able to set something like "dynamic_max_tokens": { "enabled": 1, "effective_max": 4096 } and have the plugin do some math to interpolate an appropriate max_tokens based on how much is left over after accounting for your pasted text. This behavior would also make room for smarter error handling in cases where the selected text is close to or over the max length.
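
A rough sketch of the interpolation I have in mind (the option and helper names are illustrative, and a crude four-characters-per-token estimate stands in for a real tokenizer):

def dynamic_max_tokens(prompt, effective_max=4096, reserve=16):
    # Estimate how many tokens the prompt will consume, then grant the model
    # whatever is left of the context window, keeping a small floor.
    estimated_prompt_tokens = len(prompt) // 4
    return max(reserve, effective_max - estimated_prompt_tokens)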

I would be happy to work on a PR for this!

At a minimum, I hope this issue helps others understand why they are getting this somewhat surprising error when running with an ambitiously chosen max_tokens:

Error detected while processing function vim_ai#AIEditRun[13]..provider#python3#Call:
line   18:
Error invoking 'python_execute_file' on channel 8 (python3-script-host):
Traceback (most recent call last):
  File "/Users/eve/.vim/plugged/vim-ai/py/complete.py", line 55, in <module>
    handle_completion_error(error)
  File "/Users/eve/.vim/plugged/vim-ai/py/utils.py", line 163, in handle_completion_error
    raise error
  File "/Users/eve/.vim/plugged/vim-ai/py/complete.py", line 52, in <module>
    render_text_chunks(text_chunks)
  File "/Users/eve/.vim/plugged/vim-ai/py/utils.py", line 53, in render_text_chunks
    print_info_message('Empty response received. Tip: You can try modifying the prompt and retry.')
  File "/Users/eve/.vim/plugged/vim-ai/py/utils.py", line 140, in print_info_message
    vim.command(f"normal \<Esc>")
  File "/usr/local/lib/python3.11/site-packages/pynvim/api/nvim.py", line 287, in command
    return self.request('nvim_command', string, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pynvim/api/nvim.py", line 182, in request
    res = self._session.request(name, *args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pynvim/msgpack_rpc/session.py", line 102, in request
    raise self.error_wrapper(err)
pynvim.api.common.NvimError: Vim(normal):E21: Cannot make changes, 'modifiable' is off

NameError: name 'vim' is not defined

I'm on Ubuntu WSL and I installed Python support with apt install python3-neovim.

Error detected while processing function vim_ai#AIChatRun[30]..provider#python3#Call:
line   18:
Error invoking 'python_execute_file' on channel 4 (python3-script-host):
Traceback (most recent call last):
  File "/home/nrosquist/.local/share/nvim/lazy/vim-ai/py/chat.py", line 2, in <module>
    plugin_root = vim.eval("s:plugin_root")
NameError: name 'vim' is not defined

Initial_prompt needs to be a list to work

let l:initial_prompt = [">>> system", "As a clean coding expert, review the code with the highest standards. Make the code more expressive and concise, using comments only when necessary. Also, consider the best practices for " . &filetype . "."]

let l:config = {
\  "options": {
\    "initial_prompt": l:initial_prompt,
\  },
\}

This is not documented yet.

modularity?

Hello again!

I hope you're doing well. I have a question about g:vim_ai_chat and how to customize the chat window. Is there a way to make it more flexible, so that users can choose how to open the chat window? For example, could we add arguments to the shortcut input so that users can choose between opening the chat window in a split or a new tab?

I'm a total noob at vimscript, so I might be missing something obvious. But I think it would be really helpful.

Also, just to give you an idea of what I'm talking about: when you added the "ui" key in g:vim_ai_chat, I wasn't sure if the intent was to make it easily customizable by the user as a 'setting' or as a modular type of code. Right now, it seems like I can only open the chat window as either a split or a new tab, but not both. Is there a way to make it more modular so that users have more flexibility in how they open the chat window? For example, instead of the shortcut inputting ":AIChat", could we have it enter ":AIChat newtab", making it easier for less savvy users to have a binding for ":AIChat split"?

Having at least one argument would, I think, make it easier for other users to customize it if they know Python but not vimscript.

Thanks!

fold questions

Leaving this here for reference; feel free to close. It should not be too much work: an adaptation of vim-markdown could fold every question together with its answer, for easier navigation through long AI chats. (Navigation without folds has been taken care of here.)

Leverage Neovim and treesitter support

I have one example in mind: Explain a line of code.

This is the buffer in neovim:

local has_words_before = function()
    local line, col = unpack(vim.api.nvim_win_get_cursor(0))
    return col ~= 0
        and vim.api
                .nvim_buf_get_lines(0, line - 1, line, true)[1] -- Cursor is on this line
                :sub(col, col)
                :match '%s'
            == nil
end

Now, treesitter has text objects for functions and classes, so it would be possible to create a command that takes the cursor's line and the surrounding function/class to construct a prompt like this:


Explain this line:

.nvim_buf_get_lines(0, line - 1, line, true)[1]

In this function:

local has_words_before = function()
    local line, col = unpack(vim.api.nvim_win_get_cursor(0))
    return col ~= 0
        and vim.api
                .nvim_buf_get_lines(0, line - 1, line, true)[1]
                :sub(col, col)
                :match '%s'
            == nil
end

The result could be shown in a floating window.

I will try to implement this, but first I wanted to ask whether this is a good idea, because this plugin is vimscript-based and maybe I should try to do it with this plugin instead: https://github.com/dpayne/CodeGPT.nvim

I would really like to go with your plugin, because the AIChat buffer is an absolute killer feature.

:AIEdit leads to a Python error

After updating to the latest release of vim-ai in Neovim on my Mac, here is what an :AIEdit prompt generates:

lazy/vim-ai/py/complete.py", line 6, in
engine = config['engine']
~~~~~~^^^^^^^^^^
TypeError: string indices must be integers, not 'str'

Python is 3.11.3, Neovim is latest stable

Error on running the command `:AIC` in the middle of the chat

This error happens quite frequently for me.

Steps to replicate:

  1. Open a chat with the command :AIC or :AIN
  2. Chat for a while
  3. Sometimes the error occurs; restarting Vim helps
Error detected while processing function vim_ai#AIChatRun[5]..<SNR>137_set_paste:
line    1:
E716: Key not present in Dictionary: "ui"
Error detected while processing function vim_ai#AIChatRun[30]..provider#python3#Call:
line   18:
Error invoking 'python_execute_file' on channel 9 (python3-script-host):
Traceback (most recent call last):
  File "/Users/finn/.local/share/nvim/plugged/vim-ai/py/chat.py", line 7, in <module>
    config_ui = config['ui']
                ~~~~~~^^^^^^
KeyError: 'ui'
Error detected while processing function vim_ai#AIChatRun[31]..<SNR>137_set_nopaste:
line    1:
E716: Key not present in Dictionary: "ui"
Press ENTER or type command to continue

No request_timeout in the OpenAI API

Hello!

First off: very nice package you have here! I wanted to start using it but instantly ran into some issues with the request_timeout option. Referring to the API reference for OpenAI, there seems to be no such option (the API for requests is found here). Just curious: why is it in here?

I have solved it by removing the option, and it now works for me. Is it possible for me to contribute the changes in any way?
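
For what it's worth, the tracebacks elsewhere on this page suggest the option is a client-side socket timeout passed to urllib, rather than something sent to the API; a minimal sketch of that call shape:

import urllib.request

def open_with_timeout(req, request_timeout):
    # The timeout is enforced locally by urllib; it is never included in the
    # request body sent to the OpenAI API.
    with urllib.request.urlopen(req, timeout=request_timeout) as response:
        return response.read()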

TimeoutError when using AIChat

I'm using AIChat with the following buffer:

>>> user

I have this error 

C:\Users\LyloDevW10\scoop\apps\pyenv\current\pyenv-win\libexec\pyenv.vbs(0, 1) Microsoft VBScript runtime error: File not found


<<< assistant

It leads to the following error:

Error detected while processing function vim_ai#AIChatRun[30]..provider#python3#Call:
line   18:
Error invoking 'python_execute_file' on channel 8 (python3-script-host):
Traceback (most recent call last):
  File "/Users/martin/.local/share/nvim/plugged/vim-ai/py/chat.py", line 83, in <module>
    handle_completion_error(error)
  File "/Users/martin/.local/share/nvim/plugged/vim-ai/py/utils.py", line 172, in handle_completion_err
or
    raise error
  File "/Users/martin/.local/share/nvim/plugged/vim-ai/py/chat.py", line 77, in <module>
    render_text_chunks(text_chunks)
  File "/Users/martin/.local/share/nvim/plugged/vim-ai/py/utils.py", line 53, in render_text_chunks
    for text in chunks:
  File "/Users/martin/.local/share/nvim/plugged/vim-ai/py/utils.py", line 136, in openai_request
    with urllib.request.urlopen(req, timeout=request_timeout) as response:
  File "/Users/martin/.pyenv/versions/3.10.6/lib/python3.10/urllib/request.py", line 216, in urlopen
    return opener.open(url, data, timeout)
  File "/Users/martin/.pyenv/versions/3.10.6/lib/python3.10/urllib/request.py", line 519, in open
    response = self._open(req, data)
  File "/Users/martin/.pyenv/versions/3.10.6/lib/python3.10/urllib/request.py", line 536, in _open
    result = self._call_chain(self.handle_open, protocol, protocol +
  File "/Users/martin/.pyenv/versions/3.10.6/lib/python3.10/urllib/request.py", line 496, in _call_chai
n
    result = func(*args)
  File "/Users/martin/.pyenv/versions/3.10.6/lib/python3.10/urllib/request.py", line 1391, in https_ope
n
    return self.do_open(http.client.HTTPSConnection, req,
  File "/Users/martin/.pyenv/versions/3.10.6/lib/python3.10/urllib/request.py", line 1352, in do_open
    r = h.getresponse()
  File "/Users/martin/.pyenv/versions/3.10.6/lib/python3.10/http/client.py", line 1374, in getresponse
    response.begin()
  File "/Users/martin/.pyenv/versions/3.10.6/lib/python3.10/http/client.py", line 318, in begin
    version, status, reason = self._read_status()
  File "/Users/martin/.pyenv/versions/3.10.6/lib/python3.10/http/client.py", line 279, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/Users/martin/.pyenv/versions/3.10.6/lib/python3.10/socket.py", line 705, in readinto
    return self._sock.recv_into(b)
  File "/Users/martin/.pyenv/versions/3.10.6/lib/python3.10/ssl.py", line 1274, in recv_into
    return self.read(nbytes, buffer)
  File "/Users/martin/.pyenv/versions/3.10.6/lib/python3.10/ssl.py", line 1130, in read
    return self._sslobj.read(len, buffer)
TimeoutError: The read operation timed out
Press ENTER or type command to continue

Despite the buffer content, I should clarify that I'm working in Neovim on macOS.

Feature request: Place response in string

Heya! I dug around a little bit but couldn't see an easy way to achieve this off the bat. What I'd like to be able to do is call e.g. :AIExecute to run a prompt and store the resulting string in a local buffer, rather than pasting it to the screen. This would make it easy to do something like "Given this line-delimited code snippet, suggest any potential issues or code smells in the format Line: Suggestion" and then feed the resulting buffer into ALE, for AI-driven linting.

how to install with lazy plugin manager?

This is more of a question than an issue: how can I use this plugin with the lazy plugin manager in Neovim? I've tried manual installation, but the plugin doesn't get picked up. Thanks in advance for your help.

Any way to make chat async?

Great plugin, thanks. I find myself mostly using it via chat, and sometimes waiting around while it types a long answer. I know Neovim and Vim 8 have async features; would this be possible or hard to build in?

Feature inquiry/request: Setting max linewidth for AI output

Hello,

I usually use Vim on a side monitor or in a split, so I can comfortably read maybe 130 characters. AI output usually exceeds this.

Is there a way to set a maximum line width? How difficult would it be to add such an option? I could look into implementing it with some guidance.
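
If it helps, here is a minimal sketch of one way such an option could post-process a response before insertion (this is not an existing plugin option; 130 is just the width mentioned above):

import textwrap

def wrap_response(text, width=130):
    # Re-wrap each paragraph to the given width, preserving the blank lines
    # that separate paragraphs.
    paragraphs = text.split('\n\n')
    return '\n\n'.join(textwrap.fill(p, width=width) for p in paragraphs)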

Best regards,
aJns

GPT-4 doesn't work

I've tried setting the following options:

In my .vimrc:

let g:vim_ai_chat = {
\  "options": {
\    "model": "gpt-4",
\    "temperature": 0.2,
\  },
\}

And on the fly:

let g:vim_ai_chat['options']['model'] = 'gpt-4'
let g:vim_ai_chat['options']['temperature'] = 0.2

All I get is this error message:

Error detected while processing function vim_ai#AIChatRun:
line   30:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/hellhound/.vim/plugged/vim-ai/py/chat.py", line 82, in <module>
    handle_completion_error(error)
  File "/Users/hellhound/.vim/plugged/vim-ai/py/utils.py", line 149, in handle_completion_error
    raise error
  File "/Users/hellhound/.vim/plugged/vim-ai/py/chat.py", line 77, in <module>
    render_text_chunks(text_chunks)
  File "/Users/hellhound/.vim/plugged/vim-ai/py/utils.py", line 41, in render_text_chunks
    for text in chunks:
  File "/Users/hellhound/.vim/plugged/vim-ai/py/utils.py", line 123, in openai_request
    with urllib.request.urlopen(req, timeout=request_timeout) as response:
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 214, in urlopen
    return opener.open(url, data, timeout)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 523, in open
    response = meth(req, response)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 632, in http_response
    response = self.parent.error(
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 561, in error
    return self._call_chain(*args)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 494, in _call_chain
    result = func(*args)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 641, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
Press ENTER or type command to continue

My system specs:

I'm using MacPorts 2.8.1 and I have the following Python version installed:

python39 @3.9.16_0+lto+optimizations (active)

Error message

Hi there,

Since a few days ago the package has not worked any more; it displays the following error message. What is the issue? Any ideas how to fix it? Would be great!

Error while executing "function vim_ai#AIRun[19]..function vim_ai#AIRun": line 19:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/x/.vim/pack/vendor/start/vim-ai/py/complete.py", line 3, in <module>
    vim.command(f"py3file {plugin_root}/py/utils.py")
vim.error: Vim(py3file):vim.error: Vim:/Users/x/.vim/pack/vendor/start/vim-ai/py/utils.py:170: SyntaxWarning: invalid escape sequence '\<'
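
For reference, the warning points at an unescaped backslash inside an ordinary Python string; assuming the line quoted in other tracebacks on this page, vim.command(f"normal \<Esc>"), a raw string would silence it:

# Sketch of the fix: a raw string stops Python from treating "\<" as an
# (invalid) escape sequence ('vim' is the plugin's pynvim handle).
vim.command(r"normal \<Esc>")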

Document selection_boundary

Hi!

First: Many, many thanks for this great plugin! I really like it very much.

I do not understand what exactly the selection_boundary config option is for.
It sounds somewhat like a helper that marks boundaries for quicker selections,
for instance a value of "```" helping to select Markdown-formatted code snippets in the assistant's responses.
But how do I use it once configured? One or two examples would help, and you may want to describe it in the README.

All best,
Jamil

Make `max_tokens` optional

The behavior of the max_tokens option is not very intuitive.

It will break in the following scenario:

gpt4 token limit ~= 8000
max_tokens = 6000
prompt = 3000

This gives a 400 HTTP error; my guess is that they add prompt and max_tokens together beforehand and compare the sum against the token limit, so it behaves more like a min_tokens while validating the request.

The solution is not to set the max_tokens parameter in the API request.
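
A quick illustration of the suspected validation, using the numbers above:

# If the API adds the prompt size and max_tokens together up front, this
# request is rejected with HTTP 400 even though the actual completion could
# have been short:
token_limit = 8000
max_tokens = 6000
prompt_tokens = 3000
assert prompt_tokens + max_tokens > token_limit  # 9000 > 8000 -> rejected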

Error installing on MacOS

Using macOS Ventura 13.0.1 and following the manual installation instructions, I get the following error message:

Error detected while processing /Users/adam/.config/nvim/plugin/vim-ai/autoload/vim_ai.vim:
line    1:
E117: Unknown function: vim_ai_config#load
Press ENTER or type command to continue
