
🦙 nvim-llama

Ollama interfaces for Neovim: get up and running with large language models locally in Neovim.

(demo video: nvim-llama.mp4)

๐Ÿ—๏ธ ๐Ÿ‘ท Warning! Under active development!! ๐Ÿ‘ท ๐Ÿšง

Requirements

Docker is required to use nvim-llama.

And that's it! All models and clients run from within Docker, which provides the chat interface and functionality. This agnostic approach works on macOS, Linux, and Windows.

Installation

Use your favorite package manager to install the plugin:

Packer

use 'jpmcb/nvim-llama'

lazy.nvim

{
    'jpmcb/nvim-llama'
}

vim-plug

Plug 'jpmcb/nvim-llama'

Setup & configuration

In your init.lua (or a lua block in init.vim), set up the plugin:

require('nvim-llama').setup {}

You can provide the following optional configuration table to the setup function:

local defaults = {
    -- See plugin debugging logs
    debug = false,

    -- The model for ollama to use. This model will be automatically downloaded.
    model = "llama2",
}
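For example, to enable debug logging and run Mistral instead of the default model, the setup call might look like this (a sketch based on the options above):

```lua
-- Example nvim-llama setup with non-default options.
require('nvim-llama').setup {
  -- Print plugin debugging logs
  debug = true,

  -- Use the Mistral model; it will be downloaded automatically
  model = "mistral",
}
```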

Model library

Ollama supports an incredible number of open-source models, available at ollama.ai/library.

Check out their docs to learn more: https://github.com/jmorganca/ollama


When the model setting is configured, the specified model will be downloaded automatically:

| Model              | Parameters | Size  | Model setting                |
| ------------------ | ---------- | ----- | ---------------------------- |
| Neural Chat        | 7B         | 4.1GB | `model = "neural-chat"`      |
| Starling           | 7B         | 4.1GB | `model = "starling-lm"`      |
| Mistral            | 7B         | 4.1GB | `model = "mistral"`          |
| Llama 2            | 7B         | 3.8GB | `model = "llama2"`           |
| Code Llama         | 7B         | 3.8GB | `model = "codellama"`        |
| Llama 2 Uncensored | 7B         | 3.8GB | `model = "llama2-uncensored"`|
| Llama 2 13B        | 13B        | 7.3GB | `model = "llama2:13b"`       |
| Llama 2 70B        | 70B        | 39GB  | `model = "llama2:70b"`       |
| Orca Mini          | 3B         | 1.9GB | `model = "orca-mini"`        |
| Vicuna             | 7B         | 3.8GB | `model = "vicuna"`           |

Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models. 70B parameter models require upwards of 64 GB of RAM (if not more).

Usage

The :Llama command opens a terminal window where you can start chatting with your LLM.

To exit terminal mode, which by default locks focus to the terminal buffer, use the binding Ctrl-\ Ctrl-n.
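If you find the default escape sequence awkward, you can map something shorter yourself. For example (a personal mapping sketch, not something the plugin sets up for you):

```lua
-- Map <Esc> in terminal mode to leave terminal mode, so you can
-- get out of the :Llama chat buffer with a single keypress.
vim.keymap.set('t', '<Esc>', [[<C-\><C-n>]], { noremap = true })
```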


nvim-llama's Issues

Bug: Surface ollama on a different port

I had some trouble getting this plugin running because it uses port 11434, and that port was already being used by my local copy of the ollama service.

The error from docker, of course, wasn't very specific:
(screenshot of the docker error)

My workaround was to stop ollama with systemctl, then start neovim and run the :Llama command. It would be nice to make the port configurable, or at least to check that the port isn't already in use.
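One possible shape for this, sketched here purely as a suggestion (a `port` option does not exist in nvim-llama today), would be to make the host port part of the setup table:

```lua
-- Hypothetical: a configurable host port, which the plugin would pass
-- through to `docker run -p <port>:11434 ...` when starting the container.
require('nvim-llama').setup {
  model = "llama2",
  port = 11435,  -- avoid clashing with a host-local ollama on 11434
}
```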

new error

Error checking docker status: ...local/share/nvim/lazy/nvim-llama/lua/nvim-llama/init.lua:47: Docker is
not running.

After running the plugin, the above error appeared. Any ideas? I can't seem to find anything about it out there.

Home env/Docker detection on windows

Since HOME is not set by default on Windows, nvim running ollama.lua doesn't find the HOME variable and fails to run.

The error shown is this one:
(screenshots of the error)
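A portable fix, sketched here as a suggestion, would be to resolve the home directory through libuv instead of relying on the HOME variable:

```lua
-- vim.loop.os_homedir() resolves the home directory on Windows
-- (USERPROFILE) as well as on macOS/Linux (HOME), so no single
-- environment variable is assumed.
local home = vim.loop.os_homedir()

-- Fall back to the environment variables just in case.
home = home or os.getenv('HOME') or os.getenv('USERPROFILE')
```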

feature: Support remote ollama service hosts

nvim-llama should support using a remote ollama host. Ollama supports this out of the box with an OLLAMA_HOST env variable, but since we use docker, we'll need to inspect this manually and pass it to the docker client container, or add it as an optional setting.
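A minimal sketch of the env-variable half of this idea (the docker wiring here is hypothetical, for illustration only):

```lua
-- Respect OLLAMA_HOST if the user has one set; otherwise fall back
-- to the local container's default address.
local ollama_host = os.getenv('OLLAMA_HOST') or '127.0.0.1:11434'

-- Hypothetical: forward the address into the client container so the
-- chat client talks to the remote ollama instead of a local one.
local cmd = string.format('docker run -e OLLAMA_HOST=%s ...', ollama_host)
```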

Merging nvim-llama with llm.nvim

Hi, I wanted to let you know that https://github.com/gsuuon/llm.nvim just got support for llamacpp (I made the PR for it).
Maybe it would be a good idea to merge and move your work into llm.nvim, rather than creating a new plugin? That way, things could move faster.
I made a post on reddit about it: https://www.reddit.com/r/neovim/comments/16trc61/llamacpp_support_now_added_to_llmnvim/

And here is an example configuration that uses llm.nvim:
https://github.com/JoseConseco/nvim_config/blob/master/lua/nv_llm-nvim/init.lua

Features: docker control

I might have missed something, but from what I can tell, once I run :Llama the container stays running and can't be stopped from neovim without manually running docker commands.

Surface some of the internals:

  • Stop
  • Restart
  • Cleanup (removes the docker container, maybe even cleans up its storage)

Also, closing the terminal buffer should probably at least stop the container.

If I open neovim, do a thing, then close neovim, I would expect to get all the resources (CPU and RAM) back, and I would expect that no processes started from neovim are still running. (Maybe this should be broken out as a bug?)
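The stop-on-exit piece could be sketched like this (the container name `nvim-llama` is an assumption for illustration, not the plugin's actual container name):

```lua
-- Stop the plugin's container when Neovim exits, so no stray
-- processes keep holding CPU/RAM after the editor is closed.
vim.api.nvim_create_autocmd('VimLeavePre', {
  callback = function()
    -- 'nvim-llama' is a hypothetical container name.
    vim.fn.system({ 'docker', 'stop', 'nvim-llama' })
  end,
})
```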

Okay so... How do you use it?

I've installed it. I've required it. I've set it up... What do I do now? Is there some command in the editor window I should use?

You should give example usage in your README.md. I've looked at the source code and found nothing obvious.
