jpmcb / nvim-llama
🦙 Ollama interfaces for Neovim
License: MIT License
AMD GPUs are now supported in Ollama: https://ollama.com/blog/amd-preview
It'd be great to surface that, which requires running the ROCm-tagged container.
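A minimal sketch of how the plugin might launch the ROCm-tagged image instead of the default one. The `ollama/ollama:rocm` tag and the `/dev/kfd` + `/dev/dri` device flags come from Ollama's AMD documentation; the container name and how nvim-llama actually builds its Docker command are assumptions:

```lua
-- Hypothetical sketch: start the ROCm-tagged Ollama container from Lua.
-- The image tag and the /dev/kfd + /dev/dri device flags come from Ollama's
-- AMD docs; the container name "nvim-llama" is an assumption.
local cmd = {
  "docker", "run", "-d",
  "--device", "/dev/kfd",
  "--device", "/dev/dri",
  "-v", "ollama:/root/.ollama",
  "-p", "11434:11434",
  "--name", "nvim-llama",
  "ollama/ollama:rocm",
}
vim.fn.jobstart(cmd, {
  on_exit = function(_, code)
    if code ~= 0 then
      vim.notify("Failed to start the ROCm Ollama container", vim.log.levels.ERROR)
    end
  end,
})
```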
We should look at how we can also support llamafiles: https://github.com/Mozilla-Ocho/llamafile
This would be another great paradigm for running LLMs within Neovim.
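As a rough illustration of that paradigm: a llamafile is a single self-contained executable that serves llama.cpp's HTTP API when run, so a plugin could start one directly instead of a Docker container. The file name and directory below are placeholders:

```lua
-- Hypothetical sketch: launch a llamafile as a background job instead of a
-- Docker container. "model.llamafile" and the ~/models directory are
-- placeholders; by default a llamafile serves llama.cpp's HTTP API on
-- localhost:8080, which the plugin could then talk to.
vim.fn.jobstart({ "./model.llamafile" }, {
  cwd = vim.fn.expand("~/models"),
  on_exit = function(_, code)
    vim.notify("llamafile exited with code " .. code)
  end,
})
```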
I might have missed something, but from what I can tell, once I run :Llama the container stays up and can't be stopped from Neovim without manually running docker commands.
Surface some of the internals:
If I open Neovim, do something, then close Neovim, I would expect to get all the resources (CPU and RAM) back and not have any of the processes started from Neovim still running. (Maybe this should be broken out as a bug?)
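One possible way to address this, assuming the container is started with a fixed name (here "nvim-llama", an assumption that may not match the plugin's actual naming), would be a VimLeavePre autocommand that stops it when Neovim exits:

```lua
-- Hypothetical sketch: stop the Ollama container when Neovim exits so the CPU
-- and RAM it holds are released. Assumes the container was started with
-- --name nvim-llama.
vim.api.nvim_create_autocmd("VimLeavePre", {
  callback = function()
    -- Synchronous call so the stop finishes before the Neovim process ends.
    vim.fn.system({ "docker", "stop", "nvim-llama" })
  end,
})
```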
Some good feedback from this video: https://www.youtube.com/watch?v=MzFr7iXsESs
Hi, I wanted to let you know that https://github.com/gsuuon/llm.nvim just got support for llama.cpp (I made the PR for it).
Maybe it would be a good idea to merge and move your work into llm.nvim, rather than creating a new plugin? That way, things could move faster.
I made a post on Reddit about it: https://www.reddit.com/r/neovim/comments/16trc61/llamacpp_support_now_added_to_llmnvim/
And here is an example configuration that uses llm.nvim:
https://github.com/JoseConseco/nvim_config/blob/master/lua/nv_llm-nvim/init.lua
Error checking docker status: ...local/share/nvim/lazy/nvim-llama/lua/nvim-llama/init.lua:47: Docker is not running.
After running the plugin, the above error appeared. Any ideas? I can't seem to find anything about it out there.
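For reference, a minimal sketch of the kind of check that produces this error (not the plugin's actual code): shell out to `docker info` and look at the exit code, which is non-zero whenever the Docker daemon isn't running or isn't reachable by the current user.

```lua
-- Minimal sketch of a Docker health check: `docker info` exits non-zero when
-- the daemon is not running or not reachable by the current user.
local function docker_running()
  vim.fn.system({ "docker", "info" })
  return vim.v.shell_error == 0
end

if not docker_running() then
  error("Docker is not running")
end
```

In practice the fix is usually to start the Docker daemon (or Docker Desktop) and confirm that `docker info` succeeds in a terminal as the same user.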
I had some trouble getting this plugin running because it uses port 11434, and that port was already being used by my local copy of the Ollama service.
The error from Docker, of course, wasn't very specific.
My workaround was to use systemctl to stop ollama, then start Neovim and run the :Llama command. It would be nice to make the port configurable, or at least to check that the port isn't already in use.
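A hedged sketch of what such a pre-flight check could look like; the probe and the idea of doing this before starting the container are assumptions, not current plugin behavior.

```lua
-- Hypothetical sketch: warn before starting the container if Ollama's default
-- port (11434) is already bound, e.g. by a host-level ollama service.
-- `lsof -ti :PORT` prints the PIDs bound to that port and exits non-zero when
-- the port is free (requires lsof to be installed).
local function port_in_use(port)
  vim.fn.system({ "lsof", "-ti", ":" .. port })
  return vim.v.shell_error == 0
end

if port_in_use(11434) then
  vim.notify(
    "Port 11434 is already in use; is a local ollama service running?",
    vim.log.levels.WARN
  )
end
```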
I've installed it. I've required it. I've set it up... What do I do now? Is there some command in the editor window I should use?
You should give an example of usage in your README.md. I've looked at the source code and found nothing obvious.
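For anyone landing here, the basic flow appears to be: call setup() from your plugin manager config, then run :Llama to open the chat interface backed by the Ollama container. Below is a sketch using lazy.nvim; the option names (model, debug) are assumptions and may not match the current README.

```lua
-- Hedged sketch of a lazy.nvim spec for nvim-llama. The option names below
-- are assumptions; check the plugin's README for the authoritative set.
{
  "jpmcb/nvim-llama",
  config = function()
    require("nvim-llama").setup({
      model = "llama2", -- which model Ollama should run (assumed option)
      debug = false,    -- plugin debug logging (assumed option)
    })
  end,
}
```

After that, running :Llama should open the chat window against the container.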
nvim-llama should support using a remote Ollama host: Ollama supports this out of the box with an OLLAMA_HOST env variable, but since we use Docker, we'll need to inspect this manually and pass it to the Docker client container, or add it as an optional setting.
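A hedged sketch of what that optional setting could look like. `ollama_host` is a hypothetical option name; the only established pieces here are Docker's -e flag and Ollama's OLLAMA_HOST variable.

```lua
-- Hypothetical sketch: `ollama_host` is a made-up setting. When present, the
-- plugin could skip starting a local server container and instead forward the
-- value to the client container via Docker's -e flag.
local ollama_host = os.getenv("OLLAMA_HOST") or "http://192.168.1.50:11434" -- placeholder address

local docker_args = {
  "docker", "run", "--rm",
  "-e", "OLLAMA_HOST=" .. ollama_host,
  "ollama/ollama", -- image name; the exact client invocation is an assumption
}
```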