
butterfish's Introduction

Dotfiles for Peter Bakkum

https://github.com/bakks/bakks

Configured for Dvorak keyboard layout.

Current setup:

  • Terminal: kitty
  • Shell: zsh
  • Window manager: tmux
  • Editor: neovim
# Install Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Brew packages
# node@16 is necessary because the GitHub Copilot vim plugin currently requires node 12.x-17.x
brew install node@16 npm tmux nvim fzf go yarn git gh htop reattach-to-user-namespace entr coreutils wget kitty grc
brew install homebrew/cask-fonts/font-hack-nerd-font
brew install bakks/bakks/poptop bakks/bakks/butterfish
go install github.com/boyter/[email protected]

# npm packages
npm install -g typescript typescript-language-server pyright prettier

gh auth login

# Populate the homedir with this repo's contents
cd ~
gh repo clone bakks/bakks
rsync -a bakks/ ./
rm -rf bakks/
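
Since the rsync step above overwrites files in the home directory, a cautious variant might back up existing dotfiles first. This is only a sketch; the file list and backup directory name are illustrative, not part of the repo:

```shell
# Back up a few existing dotfiles before letting rsync overwrite them.
# The file list and backup directory are illustrative choices.
mkdir -p ~/dotfiles-backup
for f in .zshrc .tmux.conf .config/nvim/init.vim; do
  if [ -e ~/"$f" ]; then
    mkdir -p ~/dotfiles-backup/"$(dirname "$f")"
    cp ~/"$f" ~/dotfiles-backup/"$f"
  fi
done
```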

To do

  • Moving the cursor left/right while in command mode (e.g. when renaming via cR)
  • Move to next or previous LSP error
  • Add karabiner config info here

Keyboard Cheat Sheet

kitty

C-, : Open kitty config

C-⌘-, : Reload kitty config

⌥-⌘-, : Show current kitty config

zsh

C-t : Previous history item

C-n : Next history item

C-h : Cursor left

C-s : Cursor right
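
For reference, bindings like these would typically be declared in ~/.zshrc with bindkey. The widget names below are assumptions based on the cheat sheet, not copied from the repo:

```
# Hypothetical ~/.zshrc fragment matching the bindings above
# (standard zsh widgets assumed; the actual config may differ)
bindkey '^T' up-line-or-history     # C-t : previous history item
bindkey '^N' down-line-or-history   # C-n : next history item
bindkey '^H' backward-char          # C-h : cursor left
bindkey '^S' forward-char           # C-s : cursor right
```

Note that C-s is ordinarily intercepted by terminal flow control, so a setup like this usually also needs `unsetopt flow_control` or `stty -ixon`.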

tmux

Normal Mode

C-b : Next pane

C-x : Previous pane

C-g : New pane split horizontally

C-d g : New pane split vertically

C-f : Next pane layout

C-l : Kill current pane

C-d h : Resize pane left

C-d t : Resize pane down

C-d n : Resize pane up

C-d s : Resize pane right

C-d d : Swap pane forward

C-d D : Swap pane backward

C-d r : Reload tmux configuration

C-u : Enter copy mode
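
A few of the bindings above could be expressed in ~/.tmux.conf roughly as follows. This is a hedged sketch using root-table (`-n`, prefix-free) bindings; the repo's actual config may differ:

```
# Hypothetical ~/.tmux.conf fragment for some of the bindings above
bind-key -n C-b select-pane -t :.+    # next pane
bind-key -n C-x select-pane -t :.-    # previous pane
bind-key -n C-g split-window -h       # new pane split horizontally
bind-key -n C-f next-layout           # next pane layout
bind-key -n C-l kill-pane             # kill current pane
bind-key -n C-u copy-mode             # enter copy mode
```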

Copy mode

Allows you to manipulate the text of a tmux pane.

H : Cursor left

T : Cursor down

N : Cursor up

S : Cursor right

t : Scroll down

n : Scroll up

C-t : Page down

C-n : Page up

V / C-v : Select text

y / Y : Copy selected text

neovim

Normal mode

h : Cursor left

t : Cursor down

n : Cursor up

s : Cursor right

C-h : Cursor left x12

C-t : Cursor down x8

C-n : Cursor up x8

C-s : Cursor right x12

j : Jump to top of file

q : Jump to bottom of file

u : Undo

U : Redo

. : Repeat last action

dd : Delete current line

i : Enter insert mode

I : Enter insert mode at beginning of line

k : Write file

r : Insert a single character at current location

C-r : Reload file

; : Reload vim config

o : Add newline and enter insert mode

e : Delete newline at end of current line

m : Delete selected text and enter insert mode

C-a : Jump to beginning of line

C-e : Jump to end of line

T : Next search result

N : Previous search result

- : Delete trailing whitespace throughout file

= : Grep in local directory

C-p : Open fzf file finder

P : Open file tree (nvim-tree.lua)

zR : Unfold all

[0-9]B : Go to numbered tab

B : Next tab

M : Previous tab

Q : Quit file

LSP / Autocompletion

cd : Go to definition

cD : Go to type definition

ck : Show definition in hover window

ci : Go to implementation

cr : Show references

cR : Rename current symbol

tab : Select next autocompletion result

nvim-tree

o : Open selected file

O : Open file in new tab

Tab : Preview file

t / C-t : Next file

C-n : Go to parent

s : Open file with system default program

R : Refresh file tree

a : New file

r : Rename

y : Copy name

Y : Copy path

gy : Copy absolute path


butterfish's Issues

Bugs with text in bash/gnome terminal

It's soon 6 in the morning after at least a 24h session 💥 let's hope I missed something and there's no bug.

I have tested with vanilla bash and with "oh my bash", using the default terminal app, both as a normal user and elevated, with the same behaviour, on a pretty fresh Pop!_OS (Ubuntu 22). Maybe it's my terminal settings, but I reset them to defaults to make sure; env TERM=xterm-256color with unicode.
Anyway, there are a few glitches, mainly when using the Home/End keys, but from time to time the arrow keys too, and especially when jumping words with Ctrl. If it's my settings, don't feel ashamed to close this ticket 😄

Great tool btw, really useful with or without these minor issues.

Thank you and good night.

bug

getting this: Error: [429:insufficient_quota] You exceeded your current quota, please check your plan and billing details.

So I'm getting:

Error: [429:insufficient_quota] You exceeded your current quota, please check your plan and billing details.

And I'm not the only one; in the Reddit announcement of this wonderful tool, someone else is also getting that error:
https://www.reddit.com/r/commandline/comments/11ystyq/butterfish_a_transparent_shell_wrapper_with_gpt/

Is this because I have to upgrade to ChatGPT Plus, i.e. their $20/month plan, or is it something else?

Feature idea: Code review by GPT

Hi 👋 , I stumbled upon your repo on https://news.ycombinator.com/item?id=35994037, and liked how integrated it was and some advanced features that most other simple CLIs don't have, like the indexing + search. 👍

Some existing tools do things like generating commit messages and having GPT review their code. This is entirely possible with the composable tools, or a common prompt + git diff etc., but I think the convenience of having an extra command that takes care of that "boilerplate" can bring some value.

So in this issue I'm proposing to add a PR review command.

Some inspiration could be taken from https://github.com/zackproser/automations/blob/310f5343a2a8a2506fe09ade972c71c367770141/autoreview/autoreview.sh

LLM prompts get interpreted as commands with zsh/prezto

Hi - just started using this tool and absolutely love it!

It seems to have a minor issue working with prezto and zsh. The prompts (commands starting with uppercase) are interpreted as regular shell commands with that plugin enabled.

I know it's prezto because:

  1. It's happening on both Mac and Ubuntu
  2. On both these systems it's working fine with bash and stock zsh
mymachine ~ ❯❯❯ 🐠 Is this thing on?
zsh: no matches found: on?
mymachine ~ ❯❯❯ 🐠 echo $PS1
%{%}${SSH_TTY:+"%F{9}%n%f%F{7}@%f%F{3}%m%f "}%F{4}${_prompt_sorin_pwd}%(!. %B%F{1}#%f%b.)${editor_info[keymap]} 🐠%{ %?%}

Is there an easy way to fix this? Happy to provide more debugging info if needed - thank you!

getting into a strange state after modifying zshrc

I started the butterfish shell and the little fish shows up. Then I went to add some butterfish-related aliases in my zshrc, and the fish goes away after source ~/.zshrc. Does the presence or absence of the fish emoji indicate the state of the wrapper?
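
Regarding the wrapper-state question: later in this same session, Butterfish reports detecting an existing wrapper via a BUTTERFISH_SHELL environment variable, so one explicit way to check (a sketch, not an official butterfish feature) would be:

```shell
# Check whether the current shell is already wrapped by butterfish.
# BUTTERFISH_SHELL is the variable named in butterfish's own
# "cannot wrap shell again" message; its exact value is unspecified here.
if [ -n "${BUTTERFISH_SHELL:-}" ]; then
  echo "wrapped"
else
  echo "not wrapped"
fi
```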

➜  code brew install bakks/bakks/butterfish && butterfish shell
Running `brew update --auto-update`...
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core and homebrew/cask).
==> New Formulae
ansible@7              fastgron               libint                 openfga                tern
apko                   git-credential-oauth   libomemo-c             procps@3               votca
aws-amplify            joshuto                libpaho-mqtt           shodan                 wzprof
bashate                libecpint              melange                spotify_player         xbyak
ddns-go                libfastjson            nexttrace              swift-outdated
==> New Casks
chatbox                dintch                 eusamanager            lasso                  processmonitor
copilot                engine-dj              filemonitor            loupedeck              tea
craft                  eu                     firefly-shimmer        motu-m-series          yealink-meeting

You have 35 outdated formulae and 2 outdated casks installed.

Warning: bakks/bakks/butterfish 0.0.31 is already installed and up-to-date.
To reinstall 0.0.31, run:
  brew reinstall butterfish
Logging to /var/tmp/butterfish.log

➜  code export PS1="$PS1🐠 "
➜  code 🐠
➜  code 🐠 butterfish prompt "please add `alias bf="butterfish"` to my zshrc file"
I'm sorry, but I cannot add anything to your zshrc file as I am an AI language model and do not have access to your computer's file system. However, I can provide you with instructions on how to edit your zshrc file.

To edit your zshrc file, follow these steps:

1. Open your terminal.
2. Type `nano ~/.zshrc` and press Enter. This will open your zshrc file in the Nano text editor.
3. Make the necessary changes to your zshrc^C
➜  code 🐠 code ~/.zshrc
➜  code 🐠 source ~/.zshrc
➜  code bf
Usage: butterfish <command>

Do useful things with LLMs from the command line, with a bent towards software engineering.

Butterfish is a command line tool for working with LLMs. It has two modes: CLI command mode, used to prompt LLMs,
summarize files, and manage embeddings, and Shell mode: Wraps your local shell to provide easy prompting and
autocomplete.

Butterfish stores an OpenAI auth token at ~/.config/butterfish/butterfish.env and the prompt wrappers it uses at
~/.config/butterfish/prompts.yaml.

To print the full prompts and responses from the OpenAI API, use the --verbose flag. Support can be found at
https://github.com/bakks/butterfish.

If you don't have OpenAI free credits then you'll need a subscription and you'll need to pay for OpenAI API use. If
you're using Shell Mode, autosuggest will probably be the most expensive part. You can reduce spend here by disabling
shell autosuggest (-A) or increasing the autosuggest timeout (e.g. -t 2000). See "butterfish shell --help".

v0.0.31 darwin arm64 (commit 8cc7f94) (built 2023-04-21T02:11:22Z) MIT License - Copyright (c) 2023 Peter Bakkum

Flags:
  -h, --help       Show context-sensitive help.
  -v, --verbose    Verbose mode, prints full LLM prompts.

Commands:
  shell
    Start the Butterfish shell wrapper. This wraps your existing shell, giving you access to LLM prompting by
    starting your command with a capital letter. LLM calls include prior shell context. This is great for keeping a
    chat-like terminal open, sending written prompts, debugging commands, and iterating on past actions.

    Use:

      - Type a normal command, like 'ls -l' and press enter to execute it

      - Start a command with a capital letter to send it to GPT, like 'How do I find local .py files?'

      - Autosuggest will print command completions, press tab to fill them in

      - Type 'Status' to show the current Butterfish configuration

      - GPT will be able to see your shell history, so you can ask contextual questions like 'why didn't my last
        command work?'

        Here are special Butterfish commands:

      - Status : Show the current Butterfish configuration

      - Help : Give hints about usage

    If you don't have OpenAI free credits then you'll need a subscription and you'll need to pay for OpenAI API use.
    If you're using Shell Mode, autosuggest will probably be the most expensive part. You can reduce spend here by
    disabling shell autosuggest (-A) or increasing the autosuggest timeout (e.g. -t 2000).

  prompt [<prompt> ...]
    Run an LLM prompt without wrapping, stream results back. This is a straight-through call to the LLM from the
    command line with a given prompt. This accepts piped input, if there is both piped input and a prompt then they
    will be concatenated together (prompt first). It is recommended that you wrap the prompt with quotes. The default
    GPT model is gpt-3.5-turbo.

  summarize [<files> ...]
    Semantically summarize a list of files (or piped input). We read in the file, if it is short then we hand it
    directly to the LLM and ask for a summary. If it is longer then we break it into chunks and ask for a list of
    facts from each chunk (max 8 chunks), then concatenate facts and ask GPT for an overall summary.

  gencmd <prompt> ...
    Generate a shell command from a prompt, i.e. pass in what you want, a shell command will be generated. Accepts
    piped input. You can use the -f command to execute it sight-unseen.

  rewrite <prompt>
    Rewrite a file using a prompt, must specify either a file path or provide piped input, and can output to stdout,
    output to a given file, or edit the input file in-place. This command uses the OpenAI edit API rather than the
    completion API.

  exec [<command> ...]
    Execute a command and try to debug problems. The command can either passed in or in the command register (if you
    have run gencmd in Console Mode).

  index [<paths> ...]
    Recursively index the current directory using embeddings. This will read each file, split it into chunks,
    embed the chunks, and write a .butterfish_index file to each directory caching the embeddings. If you re-run this
    it will skip over previously embedded files unless you force a re-index. This implements an exponential backoff
    if you hit OpenAI API rate limits.

  clearindex [<paths> ...]
    Clear paths from the index, both from the in-memory index (if in Console Mode) and to delete .butterfish_index
    files. Defaults to loading from the current directory but allows you to pass in paths to load.

  loadindex [<paths> ...]
    Load paths into the index. This is specifically for Console Mode when you want to load a set of cached indexes
    into memory. Defaults to loading from the current directory but allows you to pass in paths to load.

  showindex [<paths> ...]
    Show which files are present in the loaded index. You can pass in a path but it defaults to the current
    directory.

  indexsearch <query>
    Search embedding index and return relevant file snippets. This uses the embedding API to embed the search string,
    then does a brute-force cosine similarity against every indexed chunk of text, returning those chunks and their
    scores.

  indexquestion <question>
    Ask a question using the embeddings index. This fetches text snippets from the index and passes them to the LLM
    to generate an answer, thus you need to run the index command first.

Run "butterfish <command> --help" for more information on a command.

butterfish: error: expected one of "shell",  "prompt",  "summarize",  "gencmd",  "rewrite",  ...
➜  code code ~/.zshrc
➜  code source ~/.zshrc
➜  code source ~/.zshrc
➜  code bfs
Logging to /var/tmp/butterfish.log
Butterfish shell is already running, cannot wrap shell again (detected with BUTTERFISH_SHELL env var).
➜  code bfp
[... identical usage output omitted ...]

butterfish: error: unknown flag -p, did you mean one of "-h", "-v"?
➜  code bfp "what's going on"
[... identical usage output omitted ...]

butterfish: error: unknown flag -p, did you mean one of "-h", "-v"?

Bug: shell and indexquestion do not work on nixos

Here is an example of running butterfish shell. Occasionally words like "status" or "--help" will pop up in gray, and you cannot exit the program without closing the terminal.
[screenshot of the shell session]
The command was run on alacritty with bash, but the same thing happens on wezterm or using other shells.

Here is an example of running butterfish indexquestion "test":
[screenshot of the error output]
All of these commands somewhat work:

butterfish index
butterfish indexsearch
butterfish loadindex

but butterfish indexquestion does not

Question: Is this compatible with atuin

In the docs it says that butterfish "manages my shell history". I'm using https://github.com/atuinsh/atuin to sync my shell history (actually discovered it via the llmshellautocomplete project which leverages it to do something that appears much hackier than your project).

Atuin's main "magic" is storing the shell history in a SQLite database and then implementing search, sync, etc. on top of that. I quite like it since I'm often server-hopping.

Is it possible to use the two concurrently? And if not, off the top of your head, what would be the main steps I'd have to take to contribute compatibility?

Feature Request: expanded embedding capabilities

Butterfish is supercool. I'd like expanded embedding capabilities. Primarily, I'd like to see:

  • More robust searching capabilities
  • Ability to query my index remotely
  • Ability to integrate index with other software

Potential Approaches:

If this feature is prioritized, there are likely numerous approaches to implementation. A few I've thought of:

  • Pinecone as an alternative vector index & query engine: Provide a means to switch from using local indexing and brute force querying to Pinecone. Perhaps through environment variables or a config file.
  • Ability to sync local index up to Pinecone: Provide a command like butterfish indexsync . to upsert / delete embeddings in a local index to Pinecone.
  • VectorDB Abstraction/Plugins: Provide an extension point where multiple vector database implementations could be built, and used based on user configuration. In this way you eventually support local, pinecone, milvus, weaviate, etc.
  • Implement efficient search algorithms and remote querying: Keep a local-only solution, but implement more efficient search algorithms, and implement a way to query your local index remotely.

Linux Install Method Not Working

[~/tmp]$ go version
go version go1.18.1 linux/amd64

[~/tmp]$ go get github.com/bakks/butterfish/cmd/butterfish

go: go.mod file not found in current directory or any parent directory.
        'go get' is no longer supported outside a module.
        To build and install a command, use 'go install' with a version,
        like 'go install example.com/cmd@latest'
        For more information, see https://golang.org/doc/go-get-install-deprecation
        or run 'go help get' or 'go help install'.
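
The error above suggests the fix itself: since Go 1.17, installing a command requires a module-aware `go install` with an explicit version suffix rather than `go get`. Assuming the command package lives at the path used in the issue, that would be:

```
# 'go get' is deprecated for installing binaries; use 'go install'
# with a version suffix (path taken from the issue above, @latest assumed)
go install github.com/bakks/butterfish/cmd/butterfish@latest
```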

Feature Request: fish shell support

The goal of this project seems to align closely with fish shell's mission. In fact, the similarity in name and intent immediately made me look for an existing fisher plugin implementation of butterfish.

User Story

As an avid fan of fish shell, I would like to use butterfish, so that I can benefit from an even more friendly interactive shell.

Feature Request: Safe Retrieval Plugin

I'd like for ChatGPT to be able to directly use my local embeddings index, without giving it unfettered access to run commands on my machine.

Provide a command that starts a chatgpt retrieval plugin.

Feature Request: Enhanced Multiline and Clipboard Input in `butterfish prompt`

Problem:
When pasting multiline content from the clipboard into butterfish prompt, it doesn't process correctly.

Proposed Solution:
Allow butterfish prompt to recognize when invoked without any arguments, and then automatically open the user's default text editor (or fallback to vim) for input. This behavior would mirror the functionality seen with:

apt-get install moreutils
vipe | butterfish prompt

Benefit Over Existing Solutions:
Using the vipe approach, the context written in the text editor disappears in the terminal once the input is submitted. With a custom method, this context could be retained internally by butterfish, allowing users to reference it later without cluttering the terminal output.
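
Until something like this is built in, a small shell function can approximate the proposed behavior. The name `bfedit` and the temp-file handling below are illustrative, not part of butterfish:

```shell
# Open $EDITOR (fallback: vim) for multiline input, then pipe the
# saved text to `butterfish prompt`. The name `bfedit` is made up.
bfedit() {
  local tmp
  tmp=$(mktemp) || return 1
  "${EDITOR:-vim}" "$tmp"
  butterfish prompt < "$tmp"
  rm -f "$tmp"
}
```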

Better examples in the README here on github

I'm a reasonably proficient user of the shell, and the man page with the listing of flags is good. However, some concrete examples of the following would be nice, at least to me, and I'm probably not the only one:

  • How do I get butterfish to use previous instructions and answers? I would think the answer is to use shell mode.
  • The yaml idea is neat, but how do you use it? For some this might be obvious; it isn't for me, and I'm not a newbie.
  • Examples of how to use the index to generate responses based on local text. This is really exciting, and although it seems possible to infer how to do it from the readme, I think an example could spell it out in a way that would highlight this extremely useful feature.

Hope you find the input useful, let me know how I can assist you.

Prompt mode context

Prompting might lead the model to ask whether it should explain or demonstrate further. Answering "yes" in a follow-up prompt will not yield an explanation or demonstration, since the model treats the user's response as a brand-new prompt.
