
Comments (13)

bakks commented on August 22, 2024

@tomlue Thanks for this suggestion! Can you say more about the multiline paste not working? This is controlled by the shell, and in some cases it works for me, e.g. I can do something like butterfish prompt "<paste>" and the shell will extend down several lines, though I think more esoteric characters can break this.
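
For example, in bash or zsh an unclosed quote simply continues onto the next line:

butterfish prompt "this prompt starts here
and continues on a second line inside the same argument"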

tomlue commented on August 22, 2024

I think it will be difficult to get a solution that works in every shell. One immediate problem is just escaping quotes. The vipe | butterfish prompt approach works really well and almost makes this unnecessary, although with vipe the context of the text written into the editor is lost in the butterfish shell.
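
For example, pasted text that itself contains double quotes breaks the quoting:

butterfish prompt "summarize: he said "hello world" yesterday"
# the inner quotes close the string early, so the shell drops them and
# splits the pasted text across multiple arguments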

Maybe there are better feature requests though. I will keep thinking.

bakks commented on August 22, 2024

Got it, yep quotes may present a problem. My initial thought is that the best pattern is to edit a local file and then pipe that file in, e.g.

vim context.txt                   # build up the context in an editor
butterfish prompt < context.txt   # pipe the file in as the prompt
butterfish prompt "here is a prompt that uses the context" < context.txt

In some ways I think that is simpler behavior than popping an editor open but I'm curious about your opinion on the pros/cons.

tomlue commented on August 22, 2024

The vipe approach works pretty well for me; I'm satisfied enough with it that you can close this issue if you like. On Ubuntu:

apt-get install moreutils              # provides vipe
alias bpc="vipe | butterfish prompt"   # edit the prompt in $EDITOR, then send it

The bpc alias will then drop you into a command-line text editor and feed the result to butterfish prompt.

bakks commented on August 22, 2024

Makes sense but I want to make sure I understand your workflow - is that something you would use frequently or only if you're having trouble with a multiline prompt or some other weirdly shaped prompt?

tomlue commented on August 22, 2024

It is something I find myself using very frequently. Often I am constructing prompts where I do something like:

  1. run some code I'm working on
  2. get a stack trace
  3. copy the stack trace to clipboard
  4. run vipe | butterfish prompt
  5. create a prompt like "can you help me with this stacktrace? {paste stacktrace} 'it came from this code' {paste code}"

In those scenarios some multistep text editing and copy/paste is happening.
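
In practice that looks something like this (a sketch; the script name and xclip are just stand-ins):

python train.py 2> trace.txt            # steps 1-2: run the code, capture the stack trace
xclip -selection clipboard < trace.txt  # step 3: copy the trace to the clipboard
vipe | butterfish prompt                # steps 4-5: compose the prompt in $EDITOR and send it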

A separate feature request might be to parse the prompt for things that look like file paths, cat each path, and add the contents to the prompt. That way you could just copy and paste a stack trace into butterfish and it could handle the rest. This would give butterfish a capability that the ChatGPT web app doesn't have, namely reading local file paths to build custom prompts.
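
Something like this naive shell sketch (expand_paths is a hypothetical helper; real path detection would need to handle punctuation and quoting):

# print the prompt, then append the contents of any token that is an existing file
expand_paths() {
  printf '%s\n' "$1"
  for word in $1; do                # naive whitespace split
    if [ -f "$word" ]; then
      printf '\n--- %s ---\n' "$word"
      cat "$word"
    fi
  done
}

expand_paths "why does ./server.go hang on startup?" | butterfish prompt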

bakks commented on August 22, 2024

Ok, I think I see. Please try out Shell Mode and see if that works for you; I use it for that kind of workflow really frequently. Basically, the shell history becomes the prompt context. I'm really curious whether it solves your problem, so please give feedback if it doesn't quite work.

Here's an example:

[Screenshot: a Shell Mode session, 2023-08-25]
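
Roughly, the interaction goes like this (a sketch, not exact output):

butterfish shell            # wraps your existing shell
$ make test                 # run commands as usual; their output becomes LLM context
$ Why did that test fail?   # a line starting with a capital letter is sent as a prompt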

2 more thoughts:

  • The common problem with the pattern above (and with working with code in general) is that you fill up the context window pretty quickly. One way I've been working around that is this: https://github.com/bakks/tako. It's pretty underdeveloped, but the basic idea is that you can pull meaningful pieces out of code files to fit into the context window, for example fetching a specific function by its name.
  • I really like the idea of being able to paste in a stack trace or other text that has file paths and have those paths pulled open automatically; let me think more about that.

tomlue commented on August 22, 2024

cat-ing the file so that it gets added to the shell context mostly works. It does seem like butterfish sometimes loses context, though. I guess it keeps only the most recent context?

I agree, thought 2 seems like a nice feature, and something butterfish could do that the ChatGPT web app can't.

Using tree-sitter to parse code also seems smart. Something like:

  1. Parse all the code files with tree-sitter and extract expressions.
  2. Embed the expressions in a vector store.
  3. Embed each incoming prompt and use the vector store to find the most relevant expressions.
  4. Build a new prompt from the relevant expressions.

You start wanting a local LLM for that, though, because it's a lot of embeddings. That's another feature request (maybe better filed separately on the issue list): we should be able to use LLaMA 2 or other LLMs.

As for the terminal context getting too long, you could also chunk the terminal context and apply the same vector store approach.

It would be interesting to keep a local vector store running with access to a wider context; plugins for email and the like would be handy.

bakks commented on August 22, 2024

> cat-ing the file so that it gets added to the shell context mostly works. It does seem like butterfish sometimes loses context, though. I guess it keeps only the most recent context?

Yeah, it fits as much history into the context window as it can, but it truncates specific line items (like a specific command's output) as a strategy to manage this; plus, the context window fills up pretty quickly if a lot of output is printed. So the history is a quick way to build context.

If you want to see what's actually going into the prompt, you can run shell mode with the -v flag, e.g. butterfish shell -v, and then watch the log file; that can be useful for telling whether you're fitting what you want into the prompt.
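
For example (the log path here is an assumption; check where your install actually writes it):

butterfish shell -v                # verbose: log the full LLM prompts
tail -f /var/tmp/butterfish.log    # watch the log from another terminal; path may differ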

I'm less bullish on the vector store / RAG approach because 1) I think that strategy is more effective for written text than for code / terminal commands, and 2) like all search mechanisms it isn't guaranteed to surface exactly what you need, and unless you show the vector store response to the user it's really hard to know whether bad results come from the LLM or from the vector store.

The question I've been pondering lately is how much coding-agent stuff I want to try putting into Butterfish, and I've pretty much decided that it doesn't make sense to go very far down that road, because I think the best stuff will be partly built into the editor (like Sourcegraph Cody or cursor.so).

So here's what I'm concluding: I think the shell history is a good/cheap/stupid/simple way to manage context in most cases, but one good addition might be a command like butterfish promptedit, which opens your command-line editor on a buffer that is then sent as the prompt. I don't want to do this on butterfish prompt itself because that might be confusing if someone didn't know about the functionality and forgot to pass in a prompt, or if the stdin pipe didn't work. So, two final questions:

  • Do you think that would be useful / would you use that immediately?
  • Would you want that to always open up the same file/buffer every time you run it or should it be a new buffer every time?

Sorry if this is a lot of back and forth, but it's super helpful for me to understand how people are using this tool and what features might be good. Thanks for your help!

tomlue commented on August 22, 2024

If you built that I would use it rather than my bpc alias (vipe | butterfish prompt), which I use very frequently.

I commonly use it to copy in bits of code. Enabling references to expressions/paths in the prompts would maybe be a better way of doing this.

bakks commented on August 22, 2024

Added this command, will release soon. Note that this isn't doing anything to deal with a buffer that extends past the token limit of the model you're using.

butterfish promptedit --help
Usage: butterfish promptedit

Like the prompt command, but this opens a local file with your
default editor (set with the EDITOR env var) that will then be
passed as a prompt in the LLM call.

Flags:
  -h, --help                     Show context-sensitive help.
  -v, --verbose                  Verbose mode, prints full LLM
                                 prompts (sometimes to log file).
                                 Use multiple times for more
                                 verbosity, e.g. -vv.
  -V, --version                  Print version information and
                                 exit.

  -f, --file="~/.config/butterfish/prompt.txt"
                                 Cached prompt file to use.
  -e, --editor=""                Editor to use for the prompt.
  -m, --model="gpt-3.5-turbo"    GPT model to use for the prompt.
  -n, --num-tokens=1024          Maximum number of tokens to
                                 generate.
  -T, --temperature=0.7          Temperature to use for the prompt,
                                 higher temperature indicates more
                                 freedom/randomness when generating
                                 each token.
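
For example, with the defaults shown above:

export EDITOR=vim
butterfish promptedit    # opens ~/.config/butterfish/prompt.txt in vim, then
                         # sends the saved buffer as the prompt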

bakks commented on August 22, 2024

OK! This is deployed now in v0.1.8; please give it a try. What I did for myself is add a line like this to ~/.zshrc:

export EDITOR=nvim

tomlue commented on August 22, 2024

works for me, very nice!
