
aquarium's People

Contributors

chris-abbott, fafrd, rikudousage


aquarium's Issues

Feature Request: Support for llama backend

With the advancements in at-home AI, it would be amazing to see support for the following backend:

https://abetlen.github.io/llama-cpp-python/

Web Server
llama-cpp-python offers a web server which aims to act as a drop-in replacement for the OpenAI API. This allows you to use llama.cpp-compatible models with any OpenAI-compatible client (language libraries, services, etc.).

To install the server package and get started:

pip install llama-cpp-python[server]
export MODEL=./models/7B
python3 -m llama_cpp.server
Navigate to http://localhost:8000/docs to see the OpenAPI documentation.

Redirecting the API URL within the source of the Go library used for OpenAI gets about this far:
[screenshot]

Then I am met with this error:

19c31e44 iteration 1: executing
panic: runtime error: slice bounds out of range [4:3]

goroutine 70 [running]:
aquarium/actor.(*Actor).iteration(0xc00018b100)
	/home/raven/aquarium/actor/actor.go:327 +0x1079
aquarium/actor.(*Actor).Loop.func2()
	/home/raven/aquarium/actor/actor.go:148 +0x68
created by aquarium/actor.(*Actor).Loop
	/home/raven/aquarium/actor/actor.go:141 +0xabb
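
For anyone else attempting this, here is a minimal sketch of the redirect, assuming aquarium uses the PullRequestInc/go-gpt3 client (suggested by the gpt3.NewClient call elsewhere in these issues) and that the client version exposes a WithBaseURL option; the option name and the /v1 path are assumptions to check against go.mod, not confirmed details of aquarium.

package main

import (
	"context"
	"fmt"
	"os"

	gpt3 "github.com/PullRequestInc/go-gpt3"
)

func main() {
	// llama-cpp-python serves an OpenAI-compatible API on port 8000 by default.
	// The local server does not check the API key, but the client still wants a value.
	client := gpt3.NewClient(
		os.Getenv("OPENAI_API_KEY"),
		gpt3.WithBaseURL("http://localhost:8000/v1"), // assumed option; points the client at the local server
	)

	resp, err := client.ChatCompletion(context.Background(), gpt3.ChatCompletionRequest{
		// the local server generally ignores the model name and uses whatever MODEL it was started with
		Model: "gpt-3.5-turbo",
		Messages: []gpt3.ChatCompletionRequestMessage{
			{Role: "user", Content: "You are in a Debian terminal. What do you run first?"},
		},
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if len(resp.Choices) > 0 {
		fmt.Println(resp.Choices[0].Message.Content)
	}
}

If a request like this succeeds against the local server, the panic above probably comes from how actor.go parses the differently shaped completion text rather than from the transport itself.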

Hangs on cd

Sick project.

One thing I've noticed is that it will hang while trying to cd to a directory.

/bin/bash: line 1: exec: cd: not found

And it doesn't seem to know what to do when cd isn't found, so it just keeps trying, at least in my last attempt.
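
For context, cd is a bash builtin rather than an executable, so exec can't find it on PATH. A minimal sketch of one workaround, assuming the agent's command string is currently being exec'd directly: hand the whole line to bash -c so builtins are interpreted by the shell. The function below is illustrative, not aquarium's actual execution path, which runs through a pty.

package main

import (
	"fmt"
	"os/exec"
)

// runCommand executes a command line via the shell so that builtins like cd
// behave the same way they would interactively.
func runCommand(cmdline string) (string, error) {
	// bash -c interprets the string itself, so "cd /tmp && ls" works,
	// whereas exec'ing "cd" directly fails because cd is not a binary.
	out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := runCommand("cd /tmp && pwd")
	fmt.Println(out, err)
}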

Slice bounds out of range - seems like an off-by-one error.

panic: runtime error: slice bounds out of range [721:720]

goroutine 82 [running]:
aquarium/actor.(*Actor).iteration(0xc000096000)
	/Users/ksimpson/git/aquarium/actor/actor.go:299 +0xf6a
aquarium/actor.(*Actor).Loop.func2()
	/Users/ksimpson/git/aquarium/actor/actor.go:141 +0x68
created by aquarium/actor.(*Actor).Loop
	/Users/ksimpson/git/aquarium/actor/actor.go:134 +0x97b
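
Both panics ([4:3] in the llama backend issue, [721:720] here) look like a start index computed one past the end index. A minimal sketch of a clamped slice helper that would avoid the panic while the underlying indexing in actor.go gets fixed; the helper name is illustrative and not from the codebase.

package main

import "fmt"

// safeSlice returns s[start:end] with both indices clamped into range, so a
// start that overshoots the end yields "" instead of panicking.
func safeSlice(s string, start, end int) string {
	if end > len(s) {
		end = len(s)
	}
	if start > end {
		start = end
	}
	if start < 0 {
		start = 0
	}
	return s[start:end]
}

func main() {
	fmt.Println(safeSlice("abc", 4, 3)) // "" instead of panic: slice bounds out of range [4:3]
}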

hack: give agent an optional boredom timeout that +1s the max procs?

Boredom is part of agency... right?

For example, if the agent throws something into the background with some simple job control (foo &), it could, via a boredom timer, be given +1 to max procs...

Or perhaps an interactive element, where the output screen shows the current # of procs and the user can increase this value.
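
A minimal sketch of what the boredom timer could look like, assuming max procs is just an integer the actor consults each iteration; none of these names exist in aquarium today.

package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// boredomWatcher grants +1 to max procs whenever nothing has reset the timer
// for a full timeout interval, i.e. the agent has been "bored" that long.
type boredomWatcher struct {
	maxProcs atomic.Int64
	lastSeen atomic.Int64 // unix nanos of the last sign of activity
}

func (b *boredomWatcher) resetBoredom() { b.lastSeen.Store(time.Now().UnixNano()) }

func (b *boredomWatcher) run(timeout time.Duration) {
	ticker := time.NewTicker(timeout)
	defer ticker.Stop()
	for range ticker.C {
		if time.Since(time.Unix(0, b.lastSeen.Load())) >= timeout {
			fmt.Printf("boredom timeout: max procs is now %d\n", b.maxProcs.Add(1))
			b.resetBoredom()
		}
	}
}

func main() {
	b := &boredomWatcher{}
	b.maxProcs.Store(1)
	b.resetBoredom()
	go b.run(30 * time.Second)
	// the actor loop would call b.resetBoredom() whenever the terminal shows activity
	select {}
}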

Also, the split output screen is neat. I could envision sending this data to GPT-4, prefixed with a general explanation prompt, to see if it could come up with good ideas to escape the loop.

Cool project.

Edit: 7 dollars?!
This thing is pretty dangerous... I think that feeding the last 10 or 20 lines of stdout would be almost as effective, and cost a lot less.

GPT 3.5 turbo

I've asked GPT-4 to modify the genDialogue() function to use gpt-3.5-turbo instead. It seems to work fine for me. I'm not familiar with Go; can someone take a look and see if it looks good?

func genDialogue(aiPrompt string) (string, error) {
	apiKey := os.Getenv("OPENAI_API_KEY")
	if apiKey == "" {
		return "", errors.New("undefined env var OPENAI_API_KEY")
	}

	ctx := context.Background()
	client := gpt3.NewClient(apiKey)

	messages := []gpt3.ChatCompletionRequestMessage{
		{
			Role:    "user",
			Content: aiPrompt,
		},
	}

	logger.Debugf("### Sending request to OpenAI:\n%s\n\n", aiPrompt)

	request := gpt3.ChatCompletionRequest{
		Model:        "gpt-3.5-turbo",
		Messages:     messages,
		MaxTokens:    tokens,
		Temperature:  gpt3.Float32Ptr(0.0),
	}

	resp, err := client.ChatCompletion(ctx, request)
	if err != nil {
		logger.Debugf("### ERROR from OpenAI:\n%s\n\n", err)
		return "", err
	}

	trimmedResponse := strings.TrimSpace(resp.Choices[0].Message.Content)
	logger.Debugf("### Received response from OpenAI:\n%s\n\n\n", trimmedResponse)
	return trimmedResponse, nil
}

Reduce API cost

Running this program can be expensive due to the OpenAI API cost. Part of the reason is that we are sending the entire output of previous commands, and for very long messages this can add up.

We can probably fix this by only sending the last ~10 or 20 output lines to OpenAI; we should implement that as a flag or something. If it works very well, I'll make this the default behavior.
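
A minimal sketch of the truncation, assuming the previous command's output is available as a single string before it is spliced into the prompt; the function name is illustrative.

package main

import (
	"fmt"
	"strings"
)

// lastNLines returns at most n trailing lines of s, so only the tail of a long
// command's output is sent to the API instead of the whole transcript.
func lastNLines(s string, n int) string {
	lines := strings.Split(strings.TrimRight(s, "\n"), "\n")
	if len(lines) > n {
		lines = lines[len(lines)-n:]
	}
	return strings.Join(lines, "\n")
}

func main() {
	out := "line1\nline2\nline3\nline4\n"
	fmt.Println(lastNLines(out, 2)) // prints line3 and line4 only
}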

Actor gets blocked

Running:
OPENAI_API_KEY="..." ./aquarium --goal "Your goal is to check for any security vulnerabilities."

It then runs the following command and gets stuck:
sudo apt-get install -q -y rkhunter

[screenshot: blocked]

It would be nice to have a timeout that fires if the terminal's content doesn't change in x seconds. When it fires, it would trigger the loop that feeds the terminal's output back to the AI.
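
A minimal sketch of such a timeout, assuming the actor can snapshot the terminal contents as a string; the names below are illustrative and not taken from actor.go.

package main

import (
	"fmt"
	"time"
)

// waitForIdle polls the terminal snapshot and returns once the content has not
// changed for the given idle duration, or once maxWait has elapsed, so a stuck
// command (like an apt-get prompt) still gets handed back to the AI.
func waitForIdle(snapshot func() string, idle, maxWait time.Duration) string {
	last := snapshot()
	lastChange := time.Now()
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		time.Sleep(500 * time.Millisecond)
		cur := snapshot()
		if cur != last {
			last, lastChange = cur, time.Now()
			continue
		}
		if time.Since(lastChange) >= idle {
			break // terminal has been quiet long enough
		}
	}
	return last
}

func main() {
	start := time.Now()
	snapshot := func() string {
		// toy snapshot: keeps changing for two seconds, then settles
		if time.Since(start) < 2*time.Second {
			return time.Now().String()
		}
		return "sudo apt-get install -q -y rkhunter\n[waiting for input]"
	}
	fmt.Println(waitForIdle(snapshot, 3*time.Second, 30*time.Second))
}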
