- At least 8 GB of RAM
- At least 20.2 GB of free storage (just for the models)
- Install Python 3.12.0 (https://www.python.org/downloads/)
- Install Ollama (https://ollama.com/download)
- Clone this repository
- `cd` into the CTFBuddy directory
- (optional) Get an ngrok account and set up ngrok (https://dashboard.ngrok.com/get-started/setup/)
- (optional) Get a free static domain
- Install the required Python libraries and packages
  `pip install -r requirements.txt`
- (optional) Enter your ngrok auth token using this command (replace [AUTH TOKEN] with your actual auth token)
  `ngrok config add-authtoken [AUTH TOKEN]`
- Pull the llama3, mistral, phi3, and llava-llama3 models
  `ollama pull llama3`
  `ollama pull mistral`
  `ollama pull phi3`
  `ollama pull llava-llama3`
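After pulling, you can sanity-check that all four models are installed. A minimal sketch, assuming the Ollama server is running on its default port 11434; its REST endpoint `GET /api/tags` returns the installed models as JSON. The helper below works on an already-parsed response, so it can be tried offline:

```python
# Models this README tells you to pull.
REQUIRED = {"llama3", "mistral", "phi3", "llava-llama3"}

def missing_models(tags: dict) -> set[str]:
    """Return the required models absent from an /api/tags response."""
    # Ollama reports names like "llama3:latest"; strip the tag suffix.
    have = {m["name"].split(":")[0] for m in tags.get("models", [])}
    return REQUIRED - have

# Hand-written sample response; in practice, fetch the real one with
# urllib.request.urlopen("http://localhost:11434/api/tags")
sample = {"models": [{"name": "llama3:latest"}, {"name": "phi3:latest"}]}
print(sorted(missing_models(sample)))  # models still to pull
```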
- Create a Hugging Face account (https://huggingface.co/)
- Get gated model access for Mistral (model: mistralai/Mistral-7B-Instruct-v0.3) and Llama3 (model: meta-llama/Meta-Llama-3-8B)
- Get a Hugging Face User Access Token (https://huggingface.co/settings/tokens)
- Get a Google API key and a Programmable Search Engine ID (https://developers.google.com/custom-search/v1/overview)
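To see how the key and engine ID fit together: the Custom Search JSON API takes the API key as `key`, the Programmable Search Engine ID as `cx`, and the query as `q`. A small sketch that only builds the request URL (the placeholder values are hypothetical, not real credentials):

```python
from urllib.parse import urlencode

def search_url(api_key: str, engine_id: str, query: str) -> str:
    """Build a Custom Search JSON API request URL."""
    params = {"key": api_key, "cx": engine_id, "q": query}
    return "https://www.googleapis.com/customsearch/v1?" + urlencode(params)

# Placeholder credentials for illustration only.
url = search_url("YOUR_API_KEY", "YOUR_ENGINE_ID", "caesar cipher solver")
print(url)
```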
- Configure CTFBuddy if you have not already done so
  `python3 edit_config.py`
- (optional) Start an ngrok tunnel
  `ngrok http 11434 --host-header="localhost:11434" --domain=xxxxxxxx.ngrok-free.app`
- Wait for the ngrok client to show something like
  HTTP tunnel: https://xxxxxxxx.ngrok-free.app
- Open another terminal, then run the client
  `python3 ctfbuddy/main.py`
- Wait for the client to show something like
  Running on local URL: http://127.0.0.1:7860
  To create a public link, set `share=True` in `launch()`.
- Open the local URL in your browser
- Crypto Autosolver
- With a free ngrok account, you can only make 20,000 HTTP/S requests per month and transfer 1 GB of data out of the server per month
- However, unless you are a heavy user of CTFBuddy (in that case, I applaud you), you are very unlikely to exhaust these free limits
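A quick back-of-the-envelope check of why the free tier is hard to exhaust: 1 GB of monthly egress spread over 20,000 requests still leaves about 50 KB per response on average, which is plenty for text completions.

```python
# Free-tier ngrok limits quoted above (1 GB taken as decimal gigabyte).
requests_per_month = 20_000
egress_bytes = 1_000_000_000

avg_bytes_per_request = egress_bytes / requests_per_month
print(f"{avg_bytes_per_request / 1000:.0f} KB per request on average")  # 50 KB
```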
- Please double-check everything generated by AI; it may generate inaccurate/nonsensical information!
- https://mattmazur.com/2023/12/14/running-mistral-7b-instruct-on-a-macbook/ for the suggestion of using Ollama
- https://youtu.be/jENqvjpkwmw?si=n_nOXS_CLallmsfb and https://mer.vin/2024/02/ollama-embedding/ for the UI library suggestion