juliooa / secondbrain
Multi-platform desktop app to download and run Large Language Models (LLMs) locally on your computer.
Home Page: https://secondbrain.sh
License: MIT License
Title, but with a twist: It now works. It was only when I didn't have a model activated that the 404 was happening.
Think it would be really cool to have the option to tweak model parameters.
The main ones are temperature and repeat penalty; adjusting these helps depending on what you're trying to do with the model or how you're evaluating it.
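For context on what these two parameters do, here is a hedged sketch (not Second Brain's actual code; all names are illustrative) of how temperature and repeat penalty are typically applied to a model's raw logits before sampling the next token:

```javascript
// Illustrative sketch only: apply a repeat penalty and temperature to
// a logit array before the next token is sampled.
function adjustLogits(logits, recentTokens, temperature, repeatPenalty) {
  const adjusted = logits.slice();
  // Penalize tokens that already appeared recently, to discourage loops.
  for (const tok of recentTokens) {
    adjusted[tok] = adjusted[tok] > 0
      ? adjusted[tok] / repeatPenalty
      : adjusted[tok] * repeatPenalty;
  }
  // Temperature < 1 sharpens the distribution, > 1 flattens it.
  return adjusted.map((l) => l / temperature);
}
```

This is why exposing the two knobs matters: small changes to either visibly alter how repetitive or how adventurous the output is.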
Please show file size of models available to download.
For example, if the MPT model had MPT selected, and I swap over to Wizard-Vicuna while choosing Llama, the MPT model will now show Llama as its selected setting.
When multiple models are enabled, their activation icons turn orange, and de-selection isn't possible. Rebooting the app will default to the most recently activated model.
Hi Julio,
When do you expect to have finished the "Query your documents" function?
This function looks like a very attractive proposition for your product, given that I can find many other local open-source apps that do vector search over documents.
I think it would be good if the program can track how much resources are currently being consumed by it. The amount of CPU, GPU, and RAM capacity could be shown in the title bar at the top of the app.
Away-Sleep on the Reddit thread mentioned this.
As said on the tin. I tried to activate a model, Wizard-Vicuna 7b Uncensored, and got this error.
Basically, information that tells the user whether their Second Brain is the latest and greatest. AI models having update notes, changelogs, etc. where applicable might also be handy.
I have 20 models already downloaded. Can't figure out a way to point the app to an existing path with those models in it. I don't want to download anything new.
When selecting "MPT" for Wizard-Vicuna 7b, this error happens. Selecting Llama works fine.
It would be good that when looking at an AI model, that Second Brain would list the hardware recommendations and compare them against the user's machine.
When giving the model a prompt and it begins outputting a response and you cancel, it does not actually cancel but instead continues outputting.
Here is a bug reported by Rodneyg on the Reddit thread:
Very nice! I downloaded it and it downloaded the model fine but it's getting an error when I try to activate it. On new MBP M2 Max 32 GB memory.
The model in use is Wizard Vicuna 13b Uncensored GGML, and the error reads as:
Error: unknown tensor 'layers.1.attention_norm.weight' in ""
llama.cpp introduced Metal support: blazing-fast inference for GGML models on M1/M2.
While on Reddit, I replied to a topic and put forward Kagi's answer to what a character card is - and it was incorrect. A character card isn't Optical Character Recognition; it seems to be a sort of "container" that helps define the role an AI character plays.
This functionality would definitely be very valuable for AI prompts. Here is a GitHub page that details a new generation of Character Cards, and also has links to a number of sites that are involved with them.
I downloaded Wizard-Vicuna 7b Uncensored, and the app gave a number of notifications. I think they were for the variants (?), such as MPT, GPT, and so forth.
I tried giving it multiple lines of context but the moment I press Ctrl+Enter to invoke a new line, instead it begins generating output.
This is Darth Gius's suggestion on the Reddit thread.
Sure. What I'd like to add is a completion endpoint for my Node.js app. Most chatbots do this by creating a local server and printing a link (so the chatbot doesn't have to be restarted every time); the external app sends the prompt and receives the output through that link, like here (I think that's the Tauri page for the API), or this (which uses the OpenAI API in Python to generate answers). But I'm no expert on this; for now I can/prefer to send context+prompt directly to your app and start it every time I need an answer. (I already have Node.js code that starts a Python chatbot, sends prompts to it, and receives outputs back in Node.js; now I would have to understand how your code works and swap the Python chatbot for your app.)
On the rustformers GitHub page I see that one of the commands to generate an answer is `llm llama infer -m ggml-gpt4all-j-v1.3-groovy.bin -p "Rust is a cool programming language because"`. My basic idea for now is to change the Tauri app to let it take `-p prompt`, which it receives from my code through the link or through a shared variable (if I don't use the link and instead start your app each time).
Tried using that feature inside Second Brain, but nothing happens.
As said below: (Wizard-Vicuna 13b Uncensored)
Yes, my local model has Wi-Fi capabilities built into its hardware so it can connect to the internet wirelessly via any available network signal. This allows me to search through web pages using popular online search engines such as Google or Bing directly within the browser interface without needing additional software installed onto my computer first.
I am guessing it actually can't, but the program should allow the user to grant/deny internet access to AI models in the future. If multiple AIs are running, the ability to set individual permissions would be good. Should internet access be granted to an AI, the option to select which browser to use would be important - I use uBlock Origin to prevent assorted threats, an extension that I don't have for Edge.
A couple of people mentioned this sort of thing in the Reddit thread; I figure it would be easier to track if the request(s) were cloned here.
I think this is because of AVX support on my 2-processor, 24-thread computer...?
I'm using the Linux AppImage.
Apparently, some AI models have the ability to be used on both CPU and GPUs. Being able to select which hardware (and to what degree?) may be useful for some users.