nrl-ai / pautobot
🔥 Your private task assistant with GPT 🔥 (1) Ask questions about your documents. (2) Automate tasks.
I ran this in Git Bash for Windows and it didn't help.
Also, when I run the development side, I get this:
https://nextjs.org/docs/messages/module-not-found
Import trace for requested module:
./components/RightSidebar.js
./pages/index.js
3 | import { clearChatHistory } from "@lib/requests/history";
4 | import { ingestData } from "@/lib/requests/documents";
5 |
6 | export default function ModelSelector() {
https://nextjs.org/docs/messages/module-not-found
Import trace for requested module:
./components/RightSidebar.js
./pages/index.js
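One thing that stands out in the snippet above: line 3 imports from `@lib/requests/history` while line 4 imports from `@/lib/requests/documents`. If the project only defines the `@/*` alias, the un-slashed `@lib/...` form will not resolve, which produces exactly this module-not-found error. A minimal sketch of the alias configuration, assuming a default Next.js setup (in a TypeScript project the same block goes in `tsconfig.json`; the file below is hypothetical and should be adjusted to the project's actual layout):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["./*"]
    }
  }
}
```

With this config, only `@/lib/...` imports resolve; any `@lib/...` import should be rewritten to the `@/lib/...` form.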
Manage knowledge database by files
Hello all,
I am seeing the error below when I run the Python file. Can anyone please share the next steps to resolve it?
Error:
File "/usr/local/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/shutil.py", line 559, in copytree
with os.scandir(src) as itr:
^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Users/FBT/Desktop/Projects/pautobot/pautobot/pautobot/frontend-dist'
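The traceback shows `shutil.copytree` failing because the `frontend-dist` directory does not exist, i.e. the frontend bundle was never built before packaging. A defensive wrapper, sketched here with hypothetical paths, turns the cryptic traceback into an actionable message:

```python
import shutil
from pathlib import Path


def copy_frontend_dist(src: str, dst: str) -> None:
    """Copy the prebuilt frontend bundle, failing with a clear message
    if it has not been built yet. Paths are illustrative examples,
    not pautobot's actual packaging code."""
    src_path = Path(src)
    if not src_path.is_dir():
        raise FileNotFoundError(
            f"{src} not found. Build the frontend first so the "
            "frontend-dist directory exists before packaging."
        )
    # dirs_exist_ok lets repeated builds overwrite a previous copy
    shutil.copytree(src_path, dst, dirs_exist_ok=True)
```

The practical fix for the error above is to build the frontend (so `frontend-dist` exists) before running the Python packaging step.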
I'd love to be able to upload a set of documents that are persistent. As it is, I need to clear the documents to ask questions about another one. This could be solved by having document sets that can be switched between.
Right now I can only upload one file at a time. To be more technical, I can set up a batch upload, but I have to select each file individually. What would be helpful is recursing through directories, and perhaps selecting multiple files in the upload window. That would let me queue a batch of files and leave it running overnight to digest them all.
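The directory-recursion part of this request is straightforward to sketch. The helper below walks a folder tree and collects every supported document for batch ingestion; the extension set and the function itself are illustrative, not pautobot's API:

```python
from pathlib import Path

# Hypothetical set of supported extensions; adjust to what the
# ingestion pipeline actually accepts.
DOC_EXTENSIONS = {".pdf", ".txt", ".md", ".docx"}


def collect_documents(root: str) -> list:
    """Walk `root` recursively and return every supported document,
    sorted for deterministic ingestion order."""
    return sorted(
        p for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in DOC_EXTENSIONS
    )
```

Feeding the returned list to the existing single-file upload path, one file at a time, would already give the "leave it overnight" workflow.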
Hello,
I am a bit of a computer newbie, but this project inspired me so much that I downloaded Python for the first time to try and build my own little GPT bot :-)
I got it up and running on a commercial laptop (2 GHz / 16 GB). So far so good.
However, I trained it on 90 PDF files, which was too much given my PC's spec, so I wanted to restart everything. That meant I closed down Python and restarted it. On startup, PAutoBot starts ingesting everything all over again: "Note: The bot is currently ingesting data. Please wait until it finishes".
This means I can't even reduce the number of articles to, e.g., 5 just to try it. So even though I have it installed, I have not been able to actually run a single search yet.
Please help me: how do I reset the bot so I can try it properly?
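A reset generally means removing the ingested documents and the vectorstore so the next startup begins from scratch. The sketch below assumes the data layout visible in the logs elsewhere on this page (`pautobot-data/contexts/default/documents`); the `db` subdirectory name for the vectorstore is an assumption, so check your actual folder names before deleting anything:

```python
import shutil
from pathlib import Path


def reset_bot_data(data_dir: str) -> None:
    """Delete ingested documents and the vectorstore under the default
    context, then recreate empty directories. Layout is inferred from
    the log output, not taken from pautobot's source."""
    context = Path(data_dir) / "contexts" / "default"
    for sub in ("documents", "db"):  # "db" (vectorstore) name is an assumption
        target = context / sub
        if target.exists():
            shutil.rmtree(target)
        target.mkdir(parents=True, exist_ok=True)
```

After a reset like this, you can drop just 5 PDFs into the documents folder and re-ingest only those.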
Hi there,
thanks for the project. I ran into an issue while connecting to the website from within my network. I have a headless server running the code, and I open 192.168.2.10:5678 on another computer. The website looks good, but I get error messages with every click afterwards.
The terminal on the server says:
Starting PautoBot...
Version: 0.0.18
Found model file at /home/meiko/pautobot-data/models/ggml-gpt4all-j-v1.3-groovy.bin
gptj_model_load: loading model from '/home/meiko/pautobot-data/models/ggml-gpt4all-j-v1.3-groovy.bin' - please wait ...
gptj_model_load: n_vocab = 50400
gptj_model_load: n_ctx = 2048
gptj_model_load: n_embd = 4096
gptj_model_load: n_head = 16
gptj_model_load: n_layer = 28
gptj_model_load: n_rot = 64
gptj_model_load: f16 = 2
gptj_model_load: ggml ctx size = 5401.45 MB
gptj_model_load: kv self size = 896.00 MB
gptj_model_load: ................................... done
gptj_model_load: model size = 3609.38 MB / num tensors = 285
Creating new vectorstore
Loading documents from /home/meiko/pautobot-data/contexts/default/documents
Loading new documents: 0it [00:00, ?it/s]
No new documents to load
No new documents to load
INFO: Started server process [536575]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:5678 (Press CTRL+C to quit)
INFO: 192.168.2.48:51870 - "GET / HTTP/1.1" 304 Not Modified
INFO: 192.168.2.48:51870 - "GET /_next/static/U0WT1P6WMnAqx-4T8sG3z/_buildManifest.js HTTP/1.1" 404 Not Found
INFO: 192.168.2.48:51871 - "GET /_next/static/U0WT1P6WMnAqx-4T8sG3z/_ssgManifest.js HTTP/1.1" 404 Not Found
INFO: 192.168.2.48:51873 - "POST /api/ask HTTP/1.1" 405 Method Not Allowed
Any hints?
Thanks!
The project pins version 1.13.3, so it can't be installed with pip, as it gives
ImportError: cannot import name 'formatargspec' from 'inspect'
This feature will be used for file processing later.
When I try to upload documents and click Ingest Data, this exception pops up.
RuntimeError: Failed to import transformers.models.bert.modeling_bert because of the following error (look up to see its traceback):
cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' (C:\Users\User\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\urllib3\util\ssl_.py)
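This error comes from urllib3 2.0, which removed `DEFAULT_CIPHERS` from `urllib3.util.ssl_`; older dependency stacks (here, via transformers) still try to import it. A common workaround is pinning urllib3 below 2 (`pip install "urllib3<2"`). A small, testable version check capturing that compatibility rule:

```python
def urllib3_has_default_ciphers(version_string: str) -> bool:
    """Return True if this urllib3 version still exposes DEFAULT_CIPHERS.

    urllib3 2.0 removed `DEFAULT_CIPHERS` from urllib3.util.ssl_, which
    breaks older libraries that import it, producing exactly the
    ImportError shown above. Only 1.x versions are compatible.
    """
    major = int(version_string.split(".")[0])
    return major < 2
```

You can feed `urllib3.__version__` into this check at startup to fail early with a readable message instead of a deep transformers traceback.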
As this doesn't yet support GPTQ models, maybe you could extend the functionality by adding API support for text-generation-webui? Textgen WebUI can load GPTQ models and expose an API.
Could you provide a progress bar for the "thinking" phase, if possible? Could you also provide a way to stop thinking in order to start a new question/request?
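Both halves of this request reduce to the same pattern: a generation loop that reports progress through a callback and checks a stop flag between tokens. A minimal, framework-free sketch (not pautobot's actual generation code):

```python
import threading


class CancellableGenerator:
    """Stoppable 'thinking' loop: the UI thread can call cancel(),
    and the generation loop checks the flag between tokens while
    reporting progress via an optional callback."""

    def __init__(self):
        self._stop = threading.Event()

    def cancel(self) -> None:
        self._stop.set()

    def generate(self, tokens, on_progress=None):
        produced = []
        for i, tok in enumerate(tokens):
            if self._stop.is_set():
                break  # user asked to stop; return what we have so far
            produced.append(tok)
            if on_progress:
                on_progress(i + 1, len(tokens))
        return produced
```

In a real model the total token count isn't known up front, so the progress callback would report tokens produced so far rather than a percentage.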
Hello, sorry if the question is dumb or if I missed the solution in the issues, but searching for the answer and trying by myself didn't give a result.
Details:
The problem is that I have a low-end PC capable of running Alpaca and Vicuna (both 7B), but quite slowly. On the other hand, trying different models, I saw that models under 1B parameters run quite well; mainly they are based on Flan-T5. They give good results for my machine, and quickly enough (about 3-5 tokens per second). Using them with text works even better: for example, asking "based on this text, answer ..." gives me an almost perfect answer. But pasting the text in each time is bad practice for me, in terms of time spent etc.
Short question:
Is there any way to use this tool with any of these models?
LaMini-Flan-T5-783M
Flan-T5-Alpaca (770M or something)
RWKV (under 1.5B)
(any other good small models, under 1B parameters)
If you can give a detailed manual, I will be very grateful! Solutions other than pautobot, privateGPT, etc. are also welcome!
Thank you for your understanding and answers, and sorry for any inconvenience!
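Outside of pautobot (which is wired to GPT4All-J), these small Flan-T5 models can be driven directly with the Hugging Face transformers pipeline API. A hedged sketch; the model id `MBZUAI/LaMini-Flan-T5-783M` is the public Hub name of the first model listed above, and the call requires `pip install transformers sentencepiece` plus a one-time model download:

```python
def build_prompt(context: str, question: str) -> str:
    """Instruction-style prompt that suits instruction-tuned Flan-T5
    models, matching the 'based on this text, answer ...' pattern
    described above."""
    return (
        "Answer the question based on this text.\n\n"
        f"Text: {context}\n\n"
        f"Question: {question}"
    )


def answer_with_small_model(context: str, question: str) -> str:
    # Imported lazily so the prompt helper works without transformers
    # installed; this call downloads the model weights on first use.
    from transformers import pipeline
    qa = pipeline("text2text-generation", model="MBZUAI/LaMini-Flan-T5-783M")
    return qa(build_prompt(context, question), max_new_tokens=128)[0]["generated_text"]
```

To avoid re-pasting the text each time, you would pair this with a retrieval step that selects the relevant passage per question, which is exactly what pautobot/privateGPT do with larger models.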
How's the weather in Hanoi today? => "Hanoi" => Cloudy, 38°C.
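The flow above (question, extracted location, canned answer) can be sketched as a toy intent handler. The regex and the stubbed weather table are illustrative only; a real implementation would call a weather API:

```python
import re

# Stand-in for a real weather API, keyed by city name.
WEATHER = {"Hanoi": "Cloudy, 38°C"}


def extract_city(question: str):
    """Pull a capitalized city name out of a 'weather in X' question."""
    m = re.search(r"weather in ([A-Z][\w ]*?)(?:\s+today)?\s*\?", question)
    return m.group(1) if m else None


def answer_weather(question: str) -> str:
    city = extract_city(question)
    return WEATHER.get(city, "Unknown") if city else "Unknown"
```

In a GPT-based assistant, the extraction step would typically be done by the model itself rather than a regex, but the pipeline shape is the same.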
Does it work with other languages than English? If not, how may I adapt it?
Thank you!
After successfully ingesting or deleting a document, the web front end becomes unresponsive.
Loading new documents: 0it [00:00, ?it/s]
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [36056]
Ctrl+C does not kill the program either. I am accessing the front end from another computer using the -ip 0.0.0.0 option.
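The logs show the server shutting down right after ingestion finishes, and the front end hanging during it, which is consistent with long blocking work running on the server's request path. A common remedy, sketched here with a stub rather than pautobot's real ingestion code, is to push the blocking work onto a thread pool so the async server keeps serving requests:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# Single worker so concurrent ingest requests queue instead of racing.
_ingest_pool = ThreadPoolExecutor(max_workers=1)


def ingest_documents(paths):
    # Stand-in for the blocking embedding/vectorstore work.
    return len(paths)


async def handle_ingest(paths):
    """Run blocking ingestion off the event loop so the HTTP server
    stays responsive while documents are processed."""
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(_ingest_pool, ingest_documents, paths)
```

Whether this matches pautobot's actual architecture is an assumption; the symptom (unresponsive UI plus a clean shutdown after the job) is what points at a blocked event loop or worker.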