
chatd's Introduction

Demo video: chatd.mp4

Chat with your documents using local AI. All your data stays on your computer and is never sent to the cloud. Chatd is a completely private and secure way to interact with your documents.

Chatd is a desktop application that lets you use a local large language model (Mistral-7B) to chat with your documents. What makes chatd different from other "chat with local documents" apps is that it comes with the local LLM runner packaged in. This means that you don't need to install anything else to use chatd; just run the executable.

Chatd uses Ollama to run the LLM. Ollama is an LLM server that provides a cross-platform LLM runner API. If you already have an Ollama instance running locally, chatd will automatically use it. Otherwise, chatd will start an Ollama server for you and manage its lifecycle.
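
Ollama listens on port 11434 by default, so the "reuse or start" decision can be made by probing that port. The snippet below is a minimal sketch of that pattern in Node.js (18+, for the built-in fetch); it is an illustration, not chatd's actual source, and the binary path argument is a placeholder.

// Sketch: reuse a running Ollama server, or spawn the bundled binary.
const { spawn } = require("child_process");

const OLLAMA_URL = "http://127.0.0.1:11434"; // Ollama's default address

async function ensureOllama(binaryPath) {
  try {
    const res = await fetch(OLLAMA_URL); // a live server responds here
    if (res.ok) return null; // reuse the existing instance
  } catch {
    // connection refused: nothing is listening, so start our own
  }
  const proc = spawn(binaryPath, ["serve"], { stdio: "ignore" });
  proc.unref();
  return proc; // the app terminates this child process on exit
}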

Quickstart

  1. Download the latest release from chatd.ai or the releases page.
  2. Unzip the downloaded file.
  3. Run the chatd executable.


Development

Run the following commands in the root directory.

npm install
npm run start

Packaging and Distribution

macOS

  1. Download the latest ollama-darwin release for macOS from here.
  2. Make the downloaded binary executable: chmod +x path/to/ollama-darwin
  3. Copy the ollama-darwin executable to the chatd/src/service/ollama/runners directory.
  4. Optional: The Electron app needs to be signed to run on macOS systems other than the one it was compiled on, so you need a developer certificate. To sign the app, set the following environment variables:
APPLE_ID=your@email.com
APPLE_IDENTITY="Developer ID Application: Your Name (ABCDEF1234)"
APPLE_ID_PASSWORD=your_apple_id_app_specific_password
APPLE_TEAM_ID=ABCDEF1234

You can find your Apple ID, Apple Team ID, and Apple identity in your Apple Developer account. You can create an app-specific password here.

  5. Run npm run package to package the app.
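
For a single packaging run, one convenient option is to export these variables in the same shell session before building. The values below are placeholders copied from the list above; replace them with your own credentials.

export APPLE_ID=your@email.com
export APPLE_IDENTITY="Developer ID Application: Your Name (ABCDEF1234)"
export APPLE_ID_PASSWORD=your_apple_id_app_specific_password
export APPLE_TEAM_ID=ABCDEF1234
npm run package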

Windows

  1. Download the latest ollama-windows-amd64.zip release from here.
  2. Copy the contents of the zip into chatd/src/service/ollama/runners/.
  3. Run npm run package to package the app.

Note: The Windows app is not signed, so you will get a warning when you run it.

Linux

  1. Download the latest ollama-linux-amd64 release from here.
  2. Copy the ollama executable to chatd/src/service/ollama/runners/ollama-linux.
  3. Run npm run package to package the app.

chatd's People

Contributors

brucemacd, lucasew, micheleriva, nonno-cicala


chatd's Issues

Load an entire directory

Right now chatd can only load one file at a time; enable scanning entire directories into the conversation context.

Unable to run upon installation

Apple MacBook Pro, M2 Pro, 32 GB, macOS 14.1.2

Upon installation the model downloaded, but was not verified. System message: Error: Unable to load dynamic library: Unable to load dynamic server library: dlopen(/var/folders/x2/ym2p3fwd5wj8j20v18jq5jf00000gn/T/ollama2099815941/metal/libext_server.dylib, 0x0006): tried: '/var/folders/x2/ym2p3fwd5wj8j20v18jq5jf00000gn/T/ollama2099815941/metal/libext_server.dylib' (code signature in <5A0BFDE3-DAFF-3CDC-88E1-EB69C3900B6B> '/private/var/folders/x2/ym2p3fwd5wj8j20v18jq5jf00000gn/T/ollama2099815941/metal/libext_server.dylib' not valid for use in process: mapping process and mapped file (non-platform) have different Team IDs),

Feature suggestion: Show document on the side and highlight references

Because chatd uses a document as the source of truth for responses, I think it might be a good idea to show the actual document on the side (maybe using a two-sided layout where the chat takes 40% and the document the remaining 60%?), highlight the source chunks used as references, and point to them from the chat using simple markdown links.

Does this sound like a good idea? If so, I'd be happy to contribute!

UI launches with browser dev tools at the side

For me, on launch the Electron BrowserWindow is split, with the browser dev tools open at the side. It's possible (but very unlikely) that it's picking up local settings; I was playing with Electron not long ago, but I didn't go anywhere near showing dev tools.


Ask direct questions about the provided document

In the example, a document is uploaded and the model is asked to name the planets in the solar system - this is a task any decent LLM should be able to do without a document. Currently, when I open a PDF document and ask something like

"Does this document mention the number of planets in the solar system?" I get a response like

"I am not able to provide a specific answer about whether or not a particular document..."

It would be good if we could ask questions about the document, similar to Elicit. This would let us ask the LLM about opinions the document provides, which would give responses the LLM could not give without context.
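
For background, how readily a retrieval-augmented app answers such questions depends largely on how the retrieved chunks are framed in the prompt. A hedged sketch of a grounding template (an illustration, not chatd's actual prompt) looks like this:

// Hypothetical grounding template; chatd's real prompt may differ.
function buildPrompt(chunks, question) {
  return [
    "Answer the question using ONLY the document excerpts below.",
    "If the excerpts do not contain the answer, say so.",
    "",
    ...chunks.map((c, i) => `Excerpt ${i + 1}: ${c}`),
    "",
    `Question: ${question}`,
  ].join("\n");
}

A template that tells the model to refuse when the excerpts are silent tends to produce cautious responses like the one quoted above; loosening that instruction lets the model answer direct questions about the document more readily.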

Cannot find module '@opendocsg/pdf2md' Require stack: Error

Hello, I'm new to this, but I was able to get Ollama running with Mistral, and chatd on Windows 11, out of the box.
I tried the suggestion of explicitly defining the path on line 149 of api.js

const worker = new Worker('c:/chatd/chatd-win32-x64/src/service/worker.js');

as opposed to the relative path, but no joy. Open to suggestions; I appreciate any time you can spare.


Cryptic Error Loading Model

Error: HTTP Error (500): {"error":"invalid version"}

And in the terminal

Ollama server is running
Error: HTTP Error (500): {"error":"invalid version"}
    at Ollama.generate (/home/user/deepLearning/chatd/chatd-linux-x64/resources/app/src/service/ollama/ollama.js:286:13)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Ollama.run (/home/scott/deepLearning/chatd/chatd-linux-x64/resources/app/src/service/ollama/ollama.js:177:5)
    at async run (/home/user/deepLearning/chatd/chatd-linux-x64/resources/app/src/service/ollama/ollama.js:329:10)
    at async IpcMainImpl.runOllamaModel (/home/scott/deepLearning/chatd/chatd-linux-x64/resources/app/src/api.js:36:5)

It was solved by upgrading Ollama with curl https://ollama.ai/install.sh | sh

Ollama not installing on the Mac Intel x64

Using ARM, everything works as expected. Without Ollama installed, the app functions perfectly.

Using electron-forge, I've added an x64 dmg build. Electron builds, notarizes, staples, and installs on Mac Intel machines.

I have ollama included in the package at src/service/ollama/runners/ollama-darwin.app
When people test the app, an error is thrown:

Signal (7)

I've tried to put the ollama.app in a directory called 'ollama-darwin' like this: src/service/ollama/runners/ollama-darwin/ollama.app, but still got the error.

Thanks in advance for your help. I love this project.

Error loading V8 startup snapshot file

Doesn't start on Windows 11, and I see this in the logfile: [0212/153405.326:FATAL:v8_initializer.cc(526)] Error loading V8 startup snapshot file
Are there any requirements I'm missing?

404 endpoint not available

Hi, you've made recent commits where you switched from the /api/generate endpoint to the /api/chat endpoint. When I run the packaged executables (outside of chatd), they do not provide the /chat endpoint. When I install Ollama in my WSL via the install script from the Ollama page, it provides the /chat endpoint. Is there a difference between running it inside WSL versus spawning a child process via Node? Or a version difference? Thanks!

Stuck on initializing

Running the latest release downloaded today on macOS 14.2 on a 2022 M2 MacBook Air, chatd sticks at the initializing screen indefinitely. I've force-quit and relaunched numerous times.

Failed to fulfill prompt error

I am getting an "Error: Failed to fulfill prompt" message when opening the app. It's an immediate error, which makes it seem like it isn't initializing.

What causes this error so that I may provide more context?

question about the project...

Hi, I'm searching for a tool using Ollama (and LLMs in general) that allows me to search local files from a prompt, summarize them, and give the results directly to the user. The goal is to help the user find specific company info in the files they have access to. Most LLMs will hallucinate, so I was wondering if chatd is able to do this. Thanks!
Nice day

feature request: update to use new npm install and use modelfile

I tested this app a few weeks ago and it's an elegant proof of concept - thanks so much for sharing!

I have a specific system prompt that I enable via a Modelfile, creating my own "custom model".

Using the current codebase, I was not able to override the system prompt of whatever model is called in the api.js file. The model would switch properly as defined on line 19, but entering a custom system prompt at line 87 onward never seemed to make any difference.

I'm guessing I would need to upload/publish my customization (really just a system prompt) as its own model to Hugging Face and then see if Ollama could download that model instead. Would that work?

I'm wondering if the new npm install has you rethinking any of this project, and if there's a possibility of making customizations like those in the Modelfile available for building a local app. This would be super powerful. (And note, I'm not a developer; I'm just following instructions from the readme. So again, well done! This is already very accessible.)

Feature request: CUDA support and config options

I love the concept of starting an application and getting to work with it, I really do.
Maybe I haven't seen it or maybe it's missing, but I think it would be highly interesting if there were configuration options and even templates, as well as CUDA support to improve inference speed.
At the moment it looks more than interesting. Maybe you could talk to LM Studio and join forces?

Support Intel Macs

Right now only Apple Silicon is supported. Supporting Intel Macs means building the LLM runner from source for an Intel specific release target.

Unable to load dynamic server library

Apple M3 Max

Error: Unable to load dynamic library: Unable to load dynamic server library: dlopen(/var/folders/5j/k79qzwzj79zccrv78q7yc3yh0000gn/T/ollama1325625138/metal/libext_server.dylib, 0x0006): tried: '/var/folders/5j/k79qzwzj79zccrv78q7yc3yh0000gn/T/ollama1325625138/metal/libext_server.dylib' (code signature in <5A0BFDE3-DAFF-3CDC-88E1-EB69C3900B6B> '/private/var/folders/5j/k79qzwzj79zccrv78q7yc3yh0000gn/T/ollama1325625138/metal/libext_server.dylib' not valid for use in process: mapping process and mapped file (non-platform) have different Team IDs),

Using a remote server for ollama

I'm using a laptop for daily work and I have a GPU server. So it would be nice if I could use chatd as a frontend while running the LLMs on my server with Ollama.

By the way, I'm currently using a local reverse proxy to make chatd connect to my server. It works fine, but it's a little complicated.

caddy reverse-proxy --from :11434 --to server:11434

How does summarization work?

I understand that when asking specific questions, we can use vector search to find the relevant parts of the document.
But if the user asks "Please summarize the document", vector search doesn't help here.
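
One common technique is map-reduce summarization: split the document into chunks, summarize each chunk, then summarize the concatenated partial summaries. Below is a minimal sketch of that technique against Ollama's /api/generate endpoint; it illustrates the idea and is not chatd's implementation, and the chunk size and model name are arbitrary choices.

// Sketch: map-reduce summarization via Ollama's /api/generate endpoint.
async function generate(prompt) {
  const res = await fetch("http://127.0.0.1:11434/api/generate", {
    method: "POST",
    body: JSON.stringify({ model: "mistral", prompt, stream: false }),
  });
  return (await res.json()).response;
}

async function summarize(text, chunkSize = 4000) {
  // Split the raw text into fixed-size chunks.
  const chunks = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  // Map: summarize each chunk independently.
  const partials = [];
  for (const chunk of chunks) {
    partials.push(await generate("Summarize this passage:\n" + chunk));
  }
  // Reduce: merge the partial summaries into one.
  return generate("Combine these notes into one coherent summary:\n" + partials.join("\n"));
}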

Selected file will not load

In the latest release 1.1.0 on Linux Mint 21 Vanessa (~ Ubuntu 22.04), the selected file will not load, regardless of which filetype I select. The button contains the text Cannot find module '/[path/to]/chatd-linux-x64/src/service/worker.js' where [path/to] is where I installed it. Chatd is useless without being able to load a file. Reverting to v1.0.1 is the workaround.

Is there a web version

Is there a web version? I run all my AI stuff on my server and access it from my desktop.

100k pdfs?

Hello everyone,

It is not a proper issue, just a few questions.
Is it possible to use chatd with 100k English PDFs, or is that too much data?
If I can use it, how can I load them from the Linux terminal?

Thank you so much in advance!

Can't find service/worker.js


Downloaded the latest archive for Mac. It opens fine, but after I load a document (either a PDF or a .md file) it fails with the "Can't find service/worker.js" error.

Change response language

Is there a way to change the response language, please?
While the question can be in a different language, the answer is still in English.

Thank you

Clunky handling of model download problems

I think the root cause is probably around Ollama's downloading, which is prone to failure on my slow connection. There's an outstanding issue nearby over there:

ollama/ollama#941

But I'm sure cleaner handling should be possible. The error report isn't useful ('Prompt not Fulfilled' or some such). More information would be good, and/or some options like Retry, Clear Cache and Retry, or Select Another Model. Sorry, I'm lazy; choosing a model should go here as a feature request.

On the (big) positive side, I did have orca-mini 3B locally, installed directly via Ollama. After I changed mistral to orca-mini in the chatd code and re-ran npm, it found it. It then worked together with info from a PDF. (Results look hopeless after having spent time with GPT-4, but I'm just delighted it runs; good enough for my purposes.)

Allow Pulling New Models without Going Through Ollama

This is spun out from #1

Right now you can run custom LLMs using Ollama, but chatd expects that the model has already been downloaded before Ollama is connected to chatd. chatd should be able to handle downloading new models directly, without using the Ollama CLI.

Workaround: for the time being, run ollama pull <model name> in a terminal to download the model before switching to it in chatd.
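
For anyone exploring an implementation, Ollama also exposes pulling over its REST API: POST /api/pull takes a model name and streams newline-delimited JSON progress objects. The sketch below shows one possible way to consume that stream from Node; it is an assumption about how this could be done, not chatd's current code.

// Sketch: pull a model through Ollama's REST API instead of the CLI.
async function pullModel(name) {
  const res = await fetch("http://127.0.0.1:11434/api/pull", {
    method: "POST",
    body: JSON.stringify({ name }),
  });
  // The endpoint streams newline-delimited JSON status objects.
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buf = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buf += decoder.decode(value, { stream: true });
    let nl;
    while ((nl = buf.indexOf("\n")) >= 0) {
      const line = buf.slice(0, nl).trim();
      buf = buf.slice(nl + 1);
      if (line) console.log(JSON.parse(line).status); // e.g. "pulling manifest"
    }
  }
}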

Issue After Installing AI Model

I got this message after it said "initializing AI model": Error: Unable to load dynamic library: Unable to load dynamic server library: dlopen(/var/folders/r3/41cmfh2s0l52ndz3w971w7lc0000gn/T/ollama2523156174/metal/libext_server.dylib, 0x0006): tried: '/var/folders/r3/41cmfh2s0l52ndz3w971w7lc0000gn/T/ollama2523156174/metal/libext_server.dylib' (code signature in <5A0BFDE3-DAFF-3CDC-88E1-EB69C3900B6B> '/private/var/folders/r3/41cmfh2s0l52ndz3w971w7lc0000gn/T/ollama2523156174/metal/libext_server.dylib' not valid for use in process: mapping process and mapped file (non-platform) have different Team IDs),


Uninstalling app and removing LLM file

Is there a way to uninstall the app on macOS so that the supporting files, including the LLM file, get deleted?

I've removed the app file but can't find the Mistral LLM file. In which directory is it saved?

Hallucinations and Issues reading document

Hi, thanks for making this! It feels like it has great potential.

I have tried a couple of different documents; for a couple of them it made up the contents of an imaginary document.

For example, I grabbed a random PDF and asked it questions.


The PDF has loads of info in it, but the LLM seems to only pick out a couple of sentences.

Is there a better way I should be interacting with the app?

Cheers
