augustdev / enchanted
Enchanted is an iOS and macOS app for chatting with private, self-hosted language models such as Llama2, Mistral, or Vicuna using Ollama.
License: Apache License 2.0
I manage installed applications using Homebrew. It makes installing, updating, and managing machines very easy.
I would love it if Enchanted could be added to Homebrew.
The model does not reply / stops replying when the app is put into the background.
Hello,
Enchanted 1.5.2 introduced an issue that wasn’t here before.
If I now type a prompt in Enchanted, there's a big chance the text will be missing words or letters.
If I type the same prompt in the Ollama CLI, the output is consistent.
Model used : openchat:7b-v3.5-0106-q6_K
It also happens on every model I’ve tried like Mistral.
It might have to do with the new JSON changes done for Ngrok?
I’m running ollama locally, Enchanted on iOS and iPadOS gives the same result on the same Wi-Fi, locally.
Is there anything I can send to help debug this?
Thank you for your work, it’s fantastic to use :)
It would be nice to be able to connect to the Mistral AI API to utilize their servers.
Setting would look like:
Url: https://api.mistral.ai/v1
Private key: xxxxxxxxxxx
It would be great to know whether the message about to be sent is going to fit in the conversation context.
That way, we can avoid getting garbled / nonsensical output back from the model.
Hi,
Is it possible to get an executable version of the app?
I can't install it via the App Store since my Mac is managed.
Thanks
First of all thanks for your amazing app. I really enjoy it, and the recent additions (1.5.9) are absolutely amazing.
I would, however, like to suggest one improvement to chat naming. Currently chats are named based on the initial user request, but since 1.5.9 Completions are posted in the chat window (which is amazing), the chat input is always pre-defined and as a result starts with the same phrase. This makes chat names confusing if the same Completion has been run multiple times against different selections.
It would be great if Ollama could generate a concise 3-4 word chat name based on the initial request/response context, so chat names would make more sense and be easier to navigate.
Thanks in advance.
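A minimal sketch of how such a title request could be phrased against Ollama's /api/generate endpoint. The prompt wording, helper name, and default model below are illustrative assumptions, not Enchanted's actual implementation:

```python
import json

def title_request_body(first_user_message: str, model: str = "llama2") -> bytes:
    """Build an Ollama /api/generate payload asking for a concise 3-4 word chat title."""
    payload = {
        "model": model,
        "prompt": ("Reply with only a concise 3-4 word title for this request, "
                   "no quotes or punctuation:\n\n" + first_user_message),
        "stream": False,  # a title is short; no need to stream tokens
    }
    return json.dumps(payload).encode("utf-8")
```

The returned title could then replace the current first-message-based chat name.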
Hi, thanks a lot for this app, it's really cool!
Quick feedback: when sending a message, the input loses its focus and I have to click in it again, which is kind of annoying!
Otherwise everything is really nice, bravo!
I am using a Cloudflare tunnel for routing. On the iOS app, responses get cut short, while on the macOS app no response is printed at all...
What info may I provide to help diagnose and resolve this bug?
First of all, this is a very impressive and useful App! Thanks!
After I set the Ollama server address, it does load all available models and by default uses the first model as the default one. One minor issue is that every time iOS reloads the app, or I start a new conversation, it forgets the last loaded model and always loads the first one, as #5 mentions.
The main issue for me is that it doesn't refresh the model list after the initial load. If I delete model(s) from the Ollama server, those models still show up in the drop-down list. I don't know how to get it refreshed; re-saving the server URI or swiping up to close the app doesn't help.
If I select the removed model from the list, the app shows an error (in red): The data couldn't be read because it is missing.
There are a few options:
The generation screen is already very beautiful! However, when I am generating programming or debugging output, I miss syntax highlighting for the language used in the example code block.
Great Work!
It would be great to have the ability to launch Ollama while opening Enchanted.
https://developer.apple.com/documentation/appkit/nsworkspace/3172700-openapplication
It would be useful to have some minimal actions to integrate with Shortcuts via the app.
Hi,
Thanks for releasing Enchanted! Would you mind adding an open source license to this project?
Thank you!
The app doesn't currently tolerate a trailing slash in the API base URL. If a trailing slash is present, requests like //api/tags are sent, which in turn are not tolerated by Ollama.
It would be best if Enchanted stripped the trailing / from the configured base URL, since it re-adds it when composing requests.
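The normalization could be as simple as the following sketch (function names are illustrative, not Enchanted's actual code):

```python
def normalize_base_url(base: str) -> str:
    """Strip trailing slashes so later path joining yields exactly one separator."""
    return base.rstrip("/")

def api_url(base: str, path: str) -> str:
    """Join the configured base URL and an API path without doubling slashes."""
    return normalize_base_url(base) + "/" + path.lstrip("/")
```

With this, both "http://host:11434" and "http://host:11434/" produce the same request URLs.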
I'm running mistral 8x7b on my windows pc and am accessing it from my MacBook and iPhone through enchanted. There seems to be an issue with it handling math equations though. The code and tables work fine, but it does not appear to work with math equations. Not entirely sure if this is supported or not, but I thought I should let somebody know in case it is an issue. Here is an example:
"$$ \sqrt{25} = 5 $$
This equation shows that the square root of 25 is equal to 5. The square root of a number
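Rendering support aside, detecting the $$ ... $$ blocks in a response is straightforward; a sketch of splitting model output into text and math segments (illustrative only, not the app's renderer):

```python
import re

def split_math(text: str):
    """Split a model response into ('text', ...) and ('math', ...) segments
    on display-math fences of the form $$ ... $$."""
    parts = re.split(r"\$\$(.+?)\$\$", text, flags=re.DOTALL)
    return [("math" if i % 2 else "text", part.strip())
            for i, part in enumerate(parts) if part.strip()]
```

The math segments could then be handed to a LaTeX renderer while the text segments go through the existing markdown pipeline.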
I'd like it if I could quickly show/hide Enchanted using a menubar icon and/or global shortcut.
You have to have macOS 14. Is there any way to support older Macs, such as 12.5?
First of all, thank you for putting together this wonderful app.
I have tried the app and would like to propose an improvement. Right now, when you press the wave icon on the right-hand side of the input box, it asks for permission to send the audio to Apple for transcription, and then it asks for permission to use the microphone.
One very interesting upgrade would be the possibility to use your very own Whisper endpoint to get audio transcription done. That way you could have your audio transcribed on the same server as the Ollama server, without sending it to third parties.
Those who would like their audio transcribed and typed into the input box could still use the built-in microphone icon of the Apple keyboard, processing it through Apple's means (on device or on their remote servers).
Thanks for considering this feature.
P.S. I could write some documentation on how to set up a Whisper endpoint and add it to this project's documentation if required.
Hi,
I'm stuck using Enchanted on both iOS and iPadOS; after writing the prompt, the answer shows an error in red:
Response could not be decoded because of error:
The data couldn't be read because it isn't in the correct format.
I'm using the macOS version without any problem.
The model I'm using is llama2-latest.
As soon as I write a letter in the prompt, the prompt field disappears behind the keyboard. I cannot see what I type anymore, nor validate the prompt. And the menu is not available anymore either.
If the generated text is a little longer, we want to start reading it from top to bottom. But if it's still generating, we cannot read the text, because the view keeps scrolling to the bottom as new data comes in.
I propose following the bottom only if the view is already scrolled to the bottom. If the user scrolls up, it breaks the auto-follow behavior, and the user can freely scroll and read while new tokens keep arriving at the bottom.
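The decision itself is a small geometric check; a language-agnostic sketch in Python (the names and tolerance value are assumptions, not the app's actual code):

```python
def should_follow_bottom(scroll_offset: float, content_height: float,
                         viewport_height: float, tolerance: float = 8.0) -> bool:
    """Auto-scroll to new tokens only when the user is already at (or within
    `tolerance` points of) the bottom; scrolling up breaks the follow behavior."""
    distance_from_bottom = content_height - viewport_height - scroll_offset
    return distance_from_bottom <= tolerance
```

Called on every appended token, this keeps the view pinned for users who are at the bottom while leaving readers who scrolled up undisturbed.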
Original: open-webui/open-webui#1375
The Enchanted macOS app is seemingly not compatible with open-webui; this is a workaround and maybe insight into a bug. I am not sure if this is the only app with this issue, but I wanted to take advantage of the easy auth with JWT. I serve multiple VLANs, family members, and servers, so this was a win for me; maybe it'll help someone.
I also have a non-typical setup, because of a Mac Studio and Docker not being able to reach the GPU on Apple Silicon. This is the tl;dr setup:
User > Enchanted (or app) > router: HAProxy (multiple VLANs) > Caddy + open-webui/ollama Docker > Mac Studio: Caddy + ollama (* origins)
Despite the above, I tried going directly to open-webui without HAProxy or the base Caddyfile, and got the error without the workaround. This is the relevant part of the Caddyfile for the Caddy + open-webui/ollama Docker container:
:443 {
    tls /etc/caddy/cert.pem /etc/caddy/key.pem

    # Rewrite / path HEAD requests to /ollama path HEAD requests
    @head {
        method HEAD
    }
    rewrite @head /ollama

    # Reverse proxy all other requests to the Docker container
    reverse_proxy open-webui:8080
}
I am not sure of the security implications of this yet, but wanted to note this finally worked after quite a bit of troubleshooting.
Bug Summary:
Enchanted requests "GET /ollama/api/tags HTTP/1.1" and follows up with a HEAD /ollama request to the base Ollama API URL (in this case https://site.tld/ollama), but is greeted with "HEAD /ollama/ HTTP/1.1" 405 Method Not Allowed, and Enchanted cannot "reach" the server.
Steps to Reproduce:
Expected Behavior:
Send request and "HEAD /ollama HTTP/1.1" 200 OK
Actual Behavior:
Send request and "HEAD /ollama/ HTTP/1.1" 405 Method Not Allowed
Confirmation:
Docker + separate ollama API
See description.
I have an Intel Mac running macOS Big Sur; it seems the app listed in the App Store is not compatible with this system.
Is there any way to use the app?
As of now, it is not possible to modify a message already sent in the conversation. This would enhance the UX, as the user could make edits to the conversation and re-use the existing one.
It would be nice to have this iOS app ported to Android with SCADE; assuming it was written in Swift, it should be super easy. Thanks for the awesome work!
It would be great if there were a button of some sort to add files/images to prompts.
I am running Ollama on my Google Compute instance behind an nginx proxy. I can navigate to the endpoint and confirm that /api/tags returns a response, as do other endpoints such as /api/version.
Unfortunately, when I add my endpoint via Enchanted's settings, I can send a chat, but I never receive a response. I also confirmed via the server logs that the request is received by Ollama.
Why iOS 17 only?
Can we please have a feature to re-generate the LLM response?
When I follow the instructions and use ngrok, it runs properly (as shown in your instruction video) and the app sees the Ollama server and available models. However, once I use the app's chat, ngrok displays "POST /api/chat 404 Not Found". I assume it's because of the ngrok warning page that's displayed to other devices on first visit. Can you add an option in the app to handle it? I assume following "Set and send an ngrok-skip-browser-warning request header with any value." would work.
Thank you!
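Per the ngrok instruction quoted above, the warning page is skipped when the request carries an ngrok-skip-browser-warning header with any value; a sketch of how any HTTP client could attach it (Python's stdlib here, purely illustrative of what the app would do):

```python
import json
import urllib.request

def chat_request(base_url: str, payload: dict) -> urllib.request.Request:
    """Build a POST to Ollama's /api/chat that bypasses the ngrok warning page."""
    req = urllib.request.Request(base_url.rstrip("/") + "/api/chat",
                                 data=json.dumps(payload).encode("utf-8"),
                                 method="POST")
    req.add_header("Content-Type", "application/json")
    # ngrok only checks that the header is present; any value works.
    req.add_header("ngrok-skip-browser-warning", "true")
    return req
```

Adding the header unconditionally is harmless for non-ngrok endpoints, so it could be sent on every request rather than behind a setting.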
It would be helpful to have a way to designate one model as the default. That way, when starting a new conversation, it wouldn't be necessary to pick the model most of the time.
I don't think it makes sense to support tons of other models, but since Groq is 10x faster and has free API keys for Mixtral at 30 requests a minute, I think it's worth it :)
https://groq.com/
First of all, great job!
It would be great to be able to delete chats: single ones, entire days, or all chats.
My MacBook Pro runs macOS Monterey 12.7.3.
I downloaded the app on an M1 Mac, and it crashes when I click on Settings.
Hello,
If you prompt anything, get a response from the system, and try to create a new session (button in the top-right corner), the keyboard of the iPhone 12 (latest iOS) blocks/overlays the text field and send button. Please fix that. Otherwise, it's a great app.
Thank you.
Hi,
This is an amazing app. Is it possible to run it on Windows/Android too, with a server backend running an OpenAI-compatible server?
Thanks
First of all, thank you for your application, and for making it open source!
Completions is truly an amazing feature, but it would be awesome if I could forward the processed output to the chat window (a new chat, for example) instead of having it typed back where it was selected. For summaries or text/code analysis that would be much more convenient, plus I would have access to the history of those summaries (or other pre-defined text-processing results).
The app crashes if you are in a conversation and press "Delete all conversations" from the Settings menu.
Can we please have an option to edit user's messages, to simplify prompt engineering?
I plan to use a proxy for ollama like this to secure my endpoint.
So, could you add support for a Bearer token, like this?
curl -X <METHOD> -H "Authorization: Bearer <USER_KEY>" http://localhost:<PORT>/<PATH> [--data <POST_DATA>]
BTW, the project looks very promising.
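The curl line above maps directly onto an Authorization header on every request; a minimal sketch of the client side (Python stdlib, illustrative only of what the app would need to send):

```python
import urllib.request

def authorized_request(url: str, user_key: str,
                       method: str = "GET") -> urllib.request.Request:
    """Attach the proxy's expected Bearer token to an Ollama API request."""
    req = urllib.request.Request(url, method=method)
    req.add_header("Authorization", f"Bearer {user_key}")
    return req
```

In the app this would presumably be a single optional "API key" field in settings, applied to all requests toward the configured endpoint.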