
enchanted's Introduction

Hi there 👋

enchanted's People

Contributors

augustdev, eliseomartelli, vishv96


enchanted's Issues

Bug: Words/letters skipped

Hello,

Enchanted 1.5.2 introduced an issue that wasn’t here before.

If I now type a prompt in Enchanted, there's a big chance the text will be missing words or letters.

If I type the same prompt in the Ollama CLI, the result is consistent.

Model used: openchat:7b-v3.5-0106-q6_K
It also happens with every model I've tried, such as Mistral.

(screenshots attached: IMG_4008, IMG_4009)

It might have to do with the new JSON changes made for ngrok?

I'm running Ollama locally; Enchanted on iOS and iPadOS gives the same result on the same Wi-Fi.

Anything I can send to help with this?

Thank you for your work, it’s fantastic to use :)

Feature: token count / context management

It would be great to know whether the message about to be sent is going to fit in the conversation context.

That way, we can avoid getting garbled or nonsensical output back from the model.
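A minimal sketch of the idea, with hypothetical names: estimate the token count of the pending message plus history and check it against the context window before sending. A real implementation would use the model's actual tokenizer; ~4 characters per token is only a rough heuristic for English text.

```swift
import Foundation

// Rough token estimate (~4 characters per token); hypothetical helper,
// not Enchanted's actual code.
func estimatedTokenCount(_ text: String) -> Int {
    max(1, text.count / 4)
}

// Returns true if the pending message plus conversation history is
// estimated to fit in the model's context window.
func fitsInContext(pending: String, history: [String], contextWindow: Int = 4096) -> Bool {
    let total = ([pending] + history).map(estimatedTokenCount).reduce(0, +)
    return total <= contextWindow
}
```

The app could warn (or trim the oldest history) whenever `fitsInContext` returns false.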

executable file for macOS

Hi,

Is it possible to get an executable version of the app?
I can't install it via the App Store since my Mac is managed.

Thanks

Feature Request: Let ollama name chats based on the first request/response.

First of all thanks for your amazing app. I really enjoy it, and the recent additions (1.5.9) are absolutely amazing.

I would, however, like to suggest one improvement to chat naming. Currently, chats are named after the initial user request. But since 1.5.9, Completions are posted in the chat window (which is amazing), so the chat input always starts with the same pre-defined phrase. This makes chat names confusing when the same Completion has been run multiple times against different selections.

It would be great if Ollama could generate a concise 3-4 word chat name from the initial request/response context, so chat names would make more sense and be easier to navigate.

Thanks in advance.

Incompatible with Cloudflare Tunnel? Possible bug

I am using a Cloudflare Tunnel for routing; on the iOS app responses get cut short, while on the macOS app no response is printed at all...

What info may I provide to help diagnose and resolve this bug?

Ollama is unreachable

Hi 👋

I'm trying out the new version 1.0.1. I can reach ollama in a web browser, but apparently not in Enchanted.

(screenshots attached)

Cannot refresh models list

First of all, this is a very impressive and useful App! Thanks!

After I set the Ollama server address, it loads all available models and uses the first model as the default. One minor issue is that every time iOS reloads the app, or I start a new conversation, it forgets the last loaded model and always loads the first one, as #5 mentions.

The main issue for me is that it doesn't refresh the models list after the initial load. If I delete model(s) from the Ollama server, those models still show up in the drop-down list. I don't know how to get it refreshed; neither re-saving the server URI nor swiping up to close the app helps.

If I select a removed model from the list, the app shows an error (in red): The data couldn't be read because it is missing.

Expected Behavior

There are a few options:

  1. Force reloading the models list when the user pulls down on it;
  2. Force reloading the models list when the user starts a new conversation;
  3. Force reloading the models list automatically when the app starts; or
  4. Add an option in Settings to let the user reload the models list from the Ollama server.

Feature Request: Add code syntax highlighting

The generation screen is already very beautiful! However, when generating code or debugging information, I miss syntax highlighting for the language used in the example code block.
Great Work!

  • I'm using the macOS version

Edit button

Would really like to be able to edit a prompt and resend it.


License

Hi,
Thanks for releasing Enchanted! Would you mind adding an open source license to this project?
Thank you!

bug: baseUrl trailing slash

The app doesn't currently tolerate a trailing slash in the API base URL. If a trailing slash is present, then requests like //api/tags are sent, which are in turn not tolerated by Ollama.

It would be best if Enchanted stripped the trailing / from the configured base URL, since it re-adds one when composing request paths.
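A sketch of the suggested fix, with a hypothetical helper name: strip any trailing slashes from the configured base URL before paths like /api/tags are appended to it.

```swift
import Foundation

// Normalize a user-entered base URL by removing trailing slashes.
// Hypothetical helper, not Enchanted's actual code.
func normalizedBaseURL(_ raw: String) -> String {
    var base = raw.trimmingCharacters(in: .whitespacesAndNewlines)
    while base.hasSuffix("/") {
        base.removeLast()
    }
    return base
}

// "http://localhost:11434/" + "/api/tags" would otherwise produce "//api/tags".
let tagsURL = normalizedBaseURL("http://localhost:11434/") + "/api/tags"
```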

Markdown Equation Issues

I'm running Mistral 8x7b on my Windows PC and accessing it from my MacBook and iPhone through Enchanted. There seems to be an issue with handling math equations, though. Code and tables work fine, but math equations do not appear to render. I'm not entirely sure whether this is supported, but I thought I should let somebody know in case it is an issue. Here is an example:
"$$ \sqrt{25} = 5 $$
This equation shows that the square root of 25 is equal to 5. The square root of a number $n$ is a value $r$ such that $r \cdot r = n$. In this case, $5 \cdot 5 = 25$, so $\sqrt{25} = 5$."

Optional menubar

I'd like it if I could quickly show/hide Enchanted using a menubar icon and/or global shortcut.

Can't use Completions

When I use Completions, I can see the events above, but nothing happens. I have already granted the authorization. Thanks!

Add support for a Whisper endpoint for audio transcription

First of all, thank you for putting together this wonderful app.

I have tried the app and would like to propose an improvement. Right now, when you press the wave icon on the right-hand side of the input box, it asks for permission to send the audio to Apple for transcription, and then for permission to use the microphone.

One very interesting upgrade would be the option to use your own Whisper endpoint for audio transcription. That way, audio could be transcribed on the same server as Ollama, without sending it to third parties.

People who just want their audio transcribed and typed into the input box could use the built-in microphone icon on the Apple keyboard, processing it via Apple's means (on device or on their remote servers).

Thanks for considering this feature.

P.S. I could write some documentation on how to set up a Whisper endpoint and add it to this project's documentation if required.

Bug: The data couldn't be read because it isn't in the correct format.

Hi,
I'm stuck using Enchanted on both iOS and iPadOS; after writing the prompt, the answer shows an error in red:

Response could not be decoded because of error:
The data couldn't be read because it isn't in the correct format.

I'm using the version on macOS without any problem.

The model I'm using is llama2-latest.

Last version unusable on iPhone 13

As soon as I type a letter in the prompt, the prompt field disappears behind the keyboard. I can no longer see what I type, nor validate the prompt. And the menu is not available anymore either.

Feature suggestion: Prevent scrolling to the bottom while generating text

Description

If the generated text is a little longer, we want to start reading it from top to bottom. But while it's still generating, we cannot read the text because the view keeps scrolling to the bottom as new data arrives.
I propose following the bottom only if the view is already scrolled to the bottom. If the user scrolls up, the auto-follow behavior breaks, and the user can freely scroll and read while new tokens keep arriving at the bottom.
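The proposed rule can be sketched as a small predicate, with hypothetical names: auto-scroll only while the user is already at (or within a small tolerance of) the bottom, so scrolling up breaks the follow until they return.

```swift
// Decide whether to keep following the bottom while new tokens stream in.
// All names and the tolerance value are hypothetical, for illustration only.
func shouldAutoFollow(scrollOffset: Double,
                      contentHeight: Double,
                      viewportHeight: Double,
                      tolerance: Double = 20) -> Bool {
    // Distance between the bottom edge of the viewport and the bottom of
    // the content; zero means the user is fully scrolled down.
    let distanceFromBottom = contentHeight - (scrollOffset + viewportHeight)
    return distanceFromBottom <= tolerance
}
```

In a SwiftUI view this check would run whenever the message content changes, gating the call that scrolls to the last message.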

[open-webui] Compatibility workaround for `"HEAD /ollama/ HTTP/1.1" 405 Method Not Allowed`

Bug Report

Original: open-webui/open-webui#1375

Description

The Enchanted macOS app does not seem to be compatible with open-webui; this is a workaround and maybe insight into a bug. I am not sure if this is the only app with this issue, but I wanted to take advantage of the easy auth with JWT. I serve multiple VLANs, family members, and servers, so this was a win for me; maybe it'll help someone.

I also have a non-typical setup, because of a Mac Studio and because Docker cannot reach the GPU on Apple Silicon. This is the TL;DR setup:

User > Enchanted (or app) > router: HAProxy (multiple vlans) > caddy + open-webui/ollama docker > mac studio: caddy + ollama (* origins)

Despite the above, I have tried connecting directly to open-webui without HAProxy or the base Caddyfile, and got the same error without the workaround. This is the Caddyfile for the caddy + open-webui/ollama Docker part:

:443 {
    tls /etc/caddy/cert.pem /etc/caddy/key.pem

    # Rewrite / path HEAD requests to /ollama path HEAD requests
    @head {
        method HEAD
    }
    rewrite @head /ollama

    # Reverse proxy all other requests to the Docker container
    reverse_proxy open-webui:8080
}

I am not sure of the security implications of this yet, but I wanted to note that this finally worked after quite a bit of troubleshooting.

Bug Summary:
Enchanted requests "GET /ollama/api/tags HTTP/1.1" and follows up with a HEAD request to the base Ollama API URL, in this case https://site.tld/ollama, but is greeted with "HEAD /ollama/ HTTP/1.1" 405 Method Not Allowed, so Enchanted cannot "reach" the server.

Steps to Reproduce:

  1. Set Enchanted's base Ollama URL to the open-webui/ollama endpoint and enter the token
  2. Run docker compose logs and observe the error

Expected Behavior:
Send request and "HEAD /ollama HTTP/1.1" 200 OK

Actual Behavior:
Send request and "HEAD /ollama/ HTTP/1.1" 405 Method Not Allowed

Environment

  • Operating System: macOS Sonoma 14.2.1

Reproduction Details

Confirmation:

  • I have read and followed all the instructions provided in the README.md.
  • I am on the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
    ^relevant ones posted above

Logs and Screenshots

(screenshot attached)

Installation Method

Docker + separate ollama API

Additional Information

See description.

How to use the app in macOS 11?

I have an Intel Mac running macOS Big Sur, and it seems the app listed in the App Store is not compatible with this system.

Is there any way to use the app?

Feature: edit and resend message

As of now, it is not possible to modify a message already sent in the conversation. Allowing edits would enhance the UX, as the user could adjust the conversation and re-use it.

Please port to Android with SCADE

Would be nice to have this iOS app ported to Android with SCADE; assuming it was written in Swift, it should be super easy. Thanks for the awesome work!

No response when using ollama behind an nginx proxy

I am running ollama on my Google compute instance behind an nginx proxy. I can navigate to the endpoint and confirm /api/tags returns a response as well as other endpoints such as /api/version.

Unfortunately when I add my endpoint via enchanted's settings, I can send a chat, but I never receive a response. I also confirmed that the request is received by ollama via the logs on the server.

Feature suggestion: sidebar and model list

(screenshot attached)
  1. It would be great if the app remembered the user's last sidebar state: hidden or shown.
  2. The list of available models goes stale. Maybe refresh the list per chat session or on app restart?

Thanks for open source.

ngrok's warning page blocks access to the API

When I follow the instructions and use ngrok, it runs properly (as shown in your instructions video), and the app sees the Ollama server and available models. However, once I use the app's chat, ngrok shows "POST /api/chat 404 Not Found". I assume it's because of the ngrok warning page displayed to other devices on first visit. Can you add an option in the app to handle it? I assume following "Set and send an ngrok-skip-browser-warning request header with any value." would work.


Thank you!
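A sketch of that workaround on the client side, with a placeholder URL: set the ngrok-skip-browser-warning header (any value works, per ngrok's docs) on outgoing requests so the interstitial warning page is bypassed.

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking  // needed for URLRequest on Linux
#endif

// Placeholder ngrok URL for illustration; not a real endpoint.
var request = URLRequest(url: URL(string: "https://example.ngrok-free.app/api/chat")!)
request.httpMethod = "POST"
// Any value satisfies ngrok's check; "true" is used here by convention.
request.setValue("true", forHTTPHeaderField: "ngrok-skip-browser-warning")
```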

enhancement: default model

It would be helpful to have a way to designate one model as the default. That way, when starting a new conversation, it wouldn't be necessary to pick the model most of the time.

add support for Groq

I don't think it makes sense to support tons of other models, but since Groq is 10x faster and offers free API keys for Mixtral at 30 requests a minute, I think it's worth it :)
https://groq.com/

Keyboard bug iOS

Hello,

If you prompt anything, get a response from the system, and then try to create a new session (button in the top right corner), the keyboard of the iPhone 12 (latest iOS) blocks/overlays the text field and send button. Please fix that. Otherwise, it's a great app.

Thank you.

Android/windows support

Hi,

This is an amazing app. Is it possible to run it on Windows/Android too, with a server backend running an OpenAI-compatible server?

Thanks

llama2 always shows even if not installed

Hello,
I'm writing this issue because the model llama2 is always shown in the UI pickers even though it's not installed.

ollama ls
NAME            	ID          	SIZE  	MODIFIED       
codellama:latest	8fdf8f752f6e	3.8 GB	23 minutes ago	
mistral:latest  	61e88e884507	4.1 GB	43 minutes ago	

I'm running Version 1.5.5 (14)

Feature: Summary generated in chat window.

First of all, thank you for your application, and for making it open source!

Completions is truly an amazing feature, but it would be awesome if I could forward the processed output to the chat window (a new chat, for example) instead of having it typed back where it was selected. For summaries or text/code analysis that would be much more convenient, plus I would have access to a history of those summaries (or other pre-defined text-processing results).

Crash on clearing history

The app crashes if you are in a conversation and press "Delete all conversations" from the settings menu.

Feature Request: Add support for Authorization: Bearer

I plan to use a proxy for ollama like this to secure my endpoint.

So, could you add support for a Bearer token, like this?
curl -X <METHOD> -H "Authorization: Bearer <USER_KEY>" http://localhost:<PORT>/<PATH> [--data <POST_DATA>]

BTW, the project looks very promising.
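The requested feature could be sketched as follows, with placeholder values, mirroring the curl invocation: attach the user-supplied key as an Authorization: Bearer header on each outgoing request.

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking  // needed for URLRequest on Linux
#endif

// Placeholder key; in practice it would come from the app's settings
// (ideally stored in the Keychain).
let userKey = "<USER_KEY>"

var request = URLRequest(url: URL(string: "http://localhost:11434/api/chat")!)
request.setValue("Bearer \(userKey)", forHTTPHeaderField: "Authorization")
```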
