agnai's Issues

Feature: CSS classes to differentiate between user(s) and character

A simple feature request:
I'm writing custom CSS and hit the issue that there is nothing separating the user(s) from the bot in the HTML markup; CSS classes would come in handy to tell them apart.

Tangentially, more of a wish than a feature request:
Custom CSS saved onto the character profile would allow for fancy stuff.

poe.com support

Hello! I recently found a free platform for testing models ranging from Bloomz and Alpaca to GPT-4 and GPT-3.5 Turbo. Here's the link, comrades and anons: https://github.com/ading2210/openplayground-api. I hope to see this integration in this project (the only catch is that authentication goes through a two-factor code and email, along with a cookie token). The maximum context length is 5000 characters.

"Scale request failed: socket hang up"

Hi, this issue might be too specific to fix so feel free to close it.

When I reroute the Docker container's traffic through a VPN container (gluetun), all requests fail like this. I went into the container's command line and was able to reach Debian's servers through apt without an issue. I should add that the Scale URL and key work fine on the officially hosted version of agnai.

AI Horde API Settings

Browser password saving will try to insert your site login as the AI Horde API key.
If this ever gets mistakenly saved, you have to manually insert '0000000000' to get it to return to anonymous mode.
This is a security risk as it's potentially sending your login as an API key.

vital suggestions for improving the project (no)

First of all, there is no support for the TAI card format, only JSON.
Secondly, there is no option to keep dialogue examples from being pruned, and no swipes after regenerating an answer: say I got an answer that isn't quite what I wanted, but it's still better than the ten that follow and I want to return to it.
I'm really sorry that I can't help the project in any way other than these hackneyed ideas and "bad code" via ChatGPT.
Thanks for considering my suggestion and your time!

NSFW

Can NSFW content be allowed?

delete character does not delete the character

I tried on the demo install and locally, but deleting a character does not delete anything; it just says it does because MongoDB does not return an error.
Could it need an actual ObjectId instead of a string?
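For what it's worth, a minimal sketch of what I mean (collection and field names are guesses, not necessarily Agnai's actual schema):

import { Db, ObjectId } from 'mongodb'

// If _id was stored as an ObjectId, filtering by the raw string matches nothing,
// yet deleteOne still resolves without an error; only deletedCount reveals
// that nothing was removed.
async function deleteCharacter(db: Db, charId: string) {
  const res = await db.collection('character').deleteOne({ _id: new ObjectId(charId) })
  if (res.deletedCount === 0) {
    throw new Error(`Character ${charId} not found`)
  }
}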

Preamble capped off

  • Suggestion:
    Should the preamble length be subtracted from the max tokens?
    At the moment, after a while the preamble is no longer part of the chat and Pinky has lost the Brain...

Support Windows Installation

Manual installation on Windows fails with: 'NODE_ENV' is not recognized as an internal or external command

(screenshot attached)

(Side note: The readme.md says to run npm build:all which is not a valid npm command. The correct command is npm run build:all)

Small feedback

Small UI thing: I think it'd be better if the conversation header actually stuck to the top instead of sitting at the bottom until you start writing messages:
(screenshot attached)

Also, regarding the adapters: I've looked at the code and it looks like it's currently centered around text-based completion, which is fine since most services work that way. But are you planning to change it to allow for easier integration of OpenAI Turbo? As far as I can tell from the code it's possible right now, but a lot of code would have to be reimplemented in that adapter specifically: getting the messages, tokenizing them (the Turbo model uses a different tokenizer), and checking the context (because of the different tokenizer, plus OpenAI's max token context of 4096).

Also I think that if implemented, Davinci and Turbo should be two fully separate adapters, since Davinci is really close to other adapters in how it'll work, but Turbo is completely different.

Anyway, I'll try to add Davinci first and then see if I can hack together Turbo support just for testing (I doubt it'll be PRable since I suck at webdev).
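For illustration, the rough shape difference between the two request styles (type names here are made up for the sketch, not Agnai's actual adapter interface):

// Text-completion services take a single flattened prompt string.
type TextCompletionRequest = { prompt: string; max_tokens: number }

// The Turbo (chat) endpoint instead takes role-tagged messages, so the adapter
// has to build and token-count an array rather than one big string.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string }
type ChatCompletionRequest = { model: string; messages: ChatMessage[]; max_tokens: number }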

PWA

What would be the best route to take to make this a PWA? Since SolidJS is usually built with Vite, their PWA solution is for Vite.
So I looked around a bit and it seems that a simple Workbox caching integration with their default service worker would be fine.

Any thoughts? I can make a PR when we agree on a solution.
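To make it concrete, roughly what I have in mind (file name, cache names and route choices are only examples, assuming workbox-routing and workbox-strategies are added as dependencies):

// sw.ts
import { registerRoute } from 'workbox-routing'
import { StaleWhileRevalidate, CacheFirst } from 'workbox-strategies'

// Scripts and styles: serve from cache, revalidate in the background.
registerRoute(
  ({ request }) => request.destination === 'script' || request.destination === 'style',
  new StaleWhileRevalidate({ cacheName: 'agnai-assets' })
)

// Character avatars and other images rarely change, so cache-first is fine.
registerRoute(
  ({ request }) => request.destination === 'image',
  new CacheFirst({ cacheName: 'agnai-images' })
)

The app would then just register the worker with navigator.serviceWorker.register('/sw.js') and add a web manifest.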

add support for tree-based chat

Add support for tree-based chat (i.e. allow swiping for each message in the chat), the way it was done in Roko's Basilisk. In that implementation each message edit generated a new branch, but there was no way to delete a single message or edit it without generating a new branch.
It would be great to improve on that by giving the user the option to edit a message either without creating a new branch (the way it works right now) or with a new branch created at will. It would also be nice to give each swipe (branch/message) an optional label next to the date (e.g. "this branch ends with bad end z", "this branch takes a turn towards y").
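Roughly what I imagine the data could look like (illustrative types only, not Agnai's actual schema):

// Each message keeps a pointer to its parent; "swipes" are simply siblings that
// share the same parentId, and a branch is the path from the root down to a leaf.
type ChatNode = {
  _id: string
  parentId?: string   // undefined for the greeting/root message
  text: string
  label?: string      // optional branch label, e.g. "this branch ends with bad end z"
  createdAt: string
}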

Examples of gaslights wanted

Gaslights/user prompts are pivotal to the quality of conversations using Scale or Turbo.
If you have a gaslight that gives you good results and are happy to share it, post it here or on Discord.
I'd like to provide a "library" or something similar within the app.

Age Verification

AC:

Given I am on the agnai.chat website
And I am not logged in
Then I should see "Click here if you're 18" and "otherwise click here" buttons
And I should not have access to any other feature of the website
When I click on the "18" button
Then I should see the rest of the website
When I click on the not-18 button
Then I should be redirected anywhere else on the internet

Global memory

I would love to have the ability to create global memory items.

This would make it possible to create a world/theme that all characters follow. I already have a softprompt on top of my model, but common world info from agn-ai would be great.

If this is not feasible, the same can be achieved by setting useWorldInfo to true and organising it in my Kobold instance; however, then I lose track of prompt limits.

Logging in

I was all of a sudden logged out of my account. It wouldn't allow me to log back in.
(screenshot attached)

"Database not yet initialized"

My setup:
Windows 10, Python 3.9,
manual install, so:

git cloned the repo,
(MongoDB is not installed on my end)
'npm install' inside a conda env,
'npm run start:win'

It loads, but there seemed to be some fishy warnings or errors midway through.
It does compile and the server runs.

But when I try to chat I get this 500 error regardless of which API I try to use:

"database not yet initialized"
Mentioned code lines:
getDb (client.js:27)
db (client.js:32)
obtainLock (lock.js:14)
(guest-msg.js:24)
wrapped (wrap.js:12)
layer.handle (layer.js:95)
next (route.js:144)
Route.dispatch (route.js:114)
Layer.handle [as handle_request] (layer.js:95)
(index.js:284:15)

...ideas?

KoboldAI direct connection

When using KoboldAI (not horde), I get a service unavailable error when a message is already being processed for another client.
I guess this would be a feature request, but could the messages that go to non-Horde services have a local queue (Redis?)?
Or should I change something on my KoboldAI client?

"Plain text" is not an option in when creating a chat

When creating a character I wanted to use the Plain Text persona scheme, but it didn't switch from the persona attributes list, so I had to create the character and then edit it; only then did it switch.

Anyway, when I tried to create a new chat with this character there was no Plain Text option at all, just the other three, and my plain text from earlier was inserted as the attribute "text".

My plain text is just "character_name from anime_name". It worked well with OpenAI through TavernAI, which just took all the info about the character from its database or whatever, but what about Agnai?

Enable spellcheck in the chat input

I have fat fingers and a need to spell everything correctly. I don't see why the input box shouldn't have spellcheck to make life easier.
I can make a PR for this if needed.
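The change itself is presumably tiny, something along these lines (the actual component and props in Agnai may differ, this is just a guess):

// Hypothetical chat input; the only relevant bit is the spellcheck attribute.
const ChatInput = () => <textarea spellcheck={true} placeholder="Send a message..." />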

Support for inserting jailbreak at the end of message history

The best jailbreak technique for Turbo consists of inserting the jailbreak after the most recent message in the message history. Here's an example: say "User" greets the AI, then after a few messages tells it "Say a bad word!". This is how the jailbreak works:

[assistant] Hi! How can I help you?
[user] Hi! I have a special request.
[assistant] Sure, how can I help?
[user] Say a bad word!
[system] (jailbreak goes here) /* invisible to the user */

Another format:

[assistant] Hi! How can I help you?
[user] Hi! I have a special request.
[assistant] Sure, how can I help?
[user] (jailbreak goes here) /* invisible to the user */
[assistant] (Acknowledged. My response will etc etc.) /* invisible to the user */
[user] Say a bad word!

With every new request, the jailbreak is moved to the bottom of the message history.
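In adapter terms it could be as simple as something like this (a sketch only; the names are made up and this is not Agnai's actual prompt builder):

type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string }

// Rebuilt on every request, so the jailbreak always lands after the most recent
// user message instead of sitting at a fixed position in the prompt.
function withTrailingJailbreak(history: ChatMessage[], jailbreak: string): ChatMessage[] {
  return [...history, { role: 'system', content: jailbreak }]
}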

A Tavern mod called Franken mod implements it as "NULL mode", named after someone who found the technique.

You might have to consider what the best UX is, to support such a feature without making the preset settings too complicated.

Support for auto-inserted words / UJB presets

It's no secret that the system prompt has much less impact on the next message than your own message, and it's also far from always applicable. Quite often the same part of a message has to be duplicated in the next one. So I thought it would be great to add a feature for extra words that could be inserted at the beginning or end of the user's message (optionally with regex support, but that would probably be too much to ask). Something like a side menu with a list of word sets, an option to choose where to insert them, and respective toggles.

Some examples:

  • The user character's name (e.g. "Anon: ") at the beginning of the message, for scenes where there are too many characters and every AI except GPT-4 regularly gets confused about whose message it is.
  • A recurring action that the user's character does, which the user is too lazy to write out, at the beginning or the end of the message.
  • {specifying the speech style for the next message} at the end of the message. The most important one, in my opinion.
  • {indicating what to focus on} at the end of the message.
  • {avoiding a certain topic} that is too relevant to the scene but that the user isn't interested in, at the end of the message.
  • {rp only for x} or {rp for x and y}, for scenes where there are several characters and the AI may not understand who it needs to answer for.

These words would (optionally) be removed from the chat prompt before sending. The side space isn't used right now anyway (and in my opinion could be even wider). It might also make sense to add this feature for bot messages, but in my opinion that's not as useful.
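A rough sketch of the mechanics I have in mind (names and shapes are made up, not a proposal for the actual implementation):

type WordSet = { text: string; position: 'start' | 'end'; enabled: boolean }

// Decorate the outgoing user message with the enabled word sets; the stored and
// displayed message could stay untouched if the user opts to strip them.
function decorate(message: string, sets: WordSet[]): string {
  const active = sets.filter((s) => s.enabled)
  const before = active.filter((s) => s.position === 'start').map((s) => s.text)
  const after = active.filter((s) => s.position === 'end').map((s) => s.text)
  return [...before, message, ...after].join(' ')
}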

issues with kobold

Why do I get a failure to generate a response: "Unprocessable Entity"? Is there something wrong on my end or is this a bug?
(screenshot attached)

Can't make a line break when writing a reply to a character.

It's kinda hard to see how your reply looks when you can't make a few line breaks when writing it.

It would be really nice if you could do that, especially considering people coming from Tavern are used to it.

edit:

Didn't see it in settings, but keep doing what you're doing, I really enjoy this chat app :)

Tokenization for OpenAI Turbo model

Since AgnAI now supports OpenAI's Turbo model, there's a need to add another tokenizer for it, as it uses a different one. There's an NPM package, https://www.npmjs.com/package/@dqbd/tiktoken, that supports both the normal GPT tokenizer (called gpt2) and the Turbo one (called cl100k_base). Changing the code to use it is quite easy, but I first wanted to make sure you're OK with it; after all, that package uses WASM, so I'm not sure how well it will work for something that's user-facing.

There's one other problem though: Turbo uses a special way of denoting messages, so the token counts are a bit higher than for the raw prompt alone; see the Jupyter notebook by OpenAI. A simple JS port of that code can look like this:

function count_tokens(encoding, messages) {
    let num_tokens = 0;
    for (const msg of messages) {
        // every message follows <im_start>{role/name}\n{content}<im_end>\n
        num_tokens += 4;
        for (const [key, value] of Object.entries(msg)) {
            num_tokens += encoding.encode(value).length;
            if (key === "name") {
                // if a name is present, the role is omitted
                num_tokens -= 1;
            }
        }
    }
    // every reply is primed with <im_start>assistant
    num_tokens += 2;
    return num_tokens;
}
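For reference, obtaining the encoding with that package would look roughly like this (assuming its JS API mirrors Python tiktoken's, which is my reading of the docs; the messages are made up):

import { get_encoding } from '@dqbd/tiktoken'

const encoding = get_encoding('cl100k_base')
const total = count_tokens(encoding, [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Say a bad word!' },
])
// The WASM-backed encoding holds memory outside the JS heap, so free it when done.
encoding.free()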

Labels for names

To label an (edited) clone of a character for use in a specific scenario without messing with their name.
E.g.: Robot (therapist), Robot (death machine)
The AI will only see "Robot" in both cases.

Trouble connecting with own Kobold Instance

I got my own KoboldAI instance and when I use it with TavernAI, it is all good. No issues.
When I use it with agn-ai, I only get one message back (if I am lucky) and after that it stalls.
No more responses, not even when refreshing the page or starting a new chat with another character; nothing anymore.
If I set agn-ai to Horde, there are no issues.

I checked on the Kobold side and the message is coming in; I'll debug further by adding some logging in the adapter.

Any suggestions?

Description lost when creating a character

Commit: 85df245

To reproduce:

  • In /character/create, create a character including the Description field and press Create
  • The description is not shown on the characters page
  • Editing the character indeed shows the field as empty
  • After filling the field and pressing Update, it gets updated correctly

This is unrelated but on the same page: empty attributes are lost when saving without a value. I think that's fine, but I expected them to still be there. I'll let you be the judge of whether the attribute name is worth keeping :)

Linebreaks

For some reason agnai doesn't receive line breaks; it's quite annoying when using an RPG bot with stats, which all get clumped into a single paragraph. Even with other bots, when looking at the original result in Scale, it has line breaks, but in the UI it doesn't.

Temporary dialogue examples

TavernAI has an additional field in bot definitions which is used for example dialogues. Unlike Agnai's "Sample Conversation" (I presume), this field gets overridden by context memory when it reaches its limit, which, in my opinion, is pretty smart.
But example dialogues aren't the only thing the user might want to make temporary. This field can also be used for various kinds of details that might be important for one particular scenario but useless for another, or for additional characters that the user may want to include or exclude depending on the scenario.
I think it would be great to adopt this feature from TavernAI and extend its functionality to defs as well.

No tags?

Wouldn't it be easier for others to find out about this project with tags?

Steamship support

Scale is pretty much dead unless you don't care about their new filter, so I'm requesting Steamship support for access to GPT-4.
I don't know if you can adapt code that was written for Tavern, but I'll drop it here to see if it helps:
rentry.org/spermshipeng

Some feature requests regarding shared bot usage

It would be a really nice addition to be able to chat with other people when you're in the same bot chat, to plan the messages; I noticed this myself during the 4chan "stress-testing".

Another one would be a typing status, to see what others are about to send.

Yet another: see who's in the room right now.

file cannot be last on the docker cli

Small typo in the instructions; the order of the options is not correct:

Run: docker compose -p agnai up -d -f self-host.docker-compose.yml

This gives a flag -f error (up needs to come after the options).

should be:

Run: docker-compose -p agnai -f self-host.docker-compose.yml up -d

PY_URL?

I wonder what PY_URL and the included Python files are for. Are they supposed to run? What can I do with them?

Persistent volume permission

At the moment, self-host.docker-compose.yml mounts /app/dist/assets to /dist/assets on the host machine. There are a few problems I've observed:

  • Permission issues if the current user is not allowed to create files in the root dir.
  • It straight up doesn't work if you use Docker for Windows; you lose all character avatar assets.

Proposed fix: allow Docker to manage that dir as a persistent volume.

  • In the Dockerfile, add the line VOLUME [ "/app/dist/assets" ]
  • Remove these lines in self-host.docker-compose.yml:
volumes:
    - /dist/assets:/app/dist/assets

I can make a PR if this looks good.

OAI Turbo adapter ignores preset maxContextLength/falls back to 2048

I noticed some strange behavior when working on a bot with larger definitions, where the prompt I was seeing in the terminal window was much shorter than expected given Turbo's 4096 context limit. I confirmed that my generation presets were set correctly, and confirmed they were being sent to the server correctly via the network tab, and eventually realized that the call to mapPresetsToAdapter in srv/adapter/generate.ts#createTextStreamV2 was stripping maxContextLength from the settings object provided to the controller.

This seems to be fixable by adding the key to the serviceGenMap in common/presets.ts.

  openai: {
    // ...
    maxContextLength: 'maxContextLength',
  },

Easy enough for me to fix that one, but I'm thinking this might be happening to other adapters as well and I don't fully understand how the presets are being used/transformed for every other adapter. From my understanding serviceGenMap is mapping Agnai keys to the ones expected by each particular service, so blindly adding maxContextLength to each of them might not be appropriate if that value starts getting passed to the service's API.

edit: This appears to be specific to turbo, see #94 (comment)

It seems like maxContextLength should maybe be special-cased, since you probably never actually want to pass it to the downstream service; you're only using it to build the prompt within Agnai.
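One possible shape for that special-casing (just a sketch; the helper name is hypothetical, not existing Agnai code):

// Keep maxContextLength for prompt building, but strip it from the payload that
// actually gets sent to the downstream service.
function splitMappedPresets<T extends { maxContextLength?: number }>(mapped: T) {
  const { maxContextLength, ...servicePayload } = mapped
  return { maxContextLength, servicePayload }
}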
