ovidijusparsiunas / deep-chat

Fully customizable AI chatbot component for your website

Home Page: https://deepchat.dev

License: MIT License


deep-chat's Introduction


Deep Chat

Deep Chat is a fully customizable AI chat component that can be injected into your website with minimal to no effort. Whether you want to create a chatbot that leverages popular APIs such as ChatGPT or connect to your own custom service, this component can do it all! Explore deepchat.dev to view all of the available features, how to use them, examples and more!

🚀 Main Features

  • Connect to any API
  • Avatars
  • Names
  • Send/Receive files
  • Capture photos via webcam
  • Record audio via microphone
  • Speech To Text for message input
  • Text To Speech to hear message responses
  • Support for Markdown and custom elements to help structure text and render code
  • Introduction panel and dynamic modals to help describe functionality for your users
  • Connect to popular AI APIs such as OpenAI, HuggingFace, Cohere directly from the browser
  • Support for all major UI frameworks/libraries
  • Host a model entirely in the browser
  • Everything is customizable!

🎉 🎉 2.0 is now available 🎉 🎉

Announcing Deep Chat 2.0! We have redesigned and improved Deep Chat based on all of your generous feedback. It is now much easier to add to any website and to configure for the best possible chat experience. Check out the release notes for more information.

version 2.0

💻 Getting started

npm install deep-chat

If using React, install the following instead:

npm install deep-chat-react

Simply add the following to your markup:

<deep-chat></deep-chat>

The exact syntax for the above will vary depending on the framework of your choice (see here).

⚡ Connect

Connect

Connecting to a service is simple: define its API details using the request property:

<deep-chat request='{"url":"https://service.com/chat"}'/>

The service will need to be able to handle the request and response formats used by Deep Chat. Please read the Connect section in the documentation and check out the server template examples.
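As a rough sketch of those formats (field names assumed from the documentation, not guaranteed): the component POSTs a body containing a `messages` array of `{role, text}` objects, and renders a reply object with a `text` field. Two small helpers illustrate the mapping:

```javascript
// Sketch of Deep Chat's default wire format (shapes assumed from the docs):
// outgoing body:  { messages: [{ role: "user" | "ai", text: "..." }, ...] }
// expected reply: { text: "..." }

// Pull the most recent user message out of an outgoing request body.
function latestUserText(body) {
  const userMessages = (body.messages || []).filter((m) => m.role === "user");
  return userMessages.length ? userMessages[userMessages.length - 1].text : "";
}

// Wrap a plain string reply into the shape Deep Chat renders as an AI message.
function toDeepChatResponse(replyText) {
  return { text: replyText };
}
```

A proxy or custom service built around helpers like these only needs to accept the first shape and emit the second.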

Alternatively, if you want to connect without changing the target service, use the interceptor properties to augment the transferred objects or the handler function to control the request code.
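For instance, a pair of interceptors for a hypothetical service that expects a `{prompt}` body and answers with `{answer}` (both shapes are assumptions for illustration, not a real API) might look like this:

```javascript
// Hypothetical service contract assumed for illustration:
// it accepts { prompt: "..." } and replies with { answer: "..." }.

// Reshape the outgoing Deep Chat body before it is sent.
const requestInterceptor = (requestDetails) => {
  requestDetails.body = { prompt: requestDetails.body.messages[0].text };
  return requestDetails;
};

// Map the service's reply back into the { text } shape Deep Chat renders.
const responseInterceptor = (response) => ({ text: response.answer });

// Browser wire-up (via the element reference):
// const chat = document.querySelector("deep-chat");
// chat.requestInterceptor = requestInterceptor;
// chat.responseInterceptor = responseInterceptor;
```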

🔌 Direct connection

Direct connection

Connect to popular AI APIs directly from the browser via the directConnection property:

<deep-chat directConnection='{"openAI":true}'/>

<deep-chat directConnection='{"openAI":{"key": "optional-key-here"}}'/>

Please note that this approach should be used for local/prototyping/demo purposes ONLY, as it exposes the API key to the browser. When ready to go live, please switch to the request property described above along with a proxy service.

Currently supported direct API connections: OpenAI, HuggingFace, Cohere, Stability AI, Azure, AssemblyAI

🤖 Web model

Web Model

No servers, no connections: host an LLM entirely in your browser.

Simply add the deep-chat-web-llm module and define the webModel property:

<deep-chat webModel="true" />

📷 🎤 Camera and Microphone

Capture

Use Deep Chat to capture photos with your webcam and record audio with the microphone. You can enable this using the camera and microphone properties:

<deep-chat camera="true" microphone="true" ...other properties />

🎤 🔉 Speech

deep-chat-speech-to-text.mp4

Input text with your voice using Speech To Text capabilities and have the responses read out to you with Text To Speech. You can enable this functionality via the speechToText and textToSpeech properties.

<deep-chat speechToText="true" textToSpeech="true" ...other properties />

🔰 Examples

Check out live codepen examples for your UI framework/library of choice:

React Vue 2 Vue 3 Svelte SvelteKit Angular Solid Next Nuxt VanillaJS

Setting up your own server has never been easier with the following server templates. From creating your own service to establishing proxies for other APIs such as OpenAI, everything has been documented with clear examples to get you up and running in seconds:

Express Nest Flask Spring Go SvelteKit Next

All examples are ready to be deployed on a hosting platform such as Vercel.

📺 Tutorials

Demo videos are available on YouTube:

Videos

🕹️ Playground

Create, configure and use Deep Chat components without writing any code in the official Playground!

Playground

🎉 Update - components can now be stretched to full screen dimensions using the new Expanded View:

Expanded View

🌟 Sponsors

Thank you to our generous sponsors!

matthiasamberg · dorra · techpeace

❤️ Contributions

Open source is built by the community for the community. All contributions to this project are welcome!
Additionally, if you have any suggestions for enhancements, ideas on how to take the project further or have discovered a bug, do not hesitate to create a new issue ticket and we will look into it as soon as possible!

deep-chat's People

Contributors

mcapodici · mchill · ovidijusparsiunas


deep-chat's Issues

Use of eval in deepChat.js is strongly discouraged as it poses security risks

Running a production build fails with the errors below, although everything works fine in dev:

frappe@acbde37c143d:~/frappe-bench/apps/qifudengta/sites/jinxin$ yarn build
yarn run v1.22.19
$ vite build
vite v4.5.0 building for production...
node_modules/deep-chat/dist/deepChat.js (6731:11) Use of eval in "node_modules/deep-chat/dist/deepChat.js" is strongly discouraged as it poses security risks and may cause issues with minification.
Killed
error Command failed with exit code 137.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

The relevant code in "node_modules/deep-chat/dist/deepChat.js":

TypeConverters.attibutes = {
  string: function string(r) {
    return r;
  },
  number: function number(r) {
    return parseFloat(r);
  },
  "boolean": function boolean(r) {
    return r === "true";
  },
  object: function object(r) {
    return JSON.parse(r);
  },
  array: function array(r) {
    return JSON.parse(r);
  },
  "function": function _function(value) {
    return eval(value);
  }
};

A suggested fix (via GPT):

TypeConverters.attibutes = {
  string: function string(r) {
    return r;
  },
  number: function number(r) {
    return parseFloat(r);
  },
  "boolean": function boolean(r) {
    return r === "true";
  },
  object: function object(r) {
    return JSON.parse(r);
  },
  array: function array(r) {
    return JSON.parse(r);
  },
  "function": function _function(value) {
    return new Function('return ' + value)(); // use new Function instead of eval
  }
};
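The suggested replacement works because `new Function` compiles its source in the global scope: bundlers stop flagging the module, and the parsed function cannot capture surrounding local variables the way `eval` can. A minimal sketch of that converter in isolation (not the library's actual export):

```javascript
// Parse a function attribute string without eval. new Function compiles the
// source in the global scope, which avoids the bundler warning and prevents
// the parsed code from closing over local variables.
const parseFunctionAttribute = (value) => new Function("return " + value)();

// Example: an arrow function supplied as an HTML attribute string.
const add = parseFunctionAttribute("(a, b) => a + b");
```

Note that this still executes arbitrary strings, so it carries the same trust requirements on attribute values as eval; it only addresses scoping and bundler concerns.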

websocket connection getting called many times at start

I am modifying the provided websocket sample template.

The websocket handler is opening the connection many times at start, sometimes causing issues and sometimes not rendering the messages at all.

Why is the handler being called many times? How can I stop that?

Sandbox

Vue 3 - error when passing values into Deep Chat

I used the sample code below in CodeSandbox and got this error:

Please define "request" with a "url"

<deep-chat
  request='{
    "url": "https://customapi.com/message",
    "method": "POST",
    "headers": {"customName": "customHeaderValue"},
    "additionalBodyProps": {"field": "value"}
  }'
></deep-chat>

How to get correct response from the api method?

I attempted a dialogue integration using the following code. However, when I examined the results,
the response received is not coming from the server.
In theory, the server should return the following JSON:

{
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today as a policy consulting expert for corporate parks?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 93, "completion_tokens": 17, "total_tokens": 110 }
}

my deep-chat:

<deep-chat
  id="chat-element"
  demo="true"
  style="border-radius: 10px;
    height: calc(95vh - 5.5rem);
    width: 600px;
    border: 1px solid rgb(202, 202, 202);
    font-family: Inter, sans-serif, Avenir, Helvetica, Arial;
    font-size: 1.2rem;
    background-color: white;
    position: relative;"
  :request='{
    "url": "https://xxx.chat/v1/chat/completions",
    "headers": {
      "Content-Type": "application/json",
      "Authorization": "Bearer LixxxtO7"
    }
  }'
  :requestInterceptor="(request) => {
    // request.body = {prompt: request.body.messages[0].text};
    request.body = {
      app_code: 'pKxxxxjc5',
      messages: [
        {
          role: 'user',
          content: request.body.messages[0].text
        }
      ]
    };
    return request;
  }"
  :responseInterceptor="(response) => {
    // const responseText = // extract it from the response argument
    return { result: { text: response.detail } };
  }"
></deep-chat>

image

Handling async code inside the responseInterceptor

I am experiencing a problem with the responseInterceptor function when I use it in conjunction with Laravel Queues for handling asynchronous requests. The component doesn't seem to wait for the interceptor to complete before throwing an error for an incorrect response.

chatElementRef.responseInterceptor = (response) => {
        return new Promise((resolve, reject) => {
            let attempts = 0;
            function checkGeneratedText() {
                if (generatedText !== "") {
                    response.reply = generatedText;
                    resolve(response);
                } else if (attempts < 5) {
                    attempts++;
                    setTimeout(checkGeneratedText, 1000);  // Check again in 1 second
                } else {
                    reject("Timed out waiting for generatedText");
                }
            }
            checkGeneratedText();
        });
    };

Since I am queuing the requests in our project, I need to check whether the process is done and a reply has been generated. Instead, the default response is processed rather than the one I defined.
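A generic promise-based polling helper along these lines (plain JavaScript, not a Deep Chat API) can make the interceptor above easier to reason about:

```javascript
// Retry `check` every `intervalMs` milliseconds until it returns a non-null
// value, rejecting after `maxAttempts` tries.
function poll(check, { intervalMs = 1000, maxAttempts = 5 } = {}) {
  return new Promise((resolve, reject) => {
    let attempts = 0;
    const tick = () => {
      const value = check();
      if (value != null) return resolve(value);
      attempts += 1;
      if (attempts >= maxAttempts) {
        return reject(new Error("Timed out waiting for a value"));
      }
      setTimeout(tick, intervalMs);
    };
    tick();
  });
}

// Hypothetical usage inside a responseInterceptor, where generatedText is the
// caller's own state variable:
// chatElementRef.responseInterceptor = (response) =>
//   poll(() => generatedText || null).then((text) => ({ ...response, reply: text }));
```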

Thank you for your help.

Custom url

I just started looking at deepchat.dev and I'm looking at the following example:

<!-- This example demonstrates how to set values via attributes and properties (recommended) -->
<!-- !!Please note that upon loading/saving this sandbox - the property values applied will not be applied without a RESTART!! -->
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8" />
    <link rel="stylesheet" href="./src/styles.css" />
  </head>
  <script
    type="module"
    src="https://unpkg.com/[email protected]/dist/deepChat.bundle.js"
  ></script>
  <body>
    <h1>Deep Chat</h1>
    <!-- Attributes can be set as strings either directly on the element (demo/textInput) or via a `setAttribute` method on its reference (introMessage).
      When passing JSON objects make sure that they are first correctly stringified (use the following tool https://jsonlint.com/), functions assigned
      to properties must not have external references and all regex values are properly escaped.
      You can also pass values into the component via properties by using the element reference (initialMessages).
      -->
    <deep-chat
      id="chat-element"
      demo="true"
      textInput='{"placeholder":{"text": "Welcome to the demo!"}}'
    ></deep-chat>
  </body>
  <!-- !!Either set the script as "module" or place your code in a timeout in order to wait for the component to load -->
  <script type="module">
    const elementRef = document.getElementById("chat-element");
    // Setting value via a property (easiest way)
    elementRef.initialMessages = [
      { role: "user", text: "Hey, how are you today?" },
      { role: "ai", text: "I am doing very well!" }
    ];
    elementRef.setAttribute(
      "introMessage",
      JSON.stringify({
        text: "JavaScript demo for the Deep Chat component."
      })
    );
  </script>
</html>

Assuming I have set the request url, how do I get the response message from the URL request?

Custom Header

First, great work! 🤗

Is there a possibility of adding a header element to the chat?
I would like to add a title and/or a button to maximize the chat window.

Best regards

Elixir - Phoenix LiveView sample project

The Deep Chat repo is seeking a kind contribution from dev(s) that are familiar with Elixir - Phoenix LiveView who could create a sample hello-world project that would have the Deep Chat web component embedded inside it.

For anyone who is familiar with the framework - it should not take longer than 20 mins.

I was already able to embed Deep Chat into a LiveView project on my computer, however because my experience with the framework is very limited I spent hours trying to pass state values into its properties without much success. Therefore it would be better if someone who has worked with Phoenix LiveView components could lend a hand and do it properly.

Expectation
The repo already contains examples for SSR frameworks such as NextJs and SvelteKit, which offer a glimpse into what this project should contain. For simplicity, and to respect the contributor's time, a simple homepage containing a Deep Chat component that can send messages and receive sample responses should be enough.

Nice To Haves
It would be great to have examples on how to establish an SSE (Server Sent Events) connection for streams or how the server can handle files like we have for the existing examples, but this is not required.

To help anyone get started, follow these steps:

  1. In your Phoenix LiveView project navigate to the assets/ folder and there run the following command:
    npm install deep-chat
  2. Inside the assets/js/app.js file, add the following line of code:
    import "deep-chat"
  3. You can then embed Deep Chat in any of your .ex or .heex files via:
    <deep-chat></deep-chat>

Feel free to comment below with any questions. Thank you!

Media not displaying when simulating a stream

Hi Ovidijus, first of all, thank you for the great work you have done. It is a really nice project. I recently used it to build a chatbot with LangChain. I noticed that images are not shown when simulated stream is enabled. I have spent some time looking at the source code; it seems that the static method Stream.simulate does not consider any images in the response. Will there be any future enhancement for this?

Anyway, thanks in advance.

some questions about simulation param

I have some questions about the simulation param documented at https://deepchat.dev/docs/connect/#stream:

Used to stream text responses from the target service.
If the service supports [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events) via the text/event-stream MIME type, you can asynchronously stream the incoming events by setting this to true. See [examples](https://deepchat.dev/examples/servers) on how to set this up for your own server.
Alternatively, use the simulation object property to facilitate a stream-like experience for regular responses (incl. [websocket](https://deepchat.dev/docs/connect/#Websocket)). You can also control the millisecond interim of each word's appearance by assigning it a number with the default being 70.

Does this mean the param can only be used with websockets?

I used it in a common HTTP request and it throws an error. I used it with SSE via Flux and the result isn't stream-like.

Are there any more examples of how to use it?

html response from signal handler

Hi, it's been a long time since I last followed the progress of the project, and I see that it's progressing well. Congratulations!
I noticed that when using a Signal belonging to a handler request to return the response as a stream, it does not accept HTML. I don't know if it's a bug or an error on my part.

I was able to work around the problem by combining React state and initialMessage to update the messages gradually, but the performance is not great.

Support for Ember.js

There is already a great list of supported frameworks. It would be great to see Ember.js support as well. Any plans?

scroll down to bottom in angular

Hello, thanks for this useful library.

I'm using it in Angular, but when the page reloads I want to scroll down to the bottom (I'm loading the chat history from the db). I checked the repo and encountered this:

public static isScrollbarAtBottomOfElement(element: HTMLElement) {
    // Get the scroll height, visible height, and current scroll position
    const scrollHeight = element.scrollHeight;
    const visibleHeight = element.clientHeight;
    const scrollPosition = element.scrollTop;

    // Calculate the remaining scroll height
    const remainingScrollHeight = scrollHeight - visibleHeight;

    // Check if the current scroll position is at the bottom
    return scrollPosition >= remainingScrollHeight - ElementUtils.CODE_SNIPPET_GENERATION_JUMP;
  }

I tried to implement the same, but I cannot access the container with id = messages. I'd appreciate it if you could help me.

here's my component

  async ngOnInit(): Promise<void> {
    const chatHistory = await this.loadChatHistory();
    this.initialMessages.push(...chatHistory);

    // I want to scroll down here
  }
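As a generic DOM sketch (plain JavaScript, not a Deep Chat API — it assumes you can obtain a reference to the scrollable messages container, which lives inside the component), scrolling to the bottom is just the inverse of the check above:

```javascript
// Scroll any scrollable element to its bottom: setting scrollTop to the
// maximum scrollable offset (scrollHeight minus the visible height) jumps
// the view to the end of the content.
function scrollToBottom(element) {
  element.scrollTop = element.scrollHeight - element.clientHeight;
}
```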

Response should support arrays like initialMessages

Hi,

Digging deeper with Deep Chat, I have an issue with Responses.

The issue is that bundles of messages (from the AI) are not supported, despite being necessary.

Use-case: "Hey how can I help you? <button 1> <button 2> <button 3>" (4 messages), when this message is not an initial one.

I tried to circumvent the problem by setting up a responseInterceptor to post each message individually. It would work if there were a submitAIMessage method, but we only have submitUserMessage. I haven't looked in the source code yet to see whether there is an undocumented method.

Unless I missed something, I see two potential improvements:

  • Response should support arrays. It would be much more flexible.
  • The submitUserMessage(message) method might be changed to submitMessage(message, role), so that some bot messages can be handled client-side.

OpenAI Assistant in a proxy server

Feature Request: Support Passing Assistant Details in Custom Requests

Current Behavior:
I am utilising your deep-chat-nextjs server as a proxy for OpenAI, and the request prop in the DeepChat component does not support assistant parameters the way the Direct Connection for OpenAI does.

Desired Behavior:
I would like to request support for passing assistant details in the request prop.

Additional Information
If an update is not feasible, it would be greatly appreciated if you could provide guidance on how to achieve the requested functionality manually using custom headers and what it would look like on the nextjs example server.

Vue object values not picked up

This behavior is quite strange.

You can take a look at the following code. The code segment below runs without any issues:

By selecting a city, you can obtain the desired city within deep-chat.

https://stackblitz.com/edit/rmazpd-wsvrsr?file=src%2FApp.vue


<template>
    <div class="card flex justify-content-center">
        <Listbox v-model="selectedCity" 
        :options="cities" filter optionLabel="name" 
        @change="handleSelectionChange"
        class="w-full md:w-14rem" />
    </div>
      <deep-chat
    id="ichat"
    :demo="true"
    :textInput="{ placeholder: { text: 'Welcome to the demo!' } }"
    :initialMessages="initialMessages"
    
  />
</template>

<script setup>
import { ref } from "vue";
import "deep-chat";

const selectedCity = ref();
const cities = ref([
    { name: 'New York', code: 'NY' },
    { name: 'Rome', code: 'RM' },
    { name: 'London', code: 'LDN' },
    { name: 'Istanbul', code: 'IST' },
    { name: 'Paris', code: 'PRS' }
]);

const initialMessages = [
  { role: 'user', text: 'Hey, how are you today?' },
  { role: 'ai', text: 'I am doing very well!' },
];

const handleSelectionChange = (event) => {
    console.log("hello,data:",event.value);
    document.querySelector("#ichat").submitUserMessage(event.value.name);
};

</script>
<style>
div {
  font-family: sans-serif;
  text-align: center;
  justify-content: center;
  display: grid;
}
</style>

However, there is an issue when running the following code. The fundamental difference is that this "deep-chat" uses a request call, which causes "deep-chat" to refresh whenever the listbox changes.

https://stackblitz.com/edit/rmazpd-promzf?file=src%2FApp.vue


<template>
    <div class="card flex justify-content-center">
        <Listbox v-model="selectedCity" 
        :options="cities" filter optionLabel="name" 
        @change="handleSelectionChange"
        class="w-full md:w-14rem" />
    </div>
      <deep-chat
            id="ichat"
            ref="deepChatRef"
            :inputAreaStyle='{"backgroundColor": "#ebf5ff"}'                   

            :request="{
                url: 'https://api.link-ai.chat/v1/chat/completions',
                method: 'POST',
                headers: {
                    Authorization: 'Bearer LinddddddTQpKIgISB9uzD0tO7'
                }      
            }"   

            :requestInterceptor="(request) => {
                // request.body = {prompt: request.body.messages[0].text};
                request.body = { app_code: 'pxxxxxjc5',
                                messages: [
                                    {
                                    role: 'user',
                                    content: request.body.messages[0].text
                                    },
                                ]
                                };
                return request;      
            }"

            :responseInterceptor="(response) => {
                    // const responseText = // extract it from the response argument
                    return { text: response.choices[0].message.content };
            }"  
    
  />
</template>

<script setup>
import { ref } from "vue";
import "deep-chat";

const selectedCity = ref();
const cities = ref([
    { name: 'New York', code: 'NY' },
    { name: 'Rome', code: 'RM' },
    { name: 'London', code: 'LDN' },
    { name: 'Istanbul', code: 'IST' },
    { name: 'Paris', code: 'PRS' }
]);

const initialMessages = [
  { role: 'user', text: 'Hey, how are you today?' },
  { role: 'ai', text: 'I am doing very well!' },
];

const handleSelectionChange = (event) => {
    console.log("hello,data:",event.value);
    document.querySelector("#ichat").submitUserMessage("hello");
};

</script>
<style>
div {
  font-family: sans-serif;
  text-align: center;
  justify-content: center;
  display: grid;
}
</style>

TypeError after successful API request

Description:
After making a successful request to my own API, I'm encountering an error:
TypeError: Cannot read properties of undefined (reading 'pollingInAnotherRequest')

API Response:
The response from my API is as follows:

{
  "response": "This is a sample generate text from language model"
}

Configuration:
I've set up the component with the following configuration:

<deep-chat
  containerStyle='{
    "borderRadius": "10px",
    "width": "96vw",
    "height": "calc(100vh - 70px)",
    "paddingTop": "10px"
  }'
  messageStyles='{
    "default": {
      "shared": {
        "innerContainer": {"width": "95%"},
        "bubble": {
          "maxWidth": "100%", "backgroundColor": "unset", "marginTop": "10px", "marginBottom": "10px", "fontSize": "1rem"}},
      "user": {
        "bubble": {
          "marginLeft": "0px", "color": "black"}},
      "ai": {
        "outerContainer": {
          "backgroundColor": "rgba(247,247,248)", "borderTop": "1px solid rgba(0,0,0,.1)", "borderBottom": "1px solid rgba(0,0,0,.1)"
        }
      }
    }
  }'
  avatars='{
    "default": {"styles": {"position": "left"}},
    "ai": {"src": "path-to-icon.png"}
  }'
  inputAreaStyle='{"fontSize": "1rem"}'
  textInput='{"placeholder": {"text": "Send a message"}}'
  initialMessages='[
    {
      "text": "Hi! I am you AI Assistant",
      "role": "ai"
    }
  ]'
  request='{
    "url": "/api/chat/send",
    "method": "POST",
    "headers": {
      "X-CSRF-TOKEN": "{{csrf_token()}}"
    }
  }'
></deep-chat>

Discussion: pass request to handler function instead of API endpoint

I am trying to create an LLM chain with state for my application. However, I can only use the request interface to talk to stateless API endpoints on the server side. It would be much more convenient if I could pass a handleRequest function that does any pre-processing of the message, handles state on the client, and makes API calls if I so choose. I also read about interceptor functions that can intercept and modify the request, but the request must still eventually pass to a server API endpoint. A handler function could do the job of requestInterceptor, responseInterceptor and request. I don't think streaming will work, though. I am using SvelteKit.
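A client-side handler of this kind could be sketched as follows (the request.handler / signals.onResponse pattern appears in other issues on this page; the exact signatures and the state kept here are assumptions for illustration):

```javascript
// Sketch of a stateful client-side handler: state such as a turn counter or
// conversation id lives in the closure rather than on a server, and the
// handler reports the reply back via signals.onResponse.
function createStatefulRequest() {
  let turn = 0; // client-side state carried across messages
  return {
    handler: async (body, signals) => {
      try {
        turn += 1;
        // Pre-process the message and/or call any API of your choosing here.
        const lastText = body.messages[body.messages.length - 1].text;
        signals.onResponse({ text: `Turn ${turn}: received "${lastText}"` });
      } catch (e) {
        signals.onResponse({ error: "Request failed" });
      }
    },
  };
}
```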

Allow customization or removal of the caution message

Hi Ovidijus,

I'm using deep-chat in a scenario where users can provide their own API keys.
There's currently a hardcoded caution message that's automatically displayed when no key is set, generated by the createCautionText function.

private static createCautionText() {
    const cautionElement = document.createElement('a');
    cautionElement.classList.add('insert-key-input-help-text');
    cautionElement.innerText = 'Please exercise CAUTION when inserting your API key outside of deepchat.dev or localhost!!';
    return cautionElement;
}

Would it be possible to add the ability to customize or hide/remove this message?
Thanks!

react-google-charts is not showing up using Deepchat in React Application

Hi @OvidijusParsiunas, Hope you are doing good.

My first question is: does Deep Chat support react-google-charts?
If yes, what correction do I need to make in my code?

I am using "data" to render the pie chart instead of the response received from Axios.

const data = [ ["Year", "Sales", "Expenses", "Profit"], ["2014", 1000, 400, 200], ["2015", 1170, 460, 250], ["2016", 660, 1120, 300], ["2017", 1030, 540, 350], ];

<DeepChat request={{
  handler: async (body, signals) => {
    try {
      const response = await axios.get(process.env.REACT_APP_API_URL + 'chat/query?text=' + body.messages[0].text);
      const htmlResponse = `<div>
        <div style="margin-bottom: 10px">Here is an example chart:</div>
        <Chart
          className="chartbot"
          chartType="PieChart"
          width="100%"
          height="400px"
          data=${data}
        />
        ${chartComponent}
      </div>`;

      signals.onResponse({ html: htmlResponse });
    } catch (error) {
      console.error(error);
    }
  }
}} />

In the Elements tab of the console, the markup rendered for the chart is:

Here is an example chart:
[object Object]

Image in initialMessages when using OpenAI chat causes an error

Hi Ovidijus,

I've encountered a bug in deep-chat that I hope you can help with.

When initializing the chat with an initialMessages property that includes an image, it seems to cause an error with the OpenAI API. Here's the payload I used:

[
  {
    "text": "Hello!",
    "role": "user"
  },
  {
    "text": "What can you do?",
    "role": "user"
  },
  {
    "files": [
      {
        "src": "https://deepchat.dev/img/city.jpeg",
        "type": "image"
      }
    ],
    "role": "ai"
  },
  {
    "text": "I can assist with a variety of tasks such as answering questions, providing recommendations, translating languages, and much more!",
    "role": "ai"
  }
]

Then I got this error:

initial-messages-error

From the network tab, I can see that OpenAI returns a 400 Bad Request error with the following message:

{
  "error": {
    "message": "'content' is a required property - 'messages.3'",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

And here's what's being sent to OpenAI:

{
    "model": "gpt-4",
    "max_tokens": 2000,
    "temperature": 1,
    "top_p": 1,
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "content": "Hello!",
            "role": "user"
        },
        {
            "content": "What can you do?",
            "role": "user"
        },
        {
            "role": "assistant"
        },
        {
            "content": "I can assist with a variety of tasks such as answering questions, providing recommendations, translating languages, and much more!",
            "role": "assistant"
        },
        {
            "content": "cool",
            "role": "user"
        }
    ]
}

So, it appears that the content property is missing for the message that includes the image.

Could you please look into this issue? Or maybe I'm doing something wrong on my setup ;)
Thanks in advance for your help!
Best

Support buttons after an AI message

A common chatbot pattern is to prompt the user with some preconfigured actions:

Large Image

image

Is it possible for deep-chat to support this?

How to send/receive messages to external API (Express)

I read the docs and examples but can't figure out how to send and receive messages with an Express server.

I have an Express server listening at http://localhost:5050/ask:

app.post("/ask", async (req, res) => {
  const prompt = req.body.prompt;

I usually use curl to post messages to the chatbot; it listens for the prompt:
curl -X POST http://localhost:5050/ask -H 'Content-Type: application/json' -d '{ "prompt": "What is the weather forecast in Kuala Lumpur ?" }'

Express returns:

return res.status(200).json({
          success: true,
          message: msgs,
        }); 

In my current setup on Vue 3, I use additionalBodyProps to send the prompt, but I'm not sure where to get the questions from the users:

 <deep-chat
    :introMessage="introMessage"
    :request="{
      url: 'http://localhost:5050/ask',
      method: 'POST',
      headers: { customName: 'customHeaderValue' },
      additionalBodyProps: { prompt: messages },
    }"
  >
  </deep-chat>
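One way to bridge the two shapes, assuming Deep Chat's default outgoing body of `{messages: [{role, text}, ...]}` and an expected `{text: "..."}` response (verify both against the docs for your version), is a pair of interceptors. The helper names below are hypothetical:

```javascript
// Hedged sketch: Deep Chat sends {messages: [{role, text}, ...]} by default and
// expects {text: '...'} back. These pure helpers adapt that shape to the Express
// endpoint above; attach them as the element's requestInterceptor / responseInterceptor.
function toPromptBody(requestDetails) {
  const messages = requestDetails.body.messages || [];
  const last = messages[messages.length - 1]; // the user's newest question
  return {...requestDetails, body: {prompt: last ? last.text : ''}};
}

function toChatResponse(serverResponse) {
  // {success: true, message: msgs}  ->  {text: msgs}
  return {text: serverResponse.message};
}
```

In Vue 3 these could be wired through a template ref once the component mounts, e.g. `chatRef.value.requestInterceptor = toPromptBody` and `chatRef.value.responseInterceptor = toChatResponse`, which removes the need to smuggle the prompt in via additionalBodyProps.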

Result text is incorrect when streaming and using avatars or names

Hello, I saw that you finished updating the main branch. Great job! I tested the new functionality and congratulate you.
I encountered a small problem while finishing a project with the new requestHandler. Everything works well, except that it does not use the same signal as the base URL, so the interception functions like onNewMessage, and even the body prop returned by the requestHandler itself, do not intercept incoming messages. Before, messages were intercepted directly through the signal; now that this is separate logic, I assume incoming messages would have to be captured on the front end directly, but I think that would slow down the process a bit. I also tried updating "initialMessages" to insert each incoming message, but once loaded it cannot be changed. Let me know what you think.

How to customize the CSS of the container? (width / height)

Hi,

Thank you for this great and promising project!

I wanted to know if there is a mean to customize the inner CSS especially to change the width and the height of the chat area.

If I change the parent width and height through the :style property, the chat does not render at the expected size.

Many thanks
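For what it's worth, the component appears to size itself from the host element rather than the parent, so one sketch (assuming inline styles on the tag are honoured; see the styles docs for containerStyle options) is:

```html
<deep-chat style="width: 400px; height: 600px; border-radius: 8px"></deep-chat>
```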

Implement Copy Button for Code Blocks in Markdown Rendering

Summary

I'd like to propose a new feature for deep-chat: the addition of a "Copy" button to code blocks in Markdown rendering. This feature would enable users to easily copy code snippets to their clipboard, enhancing the user experience, especially for developers.

Description

Currently, when viewing code blocks in Markdown files, users have to manually select and copy code snippets. This can be cumbersome, especially for longer code blocks or when using mobile devices. By adding a "Copy" button to each code block, we can streamline this process, making it more efficient and user-friendly.

Use Case

This feature is particularly useful for developers or any users who frequently interact with code snippets in documentation, READMEs, or other Markdown files. It saves time and reduces the risk of accidentally omitting parts of the code when copying.

Suggested Implementation

A "Copy" button could be discreetly positioned in the upper right corner of each code block.
Clicking the button would copy the entire content of the code block to the user's clipboard.
Optionally, a brief visual confirmation (like a tooltip saying "Copied!") could appear upon successful copying.
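A minimal sketch of the proposed behaviour (a hypothetical helper, not an existing Deep Chat API): after the markdown has been rendered, decorate each `<pre>` block with a button that copies its text via the async Clipboard API.

```javascript
// Sketch of the suggested implementation (hypothetical helper, not Deep Chat's API):
// decorate every rendered <pre> code block with a "Copy" button.
function addCopyButtons(root) {
  root.querySelectorAll('pre').forEach((pre) => {
    const button = document.createElement('button');
    button.textContent = 'Copy';
    button.className = 'copy-code-button'; // position in the top-right corner via CSS
    button.addEventListener('click', async () => {
      await navigator.clipboard.writeText(pre.textContent);
      button.textContent = 'Copied!'; // brief visual confirmation
      setTimeout(() => { button.textContent = 'Copy'; }, 1500);
    });
    pre.appendChild(button);
  });
}
```

Since Deep Chat renders messages inside a shadow root, this would need to run (and be styled) inside the component rather than from page-level scripts.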

Multiple AI Avatars, especially for use in multiple ai systems.

I'm using your wonderful package and it's great, thanks! I am now working on AI agents, where a conversation may involve a user and multiple AI agents conversing. It would be great if, in returned messages, we could specify different AIs. Right now you have 'ai' or 'user' with their corresponding name and icon/avatar definitions. Might it be possible to extend message roles to something like "ai:ai-name" ("ai:coder", "ai:reviewer", etc.) and have associated config for mapping different names and avatars/icons to those roles? I could create a custom component, but then I would lose, or have to recreate, the rendering for markdown, Prism, and custom elements that standard messages have.

Messages being cleared on re-render

I'm running into some challenges with the react component when deep-chat is a child component. Messages are being cleared and the component seems to reset itself when a re-render is triggered on the parent component.

For example, in the code below, the messages are cleared when the Drawer is closed and reopened. Is there a way to retain the messages/state when a re-render of the component occurs?

export function App() {

    const [open, setOpen] = useState(false);

    return (
        <div className="app">
            <Button onClick={() => setOpen(true)}>Open Chat</Button>
            <Drawer
                title="Parent Component"
                width={500}
                open={open}
                onClose={() => {setOpen(false)}}
                placement = "right"
            >
                <DeepChat demo="true"/>               
            </Drawer>
        </div>
    );
}
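One workaround sketch, assuming the `onNewMessage` event and `initialMessages` property (names as in the v1 docs, including the `isInitial` flag on the event detail; verify against your version): keep the transcript in the parent so it survives the chat being unmounted when the Drawer closes.

```jsx
export function App() {
  const [open, setOpen] = useState(false);
  const messagesRef = useRef([]); // lives in the parent, so it survives the Drawer unmounting

  return (
    <div className="app">
      <Button onClick={() => setOpen(true)}>Open Chat</Button>
      <Drawer title="Parent Component" width={500} open={open}
              onClose={() => setOpen(false)} placement="right">
        <DeepChat
          demo={true}
          initialMessages={[...messagesRef.current]}
          onNewMessage={({message, isInitial}) => {
            if (!isInitial) messagesRef.current.push(message);
          }}
        />
      </Drawer>
    </div>
  );
}
```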

Augmenting outgoing message body and custom handler for streams

Hello, first of all I would like to say that I really like what you have done. It's great work, so thank you very much.
I know there is already a ticket from someone asking the same thing: customizing a request to be able to control the response logic. I tried deep-chat-dev but it doesn't work with React, and since I'm not a fan of TypeScript I wanted to ask before trying to modify it myself. The other thing I wanted to ask: is there a way to change the structure or the object typing of the messages being sent? At the moment it looks like {role: "user", text: "sometext"}, {role: "ai", text: "someothertext"}; I would have liked to receive the ai and user text on the server side in a single object, e.g. {ai: "text", user: "text"}, if possible.
Thank you again for your work.

Vue Webpack 4 build error

Module parse failed: Unexpected token (110:14)
You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See https://webpack.js.org/concepts#loaders
| static processConfig(n, e) {
| var t;

return e ?? (e = {}), e.disabled ?? (e.disabled = n.isTextInputDisabled), e.placeholder ?? (e.placeholder = {}), (t = e.placeholder).text ?? (t.text = n.textInputPlaceholderText), e;

| }
| // this is is a bug fix where if the browser is scrolled down and the user types in text that creates new line

@ ./node_modules/cache-loader/dist/cjs.js??ref--13-0!./node_modules/babel-loader/lib!./node_modules/cache-loader/dist/cjs.js??ref--0-0!./node_modules/vue-loader/lib??vue-loader-options!./src/App.vue?vue&type=script&lang=js& 1:0-19
@ ./src/App.vue?vue&type=script&lang=js&
@ ./src/App.vue
@ ./src/main.js
@ multi (webpack)-dev-server/client?http://192.168.1.111:8080/sockjs-node (webpack)/hot/dev-server.js ./src/main.js

Ability to send multiple messages from custom backend

I want to know if there is any way, or hack, that I can use to send multiple messages at once or one by one.

I am looking for something like this ,

User : Hello
-- get chat
AI : Hi , welcome
AI : How may I help you ?
AI : Hope you are well

I want to send these 3 messages at once.
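One possible approach, assuming your Deep Chat version's Response format accepts an array (worth checking in the response docs): each entry in the returned array becomes its own chat bubble.

```javascript
// Hedged sketch: some versions accept an array of response objects,
// each rendered as a separate AI message bubble.
function buildGreeting() {
  return [
    {text: 'Hi, welcome'},
    {text: 'How may I help you?'},
    {text: 'Hope you are well'},
  ];
}

// In an Express handler this could be: res.json(buildGreeting());
```

If arrays are not supported in your version, a websocket connection or repeated calls to a message-adding API would be the fallback.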

Save and Retrieve Chat Sessions

Description:

Overview
I would like to propose an enhancement for the ability to save chat sessions to local storage and retrieve them in sync. Currently, if a user refreshes the page, the entire chat session is lost, which can be inconvenient and disrupt the user's flow of interaction.

Feature Description
Save Session to Local Storage: Implement functionality that automatically saves the current state of the chat session to the browser's local storage periodically or upon specific user actions (e.g., sending a message).

Retrieve Previous Sessions: Upon returning to the chat or after a page refresh, users should be able to retrieve their previous session. This could be done automatically when the chat interface loads.

Use Case and Benefits

  • Enhanced User Experience: Users won't lose important information or context from their chat sessions due to accidental page refreshes or browser closures.

  • Convenience: This feature adds a level of convenience, allowing users to pick up right where they left off without having to start over.
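The proposal can be approximated today with existing hooks; a sketch assuming the `onNewMessage` event and `initialMessages` property (verify the names against your version):

```javascript
// Sketch: persist the transcript on every message and replay it on load.
// `storage` is injected so the helpers stay testable (pass window.localStorage).
const STORAGE_KEY = 'deep-chat-session';

function saveSession(storage, messages) {
  storage.setItem(STORAGE_KEY, JSON.stringify(messages));
}

function loadSession(storage) {
  const raw = storage.getItem(STORAGE_KEY);
  return raw ? JSON.parse(raw) : [];
}

// Wiring sketch (browser side):
// const messages = loadSession(localStorage);
// chatElement.initialMessages = messages;
// chatElement.onNewMessage = ({message}) => {
//   messages.push(message);
//   saveSession(localStorage, messages);
// };
```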

Support for Phoenix & Phoenix Live View

I just saw example code for an impressive number of frameworks/servers on the documentation site. Nice work.

I have following questions.

  1. Is the web component compatible with the Phoenix framework?

  2. Is the web component compatible with the Phoenix LiveView framework?

    • A small intro to Phoenix LiveView in case you have not used it before: a thin JS layer is shipped with server-rendered pages over a persistent socket connection. For any user event, only efficient diffs of the changed data/HTML are sent from the server via the existing socket, which then applies the UI changes on the page.

It would be awesome to use deep chat with phoenix & live views to leverage the best of both worlds.

If not now, maybe in the future these integrations can be explored. Looking forward to using it.
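Since Deep Chat is a plain web component, it should work wherever a client hook can run. Below is a speculative sketch of a LiveView hook that relays chat messages over the existing socket; the `connect.handler` / `signals.onResponse` names are assumptions based on the v2 connect docs, and the event names are made up for illustration.

```javascript
// Speculative sketch: mount the web component from a LiveView hook and relay
// messages via pushEvent/handleEvent instead of HTTP.
const DeepChatHook = {
  mounted() {
    const chat = this.el.querySelector('deep-chat');
    // register the reply listener once; it answers the in-flight message
    this.handleEvent('chat_reply', ({text}) => {
      if (this.signals) this.signals.onResponse({text});
    });
    chat.connect = {
      handler: (body, signals) => {
        this.signals = signals;               // remember how to answer this message
        this.pushEvent('chat_message', body); // relay over the existing socket
      },
    };
  },
};

// Registered as usual: new LiveSocket('/live', Socket, {hooks: {DeepChatHook}});
```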

Scrolling styles in the chat container

Hello! Thank you so much for working on the component, great job!

Can I clarify how it is possible to change the scrolling styles in the chat container?

Unfortunately, targeting the component selector directly in CSS with the ::-webkit-scrollbar pseudo-element, even with !important, had no effect.
Using the scrollbarWidth (scrollbar-width) property in containerStyle and messageStyles also had no effect.

Thank you in advance for the answer!
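For context: outside CSS cannot reach into a shadow root, which is why the ::-webkit-scrollbar attempts never match. A sketch assuming the auxiliaryStyle property (which, per the styles docs, injects raw CSS inside the shadow root; verify against your version):

```html
<deep-chat
  auxiliaryStyle="
    ::-webkit-scrollbar { width: 8px; }
    ::-webkit-scrollbar-thumb { background-color: #c5c5c5; border-radius: 4px; }"
></deep-chat>
```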

Improvement Suggestions for the Message Display Interface

First of all, thank you very much for your in-depth research and development of DeepChat. The product is really great. I'm also willing to conduct thorough testing and provide suggestions for the product.

Below, I'd like to offer some of my suggestions regarding the 'message' feature. The screenshot below is from another open-source project at https://github.com/Chanzhaoyu/chatgpt-web. I hope that DeepChat can also provide similar functionality. The specific functionality points are as follows:

  1. Copy code: Sometimes, after AI processing, the content may contain markdown. I hope that this markdown can be copied. The main purpose is to copy it to other places (e.g., EditPlus, VSCode, Notepad) for editing.

  2. Code send to: Ideally, it would be great to be able to send this code to other places through an API. In my application, based on the chat conversation, useful content needs to be collected and distributed to other components or systems.

  3. Regenerate, run again: Sometimes, the message from AI is not good, and I hope AI can provide me with the results again.

  4. Display original: The purpose of displaying the original text is that sometimes I want to see unformatted content or to copy it.

  5. Copy to clipboard: This feature is also very effective for quickly copying to the clipboard.

  6. Delete the message: It should be possible to delete the message.

  7. Send to: I hope this feature, along with point 2, can be combined. That is, I hope this message can be quickly collected into other systems. So, you can format the message, for example, in a format similar to the following JSON:

    { original: xxx,
      format: [ normal-text: markdown-text: ... ] 
    }
    
  8. Display the time: It should display the time of sending and receiving.

SvelteKit - navigator is not defined

Using the Svelte demo example in SvelteKit results in
ReferenceError: navigator is not defined

navigator contains information about the browser, so the import has to be done in onMount to force it to run on the client:

import { onMount } from 'svelte';
onMount(async () => {
  const DeepChat = await import("deep-chat");
})

Next.js "app router" version

It would be cool to have a second Next.js example for the new app router.

One thing the app router solves is the need for this:

  // Need to import the component dynamically as it uses the 'window' property.
  // If you have found a better way of adding the component in next, please create a new issue ticket so we can update the example!
  const DeepChat = dynamic(() => import('deep-chat-react').then((mod) => mod.DeepChat), {
    ssr: false,
  });

Instead you can create a new file called DeepChat.tsx like this:

"use client"

export { DeepChat } from 'deep-chat-react';

I have locally converted one of the API calls to use the app router for a personal project.

I am happy to help or do a PR for the whole thing.

Embedding custom components

Hello!

Note: This is a fantastic and well-documented package, and it's exactly what developers starting out with AI chat applications need.

Now for my question...

I'm using this in an Angular project and everything works great, but I'm curious about the extensibility when we need to inject custom elements into the chat component. Here are a few use-cases:

  • confirmation dialog
  • email input text box
  • user-feedback
  • sharing

The response interceptors available today allow us to update the content of the response, but could we do the same with the text-box element?

Thanks in advance!

Seeking advice from anyone who works with Svelte

The Deep Chat web component can be used in Svelte/Sveltekit, but I believe the live code examples do not portray the best way this component should be implemented.
My experience with Svelte is limited, hence when setting up the code examples I had difficulty passing JSON/array values into the component properties, e.g. initialMessages, and the only way I got this to work was by stringifying the values. Is there any way to pass values into the properties without stringification?

Live code example - Svelte web app
Live code example - SvelteKit

Questions about names, stream mode and images

Hi Ovidijus,

I have some questions and feedback regarding my current experience with deep-chat.
I'm combining different issues in this single message, so I apologize if this isn't the usual way to do it 🙏


Assistants API

When using the Assistants API, a log is displayed in the developer console, and I don't see an option to hide it. It's really a minor detail, but I wanted to share it with you.


Names and initial buttons

I've developed a function that hides the name when a message takes the form of a button (class deep-chat-button), as I feel this looks more user-friendly.

My solution feels a bit hacky. Do you have any suggestions for a cleaner approach?
The function looks like this:

export function hideNamesOnButtonMessages(chatId) {
  const intervalId = setInterval(() => {
    const chat = document.getElementById(chatId);
    if (chat && chat.shadowRoot) {
      const outerMessageContainers = chat.shadowRoot.querySelectorAll('.outer-message-container');
      outerMessageContainers.forEach((container) => {
        if (container.querySelector('.deep-chat-button')) {
          const nameElement = container.querySelector('.name');
          if (nameElement) {
            nameElement.remove();
          }
        }
      });
      clearInterval(intervalId);
    }
  }, 0);
}

Stream

When using the OpenAI Chat Completions API, I'm retrieving the token usage stats (completion, prompt and total) through the function bound to chatElementRef.responseInterceptor.

However, I haven't been successful in making this work when the stream mode is enabled. It seems that the function isn't called in this case. Do you have any insights on how to handle this scenario? Is there a specific event triggered in stream mode?


Image

Currently, on some use cases, I'm using the submitUserMessage(text) function for text messages, and it's working perfectly.

I'm interested in knowing whether it's possible to do the same with files in the current version: basically, being able to use my own file uploader and pass the file(s) to a submit-file method or something similar.

Alternatively, is there another way to achieve this?


Please let me know if you would prefer that I separate these into different issues.
Thank you!
