
taskingai's Introduction

https://www.tasking.ai

TaskingAI


TaskingAI brings Firebase's simplicity to AI-native app development. The platform enables the creation of GPTs-like multi-tenant applications using a wide range of LLMs from various providers. It features distinct, modular functions such as Inference, Retrieval, Assistant, and Tool, seamlessly integrated to enhance the development process. TaskingAI’s cohesive design ensures an efficient, intelligent, and user-friendly experience in AI application development.

Key Features

  1. All-In-One LLM Platform: Access hundreds of AI models with unified APIs.
  2. Intuitive UI Console: Simplifies project management and allows in-console workflow testing.
  3. BaaS-Inspired Workflow: Separate AI logic (server-side) from product development (client-side), offering a clear pathway from console-based prototyping to scalable solutions using RESTful APIs and client SDKs.
  4. Customizable Integration: Enhance LLM functionalities with customizable tools and an advanced Retrieval-Augmented Generation (RAG) system.
  5. Asynchronous Efficiency: Harness Python FastAPI's asynchronous features for high-performance, concurrent computation, enhancing the responsiveness and scalability of the applications.

Integrations

Models: TaskingAI connects with hundreds of LLMs from various providers, including OpenAI, Anthropic, and more. We also allow users to integrate locally hosted models through Ollama, LM Studio, and LocalAI.

Plugins: TaskingAI supports a wide range of built-in plugins to empower your AI agents, including Google search, website reader, stock market retrieval, and more. Users can also create custom tools to meet their specific needs.

What Can You Build with TaskingAI?

  • Interactive Application Demos: Quickly create and deploy engaging application demos using TaskingAI's UI Console. It’s an ideal environment for demonstrating the potential of AI-native apps with real-time interaction and user engagement.

  • AI Agents for Enterprise Productivity: Create AI agents that utilize collective knowledge and tools to boost teamwork and productivity. TaskingAI streamlines the development of these shared AI resources, making it easier to foster collaboration and support across your organization. Leverage our Client SDKs and REST APIs for seamless integration and customization.

  • Multi-Tenant AI-Native Applications for Business: With TaskingAI, build robust, multi-tenant AI-native applications that are ready for production. It's perfectly suited for handling varied client needs while maintaining individual customization, security, and scalability. For serverless deployment, please check out our official website to learn more.

# Easily build an agent with the TaskingAI Client SDK
import taskingai
# Note: the exact import paths below may vary with the SDK version.
from taskingai.assistant import (
    AssistantTool,
    AssistantToolType,
    AssistantRetrieval,
    AssistantRetrievalType,
)
from taskingai.assistant.memory import AssistantNaiveMemory

assistant = taskingai.assistant.create_assistant(
    model_id="YOUR_TASKINGAI_MODEL_ID",
    memory=AssistantNaiveMemory(),
    tools=[
        AssistantTool(
            type=AssistantToolType.ACTION,
            id=action_id,  # ID of a previously created action
        )
    ],
    retrievals=[
        AssistantRetrieval(
            type=AssistantRetrievalType.COLLECTION,
            id=collection_id,  # ID of a previously created collection
        )
    ],
)

Please give us a FREE STAR 🌟 if you find it helpful 😇


Why TaskingAI?

Problems with existing solutions 🙁

LangChain is a tool framework for LLM application development, but it faces practical limitations:

  • Statelessness: Relies on client-side or external services for data management.
  • Scalability Challenges: Statelessness impacts consistent data handling across sessions.
  • External Dependencies: Depends on outside resources like model SDKs and vector storage.

OpenAI's Assistant API excels in delivering GPTs-like functionalities but comes with its own constraints:

  • Tied Functionalities: Integrations like tools and retrievals are tied to each assistant, not suitable for multi-tenant applications.
  • Proprietary Limitations: Restricted to OpenAI models, unsuitable for diverse needs.
  • Customization Limits: Users cannot customize agent configurations such as memory and the retrieval system.

How TaskingAI solves the problem 😃

TaskingAI addresses these challenges with a decoupled modular design and extensive model compatibility, adhering to open-source principles.

Perfect for developers who need a scalable and customizable AI development environment, TaskingAI enhances project management with a user-friendly UI console. It supports development through Client SDKs and REST APIs, and its server efficiently stores data for various models. Importantly, TaskingAI natively supports vector storage, essential for the Retrieval-Augmented Generation (RAG) system, thereby facilitating a seamless progression from prototype to production.
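For example, the built-in vector storage is exposed through the SDK's retrieval module. A minimal sketch, assuming an embedding model has already been created in the console (the query_chunks call and its parameters may differ by SDK version):

import taskingai

taskingai.init(api_key="YOUR_API_KEY", host="http://localhost:8080")

# Create a collection backed by the built-in vector storage.
collection = taskingai.retrieval.create_collection(
    embedding_model_id="YOUR_EMBEDDING_MODEL_ID",
    capacity=1000,
)

# Insert a text chunk; it is embedded and stored server-side.
chunk = taskingai.retrieval.create_chunk(
    collection_id=collection.collection_id,
    content="TaskingAI natively supports vector storage for RAG.",
)

# Query the collection for the most relevant chunks.
results = taskingai.retrieval.query_chunks(
    collection_id=collection.collection_id,
    query_text="What does TaskingAI support?",
    top_k=3,
)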

Architecture

TaskingAI's architecture is designed with modularity and flexibility at its core, enabling compatibility with a wide spectrum of LLMs. This adaptability allows it to effortlessly support a variety of applications, from straightforward demos to sophisticated, multi-tenant AI systems. Constructed on a foundation of open-source principles, TaskingAI incorporates numerous open-source tools, ensuring that the platform is not only versatile but also customizable.

Nginx: Functions as the frontend web server, efficiently routing traffic to the designated services within the architecture.

Frontend (TypeScript + React): This interactive and responsive user interface is built with TypeScript and React, allowing users to smoothly interact with backend APIs.

Backend (Python + FastAPI): The backend, engineered with Python and FastAPI, offers high performance stemming from its asynchronous design. It manages business logic, data processing, and serves as the conduit between the frontend and AI inference services. Python's widespread use invites broader contributions, fostering a collaborative environment for continuous improvement and innovation.

TaskingAI-Inference: Dedicated to AI model inference, this component adeptly handles tasks such as response generation and natural language input processing. It's another standout project within TaskingAI's suite of open-source offerings.

TaskingAI Core Services: Comprises various services including Model, Assistant, Retrieval, and Tool, each integral to the platform's operation.

PostgreSQL + PGVector: Serves as the primary database, with PGVector enhancing vector operations for embedding comparisons, crucial for AI functionalities.

Redis: Delivers high-performance data caching, crucial for expediting response times and bolstering data retrieval efficiency.

Quickstart with Docker

The simplest way to run a self-hosted TaskingAI Community Edition is through Docker.

Prerequisites

  • Docker and Docker Compose installed on your machine.
  • Git installed for cloning the repository.
  • A Python environment (Python 3.8 or above) for running the client SDK.

Installation

First, clone the TaskingAI (community edition) repository from GitHub.

git clone https://github.com/taskingai/taskingai.git
cd taskingai

Inside the cloned repository, go to the docker directory and launch the services using Docker Compose.

cd docker
docker-compose -p taskingai up -d

Once the service is up, access the TaskingAI console through your browser with the URL http://localhost:8080. The default username and password are admin and TaskingAI321.

Upgrade

If you have already installed TaskingAI with a previous version and want to upgrade to the latest version, first update the repository.

git pull origin master

Then stop the current Docker services, pull the latest images, and restart the services.

cd docker
docker-compose -p taskingai down
docker-compose -p taskingai pull
docker-compose -p taskingai up -d

Don't worry about data loss; your data will be automatically migrated to the latest version schema if needed.

TaskingAI UI Console

TaskingAI Console Demo

Watch the TaskingAI Console Demo Video to see the console in action.

TaskingAI Client SDK

Once the console is up, you can programmatically interact with the TaskingAI server using the TaskingAI client SDK.

Ensure you have Python 3.8 or above installed, and set up a virtual environment (optional but recommended). Install the TaskingAI Python client SDK using pip.
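For example, a typical virtual environment setup (adjust the activation command for your shell):

python3 -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate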

pip install taskingai

Here is a client code example:

import taskingai
from taskingai.assistant.memory import AssistantNaiveMemory

taskingai.init(api_key='YOUR_API_KEY', host='http://localhost:8080')

# Create a new assistant
assistant = taskingai.assistant.create_assistant(
    model_id="YOUR_MODEL_ID",
    memory=AssistantNaiveMemory(),
)

# Create a new chat
chat = taskingai.assistant.create_chat(
    assistant_id=assistant.assistant_id,
)

# Send a user message
taskingai.assistant.create_message(
    assistant_id=assistant.assistant_id,
    chat_id=chat.chat_id,
    text="Hello!",
)

# Generate an assistant response
assistant_message = taskingai.assistant.generate_message(
    assistant_id=assistant.assistant_id,
    chat_id=chat.chat_id,
)

print(assistant_message)

Note that the YOUR_API_KEY and YOUR_MODEL_ID should be replaced with the actual API key and chat completion model ID you created in the console.
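The generate_message call also supports streaming. A minimal sketch, continuing from the chat created above (the MessageChunk import path may vary by SDK version):

from taskingai.client.models import MessageChunk

# Stream the assistant response chunk by chunk.
response = taskingai.assistant.generate_message(
    assistant_id=assistant.assistant_id,
    chat_id=chat.chat_id,
    stream=True,
)
for item in response:
    if isinstance(item, MessageChunk):
        print(item.delta, end="", flush=True)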

You can learn more in the documentation.

Resources

Community and Contribution

Please see our contribution guidelines for how to contribute to the project.

License and Code of Conduct

TaskingAI is released under a specific TaskingAI Open Source License. By contributing to this project, you agree to abide by its terms.

Support and Contact

For support, please refer to our documentation or contact us at [email protected].

taskingai's People

Contributors

bufferoverflow, dependabot[bot], dynamesc, eltociear, jameszyao, jishen027, simsonw, superiorlu, taskingaijc, taskingaiwww


taskingai's Issues

LOGIN ISSUE

I am not sure what's happening. It's taking a long time to log in after using the default username and password. Is this a bug?

Can not Create Chunk by Client SDK

Describe the bug
Can not Create Chunk by Client SDK

To Reproduce
Steps to reproduce the behavior:
My source code is:


import taskingai


taskingai.init(api_key='tkOb2nWY19LJq2IpyXRnH82YRL4tv2sD', host='https://taskai.bnuzh.free.hr')

collection = taskingai.retrieval.create_collection(
    embedding_model_id="TpnkRhL0",
    capacity=1000,
    name = "BNU_GPT_v1.1"
)

import taskingai
from taskingai.retrieval import Chunk

collection = taskingai.retrieval.get_collection(
    collection_id="DbgYoclelujzk2p6hsnc4u23"
)


chunk: Chunk = taskingai.retrieval.create_chunk(
    collection_id=collection.collection_id,
    content= "111",
)

and an error occurred:

...
ApiException: (400)
Reason: Bad Request
HTTP response headers: Headers({'server': 'openresty', 'date': 'Sat, 23 Mar 2024 08:33:56 GMT', 'content-type': 'application/json', 'content-length': '297', 'connection': 'keep-alive'})
HTTP response body: b'{"status":"error","error":{"code":"PROVIDER_API_ERROR","message":"Error on calling provider model API with non-JSON response: <html>\\r\\n<head><title>502 Bad Gateway</title></head>\\r\\n<body>\\r\\n<center><h1>502 Bad Gateway</h1></center>\\r\\n<hr><center>openresty</center>\\r\\n</body>\\r\\n</html>\\r\\n"}}

Looking forward user can define model url for openai model

Is your feature request related to a problem? Please describe.
Yes, currently users are unable to define the model URL for OpenAI models, which limits their flexibility in using custom models.

Describe the solution you'd like
I would like to have the ability to define a custom model URL for OpenAI models. This would allow users to specify their own model URL, enabling them to use custom-trained models or models hosted on different platforms.

Describe alternatives you've considered
One alternative solution could be to provide a configuration option where users can specify the model URL in their code or configuration files. However, having a dedicated parameter or API method specifically for defining the model URL would provide a more straightforward and user-friendly approach.

Additional context
Allowing users to define the model URL would greatly enhance the flexibility and customization options for working with OpenAI models. It would enable users to leverage their own trained models or models hosted on other platforms, expanding the possibilities for various applications and use cases.

Feel free to modify or add more details to the suggested answer based on your specific requirements.

Ollama support and support for Nomic Embedding.

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
Support for nomic Embedding. Considering it's pretty fast and accurate and recently included in Ollama, support would be a good alternative to other embedding models already there.

Describe the solution you'd like
A clear and concise description of what you want to happen.
Integrate nomic embedding models.

with postgres_db_pool.get_db_connection() as connection

Describe the bug
Using with postgres_db_pool.get_db_connection() as connection raises an AttributeError.

To Reproduce
Steps to reproduce the behavior:

  1. AttributeError: __enter__ is raised when the connection pool is used as a synchronous context manager.
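The traceback suggests get_db_connection() returns an asynchronous context manager, in which case a plain with statement raises AttributeError: __enter__. A hedged sketch of the likely fix, based only on the call shown in the title:

# Hypothetical: assumes get_db_connection() returns an async context manager
# and is used inside an async function.
async with postgres_db_pool.get_db_connection() as connection:
    ...  # use the connection here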


Where is repo of taskingai-inference?

Describe the bug
Hi, TaskingAI is amazing work! I'm deploying the platform from source on my MacBook. I followed the README, but I found an error: the README links taskingai-inference to https://github.com/taskingai/taskingai-inference, but clicking the URL leads to a GitHub 404 page. taskingai-inference is a required service, so please update the link ASAP. Thanks.

By the way, the docs on your official website (https://docs.tasking.ai/docs/taskingai-inference/overview) have another error about taskingai-inference: the taskingai-inference doc page links the repository to https://github.com/TaskingAI/Universal-LLM-API, which is also a 404 page on GitHub.

How to make streaming calls to an already established Assistant?

How can I use the TaskingAI Python SDK to make streaming calls to an already established Assistant? Currently, I am using the following code:

# Start an async chat completion task with streaming

import time
import torch
import uvicorn
from pydantic import BaseModel, Field
from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from contextlib import asynccontextmanager
from typing import Any, Dict, List, Literal, Optional, Union
from transformers import AutoTokenizer, AutoModel, AutoModelForCausalLM

from taskingai.inference import a_chat_completion
from taskingai.inference import UserMessage, SystemMessage
from enum import Enum
import taskingai
from taskingai.assistant import Assistant
from taskingai.assistant.message import Message

class MessageRole(str, Enum):
    USER = "user"
    ASSISTANT = "assistant"

class MessageChunk(BaseModel):
    role: MessageRole = Field(...)
    index: int = Field(...)
    delta: str = Field(...)
    created_timestamp: int = Field(..., ge=0)

taskingai.init(api_key='tkOb2nWY19LJq2IpyXRnH82YRL4tv2sD', host='https://taskai.bnuzh.free.hr')

assistant: Assistant = taskingai.assistant.get_assistant(
    assistant_id="X5lMiuMf1AspX3GBKm1YIVN0"
)
chat = taskingai.assistant.create_chat(
    assistant_id=assistant.assistant_id,
)
assistant_message_response = taskingai.assistant.generate_message(
    assistant_id=assistant.assistant_id,
    chat_id=chat.chat_id,
    stream=True,
)

print("Assistant:", end=" ", flush=True)
for item in assistant_message_response:
    if isinstance(item, MessageChunk):
        print(item.delta, end="", flush=True)

How to set proxy config for taskingai?

My self-hosted TaskingAI server is behind a proxy. It cannot access the Internet (e.g., the OpenAI API) without going through the proxy.
I attempted to set the HTTPS_PROXY environment variable, but it didn't work.
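One common Docker pattern (shown only as an unverified sketch; the service names are taken from the shipped docker-compose.yml) is to set the proxy variables on the backend containers so the server processes inherit them:

backend-web:
  environment:
    HTTP_PROXY: http://your-proxy:3128   # hypothetical proxy address
    HTTPS_PROXY: http://your-proxy:3128
    NO_PROXY: localhost,127.0.0.1,backend-inference,backend-plugin,db,cache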

Models: REQUEST_VALIDATION_ERROR when adding custom models

Describe the bug
When attempting to add a custom model, a property error is raised. This happens with any combination of properties, toggled on or off.

{
    "status": "error",
    "error": {
        "code": "REQUEST_VALIDATION_ERROR",
        "message": "Properties are not allowed for this model schema."
    }
}

To Reproduce
Steps to reproduce the behavior:

  1. Navigate to new Model
  2. Select Custom Model
  3. Select any custom hosted model (same error on all three)
  4. Fill out and click the green button

Expected behavior
Creates a custom hosted model. In my case litellm endpoint.


Desktop (please complete the following information):

  • OS: Mac
  • Browser: Brave
  • Version: 0.1.3

Additional context
I would like to access models behind a LiteLLM proxy (e.g., a llama2_70b text completion model), but can't get it to work on my machine or my colleagues' machines.

feature request: create record by uploading a file (PDF or word)

I am proposing a new feature to enhance the functionality of the platform by allowing users to create records not only through text input but also by uploading files, specifically PDFs and Word documents. This feature could significantly streamline the data entry process for users who often work with document files and wish to integrate their contents directly into the platform.

Current Limitation:
Currently, Tasking AI only supports record creation through text input. This requires users to manually copy and paste or retype the content from their documents, which can be time-consuming and prone to errors.
retrieval

Proposed Feature:
I suggest implementing a feature that enables users to upload a PDF or Word document as an alternative method to create a record. The system should automatically extract the text from the uploaded document and use it to create a new record. This process should maintain the original formatting of the document as closely as possible to ensure the integrity of the data being imported.
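Until such a feature exists, here is a client-side workaround sketch (assuming the third-party pypdf package and an existing collection; this is not an official TaskingAI utility):

import taskingai
from pypdf import PdfReader

taskingai.init(api_key="YOUR_API_KEY", host="http://localhost:8080")

# Extract text from the PDF and push each page as a chunk.
reader = PdfReader("document.pdf")
for page in reader.pages:
    text = page.extract_text()
    if text and text.strip():
        taskingai.retrieval.create_chunk(
            collection_id="YOUR_COLLECTION_ID",
            content=text,
        )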

Installed through docker but stuck at the Login phase.

Describe the bug
Login doesn't seem to work, it's stuck.

To Reproduce
Steps to reproduce the behavior:

  1. Git Cloned the repository
  2. Modified the docker compose yaml with new aes and jwt generated keys
  3. docker-compose -p taskingai up -d
  4. Tried to log in using the default password and admin user at localhost:8080 on the host OS

Expected behavior
Completing the login phase.

Screenshots
Login screenshot and the active containers shown on Docker Desktop (attached in the original issue).

Desktop (please complete the following information):

  • OS: Windows 10
  • Browser: Chrome and Edge

Additional context
From the Docker Desktop log, the part where I try logging in:
2024-03-14 17:10:56 nginx-1 | 2024/03/14 16:10:56 [error] 28#28: *47 connect() failed (113: No route to host) while connecting to upstream, client: 172.19.0.1, server: , request: "POST /api/v1/admins/login HTTP/1.1", upstream: "http://172.19.0.7:8000/api/v1/admins/login", host: "localhost:8080", referrer: "http://localhost:8080/auth/signin"
2024-03-14 17:10:56 nginx-1 | 172.19.0.1 - - [14/Mar/2024:16:10:56 +0000] "POST /api/v1/admins/login HTTP/1.1" 502 559 "http://localhost:8080/auth/signin" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36" "-"

Concurrent testing: asyncio.exceptions.CancelledError: Cancelled by cancel scope 10abec670

Describe the bug
asyncio.exceptions.CancelledError

To Reproduce
Steps to reproduce the behavior:

  1. Concurrency-test an assistant: create chats concurrently, create messages, and generate results
  2. Afterwards, the server reports an error

Expected behavior
INFO: 127.0.0.1:51131 - "POST /api/v1/assistants/X5lMSQinHLE8beIWz7mjDqxM/chats HTTP/1.1" 200 OK
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/aioredis/connection.py", line 860, in send_packed_command
await asyncio.wait_for(
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
return await fut
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/aioredis/connection.py", line 842, in _send_packed_command
await self._writer.drain()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/streams.py", line 387, in drain
await self._protocol._drain_helper()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/streams.py", line 190, in _drain_helper
raise ConnectionResetError('Connection lost')
ConnectionResetError: Connection lost

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in call
return await self.app(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in call
await super().call(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/applications.py", line 123, in call
await self.middleware_stack(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 164, in call
await self.app(scope, receive, _send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/middleware/cors.py", line 83, in call
await self.app(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 62, in call
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/routing.py", line 758, in call
await self.middleware_stack(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/routing.py", line 778, in app
await route.handle(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/routing.py", line 299, in handle
await self.app(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/routing.py", line 79, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/routing.py", line 74, in app
response = await func(request)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/fastapi/routing.py", line 294, in app
raw_response = await run_endpoint_function(
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
File "/Users/yangzhibo/Projects/TaskingAI/backend/app/routes/auto/message.py", line 114, in api_create
entity = await ops.create(
File "/Users/yangzhibo/Projects/TaskingAI/backend/app/operators/assistant/message.py", line 35, in create
chat = await chat_ops.get(
File "/Users/yangzhibo/Projects/TaskingAI/backend/tkhelper/models/operator/postgres_operator.py", line 102, in get
entity = await self.redis.get(**kwargs)
File "/Users/yangzhibo/Projects/TaskingAI/backend/tkhelper/models/redis.py", line 49, in get
data = await self.redis_conn.get_object(key)
File "/Users/yangzhibo/Projects/TaskingAI/backend/tkhelper/database/redis/connection.py", line 94, in get_object
value_string = await self.redis.get(key)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/aioredis/client.py", line 1084, in execute_command
await conn.send_command(*args)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/aioredis/connection.py", line 885, in send_command
await self.send_packed_command(
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/aioredis/connection.py", line 868, in send_packed_command
await self.disconnect()
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/aioredis/connection.py", line 806, in disconnect
await self._writer.wait_closed() # type: ignore[union-attr]
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/streams.py", line 359, in wait_closed
await self._protocol._get_close_waiter(self)
asyncio.exceptions.CancelledError: Cancelled by cancel scope 10abec670
2024-04-01 12:40:37,629 - asyncio - WARNING - socket.send() raised exception.
2024-04-01 12:40:37,646 - asyncio - WARNING - socket.send() raised exception.
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/aioredis/connection.py", line 860, in send_packed_command
await asyncio.wait_for(
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
return await fut
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/aioredis/connection.py", line 842, in _send_packed_command
await self._writer.drain()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/streams.py", line 387, in drain
await self._protocol._drain_helper()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/streams.py", line 190, in _drain_helper
raise ConnectionResetError('Connection lost')
ConnectionResetError: Connection lost

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in call
return await self.app(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in call
await super().call(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/applications.py", line 123, in call
await self.middleware_stack(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 164, in call
await self.app(scope, receive, _send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/middleware/cors.py", line 83, in call
await self.app(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 62, in call
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/routing.py", line 758, in call
await self.middleware_stack(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/routing.py", line 778, in app
await route.handle(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/routing.py", line 299, in handle
await self.app(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/routing.py", line 79, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/starlette/routing.py", line 74, in app
response = await func(request)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/fastapi/routing.py", line 294, in app
raw_response = await run_endpoint_function(
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
File "/Users/yangzhibo/Projects/TaskingAI/backend/app/routes/auto/message.py", line 114, in api_create
entity = await ops.create(
File "/Users/yangzhibo/Projects/TaskingAI/backend/app/operators/assistant/message.py", line 60, in create
await update_chat_memory(chat=chat, memory=updated_chat_memory)
File "/Users/yangzhibo/Projects/TaskingAI/backend/app/services/assistant/chat.py", line 22, in update_chat_memory
chat = await chat_ops.update(
File "/Users/yangzhibo/Projects/TaskingAI/backend/tkhelper/models/operator/postgres_operator.py", line 53, in update
entity = await self.get(**kwargs, postgres_conn=conn)
File "/Users/yangzhibo/Projects/TaskingAI/backend/tkhelper/models/operator/postgres_operator.py", line 102, in get
entity = await self.redis.get(**kwargs)
File "/Users/yangzhibo/Projects/TaskingAI/backend/tkhelper/models/redis.py", line 49, in get
data = await self.redis_conn.get_object(key)
File "/Users/yangzhibo/Projects/TaskingAI/backend/tkhelper/database/redis/connection.py", line 94, in get_object
value_string = await self.redis.get(key)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/aioredis/client.py", line 1084, in execute_command
await conn.send_command(*args)
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/aioredis/connection.py", line 885, in send_command
await self.send_packed_command(
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/aioredis/connection.py", line 868, in send_packed_command
await self.disconnect()
File "/Users/yangzhibo/Projects/TaskingAI/backend/.venv/lib/python3.9/site-packages/aioredis/connection.py", line 806, in disconnect
await self._writer.wait_closed() # type: ignore[union-attr]
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/streams.py", line 359, in wait_closed
await self._protocol._get_close_waiter(self)
asyncio.exceptions.CancelledError: Cancelled by cancel scope 10abec670
2024-04-01 12:40:37,681 - asyncio - WARNING - socket.send() raised exception.

docker-compose version is unsupported

Describe the bug
When executing docker-compose up, the following error occurs:

ERROR: Version in "./docker-compose.yml" is unsupported. You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the `services` key, or omit the `version` key and place your service definitions at the root of the file to use version 1.
For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/

To Reproduce
Steps to reproduce the behavior:

  1. cd docker
  2. docker-compose up

Expected behavior
The version field in docker-compose.yml should use a supported Compose file version:

# docker-compose.yml
# wrong:
version: "v0.1.1"

# right:
version: "3.3"


A tip about docker upgrades and static volumes

Hi,
I observed that your Docker Compose file creates named volumes internal to the Docker filesystem. This may seem advantageous for "it runs on my computer" problems, but in my experience it can cause trouble during upgrades (not upgrades of your code): in rare cases, a Docker upgrade may corrupt Docker's internal filesystem. I have lived through three such events. I would advise you to move these volumes to the host filesystem, ideally under the installation directory, e.g. ./database. Thanks for your interest.
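As a sketch of that advice, a named volume in docker-compose.yml could be replaced with a host bind mount (paths are illustrative; take the actual service name and data path from the shipped compose file):

services:
  db:
    volumes:
      - ./database:/var/lib/postgresql/data  # host directory instead of a named volume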

The models page cannot display the results returned by non-streaming models.

I created an "openai function call" model under "custom_host", but its output is not streaming. With the following choices made while creating the model, an error occurred: "Assistant model ****** does not support streaming".

Properties
  • Function call: × (indicates if the model supports function calls)
  • Streaming: × (indicates if the model supports streaming of text chunks)

TaskingAI-WXWork-assistant-bot

https://github.com/luolin-ai/TaskingAI-WXWork-assistant-bot
The TaskingAI-WXWork Assistant Bot is designed for WeChat Work (WeCom) users and integrates the core AI technology of the TaskingAI platform. This GPTs-style native agent provides highly customizable chat models, accurate information search, flexible assistant functions, and seamless interaction with external resources, aiming to deliver automated, highly efficient AI assistant support. It greatly improves internal communication and workflow automation and accelerates the enterprise's digital and automation transformation.

Core features
Private deployment: emphasizes enterprise data security and privacy, supporting deployment according to the enterprise's security policies and requirements.
Highly customizable interaction models: customized, efficient interaction modes and logic for different work scenarios and user needs, ensuring personalized service.
Accurate information search: advanced search technology quickly and precisely extracts the required information from vast amounts of data, providing answers highly relevant to the query.
Comprehensive assistant functions: covers project management, collaboration, customer service, internal training, and more, serving as an intelligent assistant at the center of enterprise operations.
Seamless external resource integration: deep integration with external APIs and systems to obtain and process external data in real time, providing the latest market trends and customer information.

Application scenarios
Internal communication: automatically answers common questions, reducing the load on human support staff and improving the efficiency and quality of internal communication.
Customer management: intelligent analysis of customer data to provide customized service plans, improving customer satisfaction and loyalty.
Project management: helps project teams plan efficiently and track progress to ensure projects are delivered on time.
Knowledge management and internal training: quickly searches internal enterprise databases to give employees instant knowledge support and training, promoting knowledge sharing and team growth.

Technical advantages
AI technology integration: combines TaskingAI's core multi-model, search, assistant, and tool capabilities to provide strong AI support.
Real-time updates and optimization: continuous learning and automated training keep the agent's response strategies updated and optimized to meet the enterprise's changing needs.
Easy integration and extension: designed with the complexity of enterprise IT architectures in mind, ensuring a simple integration process and flexible extensibility.

Summary
The TaskingAI-WXWork Assistant Bot leverages the advanced AI technology of the TaskingAI platform and the broad user base of WeChat Work to bring enterprises an unprecedented customized work experience. It not only optimizes workflows and improves efficiency but also improves the speed and quality of enterprise decision-making, providing strong support for digital transformation. It is a powerful driver for enterprise development and innovative work models.

Can TaskingAI be adapted to provide OpenAI API format calls?

The TaskingAI project is the most user-friendly project I've used so far, and I'm very grateful for the hard work of all the developers involved. I have a request that I'm not sure whether you would consider.

Is your feature request related to a problem? Please describe.
Some existing excellent frontends (clients) cannot integrate with TaskingAI.

Describe the solution you'd like
Currently, frontend interfaces of AI applications generally adhere to the OpenAI API standards. However, the API provided by TaskingAI cannot be integrated with existing frontend projects that are already adapted to the OpenAI API, such as Chat-next-web.

Describe alternatives you've considered
If TaskingAI could provide an API format compatible with the OpenAI API, it would be possible to integrate with many more clients.
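For illustration, such compatibility would let existing clients reach TaskingAI through the standard OpenAI SDK; the endpoint below is hypothetical and does not exist today:

from openai import OpenAI

# Hypothetical: assumes a future OpenAI-compatible endpoint on the TaskingAI server.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="YOUR_TASKINGAI_API_KEY")
completion = client.chat.completions.create(
    model="YOUR_ASSISTANT_OR_MODEL_ID",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)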

Backend Startup Failure Due to Unset Environment Variables

Describe the bug
Starting the backend fails when required environment variables such as TASKINGAI_PLUGIN_URL are not set.

To Reproduce
Steps to reproduce the behavior:

C:\Users\86185\Desktop\cuiyi\TaskingAI-0.2.0\TaskingAI-0.2.0\backend.venv\Scripts\python.exe C:\Users\86185\Desktop\cuiyi\TaskingAI-0.2.0\TaskingAI-0.2.0\backend\app\main.py
Traceback (most recent call last):
  File "C:\Users\86185\Desktop\cuiyi\TaskingAI-0.2.0\TaskingAI-0.2.0\backend\app\main.py", line 2, in <module>
    from app.config import CONFIG
  File "C:\Users\86185\Desktop\cuiyi\TaskingAI-0.2.0\TaskingAI-0.2.0\backend\app\config.py", line 103, in <module>
    CONFIG = Config()
  File "C:\Users\86185\Desktop\cuiyi\TaskingAI-0.2.0\TaskingAI-0.2.0\backend\app\config.py", line 86, in __init__
    self.TASKINGAI_PLUGIN_URL = load_str_env("TASKINGAI_PLUGIN_URL", required=True)
  File "C:\Users\86185\Desktop\cuiyi\TaskingAI-0.2.0\TaskingAI-0.2.0\backend\app\config.py", line 33, in load_str_env
    raise Exception(f"Env {name} is not set")
Exception: Env TASKINGAI_PLUGIN_URL is not set


An error occurred when i import taskingai, seems to be related with Python Pydantic Package

Describe the bug
I changed my Pydantic Python package version, and when I use the taskingai SDK, an error occurs.

To Reproduce
My Python version is 3.8.0.
Here is my pip list:

accelerate                    0.28.0
aiohttp                       3.9.3
aiosignal                     1.3.1
annotated-types               0.6.0
anyio                         4.3.0
async-timeout                 4.0.3
attrs                         23.2.0
auto_gptq                     0.7.1
certifi                       2024.2.2
charset-normalizer            3.3.2
click                         8.1.7
coloredlogs                   15.0.1
datasets                      2.18.0
dill                          0.3.8
distro                        1.9.0
einops                        0.7.0
exceptiongroup                1.2.0
fastapi                       0.110.0
filelock                      3.13.1
frozenlist                    1.4.1
fsspec                        2024.2.0
gekko                         1.1.0
h11                           0.14.0
httpcore                      1.0.4
httpx                         0.27.0
huggingface-hub               0.21.4
humanfriendly                 10.0
idna                          3.6
Jinja2                        3.1.3
MarkupSafe                    2.1.5
mpmath                        1.3.0
multidict                     6.0.5
multiprocess                  0.70.16
networkx                      3.1
numpy                         1.24.4
nvidia-cublas-cu12            12.1.3.1
nvidia-cuda-cupti-cu12        12.1.105
nvidia-cuda-nvrtc-cu12        12.1.105
nvidia-cuda-runtime-cu12      12.1.105
nvidia-cudnn-cu12             8.9.2.26
nvidia-cufft-cu12             11.0.2.54
nvidia-curand-cu12            10.3.2.106
nvidia-cusolver-cu12          11.4.5.107
nvidia-cusparse-cu12          12.1.0.106
nvidia-nccl-cu12              2.19.3
nvidia-nvjitlink-cu12         12.4.99
nvidia-nvtx-cu12              12.1.105
openai                        1.14.2
optimum                       1.18.0
packaging                     24.0
pandas                        2.0.3
peft                          0.10.0
pip                           23.3.1
protobuf                      5.26.1
psutil                        5.9.8
pyarrow                       15.0.2
pyarrow-hotfix                0.6
pydantic                      1.10.11
pydantic_core                 2.16.3
python-dateutil               2.9.0.post0
pytz                          2024.1
PyYAML                        6.0.1
regex                         2023.12.25
requests                      2.31.0
rouge                         1.0.1
safetensors                   0.4.2
sentencepiece                 0.2.0
setuptools                    68.2.2
six                           1.16.0
sniffio                       1.3.1
sse-starlette                 2.0.0
starlette                     0.36.3
sympy                         1.12
taskingai                     0.2.1
tiktoken                      0.6.0
tokenizers                    0.15.2
torch                         2.2.1
tqdm                          4.66.2
transformers                  4.39.0
transformers-stream-generator 0.0.5
triton                        2.2.0
typing_extensions             4.10.0
tzdata                        2024.1
urllib3                       2.2.1
uvicorn                       0.29.0
wheel                         0.41.2
xxhash                        3.4.1
yarl                          1.9.4

Error Log
python code

  import taskingai
  from taskingai.assistant import Assistant

log

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/sse_starlette/sse.py", line 281, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/sse_starlette/sse.py", line 270, in wrap
    await func()
  File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/sse_starlette/sse.py", line 221, in listen_for_disconnect
    message = await receive()
  File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 535, in receive
    await self.message_event.wait()
  File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/asyncio/locks.py", line 309, in wait
    await fut
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

  + Exception Group Traceback (most recent call last):
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 407, in run_asgi
  |     result = await app(  # type: ignore[func-returns-value]
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
  |     return await self.app(scope, receive, send)
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/fastapi/applications.py", line 1054, in __call__
  |     await super().__call__(scope, receive, send)
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/applications.py", line 123, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/middleware/errors.py", line 186, in __call__
  |     raise exc
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/middleware/errors.py", line 164, in __call__
  |     await self.app(scope, receive, _send)
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/middleware/cors.py", line 91, in __call__
  |     await self.simple_response(scope, receive, send, request_headers=headers)
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/middleware/cors.py", line 146, in simple_response
  |     await self.app(scope, receive, send)
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
  |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
  |     raise exc
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/routing.py", line 758, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/routing.py", line 778, in app
  |     await route.handle(scope, receive, send)
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/routing.py", line 299, in handle
  |     await self.app(scope, receive, send)
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/routing.py", line 79, in app
  |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
  |     raise exc
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/starlette/routing.py", line 77, in app
  |     await response(scope, receive, send)
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/sse_starlette/sse.py", line 281, in __call__
  |     await wrap(partial(self.listen_for_disconnect, receive))
  |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 678, in __aexit__
  |     raise BaseExceptionGroup(
  | exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/sse_starlette/sse.py", line 270, in wrap
    |     await func()
    |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/sse_starlette/sse.py", line 251, in stream_response
    |     async for data in self.body_iterator:
    |   File "self_api_v4.py", line 218, in predict
    |     import taskingai
    |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/taskingai/__init__.py", line 1, in <module>
    |     from .config import *
    |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/taskingai/config.py", line 8, in <module>
    |     from .client.utils import check_kwargs
    |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/taskingai/client/utils.py", line 4, in <module>
    |     from taskingai.client.api_client import SyncApiClient, AsyncApiClient
    |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/taskingai/client/api_client.py", line 22, in <module>
    |     from taskingai.client import rest
    |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/taskingai/client/rest.py", line 21, in <module>
    |     from .stream import Stream, AsyncStream
    |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/taskingai/client/stream.py", line 6, in <module>
    |     from .models import MessageChunk, Message, ChatCompletion, ChatCompletionChunk
    |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/taskingai/client/models/__init__.py", line 18, in <module>
    |     from .entities import *
    |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/taskingai/client/models/entities/__init__.py", line 20, in <module>
    |     from .assistant import *
    |   File "/home/edge2/miniconda3/envs/fastapi/lib/python3.8/site-packages/taskingai/client/models/entities/assistant.py", line 24, in <module>
    |     class Assistant(BaseModel):
    |   File "pydantic/main.py", line 197, in pydantic.main.ModelMetaclass.__new__
    |   File "pydantic/fields.py", line 504, in pydantic.fields.ModelField.infer
    |   File "pydantic/schema.py", line 1022, in pydantic.schema.get_annotation_from_field_info
    | ValueError: On field "tools" the following field constraints are set but not enforced: min_length, max_length. 
    | For more details see https://docs.pydantic.dev/usage/schema/#unenforced-field-constraints
    +------------------------------------

Desktop (please complete the following information):

  • OS: [Debian]
  • Version [11]

Introduce an option to split text on specific characters or symbols like '\n'

Describe the solution you'd like
I would like to suggest an enhancement to the text_splitter feature. Currently, it only allows splitting text based on a specified chunk_size. However, it would be beneficial if we could introduce an option to split the text based on specific characters or symbols, such as '\n' for newline.

This additional functionality would enable users to directly upload standardized documents for automatic creation of records. By leveraging newline or other suitable characters as delimiters, the text would be segmented more accurately, aligning with the structure of the original document.

Implementing this feature would streamline the process of creating records from structured documents, enhancing the overall user experience and efficiency.
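For illustration, the requested behavior amounts to something like the following client-side split (plain Python; not an existing text_splitter option):

def split_by_delimiter(text, delimiter="\n"):
    # Split on the delimiter and drop empty segments.
    return [part for part in text.split(delimiter) if part.strip()]

with open("standardized_document.txt") as f:
    records = split_by_delimiter(f.read())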

Thank you for considering this suggestion. I appreciate your efforts in continuously improving the product.

Best regards,

raise client_error(req.connection_key, exc) from exc aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host localhost:8000 ssl:default [Connect call failed ('127.0.0.1', 8000)]

root@C20240305144798:/www/wwwroot/taskingai/backend# [2024-03-28 04:48:04 +0800] [1964434] [INFO] Starting gunicorn 21.2.0
[2024-03-28 04:48:04 +0800] [1964434] [INFO] Listening at: http://0.0.0.0:8010 (1964434)
[2024-03-28 04:48:04 +0800] [1964434] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2024-03-28 04:48:04 +0800] [1964473] [INFO] Booting worker with pid: 1964473
[2024-03-28 04:48:04 +0800] [1964473] [INFO] Started server process [1964473]
[2024-03-28 04:48:04 +0800] [1964473] [INFO] Waiting for application startup.
2024-03-28 04:48:04,726 - app.fastapi_app - INFO - fastapi app startup...
2024-03-28 04:48:04,726 - app.fastapi_app - INFO - start plugin cache scheduler...
2024-03-28 04:48:04,727 - apscheduler.scheduler - INFO - Adding job tentatively -- it will be properly scheduled when the scheduler starts
2024-03-28 04:48:04,728 - apscheduler.scheduler - INFO - Added job "sync_data" to job store "default"
2024-03-28 04:48:04,728 - apscheduler.scheduler - INFO - Scheduler started
2024-03-28 04:48:04,728 - app.fastapi_app - INFO - Syncing model schema data...
2024-03-28 04:48:04,733 - app.fastapi_app - ERROR - Failed to sync model schema data.
2024-03-28 04:48:04,733 - app.fastapi_app - INFO - fastapi app shutdown...
[2024-03-28 04:48:04 +0800] [1964473] [ERROR] Traceback (most recent call last):
File "/www/server/pyporject_evn/backend_venv/lib/python3.10/site-packages/aiohttp/connector.py", line 992, in _wrap_create_connection
return await self._loop.create_connection(*args, **kwargs)
File "/www/server/pyporject_evn/backend_venv/lib/python3.10/asyncio/base_events.py", line 1055, in create_connection
raise exceptions[0]
File "/www/server/pyporject_evn/backend_venv/lib/python3.10/asyncio/base_events.py", line 1040, in create_connection
sock = await self._connect_sock(
File "/www/server/pyporject_evn/backend_venv/lib/python3.10/asyncio/base_events.py", line 954, in _connect_sock
await self.sock_connect(sock, address)
File "/www/server/pyporject_evn/backend_venv/lib/python3.10/asyncio/selector_events.py", line 502, in sock_connect
return await fut
File "/www/server/pyporject_evn/backend_venv/lib/python3.10/asyncio/selector_events.py", line 537, in _sock_connect_cb
raise OSError(err, f'Connect call failed {address}')
ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 8000)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/www/server/pyporject_evn/backend_venv/lib/python3.10/site-packages/starlette/routing.py", line 734, in lifespan
async with self.lifespan_context(app) as maybe_state:
File "/www/server/pyporject_evn/backend_venv/lib/python3.10/contextlib.py", line 199, in aenter
return await anext(self.gen)
File "/www/wwwroot/taskingai/backend/app/fastapi_app.py", line 43, in lifespan
await sync_data(first_sync=True)
File "/www/wwwroot/taskingai/backend/app/fastapi_app.py", line 21, in sync_data
await sync_model_schema_data()
File "/www/wwwroot/taskingai/backend/app/services/model/model_schema.py", line 38, in sync_model_schema_data
response = await session.get(
File "/www/server/pyporject_evn/backend_venv/lib/python3.10/site-packages/aiohttp/client.py", line 578, in _request
conn = await self._connector.connect(
File "/www/server/pyporject_evn/backend_venv/lib/python3.10/site-packages/aiohttp/connector.py", line 544, in connect
proto = await self._create_connection(req, traces, timeout)
File "/www/server/pyporject_evn/backend_venv/lib/python3.10/site-packages/aiohttp/connector.py", line 911, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
File "/www/server/pyporject_evn/backend_venv/lib/python3.10/site-packages/aiohttp/connector.py", line 1235, in _create_direct_connection
raise last_exc
File "/www/server/pyporject_evn/backend_venv/lib/python3.10/site-packages/aiohttp/connector.py", line 1204, in _create_direct_connection
transp, proto = await self._wrap_create_connection(
File "/www/server/pyporject_evn/backend_venv/lib/python3.10/site-packages/aiohttp/connector.py", line 1000, in _wrap_create_connection
raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host localhost:8000 ssl:default [Connect call failed ('127.0.0.1', 8000)]

[2024-03-28 04:48:04 +0800] [1964473] [ERROR] Application startup failed. Exiting.
[2024-03-28 04:48:04 +0800] [1964473] [INFO] Worker exiting (pid: 1964473)
[2024-03-28 04:48:05 +0800] [1964434] [ERROR] Worker (pid:1964473) exited with code 3
[2024-03-28 04:48:05 +0800] [1964434] [ERROR] Shutting down: Master
[2024-03-28 04:48:05 +0800] [1964434] [ERROR] Reason: Worker failed to boot.

Model type is required for wildcard models

Describe the bug
When running in Docker, trying to add a wildcard-based model fails with the error "Model type is required for wildcard models".

To Reproduce
Steps to reproduce the behavior:

  1. Stand up docker stack
  2. Try to add any wildcard based model

Expected behavior
Expect the model to be added

Desktop (please complete the following information):

  • OS: Windows 11 running on wsl2
  • Browser: tried both Firefox and Chrome

Additional context
Running the latest taskingai docker stack

Is it possible to integrate with Milvus and return similarity scores?

Dear Developers,

I would like to request the following enhancements:

  1. Integrate TaskingAI system with the Milvus database to enable efficient vector similarity search.

  2. Allow the database to return similarity scores along with retrieved chunks, enabling flexible filtering based on similarity thresholds rather than a fixed number of chunks.

These improvements would greatly enhance the precision and flexibility of our system's search and retrieval capabilities.
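For illustration, the second request would enable threshold-based filtering along these lines (the score attribute is hypothetical):

def filter_by_score(chunks, threshold=0.8):
    # Hypothetical: assumes each retrieved chunk exposes a similarity score.
    return [c for c in chunks if c.score >= threshold]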

Thank you for your consideration.

Best regards,

Docker Compose: cannot change the default password

I changed the default password, but it's not working; I can still log in with the old password.

docker-compose.yml file:

backend-web:
  image: taskingai/taskingai-server:v0.2.1
  environment:
    POSTGRES_URL: postgres://postgres:TaskingAI321@db:5432/taskingai
    REDIS_URL: redis://:TaskingAI321@cache:6379/0
    TASKINGAI_INFERENCE_URL: http://backend-inference:8000
    TASKINGAI_PLUGIN_URL: http://backend-plugin:8000
    AES_ENCRYPTION_KEY: b90e4648ad699c3bdf62c0860e09eb9efc098ee75f215bf750847ae19d41e4b0 # replace with your own
    JWT_SECRET_KEY: dbefe42f34473990a3fa903a6a3283acdc3a910beb1ae271a6463ffa5a926bfb # replace with your own
    PURPOSE: WEB
    DEFAULT_ADMIN_USERNAME: admin
    DEFAULT_ADMIN_PASSWORD: TaskingAI3210 # replace with your own
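A likely explanation, offered as an assumption rather than confirmed behavior: DEFAULT_ADMIN_USERNAME and DEFAULT_ADMIN_PASSWORD are presumably only used to seed the admin account the first time the stack initializes its database, so editing them after the Postgres volume has been created leaves the existing admin record untouched. If losing the stored data is acceptable, recreating the stack together with its volumes (docker compose down --volumes, then docker compose up -d) should re-seed the admin with the new password.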

aiohttp.client_exceptions.ClientConnectorError

Describe the bug
When I run TaskingAI, this aiohttp.client_exceptions.ClientConnectorError appears. How can I solve it?

To Reproduce
Steps to reproduce the behavior:

[screenshot attached in the original issue]


Please provide a more understandable doc for LM Studio.

The most common setup for a local LLM with TaskingAI is:
Local LLM server: directly on the OS (because installing Docker NVIDIA drivers is a burden).
TaskingAI: in Docker (because it is easy).
So please state this at the TOP of your documentation, or show example LM Studio addresses in the UI, like:

"If TaskingAI is on docker, address should be 
  http://host.docker.internal:<your port>
  otherwise
  http://localhost:<your port>"

This seems OK so far; it is working. But let me ask you another question: if I use a non-Docker version of TaskingAI and give the address

http://localhost:<your port>

will it connect, given that TaskingAI seems to force https (TLS)?
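A quick way to test either address from Python, as a sketch: it assumes LM Studio's local server is running with its OpenAI-compatible API (which commonly exposes GET /v1/models), and the port below is only an example.

# Reachability check for an LM Studio server; adjust host and port to match.
import asyncio

import aiohttp

async def check(base: str) -> None:
    try:
        async with aiohttp.ClientSession() as session:
            async with session.get(f"{base}/v1/models") as resp:
                print(base, "-> HTTP", resp.status)
    except aiohttp.ClientConnectorError as exc:
        print(base, "-> unreachable:", exc)

# From inside a Docker container, host.docker.internal points at the host;
# from a non-Docker install, plain localhost should work. Both are http://.
asyncio.run(check("http://host.docker.internal:1234"))  # example port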

Feature Request: Integration of Multimodal LLMs

I want to suggest a significant enhancement that could vastly expand the capabilities of TaskingAI: the integration of multimodal Large Language Models (LLMs), particularly those akin to GPT-4V, which can process and understand images alongside text.

I believe that this enhancement aligns with TaskingAI's commitment to innovation and would significantly benefit the diverse user base seeking to explore the frontiers of AI applications.

Bug Report: Body not properly built for POST actions

Describe the bug
When an assistant intends to trigger an action, the body is not properly built according to the schema if it is a POST action.

To Reproduce
Steps to reproduce the behavior:

  1. Create a proper action with the POST method.
  2. Create an Assistant and grant it access to the action created in step 1. The model applied to the assistant should support function calls.
  3. In the Playground, ask the assistant to trigger the action created in step 1.

Expected behavior
The model successfully receives the response from the POST endpoint and generates a reply based on it.

Screenshots
[screenshot attached in the original issue]

Desktop (please complete the following information):

  • OS: MacOS Monterey 12.5.1
  • Browser: Chrome 120.0.6099.109 (Official Build) (arm64)
  • Version 0.1.0

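A minimal sketch of the expected behavior for the bug above (illustrative only, not TaskingAI's actual implementation): the JSON arguments the model produces for the function call should be parsed and sent as the POST body.

# Hypothetical sketch: forward the model's function-call arguments as the
# JSON body of the POST action, per the action's schema.
import asyncio
import json

import aiohttp

async def trigger_post_action(url: str, raw_arguments: str) -> dict:
    body = json.loads(raw_arguments)  # the arguments string the model produced
    async with aiohttp.ClientSession() as session:
        async with session.post(url, json=body) as resp:
            resp.raise_for_status()
            return await resp.json()

# Example against a public echo endpoint:
# asyncio.run(trigger_post_action("https://httpbin.org/post", '{"q": "demo"}'))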

Bug: Cannot add new chunks in console

Describe the bug
I discovered a bug. During usage, I suddenly found that I couldn't perform QA in the console. After investigation, I found a problem with the vector database: it can delete chunks but cannot add new ones. I deployed TaskingAI using Docker containers, but I'm not very clear on the relationships between the different containers. If you need the log files related to this bug, please tell me their locations within the containers.

The error screenshot visible on the frontend

Screenshots
[screenshot attached in the original issue]

After the user interrupts the stream, a 500 error occurs on the API server.

Describe the bug
Internal Server Error
asyncio.exceptions.CancelledError: Cancelled by cancel scope

Expected behavior
Interrupting the stream from the front end should not cause a 500 error on the API server.

Desktop (please complete the following information):

  • Server: Ubuntu 22.04.3 LTS
  • Browser: Chrome
  • Version: TaskingAI Community v0.1.3

Additional context

2024-02-29 10:56:55,554 - asyncio - WARNING - socket.send() raised exception.
[2024-02-29 10:56:55 +0000] [9] [ERROR] Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/aioredis/connection.py", line 860, in send_packed_command
    await asyncio.wait_for(
  File "/usr/local/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
    return await fut
  File "/usr/local/lib/python3.10/site-packages/aioredis/connection.py", line 842, in _send_packed_command
    await self._writer.drain()
  File "/usr/local/lib/python3.10/asyncio/streams.py", line 371, in drain
    await self._protocol._drain_helper()
  File "/usr/local/lib/python3.10/asyncio/streams.py", line 167, in _drain_helper
    raise ConnectionResetError('Connection lost')
ConnectionResetError: Connection lost

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.10/site-packages/starlette/middleware/cors.py", line 91, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "/usr/local/lib/python3.10/site-packages/starlette/middleware/cors.py", line 146, in simple_response
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 762, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 782, in app
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 72, in app
    response = await func(request)
  File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 275, in app
    solved_result = await solve_dependencies(
  File "/usr/local/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 598, in solve_dependencies
    solved = await call(**sub_values)
  File "/code/app/routes/utils.py", line 55, in auth_info_required
    return await app_admin_auth_info_required(request)
  File "/code/app/routes/utils.py", line 27, in app_admin_auth_info_required
    admin = await verify_admin_token(token=ret["token"])
  File "/code/common/services/auth/admin.py", line 43, in verify_admin_token
    admin: Admin = await db_admin.verify_admin_token(token)
  File "/code/common/database_ops/auth/admin/verify_token.py", line 29, in verify_admin_token
    admin = await get_admin_by_id(admin_id)
  File "/code/common/database_ops/auth/admin/get.py", line 12, in get_admin_by_id
    admin: Admin = await Admin.get_redis_by_id(admin_id)
  File "/code/common/models/auth/admin_user.py", line 56, in get_redis_by_id
    return await redis_object_get_object(
  File "/code/common/database/redis/management.py", line 83, in redis_object_get_object
    value_string = await redis_pool.redis.get(real_key)
  File "/usr/local/lib/python3.10/site-packages/aioredis/client.py", line 1084, in execute_command
    await conn.send_command(*args)
  File "/usr/local/lib/python3.10/site-packages/aioredis/connection.py", line 885, in send_command
    await self.send_packed_command(
  File "/usr/local/lib/python3.10/site-packages/aioredis/connection.py", line 868, in send_packed_command
    await self.disconnect()
  File "/usr/local/lib/python3.10/site-packages/aioredis/connection.py", line 806, in disconnect
    await self._writer.wait_closed()  # type: ignore[union-attr]
  File "/usr/local/lib/python3.10/asyncio/streams.py", line 343, in wait_closed
    await self._protocol._get_close_waiter(self)
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7feabab2d300
172.19.0.8:56226 - "POST /api/v1/admins/verify_token HTTP/1.1" 500
2024-02-29 10:56:55,907 - asyncio - WARNING - socket.send() raised exception.
172.19.0.8:56238 - "GET /api/v1/models?limit=20&type=text_embedding HTTP/1.1" 500
[Two further tracebacks, identical to the one above apart from the failing request, follow in the log; each also ends in asyncio.exceptions.CancelledError: Cancelled by cancel scope 7feabab2d300.]
172.19.0.8:56252 - "GET /api/v1/collections?limit=20 HTTP/1.1" 500
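For context, the 500 is produced when the client's disconnect cancels the request scope while the backend is mid-way through an aioredis round-trip. A minimal mitigation sketch, assuming the handler may treat an aborted request as a cache miss (illustrative only, not TaskingAI's actual code):

# Hypothetical mitigation: shield the Redis call from the request's cancel
# scope so a client disconnect doesn't surface as a 500 from the cache layer.
import asyncio

async def safe_redis_get(redis, key: str):
    try:
        # shield() lets the Redis operation finish even if the enclosing
        # request task is cancelled by the client disconnecting.
        return await asyncio.shield(redis.get(key))
    except asyncio.CancelledError:
        # The HTTP request was aborted; treat it as a cache miss so the
        # caller can unwind quietly. A real handler may prefer to re-raise
        # after cleanup instead of swallowing the cancellation.
        return None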

StabilityAI plugin has a bug

When the image data returned by the API is obtained, an error is reported when it is passed to the model:

from aiohttp import ClientSession

# CONFIG and PluginOutput come from the surrounding TaskingAI plugin module.
async with ClientSession() as session:
    async with session.post(url=url, headers=headers, json=data, proxy=CONFIG.PROXY) as response:
        if response.status == 200:
            data = await response.json()
            base64_image = data["artifacts"][0]["base64"]
            return PluginOutput(data={"base64_image": base64_image})
        else:
            data = await response.json()
            print(data)
            raise Exception(f"Error fetching data: {response.status}, {data}")

Deployment on an EC2 virtual machine: the generate button has no effect

Describe the bug
I followed the self-hosting docs to deploy on EC2; docker compose worked like a charm, but once I was in the Playground, I sent a message in the chat, and when I tried to generate a response, nothing happened. Sometimes it would throw an "invalid token" error; other times it just wouldn't react.

I tested the same token and the same steps using docker-compose on my laptop, and it works flawlessly. I know this may not be an issue with the application code; it could be with AWS.

I was wondering if someone might have some ideas for debugging?

To Reproduce

  1. Spin up an EC2 machine and clone the repo.
  2. Adjust the nginx conf to listen on port 80.
  3. Adjust the docker-compose to map host port 80 (by default it listens on port 8080; if you keep 8080, allow inbound connections on that port instead).
  4. Run docker-compose up.
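A few hedged suggestions for debugging (assumptions, since the root cause isn't confirmed): verify the EC2 security group allows inbound traffic on whichever host port ends up published; use the browser's network tab to confirm the console's API calls target the same host and port nginx is proxying; and since the same token works on the laptop, check docker compose logs to see which service emits the "invalid token" error, which could point at session or JWT_SECRET_KEY differences between the two deployments.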


ConnectionRefusedError on startup: [Errno 10061] Connect call failed ('127.0.0.1', 8002)

Describe the bug
When starting the application, there is an error related to connecting to the localhost on port 8002. The specific error is a ConnectionRefusedError with the message [Errno 10061] Connect call failed ('127.0.0.1', 8002).

To Reproduce
Steps to reproduce the behavior:

  1. Start the application.
  2. Observe the error related to connecting to localhost on port 8002.

Expected behavior
The application should start without any connection errors and be able to connect to localhost on port 8002.

Screenshots
[screenshots attached in the original issue]

Desktop (please complete the following information):

  • OS: Windows (based on the paths in the error log)
  • Browser: N/A
  • Version: N/A
Additional context
This error seems to be related to a connection issue with the localhost on port 8002. Further investigation is needed to determine the root cause of the connection problem and resolve it.

How to customize a text embedding? Or provide an openai proxy configuration?

