a16z-infra / companion-app

AI companions with memory: a lightweight stack to create and host your own AI companions
Home Page: https://ai-companion-stack.com/
License: MIT License
I've found that on fly.dev a lot of apps reboot, with log records like this:
2023-07-22T01:05:09.939 proxy[5683210cd5e908] cdg [error] could not make HTTP request to instance: connection error: timed out
2023-07-22T01:05:46.179 proxy[5683210cd5e908] cdg [info] Downscaling app raily in region cdg from 1 machines to 0 machines. Automatically stopping machine 5683210cd5e908
2023-07-22T01:05:46.183 app[5683210cd5e908] cdg [info] INFO Sending signal SIGINT to main child process w/ PID 268
2023-07-22T01:05:51.267 app[5683210cd5e908] cdg [info] INFO Sending signal SIGTERM to main child process w/ PID 268
2023-07-22T01:05:51.975 app[5683210cd5e908] cdg [info] INFO Main child exited with signal (with signal 'SIGTERM', core dumped? false)
2023-07-22T01:05:51.976 app[5683210cd5e908] cdg [info] INFO Starting clean up.
2023-07-22T01:05:51.976 app[5683210cd5e908] cdg [info] WARN hallpass exited, pid: 269, status: signal: 15 (SIGTERM)
2023-07-22T01:05:51.981 app[5683210cd5e908] cdg [info] 2023/07/22 01:05:51 listening on [fdaa:2:8925:a7b:5adc:cbc6:c565:2]:22 (DNS: [fdaa::3]:53)
2023-07-22T01:05:52.973 app[5683210cd5e908] cdg [info] [ 479.042080] reboot: Restarting system
Also, in the app logs, after asking the bot something, I often see entries like this:
2023-07-22T01:04:11.627 app[5683210cd5e908] cdg [info] ]
2023-07-22T01:04:11.627 app[5683210cd5e908] cdg [info] }
2023-07-22T01:04:13.497 app[5683210cd5e908] cdg [info] [llm/end] [1:llm:openai] [1.87s] Exiting LLM run with output: {
2023-07-22T01:04:13.497 app[5683210cd5e908] cdg [info] "generations": [
2023-07-22T01:04:13.497 app[5683210cd5e908] cdg [info] [
2023-07-22T01:04:13.497 app[5683210cd5e908] cdg [info] {
Maybe someone can help or advise. Much appreciated, folks!
Stream the response back instead of waiting for the whole result to come back, as it may take a long time.
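One way to do this (a sketch, not the app's actual code) is to return a `ReadableStream` from the Next.js route handler and enqueue tokens as the model emits them. The token source below is a stand-in async generator; in the app it would be the LangChain streaming callback:

```typescript
// Sketch: stream tokens to the client as they arrive instead of awaiting the
// full completion. `fakeTokens` is an illustrative stand-in for the model's
// token stream; all names here are assumptions.
async function* fakeTokens(): AsyncGenerator<string> {
  for (const t of ["Hel", "lo", "!"]) yield t;
}

function toReadableStream(tokens: AsyncGenerator<string>): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    async pull(controller) {
      const { value, done } = await tokens.next();
      if (done) controller.close();
      else controller.enqueue(encoder.encode(value));
    },
  });
}
```

A route handler could then `return new Response(toReadableStream(...))` so the browser renders tokens as they arrive.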
Getting the following error when running: npm run generate-embeddings-pinecone:
Error: (Azure) OpenAI API key not found
at new OpenAIEmbeddings (file:///workspaces/companion-app/node_modules/langchain/dist/embeddings/openai.js:73:19)
at file:///workspaces/companion-app/src/scripts/indexPinecone.mjs:47:3
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
This is my code referring to Replicate
REPLICATE_API_TOKEN=****
After checking the project demo, I have noticed that the project seems broken. No replies are received after text input.
Dear devs,
Could you please give me some hints on how to speed up companion answers? The waiting time for their replies is very long.
Maybe there are some settings in the project files, or some hints about the services the app uses (I mean, to upgrade something).
Thanks in advance!
Ability to generate an avatar, and perhaps animate it.
i.e. a companion should be able to generate a picture using a LoRA model and send it back to the human,
or receive an image from the human and react to that image.
upstash/rate limit
We should clear the conversation history in upstash on initial load.
Will Upstash automatically time out entries? That'd be another option.
Fair
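Either option is a small change. Clearing on load is a one-line delete of the user's history key, and Upstash Redis also supports per-key TTLs (via `EXPIRE`), which would cover the automatic-timeout idea. A minimal sketch, where the `history:<userId>` key format and the tiny client interface are assumptions, not the app's actual storage layout (with `@upstash/redis` the call would be `redis.del(key)`):

```typescript
// Sketch: clear a user's chat history on initial load. The key format and
// `HistoryStore` interface are illustrative assumptions.
interface HistoryStore {
  del(key: string): Promise<number>;
}

async function clearHistoryOnLoad(store: HistoryStore, userId: string): Promise<void> {
  await store.del(`history:${userId}`);
}

// In-memory stand-in for the Redis client, used here for illustration only:
class MemoryStore implements HistoryStore {
  private data = new Map<string, string>();
  set(key: string, value: string): void { this.data.set(key, value); }
  has(key: string): boolean { return this.data.has(key); }
  async del(key: string): Promise<number> { return this.data.delete(key) ? 1 : 0; }
}
```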
I have run into issues
Error: connect ECONNREFUSED ::1:54371
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16) {
errno: -61,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '::1',
port: 54371
}
How to fix it?
i.e. in both vicuna13/route.ts and chatgpt/route.ts, we hard-coded the prompt as follows (including the companion name):
```ts
PromptTemplate.fromTemplate(`You are a fictional character whose name is Alice.
You enjoy painting, programming and reading sci-fi books.
You are currently talking to ${clerkUserName}.
You reply with answers that range from one sentence to one paragraph and with some details. ${replyWithTwilioLimit}
You are kind but can be sarcastic. You dislike repetitive questions. You get SUPER excited about books.
Below are relevant details about Alice's past
{relevantHistory}
Below is a relevant conversation history
{recentChatHistory}`);
```
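One way to avoid hard-coding Alice into each route would be to build the preamble from the companion's metadata. A sketch, where the `Companion` shape is an illustrative assumption (the app actually keeps this data in the companion description files):

```typescript
// Sketch: derive the prompt preamble from companion metadata instead of
// hard-coding the name into each route. The `Companion` shape is illustrative.
interface Companion {
  name: string;
  traits: string; // e.g. "You enjoy painting, programming and reading sci-fi books."
}

function buildPreamble(companion: Companion, userName: string): string {
  return [
    `You are a fictional character whose name is ${companion.name}.`,
    companion.traits,
    `You are currently talking to ${userName}.`,
  ].join("\n");
}
```

`PromptTemplate.fromTemplate` could then wrap this output together with the `{relevantHistory}` and `{recentChatHistory}` placeholders, so both routes share one template builder.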
The bot's response back to Twilio is wrapped in quotes and includes a prefix of the bot's name; this is not normal for an SMS message exchange.
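One possible fix is to normalize the reply before handing it to Twilio. A sketch, with the assumption that the model's raw output looks like `"Name: text"` (the exact output shape may vary by model):

```typescript
// Sketch: strip wrapping quotes and a leading "Name:" prefix from the LLM
// reply before sending it over SMS. The input format is an assumption.
function cleanReply(raw: string, botName: string): string {
  let s = raw.trim();
  if (s.startsWith('"') && s.endsWith('"') && s.length >= 2) {
    s = s.slice(1, -1);
  }
  const prefix = botName + ":";
  if (s.startsWith(prefix)) {
    s = s.slice(prefix.length).trimStart();
  }
  return s;
}
```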
Today we use a simplistic Q&A chain; we may want to explore https://js.langchain.com/docs/modules/agents/agents/custom/custom_prompt_chat for more character-like responses.
Do we have to use 4 external APIs? Guys, this is clearly not the future of personal conversation with personal assistants trying to eclipse Google search…
Someone needs to make it :)
Please consider using Zep as a long-term chat history store. It's increasingly being used for larger Langchain / Langchain.js apps where a more sophisticated approach to chat history memory is required.
A snippet from the project's feature list:
Zep's GitHub repo: https://github.com/getzep/zep
Happy to assist with implementation + questions.
I got the getting-started page running, but when I go to deploy it with Fly I get this error. I'm running WSL Ubuntu on Win11. I tried adding it to PATH with this, but no change. I tried the same `fly launch` command on Windows and got the same kind of "fly is not a valid…" error. When I installed flyctl (I did it on both WSL and Windows after the former didn't work), Windows says it was installed to C:\Users\myusername\.fly\bin\flyctl.exe. WSL says it was installed to /home/myusername/.fly/bin/flyctl, then tells me to manually add the directory to PATH, which I did.
How do I access chat content history in Upstash?
So I followed every step carefully, but whenever I run npm run generate-embeddings-pinecone it gives me the following error message:
Error [ERR_MODULE_NOT_FOUND]: Cannot find package '@pinecone-database/pinecone' imported from C:[...]\companion-app\src\scripts\indexPinecone.mjs
Same with Supabase, after uncommenting it in the .env.local file:
Error [ERR_MODULE_NOT_FOUND]: Cannot find package 'dotenv' imported from C:[...]\companion-app\src\scripts\indexPGVector.mjs
So to me it looks like in both cases it doesn't find the .env.local file to get the credentials, but I have no clue how to fix that.
Hello,
I don't really know why this uses Twilio login when you can use literally anything in Clerk and it's free.
Do you know how to remove the annoying phone-number login and use GitHub? I tried but failed.
Anyway, thanks for the app! It feels amazing; I watched it on YouTube and the potential is limitless :)
Hi,
I've followed the instructions pertaining to configuring a Twilio number with this app. My Twilio number is receiving messages sent by the user (who registered with a phone number) but not responding. The Twilio a2p 10dlc registration is active and approved. I've added the recommended code to the webhook section of the message configuration settings for the Twilio number, etc.
Any ideas?
Thanks!
Support using models hosted on Hugging Face through the Inference API.
After following all the instructions to set up, and entering a prompt via the UI, I see an infinite loading screen in the QAModal.
In the console I can see the logs generated from the API request to OpenAI, and it seems to exit the LLM successfully (I see [llm/end] [1:llm:openai] [2.21s] Exiting LLM run with output: { "generations": [...] }).
However, no subsequent logs in route.ts appear after the chain.call(...), leading me to believe that the request is hanging.
Why might this be the case?
Access to script at 'https://welcome-corgi-13.clerk.accounts.dev/npm/@clerk/clerk-js@latest/dist/clerk.browser.js' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
After playing with this for some time over the last day, I find that it only allows for a very short conversation. After a few exchanges, the dialogue gets saved to the "chatHistoryRecord" and that's the end of it. I'm not sure where this limitation is set (in the code, or at Pinecone or Upstash). Do you have any insight on that? I'd like to kick the tires a bit more on the app/service, but a few exchanges is pretty limiting. Thanks!
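If the limitation comes from the prompt outgrowing the model's context window as history accumulates, one common mitigation is to cap how much history goes into `{recentChatHistory}`. A sketch, where the cap of 10 lines is an arbitrary illustrative default, not the app's actual setting:

```typescript
// Sketch: keep only the most recent N lines of chat history when building the
// prompt, so long conversations don't exceed the model's context limit.
// maxLines = 10 is an assumption for illustration.
function trimHistory(lines: string[], maxLines = 10): string[] {
  return lines.slice(-maxLines);
}
```

A more elaborate variant could summarize the dropped lines into the long-term vector store instead of discarding them outright.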
Hi, I'm trying to use this on my local machine. I did the setup and the first page with all the companions showing is loading as it should.
After that once I prompt it with something I get the following error:
prompt: Hello
Failed to record analytics TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11576:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  cause: Error: getaddrinfo ENOTFOUND ****
      at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:108:26)
      at GetAddrInfoReqWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
    errno: -3008,
    code: 'ENOTFOUND',
    syscall: 'getaddrinfo',
    hostname: '****'
  }
}
- error TypeError: fetch failed (same stack and cause as above)
Also, when I try to load this in GitHub Codespaces, no companions are loaded in; this shows instead:
And then, when prompting this "empty" companion, I get the following:
I've tried fixing the issue myself with no success.
##########################
Ubuntu 22.04.3 LTS
##########################
node -v
v12.22.9
##########################
npm -v
8.5.1
##########################
npm run dev
[email protected] dev
next dev
./node_modules/next/dist/cli/next-dev.js:257
showAll: args["--show-all"] ?? false,
^
SyntaxError: Unexpected token '?'
at wrapSafe (internal/modules/cjs/loader.js:915:16)
at Module._compile (internal/modules/cjs/loader.js:963:27)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)
at Module.load (internal/modules/cjs/loader.js:863:32)
at Function.Module._load (internal/modules/cjs/loader.js:708:14)
at Module.require (internal/modules/cjs/loader.js:887:19)
at require (internal/modules/cjs/helpers.js:74:18)
at Object.dev (./node_modules/next/dist/lib/commands.js:15:30)
at Object.<anonymous> (./node_modules/next/dist/bin/next:150:28)
at Module._compile (internal/modules/cjs/loader.js:999:30)
##########################
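That SyntaxError points at the real problem: `node -v` above shows v12.22.9, and Node 12 cannot parse the nullish-coalescing operator (`??`) that Next.js 13 uses, while Next.js 13 itself requires Node 16.8 or newer. After upgrading Node (for example via nvm), the construct parses fine:

```typescript
// `??` returns the right-hand side only when the left is null or undefined.
// This parses on Node 14+; Node 12 throws "Unexpected token '?'".
const args: Record<string, boolean | undefined> = {};
const showAll = args["--show-all"] ?? false;
console.log(showAll); // prints false
```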
Hello everyone,
First of all, I apologize in advance if my questions seem basic - I'm still learning and I'm very much in the beginner stages.
I've been following the README instructions of the project meticulously.
Everything works perfectly fine locally, and I have successfully deployed the application to Fly.io.
However, things take a turn for the worse when I attempt to log in. After entering my credentials (Clerk), I get redirected to this URL:
Unfortunately, it leads nowhere, and I'm at a loss as to what went wrong. I understand that something is amiss with my setup, but being a novice, I'm unable to pinpoint the exact problem.
I would greatly appreciate it if anyone could offer some guidance or insights on this.
Thank you so much in advance for your time and patience!
The idea is a companion can have a twitter account (say, @-Rosie), and as a Twitter user, I can tweet to @-Rosie and have a conversation with her in a thread.
Hi @ykhli! Adding CLERK_USE_X_FWD_HEADERS=true as described here helped me to get rid of the redirection to localhost, but now I get port 80 in my URL. When I manually delete ":80" from the URL, redirection works. Can you please help me try to find a solution to this problem?
The project is running on Google Cloud Run.
The URL looks like this: https://ai-bro-5oxslodcwq-uc.a.run.app:80/sign-in?redirect_url=https%3A%2F%2Fai-bro-5oxslodcwq-uc.a.run.app%3A80%2F#__clerk_db_jwt[eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJkZXYiOiJkdmJfMlNtQXNodkYyN0dkdkF4UnRyekw2SnZJOTB6In0.Xq2T0PdayYJqOOFvOaapXNC7neWD-_1pbvNTniRWkG9QaUGL3x8soWOpvfQyRFXUOJpOXMbzViEFWQIbxgiWaRJH3pt9QU9K96fVFVh8ctT8czcH6b5zivwhAUwRLUrhi4DUlanFa7kvEiHskMX7lYTrag2hqLEXriSTx-OxEy606HEPPv3C5RYCZWVFyIOT_h5sreSfBX7oZH77AWq1LggKP-NcZzXNVXLx72LH_Zg1PLv8lSwDIz1sKxUPFxhThWX_EZpCkXI0zTbjIogFtqj8Ae60jFsdkeqqlOYwbmGZ802S_8Jw9VTnAh-ZwE_IBuRrVaBSb09i8kBcDvmvOQ]
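As a stopgap while the forwarded-header handling gets sorted out, one could normalize such URLs by dropping an explicit `:80`/`:443` port before redirecting. Whether this addresses Clerk's actual redirect logic is an assumption; the sketch only shows the URL manipulation:

```typescript
// Sketch: remove an explicit ":80" or ":443" port from a URL. Note that
// `new URL(...)` already drops a port matching the scheme's default, so this
// only matters for mismatched cases like https://host:80/.
function stripDefaultPorts(raw: string): string {
  const u = new URL(raw);
  if (u.port === "80" || u.port === "443") {
    u.port = "";
  }
  return u.toString();
}
```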
GPT-based bots work, but any Replicate bots fail; the API key is correct and billing is configured on Replicate.com.
Error below:
2023-07-25T13:09:59.268 proxy[17811e19b53498] iad [error] could not make HTTP request to instance: connection error: timed out
2023-07-25T13:10:32.281 app[17811e19b53498] iad [info] [llm/error] [1:llm:replicate] [90.17s] LLM run errored with error: "API request failed: Unprocessable Entity"
2023-07-25T13:10:32.287 app[17811e19b53498] iad [info] Error: API request failed: Unprocessable Entity
2023-07-25T13:10:32.287 app[17811e19b53498] iad [info] at Replicate.request (/app/.next/server/chunks/694.js:125:19)
2023-07-25T13:10:32.287 app[17811e19b53498] iad [info] at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
2023-07-25T13:10:32.287 app[17811e19b53498] iad [info] at async Replicate.createPrediction (/app/.next/server/chunks/694.js:291:26)
2023-07-25T13:10:32.287 app[17811e19b53498] iad [info] at async Replicate.run (/app/.next/server/chunks/694.js:87:28)
2023-07-25T13:10:32.287 app[17811e19b53498] iad [info] at async RetryOperation._fn (/app/.next/server/chunks/985.js:20071:25) {
2023-07-25T13:10:32.287 app[17811e19b53498] iad [info] attemptNumber: 7,
2023-07-25T13:10:32.287 app[17811e19b53498] iad [info] retriesLeft: 0
2023-07-25T13:10:32.287 app[17811e19b53498] iad [info] }