
ai-jsx's People

Contributors

adriatic, benlower, farzadab, felixonmars, hessamb, jonathanplasse, juberti, larryatfixie, mdepinet, mdwelsh, nickheiner, petersalas, rexrowan, zkoch

ai-jsx's Issues

AI.jsx demo hangs and responds in an unexpected way

When using the docs chat (https://ai-jsx-nextjs-demo.vercel.app/docs-chat), I asked a question (see attached screenshot) and then experienced three issues:

  1. Hanging -> The chat seemed unresponsive and didn't do anything. I hit "Enter" again and then saw the robot's icon with no response.
  2. Incorrect Attribution -> The app then output the response, but under the user icon. I was expecting the robot to give the reply :)
  3. No Answer -> After giving what looks like a correct answer (but as the user), the robot then says there's no answer and that the document has a 404. I suspect this is related to the second press of "Enter", when I didn't actually submit a query.

The app also seems to allow multiple presses of "Enter", which just creates empty dialog turns between the user and robot before it finally replies to the empty queries (see the second screenshot below).

Screenshot
docschat-emptyreplies

crbug/1173575

In Chrome, from the user dashboard, clicking the user profile page URL makes the browser show profile/view, and the console logs VM10:5 crbug/1173575, non-JS module files deprecated.
chrome issue

Clarifications in Sidekick Tutorial part 2

In the run some queries section:

  • There is no Git+Github collection.
  • There is no "Query Test" to click on.
  • How does "RERERE" work? (Clearly a typo.)

The suggested queries in this section produce the expected results, so I am evidently using some unknown corpus.

Document interaction between `memo` and render contexts

Namely, document that in cases like:

const memoized = memo(<Component />);
return <>
  <OpenAI chatModel="gpt-4">{memoized}</OpenAI>
  <OpenAI chatModel="gpt-3.5-turbo">{memoized}</OpenAI>
</>

the Component will not be memoized, whereas in cases like

const memoized = memo(AI.withContext(<Component />, renderContext));
return <>
  <OpenAI chatModel="gpt-4">{memoized}</OpenAI>
  <OpenAI chatModel="gpt-3.5-turbo">{memoized}</OpenAI>
</>

the Component will be memoized and will effectively disregard the <OpenAI> tags.

Create Battery for Pinecone access

Right now our <DocQA> component depends on an in-memory store. A Pinecone battery would let you use a Pinecone provider to access embeddings stored in a Pinecone DB.

`fixie` CLI error message should be clearer

Error condition 1: https://fixie-ai.slack.com/archives/C05J75EFZ6X/p1693945790214519?thread_ts=1693945728.012129&cid=C05J75EFZ6X

Error condition 2: The user-defined npm run dev command fails. In this case, the CLI tool should present a clean message to the user telling them to fix their command. (One way it could fail is if the user references fixie-serve-bin but fails to install @fixieai/sdk.) In the event of an error, the CLI tool could try running npm run dev locally to ensure it succeeds.
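The preflight check suggested above could be sketched like this (the helper name, messages, and structure are invented; this is an assumption about how the CLI might surface the failure, not fixie's actual implementation):

```typescript
import { spawnSync } from 'node:child_process';

// Hypothetical sketch: run the user's command once and translate a failure
// into an actionable message instead of a raw stack trace. A timeout guards
// against long-running dev servers; a real CLI would want a smarter probe.
function checkCommand(cmd: string, args: string[]): string | null {
  const result = spawnSync(cmd, args, { encoding: 'utf8', timeout: 10_000 });
  if (result.error) {
    return `Could not start \`${cmd}\`: ${result.error.message}. Fix your dev command and try again.`;
  }
  if (result.status !== 0) {
    return `\`${cmd} ${args.join(' ')}\` exited with code ${result.status}. Check that your dependencies (e.g. @fixieai/sdk) are installed.`;
  }
  return null; // command ran cleanly
}
```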

Noise cancellation on top of Voice ASR

Love what @juberti is building with fixie voice, and the open-source benchmarking sites for ASR and TTS. Something I personally want to test is how noise-cancellation algorithms like Picovoice's Koala affect these ASRs' transcriptions, and I think the benchmarking site would be a good place for me to contribute and test that out.

Just wanted to check with @juberti whether that's a welcome addition to the site for me to contribute. I'm not sure exactly how you'd imagine the layout, but I'd like to compare the transcriptions both with and without the noise cancellation. Would you want it as a separate page, or as a checkbox on top of the ASR page to enable noise cancellation? Happy to discuss!

Error using it in React/Vite project

I am using this in a React project with Vite and getting this error.

✘ [ERROR] Failed to resolve entry for package "@nick.heiner/openai-edge". The package may have incorrect main/module/exports specified in its package.json. [plugin vite:dep-pre-bundle]

node_modules/vite/node_modules/esbuild/lib/main.js:1360:21:
  1360 │         let result = await callback({

Error: TextEncoder is required for this module to work in the browser
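One workaround worth trying for the pre-bundling failure (an assumption, not a confirmed fix for this package) is to exclude the package from Vite's dependency optimization, so the esbuild-based pre-bundler skips the entry it cannot resolve:

```typescript
// vite.config.ts — hypothetical workaround sketch.
import { defineConfig } from 'vite';

export default defineConfig({
  optimizeDeps: {
    // Package name taken from the error message above.
    exclude: ['@nick.heiner/openai-edge'],
  },
});
```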

Mobile Client Error

Error: TextEncoder is required for this module to work in the browser
ERROR Invariant Violation: Failed to call into JavaScript module method AppRegistry.runApplication(). Module has not been registered as callable. Registered callable JavaScript modules (n = 11): Systrace, JSTimers, HeapCapture, SamplingProfiler, RCTLog, RCTDeviceEventEmitter, RCTNativeAppEventEmitter, GlobalPerformanceLogger, JSDevSupportModule, HMRClient, RCTEventEmitter.
A frequent cause of the error is that the application entry file path is incorrect. This can also happen when the JS bundle is corrupt or there is an early initialization error when loading React Native.
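A workaround often used for the TextEncoder error in React Native (an assumption here, since this report lacks a repro) is to install a polyfill package and load it at the app entry point before anything else:

```typescript
// App entry — hypothetical polyfill sketch, assuming the `text-encoding` npm
// package is installed. React Native's JS runtime lacks TextEncoder, so we
// install one globally before any module that needs it is loaded.
if (typeof (globalThis as any).TextEncoder === 'undefined') {
  const { TextEncoder, TextDecoder } = require('text-encoding');
  (globalThis as any).TextEncoder = TextEncoder;
  (globalThis as any).TextDecoder = TextDecoder;
}
```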

Adopt a code of conduct for the community

We should create a draft of this and share it with the community. We should also require contributors to accept the code of conduct before they can contribute to the repo.

API docs could be a lot better

The autogenerated types from typedoc aren't super readable.

Also, sometimes when we link to generated docs from hand-written docs, we get a build failure. For example:

When you render on the top level, you use `AI.createRenderContext()` to create a [`renderContext`](../api/interfaces/RenderContext.md)

Locally, I run yarn workspace docs build, and it works. But on CI, I get a failure:

YN0000: │ ai-jsx-monorepo-root@workspace:. STDOUT docs:build: Error: Docs markdown link couldn't be resolved: (../api/interfaces/RenderContext.md) in "/vercel/path0/packages/docs/docs/guides/rendering.md" for version current
14:02:55.238

I think this is because the API docs aren't being generated prior to the link checker running – there's some race condition that only manifests on CI.

Demo Video on YouTube

Hello Community Maintainers,

I thought it would be nice to have a demo video with a YouTube link in the repository, for better outreach.

Logging with Streaming

Add support for logging when streaming via toStreamResponse (or document how, if it's already possible):

return toStreamResponse(
  <ChatCompletion>
    <SystemMessage>
      You are an assistant who writes poems. If the user asks for anything else, politely decline.
    </SystemMessage>
    <UserMessage>Write a poem about {topic}.</UserMessage>
  </ChatCompletion>
);
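In the meantime, one generic pattern (a hypothetical helper, not an ai-jsx API) is to wrap the chunk stream in a pass-through generator that logs each chunk before forwarding it:

```typescript
// Hypothetical helper: observe an async stream of chunks as it flows through,
// without altering what the consumer receives.
async function* logChunks<T>(
  stream: AsyncIterable<T>,
  log: (chunk: T) => void
): AsyncGenerator<T> {
  for await (const chunk of stream) {
    log(chunk);   // observe the chunk as it passes through
    yield chunk;  // forward it unchanged to the consumer
  }
}
```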

appendOnly renders might not be appendOnly

If a component yields two non-append-only frames, the append-only render will not be append-only. The fix is to disable yielding intermediate values from non-append-only streams when processing an AsyncIterator in renderStream.
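The intended append-only behavior can be modeled with a small synchronous sketch (the helper name and the string-frame representation are illustrative; ai-jsx's renderStream operates on rendered frames, not plain strings):

```typescript
// Illustrative model of the proposed fix: given successive rendered frames,
// emit only the appended suffixes, and drop any intermediate frame that
// rewrites earlier output, so the overall stream stays append-only.
function* toAppendOnlyDeltas(frames: Iterable<string>): Generator<string> {
  let last = '';
  for (const frame of frames) {
    if (frame.startsWith(last)) {
      yield frame.slice(last.length); // only the newly appended suffix
      last = frame;
    }
    // Frames that are not pure extensions are skipped rather than streamed.
  }
}
```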

local llm support

Are there plans to support local backends instead of just OpenAI/Claude, etc.?

Had to manually add tsx

When trying to run the RedditQA example, the first step is to load the docs with yarn tsx packages/examples/src/bakeoff/reddit-qa/load-articles.mts. This failed for me because tsx was missing. I'm not sure if I missed a step along the way, but I fixed it by running yarn add tsx.

Odd behavior of `npm run dev`

The context is described here. The oddity is this: I manually edited the file index.js (which was initially created the first time this app was run) - something I should not be doing. The next time I ran npm run dev, index.js was left unchanged (my edits were still there).

This is of course a huge potential source of problems, and the behavior could be explained by an "access rights" issue with the script npm run build && npm run serve.

Observed by running VSCode version 1.8.4 (Universal) on macOS Sonoma 14.0.

Implement CodeSandBox for examples

It would be great to have all the examples in the docs and elsewhere integrated with CodeSandBox. This makes it nicer to navigate the code, alter it, etc.

ai-jsx.tsx Throws Error

Trying to run yarn tsx packages/examples/src/bakeoff/reddit-qa/ai-jsx.tsx and it throws the error below. This is after running yarn tsx packages/examples/src/bakeoff/reddit-qa/load-articles.mts.

ben@Bens-MacBook-Air ai-jsx % yarn tsx packages/examples/src/bakeoff/reddit-qa/ai-jsx.tsx
/Users/ben/Dropbox/Mac (2)/Documents/GitHub/ai-jsx/packages/examples/src/bakeoff/reddit-qa/ai-jsx.tsx:34
  throw Error('No documents available. Did you run load-articles first?');
        ^


Error: No documents available. Did you run load-articles first?
    at <anonymous> (/Users/ben/Dropbox/Mac (2)/Documents/GitHub/ai-jsx/packages/examples/src/bakeoff/reddit-qa/ai-jsx.tsx:34:9)

Node.js v20.4.0

Automate .env variables for production

.env has a variable for the DOCS_URL that gets set to localhost for local development. Automate the setting of this to docs.ai-jsx.com when deploying to production.
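A minimal sketch of the desired behavior, assuming the production host from this issue and an invented helper name:

```typescript
// Hypothetical sketch: pick DOCS_URL from the environment, falling back to
// the production docs host when no local override is set. The helper name is
// invented; DOCS_URL and docs.ai-jsx.com come from the issue.
function resolveDocsUrl(env: Record<string, string | undefined>): string {
  return env.DOCS_URL ?? 'https://docs.ai-jsx.com';
}
```

A deploy pipeline could then simply leave DOCS_URL unset in production so the fallback applies.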

Anthropic support requested

I've set export ANTHRPIC_API_KEY=sk-ant-api03-xxxxxxxxxxxxxxxxxxxxxxxxxxxx in .bashrc and in individual terminal sessions, but it doesn't work the way it does with OpenAI.

Where do I set the following, or is there another option pointing to Anthropic?

Clarify the use of the sample foxes corpus in Sidekicks DocQA

In https://docs.ai-jsx.com/sidekicks/sidekicks-docsqa change Example Corpus note to be:

Rather than creating a corpus from scratch, you can also query an existing corpus that has a bunch of web pages with exciting facts about foxes. This corpus has ID 44094d5a-f817-4c2e-a2a4-8f8a0c936d0f and can be used in your code if you don't want to take the time to create a corpus of your own. If that is the case, you can jump ahead to the section below titled "Querying the Corpus" (https://docs.ai-jsx.com/sidekicks/sidekicks-docsqa#querying-the-corpus). Otherwise, you can continue with "Adding Sources to the Corpus" and make your own corpus to use.

Dark Mode

Bring in the darkness for the dashboard

Potential bug in ai.jsx

I deployed (to the cloud) the sample https://docs.ai-jsx.com/sidekicks/sidekicks-quickstart and asked the question show me what can you help with. This resulted in the following error from lookUpGitHubKnowledgeBase:

This model response had an error: "Error during generation: AI.JSX(1032): OpenAI API Error: 400 This model's maximum context length is 4097 tokens. However, your messages resulted in 6069 tokens (5970 in the messages, 99 in the functions). Please reduce the length of the messages or functions. It's unclear whether this was caused by a bug in AI.JSX, in your code, or is an expected runtime error.

I have no doubt that I exceeded my token limit - I am reporting it just to be safe, so that this possible bug is on record.

Added later: I reran with a different (but similar) question: show me what can you help with, and this time everything went fine:

I can help you with various tasks related to Git and GitHub, including:

Providing guidance on Git and GitHub workflows.
Assisting with creating, cloning, and initializing repositories.
Explaining how to commit changes and manage branches in Git.
Guiding you through the process of creating and merging pull requests on GitHub.
Helping you resolve merge conflicts in Git.
Assisting with configuring and using Git remotes.
Explaining how to collaborate with others using Git and GitHub.
Providing information on Git and GitHub APIs and how to use them.
If you have any specific questions or need assistance with a particular task, please let me know, and I'll be glad to help you!

Perhaps this LLM is too smart for me: rerunning the first question, which had previously resulted in the "maximum context length is 4097 tokens" error, now responded fine.

Note: I am fascinated with the difference in answering first and second question. Debugging this seems like a nightmare 😄
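A common mitigation for the context-length error described above can be sketched as follows (a hypothetical helper using a rough 4-characters-per-token estimate, not part of ai-jsx):

```typescript
interface Message { role: string; content: string; }

// Very rough heuristic: ~4 characters per token. A real implementation
// would use the model's tokenizer.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Drop the oldest non-system messages until the estimated total fits the
// model's context budget, keeping the system prompt intact.
function trimToBudget(messages: Message[], maxTokens: number): Message[] {
  const result = [...messages];
  while (
    result.length > 1 &&
    result.reduce((n, m) => n + estimateTokens(m.content), 0) > maxTokens
  ) {
    const idx = result.findIndex((m) => m.role !== 'system');
    if (idx === -1) break; // nothing left to trim but system messages
    result.splice(idx, 1);
  }
  return result;
}
```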

Session persistence

When logged in to the console, the session doesn't seem to persist, requiring repeated logins. Even when the session does persist, the UI doesn't change to indicate it. Also, user data from console.fixie.ai doesn't carry over into app.fixie.ai. I think that after the user logs in, the red-circled area should toggle to the user's avatar image to show the user is signed in.
Screenshot - 04_12_2023 , 17_15_29

Voice agent TODOs

interactivity:

  • finish and vet latency map
  • add interrupt button
  • allow interruption based on VAD
  • fix tts playout delay (AC)?
  • dropdowns for control in main page

latency:

  • shorter initial messages
  • caching or other initial startup opt? *
  • understand root cause of echo when initial response is fast (related to above?)
  • explore Piper via WASM/python

File ai-jsx-imperative.ts is missing

The Reddit-QA readme refers to yarn tsx packages/examples/src/bakeoff/reddit-qa/ai-jsx-imperative.ts. However, the file ai-jsx-imperative.ts does not exist.
