
arakoodev / edgechains


EdgeChains.js: TypeScript/JavaScript production-friendly Generative AI. Based on Jsonnet. Works anywhere that WebAssembly does. Prompts live declaratively and "outside code, in config". Kubernetes and edge friendly. Compatible with OpenAI GPT, Gemini, Llama2, Anthropic, Mistral and others.

Home Page: https://www.arakoo.ai/

License: MIT License

JavaScript 88.33% TypeScript 5.93% Jsonnet 1.41% Rust 4.18% Dockerfile 0.01% Shell 0.10% Makefile 0.04%
agent ai ai-agents autogpt chatbot generative-ai gpt html javascript llm openai rest-api typescript vector web-development webassembly webdev ycombinator

edgechains's People

Contributors

actions-user, anuran-roy, arthsrivastava, emadhanif01, ezhil56x, harsh4902, msaifkhan01, nooha01, parteek2813, pizzaboi21, prakash-aathi, rahul007-bit, redoc-a2k, s-ishita, sadaf-a, sajiyah-salat, sandys, shyam-raghuwanshi, soumitra2001


edgechains's Issues

Add Auto Comment Feature to Improve Collaboration

Issue Description:
As an active contributor to your open-source project, I believe that implementing an auto-comment feature would greatly enhance collaboration and communication within the project. This feature would automatically generate comments in response to specific events, such as when an issue is opened, a pull request is created, an issue is assigned, or an issue is unassigned.

Feature Details:

  • When an issue is opened, the auto-comment should greet the author and provide a brief acknowledgement and request for additional context.
  • When a pull request is opened, the auto-comment should greet the author, express gratitude, and remind them to follow the project's contributing guidelines.
  • When an issue is closed, the auto-comment should thank the author for their contribution and encourage further engagement.
  • When an issue is assigned to someone, the auto-comment should notify the assignee and encourage them to start working on it.
  • When an issue is unassigned from someone, the auto-comment should notify the assignee about the change and suggest reassignment if they are offline.

Benefits:

  • Improved communication and engagement with contributors.
  • Provides clear instructions and acknowledgements for various events.
  • Enhances collaboration by setting expectations and providing reminders.
  • Reduces manual effort by automating comment generation.

Acceptance Criteria:

  • The auto-comment feature should be implemented using the "wow-actions/auto-comment" GitHub Action.
  • Comments should be appropriately customized for each event, mentioning relevant parties and providing the necessary information.
  • The auto-comment workflow should trigger on the following events: issues opened, pull requests opened, issues closed, issues assigned, and issues unassigned.
  • The feature should be added to the project's existing GitHub Actions workflow file.

Additional Context:
Feel free to ask any questions or seek clarification regarding the auto-comment feature. I'm excited about contributing to your project and believe this enhancement will greatly benefit its community.

FLARE - Active Retrieval Augmented Generation

When deciding what to retrieve, we argue that it is important to consider what LMs intend to generate in the future, as the goal of active retrieval is to benefit future generations. Therefore, we propose anticipating the future by generating a temporary next sentence, using it as a query to retrieve relevant documents, and then regenerating the next sentence conditioned on the retrieved documents. Combining the two aspects, we propose Forward-Looking Active REtrieval augmented generation (FLARE), as illustrated in Figure 1. FLARE iteratively generates a temporary next sentence, uses it as the query to retrieve relevant documents if it contains low-probability tokens, and regenerates the next sentence until it reaches the end.

https://arxiv.org/pdf/2305.06983.pdf
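For quick orientation, here is a minimal sketch of the FLARE loop described above. The LanguageModel and Retriever interfaces and the Sentence record are hypothetical stand-ins for illustration, not EdgeChains or paper code:

import java.util.List;

// Hypothetical interfaces for illustration only.
interface LanguageModel {
    Sentence next(String context);                          // tentative next sentence + confidence
    Sentence next(String context, List<String> documents);  // regenerate conditioned on retrieved docs
}

interface Retriever {
    List<String> retrieve(String query);
}

record Sentence(String text, double minTokenProbability, boolean isLast) {}

public final class FlareSketch {
    // Iteratively: draft a sentence, retrieve if it contains low-probability tokens,
    // regenerate conditioned on the retrieved documents, and append until finished.
    public static String generate(LanguageModel lm, Retriever retriever, String question, double threshold) {
        StringBuilder answer = new StringBuilder();
        while (true) {
            String context = question + "\n" + answer;
            Sentence draft = lm.next(context);
            Sentence sentence = draft.minTokenProbability() < threshold
                    ? lm.next(context, retriever.retrieve(draft.text()))
                    : draft;
            answer.append(sentence.text()).append(' ');
            if (sentence.isLast()) return answer.toString().trim();
        }
    }
}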

Missing issue template

Description:

I noticed that the repository does not have an issue template. Having an issue template can greatly improve the clarity and consistency of issue reports, making it easier for contributors and maintainers to understand and address the problems or feature requests effectively.

Here are three templates that I want to include:

Issue Template: This template enables users to create well-structured issue reports by providing sections for a concise title, detailed description, steps to reproduce (if applicable), expected behavior, actual behavior, additional information, and environment details.

Feature Request Template: The feature request template allows users to outline their desired features with clarity, including sections for a clear feature description, expected benefits, and any additional context or information.

Documentation Template: This template facilitates the creation of comprehensive documentation by providing a structured format, including sections for an introduction, usage instructions, examples, and other relevant details.

Please assign this issue to me under GSSoC'23 @sandys

Fix Chain Execution

Background - Edgechains are executed through the Flyfly CLI, which uses jbang to compile and run the chains.

Steps to replicate and verify the issue:

  1. Go to the BuildAndRun Action and download the latest artifact.
  2. The Script folder will contain 2 JARs and 1 Java file. This Java file is our chain.
  3. Open the Script folder in cmd/powershell and execute this command: java -jar flyfly.jar jbang Flyopenaiwiki.java Edgechain.jar
  4. If the class name or package name (import com.example) is changed, the program will not work.

Flyfly.jar is limited to executing the com.example.flyopenaiwiki class only (the content does not matter, only the package and class name).

This issue can be easily fixed by making some changes in FlySpring/flyfly/src/main/java/com/flyspring/flyfly/commands/
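One possible direction (a hedged sketch, not the actual flyfly code): parse the package and public class name out of the chain's source file before launching it, instead of hardcoding com.example.flyopenaiwiki. The ChainClassResolver helper below is hypothetical:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: derive the fully qualified main class from the chain's .java file,
// so flyfly no longer depends on a fixed "com.example" package and class name.
public final class ChainClassResolver {
    private static final Pattern PACKAGE = Pattern.compile("^\\s*package\\s+([\\w.]+)\\s*;", Pattern.MULTILINE);
    private static final Pattern CLASS = Pattern.compile("\\bpublic\\s+class\\s+(\\w+)");

    public static String fullyQualifiedName(Path javaFile) throws IOException {
        String source = Files.readString(javaFile);
        Matcher pkg = PACKAGE.matcher(source);
        Matcher cls = CLASS.matcher(source);
        if (!cls.find()) {
            throw new IllegalArgumentException("No public class found in " + javaFile);
        }
        return pkg.find() ? pkg.group(1) + "." + cls.group(1) : cls.group(1);
    }
}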

Project time = 1 week.

Augmented Large Language Models with Parametric Knowledge Guiding

…high costs for most researchers and companies seeking to fine-tune these models for their specific use cases or domains. Moreover, users who can afford to fine-tune must provide their private data to the LLMs' owner, thereby exposing it to potential risks such as misuse, breaches, or other security threats [BBC, 2023]. These limitations hinder the adaptability of LLMs to diverse scenarios and domains.

A common approach to enhance LLMs is to leverage retrieval-based methods that access domain-specific knowledge from external sources [Liu, 2022; Shi et al., 2023; Peng et al., 2023a]. While these methods have shown promise, they face several challenges. First, they heavily rely on modern dual-stream dense retrieval models [Karpukhin et al., 2020], which suffer from shallow interaction between the query and candidate documents. Second, most dense retrieval models are based on small-scale pre-trained models such as BERT [Devlin et al., 2019] and therefore cannot take advantage of the world knowledge of large-scale pre-trained models. Third, retrieval models may struggle with complex knowledge that requires the integration of information from multiple sources or modalities.

In this work, we propose the Parametric Knowledge Guiding (PKG) framework, which enables LLMs to access relevant information without modifying their parameters, by incorporating a trainable background knowledge generation module, as illustrated in Figure 1. Unlike retrieval-based methods, our PKG module utilizes an open-source and free-to-use "white-box" language model, LLaMa-7B [Touvron et al., 2023], which encodes implicit world knowledge from large-scale pre-training. The framework consists of two steps. First, we align the PKG module with the specific task or domain knowledge via instruction fine-tuning [Ouyang et al., 2022] to capture the necessary expertise. Second, for a given input, the PKG module generates the related knowledge, which is fed as extra context to the background-augmented prompting for LLMs. By supplying the necessary …

https://arxiv.org/pdf/2305.04757.pdf
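A minimal sketch of the second PKG step (background-augmented prompting). The PkgModule and LargeLanguageModel interfaces are hypothetical stand-ins; the instruction fine-tuning of the PKG module (step one) is not shown:

// Hypothetical interfaces for illustration; PKG itself only specifies the prompting pattern.
interface PkgModule {          // small, domain-tuned "white-box" model (e.g. a LLaMa-7B fine-tune)
    String generateBackgroundKnowledge(String input);
}

interface LargeLanguageModel { // black-box LLM accessed via API
    String complete(String prompt);
}

public final class PkgPrompting {
    // Step 2 of the PKG framework: generate domain knowledge with the small module,
    // then feed it to the large LLM as extra context.
    public static String answer(PkgModule pkg, LargeLanguageModel llm, String input) {
        String background = pkg.generateBackgroundKnowledge(input);
        String prompt = "Background knowledge:\n" + background
                + "\n\nUsing the background knowledge above, answer:\n" + input;
        return llm.complete(prompt);
    }
}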

Enable ReactChain to use multiple Tools

Although the current code has not been tested with multiple tools in the ToolArray, I expect the ReactChain not to behave correctly when it is provided with more than one tool.

  • Change the Reactchain.java loop (EdgeChain/src/main/java/com/application/project/services) to accommodate multiple tools.
  • The prompt for ReactChain might need to be changed. (EdgeChain/src/main/java/com/application/project/parser)

To finally test the chain with multiple tools, the same tool (wikisearch) can be added again to the toolArray and passed into the ReactChain.

https://tsmatz.wordpress.com/2023/03/07/react-with-openai-gpt-and-langchain/

  • Read this article thoroughly.
  • Cross-verify the prompt used in ChatWikiPrompt.java against the one in the research paper given above.
  • Decide on and alter the prompt according to the needs of the ReactChain code and the goal of using multiple tools in the loop (scratchpad, tool usage, looping, etc.).

Project Duration - 1 Week
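For reference, a hedged sketch of what tool dispatch inside the ReactChain loop could look like once multiple tools are supported. The Tool and Action records and the observe method are illustrative assumptions, not the current EdgeChain code:

import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Illustrative only: select a tool by the name the LLM emits in its "Action:" line,
// instead of assuming the single wikisearch tool.
public final class ToolDispatchSketch {
    public record Tool(String name, Function<String, String> run) {}
    public record Action(String toolName, String input, boolean isFinalAnswer) {}

    public static String observe(List<Tool> toolArray, Action action) {
        Map<String, Tool> byName = toolArray.stream()
                .collect(Collectors.toMap(Tool::name, t -> t));
        Tool tool = byName.get(action.toolName());
        if (tool == null) {
            // Fed back into the scratchpad so the model can pick a valid tool next turn.
            return "Observation: unknown tool '" + action.toolName() + "'. Available: " + byName.keySet();
        }
        return "Observation: " + tool.run().apply(action.input());
    }
}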

Light Mode: There is no light mode option

Many people prefer light mode, and the website currently defaults to dark mode. I can create a design for the light mode. Everything else will remain the same; only the color scheme will change, giving users the option of light or dark mode on the landing page. After creating the light mode design, a toggle button will also be required, so I can work on that too or create a new issue for it.

GSSOC'23

add back to top button

I want to add a back-to-top button to the README, which will enhance the user experience. Please assign this issue to me under GSSoC.

Support for openAPI

Do you plan to support constructing agents that consume arbitrary APIs, specifically APIs conformant to the OpenAPI/Swagger specification?

Early thoughts of java architecture

Customized Observable base class

let's create an abstraction EdgeChain on top of RxJava3, which will have customized versions of map, zip, and subscribe functions. We'll create two subclasses of EdgeChain, called MRKLChain and ReactChain, with different implementations for parsing. We'll also create an EndPoint class to encapsulate the API endpoint details.

First, let's define the EdgeChain class:

import io.reactivex.rxjava3.core.Observable;
import io.reactivex.rxjava3.functions.BiFunction;
import io.reactivex.rxjava3.functions.Consumer;
import io.reactivex.rxjava3.functions.Function;

public abstract class EdgeChain<T> {

    // Exposed so the examples below can read the wrapped stream directly;
    // a getter would be the more idiomatic choice.
    public final Observable<T> observable;

    protected EdgeChain(Observable<T> observable) {
        this.observable = observable;
    }

    public abstract <R> EdgeChain<R> transform(Function<T, R> mapper);

    public abstract <R> EdgeChain<R> combine(BiFunction<T, T, R> zipper);

    public abstract void execute(Consumer<? super T> onNext, Consumer<? super Throwable> onError);
}

Customized Endpoint base class

Now, let's create the EndPoint class:

public class EndPoint {
    private String url;
    private int maxRetries;
    private BackoffStrategy backoffStrategy; // BackoffStrategy is a placeholder type in this sketch

    public EndPoint(String url, int maxRetries, BackoffStrategy backoffStrategy) {
        this.url = url;
        this.maxRetries = maxRetries;
        this.backoffStrategy = backoffStrategy;
    }

    public String getUrl() { return url; }

    public int getMaxRetries() { return maxRetries; }

    public BackoffStrategy getBackoffStrategy() { return backoffStrategy; }

    // Setters omitted
}

Example implementations

This implementation allows you to create customized observable abstractions with different parsing implementations in the subclasses. The EndPoint class encapsulates the API endpoint details, and each chain reads from its EndPoint to call the actual URL. A chain holds an optional EndPoint, and its behavior depends on whether the EndPoint is provided, provided but empty, or omitted entirely.

Now, you can create instances of the MRKLChain and ReactChain classes and use their transform, combine, and execute methods to perform the required operations. Remember to provide an EndPoint instance with the required API endpoint details, the number of retries, and the backoff strategy.

let's create the MRKLChain subclass:

import io.reactivex.rxjava3.core.Observable;
import io.reactivex.rxjava3.functions.BiFunction;
import io.reactivex.rxjava3.functions.Consumer;
import io.reactivex.rxjava3.functions.Function;

public class MRKLChain<T> extends EdgeChain<T> {
    private final EndPoint endPoint;

    public MRKLChain(Observable<T> observable, EndPoint endPoint) {
        super(observable);
        this.endPoint = endPoint;
    }

    @Override
    public <R> MRKLChain<R> transform(Function<T, R> mapper) {
        return new MRKLChain<>(observable.map(mapper), endPoint);
    }

    @Override
    public <R> MRKLChain<R> combine(BiFunction<T, T, R> zipper) {
        // Zips the stream with itself; in practice the second source would be another chain.
        return new MRKLChain<>(observable.zipWith(observable, zipper), endPoint);
    }

    @Override
    public void execute(Consumer<? super T> onNext, Consumer<? super Throwable> onError) {
        // Bounded retry; a backoff policy from endPoint.getBackoffStrategy() could be wired in with retryWhen.
        observable
            .retry(endPoint.getMaxRetries())
            .subscribe(onNext, onError);
    }
}

Now, let's create the ReactChain subclass:

import io.reactivex.rxjava3.core.Observable;
import io.reactivex.rxjava3.functions.BiFunction;
import io.reactivex.rxjava3.functions.Consumer;
import io.reactivex.rxjava3.functions.Function;

public class ReactChain<T> extends EdgeChain<T> {
    private final EndPoint endPoint;

    public ReactChain(Observable<T> observable, EndPoint endPoint) {
        super(observable);
        this.endPoint = endPoint;
    }

    @Override
    public <R> ReactChain<R> transform(Function<T, R> mapper) {
        return new ReactChain<>(observable.map(mapper), endPoint);
    }

    @Override
    public <R> ReactChain<R> combine(BiFunction<T, T, R> zipper) {
        return new ReactChain<>(observable.zipWith(observable, zipper), endPoint);
    }

    @Override
    public void execute(Consumer<? super T> onNext, Consumer<? super Throwable> onError) {
        observable
            .retry(endPoint.getMaxRetries())
            .subscribe(onNext, onError);
    }
}

Combined Example

Let's create an example where two chains, MRKLChain and ReactChain, are used together with transform, combine, and execute. We'll use a simple transformation function to parse and reformat the text in each chain. We'll also use the combineLatest operator from RxJava3 to combine the results of the two chains before passing them to another instance of the chain.

First, let's create some sample API endpoints and transformation functions:

EndPoint endPoint1 = new EndPoint("https://api.example.com/data1", 3, BackoffStrategy.exponential());
EndPoint endPoint2 = new EndPoint("https://api.example.com/data2", 3, BackoffStrategy.exponential());

Function<String, List<String>> mrklParser = text -> Arrays.asList(text.split("\\s+"));
Function<String, List<String>> reactParser = text -> Arrays.asList(text.split("[\\r\\n]+"));

Next, let's create instances of MRKLChain and ReactChain:

MRKLChain<String> mrklChain = new MRKLChain<>(Observable.just("sample text for MRKLChain"), endPoint1);
ReactChain<String> reactChain = new ReactChain<>(Observable.just("sample text\nfor ReactChain"), endPoint2);

Now, let's transform the data, combine the results using combineLatest, and execute the combined chain:

Observable<List<String>> mrklTransformed = mrklChain.transform(mrklParser).observable;
Observable<List<String>> reactTransformed = reactChain.transform(reactParser).observable;

Observable<List<String>> combined = Observable.combineLatest(mrklTransformed, reactTransformed, (mrklData, reactData) -> {
    List<String> result = new ArrayList<>(mrklData);
    result.addAll(reactData);
    return result;
});

MRKLChain<List<String>> combinedChain = new MRKLChain<>(combined, endPoint1);

combinedChain.execute(
    result -> System.out.println("Combined result: " + result),
    error -> System.err.println("Error: " + error)
);

In this example, the MRKLChain and ReactChain instances are created with sample text and API endpoints. The transformation functions, mrklParser and reactParser, are used to parse and reformat the text. The combineLatest operator is used to combine the results of the two chains before passing them to another instance of the chain (combinedChain). The execute method is used to process the combined result or handle any errors.

This demonstrates how to use the customized observable abstractions to call external APIs, parse and reformat the result, and pass the reformatted data to another instance of the abstraction for further processing.

Putting it inside Spring Boot Webflux

To integrate the EdgeChain abstractions into a Spring Boot Webflux application, you can create a Spring Webflux controller for each chain and expose them as RESTful API endpoints. By doing this, you can call the chain endpoints from your code or any other external clients, without worrying about whether they are executing on your local machine or a remote endpoint.

Let's continue using RxJava3 with Spring Boot Webflux. You can keep Observable inside the controller and return a Mono by first bridging the Observable to a Flowable (a Reactive Streams Publisher) and wrapping it with Mono.from. This way, we can continue using RxJava3, and the extra configuration needed is minimal.

First, let's create a ChainController class that will handle the incoming requests and delegate the processing to the appropriate chain instances:

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

@RestController
@RequestMapping("/chains")
public class ChainController {

    @PostMapping("/mrkl")
    public Mono<ResponseEntity<String>> processMRKLChain(@RequestBody String input) {
        // Create MRKLChain instance and process the input
        // Return the result as a Mono<ResponseEntity<String>>
    }

    @PostMapping("/react")
    public Mono<ResponseEntity<String>> processReactChain(@RequestBody String input) {
        // Create ReactChain instance and process the input
        // Return the result as a Mono<ResponseEntity<String>>
    }
}

Now, let's implement the processMRKLChain and processReactChain methods in the ChainController. You can keep the execute method in the EdgeChain, MRKLChain, and ReactChain classes as they were before:

@PostMapping("/mrkl")
public Mono<ResponseEntity<String>> processMRKLChain(@RequestBody String input) {
    EndPoint endPoint = new EndPoint("https://api.example.com/data1", 3, BackoffStrategy.exponential());
    MRKLChain<String> mrklChain = new MRKLChain<>(Observable.just(input), endPoint);

    // mrklParser: a Function<String, String> defined elsewhere in the class.
    // toFlowable bridges the RxJava3 Observable to a Reactive Streams Publisher that Mono.from accepts.
    return Mono.from(mrklChain.transform(mrklParser).observable.toFlowable(BackpressureStrategy.BUFFER))
            .map(result -> ResponseEntity.ok().body(result))
            .onErrorResume(e -> Mono.just(ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Error: " + e.getMessage())));
}

@PostMapping("/react")
public Mono<ResponseEntity<String>> processReactChain(@RequestBody String input) {
    EndPoint endPoint = new EndPoint("https://api.example.com/data2", 3, BackoffStrategy.exponential());
    ReactChain<String> reactChain = new ReactChain<>(Observable.just(input), endPoint);

    return Mono.from(reactChain.transform(reactParser).observable.toFlowable(BackpressureStrategy.BUFFER))
            .map(result -> ResponseEntity.ok().body(result))
            .onErrorResume(e -> Mono.just(ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Error: " + e.getMessage())));
}

In this implementation, we create MRKLChain and ReactChain instances and process the input using the transform method. The result is an Observable, which is bridged to a Flowable and then wrapped in a Mono using Mono.from. This approach allows you to keep using RxJava3 in your Spring Boot Webflux application and expose the chains as RESTful API endpoints. When you call these endpoints from your code or an external client, they will process the input using the corresponding chain and return the result.

cleaning up response classes, etc. Pure cosmetics

Sure, we can create a custom class ArkResponse that wraps the Mono<ResponseEntity<String>> and provides a method to create an instance of ArkResponse from an Observable<String>.

Here's the updated ArkResponse class:

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import reactor.core.publisher.Mono;
import io.reactivex.rxjava3.core.BackpressureStrategy;
import io.reactivex.rxjava3.core.Observable;

public class ArkResponse {

    private final Mono<ResponseEntity<String>> response;

    private ArkResponse(Mono<ResponseEntity<String>> response) {
        this.response = response;
    }

    public static ArkResponse fromObservable(Observable<String> observable) {
        Mono<ResponseEntity<String>> mono = Mono.from(observable.toFlowable(BackpressureStrategy.BUFFER))
                .map(result -> ResponseEntity.ok().body(result))
                .onErrorResume(e -> Mono.just(ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Error: " + e.getMessage())));
        return new ArkResponse(mono);
    }

    public Mono<ResponseEntity<String>> getResponse() {
        return response;
    }
}

Now, let's update the ChainController to return ArkResponse instead of Mono<ResponseEntity<String>>:

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/chains")
public class ChainController {

    @PostMapping("/mrkl")
    public ArkResponse processMRKLChain(@RequestBody String input) {
        // Create MRKLChain instance and process the input
        // Return the result as an ArkResponse
    }

    @PostMapping("/react")
    public ArkResponse processReactChain(@RequestBody String input) {
        // Create ReactChain instance and process the input
        // Return the result as an ArkResponse
    }
}

Update the processMRKLChain and processReactChain methods in the ChainController to use ArkResponse.fromObservable:

@PostMapping("/mrkl")
public ArkResponse processMRKLChain(@RequestBody String input) {
    EndPoint endPoint = new EndPoint("https://api.example.com/data1", 3, BackoffStrategy.exponential());
    MRKLChain<String> mrklChain = new MRKLChain<>(Observable.just(input), endPoint);

    return ArkResponse.fromObservable(mrklChain.transform(mrklParser).observable);
}

@PostMapping("/react")
public ArkResponse processReactChain(@RequestBody String input) {
    EndPoint endPoint = new EndPoint("https://api.example.com/data2", 3, BackoffStrategy.exponential());
    ReactChain<String> reactChain = new ReactChain<>(Observable.just(input), endPoint);

    return ArkResponse.fromObservable(reactChain.transform(reactParser).observable);
}

With this implementation, the ArkResponse class wraps the Mono<ResponseEntity<String>> and the ChainController methods return ArkResponse instances, reducing verbosity in the controller. To access the underlying Mono<ResponseEntity<String>> when needed, you can use the getResponse() method provided by the ArkResponse class. (Spring WebFlux does not unwrap a custom type like ArkResponse on its own, so we would either register a custom return value handler for it or have the controller call getResponse() when writing the response.)

tying it all together

In this example, we'll use two chains, MRKLChain and ReactChain, to transform and reformat the input text using simple transformation functions. Then, we'll combine the results of these two chains using the zip operator from RxJava3 and pass the combined result to another instance of the chain.

First, let's create simple transformation functions for MRKLChain and ReactChain. We'll use a lambda expression to define these functions:

Function<String, String> mrklParser = input -> "MRKL: " + input.toUpperCase();
Function<String, String> reactParser = input -> "React: " + input.toLowerCase();

Now, let's create instances of MRKLChain and ReactChain and apply the transformation functions using the transform method:

EndPoint endPoint1 = new EndPoint("https://api.example.com/data1", 3, BackoffStrategy.exponential());
MRKLChain<String> mrklChain = new MRKLChain<>(Observable.just("Hello MRKL"), endPoint1);
Observable<String> mrklTransformed = mrklChain.transform(mrklParser).observable;

EndPoint endPoint2 = new EndPoint("https://api.example.com/data2", 3, BackoffStrategy.exponential());
ReactChain<String> reactChain = new ReactChain<>(Observable.just("Hello React"), endPoint2);
Observable<String> reactTransformed = reactChain.transform(reactParser).observable;

Next, let's use the zip operator from RxJava3 to combine the results of the two chains:

import io.reactivex.rxjava3.core.Observable;

Observable<String> combined = Observable.zip(mrklTransformed, reactTransformed, (mrklResult, reactResult) -> mrklResult + " | " + reactResult);

Now, we'll create another instance of the chain (e.g., MRKLChain) and pass the combined result to it. We'll also apply a transformation function that concatenates a string to the result:

Function<String, String> combinedParser = input -> "Combined: " + input;

MRKLChain<String> combinedChain = new MRKLChain<>(combined, endPoint1);
Observable<String> combinedTransformed = combinedChain.transform(combinedParser).observable;

Finally, let's create a new endpoint in the ChainController that uses the combined chain:

@PostMapping("/combined")
public ArkResponse processCombinedChain() {
    // combinedTransformed is built as shown above; in a real controller it would be constructed per request.
    return ArkResponse.fromObservable(combinedTransformed);
}

In this example, we created instances of MRKLChain and ReactChain, applied simple transformation functions to reformat the input text, combined the results using the zip operator, and passed the combined result to another instance of the chain. We also added a new endpoint in the ChainController to handle the combined chain.

Missing Code of conduct.md file

Hi there!
I recommend adding a CODE_OF_CONDUCT.md file to your repository. This file would serve as a guide for potential contributors, providing them with a clear set of expectations, guidelines, and behavior standards for participating in your project. It plays an essential role in creating a welcoming and inclusive environment for everyone involved.

I would like to work on this issue. Please assign it to me under GSSoC'23.

Thank you!

Fix execution in linux

If you download the latest artifact from the BuildAndRun Action and cd into the Script folder, executing this command: java -jar flyfly.jar jbang Flyopenaiwiki.java Edgechain.jar does not work on Linux.

Flyfly executes java -cp jbang.jar dev.jbang.Main --cp Edgechain.jar Flyopenaiwiki.java and then, in the next step, executes java -classpath classPath MainClass.
Check FlySpring/flyfly/src/main/java/com/flyspring/flyfly/commands/jbang/JbangCommand.java.

This process works only in cmd/powershell [Windows].

For Linux, this method works:-

  1. Execute java -cp jbang.jar dev.jbang.Main --cp Edgechain.jar Flyopenaiwiki.java
  2. Copy the output and execute it.

Automate these two steps just as is done for Windows in JbangCommand.java, making sure that Flyfly detects the OS and executes the right method(s).
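A hedged sketch of the OS branch (not the existing JbangCommand implementation; the command-capture details are assumptions): detect the OS, run the jbang step, and on Linux execute the command it prints as the second step.

import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only; the real fix belongs in JbangCommand.java and may differ.
public final class JbangLauncherSketch {
    public static void run() throws IOException, InterruptedException {
        boolean isWindows = System.getProperty("os.name").toLowerCase().contains("win");

        List<String> jbangStep = List.of("java", "-cp", "jbang.jar", "dev.jbang.Main",
                "--cp", "Edgechain.jar", "Flyopenaiwiki.java");

        if (isWindows) {
            // Stand-in for the current Windows flow, which already works in cmd/powershell.
            new ProcessBuilder(jbangStep).inheritIO().start().waitFor();
        } else {
            // Linux: capture the "java -classpath <cp> <MainClass>" line that jbang prints,
            // then execute that printed command as the second step.
            Process jbang = new ProcessBuilder(jbangStep).redirectErrorStream(true).start();
            String printedCommand = new String(jbang.getInputStream().readAllBytes()).trim();
            jbang.waitFor();
            new ProcessBuilder(Arrays.asList(printedCommand.split("\\s+")))
                    .inheritIO().start().waitFor();
        }
    }
}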

set prompt window length for chat

Can we keep a preferred token length for variables inside a prompt template?

Oftentimes I want to trim out a particular variable, like chat history, if the overall length of the prompt exceeds 4K tokens.

These days I use a function with my prompt templates that accepts a preferred token length and cuts from the start or the back (configurable), but it has limitations: there is no priority order of variables to cut tokens from when the overall length exceeds 4K. I've done patches on top of it but haven't reached a very sophisticated method yet.
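A hedged sketch of what a priority-aware trimmer could look like. The TokenCounter interface, the Variable record, and the 100-character cut step are illustrative assumptions; real token counting would come from the model's tokenizer:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: trim template variables to a token budget, cutting lower-priority
// variables (e.g. chat history) first, from the front or the back as configured.
public final class PromptTrimmerSketch {
    public interface TokenCounter { int count(String text); }

    public record Variable(String name, String value, int priority, boolean cutFromStart) {}

    public static Map<String, String> fit(List<Variable> variables, int maxTokens, TokenCounter counter) {
        Map<String, String> result = new LinkedHashMap<>();
        variables.forEach(v -> result.put(v.name(), v.value()));

        int total = result.values().stream().mapToInt(counter::count).sum();
        // Cut from the lowest-priority variable upward until the prompt fits the budget.
        List<Variable> byPriority = variables.stream()
                .sorted((a, b) -> Integer.compare(a.priority(), b.priority()))
                .toList();
        for (Variable v : byPriority) {
            while (total > maxTokens && !result.get(v.name()).isEmpty()) {
                String current = result.get(v.name());
                String trimmed = v.cutFromStart()
                        ? current.substring(Math.min(100, current.length()))                 // drop ~100 chars from the start
                        : current.substring(0, Math.max(0, current.length() - 100));         // or from the back
                result.put(v.name(), trimmed);
                total = result.values().stream().mapToInt(counter::count).sum();
            }
        }
        return result;
    }
}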

Fix Grammar in README

I have noticed some grammatical errors in the README.

This project hopes and requests for clean pull request merges. the way we merge is squash and merge. This fundamentally can only work if you NEVER ISSUE A PULL REQUEST TWICE FROM THE SAME LOCAL BRANCH. If you create another pull request from same local branch, then the merge will always fail.

solution is simple - ONE BRANCH PER PULL REQUEST. We Follow this strictly. if you have created this pull request using your master/main branch, then follow these steps to fix it:

Fixes:

hopes -> hopes for
requests for -> requests
the way we merge -> The method which we use for merging
solution -> The solution
Follow -> follow
if -> If

Missing Contributing.md file

The repo is missing a contributing guidelines file that would provide clear guidelines to contributors. I can add it.

Please assign me this issue under GSSoC'23 @sandys

implement hyde

https://twitter.com/darrenangle/status/1652014961806196745

https://wfhbrian.com/revolutionizing-search-how-hypothetical-document-embeddings-hyde-can-save-time-and-increase-productivity/

https://arxiv.org/pdf/2212.10496.pdf

Semantic search on embeddings is hard to get right. Embedding long documents is a challenge. User queries are a challenge: if a user provides an ambiguous query, they'll get ambiguous matches.

LLMs just follow what was in the context and hallucinate answers as a result

I've had a lot of success using HyDE for the query problem.

Essentially, let an LLM generate the query, or even use the hallucinated answer as the query.

If it's a chat, fold the response back into the chat step with a prompt along the lines of "thought: I can use this data to answer the user".

works really well
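A minimal sketch of the HyDE flow described above, assuming hypothetical HydeLanguageModel, Embedder, and VectorStore interfaces (none of these are existing EdgeChains classes):

import java.util.List;

// Hypothetical interfaces for illustration only.
interface HydeLanguageModel { String complete(String prompt); }
interface Embedder { float[] embed(String text); }
interface VectorStore { List<String> nearest(float[] queryEmbedding, int k); }

public final class HydeSketch {
    // HyDE: let the LLM hallucinate an answer, embed that hypothetical document,
    // and use its embedding (instead of the raw query's) to search the vector store.
    public static List<String> retrieve(HydeLanguageModel lm, Embedder embedder, VectorStore store,
                                        String userQuery, int topK) {
        String hypotheticalAnswer = lm.complete(
                "Write a short passage that answers the question:\n" + userQuery);
        float[] queryEmbedding = embedder.embed(hypotheticalAnswer);
        return store.nearest(queryEmbedding, topK);
    }
}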

add readme badges

I want to add README badges. Please assign this issue to me under GSSoC.
