Ragdaemon

Ragdaemon is a Retrieval-Augmented Generation (RAG) system for code. It runs a daemon (background process) to watch your active code, put it in a knowledge graph, and query the knowledge graph to (among other things) generate context for LLM completions.

Three ways to use Ragdaemon:

1. Help me write code

Ragdaemon powers the 'auto-context' feature in Mentat, a command-line coding assistant. You can install Mentat using pip install mentat. Run it with the --auto-context-tokens <amount> (or -a) flag (default 5000), and ragdaemon-selected context will be added to all of your prompts.

2. Explore the knowledge graph

Install locally to visualize and query the knowledge graph directly. Install using pip install ragdaemon, then run ragdaemon from your codebase's directory. This starts a Daemon on your codebase and serves an interface at localhost:5001. Options:

  • --chunk-extensions <ext>[..<ext>]: Which file extensions to chunk. If not specified, defaults to the top 20 most common code file extensions.
  • --chunk-model: OpenAI's gpt-4-0125-preview by default.
  • --embeddings-model: OpenAI's text-embedding-3-large by default.
  • --diff: A git diff to include in the knowledge graph. By default, the active diff (if any) is included with each code feature.

3. Use ragdaemon Python API

Ragdaemon is released open-source as a standalone RAG system. It includes a library of Python classes to generate and query the knowledge graph. The graph itself is a NetworkX MultiDiGraph that saves to / loads from a .json file.

import asyncio
from pathlib import Path
from ragdaemon.daemon import Daemon

async def main():
    # Start a daemon on the current directory and build/refresh the graph.
    cwd = Path.cwd()
    daemon = Daemon(cwd)
    await daemon.update()

    # Semantic search over the knowledge graph.
    results = daemon.search("javascript")
    for result in results:
        print(f"{result['distance']} | {result['id']}")

    # Select up to `auto_tokens` tokens of context relevant to the query.
    query = "How do I run the tests?"
    context_builder = daemon.get_context(
        query,
        auto_tokens=5000,
    )
    context = context_builder.render()
    messages = [
        {"role": "user", "content": query},
        {"role": "user", "content": f"CODE CONTEXT\n{context}"},
    ]
    print(messages)

asyncio.run(main())
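Because the graph is a plain NetworkX MultiDiGraph, it round-trips through NetworkX's node-link JSON format. A minimal sketch of saving and reloading such a graph (the node IDs and attributes here are illustrative, not ragdaemon's actual schema):

```python
import json

import networkx as nx
from networkx.readwrite import json_graph

# Build a tiny MultiDiGraph in the spirit of the one ragdaemon maintains.
graph = nx.MultiDiGraph()
graph.add_node("src/main.py", type="file")
graph.add_node("src/main.py:main", type="chunk")
graph.add_edge("src/main.py", "src/main.py:main", type="hierarchy")

# Serialize to node-link JSON (multi-edges survive via per-edge keys).
data = json_graph.node_link_data(graph)
with open("graph.json", "w") as f:
    json.dump(data, f)

# Reload; directedness and multigraph-ness are recorded in the JSON itself.
with open("graph.json") as f:
    loaded = json_graph.node_link_graph(json.load(f))
```

The node-link format stores `directed` and `multigraph` flags alongside the node and edge lists, so the reloaded object is again a MultiDiGraph with all attributes intact.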

ragdaemon's Issues

Graph isn't set up to handle two functions with the same name.

In certain cases, such as when using @override, you'd get two identical node IDs (because node IDs are just the function name).

Currently, because of our remove_add_to_db_duplicates method, it's likely that only the FIRST is added to the graph. The two may or may not share the same checksum / database entry.
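One possible mitigation (a sketch of an idea, not the current implementation) is to qualify each node ID with its file path and starting line, so two same-named functions can never collide:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ChunkRef:
    """Hypothetical chunk reference; field names are illustrative."""
    path: str        # file the chunk lives in
    name: str        # bare function/class name
    start_line: int  # disambiguates same-named pairs, e.g. @override methods

    @property
    def node_id(self) -> str:
        # e.g. "src/base.py:render:10" — unique even when names repeat
        return f"{self.path}:{self.name}:{self.start_line}"


a = ChunkRef("src/base.py", "render", 10)
b = ChunkRef("src/impl.py", "render", 42)
assert a.node_id != b.node_id  # same name, distinct graph nodes
```

With IDs like these, a dedup pass such as remove_add_to_db_duplicates would only ever collapse chunks that are genuinely the same location, not accidental name twins.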

Road Map

Backend

  1. Add chromadb / embeddings
  2. Generate positions: filetree-spring & tsne
  3. Add search route to backend

Frontend

  1. Refactor frontend components: global funcs passed to scene / controlPanel
  2. Rebuild control panel to search / results
  3. Update three to match latent-dictionary: transparency, camera movements

Database hygiene

In some cases the database will contain outdated records, and the only way to 'update' it is to delete the database directory (~/.mentat/chroma). One case in particular: switching between chunker_line and chunker_llm does not update the db record.

Potential solutions:

  • Add more fields or a checksum to the collection name
  • Add chunker-type-specific data so 'Chunker[type].is_complete' checks for a record of that type specifically
  • Refresh the database every time 'version' is updated.
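The first bullet could be sketched as deriving the collection name from a checksum over everything that affects chunk contents; any configuration change then lands in a fresh collection instead of reading stale records. Function and field names below are illustrative:

```python
import hashlib


def collection_name(chunker: str, chunk_model: str, version: str) -> str:
    """Derive a collection name from the full chunking configuration.

    Any change to the chunker type, chunk model, or version yields a new
    name, so records written under an old configuration are never read.
    """
    key = f"{chunker}|{chunk_model}|{version}".encode()
    digest = hashlib.sha256(key).hexdigest()[:12]
    return f"ragdaemon-{digest}"


old = collection_name("chunker_line", "gpt-4-0125-preview", "0.1.0")
new = collection_name("chunker_llm", "gpt-4-0125-preview", "0.1.0")
assert old != new  # switching chunkers no longer reuses stale records
```

The trade-off is storage: old collections linger until garbage-collected, but correctness no longer depends on remembering to delete ~/.mentat/chroma by hand.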

Chunking Issues

  1. Look at a diff to deterministically update non-edited chunks from a file. Look at line number changes in hunk header.

  2. New chunking protocol:
    a. Grab the first N characters (50k tokens * 5 chars/token = 250k chars)
    b. Chunk that with our llm chunker
    c. Remove the last chunk
    d. Grab a new N characters starting with the start_line of that last chunk
    e. Include the 'call path' from the last chunk in case we're mid-class
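Steps a–d above amount to a sliding window over the file. A rough sketch, with the LLM chunker replaced by a stand-in that splits on blank lines (step e, carrying the 'call path' forward, is omitted here):

```python
WINDOW = 250_000  # ~50k tokens * 5 chars/token, per step (a)


def stub_chunker(text):
    """Stand-in for the LLM chunker: one chunk per blank-line-separated
    block. Returns (start, end) character offsets within `text`."""
    chunks, start = [], 0
    for block in text.split("\n\n"):
        chunks.append((start, start + len(block)))
        start += len(block) + 2  # skip the "\n\n" separator
    return chunks


def chunk_file(text, chunker=stub_chunker, window=WINDOW):
    chunks, offset = [], 0
    while offset < len(text):
        piece = text[offset : offset + window]  # (a) grab the next N chars
        local = chunker(piece)                  # (b) chunk that window
        if offset + window < len(text) and len(local) > 1:
            dropped = local.pop()               # (c) last chunk may be cut off
            next_offset = offset + dropped[0]   # (d) restart at its start
        else:
            next_offset = offset + window       # final (or single-chunk) window
        chunks.extend((offset + s, offset + e) for s, e in local)
        offset = next_offset
    return chunks
```

Each window keeps every chunk except the last (which may be truncated mid-definition), then the next window restarts at that dropped chunk's start, so every chunk is eventually produced from a window that contains it whole.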

Issues:

  1. What if a function is more than 50k tokens? A: Chunker should be able to tell you that.
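For item 1 under Chunking Issues, the line-number shifts come straight out of the unified-diff hunk headers (`@@ -old_start,old_len +new_start,new_len @@`). A minimal parser sketch for computing the offset a non-edited chunk picks up:

```python
import re

# Matches unified-diff hunk headers; lengths default to 1 when omitted.
HUNK = re.compile(r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@")


def line_offset_after(diff: str, old_line: int) -> int:
    """Cumulative line-number shift that a chunk starting at `old_line`
    (pre-diff numbering) picks up from all hunks entirely above it.
    Chunks that overlap a hunk were edited and should be re-chunked."""
    offset = 0
    for line in diff.splitlines():
        m = HUNK.match(line)
        if not m:
            continue
        old_start = int(m.group(1))
        old_len = int(m.group(2) or 1)
        new_len = int(m.group(4) or 1)
        if old_start + old_len <= old_line:
            offset += new_len - old_len
    return offset
```

For example, a hunk `@@ -10,3 +10,5 @@` adds two lines, so a chunk starting at old line 20 moves to line 22 without any re-chunking.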
