
🔍 Observability 🕸️ Agent Tracing 💬 Prompt Management
📊 Evaluations 📚 Datasets 🎛️ Fine-tuning

Open Source

Docs | Discord | Roadmap | Changelog | Bug reports

See Helicone in Action! (Free)


Helicone is the all-in-one, open-source LLM developer platform

  • 🔌 Integrate: One line of code to log all your requests to OpenAI, Anthropic, LangChain, Gemini, TogetherAI, LlamaIndex, LiteLLM, OpenRouter, and more
  • 📊 Observe: Inspect and debug traces & sessions for agents, chatbots, document processing pipelines, and more
  • 📈 Analyze: Track metrics like cost, latency, quality, and more. Export to PostHog in one-line for custom dashboards
  • 🎮 Playground: Rapidly test and iterate on prompts, sessions and traces in our UI
  • 🧠 Prompt Management: Version and experiment with prompts using production data. Your prompts remain under your control, always accessible.
  • 🔍 Evaluate: Automatically run evals on traces or sessions using the latest platforms: LastMile or Ragas (more coming soon)
  • 🎛️ Fine-tune: Fine-tune with one of our fine-tuning partners: OpenPipe or Autonomi (more coming soon)
  • 🛜 Gateway: Caching, custom rate limits, LLM security, and more with our gateway (see the header sketch just after this list)
  • 🛡️ Enterprise Ready: SOC 2 and GDPR compliant
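
Many of these gateway and observability features are toggled per request with Helicone-* headers once the Quick Start below is in place. A minimal sketch; the header names are assumptions based on Helicone's header documentation, so verify them before relying on them:

// Pass these as defaultHeaders on your OpenAI client (see the Quick Start below).
// Header names are assumptions; check Helicone's header reference.
const heliconeHeaders = {
  "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  "Helicone-Cache-Enabled": "true",           // serve repeated requests from the gateway cache
  "Helicone-User-Id": "user-123",             // attribute cost and usage to an end user
  "Helicone-Property-Environment": "staging", // custom property for filtering in the dashboard
};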

🎁 Generous monthly free tier (100k requests/month) - No credit card required!

Quick Start ⚡️ One line of code

  1. Get your write-only API key by signing up here.

  2. Update only the baseURL in your code:

    import OpenAI from "openai";
    
    const openai = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
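      // Route requests through Helicone; your Helicone API key is embedded in the base URL.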
      baseURL: `https://oai.helicone.ai/v1/${process.env.HELICONE_API_KEY}`,
    });

Or, use headers for more secure environments:

import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: `https://oai.helicone.ai/v1`,
  defaultHeaders: {
   "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});
  3. 🎉 You're all set! View your logs at Helicone.
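
To confirm everything is wired up, send one request through the proxied client and check that it appears in your Helicone dashboard. A minimal sketch; the model name is illustrative:

// Works with either client configuration above.
const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini", // illustrative; use whichever model you normally call
  messages: [{ role: "user", content: "Hello from Helicone!" }],
});

console.log(completion.choices[0].message.content);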

This quick start uses Helicone Cloud with OpenAI. For other providers or self-hosted options, see below.

Get Started For Free

Helicone Cloud (Recommended)

The fastest and most reliable way to get started with Helicone. Get started for free at Helicone US or Helicone EU. Your first 100k requests are free every month, after which you'll pay based on usage. Try our demo to see Helicone in action!

Integrations: View our supported integrations.

Latency Concerns: Helicone's Cloud offering is deployed on Cloudflare Workers and adds only ~10 ms of latency to your API requests. View our latency benchmarks.

Self-Hosting Open Source LLM Observability with Helicone

Docker

Helicone is simple to self-host and update. To get started locally, just use our docker-compose file.

Prerequisites:

  • Copy the shared directory to the valhalla directory
  • Create a valhalla folder in the valhalla directory and put /valhalla/jawn in it
# Clone the repository
git clone https://github.com/Helicone/helicone.git
cd helicone/docker
cp .env.example .env

# Start the services
docker compose up

Helm

For Enterprise workloads, we also have a production-ready Helm chart available. To access, contact us at [email protected].

Manual (Not Recommended)

Manual deployment is not recommended. Please use Docker or Helm. If you must, follow the instructions here.

Architecture

Helicone consists of the following services:

  • Web: Frontend Platform (NextJS)
  • Worker: Proxy Logging (Cloudflare Workers)
  • Jawn: Dedicated server for collecting and serving logs (Express + Tsoa)
  • Supabase: Application Database and Auth
  • ClickHouse: Analytics Database
  • Minio: Object Storage for logs

LLM Observability Integrations

Main Integrations

| Integration | Supports | Description |
| --- | --- | --- |
| Generic Gateway | Python, Node.js, Python w/ package, LangChain JS, LangChain, cURL | Flexible integration method for various LLM providers |
| Async Logging (OpenLLMetry) | JS/TS, Python | Asynchronous logging for multiple LLM platforms |
| OpenAI | JS/TS, Python | - |
| Azure OpenAI | JS/TS, Python | - |
| Anthropic | JS/TS, Python | - |
| Ollama | JS/TS | Run and use large language models locally |
| AWS Bedrock | JS/TS | - |
| Gemini API | JS/TS | - |
| Gemini Vertex AI | JS/TS | Gemini models on Google Cloud's Vertex AI |
| Vercel AI | JS/TS | AI SDK for building AI-powered applications |
| Anyscale | JS/TS, Python | - |
| TogetherAI | JS/TS, Python | - |
| Hyperbolic | JS/TS, Python | High-performance AI inference platform |
| Groq | JS/TS, Python | High-performance models |
| DeepInfra | JS/TS, Python | Serverless AI inference for various models |
| OpenRouter | JS/TS, Python | Unified API for multiple AI models |
| LiteLLM | JS/TS, Python | Proxy server supporting multiple LLM providers |
| Fireworks AI | JS/TS, Python | Fast inference API for open-source LLMs |
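
As a concrete illustration of the Generic Gateway row above, the sketch below sends a request through a gateway endpoint with plain fetch. The base URL (https://gateway.helicone.ai), the Helicone-Target-Url header, and the provider URL are assumptions to verify against the gateway docs:

// Minimal sketch of the gateway pattern; all Helicone-specific values are assumptions.
const response = await fetch("https://gateway.helicone.ai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.PROVIDER_API_KEY}`, // your LLM provider's key
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    "Helicone-Target-Url": "https://api.your-provider.example", // hypothetical provider base URL
  },
  body: JSON.stringify({
    model: "your-model-name", // illustrative
    messages: [{ role: "user", content: "Hello" }],
  }),
});
console.log(await response.json());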

Supported Frameworks

| Framework | Supports | Description |
| --- | --- | --- |
| LangChain | JS/TS, Python | - |
| LlamaIndex | Python | Framework for building LLM-powered data applications |
| CrewAI | - | Framework for orchestrating role-playing AI agents |
| Big-AGI | JS/TS | Generative AI suite |
| ModelFusion | JS/TS | Abstraction layer for integrating AI models into JavaScript and TypeScript applications |

Other Integrations

| Integration | Description |
| --- | --- |
| PostHog | Product analytics platform. Build custom dashboards. |
| RAGAS | Evaluation framework for retrieval-augmented generation |
| Open WebUI | Web interface for interacting with local LLMs |
| MetaGPT | Multi-agent framework |
| Open Devin | AI software engineer |
| Mem0 EmbedChain | Framework for building RAG applications |
| Dify | LLMOps platform for AI-native application development |
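
For the PostHog row above, Helicone can forward analytics per request via headers; the header names below are assumptions drawn from Helicone's PostHog integration docs and should be verified:

// Add these to the defaultHeaders of your Helicone-proxied client (see Quick Start).
const posthogHeaders = {
  "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  "Helicone-Posthog-Key": process.env.POSTHOG_API_KEY, // assumed header name
  "Helicone-Posthog-Host": "https://app.posthog.com",  // assumed header name; PostHog Cloud host
};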

This list may be out of date. Don't see your provider or framework? Check out the latest integrations in our docs. If not found there, request a new integration by contacting [email protected].

Community 🌍

Learn this repo with Greptile

learnthisrepo.com/helicone

Contributing

We ❤️ our contributors! We warmly welcome contributions for documentation, integrations, costs, and feature requests.

License

Helicone is licensed under the Apache v2.0 License.

Additional Resources

For more information, visit our documentation.

