boundaryml / baml

BAML is a templating language for writing typed LLM functions. Check out the playground at promptfiddle.com.

Home Page: https://docs.boundaryml.com

License: Apache License 2.0

Rust 39.27% Python 18.51% Shell 0.75% TypeScript 27.24% JavaScript 0.79% HTML 0.01% CSS 1.34% Ruby 11.45% Dockerfile 0.14% Jinja 0.50%
baml llm boundaryml guardrails llm-playground playground vscode structured-data prompt prompt-config


baml's Issues

Function Chaining

It would be great to be able to chain functions together in .baml files, with some sort of if/then/else logic, so that a model could, for example, run a validation function on a call and then take action based on the result.
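A purely hypothetical sketch of what this could look like (none of this syntax exists in BAML today; the `chain` block, the `if`/`else` control flow, and the `ExtractEntities`/`ValidateEntities`/`RetryExtract` function names are all invented for illustration):

```baml
// Hypothetical syntax -- BAML does not support this today.
function ExtractAndValidate(input: string) -> Entities {
  chain {
    let result = ExtractEntities(input)
    // Branch on the output of a validation function.
    if (ValidateEntities(result)) {
      return result
    } else {
      return RetryExtract(input)
    }
  }
}
```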

[VScode] Switch to file-watcher architecture to prevent stale-file issues

VSCode only listens to opened files or files in the baml_src directory. Our current approach of lazily refreshing the list of files in baml_src isn't very robust.

We should just watch for file changes preemptively via a file-watcher mechanism, so we always have the latest state.

Things to keep in mind:

  1. If we retrieve a doc from the cache, it should also include the unsaved changes reported by VSCode's onDocumentChanged(..) callbacks. Our playground works off the in-memory content even if you haven't saved a file.

baml-fallback should maybe be a strategy as opposed to another client

I would like to have some separation between azure/anthropic/openai/etc clients and fallback clients

I see fallback clients as a strategy which defines a ruleset for orchestrating various LLM clients. I would prefer if this strategy gave me the ability to set the order of clients (ideally including which ones I can send in parallel), how to deal with various errors (which Baml already provides), and an ability to override default options (such as request timeouts).

Here is a specific example: I have two clients, both responsible for calling the GPT-3.5-Turbo deployment on Azure, given below. The only difference is the request timeout.

client<llm> AzureGPT35Turbo {
    provider baml-azure-chat
    retry_policy ZenfetchDefaultPolicy
    options {
      api_key env.AZURE_OPENAI_API_KEY
      api_base env.AZURE_OPENAI_BASE
      engine env.AZURE_GPT_35_TURBO_DEPLOYMENT_NAME
      api_version "2023-07-01-preview"
      api_type azure
      request_timeout 30
    }
}

client<llm> AzureGPT35TurboShortTimeout {
    provider baml-azure-chat
    retry_policy ZenfetchDefaultPolicy
    options {
      api_key env.AZURE_OPENAI_API_KEY
      api_base env.AZURE_OPENAI_BASE
      engine env.AZURE_GPT_35_TURBO_DEPLOYMENT_NAME
      api_version "2023-07-01-preview"
      api_type azure
      request_timeout 5
    }
}

// My version of "strategy"
client<llm> GPTFamilyShortTimeout {
  provider baml-fallback
  options {
    strategy [
      AzureGPT35TurboShortTimeout,
      AzureGPT4TurboShortTimeout
    ]
  }
}

Notice all of the options are the same except for the request_timeout field. I did this because certain AI functions need to complete quickly, so it's not reasonable to wait for the default 30-second timeout. This leads to a lot of redundancy in my clients (really the "strategies").

With the proposed functionality, I could instead do something like

client<llm> AzureGPT35Turbo {
    provider baml-azure-chat
    retry_policy ZenfetchDefaultPolicy
    options {
      api_key env.AZURE_OPENAI_API_KEY
      api_base env.AZURE_OPENAI_BASE
      engine env.AZURE_GPT_35_TURBO_DEPLOYMENT_NAME
      api_version "2023-07-01-preview"
      api_type azure
      request_timeout 30
    }
}

// My version of "strategy"
client<llm> GPTFamilyShortTimeout {
  provider baml-fallback
  options {
    strategy [
      AzureGPT35Turbo.options(request_timeout=5),
    ]
  }
}

Baml init fixes

BAML init should be usable by anyone to add baml to their project.

  • Ask what language they use
  • Ask what package manager they use (pip, poetry, pnpm, or yarn)
  • Ask what type of project they should start with (empty, tutorial - classification or entity extraction, chatbot, function-calling, etc.)
  • Ask what clients they want (openai, anthropic, azure? a sample of some) [P2]
  • Do a confirmation

P0 is just the empty option, with a README that says to go to the docs or the example repo.

Autosave .baml files before running tests

The playground prompt updates on unsaved changes, but the tests do not. So if you run a test, you may be running with an old prompt until you save your changes.

Find a way to save unsaved .baml changes before running a test so it reflects what is shown in playground.

GetFirst strategy for LLM Clients

In certain situations, I would like to issue multiple LLM calls at once and use whichever client returns successfully first.

I think I would like to have this as part of a strategy option in the client interface.
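Sketched in the same style as the fallback clients above (the `baml-race` provider name is invented for illustration; BAML does not currently offer this):

```baml
// Hypothetical "race" strategy: issue calls to all listed clients in
// parallel and return the first successful response. The provider
// name baml-race and the AzureGPT4Turbo client are assumptions.
client<llm> GPTFamilyRace {
  provider baml-race
  options {
    strategy [
      AzureGPT35Turbo,
      AzureGPT4Turbo
    ]
  }
}
```

This would pair naturally with the strategy-vs-client separation proposed above, since "get first" is just another orchestration ruleset over existing clients.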
