Remove the yammering from LLM outputs.
hudson-ai / minml
License: MIT License
Hiya!
I'm trying to build some tools on top of @rskottap's gpts library, but I've noticed the LLMs it provides tend to yammer.
For example:
>>> import gpts
>>> model = gpts.Mistral()
>>> model("How many ounces are there in a pound?")
'There are 16 ounces in one pound. This relationship is often expressed as "1 pound equals 16 ounces." This conversion factor holds true for both dry and liquid measurements of weight or mass.'
I've tried stopping the yammering by asking directly.
>>> model("How many ounces are there in a pound? Don't yammer. Just give me the answer.")
'There are 16 ounces in one pound.'
By asking politely.
>>> model("How many ounces are there in a pound? For the love of god, stop yammering. Just give me the answer as a number. If your output isn't just a number, I'm going to scream.")
'There are 16 ounces in one pound.'
And by threatening the model with mass casualties.
>>> model("How many ounces are there in a pound? I'm going to pass your answer to the `int()` builtin in some python code, and if it raises an exception, then this plane is going to crash and people are going to die.")
'There are 16 ounces in one pound. You can convert between pounds and ounces by multiplying or dividing by 16. In your Python code, you should be able to represent this conversion as follows:\n\npounds = 2.5\nounces = pounds * 16\nint_ounces = int(ounces)\n\nIn this example, `pounds` has a value of 2.5 (representing 2 pounds and 12 ounces), which is then multiplied by 16 to get the equivalent weight in ounces. The result is then passed through the `int()` function for potential conversion to an integer, although it should not raise an exception since the value is already an integer before this step.'
Nothing works.
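The failure mode is concrete: even the model's most concise reply can't be handed straight to `int()`. A minimal demonstration (the `reply` string is taken from the first example above):

```python
# The shortest answer the model gave still fails naive parsing:
reply = "There are 16 ounces in one pound."

try:
    ounces = int(reply)
except ValueError:
    ounces = None  # int() rejects anything that isn't a bare numeric literal

print(ounces)  # None
```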
I searched github for the words "remove", "yammering" and "LLMs" and your library came up.
I was wondering if you could give me some guidance.
How might I use your library to add a .multiple_choice() method to an arbitrary model in the gpts library?
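For concreteness, here's the shape of what I'm after. The method name, the `FakeModel` stub, and the substring-matching fallback are all my inventions — a stand-in for real constrained decoding with minml/guidance, which is what I'm hoping your library can replace this with:

```python
class FakeModel:
    """Stand-in for a gpts model: callable, returns free-form text."""
    def __call__(self, prompt):
        return "There are 16 ounces in one pound."


def add_multiple_choice(model_cls):
    """Patch a .multiple_choice() method onto a model class.

    Here the 'constraint' is just a substring match over the free-form
    answer; the real thing would constrain decoding to the options.
    """
    def multiple_choice(self, prompt, options):
        answer = self(prompt)
        for option in options:
            if option in answer:
                return option
        return None  # no option recognized in the reply

    model_cls.multiple_choice = multiple_choice
    return model_cls


add_multiple_choice(FakeModel)
model = FakeModel()
print(model.multiple_choice("How many ounces are there in a pound?",
                            ["8", "16", "32"]))  # 16
```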
Thanks to your help with #2, the gpts library now has a PR that adds a typed ask_for method using guidance and minml.
However, when @rskottap and I added tests for that method, we found it producing outputs that never appeared in the (untyped) free-response outputs.
Specifically, asking the models directly, without the guidance of guidance, we never got a negative value for the answer to any of the questions:
Here's what we get with the minml-based methods:
Any ideas on how to add a "positive" constraint to the int or float types, or more generally how to add other dependent types like "strings that match a regex"?
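For the "positive" case specifically, I imagine the type compiling down to a regular expression over the allowed output strings — patterns like these (the names are mine, and whether minml exposes a hook to supply such a pattern is exactly my question; guidance's `gen` does take a `regex=` argument, if I remember right):

```python
import re

# No sign, no leading zero: a strictly positive integer literal.
POSITIVE_INT = re.compile(r"[1-9][0-9]*")

# Unsigned decimal; note this still admits 0.0, so a value-level
# check would be needed on top of the pattern to mean "positive".
NONNEGATIVE_FLOAT = re.compile(r"[0-9]+\.[0-9]+")

assert POSITIVE_INT.fullmatch("16")
assert not POSITIVE_INT.fullmatch("-3")
assert NONNEGATIVE_FLOAT.fullmatch("3.14")
```

If constrained decoding can be restricted to strings matching a pattern like this, "strings that match a regex" would fall out as the general case and the numeric types would just be special cases of it.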