Comments (3)
Hey! You can specify the needle, the question, and the background context yourself, so you can choose whatever you want.
We did this so that others can run the test on their own context.
from llmtest_needleinahaystack.
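Concretely, a needle-in-a-haystack prompt can be assembled from those three pieces. The sketch below is a hypothetical, minimal illustration (not the package's actual API): it inserts the needle at a chosen depth of the context and appends the retrieval question.

```python
# Minimal sketch of a needle-in-a-haystack prompt builder.
# All names here are hypothetical, not the package's real API.

def build_haystack_prompt(context: str, needle: str, question: str,
                          depth_percent: float = 50.0) -> str:
    """Insert `needle` into `context` at roughly `depth_percent` of its
    character length, then append the retrieval question."""
    insert_at = int(len(context) * depth_percent / 100)
    haystack = context[:insert_at] + " " + needle + " " + context[insert_at:]
    return f"{haystack}\n\nQuestion: {question}\nAnswer:"

prompt = build_haystack_prompt(
    context="Filler text. " * 50,
    needle="The best thing to do in San Francisco is eat a sandwich.",
    question="What is the best thing to do in San Francisco?",
    depth_percent=25.0,
)
```

The same three inputs (context, needle, question) are what you would vary when designing your own test.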
What would you consider "fair" conditions for ad-hoc generation of the haystack vs. the needle? Would there be tools to help with randomized construction of the haystack (and maybe averaging performance over multiple tests)?
- Word length of the needle (single sentence vs. single paragraph)
- Size of the haystack relative to the needle (a single book vs. a whole anthology)
- Variance of the haystack (a single source vs. multiple sources shuffled into one collection)
Bonus question: can this be used to evaluate FOSS models as well (especially those without OpenAI-compatible APIs)? Would Ollama or similar do the job?
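For illustration, randomized haystack construction with scores averaged over several trials could look like the sketch below. Every name here is hypothetical, and `score_fn` stands in for an actual model call plus evaluation.

```python
import random
import statistics

# Hypothetical harness for the questions above: shuffle paragraphs from
# several sources into one haystack, place the needle at a random depth,
# and average a per-trial score over many randomized trials.

def random_haystack(sources: list[list[str]], rng: random.Random) -> list[str]:
    """Flatten paragraphs from multiple source texts and shuffle them."""
    paragraphs = [p for src in sources for p in src]
    rng.shuffle(paragraphs)
    return paragraphs

def run_trials(sources, needle, score_fn, n_trials=10, seed=0):
    rng = random.Random(seed)  # seeded for reproducible runs
    scores = []
    for _ in range(n_trials):
        paragraphs = random_haystack(sources, rng)
        depth = rng.randrange(len(paragraphs) + 1)  # random needle depth
        paragraphs.insert(depth, needle)
        haystack = "\n\n".join(paragraphs)
        scores.append(score_fn(haystack))
    return statistics.mean(scores)

# Toy score_fn: 1.0 if the needle survived assembly, else 0.0.
# A real harness would query a model and grade its answer here.
mean_score = run_trials(
    sources=[[f"Book A, paragraph {i}." for i in range(5)],
             [f"Book B, paragraph {i}." for i in range(5)]],
    needle="NEEDLE: remember the number 42.",
    score_fn=lambda h: 1.0 if "42" in h else 0.0,
)
```

Varying `sources`, needle length, and haystack size across such trials would cover the three conditions listed above.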
All of the questions you asked are great research questions, and I haven't seen anyone dig into them rigorously yet.
Yep, you can definitely test out other models.
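Since Ollama and similar local servers expose an OpenAI-compatible HTTP API (by default at http://localhost:11434/v1 for Ollama), one way to point a test at a local FOSS model is a plain POST to the chat-completions endpoint. The sketch below uses only the standard library; the URL and model name are assumptions for a default local install.

```python
import json
import urllib.request

# Hedged sketch: talk to a local OpenAI-compatible server (e.g. Ollama).
# Base URL and model name are assumptions for a default local install.

def build_chat_request(prompt: str, model: str = "llama3",
                       base_url: str = "http://localhost:11434/v1"):
    """Build an OpenAI-style chat-completions request for a local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions", data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer ollama"},  # token is ignored locally
    )

if __name__ == "__main__":
    # Requires a running local server; guarded so importing stays side-effect free.
    with urllib.request.urlopen(build_chat_request("Say hi")) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
        print(reply)
```

Any provider that speaks this wire format (Ollama, TGI, vLLM, etc.) could be swapped in by changing `base_url`.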
Related Issues (20)
- Replace os.path with Pathlib
- Update package Anthropic
- Anthropic Naming Conflict Error
- Implement Docker for testing
- Code optimizations
- Model kwargs support
- Add Makefile target for resetting run results
- Standard Tokenizer
- Convert the repository to a PyPi package
- Remove passing of API keys as parameters and read them from environment variables
- multi-needle-eval-pizza-3 dataset not found
- I was wondering about the evaluation method
- [Feature Proposal] Multi-needle in a haystack
- does it run at all? Basic commands failed to run as per the README.
- Possibility to specify custom API endpoint address?
- Can I use local LLM as the evaluator and provider?
- How can we cite the Needle-in-a-Haystack?
- add base_url env in openai provider - to support OpenAI compatibility local inference like - ollama, tgi, etc
- Different prompts in providers - I just wonder why cohere don't have "Don't give information outside the document or repeat your findings" and does it make a bit difference?