Comments (8)
Error details when running lmql run with a file:
Exception ignored in: <function AsyncOpenAIAPI.__del__ at 0x7faf6116aa70>
Traceback (most recent call last):
  File "/home/kuba/Projects/forks/lmql/src/lmql/runtime/bopenai/batched_openai.py", line 553, in __del__
  File "/etc/conda/envs/lmql/lib/python3.10/asyncio/events.py", line 656, in get_event_loop
RuntimeError: There is no current event loop in thread 'MainThread'.
from lmql.
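For context, the RuntimeError above is reproducible outside LMQL: asyncio.get_event_loop() raises it on the main thread once the thread's event loop slot has been explicitly cleared, which is exactly the state a __del__ finalizer can observe late in interpreter shutdown. A minimal sketch (hypothetical, not LMQL code):

```python
import asyncio

# Simulate the post-shutdown state: the main thread has had an event loop
# set before, but it is now cleared.
asyncio.set_event_loop(None)

try:
    asyncio.get_event_loop()
except RuntimeError as e:
    # Message matches the traceback above:
    # "There is no current event loop in thread 'MainThread'."
    print(e)
```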
Thanks for reporting this. Does the query otherwise finish execution, or does it not execute at all?
The query finishes. In the app, the problem is that every subsequent call does not work.
I think I have observed this before on Linux systems specifically. This is very likely an asyncio clean-up issue. I will investigate this.
When you say "next call", what exactly do you mean? Once the query finishes, lmql run should have terminated, right?
I mean in the web app.
Okay, thanks. I will investigate further and report back here. If you run locally, the playground actually runs a new lmql process for each query, so it seems odd that it would block the next call.
Seeing the exact same problem here as well (running on Arch Linux)
I just released a bug fix release, lmql==0.0.4.2, where this issue should be fixed. It would be really cool if anyone who has experienced this bug could briefly report back whether it is fixed.
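The general fix for this class of error is to guard the event loop lookup inside the finalizer. A sketch of the pattern, under the assumption that cleanup is best-effort (the class and method names here are hypothetical, not the actual batched_openai.py code):

```python
import asyncio

class AsyncAPIClient:
    """Hypothetical client whose finalizer must not assume a live event loop."""

    def __del__(self):
        try:
            loop = asyncio.get_event_loop()
        except RuntimeError:
            # No current event loop (e.g. interpreter shutdown or a cleared
            # loop slot): skip async cleanup instead of raising from __del__.
            return
        if loop.is_running() and not loop.is_closed():
            loop.create_task(self.aclose())

    async def aclose(self):
        # Hypothetical async resource cleanup (close sessions, workers, ...).
        pass
```

With this guard, garbage-collecting the client after the loop is gone no longer triggers the "Exception ignored in ... __del__" message from the traceback above.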
Related Issues (20)
- Unable to use model loaded on gpu and cpu HOT 1
- Client-side (decoding) memory leak with async @lmql.query HOT 3
- [BUG] sub-queries fail with temperature=0 HOT 1
- Possible error with nests of nested queries in Python HOT 2
- Proposal for new KeyWord `using`
- Model Repeats Same Phrase HOT 1
- `--layout` ignores CUDA_VISIBLE_DEVICES
- cannot import name 'OutputGraph' from 'torch._dynamo.output_graph' HOT 1
- Add a normal llama.cpp server endpoint option. HOT 1
- Batched OpenAI request worker async race (AssertionError: bopenai...)
- Use external files and avoid quoting?
- Docs for cache behaviour HOT 1
- lmql run not working for llama models even though same script works in playground and also for openai models HOT 1
- `lmql serve-model llama.cpp:<PATH TO WEIGHTS>.gguf` only works with an absolute path
- Azure OpenAI API instruct model not ERROR on query HOT 1
- [Question] What is the default decoder used in queries? HOT 1
- FileNotFoundError: [WinError 2] The system cannot find the file specified.
- python AttributeError: 'NoneType' object has no attribute 'get_event_loop', lmql == 0.7.3 HOT 1
- Llama 3 GGUF Tokenizer HOT 2
- Make STOPS_AT work with Tokens HOT 1