Comments (4)
This might be a duplicate; it is being investigated in #5404.
Also worth mentioning that I see the same behaviour if I remove the top_k
from your repro script, e.g.:
sampling_params = []
sampling_params.append(
    {
        "prompt": prompt,
        "temperature": 0.0,
        "max_tokens": 5,
        "repetition_penalty": 2,
    }
)
sampling_params.append(
    {
        "prompt": prompt,
        "seed": 99,
        "max_tokens": 4,
    }
)
sampling_params = sampling_params * 10
sometimes produces:
['Once upon a time, there were two best']
['Once upon a time, there were two best']
['Once upon a time, there were two best']
['Once upon a time, there were two best']
['Once upon a time, there were two best']
['Once upon a time, there were two best']
['Once upon a time, there were two best']
['Once upon a time, there were two best']
['Once upon a time, there were two best']
['Once upon a time, there were two best']
['Once upon a time, there was a young woman']
['Once upon a time, there was an old man']
['Once upon a time, there was an old man']
['Once upon a time, there was an old man']
['Once upon a time, there was an old man']
['Once upon a time, there was an old man']
['Once upon a time, there was an old man']
['Once upon a time, there was an old man']
['Once upon a time, there was an old man']
['Once upon a time, there was an old man']
However, if I make both requests greedy then the issue goes away. I think that is because the offending code only gets executed if seq_group.do_sample=True and at least one request has a repetition penalty.
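To make the observation concrete, here is a minimal sketch (not vLLM's actual code; the field names `do_sample` and `repetition_penalty` are assumptions for illustration) of the condition under which the suspected code path would run. It matches the behaviour above: an all-greedy batch never triggers it.

```python
def penalty_path_runs(seq_groups):
    """Hypothetical condition for the suspected code path.

    Runs only when at least one request in the batch is sampling
    (do_sample=True) and at least one carries a non-default
    repetition penalty.
    """
    return any(g["do_sample"] for g in seq_groups) and any(
        g.get("repetition_penalty", 1.0) != 1.0 for g in seq_groups
    )
```

With a mixed batch (one greedy penalized request, one seeded sampling request) this returns True; with an all-greedy batch it returns False, which would explain why the bug disappears.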
This seems to be related to the fact that some requests have repetition_penalty set and others do not. There is some inconsistent behaviour in this function that means some tensor sizes depend on the order in which the requests get batched together.
See #5639
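The kind of order-dependence described above can be sketched as follows. This is a hypothetical illustration, not vLLM's actual code: each row is padded only to the widest penalized prompt seen *so far*, so the shape of the result depends on whether penalized or non-penalized requests come first in the batch.

```python
def build_penalty_rows(requests):
    """Buggy sketch: pads each row to the running maximum width.

    Only requests with a non-default repetition_penalty contribute
    their prompt tokens; the others contribute an empty row that is
    padded to the widest row seen so far, not the global maximum.
    """
    rows, width = [], 0
    for req in requests:
        has_penalty = req.get("repetition_penalty", 1.0) != 1.0
        tokens = req["prompt_tokens"] if has_penalty else []
        width = max(width, len(tokens))
        # Bug: pads to the running max, which depends on batch order.
        rows.append(tokens + [0] * (width - len(tokens)))
    return rows
```

If the penalized request comes first, every row ends up the same width; if it comes last, earlier rows stay narrower and the rows are ragged, analogous to the inconsistent tensor sizes reported here.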
Thanks @prashantgupta24 @tdoublep for finding and investigating this!