Comments (9)
What GPU type are you on?
You don’t need to specify the --quantization argument. We automatically detect the quantization type from the config and use the Marlin kernels if possible. Specifically, the Marlin kernels require at least Ampere.
I will improve the error message here
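As a rough sketch of the auto-detection path using the offline Python API (with the GPTQ checkpoint mentioned later in this thread; exact kernel selection can vary by vLLM version and GPU):

from vllm import LLM, SamplingParams

# No quantization argument: vLLM reads quantization_config from the model's
# config.json and should pick the gptq_marlin kernels on Ampere or newer GPUs.
llm = LLM(model="TheBloke/Llama-2-7B-Chat-GPTQ")
out = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)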
Thanks for the reply. My GPU type is NVIDIA A100-SXM4-40GB. I find that it works when I do not specify the argument. By the way, a small question: is it possible to use GPTQ without the Marlin kernel, given that vLLM detects the quantization automatically?
Passing just --quantization gptq will use the slow kernels.
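For contrast, a hedged sketch of explicitly requesting the reference kernels (same checkpoint as above):

from vllm import LLM, SamplingParams

# Explicitly passing quantization="gptq" skips the Marlin path and uses the
# slower reference GPTQ kernels, even on GPUs that could run Marlin.
llm = LLM(model="TheBloke/Llama-2-7B-Chat-GPTQ", quantization="gptq")
out = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)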
I’ll look into why this error is being thrown tomorrow.
thanks for reporting!
Thanks!
Does gptq_marlin support chunked prefill?
I tried to run benchmark_serving to measure throughput. It works well when I do not pass --enable-chunked-prefill, but it throws errors when I try to use chunked prefill.
Here are the commands I used:
On the server side,
python -m vllm.entrypoints.openai.api_server --disable-log-requests --model TheBloke/Llama-2-7B-Chat-GPTQ --seed 0 --enable-chunked-prefill
On the client side,
python benchmarks/benchmark_serving.py --model TheBloke/Llama-2-7B-Chat-GPTQ --dataset-path /u/yao1/ShareGPT_V3_unfiltered_cleaned_split.json --request-rate 10 --num-prompts 100 --seed 0
Here is the printout on the server side:
INFO: ::1:44634 - "POST /v1/completions HTTP/1.1" 200 OK
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 265, in __call__
await wrap(partial(self.listen_for_disconnect, receive))
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 261, in wrap
await func()
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 238, in listen_for_disconnect
message = await receive()
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 553, in receive
await self.message_event.wait()
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/locks.py", line 226, in wait
await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f146c70d580
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
return await self.app(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/cors.py", line 85, in __call__
await self.app(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 75, in app
await response(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 265, in __call__
await wrap(partial(self.listen_for_disconnect, receive))
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 680, in __aexit__
raise BaseExceptionGroup(
exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
The Marlin kernel for the linear layers is completely orthogonal to chunked prefill.
Chunked prefill only impacts the attention calculation during model execution; otherwise, the changes are just on the server side.
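A minimal offline sketch combining the two (assuming this vLLM version accepts enable_chunked_prefill in the LLM constructor, mirroring the --enable-chunked-prefill flag):

from vllm import LLM, SamplingParams

# Chunked prefill only changes how prompt attention is scheduled; the
# GPTQ/Marlin kernels still handle the linear layers independently.
llm = LLM(model="TheBloke/Llama-2-7B-Chat-GPTQ", enable_chunked_prefill=True)
out = llm.generate(["Explain chunked prefill briefly."], SamplingParams(max_tokens=32))
print(out[0].outputs[0].text)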
Are there any other logs to share?
What kind of information can I provide?
Here is the printout on the client side. The progress bar freezes at 98/100, and I use Ctrl+C to quit.
The printout on the server side is as above, ending with "exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)".
$ python benchmarks/benchmark_serving.py --model TheBloke/Llama-2-7B-Chat-GPTQ --dataset-path /u/yao1/ShareGPT_V3_unfiltered_cleaned_split.json --request-rate 10 --num-prompts 100 --seed 0
Namespace(backend='vllm', base_url=None, host='localhost', port=8000, endpoint='/v1/completions', dataset=None, dataset_name='sharegpt', dataset_path='/u/yao1/ShareGPT_V3_unfiltered_cleaned_split.json', model='TheBloke/Llama-2-7B-Chat-GPTQ', tokenizer=None, best_of=1, use_beam_search=False, num_prompts=100, sharegpt_output_len=None, sonnet_input_len=550, sonnet_output_len=150, sonnet_prefix_len=200, request_rate=10.0, seed=0, trust_remote_code=False, disable_tqdm=False, save_result=False, metadata=None, result_dir=None, result_filename=None)
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Starting initial single prompt test run...
Initial test run completed.
Starting main benchmark run...
Traffic request rate: 10.0
98%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 98/100 [00:19<00:00, 12.85it/s]
^CTraceback (most recent call last):
File "/u/yao1/vllm/benchmarks/benchmark_serving.py", line 679, in <module>
main(args)
File "/u/yao1/vllm/benchmarks/benchmark_serving.py", line 478, in main
benchmark_result = asyncio.run(
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/base_events.py", line 634, in run_until_complete
self.run_forever()
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/base_events.py", line 601, in run_forever
self._run_once()
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/base_events.py", line 1869, in _run_once
event_list = self._selector.select(timeout)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/selectors.py", line 469, in select
fd_event_list = self._selector.poll(timeout, max_ev)
KeyboardInterrupt
98%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 98/100 [01:41<00:02, 1.04s/it]