Comments (9)

simon-mo avatar simon-mo commented on July 23, 2024

cc @robertgshaw2-neuralmagic

robertgshaw2-neuralmagic avatar robertgshaw2-neuralmagic commented on July 23, 2024

What GPU type are you on?

You don't need to specify the `--quantization` argument. We automatically detect the quantization type based on the config and use the Marlin kernels if possible. Specifically, the Marlin kernels require at least Ampere.

I will improve the error message here.
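
For example, a minimal sketch of relying on the auto-detection (not from the thread; the model name is taken from later in the discussion):

```python
# Minimal sketch: no `quantization` argument is passed. vLLM reads the
# quantization config shipped with the model and, on Ampere or newer
# GPUs, selects the Marlin kernels automatically.
from vllm import LLM, SamplingParams

llm = LLM(model="TheBloke/Llama-2-7B-Chat-GPTQ", seed=0)
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```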

George-ao avatar George-ao commented on July 23, 2024

> What GPU type are you on?
>
> You don't need to specify the `--quantization` argument. We automatically detect the quantization type based on the config and use the Marlin kernels if possible. Specifically, the Marlin kernels require at least Ampere.
>
> I will improve the error message here.

Thanks for the reply. My GPU type is NVIDIA A100-SXM4-40GB, and I find that it works when I don't specify the argument. By the way, I have a small question: is it possible to use GPTQ without the Marlin kernel, given that vLLM detects the quantization automatically?

robertgshaw2-neuralmagic avatar robertgshaw2-neuralmagic commented on July 23, 2024

Passing just `--quantization gptq` will use the slow kernels.

I'll look into why this error is being thrown tomorrow.

Thanks for reporting!
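
A minimal sketch of opting out of Marlin by passing the argument explicitly (model name as used elsewhere in the thread):

```python
# Minimal sketch: explicitly requesting "gptq" bypasses the Marlin
# auto-detection and uses the slower reference GPTQ kernels instead.
from vllm import LLM

llm = LLM(model="TheBloke/Llama-2-7B-Chat-GPTQ", quantization="gptq")
```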

George-ao avatar George-ao commented on July 23, 2024

> Passing just `--quantization gptq` will use the slow kernels.
>
> I'll look into why this error is being thrown tomorrow.
>
> Thanks for reporting!

Thanks!

George-ao avatar George-ao commented on July 23, 2024

> What GPU type are you on?
>
> You don't need to specify the `--quantization` argument. We automatically detect the quantization type based on the config and use the Marlin kernels if possible. Specifically, the Marlin kernels require at least Ampere.
>
> I will improve the error message here.

Does gptq_marlin support chunked prefill?
I tried to run benchmark_serving to measure throughput. It works well when I do not pass --enable-chunked-prefill, but it fails with errors when I enable chunked prefill.
Here are the commands I used.
On the server side:
python -m vllm.entrypoints.openai.api_server --disable-log-requests --model TheBloke/Llama-2-7B-Chat-GPTQ --seed 0 --enable-chunked-prefill
On the client side:
python benchmarks/benchmark_serving.py --model TheBloke/Llama-2-7B-Chat-GPTQ --dataset-path /u/yao1/ShareGPT_V3_unfiltered_cleaned_split.json --request-rate 10 --num-prompts 100 --seed 0

Here is the printout on the server side:

INFO:     ::1:44634 - "POST /v1/completions HTTP/1.1" 200 OK                                                                                                                                                        
ERROR:    Exception in ASGI application                                                                                                                                                                             
Traceback (most recent call last):                                                                                                                                                                                  
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 265, in __call__                                                                                                  
    await wrap(partial(self.listen_for_disconnect, receive))                                                                                                                                                        
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 261, in wrap                                                                                                      
    await func()                                                                                                                                                                                                    
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 238, in listen_for_disconnect                                                                                     
    message = await receive()                                                                                                                                                                                       
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 553, in receive                                                                                 
    await self.message_event.wait()                                                                                                                                                                                 
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/locks.py", line 226, in wait                                                                                                                          
    await fut                                                                                                                                                                                                       
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f146c70d580                                                                                                                                           
                                                                                                                                                                                                                    
During handling of the above exception, another exception occurred: 
Traceback (most recent call last):                                                                                                                                                                                  
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi                                                                                
    result = await app(  # type: ignore[func-returns-value]                                                                                                                                                         
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__                                                                                      
    return await self.app(scope, receive, send)                                                                                                                                                                     
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in __call__                                                                                                
    await super().__call__(scope, receive, send)                                                                                                                                                                    
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/applications.py", line 123, in __call__                                                                                               
    await self.middleware_stack(scope, receive, send)                                                                                                                                                               
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/errors.py", line 186, in __call__                                                                                          
    raise exc                                                                                                                                                                                                       
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/errors.py", line 164, in __call__                                                                                          
    await self.app(scope, receive, _send)                                                                                                                                                                           
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/cors.py", line 85, in __call__                                                                                             
    await self.app(scope, receive, send)                                                                                                                                                                            
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 65, in __call__                                                                                       
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)                                                                                                                                        
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app                                                                                       
    raise exc                                                                                                                                                                                                       
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app                                                                                       
    await app(scope, receive, sender)                                                                                                                                                                               
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 756, in __call__                                                                                                    
    await self.middleware_stack(scope, receive, send)                                                                                                                                                               
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 776, in app                                                                                                         
    await route.handle(scope, receive, send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 75, in app
    await response(scope, receive, send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 265, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 680, in __aexit__
    raise BaseExceptionGroup(
exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)

robertgshaw2-neuralmagic avatar robertgshaw2-neuralmagic commented on July 23, 2024

The Marlin kernel for the linear layers is completely orthogonal to chunked prefill.

Chunked prefill only impacts the attention calculation during model execution; otherwise, the changes are just on the server side.
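
A minimal sketch of combining the two in the offline API (assuming `enable_chunked_prefill` is forwarded to the engine arguments, as in recent vLLM versions):

```python
# Minimal sketch: chunked prefill is a scheduler/engine setting and is
# independent of the quantized linear-layer kernels, so in principle it
# can be combined with a GPTQ (Marlin) model.
from vllm import LLM

llm = LLM(model="TheBloke/Llama-2-7B-Chat-GPTQ", enable_chunked_prefill=True)
```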

robertgshaw2-neuralmagic avatar robertgshaw2-neuralmagic commented on July 23, 2024

Are there any other logs to share?

George-ao avatar George-ao commented on July 23, 2024

> Are there any other logs to share?

What kind of information can I provide?

Here is the printout on the client side. The progress bar freezes at 98/100, and I press Ctrl-C to quit.
The printout on the server side is as above, ending with "exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)".

$ python benchmarks/benchmark_serving.py --model TheBloke/Llama-2-7B-Chat-GPTQ --dataset-path /u/yao1/ShareGPT_V3_unfiltered_cleaned_split.json --request-rate 10 --num-prompts 100 --seed 0
Namespace(backend='vllm', base_url=None, host='localhost', port=8000, endpoint='/v1/completions', dataset=None, dataset_name='sharegpt', dataset_path='/u/yao1/ShareGPT_V3_unfiltered_cleaned_split.json', model='TheBloke/Llama-2-7B-Chat-GPTQ', tokenizer=None, best_of=1, use_beam_search=False, num_prompts=100, sharegpt_output_len=None, sonnet_input_len=550, sonnet_output_len=150, sonnet_prefix_len=200, request_rate=10.0, seed=0, trust_remote_code=False, disable_tqdm=False, save_result=False, metadata=None, result_dir=None, result_filename=None)                                                                                        
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Starting initial single prompt test run...
Initial test run completed.
Starting main benchmark run...
Traffic request rate: 10.0
 98%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▌   | 98/100 [00:19<00:00, 12.85it/s]
^CTraceback (most recent call last):
  File "/u/yao1/vllm/benchmarks/benchmark_serving.py", line 679, in <module>
    main(args)
  File "/u/yao1/vllm/benchmarks/benchmark_serving.py", line 478, in main
    benchmark_result = asyncio.run(
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/base_events.py", line 634, in run_until_complete
    self.run_forever()
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/base_events.py", line 601, in run_forever
    self._run_once()
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/base_events.py", line 1869, in _run_once
    event_list = self._selector.select(timeout)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/selectors.py", line 469, in select
    fd_event_list = self._selector.poll(timeout, max_ev)
KeyboardInterrupt
 98%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▌   | 98/100 [01:41<00:02,  1.04s/it]
