Comments (10)
Same issue here. The "generating GPU P2P access cache" step is very slow.

---
I also tested loading the glm-4-9b-chat-1m model. Loading it in v0.5.0 is also much slower than in v0.4.2; the "generating GPU P2P access cache" step is very slow.

---
The first "generating GPU P2P access cache" run is very slow, and I think it is not related to any specific model; once the cache file exists, it is read from the file directly.

---
I understand that "generating GPU P2P access cache" can be slow, because it needs to test P2P access for every GPU pair. I don't think it should take 20 minutes, though; in my experience it is more like 1~2 minutes.

---
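For context: the cache is a per-GPU-pair capability table written to JSON, so only the first start on a given set of GPUs pays the probing cost; later starts just read the file back. Below is a minimal sketch of that structure, assuming a hypothetical `build_p2p_cache` helper and an illustrative path; vLLM's real check in `custom_all_reduce_utils.py` is more involved than a plain `torch.cuda.can_device_access_peer` call.

```python
import json
import os

import torch


def build_p2p_cache(path: str) -> dict:
    """Probe P2P access for every ordered GPU pair and cache it as JSON.

    Sketch only: mirrors the generate-then-read pattern in the logs below,
    not vLLM's actual probing logic.
    """
    if os.path.exists(path):
        # Later starts hit this branch ("reading GPU P2P access cache from ...").
        with open(path) as f:
            return json.load(f)

    # First start pays the probing cost ("generating GPU P2P access cache in ...").
    num_gpus = torch.cuda.device_count()
    cache = {
        f"{src}->{dst}": torch.cuda.can_device_access_peer(src, dst)
        for src in range(num_gpus)
        for dst in range(num_gpus)
        if src != dst
    }
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    with open(path, "w") as f:
        json.dump(cache, f)
    return cache


cache = build_p2p_cache(
    os.path.expanduser("~/.config/vllm/gpu_p2p_access_cache_example.json")
)
```

With 8 GPUs that is 56 ordered pairs, so even a couple of seconds per probe stays in the 1~2 minute range mentioned above; the roughly 9 minutes reported below points at per-pair overhead well beyond the probe itself, which is what #5528 appears to cut down.

---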
> I understand that "generating GPU P2P access cache" can be slow, because it needs to test P2P access for every GPU pair. I don't think it should take 20 minutes, though; in my experience it is more like 1~2 minutes.

Can you reproduce my steps? Then you will see what I mean.

---
Can you try #5528 and give some feedback? I think that should solve this issue.

---
> Can you try #5528 and give some feedback? I think that should solve this issue.

What should I try? I use a Docker image to start the service instead of compiling from source. Should I modify the code inside the vLLM v0.5.0 container as in PR #5528 and then restart the container? Or should I pull the latest v0.5.0.post1 Docker image and load the model with that?

---
> Can you try #5528 and give some feedback? I think that should solve this issue.

@youkaichao Maybe we need a CI to build nightly packages for PRs? Building and installing locally is tedious and slow.

These are my test results:

v0.5.0: 8 min 50 s

INFO 06-14 14:22:10 custom_all_reduce_utils.py:169] generating GPU P2P access cache in /root/.config/vllm/gpu_p2p_access_cache_for_0,1,2,3,4,5,6,7.json
INFO 06-14 14:31:00 custom_all_reduce_utils.py:179] reading GPU P2P access cache from /root/.config/vllm/gpu_p2p_access_cache_for_0,1,2,3,4,5,6,7.json

PR #5528 (I first removed the cache file /root/.config/vllm/gpu_p2p_access_cache_for_0,1,2,3,4,5,6,7.json): 44 s

INFO 06-14 18:44:29 custom_all_reduce_utils.py:184] generating GPU P2P access cache in /root/.config/vllm/gpu_p2p_access_cache_for_0,1,2,3,4,5,6,7.json
INFO 06-14 18:45:13 custom_all_reduce_utils.py:199] reading GPU P2P access cache from /root/.config/vllm/gpu_p2p_access_cache_for_0,1,2,3,4,5,6,7.json

---
I tried to load qwen2_57b_a14b_instruct and hit the same issue. I had already updated to the latest code and set `os.environ['NCCL_IGNORE_DISABLED_P2P'] = '1'`.

---
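One thing to watch with that workaround: NCCL reads its environment variables when it initializes, so the variable has to be in the environment before vLLM spawns its workers, not set afterwards. A sketch; the model name and `tensor_parallel_size=8` are assumptions based on the 8-GPU cache file above, and whether this variable helps depends on the NCCL version:

```python
import os

# Set before importing/constructing anything that initializes NCCL.
os.environ["NCCL_IGNORE_DISABLED_P2P"] = "1"

from vllm import LLM  # import deliberately placed after setting the env var

llm = LLM(model="Qwen/Qwen2-57B-A14B-Instruct", tensor_parallel_size=8)
```

---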
@NiuBlibing Glad it helps; the PR should be merged soon.

> Maybe we need a CI to build nightly packages for PRs?

It's ongoing. Since most PRs only touch Python files, you can also manually replace those files to test.

---
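For anyone doing that manual replacement inside the Docker container: the files a PR touches live under the installed package directory, which can be located from Python. A hypothetical workflow, not an official one:

```python
# Find where the installed vllm package lives inside the container, so the
# files changed by a PR (e.g. custom_all_reduce_utils.py) can be overwritten
# with the versions from the PR branch before restarting the server.
import os
import vllm

print(os.path.dirname(vllm.__file__))
# e.g. /usr/local/lib/python3.10/dist-packages/vllm
```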