Comments (4)
To address the issue of your model inference getting stuck, especially with longer prompts, consider implementing the following optimizations and strategies:
- PagedAttention: This method reduces memory waste by dividing the KV cache into fixed-size blocks that are fetched as needed during attention computation. It is particularly effective at managing the large memory footprint of KV caches.
- Efficient memory sharing with PagedAttention: By sharing KV-cache blocks among sequences through a block table, this approach can significantly reduce memory overhead and improve throughput.
- Optimized CUDA kernels and continuous batching: Focus on CUDA kernel optimizations and use continuous batching, which improves GPU utilization by dynamically inserting new sequences into the batch as resources free up.
- FlashAttention for attention-mechanism optimization: FlashAttention uses tiling and recomputation to avoid materializing large intermediate attention matrices, speeding up the computation.
- Dynamic and in-flight batching for scheduling optimization: Use dynamic batching and in-flight batching to refine scheduling and eliminate "bubbles" of idle GPU time during inference.
- Memory management for conversation history: Ensure the conversation-history buffer does not exceed a predefined maximum token limit, so accumulated context cannot cause memory overflow.
- GPU memory usage management: Control memory allocation during inference through parameters such as `gpu_memory_utilization`, `swap_space`, `max_num_batched_tokens`, and `max_num_seqs`.
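The engine parameters named in the last bullet can be gathered in one place. A minimal sketch, assuming vLLM is the inference backend (the four keys are real vLLM engine arguments; the values and model path are illustrative and must be tuned to your hardware):

```python
# Illustrative memory-control settings for a vLLM engine. The keys are
# real vLLM engine arguments; the values are placeholders to tune.
engine_kwargs = {
    "gpu_memory_utilization": 0.85,  # fraction of GPU memory the engine may claim
    "swap_space": 4,                 # GiB of CPU RAM for swapped-out KV blocks
    "max_num_batched_tokens": 4096,  # per-step token budget for the scheduler
    "max_num_seqs": 64,              # cap on concurrently scheduled sequences
}

# With vLLM installed, these would be passed straight to the engine:
# from vllm import LLM
# llm = LLM(model="your-model-path", **engine_kwargs)
```

Lowering `gpu_memory_utilization` or `max_num_seqs` trades throughput for headroom, which is often the quickest way to stop long-prompt runs from stalling on memory pressure.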
These strategies come from common practice in serving large-model inference and should be adapted to your specific setup and requirements. If the problem persists, profiling memory usage during inference can help identify bottlenecks or areas of excessive memory use.
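As a concrete illustration of the conversation-history point above, here is a framework-agnostic sketch of token-budget truncation. `count_tokens` is a crude whitespace stand-in for a real tokenizer call; both function names are hypothetical:

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: whitespace word count.
    return len(text.split())

def trim_history(history: list[str], max_tokens: int) -> list[str]:
    # Keep the most recent turns whose combined token count fits the budget.
    kept, total = [], 0
    for turn in reversed(history):  # newest-to-oldest, so recent context survives
        cost = count_tokens(turn)
        if total + cost > max_tokens:
            break
        kept.append(turn)
        total += cost
    return list(reversed(kept))

history = ["hello there", "hi how can I help", "summarize this long document please"]
print(trim_history(history, max_tokens=11))  # drops the oldest turn
```

A real implementation would count tokens with the model's own tokenizer, since whitespace counts can diverge sharply from BPE token counts.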
Details
Sources
- knowledge_base/samples/content/llm/大模型技术栈-算法与原理.md
- knowledge_base/samples/content/llm/大模型推理优化策略.md
- knowledge_base/samples/content/test_files/langchain-ChatGLM_open.jsonl
- server/memory/conversation_db_buffer_memory.py
- startup.py
- knowledge_base/samples/content/test_files/langchain-ChatGLM_closed.jsonl
from langchain-chatchat.
I ran into this problem too when fine-tuning b6: the model was stuck in a repetition loop. Use stream_chat to check whether it is still producing output. If GPU utilization hasn't dropped, the process isn't hung; the repetition just makes the generated text very long, and you're waiting for generation to finish.
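One way to act on this advice is to watch the streamed text for a repeating tail. A toy heuristic, not part of any framework's API:

```python
def looks_repetitive(text: str, ngram: int = 4, repeats: int = 3) -> bool:
    # Flag text whose tail is the same n-gram repeated `repeats` times,
    # the classic sign of a generation loop.
    words = text.split()
    if len(words) < ngram * repeats:
        return False
    tail = words[-ngram * repeats:]
    chunks = [tuple(tail[i:i + ngram]) for i in range(0, len(tail), ngram)]
    return len(set(chunks)) == 1  # all chunks identical -> looping

print(looks_repetitive("the answer is 42 " * 5))  # True: same 4-gram repeats
```

Calling this on the accumulated text inside a stream_chat loop would let you abort a run that has degenerated into repetition instead of waiting for it to hit the length limit.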
I suspect the GPU isn't actually being used; check that inference is running on the GPU. Also, this model seems to have problems, and I wouldn't recommend fine-tuning it.
> I suspect the GPU isn't actually being used; check that inference is running on the GPU. Also, this model seems to have problems, and I wouldn't recommend fine-tuning it.

The GPU is being used; the VRAM is occupied. What exactly is wrong with this model? Can you elaborate?