Comments (9)
Hello, we have written a tutorial on loading custom content; please see: https://github.com/hiyouga/ChatGLM-Efficient-Tuning/blob/main/examples/alter_self_cognition.md
from chatglm-efficient-tuning.
We don't see any detailed error message. Did you fine-tune the model? Please post your fine-tuning log.
There's no error; everything looks normal:
CUDA_VISIBLE_DEVICES=0 python finetune_chatglm.py \
    --do_train \
    --dataset example \
    --finetuning_type lora \
    --output_dir output \
    --per_device_train_batch_size 16 \
    --gradient_accumulation_steps 1 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --max_train_samples 3000 \
    --learning_rate 5e-5 \
    --num_train_epochs 1.0 \
    --fp16
This is just an example. For real fine-tuning you need to prepare a larger dataset; this example dataset is far too small.
I used my own dataset of 900 samples and still got no results, but it works with the official P-Tuning.
You can try increasing LoRA's r value, or use the same P-Tuning method as the official repo with pre_seq_len=128, and also raise the learning rate to learning_rate=1e-3. The default parameters are deliberately conservative to keep the model from catastrophic forgetting and from overfitting to the new dataset.
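Putting that advice together, here is a sketch of the adjusted run, reusing the flags from the command posted above. This is only an illustration: the exact flag names are assumed to match this repo's CLI and should be verified against its argument parser.

```shell
# Sketch only: same run as above, but with a larger LoRA rank and a
# higher learning rate, as suggested. Verify flag names with
# `python finetune_chatglm.py --help`.
CUDA_VISIBLE_DEVICES=0 python finetune_chatglm.py \
    --do_train \
    --dataset example \
    --finetuning_type lora \
    --lora_rank 16 \
    --output_dir output \
    --per_device_train_batch_size 16 \
    --gradient_accumulation_steps 1 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-3 \
    --num_train_epochs 1.0 \
    --fp16
```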
Hi, beginner here with a question: how exactly do I increase the LoRA r value? Do I need to add a parameter?
@LainNetWork Add the parameter --lora_rank=16
Understood, thanks for the answer!
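For intuition on what increasing r does: a LoRA adapter on a d_out x d_in weight matrix adds r x (d_in + d_out) trainable parameters, so doubling r doubles the adapter's size. A small illustrative calculation in plain Python (the 4096 x 4096 layer shape is a made-up example, not ChatGLM's actual dimensions):

```python
def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters LoRA adds to one d_out x d_in weight:
    a down-projection A (r x d_in) plus an up-projection B (d_out x r)."""
    return r * d_in + d_out * r

# Hypothetical 4096 x 4096 projection layer (illustrative shape only).
print(lora_params(4096, 4096, r=8))   # -> 65536
print(lora_params(4096, 4096, r=16))  # -> 131072, doubled rank doubles size
```

Either way, the adapter stays tiny next to the billions of frozen base-model parameters, which is why a larger r is a cheap way to add capacity.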
Related Issues (20)
- when `per_device_eval_batch_size` > 1 and launch by deepspeed, RuntimeError: Tensors must be contiguous HOT 5
- Does LoRA training only add adapters to and train the attention layers, without training the feed-forward layers? And does freeze tuning train the feed-forward layers? HOT 1
- How should the full model exported after LoRA fine-tuning be loaded? Loading it with the transformers API raises an error HOT 2
- GPU memory usage question
- ValueError: Cannot merge LORA layers when the model is loaded in 8-bit mode
- Does line 126 of collator.py conflict with preprocess_supervised_dataset in preprocess.py? input_ids concatenates labels twice HOT 2
- Is a 4*V100 32 GB setup enough for full fine-tuning with ZeRO-3? HOT 4
- Learning Scheduler Issue HOT 2
- What is the minimum GPU memory needed for LoRA fine-tuning in 4-bit quantization mode? HOT 1
- 2400 samples, 10 epochs, pre_seq_len=128, LoRA training: why does inference show trainable params: 0 || all params: 6243584000 || trainable%: 0.0000? Is the dataset too small? Training parameters posted below HOT 1
- This problem appears during the SFT stage; the environment should be fine HOT 2
- Repository is too large; git clone is slow HOT 2
- I don't understand why this model uses left padding. After encoding, the sequence starts with a long run of -100 tokens, which seems to make it hard for the LLM to converge.
- What are the valid values for lora_target? HOT 2
- step equals epoch HOT 1
- After fine-tuning chatglm2 with default parameters, its conversational ability drops sharply HOT 1
- Question about the oaast_rm_zh dataset
- Choice of epochs & dataset construction HOT 1
- CUDA error: an illegal memory access was encountered HOT 1
- Fine-tuning has no effect; did I do something wrong? HOT 1