Comments (6)
@WangFei-2019 has encountered the same problem. I was helping debug it but haven't identified the issue yet. Some users have reproduced the experiments without hitting this issue, so I am also confused. I will keep you updated once we have more findings!
from llm-shearing.
I reran the experiments with my data and got a curve like this: https://wandb.ai/xmzzzzz/pruning/runs/u7c5y8t5/overview
So I think what you are seeing is most likely a data problem, though I am not sure exactly why it happens... I will upload my data as soon as I can!
@xiamengzhou Thanks for your reply. I found that the wiki data is encoded as UTF-8. Should I decode it to text before feeding it into the tokenizer?
@lippman1125 We resolved this problem by splitting the Wiki dataset into smaller subfiles and recalculating the reference loss. The issue may have stemmed from inadequate randomness in the data sampling process; inconsistencies between the reference loss and the test dataset could also have contributed.
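For anyone hitting the same sampling-randomness issue, the "split into smaller subfiles" fix could be sketched like this. This is a minimal illustration, not the repo's actual pipeline: `split_into_subfiles` is a hypothetical helper, and real tokenized data would be arrays on disk rather than an in-memory list.

```python
import random

def split_into_subfiles(samples, num_shards, seed=0):
    """Globally shuffle samples, then deal them into roughly equal shards.

    Shuffling before sharding is the point: it restores randomness that a
    single large, ordered file would lose when sampled sequentially.
    (Hypothetical helper; the llm-shearing data pipeline may differ.)
    """
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    # Round-robin split so shard sizes differ by at most one sample.
    return [shuffled[i::num_shards] for i in range(num_shards)]

shards = split_into_subfiles(range(1000), 100)
```

Each shard here holds 10 samples; writing each shard to its own file would then give the smaller subfiles described above.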
@WangFei-2019 I split the Wiki dataset into 100 subfiles. Is that enough? Another question: how many sentences did you select for evaluation? The paper says 500 sentences are enough. Could you share your wiki processing code with me? If possible, I would really appreciate it.
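Drawing the ~500 evaluation sentences mentioned above could look like the sketch below. `sample_eval_sentences` is a hypothetical helper (not part of the repo); fixing the seed just makes the evaluation set reproducible across reruns of the reference-loss calculation.

```python
import random

def sample_eval_sentences(sentences, k=500, seed=42):
    """Draw k distinct sentences for reference-loss evaluation.

    Uses sampling without replacement; falls back to the full pool
    when fewer than k sentences are available.
    (Hypothetical helper, not the llm-shearing implementation.)
    """
    if len(sentences) <= k:
        return list(sentences)
    rng = random.Random(seed)
    return rng.sample(sentences, k)

eval_set = sample_eval_sentences([f"sentence {i}" for i in range(2000)])
```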
@WangFei-2019 OK, I will give it a try. It looks like recalculating the reference loss is a key part of the solution.
Related Issues (20)
- Could you provide tokenized continue-pretraining dataset for reproduction? HOT 2
- missmatch shape
- Start training but nothing continue HOT 6
- TypeError: buffer is too small for requested array
- Pruning fine-tuned model HOT 2
- save model meet problem HOT 1
- Instruction tuning dataset HOT 2
- If I can't configure Slurm on a cluster, does that mean I can't use multi-node multi-GPU setups? HOT 5
- Is there a way to run pruning without Slurm?
- Start training but only output config information HOT 3
- The Project is not implemented for 70B llama? HOT 7
- LlamaRMSNorm() layer differs from original llama HOT 1
- composer model trans to pythia problem
- The dtype of tokenized data should be uint32
- Why the rope params are ignored while converting hf checkpoint to composer checkpoint? HOT 3
- about shearing params config
- Can LLM-Shearing be used on ViT models?
- Support for Llama-3 / GQA?
- Open source the pruning mask.