abacaj / fine-tune-mistral
Fine-tune mistral-7B on 3090s, a100s, h100s
License: MIT License
Just want to know what you used: what does the data look like, special tokens, etc.?
Hi, I am running out of memory:
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 732.00 MiB (GPU 0; 24.00 GiB total capacity; 20.62 GiB already allocated; 0 bytes free; 22.68 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
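The error message itself points at one possible mitigation. A minimal sketch, assuming fragmentation is the cause (the value 128 is an illustrative guess, not a recommendation from this repo):

```python
import os

# PyTorch reads PYTORCH_CUDA_ALLOC_CONF at CUDA initialization, so set this
# before importing any code that touches the GPU.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# Other common levers (general PyTorch practice, not specific to this repo):
# a smaller micro-batch size, gradient checkpointing, or bf16/fp16 training.
```

Note that on a 24 GiB card with 20.62 GiB already allocated, fragmentation tuning may only buy a little headroom; reducing the per-step memory footprint is usually the more reliable fix.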
Thanks for putting this together!
I am looking into the multipack sampler to get a better understanding of what it's doing. My initial understanding is that it packs sequences so that each bin satisfies total length in batch < bs x seqlen. Later, the collator pads to the longest sequence. I created a toy example to check the unpadded token ratio in each batch, and it turned out to be lower than I expected. I also printed the efficiency()
computed in the batch sampler, and it gives a different number.
from dataclasses import dataclass
from typing import Dict, Sequence

import datasets
import numpy as np
import torch
import transformers
from torch.utils.data import DataLoader

# adjust this import to wherever MultipackDistributedBatchSampler lives in the repo
from multipack_sampler import MultipackDistributedBatchSampler


class DummyTokenizer:
    pad_token_id = 0


@dataclass
class DataCollatorForSupervisedDataset(object):
    """Collate examples for supervised fine-tuning."""

    tokenizer: transformers.PreTrainedTokenizer

    def __call__(self, instances: Sequence[Dict]) -> Dict[str, torch.Tensor]:
        input_ids, labels = tuple(
            [instance[key] for instance in instances] for key in ("input_ids", "labels")
        )
        # BEGIN: added lines to return torch.tensor
        input_ids = [torch.tensor(x) for x in input_ids]
        labels = [torch.tensor(x) for x in labels]
        # END
        input_ids = torch.nn.utils.rnn.pad_sequence(
            input_ids, batch_first=True, padding_value=self.tokenizer.pad_token_id
        )
        labels = torch.nn.utils.rnn.pad_sequence(
            labels, batch_first=True, padding_value=-100
        )
        return dict(
            input_ids=input_ids,
            labels=labels,
            attention_mask=input_ids.ne(self.tokenizer.pad_token_id),
        )


# sequences of lengths 1..100 in random order, each filled with its own length
ds = [(torch.ones(x) * x).long() for x in np.random.permutation(np.arange(1, 101))]
ds = [{"input_ids": x, "labels": x} for x in ds]
ds = datasets.Dataset.from_list(ds)
lengths = np.array([len(x["input_ids"]) for x in ds])

train_sampler = MultipackDistributedBatchSampler(
    batch_max_length=4 * 128,
    lengths=lengths,
    num_replicas=1,
    rank=0,
    seed=42,
)

tokenizer = DummyTokenizer()
collator = DataCollatorForSupervisedDataset(tokenizer)
train_loader = DataLoader(
    ds,
    pin_memory=False,
    collate_fn=collator,
    batch_sampler=train_sampler,
)

for b in train_loader:
    print((b["input_ids"] != tokenizer.pad_token_id).view(-1).float().mean())
    print(train_loader.batch_sampler.efficiency())
tensor(0.5262)
0.8966619318181818
tensor(0.6364)
0.8966619318181818
tensor(0.4837)
0.8966619318181818
tensor(0.4582)
0.8966619318181818
tensor(0.6306)
0.8966619318181818
tensor(0.7002)
0.8966619318181818
tensor(0.5488)
0.8966619318181818
tensor(0.5200)
0.8966619318181818
tensor(0.4594)
0.8966619318181818
tensor(0.8535)
0.8966619318181818
tensor(0.5618)
0.8966619318181818
Maybe I am missing something here. Thanks!
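For what it's worth, the two numbers may simply use different denominators. A toy calculation under that assumption (I have not verified this against the actual efficiency() implementation; the sequence lengths here are made up):

```python
batch_max_length = 4 * 128  # same 512-token bin capacity as the snippet above
seq_lens = [100, 90, 80, 70, 60, 50, 40, 10]  # hypothetical packed bin, sum = 500

total_tokens = sum(seq_lens)

# Assumption: the sampler's efficiency() divides packed tokens by bin capacity.
sampler_efficiency = total_tokens / batch_max_length

# The collator pads every sequence to the longest one in the batch, so the
# unpadded-token ratio divides by (num sequences x longest sequence) instead.
collator_ratio = total_tokens / (len(seq_lens) * max(seq_lens))

print(sampler_efficiency)  # 500 / 512 ~ 0.977
print(collator_ratio)      # 500 / 800 = 0.625
```

If that assumption holds, the per-batch padded ratio can sit well below the sampler's reported efficiency for the very same batches, which would be consistent with the numbers printed above.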
Hi,
I am running on 4x Tesla T4, so the VRAM size is around 4 * 16 = 64 GB. The Azure VM being used is NC64as_T4_v3.
The command I am running is:
torchrun --nnodes=1 --nproc-per-node=4 train.py
I am getting the below error across all 4 GPUs. A sample error for GPU 3 is as below:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 13.49GiB. GPU3 has a total capacity of 14.58 GiB of which 233.75MiB is free.
I was under the impression that the model would be distributed across the 4 GPUs with a cumulative VRAM size of 64 GB, and that I would not need to use QLoRA for fine-tuning.
Can you please tell me if I am missing something?
What is the minimum memory needed to run the fine-tuning script? Or which GPUs can it run on?
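As a rough back-of-the-envelope estimate (my own arithmetic under standard assumptions, not numbers from this repo): full fine-tuning with bf16 weights, bf16 gradients, and fp32 AdamW state needs on the order of 16 bytes per parameter before activations, which is why a 7B model does not fit on 4x T4 without heavy sharding or offloading.

```python
params = 7e9  # assumed parameter count for mistral-7B

weights_gb = params * 2 / 1e9              # bf16 weights: 2 bytes/param
grads_gb = params * 2 / 1e9                # bf16 gradients: 2 bytes/param
optimizer_gb = params * (4 + 4 + 4) / 1e9  # fp32 master copy + two AdamW moments

total_gb = weights_gb + grads_gb + optimizer_gb
print(f"~{total_gb:.0f} GB before activations")  # ~112 GB before activations
```

Even fully sharded across 4 GPUs that is ~28 GB per device before activations, well over a T4's 16 GB, which matches the OOM reported above. This is a sketch, not a statement about what this script's actual peak usage is.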