bigscience-workshop / xmtf
Crosslingual Generalization through Multitask Finetuning
Home Page: https://arxiv.org/abs/2211.01786
License: Apache License 2.0
Hello! It seems that some datasets, like xnli and xwinogrande-ru, are not in xP3all on Hugging Face. Will they be uploaded later? Thank you!
Thank you for contributing such excellent work.
I notice that bloomz-* models outperform bloom-* models thanks to instruction tuning. I want to build a new bloomz-* model on top of a bloom model (e.g. bloom-1b7 -> bloomz-1b7-mt), but after finetuning the bloom-1b7 model on some instruction data from xP3mt, the performance drops a lot.
I use a batch size of 2048 and a learning rate of 2e-5, and the labels on the inputs are masked.
What else do I need to pay attention to? Or are there scripts available to do this?
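For reference, this is roughly how I construct the labels (a minimal sketch with Hugging Face transformers; the max length is just what I happen to use, not taken from your scripts):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b7")

def build_example(inputs, targets, max_len=2048):
    # Tokenize prompt and target separately so we know where the prompt ends.
    input_ids = tokenizer(inputs, add_special_tokens=False)["input_ids"]
    target_ids = tokenizer(targets, add_special_tokens=False)["input_ids"] + [tokenizer.eos_token_id]
    ids = (input_ids + target_ids)[:max_len]
    # Mask the prompt tokens with -100 so only the target contributes to the loss.
    labels = ([-100] * len(input_ids) + target_ids)[:max_len]
    return {"input_ids": ids, "labels": labels}

example = build_example("Translate to English: Je t'aime.", " I love you.")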
Q1: I am trying to extract the Arabic instructions from the xP3 dataset, and I want to put them in the format: “Instruction”, “Input”, and “Output”. Currently, the data is in this format: “inputs” and “targets”.
I found that the instructions are sometimes in the last part of the "inputs", preceded by \n, and sometimes there is no delimiter at all. In other cases, the instructions are at the beginning of the "inputs", etc.
Here is an example where the instruction is at the end of the input, but without any delimiter to recognize it.
File: xp3_GEM_xlsum_arabic_train_xp3longrest.jsonl
{"inputs":"...\nووسط هذه القلة يقف أيضا شقيقيها الفنان فيصل لعيبي، الذي أثر كثيرا في تطورها الفني وباتت تشكل معه ثنائيا فنيا مميزا، يجعل من أعمالهما في حوار دائم، فتحمل كثيرا من الوشائج والتشابهات الأسلوبية والشكلية لكنها تفترق في التوجه. ففي الوقت الذي يسعى فيصل إلى تأصيل فنه في قلب منجز الرسم العراقي بالتركيز على الخصوصية العراقية واللمسة المحلية والنهل من التراث الفني الرافديني في مراحله المختلفة وعكسه بلغة فنية حداثوية معاصرة، تسعى عفيفة إلى تمييز نفسها عنه بالتحليق في فضاء إنساني عام، مبعدة لوحاتها عن أي ... Write the rest of the article:","targets":"حوارا مع نظرتها ويركز عليها ليكتشف أنها تنظر في مكان آخر أو ربما في ماضٍ بعيد. \n\nوتعنى عفيفة باختيار
Q2: I found many incomplete inputs and outputs, e.g. records containing this string:
"... Continue the article for another 4000 characters max:","targets":"."}
What should we do in such cases?
Thanks
Hamdy Mubarak
Can you provide code for continued fine-tuning of mT0 on specific downstream task data? We want to test it in specific scenarios, e.g. retrieval and recommendation.
We found a similar setup for continued fine-tuning of flan-t5 at https://www.philschmid.de/fine-tune-flan-t5-deepspeed. Does the same apply to xmtf, or can you provide an official example, e.g. for classification or QA?
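To make the request concrete, this is roughly the setup we would follow ourselves (a minimal sketch with the Hugging Face Seq2SeqTrainer, not an official xmtf recipe; the dataset file, field names and hyperparameters are placeholders):

from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

model_name = "bigscience/mt0-base"  # placeholder; we would scale up later
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Placeholder JSONL file with "inputs"/"targets" fields, e.g. our retrieval data.
ds = load_dataset("json", data_files="my_task.jsonl")["train"]

def preprocess(ex):
    enc = tokenizer(ex["inputs"], truncation=True, max_length=1024)
    enc["labels"] = tokenizer(text_target=ex["targets"], truncation=True, max_length=256)["input_ids"]
    return enc

ds = ds.map(preprocess, remove_columns=ds.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="mt0-finetuned", learning_rate=1e-4,
                                  per_device_train_batch_size=8, num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()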
Best wishes.
Hi, thank you again for your awesome work.
Your paper states that "We select the final checkpoint based on validation performance."
Does the "validation performance" mean held-out performance, or seen task performance measured on their available eval subsets?
It seems like there are mixed approaches in the literatures. While T0 checkpoints were picked solely based on seen task performance, Flan-T5 checkpoints were picked based on held-out performance.
When I first read your paper, I assumed they were picked based on held-out performance, but I recently found that prepare_xp3_train.py
saves seen task validation sets separately when available.
It would help us a lot if you could please provide additional information on this. Thank you.
I tried to use https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/main/tools/convert_checkpoint/deepspeed_to_transformers.py to convert checkpoints, but I ran into this issue: bigscience-workshop/Megatron-DeepSpeed#355. Could you help me resolve it? Thanks!
According to the link, Flores-200 only has dev and devtest sets.
So, if I want to evaluate the translation performance of mT0 and BLOOMZ, it seems unreasonable to evaluate directly on the Flores-200 dev and devtest sets.
Could you please fix the URL for P3megds? This URL is currently not available.
Thanks for the great work!
I have a few questions regarding the data creation of xP3 after following the guide here to create instruction data on the code language subset.
I noticed that the total number of samples in the public processed data (from here) for the code split is 2,707,724. However, the data I obtain by following the above GitHub guide is much larger (approximately >3M samples). I wonder if there was any additional post-processing to get the final instruction data for tuning?
Following the above GitHub guide, I noticed there was no prompt for this particular dataset, State Changes. I got this warning when running the creation code:
Tried instantiating `DatasetTemplates` for Fraser/python-state-changes, but no prompts found. Please ignore this warning if you are creating new prompts for this dataset.
Is this dataset not assigned any prompt (similar to how HumanEval was treated)? Or is the version of PromptSource I used below not correct?
git clone -b tr13 https://github.com/Muennighoff/promptsource.git
cd promptsource; pip install -e .
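For what it's worth, this is the quick check I ran to confirm that the installed branch ships no templates for that dataset (a small sketch using the PromptSource API, assuming the tr13 branch above is installed):

from promptsource.templates import DatasetTemplates

templates = DatasetTemplates("Fraser/python-state-changes")
# An empty list here would explain the warning above.
print(templates.all_template_names)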
Hi,
Thank you for the very interesting work and releasing the code. It is very helpful!
Is there a way I can get the machine-translated prompts per task?
For example, how would I get the Spanish (es) prompt for Paws-x only?
bigscience/xP3mt seems to contain the input/output pairs in Spanish for all the training tasks. Is there a way I can get the input/output pairs for Paws-x only?
In data/xp3/prepare_xp3_train.py, setting USE_ENGLISH_PROMPTS to False seems to load prompts in different languages from PromptSource, but PromptSource only has prompts in English for Paws-x (https://github.com/bigscience-workshop/promptsource/tree/main/promptsource/templates/paws-x).
Also, more generally, how do you do machine translation for prompts if the language is written right-to-left instead of left-to-right, or has a different word order such as subject-object-verb instead of subject-verb-object? Would the target come before the input, or would you reorder the sentences of the input (i.e. the premise or hypothesis) in the prompt? And if the target comes before the input, how would the model work, since it generates from left to right?
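In case a per-task filter is already possible, this is the kind of workaround I had in mind (a hypothetical sketch with huggingface_hub; it assumes the JSONL files inside bigscience/xP3mt keep the source dataset name, e.g. "paws-x", and the language code in their file names, which I have not verified):

import json
from huggingface_hub import hf_hub_download, list_repo_files

repo = "bigscience/xP3mt"
files = list_repo_files(repo, repo_type="dataset")

# Keep only the Spanish Paws-x files (assumed naming scheme, please correct me if wrong).
paws_es = [f for f in files if "paws" in f.lower() and ("/es/" in f or "_es_" in f)]

for f in paws_es:
    path = hf_hub_download(repo, f, repo_type="dataset")
    with open(path) as fh:
        record = json.loads(fh.readline())
        print(record["inputs"], "->", record["targets"])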
Thank you,
Derek
Hello!
Thanks a lot for your work!
I want to finetune bloomz-mt with your Megatron-DeepSpeed, but I cannot find a universal checkpoint of bloomz-mt or bloomz. I only found the bloom universal checkpoint below.
https://huggingface.co/bigscience/bloom-optimizer-states/tree/global_step95000_universal
With limited GPUs, I have to use TP 4, PP 12 to finetune, but I found that you advise against merging TP in the document below. That is why I am looking for the bloomz-mt universal checkpoint.
https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/finetune.md
Hello, thanks for your work! I want to try to reproduce this work myself, but I cannot achieve the high performance with xP3 and mT0-xxl reported in the paper Crosslingual Generalization through Multitask Finetuning. I would like to know the training details: how many steps did you train the model for, and what is your lr-decay-ratio? Could I get the config file to reproduce your results? Thank you very much!
Hello!
Thanks a lot for your work! I'm using mT0-xxl for a question answering task, but it does not perform with as high quality as I expected. So I'm trying to finetune the model a little bit. If I understood correctly, I should first get the checkpoint and gin file for the model I want to finetune. Could you please share these?
And is it possible to finetune it with torch, or is tf the only way?
Hello, thank you for your inspiring work!
I assumed that for xP3mt, all languages would have the same number of templates within a dataset, as they are all machine-translated from the English templates. However, while taking a look at xP3mt, I noticed that the number of templates differs between languages within the same dataset.
For example, XCOPA has 12, 10 and 5 templates for English, Chinese and Italian, respectively.
I checked that XQuAD is also like this.
Your paper explains the experiments in great detail, but I believe the above detail was not mentioned. Could you please provide some additional information about this decision? Thank you in advance.
Hi, how do I convert model weights (e.g., bigscience/bloomz-560m-optimizer-states) to a Hugging Face model.bin file?
Is mT0 suitable / recommended for continued training on a mixture of denoising tasks (span corruption, extreme span corruption, prefix LM) similar to UL2? Like below:
# span_corruption
{
"text_input": "The <extra_id_0> walks in <extra_id_1> park",
"text_output": "<extra_id_0> cute dog <extra_id_1> the <extra_id_2>"
}
# extreme_span_corruption
{
"text_input": "The <extra_id_0> park",
"text_output": "<extra_id_0> cute dog walks in the <extra_id_1>"
}
# prefix LM
{
"text_input": "The cute <extra_id_0>",
"text_output": "<extra_id_0> dog walks in the park"
}
My domain text is quite different from internet text, so I assume a span corruption task would help mT0 learn the special syntax / semantics of my domain.
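To make the question concrete, the kind of preprocessing I would run over my domain text looks roughly like this (my own simplified sketch of T5-style span corruption with sentinel tokens, not code from this repo; the noise density and mean span length are just the usual T5-style defaults):

import random

def span_corrupt(tokens, noise_density=0.15, mean_span_len=3):
    # Turn a token list into a (text_input, text_output) pair with <extra_id_N> sentinels.
    n_noise = max(1, round(len(tokens) * noise_density))
    n_spans = max(1, round(n_noise / mean_span_len))
    starts = sorted(random.sample(range(len(tokens)), n_spans))

    inp, out, cursor, sentinel = [], [], 0, 0
    for s in starts:
        if s < cursor:
            continue  # skip overlapping spans in this simplified version
        e = min(len(tokens), s + mean_span_len)
        inp += tokens[cursor:s] + [f"<extra_id_{sentinel}>"]
        out += [f"<extra_id_{sentinel}>"] + tokens[s:e]
        cursor, sentinel = e, sentinel + 1
    inp += tokens[cursor:]
    out += [f"<extra_id_{sentinel}>"]
    return " ".join(inp), " ".join(out)

print(span_corrupt("The cute dog walks in the park".split()))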
https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/main/megatron/data/mtf_dataset.py#L34
The MTFDataset class takes documents as an argument, but doesn't use it (except in an assert statement). I think documents is the train/valid/test split index; is it OK to ignore documents?
https://arxiv.org/pdf/2212.09535.pdf
I was reading this paper and am really interested in trying this myself, but I can't find the model weights (bloom-3b) anywhere. Could you link them? That would be great.
Is it possible to use Petals for inference / prompt tuning without sharing my GPU?
Hello, guys!
As the title says, I'm trying to export mt0-xxl-mt (with some adjustments, which I specify later) to ONNX, but the export fails every time.
Regarding the model adjustments: I loaded the model from Hugging Face in 8-bit precision mode, then fine-tuned it on my downstream task with LoRA/PEFT, and after that I am trying to export it to ONNX.
I've just realized that in the state_dict of both the base model from Hugging Face and the model after LoRA/PEFT finetuning there is a curious entry named 'weight_format' with the value 'row' instead of a weight tensor. The export to ONNX fails because the export function tries to apply the detach() method to that value, which obviously raises an error.
So my questions are:
1. What is this 'weight_format' entry and what does it stand for?
2. If I remove it from the state_dict and the model architecture, will it cause further errors or model instability?
3. Is there another way to export the model to ONNX without adjusting the state_dict and the model architecture?
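For context, this is how I spotted the offending entries (a small diagnostic sketch; it assumes, as in my case, that the 'weight_format' value is a plain string rather than a tensor):

import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-xxl-mt", load_in_8bit=True, device_map="auto")

# Print every state_dict entry that is not a tensor; torch.onnx.export fails on these.
for name, value in model.state_dict().items():
    if not isinstance(value, torch.Tensor):
        print(name, repr(value))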
Hi!
Thanks for the amazing work!
I have a couple of quick questions. I'm trying to use mT0-xxl-mt for QA. When I provide the context and ask a question whose subject is not present in the context, the model still returns something from the context, even if it is totally wrong. The ideal scenario in this case would be for the model to output something like "I cannot answer this question with this context".
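What I am trying as a workaround right now is to put the abstain option directly into the prompt (a quick sketch with transformers; the instruction wording is just my guess, not a prompt from xP3):

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "bigscience/mt0-xxl-mt"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

context = "The Eiffel Tower is located in Paris."
question = "Who designed the Statue of Liberty?"
prompt = ("Answer the question using only the context. "
          'If the context does not contain the answer, reply "unanswerable".\n'
          f"Context: {context}\nQuestion: {question}")

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))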
Hi 😀
First of all, thank you for your very interesting work 🚀
I was wondering about two points for which I couldn't find an answer by myself (maybe I didn't search well enough) and I would need your help.
I would like to know, for a given task and language, which prompts were used for finetuning. For example, let's say French summarization. I started searching for the prompts used for French summarization, but I didn't find a list that summarizes such information. PromptSource provides 2085 prompts in English, but nothing about translations into other languages. Does such a list exist? 🤔
To try to solve the previous point, I thought I would download the xP3mt dataset and read directly which prompts were used. The problem is that you can download all the data for a selected language, but you can't apply an additional filter on the task/(sub)dataset. Would this be something that could be added?
Or even better, create individual multilingual datasets of the translations you have done. For example, having the ability to upload an "mSamSum", which would be the multilingual version of "SamSum", which is originally purely in English. This would probably allow them to be reused in other works, especially monolingual ones. Taking again the example of French summarization, there is little data currently available: OrangeSum, XLSum and WikiLingua. Having easy access to the translations of CNN/Daily Mail, Gigaword, Multi-News, SamSum and XSum would allow very interesting things to be done 🤯