Comments (10)
Tagging @MostafaDehghani for clarity on image tasks.
Note: we might take a while to reply due to the upcoming deadlines. Thanks!
from long-range-arena.
@vanzytay Thanks for the quick response!
@MostafaDehghani Could you help me check this when you get time, and let me know any possible configs I could try? (We're also trying for the upcoming deadline :)) Thanks!
Thank you @keroro824 for the question.
So if I understood correctly, you are looking for the configs for the vanilla Transformer to reproduce its results on the CIFAR10 dataset in LRA. For that, you can use the following model hparams:
import ml_collections  # config below is the task ConfigDict, inherited from the base config

# model params
config.model = ml_collections.ConfigDict()
config.model.emb_dim = 128
config.model.num_heads = 8
config.model.num_layers = 1
config.model.qkv_dim = 64
config.model.mlp_dim = 128
config.model.dropout_rate = 0.3
config.model.attention_dropout_rate = 0.2
config.model.classifier_pool = 'CLS'
config.model.learn_pos_emb = True
We are planning to release the code for all models and the best-performing configurations as soon as possible. In the meantime, please let us know if you have any questions :)
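For anyone wiring these values up by hand, here is a plain-Python mirror of the posted hparams (types.SimpleNamespace stands in for ml_collections.ConfigDict purely for illustration; the per-head split at the end follows the standard multi-head attention convention and is an assumption about this codebase):

```python
from types import SimpleNamespace

# Stand-in for ml_collections.ConfigDict, holding the hparams posted above.
model = SimpleNamespace(
    emb_dim=128,
    num_heads=8,
    num_layers=1,
    qkv_dim=64,
    mlp_dim=128,
    dropout_rate=0.3,
    attention_dropout_rate=0.2,
    classifier_pool='CLS',
    learn_pos_emb=True,
)

# With the usual multi-head split (an assumption about this codebase),
# qkv_dim is divided evenly across the attention heads.
head_dim = model.qkv_dim // model.num_heads
print(head_dim)  # 8
```

So each of the 8 heads attends in an 8-dimensional subspace under these settings.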
@MostafaDehghani Thank you!!! I can replicate it now!
No problem at all! Perfect!
... and good luck with the deadline :)
The above comment states 1 layer and leaves the learning rate unspecified, which means the learning rate will be 0.0005, inherited from base_cifar10_config.
The arxiv paper states: 3 layers, learning rate 0.01.
The openreview paper states: 3 layers, learning rate 0.01.
Notably, the config file itself is still empty.
Currently, the code in this repository is inconsistent with the published articles. Do you plan to fix these inconsistencies, or has this project been abandoned?
IIRC the config files take precedence over the paper hparams. We will update the README here to state this.
The best results in the paper are all reproducible from the code in this repo. Have you tried the configs shared here? Many people have reproduced the results without any issues since our last update.
LRA is a living benchmark. We tried our best to tune the hyperparameters of each model in the paper, and some of the authors of those models reached out to help us find better ones. The codebase has the most up-to-date versions and can be used to reproduce the results.
Notably, the config file itself is still empty.
If you read the code carefully, you can see that the config file you are referring to is inheriting from the base config!
IIRC the config files take precedence over the paper hparams. We will update the README here to state this.
This was not clear to me, I apologize for the misunderstanding.
If you read the code carefully, you can see that the config file you are referring to is inheriting from the base config!
I meant that the file was empty, so the learning rate was inherited as 0.0005 from the base config file, while the article reported a learning rate of 0.01. I was under the impression that the article hyperparameters would be used, but as vanzytay clarified, this is not the case.
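The inheritance behaviour described here can be sketched in plain Python (a stand-in for the repo's ml_collections-based configs; the 0.0005 default under base_cifar10_config is the value cited above, everything else is illustrative):

```python
import copy

def base_cifar10_config():
    # Stand-in for the base_cifar10_config defaults (a sketch; only the
    # 0.0005 learning rate comes from the discussion above).
    return {
        "learning_rate": 0.0005,
        "model": {"num_layers": 3},
    }

def task_config():
    # An "empty" task config that overrides nothing: every value,
    # including the learning rate, is inherited from the base config.
    return copy.deepcopy(base_cifar10_config())

cfg = task_config()
assert cfg["learning_rate"] == 0.0005  # inherited; not the paper's 0.01
```

Any value the task config did set (e.g. num_layers) would simply overwrite the base default, which is why the config files take precedence over the paper hparams.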
It is a good reminder to us that the paper is due for an update asking researchers to defer to the codebase to reproduce the results.
In our 2nd update, we re-ran all the CIFAR experiments to make sure they were reproducible, so the code configs should be good. Do give it a try and let us know if you run into any other issues. Thanks!
Related Issues (20)
- Bug in Pathfinder-128 dataset
- Error in matching task
- Perceiver on LRA
- Pathfinder not learning three times in a row
- Error when running document retrieval
- Question about CUDA version when using GPUs
- Quadratic Longformer suspicion
- Dataset for the matching task
- Are encoder and decoder both implemented with sparse attention for BigBird? How long is the verified output length for the decoder?
- Current code doesn't work with the latest flax version and runs on CPU only
- The best checkpoint of Transformer
- AAN dataset unavailable
- AAN dataset crashing when loading .tsv file
- ModuleNotFoundError: No module named 'flax.deprecated'
- How to use the pathfinder.py code to generate the dataset?
- Pretrained models
- Is there a pytorch equivalent of this implementation?
- Question regarding model checkpoint
- Question regarding Pathfinder and Listops performance
- Is it really byte-level?