Comments (6)
Thanks for the suggestion about the masking for larger models.
As a first step we want to reproduce your results on the Genomic Benchmarks dataset shown in the paper.
Thank you very much for all your advice!
from hyena-dna.
The OOM error is the more relevant one; the other error is caused by the checkpoint_mixer/mlp flag.
What's your batch size? For the 1M model, we use a batch size of 1 (per GPU) and then use the accumulate_grad_batches
flag to adjust the effective batch size.
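For reference, the effective batch size under gradient accumulation works out to a simple product. A minimal sketch (the function name is illustrative, not from the hyena-dna codebase):

```python
def effective_batch_size(per_gpu_batch: int, num_gpus: int,
                         accumulate_grad_batches: int) -> int:
    """Batch size per optimizer step when gradients are accumulated
    over several forward/backward passes before each update."""
    return per_gpu_batch * num_gpus * accumulate_grad_batches

# e.g. batch size 1 per GPU, 8 GPUs, accumulating 32 micro-batches:
print(effective_batch_size(1, 8, 32))  # 256
```

So even with a per-GPU batch size of 1 (needed to fit the 1M-context model in memory), accumulation recovers a large effective batch.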
from hyena-dna.
Oh, I reread your response and see you're using the 1M model on short sequences; that's highly inefficient. I would use the short-sequence models for short tasks. Then you can use much bigger batch sizes too.
from hyena-dna.
Thank you very much for your advice! That makes sense, since I noticed I was using a batch size of 256, which is actually too large.
Using a batch size of 1 gives me the following error:
ValueError: Expected input batch_size (1) to match target batch_size (0).
Setting the batch size to 2 or higher solves the problem. What could be the cause?
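For what it's worth, that message is the batch-size validation that PyTorch's cross-entropy/NLL losses perform firing because the target tensor arrives empty. A plain-Python sketch of the check (illustrative, not the actual PyTorch source):

```python
def check_batch_sizes(input_batch: int, target_batch: int) -> None:
    # Mirrors the validation that cross-entropy-style losses
    # perform before computing the loss.
    if input_batch != target_batch:
        raise ValueError(
            f"Expected input batch_size ({input_batch}) "
            f"to match target batch_size ({target_batch})."
        )

# A size-1 input paired with a zero-length target reproduces the report:
try:
    check_batch_sizes(1, 0)
except ValueError as e:
    print(e)  # Expected input batch_size (1) to match target batch_size (0).
```

One possible culprit (an assumption, not confirmed from the code) is a collate or label-extraction step that yields a zero-length target when a batch holds a single example, which would explain why batch size 2 hides it.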
You're right about using short-sequence models for short sequences, but we would like to test Hyena 1M in order to compare its performance across context sizes. Unfortunately, the Genomic Benchmarks dataset contains only short sequences, and the same is true for most Nucleotide Transformer datasets.
Can you suggest a suitable dataset on which we can test the larger versions of Hyena?
Our main goal is to reproduce the results reported in the paper on the Genomic Benchmarks dataset. Did you use the tiny-1k model? Can you provide the exact hyperparameters?
Thank you for your time.
from hyena-dna.
We can help in general with using the codebase as is (with existing datasets), but we're going to need a lot more context: which dataset/task, the command used to launch, a wandb link (if available), etc.
If it's a custom dataset, then you'll need to get fairly intimate with the code and how things flow, e.g., by putting breakpoints everywhere. That'll be the most efficient path for you.
In general, start small and check each module individually: e.g., is my dataloader returning the exact shapes I expect? If not, add a breakpoint and check. Better yet, use the existing datasets and verify that things work the way you think.
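The shape-checking advice above can be sketched as a tiny harness. Names and expected shapes are placeholders for whatever your own dataloader should emit:

```python
def check_loader(loader, expected_seq_len, expected_batch):
    """Pull one batch and assert it has the shapes you expect
    before launching a full training run."""
    x, y = next(iter(loader))
    assert len(x) == expected_batch, f"batch dim: {len(x)} != {expected_batch}"
    assert all(len(seq) == expected_seq_len for seq in x), "seq length mismatch"
    assert len(y) == expected_batch, f"target batch: {len(y)} != {expected_batch}"
    return x, y

# Toy stand-in for a real DataLoader: one (inputs, targets) batch of two
# length-8 token sequences with one label each.
fake_loader = [([[0] * 8, [1] * 8], [0, 1])]
check_loader(fake_loader, expected_seq_len=8, expected_batch=2)
```

Running this on the real dataloader before training would have surfaced the empty-target batch above immediately.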
Good luck!
from hyena-dna.
Also, for using a giant context on short datasets, you'll want to use the masking functionality; otherwise it's not a fair comparison. Since we average embeddings, the signal will get drowned out by 1M random embeddings lol.
See the masking section here for how to use it.
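The drowning-out effect is easy to see in a toy example. A plain-Python sketch of mean-pooling with and without a padding mask (illustrative only; the actual hyena-dna masking API is the one documented in its README):

```python
def mean_pool(embeddings, mask=None):
    """Average token embeddings; if a mask is given, only positions
    with mask == 1 contribute to the average."""
    if mask is None:
        mask = [1] * len(embeddings)
    kept = [e for e, m in zip(embeddings, mask) if m]
    return sum(kept) / len(kept)

# 4 real tokens carrying signal, padded out to a 1000-token context.
real = [1.0, 1.0, 1.0, 1.0]
padded = real + [0.0] * 996
mask = [1] * 4 + [0] * 996

print(mean_pool(padded))        # 0.004 -- signal drowned by the padding
print(mean_pool(padded, mask))  # 1.0   -- masked pooling recovers it
```

With a 1M context the unmasked average would be diluted roughly 250x further, which is why masking matters for a fair comparison.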
from hyena-dna.
Related Issues (20)
- How to recreate the result of DNABERT in paper HOT 1
- How to convert the batch cell from the GenomicBenchmarks data to user data? CUDA memory overload if running "Single example" cell multiple times to produce embeddings.
- How to pre-train on custom dataset using hyena-dna
- need to swap layer norm op for triton-based layer norm? HOT 2
- Reproducing the HyenaDNA results on NT Benchmarks
- Running with standard Huggingface config and trainer files does not give optimal results
- The default for pretrained_model_path in config files is a personal directory
- Bugs when I try to access the embeddings HOT 3
- Question: How to generate DNA sequence or sequence embeddings based on own bed file
- How can I access the dataset--genomic_benchmark I got timeout issue
- Failure to reproduce the hyenaDNA reported results on NT tasks. HURRY! HURRY HURRY
- Symbol lookup error
- Error when Resuming pre-training
- Training loss becomes NAN during pretraining HOT 1
- Questions about pre-training with multiple sequences HOT 1
- Inquire about GWAS tasks
- install problem
- CUDA out of memory occurs when the training length reaches 450k on a100 HOT 1
- Where are genomic_bench_dataloader.py and nucleotide_transformer_dataloader.py located?
- RuntimeError: cuFFT error: CUFFT_INTERNAL_ERROR