Comments (13)
Hi all, I have a few questions about reproducing this project as well:
- I suppose this means that we have to download the YouTube videos ourselves and apply the pre-processing as per https://github.com/deepmind/dmvr/tree/master/examples - is that correct?
- Also, to expand on @wentaozhu's point, MBT supports RGB as well as spectrograms, but in your projects/mbt/configs/audioset/balanced_audioset_base.py config file, config.dataset_configs.tables only seems to contain spectrogram tfrecords, i.e. balanced_train.se.melspec.tfrecord.sst@1024. How can the RGB component of the data be integrated into this config file?
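To make that question concrete, this is roughly what I imagined the tables entry might look like with both modalities. A sketch only: the RGB table name below is my guess, not an actual file name from the repo.

```python
# Sketch only: how I imagined the tables entry might list both modalities.
# The RGB table name is hypothetical - I don't know the real naming scheme.
dataset_tables = {
    'train': 'balanced_train.se.melspec.tfrecord.sst@1024',    # spectrograms (what the config has)
    # 'train_rgb': 'balanced_train.se.rgb.tfrecord.sst@1024',  # hypothetical RGB counterpart?
}
print(list(dataset_tables))  # ['train']
```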
- It is also not very clear to me how the dataset split is generated. For training and validation, I assume we use the .csv files provided by AudioSet, but they make no mention of a test set. Do we just use the same records as the validation set for the test set as well?
- Finally, a minor query about the naming convention of the tfrecord files: what is the significance of the .sst@1024 at the end of each record (from the config file I mentioned earlier)? Does this have something to do with the number of shards the dataset is split into?
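If my guess is right and @1024 is a shard count, I'd expect the single path to expand into per-shard file names following the common sharded-file convention. A tiny sketch of the expansion I have in mind (the -00000-of-01024 pattern is my assumption):

```python
def expand_sharded_path(path):
    """Expand 'name@N' into N per-shard file names (assumed convention)."""
    base, num = path.rsplit('@', 1)
    n = int(num)
    return [f'{base}-{i:05d}-of-{n:05d}' for i in range(n)]

shards = expand_sharded_path('balanced_train.se.melspec.tfrecord.sst@1024')
print(len(shards))  # 1024
print(shards[0])    # balanced_train.se.melspec.tfrecord.sst-00000-of-01024
```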
Sorry for the barrage of questions; this is my first foray into deep learning research and I'm trying to get an understanding of the best practices!
Thank you!
from scenic.
Hello everyone,
I also have some questions about the data preparation process; I couldn't find answers to these in the paper.
- To get the log mel spectrograms from the audio data: in the process of converting from amplitude to dB, what reference point was used (1, max, median, ...)?
- I didn't see it mentioned in the paper, but I've seen in the code that there is optional zero centering for both the RGB and the spectrogram inputs. I wanted to know if this was used for the data used to train the model checkpoints.
- Lastly, if it's possible, where can we get the build configurations that the checkpoints expect for AVTFRecordDatasetFactory? There seems to be some mismatch that created some confusion. For example, when we want to create the dataset using AVTFRecordDatasetFactory, the default num_spec_frame is 5, but the checkpoint seems to expect, and the paper mentions, 8 seconds sampled. I might have seen additional mismatches as well, so I would like to be sure.
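To make the first question concrete, this is the conversion I have in mind (my own tiny numpy sketch, not the repo's code); the choice of `ref` is exactly what I'm asking about:

```python
import numpy as np

def power_to_db(spec, ref=1.0, amin=1e-10):
    """Convert a power spectrogram to decibels relative to `ref`."""
    return 10.0 * np.log10(np.maximum(spec, amin) / ref)

spec = np.array([0.5, 1.0, 4.0])
db_abs = power_to_db(spec, ref=1.0)         # absolute dB (ref = 1)
db_peak = power_to_db(spec, ref=spec.max()) # peak-normalized: max -> 0 dB
print(db_abs)   # ~[-3.01  0.    6.02]
print(db_peak)  # ~[-9.03 -6.02  0.  ]
```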
Sorry for piling on more questions :) I am warming up to these topics, so if you want to point me to additional resources, that would be great as well.
Thanks a lot!
Thank you so much, Mostafa! @MostafaDehghani and @anuragarnab
Hi,
The audio for all datasets is sampled at 16kHz and converted to mono. We then extract log mel spectrograms with a frequency dimension of 128, computed using a 25ms Hamming window with a 10ms hop length. This gives us an input of size 128 × 100t for t seconds of audio. No other processing is applied to the spectrograms before they are stored in tfrecords.
The details are described in Sec. 4.2 here: https://arxiv.org/pdf/2107.00135.pdf. Please let me know if you have any more questions!
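For anyone reproducing this, the recipe above can be sketched in plain numpy as follows. This is a minimal sketch of the stated parameters only (16kHz mono, 128 mel bins, 25ms Hamming window, 10ms hop); the FFT size and the mel filterbank construction are my own choices, not the authors' pipeline:

```python
import numpy as np

def log_mel_spectrogram(audio, sr=16000, win_ms=25, hop_ms=10, n_mels=128):
    """Log mel spectrogram (n_mels x frames) from a mono waveform."""
    win = sr * win_ms // 1000          # 400 samples (25 ms at 16 kHz)
    hop = sr * hop_ms // 1000          # 160 samples (10 ms hop)
    n_fft = 512                        # assumption: next power of two >= win
    # Frame the signal (no padding) and apply a Hamming window.
    n_frames = 1 + (len(audio) - win) // hop
    frames = np.stack([audio[i * hop:i * hop + win] for i in range(n_frames)])
    frames = frames * np.hamming(win)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2       # (frames, n_fft//2+1)
    # Triangular mel filterbank spanning 0 Hz .. sr/2.
    hz2mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel2hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_edges = mel2hz(np.linspace(0.0, hz2mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_edges / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)
        fbank[m - 1, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)
    mel = fbank @ power.T                                 # (n_mels, frames)
    return 10.0 * np.log10(np.maximum(mel, 1e-10))        # power -> dB

t = 2                                  # seconds of audio
wave = np.random.randn(16000 * t)
spec = log_mel_spectrogram(wave)
print(spec.shape)                      # ~128 x 100t: (128, 198) without padding
```

With padding (as most STFT libraries apply by default) the frame count would come out to exactly 100t; without it you get a few frames fewer, as here.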
Could you provide a script for generating the corresponding tfrecord files from the AudioSet dataset?
Hi, sorry, but we can't release our data processing scripts. However, you can follow the instructions here: https://github.com/deepmind/dmvr/tree/master/examples to create tfrecord files in the correct DMVR format.
Hi, have you resolved these issues?
Hi, could you disclose the processed data?
@wentaozhu Following up on this, any updates?