Comments (13)

jayathungek avatar jayathungek commented on May 17, 2024 3

Hi all, I have a few questions about reproducing this project as well:

  1. I suppose this means that we have to download the YouTube videos ourselves and apply the pre-processing as per https://github.com/deepmind/dmvr/tree/master/examples - is that correct?

  2. Also, to expand on @wentaozhu's point, MBT supports RGB as well as spectrograms, but in your projects/mbt/configs/audioset/balanced_audioset_base.py config file, config.dataset_configs.tables only seems to contain spectrogram tfrecords, e.g. balanced_train.se.melspec.tfrecord.sst@1024. How can the RGB component of the data be integrated into this config file?

  3. It is also not very clear to me how the dataset split is generated. For training and validation, I assume we use the .csv files provided by AudioSet, but they make no mention of a test set. Do we just reuse the validation records as the test set?

  4. Finally, a minor query about the naming convention of the tfrecord files (from the config file I mentioned earlier): what is the significance of the .sst@1024 suffix at the end of each record? Does this have something to do with the number of shards the dataset is split into?
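
(On question 4: by common Google file-naming convention, a @N suffix means the table is split into N shards, and .sst refers to the SSTable container format, so balanced_train.se.melspec.tfrecord.sst@1024 most likely names 1024 shard files. Below is a minimal sketch of how such a pattern typically expands; the exact -00000-of-01024 per-shard naming is an assumption.)

```python
# Sketch: expanding a sharded table name like "table.sst@1024".
# The "-00000-of-01024" per-shard naming is an assumption here.
def expand_shards(pattern):
    base, num = pattern.rsplit("@", 1)
    n = int(num)
    return [f"{base}-{i:05d}-of-{n:05d}" for i in range(n)]

shards = expand_shards("balanced_train.se.melspec.tfrecord.sst@1024")
print(len(shards))   # 1024
print(shards[0])     # balanced_train.se.melspec.tfrecord.sst-00000-of-01024
```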

Sorry for the barrage of questions; this is my first foray into deep learning research and I'm trying to get an understanding of best practices, etc.!

Thank you!

uck16m1997 avatar uck16m1997 commented on May 17, 2024 1

Hello everyone,

I also have some questions about the data preparation process; I couldn't find answers to these in the paper.

  1. When converting the audio from amplitude to dB to obtain the log mel spectrograms, what reference point was used (1, max, median, ...)?

  2. I didn't see it mentioned in the paper, but I've seen in the code that there is optional zero-centering for both the RGB and the spectrogram inputs. Was this used for the data that trained the released model checkpoints?

  3. Lastly, if possible, where can we get the build configurations that the checkpoints expect of AVTFRecordDatasetFactory? There seems to be a mismatch that created some confusion: for example, when creating the dataset with AVTF, the default num_spec_frame is 5, but the checkpoints seem to expect, and the paper mentions, 8 seconds of sampled audio. I may have seen additional mismatches as well, so I would like to be sure.

Sorry for piling on more questions :) I am warming up to these topics, so if you want to point me to additional resources, that would be great as well.

Thanks a lot!

wentaozhu avatar wentaozhu commented on May 17, 2024

Thank you so much, Mostafa! @MostafaDehghani and @anuragarnab

a-nagrani avatar a-nagrani commented on May 17, 2024

Hi,

The audio for all datasets is sampled at 16kHz and converted to mono channel. We then extract log mel spectrograms with a frequency dimension of 128, computed using a 25ms Hamming window with a hop length of 10ms. This gives us an input of size 128 × 100t for t seconds of audio. No other processing is applied to the spectrograms before they are stored in tfrecords.

The details are described in Sec. 4.2 here: https://arxiv.org/pdf/2107.00135.pdf. Please let me know if you have any more questions!
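
For anyone reproducing this step, here is a minimal sketch of the extraction described above, using librosa. The sample rate, window, hop, and mel-bin values come from the reply; the n_fft size and the dB reference point (ref=1.0) are assumptions not confirmed in this thread (see @uck16m1997's first question).

```python
# Minimal sketch of the log mel spectrogram extraction described above.
import librosa

SR = 16000               # audio sampled at 16 kHz, mono
WIN = int(0.025 * SR)    # 25 ms Hamming window -> 400 samples
HOP = int(0.010 * SR)    # 10 ms hop -> 160 samples, i.e. 100 frames/second

def log_mel_spectrogram(path):
    y, _ = librosa.load(path, sr=SR, mono=True)
    mel = librosa.feature.melspectrogram(
        y=y, sr=SR, n_fft=512, win_length=WIN, hop_length=HOP,
        window="hamming", n_mels=128)
    # ref=1.0 is an assumed dB reference point, not confirmed in the thread.
    return librosa.power_to_db(mel, ref=1.0)

# For t seconds of audio this returns an array of roughly 128 x 100t,
# matching the 128 × 100t input size quoted in the reply.
```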

LogicSense1 avatar LogicSense1 commented on May 17, 2024

Could you provide a script for generating the corresponding tfrecord files from the AudioSet dataset?

a-nagrani avatar a-nagrani commented on May 17, 2024

Hi, sorry, but we can't release our data processing scripts. However, you can follow the instructions here: https://github.com/deepmind/dmvr/tree/master/examples to create tfrecord files in the correct DMVR format.
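
For anyone building their own pipeline: DMVR reads tf.train.SequenceExample records from tfrecord shards, so a generation script boils down to filling in the schema those examples expect. Below is a hedged sketch; the feature key names ("image/encoded", "melspec/feature/floats", "clip/label/index") and the shard naming are illustrative guesses, so verify them against the dmvr examples linked above before relying on them.

```python
# Hedged sketch of writing a DMVR-style tf.train.SequenceExample.
# Feature key names are illustrative; check the dmvr examples for the
# exact schema the MBT configs expect.
import tensorflow as tf

def make_sequence_example(jpeg_frames, melspec_frames, label_indices):
    """jpeg_frames: JPEG-encoded bytes per video frame; melspec_frames:
    one list of floats per spectrogram frame; label_indices: class ids."""
    context = tf.train.Features(feature={
        "clip/label/index": tf.train.Feature(
            int64_list=tf.train.Int64List(value=label_indices)),
    })
    feature_lists = tf.train.FeatureLists(feature_list={
        "image/encoded": tf.train.FeatureList(feature=[
            tf.train.Feature(bytes_list=tf.train.BytesList(value=[f]))
            for f in jpeg_frames]),
        "melspec/feature/floats": tf.train.FeatureList(feature=[
            tf.train.Feature(float_list=tf.train.FloatList(value=m))
            for m in melspec_frames]),
    })
    return tf.train.SequenceExample(context=context,
                                    feature_lists=feature_lists)

# Write one shard of a sharded table (shard naming is an assumption):
with tf.io.TFRecordWriter(
        "balanced_train.se.melspec.tfrecord.sst-00000-of-01024") as writer:
    example = make_sequence_example(
        jpeg_frames=[b"<jpeg bytes>"],     # placeholder frame
        melspec_frames=[[0.0] * 128],      # placeholder spectrogram frame
        label_indices=[0])
    writer.write(example.SerializeToString())
```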

yangjiangeyjg avatar yangjiangeyjg commented on May 17, 2024

(quoting @uck16m1997's data-preparation questions above)

Hi, have you resolved these issues?

yangjiangeyjg avatar yangjiangeyjg commented on May 17, 2024

(quoting @jayathungek's reproduction questions above)

Hi, have you resolved these issues?

yangjiangeyjg avatar yangjiangeyjg commented on May 17, 2024

(quoting @LogicSense1's request for a tfrecord-generation script above)

Hi, have you resolved these issues?

yangjiangeyjg avatar yangjiangeyjg commented on May 17, 2024

Hi, have you resolved these issues?

yangjiangeyjg avatar yangjiangeyjg commented on May 17, 2024

(quoting @a-nagrani's reply above)

Hi, could you share the processed data instead?

BDHU avatar BDHU commented on May 17, 2024

@wentaozhu Following up on this, any updates?

a-nagrani avatar a-nagrani commented on May 17, 2024
