
Learning Music Audio Representations Via Weak Language Supervision

Ilaria Manco1 2, Emmanouil Benetos1, Elio Quinton2, Gyorgy Fazekas1
1 Queen Mary University of London, 2 Universal Music Group


This repository is the official implementation of "Learning Music Audio Representations Via Weak Language Supervision" (ICASSP 2022).

In this work, we introduce MuLaP, a framework for music-and-language pre-training to learn general-purpose music audio representations. MuLaP allows an audio backbone to learn from weakly aligned natural language descriptions of the audio content via a multimodal co-attention Transformer module. This audio-linguistic pre-training endows the model with good transfer learning capabilities, resulting in representations that are useful for a variety of music classification and regression downstream tasks.
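
To make the architecture idea concrete, below is a minimal sketch of a bidirectional co-attention layer in PyTorch (an illustration only, not the actual MuLaP implementation; the dimensions, layer choices and names are placeholders). Audio tokens attend over caption tokens and caption tokens attend over audio tokens, so each stream is contextualised by the other modality.

import torch
import torch.nn as nn

class CoAttentionLayer(nn.Module):
    """Illustrative co-attention block (not the repo's code)."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Cross-attention in both directions: audio queries over text, text queries over audio.
        self.audio_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.text_to_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_audio = nn.LayerNorm(dim)
        self.norm_text = nn.LayerNorm(dim)

    def forward(self, audio: torch.Tensor, text: torch.Tensor):
        # audio: (batch, n_audio_frames, dim), text: (batch, n_caption_tokens, dim)
        attended_audio, _ = self.audio_to_text(query=audio, key=text, value=text)
        attended_text, _ = self.text_to_audio(query=text, key=audio, value=audio)
        audio = self.norm_audio(audio + attended_audio)
        text = self.norm_text(text + attended_text)
        return audio, text

# Toy usage with random features standing in for backbone and text-encoder outputs.
layer = CoAttentionLayer()
audio_feats = torch.randn(2, 128, 256)   # e.g. frame-level audio embeddings
text_feats = torch.randn(2, 20, 256)     # e.g. caption token embeddings
audio_out, text_out = layer(audio_feats, text_feats)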

We provide code for pre-training, downstream training and evaluation of MuLaP on 4 tasks: music auto-tagging, genre classification, instrument recognition and emotion recognition.

Installation

Clone the repository and install the dependencies. We recommend using a fresh virtual environment.

git clone https://github.com/ilaria-manco/mulap
cd mulap 
pip install -r requirements.txt
pip install -e .

Preparing the dataset

MuLaP is pre-trained on a multimodal dataset of (audio, text) pairs.

Annotations should be provided in JSON format and must include the following fields:

audio_id: the unique identifier for each audio track in the dataset

caption: a string with the textual description of the audio track

audio_path: path to the audio track, relative to the root audio directory

One JSON file per split must be provided and stored in the data/datasets directory, following this structure:

dataset_name
├── audio            
│   ├── track_1.npy
│   ├── track_2.npy
│   └── ...
├── dataset_train.json    
├── dataset_val.json    
└── dataset_test.json

An illustrative example of the dataset is provided in data/datasets/audiocaption/.
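
As a quick reference, the snippet below builds a minimal split file in Python. It is a sketch based on the field list above: it assumes each split file holds a JSON list of such records (check data/datasets/audiocaption/ for the exact layout), and the caption text is invented for illustration.

import json

annotations = [
    {
        "audio_id": "track_1",              # unique identifier for the track
        "caption": "A slow acoustic guitar ballad with soft vocals.",
        "audio_path": "audio/track_1.npy",  # relative to the root audio directory
    },
]

with open("dataset_train.json", "w") as f:
    json.dump(annotations, f, indent=2)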

Pre-training MuLaP

Dataset, model and training configurations are set in the respective yaml files in configs. You can also pass some options via the CLI, overwriting the arguments in the config files. For more details on the CLI options, please refer to the training script.

To pre-train the model with the default configs, simply run

cd mulap/scripts/
python pretrain.py 

This will generate a pretrain_id and create a new folder in save/experiments/ where the output will be saved.

If you wish to resume pre-training from a saved checkpoint, run this command:

python pretrain.py --experiment_id <pretrain_id> 

Transferring MuLaP to downstream tasks

After pre-training, you can train a classifier on top of the audio backbone for one of the supported downstream tasks by running

cd mulap/scripts/
python downstream.py <pretrain_id> <downstream_task>

The downstream tasks supported are music auto-tagging, genre classification, instrument recognition and emotion recognition.

You'll need to download the corresponding datasets inside the datasets/ folder and preprocess them before running downstream training. Dataset, model and training configurations for each task are set in the respective yaml files in configs/downstream.
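
Conceptually, downstream training amounts to fitting a task-specific classifier on top of the pre-trained audio backbone. The sketch below illustrates this with a frozen backbone and a linear probe (a simplified stand-in, not the repo's downstream.py; the backbone, feature dimension and number of classes are placeholders).

import torch
import torch.nn as nn

class DownstreamProbe(nn.Module):
    def __init__(self, backbone: nn.Module, feature_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # keep the pre-trained weights frozen
            p.requires_grad = False
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.backbone(audio)       # (batch, feature_dim) embeddings
        return self.classifier(feats)          # task logits

# Toy usage with a stand-in backbone mapping raw audio to 256-d embeddings.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(16000, 256))
probe = DownstreamProbe(backbone, feature_dim=256, num_classes=10)
logits = probe(torch.randn(4, 16000))          # a batch of 4 one-second clips at 16 kHz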

Evaluating downstream performance

After downstream training, you can run the evaluation as follows:

cd mulap/scripts/
python eval.py <pretrain_id> <downstream_id> 

Cite

If you use the code in this repo, please consider citing our work:

@inproceedings{manco2022learning,
  title={Learning Music Audio Representations Via Weak Language Supervision}, 
  author={Manco, Ilaria and Benetos, Emmanouil and Quinton, Elio and Fazekas, György},
  booktitle={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, 
  year={2022},
  pages={456-460},
  doi={10.1109/ICASSP43922.2022.9746996}
}

License

This repository is released under the GNU General Public License v3.0. Please see the LICENSE file for more details.

Some of the code in this repository is adapted from other open-source projects.

Contact

If you have any questions, please get in touch: [email protected].
