Warmest greetings from Haohe. PRs are most welcome on my repos.
"What good is a newborn baby?" -Franklin
AudioLDM: Generate speech, sound effects, music and beyond, with text.
Home Page: https://audioldm.github.io/
License: Other
It seems like the package is missing files, or at least I think so. The command-line interface does not work as-is.
command:
python -m audioldm
error I got:
No module named audioldm.main; 'audioldm' is a package and cannot be directly executed
Everything is installed. When something is missing it does show errors, but when it should run, it doesn't.
(I think I posted this somewhere else too; sorry if it's not related to that place.)
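For context, `python -m pkg` only works when the package ships a `__main__.py`; the error above means the installed audioldm package (at that version) did not include one. A minimal demonstration of the behavior, using a throwaway package rather than audioldm itself:

```python
import os
import subprocess
import sys
import tempfile

# Build a tiny package with no __main__.py and try to run it with -m.
with tempfile.TemporaryDirectory() as d:
    pkg = os.path.join(d, "mypkg")
    os.makedirs(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()

    r = subprocess.run([sys.executable, "-m", "mypkg"], cwd=d,
                       capture_output=True, text=True)
    # Same failure mode as the issue: the package "cannot be directly executed".
    print("cannot be directly executed" in r.stderr)

    # Adding a __main__.py makes `python -m mypkg` work.
    with open(os.path.join(pkg, "__main__.py"), "w") as f:
        f.write('print("running")\n')
    r = subprocess.run([sys.executable, "-m", "mypkg"], cwd=d,
                       capture_output=True, text=True)
    print(r.stdout.strip())  # → running
```

In other words, either the package needs a `__main__.py`, or the CLI should be invoked via its installed console script (`audioldm ...`) instead of `python -m audioldm`.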
Hello, when running app.py I get a large error and I'm not sure what to do.
error:
FutureWarning: Pass size=1024 as keyword args. From version 0.10 passing these as positional arguments will result in an error
fft_window = librosa.util.pad_center(fft_window, n_fft)
C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\TensorShape.cpp:3191.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: ['lm_head.dense.bias', 'lm_head.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias']
Running the audioldm command-line interface on a Windows 10 machine, this error appeared:
import chardet
ModuleNotFoundError: No module named 'chardet'
I was able to resolve the issue by installing the chardet package with `pip3 install chardet`, but I believe the dependency needs to be added to the package requirements: https://stackoverflow.com/questions/51775462/python-3-7-import-requests-returns-chardet-error
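As a quick diagnostic before (or instead of) patching the package metadata, one can check which expected modules are importable in the current environment. A small sketch (the module list is illustrative):

```python
import importlib.util

def missing_modules(names):
    # Return the subset of module names that cannot be imported
    # in this environment (find_spec is None for absent modules).
    return [n for n in names if importlib.util.find_spec(n) is None]

# e.g. missing_modules(["chardet", "requests"]) lists whichever are not installed.
```

Running this at startup gives a clearer error than a `ModuleNotFoundError` deep inside an import chain.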
The automatic model download at program startup is too slow.
/root/.cache/audiol 100%[===================>] 2.38G 69.4MB/s in 4m 15s
2023-03-18 15:21:18 (9.58 MB/s) - ‘/root/.cache/audioldm/audioldm-full-s-v2.ckpt’ saved [2559017383/2559017383]
SameFileError Traceback (most recent call last)
in <module>
216 op(c.warn, 'Downloading', use_ckpt)
217 get_ipython().system('wget {ckpt_url} -O {models_dir}{use_ckpt}')
--> 218 shutil.copy(models_dir+use_ckpt, use_ckpt_path+use_ckpt)
219 op(c.ok, 'Done.')
220
1 frames
/usr/lib/python3.9/shutil.py in copyfile(src, dst, follow_symlinks)
242
243 if _samefile(src, dst):
--> 244 raise SameFileError("{!r} and {!r} are the same file".format(src, dst))
245
246 file_size = 0
SameFileError: '/root/.cache/audioldm/audioldm-full-s-v2.ckpt' and '/root/.cache/audioldm/audioldm-full-s-v2.ckpt' are the same file
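The notebook cell copies the checkpoint onto itself when `models_dir` and `use_ckpt_path` point at the same directory. A guard like the following (a generic sketch, not the notebook's actual code) avoids the `SameFileError`:

```python
import os
import shutil

def safe_copy(src, dst):
    # shutil.copy raises SameFileError when src and dst resolve to the
    # same file; skip the copy in that case instead of crashing.
    if os.path.exists(dst) and os.path.samefile(src, dst):
        return dst
    return shutil.copy(src, dst)
```

With this, re-running the download cell after the checkpoint is already in place becomes a no-op instead of a traceback.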
NameError Traceback (most recent call last)
in <module>
93 if action == 'generate':
94 file_out = dir_out+uniq_id+''+slug(input)[:60]+'_'+str(i).zfill(3)+'.wav'
---> 95 generated_audio = text2audio(input, duration, None, guidance_scale, seed, candidates, ddim_steps)
96 elif action == 'audio2audio':
97 file_out = dir_out+uniq_id+''+basename(init_path)+'_'+str(i).zfill(3)+'.wav'
NameError: name 'text2audio' is not defined
Hello, I was reading the paper and noticed a "superior" version of the model, AudioLDM-L.
Are the weights of this version going to be released?
Also, I registered the "audioldm" org on Hugging Face, so just let me know if you want it and I can transfer it to you.
Hello,
I have been experimenting with >10-second generation via infilling: 50% past audio (5 seconds) and 50% blank audio (5 seconds). What I saw so far was:
Is there a way to improve this task, extending music generation by infilling?
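This is not AudioLDM's actual API, but the core idea of extension-by-infilling is a time mask over the latent: keep the known half, let the model fill the blank half. A sketch of that masking step (`infill_mask` and the 50/50 split are illustrative assumptions):

```python
import numpy as np

def infill_mask(total_frames: int, known_fraction: float = 0.5) -> np.ndarray:
    # Boolean mask over time frames: True = keep the existing audio,
    # False = blank region the diffusion model should fill in.
    mask = np.zeros(total_frames, dtype=bool)
    mask[: int(total_frames * known_fraction)] = True
    return mask
```

Sliding this window forward (generate 5 s, keep the last 5 s as context, repeat) is the usual way to chain infilling into arbitrarily long continuations.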
The Hugging Face web demo is currently failing to output the result as a video file. Are there any plans to add a Google Colab notebook to the repository?
$ python3 app.py
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Load AudioLDM: %s audioldm-s-full
DiffusionWrapper has 185.04 M params.
/home/teamy/miniconda3/envs/audioldm/lib/python3.8/site-packages/torchlibrosa/stft.py:193: FutureWarning: Pass size=1024 as keyword args. From version 0.10 passing these as positional arguments will result in an error
fft_window = librosa.util.pad_center(fft_window, n_fft)
/home/teamy/miniconda3/envs/audioldm/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: ['lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.bias', 'lm_head.layer_norm.weight', 'lm_head.dense.bias', 'lm_head.dense.weight']
- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Generate audio using text A hammer is hitting a wooden surface
Could not load library libcudnn_cnn_infer.so.8. Error: libcuda.so: cannot open shared object file: No such file or directory
Aborted
Running under WSL (Ubuntu 22.04.2 LTS).
I can just restart everything by interrupting and re-running app.py, so it's workable, but it's not great: the model occupies 5.4 GB, and double that exceeds 8 GB.
Interrupting the cycle of course resets it. Maybe add an automatic memory purge?
Running locally with 8 GB RAM (7.8 GB max available); tested both the large and small models.
E.g.: 3 minutes.
On Apple Silicon Macs, cuda() isn't working. I tried replacing torch.device("cuda") in ddim.py, but there are more errors depending on what you're doing.
Is there a way you could provide a pip3 package that accounts for M1 Macs and just uses the CPU? I tried the audio transfer with torch.device("cpu") and it does work.
Thank you!
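Until the package handles this itself, a portable device-selection helper avoids hard-coding "cuda". This is a sketch (the real code would pass the result to `torch.device`); the helper name is mine:

```python
def pick_device(cuda_available: bool, mps_available: bool = False) -> str:
    # Preference order: CUDA GPU, Apple-silicon Metal backend (MPS), CPU fallback.
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# Assumed usage with torch:
#   device = torch.device(pick_device(torch.cuda.is_available(),
#                                     torch.backends.mps.is_available()))
```

Note that MPS support in PyTorch is newer than CUDA support, so some ops may still fall back to (or require) the CPU.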
Hey all, I'm trying to run app.py following the README directions and am getting the following error on Ubuntu 23.04:
DiffusionWrapper has 185.04 M params.
/home/jerrick/anaconda3/envs/audioldm/lib/python3.8/site-packages/torchlibrosa/stft.py:193: FutureWarning: Pass size=1024 as keyword args. From version 0.10 passing these as positional arguments will result in an error
fft_window = librosa.util.pad_center(fft_window, n_fft)
/home/jerrick/anaconda3/envs/audioldm/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: ['lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.decoder.weight']
I'm an enthusiast trying to see what is possible with this model. I can generate 30-second pieces on my RTX 3060 just fine, but when I try 60 seconds I get this:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 6.75 GiB (GPU 0; 12.00 GiB total capacity; 10.34 GiB already allocated; 0 bytes free; 10.69 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Is it possible to generate longer audio without buying a new GPU?
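As the error message itself suggests, one mitigation that costs no hardware is tuning PyTorch's CUDA caching allocator before the first allocation. This only reduces fragmentation-related failures; a 60-second generation that genuinely needs more than 12 GiB will still not fit. A sketch:

```python
import os

# Must run before the first CUDA tensor is allocated (e.g. at the top of app.py).
# max_split_size_mb caps the size of cached blocks the allocator will split,
# which reduces fragmentation-related OOMs at some throughput cost.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")
```

The other practical route is splitting long outputs into shorter windows and stitching them (e.g. via the infilling approach discussed in another issue), since memory use grows with duration.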
when? soon? later? even later?
Following the instructions for running app.py through the terminal, after I run "python3 app.py" I get:
Downloading the main structure of audioldm
And then it gets stuck. I tried running "rm ~/.cache/audioldm/audioldm-s-full.ckpt" and using pip and python instead of pip3 and python3 just in case, but it didn't help.
Dear authors,
I'm writing to inquire about the availability of the AudioLDM training code in the GitHub repository.
I couldn't find the code there, and I'm wondering if there are any plans to provide it.
Thank you very much for releasing your code.
When attempting to run app.py, an error is returned.
return _VF.stft(input, n_fft, hop_length, win_length, window,  # type: ignore[attr-defined]
RuntimeError: cuFFT error: CUFFT_INTERNAL_ERROR
When running the audio-to-audio script I get this error.
full log:
Traceback (most recent call last):
File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\runpy.py", line 187, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\runpy.py", line 146, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\runpy.py", line 110, in _get_module_details
__import__(pkg_name)
File "E:\AudioLDM\AudioLDM\audioldm\__init__.py", line 3, in <module>
from .pipeline import *
File "E:\AudioLDM\AudioLDM\audioldm\pipeline.py", line 11, in <module>
from audioldm.audio import wav_to_fbank, TacotronSTFT, read_wav_file
File "E:\AudioLDM\AudioLDM\audioldm\audio\__init__.py", line 1, in <module>
from .tools import wav_to_fbank, read_wav_file
File "E:\AudioLDM\AudioLDM\audioldm\audio\tools.py", line 3, in <module>
import torchaudio
File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\site-packages\torchaudio\__init__.py", line 1, in <module>
from torchaudio import _extension # noqa: F401
File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\site-packages\torchaudio\_extension.py", line 67, in <module>
_init_extension()
File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\site-packages\torchaudio\_extension.py", line 61, in _init_extension
_load_lib("libtorchaudio")
File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\site-packages\torchaudio\_extension.py", line 51, in _load_lib
torch.ops.load_library(path)
File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\site-packages\torch\_ops.py", line 573, in load_library
ctypes.CDLL(path)
File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\ctypes\__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 127] The specified procedure could not be found
(audioldm) E:\AudioLDM\AudioLDM>audioldm --file_path E:\AudioLDM\AudioLDM\inputs\escapism.mp3
(The traceback is identical to the one above, again ending in:)
OSError: [WinError 127] The specified procedure could not be found
Sometimes a user might want to use a different directory for storing model weights. It would be nice to be able to set this directory.
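A common pattern for this is an environment-variable override of the default cache path. The variable name and helper below are hypothetical, just to illustrate the shape of the change:

```python
import os

def model_cache_dir(default: str = "~/.cache/audioldm") -> str:
    # Hypothetical: AUDIOLDM_CACHE_DIR (if set) overrides the default
    # weights directory; otherwise fall back to ~/.cache/audioldm.
    return os.environ.get("AUDIOLDM_CACHE_DIR", os.path.expanduser(default))
```

Callers would then build checkpoint paths with `os.path.join(model_cache_dir(), ckpt_name)` instead of a hard-coded path.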
I did everything according to the instructions, but I still ran into this problem.
My system is Windows 11. I then ran the command:
python3 scripts/text2sound.py -t "A hammer is hitting a wooden surface"
the error i got:
Traceback (most recent call last):
File "path-to-directory\scripts\text2sound.py", line 2, in
from audioldm import text_to_audio, build_model, save_wave
ModuleNotFoundError: No module named 'audioldm'
Input: "Any text prompt"
Duration: I tried 40, 45, 70
Result: AudioLDM gives me, for example, a 40-second audio file without any sound.
How can I solve this?
When I generate 10 seconds it seems to work. Why is that?
Could anyone provide me with instructions to install and run the CLI version of AudioLDM with Metal support?
(I believe this needs a different, Metal-enabled version of PyTorch?)
https://developer.apple.com/metal/pytorch/
Thanks so much!
Running
audioldm -t "A hammer is hitting a wooden surface"
in terminal leads me to
RuntimeError: PytorchStreamReader failed reading file data/11: invalid header or archive is corrupted
Installed through `pip install audioldm` on a Mac (macOS Monterey 12.5.1), with Python 3.10.
I am on a 2019 mac pro (AMD) and am getting the error:
Torch not compiled with CUDA enabled
Is there a way to make AudioLDM work on my machine?
Related question:
I am getting the same error on my M1 MacBook Air. Can this be used on the MacBook?
I think PyTorch just added Metal support, but I am not sure if this helps here?
Thank you!
Would love to see code to reproduce the paper's super resolution
Hey, I just created a Discord server where we can connect to discuss our research, sounds, and ideas using AudioLDM.
If the admins approve, maybe let's add it to the project description.
I can lead the server and assign moderators if needed.
I followed the instructions but kept getting complaints that "Torch not compiled with CUDA enabled".
I tried switching Torch to the CUDA variants, but that only resulted in cascading complaints about missing DLLs.
Has this repository been tested in a Windows environment? (Or am I on a fool's errand?)
return func(*args, **kwargs)
File "G:\Audio\audioldm-text-to-audio-generation\audioldm\latent_diffusion\ddim.py", line 127, in sample
self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
File "G:\Audio\audioldm-text-to-audio-generation\audioldm\latent_diffusion\ddim.py", line 43, in make_schedule
self.register_buffer("betas", to_torch(self.model.betas))
File "G:\Audio\audioldm-text-to-audio-generation\audioldm\latent_diffusion\ddim.py", line 25, in register_buffer
attr = attr.to(torch.device("cuda"))
File "C:\Users\RandomName\anaconda3\envs\audioldm\lib\site-packages\torch\cuda\__init__.py", line 221, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Hi! Thanks for this, this is amazing to play around with :)
I'm trying to use it for a film to transform music via an algorithm to create a transition from acoustic to artificial.
When I use --duration 20 or more, the result is pure noise.
I'm running it locally on an M1 MacBook Pro.
I've also manually changed it to use the CPU, since it otherwise won't run on this machine, I think.
Anyone got an idea what to do?
Thank you!
Could you share the settings of the mel-spectrogram?
Found in your paper, thanks!
Hi,
Great work! I noticed that AudioLDM-L was pre-trained for 0.6M steps on the Audiocaps dataset. You used a batch size of 5/8 and a single RTX 3090. Thus I am wondering how long it took to train the model.
Thanks
Could you please provide the training scripts? Also, what is the training order of the several models involved?
Hi.
Played a bit with the model (both -s models fit into 8GB VRAM, very nice) and tried my luck with SD-derived prompting skills.
Very impressive. Thank you very much for giving it to us.
I've got a weird request, I guess, or I'm asking for guidance on how to accomplish it.
I noticed that transfer with transfer strength 0 is okay-ish close to the original, despite it being decomposed to the latent space and then assembled back.
Is it possible to make, or where should I start digging on my own to make, a mode that will attempt to seamlessly loop an existing SFX snippet?
Making a seamless loop is usually manual work; I'm aware only of the nvk_LOOPMAKER plugin for Reaper, which semi-automates the process.
Basically, I just want to select a bunch of sounds from a soundbank (all licensed for commercial use, obviously), run a batch file, go drink some tea, and return to processed loops that are close to the original and loop seamlessly.
Maybe, hopefully, there's a magic prompt that will do it for me already? Fat chance, but you never know.
Thank you.
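AudioLDM itself has no loop mode, but the post-processing half of this batch pipeline can be automated. A rough numpy sketch of a linear crossfade loop (assumes mono float audio; real SFX work would likely want an equal-power fade instead):

```python
import numpy as np

def crossfade_loop(audio: np.ndarray, fade_len: int) -> np.ndarray:
    # Blend the clip's tail into its head so end-to-start playback is seamless.
    # The output is fade_len samples shorter than the input, and its last
    # sample is immediately followed (on loop) by the sample after the tail.
    fade_in = np.linspace(0.0, 1.0, fade_len)
    head = audio[:fade_len] * fade_in + audio[-fade_len:] * (1.0 - fade_in)
    return np.concatenate([head, audio[fade_len:-fade_len]])
```

Combined with style transfer at low transfer strength, this could plausibly turn a folder of one-shots into near-original seamless loops in one batch run.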
Hi authors,
Wonderful work! Could you please share the scripts to train the autoencoder (VAE)? Thanks a lot.
I know upscaling is going to be released on Friday (which is OMG), but is it possible to retrain the model at 44.1 kHz? Is it feasible? Would a typical GPU handle that? I think having richer depth even in generation might give much clearer and crisper results.
A quick question. What scale factor did you use in the training?
Would it be possible to fine-tune the model with a folder of my own sounds, e.g. kick samples?
Could the authors kindly share the training code and dataset?
Please, I need this.
I think I've heard some examples in stereo?
Is this possible using the CLI version?
I'm running into this error when trying to run `audioldm -t "A hammer is hitting a wooden surface"`.
My setup is Windows, Python 3.9, creating a virtual environment and installing with `pip install audioldm`.
I had to uninstall torch and reinstall using `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117` after getting the error "AssertionError: Torch not compiled with CUDA enabled".
I tried clearing the CUDA cache using a script (after googling this issue) with no luck.
I don't really know where to go from here (this is a new world to me).
Here's the full error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 6.00 GiB total capacity; 5.35 GiB already allocated; 0 bytes free; 5.39 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Any help appreciated!
And also amazing work, AudioLDM is insanely cool.
I managed to install the web app from Hugging Face, but I want the terminal version for more options. I followed the instructions, but in the end it said that audioldm is a package, not a module. Any help?
Hello mates, first of all thanks a lot for this awesome tool. I was playing with it and the results are awesome.
Is there any way for me to train my own model on my own dataset? I read that this was inspired by Stable Diffusion, so I think it should be possible to train on my own dataset and make different models.
Traceback (most recent call last):
File "X:\AudioLDM-Neu\AudioLDM-Neu-venv\lib\runpy.py", line 185, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "X:\AudioLDM-Neu\AudioLDM-Neu-venv\lib\runpy.py", line 111, in _get_module_details
__import__(pkg_name)
File "X:\AudioLDM-Neu\app.py", line 9, in
audioldm = build_model()
File "X:\AudioLDM-Neu\audioldm\pipeline.py", line 64, in build_model
checkpoint = torch.load(resume_from_checkpoint, map_location=device)
File "X:\AudioLDM-Neu\AudioLDM-Neu-venv\lib\site-packages\torch\serialization.py", line 777, in load
with _open_zipfile_reader(opened_file) as opened_zipfile:
File "X:\AudioLDM-Neu\AudioLDM-Neu-venv\lib\site-packages\torch\serialization.py", line 282, in __init__
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
I'd appreciate any help. I have installed PyTorch with CUDA and have a checkpoint in the ckpt folder; removing the checkpoint didn't change anything.
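"failed finding central directory" means the .ckpt file on disk is not a complete zip archive, which is almost always an interrupted or truncated download. Since `torch.save`'s default serializer produces a real zip file, a cheap sanity check before loading (then delete and re-download on failure) is:

```python
import zipfile

def ckpt_looks_complete(path: str) -> bool:
    # PyTorch checkpoints saved with the default (zipfile) serializer are
    # genuine zip archives; a truncated download fails this check with the
    # same "central directory" problem PytorchStreamReader reports.
    return zipfile.is_zipfile(path)
```

This catches the common case; a checkpoint that is a valid zip but has corrupted tensors inside would still need the full `torch.load` to detect.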