Comments (5)
Found the trick to solve this error: with the default parameters in config.json, the model is simply too big for limited GPU memory. To shrink the model, lower the "dilations" parameter in the "model" dictionary from its default value of 9 to something smaller, like 4 or 5. The config.json to modify is the one in the repository root, NOT the one at sessions/001/config.json.
Keep in mind that the provided pretrained model will no longer work for inference after you change the 'dilations' value, so you will first have to train a new model on the given training dataset and then run inference. @wuweijia1994
from speech-denoising-wavenet.
I also hit this ResourceExhaustedError. My setup is a Tesla K80 GPU.
Thank you so much for the reply; the problem is now solved. Now I know how powerful a GPU this guy must have to run 9 dilations.
DillipKS is correct; feel free to re-train a smaller model. Note that the GPU we used for our work was a Titan X Pascal (12 GB VRAM).
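A rough sketch of why the dilation depth drives memory use: in a WaveNet-style stack the dilation doubles each layer (1, 2, 4, ...), so both the layer count and the receptive field grow exponentially with the depth. The formula below is illustrative (it assumes one stack and a standard dilated-convolution receptive-field calculation), not the project's exact architecture.

```python
def receptive_field(dilation_depth, num_stacks=1, filter_length=3):
    """Approximate receptive field (in samples) of a WaveNet-style
    stack whose dilations are 2**0 .. 2**dilation_depth. Illustrative
    only; the real model's layout may differ."""
    # Each dilated conv layer adds (filter_length - 1) * dilation samples.
    per_stack = (filter_length - 1) * sum(2 ** d for d in range(dilation_depth + 1))
    return num_stacks * per_stack + 1

# Dropping the depth from 9 to 5 shrinks the receptive field (and the
# activations held in memory) by more than an order of magnitude:
print(receptive_field(9))  # 2047
print(receptive_field(5))  # 127
```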
@jordipons I am unable to perform inference on a machine with an Nvidia Tesla T4 (16 GB); I am getting an out-of-memory error.
Related Issues (20)
- Nvidia driver version exception HOT 1
- Denoised Speech is silence HOT 3
- What kind of HW is needed to run the "best performing model"? HOT 2
- Denoised audio 0db on NSDTSEA HOT 5
- Meaning of "in_memory_percentage" config parameter HOT 2
- index 2 is out of bounds for axis 0 with size 2
- PESQ and STOI
- requirements doesn't include TF HOT 2
- Readme requirements HOT 7
- TypeError: 'float' object cannot be interpreted as an integer HOT 7
- No optparse in python3
- No version in python3 ? HOT 1
- Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
- cudnn version
- parameter size
- out memory? HOT 5
- Difference Between this and others
- sor with shape[655360,31,1,256] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu HOT 3
- Has someone do inference (denoise an audio) successfully? HOT 2
- Implement the code