Comments (2)
Not sure if it works, but the simplest approach is described in yl4579/StarGANv2-VC#4.
from auxiliaryasr.
Got multi-GPU training (2x NVIDIA T4) working with the Colossal-AI library (https://colossalai.org/). I'm not sure it's doing much, but throughput is a bit faster: it went from roughly 1-1.35 it/s to 1.5-2 it/s. Does that seem right? Just curious. Edit: it probably didn't do much.
import logging
import os
import os.path as osp
import shutil
from logging import StreamHandler

import click
import colossalai
import torch
import yaml

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
handler = StreamHandler()
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)

torch.backends.cudnn.benchmark = True

@click.command()
@click.option('-p', '--config_path', default='Configs/config.yml', type=str)
def main(config_path):
    # set up the distributed environment from the env vars torchrun provides
    colossalai.launch_from_torch(config="Configs/config.py")
    config = yaml.safe_load(open(config_path))
    log_dir = config['log_dir']
    # exist_ok avoids a race when both ranks try to create the directory
    os.makedirs(log_dir, exist_ok=True)
    shutil.copy(config_path, osp.join(log_dir, osp.basename(config_path)))
    ...
config.py:
from colossalai.amp import AMP_TYPE
fp16 = dict(
    mode=AMP_TYPE.TORCH
    # below are the default values for the grad scaler
)
parallel = dict(
    tensor=dict(size=2, mode='1d')
)
gradient_accumulation = 4
clip_grad_norm = 1.0
rank = 0
world_size = 1
host = "localhost"
port = 29500
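One thing worth noting about this config: gradient_accumulation = 4 multiplies the effective optimizer-step batch size, while tensor parallelism does not (the two T4s split the model, not the data; the launch log below shows data parallel size 1). A quick sanity check, where the per-step batch size of 32 is a hypothetical stand-in for whatever the yaml config actually sets:

```python
def effective_batch_size(per_step_batch: int, grad_accum: int, data_parallel: int) -> int:
    """Samples contributing to one optimizer step:
    per-step batch x accumulation steps x data-parallel replicas."""
    return per_step_batch * grad_accum * data_parallel

# hypothetical batch_size=32; gradient_accumulation=4 from the config above,
# data parallel size 1 as reported at launch
print(effective_batch_size(32, 4, 1))  # 128
```

So under this setup the gradients seen by the optimizer correspond to 4x the per-iteration batch, regardless of the two-GPU tensor split.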
Logs:
/bin/bash: /opt/conda/lib/libtinfo.so.6: no version information available (required by /bin/bash)
[02/22/23 04:27:13] INFO colossalai - colossalai - INFO: /opt/conda/lib/python3.7/site-packages/colossalai/context/parallel_context.py:521 set_device
[02/22/23 04:27:13] INFO colossalai - colossalai - INFO: /opt/conda/lib/python3.7/site-packages/colossalai/context/parallel_context.py:521 set_device
INFO colossalai - colossalai - INFO: process rank 1 is bound to device 1
INFO colossalai - colossalai - INFO: process rank 0 is bound to device 0
[02/22/23 04:27:16] INFO colossalai - colossalai - INFO: /opt/conda/lib/python3.7/site-packages/colossalai/context/parallel_context.py:557 set_seed
[02/22/23 04:27:16] INFO colossalai - colossalai - INFO: /opt/conda/lib/python3.7/site-packages/colossalai/context/parallel_context.py:557 set_seed
INFO colossalai - colossalai - INFO: initialized seed on rank 1, numpy: 1024, python random: 1024, ParallelMode.DATA: 1024, ParallelMode.TENSOR: 1025, the default parallel seed is ParallelMode.DATA.
INFO colossalai - colossalai - INFO: initialized seed on rank 0, numpy: 1024, python random: 1024, ParallelMode.DATA: 1024, ParallelMode.TENSOR: 1024, the default parallel seed is ParallelMode.DATA.
INFO colossalai - colossalai - INFO: /opt/conda/lib/python3.7/site-packages/colossalai/initialize.py:120 launch
INFO colossalai - colossalai - INFO: Distributed environment is initialized, data parallel size: 1, pipeline parallel size: 1, tensor parallel size: 2
/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
cpuset_checked))
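The DataLoader warning above means the configured worker count (8) exceeds the 2 CPUs visible inside the container. A small sketch of clamping the value before building the loader (the value 8 is taken from the warning, not from the repo's config):

```python
import os

def capped_num_workers(requested: int) -> int:
    """Clamp a requested DataLoader worker count to the CPUs actually available."""
    try:
        # sched_getaffinity respects container cpusets on Linux
        available = len(os.sched_getaffinity(0))
    except AttributeError:
        # fall back on platforms without sched_getaffinity (e.g. macOS)
        available = os.cpu_count() or 1
    return max(1, min(requested, available))

# with only 2 CPUs visible, a requested 8 workers would be clamped to 2
workers = capped_num_workers(8)
```

Passing the clamped value as `num_workers` should silence the warning and avoid worker-related stalls on small Kaggle/Colab machines.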
{'max_lr': 0.0001, 'pct_start': 0.0, 'epochs': 200, 'steps_per_epoch': 15502}
{'max_lr': 0.0001, 'pct_start': 0.0, 'epochs': 200, 'steps_per_epoch': 15502}
[train]: 0%| | 0/15502 [00:00<?, ?it/s]
/kaggle/AuxiliaryASR/AuxiliaryASR/trainer.py:158: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
mel_input_length = mel_input_length // (2 ** self.model.n_down)
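The `__floordiv__` deprecation warning above is benign here, since mel lengths are non-negative, but the suggested fix is to make the rounding explicit. A minimal sketch of the difference using plain Python integers rather than tensors:

```python
import math

def trunc_div(a: int, b: int) -> int:
    # what Tensor.__floordiv__ did: round toward zero,
    # equivalent to torch.div(a, b, rounding_mode='trunc')
    return math.trunc(a / b)

def floor_div(a: int, b: int) -> int:
    # true floor division, equivalent to torch.div(a, b, rounding_mode='floor')
    return a // b

# identical for non-negative values, e.g. mel lengths divided by 2 ** n_down
assert trunc_div(100, 4) == floor_div(100, 4) == 25

# they differ only for negatives, which is why the trainer's usage is safe either way
print(trunc_div(-7, 2), floor_div(-7, 2))  # -3 -4
```

So replacing the line in trainer.py with `torch.div(mel_input_length, 2 ** self.model.n_down, rounding_mode='floor')` keeps the current behavior for these lengths and silences the warning.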
[train]: 0%| | 1/15502 [00:13<56:25:46, 13.11s/it]
[train]: 0%| | 4/15502 [00:18<14:02:20, 3.26s/it]
[train]: 0%| | 4/15502 [00:18<14:09:01, 3.29s/it]
[train]: 0%| | 5/15502 [00:20<12:39:14, 2.94s/it]
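As for whether the numbers "seem right": taking the midpoints of the reported it/s ranges gives roughly a 1.5x speedup, well short of the ~2x an ideal two-GPU split would deliver, which matches the "probably didn't do much" edit. The arithmetic, with both ranges taken from the comment above:

```python
def midpoint(lo: float, hi: float) -> float:
    return (lo + hi) / 2

before = midpoint(1.0, 1.35)  # reported single-process range, it/s
after = midpoint(1.5, 2.0)    # reported range with Colossal-AI on 2 GPUs, it/s
speedup = after / before
print(f"{speedup:.2f}x")  # 1.49x
```

Tensor parallelism adds per-step communication between the GPUs, so some gap below 2x is expected; on a dataloader-bound Kaggle instance the gain can shrink further.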