Comments (11)
So your batch size is 12 * 8? I only use batch size 12 with 1 GPU, which is consistent with the paper.
Does BATCH_SIZE in the config file refer to the total batch size or the batch size on each GPU? My understanding is that it is the batch size per GPU.
You are right, it is the batch size on each GPU; I updated my reply. So you only need 1 GPU to make the total batch size 12.
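The per-GPU vs. total batch-size arithmetic being clarified here can be sketched as follows. This is a minimal illustration of data-parallel training semantics in general, not SegmenTron's actual code; the names `effective_batch_size`, `cfg_batch_size`, and `num_gpus` are made up for the example.

```python
# Under data-parallel training, each GPU's DataLoader draws the configured
# per-GPU batch per step, so one optimizer step sees per_gpu * num_gpus samples.

def effective_batch_size(cfg_batch_size: int, num_gpus: int) -> int:
    """Total samples per optimizer step given a per-GPU batch size."""
    return cfg_batch_size * num_gpus

# The paper's setting: batch size 12 on a single GPU.
assert effective_batch_size(12, 1) == 12
# The same total with 4 GPUs requires only 3 per GPU.
assert effective_batch_size(3, 4) == 12
```

This is why setting BATCH_SIZE=12 in the config and launching on 4 GPUs silently quadruples the total batch size relative to the paper.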
Thank you for your reply. Were all the models in the configs folder trained with 1 GPU by default?
Training with only one GPU at bs=12 is very slow. If I train with 4 GPUs, does the batch size become 48? Is there anything else I should change to reproduce 68.9 when training with 4 GPUs and bs=48?
You can set batch_size=3 in the config file and use 4 GPUs to keep the total batch size at 12.
No, I only trained 3 or 4 of the models. I think keeping the total batch size at 12~16 will be OK.
I mean that training at bs=12 is so slow; I just want to train with a larger batch size. If I use bs=48 (12 * 4) and change lr to 0.045 x 4, can I reproduce 68.9?
I have not tried those hyperparameters.
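The lr = 0.045 x 4 adjustment proposed above is an instance of the linear scaling rule: when the total batch size grows by a factor k, scale the base learning rate by the same k. A minimal sketch (the function name `scale_lr` is illustrative; as the author says, whether this actually reproduces 68.9 mIoU here is untested):

```python
# Linear scaling rule: lr_new = lr_base * (batch_new / batch_base).
# A common heuristic for large-batch SGD, not a guarantee of equal accuracy.

def scale_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Scale the learning rate proportionally to the total batch size."""
    return base_lr * new_batch / base_batch

scaled = scale_lr(0.045, 12, 48)
assert abs(scaled - 0.18) < 1e-12  # 0.045 x 4
```

In practice, a short learning-rate warmup is often paired with this rule when the batch size increases substantially.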
Hi, I see you got the trained model. Do you know where I can obtain the trained cityscapes_deeplabv3_plus_mobilenet model?
https://github.com/LikeLy-Journey/SegmenTron#real-time-models
https://github.com/LikeLy-Journey/SegmenTron/releases
Related Issues (20)
- PointRend with HardNet
- Bug in tool/demo.py HOT 3
- How to load pointrend weight in demo.py?
- the problem occurred during training
- Multi-GPU Training Error HOT 4
- how to compute fps?
- RuntimeError when resuming training HOT 2
- About some parameter
- About mode
- How to convert 19cls into 34cls according to your mapping?
- module 'yaml' has no attribute 'FullLoader'
- Deeplabv3plus (mobilenetv2) cityscapes with ASPP
- unet+backbone
- What is your default Loss Function?
- What is the default weight mean?
- Error on instalation
- How to reproduce the result of Mobilenet+Deeplabv3
- Training stuck on the first epoch HOT 1
- how to reproduce deeplabv3+ with xception65 achieve 88% in pascal voc?
- Something happened when I try to make configuration about enviornment