Comments (7)
I am training now; I will report my results later.
from dexined.
Question 8. Line 49 in train.py reads: if self.args.dataset_name.lower()!='biped'. However, it seems there is no parameter named dataset_name.
Hi @nywang2019, you are right: dataset_name is no longer used; train_dataset is used instead.
Thank you for sharing your wonderful work. I tested it on my images and the results look good. I am interested in this work. Here are some questions about training:
Question 1. Following your instructions, I downloaded the BIPED data and ran the data augmentation, and now the dataset is very large, nearly 10 GB. My question is: the images in the augmented dataset have different sizes, so how should I set the image size in run_model.py? Still 1280x720?
In the data loader, before feeding DexiNed we resize the images to 400x400. In the augmented data, image sizes range from 400 up to 1280 (for the original images); a more efficient approach may be random cropping. Right now we are preparing a new version of DexiNed based on the basic nn.conv2d, maybe by the end of this week, and we may improve the data_loader as well.
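The random-crop idea mentioned above can be sketched in a few lines of NumPy. The 400x400 crop size follows the resize target mentioned here; the function name and everything else are assumptions for illustration, not the repository's actual loader:

```python
import numpy as np

def random_crop_pair(image, edge_map, crop_size=400):
    """Take the same random crop from an image and its edge map.
    Assumes both arrays share height/width and are at least
    crop_size pixels in each dimension."""
    h, w = image.shape[:2]
    top = np.random.randint(0, h - crop_size + 1)
    left = np.random.randint(0, w - crop_size + 1)
    return (image[top:top + crop_size, left:left + crop_size],
            edge_map[top:top + crop_size, left:left + crop_size])

# Example: a 720x1280 RGB image and its edge map crop down to 400x400.
img = np.zeros((720, 1280, 3), dtype=np.uint8)
gt = np.zeros((720, 1280), dtype=np.uint8)
img_c, gt_c = random_crop_pair(img, gt)
```

Because the same offsets are applied to both arrays, the image and its ground truth stay aligned after the crop.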
Question 2. In test_rgb.lst and train_rgb.lst, both columns in each row have the same base name, such as in test_rgb.lst:
rgbr/RGB_008.jpg rgbr/RGB_008.png
rgbr/RGB_010.jpg rgbr/RGB_010.png
rgbr/RGB_017.jpg rgbr/RGB_017.png
rgbr/RGB_025.jpg rgbr/RGB_025.png
and in train_rgb.lst, such as:
rgbr/aug/p1/RGB_001.jpg rgbr/aug/p1/RGB_001.png
rgbr/aug/p1/RGB_002.jpg rgbr/aug/p1/RGB_002.png
rgbr/aug/p1/RGB_003.jpg rgbr/aug/p1/RGB_003.png
My question is: why is there no edge image in each line?
Well, that comes from the early days :). Using a different name for the image and for the ground truth would be a wise idea; I will improve this part.
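For illustration, the two-column .lst format shown above (image path, then ground-truth edge-map path, which shares the base name but uses a .png extension) can be parsed with a short Python helper; the function name and the base_dir argument are hypothetical:

```python
import os

def read_pair_list(lst_path, base_dir="."):
    """Parse a BIPED-style .lst file: each non-empty line holds an
    image path and its ground-truth edge-map path, separated by
    whitespace."""
    pairs = []
    with open(lst_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            img_rel, gt_rel = line.split()
            pairs.append((os.path.join(base_dir, img_rel),
                          os.path.join(base_dir, gt_rel)))
    return pairs
```

For example, read_pair_list("test_rgb.lst", base_dir="./BIPED/edges/imgs/test") would return tuples pairing each RGB image with its edge map.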
Question 3. After augmentation, there are 57600 training images in total. You set max_iterations=150000, which means each image will be seen nearly 3 times. Is that right?
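The arithmetic behind this question, as a quick sketch (the batch size of 1 is an assumption; with larger batches the number of passes scales up accordingly):

```python
# Approximate number of passes over the training set.
num_images = 57600       # augmented BIPED training images
max_iterations = 150000  # training iterations
batch_size = 1           # assumption
epochs = max_iterations * batch_size / num_images
print(round(epochs, 2))  # → 2.6, i.e. roughly 2.6 passes over the data
```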
Question 4. My dataset is located at ./MBIPED/dataset/BIPED, like:
./MBIPED/dataset/BIPED/edges/edge_maps/test
./MBIPED/dataset/BIPED/edges/edge_maps/train
./MBIPED/dataset/BIPED/edges/imgs/test
./MBIPED/dataset/BIPED/edges/imgs/train
./MBIPED/dataset/BIPED/edges/test_rgb.lst
./MBIPED/dataset/BIPED/edges/train_rgb.lst
And then I set the parameter '--dataset_dir', default='./MBIPED/dataset/'. Is that right?
Yes. You can look at line 1011 of dataset_manager.py if you want to improve the data parser.
Question 5. How should I understand and use the following parameters?
'--use_nir', default=False, type=bool
'--use_v1', default=False,type=bool
'--deep_supervision', default=True, type= bool
'--testing_threshold', default=0.0, type=float
Sorry, I should clean up the first two parameters, and perhaps even the one for deep_supervision. If deep_supervision is True, we apply the loss function to all of the outputs; if not, it is applied only to DexiNed-f.
'--testing_threshold', default=0.0, type=float
This means that once DexiNed produces the prediction Y_hat, we do: Y_hat[Y_hat>=0.0]=0.0 (before post-processing).
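A minimal NumPy sketch of such a testing threshold, assuming the common convention that values at or below the threshold are zeroed before post-processing (the function name is hypothetical, and the comparison direction here is an assumption; the reply above writes >=, so check run_model.py for the exact behavior):

```python
import numpy as np

def apply_testing_threshold(y_hat, threshold=0.0):
    # Suppress weak edge responses before post-processing.
    out = y_hat.copy()
    out[out <= threshold] = 0.0
    return out

pred = np.array([-0.2, 0.0, 0.3, 0.8])
result = apply_testing_threshold(pred, threshold=0.0)
# result: non-positive responses zeroed, stronger responses kept.
```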
Question 6. '--train_split', default=0.9, type=float. This splits the dataset into a training set and a validation set, right?
Yes
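What a 0.9 split amounts to can be sketched as follows (the shuffling, the seed, and the function name are assumptions for illustration, not the repository's exact implementation):

```python
import random

def split_train_val(samples, train_split=0.9, seed=42):
    """Shuffle the sample list, then cut it into train/validation parts."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_split)
    return samples[:cut], samples[cut:]

# With the 57600 augmented BIPED images and train_split=0.9:
train, val = split_train_val(range(57600))
print(len(train), len(val))  # → 51840 5760
```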
Question 7. How long will the training take if I use a 1080 Ti GPU?
Well, I have tested on a Titan X 12 GB and it takes around 2 days; it will probably be similar on your GPU.
Cheers,
Many thanks!
I followed your default parameter settings in run_model.py. After data augmentation, I tried to start training, but it did not work. I changed the training and validation batch sizes to 4, and then it worked.
So, here are still some questions:
- I do not need to worry about the sizes of the augmented images, is that right?
- In order to validate less often and speed up training, I set val_interval=300 (default=30). Will this affect the performance of the model?
By the way, please have a look at #23 if you have time.
- I do not need to worry about the sizes of the augmented images, is that right?
Yes, don't worry about that
- In order to validate less often and speed up training, I set val_interval=300 (default=30). Will this affect the performance of the model?
Maybe not, but you could also try reducing the training image size. I don't remember which paper, but some papers say that if you increase the image size during training, performance improves. It would be lovely if you trained with 300 (it is now 400) and let me know how it goes.