Comments (7)

nywang2019 commented on August 16, 2024

i am training now, i will report my result later.


nywang2019 commented on August 16, 2024

Question 8. Line 49 in train.py has if self.args.dataset_name.lower() != 'biped', but it seems there is no parameter named dataset_name.


xavysp commented on August 16, 2024

Question 8. Line 49 in train.py has if self.args.dataset_name.lower() != 'biped', but it seems there is no parameter named dataset_name.

Hi @nywang2019, you are right: dataset_name is no longer used; it was replaced by train_dataset.
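
With that rename, the check on line 49 of train.py would become something like this (a hypothetical one-line patch, assuming the surrounding logic is unchanged):

    # train.py, line 49: the argument was renamed from dataset_name to train_dataset
    if self.args.train_dataset.lower() != 'biped':
        ...  # non-BIPED branch as before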


xavysp commented on August 16, 2024

Thank you for sharing your wonderful work. I tested it on my own images and the results look good. I am interested in this work. Here are some questions about training:
Question 1. Following your instructions, I downloaded the BIPED data and ran the data augmentation, and now the dataset is very large, nearly 10 GB. My question is: the images in the augmented dataset have different sizes, so how should I set the image size in run_model.py? Still 1280x720?

In the data loader, images are resized to 400x400 before being fed to DexiNed. After augmentation the image sizes range from 400 up to 1280 (for the original images), so the more efficient way may be random cropping. Right now we are preparing a new version of DexiNed based on tf.nn.conv2d, maybe by the end of this week, and we may improve the data loader as well.
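
For reference, here is a minimal random-crop sketch (a hypothetical helper, not the repo's actual data loader), assuming an image and its edge map are numpy arrays with matching height and width:

    import numpy as np

    def random_crop_pair(img, gt, size=400):
        # Crop the same random size x size window from an image and its edge map.
        h, w = img.shape[:2]
        assert h >= size and w >= size, 'image smaller than crop size'
        top = np.random.randint(0, h - size + 1)
        left = np.random.randint(0, w - size + 1)
        return (img[top:top + size, left:left + size],
                gt[top:top + size, left:left + size])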

Question 2. In test_rgb.lst and train_rgb.lst, the two columns in each row have the same name, such as in test_rgb.lst:
rgbr/RGB_008.jpg rgbr/RGB_008.png
rgbr/RGB_010.jpg rgbr/RGB_010.png
rgbr/RGB_017.jpg rgbr/RGB_017.png
rgbr/RGB_025.jpg rgbr/RGB_025.png
and in train_rgb.lst, such as:
rgbr/aug/p1/RGB_001.jpg rgbr/aug/p1/RGB_001.png
rgbr/aug/p1/RGB_002.jpg rgbr/aug/p1/RGB_002.png
rgbr/aug/p1/RGB_003.jpg rgbr/aug/p1/RGB_003.png
My question is: why is there no edge image in each line?

Well, that is how we set it up in the beginning :). The second column is the edge map: a .png with the same base name, resolved under the edge_maps directory rather than imgs. Giving the image and the ground truth different names is a wise idea; I will improve this part.
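
For anyone parsing these lists, a minimal sketch (assuming, as the layout in Question 4 below suggests, that the first column is resolved under imgs/ and the second under edge_maps/):

    import os

    def read_pairs(lst_path, imgs_dir, gt_dir):
        # Each line holds 'image_path gt_path'; resolve both to full paths.
        pairs = []
        with open(lst_path) as f:
            for line in f:
                parts = line.split()
                if len(parts) != 2:
                    continue  # skip blank or malformed lines
                img_rel, gt_rel = parts
                pairs.append((os.path.join(imgs_dir, img_rel),
                              os.path.join(gt_dir, gt_rel)))
        return pairs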

Question 3. After augmentation there are 57600 training images in total, and you set max_iterations=150000, which means each image will be seen nearly 3 times, is that right?
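
(For reference: 150000 / 57600 ≈ 2.6, so with a batch size of 1 each image would indeed be seen about 2.6 times; with a batch size of b, about 2.6 × b times.)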

Question 4. My dataset is located at ./MBIPED/dataset/BIPED, like:
./MBIPED/dataset/BIPED/edges/edge_maps/test
./MBIPED/dataset/BIPED/edges/edge_maps/train
./MBIPED/dataset/BIPED/edges/imgs/test
./MBIPED/dataset/BIPED/edges/imgs/train
./MBIPED/dataset/BIPED/edges/test_rgb.lst
./MBIPED/dataset/BIPED/edges/train_rgb.lst
And then I set the parameter '--dataset_dir', default='./MBIPED/dataset/'. Is that right?

Yes. You can look at line 1011 of dataset_manager.py if you want to improve the data parser.
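
As a sanity check, with that setting the paths would resolve roughly like this (a hypothetical illustration of the path joining, not the repo's exact parser):

    import os

    dataset_dir = './MBIPED/dataset/'  # value of --dataset_dir
    base = os.path.join(dataset_dir, 'BIPED', 'edges')

    train_list = os.path.join(base, 'train_rgb.lst')
    train_imgs = os.path.join(base, 'imgs', 'train')
    train_gt = os.path.join(base, 'edge_maps', 'train')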

Question 5. How should I understand and use the following parameters?
'--use_nir', default=False, type=bool
'--use_v1', default=False,type=bool
'--deep_supervision', default=True, type= bool
'--testing_threshold', default=0.0, type=float

Sorry, I should clean up the first two parameters, and even the one for deep_supervision. If deep_supervision is True, we apply the loss function to all of the outputs; if not, it is applied only to DexiNed-f.
'--testing_threshold', default=0.0, type=float
This means that once DexiNed has produced its prediction Y_hat, we do Y_hat[Y_hat <= threshold] = 0.0 before post-processing; with the default of 0.0 this leaves a sigmoid prediction effectively unchanged.
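
A minimal sketch of that thresholding step, assuming Y_hat is a numpy array of sigmoid outputs in [0, 1]:

    import numpy as np

    def apply_threshold(y_hat, threshold=0.0):
        # Suppress responses at or below the threshold; 0.0 is a no-op for sigmoid outputs.
        out = y_hat.copy()
        out[out <= threshold] = 0.0
        return out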

Question 6. '--train_split', default=0.9, type=float. This splits the dataset into a training set and a validation set, right?

Yes
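
For illustration, a minimal split, assuming the list of training pairs is simply cut at the 0.9 mark (the repo may shuffle first):

    train_split = 0.9
    split = int(len(pairs) * train_split)  # pairs: list of (image, ground-truth) paths
    train_pairs, val_pairs = pairs[:split], pairs[split:]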

Question 7. How long will the training take if I use a 1080 Ti GPU?

Well, I have tested it on a Titan X 12GB and it takes around 2 days; it will probably be about the same on your GPU.

Cheers,


nywang2019 commented on August 16, 2024

Many thanks!
I followed your default parameter settings in run_model.py. After data augmentation I tried to start training, but it did not work. I changed the training and validation batch sizes to 4, and then it worked.
So, here are still some questions:

  1. I do not need to worry about the sizes of the augmented images, is that right?
  2. In order to run validation less often and speed up training, I set val_interval=300 (default 30); will it affect the performance of the model?


nywang2019 commented on August 16, 2024

By the way, please have a look at #23 if you have time.


xavysp commented on August 16, 2024

Many thanks!
I followed your default parameter settings in run_model.py. After data augmentation I tried to start training, but it did not work. I changed the training and validation batch sizes to 4, and then it worked.
So, here are still some questions:

  1. I do not need to worry about the sizes of the augmented images, is that right?

Yes, don't worry about that

  2. In order to run validation less often and speed up training, I set val_interval=300 (default 30); will it affect the performance of the model?

Maybe not, but you could also try reducing the size of the training images.
I don't remember which paper, but some report that increasing the training image size improves performance. It would be lovely if you trained with 300 (it is 400 now) and let me know how it goes.

