
Comments (18)

yuhuixu1993 commented on July 1, 2024

Hi @ldd91,

  1. What is your partition ratio for training and validation? Considering the time cost, we recommend using a sampled subset of ImageNet, as mentioned in the paper.
  2. This means that we do not update the architecture parameters until epoch >= 35. This code controls the epoch at which updates begin; you can find similar strategies in Auto-DeepLab and P-DARTS (a minimal sketch of this gating follows below).
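
A minimal sketch of that warm-up gating, assuming a DARTS-style search loop where architect, model, criterion, optimizer, and the two data queues already exist (all names and the architect.step signature here are illustrative, not the exact PC-DARTS code):

    begin_epoch = 35  # architecture updates start only after this epoch

    for epoch in range(total_epochs):
        for step, (x_train, y_train) in enumerate(train_queue):
            if epoch >= begin_epoch:
                # update the architecture parameters on the held-out split
                x_val, y_val = next(iter(valid_queue))
                architect.step(x_val, y_val)
            # the supernet weights are updated from the first epoch onward
            optimizer.zero_grad()
            loss = criterion(model(x_train), y_train)
            loss.backward()
            optimizer.step()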


ldd91 commented on July 1, 2024

Hi @yuhuixu1993, I didn't set a partition ratio; I just used the train data and val data from ImageNet.


yuhuixu1993 commented on July 1, 2024

@ldd91, we can only use the training data. It needs to be partitioned into two parts: one part for training the supernet weights and the other for updating the architecture parameters, as described in the original DARTS and in follow-up works (ProxylessNAS, P-DARTS, ...).
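
For reference, a minimal sketch of such a split (illustrative names; a 50/50 ratio is shown, so adjust it to the proportions in the paper). Two samplers draw from disjoint index ranges of the same training set:

    import numpy as np
    import torch
    import torchvision.datasets as dset
    import torchvision.transforms as transforms

    train_data = dset.ImageFolder('imagenet/train', transforms.ToTensor())
    num_train = len(train_data)
    indices = list(range(num_train))
    np.random.shuffle(indices)
    split = int(np.floor(0.5 * num_train))  # first part trains weights, second trains the architecture

    train_queue = torch.utils.data.DataLoader(
        train_data, batch_size=1024,
        sampler=torch.utils.data.sampler.SubsetRandomSampler(indices[:split]))
    valid_queue = torch.utils.data.DataLoader(
        train_data, batch_size=1024,
        sampler=torch.utils.data.sampler.SubsetRandomSampler(indices[split:]))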


ldd91 commented on July 1, 2024

@yuhuixu1993, thank you. I will change the code and give it a try.


yuhuixu1993 commented on July 1, 2024

@ldd91, I still recommend using a subset.


ldd91 commented on July 1, 2024

@yuhuixu1993, thank you. I will try using a subset.


ldd91 commented on July 1, 2024

Hi @yuhuixu1993, I use

    split = int(np.floor(portion * num_train))
    dataloader = torch.utils.data.DataLoader(
        train_data,  # dataset argument (omitted in my original snippet)
        batch_size=1024,
        sampler=torch.utils.data.sampler.SubsetRandomSampler(indices[:split]))

(the sampler lives in torch.utils.data.sampler, not torch.utils.data.sample). I set the portion to 0.1 to use a sampled subset of ImageNet. In the log there are only 3 steps in each epoch; after the first epoch the train_acc is 3.37, and each epoch takes about 25 minutes.


ldd91 commented on July 1, 2024

I want to know how many steps there are in each of your epochs.


ldd91 commented on July 1, 2024

Hi @yuhuixu1993, I split the train data into train_data and valid_data, and then used 0.1 of the data for train_data and 0.025 for valid_data. Is that correct?


yuhuixu1993 commented on July 1, 2024

Please refer to our paper, thanks. The number of steps is not important, as it depends on the batch size you use. About the split proportion: yes, just follow the settings described in the paper. Note that I wrote the sampling code myself to make sure the data in each class is sampled evenly.
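
A minimal sketch of class-balanced subset sampling (not the authors' exact code): draw the same fraction of images from every class so the subset stays evenly distributed. It assumes train_data.targets lists the class label of each sample, as torchvision's ImageFolder provides:

    import random
    from collections import defaultdict

    def balanced_subset_indices(targets, portion, seed=0):
        # group sample indices by class label
        per_class = defaultdict(list)
        for idx, label in enumerate(targets):
            per_class[label].append(idx)
        # draw the same fraction from every class
        rng = random.Random(seed)
        subset = []
        for idxs in per_class.values():
            rng.shuffle(idxs)
            subset.extend(idxs[:int(len(idxs) * portion)])
        rng.shuffle(subset)
        return subset

    # e.g. indices = balanced_subset_indices(train_data.targets, portion=0.1)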


ldd91 commented on July 1, 2024

Thank you for your reply. I've run into another issue: with 1 V100 and batch size 128, one epoch finishes in 13 minutes, which is faster than my 8 V100 experiment (batch size 1024 took 25 minutes per epoch).


yuhuixu1993 commented on July 1, 2024

Sorry, I have not tried a single V100 on ImageNet. You may want to check your setup carefully.


ldd91 commented on July 1, 2024

Hi @yuhuixu1993, I found that my last experiment could run with batch_size=1024 only because architect.step was set to execute when epoch > 15; once the epoch passed 16 it went out of memory (8 V100s), and I could only set batch_size=256. I ran nvidia-smi and found that GPU 0 was out of memory, while the other seven GPUs used less memory than GPU 0 (and the same amount as each other).


yuhuixu1993 commented on July 1, 2024

@xxsgcjwddsg had the same problem in this issue; I think he can help you.


pawopawo commented on July 1, 2024

Multi-GPU training may fail because 'model.module.loss' cannot run across multiple GPUs, so do not put the loss inside the network. You can delete the loss from the network and then calculate it on the network's output, as sketched below.
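
A minimal sketch of that suggestion (illustrative names, not the repository's exact code): wrap a network that returns only logits in nn.DataParallel and apply the criterion to the gathered output, so the loss never lives inside the replicated module:

    import torch.nn as nn

    model = nn.DataParallel(network).cuda()  # `network` returns logits only
    criterion = nn.CrossEntropyLoss().cuda()

    for images, labels in train_queue:
        images = images.cuda()
        labels = labels.cuda(non_blocking=True)
        logits = model(images)            # forward pass is replicated across GPUs
        loss = criterion(logits, labels)  # loss is computed once, on the gathered output
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()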


ldd91 commented on July 1, 2024

@xxsgcjwddsg, thank you. Do you mean deleting the loss from the network in model_search_imagenet.py?


ldd91 commented on July 1, 2024

@xxsgcjwddsg, I can use multi-GPU now, but the memory usage on GPU 0 is different from the others.


bitluozhuang commented on July 1, 2024

Thanks a lot for this project and to @yuhuixu1993. I implemented a distributed version with PyTorch 1.1.0 on CIFAR-10. Anyone interested can test and verify it here: https://github.com/bitluozhuang/Distributed-PC-Darts

