
Official PyTorch implementation of the paper: "DeepSIM: Image Shape Manipulation from a Single Augmented Training Sample" (ICCV 2021 Oral)

Home Page: http://www.vision.huji.ac.il/deepsim

License: Other

Python 98.98% Shell 1.02%
single-image image-editing deep-neural-networks generative-adversarial-network computer-vision computer-graphics edge-to-image segmantation-to-image pytorch image-to-image-translation

deepsim's Introduction

DeepSIM: Image Shape Manipulation from a Single Augmented Training Sample (ICCV 2021 Oral)

Official PyTorch implementation of the paper: "DeepSIM: Image Shape Manipulation from a Single Augmented Training Sample".

DeepSIM: Given a single real training image (b) and a corresponding primitive representation (a), our model learns to map the primitive (a) to the target image (b). At inference, the user manipulates the original primitive (a). The manipulated primitive is then passed through the network, which outputs a corresponding manipulated image (e) in the real image domain.


DeepSIM was trained on a single training pair, shown to the left of each sample. First row, "face" output: (left) flipping the eyebrows, (right) lifting the nose. Second row, "dog" output: changing the shape of the dog's hat, removing the ribbon, and making the face longer. Second row, "car" output: (top) adding a wheel, (bottom) conversion to a sports car.


DeepSIM: Image Shape Manipulation from a Single Augmented Training Sample
Yael Vinker*, Eliahu Horwitz*, Nir Zabari, Yedid Hoshen
*Equal contribution
https://arxiv.org/pdf/2007.01289

Abstract: We present DeepSIM, a generative model for conditional image manipulation based on a single image. We find that extensive augmentation is key for enabling single image training, and incorporate the use of thin-plate-spline (TPS) as an effective augmentation. Our network learns to map from a primitive representation of the image to the image itself. The choice of primitive representation has an impact on the ease and expressiveness of the manipulations, and can be automatic (e.g. edges), manual (e.g. segmentation) or hybrid, such as edges on top of segmentations. At manipulation time, our generator allows for making complex image changes by modifying the primitive input representation and mapping it through the network. Our method is shown to achieve remarkable performance on image manipulation tasks.
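
To make the role of TPS concrete, here is a minimal sketch of a TPS-style warp augmentation, approximated with scikit-image's PiecewiseAffineTransform over a randomly jittered control-point grid. This is an illustration only, not the repository's --tps_aug implementation; the n_points and jitter parameters are hypothetical (the repo exposes, e.g., --tps_points_per_dim).

import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def random_smooth_warp(image, n_points=3, jitter=0.1):
    """Warp `image` with a smooth random deformation of a control-point grid."""
    h, w = image.shape[:2]
    # Regular grid of control points (n_points per dimension, cf. --tps_points_per_dim)
    cols, rows = np.meshgrid(np.linspace(0, w, n_points), np.linspace(0, h, n_points))
    src = np.column_stack([cols.ravel(), rows.ravel()])
    # Displace each control point by up to `jitter` of the image dimensions
    dst = src + np.random.uniform(-jitter, jitter, src.shape) * np.array([w, h])
    tform = PiecewiseAffineTransform()
    tform.estimate(src, dst)
    return warp(image, tform, output_shape=(h, w))

During training, the identical transform would be applied to both the primitive and the image so that the single training pair stays aligned.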

Getting Started

Setup

  1. Clone the repo:
git clone https://github.com/eliahuhorwitz/DeepSIM.git
cd DeepSIM
  2. Create a new environment and install the libraries:
python3.7 -m venv deepsim_venv
source deepsim_venv/bin/activate
pip install -r requirements.txt


Training

The input primitive used for training should be specified using --primitive and can be one of the following:

  1. "seg" - train using segmentation only
  2. "edges" - train using edges only
  3. "seg_edges" - train using a combination of edges and segmentation
  4. "manual" - could be anything (for example, a painting)

For the chosen option, a suitable input folder should be provided under "train_<primitive>" (e.g. ./datasets/car/train_seg). For automatic edges, you can leave the "train_edges" folder empty and an edge map will be generated automatically. Note that for the segmentation primitive option, you must verify that the input at test time matches the input at train time exactly in terms of colors.
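
For reference, an automatic edge primitive can be generated along these lines with scikit-image's Canny detector. This is a hedged sketch, not the repository's internal edge generation; the file paths are hypothetical, and sigma plays the role of options such as --test_canny_sigma.

import numpy as np
from skimage import color, feature, io

img = io.imread("./datasets/face/train_img/img.png")   # hypothetical path
edges = feature.canny(color.rgb2gray(img), sigma=2.0)  # boolean edge map
io.imsave("./datasets/face/train_edges/img.png", (edges * 255).astype(np.uint8))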

To train on CPU please specify --gpu_ids '-1'.

  • Train DeepSIM on the "face" video using both edges and segmentations (bash ./scripts/train_face_vid_seg_edges.sh):
#!./scripts/train_face_vid_seg_edges.sh
python3.7 ./train.py --dataroot ./datasets/face_video --primitive seg_edges --no_instance --tps_aug 1 --name DeepSIMFaceVideo
  • Train DeepSIM on the "car" image using segmentation only (bash ./scripts/train_car_seg.sh):
#!./scripts/train_car_seg.sh
python3.7 ./train.py --dataroot ./datasets/car --primitive seg --no_instance --tps_aug 1 --name DeepSIMCar
  • Train DeepSIM on the "face" image using edges only (bash ./scripts/train_face_edges.sh):
#!./scripts/train_face_edges.sh
python3.7 ./train.py --dataroot ./datasets/face --primitive edges --no_instance --tps_aug 1 --name DeepSIMFace

Testing

  • Test DeepSIM on the "face" video using both edges and segmentations (bash ./scripts/test_face_vid_seg_edges.sh):
#!./scripts/test_face_vid_seg_edges.sh
python3.7 ./test.py --dataroot ./datasets/face_video --primitive seg_edges --phase "test" --no_instance --name DeepSIMFaceVideo --vid_mode 1 --test_canny_sigma 0.5
  • Test DeepSIM on the "car" image using segmentation only (bash ./scripts/test_car_seg.sh):
#!./scripts/test_car_seg.sh
python3.7 ./test.py --dataroot ./datasets/car --primitive seg --phase "test" --no_instance --name DeepSIMCar
  • Test DeepSIM on the "face" image using edges only (bash ./scripts/test_face_edges.sh):
#!./scripts/test_face_edges.sh
python3.7 ./test.py --dataroot ./datasets/face --primitive edges --phase "test" --no_instance --name DeepSIMFace

Additional Augmentations

As shown in the supplementary material, adding augmentations on top of TPS may lead to better results.

  • Train DeepSIM on the "face" video using both edges and segmentations with sheer, rotations, "cutmix", and canny sigma augmentations (bash ./scripts/train_face_vid_seg_edges_all_augmentations.sh):
#!./scripts/train_face_vid_seg_edges_all_augmentations.sh
python3.7 ./train.py --dataroot ./datasets/face_video --primitive seg_edges --no_instance --tps_aug 1 --name DeepSIMFaceVideoAugmentations --cutmix_aug 1 --affine_aug "shearx_sheary_rotation" --canny_aug 1
  • When using edges or seg_edges, it may be beneficial to have white edges instead of black ones; to do so, add the --canny_color 1 option
  • Check ./options/base_options.py for more augmentation related settings
  • When using edges or seg_edges and adding edges manually at test time, it may be beneficial to apply skeletonization (e.g. skimage's skeletonize) to the edges so that they resemble the Canny edges (see the sketch below)
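
A minimal sketch of that skeletonization step, using skimage. The file names are hypothetical; it assumes white edges on a black background (as with --canny_color 1), so invert the binarization otherwise.

import numpy as np
from skimage import io
from skimage.morphology import skeletonize

edges = io.imread("manual_edges.png", as_gray=True) > 0.5   # binarize
skeleton = skeletonize(edges)                               # thin to 1-px width
io.imsave("manual_edges_skeleton.png", (skeleton * 255).astype(np.uint8))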

More Results

Top row: primitive images. Left: the original pair used for training. Center: switching the positions of the two rightmost cars. Right: removing the leftmost car and inpainting the background.


The leftmost column shows the source image; each subsequent column demonstrates the result of our model when trained on the specified primitive. We manipulated the image primitives by adding a right eye, changing the point of view, and shortening the beak. Our results are presented next to each manipulated primitive. The combined primitive performed best on both high-level changes (e.g. the eye) and low-level changes (e.g. the background).


On the left is the training image pair, in the middle are the manipulated primitives, and on the right are the manipulated outputs (left to right: dress length, strapless, wrap around the neck).

Single Image Animation

Animation to Video

Video to Animation

Citation

If you find this useful for your research, please cite the following.

@InProceedings{Vinker_2021_ICCV,
    author    = {Vinker, Yael and Horwitz, Eliahu and Zabari, Nir and Hoshen, Yedid},
    title     = {Image Shape Manipulation From a Single Augmented Training Sample},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {13769-13778}
}

Acknowledgments

deepsim's People

Contributors

eliahuhorwitz, yael-vinker, yedidh


deepsim's Issues

OOM due to inactive loadsize/finesize settings

Hello,
I'm trying to run training on an 8 GB GPU and facing OOM, whatever load/fine size is set in the options (even 64).
Could you advise what other options may fix this?

UPD: this happens only with the face setup; with car, training starts fine.

Pretrained models

Thanks for this amazing project! Any plans to release a pre-trained model?
Thank you in advance.

Batch Size?

pix2pixHD has a batchSize option and it seems that you have removed it? Any reason?

Retain synthesized output structure from different labels.

This is less of an issue and more of a discussion. First of all, great work. I've trained a few models and it works great, but was wondering if a certain functionality exists.

Is it possible to do motion re-targeting with this repo, similar to First Order Motion Model?

I know that you can create labels on a single photo from a video, train it, then drive the synthesized image with that video sequence. The problem is, what if someone wanted to drive the synthesized result with something that has a completely different structure from what it was trained on? The synthesized video will conform to the user-created labels, thus distorting it and making it look strange.

Let me explain using your starfish example. The training pair is a labelled set and a starfish. If I labeled a starfish that was bigger than the one it was trained on, the synthesized result would conform to the bigger one, thus distorting the one it was trained on. This is the scenario you want to avoid.

Does this functionality exist, or are there any plans to implement it? Thanks!

Implementation

Did you use only PyTorch to implement this project, or other software as well?
I'm a beginner. Is there an easier way for me to understand which libraries to install with PyTorch and what extra steps I should take, so that I can make an animated image from a normal image live in front of my teacher?

Infinite Loop Error (keeps starting train.py for some reason)

Hello,
I haven't made any modifications to the code. I cloned it, installed the requirements, and ran the script for training. No other options, no custom data.

As you can see, it ran train.py twice for some reason and then exited with a broken pipe error. I've included the error output below (Output 1).

I then tried debugging this on another machine with the if __name__ == '__main__' modification to prevent train.py from calling itself.

It seemed like line 74 from train.py was causing this issue:

This too caused an error, albeit a different one. I've included that one as well (Output 2)

Thank you.

Here's the error Output 1:
(deepsim2) D:\DeepSIM>python ./train.py --dataroot ./datasets/car --primitive seg --no_instance --tps_aug 1 --name DeepSIMCar
C:\Users\Public\Anaconda\envs\deepsim2\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:
warn(f"Failed to load image Python extension: {e}")
name DeepSIMCar
[0]
------------ Options -------------
affine_aug: none
batchSize: 1
beta1: 0.5
canny_aug: 0
canny_color: 0
canny_sigma_l_bound: 1.2
canny_sigma_step: 0.3
canny_sigma_u_bound: 3
checkpoints_dir: ./checkpoints
continue_train: False
cutmix_aug: 0
cutmix_max_size: 96
cutmix_min_size: 32
data_type: 32
dataroot: ./datasets/car
debug: False
display_freq: 100
display_winsize: 512
feat_num: 3
fineSize: 256
fp16: False
gpu_ids: [0]
input_nc: 3
instance_feat: False
isTrain: True
label_feat: False
label_nc: 0
lambda_feat: 10.0
loadSize: 256
load_features: False
load_pretrain:
local_rank: 0
lr: 0.0002
max_dataset_size: inf
model: pix2pixHD
nThreads: 2
n_blocks_global: 9
n_blocks_local: 3
n_clusters: 10
n_downsample_E: 4
n_downsample_global: 4
n_layers_D: 3
n_local_enhancers: 1
name: DeepSIMCar
ndf: 64
nef: 16
netG: global
ngf: 64
niter: 8000
niter_decay: 8000
niter_fix_global: 0
no_flip: False
no_ganFeat_loss: False
no_html: False
no_instance: True
no_lsgan: False
no_vgg_loss: False
norm: instance
num_D: 2
output_nc: 3
phase: train
pool_size: 0
primitive: seg
print_freq: 100
resize_or_crop: none
save_epoch_freq: 20000
save_latest_freq: 20000
serial_batches: False
test_canny_sigma: 2
tf_log: False
tps_aug: 1
tps_percent: 0.99
tps_points_per_dim: 3
use_dropout: False
verbose: False
which_epoch: latest
-------------- End ----------------
./train.py:11: DeprecationWarning: fractions.gcd() is deprecated. Use math.gcd() instead.
def lcm(a, b): return abs(a * b) / fractions.gcd(a, b) if a and b else 0
CustomDatasetDataLoader
dataset [AlignedDataset] was created
#training images = 1
C:\Users\Public\Anaconda\envs\deepsim2\lib\site-packages\torchvision\models\_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
f"The parameter '{pretrained_param}' is deprecated since 0.13 and will be removed in 0.15, "
C:\Users\Public\Anaconda\envs\deepsim2\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing weights=VGG19_Weights.IMAGENET1K_V1. You can also use weights=VGG19_Weights.DEFAULT to get the most up-to-date weights.
warnings.warn(msg)
create web directory ./checkpoints\DeepSIMCar\web...
display_delta 0
print_delta 0.0
save_delta 0
[... the spawned child process then repeats the same torchvision warnings, the full "------------ Options -------------" block, and the dataset/web-directory setup messages verbatim ...]
Traceback (most recent call last):
File "", line 1, in
Traceback (most recent call last):
File "./train.py", line 74, in
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\spawn.py", line 105, in spawn_main
for i, data in enumerate(dataset, start=epoch_iter):
exitcode = _main(fd)
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\site-packages\torch\utils\data\dataloader.py", line 438, in iter
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="mp_main")
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\DeepSIM\train.py", line 74, in
for i, data in enumerate(dataset, start=epoch_iter):
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\site-packages\torch\utils\data\dataloader.py", line 438, in iter
return self._get_iterator()
return self._get_iterator()
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\site-packages\torch\utils\data\dataloader.py", line 384, in _get_iterator
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\site-packages\torch\utils\data\dataloader.py", line 384, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\site-packages\torch\utils\data\dataloader.py", line 1048, in init
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\site-packages\torch\utils\data\dataloader.py", line 1048, in init
w.start()
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\process.py", line 112, in start
w.start()
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
self._popen = self._Popen(self)
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\context.py", line 223, in _Popen
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\context.py", line 322, in _Popen
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
return Popen(process_obj)
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\popen_spawn_win32.py", line 89, in init
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\popen_spawn_win32.py", line 46, in init
reduction.dump(process_obj, to_child)
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\reduction.py", line 60, in dump
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\Users\Public\Anaconda\envs\deepsim2\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
ForkingPickler(file, protocol).dump(obj)
is not going to be frozen to produce an executable.''')
BrokenPipeError: [Errno 32] Broken pipe
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

Here is Error Output 2:
(deepsim) PS E:\JM\GAN\deepsim> python ./train.py --dataroot ./datasets/car --primitive seg --no_instance --tps_aug 1 --name DeepSIMCar
C:\Users\TWiM\.conda\envs\deepsim\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:
warn(f"Failed to load image Python extension: {e}")
name DeepSIMCar
[0]
------------ Options -------------
affine_aug: none
batchSize: 1
beta1: 0.5
canny_aug: 0
canny_color: 0
canny_sigma_l_bound: 1.2
canny_sigma_step: 0.3
canny_sigma_u_bound: 3
checkpoints_dir: ./checkpoints
continue_train: False
cutmix_aug: 0
cutmix_max_size: 96
cutmix_min_size: 32
data_type: 32
dataroot: ./datasets/car
debug: False
display_freq: 100
display_winsize: 512
feat_num: 3
fineSize: 256
fp16: False
gpu_ids: [0]
input_nc: 3
instance_feat: False
isTrain: True
label_feat: False
label_nc: 0
lambda_feat: 10.0
loadSize: 256
load_features: False
load_pretrain:
local_rank: 0
lr: 0.0002
max_dataset_size: inf
model: pix2pixHD
nThreads: 2
n_blocks_global: 9
n_blocks_local: 3
n_clusters: 10
n_downsample_E: 4
n_downsample_global: 4
n_layers_D: 3
n_local_enhancers: 1
name: DeepSIMCar
ndf: 64
nef: 16
netG: global
ngf: 64
niter: 8000
niter_decay: 8000
niter_fix_global: 0
no_flip: False
no_ganFeat_loss: False
no_html: False
no_instance: True
no_lsgan: False
no_vgg_loss: False
norm: instance
num_D: 2
output_nc: 3
phase: train
pool_size: 0
primitive: seg
print_freq: 100
resize_or_crop: none
save_epoch_freq: 20000
save_latest_freq: 20000
serial_batches: False
test_canny_sigma: 2
tf_log: False
tps_aug: 1
tps_percent: 0.99
tps_points_per_dim: 3
use_dropout: False
verbose: False
which_epoch: latest
-------------- End ----------------
./train.py:16: DeprecationWarning: fractions.gcd() is deprecated. Use math.gcd() instead.
def lcm(a, b): return abs(a * b) / fractions.gcd(a, b) if a and b else 0
CustomDatasetDataLoader
dataset [AlignedDataset] was created
#training images = 1
C:\Users\TWiM\.conda\envs\deepsim\lib\site-packages\torchvision\models\_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
f"The parameter '{pretrained_param}' is deprecated since 0.13 and will be removed in 0.15, "
C:\Users\TWiM\.conda\envs\deepsim\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing weights=VGG19_Weights.IMAGENET1K_V1. You can also use weights=VGG19_Weights.DEFAULT to get the most up-to-date weights.
warnings.warn(msg)
create web directory ./checkpoints\DeepSIMCar\web...
display_delta 0
print_delta 0.0
save_delta 0
C:\Users\TWiM\.conda\envs\deepsim\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:
warn(f"Failed to load image Python extension: {e}")
Traceback (most recent call last):
File "<string>", line 1, in <module>
Traceback (most recent call last):
File "C:\Users\TWiM\.conda\envs\deepsim\lib\multiprocessing\spawn.py", line 105, in spawn_main
File "./train.py", line 197, in <module>
exitcode = _main(fd)
main()
File "./train.py", line 81, in main
File "C:\Users\TWiM\.conda\envs\deepsim\lib\multiprocessing\spawn.py", line 115, in _main
for i, data in enumerate(dataset, start=epoch_iter):
self = reduction.pickle.load(from_parent)
File "C:\Users\TWiM\.conda\envs\deepsim\lib\site-packages\torch\utils\data\dataloader.py", line 438, in __iter__
EOFError: Ran out of input
return self._get_iterator()
File "C:\Users\TWiM\.conda\envs\deepsim\lib\site-packages\torch\utils\data\dataloader.py", line 384, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\TWiM\.conda\envs\deepsim\lib\site-packages\torch\utils\data\dataloader.py", line 1048, in __init__
w.start()
File "C:\Users\TWiM\.conda\envs\deepsim\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\Users\TWiM\.conda\envs\deepsim\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\TWiM\.conda\envs\deepsim\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\TWiM\.conda\envs\deepsim\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "C:\Users\TWiM\.conda\envs\deepsim\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'CustomDatasetDataLoader.initialize.<locals>.<lambda>'
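
For anyone hitting the same Windows spawn issue, the idiom the RuntimeError above refers to looks like the following. This is a generic sketch, not the repository's actual train.py: on Windows, multiprocessing uses the "spawn" start method, which re-imports the main module in each DataLoader worker, so module-level training code runs again unless it is guarded.

from multiprocessing import freeze_support

def main():
    # Build options, dataset, and model here, then run the training loop, e.g.:
    # for i, data in enumerate(dataset, start=epoch_iter):
    #     ...
    pass

if __name__ == '__main__':
    freeze_support()  # only needed when freezing into an executable
    main()

Since nThreads appears to control the number of DataLoader workers (see the options dump above), running with --nThreads 0 should also avoid spawning worker processes altogether.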

unrecognized arguments: --online_tps 0

This happens during the test phase. Is this option obsolete, or should it be replaced by something else?

Also, a couple of small notes:

  • the option --resize_or_crop none is repeated twice in the training command
  • the option --no_instance is repeated twice in the testing command
