
unsuperviseddeephomographyral2018's Introduction

Unsupervised Deep Homography: A Fast and Robust Homography Estimation Model

This paper will be presented at the International Conference on Robotics and Automation (ICRA) 2018 in Brisbane, Australia, and appears in the IEEE Robotics and Automation Letters.

We devise an unsupervised learning algorithm that trains a Deep Convolutional Neural Network to estimate planar homographies. We compare the proposed algorithm to traditional feature-based and direct methods, as well as a corresponding supervised learning algorithm. Our empirical results demonstrate that, compared to traditional approaches, the unsupervised algorithm achieves faster inference speed while maintaining comparable or better accuracy and robustness to illumination variation. In addition, on both a synthetic dataset and a representative real-world aerial dataset, our unsupervised method has superior adaptability and performance compared to the supervised deep learning method.

Citation

If you use this code for research, please cite:

@InProceedings{nguyen2017unsupervised,
  title        = {Unsupervised Deep Homography: A Fast and Robust Homography Estimation Model},
  author       = {Nguyen, Ty and Chen, Steven W and Shivakumar, Shreyas S and Taylor, Camillo J and Kumar, Vijay},
  booktitle    = {IEEE Robotics and Automation Letters (RA-L)},
  year         = {2018},
  organization = {IEEE},
  url          = {https://arxiv.org/abs/1709.03966}
}

Installation

Building and using require the following libraries and programs:
cuda 8.0.61 (required for GPU support)
python 2.7.12
tensorflow 1.2.1 (or higher)
opencv 3.4.0 (can be installed using: pip install opencv-python)

We built our system on Ubuntu 16.04. Both the CPU and GPU versions of Tensorflow work well; we install them inside a virtualenv. Other ways of installing Tensorflow have not been tested.
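
If the virtual environment does not already exist, it can be created first (virtualenv_name below is only a placeholder; this assumes the virtualenv tool is installed):

virtualenv --python=python2.7 virtualenv_name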

Install required python packages (pip is required)

source virtualenv_name/bin/activate 
pip install -r requirements.txt 

Build instructions

Clone repo

git clone https://github.com/tynguyen/unsupervisedDeepHomographyRAL2018.git

Trained Models

Model trained on Synthetic Data

Download at

https://drive.google.com/drive/folders/1Y9oNgbJTrAdkgf5-T1xONtU9n2ZqwDta?usp=sharing

Then store synthetic_models in the models folder.

Model trained on Aerial Image Data

https://drive.google.com/drive/folders/16RI7R0EVayiXfYoP2Ahhl4yN2sWhG76Z?usp=sharing

Note: your image data must be formatted to the correct size in order to use this trained model. Please refer to the next sections to see how to format the raw images.

Preparing training dataset (synthetic)

Download MS-COCO dataset http://cocodataset.org/#download

We use 2014/Train to generate the training data and 2014/Testing to generate the test set. Store them in RAW_DATA_PATH and TEST_RAW_DATA_PATH, the directories declared in the synthetic data generation script.

Generate synthetic dataset

In the file code/utils/gen_synthetic_data.py, set the important parameters as follows (a sketch of the generation procedure follows the parameter list):

  RHO = 45 # The maximum perturbation. The higher it is, the larger the displacement
  # between the two generated images.

  DATA_NUMBER = 100000  # Number of pairs of synthetic images in the training dataset
  TEST_DATA_NUMBER = 5000 # Number of pairs of synthetic images in the test dataset

  IM_PER_REAL = 2 # Generate 2 different synthetic images from one single real image

  # Size of synthetic image
  HEIGHT = 240  
  WIDTH = 320
  # Size of crop 
  PATCH_SIZE = 128

  # Directories to MS-COCO images 
  RAW_DATA_PATH = "/Earthbyte/tynguyen/rawdata/train/" # Real images used for generating synthetic data
  TEST_RAW_DATA_PATH = "/Earthbyte/tynguyen/rawdata/test/" # Real images used for generating test synthetic data

  # Synthetic data directories
  DATA_PATH = "/home/tynguyen/pose_estimation/data/synthetic/" + str(RHO) + '/'

  I_DIR = DATA_PATH + 'I/' # First large image in one pair 
  I_PRIME_DIR = DATA_PATH + 'I_prime/' # Second large image in one pair 

  # Since all generated images will be stored at the same location, we need .txt files to 
  # maintain training images and test images 
  FILENAMES_FILE = os.path.join(DATA_PATH,'train_synthetic.txt') # List of training images 
  TEST_FILENAMES_FILE = os.path.join(DATA_PATH,'test_synthetic.txt') # List of test images 

  GROUND_TRUTH_FILE = os.path.join(DATA_PATH,'gt.txt') # (Training set): ground truth of the homography parameters (delta movement of the 4 corners)
  PTS1_FILE = os.path.join(DATA_PATH,'pts1.txt') # (Training set): the 4 corners on the first image

  TEST_PTS1_FILE = os.path.join(DATA_PATH,'test_pts1.txt') # (Test set): the 4 corners on the first image
  TEST_GROUND_TRUTH_FILE = os.path.join(DATA_PATH,'test_gt.txt') # (Test set): ground truth of the homography parameters (delta movement of the 4 corners)
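
For reference, the generation procedure follows the corner-perturbation scheme described in the paper: a PATCH_SIZE crop is taken from the resized image, its 4 corners are perturbed by at most RHO pixels, and the induced homography is used to warp the image, yielding the I / I_prime pair plus the ground-truth 4-corner offsets. A minimal sketch of the idea (an illustration only, not the exact code in gen_synthetic_data.py; the helper name make_synthetic_pair is hypothetical):

import cv2
import numpy as np

RHO, HEIGHT, WIDTH, PATCH_SIZE = 45, 240, 320, 128

def make_synthetic_pair(image, rng=np.random):
    I = cv2.resize(image, (WIDTH, HEIGHT))
    # Top-left corner of the crop, kept at least RHO pixels away from every border
    x = rng.randint(RHO, WIDTH - PATCH_SIZE - RHO)
    y = rng.randint(RHO, HEIGHT - PATCH_SIZE - RHO)
    pts1 = np.float32([[x, y], [x + PATCH_SIZE, y],
                       [x + PATCH_SIZE, y + PATCH_SIZE], [x, y + PATCH_SIZE]])
    gt = rng.randint(-RHO, RHO + 1, size=(4, 2)).astype(np.float32)  # 4-corner offsets
    pts2 = pts1 + gt
    H = cv2.getPerspectiveTransform(pts1, pts2)
    # Warping with H^-1 makes the crop at pts1 in I_prime show the content of I at pts2
    I_prime = cv2.warpPerspective(I, np.linalg.inv(H), (WIDTH, HEIGHT))
    patch_1 = I[y:y + PATCH_SIZE, x:x + PATCH_SIZE]
    patch_2 = I_prime[y:y + PATCH_SIZE, x:x + PATCH_SIZE]
    return I, I_prime, patch_1, patch_2, pts1, gt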

Generate training dataset

It will take a few hours to generate 100,000 data samples. You can choose a smaller number of samples for debugging.

python utils/gen_synthetic_data.py --mode train --num_data [number of data] 

Generate test dataset

python utils/gen_synthetic_data.py --mode test 

Debugging

In all training and testing processes, you can visualize images either with Tensorboard or by passing --visual True to the python scripts. Tensorboard is highly recommended since it does not slow down the run as much as plotting with the --visual flag. For example:

python homography_CNN_synthetic.py --mode train --lr 5e-4 --loss_type h_loss --visual True 
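
The Tensorboard summaries can then be inspected in a browser; assuming they are written under the LOG_DIR configured in the training scripts below, they can be served with:

tensorboard --logdir [LOG_DIR]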

Train model with synthetic dataset

In the file code/homography_CNN_synthetic.py, set the important parameters as follows:

  # Size of the synthetic images and the perturbation range (RHO)
  HEIGHT = 240 #
  WIDTH = 320
  RHO = 45 # The maximum perturbation. The higher it is, the larger the displacement between
  # the two generated images. Change this value to evaluate different levels of displacement.
  PATCH_SIZE = 128

  # Synthetic data directories
  DATA_PATH = "/home/tynguyen/pose_estimation/data/synthetic/" + str(RHO) + '/'

  I_DIR = DATA_PATH + 'I/' # First large image in one pair 
  I_PRIME_DIR = DATA_PATH + 'I_prime/' # Second large image in one pair 

  # Since all generated images will be stored at the same location, we need .txt files to 
  # maintain training images and test images 
  FILENAMES_FILE = os.path.join(DATA_PATH,'train_synthetic.txt') # List of training images 
  TEST_FILENAMES_FILE = os.path.join(DATA_PATH,'test_synthetic.txt') # List of test images 

  GROUND_TRUTH_FILE = os.path.join(DATA_PATH,'gt.txt') # (Training set): ground truth of the homography parameters (delta movement of the 4 corners)
  PTS1_FILE = os.path.join(DATA_PATH,'pts1.txt') # (Training set): the 4 corners on the first image

  TEST_PTS1_FILE = os.path.join(DATA_PATH,'test_pts1.txt') # (Test set): the 4 corners on the first image
  TEST_GROUND_TRUTH_FILE = os.path.join(DATA_PATH,'test_gt.txt') # (Test set): ground truth of the homography parameters (delta movement of the 4 corners)

  # Log and model directories
  MAIN_LOG_PATH = '/media/tynguyen/'
  LOG_DIR       = MAIN_LOG_PATH + "docker_folder/pose_estimation/logs/"
  MODEL_DIR     = MAIN_LOG_PATH + "docker_folder/pose_estimation/models/"

  # Where to save visualization images (for report)
  RESULTS_DIR   = MAIN_LOG_PATH + "docker_folder/pose_estimation/results/synthetic/report/"

  # List of augmentations to the data.
  AUGMENT_LIST = ['normalize'] # 'normalize': standardize images
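
For reference, 'normalize' standardizes the images. A minimal sketch of one common form of this preprocessing (per-image zero mean and unit variance; the repo may use fixed constants instead, and the function name standardize is hypothetical):

import numpy as np

def standardize(image):
    # Per-image zero-mean, unit-variance standardization (one possible
    # interpretation of the 'normalize' augmentation)
    image = image.astype(np.float32)
    return (image - image.mean()) / (image.std() + 1e-8)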

Supervised

python homography_CNN_synthetic.py --mode train --lr 5e-4 --loss_type h_loss

Unsupervised

python homography_CNN_synthetic.py --mode train --lr 1e-4 --loss_type l1_loss 

Test model with synthetic dataset

Supervised

python homography_CNN_synthetic.py --mode test --lr 5e-4 --loss_type h_loss 

Unsupervised

python homography_CNN_synthetic.py --mode test --lr 1e-4 --loss_type l1_loss  

Generate aerial dataset

Due to the company's privacy requirements, we cannot make our aerial dataset publicly available. However, there is an alternative that readers might be interested in: https://github.com/OpenDroneMap/OpenDroneMap/tree/master/tests/test_data

These datasets are quite similar to ours.

Supervised

For the supervised method, everything is the same as with the synthetic dataset: we use the aerial images to generate synthetic image pairs to train the model.

Unsupervised

In our aerial dataset, images are recorded as a time sequence. Thus, we treat two consecutive images as a pair and generate several training samples from it (by random cropping). Each training sample consists of a pair of (HEIGHT x WIDTH) images and a pair of corresponding crops.

As mentioned in the paper, from each original pair of images we first resize from (FULL_HEIGHT x FULL_WIDTH) to (HEIGHT x WIDTH), then crop both resized images at the same location (y, x). From each pair of original images we generate IM_PER_REAL training samples by keeping y constant and randomizing x (with maximum perturbation RHO).

Note that the choice of resizing and cropping depends strongly on the level of displacement between a pair of original images. Our aerial dataset features a large displacement, so we keep y constant and use a large crop ratio (PATCH_SIZE/WIDTH). However, there is still a border effect during warping: the warped crop of the second image has a black area near its edge. For better performance, one can consider moving the crop window to the region of largest overlap between the two images instead of just center-cropping.
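
A minimal sketch of how one training sample could be formed from two consecutive frames (an illustration of the procedure above, not the exact code in gen_real_data.py; the helper name make_aerial_pair is hypothetical):

import cv2
import numpy as np

RHO, HEIGHT, WIDTH, PATCH_SIZE = 24, 142, 190, 128

def make_aerial_pair(frame_t, frame_t1, rng=np.random):
    I       = cv2.resize(frame_t,  (WIDTH, HEIGHT))   # resized from FULL_WIDTH x FULL_HEIGHT
    I_prime = cv2.resize(frame_t1, (WIDTH, HEIGHT))
    y = (HEIGHT - PATCH_SIZE) // 2                     # y kept constant (here: vertically centered)
    x = (WIDTH - PATCH_SIZE) // 2 + rng.randint(-RHO, RHO + 1)  # x randomized within +/- RHO
    pts1 = np.float32([[x, y], [x + PATCH_SIZE, y],
                       [x + PATCH_SIZE, y + PATCH_SIZE], [x, y + PATCH_SIZE]])
    patch_1 = I[y:y + PATCH_SIZE, x:x + PATCH_SIZE]
    patch_2 = I_prime[y:y + PATCH_SIZE, x:x + PATCH_SIZE]
    return I, I_prime, patch_1, patch_2, pts1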

   RHO = 24 # Maximum range of perturbation
   DATA_NUMBER = 10000
   TEST_DATA_NUMBER = 1000
   IM_PER_REAL = 20 # Generate 20 different pairs of images from one single real image

   # Size of synthetic image
   HEIGHT = 142 #
   WIDTH = 190
   PATCH_SIZE = 128

   FULL_HEIGHT = 480 #
   FULL_WIDTH  =  640
   # Directories to files
   RAW_DATA_PATH = "/Earthbyte/tynguyen/real_rawdata/joe_data/train/" # Real images used for generating real dataset
   TEST_RAW_DATA_PATH = "/Earthbyte/tynguyen/real_rawdata/joe_data/test/" # Real images used for generating real test dataset

   # Data directories
   DATA_PATH = "/Earthbyte/tynguyen/docker_folder/pose_estimation/data/synthetic/" + str(RHO) + '/'

   I_DIR = DATA_PATH + 'I/' # First resized image in one pair
   I_PRIME_DIR = DATA_PATH + 'I_prime/' # Second resized image in one pair

   FULL_I_DIR = DATA_PATH + 'FULL_I/' # Full image size 480 x 640
   FULL_I_PRIME_DIR = DATA_PATH + 'FULL_I_prime/' # Full image size 480 x 640

   PTS1_FILE = os.path.join(DATA_PATH,'pts1.txt')
   FILENAMES_FILE = os.path.join(DATA_PATH,'train_real.txt')
   GROUND_TRUTH_FILE = os.path.join(DATA_PATH,'gt.txt')
   TEST_PTS1_FILE = os.path.join(DATA_PATH,'test_pts1.txt')
   TEST_FILENAMES_FILE = os.path.join(DATA_PATH,'test_real.txt')
   # In real dataset, ground truth file consists of correspondences
   # Each row in the file contains 8 numbers:[corr1, corr2]
   TEST_GROUND_TRUTH_FILE = os.path.join(DATA_PATH,'test_gt.txt')
python utils/gen_real_data.py --mode train --num_data [number of training data] 
python utils/gen_real_data.py --mode test --num_data [number of test data] 

Train model with aerial dataset

In the file homography_CNN_real.py, set the parameters as follows:

   # Size of the synthetic images and the perturbation range (RHO)
   HEIGHT = 142 #
   WIDTH = 190
   RHO = 24
   PATCH_SIZE = 128
   # Full image size (used for displaying)
   FULL_HEIGHT = 240 #
   FULL_WIDTH =  320

   # Data directories
   DATA_PATH = "/home/tynguyen/pose_estimation/data/real/" + str(RHO) + '/'

   I_DIR = DATA_PATH + 'I/' # Large image
   I_PRIME_DIR = DATA_PATH + 'I_prime/' # Large image
   PTS1_FILE = os.path.join(DATA_PATH,'pts1.txt')
   FILENAMES_FILE = os.path.join(DATA_PATH,'train_real.txt')
   GROUND_TRUTH_FILE = None # There is no ground truth during training 

   FULL_I_DIR = DATA_PATH + 'FULL_I/' # Large image
   FULL_I_PRIME_DIR = DATA_PATH + 'FULL_I_prime/' # Large image
   TEST_PTS1_FILE = os.path.join(DATA_PATH,'test_pts1.txt')
   TEST_FILENAMES_FILE = os.path.join(DATA_PATH,'test_real.txt')
   # Correspondences in test set
   TEST_GROUND_TRUTH_FILE = os.path.join(DATA_PATH,'test_gt.txt')

   # Log and model directories
   MAIN_LOG_PATH = '/media/tynguyen/DATA/'
   LOG_DIR       = MAIN_LOG_PATH + "docker_folder/pose_estimation/logs/"

   # Where to load model. This could be the location of the model trained on synthetic data
   # or any other dataset
   LOAD_MODEL_DIR = MAIN_LOG_PATH + "docker_folder/pose_estimation/models/"
   # Where to save new model. This is the location of the fine-tuned model
   SAVE_MODEL_DIR = MAIN_LOG_PATH + "docker_folder/pose_estimation/models/real_models/"

   # Where to save visualization images (for report)
   RESULTS_DIR   = MAIN_LOG_PATH + "docker_folder/pose_estimation/results/synthetic/report/"

   # list of augmentations to the data
   AUGMENT_LIST = ['normalize']

Supervised

For the supervised method, after generating a new set of synthetic images from the aerial dataset, change DATA_PATH in homography_CNN_synthetic.py accordingly and run

python homography_CNN_synthetic.py --mode train --lr 5e-4 --loss_type h_loss 

Unsupervised

python homography_CNN_real.py --mode train --lr 1e-4 --loss_type l1_loss 

There are a couple of options during the training.

Training model from scratch

python homography_CNN_real.py --mode train --lr 1e-4 --loss_type l1_loss --finetune False

Finetune model (after training on synthetic dataset or other datasets)

python homography_CNN_real.py --mode train --lr 1e-4 --loss_type l1_loss --finetune True

Resume training the model (after training on aerial dataset for a while)

python homography_CNN_real.py --mode train --lr 1e-4 --loss_type l1_loss --resume True

Resume training the model (after training on aerial dataset for a while) but reset iteration number

python homography_CNN_real.py --mode train --lr 1e-4 --loss_type l1_loss --resume True --retrain True 

unsuperviseddeephomographyral2018's People

Contributors

tyunist


unsuperviseddeephomographyral2018's Issues

Using the model on own real homography dataset

Hi there,

I'm trying to run your model on a real homography dataset (like the one provided here, which includes the real GT homography matrix) in order to estimate the homography between two images of the same scene taken from different angles.

I assume that I will need to run homography_CNN_real.py, but when I generate the test dataset using gen_real_data.py, your code goes into query_gt_test_set(), which has some hardcoded parameters referring to the aerial dataset that is not supplied.

Can you please explain what exact inputs are needed for homography_CNN_real.py when I'm using my own data,
especially what should be in --test_gt_file, and what preprocessing steps are needed to fit my own data to the NN?

I see that the repository has been inactive for a while; I hope the author (or anyone else who has encountered the same issue) can help.

Regards,
Yuval

OutOfRange Error

OutOfRangeError (see above for traceback): RandomShuffleQueue '_1_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 128, current size 0)
[[node shuffle_batch (defined at D:\projects\unsupervisedDeepHomographyRAL2018\code\dataloader.py:233) = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n/_141)]]

Is there something wrong with the data loader, or should I generate a specific number of training samples?

What is fail_percentage? Why is the supervised fail_percentage lower than the unsupervised one?

Hi,

Thank you so much for your code!
I successfully generated the synthetic datasets and tested both the supervised and unsupervised homography networks with homography_CNN_synthetic.py. One of its outputs is fail_percentage. I checked where it comes from: in train(), it should be "apply_grad_opt". But I don't really understand what it is. Can you explain it to me?

A related question is why the supervised fail_percentage is lower than the unsupervised one.
Both l1_loss and h_loss are very close for the two networks, but there is a big difference in fail_percentage.

Thanks!

A question

Hi ,

Thanks for your code.

I have a question regarding the training script.

Through our experiments, we find that the direction that makes the L1 loss decrease does not improve the registration accuracy; that is, the loss function decreases, but the registration result moves farther and farther away from the true offset. We think that following the gradient conflicts with the registration accuracy. Could you tell us how you solved this problem?

Thanks very much!

Best regards,
Qianqian Kong

Disparity in Fig 2c and section 4 B,C

We observed that the code and Fig. 2c are consistent with the idea that the loss does not backpropagate through the Tensor DLT and the Spatial Transformer Network. However, Sections 4B and 4C derive gradients across these layers. Why derive those gradients if they are not used?

The effect of cross-brightness inputs

Hi, I used the model trained on synthetic data to test on synthetic data. When the brightness levels of the two input images are different, the model's predictions are significantly worse than in the ordinary case. How do I obtain the results described in the paper?

Sports and Soccer video

Hello, thanks for this contribution. I'm really interested in trying to run this model on a soccer broadcast registration problem.

What do you think? Should I create my own dataset, or can the trained weights you provided work on this kind of problem?
Any recommendation would be appreciated,
Thanks.

Is it possible to estimate images from folder I_prime to folder I?

Hi,

To my understanding, in the unsupervised method you estimate the transformed version of the images (i.e., the images with the homography applied) from the original images. Is it possible to do it the other way around?

In other words, is it possible to estimate the original images from the images with homography?

Thank you and best regards
Maria

finetune on fingerprint dataset

Hi,

I tried to use the framework to finetune on a fingerprint dataset, but the loss is very high and, after several hours, it is still very high; the loss does not change. I tried some other datasets and it works well. I want to know whether this model is simply not suitable for fingerprints.

about loss function

[image: loss function equation from the paper]
In this loss function, may I ask: what do Xi and Xj in the denominator mean?

Error in FineTuning the Model

After downloading the pretrained model (trained on the synthetic dataset) and storing it in the models directory, I ran the homography_CNN_real.py script, but it gave the following error:

[error screenshot]

I preprocessed my dataset using the gen_real_data.py script, and I was using l1_loss for finetuning the network.

License Version

Hi,

Can you please specify the license under which this project is released? I need it for an industrial application and wanted to be clear on this front. Thank you.

tcmalloc: large alloc while finetune unsupervised model

Hello, I'm trying to finetune the unsupervised model using the model trained on aerial image data. I generated training images as suggested in the Generate aerial dataset section; for training I'm using this command:
python "/home/UDH/code/homography_CNN_real.py" --mode train --lr 1e-4 --loss_type l1_loss --load_model_dir "/home/real_models" --save_model_dir "/home/save_models" --finetune True

This is the error that I got:
<==================== Loading data ===================>

===> Load model from (if want) /home/real_models
===> Save model to (if want) /home/save_models
===> Save visualization to (if want) /home/report/
===> There are totally 71 test files
===> Train: There are totally 140 training files
args lr: 0.0001 9e-05
===> Decay steps: 58117.5893057
x_t_flat: Tensor("Reshape_1:0", shape=(16384,), dtype=float32, device=/device:GPU:0)
y_t_flat: Tensor("Reshape_2:0", shape=(16384,), dtype=float32, device=/device:GPU:0)
--Shape of A_mat: [64, 8, 8]
--shape of b: [64, 8, 1]
--shape of H_8el Tensor("MatrixSolve:0", shape=(64, 8, 1), dtype=float32, device=/device:GPU:0)
('--Inter- scale_h:', True)
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py:93: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
====> Use loss type: l1_loss
--Shape of A_mat: [64, 8, 8]
--shape of b: [64, 8, 1]
--shape of H_8el Tensor("MatrixSolve_1:0", shape=(64, 8, 1), dtype=float32, device=/device:GPU:1)
('--Inter- scale_h:', True)
====> Use loss type: l1_loss
2018-12-07 11:55:48.830193: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-12-07 11:55:48.830267: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-12-07 11:55:48.830296: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2018-12-07 11:55:48.830314: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-12-07 11:55:48.830333: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
===> Start step: 0
tcmalloc: large alloc 2684600320 bytes == 0x557650d7c000 @ 0x7f031dcac1e7 0x7f0311d364ef 0x7f03131f3869 0x7f031320027d 0x7f031322c040 0x7f031322c4ae 0x7f031322d75f 0x7f031340e81c 0x7f03133df108 0x7f03133df8ca 0x7f0313723a92 0x7f0313722d22 0x7f031c7c08f0 0x7f031d6706db 0x7f031d9a988f
tcmalloc: large alloc 2684600320 bytes == 0x557650d7c000 @ 0x7f031dcac1e7 0x7f0311d364ef 0x7f03131f3869 0x7f031320027d 0x7f031322c040 0x7f031322c4ae 0x7f031322d75f 0x7f031340e81c 0x7f03133df108 0x7f03133ce630 0x7f0313723a92 0x7f0313722d22 0x7f031c7c08f0 0x7f031d6706db 0x7f031d9a988f
tcmalloc: large alloc 2684600320 bytes == 0x557651c18000 @ 0x7f031dcac1e7 0x7f0311d364ef 0x7f03131f3869 0x7f031320027d 0x7f031322c040 0x7f031322c4ae 0x7f031322d75f 0x7f031340e81c 0x7f03133df108 0x7f03133ce630 0x7f0313723a92 0x7f0313722d22 0x7f031c7c08f0 0x7f031d6706db 0x7f031d9a988f
tcmalloc: large alloc 2684600320 bytes == 0x5576f1c54000 @ 0x7f031dcac1e7 0x7f0311d364ef 0x7f03131f3869 0x7f031320027d 0x7f031322c040 0x7f031322c4ae 0x7f031322d75f 0x7f031340e81c 0x7f03133df108 0x7f03133ce630 0x7f0313723a92 0x7f0313722d22 0x7f031c7c08f0 0x7f031d6706db 0x7f031d9a988f
tcmalloc: large alloc 2684600320 bytes == 0x557651c18000 @ 0x7f031dcac1e7 0x7f0311d364ef 0x7f03131f3869 0x7f031320027d 0x7f031322c040 0x7f031322c4ae 0x7f031322d75f 0x7f031340e81c 0x7f03133df108 0x7f03133df8ca 0x7f0313723a92 0x7f0313722d22 0x7f031c7c08f0 0x7f031d6706db 0x7f031d9a988f
tcmalloc: large alloc 2684600320 bytes == 0x557679c46000 @ 0x7f031dcac1e7 0x7f0311d364ef 0x7f03131f3869 0x7f031320027d 0x7f031322c040 0x7f031322c4ae 0x7f031322d75f 0x7f031340e81c 0x7f03133df108 0x7f03133ce630 0x7f0313723a92 0x7f0313722d22 0x7f031c7c08f0 0x7f031d6706db 0x7f031d9a988f
tcmalloc: large alloc 2684600320 bytes == 0x557651c18000 @ 0x7f031dcac1e7 0x7f0311d364ef 0x7f03131f3869 0x7f031320027d 0x7f031322c040 0x7f031322c4ae 0x7f031322d75f 0x7f031340e81c 0x7f03133df108 0x7f03133df8ca 0x7f0313723a92 0x7f0313722d22 0x7f031c7c08f0 0x7f031d6706db 0x7f031d9a988f
tcmalloc: large alloc 2684600320 bytes == 0x557651c18000 @ 0x7f031dcac1e7 0x7f0311d364ef 0x7f03131f3869 0x7f031320027d 0x7f031322c040 0x7f031322c4ae 0x7f031322d75f 0x7f031340e81c 0x7f03133df108 0x7f03133ce630 0x7f0313723a92 0x7f0313722d22 0x7f031c7c08f0 0x7f031d6706db 0x7f031d9a988f

After that the script freezes and nothing happens.

I run all scripts in Google Colaboratory on Python 2.7 with GPU support; the tensorflow version is 1.2.1.

Also, I found that the patches in gen_real_data.py are used only for visualization, but according to the paper the NN needs these patches. Are they generated somewhere else?
# grab image patches
I1 = I_gray[y:y + patch_size, x:x + patch_size]
I2 = I_prime_gray[y:y + patch_size, x:x + patch_size]

Relationship between total number of steps and epochs

Hi @tyunist ,

Thanks for your code.

I have a question regarding the synthetic training script.

You have set num_total_steps = 150000 here, and I see that the batch size is 128.

Could you please explain why you selected these values and how I can determine the total number of epochs from num_total_steps?

Best regards,
Nagaraj

RMSE/MACE of supervised and unsupervised methods

Hi @tynguyen ,

thank you for your reply HERE. Since you've closed my issue, I had to start a new one. I've downloaded your pretrained models, modified the code to output aggregated MACE and RMSE, and got the following:

SUPERVISED:

|Steps  |   h_loss   |    l1_loss   |  Fail percent    |
119 12.241411097844441 0.6906676491101583 6.328125
>> Top 0 -  30 %
>> Top 30 -  60 %
>> Top 60 -  100 %
===> Percentile Values: (20, 50, 80, 100):
[[11.565539    0.36131516]
 [12.173439    0.16126795]
 [12.799296    0.27282068]]
======> End! ====================================
MACE 10.368597
RMSE 14.589684

UNSUPERVISED:

|Steps  |   h_loss   |    l1_loss   |  Fail percent    |
119 14.196217513084411 0.6258547094961008 19.4140625
>> Top 0 -  30 %
>> Top 30 -  60 %
>> Top 60 -  100 %
===> Percentile Values: (20, 50, 80, 100):
[[13.357981    0.33709136]
 [14.083497    0.1984129 ]
 [14.909436    0.37759867]]
======> End! ====================================
MACE 12.908667
RMSE 19.376993

Here's the MACE/RMSE calculation snippet:


daniel_gt_4p = []
daniel_pred_4p = []

for step in range(self.num_total_steps):
    num_fail_value, h_loss_value,  rec_loss_value, ssim_loss_value, l1_loss_value, l1_smooth_loss_value,  pred_I2_value, I1_aug_value, I2_aug_value, I_value, I_prime_value, pts1_value, gt_value, pred_h4p_value  = sess.run([self.total_num_fail, self.total_h_loss, self.total_rec_loss, self.total_ssim_loss, self.total_l1_loss, self.total_l1_smooth_loss, self.pred_I2, self.I1_aug, self.I2_aug, self.I, self.I_prime, self.pts1, self.gt, self.pred_h4p])

    daniel_gt_4p.append(gt_value)
    daniel_pred_4p.append(pred_h4p_value)

(...)

print('MACE', np.mean(np.abs(np.array(daniel_gt_4p) - np.array(daniel_pred_4p))))
print('RMSE', np.sqrt(np.mean((np.array(daniel_gt_4p)- np.array(daniel_pred_4p))**2)))

Now I am a bit confused. In your paper, in Figure 4, you report an RMSE of about 35 for the supervised method and about 13 for the unsupervised one. So, according to your paper, the unsupervised method should be MUCH better than the supervised one, but that is not the case with your provided models. I've seen others get similar results HERE, but instead of commenting on that, you closed the issue.

Is my understanding of figure 4 wrong?

Best,
Daniel

Getting dimension mismatch error

In numpy_spatial_transformer.py, line 90, it says:

wa = np.expand_dims(wa, 2)

and I am getting the following error:

     axis = tuple([normalize_axis_index(ax, ndim, argname) for ax in axis])
     numpy.AxisError: axis 2 is out of bounds for array of dimension 2

I have tried changing the second argument in expand_dims to:

1 - it works but crashes with an image error when generating the synthetic dataset, with the following error:
Error with image: /home/yatharth/Desktop/CL/unsupervisedDeepHomographyRAL2018/test/test_synth

0 - doesn't work, with the following error:
File "/home/yatharth/Desktop/CL/unsupervisedDeepHomographyRAL2018/code/utils/numpy_spatial_transformer.py", line 96, in _interpolate
out = wa*Ia + wb*Ib + wc*Ic + wd*Id
ValueError: operands could not be broadcast together with shapes (1,76800) (76800,3)

I am using a smaller number of images from the COCO dataset itself for debugging, so I am not sure whether there is any mismatch in the expected input images. Can somebody please clarify and help me with this issue?
Thanks in advance!
Thanks in advance!

Test with own images

Hello,

I want to find the homography between two images of the same scene captured by two cameras with different orientations, based on 4 co-planar points visible in both images. I would like to test your framework but I am not sure which piece of code I should use and which extra information I should provide:
-> to use: homography_CNN_real.py or homography_CNN_synthetic.py?
-> to provide: at least a folder with the two images (DATA_PATH), a text file with the image names (FILENAMES_FILE), a text file with the coordinates of the 4 co-planar points in one image (PTS1_FILE); anything else?

I guess I do not have to modify any parameters related to the training, the ground truth, etc?

Thank you for your answer!

Best regards,
Guillaume

about homography_conventional_real

Hi,
Thanks for your great work.
I want to test my own data. What code should I use: homography_conventional_real or homography_conventional_real?
What's the difference between them?
What does TEST_GROUND_TRUTH_FILE mean if I don't have GT?

Creating Test Dataset Using utils/gen_real_data.py

For creating the test dataset, the code calls a method query_gt_test_set() which requires correspondence .mat files. What do these files contain and how can we create them?

def query_gt_test_set():
  label_path = '/Earthbyte/tynguyen/real_rawdata/joe_data/test/labels/'
  mat_file_name_list = [label_path+'corresponences0_10.mat',
                        label_path+'correspondences11_21.mat',
                        label_path+'correspondences22_30.mat',
                        label_path+'correspondences31_40.mat',
                        label_path+'correspondences41_49.mat']
  for i in range(len(mat_file_name_list)):
    gt_array = io.loadmat(mat_file_name_list[i])
    corr1_array = gt_array['all_corr1']
    corr2_array = gt_array['all_corr2']
    if i == 0:
      complete_corr_array1 = corr1_array
      complete_corr_array2 = corr2_array
    else:
      complete_corr_array1 = np.concatenate((complete_corr_array1, corr1_array), axis=0)
      complete_corr_array2 = np.concatenate((complete_corr_array2, corr2_array), axis=0)
  # Return 200x2, 200x2 arrays.
  # To query 4 points on the first image, do:
  # complete_corr_array1[image_index*4:(image_index + 1)*4] => 4x2
  return complete_corr_array1, complete_corr_array2

The problem of DLT

The homography matrix computed by the OpenCV function findHomography() is different from the one computed by solve_DLT().

Using OpenCV:
p1 = np.float32([[194.6595, 118.8366],
[258.6257, 84.6884],
[196.5097, 239.1188],
[215.2767, 214.2356]])
p2 = np.float32([[102.3168, 49.8535],
[155.5462, 21.9262],
[96.9939, 152.5195],
[114.5596, 130.9756]])
p3 = p1 - p2

h_mat, mask = cv2.findHomography(p1, p2)
print(h_mat)

The Result:
[[ 5.06455163e-01 -1.05217436e-01 -7.85096847e+00]
[ -4.87824842e-02 5.53376881e-01 -1.81473551e+01]
[ -8.16705616e-04 -6.43053066e-04 1.00000000e+00]]

Using solve_DLT in this repo:

p1_tensor = tf.placeholder(tf.float32, (1, 8, 1))
p2_tensor = tf.placeholder(tf.float32, (1, 8, 1))
# I have modified the solve_DLT function as below:
# # pred_h4p_tile = tf.expand_dims(pred_h4p, [2]) # BATCH_SIZE x 8 x 1
# pred_pts_2_tile = tf.add(pred_h4p, pts_1_tile)
h = solve_DLT(1, p1_tensor, p2_tensor)

with tf.Session() as sess:
    p1_feed = np.expand_dims(np.reshape(p1, (8, 1)), axis=0)
    print(p1_feed)
    p2_feed = np.expand_dims(np.reshape(p3, (8, 1)), axis=0)
    mat = sess.run(h, feed_dict={p1_tensor: p1_feed, p2_tensor: p2_feed})
    print(mat)

The Result:
[[ 2.08181953e+00 3.77683282e-01 -5.56897621e+01]
[ 2.63900846e-01 1.90892375e+00 -2.00925694e+01]
[ 1.30298373e-03 1.01570075e-03 1.00000000e+00]]
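
As a side note: based on the pred_pts_2_tile = tf.add(pred_h4p, pts_1_tile) line quoted above, solve_DLT appears to expect the 4-point offsets p2 - p1, whereas the snippet feeds p3 = p1 - p2, which would already explain a different matrix. A pure-OpenCV sanity check of that offset parameterization (independent of the repo's solve_DLT) is:

import cv2
import numpy as np

p1 = np.float32([[194.6595, 118.8366], [258.6257, 84.6884],
                 [196.5097, 239.1188], [215.2767, 214.2356]])
p2 = np.float32([[102.3168, 49.8535], [155.5462, 21.9262],
                 [96.9939, 152.5195], [114.5596, 130.9756]])

delta = p2 - p1                                        # 4-point offset parameterization
h_dlt = cv2.getPerspectiveTransform(p1, p1 + delta)    # exact DLT on the 4 correspondences
h_lsq, _ = cv2.findHomography(p1, p2)                  # same correspondences via findHomography
print(np.allclose(h_dlt, h_lsq, atol=1e-4))            # expected: True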

A small bug

In gen_synthetic_data.py, line 63 should be inv instead of iinv

Final aggregated MACE

Hi @tynguyen ,

what is the final MACE of your model? You've broken down the metric into overlap and percentile, but I can't find the final MACE as reported in the DeTone paper.

Best,
Daniel

attempting to perform BLAS operation using StreamExecutor without BLAS support

Hi. I tried to test the model with the synthetic dataset:
python homography_CNN_synthetic.py --mode test --lr 5e-4 --loss_type h_loss
But something went wrong.
How can I solve the error?
Thanks a lot.

<==================== Loading data ===================>

===> There are totally 5000 test files
===> Test: There are totally 5000 Test files
--Shape of A_mat: [64, 8, 8]
--shape of b: [64, 8, 1]
--shape of H_8el Tensor("MatrixSolve:0", shape=(64, 8, 1), dtype=float32, device=/device:GPU:0)
('--Inter- scale_h:', True)
--Shape of A_mat: [64, 8, 8]
--shape of b: [64, 8, 1]
--shape of H_8el Tensor("MatrixSolve_1:0", shape=(64, 8, 1), dtype=float32, device=/device:GPU:1)
('--Inter- scale_h:', True)
2019-11-08 13:13:32.405944: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2019-11-08 13:13:32.405991: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2019-11-08 13:13:32.406016: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2019-11-08 13:13:32.406028: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2019-11-08 13:13:32.406037: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2019-11-08 13:13:32.803078: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.721
pciBusID 0000:03:00.0
Total memory: 10.91GiB
Free memory: 10.72GiB
2019-11-08 13:13:33.060704: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x555d0605e760 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-11-08 13:13:33.061707: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 1 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.721
pciBusID 0000:04:00.0
Total memory: 10.91GiB
Free memory: 10.75GiB
2019-11-08 13:13:33.261950: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x555d06062cb0 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-11-08 13:13:33.263073: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 2 with properties:
name: Tesla K40c
major: 3 minor: 5 memoryClockRate (GHz) 0.745
pciBusID 0000:81:00.0
Total memory: 11.17GiB
Free memory: 11.09GiB
2019-11-08 13:13:33.484606: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x555d06067220 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-11-08 13:13:33.485397: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 3 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.721
pciBusID 0000:82:00.0
Total memory: 10.91GiB
Free memory: 2.35GiB
2019-11-08 13:13:33.487296: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 0 and 2
2019-11-08 13:13:33.487355: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 0 and 3
2019-11-08 13:13:33.487403: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 1 and 2
2019-11-08 13:13:33.487426: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 1 and 3
2019-11-08 13:13:33.487440: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 2 and 0
2019-11-08 13:13:33.487451: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 2 and 1
2019-11-08 13:13:33.487462: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 2 and 3
2019-11-08 13:13:33.487480: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 3 and 0
2019-11-08 13:13:33.487499: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 3 and 1
2019-11-08 13:13:33.487512: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 3 and 2
2019-11-08 13:13:33.487569: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0 1 2 3
2019-11-08 13:13:33.487583: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y Y N N
2019-11-08 13:13:33.487592: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 1: Y Y N N
2019-11-08 13:13:33.487601: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 2: N N Y N
2019-11-08 13:13:33.487610: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 3: N N N Y
2019-11-08 13:13:33.487626: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0)
2019-11-08 13:13:33.487638: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:04:00.0)
2019-11-08 13:13:33.487649: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:2) -> (device: 2, name: Tesla K40c, pci bus id: 0000:81:00.0)
2019-11-08 13:13:33.487659: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:3) -> (device: 3, name: GeForce GTX 1080 Ti, pci bus id: 0000:82:00.0)
/home/chenxy/unsupervisedDeepHomographyRAL2018-master/main_log/docker_folder/post_estimation/models/synthetic_models/h_loss_normalize
Traceback (most recent call last):
File "homography_CNN_synthetic.py", line 597, in
test_homography()
File "homography_CNN_synthetic.py", line 584, in test_homography
test_obj.run_test(0)
File "homography_CNN_synthetic.py", line 517, in run_test
train_saver.restore(sess,tf.train.latest_checkpoint(args.model_dir))
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1548, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 997, in _run
feed_dict_string, options, run_metadata)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1132, in _do_run
target_list, options, run_metadata)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1152, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Unable to get element from the feed as bytes.
2019-11-08 13:13:34.661958: E tensorflow/stream_executor/cuda/cuda_blas.cc:365] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2019-11-08 13:13:34.661984: E tensorflow/stream_executor/stream.cc:289] Error recording event in stream: error recording CUDA event on stream 0x555d07914c90: CUDA_ERROR_DEINITIALIZED; not marking stream as bad, as the Event object may be at fault. Monitor for further errors.
2019-11-08 13:13:34.662042: W tensorflow/stream_executor/stream.cc:1601] attempting to perform BLAS operation using StreamExecutor without BLAS support
2019-11-08 13:13:34.662096: E tensorflow/stream_executor/cuda/cuda_blas.cc:365] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2019-11-08 13:13:34.662136: E tensorflow/stream_executor/cuda/cuda_event.cc:49] Error polling for event status: failed to query event: CUDA_ERROR_DEINITIALIZED
2019-11-08 13:13:34.662169: W tensorflow/stream_executor/stream.cc:1601] attempting to perform BLAS operation using StreamExecutor without BLAS support
2019-11-08 13:13:34.662176: F tensorflow/core/common_runtime/gpu/gpu_event_mgr.cc:203] Unexpected Event status: 1
2019-11-08 13:13:34.662232: E tensorflow/stream_executor/cuda/cuda_blas.cc:365] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2019-11-08 13:13:34.662292: W tensorflow/stream_executor/stream.cc:1601] attempting to perform BLAS operation using StreamExecutor without BLAS support
Aborted (core dumped)

InvalidArgumentError (see above for traceback): Input matrix is not invertible.

Hi. When I train the unsupervised model with the synthetic dataset, using "python homography_CNN_synthetic.py --mode test --lr 1e-4 --loss_type l1_loss", at first it runs well, but 7 hours later an error happened.
I don't know how to debug. Do you have any ideas?

my environment :
cuda 8.0.61
python 2.7
tensorflow-gpu 1.2.1 (or higher)
opencv 4.1.1
<==================== Loading data ===================>

===> There are totally 500 test files
===> Train: There are totally 10000 training files
args lr: 0.0001 9e-05
===> Decay steps: 58117.5893057
--Shape of A_mat: [64, 8, 8]
--shape of b: [64, 8, 1]
--shape of H_8el Tensor("MatrixSolve:0", shape=(64, 8, 1), dtype=float32, device=/device:GPU:0)
('--Inter- scale_h:', True)
/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py:93: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
====> Use loss type: l1_loss
--Shape of A_mat: [64, 8, 8]
--shape of b: [64, 8, 1]
--shape of H_8el Tensor("MatrixSolve_1:0", shape=(64, 8, 1), dtype=float32, device=/device:GPU:1)
('--Inter- scale_h:', True)
====> Use loss type: l1_loss
2019-11-05 11:33:53.441071: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2019-11-05 11:33:53.441128: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2019-11-05 11:33:53.441136: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2019-11-05 11:33:53.441142: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2019-11-05 11:33:53.441147: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2019-11-05 11:33:53.777061: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.721
pciBusID 0000:03:00.0
Total memory: 10.91GiB
Free memory: 10.72GiB
2019-11-05 11:33:53.999777: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x55cbe7c999a0 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-11-05 11:33:54.000875: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 1 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.721
pciBusID 0000:04:00.0
Total memory: 10.91GiB
Free memory: 10.75GiB
2019-11-05 11:33:54.179809: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x55cbe7c9dec0 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-11-05 11:33:54.180909: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 2 with properties:
name: Tesla K40c
major: 3 minor: 5 memoryClockRate (GHz) 0.745
pciBusID 0000:81:00.0
Total memory: 11.17GiB
Free memory: 11.09GiB
2019-11-05 11:33:54.377269: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x55cbe7ca2430 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-11-05 11:33:54.378280: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 3 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.721
pciBusID 0000:82:00.0
Total memory: 10.91GiB
Free memory: 10.75GiB
2019-11-05 11:33:54.379452: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 0 and 2
2019-11-05 11:33:54.379477: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 0 and 3
2019-11-05 11:33:54.379501: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 1 and 2
2019-11-05 11:33:54.379527: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 1 and 3
2019-11-05 11:33:54.379537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 2 and 0
2019-11-05 11:33:54.379544: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 2 and 1
2019-11-05 11:33:54.379552: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 2 and 3
2019-11-05 11:33:54.379565: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 3 and 0
2019-11-05 11:33:54.379578: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 3 and 1
2019-11-05 11:33:54.379588: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 3 and 2
2019-11-05 11:33:54.379631: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0 1 2 3
2019-11-05 11:33:54.379641: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y Y N N
2019-11-05 11:33:54.379648: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 1: Y Y N N
2019-11-05 11:33:54.379655: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 2: N N Y N
2019-11-05 11:33:54.379662: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 3: N N N Y
2019-11-05 11:33:54.379692: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0)
2019-11-05 11:33:54.379704: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:04:00.0)
2019-11-05 11:33:54.379712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:2) -> (device: 2, name: Tesla K40c, pci bus id: 0000:81:00.0)
2019-11-05 11:33:54.379720: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:3) -> (device: 3, name: GeForce GTX 1080 Ti, pci bus id: 0000:82:00.0)
===> Start step: 0
[training progress log, condensed: the first step reports h_loss 26.24 and l1_loss 0.617; the pool allocator raises pool_size_limit_ from 100 to 110; over roughly the next 2.5 hours the running averages decrease steadily, with h_loss down to about 19.3 and l1_loss to about 0.401, before the log ends]
Step: 1m27s | Tot: 2h42m | Train: 1, h_loss 19.318, l1_loss 0.400708, l1_smooth_loss 0.188 [====>............................................................] Step: 1m30s | Tot: 2h44m | Train: 1, h_loss 19.295, l1_loss 0.400314, l1_smooth_loss 0.188 [====>............................................................] Step: 1m27s | Tot: 2h45m | Train: 1, h_loss 19.271, l1_loss 0.399921, l1_smooth_loss 0.187 [====>............................................................] Step: 1m29s | Tot: 2h47m | Train: 1, h_loss 19.248, l1_loss 0.399544, l1_smooth_loss 0.187 [====>............................................................] Step: 1m27s | Tot: 2h48m | Train: 1, h_loss 19.225, l1_loss 0.399165, l1_smooth_loss 0.187 [====>............................................................] Step: 1m29s | Tot: 2h50m | Train: 1, h_loss 19.202, l1_loss 0.398792, l1_smooth_loss 0.187 [=====>...........................................................] Step: 1m33s | Tot: 2h51m | Train: 1, h_loss 19.180, l1_loss 0.398408, l1_smooth_loss 0.186 [=====>...........................................................] Step: 1m30s | Tot: 2h53m | Train: 1, h_loss 19.158, l1_loss 0.398033, l1_smooth_loss 0.186 [=====>...........................................................] Step: 1m27s | Tot: 2h54m | Train: 1, h_loss 19.136, l1_loss 0.397679, l1_smooth_loss 0.186 [=====>...........................................................] Step: 1m27s | Tot: 2h56m | Train: 1, h_loss 19.114, l1_loss 0.397290, l1_smooth_loss 0.186 [=====>...........................................................] Step: 1m27s | Tot: 2h57m | Train: 1, h_loss 19.092, l1_loss 0.396938, l1_smooth_loss 0.186 [=====>...........................................................] Step: 1m28s | Tot: 2h58m | Train: 1, h_loss 19.071, l1_loss 0.396579, l1_smooth_loss 0.185 [=====>...........................................................] Step: 1m27s | Tot: 3h26s | Train: 1, h_loss 19.049, l1_loss 0.396225, l1_smooth_loss 0.185 [=====>...........................................................] Step: 1m27s | Tot: 3h1m | Train: 1, h_loss 19.028, l1_loss 0.395865, l1_smooth_loss 0.1853 [=====>...........................................................] Step: 1m26s | Tot: 3h3m | Train: 1, h_loss 19.007, l1_loss 0.395526, l1_smooth_loss 0.1851 [=====>...........................................................] Step: 1m26s | Tot: 3h4m | Train: 1, h_loss 18.986, l1_loss 0.395203, l1_smooth_loss 0.1849 [=====>...........................................................] Step: 1m26s | Tot: 3h6m | Train: 1, h_loss 18.966, l1_loss 0.394842, l1_smooth_loss 0.1847 [=====>...........................................................] Step: 1m26s | Tot: 3h7m | Train: 1, h_loss 18.945, l1_loss 0.394488, l1_smooth_loss 0.1845 [=====>...........................................................] Step: 1m26s | Tot: 3h9m | Train: 1, h_loss 18.925, l1_loss 0.394175, l1_smooth_loss 0.1843 [=====>...........................................................] Step: 1m26s | Tot: 3h10m | Train: 1, h_loss 18.904, l1_loss 0.393843, l1_smooth_loss 0.184 [=====>...........................................................] Step: 1m26s | Tot: 3h11m | Train: 1, h_loss 18.883, l1_loss 0.393518, l1_smooth_loss 0.183 [=====>...........................................................] Step: 1m28s | Tot: 3h13m | Train: 1, h_loss 18.864, l1_loss 0.393197, l1_smooth_loss 0.183 [=====>...........................................................] 
Step: 1m27s | Tot: 3h14m | Train: 1, h_loss 18.845, l1_loss 0.392881, l1_smooth_loss 0.183 [=====>...........................................................] Step: 1m27s | Tot: 3h16m | Train: 1, h_loss 18.825, l1_loss 0.392557, l1_smooth_loss 0.183 [=====>...........................................................] Step: 1m27s | Tot: 3h17m | Train: 1, h_loss 18.805, l1_loss 0.392247, l1_smooth_loss 0.183 [=====>...........................................................] Step: 1m29s | Tot: 3h19m | Train: 1, h_loss 18.786, l1_loss 0.391926, l1_smooth_loss 0.182 [=====>...........................................................] Step: 1m31s | Tot: 3h20m | Train: 1, h_loss 18.766, l1_loss 0.391615, l1_smooth_loss 0.182 [=====>...........................................................] Step: 1m30s | Tot: 3h22m | Train: 1, h_loss 18.747, l1_loss 0.391311, l1_smooth_loss 0.182 [=====>...........................................................] Step: 1m29s | Tot: 3h23m | Train: 1, h_loss 18.728, l1_loss 0.390995, l1_smooth_loss 0.182 [======>..........................................................] Step: 1m26s | Tot: 3h25m | Train: 1, h_loss 18.710, l1_loss 0.390693, l1_smooth_loss 0.182 [======>..........................................................] Step: 1m26s | Tot: 3h26m | Train: 1, h_loss 18.691, l1_loss 0.390394, l1_smooth_loss 0.181 [======>..........................................................] Step: 1m33s | Tot: 3h28m | Train: 1, h_loss 18.673, l1_loss 0.390099, l1_smooth_loss 0.181 [======>..........................................................] Step: 1m31s | Tot: 3h29m | Train: 1, h_loss 18.654, l1_loss 0.389809, l1_smooth_loss 0.181 [======>..........................................................] Step: 1m30s | Tot: 3h31m | Train: 1, h_loss 18.635, l1_loss 0.389516, l1_smooth_loss 0.181 [======>..........................................................] Step: 1m33s | Tot: 3h32m | Train: 1, h_loss 18.617, l1_loss 0.389227, l1_smooth_loss 0.181 [======>..........................................................] Step: 1m33s | Tot: 3h34m | Train: 1, h_loss 18.599, l1_loss 0.388930, l1_smooth_loss 0.181 [======>..........................................................] Step: 1m32s | Tot: 3h36m | Train: 1, h_loss 18.581, l1_loss 0.388637, l1_smooth_loss 0.180 [======>..........................................................] Step: 1m30s | Tot: 3h37m | Train: 1, h_loss 18.564, l1_loss 0.388364, l1_smooth_loss 0.180 [======>..........................................................] Step: 1m30s | Tot: 3h39m | Train: 1, h_loss 18.546, l1_loss 0.388078, l1_smooth_loss 0.180 [======>..........................................................] Step: 1m30s | Tot: 3h40m | Train: 1, h_loss 18.529, l1_loss 0.387813, l1_smooth_loss 0.180 [======>..........................................................] Step: 1m30s | Tot: 3h42m | Train: 1, h_loss 18.512, l1_loss 0.387525, l1_smooth_loss 0.180 [======>..........................................................] Step: 1m34s | Tot: 3h43m | Train: 1, h_loss 18.495, l1_loss 0.387244, l1_smooth_loss 0.179 [======>..........................................................] Step: 1m32s | Tot: 3h45m | Train: 1, h_loss 18.478, l1_loss 0.386972, l1_smooth_loss 0.179 [======>..........................................................] Step: 1m33s | Tot: 3h46m | Train: 1, h_loss 18.460, l1_loss 0.386691, l1_smooth_loss 0.179 [======>..........................................................] 
Step: 1m34s | Tot: 3h48m | Train: 1, h_loss 18.444, l1_loss 0.386423, l1_smooth_loss 0.179 [======>..........................................................] Step: 1m35s | Tot: 3h49m | Train: 1, h_loss 18.427, l1_loss 0.386161, l1_smooth_loss 0.179 [======>..........................................................] Step: 1m36s | Tot: 3h51m | Train: 1, h_loss 18.410, l1_loss 0.385898, l1_smooth_loss 0.179 [======>..........................................................] Step: 1m36s | Tot: 3h53m | Train: 1, h_loss 18.394, l1_loss 0.385641, l1_smooth_loss 0.178 [======>..........................................................] Step: 1m34s | Tot: 3h54m | Train: 1, h_loss 18.378, l1_loss 0.385385, l1_smooth_loss 0.178 [======>..........................................................] Step: 1m32s | Tot: 3h56m | Train: 1, h_loss 18.361, l1_loss 0.385110, l1_smooth_loss 0.178 [======>..........................................................] Step: 1m32s | Tot: 3h57m | Train: 1, h_loss 18.344, l1_loss 0.384854, l1_smooth_loss 0.178 [======>..........................................................] Step: 1m34s | Tot: 3h59m | Train: 1, h_loss 18.329, l1_loss 0.384604, l1_smooth_loss 0.178 [=======>.........................................................] Step: 1m32s | Tot: 4h51s | Train: 1, h_loss 18.314, l1_loss 0.384355, l1_smooth_loss 0.178 [=======>.........................................................] Step: 1m34s | Tot: 4h2m | Train: 1, h_loss 18.297, l1_loss 0.384115, l1_smooth_loss 0.1779 [=======>.........................................................] Step: 1m33s | Tot: 4h3m | Train: 1, h_loss 18.281, l1_loss 0.383865, l1_smooth_loss 0.1778 [=======>.........................................................] Step: 1m35s | Tot: 4h5m | Train: 1, h_loss 18.266, l1_loss 0.383631, l1_smooth_loss 0.1776 [=======>.........................................................] Step: 1m38s | Tot: 4h7m | Train: 1, h_loss 18.250, l1_loss 0.383370, l1_smooth_loss 0.1775 [=======>.........................................................] Step: 1m32s | Tot: 4h8m | Train: 1, h_loss 18.234, l1_loss 0.383129, l1_smooth_loss 0.1773 [=======>.........................................................] Step: 1m30s | Tot: 4h10m | Train: 1, h_loss 18.218, l1_loss 0.382876, l1_smooth_loss 0.177 [=======>.........................................................] Step: 1m31s | Tot: 4h11m | Train: 1, h_loss 18.203, l1_loss 0.382635, l1_smooth_loss 0.177 [=======>.........................................................] Step: 1m30s | Tot: 4h13m | Train: 1, h_loss 18.188, l1_loss 0.382404, l1_smooth_loss 0.176 [=======>.........................................................] Step: 1m29s | Tot: 4h14m | Train: 1, h_loss 18.173, l1_loss 0.382161, l1_smooth_loss 0.176 [=======>.........................................................] Step: 1m27s | Tot: 4h16m | Train: 1, h_loss 18.158, l1_loss 0.381939, l1_smooth_loss 0.176 [=======>.........................................................] Step: 1m27s | Tot: 4h17m | Train: 1, h_loss 18.143, l1_loss 0.381711, l1_smooth_loss 0.176 [=======>.........................................................] Step: 1m27s | Tot: 4h19m | Train: 1, h_loss 18.128, l1_loss 0.381454, l1_smooth_loss 0.176 [=======>.........................................................] Step: 1m26s | Tot: 4h20m | Train: 1, h_loss 18.113, l1_loss 0.381229, l1_smooth_loss 0.176 [=======>.........................................................] 
Step: 1m26s | Tot: 4h22m | Train: 1, h_loss 18.099, l1_loss 0.381004, l1_smooth_loss 0.176 [=======>.........................................................] Step: 1m26s | Tot: 4h23m | Train: 1, h_loss 18.084, l1_loss 0.380790, l1_smooth_loss 0.175 [=======>.........................................................] Step: 1m26s | Tot: 4h24m | Train: 1, h_loss 18.070, l1_loss 0.380549, l1_smooth_loss 0.175 [=======>.........................................................] Step: 1m26s | Tot: 4h26m | Train: 1, h_loss 18.055, l1_loss 0.380339, l1_smooth_loss 0.175 [=======>.........................................................] Step: 1m26s | Tot: 4h27m | Train: 1, h_loss 18.041, l1_loss 0.380107, l1_smooth_loss 0.175 [=======>.........................................................] Step: 1m30s | Tot: 4h29m | Train: 1, h_loss 18.027, l1_loss 0.379878, l1_smooth_loss 0.175 [=======>.........................................................] Step: 1m29s | Tot: 4h30m | Train: 1, h_loss 18.012, l1_loss 0.379647, l1_smooth_loss 0.175 [=======>.........................................................] Step: 1m27s | Tot: 4h32m | Train: 1, h_loss 17.998, l1_loss 0.379425, l1_smooth_loss 0.175 [=======>.........................................................] Step: 1m27s | Tot: 4h33m | Train: 1, h_loss 17.984, l1_loss 0.379205, l1_smooth_loss 0.174 [========>........................................................] Step: 1m26s | Tot: 4h35m | Train: 1, h_loss 17.970, l1_loss 0.378988, l1_smooth_loss 0.174 [========>........................................................] Step: 1m27s | Tot: 4h36m | Train: 1, h_loss 17.957, l1_loss 0.378771, l1_smooth_loss 0.174 [========>........................................................] Step: 1m27s | Tot: 4h38m | Train: 1, h_loss 17.943, l1_loss 0.378558, l1_smooth_loss 0.174 [========>........................................................] Step: 1m27s | Tot: 4h39m | Train: 1, h_loss 17.929, l1_loss 0.378355, l1_smooth_loss 0.174 [========>........................................................] Step: 1m27s | Tot: 4h40m | Train: 1, h_loss 17.915, l1_loss 0.378141, l1_smooth_loss 0.174 [========>........................................................] Step: 1m26s | Tot: 4h42m | Train: 1, h_loss 17.901, l1_loss 0.377922, l1_smooth_loss 0.174 [========>........................................................] Step: 1m29s | Tot: 4h43m | Train: 1, h_loss 17.888, l1_loss 0.377727, l1_smooth_loss 0.173 [========>........................................................] Step: 1m28s | Tot: 4h45m | Train: 1, h_loss 17.875, l1_loss 0.377539, l1_smooth_loss 0.173 [========>........................................................] Step: 1m28s | Tot: 4h46m | Train: 1, h_loss 17.861, l1_loss 0.377336, l1_smooth_loss 0.173 [========>........................................................] Step: 1m27s | Tot: 4h48m | Train: 1, h_loss 17.848, l1_loss 0.377132, l1_smooth_loss 0.173 [========>........................................................] Step: 1m27s | Tot: 4h49m | Train: 1, h_loss 17.835, l1_loss 0.376942, l1_smooth_loss 0.173 [========>........................................................] Step: 1m28s | Tot: 4h51m | Train: 1, h_loss 17.822, l1_loss 0.376748, l1_smooth_loss 0.173 [========>........................................................] Step: 1m27s | Tot: 4h52m | Train: 1, h_loss 17.808, l1_loss 0.376552, l1_smooth_loss 0.173 [========>........................................................] 
Step: 1m27s | Tot: 4h54m | Train: 1, h_loss 17.796, l1_loss 0.376356, l1_smooth_loss 0.173 [========>........................................................] Step: 1m27s | Tot: 4h55m | Train: 1, h_loss 17.783, l1_loss 0.376159, l1_smooth_loss 0.173 [========>........................................................] Step: 1m27s | Tot: 4h57m | Train: 1, h_loss 17.770, l1_loss 0.375960, l1_smooth_loss 0.172 [========>........................................................] Step: 1m37s | Tot: 4h58m | Train: 1, h_loss 17.757, l1_loss 0.375771, l1_smooth_loss 0.172 [========>........................................................] Step: 1m33s | Tot: 5h18s | Train: 1, h_loss 17.744, l1_loss 0.375581, l1_smooth_loss 0.172 [========>........................................................] Step: 1m32s | Tot: 5h1m | Train: 1, h_loss 17.732, l1_loss 0.375392, l1_smooth_loss 0.1725 [========>........................................................] Step: 1m31s | Tot: 5h3m | Train: 1, h_loss 17.720, l1_loss 0.375207, l1_smooth_loss 0.1724 [========>........................................................] Step: 1m29s | Tot: 5h4m | Train: 1, h_loss 17.707, l1_loss 0.375017, l1_smooth_loss 0.1723 [========>........................................................] Step: 1m28s | Tot: 5h6m | Train: 1, h_loss 17.694, l1_loss 0.374831, l1_smooth_loss 0.1721 [========>........................................................] Step: 1m29s | Tot: 5h7m | Train: 1, h_loss 17.682, l1_loss 0.374640, l1_smooth_loss 0.1720 [=========>.......................................................] Step: 1m28s | Tot: 5h9m | Train: 1, h_loss 17.670, l1_loss 0.374454, l1_smooth_loss 0.1719 [=========>.......................................................] Step: 1m28s | Tot: 5h10m | Train: 1, h_loss 17.657, l1_loss 0.374258, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m28s | Tot: 5h12m | Train: 1, h_loss 17.645, l1_loss 0.374077, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m36s | Tot: 5h13m | Train: 1, h_loss 17.633, l1_loss 0.373894, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m32s | Tot: 5h15m | Train: 1, h_loss 17.621, l1_loss 0.373720, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m30s | Tot: 5h16m | Train: 1, h_loss 17.609, l1_loss 0.373545, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m28s | Tot: 5h18m | Train: 1, h_loss 17.597, l1_loss 0.373368, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m28s | Tot: 5h19m | Train: 1, h_loss 17.585, l1_loss 0.373184, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m28s | Tot: 5h21m | Train: 1, h_loss 17.573, l1_loss 0.373002, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m28s | Tot: 5h22m | Train: 1, h_loss 17.562, l1_loss 0.372830, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m27s | Tot: 5h24m | Train: 1, h_loss 17.550, l1_loss 0.372653, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m28s | Tot: 5h25m | Train: 1, h_loss 17.538, l1_loss 0.372482, l1_smooth_loss 0.170 [=========>.......................................................] 
Step: 1m28s | Tot: 5h27m | Train: 1, h_loss 17.526, l1_loss 0.372299, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m36s | Tot: 5h28m | Train: 1, h_loss 17.515, l1_loss 0.372128, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m32s | Tot: 5h30m | Train: 1, h_loss 17.504, l1_loss 0.371962, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m31s | Tot: 5h31m | Train: 1, h_loss 17.492, l1_loss 0.371791, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m29s | Tot: 5h33m | Train: 1, h_loss 17.481, l1_loss 0.371626, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m29s | Tot: 5h34m | Train: 1, h_loss 17.469, l1_loss 0.371464, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m29s | Tot: 5h36m | Train: 1, h_loss 17.458, l1_loss 0.371293, l1_smooth_loss 0.169 [=========>.......................................................] Step: 1m29s | Tot: 5h37m | Train: 1, h_loss 17.447, l1_loss 0.371116, l1_smooth_loss 0.169 [=========>.......................................................] Step: 1m28s | Tot: 5h39m | Train: 1, h_loss 17.436, l1_loss 0.370954, l1_smooth_loss 0.169 [=========>.......................................................] Step: 1m28s | Tot: 5h40m | Train: 1, h_loss 17.425, l1_loss 0.370783, l1_smooth_loss 0.169 [=========>.......................................................] Step: 1m28s | Tot: 5h42m | Train: 1, h_loss 17.414, l1_loss 0.370616, l1_smooth_loss 0.169 [==========>......................................................] Step: 1m38s | Tot: 5h43m | Train: 1, h_loss 17.403, l1_loss 0.370454, l1_smooth_loss 0.169 [==========>......................................................] Step: 1m32s | Tot: 5h45m | Train: 1, h_loss 17.392, l1_loss 0.370305, l1_smooth_loss 0.169 [==========>......................................................] Step: 1m35s | Tot: 5h47m | Train: 1, h_loss 17.381, l1_loss 0.370136, l1_smooth_loss 0.169 [==========>......................................................] Step: 1m30s | Tot: 5h48m | Train: 1, h_loss 17.370, l1_loss 0.369974, l1_smooth_loss 0.169 [==========>......................................................] Step: 1m29s | Tot: 5h50m | Train: 1, h_loss 17.359, l1_loss 0.369803, l1_smooth_loss 0.169 [==========>......................................................] Step: 1m28s | Tot: 5h51m | Train: 1, h_loss 17.348, l1_loss 0.369638, l1_smooth_loss 0.168 [==========>......................................................] Step: 1m29s | Tot: 5h53m | Train: 1, h_loss 17.338, l1_loss 0.369479, l1_smooth_loss 0.168 [==========>......................................................] Step: 1m29s | Tot: 5h54m | Train: 1, h_loss 17.327, l1_loss 0.369318, l1_smooth_loss 0.168 [==========>......................................................] Step: 1m28s | Tot: 5h56m | Train: 1, h_loss 17.317, l1_loss 0.369161, l1_smooth_loss 0.168 [==========>......................................................] Step: 1m29s | Tot: 5h57m | Train: 1, h_loss 17.306, l1_loss 0.369008, l1_smooth_loss 0.168 [==========>......................................................] Step: 1m36s | Tot: 5h59m | Train: 1, h_loss 17.295, l1_loss 0.368849, l1_smooth_loss 0.168 [==========>......................................................] 
Step: 1m34s | Tot: 6h39s | Train: 1, h_loss 17.285, l1_loss 0.368691, l1_smooth_loss 0.168 [==========>......................................................] Step: 1m33s | Tot: 6h2m | Train: 1, h_loss 17.275, l1_loss 0.368535, l1_smooth_loss 0.1682 [==========>......................................................] Step: 1m32s | Tot: 6h3m | Train: 1, h_loss 17.264, l1_loss 0.368386, l1_smooth_loss 0.1681 [==========>......................................................] Step: 1m30s | Tot: 6h5m | Train: 1, h_loss 17.254, l1_loss 0.368228, l1_smooth_loss 0.1680 [==========>......................................................] Step: 1m28s | Tot: 6h6m | Train: 1, h_loss 17.243, l1_loss 0.368077, l1_smooth_loss 0.1679 [==========>......................................................] Step: 1m28s | Tot: 6h8m | Train: 1, h_loss 17.233, l1_loss 0.367915, l1_smooth_loss 0.1678 [==========>......................................................] Step: 1m29s | Tot: 6h9m | Train: 1, h_loss 17.224, l1_loss 0.367772, l1_smooth_loss 0.1678 [==========>......................................................] Step: 1m28s | Tot: 6h11m | Train: 1, h_loss 17.214, l1_loss 0.367617, l1_smooth_loss 0.167 [==========>......................................................] Step: 1m28s | Tot: 6h12m | Train: 1, h_loss 17.203, l1_loss 0.367470, l1_smooth_loss 0.167 [==========>......................................................] Step: 1m34s | Tot: 6h14m | Train: 1, h_loss 17.194, l1_loss 0.367321, l1_smooth_loss 0.167 [==========>......................................................] Step: 1m32s | Tot: 6h15m | Train: 1, h_loss 17.183, l1_loss 0.367172, l1_smooth_loss 0.167 [==========>......................................................] Step: 1m30s | Tot: 6h17m | Train: 1, h_loss 17.174, l1_loss 0.367031, l1_smooth_loss 0.167 [===========>.....................................................] Step: 1m33s | Tot: 6h18m | Train: 1, h_loss 17.164, l1_loss 0.366884, l1_smooth_loss 0.167 [===========>.....................................................] Step: 1m34s | Tot: 6h20m | Train: 1, h_loss 17.154, l1_loss 0.366727, l1_smooth_loss 0.167 [===========>.....................................................] Step: 1m33s | Tot: 6h21m | Train: 1, h_loss 17.144, l1_loss 0.366583, l1_smooth_loss 0.167 [===========>.....................................................] Step: 1m33s | Tot: 6h23m | Train: 1, h_loss 17.134, l1_loss 0.366440, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m36s | Tot: 6h25m | Train: 1, h_loss 17.124, l1_loss 0.366297, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m36s | Tot: 6h26m | Train: 1, h_loss 17.115, l1_loss 0.366161, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m36s | Tot: 6h28m | Train: 1, h_loss 17.105, l1_loss 0.366021, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m31s | Tot: 6h29m | Train: 1, h_loss 17.095, l1_loss 0.365882, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m32s | Tot: 6h31m | Train: 1, h_loss 17.085, l1_loss 0.365740, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m30s | Tot: 6h32m | Train: 1, h_loss 17.076, l1_loss 0.365607, l1_smooth_loss 0.166 [===========>.....................................................] 
Step: 1m32s | Tot: 6h34m | Train: 1, h_loss 17.066, l1_loss 0.365467, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m31s | Tot: 6h35m | Train: 1, h_loss 17.057, l1_loss 0.365331, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m28s | Tot: 6h37m | Train: 1, h_loss 17.047, l1_loss 0.365201, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m28s | Tot: 6h38m | Train: 1, h_loss 17.038, l1_loss 0.365061, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m27s | Tot: 6h40m | Train: 1, h_loss 17.028, l1_loss 0.364926, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m27s | Tot: 6h41m | Train: 1, h_loss 17.019, l1_loss 0.364787, l1_smooth_loss 0.165 [===========>.....................................................] Step: 1m28s | Tot: 6h43m | Train: 1, h_loss 17.010, l1_loss 0.364656, l1_smooth_loss 0.165 [===========>.....................................................] Step: 1m32s | Tot: 6h44m | Train: 1, h_loss 17.001, l1_loss 0.364521, l1_smooth_loss 0.165 [===========>.....................................................] Step: 1m31s | Tot: 6h46m | Train: 1, h_loss 16.991, l1_loss 0.364384, l1_smooth_loss 0.165 [===========>.....................................................] Step: 1m31s | Tot: 6h47m | Train: 1, h_loss 16.982, l1_loss 0.364251, l1_smooth_loss 0.165 [===========>.....................................................] Step: 1m29s | Tot: 6h49m | Train: 1, h_loss 16.973, l1_loss 0.364125, l1_smooth_loss 0.165 [===========>.....................................................] Step: 1m34s | Tot: 6h50m | Train: 1, h_loss 16.964, l1_loss 0.363998, l1_smooth_loss 0.165 [===========>.....................................................] Step: 1m34s | Tot: 6h52m | Train: 1, h_loss 16.955, l1_loss 0.363857, l1_smooth_loss 0.165 [============>....................................................] Step: 1m33s | Tot: 6h54m | Train: 1, h_loss 16.946, l1_loss 0.363725, l1_smooth_loss 0.165 [============>....................................................] Step: 1m33s | Tot: 6h55m | Train: 1, h_loss 16.937, l1_loss 0.363590, l1_smooth_loss 0.165 [============>....................................................] Step: 1m30s | Tot: 6h57m | Train: 1, h_loss 16.928, l1_loss 0.363456, l1_smooth_loss 0.165 [============>....................................................] Step: 1m29s | Tot: 6h58m | Train: 1, h_loss 16.919, l1_loss 0.363323, l1_smooth_loss 0.165 [============>....................................................] Step: 1m31s | Tot: 7h11s | Train: 1, h_loss 16.910, l1_loss 0.363194, l1_smooth_loss 0.164 [============>....................................................] Step: 1m31s | Tot: 7h1m | Train: 1, h_loss 16.902, l1_loss 0.363065, l1_smooth_loss 0.1648 [============>....................................................] Step: 1m28s | Tot: 7h3m | Train: 1, h_loss 16.893, l1_loss 0.362929, l1_smooth_loss 0.1648 [============>....................................................] Step: 1m28s | Tot: 7h4m | Train: 1, h_loss 16.885, l1_loss 0.362801, l1_smooth_loss 0.1647 [============>....................................................] Step: 1m28s | Tot: 7h6m | Train: 1, h_loss 16.876, l1_loss 0.362674, l1_smooth_loss 0.1646 [============>....................................................] 
Step: 1m27s | Tot: 7h7m | Train: 1, h_loss 16.867, l1_loss 0.362546, l1_smooth_loss 0.1645 [============>....................................................] Step: 1m27s | Tot: 7h9m | Train: 1, h_loss 16.859, l1_loss 0.362419, l1_smooth_loss 0.1644 [============>....................................................] Step: 1m28s | Tot: 7h10m | Train: 1, h_loss 16.850, l1_loss 0.362291, l1_smooth_loss 0.164 [============>....................................................] Step: 1m28s | Tot: 7h12m | Train: 1, h_loss 16.842, l1_loss 0.362177, l1_smooth_loss 0.164Traceback (most recent call last):
File "homography_CNN_synthetic.py", line 595, in
train()
File "homography_CNN_synthetic.py", line 337, in train
_, h_loss_value, l1_loss_value, l1_smooth_loss_value, lr_value = sess.run([apply_grad_opt, total_h_loss, total_l1_loss, total_l1_smooth_loss, learning_rate])
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 997, in _run
feed_dict_string, options, run_metadata)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1132, in _do_run
target_list, options, run_metadata)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1152, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input matrix is not invertible.
[[Node: gradients/MatrixSolve_grad/MatrixSolve = MatrixSolve[T=DT_FLOAT, adjoint=true, _device="/job:localhost/replica:0/task:0/cpu:0"](transpose_1/_551, gradients/concat_grad/tuple/control_dependency/_593)]]

Caused by op u'gradients/MatrixSolve_grad/MatrixSolve', defined at:
File "homography_CNN_synthetic.py", line 595, in
train()
File "homography_CNN_synthetic.py", line 265, in train
grads = opt_step.compute_gradients(l1_loss)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 386, in compute_gradients
colocate_gradients_with_ops=colocate_gradients_with_ops)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 540, in gradients
grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 346, in _MaybeCompile
return grad_fn() # Exit early
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 540, in
grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/ops/linalg_grad.py", line 69, in _MatrixSolveGrad
grad_b = linalg_ops.matrix_solve(a, grad, adjoint=not adjoint_a)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/ops/gen_linalg_ops.py", line 336, in matrix_solve
adjoint=adjoint, name=name)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1269, in init
self._traceback = _extract_stack()

...which was originally created as op u'MatrixSolve', defined at:
File "homography_CNN_synthetic.py", line 595, in
train()
File "homography_CNN_synthetic.py", line 233, in train
pts1_splits[i], gt_splits[i], patch_indices_splits[i], reuse_variables=reuse_variables, model_index=i)
File "/home/chenxy/unsupervisedDeepHomographyRAL2018-master/code/homography_model.py", line 82, in init
self.solve_DLT()
File "/home/chenxy/unsupervisedDeepHomographyRAL2018-master/code/homography_model.py", line 242, in solve_DLT
H_8el = tf.matrix_solve(A_mat , b_mat) # BATCH_SIZE x 8.
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/ops/gen_linalg_ops.py", line 336, in matrix_solve
adjoint=adjoint, name=name)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1269, in init
self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): Input matrix is not invertible.
[[Node: gradients/MatrixSolve_grad/MatrixSolve = MatrixSolve[T=DT_FLOAT, adjoint=true, _device="/job:localhost/replica:0/task:0/cpu:0"](transpose_1/_551, gradients/concat_grad/tuple/control_dependency/_593)]]
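
The "Input matrix is not invertible" error above comes from the batched tf.matrix_solve(A_mat, b_mat) call in solve_DLT: the 8x8 DLT system can become singular when the four point correspondences are degenerate (for example, nearly collinear corners early in training). A minimal sketch of one possible workaround is shown below; it is not part of the original code, and the helper name solve_dlt_damped and the damping value are illustrative assumptions. It damps A_mat with a small multiple of the identity so the batched solve never sees an exactly singular matrix.

  import tensorflow as tf

  def solve_dlt_damped(A_mat, b_mat, damping=1e-6):
      # Hypothetical drop-in for the tf.matrix_solve(A_mat, b_mat) call in
      # homography_model.py (assumed shapes: A_mat [batch, 8, 8], b_mat [batch, 8, 1]).
      # Adding damping * I keeps A_mat away from exact singularity, at the
      # cost of a tiny bias in the recovered 8 homography parameters.
      batched_eye = tf.matrix_diag(tf.ones_like(A_mat[:, :, 0]))  # batched [8, 8] identity
      return tf.matrix_solve(A_mat + damping * batched_eye, b_mat)

An alternative with a similar effect is tf.matrix_solve_ls with a small l2_regularizer, which tolerates rank-deficient systems.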

Error - dataloader - TypeError: "cannot create weak reference to 'builtin_function_or_method' object"

I have been trying to run homography_CNN_synthetic.py. I followed the instructions and managed to generate the synthetic data, but when I run homography_CNN_synthetic.py in test mode I get the following error at dataloader.py, line 156, in __init__:
full_I = self.read_image(I_path, [self.full_img_h, self.full_img_w], channels=3)

TypeError: cannot create weak reference to 'builtin_function_or_method' object

(complete error pasted below)

I tried debugging it but had no luck. Any idea what might be causing this error?

python homography_CNN_synthetic.py --mode test --lr 5e-4 --loss_type h_loss
<==================== Loading data ===================>

===> There are totally 5000 test files
===> Test: There are totally 5000 Test files
Traceback (most recent call last):
File "homography_CNN_synthetic.py", line 622, in
test_homography()
File "homography_CNN_synthetic.py", line 608, in test_homography
test_obj = TestHomography()
File "homography_CNN_synthetic.py", line 429, in init
data_loader = Dataloader(test_dataloader_params, shuffle=True) # No shuffle
File "/home/sophiabano/Python_code/unsupervisedDeepHomographyRAL2018/code/dataloader.py", line 156, in init
full_I = self.read_image(I_path, [self.full_img_h, self.full_img_w], channels=3)
File "/home/sophiabano/Python_code/unsupervisedDeepHomographyRAL2018/code/dataloader.py", line 240, in read_image
path_length = string_length_tf(image_path)[0]
File "/home/sophiabano/Python_code/unsupervisedDeepHomographyRAL2018/code/dataloader.py", line 44, in string_length_tf
return tf.py_func(len, [t], [tf.int64])
File "/home/sophiabano/Environments/projectunsupervisedhomo/local/lib/python2.7/site-packages/tensorflow/python/ops/script_ops.py", line 384, in py_func
func=func, inp=inp, Tout=Tout, stateful=stateful, eager=False, name=name)
File "/home/sophiabano/Environments/projectunsupervisedhomo/local/lib/python2.7/site-packages/tensorflow/python/ops/script_ops.py", line 199, in _internal_py_func
token = _py_funcs.insert(func)
File "/home/sophiabano/Environments/projectunsupervisedhomo/local/lib/python2.7/site-packages/tensorflow/python/ops/script_ops.py", line 100, in insert
self._funcs[token] = func
File "/usr/lib/python2.7/weakref.py", line 108, in setitem
self.data[key] = KeyedRef(value, self._remove, key)
File "/usr/lib/python2.7/weakref.py", line 278, in new
self = ref.new(type, ob, callback)
TypeError: cannot create weak reference to 'builtin_function_or_method' object
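
One possible workaround for this TypeError (not from the repository authors, just a sketch) is to avoid handing the built-in len directly to tf.py_func. As the traceback shows, some TensorFlow versions store the registered function in a weak-reference dictionary, and built-in functions cannot be weak-referenced, so wrapping len in a plain Python lambda inside string_length_tf avoids the failure:

  import tensorflow as tf

  def string_length_tf(t):
      # Possible patch for string_length_tf in code/dataloader.py: a lambda is a
      # regular Python function object, so tf.py_func can hold a weak reference
      # to it, unlike the built-in len that triggers the TypeError above.
      return tf.py_func(lambda x: len(x), [t], [tf.int64])

The op behaves exactly as before; only the Python object registered with tf.py_func changes.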

version of python

Hello, can this project run on Windows? If yes, please tell me which version of Python it needs. Thank you.
