
Depth-Aware Multi-Grid Deep Homography Estimation with Contextual Correlation (paper)

Lang Nie*, Chunyu Lin*, Kang Liao*, Shuaicheng Liu`, Yao Zhao*

* Institute of Information Science, Beijing Jiaotong University

` School of Information and Communication Engineering, University of Electronic Science and Technology of China

Requirements

  • python 3.6
  • numpy 1.18.1
  • tensorflow 1.13.1

For pytorch users

The official code is based on TensorFlow. We also provide a simple PyTorch implementation of CCL for PyTorch users; please refer to https://github.com/nie-lang/Multi-Grid-Deep-Homography/blob/main/CCL_pytorch.py.
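
The idea behind a correlation layer can be sketched framework-agnostically. The snippet below computes a plain global correlation volume between two L2-normalized feature maps, which is the starting point of the contextual correlation described in the paper; it is a simplified illustration, not the actual CCL_pytorch.py API, and all names and shapes are hypothetical:

```python
import numpy as np

def correlation_volume(f1, f2):
    """f1, f2: (H, W, C) feature maps -> (H, W, H*W) correlation volume."""
    h, w, c = f1.shape
    a = f1.reshape(h * w, c)
    b = f2.reshape(h * w, c)
    # L2-normalize the channel dimension so the dot product is cosine similarity
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-8)
    corr = a @ b.T                       # (H*W, H*W) pairwise similarities
    return corr.reshape(h, w, h * w)     # one similarity map per source position

vol = correlation_volume(np.random.rand(4, 4, 8).astype(np.float32),
                         np.random.rand(4, 4, 8).astype(np.float32))
print(vol.shape)  # (4, 4, 16)
```

Note that such a global volume grows as (H*W)^2, which is why memory use at larger feature resolutions can become an issue.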

The PyTorch version has not been strictly tested. If you encounter any problems, please feel free to contact me ([email protected]).

Dataset Preparation

step 1

We use UDIS-D for training. Please download it.

step 2

We adopt a pretrained monocular depth estimation model to obtain the depth of 'input2' in the training set. Please download the depth estimation results from Google Drive or Baidu Cloud (extraction code: 1234), then place the 'depth2' folder in the 'training' folder of UDIS-D. (Please refer to the paper for more details about the depth.)
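
After step 2 the training folder should hold the downloaded depth next to the UDIS-D inputs; a sketch of the expected layout (adjust the root to wherever you extracted UDIS-D):

```shell
# 'input1'/'input2' come with UDIS-D; 'depth2' is the downloaded
# depth-estimation result. These mkdirs only illustrate the layout.
mkdir -p UDIS-D/training/input1 UDIS-D/training/input2
mkdir -p UDIS-D/training/depth2
ls UDIS-D/training
```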

For windows system

For Windows users, change '/' to '\\' in line 73 of 'Codes/utils.py'.
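
Alternatively, the path handling can be made OS-agnostic so the same code runs on both systems; a minimal sketch (the actual variable names in 'Codes/utils.py' differ, this only illustrates the idea):

```python
import os

# Instead of hard-coding '/' or '\\', split on the platform separator,
# or take the last path component with os.path.basename.
path = os.path.join('training', 'input2', '000001.jpg')
parts = path.split(os.sep)        # works on Windows and Linux alike
name = os.path.basename(path)     # last path component, separator-free
print(parts[-1], name)            # 000001.jpg 000001.jpg
```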

Training

Step 1: Training without depth assistance

Modify 'Codes/constant.py' to set 'TRAIN_FOLDER'/'ITERATIONS'/'GPU'. In our experiment, we set 'ITERATIONS' to 300,000.

Modify the weight of shape-preserved loss in 'Codes/train_H.py' by setting 'lam_mesh' to 0.
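
The two training stages differ only in how the shape-preserved (mesh) term is weighted in the total objective; schematically (the function and term names below are illustrative stand-ins, see 'Codes/train_H.py' for the actual loss terms):

```python
# 'content_loss' and 'mesh_loss' stand in for the real terms in train_H.py;
# 'lam_mesh' gates the shape-preserved term on or off.
def total_loss(content_loss, mesh_loss, lam_mesh):
    return content_loss + lam_mesh * mesh_loss

stage1 = total_loss(content_loss=1.5, mesh_loss=0.3, lam_mesh=0)   # this step
stage2 = total_loss(content_loss=1.5, mesh_loss=0.3, lam_mesh=10)  # finetuning
print(stage1, stage2)  # 1.5 4.5
```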

Then, start the training without depth assistance:

cd Codes/
python train_H.py

Step 2: Finetuning with depth assistance

Modify 'Codes/constant.py' to set 'TRAIN_FOLDER'/'ITERATIONS'/'GPU'. In our experiment, we set 'ITERATIONS' to 500,000.

Modify the weight of shape-preserved loss in 'Codes/train_H.py' by setting 'lam_mesh' to 10.

Then, finetune the model with depth assistance:

python train_H.py

Testing

Our pretrained model

Our pretrained homography model is available at Google Drive or Baidu Cloud (extraction code: 1234). Place it in the 'Codes/checkpoints/' folder.

Testing with your own model

Modify 'Codes/constant.py' to set 'TEST_FOLDER'/'GPU'. The path to the checkpoint file can be modified in 'Codes/inference.py'.

Run:

python inference.py

Meta

NIE Lang -- [email protected]

@ARTICLE{9605632,
  author={Nie, Lang and Lin, Chunyu and Liao, Kang and Liu, Shuaicheng and Zhao, Yao},
  journal={IEEE Transactions on Circuits and Systems for Video Technology}, 
  title={Depth-Aware Multi-Grid Deep Homography Estimation With Contextual Correlation}, 
  year={2022},
  volume={32},
  number={7},
  pages={4460-4472},
  doi={10.1109/TCSVT.2021.3125736}}


multi-grid-deep-homography's Issues

Question

Hello author, what are the specific steps for predicting the depth maps with the pretrained model mentioned in the paper?

requirements.txt file isn't necessary??

Hi,
Thank you for sharing your code.
Don't you think a requirements.txt file is necessary in order to pin the right versions of all modules used in your code? For example, 'from tensorflow.contrib.layers import conv2d' refers to an older version of TensorFlow, and I think an older version of Python is required to import that module. Can you enlighten me?

Thanks
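
For reference, a minimal pinned environment consistent with the versions in the Requirements section above (a sketch only; 'tensorflow.contrib' indeed requires the 1.x line, which in turn needs Python 3.6/3.7, and CUDA/cuDNN pins are not covered here):

```text
# requirements.txt (for Python 3.6)
numpy==1.18.1
tensorflow==1.13.1
```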

Training depth image

Excuse me, how can I get the training depth images? Depth generation does not seem to work when I train with my own dataset.

CCL_pytorch

Very nice project! However, the CCL_pytorch implementation you provide may cause a memory overflow when used; did you try it? I just tested it. Looking forward to your reply.

Doubt

Hi, I am going through the code and have some doubts.

  1. What is the purpose of the scale transformation matrix?

     M = np.array([[patch_size / 2.0, 0., patch_size / 2.0],
                   [0., patch_size / 2.0, patch_size / 2.0],
                   [0., 0., 1.]]).astype(np.float32)

     It seems to me that matrix M also has a translation component.

  2. What is this line doing?

     H1_mat = tf.matmul(tf.matmul(M_tile_inv, H1), M_tile)

(Code snippets are from H_model.py, function name: H_model.)
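
On the first doubt, a common reading is that M (and its inverse) converts between normalized coordinates in [-1, 1] and pixel coordinates in [0, patch_size], which is why it carries both a scale and a translation; a small numpy check of that interpretation (hedged, based only on the snippet above rather than the full code):

```python
import numpy as np

patch_size = 128.0
# M maps normalized coords [-1, 1] to pixel coords [0, patch_size]:
# x_pix = (patch_size / 2) * x_norm + patch_size / 2
M = np.array([[patch_size / 2.0, 0., patch_size / 2.0],
              [0., patch_size / 2.0, patch_size / 2.0],
              [0., 0., 1.]], dtype=np.float32)

corner = np.array([-1.0, -1.0, 1.0])   # top-left corner in normalized coords
print(M @ corner)                      # maps to pixel (0, 0)

center = np.array([0.0, 0.0, 1.0])
print(M @ center)                      # maps to the patch center (64, 64)

# So a composition like M_inv @ H @ M applies M first, then H, then M_inv,
# i.e. it re-expresses a homography between the two coordinate conventions.
```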

Predicting depth maps

Hello author, when using the pretrained model mentioned in the paper to predict depth maps, must the training environment's PyTorch version be 0.4.1 as stated in the original work?

Resolution of 128x128

Hello, I would like to ask which parts of the code need to be modified to test at a resolution of 128x128. I tried making some modifications but encountered the following error.

ValueError: Dimension 1 in both shapes must be equal, but are 16 and 64. Shapes are [1,16,16] and [1,64,64]. for 'generator/model/feature_extract/conv_block4/concat' (op: 'ConcatV2') with input shapes: [1,16,16,128], [1,64,64,64], [1,64,64,64], [1,64,64,128], [] and with computed input tensors: input[4] = <3>.

about feature flow

Dr. Nie, I would like to ask why there is one 'mod W' and one '// W' in the formula for predicting the horizontal and vertical motion values of the feature flow. Also, is there a more detailed description of this formula? I only half understand it. Thank you.
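
On the mod/floor-division question: when every target position of an H-by-W grid is addressed by a single flat index k, the horizontal coordinate is recovered as k mod W and the vertical one as k // W. A small numpy illustration of just that index decoding (simplified, with hypothetical names):

```python
import numpy as np

H, W = 4, 6
k = np.arange(H * W)      # flat index over a 4x6 grid of target positions
x = k % W                 # horizontal coordinate: column index
y = k // W                # vertical coordinate: row index

# flat index 15 on a width-6 grid sits at row 2, column 3
print(x[15], y[15])       # 3 2

# Subtracting the source position from (x, y) gives a motion vector;
# in the paper these are aggregated over the correlation weights.
```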

question

Dear Dr. Nie! I have been reproducing your work recently and want to apply it further to stitch the target image and the reference image into a complete panorama, but I don't know how to do it. Could you please advise? Thank you.

Pytorch

Dear Dr. Nie! Could you please provide a PyTorch implementation of this paper?
