
deepmatchvo's Introduction

DeepMatchVO

Implementation of the ICRA 2019 paper: Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation

@inproceedings{shen2019icra,  
  title={Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation},
  author={Shen, Tianwei and Luo, Zixin and Zhou, Lei and Deng, Hanyu and Zhang, Runze and Fang, Tian and Quan, Long},  
  booktitle={International Conference on Robotics and Automation},  
  year={2019},  
  organization={IEEE}  
}

Update (Sep 26, 2019):

We published a follow-up paper on this topic, whose updated loss terms have a positive influence on depth estimation performance. See Self-Supervised Learning of Depth and Motion Under Photometric Inconsistency for details.

@inproceedings{shen2019iccvw,  
  title={Self-Supervised Learning of Depth and Motion Under Photometric Inconsistency},
  author={Shen, Tianwei and Zhou, Lei and Luo, Zixin and Yao, Yao and Li, Shiwei and Zhang, Jiahui and Fang, Tian and Quan, Long},  
  booktitle={International Conference on Computer Vision (ICCV) Workshops},  
  year={2019},  
  organization={IEEE}  
}

Environment

This codebase is tested on Ubuntu 16.04 with TensorFlow 1.7 and CUDA 9.0.

Demo

Download Pre-trained Models

Download the models presented in the paper, and then unzip them into the ckpt folder under the root.

Run a Simple Script

After downloading the model, you can run a simple demo to make sure the setup is correct.

python demo.py

If everything is set up correctly, the demo runs without errors and displays its output.

Generate Train and Test Data

Given that you have already downloaded the KITTI odometry and raw datasets, the provided Python script data/prepare_train_data.py can generate the training data with SIFT feature matches. However, the feature and match files follow our internal format, which is not publicly available at this point. Alternatively, we suggest first generating the concatenated image triplets by

# for odometry dataset
python data/prepare_train_data.py --dataset_dir=$kitti_raw_odom --dataset_name=kitti_odom --dump_root=$kitti_odom_match3 --seq_length=3 --img_width=416 --img_height=128 --num_threads=8

where $kitti_raw_odom and $kitti_odom_match3 are the input odometry dataset and the output training files, respectively. Some example input paths (from my machine) are shown in command.sh.

Then download our pre-computed camera/match files from link, and replace the corresponding generated camera files in $kitti_odom_match3 with the downloaded ones. Each file (one per image triplet) contains the camera intrinsics on the first line, followed by 200 (2*100) lines of matching coordinates for the two image pairs (target image with the left source image, and target image with the right source image).
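For concreteness, here is a minimal Python sketch of how one of these camera/match files could be parsed, under the layout just described. The per-line match format (assumed here to be x1 y1 x2 y2), the delimiter handling, and the function name are our assumptions, not part of the codebase:

import numpy as np

def load_cam_match_file(path, match_num=100):
    # Layout per the description above: line 1 holds the camera intrinsics
    # (assumed to be 9 row-major values of the 3x3 matrix), followed by
    # 2*match_num match lines, each assumed to be: x1 y1 x2 y2.
    with open(path) as f:
        # Accept either comma- or whitespace-separated numbers.
        rows = [line.replace(',', ' ').split() for line in f if line.strip()]
    intrinsics = np.array(rows[0], dtype=np.float32).reshape(3, 3)
    matches = np.array(rows[1:1 + 2 * match_num], dtype=np.float32)
    # First half: target with the left source; second half: target with the right source.
    return intrinsics, matches[:match_num], matches[match_num:]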

Train

Training, e.g. on the KITTI odometry dataset, is done with

# Train on KITTI odometry dataset
match_num=100
python train.py --dataset_dir=$kitti_odom_match3 --checkpoint_dir=$checkpoint_dir --img_width=416 --img_height=128 --batch_size=4 --seq_length 3 \
    --max_steps 300000 --save_freq 2000 --learning_rate 0.001 --num_scales 1 --init_ckpt_file $checkpoint_dir'model-'$model_idx --continue_train=True --match_num $match_num

We suggest training from a pre-trained model, such as the ones we have provided in models. Also note: do not use the model trained on the KITTI odometry dataset (for pose evaluation) for depth evaluation, nor the model trained on the KITTI Eigen split for pose evaluation. Otherwise you will get better but biased (train-on-test) results, because the test samples in one dataset overlap with the training samples in the other.

Test

To evaluate the depth and pose estimation performance in the paper, use

# Testing depth model
r=250000
depth_ckpt_file=$root_folder$checkpoint_dir'model-'$r
depth_pred_file='output/model-'$r'.npy' 
python test_kitti_depth.py --dataset_dir $kitti_raw_dir --output_dir $output_folder --ckpt_file $depth_ckpt_file #--show
python kitti_eval/eval_depth.py --kitti_dir=$kitti_raw_dir --pred_file $depth_pred_file #--show True --use_interp_depth True

You can also use the --show option to visualize the depth maps.

# Testing pose model
sl=3
r=258000
pose_ckpt_file=$root_folder$checkpoint_dir'model-'$r
for seq_num in 09 10
do 
    rm -rf $output_folder/$seq_num/
    echo 'seq '$seq_num
    python test_kitti_pose.py --test_seq $seq_num --dataset_dir $kitti_raw_odom --output_dir $output_folder'/'$seq_num'/' --ckpt_file $pose_ckpt_file --seq_length $sl --concat_img_dir $kitti_odom_match3
    python kitti_eval/eval_pose.py --gtruth_dir=$root_folder'kitti_eval/pose_data/ground_truth/seq'$sl'/'$seq_num/  --pred_dir=$output_folder'/'$seq_num'/'
done

It outputs the same results as reported in the paper:

Seq   ATE mean   std
09    0.0089     0.0054
10    0.0084     0.0071
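For reference, these ATE numbers are snippet-level errors: each predicted 3-frame snippet is aligned to the ground truth (first frame anchored, then a least-squares scale) before taking the RMSE of the residual translations, and the mean/std are over all snippets in the sequence. A minimal sketch of that computation, assuming eval_pose.py follows the usual SfMLearner-style evaluation (we have not restated the exact implementation here):

import numpy as np

def snippet_ate(gt_xyz, pred_xyz):
    # ATE for one snippet of translations with shape (seq_length, 3):
    # anchor both trajectories at the first frame, solve the least-squares
    # scale, then return the RMSE of the residual translations.
    pred = pred_xyz + (gt_xyz[0] - pred_xyz[0])        # align starting points
    scale = np.sum(gt_xyz * pred) / np.sum(pred ** 2)  # optimal scale factor
    return np.sqrt(np.mean(np.sum((pred * scale - gt_xyz) ** 2, axis=1)))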

Contact

Feel free to contact me (Tianwei) if you have any questions, either by email or by opening an issue.

Acknowledgements

We appreciate the great works/repos along this direction, such as SfMLearner and GeoNet, as well as the evaluation tool evo for KITTI full-sequence evaluation.


deepmatchvo's Issues

SIFT feature matches files format

Hi, thanks for sharing the code!

I have a question regarding the feature-match _cam.txt files. There are 200 (2*100) lines of matching coordinates for two image pairs, and I assume the first 100 are for the target image (time t) with the source image at time t-1, and the last 100 for the target image with the source image at time t+1. Do I interpret this correctly?

And each line should hold two points (x1, y1, x2, y2). Is p1 (x1, y1) always from the target image, and p2 from the source image (t-1 or t+1)?

How can I implement this?

Hello, I would like to ask how to use video captured by a camera for distance measurement with your program.

feature and matching file

Dr. Shen, could you share a feature file (*.sift) and a matching file (*.mat) with us, or the corresponding MATLAB file (*.m)? Thank you very much!

Trajectory Points

Hi Tianwei,

Can I get the coordinates of all trajectory points with your method, unlike ORB_SLAM2, which only exports the keyframe trajectory point coordinates?

Thanks,
Yiyi
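A note on recovering a full trajectory: test_kitti_pose.py writes per-snippet relative poses, and a global trajectory for every frame can in principle be obtained by chaining them. A hedged numpy sketch, assuming each relative pose has already been converted from the TUM-style text output into a 4x4 transform (that conversion is not shown here):

import numpy as np

def chain_relative_poses(rel_poses):
    # Accumulate 4x4 frame-to-frame transforms into global poses and return
    # each camera position in the first frame's coordinates. The exact
    # left/right multiplication order depends on the pose convention used.
    pose = np.eye(4)
    positions = [pose[:3, 3].copy()]
    for T in rel_poses:
        pose = pose @ T
        positions.append(pose[:3, 3].copy())
    return np.stack(positions)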

can not reproduce full trajectory of 09 sequence

Hi, first of all thanks for making your work public. Recently I tried to reproduce the full pose trajectory of sequence 09 against the ground truth, using the Python package evo and the pre-trained model 258000.ckpt. However, it seems the odometry scale cannot be adjusted automatically by evo as demonstrated in your article. I wonder how you address the undefined-scale issue. (The dashed 09 line is the ground truth; 09_full is the result of the pre-trained model.)

My command is:

~/work/DeepMatchVO/output$ evo_traj kitti 09_full.txt --ref=09.txt -p --plot_mode=xz

command.sh

It seems the file command.sh cannot be opened?

Error: Expected size[2] in [0, 26], but got 1200

Hi, I get an error when I run test_kitti_pose.py.

The output in the terminal:

seq 09
/home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
(the same FutureWarning repeats for the remaining numpy dtype aliases in tensorflow/python/framework/dtypes.py and tensorboard/compat/tensorflow_stub/dtypes.py)
WARNING:tensorflow:From test_kitti_pose.py:52: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2020-09-24 12:58:31.655729: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2020-09-24 12:58:31.677125: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2899885000 Hz
2020-09-24 12:58:31.677621: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x56188b778e00 executing computations on platform Host. Devices:
2020-09-24 12:58:31.677636: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
WARNING:tensorflow:From /home/cds-s/workspace/DeepMatchVO/data_loader.py:83: The name tf.read_file is deprecated. Please use tf.io.read_file instead.

WARNING:tensorflow:From /home/cds-s/workspace/DeepMatchVO/data_loader.py:89: DatasetV1.make_initializable_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_initializable_iterator(dataset)`.
2020-09-24 12:58:31.719253: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.

input_batch
Tensor("IteratorGetNext:0", shape=(1, 600, 3600, 3), dtype=uint8)


FLAGS.img_height = 600
FLAGS.img_width = 1200
FLAGS.seq_length = 3
FLAGS.batch_size = 1
WARNING:tensorflow:From /home/cds-s/workspace/DeepMatchVO/nets.py:27: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.

WARNING:tensorflow:Entity <bound method Conv.call of <tensorflow.python.layers.convolutional.Conv2D object at 0x7fd5f03db150>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Conv.call of <tensorflow.python.layers.convolutional.Conv2D object at 0x7fd5f03db150>>: AssertionError: Bad argument number for Name: 3, expecting 4
(the same AutoGraph warning repeats for seven more Conv2D layers)
WARNING:tensorflow:From test_kitti_pose.py:77: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.

WARNING:tensorflow:From test_kitti_pose.py:77: The name tf.trainable_variables is deprecated. Please use tf.compat.v1.trainable_variables instead.

WARNING:tensorflow:From /home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/training/saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
Traceback (most recent call last):
  File "/home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
    return fn(*args)
  File "/home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected size[2] in [0, 26], but got 1200
	 [[{{node Slice}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test_kitti_pose.py", line 93, in <module>
    main()
  File "test_kitti_pose.py", line 82, in main
    pred = system.inference(sess, mode='pose')
  File "/home/cds-s/workspace/DeepMatchVO/deep_slam.py", line 433, in inference
    results = sess.run(fetches)
  File "/home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1173, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1350, in _do_run
    run_metadata)
  File "/home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1370, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected size[2] in [0, 26], but got 1200
	 [[node Slice (defined at /home/cds-s/workspace/DeepMatchVO/data_loader.py:255) ]]

Errors may have originated from an input operation.
Input Source operations connected to node Slice:
 sub (defined at /home/cds-s/workspace/DeepMatchVO/deep_slam.py:398)

Original stack trace for 'Slice':
  File "test_kitti_pose.py", line 93, in <module>
    main()
  File "test_kitti_pose.py", line 76, in main
    'pose', FLAGS.seq_length, FLAGS.batch_size, input_batch)
  File "/home/cds-s/workspace/DeepMatchVO/deep_slam.py", line 423, in setup_inference
    self.build_pose_test_graph(input_img_uint8)
  File "/home/cds-s/workspace/DeepMatchVO/deep_slam.py", line 389, in build_pose_test_graph
    input_mc, self.img_height, self.img_width, self.num_source)
  File "/home/cds-s/workspace/DeepMatchVO/data_loader.py", line 255, in batch_unpack_image_sequence
    [-1, -1, img_width, -1])
  File "/home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py", line 733, in slice
    return gen_array_ops._slice(input_, begin, size, name=name)
  File "/home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 8823, in _slice
    "Slice", input=input, begin=begin, size=size, name=name)
  File "/home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "/home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
    op_def=op_def)
  File "/home/cds-s/anaconda3/envs/python37/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 2005, in __init__
    self._traceback = tf_stack.extract_stack()

The error occurs in deep_slam.py at the line results = sess.run(fetches):

    def inference(self, sess, mode, inputs=None):
        fetches = {}
        if mode == 'depth':
            fetches['depth'] = self.pred_depth
        if mode == 'pose':
            fetches['pose'] = self.pred_poses
        if inputs is None:
            results = sess.run(fetches)
        else:
            results = sess.run(fetches, feed_dict={self.inputs:inputs})
        return results

What's wrong?
I also cannot understand the idea of target and source images.
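Regarding target and source images: each sample is a single image made of seq_length frames concatenated horizontally, and the loader slices it back into the middle target frame and the surrounding source frames. A minimal numpy sketch of this SfMLearner-style unpacking (our own illustration; the actual TF code is batch_unpack_image_sequence in data_loader.py). Note also that, hedging on the exact cause of the error above, the provided models were trained with 416x128 frames, so passing an img_width/img_height that does not match the per-frame size of the dumped triplets can produce slice errors like this one:

import numpy as np

def unpack_image_sequence(seq, img_width, num_source=2):
    # seq: array of shape (H, seq_length*img_width, 3), i.e. the frames
    # [left source | target | right source] concatenated along the width.
    tgt_start = (num_source // 2) * img_width        # target sits in the middle
    target = seq[:, tgt_start:tgt_start + img_width]
    sources = [seq[:, i * img_width:(i + 1) * img_width]
               for i in range(num_source + 1) if i * img_width != tgt_start]
    return target, sources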

pb

Hello, I want to freeze the graph and convert the ckpt to pb files, but I failed because of a wrong name in "output_node_names".

The code is like this:

import tensorflow as tf
from tensorflow.python.framework import graph_util

def freeze(input_checkpoint, output_graph):
    # convert_variables_to_constants expects node names, not tensor names,
    # so drop the ':0' output-index suffix from the name.
    output_node_names = "pose_exp_net/pose/mul"
    saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=True)
    with tf.Session() as sess:
        saver.restore(sess, input_checkpoint)
        output_graph_def = graph_util.convert_variables_to_constants(
            sess=sess,
            input_graph_def=sess.graph_def,
            output_node_names=output_node_names.split(","))
        with tf.gfile.GFile(output_graph, "wb") as f:
            f.write(output_graph_def.SerializeToString())
        print("%d ops in the final graph." % len(output_graph_def.node))

I don't know how to tackle it; maybe I don't fully understand the whole project. I would really appreciate it if you could answer my questions above.
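For what it's worth, a hypothetical call to the fixed function above (both paths are illustrative only, matching the ckpt folder and checkpoint naming used elsewhere in this README):

# Hypothetical checkpoint prefix and output path.
freeze('ckpt/model-258000', 'output/deepmatchvo_pose.pb')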

How to generate sequence 00~08 pose ground truth for evaluation

Hi, I tried to evaluate the model on sequences 00~08. I used the dump_pose_seq_TUM function to convert the poses from data/prepare_train_data.py into the ground truth, and computed the ATE errors of sequences 00~10 with eval_pose.py (ground truth for 09~10 from pose_data/, 00~08 generated by myself).
I suspect the ground truths for 00~08 are generated incorrectly, which makes the errors so large. Could you tell me how to get the ground-truth poses for evaluation? Thanks.

About deterministic mask

Hello,
This is great work and I'm really interested in it. I want to know where the deterministic mask is defined in your code; I tried to find it near total_loss (line 157) in deep_slam.py but could not. I would like to know how it is defined in the code, because I have some doubts about this part of the paper. Thank you.

Dataset

Hi Tianwei,

Can I use my own data as the input dataset?

Thanks,
Yiyi
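On the format side: any image sequence with known camera intrinsics can in principle be converted into the training layout described above, i.e. concatenated image triplets plus per-triplet camera/match files. A hedged sketch of writing one sample, assuming the layout described in the "Generate Train and Test Data" section (comma-separated intrinsics on the first line, one x1 y1 x2 y2 match per following line); the helper name is ours:

import numpy as np
from PIL import Image

def dump_triplet(frames, intrinsics, matches, img_path, cam_path):
    # frames: three HxWx3 uint8 arrays [left src, target, right src];
    # intrinsics: 3x3 matrix; matches: (200, 4) array of correspondences.
    Image.fromarray(np.concatenate(frames, axis=1)).save(img_path)
    with open(cam_path, 'w') as f:
        f.write(','.join(str(v) for v in intrinsics.flatten()) + '\n')
        for m in matches:
            f.write(' '.join(str(v) for v in m) + '\n')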
