ai4ce / DeepMapping
[CVPR2019 Oral] Self-supervised Point Cloud Map Estimation
Home Page: https://ai4ce.github.io/DeepMapping/
License: Other
hello, @dmax123
as in https://github.com/ai4ce/DeepMapping/blob/master/models/deepmapping.py#L92
self.occup_net = MLP(dim)
the last output of the MLP uses neither a sigmoid nor a ReLU; it is just the raw output of a linear layer. Does this harm the BCE loss training? The paper says the output should be normalized to [0, 1] with a sigmoid, but the code here does not do that.
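For context, a raw (logit) output is harmless as long as the loss folds the sigmoid in, as PyTorch's `BCEWithLogitsLoss` does; the two formulations give the same value, the logit form just being numerically safer. A small numpy re-derivation of that equivalence (this is an illustration, not the repo's code; the sample values are made up):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, y):
    # binary cross-entropy on probabilities in (0, 1)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

def bce_with_logits(z, y):
    # numerically stable form that consumes raw logits directly
    return (np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))).mean()

z = np.array([0.5, -1.2, 2.0])   # raw linear-layer outputs (logits)
y = np.array([1.0, 0.0, 1.0])

assert np.isclose(bce(sigmoid(z), y), bce_with_logits(z, y))
```

So the question reduces to which loss the training script pairs with the linear output: with a logit-aware loss the missing sigmoid is by design; with plain `BCELoss` it would indeed be a bug.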
Hi,
I am reading your code.
I believe something is wrong with the call to
sample_unoccupied_point()
in the models/deepmapping.py file.
You feed self.obs_local to the sampling function, and inside it rays are shot from (0,0,0) toward the occupied positions. This works perfectly when self.obs_local carries no initialized pose.
When we use initialized poses, self.obs_local has already been transformed by the initial pose, so the sensor center of the scans (point clouds) is no longer at (0,0,0), and this sampling method corrupts the loss.
Do you agree this is a problem, or did you design it this way on purpose? Based on my visualization of cases evaluated with initialized poses, the empty and occupied spaces overlap each other.
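The fix the poster is suggesting amounts to shooting the rays from an explicit sensor center rather than from the origin. A minimal numpy sketch of that idea (function name, shapes, and values are illustrative, not the repo's exact code):

```python
import numpy as np

def sample_unoccupied(points, center, n_samples, rng=None):
    """Sample free-space points along rays from `center` to each hit point.

    points: (L, D) occupied points, in the same frame as `center`
    center: (D,) sensor origin; (0, 0) is only valid for untransformed scans
    returns: (n_samples * L, D) points strictly between center and the hits
    """
    rng = np.random.default_rng(0) if rng is None else rng
    L, D = points.shape
    out = np.empty((n_samples * L, D))
    for i in range(n_samples):
        fac = rng.uniform(0.0, 1.0, size=(L, 1))  # fraction along each ray
        out[i * L:(i + 1) * L] = center + (points - center) * fac
    return out

# With a shifted sensor center, samples stay on the center-to-point segments.
pts = np.array([[2.0, 0.0], [0.0, 2.0]])
free = sample_unoccupied(pts, center=np.array([1.0, 1.0]), n_samples=4)
```

If the initial pose has already been applied to the points, passing the transformed sensor position as `center` keeps the sampled "empty" space consistent with the occupied space.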
Great work! Also thanks for releasing the codes.
The 2D optimisation works very well. And I am also very interested in the 3D case on real-world AVD dataset.
Could you kindly release the trajectory data you used (images, poses, and related scripts) so that the 3D results in the paper can be reproduced?
It would help us gain a better understanding of DeepMapping in the 3D case.
DeepMapping/script$ ./run_train_2D.sh
Traceback (most recent call last):
  File "train_2D.py", line 12, in <module>
    import utils
  File "../utils/__init__.py", line 2, in <module>
    from .geometry_utils import *
  File "../utils/geometry_utils.py", line 188
    raise ValueError(f'metrics: {metrics} not recognized.')
                   ^
SyntaxError: invalid syntax
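A SyntaxError pointing at an f-string usually means the script is being run with an interpreter older than Python 3.6, where f-strings were introduced. A version-agnostic rewrite of that line (the value of `metrics` here is made up; in the repo it is a function argument):

```python
metrics = "chamfer"  # illustrative value only

# f-strings require Python >= 3.6; str.format() works on any Python 3
msg = 'metrics: {} not recognized.'.format(metrics)
```

Checking `python --version` inside the environment the shell script actually invokes is the first thing to verify.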
loading dataset
creating model
start training
Traceback (most recent call last):
  File "train_AVD.py", line 77, in <module>
    loss = model(obs_batch, valid_pt, pose_batch)
  File "/home/thebs/anaconda3/envs/fmr/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "../models/deepmapping.py", line 114, in forward
    self.obs_local, self.n_samples, sensor_center)
  File "../models/deepmapping.py", line 35, in sample_unoccupied_point
    unoccupied[:, (idx - 1) * L:idx * L, :] = center + (local_point_cloud - center) * fac
RuntimeError: The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 2
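The error is a plain broadcasting mismatch in the last dimension: a 3-D sensor center is being combined with 2-D points (or vice versa). A numpy reproduction with the obvious guard (shapes are illustrative, not taken from the repo):

```python
import numpy as np

local_point_cloud = np.zeros((1, 4, 2))   # B x L x 2: a 2-D scan
center = np.zeros((1, 1, 3))              # B x 1 x 3: a 3-D sensor center

# Reproduces the mismatch: dimension 2 is 2 vs 3, so broadcasting fails.
try:
    _ = center + (local_point_cloud - center) * 0.5
    raised = False
except ValueError:
    raised = True
assert raised

# Guard: the sensor center must live in the same space as the points.
center2d = center[..., :2]                # or construct center with matching D
out = center2d + (local_point_cloud - center2d) * 0.5
assert out.shape == local_point_cloud.shape
```

So the likely cause is passing a 3-D `sensor_center` into the 2-D pipeline (or running the AVD script on 2-D data); making the center's dimensionality match the point cloud's resolves it.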
As the title says: how should we understand "we do not necessarily expect the trained DNNs to generalize to other scenes" in the paper?
Hi,
Thanks for sharing this beautiful idea and code.
My question is: after training and registration are over, how can we generate the occupancy map using the trained M-Net model?
Any idea on how to implement this would be appreciated, since this part of the code has not been released yet.
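One common way to do this, offered as a sketch rather than the authors' actual method: evaluate the trained occupancy network on a dense grid of global coordinates and squash the logits with a sigmoid. The `m_net` stand-in below is a dummy callable, not the real M-Net:

```python
import numpy as np

def occupancy_grid(m_net, xmin, xmax, ymin, ymax, res):
    """Query an occupancy network on a regular 2-D grid.

    m_net: callable mapping (N, 2) coordinates to (N,) raw logits
    returns: (H, W) grid of occupancy probabilities in [0, 1]
    """
    xs = np.arange(xmin, xmax, res)
    ys = np.arange(ymin, ymax, res)
    gx, gy = np.meshgrid(xs, ys)
    coords = np.stack([gx.ravel(), gy.ravel()], axis=1)  # (H*W, 2)
    logits = m_net(coords)
    probs = 1.0 / (1.0 + np.exp(-logits))                # sigmoid
    return probs.reshape(gy.shape)

# Dummy stand-in for a trained M-Net: a disc of occupied space at the origin.
dummy = lambda p: 4.0 - np.linalg.norm(p, axis=1) ** 2
grid = occupancy_grid(dummy, -3, 3, -3, 3, 0.5)
```

With the real model, `m_net` would be the trained M-Net run in eval mode under `torch.no_grad()`, with the grid coordinates expressed in the same global frame the optimized poses map into.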
Hi, thank you for sharing the nice work. I have a question about the warm start mentioned in Section 3.5 of the paper, where you say you use ICP as a warm start. In my understanding, ICP gives a rough rotation, but the parameters being initialized should be the NN weights. Could you clarify what I am missing here?
My version of Python is: Python 3.6.8
Traceback (most recent call last):
  File "train_2D.py", line 12, in <module>
    import utils
  File "../utils/__init__.py", line 2, in <module>
    from .geometry_utils import *
  File "../utils/geometry_utils.py", line 3, in <module>
    import open3d
  File "/home/ct/.local/lib/python3.6/site-packages/open3d/__init__.py", line 28, in <module>
    from .open3d import *  # py2 py3 compatible
ImportError: Invalid character class.
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 63, in apport_excepthook
    from apport.fileutils import likely_packaged, get_recent_crashes
  File "/usr/lib/python3/dist-packages/apport/__init__.py", line 5, in <module>
    from apport.report import Report
  File "/usr/lib/python3/dist-packages/apport/report.py", line 30, in <module>
    import apport.fileutils
  File "/usr/lib/python3/dist-packages/apport/fileutils.py", line 23, in <module>
    from apport.packaging_impl import impl as packaging
  File "/usr/lib/python3/dist-packages/apport/packaging_impl.py", line 23, in <module>
    import apt
  File "/usr/lib/python3/dist-packages/apt/__init__.py", line 23, in <module>
    import apt_pkg
ModuleNotFoundError: No module named 'apt_pkg'

Original exception was:
Traceback (most recent call last):
  File "train_2D.py", line 12, in <module>
    import utils
  File "../utils/__init__.py", line 2, in <module>
    from .geometry_utils import *
  File "../utils/geometry_utils.py", line 3, in <module>
    import open3d
  File "/home/ct/.local/lib/python3.6/site-packages/open3d/__init__.py", line 28, in <module>
    from .open3d import *  # py2 py3 compatible
ImportError: Invalid character class.
Is there anything I can do to solve this error? Thanks.
Hello, thanks for the nice work. I am quite new to this field and have run into the problem of getting the 3D ground truth for a custom trajectory. I can use the following function

def transform_to_global_AVD(pose, obs_local):
    """
    transform obs local coordinate to global coordinate frame
    :param pose: <Bx3> <x,z,theta>, y = 0
    :param obs_local: <BxLx3> (unorganized) or <BxHxWx3> (organized)
    :return obs_global: <BxLx3> (unorganized) or <BxHxWx3> (organized)
    """
on the estimated pose (Bx3) <x,z,theta> obtained from L-Net.
However, the ground-truth pose has size (Bx6). So I treated it as <x,y,z,dx,dy,dz> and converted it to <x,z,theta>:

gt_angle_y = np.arctan2(np.sqrt(gt_pose[:,3]*gt_pose[:,3] + gt_pose[:,5]*gt_pose[:,5]), gt_pose[:,4])
gt_pose_xzth = np.vstack((gt_pose[:,0], gt_pose[:,2], gt_angle_y)).transpose()

However, the result is not correct. I also tried using a translation matrix and three rotation matrices, one per axis, and that failed too. How can I do this?
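One likely issue with the formula above: `arctan2(sqrt(dx^2 + dz^2), dy)` measures the angle away from the y-axis (an elevation), whereas theta in an <x,z,theta> pose with y = 0 is the heading within the x-z plane. A sketch under the assumption that <dx,dy,dz> is a viewing-direction vector and theta is the yaw about the y-axis (the exact sign/axis convention depends on the dataset and is an assumption here; the sample rows are made up):

```python
import numpy as np

# Hypothetical ground-truth poses: <x, y, z, dx, dy, dz>.
gt_pose = np.array([
    [1.0, 0.0, 2.0, 0.0, 0.0, 1.0],   # looking along +z  -> theta = 0
    [3.0, 0.0, 4.0, 1.0, 0.0, 0.0],   # looking along +x  -> theta = pi/2
])

# Heading in the x-z plane (yaw about the y-axis), not the elevation angle.
theta = np.arctan2(gt_pose[:, 3], gt_pose[:, 5])

gt_pose_xzth = np.stack([gt_pose[:, 0], gt_pose[:, 2], theta], axis=1)
```

If the result is mirrored or rotated by a constant offset, swapping the `arctan2` arguments or negating one of them usually identifies the dataset's actual convention.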
Hi, first, thank you for your work and for releasing the code.

I have some questions about AVD training with the provided DeepMapping network. I ran the code you provided, but found that it uses only 16 point clouds for training. Is that reliable with so little training data? In your paper you mention randomly producing 108 trajectories from the given dataset for training, but I could not find the corresponding code, and both the estimated results and the ground-truth poses show only 16 point clouds.

In the trajectory figure, I plotted the ground-truth poses in black and the estimated poses in red, and the two do not overlap nicely, while on the 2D experiments your method works very well. I am curious about the reason. Hoping for your reply, thanks a lot.
Hi,
In the paper, you mention testing both ordered and unordered data points, with a CNN and PointNet respectively.
If I understand correctly, the current version of the code uses the CNN for an ordered set of data points. Am I correct? Do you also support the other configuration?
Thanks!
Yotam