
lcdnet's People

Contributors

avalada, cattaneod


lcdnet's Issues

NCLT dataset

Hello @cattaneod,

Thanks for your fantastic work!

I'd like to compare my method with your LCDNet on the NCLT dataset. Do you have any plans to release the training and evaluation code for the NCLT dataset?

Incorrect index used while generating the training dataset

Hi,

Congrats on the great work.

I have a question regarding the samples generated when the train flag is enabled in KITTILoader3DPoses and/or KITTI3603DPoses.

The index used in this line (https://github.com/robot-learning-freiburg/LCDNet/blob/main/datasets/KITTIDataset.py#L109) should be the sampled variable i rather than idx when looking up the positive and negative match; otherwise, the candidate is always compared against the anchor pose itself on every iteration:

                i = random.choice(indices)
                # Index with the sampled i, not the anchor index idx,
                # so the candidate pose differs from the anchor pose.
                possible_match_pose[0] = self.poses[i][0, 3]
                possible_match_pose[1] = self.poses[i][1, 3]
                possible_match_pose[2] = self.poses[i][2, 3]
                distance = torch.norm(anchor_pose - possible_match_pose)
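A minimal, self-contained reproduction of the failure mode (the loop structure is simplified for illustration, not copied from the repository):

    import random
    import torch

    # Stand-in for self.poses: 4x4 poses spread along the x axis.
    poses = [torch.eye(4) for _ in range(10)]
    for k, p in enumerate(poses):
        p[0, 3] = float(k)

    idx = 0  # anchor index
    anchor_pose = poses[idx][:3, 3].clone()
    indices = list(range(1, len(poses)))  # candidate frames

    i = random.choice(indices)
    buggy_match = poses[idx][:3, 3]  # bug: indexes with idx, i.e. the anchor itself
    fixed_match = poses[i][:3, 3]    # fix: indexes with the sampled i

    print(torch.norm(anchor_pose - buggy_match))  # always tensor(0.)
    print(torch.norm(anchor_pose - fixed_match))  # the real candidate distance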

Visualization of the final map

Thank you for the code. Could you please share what toolkit/framework/code you used to visualize the final aligned map of the test sequence?
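For reference, I can get a basic view of two aligned clouds with Open3D (a sketch with placeholder data; the toolkit you actually used is exactly what I am asking about):

    import numpy as np
    import open3d as o3d  # assumed available: pip install open3d

    # Two placeholder clouds and a 4x4 transform aligning cloud_b to cloud_a.
    cloud_a = o3d.geometry.PointCloud(
        o3d.utility.Vector3dVector(np.random.rand(1000, 3)))
    cloud_b = o3d.geometry.PointCloud(
        o3d.utility.Vector3dVector(np.random.rand(1000, 3) + [0.5, 0.0, 0.0]))
    T = np.eye(4)  # placeholder: the estimated relative transform

    cloud_a.paint_uniform_color([1.0, 0.0, 0.0])  # red
    cloud_b.paint_uniform_color([0.0, 0.0, 1.0])  # blue
    cloud_b.transform(T)  # apply the alignment
    o3d.visualization.draw_geometries([cloud_a, cloud_b])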

Integration with LIO-SAM

@cattaneod
First of all, I would like to thank you for your great efforts on this project.
I've run this code on the old KITTI odometry dataset and obtained quite promising results. I have a few questions, listed below:
1. This code requires ground truth, which means it is supervised learning. Is it possible to train your architecture in a self-supervised way, and if so, how could we do that?
2. I've modified the code to save the 6-DoF poses and compare them with the ground truth, but how can this be integrated with LIO-SAM? I checked previous questions and you mentioned a few steps, but I believe it is not a simple task. Would it be possible to open-source the LIO-SAM integration code as well? For example, https://github.com/leggedrobotics/delora published their LOAM integration.

Looking forward to your feedback!
Regards
Arkin

Reproducing issue

Hi!

Thanks for sharing! The performance of LCDNet on loop closure detection is remarkable. I have been attempting to reproduce the experimental results of the paper, but I ran into issues with the inference_yaw_general.py script. While I was able to replicate the paper's results with inference_loop_closure.py, the rotations and translations estimated by inference_yaw_general.py were consistently close to 0 and failed to produce reasonable outputs, both with and without RANSAC.
Can you provide some insight into what could be causing this, and how I can better debug it?
My configuration is as follows:
    parser.add_argument('--root_folder', default='/mnt/Mount/Dataset/KITTI_odometry',
                        help='dataset directory')
    parser.add_argument('--weights_path',
                        default='/home/chenghao/DL_workspace/loop-closing/LCDNet/checkpoints/LCDNet-kitti360.tar')
    parser.add_argument('--num_iters', type=int, default=1)
    parser.add_argument('--dataset', type=str, default='kitti')
    parser.add_argument('--ransac', action='store_true', default=True)
    parser.add_argument('--teaser', action='store_true', default=False)
    parser.add_argument('--icp', action='store_true', default=False)
    parser.add_argument('--remove_random_angle', type=int, default=-1)
    parser.add_argument('--validation_sequence', type=str, default='00')
    parser.add_argument('--without_ground', action='store_true', default=True,
                        help='Use preprocessed point clouds with ground plane removed')
    args = parser.parse_args()
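One detail I noticed while double-checking these flags (an observation, not necessarily the cause of the near-zero estimates): with action='store_true' and default=True, the --ransac and --without_ground options are effectively always on, since store_true can only set a value to True. A self-contained sketch of how such a flag can also be switched off (a general argparse pattern, not code from this repository):

    import argparse

    parser = argparse.ArgumentParser()
    # store_true can only switch a flag on; add a store_false counterpart
    # (or use argparse.BooleanOptionalAction on Python 3.9+) so the flag
    # can also be switched off from the command line.
    parser.add_argument('--ransac', action='store_true', default=True)
    parser.add_argument('--no-ransac', dest='ransac', action='store_false')

    print(parser.parse_args([]).ransac)                # True
    print(parser.parse_args(['--no-ransac']).ransac)   # False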

Moreover, I was wondering if it would be possible to release the model pre-trained on KITTI? It would help me run further tests. Thank you!

How to Run LIO-SAM with LCDNet on KITTI sequence 02

Dear Author:
First of all, I really appreciate your project, and thank you very much for open-sourcing your code.
I've read your paper and watched your video, and I noticed that you test your LIO-SAM system integrated with LCDNet on sequence 02 of the KITTI dataset. To the best of my knowledge, IMU data is not provided in the KITTI odometry benchmark (the KITTI raw data provides IMU/GPS data but lacks ground truth). I believe that's why LIO-SAM's GitHub page states that, since LIO-SAM needs a high-frequency IMU to function properly, the KITTI raw data must be used for testing. How did you overcome the missing IMU data in sequence 02?
I would like to integrate your work with other state-of-the-art LiDAR SLAM systems as well, and I hope to test them on the KITTI odometry sequences if possible. I would be very grateful if you could answer this question.

Question about the pose transformation

Hello! In generate_loop_GT_KITTI.py, why is there the line pose = np.linalg.inv(T_cam_velo) @ (pose @ T_cam_velo)?

I understand that T_cam_velo is the transformation matrix from the velodyne frame to camera0, while pose maps camera0 to the world. The transformation from the velodyne frame to the world should therefore simply be pose @ T_cam_velo. I cannot understand why there is an additional np.linalg.inv(T_cam_velo) term. Could you help me with this question? Thanks!
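My current guess, sketched below, is that the conjugation re-expresses the poses in the velodyne frame; this is only my reading, assuming the KITTI convention that the world frame coincides with the camera0 frame of the first scan:

    import numpy as np

    # Placeholder 4x4 transforms for illustration.
    T_cam_velo = np.eye(4)  # calibration: velodyne -> camera0
    T_w_cam = np.eye(4)     # GT pose: camera0 of frame i -> world

    # T_w_cam @ T_cam_velo maps velodyne points of frame i into the world,
    # but the result is still expressed in a *camera* frame (camera0 of the
    # first scan). Left-multiplying by inv(T_cam_velo) re-expresses that
    # frame in velodyne coordinates, yielding a transform from the velodyne
    # frame of scan i to the velodyne frame of scan 0:
    T_velo0_velo_i = np.linalg.inv(T_cam_velo) @ (T_w_cam @ T_cam_velo)

    # Relative poses between two scans i and j then come out directly in
    # the LiDAR frame:
    # T_j_i = np.linalg.inv(T_velo0_velo_j) @ T_velo0_velo_i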

Using point clouds not centered around the origin

Hello,
Congrats on the nice work!

I was wondering whether the network correctly handles the global position of points in the input point cloud (e.g., a point cloud whose coordinates are far from the origin), or whether it expects to receive the point cloud as scans (i.e., points centered around the origin within a given max range).
I would specifically like to know:

  • whether the same point cloud, once around the origin and once shifted 500 m away from it, would yield the same global descriptor
  • whether the network would estimate the correct transform between two overlapping point clouds far from the origin

I guess that if this is not the case, one can simply compute the barycenter of each point cloud, shift it to the origin, and then run the global descriptor and transform computation.
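A minimal sketch of the workaround I have in mind, assuming plain N x 3 numpy arrays:

    import numpy as np

    def center_cloud(points):
        """Shift a point cloud so its barycenter sits at the origin.

        Returns the centered points and the offset, so any transform
        estimated between two centered clouds can be mapped back to the
        original coordinates afterwards.
        """
        barycenter = points.mean(axis=0)
        return points - barycenter, barycenter

    cloud = np.random.rand(10000, 3) + 500.0  # points far from the origin
    centered, offset = center_cloud(cloud)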

Thanks for your answers.

Question about integration with LIO-SAM

@cattaneod
Thanks for your great work!

I have a question about integrating with LIO-SAM.
As you know, LIO-SAM is written in C++ and built on ROS.

However, LCDNet is written in Python and relies on PyTorch and other Python packages.

I may simply be unfamiliar with the tooling, but I wonder how to integrate these two algorithms.
Did you make your own LCDNet ROS packages?
Is it possible to use other SLAM algorithms besides LIO-SAM?
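For concreteness, the kind of bridge I can imagine is a small Python node that wraps the model and exchanges data with the C++ stack over topics; below is a sketch of that pattern, assuming ROS1 with rospy and using a placeholder model and placeholder topic names (my guess, not necessarily how you did it):

    import rospy
    import torch
    import sensor_msgs.point_cloud2 as pc2
    from sensor_msgs.msg import PointCloud2
    from std_msgs.msg import Float32MultiArray

    # Placeholder standing in for the real descriptor network; it just
    # max-pools per-point coordinates into a fixed-size vector.
    class DummyDescriptor(torch.nn.Module):
        def forward(self, cloud):
            return cloud.max(dim=0).values

    model = DummyDescriptor().eval()

    def callback(msg):
        # Convert the ROS cloud to a tensor and run the network.
        points = list(pc2.read_points(msg, field_names=('x', 'y', 'z'),
                                      skip_nans=True))
        cloud = torch.tensor(points, dtype=torch.float32)
        with torch.no_grad():
            descriptor = model(cloud)
        out = Float32MultiArray()
        out.data = descriptor.flatten().tolist()
        pub.publish(out)

    rospy.init_node('lcdnet_bridge')
    pub = rospy.Publisher('/lcdnet/descriptor', Float32MultiArray, queue_size=1)
    rospy.Subscriber('/points_raw', PointCloud2, callback, queue_size=1)
    rospy.spin()

The C++ SLAM side would then subscribe to the descriptor topic (or call the node via a ROS service), so in principle the same wrapper could feed SLAM systems other than LIO-SAM.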

I would appreciate it if you could give me an engineering tip. 😄

Thanks,
