
r3d3's People

Contributors

arondisc, fyu, tobiasfshr


r3d3's Issues

CUDA out of memory

Hello! As a beginner, I want to study this project and generate the training data, but my GPU has only 8 GB of memory, which is not enough. Which configuration parameters should I modify, and how much GPU memory is needed at minimum?
Thank you for your reply.

How to generate the point cloud results

First, thank you for your excellent work.
Second, I would like to ask how to generate a .ply point cloud file from the result files (.npz), as shown in the visualization results in Figs. 11 and 12 of your paper. Is there a conversion script? Thank you!
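
In case it helps other readers, here is a minimal sketch of such a conversion, assuming the .npz keys 'disp_up' and 'intrinsics' mentioned later in this thread, with intrinsics stored as (fx, fy, cx, cy); file paths are placeholders and the points stay in the camera frame (no pose applied):

import numpy as np
import open3d as o3d

data = np.load("result.npz")                      # placeholder path
disp = data["disp_up"].squeeze()                  # (H, W) predicted disparity
depth = 1.0 / np.clip(disp, 1e-6, None)           # guard against zero disparity
fx, fy, cx, cy = data["intrinsics"]

# Back-project every pixel into the camera frame.
h, w = depth.shape
u, v = np.meshgrid(np.arange(w), np.arange(h))
x = (u - cx) / fx * depth
y = (v - cy) / fy * depth
pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)
o3d.io.write_point_cloud("result.ply", pcd)       # format inferred from extension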

What is the purpose of generating training data, and why is it so slow?

Hi, thanks for open-sourcing this great work!

I wonder why the command to generate training data in step 1 is almost the same as the evaluation process. Also, why is the training data generation step so slow, and why does it evaluate metrics like the ones below for the nuScenes dataset?
[screenshot: evaluation metrics]

Are there any ideas to accelerate the training data generation step?

Is the reconstructed scene scaled?

Hello, thank you very much for your paper; I am very interested in your work. I would like to know whether the reconstructed scene can be measured, i.e., whether it carries metric scale information.

ValueError: The deleter and context arguments are mutually exclusive.

I have a problem when I run the code:
python evaluate.py \
    --config configs/evaluation/dataset_generation/dataset_generation_ddad.yaml \
    --r3d3_weights=data/models/r3d3/r3d3_finetuned.ckpt \
    --r3d3_image_size 384 640 \
    --r3d3_n_warmup=5 \
    --r3d3_optm_window=5 \
    --r3d3_corr_impl=lowmem \
    --r3d3_graph_type=droid_slam \
    --training_data_path=./data/datasets/DDAD

[screenshot: error traceback]

About thirdparty

Hello, first of all, thank you for open-sourcing this work! I am interested in reproducing it, but I ran into an issue when setting up the repository: after cloning with git clone --recurse-submodules https://github.com/AronDiSc/r3d3.git, the third-party submodule (r3d3/thirdparty/eigen @ 2873916) is no longer accessible. Is there an alternative available?

Visualization with or without post-processing?

Hello, thanks for your great work!

When I use the .ckpt file you provided and visualize the result with Open3D on nuscenes-0268, there are some outlier points that are hard to remove.

May I ask how you dealt with them?

The screenshots below show the results after applying different levels of uniform_down_sample and remove_radius_outlier.
[screenshots: point clouds after downsampling and radius-outlier removal]

I also found that these outliers appear on cars because the depth prediction in the overlapping region of two cameras is poor within a frame.
[screenshot: outliers on a car in the overlap region]
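
For reference, the two Open3D calls named above can be chained as follows; this is a minimal sketch, and the parameter values are placeholders to tune, not the authors' settings:

import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.ply")                    # placeholder path
pcd = pcd.uniform_down_sample(every_k_points=5)               # keep every 5th point
pcd, _ = pcd.remove_radius_outlier(nb_points=16, radius=0.5)  # drop isolated points
o3d.visualization.draw_geometries([pcd])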

In checkpoint but not in model

Hello, I encountered a problem when evaluating the model: some keys are in the checkpoint but not in the model, so the pre-trained checkpoint seems not to match the model. Can you help me solve it?

[screenshot: key-mismatch error message]
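
For anyone debugging this, a minimal sketch for listing the mismatched keys with plain PyTorch; the checkpoint path is the one used elsewhere in this thread, and model stands for your instantiated R3D3 network (a placeholder, not the repository's API):

import torch

def report_key_mismatch(model: torch.nn.Module, ckpt_path: str) -> None:
    """Print which parameter names differ between a checkpoint and a model."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state = ckpt.get("state_dict", ckpt)  # Lightning-style checkpoints nest weights
    ckpt_keys, model_keys = set(state), set(model.state_dict())
    print("In checkpoint but not in model:", sorted(ckpt_keys - model_keys))
    print("In model but not in checkpoint:", sorted(model_keys - ckpt_keys))

# Usage: report_key_mismatch(model, "data/models/r3d3/r3d3_finetuned.ckpt")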

[Question] Trajectory estimation result on each camera (DDAD)

Hi, thank you for releasing this great interesting work!

By the way, could you please share the trajectory estimation result for each camera of the DDAD dataset (especially CAMERA_01)?
To my understanding, the average over the six cameras achieves 0.433 according to your paper.

Thank you very much!

How about without spatial-temporal edges?

Great job!
I'd like to ask, have you tried using only spatial and temporal edges? Would that be sufficient?
Meanwhile, removing spatial-temporal edges could save a lot of computation.

Generate Training data

Hello, thank you very much for your code. I am very interested in your work, and I have tried using evaluate.py to generate the nuScenes training data. However, I noticed that it is quite slow (only one GPU is in use). Is there any solution?

I tried to change the code from a single GPU to multiple GPUs, but failed: all GPUs end up processing the same scene. How should the code be modified? Looking forward to your reply!
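
One common pattern is to give each worker process its own GPU and a disjoint, strided slice of the scene list. Below is a minimal sketch under that assumption; process_scene is a hypothetical stand-in for the per-scene generation call, not a function from this repository:

import torch
import torch.multiprocessing as mp

def process_scene(scene: str) -> None:
    # Hypothetical placeholder for the actual per-scene generation call.
    print(f"processing {scene} on GPU {torch.cuda.current_device()}")

def worker(rank: int, n_gpus: int, scenes: list) -> None:
    torch.cuda.set_device(rank)         # pin this worker to one GPU
    for scene in scenes[rank::n_gpus]:  # strided slices are disjoint across workers
        process_scene(scene)

if __name__ == "__main__":
    scenes = [f"scene-{i:04d}" for i in range(850)]  # e.g. nuScenes scene names
    n_gpus = torch.cuda.device_count()
    mp.spawn(worker, args=(n_gpus, scenes), nprocs=n_gpus)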

Visualization using Open3D

Hello, thanks for your code!
I'm stuck at the visualization step because I run the code on a remote server and cannot use the online visualization.
Therefore, I want to use the predicted results (e.g., the disparity and pose of every frame) for visualization via Open3D.
But it seems that my visualization method can't align multi-frame images.

Two keyframes (5 frames apart) in nuscenes-0268:
[screenshots: misaligned point clouds of the two keyframes]

I'm not quite sure what step went wrong.

# Visualization Code

import cv2
import numpy as np
import open3d as o3d


def processing(disp, K, T, image):
    depth = 1.0 / disp.squeeze()    # (h, w), depth from predicted disparity
    height, width = depth.shape
    img = cv2.resize(cv2.cvtColor(image, cv2.COLOR_BGR2RGB),
                     dsize=(width, height), interpolation=cv2.INTER_LINEAR)

    fx, fy, cx, cy = K
    intrinsics_matrix = np.array([[fx, 0, cx],
                                  [0, fy, cy],
                                  [0,  0,  1]])
    # 3x4 projection matrix [K | 0] (the last column must stay zero)
    intrinsics_matrix_homo = np.zeros((3, 4), dtype=np.float32)
    intrinsics_matrix_homo[:3, :3] = intrinsics_matrix

    # pose_quaternion_to_matrix: own helper turning the stored quaternion pose
    # into a 4x4 matrix. This assumes the stored pose is camera-to-world;
    # if it is world-to-camera, it must be inverted first.
    pose = pose_quaternion_to_matrix(T)

    points = []
    rgbs = []
    for v in range(height):
        for u in range(width):
            d = depth[v, u]
            if d == 0:
                continue
            # back-project pixel (u, v) with depth d into the camera frame
            x_cam = (u - cx) / fx * d
            y_cam = (v - cy) / fy * d
            points.append([x_cam, y_cam, d])
            rgbs.append(img[v, u, :])

    rgbs = np.array(rgbs)
    points = np.array(points)
    ones = np.ones((points.shape[0], 1))
    points = np.hstack((points, ones))  # homogeneous coordinates, (N, 4)

    pc_world = (pose @ points.T).T      # camera frame -> world frame

    return points, pc_world, intrinsics_matrix_homo, pose, rgbs


# cam_npz / cam_img: paths to one frame's .npz result and the matching image
cam_npy = np.load(cam_npz)
img = cv2.imread(cam_img)
disp = cam_npy['disp_up']
K = cam_npy['intrinsics']
T = cam_npy['pose']

points, pc_world, k, t, rgbs = processing(disp=disp, K=K, T=T, image=img)
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pc_world[:, :3])
pcd.colors = o3d.utility.Vector3dVector(rgbs / 255.0)
o3d.visualization.draw_geometries([pcd])

Test in a new scene

Hello, I want to know whether the pre-trained model can be used to estimate absolute depth maps in a new scene, for example with an RGB image or a video sequence as input. If so, how can the scale information of the multiple depth maps estimated by the pre-trained model be obtained? I want to fuse multiple depth maps into a point cloud, as your video demo shows. Do you have any suggestions? I would appreciate it very much.

About the results of two frames

Hello, I'm very interested in your work! I have a question: have you tried training the network with just two frames? I've modified some of the settings and ended up with a result that doesn't look good. May I ask whether this result is normal?

Here are my modifications:

r3d3_n_warmup = 2
r3d3_optm_window = 1
r3d3_dt_intra = 1
r3d3_dt_inter = 1

And my results:
[screenshot: two-frame reconstruction result]
