
Comments (13)

hz658832 commented on July 29, 2024

Hey Juan-Ting @brade31919,

would you like to release the code? :)

Thanks!
Best regards, Haoming

from radar_depth.

pjckoch commented on July 29, 2024

Hi @brade31919,

Sorry for bothering you again. How are you progressing with the code release? Can we expect the code to be released by end of January?

Thanks a lot and best regards,
Patrick


pjckoch commented on July 29, 2024

Hi @brade31919,

Ok, I totally understand. Thank you for getting back to me so quickly!

I look forward to the release of your code. In the meantime, would you mind specifying which data augmentation you used in your training? I can't find any information on that in your paper, except for the fact that you downscaled nuScenes images to 450x800 pixels. Was your data augmentation pipeline like that of the Sparse-to-Dense paper? They describe their data augmentation as follows:

C. Data Augmentation
We augment the training data in an online manner with random transformations, including

  • Scale: color images are scaled by a random number s∈[1,1.5], and depths are divided by s.
  • Rotation: color and depths are both rotated with a random degree r∈[−5,5].
  • Color Jitter: the brightness, contrast, and saturation of color images are each scaled by ki∈[0.6,1.4].
  • Color Normalization: RGB is normalized through mean subtraction and division by standard deviation.
  • Flips: color and depths are both horizontally flipped with a 50% chance.
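
The coupling between the scale and flip steps and the depth maps can be sketched in code (a minimal NumPy sketch, not the Sparse-to-Dense authors' implementation; the nearest-neighbor resize helper and array shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, depth):
    """Sketch of the Sparse-to-Dense-style scale and flip steps.

    Scaling the image by s makes every scene point appear s times
    closer, so the metric depth must be divided by s to stay consistent.
    """
    s = rng.uniform(1.0, 1.5)                      # random scale in [1, 1.5]
    h, w = image.shape[:2]
    new_h, new_w = int(round(h * s)), int(round(w * s))
    # nearest-neighbor resize via index sampling (stand-in for a real resizer)
    rows = (np.arange(new_h) * h // new_h).clip(0, h - 1)
    cols = (np.arange(new_w) * w // new_w).clip(0, w - 1)
    image = image[rows][:, cols]
    depth = depth[rows][:, cols] / s               # divide depths by s
    if rng.random() < 0.5:                         # 50% horizontal flip
        image = image[:, ::-1]
        depth = depth[:, ::-1]
    return image, depth

img = np.ones((45, 80, 3), dtype=np.float32)
dep = np.full((45, 80), 10.0, dtype=np.float32)
aug_img, aug_dep = augment(img, dep)
# depth shrinks by (roughly) the same factor the image grew
assert np.allclose(aug_dep * (aug_img.shape[0] / img.shape[0]), 10.0, atol=0.5)
```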


brade31919 commented on July 29, 2024

Hi @pjckoch

Thank you for understanding!

Regarding the data augmentation info, I used (almost the same as sparse-to-dense):

  • Scale: random number s between [1, 1.5]. And remember to divide the depth by s.
  • Crop: the scaled images and depth maps are cropped back to [450, 800] (this resolution is for nuScenes).
  • Rotation: I think I accidentally turned off the random rotation... The reason was that I didn't have many CPU cores for data loading, and random rotation slows it down. I think you should use random rotation though.
  • Color Jitter: I used their transform here, but I think the one from torchvision will do the same thing. The parameters I used are (brightness, contrast, saturation, hue) = (0.2, 0.2, 0.2, 0.).
  • Flip: 50% chance.
  • Color normalization: the images are first normalized to [0, 1]. Then I used the ImageNet mean and std for normalization (mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]).

I used "bilinear interpolation" to resize the images and "nearest neighbor" to resize the depth maps.
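
Put together, the steps above can be sketched roughly as follows (a hedged NumPy approximation, not the radar_depth code: the resize helper is nearest-neighbor only, and a simple brightness jitter stands in for the full brightness/contrast/saturation jitter):

```python
import numpy as np

rng = np.random.default_rng(42)

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def resize(arr, new_h, new_w):
    """Nearest-neighbor resize by index sampling (the real pipeline uses
    bilinear for images; nearest is used here to keep the sketch short)."""
    h, w = arr.shape[:2]
    rows = (np.arange(new_h) * h // new_h).clip(0, h - 1)
    cols = (np.arange(new_w) * w // new_w).clip(0, w - 1)
    return arr[rows][:, cols]

def train_transform(image, depth, out_h=450, out_w=800):
    # 1. random scale in [1, 1.5]; depth divided by the same factor
    s = rng.uniform(1.0, 1.5)
    h, w = image.shape[:2]
    image = resize(image, int(h * s), int(w * s))
    depth = resize(depth, int(h * s), int(w * s)) / s
    # 2. random crop back to (450, 800)
    top = int(rng.integers(0, image.shape[0] - out_h + 1))
    left = int(rng.integers(0, image.shape[1] - out_w + 1))
    image = image[top:top + out_h, left:left + out_w]
    depth = depth[top:top + out_h, left:left + out_w]
    # 3. brightness jitter as a stand-in for full color jitter (0.2 range)
    image = np.clip(image * rng.uniform(0.8, 1.2), 0.0, 1.0)
    # 4. horizontal flip with 50% chance
    if rng.random() < 0.5:
        image, depth = image[:, ::-1], depth[:, ::-1]
    # 5. normalize image (already in [0, 1]) with ImageNet statistics
    image = (image - IMAGENET_MEAN) / IMAGENET_STD
    return image, depth

img = rng.random((450, 800, 3)).astype(np.float32)
dep = (rng.random((450, 800)) * 80.0).astype(np.float32)
out_img, out_dep = train_transform(img, dep)
```

In practice you would use torchvision's `ColorJitter` and bilinear interpolation for the image, as described above.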


brade31919 commented on July 29, 2024

Hi @pjckoch and @hz658832 ,

I just updated the repo. I tested the installation and training procedures on the cluster I have access to, but there might still be some bugs. Let me know if you encounter any problems using the code.

Sorry for the delayed release.


hz658832 commented on July 29, 2024

Hello Patrick @pjckoch,
My student and I at RWTH Aachen are also working on the same task: developing a low-cost radar/camera sensor-fusion platform for depth estimation and object detection.

If you are interested, we could share and discuss together. Also with 沅 and Dr. Dai.

Best regards from Aachen

Haoming


brade31919 commented on July 29, 2024

Hi @pjckoch and @hz658832,

Sorry, I was busy with CVPR 2021. I think I can release the code around mid- to late December 2020.

Thank you @hz658832! I think Dr. Dengxin Dai will definitely be interested in some discussions or possible collaborations.

Sincerely,
Juan-Ting Lin


pjckoch commented on July 29, 2024

Hi @brade31919 ,

thank you for your quick response. Mid- or end of December sounds perfect.

@hz658832 I would be happy to discuss the topic and exchange some ideas. I will talk to my supervisor and get back to you.

Best,
Patrick


brade31919 commented on July 29, 2024

Hi @hz658832,

I am working on it.

Sincerely,
Juan-Ting Lin


brade31919 commented on July 29, 2024

Hi @pjckoch,

Sorry for the delay on the release date! The whole process is taking more time than expected. Currently, I have finished some code re-organization (cleaned up some experimental code not related to the main results). I am still discussing with Dr. Dai about releasing the processed data (we need to find some storage). Due to some issues with the ETH clusters, some of my backed-up checkpoints were deleted, so I am now re-training some of them for the release.

We are also considering releasing the code to extract the processed data from the original nuScenes dataset, but we need some time to check compatibility, since nuScenes had some updates in 2020.

Regarding the release date, I think at least the code and trained models can be released by the end of January, but I am not sure about the processed dataset.

Sincerely,
Juan-Ting Lin


pjckoch commented on July 29, 2024

Hi @brade31919,

this helps a lot, thank you very much!

Three last things:

1)

The parameters I used are (brightness, contrast, saturation, hue) = (0.2, 0.2, 0.2, 0.).

Is it supposed to be 0.2 for hue as well?

2) So, no normalization for the input radar, correct?

3)

Crop: the scaled images and depth maps are cropped back to [450, 800] (this resolution is for nuScenes).

Are you using a center crop or random crop?

Best,
Patrick


brade31919 commented on July 29, 2024

Hi @pjckoch

  1. It's 0. in my case 😂😂, but yeah I think you should also use maybe 0.2. It's always better if we can have more diverse augmentations.
  2. Nope. However, I think you can try to normalize the depth maps. Most of the time, the learning procedure is more stable on normalized data.
  3. Random cropping in training time and no cropping in testing time.
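
The train/test cropping asymmetry can be sketched like this (a hypothetical helper, not the repo's actual dataloader):

```python
import numpy as np

rng = np.random.default_rng(7)

def crop(image, out_h, out_w, train):
    """Random crop during training; full-resolution pass-through at test time."""
    if not train:
        return image                                  # no cropping when testing
    top = int(rng.integers(0, image.shape[0] - out_h + 1))
    left = int(rng.integers(0, image.shape[1] - out_w + 1))
    return image[top:top + out_h, left:left + out_w]

big = np.zeros((500, 900, 3), dtype=np.float32)
train_view = crop(big, 450, 800, train=True)    # (450, 800, 3)
test_view = crop(big, 450, 800, train=False)    # (500, 900, 3)
```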


pjckoch commented on July 29, 2024

Hi @brade31919 ,

Great, thanks for answering all my questions!

