
Comments (32)

JUGGHM commented on June 4, 2024

Hi! Of course you could add a TensorBoard writer to record the metrics during training. As for myself, in this project I observe the error metrics on the validation set after each epoch to judge convergence empirically.
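For instance, a minimal sketch of such a writer using torch.utils.tensorboard (the log directory and metric values below are placeholders, not from this repo):

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir='runs/penet')  # hypothetical log directory
for epoch in range(3):
    # dummy values; in practice, log the metrics computed on the validation set
    rmse, mae = 1000.0 / (epoch + 1), 300.0 / (epoch + 1)
    writer.add_scalar('val/RMSE', rmse, epoch)
    writer.add_scalar('val/MAE', mae, epoch)
writer.close()
# inspect the curves with: tensorboard --logdir runs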

Laihu08 commented on June 4, 2024

Oh, that's nice. Since I am new to this kind of training, I don't know how to add TensorBoard logging to your training, and I couldn't understand how to observe error metrics on the validation set after each epoch. For example, the training set is 26k samples and the validation set is 1000, and I think training will reach 100 epochs; how do I check the error metrics? I know you must be busy at this time, and I am sorry to ask so many questions right now. Please guide me through this!

JUGGHM commented on June 4, 2024

The results are recorded in the generated file 'val.csv'. Feel free to ask if you have more questions.

Laihu08 commented on June 4, 2024

Hi, I am training successfully on the selected data as per your suggestion, thank you very much. I have some doubts about training: after every epoch it shows a summary of the training round with MAE and RMSE. How is the error calculated there? Computing an error requires ground truth, so what are MAE and RMSE during training, and why are they reported? For validation I can understand: for every epoch a checkpoint is saved, and with that checkpoint depth maps are computed and compared with the ground truth in val_selection_cropped, with its 1000 samples.

  1. Where are those results saved? I will attach a picture for reference, because in main.py I have set the path for submit_test but the results are not saved there. Can you point out where they are saved?
    [screenshot: results]

JUGGHM commented on June 4, 2024

Hi, (1) during the training procedure the error metrics are accumulated in an online fashion. For the definitions of RMSE and MAE you might refer to the screenshot below, taken from NLSPN [Park et al., ECCV 2020].
[screenshot: RMSE and MAE definitions from NLSPN]
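For reference, the standard definitions over the set V of valid ground-truth pixels, with d_v the ground truth and \hat{d}_v the prediction, are:

\mathrm{RMSE} = \sqrt{\tfrac{1}{|V|} \sum_{v \in V} (\hat{d}_v - d_v)^2}, \qquad \mathrm{MAE} = \tfrac{1}{|V|} \sum_{v \in V} |\hat{d}_v - d_v|

(On the KITTI depth completion benchmark these are usually reported in millimeters.)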

(2) Only when the mode of the iterate function in main.py is "test_completion" will the 1000 test figures be saved.

Laihu08 commented on June 4, 2024

Thank you very much for the detailed explanation. I am curious about the final prediction: as you can see, some pixels have no depth value in the ground truth, but the prediction computes a depth value for those missing pixels. I will attach a picture for reference.
What does the yellow color in the depth prediction mean? As you can notice, yellow and red are rendered in the prediction where there is no depth value in the ground truth. Maybe this question is a little confusing, but I hope you will understand, because I am really confused about the value range of the color map. Does every color in the color map represent some range, like purple indicating a 0-5 m error? If so, please explain it.
[screenshot: prediction with yellow/red regions vs. ground truth]
Thank you in advance; your valuable answers are helping me in my research.

JUGGHM commented on June 4, 2024

Hi! (1) Note that the ground-truth maps are automatically generated by accumulating 11 consecutive LiDAR frames with outliers removed, as in Sparsity Invariant CNNs [Uhrig et al., 3DV 2017]. So they are semi-dense and can be regarded as part of fully dense depth maps. Although supervision is not imposed at every pixel in any single frame, over the whole dataset most pixels do receive supervision, whether frequently or not. In addition, the color image is dense as well, so the model is indeed able to predict fully dense depth maps. (2) Yellow and red in this color mapping, I guess, might simply mean large values.

Laihu08 commented on June 4, 2024

Thank you very much JUGGHM, that was a clear explanation. But I am still wondering about the colormap: how does it assign different colors to particular distances in the dense depth map? Is there no value range for the colormap, or am I asking the wrong question about it? Which colormap did you use?

JUGGHM commented on June 4, 2024

I used jet here, and the range was set to 0~100 m.

Laihu08 commented on June 4, 2024

Thank you JUGGHM, I appreciate your patience in answering all my questions.

Laihu08 commented on June 4, 2024

Hi, I am curious to ask this question: how do you calculate the percentage of LiDAR points? For example, the sparse depth map has only 6% of points and the ground truth (dense depth map) only 30%. How are those percentages calculated, and how do you count the points in a depth map, i.e. how many points are projected onto the RGB image or present in the sparse depth map?

JUGGHM commented on June 4, 2024

Hello, you might use torch.where to generate a binary mask and then sum it up to count the number of valid pixels.

Laihu08 commented on June 4, 2024

Thanks for the quick reply. I need to know the mechanism behind calculating those percentages, i.e. is it "projected point number / cropped RGB resolution"?

JUGGHM commented on June 4, 2024

valid percentage = the number of valid pixels / the number of all pixels (i.e. h x w)
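A minimal sketch of that computation, assuming a value of zero marks an invalid pixel (the resolution and sparsity below are made up for illustration):

import torch

depth = torch.rand(352, 1216) * 100.0      # dummy depth map in meters
depth[torch.rand(352, 1216) > 0.06] = 0.0  # keep roughly 6% of the points

mask = torch.where(depth > 0,
                   torch.ones_like(depth),
                   torch.zeros_like(depth))  # binary validity mask
valid = mask.sum()                           # number of valid pixels
percentage = 100.0 * valid / depth.numel()   # valid pixels / (h x w)
print(f'{percentage.item():.2f}% valid')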

Laihu08 commented on June 4, 2024

Thank you. So, according to the above calculation, we generate a binary mask for the sparse depth map, treat a pixel as valid if the mask is 1 and invalid if it is 0, and then divide the number of valid pixels by the total number of pixels to get the valid percentage, right?

JUGGHM commented on June 4, 2024

Exactly!

Laihu08 commented on June 4, 2024

Thank you very much for the clear explanation.

Laihu08 commented on June 4, 2024

Hi, sorry to ask you more questions. Can I change the loss function from RMSE or MAE to the Ruber loss? Since this is also depth estimation, can we use that loss function in your work? If so, can you explain how to use it? Thank you very much.

JUGGHM commented on June 4, 2024

Hi! I think you must be referring to the Huber loss. L1 and L2 losses are provided in criteria.py, and you could use torch.where to combine them conditionally into the Huber loss via its formula.

Laihu08 commented on June 4, 2024

Hi, actually I am talking about the Ruber loss, which is the square root of the Huber loss (Ruber). Maybe you can refer to this link "https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8803059" or search for the article "Robust Learning for Deep Monocular Depth Estimation". If you find this loss, please help me implement it in your code. Thanks in advance.

JUGGHM commented on June 4, 2024

Hi, here is a naive version, and you could use it like the L1 or L2 losses:

import torch
import torch.nn as nn  # (criteria.py already imports these)

class MaskedRuberLoss(nn.Module):
    def __init__(self, c=1):
        super(MaskedRuberLoss, self).__init__()
        self.c = c  # threshold between the quadratic and the linear branch

    def forward(self, pred, target, weight=None):
        assert pred.dim() == target.dim(), "inconsistent dimensions"
        valid_mask = (target > 0).detach()  # supervise only valid pixels
        e = (target - pred).abs()
        diff1 = e * e                             # quadratic branch, e <= c
        diff2 = 2 * self.c * e - self.c * self.c  # linear branch, e > c
        diff = torch.sqrt(torch.where(e <= self.c, diff1, diff2))
        diff = diff[valid_mask]
        self.loss = diff.mean()
        return self.loss

I am not sure whether it is right or whether it will work. If an error occurs, you could report it here directly.
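In formula form, what the snippet computes at each valid pixel, with threshold c and residual e = |target - pred|, is:

\mathrm{Ruber}_c(e) = \sqrt{H_c(e)}, \qquad H_c(e) = \begin{cases} e^2, & e \le c \\ 2ce - c^2, & e > c \end{cases}

so the two branches meet continuously at e = c.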

Laihu08 commented on June 4, 2024

Thank you; sure, I will let you know if there is any problem.

Laihu08 commented on June 4, 2024

Hi, I would like to ask a basic thing: should I replace the existing code in criteria.py or add these lines to it? If I add them, how do I call the Ruber loss during training? Also, you mentioned c = 1; what is c here? Thanks in advance!

JUGGHM commented on June 4, 2024

You could add these lines to criteria.py and create an instance of the class in main.py. When initializing, you can set c as you like; here I just set the default value to 1.
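A hedged sketch of that wiring; the variable name depth_criterion is illustrative, so adapt it to however main.py names its loss:

# in main.py, next to where the existing losses are created
from criteria import MaskedRuberLoss

depth_criterion = MaskedRuberLoss(c=1)  # c is the Ruber threshold
# later, inside the training loop:
# loss = depth_criterion(pred, gt)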

Laihu08 commented on June 4, 2024

Yeah, I got it. In main.py (under "# define loss functions") I need to define MaskedRuberLoss instead of MaskedMSELoss, right? I am facing some errors while training.
[screenshot: training error]

Laihu08 commented on June 4, 2024

Hi, is it possible to render a different color for invalid pixels in the sparse depth map or the dense depth map? Maybe we can assign a higher depth value, with a different color, to pixels with infinite depth in the depth map? As we can see in the sparse depth map and in the KITTI ground truth, near (small) depth values and infinite depth are rendered in the same color.

JUGGHM commented on June 4, 2024

Hi! You could replace the original code in the corresponding function in utils.py with the following:

import matplotlib.pyplot as plt

cmap = plt.cm.jet  # the module-level colormap in utils.py; repeated here for completeness

def depth_colorize(depth, min=1e-3, max=100):
    invalid = depth < min              # mask invalid pixels before scaling
    depth = (depth - min) / (max - min)
    img = 255 * cmap(depth)[:, :, :3]  # H, W, C
    img[invalid] = 255                 # paint invalid (zero) pixels white
    return img.astype('uint8')

This marks zero (invalid) values with white. You could apply the same idea to mark very large values.

Laihu08 commented on June 4, 2024

Thank you very much, this is exactly what I expected. But if I want to run this code for a specific sparse depth map, how can I do it? That is, the input is a sparse depth map in .png format and the output is a custom colorized depth map, just for visualization.

JUGGHM commented on June 4, 2024

I strongly suggest the diverse colormap recipes here; you could manually handle the exception values (extremely large or small). I think manually assigning each interval to a corresponding color would be strenuous.
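If it helps, here is a standalone sketch assuming KITTI's uint16 depth pngs, where depth in meters = pixel value / 256 (the file names are placeholders):

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

cmap = plt.cm.jet

depth_png = np.array(Image.open('sparse_depth.png'), dtype=np.float32)
depth = depth_png / 256.0                # meters; 0 means no LiDAR return

invalid = depth <= 0
norm = np.clip(depth / 100.0, 0.0, 1.0)  # jet over a 0~100m range
img = (255 * cmap(norm)[:, :, :3]).astype('uint8')
img[invalid] = 255                       # white for missing points
Image.fromarray(img).save('sparse_depth_colorized.png')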

Laihu08 commented on June 4, 2024

OK, sure, I understand. I think the jet colormap is enough to show the depth values, over an interval of 0.1 m to 90 m. But I want to use this colorization to read specific depth maps from the dataset; do you have code for that?

JUGGHM commented on June 4, 2024

You could record the alphabetical-order rank of the corresponding files and add an "if" condition in the iterate function in main.py, so that the iteration executes only for the figures you want to read.
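A toy sketch of that filter; the ranks and the loop are illustrative stand-ins for the dataloader loop inside iterate:

wanted = {3, 17, 42}  # alphabetical ranks of the samples to process
for i in range(100):  # stand-in for enumerating the dataloader
    if i not in wanted:
        continue
    print(f'would colorize and save sample {i}')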

Laihu08 commented on June 4, 2024

Thank you very much @JUGGHM, definitely, I will look into that. I really want to know how to do this process for the issue mentioned in #22 (comment); hope you can help me out!
