
kitti-odom-eval's People

Contributors

huangying-zhan


kitti-odom-eval's Issues

Why is the translation/rotation error recalculated at every 10th frame?

I've been going through the code and I'm having a hard time understanding why the translation/rotation error is recalculated at every 10th frame.

At line 191:

for first_frame in range(0, len(poses_gt), self.step_size):

Why is first_frame reset at every 10th frame, with all the errors recomputed for each segment length (100 m, 200 m, etc.)? We end up with multiple error values for segments of varying lengths. It looks like some sort of sliding window of varying length that is moved along the sequence.

Why is it computed this way instead of simply over the first 100 meters, then the first 200 meters, and so on?
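For reference, here is a minimal sketch of what I understand the loop to be doing. It is my reading of the KITTI segment protocol, not the repository's exact code; the function name and the dist array (cumulative ground-truth distance per frame) are illustrative:

import numpy as np

def segment_errors(poses_gt, poses_pred, dist,
                   lengths=(100, 200, 300, 400, 500, 600, 700, 800),
                   step_size=10):
    # For every start frame (stepped by step_size) and every segment length,
    # compare the relative motion of the prediction against the ground truth.
    errors = []
    for first in range(0, len(poses_gt), step_size):
        for length in lengths:
            # First frame whose traveled GT distance exceeds dist[first] + length.
            last = next((i for i in range(first, len(poses_gt))
                         if dist[i] > dist[first] + length), None)
            if last is None:
                continue  # sequence too short for this segment from this start
            rel_gt = np.linalg.inv(poses_gt[first]) @ poses_gt[last]
            rel_pred = np.linalg.inv(poses_pred[first]) @ poses_pred[last]
            err = np.linalg.inv(rel_gt) @ rel_pred
            t_err = np.linalg.norm(err[:3, 3]) / length            # m per m
            cos_r = (np.trace(err[:3, :3]) - 1.0) / 2.0
            r_err = np.arccos(np.clip(cos_r, -1.0, 1.0)) / length  # rad per m
            errors.append((t_err, r_err))
    return errors  # averaged afterwards into the reported per-length figures

If I read it right, averaging over many overlapping segments makes the metric less sensitive to where along the route the drift happens, but I would like confirmation.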

Monocular Visual Odometry 6 DOF

I'm trying to evaluate KITTI odometry. First, I convert each pose to 6DoF using this code:

import math
import numpy as np

def rotationMatrixToEulerAngles(self, R):
    # Decompose a rotation matrix into x/y/z Euler angles (R = Rz @ Ry @ Rx),
    # handling the near-gimbal-lock case where cos(y) approaches zero.
    assert (self.isRotationMatrix(R))
    sy = math.sqrt(R[0, 0] * R[0, 0] + R[1, 0] * R[1, 0])
    singular = sy < 1e-6

    if not singular:
        x = math.atan2(R[2, 1], R[2, 2])
        y = math.atan2(-R[2, 0], sy)
        z = math.atan2(R[1, 0], R[0, 0])
    else:
        x = math.atan2(-R[1, 2], R[1, 1])
        y = math.atan2(-R[2, 0], sy)
        z = 0
    return np.array([x, y, z], dtype=np.float32)

def matrix_rt(self, p):
    # Promote a flattened 3x4 [R|t] row into a homogeneous 4x4 matrix.
    return np.vstack([np.reshape(p.astype(np.float32), (3, 4)), [[0., 0., 0., 1.]]])

# Relative pose between consecutive frames, stored as (t, Euler angles):
pose1 = self.matrix_rt(self.poses[index][i])
pose2 = self.matrix_rt(self.poses[index][i + 1])
pose2wrt1 = np.dot(np.linalg.inv(pose1), pose2)
R = pose2wrt1[0:3, 0:3]
t = pose2wrt1[0:3, 3]
angles = self.rotationMatrixToEulerAngles(R)
odometries.append(np.concatenate((t, angles)))

The model output is in the same format (6DoF).

The question is: how do I evaluate the 6DoF results?
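In case it helps frame the question: my current plan is to convert the 6DoF relative motions back into the 4x4 absolute poses this tool consumes. The sketch below is my own attempt, assuming the same R = Rz @ Ry @ Rx convention as the decomposition above; euler_to_matrix and sixdof_to_trajectory are names I made up:

import math
import numpy as np

def euler_to_matrix(angles):
    # Rebuild R from (x, y, z) Euler angles; inverse of the decomposition above.
    x, y, z = angles
    Rx = np.array([[1, 0, 0],
                   [0, math.cos(x), -math.sin(x)],
                   [0, math.sin(x),  math.cos(x)]])
    Ry = np.array([[ math.cos(y), 0, math.sin(y)],
                   [0, 1, 0],
                   [-math.sin(y), 0, math.cos(y)]])
    Rz = np.array([[math.cos(z), -math.sin(z), 0],
                   [math.sin(z),  math.cos(z), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def sixdof_to_trajectory(odometries):
    # Accumulate (t, angles) relative motions into absolute 4x4 poses.
    pose = np.eye(4)
    trajectory = [pose.copy()]
    for rel in odometries:
        T = np.eye(4)
        T[:3, :3] = euler_to_matrix(rel[3:6])
        T[:3, 3] = rel[0:3]
        pose = pose @ T  # chain the relative motion onto the running pose
        trajectory.append(pose.copy())
    return trajectory  # flatten each pose's first 3x4 block for KITTI format

Is that the right way to feed 6DoF results into the evaluator, or is there a built-in path for this?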

Regarding the working of the project.

Can you explain what this project actually does and how we can provide custom inputs (from the example directory)? Your help would be appreciated.

Thanks in advance.

Understanding the absolute trajectory error values

Thanks for sharing this tool. I just want to understand these values; they should be the absolute trajectory error results on the KITTI benchmark (I got them from a research paper). Could you explain them, please? I need to produce similar values for comparison. How can I do that using this tool? Thanks in advance.
(Attached screenshot: Screenshot_2021-11-19-19-48-36-90)
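For context, this is what I understand ATE to be: the RMSE of the per-frame translation differences after aligning the two trajectories. A minimal sketch, assuming both trajectories are lists of 4x4 poses already aligned to a common frame (published numbers usually apply a similarity alignment such as Umeyama first):

import numpy as np

def ate_rmse(poses_gt, poses_pred):
    # Per-frame translation differences between aligned trajectories, in meters.
    diffs = [gt[:3, 3] - pred[:3, 3] for gt, pred in zip(poses_gt, poses_pred)]
    return float(np.sqrt(np.mean([d @ d for d in diffs])))

Is that what this tool reports, and does it handle the alignment step internally?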

Error Graph 800 meter limit

Hi! Thanks again for publishing your work, it has been very helpful for my project.
I have noticed that the resulting graphs, regardless of sequence, always extend to 800 meters on the X axis.
This means that for sequences shorter than 800 meters (such as 04), the points beyond the end of the path drop to zero, while for sequences longer than 800 meters (such as 00) the results are cut off at 800 meters.

Is there a way I can set parameters to resolve this on my own, or is this something that must be addressed in the core code?
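My own hedged guess, in case it is on the right track: if the evaluator hard-codes the KITTI segment lengths (something like lengths = [100, 200, ..., 800], in meters), then trimming that list before running would restrict both the evaluation and the X axis. For example:

# Hypothetical tweak, assuming the evaluator object exposes the segment
# lengths as an attribute; the object and attribute names are illustrative.
evaluator.lengths = [100, 200, 300]  # only evaluate/plot segments up to 300 m

I have not verified whether the plotting code reads the same list, so please correct me if this is wrong.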

Difference from the official KITTI code

Thanks a lot. Can you tell me what the difference is between your code and the official evaluation code provided with the KITTI dataset?

Why inverse?

Hello, in your code you take the inverse of the first pose and multiply it with the corresponding (i-th) pose matrix to obtain the pose at frame i. Why not just multiply the i-th pose matrix with the first pose? Why do we need the inverse here?

I am talking about lines like: poses_result[cnt] = np.linalg.inv(pred_0) @ poses_result[cnt].
I thought that to obtain the poses we would simply need: poses_result[cnt] = poses_result[cnt] @ pred_0.
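To make my confusion concrete, here is a small numeric check I wrote (the poses are purely illustrative):

import numpy as np

# Hypothetical poses: identity rotation, translations along x only.
T0 = np.eye(4); T0[0, 3] = 1.0   # first pose, at x = 1
Ti = np.eye(4); Ti[0, 3] = 3.0   # i-th pose, at x = 3

rebased = np.linalg.inv(T0) @ Ti  # translation (2, 0, 0); T0 itself maps to identity
wrong   = Ti @ T0                 # translation (4, 0, 0); the first pose stays at x = 1

print(rebased[0, 3], wrong[0, 3])  # 2.0 4.0

If I understand correctly, left-multiplying by inv(pred_0) re-bases the whole trajectory so that the first predicted pose becomes the identity (matching the ground truth's starting convention), whereas right-multiplying by pred_0 just composes an extra motion onto each pose. Is that the intent?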
