
semantic_3d_mapping's People

Contributors

shichaoy

semantic_3d_mapping's Issues

Question about some ScrollGrid code

In the compute() function of the ScrollForBaseFrame class, there is the following code:

tf::Vector3 target_sensor_to_center = wv2laser * ca::point_cast<tf::Vector3>(target_sensor_to_center_);
Vec3 origin_laser = ca::point_cast<Vec3>(target_sensor_to_center); // sensor position in world frame

What is the meaning of "target_sensor_to_center_", "wv2laser", and "origin_laser"?
And what is the meaning of "origin_laser - center"?
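
For context on what such a line computes, here is a minimal numpy sketch of applying a rigid world-to-sensor transform to a point and taking an offset from a grid center. All names and values below are illustrative assumptions, not the repo's actual API.

    import numpy as np

    # Hypothetical 4x4 rigid transform playing the role of wv2laser
    # (world frame -> laser/sensor frame), and an illustrative point.
    wv2laser = np.eye(4)
    wv2laser[:3, 3] = [1.0, 0.0, 0.5]   # translation part of the transform

    target_sensor_to_center = np.array([2.0, 3.0, 0.0, 1.0])  # homogeneous point
    point_in_laser = wv2laser @ target_sensor_to_center       # transformed point

    grid_center = np.array([0.0, 0.0, 0.0])
    # An expression like "origin_laser - center" would plausibly be the offset
    # of the sensor origin from the scrolling grid's center, which a scrolling
    # map uses to decide when to shift the grid.
    offset = point_in_laser[:3] - grid_center
    print(offset)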

Hi, thanks for the interest. I think it is due to recent server changes. I put it temporarily here; refer to the sh file to put it in the right place. https://drive.google.com/open?id=1cfQYQMkU1cZn6e4hPsgGyYwvKfh7gYV7

I will update the sh file when I find a stable location.

Originally posted by @shichaoy in #6 (comment)

Question on experiments

Hello,
I want to test your research on a real drone, and I am wondering whether images from a monocular camera can be used with your approach, since semantic segmentation can be built from RGB images and some lightweight CNNs can run in real time on embedded systems (as you wrote in your paper). Can you answer this for me?
Thank you and best regards.

Cannot run change_cnn_label.m

I ran your script with MATLAB but it throws an error:

change_cnn_label
processing 1 out of 500
Error using reshape
To RESHAPE the number of elements must not change.

Error in change_cnn_label (line 38)
prob=reshape(U,num_labels,img_height*img_width)';
Can you give me some advice on how to fix this?
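
A note for anyone hitting this: the error means U does not contain exactly num_labels * img_height * img_width elements, so the saved .bin and the assumed image size or label count disagree. Below is a minimal sketch of the sanity check, written in Python; the sizes and the file name are assumptions to adjust to your data.

    import numpy as np

    num_labels = 11                     # assumed label count; set to your CNN's
    img_height, img_width = 370, 1226   # assumed image size; set to your data's

    # Load the raw float32 probabilities the CNN export step wrote out.
    U = np.fromfile('0000000000.bin', dtype=np.float32)  # hypothetical file name

    expected = num_labels * img_height * img_width
    if U.size != expected:
        # This mismatch is exactly what makes the MATLAB reshape fail.
        raise ValueError('got %d floats, expected %d: check image size, '
                         'label count, and dtype' % (U.size, expected))

    # Equivalent of MATLAB's reshape(U, num_labels, H*W)' for a
    # channels-contiguous-per-pixel layout.
    prob = U.reshape(img_height * img_width, num_labels)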

Pose estimation

Could you please explain a little how you obtained the pose estimates using ORB-SLAM2? Thanks!
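
For reference, ORB-SLAM2's stereo/KITTI examples can save the camera trajectory in the KITTI format: each line holds the 12 row-major entries of a 3x4 camera-to-world matrix [R|t]. A minimal parsing sketch, with a hypothetical file name:

    import numpy as np

    def load_kitti_poses(path):
        # Each line: 12 row-major entries of a 3x4 [R|t] matrix.
        poses = []
        with open(path) as f:
            for line in f:
                if not line.strip():
                    continue
                vals = np.array(line.split(), dtype=np.float64)
                poses.append(vals.reshape(3, 4))
        return poses

    poses = load_kitti_poses('CameraTrajectory.txt')  # hypothetical file name
    print(len(poses), 'poses; first translation:', poses[0][:, 3])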

Do ORB-SLAM and depth estimation use the gray camera?

Hi, when we try to run kitti15, our 3D scrolling map seems to be drifting. We are not sure whether we generated the depth images and the ORB-SLAM poses correctly. Currently we generate them using KITTI's gray camera (and its calibration matrix). When you generated kitti05's depth images and poses, did you convert the left RGB camera to gray, or did you use the gray camera directly?

Thanks!

How to generate the depth image?

(1) We downloaded the KITTI gray sequences. Because they are stereo, we first use ELAS to convert the stereo images into depth. One question: I saw that the depth data you provided has the following property: close objects are dark and far objects are light. When I use ELAS, how can I save the depth in the same format as yours?
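
A minimal sketch of one common way to get that appearance, assuming depth is stored KITTI-style as a 16-bit PNG scaled by 256; the focal length, baseline, and scale below are assumptions, not values confirmed by the authors.

    import numpy as np
    import cv2

    focal_px = 718.856   # assumed: KITTI grayscale-camera focal length in pixels
    baseline_m = 0.5371  # assumed: KITTI stereo baseline in meters

    # ELAS outputs a disparity map; convert it to metric depth with depth = f*B/d.
    # (If your disparity PNG is itself stored scaled by 256, divide by 256 first.)
    disparity = cv2.imread('disparity.png', cv2.IMREAD_UNCHANGED).astype(np.float32)
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]

    # Storing depth*256 as uint16 keeps ~4 mm precision, and makes near (small)
    # depths dark and far depths light, matching the property described above.
    cv2.imwrite('depth.png', np.clip(depth * 256.0, 0, 65535).astype(np.uint16))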

Something goes wrong when I test on the NYUv2 dataset

Hi, thanks for sharing your work!

I am testing your code on the NYUv2 dataset. ^_^

I used your code in the folder 'preprocess_data/superpixel/' and produced an image like this:
[screenshot]
Then I converted the image to a bin file using your code.

I generate the semantic segmentation with FCN, using a model trained on this dataset.
The segmentation image looks like this:
[screenshot]
Then I convert the image into a bin file with this code:
net.blobs['prob'].data[0].transpose([1,2,0]).flatten().tofile('/home/x/bin/nyu'+ str(i) + '.bin')
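
For anyone debugging this step, a minimal sketch of reading the bin back and checking that the channel-last layout survived the round trip; the resolution, label count, and file name here are assumptions.

    import numpy as np

    num_labels, h, w = 14, 480, 640  # assumed NYUv2 resolution and label count

    prob = np.fromfile('/home/x/bin/nyu0.bin', dtype=np.float32)
    assert prob.size == h * w * num_labels, 'bin size does not match H*W*C'

    # transpose([1,2,0]) above wrote the data channel-last, so reshape the same way.
    prob = prob.reshape(h, w, num_labels)
    label_map = prob.argmax(axis=2)  # per-pixel most likely class
    print(np.bincount(label_map.ravel(), minlength=num_labels))  # class histogram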

And I get the camera poses through ORB-SLAM2.

Then I run mapping_3d; there are no warnings or errors.
[screenshot]

However, the result seems to be wrong. -_-
The image produced in the folder 'crf_3d_reproj' seems wrong, like this:
[screenshot]

As a result, the semantic 3D map looks bad, like this:
[screenshot]

Could you please help me and tell me what the possible reasons could be?

Thanks!

Cannot read depth image

Hello! I noticed that there are only 20 depth images in your dataset, and the program complains that it cannot read the depth image when the index reaches 20.

predict_dilanet has no information for dataset kitti

When calling predict_dilanet.py, it complains as follows:

To processing: 4
First one: 000001.png
Traceback (most recent call last):
  File "predict_dilanet.py", line 223, in <module>
    batch_predict_kitti("kitti")
  File "predict_dilanet.py", line 126, in batch_predict_kitti
    dataset = Dataset(dataset_name)
  File "predict_dilanet.py", line 30, in __init__
    .format(dataset_name))
IOError: Do not have information for dataset kitti

Could you please explain a little what may have caused this?

Thanks!
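
The traceback suggests Dataset's __init__ looks up per-dataset metadata by name and raises when the name is unknown. A purely hypothetical sketch of that pattern follows; none of these names or values come from the actual script, but it shows why registering a 'kitti' entry would resolve the error.

    # Hypothetical registry; the real script may read this from a config file.
    DATASET_INFO = {
        'camvid': {'num_classes': 11, 'mean': (72.39, 82.91, 73.16)},
    }

    class Dataset:
        def __init__(self, dataset_name):
            if dataset_name not in DATASET_INFO:
                raise IOError('Do not have information for dataset {}'
                              .format(dataset_name))
            self.info = DATASET_INFO[dataset_name]

    # Adding a 'kitti' entry (with the class count and image mean your model
    # expects) is what the error message is effectively asking for.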

Weird result obtained

[screenshot from 2019-02-12 15-45-50]
Hi, I wonder if you have any idea where things could have gone wrong; as you can see, the structures here are not parallel...

evaluatioList.txt

Hi, thanks for sharing!
How can I get evaluatioList.txt? I can run without it. What is its function?
@shichaoy
