
g2l_net's People

Contributors

dc1991

g2l_net's Issues

AttributeError: 'NoneType' object has no attribute 'shape'

What should I be expecting from the demo output?

I tried running the demo, but I got the following error: AttributeError: 'NoneType' object has no attribute 'shape'
I then created a new valseg.lst with numbers 0~19 (since there are only 20 demo images); the demo code worked but there was no output.
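
For anyone hitting this: the error means cv2.imread returned None, i.e. an image path in the list did not resolve to a readable file. A minimal sketch for checking this, assuming a valseg.lst of frame indices and a zero-padded PNG naming scheme (both the directory and the filename pattern below are guesses and may need adjusting to the actual demo layout):

    import os
    import cv2

    # Hypothetical check: confirm every index in valseg.lst maps to a readable image.
    rgb_dir = "../test_sequence/01/rgb"          # assumed demo image directory
    with open("valseg.lst") as f:
        indices = [line.strip() for line in f if line.strip()]

    for idx in indices:
        path = os.path.join(rgb_dir, "{:04d}.png".format(int(idx)))  # assumed naming scheme
        img = cv2.imread(path)
        if img is None:
            # cv2.imread returns None instead of raising, which later crashes on img.shape
            print("Unreadable or missing image:", path)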

How to test the fps of the program?

Hello, may I ask how to measure the FPS of the program?
Should it be tested under ROS, or just by running on the dataset?
Can you tell me the general method? I'm doing a comparative test.
Thank you!
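
A common way to measure FPS without ROS is to time the forward pass over the dataset and average. A minimal sketch, assuming a PyTorch model on the GPU and a list of preprocessed input tensors (both are placeholders, not names from this repository):

    import time
    import torch

    def measure_fps(model, inputs, warmup=10):
        """Average inference frames per second over a list of input tensors."""
        model.eval()
        with torch.no_grad():
            for x in inputs[:warmup]:            # warm-up runs, excluded from timing
                model(x.cuda())
            torch.cuda.synchronize()
            start = time.time()
            for x in inputs[warmup:]:
                model(x.cuda())
            torch.cuda.synchronize()             # flush pending GPU work before stopping the clock
        return (len(inputs) - warmup) / (time.time() - start)

Note that the end-to-end pipeline FPS (detection, segmentation, and pose heads together) will be lower than the FPS of any single network measured this way.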

can't run demo

I can't import functions from the neighboring folder. Can you please explain the folder structure?

About the depth image in Figure 1.

Hi, @DC1991! Thanks for your excellent work!
Since the depth images in LINEMOD appear too dark to see anything, I wonder how you visualized the depth image in Figure 1, shown as follows.
[attached screenshot of the Figure 1 depth image]

Do you pre-process the original depth image by adjusting its brightness or contrast?
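
For reference, LINEMOD depth maps are 16-bit images in millimetres, so they look nearly black when opened directly. One common way to make them viewable is to rescale the valid depth range to 8 bits; this is only a sketch of that idea, not necessarily how Figure 1 was produced (the file name is a placeholder):

    import cv2
    import numpy as np

    # Read the 16-bit depth map as-is, then stretch the valid (non-zero) range to 0-255.
    dep = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)
    valid = dep > 0                                   # zero pixels are missing depth
    lo, hi = dep[valid].min(), dep[valid].max()
    vis = np.zeros(dep.shape, dtype=np.uint8)
    vis[valid] = ((dep[valid] - lo) / (hi - lo) * 255).astype(np.uint8)
    vis = cv2.applyColorMap(vis, cv2.COLORMAP_JET)    # optional colormap for extra contrast
    cv2.imwrite("depth_vis.png", vis)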

About the training data

Hi, thank you for your great work. Will you provide the complete training dataset? Or how can I generate it myself?
Thank you very much.

Questions about model training

Hi, I'd like to ask: for different objects, does each object need its own trained model, or can multiple objects share one network architecture?

Some questions about 3D sphere

Thank you for your excellent work and sharing!
I'm trying to run the code and I have some questions. Would you mind giving me some explanation?

How should I understand 'cors' in the code output0, cors = model(imgs.cuda(), test=1)? I see that 'cors' is also used in the following:

DC = int(W * cors[2][1][io] / 52)
DR = int(H * cors[2][0][io] / 52)

And how should I understand the 300 and 102 in the following code?

dep3d = dep3d[np.where(dep3d[:, 2] > 300.0)]
dep3d = chooselimt_test(dep3d, 102, cen_depth) 

Looking forward to your reply!
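
For what it's worth, those two lines look like a plain depth floor followed by a radius crop around the estimated object centre, with 300 (mm) acting as the cut-off and 102 (mm) as the crop radius. The sketch below only illustrates that reading; the actual semantics of chooselimt_test are an assumption, not taken from the repository:

    import numpy as np

    def crop_around_center(points, radius, center):
        """Keep points within `radius` of `center`; points is an (N, 3) array in mm."""
        dist = np.linalg.norm(points - center[:3], axis=1)
        return points[dist < radius]

    # Illustrative equivalent of the quoted lines, assuming dep3d is (N, 3) camera-space
    # points in millimetres and cen_depth is the rough object centre:
    # dep3d = dep3d[dep3d[:, 2] > 300.0]                # drop points closer than 300 mm
    # dep3d = crop_around_center(dep3d, 102, cen_depth)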

Questions about training accuracy

Hi, I trained the model with the training data you provided for object 6 and found that the network's rotation predictions are not very accurate. What should I do to improve the accuracy? Thank you very much for your reply.

Few questions.

Hello @DC1991
Thank you for your work.

I ran your code and it executed well.
Unfortunately, I don't understand the meaning of the output values (R and T). Would you mind giving me some explanation?

I found the following part in your paper. Is the corresponding code included in the current source code?
If it isn't, would you mind giving me a link to it?

However, both LINEMOD and YCB-Video datasets do not contain the label for each point of the point cloud. To train G2L-Net in a supervised fashion, we adopt an automatic way to label each point of the point cloud of [?]. As described in [?], we label each point in two steps: First, for the 3D model of an object, we transform it into the camera coordinate using the corresponding ground truth. We adopt the implementation provided by [14] for this process.

Thank you
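
The quoted paragraph describes labelling point-cloud points by posing the object's 3D model into camera coordinates with the ground-truth rotation and translation and keeping nearby depth points as foreground. A minimal illustrative sketch of that idea (variable names and the distance threshold are mine, not the repository's):

    import numpy as np
    from scipy.spatial import cKDTree

    def label_scene_points(scene_points, model_points, R, t, dist_thresh=5.0):
        """Mark scene points close to the posed model as object (1), otherwise background (0).

        scene_points: (M, 3) camera-space points, model_points: (N, 3) model vertices,
        R: (3, 3) ground-truth rotation, t: (3,) ground-truth translation, units in mm.
        """
        posed_model = model_points @ R.T + t      # transform the model into camera coordinates
        dist, _ = cKDTree(posed_model).query(scene_points)
        return (dist < dist_thresh).astype(np.int64)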

Where is the code of G2L_Net?

In train_G2L.py, line 12:
from G2L_Net.utils.networks_arch import *
I want to know where G2L_Net is. Can you give me a hand? Best wishes.
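
That import only resolves if the directory containing the G2L_Net package folder is on Python's module search path. A generic workaround (not the repository's own setup) is to add it explicitly before the import, assuming train_G2L.py sits directly inside the G2L_Net folder:

    import os
    import sys

    # Put the parent of the G2L_Net folder on sys.path so the absolute import resolves.
    repo_parent = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
    if repo_parent not in sys.path:
        sys.path.insert(0, repo_parent)

    from G2L_Net.utils.networks_arch import *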

Running test_linemod.py: error reading image data, AttributeError: 'NoneType' object has no attribute 'shape'

After fixing several path errors, I ran into a very strange problem:

Traceback (most recent call last):
  File "/home/leon/code project test/G2L_Net/demo/test_linemod.py", line 289, in <module>
    R, T = test(rgbs, deps, idx, model, classifier, classifier_ce, classifier_box, classifier_box_gan, classifier_box_vec, opt, pc, OR, Rt, Tt, imgid=idxx, temp=temp)
  File "/home/leon/code project test/G2L_Net/demo/test_linemod.py", line 152, in test
    imgs = letterbox(rgb[0], [416, 416], color=(127.5, 127.5, 127.5))
  File "/home/leon/code project test/G2L_Net/demo/test_linemod.py", line 78, in letterbox
    shape = img.shape[:2]  # shape = [height, width]
AttributeError: 'NoneType' object has no attribute 'shape'

I checked "../test_sequence/01/rgb" and confirmed the data exists and the path is correct, but I can't understand why this error still occurs.
Also, when running test_linemod.py I have to wait several minutes before the error appears.

About making 'best_obj.pt' on my own

Hi, @DC1991. Thanks for your outstanding work and sharing! I have trained the 'cat' (model 6 in this project), but I don't know how to make best_6.pt by myself. Can you give me some suggestions? Any word would be helpful!

Best wishes!

No /utils/render_balls_so file

File "/home/galen/deepLearningCode/PoseEstimation/G2L_Net/utils/utils_funs.py", line 239, in showpoints
dll = np.ctypeslib.load_library('../utils/render_balls_so', '.')

It seems that this file is not provided.
Can you upload this?
Thanks!
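
If this is the same small C++ point-rendering helper used by several PointNet-style visualizers, it normally has to be compiled locally into a shared library before np.ctypeslib can load it; the source file and build step below are assumptions based on those visualizers, not files confirmed to ship with this repository:

    import numpy as np

    # Typical one-time build step, run from the utils/ directory:
    #   g++ -std=c++11 -shared -fPIC -O2 render_balls_so.cpp -o render_balls_so.so
    # After that, the existing call in utils_funs.py can find the library:
    dll = np.ctypeslib.load_library('../utils/render_balls_so', '.')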

No module named G2L_Net

Sorry to bother you. When I run the command "python test_linemod.py", the following error occurred; I can't find a way to solve it. Could you give me some advice on how to fix it? Thank you in advance.

[screenshot of the error attached]

Roughly how long does the YCB-Video dataset take to train on a single 1080 Ti?

Hello, we are very interested in your work, but on a 2080 training looks like it will take about 30 days (estimated). Is there something we did wrong, or is there any way to speed up training? Looking forward to your reply!

How to procure YoloV3 related file "best_1.pt" or make my own

Your efforts are great!
An error occurred when I prepared the execution environment and executed the sample script as shown below. I followed your tutorial and referred to https://github.com/ultralytics/yolov3, but did not find the file "best_1.pt". Could you please tell me whether I need to create it myself, or whether there is somewhere I can obtain it?

Executed script

$ cd demo
$ python3 test_linemod.py

The error that occurred

Traceback (most recent call last):
  File "test_linemod.py", line 249, in <module>
    model, classifier, classifier_ce, classifier_box, classifier_box_gan, classifier_box_vec, pc, opt, OR,temp = load_models_yolo(obj)
  File "test_linemod.py", line 100, in load_models_yolo
    model.load_state_dict(torch.load(weights)['model'])
  File "/home/b920405/.local/lib/python3.6/site-packages/torch/serialization.py", line 419, in load
    f = open(f, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '../models/1/best_1.pt'

Tried downloading the weights

The following Google Drive link returned a 404 error and could not be accessed.

# yolov3 pytorch weights
https://drive.google.com/drive/folders/1uxgUBemJVw9wZsdpboYbzUN4bcRhsuAI

Thank you.

How to understand section 3.2 "Translation localization"

Hello, thank you for your work!

I'm a little confused about section 3.2, Translation localization. You train two PointNets to perform 3D segmentation and to output the residual distance ||T − T̄||. What is the mean value T̄ of the segmented points? Is it the mean value of these points in the camera coordinate system? And how should I understand the object translation T here?

Thank you for your reply!
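
As I read section 3.2, T̄ is simply the mean 3D coordinate (in the camera frame) of the points segmented as belonging to the object, T is the object's ground-truth translation in the same frame, and the network regresses only the residual T − T̄, which is added back to the point mean at test time. A small illustrative sketch of that decomposition (the names are mine, not the paper's notation in code form):

    import numpy as np

    def recover_translation(segmented_points, predicted_residual):
        """Coarse-to-fine translation: mean of the segmented points plus the regressed residual.

        segmented_points: (N, 3) camera-space points predicted as object, in mm.
        predicted_residual: (3,) network output approximating T - T_bar.
        """
        T_bar = segmented_points.mean(axis=0)     # coarse translation estimate
        return T_bar + predicted_residual         # refined translation T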

Testing on my own setup

If I want to estimate the object position in real time in a practical setting, what depth camera is recommended? I use a RealSense D435, but the depth map and point cloud I obtain are of very poor quality. Thank you for your reply!

About the training process

Hi, @DC1991 ! Thanks for your excellent work and sharing. Now I'm confused about the training process, and I'm looking forward to your code or descriptions about the training process.

From my observation, the training involves translation localization, rotation localization, and other components. Are these trained jointly with the main architecture, or trained separately?

Any description would benefit me!
Once again, thanks a lot!
