
graspnetAPI's Introduction

graspnetAPI


Dataset

Visit the GraspNet Website to get the dataset.

Install

You can install using pip.

pip install graspnetAPI

You can also install from source.

git clone https://github.com/graspnet/graspnetAPI.git
cd graspnetAPI
pip install .
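
To verify the installation, a quick import check (a minimal sketch; GraspNet and GraspGroup are the top-level classes used throughout the examples and issues below):

from graspnetAPI import GraspNet, GraspGroup
print('graspnetAPI imported successfully')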

Documentation

Refer to the online documentation for more details.
A PDF version of the documentation is also available.

You can also build the documentation manually.

cd docs
pip install -r requirements.txt
bash build_doc.sh

LaTeX is required to build the PDF; the HTML documentation can be built without it.

Grasp Definition

The frame of our gripper is defined as illustrated in the repository's gripper-frame figure (image omitted here).
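
In this frame, each grasp is parameterized by a score, width, height, depth, a 3x3 rotation and a 3-d translation (camera frame), plus an object id, as the example printouts below show. A minimal sketch of building one, assuming the 17-element internal layout [score, width, height, depth, rotation(9), translation(3), object_id] and a Grasp constructor that accepts it:

import numpy as np
from graspnetAPI import Grasp

arr = np.concatenate([[0.9, 0.11, 0.02, 0.03],   # score, width, height, depth
                      np.eye(3).reshape(-1),      # rotation, row-major
                      [-0.09, -0.17, 0.39],       # translation
                      [66]]).astype(np.float64)   # object id
g = Grasp(arr)
print(g)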

Examples

cd examples

# Change the graspnet root path in the scripts before running them.

# How to load labels from graspnet.
python3 exam_loadGrasp.py

# How to convert between 6d and rectangle grasps.
python3 exam_convert.py

# Check the completeness of the data.
python3 exam_check_data.py

# You can also run the other examples.

Please refer to our document for more examples.
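
For orientation, a minimal sketch of what the label-loading example does (the dataset root is a placeholder; the GraspNet and loadGrasp calls mirror the snippets quoted in the issues below):

from graspnetAPI import GraspNet

graspnet_root = '/path/to/graspnet'   # change to your dataset root
g = GraspNet(graspnet_root, camera='kinect', split='train')
gg = g.loadGrasp(sceneId=0, annId=0, camera='kinect', format='6d', fric_coef_thresh=0.2)
print('loaded %d grasps' % len(gg))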

Citation

Please cite these papers in your publications if they help your research:

@article{fang2023robust,
  title={Robust grasping across diverse sensor qualities: The GraspNet-1Billion dataset},
  author={Fang, Hao-Shu and Gou, Minghao and Wang, Chenxi and Lu, Cewu},
  journal={The International Journal of Robotics Research},
  year={2023},
  publisher={SAGE Publications Sage UK: London, England}
}

@inproceedings{fang2020graspnet,
  title={GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping},
  author={Fang, Hao-Shu and Wang, Chenxi and Gou, Minghao and Lu, Cewu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={11444--11453},
  year={2020}
}

Change Log

1.2.6

  • Add transformation for Grasp and GraspGroup.
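
A sketch of using the 1.2.6 transformation feature (the transform method name follows the Grasp usage quoted in the issues below; treat the exact signature as an assumption):

import numpy as np
from graspnetAPI import GraspGroup

gg = GraspGroup()          # empty group for illustration
T = np.eye(4)              # a 4x4 rigid transform, e.g. camera to world
gg.transform(T)            # applies T to every grasp pose in the group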

1.2.7

  • Add inpainting for depth image.

1.2.8

  • Fix a minor bug in loadScenePointCloud.

graspnetAPI's People

Contributors

chenxi-wang, cubercsl, cww97, fang-haoshu, gouminghao, qinr


graspnetAPI's Issues

About eval

Hello,
I used eval to evaluate the collision-free ground truth from the dataset and found the mAP is only around 30, and many grasps are flagged as colliding during evaluation. Visualizing them, a few grasps indeed collide with the scene. Is this because the collision-detection parameters used to generate the dataset differ from those used in eval? If not, what is the cause? (By collision-free grasps I mean the labels marked collision-free in the scene.)
Many thanks!

Question about the GraspGroup part

Hello, and first of all thanks for your work.
In use, I generate grasps with gg = GraspGroup(preds); when I iterate over the elements of gg and print the corresponding object_id, they are all -1. I would like to know how to obtain the object_id class information corresponding to each grasp pose.
I want to determine the grasp poses corresponding to each object in the scene, to make sure I grasp a specific object rather than just the highest-scoring grasp.
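
A hedged sketch of selecting grasps for one object when object ids are available (label-loaded grasps carry real ids; network predictions may not, hence the -1). It assumes the internal grasp_group_array, also used by the nms snippet later in this list, stores the object id in its last column:

import numpy as np
from graspnetAPI import GraspGroup

def filter_by_object(gg, target_id):
    # Keep only the rows whose object id matches target_id.
    mask = gg.grasp_group_array[:, -1] == target_id
    return GraspGroup(gg.grasp_group_array[mask])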

sklearn is now deprecated

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [18 lines of output]
The 'sklearn' PyPI package is deprecated, use 'scikit-learn'
rather than 'sklearn' for pip commands.
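
A workaround reported in the "Install graspnetAPI Error and tmp Solution" issue further down this list: either install scikit-learn directly, or set the escape-hatch environment variable before installing:

export SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True
pip install graspnetAPI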

loadScenePointCloud: RuntimeError

$ python examples/exam_loadGrasp.py
WARNING - 2020-11-27 20:15:17,053 - rigid_transformations - Failed to import geometry msgs in rigid_transformations.py.
WARNING - 2020-11-27 20:15:17,054 - rigid_transformations - Failed to import ros dependencies in rigid_transforms.py
WARNING - 2020-11-27 20:15:17,054 - rigid_transformations - autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable
Loading data path...: 100%|███████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:00<00:00, 267.62it/s]
warning: grasp_labels are not given, calling self.loadGraspLabels to retrieve them
Loading grasping labels...: 100%|██████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:06<00:00,  1.34it/s]
warning: collision_labels are not given, calling self.loadCollisionLabels to retrieve them
Loading collision labels...: 100%|█████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  4.08it/s]
6d grasp:
----------
Grasp Group, Number=90332:
Grasp: score:0.9000000357627869, width:0.11247877031564713, height:0.019999999552965164, depth:0.029999999329447746, translation:[-0.09166837 -0.16910084  0.39480919]
rotation:
[[-0.81045675 -0.57493848  0.11227506]
 [ 0.49874267 -0.77775514 -0.38256073]
 [ 0.30727136 -0.25405255  0.91708326]]
object id:66
Grasp: score:0.9000000357627869, width:0.10030215978622437, height:0.019999999552965164, depth:0.019999999552965164, translation:[-0.09166837 -0.16910084  0.39480919]
rotation:
[[-0.73440629 -0.67870212  0.0033038 ]
 [ 0.64608938 -0.70059127 -0.3028869 ]
 [ 0.20788456 -0.22030747  0.95302087]]
object id:66
Grasp: score:0.9000000357627869, width:0.08487851172685623, height:0.019999999552965164, depth:0.019999999552965164, translation:[-0.10412319 -0.13797761  0.38312319]
rotation:
[[ 0.03316294  0.78667933 -0.61647028]
 [-0.47164679  0.55612749  0.68430358]
 [ 0.88116372  0.26806271  0.38947761]]
object id:66
......
Grasp: score:0.9000000357627869, width:0.11909123510122299, height:0.019999999552965164, depth:0.019999999552965164, translation:[-0.05140382  0.11790846  0.48782501]
rotation:
[[-0.71453273  0.63476181 -0.2941435 ]
 [-0.07400083  0.3495101   0.93400562]
 [ 0.69567728  0.68914449 -0.20276351]]
object id:14
Grasp: score:0.9000000357627869, width:0.10943549126386642, height:0.019999999552965164, depth:0.019999999552965164, translation:[-0.05140382  0.11790846  0.48782501]
rotation:
[[ 0.08162415  0.4604325  -0.88393396]
 [-0.52200603  0.77526748  0.3556262 ]
 [ 0.84902728  0.4323912   0.30362913]]
object id:14
Grasp: score:0.9000000357627869, width:0.11654743552207947, height:0.019999999552965164, depth:0.009999999776482582, translation:[-0.05140382  0.11790846  0.48782501]
rotation:
[[-0.18380146  0.39686993 -0.89928377]
 [-0.61254776  0.66926688  0.42055583]
 [ 0.76876676  0.62815309  0.12008961]]
object id:14
----------
Traceback (most recent call last):
  File "examples/exam_loadGrasp.py", line 27, in <module>
    geometries.append(g.loadScenePointCloud(sceneId = sceneId, annId = annId, camera = 'kinect'))
  File "/home/weiwen/anaconda3/envs/ggcnn/lib/python3.6/site-packages/graspnetAPI/graspnet.py", line 498, in loadScenePointCloud
    cloud.points = o3d.utility.Vector3dVector(points)
RuntimeError

rect_label contains only 120 scenes

Hello! The dataset has 190 scenes in total, but rect_label only provides 120 of them. Was this an upload omission, or is it expected?

Generating scene collision labels

First, thanks for your outstanding work!
I tried to generate a scene's collision_label from the dataset's grasp_label and scene 0000. My original plan was to build one GraspGroup from the point-level grasps of all objects in the scene and call ModelFreeCollisionDetector to get the collision_mask, i.e. the collision_labels. But even when I build a GraspGroup from the grasps of a single object (id 0) and call ModelFreeCollisionDetector, I run out of memory:
collision_detector.py", line 75, in detect targets = self.scene_points[np.newaxis,:,:] - T[:,np.newaxis,:] numpy.core._exceptions.MemoryError: Unable to allocate 19.5 TiB for an array with shape (49809600, 17901, 3) and data type float64
After excluding the grasps whose collision flag is True in grasp_label, 4,332,493 grasps remain, and detection still runs out of memory. My only option is to slice the GraspGroup into chunks of about 50,000 and check collisions chunk by chunk, which is very slow: 7-8 hours per object.
Did you hit this problem when generating the scene collision_labels? How did you handle it?
Also, when generating the scene collision_labels, what voxel_size did you set for the scene point cloud in ModelFreeCollisionDetector? And what values did you use for approach_dist, collision_thresh, empty_thresh, finger_width, finger_length, and so on?
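
A hedged sketch of the chunked collision checking described above, which bounds peak memory at the cost of speed. The detect signature and the behavior of GraspGroup slicing are assumptions based on the collision_detector.py quoted in the traceback; verify against your copy:

import numpy as np

def detect_in_chunks(detector, gg, chunk=50000):
    # detector: a ModelFreeCollisionDetector built on the scene points
    # gg: a GraspGroup; slicing is assumed to return a sub-group
    masks = []
    for start in range(0, len(gg), chunk):
        masks.append(detector.detect(gg[start:start + chunk]))
    return np.concatenate(masks)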

I successfully tested rectangle-graspnet-multiObject-multiGrasp; how can I get the AP evaluation result from your paper?

scene_0189+0254.png
Detection took 0.084s
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
scene_0189+0255.png
Detection took 0.086s
========================================================================

Miss Rate: [0.62971788 0.9795459  0.99943616 0.99999805 1.        ]
FPPI: [1.87301649e+01 2.96397569e-01 3.90625000e-03 0.00000000e+00
 0.00000000e+00]

Table 3, line 2: Chu et al. [7]

convert rect -> 6d failed

First, my dataset is completely downloaded:

$ python exam_check_data.py
WARNING:root:Failed to import geometry msgs in rigid_transformations.py.
WARNING:root:Failed to import ros dependencies in rigid_transforms.py
WARNING:root:autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable
Loading data path...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 190/190 [00:00<00:00, 271.81it/s]
Checking Models: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 88/88 [00:00<00:00, 6797.40it/s]
Checking Grasp Labels: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 88/88 [00:00<00:00, 28521.66it/s]
Checking Collosion Labels: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 190/190 [00:00<00:00, 18873.57it/s]
Checking Scene Datas: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 190/190 [00:08<00:00, 21.89it/s]
Check for kinect passed
Loading data path...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 190/190 [00:00<00:00, 271.39it/s]
Checking Models: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 88/88 [00:00<00:00, 6914.94it/s]
Checking Grasp Labels: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 88/88 [00:00<00:00, 28199.16it/s]
Checking Collosion Labels: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 190/190 [00:00<00:00, 15353.10it/s]
Checking Scene Datas: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 190/190 [00:08<00:00, 22.52it/s]
Check for realsense passed

then I try the example of convert

import numpy as np
from graspnetAPI import GraspNet
import cv2
import open3d as o3d


rect_grasp_path = '../predicted_rectangle_grasp'
graspnet_root = '/home/weiwen/mnt0/mail2020/graspDL/rectMM/graspnet_dataset'
camera = 'kinect'
sceneId = 5
annId = 3

g = GraspNet(graspnet_root, camera = camera, split = 'all')

bgr = g.loadBGR(sceneId = sceneId, camera = camera, annId = annId)
depth = g.loadDepth(sceneId = sceneId, camera = camera, annId = annId)

rect_grasp_group = g.loadGrasp(sceneId=sceneId, camera=camera, annId=annId,
                               fric_coef_thresh=0.2, format='rect')

# RectGrasp to Grasp
rect_grasp = rect_grasp_group.random_sample(1)[0]
print(rect_grasp)
img = rect_grasp.to_opencv_image(bgr)

cv2.imwrite('exam_imgs/rua.png', img)

print(camera, depth)
grasp = rect_grasp.to_grasp(camera, depth)
if grasp is not None:
    geometry = []
    geometry.append(g.loadScenePointCloud(sceneId, camera, annId))
    geometry.append(grasp.to_open3d_geometry())
    o3d.visualization.draw_geometries(geometry)
else:
    print('No result because the depth is invalid, please try again!')

The code above is copied from the example/doc.

Then I get:

$ python convert_grasp.py
WARNING:root:Failed to import geometry msgs in rigid_transformations.py.
WARNING:root:Failed to import ros dependencies in rigid_transforms.py
WARNING:root:autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable
Loading data path...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 190/190 [00:00<00:00, 274.32it/s]
Rectangle Grasp: score:1.0, height:30.647785186767578, open point:(838.07336, 563.3613), center point:(776.25494, 552.6008), object id:2
kinect [[ 381  381  381 ...    0    0    0]
 [ 382  382  382 ...    0    0    0]
 [ 382  382  383 ...    0    0    0]
 ...
 [1677 1677 1677 ...    0    0    0]
 [   0    0    0 ...    0    0    0]
 [   0    0    0 ...    0    0    0]]
Traceback (most recent call last):
  File "convert_grasp.py", line 29, in <module>
    grasp = rect_grasp.to_grasp(camera, depth)
  File "/home/weiwen/anaconda3/envs/grasp/lib/python3.6/site-packages/graspnetAPI/grasp.py", line 701, in to_grasp
    depth_2d = depth_method(depths, center, open_point, upper_point) / 1000.0
  File "/home/weiwen/anaconda3/envs/grasp/lib/python3.6/site-packages/graspnetAPI/utils/utils.py", line 558, in center_depth
    return depths[round(center[1]), round(center[0])]
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices

I notice the doc says:

This conversion may fail due to invalid depth information.

Why is my depth information invalid?
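
A hedged guess at a guard, reusing rect_grasp and depth from the script above: the conversion needs a valid (non-zero) depth at the grasp center, and the IndexError suggests the rounded coordinates are numpy floats rather than plain Python ints, so cast before indexing. The center_point attribute is taken from the RectGrasp printout above:

u, v = rect_grasp.center_point
d = depth[int(round(v)), int(round(u))]   # explicit int cast avoids the IndexError
if d == 0:
    print('invalid depth at the grasp center; resample the grasp')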

Install GraspnetAPI, no module named 'setuptools.extern.six'

ERROR: Exception:
Traceback (most recent call last):
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/cli/base_command.py", line 180, in exc_logging_wrapper
status = run_func(*args)
^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/cli/req_command.py", line 245, in wrapper
return func(self, options, args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/commands/install.py", line 377, in run
requirement_set = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 95, in resolve
result = self._result = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_vendor/resolvelib/resolvers.py", line 546, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_vendor/resolvelib/resolvers.py", line 427, in resolve
failure_causes = self._attempt_to_pin_criterion(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_vendor/resolvelib/resolvers.py", line 239, in _attempt_to_pin_criterion
criteria = self._get_updated_criteria(candidate)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_vendor/resolvelib/resolvers.py", line 230, in _get_updated_criteria
self._add_to_criteria(criteria, requirement, parent=candidate)
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_vendor/resolvelib/resolvers.py", line 173, in _add_to_criteria
if not criterion.candidates:
^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_vendor/resolvelib/structs.py", line 156, in bool
return bool(self._sequence)
^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 155, in bool
return any(self)
^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 143, in
return (c for c in iterator if id(c) not in self._incompatible_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 47, in _iter_built
candidate = func()
^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 211, in _make_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 293, in init
super().init(
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 156, in init
self.dist = self._prepare()
^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 225, in _prepare
dist = self._prepare_distribution()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 304, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/operations/prepare.py", line 525, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/operations/prepare.py", line 640, in _prepare_linked_requirement
dist = _get_prepared_distribution(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/operations/prepare.py", line 71, in _get_prepared_distribution
abstract_dist.prepare_distribution_metadata(
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/distributions/sdist.py", line 54, in prepare_distribution_metadata
self._install_build_reqs(finder)
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/distributions/sdist.py", line 124, in _install_build_reqs
build_reqs = self._get_build_requires_wheel()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/distributions/sdist.py", line 101, in _get_build_requires_wheel
return backend.get_requires_for_build_wheel()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_internal/utils/misc.py", line 751, in get_requires_for_build_wheel
return super().get_requires_for_build_wheel(config_settings=cs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_impl.py", line 166, in get_requires_for_build_wheel
return self._call_hook('get_requires_for_build_wheel', {
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_impl.py", line 321, in _call_hook
raise BackendUnavailable(data.get('traceback', ''))
pip._vendor.pyproject_hooks._impl.BackendUnavailable: Traceback (most recent call last):
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
obj = import_module(mod_path)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lwh/anaconda3/envs/graspnet/lib/python3.12/importlib/init.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 1387, in _gcd_import
File "", line 1360, in _find_and_load
File "", line 1310, in _find_and_load_unlocked
File "", line 488, in _call_with_frames_removed
File "", line 1387, in _gcd_import
File "", line 1360, in _find_and_load
File "", line 1331, in _find_and_load_unlocked
File "", line 935, in _load_unlocked
File "", line 995, in exec_module
File "", line 488, in _call_with_frames_removed
File "/tmp/pip-build-env-200lmyul/overlay/lib/python3.12/site-packages/setuptools/init.py", line 18, in
from setuptools.extern.six import PY3, string_types
ModuleNotFoundError: No module named 'setuptools.extern.six'

Data annotation

Does graspnetAPI support annotating self-collected data?

Difference between uploaded rect labels and projected 6d labels

While working with your uploaded rect labels I noticed labels with negative center coordinates for the realsense.
While trying to find the cause, I noticed that the uploaded label files contain far more rect grasps than you get when projecting the 6d labels with the to_rect_grasp_group() function.

Can you please explain how the uploaded labels were generated and how to use them?

What method/tool do you use to get a complete point cloud of a single object?

Hi,

I want to make the point cloud for single objects in my own dataset, but I don't know how to make the point clouds complete and proper. I have tried https://github.com/F2Wang/ObjectDatasetTools, but I find that it doesn't work well on objects with complex shapes.
So, would you please give me some advice, or share the tool/method you use to make the point cloud of a single object? Thanks!
BTW, would you please publish the purchase links for the new objects you added to graspnet? Thanks!

Hope to hear from you soon.

RAM Memory usage during evaluation

Hi everyone.
I am running the test.py file on a laptop with Ubuntu 16 and 8 GB of RAM, under Python 3.6, with the number of workers equal to 1.
During the evaluation step I observed strange behavior: the RAM taken by the script grows without bound.
After some analysis of the graspnet_eval.py:eval_scene method, it seems that once the evaluation of a scene is completed, the models loaded for that scene are not deallocated.
I would like to know whether this behavior is intentional.
Thanks for your kindness.

exam_convert.py shows two images; the second one crashes

$ python examples/exam_convert.py          
WARNING - 2020-12-08 22:49:34,482 - rigid_transformations - Failed to import geometry msgs in rigid_transformations.py.
WARNING - 2020-12-08 22:49:34,482 - rigid_transformations - Failed to import ros dependencies in rigid_transforms.py
WARNING - 2020-12-08 22:49:34,482 - rigid_transformations - autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable
Loading data path...: 100%|███████████████████████████████████████████████████████████████████████████████████| 190/190 [00:00<00:00, 276.30it/s]
Traceback (most recent call last):
  File "examples/exam_convert.py", line 35, in <module>
    grasp = rect_grasp.to_grasp(camera, depth)
  File "/home/weiwen/anaconda3/envs/grasp/lib/python3.6/site-packages/graspnetAPI/grasp.py", line 701, in to_grasp
    depth_2d = depth_method(depths, center, open_point, upper_point) / 1000.0
  File "/home/weiwen/anaconda3/envs/grasp/lib/python3.6/site-packages/graspnetAPI/utils/utils.py", line 558, in center_depth
    return depths[round(center[1]), round(center[0])]
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices

o3d.utility.Vector3dVector(points) RuntimeError

graspnetAPI on  master [!?⇣] via 🟟 v3.6.10 via 🟟 grasp 
❯ python examples/exam_loadGrasp.py 
WARNING:root:Failed to import ros dependencies in rigid_transforms.py
WARNING:root:autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable
Loading data path...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:00<00:00, 281.68it/s]
warning: grasp_labels are not given, calling self.loadGraspLabels to retrieve them
Loading grasping labels...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:03<00:00,  2.33it/s]
warning: collision_labels are not given, calling self.loadCollisionLabels to retrieve them
Loading collision labels...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  5.52it/s]
6d grasp:
----------
Grasp Group, Number=90332:
Grasp: score:0.9000000357627869, width:0.11247877031564713, height:0.019999999552965164, depth:0.029999999329447746, translation:[-0.09166837 -0.16910084  0.39480919]
rotation:
[[-0.81045675 -0.57493848  0.11227506]
 [ 0.49874267 -0.77775514 -0.38256073]
 [ 0.30727136 -0.25405255  0.91708326]]
object id:66
Grasp: score:0.9000000357627869, width:0.10030215978622437, height:0.019999999552965164, depth:0.019999999552965164, translation:[-0.09166837 -0.16910084  0.39480919]
rotation:
[[-0.73440629 -0.67870212  0.0033038 ]
 [ 0.64608938 -0.70059127 -0.3028869 ]
 [ 0.20788456 -0.22030747  0.95302087]]
object id:66
Grasp: score:0.9000000357627869, width:0.08487851172685623, height:0.019999999552965164, depth:0.019999999552965164, translation:[-0.10412319 -0.13797761  0.38312319]
rotation:
[[ 0.03316294  0.78667933 -0.61647028]
 [-0.47164679  0.55612749  0.68430358]
 [ 0.88116372  0.26806271  0.38947761]]
object id:66
......
Grasp: score:0.9000000357627869, width:0.11909123510122299, height:0.019999999552965164, depth:0.019999999552965164, translation:[-0.05140382  0.11790846  0.48782501]
rotation:
[[-0.71453273  0.63476181 -0.2941435 ]
 [-0.07400083  0.3495101   0.93400562]
 [ 0.69567728  0.68914449 -0.20276351]]
object id:14
Grasp: score:0.9000000357627869, width:0.10943549126386642, height:0.019999999552965164, depth:0.019999999552965164, translation:[-0.05140382  0.11790846  0.48782501]
rotation:
[[ 0.08162415  0.4604325  -0.88393396]
 [-0.52200603  0.77526748  0.3556262 ]
 [ 0.84902728  0.4323912   0.30362913]]
object id:14
Grasp: score:0.9000000357627869, width:0.11654743552207947, height:0.019999999552965164, depth:0.009999999776482582, translation:[-0.05140382  0.11790846  0.48782501]
rotation:
[[-0.18380146  0.39686993 -0.89928377]
 [-0.61254776  0.66926688  0.42055583]
 [ 0.76876676  0.62815309  0.12008961]]
object id:14
----------
Traceback (most recent call last):
  File "examples/exam_loadGrasp.py", line 27, in <module>
    geometries.append(g.loadScenePointCloud(sceneId = sceneId, annId = annId, camera = 'kinect'))
  File "/home/weiwen/mnt0/mail2020/graspDL/graspnetAPI/graspnetAPI/graspnet.py", line 498, in loadScenePointCloud
    cloud.points = o3d.utility.Vector3dVector(points)
RuntimeError

This seems to have happened a year ago and is happening again. What causes it?

Hello, are the grasp_nms module and the nms_grasp method provided?

The concrete implementation of grasp non-maximum suppression (NMS) is not found in the project files:
def nms(self, translation_thresh = 0.03, rotation_thresh = 30.0 / 180.0 * np.pi):
    '''
    **Input:**

    - translation_thresh: float of the translation threshold.

    - rotation_thresh: float of the rotation threshold.

    **Output:**

    - GraspGroup instance after nms.
    '''
    from grasp_nms import nms_grasp
    return GraspGroup(nms_grasp(self.grasp_group_array, translation_thresh, rotation_thresh))
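
For reference, a minimal call under the signature above, assuming gg is an existing GraspGroup; if grasp_nms is a compiled extension built at install time (an assumption), a failed build would explain it not being found:

import numpy as np
gg_nms = gg.nms(translation_thresh=0.03, rotation_thresh=30.0 / 180.0 * np.pi)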

exam_eval.py

Hello, I've recently been reading your code on GitHub. I converted the first 110 scenes of your dataset into .npy files with the convert script and evaluated them with the exam_eval.py script, and hit the following error:
Traceback (most recent call last):
File "/home/robot/graspnetAPI-master/examples/exam_eval.py", line 21, in <module>
acc = ge_r.eval_scene(scene_id=sceneId, dump_folder=dump_folder)
File "/home/robot/graspnetAPI-master/graspnetAPI/graspnet_eval.py", line 144, in eval_scene
grasp_list, score_list, collision_mask_list = eval_grasp(grasp_group, model_sampled_list, dexmodel_list, pose_list, config, table=table_trans, voxel_size=0.008, TOP_K = TOP_K)
File "/home/robot/graspnetAPI-master/graspnetAPI/utils/eval_utils.py", line 340, in eval_grasp
indices = compute_closest_points(grasp_group.translations, scene)
File "/home/robot/graspnetAPI-master/graspnetAPI/utils/eval_utils.py", line 133, in compute_closest_points
dists = compute_point_distance(A, B)
File "/home/robot/graspnetAPI-master/graspnetAPI/utils/eval_utils.py", line 114, in compute_point_distance
dists = np.linalg.norm(A-B, axis=-1)
ValueError: operands could not be broadcast together with shapes (2754,1,0) (1,5328,3)

Process finished with exit code 1

I've looked at each function's definition but don't know how to solve this. Looking forward to your reply!

How to generate a GraspGroup for evaluation

Hi, first thank you for the excellent work. I've read the graspnetAPI doc; it is good, with many detailed explanations and code samples. But for the evaluation part, I need to prepare my own data and generate a GraspGroup. Does that mean I need to generate a dataset, in the format described in the API doc, with my own camera? Like 255 npy files for each of the 189 scenes? Or did I miss a part about transforming your dataset into the evaluation GraspGroup format described in the API doc? If not, can you give me some explanation of how to generate this GraspGroup for evaluation? Thanks a lot.
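
A hedged sketch of one way to produce the dump folder the evaluator consumes: one GraspGroup per annotation, saved as .npy. The save_npy method and the scene_xxxx/<camera>/xxxx.npy layout are assumptions drawn from the eval examples; check exam_eval.py for the exact convention:

import os
from graspnetAPI import GraspGroup

def dump_scene(gg_list, dump_folder, scene_id, camera='kinect'):
    # gg_list: one GraspGroup per annotation of this scene
    out_dir = os.path.join(dump_folder, 'scene_%04d' % scene_id, camera)
    os.makedirs(out_dir, exist_ok=True)
    for ann_id, gg in enumerate(gg_list):
        gg.save_npy(os.path.join(out_dir, '%04d.npy' % ann_id))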

Something about rectGrasp -> evaluation

I have successfully gotten rect grasps from here. Are these 7-d rect grasps the same as what you explained above?

Furthermore, I want to evaluate these grasps; can I use the conversion from here, and then use the evaluation from here?

Install graspnetAPI Error and tmp Solution

Install graspnetAPI Error
Environment: Ubuntu 20.04, Anaconda
Command: pip install .
Solution: export SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True
Error:

  Using cached sklearn-0.0.post9.tar.gz (3.6 kB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error
  
  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [18 lines of output]
      The 'sklearn' PyPI package is deprecated, use 'scikit-learn'
      rather than 'sklearn' for pip commands.
      
      Here is how to fix this error in the main use cases:
      - use 'pip install scikit-learn' rather than 'pip install sklearn'
      - replace 'sklearn' by 'scikit-learn' in your pip requirements files
        (requirements.txt, setup.py, setup.cfg, Pipfile, etc ...)
      - if the 'sklearn' package is used by one of your dependencies,
        it would be great if you take some time to track which package uses
        'sklearn' instead of 'scikit-learn' and report it to their issue tracker
      - as a last resort, set the environment variable
        SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True to avoid this error
      
      More information is available at
      https://github.com/scikit-learn/sklearn-pypi-package
      
      If the previous advice does not cover your use case, feel free to report it at
      https://github.com/scikit-learn/sklearn-pypi-package/issues/new
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

Reproducing results of GGCNN on GraspNet

Hello,
I am trying to reproduce your results with GGCNN on GraspNet and have a few questions:

  1. In your paper you wrote that you used the official implementation of GGCNN (I assume you mean this one). How exactly did you handle the number of labeled grasps? Converting the grasps to the required quality, angle, and width maps takes far too much time with the original code.
  2. After training a GGCNN on the GraspNet training data and evaluating the estimated grasp poses, I only achieve an AP of ~3% (for now only on the seen split). Can you provide weights or code for your training of these models?

I really want to work with your evaluation pipeline, as I find it a really nice way of evaluating grasp-estimation methods without having to physically execute every grasp.

Thank you for your work and help.

Some points of confusion about the dataset

Hello,

Thank you for the dataset you provide! While using it, I ran into three points of confusion:

First, in your dataset every scene has 256 different views, and each view carries 6D pose annotations for the objects, namely 'pos_in_world' and 'ori_in_world'. My understanding is that these represent the object's 6D pose in the world frame. What puzzles me is that, across different views of the same scene, the objects' poses in the world frame do not change, so why are 'pos_in_world' and 'ori_in_world' for the same object different in different views?

Second, collision_label annotates, for n points on each object, whether the different grasps there collide. Is a grasp's collision decided by whether the gripper, before closing, collides with the point cloud of the single target object? And when objects are stacked together, how is gripper collision determined in the cluttered scene?

Third, in the final grasp poses [scores, widths, heights, depths, rotations, target_points, object_ids], rotations are rotation matrices; do target_points represent the translation vectors? Then what is depth used for? From your paper, my understanding of depth is that it is the advance distance along the approach direction toward the grasp target point; under that reading, shouldn't the translation vector be derived from target_points and depth together? (See the hedged sketch below.)

Looking forward to your answer!

Thank you!
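
A hedged illustration of the reading in question 3: if depth is an advance along the approach axis (assumed here to be column 0 of the rotation matrix), the effective grasp center would combine the target point and depth:

import numpy as np

rotation = np.eye(3)          # placeholder grasp rotation
target_point = np.zeros(3)    # placeholder target point
depth = 0.02
center = target_point + depth * rotation[:, 0]   # push along the approach axis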

exam_convert.py fails; warnings about missing ROS dependencies?

$ python examples/exam_convert.py 
WARNING - 2020-12-07 13:14:14,427 - rigid_transformations - Failed to import geometry msgs in rigid_transformations.py.
WARNING - 2020-12-07 13:14:14,427 - rigid_transformations - Failed to import ros dependencies in rigid_transforms.py
WARNING - 2020-12-07 13:14:14,427 - rigid_transformations - autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable
Loading data path...: 100%|███████████████████████████████████████████████████████████████████████████████████████| 190/190 [00:00<00:00, 275.87it/s]

Then a black square window pops up; moving the mouse over it shows coordinates (x=..., v=...).

How gripper-specific are the pre-trained weights?

Hello,
I'm trying to understand what aspects of the gripper geometry are encoded in the trained network weights; under which circumstances would I need to retrain given a different parallel jaw gripper (assuming the gripper is parameterized by finger length/depth and maximum width [i.e. the joint limit of the prismatic finger joints]) ?

In your figure of the gripper frame there is a "2 cm" distance and I'm unsure what this refers to. Is this the distance from the frame origin to the physical palm of the gripper?

Or put differently: How do I match the grasp frame from a different gripper to be able to re-use the trained weights?

Thanks!

convert Grasps between ann_ids

I see version 1.2.6 supports transform for grasps, f312d7e

I have another idea:
I found that the grasps are all in camera coordinates;
I am at ann_id == i but I want to use the grasp at ann_id == j, so it seems I need something like this:

class AroundViewGrasp(Grasp):
    @property
    def ann_id(self):
        return self._ann_id
    
    @ann_id.setter
    def ann_id(self, idx):
        assert idx in range(256)
        self._ann_id = idx

    def to_view(self, target_id):
        T = some_magic(self.ann_id, target_id)
        self.transform(T)
        self.ann_id = target_id

I guess I can get some idea from here

table_trans = transform_points(table, np.linalg.inv(np.matmul(align_mat, camera_pose)))

but my mind is still all a mess about how could I construct this matrix T
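
A hedged sketch of one way to build T: the dataset ships per-annotation camera poses (camera_poses.npy under each scene's camera folder), so moving a grasp from the frame of ann_id i to ann_id j composes the two poses. The file name and pose convention (camera-to-reference) are assumptions to verify against the dataset layout:

import os
import numpy as np

scene_dir = '/path/to/graspnet/scenes/scene_0000'   # placeholder path
poses = np.load(os.path.join(scene_dir, 'kinect', 'camera_poses.npy'))  # assumed shape (256, 4, 4)
i, j = 0, 128
T = np.matmul(np.linalg.inv(poses[j]), poses[i])    # frame of ann i -> frame of ann j
grasp.transform(T)                                  # grasp: a Grasp in ann i's camera frame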

Missing close.jpg in mtl for object 68

Hi. I download the object models from Google Drive, and find that textured.obj.mtl uses close.jpg, which is not included. Could anyone help fix it? Thanks.

About the evaluation: how to apply different friction?

Hi, I'm a master's student also working on grasp generation.
I appreciate your efforts very much; they have actually saved me a lot of time.

To compare performance the way your papers do, I have to compare results under different friction coefficients.
But after browsing the code of this API, I still don't know where to adjust the friction coefficient u.
Please help.

Question about rectgrasp dataset

Thanks for your grasp dataset.
I have a question about rect grasp dataset.

The rect grasp dataset contains some rect grasps whose center point lies outside the target object, like the driver and the yellow round fruits in the picture below.

Are they erroneous data generated by the 3d-to-2d conversion process?

How can I deal with them?

[Example image omitted.]

Rect Label: Score

Hi everyone.
I'm trying to understand the meaning of the "score" label in the rect_labels. From this API's code I have the impression that this variable means the same as for the 6DOF grasps, but in the repo
rectangle-graspnet-multiObject-multiGrasp
the authors treat this variable as the row_score: mask = grasp[:, 5] < 0.2.
Can someone explain to me which score the variable refers to?

Objects used in Scene 188 and 189 declared as 'novel' are part of 'similar' and 'seen' split

Hello,
I recently took a closer look at the objects used in your dataset.
I found that scenes 188 and 189 are composed entirely of objects already used in other splits (scene 188 from test_seen and scene 189 from test_similar).
I assume there was a mix-up when selecting objects for these scenes, but for using the test_novel split to evaluate the generalization quality of models, this should be addressed.

about eval.py

Why do I get the following error when running eval.py?
WARNING:root:Failed to import geometry msgs in rigid_transformations.py.
WARNING:root:Failed to import ros dependencies in rigid_transforms.py
WARNING:root:autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable
Loading data path...: 100%|██████████| 30/30 [00:00<00:00, 124.31it/s]
Loading data path...: 100%|██████████| 30/30 [00:00<00:00, 119.35it/s]
Evaluating scene:121, camera:kinect
Traceback (most recent call last):
File "/graspnetAPI/examples/exam_eval.py", line 23, in <module>
acc = ge_k.eval_scene(scene_id = sceneId, dump_folder = dump_folder)
File "\graspnetAPI\graspnetAPI\graspnet_eval.py", line 144, in eval_scene
grasp_list, score_list, collision_mask_list = eval_grasp(grasp_group, model_sampled_list, dexmodel_list, pose_list, config, table=table_trans, voxel_size=0.008, TOP_K = TOP_K)
File "\graspnetAPI\graspnetAPI\utils\eval_utils.py", line 336, in eval_grasp
indices = compute_closest_points(grasp_group.translations, scene)
File "\graspnetAPI\graspnetAPI\utils\eval_utils.py", line 129, in compute_closest_points
dists = compute_point_distance(A, B)
File "******\graspnetAPI\graspnetAPI\utils\eval_utils.py", line 114, in compute_point_distance
dists = np.linalg.norm(A-B, axis=-1)
ValueError: operands could not be broadcast together with shapes (2784,1,0) (1,4778,3)

Process finished with exit code 1

Error when loading dex_models with pickle.load()

Hello, thank you very much for the dataset and graspnetAPI; may I ask a question? When running the test model and loading dex_models I hit the error below, and I can't find where to fix the bug. Do you know what causes it? Is it that the 'ColorVisuals' objects in the dex_models dataset have no 'crc' attribute? How should I fix this?

Traceback (most recent call last):
File "test.py", line 118, in <module>
    evaluate()
File "test.py", line 112, in evaluate
    res, ap = ge.eval_all(cfgs.dump_dir, proc=cfgs.num_workers)
File "graspnetAPI/graspnet_eval.py", line 120, in eval_scene
    model_list, dexmodel_list, _ = self.get_scene_models(scene_id, ann_id=0)
File "graspnetAPI/graspnet_eval.py", line 50, in get_scene_models
    dexmodel = pickle.load(f)
AttributeError: 'ColorVisuals' object has no attribute 'crc'

Part of your source code:

for obj_idx in obj_list:
    model = o3d.io.read_point_cloud(os.path.join(model_dir, '%03d' % obj_idx, 'nontextured.ply'))
    dex_cache_path = os.path.join(self.root, 'dex_models', '%03d.pkl' % obj_idx)
    if os.path.exists(dex_cache_path):
        with open(dex_cache_path, 'rb') as f:
            dexmodel = pickle.load(f)   # this line raises: AttributeError: 'ColorVisuals' object has no attribute 'crc'
    else:
        dexmodel = load_dexnet_model(os.path.join(model_dir, '%03d' % obj_idx, 'textured'))
    points = np.array(model.points)
    model_list.append(points)
    dexmodel_list.append(dexmodel)

exam_eval.py is missing dexnet

$ python exam_eval.py 
WARNING - 2020-12-07 15:02:51,492 - rigid_transformations - Failed to import geometry msgs in rigid_transformations.py.
WARNING - 2020-12-07 15:02:51,492 - rigid_transformations - Failed to import ros dependencies in rigid_transforms.py
WARNING - 2020-12-07 15:02:51,492 - rigid_transformations - autolab_core not installed as catkin package, RigidTransform ros methods will be unavailable
Loading data path...: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 90/90 [00:00<00:00, 274.76it/s]
Loading data path...: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 90/90 [00:00<00:00, 275.34it/s]
Evaluating scene:121, camera:kinect
Traceback (most recent call last):
  File "exam_eval.py", line 21, in <module>
    acc = ge_k.eval_scene(scene_id = sceneId, dump_folder = dump_folder)
  File "/home/weiwen/anaconda3/envs/grasp/lib/python3.6/site-packages/graspnetAPI/graspnet_eval.py", line 117, in eval_scene
    model_list, dexmodel_list, _ = self.get_scene_models(scene_id, ann_id=0)
  File "/home/weiwen/anaconda3/envs/grasp/lib/python3.6/site-packages/graspnetAPI/graspnet_eval.py", line 50, in get_scene_models
    dexmodel = pickle.load(f)
ModuleNotFoundError: No module named 'dexnet'
