
catgrasp's Introduction

This is the official implementation of our paper:

Bowen Wen, Wenzhao Lian, Kostas Bekris, and Stefan Schaal. "CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation." IEEE International Conference on Robotics and Automation (ICRA) 2022.

Abstract

Task-relevant grasping is critical for industrial assembly, where downstream manipulation tasks constrain the set of valid grasps. Learning how to perform this task, however, is challenging, since task-relevant grasp labels are hard to define and annotate. There is also no consensus yet on proper representations for modeling or off-the-shelf tools for performing task-relevant grasps. This work proposes a framework to learn task-relevant grasping for industrial objects without the need for time-consuming real-world data collection or manual annotation. To achieve this, the entire framework is trained solely in simulation, including supervised training with synthetic label generation and self-supervised, hand-object interaction. In the context of this framework, this paper proposes a novel, object-centric canonical representation at the category level, which allows establishing dense correspondence across object instances and transferring task-relevant grasps to novel instances. Extensive experiments on task-relevant grasping of densely-cluttered industrial objects are conducted in both simulation and real-world setups, demonstrating the effectiveness of the proposed framework.

Bibtex

@article{wen2021catgrasp,
  title={CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation},
  author={Wen, Bowen and Lian, Wenzhao and Bekris, Kostas and Schaal, Stefan},
  journal={ICRA 2022},
  year={2022}
}

Supplementary Video

Click to watch

ICRA 2022 Presentation Video

Quick Setup

We provide a docker environment, so setup takes only a few lines.

  • If you haven't installed docker, install it first (https://docs.docker.com/get-docker/).

  • Run

    docker pull wenbowen123/catgrasp:latest
    
  • To enter the docker container, run

    cd  docker && bash run_container.sh
    cd /home/catgrasp && bash build.sh
    

    Now the environment is ready to run training or testing. Later you can re-enter the launched docker environment without re-compilation by:

    docker exec -it catgrasp bash
    

Data

The data should be arranged in the following directory structure:

  catgrasp
  ├── artifacts
  ├── data
  └── urdf

Testing

python run_grasp_simulation.py

You should see the demo starting like below. You can play with the settings in config_run.yml, including switching between different object instances within the category while using the same framework.
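
If you want to script such a change, below is a minimal sketch that assumes config_run.yml is plain YAML; the key name obj_file is only an illustrative guess, not necessarily the repository's actual key:

    import yaml  # PyYAML

    # Load the run configuration, point it at a different instance of the same
    # category, and write it back. The key name 'obj_file' is hypothetical.
    with open('config_run.yml') as f:
        cfg = yaml.safe_load(f)

    cfg['obj_file'] = 'data/object_models/nut_carr_95505A631.obj'

    with open('config_run.yml', 'w') as f:
        yaml.safe_dump(cfg, f)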

Training

In the following, we take the nut category as an example to walk through the training pipeline.

  • Compute signed distance function for all objects of the category

    python make_sdf.py --class_name nut
    
  • Pre-compute offline grasps for the training objects. This generates and evaluates grasp qualities regardless of their task-relevance. To visualize and debug the grasp quality evaluation, change to --debug 1

    python generate_grasp.py --class_name nut --debug 0
    
  • Self-supervised task-relevance discovery in simulation

    python pybullet_env/env_semantic_grasp.py --class_name nut --debug 0
    

    Changing --debug 0 to --debug 1 lets you debug and visualize the process.

    The affordance results will be saved in data/object_models. The heatmap file XXX_affordance_vis can be visualized as in the below image, where a warmer area indicates a region with a higher task-relevant grasping score P(T|G). A minimal plotting sketch is given at the end of this section.

  • Make the canonical model that stores category-level knowledge

    python make_canonical.py --class_name nut
    

  • Training data generation of piles

    python generate_pile_data.py --class_name nut
    

  • Process training data, including generating ground-truth labels

    python tool.py
    
  • To train NUNOCS net, examine the settings in config_nunocs.yml, then

    python train_nunocs.py
    
  • To train grasping-Q net, examine the settings in config_grasp.yml, then

    python train_grasp.py
    
  • To train instance segmentation net, examine the settings in PointGroup/config/config_pointgroup.yaml, then

    python train_pointgroup.py
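
As a reference, here is a minimal plotting sketch for viewing a per-point affordance heatmap such as the one mentioned above. It is not part of the repository: the .npy file names are placeholders, and it assumes the heatmap has been exported as an (N,3) point cloud with an (N,) score array.

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, registers the 3d projection

    # Hypothetical inputs: an (N,3) object point cloud and an (N,) array of
    # per-point task-relevant grasp scores P(T|G); the file names are placeholders.
    pts = np.load('/tmp/nut_points.npy')
    scores = np.load('/tmp/nut_affordance.npy')

    fig = plt.figure()
    ax = fig.add_subplot(projection='3d')
    sc = ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], c=scores, cmap='jet', s=2)
    fig.colorbar(sc).set_label('P(T|G)')  # warmer color = higher task-relevant score
    plt.show()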
    


catgrasp's Issues

Question about the camera

Hi Wenbo, thanks for sharing your code. Could I ask what the specific model of the camera is, and what role the camera plays in the whole project?
Looking forward to your reply.

No grasp candidates found in NocsTransferGraspSampler()

Hi Wen,

I was testing with the screw part and I am able to detect the nunocs pose, but NocsTransferGraspSampler() always generates 0 candidate grasps. The candidate grasp poses are always generated from PointConeGraspSampler().

Here is the debug print out for NocsTransferGraspSampler()

----- debug print from nocs predictor -----
nocs predictor best_ratio=1.0, scales=[0.0151087  0.01752974 0.03605307]
nocs pose
 [[ 1.41363386e-04 -7.31567217e-03  3.27616881e-02  1.97458331e-01]
 [-3.07184000e-04  1.59254132e-02  1.50497812e-02  8.53916265e-03]
 [-1.51049184e-02 -3.92335806e-04  5.46257179e-07  7.27071140e-01]
 [ 0.00000000e+00  0.00000000e+00  0.00000000e+00  1.00000000e+00]]
 
----- debug print from PointConeGraspSampler()  -----
estimated resolution=0.002014853280667527
#sphere_pts=30
#sample_ids=48
begin center_ob_between_gripper... with grasp length of  50680
PointConeGraspSampler before Filtering #grasp_poses=50680
canonical_to_cam:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1

n_approach_dir_rej=12120, n_ik_rej=16718, n_open_gripper_rej=21840, n_close_gripper_rej=0
PointConeGraspSampler after Filtering #grasp_poses=2

----- debug print from NocsTransferGraspSampler()  -----
grasp_poses before filter 10000
symmetry_tfs before filter 72
total grasp_poses before filter 720000
canonical_to_cam:
 0.000141363  -0.00731567    0.0327617     0.197458
-0.000307184    0.0159254    0.0150498   0.00853916
  -0.0151049 -0.000392336  5.46257e-07     0.727071
           0            0            0            1

n_approach_dir_rej=359983, n_ik_rej=360017, n_open_gripper_rej=0, n_close_gripper_rej=0
#grasp_poses with symmetry and after filter 0
Sampled grasps: 0

Issue with segmentation

Hi Wen:
When I try to run train_pointgroup.py in the container, it raises the value error shown in the screenshot (2022-07-29 09-56-43).
Apart from this, I can run all the other .py files in docker.

Question about training environment

Hi Wenbo,
Thanks for sharing your code. I am trying to use your code to train my own model. Could I ask what type of GPU and CPU you trained your model on? For example, how many cores and how much memory does the CPU have, and what is the GPU type (1080, 2060 or 3090)?
Also, my GPU is a 3060 and it is not compatible with your image environment (cuda10). I couldn't run model.cuda() on my computer, so I changed the cudatoolkit to 11.0. With that, model.cuda() succeeds, but when I run build.sh, pg_op can't be built successfully. I guess it is because of the CUDA mismatch. Do you have any suggestions on how to build pg_op on cuda11.0?
Looking forward to your reply : )

About computation speed at runtime

Hi Wen,

I have a question regarding the computation speed of the process.
It takes about 30+ seconds to generate a single grasp with the screw object on my current laptop (20-core CPU and an RTX 3060 GPU), and I noticed that most of the time is spent in makeOccupancyGridFromCloudScan (~15 s) and sample_grasps (~13 s).
I understand the current code might not be optimized for speed, but is this the expected computation time?
Can you also suggest which parameters to change so that I can reduce the computation time? (Maybe downsample the point cloud more or reduce the number of grasp candidates; I understand this might reduce the picking success rate.)

Question about code

Hi Wen,
Is there an error in this line of my_cpp/common.cpp:
cur_grasp_in_cam.block(0,3,3,1) += step*major_dir*sign;
Should the correct version be:
cur_grasp_in_cam.block(0,3,3,3) += step*major_dir*sign;

What is the meaning of matrix `M`, `minor_pc` and `major_pc` ?

Dear author, I am reading your code and I have difficulty understanding the geometric meaning of the matrix M, minor_pc and major_pc in the following code:

for _ in range(len(kd_indices)):
    if sqr_distances[_] != 0:
        normal = normals_for_sample[kd_indices[_]]
        normal = normal.reshape(-1, 1)
        if np.linalg.norm(normal) != 0:
            normal /= np.linalg.norm(normal)
        M += np.matmul(normal, normal.T)
if sum(sum(M)) == 0:
    print("M matrix is empty as there is no point near the neighbour")
    self.params['r_ball'] *= 2
    print(f"Here is a bug, if points amount is too little it will keep trying and never go outside. Update r_ball to {self.params['r_ball']} and resample")
    return self.sample_one_surface_point(selected_surface,selected_normal,points_for_sample,normals_for_sample,background_pts,sphere_pts,seed)
approach_normal = -selected_normal.reshape(3)
approach_normal /= np.linalg.norm(approach_normal)
eigval, eigvec = np.linalg.eig(M)
def proj(u,v):
    u = u.reshape(-1)
    v = v.reshape(-1)
    return np.dot(u,v)/np.dot(u,u) * u
minor_pc = eigvec[:, np.argmin(eigval)].reshape(3)
minor_pc = minor_pc-proj(approach_normal,minor_pc)
minor_pc /= np.linalg.norm(minor_pc)
major_pc = np.cross(minor_pc, approach_normal)
major_pc = major_pc / np.linalg.norm(major_pc)

I have not found a relevant explanation in your paper; could you provide relevant information, please?

Thank you so much!

questions about code

  1. The function generate_affordance_worker's definition is here (definition), but its usage is here (usage). Should I swap the variables id and d in the function's definition? There is an error when I run the code, and I think the id in the definition may correspond to the i in the usage.
  2. When I run the code at this line, I also encounter an error. I read the code carefully and changed grasps_all to grasps_all[i], after which the error no longer occurs.

I'm not sure if I'm right, and I'm confused that nobody has asked about this; maybe I did something wrong? Looking forward to your reply! Thank you!

Problem with running the program

Hi Wen,
I'd like to ask a question about running the program:
The first time I run python generate_grasp.py --class_name nut --debug 1, the program runs normally without errors, but on the second run the error below appears. Do you know how to handle this situation?

(catgrasp) root@yqw:/home/catgrasp# python generate_grasp.py --class_name nut --debug 1
pybullet build time: Dec  1 2021 18:33:04
Gripper hand_depth: 0.018883
Gripper init_bite: 0.005
Gripper max_width: 0.048
Gripper hand_height: 0.020832
Gripper finger_width: 0.00586
Gripper hand_outer_diameter: 0.061398
Sdf3D self.dims_=[168 168 168], self.resolution_=0.000994335, self.origin_=[-0.0835218 -0.083531   0.0678743], center_sdf=-0.0309976, boundary_sdf=0.0808156
sdf_dir /home/catgrasp/dexnet/grasping/../../urdf/robotiq_hande/gripper_enclosed_air_tight.sdf
Sdf3D self.dims_=[168 168 168], self.resolution_=0.000994683, self.origin_=[-0.083551  -0.0835602  0.0678726], center_sdf=-0.0309976, boundary_sdf=0.080857
obj_dirs:
 /home/catgrasp/data/object_models/nut_LBNR12-screw.obj
/home/catgrasp/data/object_models/nut_carr_95505A631.obj
/home/catgrasp/data/object_models/nut_carr_95496A380_MIL_SPEC.obj
/home/catgrasp/data/object_models/nut_carr_95010A240_GRADE.obj
/home/catgrasp/data/object_models/nut_carr_92362A160_TYPE_18-8.obj
/home/catgrasp/data/object_models/nut_carr_91034A423_LOW-STRENGTH.obj
/home/catgrasp/data/object_models/nut_carr_90580A717_EXTRA-WD.obj
/home/catgrasp/data/object_models/nut_carr_90566A271_LOW-STRENGTH.obj
/home/catgrasp/data/object_models/nut_carr_90565A061_GRADE.obj
/home/catgrasp/data/object_models/nut_carr_90387A512_GLASS-FILLED.obj
/home/catgrasp/data/object_models/nut_carr_90215A433_NICKEL-PLATED.obj
/home/catgrasp/data/object_models/nut_carr_6407T760_HIGH-PRESSURE-VACUUM.obj
obj_dir /home/catgrasp/data/object_models/nut_LBNR12-screw.obj
estimated resolution=0.0018148186202093218
#sphere_pts=30
#sample_ids=157
begin center_ob_between_gripper...
Filtering #grasp_poses=113668
Traceback (most recent call last):
  File "generate_grasp.py", line 152, in <module>
    generate_grasp_one_object_complete_space(obj_dir)
  File "generate_grasp.py", line 97, in generate_grasp_one_object_complete_space
    grasps = ags.sample_grasps(background_pts=np.ones((1,3))*99999,points_for_sample=points_for_sample,normals_for_sample=normals_for_sample,num_grasps=np.inf,max_num_samples=np.inf,n_sphere_dir=30,approach_step=0.005,ee_in_grasp=np.eye(4),cam_in_world=np.eye(4),upper=np.ones((7))*999,lower=-np.ones((7))*999,open_gripper_collision_pts=np.ones((1,3))*999999,center_ob_between_gripper=True,filter_ik=False,filter_approach_dir_face_camera=False,adjust_collision_pose=False)
  File "/home/catgrasp/dexnet/grasping/grasp_sampler.py", line 216, in sample_grasps
    grasp_poses = my_cpp.filterGraspPose(grasp_poses,list(symmetry_tfs),nocs_pose,canonical_to_nocs,cam_in_world,ee_in_grasp,gripper_in_grasp,filter_approach_dir_face_camera,filter_ik,adjust_collision_pose,upper,lower,self.gripper.trimesh.vertices,self.gripper.trimesh.faces,self.gripper.trimesh_enclosed.vertices,self.gripper.trimesh_enclosed.faces,open_gripper_collision_pts,background_pts,resolution,verbose)
AttributeError: module 'my_cpp' has no attribute 'filterGraspPose'

A similar problem also occurs when running python run_grasp_simulation.py: it runs fine the first time, but the second run produces the error below.

Traceback (most recent call last):
  File "/opt/project/run_grasp_simulation.py", line 30, in <module>
    from predicter import *
  File "/opt/project/predicter.py", line 21, in <module>
    import PointGroup.data.dataset_seg as dataset_seg
  File "/opt/project/PointGroup/data/dataset_seg.py", line 19, in <module>
    from lib.pointgroup_ops.functions import pointgroup_ops
  File "/opt/project/PointGroup/data/../lib/pointgroup_ops/functions/pointgroup_ops.py", line 8, in <module>
    import PG_OP
ModuleNotFoundError: No module named 'PG_OP'

Question about code comprehension

Hi Wen,
I'd like to ask about a few transformation matrices and pose matrices.
In the implementation of my_cpp.filterGraspPose():

vectorMatrix4f filterGraspPose(const vectorMatrix4f grasp_poses, const vectorMatrix4f symmetry_tfs, const Eigen::Matrix4f nocs_pose, const Eigen::Matrix4f canonical_to_nocs_transform, const Eigen::Matrix4f cam_in_world, const Eigen::Matrix4f ee_in_grasp, const Eigen::Matrix4f gripper_in_grasp, bool filter_approach_dir_face_camera, bool filter_ik, bool adjust_collision_pose, const std::vector<double> upper, const std::vector<double> lower, const Eigen::MatrixXf gripper_vertices, const Eigen::MatrixXi gripper_faces, const Eigen::MatrixXf gripper_enclosed_vertices, const Eigen::MatrixXi gripper_enclosed_faces, const Eigen::MatrixXf gripper_collision_pts, const Eigen::MatrixXf gripper_enclosed_collision_pts, float octo_resolution, bool verbose)
{
  vectorMatrix4f out;
  Eigen::Matrix4f canonical_to_cam = nocs_pose*canonical_to_nocs_transform;
  std::cout<<"canonical_to_cam:\n"<<canonical_to_cam<<"\n\n";

  int n_approach_dir_rej = 0;
  int n_ik_rej = 0;
  int n_open_gripper_rej = 0;
  int n_close_gripper_rej = 0;

omp_set_num_threads(int(std::thread::hardware_concurrency()));
#pragma omp parallel firstprivate(grasp_poses,symmetry_tfs,nocs_pose,canonical_to_nocs_transform,cam_in_world,ee_in_grasp,gripper_in_grasp,upper,lower,gripper_vertices,gripper_faces,gripper_enclosed_vertices,gripper_enclosed_faces,gripper_collision_pts,gripper_enclosed_collision_pts,canonical_to_cam)
{
  vectorMatrix4f out_local;
  int n_approach_dir_rej_local = 0;
  int n_ik_rej_local = 0;
  int n_open_gripper_rej_local = 0;
  int n_close_gripper_rej_local = 0;
  // collision efficiency test
  CollisionManager cm;
  int gripper_id = cm.registerMesh(gripper_vertices,gripper_faces);
  cm.registerPointCloud(gripper_collision_pts,octo_resolution);

  CollisionManager cm_bg;
  int gripper_enclosed_id = cm_bg.registerMesh(gripper_enclosed_vertices,gripper_enclosed_faces);
  cm_bg.registerPointCloud(gripper_enclosed_collision_pts,octo_resolution);

  #pragma omp for schedule(dynamic)
  for (int i=0;i<grasp_poses.size();i++)
  {
    const auto &grasp_pose = grasp_poses[i];
    for (int j=0;j<symmetry_tfs.size();j++)
    {
      const auto &tf = symmetry_tfs[j];
      Eigen::Matrix4f tmp_grasp_pose = tf*grasp_pose;
      Eigen::Matrix4f grasp_in_cam = canonical_to_cam*tmp_grasp_pose;

      for (int col=0;col<3;col++)
      {
        grasp_in_cam.block(0,col,3,1).normalize();
      }
		// filter_approach_dir_face_camera -- True
      if (filter_approach_dir_face_camera)
      {
          // grasp approach direction
        Eigen::Vector3f approach_dir = grasp_in_cam.block(0,0,3,1);
          // normalize
        approach_dir.normalize();
          // z-axis component
        float dot = approach_dir.dot(Eigen::Vector3f(0,0,1));
        if (dot<0)
        {
            //verbose -- True
          if (verbose)
          {
            n_approach_dir_rej_local++;
          }
          continue;
        }
      }
		// filter_ik -- False
      if (filter_ik)
      {
        Eigen::Matrix4f ee_in_base = cam_in_world*grasp_in_cam*ee_in_grasp;
        auto sols = get_ik_within_limits(ee_in_base,upper,lower);
        if (sols.size()==0)
        {
          if (verbose)
          {
            n_ik_rej_local++;
          }
          continue;
        }
      }
		// adjust_collision_pose -- False
      if (!adjust_collision_pose)
      {
        Eigen::Matrix4f gripper_in_cam = grasp_in_cam*gripper_in_grasp;
        cm.setTransform(gripper_in_cam,gripper_id);
        if (cm.isAnyCollision())
        {
            //verbose -- True
          if (verbose)
          {
            n_open_gripper_rej_local++;
          }
          continue;
        }
        // gripper_enclosed_id
        cm_bg.setTransform(gripper_in_cam,gripper_enclosed_id);
        if (cm_bg.isAnyCollision())
        {
            //verbose -- True
          if (verbose)
          {
            n_close_gripper_rej_local++;
          }
          continue;
        }
      }
      else
      {
        Eigen::Vector3f major_dir = grasp_in_cam.block(0,1,3,1);
        bool found = false;
        for (float step=0.0;step<=0.003;step+=0.001)
        {
          std::vector<int> signs = {1,-1};
          if (step==0)
          {
            signs = {1};
          }
          for (auto sign:signs)
          {
            Eigen::Matrix4f cur_grasp_in_cam = grasp_in_cam;
            cur_grasp_in_cam.block(0,3,3,1) += step*major_dir*sign;
            Eigen::Matrix4f cur_gripper_in_cam = cur_grasp_in_cam*gripper_in_grasp;
            cm.setTransform(cur_gripper_in_cam,gripper_id);
            if (cm.isAnyCollision())
            {
              continue;
            }

            cm_bg.setTransform(cur_gripper_in_cam,gripper_enclosed_id);
            if (cm_bg.isAnyCollision())
            {
              continue;
            }

            grasp_in_cam = cur_grasp_in_cam;
            found = true;
            break;
          }
          if (found)
          {
            break;
          }
        }
          
        if (!found)
        {
          grasp_in_cam.setZero();
          n_open_gripper_rej_local++;
        }
      }

        // put the collision-free transform into the output
      if (grasp_in_cam!=Eigen::Matrix4f::Zero())
      {
        out_local.push_back(grasp_in_cam);
      }
    }
  }

  #pragma omp critical
  {
    n_approach_dir_rej += n_approach_dir_rej_local;
    n_ik_rej += n_ik_rej_local;
    n_open_gripper_rej += n_open_gripper_rej_local;
    n_close_gripper_rej += n_close_gripper_rej_local;
    for (int i=0;i<out_local.size();i++)
    {
      out.push_back(out_local[i]);
    }
  }
}

    // verbose -- True
  if (verbose)
  {
    printf("n_approach_dir_rej=%d, n_ik_rej=%d, n_open_gripper_rej=%d, n_close_gripper_rej=%d\n",n_approach_dir_rej,n_ik_rej,n_open_gripper_rej,n_close_gripper_rej);
  }
  return out;
}

You use four transformation matrices (canonical_to_nocs_transform, canonical_to_cam, canonical_to_nocs_transform, symmetry_tfs) and two pose matrices (nocs_pose, grasp_pose). I'd like to state my personal understanding; could you tell me whether it is correct?

1. nocs_pose: equivalent to R_{nocs}^{cam}, i.e., the transform of the normalized object coordinate space (NOCS) frame relative to the camera frame;
2. grasp_pose: the grasp pose matrix expressed in the camera frame;
3. canonical_to_cam: equivalent to R_{canonical}^{cam}, i.e., the transform of the canonical frame relative to the camera frame;
4. canonical_to_nocs_transform: the transform of the canonical frame relative to the NOCS frame;
5. symmetry_tfs: I don't quite understand why this is a 4x4 identity matrix; what specific role does it play?
6. canonical: which coordinate frame does this correspond to?

I also have a few data-related questions. In the my_cpp.filterGraspPose() implementation you use two sets of gripper vertices (gripper_enclosed_vertices and gripper_vertices) for collision checking. Plotting both point clouds in meshlab, I found that gripper_enclosed_vertices has a few fewer points than gripper_vertices, but I don't understand the difference between the two and their respective functions. Could you explain?

Could not create GL context

  • System: WSL-kali/WSL-Ubuntu (Windows 10 22H2 19045.2965), Ubuntu 22.04
  • Docker Engine Version: 19.03.13
  • Docker Desktop Version: 20.10.24

When running python tool.py, the following error occurs:

#color_files=21000
model_path /tmp/0000000rgb.obj
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
Traceback (most recent call last):
  File "tool.py", line 436, in <module>
    compute_per_ob_visibility()
  File "tool.py", line 275, in compute_per_ob_visibility
    compute_per_ob_visibility_worker(color_file,cfg)
  File "tool.py", line 251, in compute_per_ob_visibility_worker
    renderer = ModelRendererOffscreen([model_dir],K,H=cfg['H'],W=cfg['W'])
  File "/home/catgrasp/renderer.py", line 34, in __init__
    self.r = pyrender.OffscreenRenderer(self.W, self.H)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/pyrender/offscreen.py", line 31, in __init__
    self._create()
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/pyrender/offscreen.py", line 149, in _create
    self._platform.init_context()
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/pyrender/platforms/pyglet_platform.py", line 52, in init_context
    width=1, height=1)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/pyglet/window/xlib/__init__.py", line 171, in __init__
    super(XlibWindow, self).__init__(*args, **kwargs)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/pyglet/window/__init__.py", line 615, in __init__
    context = config.create_context(gl.current_context)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/pyglet/gl/xlib.py", line 204, in create_context
    return XlibContextARB(self, share)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/pyglet/gl/xlib.py", line 314, in __init__
    super(XlibContext13, self).__init__(config, share)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/pyglet/gl/xlib.py", line 218, in __init__
    raise gl.ContextException('Could not create GL context')
pyglet.gl.ContextException: Could not create GL context

I used glxinfo | grep "OpenGL version" (installed via apt-get install mesa-utils) to check the OpenGL version, and it returns OpenGL version string: 3.1 Mesa 20.0.8.

Then I edited /opt/conda/envs/catgrasp/lib/python3.7/site-packages/pyrender/constants.py with vim, changing

TARGET_OPEN_GL_MAJOR = 4
TARGET_OPEN_GL_MINOR = 1

to

TARGET_OPEN_GL_MAJOR = 3
TARGET_OPEN_GL_MINOR = 1

and a new error appeared:

#color_files=21000
model_path /tmp/0000000rgb.obj
Traceback (most recent call last):
  File "tool.py", line 436, in <module>
    compute_per_ob_visibility()
  File "tool.py", line 275, in compute_per_ob_visibility
    compute_per_ob_visibility_worker(color_file,cfg)
  File "tool.py", line 260, in compute_per_ob_visibility_worker
    color,depth = renderer.render([ob_in_cam])
  File "/home/catgrasp/renderer.py", line 46, in render
    color, depth = self.r.render(self.scene)  # depth: float
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/pyrender/offscreen.py", line 102, in render
    retval = self._renderer.render(scene, flags, seg_node_map)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/pyrender/renderer.py", line 144, in render
    retval = self._forward_pass(scene, flags, seg_node_map=seg_node_map)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/pyrender/renderer.py", line 326, in _forward_pass
    self._configure_forward_pass_viewport(flags)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/pyrender/renderer.py", line 1012, in _configure_forward_pass_viewport
    self._configure_main_framebuffer()
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/pyrender/renderer.py", line 1094, in _configure_main_framebuffer
    self.viewport_width, self.viewport_height
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 402, in __call__
    return self( *args, **named )
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/OpenGL/error.py", line 232, in glCheckError
    baseOperation = baseOperation,
OpenGL.error.GLError: GLError(
        err = 1282,
        description = b'invalid operation',
        baseOperation = glRenderbufferStorageMultisample,
        cArguments = (
                GL_RENDERBUFFER,
                4,
                GL_RGBA,
                2064,
                1544,
        )
)

The document of OpenGL(https://registry.khronos.org/OpenGL-Refpages/gl4/html/glRenderbufferStorageMultisample.xhtml) shows that

GL_INVALID_OPERATION is generated if samples is greater than the maximum number of samples supported for internalformat.
GL_INVALID_OPERATION is generated if internalformat is a signed or unsigned integer format and samples is greater than the value of GL_MAX_INTEGER_SAMPLES

but I don't know what happened. I tried both WSL and Ubuntu 22.04, and they produce the same error.

CAD Model

Hi Bowen, thanks for the nicely prepared CAD models such as nut and hnm. I want to follow your work and train my own network, which requires more industrial part models. Could you share the spider script you used to collect part models, or a convenient method to obtain them?

What does new_tf do?

new_tf = np.eye(4)

Hi, the frame has already been translated to the center of the object, so why is the new_tf matrix computed again? In my opinion, new_tf just transforms the object to canonical space, but before new_tf is applied the object is already in canonical space (lines 68-69).

No module named PG_OP issue

Hi Wenbo,
Thanks for sharing your code. I installed docker and pulled your image successfully. However, when I try to run "python run_grasp_simulation.py" to test the code and environment, I get the issue shown in the picture.
It says there is no module named PG_OP. I hadn't heard of this module before and I couldn't find it on the Internet either. I tried apt-get and pip install but they both failed. Could I ask how I can solve this issue?
Looking forward to your reply. : )

Question about class UBlock

Hi Wen,
I have a question about the UBlock class used when building the network model in 'train_pointgroup.py'.
The forward method of this class is implemented as follows:

  def forward(self, input):
      output = self.blocks(input)
      identity = spconv.SparseConvTensor(output.features, output.indices, output.spatial_shape, output.batch_size)
      if len(self.nPlanes) > 1:
          output_decoder = self.conv(output)
          output_decoder = self.u(output_decoder)
          output_decoder = self.deconv(output_decoder)
          output.features = torch.cat((identity.features, output_decoder.features), dim=1)
          output = self.blocks_tail(output)

      return output

When len(self.nPlanes) > 1, output.features has dimension self.nPlanes[0]*2 while output has dimension self.nPlanes[0], but self.blocks_tail() expects both output and output.features to have dimension self.nPlanes[0]. Why is there a mismatch between the function's expected input dimension and the actual input dimension here?

Where to find NUNOCS training data?

Hi! I am interested in your project! When I clone the repo, I am able to successfully build the code within the Docker container, but when I run python train_nunocs.py no training or validation data is found, and the error is:

(catgrasp) root@XPS:/home/catgrasp# python train_nunocs.py
    phase=train #self.files=0
    phase=val #self.files=0
    Traceback (most recent call last):
      File "train_nunocs.py", line 38, in <module>
        trainer = TrainerNunocs(cfg)
      File "/home/catgrasp/trainer_nunocs.py", line 31, in __init__
        self.train_loader = torch.utils.data.DataLoader(self.train_data, batch_size=self.cfg['batch_size'], shuffle=True, num_workers=self.cfg['n_workers'], pin_memory=False, drop_last=True,worker_init_fn=worker_init_fn)
      File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 213, in __init__
        sampler = RandomSampler(dataset)
      File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 94, in __init__
        "value, but got num_samples={}".format(self.num_samples))
    ValueError: num_samples should be a positive integer value, but got num_samples=0

I followed the instructions to download the data directory from https://archive.cs.rutgers.edu/pracsys/catgrasp/ but still had no success. Do I need to download more data? Any help would be greatly appreciated!

In the future I would like to try to train my own NUNOCS Net for different object classes, so learning how to train from scratch is necessary.

Question about code

Hi Wen,
I'd like to ask a question about the code. Is the following line written incorrectly?

other_cloud = (transforms_to_nocs[other_file]@to_homo(other_cloud).T)[:,:3]

Should the correct version be the following?
other_cloud = (transforms_to_nocs[other_file]@to_homo(other_cloud).T).T[:,:3]

How is training-set task-relevant knowledge transferred to a testing instance?

Hello, I don't understand the process of transferring category-level task-relevant knowledge to a test instance.
In the paper, it seems to require estimating correspondences from the instance to the category template in NUNOCS space. I think it is a matching problem between the partial point cloud and the full template point cloud in order to transfer the task-relevant knowledge.
Besides, reading the code, I don't seem to find this process in run_grasp_simulation.py, line 62.
Can you tell me how the correspondence is established?

thank you!

Graphics window stuck

Environment:
Debian 9
2080 Ti GPU

Question: when running the run_grasp_simulation script, the graphics window freezes and does not show the training process; only a single window is displayed, and its content is whatever the desktop showed before it froze.

From the log output, training proceeds normally. I am not sure why nothing is rendered.

Environment setup problem

Hi Wen, I previously watched your CaTGrasp presentation on 3D视觉工坊 and would like to learn your method, but I ran into some problems:
I installed docker on Ubuntu 18.04 and pulled the catgrasp image, but I cannot successfully run bash run_container.sh, so I would like to ask:
What else needs to be configured to successfully run bash run_container.sh, or what environment do I need so that I can train the networks directly on my own Ubuntu machine without docker?

How to split train set and test set

The default settings in config_grasp.yml, config_nunocs.yml and config_pointgroup.yml show that train_root is "dataset/nut/train" and test_root is "dataset/nut/test". What files should I place in them, and how should I split them? Thank you!

about

In your paper, I see a sparse 3D U-Net network for instance segmentation, but in your code PointGroup is used for instance segmentation. Is that right?

The demo script cannot run due to an error

Hi Bowen,

I really enjoyed listening to your talk at RSS 2022. Your work was really inspiring, and I am trying out your code example now to get a better understanding of it. I followed your instructions, set everything up, and was able to run the script run_grasp_simulation.py. The GUI simulation view showed up but crashed after a bit. There was an error saying CUDA out of memory (see the error messages below). The machine I use for testing your code has 4 GB of GPU memory, which I thought should be enough? From my googling, the issue seems to be related to the batch size, but it still did not work after halving all the batch_size settings in the code. Can you help me out? Thanks!

eval grasps 240/407
eval grasps 280/407
eval grasps 320/407
eval grasps 360/407
eval grasps 400/407
Traceback (most recent call last):
  File "run_grasp_simulation.py", line 717, in <module>
    simulate_grasp_with_arm()
  File "run_grasp_simulation.py", line 575, in simulate_grasp_with_arm
    for grasps in compute_candidate_grasp(rgb,depth,seg,i_pick=i_pick,env=env,ags=ags,symmetry_tfs=symmetry_tfs,ik_func=ik_func):
  File "run_grasp_simulation.py", line 310, in compute_candidate_grasp
    ret = grasp_predicter.predict_batch(data,grasp_poses)
  File "/home/catgrasp/predicter.py", line 85, in predict_batch
    pred = self.model(input_data)[0]
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/catgrasp/pointnet2.py", line 294, in forward
    x, trans, trans_feat = self.feat(x)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/catgrasp/pointnet2.py", line 255, in forward
    trans_feat = self.fstn(x)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/catgrasp/pointnet2.py", line 212, in forward
    x = F.relu(self.bn3(self.conv3(x)))
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 81, in forward
    exponential_average_factor, self.eps)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/nn/functional.py", line 1656, in batch_norm
    training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: CUDA out of memory. Tried to allocate 656.00 MiB (GPU 0; 3.82 GiB total capacity; 840.13 MiB already allocated; 406.75 MiB free; 765.87 MiB cached)
numActiveThreads = 0
stopping threads
Thread with taskId 0 exiting
Thread TERMINATED
destroy semaphore
semaphore destroyed
destroy main semaphore
main semaphore destroyed

Best,
-- Andy

Gripper model configuration

Hi Bowen,

Thanks for your great work! I want to replace your Robotiq Hand-E gripper with the Franka Panda gripper in my project. Your gripper models are in urdf/robotiq_hande and there are many model and configuration files. How and where did you get these files, and could you please give me a hint on how to configure the Franka Panda gripper? Thank you in advance.

Why doesn't the part move along with the finger?

I tried moving the screw using one of the fingers, and the output hand point cloud shows that it should sweep the part. In my opinion the part should move to a new position, but in fact it stays where it is.

Before the hand moves:
image
image

After the hand moves:
image

Problem with python generate_grasp.py

Hello, thank you for your work. I cloned your code and ran python generate_grasp.py --class_name nut --debug 0, but there is a bug.
Screenshot from 2022-05-11 21-15-43
Have you met this problem, and how can I fix it?

How to determine jaw_width?

I haven't found the code that calculates jaw_width, i.e., the width at which the hand can hold the object just right; it seems only the fixed hand_outer_diameter is set (as in GPD). Or maybe the task-dependent width, which varies across objects, is supposed to be handled by the hardware?

size mismatch between ckpt and model

Hi,

When I run python run_grasp_simulation.py I encounter a size-mismatch error when loading PointGroupPredictor from artifacts/artifacts-77.

size mismatch for input_conv.0.weight: copying a param with shape torch.Size([3, 3, 3, 6, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 6]).
Full error report:

Error(s) in loading state_dict for PointGroup:
size mismatch for input_conv.0.weight: copying a param with shape torch.Size([3, 3, 3, 6, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 6]).
size mismatch for unet.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for unet.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for unet.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for unet.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for unet.conv.2.weight: copying a param with shape torch.Size([2, 2, 2, 16, 32]) from checkpoint, the shape in current model is torch.Size([32, 2, 2, 2, 16]).
size mismatch for unet.u.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for unet.u.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for unet.u.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for unet.u.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for unet.u.conv.2.weight: copying a param with shape torch.Size([2, 2, 2, 32, 48]) from checkpoint, the shape in current model is torch.Size([48, 2, 2, 2, 32]).
size mismatch for unet.u.u.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 48, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 48]).
size mismatch for unet.u.u.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 48, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 48]).
size mismatch for unet.u.u.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 48, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 48]).
size mismatch for unet.u.u.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 48, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 48]).
size mismatch for unet.u.u.conv.2.weight: copying a param with shape torch.Size([2, 2, 2, 48, 64]) from checkpoint, the shape in current model is torch.Size([64, 2, 2, 2, 48]).
size mismatch for unet.u.u.u.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
size mismatch for unet.u.u.u.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
size mismatch for unet.u.u.u.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
size mismatch for unet.u.u.u.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
size mismatch for unet.u.u.u.conv.2.weight: copying a param with shape torch.Size([2, 2, 2, 64, 80]) from checkpoint, the shape in current model is torch.Size([80, 2, 2, 2, 64]).
size mismatch for unet.u.u.u.u.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 80, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 80]).
size mismatch for unet.u.u.u.u.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 80, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 80]).
size mismatch for unet.u.u.u.u.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 80, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 80]).
size mismatch for unet.u.u.u.u.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 80, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 80]).
size mismatch for unet.u.u.u.u.conv.2.weight: copying a param with shape torch.Size([2, 2, 2, 80, 96]) from checkpoint, the shape in current model is torch.Size([96, 2, 2, 2, 80]).
size mismatch for unet.u.u.u.u.u.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 96, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 96]).
size mismatch for unet.u.u.u.u.u.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 96, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 96]).
size mismatch for unet.u.u.u.u.u.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 96, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 96]).
size mismatch for unet.u.u.u.u.u.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 96, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 96]).
size mismatch for unet.u.u.u.u.u.conv.2.weight: copying a param with shape torch.Size([2, 2, 2, 96, 112]) from checkpoint, the shape in current model is torch.Size([112, 2, 2, 2, 96]).
size mismatch for unet.u.u.u.u.u.u.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 112, 112]) from checkpoint, the shape in current model is torch.Size([112, 3, 3, 3, 112]).
size mismatch for unet.u.u.u.u.u.u.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 112, 112]) from checkpoint, the shape in current model is torch.Size([112, 3, 3, 3, 112]).
size mismatch for unet.u.u.u.u.u.u.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 112, 112]) from checkpoint, the shape in current model is torch.Size([112, 3, 3, 3, 112]).
size mismatch for unet.u.u.u.u.u.u.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 112, 112]) from checkpoint, the shape in current model is torch.Size([112, 3, 3, 3, 112]).
size mismatch for unet.u.u.u.u.u.deconv.2.weight: copying a param with shape torch.Size([2, 2, 2, 112, 96]) from checkpoint, the shape in current model is torch.Size([96, 2, 2, 2, 112]).
size mismatch for unet.u.u.u.u.u.blocks_tail.block0.i_branch.0.weight: copying a param with shape torch.Size([1, 1, 1, 192, 96]) from checkpoint, the shape in current model is torch.Size([96, 1, 1, 1, 192]).
size mismatch for unet.u.u.u.u.u.blocks_tail.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 192, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 192]).
size mismatch for unet.u.u.u.u.u.blocks_tail.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 96, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 96]).
size mismatch for unet.u.u.u.u.u.blocks_tail.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 96, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 96]).
size mismatch for unet.u.u.u.u.u.blocks_tail.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 96, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 96]).
size mismatch for unet.u.u.u.u.deconv.2.weight: copying a param with shape torch.Size([2, 2, 2, 96, 80]) from checkpoint, the shape in current model is torch.Size([80, 2, 2, 2, 96]).
size mismatch for unet.u.u.u.u.blocks_tail.block0.i_branch.0.weight: copying a param with shape torch.Size([1, 1, 1, 160, 80]) from checkpoint, the shape in current model is torch.Size([80, 1, 1, 1, 160]).
size mismatch for unet.u.u.u.u.blocks_tail.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 160, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 160]).
size mismatch for unet.u.u.u.u.blocks_tail.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 80, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 80]).
size mismatch for unet.u.u.u.u.blocks_tail.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 80, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 80]).
size mismatch for unet.u.u.u.u.blocks_tail.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 80, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 80]).
size mismatch for unet.u.u.u.deconv.2.weight: copying a param with shape torch.Size([2, 2, 2, 80, 64]) from checkpoint, the shape in current model is torch.Size([64, 2, 2, 2, 80]).
size mismatch for unet.u.u.u.blocks_tail.block0.i_branch.0.weight: copying a param with shape torch.Size([1, 1, 1, 128, 64]) from checkpoint, the shape in current model is torch.Size([64, 1, 1, 1, 128]).
size mismatch for unet.u.u.u.blocks_tail.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 128, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 128]).
size mismatch for unet.u.u.u.blocks_tail.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
size mismatch for unet.u.u.u.blocks_tail.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
size mismatch for unet.u.u.u.blocks_tail.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
size mismatch for unet.u.u.deconv.2.weight: copying a param with shape torch.Size([2, 2, 2, 64, 48]) from checkpoint, the shape in current model is torch.Size([48, 2, 2, 2, 64]).
size mismatch for unet.u.u.blocks_tail.block0.i_branch.0.weight: copying a param with shape torch.Size([1, 1, 1, 96, 48]) from checkpoint, the shape in current model is torch.Size([48, 1, 1, 1, 96]).
size mismatch for unet.u.u.blocks_tail.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 96, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 96]).
size mismatch for unet.u.u.blocks_tail.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 48, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 48]).
size mismatch for unet.u.u.blocks_tail.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 48, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 48]).
size mismatch for unet.u.u.blocks_tail.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 48, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 48]).
size mismatch for unet.u.deconv.2.weight: copying a param with shape torch.Size([2, 2, 2, 48, 32]) from checkpoint, the shape in current model is torch.Size([32, 2, 2, 2, 48]).
size mismatch for unet.u.blocks_tail.block0.i_branch.0.weight: copying a param with shape torch.Size([1, 1, 1, 64, 32]) from checkpoint, the shape in current model is torch.Size([32, 1, 1, 1, 64]).
size mismatch for unet.u.blocks_tail.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 64, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 64]).
size mismatch for unet.u.blocks_tail.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for unet.u.blocks_tail.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for unet.u.blocks_tail.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for unet.deconv.2.weight: copying a param with shape torch.Size([2, 2, 2, 32, 16]) from checkpoint, the shape in current model is torch.Size([16, 2, 2, 2, 32]).
size mismatch for unet.blocks_tail.block0.i_branch.0.weight: copying a param with shape torch.Size([1, 1, 1, 32, 16]) from checkpoint, the shape in current model is torch.Size([16, 1, 1, 1, 32]).
size mismatch for unet.blocks_tail.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 32, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 32]).
size mismatch for unet.blocks_tail.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for unet.blocks_tail.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for unet.blocks_tail.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for score_unet.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for score_unet.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for score_unet.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for score_unet.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for score_unet.conv.2.weight: copying a param with shape torch.Size([2, 2, 2, 16, 32]) from checkpoint, the shape in current model is torch.Size([32, 2, 2, 2, 16]).
size mismatch for score_unet.u.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for score_unet.u.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for score_unet.u.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for score_unet.u.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for score_unet.deconv.2.weight: copying a param with shape torch.Size([2, 2, 2, 32, 16]) from checkpoint, the shape in current model is torch.Size([16, 2, 2, 2, 32]).
size mismatch for score_unet.blocks_tail.block0.i_branch.0.weight: copying a param with shape torch.Size([1, 1, 1, 32, 16]) from checkpoint, the shape in current model is torch.Size([16, 1, 1, 1, 32]).
size mismatch for score_unet.blocks_tail.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 32, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 32]).
size mismatch for score_unet.blocks_tail.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for score_unet.blocks_tail.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for score_unet.blocks_tail.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).

cuda compilation issue

Hi Wen,

I'm very interested in your work. I'm trying to run the code in my own virtual environment (not the docker environment). When running bash build.sh, I encountered the following issue:

/home/gz/workspace/grasp/catgrasp-master/PointGroup/lib/spconv/src/spconv/maxpool.cu(116): error: more than one operator ">" matches these operands:
built-in operator "arithmetic > arithmetic"
function "operator>(const __half &, const __half &)"
/usr/local/cuda-11.3/include/cuda_fp16.hpp(296): here
operand types are: c10::Half > c10::Half
detected during:
instantiation of "void spconv::maxPoolFwdVecBlockKernel<T,Index,NumTLP,NumILP,VecType>(T *, const T *, const Index *, const Index *, int, int) [with T=c10::Half, Index=int, NumTLP=64, NumILP=16, VecType=std::conditional_t<true, int2, int4>]"

Could you please help check out the issue above? Lots of thanks!

CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`

  • System: WSL-kali/WSL-Ubuntu (Windows 10 22H2 19045.2965), Ubuntu 22.04
  • Docker Engine Version: 19.03.13
  • Docker Desktop Version: 20.10.24

When running python run_grasp_simulation.py, the following error occurs:

pybullet build time: Dec  1 2021 18:33:04
Gripper hand_depth: 0.018883
Gripper init_bite: 0.005
Gripper max_width: 0.048
Gripper hand_height: 0.020832
Gripper finger_width: 0.00586
Gripper hand_outer_diameter: 0.061398
Sdf3D self.dims_=[168 168 168], self.resolution_=0.000994335, self.origin_=[-0.0835218 -0.083531   0.0678743], center_sdf=-0.0309976, boundary_sdf=0.0808156
sdf_dir /home/catgrasp/dexnet/grasping/../../urdf/robotiq_hande/gripper_enclosed_air_tight.sdf
Sdf3D self.dims_=[168 168 168], self.resolution_=0.000994683, self.origin_=[-0.083551  -0.0835602  0.0678726], center_sdf=-0.0309976, boundary_sdf=0.080857
GraspPredicter artifact_dir /home/catgrasp/artifacts/artifacts-50
phase=test #self.keys=0
Load ckpt from /home/catgrasp/artifacts/artifacts-50/best_val.pth.tar
NunocsPredicter artifact_dir /home/catgrasp/artifacts/artifacts-76
phase=test #self.files=0
Load ckpt from /home/catgrasp/artifacts/artifacts-76/best_val.pth.tar
PointGroupPredictor artifact_dir /home/catgrasp/artifacts/artifacts-77
config_dir /home/catgrasp/artifacts/artifacts-77/config_pointgroup.yaml
phase: test, num files=0
Load ckpt from /home/catgrasp/artifacts/artifacts-77/best_val.pth.tar
NocsTransferGraspSampler score_larger_than=0.95, center_ob_between_gripper=False, max_n_grasp=10000, #canonical_grasp=10000, before has 747195
startThreads creating 1 threads.
starting thread 0
started thread 0
argc=2
argv[0] = --unused
argv[1] = --start_demo_name=Physics Server
ExampleBrowserThreadFunc started
X11 functions dynamically loaded using dlopen/dlsym OK!
X11 functions dynamically loaded using dlopen/dlsym OK!
Creating context
Created GL 3.3 context
Direct GLX rendering context obtained
Making context current
GL_VENDOR=VMware, Inc.
GL_RENDERER=llvmpipe (LLVM 10.0.0, 128 bits)
GL_VERSION=3.3 (Core Profile) Mesa 20.0.8
GL_SHADING_LANGUAGE_VERSION=3.30
pthread_getconcurrency()=0
Version = 3.3 (Core Profile) Mesa 20.0.8
Vendor = VMware, Inc.
Renderer = llvmpipe (LLVM 10.0.0, 128 bits)
b3Printf: Selected demo: Physics Server
startThreads creating 1 threads.
starting thread 0
started thread 0
MotionThreadFunc thread started
ven = VMware, Inc.
ven = VMware, Inc.
bullet server already connected
gripper_dir /home/catgrasp/dexnet/grasping/../../urdf/robotiq_hande
self.env_body_ids [0, 1]
(0, b'world_iiwa_joint', 4, -1, -1, 0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, b'arm_iiwa_link_0', (0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0), -1)
(1, b'iiwa_joint_1', 0, 7, 6, 1, 2.0, 1.0, -2.96, 2.96, 320.0, 10.0, b'arm_iiwa_link_1', (0.0, 0.0, 1.0), (0.1, 0.0, 0.0875), (0.0, 0.0, 0.0, 1.0), 0)
(2, b'iiwa_joint_2', 0, 8, 7, 1, 2.0, 1.0, -2.09, 2.09, 320.0, 10.0, b'arm_iiwa_link_2', (0.0, 0.0, 1.0), (0.0, 0.03, 0.08249999999999999), (9.381873917569987e-07, 0.7071080798588513, 0.707105482510614, -9.381839456086129e-07), 1)
(3, b'iiwa_joint_3', 0, 9, 8, 1, 2.0, 1.0, -2.96, 2.96, 176.0, 10.0, b'arm_iiwa_link_3', (0.0, 0.0, 1.0), (-0.0003, 0.14549999999862046, -0.042000751170443634), (9.381873916908391e-07, 0.7071080798588513, 0.707105482510614, 9.381839456747728e-07), 2)
(4, b'iiwa_joint_4', 0, 10, 9, 1, 2.0, 1.0, -2.09, 2.09, 176.0, 10.0, b'arm_iiwa_link_4', (0.0, 0.0, 1.0), (0.0, -0.03, 0.08550000000000002), (-0.7071080798594737, 0.0, 0.0, 0.7071054825112363), 3)
(5, b'iiwa_joint_5', 0, 11, 10, 1, 2.0, 1.0, -2.96, 2.96, 110.0, 10.0, b'arm_iiwa_link_5', (0.0, 0.0, 1.0), (0.0, 0.11749999999875532, -0.03400067770634157), (-9.381873916908391e-07, 0.7071080798588513, 0.707105482510614, -9.381839456747728e-07), 4)
(6, b'iiwa_joint_6', 0, 12, 11, 1, 2.0, 1.0, -2.09, 2.09, 40.0, 10.0, b'arm_iiwa_link_6', (0.0, 0.0, 1.0), (-0.0001, -0.021, 0.1394999999999999), (-0.7071080798594737, 6.21032719178595e-23, 6.210304380022312e-23, 0.7071054825112363), 5)
(7, b'iiwa_joint_7', 0, 13, 12, 1, 2.0, 1.0, -3.05433, 3.05433, 40.0, 10.0, b'arm_iiwa_link_7', (0.0, 0.0, 1.0), (1.9014789953427065e-27, 0.0803999999994535, -0.0004002975296133711), (9.381873916908391e-07, 0.7071080798588513, 0.707105482510614, 9.381839456747728e-07), 6)
(8, b'toolchanger_joint', 4, -1, -1, 0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, b'toolchanger_base_link', (0.0, 0.0, 0.0), (0.0, 0.0, 0.05426099999999999), (0.0, 0.0, 0.7071080798594737, 0.7071054825112363), 7)
(9, b'attach_joint', 4, -1, -1, 0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, b'toolchanger_tool_attach', (0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, -1.8087206151615695e-22, 1.0), 8)
(10, b'force_torqe_sensor_joint', 4, -1, -1, 0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, b'force_torqe_sensor_base_link', (0.0, 0.0, 0.0), (0.0, 0.0, 0.03695699999999991), (0.0, 0.0, -1.8087206151615695e-22, 1.0), 9)
(11, b'attach_joint_', 4, -1, -1, 0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, b'force_torqe_sensor_tool_attach', (0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, -1.8087206151615695e-22, 1.0), 10)
(12, b'axia_gripper_joint', 4, -1, -1, 0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, b'robotiq_hande_gripper_body', (0.0, 0.0, 0.0), (0.0, 0.0, -0.027666999999999914), (0.0, 0.0, 0.7071041838352712, 0.7071093785282833), 11)
(13, b'finger1', 1, 14, 13, 1, 0.0, 0.0, 0.0, 0.025, 100.0, 10.0, b'robotiq_hande_gripper_finger1', (0.0, 1.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.8087206151615695e-22, 1.0), 12)
(14, b'finger2', 1, 15, 14, 1, 0.0, 0.0, 0.0, 0.025, 100.0, 10.0, b'robotiq_hande_gripper_finger2', (0.0, -1.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.8087206151615695e-22, 1.0), 12)
self.env_body_ids [0, 1, 2]
Making pile /home/catgrasp/data/object_models/screw_carr_94323A329_NYLON.obj scale=[1. 1. 1.]
Add new objects on pile #=4
simulation_until_stable....
Finished simulation
simulation_until_stable....
Finished simulation
simulation_until_stable....
Finished simulation
simulation_until_stable....
Finished simulation
simulation_until_stable....
Finished simulation
simulation_until_stable....
Finished simulation
simulation_until_stable....
Finished simulation
simulation_until_stable....
Finished simulation
simulation_until_stable....
Finished simulation
simulation_until_stable....
Finished simulation
simulation_until_stable....
Finished simulation
simulation_until_stable....
Finished simulation
simulation_until_stable....
Finished simulation
simulation_until_stable....
Finished simulation
symmetry_tfs: 72
simulation_until_stable....
Finished simulation
Traceback (most recent call last):
  File "run_grasp_simulation.py", line 717, in <module>
    simulate_grasp_with_arm()
  File "run_grasp_simulation.py", line 575, in simulate_grasp_with_arm
    for grasps in compute_candidate_grasp(rgb,depth,seg,i_pick=i_pick,env=env,ags=ags,symmetry_tfs=symmetry_tfs,ik_func=ik_func):
  File "run_grasp_simulation.py", line 213, in compute_candidate_grasp
    scene_seg = seg_predicter.predict(copy.deepcopy(seg_input_data))
  File "/home/catgrasp/predicter.py", line 304, in predict
    ret = self.model(input_, p2v_map, coords_float, coords[:, 0].int(), batch_offsets, epoch=self.model.prepare_epochs-1)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/catgrasp/PointGroup/model/pointgroup/pointgroup.py", line 226, in forward
    output = self.input_conv(input_tensor)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/spconv/modules.py", line 123, in forward
    input = module(input)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/spconv/conv.py", line 157, in forward
    outids.shape[0])
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/spconv/functional.py", line 83, in forward
    return ops.indice_conv(features, filters, indice_pairs, indice_pair_num, num_activate_out, False, True)
  File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/spconv/ops.py", line 112, in indice_conv
    int(inverse), int(subm))
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)` (gemm<float> at /opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/ATen/cuda/CUDABlas.cpp:182)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x47 (0x7f6f79822e37 in /opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x3b45097 (0x7f6f8459d097 in /opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #2: THCudaTensor_addmm + 0x378 (0x7f6f84979518 in /opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x3bfa038 (0x7f6f84652038 in /opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #4: <unknown function> + 0x3b53fd8 (0x7f6f845abfd8 in /opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #5: torch::autograd::VariableType::mm_out(at::Tensor&, at::Tensor const&, at::Tensor const&) + 0x645 (0x7f6f83e9cad5 in /opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #6: at::Tensor spconv::indiceConv<float>(at::Tensor, at::Tensor, at::Tensor, at::Tensor, long, long, long) + 0xa0b (0x7f6f6353dcbb in /opt/conda/envs/catgrasp/lib/python3.7/site-packages/spconv/libspconv.so)
frame #7: c10::guts::infer_function_traits_t::return_type c10::detail::call_functor_with_args_from_stack_<c10::detail::WrapRuntimeKernelFunctor_<at::Tensor (*)(at::Tensor, at::Tensor, at::Tensor, at::Tensor, long, long, long), at::Tensor, c10::guts::typelist::typelist<at::Tensor, at::Tensor, at::Tensor, at::Tensor, long, long, long> >, true, 0ul, 1ul, 2ul, 3ul, 4ul, 5ul, 6ul>(c10::detail::WrapRuntimeKernelFunctor_<at::Tensor (*)(at::Tensor, at::Tensor, at::Tensor, at::Tensor, long, long, long), at::Tensor, c10::guts::typelist::typelist<at::Tensor, at::Tensor, at::Tensor, at::Tensor, long, long, long> >*, std::vector<c10::IValue, std::allocator<c10::IValue> >*, std::integer_sequence<unsigned long, 0ul, 1ul, 2ul, 3ul, 4ul, 5ul, 6ul>) + 0x146 (0x7f6f63547756 in /opt/conda/envs/catgrasp/lib/python3.7/site-packages/spconv/libspconv.so)
frame #8: c10::detail::wrap_kernel_functor<c10::detail::WrapRuntimeKernelFunctor_<at::Tensor (*)(at::Tensor, at::Tensor, at::Tensor, at::Tensor, long, long, long), at::Tensor, c10::guts::typelist::typelist<at::Tensor, at::Tensor, at::Tensor, at::Tensor, long, long, long> >, true, void>::call(std::vector<c10::IValue, std::allocator<c10::IValue> >*, c10::KernelCache*) + 0x2f (0x7f6f6354787f in /opt/conda/envs/catgrasp/lib/python3.7/site-packages/spconv/libspconv.so)
frame #9: <unknown function> + 0x383f5f0 (0x7f6f842975f0 in /opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #10: <unknown function> + 0x45050c (0x7f6faffc450c in /opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #11: <unknown function> + 0x41bf44 (0x7f6faff8ff44 in /opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #12: <unknown function> + 0x1c8066 (0x7f6fafd3c066 in /opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #21: THPFunction_apply(_object*, _object*) + 0x8e6 (0x7f6faff62726 in /opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/lib/libtorch_python.so)

numActiveThreads = 0
stopping threads
Thread with taskId 0 exiting
Thread TERMINATED
destroy semaphore
semaphore destroyed
destroy main semaphore
main semaphore destroyed
finished
numActiveThreads = 0
btShutDownExampleBrowser stopping threads
Thread with taskId 0 exiting
Thread TERMINATED
destroy semaphore
semaphore destroyed
destroy main semaphore
main semaphore destroyed
pybullet disconnected

It seems that the installed torch version doesn't match the CUDA version, but after updating torch with

pip install torch==1.5.0+cu101 torchvision==0.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html

I can no longer run bash build.sh successfully (see build.log).
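
One way to check whether the installed wheel can execute the cuBLAS GEMM that fails above is to run the same kind of operation directly. A minimal sketch, assuming it is run inside the container with the catgrasp environment active:

# Torch build, the CUDA version it was compiled against, and the GPU's compute capability
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.get_device_capability(0))"

# A small float32 matmul exercises the same cuBLAS sgemm path; if this also raises
# CUBLAS_STATUS_EXECUTION_FAILED, the failure is independent of catgrasp (often an
# unsupported GPU architecture for that wheel)
python -c "import torch; a = torch.randn(64, 64, device='cuda'); print((a @ a).sum().item())"

Note that if the reported compute capability is 8.x (Ampere), CUDA 10.1 builds do not include kernels for that architecture, so the cu101 upgrade above would not help either.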

cannot connect to X server

How can I solve this problem? Thanks very much!

startThreads creating 1 threads.
starting thread 0
started thread 0
argc=2
argv[0] = --unused
argv[1] = --start_demo_name=Physics Server
ExampleBrowserThreadFunc started
X11 functions dynamically loaded using dlopen/dlsym OK!
No protocol specified

cannot connect to X server
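
The "No protocol specified" and "cannot connect to X server" messages mean the container cannot reach, or is not authorized to use, the host's X display. A possible workaround under WSL, assuming WSLg provides the display and exposes its X socket at /tmp/.X11-unix (the repository's docker/run_container.sh may already handle some of these flags):

# On the WSL side, confirm a display exists and allow local containers to use it
echo $DISPLAY
xhost +local:

# Hypothetical container launch with the display forwarded and the X socket mounted
docker run -it --gpus all \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  wenbowen123/catgrasp:latest bash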
