
GeoMatch: Geometry Matching for Multi-Embodiment Grasping

While significant progress has been made on the problem of generating grasps, many existing learning-based approaches still concentrate on a single embodiment, generalize poorly to higher-DoF end-effectors, and cannot capture a diverse set of grasp modes. In this paper, we tackle multi-embodiment grasping through the lens of learning rich geometric representations for both objects and end-effectors using Graph Neural Networks (GNNs). Our novel method, GeoMatch, applies supervised learning to grasping data from multiple embodiments, learning end-to-end contact point likelihood maps as well as conditional autoregressive prediction of grasps keypoint-by-keypoint. We compare our method against three baselines that provide multi-embodiment support. Our approach performs better across three end-effectors, while also providing competitive grasp diversity. Examples can be found at geomatch.github.io.

This is the source code for the paper: Geometry Matching for Multi-Embodiment Grasping.
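
The toy sketch below is not the paper's implementation; it only illustrates the autoregressive, keypoint-by-keypoint matching idea described in the abstract. Random features stand in for learned GNN embeddings, and a dot-product score stands in for the learned contact likelihood maps; all names and shapes are illustrative assumptions.

import numpy as np

# Hypothetical per-point features, standing in for GNN-encoded geometry:
# 128 object surface points and 5 end-effector keypoints, 16-dim each.
rng = np.random.default_rng(0)
object_feats = rng.standard_normal((128, 16))
gripper_feats = rng.standard_normal((5, 16))

def predict_contacts(object_feats, gripper_feats):
    # Greedy autoregressive matching: assign each gripper keypoint an
    # object contact point, conditioning each choice on the points
    # already chosen (here, simply by masking them out).
    chosen = []
    for g in gripper_feats:
        scores = object_feats @ g          # similarity per object point
        scores[chosen] = -np.inf           # condition on earlier picks
        chosen.append(int(np.argmax(scores)))
    return chosen

print(predict_contacts(object_feats, gripper_feats))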

Installation

To get started, we recommend creating an Anaconda or virtual environment.
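
For example, with Anaconda (the environment name and Python version are illustrative assumptions, not project requirements):

conda create -n geomatch python=3.9
conda activate geomatch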

This repository was developed with PyTorch 1.13, among other dependencies:

pip install torch==1.13.1 pytorch-kinematics matplotlib transforms3d numpy scipy plotly trimesh urdf_parser_py tqdm argparse

For this work, we used the data from GenDexGrasp: Generalizable Dexterous Grasping. Please follow the instructions in the GenDexGrasp repository to download it.

Usage

To train our model, run:

python3 train.py --epochs=XXX --batch_size=YYY
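
For example, a run with hypothetical values (these are placeholders, not recommended hyperparameters):

python3 train.py --epochs=100 --batch_size=32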

To generate grasps for a given object and all end-effectors, run:

python3 generate_grasps_for_obj.py --saved_model_dir=<your_trained_model> --object_name=<selected_object>

Optionally, you can plot grasps as they're generated by passing in --plot_grasps to the command above.
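
For example (the model directory and object name are placeholders):

python3 generate_grasps_for_obj.py --saved_model_dir=<your_trained_model> --object_name=<selected_object> --plot_grasps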

To generate grasps for all objects of the eval set and all end-effectors, run:

python3 generate_grasps_for_all_objs.py --saved_model_dir=<your_trained_model>

Citing this work

If you use our repository, please cite us:

@inproceedings{attarian2023geometry,
  title={Geometry Matching for Multi-Embodiment Grasping},
  author={Attarian, Maria and Asif, Muhammad Adil and Liu, Jingzhou and Hari, Ruthrash and Garg, Animesh and Gilitschenski, Igor and Tompson, Jonathan},
  booktitle={Proceedings of the 7th Conference on Robot Learning (CoRL)},
  year={2023}
}

License and disclaimer

Copyright 2023 DeepMind Technologies Limited

All software is licensed under the Apache License, Version 2.0 (Apache 2.0); you may not use this file except in compliance with the Apache 2.0 license. You may obtain a copy of the Apache 2.0 license at: https://www.apache.org/licenses/LICENSE-2.0

All other materials are licensed under the Creative Commons Attribution 4.0 International License (CC-BY). You may obtain a copy of the CC-BY license at: https://creativecommons.org/licenses/by/4.0/legalcode

Unless required by applicable law or agreed to in writing, all software and materials distributed here under the Apache 2.0 or CC-BY licenses are distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the licenses for the specific language governing permissions and limitations under those licenses.

This is not an official Google product.


geomatch's Issues

Request for Pre-trained Model and Data File Format Information

Hello,

I encountered some issues while trying to use your project's code. Specifically, I'm missing the robot_centroids.json and robot_keypoints.json files when running the create_gnn_dataset.py script to generate the dataset.

To proceed smoothly, I would appreciate the following assistance:

  • If possible, could you provide a pre-trained model to help me get started and evaluate the project's performance more easily?

  • Could you provide the format specifications for the robot_centroids.json and robot_keypoints.json files, so that I can attempt to generate them myself?

Thank you very much for your help; I look forward to your reply!
