
contactopt's Introduction

ContactOpt: Optimizing Contact to Improve Grasps

Patrick Grady, Chengcheng Tang, Christopher D. Twigg, Minh Vo, Samarth Brahmbhatt, Charles C. Kemp

pipeline overview

Physical contact between hands and objects plays a critical role in human grasps. We show that optimizing the pose of a hand to achieve expected contact with an object can improve hand poses inferred via image-based methods. Given a hand mesh and an object mesh, a deep model trained on ground truth contact data infers desirable contact across the surfaces of the meshes. Then, ContactOpt efficiently optimizes the pose of the hand to achieve desirable contact using a differentiable contact model. Notably, our contact model encourages mesh interpenetration to approximate deformable soft tissue in the hand. In our evaluations, our methods resulted in grasps that better matched ground truth contact, had lower kinematic error, and were significantly preferred by human participants.
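As an illustration of the core idea (a minimal sketch only, not the paper's actual objective; the tensor shapes and names below are assumptions), a differentiable "attraction" term can pull hand vertices that the network predicts should be in contact toward the object surface:

import torch

def attraction_loss(hand_verts, obj_verts, contact_pred):
    # hand_verts: (H, 3), obj_verts: (O, 3), contact_pred: (H,) with values in [0, 1]
    dists = torch.cdist(hand_verts, obj_verts)   # (H, O) pairwise distances
    nearest, _ = dists.min(dim=1)                # distance from each hand vertex to its closest object vertex
    return (contact_pred * nearest).mean()       # weight each vertex by its predicted contact

Because such a loss is differentiable with respect to the hand vertices, gradient descent on the MANO pose parameters can move the hand toward the desired contact; the full method additionally allows limited interpenetration to approximate deformable soft tissue, as described above.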

[Paper] [Paper website] [Supplementary] [Video]

Installation

Refer to installation instructions.

Run ContactOpt on the demo

A small dataset of 10 grasps from an image-based pose estimator has been included (paper, section 4.2.2). To run ContactOpt on this demo dataset:

python contactopt/run_contactopt.py --split=demo

To calculate aggregate statistics:

python contactopt/run_eval.py --split=demo

To visualize individual results, run the evaluation script with the --vis flag. Pan, rotate, and zoom with the mouse; press Q to advance to the next frame.

python contactopt/run_eval.py --split=demo --vis

Demo visualization

Run ContactOpt on user-provided data

A demo script is provided to demonstrate running ContactOpt on a single hand/object pair, and it can easily be modified to run on your own data. Demo files containing the object mesh and MANO parameters are also provided.

python contactopt/run_user_demo.py 
python contactopt/run_eval.py --split=user --vis

Note that this project uses the MANO pose parameterization with 15 PCA components. Other projects may use the MANO model with different formats (such as 45 individual joint angles). The contactopt.util.fit_pca_to_axang function is provided to convert between these formats.
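For example (a minimal sketch; the exact signature lives in contactopt/util.py and may differ from what is assumed here):

import numpy as np
from contactopt import util

axang_pose = np.zeros(45)                      # 45 per-joint axis-angle parameters (placeholder values)
pca_pose = util.fit_pca_to_axang(axang_pose)   # assumed to return the 15 PCA components
print(pca_pose)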

Running on Datasets

To run ContactOpt on the datasets described in the paper, download and generate the datasets as described in the installation document. Use the --split=aug flag for Perturbed ContactPose, or --split=im for the image-based pose estimates.

python contactopt/run_contactopt.py --split=aug
python contactopt/run_eval.py --split=aug
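
For the image-based pose estimates, the analogous commands are:

python contactopt/run_contactopt.py --split=im
python contactopt/run_eval.py --split=im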

Training DeepContact

The DeepContact network is trained on the Perturbed ContactPose dataset.

python contactopt/train_deepcontact.py

Citation

@inproceedings{grady2021contactopt,
    author={Grady, Patrick and Tang, Chengcheng and Twigg, Christopher D. and Vo, Minh and Brahmbhatt, Samarth and Kemp, Charles C.},
    title = {{ContactOpt}: Optimizing Contact to Improve Grasps},
    booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2021}
}

License

The code for this project is released under the MIT License.



contactopt's Issues

The problem about the fit_pca_to_axang function

Hello, I've been stuck on a problem for a long time. I pass a MANO pose with 45 axis-angle coordinates (the axisang format) to the fit_pca_to_axang function, but the result is not satisfactory. Is my understanding of the input wrong, or could there be another reason?
Looking forward to your reply!
(Input and output screenshots attached.)

about ho3d dataset

I would like to test this method in the HO3D_V2 Codalab competition. How should I process the data, or could you provide a processed version?

about bin weights

Could you explain how the weights in class_bin_weights.out are calculated?

TypeError: new(): data must be a sequence (got NoneType)

Hello, thanks for your work. I have a problem when I run ContactOpt on my own data.
The log is shown as follows:

Traceback (most recent call last):
  File "contactopt/run_user_demo.py", line 79, in <module>
    run_contactopt(args)
  File "/home/barry/文档/contact/contactopt/run_contactopt.py", line 67, in run_contactopt
    for idx, data in enumerate(tqdm(test_loader)):
  File "/home/barry/anaconda3/envs/contactopt/lib/python3.8/site-packages/tqdm/std.py", line 1171, in __iter__
    for obj in iterable:
  File "/home/barry/anaconda3/envs/contactopt/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
    data = self._next_data()
  File "/home/barry/anaconda3/envs/contactopt/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1376, in _next_data
    return self._process_data(data)
  File "/home/barry/anaconda3/envs/contactopt/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
    data.reraise()
  File "/home/barry/anaconda3/envs/contactopt/lib/python3.8/site-packages/torch/_utils.py", line 461, in reraise
    raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/barry/anaconda3/envs/contactopt/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/barry/anaconda3/envs/contactopt/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/barry/anaconda3/envs/contactopt/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/barry/文档/contact/contactopt/loader.py", line 50, in __getitem__
    out['hand_contact_gt'] = torch.Tensor(sample['ho_gt'].hand_contact)
TypeError: new(): data must be a sequence (got NoneType)

The data I used is the demo_object.obj and demo_mano.json you provided in the data folder. The provided data does not contain "ho_gt" (loader.py line 49), which leads to this error. How can I solve this? Do I need to provide the ground truth myself?
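
A possible workaround (a hypothetical patch, not an official fix) is to guard the assignment in loader.py so that user-provided data without ground-truth contact falls back to zeros; contact-error metrics will then be meaningless, but the optimization can still run:

# Hypothetical guard around loader.py line 50 (numpy and torch assumed imported there)
hand_contact = sample['ho_gt'].hand_contact
if hand_contact is None:
    # No ground-truth contact for user data: substitute zeros so the DataLoader does not crash
    hand_contact = np.zeros((778, 1), dtype=np.float32)   # 778 MANO vertices (assumed shape)
out['hand_contact_gt'] = torch.Tensor(hand_contact)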
