fang-haoshu / halpe-fullbody
Halpe: full body human pose estimation and human-object interaction detection dataset
Hello there,
I'm trying to execute the vis.py, but halpe_train_v1.json is missing. Where do I get this file from?
Thanks in advance!
Adriana
Thanks for your contribution. Which annotation tool did you use to annotate the Halpe-FullBody dataset?
Also, which images in the HICO-DET dataset correspond to the annotations in halpe_train_v1.json?
Hello,
Thank you for creating a wonderful dataset.
Thank you for sharing the detailed annotations. I downloaded the HICO-DET dataset and the annotations as instructed in the README.md in the repository root. When inspecting the data, I found that at most one person per image was annotated. My question: was it intentional to annotate only one person (not always the main one), or was there a bug when compiling the annotation JSONs? Please see the attached image.
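For reference, here is a quick way to check this programmatically (a sketch, assuming halpe_train_v1.json follows the COCO-style layout the repo's vis.py expects):

```python
import json
from collections import Counter

def annotations_per_image(anno):
    """Count how many person annotations each image_id has in a
    COCO-style annotation dict."""
    return Counter(a['image_id'] for a in anno['annotations'])

# Usage (path as in the README; adjust to your download):
# anno = json.load(open('halpe_train_v1.json'))
# counts = annotations_per_image(anno)
# print(sum(1 for c in counts.values() if c > 1), 'images with >1 person')
```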
I believe this is a duplicate of #3, but I decided to re-open this as a new issue because the original didn't provide enough information.
Here's a minimal script to reproduce, edited from the vis.py that came with the repo.
import os
import json

import cv2
import numpy as np
import matplotlib.pyplot as plt  # needed for plt.imshow/plt.show below
l_pair = [
(0, 1), (0, 2), (1, 3), (2, 4), # Head
(5, 18), (6, 18), (5, 7), (7, 9), (6, 8), (8, 10),# Body
(17, 18), (18, 19), (19, 11), (19, 12),
(11, 13), (12, 14), (13, 15), (14, 16),
(20, 24), (21, 25), (23, 25), (22, 24), (15, 24), (16, 25),# Foot
(26, 27),(27, 28),(28, 29),(29, 30),(30, 31),(31, 32),(32, 33),(33, 34),(34, 35),(35, 36),(36, 37),(37, 38),#Face
(38, 39),(39, 40),(40, 41),(41, 42),(43, 44),(44, 45),(45, 46),(46, 47),(48, 49),(49, 50),(50, 51),(51, 52),#Face
(53, 54),(54, 55),(55, 56),(57, 58),(58, 59),(59, 60),(60, 61),(62, 63),(63, 64),(64, 65),(65, 66),(66, 67),#Face
(68, 69),(69, 70),(70, 71),(71, 72),(72, 73),(74, 75),(75, 76),(76, 77),(77, 78),(78, 79),(79, 80),(80, 81),#Face
(81, 82),(82, 83),(83, 84),(84, 85),(85, 86),(86, 87),(87, 88),(88, 89),(89, 90),(90, 91),(91, 92),(92, 93),#Face
(94,95),(95,96),(96,97),(97,98),(94,99),(99,100),(100,101),(101,102),(94,103),(103,104),(104,105),#LeftHand
(105,106),(94,107),(107,108),(108,109),(109,110),(94,111),(111,112),(112,113),(113,114),#LeftHand
(115,116),(116,117),(117,118),(118,119),(115,120),(120,121),(121,122),(122,123),(115,124),(124,125),#RightHand
(125,126),(126,127),(115,128),(128,129),(129,130),(130,131),(115,132),(132,133),(133,134),(134,135)#RightHand
]
p_color = [(0, 255, 255), (0, 191, 255), (0, 255, 102), (0, 77, 255), (0, 255, 0), # Nose, LEye, REye, LEar, REar
(77, 255, 255), (77, 255, 204), (77, 204, 255), (191, 255, 77), (77, 191, 255), (191, 255, 77), # LShoulder, RShoulder, LElbow, RElbow, LWrist, RWrist
(204, 77, 255), (77, 255, 204), (191, 77, 255), (77, 255, 191), (127, 77, 255), (77, 255, 127), # LHip, RHip, LKnee, Rknee, LAnkle, RAnkle, Neck
(77, 255, 255), (0, 255, 255), (77, 204, 255), # head, neck, shoulder
(0, 255, 255), (0, 191, 255), (0, 255, 102), (0, 77, 255), (0, 255, 0), (77, 255, 255)] # foot
line_color = [(0, 215, 255), (0, 255, 204), (0, 134, 255), (0, 255, 50),
(0, 255, 102), (77, 255, 222), (77, 196, 255), (77, 135, 255), (191, 255, 77), (77, 255, 77),
(77, 191, 255), (204, 77, 255), (77, 222, 255), (255, 156, 127),
(0, 127, 255), (255, 127, 77), (0, 77, 255), (255, 77, 36),
(0, 77, 255), (0, 77, 255), (0, 77, 255), (0, 77, 255), (255, 156, 127), (255, 156, 127)]
bodyanno = json.load(open('halpe_train_v1.json'))
image_folder = 'path/to/train images'

# Index images by id for fast lookup
imgs = {}
for img in bodyanno['images']:
    imgs[img['id']] = img

annot = bodyanno['annotations'][5]
if 'keypoints' in annot and isinstance(annot['keypoints'], list):
    imgname = str(imgs[annot['image_id']]['file_name'])
    img = cv2.imread(os.path.join(image_folder, imgname))
    part_line = {}
    kp = np.array(annot['keypoints'])
    kp_x = kp[0::3]
    kp_y = kp[1::3]
    kp_scores = kp[2::3]
    # Draw keypoints
    for n in range(kp_scores.shape[0]):
        if kp_scores[n] <= 0.6:
            continue
        cor_x, cor_y = int(kp_x[n]), int(kp_y[n])
        part_line[n] = (cor_x, cor_y)
        if n < len(p_color):
            cv2.circle(img, (cor_x, cor_y), 2, p_color[n], -1)
        else:
            cv2.circle(img, (cor_x, cor_y), 1, (255, 255, 255), 2)
    # Draw limbs
    for i, (start_p, end_p) in enumerate(l_pair):
        if start_p in part_line and end_p in part_line:
            start_xy = part_line[start_p]
            end_xy = part_line[end_p]
            if i < len(line_color):
                cv2.line(img, start_xy, end_xy, line_color[i], 2)
            else:
                cv2.line(img, start_xy, end_xy, (255, 255, 255), 1)
    # cv2.imread returns BGR; flip to RGB for matplotlib
    plt.imshow(img[:, :, ::-1]); plt.axis('off'); plt.show()
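As an aside, the l_pair indices in the script suggest how the 136 keypoints are grouped (this is an inference from the script's comments, not official documentation): body/head, feet, 68 face landmarks, and 21 landmarks per hand. A helper to split the flat list accordingly:

```python
import numpy as np

# Grouping implied by the l_pair index ranges (assumed, not official):
GROUPS = {
    'body_head': range(0, 20),     # COCO-style body plus head/neck/hip extras
    'feet': range(20, 26),
    'face': range(26, 94),         # 68 face landmarks
    'left_hand': range(94, 115),   # 21 hand landmarks
    'right_hand': range(115, 136),
}

def split_keypoints(flat_kp):
    """Split a flat [x, y, v, ...] list of 136 keypoints into named
    groups of (x, y, v) triples."""
    kp = np.asarray(flat_kp).reshape(-1, 3)
    assert kp.shape[0] == 136, 'expected 136 Halpe keypoints'
    return {name: kp[list(idx)] for name, idx in GROUPS.items()}
```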
Thank you for sharing this great project. My question is as in the title.
@Fang-Haoshu, hello, the dataset cannot be downloaded.
I have noticed that visibility scores are not always within the 0-2 range for the Halpe-FullBody dataset, unlike the COCO dataset, where
0 = not labeled
1 = labeled but not visible (hidden joint)
2 = labeled and visible.
What is your definition of the keypoint visibility flag?
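To see which visibility values actually occur, one can scan the flags, i.e. every third value in the COCO-style flat keypoint list (a sketch, assuming the JSON layout used by vis.py):

```python
def visibility_flags(keypoints):
    """Extract the visibility flags v from a flat COCO-style keypoint
    list [x1, y1, v1, x2, y2, v2, ...]."""
    return keypoints[2::3]

# Usage: collect the distinct flags over all annotations, e.g.
# flags = set()
# for a in bodyanno['annotations']:
#     flags.update(visibility_flags(a['keypoints']))
# print(sorted(flags))
```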
Why do images in Halpe contain multiple people, yet the halpe_train_v1.json annotation file only provides the annotation for one person? Am I mistaken, or is there really only one annotation per image?
I wanted to bring to your attention an issue I encountered while attempting to access the HICO-DET dataset for my research.
2. In my attempt to download the images from the HICO website (https://websites.umich.edu/~ywchao/hico/), I noticed that the "train2015" folder contains 38,118 images. This differs from the counts mentioned in your paper (40k or 50k). Could you clarify the correct number of images in the train set?
Thank you for your time.
Firstly, thanks for sharing the dataset.
While looking into the data, in particular the annotations dictionary, which is supposed to follow the COCO format, I found invalid values for the field num_keypoints.
All values appear to be a constant 3. The COCO format specifies that this field should be the number of keypoints with visibility > 0, but that does not seem to be the case here:
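Per the COCO data format, num_keypoints can be recomputed directly from the keypoint triples, so a workaround is to derive it yourself (a minimal sketch):

```python
def coco_num_keypoints(keypoints):
    """COCO definition of num_keypoints: the number of keypoints whose
    visibility flag v (every third value in the flat [x, y, v, ...]
    list) is greater than 0."""
    return sum(1 for v in keypoints[2::3] if v > 0)
```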