
mmhuman3d's People

Contributors

ailingzengzzz, caizhongang, coach257, fubuki901, haofanwang, innerlee, jacobliu-s, jiaqiaa, juxuan27, khao123, kimren227, kristijanbartol, lazybusyyang, leng-yue, ly015, maoxie, mingyuan-zhang, naive-bayes, onescotch, pangyyyyy, ttxskk, vra, wei-chen-hub, wendaizhou, wenjiawang0312, wyjsjtu, ykk648, yl-1993, yongtaoge, zessay


mmhuman3d's Issues

Does mmhuman3d support estimation from depth data?

Great to see such an impressive open-source project. I would like to ask in my native language: does this project support feeding depth data or point-cloud data from a sensor to generate the parametric model? That would allow more direct estimation of human body dimensions, given that dimensions estimated from 2D data alone can be ambiguous.

Looking forward to your reply.

cam_param is absent in HumanData.SUPPORTED_KEYS.

I want to convert the Human3.6M dataset, and I use the following script:

python tools/convert_datasets.py --datasets h36m_p1 --root_path ../data/datasets/ --output_path ../data/processed_datasets

Then I get the error: "cam_param is absent in HumanData.SUPPORTED_KEYS."

About SPIN preprocessed data

I noticed that the original SPIN code samples one in every 5 frames, while this framework processes H36M by sampling one in every 10 frames. Was h36m_fits.npy generated with every-10-frame or every-5-frame sampling? Is it consistent with the SPIN source code?

The training loss does not converge

I have trained models like SPIN, but the training loss (especially keypoints2d_loss) does not converge; it keeps growing. I only downloaded the COCO dataset and modified the dataset settings in the config file.

2022-01-17 14:43:00,158 - mmhuman3d - INFO - Epoch [1][50/3125] lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 22:28:06, time: 2.593, data_time: 0.067, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 0.2118, vertex_loss: 0.2076, smpl_pose_loss: 0.1244, smpl_betas_loss: 0.0138, camera_loss: 0.5341, loss: 1.0918
2022-01-17 14:45:06,227 - mmhuman3d - INFO - Epoch [1][100/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 22:07:29, time: 2.521, data_time: 0.010, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 0.6550, vertex_loss: 0.2113, smpl_pose_loss: 0.1248, smpl_betas_loss: 0.0168, camera_loss: 0.0009, loss: 1.0087
2022-01-17 14:47:11,979 - mmhuman3d - INFO - Epoch [1][150/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:58:06, time: 2.515, data_time: 0.010, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 0.9062, vertex_loss: 0.2161, smpl_pose_loss: 0.1248, smpl_betas_loss: 0.0206, camera_loss: 0.0639, loss: 1.3317
2022-01-17 14:49:19,096 - mmhuman3d - INFO - Epoch [1][200/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:55:54, time: 2.542, data_time: 0.010, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 1.0589, vertex_loss: 0.2169, smpl_pose_loss: 0.1217, smpl_betas_loss: 0.0301, camera_loss: 0.0000, loss: 1.4275
2022-01-17 14:51:24,083 - mmhuman3d - INFO - Epoch [1][250/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:49:20, time: 2.500, data_time: 0.010, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 1.5051, vertex_loss: 0.2175, smpl_pose_loss: 0.1195, smpl_betas_loss: 0.0256, camera_loss: 0.0307, loss: 1.8983
2022-01-17 14:53:30,917 - mmhuman3d - INFO - Epoch [1][300/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:47:26, time: 2.537, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 1.4352, vertex_loss: 0.2168, smpl_pose_loss: 0.1235, smpl_betas_loss: 0.0221, camera_loss: 0.0000, loss: 1.7976
2022-01-17 14:55:38,974 - mmhuman3d - INFO - Epoch [1][350/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:47:16, time: 2.561, data_time: 0.010, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 1.9522, vertex_loss: 0.2104, smpl_pose_loss: 0.1221, smpl_betas_loss: 0.0106, camera_loss: 0.0000, loss: 2.2953
2022-01-17 14:57:46,203 - mmhuman3d - INFO - Epoch [1][400/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:45:33, time: 2.545, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 2.6034, vertex_loss: 0.2096, smpl_pose_loss: 0.1228, smpl_betas_loss: 0.0099, camera_loss: 0.0000, loss: 2.9457
2022-01-17 14:59:56,095 - mmhuman3d - INFO - Epoch [1][450/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:46:47, time: 2.598, data_time: 0.010, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 2.5713, vertex_loss: 0.2144, smpl_pose_loss: 0.1229, smpl_betas_loss: 0.0227, camera_loss: 0.0000, loss: 2.9314
2022-01-17 15:02:02,843 - mmhuman3d - INFO - Epoch [1][500/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:44:07, time: 2.535, data_time: 0.010, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 3.3346, vertex_loss: 0.2117, smpl_pose_loss: 0.1235, smpl_betas_loss: 0.0110, camera_loss: 0.0001, loss: 3.6810
2022-01-17 15:04:10,546 - mmhuman3d - INFO - Epoch [1][550/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:42:26, time: 2.554, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 3.5986, vertex_loss: 0.2122, smpl_pose_loss: 0.1161, smpl_betas_loss: 0.0237, camera_loss: 0.0003, loss: 3.9508
2022-01-17 15:06:18,414 - mmhuman3d - INFO - Epoch [1][600/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:40:49, time: 2.557, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 4.4794, vertex_loss: 0.2034, smpl_pose_loss: 0.1147, smpl_betas_loss: 0.0097, camera_loss: 0.0001, loss: 4.8073
2022-01-17 15:08:27,190 - mmhuman3d - INFO - Epoch [1][650/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:39:50, time: 2.575, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 3.7132, vertex_loss: 0.2025, smpl_pose_loss: 0.1140, smpl_betas_loss: 0.0093, camera_loss: 0.0000, loss: 4.0391
2022-01-17 15:10:38,125 - mmhuman3d - INFO - Epoch [1][700/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:40:15, time: 2.619, data_time: 0.012, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 4.4091, vertex_loss: 0.2049, smpl_pose_loss: 0.1173, smpl_betas_loss: 0.0108, camera_loss: 0.0000, loss: 4.7421
2022-01-17 15:12:44,682 - mmhuman3d - INFO - Epoch [1][750/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:37:21, time: 2.531, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 4.4774, vertex_loss: 0.2119, smpl_pose_loss: 0.1186, smpl_betas_loss: 0.0125, camera_loss: 0.0000, loss: 4.8204
2022-01-17 15:14:46,391 - mmhuman3d - INFO - Epoch [1][800/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:31:29, time: 2.434, data_time: 0.010, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 5.5440, vertex_loss: 0.2023, smpl_pose_loss: 0.1113, smpl_betas_loss: 0.0096, camera_loss: 0.0002, loss: 5.8674
2022-01-17 15:16:54,684 - mmhuman3d - INFO - Epoch [1][850/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:30:00, time: 2.566, data_time: 0.012, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 6.3298, vertex_loss: 0.2075, smpl_pose_loss: 0.1155, smpl_betas_loss: 0.0117, camera_loss: 0.0001, loss: 6.6646
2022-01-17 15:19:03,403 - mmhuman3d - INFO - Epoch [1][900/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:28:40, time: 2.574, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 8.4531, vertex_loss: 0.2038, smpl_pose_loss: 0.1110, smpl_betas_loss: 0.0181, camera_loss: 0.0001, loss: 8.7860
2022-01-17 15:21:09,466 - mmhuman3d - INFO - Epoch [1][950/3125]        lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:25:51, time: 2.521, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 6.5281, vertex_loss: 0.2075, smpl_pose_loss: 0.1143, smpl_betas_loss: 0.0211, camera_loss: 70.7021, loss: 77.5730
2022-01-17 15:23:19,187 - mmhuman3d - INFO - Epoch [1][1000/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:24:56, time: 2.594, data_time: 0.012, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 7.2353, vertex_loss: 0.2082, smpl_pose_loss: 0.1164, smpl_betas_loss: 0.0147, camera_loss: 0.0000, loss: 7.5747
2022-01-17 15:25:30,366 - mmhuman3d - INFO - Epoch [1][1050/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:24:36, time: 2.624, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 7.4498, vertex_loss: 0.2072, smpl_pose_loss: 0.1169, smpl_betas_loss: 0.0154, camera_loss: 0.0000, loss: 7.7893
2022-01-17 15:27:37,373 - mmhuman3d - INFO - Epoch [1][1100/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:22:12, time: 2.540, data_time: 0.010, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 7.3770, vertex_loss: 0.2133, smpl_pose_loss: 0.1166, smpl_betas_loss: 0.0297, camera_loss: 0.0000, loss: 7.7366
2022-01-17 15:29:49,020 - mmhuman3d - INFO - Epoch [1][1150/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:21:51, time: 2.633, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 8.5110, vertex_loss: 0.2076, smpl_pose_loss: 0.1148, smpl_betas_loss: 0.0171, camera_loss: 0.0000, loss: 8.8505
2022-01-17 15:31:57,100 - mmhuman3d - INFO - Epoch [1][1200/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:19:51, time: 2.562, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 9.0734, vertex_loss: 0.2063, smpl_pose_loss: 0.1150, smpl_betas_loss: 0.0197, camera_loss: 0.0000, loss: 9.4143
2022-01-17 15:34:06,728 - mmhuman3d - INFO - Epoch [1][1250/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:18:28, time: 2.593, data_time: 0.012, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 10.2803, vertex_loss: 0.2078, smpl_pose_loss: 0.1174, smpl_betas_loss: 0.0135, camera_loss: 0.0000, loss: 10.6190
2022-01-17 15:36:16,439 - mmhuman3d - INFO - Epoch [1][1300/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:17:03, time: 2.594, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 13.1727, vertex_loss: 0.2084, smpl_pose_loss: 0.1168, smpl_betas_loss: 0.0157, camera_loss: 0.0000, loss: 13.5136
2022-01-17 15:38:28,994 - mmhuman3d - INFO - Epoch [1][1350/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:16:38, time: 2.651, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 10.8980, vertex_loss: 0.2077, smpl_pose_loss: 0.1145, smpl_betas_loss: 0.0139, camera_loss: 0.0000, loss: 11.2341
2022-01-17 15:40:33,621 - mmhuman3d - INFO - Epoch [1][1400/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:13:16, time: 2.493, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 11.8481, vertex_loss: 0.2136, smpl_pose_loss: 0.1143, smpl_betas_loss: 0.0194, camera_loss: 0.0000, loss: 12.1954
2022-01-17 15:42:34,076 - mmhuman3d - INFO - Epoch [1][1450/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:08:34, time: 2.409, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 10.6372, vertex_loss: 0.2035, smpl_pose_loss: 0.1077, smpl_betas_loss: 0.0151, camera_loss: 0.0000, loss: 10.9635
2022-01-17 15:44:44,637 - mmhuman3d - INFO - Epoch [1][1500/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:07:22, time: 2.611, data_time: 0.010, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 11.9690, vertex_loss: 0.2182, smpl_pose_loss: 0.1221, smpl_betas_loss: 0.0177, camera_loss: 0.0000, loss: 12.3270
2022-01-17 15:46:54,947 - mmhuman3d - INFO - Epoch [1][1550/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:06:03, time: 2.606, data_time: 0.010, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 11.0126, vertex_loss: 0.2086, smpl_pose_loss: 0.1100, smpl_betas_loss: 0.0139, camera_loss: 0.0000, loss: 11.3451
2022-01-17 15:49:08,060 - mmhuman3d - INFO - Epoch [1][1600/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:05:32, time: 2.662, data_time: 0.011, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 15.5035, vertex_loss: 0.2117, smpl_pose_loss: 0.1197, smpl_betas_loss: 0.0119, camera_loss: 0.0000, loss: 15.8468
2022-01-17 15:51:18,177 - mmhuman3d - INFO - Epoch [1][1650/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:04:01, time: 2.602, data_time: 0.010, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 15.4472, vertex_loss: 0.2105, smpl_pose_loss: 0.1179, smpl_betas_loss: 0.0120, camera_loss: 0.0000, loss: 15.7876
2022-01-17 15:53:28,477 - mmhuman3d - INFO - Epoch [1][1700/3125]       lr_backbone: 3.000e-05 lr_head: 3.000e-05, eta: 21:02:31, time: 2.606, data_time: 0.010, memory: 3186, keypoints3d_loss: 0.0000, keypoints2d_loss: 16.4281, vertex_loss: 0.2120, smpl_pose_loss: 0.1165, smpl_betas_loss: 0.0136, camera_loss: 0.0000, loss: 16.7702

Naming inconsistencies

  • index vs indices
  • image vs img
  • build_xxx, singular
  • SMPLify
  • Human Data
  • path vs dir
  • data_source vs convention
  • keypoints and their shapes

Wishlist

Please leave a comment on what new method/feature/dataset you would like us to add in MMHuman3D.

The most upvoted suggestions will be prioritized!

Update: we have moved our Wishlist to this discussion. See you there!

GPU memory imbalance problem

I copied the dist_train.sh from mmpose.

#!/usr/bin/env bash
# Copyright (c) OpenMMLab. All rights reserved.

CONFIG=$1
GPUS=$2
PORT=${PORT:-29500}

PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \
python -m torch.distributed.launch --nproc_per_node=$GPUS --master_port=$PORT \
    $(dirname "$0")/train.py $CONFIG --launcher pytorch ${@:3}

With the following training command:

tools/dist_train.sh configs/hmr/resnet50_hmr_pw3d.py 2 --no-validate

With samples_per_gpu=32 and workers_per_gpu=2, I found that GPU memory usage is unbalanced across the GPUs.


When I use HybrIK as an example for Inference / Demo, I get the following error

  File "demo/estimate_smpl.py", line 296, in <module>
    main(args)
  File "demo/estimate_smpl.py", line 195, in main
    verts, K0, img_index = single_person_with_mmdet(args, frames_iter)
  File "demo/estimate_smpl.py", line 65, in single_person_with_mmdet
    mesh_results = inference_model(
  File "/home/bixueting/code/mmhuman3d/mmhuman3d/apis/inference.py", line 139, in inference_model
    inference_pipeline = [LoadImage()] + cfg.inference_pipeline
  File "/home/bixueting/anaconda3/envs/mmhuman3d/lib/python3.8/site-packages/mmcv/utils/config.py", line 507, in __getattr__
    return getattr(self._cfg_dict, name)
  File "/home/bixueting/anaconda3/envs/mmhuman3d/lib/python3.8/site-packages/mmcv/utils/config.py", line 48, in __getattr__
    raise ex
AttributeError: 'ConfigDict' object has no attribute 'inference_pipeline'

ImportError: cannot import name 'axis_angle_to_quaternion' from 'pytorch3d.transforms' (/home/user/anaconda3/envs/open-mmlab/lib/python3.8/site-packages/pytorch3d/transforms/__init__.py)

(open-mmlab) root@slave01:~/lol/mmhuman3d# conda list

packages in environment at /home/user/anaconda3/envs/open-mmlab:

Name Version Build Channel

_libgcc_mutex 0.1 main defaults
_openmp_mutex 4.5 1_gnu defaults
addict 2.4.0 pypi_0 pypi
aiohttp 3.8.1 pypi_0 pypi
aiosignal 1.2.0 pypi_0 pypi
async-timeout 4.0.1 pypi_0 pypi
attrs 21.2.0 pypi_0 pypi
autobahn 21.11.1 pypi_0 pypi
automat 20.2.0 pypi_0 pypi
blas 1.0 mkl defaults
bzip2 1.0.8 h7b6447c_0 defaults
ca-certificates 2021.10.26 h06a4308_2 defaults
cdflib 0.3.20 pypi_0 pypi
certifi 2021.10.8 py38h06a4308_0 defaults
cffi 1.15.0 pypi_0 pypi
charset-normalizer 2.0.9 pypi_0 pypi
chumpy 0.70 pypi_0 pypi
colorama 0.4.4 pypi_0 pypi
colorlog 6.6.0 pypi_0 pypi
colormap 1.0.4 pypi_0 pypi
constantly 15.1.0 pypi_0 pypi
cpuonly 1.0 0 pytorch
cryptography 36.0.0 pypi_0 pypi
cudatoolkit 10.0.130 0 defaults
cycler 0.11.0 pypi_0 pypi
cython 0.29.25 pypi_0 pypi
deprecated 1.2.13 pypi_0 pypi
dotty-dict 1.3.0 pypi_0 pypi
easydev 0.12.0 pypi_0 pypi
ffmpeg 4.2.2 h20bf706_0 defaults
flake8 4.0.1 pypi_0 pypi
flake8-import-order 0.18.1 pypi_0 pypi
fonttools 4.28.3 pypi_0 pypi
freetype 2.11.0 h70c0345_0 defaults
frozenlist 1.2.0 pypi_0 pypi
fvcore 0.1.5.post20211023 pypi_0 pypi
giflib 5.2.1 h7b6447c_0 defaults
gmp 6.2.1 h2531618_2 defaults
gnutls 3.6.15 he1e5248_0 defaults
h5py 3.6.0 pypi_0 pypi
hyperlink 21.0.0 pypi_0 pypi
idna 3.3 pypi_0 pypi
imageio 2.13.2 pypi_0 pypi
incremental 21.3.0 pypi_0 pypi
iniconfig 1.1.1 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561 defaults
iopath 0.1.9 pypi_0 pypi
jpeg 9b 0 defaults
json-tricks 3.15.5 pypi_0 pypi
kiwisolver 1.3.2 pypi_0 pypi
lame 3.100 h7b6447c_0 defaults
lcms2 2.12 h3be6417_0 defaults
ld_impl_linux-64 2.35.1 h7274673_9 defaults
libffi 3.3 he6710b0_2 defaults
libgcc-ng 9.3.0 h5101ec6_17 defaults
libgomp 9.3.0 h5101ec6_17 defaults
libidn2 2.3.2 h7f8727e_0 defaults
libopus 1.3.1 h7b6447c_0 defaults
libpng 1.6.37 hbc83047_0 defaults
libstdcxx-ng 9.3.0 hd4cf53a_17 defaults
libtasn1 4.16.0 h27cfd23_0 defaults
libtiff 4.2.0 h85742a9_0 defaults
libunistring 0.9.10 h27cfd23_0 defaults
libuv 1.40.0 h7b6447c_0 defaults
libvpx 1.7.0 h439df22_0 defaults
libwebp 1.2.0 h89dd481_0 defaults
libwebp-base 1.2.0 h27cfd23_0 defaults
lz4-c 1.9.3 h295c915_1 defaults
matplotlib 3.5.0 pypi_0 pypi
mccabe 0.6.1 pypi_0 pypi
mkl 2021.4.0 h06a4308_640 defaults
mkl-service 2.4.0 py38h7f8727e_0 defaults
mkl_fft 1.3.1 py38hd3c417c_0 defaults
mkl_random 1.2.2 py38h51133e4_0 defaults
mmcls 0.17.0 pypi_0 pypi
mmcv-full 1.3.18 pypi_0 pypi
mmdet 2.19.0 pypi_0 pypi
mmhuman3d 0.3.0 dev_0
mmpose 0.21.0 pypi_0 pypi
mmtrack 0.8.0 pypi_0 pypi
motmetrics 1.2.0 pypi_0 pypi
multidict 5.2.0 pypi_0 pypi
munkres 1.1.4 pypi_0 pypi
ncurses 6.3 h7f8727e_2 defaults
nettle 3.7.3 hbbd107a_1 defaults
networkx 2.6.3 pypi_0 pypi
ninja 1.10.2 py38hd09550d_3 defaults
numpy 1.21.2 py38h20f2e39_0 defaults
numpy-base 1.21.2 py38h79a1101_0 defaults
olefile 0.46 pyhd3eb1b0_0 defaults
opencv-python 4.5.4.60 pypi_0 pypi
openh264 2.1.0 hd408876_0 defaults
openssl 1.1.1l h7f8727e_0 defaults
packaging 21.3 pypi_0 pypi
pandas 1.3.4 pypi_0 pypi
pexpect 4.8.0 pypi_0 pypi
pickle5 0.0.11 pypi_0 pypi
pillow 8.4.0 py38h5aabda8_0 defaults
pip 21.2.4 py38h06a4308_0 defaults
pluggy 1.0.0 pypi_0 pypi
portalocker 2.3.2 pypi_0 pypi
ptyprocess 0.7.0 pypi_0 pypi
py 1.11.0 pypi_0 pypi
py-cpuinfo 8.0.0 pypi_0 pypi
pycocotools 2.0.3 pypi_0 pypi
pycodestyle 2.8.0 pypi_0 pypi
pycparser 2.21 pypi_0 pypi
pyflakes 2.4.0 pypi_0 pypi
pyparsing 3.0.6 pypi_0 pypi
pytest 6.2.5 pypi_0 pypi
pytest-benchmark 3.4.1 pypi_0 pypi
python 3.8.12 h12debd9_0 defaults
python-dateutil 2.8.2 pypi_0 pypi
pytorch 1.8.0 py3.8_cpu_0 [cpuonly] pytorch
pytorch3d 0.3.0 pypi_0 pypi
pytz 2021.3 pypi_0 pypi
pywavelets 1.2.0 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
readline 8.1 h27cfd23_0 defaults
scikit-image 0.19.0 pypi_0 pypi
scipy 1.7.3 pypi_0 pypi
seaborn 0.11.2 pypi_0 pypi
setuptools 58.0.4 py38h06a4308_0 defaults
setuptools-scm 6.3.2 pypi_0 pypi
six 1.16.0 pyhd3eb1b0_0 defaults
smplx 0.1.28 pypi_0 pypi
sqlite 3.36.0 hc218d9a_0 defaults
tabulate 0.8.9 pypi_0 pypi
termcolor 1.1.0 pypi_0 pypi
terminaltables 3.1.7 pypi_0 pypi
tifffile 2021.11.2 pypi_0 pypi
tk 8.6.11 h1ccaba5_0 defaults
toml 0.10.2 pypi_0 pypi
tomli 1.2.2 pypi_0 pypi
torchvision 0.9.0 py38_cpu [cpuonly] pytorch
tqdm 4.62.3 pypi_0 pypi
twisted 21.7.0 pypi_0 pypi
txaio 21.2.1 pypi_0 pypi
typing_extensions 3.10.0.2 pyh06a4308_0 defaults
vedo 2021.0.7 pypi_0 pypi
vtk 9.0.3 pypi_0 pypi
wheel 0.37.0 pyhd3eb1b0_1 defaults
wrapt 1.13.3 pypi_0 pypi
wslink 1.2.0 pypi_0 pypi
x264 1!157.20191217 h7b6447c_0 defaults
xmltodict 0.12.0 pypi_0 pypi
xtcocotools 1.10 pypi_0 pypi
xz 5.2.5 h7b6447c_0 defaults
yacs 0.1.8 pypi_0 pypi
yapf 0.31.0 pypi_0 pypi
yarl 1.7.2 pypi_0 pypi
zlib 1.2.11 h7b6447c_3 defaults
zope-interface 5.4.0 pypi_0 pypi
zstd 1.4.9 haebb681_0 defaults

images_to_video yields unexpected behavior when -r is placed after -i

Suppose there are 1202 images under xxx folder.

When -r 30 is placed after -i, ffmpeg will first expand the total frame count to 1202 / 25 * 30 ≈ 1442 (25 fps being ffmpeg's assumed default input rate) and then truncate to the first 1202 frames.

ffmpeg -y -threads 4 -start_number 0 -i xxx/frame_%06d.png  -r 30 -frames:v 1202 -c:v libx264 -pix_fmt yuv420p output.mp4

Placing -r 30 before -i works as expected:

ffmpeg -y -threads 4 -start_number 0 -r 30 -i xxx/frame_%06d.png -frames:v 1202 -c:v libx264 -pix_fmt yuv420p output.mp4
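The fix boils down to argument ordering: in ffmpeg, options apply to the input or output file that follows them. A minimal sketch of a command builder enforcing this (the helper name and signature are illustrative, not mmhuman3d's actual API):

```python
import shlex

def images_to_video_cmd(img_pattern, output, fps=30, num_frames=None,
                        start_number=0):
    """Build an ffmpeg command with -r placed BEFORE -i, so it sets the
    input frame rate directly instead of resampling a 25 fps default."""
    cmd = ['ffmpeg', '-y', '-threads', '4',
           '-start_number', str(start_number),
           '-r', str(fps),  # input option: must precede -i
           '-i', img_pattern]
    if num_frames is not None:
        cmd += ['-frames:v', str(num_frames)]
    cmd += ['-c:v', 'libx264', '-pix_fmt', 'yuv420p', output]
    return cmd

cmd = images_to_video_cmd('xxx/frame_%06d.png', 'output.mp4',
                          fps=30, num_frames=1202)
assert cmd.index('-r') < cmd.index('-i')  # the essential ordering
print(shlex.join(cmd))
```

Keeping the option order explicit in one place avoids reintroducing the bug when new flags are added.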

Unnecessary complexity in ignore_keypoints computation

Assume len(self.ignore_keypoints) == m and len(convention) == n; the current method takes O(m*n) time.
Since self.ignore_keypoints is a dict backed by a hash table, the code below reduces this to O(n):

for keypoint_idx, keypoint_name in enumerate(convention_names_list):
    if keypoint_name in self.ignore_keypoints:
        self.ignore_keypoint_idxs.append(keypoint_idx)

Originally posted by @LazyBusyYang in #56 (comment)
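The proposed loop can be checked in isolation; a self-contained sketch (names are illustrative stand-ins for the class attributes above) showing that a single pass with hash-based membership tests gives the same indices:

```python
def find_ignore_idxs(convention_names, ignore_keypoints):
    """One pass over the convention; membership tests against a
    set/dict are O(1) on average, so the whole scan is O(n)."""
    ignore = set(ignore_keypoints)  # accepts a list, set, or dict of names
    return [idx for idx, name in enumerate(convention_names)
            if name in ignore]

convention = ['nose', 'left_hip', 'right_hip', 'left_knee']
assert find_ignore_idxs(convention, {'left_hip', 'right_hip'}) == [1, 2]
assert find_ignore_idxs(convention, []) == []
```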

The optimizer dict config causes an error in mmcv.runner.hooks.optimizer

When I trained HMR using a config with an optimizer dict:

optimizer = dict(
    backbone=dict(type='Adam', lr=2.5e-4),
    head=dict(type='Adam', lr=2.5e-4),
    # disc=dict(type='Adam', lr=1e-4)
)

this causes an error in EpochBasedRunner and mmcv.runner.hooks.optimizer, because the after_train_iter function treats runner.optimizer as a torch.optim.Optimizer, not a dict:

    def after_train_iter(self, runner):
        runner.optimizer.zero_grad()
        runner.outputs['loss'].backward()
        if self.grad_clip is not None:
            grad_norm = self.clip_grads(runner.model.parameters())
            if grad_norm is not None:
                # Add grad norm to the logger
                runner.log_buffer.update({'grad_norm': float(grad_norm)},
                                         runner.outputs['num_samples'])
        runner.optimizer.step()

Is there an alternative to using a dict of torch.optim.Optimizer objects, such as passing PyTorch's default param_groups to a single optimizer?

Or do I need to modify the mmcv.runner library?
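One possible direction: PyTorch optimizers accept a list of parameter groups, so a single optimizer can carry different learning rates for the backbone and head, and runner.optimizer stays a plain Optimizer as the hooks expect. A sketch of building such groups (the helper is hypothetical; whether the rest of the training pipeline needs changes would have to be checked):

```python
def build_param_groups(backbone_params, head_params,
                       lr_backbone=2.5e-4, lr_head=2.5e-4):
    # torch.optim.Adam accepts a list of dicts, one per parameter group,
    # each with its own 'lr'; in real code the params are model tensors.
    return [
        {'params': list(backbone_params), 'lr': lr_backbone},
        {'params': list(head_params), 'lr': lr_head},
    ]

groups = build_param_groups([0.1], [0.2])  # placeholder "parameters"
# optimizer = torch.optim.Adam(groups)     # single Optimizer instance
assert [g['lr'] for g in groups] == [2.5e-4, 2.5e-4]
```

This does not cover the commented-out discriminator optimizer, which genuinely needs a second Optimizer (as in the original SPIN/HMR adversarial training).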

_fits.npy for training the SPIN model on my own dataset

I use my own dataset to train the SPIN model, and I found that it needs a _fits.npy file. The training datasets in fits_dict.py are hard-coded; could you add a flag for other datasets and generate the initial _fits.npy file automatically?
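Until such a flag exists, a possible workaround is to create a neutral initial fits file yourself. A hedged sketch, assuming SPIN's per-sample layout of 72 axis-angle pose values plus 10 betas (verify the expected shape against fits_dict.py before using this):

```python
import os
import tempfile

import numpy as np

def make_initial_fits(num_samples, path):
    # 72 pose + 10 betas per sample; all zeros = T-pose with mean shape,
    # which SMPLify then refines during training (assumption, see lead-in)
    fits = np.zeros((num_samples, 82), dtype=np.float32)
    np.save(path, fits)
    return fits

path = os.path.join(tempfile.gettempdir(), 'my_dataset_fits.npy')
fits = make_initial_fits(4, path)
assert fits.shape == (4, 82)
```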

The cmu_mosh.npz file seems broken

The download link for cmu_mosh.npz in https://github.com/open-mmlab/mmhuman3d/blob/main/docs/preprocess_dataset.md seems broken.

"zipfile.BadZipFile: File is not a zip file" occurs when running an experiment, and also when loading the file directly with numpy.

I also noticed that the file is much smaller than the original one provided in https://github.com/open-mmlab/mmpose/blob/master/docs/tasks/3d_body_mesh.md

I guess the file may have been accidentally overwritten.

Install without pickle5 and pytorch3d (on Windows)

Hi, I am trying to install the packages on Windows. I found that pickle5 cannot be installed on Windows, and pytorch3d needs to be built from source. Is it possible to install without those two packages? Only a few lines of code depend on them; otherwise everything works fine. Thank you.

Typo in keypoints_convention.md

Thanks for your contribution to the community!
I found a typo in the keypoints_convention.md:

keypoints = np.zeros((1, len(KEYPOINTS_FACTORY['smpl']), 3))
original_mask = np.ones((len(KEYPOINTS_FACTORY['smpl'])))
original_mask[KEYPOINTS_FACTORY['smpl'].index('left_hip')] = 0

_, mask_coco = convert_kps(
    keypoints=keypoints, mask=original_mask, src='smpl', dst='coco')
_, mask_coco_full = convert_kps(
    keypoints=keypoints, src='smpl', dst='coco')

assert mask_coco[KEYPOINTS_FACTORY['coco'].index('left_hip')] == 0
mask_coco[KEYPOINTS_FACTORY['coco'].index('left_hip')] = 1
assert (mask_coco == mask_coco_full).all()

KEYPOINTS_FACTORY['coco'] doesn't have a left_hip joint. Maybe coco could be changed to agora.

Also, a suggestion: we usually not only convert the skeleton coordinates but also pass along each joint's confidence. When I changed the line to original_mask[KEYPOINTS_FACTORY['smpl'].index('left_hip')] = 0.5, the converted mask was still 0. Returning the converted order (indices) instead of the converted skeleton would be more flexible and convenient.
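The index-returning variant suggested above could look like the following sketch (plain-Python stand-ins, not the real KEYPOINTS_FACTORY or convert_kps API): with the (src, dst) index pairs in hand, callers can carry fractional confidences, or any other per-joint data, through the conversion unchanged.

```python
def convert_idxs(src_names, dst_names):
    """Return (src_idx, dst_idx) pairs for joints present in both conventions."""
    dst_index = {name: i for i, name in enumerate(dst_names)}
    return [(i, dst_index[name]) for i, name in enumerate(src_names)
            if name in dst_index]

src = ['pelvis', 'left_hip', 'right_hip']
dst = ['left_hip', 'right_hip', 'neck']
pairs = convert_idxs(src, dst)
assert pairs == [(1, 0), (2, 1)]

# Fractional confidences survive, instead of being forced to 0/1:
conf = [1.0, 0.5, 1.0]
dst_conf = [0.0] * len(dst)
for s, d in pairs:
    dst_conf[d] = conf[s]
assert dst_conf == [0.5, 1.0, 0.0]
```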

VIBE converter

When I try to convert insta_variety dataset, I get the following error:

Traceback (most recent call last):
  File "tools/convert_datasets.py", line 118, in <module>
    main()
  File "tools/convert_datasets.py", line 113, in main
    data_converter.convert(input_path, args.output_path)
  File "/mnt/share_1227775/yongtaoge/mmhuman3d/mmhuman3d/data/data_converters/insta_vibe.py", line 81, in convert
    features_ = np.array(features_).reshape((-1, 2048))
numpy.core._exceptions.MemoryError: Unable to allocate 16.7 GiB for an array with shape (2187158, 2048) and data type float32
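One way to sidestep the 16.7 GiB allocation is to stream the features into an on-disk .npy memmap chunk by chunk, instead of materializing np.array(features_) in RAM at once. A sketch with small shapes for illustration (the converter would use (2187158, 2048); the helper name is hypothetical):

```python
import os
import tempfile

import numpy as np

def stack_to_memmap(chunks, path, dim):
    """Write a sequence of (n_i, dim) chunks into one on-disk float32 array."""
    total = sum(len(c) for c in chunks)
    out = np.lib.format.open_memmap(path, mode='w+',
                                    dtype=np.float32, shape=(total, dim))
    row = 0
    for c in chunks:
        c = np.asarray(c, dtype=np.float32).reshape(-1, dim)
        out[row:row + len(c)] = c  # only this chunk lives in RAM
        row += len(c)
    out.flush()
    return out

path = os.path.join(tempfile.gettempdir(), 'features_demo.npy')
feats = stack_to_memmap([np.ones((3, 4)), np.zeros((2, 4))], path, dim=4)
assert feats.shape == (5, 4)
```

The resulting file is a regular .npy, so downstream code can load it with np.load(path, mmap_mode='r') without further changes.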

Some problems in the training process

Hello, when I run "python tools/train.py configs/spin/tmp.py --gpus 2 --no-validate" for training, I get this error:

AssertionError: MMDataParallel only supports single GPU training, if you need to train with multiple GPUs, please use MMDistributedDataParallel instead.

When I run "python tools/train.py configs/spin/tmp.py --gpus 1 --no-validate", I get:

Traceback (most recent call last):
  File "tools/train.py", line 154, in <module>
    main()
  File "tools/train.py", line 150, in main
    meta=meta)
  File "/root/cloud/lj28/project/git-code/mmhuman3d/mmhuman3d/apis/train.py", line 167, in train_model
    runner.run(data_loaders, cfg.workflow)
  File "/opt/conda/envs/open-mmpose/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/opt/conda/envs/open-mmpose/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "/opt/conda/envs/open-mmpose/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 30, in run_iter
    **kwargs)
  File "/opt/conda/envs/open-mmpose/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 75, in train_step
    return self.module.train_step(*inputs[0], **kwargs[0])
  File "/root/cloud/lj28/project/git-code/mmhuman3d/mmhuman3d/models/architectures/mesh_estimator.py", line 143, in train_step
    features = self.backbone(img)
  File "/opt/conda/envs/open-mmpose/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/root/cloud/lj28/project/git-code/mmhuman3d/mmhuman3d/models/backbones/resnet.py", line 620, in forward
    x = self.norm1(x)
  File "/opt/conda/envs/open-mmpose/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/envs/open-mmpose/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 519, in forward
    world_size = torch.distributed.get_world_size(process_group)
  File "/opt/conda/envs/open-mmpose/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 625, in get_world_size
    return _get_group_size(group)
  File "/opt/conda/envs/open-mmpose/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 220, in _get_group_size
    _check_default_pg()
  File "/opt/conda/envs/open-mmpose/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 211, in _check_default_pg
    "Default process group is not initialized"
AssertionError: Default process group is not initialized

thx!

Training with validation fails: TypeError: evaluate() got an unexpected keyword argument 'logger'

Traceback (most recent call last):
  File "tools/train.py", line 158, in <module>
    if __name__ == '__main__':
  File "tools/train.py", line 146, in main
    # add an attribute for visualization convenience
  File "/root/Downloads/mmhuman3d-0.4.0/mmhuman3d/apis/train.py", line 167, in train_model
    runner.run(data_loaders, cfg.workflow)
  File "/root/anaconda3/envs/zxh/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/root/anaconda3/envs/zxh/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 54, in train
    self.call_hook('after_train_epoch')
  File "/root/anaconda3/envs/zxh/lib/python3.8/site-packages/mmcv/runner/base_runner.py", line 307, in call_hook
    getattr(hook, fn_name)(self)
  File "/root/anaconda3/envs/zxh/lib/python3.8/site-packages/mmcv/runner/hooks/evaluation.py", line 267, in after_train_epoch
    self._do_evaluate(runner)
  File "/root/anaconda3/envs/zxh/lib/python3.8/site-packages/mmcv/runner/hooks/evaluation.py", line 505, in _do_evaluate
    key_score = self.evaluate(runner, results)
  File "/root/anaconda3/envs/zxh/lib/python3.8/site-packages/mmcv/runner/hooks/evaluation.py", line 361, in evaluate
    eval_res = self.dataloader.dataset.evaluate(
TypeError: evaluate() got an unexpected keyword argument 'logger'
Traceback (most recent call last):
  File "/root/anaconda3/envs/zxh/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/anaconda3/envs/zxh/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/anaconda3/envs/zxh/lib/python3.8/site-packages/torch/distributed/launch.py", line 261, in <module>
    main()
  File "/root/anaconda3/envs/zxh/lib/python3.8/site-packages/torch/distributed/launch.py", line 256, in main
    raise subprocess.CalledProcessError(returncode=process.returncode,
subprocess.CalledProcessError: Command '['/root/anaconda3/envs/zxh/bin/python', '-u', 'tools/train.py', '--local_rank=1', 'configs/lj28/mthmr/resnet50_mthmr_pw3d.py', '--launcher', 'pytorch']' returned non-zero exit status 1.

The val config mirrors the test config:

val=dict(
    type=dataset_type,
    body_model=dict(
        type='GenderedSMPL',
        keypoint_src='h36m',
        keypoint_dst='h36m',
        model_path='data/body_models/smpl',
        joints_regressor='data/body_models/J_regressor_h36m.npy'),
    dataset_name='pw3d',
    data_prefix='data',
    pipeline=test_pipeline,
    ann_file='pw3d_test.npz'),

THX!

Refactor the architecture of core/parametric_models and models/utils/smpl.py

Currently, smplify is placed in core/parametric_models and has its own builder. However, it may be better to place it in models/ so it can share the builder with other network components, since SMPLify is also a component of SPIN.

Besides, body models deserve their own directory, as we are not limited to SMPL; they could be moved to core/body_models.

where is cmu_mosh dataset?

Hello, I am trying to train HMR:

adv_dataset=dict(
    type='MeshDataset',
    dataset_name='cmu_mosh',
    data_prefix='data',
    pipeline=train_adv_pipeline,
    ann_file='cmu_mosh.npz')

Where can I find the cmu_mosh dataset?
Thanks.

AssertionError: Please install mmdet to run the demo, but mmdet is installed

(open-mmlab) ➜ mmhuman3d git:(main) python demo/estimate_smpl.py \
    configs/hmr/resnet50_hmr_pw3d.py \
    data/checkpoints/resnet50_hmr_pw3d.pth \
    --single_person_demo \
    --det_config demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py \
    --det_checkpoint https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    --video_path demo/resources/single_person_demo.mp4 \
    --output_path vis_results/single_person_demo.mp4 \
    --smooth_type savgol
Traceback (most recent call last):
File "demo/estimate_smpl.py", line 286, in <module>
assert has_mmdet, 'Please install mmdet to run the demo.'
AssertionError: Please install mmdet to run the demo.
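`has_mmdet` is set by a guarded import at the top of the demo script, so the assertion can fire even when the mmdet package itself is installed, e.g. when importing `mmdet.apis` raises because of a version-mismatched mmcv-full or a missing compiled dependency. Running the guarded import by hand, without swallowing the exception, usually reveals the real cause (this is a diagnostic sketch, not the exact demo code):

```python
# Reproduce the demo's guarded import, but print the underlying error
# instead of silently setting a flag.
try:
    from mmdet.apis import inference_detector, init_detector  # noqa: F401
    has_mmdet = True
except ImportError as err:
    has_mmdet = False
    print(f'mmdet import failed: {err}')
```

Running this in the same environment as the demo shows whether the failure comes from mmdet itself or from one of its dependencies.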
(open-mmlab) ➜ mmhuman3d git:(main) conda list

packages in environment at /home/w0x7ce/anaconda3/envs/open-mmlab:

Name Version Build Channel

_libgcc_mutex 0.1 conda_forge https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
_openmp_mutex 4.5 1_gnu https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
addict 2.4.0 pypi_0 pypi
aiohttp 3.8.1 pypi_0 pypi
aiosignal 1.2.0 pypi_0 pypi
async-timeout 4.0.1 pypi_0 pypi
attrs 21.2.0 pypi_0 pypi
autobahn 21.11.1 pypi_0 pypi
automat 20.2.0 pypi_0 pypi
blas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
bzip2 1.0.8 h7f98852_4 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
ca-certificates 2021.10.8 ha878542_0 conda-forge
cdflib 0.3.20 pypi_0 pypi
cffi 1.15.0 pypi_0 pypi
charset-normalizer 2.0.9 pypi_0 pypi
chumpy 0.70 pypi_0 pypi
colorama 0.4.4 pyh9f0ad1d_0 conda-forge
colorlog 6.6.0 pypi_0 pypi
colormap 1.0.4 pypi_0 pypi
constantly 15.1.0 pypi_0 pypi
cpuonly 1.0 0 pytorch
cryptography 36.0.0 pypi_0 pypi
cudatoolkit 11.2.2 he111cf0_9 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
cycler 0.11.0 pypi_0 pypi
cython 0.29.25 pypi_0 pypi
deprecated 1.2.13 pypi_0 pypi
dotty-dict 1.3.0 pypi_0 pypi
easydev 0.12.0 pypi_0 pypi
ffmpeg 4.3 hf484d3e_0 pytorch
flake8 4.0.1 pypi_0 pypi
flake8-import-order 0.18.1 pypi_0 pypi
fonttools 4.28.3 pypi_0 pypi
freetype 2.10.4 h0708190_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
frozenlist 1.2.0 pypi_0 pypi
fvcore 0.1.5.post20210915 py38 fvcore
gmp 6.2.1 h58526e2_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
gnutls 3.6.13 h85f3911_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
h5py 3.6.0 pypi_0 pypi
hyperlink 21.0.0 pypi_0 pypi
idna 3.3 pypi_0 pypi
imageio 2.13.3 pypi_0 pypi
incremental 21.3.0 pypi_0 pypi
iniconfig 1.1.1 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
iopath 0.1.9 py38 iopath
jpeg 9b h024ee3a_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
json-tricks 3.15.5 pypi_0 pypi
kiwisolver 1.3.2 pypi_0 pypi
lame 3.100 h7f98852_1001 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
ld_impl_linux-64 2.36.1 hea4e1c9_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libffi 3.4.2 h7f98852_5 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libgcc-ng 11.2.0 h1d223b6_11 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libgomp 11.2.0 h1d223b6_11 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libiconv 1.16 h516909a_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libnsl 2.0.0 h7f98852_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libpng 1.6.37 h21135ba_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libstdcxx-ng 11.2.0 he4da1e4_11 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libtiff 4.2.0 h85742a9_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libuv 1.42.0 h7f98852_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libwebp-base 1.2.1 h7f98852_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libzlib 1.2.11 h36c2ea0_1013 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
lz4-c 1.9.3 h9c3ff4c_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
matplotlib 3.5.1 pypi_0 pypi
mccabe 0.6.1 pypi_0 pypi
mkl 2021.4.0 h06a4308_640 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl-service 2.4.0 py38h95df7f1_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
mkl_fft 1.3.1 py38h8666266_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
mkl_random 1.2.2 py38h1abd341_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
mmcls 0.17.0 pypi_0 pypi
mmcv-full 1.3.18 pypi_0 pypi
mmdet 2.19.0 dev_0
mmhuman3d 0.3.0 dev_0
mmpose 0.21.0 pypi_0 pypi
mmtrack 0.8.0 pypi_0 pypi
motmetrics 1.2.0 pypi_0 pypi
multidict 5.2.0 pypi_0 pypi
munkres 1.1.4 pypi_0 pypi
ncurses 6.2 h58526e2_4 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
nettle 3.6 he412f7d_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
networkx 2.6.3 pypi_0 pypi
ninja 1.10.2 h4bd325d_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
numpy 1.21.2 py38h20f2e39_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
numpy-base 1.21.2 py38h79a1101_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
nvidiacub 1.10.0 0 bottler
olefile 0.46 pyh9f0ad1d_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
opencv-python 4.5.4.60 pypi_0 pypi
openh264 2.1.1 h780b84a_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
openssl 3.0.0 h7f98852_2 conda-forge
packaging 21.3 pypi_0 pypi
pandas 1.3.4 pypi_0 pypi
pexpect 4.8.0 pypi_0 pypi
pickle5 0.0.11 pypi_0 pypi
pillow 8.4.0 pypi_0 pypi
pip 21.3.1 pyhd8ed1ab_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
pluggy 1.0.0 pypi_0 pypi
portalocker 2.3.2 py38h578d9bd_1 conda-forge
ptyprocess 0.7.0 pypi_0 pypi
py 1.11.0 pypi_0 pypi
py-cpuinfo 8.0.0 pypi_0 pypi
pycocotools 2.0.3 pypi_0 pypi
pycodestyle 2.8.0 pypi_0 pypi
pycparser 2.21 pypi_0 pypi
pyflakes 2.4.0 pypi_0 pypi
pyparsing 3.0.6 pypi_0 pypi
pytest 6.2.5 pypi_0 pypi
pytest-benchmark 3.4.1 pypi_0 pypi
python 3.8.12 hf930737_2_cpython https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
python-dateutil 2.8.2 pypi_0 pypi
python_abi 3.8 2_cp38 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
pytorch 1.8.0 py3.8_cpu_0 [cpuonly] pytorch
pytorch3d 0.6.0 pypi_0 pypi
pytz 2021.3 pypi_0 pypi
pywavelets 1.2.0 pypi_0 pypi
pyyaml 6.0 py38h497a2fe_3 conda-forge
readline 8.1 h46c0cb4_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
scikit-image 0.19.0 pypi_0 pypi
scipy 1.7.3 pypi_0 pypi
seaborn 0.11.2 pypi_0 pypi
setuptools 59.4.0 py38h578d9bd_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
setuptools-scm 6.3.2 pypi_0 pypi
six 1.16.0 pyh6c4a22f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
smplx 0.1.28 pypi_0 pypi
sqlite 3.37.0 h9cd32fc_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
tabulate 0.8.9 pyhd8ed1ab_0 conda-forge
termcolor 1.1.0 py_2 conda-forge
terminaltables 3.1.10 pypi_0 pypi
tifffile 2021.11.2 pypi_0 pypi
tk 8.6.11 h27826a3_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
toml 0.10.2 pypi_0 pypi
tomli 1.2.2 pypi_0 pypi
torchvision 0.9.0 py38_cpu [cpuonly] pytorch
tqdm 4.62.3 pyhd8ed1ab_0 conda-forge
twisted 21.7.0 pypi_0 pypi
txaio 21.2.1 pypi_0 pypi
typing_extensions 4.0.1 pyha770c72_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
vedo 2021.0.7 pypi_0 pypi
vtk 9.0.3 pypi_0 pypi
wheel 0.37.0 pyhd8ed1ab_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
wrapt 1.13.3 pypi_0 pypi
wslink 1.2.0 pypi_0 pypi
xmltodict 0.12.0 pypi_0 pypi
xtcocotools 1.10 pypi_0 pypi
xz 5.2.5 h516909a_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
yacs 0.1.6 py_0 conda-forge
yaml 0.2.5 h516909a_0 conda-forge
yapf 0.31.0 pypi_0 pypi
yarl 1.7.2 pypi_0 pypi
zlib 1.2.11 h36c2ea0_1013 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
zope-interface 5.4.0 pypi_0 pypi
zstd 1.4.9 ha95c52a_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
(open-mmlab) ➜ mmhuman3d git:(main)

How to change the batch size when training HybrIK

Hi all,

I can train HybrIK on a single GPU with the default config file. However, I cannot find the option that sets the training batch size. Could you give any hints on where to change it in the source code? Thanks in advance.
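In OpenMMLab-style configs the per-GPU batch size is usually controlled by `samples_per_gpu` inside the top-level `data` dict rather than by the training schedule; the effective batch size is `samples_per_gpu × num_GPUs`. A hedged config fragment (the values and dataset type below are illustrative, not the shipped HybrIK config):

```python
data = dict(
    samples_per_gpu=32,   # batch size per GPU; lower this if you run out of memory
    workers_per_gpu=2,    # dataloader workers per GPU
    train=dict(type='HybrIKHumanImageDataset'),  # rest of the dataset cfg unchanged
)
```

Overriding `data.samples_per_gpu` in your own config (or via `--cfg-options`, if the train script supports it) should change the batch size without touching the source code.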
