
We present MocapNET, a real-time method that estimates the 3D human pose directly in the popular Bio Vision Hierarchy (BVH) format, given estimations of the 2D body joints originating from monocular color images. Our contributions include: (a) a novel and compact 2D pose NSRM representation; (b) a human body orientation classifier and an ensemble of orientation-tuned neural networks that regress the 3D human pose while also allowing the decomposition of the body into an upper and a lower kinematic hierarchy, which permits recovery of the human pose even in the case of significant occlusions; (c) an efficient Inverse Kinematics solver that refines the neural-network-based solution, providing 3D human pose estimations that are consistent with the limb sizes of a target person (if known). All of the above yield a 33% accuracy improvement on the Human 3.6 Million (H3.6M) dataset compared to the baseline method (MocapNET v1) while maintaining real-time performance.

Home Page: https://www.youtube.com/watch?v=Jgz1MRq-I-k

License: Other



MocapNET Project

MocapNET

One-click deployment in Google Colab: Open MocapNET v4 In Colab

News


14-2-2024 Last Wednesday (8-2-2024) I successfully defended the MocapNET Ph.D. thesis! I am currently in the process of finalizing the thesis text, and there is also good news for the project regarding the continuation of its development and funding in the context of the Greece4.0 call. Thank you for your patience regarding the recent repository inactivity, which I am working hard to improve! MocapNET PhD Thesis photo

2-11-2023 I am currently finalizing my Ph.D. thesis and preparing my defence, so repository maintenance is subpar at this time. However, hopefully soon I will be able to share my ~316 page thesis, which goes clearly and in detail through all of the development effort and research behind MocapNET and this repository.

2-10-2023 MocapNET v4 was successfully presented at the AMFG 2023 ICCV workshop. MocapNET 4

13-9-2023

MocapNET v4 has been accepted at the AMFG 2023 ICCV workshop with an oral and poster presentation. It can now also retrieve 3D gaze and BVH facial configurations. The whole codebase has been rewritten from scratch in Python to hopefully make it more useful for the community! This is a very big change and includes handling all 3D rendering through Blender Python scripts. In order to keep this repository manageable, the new version lives in its own mnet4 branch

We have also implemented a one-click Google Colab setup, which you can use to quickly and easily test the method: Open In Colab

Hope that you find it useful and hopefully see you at ICCV 2023!

30-12-2022

MocapNET has a new plugin/script for the Blender 3D editor that, when combined with MPFB2 (the MakeHuman addon for Blender), can create 3D animations of custom skinned humans from the output BVH files of MocapNET with a few clicks. The code targets recent Blender versions (3.4+). Watch this video to learn how to install MPFB2 and this video to learn how to interface the provided plugin in this repository with the MocapNET output BVH file and the generated MakeHuman armature.

MocapNET Blender Plugin

YouTube Link

30-9-2022

MocapNET was demonstrated at the Foundation for Research and Technology - Hellas (FORTH) as part of the European Researchers' Night 2022 event.

Researcher's Night 2022

1-6-2022

A newly added Python/MediaPipe utility can now help with generating 2D data for experiments! It can help you create datasets that include hands, which can be processed using MocapNET v3.

7-4-2022

The open call of BonsAPPs (https://bonsapps.eu/) for AI talents received 126 proposals from 31 EU countries. Of these proposals, 30 were accepted. Out of the 30 running BonsAPPs projects, 10 were selected yesterday to continue into phase 2. I am very happy to report that our MocapNET-based AUTO-MNET work made it to the top ten!

BonsAPPs Hackathon/Stage 2 selection

9-3-2022

MocapNET was one of the selected projects in the BonsAPPs Open Call for AI talents. We are now preparing a version of MocapNET called AUTO-MNET, tailored for 3D body tracking in automotive use cases.

Due to our limited resources this has currently pushed back merging of the mnet3 branch; however, hopefully we will soon have a working MocapNET in the Bonseyes platform.

8-11-2021

MocapNET3 with hand pose estimation support has landed in this repository! The latest version, which has been accepted at BMVC 2021, is now committed in the mnet3 branch of this repository. However, since considerable code polish is still missing and the bundled 2D joint estimator does not currently cover hands, a transition to a 2D joint estimator like MediaPipe Holistic is needed for a better live webcam demo. MocapNET3 will appear in the 32nd British Machine Vision Conference, which will be held virtually and is free to attend this year!

An upgraded 2020 version of MocapNET has landed! It contains a long list of improvements carried out during 2020 over the original work, allowing higher accuracy, smoother BVH output and better occlusion robustness while maintaining real-time performance. MocapNET2 will appear in the 25th International Conference on Pattern Recognition.

If you are interested in the older MocapNET v1 release, you can find it in the mnet1 branch.

Visualization Example: With MocapNET2 an RGB video feed like this can be converted to BVH motion frames in real-time. The result can be easily used in your favourite 3D engine or application.

Sample run

Example Output:

Youtube Video MocapNET Output Editing on Blender
YouTube Link BVH File Blender Video

Ensemble of SNN Encoders for 3D Human Pose Estimation in RGB Images


We present MocapNET v2, a real-time method that estimates the 3D human pose directly in the popular Bio Vision Hierarchy (BVH) format, given estimations of the 2D body joints originating from monocular color images.

Our contributions include:

  • A novel and compact 2D pose NSRM representation.
  • A human body orientation classifier and an ensemble of orientation-tuned neural networks that regress the 3D human pose while also allowing the decomposition of the body into an upper and a lower kinematic hierarchy. This permits the recovery of the human pose even in the case of significant occlusions.
  • An efficient Inverse Kinematics solver that refines the neural-network-based solution providing 3D human pose estimations that are consistent with the limb sizes of a target person (if known).

All the above yield a 33% accuracy improvement on the Human 3.6 Million (H3.6M) dataset compared to the baseline method (MocapNET v1) while maintaining real-time performance (70 fps in CPU-only execution).

MocapNET

Youtube Videos


BMVC 2021 Supplementary Video ICPR 2020 Poster Session
YouTube Link YouTube Link
ICPR 2020 Supplementary Video BMVC 2019 Supplementary Video
YouTube Link YouTube Link

Citation


Please cite the following papers 1, 2, 3 according to the part of this work that helps your research:

@inproceedings{Qammaz2021,
  author = {Qammaz, Ammar and Argyros, Antonis A},
  title = {Towards Holistic Real-time Human 3D Pose Estimation using MocapNETs},
  booktitle = {British Machine Vision Conference (BMVC 2021)},
  publisher = {BMVA},
  year = {2021},
  month = {November},
  projects =  {I.C.HUMANS},
  videolink = {https://www.youtube.com/watch?v=aaLOSY_p6Zc}
}

For the BMVC21 version of MocapNET please switch to the MNET3 branch

@inproceedings{Qammaz2020,
  author = {Ammar Qammaz and Antonis A. Argyros},
  title = {Occlusion-tolerant and personalized 3D human pose estimation in RGB images},
  booktitle = {IEEE International Conference on Pattern Recognition (ICPR 2020), (to appear)},
  year = {2021},
  month = {January},
  url = {http://users.ics.forth.gr/argyros/res_mocapnet_II.html},
  projects =  {Co4Robots},
  pdflink = {http://users.ics.forth.gr/argyros/mypapers/2021_01_ICPR_Qammaz.pdf},
  videolink = {https://youtu.be/Jgz1MRq-I-k}
}
@inproceedings{Qammaz2019,
  author = {Qammaz, Ammar and Argyros, Antonis A},
  title = {MocapNET: Ensemble of SNN Encoders for 3D Human Pose Estimation in RGB Images},
  booktitle = {British Machine Vision Conference (BMVC 2019)},
  publisher = {BMVA},
  year = {2019},
  month = {September},
  address = {Cardiff, UK},
  url = {http://users.ics.forth.gr/argyros/res_mocapnet.html},
  projects =  {CO4ROBOTS,MINGEI},
  pdflink = {http://users.ics.forth.gr/argyros/mypapers/2019_09_BMVC_mocapnet.pdf},
  videolink = {https://youtu.be/fH5e-KMBvM0}
}

Overview, System Requirements and Dependencies


MocapNET is a high-performance 2D-to-3D single-person pose estimator. This code base targets recent Linux (Ubuntu 18.04 - 20.04+) machines and relies on the Tensorflow C-API and OpenCV. Windows 10 users can try the Windows Subsystem for Linux, which has also been reported to work.

Tensorflow is used as the Neural Network framework for our work and OpenCV is used to enable the acquisition of images from webcams or video files as well as to provide an easy visualization method.

We have provided an initialization script that automatically handles most dependencies and downloads all needed pretrained models. After running it the application should be ready for use. To examine the provided neural network .pb files you can download and use Netron.

Any issues not automatically resolved by the script can be reported on the issues section of this repository.

This repository contains 2D joint estimators for the MocapNET2LiveWebcamDemo. By giving it the correct parameters you can switch between a cut-down version of OpenPose (--openpose), VNect (--vnect) or our own MobileNet-based 2D joint estimator (the default). All of these are automatically downloaded by the initialize.sh script. However, in order to achieve higher-accuracy estimations you are advised to set up a full OpenPose instance and use it to acquire JSON files with 2D detections, which can subsequently be converted to CSV using convertOpenPoseJSONToCSV and then to 3D BVH files using the MocapNET2CSV binary. These will provide superior accuracy compared to the bundled 2D joint detectors, which are provided for faster performance in the live demo, since 2D estimation is the bottleneck of the application. Our live demo will try to run the 2D joint estimation on your GPU and the MocapNET 3D estimation on the system CPU to achieve a combined framerate of over 30 fps, which in most systems matches or surpasses the acquisition rate of web cameras.

Unfortunately there are many GPU compatibility issues with Tensorflow C-API builds, since recent versions have dropped CUDA 9.0 support as well as compute capabilities that might be required by your system. You can edit the initialize.sh script and change the variable TENSORFLOW_VERSION according to your needs. If you want CUDA 9.0 you should set it to 1.12.0. If you want CUDA 9.0 and have a card with older compute capabilities (5.2), then choose version 1.11.0. If all else fails you can always recompile the Tensorflow C-API to match your specific hardware configuration; this script that automates building Tensorflow r1.15 might help you deal with the Bazel build system and all of its weirdness.

Release 1.15 is the final release of the 1.x Tensorflow tree and is compatible with MocapNET. Tensorflow 2.x is also supported; according to the Tensorflow site, version 2.3 is the first version of the 2.x tree to re-include C bindings. The initialize.sh script will ask you which version you want to use and will try to download it and set it up locally for your MocapNET installation.

If you are interested in generating BVH training data for your research, we have also provided the code that handles randomization and pose perturbation of the CMU dataset. After a successful compilation, dataset generation is accessible using the scripts scripts/createRandomizedDataset.sh and scripts/createTestDataset.sh. All BVH manipulation code is imported from a secondary GitHub project that is automatically downloaded, included and built by the initialize.sh script. These scripts will populate the dataset/ directory with CSV files that contain valid training samples based on the CMU dataset. It is trivial to load these files using Python and, by using them as training samples in conjunction with a deep learning framework like Keras, to learn the 2D-to-3D BVH mapping.
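Loading one of the generated CSV files could be sketched as follows. This is a minimal illustration using only the Python standard library; the column names in the stand-in data below are made up and differ from the headers the dataset generator actually emits.

```python
import csv
import io

# Minimal sketch of loading a generated CSV training file; the real files live
# under dataset/ and their column names differ from this illustrative stand-in.
def load_csv_samples(fileobj):
    reader = csv.reader(fileobj)
    header = next(reader)                 # first row holds the column names
    rows = [[float(v) for v in row] for row in reader]
    return header, rows

# Tiny in-memory stand-in for one of the generated dataset/ CSV files
fake = io.StringIO("in2DX_hip,in2DY_hip,out3DX_hip\n0.5,0.5,0.0\n0.4,0.6,0.1\n")
header, rows = load_csv_samples(fake)
print(len(rows), len(header))  # 2 3
```

Once loaded, the input columns can be converted to an array and fed as training features, with the output columns as regression targets, to a Keras model.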

Building the library


To download and compile the library issue :

sudo apt-get install git build-essential cmake libopencv-dev libjpeg-dev libpng-dev libglew-dev libpthread-stubs0-dev

git clone https://github.com/FORTH-ModelBasedTracker/MocapNET

cd MocapNET

./initialize.sh

After performing changes to the source code, you do not need to rerun the initialization script. You can recompile the code by using :

cd build 
cmake .. 
make 
cd ..

Updating the library


The MocapNET library is under active development, and the same is true for its dependencies.

In order to update all the relevant parts of the code you can use the update.sh script provided.

./update.sh

If you made changes to the source code that you want to discard and want to revert to the master branch, you can also use the revert.sh script provided

./revert.sh

Testing the library and performing benchmarks


To test your OpenCV installation as well as support of your webcam issue :

./OpenCVTest --from /dev/video0 

To test OpenCV support of your video files issue :

./OpenCVTest --from /path/to/yourfile.mp4

These tests only use OpenCV (without Tensorflow or any other dependencies) and are intended as a quick method to identify and debug configuration problems on your system. In case of problems playing back video files or using your webcam you might want to consider compiling OpenCV yourself. The scripts/getOpenCV.sh script has been included to automatically fetch and build OpenCV for your convenience. The CMake file provided will automatically try to set the OpenCV_DIR variable to target the locally built version made using the script. If you are having trouble switching between the system version and the downloaded version, consider using the cmake-gui utility or removing the build directory and making a fresh one, once again following the building instructions. The new build directory should reset all paths and, if you used the scripts/getOpenCV.sh script, automatically see and use the local OpenCV version by default.

Live Demo


Assuming that the OpenCVTest executable described previously is working correctly with your input source, to do a live test of the MocapNET library using a webcam issue :

./MocapNET2LiveWebcamDemo --from /dev/video0 --live

To dump 5000 frames from the webcam to out.bvh, instead of using the --live directive, issue :

./MocapNET2LiveWebcamDemo --from /dev/video0 --frames 5000

To control the resolution of your webcam you can use the --size width height parameter; make sure that the resolution you provide is supported by your webcam model. You can use the v4l2-ctl tool to examine your sensor's supported sizes and rates. By issuing --forth you can use our FORTH-developed 2D joint estimator, which performs faster but offers lower accuracy

 v4l2-ctl --list-formats-ext
./MocapNET2LiveWebcamDemo --from /dev/video0 --live --forth --size 800 600

Testing the library using a pre-recorded video file (i.e. not live input) means you can use a slower but more precise 2D joint estimation algorithm, like the included OpenPose implementation. Keep in mind that this OpenPose implementation does not use PAFs and so is still not as precise as the official OpenPose implementation. To run the demo with a prerecorded file issue :

./MocapNET2LiveWebcamDemo --from /path/to/yourfile.mp4 --openpose

We have included a video file that should be automatically downloaded by the initialize.sh script. Issuing the following command should run it and produce an out.bvh file even if you don't have any webcam or other video files available! :

./MocapNET2LiveWebcamDemo --from shuffle.webm --openpose --frames 375

Since high-framerate output is hard to examine, if you need more time to study the output you can use the --delay flag to add programmable delays between frames. Issuing the following will add 1 second of delay after each processed frame :

./MocapNET2LiveWebcamDemo --from shuffle.webm --openpose --frames 375 --delay 1000

If your target is a headless environment then you might consider deactivating the visualization by passing the runtime argument --novisualization. This will prevent any windows from opening and thus not cause issues even on a headless environment.

BVH output is stored to the "out.bvh" file by default. If you want it stored in a different path use the -o option. BVH files can be easily viewed using a variety of compatible applications. We suggest Blender, a very powerful open-source 3D editing and animation suite, or BVHacker, which is freeware and compatible with Wine
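Since BVH is a plain-text format, an output file can also be inspected programmatically. A small sketch relying only on the standard BVH MOTION header fields ("Frames:" and "Frame Time:"), not on any MocapNET-specific API:

```python
# Read the frame count and frame time from the MOTION section of a BVH file;
# "Frames:" and "Frame Time:" are standard BVH header fields.
def bvh_motion_info(lines):
    frames, frame_time = None, None
    for line in lines:
        line = line.strip()
        if line.startswith("Frames:"):
            frames = int(line.split()[1])
        elif line.startswith("Frame Time:"):
            frame_time = float(line.split()[2])
    return frames, frame_time

# Stand-in for the tail of an out.bvh file produced by the demo
sample = ["MOTION", "Frames: 375", "Frame Time: 0.040000"]
print(bvh_motion_info(sample))  # (375, 0.04)
```

For a real file, pass `open("out.bvh")` instead of the stand-in list.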

MocapNETLiveWebcamDemo default visualization

./MocapNET2LiveWebcamDemo --from shuffle.webm --openpose --show 0 --frames 375

MocapNETLiveWebcamDemo all-in-one visualization

./MocapNET2LiveWebcamDemo --from shuffle.webm --openpose --show 3 --frames 375

MocapNETLiveWebcamDemo rotation per joint visualization

./MocapNET2LiveWebcamDemo --from shuffle.webm --openpose --show 1 --frames 375

By using the --show variable you can alternate between different visualizations. A particularly useful visualization is the "--show 1" one that plots the joint rotations as seen above.

MocapNETLiveWebcamDemo OpenGL visualization

./MocapNET2LiveWebcamDemo --from shuffle.webm --openpose --show 0 --opengl --frames 375

By executing "sudo apt-get install freeglut3-dev" to get the required libraries, then enabling the ENABLE_OPENGL CMake configuration flag during compilation and using the --opengl flag when running the MocapNET2LiveWebcamDemo, you can also see the experimental OpenGL visualization illustrated above, rendering a skinned mesh that was generated using MakeHuman. The BVH armature used corresponds to the CMU+Face armature of MakeHuman.

./MocapNET2LiveWebcamDemo --from shuffle.webm --openpose --gestures --frames 375

By starting the live demo using the --gestures argument you can enable an experimental, simple form of gesture detection as seen in the illustration above. Gestures are stored as BVH files and controlled through the gestureRecognition.hpp file. A client application can register a callback as seen in the demo. The gesture detection code is experimental and has been included as a proof of concept; thanks to our high-level output you can easily implement gesture detection by comparing subsequent BVH frames, as seen in the code. That being said, gestures were not a part of the original MocapNET papers.
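The idea of comparing subsequent BVH frames can be illustrated with a toy sketch; the threshold and the three-value "frames" below are made up for illustration and do not reflect the actual gestureRecognition.hpp logic.

```python
# Toy illustration: flag a frame as a motion event when the largest per-joint
# rotation change from the previous BVH frame exceeds a threshold (degrees).
def frame_delta(prev, cur):
    return max(abs(a - b) for a, b in zip(prev, cur))

frames = [
    [0.0, 0.0, 0.0],    # joint rotations at frame 0
    [0.5, 0.2, 0.1],    # small motion
    [25.0, 1.0, 0.0],   # abrupt motion -> candidate event
]
THRESHOLD = 10.0
events = [i for i in range(1, len(frames))
          if frame_delta(frames[i - 1], frames[i]) > THRESHOLD]
print(events)  # [2]
```

A real detector would additionally match the motion against the stored gesture BVH templates rather than just thresholding deltas.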

ROS (Robot Operating System) node


mocapnet_rosnode screenshot with rviz

If you are interested in ROS development and looking for a 3D pose estimator for your robot, you are in luck, MocapNET has a ROS node! You can get it here!

Tuning Hierarchical Coordinate Descent for accuracy/performance gains


As described in the paper, the Hierarchical Coordinate Descent Inverse Kinematics algorithm has various hyper-parameters that have been set to default values after experiments. Depending on your deployment scenario you might want to sacrifice some performance for better accuracy. You can do this by altering the IK tuning parameters using the --ik switch

A default run without the --ik switch is equivalent to a run using a learning rate of 0.01, 5 iterations and 30 epochs. The iterations variable has the biggest impact on performance.

A normal run without the --ik flag is equivalent to

./MocapNET2LiveWebcamDemo --from shuffle.webm --ik 0.01 5 30

If you want a very high accuracy run and don't care about framerate as much consider

./MocapNET2LiveWebcamDemo --from shuffle.webm --ik 0.01 15 40

The IK module supports tailoring the body model used for pose estimation to your liking using "--changeJointDimensions neckLength torsoLength chestWidth shoulderToElbowLength elbowToHandLength waistWidth hipToKneeLength kneeToFootLength shoeLength", as well as setting the focal length of your specific camera using "--focalLength fx fy". The following example will try to track the shuffle.webm sample assuming a body with waist and legs 150% of the normal size and a focal length of 600 on both x and y

./MocapNET2LiveWebcamDemo --from shuffle.webm --ik 0.01 25 40 --changeJointDimensions 1.0 1.0 1.0 1.0 1.0 1.5 1.5 1.5 1.0 --focalLength 600 600
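The nine multipliers could also be assembled from a dictionary so that unspecified joints keep the default of 1.0. This helper is purely hypothetical; it only mirrors the parameter order listed in the flag description above.

```python
# Hypothetical helper: build the --changeJointDimensions argument list from a
# dict of scale factors; the parameter order follows the flag description above.
ORDER = ["neckLength", "torsoLength", "chestWidth", "shoulderToElbowLength",
         "elbowToHandLength", "waistWidth", "hipToKneeLength",
         "kneeToFootLength", "shoeLength"]

def joint_dimension_args(scales):
    # Any joint not mentioned keeps the default multiplier of 1.0
    return ["--changeJointDimensions"] + [str(scales.get(k, 1.0)) for k in ORDER]

print(" ".join(joint_dimension_args({"waistWidth": 1.5,
                                     "hipToKneeLength": 1.5,
                                     "kneeToFootLength": 1.5})))
```

The printed string reproduces the argument list used in the example command above.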

If you don't care about fine results and just want a rough pose estimation extracted really fast you can completely switch the IK module off using

./MocapNET2LiveWebcamDemo --from shuffle.webm --noik

Headless deployment


When deploying the code in headless environments like Google Colab where no display is available, you might experience errors like

(3D Points Output:xxxx): Gtk-WARNING **:  cannot open display: 

To overcome these errors just use the --novisualization switch to disable visualization windows

Higher accuracy with relatively little work using Mediapipe Holistic


To convert video files into input ready for MocapNET in a relatively easy way, I have included a Python converter that uses MediaPipe/OpenCV to create the CSV files needed by MocapNET.

MediaPipe Video 2 CSV utility

You can get MediaPipe using the src/python/mediapipe/setup.sh script or by executing

pip install --user mediapipe opencv-python

The converter utility receives an input video stream and creates an output directory with all image frames and the CSV file with 2D joint estimations.

After going to the root directory of the project, issue :

python3 src/python/mediapipe/mediapipeHolistic2CSV.py --from shuffle.webm -o tester

After the conversion finishes you can process the generated "dataset" using MocapNET2CSV

./MocapNET2CSV --from tester-mpdata/2dJoints_mediapipe.csv --show 3 

Due to the higher accuracy of MediaPipe Holistic (as well as its inclusion of heads and hands, which makes the data forward-compatible with the next versions of MocapNET) this might be a very useful tool to use in conjunction with MocapNET. In particular, if you use this dumper, be sure to check out MocapNET version 3, which also supports hand pose estimation!

Higher accuracy with more work deploying Caffe/OpenPose and using OpenPose JSON files


In order to get higher accuracy output compared to the live demo, which is more performance-oriented, you can use OpenPose and the 2D output JSON files it produces. The convertOpenPoseJSONToCSV application can convert them to a CSV file that can in turn be processed into a BVH file. After downloading and building OpenPose you can use it to acquire 2D JSON body pose data by running :

build/examples/openpose/openpose.bin -number_people_max 1 --hand --write_json /path/to/outputJSONDirectory/ -video /path/to/yourVideoFile.mp4

This will create files named in the following fashion: /path/to/outputJSONDirectory/yourVideoFile_XXXXXXXXXXXX_keypoints.json. Notice that the generated filenames encode the frame serial number padded up to 12 characters (marked as X). You provide this information to our executable using the --seriallength command-line option.
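If you are unsure what value to pass to --seriallength, it can be inferred from any generated filename. A small sketch; the filename below is a made-up example following the pattern described above:

```python
import re

# Infer the zero-padded serial length from an OpenPose output filename of the
# form <label>_<zero-padded serial>_keypoints.json
def serial_length(filename):
    m = re.match(r".*_(\d+)_keypoints\.json$", filename)
    return len(m.group(1)) if m else None

print(serial_length("yourVideoFile_000000000042_keypoints.json"))  # 12
```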

The dump_and_process_video.sh script has been included; it can be used to fully process a video file using OpenPose and then process the result through MocapNET, or can act as a guide for this procedure.

A utility has been included that can convert the JSON files to a single CSV file by issuing :

 ./convertOpenPoseJSONToCSV --from /path/to/outputJSONDirectory/ --label yourVideoFile --seriallength 12 --size 1920 1080 -o .

For more information on how to use the conversion utility please see the documentation inside the utility

A CSV file has been included that can be run by issuing :

 ./MocapNET2CSV --from dataset/sample.csv --visualize --delay 30

The delay is added to every frame so that the user has enough time to see the results; of course, the visualization only contains the armature, since the CSV file does not include the input images.

Check out this guide contributed by a project user for more info.

Experimental utilities


The repository contains experimental utilities used for the development of the papers.

If you choose to download the CMU-BVH dataset using the ./initialize.sh script, the CSV cluster plot utility will allow you to perform the clustering experiments described in the papers.

CSV cluster plot utility

./CSVClusterPlot

The BVHGUI2 is a very minimal utility you can use to become more familiar with the BVH armature used by the project. Using simple sliders you can animate the armature, and the utility has minimal source code.

BVH GUI utility

./BVHGUI2 --opengl

License


This library is provided under the FORTH license


Contributors

ammarkov, aurosutru


mocapnet's Issues

Errors when building Tensorflow in Ubuntu 20.04 in a Virtual Machine

Hi,

The following errors occur when running ./initialize.sh while Tensorflow is building. The same errors occur for versions 1.x and 2.x.

     Do you want to use Tensorflow 1.x instead of 2.x ? 

The project is compatible with both but if you have an older GPU it might be better for you
to stick with Tensorflow 1.x

(Y/N)?n
Selected Tensorflow version cpu/2.3.1
Found a local tensorflow installation, not altering anything
./initialize.sh: line 285: git: command not found
./initialize.sh: line 286: cd: RGBDAcquisition: No such file or directory
mkdir: cannot create directory ‘build’: File exists
CMake Error: The source directory "/home/tm/Downloads/MocapNET-master/dependencies" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
./initialize.sh: line 292: cd: ../opengl_acquisition_shared_library/opengl_depth_and_color_renderer: No such file or directory
mkdir: cannot create directory ‘build’: File exists
CMake Error: The source directory "/home/tm/Downloads/MocapNET-master/dependencies/build" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
Now to try and build MocapNET..
Build directory already exists, but since the initialize.sh script was called
this means the user wants to also initialize the build directory, so doing this
to prevent problems like #19
-- The C compiler identification is GNU 9.3.0
-- The CXX compiler identification is GNU 9.3.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
SSE2 detected and will be used..
Using locally found libtensorflow v1 C-API
OpenGL support will not be compiled in
-- Could NOT find OpenCV (missing: OpenCV_DIR)
-- Found OpenCV: /usr (found version "4.2.0")
OpenCV code found and will be used..
/usr/lib/x86_64-linux-gnu/cmake/opencv4
-- Configuring done
-- Generating done
-- Build files have been written to: /home/tm/Downloads/MocapNET-master/build
Scanning dependencies of target OpenCVTest
[ 1%] Building CXX object src/Webcam/CMakeFiles/OpenCVTest.dir/webcam.cpp.o
[ 2%] Linking CXX executable ../../../OpenCVTest
[ 2%] Built target OpenCVTest
Scanning dependencies of target MocapNETLib2
[ 3%] Building CXX object src/MocapNET2/MocapNETLib2/CMakeFiles/MocapNETLib2.dir/core/core.cpp.o
[ 4%] Building CXX object src/MocapNET2/MocapNETLib2/CMakeFiles/MocapNETLib2.dir/core/singleThreaded.cpp.o
[ 5%] Building CXX object src/MocapNET2/MocapNETLib2/CMakeFiles/MocapNETLib2.dir/core/multiThreaded.cpp.o
/home/tm/Downloads/MocapNET-master/src/MocapNET2/MocapNETLib2/core/multiThreaded.cpp: In function ‘void* mocapNETWorkerThread(void*)’:
/home/tm/Downloads/MocapNET-master/src/MocapNET2/MocapNETLib2/core/multiThreaded.cpp:9:52: error: invalid use of incomplete type ‘struct mocapNETWorkerThread(void*)::threadContext’
9 | fprintf(stderr,"MNET Thread-%u: Started..!\n",ptr->threadID);
| ^~
/home/tm/Downloads/MocapNET-master/src/MocapNET2/MocapNETLib2/core/multiThreaded.cpp:8:10: note: forward declaration of ‘struct mocapNETWorkerThread(void*)::threadContext’
8 | struct threadContext * ptr = (struct threadContext *) arg;
| ^~~~~~~~~~~~~
/home/tm/Downloads/MocapNET-master/src/MocapNET2/MocapNETLib2/core/multiThreaded.cpp:10:73: error: invalid use of incomplete type ‘struct mocapNETWorkerThread(void*)::threadContext’
10 | struct mocapNETContext * contextArray = (struct mocapNETContext *) ptr->argumentToPass;
| ^~
/home/tm/Downloads/MocapNET-master/src/MocapNET2/MocapNETLib2/core/multiThreaded.cpp:8:10: note: forward declaration of ‘struct mocapNETWorkerThread(void*)::threadContext’
8 | struct threadContext * ptr = (struct threadContext *) arg;
| ^~~~~~~~~~~~~
/home/tm/Downloads/MocapNET-master/src/MocapNET2/MocapNETLib2/core/multiThreaded.cpp:11:51: error: invalid use of incomplete type ‘struct mocapNETWorkerThread(void*)::threadContext’
11 | struct mocapNETContext * ctx = &contextArray[ptr->threadID];
| ^~
/home/tm/Downloads/MocapNET-master/src/MocapNET2/MocapNETLib2/core/multiThreaded.cpp:8:10: note: forward declaration of ‘struct mocapNETWorkerThread(void*)::threadContext’
8 | struct threadContext * ptr = (struct threadContext *) arg;
| ^~~~~~~~~~~~~
/home/tm/Downloads/MocapNET-master/src/MocapNET2/MocapNETLib2/core/multiThreaded.cpp:24:3: error: ‘threadpoolWorkerInitialWait’ was not declared in this scope
24 | threadpoolWorkerInitialWait(ptr);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/tm/Downloads/MocapNET-master/src/MocapNET2/MocapNETLib2/core/multiThreaded.cpp:26:10: error: ‘threadpoolWorkerLoopCondition’ was not declared in this scope
26 | while (threadpoolWorkerLoopCondition(ptr))
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/tm/Downloads/MocapNET-master/src/MocapNET2/MocapNETLib2/core/multiThreaded.cpp:28:16: error: invalid use of incomplete type ‘struct mocapNETWorkerThread(void*)::threadContext’
28 | switch (ptr->threadID)
| ^~
/home/tm/Downloads/MocapNET-master/src/MocapNET2/MocapNETLib2/core/multiThreaded.cpp:8:10: note: forward declaration of ‘struct mocapNETWorkerThread(void*)::threadContext’
8 | struct threadContext * ptr = (struct threadContext *) arg;
| ^~~~~~~~~~~~~
/home/tm/Downloads/MocapNET-master/src/MocapNET2/MocapNETLib2/core/multiThreaded.cpp:59:5: error: ‘threadpoolWorkerLoopEnd’ was not declared in this scope
59 | threadpoolWorkerLoopEnd(ptr);
| ^~~~~~~~~~~~~~~~~~~~~~~
make[2]: *** [src/MocapNET2/MocapNETLib2/CMakeFiles/MocapNETLib2.dir/build.make:89: src/MocapNET2/MocapNETLib2/CMakeFiles/MocapNETLib2.dir/core/multiThreaded.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:409: src/MocapNET2/MocapNETLib2/CMakeFiles/MocapNETLib2.dir/all] Error 2
make: *** [Makefile:84: all] Error 2
tm@tm-VirtualBox:~/Downloads/MocapNET-master$

MocapNET2CSV to BVH worked, but produced some errors

Hello,

So I ran MocapNET2CSV on JSON files exported by OpenPose. It worked and produced the out.bvh file, but the output animation does not fully match the OpenPose preview: the body suddenly turns around 180 degrees and its position is lifted higher, just one time, before returning to normal.

When I looked at the output log I saw some errors:

Failed @ loadCalibration (null) , color.calib

Loading Pose 0 -> dataset/poses/neutral.bvh  : Pose 0 -> dataset/poses/neutral.bvh  is corrupted
Failure
Loading Pose 1 -> dataset/poses/tpose.bvh  : Pose 1 -> dataset/poses/tpose.bvh  is corrupted
Failure
Loading Pose 2 -> dataset/poses/x.bvh  : Pose 2 -> dataset/poses/x.bvh  is corrupted
Failure
Loading Pose 3 -> dataset/poses/handsup.bvh  : Pose 3 -> dataset/poses/handsup.bvh  is corrupted
Failure
Loading Pose 4 -> dataset/poses/leftwave.bvh  : Pose 4 -> dataset/poses/leftwave.bvh  is corrupted
Failure
Loading Pose 5 -> dataset/poses/rightright.bvh  : Pose 5 -> dataset/poses/rightright.bvh  is corrupted
Failure
Loading Pose 6 -> dataset/poses/leftleft.bvh  : Pose 6 -> dataset/poses/leftleft.bvh  is corrupted
Failure
Loading Pose 7 -> dataset/poses/push.bvh  : Pose 7 -> dataset/poses/push.bvh  is corrupted
Failure
Loading Pose 8 -> dataset/poses/rightwave.bvh  : Pose 8 -> dataset/poses/rightwave.bvh  is corrupted
Failure
Failed to read recognized Poses
This is not fatal, but poses/gestures will be deactivated..

Are these the errors that made the out.bvh animation wrong? If so, how can I fix them?

Thank you.

Can MocapNET support more skeleton bones?

I've watched the promo videos of results obtained with the library here:
https://www.youtube.com/watch?v=fH5e-KMBvM0
And I've looked through all 2-d points in your example .csv file here:
https://github.com/FORTH-ModelBasedTracker/MocapNET/blob/master/dataset/sample.csv

From what I've seen, the results are inaccurate in at least two body areas: the spine, which is not divided into segments and therefore does not reflect natural body movement, and the feet, which point in a different direction than on real human models.

So the question is: can the result be improved? For example, can the spine be split into, say, 3 segments, and can the foot motion be made more precise?

I'm an absolute newbie in this area and am just sharing my observations; maybe this has nothing to do with the library, in which case I'd be grateful to hear where the problem lies.

make: *** [Makefile:272: cmake_check_build_system] Error 1

`The system is: Linux - 5.4.0-28-generic - x86_64
Compiling the C compiler identification source file "CMakeCCompilerId.c" succeeded.
Compiler: /usr/bin/cc
Build flags:
Id flags:

The output was:
0

Compilation of the C compiler identification source "CMakeCCompilerId.c" produced "a.out"

The C compiler identification is GNU, found in "/home/user/MocapNET/build/CMakeFiles/3.16.3/CompilerIdC/a.out"

Compiling the CXX compiler identification source file "CMakeCXXCompilerId.cpp" succeeded.
Compiler: /usr/bin/c++
Build flags:
Id flags:

The output was:
0

Compilation of the CXX compiler identification source "CMakeCXXCompilerId.cpp" produced "a.out"

The CXX compiler identification is GNU, found in "/home/user/MocapNET/build/CMakeFiles/3.16.3/CompilerIdCXX/a.out"

Determining if the C compiler works passed with the following output:
Change Dir: /home/user/MocapNET/build/CMakeFiles/CMakeTmp

Run Build Command(s):/usr/bin/make cmTC_82cee/fast && /usr/bin/make -f CMakeFiles/cmTC_82cee.dir/build.make CMakeFiles/cmTC_82cee.dir/build
make[1]: Entering directory '/home/user/MocapNET/build/CMakeFiles/CMakeTmp'
Building C object CMakeFiles/cmTC_82cee.dir/testCCompiler.c.o
/usr/bin/cc -o CMakeFiles/cmTC_82cee.dir/testCCompiler.c.o -c /home/user/MocapNET/build/CMakeFiles/CMakeTmp/testCCompiler.c
Linking C executable cmTC_82cee
/usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_82cee.dir/link.txt --verbose=1
/usr/bin/cc -rdynamic CMakeFiles/cmTC_82cee.dir/testCCompiler.c.o -o cmTC_82cee
make[1]: Leaving directory '/home/user/MocapNET/build/CMakeFiles/CMakeTmp'

Detecting C compiler ABI info compiled with the following output:
Change Dir: /home/user/MocapNET/build/CMakeFiles/CMakeTmp

Run Build Command(s):/usr/bin/make cmTC_b54fe/fast && /usr/bin/make -f CMakeFiles/cmTC_b54fe.dir/build.make CMakeFiles/cmTC_b54fe.dir/build
make[1]: Entering directory '/home/user/MocapNET/build/CMakeFiles/CMakeTmp'
Building C object CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o
/usr/bin/cc -v -o CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o -c /usr/share/cmake-3.16/Modules/CMakeCCompilerABI.c
Using built-in specs.
COLLECT_GCC=/usr/bin/cc
OFFLOAD_TARGET_NAMES=nvptx-none:hsa
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 9.3.0-10ubuntu2' --with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-9 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none,hsa --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 9.3.0 (Ubuntu 9.3.0-10ubuntu2)
COLLECT_GCC_OPTIONS='-v' '-o' 'CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o' '-c' '-mtune=generic' '-march=x86-64'
/usr/lib/gcc/x86_64-linux-gnu/9/cc1 -quiet -v -imultiarch x86_64-linux-gnu /usr/share/cmake-3.16/Modules/CMakeCCompilerABI.c -quiet -dumpbase CMakeCCompilerABI.c -mtune=generic -march=x86-64 -auxbase-strip CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o -version -fasynchronous-unwind-tables -fstack-protector-strong -Wformat -Wformat-security -fstack-clash-protection -fcf-protection -o /tmp/ccL422Xg.s
GNU C17 (Ubuntu 9.3.0-10ubuntu2) version 9.3.0 (x86_64-linux-gnu)
compiled by GNU C version 9.3.0, GMP version 6.2.0, MPFR version 4.0.2, MPC version 1.1.0, isl version isl-0.22.1-GMP

GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/9/include-fixed"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/9/../../../../x86_64-linux-gnu/include"
#include "..." search starts here:
#include <...> search starts here:
/usr/lib/gcc/x86_64-linux-gnu/9/include
/usr/local/include
/usr/include/x86_64-linux-gnu
/usr/include
End of search list.
GNU C17 (Ubuntu 9.3.0-10ubuntu2) version 9.3.0 (x86_64-linux-gnu)
compiled by GNU C version 9.3.0, GMP version 6.2.0, MPFR version 4.0.2, MPC version 1.1.0, isl version isl-0.22.1-GMP

GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
Compiler executable checksum: 18dc4c39b54390aa2b5013fb4339d43f
COLLECT_GCC_OPTIONS='-v' '-o' 'CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o' '-c' '-mtune=generic' '-march=x86-64'
as -v --64 -o CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o /tmp/ccL422Xg.s
GNU assembler version 2.34 (x86_64-linux-gnu) using BFD version (GNU Binutils for Ubuntu) 2.34
COMPILER_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/
LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib/:/lib/x86_64-linux-gnu/:/lib/../lib/:/usr/lib/x86_64-linux-gnu/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-v' '-o' 'CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o' '-c' '-mtune=generic' '-march=x86-64'
Linking C executable cmTC_b54fe
/usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_b54fe.dir/link.txt --verbose=1
/usr/bin/cc -v -rdynamic CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o -o cmTC_b54fe
Using built-in specs.
COLLECT_GCC=/usr/bin/cc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none:hsa
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 9.3.0-10ubuntu2' --with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-9 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none,hsa --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 9.3.0 (Ubuntu 9.3.0-10ubuntu2)
COMPILER_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/
LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib/:/lib/x86_64-linux-gnu/:/lib/../lib/:/usr/lib/x86_64-linux-gnu/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-v' '-rdynamic' '-o' 'cmTC_b54fe' '-mtune=generic' '-march=x86-64'
/usr/lib/gcc/x86_64-linux-gnu/9/collect2 -plugin /usr/lib/gcc/x86_64-linux-gnu/9/liblto_plugin.so -plugin-opt=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapper -plugin-opt=-fresolution=/tmp/cc7jjZRG.res -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s --build-id --eh-frame-hdr -m elf_x86_64 --hash-style=gnu --as-needed -export-dynamic -dynamic-linker /lib64/ld-linux-x86-64.so.2 -pie -z now -z relro -o cmTC_b54fe /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/Scrt1.o /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/crti.o /usr/lib/gcc/x86_64-linux-gnu/9/crtbeginS.o -L/usr/lib/gcc/x86_64-linux-gnu/9 -L/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu -L/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib -L/lib/x86_64-linux-gnu -L/lib/../lib -L/usr/lib/x86_64-linux-gnu -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-linux-gnu/9/../../.. CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o -lgcc --push-state --as-needed -lgcc_s --pop-state -lc -lgcc --push-state --as-needed -lgcc_s --pop-state /usr/lib/gcc/x86_64-linux-gnu/9/crtendS.o /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/crtn.o
COLLECT_GCC_OPTIONS='-v' '-rdynamic' '-o' 'cmTC_b54fe' '-mtune=generic' '-march=x86-64'
make[1]: Leaving directory '/home/user/MocapNET/build/CMakeFiles/CMakeTmp'

Parsed C implicit include dir info from above output: rv=done
found start of include info
found start of implicit include info
add: [/usr/lib/gcc/x86_64-linux-gnu/9/include]
add: [/usr/local/include]
add: [/usr/include/x86_64-linux-gnu]
add: [/usr/include]
end of search list found
collapse include dir [/usr/lib/gcc/x86_64-linux-gnu/9/include] ==> [/usr/lib/gcc/x86_64-linux-gnu/9/include]
collapse include dir [/usr/local/include] ==> [/usr/local/include]
collapse include dir [/usr/include/x86_64-linux-gnu] ==> [/usr/include/x86_64-linux-gnu]
collapse include dir [/usr/include] ==> [/usr/include]
implicit include dirs: [/usr/lib/gcc/x86_64-linux-gnu/9/include;/usr/local/include;/usr/include/x86_64-linux-gnu;/usr/include]

Parsed C implicit link information from above output:
link line regex: [^( |.[/])(ld|CMAKE_LINK_STARTFILE-NOTFOUND|([^/]+-)?ld|collect2)[^/\]*( |$)]
ignore line: [Change Dir: /home/user/MocapNET/build/CMakeFiles/CMakeTmp]
ignore line: []
ignore line: [Run Build Command(s):/usr/bin/make cmTC_b54fe/fast && /usr/bin/make -f CMakeFiles/cmTC_b54fe.dir/build.make CMakeFiles/cmTC_b54fe.dir/build]
ignore line: [make[1]: Entering directory '/home/user/MocapNET/build/CMakeFiles/CMakeTmp']
ignore line: [Building C object CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o]
ignore line: [/usr/bin/cc -v -o CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o -c /usr/share/cmake-3.16/Modules/CMakeCCompilerABI.c]
ignore line: [Using built-in specs.]
ignore line: [COLLECT_GCC=/usr/bin/cc]
ignore line: [OFFLOAD_TARGET_NAMES=nvptx-none:hsa]
ignore line: [OFFLOAD_TARGET_DEFAULT=1]
ignore line: [Target: x86_64-linux-gnu]
ignore line: [Configured with: ../src/configure -v --with-pkgversion='Ubuntu 9.3.0-10ubuntu2' --with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs --enable-languages=c ada c++ go brig d fortran objc obj-c++ gm2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-9 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32 m64 mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none hsa --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu]
ignore line: [Thread model: posix]
ignore line: [gcc version 9.3.0 (Ubuntu 9.3.0-10ubuntu2) ]
ignore line: [COLLECT_GCC_OPTIONS='-v' '-o' 'CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o' '-c' '-mtune=generic' '-march=x86-64']
ignore line: [ /usr/lib/gcc/x86_64-linux-gnu/9/cc1 -quiet -v -imultiarch x86_64-linux-gnu /usr/share/cmake-3.16/Modules/CMakeCCompilerABI.c -quiet -dumpbase CMakeCCompilerABI.c -mtune=generic -march=x86-64 -auxbase-strip CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o -version -fasynchronous-unwind-tables -fstack-protector-strong -Wformat -Wformat-security -fstack-clash-protection -fcf-protection -o /tmp/ccL422Xg.s]
ignore line: [GNU C17 (Ubuntu 9.3.0-10ubuntu2) version 9.3.0 (x86_64-linux-gnu)]
ignore line: [ compiled by GNU C version 9.3.0 GMP version 6.2.0 MPFR version 4.0.2 MPC version 1.1.0 isl version isl-0.22.1-GMP]
ignore line: []
ignore line: [GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072]
ignore line: [ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu"]
ignore line: [ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/9/include-fixed"]
ignore line: [ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/9/../../../../x86_64-linux-gnu/include"]
ignore line: [#include "..." search starts here:]
ignore line: [#include <...> search starts here:]
ignore line: [ /usr/lib/gcc/x86_64-linux-gnu/9/include]
ignore line: [ /usr/local/include]
ignore line: [ /usr/include/x86_64-linux-gnu]
ignore line: [ /usr/include]
ignore line: [End of search list.]
ignore line: [GNU C17 (Ubuntu 9.3.0-10ubuntu2) version 9.3.0 (x86_64-linux-gnu)]
ignore line: [ compiled by GNU C version 9.3.0 GMP version 6.2.0 MPFR version 4.0.2 MPC version 1.1.0 isl version isl-0.22.1-GMP]
ignore line: []
ignore line: [GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072]
ignore line: [Compiler executable checksum: 18dc4c39b54390aa2b5013fb4339d43f]
ignore line: [COLLECT_GCC_OPTIONS='-v' '-o' 'CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o' '-c' '-mtune=generic' '-march=x86-64']
ignore line: [ as -v --64 -o CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o /tmp/ccL422Xg.s]
ignore line: [GNU assembler version 2.34 (x86_64-linux-gnu) using BFD version (GNU Binutils for Ubuntu) 2.34]
ignore line: [COMPILER_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/]
ignore line: [LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib/:/lib/x86_64-linux-gnu/:/lib/../lib/:/usr/lib/x86_64-linux-gnu/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../:/lib/:/usr/lib/]
ignore line: [COLLECT_GCC_OPTIONS='-v' '-o' 'CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o' '-c' '-mtune=generic' '-march=x86-64']
ignore line: [Linking C executable cmTC_b54fe]
ignore line: [/usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_b54fe.dir/link.txt --verbose=1]
ignore line: [/usr/bin/cc -v -rdynamic CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o -o cmTC_b54fe ]
ignore line: [Using built-in specs.]
ignore line: [COLLECT_GCC=/usr/bin/cc]
ignore line: [COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapper]
ignore line: [OFFLOAD_TARGET_NAMES=nvptx-none:hsa]
ignore line: [OFFLOAD_TARGET_DEFAULT=1]
ignore line: [Target: x86_64-linux-gnu]
ignore line: [Configured with: ../src/configure -v --with-pkgversion='Ubuntu 9.3.0-10ubuntu2' --with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs --enable-languages=c ada c++ go brig d fortran objc obj-c++ gm2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-9 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32 m64 mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none hsa --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu]
ignore line: [Thread model: posix]
ignore line: [gcc version 9.3.0 (Ubuntu 9.3.0-10ubuntu2) ]
ignore line: [COMPILER_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/]
ignore line: [LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib/:/lib/x86_64-linux-gnu/:/lib/../lib/:/usr/lib/x86_64-linux-gnu/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../:/lib/:/usr/lib/]
ignore line: [COLLECT_GCC_OPTIONS='-v' '-rdynamic' '-o' 'cmTC_b54fe' '-mtune=generic' '-march=x86-64']
link line: [ /usr/lib/gcc/x86_64-linux-gnu/9/collect2 -plugin /usr/lib/gcc/x86_64-linux-gnu/9/liblto_plugin.so -plugin-opt=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapper -plugin-opt=-fresolution=/tmp/cc7jjZRG.res -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s --build-id --eh-frame-hdr -m elf_x86_64 --hash-style=gnu --as-needed -export-dynamic -dynamic-linker /lib64/ld-linux-x86-64.so.2 -pie -z now -z relro -o cmTC_b54fe /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/Scrt1.o /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/crti.o /usr/lib/gcc/x86_64-linux-gnu/9/crtbeginS.o -L/usr/lib/gcc/x86_64-linux-gnu/9 -L/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu -L/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib -L/lib/x86_64-linux-gnu -L/lib/../lib -L/usr/lib/x86_64-linux-gnu -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-linux-gnu/9/../../.. CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o -lgcc --push-state --as-needed -lgcc_s --pop-state -lc -lgcc --push-state --as-needed -lgcc_s --pop-state /usr/lib/gcc/x86_64-linux-gnu/9/crtendS.o /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/crtn.o]
arg [/usr/lib/gcc/x86_64-linux-gnu/9/collect2] ==> ignore
arg [-plugin] ==> ignore
arg [/usr/lib/gcc/x86_64-linux-gnu/9/liblto_plugin.so] ==> ignore
arg [-plugin-opt=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapper] ==> ignore
arg [-plugin-opt=-fresolution=/tmp/cc7jjZRG.res] ==> ignore
arg [-plugin-opt=-pass-through=-lgcc] ==> ignore
arg [-plugin-opt=-pass-through=-lgcc_s] ==> ignore
arg [-plugin-opt=-pass-through=-lc] ==> ignore
arg [-plugin-opt=-pass-through=-lgcc] ==> ignore
arg [-plugin-opt=-pass-through=-lgcc_s] ==> ignore
arg [--build-id] ==> ignore
arg [--eh-frame-hdr] ==> ignore
arg [-m] ==> ignore
arg [elf_x86_64] ==> ignore
arg [--hash-style=gnu] ==> ignore
arg [--as-needed] ==> ignore
arg [-export-dynamic] ==> ignore
arg [-dynamic-linker] ==> ignore
arg [/lib64/ld-linux-x86-64.so.2] ==> ignore
arg [-pie] ==> ignore
arg [-znow] ==> ignore
arg [-zrelro] ==> ignore
arg [-o] ==> ignore
arg [cmTC_b54fe] ==> ignore
arg [/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/Scrt1.o] ==> ignore
arg [/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/crti.o] ==> ignore
arg [/usr/lib/gcc/x86_64-linux-gnu/9/crtbeginS.o] ==> ignore
arg [-L/usr/lib/gcc/x86_64-linux-gnu/9] ==> dir [/usr/lib/gcc/x86_64-linux-gnu/9]
arg [-L/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu] ==> dir [/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu]
arg [-L/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib] ==> dir [/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib]
arg [-L/lib/x86_64-linux-gnu] ==> dir [/lib/x86_64-linux-gnu]
arg [-L/lib/../lib] ==> dir [/lib/../lib]
arg [-L/usr/lib/x86_64-linux-gnu] ==> dir [/usr/lib/x86_64-linux-gnu]
arg [-L/usr/lib/../lib] ==> dir [/usr/lib/../lib]
arg [-L/usr/lib/gcc/x86_64-linux-gnu/9/../../..] ==> dir [/usr/lib/gcc/x86_64-linux-gnu/9/../../..]
arg [CMakeFiles/cmTC_b54fe.dir/CMakeCCompilerABI.c.o] ==> ignore
arg [-lgcc] ==> lib [gcc]
arg [--push-state] ==> ignore
arg [--as-needed] ==> ignore
arg [-lgcc_s] ==> lib [gcc_s]
arg [--pop-state] ==> ignore
arg [-lc] ==> lib [c]
arg [-lgcc] ==> lib [gcc]
arg [--push-state] ==> ignore
arg [--as-needed] ==> ignore
arg [-lgcc_s] ==> lib [gcc_s]
arg [--pop-state] ==> ignore
arg [/usr/lib/gcc/x86_64-linux-gnu/9/crtendS.o] ==> ignore
arg [/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/crtn.o] ==> ignore
collapse library dir [/usr/lib/gcc/x86_64-linux-gnu/9] ==> [/usr/lib/gcc/x86_64-linux-gnu/9]
collapse library dir [/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu] ==> [/usr/lib/x86_64-linux-gnu]
collapse library dir [/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib] ==> [/usr/lib]
collapse library dir [/lib/x86_64-linux-gnu] ==> [/lib/x86_64-linux-gnu]
collapse library dir [/lib/../lib] ==> [/lib]
collapse library dir [/usr/lib/x86_64-linux-gnu] ==> [/usr/lib/x86_64-linux-gnu]
collapse library dir [/usr/lib/../lib] ==> [/usr/lib]
collapse library dir [/usr/lib/gcc/x86_64-linux-gnu/9/../../..] ==> [/usr/lib]
implicit libs: [gcc;gcc_s;c;gcc;gcc_s]
implicit dirs: [/usr/lib/gcc/x86_64-linux-gnu/9;/usr/lib/x86_64-linux-gnu;/usr/lib;/lib/x86_64-linux-gnu;/lib]
implicit fwks: []

Determining if the CXX compiler works passed with the following output:
Change Dir: /home/user/MocapNET/build/CMakeFiles/CMakeTmp

Run Build Command(s):/usr/bin/make cmTC_003f1/fast && /usr/bin/make -f CMakeFiles/cmTC_003f1.dir/build.make CMakeFiles/cmTC_003f1.dir/build
make[1]: Entering directory '/home/user/MocapNET/build/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_003f1.dir/testCXXCompiler.cxx.o
/usr/bin/c++ -o CMakeFiles/cmTC_003f1.dir/testCXXCompiler.cxx.o -c /home/user/MocapNET/build/CMakeFiles/CMakeTmp/testCXXCompiler.cxx
Linking CXX executable cmTC_003f1
/usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_003f1.dir/link.txt --verbose=1
/usr/bin/c++ -rdynamic CMakeFiles/cmTC_003f1.dir/testCXXCompiler.cxx.o -o cmTC_003f1
make[1]: Leaving directory '/home/user/MocapNET/build/CMakeFiles/CMakeTmp'

Detecting CXX compiler ABI info compiled with the following output:
Change Dir: /home/user/MocapNET/build/CMakeFiles/CMakeTmp

Run Build Command(s):/usr/bin/make cmTC_c45a6/fast && /usr/bin/make -f CMakeFiles/cmTC_c45a6.dir/build.make CMakeFiles/cmTC_c45a6.dir/build
make[1]: Entering directory '/home/user/MocapNET/build/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o
/usr/bin/c++ -v -o CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o -c /usr/share/cmake-3.16/Modules/CMakeCXXCompilerABI.cpp
Using built-in specs.
COLLECT_GCC=/usr/bin/c++
OFFLOAD_TARGET_NAMES=nvptx-none:hsa
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 9.3.0-10ubuntu2' --with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-9 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none,hsa --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 9.3.0 (Ubuntu 9.3.0-10ubuntu2)
COLLECT_GCC_OPTIONS='-v' '-o' 'CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o' '-c' '-shared-libgcc' '-mtune=generic' '-march=x86-64'
/usr/lib/gcc/x86_64-linux-gnu/9/cc1plus -quiet -v -imultiarch x86_64-linux-gnu -D_GNU_SOURCE /usr/share/cmake-3.16/Modules/CMakeCXXCompilerABI.cpp -quiet -dumpbase CMakeCXXCompilerABI.cpp -mtune=generic -march=x86-64 -auxbase-strip CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o -version -fasynchronous-unwind-tables -fstack-protector-strong -Wformat -Wformat-security -fstack-clash-protection -fcf-protection -o /tmp/ccSvK8ud.s
GNU C++14 (Ubuntu 9.3.0-10ubuntu2) version 9.3.0 (x86_64-linux-gnu)
compiled by GNU C version 9.3.0, GMP version 6.2.0, MPFR version 4.0.2, MPC version 1.1.0, isl version isl-0.22.1-GMP

GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
ignoring duplicate directory "/usr/include/x86_64-linux-gnu/c++/9"
ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/9/include-fixed"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/9/../../../../x86_64-linux-gnu/include"
#include "..." search starts here:
#include <...> search starts here:
/usr/include/c++/9
/usr/include/x86_64-linux-gnu/c++/9
/usr/include/c++/9/backward
/usr/lib/gcc/x86_64-linux-gnu/9/include
/usr/local/include
/usr/include/x86_64-linux-gnu
/usr/include
End of search list.
GNU C++14 (Ubuntu 9.3.0-10ubuntu2) version 9.3.0 (x86_64-linux-gnu)
compiled by GNU C version 9.3.0, GMP version 6.2.0, MPFR version 4.0.2, MPC version 1.1.0, isl version isl-0.22.1-GMP

GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
Compiler executable checksum: a3d04a02fbd98a786d710618ca593f02
COLLECT_GCC_OPTIONS='-v' '-o' 'CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o' '-c' '-shared-libgcc' '-mtune=generic' '-march=x86-64'
as -v --64 -o CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o /tmp/ccSvK8ud.s
GNU assembler version 2.34 (x86_64-linux-gnu) using BFD version (GNU Binutils for Ubuntu) 2.34
COMPILER_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/
LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib/:/lib/x86_64-linux-gnu/:/lib/../lib/:/usr/lib/x86_64-linux-gnu/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-v' '-o' 'CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o' '-c' '-shared-libgcc' '-mtune=generic' '-march=x86-64'
Linking CXX executable cmTC_c45a6
/usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_c45a6.dir/link.txt --verbose=1
/usr/bin/c++ -v -rdynamic CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o -o cmTC_c45a6
Using built-in specs.
COLLECT_GCC=/usr/bin/c++
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none:hsa
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 9.3.0-10ubuntu2' --with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-9 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none,hsa --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 9.3.0 (Ubuntu 9.3.0-10ubuntu2)
COMPILER_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/
LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib/:/lib/x86_64-linux-gnu/:/lib/../lib/:/usr/lib/x86_64-linux-gnu/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-v' '-rdynamic' '-o' 'cmTC_c45a6' '-shared-libgcc' '-mtune=generic' '-march=x86-64'
/usr/lib/gcc/x86_64-linux-gnu/9/collect2 -plugin /usr/lib/gcc/x86_64-linux-gnu/9/liblto_plugin.so -plugin-opt=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapper -plugin-opt=-fresolution=/tmp/ccMRfM1D.res -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lgcc --build-id --eh-frame-hdr -m elf_x86_64 --hash-style=gnu --as-needed -export-dynamic -dynamic-linker /lib64/ld-linux-x86-64.so.2 -pie -z now -z relro -o cmTC_c45a6 /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/Scrt1.o /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/crti.o /usr/lib/gcc/x86_64-linux-gnu/9/crtbeginS.o -L/usr/lib/gcc/x86_64-linux-gnu/9 -L/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu -L/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib -L/lib/x86_64-linux-gnu -L/lib/../lib -L/usr/lib/x86_64-linux-gnu -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-linux-gnu/9/../../.. CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o -lstdc++ -lm -lgcc_s -lgcc -lc -lgcc_s -lgcc /usr/lib/gcc/x86_64-linux-gnu/9/crtendS.o /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/crtn.o
COLLECT_GCC_OPTIONS='-v' '-rdynamic' '-o' 'cmTC_c45a6' '-shared-libgcc' '-mtune=generic' '-march=x86-64'
make[1]: Leaving directory '/home/user/MocapNET/build/CMakeFiles/CMakeTmp'

Parsed CXX implicit include dir info from above output: rv=done
found start of include info
found start of implicit include info
add: [/usr/include/c++/9]
add: [/usr/include/x86_64-linux-gnu/c++/9]
add: [/usr/include/c++/9/backward]
add: [/usr/lib/gcc/x86_64-linux-gnu/9/include]
add: [/usr/local/include]
add: [/usr/include/x86_64-linux-gnu]
add: [/usr/include]
end of search list found
collapse include dir [/usr/include/c++/9] ==> [/usr/include/c++/9]
collapse include dir [/usr/include/x86_64-linux-gnu/c++/9] ==> [/usr/include/x86_64-linux-gnu/c++/9]
collapse include dir [/usr/include/c++/9/backward] ==> [/usr/include/c++/9/backward]
collapse include dir [/usr/lib/gcc/x86_64-linux-gnu/9/include] ==> [/usr/lib/gcc/x86_64-linux-gnu/9/include]
collapse include dir [/usr/local/include] ==> [/usr/local/include]
collapse include dir [/usr/include/x86_64-linux-gnu] ==> [/usr/include/x86_64-linux-gnu]
collapse include dir [/usr/include] ==> [/usr/include]
implicit include dirs: [/usr/include/c++/9;/usr/include/x86_64-linux-gnu/c++/9;/usr/include/c++/9/backward;/usr/lib/gcc/x86_64-linux-gnu/9/include;/usr/local/include;/usr/include/x86_64-linux-gnu;/usr/include]

Parsed CXX implicit link information from above output:
link line regex: [^( |.[/])(ld|CMAKE_LINK_STARTFILE-NOTFOUND|([^/]+-)?ld|collect2)[^/\]*( |$)]
ignore line: [Change Dir: /home/user/MocapNET/build/CMakeFiles/CMakeTmp]
ignore line: []
ignore line: [Run Build Command(s):/usr/bin/make cmTC_c45a6/fast && /usr/bin/make -f CMakeFiles/cmTC_c45a6.dir/build.make CMakeFiles/cmTC_c45a6.dir/build]
ignore line: [make[1]: Entering directory '/home/user/MocapNET/build/CMakeFiles/CMakeTmp']
ignore line: [Building CXX object CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o]
ignore line: [/usr/bin/c++ -v -o CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o -c /usr/share/cmake-3.16/Modules/CMakeCXXCompilerABI.cpp]
ignore line: [Using built-in specs.]
ignore line: [COLLECT_GCC=/usr/bin/c++]
ignore line: [OFFLOAD_TARGET_NAMES=nvptx-none:hsa]
ignore line: [OFFLOAD_TARGET_DEFAULT=1]
ignore line: [Target: x86_64-linux-gnu]
ignore line: [Configured with: ../src/configure -v --with-pkgversion='Ubuntu 9.3.0-10ubuntu2' --with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs --enable-languages=c ada c++ go brig d fortran objc obj-c++ gm2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-9 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32 m64 mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none hsa --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu]
ignore line: [Thread model: posix]
ignore line: [gcc version 9.3.0 (Ubuntu 9.3.0-10ubuntu2) ]
ignore line: [COLLECT_GCC_OPTIONS='-v' '-o' 'CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o' '-c' '-shared-libgcc' '-mtune=generic' '-march=x86-64']
ignore line: [ /usr/lib/gcc/x86_64-linux-gnu/9/cc1plus -quiet -v -imultiarch x86_64-linux-gnu -D_GNU_SOURCE /usr/share/cmake-3.16/Modules/CMakeCXXCompilerABI.cpp -quiet -dumpbase CMakeCXXCompilerABI.cpp -mtune=generic -march=x86-64 -auxbase-strip CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o -version -fasynchronous-unwind-tables -fstack-protector-strong -Wformat -Wformat-security -fstack-clash-protection -fcf-protection -o /tmp/ccSvK8ud.s]
ignore line: [GNU C++14 (Ubuntu 9.3.0-10ubuntu2) version 9.3.0 (x86_64-linux-gnu)]
ignore line: [ compiled by GNU C version 9.3.0 GMP version 6.2.0 MPFR version 4.0.2 MPC version 1.1.0 isl version isl-0.22.1-GMP]
ignore line: []
ignore line: [GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072]
ignore line: [ignoring duplicate directory "/usr/include/x86_64-linux-gnu/c++/9"]
ignore line: [ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu"]
ignore line: [ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/9/include-fixed"]
ignore line: [ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/9/../../../../x86_64-linux-gnu/include"]
ignore line: [#include "..." search starts here:]
ignore line: [#include <...> search starts here:]
ignore line: [ /usr/include/c++/9]
ignore line: [ /usr/include/x86_64-linux-gnu/c++/9]
ignore line: [ /usr/include/c++/9/backward]
ignore line: [ /usr/lib/gcc/x86_64-linux-gnu/9/include]
ignore line: [ /usr/local/include]
ignore line: [ /usr/include/x86_64-linux-gnu]
ignore line: [ /usr/include]
ignore line: [End of search list.]
ignore line: [GNU C++14 (Ubuntu 9.3.0-10ubuntu2) version 9.3.0 (x86_64-linux-gnu)]
ignore line: [ compiled by GNU C version 9.3.0 GMP version 6.2.0 MPFR version 4.0.2 MPC version 1.1.0 isl version isl-0.22.1-GMP]
ignore line: []
ignore line: [GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072]
ignore line: [Compiler executable checksum: a3d04a02fbd98a786d710618ca593f02]
ignore line: [COLLECT_GCC_OPTIONS='-v' '-o' 'CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o' '-c' '-shared-libgcc' '-mtune=generic' '-march=x86-64']
ignore line: [ as -v --64 -o CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o /tmp/ccSvK8ud.s]
ignore line: [GNU assembler version 2.34 (x86_64-linux-gnu) using BFD version (GNU Binutils for Ubuntu) 2.34]
ignore line: [COMPILER_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/]
ignore line: [LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib/:/lib/x86_64-linux-gnu/:/lib/../lib/:/usr/lib/x86_64-linux-gnu/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../:/lib/:/usr/lib/]
ignore line: [COLLECT_GCC_OPTIONS='-v' '-o' 'CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o' '-c' '-shared-libgcc' '-mtune=generic' '-march=x86-64']
ignore line: [Linking CXX executable cmTC_c45a6]
ignore line: [/usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_c45a6.dir/link.txt --verbose=1]
ignore line: [/usr/bin/c++ -v -rdynamic CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o -o cmTC_c45a6 ]
ignore line: [Using built-in specs.]
ignore line: [COLLECT_GCC=/usr/bin/c++]
ignore line: [COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapper]
ignore line: [OFFLOAD_TARGET_NAMES=nvptx-none:hsa]
ignore line: [OFFLOAD_TARGET_DEFAULT=1]
ignore line: [Target: x86_64-linux-gnu]
ignore line: [Configured with: ../src/configure -v --with-pkgversion='Ubuntu 9.3.0-10ubuntu2' --with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs --enable-languages=c ada c++ go brig d fortran objc obj-c++ gm2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-9 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32 m64 mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none hsa --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu]
ignore line: [Thread model: posix]
ignore line: [gcc version 9.3.0 (Ubuntu 9.3.0-10ubuntu2) ]
ignore line: [COMPILER_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/]
ignore line: [LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/9/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib/:/lib/x86_64-linux-gnu/:/lib/../lib/:/usr/lib/x86_64-linux-gnu/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/9/../../../:/lib/:/usr/lib/]
ignore line: [COLLECT_GCC_OPTIONS='-v' '-rdynamic' '-o' 'cmTC_c45a6' '-shared-libgcc' '-mtune=generic' '-march=x86-64']
link line: [ /usr/lib/gcc/x86_64-linux-gnu/9/collect2 -plugin /usr/lib/gcc/x86_64-linux-gnu/9/liblto_plugin.so -plugin-opt=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapper -plugin-opt=-fresolution=/tmp/ccMRfM1D.res -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lgcc --build-id --eh-frame-hdr -m elf_x86_64 --hash-style=gnu --as-needed -export-dynamic -dynamic-linker /lib64/ld-linux-x86-64.so.2 -pie -z now -z relro -o cmTC_c45a6 /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/Scrt1.o /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/crti.o /usr/lib/gcc/x86_64-linux-gnu/9/crtbeginS.o -L/usr/lib/gcc/x86_64-linux-gnu/9 -L/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu -L/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib -L/lib/x86_64-linux-gnu -L/lib/../lib -L/usr/lib/x86_64-linux-gnu -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-linux-gnu/9/../../.. CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o -lstdc++ -lm -lgcc_s -lgcc -lc -lgcc_s -lgcc /usr/lib/gcc/x86_64-linux-gnu/9/crtendS.o /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/crtn.o]
arg [/usr/lib/gcc/x86_64-linux-gnu/9/collect2] ==> ignore
arg [-plugin] ==> ignore
arg [/usr/lib/gcc/x86_64-linux-gnu/9/liblto_plugin.so] ==> ignore
arg [-plugin-opt=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapper] ==> ignore
arg [-plugin-opt=-fresolution=/tmp/ccMRfM1D.res] ==> ignore
arg [-plugin-opt=-pass-through=-lgcc_s] ==> ignore
arg [-plugin-opt=-pass-through=-lgcc] ==> ignore
arg [-plugin-opt=-pass-through=-lc] ==> ignore
arg [-plugin-opt=-pass-through=-lgcc_s] ==> ignore
arg [-plugin-opt=-pass-through=-lgcc] ==> ignore
arg [--build-id] ==> ignore
arg [--eh-frame-hdr] ==> ignore
arg [-m] ==> ignore
arg [elf_x86_64] ==> ignore
arg [--hash-style=gnu] ==> ignore
arg [--as-needed] ==> ignore
arg [-export-dynamic] ==> ignore
arg [-dynamic-linker] ==> ignore
arg [/lib64/ld-linux-x86-64.so.2] ==> ignore
arg [-pie] ==> ignore
arg [-znow] ==> ignore
arg [-zrelro] ==> ignore
arg [-o] ==> ignore
arg [cmTC_c45a6] ==> ignore
arg [/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/Scrt1.o] ==> ignore
arg [/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/crti.o] ==> ignore
arg [/usr/lib/gcc/x86_64-linux-gnu/9/crtbeginS.o] ==> ignore
arg [-L/usr/lib/gcc/x86_64-linux-gnu/9] ==> dir [/usr/lib/gcc/x86_64-linux-gnu/9]
arg [-L/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu] ==> dir [/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu]
arg [-L/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib] ==> dir [/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib]
arg [-L/lib/x86_64-linux-gnu] ==> dir [/lib/x86_64-linux-gnu]
arg [-L/lib/../lib] ==> dir [/lib/../lib]
arg [-L/usr/lib/x86_64-linux-gnu] ==> dir [/usr/lib/x86_64-linux-gnu]
arg [-L/usr/lib/../lib] ==> dir [/usr/lib/../lib]
arg [-L/usr/lib/gcc/x86_64-linux-gnu/9/../../..] ==> dir [/usr/lib/gcc/x86_64-linux-gnu/9/../../..]
arg [CMakeFiles/cmTC_c45a6.dir/CMakeCXXCompilerABI.cpp.o] ==> ignore
arg [-lstdc++] ==> lib [stdc++]
arg [-lm] ==> lib [m]
arg [-lgcc_s] ==> lib [gcc_s]
arg [-lgcc] ==> lib [gcc]
arg [-lc] ==> lib [c]
arg [-lgcc_s] ==> lib [gcc_s]
arg [-lgcc] ==> lib [gcc]
arg [/usr/lib/gcc/x86_64-linux-gnu/9/crtendS.o] ==> ignore
arg [/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/crtn.o] ==> ignore
collapse library dir [/usr/lib/gcc/x86_64-linux-gnu/9] ==> [/usr/lib/gcc/x86_64-linux-gnu/9]
collapse library dir [/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu] ==> [/usr/lib/x86_64-linux-gnu]
collapse library dir [/usr/lib/gcc/x86_64-linux-gnu/9/../../../../lib] ==> [/usr/lib]
collapse library dir [/lib/x86_64-linux-gnu] ==> [/lib/x86_64-linux-gnu]
collapse library dir [/lib/../lib] ==> [/lib]
collapse library dir [/usr/lib/x86_64-linux-gnu] ==> [/usr/lib/x86_64-linux-gnu]
collapse library dir [/usr/lib/../lib] ==> [/usr/lib]
collapse library dir [/usr/lib/gcc/x86_64-linux-gnu/9/../../..] ==> [/usr/lib]
implicit libs: [stdc++;m;gcc_s;gcc;c;gcc_s;gcc]
implicit dirs: [/usr/lib/gcc/x86_64-linux-gnu/9;/usr/lib/x86_64-linux-gnu;/usr/lib;/lib/x86_64-linux-gnu;/lib]
implicit fwks: []

`

Questions about converting OpenPose .json files to .bvh

When attempting to run convertOpenPoseJSONToCSV on a copy of the OpenPose output folder, which contains 273 files named like 000000000xxx_keypoints.json, the following is output:

tm@tm-VirtualBox:~/Downloads/MocapNET$ ./convertOpenPoseJSONToCSV --from output/ -o
File output//colorFrame_0_00001.jpg does not exist, unable to get its dimensions..
Assuming default image dimensions 1920x1080 , you can change this using --size x y
Threshold is set to 0.50
Processing : 
Stopping search after 1000 checks ..
findFirstJSONFileInDirectory: failed to find any JSON files.. :(
Path : output/ 
Format : %s/%s%05u_keypoints.json 
Label : colorFrame_0_ 
Failed to find a JSON file..!
 tm@tm-VirtualBox:~/Downloads/MocapNET$ 

The name of the first json file is Test1_000000000000_keypoints.json

Why can’t this find the JSON files?
The usage steps listed at the top of the utility's code mention JPEG files but not JSON files. Can it convert OpenPose JSON files to CSV files without needing the JPEG files?
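Judging from the converter's own log above (Format: %s/%s%05u_keypoints.json, Label: colorFrame_0_), it searches for a fixed filename prefix plus a zero-padded frame index, so files named Test1_..._keypoints.json are never matched. One possible workaround (a sketch using mock files; the Test1_ prefix and 12-digit width are taken from the filename quoted above, adjust them if the converter expects a different width) is renaming the files to the expected pattern before converting:

```shell
# Sketch: rename OpenPose JSON files to the colorFrame_0_ pattern the converter looks for.
mkdir -p output_demo
touch output_demo/Test1_000000000000_keypoints.json   # mock input files for this sketch
touch output_demo/Test1_000000000001_keypoints.json

i=0
for f in output_demo/Test1_*_keypoints.json; do
  # Zero-pad the frame index to 12 digits, matching typical OpenPose output.
  mv "$f" "$(printf 'output_demo/colorFrame_0_%012u_keypoints.json' "$i")"
  i=$((i+1))
done

ls output_demo
```

After the rename, pointing the converter at the directory should let findFirstJSONFileInDirectory succeed.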


How is the MocapNET2CSV utility to be used? The comment section at the top of its code says:

  • Utility to extract BVH files straight from OpenPose JSON output (exactly what I want to do!)
  • Sample usage ./MocapNETCSV --from test.csv --visualize

After running initialize.sh there is no test.csv file to be found in the MocapNET directory, but there is a TestCSV binary.

The result of running this program is:

> tm@tm-VirtualBox:~/Downloads/MocapNET$ ./MocapNET2CSV --from output --visualize
> CPU :  Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
> GPU :  VMware SVGA II Adapter
> Visualization enabled
> The path output doesn't look like a CSV file.. 
> tm@tm-VirtualBox:~/Downloads/MocapNET$ Your version is up to date

Is this utility designed to accept the output of the convertOpenPoseJSONToCSV utility and convert it into a BVH file?
Or will it actually work directly on the folder of OpenPose JSON files, as apparently stated?
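Based on the error message above ("doesn't look like a CSV file"), the intended workflow appears to be a two-step pipeline: JSON to CSV, then CSV to BVH. A usage sketch (the CSV filename here is a placeholder, not necessarily the converter's actual output name):

```shell
# Hypothetical two-step pipeline; adjust paths and filenames to your setup.
./convertOpenPoseJSONToCSV --from output/ -o
./MocapNET2CSV --from converted2DPoints.csv --visualize
```

In other words, MocapNET2CSV seems to want a .csv file, not a directory of JSON files.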

OpenGL error after getOpenGLColor()

Hi, after I solved the NVIDIA problem, I ran with --opengl and hit a new problem:

OpenGL Error(1280):.../MocapNET/dependencies/RGBDAcquisition/opengl_acquisition_shared_library/opengl_depth_and_color_renderer/src/Library/Rendering/downloadFromRenderer.c 72
An unacceptable value is specified for an enumerated argument. The offending command is ignored and has no other side effect than to set the error flag

OpenGL visualization not working

Hi, I want to use the OpenGL visualization to render a skinned mesh:
./MocapNETLiveWebcamDemo --from shuffle.webm --openpose --opengl --frames 375
I tried setting ENABLE_OPENGL to ON during compilation and using the --opengl flag when running MocapNETLiveWebcamDemo,
but I still cannot see the experimental OpenGL visualization.
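A rebuild sketch, assuming the CMake option is literally named ENABLE_OPENGL as mentioned above (the flag must be set at configure time, before building):

```shell
# Reconfigure with the OpenGL option on, then rebuild and rerun with --opengl.
cd build
cmake -DENABLE_OPENGL=ON ..
make -j4
cd ..
./MocapNETLiveWebcamDemo --from shuffle.webm --openpose --opengl --frames 375
```

If the option was only toggled after the initial configure, the cached value may not have been picked up, so re-running cmake explicitly is worth trying.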

Upper body pose estimation

Hey @AmmarkoV @padeler @nkyriazis ,
Pleased to see this amazing work.
I need your help.
I changed int MAXIMUM_NUMBER_OF_NSDM_ELEMENTS_MISSING=110 to 133 for upper-body estimation, but the estimation results are not good.
Could you provide an updated model, or explain how I can train a new model for upper-body estimation?
Thanks!

Would you mind sharing detailed instructions?


After building, I would like to test the repo, but I have to use a CentOS server or a Docker environment without a camera. So I tried to process a video with your repo and save the processed output to a local path, but I cannot find detailed instructions for this excellent work.
I got the following error:
root@277aeac2e1bb:/workspace/pcode_tme/pose_dancing/MocapNET# ./WebcamBIN --from ./video_test/sense_test.mp4
Trying to open ./video_test/sense_test.mp4
QXcbConnection: Could not connect to display
Aborted
root@277aeac2e1bb:/workspace/pcode_tme/pose_dancing/MocapNET# vim ~/.bashrc
root@277aeac2e1bb:/workspace/pcode_tme/pose_dancing/MocapNET# source ~/.bashrc
root@277aeac2e1bb:/workspace/pcode_tme/pose_dancing/MocapNET# ./WebcamBIN --from ./video_test/sense_test.mp4
Trying to open ./video_test/sense_test.mp4
QFontDatabase: Cannot find font directory /usr/lib/x86_64-linux-gnu/fonts - is Qt installed correctly?
QFontDatabase: Cannot find font directory /usr/lib/x86_64-linux-gnu/fonts - is Qt installed correctly?
QFontDatabase: Cannot find font directory /usr/lib/x86_64-linux-gnu/fonts - is Qt installed correctly?
QFontDatabase: Cannot find font directory /usr/lib/x86_64-linux-gnu/fonts - is Qt installed correctly?
This plugin does not support propagateSizeHints()
This plugin does not support propagateSizeHints()
OpenCV Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /root/opencv/modules/highgui/src/window.cpp, line 339
terminate called after throwing an instance of 'cv::Exception'
what(): /root/opencv/modules/highgui/src/window.cpp:339: error: (-215) size.width>0 && size.height>0 in function imshow
Aborted
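The QXcbConnection and QFontDatabase errors above indicate there is no X display inside the container. One common workaround is a virtual X server via xvfb-run (a sketch; assumes xvfb can be installed in the container). Some of the MocapNET2 demos also accept a --novisualization flag, which avoids opening windows entirely:

```shell
# Run under a virtual framebuffer so Qt/OpenCV windows have a display to attach to.
apt-get update && apt-get install -y xvfb
xvfb-run -a ./WebcamBIN --from ./video_test/sense_test.mp4
```

The subsequent imshow assertion (size.width>0 && size.height>0) suggests the video also failed to decode; checking that OpenCV was built with codec support for .mp4 is a separate step.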

Also, I believe I built it successfully under the Mac Docker environment:
MocapNET: Input was given precompressed
Direction is : 38.503 Front
Sample 195/200 - 10.0350ms - mae 1.4426
MocapNET: Input was given precompressed
Direction is : 34.6637 Front
Sample 196/200 - 10.0750ms - mae 1.4900
MocapNET: Input was given precompressed
Direction is : 46.3801 Front
Sample 197/200 - 9.0950ms - mae 1.4359
MocapNET: Input was given precompressed
Direction is : 70.9141 Front
Sample 198/200 - 9.0400ms - mae 1.2482
MocapNET: Input was given precompressed
Direction is : 70.4886 Front
Sample 199/200 - 12.7740ms - mae 1.2491
model name : Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz

Total 9986.97ms for 1000 samples - Average 9.98697ms - 100.13 fps
root@277aeac2e1bb:/workspace/pcode_tme/pose_dancing/MocapNET# ./MocapNETBenchmark --cpu^C

How do you install on the Windows 10 Linux subsystem (WSL)?

Ubuntu installs successfully.

Steps

  1. git clone https://github.com/FORTH-ModelBasedTracker/MocapNET.git

  2. sudo apt-get update

  3. sudo apt-get install build-essential cmake libopencv-dev libjpeg-dev libpng-dev libglew-dev

  4. run ./initialize.sh

The problem I am running into is with step 4.

It gets to
[ 21%] Building CXX object src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/__/MocapNETLib/visualization.cpp

and then reports many messages; the first one (a warning, not an error) is:

/home/me/MocapNET/src/MocapNET1/MocapNETLib/visualization.cpp: In function ‘int visualizeNSDM(cv::Mat&, std::vector, unsigned int, unsigned int, unsigned int, unsigned int)’:
/home/delebash/MocapNET/src/MocapNET1/MocapNETLib/visualization.cpp:116:1: warning: no return statement in function returning non-void [-Wreturn-type]
116 | }
| ^

Any ideas on how to fix this?

File and path errors displayed as Could not find COCO 2D skeleton

I ran into this when I used webcam output from OpenPose. With webcam output there is no filename label, so I ran ./MocapNETJSON --from openpose-input --seriallength 12 --size 1920 1080 (note: without --label), and that is when I get the error. The same error is also displayed if I point to a wrong path by mistake. My suggestion would be to report file-path-not-found errors as exactly that, instead of the error above.

Thanks.
Love this software just what poor me has been looking for :)
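Until the tool reports missing paths explicitly, a pre-check in a wrapper script can make the failure obvious. A self-contained sketch (openpose-input is the example path from the command above; it is created here only so the sketch runs on its own):

```shell
# Sketch: verify the input directory exists before invoking the converter.
DIR="openpose-input"
mkdir -p "$DIR"   # for this self-contained sketch only

if [ -d "$DIR" ]; then
  echo "ok: $DIR exists, safe to run the converter"
else
  echo "error: input path $DIR not found" >&2
  exit 1
fi
```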

Instant segfault in all MocapNET programs

I can't run these programs after compiling:

./MocapNETBenchmark --cpu
./MocapNETLiveWebcamDemo --from /dev/video0 --live
./MocapNETJSON --from ### --label ###  --seriallength 12 --size 1280 720

None of these programs work; they all segfault instantly. OpenCVTest works fine.
Please help me run your programs!
OS: Fedora 32
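A quick way to localize an instant segfault like this is to capture a backtrace with gdb (a sketch using standard gdb batch options):

```shell
# Run the crashing binary under gdb and print a backtrace at the crash site.
gdb -batch -ex run -ex bt --args ./MocapNETBenchmark --cpu
```

The top frames of the backtrace usually show whether the crash is inside libtensorflow (which the linker warnings below also point at) or in MocapNET's own code.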

I also noticed some warnings while building:

[ 10%] Building C object src/GroundTruthGenerator/CMakeFiles/GroundTruthDumper.dir/__/__/dependencies/RGBDAcquisition/opengl_acquisition_shared_library/opengl_depth_and_color_renderer/src/Library/MotionCaptureLoader/export/bvh_to_bvh.c.o
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow.so: .dynsym local symbol at index 3 (>= sh_info of 3)
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 3 (>= sh_info of 3)
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 4 (>= sh_info of 3)
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 5 (>= sh_info of 3)
[ 11%] Building C object src/Converters/CMakeFiles/convertBody25JSONToCSV.dir/__/__/dependencies/RGBDAcquisition/opengl_acquisition_shared_library/opengl_depth_and_color_renderer/src/Library/MotionCaptureLoader/bvh_loader.c.o

...

[ 92%] Linking CXX executable ../../../BVHGUI
/usr/bin/ld: ../../../dependencies/libtensorflow/lib/libtensorflow.so: .dynsym local symbol at index 3 (>= sh_info of 3)
/usr/bin/ld: ../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 3 (>= sh_info of 3)
/usr/bin/ld: ../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 4 (>= sh_info of 3)
/usr/bin/ld: ../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 5 (>= sh_info of 3)
[ 93%] Building CXX object src/MocapNET1/MocapNETLiveWebcamDemo/CMakeFiles/MocapNETLiveWebcamDemo.dir/__/MocapNETLib/tools.cpp.o
[ 93%] Building CXX object src/MocapNET1/MocapNETLiveWebcamDemo/CMakeFiles/MocapNETLiveWebcamDemo.dir/__/MocapNETLib/jsonCocoSkeleton.cpp.o
[ 94%] Building CXX object src/MocapNET1/MocapNETSimpleBenchmark/CMakeFiles/MocapNETBenchmark.dir/__/MocapNETLib/outputFiltering.cpp.o
[ 94%] Building CXX object src/MocapNET1/MocapNETSimpleBenchmark/CMakeFiles/MocapNETBenchmark.dir/__/MocapNETLib/bvh.cpp.o
[ 94%] Building CXX object src/MocapNET1/MocapNETLiveWebcamDemo/CMakeFiles/MocapNETLiveWebcamDemo.dir/__/__/__/dependencies/InputParser/InputParser_C.cpp.o
[ 94%] Building CXX object src/MocapNET1/MocapNETSimpleBenchmark/CMakeFiles/MocapNETBenchmark.dir/__/MocapNETLib/csv.cpp.o
[ 95%] Building CXX object src/MocapNET1/MocapNETSimpleBenchmark/CMakeFiles/MocapNETBenchmark.dir/__/__/Tensorflow/tensorflow.cpp.o
[ 95%] Built target BVHGUI
[ 95%] Building CXX object src/MocapNET1/MocapNETSimpleBenchmark/CMakeFiles/MocapNETBenchmark.dir/__/__/Tensorflow/tf_utils.cpp.o
[ 96%] Building CXX object src/MocapNET1/MocapNETLiveWebcamDemo/CMakeFiles/MocapNETLiveWebcamDemo.dir/__/MocapNETLib/gestureRecognition.cpp.o
[ 97%] Building CXX object src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/__/MocapNETLib/jsonCocoSkeleton.cpp.o
[ 97%] Building CXX object src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/__/MocapNETLib/remoteExecution.cpp.o
[ 97%] Building CXX object src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/__/MocapNETLib/jsonMocapNETHelpers.cpp.o
[ 97%] Linking CXX executable ../../../../MocapNETBenchmark
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow.so: .dynsym local symbol at index 3 (>= sh_info of 3)
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 3 (>= sh_info of 3)
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 4 (>= sh_info of 3)
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 5 (>= sh_info of 3)
[ 98%] Building CXX object src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/__/__/__/dependencies/InputParser/InputParser_C.cpp.o
[ 98%] Building CXX object src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/__/MocapNETLib/gestureRecognition.cpp.o
[ 98%] Building CXX object src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/__/MocapNETLib/outputFiltering.cpp.o
[ 99%] Building CXX object src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/__/__/Tensorflow/tensorflow.cpp.o
[ 99%] Building CXX object src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/__/__/Tensorflow/tf_utils.cpp.o
[ 99%] Building CXX object src/MocapNET1/MocapNETLiveWebcamDemo/CMakeFiles/MocapNETLiveWebcamDemo.dir/__/MocapNETLib/outputFiltering.cpp.o
[ 99%] Building CXX object src/MocapNET1/MocapNETLiveWebcamDemo/CMakeFiles/MocapNETLiveWebcamDemo.dir/utilities.cpp.o
[100%] Building CXX object src/MocapNET1/MocapNETLiveWebcamDemo/CMakeFiles/MocapNETLiveWebcamDemo.dir/__/__/Tensorflow/tensorflow.cpp.o
[100%] Linking CXX executable ../../../../MocapNETJSON
[100%] Building CXX object src/MocapNET1/MocapNETLiveWebcamDemo/CMakeFiles/MocapNETLiveWebcamDemo.dir/__/__/Tensorflow/tf_utils.cpp.o
[100%] Built target MocapNETBenchmark
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow.so: .dynsym local symbol at index 3 (>= sh_info of 3)
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 3 (>= sh_info of 3)
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 4 (>= sh_info of 3)
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 5 (>= sh_info of 3)
[100%] Built target MocapNETJSON
[100%] Linking CXX executable ../../../../MocapNETLiveWebcamDemo
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow.so: .dynsym local symbol at index 3 (>= sh_info of 3)
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 3 (>= sh_info of 3)
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 4 (>= sh_info of 3)
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow_framework.so: .dynsym local symbol at index 5 (>= sh_info of 3)
[100%] Built target MocapNETLiveWebcamDemo

Missing "handsup.bvh"

Hello. Thank you for sharing the excellent work!
I was converting OpenPose JSON to BVH
and noticed that "handsup.bvh" is missing from the dataset directory.
(Still, the conversion seems to succeed without it.)
Thanks!

TensorFlow/Keras code for training on custom dataset

Hi, I want to use your MocapNET architecture on my custom dataset.
Your paper says that you trained the model using TensorFlow/Keras, but I am having a hard time finding out how the network architecture is actually created.
Is the training part of the code (data preprocessing / ensemble / SNN architecture / LR scheduling, etc.) available?
Thanks a lot

command with IK not working / flipped hip rotation output

If I run the following command,
./MocapNET2LiveWebcamDemo --novisualization --from shuffle.webm --ik 0.01 15 40

it gives me the following error:
Visualization disabled
Incorrect number of arguments, 4 required ( @ --ik )..

Also, I ran:
./MocapNET2LiveWebcamDemo --novisualization --from sample.mp4 --openpose

for the video mentioned, and my output file is not as good as the sample output. Is there something extra that needs to be done for refinement?
here is my output file:
https://drive.google.com/file/d/1_49-f8K3uohBCJjK6ElbuCFfrzkFzRla/view?usp=sharing

I am using Ubuntu on Windows. Thank you.

Why is the motion capture applied to the model not smooth?

In your video example of using MakeHuman and Blender and applying out.bvh, the model's motion is not very smooth. It seems to be moving/jumping around.

I created the same animation using your example with OpenPose data and got the same results.

What am I doing wrong?

MocapNET2CSV Could not connect: Connection refused

Hi,

I am running Ubuntu 18.04 via VirtualBox in Windows 10.

When I run MocapNET2CSV, I get the following error:

loadCalibration (null), color.calib
Failed @ loadCalibration (null), color.calib
Unable to init server: Could not connect: Connection refused

Any hints on how to solve that? Thanks.

CMake error 1 when running initialize.sh

nvidia@nvidia-desktop:~/Desktop/CONVSYS/MocapNET-master$ ./initialize.sh
Generating shortcut
CMU BVH datasets appear to have been downloaded..
mkdir: cannot create directory ‘combinedModel’: File exists
mkdir: cannot create directory ‘mode3’: File exists
mkdir: cannot create directory ‘mode5’: File exists
Downloading Models for mode 3/quality setting 1.0
mkdir: cannot create directory ‘1.0’: File exists
Downloading Models for mode 3/quality setting 2.0
mkdir: cannot create directory ‘2.0’: File exists
Downloading 2D Joint Estimator models
Found a local tensorflow installation, not altering anything
RGBDAcquisition appears to already exist ..
AmmarServer appears to already exist ..
Now to try and build MocapNET..
mkdir: cannot create directory ‘build’: File exists
CMake Error: The current CMakeCache.txt directory /home/nvidia/Desktop/CONVSYS/MocapNET-master/build/CMakeCache.txt is different than the directory /home/nvidia/Desktop/MocapNET-master/build where CMakeCache.txt was created. This may result in binaries being created in the wrong place. If you are not sure, reedit the CMakeCache.txt
CMake Error: The source "/home/nvidia/Desktop/CONVSYS/MocapNET-master/CMakeLists.txt" does not match the source "/home/nvidia/Desktop/MocapNET-master/CMakeLists.txt" used to generate cache. Re-run cmake with a different source directory.
CMake Error: The source directory "/home/nvidia/Desktop/MocapNET-master" does not exist.
Specify --help for usage, or press the help button on the CMake GUI.
Makefile:272: recipe for target 'cmake_check_build_system' failed
make: *** [cmake_check_build_system] Error 1
#####################################################################
I am getting this error while running initialize.sh; please recommend a solution.
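The CMake messages above indicate a stale build/CMakeCache.txt left over from the old checkout path (/home/nvidia/Desktop/MocapNET-master). Deleting the build directory and re-running initialize.sh from the new location usually clears this. A minimal self-contained sketch of the cache-reset step:

```shell
# Simulate a stale cache left behind after moving the source tree, then reset it.
mkdir -p build
touch build/CMakeCache.txt       # stale cache pointing at the old source directory

rm -rf build                     # remove the entire build tree
mkdir build                      # fresh directory; cmake / initialize.sh regenerates it

ls build                         # empty: no CMakeCache.txt remains
```

In the actual checkout this would be `rm -rf build` inside MocapNET-master, followed by `./initialize.sh` again.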

Extending towards HRNet as a 2D joint detector

Hi, first of all, great work. I was wondering if MocapNET could be extended to use HRNet, since it is supposed to be highly accurate. I think it is possible to dump a per-frame JSON file for the keypoints; it is based on the COCO keypoints. Here is a link to an implementation:
simpleHRNET

There is a demo script here: demo_script

The keypoints are output here: keypoints
The keypoint array has shape Nx17x3, where N is the number of persons. Please let me know what you think about it.

No OpenCV

Hello, I'm attempting to install MocapNET on my system. After figuring out all the syntax, going through the tutorials, and finally running the initialize.sh script, MocapNET is not installing because OpenCV is missing. I don't think I'm supposed to install that myself, since the front page lists nothing besides initialize.sh as required, but regardless it doesn't seem to be working. Here are my log files.
CMakeOutput.log
CMakeError.log

Documentation - Pretrained models

Could you please document your pre-trained models, specifically what they expect as input?

I'm looking for a model that can minimally do body and hands, which I did see in your video.

But I am also looking for a model that can do body + hands + face. Do you have something pretrained like that?

2D-3D mapping part

Hi,

Could you point me to where the 2D-to-3D mapping is performed in the code?

Thanks

Could not find COCO 2D skeleton

I ran OpenPose with just the face option. When trying to convert to BVH, I get the error: Could not find COCO 2D skeleton in json/colorFrame_0_000000000000_keypoints.json
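For context: running OpenPose with only the face tracker disables the body detector, so each person's `pose_keypoints_2d` array comes out empty and the converter finds no COCO skeleton. A small sketch (OpenPose's JSON layout assumed) for checking a frame before attempting conversion:

```python
import json

def has_body_skeleton(json_text):
    """Return True if any detected person carries non-empty body keypoints.

    Assumes the OpenPose output layout: {"people": [{"pose_keypoints_2d":
    [x1, y1, c1, ...]}, ...]}. Frames written with only the face tracker
    enabled leave "pose_keypoints_2d" empty, reproducing the error above.
    """
    data = json.loads(json_text)
    return any(p.get("pose_keypoints_2d") for p in data.get("people", []))
```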

libopencv_freetype.so.3.2.0: error: undefined reference / Project Not Working on Google Colab

Hi.
So, referring to a previous issue ( #33 ), I tried to run this on Google Colab.
Here is the link to my Google Colab notebook:
https://colab.research.google.com/drive/1-j-EhXFmeKO7EONP-DiHIEGomu20n9vn?usp=sharing

However, I ran into another problem while running the initialization scripts.


Scanning dependencies of target ReshapeCSV
[ 37%] Building CXX object src/MocapNET2/reshapeCSVFileToMakeClassification/CMakeFiles/ReshapeCSV.dir/reshapeCSV.cpp.o
[ 37%] Linking CXX executable ../../../../ReshapeCSV
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'FT_Set_Pixel_Sizes'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'hb_shape'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'FT_Load_Glyph'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'hb_buffer_guess_segment_properties'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'hb_ft_font_create'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'FT_Outline_Transform'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'hb_buffer_destroy'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'FT_Init_FreeType'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'FT_Outline_Decompose'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'FT_Outline_Translate'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'hb_font_destroy'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'FT_Done_FreeType'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'hb_buffer_add_utf8'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'hb_buffer_get_glyph_infos'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'FT_New_Face'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'FT_Render_Glyph'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'FT_Done_Face'
/usr/lib/x86_64-linux-gnu/libopencv_freetype.so.3.2.0: error: undefined reference to 'hb_buffer_create'
collect2: error: ld returned 1 exit status
src/MocapNET2/reshapeCSVFileToMakeClassification/CMakeFiles/ReshapeCSV.dir/build.make:127: recipe for target '../ReshapeCSV' failed
make[2]: *** [../ReshapeCSV] Error 1
CMakeFiles/Makefile2:145: recipe for target 'src/MocapNET2/reshapeCSVFileToMakeClassification/CMakeFiles/ReshapeCSV.dir/all' failed
make[1]: *** [src/MocapNET2/reshapeCSVFileToMakeClassification/CMakeFiles/ReshapeCSV.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

Please take a look at this.
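A note on the errors themselves: the undefined FT_* and hb_* symbols belong to FreeType and HarfBuzz, libraries that libopencv_freetype depends on but that never make it onto the link line. One hedged sketch of a fix (the package names libfreetype6-dev / libharfbuzz-dev and the CMake wiring below are assumptions, not the project's actual build files) is to install the dev packages and link the libraries explicitly:

```cmake
# Hypothetical CMakeLists.txt fragment: discover FreeType and HarfBuzz via
# pkg-config and add them to the failing target's link line. The target name
# ReshapeCSV is taken from the build log above.
find_package(PkgConfig REQUIRED)
pkg_check_modules(FREETYPE REQUIRED freetype2)
pkg_check_modules(HARFBUZZ REQUIRED harfbuzz)
target_link_libraries(ReshapeCSV ${FREETYPE_LIBRARIES} ${HARFBUZZ_LIBRARIES})
```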

P. S.

Thanks for this great repo and for sharing it with the world. It's simply awesome.

*** No rule to make target '../dependencies/RGBDAcquisition/opengl_acquisition_shared_library/opengl_depth_and_color_renderer/src/Library/MotionCaptureLoader/export/bvh_to_c.c'

After recent additions to the BVH code of RGBDAcquisition, there may be a compilation error while building the project:

make[2]: *** No rule to make target '../dependencies/RGBDAcquisition/opengl_acquisition_shared_library/opengl_depth_and_color_renderer/src/Library/MotionCaptureLoader/export/bvh_to_c.c', needed by 'src/MocapNET2/MocapNETLib2/CMakeFiles/MocapNETLib2.dir/__/__/__/dependencies/RGBDAcquisition/opengl_acquisition_shared_library/opengl_depth_and_color_renderer/src/Library/MotionCaptureLoader/export/bvh_to_c.c.o'.  Stop.
make[1]: *** [CMakeFiles/Makefile2:636: src/MocapNET2/MocapNETLib2/CMakeFiles/MocapNETLib2.dir/all] Error 2
make: *** [Makefile:84: all] Error 2

Please always use the update.sh script when updating, to make sure you have the latest version of RGBDAcquisition, or manually cd to dependencies/RGBDAcquisition and git pull to get the latest version of the BVH code.

The BVH code has been overhauled to also understand quaternions, so I hope the inconvenience will be worth it in the long run :)
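The two update paths above can be sketched as a single helper (repository layout assumed, with update.sh and dependencies/RGBDAcquisition as named in this project):

```shell
# Prefer the project's update.sh; fall back to pulling the BVH dependency
# (dependencies/RGBDAcquisition) directly when the script is absent.
update_mocapnet_deps() {
  repo_root="$1"
  if [ -x "$repo_root/update.sh" ]; then
    (cd "$repo_root" && ./update.sh)
  else
    (cd "$repo_root/dependencies/RGBDAcquisition" && git pull)
  fi
}
# usage: update_mocapnet_deps /path/to/MocapNET
```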

Demo not working on Google Colab

Tried on a GPU runtime

Colab link:
https://colab.research.google.com/drive/19UM6NAcmEeUUbSkjlJniWSs9bc4V-c4s?usp=sharing

Command:

./MocapNET2LiveWebcamDemo --from shuffle.webm --openpose --frames 375 --delay 1000 --novisualization

Output:

CPU :  Intel(R) Xeon(R) CPU @ 2.00GHz
sh: 1: lspci: not found
GPU : 
Trying to open source (shuffle.webm) 
Using BVH codebase version 0.43
buildOpenGLProjectionForIntrinsics: img(1920x1080) fx 582.18, fy 582.53, cx 960.00, cy 540.00

Number of Values Per Frame: 498
Frames: 1(1) / Frame Time : 0.0400
BVH subsystem initialized using dataset/headerWithHeadAndOneMotion.bvh that contains 1 frames ( 498 motions each )
Initializing IK code..
Body Problem : 




The IK problem we want to solve has 2 groups of subproblems
It is also ultimately divided into 7 kinematic chains
Chain 0 has 8 parts : jID(hip/0)->mID(0 to 2) jID(hip/0)->mID(3 to 5) jID(neck/3)->EndEffector jID(head/5)->EndEffector jID(rShldr/105)->EndEffector jID(lShldr/136)->EndEffector jID(rThigh/167)->EndEffector jID(lThigh/190)->EndEffector 
Chain 1 has 4 parts : jID(chest/2)->mID(9 to 11) jID(neck/3)->EndEffector jID(rShldr/105)->EndEffector jID(lForeArm/137)->EndEffector 
Chain 2 has 5 parts : jID(neck/3)->mID(12 to 14) jID(neck1/4)->mID(15 to 17) jID(head/5)->mID(18 to 20) jID(eye.l/70)->EndEffector jID(eye.r/78)->EndEffector 
Chain 3 has 3 parts : jID(rShldr/105)->mID(237 to 239) jID(rForeArm/106)->mID(240 to 242) jID(rHand/107)->EndEffector 
Chain 4 has 3 parts : jID(lShldr/136)->mID(315 to 317) jID(lForeArm/137)->mID(318 to 320) jID(lHand/138)->EndEffector 
Chain 5 has 5 parts : jID(rThigh/167)->mID(393 to 395) jID(rShin/168)->mID(396 to 398) jID(rFoot/169)->mID(399 to 401) jID(EndSite_toe1-2.R/172)->EndEffector jID(EndSite_toe5-3.R/188)->EndEffector 
Chain 6 has 5 parts : jID(lThigh/190)->mID(447 to 449) jID(lShin/191)->mID(450 to 452) jID(lFoot/192)->mID(453 to 455) jID(EndSite_toe1-2.L/195)->EndEffector jID(EndSite_toe5-3.L/211)->EndEffector 
BVH code initalization successfull..
Trying to set thread realtime prio = 99 
Failed setting thread realtime priority
LoadGraph dataset/combinedModel/openpose_model.pb using TensorFlow Version: 1.14.0
 Input Tensor for dataset/combinedModel/openpose_model.pb
Tensor: input_1 inputs: 0
 outputs: 1
 Output Tensor for dataset/combinedModel/openpose_model.pb
Tensor: k2tfout_1 inputs: 1
 outputs: 1
2020-10-15 14:35:58.222583: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2020-10-15 14:35:58.227891: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2000140000 Hz
2020-10-15 14:35:58.228125: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x557bd7267f80 executing computations on platform Host. Devices:
2020-10-15 14:35:58.228159: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2020-10-15 14:35:58.229927: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2020-10-15 14:35:58.254401: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-15 14:35:58.255191: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: 
name: Tesla P4 major: 6 minor: 1 memoryClockRate(GHz): 1.1135
pciBusID: 0000:00:04.0
2020-10-15 14:35:58.255727: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2020-10-15 14:35:58.257197: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2020-10-15 14:35:58.258389: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
2020-10-15 14:35:58.258814: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
2020-10-15 14:35:58.260154: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
2020-10-15 14:35:58.261347: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
2020-10-15 14:35:58.264441: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2020-10-15 14:35:58.264549: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-15 14:35:58.265225: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-15 14:35:58.265830: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2020-10-15 14:35:58.265891: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2020-10-15 14:35:58.353876: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-15 14:35:58.353919: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0 
2020-10-15 14:35:58.353933: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N 
2020-10-15 14:35:58.354113: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-15 14:35:58.354554: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-15 14:35:58.354981: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-15 14:35:58.355325: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:40] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2020-10-15 14:35:58.355367: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7123 MB memory) -> physical GPU (device: 0, name: Tesla P4, pci bus id: 0000:00:04.0, compute capability: 6.1)
MocapNET2 Initializing..
Trying to set thread realtime prio = 99 
Failed setting thread realtime priority
Map will be initialized from file  
Initializing from  .. 
 artifacts ok
Loading Gesture 0 -> dataset/gestures/help.bvh  : 
Number of Values Per Frame: 498
Frames: 63(63) / Frame Time : 0.0400
Gesture dataset/gestures/help.bvh has 63 frames with 498 fields each
Joint 7/abdomen_Xrotation is going to be used for gesture #0
Joint 8/abdomen_Yrotation is going to be used for gesture #0
Joint 12/neck_Zrotation is going to be used for gesture #0
Joint 13/neck_Xrotation is going to be used for gesture #0
Joint 14/neck_Yrotation is going to be used for gesture #0
Joint 15/neck1_Zrotation is going to be used for gesture #0
Joint 16/neck1_Xrotation is going to be used for gesture #0
Joint 17/neck1_Yrotation is going to be used for gesture #0
Joint 18/head_Zrotation is going to be used for gesture #0
Joint 19/head_Xrotation is going to be used for gesture #0
Joint 20/head_Yrotation is going to be used for gesture #0
Joint 21/__jaw_Zrotation is going to be used for gesture #0
Joint 22/__jaw_Xrotation is going to be used for gesture #0
Joint 24/jaw_Zrotation is going to be used for gesture #0
Joint 25/jaw_Xrotation is going to be used for gesture #0
Joint 26/jaw_Yrotation is going to be used for gesture #0
Joint 27/special04_Zrotation is going to be used for gesture #0
Joint 28/special04_Xrotation is going to be used for gesture #0
Joint 29/special04_Yrotation is going to be used for gesture #0
Joint 30/oris02_Zrotation is going to be used for gesture #0
Joint 31/oris02_Xrotation is going to be used for gesture #0
Joint 32/oris02_Yrotation is going to be used for gesture #0
Joint 33/oris01_Zrotation is going to be used for gesture #0
Joint 34/oris01_Xrotation is going to be used for gesture #0
Joint 35/oris01_Yrotation is going to be used for gesture #0
Joint 36/oris06.l_Zrotation is going to be used for gesture #0
Joint 37/oris06.l_Xrotation is going to be used for gesture #0
Joint 38/oris06.l_Yrotation is going to be used for gesture #0
Joint 40/oris07.l_Xrotation is going to be used for gesture #0
Joint 41/oris07.l_Yrotation is going to be used for gesture #0
Joint 44/oris06.r_Yrotation is going to be used for gesture #0
Joint 45/oris07.r_Zrotation is going to be used for gesture #0
Joint 46/oris07.r_Xrotation is going to be used for gesture #0
Joint 47/oris07.r_Yrotation is going to be used for gesture #0
Joint 48/tongue00_Zrotation is going to be used for gesture #0
Joint 49/tongue00_Xrotation is going to be used for gesture #0
Joint 50/tongue00_Yrotation is going to be used for gesture #0
Joint 51/tongue01_Zrotation is going to be used for gesture #0
Joint 52/tongue01_Xrotation is going to be used for gesture #0
Joint 54/tongue02_Zrotation is going to be used for gesture #0
Joint 55/tongue02_Xrotation is going to be used for gesture #0
Joint 56/tongue02_Yrotation is going to be used for gesture #0
Joint 57/tongue03_Zrotation is going to be used for gesture #0
Joint 58/tongue03_Xrotation is going to be used for gesture #0
Joint 59/tongue03_Yrotation is going to be used for gesture #0
Joint 60/__tongue04_Zrotation is going to be used for gesture #0
Joint 61/__tongue04_Xrotation is going to be used for gesture #0
Joint 62/__tongue04_Yrotation is going to be used for gesture #0
Joint 63/tongue04_Zrotation is going to be used for gesture #0
Joint 64/tongue04_Xrotation is going to be used for gesture #0
Joint 65/tongue04_Yrotation is going to be used for gesture #0
Joint 66/tongue07.l_Zrotation is going to be used for gesture #0
Joint 67/tongue07.l_Xrotation is going to be used for gesture #0
Joint 68/tongue07.l_Yrotation is going to be used for gesture #0
Joint 70/tongue07.r_Xrotation is going to be used for gesture #0
Joint 71/tongue07.r_Yrotation is going to be used for gesture #0
Joint 74/tongue06.l_Yrotation is going to be used for gesture #0
Joint 75/tongue06.r_Zrotation is going to be used for gesture #0
Joint 76/tongue06.r_Xrotation is going to be used for gesture #0
Joint 77/tongue06.r_Yrotation is going to be used for gesture #0
Joint 78/tongue05.l_Zrotation is going to be used for gesture #0
Joint 79/tongue05.l_Xrotation is going to be used for gesture #0
Joint 80/tongue05.l_Yrotation is going to be used for gesture #0
Joint 81/tongue05.r_Zrotation is going to be used for gesture #0
Joint 82/tongue05.r_Xrotation is going to be used for gesture #0
Joint 84/__levator02.l_Zrotation is going to be used for gesture #0
Joint 85/__levator02.l_Xrotation is going to be used for gesture #0
Joint 86/__levator02.l_Yrotation is going to be used for gesture #0
Joint 88/levator02.l_Xrotation is going to be used for gesture #0
Joint 91/levator03.l_Xrotation is going to be used for gesture #0
Joint 93/levator04.l_Zrotation is going to be used for gesture #0
Joint 94/levator04.l_Xrotation is going to be used for gesture #0
Joint 95/levator04.l_Yrotation is going to be used for gesture #0
Joint 96/levator05.l_Zrotation is going to be used for gesture #0
Joint 97/levator05.l_Xrotation is going to be used for gesture #0
Joint 98/levator05.l_Yrotation is going to be used for gesture #0
Joint 100/__levator02.r_Xrotation is going to be used for gesture #0
Joint 101/__levator02.r_Yrotation is going to be used for gesture #0
Joint 104/levator02.r_Yrotation is going to be used for gesture #0
Joint 105/levator03.r_Zrotation is going to be used for gesture #0
Joint 106/levator03.r_Xrotation is going to be used for gesture #0
Joint 107/levator03.r_Yrotation is going to be used for gesture #0
Joint 108/levator04.r_Zrotation is going to be used for gesture #0
Joint 109/levator04.r_Xrotation is going to be used for gesture #0
Joint 110/levator04.r_Yrotation is going to be used for gesture #0
Joint 111/levator05.r_Zrotation is going to be used for gesture #0
Joint 112/levator05.r_Xrotation is going to be used for gesture #0
Joint 113/levator05.r_Yrotation is going to be used for gesture #0
Joint 114/__special01_Zrotation is going to be used for gesture #0
Joint 115/__special01_Xrotation is going to be used for gesture #0
Joint 116/__special01_Yrotation is going to be used for gesture #0
Joint 118/special01_Xrotation is going to be used for gesture #0
Joint 121/oris04.l_Xrotation is going to be used for gesture #0
Joint 123/oris03.l_Zrotation is going to be used for gesture #0
Joint 124/oris03.l_Xrotation is going to be used for gesture #0
Joint 125/oris03.l_Yrotation is going to be used for gesture #0
Joint 126/oris04.r_Zrotation is going to be used for gesture #0
Joint 127/oris04.r_Xrotation is going to be used for gesture #0
Joint 128/oris04.r_Yrotation is going to be used for gesture #0
Joint 130/oris03.r_Xrotation is going to be used for gesture #0
Joint 131/oris03.r_Yrotation is going to be used for gesture #0
Joint 134/oris06_Yrotation is going to be used for gesture #0
Joint 135/oris05_Zrotation is going to be used for gesture #0
Joint 136/oris05_Xrotation is going to be used for gesture #0
Joint 137/oris05_Yrotation is going to be used for gesture #0
Joint 138/__special03_Zrotation is going to be used for gesture #0
Joint 139/__special03_Xrotation is going to be used for gesture #0
Joint 140/__special03_Yrotation is going to be used for gesture #0
Joint 141/special03_Zrotation is going to be used for gesture #0
Joint 142/special03_Xrotation is going to be used for gesture #0
Joint 143/special03_Yrotation is going to be used for gesture #0
Joint 144/__levator06.l_Zrotation is going to be used for gesture #0
Joint 145/__levator06.l_Xrotation is going to be used for gesture #0
Joint 146/__levator06.l_Yrotation is going to be used for gesture #0
Joint 148/levator06.l_Xrotation is going to be used for gesture #0
Joint 150/__levator06.r_Zrotation is going to be used for gesture #0
Joint 151/__levator06.r_Xrotation is going to be used for gesture #0
Joint 152/__levator06.r_Yrotation is going to be used for gesture #0
Joint 153/levator06.r_Zrotation is going to be used for gesture #0
Joint 154/levator06.r_Xrotation is going to be used for gesture #0
Joint 155/levator06.r_Yrotation is going to be used for gesture #0
Joint 156/special06.l_Zrotation is going to be used for gesture #0
Joint 157/special06.l_Xrotation is going to be used for gesture #0
Joint 158/special06.l_Yrotation is going to be used for gesture #0
Joint 160/special05.l_Xrotation is going to be used for gesture #0
Joint 161/special05.l_Yrotation is going to be used for gesture #0
Joint 164/eye.l_Yrotation is going to be used for gesture #0
Joint 169/orbicularis04.l_Xrotation is going to be used for gesture #0
Joint 170/orbicularis04.l_Yrotation is going to be used for gesture #0
Joint 171/special06.r_Zrotation is going to be used for gesture #0
Joint 172/special06.r_Xrotation is going to be used for gesture #0
Joint 173/special06.r_Yrotation is going to be used for gesture #0
Joint 174/special05.r_Zrotation is going to be used for gesture #0
Joint 175/special05.r_Xrotation is going to be used for gesture #0
Joint 176/special05.r_Yrotation is going to be used for gesture #0
Joint 178/eye.r_Xrotation is going to be used for gesture #0
Joint 180/orbicularis03.r_Zrotation is going to be used for gesture #0
Joint 181/orbicularis03.r_Xrotation is going to be used for gesture #0
Joint 182/orbicularis03.r_Yrotation is going to be used for gesture #0
Joint 183/orbicularis04.r_Zrotation is going to be used for gesture #0
Joint 184/orbicularis04.r_Xrotation is going to be used for gesture #0
Joint 185/orbicularis04.r_Yrotation is going to be used for gesture #0
Joint 186/__temporalis01.l_Zrotation is going to be used for gesture #0
Joint 187/__temporalis01.l_Xrotation is going to be used for gesture #0
Joint 188/__temporalis01.l_Yrotation is going to be used for gesture #0
Joint 189/temporalis01.l_Zrotation is going to be used for gesture #0
Joint 190/temporalis01.l_Xrotation is going to be used for gesture #0
Joint 191/temporalis01.l_Yrotation is going to be used for gesture #0
Joint 193/oculi02.l_Xrotation is going to be used for gesture #0
Joint 196/oculi01.l_Xrotation is going to be used for gesture #0
Joint 197/oculi01.l_Yrotation is going to be used for gesture #0
Joint 200/__temporalis01.r_Yrotation is going to be used for gesture #0
Joint 201/temporalis01.r_Zrotation is going to be used for gesture #0
Joint 202/temporalis01.r_Xrotation is going to be used for gesture #0
Joint 203/temporalis01.r_Yrotation is going to be used for gesture #0
Joint 204/oculi02.r_Zrotation is going to be used for gesture #0
Joint 205/oculi02.r_Xrotation is going to be used for gesture #0
Joint 206/oculi02.r_Yrotation is going to be used for gesture #0
Joint 207/oculi01.r_Zrotation is going to be used for gesture #0
Joint 208/oculi01.r_Xrotation is going to be used for gesture #0
Joint 210/__temporalis02.l_Zrotation is going to be used for gesture #0
Joint 211/__temporalis02.l_Xrotation is going to be used for gesture #0
Joint 212/__temporalis02.l_Yrotation is going to be used for gesture #0
Joint 213/temporalis02.l_Zrotation is going to be used for gesture #0
Joint 214/temporalis02.l_Xrotation is going to be used for gesture #0
Joint 215/temporalis02.l_Yrotation is going to be used for gesture #0
Joint 216/risorius02.l_Zrotation is going to be used for gesture #0
Joint 217/risorius02.l_Xrotation is going to be used for gesture #0
Joint 218/risorius02.l_Yrotation is going to be used for gesture #0
Joint 219/risorius03.l_Zrotation is going to be used for gesture #0
Joint 220/risorius03.l_Xrotation is going to be used for gesture #0
Joint 221/risorius03.l_Yrotation is going to be used for gesture #0
Joint 222/__temporalis02.r_Zrotation is going to be used for gesture #0
Joint 223/__temporalis02.r_Xrotation is going to be used for gesture #0
Joint 224/__temporalis02.r_Yrotation is going to be used for gesture #0
Joint 225/temporalis02.r_Zrotation is going to be used for gesture #0
Joint 226/temporalis02.r_Xrotation is going to be used for gesture #0
Joint 227/temporalis02.r_Yrotation is going to be used for gesture #0
Joint 230/risorius02.r_Yrotation is going to be used for gesture #0
Joint 231/risorius03.r_Zrotation is going to be used for gesture #0
Joint 232/risorius03.r_Xrotation is going to be used for gesture #0
Joint 233/risorius03.r_Yrotation is going to be used for gesture #0
Joint 234/rcollar_Zrotation is going to be used for gesture #0
Joint 235/rcollar_Xrotation is going to be used for gesture #0
Joint 236/rcollar_Yrotation is going to be used for gesture #0
Joint 237/rshoulder_Zrotation is going to be used for gesture #0
Joint 238/rshoulder_Xrotation is going to be used for gesture #0
Joint 239/rshoulder_Yrotation is going to be used for gesture #0
Joint 240/relbow_Zrotation is going to be used for gesture #0
Joint 241/relbow_Xrotation is going to be used for gesture #0
Joint 242/relbow_Yrotation is going to be used for gesture #0
Joint 243/rhand_Zrotation is going to be used for gesture #0
Joint 244/rhand_Xrotation is going to be used for gesture #0
Joint 245/rhand_Yrotation is going to be used for gesture #0
Joint 313/lcollar_Xrotation is going to be used for gesture #0
Joint 315/lshoulder_Zrotation is going to be used for gesture #0
Joint 316/lshoulder_Xrotation is going to be used for gesture #0
Joint 317/lshoulder_Yrotation is going to be used for gesture #0
Joint 318/lelbow_Zrotation is going to be used for gesture #0
Joint 319/lelbow_Xrotation is going to be used for gesture #0
Joint 320/lelbow_Yrotation is going to be used for gesture #0
Joint 322/lhand_Xrotation is going to be used for gesture #0
Joint 323/lhand_Yrotation is going to be used for gesture #0
Joint 392/rbuttock_Yrotation is going to be used for gesture #0
Joint 393/rhip_Zrotation is going to be used for gesture #0
Joint 394/rhip_Xrotation is going to be used for gesture #0
Joint 395/rhip_Yrotation is going to be used for gesture #0
Joint 396/rknee_Zrotation is going to be used for gesture #0
Joint 397/rknee_Xrotation is going to be used for gesture #0
Joint 398/rknee_Yrotation is going to be used for gesture #0
Joint 399/rfoot_Zrotation is going to be used for gesture #0
Joint 400/rfoot_Xrotation is going to be used for gesture #0
Joint 401/rfoot_Yrotation is going to be used for gesture #0
Joint 444/lbuttock_Zrotation is going to be used for gesture #0
Joint 445/lbuttock_Xrotation is going to be used for gesture #0
Joint 446/lbuttock_Yrotation is going to be used for gesture #0
Joint 448/lhip_Xrotation is going to be used for gesture #0
Joint 450/lknee_Zrotation is going to be used for gesture #0
Joint 451/lknee_Xrotation is going to be used for gesture #0
Joint 452/lknee_Yrotation is going to be used for gesture #0
Joint 453/lfoot_Zrotation is going to be used for gesture #0
Joint 454/lfoot_Xrotation is going to be used for gesture #0
Joint 455/lfoot_Yrotation is going to be used for gesture #0
Success , loaded 63 frames
Loading Gesture 1 -> dataset/gestures/push.bvh  : 
Number of Values Per Frame: 498
Frames: 46(46) / Frame Time : 0.0400
Gesture dataset/gestures/push.bvh has 46 frames with 498 fields each
Joint 237/rshoulder_Zrotation is going to be used for gesture #1
Joint 238/rshoulder_Xrotation is going to be used for gesture #1
Joint 239/rshoulder_Yrotation is going to be used for gesture #1
Joint 240/relbow_Zrotation is going to be used for gesture #1
Joint 241/relbow_Xrotation is going to be used for gesture #1
Joint 242/relbow_Yrotation is going to be used for gesture #1
Joint 315/lshoulder_Zrotation is going to be used for gesture #1
Joint 316/lshoulder_Xrotation is going to be used for gesture #1
Joint 317/lshoulder_Yrotation is going to be used for gesture #1
Joint 318/lelbow_Zrotation is going to be used for gesture #1
Joint 319/lelbow_Xrotation is going to be used for gesture #1
Joint 320/lelbow_Yrotation is going to be used for gesture #1
Success , loaded 46 frames
Loading Gesture 2 -> dataset/gestures/lefthandcircle.bvh  : 
Number of Values Per Frame: 498
Frames: 29(29) / Frame Time : 0.0400
Gesture dataset/gestures/lefthandcircle.bvh has 29 frames with 498 fields each
Joint 315/lshoulder_Zrotation is going to be used for gesture #2
Joint 316/lshoulder_Xrotation is going to be used for gesture #2
Joint 317/lshoulder_Yrotation is going to be used for gesture #2
Joint 318/lelbow_Zrotation is going to be used for gesture #2
Joint 319/lelbow_Xrotation is going to be used for gesture #2
Joint 320/lelbow_Yrotation is going to be used for gesture #2
Success , loaded 29 frames
Loading Gesture 3 -> dataset/gestures/righthandcircle.bvh  : 
Number of Values Per Frame: 498
Frames: 40(40) / Frame Time : 0.0400
Gesture dataset/gestures/righthandcircle.bvh has 40 frames with 498 fields each
Joint 237/rshoulder_Zrotation is going to be used for gesture #3
Joint 238/rshoulder_Xrotation is going to be used for gesture #3
Joint 239/rshoulder_Yrotation is going to be used for gesture #3
Joint 240/relbow_Zrotation is going to be used for gesture #3
Joint 241/relbow_Xrotation is going to be used for gesture #3
Joint 242/relbow_Yrotation is going to be used for gesture #3
Success , loaded 40 frames
Loading Gesture 4 -> dataset/gestures/waveleft.bvh  : 
Number of Values Per Frame: 498
Frames: 27(27) / Frame Time : 0.0400
Gesture dataset/gestures/waveleft.bvh has 27 frames with 498 fields each
Joint 316/lshoulder_Xrotation is going to be used for gesture #4
Joint 317/lshoulder_Yrotation is going to be used for gesture #4
Joint 319/lelbow_Xrotation is going to be used for gesture #4
Joint 320/lelbow_Yrotation is going to be used for gesture #4
Success , loaded 27 frames
Loading Gesture 5 -> dataset/gestures/doubleclap.bvh  : 
Number of Values Per Frame: 498
Frames: 22(22) / Frame Time : 0.0400
Gesture dataset/gestures/doubleclap.bvh has 22 frames with 498 fields each
Joint 238/rshoulder_Xrotation is going to be used for gesture #5
Joint 239/rshoulder_Yrotation is going to be used for gesture #5
Joint 241/relbow_Xrotation is going to be used for gesture #5
Joint 242/relbow_Yrotation is going to be used for gesture #5
Joint 316/lshoulder_Xrotation is going to be used for gesture #5
Joint 317/lshoulder_Yrotation is going to be used for gesture #5
Joint 319/lelbow_Xrotation is going to be used for gesture #5
Joint 320/lelbow_Yrotation is going to be used for gesture #5
Success , loaded 22 frames
Loading Gesture 6 -> dataset/gestures/waveright.bvh  : 
Number of Values Per Frame: 498
Frames: 19(19) / Frame Time : 0.0400
Gesture dataset/gestures/waveright.bvh has 19 frames with 498 fields each
Joint 238/rshoulder_Xrotation is going to be used for gesture #6
Joint 239/rshoulder_Yrotation is going to be used for gesture #6
Joint 241/relbow_Xrotation is going to be used for gesture #6
Joint 242/relbow_Yrotation is going to be used for gesture #6
Success , loaded 19 frames
Loading Gesture 7 -> dataset/gestures/leftkick.bvh  : 
Number of Values Per Frame: 498
Frames: 25(25) / Frame Time : 0.0400
Gesture dataset/gestures/leftkick.bvh has 25 frames with 498 fields each
Joint 447/lhip_Zrotation is going to be used for gesture #7
Joint 448/lhip_Xrotation is going to be used for gesture #7
Joint 449/lhip_Yrotation is going to be used for gesture #7
Joint 450/lknee_Zrotation is going to be used for gesture #7
Joint 451/lknee_Xrotation is going to be used for gesture #7
Joint 452/lknee_Yrotation is going to be used for gesture #7
Success , loaded 25 frames
Loading Gesture 8 -> dataset/gestures/rightkick.bvh  : 
Number of Values Per Frame: 498
Frames: 26(26) / Frame Time : 0.0400
Gesture dataset/gestures/rightkick.bvh has 26 frames with 498 fields each
Joint 393/rhip_Zrotation is going to be used for gesture #8
Joint 394/rhip_Xrotation is going to be used for gesture #8
Joint 395/rhip_Yrotation is going to be used for gesture #8
Joint 396/rknee_Zrotation is going to be used for gesture #8
Joint 397/rknee_Xrotation is going to be used for gesture #8
Joint 398/rknee_Yrotation is going to be used for gesture #8
Success , loaded 26 frames
Loading Gesture 9 -> dataset/gestures/tpose.bvh  : 
Number of Values Per Frame: 498
Frames: 61(61) / Frame Time : 0.0400
Gesture dataset/gestures/tpose.bvh has 61 frames with 498 fields each
Joint 238/rshoulder_Xrotation is going to be used for gesture #9
Joint 239/rshoulder_Yrotation is going to be used for gesture #9
Joint 315/lshoulder_Zrotation is going to be used for gesture #9
Joint 316/lshoulder_Xrotation is going to be used for gesture #9
Joint 319/lelbow_Xrotation is going to be used for gesture #9
Joint 320/lelbow_Yrotation is going to be used for gesture #9
Success , loaded 61 frames
Loading Pose 0 -> dataset/poses/neutral.bvh  : Pose 0 -> dataset/poses/neutral.bvh  is corrupted
Failure
Loading Pose 1 -> dataset/poses/tpose.bvh  : Pose 1 -> dataset/poses/tpose.bvh  is corrupted
Failure
Loading Pose 2 -> dataset/poses/x.bvh  : Pose 2 -> dataset/poses/x.bvh  is corrupted
Failure
Loading Pose 3 -> dataset/poses/handsup.bvh  : Pose 3 -> dataset/poses/handsup.bvh  is corrupted
Failure
Loading Pose 4 -> dataset/poses/leftwave.bvh  : Pose 4 -> dataset/poses/leftwave.bvh  is corrupted
Failure
Loading Pose 5 -> dataset/poses/rightright.bvh  : Pose 5 -> dataset/poses/rightright.bvh  is corrupted
Failure
Loading Pose 6 -> dataset/poses/leftleft.bvh  : Pose 6 -> dataset/poses/leftleft.bvh  is corrupted
Failure
Loading Pose 7 -> dataset/poses/push.bvh  : Pose 7 -> dataset/poses/push.bvh  is corrupted
Failure
Loading Pose 8 -> dataset/poses/rightwave.bvh  : Pose 8 -> dataset/poses/rightwave.bvh  is corrupted
Failure
Failed to read recognized Poses
This is not fatal, but poses/gestures will be deactivated..
Initializing output filters : .......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
LoadGraph dataset/combinedModel/mocapnet2/mode5/1.0/categorize_upperbody_all.pb using TensorFlow Version: 1.14.0
 Input Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/categorize_upperbody_all.pb
Tensor: input_all inputs: 0
 outputs: 1
 Output Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/categorize_upperbody_all.pb
Tensor: result_all/concat inputs: 3
 outputs: 1
2020-10-15 14:35:58.396721: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-15 14:35:58.396768: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      
LoadGraph dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_front.pb using TensorFlow Version: 1.14.0
 Input Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_front.pb
Tensor: input_front inputs: 0
 outputs: 1
 Output Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_front.pb
Tensor: result_front/concat inputs: 43
 outputs: 1
2020-10-15 14:35:58.479954: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-15 14:35:58.479988: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      
LoadGraph dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_back.pb using TensorFlow Version: 1.14.0
 Input Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_back.pb
Tensor: input_back inputs: 0
 outputs: 1
 Output Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_back.pb
Tensor: result_back/concat inputs: 43
 outputs: 1
2020-10-15 14:35:58.556299: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-15 14:35:58.556343: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      
LoadGraph dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_left.pb using TensorFlow Version: 1.14.0
 Input Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_left.pb
Tensor: input_left inputs: 0
 outputs: 1
 Output Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_left.pb
Tensor: result_left/concat inputs: 43
 outputs: 1
2020-10-15 14:35:58.631395: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-15 14:35:58.631439: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      
LoadGraph dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_right.pb using TensorFlow Version: 1.14.0
 Input Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_right.pb
Tensor: input_right inputs: 0
 outputs: 1
 Output Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_right.pb
Tensor: result_right/concat inputs: 43
 outputs: 1
2020-10-15 14:35:58.709670: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-15 14:35:58.709707: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      
Caching networks after initialization to be ready for use..
2020-10-15 14:35:58.750881: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
Caching model 0 (dataset/combinedModel/mocapnet2/mode5/1.0/categorize_upperbody_all.pb) was successful
Caching model 1 (dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_front.pb) was successful
Caching model 2 (dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_back.pb) was successful
Caching model 3 (dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_left.pb) was successful
Caching model 4 (dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_right.pb) was successful
LoadGraph dataset/combinedModel/mocapnet2/mode5/1.0/categorize_lowerbody_all.pb using TensorFlow Version: 1.14.0
 Input Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/categorize_lowerbody_all.pb
Tensor: input_all inputs: 0
 outputs: 1
 Output Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/categorize_lowerbody_all.pb
Tensor: result_all/concat inputs: 3
 outputs: 1
2020-10-15 14:36:03.779817: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-15 14:36:03.779861: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      
LoadGraph dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_front.pb using TensorFlow Version: 1.14.0
 Input Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_front.pb
Tensor: input_front inputs: 0
 outputs: 1
 Output Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_front.pb
Tensor: result_front/concat inputs: 46
 outputs: 1
2020-10-15 14:36:03.857417: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-15 14:36:03.857466: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      
LoadGraph dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_back.pb using TensorFlow Version: 1.14.0
 Input Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_back.pb
Tensor: input_back inputs: 0
 outputs: 1
 Output Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_back.pb
Tensor: result_back/concat inputs: 46
 outputs: 1
2020-10-15 14:36:03.933404: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-15 14:36:03.933455: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      
LoadGraph dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_left.pb using TensorFlow Version: 1.14.0
 Input Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_left.pb
Tensor: input_left inputs: 0
 outputs: 1
 Output Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_left.pb
Tensor: result_left/concat inputs: 46
 outputs: 1
2020-10-15 14:36:03.994146: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-15 14:36:03.994192: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      
LoadGraph dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_right.pb using TensorFlow Version: 1.14.0
 Input Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_right.pb
Tensor: input_right inputs: 0
 outputs: 1
 Output Tensor for dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_right.pb
Tensor: result_right/concat inputs: 46
 outputs: 1
2020-10-15 14:36:04.061722: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-15 14:36:04.061785: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      
Caching networks after initialization to be ready for use..
Caching model 0 (dataset/combinedModel/mocapnet2/mode5/1.0/categorize_lowerbody_all.pb) was successful
Caching model 1 (dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_front.pb) was successful
Caching model 2 (dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_back.pb) was successful
Caching model 3 (dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_left.pb) was successful
Caching model 4 (dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_right.pb) was successful
Unable to init server: Could not connect: Connection refused

(3D Points Output:2846): Gtk-WARNING **: 14:36:07.879: cannot open display: 
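The poses above are all reported as "corrupted" before loading even starts. A quick offline sanity check (a minimal sketch, not part of MocapNET) is that a BVH file must begin with a HIERARCHY section and also contain a MOTION section; truncated downloads and git-LFS pointer files fail this immediately:

```python
def looks_like_valid_bvh(text):
    """Heuristic check: a BVH file starts with a HIERARCHY
    section and must also contain a MOTION section."""
    stripped = text.lstrip()
    return stripped.startswith("HIERARCHY") and "MOTION" in stripped

# A real BVH passes; a git-LFS pointer (or any truncated file) fails.
print(looks_like_valid_bvh("HIERARCHY\nROOT hip\nMOTION\nFrames: 1\n"))   # True
print(looks_like_valid_bvh("version https://git-lfs.github.com/spec/v1\n"))  # False
```

If the files under dataset/poses/ fail a check like this, re-fetching the dataset is a better first step than debugging the loader.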

Errors when trying OpenCVTest and MocapNET2LiveWebcamDemo in Windows Subsystem for Linux

MocapNET is installed on Ubuntu 20.04 in WSL, but when trying OpenCVTest the following errors occur:

tm@LAPTOP-DGSMQ87U:/mnt/d/MocapNET-master$ ./OpenCVTest --from /shuffle.webm --novisualization
(OpenCVTest:2395): GStreamer-CRITICAL **: 10:37:02.419: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (713) open OpenCV | GStreamer warning: Error opening bin: no source element for URI "/shuffle.webm"
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
[ERROR:0] global ../modules/videoio/src/cap.cpp (116) open VIDEOIO(CV_IMAGES): raised OpenCV exception: OpenCV(4.2.0) ../modules/videoio/src/cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): /shuffle.webm in function 'icvExtractPattern'
Trying to open /shuffle.webm
Openning camera failed
tm@LAPTOP-DGSMQ87U:/mnt/d/MocapNET-master$


Are these errors non-critical?
Another test also apparently fails:

tm@LAPTOP-DGSMQ87U:/mnt/d/MocapNET-master$ ./MocapNET2LiveWebcamDemo --from shuffle.webm
CPU : Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
pcilib: Cannot open /proc/bus/pci
lspci: Cannot find any working access method.
GPU :
Trying to open source (shuffle.webm)
Using BVH codebase version 0.44
buildOpenGLProjectionForIntrinsics: img(1920x1080) fx 582.18, fy 582.53, cx 960.00, cy 540.00
Number of Values Per Frame: 498
Frames: 1(1) / Frame Time : 0.0400
BVH subsystem initialized using dataset/headerWithHeadAndOneMotion.bvh that contains 1 frames ( 498 motions each )
Initializing IK code..
Body Problem : The IK problem we want to solve has 2 groups of subproblems
It is also ultimately divided into 7 kinematic chains
Chain 0 has 10 parts : jID(hip/0)->mID(0 to 2) jID(hip/0)->mID(3 to 5) jID(neck/3)->EndEffector jID(head/5)->EndEffector jID(rShldr/105)->EndEffector jID(lShldr/136)->EndEffector jID(rThigh/167)->EndEffector jID(lThigh/190)->EndEffector jID(eye.l/70)->EndEffector jID(eye.r/78)->EndEffector
Chain 1 has 4 parts : jID(chest/2)->mID(9 to 11) jID(neck/3)->EndEffector jID(rShldr/105)->EndEffector jID(lForeArm/137)->EndEffector
Chain 2 has 5 parts : jID(neck/3)->mID(12 to 14) jID(neck1/4)->mID(15 to 17) jID(head/5)->mID(18 to 20) jID(eye.l/70)->EndEffector jID(eye.r/78)->EndEffector
Chain 3 has 3 parts : jID(rShldr/105)->mID(237 to 239) jID(rForeArm/106)->mID(240 to 242) jID(rHand/107)->EndEffector
Chain 4 has 3 parts : jID(lShldr/136)->mID(315 to 317) jID(lForeArm/137)->mID(318 to 320) jID(lHand/138)->EndEffector
Chain 5 has 5 parts : jID(rThigh/167)->mID(393 to 395) jID(rShin/168)->mID(396 to 398) jID(rFoot/169)->mID(399 to 401) jID(EndSite_toe1-2.R/172)->EndEffector jID(EndSite_toe5-3.R/188)->EndEffector
Chain 6 has 5 parts : jID(lThigh/190)->mID(447 to 449) jID(lShin/191)->mID(450 to 452) jID(lFoot/192)->mID(453 to 455) jID(EndSite_toe1-2.L/195)->EndEffector jID(EndSite_toe5-3.L/211)->EndEffector
BVH code initalization successfull..
Trying to set thread realtime prio = 99
Failed setting thread realtime priority
LoadGraph dataset/combinedModel/mobnet2_tiny_vnect_sm_1.9k.pb using TensorFlow Version: 2.3.1
Error importing graph definition.. (3) Invalid Argument in graph..
Can't load graph dataset/combinedModel/mobnet2_tiny_vnect_sm_1.9k.pb
tm@LAPTOP-DGSMQ87U:/mnt/d/MocapNET-master$
Your version is up to date


The file dataset/combinedModel/mobnet2_tiny_vnect_sm_1.9k.pb is a 261KB file in the dataset/combinedModel/ folder.

Any suggestions?
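An "Invalid Argument in graph" error combined with a .pb file of only 261KB often points to a truncated download or a git-LFS pointer file rather than a real frozen TensorFlow graph. A hedged, illustrative check (the size threshold is a placeholder assumption, not a MocapNET constant):

```python
import os

def pb_file_looks_truncated(path, min_bytes=1_000_000):
    """Flag .pb files that are implausibly small for a frozen
    TensorFlow graph, or that are actually git-LFS pointer files.
    min_bytes is an illustrative threshold, not an official value."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        head = f.read(64)
    is_lfs_pointer = head.startswith(b"version https://git-lfs")
    return is_lfs_pointer or size < min_bytes
```

If a check like this flags the file, re-downloading the model archive as described in the README is the likely fix.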

I have installed OpenPose in Win 10 and would like to use its .json output to create .bvh mocap files for Blender using MocapNET.

[ 21%] Building CXX object src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/__/MocapNETLib/bvh.cpp.o

Hi,
I always get the following error when I initialize MocapNET.
Your help in solving the issue would be great.

Thanks,
Simon

[ 21%] Building CXX object src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/__/MocapNETLib/bvh.cpp.o
/content/MocapNET/src/MocapNET1/MocapNETLib/bvh.cpp: In function ‘std::vector<std::vector<float> > convertBVHFrameTo2DPoints(std::vector<float>, unsigned int, unsigned int)’:
/content/MocapNET/src/MocapNET1/MocapNETLib/bvh.cpp:272:60: error: too few arguments to function ‘int bvh_loadTransformForMotionBuffer(BVH_MotionCapture*, float*, BVH_Transform*, unsigned int)’
)
^
In file included from /content/MocapNET/src/MocapNET1/MocapNETLib/../../../dependencies/RGBDAcquisition/opengl_acquisition_shared_library/opengl_depth_and_color_renderer/src/Library/MotionCaptureLoader/calculate/bvh_project.h:5:0,
from /content/MocapNET/src/MocapNET1/MocapNETLib/bvh.cpp:7:
/content/MocapNET/src/MocapNET1/MocapNETLib/../../../dependencies/RGBDAcquisition/opengl_acquisition_shared_library/opengl_depth_and_color_renderer/src/Library/MotionCaptureLoader/calculate/bvh_transform.h:113:5: note: declared here
int bvh_loadTransformForMotionBuffer(
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/content/MocapNET/src/MocapNET1/MocapNETLib/bvh.cpp: In function ‘std::vector<float> convertBVHFrameToFlat3DPoints(std::vector<float>, unsigned int, unsigned int)’:
/content/MocapNET/src/MocapNET1/MocapNETLib/bvh.cpp:353:60: error: too few arguments to function ‘int bvh_loadTransformForMotionBuffer(BVH_MotionCapture*, float*, BVH_Transform*, unsigned int)’
)
^
In file included from /content/MocapNET/src/MocapNET1/MocapNETLib/../../../dependencies/RGBDAcquisition/opengl_acquisition_shared_library/opengl_depth_and_color_renderer/src/Library/MotionCaptureLoader/calculate/bvh_project.h:5:0,
from /content/MocapNET/src/MocapNET1/MocapNETLib/bvh.cpp:7:
/content/MocapNET/src/MocapNET1/MocapNETLib/../../../dependencies/RGBDAcquisition/opengl_acquisition_shared_library/opengl_depth_and_color_renderer/src/Library/MotionCaptureLoader/calculate/bvh_transform.h:113:5: note: declared here
int bvh_loadTransformForMotionBuffer(
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/build.make:400: recipe for target 'src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/__/MocapNETLib/bvh.cpp.o' failed
make[2]: *** [src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/__/MocapNETLib/bvh.cpp.o] Error 1
CMakeFiles/Makefile2:200: recipe for target 'src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/all' failed
make[1]: *** [src/MocapNET1/MocapNETFromJSON/CMakeFiles/MocapNETJSON.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

Could I get the output or processed file of the 2D-to-3D conversion, i.e. the BVH file? And where?

I added a flag : --novisualization that suppresses visualization windows

You should git pull , recompile and then try ./WebcamBIN --from ./video_test/sense_test.mp4 --novisualization and see if it will now work on the docker container..

d5c7471

Originally posted by @AmmarkoV in #2 (comment)

Now I can get the imshow image in the docker environment.
I run root@7953ba7131a3:/pose_dancing/we-try-git-newest/MocapNET# ./WebcamBIN --from video_test/sense_test.mp4 --novisualization. Although the code and environment work, where can I get the final output, i.e. the 3D, 2D, and BVH files?
I got as following:
Trying to open video_test/sense_test.mp4
OpenCV Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /pose_dancing/MocapNET/opencv-3.2.0/modules/highgui/src/window.cpp, line 304
terminate called after throwing an instance of 'cv::Exception'
what(): /pose_dancing/MocapNET/opencv-3.2.0/modules/highgui/src/window.cpp:304: error: (-215) size.width>0 && size.height>0 in function imshow.
Aborted

It can run for about 10 seconds, or until the end of the video, showing the frames of video_test/sense_test.mp4. However, it ends with one error near the final frames or seconds:
what(): /pose_dancing/MocapNET/opencv-3.2.0/modules/highgui/src/window.cpp:304: error: (-215) size.width>0 && size.height>0 in function imshow. Aborted.
In fact, I built OpenCV using your getOpenCV3.2.sh.
I'm a little bit confused by it. If I don't build OpenCV with getOpenCV3.2.sh and just use pip install opencv-python, what will happen? Anyway, with the old version of this repo I got the resulting keypoints and other output, but the old version could only work with the first few frames before being Aborted, which may be caused by so many windows within docker.

And here are the latest results. Thanks for your updates and replies.

I added a flag : --novisualization that suppresses visualization windows

You should git pull , recompile and then try ./WebcamBIN --from ./video_test/sense_test.mp4 --novisualization and see if it will now work on the docker container..

d5c7471

Now I can get the imshow image in the docker environment.
And --novisualization may not work?
I run root@7953ba7131a3:/pose_dancing/we-try-git-newest/MocapNET# ./WebcamBIN --from video_test/sense_test.mp4 --novisualization
I got as following:
Trying to open video_test/sense_test.mp4
OpenCV Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /pose_dancing/MocapNET/opencv-3.2.0/modules/highgui/src/window.cpp, line 304
terminate called after throwing an instance of 'cv::Exception'
what(): /pose_dancing/MocapNET/opencv-3.2.0/modules/highgui/src/window.cpp:304: error: (-215) size.width>0 && size.height>0 in function imshow.
Aborted

It can run for about 10 seconds showing the frames of video_test/sense_test.mp4. However, nothing has been processed. I'm a little bit confused by it. Anyway, with the old version of this repo I got the resulting keypoints and other output, but the old version could only work with the first few frames before being Aborted, which may be caused by so many windows within docker.
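The imshow assertion that appears near the end of the video is the classic symptom of passing the empty frame that a video capture returns after the last decoded frame. The usual guard is to check the read result before displaying; sketched here with a stand-in frame source instead of OpenCV so it stays self-contained:

```python
def process_stream(read_frame, show):
    """Consume frames until the source is exhausted; never pass
    an empty frame to the display call (which would assert)."""
    shown = 0
    while True:
        ok, frame = read_frame()
        if not ok or frame is None:  # end of video: stop cleanly
            break
        show(frame)
        shown += 1
    return shown

# Stand-in source yielding 3 frames followed by end-of-stream.
frames = iter([(True, "f0"), (True, "f1"), (True, "f2"), (False, None)])
print(process_stream(lambda: next(frames), lambda f: None))  # 3
```

With cv::VideoCapture the same pattern is checking the boolean returned by read() (or the Mat's emptiness) before calling imshow.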

Build error on Jetson Nano

Hi,

I'm trying to install MocapNET on a Jetson Nano.
When running the initialize.sh script, I'm getting the following error:

/home/hans/MocapNET/dependencies/RGBDAcquisition/tools/AmMatrix/matrix4x4Tools.c:1062:10: fatal error: xmmintrin.h: No such file or directory
 #include <xmmintrin.h>
          ^~~~~~~~~~~~~
compilation terminated.
src/GroundTruthGenerator/CMakeFiles/GroundTruthDumper.dir/build.make:374: recipe for target 'src/GroundTruthGenerator/CMakeFiles/GroundTruthDumper.dir/__/__/dependencies/RGBDAcquisition/tools/AmMatrix/matrix4x4Tools.c.o' failed
make[2]: *** [src/GroundTruthGenerator/CMakeFiles/GroundTruthDumper.dir/__/__/dependencies/RGBDAcquisition/tools/AmMatrix/matrix4x4Tools.c.o] Error 1
CMakeFiles/Makefile2:100: recipe for target 'src/GroundTruthGenerator/CMakeFiles/GroundTruthDumper.dir/all' failed
make[1]: *** [src/GroundTruthGenerator/CMakeFiles/GroundTruthDumper.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

According to https://forums.developer.nvidia.com/t/fatal-error-emmintrin-h-no-such-file-or-directory-compilation-terminated/46577, ARM CPUs do not support the x86_64 SSE/SSE2 intrinsics.

Do you know how this could be fixed on the Jetson Nano?

Greetings,
Hans Beemsterboer
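xmmintrin.h provides x86 SSE intrinsics and simply does not exist in the Jetson's aarch64 toolchain; the include (and any intrinsics behind it) would need to be conditionally compiled out on ARM, or mapped through a shim such as sse2neon. A hypothetical pre-build architecture check, illustrating which targets ship those headers:

```python
import platform

def sse_headers_expected(machine=None):
    """SSE intrinsic headers (xmmintrin.h, emmintrin.h, ...) only
    exist on x86 targets; aarch64 (Jetson Nano/NX) needs NEON
    alternatives instead. The machine-name list is illustrative."""
    machine = machine or platform.machine()
    return machine in ("x86_64", "AMD64", "i386", "i686")

print(sse_headers_expected("x86_64"))   # True
print(sse_headers_expected("aarch64"))  # False
```

In C the equivalent is wrapping the include in an architecture guard such as `#if defined(__x86_64__) || defined(__i386__)`.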

Is it possible to use this without a webcam?

Hello, I just found this project today and I've been looking for documentation on how to use what's available before I do some tinkering, and one thing I found is that it seems to require a webcam. I do not have a webcam, but I have a smartphone I can film myself with. Is it possible to use this with prerecorded videos in any capacity? If it is, could it be explained how? And if not, could such a thing be added later?

Could not find COCO 2D skeleton when attempting to convert OpenPose Json output

First of all big props with this project, I love the idea of it and was pleased to find out it exists.

As it stands I've been attempting to test this project in conjunction with OpenPose by running the following command:

openpose/openpose.bin -number_people_max 1 --hand --write_json /output/ --write_images /output/ --image_dir examples/media/

This generated an image and json file (the image for my own benefit in confirming everything was working correctly with regards to OpenPose interpreting the source image).

I then moved the generated json file to the directory ~/Documents/json and attempted to run the convertBody25JSONToCSV module with the following command from the MocapNET folder:

./convertBody25JSONToCSV --from '~/Documents/json' --size 720 1080 -o .

Which yielded the following output:

File ~/Documents/json/colorFrame_0_00001.jpg does not exist, unable to get its dimensions.. Assuming default image dimensions 720x1080 , you can change this using --size x y
Processing ~/Documents/json/colorFrame_0_00001_keypoints.json (720x1080)
Could not find COCO 2D skeleton in ~/Documents/json/colorFrame_0_00001_keypoints.json
Done processing 1 frames..
Output has been stored at ./2dJoints_v1.2.csv

This leaves me with the question of why the tool is attempting to access a non-existent colorFrame_0_00001_keypoints.json and a similarly named image when the only file in the given folder is named 1.json. The obvious assumption would be that I'm simply not calling the module properly and need to rectify the command accordingly, however it is not clear to me what I need to change and how.

I did at least try the obvious option of directly passing in the path to the json file like so:

./convertBody25JSONToCSV --from '~/Documents/json/1.json' --size 720 1080 -o .

But ended up with the same result.

I suspect that the error is an obvious one on my part rather than an issue with MocapNET, but I'm uncertain as to how I should resolve it.

Just to rule out an issue with openpose and the generated json file I'll paste the contents below:

{"version":1.3,"people":[{"person_id":[-1],"pose_keypoints_2d":[301.986,142.172,0.861539,298.959,250.623,0.869627,208.469,250.72,0.745,105.856,365.168,0.736903,199.447,425.471,0.815681,392.297,250.591,0.767769,497.778,347.227,0.800218,419.367,440.535,0.801715,298.925,494.725,0.735155,244.634,494.761,0.660716,211.45,729.905,0.80738,157.249,952.867,0.819982,353.122,491.808,0.68745,437.538,729.837,0.806956,552.095,953.034,0.741912,280.842,127.132,0.874103,322.925,124.07,0.903101,253.844,136.099,0.92029,347.02,133.065,0.931848,560.989,1037.34,0.775494,576.162,1025.33,0.709642,567.097,955.94,0.538249,166.283,1025.35,0.691949,148.12,1022.17,0.639221,154.284,958.97,0.630102],"face_keypoints_2d":[],"hand_left_keypoints_2d":[405.562,448.236,0.326992,399.789,439.314,0.37053,386.67,425.67,0.321641,371.451,421.996,0.227782,354.657,414.649,0.0814476,374.074,417.798,0.698801,353.608,406.252,0.829781,341.013,401.529,0.891227,328.942,397.331,0.849132,373.55,423.046,0.777923,350.459,419.897,0.85627,334.19,419.372,0.961719,322.645,419.897,0.857534,375.124,432.492,0.863338,354.132,435.116,0.815929,337.864,437.215,0.808573,326.843,439.314,0.863499,377.748,444.038,0.832029,360.43,451.909,0.82551,349.934,456.633,1.00618,341.013,460.306,0.919247],"hand_right_keypoints_2d":[210.031,446.499,0.37984,219.054,437.477,0.475413,232.587,420.56,0.285183,246.12,410.974,0.188407,259.09,403.079,0.101311,238.79,410.974,0.709444,259.654,398.004,0.874165,273.751,392.365,0.819851,285.593,388.418,0.858441,238.79,416.049,0.802709,263.601,408.718,0.85787,278.262,405.899,0.909572,290.668,402.515,0.851317,237.662,425.071,0.851579,263.037,424.507,0.893198,277.134,423.943,0.870259,288.412,423.379,0.838396,238.226,437.477,0.843679,257.962,441.424,0.844412,268.112,443.679,0.910401,277.134,445.371,0.87341],"pose_keypoints_3d":[],"face_keypoints_3d":[],"hand_left_keypoints_3d":[],"hand_right_keypoints_3d":[]}]}
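Judging by the log above, the converter scans for serialized filenames of the form `<label><NNNNN>_keypoints.json` (e.g. colorFrame_0_00001_keypoints.json) rather than taking one file directly, so a file named 1.json is never matched; renaming it to the expected pattern is one workaround. A sketch of the expected name (the pattern is inferred from the log, not from MocapNET source), plus a check that the pasted JSON really does contain a BODY25 skeleton (75 values = 25 joints × x, y, confidence):

```python
import json

def expected_name(label="colorFrame_0_", frame=1, seriallength=5):
    """Build the serialized filename the converter appears to scan
    for, e.g. colorFrame_0_00001_keypoints.json (pattern inferred
    from the log output above, not from MocapNET source)."""
    return f"{label}{frame:0{seriallength}d}_keypoints.json"

print(expected_name())  # colorFrame_0_00001_keypoints.json

# BODY25 sanity check on an OpenPose JSON document.
doc = json.loads('{"people":[{"pose_keypoints_2d":' + str([0.0] * 75) + '}]}')
kp = doc["people"][0]["pose_keypoints_2d"]
print(len(kp) // 3)  # 25 joints, each stored as (x, y, confidence)
```

The pasted JSON has exactly 75 pose values, so the skeleton itself looks fine; the mismatch is only in the filename the tool expects.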

/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow.so: .dynsym local symbol at index 3 (>= sh_info of 3)

There is a linker warning appearing during the final linking stages of the build procedure:

/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow.so.2: .dynsym local symbol at index 3 (>= sh_info of 3)
or
/usr/bin/ld: ../../../../dependencies/libtensorflow/lib/libtensorflow.so: .dynsym local symbol at index 3 (>= sh_info of 3)

There is probably something going wrong with the linking of symbols from the tensorflow.so library, however I haven't located exactly what causes this..!

The compilation succeeds but this should be addressed and fixed..

train/test source code?

Hi, I am trying to reproduce your model on my custom dataset.
Is there any source code on how you trained the model available?
Thanks

MocapNET2CSV demo not starting up

Hi Ammar,
The MocapNET2CSV demo doesn't seem to work.

When I run with:
./MocapNET2CSV --from sample.csv --visualize --delay 30

I'm getting a message "TODO: do face associations.."

CMake error Jetson NX

Hi,
I am having the following cmake error. Can you kindly help with the error if possible?
I am using Jetson Xavier NX.

Thanks

[ 38%] Linking CXX shared library ../../../../libMocapNETLib2.so
../../../../dependencies/libtensorflow/lib/libtensorflow.so: error adding symbols: File in wrong format
collect2: error: ld returned 1 exit status
src/MocapNET2/MocapNETLib2/CMakeFiles/MocapNETLib2.dir/build.make:1094: recipe for target '../libMocapNETLib2.so' failed
make[2]: *** [../libMocapNETLib2.so] Error 1
CMakeFiles/Makefile2:475: recipe for target 'src/MocapNET2/MocapNETLib2/CMakeFiles/MocapNETLib2.dir/all' failed
make[1]: *** [src/MocapNET2/MocapNETLib2/CMakeFiles/MocapNETLib2.dir/all] Error 2
Makefile:102: recipe for target 'all' failed
make: *** [all] Error 2
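"File in wrong format" from the linker on a Jetson almost always means the prebuilt libtensorflow.so is an x86_64 binary being linked on an aarch64 system; an aarch64 build of the TensorFlow C API is needed instead. The target architecture of a shared object can be read straight from its ELF header (a minimal sketch, assuming little-endian ELF, which covers x86_64 and aarch64 Linux):

```python
import struct

ELF_MACHINES = {0x3E: "x86_64", 0xB7: "aarch64", 0x03: "x86", 0x28: "arm"}

def elf_machine(path):
    """Return the target architecture of an ELF file by reading the
    16-bit e_machine field at byte offset 18 of the ELF header."""
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    (machine,) = struct.unpack_from("<H", header, 18)
    return ELF_MACHINES.get(machine, hex(machine))
```

Comparing `elf_machine("dependencies/libtensorflow/lib/libtensorflow.so")` with `platform.machine()` quickly confirms or rules out the mismatch.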

How to install and run the program

Alright, so I'm having huge issues trying to get the program to install, build, and run on Zorin OS. Specifically, none of the scripts I ran seem to work, and the program is having a hard time finding the files it needs. So I need someone to please post an in-depth tutorial so I can use the program. I'm not exactly experienced with Linux on the whole, but the README on the front of the page is not helping; I think there's a huge amount I'm missing. I'm looking through other tutorials and things to help me get this running, and they're not helping either.

convertOpenPoseJSONtoCSV failed to find a JSON file..!

Hello,

I am using Windows 10 with WSL (Ubuntu 20.04) and trying to use the convertOpenPoseJSONtoCSV tool, with the command ./convertOpenPoseJSONToCSV --from 20191130_170653.mp4-data --label colorFrame_0_ --seriallength 5 --size 1920 1080 -o . where 20191130_170653.mp4-data is the folder which contains the image files generated from the dump_video.sh file (so the label is colorFrame_0_XXXXX) and the JSON files generated from the latest prebuilt OpenPose for Windows.

I keep getting the Failed to find a JSON file..! error. Even when I change the --from path to the full path /home/user/MocapNET/20191130_170653.mp4-data, I still get the error. What did I do wrong?

Thank you.
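"Failed to find a JSON file..!" usually means no file in the given folder matches the `<label><NNNNN>_keypoints.json` pattern implied by --label and --seriallength. An illustrative filter showing which filenames such a scan would and would not pick up (the pattern is inferred from the command line above, not from MocapNET source):

```python
import re

def matching_json_files(filenames, label="colorFrame_0_", seriallength=5):
    """Keep only files following <label><NNNNN>_keypoints.json,
    the naming pattern implied by --label and --seriallength."""
    pattern = re.compile(re.escape(label) + r"\d{%d}_keypoints\.json" % seriallength)
    return [f for f in filenames if pattern.fullmatch(f)]

files = ["colorFrame_0_00001_keypoints.json", "1.json", "colorFrame_0_00001.jpg"]
print(matching_json_files(files))  # ['colorFrame_0_00001_keypoints.json']
```

If OpenPose on Windows produced names like 20191130_170653_000000000001_keypoints.json, renaming them (or adjusting --label and --seriallength to match) should resolve the error.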

Issue with installation on Ubuntu20 on Windows 10 machine

I am using Ubuntu 20.04 on WSL (the previous MocapNET version had no issues)

OpenCV version 4.2
CUDA version 10.1.243

CMAKE ERROR:

MocapNET-master/src/MocapNET2/MocapNETLib2/visualization/widgets.cpp: In function ‘int visualizeOrientation(cv::Mat&, const char*, float, float, float, float, float, unsigned int, unsigned int, unsigned int, unsigned int)’:
/mnt/e/AI/MocapNETV2_1-master/MocapNET-master/src/MocapNET2/MocapNETLib2/visualization/widgets.cpp:268:67: error: ‘CV_AA’ was not declared in this scope; did you mean ‘CV_MSA’?
  268 |     cv::line(img,startPoint,arrowPointing,cv::Scalar(0,255,255),4,CV_AA,0);
      |                                                                   ^~~~~
      |                                                                   CV_MSA
make[2]: *** [src/MocapNET2/MocapNETLib2/CMakeFiles/MocapNETLib2.dir/build.make:752: src/MocapNET2/MocapNETLib2/CMakeFiles/MocapNETLib2.dir/visualization/widgets.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:457: src/MocapNET2/MocapNETLib2/CMakeFiles/MocapNETLib2.dir/all] Error 2
make: *** [Makefile:84: all] Error 2

If I replace CV_AA with CV_MSA the build is successful, but when I run the command:
./MocapNET2LiveWebcamDemo --novisualization --from shuffle.webm --openpose

I get the following error:

Loading Pose 0 -> dataset/poses/neutral.bvh  : Pose 0 -> dataset/poses/neutral.bvh  is corrupted
Failure
Loading Pose 1 -> dataset/poses/tpose.bvh  : Pose 1 -> dataset/poses/tpose.bvh  is corrupted
Failure
Loading Pose 2 -> dataset/poses/x.bvh  : Pose 2 -> dataset/poses/x.bvh  is corrupted
Failure
Loading Pose 3 -> dataset/poses/handsup.bvh  : Pose 3 -> dataset/poses/handsup.bvh  is corrupted
Failure
Loading Pose 4 -> dataset/poses/leftwave.bvh  : Pose 4 -> dataset/poses/leftwave.bvh  is corrupted
Failure
Loading Pose 5 -> dataset/poses/rightright.bvh  : Pose 5 -> dataset/poses/rightright.bvh  is corrupted
Failure
Loading Pose 6 -> dataset/poses/leftleft.bvh  : Pose 6 -> dataset/poses/leftleft.bvh  is corrupted
Failure
Loading Pose 7 -> dataset/poses/push.bvh  : Pose 7 -> dataset/poses/push.bvh  is corrupted
Failure
Loading Pose 8 -> dataset/poses/rightwave.bvh  : Pose 8 -> dataset/poses/rightwave.bvh  is corrupted
Failure
Failed to read recognized Poses
This is not fatal, but poses/gestures will be deactivated..
Initializing output filters : .......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
Could not find a pose categorization classifier for UpperBody..
 Graph dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_front.pb could not be found
Graph dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_back.pb could not be found
Graph dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_left.pb could not be found
Graph dataset/combinedModel/mocapnet2/mode5/1.0/upperbody_right.pb could not be found
Could not find a pose categorization classifier for LowerBody..
 Graph dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_front.pb could not be found
Graph dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_back.pb could not be found
Graph dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_left.pb could not be found
Graph dataset/combinedModel/mocapnet2/mode5/1.0/lowerbody_right.pb could not be found


out.bvh file is not generated.
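On the CV_AA error above: OpenCV 4 removed the legacy C macros, and the replacement for CV_AA is cv::LINE_AA from the LineTypes enum, which keeps the same numeric value; CV_MSA, which the compiler suggested, is an unrelated MIPS SIMD build macro, so substituting it silently changes the lineType argument. The values, stated as a small self-contained snippet rather than an OpenCV call:

```python
# cv::LINE_AA (Python: cv2.LINE_AA) is the antialiased lineType, value 16,
# from OpenCV's LineTypes enum; the removed legacy macro CV_AA expanded to
# the same value. Substituting CV_MSA instead passes a meaningless lineType.
LINE_AA = 16      # value of cv::LINE_AA / cv2.LINE_AA
CV_AA = LINE_AA   # what the legacy OpenCV 1.x macro expanded to
print(CV_AA)      # 16
```

So the correct source fix is replacing CV_AA with cv::LINE_AA (or adding a `#define CV_AA cv::LINE_AA` compatibility shim), not CV_MSA.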

GroundTruthGenerator compilation error after pulling latest master..

Hello, if you get a compilation error on the GroundTruthGenerator after pulling the latest master, this is because the dependencies/RGBDAcquisition subproject also needs to be pulled. I have added new functionality that can filter out BVH poses, and this change needs to be pulled in as well.

To simplify the update procedure, I have included an update.sh script that automatically pulls all of my code at once, so that it is always in sync.

I have also updated the README file accordingly.
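The job of such an update script can be sketched roughly as follows. This is a minimal illustration only, not the actual contents of update.sh; the `update_checkout` helper name is made up, and the repository list is an assumption:

```shell
#!/bin/bash
# Minimal sketch of an update helper (NOT the real update.sh):
# pull the main tree and the dependencies/RGBDAcquisition checkout
# so that both stay in sync after a change lands on master.

update_checkout() {
  local repo="$1"
  if [ -d "$repo/.git" ]; then
    echo "Updating $repo"
    (cd "$repo" && git pull)
  else
    echo "Skipping $repo (not a git checkout)"
  fi
}

# Typical usage from the MocapNET root:
#   update_checkout .
#   update_checkout dependencies/RGBDAcquisition
```

The point of pulling both checkouts in one step is exactly the failure mode described above: updating only the main tree leaves the dependency at an older revision that no longer compiles against it.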

CMake Error at CMakeLists.txt:269 (add_subdirectory)

Dear All,
Many thanks for providing this fantastic code.
Unfortunately, I get a CMake error when running initialize.sh which I cannot solve by myself. The CMake output log is attached; the error message is below.

It would be great if you could help solve the problem.

I am using the Ubuntu subsystem on Windows 10, and at the beginning of initialization the following error occurs, which might be the root cause:

Will not apply HSTS. The HSTS database must be a regular and non-world-writable file.
ERROR: could not open HSTS store at '/home/simon/.wget-hsts'. HSTS will be disabled.

Many thanks in advance,
Simon

CMakeOutput.log

-- Configuring incomplete, errors occurred!
See also "/home/simon/MocapNET/build/CMakeFiles/CMakeOutput.log".
Using locally found libtensorflow C-API
BVH Code found and will be used..
OpenGL support will not be compiled in
OpenCV code found and will be used..
/usr/share/OpenCV
AmmarServer support will be compiled in
OpenSSL will not be used
LibEvent will not be used against AmmarServerLib
OpenSSL will not be used against AmmarServerLib
CMake Error at CMakeLists.txt:269 (add_subdirectory):
add_subdirectory given source "src/MocapNETServerHTTP/" which is not an
existing directory.

-- Configuring incomplete, errors occurred!
See also "/home/simon/MocapNET/build/CMakeFiles/CMakeOutput.log".
Makefile:278: recipe for target 'cmake_check_build_system' failed
make: *** [cmake_check_build_system] Error 1
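For an error like the one above, a first sanity check is whether the directory that CMake complains about actually exists in the checkout; if it does not, re-pulling the sources and starting from a clean build directory is the usual remedy. A hedged sketch of that check (the `check_dir` helper is made up for illustration):

```shell
#!/bin/bash
# Hypothetical diagnostic helper: report whether a source directory that
# CMakeLists.txt references via add_subdirectory() is present on disk.

check_dir() {
  if [ -d "$1" ]; then
    echo "present: $1"
  else
    echo "missing: $1 (try re-pulling the sources and using a clean build dir)"
  fi
}

# e.g. run from the MocapNET root:
check_dir "src/MocapNETServerHTTP"
```

If the directory is reported missing, the checkout predates the commit that added it, which matches the update.sh advice above.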
