
Animated Drawings


This repo contains an implementation of the algorithm described in the paper, A Method for Animating Children's Drawings of the Human Figure.

In addition, this repo aims to be a useful creative tool in its own right, allowing you to flexibly create animations starring your own drawn characters. If you do create something fun with this, let us know! Use hashtag #FAIRAnimatedDrawings, or tag me on Twitter: @hjessmith.

Project website: http://www.fairanimateddrawings.com

Video overview of the Animated Drawings open source project

Installation

This project has been tested with macOS Ventura 13.2.1 and Ubuntu 18.04. If you're installing on another operating system, you may encounter issues.

We strongly recommend activating a Python virtual environment prior to installing Animated Drawings. Conda's Miniconda is a great choice. Follow these steps to download and install it. Then run the following commands:

    # create and activate the virtual environment
    conda create --name animated_drawings python=3.8.13
    conda activate animated_drawings

    # clone AnimatedDrawings and use pip to install
    git clone https://github.com/facebookresearch/AnimatedDrawings.git
    cd AnimatedDrawings
    pip install -e .

Mac M1/M2 users: if you get architecture errors, make sure your ~/.condarc lists only osx-arm64 and noarch under subdirs, not osx-64. You can spot the problem as early as conda create: under "The following NEW packages will be INSTALLED", the libraries will show osx-64 instead of osx-arm64 builds.

Using Animated Drawings

Quick Start

Now that everything's set up, let's animate some drawings! To get started, follow these steps:

  1. Open a terminal and activate the animated_drawings conda environment:
~ % conda activate animated_drawings
  2. Ensure you're in the root directory of AnimatedDrawings:
(animated_drawings) ~ % cd {location of AnimatedDrawings on your computer}
  3. Start up a Python interpreter:
(animated_drawings) AnimatedDrawings % python
  4. Copy and paste the following two lines into the interpreter:
from animated_drawings import render
render.start('./examples/config/mvc/interactive_window_example.yaml')

If everything is installed correctly, an interactive window should appear on your screen. (Use spacebar to pause/unpause the scene, arrow keys to move back and forth in time, and q to close the screen.)




There's a lot happening behind the scenes here. Characters, motions, scenes, and more are all controlled by configuration files, such as interactive_window_example.yaml. Below, we show how different effects can be achieved by varying the config files. You can learn more about the config files here.
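
If you're curious what a particular config controls, you can load it with PyYAML and look around before rendering it. A minimal sketch (the section names are whatever the example file actually contains, so inspect the printed output rather than trusting the comment):

    # Peek at a config's top-level sections before rendering it.
    # Minimal sketch; run from the AnimatedDrawings root directory.
    import yaml
    from animated_drawings import render

    cfg_path = './examples/config/mvc/interactive_window_example.yaml'
    with open(cfg_path) as f:
        cfg = yaml.safe_load(f)

    print(list(cfg.keys()))  # e.g. sections such as 'scene' and 'view' in the examples

    render.start(cfg_path)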

Export MP4 video

Suppose you'd like to save the animation as a video file instead of viewing it directly in a window. Specify a different example config by copying these lines into the Python interpreter:

from animated_drawings import render
render.start('./examples/config/mvc/export_mp4_example.yaml')

Instead of an interactive window appearing, the animation is saved to a file, video.mp4, located in the same directory as your script.




Export transparent .gif

Perhaps you'd like a transparent .gif instead of an .mp4? Copy these lines into the Python interpreter instead:

from animated_drawings import render
render.start('./examples/config/mvc/export_gif_example.yaml')

Instead of an interactive window appearing, the animation is saved to a file, video.gif, located in the same directory as your script.




Headless Rendering

If you'd like to generate a video headlessly (e.g. on a remote server accessed via ssh), you'll need to specify USE_MESA: True within the view section of the config file.

    view:
      USE_MESA: True
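
If you'd rather toggle this programmatically than edit the file by hand, one option is to load an example config, set the flag, and render a temporary copy. This is a minimal sketch, assuming only the render.start(path) call shown above; run it from the repo root so relative paths inside the config still resolve:

    # Minimal sketch: enable headless (OSMesa) rendering by flipping
    # USE_MESA in a copy of an example config, then rendering the copy.
    import tempfile
    import yaml
    from animated_drawings import render

    with open('./examples/config/mvc/export_mp4_example.yaml') as f:
        cfg = yaml.safe_load(f)

    cfg.setdefault('view', {})['USE_MESA'] = True

    with tempfile.NamedTemporaryFile('w', suffix='.yaml', delete=False) as tmp:
        yaml.safe_dump(cfg, tmp)

    render.start(tmp.name)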

Animating Your Own Drawing

All of the examples above use drawings with pre-existing annotations. To understand what we mean by annotations here, look at one of the 'pre-rigged' characters' annotation files. You can use whatever process you'd like to create those annotation files, and as long as they are valid, AnimatedDrawings will give you an animation.

So you'd like to animate your own drawn character. Creating those annotation files manually would be tedious. To make it fast and easy, we've trained a drawn humanoid figure detector and pose estimator and provided scripts that automatically generate annotation files from the model predictions. There are currently two options for setting this up.

Option 1: Docker

To get it working, you'll need to set up a Docker container that runs TorchServe. This allows us to quickly show your image to our machine learning models and receive their predictions.

To set up the container, follow these steps:

  1. Install Docker Desktop
  2. Ensure Docker Desktop is running.
  3. Run the following commands, starting from the Animated Drawings root directory:
    (animated_drawings) AnimatedDrawings % cd torchserve

    # build the docker image... this takes a while (~5-7 minutes on a MacBook Pro 2021)
    (animated_drawings) torchserve % docker build -t docker_torchserve .

    # start the docker container and expose the necessary ports
    (animated_drawings) torchserve % docker run -d --name docker_torchserve -p 8080:8080 -p 8081:8081 docker_torchserve

Wait ~10 seconds, then ensure Docker and TorchServe are working by pinging the server:

    (animated_drawings) torchserve % curl http://localhost:8080/ping

    # should return:
    # {
    #   "status": "Healthy"
    # }

If, after waiting, the response is curl: (52) Empty reply from server, one of two things is likely happening.

  1. TorchServe hasn't finished initializing yet; wait another 10 seconds and try again.
  2. TorchServe is failing because it doesn't have enough RAM. Try increasing the memory available to your Docker containers to 16GB by modifying Docker Desktop's settings.

With that set up, you can now go directly from image -> animation with a single command:

    (animated_drawings) torchserve % cd ../examples
    (animated_drawings) examples % python image_to_animation.py drawings/garlic.png garlic_out

While you waited, the image at drawings/garlic.png was analyzed: the character was detected, segmented, and rigged, then animated using BVH motion data from a human actor. The resulting animation was saved as ./garlic_out/video.gif.
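
Under the hood, image_to_animation.py posts your image to the TorchServe container and consumes the model predictions. If you want to query the detector endpoint yourself, here is a minimal sketch; the endpoint name matches the one the scripts use (drawn_humanoid_detector), but treat the exact request and response shapes as assumptions and see examples/image_to_annotations.py for the authoritative call:

    # Minimal sketch: send an image to the dockerized TorchServe detector.
    # Assumes the container from the steps above is listening on localhost:8080.
    import requests

    with open('drawings/garlic.png', 'rb') as f:
        img_bytes = f.read()

    resp = requests.post(
        'http://localhost:8080/predictions/drawn_humanoid_detector',
        files={'data': img_bytes},
    )
    resp.raise_for_status()
    print(resp.json())  # on success, detection results with confidence scores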




Option 2: Running locally on macOS

Getting Docker working can be complicated, and it's unnecessary if you just want to play around with this locally. Contributor @Gravityrail kindly submitted a script that sets up TorchServe locally on macOS, no Docker required.

cd torchserve
./setup_macos.sh
torchserve --start --ts-config config.local.properties --foreground

With TorchServe running locally like this, you can use the same command as before to make the garlic dance:

python image_to_animation.py drawings/garlic.png garlic_out

Fixing bad predictions

You may notice that running python image_to_animation.py drawings/garlic.png garlic_out produced additional non-video files within garlic_out: mask.png, texture.png, and char_cfg.yaml contain the annotation results of the character analysis step, created from our model predictions. If the mask predictions are incorrect, you can edit the mask with an image editing program like Paint or Photoshop. If the joint predictions are incorrect, you can run python fix_annotations.py to launch a web interface to visualize, correct, and update the annotations. Pass it the location of the folder containing the incorrect joint predictions (here we use garlic_out/ as an example):

    (animated_drawings) examples % python fix_annotations.py garlic_out/
    ...
     * Running on http://127.0.0.1:5050
    Press CTRL+C to quit

Navigate to http://127.0.0.1:5050 in your browser to access the web interface. Drag the joints into the appropriate positions, and hit Submit to save your edits.

Once you've modified the annotations, you can render an animation using them like so:

    # specify the folder where the fixed annotations are located
    (animated_drawings) examples % python annotations_to_animation.py garlic_out
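
If you'd like to sanity-check the annotations programmatically, you can load char_cfg.yaml and walk the skeleton. A minimal sketch, assuming the joint fields (name, parent, loc) used by the example characters; check one of the 'pre-rigged' annotation files for the authoritative schema:

    # Minimal sketch: print the joint hierarchy from a character config.
    # The field names below are assumptions based on the example characters.
    import yaml

    with open('garlic_out/char_cfg.yaml') as f:
        char_cfg = yaml.safe_load(f)

    for joint in char_cfg['skeleton']:
        print(f"{joint['name']:<16} parent={joint['parent']}  loc={joint['loc']}")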

Adding multiple characters to a scene

Multiple characters can be added to a video by specifying multiple entries within the config scene's 'ANIMATED_CHARACTERS' list. To see for yourself, run the following commands from a Python interpreter within the AnimatedDrawings root directory:

from animated_drawings import render
render.start('./examples/config/mvc/multiple_characters_example.yaml')
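
If you'd like to script this, one option (building on the yaml-editing pattern above) is to duplicate an entry in the scene's ANIMATED_CHARACTERS list. This sketch copies an existing entry rather than constructing one by hand, since the entry's internal fields are whatever the example config contains; note the copies will overlap exactly unless you also edit whatever placement fields the config exposes:

    # Minimal sketch: duplicate the first character entry in a scene config
    # and render the result.
    import copy
    import tempfile
    import yaml
    from animated_drawings import render

    with open('./examples/config/mvc/interactive_window_example.yaml') as f:
        cfg = yaml.safe_load(f)

    chars = cfg['scene']['ANIMATED_CHARACTERS']
    chars.append(copy.deepcopy(chars[0]))  # a second copy of the same character

    with tempfile.NamedTemporaryFile('w', suffix='.yaml', delete=False) as tmp:
        yaml.safe_dump(cfg, tmp)

    render.start(tmp.name)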

Adding a background image

Suppose you'd like to add a background to the animation. You can do so by specifying the image path within the config. Run the following commands from a Python interpreter within the AnimatedDrawings root directory:

from animated_drawings import render
render.start('./examples/config/mvc/background_example.yaml')

Using BVH Files with Different Skeletons

You can use any motion clip you'd like, as long as it is in BVH format.

If the BVH's skeleton differs from the examples used in this project, you'll need to create a new motion config file and retarget config file. Once you've done that, you should be good to go. The following code and resulting clip use a BVH with a completely different skeleton. Run the following commands from a Python interpreter within the AnimatedDrawings root directory:

from animated_drawings import render
render.start('./examples/config/mvc/different_bvh_skeleton_example.yaml')
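
A common stumbling block with new BVH files is getting start_frame_idx and end_frame_idx to agree with the file's actual frame count (see the framenum-mismatch issues further down this page). The standard BVH MOTION header records both the frame count and the frame time, so a small helper can read them for you. This is a minimal sketch based on the standard BVH format, not a repo API:

    # Minimal sketch: read frame count and frame time from a BVH file's
    # MOTION header, to help set start_frame_idx / end_frame_idx.
    def bvh_frame_info(path):
        frames, frame_time = None, None
        with open(path) as f:
            for line in f:
                stripped = line.strip()
                if stripped.startswith('Frames:'):
                    frames = int(stripped.split(':')[1])
                elif stripped.startswith('Frame Time:'):
                    frame_time = float(stripped.split(':')[1])
                    break
        if frames is None or frame_time is None:
            raise ValueError(f'no MOTION header found in {path}')
        return frames, frame_time

    frames, frame_time = bvh_frame_info('./examples/bvh/fair1/dab.bvh')
    print(f'{frames} frames at {frame_time:.4f}s per frame')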

Creating Your Own BVH Files

You may be wondering how you can create BVH files of your own. You used to need a motion capture studio, but now, thankfully, there are simple and accessible options for getting 3D motion data from a single RGB video. For example, I created this README's banner animation by:

  1. Recording myself doing a silly dance with my phone's camera.
  2. Using Rokoko to export a BVH from my video.
  3. Creating a new motion config file and retarget config file to fit the skeleton exported by Rokoko.
  4. Using AnimatedDrawings to animate the characters and export a transparent animated gif.
  5. Combining the animated gif, original video, and original drawings in Adobe Premiere.

Here is an example of the configs I used to apply my motion to a character. To use these config files, ensure that Rokoko exports the BVH with the Mixamo skeleton preset:

from animated_drawings import render
render.start('./examples/config/mvc/rokoko_motion_example.yaml')

It will show the result in a new window.

Adding Additional Character Skeletons

All of the example animations above depict "human-like" characters; they have two arms and two legs. Our method is primarily designed with these human-like characters in mind, and the provided pose estimation model assumes a human-like skeleton is present. But you can manually specify a different skeleton within the character config and modify the specified retarget config to support it. If you're interested, look at the configuration files specified in the two examples below.

from animated_drawings import render
render.start('./examples/config/mvc/six_arms_example.yaml')




from animated_drawings import render
render.start('./examples/config/mvc/four_legs_example.yaml')




Creating Your Own Config Files

If you want to create your own config files, see the configuration file documentation.

Browser-Based Demo

If you'd like to animate a drawing of your own, but don't want to deal with downloading code and using the command line, check out our browser-based demo:

www.sketch.metademolab.com

Paper & Citation

If you find the resources in this repo helpful, please consider citing the accompanying paper, A Method for Animating Children's Drawings of the Human Figure.

Citation:

@article{10.1145/3592788,
author = {Smith, Harrison Jesse and Zheng, Qingyuan and Li, Yifei and Jain, Somya and Hodgins, Jessica K.},
title = {A Method for Animating Children’s Drawings of the Human Figure},
year = {2023},
issue_date = {June 2023},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {42},
number = {3},
issn = {0730-0301},
url = {https://doi.org/10.1145/3592788},
doi = {10.1145/3592788},
abstract = {Children’s drawings have a wonderful inventiveness, creativity, and variety to them. We present a system that automatically animates children’s drawings of the human figure, is robust to the variance inherent in these depictions, and is simple and straightforward enough for anyone to use. We demonstrate the value and broad appeal of our approach by building and releasing the Animated Drawings Demo, a freely available public website that has been used by millions of people around the world. We present a set of experiments exploring the amount of training data needed for fine-tuning, as well as a perceptual study demonstrating the appeal of a novel twisted perspective retargeting technique. Finally, we introduce the Amateur Drawings Dataset, a first-of-its-kind annotated dataset, collected via the public demo, containing over 178,000 amateur drawings and corresponding user-accepted character bounding boxes, segmentation masks, and joint location annotations.},
journal = {ACM Trans. Graph.},
month = {jun},
articleno = {32},
numpages = {15},
keywords = {2D animation, motion retargeting, motion stylization, Skeletal animation}
}

Amateur Drawings Dataset

To obtain the Amateur Drawings Dataset, run the following two commands from the command line:

# download annotations (~275 MB)
wget https://dl.fbaipublicfiles.com/amateur_drawings/amateur_drawings_annotations.json

# download images (~50 GB)
wget https://dl.fbaipublicfiles.com/amateur_drawings/amateur_drawings.tar

If you have feedback about the dataset, please fill out this form.
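
The annotation file is plain JSON, so you can inspect it before committing to the 50 GB image download. A minimal sketch; the record schema isn't documented here, so print a sample entry rather than assuming field names:

    # Minimal sketch: peek at the annotations without assuming a schema.
    import json

    with open('amateur_drawings_annotations.json') as f:
        annotations = json.load(f)

    print(type(annotations).__name__)
    if isinstance(annotations, list):
        print(len(annotations), 'records')
        print(annotations[0])  # inspect one record's fields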

Trained Model Weights

Trained model weights for human-like figure detection and pose estimation are included in the repo releases. Model weights are released under the MIT license. The .mar files were generated using the OpenMMLab framework (OpenMMDet Apache 2.0 License, OpenMMPose Apache 2.0 License).

As-Rigid-As-Possible Shape Manipulation

These characters are deformed using As-Rigid-As-Possible (ARAP) shape manipulation. We have a Python implementation of the algorithm, located here, that might be of use to other developers.
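
For reference, ARAP methods deform a mesh by keeping each vertex neighborhood as close to a rigid transformation as possible. The general energy being minimized is usually written as follows (this is the standard formulation; the repo's implementation may differ in details):

    E(v') = \sum_i \sum_{j \in N(i)} w_{ij} \, \| (v'_i - v'_j) - R_i (v_i - v_j) \|^2

where v and v' are the rest and deformed vertex positions, N(i) is the set of neighbors of vertex i, w_{ij} are per-edge weights, and R_i is the best-fit rotation for vertex i's neighborhood.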

License

Animated Drawings is released under the MIT license.

animateddrawings's People

Contributors

chentiao, curtisgibby, eltociear, gravityrail, hjessmith, hossinasaadi, jonmcoe, kianmeng, pringshia, shahryarsaljoughi, sunshinelist, urjitbhatia, xyf001, yihleego, zyckk4


animateddrawings's Issues

Question about adjusting animation speed and BVH joints

I waited a long time for this to be open sourced. After reading the guide, I made my own drawings, BVH files, and animations; they work. Some questions:

1. The result (GIF or MP4) is slower than the reference video I used to generate the BVH with Rokoko. How do I adjust parameters to match the source video's speed?

2. The BVH files exported from Rokoko or downloaded elsewhere have no LeftHandEnd or RightHandEnd joints. What am I doing wrong?

Turning on USE_MESA: True makes execution fail

When USE_MESA is False, the code executes fine, but when I set USE_MESA to True, the program reports the error below (screenshots of both runs were attached to the issue).

    Python 3.11.3 (main, Apr 7 2023, 19:25:52) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from animated_drawings import render
    >>> render.start('./examples/config/mvc/export_mp4_example.yaml')
    Writing video to: /Users/chejinsong/Desktop/animated-server/AnimatedDrawings/video.mp4
    100%|██████████| 779/779 [00:49<00:00, 15.60it/s]
    >>> render.start('./examples/config/mvc/export_mp4_example.yaml')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/chejinsong/Desktop/animated-server/AnimatedDrawings/animated_drawings/render.py", line 17, in start
        view = View.create_view(cfg.view)
      File "/Users/chejinsong/Desktop/animated-server/AnimatedDrawings/animated_drawings/view/view.py", line 43, in create_view
        from animated_drawings.view.mesa_view import MesaView
      File "/Users/chejinsong/Desktop/animated-server/AnimatedDrawings/animated_drawings/view/mesa_view.py", line 8, in <module>
        from OpenGL import GL, osmesa
      File "/usr/local/lib/python3.11/site-packages/OpenGL/osmesa/__init__.py", line 2, in <module>
        from OpenGL.raw.osmesa.mesa import *
      File "/usr/local/lib/python3.11/site-packages/OpenGL/raw/osmesa/mesa.py", line 41, in <module>
        @_f
      File "/usr/local/lib/python3.11/site-packages/OpenGL/raw/osmesa/mesa.py", line 10, in _f
        function,_p.PLATFORM.OSMesa,
    AttributeError: 'DarwinPlatform' object has no attribute 'OSMesa'

Accept 2D motion data from webcam

Thanks for replying on HN. I think it'd be super fun to do this with the family: scan in the kids' drawings, turn on our webcam, and start moving around so that the drawn characters mimic our motions. Then all we have to do is record that, overlay some audio / voice, and we have ourselves a little makeshift family-friendly movie studio :)

Understandably this would be a ton of work but I just wanted to put it out there if there's interest!

How to make a character

Is there a convenient tool that can export char_cfg.yaml, mask.png, joint_overlay, and texture.png?

AssertionError,when I use my own BVH file

Thanks to the authors for their outstanding work!
I encountered the following error when using a BVH file I extracted myself to drive the drawing animation. Where can I fix this?

    >>> from animated_drawings import render
    >>> render.start('./examples/config/mvc/interactive_window_example.yaml')
    CRITICAL:root:framenum specified (1482) and found (1483) do not match
    CRITICAL:root:Error loading BVH: framenum specified (1482) and found (1483) do not match
    Traceback (most recent call last):
      File "D:\pythonProject\AnimatedDrawings\animated_drawings\model\retargeter.py", line 34, in __init__
        self.bvh = BVH.from_file(str(motion_cfg.bvh_p), motion_cfg.start_frame_idx, motion_cfg.end_frame_idx)
      File "D:\pythonProject\AnimatedDrawings\animated_drawings\model\bvh.py", line 162, in from_file
        assert False, msg
    AssertionError: framenum specified (1482) and found (1483) do not match

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "D:\pythonProject\AnimatedDrawings\animated_drawings\render.py", line 21, in start
        scene = Scene(cfg.scene)
      File "D:\pythonProject\AnimatedDrawings\animated_drawings\model\scene.py", line 30, in __init__
        ad = AnimatedDrawing(*each)
      File "D:\pythonProject\AnimatedDrawings\animated_drawings\model\animated_drawing.py", line 255, in __init__
        self._initialize_retargeter_bvh(motion_cfg, retarget_cfg)
      File "D:\pythonProject\AnimatedDrawings\animated_drawings\model\animated_drawing.py", line 317, in _initialize_retargeter_bvh
        self.retargeter = Retargeter(motion_cfg, retarget_cfg)
      File "D:\pythonProject\AnimatedDrawings\animated_drawings\model\retargeter.py", line 38, in __init__
        assert False, msg
    AssertionError: Error loading BVH: framenum specified (1482) and found (1483) do not match

Possible to export mesh?

Hey there, awesome project! Thanks for making it available.

Would it be possible to export an animated character to, say, GLB/glTF?

How to obtain motion config parameters from Rokoko?

Official example:

filepath: examples/bvh/fair1/dab.bvh
start_frame_idx: 0
end_frame_idx: 339 
groundplane_joint: LeftFoot
forward_perp_joint_vectors:
  - - LeftShoulder
    - RightShoulder
  - - LeftUpLeg
    - RightUpLeg
scale: 0.025
up: +z

I obtained the motion as BVH files through Rokoko, but how do I know what the end_frame_idx should be? It can't be obtained from Rokoko, so how should I set it?

AttributeError: dlsym(0x2068fe020, CGLGetCurrentContext): symbol not found

Python 3.10 (already tried with 3.9 and 3.11)
macOS Monterey 12.6.5
PyOpenGL 3.1.5 (already tried with 3.1.6)

Error log:

    >>> from animated_drawings import render
    >>> render.start('./examples/config/mvc/interactive_window_example.yaml')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/tg/Projects/ZorbinLabs/app/backend/AnimatedDrawings/animated_drawings/render.py", line 17, in start
        view = View.create_view(cfg.view)
      File "/Users/tg/Projects/ZorbinLabs/app/backend/AnimatedDrawings/animated_drawings/view/view.py", line 46, in create_view
        from animated_drawings.view.window_view import WindowView
      File "/Users/tg/Projects/ZorbinLabs/app/backend/AnimatedDrawings/animated_drawings/view/window_view.py", line 6, in <module>
        from animated_drawings.view.shaders.shader import Shader
      File "/Users/tg/Projects/ZorbinLabs/app/backend/AnimatedDrawings/animated_drawings/view/shaders/shader.py", line 5, in <module>
        import OpenGL.GL as GL
      File "/Users/tg/Projects/ZorbinLabs/app/backend/AnimatedDrawings/ad/lib/python3.10/site-packages/OpenGL/GL/__init__.py", line 4, in <module>
        from OpenGL.GL.VERSION.GL_1_1 import *
      File "/Users/tg/Projects/ZorbinLabs/app/backend/AnimatedDrawings/ad/lib/python3.10/site-packages/OpenGL/GL/VERSION/GL_1_1.py", line 14, in <module>
        from OpenGL.raw.GL.VERSION.GL_1_1 import *
      File "/Users/tg/Projects/ZorbinLabs/app/backend/AnimatedDrawings/ad/lib/python3.10/site-packages/OpenGL/raw/GL/VERSION/GL_1_1.py", line 7, in <module>
        from OpenGL.raw.GL import _errors
      File "/Users/tg/Projects/ZorbinLabs/app/backend/AnimatedDrawings/ad/lib/python3.10/site-packages/OpenGL/raw/GL/_errors.py", line 4, in <module>
        _error_checker = _ErrorChecker( _p, _p.GL.glGetError )
      File "/Users/tg/Projects/ZorbinLabs/app/backend/AnimatedDrawings/ad/lib/python3.10/site-packages/OpenGL/error.py", line 183, in __init__
        self._isValid = platform.CurrentContextIsValid
      File "/Users/tg/Projects/ZorbinLabs/app/backend/AnimatedDrawings/ad/lib/python3.10/site-packages/OpenGL/platform/baseplatform.py", line 15, in __get__
        value = self.fget( obj )
      File "/Users/tg/Projects/ZorbinLabs/app/backend/AnimatedDrawings/ad/lib/python3.10/site-packages/OpenGL/platform/baseplatform.py", line 356, in CurrentContextIsValid
        return self.GetCurrentContext
      File "/Users/tg/Projects/ZorbinLabs/app/backend/AnimatedDrawings/ad/lib/python3.10/site-packages/OpenGL/platform/baseplatform.py", line 15, in __get__
        value = self.fget( obj )
      File "/Users/tg/Projects/ZorbinLabs/app/backend/AnimatedDrawings/ad/lib/python3.10/site-packages/OpenGL/platform/darwin.py", line 62, in GetCurrentContext
        return self.CGL.CGLGetCurrentContext
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ctypes/__init__.py", line 387, in __getattr__
        func = self.__getitem__(name)
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ctypes/__init__.py", line 392, in __getitem__
        func = self._FuncPtr((name_or_ordinal, self))
    AttributeError: dlsym(0x2068fe020, CGLGetCurrentContext): symbol not found

Is there a perfect method to export background images?

Hi AnimatedDrawings developers,

First let me thank you for this project, it's really great! 🎉🎉🎉

I noticed that it supports background images, but I didn't find code in the project to export a background image, so I added the following code after "save mask":

    # save mask
    cv2.imwrite(str(outdir/'mask.png'), mask)

    # save background: inpaint the character's bounding box so only the backdrop remains
    full_mask = np.zeros((img.shape[0], img.shape[1]), dtype=np.uint8)
    full_mask[t:b, l:r] = mask
    background = cv2.inpaint(img, full_mask, 3, cv2.INPAINT_TELEA)
    cv2.imwrite(str(outdir/'background.png'), background)

Exec cmd:

python image_to_annotations.py drawings/garlic.png garlic_out

The background image is not perfect, so I wonder if there is an example, or could you please add a demo?

I've never used OpenCV or Torch before and only started learning them because of this project, so I hope my question isn't too stupid.

AttributeError: 'dict' object has no attribute 'sort'

OS: Ubuntu 20.04.3 LTS

I tried to process my own picture. When I run the command python image_to_animation.py drawings/garlic.png garlic_out, I get this error:

    Traceback (most recent call last):
      File "image_to_animation.py", line 41, in <module>
        image_to_animation(img_fn, char_anno_dir, motion_cfg_fn, retarget_cfg_fn)
      File "image_to_animation.py", line 19, in image_to_animation
        image_to_annotations(img_fn, char_anno_dir)
      File "/home/dhs/下载/AnimatedDrawings/examples/image_to_annotations.py", line 60, in image_to_annotations
        detection_results.sort(key=lambda x: x['score'], reverse=True)
    AttributeError: 'dict' object has no attribute 'sort'

I added logging and debugged the project, and got this:

    DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): localhost:8080
    DEBUG:urllib3.connectionpool:http://localhost:8080 "POST /predictions/drawn_humanoid_detector HTTP/1.1" 507 84
    INFO:root:{'code': 507, 'type': 'InternalServerException', 'message': 'Worker died.'}

Segmentation fault when saving to video or gif

    $ python gif.py
    Imports successful!
    Writing video to: /home/usr/projects/animated-drawing/video.gif
    100%|██████████| 339/339 [00:10<00:00, 33.21it/s]
    Segmentation fault

The same thing happens when saving to a video: it finishes 100% and then segfaults.

ValueError: Input X contains NaN. PCA does not accept missing values encoded as NaN natively.

ValueError: Input X contains NaN.
PCA does not accept missing values encoded as NaN natively. For supervised learning, you might want to consider sklearn.ensemble.HistGradientBoostingClassifier and Regressor which accept missing values encoded as NaNs natively. Alternatively, it is possible to preprocess the data, for instance by using an imputer transformer in a pipeline or drop samples with missing values. See https://scikit-learn.org/stable/modules/impute.html You can find a list of all estimators that handle NaN values at the following page: https://scikit-learn.org/stable/modules/impute.html#estimators-that-handle-nan-values

This is my setup:

bvh file: custom_hik.bvh (exported from Rokoko)
my export settings: (see attached screenshot)
motion config: custom.yaml
retarget config: custom_retarget.yaml
(configs attached as custom.zip)

Thank you in advance.

Getting this error: Segmentation fault (core dumped)

A large number of these processes are running:

    /opt/conda/bin/python /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py --sock-type unix --sock-name /tmp/.ts.sock.9044 --metrics-config /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml

They take up a lot of memory. How can I solve this?

Application for overlays?

Firstly, this is supreme! Both kids have been glued to it since getting out of school today!

This is unrelated to the codebase and specific to the header image/video you have: you have the animations overlaid on another video. Which application did you use to achieve this?

How can I make it run on the GPU?

When I run the sample code:

    from animated_drawings import render
    render.start('./examples/config/mvc/export_mp4_example.yaml')

I find that GPU usage is 0. How can I make it run on the GPU?

ModuleNotFoundError: No module named 'requests'

When I run this command, I get the following error. Help!

I installed the Docker environment as instructed, so what's the problem?

    ➜ examples python3 image_to_animation.py drawings/garlic.png garlic_out

    Traceback (most recent call last):
      File "/Users/jinhyeogjun/Desktop/ai/AnimatedDrawings/examples/image_to_animation.py", line 5, in <module>
        from image_to_annotations import image_to_annotations
      File "/Users/jinhyeogjun/Desktop/ai/AnimatedDrawings/examples/image_to_annotations.py", line 6, in <module>
        import requests
    ModuleNotFoundError: No module named 'requests'

The settings of exporting BVH from Rokoko

  1. Attached are the settings for exporting BVH from Rokoko. I tried many options, but it always shows errors like:
CRITICAL:root:framenum specified (2311) and found (2312) do not match
CRITICAL:root:Error loading BVH: framenum specified (2311) and found (2312) do not match
Traceback (most recent call last):
  File "f:\animateddrawings\animated_drawings\model\retargeter.py", line 34, in __init__
    self.bvh = BVH.from_file(str(motion_cfg.bvh_p), motion_cfg.start_frame_idx, motion_cfg.end_frame_idx)
  File "f:\animateddrawings\animated_drawings\model\bvh.py", line 162, in from_file
    assert False, msg
AssertionError: framenum specified (2311) and found (2312) do not match

What are the correct export settings?

  2. Within the rokoko example, the "end_frame_idx" is null. I tried it with an int value as well (screenshot attached), but it still shows errors. I need some help to figure this out.

Thanks!

Help: can you provide a Docker image?

Can you provide a Docker image? When I run docker build -t docker_torchserve ., it always fails; the network often times out while Docker installs the environment.

First curl returned { "status": "Healthy" }; the next returned curl: (52) Empty reply from server

Hello. OS: Windows 10.
I installed Docker for Windows and configured WSL 2, then ran the command from the instructions, docker build -t docker_torchserve .
This is the final report:

[+] Building 4.1s (21/21) FINISHED
 => [internal] load build definition from Dockerfile                                                               0.1s
 => => transferring dockerfile: 32B                                                                                0.0s
 => [internal] load .dockerignore                                                                                  0.1s
 => => transferring context: 2B                                                                                    0.0s
 => resolve image config for docker.io/docker/dockerfile:1.2                                                       2.1s
 => [auth] docker/dockerfile:pull token for registry-1.docker.io                                                   0.0s
 => CACHED docker-image://docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313  0.0s
 => [internal] load metadata for docker.io/continuumio/miniconda3:latest                                           1.5s
 => [auth] continuumio/miniconda3:pull token for registry-1.docker.io                                              0.0s
 => [internal] load build context                                                                                  0.0s
 => => transferring context: 39B                                                                                   0.0s
 => [ 1/12] FROM docker.io/continuumio/miniconda3@sha256:10b38c9a8a51692838ce4517e8c74515499b68d58c8a2000d8a9df7f  0.0s
 => CACHED [ 2/12] RUN apt-get update &&     DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommen  0.0s
 => CACHED [ 3/12] RUN pip install openmim                                                                         0.0s
 => CACHED [ 4/12] RUN pip install torch                                                                           0.0s
 => CACHED [ 5/12] RUN mim install mmcv-full==1.7.0                                                                0.0s
 => CACHED [ 6/12] RUN pip install mmpose==0.29.0                                                                  0.0s
 => CACHED [ 7/12] RUN pip install mmdet==2.27.0                                                                   0.0s
 => CACHED [ 8/12] RUN pip install torchserve                                                                      0.0s
 => CACHED [ 9/12] RUN mkdir -p /home/torchserve/model-store                                                       0.0s
 => CACHED [10/12] RUN wget https://github.com/facebookresearch/AnimatedDrawings/releases/download/v0.0.1/drawn_h  0.0s
 => CACHED [11/12] RUN wget https://github.com/facebookresearch/AnimatedDrawings/releases/download/v0.0.1/drawn_h  0.0s
 => CACHED [12/12] COPY config.properties /home/torchserve/config.properties                                       0.0s
 => exporting to image                                                                                             0.1s
 => => exporting layers                                                                                            0.0s
 => => writing image sha256:bdfaac7896d3b16f80898aedc414a1fa86d541db49784427477a55fd0a959394                       0.0s
 => => naming to docker.io/library/docker_torchserve

Then I ran the command docker run -d --name docker_torchserve -p 8080:8080 -p 8081:8081 docker_torchserve; the final report was:

    35244d4692fbe95e967c8a3994df95541e97c9b2214ff6099de438ac25de9e11

Then I ran curl http://localhost:8080/ping and got:

    {
      "status": "Healthy"
    }

But when I ran the same command, curl http://localhost:8080/ping, again, I got: curl: (52) Empty reply from server

I don't know if this is a bug. This is my Docker container log:

2023-04-22 18:27:27 WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2023-04-22 18:27:40 2023-04-22T10:27:38,052 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Initializing plugins manager...
2023-04-22 18:27:50 2023-04-22T10:27:50,640 [INFO ] main org.pytorch.serve.ModelServer - 
2023-04-22 18:27:50 Torchserve version: 0.7.1
2023-04-22 18:27:50 TS Home: /opt/conda/lib/python3.10/site-packages
2023-04-22 18:27:50 Current directory: /
2023-04-22 18:27:50 Temp directory: /tmp
2023-04-22 18:27:50 Metrics config path: /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml
2023-04-22 18:27:50 Number of GPUs: 0
2023-04-22 18:27:50 Number of CPUs: 16
2023-04-22 18:27:50 Max heap size: 3164 M
2023-04-22 18:27:50 Python executable: /opt/conda/bin/python
2023-04-22 18:27:50 Config file: /home/torchserve/config.properties
2023-04-22 18:27:50 Inference address: http://0.0.0.0:8080
2023-04-22 18:27:50 Management address: http://0.0.0.0:8081
2023-04-22 18:27:50 Metrics address: http://0.0.0.0:8082
2023-04-22 18:27:50 Model Store: /home/torchserve/model-store
2023-04-22 18:27:50 Initial Models: all
2023-04-22 18:27:50 Log dir: /logs
2023-04-22 18:27:50 Metrics dir: /logs
2023-04-22 18:27:50 Netty threads: 0
2023-04-22 18:27:50 Netty client threads: 0
2023-04-22 18:27:50 Default workers per model: 16
2023-04-22 18:27:50 Blacklist Regex: N/A
2023-04-22 18:27:50 Maximum Response Size: 6553500
2023-04-22 18:27:50 Maximum Request Size: 6553500
2023-04-22 18:27:50 Limit Maximum Image Pixels: true
2023-04-22 18:27:50 Prefer direct buffer: false
2023-04-22 18:27:50 Allowed Urls: [file://.*|http(s)?://.*]
2023-04-22 18:27:50 Custom python dependency for model allowed: false
2023-04-22 18:27:50 Metrics report format: prometheus
2023-04-22 18:27:50 Enable metrics API: true
2023-04-22 18:27:50 Workflow Store: /home/torchserve/model-store
2023-04-22 18:27:50 Model config: N/A
2023-04-22 18:27:50 2023-04-22T10:27:50,650 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager -  Loading snapshot serializer plugin...
2023-04-22 18:27:50 2023-04-22T10:27:50,667 [DEBUG] main org.pytorch.serve.ModelServer - Loading models from model store: drawn_humanoid_pose_estimator.mar
2023-04-22 18:28:54 2023-04-22T10:28:54,471 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model drawn_humanoid_pose_estimator
2023-04-22 18:28:54 2023-04-22T10:28:54,472 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model drawn_humanoid_pose_estimator
2023-04-22 18:28:54 2023-04-22T10:28:54,472 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model drawn_humanoid_pose_estimator loaded.
2023-04-22 18:28:54 2023-04-22T10:28:54,472 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: drawn_humanoid_pose_estimator, count: 16
2023-04-22 18:28:54 2023-04-22T10:28:54,485 [DEBUG] W-9009-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9009, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,484 [DEBUG] W-9004-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9004, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,484 [DEBUG] W-9001-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9001, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,486 [DEBUG] main org.pytorch.serve.ModelServer - Loading models from model store: drawn_humanoid_detector.mar
2023-04-22 18:28:54 2023-04-22T10:28:54,484 [DEBUG] W-9006-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9006, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,484 [DEBUG] W-9005-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9005, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,485 [DEBUG] W-9008-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9008, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,486 [DEBUG] W-9010-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9010, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,484 [DEBUG] W-9000-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9000, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,487 [DEBUG] W-9012-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9012, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,484 [DEBUG] W-9007-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9007, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,484 [DEBUG] W-9003-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9003, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,484 [DEBUG] W-9002-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9002, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,486 [DEBUG] W-9013-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9013, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,486 [DEBUG] W-9011-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9011, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,487 [DEBUG] W-9014-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9014, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:28:54 2023-04-22T10:28:54,487 [DEBUG] W-9015-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9015, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:00 2023-04-22T10:29:00,988 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9011
2023-04-22 18:29:00 2023-04-22T10:29:00,988 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9007
2023-04-22 18:29:00 2023-04-22T10:29:00,988 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9002
2023-04-22 18:29:00 2023-04-22T10:29:00,989 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9014
2023-04-22 18:29:00 2023-04-22T10:29:00,988 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9001
2023-04-22 18:29:00 2023-04-22T10:29:00,991 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9005
2023-04-22 18:29:00 2023-04-22T10:29:00,992 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:00 2023-04-22T10:29:00,993 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9000
2023-04-22 18:29:00 2023-04-22T10:29:00,994 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:00 2023-04-22T10:29:00,994 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9004
2023-04-22 18:29:00 2023-04-22T10:29:00,994 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9003
2023-04-22 18:29:00 2023-04-22T10:29:00,994 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9010
2023-04-22 18:29:00 2023-04-22T10:29:00,994 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:00 2023-04-22T10:29:00,995 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]110
2023-04-22 18:29:00 2023-04-22T10:29:00,995 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:00 2023-04-22T10:29:00,995 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:00 2023-04-22T10:29:00,997 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:00 2023-04-22T10:29:00,993 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9013
2023-04-22 18:29:00 2023-04-22T10:29:00,993 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]108
2023-04-22 18:29:00 2023-04-22T10:29:00,994 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9015
2023-04-22 18:29:00 2023-04-22T10:29:00,997 [DEBUG] W-9011-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9011-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:00 2023-04-22T10:29:00,997 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:00 2023-04-22T10:29:00,993 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:00 2023-04-22T10:29:00,993 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9012
2023-04-22 18:29:00 2023-04-22T10:29:00,997 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9006
2023-04-22 18:29:00 2023-04-22T10:29:00,998 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:00 2023-04-22T10:29:00,993 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9008
2023-04-22 18:29:00 2023-04-22T10:29:00,998 [DEBUG] W-9007-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9007-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:00 2023-04-22T10:29:00,996 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]111
2023-04-22 18:29:00 2023-04-22T10:29:00,997 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:00 2023-04-22T10:29:00,999 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]79
2023-04-22 18:29:00 2023-04-22T10:29:00,998 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]106
2023-04-22 18:29:00 2023-04-22T10:29:00,998 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:00 2023-04-22T10:29:00,998 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]109
2023-04-22 18:29:00 2023-04-22T10:29:00,999 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:00 2023-04-22T10:29:00,995 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:00 2023-04-22T10:29:00,996 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]112
2023-04-22 18:29:00 2023-04-22T10:29:00,999 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:00 2023-04-22T10:29:00,999 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:01 2023-04-22T10:29:00,995 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9009
2023-04-22 18:29:01 2023-04-22T10:29:00,999 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:01 2023-04-22T10:29:01,000 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]78
2023-04-22 18:29:01 2023-04-22T10:29:00,999 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:01 2023-04-22T10:29:01,000 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:01 2023-04-22T10:29:01,003 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:01 2023-04-22T10:29:01,003 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:01 2023-04-22T10:29:01,004 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]107
2023-04-22 18:29:01 2023-04-22T10:29:01,004 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:01 2023-04-22T10:29:01,004 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,005 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]113
2023-04-22 18:29:01 2023-04-22T10:29:01,004 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:01 2023-04-22T10:29:01,005 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:01 2023-04-22T10:29:01,005 [DEBUG] W-9010-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9010-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:01 2023-04-22T10:29:01,005 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,003 [DEBUG] W-9000-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:01 2023-04-22T10:29:01,003 [DEBUG] W-9003-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9003-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:01 2023-04-22T10:29:01,005 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,000 [DEBUG] W-9013-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9013-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:01 2023-04-22T10:29:01,006 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]82
2023-04-22 18:29:01 2023-04-22T10:29:01,006 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:01 2023-04-22T10:29:01,006 [DEBUG] W-9014-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9014-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:01 2023-04-22T10:29:01,006 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,006 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]83
2023-04-22 18:29:01 2023-04-22T10:29:01,007 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]77
2023-04-22 18:29:01 2023-04-22T10:29:01,007 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]81
2023-04-22 18:29:01 2023-04-22T10:29:01,008 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]80
2023-04-22 18:29:01 2023-04-22T10:29:01,008 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,008 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:01 2023-04-22T10:29:01,008 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:01 2023-04-22T10:29:01,008 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,008 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:01 2023-04-22T10:29:01,004 [DEBUG] W-9001-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:01 2023-04-22T10:29:01,008 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:01 2023-04-22T10:29:01,008 [DEBUG] W-9006-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9006-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:01 2023-04-22T10:29:01,009 [DEBUG] W-9012-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9012-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:01 2023-04-22T10:29:01,009 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,008 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:01 2023-04-22T10:29:01,009 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,009 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]105
2023-04-22 18:29:01 2023-04-22T10:29:01,011 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,011 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,011 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:01 2023-04-22T10:29:01,011 [DEBUG] W-9008-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9008-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:01 2023-04-22T10:29:01,011 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:01 2023-04-22T10:29:01,011 [DEBUG] W-9002-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9002-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:01 2023-04-22T10:29:01,008 [DEBUG] W-9005-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9005-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:01 2023-04-22T10:29:01,011 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,011 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:01 2023-04-22T10:29:01,011 [DEBUG] W-9009-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9009-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:01 2023-04-22T10:29:01,012 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:01 2023-04-22T10:29:01,012 [DEBUG] W-9015-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9015-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:01 2023-04-22T10:29:01,008 [DEBUG] W-9004-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9004-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:01 2023-04-22T10:29:01,013 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,015 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,015 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,015 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9009
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9010
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9015
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9001
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9004
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9014
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9006
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9005
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9008
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9012
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9003
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9011
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9013
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9002
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2023-04-22 18:29:01 2023-04-22T10:29:01,027 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9007
2023-04-22 18:29:01 2023-04-22T10:29:01,138 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9008.
2023-04-22 18:29:01 2023-04-22T10:29:01,139 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9004.
2023-04-22 18:29:01 2023-04-22T10:29:01,139 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9001.
2023-04-22 18:29:01 2023-04-22T10:29:01,139 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9005.
2023-04-22 18:29:01 2023-04-22T10:29:01,139 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9012.
2023-04-22 18:29:01 2023-04-22T10:29:01,139 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9010.
2023-04-22 18:29:01 2023-04-22T10:29:01,139 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9007.
2023-04-22 18:29:01 2023-04-22T10:29:01,139 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9009.
2023-04-22 18:29:01 2023-04-22T10:29:01,139 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9000.
2023-04-22 18:29:01 2023-04-22T10:29:01,138 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9006.
2023-04-22 18:29:01 2023-04-22T10:29:01,138 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9011.
2023-04-22 18:29:01 2023-04-22T10:29:01,137 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9014.
2023-04-22 18:29:01 2023-04-22T10:29:01,137 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9015.
2023-04-22 18:29:01 2023-04-22T10:29:01,138 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9013.
2023-04-22 18:29:01 2023-04-22T10:29:01,140 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9003.
2023-04-22 18:29:01 2023-04-22T10:29:01,140 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9002.
2023-04-22 18:29:01 2023-04-22T10:29:01,146 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341146
2023-04-22 18:29:01 2023-04-22T10:29:01,147 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341147
2023-04-22 18:29:01 2023-04-22T10:29:01,147 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341147
2023-04-22 18:29:01 2023-04-22T10:29:01,146 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341146
2023-04-22 18:29:01 2023-04-22T10:29:01,147 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341147
2023-04-22 18:29:01 2023-04-22T10:29:01,146 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341146
2023-04-22 18:29:01 2023-04-22T10:29:01,147 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341147
2023-04-22 18:29:01 2023-04-22T10:29:01,146 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341146
2023-04-22 18:29:01 2023-04-22T10:29:01,147 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341147
2023-04-22 18:29:01 2023-04-22T10:29:01,147 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341147
2023-04-22 18:29:01 2023-04-22T10:29:01,147 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341147
2023-04-22 18:29:01 2023-04-22T10:29:01,147 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341147
2023-04-22 18:29:01 2023-04-22T10:29:01,147 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341147
2023-04-22 18:29:01 2023-04-22T10:29:01,147 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341147
2023-04-22 18:29:01 2023-04-22T10:29:01,147 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341147
2023-04-22 18:29:01 2023-04-22T10:29:01,147 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159341147
2023-04-22 18:29:01 2023-04-22T10:29:01,268 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,269 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,269 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,269 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,269 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,263 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,269 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,269 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,268 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,268 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,268 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,268 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,269 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,268 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,268 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:01 2023-04-22T10:29:01,268 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2023-04-22 18:29:03 2023-04-22T10:29:03,599 [WARN ] W-9008-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,599 [WARN ] W-9014-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,600 [WARN ] W-9012-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,600 [WARN ] W-9012-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,600 [WARN ] W-9014-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,600 [WARN ] W-9008-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,603 [WARN ] W-9007-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,604 [WARN ] W-9007-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,608 [WARN ] W-9011-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,608 [WARN ] W-9011-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,617 [WARN ] W-9013-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,618 [WARN ] W-9013-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,630 [WARN ] W-9000-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,631 [WARN ] W-9000-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,636 [WARN ] W-9004-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,636 [WARN ] W-9004-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,661 [WARN ] W-9006-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,661 [WARN ] W-9006-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,665 [WARN ] W-9002-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,665 [WARN ] W-9002-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,690 [WARN ] W-9003-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,691 [WARN ] W-9003-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,746 [WARN ] W-9010-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,747 [WARN ] W-9010-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,746 [WARN ] W-9001-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,747 [WARN ] W-9001-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,766 [WARN ] W-9009-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,767 [WARN ] W-9009-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,801 [WARN ] W-9015-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,802 [WARN ] W-9015-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:03 2023-04-22T10:29:03,828 [WARN ] W-9005-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG - /opt/conda/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
2023-04-22 18:29:03 2023-04-22T10:29:03,829 [WARN ] W-9005-drawn_humanoid_pose_estimator_1.0-stderr MODEL_LOG -   warnings.warn(
2023-04-22 18:29:05 2023-04-22T10:29:05,224 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model drawn_humanoid_detector
2023-04-22 18:29:05 2023-04-22T10:29:05,224 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model drawn_humanoid_detector
2023-04-22 18:29:05 2023-04-22T10:29:05,224 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model drawn_humanoid_detector loaded.
2023-04-22 18:29:05 2023-04-22T10:29:05,225 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: drawn_humanoid_detector, count: 16
2023-04-22 18:29:05 2023-04-22T10:29:05,226 [DEBUG] W-9017-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9017, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,227 [DEBUG] W-9018-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9018, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,226 [DEBUG] W-9016-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9016, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,227 [DEBUG] W-9019-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9019, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,270 [DEBUG] W-9026-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9026, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,270 [DEBUG] W-9021-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9021, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,270 [DEBUG] W-9023-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9023, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,270 [DEBUG] W-9022-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9022, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,270 [DEBUG] W-9025-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9025, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,270 [DEBUG] W-9024-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9024, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,270 [DEBUG] W-9020-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9020, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,342 [DEBUG] W-9027-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9027, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,418 [DEBUG] W-9028-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9028, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,430 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel.
2023-04-22 18:29:05 2023-04-22T10:29:05,469 [DEBUG] W-9029-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9029, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,470 [DEBUG] W-9030-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9030, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,470 [DEBUG] W-9031-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9031, --metrics-config, /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-04-22 18:29:05 2023-04-22T10:29:05,492 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://0.0.0.0:8080
2023-04-22 18:29:05 2023-04-22T10:29:05,493 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: EpollServerSocketChannel.
2023-04-22 18:29:05 2023-04-22T10:29:05,619 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://0.0.0.0:8081
2023-04-22 18:29:05 2023-04-22T10:29:05,619 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: EpollServerSocketChannel.
2023-04-22 18:29:05 2023-04-22T10:29:05,655 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://0.0.0.0:8082
2023-04-22 18:29:06 Model server started.
2023-04-22 18:29:06 2023-04-22T10:29:06,806 [WARN ] pool-3-thread-1 org.pytorch.serve.metrics.MetricCollector - worker pid is not available yet.
2023-04-22 18:29:07 2023-04-22T10:29:07,032 [INFO ] pool-3-thread-1 TS_METRICS - CPUUtilization.Percent:100.0|#Level:Host|#hostname:35244d4692fb,timestamp:1682159347
2023-04-22 18:29:07 2023-04-22T10:29:07,033 [INFO ] pool-3-thread-1 TS_METRICS - DiskAvailable.Gigabytes:220.7176971435547|#Level:Host|#hostname:35244d4692fb,timestamp:1682159347
2023-04-22 18:29:07 2023-04-22T10:29:07,033 [INFO ] pool-3-thread-1 TS_METRICS - DiskUsage.Gigabytes:17.44916534423828|#Level:Host|#hostname:35244d4692fb,timestamp:1682159347
2023-04-22 18:29:07 2023-04-22T10:29:07,033 [INFO ] pool-3-thread-1 TS_METRICS - DiskUtilization.Percent:7.3|#Level:Host|#hostname:35244d4692fb,timestamp:1682159347
2023-04-22 18:29:07 2023-04-22T10:29:07,034 [INFO ] pool-3-thread-1 TS_METRICS - MemoryAvailable.Megabytes:5480.96875|#Level:Host|#hostname:35244d4692fb,timestamp:1682159347
2023-04-22 18:29:07 2023-04-22T10:29:07,034 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUsed.Megabytes:6854.16796875|#Level:Host|#hostname:35244d4692fb,timestamp:1682159347
2023-04-22 18:29:07 2023-04-22T10:29:07,034 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUtilization.Percent:56.7|#Level:Host|#hostname:35244d4692fb,timestamp:1682159347
2023-04-22 18:29:07 2023-04-22T10:29:07,164 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,170 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,247 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,270 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,342 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,366 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,383 [INFO ] pool-2-thread-33 ACCESS_LOG - /172.17.0.1:50214 "GET /ping HTTP/1.1" 200 23
2023-04-22 18:29:07 2023-04-22T10:29:07,383 [INFO ] pool-2-thread-33 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:35244d4692fb,timestamp:1682159347
2023-04-22 18:29:07 2023-04-22T10:29:07,398 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,417 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,473 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,534 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,565 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,570 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,617 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9022
2023-04-22 18:29:07 2023-04-22T10:29:07,624 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-04-22 18:29:07 2023-04-22T10:29:07,625 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]770
2023-04-22 18:29:07 2023-04-22T10:29:07,626 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2023-04-22 18:29:07 2023-04-22T10:29:07,626 [DEBUG] W-9022-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9022-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2023-04-22 18:29:07 2023-04-22T10:29:07,626 [INFO ] W-9022-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9022
2023-04-22 18:29:07 2023-04-22T10:29:07,627 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.10.8
2023-04-22 18:29:07 2023-04-22T10:29:07,634 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - /172.17.0.1:50214 "GET /favicon.ico HTTP/1.1" 404 1
2023-04-22 18:29:07 2023-04-22T10:29:07,650 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,654 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:35244d4692fb,timestamp:1682159347
2023-04-22 18:29:07 2023-04-22T10:29:07,662 [INFO ] W-9022-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1682159347662
2023-04-22 18:29:07 2023-04-22T10:29:07,662 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9022.
2023-04-22 18:29:07 2023-04-22T10:29:07,729 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,769 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,770 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2023-04-22 18:29:07 2023-04-22T10:29:07,778 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - generated new fontManager
2023-04-22 18:29:07 2023-04-22T10:29:07,880 [INFO ] W-9023-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9023

Make retarget file

Thanks for making this awesome code available to everyone!

I am testing my own BVH data, which was made with Rokoko, but I couldn't get my own .bvh data to work with the retarget.yaml file.

Can I get some tips on the options for making the .bvh file, or on making the retargeting file as well?

When I run the code with my own data (.bvh, retarget.yaml, motion.yaml), I get an error like this:

CRITICAL:root:Could not find BVH joint with name: None
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/doubleme/Work/Lyndsey/1_Working/AnimatedDrawings/animated_drawings/render.py", line 21, in start
    scene = Scene(cfg.scene)
  File "/home/doubleme/Work/Lyndsey/1_Working/AnimatedDrawings/animated_drawings/model/scene.py", line 30, in __init__
    ad = AnimatedDrawing(*each)
  File "/home/doubleme/Work/Lyndsey/1_Working/AnimatedDrawings/animated_drawings/model/animated_drawing.py", line 255, in __init__
    self._initialize_retargeter_bvh(motion_cfg, retarget_cfg)
  File "/home/doubleme/Work/Lyndsey/1_Working/AnimatedDrawings/animated_drawings/model/animated_drawing.py", line 317, in _initialize_retargeter_bvh
    self.retargeter = Retargeter(motion_cfg, retarget_cfg)
  File "/home/doubleme/Work/Lyndsey/1_Working/AnimatedDrawings/animated_drawings/model/retargeter.py", line 57, in __init__
    skeleton_fwd: Vectors = self.bvh.get_skeleton_fwd(self.forward_perp_vector_joint_names)
  File "/home/doubleme/Work/Lyndsey/1_Working/AnimatedDrawings/animated_drawings/model/bvh.py", line 115, in get_skeleton_fwd
    assert False, msg
AssertionError: Could not find BVH joint with name: None
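For what it's worth, "joint with name: None" usually means the motion config did not supply the expected joint-name fields, so the retargeter ends up looking for a joint literally named None. For comparison, a minimal motion config sketch (keys follow the repo's example motion configs; the path, frame range, and joint names here are placeholders and must exactly match the ROOT/JOINT names declared in your BVH):

filepath: examples/bvh/rokoko/my_motion.bvh   # placeholder path
start_frame_idx: 0
end_frame_idx: 100                            # placeholder frame range
groundplane_joint: LeftFoot
forward_perp_joint_vectors:
  - - LeftShoulder
    - RightShoulder
  - - LeftUpLeg
    - RightUpLeg
scale: 0.025
up: +z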

Help: AttributeError: 'NoneType' object has no attribute 'decode'

OS: Ubuntu 18.04
I log in to the server remotely over SSH, without a display.

I get this error:


Python 3.8.13 (default, Oct 21 2022, 23:50:54)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> from animated_drawings import render
>>> render.start('./examples/config/mvc/interactive_window_example.yaml')
/home/ubuntu/miniconda3/envs/animated_drawings/lib/python3.8/site-packages/glfw/__init__.py:912: GLFWError: (65544) b'X11: The DISPLAY environment variable is missing'
  warnings.warn(message, GLFWError)
/home/ubuntu/miniconda3/envs/animated_drawings/lib/python3.8/site-packages/glfw/__init__.py:912: GLFWError: (65537) b'The GLFW library is not initialized'
  warnings.warn(message, GLFWError)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ubuntu/AnimatedDrawings/animated_drawings/render.py", line 17, in start
    view = View.create_view(cfg.view)
  File "/home/ubuntu/AnimatedDrawings/animated_drawings/view/view.py", line 47, in create_view
    return WindowView(view_cfg)
  File "/home/ubuntu/AnimatedDrawings/animated_drawings/view/window_view.py", line 34, in __init__
    self._create_window(*cfg.window_dimensions)  # pyright: ignore[reportGeneralTypeIssues]
  File "/home/ubuntu/AnimatedDrawings/animated_drawings/view/window_view.py", line 126, in _create_window
    logging.info(f'OpenGL Version: {GL.glGetString(GL.GL_VERSION).decode()}')  # pyright: ignore[reportGeneralTypeIssues]
AttributeError: 'NoneType' object has no attribute 'decode'
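glGetString returning None here means no OpenGL context was ever created, which follows from the first GLFW warning: there is no X11 DISPLAY in a remote SSH session. One hedged workaround is to run under a virtual framebuffer; a sketch, assuming a Debian/Ubuntu host with Mesa software rendering available:

# install a virtual X server, then run the export inside it
sudo apt-get install xvfb
xvfb-run -s "-screen 0 1280x720x24" python -c "
from animated_drawings import render
render.start('./examples/config/mvc/export_mp4_example.yaml')
"

Using the MP4 export config instead of interactive_window_example.yaml avoids needing a visible window at all. The view config also has a headless-rendering switch (USE_MESA: True) which, if available in your version, sidesteps GLFW entirely.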

AttributeError: 'dict' object has no attribute 'sort' in /examples/image_to_annotations.py

Hi, when I tried to run my own image after setting up the container and following the tutorial steps below, it showed an error message.

(animated_drawings) torchserve % cd ../examples
(animated_drawings) examples % python image_to_animation.py drawings/garlic.png garlic_out

Here is the error message:

Traceback (most recent call last):
  File "/Users/chen_yenru/Documents/GitHub/DATASCIENCE/ML/AnimatedDrawings/examples/image_to_animation.py", line 41, in <module>
    image_to_animation(img_fn, char_anno_dir, motion_cfg_fn, retarget_cfg_fn)
  File "/Users/chen_yenru/Documents/GitHub/DATASCIENCE/ML/AnimatedDrawings/examples/image_to_animation.py", line 19, in image_to_animation
    image_to_annotations(img_fn, char_anno_dir)
  File "/Users/chen_yenru/Documents/GitHub/DATASCIENCE/ML/AnimatedDrawings/examples/image_to_annotations.py", line 59, in image_to_annotations
    detection_results.sort(key=lambda x: x['score'], reverse=True)
    ^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'sort'

At first I thought it was a Python version difference, but list.sort() exists in Python 3 as well; the traceback shows that detection_results is a dict here, not a list.

Yet when I change the call to sorted(detection_results, key=lambda x: x['score'], reverse=True), it then returns a type error on the indices.

Here is the error message:
Traceback (most recent call last):
  File "/Users/chen_yenru/Documents/GitHub/DATASCIENCE/ML/AnimatedDrawings/examples/image_to_animation.py", line 41, in <module>
    image_to_animation(img_fn, char_anno_dir, motion_cfg_fn, retarget_cfg_fn)
  File "/Users/chen_yenru/Documents/GitHub/DATASCIENCE/ML/AnimatedDrawings/examples/image_to_animation.py", line 19, in image_to_animation
    image_to_annotations(img_fn, char_anno_dir)
  File "/Users/chen_yenru/Documents/GitHub/DATASCIENCE/ML/AnimatedDrawings/examples/image_to_annotations.py", line 59, in image_to_annotations
    sorted(detection_results, key=lambda x: x['score'], reverse=True)
  File "/Users/chen_yenru/Documents/GitHub/DATASCIENCE/ML/AnimatedDrawings/examples/image_to_annotations.py", line 59, in <lambda>
    sorted(detection_results, key=lambda x: x['score'], reverse=True)
    ~^^^^^^^^^
TypeError: string indices must be integers, not 'str'

How can I overcome this problem?
Thanks!
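Both tracebacks point the same way: the detector service returned a JSON object (a dict, typically an error payload) rather than a list of detections, so there is nothing sensible to sort. A hedged sketch of a guard that surfaces the real failure, with the endpoint taken from the TorchServe logs above (the exact request code in the repo's script may differ):

import json
import requests

# placeholder image; the real script posts the drawing passed on the command line
with open('drawings/garlic.png', 'rb') as f:
    resp = requests.post('http://localhost:8080/predictions/drawn_humanoid_detector',
                         files={'data': f})

detection_results = json.loads(resp.content)

# TorchServe reports failures as a dict, e.g. {"code": 503, "type": ..., "message": ...};
# a successful detection is a list of dicts, each with a 'score' key
if isinstance(detection_results, dict):
    raise RuntimeError(f'detector returned an error instead of detections: {detection_results}')

detection_results.sort(key=lambda x: x['score'], reverse=True)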

run my own image error @ /examples/image_to_annotations.py

Hi, when I tried to run my own image after setting up the container and following the tutorial, it showed an error message.
Here is the error message:

$ python image_to_animation.py drawings/garlic.png garlic_out
Traceback (most recent call last):
  File "F:\animate\AnimatedDrawings-0.0.1\examples\image_to_animation.py", line 39, in <module>
    image_to_animation(img_fn, char_anno_dir, motion_cfg_fn, retarget_cfg_fn)
  File "F:\animate\AnimatedDrawings-0.0.1\examples\image_to_animation.py", line 17, in image_to_animation
    image_to_annotations(img_fn, char_anno_dir)
  File "F:\animate\AnimatedDrawings-0.0.1\examples\image_to_annotations.py", line 50, in image_to_annotations
    detection_results = json.loads(resp.content)
  File "C:\Users\zhuxulu\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\zhuxulu\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\zhuxulu\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Python version: 3.9.2
OS version: Windows 11 Home

How should I solve the problem?
Thanks!
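A JSONDecodeError at "line 1 column 1 (char 0)" usually means the response body was empty or not JSON at all, e.g. the TorchServe container had not finished starting or is not reachable from the host. A quick hedged sanity check before running the example (the /ping endpoint appears in the server logs above):

import requests

# expect {'status': 'Healthy'} once the torchserve container is fully up
print(requests.get('http://localhost:8080/ping').json())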

Not grabbing entire drawing...

Hi,

so I'm starting from scratch here... on a clean new Mac. I have installed Anaconda and Docker - there goes 8GB of space :-) - and, following your excellent instructions, have got things running. I have used Python before, but I'm only a hobby-level coder.

I want to use my daughter's drawings - the first one worked fabulously. The second, however, gives many of these errors as it runs:
[image: birb]

point [[0.44142616 0.13921902]] not inside or on edge of any triangle in mesh. Skipping it

and then the result gif is like this...
[video]

... or when I upped the gutter on the image, making the canvas bigger...
[video3]

The generated mask image is way off...
[image: mask]

I tried using annotations_to_animation.py but the image was already clipped at the edges.

Do you have any tips for stylistic changes to the image which would make the image detection more reliable? (I tried adding a black stroke to the image and got this)... but it still clips...
[video]

Are there optimum dimensions or resolution? I tried ensuring alpha transparency in the png...

Aha... I think I've got it... I made the picture much bigger... 3000+ pixels, with the (small) image in the centre.
[image: birb]
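For anyone else trying the same trick, the padding step can be scripted; a minimal sketch, assuming Pillow is installed (filenames hypothetical):

# add a white gutter around the drawing before running image_to_animation.py
from PIL import Image

img = Image.open('birb.png').convert('RGB')
pad = 500  # white pixels added on each side; tune until nothing clips
canvas = Image.new('RGB', (img.width + 2 * pad, img.height + 2 * pad), 'white')
canvas.paste(img, (pad, pad))
canvas.save('birb_padded.png')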

THEN I got this...
[video]

...which is much better, but it's still clipping slightly... and when I try fix_annotations I see this...
[image: Fix_Annotations]

Our plan is to generate sprite sheets (or folders of numbered images) so she can make a game with them, so I assume I can modify the motion params (somewhere) for "walking", for example, so that the character doesn't move.

Lastly, that slight white border might get a bit annoying (given the image itself had no background)... it looks a bit like the characters from Paddington Bear when I was a kid (https://youtu.be/mClA14WQFu8?t=55), but we will be shrinking the generated image down a lot, so it may get lost in the shrinkage...

Thanks!

AssertionError: Error loading BVH: framenum - Animation Issue

I'm having trouble using my own animation with any of the example drawings.

I've gone through the steps of creating an animation in Rokoko and exported it with the Mixamo preset - I've tried various settings within that.

I've made a new motion config with the correct path to the bvh file and the same end frame count.
I've tried using the mixamo_fff & fair1_ppf retarget files.
I have tried using the character shown in the example and the garlic drawing as well.

I feel like I've exhausted every possible solution and I still get this same error...

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\IKONIX-Desktop-2\AnimatedDrawings\animated_drawings\render.py", line 21, in start
    scene = Scene(cfg.scene)
  File "C:\Users\IKONIX-Desktop-2\AnimatedDrawings\animated_drawings\model\scene.py", line 30, in __init__
    ad = AnimatedDrawing(*each)
  File "C:\Users\IKONIX-Desktop-2\AnimatedDrawings\animated_drawings\model\animated_drawing.py", line 255, in __init__
    self._initialize_retargeter_bvh(motion_cfg, retarget_cfg)
  File "C:\Users\IKONIX-Desktop-2\AnimatedDrawings\animated_drawings\model\animated_drawing.py", line 317, in _initialize_retargeter_bvh
    self.retargeter = Retargeter(motion_cfg, retarget_cfg)
  File "C:\Users\IKONIX-Desktop-2\AnimatedDrawings\animated_drawings\model\retargeter.py", line 38, in __init__
    assert False, msg
AssertionError: Error loading BVH: framenum specified (559) and found (560) do not match

I noticed that in the example bvh files the Frame Time = 0.0333333, whereas my export's = 0.011111.
Is this possibly the issue, or have I missed something else?
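In case it helps others: the assertion text suggests the mismatch lives in the exported file itself, with the Frames: header declaring 559 frames while 560 motion rows follow (possibly an off-by-one in the export). The Frame Time difference only affects playback speed and shouldn't trigger this particular check, which counts frames. A quick hedged way to confirm, assuming a conventionally laid-out BVH (filename is a placeholder):

# compare the declared 'Frames:' count against the motion rows in the file
with open('my_rokoko_export.bvh') as f:
    lines = f.read().splitlines()
motion_idx = lines.index('MOTION')
declared = int(lines[motion_idx + 1].split(':')[1])                 # 'Frames: N'
found = sum(1 for line in lines[motion_idx + 3:] if line.strip())   # frame rows
print(declared, found)  # a mismatch here is exactly what the assertion reports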

Thank you

ValueError: 'RightHandEnd' is not in list

I obtained this BVH file from Rokoko:

HIERARCHY
ROOT Hips
{
  OFFSET 0.0000 94.0000 0.0000
  CHANNELS 6 Xposition Yposition Zposition Yrotation Xrotation Zrotation
  JOINT Spine
  {
    OFFSET 0.0000 8.4839 -3.9483
    CHANNELS 3 Yrotation Xrotation Zrotation
    JOINT Spine1
    {
      OFFSET 7.5932 0.0000 0.0000
      CHANNELS 3 Yrotation Xrotation Zrotation
      JOINT Spine2
      {
        OFFSET 7.5932 0.0000 0.0000
        CHANNELS 3 Yrotation Xrotation Zrotation
        JOINT Spine3
        {
          OFFSET 12.8217 0.0000 0.0000
          CHANNELS 3 Yrotation Xrotation Zrotation
          JOINT LeftShoulder
          {
            OFFSET 12.4389 -7.0401 1.9147
            CHANNELS 3 Yrotation Xrotation Zrotation
            JOINT LeftArm
            {
              OFFSET 13.4599 0.0000 0.0000
              CHANNELS 3 Yrotation Xrotation Zrotation
              JOINT LeftForeArm
              {
                OFFSET 26.1001 0.0000 0.0001
                CHANNELS 3 Yrotation Xrotation Zrotation
                JOINT LeftHand
                {
                  OFFSET 28.9001 0.0000 0.0000
                  CHANNELS 3 Yrotation Xrotation Zrotation
                  JOINT LeftHandIndex1
                  {
                    OFFSET 8.0739 -0.1030 2.5612
                    CHANNELS 3 Yrotation Xrotation Zrotation
                    JOINT LeftHandIndex2
                    {
                      OFFSET 4.5805 0.0000 0.0000
                      CHANNELS 3 Yrotation Xrotation Zrotation
                      JOINT LeftHandIndex3
                      {
                        OFFSET 2.8181 0.0000 0.0000
                        CHANNELS 3 Yrotation Xrotation Zrotation
                        End Site
                        {
                          OFFSET 1.3566 0.0000 0.0000
                        }
                      }
                    }
                  }
                  JOINT LeftHandMiddle1
                  {
                    OFFSET 8.1366 -0.1030 0.5450
                    CHANNELS 3 Yrotation Xrotation Zrotation
                    JOINT LeftHandMiddle2
                    {
                      OFFSET 4.7024 0.0000 0.0000
                      CHANNELS 3 Yrotation Xrotation Zrotation
                      JOINT LeftHandMiddle3
                      {
                        OFFSET 3.1016 0.0000 0.0000
                        CHANNELS 3 Yrotation Xrotation Zrotation
                        End Site
                        {
                          OFFSET 1.4991 0.0000 0.0000
                        }
                      }
                    }
                  }
                  JOINT LeftHandPinky1
                  {
                    OFFSET 7.0181 -0.1031 -3.1368
                    CHANNELS 3 Yrotation Xrotation Zrotation
                    JOINT LeftHandPinky2
                    {
                      OFFSET 3.5541 0.0000 0.0000
                      CHANNELS 3 Yrotation Xrotation Zrotation
                      JOINT LeftHandPinky3
                      {
                        OFFSET 2.2340 0.0000 0.0000
                        CHANNELS 3 Yrotation Xrotation Zrotation
                        End Site
                        {
                          OFFSET 1.1205 0.0000 0.0000
                        }
                      }
                    }
                  }
                  JOINT LeftHandRing1
                  {
                    OFFSET 7.8067 -0.1031 -1.4405
                    CHANNELS 3 Yrotation Xrotation Zrotation
                    JOINT LeftHandRing2
                    {
                      OFFSET 4.3922 0.0000 0.0000
                      CHANNELS 3 Yrotation Xrotation Zrotation
                      JOINT LeftHandRing3
                      {
                        OFFSET 2.8327 0.0000 0.0000
                        CHANNELS 3 Yrotation Xrotation Zrotation
                        End Site
                        {
                          OFFSET 1.3500 0.0000 0.0000
                        }
                      }
                    }
                  }
                  JOINT LeftHandThumb1
                  {
                    OFFSET 2.0791 -0.1030 2.7863
                    CHANNELS 3 Yrotation Xrotation Zrotation
                    JOINT LeftHandThumb2
                    {
                      OFFSET 3.9619 0.0000 0.0000
                      CHANNELS 3 Yrotation Xrotation Zrotation
                      JOINT LeftHandThumb3
                      {
                        OFFSET 3.0072 0.0000 0.0000
                        CHANNELS 3 Yrotation Xrotation Zrotation
                        End Site
                        {
                          OFFSET 1.7295 0.0000 0.0000
                        }
                      }
                    }
                  }
                }
                JOINT LeftForeArmRoll
                {
                  OFFSET 14.0000 0.0000 0.0001
                  CHANNELS 3 Yrotation Xrotation Zrotation
                  End Site
                  {
                    OFFSET 7.0000 0.0000 0.0001
                  }
                }
              }
              JOINT LeftArmRoll
              {
                OFFSET 13.5000 0.0000 0.0001
                CHANNELS 3 Yrotation Xrotation Zrotation
                End Site
                {
                  OFFSET 6.7500 0.0000 0.0000
                }
              }
            }
          }
          JOINT Neck
          {
            OFFSET 20.2657 0.0000 0.0000
            CHANNELS 3 Yrotation Xrotation Zrotation
            JOINT Head
            {
              OFFSET 10.9846 0.0000 0.0000
              CHANNELS 3 Yrotation Xrotation Zrotation
              End Site
              {
                OFFSET 21.9692 0.0000 0.0000
              }
            }
          }
          JOINT RightShoulder
          {
            OFFSET 12.4389 7.0401 1.9147
            CHANNELS 3 Yrotation Xrotation Zrotation
            JOINT RightArm
            {
              OFFSET 13.4599 0.0000 0.0000
              CHANNELS 3 Yrotation Xrotation Zrotation
              JOINT RightForeArm
              {
                OFFSET 26.1001 0.0000 0.0000
                CHANNELS 3 Yrotation Xrotation Zrotation
                JOINT RightHand
                {
                  OFFSET 28.9001 0.0000 0.0001
                  CHANNELS 3 Yrotation Xrotation Zrotation
                  JOINT RightHandIndex1
                  {
                    OFFSET 8.0740 -0.1031 -2.5611
                    CHANNELS 3 Yrotation Xrotation Zrotation
                    JOINT RightHandIndex2
                    {
                      OFFSET 4.5805 0.0000 0.0000
                      CHANNELS 3 Yrotation Xrotation Zrotation
                      JOINT RightHandIndex3
                      {
                        OFFSET 2.8181 0.0000 0.0000
                        CHANNELS 3 Yrotation Xrotation Zrotation
                        End Site
                        {
                          OFFSET 1.3566 0.0000 0.0000
                        }
                      }
                    }
                  }
                  JOINT RightHandMiddle1
                  {
                    OFFSET 8.1366 -0.1031 -0.5450
                    CHANNELS 3 Yrotation Xrotation Zrotation
                    JOINT RightHandMiddle2
                    {
                      OFFSET 4.7024 0.0000 0.0000
                      CHANNELS 3 Yrotation Xrotation Zrotation
                      JOINT RightHandMiddle3
                      {
                        OFFSET 3.1016 0.0000 0.0000
                        CHANNELS 3 Yrotation Xrotation Zrotation
                        End Site
                        {
                          OFFSET 1.4991 0.0000 0.0000
                        }
                      }
                    }
                  }
                  JOINT RightHandPinky1
                  {
                    OFFSET 7.0181 -0.1030 3.1368
                    CHANNELS 3 Yrotation Xrotation Zrotation
                    JOINT RightHandPinky2
                    {
                      OFFSET 3.5541 0.0000 0.0000
                      CHANNELS 3 Yrotation Xrotation Zrotation
                      JOINT RightHandPinky3
                      {
                        OFFSET 2.2340 0.0000 0.0000
                        CHANNELS 3 Yrotation Xrotation Zrotation
                        End Site
                        {
                          OFFSET 1.1205 0.0000 0.0000
                        }
                      }
                    }
                  }
                  JOINT RightHandRing1
                  {
                    OFFSET 7.8068 -0.1030 1.4405
                    CHANNELS 3 Yrotation Xrotation Zrotation
                    JOINT RightHandRing2
                    {
                      OFFSET 4.3922 0.0000 0.0000
                      CHANNELS 3 Yrotation Xrotation Zrotation
                      JOINT RightHandRing3
                      {
                        OFFSET 2.8327 0.0000 0.0000
                        CHANNELS 3 Yrotation Xrotation Zrotation
                        End Site
                        {
                          OFFSET 1.3500 0.0000 0.0000
                        }
                      }
                    }
                  }
                  JOINT RightHandThumb1
                  {
                    OFFSET 2.0791 -0.1030 -2.7863
                    CHANNELS 3 Yrotation Xrotation Zrotation
                    JOINT RightHandThumb2
                    {
                      OFFSET 3.9619 0.0000 0.0000
                      CHANNELS 3 Yrotation Xrotation Zrotation
                      JOINT RightHandThumb3
                      {
                        OFFSET 3.0071 0.0000 0.0000
                        CHANNELS 3 Yrotation Xrotation Zrotation
                        End Site
                        {
                          OFFSET 1.7295 0.0000 0.0000
                        }
                      }
                    }
                  }
                }
                JOINT RightForeArmRoll
                {
                  OFFSET 14.0000 -0.0001 0.0001
                  CHANNELS 3 Yrotation Xrotation Zrotation
                  End Site
                  {
                    OFFSET 7.0000 -0.0001 0.0000
                  }
                }
              }
              JOINT RightArmRoll
              {
                OFFSET 13.5000 -0.0001 0.0000
                CHANNELS 3 Yrotation Xrotation Zrotation
                End Site
                {
                  OFFSET 6.7500 0.0000 0.0000
                }
              }
            }
          }
        }
      }
    }
  }
  JOINT LeftUpLeg
  {
    OFFSET 8.1000 0.0000 0.0000
    CHANNELS 3 Yrotation Xrotation Zrotation
    JOINT LeftLeg
    {
      OFFSET 43.2000 0.0000 0.0000
      CHANNELS 3 Yrotation Xrotation Zrotation
      JOINT LeftFoot
      {
        OFFSET 43.3000 0.0000 0.0000
        CHANNELS 3 Yrotation Xrotation Zrotation
        JOINT LeftToeBase
        {
          OFFSET 14.5727 0.0000 0.0000
          CHANNELS 3 Yrotation Xrotation Zrotation
          End Site
          {
            OFFSET 7.2864 0.0000 0.0000
          }
        }
      }
      JOINT LeftLegRoll
      {
        OFFSET 22.0000 0.0000 0.0000
        CHANNELS 3 Yrotation Xrotation Zrotation
        End Site
        {
          OFFSET 11.0000 0.0000 0.0000
        }
      }
    }
    JOINT LeftUpLegRoll
    {
      OFFSET 21.5000 0.0000 0.0000
      CHANNELS 3 Yrotation Xrotation Zrotation
      End Site
      {
        OFFSET 10.7500 0.0000 0.0000
      }
    }
  }
  JOINT RightUpLeg
  {
    OFFSET -8.1000 0.0000 0.0000
    CHANNELS 3 Yrotation Xrotation Zrotation
    JOINT RightLeg
    {
      OFFSET 43.2000 0.0000 0.0000
      CHANNELS 3 Yrotation Xrotation Zrotation
      JOINT RightFoot
      {
        OFFSET 43.3000 -0.0626 0.0000
        CHANNELS 3 Yrotation Xrotation Zrotation
        JOINT RightToeBase
        {
          OFFSET 14.5727 0.0000 0.0000
          CHANNELS 3 Yrotation Xrotation Zrotation
          End Site
          {
            OFFSET 7.2864 0.0000 0.0000
          }
        }
      }
      JOINT RightLegRoll
      {
        OFFSET 22.0000 0.0000 0.0000
        CHANNELS 3 Yrotation Xrotation Zrotation
        End Site
        {
          OFFSET 11.0000 0.0000 0.0000
        }
      }
    }
    JOINT RightUpLegRoll
    {
      OFFSET 21.5000 0.0000 0.0000
      CHANNELS 3 Yrotation Xrotation Zrotation
      End Site
      {
        OFFSET 10.7500 0.0000 0.0000
      }
    }
  }
}
MOTION
Frames: 1101
Frame Time: 0.010000

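For reference, Frames × Frame Time gives the clip length: 1101 frames at 0.01 s per frame (100 fps), roughly 11 s of motion. As a quick sanity check on a hand-edited file like this, you can confirm that every frame line carries exactly as many values as the hierarchy declares channels. This is a minimal sketch, not part of the repo; it assumes the naive layout above (an unindented MOTION keyword, one frame per line) and the BVH path from the motion config below:

    # sanity-check a BVH: values per frame line vs. declared CHANNELS
    from pathlib import Path

    lines = Path('examples/bvh/fair1/custom.bvh').read_text().splitlines()
    motion_idx = lines.index('MOTION')

    # total channels declared in the HIERARCHY section
    n_channels = sum(int(l.split()[1]) for l in lines[:motion_idx]
                     if l.strip().startswith('CHANNELS'))
    n_frames = int(lines[motion_idx + 1].split()[-1])      # "Frames: 1101"
    frame_time = float(lines[motion_idx + 2].split()[-1])  # "Frame Time: 0.010000"
    print(f'{n_channels} channels, {n_frames} frames, {n_frames * frame_time:.2f} s')

    # every non-empty frame line should hold exactly n_channels floats
    for i, l in enumerate(lines[motion_idx + 3:], start=1):
        if l.strip() and len(l.split()) != n_channels:
            print(f'frame {i}: expected {n_channels} values, got {len(l.split())}')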
And here is my motion config:

filepath: examples/bvh/fair1/custom.bvh
start_frame_idx: 0
end_frame_idx: 1101
groundplane_joint: LeftFoot
forward_perp_joint_vectors:
  - - LeftShoulder
    - RightShoulder
  - - LeftUpLeg
    - RightUpLeg
scale: 0.025
up: +z

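One thing worth checking before rendering is that the config's frame range actually fits the BVH header. A minimal sketch (assuming end_frame_idx is an exclusive, slice-style bound — so 1101 matches the 1101 frames exactly — and that the config lives at the path the script below falls back to):

    # check the motion config's frame range against the BVH it points at
    import yaml  # PyYAML, used by AnimatedDrawings for its config files
    from pathlib import Path

    cfg = yaml.safe_load(Path('examples/config/motion/custom.yaml').read_text())
    bvh_lines = Path(cfg['filepath']).read_text().splitlines()
    n_frames = next(int(l.split()[-1]) for l in bvh_lines if l.startswith('Frames:'))

    assert 0 <= cfg['start_frame_idx'] < cfg['end_frame_idx'] <= n_frames, (
        f"frame range [{cfg['start_frame_idx']}, {cfg['end_frame_idx']}) "
        f"doesn't fit the {n_frames} frames in {cfg['filepath']}")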
I edited my image_to_animation.py:

from image_to_annotations import image_to_annotations
from annotations_to_animation import annotations_to_animation
from pathlib import Path
import logging
import sys
from pkg_resources import resource_filename


def image_to_animation(img_fn: str, char_anno_dir: str, motion_cfg_fn: str, retarget_cfg_fn: str):
    """
    Given the image located at img_fn, create annotation files needed for animation.
    Then create animation from those animations and motion cfg and retarget cfg.
    """
    # create the annotations
    image_to_annotations(img_fn, char_anno_dir)

    # create the animation
    annotations_to_animation(char_anno_dir, motion_cfg_fn, retarget_cfg_fn)


if __name__ == '__main__':
    log_dir = Path('./logs')
    log_dir.mkdir(exist_ok=True, parents=True)
    logging.basicConfig(filename=f'{log_dir}/log.txt', level=logging.DEBUG)

    img_fn = sys.argv[1]
    char_anno_dir = sys.argv[2]
    if len(sys.argv) > 3:
        motion_cfg_fn = sys.argv[3]
    else:
        motion_cfg_fn = resource_filename(__name__, 'config/motion/custom.yaml')
    if len(sys.argv) > 4:
        retarget_cfg_fn = sys.argv[4]
    else:
        retarget_cfg_fn = resource_filename(__name__, 'config/retarget/fair1_ppf.yaml')

    image_to_animation(img_fn, char_anno_dir, motion_cfg_fn, retarget_cfg_fn)
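With the fallbacks above, the two config arguments are optional; to override them, the same script can take explicit paths as the third and fourth arguments. The paths below are illustrative, assuming the script is run from the directory containing the config/ folder:

    python image_to_animation.py drawings/garlic.png garlic_out \
        config/motion/custom.yaml \
        config/retarget/fair1_ppf.yaml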

I ran the command:
python image_to_animation.py drawings/garlic.png garlic_out
and ran into an issue.

[Informative] Having trouble with OpenGL in miniconda environment (Ubuntu 22)

Hello

If you have a GPU on your machine and are having trouble running the project on Ubuntu 22 or higher, run the following commands:

  1. sudo apt-get install libosmesa6-dev freeglut3-dev
  2. sudo apt-get install libglfw3-dev libgles2-mesa-dev
  3. sudo apt-get install libosmesa6
  4. export PYOPENGL_PLATFORM=osmesa (important: run this inside the conda environment; see the note after this list)
  5. conda install -c conda-forge libstdcxx-ng
  6. Still not working? Try conda install cmake.

There's no need to downgrade the Python version or resort to any other dependency-version hacks.
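If you want step 4 to persist across shell sessions, recent conda versions can store the variable in the environment itself, so it is set on every activation (a convenience, not one of the required steps above):

    conda env config vars set PYOPENGL_PLATFORM=osmesa
    conda deactivate && conda activate animated_drawings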

Hope it helps!
