
talking-head-anime-2-demo's Introduction

Pramook Khungurn

Thai: ประมุข ขันเงิน
English: Pramook Khungurn
Japanese: プラムック・カンガーン

I'm a software engineer and researcher from Thailand. I'm interested in computer vision, computer graphics, machine learning, algorithms, and mathematics in general.

Nowadays, I mainly work on applying machine learning to character animation. I have created several models that can produce simple animations of an anime character given a single image.

You can find more information about my work and experience at my website.

talking-head-anime-2-demo's People

Contributors

dragonmeteor, graphemecluster, gunwoohan, pkhungurn


talking-head-anime-2-demo's Issues

iFacialMocap alternative

Hi,

First of all, super impressive work. Now to the question: would you mind suggesting alternatives to iFacialMocap on Android or PC? I am thinking of using some kind of motion capture that might give the same values as the iOS app and port them to your puppeteer (rough idea sketched below).

All the best,
Thanisorn
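
For context, the idea is tracker-agnostic: anything that can produce the 52 ARKit-style blendshape weights could, in principle, drive the puppeteer. A rough sketch of that idea; the port and JSON payload below are illustrative assumptions, not the repo's actual wire protocol:

# Illustrative only: push ARKit-style blendshape weights from any tracker to a
# local receiver; the port number and payload format are assumptions.
import json
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
blendshapes = {"jawOpen": 0.4, "eyeBlinkLeft": 0.1}  # ...52 weights in total
sock.sendto(json.dumps(blendshapes).encode("utf-8"), ("127.0.0.1", 50002))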

ModuleNotFoundError: No module named 'tha2'

Traceback (most recent call last):
File "D:\Downloads\IDM\Compressed\talking-head-anime-2-demo-main\talking-head-anime-2-demo-main\tha2\app\manual_poser.py", line 13, in
from tha2.poser.poser import Poser, PoseParameterCategory, PoseParameterGroup
ModuleNotFoundError: No module named 'tha2'

Could you please help me? :(
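
This error typically means the repository root is not on Python's module search path: launching python tha2/app/manual_poser.py puts tha2/app, not the root, on sys.path, so import tha2 fails. A hedged workaround, assuming the interpreter is started from the repository root:

# Hypothetical workaround: put the current working directory (the repo root) on
# sys.path before the `from tha2...` imports at the top of the script.
import os
import sys
sys.path.insert(0, os.getcwd())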

KeyError: 'eyebrow_decomposer'

Hi,

I have successfully used Anaconda to launch your app, and I was able to load the images with no problem.
Then I tried to connect with the iFacialMocap desktop app.

Then I got a bunch of messages like the following.
When does this message occur?

The loaded image is not moving; only the green parameter bars below are moving.
Since the green bars are moving, the connection itself should be working, so I am wondering why the image is not moving.

I am using a gaming laptop with Windows 11 and an RTX 2080 Super.

Traceback (most recent call last):
  File "tha2/app/ifacialmocap_puppeteer.py", line 406, in update_result_image_bitmap
    output_image = self.poser.pose(self.torch_source_image, pose, output_index)[0].detach().cpu()
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\poser\general_poser_02.py", line 54, in pose
    output_list = self.get_posing_outputs(image, pose)
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\poser\general_poser_02.py", line 69, in get_posing_outputs
    return self.output_list_func(modules, batch, outputs)
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\poser\modes\mode_20.py", line 57, in func
    output = self.get_output(KEY_ALL_OUTPUT, modules, batch, outputs)
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\compute\cached_computation_protocol.py", line 19, in get_output
    output = self.compute_output(key, modules, batch, outputs)
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\poser\modes\mode_20.py", line 114, in compute_output
    combiner_output = self.get_output(KEY_COMBINER_OUTPUT, modules, batch, outputs)
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\compute\cached_computation_protocol.py", line 19, in get_output
    output = self.compute_output(key, modules, batch, outputs)
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\poser\modes\mode_20.py", line 102, in compute_output
    face_rotater_output = self.get_output(KEY_FACE_ROTATER_OUTPUT, modules, batch, outputs)
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\compute\cached_computation_protocol.py", line 19, in get_output
    output = self.compute_output(key, modules, batch, outputs)
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\poser\modes\mode_20.py", line 92, in compute_output
    face_morpher_output = self.get_output(KEY_FACE_MORPHER_OUTPUT, modules, batch, outputs)
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\compute\cached_computation_protocol.py", line 19, in get_output
    output = self.compute_output(key, modules, batch, outputs)
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\poser\modes\mode_20.py", line 81, in compute_output
    eyebrow_morphing_combiner_output = self.get_output(
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\compute\cached_computation_protocol.py", line 19, in get_output
    output = self.compute_output(key, modules, batch, outputs)
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\poser\modes\mode_20.py", line 71, in compute_output
    eyebrow_decomposer_output = self.get_output(KEY_EYEBROW_DECOMPOSER_OUTPUT, modules, batch, outputs)
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\compute\cached_computation_protocol.py", line 19, in get_output
    output = self.compute_output(key, modules, batch, outputs)
  File "C:\Users\emoto\Documents\talking-head-anime-2-demo-main\tha2\poser\modes\mode_20.py", line 69, in compute_output
    return modules[KEY_EYEBROW_DECOMPOSER].forward_from_batch([input_image])
KeyError: 'eyebrow_decomposer'
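
The trace bottoms out in modules[KEY_EYEBROW_DECOMPOSER]: the poser's module dictionary has no 'eyebrow_decomposer' entry, which usually points at model files that were not loaded. A hedged diagnostic sketch; the helper below is hypothetical, not part of the repo:

# Hypothetical helper: fail with an informative message instead of a bare
# KeyError when a poser module did not load.
def get_module(modules: dict, key: str):
    if key not in modules:
        raise RuntimeError(
            f"Poser module '{key}' is not loaded; available: {sorted(modules)}. "
            "Check that the corresponding .pt file exists under data/.")
    return modules[key]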

Problem

I can open ifacialmocap_puppeteer and load the image, and iFacialMocap is working too, but it can't create the movable picture.

It returns this:

File "F:\talking-head-anime-2-demo-main\tha2\compute\cached_computation_protocol.py", line 19, in get_output
output = self.compute_output(key, modules, batch, outputs)
File "F:\talking-head-anime-2-demo-main\tha2\poser\modes\mode_20.py", line 81, in compute_output
eyebrow_morphing_combiner_output = self.get_output(
File "F:\talking-head-anime-2-demo-main\tha2\compute\cached_computation_protocol.py", line 19, in get_output
output = self.compute_output(key, modules, batch, outputs)
File "F:\talking-head-anime-2-demo-main\tha2\poser\modes\mode_20.py", line 71, in compute_output
eyebrow_decomposer_output = self.get_output(KEY_EYEBROW_DECOMPOSER_OUTPUT, modules, batch, outputs)
File "F:\talking-head-anime-2-demo-main\tha2\compute\cached_computation_protocol.py", line 19, in get_output
output = self.compute_output(key, modules, batch, outputs)
File "F:\talking-head-anime-2-demo-main\tha2\poser\modes\mode_20.py", line 69, in compute_output
return modules[KEY_EYEBROW_DECOMPOSER].forward_from_batch([input_image])
KeyError: 'eyebrow_decomposer'

Can’t see the output live image.

This time I uploaded the image, and the progress bars at the bottom of the ifacialmocap_puppeteer window moved as I moved my face in front of the iOS device's front-facing camera. I just can't see the live image produced; nothing appears, and the output frame is just blank. In addition, my GPU is an RTX 3070.

@dragonmeteor @graphemecluster Excuse me, can anyone help?

Torch not compiled with CUDA enabled

Hi, I love this new tool you developed, but I have a problem installing and running it on my device. For tha2, this pops up when I choose an image:

"AssertionError: Torch not compiled with CUDA enabled"

I checked my CUDA version and reinstalled everything.
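
This assertion comes from PyTorch itself when a CPU-only build is installed, regardless of the machine's CUDA driver. A quick check using standard PyTorch introspection:

# A '+cpu' version suffix or False here means a CPU-only wheel is installed,
# which produces this exact assertion when CUDA is requested.
import torch

print(torch.__version__)          # e.g. '1.10.0+cpu' marks a CPU-only build
print(torch.cuda.is_available())  # must be True for the demo's GPU code path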

How to transfer the 52 ARKit blendshapes to the 37 common blendshapes in MMD models

Hi, thanks for your wonderful repo. I am quite interested in your converter, which can do motion transfer from facial performances.
It can transfer the 52 ARKit blendshape values into the MMD model's 37 blendshape values. However, I am quite curious whether there is a method that can transfer the MMD blendshapes back to ARKit blendshapes. Looking forward to your reply!
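
The thread does not show a reverse converter, but if the forward transfer is (approximately) a fixed linear map, one generic way back is a least-squares pseudo-inverse. A hedged sketch; the matrix and weights below are stand-ins, not the repo's actual mapping:

# Illustrative only: treat the ARKit -> MMD transfer as a linear map M (37x52)
# and approximate the reverse direction with the Moore-Penrose pseudo-inverse.
import numpy as np

M = np.random.rand(37, 52)             # stand-in for the real mapping matrix
arkit = np.random.rand(52)             # stand-in ARKit blendshape weights
mmd = M @ arkit                        # forward: 52 ARKit -> 37 MMD
arkit_back = np.linalg.pinv(M) @ mmd   # approximate inverse: 37 MMD -> 52 ARKit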

Choice of using iOS app iFacialMocap?

Hi! Thank you for the cool work that you've published.

Is there a particular reason that you decided to use iFacialMocap?
I'm just curious whether it is because you think monocular cameras like webcams aren't as accurate as the iPhone's depth camera.

Can I go with Android?

Does this all require iOS and Apple devices to run successfully? Can it work on Android, or can I run it just as a Windows application?

About low FPS

First, thank you for the amazing work!
I'm testing ifacialmocap_puppeteer.py with a single RTX 3080 on Windows 10, but I only get around 4-6 FPS.
Is this normal performance for this GPU? Could you give me an FPS baseline using a TITAN RTX?
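
A hedged timing sketch to check whether the pose() call itself is the bottleneck; poser, source_image, and pose stand for the objects the puppeteer already holds:

# Average the latency of the posing call; the synchronize() calls keep the
# measurement honest on an asynchronous GPU.
import time
import torch

def time_pose(poser, source_image, pose, n=30):
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n):
        poser.pose(source_image, pose)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / n

# print(f"{time_pose(poser, source_image, pose) * 1000:.1f} ms per frame")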

Connection with Unreal Live Link Face iOS app?

Hello, is it possible to use the Unreal Live Link Face iOS app instead of the paid app? This app can send and receive the 52 shape-key values from the iPhone camera and then save that data as CSV or JSON, and a live option is also available. Is it possible to drive the puppeteer with it?
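
Replaying a recorded Live Link Face take is at least easy to sketch; the CSV column name below is an assumption about the export format, not verified here:

# Hedged sketch: read blendshape curves from a recorded CSV, frame by frame.
# "JawOpen" is an assumed column name.
import csv

with open("take.csv", newline="") as f:
    for row in csv.DictReader(f):
        jaw_open = float(row["JawOpen"])
        # ...map this and the other 51 values onto the puppeteer's pose vector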

Improvements

  • Images are now automatically resized to 256×256
  • No more ugly output: r, g, and b are set to 0 wherever alpha is 0 (see the sketch below)
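
A minimal sketch of the second point, assuming a Pillow/NumPy pipeline: zeroing r, g, b under fully transparent pixels keeps resampling from bleeding stale colors into visible edges.

# Zero out r, g, b wherever alpha == 0, then resize to the expected 256×256.
import numpy as np
from PIL import Image

image = np.asarray(Image.open("character.png").convert("RGBA")).copy()
image[image[:, :, 3] == 0, :3] = 0
Image.fromarray(image).resize((256, 256)).save("character_256.png")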

Always flashing white

Thank you for making such interesting software. I encountered some problems while using it: why does my output keep flashing white, and is there any way to make the output easier to capture with OBS?

The problem

How can I solve this problem when going through the first step?

GPU low utilization rate

Hello,
I encountered some problems while using it: frames are in the single digits, GPU utilization is low, and CPU utilization is a little higher than usual. What went wrong?

Socket cannot recv data

My iPad's iFacialMocap shows it has connected to my PC, but Python always shows errno 10035.
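
Windows errno 10035 is WSAEWOULDBLOCK: a recv on a non-blocking socket found no data yet, which is normally transient rather than a broken connection. A generic sketch of the pattern; the port is a placeholder:

# Non-blocking UDP receive: treat "no data yet" as normal and retry next frame.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 49983))         # placeholder port
sock.setblocking(False)
try:
    data, addr = sock.recvfrom(8192)
except BlockingIOError:        # surfaces as errno 10035 on Windows
    data = None                # nothing arrived; try again on the next tick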

RuntimeError about mouth_lowered_corner

When I use this innovative program with Google Colab, I face this error.
It seems that this error occur with any image.

We can find it when we change 'mouth_lowered_corner' parametor.

I'm sorry I have no knowledge about fix it.


/content/talking-head-anime-2-demo/tha2/poser/poser.py in get_parameter_index(self, name)
     78                     return index
     79                 index += 1
---> 80         raise RuntimeError("Cannot find parameter with name %s" % name)
     81 
     82     def get_parameter_name(self, index: int) -> str:

RuntimeError: Cannot find parameter with name mouth_lowered_corner

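Since the traceback shows the poser exposes get_parameter_name(index), and other snippets on this page use get_num_parameters(), listing every parameter name should reveal the spelling the model actually accepts:

# Print the poser's full parameter table to find the correct name; `poser` is
# the poser object already created in the notebook.
for i in range(poser.get_num_parameters()):
    print(i, poser.get_parameter_name(i))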

Install PyTorch in conda (Windows)

As time goes by, the current sample command in the README will no longer install a proper PyTorch with CUDA support.
According to PyTorch's official site, the latest Windows version does not support CUDA 10.2 anymore.

Instead, I tried installing 11.3 as recommended (and also required for cards like the 3060):

conda install pytorch torchvision cudatoolkit=11.3 -c pytorch

Then it works. I don't have a git client in my current environment, so I am unable to push, but I think this information could be shared in the README as well.

Operation error

Traceback (most recent call last):
File "tha2/app/manual_poser.py", line 323, in update_result_image_panel
output_image = self.poser.pose(self.torch_source_image, pose, output_index)[0].detach().cpu()
File "C:\Users\mayn\Desktop\talking-head-anime-2-demo-main\talking-head-anime-2-demo-main\tha2\poser\general_poser_02.py", line 54, in pose
output_list = self.get_posing_outputs(image, pose)
File "C:\Users\mayn\Desktop\talking-head-anime-2-demo-main\talking-head-anime-2-demo-main\tha2\poser\general_poser_02.py", line 58, in get_posing_outputs
modules = self.get_modules()
File "C:\Users\mayn\Desktop\talking-head-anime-2-demo-main\talking-head-anime-2-demo-main\tha2\poser\general_poser_02.py", line 39, in get_modules
module = self.module_loaders[key]()
File "C:\Users\mayn\Desktop\talking-head-anime-2-demo-main\talking-head-anime-2-demo-main\tha2\poser\modes\mode_20.py", line 269, in <lambda>
lambda: load_eyebrow_decomposer(module_file_names[KEY_EYEBROW_DECOMPOSER]),
File "C:\Users\mayn\Desktop\talking-head-anime-2-demo-main\talking-head-anime-2-demo-main\tha2\poser\modes\mode_20.py", line 146, in load_eyebrow_decomposer
module.load_state_dict(torch_load(file_name))
File "C:\Users\mayn\Desktop\talking-head-anime-2-demo-main\talking-head-anime-2-demo-main\tha2\util.py", line 23, in torch_load
with open(file_name, 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'data/eyebrow_decomposer.pt'
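
The bottom of this trace is a plain missing file: the pretrained models are distributed separately from the code and must be placed under data/. A small pre-flight check, with the file list kept illustrative:

# Hypothetical pre-flight check: verify the separately downloaded model files
# exist under data/ before launching the app.
from pathlib import Path

required = ["eyebrow_decomposer.pt"]   # ...plus the other .pt files the mode loads
missing = [name for name in required if not (Path("data") / name).exists()]
if missing:
    raise SystemExit(f"Missing model files under data/: {missing}")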

Where can I find instructions on how to move a face based on MMD motion data?

I learned about this tool through Nico Nico Douga.

https://www.nicovideo.jp/watch/sm38211856

According to 8:10 of the video, "you can move 2D illustrations with MMD motion data," but I couldn't find how to do this anywhere.
How can I create an animation using MMD motion data?

Japanese (translated):
I came here from Nico Nico Douga.
I was able to understand from your site how to create face animations using the sliders or an iPhone, but I am embarrassed to say I could not figure out how to create animations using MMD motion data.
At 8:10 of the video it says "you can move it with MMD motion data," so I am sure the fault lies with my own understanding, but could you please tell me how to create an animation from MMD motion data?
Finally, thank you for the wonderful tool!

Play button disappears

When I tried to run colab.ipynb, the Play button didn't appear as the instructions say it should.

Instruction

Run the four cells below, one by one, in order by clicking the "Play" button to the left of it. Wait for each cell to finish before going to the next one.
Scroll down to the end of the last cell, and play with the GUI.

Please let me know how to get the Play button to appear.

I have a question.

Hello, I am a Korean university student who is interested in your project.
I'm analyzing the code because your project is so impressive.
I want to make sure that I understood it correctly, so I'm leaving a message.

I'm trying to make various facial expressions, but nothing changes, so I'm asking: if I write the code like this, is the flow right?

# happy
def make_happy(self):
    selected_morph_index = 1      # eye_happy_wink
    param_group = self.param_groups[selected_morph_index]

    param_range = param_group.get_range()
    # one slot per poser parameter, all zeros to start
    pose = [0.0 for i in range(self.poser.get_num_parameters())]

    # indices 14 and 15 are assumed here to be the eye_happy_wink parameters
    pose[14] = param_range[0] + (param_range[1] - param_range[0]) * self.alpha
    pose[15] = param_range[0] + (param_range[1] - param_range[0]) * self.alpha

    self.save_img('happy')

Thank you.
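
For reference, the tracebacks elsewhere on this page show how a pose vector is consumed (poser.pose(torch_source_image, pose, ...)), so a hedged sketch of the step that applies the computed pose might look like this; the tensor construction and device are assumptions:

# Sketch mirroring the call seen in the tracebacks above: the pose list only
# affects the output once it is passed to the poser.
import torch

pose_tensor = torch.tensor(pose, device=torch.device('cuda'))
output_image = poser.pose(torch_source_image, pose_tensor)[0].detach().cpu()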
