
VTuber-MomoseHiyori

VTuber Demo

  • Watch the MP4 DEMO ( I use a mirrored camera, so my movements appear opposite to Hiyori's )
  • Tested behaviors : Nod, Shake, Rotation, Eyeball Rotation, Blink, Eye Half-opening, Mouth Opening

Where does the idea come from ?

Recently, I have been studying Deep Learning and Computer Vision. At the same time, I realized that I could build a VTuber model in Unity that simulates my facial expressions via computer vision. After watching some tutorials, I made a Live2D model, Momose Hiyori, and successfully became a VTuber !


Development Environment

  • Test System : Windows 10 64-bit
  • Camera : Integrated Webcam
  • Socket Transmission : Intranet ( see the sketch after this list )
  • Modeling Tool : Live2D Cubism Editor 4.0
  • Engine : Unity
  • Script Language : C#
  • Recognition Algorithm : Deep Learning
  • Language : Python 3.7 ( Anaconda )
  • Main Required Libraries : opencv, dlib, numpy, torch
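
The recognition side ( Python ) talks to the Unity model over an intranet socket. The snippet below is only a minimal sketch of what such a sender could look like; the host, port, message format and parameter names are my own assumptions for illustration, not the project's actual protocol.

```python
# Minimal sketch of a parameter sender ( not the project's actual protocol ).
# HOST, PORT and the parameter names below are assumptions for illustration.
import socket

HOST, PORT = "127.0.0.1", 14514  # hypothetical intranet endpoint that Unity listens on

def send_params(sock: socket.socket, params: dict) -> None:
    """Serialize one frame of facial parameters as a text line and send it."""
    line = " ".join(f"{key}:{value:.3f}" for key, value in params.items()) + "\n"
    sock.sendall(line.encode("utf-8"))

if __name__ == "__main__":
    with socket.create_connection((HOST, PORT)) as sock:
        # In the real pipeline these values would come from the recognition model.
        send_params(sock, {
            "head_pitch": -3.1,   # Nod
            "head_yaw": 5.2,      # Shake
            "head_roll": 1.0,     # Rotation
            "eye_open": 0.8,      # Blink / Eye Half-opening
            "eyeball_x": 0.2,     # Eyeball Rotation
            "mouth_open": 0.4,    # Mouth Opening
        })
```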

File Explanation

  • Recognition : Packed algorithm for facial recognition
  • UnityAssets : Tutorial materials for those who want to make a Live2D VTuber themselves
  • Hiyori酱~ : Starter, a quick way to launch the program

How to be a VTuber ?

  1. Download and unzip the ZIP source file
  2. Install the required Python libraries ( Anaconda is recommended )
  • I have only tested on Windows; if your OS is not Windows, please test it yourself
  • Windows
    • The libraries I use are listed in the requirements files; you can install them with pip install -r requirements.txt as you like
    • CPU ( recommended for testing )
      • Install the libraries with pip install -r requirements_cpu.txt
      • If dlib fails to install, open Anaconda Prompt and install it with conda install -c menpo dlib
    • GPU
      • First, check your CUDA version : 9.0 / 10.1 / 10.2 / None
      • Install PyTorch with the corresponding command, e.g. conda install pytorch torchvision cudatoolkit=10.2 -c pytorch for CUDA 10.2
      • Install the other libraries with pip install -r requirements_gpu.txt
      • If you have CUDA 10, run pip install onnxruntime-gpu to get faster inference with the ONNX model ( a quick sanity-check script is sketched after this list )
  3. Download VTuber_Hiyori.zip and ckpts.zip ( if you want to use onnxruntime for faster speed ) from Release
  4. Unzip ckpts and put it under Recognition\face_alignment
  5. Unzip VTuber_Hiyori and start VTuber_MomoseHiyori.exe ( please wait, and do not start any other applications at the same time !!! )
  6. Run Hiyori酱~.bat
  7. If ひよりちゃん starts to mimic your facial expressions, congratulations! You are now a VTuber!
  8. The latest version has been released; you can download and use it.
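
After installation, you can quickly verify that the required libraries are importable and whether CUDA and onnxruntime are available. The script below is only a sanity-check sketch and is not part of the project's own scripts:

```python
# Quick sanity check for the required libraries ( not part of the project's scripts ).
import importlib

for name in ("cv2", "dlib", "numpy", "torch"):
    try:
        module = importlib.import_module(name)
        print(f"{name}: OK ( version {getattr(module, '__version__', 'unknown')} )")
    except ImportError as err:
        print(f"{name}: MISSING ( {err} )")

try:
    import torch
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    pass

try:
    import onnxruntime
    # onnxruntime-gpu exposes CUDAExecutionProvider when CUDA is usable.
    print("onnxruntime providers:", onnxruntime.get_available_providers())
except ImportError:
    print("onnxruntime not installed ( optional, only needed for faster inference )")
```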

Tips

If recognition does not work well, try the following :

  • Use brighter light : To make your face clearer, a combination of natural light and a point light works best.
  • Adjust your position : You can start a camera demo to check your position by adding --debug to Hiyori酱~.bat. Run it again and keep the outer green boundary large and centered, but not larger than the demo boundary ( a standalone preview sketch follows this list ).
  • Do not wear glasses : Glasses may reduce the accuracy of eye recognition.
  • Show your forehead : If your hair is too long, it may interfere with the recognition of your eyes.
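
If you want to check your framing before launching the model, the --debug preview can be approximated standalone with OpenCV. The sketch below is an assumption-based illustration ( it uses a Haar cascade rather than the project's dlib-based recognition ) and is not the project's actual debug view:

```python
# Standalone camera-framing check with OpenCV, approximating the --debug preview.
# The Haar-cascade detector and window layout are assumptions for illustration;
# the project itself uses dlib-based recognition.
import cv2

cap = cv2.VideoCapture(0)  # integrated webcam
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # green boundary
    cv2.imshow("camera check", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```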

Optimization

  • Use a Live2D model instead of a 3D model
  • Add 2 eye events : Eye Half-opening and Eyeball Rotation
  • Optimize some parameters for better accuracy
  • Easy start-up with a borderless window fixed on top, which is more convenient for live streaming

UnityAssets Tutorial

( If you don't want to know how to import a Live2D VTuber, you can skip this part )

  • Description : This is a template for most Cubism Live2D models. If you want to customize your own Live2D model, you can follow this tutorial and the steps below.
  • Recommended Unity Engine : Unity 2019.4.1f1 LTS
  • Before you start : Make sure you are familiar with basic Unity operations
  • Prepare the Live2D SDK : Download the SDK from the official website, or use CubismSdkForUnity-4-r.1, which I have placed under UnityAssets for you
  • Create a new Unity project

  • Import the Live2D SDK : Drag CubismSdkForUnity-4-r.1 into Assets and choose to import everything

  • Restart Unity : Do not forget this step, otherwise the SDK may not work !
  • Import Assets : Delete the default scene file, then drag the Momose, Scene and Script files under Assets

  • Import the Model : A prefab will be generated automatically at Assets/Momose/hiyori_pro_t08.prefab. Open Scene/MomoseHiyori and drag the prefab into the scene

  • Set the Position : Select the prefab and move it forward along the Y axis ( blue )

  • Initialization : Move the control balls to initialize the model

  • Bind Script

  • Export & Build

  • Start to Test


Credits

Thanks to the following blogs and projects, which served as references :


License


Author

  • Kennard Wang ( 2020.6.27 )
