
emotion-based-filter

Introduction

Emotion-based filtering in real time. The example used with our model is the classic "dog face" filter from Snapchat. We first detect the subject's facial region, reformat and slice out the region defining their face, classify the facial features using a pretrained Convolutional Neural Net (CNN) for emotions, and then adjust the filter contents (e.g., happy ears, neutral ears, sad ears) based on the softmax result of the classifier. Ideally, this would run inside Snapchat, but as currently written it supports your webcam!
TL;DR: Face detection -> emotion detection -> responsive dog ears!
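The last step above, turning the classifier's softmax output into a filter choice, can be sketched in a few lines of Python. The emotion ordering and asset file names here are hypothetical placeholders, not the repo's actual ones:

```python
# Minimal sketch: map a 7-class softmax result to an ear overlay.
# Label order and asset names are illustrative assumptions.
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

EAR_ASSETS = {
    "happy": "happy_ears.png",
    "sad": "sad_ears.png",
}

def pick_ears(scores):
    """Return the ear overlay for the highest-probability emotion,
    falling back to neutral ears for emotions without a dedicated asset."""
    best = EMOTIONS[max(range(len(scores)), key=lambda i: scores[i])]
    return EAR_ASSETS.get(best, "neutral_ears.png")
```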

Credit

For face detection, we used OpenCV and Haar feature-based cascade classifiers to efficiently process live video. We followed the tutorial directly from OpenCV, located here.

For emotion detection, we used the FER-2013 dataset, downloadable here, which was cleaned and standardized by atulapra on GitHub. We also drew inspiration from his implementation of the "Emotion Recognition using Deep Convolutional Neural Networks" research paper (provided in our repo as /emotion-based-filter/emotion_detection_research.pdf), which we read, analyzed, and implemented. We then trained our Convolutional Neural Net (CNN) on FER-2013 using TensorFlow, which allowed us to classify a detected face by emotion!

Installation

FER-2013 Dataset

The "Facial Expression Recognition 2013" or "FER-2013" dataset is downloadable here. Once downloaded, copy the contents of the folder to the root /emotion-based-filter/ folder. You should then have a /emotion-based-filter/data folder containing test and train folders used to train the CNN for emotion detection.
This repo may already contain the data/test and data/train folders, in which case you can skip this subsection. Additionally, we include a model that we did not train, from priya-dwivedi, who has made their trained model public! Check the How to Run section for details on how to swap between our model and Priya's model.
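As a quick sanity check after copying, the expected layout can be verified with a few lines of Python. The folder names come from this section; nothing else about the dataset's contents is assumed:

```python
import os

def dataset_ready(repo_root):
    """Check that the FER-2013 folders are where the training code
    expects them: <repo_root>/data/train and <repo_root>/data/test."""
    return all(
        os.path.isdir(os.path.join(repo_root, "data", split))
        for split in ("train", "test"))
```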

MacOS and Linux/Git Bash Setup

Virtual Environment

In order to easily run all of the files and make sure that all dependencies are satisfied, it is easiest to use a virtual environment. To install the virtualenv module, run the following in a new terminal session (make sure that no other virtual environments are active):

pip3 install virtualenv

Next, navigate to a directory of your choice (perhaps the parent directory to this repo) and run:

virtualenv --python=python3.7 emotion_venv

If it wasn't clear from the above command, you should be using at least Python 3.7, downloadable here. python3.7 is the path to the Python executable that you wish to use in your virtual environment. The virtualenv command above creates a new directory called "emotion_venv" that contains your new virtual environment. You can activate it with the following command (from the directory that contains the emotion_venv folder):

source ./emotion_venv/bin/activate

Next install the necessary packages to your virtual environment by running these two commands (assuming you're in the folder containing this repo):

(emotion_venv) cd emotion-based-filter
(emotion_venv) pip3 install -r requirements.txt

(Note: (emotion_venv) is shown in the lines above to indicate that you should have run the source command to enter the virtual environment before installing; do not type (emotion_venv).)

Windows Setup

For the most part, the instructions above can be completed on Windows using a shell such as Git Bash; however, you may run into difficulties during setup, such as the following error when installing the requirements with pip: "cmake must be installed to build the following extensions: dlib". The section below is therefore a useful alternative. Note: Windows uses backslashes ("\") for directory navigation, as opposed to macOS and Linux, which use forward slashes ("/").

You will need to be using a command prompt with access to CMake such as "Developer Command Prompt for VS 2019"

Install the virtual environment module with the following command:

pip3 install virtualenv

Next, navigate to a directory of your choice (perhaps the parent directory to this repo) and run:

virtualenv emotion_venv

The virtualenv command above creates a new directory called "emotion_venv" that contains your new virtual environment. Here is where the commands start to deviate slightly: activate the environment with the following command (from the directory that contains the emotion_venv folder):

.\emotion_venv\Scripts\activate

Next install the necessary packages to your virtual environment by running these two commands (assuming you're in the folder containing this repo):

(emotion_venv) cd emotion-based-filter
(emotion_venv) pip3 install -r requirements.txt

(Note: (emotion_venv) is shown in the lines above to indicate that you should have run the activate script to enter the virtual environment before installing; do not type (emotion_venv).)

VSCode Setup on MacOS

If you're running this in VSCode, and are on MacOS, you might need to take an extra step to get everything going.

  • In an active VSCode window, press Command+Shift+P which will open a search bar for VSCode related settings.
  • Type shell command and then click on the option that says Shell Command: Install 'code' command in PATH.
  • Close VSCode completely (right-click and quit).
  • Open a fresh terminal and type sudo code; this will open a fresh instance of VSCode running as the root user.
  • cd into the /emotion-based-filter/ directory (or open it in VSCode) and then set up the virtual environment as detailed above.

At this point you should be ready to run the files! See the next section for how to run everything. You need to run VSCode as root because otherwise macOS privacy permissions will be denied and the video capture will always fail. Running as root lets you enable webcam capture!

How to Run

To run without any changes (i.e., open your webcam, load the pretrained weights, detect emotion on each frame), simply cd to this repo, and then run:

python3.7 py/main.py

The only parameters you might want to change are:

  • --model_mode: entering "train" will train the CNN on the provided FER-2013 dataset for 50 epochs before running your webcam; entering nothing, or explicitly entering "display", will simply run the process described at the beginning of this section.

  • --camera: by default this is 0, which represents your built-in webcam. If you would like to use an external webcam, change this number until it works.

If you wish to use a different model, simply change MODEL_DATA_FILE_PATH in /emotion-based-filter/py/hyperparameters.py to the file path representing your model of choice. This may cause issues in our code, so do this at your own risk.

Results

Our results are outlined in a formal paper included in our repo as /emotion-based-filter/report.pdf, with plenty of pictures!
In an ideal world, this would work directly inside Snapchat, but we limited our scope to running on live webcam footage. We also ran into Snapchat-related issues: many of the available APIs are in JavaScript, where access to computer-vision libraries is far more limited.

Contributors

griffinbeels, giusen799, czebos
