
facetorch's Introduction

facetorch

Badges: build | lint | PyPI | Conda | License | Code style: black

Hugging Face Space demo app 🤗

Google Colab notebook demo Open In Colab

User Guide, Documentation, ChatGPT facetorch guide

Docker Hub (GPU)

Facetorch is a Python library that detects faces and analyzes facial features using deep neural networks. The goal is to gather open-source face analysis models from the community, optimize them for performance with TorchScript, and combine them into a face analysis tool that one can:

  1. configure using Hydra (OmegaConf)
  2. reproduce with conda-lock and Docker
  3. accelerate on CPU and GPU with TorchScript
  4. extend by uploading a model file to Google Drive and adding a config yaml file to the repository

Please use the library responsibly and with caution, and follow the European Commission's Ethics Guidelines for Trustworthy AI. The models are not perfect and may be biased.

Install

PyPI

pip install facetorch

Conda

conda install -c conda-forge facetorch

Usage

Prerequisites

Docker Compose provides an easy way of building a working facetorch environment with a single command.

Run docker example

  • CPU: docker compose run facetorch python ./scripts/example.py
  • GPU: docker compose run facetorch-gpu python ./scripts/example.py analyzer.device=cuda

Check data/output for resulting images with bounding boxes and facial 3D landmarks.

(Apple M1) Use the Rosetta 2 emulator in Docker Desktop to run the CPU version.

Configure

The project is configured by files located in conf, with conf/config.yaml as the main file. Modules can easily be added to or removed from the configuration.

Components

FaceAnalyzer is the main class of facetorch. It is the orchestrator responsible for initializing and running the following components (a minimal usage sketch follows the list):

  1. Reader - reads the image and returns an ImageData object containing the image tensor.
  2. Detector - wrapper around a neural network that detects faces.
  3. Unifier - processor that unifies sizes of all faces and normalizes them between 0 and 1.
  4. Predictor dict - set of wrappers around neural networks trained to analyze facial features.
  5. Utilizer dict - set of wrappers around any functionality that requires the output of neural networks e.g. drawing bounding boxes or facial landmarks.
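
A minimal usage sketch, adapted from the demo notebook and the issue reports below (the config path is illustrative and assumes a merged configuration file rather than the Hydra-composed conf/config.yaml):

from omegaconf import OmegaConf
from facetorch import FaceAnalyzer

# Load the OmegaConf configuration (illustrative path to a merged config).
cfg = OmegaConf.load("conf/merged.config.yaml")

# FaceAnalyzer orchestrates the reader, detector, unifier, predictors and utilizers.
analyzer = FaceAnalyzer(cfg.analyzer)

# Run the full pipeline on a single image.
response = analyzer.run(
    path_image="data/input/test.jpg",
    batch_size=cfg.batch_size,
    fix_img_size=cfg.fix_img_size,
    return_img_data=cfg.return_img_data,
    include_tensors=cfg.include_tensors,
    path_output="data/output/test.jpg",
)

# Each detected face carries the predictor outputs under face.preds.
for face in response.faces:
    print(face.indx, face.preds["fer"].label)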

Structure

analyzer
    โ”œโ”€โ”€ reader
    โ”œโ”€โ”€ detector
    โ”œโ”€โ”€ unifier
    โ””โ”€โ”€ predictor
            โ”œโ”€โ”€ embed
            โ”œโ”€โ”€ verify
            โ”œโ”€โ”€ fer
            โ”œโ”€โ”€ au
            โ”œโ”€โ”€ va
            โ”œโ”€โ”€ deepfake
            โ””โ”€โ”€ align
    โ””โ”€โ”€ utilizer
            โ”œโ”€โ”€ align
            โ”œโ”€โ”€ draw
            โ””โ”€โ”€ save
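
The predictor and utilizer names in the tree map directly to keys on each detected face. A short sketch of reading them, assuming response was produced by analyzer.run as in the sketch above (attribute and key names taken from the issue reports below):

# Inspect the first detected face.
face = response.faces[0]

print(face.loc)                            # bounding box: Location(x1, x2, y1, y2)
print(face.dims)                           # Dimensions(height, width)
print(face.preds["fer"].label)             # e.g. "Happiness"
print(face.preds["au"].other["multi"])     # list of detected action units
print(face.preds["deepfake"].label)        # e.g. "Real"
print(face.preds["align"].other["lmk3d"])  # 3D facial landmarks tensor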

Models

Detector

|     model     |   source  |   params  |   license   | version |
| ------------- | --------- | --------- | ----------- | ------- |
|   RetinaFace  |  biubug6  |   27.3M   | MIT license |    1    |
  1. biubug6

Predictor

Facial Representation Learning (embed)

|       model       |   source   |  params |   license   | version |  
| ----------------- | ---------- | ------- | ----------- | ------- |
|  ResNet-50 VGG 1M |  1adrianb  |  28.4M  | MIT license |    1    |
  1. 1adrianb

Face Verification (verify)

|       model      |   source    |  params  |      license       | version |  
| ---------------- | ----------- | -------- | ------------------ | ------- |
|    MagFace+UNPG  | Jung-Jun-Uk |   65.2M  | Apache License 2.0 |    1    |
|  AdaFaceR100W12M |  mk-minchul |    -     |     MIT License    |    2    |
  1. Jung-Jun-Uk
  2. mk-minchul

Facial Expression Recognition (fer)

|       model       |      source    |  params  |       license      | version |  
| ----------------- | -------------- | -------- | ------------------ | ------- |
| EfficientNet B0 7 | HSE-asavchenko |    4M    | Apache License 2.0 |    1    |
| EfficientNet B2 8 | HSE-asavchenko |   7.7M   | Apache License 2.0 |    2    |
  1. HSE-asavchenko

Facial Action Unit Detection (au)

|        model        |   source  |  params |       license      | version |  
| ------------------- | --------- | ------- | ------------------ | ------- |
| OpenGraph Swin Base |  CVI-SZU  |   94M   |     MIT License    |    1    |
  1. CVI-SZU

Facial Valence Arousal (va)

|       model       |   source   |  params |   license   | version |
| ----------------- | ---------- | ------- | ----------- | ------- |
|  ELIM AL AlexNet  | kdhht2334  |  2.3M   | MIT license |    1    |
  1. kdhht2334

Deepfake Detection (deepfake)

|         model        |      source      |  params  |   license   | version |
| -------------------- | ---------------- | -------- | ----------- | ------- |
|    EfficientNet B7   |     selimsef     |   66.4M  | MIT license |    1    |
  1. selimsef

Face Alignment (align)

|       model       |      source      |  params  |   license   | version |
| ----------------- | ---------------- | -------- | ----------- | ------- |
|    MobileNet v2   |     choyingw     |   4.1M   | MIT license |    1    |
  1. choyingw

Model download

Models are downloaded automatically at runtime to the models directory. You can also download the models manually from a public Google Drive folder.

Execution time

Image test.jpg (4 faces) is analyzed in about 486 ms (including drawing boxes and landmarks, but not saving) and test3.jpg (25 faces) in about 1845 ms (batch_size=8) on an NVIDIA Tesla T4 GPU, once the default model configuration (conf/config.yaml) has been initialized and warmed up to the initial image size 1080x1080 by the first run. Execution times can be monitored in the logs at the DEBUG level.
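
A small sketch of switching the logger to DEBUG through the configuration before building the analyzer (the logger block appears in the issue configs below; in Python's logging module 10 is DEBUG and 20 is INFO; the config path is illustrative):

from omegaconf import OmegaConf
from facetorch import FaceAnalyzer

cfg = OmegaConf.load("conf/merged.config.yaml")  # illustrative merged config path
cfg.analyzer.logger.level = 10                   # 10 = DEBUG, 20 = INFO

analyzer = FaceAnalyzer(cfg.analyzer)            # per-component timings are now logged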

Detailed test.jpg execution times:

analyzer
    ├── reader: 27 ms
    ├── detector: 193 ms
    ├── unifier: 1 ms
    ├── predictor
            ├── embed: 8 ms
            ├── verify: 58 ms
            ├── fer: 28 ms
            ├── au: 57 ms
            ├── va: 1 ms
            ├── deepfake: 117 ms
            └── align: 5 ms
    └── utilizer
            ├── align: 8 ms
            ├── draw_boxes: 22 ms
            ├── draw_landmarks: 7 ms
            └── save: 298 ms

Development

Run the Docker container:

  • CPU: docker compose -f docker-compose.dev.yml run facetorch-dev
  • GPU: docker compose -f docker-compose.dev.yml run facetorch-dev-gpu

Add predictor

Prerequisites

  1. file of the TorchScript model
  2. ID of the Google Drive model file
  3. facetorch fork

Facetorch works with models that have been exported from PyTorch to TorchScript. You can apply the torch.jit.trace function to compile a PyTorch model into a TorchScript module. Please verify that the output of the traced model equals the output of the original model.
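
A minimal tracing sketch (the model, input size, and tolerance are illustrative, not a facetorch API):

import torch
import torchvision

# Illustrative model; substitute the network you want to add as a predictor.
model = torchvision.models.mobilenet_v2(weights=None).eval()

# Example input matching the preprocessor output shape (illustrative size).
example = torch.rand(1, 3, 224, 224)

# Compile the model to a TorchScript module by tracing.
traced = torch.jit.trace(model, example)

# Verify that the traced model's output equals the original model's output.
with torch.no_grad():
    assert torch.allclose(model(example), traced(example), atol=1e-5)

traced.save("model.pt")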

The first models are hosted in my public Google Drive folder. You can either send me the new model to upload, host the model on your own Google Drive, or host it somewhere else and add your own downloader object to the codebase.

Configuration

Create yaml file
  1. Create new folder with a short name of the task in predictor configuration directory /conf/analyzer/predictor/ following the FER example in /conf/analyzer/predictor/fer/
  2. Copy the yaml file /conf/analyzer/predictor/fer/efficientnet_b2_8.yaml to the new folder /conf/analyzer/predictor/<predictor_name>/
  3. Change the yaml file name to the model you want to use: /conf/analyzer/predictor/<predictor_name>/<model_name>.yaml
Edit yaml file
  1. Change the Google Drive file ID to the ID of the model.
  2. Select the preprocessor (or implement a new one based on BasePredPreProcessor) and specify its parameters, e.g. image size and normalization, in the yaml file to match the requirements of the new model.
  3. Select the postprocessor (or implement a new one based on BasePredPostProcessor) and specify its parameters, e.g. labels, in the yaml file to match the requirements of the new model.
  4. (Optional) Add BaseUtilizer derivative that uses output of your model to perform some additional actions.
Configure tests
  1. Add the new predictor to the main config.yaml and to all tests.config.n.yaml files. Alternatively, create a new config file, e.g. tests.config.n.yaml, and add it to the /tests/conftest.py file.
  2. Write a test for the new predictor in /tests/test_<predictor_name>.py (see the sketch below).
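
A sketch of what such a test could look like; the fixture names, image path, and predictor key are illustrative, and the real fixtures live in /tests/conftest.py:

def test_new_predictor_runs(analyzer, cfg):
    # `analyzer` and `cfg` are assumed pytest fixtures built from a tests.config.n.yaml file.
    response = analyzer.run(
        path_image="data/input/test.jpg",
        batch_size=cfg.batch_size,
        fix_img_size=cfg.fix_img_size,
        return_img_data=cfg.return_img_data,
        include_tensors=cfg.include_tensors,
        path_output="data/output/test.jpg",
    )
    # Every detected face should carry a prediction under the new predictor's key.
    assert all("<predictor_name>" in face.preds for face in response.faces)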

Test and submit

  1. Run linting: black facetorch
  2. Add the new predictor to the README model table.
  3. Update CHANGELOG and version
  4. Submit a pull request to the repository

Update environment

CPU:

  • Add packages with corresponding versions to environment.yml file
  • Lock the environment: conda lock -p linux-64 -f environment.yml --lockfile conda-lock.yml
  • (Alternative Docker) Lock the environment: docker compose -f docker-compose.dev.yml run facetorch-lock
  • Install the locked environment: conda-lock install --name env conda-lock.yml

GPU:

  • Add packages with corresponding versions to gpu.environment.yml file
  • Lock the environment: conda lock -p linux-64 -f gpu.environment.yml --lockfile gpu.conda-lock.yml
  • (Alternative Docker) Lock the environment: docker compose -f docker-compose.dev.yml run facetorch-lock-gpu
  • Install the locked environment: conda-lock install --name env gpu.conda-lock.yml

Run tests + coverage

  • Run tests and generate coverage: pytest tests --verbose --cov-report html:coverage --cov facetorch

Generate documentation

  • Generate documentation from docstrings using pdoc3: pdoc --html facetorch --output-dir docs --force --template-dir pdoc/templates/

Profiling

  1. Run profiling of the example script: python -m cProfile -o profiling/example.prof scripts/example.py
  2. Open profiling file in the browser: snakeviz profiling/example.prof

Acknowledgements

I want to thank the open source code community and the researchers who have published the models. This project would not be possible without their work.

The logo was generated using the DeepAI Text To Image API.

facetorch's People

Contributors

tomas-gajarsky


facetorch's Issues

Permission error when downloading the models

Hello,
This might sound stupid, but I am trying to learn about the tool.
When I try to init the analyzer using the config you used in the demo notebook, I get a "permission error" for the face detector. For some reason, it is not finding the model.

InstantiationException: Error in call to target 'facetorch.analyzer.detector.core.FaceDetector': PermissionError(13, 'Permission denied') full_key: analyzer.detector

Thank you,

Dev Containers in VScode

I cannot get into Dev Containers in VScode with facetorch-dev-gpu in docker-compose.dev.yml.
On the other hand, in facetorch-dev-example, I could use VSCode dev containers. How can I use Dev Containers in facetorch-dev-gpu?

Feature Request: Add the MediaPipe's Face Mesh model

Hi!
Thank you for your work!
I have a feature request for a face mesh model.
MediaPipe FaceMesh can detect 468 3D key points and is widely applicable to face analysis.
FaceMesh is licensed under the Apache 2.0 license. Pre-trained models are available and can be easily integrated into Facetorch.
I would appreciate it if you would add this model.

[Feature request]: Add CLI entrypoint to the package

First of all, the project looks great and thanks for all the work:)

This is just an optional feature request. It would be nice if facetorch could be used as a CLI tool (for inference).
This could be achieved by defining entry points in setup.py (reference).

For example something like facetorch --predict -i text.img -o test.img -c config.yaml or any other better alternative structure?

@tomas-gajarsky What do you think?

Issue trying to remove predictors

Hi,
Congrats on your library. That's a great tool! :D

I'm trying to use just AU and Emotion Predictors and removed all others as you suggested in #48.

The .yml I obtained is the following:

analyzer:
  device: cuda
  optimize_transforms: true
  reader:
    _target_: facetorch.analyzer.reader.ImageReader
    device:
      _target_: torch.device
      type: ${analyzer.device}
    optimize_transform: ${analyzer.optimize_transforms}
    transform:
      _target_: torchvision.transforms.Compose
      transforms:
      - _target_: facetorch.transforms.SquarePad
      - _target_: torchvision.transforms.Resize        
        size:
        - 1080
        antialias: True
  detector:
    _target_: facetorch.analyzer.detector.FaceDetector
    downloader:
      _target_: facetorch.downloader.DownloaderGDrive
      file_id: 1eMuOdGkiNCOUTiEbKKoPCHGCuDgiKeNC
      path_local: /opt/facetorch/models/torchscript/detector/1/model.pt
    device:
      _target_: torch.device
      type: ${analyzer.device}
    reverse_colors: true
    preprocessor:
      _target_: facetorch.analyzer.detector.pre.DetectorPreProcessor
      transform:
        _target_: torchvision.transforms.Compose
        transforms:
        - _target_: torchvision.transforms.Normalize
          mean:
          - 104.0
          - 117.0
          - 123.0
          std:
          - 1.0
          - 1.0
          - 1.0
      device:
        _target_: torch.device
        type: ${analyzer.device}
      optimize_transform: ${analyzer.optimize_transforms}
      reverse_colors: ${analyzer.detector.reverse_colors}
    postprocessor:
      _target_: facetorch.analyzer.detector.post.PostRetFace
      transform: None
      device:
        _target_: torch.device
        type: ${analyzer.device}
      optimize_transform: ${analyzer.optimize_transforms}
      confidence_threshold: 0.02
      top_k: 5000
      nms_threshold: 0.4
      keep_top_k: 750
      score_threshold: 0.6
      prior_box:
        _target_: facetorch.analyzer.detector.post.PriorBox
        min_sizes:
        - - 16
          - 32
        - - 64
          - 128
        - - 256
          - 512
        steps:
        - 8
        - 16
        - 32
        clip: false
      variance:
      - 0.1
      - 0.2
      reverse_colors: ${analyzer.detector.reverse_colors}
      expand_box_ratio: 0.0
  unifier:
    _target_: facetorch.analyzer.unifier.FaceUnifier
    transform:
      _target_: torchvision.transforms.Compose
      transforms:
      - _target_: torchvision.transforms.Normalize
        mean:
        - -123.0
        - -117.0
        - -104.0
        std:
        - 255.0
        - 255.0
        - 255.0
      - _target_: torchvision.transforms.Resize        
        size:
        - 380
        - 380
        antialias: True
    device:
      _target_: torch.device
      type: ${analyzer.device}
    optimize_transform: ${analyzer.optimize_transforms}
  predictor:
    fer:
      _target_: facetorch.analyzer.predictor.FacePredictor
      downloader:
        _target_: facetorch.downloader.DownloaderGDrive
        file_id: 1xoB5VYOd0XLjb-rQqqHWCkQvma4NytEd
        path_local: /opt/facetorch/models/torchscript/predictor/fer/2/model.pt
      device:
        _target_: torch.device
        type: ${analyzer.device}
      preprocessor:
        _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
        transform:
          _target_: torchvision.transforms.Compose
          transforms:
          - _target_: torchvision.transforms.Resize            
            size:
            - 260
            - 260
            antialias: True
          - _target_: torchvision.transforms.Normalize
            mean:
            - 0.485
            - 0.456
            - 0.406
            std:
            - 0.229
            - 0.224
            - 0.225
        device:
          _target_: torch.device
          type: ${analyzer.predictor.fer.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        reverse_colors: false
      postprocessor:
        _target_: facetorch.analyzer.predictor.post.PostArgMax
        transform: None
        device:
          _target_: torch.device
          type: ${analyzer.predictor.fer.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        dim: 1
        labels:
        - Anger
        - Disgust
        - Fear
        - Happiness
        - Neutral
        - Sadness
        - Surprise
    au:
      _target_: facetorch.analyzer.predictor.FacePredictor
      downloader:
        _target_: facetorch.downloader.DownloaderGDrive
        file_id: 1uoVX9suSA5JVWTms3hEtJKzwO-CUR_jV
        path_local: /opt/facetorch/models/torchscript/predictor/au/1/model.pt # str
      device:
        _target_: torch.device
        type: ${analyzer.device}
      preprocessor:
        _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
        transform:
          _target_: torchvision.transforms.Compose
          transforms:
          - _target_: torchvision.transforms.Resize        
            size:
            - 224
            - 224
            antialias: True
          - _target_: torchvision.transforms.Normalize
            mean:
            - 0.485
            - 0.456
            - 0.406
            std:
            - 0.229
            - 0.224
            - 0.225
        device:
          _target_: torch.device
          type: ${analyzer.predictor.au.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        reverse_colors: false
      postprocessor:
        _target_: facetorch.analyzer.predictor.post.PostMultiLabel
        transform: None
        device:
          _target_: torch.device
          type: ${analyzer.predictor.au.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        dim: 1
        threshold: 0.5
        labels:
        - inner_brow_raiser
        - outer_brow_raiser
        - brow_lowerer
        - upper_lid_raiser
        - cheek_raiser
        - lid_tightener
        - nose_wrinkler
        - upper_lip_raiser
        - nasolabial_deepener
        - lip_corner_puller
        - sharp_lip_puller
        - dimpler
        - lip_corner_depressor
        - lower_lip_depressor
        - chin_raiser
        - lip_pucker
        - tongue_show
        - lip_stretcher
        - lip_funneler
        - lip_tightener
        - lip_pressor
        - lips_part
        - jaw_drop
        - mouth_stretch
        - lip_bite
        - nostril_dilator
        - nostril_compressor
        - left_inner_brow_raiser
        - right_inner_brow_raiser
        - left_outer_brow_raiser
        - right_outer_brow_raiser
        - left_brow_lowerer
        - right_brow_lowerer
        - left_cheek_raiser
        - right_cheek_raiser
        - left_upper_lip_raiser
        - right_upper_lip_raiser
        - left_nasolabial_deepener
        - right_nasolabial_deepener
        - left_dimpler
        - right_dimpler
  utilizer:
      draw_boxes:
          _target_: facetorch.analyzer.utilizer.draw.BoxDrawer
          transform: None
          device:
            _target_: torch.device
            type: ${analyzer.device}
          optimize_transform: false
          color: green
          line_width: 3
      draw_landmarks:
          _target_: facetorch.analyzer.utilizer.draw.LandmarkDrawerTorch
          transform: None
          device:
            _target_: torch.device
            type: ${analyzer.device}
          optimize_transform: false
          width: 2
          color: green
  logger:
    _target_: facetorch.logger.LoggerJsonFile
    name: facetorch
    level: 20
    path_file: /opt/facetorch/logs/facetorch/main.log
    json_format: '%(asctime)s %(levelname)s %(message)s'
main:
  sleep: 3
debug: true
batch_size: 8
fix_img_size: true
return_img_data: true
include_tensors: true

When trying to run the following code:

path_img_input="./test.jpg"
path_img_output="/test_output.jpg"
path_config="/content/drive/MyDrive/PhD/Courtscribes/Libraries-Comparison/Facetorch/gpu.config.yml"
 
cfg = OmegaConf.load(path_config)
 
# initialize
analyzer = FaceAnalyzer(cfg.analyzer)
 
print(len(cfg.analyzer))
 
# warmup
response = analyzer.run(
        path_image= path_img_input,
        batch_size=cfg.batch_size,
        fix_img_size=cfg.fix_img_size,
        return_img_data=False,
        include_tensors=True,
        path_output= path_img_output
    )

I'm getting this error:

ConfigIndexError                          Traceback (most recent call last)
<ipython-input-24-0c93463983a6> in <cell line: 13>()
     11 
     12 # warmup
---> 13 response = analyzer.run(
     14         path_image= path_img_input,
     15         batch_size=cfg.batch_size
     
/usr/local/lib/python3.10/dist-packages/omegaconf/listconfig.py in __getitem__(self, index)
    213             else:
    214                 return self._resolve_with_default(
--> 215                     key=index, value=self.__dict__["_content"][index]
    216                 )
    217         except Exception as e:
 
ConfigIndexError: list index out of range
    full_key: [7]
    object_type=list

How can I solve this? Thanks :)

Output of AU detector

Hi Gajarsky. I really appreciate that you developed this cool tool! But I am a little bit confused about the output of the AU detector, like this:

'au': Prediction(label='inner_brow_raiser', logits=tensor([5.9231e-01, 6.0695e-02, 3.0883e-01, 3.8449e-02, 2.9211e-01, 3.2139e-01,
7.0077e-04, 3.4345e-01, 1.0299e-03, 8.1870e-02, 0.0000e+00, 1.3947e-01,
3.1471e-02, 1.4108e-03, 8.3429e-02, 0.0000e+00, 6.6746e-05, 0.0000e+00,
0.0000e+00, 1.1023e-01, 2.9374e-03, 1.3591e-01, 7.4503e-03, 3.5197e-04,
2.9681e-03, 0.0000e+00, 0.0000e+00, 1.6567e-02, 5.1174e-03, 3.3988e-02,
5.1338e-04, 7.6977e-04, 3.8522e-03, 4.3307e-03, 7.2081e-04, 0.0000e+00,
2.0911e-02, 1.7393e-03, 6.8589e-03, 3.1934e-02, 0.0000e+00]), other={'multi': ['inner_brow_raiser']})

From #46 I know that the numbers are the logits for each AU. However, I am confused about how to interpret these logits. Do they mean the confidence of the prediction for each unit? Can I know the position of each predicted unit, or draw a nice heatmap like in the original paper? Thanks!

Stuck at "Reading Image"

Stuck at "Reading Image" when running the analyzer.


from omegaconf import OmegaConf
import cv2
from pathlib import Path
import os
from facetorch import FaceAnalyzer

path_config="backgroundworker/deepfake_detection/conf/merged.config.yaml"

cfg = OmegaConf.load(path_config)
analyzer = FaceAnalyzer(cfg.analyzer)

os.environ["HYDRA_FULL_ERROR"] = "1"

def process(frame):
    
    Path("./temp").mkdir(parents=True, exist_ok=True)
    cv2.imwrite("./temp/in.jpg", frame)
    
    response = analyzer.run(
        path_image="./temp/in.jpg",
        batch_size=cfg.batch_size,
        fix_img_size=cfg.fix_img_size,
        return_img_data=cfg.return_img_data,
        include_tensors=cfg.include_tensors,
        path_output="./temp/out.jpg",
    )

    print(response)

    output = {face.indx: face.preds["deepfake"].other["multi"] for face in response.faces}

    print(output)
    return(output)

Questions about the AU detection

Thank you for the amazing work!

I have questions about the AU detection model.
When running the code, I got the following output.

'au': Prediction(label='lid_tightener', logits=tensor([1.4294e-01, 5.5219e-02, 7.3257e-01, 4.7374e-03, 6.6783e-01, 8.6254e-01,
        7.5628e-01, 6.6030e-01, 0.0000e+00, 4.2513e-01, 0.0000e+00, 3.2038e-01,
        1.4721e-01, 3.7695e-01, 1.6047e-01, 2.5481e-05, 0.0000e+00, 1.1697e-01,
        5.4448e-03, 5.7558e-02, 0.0000e+00, 8.3319e-01, 1.6647e-01, 4.1829e-02,
        0.0000e+00, 2.5487e-03, 0.0000e+00, 4.5418e-02, 0.0000e+00, 2.9258e-02,
        0.0000e+00, 8.7168e-03, 0.0000e+00, 1.9661e-02, 2.5690e-03, 6.4678e-03,
        1.5031e-02, 2.5029e-03, 0.0000e+00, 6.2914e-03, 0.0000e+00]), other={'multi': ['brow_lowerer', 'cheek_raiser', 'lid_tightener', 'nose_wrinkler', 'upper_lip_raiser', 'lips_part']})

Could you clarify what the output "label='lid_tightener'" and "other={'multi': ['brow_lowerer', 'cheek_raiser', 'lid_tightener', 'nose_wrinkler', 'upper_lip_raiser', 'lips_part']}" signify?
What is the difference between "label" and "multi"?
Also, what do the "logits" represent?
I think it probably shows the predicted probabilities of 41 different AUs, but I don't know in which order.
Lastly, in the original paper, there appear to be two models, one trained with DISFA and the other trained with BP4D. Which model is used in this case?

fix_img_size

In the config file, fix_img_size is set to True by default. What is the purpose (advantages) of fixing image size?

Build error only in the GitHub action?

The build action pipeline is failing because the models are not downloading from Google Drive with the following message:

Cannot retrieve the public link of the file. You may need to change
the permission to 'Anyone with the link', or have had many accesses. 

You may still be able to access the file from the browser:

This is not happening locally nor on the Hugging Face Space server.

Have you experienced any problems caused by Google Drive access while composing the docker images and running the tests or example scripts?

Image normalization

Hello,

Thank you for this project. It is very helpful. There seems to be a discrepancy in the normalization of the image. I noticed two potential issues:

  1. The detector RetinaNet applies some normalization which also affects the face crop data.face. The normalization order should then stay the same in the unifier to denormalize the face crop into [0,1]. Right now the face crop can have negative values.
  2. The image is loaded with torchvision, resulting in RGB images. In the RetinaNet implementation's preprocessing, the image is loaded in BGR format and the mean applied is (104, 117, 123). In facetorch, the transform is applied before rgbtobgr(), so I think the correct order would be (123, 117, 104).

With the modification of point 1, all face crops are in [0,1]. For point 2 it is harder to check.

A TypeError

I have a TypeError while using the facetorch_notebook_demo.ipynb.
After this line: {"asctime": "2024-03-14 11:20:40,111", "levelname": "INFO", "message": "Running FacePredictor: fer"}
the following error occurred:
TypeError: argmax(): argument 'input' (position 1) must be Tensor, not tuple

"docker compose run facetorch-gpu python ./scripts/example.py analyzer.device=cuda " This command have problem

[8/9] RUN pip install facetorch:
#0 1.670 Collecting facetorch
#0 2.313 Downloading facetorch-0.1.4-py3-none-any.whl (36 kB)
#0 3.092 Collecting python-json-logger>=2.0.0
#0 3.207 Downloading python_json_logger-2.0.4-py3-none-any.whl (7.8 kB)
#0 3.878 Collecting torchvision>=0.10.0
#0 4.098 Downloading torchvision-0.14.0-cp39-cp39-manylinux1_x86_64.whl (24.3 MB)
#0 777.8 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 22.5/24.3 MB 36.2 kB/s eta 0:00:50
#0 777.8 ERROR: Exception:
#0 777.8 Traceback (most recent call last):
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_vendor/urllib3/response.py", line 437, in _error_catcher
#0 777.8 yield
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_vendor/urllib3/response.py", line 560, in read
#0 777.8 data = self._fp_read(amt) if not fp_closed else b""
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_vendor/urllib3/response.py", line 526, in _fp_read
#0 777.8 return self._fp.read(amt) if amt is not None else self._fp.read()
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_vendor/cachecontrol/filewrapper.py", line 90, in read
#0 777.8 data = self.__fp.read(amt)
#0 777.8 File "/usr/lib/python3.9/http/client.py", line 463, in read
#0 777.8 n = self.readinto(b)
#0 777.8 File "/usr/lib/python3.9/http/client.py", line 507, in readinto
#0 777.8 n = self.fp.readinto(b)
#0 777.8 File "/usr/lib/python3.9/socket.py", line 704, in readinto
#0 777.8 return self._sock.recv_into(b)
#0 777.8 File "/usr/lib/python3.9/ssl.py", line 1242, in recv_into
#0 777.8 return self.read(nbytes, buffer)
#0 777.8 File "/usr/lib/python3.9/ssl.py", line 1100, in read
#0 777.8 return self._sslobj.read(len, buffer)
#0 777.8 socket.timeout: The read operation timed out
#0 777.8
#0 777.8 During handling of the above exception, another exception occurred:
#0 777.8
#0 777.8 Traceback (most recent call last):
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/cli/base_command.py", line 160, in exc_logging_wrapper
#0 777.8 status = run_func(*args)
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/cli/req_command.py", line 247, in wrapper
#0 777.8 return func(self, options, args)
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/commands/install.py", line 400, in run
#0 777.8 requirement_set = resolver.resolve(
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/resolution/resolvelib/resolver.py", line 92, in resolve
#0 777.8 result = self._result = resolver.resolve(
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_vendor/resolvelib/resolvers.py", line 481, in resolve
#0 777.8 state = resolution.resolve(requirements, max_rounds=max_rounds)
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_vendor/resolvelib/resolvers.py", line 373, in resolve
#0 777.8 failure_causes = self._attempt_to_pin_criterion(name)
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_vendor/resolvelib/resolvers.py", line 213, in _attempt_to_pin_criterion
#0 777.8 criteria = self._get_updated_criteria(candidate)
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_vendor/resolvelib/resolvers.py", line 204, in _get_updated_criteria
#0 777.8 self._add_to_criteria(criteria, requirement, parent=candidate)
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_vendor/resolvelib/resolvers.py", line 172, in _add_to_criteria
#0 777.8 if not criterion.candidates:
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_vendor/resolvelib/structs.py", line 151, in bool
#0 777.8 return bool(self._sequence)
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 155, in bool
#0 777.8 return any(self)
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 143, in
#0 777.8 return (c for c in iterator if id(c) not in self._incompatible_ids)
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 47, in _iter_built
#0 777.8 candidate = func()
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/resolution/resolvelib/factory.py", line 206, in _make_candidate_from_link
#0 777.8 self._link_candidate_cache[link] = LinkCandidate(
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/resolution/resolvelib/candidates.py", line 297, in init
#0 777.8 super().init(
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/resolution/resolvelib/candidates.py", line 162, in init
#0 777.8 self.dist = self._prepare()
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/resolution/resolvelib/candidates.py", line 231, in _prepare
#0 777.8 dist = self._prepare_distribution()
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/resolution/resolvelib/candidates.py", line 308, in _prepare_distribution
#0 777.8 return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/operations/prepare.py", line 491, in prepare_linked_requirement
#0 777.8 return self._prepare_linked_requirement(req, parallel_builds)
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/operations/prepare.py", line 536, in _prepare_linked_requirement
#0 777.8 local_file = unpack_url(
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/operations/prepare.py", line 166, in unpack_url
#0 777.8 file = get_http_url(
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/operations/prepare.py", line 107, in get_http_url
#0 777.8 from_path, content_type = download(link, temp_dir.path)
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/network/download.py", line 147, in call
#0 777.8 for chunk in chunks:
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/cli/progress_bars.py", line 53, in _rich_progress_bar
#0 777.8 for chunk in iterable:
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_internal/network/utils.py", line 63, in response_chunks
#0 777.8 for chunk in response.raw.stream(
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_vendor/urllib3/response.py", line 621, in stream
#0 777.8 data = self.read(amt=amt, decode_content=decode_content)
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_vendor/urllib3/response.py", line 586, in read
#0 777.8 raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
#0 777.8 File "/usr/lib/python3.9/contextlib.py", line 137, in exit
#0 777.8 self.gen.throw(typ, value, traceback)
#0 777.8 File "/usr/local/lib/python3.9/dist-packages/pip/_vendor/urllib3/response.py", line 442, in _error_catcher
#0 777.8 raise ReadTimeoutError(self._pool, None, "Read timed out.")
#0 777.8 pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.


failed to solve: executor failed running [/bin/sh -c pip install facetorch]: exit code: 2

What code should I use if I just want to use the FAU?

Hello thanks for creating this awesome library :)

When I run the FaceAnalyzer object I notice it runs through all the modules - FAU, FER, Deepfake prediction and embeddings.

I was just looking for an FAU prediction, but even when I tried removing the config for FER, Deepfake and the rest, the code was unable to run, hence I'm not sure if I'm using the correct class.

Which class/ config should I be using, and are there any dependencies for using FAU that I have to include, like steps to crop the image?

confidence threshold

Can I get the confidence measure of the prediction?
Neutral is being detected along with other emotions; any idea on how I can fine-tune the model?

About face crop and cluster

Hi Tomas, thanks for the repo! I have a question, is there any way to save only the detected face? I didn't find this option.

And would you have any suggestions for the clustering task, any approaches that might work better?

Thanks!

how to get confidence score

[Face(indx=0, loc=Location(x1=434, x2=953, y1=214, y2=733), dims=Dimensions(height=519, width=519), tensor=tensor([[[ 0.1373, 0.1337, 0.1297, ..., 0.1647, 0.1647, 0.1647],
[ 0.1409, 0.1361, 0.1278, ..., 0.1647, 0.1647, 0.1647],
[ 0.1376, 0.1333, 0.1226, ..., 0.1647, 0.1647, 0.1647],
...,
[ 0.8915, 0.9198, 0.9219, ..., 0.7380, 0.7581, 0.7679],
[ 0.8832, 0.9155, 0.9232, ..., 0.7327, 0.7553, 0.7676],
[ 0.8765, 0.9055, 0.9206, ..., 0.7376, 0.7529, 0.7640]],

     [[ 0.0431,  0.0395,  0.0356,  ...,  0.0706,  0.0706,  0.0706],
      [ 0.0467,  0.0420,  0.0337,  ...,  0.0706,  0.0706,  0.0706],
      [ 0.0435,  0.0392,  0.0285,  ...,  0.0706,  0.0706,  0.0706],
      ...,
      [ 0.6887,  0.7173,  0.7143,  ...,  0.6334,  0.6564,  0.6666],
      [ 0.6863,  0.7129,  0.7173,  ...,  0.6383,  0.6581,  0.6731],
      [ 0.6830,  0.7073,  0.7170,  ...,  0.6434,  0.6584,  0.6725]],

     [[-0.0431, -0.0467, -0.0506,  ..., -0.0275, -0.0275, -0.0275],
      [-0.0395, -0.0443, -0.0526,  ..., -0.0275, -0.0275, -0.0275],
      [-0.0421, -0.0455, -0.0578,  ..., -0.0275, -0.0275, -0.0275],
      ...,
      [ 0.4731,  0.5000,  0.4948,  ...,  0.5667,  0.5955,  0.6077],
      [ 0.4710,  0.4994,  0.4998,  ...,  0.5701,  0.5989,  0.6175],
      [ 0.4679,  0.4931,  0.5014,  ...,  0.5804,  0.6042,  0.6272]]]), ratio=0.23093364197530863, preds={'embed': Prediction(label='abstract', logits=tensor([ 0.1084, -0.1679, -0.0204, -0.0253,  0.0205,  0.1097, -0.0177, -0.1260,
     -0.0314, -0.0648,  0.0659,  0.0451, -0.0370,  0.0450,  0.0337, -0.1246,
      0.0383,  0.1337, -0.0364,  0.0365, -0.1737, -0.0831, -0.0467, -0.0486,
      0.0386, -0.0189, -0.1069, -0.0121,  0.1179,  0.0478, -0.0222, -0.0724,
     -0.0953, -0.1114, -0.0245,  0.0220, -0.1771,  0.0396, -0.0082,  0.0081,
     -0.0938,  0.0824,  0.1724, -0.1075,  0.0149,  0.0446, -0.1196,  0.0161,
     -0.1212, -0.0146, -0.0690,  0.1383, -0.0517,  0.0812,  0.0480, -0.1122,
      0.0787,  0.1988,  0.0095, -0.2227, -0.0003,  0.1787, -0.0560,  0.0300,
      0.0058,  0.0238,  0.1514, -0.0214, -0.0711,  0.0882,  0.0685,  0.0641,
      0.1223,  0.0853,  0.0355,  0.1460, -0.0740,  0.0493, -0.2184,  0.1159,
      0.1151, -0.0588, -0.0500,  0.1039,  0.0340,  0.0039, -0.1422, -0.0104,
      0.1030, -0.0904, -0.1258,  0.0655,  0.0794, -0.0897,  0.1169, -0.0005,
      0.0530, -0.0860,  0.0695, -0.1501,  0.0609,  0.0149,  0.0286,  0.1003,
     -0.0363, -0.0775, -0.1615,  0.0107, -0.1477,  0.1088, -0.0633, -0.0605,
     -0.0319,  0.1822,  0.0791, -0.0238, -0.0765,  0.0223, -0.0125, -0.0594,
      0.0621, -0.0281,  0.0406,  0.0442, -0.0208,  0.0528, -0.0791,  0.0483]), other={}), 'verify': Prediction(label='abstract', logits=tensor([-3.3224e-02, -2.4557e-02, -1.9427e-02,  2.6323e-02, -3.4657e-02,
     -3.9893e-02,  3.0786e-02, -7.5714e-02, -7.3427e-02, -1.4084e-03,
     -6.9474e-03, -2.5220e-02,  1.1566e-02, -7.1516e-02, -4.1833e-02,
     -7.0128e-02, -9.4383e-02, -8.1581e-03, -1.1111e-01, -4.1234e-02,
      6.4064e-02, -1.2209e-02, -8.5447e-02,  8.8342e-02, -1.7705e-02,
     -3.5760e-02,  2.0179e-02,  3.6288e-02,  8.8054e-02, -4.8106e-02,
     -3.7405e-02, -7.8233e-02, -8.2899e-02,  3.2510e-02,  6.5459e-02,
      1.2680e-02,  6.2668e-03,  1.4967e-02,  2.1543e-02, -3.2847e-03,
     -3.4881e-02,  1.4036e-02, -3.9442e-02,  4.2367e-03, -3.8897e-02,
      7.2695e-02,  9.6176e-02, -3.4447e-02,  4.3521e-02,  8.6814e-03,
     -1.1861e-02,  1.4138e-02,  1.0717e-02, -2.7392e-02, -2.4828e-02,
      5.7942e-02, -2.5837e-02, -4.2777e-02,  7.8459e-03,  1.9358e-02,
      2.4522e-03,  4.5778e-02,  3.3084e-03,  5.5125e-02,  6.2631e-02,
      6.6993e-02,  3.1644e-02,  6.9820e-03,  2.0503e-02,  4.9447e-02,
      7.7141e-02,  9.8624e-03, -4.0049e-02, -4.9250e-02,  1.3131e-02,
      1.3083e-02,  7.6573e-03, -2.3873e-02,  2.3156e-04,  2.8440e-03,
      7.9887e-02, -6.4234e-02, -7.4516e-03,  4.6921e-02, -2.3886e-02,
     -3.5960e-02, -1.8992e-02,  8.1928e-02,  8.4129e-03, -2.4341e-02,
     -3.7343e-03,  2.4277e-02, -2.0130e-02,  1.9980e-02, -3.5903e-02,
      6.5225e-03,  4.7414e-02, -2.8072e-02, -1.5894e-02,  6.5363e-02,
      3.0128e-02,  1.1777e-03,  4.4043e-02,  6.4681e-03,  1.1622e-01,
     -1.7241e-02, -4.1181e-02, -8.0510e-03, -3.3600e-03,  2.7956e-02,
     -3.6577e-02,  4.0386e-02, -7.4661e-02, -6.2661e-02,  5.4758e-02,
      2.9642e-02,  1.7531e-02, -5.4692e-02, -9.6330e-03, -7.5329e-02,
      3.7327e-02, -8.1653e-02, -5.6770e-02,  2.9630e-02,  3.3025e-02,
      7.9455e-02,  2.2289e-02, -1.7560e-03, -2.4501e-02,  5.0421e-02,
     -7.1247e-02,  2.6351e-02, -1.3235e-02, -2.9801e-02,  3.6468e-02,
     -7.3361e-03,  6.7757e-02, -2.8253e-02,  3.1011e-02, -1.4887e-02,
     -9.4293e-03,  5.4439e-02, -4.4333e-02, -2.0498e-02, -8.4924e-04,
     -3.6461e-02, -2.1851e-02,  9.0311e-03,  4.1987e-02, -3.6330e-03,
      1.0882e-02,  4.9719e-03,  2.0902e-02, -1.0550e-01,  3.9165e-04,
     -1.1211e-01,  2.1864e-02, -4.4882e-02, -4.2871e-02, -1.5887e-02,
      1.1780e-02,  5.8599e-02,  2.7941e-02,  2.3612e-02, -3.2438e-02,
     -3.4483e-03,  3.4063e-02,  7.8976e-03,  3.6698e-02, -1.9161e-02,
      4.6956e-02,  2.0255e-02,  2.5486e-02,  2.9408e-02,  1.8262e-02,
      1.5205e-02, -4.9320e-02, -2.3444e-02,  2.1395e-02,  3.3107e-02,
     -3.4140e-02, -4.4371e-02,  4.2448e-02, -2.4631e-02,  5.4878e-03,
      6.0990e-03, -4.3823e-02,  5.4613e-02, -5.6894e-02,  1.0733e-02,
     -7.6143e-03,  1.4257e-02,  5.6273e-02,  7.2083e-02, -1.2872e-02,
     -5.7984e-02,  4.2089e-03, -4.2025e-02,  3.0082e-02, -6.1542e-02,
     -3.1634e-02, -1.5619e-02,  2.6641e-02,  8.2029e-02,  6.3837e-02,
      3.8624e-02,  3.9092e-02,  2.0121e-02,  2.8493e-02,  7.4564e-03,
      6.4805e-02, -4.3313e-02, -1.5154e-02,  3.7041e-04, -6.8154e-02,
      3.9102e-02,  2.4517e-02, -1.0365e-01, -5.4518e-02, -1.8800e-02,
     -2.2643e-02, -1.5723e-02, -2.0403e-03,  5.2207e-02, -2.6852e-02,
      6.5871e-02,  4.3300e-04, -4.6482e-02,  4.9541e-02, -2.3600e-03,
     -5.3421e-02,  1.4810e-02, -6.9565e-02,  1.4715e-02,  5.7413e-02,
     -3.1967e-02,  2.2431e-02,  2.8699e-03,  6.5302e-02, -1.9074e-02,
      3.8895e-02,  5.1430e-02, -5.5126e-02, -5.6184e-02, -2.0254e-02,
      1.0044e-01,  3.1875e-02,  1.1852e-01, -1.4119e-02,  8.7341e-03,
     -4.2976e-02, -3.0095e-03, -1.1797e-02,  2.7088e-02, -3.0576e-02,
     -4.3891e-03, -3.8791e-02, -3.8361e-02, -4.4177e-02,  5.8130e-02,
      3.9415e-02, -5.4877e-02, -3.2452e-03,  7.5045e-02,  1.9019e-02,
     -2.2427e-02, -5.1804e-02, -6.4367e-02,  3.6681e-02, -2.1003e-02,
     -1.4513e-02,  3.0933e-03,  3.0340e-03,  2.2994e-02, -1.9586e-02,
     -5.8214e-02,  2.5919e-03,  1.9293e-02, -3.7726e-03, -2.1816e-02,
      1.6009e-02, -5.9140e-02, -2.9345e-02,  3.0215e-02, -1.5051e-02,
      4.6398e-02, -5.5400e-02, -1.1546e-02,  2.5626e-02, -4.9593e-03,
     -2.4928e-02,  1.5603e-02, -2.0350e-02,  1.1109e-01, -1.2965e-02,
      9.4433e-02, -1.0026e-02,  5.1298e-06,  8.9873e-02, -1.0977e-02,
     -6.9289e-02, -1.3149e-02, -9.4907e-03, -2.9136e-03, -9.2025e-02,
      4.5848e-02, -6.7131e-02,  5.9350e-02, -7.8705e-03, -1.6058e-02,
     -4.1721e-02,  1.4011e-02,  1.2016e-02,  6.9526e-02,  1.5262e-03,
     -2.9988e-02, -1.1155e-02,  4.5036e-04, -5.0899e-02,  1.4544e-03,
      6.3418e-02, -2.8156e-02, -1.2425e-02, -6.0529e-02,  7.2251e-03,
      8.6141e-03,  6.2308e-02, -2.9041e-02, -3.2445e-02,  3.4674e-02,
      4.7092e-02,  2.4686e-02,  1.6326e-02,  4.3353e-02, -1.5464e-02,
      1.8762e-02,  3.8868e-02, -7.2798e-02, -3.8606e-02,  1.0046e-02,
     -4.0572e-02,  2.9378e-02, -6.1089e-02, -4.8065e-02, -5.6713e-03,
      4.5702e-04,  5.2896e-02, -1.2261e-02,  4.0902e-03, -1.9959e-02,
      1.7561e-03, -3.1607e-02, -3.7245e-02, -1.3067e-02, -7.7395e-02,
      2.9010e-02,  1.0330e-02,  8.4882e-03,  9.1096e-02,  4.2342e-02,
      1.9053e-02,  4.5474e-02, -6.1862e-02,  7.3056e-02,  3.4285e-02,
     -1.1038e-03,  2.4672e-02,  3.3181e-02,  2.5882e-02,  1.2855e-02,
      7.0895e-02, -4.7559e-03, -1.0617e-01, -6.9381e-02, -5.6203e-02,
      1.2208e-02,  3.7183e-02, -7.2761e-02,  2.9459e-02, -3.4049e-02,
     -8.7027e-02,  6.5434e-02, -2.0737e-02, -2.2162e-02, -4.2827e-02,
      1.2348e-01,  5.9659e-02, -5.3705e-02,  4.1885e-02, -1.9994e-03,
     -1.9372e-02,  6.7860e-02, -6.1758e-02, -1.1951e-01,  3.0546e-02,
      7.7302e-02,  3.5251e-02, -1.1624e-02,  4.3307e-02, -4.3849e-02,
      3.0644e-02, -5.5049e-02, -1.0313e-01,  4.4474e-02, -3.0743e-02,
      1.5136e-02, -1.5870e-02,  5.0055e-02,  1.5974e-03, -4.1321e-02,
     -4.7258e-02, -1.7429e-02, -3.4838e-02,  2.1184e-03, -1.7297e-02,
     -3.8342e-02,  1.1624e-02, -1.4158e-02,  6.0678e-02, -2.9162e-02,
      1.2990e-02,  4.9877e-02,  1.0054e-01, -6.8934e-02, -1.8155e-02,
     -6.0196e-02, -4.8684e-02, -1.1367e-02,  1.0779e-02, -8.2621e-02,
     -9.2737e-02,  2.9211e-02,  2.1318e-03,  5.9109e-03, -1.0072e-02,
      2.8047e-02,  7.1930e-03,  1.2927e-02, -4.2631e-02,  1.6762e-02,
     -2.4878e-03,  3.5958e-02, -2.1847e-02, -7.8670e-03,  7.6192e-02,
     -1.0983e-02, -1.1366e-01,  8.1869e-02, -4.9506e-02, -8.4323e-03,
     -7.7799e-02,  9.0517e-03, -2.8737e-02,  2.5685e-02,  4.8899e-02,
     -5.3328e-02,  1.3767e-02, -7.3781e-03, -5.7545e-02, -3.6328e-03,
     -4.3947e-02,  2.6072e-03, -8.5318e-02,  3.6656e-02,  6.6216e-02,
     -1.2816e-02,  4.2571e-02,  2.2549e-02,  7.0498e-02, -2.8171e-02,
     -1.2510e-02, -9.6193e-02,  1.7179e-02,  5.3272e-02,  6.6195e-03,
      6.3364e-03, -8.4876e-03,  7.2678e-03, -4.3682e-02,  3.9331e-03,
     -2.8494e-02, -6.1872e-02, -6.4432e-02,  6.6572e-02,  7.5570e-02,
     -2.7126e-03, -1.7593e-02, -5.3505e-02,  5.9413e-02, -1.8211e-02,
     -2.5284e-03, -2.8161e-03, -5.9543e-03,  8.6127e-03,  3.9225e-02,
     -3.1761e-03, -2.4748e-02,  4.3533e-02,  9.7472e-03, -8.2742e-02,
     -1.7751e-02, -2.0734e-02,  8.8190e-02, -2.9043e-02,  7.7701e-03,
      1.5303e-02,  1.0730e-02,  4.8615e-02, -1.1959e-02,  1.3826e-03,
     -4.6868e-02,  1.0027e-01]), other={}), 'fer': Prediction(label='Happiness', logits=tensor([-0.6251,  0.4655,  0.3576, -0.1943,  0.7099, -0.0265, -1.3299, -0.4130]), other={}), 'au': Prediction(label='left_dimpler', logits=tensor([0.5515, 0.6236, 0.1615, 0.1085, 0.5575, 0.4415, 0.0505, 0.4210, 0.0484,
     0.6485, 0.0090, 0.1863, 0.1050, 0.0181, 0.3330, 0.2457, 0.0713, 0.1476,
     0.0273, 0.4818, 0.2539, 0.2910, 0.3599, 0.0469, 0.0214, 0.0764, 0.0087,
     0.0039, 0.0135, 0.0790, 0.0412, 0.2358, 0.0925, 0.1083, 0.0033, 0.0422,
     0.1445, 0.0111, 0.0242, 0.7870, 0.0186]), other={'multi': ['inner_brow_raiser', 'outer_brow_raiser', 'cheek_raiser', 'lip_corner_puller', 'left_dimpler']}), 'deepfake': Prediction(label='Real', logits=tensor(0.0159), other={}), 'align': Prediction(label='abstract', logits=tensor([ 1.0489e+00, -3.4420e-01,  7.2751e-01, -5.1543e-01,  7.3023e-01,
      2.4881e-01, -9.6506e-01,  6.5722e-01, -4.8655e-01,  2.2559e+00,
      4.3205e-01, -6.1142e-02,  1.8862e-01,  1.6044e-01,  5.0534e-01,
     -4.3917e-02,  2.9256e-01,  2.4434e-01, -4.0683e-01,  2.5768e-01,
      1.2181e-01,  1.6102e-01,  5.9151e-03, -3.2074e-02, -1.6347e-02,
      4.3678e-02, -2.1059e-01,  6.5439e-02,  9.7091e-02,  3.6339e-02,
     -1.6805e-01, -1.1507e-01,  1.6595e-02,  8.7263e-02,  2.4129e-02,
      1.4290e-02, -8.7965e-02,  4.1801e-02, -3.5430e-02, -1.8644e-02,
      1.3499e-02, -3.0223e-02, -2.3360e-04,  1.9108e-02, -2.9163e-02,
      6.6146e-02,  1.9205e-02, -4.9499e-02,  1.5346e-02, -1.9337e-02,
      1.0034e-01, -6.7382e-02,  2.0553e-01, -6.7100e-01,  1.9152e-01,
     -4.7468e-01, -8.7253e-02, -1.9519e-01, -2.4274e-01,  5.9840e-01,
      3.3604e-02,  1.9953e-02]), other={'lmk3d': tensor([[ 502.8373,  518.9011,  538.0644,  557.6124,  585.8576,  628.1694,
       675.5670,  728.6715,  785.3468,  824.1080,  835.6247,  837.0992,
       835.7198,  831.1275,  828.7592,  827.1747,  820.2486,  588.0635,
       614.6161,  644.9459,  672.3715,  695.7761,  779.7474,  796.1220,
       812.6455,  828.8076,  838.7121,  751.5838,  766.8667,  781.8368,
       788.4557,  744.6086,  760.2995,  777.9470,  790.5276,  796.6903,
       631.5100,  651.8181,  672.3687,  690.8339,  676.9792,  653.7588,
       775.7441,  794.5120,  815.0684,  823.4375,  815.4554,  796.3927,
       713.3414,  743.4560,  771.5247,  785.0727,  796.4966,  815.5997,
       824.1107,  816.7976,  805.6889,  788.5579,  769.1210,  746.4871,
       719.5526,  765.6178,  783.3878,  797.7781,  820.8790,  798.7222,
       783.5672,  766.0459],
     [ 436.4594,  478.0351,  513.7496,  547.2553,  587.0750,  623.4663,
       651.8424,  674.1008,  672.9284,  638.1714,  593.0019,  546.7362,
       495.1970,  446.5255,  406.4955,  364.5473,  319.8803,  437.1565,
       426.9044,  422.0297,  420.2204,  420.0029,  388.8094,  373.6950,
       358.9923,  346.9276,  344.6076,  451.8895,  486.8222,  521.0617,
       544.5255,  553.8885,  555.5727,  554.2590,  544.6048,  534.7312,
       463.9752,  457.8876,  450.9662,  451.7894,  464.5584,  470.5920,
       420.3985,  405.5945,  397.8869,  393.8495,  410.9395,  420.4042,
       603.2154,  596.6987,  585.4514,  583.5238,  576.5116,  570.4471,
       563.3161,  592.3161,  610.7651,  620.2634,  624.2229,  617.8684,
       601.4301,  596.3197,  591.5111,  584.8419,  564.7042,  594.1776,
       602.2295,  605.2144],
     [-223.2773, -244.1935, -266.5513, -283.0958, -291.9818, -285.6154,
      -266.8949, -253.7015, -262.6176, -286.3449, -321.6923, -356.7802,
      -377.4162, -376.5053, -365.6689, -349.8277, -331.6347,  -80.1479,
       -53.5690,  -41.6587,  -39.5855,  -44.3876,  -73.0366,  -81.7120,
       -98.7605, -126.3356, -165.6154,  -79.2502,  -76.2674,  -72.8899,
       -82.1336, -128.4686, -124.7119, -127.4405, -135.0770, -146.2149,
       -94.2438,  -76.7546,  -83.1338, -100.1482,  -92.7679,  -90.8969,
      -129.6218, -125.4574, -132.9311, -160.0222, -146.0844, -133.7021,
      -172.9007, -147.6369, -137.9288, -142.0148, -146.4650, -172.3406,
      -210.9806, -193.6120, -184.1975, -176.7468, -171.0858, -169.1402,
      -172.8372, -155.3651, -156.3394, -166.3194, -207.9360, -174.0700,
      -167.4038, -163.3625]], dtype=torch.float64), 'mesh': tensor([[ 591.5412,  591.7885,  592.0413,  ...,  779.3440,  778.2234,
       777.1041],
     [ 448.5855,  449.1485,  449.7079,  ...,  403.2846,  402.0480,
       400.8456],
     [ -85.1300,  -85.4449,  -85.7656,  ..., -445.7211, -446.6257,
      -447.4770]], dtype=torch.float64), 'pose': {'angles': [-15.73379586181253, -26.273406725245536, -11.75605653773541], 'translation': tensor([652.1875, 555.7106, -77.7673], dtype=torch.float64)}})}),

Face(indx=1, loc=Location(x1=50, x2=696, y1=393, y2=1039), dims=Dimensions(height=646, width=646), tensor=tensor([[[0.9202, 0.9176, 0.9176, ..., 0.9942, 0.9998, 1.0017],
[0.9202, 0.9176, 0.9176, ..., 0.9971, 1.0000, 0.9958],
[0.9183, 0.9176, 0.9176, ..., 0.9993, 1.0000, 0.9880],
...,
[1.0206, 1.0194, 1.0127, ..., 0.9294, 0.9333, 0.9398],
[1.0273, 1.0231, 1.0165, ..., 0.9255, 0.9294, 0.9385],
[1.0314, 1.0273, 1.0196, ..., 0.9197, 0.9229, 0.9375]],

     [[0.7869, 0.7843, 0.7843,  ..., 0.6412, 0.6431, 0.6353],
      [0.7869, 0.7843, 0.7843,  ..., 0.6431, 0.6430, 0.6299],
      [0.7850, 0.7843, 0.7843,  ..., 0.6424, 0.6365, 0.6198],
      ...,
      [0.6637, 0.6635, 0.6569,  ..., 0.5686, 0.5753, 0.5820],
      [0.6704, 0.6667, 0.6663,  ..., 0.5648, 0.5686, 0.5777],
      [0.6787, 0.6770, 0.6738,  ..., 0.5589, 0.5621, 0.5768]],

     [[0.6378, 0.6353, 0.6353,  ..., 0.3431, 0.3412, 0.3364],
      [0.6378, 0.6353, 0.6353,  ..., 0.3412, 0.3410, 0.3278],
      [0.6359, 0.6353, 0.6353,  ..., 0.3382, 0.3317, 0.3126],
      ...,
      [0.2918, 0.2910, 0.2850,  ..., 0.2941, 0.3008, 0.3100],
      [0.3041, 0.3014, 0.2976,  ..., 0.2902, 0.2941, 0.3046],
      [0.3175, 0.3147, 0.3110,  ..., 0.2812, 0.2849, 0.3022]]]), ratio=0.357781207133059, preds={'embed': Prediction(label='abstract', logits=tensor([-0.0208, -0.0869, -0.0418, -0.0918,  0.0460,  0.1246, -0.0706,  0.0380,
      0.1282,  0.0727,  0.2065,  0.0459, -0.1517,  0.1894,  0.0654,  0.1014,
      0.0295,  0.0196,  0.0024,  0.0181, -0.0905,  0.1452, -0.0210, -0.1291,
     -0.0459,  0.1003, -0.1509,  0.0641,  0.1220,  0.1316, -0.0441, -0.1277,
     -0.0919,  0.0733, -0.1337,  0.1635, -0.0224,  0.1123, -0.0047, -0.0653,
     -0.1403, -0.0380,  0.0973, -0.1033,  0.1506,  0.0084, -0.0832,  0.0835,
     -0.0371,  0.0094,  0.0936, -0.0063,  0.0224, -0.0434, -0.0119, -0.1055,
      0.0559, -0.2002, -0.0632, -0.1404, -0.0968,  0.0266,  0.0848, -0.1204,
      0.1424,  0.0505,  0.1307, -0.0332, -0.0964, -0.0891,  0.0426,  0.0695,
     -0.0292, -0.0304, -0.1369,  0.0557, -0.0710,  0.0941, -0.1451,  0.1356,
     -0.0259, -0.0785,  0.0414,  0.1357, -0.0745, -0.1044, -0.0559,  0.0272,
      0.0111,  0.0853,  0.0317,  0.0714, -0.0496, -0.0241, -0.0526,  0.0929,
      0.0408, -0.0575,  0.0255, -0.0802,  0.2028,  0.0544, -0.0145, -0.0486,
     -0.0730,  0.0263,  0.1208,  0.0361, -0.0789, -0.0911,  0.0330,  0.0438,
     -0.1261,  0.0774,  0.0559, -0.0536,  0.0051, -0.0735,  0.0284, -0.0422,
      0.1197, -0.0856, -0.1432,  0.0343, -0.0690, -0.0626,  0.0049,  0.0296]), other={}), 'verify': Prediction(label='abstract', logits=tensor([ 1.2964e-02, -1.3904e-02,  3.4976e-02,  3.2565e-02, -7.1412e-02,
      8.9851e-03,  7.7667e-02,  1.8712e-02,  3.0730e-02,  2.9410e-03,
      5.0255e-02, -3.5002e-02, -6.3176e-02,  4.2954e-02,  2.9268e-02,
     -4.9133e-02, -1.3079e-02,  4.3286e-02, -6.7974e-02,  3.6239e-02,
     -3.1894e-02,  1.8158e-02,  7.0954e-02,  5.3481e-02,  3.0436e-02,
      3.1926e-03,  3.6142e-02, -1.4123e-02,  2.6709e-02, -3.5059e-02,
      3.0942e-03,  6.2149e-02,  1.2038e-01, -1.3772e-02,  1.2311e-02,
      1.4627e-02,  2.8467e-02,  2.6847e-02, -8.7075e-03,  7.9499e-02,
      6.6128e-02,  1.0364e-01,  5.2299e-03,  6.1904e-02,  1.3677e-02,
      5.7426e-02,  2.8266e-02,  3.2642e-02, -1.5866e-02, -9.4086e-02,
     -5.5181e-03, -4.3002e-03, -2.2604e-02,  1.6448e-02, -2.3173e-03,
      8.1938e-03,  2.2262e-03,  4.6636e-02,  1.5774e-03, -7.8123e-02,
     -3.0442e-02,  4.8795e-02,  6.4049e-02,  5.3124e-02, -5.4582e-02,
      8.0434e-02, -2.0324e-03, -4.7266e-02, -7.7170e-03, -3.7350e-03,
     -6.0486e-02,  1.9230e-02, -2.5287e-02,  ...,  1.5777e-02,  4.6537e-02]), other={}),
 'fer': Prediction(label='Surprise', logits=tensor([-0.7667, -0.1572, -0.2810,  0.5850, -2.2443,  0.4423,  0.0594,  0.6899]), other={}),
 'au': Prediction(label='lip_corner_puller', logits=tensor([4.9410e-01, 5.3558e-01, 1.5885e-01,  ...,  7.4525e-02, 6.4842e-03]), other={'multi': ['outer_brow_raiser', 'lip_corner_puller']}),
 'deepfake': Prediction(label='Real', logits=tensor(0.0220), other={}),
 'align': Prediction(label='abstract', logits=tensor([ 1.5367e+00,  6.8233e-02,  1.6897e-01,  ...,  3.9929e-02,  1.5707e-01]), other={'lmk3d': tensor([[ 127.7438,  132.2567,  ...,  381.6787,  358.3096],
      [ 635.3413,  700.2004,  ...,  809.1197,  805.9491],
      [-447.6359, -429.2760,  ..., -134.5023, -135.8251]], dtype=torch.float64), 'mesh': tensor([[ 198.7510,  198.8704,  ...,  538.7939,  538.1870],
      [ 545.4448,  546.3419,  ...,  839.4380,  838.7001],
      [-261.9849, -261.7548,  ..., -572.0292, -574.3027]], dtype=torch.float64), 'pose': {'angles': [-4.643017620919836, 9.23048479692336, 1.8010015665256485], 'translation': tensor([358.0552, 720.3383, -96.8412], dtype=torch.float64)}})})]

Please share how I can evaluate this model. Could you also share the paper?

Clarification on Align, Embed and Verify

Hi, thanks for this amazing repo! I am using it for face verification, but I don't really understand the differences between the prediction types verify, embed and align.
Shouldn't alignment be performed before prediction? Why is it a separate task whose output is simply left in the response?
My other question concerns embed and verify: what is the difference between them? If verification is just a cosine similarity between two image embeddings, why are there two separate tasks that produce two different results? The demo also shows "Cosine similarity of Face Representation Embeddings"
and "Cosine similarity of Face Verification Embeddings" with different values, and I cannot work out what is happening in the code.
Thanks in advance!
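
For illustration, a minimal sketch of how the two similarities can be computed from FaceAnalyzer responses. It assumes the per-face preds dictionary exposes 'embed' and 'verify' entries with a .logits tensor, as in the output shown elsewhere on this page; the config and image paths are placeholders, not an official snippet:

import torch.nn.functional as F
from omegaconf import OmegaConf
from facetorch import FaceAnalyzer

# Placeholder paths; gpu.config.yml stands in for a config following the repo's examples.
cfg = OmegaConf.load("gpu.config.yml")
analyzer = FaceAnalyzer(cfg.analyzer)

def first_face_preds(path_image):
    # Run the full pipeline (detect -> unify -> predict) and return the preds dict of the first face.
    response = analyzer.run(
        path_image=path_image,
        batch_size=cfg.batch_size,
        fix_img_size=cfg.fix_img_size,
        return_img_data=True,
        include_tensors=True,
        path_output="output.jpg",
    )
    return response.faces[0].preds

preds_a = first_face_preds("face_a.jpg")
preds_b = first_face_preds("face_b.jpg")

# embed and verify are produced by different networks (facial representation vs. face verification models),
# so the two cosine similarities are expected to differ.
for key in ("embed", "verify"):
    sim = F.cosine_similarity(preds_a[key].logits, preds_b[key].logits, dim=0)
    print(key, float(sim))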

Still the original size after feeding into the reader

Hi!

Thanks for your amazing work! I only need the au and pose predictors. When I run an image through the Colab demo, the reader returns it with size [3, 1080, 1080]. However, when I run the same image on my Ubuntu system, it stays at [3, 288, 288], which is the original size of the image. I understand that resizing matters for the later face detection step: the Colab demo detects the face, but my local run does not. Can you please help me with this? Thank you!


Error Running Facetorch Example File in Docker Image

I recently installed the Facetorch docker image and downloaded the necessary files according to the Facetorch User Guide. However, when I tried to run the example file, I encountered an error as shown in the attached screenshot.

Could you please advise on what could be causing this issue?

Thank you for your help.


Running on specified GPU

Hi, how can I run the analyzer on a second or otherwise specified GPU? PyTorch-like frameworks, for example, let us run on cuda:1, but I did not find a way to select a specific device here.
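
A minimal sketch of one way this could work, assuming the device is exposed through the Hydra config as analyzer.device (as in the GPU Docker example) and that the field accepts a full torch device string such as cuda:1; this is an assumption and untested:

# Command-line override, mirroring the analyzer.device=cuda pattern from the GPU example:
#   python ./scripts/example.py analyzer.device=cuda:1

# Or when loading the config in Python:
from omegaconf import OmegaConf
from facetorch import FaceAnalyzer

cfg = OmegaConf.load("gpu.config.yml")  # placeholder config path
cfg.analyzer.device = "cuda:1"  # assumption: the config field takes any torch device string
analyzer = FaceAnalyzer(cfg.analyzer)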

Unable to add packages

Hi.

As written in the README, I followed these steps to add pandas:

  1. Add packages with corresponding versions to gpu.environment.yml file
  2. Lock the environment: conda lock -p linux-64 -f gpu.environment.yml --lockfile gpu.conda-lock.yml
  3. (Alternative Docker) Lock the environment: docker compose -f docker-compose.dev.yml run facetorch-lock-gpu
  4. Install the locked environment: conda-lock install --name env gpu.conda-lock.yml

However, when I create a container after this procedure with "docker compose -f docker-compose.dev.yml run facetorch-dev-gpu", pandas is not available inside it. How can I add it?
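
For context, a minimal sketch of what the step 1 edit might look like, assuming gpu.environment.yml is a standard conda environment file; the surrounding entries are placeholders, not the actual contents of the repository file:

# gpu.environment.yml (excerpt; other dependencies omitted)
dependencies:
  - python=3.9        # placeholder
  - pytorch            # placeholder
  - pandas=2.0         # newly added package with a pinned version

If the Docker image is not rebuilt after relocking, the dev container may still use the previously built environment, so rebuilding the facetorch-dev-gpu image before running it might be necessary; this is an assumption about the compose setup rather than a confirmed fix.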

Running example.py without docker

Hi,
Thank you for the awesome work.
I wanted to try the code without using Docker, but I am running into the above error. Could you provide some guidance? I would also like to run a webcam demo; could you maybe share a code snippet for that as well? From what I checked, the analyzer function can only take frames. Thank you.
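
For reference, a rough webcam sketch rather than an official snippet. It assumes analyzer.run accepts only an image path (as in the other examples on this page), so each captured frame is written to a temporary file first; the config and output paths are placeholders, error handling is omitted, and throughput will be limited by the disk round trip:

import cv2
from omegaconf import OmegaConf
from facetorch import FaceAnalyzer

cfg = OmegaConf.load("gpu.config.yml")  # placeholder config path
analyzer = FaceAnalyzer(cfg.analyzer)

cap = cv2.VideoCapture(0)  # default webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Work around the path-only interface by writing the frame to disk.
        cv2.imwrite("/tmp/frame.jpg", frame)
        analyzer.run(
            path_image="/tmp/frame.jpg",
            batch_size=cfg.batch_size,
            fix_img_size=cfg.fix_img_size,
            return_img_data=True,
            include_tensors=False,
            path_output="/tmp/frame_out.jpg",
        )
        out = cv2.imread("/tmp/frame_out.jpg")  # image with drawn boxes/landmarks, if a face was found
        cv2.imshow("facetorch", out if out is not None else frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    cap.release()
    cv2.destroyAllWindows()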

Feed image as tensor

Hello, thanks for your solution. Is there any way to feed an image as a torch tensor instead of a path to the image? Handling every frame as a file is not very convenient.

How to reproduce the result of verification from the analyzer using only the predictor from it?

Hi! First of all, thank you for your work, it looks great.
I ran into problems reproducing the face verification results outside of the pipeline implemented in the analyzer class.
I want to use only the predictor rather than the whole pipeline because I have already implemented a face detector and only need to verify faces against each other, so most of the functionality in the analyzer is unnecessary for me.
The problem is that the cosine similarity scores computed from the analyzer output and from the standalone predictor are different, which breaks the verification.
Here is the code that I use:

import cv2
import numpy as np
import torch
from torch.nn.functional import cosine_similarity
from omegaconf import OmegaConf
from hydra.utils import instantiate
from facetorch import FaceAnalyzer

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

path_img_input = './database/17/17_01-00-399.jpg'
path_img_input_2 = './database/13/13_00-09-199.jpg'
path_img_output = "/test_output.jpg"
path_config = "gpu.config.yml"

# Load the images and convert them to normalized NCHW float tensors.
test_img = cv2.imread(path_img_input)
test_img = np.transpose(test_img, (2, 0, 1))
test_img = np.expand_dims(test_img, axis=0)
test_torch = torch.Tensor(test_img).to(device=device, dtype=torch.float32) / 255

test_img_2 = cv2.imread(path_img_input_2)
test_img_2 = np.transpose(test_img_2, (2, 0, 1))
test_img_2 = np.expand_dims(test_img_2, axis=0)
test_torch_2 = torch.Tensor(test_img_2).to(device=device, dtype=torch.float32) / 255

cfg = OmegaConf.load(path_config)

predictor = instantiate(cfg['analyzer']['predictor']['verify'])
analyzer = FaceAnalyzer(cfg.analyzer)

# Full analyzer pipeline (detection + unification + prediction).
response_a = analyzer.run(
    path_image=path_img_input,
    batch_size=cfg.batch_size,
    fix_img_size=cfg.fix_img_size,
    return_img_data=True,
    include_tensors=True,
    path_output=path_img_output,
)
response_a2 = analyzer.run(
    path_image=path_img_input_2,
    batch_size=cfg.batch_size,
    fix_img_size=cfg.fix_img_size,
    return_img_data=True,
    include_tensors=True,
    path_output=path_img_output,
)

# Standalone predictor on the full, uncropped images.
response = predictor.run(test_torch)[0].logits
response_2 = predictor.run(test_torch_2)[0].logits

vec_1 = response_a.faces[0].preds['verify'].logits
vec_2 = response_a2.faces[0].preds['verify'].logits

sim = cosine_similarity(vec_1, vec_2, dim=0)
sim_3 = cosine_similarity(response, response_2, dim=0)

print(sim)
print(sim_3)
As a result I get:
tensor(0.3768, device='cuda:0')
tensor(0.0573, device='cuda:0')
So the predictor by itself doesn't work.
Thank you in advance.
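
A possible explanation (an assumption, not a confirmed answer): the analyzer crops each detected face and the Unifier resizes it and normalizes it to the 0-1 range before the verify predictor runs, whereas the code above feeds the full, uncropped image directly to the predictor, so the two embeddings describe different inputs. A minimal sketch of approximating that preprocessing before the standalone predictor; the 112x112 face size and the crop coordinates are placeholders:

import cv2
import numpy as np
import torch

def preprocess_face(img_bgr, box, size=112, device="cuda:0"):
    """Crop a detected face, resize it, and scale it to [0, 1] as an NCHW float tensor."""
    x1, y1, x2, y2 = box  # face box from your own detector (placeholder)
    face = img_bgr[y1:y2, x1:x2]
    face = cv2.resize(face, (size, size))
    face = np.transpose(face, (2, 0, 1))[None, ...]
    return torch.tensor(face, dtype=torch.float32, device=device) / 255

# face_tensor = preprocess_face(cv2.imread(path_img_input), box=(x1, y1, x2, y2))
# logits = predictor.run(face_tensor)[0].logits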

Stuck on AU inference when `return_img_data=True`

Thanks for the exciting repository!

I tried to run the example .ipynb locally.
First, the warmup code ran correctly:

response = analyzer.run(
        path_image=path_img_input,
        batch_size=cfg.batch_size,
        fix_img_size=cfg.fix_img_size,
        return_img_data=False,
        include_tensors=True,
        path_output=path_img_output,
    )
{"asctime": "2023-11-27 14:09:19,209", "levelname": "INFO", "message": "Running FaceAnalyzer"}
{"asctime": "2023-11-27 14:09:19,209", "levelname": "INFO", "message": "Reading image", "path_image": "./test.jpg"}
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
{"asctime": "2023-11-27 14:09:19,289", "levelname": "INFO", "message": "Detecting faces"}
{"asctime": "2023-11-27 14:09:21,654", "levelname": "INFO", "message": "Number of faces: 4"}
{"asctime": "2023-11-27 14:09:21,655", "levelname": "INFO", "message": "Unifying faces"}
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
{"asctime": "2023-11-27 14:09:21,679", "levelname": "INFO", "message": "Predicting facial features"}
{"asctime": "2023-11-27 14:09:21,679", "levelname": "INFO", "message": "Running FacePredictor: embed"}
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
{"asctime": "2023-11-27 14:09:21,816", "levelname": "INFO", "message": "Running FacePredictor: verify"}
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
{"asctime": "2023-11-27 14:09:30,575", "levelname": "INFO", "message": "Running FacePredictor: fer"}
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
{"asctime": "2023-11-27 14:09:30,835", "levelname": "INFO", "message": "Running FacePredictor: au"}
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
{"asctime": "2023-11-27 14:09:31,206", "levelname": "INFO", "message": "Running FacePredictor: deepfake"}
{"asctime": "2023-11-27 14:09:31,675", "levelname": "INFO", "message": "Running FacePredictor: align"}
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
{"asctime": "2023-11-27 14:09:31,966", "levelname": "INFO", "message": "Utilizing facial features"}
{"asctime": "2023-11-27 14:09:31,967", "levelname": "INFO", "message": "Running BaseUtilizer: align"}
{"asctime": "2023-11-27 14:09:31,976", "levelname": "INFO", "message": "Running BaseUtilizer: draw_boxes"}
{"asctime": "2023-11-27 14:09:32,005", "levelname": "INFO", "message": "Running BaseUtilizer: draw_landmarks"}

But after changing return_img_data=False to return_img_data=True, the run hangs on the AU inference.

# warmup
response = analyzer.run(
        path_image=path_img_input,
        batch_size=cfg.batch_size,
        fix_img_size=cfg.fix_img_size,
        return_img_data=True,
        include_tensors=True,
        path_output=path_img_output,
    )
{"asctime": "2023-11-27 14:23:03,506", "levelname": "INFO", "message": "Running FaceAnalyzer"}
{"asctime": "2023-11-27 14:23:03,506", "levelname": "INFO", "message": "Reading image", "path_image": "./test.jpg"}
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
{"asctime": "2023-11-27 14:23:03,541", "levelname": "INFO", "message": "Detecting faces"}
{"asctime": "2023-11-27 14:23:04,294", "levelname": "INFO", "message": "Number of faces: 4"}
{"asctime": "2023-11-27 14:23:04,294", "levelname": "INFO", "message": "Unifying faces"}
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
{"asctime": "2023-11-27 14:23:04,296", "levelname": "INFO", "message": "Predicting facial features"}
{"asctime": "2023-11-27 14:23:04,296", "levelname": "INFO", "message": "Running FacePredictor: embed"}
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
{"asctime": "2023-11-27 14:23:04,356", "levelname": "INFO", "message": "Running FacePredictor: verify"}
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
{"asctime": "2023-11-27 14:23:25,887", "levelname": "INFO", "message": "Running FacePredictor: fer"}
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
{"asctime": "2023-11-27 14:23:28,681", "levelname": "INFO", "message": "Running FacePredictor: au"}
C:\Users\sobassy\.conda\envs\jupyter\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(

Is there a solution? Or how can I get verbose information during inference?

Verbose information

(jupyter) C:\Users\sobassy>pip list
Package                  Version
------------------------ --------------------
antlr4-python3-runtime   4.9.3
anyio                    3.5.0
appdirs                  1.4.4
argon2-cffi              21.3.0
argon2-cffi-bindings     21.2.0
asttokens                2.0.5
attrs                    22.1.0
Babel                    2.11.0
backcall                 0.2.0
beautifulsoup4           4.12.0
black                    23.9.1
bleach                   4.1.0
brotlipy                 0.7.0
certifi                  2022.12.7
cffi                     1.15.1
charset-normalizer       2.0.4
click                    8.1.7
codetiming               1.4.0
colorama                 0.4.6
comm                     0.1.2
contourpy                1.1.0
cryptography             39.0.1
cycler                   0.11.0
debugpy                  1.5.1
decorator                5.1.1
defusedxml               0.7.1
dill                     0.3.7
entrypoints              0.4
executing                0.8.3
facetorch                0.3.0
fastjsonschema           2.16.2
filelock                 3.12.2
flit_core                3.8.0
fonttools                4.40.0
fsspec                   2023.4.0
gdown                    4.7.1
gitdb                    4.0.10
GitPython                3.1.31
googleapis-common-protos 1.61.0
hydra-core               1.3.2
idna                     3.4
importlib-metadata       6.0.0
importlib-resources      5.12.0
ipykernel                6.19.2
ipython                  8.12.0
ipython-autotime         0.3.2
ipython-genutils         0.2.0
ipywidgets               8.0.4
isort                    5.12.0
jedi                     0.18.1
Jinja2                   3.1.2
joblib                   1.3.2
json5                    0.9.6
jsonschema               4.17.3
jupyter                  1.0.0
jupyter_client           8.1.0
jupyter-console          6.6.3
jupyter_core             5.3.0
jupyter-server           1.23.4
jupyterlab               3.5.3
jupyterlab-pygments      0.1.2
jupyterlab_server        2.22.0
jupyterlab-widgets       3.0.5
kiwisolver               1.4.4
lxml                     4.9.2
MarkupSafe               2.1.1
matplotlib               3.7.1
matplotlib-inline        0.1.6
mistune                  0.8.4
mkl-fft                  1.3.1
mkl-random               1.2.2
mkl-service              2.4.0
mpmath                   1.3.0
mypy-extensions          1.0.0
nbclassic                0.5.4
nbclient                 0.5.13
nbconvert                6.5.4
nbformat                 5.7.0
nest-asyncio             1.5.6
networkx                 3.1
notebook                 6.5.3
notebook_shim            0.2.2
numpy                    1.23.5
omegaconf                2.3.0
opencv-python            4.7.0.72
packaging                23.0
pandas                   2.0.2
pandocfilters            1.5.0
parso                    0.8.3
pathspec                 0.11.2
pbr                      6.0.0
pickleshare              0.7.5
Pillow                   9.5.0
pip                      23.0.1
platformdirs             3.11.0
ply                      3.11
pooch                    1.4.0
prometheus-client        0.14.1
prompt-toolkit           3.0.36
protobuf                 4.25.1
psutil                   5.9.0
pure-eval                0.2.2
pycparser                2.21
Pygments                 2.11.2
pylatexenc               2.10
pyOpenSSL                23.0.0
pyparsing                3.1.0
PyQt5                    5.15.7
PyQt5-sip                12.11.0
pyrsistent               0.18.0
PySocks                  1.7.1
python-dateutil          2.8.2
python-json-logger       2.0.7
pytz                     2022.7
pywin32                  305.1
pywinpty                 2.0.10
PyYAML                   6.0
pyzmq                    23.2.0
qiskit                   0.45.0
qiskit-aer               0.13.0
qiskit-terra             0.45.0
qtconsole                5.4.0
QtPy                     2.2.0
requests                 2.28.1
rustworkx                0.13.2
scikit-learn             1.3.1
scipy                    1.10.1
seaborn                  0.12.2
Send2Trash               1.8.0
setuptools               65.6.3
sip                      6.6.2
six                      1.16.0
smmap                    5.0.0
sniffio                  1.2.0
soupsieve                2.4
stack-data               0.2.0
stevedore                5.1.0
sympy                    1.12
terminado                0.17.1
thop                     0.1.1.post2209072238
threadpoolctl            3.2.0
tinycss2                 1.2.1
toml                     0.10.2
tomli                    2.0.1
torch                    2.1.1+cu118
torchaudio               2.1.1+cu118
torchvision              0.16.1+cu118
tornado                  6.2
tqdm                     4.65.0
traitlets                5.7.1
typing_extensions        4.8.0
tzdata                   2023.3
ultralytics              8.0.121
urllib3                  1.26.15
wcwidth                  0.2.5
webencodings             0.5.1
websocket-client         0.58.0
wheel                    0.38.4
widgetsnbextension       4.0.5
win-inet-pton            1.1.0
wincertstore             0.2
zipp                     3.11.0
import torch

print(torch.cuda.is_available())
print(torch.cuda.device_count())
print(torch.cuda.current_device())
print(torch.cuda.get_device_name(0))
True
2
0
NVIDIA GeForce RTX 3080

Thank you.
