
torchlm's Introduction

torchlm-logo

English | Data Augmentations API Docs | ZhiHu Page | Pypi Downloads

🤗 Introduction

torchlm aims to provide a high-level pipeline for face landmarks detection. It supports training, evaluating, exporting, inference (Python/C++) and 100+ data augmentations, and can easily be installed via pip.

Other Repos 🔥🔥

🛠lite.ai.toolkit 💎torchlm 📒statistic-learning-R-note 🎉cuda-learn-note 📖Awesome-LLM-Inference

👋 Core Features

  • High level pipeline for training and inference.
  • Provides 30+ native landmarks data augmentations.
  • Can bind 80+ transforms from torchvision and albumentations with one line of code.
  • Support PIPNet, YOLOX, ResNet, MobileNet and ShuffleNet for face landmarks detection.

🆕 What's New

🔥🔥Performance(@NME)

Model  | Backbone    | Head                   | 300W | COFW | AFLW | WFLW | Download
PIPNet | MobileNetV2 | Heatmap+Regression+NRM | 3.40 | 3.43 | 1.52 | 4.79 | link
PIPNet | ResNet18    | Heatmap+Regression+NRM | 3.36 | 3.31 | 1.48 | 4.47 | link
PIPNet | ResNet50    | Heatmap+Regression+NRM | 3.34 | 3.18 | 1.44 | 4.48 | link
PIPNet | ResNet101   | Heatmap+Regression+NRM | 3.19 | 3.08 | 1.42 | 4.31 | link

🛠️Installation

You can install torchlm directly from PyPI.

pip install "torchlm>=0.1.6.10"  # or install the latest pypi version: `pip install torchlm`
pip install "torchlm>=0.1.6.10" -i https://pypi.org/simple/  # or install from a specific pypi mirror using '-i'

Or install from source if you want the latest torchlm, and install it in editable mode with -e.

git clone --depth=1 https://github.com/DefTruth/torchlm.git 
cd torchlm && pip install -e .

🌟🌟Data Augmentation

torchlm provides 30+ native data augmentations for landmarks and can bind 80+ transforms from torchvision and albumentations. The layout format of landmarks is xy with shape (N, 2).

Use the 30+ native transforms from torchlm directly

import torchlm
transform = torchlm.LandmarksCompose([
    torchlm.LandmarksRandomScale(prob=0.5),
    torchlm.LandmarksRandomMask(prob=0.5),
    torchlm.LandmarksRandomBlur(kernel_range=(5, 25), prob=0.5),
    torchlm.LandmarksRandomBrightness(prob=0.),
    torchlm.LandmarksRandomRotate(40, prob=0.5, bins=8),
    torchlm.LandmarksRandomCenterCrop((0.5, 1.0), (0.5, 1.0), prob=0.5)
])
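
The composed transform can then be applied directly to an image and its landmarks. A minimal usage sketch, assuming an image loaded with OpenCV and an (N, 2) xy landmarks array in absolute pixel coordinates (the file path and landmark values are placeholders):

import cv2
import numpy as np

img = cv2.imread("./your_face.jpg")                    # HWC, BGR, np.ndarray (placeholder path)
landmarks = np.random.rand(98, 2).astype(np.float32)   # dummy (N, 2) xy landmarks
landmarks[:, 0] *= img.shape[1]                        # scale x to image width
landmarks[:, 1] *= img.shape[0]                        # scale y to image height

trans_img, trans_landmarks = transform(img, landmarks) # returns the transformed pair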

Also, a user-friendly API build_default_transform is available to build a default transform pipeline.

transform = torchlm.build_default_transform(
    input_size=(input_size, input_size),  # e.g. input_size = 256
    mean=[0.485, 0.456, 0.406],
    std=[0.229, 0.224, 0.225],
    force_norm_before_mean_std=True,  # img/=255. first
    rotate=30,
    keep_aspect=False,
    to_tensor=True  # array -> Tensor & HWC -> CHW
)

See transforms.md for the supported transform sets; more examples can be found at test/transforms.py.

💡 More details about transforms in torchlm

torchlm provides 30+ native data augmentations for landmarks and can bind 80+ transforms from torchvision and albumentations through the torchlm.bind method. The layout format of landmarks is xy with shape (N, 2), where N denotes the number of input landmarks. Further, torchlm.bind provides a prob param at bind-level to turn any transform or callable into a random-style augmentation. The data augmentations in torchlm are safe and simple: any transform operation that would push landmarks outside the image at runtime is automatically dropped, so the number of landmarks stays unchanged. It is also ok to pass a Tensor to a np.ndarray-like transform; torchlm automatically handles the different data types and wraps the result back to the original type through an autodtype wrapper.

Bind 80+ transforms from torchvision and albumentations

NOTE: Please install albumentations first if you want to bind albumentations's transforms. If you hit a conflict between different installed versions of opencv (opencv-python and opencv-python-headless; albumentations needs opencv-python-headless), please uninstall both opencv-python and opencv-python-headless first, and then reinstall albumentations. See albumentations#1140 for more details.

# first uninstall conflict opencvs
pip uninstall opencv-python
pip uninstall opencv-python-headless
pip uninstall albumentations  # if you have installed albumentations
pip install albumentations  # then reinstall albumentations; this will also install its deps, e.g. opencv

Then, check if albumentations is available.

import torchvision
import albumentations
import torchlm

torchlm.albumentations_is_available()  # True or False

transform = torchlm.LandmarksCompose([
    torchlm.bind(torchvision.transforms.GaussianBlur(kernel_size=(5, 25)), prob=0.5),
    torchlm.bind(albumentations.ColorJitter(p=0.5))
])
Bind custom callable array or Tensor transform functions

from typing import Tuple
import numpy as np
from torch import Tensor

# First, define your custom functions
def callable_array_noop(img: np.ndarray, landmarks: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
    # do some array-based transform here ...
    return img.astype(np.uint32), landmarks.astype(np.float32)

def callable_tensor_noop(img: Tensor, landmarks: Tensor) -> Tuple[Tensor, Tensor]:
    # do some tensor-based transform here ...
    return img, landmarks

# Then, bind your functions and put them into the transforms pipeline.
transform = torchlm.LandmarksCompose([
        torchlm.bind(callable_array_noop, bind_type=torchlm.BindEnum.Callable_Array),
        torchlm.bind(callable_tensor_noop, bind_type=torchlm.BindEnum.Callable_Tensor, prob=0.5)
])
Some global debug settings for torchlm's transforms
  • Setting the logging mode to True globally might help you figure out the runtime details.
# some global setting
torchlm.set_transforms_debug(True)
torchlm.set_transforms_logging(True)
torchlm.set_autodtype_logging(True)

Some detailed information will be shown at runtime; the logs might look like:

LandmarksRandomScale() AutoDtype Info: AutoDtypeEnum.Array_InOut
LandmarksRandomScale() Execution Flag: False
BindTorchVisionTransform(GaussianBlur())() AutoDtype Info: AutoDtypeEnum.Tensor_InOut
BindTorchVisionTransform(GaussianBlur())() Execution Flag: True
BindAlbumentationsTransform(ColorJitter())() AutoDtype Info: AutoDtypeEnum.Array_InOut
BindAlbumentationsTransform(ColorJitter())() Execution Flag: True
BindTensorCallable(callable_tensor_noop())() AutoDtype Info: AutoDtypeEnum.Tensor_InOut
BindTensorCallable(callable_tensor_noop())() Execution Flag: False
Error at LandmarksRandomTranslate() Skip, Flag: False Error Info: LandmarksRandomTranslate() have 98 input landmarks, but got 96 output landmarks!
LandmarksRandomTranslate() Execution Flag: False
  • Execution Flag: True means the current transform was executed successfully; False means it was not executed because of the random probability or some runtime exception (torchlm will show the error info if debug mode is True).

  • AutoDtype Info:

    • Array_InOut means the current transform needs a np.ndarray as input and outputs a np.ndarray.
    • Tensor_InOut means the current transform needs a torch Tensor as input and outputs a torch Tensor.
    • Array_In means the current transform needs a np.ndarray as input and outputs a torch Tensor.
    • Tensor_In means the current transform needs a torch Tensor as input and outputs a np.ndarray.

    It is also ok to pass a Tensor to a np.ndarray-like transform; torchlm will automatically handle the different data types and wrap the result back to the original type through the autodtype wrapper.
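
For example, a pipeline built only from array-based native transforms can still be fed torch Tensors. A minimal sketch with dummy data (the autodtype wrapper is expected to convert the Tensors to arrays for the transform and wrap the outputs back to Tensors):

import torch
import torchlm

transform = torchlm.LandmarksCompose([
    torchlm.LandmarksRandomScale(prob=1.0),  # an Array_InOut transform
])
img_tensor = torch.randint(0, 255, (256, 256, 3), dtype=torch.uint8)  # dummy HWC image
lms_tensor = torch.rand(98, 2) * 256.                                 # dummy (N, 2) landmarks
img_out, lms_out = transform(img_tensor, lms_tensor)  # outputs wrapped back as Tensors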

🎉🎉Training

In torchlm, each model has two high-level and user-friendly APIs for training, named apply_training and apply_freezing. apply_training handles the training process and apply_freezing decides whether to freeze the backbone for fine-tuning.

Quick Start👇

Here is an example of PIPNet. You can freeze the backbone before fine-tuning through apply_freezing.

from torchlm.models import pipnet
# will auto download pretrained weights from latest release if pretrained=True
model = pipnet(backbone="resnet18", pretrained=True, num_nb=10, num_lms=98, net_stride=32,
               input_size=256, meanface_type="wflw", backbone_pretrained=True)
model.apply_freezing(backbone=True)
model.apply_training(
    annotation_path="../data/WFLW/converted/train.txt",  # or fine-tuning your custom data
    num_epochs=10,
    learning_rate=0.0001,
    save_dir="./save/pipnet",
    save_prefix="pipnet-wflw-resnet18",
    save_interval=1,
    logging_interval=1,
    device="cuda",
    coordinates_already_normalized=True,
    batch_size=16,
    num_workers=4,
    shuffle=True
)

Please jump to the entry point of the function for the detailed documentation of the apply_training API of each model defined in torchlm, e.g. pipnet/_impls.py#L166. You will see some logs like the following while the training process is running:

Parameters for DataLoader:  {'batch_size': 16, 'num_workers': 4, 'shuffle': True}
Built _PIPTrainDataset: train count is 7500 !
Epoch 0/9
----------
[Epoch 0/9, Batch 1/468] <Total loss: 0.372885> <cls loss: 0.063186> <x loss: 0.078508> <y loss: 0.071679> <nbx loss: 0.086480> <nby loss: 0.073031>
[Epoch 0/9, Batch 2/468] <Total loss: 0.354169> <cls loss: 0.051672> <x loss: 0.075350> <y loss: 0.071229> <nbx loss: 0.083785> <nby loss: 0.072132>
[Epoch 0/9, Batch 3/468] <Total loss: 0.367538> <cls loss: 0.056038> <x loss: 0.078029> <y loss: 0.076432> <nbx loss: 0.083546> <nby loss: 0.073492>
[Epoch 0/9, Batch 4/468] <Total loss: 0.339656> <cls loss: 0.053631> <x loss: 0.073036> <y loss: 0.066723> <nbx loss: 0.080007> <nby loss: 0.066258>
[Epoch 0/9, Batch 5/468] <Total loss: 0.364556> <cls loss: 0.051094> <x loss: 0.077378> <y loss: 0.071951> <nbx loss: 0.086363> <nby loss: 0.077770>
[Epoch 0/9, Batch 6/468] <Total loss: 0.371356> <cls loss: 0.049117> <x loss: 0.079237> <y loss: 0.075729> <nbx loss: 0.086213> <nby loss: 0.081060>
...
[Epoch 0/9, Batch 33/468] <Total loss: 0.298983> <cls loss: 0.041368> <x loss: 0.069912> <y loss: 0.057667> <nbx loss: 0.072996> <nby loss: 0.057040>
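
Once training has finished, the saved weights can be loaded back through the checkpoint argument of the model constructor (the same argument used in the inference examples below). A hedged sketch: the checkpoint filename below is hypothetical, use whatever file apply_training actually wrote under save_dir, and double-check the interaction between pretrained and checkpoint in the pipnet source.

from torchlm.models import pipnet

model = pipnet(backbone="resnet18", pretrained=False, num_nb=10, num_lms=98,
               net_stride=32, input_size=256, meanface_type="wflw",
               map_location="cpu",
               checkpoint="./save/pipnet/pipnet-wflw-resnet18-epoch9.pth")  # hypothetical filename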

Dataset Format👇

The annotation_path parameter denotes the path to a custom annotation file; the format must be:

"img0_path x0 y0 x1 y1 ... xn-1,yn-1"
"img1_path x0 y0 x1 y1 ... xn-1,yn-1"
"img2_path x0 y0 x1 y1 ... xn-1,yn-1"
"img3_path x0 y0 x1 y1 ... xn-1,yn-1"
...

If the labels in annotation_path are already normalized by the image size, please set coordinates_already_normalized to True in the apply_training API.

"img0_path x0/w y0/h x1/w y1/h ... xn-1/w,yn-1/h"
"img1_path x0/w y0/h x1/w y1/h ... xn-1/w,yn-1/h"
"img2_path x0/w y0/h x1/w y1/h ... xn-1/w,yn-1/h"
"img3_path x0/w y0/h x1/w y1/h ... xn-1/w,yn-1/h"
...
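
As a quick sanity check before training, each annotation line can be parsed back into an image path and an (N, 2) landmarks array. A minimal parsing sketch (the annotation path matches the training example above):

import numpy as np

with open("../data/WFLW/converted/train.txt", "r") as f:
    line = f.readline().strip()

parts = line.split(" ")
img_path = parts[0]
landmarks = np.array(parts[1:], dtype=np.float32).reshape(-1, 2)  # (N, 2), xy layout
print(img_path, landmarks.shape)  # e.g. some_image.jpg (98, 2)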

Here is an example of WFLW showing how to prepare the dataset; also see test/data.py.

Additional Custom Settings👋

Some models in torchlm support additional custom settings beyond the num_lms of your custom dataset. For example, PIPNet also needs a custom meanface generated from your custom dataset. Please jump to the source code of each model defined in torchlm for details about these additional custom settings, which give more flexibility to the training or fine-tuning process. Here is an example of how to train PIPNet on your own dataset with a custom meanface setting.

Set up your custom meanface and nearest-neighbor landmarks through the pipnet.set_custom_meanface method. This method calculates the Euclidean distance between the landmarks in the meanface and automatically sets up the nearest neighbors for each landmark. NOTE: PIPNet will reshape the detection heads if the number of landmarks in the custom dataset is not equal to the num_lms you initialized.

def set_custom_meanface(custom_meanface_file_or_string: str) -> bool:
    """
    :param custom_meanface_file_or_string: a long string or a file contains normalized
    or un-normalized meanface coords, the format is "x0,y0,x1,y1,x2,y2,...,xn-1,yn-1".
    :return: status, True if successful.
    """

Also, a generate_meanface API is available in torchlm to help you generate a meanface for your custom dataset.

import cv2
import torchlm

# generate your custom meanface.
custom_meanface, custom_meanface_string = torchlm.data.annotools.generate_meanface(
  annotation_path="../data/WFLW/converted/train.txt",
  coordinates_already_normalized=True)
# check your generated meanface.
rendered_meanface = torchlm.data.annotools.draw_meanface(
  meanface=custom_meanface, coordinates_already_normalized=True)
cv2.imwrite("./logs/wflw_meanface.jpg", rendered_meanface)
# setting up your custom meanface
model.set_custom_meanface(custom_meanface_file_or_string=custom_meanface_string)

Benchmark Dataset Converters👇

In torchlm, some pre-defined dataset converters for commonly used benchmark datasets are available, such as 300W, COFW, WFLW and AFLW. These converters will help you convert such datasets into the standard annotation format that torchlm needs. Here is an example of WFLW.

from torchlm.data import LandmarksWFLWConverter
# set up your path to the original dataset downloaded from the official site
converter = LandmarksWFLWConverter(
    data_dir="../data/WFLW", save_dir="../data/WFLW/converted",
    extend=0.2, rebuild=True, target_size=256, keep_aspect=False,
    force_normalize=True, force_absolute_path=True
)
converter.convert()
converter.show(count=30)  # show you some converted images with landmarks for debugging

Then, the output layout in ../data/WFLW/converted will look like:

├── image
│   ├── test
│   └── train
├── show
│   ├── 16--Award_Ceremony_16_Award_Ceremony_Awards_Ceremony_16_589x456y91.jpg
│   ├── 20--Family_Group_20_Family_Group_Family_Group_20_118x458y58.jpg
...
├── test.txt
└── train.txt

🛸🚵‍️ Inference

C++ APIs👀

The ONNXRuntime (CPU/GPU), MNN, NCNN and TNN C++ inference of torchlm will be released in lite.ai.toolkit. Here is an example of 1000-point facial landmarks detection using FaceLandmark1000. Download the model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}


More classes for face alignment (68 points, 98 points, 106 points, 1000 points)

auto *align = new lite::cv::face::align::PFLD(onnx_path);  // 106 landmarks, 1.0Mb only!
auto *align = new lite::cv::face::align::PFLD98(onnx_path);  // 98 landmarks, 4.8Mb only!
auto *align = new lite::cv::face::align::PFLD68(onnx_path);  // 68 landmarks, 2.8Mb only!
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path);  // 68 landmarks, 9.4Mb only!
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path);  // 68 landmarks, 11Mb only!
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path);  // 1000 landmarks, 2.0Mb only!
auto *align = new lite::cv::face::align::PIPNet98(onnx_path);  // 98 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet68(onnx_path);  // 68 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet29(onnx_path);  // 29 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet19(onnx_path);  // 19 landmarks, CVPR2021!

For more details of the C++ APIs, please check lite.ai.toolkit.

Python APIs👇

In torchlm, we provide pipelines for deploying models with PyTorch and ONNXRuntime. A high-level API named runtime.bind can bind face detection and landmarks models together, and you can then run the runtime.forward API to get the output landmarks and bboxes. Here is an example of PIPNet. Pretrained weights of PIPNet: Download.

Inference on PyTorch Backend

import cv2
import torchlm
from torchlm.tools import faceboxesv2
from torchlm.models import pipnet

image = cv2.imread("./your_face.jpg")  # HWC, BGR, np.ndarray (placeholder path)

torchlm.runtime.bind(faceboxesv2(device="cpu"))  # set device="cuda" if you want to run with CUDA
# set map_location="cuda" if you want to run with CUDA
torchlm.runtime.bind(
  pipnet(backbone="resnet18", pretrained=True,  
         num_nb=10, num_lms=98, net_stride=32, input_size=256,
         meanface_type="wflw", map_location="cpu", checkpoint=None) 
) # will auto download pretrained weights from latest release if pretrained=True
landmarks, bboxes = torchlm.runtime.forward(image)
image = torchlm.utils.draw_bboxes(image, bboxes=bboxes)
image = torchlm.utils.draw_landmarks(image, landmarks=landmarks)

Inference on ONNXRuntime Backend

import cv2
import torchlm
from torchlm.runtime import faceboxesv2_ort, pipnet_ort

image = cv2.imread("./your_face.jpg")  # HWC, BGR, np.ndarray (placeholder path)

torchlm.runtime.bind(faceboxesv2_ort())
torchlm.runtime.bind(
  pipnet_ort(onnx_path="pipnet_resnet18.onnx", num_nb=10,
             num_lms=98, net_stride=32, input_size=256, meanface_type="wflw")
)
landmarks, bboxes = torchlm.runtime.forward(image)
image = torchlm.utils.draw_bboxes(image, bboxes=bboxes)
image = torchlm.utils.draw_landmarks(image, landmarks=landmarks)

🤠🎯 Evaluating

In torchlm, each model has a high-level and user-friendly API named apply_evaluating for evaluation. This method calculates the NME, FR and AUC on the eval dataset. Here is an example of PIPNet.

from torchlm.models import pipnet
# will auto download pretrained weights from latest release if pretrained=True
model = pipnet(backbone="resnet18", pretrained=True, num_nb=10, num_lms=98, net_stride=32,
               input_size=256, meanface_type="wflw", backbone_pretrained=True)
NME, FR, AUC = model.apply_evaluating(
    annotation_path="../data/WFLW/convertd/test.txt",
    norm_indices=[60, 72],  # the indexes of two eyeballs.
    coordinates_already_normalized=True, 
    eval_normalized_coordinates=False
)
print(f"NME: {NME}, FR: {FR}, AUC: {AUC}")

Then, you will get the performance (NME, FR, AUC) results.

Built _PIPEvalDataset: eval count is 2500 !
Evaluating PIPNet: 100%|██████████| 2500/2500 [02:53<00:00, 14.45it/s]
NME: 0.04453323229181989, FR: 0.04200000000000004, AUC: 0.5732673333333334
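
For reference, NME with this kind of normalization is usually the mean point-to-point error divided by the distance between the two landmarks given in norm_indices. A rough sketch of the metric (not torchlm's exact implementation):

import numpy as np

def nme(pred: np.ndarray, gt: np.ndarray, norm_indices=(60, 72)) -> float:
    # pred, gt: (N, 2) landmarks of a single face in the same coordinate space
    norm_dist = np.linalg.norm(gt[norm_indices[0]] - gt[norm_indices[1]])
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)) / norm_dist)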

⚙️⚔️ Exporting

In torchlm, each model has a high-level and user-friendly API named apply_exporting for ONNX export. Here is an example of PIPNet.

from torchlm.models import pipnet
# will auto download pretrained weights from latest release if pretrained=True
model = pipnet(backbone="resnet18", pretrained=True, num_nb=10, num_lms=98, net_stride=32,
               input_size=256, meanface_type="wflw", backbone_pretrained=True)
model.apply_exporting(
    onnx_path="./save/pipnet/pipnet_resnet18.onnx",
    opset=12, simplify=True, output_names=None  # use default output names.
)

Then, you will get a static ONNX model file once the exporting process is done; the tail of the exported graph looks like:

  ...
  %195 = Add(%259, %189)
  %196 = Relu(%195)
  %outputs_cls = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%196, %cls_layer.weight, %cls_layer.bias)
  %outputs_x = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%196, %x_layer.weight, %x_layer.bias)
  %outputs_y = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%196, %y_layer.weight, %y_layer.bias)
  %outputs_nb_x = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%196, %nb_x_layer.weight, %nb_x_layer.bias)
  %outputs_nb_y = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%196, %nb_y_layer.weight, %nb_y_layer.bias)
  return %outputs_cls, %outputs_x, %outputs_y, %outputs_nb_x, %outputs_nb_y
}
Checking 0/3...
Checking 1/3...
Checking 2/3...
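
To double-check the exported file, you can load it with ONNXRuntime and run a dummy input. A minimal sketch; the input name and the 1x3x256x256 shape are assumptions based on the PIPNet export above, so verify them against your own model:

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("./save/pipnet/pipnet_resnet18.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 256, 256).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])  # outputs_cls, outputs_x, outputs_y, outputs_nb_x, outputs_nb_y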

📖 Documentation

🎓 License

The code of torchlm is released under the MIT License.

❤️ Contribution

Please consider ⭐ this repo if you like it, as it is the simplest way to support me.

👋 Acknowledgement

torchlm's People

Contributors

artificialzeng, DefTruth, nunenuh


torchlm's Issues

Image scale changes after the rotation transform

Hello, I found that after rotating an image with torchlm, the image corners stay inscribed within the image bounding box, which changes the scale of the image. Have you considered adding a parameter to keep the image scale, or a method to obtain the scale factor applied to the image?

Error on Converting 300W dataset in class Landmarks300WConverter with message UFuncTypeError: Cannot cast ufunc 'subtract' output...

Example code

from torchlm.data import LandmarksWFLWConverter, Landmarks300WConverter
# setup your path to the original downloaded dataset from official 
converter = Landmarks300WConverter(
    data_dir="/data/extended/landmark/ibug_300W_dlib/", save_dir="/data/extended/landmark/ibug_300W_dlib/converted",
    extend=0.2, rebuild=True, target_size=256, keep_aspect=False,
    force_normalize=True, force_absolute_path=True
)
converter.convert()
converter.show(count=30)  # show you some converted images with landmarks for debugging

Error Message

Converting 300W Train Annotations:   0%|                                                                                                                                          | 0/3148 [00:00<?, ?it/s]
---------------------------------------------------------------------------
UFuncTypeError                            Traceback (most recent call last)
Input In [1], in <cell line: 8>()
      2 # setup your path to the original downloaded dataset from official 
      3 converter = Landmarks300WConverter(
      4     data_dir="/data/extended/landmark/ibug_300W_dlib//", save_dir="/data/extended/landmark/ibug_300W_dlib/converted",
      5     extend=0.2, rebuild=True, target_size=256, keep_aspect=False,
      6     force_normalize=True, force_absolute_path=True
      7 )
----> 8 converter.convert()
      9 converter.show(count=30)

File /opt/anaconda/envs/facial/lib/python3.9/site-packages/torchlm/data/_converters.py:329, in Landmarks300WConverter.convert(self)
    322 test_anno_file = open(self.save_test_annotation_path, "w")
    324 for annotation in tqdm.tqdm(
    325         self.train_annotations,
    326         colour="GREEN",
    327         desc="Converting 300W Train Annotations"
    328 ):
--> 329     crop, landmarks, new_img_name = self._process_annotation(annotation=annotation)
    330     if crop is None or landmarks is None:
    331         continue

File /opt/anaconda/envs/facial/lib/python3.9/site-packages/torchlm/data/_converters.py:493, in Landmarks300WConverter._process_annotation(self, annotation)
    491 crop = image[int(ymin):int(ymax), int(xmin):int(xmax), :]
    492 # adjust according to left-top corner
--> 493 landmarks[:, 0] -= float(xmin)
    494 landmarks[:, 1] -= float(ymin)
    496 if self.target_size is not None and self.resize_op is not None:

UFuncTypeError: Cannot cast ufunc 'subtract' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'

Solution

Change these lines

# adjust according to left-top corner
landmarks[:, 0] -= float(xmin)
landmarks[:, 1] -= float(ymin)

Use these lines instead

# adjust according to left-top corner
landmarks[:, 0] = landmarks[:, 0] - float(xmin)
landmarks[:, 1] = landmarks[:, 1] - float(ymin)

Environment

  • conda 4.10.3
  • Python 3.9.12 (main, Jun 1 2022, 11:38:51)
  • numpy version 1.23.1
  • torch version 1.12.1
  • Operating system Ubuntu 20.04.3 LTS

update torchlm pypi v0.1.2

  • add a safe check to make sure the landmarks keep the same number after a transform is done
  • fixed docs typos
  • add internal transform logging API

The console automatically prints decimal numbers

In class LandmarksRandomScale(LandmarksTransform), there are print(resize_scale_x) and print(resize_scale_y) calls. Were these debug prints accidentally left in during an update?

How to infer in batches

image = cv2.imread(img_path)
landmarks, bboxes = torchlm.runtime.forward(image) >> batches

Is there a way to infer in batches using an image list or data loader instead of inferring with a single image?

Wrong results while running on GPU

Hi,

Thanks for the wonderful work. I am using torchlm for landmark detection. While running on CPU, the results look fine. When I later run it on a GPU server, it gives nonsensical results like the following:
image

I don't know if anyone has encountered this issue before. I set up the environment the same way both locally and remotely, and it keeps giving nonsensical results on both CPU and GPU when running on the remote server.

HorizontalFlip with Albumentations

Great project and thanks for sharing it!
For horizontal flip, PIPNet has a point-flip augmentation such as:

def random_flip(image, target, points_flip):
    if random.random() > 0.5:
        image = image.transpose(Image.FLIP_LEFT_RIGHT)
        target = np.array(target).reshape(-1, 2)
        target = target[points_flip, :]
        target[:,0] = 1-target[:,0]
        target = target.flatten()
        return image, target
    else:
        return image, target

How do you implement the above function with Albumentations? I found HorizontalFlip, but it has no points_flip argument.

is PIPNet the SOTA ?

Hi, thanks for the work!
I am from industry, and I plan to choose one algorithm and train it with custom datasets.

I would like to ask which algorithm is the SOTA choice for the 2D facial landmarks task?
By SOTA I mean the video stability and accuracy given in the paper or github.

Also, any suggestions for 3D facial landmarks?

Wrong results with Faceboxesv2 (ONNX) and there isn't YoloV5face implemented yet.

Hi, first of all, thanks for the amazing work.
I found an error regarding the FaceBoxesV2 (ONNX) model.
When I run inference on my image:
test_hrnet_
I get nothing. I was able to track the problem down to the model itself: if I change the threshold of the model from 0.3 to 0.003 I get:
pipnet0_
As you can see, the model is not working well, even though the focus and brightness conditions are very good. So I was wondering whether the model only works well on the test image, right?

So I tried to use another model (yolov5face) and saw that it is not implemented yet. Do you have plans to incorporate it?

Thanks!

Inference on PyTorch Backend does not work

Hi, I tested the inference code for predicting landmarks using the 300W pretrained weights like this:

checkpoint = "pipnet_resnet101_10x68x32x256_300w.pth"
torchlm.runtime.bind(faceboxesv2())
torchlm.runtime.bind(
  pipnet(backbone="resnet101", pretrained=True,  
         num_nb=10, num_lms=68, net_stride=32, input_size=256,
         meanface_type="300w", map_location="cpu",
            backbone_pretrained=True, checkpoint=checkpoint)
) # will auto download pretrained weights from latest release if pretrained=True

when I run the forward code:

 landmarks, bboxes = torchlm.runtime.forward(frame_bgr)

I got the following error:

File ~/miniconda3/envs/torchlm/lib/python3.9/site-packages/torchlm/runtime/_wrappers.py:120, in forward(image, extend, swapRB_before_face, swapRB_before_landmarks, **kwargs)
    105 def forward(
    106         image: np.ndarray,
    107         extend: float = 0.2,
    (...)
    110         **kwargs: Any  # params for face_det & landmarks_det
    111 ) -> Tuple[_Landmarks, _BBoxes]:
    112     """
    113     :param image: original input image, HWC, BGR/RGB
    114     :param extend: extend ratio for face cropping (1.+extend) before landmarks detection.
    (...)
    118     :return: landmarks (n,m,2) -> x,y; bboxes (n,5) -> x1,y1,x2,y2,score
    119     """
--> 120     return RuntimeWrapper.forward(
    121         image=image,
    122         extend=extend,
    123         swapRB_before_face=swapRB_before_face,
    124         swapRB_before_landmarks=swapRB_before_landmarks,
    125         **kwargs
    126     )

File ~/miniconda3/envs/torchlm/lib/python3.9/site-packages/torchlm/runtime/_wrappers.py:50, in RuntimeWrapper.forward(cls, image, extend, swapRB_before_face, swapRB_before_landmarks, **kwargs)
     48     bboxes = cls.face_base.apply_detecting(image_swapRB, **kwargs)  # (n,5) x1,y1,x2,y2,score
     49 else:
---> 50     bboxes = cls.face_base.apply_detecting(image, **kwargs)  # (n,5) x1,y1,x2,y2,score
     52 det_num = bboxes.shape[0]
     53 landmarks = []

File ~/miniconda3/envs/torchlm/lib/python3.9/site-packages/torch/autograd/grad_mode.py:28, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
     25 @functools.wraps(func)
     26 def decorate_context(*args, **kwargs):
     27     with self.__class__():
---> 28         return func(*args, **kwargs)

File ~/miniconda3/envs/torchlm/lib/python3.9/site-packages/torchlm/tools/_faceboxesv2.py:340, in FaceBoxesV2.apply_detecting(self, image, thresh, im_scale, top_k)
    338 # keep top-K before NMS
    339 order = scores.argsort()[::-1][:top_k * 3]
--> 340 boxes = boxes[order]
    341 scores = scores[order]
    343 # nms

IndexError: index 1 is out of bounds for axis 0 with size 1

Could you help me with this, thx!

Fine-tuning Custom Dataset Error

Hi!

I tried fine-tuning on my dataset (I followed the annotation format) but I am getting this error:

image

Here is the code I used:

"model = pipnet(backbone="resnet101", weights = True, num_nb=10, num_lms=68, net_stride=32,
input_size=256, meanface_type="300w", backbone_pretrained=True)

model.apply_freezing(backbone=True)
model.apply_training(
annotation_path=r"\data\TNF\test\train.txt", # or fine-tuning your custom data
num_epochs=10,
learning_rate=0.0001,
save_dir="./save/pipnet",
save_prefix="pipnet-300W-TNFfinetune-resnet101",
save_interval=1,
logging_interval=1,
device="cuda",
coordinates_already_normalized=False,
batch_size=16,
num_workers=4,
shuffle=True)
"

For reference, my folder structure looks like this

image

Can't use cuda in pipnet

I want to use cuda in pipnet, so I run the following code:

import torchlm
from torchlm.tools import faceboxesv2
from torchlm.models import pipnet
import cv2
image_path = '../rgb/image0/1.png'
image = cv2.imread(image_path)
torchlm.runtime.bind(faceboxesv2())
torchlm.runtime.bind(
    pipnet(backbone="resnet18", pretrained=True,
        num_nb=10, num_lms=98, net_stride=32, input_size=256,
        meanface_type="wflw", checkpoint=None, map_location="cuda")
)
torchlm.runtime.forward(image)

Then I get an error that says:

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

I know it is due to my image data still being on the CPU instead of the GPU, and that I need to move the data to the GPU. So I added a line of code like the following:

image = cv2.imread(image_path)
image = torch.tensor(image).cuda()

But now I get another error:

  File "C:\Home\Development\Anaconda\envs\DeepLearning\lib\site-packages\torchlm\tools\_faceboxesv2.py", line 305, in apply_detecting
    image_scale = cv2.resize(
cv2.error: OpenCV(4.5.5) :-1: error: (-5:Bad argument) in function 'resize'
> Overload resolution failed:
>  - src is not a numpy array, neither a scalar
>  - Expected Ptr<cv::UMat> for argument 'src'

It means I need to pass an ndarray instead of a torch tensor. But if I pass an ndarray, its data will stay on the CPU, and I will get the first error again.

What should I do? Has anyone hit the same error?

Will PipNet allow gradients to flow through?

I will just freeze the weights in the PipNet model itself, but I'd like to get gradients (to adjust aspects of the input image) from a loss created by the Landmarks vs some target Landmarks.

It looks like the code 'flows through' from input image to Landmark predictions - does this sound right? Or is there some strange discretisation step that would make the gradients nonsense?

Missing mobilenet_v2 model file

It's mentioned in the readme, but the releases page contains only pipnet with resnet18 and resnet101 backbones.
@DefTruth it would be great if you could share the mobilenet_v2-backboned pipnet, because even resnet18 is too heavy for a mobile application (44 MB).
Thank you for great work, very useful!

Training with 7 landmarks

Hi, can I train on my own dataset that has 7 landmarks per image, let's say feet landmarks? I tried doing that but it gives an error:
RuntimeError: Can not found any meanface landmarks settings !Please check and setup meanface carefully beforerunning PIPNet

Originally triggered by the following warning:
UserWarning: meanface_lms != self.num_lms, 98 != 7So, we will skip this setup for PIPNet meanface

Inference - Getting non-normalized landmarks

Hi! Thank you very much for creating this as I am also testing PIPNet.

Is there any way to generate the landmark outputs unnormalized? For reference, I am using the 68-point landmark format of 300W.

Thanks!

Questions about using this for armor plate detection

@DefTruth Hi, I am a sophomore at North University of China, on our RoboMaster team, responsible for using neural networks to detect armor plates for auto-aiming. Previously, the models I trained with the YOLO series produced bboxes that did not fit the armor plate contours well in real tests, which caused large errors in the subsequent PnP pose estimation. So I want to change the traditional YOLO dataset format to the normalized coordinates of the four corner points, and then I found your project; I think our application scenario has a lot in common with license plate detection. My current dataset format looks like this: 1 0.673029 0.373564 0.678429 0.426232 0.830433 0.401262 0.824525 0.351212, where the first number is the class id and the following eight numbers are the normalized coordinates of the four corner points of the armor plate. Right now I have implemented keypoint localization based on the yolov5-face repo (which some other teams also use); the PyTorch inference result looks like this:
image
I have not removed the fifth point yet, but when I use the export.py it provides to convert to ONNX I keep getting an error, and the author has not replied to my issue yet. So in the meantime I want to try your repo; I would be very grateful for any guidance you could provide.
