
EdgeAI-MMDetection

Notice 1

Notice 2

  • The scripts in this repository use the Model Optimization Tools in edgeai-modeloptimization. They are installed via pip from the requirements.txt file during setup, so there is no need to clone that repository unless you want to modify it.

This repository is an extension of the popular mmdetection open source repository for object detection training. While mmdetection focuses on a wide variety of models, typically at high complexity, we focus on models that are optimized for speed and accuracy so that they run efficiently on embedded devices. For this purpose, we have added a set of embedded-friendly model configurations and scripts - please see Usage for more information.

If the accuracy degradation with Post Training Quantization (PTQ) is higher than expected, this repository provides instructions and functionality required to do Quantization Aware Training (QAT).


Release Notes

See notes about recent changes/updates in this repository in release notes

Installation

These installation instructions were tested using Miniconda Python 3.7 on a Linux machine running Ubuntu 18.04.

Make sure that your Python version is indeed 3.7 or higher by typing:

python --version

Please clone and install EdgeAI-Torchvision as this repository uses several components from there - especially to define low complexity models and to do Quantization Aware Training (QAT) or Calibration.

After that, install this repository by running ./setup.sh

After installation, a Python package called "mmdet" will be listed when you run pip list

In order to use a local folder as a repository, your PYTHONPATH must start with a ":" or a ".:". Add the following to your .bashrc startup file (assuming you are using the bash shell):

export PYTHONPATH=:$PYTHONPATH

Close your current terminal and start a new one for the change in .bashrc to take effect, or run source ~/.bashrc after the update.

Get Started

Please see Usage for training and testing with this repository.

Object Detection Model Zoo

A complexity and accuracy report of several trained models is available in the Detection Model Zoo

Quantization

This tutorial explains more about quantization and how to do Quantization Aware Training (QAT) of detection models.
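
For background, the snippet below is a minimal sketch of what QAT looks like in generic eager-mode PyTorch (torch.quantization). It only illustrates the concept; it is not the EdgeAI-Torchvision (xnn) based flow this repository actually uses, and the TinyNet module, channel sizes and training loop are made up for the example - see the tutorial above for the actual procedure.

import torch
import torch.nn as nn
import torch.quantization as tq

class TinyNet(nn.Module):
    """Toy network used only to illustrate the QAT flow."""
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()        # quantizes the float input
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()    # returns a float output

    def forward(self, x):
        return self.dequant(self.relu(self.bn(self.conv(self.quant(x)))))

model = TinyNet().train()
model.qconfig = tq.get_default_qat_qconfig('fbgemm')
tq.prepare_qat(model, inplace=True)        # insert fake-quantization observers

# ... fine-tune for a few epochs with the usual training loop so the weights
#     adapt to the quantization noise ...

model.eval()
quantized_model = tq.convert(model)        # materialize the int8 model

The key idea is the same as in this repository's QAT: the network is fine-tuned with fake quantization in the graph, which recovers accuracy that PTQ alone may not.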

ONNX & Prototxt Export

Export of ONNX model (.onnx) and additional meta information (.prototxt) is supported. The .prototxt contains meta information specified by TIDL for object detectors.

The export of meta information is now supported for SSD and RetinaNet detectors.

For more information please see Usage
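
To give an idea of what the export involves, here is a minimal, generic torch.onnx.export sketch. The repository's own export scripts (see Usage) handle this for its detectors and additionally write the TIDL .prototxt; the model, file name, input size and opset below are placeholder assumptions, not the repository's actual export code.

import torch
import torchvision

# Placeholder model and input size; the real export is done by this
# repository's export scripts for its detector configurations.
model = torchvision.models.mobilenet_v2().eval()
dummy_input = torch.randn(1, 3, 512, 512)

torch.onnx.export(
    model, dummy_input, "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,          # example opset version
)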

Advanced documentation

Please take time to read through the documentation of the original mmdetection before attempting to use the extensions added in this repository.

The setup script setup.sh in this repository uses the commonly used settings. If your CUDA or Python version is different, or if some packages are missing from your system, this script can fail. In those scenarios, please refer to the installation instructions for the original mmdetection.

Also see documentation of MMDetection for the basic usage of original mmdetection.

Acknowledgement

This is an open source project that is contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback.

We hope the toolbox and benchmark serve the growing research community by providing a flexible toolkit to train existing detectors and to develop new ones.

License

Please see LICENSE file of this repository.

Citation

This package/toolbox is an extension of mmdetection (https://github.com/open-mmlab/mmdetection). If you use this repository or benchmark in your research or work, please cite the following:

@article{EdgeAI-MMDetection,
  title   = {{EdgeAI-MMDetection}: An Extension To Open MMLab Detection Toolbox and Benchmark},
  author  = {Texas Instruments EdgeAI Development Team, [email protected]},
  journal = {https://github.com/TexasInstruments/edgeai},
  year    = {2021}
}
@article{mmdetection,
  title   = {{MMDetection}: Open MMLab Detection Toolbox and Benchmark},
  author  = {Chen, Kai and Wang, Jiaqi and Pang, Jiangmiao and Cao, Yuhang and
             Xiong, Yu and Li, Xiaoxiao and Sun, Shuyang and Feng, Wansen and
             Liu, Ziwei and Xu, Jiarui and Zhang, Zheng and Cheng, Dazhi and
             Zhu, Chenchen and Cheng, Tianheng and Zhao, Qijie and Li, Buyu and
             Lu, Xin and Zhu, Rui and Wu, Yue and Dai, Jifeng and Wang, Jingdong
             and Shi, Jianping and Ouyang, Wanli and Loy, Chen Change and Lin, Dahua},
  journal = {arXiv preprint arXiv:1906.07155},
  year    = {2019}
}

References

[1] MMDetection: Open MMLab Detection Toolbox and Benchmark, https://arxiv.org/abs/1906.07155, Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, Zheng Zhang, Dazhi Cheng, Chenchen Zhu, Tianheng Cheng, Qijie Zhao, Buyu Li, Xin Lu, Rui Zhu, Yue Wu, Jifeng Dai, Jingdong Wang, Jianping Shi, Wanli Ouyang, Chen Change Loy, Dahua Lin

edgeai-mmdetection's Issues

AttributeError: module 'torchvision.edgeailite.xnn.model_surgery' has no attribute 'get_replacements_dict'

I'm getting the following error when I try to run ./run_detection_train.sh:

work_dir = './work_dirs/yolov3_regnet_bgr_lite'
gpu_ids = range(0, 1)

2022-01-03 09:13:58,990 - mmdet - INFO - Set random seed to 886029822, deterministic: False
2022-01-03 09:13:59,511 - mmdet - INFO - initialize RegNet with init_cfg {'type': 'Pretrained', 'checkpoint': 'open-mmlab://regnetx_1.6gf'}
2022-01-03 09:13:59,512 - mmcv - INFO - load model from: open-mmlab://regnetx_1.6gf
2022-01-03 09:13:59,512 - mmcv - INFO - load checkpoint from openmmlab path: open-mmlab://regnetx_1.6gf
2022-01-03 09:13:59,562 - mmcv - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: fc.weight, fc.bias

Traceback (most recent call last):
File "./scripts/train_detection_main.py", line 65, in
train_mmdet.main(args)
File "/home/ubuntu/edgeai-mmdetection/tools/train.py", line 172, in main
model = convert_to_lite_model(model, cfg)
File "/home/ubuntu/edgeai-mmdetection/mmdet/utils/model_surgery.py", line 38, in convert_to_lite_model
replacements_dict = copy.deepcopy(xnn.model_surgery.get_replacements_dict())
AttributeError: module 'torchvision.edgeailite.xnn.model_surgery' has no attribute 'get_replacements_dict'
Done.

QAT model configuration

Hi,

I would like to do model inference and PyTorch-to-ONNX conversion of a custom object detection model (not in mmdetection) after QAT. Could you please share sample code for this?

I have model.py before QAT, and as I understand it, this configuration gets changed after doing QAT. I would like to know how to change model.py after QAT, for inference and also for ONNX conversion.

regards,
Gina

QAT training backbone problem?

When I train YOLOv3 with the Darknet backbone, QAT training accuracy is OK. But when I replace the backbone with a network like the VGG below, floating-point training is OK while QAT training accuracy is very low.
@mathmanu
(backbone): VGG(
  (stage0): VGGBlock(
    (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activate): ReLU(inplace=True)
  )
  (stage1): Sequential(
    (0): VGGBlock(
      (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (1): VGGBlock(
      (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
  )
  (stage2): Sequential(
    (0): VGGBlock(
      (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (1): VGGBlock(
      (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (2): VGGBlock(
      (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (3): VGGBlock(
      (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
  )
  (stage3): Sequential(
    (0): VGGBlock(
      (conv): Conv2d(64, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (1): VGGBlock(
      (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (2): VGGBlock(
      (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (3): VGGBlock(
      (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (4): VGGBlock(
      (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (5): VGGBlock(
      (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (6): VGGBlock(
      (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
    (7): VGGBlock(
      (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
  )
  (stage4): Sequential(
    (0): VGGBlock(
      (conv): Conv2d(96, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (activate): ReLU(inplace=True)
    )
  )
)

@mathmanu

The results of the QAT model in .onnx and .pt are very different

When training is done in fp32, I convert the .pt model to .onnx and use each of them to predict on one picture; the results are very close.
But when I apply QAT to this model and do the same steps, the results are very different.
Is something wrong during the conversion process?
Is there anything extra to pay attention to when converting a QAT model to ONNX?
Thanks.
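
One way to narrow this down (a rough sketch under assumptions, not from this repository - the file names, the input size, and the idea that model.pt holds a complete serialized module are all hypothetical) is to feed the exact same tensor to the PyTorch model and the exported ONNX graph and compare the raw outputs. Preprocessing mismatches, or comparing a fake-quantized QAT module against a graph exported at a different stage, are common causes of large differences.

import numpy as np
import torch
import onnxruntime as ort

# Hypothetical file names and input size - adjust to the actual model.
x = torch.randn(1, 3, 416, 416)

pt_model = torch.load("model.pt", map_location="cpu")   # assumes a full nn.Module was saved
pt_model.eval()
with torch.no_grad():
    pt_out = pt_model(x).numpy()

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
onnx_out = sess.run(None, {input_name: x.numpy()})[0]

print("max abs diff:", np.abs(pt_out - onnx_out).max())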

Run mmdetection demo code and get incorrect results

I am new to mmdetection and TI edgeai-mmdetection.

After installing this repository, I tried to run some demo code from the original mmdetection repo to perform inference on a demo picture.

Unfortunately, the demo code below gets incorrect results on the demo picture. The bboxes are not in the right places and the classes are totally wrong.

Could you please tell me how to perform inference and get the correct bboxes with your edgeai-mmdetection repository?

The demo code is:
from mmdet.apis import init_detector, inference_detector
import mmcv
config_file = './configs/edgeailite/ssd/ssd_regnet_fpn_bgr_lite.py'
checkpoint_file = './checkpoints/ssd_regnetx-800mf_fpn_bgr_lite_512x512_20200919_checkpoint.pth'
model = init_detector(config_file, checkpoint_file, device='cuda:0')
img = 'demo/demo.jpg'
result = inference_detector(model, img)
model.show_result(img, result)
model.show_result(img, result, out_file='result.jpg')

BTW, after running ./run_detection_test.sh according to the usage guide, I could get the right result, with output like "mmdet - INFO - OrderedDict([('bbox_mAP', 0.328), ('bbox_mAP_50', 0.528)..........".

Which edgeAI-torchvision version/tag to download for which edgeAI-mmdetection version/tag

I'm trying to create a Docker container with both packages installed. Everything seems to install fine, and I can import all modules without errors. But when I try to run QAT or calibration on the YOLOX lite model I trained on a custom dataset, I get errors saying that attributes of functions called by edgeai-mmdetection from edgeAI-torchvision do not exist. So I'm currently unable to figure out which edgeAI-torchvision version/tag to pull for the latest edgeai-mmdetection commit.

What is the difference between quantize = 'training' and quantize = 'calibration'?

What is the difference between quantize = 'training' and quantize = 'calibration'? And what does calibration do?

I saw the train.py code below:

    quantize = cfg.get('quantize', False)

    if quantize == 'calibration':
        optimizer_config = XMMDetNoOptimizerHook()
    elif fp16_cfg is not None:
        optimizer_config = Fp16OptimizerHook(
            **cfg.optimizer_config, **fp16_cfg, distributed=distributed)
    elif distributed and 'type' not in cfg.optimizer_config:
        optimizer_config = OptimizerHook(**cfg.optimizer_config)
    else:
        optimizer_config = cfg.optimizer_config

I thought it relates to:
B. Training for Quantization
C. Quantization Aware Training

but I'm not sure if it really does that.

RuntimeError: NCCL communicator was aborted on rank 1

Thanks for your error report and we appreciate it a lot.

Checklist

  1. I have searched related issues but cannot get the expected help.
  2. I have read the FAQ documentation but cannot get the expected help.
  3. The bug has not been fixed in the latest version.

Describe the bug
A clear and concise description of what the bug is.

Reproduction

  1. What command or script did you run?

     ./run_detection_train.sh

  2. Did you make any modifications on the code or config? Did you understand what you have modified?

     No.

  3. What dataset did you use?

     My own dataset (like bdd100k), about 112k images in the training dataset.

Thanks for your nice work. Now we have some problems and need your help. I start training with my own dataset. When the training ends at one epoch, the following error is reported (see the attachment for the specific log):

[attached image of the error]

20220112_010819.log

We look forward to your reply! Thanks a lot!

run_detection_export.sh, ImportError: cannot import name 'build_model_from_cfg' from 'xmmdet.core'

PRIME-Z390-A:~/work/edgeai-mmdetection-master$ ./run_detection_export.sh
/home/haitao/work/test_venv/lib/python3.8/site-packages/mmcv/utils/registry.py:251: UserWarning: The old API of register_module(module, force=False) is deprecated and will be removed, please use the new API register_module(name=None, force=False, module=None) instead.
warnings.warn(
Traceback (most recent call last):
File "./scripts/export_pytorch2onnx.py", line 35, in
from xmmdet.tools import pytorch2onnx
File "/home/haitao/work/edgeai-mmdetection-master/xmmdet/tools/pytorch2onnx.py", line 11, in
from xmmdet.core import (build_model_from_cfg, generate_inputs_and_wrap_model,
ImportError: cannot import name 'build_model_from_cfg' from 'xmmdet.core' (/home/haitao/work/edgeai-mmdetection-master/xmmdet/core/init.py)
Done.

SSD import error

Hi,
I trained a ssd_regnetx_fpn_bgr_lite model and encountered some problems while importing it to a .bin.

  1. I got 4 files after running the export script: model.onnx, model.prototxt, model-proto.onnx and model-proto.prototxt. Is this correct?
  2. I got "[libprotobuf ERROR google/protobuf/text_format.cc:309] Error parsing text-format tidl_meta_arch.TIDLMetaArch: 16:9: Non-repeated field "output" is specified multiple times." when importing the ONNX model; the auto-generated output fields are output: "dets" and output: "labels".

Thanks.
