YOLOU: United, Study and easier to Deploy

We created YOLOU to better learn the algorithms of the YOLO series and to pay tribute to our predecessors.

​ Here "U" means United, mainly to gather more algorithms about the YOLO series through this project, so that friends can better learn the knowledge of object detection. At the same time, in order to better apply AI technology, YOLOU will also join The corresponding Deploy technology will accelerate the implementation of the algorithms we have learned and realize the value.

At present, YOLOU mainly includes the following YOLO-series algorithms:

Anchor-based: YOLOv3, YOLOv4, YOLOv5, YOLOv5-Lite, YOLOv7, YOLOv5-TPH, YOLO-Fastest v2, YOLO-LF, YOLO-SA, YOLOR, YOLOv5-SPD

Anchor-Free: YOLOv6-v1, YOLOv6-v2, YOLOX, YOLOE, YOLOX-Lite, FastestDet

Face-Detection: YOLOv5-Face, YOLOFace-v2

Segmentation: YOLOv5-Segment

KeyPoint: YOLOv7-Keypoint

Classification: ResNet, DarkNet, ...

Comparison of ablation experiment results
| Model | Size (pixels) | mAP@0.5 | mAP@0.5:0.95 | Params (M) | GFLOPs | TensorRT-FP32 (b16) ms/fps | TensorRT-FP16 (b16) ms/fps |
|---|---|---|---|---|---|---|---|
| YOLOv5n | 640 | 45.7 | 28.0 | 1.9 | 4.5 | 0.95/1054.64 | 0.61/1631.64 |
| YOLOv5s | 640 | 56.8 | 37.4 | 7.2 | 16.5 | 1.7/586.8 | 0.84/1186.42 |
| YOLOv5m | 640 | 64.1 | 45.4 | 21.2 | 49.0 | 4.03/248.12 | 1.42/704.20 |
| YOLOv5l | 640 | 67.3 | 49.0 | 46.5 | 109.1 | | |
| YOLOv5x | 640 | 68.9 | 50.7 | 86.7 | 205.7 | | |
| YOLOv6-T | 640 | | | | | | |
| YOLOv6-n | 640 | | | | | | |
| YOLOv6 | 640 | 58.4 | 39.8 | 20.4 | 28.8 | 3.06/326.93 | 1.27/789.51 |
| YOLOv7 | 640 | 69.7 | 51.4 | 37.6 | 53.1 | 8.18/113.88 | 1.97/507.55 |
| YOLOv7-X | 640 | 71.2 | 53.7 | 71.3 | 95.1 | | |
| YOLOv7-W6 | 1280 | 72.6 | 54.9 | | | | |
| YOLOv7-E6 | 1280 | 73.5 | 56.0 | | | | |
| YOLOv7-D6 | 1280 | 74.0 | 56.6 | | | | |
| YOLOv7-E6E | 1280 | 74.4 | 56.8 | | | | |
| YOLOX-s | 640 | 59.0 | 39.2 | 8.1 | 10.8 | 2.11/473.78 | 0.89/1127.67 |
| YOLOX-m | 640 | 63.8 | 44.5 | 23.3 | 31.2 | 4.94/202.43 | 1.58/632.48 |
| YOLOX-l | 640 | | | 54.1 | 77.7 | | |
| YOLOX-x | 640 | | | 104.5 | 156.2 | | |
| v5-Lite-e | 320 | 35.1 | | 0.78 | 0.73 | 0.55/1816.10 | 0.49/2048.47 |
| v5-Lite-s | 416 | 42.0 | 25.2 | 1.64 | 1.66 | 0.72/1384.76 | 0.64/1567.36 |
| v5-Lite-c | 512 | 50.9 | 32.5 | 4.57 | 5.92 | 1.18/850.03 | 0.80/1244.20 |
| v5-Lite-g | 640 | 57.6 | 39.1 | 5.39 | 15.6 | 1.85/540.90 | 1.09/916.69 |
| X-Lite-e | 320 | 36.4 | 21.2 | 2.53 | 1.58 | 0.65/1547.58 | 0.46/2156.38 |
| X-Lite-s | 416 | Training… | Training… | 3.36 | 2.90 | | |
| X-Lite-c | 512 | Training… | Training… | 6.25 | 5.92 | | |
| X-Lite-g | 640 | 58.3 | 40.7 | 7.30 | 12.91 | 2.15/465.19 | 1.01/990.69 |

Latency/throughput columns are measured with TensorRT at batch size 16 (ms per image / fps; the two numbers are reciprocals). Blank cells were not reported; "Training…" marks models still being trained.

How to use

Install

git clone https://github.com/jizhishutong/YOLOU
cd YOLOU
pip install -r requirements.txt

Training

python train_det.py --mode yolov6 --data coco.yaml --cfg yolov6.yaml --weights yolov6.pt --batch-size 32
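Other included detectors are trained the same way by switching --mode, --cfg, and --weights. For example (a hedged sketch: the yolox cfg and weight filenames below are assumptions, check the repo's config and weight directories for the actual names):

python train_det.py --mode yolox --data coco.yaml --cfg yolox-s.yaml --weights yolox-s.pt --batch-size 32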

Detect

python detect_det.py --source 0  # webcam
                            file.jpg  # image 
                            file.mp4  # video
                            path/  # directory
                            path/*.jpg  # glob
                            'https://youtu.be/NUsoVlDFqZg'  # YouTube
                            'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream

Re-parameterization

See reparameterization.ipynb
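As background for what the notebook does: re-parameterization folds training-time branches (e.g. BatchNorm, parallel convolutions) into a single convolution for inference. A minimal, generic sketch of the conv+BN folding step (illustrative only, not the notebook's code):

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold BatchNorm statistics into the preceding conv (inference only)."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation,
                      conv.groups, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)  # gamma / sigma
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias
    return fused
```

The fused layer computes exactly what conv followed by bn computed in eval mode, but in one operation, which is the essence of deploy-time re-parameterization.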

Pose estimation

yolov7-w6-pose.pt

See keypoint.ipynb.

Detect Inference Result

(example detection images)

Segmentation Inference Result

(example segmentation images)

KeyPoint Inference Result

(example keypoint images)

Face-Detect Inference Result

(example face-detection images)

DataSet
train: ../coco/images/train2017/
val: ../coco/images/val2017/
├── images            # xx.jpg example
│   ├── train2017        
│   │   ├── 000001.jpg
│   │   ├── 000002.jpg
│   │   └── 000003.jpg
│   └── val2017         
│       ├── 100001.jpg
│       ├── 100002.jpg
│       └── 100003.jpg
└── labels             # xx.txt example      
    ├── train2017       
    │   ├── 000001.txt
    │   ├── 000002.txt
    │   └── 000003.txt
    └── val2017         
        ├── 100001.txt
        ├── 100002.txt
        └── 100003.txt
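The labels follow the standard YOLO text format: one object per line as class_id cx cy w h, with box center and size normalized to [0, 1]. A minimal sketch of reading one label file (the sample values are hypothetical):

```python
# Hypothetical contents of a YOLO-format label file (e.g. 000001.txt):
#   45 0.479492 0.688771 0.955609 0.595500
# i.e. class_id, x_center, y_center, width, height (all normalized).

def read_yolo_labels(path):
    boxes = []
    with open(path) as f:
        for line in f:
            cls, cx, cy, w, h = line.split()
            boxes.append((int(cls), float(cx), float(cy), float(w), float(h)))
    return boxes
```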

Export ONNX

python export.py --weights ./weights/yolov6/yolov6s.pt --include onnx

To make deployment easier, every model included in YOLOU has been adapted so that the exported ONNX files share a consistent format and output layout; as a result, a single set of pre- and post-processing code works across all of them.
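As an illustration of what such unified post-processing can look like, here is a minimal onnxruntime sketch. It assumes a single output of shape (1, N, 5 + num_classes) with (cx, cy, w, h, obj, class probs...) rows, which is how YOLOv5-style heads are commonly exported; verify against the actual YOLOU export before relying on it:

```python
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov6s.onnx", providers=["CPUExecutionProvider"])
img = cv2.imread("file.jpg")
blob = cv2.resize(img, (640, 640))[:, :, ::-1].transpose(2, 0, 1)  # BGR->RGB, HWC->CHW
blob = np.ascontiguousarray(blob, dtype=np.float32)[None] / 255.0  # add batch dim, scale

pred = session.run(None, {session.get_inputs()[0].name: blob})[0][0]  # (N, 5+nc)
scores = pred[:, 4] * pred[:, 5:].max(1)           # objectness * best class prob
keep = scores > 0.25
boxes, scores = pred[keep, :4], scores[keep]       # (cx, cy, w, h)
xywh = np.column_stack([boxes[:, 0] - boxes[:, 2] / 2,   # to top-left x
                        boxes[:, 1] - boxes[:, 3] / 2,   # to top-left y
                        boxes[:, 2], boxes[:, 3]])
idx = cv2.dnn.NMSBoxes(xywh.tolist(), scores.tolist(), 0.25, 0.45)
print(f"{len(idx)} detections after NMS")
```

Because the exported outputs share one layout, the same decode-and-NMS block serves every model in the table above.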

Per-model illustrations of the exported ONNX results (images): YOLOv5, YOLOv6, YOLOv7, YOLOX, YOLOv5-Lite, YOLOX-Lite, YOLO-Fastest v2.

Acknowledgements

https://github.com/ultralytics/yolov5

https://github.com/WongKinYiu/yolor

https://github.com/ppogg/YOLOv5-Lite

https://github.com/WongKinYiu/yolov7

https://github.com/meituan/YOLOv6

https://github.com/ultralytics/yolov3

https://github.com/Megvii-BaseDetection/YOLOX

https://github.com/WongKinYiu/ScaledYOLOv4

https://github.com/WongKinYiu/PyTorch_YOLOv4

https://github.com/shouxieai/tensorRT_Pro

https://github.com/Tencent/ncnn

https://github.com/Gumpest/YOLOv5-Multibackbone-Compression

https://github.com/positive666/yolov5_research

https://github.com/cmdbug/YOLOv5_NCNN

https://github.com/OAID/Tengine

https://github.com/meituan/YOLOv6/releases/tag/0.2.0

https://github.com/PaddlePaddle/PaddleDetection

Citing YOLOU

If you use YOLOU in your research, please cite our work and give it a star ⭐:

@misc{yolou2022,
  title  = {YOLOU: United, Study and easier to Deploy},
  author = {ChaucerG},
  year   = {2022}
}

For more information, please follow the QR code below:

(QR code image)


YOLOU's Issues

A complete OpenCV deployment of YOLOv3/v4/v5/v6/v7, YOLOX, and NanoDet object detection, with both C++ and Python versions of the program

yolov6 and yolox

When will other pretrained models for YOLOX and additional YAML configs for YOLOv6 be released?

yolov5-seg

Thanks for open-sourcing this. I would like to use yolov5-seg for instance segmentation; what should the data format look like?

MIT License

Can I use your source for commercial purposes?

Inference with ONNX

Hi jizhishutong, do you have a script to run inference with ONNX, just to check whether the model works normally? Thanks.

yolov7 training error

When training with yolov7, the following error occurred:
RuntimeError: result type Float can't be cast to the desired output type __int64
I have not yet found a yolov7-related solution online. My environment is pytorch==1.12, cuda==11.3, python==3.7.
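For reference, the fix most often reported for this error in YOLOv5/YOLOv7-style target-building code is to make the gain tensor integer-valued; a sketch under that assumption (not verified against YOLOU's utils/loss.py):

```python
# Sketch of the common community fix in build_targets()-style code:
# make `gain` integer so gj.clamp_(0, gain[3] - 1) no longer mixes dtypes,
# avoiding "result type Float can't be cast to the desired output type __int64".
gain = torch.ones(7, device=targets.device).long()
```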

Abnormal yolov7 inference accuracy

When running yolov7 inference with the same model and the same image, the results differ greatly from the official inference code: a confidence of 0.5 here versus 0.91 from the official code.

(comparison screenshots)

Some issues!

File "/home/pc/dawn_work_space/yolou/YOLOU/models/yolox.py", line 210, in get_output_and_grid
    reg_box[..., :2] = (reg_box[..., :2] + grid) * stride
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!
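A generic pattern for this class of multi-GPU error is to move the cached grid onto the prediction's device before the arithmetic (a sketch, not necessarily the repo's actual fix):

```python
# Sketch: keep grid on the same device as reg_box before combining them.
grid = grid.to(reg_box.device)
reg_box[..., :2] = (reg_box[..., :2] + grid) * stride
```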

The yolov6s.pt file cannot be loaded; an error is raised

export: weights=['./weights/yolov6s.pt'], imgsz=[640, 640], batch_size=1, device=cpu, half=False, inplace=False, train=False, optimize=False, dynamic=True, simplify=True, opset=12, include=['onnx']
YOLOv5 2022-8-25 Python-3.10.5 torch-1.12.0+cpu CPU

Traceback (most recent call last):
  File "E:\Code\YOLOU-main\export.py", line 209, in <module>
    main(opt)
  File "E:\Code\YOLOU-main\export.py", line 204, in main
    run(**vars(opt))
  File "E:\Code\pythonCode\changeable_region_detection-main\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "E:\Code\YOLOU-main\export.py", line 133, in run
    model = attempt_load(weights, device=device, inplace=True, fuse=True)  # load FP32 model
  File "E:\Code\YOLOU-main\models\experimental.py", line 228, in attempt_load
    ckpt = torch.load(attempt_download(w), map_location='cpu')  # load
  File "E:\Code\pythonCode\changeable_region_detection-main\venv\lib\site-packages\torch\serialization.py", line 712, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "E:\Code\pythonCode\changeable_region_detection-main\venv\lib\site-packages\torch\serialization.py", line 1049, in _load
    result = unpickler.load()
  File "E:\Code\pythonCode\changeable_region_detection-main\venv\lib\site-packages\torch\serialization.py", line 1042, in find_class
    return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'yolov6'

Process finished with exit code 1.

Repvgg convert in export

Hi jizhishutong, I checked the exported ONNX model of yolov6 and found that some rep modules are not re-parameterized. I would like to know how to re-parameterize when exporting, thanks!

Argument mismatch when running val_det.py inference

When I try to run val_det.py, it reports that forward() received an unexpected argument val. Checking the code, however, DetectMultiBackend() in models/common.py shows that its forward() has no val parameter at all.

So this is probably a small bug, or code that was not updated in time. Either adding a val parameter to forward() in common.py or dropping val=True from the inference call fixes the problem.
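A sketch of the first workaround described above; the surrounding signature is assumed from upstream YOLOv5 and may differ in this repo:

```python
# models/common.py, DetectMultiBackend (sketch): accept the extra keyword.
def forward(self, im, augment=False, visualize=False, val=False):
    ...  # existing body unchanged; `val` is simply tolerated
```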

Mosaic augmentation does not seem to be switched off

if mode == 'yolox':
    if epoch == epochs - no_aug_epochs:
        LOGGER.info("--->No mosaic aug now!")
        dataset.mosaic = False  # close mosaic
        LOGGER.info("--->Add additional L1 loss now!")

When I stepped through this in a debugger, dataset.mosaic was still True while the run continued.
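One generic way to chase down the reporter's observation is to confirm that the flag is flipped on the very dataset object the dataloader iterates; a hypothetical check (train_loader is an assumed name):

```python
# Hypothetical sanity check: the loader may wrap a different dataset instance.
assert train_loader.dataset is dataset, "mosaic flag set on the wrong object"
dataset.mosaic = False
print(train_loader.dataset.mosaic)  # expect False if the flag propagated
```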

Pretrained weights

How can I get pretrained weights for YOLOX, or will the model train from scratch? Thank you!

yolov7 training error

Command: python train.py --mode yolov7 --use_aux True --data garbage.yaml --cfg yolov7.yaml --weights weights/yolov7/yolov7.pt --batch-size 8

The following error occurred:

Logging results to runs/train/exp14
Starting training for 100 epochs...

     Epoch   gpu_mem       box       obj       cls    labels  img_size
  0%| | 0/5 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 772, in <module>
    main(opt)
  File "train.py", line 670, in main
    train(opt.hyp, opt, device, callbacks)
  File "train.py", line 448, in train
    loss, loss_items = compute_loss(pred, targets.to(device), imgs)  # init loss class
  File "YOLOU/utils/loss.py", line 992, in __call__
    bs_aux, as_aux_, gjs_aux, gis_aux, targets_aux, anchors_aux = self.build_targets2(p[:self.nl], targets, imgs)
  File "YOLOU/utils/loss.py", line 1202, in build_targets2
    indices, anch = self.find_5_positive(p, targets)
  File "YOLOU/utils/loss.py", line 1388, in find_5_positive
    indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1)))  # image, anchor, grid indices
RuntimeError: result type Float can't be cast to the desired output type long int

(The same Float/long casting error as in the earlier "yolov7 training error" issue above.)
