Comments (20)
Hello @mbufi, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook, Docker Image, and Google Cloud Quickstart Guide for example environments.
If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.
If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:
- Cloud-based AI systems operating on hundreds of HD video streams in realtime.
- Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
- Custom data training, hyperparameter evolution, and model exportation to any destination.
For more information please visit https://www.ultralytics.com.
from yolov5.
@Jacobsolawetz thanks! I've been meaning to get this done for a while now. To be honest, manually crunching anchors and then slotting them back into a model file is a pretty complicated task that can go wrong in a lot of places, so automating the process should remove those failure points.
And of course, I have a feeling poor anchor-data fits may be one of the primary reasons for people seeing x results in a paper, but then turning around to find y results on their custom dataset (where y << x).
Hopefully this will help bridge that gap in a painless way.
from yolov5.
@mbufi --img (which is short for --img-size) accepts two values, the train and test sizes. If you supply one size it is used for both, so for example:
python train.py --img 640
means that the mosaic dataloader pulls up the selected image along with 3 random images, resizes them all to 640, joins all 4 at the seams into a 1280x1280 mosaic, augments them, and then crops a center 640x640 area for placement as 1 image into the batch.
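The mosaic composition described above can be sketched roughly like this (a simplified illustration only, not the actual YOLOv5 dataloader; image loading, random offsets, and augmentation are omitted):

```python
import numpy as np

def simple_mosaic(images, s=640):
    """Toy mosaic sketch: paste 4 s-by-s images into a 2s-by-2s canvas,
    then crop a centered s-by-s window. The real loader places images at
    random offsets and augments before cropping."""
    canvas = np.zeros((2 * s, 2 * s, 3), dtype=np.uint8)
    # quadrant corners: top-left, top-right, bottom-left, bottom-right
    offsets = [(0, 0), (0, s), (s, 0), (s, s)]
    for img, (y, x) in zip(images, offsets):
        canvas[y:y + s, x:x + s] = img
    # center crop of size s x s, containing a piece of each quadrant
    half = s // 2
    return canvas[half:half + s, half:half + s]

imgs = [np.full((640, 640, 3), i, dtype=np.uint8) for i in range(4)]
out = simple_mosaic(imgs)
print(out.shape)  # (640, 640, 3)
```

The centered crop is what makes each batch image the requested 640x640 while still containing content from all four source images.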
Training at native resolution will always produce the best results if your hardware/budget allows for it. If your object sizes differ significantly from the default anchors (as measured in pixels at your training --img), however, you would need to modify the anchors as well for best results.
Training and inference should be paired at the same resolution for best results. If you plan on inference at 1980, train at 1980. If you plan on inference at 1024, train at that size. Just remember the anchors do not change size, they are fixed in pixel-space, so modify as appropriate if necessary.
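Since anchors are fixed in pixel-space at the training resolution, a quick sanity check when changing --img is to rescale them proportionally. A minimal sketch (the anchor values shown are the stock YOLOv5 P3 defaults at 640 px, used purely for illustration):

```python
def scale_anchors(anchors, old_img, new_img):
    """Rescale anchor (w, h) pairs, defined in pixels at old_img resolution,
    for a new training resolution."""
    r = new_img / old_img
    return [(round(w * r), round(h * r)) for w, h in anchors]

# Stock P3/8 anchors defined at --img 640, rescaled for --img 1280
p3 = [(10, 13), (16, 30), (33, 23)]
print(scale_anchors(p3, 640, 1280))  # [(20, 26), (32, 60), (66, 46)]
```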
We offer a hybrid kmeans-genetic evolution algorithm for anchor computation:
Lines 657 to 662 in ad71d2d
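The idea behind a hybrid kmeans-genetic approach can be sketched as: cluster the label width/heights with kmeans for an initial anchor set, then apply random mutations and keep any that improve a best-anchor-ratio fitness. This is a simplified toy version, not the repo's implementation; the fitness metric and mutation parameters here are illustrative assumptions:

```python
import numpy as np

def fitness(wh, anchors, thr=4.0):
    """Mean best-anchor-ratio metric: for each box, score the best-matching
    anchor, counting only matches within a w/h ratio of `thr`."""
    r = wh[:, None] / anchors[None]         # (n, k, 2) per-pair w and h ratios
    x = np.minimum(r, 1 / r).min(2)         # worst of the two ratios per pair
    best = x.max(1)                         # best anchor score per box
    return (best * (best > 1 / thr)).mean()

def kmean_anchors_sketch(wh, n=9, gen=300, seed=0):
    """Toy kmeans init + genetic evolution over anchor shapes.
    wh: (N, 2) array of label widths/heights in pixels."""
    rng = np.random.default_rng(seed)
    # crude kmeans: random init, a few Lloyd iterations
    anchors = wh[rng.choice(len(wh), n, replace=False)].astype(float)
    for _ in range(10):
        d = ((wh[:, None] - anchors[None]) ** 2).sum(2)
        idx = d.argmin(1)
        for j in range(n):
            if (idx == j).any():
                anchors[j] = wh[idx == j].mean(0)
    # genetic step: keep mutations that improve fitness
    f = fitness(wh, anchors)
    for _ in range(gen):
        mut = anchors * rng.normal(1, 0.1, anchors.shape).clip(0.3, 3.0)
        fm = fitness(wh, mut)
        if fm > f:
            f, anchors = fm, mut
    return anchors[anchors.prod(1).argsort()]  # sort by area, small to large
```

The genetic step matters because kmeans minimizes squared distance, not the ratio-based matching metric the detector actually uses.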
from yolov5.
All,
Kmeans has been updated, and AutoAnchor is now implemented. This means anchors are analyzed automatically and updated as necessary. No action is required on the part of the user; this is the new default behavior. Simply train normally as before to get this.
git pull or clone again to receive this update.
from yolov5.
@glenn-jocher Okay. Great. Thanks for all your help!
from yolov5.
@glenn-jocher Great! This all makes sense now :) Thank you so much for that great description.
With that said:
- That is super interesting. To make sure I understand: I have to run kmean_anchors() separately before training and add the results to my .yaml, correct? How does one acquire the coco128.txt?
- I am not 100% finished reading through --evolve, but I fail to run it. Is this a bug? Or am I not using it correctly? I get the following error when using it:
Traceback (most recent call last):
  File "train.py", line 440, in <module>
    results = train(hyp.copy())
  File "train.py", line 201, in train
    tb_writer.add_histogram('classes', c, 0)
AttributeError: 'NoneType' object has no attribute 'add_histogram'
Thank you again for your dedication!
from yolov5.
@mbufi you can optionally run kmean_anchors() if you feel your objects are not similar in size to the default anchors. You would do this before training, and then manually place the final generation of evolved anchors into your model.yaml file here:
Lines 6 to 11 in ad71d2d
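For reference, the anchors section of a YOLOv5 model.yaml looks roughly like this (the values shown are the stock defaults at 640 px; replace them with your evolved anchors):

```yaml
# anchors: (w, h) pairs in pixels at the training --img, one row per output layer
anchors:
  - [10,13, 16,30, 33,23]       # P3/8
  - [30,61, 62,45, 59,119]      # P4/16
  - [116,90, 156,198, 373,326]  # P5/32
```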
We have not tried to use --evolve in this repo yet, so I can't speak for its status. In any case, this is a much more advanced offline feature (it is not part of training) which you would only try to run if default training is not producing results that are acceptable to you. It requires significant time and resources to produce results.
from yolov5.
@glenn-jocher Awesome. That's what I figured... In the example in the code, where did you get the coco128.txt? What does that text file represent? Can I use the .yaml for this instead?
from yolov5.
@mbufi there is no text file like this. You can create a custom dataset using coco128.yaml as a template:
https://docs.ultralytics.com/yolov5/tutorials/train_custom_data#1-create-datasetyaml
from yolov5.
@glenn-jocher Yes, correct. I have my own customdata.yaml
The problem I am getting is using the kmeans() algo with my yaml. I know the yaml works because I have trained my own custom model already.
I am in the process of generating new anchors:
Python 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from utils.utils import *
>>> _ = kmean_anchors(path='/home/ai/yolov5/data/custom_data.yaml', img_size=(1280,960))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ai/yolov5/utils/utils.py", line 698, in kmean_anchors
    dataset = LoadImagesAndLabels(path, augment=True, rect=True)
  File "/home/ai/yolov5/utils/datasets.py", line 277, in __init__
    assert n > 0, 'No images found in %s. See %s' % (path, help_url)
AssertionError: No images found in /home/ai/yolov5/data/camshaft.yaml. See https://docs.ultralytics.com/yolov5/tutorials/train_custom_data
I even tried to run it with the coco128.yaml and the images, and it still gives me the same error.
For reference, from utils.py:
def kmean_anchors(path='./data/coco128.txt', n=9, img_size=(640, 640), thr=0.20, gen=1000):
from yolov5.
@mbufi yes, this is possible since we have not actually updated this function for yolov5 yet. We will try to update it next week. In the meantime you may simply try to pass the directory of your training images as shown in the yaml:
Line 11 in ad71d2d
TODO: Update kmeans_anchors() for v5
from yolov5.
Passing the directory directly worked for me:
kmean_anchors('./train/images', n = 9, img_size=[416,416], thr=4.0, gen=1000)
from yolov5.
@Jacobsolawetz yes, it works. I believe the latest commit allows you to pass the .yaml directly.
Do you have a good understanding of the threshold with regard to small objects? I see you are using 4.0. Why is that?
from yolov5.
@glenn-jocher very nice
from yolov5.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
from yolov5.
@glenn-jocher May I know if the anchor data gets saved into the yaml file after AutoAnchor runs? I need to know the anchors being used for further output processing.
from yolov5.
@foochuanyue You may want to read the autoanchor output, which answers your question.
from yolov5.
@glenn-jocher ok! thanks!
from yolov5.
How do we resize the input video or reduce the fps in the input video?
from yolov5.
@SureshbabuAkash1999 set inference size with the --img argument:
python detect.py --img 1280
from yolov5.