Comments (4)
👋 Hello @MiNaMisan, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Requirements
Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
- Notebooks with free GPU:
- Google Cloud Deep Learning VM. See GCP Quickstart Guide
- Amazon Deep Learning AMI. See AWS Quickstart Guide
- Docker Image. See Docker Quickstart Guide
Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
Introducing YOLOv8 🚀
We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!
Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.
Check out our YOLOv8 Docs for details and get started with:
pip install ultralytics
from yolov5.
@MiNaMisan hi there! 👋
Thank you for providing detailed information and visuals. The sudden increase in mAP you observed could be due to several factors:
- **Randomness in Training:** Neural network training involves stochastic processes, and sometimes random initialization or data shuffling can lead to unexpected spikes in performance.
- **Learning Rate and Hyperparameters:** As you suspected, the learning rate or other hyperparameters might have played a role. You can experiment with different learning rates or use a learning rate scheduler to see if it stabilizes the training process.
- **Data Augmentation:** Variations in data augmentation can sometimes lead to sudden improvements. Ensure that your augmentation settings are consistent across all training runs.
- **Batch Size:** Changing the batch size can affect the training dynamics. Larger batch sizes generally provide more stable gradients but may require adjustments to the learning rate.
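The batch-size point above is often handled with the linear scaling rule, a common heuristic (the base values here are illustrative; YOLOv5's actual defaults live in its hyp YAML files):

```python
def scaled_lr(base_lr: float, base_batch: int, batch: int) -> float:
    """Linear scaling rule: grow the learning rate in proportion to the
    batch size so the effective gradient-update magnitude stays similar."""
    return base_lr * batch / base_batch

# Doubling the batch size from 16 to 32 doubles the learning rate.
print(scaled_lr(0.01, 16, 32))  # -> 0.02
```

This is only a heuristic; very large batches usually also call for a warmup period.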
To identify the specific reason, you can:
- **Review Training Logs:** Check for any anomalies or patterns in the training logs around the epochs where the spike occurred.
- **Hyperparameter Tuning:** Use tools like Hyperparameter Evolution to optimize your hyperparameters systematically.
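Hyperparameter Evolution is a genetic-style search: mutate a hyperparameter set, train, and keep the fittest. A toy sketch of that loop (the fitness function, mutation scale, and single `lr0` key here are made up for illustration, not YOLOv5's actual implementation):

```python
import random

def evolve(hyp, fitness, generations=50, sigma=0.2, seed=0):
    """Toy genetic search: mutate each hyperparameter by a multiplicative
    Gaussian factor and keep the best-scoring candidate seen so far."""
    rng = random.Random(seed)
    best, best_fit = dict(hyp), fitness(hyp)
    for _ in range(generations):
        cand = {k: v * (1 + rng.gauss(0, sigma)) for k, v in best.items()}
        f = fitness(cand)
        if f > best_fit:
            best, best_fit = cand, f
    return best, best_fit

# Hypothetical fitness: best when lr0 is near 0.01.
fit = lambda h: -abs(h["lr0"] - 0.01)
best, score = evolve({"lr0": 0.05}, fit)
```

YOLOv5's real evolution scores candidates with a weighted mAP fitness over full training runs and clips mutations to per-hyperparameter bounds, but the keep-the-fittest loop is the same idea.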
For more detailed guidance on achieving the best training results, you can refer to our Tips for Best Training Results page.
Good luck with your training! If you have any further questions, feel free to ask.
@glenn-jocher thanks for replying!
I'm trying to solve this problem by adjusting the hyperparameters now.
Please allow me to ask one more question:
> Review Training Logs: Check for any anomalies or patterns in the training logs around the epochs where the spike occurred.
I've used TensorBoard to check for any suspicious parameters during training, but I can only monitor a limited set of them: train obj/box/cls loss, val obj/box/cls loss, the precision, recall, and mAP0.5 metrics, as well as lr0, lr1, and lr2.
If I want to see more parameters, such as total loss or variable updates for each epoch, is this possible? Also, are there any useful tools that can help me specify these parameters?
Thanks a lot!
Hi @MiNaMisan,
Great to hear you're diving into hyperparameter adjustments!
Regarding your question about monitoring more parameters in TensorBoard, YOLOv5's integration with TensorBoard primarily focuses on the key metrics you've listed. If you're looking to track additional details like total loss or specific variable updates per epoch, you might need to modify the logging code in the training script.
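If all you need is a combined value, the total loss is just a weighted sum of the component losses TensorBoard already shows, so you can compute it offline from your logs. A minimal sketch (the gain values here are illustrative; check your hyp file for the actual box/obj/cls gains used in your run):

```python
def total_loss(box: float, obj: float, cls: float,
               w_box: float = 0.05, w_obj: float = 1.0, w_cls: float = 0.5) -> float:
    """Weighted sum of the component losses (weights are illustrative)."""
    return w_box * box + w_obj * obj + w_cls * cls

# Per-epoch component values read from your training logs:
print(total_loss(0.04, 0.10, 0.02))  # ~0.112
```

For per-epoch variable updates you would still need to add your own writer calls (e.g. TensorBoard's `add_scalar`) inside the training loop.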
For more customized tracking, you might consider tools like Weights & Biases, which integrates well with YOLOv5 and allows for more extensive monitoring and visualization options. You can enable Weights & Biases in your training by setting --wandb in your training command, and it will automatically log more metrics and provide a richer interface for analysis.
Hope this helps, and keep up the great work!