
yolov10's People

Contributors

abirami-vina, adrianboguszewski, ayushexel, burhan-q, democat3457, dennisjcy, dependabot[bot], developer0hye, glenn-jocher, jameslahm, jamjamjon, kalenmike, kayzwer, lakshanthad, laughing-q, leonnil, nihui, onuralpszr, pderrenger, pre-commit-ci[bot], rizwanmunawar, sanha9999, sergiuwaxmann, shcheklein, snyk-bot, tfriedel, triple-mu, ultralyticsassistant, wangqvq, yermandy


yolov10's Issues

Understanding the post-processing in TensorRT C++

I have implemented the ONNX to TensorRT conversion for yolov10m.
Dynamic batching is not supported.

In ONNX, the model has the following output:

name: output0
tensor: float32[1,300,6]

300 is the fixed maximum number of detections.
6 corresponds to the box [x1, y1, x2, y2], the confidence, and the class label.

https://github.com/PrinceP/tensorrt-cpp-for-onnx/tree/main?tab=readme-ov-file#yolov10

In the results:

[result screenshot]

But it is not better than the YOLOv9-C model:

[comparison screenshot]

Is the following postprocessing correct?
https://github.com/PrinceP/tensorrt-cpp-for-onnx/blob/97457127305e1382398e237a702303b4b0ceb869/examples/yolov10/main.cpp#L63
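For reference, a minimal Python sketch of how I read the [1, 300, 6] output (assuming the layout is x1, y1, x2, y2, confidence, class index, with coordinates in the letterboxed input scale; a plain confidence threshold replaces NMS since the one2one head is NMS-free):

import numpy as np

def parse_yolov10_output(output, conf_thres=0.25):
    # output: float32 array of shape [1, 300, 6] from the exported model
    dets = output[0]                        # [300, 6]
    keep = dets[:, 4] >= conf_thres         # drop padded / low-confidence rows
    boxes = dets[keep, :4]                  # xyxy in input-image coordinates
    scores = dets[keep, 4]
    labels = dets[keep, 5].astype(np.int32)
    return boxes, scores, labels

If the C++ postprocessing does the same filtering (plus rescaling from the letterboxed input back to the original image), it should match the Python results.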

Pretrained weights

Hello, how can I train without loading the pretrained weights? What should I do?
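In case it helps, a minimal sketch of what I believe starts training from scratch with this repo's API: build the model from the architecture YAML instead of a .pt checkpoint, so no pretrained weights are loaded (the config and dataset names here are placeholders):

from ultralytics import YOLOv10

# Building from the YAML config creates randomly initialized weights
model = YOLOv10('yolov10n.yaml')
model.train(data='coco.yaml', epochs=100, imgsz=640)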

mAP stays at 0

Hello, I took a dataset that previously trained successfully on v8 and ran it on v10, and the mAP stays at 0. In the cfg, val: True is set, but validation does not seem to run during training. How can I solve this?
@jameslahm

Missing tags

from ultralytics import YOLOv10, YOLO

m = YOLO("models/yolov8n.pt")
print(m.names)

# Load a model
model = YOLOv10("models/yolov10n.pt").to("cuda")  # pretrained YOLOv10n model
print(model.names)

res = model.predict("bus.jpg",
                    show_boxes=True, show_conf=True, show_labels=True)[0]
res.show()
root@xy3dn:~/yolo8# /root/.pyenv/versions/3.10.12/bin/python /root/yolo8/test.py
{0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus', 6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant', 11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag', 27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard', 32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove', 36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle', 40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl', 46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli', 51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake', 56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table', 61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard', 67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink', 72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors', 77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'}
{0: '0', 1: '1', 2: '2', 3: '3', 4: '4', 5: '5', 6: '6', 7: '7', 8: '8', 9: '9', 10: '10', 11: '11', 12: '12', 13: '13', 14: '14', 15: '15', 16: '16', 17: '17', 18: '18', 19: '19', 20: '20', 21: '21', 22: '22', 23: '23', 24: '24', 25: '25', 26: '26', 27: '27', 28: '28', 29: '29', 30: '30', 31: '31', 32: '32', 33: '33', 34: '34', 35: '35', 36: '36', 37: '37', 38: '38', 39: '39', 40: '40', 41: '41', 42: '42', 43: '43', 44: '44', 45: '45', 46: '46', 47: '47', 48: '48', 49: '49', 50: '50', 51: '51', 52: '52', 53: '53', 54: '54', 55: '55', 56: '56', 57: '57', 58: '58', 59: '59', 60: '60', 61: '61', 62: '62', 63: '63', 64: '64', 65: '65', 66: '66', 67: '67', 68: '68', 69: '69', 70: '70', 71: '71', 72: '72', 73: '73', 74: '74', 75: '75', 76: '76', 77: '77', 78: '78', 79: '79'}

image 1/1 /root/yolo8/bus.jpg: 640x480 4 0s, 1 5, 118.0ms
Speed: 1.9ms preprocess, 118.0ms inference, 112.4ms postprocess per image at shape (1, 3, 640, 480)


I can't get the class labels when using YOLOv10.
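As a stopgap while the class-name metadata is missing from the v10 checkpoint, a hedged sketch that maps the numeric class indices back to the COCO names taken from the YOLOv8 model loaded above (assuming the Results/Boxes API behaves as in YOLOv8):

coco_names = m.names  # names dict from the YOLOv8 checkpoint loaded earlier
for box in res.boxes:
    cls_id = int(box.cls.item())
    print(coco_names.get(cls_id, str(cls_id)), float(box.conf.item()))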

Rotated object detection and segmentation

Search before asking

Question

Hello, can this project be used to train oriented (rotated) object detection and segmentation tasks? If so, are there corresponding pretrained weights?

Additional

No response

Issue with Resuming Training Process

If I want to resume training, is it correct to call it like this: "yolo detect train model=F:/01CODE/yolov10-main/runs/detect/train/weights/last.pt resume=True"?
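That CLI call looks right to me. For completeness, a sketch of what I believe is the equivalent Python call (assuming the ultralytics-style API used elsewhere in this repo):

from ultralytics import YOLOv10

model = YOLOv10('F:/01CODE/yolov10-main/runs/detect/train/weights/last.pt')
model.train(resume=True)  # restores epoch count, optimizer state, etc. from last.pt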

About Figure in paper

Your research is really great. A little off topic, but if possible, could you share the code used to draw the figure below? Thank you very much.

[figure screenshot]

Requirements

The installation instructions do not pin any version numbers; how should I configure the environment?

Questions regarding TensorRT inference speed compared to YOLOv8

Hey, thanks for the amazing repo.
I have been testing it against YOLOv8 in terms of inference speed, which is highlighted in the paper.

I ran a speed test on a few videos on an RTX 3070 Ti.
Here's my script to run inference:

from ultralytics import YOLOv10

model = YOLOv10('yolov10l.engine', task='detect')

in_dir = "videoplayback1080.mp4"
results = model.predict(source=in_dir, device="0", imgsz=640, half=True, iou=0.7,
                        save=False, conf=0.3, save_txt=False, stream=True,
                        save_conf=True, save_crop=False, show_labels=True, line_width=1)

This is the output of the YOLOv10 .engine; we are getting an average of about 5.0 ms:

video 1/1 (frame 1712/9184) videoplayback1080.mp4: 640x640 1 0, 10 2s, 1 5, 2 7s, 4.9ms
video 1/1 (frame 1713/9184) videoplayback1080.mp4: 640x640 1 0, 10 2s, 2 7s, 5.0ms
video 1/1 (frame 1714/9184) videoplayback1080.mp4: 640x640 1 0, 10 2s, 2 7s, 5.0ms
video 1/1 (frame 1715/9184) videoplayback1080.mp4: 640x640 1 0, 9 2s, 2 7s, 5.0ms
video 1/1 (frame 1716/9184) videoplayback1080.mp4: 640x640 3 0s, 9 2s, 2 7s, 5.0ms
video 1/1 (frame 1717/9184) videoplayback1080.mp4: 640x640 2 0s, 9 2s, 2 7s, 5.0ms
video 1/1 (frame 1718/9184) videoplayback1080.mp4: 640x640 4 0s, 8 2s, 2 7s, 5.0ms
video 1/1 (frame 1719/9184) videoplayback1080.mp4: 640x640 3 0s, 10 2s, 2 7s, 5.0ms
video 1/1 (frame 1720/9184) videoplayback1080.mp4: 640x640 3 0s, 9 2s, 2 7s, 5.0ms
video 1/1 (frame 1721/9184) videoplayback1080.mp4: 640x640 1 0, 8 2s, 2 7s, 5.0ms
video 1/1 (frame 1722/9184) videoplayback1080.mp4: 640x640 1 0, 9 2s, 2 7s, 5.0ms
video 1/1 (frame 1723/9184) videoplayback1080.mp4: 640x640 1 0, 9 2s, 2 7s, 5.0ms
video 1/1 (frame 1724/9184) videoplayback1080.mp4: 640x640 1 0, 9 2s, 2 7s, 5.0ms
video 1/1 (frame 1725/9184) videoplayback1080.mp4: 640x640 1 0, 9 2s, 2 7s, 5.0ms
video 1/1 (frame 1726/9184) videoplayback1080.mp4: 640x640 1 0, 9 2s, 2 7s, 5.0ms
video 1/1 (frame 1727/9184) videoplayback1080.mp4: 640x640 1 0, 9 2s, 3 7s, 5.0ms
video 1/1 (frame 1728/9184) videoplayback1080.mp4: 640x640 1 0, 10 2s, 2 7s, 5.0ms
video 1/1 (frame 1729/9184) videoplayback1080.mp4: 640x640 1 0, 8 2s, 2 7s, 5.0ms
video 1/1 (frame 1730/9184) videoplayback1080.mp4: 640x640 1 0, 9 2s, 2 7s, 5.2ms

And this is the output of the YOLOv8 .engine for comparison, which gives an average of about 5.1 ms:

video 1/1 (1712/9184) videoplayback1080.mp4: 640x640 13 cars, 1 bus, 1 truck, 5.1ms
video 1/1 (1713/9184) videoplayback1080.mp4: 640x640 14 cars, 1 bus, 1 truck, 5.1ms
video 1/1 (1714/9184) videoplayback1080.mp4: 640x640 13 cars, 1 bus, 1 truck, 5.1ms
video 1/1 (1715/9184) videoplayback1080.mp4: 640x640 12 cars, 1 bus, 1 truck, 5.1ms
video 1/1 (1716/9184) videoplayback1080.mp4: 640x640 13 cars, 1 bus, 1 truck, 5.1ms
video 1/1 (1717/9184) videoplayback1080.mp4: 640x640 13 cars, 1 bus, 1 truck, 5.1ms
video 1/1 (1718/9184) videoplayback1080.mp4: 640x640 12 cars, 1 bus, 1 truck, 5.2ms
video 1/1 (1719/9184) videoplayback1080.mp4: 640x640 12 cars, 1 bus, 1 truck, 5.1ms
video 1/1 (1720/9184) videoplayback1080.mp4: 640x640 13 cars, 1 bus, 2 trucks, 5.1ms
video 1/1 (1721/9184) videoplayback1080.mp4: 640x640 13 cars, 1 bus, 2 trucks, 5.2ms
video 1/1 (1722/9184) videoplayback1080.mp4: 640x640 14 cars, 1 bus, 2 trucks, 5.1ms
video 1/1 (1723/9184) videoplayback1080.mp4: 640x640 15 cars, 1 bus, 2 trucks, 5.1ms
video 1/1 (1724/9184) videoplayback1080.mp4: 640x640 14 cars, 1 bus, 2 trucks, 5.1ms
video 1/1 (1725/9184) videoplayback1080.mp4: 640x640 14 cars, 1 bus, 2 trucks, 5.1ms
video 1/1 (1726/9184) videoplayback1080.mp4: 640x640 15 cars, 1 bus, 2 trucks, 5.1ms
video 1/1 (1727/9184) videoplayback1080.mp4: 640x640 14 cars, 1 bus, 2 trucks, 5.1ms
video 1/1 (1728/9184) videoplayback1080.mp4: 640x640 12 cars, 1 bus, 3 trucks, 5.2ms
video 1/1 (1729/9184) videoplayback1080.mp4: 640x640 13 cars, 1 bus, 2 trucks, 5.1ms
video 1/1 (1730/9184) videoplayback1080.mp4: 640x640 12 cars, 1 bus, 1 truck, 5.1ms

From the looks of it, even though the parameters and FLOPs are reduced considerably, the inference speed itself hasn't changed much.

This is the script I used to convert to .engine; please check whether it is correct:

from ultralytics import YOLOv10

model = YOLOv10('yolov10l.pt')

# Export the model
model.export(format='engine', imgsz=640, iou=0.7, device=0, simplify=True, half=True, workspace=8)

Please let me know if this is expected or if I'm missing something here.

Also, I tried imgsz=1280; the speed difference is 61 FPS (v8) vs 64 FPS (v10).

Is the model perhaps more optimized for A40/A100-class GPUs than for 30- and 40-series cards?

Thanks.
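To separate raw engine latency from video decoding and pre/post-processing, I would time the predictor on a fixed image in a loop; a rough sketch (the warm-up count, iteration count, and image path are arbitrary choices of mine):

import time
from ultralytics import YOLOv10

model = YOLOv10('yolov10l.engine', task='detect')

# Warm up so CUDA context creation and engine activation are not measured
for _ in range(20):
    model.predict('bus.jpg', imgsz=640, half=True, verbose=False)

n = 200
t0 = time.perf_counter()
for _ in range(n):
    model.predict('bus.jpg', imgsz=640, half=True, verbose=False)
print(f"end-to-end: {(time.perf_counter() - t0) / n * 1000:.2f} ms/frame")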

Dynamic Batch Export for Yolov10 Model

I encountered the following error while using trtexec to convert the exported dynamic ONNX model:

[trtexec error screenshot]

This makes me wonder if Yolov10 currently supports dynamic batch export. Could you please confirm whether Yolov10 supports dynamic batch export? If it does not, are there any plans to add this feature in future releases?
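For what it's worth, this is how I would attempt a dynamic-batch ONNX export with the ultralytics-style API; whether the YOLOv10 head actually tolerates a dynamic batch is exactly the open question here, so treat this as a sketch:

from ultralytics import YOLOv10

model = YOLOv10('yolov10m.pt')
# dynamic=True requests dynamic axes in the exported ONNX graph
model.export(format='onnx', dynamic=True, simplify=True, imgsz=640)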

AttributeError: Can't get attribute 'YOLOv10DetectionModel' on module 'ultralytics.nn.tasks'

I encounter this error when running this training command; please help!

!yolo task=detect mode=train epochs=1 batch=16 plots=True \
    model='/kaggle/input/123456/yolov10l.pt' \
    data='/kaggle/input/filedata/data.yaml'

Traceback (most recent call last):
  File "/opt/conda/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/opt/conda/lib/python3.10/site-packages/ultralytics/cfg/__init__.py", line 556, in entrypoint
    model = YOLO(model, task=task)
  File "/opt/conda/lib/python3.10/site-packages/ultralytics/models/yolo/model.py", line 23, in __init__
    super().__init__(model=model, task=task, verbose=verbose)
  File "/opt/conda/lib/python3.10/site-packages/ultralytics/engine/model.py", line 152, in __init__
    self._load(model, task=task)
  File "/opt/conda/lib/python3.10/site-packages/ultralytics/engine/model.py", line 241, in _load
    self.model, self.ckpt = attempt_load_one_weight(weights)
  File "/opt/conda/lib/python3.10/site-packages/ultralytics/nn/tasks.py", line 806, in attempt_load_one_weight
    ckpt, weight = torch_safe_load(weight)  # load ckpt
  File "/opt/conda/lib/python3.10/site-packages/ultralytics/nn/tasks.py", line 732, in torch_safe_load
    ckpt = torch.load(file, map_location="cpu")
  File "/opt/conda/lib/python3.10/site-packages/torch/serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/opt/conda/lib/python3.10/site-packages/torch/serialization.py", line 1172, in _load
    result = unpickler.load()
  File "/opt/conda/lib/python3.10/site-packages/torch/serialization.py", line 1165, in find_class
    return super().find_class(mod_name, name)
AttributeError: Can't get attribute 'YOLOv10DetectionModel' on <module 'ultralytics.nn.tasks' from '/opt/conda/lib/python3.10/site-packages/ultralytics/nn/tasks.py'>
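My guess is that the stock ultralytics package (which does not define YOLOv10DetectionModel) is being imported instead of the version bundled with this repo; a quick hedged check from Python:

import ultralytics
from ultralytics.nn import tasks

print(ultralytics.__file__)                     # which installation is actually imported
print(hasattr(tasks, 'YOLOv10DetectionModel'))  # False suggests the stock package is shadowing the repo's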

Unable to display mAP during validation

Why isn't data such as mAP printed when using val.py for validation? However, the mAP is printed normally when the yolo command line is used for validation.
[screenshots]

Multi Object Tracking Task

Will the YOLOv10 object detector be embedded into multi-object trackers (such as ByteTrack, BoT-SORT) in the future?
Thank you for your reply!

Parameters and FLOPs

I've noticed an issue: the parameter count for yolov10n is 2,708,210, with 8.4 GFLOPs of floating-point operations. This differs from the numbers in the paper. I wonder if it's a calculation issue on my end?
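For a sanity check, this is roughly how I count parameters myself (plain PyTorch, no FLOP profiler). One possible explanation, and this is my assumption rather than anything stated here, is that the paper reports the model after export-time fusion and removal of the one2many branch:

from ultralytics import YOLOv10

model = YOLOv10('yolov10n.pt')
n_params = sum(p.numel() for p in model.model.parameters())
print(f"{n_params:,} parameters")  # compare against the 2.3M reported in the paper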

amp

I have set amp to False, but the model still only validates every 10 epochs during training. Since YOLOv8 validates every epoch, is YOLOv10 set to validate every 10 epochs?

About the Consistent Matching Metric for one2one and one2many

A question about one2one and one2many:
During training, is the loss computed on the targets predicted by the one2many head, and are the weights of the best predictions then shared with the one2one head?
And at deployment/inference time, is only the one2one head used?
Also, does the proof of the Consistent Matching Metric in the paper show that the one2one prediction is equal to the best one2many prediction?
Many thanks for your answer.

On the difficulty of 8-GPU training

Thank you for your excellent work improving YOLOv8. I tried to reproduce your accuracy by training YOLOv10-X on 8 GPUs, but unfortunately I got a very low AP. The parameters are configured according to your paper; did you make any additional adjustments for training on 8 GPUs?
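For comparison, this is the kind of 8-GPU launch I would use; the batch size and epoch count below are my own placeholders, not the paper's settings:

from ultralytics import YOLOv10

model = YOLOv10('yolov10x.yaml')
# Passing a device list triggers DDP; the total batch is split across the 8 GPUs
model.train(data='coco.yaml', epochs=500, batch=256, imgsz=640,
            device=[0, 1, 2, 3, 4, 5, 6, 7])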

YOLOv10 OBB

Do you have any plans to develop YOLOv10 OBB (Oriented Bounding Box)? It would be super beneficial!

weights

Hello, why does training load only the data and model configuration, not the pretrained weights?

Hello, during the first epoch of training, the error is as follows. Is there a solution to it?

Traceback (most recent call last):
  File "/home/amax/.conda/envs/zw_yolo/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/data1/moxing/wsy/ZW/wyy/yolov10-main/ultralytics/cfg/__init__.py", line 586, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/data1/moxing/wsy/ZW/wyy/yolov10-main/ultralytics/engine/model.py", line 657, in train
    self.trainer.train()
  File "/data1/moxing/wsy/ZW/wyy/yolov10-main/ultralytics/engine/trainer.py", line 213, in train
    self._do_train(world_size)
  File "/data1/moxing/wsy/ZW/wyy/yolov10-main/ultralytics/engine/trainer.py", line 430, in _do_train
    self.metrics, self.fitness = self.validate()
  File "/data1/moxing/wsy/ZW/wyy/yolov10-main/ultralytics/engine/trainer.py", line 552, in validate
    metrics = self.validator(self)
  File "/home/amax/.conda/envs/zw_yolo/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data1/moxing/wsy/ZW/wyy/yolov10-main/ultralytics/engine/validator.py", line 187, in __call__
    preds = self.postprocess(preds)
  File "/data1/moxing/wsy/ZW/wyy/yolov10-main/ultralytics/models/yolov10/val.py", line 18, in postprocess
    boxes, scores, labels = ops.v10postprocess(preds, self.args.max_det)
  File "/data1/moxing/wsy/ZW/wyy/yolov10-main/ultralytics/utils/ops.py", line 852, in v10postprocess
    assert(4 + nc == preds.shape[-1])
AssertionError

AttributeError: 'dict' object has no attribute 'shape'

Hi, so I trained my model and I'm trying to run inference.

Command used:
!yolo predict model="I:/Github/test/runs/detect/train/weights/best.pt" source="03_001_130524.png"

I get this error:
File "c:\Users\Teymo.conda\envs\YOLO10\lib\runpy.py", line 197, in _run_module_as_main
return run_code(code, main_globals, None,
File "c:\Users\Teymo.conda\envs\YOLO10\lib\runpy.py", line 87, in run_code
exec(code, run_globals)
File "C:\Users\Teymo.conda\envs\YOLO10\Scripts\yolo.exe_main
.py", line 7, in
sys.exit(entrypoint())
File "I:\Github\test\yolov10-main\yolov10-main\ultralytics\cfg_init
.py", line 587, in entrypoint
getattr(model, mode)(**overrides) # default args from model
File "I:\Github\test\yolov10-main\yolov10-main\ultralytics\engine\model.py", line 441, in predict
return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
File "I:\Github\test\yolov10-main\yolov10-main\ultralytics\engine\predictor.py", line 177, in predict_cli
for _ in gen: # noqa, running CLI inference without accumulating any outputs (do not modify)
File "c:\Users\Teymo.conda\envs\YOLO10\lib\site-packages\torch\utils_contextlib.py", line 35, in generator_context
response = gen.send(None)
File "I:\Github\test\yolov10-main\yolov10-main\ultralytics\engine\predictor.py", line 255, in stream_inference
self.results = self.postprocess(preds, im, im0s)
File "I:\Github\test\yolov10-main\yolov10-main\ultralytics\models\yolo\detect\predict.py", line 25, in postprocess
preds = ops.non_max_suppression(
File "I:\Github\test\yolov10-main\yolov10-main\ultralytics\utils\ops.py", line 216, in non_max_suppression
bs = prediction.shape[0] # batch size
AttributeError: 'dict' object has no attribute 'shape'
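This usually means the checkpoint is being routed through the generic YOLO detection predictor, which calls non_max_suppression on the raw dict output, instead of the YOLOv10 predictor. A hedged workaround sketch using the Python API rather than the CLI:

from ultralytics import YOLOv10  # not YOLO, so the NMS-free YOLOv10 postprocessing is used

model = YOLOv10('I:/Github/test/runs/detect/train/weights/best.pt')
results = model.predict('03_001_130524.png', conf=0.25)
results[0].show()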

Inconsistency between measured YOLOv10-N parameters/FLOPs and the values reported in the paper

Hello, I have some questions about the parameter and FLOPs figures for the YOLOv10-N model reported in your article. According to your data, the parameter count is 2.3M and the FLOPs are 6.7G. However, in my experiments, the model's parameter count reached 2.7M and the FLOPs were 8.4G. What is the reason for this difference? Can you provide some additional information? Thank you very much for your time and help!

[screenshots]

Unable to start basic training with pretrained coco weights, KeyError: 'epoch'.

Hey, thanks for the repo.
I wanted to train COCO at 1280 and set up my Python training file like this:

from ultralytics import YOLOv10

model = YOLOv10('yolov10x.pt')

results = model.train(data='coco.yaml', epochs=120, imgsz=1280, device=[0, 1, 2, 3], 
                      batch=16, close_mosaic=20, project='coco-1280', resume=False)

This should start standard COCO training at imgsz=1280 from the pretrained weight file trained at 640.
However, when I try to load the model like this I get:
Traceback (most recent call last):                                                                                                                                                                                  
  File "/home/.config/Ultralytics/DDP/_temp_asyw3_vx140544015144176.py", line 12, in <module>                                                                                                                 
    results = trainer.train()                                                                                                                                                                                       
  File "/home/anaconda3/envs/yolov10/lib/python3.9/site-packages/ultralytics/engine/trainer.py", line 213, in train                                                                                           
    self._do_train(world_size)                                                                                                                                                                                      
  File "/home/anaconda3/envs/yolov10/lib/python3.9/site-packages/ultralytics/engine/trainer.py", line 327, in _do_train                                                                                       
    self._setup_train(world_size)                                                                                                                                                                                   
  File "/home/anaconda3/envs/yolov10/lib/python3.9/site-packages/ultralytics/engine/trainer.py", line 319, in _setup_train                                                                                    
    self.resume_training(ckpt)                                                                                                                                                                                      
  File "/home/anaconda3/envs/yolov10/lib/python3.9/site-packages/ultralytics/engine/trainer.py", line 665, in resume_training                                                                                 
    start_epoch = ckpt["epoch"] + 1                                                                                                                                                                                 
KeyError: 'epoch'  

For some reason it is trying to resume even though it should not.
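As a possible workaround (a sketch, assuming Model.load() behaves as in upstream ultralytics), building the model from its YAML and transferring the 640-imgsz weights into it avoids handing the trainer a checkpoint that it then tries to resume from:

from ultralytics import YOLOv10

# Build from config, then transfer pretrained weights instead of resuming their training state
model = YOLOv10('yolov10x.yaml').load('yolov10x.pt')

results = model.train(data='coco.yaml', epochs=120, imgsz=1280, device=[0, 1, 2, 3],
                      batch=16, close_mosaic=20, project='coco-1280')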

Latency Report - Hardware

I have been trying to find the hardware specifications used for measuring latency, specifically the GPU, GPU clock speed, and CPU model. Unfortunately, I couldn't find this information. Could anyone provide details on the hardware setup used for these measurements?

The size of tensor a (8400) must match the size of tensor b (16800) at non-singleton dimension 1

File "D:\yolov10-main\yolov10-main\ultralytics\utils\tal.py", line 314, in dist2bbox
x1y1 = anchor_points - lt
~~~~~~~~~~~~~~^~~~
RuntimeError: The size of tensor a (8400) must match the size of tensor b (16800) at non-singleton dimension 1

In the function dist2bbox:

I tried printing the shapes of anchor_points, lt, and rb; they have different shapes:
print(anchor_points.shape)
print(lt.shape)
print(rb.shape)
torch.Size([8400, 2])
torch.Size([1, 16800, 2])
torch.Size([1, 16800, 2])
How can I solve this issue?
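For context, 8400 is the anchor-point count for a 640x640 input, and 16800 is exactly twice that, which suggests two sets of distance predictions (for example one2one plus one2many) are being decoded against a single set of anchors; that interpretation is my guess, but the arithmetic is easy to check:

# Anchor points for a 640x640 input across strides 8, 16, 32
anchors_640 = (640 // 8) ** 2 + (640 // 16) ** 2 + (640 // 32) ** 2
print(anchors_640)      # 6400 + 1600 + 400 = 8400
print(2 * anchors_640)  # 16800, matching the lt/rb tensors in the error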

Empowering X-AnyLabeling with YOLOv10 Model Support

Hello YOLOv10 Team,

I hope this message finds you well. I wanted to express my gratitude for your exceptional work on the YOLOv10 algorithm. The advancements and improvements you've made are truly impressive 👏👏👏.

🚀 Now, I am pleased to inform you that I have integrated YOLOv10 into the X-AnyLabeling tool, which facilitates effortless data labeling with AI support. This integration has enhanced the tool's capabilities, allowing for more efficient and accurate labeling.

Thank you again for your hard work and dedication. Your contributions to the machine learning community are greatly appreciated.

Best regards,
CVHub

Question regarding the latency w/ and w/o post processing of Table 1 in the paper

Hi, thank you for sharing the code for this amazing work!
I have a question regarding the latency w/ and w/o post processing of Table 1 in the paper (denoted Latency & Latency-f).

My understanding is that Latency = Latency-f + NMS time (post-processing here simply means NMS, correct?).
If so, why would the NMS time vary so much across different YOLO versions?
For example, the NMS overhead is consistently about 0.1 ms for YOLOv10 while being about 4.5 ms for YOLOv8.

Any help would be greatly appreciated. Thanks :)
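One contributing factor, in my reading rather than the authors': NMS cost depends heavily on how many candidate boxes survive the confidence filter, which differs between models and thresholds. A rough micro-benchmark sketch of that effect with torchvision's NMS (the box counts are illustrative):

import time
import torch
from torchvision.ops import nms

def time_nms(num_boxes, iters=50):
    boxes = torch.rand(num_boxes, 4) * 640
    boxes[:, 2:] += boxes[:, :2]  # ensure x2 > x1 and y2 > y1
    scores = torch.rand(num_boxes)
    t0 = time.perf_counter()
    for _ in range(iters):
        nms(boxes, scores, 0.7)
    return (time.perf_counter() - t0) / iters * 1000

print(f"{time_nms(300):.3f} ms for 300 boxes, {time_nms(8400):.3f} ms for 8400 boxes")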

Runtime warning message

Each line of training output displays this warning:


UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
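The warning message itself points at the fix; a minimal before/after sketch (the tensors here are placeholders, not the actual variables in the repo):

import torch

a = torch.tensor([7.0, -7.0])
b = torch.tensor([2.0, 2.0])

old = a // b                                  # the operation the warning is about
new = torch.div(a, b, rounding_mode='floor')  # explicit floor division, as the warning recommends
print(old, new)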

ONNX weights + web demo

Hi there! 👋 I've converted the models to ONNX and uploaded them to the Hugging Face Hub (link):

In each model card, I also added usage instructions for Transformers.js, a JavaScript library which allows you to run the models in the browser. You can try out the web demo here: https://huggingface.co/spaces/Xenova/yolov10-web

