
yolov9's Issues

convert onnx to engine

TensorRT v8401
[E] ERROR ModelImporter.cpp:180 In function parseGraph
[6] Invalid Node - Conv_1485
Conv_1485: two inputs (data and weights) are allowed only in explicit-quantization mode
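A hedged workaround sketch (not the repo's documented fix): this parser error usually goes away after constant-folding the exported graph, either by passing --simplify to export.py (the option appears as simplify=False in the export argument dumps below) or by running onnx-simplifier manually before building the engine, or by upgrading TensorRT. File names below are placeholders.

```python
# Constant-fold the exported graph with onnx-simplifier (pip install onnxsim)
# before handing it to trtexec / the TensorRT builder.
import onnx
from onnxsim import simplify

model = onnx.load("yolov9-c.onnx")                 # hypothetical export output
model_simplified, ok = simplify(model)
assert ok, "simplified ONNX model failed validation"
onnx.save(model_simplified, "yolov9-c-sim.onnx")   # build the engine from this file
```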

Add a MLFlow Callback

This is an improvement request

We should add an MLflow integration via a callback.

I will try to do this on my side and propose a PR.
If anyone has an idea or a template for putting this in place, I'm also interested!
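A minimal sketch of what such a logger could look like. The hook names (on_fit_epoch_end, on_train_end) follow the YOLOv5-style callbacks this codebase derives from and are assumptions; the mlflow calls themselves are the standard tracking API.

```python
import mlflow

class MLflowLogger:
    """Minimal MLflow logger sketch; opt/hyp mirror the repo's training objects (assumption)."""

    def __init__(self, opt, hyp, experiment="yolov9"):
        mlflow.set_experiment(experiment)
        self.run = mlflow.start_run(run_name=opt.name)
        mlflow.log_params({**vars(opt), **hyp})  # training args + hyperparameters

    def on_fit_epoch_end(self, metrics, epoch):
        # metrics: dict of scalar results; ':' is not allowed in MLflow metric keys
        for k, v in metrics.items():
            mlflow.log_metric(k.replace(":", "_"), float(v), step=epoch)

    def on_train_end(self, best_ckpt):
        mlflow.log_artifact(str(best_ckpt))  # upload best.pt
        mlflow.end_run()
```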

AttributeError: 'list' object has no attribute 'view'

In loss_tal.py:

pred_distri, pred_scores = torch.cat([xi.view(feats[0].shape[0], self.no, -1) for xi in feats], 2).split(
    (self.reg_max * 4, self.nc), 1)
The error is as follows:
AttributeError: 'list' object has no attribute 'view'
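A hedged guess at the cause: the error means `feats` is a nested list (e.g. auxiliary plus main branch outputs from a dual-head forward) rather than a flat list of per-scale tensors, which happens when the loss file and model config are mismatched (see the author's note later on this page about using train.py for gelan models and train_dual.py for yolov9 models). A minimal guard sketch, with the branch selection marked as an assumption:

```python
# Sketch only: if the model returned [aux_feats, main_feats], keep one branch
# of per-scale tensors before reshaping (which branch is an assumption).
if isinstance(feats[0], (list, tuple)):
    feats = feats[-1]

pred_distri, pred_scores = torch.cat(
    [xi.view(feats[0].shape[0], self.no, -1) for xi in feats], 2
).split((self.reg_max * 4, self.nc), 1)
```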

Upload models over to the Hugging Face Hub! 🤗

Hi there,

I'm VB, and I lead the advocacy efforts for open source at Hugging Face. Congratulations on such a brilliant model. I saw that the model checkpoints are released as part of the GitHub release. It'd be great if you could upload the model checkpoints to the Hugging Face Hub.

Uploading model checkpoints to the Hugging Face Hub comes with a few advantages:

  1. It increases the visibility of the model checkpoints on the Hub.
  2. It makes it easier for people to download the different weights.
  3. It makes it surprisingly easy to version control weights as well.

Here's a quick guide explaining how to upload models over on the Hub: https://huggingface.co/docs/hub/en/models-uploading

In addition to this, you can add support for directly loading the model checkpoints via the huggingface_hub library as well: https://huggingface.co/docs/huggingface_hub/v0.16.3/en/guides/download (it's just 2 lines of code).
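For reference, the "2 lines" from that guide look like this (the repo_id and filename below are placeholders, not an existing Hub repo):

```python
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(repo_id="your-org/yolov9", filename="yolov9-c.pt")
```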

Do let me or @merveenoyan know if you need any assistance with this.

Cheers,
VB

how to export to onnx?

python export.py --weights yolov9-c.pt --include onnx

export: data=G:\Item_done\yolo\yolo5\yolov9\yolov9-main\data\coco.yaml, weights=['yolov9-c.pt'], imgsz=[640, 640], batch_size=1, device=cpu, half=False, inplace=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=12, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['onnx']
YOLOv5 2024-2-22 Python-3.9.16 torch-2.0.1+cu118 CPU

Fusing layers...
Model summary: 724 layers, 51141120 parameters, 0 gradients, 238.7 GFLOPs
Traceback (most recent call last):
File "G:\Item_done\yolo\yolo5\yolov9\yolov9-main\export.py", line 606, in
main(opt)
File "G:\Item_done\yolo\yolo5\yolov9\yolov9-main\export.py", line 601, in main
run(**vars(opt))
File "C:\Users\lllstandout.conda\envs\yolov8\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\Item_done\yolo\yolo5\yolov9\yolov9-main\export.py", line 497, in run
if isinstance(m, (Detect, V6Detect)):
NameError: name 'V6Detect' is not defined
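A hedged workaround sketch, not the upstream fix: the isinstance check in export.py references V6Detect, which isn't defined or imported at that revision. The detect-head classes that do exist in models/yolo.py (Detect, DDetect, DualDetect, DualDDetect; they appear in the DetectionModel code quoted further down this page) can be checked instead. The inplace/dynamic variables match the export options printed above; the attribute names on the head follow YOLOv5's export path and are assumptions.

```python
# export.py, around the failing check (sketch):
from models.yolo import Detect, DDetect, DualDetect, DualDDetect

for _, m in model.named_modules():
    if isinstance(m, (Detect, DDetect, DualDetect, DualDDetect)):
        m.inplace = inplace   # attribute names mirror YOLOv5's export.py (assumption)
        m.dynamic = dynamic
        m.export = True
```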

detect.py

device = prediction[1]
mps = 'mps' in device.type  # Apple MPS
if mps:  # MPS not fully supported yet, convert tensors to CPU before NMS
    prediction = prediction.cpu()
bs = prediction.shape[0]  # batch size
nc = prediction.shape[1] - nm - 4  # number of classes
mi = 4 + nc  # mask start index
xc = torch.max(prediction[:, 4:mi], dim=1)[0] > conf_thres
TypeError: argument of type 'builtin_function_or_method' is not iterable. The `in` check is being applied to a function/method rather than to an iterable string. How do I solve this?
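A sketch of the likely cause: `prediction[1]` is a Tensor, so `device.type` above is the bound method Tensor.type rather than a device-type string, and `'mps' in <method>` raises exactly this TypeError. Taking the tensor's torch.device first restores the expected string attribute:

```python
device = prediction[1].device   # torch.device of the tensor
mps = 'mps' in device.type      # device.type is now 'cpu' / 'cuda' / 'mps'
```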

YoloV7 segmentation backbone

Can you please upload the weights of the pre-trained YOLOv9 segmentation model on MS COCO? Since the backbone is similar to YOLOv7 segmentation, can we use the v7 ".pt" files?

Please provide example code for getting a certain class, like below

This is how I do it with YOLOv7.

I hope you can help me do the same with YOLOv9.

Also, is there a full list of the classes it knows?

I would like to use YOLOv9-E


model = torch.hub.load('WongKinYiu/yolov7', 'custom', 'yolov7-e6e.pt', force_reload=False, trust_repo=True)
results = model(img_path)
detections = results.pandas().xyxy[0]

person_detected = detections[detections['name'] == 'person']
if not person_detected.empty:
    x1, y1, x2, y2 = person_detected.iloc[0][['xmin', 'ymin', 'xmax', 'ymax']].astype(int)
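For YOLOv9's own detect.py, one hedged option is to filter by class index on the command line: the argument dumps elsewhere on this page show a classes option (classes=None), and with the default COCO data files class 0 is 'person' (the full list of 80 names lives in data/coco.yaml). The flag spelling below follows YOLOv5's --classes and is an assumption:

python detect.py --weights yolov9-e.pt --source your_image.jpg --classes 0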

RepConvN infer & remove the auxiliary head

During inference, keeping the auxiliary branch reduces inference speed. Where is the main code for removing the auxiliary branch? Additionally, the code does not seem to provide reparameterization code for RepConvN in the inference path.
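The repo's own reparameterization utilities aren't shown here, so as a hedged illustration only, this is a generic RepVGG-style sketch of what reparameterizing a RepConv-like block means: fold each branch's BatchNorm into its conv, pad the 1x1 kernel to 3x3, and sum the branches into a single plain 3x3 conv. The branch structure below is an assumption, not the repo's RepConvN code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d):
    """Fold BatchNorm statistics into the preceding conv's weight/bias
    (assumes conv.bias is None, the usual case before a BN layer)."""
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    weight = conv.weight * scale.reshape(-1, 1, 1, 1)
    bias = bn.bias - bn.running_mean * scale
    return weight, bias

@torch.no_grad()
def merge_rep_branches(conv3x3: nn.Conv2d, bn3: nn.BatchNorm2d,
                       conv1x1: nn.Conv2d, bn1: nn.BatchNorm2d) -> nn.Conv2d:
    """Collapse a 3x3 branch and a 1x1 branch into one plain 3x3 conv."""
    w3, b3 = fuse_conv_bn(conv3x3, bn3)
    w1, b1 = fuse_conv_bn(conv1x1, bn1)
    w1 = F.pad(w1, [1, 1, 1, 1])  # pad the 1x1 kernel to 3x3 so the kernels add up
    fused = nn.Conv2d(conv3x3.in_channels, conv3x3.out_channels, 3,
                      stride=conv3x3.stride, padding=conv3x3.padding,
                      groups=conv3x3.groups, bias=True)
    fused.weight.copy_(w3 + w1)
    fused.bias.copy_(b3 + b1)
    return fused
```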

question

Does this work, or is it just a mock-up joke? I can't even run prediction:

python detect.py --imgsz 640 --conf-thres 0.001 --iou-thres 0.7 --weights yolov9-e.pt --source 6.mp4

(fusion) C:\EAL\Scripts\DarkFusion\UltraDarkFusion\yolov9>python detect.py --imgsz 640 --conf-thres 0.001 --iou-thres 0.7 --weights yolov9-e.pt --source 6.mp4
detect: weights=['yolov9-e.pt'], source=6.mp4, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.001, iou_thres=0.7, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 v0.1-10-g33fc75e Python-3.8.18 torch-2.1.0+cu118 CUDA:0 (NVIDIA GeForce RTX 3090 Ti, 24564MiB)

Fusing layers...
Model summary: 1119 layers, 69470144 parameters, 0 gradients, 244.0 GFLOPs
Traceback (most recent call last):
File "detect.py", line 231, in
main(opt)
File "detect.py", line 226, in main
run(**vars(opt))
File "c:\Anaconda3\envs\fusion\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "detect.py", line 102, in run
pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
File "C:\EAL\Scripts\DarkFusion\UltraDarkFusion\yolov9\utils\general.py", line 905, in non_max_suppression
device = prediction.device
AttributeError: 'list' object has no attribute 'device'

Empowering X-AnyLabeling with YOLOv9 Model Support

Hey @WongKinYiu, big thanks for your outstanding work!

I've seamlessly integrated yolov9 into X-AnyLabeling, marking a substantial leap in our capabilities. YOLOv9 proves to be a game-changer, offering robust object detection trained on an extensive dataset of labeled and unlabeled images. This integration elevates X-AnyLabeling, providing a more comprehensive and industrial-grade solution for image data engineering.

This is not an issue but a celebration of a successful integration. Once again, thank you for your exceptional contribution. Excited for more successful collaborations ahead!

yolov9-X-AnyLabeling-DEMO

YOLOv9 Inference using ONNX

Hi all...

I just released my code for performing inference with YOLOv9 using ONNX. You can check out my code at the link below:

https://github.com/danielsyahputra/yolov9-onnx

OnnxSlim support

Hi, we have developed a tool called onnxslim, which can help slim exported ONNX models. It's pure Python and works well on YOLO models. Should we integrate it into yolov9?

`AttributeError: 'list' object has no attribute 'device'`

While trying to run detect.py I encountered an error.

command:

!python detect.py --weights /content/weights/yolov9-e.pt --conf 0.1 --source /content/dog.jpeg 

error:

detect: weights=['/content/weights/yolov9-e.pt'], source=/content/dog.jpeg, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.1, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 v0.1-2-ge7d68de Python-3.10.12 torch-2.1.0+cu121 CUDA:0 (Tesla T4, 15102MiB)

Fusing layers... 
Model summary: 1119 layers, 69470144 parameters, 0 gradients, 244.0 GFLOPs
Traceback (most recent call last):
  File "/content/yolov9/detect.py", line 231, in <module>
    main(opt)
  File "/content/yolov9/detect.py", line 226, in main
    run(**vars(opt))
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/yolov9/detect.py", line 102, in run
    pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
  File "/content/yolov9/utils/general.py", line 905, in non_max_suppression
    device = prediction.device
AttributeError: 'list' object has no attribute 'device'

reproducible example

https://colab.research.google.com/drive/1hq-hORpv8pHjwjBz-slPUuKfpXyTXSEg?usp=sharing
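This appears to be the same root cause as the earlier non_max_suppression report: yolov9-e.pt is a dual-head checkpoint, so the model's forward returns a list of outputs, and general.py then fails on prediction.device. A hedged workaround, mirroring the guard upstream YOLOv5 uses in non_max_suppression, is to select the inference output before NMS; exactly which element of the list is the main output can differ per checkpoint, so treat the indexing as an assumption.

```python
# utils/general.py, top of non_max_suppression() - sketch:
if isinstance(prediction, (list, tuple)):
    prediction = prediction[0]  # select only the inference output (assumption)
device = prediction.device
```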

Yolov9 Nano Version

Hi, thanks for this great work! For real time low level systems, we need yolov9 nano models. Are you planning to release nano version?

Is yolov9s available?

Is yolov9s available? Why do I get an error when changing the depth and width coefficients?

cls = names[cls] if names else cls KeyError: 79

After the training procedure completed, I encountered an error as shown below:

Fusing layers... 

Exception in thread Thread-9:

Traceback (most recent call last):

  File "/home/.../miniconda3/envs/YOLOv9/lib/python3.9/threading.py", line 980, in _bootstrap_inner
    self.run()
  File "/home/.../miniconda3/envs/YOLOv9/lib/python3.9/threading.py", line 917, in run
    self._target(*self._args, **self._kwargs)
  File "/home/.../yolov9/utils/plots.py", line 297, in plot_images
    cls = names[cls] if names else cls
KeyError: 79
<class 'torch.Tensor'>

It seems that the weight file best.pt has been saved successfully, but the resulting evaluation plots are not correct.
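KeyError: 79 means plot_images() looked up a class index that the names mapping does not contain (79 is the last COCO index, so this typically points to a mismatch between the dataset yaml's class count and the labels or predictions being plotted). A defensive sketch for utils/plots.py, assuming names is a dict there (the KeyError suggests it is) and that cls may still be a Tensor at that point (the "<class 'torch.Tensor'>" print above suggests it is):

```python
c = int(cls)                                       # dict lookup with a Tensor key always misses
cls = names.get(c, f'class{c}') if names else c    # fall back instead of raising
```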

AttributeError: Can't get attribute 'C3' on <module 'models.common' from 'D:\\Pycharm\\Projects\\yolo\\yolov9\\models\\common.py'>

This error occurs when I execute the following code: python classify/train.py --model yolov5s-cls.pt --data D:/Pycharm/Projects/zsh/data/2Dimage_AlliOne_split --epochs 5 --img 224

Here's the full info.

(yolo) PS D:\Pycharm\Projects\yolo\yolov9> python classify/train.py --model yolov5s-cls.pt --data D:/Pycharm/Projects/zsh/data/2Dimage_AlliOne_split --epochs 5 --img 224
classify\train: model=yolov5s-cls.pt, data=D:/Pycharm/Projects/zsh/data/2Dimage_AlliOne_split, epochs=5, batch_size=64, imgsz=224, nosave=False, cache=None, device=, workers=8, project=runs\train-cls, name=exp, exist_ok=False, pretrained=True, optimizer=Adam, lr0=0.001, decay=5e-05, label_smoothing=0.1, cutoff=None, dropout=None, verbose=False, seed=0, local_rank=-1
github: up to date with https://github.com/WongKinYiu/yolov9
YOLOv5 v0.1-2-ge7d68de Python-3.11.5 torch-2.1.2+cu118 CUDA:0 (NVIDIA GeForce RTX 4060 Laptop GPU, 8188MiB)

TensorBoard: Start with 'tensorboard --logdir runs\train-cls', view at http://localhost:6006/
albumentations: RandomResizedCrop(p=1.0, height=224, width=224, scale=(0.08, 1.0), ratio=(0.75, 1.3333333333333333), interpolation=1), HorizontalFlip(p=0.5), ColorJitter(p=0.5, brightness=[0.6, 1.4], contrast=[0.6, 1.4], saturation=[0.6, 1.4], hue=[0, 0]), Normalize(p=1.0, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225), max_pixel_value=255.0), ToTensorV2(always_apply=True, p=1.0, transpose_mask=False)
Downloading https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-cls.pt to yolov5s-cls.pt...
100%|██████████| 10.5M/10.5M [00:01<00:00, 6.13MB/s]

Traceback (most recent call last):
File "D:\Pycharm\Projects\yolo\yolov9\classify\train.py", line 335, in
main(opt)
File "D:\Pycharm\Projects\yolo\yolov9\classify\train.py", line 321, in main
train(opt, device)
File "D:\Pycharm\Projects\yolo\yolov9\classify\train.py", line 113, in train
model = attempt_load(opt.model, device='cpu', fuse=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Pycharm\Projects\yolo\yolov9\models\experimental.py", line 75, in attempt_load
ckpt = torch.load(attempt_download(w), map_location='cpu') # load
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\envs\yolo\Lib\site-packages\torch\serialization.py", line 1014, in load
return _load(opened_zipfile,
^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\envs\yolo\Lib\site-packages\torch\serialization.py", line 1422, in _load
result = unpickler.load()
^^^^^^^^^^^^^^^^
File "D:\Anaconda\envs\yolo\Lib\site-packages\torch\serialization.py", line 1415, in find_class
return super().find_class(mod_name, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: Can't get attribute 'C3' on <module 'models.common' from 'D:\Pycharm\Projects\yolo\yolov9\models\common.py'>

weights

Isn't there a yolov9-s.pt weights file?

about YOLOv9 License

Hello author,
Could you kindly provide details regarding the licensing terms applicable to YOLOv9?

many thanks,

Segment, classify, panoptic modules

Thank you for your hard work. I've been following you since the release of yolov7.
In this project, I don't see the results and benchmarks of segment, classify, panoptic modules.
Can you provide more information about them?

YOLOv9-e with 69M parameters

Hello, I was just training the YOLOv9-e model on the VisDrone dataset and noticed that it was showing me 69 million parameters instead of 58 million parameters which was mentioned in the README.

Please use `train.py` to train gelan models, and use `train_dual.py` to train yolov9 models.

> Please use `train.py` to train gelan models, and use `train_dual.py` to train yolov9 models.

Originally posted by @WongKinYiu in #27 (comment)

When we train on a 2048×2448-size dataset with gelan.yaml, both train.py and train_dual.py report this issue:

File "D:\PythonCode\yolov9-main\utils\loss_tal_dual.py", line 175, in
pred_distri, pred_scores = torch.cat([xi.view(feats[0].shape[0], self.no, -1) for xi in feats], 2).split(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: shape '[67, 67, -1]' is invalid for input of size 428800

Training with yolov9.yaml is OK.

By the way, how do we output the pixel position of each detected object?

RuntimeError: Boolean value of Tensor with more than one value is ambiguous

Hello, I ran train_dual.py for object detection and encountered this error. How can it be solved? Thank you very much for your contribution.
File "D:\Study\yolov9-main\models\yolo.py", line 577, in forward
if augment:
RuntimeError: Boolean value of Tensor with more than one value is ambiguous

class DetectionModel(BaseModel):
# YOLO detection model
def __init__(self, cfg='yolo.yaml', ch=3, nc=None, anchors=None):  # model, input channels, number of classes
    super().__init__()
    if isinstance(cfg, dict):
        self.yaml = cfg  # model dict
    else:  # is *.yaml
        import yaml  # for torch hub
        self.yaml_file = Path(cfg).name
        with open(cfg, encoding='ascii', errors='ignore') as f:
            self.yaml = yaml.safe_load(f)  # model dict

    # Define model
    ch = self.yaml['ch'] = self.yaml.get('ch', ch)  # input channels
    if nc and nc != self.yaml['nc']:
        LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}")
        self.yaml['nc'] = nc  # override yaml value
    if anchors:
        LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}')
        self.yaml['anchors'] = round(anchors)  # override yaml value
    self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch])  # model, savelist
    self.names = [str(i) for i in range(self.yaml['nc'])]  # default names
    self.inplace = self.yaml.get('inplace', True)

    # Build strides, anchors
    m = self.model[-1]  # Detect()
    if isinstance(m, (Detect, DDetect, Segment)):
        s = 256  # 2x min stride
        m.inplace = self.inplace
        forward = lambda x: self.forward(x)[0] if isinstance(m, (Segment)) else self.forward(x)
        m.stride = torch.tensor([s / x.shape[-2] for x in forward(torch.zeros(1, ch, s, s))])  # forward
        # check_anchor_order(m)
        # m.anchors /= m.stride.view(-1, 1, 1)
        self.stride = m.stride
        m.bias_init()  # only run once
    if isinstance(m, (DualDetect, TripleDetect, DualDDetect, TripleDDetect)):
        s = 256  # 2x min stride
        m.inplace = self.inplace
        #forward = lambda x: self.forward(x)[0][0] if isinstance(m, (DualSegment)) else self.forward(x)[0]
        forward = lambda x: self.forward(x)[0]
        m.stride = torch.tensor([s / x.shape[-2] for x in forward(torch.zeros(1, ch, s, s))])  # forward
        # check_anchor_order(m)
        # m.anchors /= m.stride.view(-1, 1, 1)
        self.stride = m.stride
        m.bias_init()  # only run once

    # Init weights, biases
    initialize_weights(self)
    self.info()
    LOGGER.info('')

def forward(self, x, augment=False, profile=False, visualize=False):
    if augment:
        return self._forward_augment(x)  # augmented inference, None
    return self._forward_once(x, profile, visualize)  # single-scale inference, train

def _forward_augment(self, x):
    img_size = x.shape[-2:]  # height, width
    s = [1, 0.83, 0.67]  # scales
    f = [None, 3, None]  # flips (2-ud, 3-lr)
    y = []  # outputs
    for si, fi in zip(s, f):
        xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max()))
        yi = self._forward_once(xi)[0]  # forward
        # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1])  # save
        yi = self._descale_pred(yi, fi, si, img_size)
        y.append(yi)
    y = self._clip_augmented(y)  # clip augmented tails
    return torch.cat(y, 1), None  # augmented inference, train
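A sketch of the usual cause of this RuntimeError: `if augment:` is only ambiguous when `augment` is a Tensor, i.e. a tensor was passed positionally into that slot of forward(x, augment=False, profile=False, visualize=False). The names below (model, im, some_tensor) are placeholders:

```python
pred = model(im)                  # fine: augment defaults to False
pred = model(im, augment=True)    # fine: flags passed by keyword
pred = model(im, some_tensor)     # -> RuntimeError: Boolean value of Tensor ... is ambiguous
```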

License for the repo

Thanks for the great work. Could you also tell which license will be applicable to the repo?

Segmentation Model for Yolo v9

Any plans to make a segmentation model? I saw you already used some multi-task YOLO model code. Thank you for your work.

AttributeError: 'FreeTypeFont' object has no attribute 'getsize'

When I started training, I got the following errors, but then training continued normally. Is this normal?

Starting training for 100 epochs...

  Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
   0/99      3.01G      5.309      6.831      5.464         28        640:   0%|          | 0/3520 00:14
Exception in thread Thread-7:

Traceback (most recent call last):
File "/home/Users/anaconda3/envs/detr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/home/Users/anaconda3/envs/detr/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/Users/摌青/yolov9-main/utils/plots.py", line 300, in plot_images
annotator.box_label(box, label, color=color)
File "/home/Users/摌青/yolov9-main/utils/plots.py", line 86, in box_label
w, h = self.font.getsize(label) # text width, height
AttributeError: 'FreeTypeFont' object has no attribute 'getsize'
WARNING ⚠️ TensorBoard graph visualization failure Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions
0/99 3.53G 4.76 6.78 5.595 165 640: 0%| | 2/3520 00:18
(the same 'FreeTypeFont' object has no attribute 'getsize' traceback is raised again from Thread-10 and Thread-13 on the next batches)
0/99 4.48G 4.757 7.121 5.501 67 640: 8%|▊ | 289/3520 03:37
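This is the Pillow 10 change that removed FreeTypeFont.getsize. Two hedged options: pin the dependency with pip install "Pillow<10", or compute the label size from getbbox() instead, roughly as below (sketch for utils/plots.py box_label(), assuming self.font is a PIL FreeTypeFont):

```python
# Pillow >= 10 removed FreeTypeFont.getsize; getbbox() gives the same information.
left, top, right, bottom = self.font.getbbox(label)
w, h = right - left, bottom - top  # text width, height
```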

how to remove auxiliary branch

Hello, thank you for your excellent work, but I couldn't find the code for removing the auxiliary branch. Can you help me?

I want to learn more about DualDDetect

Hello, I want to know more about DualDDetect. Is there any relevant literature? Maybe I didn't read the original paper carefully and couldn't find the relevant information. Thank you very much!

Question about training time and hardware

Hi, thanks for your great work!

I noticed that the experiment requires training from scratch for 500 epochs.
Could you please tell me your training time and corresponding GPU device?
