
strongsort-yolo's Introduction

StrongSORT with OSNet for YoloV5, YoloV7, YoloV8 (Counter)


# Official YOLOv5
# Official YOLOv7

Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors


# Official YOLOv8

Ultralytics YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification and pose estimation tasks.

Introduction

This repository contains a highly configurable two-stage tracker that adjusts to different deployment scenarios. The detections generated by YOLOv5, YOLOv7, or YOLOv8, a family of object detection architectures and models pretrained on the COCO dataset, are passed to StrongSORT, which combines motion and appearance information based on OSNet in order to track the objects. It can track any object that your YOLO model was trained to detect.

Before you run the tracker

  1. Clone the repository recursively:

git clone --recurse-submodules https://github.com/bharath5673/StrongSORT-YOLO.git

If you already cloned and forgot to use --recurse-submodules you can run git submodule update --init

  2. Make sure that you fulfill all the requirements: Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install them, run:

pip install -r requirements.txt

Tracking sources

Tracking can be run on most video formats.
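The tracker decides how to open a source from the string you pass to --source. As a rough illustration of that dispatch (the actual logic lives in the bundled YOLO dataloaders, so `classify_source` is a hypothetical sketch, not the repo's code):

```python
def classify_source(source: str) -> str:
    """Rough illustration of how a --source string could be dispatched."""
    source = str(source)
    if source.isdigit():
        return "webcam"   # e.g. --source 0 opens local camera 0
    if source.lower().startswith(("rtsp://", "rtmp://", "http://", "https://")):
        return "stream"   # network stream, read frame by frame
    return "file"         # video file such as test.mp4

print(classify_source("0"))         # webcam
print(classify_source("test.mp4"))  # file
```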

Select object detectors and ReID model

Yolov5

There is a clear trade-off between model inference speed and accuracy. To match your inference speed/accuracy needs, you can select a YOLOv5 family model for automatic download:

$ python track_v5.py --source 0 --yolo-weights weights/yolov5n.pt --img 640
                                            yolov5s.pt
                                            yolov5m.pt
                                            yolov5l.pt 
                                            yolov5x.pt --img 1280
                                            ...

Yolov7

There is a clear trade-off between model inference speed and accuracy. To match your inference speed/accuracy needs, you can select a YOLOv7 family model for automatic download:

$ python track_v7.py --source 0 --yolo-weights weights/yolov7-tiny.pt --img 640
                                            yolov7.pt
                                            yolov7x.pt 
                                            yolov7-w6.pt 
                                            yolov7-e6.pt 
                                            yolov7-d6.pt 
                                            yolov7-e6e.pt
                                            ...

StrongSORT

The above applies to the ReID models used by StrongSORT as well. Choose a ReID model based on your needs from this ReID model zoo:

$ python track_v*.py --source 0 --strong-sort-weights osnet_x0_25_market1501.pt
                                                   osnet_x0_5_market1501.pt
                                                   osnet_x0_75_msmt17.pt
                                                   osnet_x1_0_msmt17.pt
                                                   ...

Filter tracked classes

By default the tracker tracks all MS COCO classes.

If you only want to track persons, we recommend these weights for increased performance:

python track_v*.py --source 0 --yolo-weights weights/v*.pt --classes 0  # tracks persons, only

If you want to track a subset of the MS COCO classes, add their corresponding indices after the --classes flag:

python track_v*.py --source 0 --yolo-weights weights/v*.pt --classes 15 16  # tracks cats and dogs, only
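Internally, class filtering amounts to dropping every detection whose class index is not in the --classes list before it reaches the tracker. A minimal sketch of that step (`filter_by_class` is a hypothetical helper, not a function from this repo):

```python
def filter_by_class(detections, keep_classes):
    """detections: list of (x1, y1, x2, y2, conf, cls) tuples."""
    keep = set(keep_classes)
    return [d for d in detections if d[5] in keep]

dets = [(10, 10, 50, 50, 0.9, 0),   # person
        (60, 60, 90, 90, 0.8, 15),  # cat
        (5, 5, 20, 20, 0.7, 16)]    # dog
print(filter_by_class(dets, [15, 16]))  # keeps only the cat and the dog
```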

Counter


Get real-time counts of every tracked object, without any ROIs or line intersections:

$ python track_v*.py --source test.mp4 --yolo-weights weights/v*.pt --save-txt --count --show-vid

Draw Object Trajectory

$ python track_v*.py --source test.mp4 --yolo-weights weights/v*.pt --save-txt --count --show-vid --draw

Here is a list of all the possible objects that a Yolov5 model trained on MS COCO can detect. Notice that the indexing for the classes in this repo starts at zero.

MOT compliant results

Can be saved to your experiment folder runs/track/<yolo_model>_<deep_sort_model>/ by

python track_v*.py --source ... --save-txt
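Each line of the saved txt follows the MOT convention: frame number, track ID, then the bounding box as top-left x, top-left y, width, height, plus trailing fields. A small parser sketch, assuming space-separated fields in that order (check the txt your run actually produces before relying on this):

```python
def parse_mot_line(line):
    """Parse one MOT-style result line into a dict (assumed field order)."""
    frame, track_id, x, y, w, h = line.split()[:6]
    return {"frame": int(frame), "id": int(track_id),
            "box": (float(x), float(y), float(w), float(h))}

row = parse_mot_line("1 3 100 200 50 80")
print(row["id"])   # 3
print(row["box"])  # (100.0, 200.0, 50.0, 80.0)
```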

YoloV8 (Counter)


## recommended conda env python=3.10
## conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
## pip install ultralytics
$ python track_v8.py --source 0 1 vid1.mp4 vid2.mp4 --track --count
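Conceptually, the --count overlay just tallies the currently tracked objects per class on each frame. A dependency-free sketch of that bookkeeping (`count_tracks` is a hypothetical helper, not the repo's implementation):

```python
from collections import Counter

def count_tracks(tracks, names):
    """tracks: list of (track_id, class_idx); returns {class name: count}."""
    return {names[c]: n for c, n in Counter(c for _, c in tracks).items()}

names = {0: "person", 2: "car"}
tracks = [(1, 0), (2, 0), (3, 2)]
print(count_tracks(tracks, names))  # {'person': 2, 'car': 1}
```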

Acknowledgements


strongsort-yolo's People

Contributors

bharath5673, iamrajee


strongsort-yolo's Issues

Unknown model

When running the command python track_v7.py --source test_inputs\chase.mp4 --yolo-weights weights/yolov7-tiny.pt --img 640,
I get the error below:
YOLOR v0.1-3-g9ee1835 torch 1.12.1+cpu CPU

Fusing layers...
Model Summary: 200 layers, 6219709 parameters, 229245 gradients
Traceback (most recent call last):
File "yolov5_yolov7_tracking\track_v7.py", line 378, in
detect()
File "yolov5_yolov7_tracking\track_v7.py", line 124, in detect
StrongSORT(
File "yolov5_yolov7_tracking\strong_sort\strong_sort.py", line 40, in init
self.extractor = FeatureExtractor(
File "yolov5_yolov7_tracking\strong_sort/deep/reid\torchreid\utils\feature_extractor.py", line 71, in init
model = build_model(
File "yolov5_yolov7_tracking\strong_sort/deep/reid\torchreid\models_init_.py", line 114, in build_model
raise KeyError(
KeyError: "Unknown model: None. Must be one of ['resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'resnext50_32x4d', 'resnext101_32x8d', 'resnet50_fc512', 'se_resnet50', 'se_resnet50_fc512', 'se_resnet101', 'se_resnext50_32x4d', 'se_resnext101_32x4d', 'densenet121', 'densenet169', 'densenet201', 'densenet161', 'densenet121_fc512', 'inceptionresnetv2', 'inceptionv4', 'xception', 'resnet50_ibn_a', 'resnet50_ibn_b', 'nasnsetmobile', 'mobilenetv2_x1_0', 'mobilenetv2_x1_4', 'shufflenet', 'squeezenet1_0', 'squeezenet1_0_fc512', 'squeezenet1_1', 'shufflenet_v2_x0_5', 'shufflenet_v2_x1_0', 'shufflenet_v2_x1_5', 'shufflenet_v2_x2_0', 'mudeep', 'resnet50mid', 'hacnn', 'pcb_p6', 'pcb_p4', 'mlfn', 'osnet_x1_0', 'osnet_x0_75', 'osnet_x0_5', 'osnet_x0_25', 'osnet_ibn_x1_0', 'osnet_ain_x1_0', 'osnet_ain_x0_75', 'osnet_ain_x0_5', 'osnet_ain_x0_25']"
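This error usually means the ReID architecture could not be inferred from the --strong-sort-weights filename, so build_model receives None. torchreid picks the architecture by matching known model names against the weights path, so the file must keep a name like osnet_x0_25_market1501.pt. A simplified sketch of that matching (illustrative, not torchreid's exact code):

```python
KNOWN_MODELS = ["osnet_x1_0", "osnet_x0_75", "osnet_x0_5", "osnet_x0_25", "resnet50"]

def infer_model_name(weights_path):
    """Return the longest known architecture name found in the path, else None."""
    for name in sorted(KNOWN_MODELS, key=len, reverse=True):
        if name in weights_path:
            return name
    return None  # -> "Unknown model: None" downstream

print(infer_model_name("weights/osnet_x0_25_market1501.pt"))  # osnet_x0_25
print(infer_model_name("weights/my_renamed_reid.pt"))         # None
```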

StrongSORT Features

Hi, I have tested this code with YOLOv7 to track passengers in moving cars. I'm getting low tracking accuracy. Have you implemented the AFLink and GSI features from StrongSORT?

can't play saved video when `--save-vid` is used together with `--count`

@bharath5673, I use the following command to save a demo video: python track_v7.py --source demo/video.mp4 --yolo-weights weights/best.pt --save-txt --count --save-vid --draw. It does save a video with a nonzero size, but media players (VLC, PotPlayer, Windows Media Player, etc.) can't play the saved file. It has no length information either.

However, when I use the above command without --count, it works as expected and I can play the saved video; its size is slightly bigger.

Could you check if this happens on your end?

It'd also be great if you could let me know the exact versions of opencv-python and related packages
Thanks.

Real-time multi-camera

Hello
The description says Real-time multi-camera
How does multicam work? I didn't find this in the description.
Thank you

Multi Camera Tracking

Hello, I have a question: it seems like your source code doesn't support multi-camera tracking yet, right? Thanks!

--save-img ONLY for first detection of each unique tracker ID

First of all, StrongSORT-YOLO is working great with my custom YOLO v7 model! Thank you!

I'm interested in saving only one cropped image for each newly detected, unique tracker ID. I don't need thousands of cropped images for every detection; I just want to be able to efficiently weed out the false positives in post-processing by checking a single frame of each unique tracker ID.

Do you think there is an easy way to do this by modifying the --save-img flag in track_v7.py? Specifically here (line 196):

imc = im0.copy() if save_crop else im0 # for save_crop

Or perhaps lines 323 - 326:
if save_img:
    if dataset.mode == 'image':
        cv2.imwrite(save_path, im0)
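One straightforward approach is to remember which track IDs have already been saved and skip the crop otherwise. A minimal sketch of that gate (pure bookkeeping; `should_save_crop` is a hypothetical helper, and wiring it into the --save-img branch of track_v7.py is left to you):

```python
def should_save_crop(track_id, saved_ids):
    """Return True only the first time a track ID is seen, recording it."""
    if track_id in saved_ids:
        return False
    saved_ids.add(track_id)
    return True

saved_ids = set()
print(should_save_crop(5, saved_ids))  # True  (first sighting of ID 5 -> save crop)
print(should_save_crop(5, saved_ids))  # False (already saved -> skip)
```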

vehicle_side in annotation file

First of all, thank you for the work you have done, it's quite helpful.

In track_v5.py you defined vehicle_side as 0 for left and 1 for right side of the vehicle, right? I have a couple of questions regarding this.

  1. What does vehicle_side imply here?
    I could see that 0 (left side) is assigned if bbox_right is smaller than half of frame_width. I don't think that applies to all cases. In my demo video, a vehicle with ID 1 is coming from top-left towards bottom-right at an intersection. The txt file assigned this vehicle 0 (left side), but the visible vehicle side is the right. Can you explain it a little more?
  2. Why could one need to know the vehicle side in general?

Thank you.

dynamic id assignment

How can I dynamically assign an ID to an object based on meeting a condition? For example, if object A was assigned ID 5, but upon a certain condition I want to change it to 3 or some other number, how can I modify the code to do it?
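Rather than mutating the tracker's internal IDs (which can break its association logic), a safer pattern is a remapping table applied at output time. A hedged sketch (`remap` and `id_map` are hypothetical, not part of this repo):

```python
id_map = {5: 3}  # condition met: display/log internal track 5 as ID 3

def remap(track_id, id_map):
    """Translate an internal tracker ID to the ID you want to display/log."""
    return id_map.get(track_id, track_id)

print(remap(5, id_map))  # 3
print(remap(7, id_map))  # 7 (unchanged)
```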

Not able to acquire Tracking.

I have modified your script to work with custom weights. I am using YOLOv4 for detection. I am only using the tracking algorithm from your repository, providing the image source and all the bounding-box information from my YOLOv4 node over a socket.

Problem

  1. Unable to acquire tracking output.
  2. When output is acquired, unable to track multiple bounding boxes.
  3. The visualization lags really badly.

I am providing you the modified code below. I would be glad if you could help me a bit. I have also modified the update function of strongsort.py.

track_v7.py

import os
import sys
import argparse
import time
from pathlib import Path

import cv2
import torch
import torch.backends.cudnn as cudnn
from numpy import random
import numpy as np
import zmq
import json
import traceback

import warnings
warnings.filterwarnings('ignore')

# Constants
ROOT = Path(__file__).resolve().parents[0]
WEIGHTS = ROOT / 'weights'



from strong_sort.utils.parser import get_config
from strong_sort.strong_sort import StrongSORT
from yolov7.utils.datasets import LoadStreams, LoadImages
from yolov7.utils.general import check_imshow
from yolov7.utils.torch_utils import select_device
from yolov7.utils.plots import plot_one_box


def process_frame_and_boxes(strong, names, colors, device, frame_np, bounding_boxes):
    
    try:
        # Prepare frame
        frame = cv2.imdecode(frame_np, cv2.IMREAD_COLOR)
        im0 = frame.copy()

        # Initialize variables
        nr_sources = len(bounding_boxes)
        outputs = [None] * nr_sources
        trajectory = {}

        # Process each bounding box
        for i, bbox in enumerate(bounding_boxes):
            xmin = bbox["xmin"]
            ymin = bbox["ymin"]
            xmax = bbox["xmax"]
            ymax = bbox["ymax"]
            cls_str = bbox["class"]
            conf = bbox["probability"]
            width = xmax - xmin
            height = ymax - ymin
            #xmin, ymin, xmax, ymax, cls, conf = bbox
            
            try:
                cls = names.index(cls_str)
            except ValueError:
                print(f"Unknown class: {cls_str}")
                continue

            
            # Prepare bounding box data
            bboxes = np.array([xmin, ymin, width, height])
            #id = i  # Assuming id can be derived from the loop index
            cls = int(cls)
            conf = float(conf)
            

            # Track object using StrongSORT
            xywhs = torch.Tensor([bboxes])  # Convert to tensor
            confs = torch.Tensor([conf])
            clss = torch.Tensor([cls])
            print("BB: " ,bbox)
            
            try:
                outputs[i] = strong.update(xywhs.cpu(), confs.cpu(), clss.cpu(), im0)
                print(f"Box:{xywhs} confidence:{confs} output:{outputs[i]}")
            except Exception as e:
                print(f"Unable to process properly: {e}")
            
            #outputs[i] = strongsort_list[i].update(xywhs, confs, clss, im0)
            # draw boxes for visualization
            if len(outputs[i]) > 0:
                for j, (output, conf) in enumerate(zip(outputs[i], confs)):
                    bboxes = output[0:4]
                    id = output[4]
                    cls = int(output[5])
                    # Draw trajectory if required
                    if draw:
                        center = ((int(bboxes[0]) + int(bboxes[2])) // 2, (int(bboxes[1]) + int(bboxes[3])) // 2)
                        if id not in trajectory:
                            trajectory[id] = []
                        trajectory[id].append(center)
                        for i1 in range(1, len(trajectory[id])):
                            if trajectory[id][i1 - 1] is None or trajectory[id][i1] is None:
                                continue
                            thickness = 2
                            try:
                                cv2.line(im0, trajectory[id][i1 - 1], trajectory[id][i1], (0, 0, 255), thickness)
                            except Exception as e:
                                print(f"Error drawing trajectory: {e}")
                    
                    # Draw bounding boxes on image
                    if show_vid:
                        label = f"{names[cls]} ID: {id}"
                        plot_one_box(bboxes, im0, label=label, color=colors[cls], line_thickness=line_thickness)


        # Display the annotated frame once per message, if required
        if show_vid:
            cv2.imshow("Tracking", im0)
            cv2.waitKey(1)  # Adjust as necessary for frame rate
        #if save_img:
        #   cv2.imwrite(save_path, im0)
        #if save_vid:
        #   vid_writer.write(im0)'''
    except Exception as e:
        print(f"Error: {e}")
        print(traceback.format_exc())
        
        
def set_logging():
    import logging
    logging.basicConfig(format="%(message)s", level=logging.INFO)
    
def track_bounding_boxes():
    # Process frames and boxes
    context = zmq.Context()
    socket = context.socket(zmq.SUB)
    socket.connect("tcp://127.0.0.1:5555")
    socket.setsockopt_string(zmq.SUBSCRIBE, "")
    # Initialize StrongSORT
    cfg = get_config()
    cfg.merge_from_file(opt.config_strongsort)
    
    # Placeholder for class names, should be provided
    names = ['person', 'vehicle', 'wheel', 'weapon']
    # Placeholder for colors, should be provided
    colors = [[random.randint(0, 255) for _ in range(3)] for _ in names]
    
    #nr_sources = 1
    # Create as many StrongSORT instances as there are sources
    #strongsort_list = []
    #for i in range(nr_sources):
     #   strongsort_list.append(
    strong = StrongSORT(
        strong_sort_weights,
        device,
        max_dist=cfg.STRONGSORT.MAX_DIST,
        max_iou_distance=cfg.STRONGSORT.MAX_IOU_DISTANCE,
        max_age=cfg.STRONGSORT.MAX_AGE,
        n_init=cfg.STRONGSORT.N_INIT,
        nn_budget=cfg.STRONGSORT.NN_BUDGET,
        mc_lambda=cfg.STRONGSORT.MC_LAMBDA,
        ema_alpha=cfg.STRONGSORT.EMA_ALPHA,
    )
        #)

    

    try:
        while True:
            print("Listening...")
            msg = socket.recv_string()
            frame_data = socket.recv()
            frame_np = np.frombuffer(frame_data, np.uint8)
            if msg and frame_np.size > 0:
                bounding_boxes = json.loads(msg)
                print("bounding boxes: ", bounding_boxes)
                process_frame_and_boxes(strong, names, colors, device, frame_np, bounding_boxes)
            else:
                print("Error: Empty message received")
    except KeyboardInterrupt:
        pass
    except Exception as e:
        print(f"Error during processing: {e}")
    finally:
        socket.close()
        context.term()

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--strong-sort-weights', type=str, default=WEIGHTS / 'osnet_x0_25_msmt17.pt')
    parser.add_argument('--config-strongsort', type=str, default='strong_sort/configs/strong_sort.yaml')
    parser.add_argument('--save-img', action='store_true', help='save results to *.jpg')
    parser.add_argument('--save-vid', action='store_true', help='save video tracking results')
    parser.add_argument('--show-vid', action='store_true', default=True , help='display results')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    parser.add_argument('--project', default='runs/track', help='save results to project/name')
    parser.add_argument('--exp-name', default='exp', help='save results to project/name')
    parser.add_argument('--line-thickness', default=1, type=int, help='bounding box thickness (pixels)')
    parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
    parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
    parser.add_argument('--hide-class', default=False, action='store_true', help='hide IDs')
    parser.add_argument('--draw', action='store_true', help='display object trajectory lines')
    opt = parser.parse_args()

    set_logging()


    strong_sort_weights = opt.strong_sort_weights
    config_strongsort = opt.config_strongsort
    save_img = opt.save_img
    save_vid = opt.save_vid
    show_vid = opt.show_vid
    save_txt = opt.save_txt
    line_thickness = opt.line_thickness
    draw = opt.draw
    
    save_dir = Path(opt.project) / opt.exp_name
    save_dir.mkdir(parents=True, exist_ok=True)
    
     # Initialize variables
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
      # Update as per your implementation
    
    track_bounding_boxes()

Strongsort.py (Update function)

def update(self, bbox_xywh, confidences, classes, ori_img):
        try:
            print(f"Box_xywhs: {bbox_xywh} confdence:{confidences}")
            self.height, self.width = ori_img.shape[:2]
            # generate detections
            features = self._get_features(bbox_xywh, ori_img)
            
            bbox_tlwh = bbox_xywh # self._xywh_to_tlwh(bbox_xywh)
            detections = [Detection(bbox_tlwh[i], conf, features[i]) for i, conf in enumerate(
                confidences)]
            
            # run on non-maximum supression
            boxes = np.array([d.tlwh for d in detections])
            scores = np.array([d.confidence for d in detections])
            for det in detections:
                print(f"tlwh= {det.tlwh} confidence= {det.confidence}")
            print("Classes: ", classes)
            print("Confidence: ", confidences)
            
            print("Update_boxes: ", boxes)
            # update tracker
            self.tracker.predict()
            self.tracker.update(detections, classes, confidences)

            # output bbox identities
            outputs = []
            for track in self.tracker.tracks:
                if not track.is_confirmed() or track.time_since_update > 1:
                    continue

                box = track.to_tlwh()
                print("U11111111")
                x1, y1, x2, y2 = self._tlwh_to_xyxy(box)
                
                track_id = track.track_id
                class_id = track.class_id
                conf = track.conf 
                outputs.append(np.array([x1, y1, x2, y2, track_id, class_id, conf]))
                #print("Converted values:", outputs)
            if len(outputs) > 0:
                outputs = np.stack(outputs, axis=0)
            return outputs
        except Exception as e:
            print(f"Error in strongsort: {e}")
            print(traceback.format_exc())

I would highly appreciate it if you could guide me a bit with it. My concepts are not that strong, so sorry if you notice some silly mistakes.
Thank you

--save-vid not working

--save-vid is not working. It creates a .mp4 video file in runs/track/exp** but it is empty.

SyntaxError

Hello,
when I run python track_v7.py --source 0 --yolo-weights weights/yolov7-tiny.pt --img 640,

I get the following Error:

File "track_v7.py", line 209
s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string
^
SyntaxError: invalid syntax

IndexError: list assignment index out of range

Running over YOLOv5 works with no issue; however, running over YOLOv7 gives this error:

(strongsort) C:\Users\Lenovo>cd \projects\strongsort

(strongsort) C:\projects\strongsort>python track_v7.py --source 0 --yolo-weights weights/yolov7-tiny.pt --classes 0
Namespace(yolo_weights=['weights/yolov7-tiny.pt'], strong_sort_weights=WindowsPath('C:/projects/strongsort/weights/osnet_x0_25_msmt17.pt'), config_strongsort='strong_sort/configs/strong_sort.yaml', source='0', img_size=640, conf_thres=0.25, iou_thres=0.45, device='', show_vid=True, save_txt=False, save_img=False, save_conf=False, nosave=True, save_vid=False, classes=[0], agnostic_nms=False, augment=False, update=False, project='runs/track', exp_name='exp', exist_ok=False, trace=False, line_thickness=1, hide_labels=False, hide_conf=False, hide_class=False, count=False, draw=False)
YOLOR 2022-11-21 torch 1.13.0+cpu CPU

Fusing layers...
Model Summary: 200 layers, 6219709 parameters, 229245 gradients
1/1: 0... success (640x480 at 30.00 FPS).

Traceback (most recent call last):
File "C:\projects\strongsort\track_v7.py", line 386, in
detect()
File "C:\projects\strongsort\track_v7.py", line 189, in detect
curr_frames[i] = im0
IndexError: list assignment index out of range

(strongsort) C:\projects\strongsort>

Tried a few quick solutions but no luck so far...

failure for Filter tracked classes

I try to track a single class by running:
python track_v7.py --source ex1_video_1.mp4 --yolo-weights weights/yolov7x.pt --classes 32


I found that the output video still contains multiple classes.
Thank you for your help.

IndexError: list index out of range

Running YOLOv5 works with no issue; however, running YOLOv7 gives the following error:

$ python3 track_v7.py --yolo-weights yolov7.pt --strong-sort-weights osnet_x0_25_msmt17.pt --source ~/TownCenter/TownCenter.mp4 --save-vid --show-vid --device 0
Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, config_strongsort='strong_sort/configs/strong_sort.yaml', count=False, device='0', draw=False, exist_ok=False, exp_name='exp', hide_class=False, hide_conf=False, hide_labels=False, img_size=640, iou_thres=0.45, line_thickness=1, nosave=True, project='runs/track', save_conf=False, save_img=False, save_txt=False, save_vid=True, show_vid=True, source='/home/dl/TownCenter/TownCenter.mp4', strong_sort_weights='osnet_x0_25_msmt17.pt', trace=False, update=False, yolo_weights=['yolov7.pt'])
YOLOR 🚀 v0.1-115-g072f76c torch 1.13.1+cu117 CUDA:0 (NVIDIA GeForce RTX 3090, 24265.3125MB)

Traceback (most recent call last):
  File "/home/****/yolov7/StrongSORT-YOLO/yolov7/utils/google_utils.py", line 26, in attempt_download
    assets = [x['name'] for x in response['assets']]  # release assets
KeyError: 'assets'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "track_v7.py", line 387, in <module>
    detect()
  File "track_v7.py", line 90, in detect
    model = attempt_load(weights, map_location=device)  # load FP32 model
  File "/home/****/yolov7/StrongSORT-YOLO/yolov7/models/experimental.py", line 251, in attempt_load
    attempt_download(w)
  File "/home/****/yolov7/StrongSORT-YOLO/yolov7/utils/google_utils.py", line 31, in attempt_download
    tag = subprocess.check_output('git tag', shell=True).decode().split()[-1]
IndexError: list index out of range

KeyError: 'assets' issues

Traceback (most recent call last):
File "/home/student/cql/t/yolov7/utils/google_utils.py", line 26, in attempt_download
assets = [x['name'] for x in response['assets']] # release assets
KeyError: 'assets'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/student/cql/t/track_v7.py", line 382, in
detect()
File "/home/student/cql/t/track_v7.py", line 90, in detect
model = attempt_load(weights, map_location=device) # load FP32 model
File "/home/student/cql/t/yolov7/models/experimental.py", line 87, in attempt_load
attempt_download(w)
File "/home/student/cql/t/yolov7/utils/google_utils.py", line 30, in attempt_download
tag = subprocess.check_output('git tag', shell=True).decode().split()[-1]
IndexError: list index out of range

Process finished with exit code 1

Thank you for your help.

multi camera(multi video)

Hi
I want to know how to put in multiple videos (multi-camera) simultaneously, i.e., multiple videos processed in parallel. For example, a person can show up in video 1 (camera 1) and in video 2 (camera 2) at the same time, or a few seconds or minutes later, and should keep the same ID (person ID).

Model not working with custom YOLOv8 weights

I'm encountering an issue with my YOLOv8 model when attempting to use custom weights. Despite specifying the appropriate configuration parameters, the model does not seem to detect objects correctly. Below I've outlined the specifics of the problem along with relevant code snippets and command-line execution details:

model = YOLO('yolov8_cus.pt')            # loading custom YOLOv8 weights
model.overrides['conf'] = 0.3            # NMS confidence threshold
model.overrides['iou'] = 0.4             # NMS IoU threshold
model.overrides['agnostic_nms'] = False  # NMS class-agnostic
model.overrides['max_det'] = 1000        # maximum number of detections per image
names = model.names

Command line I used: python track_v8.py --source cas_yl.mp4 --track

IndexError

I encountered this error when I used the camera.

Traceback (most recent call last):
File "track_v7.py", line 386, in
detect()
File "track_v7.py", line 189, in detect
curr_frames[i] = im0
IndexError: list assignment index out of range

Real-time Multi-Target-Multi-Camera

Hello Bharath, I appreciate your efforts on YOLO detection and tracking, but I'm unable to find anything about multi-camera tracking (MCT). Can you please point me to the code where the MCT stuff lives? Thanks.
