We write your reusable computer vision tools. Whether you need to load your dataset from your hard drive, draw detections on an image or video, or count how many detections are in a zone. You can count on us! 🤝
Pip install the supervision package in a Python>=3.8 environment.

```shell
pip install supervision
```
Read more about desktop, headless, and local installation in our guide.
Supervision was designed to be model agnostic. Just plug in any classification, detection, or segmentation model. For your convenience, we have created connectors for the most popular libraries like Ultralytics, Transformers, or MMDetection.
```python
>>> import cv2
>>> import supervision as sv
>>> from ultralytics import YOLO

>>> image = cv2.imread(...)
>>> model = YOLO('yolov8s.pt')
>>> result = model(image)[0]
>>> detections = sv.Detections.from_ultralytics(result)

>>> len(detections)
5
```
👉 more model connectors
- inference
Running with Inference requires a Roboflow API KEY.
```python
>>> import cv2
>>> import supervision as sv
>>> from inference.models.utils import get_roboflow_model

>>> image = cv2.imread(...)
>>> model = get_roboflow_model(model_id="yolov8s-640", api_key=<ROBOFLOW API KEY>)
>>> result = model.infer(image)[0]
>>> detections = sv.Detections.from_inference(result)

>>> len(detections)
5
```
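Every `from_*` connector does essentially the same job: translate a library-specific result object into parallel arrays of boxes, confidences, and class ids. A library-free sketch of that idea (the `raw` dict shape and helper name here are hypothetical, chosen only for illustration; they are not the supervision API):

```python
def normalize_result(raw):
    """Convert a hypothetical raw prediction dict into parallel lists,
    mirroring the kind of normalization a from_* connector performs."""
    xyxy, confidence, class_id = [], [], []
    for pred in raw["predictions"]:
        xyxy.append((pred["x1"], pred["y1"], pred["x2"], pred["y2"]))
        confidence.append(pred["conf"])
        class_id.append(pred["class"])
    return xyxy, confidence, class_id

raw = {"predictions": [
    {"x1": 10, "y1": 20, "x2": 50, "y2": 80, "conf": 0.9, "class": 0},
    {"x1": 5, "y1": 5, "x2": 25, "y2": 30, "conf": 0.6, "class": 2},
]}
boxes, confs, ids = normalize_result(raw)
print(len(boxes))  # 2
```

Once detections live in one common shape, every downstream tool (annotators, trackers, zone counting) works regardless of which model produced them.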
Supervision offers a wide range of highly customizable annotators, allowing you to compose the perfect visualization for your use case.
```python
>>> import cv2
>>> import supervision as sv

>>> image = cv2.imread(...)
>>> detections = sv.Detections(...)

>>> bounding_box_annotator = sv.BoundingBoxAnnotator()
>>> annotated_frame = bounding_box_annotator.annotate(
...     scene=image.copy(),
...     detections=detections
... )
```
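The intro above also mentions counting how many detections fall in a zone; in supervision that is handled by `sv.PolygonZone`. As a library-free illustration of the underlying idea only, here is a minimal ray-casting sketch (the helper names are hypothetical, not part of the supervision API):

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: toggle 'inside' at each edge a horizontal
    ray from the point crosses."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge spans the point's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def count_in_zone(anchor_points, polygon):
    """Count detection anchor points (e.g. box centers) inside a zone."""
    return sum(point_in_polygon(p, polygon) for p in anchor_points)

zone = [(0, 0), (100, 0), (100, 100), (0, 100)]
centers = [(50, 50), (150, 50), (10, 99)]
print(count_in_zone(centers, zone))  # 2
```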
supervision-0.16.0-annotators.mp4
Supervision provides a set of utilities that allow you to load, split, merge, and save datasets in supported formats.
```python
>>> import supervision as sv

>>> dataset = sv.DetectionDataset.from_yolo(
...     images_directory_path=...,
...     annotations_directory_path=...,
...     data_yaml_path=...
... )

>>> dataset.classes
['dog', 'person']

>>> len(dataset)
1000
```
👉 more dataset utilities
- load
```python
>>> dataset = sv.DetectionDataset.from_yolo(
...     images_directory_path=...,
...     annotations_directory_path=...,
...     data_yaml_path=...
... )

>>> dataset = sv.DetectionDataset.from_pascal_voc(
...     images_directory_path=...,
...     annotations_directory_path=...
... )

>>> dataset = sv.DetectionDataset.from_coco(
...     images_directory_path=...,
...     annotations_path=...
... )
```
- split
```python
>>> train_dataset, test_dataset = dataset.split(split_ratio=0.7)
>>> test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)

>>> len(train_dataset), len(test_dataset), len(valid_dataset)
(700, 150, 150)
```
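The chained `split` calls above carve 1000 images into 700/150/150: the first split takes 70% for training, the second divides the remainder in half. A quick library-free sketch of that arithmetic (a hypothetical helper, not supervision code):

```python
def split_counts(total, first_ratio, second_ratio):
    """Mimic chained dataset splits: carve off the train share first,
    then split the remainder into test and valid."""
    train = round(total * first_ratio)
    rest = total - train
    test = round(rest * second_ratio)
    valid = rest - test
    return train, test, valid

print(split_counts(1000, 0.7, 0.5))  # (700, 150, 150)
```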
- merge
```python
>>> ds_1 = sv.DetectionDataset(...)
>>> len(ds_1)
100
>>> ds_1.classes
['dog', 'person']

>>> ds_2 = sv.DetectionDataset(...)
>>> len(ds_2)
200
>>> ds_2.classes
['cat']

>>> ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
>>> len(ds_merged)
300
>>> ds_merged.classes
['cat', 'dog', 'person']
```
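Beyond concatenating images, merging has to unify the class lists and remap each dataset's class ids into the merged index space (note how `['dog', 'person']` and `['cat']` above become a single sorted list). A hypothetical pure-Python sketch of that bookkeeping (not supervision's actual implementation):

```python
def merge_classes(class_lists):
    """Build a sorted, deduplicated class list plus a per-dataset
    remap from old class ids to merged class ids."""
    merged = sorted({name for classes in class_lists for name in classes})
    index = {name: i for i, name in enumerate(merged)}
    remaps = [[index[name] for name in classes] for classes in class_lists]
    return merged, remaps

merged, remaps = merge_classes([['dog', 'person'], ['cat']])
print(merged)  # ['cat', 'dog', 'person']
print(remaps)  # [[1, 2], [0]]
```

Each inner `remaps` list maps a dataset's original class id (the position) to its id in the merged class list.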
- save
```python
>>> dataset.as_yolo(
...     images_directory_path=...,
...     annotations_directory_path=...,
...     data_yaml_path=...
... )

>>> dataset.as_pascal_voc(
...     images_directory_path=...,
...     annotations_directory_path=...
... )

>>> dataset.as_coco(
...     images_directory_path=...,
...     annotations_path=...
... )
```
- convert
```python
>>> sv.DetectionDataset.from_yolo(
...     images_directory_path=...,
...     annotations_directory_path=...,
...     data_yaml_path=...
... ).as_pascal_voc(
...     images_directory_path=...,
...     annotations_directory_path=...
... )
```
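Under the hood, converting between annotation formats is largely coordinate bookkeeping: YOLO stores normalized `(x_center, y_center, width, height)`, while Pascal VOC stores absolute `(xmin, ymin, xmax, ymax)` pixel corners. A minimal sketch of that conversion (a hypothetical helper for illustration, not supervision code):

```python
def yolo_to_voc(box, image_width, image_height):
    """Convert a normalized YOLO (xc, yc, w, h) box to absolute
    Pascal VOC (xmin, ymin, xmax, ymax) pixel coordinates."""
    xc, yc, w, h = box
    xmin = (xc - w / 2) * image_width
    ymin = (yc - h / 2) * image_height
    xmax = (xc + w / 2) * image_width
    ymax = (yc + h / 2) * image_height
    return round(xmin), round(ymin), round(xmax), round(ymax)

# A box centered in a 640x480 image, 20% wide and 40% tall:
print(yolo_to_voc((0.5, 0.5, 0.2, 0.4), 640, 480))  # (256, 144, 384, 336)
```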
Learn how to track and estimate the speed of vehicles using YOLO, ByteTrack, and Roboflow Inference. This comprehensive tutorial covers object detection, multi-object tracking, filtering detections, perspective transformation, speed estimation, visualization improvements, and more.
Traffic analysis with YOLOv8 and ByteTrack - vehicle detection and tracking

In this video, we explore real-time traffic analysis with YOLOv8 and ByteTrack to detect and track vehicles in aerial imagery. Harnessing the power of Python and supervision, we dive into assigning cars to specific entry zones and understanding their direction of movement. By visualizing their paths, we gain insight into traffic flow through a bustling roundabout...
Did you build something cool using supervision? Let us know!
football-players-tracking-25.mp4
traffic_analysis_result.mov
vehicles-step-7-new.mp4
Visit our documentation page to learn how supervision can help you build computer vision applications faster and more reliably.
We love your input! Please see our contributing guide to get started. Thanks 🙏 to all our contributors!