revtheundead/supervision

👋 hello

We write your reusable computer vision tools. Whether you need to load your dataset from your hard drive, draw detections on an image or video, or count how many detections are in a zone, you can count on us! 🤝

💻 install

Pip install the supervision package in a Python>=3.8 environment.

pip install supervision

Read more about desktop, headless, and local installation in our guide.
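The guide also covers a desktop extra that swaps in the GUI-enabled OpenCV build (useful if you want to display frames locally with cv2.imshow). Assuming that extra is published for the version you are installing, it looks like:

pip install supervision[desktop]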

🔥 quickstart

models

Supervision was designed to be model agnostic. Just plug in any classification, detection, or segmentation model. For your convenience, we have created connectors for the most popular libraries like Ultralytics, Transformers, or MMDetection.

>>> import cv2
>>> import supervision as sv
>>> from ultralytics import YOLO

>>> image = cv2.imread(...)
>>> model = YOLO('yolov8s.pt')
>>> result = model(image)[0]
>>> detections = sv.Detections.from_ultralytics(result)

>>> len(detections)
5
👉 more model connectors
  • inference

    Running with Inference requires a Roboflow API key.

    >>> import cv2
    >>> import supervision as sv
    >>> from inference.models.utils import get_roboflow_model
    
    >>> image = cv2.imread(...)
    >>> model = get_roboflow_model(model_id="yolov8s-640", api_key=<ROBOFLOW API KEY>)
    >>> result = model.infer(image)[0]
    >>> detections = sv.Detections.from_inference(result)
    
    >>> len(detections)
    5
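  • transformers

    Supervision also provides a connector for Hugging Face Transformers detection models. The sketch below is illustrative: the DETR checkpoint and the post-processing call are assumptions, so adapt them to the model you actually use.

    >>> import torch
    >>> import supervision as sv
    >>> from PIL import Image
    >>> from transformers import DetrImageProcessor, DetrForObjectDetection
    
    >>> image = Image.open(...)
    >>> processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
    >>> model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
    
    >>> # run the model and convert raw outputs to per-image boxes, labels, and scores
    >>> inputs = processor(images=image, return_tensors="pt")
    >>> with torch.no_grad():
    ...     outputs = model(**inputs)
    >>> width, height = image.size
    >>> results = processor.post_process_object_detection(
    ...     outputs, target_sizes=torch.tensor([[height, width]])
    ... )[0]
    
    >>> detections = sv.Detections.from_transformers(results)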

annotators

Supervision offers a wide range of highly customizable annotators, allowing you to compose the perfect visualization for your use case.

>>> import cv2
>>> import supervision as sv

>>> image = cv2.imread(...)
>>> detections = sv.Detections(...)

>>> bounding_box_annotator = sv.BoundingBoxAnnotator()
>>> annotated_frame = bounding_box_annotator.annotate(
...     scene=image.copy(),
...     detections=detections
... )
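Annotators are meant to be layered. Here is a minimal sketch that adds sv.LabelAnnotator on top of the box annotator above; building the label text from class_id and confidence is just one choice, not the only supported format.

>>> label_annotator = sv.LabelAnnotator()
>>> labels = [
...     f"{class_id} {confidence:.2f}"
...     for class_id, confidence
...     in zip(detections.class_id, detections.confidence)
... ]
>>> annotated_frame = label_annotator.annotate(
...     scene=annotated_frame,
...     detections=detections,
...     labels=labels
... )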
supervision-0.16.0-annotators.mp4

datasets

Supervision provides a set of utils that allow you to load, split, merge, and save datasets in one of the supported formats.

>>> import supervision as sv

>>> dataset = sv.DetectionDataset.from_yolo(
...     images_directory_path=...,
...     annotations_directory_path=...,
...     data_yaml_path=...
... )

>>> dataset.classes
['dog', 'person']

>>> len(dataset)
1000
👉 more dataset utils
  • load

    >>> dataset = sv.DetectionDataset.from_yolo(
    ...     images_directory_path=...,
    ...     annotations_directory_path=...,
    ...     data_yaml_path=...
    ... )
    
    >>> dataset = sv.DetectionDataset.from_pascal_voc(
    ...     images_directory_path=...,
    ...     annotations_directory_path=...
    ... )
    
    >>> dataset = sv.DetectionDataset.from_coco(
    ...     images_directory_path=...,
    ...     annotations_path=...
    ... )
  • split

    >>> train_dataset, test_dataset = dataset.split(split_ratio=0.7)
    >>> test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)
    
    >>> len(train_dataset), len(test_dataset), len(valid_dataset)
    (700, 150, 150)
  • merge

    >>> ds_1 = sv.DetectionDataset(...)
    >>> len(ds_1)
    100
    >>> ds_1.classes
    ['dog', 'person']
    
    >>> ds_2 = sv.DetectionDataset(...)
    >>> len(ds_2)
    200
    >>> ds_2.classes
    ['cat']
    
    >>> ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
    >>> len(ds_merged)
    300
    >>> ds_merged.classes
    ['cat', 'dog', 'person']
  • save

    >>> dataset.as_yolo(
    ...     images_directory_path=...,
    ...     annotations_directory_path=...,
    ...     data_yaml_path=...
    ... )
    
    >>> dataset.as_pascal_voc(
    ...     images_directory_path=...,
    ...     annotations_directory_path=...
    ... )
    
    >>> dataset.as_coco(
    ...     images_directory_path=...,
    ...     annotations_path=...
    ... )
  • convert

    >>> sv.DetectionDataset.from_yolo(
    ...     images_directory_path=...,
    ...     annotations_directory_path=...,
    ...     data_yaml_path=...
    ... ).as_pascal_voc(
    ...     images_directory_path=...,
    ...     annotations_directory_path=...
    ... )

🎬 tutorials

Speed Estimation & Vehicle Tracking | Computer Vision | Open Source

Created: 11 Jan 2024 | Updated: 11 Jan 2024

Learn how to track and estimate the speed of vehicles using YOLO, ByteTrack, and Roboflow Inference. This comprehensive tutorial covers object detection, multi-object tracking, filtering detections, perspective transformation, speed estimation, visualization improvements, and more.
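The tracking step of that tutorial maps onto supervision's ByteTrack wrapper. A rough sketch, assuming the Ultralytics connector from the quickstart and a hypothetical vehicles.mp4 clip:

>>> import supervision as sv
>>> from ultralytics import YOLO

>>> model = YOLO('yolov8s.pt')
>>> tracker = sv.ByteTrack()

>>> for frame in sv.get_video_frames_generator(source_path='vehicles.mp4'):
...     result = model(frame)[0]
...     detections = sv.Detections.from_ultralytics(result)
...     detections = tracker.update_with_detections(detections)
...     # detections.tracker_id now holds a stable ID per vehicle, which is
...     # what per-object speed estimation keys on across frames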


Traffic Analysis with YOLOv8 and ByteTrack - Vehicle Detection and Tracking

Created: 6 Sep 2023 | Updated: 6 Sep 2023

In this video, we explore real-time traffic analysis using YOLOv8 and ByteTrack to detect and track vehicles on aerial images. Harnessing the power of Python and Supervision, we delve deep into assigning cars to specific entry zones and understanding their direction of movement. By visualizing their paths, we gain insights into traffic flow across bustling roundabouts...
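The entry-zone logic described above corresponds to sv.PolygonZone. A minimal sketch with a hypothetical rectangular zone and frame size; replace both with values measured from your own footage:

>>> import numpy as np
>>> import supervision as sv

>>> # hypothetical zone polygon in pixel coordinates
>>> polygon = np.array([[100, 100], [640, 100], [640, 480], [100, 480]])
>>> zone = sv.PolygonZone(
...     polygon=polygon,
...     frame_resolution_wh=(1280, 720)
... )

>>> detections = sv.Detections(...)
>>> in_zone = zone.trigger(detections=detections)  # boolean mask, one entry per detection
>>> detections_in_zone = detections[in_zone]
>>> zone.current_count  # number of detections currently inside the zone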

💜 built with supervision

Did you build something cool using supervision? Let us know!

football-players-tracking-25.mp4
traffic_analysis_result.mov
market-square-result.mp4

📚 documentation

Visit our documentation page to learn how supervision can help you build computer vision applications faster and more reliably.

🏆 contribution

We love your input! Please see our contributing guide to get started. Thank you 🙏 to all our contributors!

