Reverse Dependencies of supervision
The following projects have a declared dependency on supervision:
- achatbot — An open source chat bot for voice (and multimodal) assistants
- ArtTracker — Efficient person tracking using YOLOv8 and ZeroMQ
- atar — A CLI tool for AI tasks such as dataset handling, training, and validation.
- autodistill — Distill large foundational models into smaller, domain-specific models for deployment
- autodistill-albef — ALBEF module for use with Autodistill
- autodistill-altclip — AltCLIP model for use with Autodistill.
- autodistill-azure-vision — Azure Vision base model for use with Autodistill
- autodistill-bioclip — BioCLIP model for use with Autodistill
- autodistill-blip — BLIP module for use with Autodistill
- autodistill-clip — CLIP module for use with Autodistill
- autodistill-codet — CoDet model for use with Autodistill
- autodistill-detic — DETIC module for use with Autodistill
- autodistill-detr — DETR module for use with Autodistill
- autodistill-dinov2 — DINOv2 module for use with Autodistill
- autodistill-efficient-yolo-world — EfficientSAM + YOLO-World base model for use with Autodistill
- autodistill-efficientsam — EfficientSAM model for use with Autodistill
- autodistill-eva-clip — EvaClip module for use with Autodistill
- autodistill-fastsam — FastSAM module for use with Autodistill
- autodistill-fastvit — FastViT model for use with Autodistill
- autodistill-florence-2 — Use Florence 2 to auto-label data for use in training fine-tuned object detection models.
- autodistill-gcp-vision — Autodistill Google Cloud Vision module for use in training a custom, fine-tuned model.
- autodistill-gemini — Model for use with Autodistill
- autodistill-gpt-4o — GPT-4o model for use with Autodistill
- autodistill-gpt-4v — GPT-4V model for use with Autodistill
- autodistill-grounded-edgesam — Model for use with Autodistill
- autodistill-grounded-sam — Automatically distill large foundational models into smaller, in-domain models for deployment
- autodistill-grounded-sam-2 — Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models.
- autodistill-grounding-dino — GroundingDINO module for use with Autodistill
- autodistill-hls-geospatial — Use the HLS Geospatial model made by NASA and IBM to generate masks for use in training a fine-tuned segmentation model.
- autodistill-metaclip — MetaCLIP base model for use with Autodistill.
- autodistill-mobileclip — Model for use with Autodistill
- autodistill-owl-vit — OWL-ViT module for use with Autodistill
- autodistill-owlv2 — OWLv2 base model for use with Autodistill.
- autodistill-paligemma — Auto-label data with a PaliGemma model, or fine-tune a PaliGemma model using custom data with Autodistill.
- autodistill-rekognition — AWS Rekognition base model for use with Autodistill
- autodistill-roboflow-universe — Use models on Roboflow Universe to auto-label data for use in model training.
- autodistill-sam-clip — SAM-CLIP model for use with Autodistill
- autodistill-sam-hq — Segment Anything High Quality (SAM HQ) model for use with Autodistill
- autodistill-seggpt — SegGPT for use with Autodistill
- autodistill-segment-anything — Segment Anything model for use with Autodistill
- autodistill-transformers — Use object detection models in Hugging Face Transformers to automatically label data to train a fine-tuned model.
- autodistill-vit — ViT module for use with Autodistill
- autodistill-vlpart — VLPart for use with Autodistill
- autodistill-yolo-world — YOLO World for use with Autodistill
- autodistill-yolonas — YOLO-NAS module for use with Autodistill
- autodistill-yolov5 — YOLOv5 module for use with Autodistill
- AxleSpacing — A Python library for vehicle axle spacing detection.
- baseballcv — A collection of tools and models designed to aid in the use of Computer Vision in baseball.
- bluevision — Bluesignal Vision AI project
- coralnet-toolbox — Tools for annotating and developing ML models for benthic imagery
- cua-som — Computer Vision and OCR library for detecting and analyzing UI elements
- ezsam — Extract foreground from images or video via text prompt
- face-recognition-pkg — A package for face recognition in images and videos
- gdinopy — open-set object detector
- gptstream — OpenAI GPT Vision meets your webcam
- groundingdino-gml — open-set object detector
- groundingdino-iscas — open-set object detector
- groundingdino-stk — Packages the `GroundingDINO` library for installation from `PyPI`; last updated `2023/06/29`.
- groundingdino-yl — open-set object detector
- groundino-samnet — A SAM model with GroundingDINO model for feet segmentation
- gvision — End-to-end automation platform for computer vision projects.
- indoxMiner — Indox Data Extraction
- inference — With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.
- inference-cli — With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference CLI.
- inference-core — With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.
- inference-cpu — With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.
- inference-gpu — With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.
- inference-sdk — With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.
- juxtapose — no summary
- maestro — Streamline the fine-tuning process for vision-language models like PaliGemma 2, Florence-2, and Qwen2.5-VL.
- mdify — A powerful tool to extract text, tables, charts, and formulas from documents and convert them into Markdown format, ideal to improve LLM's accuracy and for versatile document processing.
- meta-label — Meta Auto Label Toolkit
- modelzilla — A lightweight Python package that lets developers turn any AI model into a fully functional Command-Line Interface (CLI) plugin
- multimodal-maestro — Visual Prompting for Large Multimodal Models (LMMs)
- nw-groundingdino — open-set object detector
- ocr-loc — Add your description here
- opsci-toolbox — a complete toolbox
- ovsegmentation — Open vocabulary detection and segmentation package
- pdf-processing-lib — A library for processing PDF documents, images, extracting text, parsing TSV to JSON, and merging JSON files
- pi-inference — pi-inference
- pyniche — An AI Library for Niche Squad
- pyresearch — Computer Vision Helping Library
- PytorchWildlife — a PyTorch Collaborative Deep Learning Framework for Conservation.
- reefbuilder_segmentation — An instance segmentation library focussed on solving marine conservation-related problems
- rf-groundingdino — open-set object detector
- rfdetr — RF-DETR
- rlmc — Python utils for AI 🚀
- rt-pose — Real-time (GPU) pose estimation pipeline with 🤗 Transformers
- segment-lidar — A package for segmenting LiDAR data using Segment-Anything Model (SAM) from Meta AI Research.
- setofmark — Visual Prompting for Large Multimodal Models (LMMs)
- smart-reid — With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference CLI.
- ubigeoaii — A library that automates complex spatial processing tasks
- video-annotator — A Python package for video annotation, object tracking, and cropping
- visionframe — Empower Your Computer Vision Projects with VisionFrame: Seamlessly Handle Video and Image
- visionscript — VisionScript is an abstract programming language for doing common computer vision tasks, fast.
- vodin — Odin - Pytorch
- webcamgpt — WebcamGPT - chat with video stream
- yolo-world-open — YOLO-World: Real-time Open Vocabulary Object Detection