Reverse Dependencies of autodistill
The following projects have a declared dependency on autodistill:
- autodistill-albef — ALBEF module for use with Autodistill
- autodistill-altclip — AltCLIP model for use with Autodistill
- autodistill-azure-vision — Azure Vision base model for use with Autodistill
- autodistill-bioclip — BioCLIP model for use with Autodistill
- autodistill-blip — BLIP module for use with Autodistill
- autodistill-codet — CoDet model for use with Autodistill
- autodistill-detic — DETIC module for use with Autodistill
- autodistill-detr — DETR module for use with Autodistill
- autodistill-dinov2 — DINOv2 module for use with Autodistill
- autodistill-distilbert — DistilBERT model for use with Autodistill
- autodistill-efficientsam — EfficientSAM model for use with Autodistill
- autodistill-eva-clip — EVA-CLIP module for use with Autodistill
- autodistill-fastsam — FastSAM module for use with Autodistill
- autodistill-fastvit — FastViT model for use with Autodistill
- autodistill-florence-2 — Use Florence 2 to auto-label data for use in training fine-tuned object detection models.
- autodistill-gcp-vision — Autodistill Google Cloud Vision module for use in training a custom, fine-tuned model.
- autodistill-gemini — Model for use with Autodistill
- autodistill-gpt-4o — GPT-4o model for use with Autodistill
- autodistill-gpt-4v — GPT-4V model for use with Autodistill
- autodistill-gpt-text — Model for use with Autodistill
- autodistill-grounded-edgesam — Model for use with Autodistill
- autodistill-grounded-sam — Automatically distill large foundational models into smaller, in-domain models for deployment
- autodistill-grounded-sam-2 — Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models.
- autodistill-grounding-dino — GroundingDINO module for use with Autodistill
- autodistill-hls-geospatial — Use the HLS Geospatial model made by NASA and IBM to generate masks for use in training a fine-tuned segmentation model.
- autodistill-kosmos-2 — Kosmos-2 base model for use with Autodistill
- autodistill-metaclip — MetaCLIP base model for use with Autodistill
- autodistill-mobileclip — Model for use with Autodistill
- autodistill-owlv2 — OWLv2 base model for use with Autodistill
- autodistill-paligemma — Auto-label data with a PaliGemma model, or fine-tune a PaliGemma model using custom data with Autodistill
- autodistill-rekognition — AWS Rekognition base model for use with Autodistill
- autodistill-roboflow-universe — Use models on Roboflow Universe to auto-label data for use in model training.
- autodistill-sam-clip — SAM-CLIP model for use with Autodistill
- autodistill-sam-hq — Segment Anything High Quality (SAM HQ) model for use with Autodistill
- autodistill-seggpt — SegGPT for use with Autodistill
- autodistill-segment-anything — Segment Anything model for use with Autodistill
- autodistill-setfit — Train SetFit models with Autodistill
- autodistill-siglip — SigLIP base model for use with Autodistill
- autodistill-transformers — Use object detection models in Hugging Face Transformers to automatically label data to train a fine-tuned model.
- autodistill-vit — ViT module for use with Autodistill
- autodistill-vlpart — VLPart for use with Autodistill
- autodistill-yolo-world — YOLO World for use with Autodistill
- autodistill-yolonas — YOLO-NAS module for use with Autodistill
- autodistill-yolov11 — Auto-label data with, and train, YOLOv11 models
- autodistill-yolov5 — YOLOv5 module for use with Autodistill
- autodistill-yolov8 — Automatically distill large foundational models into smaller, in-domain models for deployment
- coralnet-toolbox — Tools for annotating and developing ML models for benthic imagery
- visionscript — VisionScript is an abstract programming language for doing common computer vision tasks, fast.
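
Nearly every plugin above follows the same distillation pattern: a large foundation ("base") model is paired with an ontology that maps text prompts to class names, the base model auto-labels a set of images, and the resulting dataset is used to train a smaller in-domain ("target") model. As a rough, standard-library-only sketch of that flow (the class and method names here are illustrative stand-ins, not Autodistill's actual API):

```python
from dataclasses import dataclass


@dataclass
class Ontology:
    """Maps a prompt sent to the base model to the class name written to labels."""
    prompt_to_class: dict[str, str]

    def classes(self) -> list[str]:
        return list(self.prompt_to_class.values())


class BaseModel:
    """Stand-in for a foundation-model plugin (e.g. a grounded detector)."""

    def __init__(self, ontology: Ontology):
        self.ontology = ontology

    def predict(self, image: str) -> list[str]:
        # A real plugin would run inference here; this toy version
        # pretends every prompt matches every image.
        return [cls for cls in self.ontology.prompt_to_class.values()]

    def label(self, images: list[str]) -> dict[str, list[str]]:
        # Auto-label a batch of images into a training dataset.
        return {img: self.predict(img) for img in images}


ontology = Ontology({"shipping container": "container"})
dataset = BaseModel(ontology).label(["img1.jpg", "img2.jpg"])
# dataset now holds per-image labels, ready to train a small
# target model (e.g. one of the YOLO plugins listed above).
```

This mirrors the split visible in the list itself: most entries wrap a base model (GroundedSAM, OWLv2, GPT-4o, ...) used for labeling, while a few (the YOLO and SetFit entries) are target models trained on the labeled output.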