Reverse Dependencies of hf-transfer
The following projects have a declared dependency on hf-transfer:
- aibrix — AIBrix, the foundational building blocks for constructing your own GenAI inference infrastructure.
- alignment-handbook — The Alignment Handbook
- autora-doc — Automatic documentation generator from AutoRA code
- autotrain-advanced — no summary
- axolotl — LLM Trainer
- briton — Python component for using Briton
- cheesechaser — Swiftly get tons of images from indexed tars on Huggingface.
- competitions — Hugging Face Competitions
- enova — enova
- flow-judge — A small yet powerful LM Judge
- giantmusictransformer — Fast multi-instrumental music transformer with the full MIDI instrument range, efficient encoding, octo-velocity, and outro tokens
- h2ogpt — no summary
- HF-fastup — Fast, parallel upload of large datasets to the Hugging Face Datasets hub.
- hf-model-downloader — HuggingFace model downloader
- hfutils — Useful utilities for huggingface
- huggingface-hub — Client library to download and publish models, datasets and other repos on the huggingface.co hub
- hulu-evaluate — Client library to train and evaluate models on the HuLu benchmark.
- infinity_emb — Infinity is a high-throughput, low-latency REST API for serving text embeddings, reranking models, and CLIP.
- lh-webtool — A web tool package
- llm-foundry — LLM Foundry
- llm-serve — An LLM inference solution for quickly deploying production LLM services
- llm-swarm — no summary
- lm-eval — A framework for evaluating language models
- lmms-eval — A framework for evaluating large multi-modality language models
- mini-dust3r — Miniature version of dust3r, focused on inference
- neurosis — A neural network trainer (for weebs)
- optimum-nvidia — Optimum Nvidia is the interface between Hugging Face Transformers and NVIDIA GPUs.
- outpostcli — CLI for Outpost
- plancraft — Plancraft: an evaluation dataset for planning with LLM agents
- ppdiffusers — PPDiffusers: Diffusers toolbox implemented based on PaddlePaddle
- rewardbench — Tools for evaluating reward models
- sagemode — Deploy, scale, and monitor your ML models all with one click. Native to AWS.
- scoamp — SenseCore AI Model Platform Command Line Tool
- scoamppro — SenseCore AI Model Platform Command Line Tool
- sglang — SGLang is yet another fast serving framework for large language models and vision language models.
- sglang-router — SGLang is yet another fast serving framework for large language models and vision language models.
- swarms-cloud — Swarms Cloud - Pytorch
- tooncraftersimple — Simple Tooncrafter Implementation
- unsloth — 2-5X faster LLM finetuning
- unsloth-zoo — Utils for Unsloth
- vllm-tgis-adapter — vLLM adapter for a TGIS-compatible grpc server
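
Most of these projects use hf-transfer indirectly: it is commonly declared either as a direct pin or via the huggingface_hub `hf_transfer` extra, and huggingface_hub only routes downloads through it when the corresponding environment flag is set. The sketch below illustrates that typical usage pattern, assuming huggingface_hub is installed; the repo id and filename are examples only, not taken from any project above.

```python
import os

# Opt in to the Rust-based hf-transfer backend before huggingface_hub runs.
# Without this flag (or without hf-transfer installed), downloads fall back
# to the default pure-Python path.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

# Download a single file from a public repo; with the flag set and
# hf-transfer installed, huggingface_hub delegates the transfer to it.
path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(path)
```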