Reverse Dependencies of deepspeed
The following projects have a declared dependency on deepspeed:
- accelerate — Accelerate
- adapter-transformers — A friendly fork of HuggingFace's Transformers, adding Adapters to PyTorch language models
- aeiva — aeiva is a general AI agent framework
- aesthetic-predictor-v2-5 — Aesthetic predictor v2.5
- agilerl — AgileRL is a deep reinforcement learning library focused on improving RL development through RLOps.
- aimet-onnx — AIMET onnx package
- aistrainer — AI Specialization Trainer for LLMs
- aivoifu — Easy and fast AI Waifu voice generation
- alexa-teacher-models — Alexa Teacher Models
- alignment-handbook — The Alignment Handbook
- alpaca-farm — no summary
- alphadev — AlphaDev - Pytorch
- alphafold3-pytorch-lightning-hydra — AlphaFold 3 - Pytorch
- Andromeda-llm — andromeda - Pytorch
- andromeda-torch — Andromeda - Pytorch
- archai — Platform for Neural Architecture Search
- arctic-training — Snowflake LLM training library
- ASRChild — Package for ASRChild
- autobiasdetector — tools for detecting bias patterns of LLMs
- axolotl — LLM Trainer
- badam — Package which implements the algorithm proposed by "BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models".
- catalyst — Catalyst. Accelerated deep learning R&D with PyTorch.
- catalyst-pdm — Catalyst fork compatible with PDM
- chatllama-py — no summary
- cm3 — Description of the cm3 package
- codegeex — CodeGeeX: An Open Multilingual Code Generation Model.
- codemmlu — CodeMMLU Evaluator: A framework for evaluating language models on the CodeMMLU benchmark.
- cody-adapter-transformers — A friendly fork of HuggingFace's Transformers, adding Adapters to PyTorch language models
- collie-lm — CoLLiE: Collaborative Training of Large Language Models in an Efficient Way
- cut-cross-entropy — Code for cut cross entropy, a memory efficient implementation of linear-cross-entropy loss.
- danoliterate — Benchmark of Generative Large Language Models in Danish
- dbgpt-hub — DB-GPT-Hub: Text-to-SQL parsing with LLMs
- deepspeed-mii — deepspeed mii
- factory-sdk — factory SDK
- fair-esm — Evolutionary Scale Modeling (esm): Pretrained language models for proteins. From Facebook AI Research.
- fate-llm — Federated Learning for Large Language Models
- fedml — A research and production integrated edge-cloud library for federated/distributed machine learning anywhere, at any scale.
- fm-optimized-inference — no summary
- fmcore — A specialized toolkit for scaling experimental research with Foundation Models.
- formless — Handwritten + image OCR.
- gigagan — GigaGAN - Scaling up GANs for Text-to-Image Synthesis
- gpt3-torch — GPT3 - Pytorch
- gpt4-torch — GPT4 - Pytorch
- inspiremusic — InspireMusic: A Fundamental Music, Song and Audio Generation Framework and Toolkits
- instructlab-training — Training Library
- kogitune — The Kogitune 🦊 LLM Project
- kosmos-2 — kosmos-2 - Pytorch
- lazyllm — A Low-code Development Tool for Building Multi-Agent LLM Applications.
- lazyllm-beta — A Low-code Development Tool for Building Multi-Agent LLM Applications.
- lazyllm-llamafactory — Easy-to-use LLM fine-tuning framework
- lbster — Language models for Biological Sequence Transformation and Evolutionary Representation.
- Lightning — The Deep Learning framework to train, deploy, and ship AI products Lightning fast.
- lightning-fabric — no summary
- lightning-gpt — GPT training in Lightning
- lightning-lite — no summary
- lightning-transformers — Lightning Transformers.
- llamafactory — Unified Efficient Fine-Tuning of 100+ LLMs
- llamafactory-songlab — Easy-to-use LLM fine-tuning framework
- llava-torch — Towards a GPT-4-like large language and vision assistant.
- llavaction — LLaVAction: Evaluating and Training Multi-Modal Large Language Models for Action Recognition
- llm-blender — LLM-Blender, an innovative ensembling framework that attains consistently superior performance by leveraging the diverse strengths and weaknesses of multiple open-source large language models (LLMs). It mitigates the weaknesses through ranking and integrates the strengths through fused generation to enhance the capability of LLMs.
- llm-optimized-inference — no summary
- llm-serve — An LLM inference solution to quickly deploy production LLM services
- llmtuner — Easy-to-use LLM fine-tuning framework
- lmflow-benchmark — LMFlow: Large Model Flow.
- lmflow-deploy — LMFlow: Large Model Flow.
- lmflow-diffusion — LMFlow: Large Model Flow.
- lmflow-eval — LMFlow: Large Model Flow.
- lmflow-evaluate — LMFlow: Large Model Flow.
- lmflow-evaluator — LMFlow: Large Model Flow.
- lmflow-finetune — LMFlow: Large Model Flow.
- lmflow-finetuner — LMFlow: Large Model Flow.
- lmflow-inference — LMFlow: Large Model Flow.
- lmflow-inferencer — LMFlow: Large Model Flow.
- lmflow-pretrain — LMFlow: Large Model Flow.
- lmflow-pretrainer — LMFlow: Large Model Flow.
- lmflow-vision — LMFlow: Large Model Flow.
- manifest-ml — Manifest for Prompting Foundation Models.
- mantis-vl — Official code for "MANTIS: Interleaved Multi-Image Instruction Tuning"
- mase-tools — Machine-Learning Accelerator System Exploration Tools
- mask2former — Mask2Former
- megaladon — Megaladon - Pytorch
- memorag — A Python package for memory-augmented retrieval-augmented generation
- mengzi-zero-shot — Zero-shot NLU & NLG based on mengzi-t5-base-mt
- metatreelib — PyTorch Implementation for MetaTree: Learning a Decision Tree Algorithm with Transformers
- mw-adapter-transformers — A friendly fork of HuggingFace's Transformers, adding Adapters to PyTorch language models
- narration-xtts2 — A project that, from a JSON file, creates sentence-by-sentence narrations over a list of sentences, with flexible XTTS2 model parameters
- nn-gpt — LLM-Based Neural Network Generator
- oat-llm — Online AlignmenT (OAT) for LLMs.
- ochat — An efficient framework for training and serving top-tier, open-source conversational LLMs.
- oh-my-bloom — Chinese BLOOM language model
- OmniEvent — A toolkit for event extraction.
- OpenMind — openMind is a magician who takes you to experience the mystery and creativity of AI.
- openrlhf — A Ray-based High-performance RLHF framework.
- optimum-benchmark — Optimum-Benchmark is a unified multi-backend utility for benchmarking Transformers, Timm, Diffusers and Sentence-Transformers with full support of Optimum's hardware optimizations & quantization schemes.
- otter-ai — Otter: A Multi-Modal Model with In-Context Instruction Tuning
- pandora-llm — Red-teaming large language models for training data leakage
- phi-torch — Phi - Pytorch
- Protenix — A trainable PyTorch reproduction of AlphaFold 3.
- py-data-juicer — A One-Stop Data Processing System for Large Language Models.
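
For reference, a "declared dependency" here means that deepspeed appears in a project's packaging metadata. Below is a minimal sketch of such a declaration, assuming a setuptools-based project; the project name "my-llm-trainer" and the version bound are illustrative placeholders, not taken from any package listed above.

    # setup.py -- minimal sketch of declaring deepspeed as a dependency (hypothetical project).
    from setuptools import setup, find_packages

    setup(
        name="my-llm-trainer",      # illustrative placeholder name
        version="0.1.0",
        packages=find_packages(),
        install_requires=[
            "deepspeed>=0.9",       # the declared dependency that reverse-dependency listings pick up
        ],
        extras_require={
            # some projects instead expose deepspeed as an optional extra
            "deepspeed": ["deepspeed>=0.9"],
        },
    )

Projects that use pyproject.toml declare the same requirement under the project's dependencies (or an optional-dependencies group); either form is what registers a package in a reverse-dependency listing such as this one.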