Reverse Dependencies of peft
The following projects have a declared dependency on peft:
- adaptor — Adaptor: Objective-centric Adaptation Framework for Language Models.
- africanwhisper — A framework for fast fine-tuning and API endpoint deployment of the Whisper model, specifically developed to accelerate Automatic Speech Recognition (ASR) for African languages.
- agentgym — Agent Gym - PyTorch
- ai2-scholar-qa — Python package to embed the Ai2 Scholar QA functionality in another application
- aiforthechurch — Package for training and deploying doctrinally correct LLMs.
- aimet-onnx — AIMET onnx package
- airoboros — Updated and improved implementation of the self-instruct system.
- airunner — A Stable Diffusion GUI
- airunner-nexus — Run a socket server for AI models.
- aistrainer — AI Specialization Trainer for LLMs
- alf-t5 — ALF-T5 - Adaptive Language Framework for T5 is a machine learning framework for training Neural Machine Translation systems and, subsequently, performing bidirectional translation.
- alignment-handbook — The Alignment Handbook
- alpaca-eval — AlpacaEval: An Automatic Evaluator of Instruction-following Models
- angle-emb — AnglE-optimize Text Embeddings
- apiprompting — Package for an easy implementation of the paper "Attention Prompting on Image for Large Vision-Language Models".
- arbor-ai — A framework for fine-tuning and managing language models
- arcee-align — The open source toolkit for finetuning and deploying LLMs
- arctic-training — Snowflake LLM training library
- argilla-v1 — Open-source tool for exploring, labeling, and monitoring data for NLP projects.
- artcraft — Image generation based on diffusers
- Assistant — Your very own Assistant. Because you deserve it.
- atomgpt — atomgpt
- Attention-Maps-Extraction — A package for extracting attention maps.
- auto-gptq — An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
- autobiasdetector — tools for detecting bias patterns of LLMs
- autodistill-florence-2 — Use Florence 2 to auto-label data for use in training fine-tuned object detection models.
- autodistill-paligemma — Auto-label data with a PaliGemma model, or fine-tune a PaliGemma model using custom data with Autodistill.
- AutoRAG — Automatically evaluate RAG pipelines with your own data. Find the optimal structure for a new RAG product.
- autotrain-advanced — no summary
- autotrain-llm — autotrain_llm
- autotransformers — a Python package for automatic training and benchmarking of Language Models.
- axolotl — LLM Trainer
- azureml-acft-accelerator — Contains the acft accelerator package used in scripts to build the azureml components.
- azureml-acft-image-components — Contains the code for vision model components.
- baseballcv — A collection of tools and models designed to aid in the use of Computer Vision in baseball.
- bayesian-lora — Bayesian LoRA adapters for Language Models
- be-great-v — (A Fork) Generating Realistic Tabular Data using Large Language Models
- beevibe — A lightweight framework for training and deploying language models for thematic text classification.
- beir — A Heterogeneous Benchmark for Information Retrieval
- bellem — My personal library
- beprepared — no summary
- biomed-multi-alignment — MAMMAL (Molecular Aligned Multi-Modal Architecture and Language), a flexible, multi-domain architecture with an adaptable task prompt syntax.
- bioseba — no summary
- bloombee — A short description of your project
- caikit-nlp — Caikit NLP
- cappr — Completion After Prompt Probability. Make your LLM make a choice
- cehrbert — CEHR-BERT: Incorporating temporal information from structured EHR data to improve prediction tasks
- cehrgpt — CEHR-GPT: Generating Electronic Health Records with Chronological Patient Timelines
- closedmindedness — Text classifier of closed-mindedness
- codemmlu — CodeMMLU Evaluator: A framework for evaluating language models on CodeMMLU benchmark.
- collie-lm — CoLLiE: Collaborative Training of Large Language Models in an Efficient Way
- colpali-engine — The code used to train and run inference with the ColPali architecture.
- comfyui — An installable version of ComfyUI
- competitions — Hugging Face Competitions
- composer — Composer is a PyTorch library that enables you to train neural networks faster, at lower cost, and to higher accuracy.
- continuing-education — System to ease incremental training of a Hugging Face transformer model from a large S3-based dataset
- cornstarch — A multimodal model training toolkit
- cycleformers — A comprehensive implementation of the cycle-consistency training paradigm, extending the Hugging Face Transformers trainer API to accommodate arbitrary combinations of generative models.
- cyphertune — A Trainer for Fine-tuning LLMs for Text-to-Cypher Datasets
- dallm — Domain Adapted Language Model
- dalpha-ai — no summary
- dalpha-ai-cpu — no summary
- danoliterate — Benchmark of Generative Large Language Models in Danish
- datadreamer.dev — Prompt. Generate Synthetic Data. Train & Align Models.
- dataquality — no summary
- dbgpt-hub — DB-GPT-Hub: Text-to-SQL parsing with LLMs
- dev-laiser — LAiSER (Leveraging Artificial Intelligence for Skill Extraction & Research) is a tool designed to help learners, educators, and employers extract and share trusted information about skills. It uses a fine-tuned language model to extract raw skill keywords from text, then aligns them with a predefined taxonomy. You can find more technical details in the project’s paper.md and an overview in the README.md.
- dgenerate — Batch image generation and manipulation tool supporting Stable Diffusion and related techniques / algorithms, with support for video and animated image processing.
- diffusers — State-of-the-art diffusion in PyTorch and JAX.
- distllm — Distributed Inference for Large Language Models.
- ditty — no summary
- dora-qwen2-5-vl — Dora Node for VLM
- dora-qwenvl — Dora Node for VLM
- dreamsim — DreamSim similarity metric
- easyeditor — easyeditor - Editing Large Language Models
- extralit — Open-source tool for accurate & fast scientific literature data extraction with LLM and human-in-the-loop.
- ezsmdeploy — Amazon SageMaker and Bedrock custom model deployments made easy
- factory-sdk — factory SDK
- fastckpt — A fast gradient checkpointing strategy for training with memory-efficient attention (e.g., FlashAttention).
- fastllama-python-test — no summary
- fastvideo — FastVideo
- fate-llm — Federated Learning for Large Language Models
- fed-rag — A framework for federated fine-tuning of retrieval-augmented generation (RAG) systems.
- fedem — A decentralized framework to train foundational models
- fedml — A research- and production-integrated edge-cloud library for federated/distributed machine learning, anywhere and at any scale.
- fin-art — A module for generating stylistic images
- finetrainers — Finetrainers is a work-in-progress library to support (accessible) training of diffusion models
- finetuna — no summary
- FineTune-Mistral — A package for fine-tuning the Mistral model and generating responses.
- FineTune-Uunsloth-Mistral-7b — A package for fine-tuning the Mistral model and generating responses.
- finetuning-suite — A fine-tuning suite based on Transformers and LoRA.
- fireredasr — FireRed ASR
- flashrag-dev — A library for efficient Retrieval-Augmented Generation research
- flashvideo — flashvideo is a lightweight framework for accelerating large video diffusion models.
- flexeval — no summary
- fm-training-estimator — A package of Estimators for Large Language Model Training.
- fms-acceleration — FMS Acceleration Plugin Framework
- fms-hf-tuning — FMS HF Tuning
- fmtr.tools — Collection of high-level tools to simplify everyday development tasks, with a focus on AI/ML
- fp8-coat — COAT: Compressing Optimizer States and Activations for Memory-Efficient FP8 Training (https://arxiv.org/abs/2410.19313)
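
Each entry above is derived from the dependency metadata the distribution itself publishes. As a minimal sketch of what "a declared dependency on peft" means in practice (illustrative only; the package queried must be installed locally), the same relation can be checked with Python's importlib.metadata and the packaging library:

```python
from importlib.metadata import PackageNotFoundError, requires
from packaging.requirements import Requirement

def declares_peft(dist_name: str) -> bool:
    """Return True if the installed distribution declares peft as a dependency."""
    try:
        specs = requires(dist_name) or []  # e.g. ["peft>=0.6.0", ...]
    except PackageNotFoundError:
        return False
    # Requirement() parses each requirement string; .name drops version pins
    # and environment markers such as "; extra == 'training'".
    return any(Requirement(spec).name.lower() == "peft" for spec in specs)

# Illustrative usage against an entry from the list above:
print(declares_peft("diffusers"))
```

Note that requirements declared only under optional extras also count here, which is one reason listings like this can include projects whose peft integration is optional.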