Reverse Dependencies of bitsandbytes
The following projects have a declared dependency on bitsandbytes:
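A "declared dependency" here means the project lists bitsandbytes in its packaging metadata (its PEP 508 requirement strings). As an illustration only, and not part of any project listed below, a minimal sketch using just the standard library can check whether a locally installed package declares such a dependency:

```python
import re
from importlib.metadata import requires, PackageNotFoundError

def declares_dependency(package: str, dep: str = "bitsandbytes") -> bool:
    """Return True if the installed `package` declares `dep` as a requirement."""
    try:
        # requires() yields PEP 508 strings, e.g. "bitsandbytes>=0.41; extra == 'gpu'"
        reqs = requires(package) or []
    except PackageNotFoundError:
        return False
    for r in reqs:
        # Extract the leading distribution name, before any version specifier or marker.
        m = re.match(r"[A-Za-z0-9][A-Za-z0-9._-]*", r)
        if m and m.group(0).lower().replace("_", "-") == dep.lower().replace("_", "-"):
            return True
    return False
```

This only inspects packages installed in the current environment, so a project from the list below must be installed locally for the check to return True; the helper name `declares_dependency` is an assumption of this sketch, not an API of bitsandbytes or any listed project.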
- img2tags — no summary
- impetus — A tool/library to benchmark LLM performance on all kinds of hardware!
- indic-eval — A package to make LLM evaluation easier
- indomain — no summary
- inranker — no summary
- instructlab-training — Training Library
- invokeai — An implementation of Stable Diffusion which provides various new features and options to aid the image generation process
- irisml-tasks-llava — Irisml adapter tasks for LLAVA models
- jac-nlp — no summary
- kapipe — A learnable pipeline for knowledge acquisition
- kogitune — The Kogitune 🦊 LLM Project
- KosmosX — Transformers at zeta scales
- langanisa — A langchain, transformers, and attention_sinks wrapper for longform response generation.
- latentsae — LatentSAE: Training and inference for SAEs on embeddings
- lavague-llms-huggingface — HuggingFaceLLM integration for lavague
- lazyllm-llamafactory — Easy-to-use LLM fine-tuning framework
- lighteval — A lightweight and configurable evaluation package
- Lightning — The Deep Learning framework to train, deploy, and ship AI products Lightning fast.
- lightning-fabric — no summary
- lisa-on-cuda — LISA (Reasoning Segmentation via Large Language Model) on CUDA, now with Hugging Face ZeroGPU support!
- litGPT — Hackable implementation of state-of-the-art open-source LLMs
- llama-index-extra-llm — A simple extension for LlamaIndex to better support LLMs such as DeepSeek.
- llama-index-packs-zephyr-query-engine — llama-index packs zephyr_query_engine integration
- llama-recipes — Llama-recipes is a companion project to the Llama models. Its goal is to provide examples for quickly getting started with fine-tuning for domain adaptation and for running inference on the fine-tuned models.
- llama-trainer — Llama trainer utility
- llama2-terminal — Llama2 Terminal Tools Project
- llama2-wrapper — Use llama2-wrapper as your local llama2 backend for Generative Agents / Apps
- llamafactory — Easy-to-use LLM fine-tuning framework
- llamafactory-songlab — Easy-to-use LLM fine-tuning framework
- llamagym — Fine-tune LLM agents with online reinforcement learning
- llamatune — Haven's Tuning Library for LLM finetuning
- llava-torch — Towards a GPT-4-like large language and visual assistant.
- llm-blender — LLM-Blender, an ensembling framework that attains consistently superior performance by leveraging the diverse strengths and weaknesses of multiple open-source large language models (LLMs). LLM-Blender mitigates weaknesses through ranking and integrates strengths through generation fusion to enhance the capability of LLMs.
- llm-connect — LLM Connect API
- LLM-keyword-extractor — A Python package to extract keywords from a given text using LLMs
- llm-lens — llm-lens is a Python package for CV as NLP: run descriptive image modules on images, then pass those descriptions to a Large Language Model (LLM) to reason about the images.
- llm-serve — An LLM inference solution for quickly deploying production LLM services
- llm-toolkit — LLM Finetuning resource hub + toolkit
- llmpool — Large Language Models' pool management library
- llmppl — A package for calculating perplexity using various language models
- llmtuner — Easy-to-use LLM fine-tuning framework
- lm-buddy — Ray-centric library for finetuning and evaluation of (large) language models.
- lm-polygraph — Uncertainty Estimation Toolkit for Transformer Language Models
- lmetric — Large Model Metrics
- lmflow-benchmark — LMFlow: Large Model Flow.
- lmflow-deploy — LMFlow: Large Model Flow.
- lmflow-diffusion — LMFlow: Large Model Flow.
- lmflow-eval — LMFlow: Large Model Flow.
- lmflow-evaluate — LMFlow: Large Model Flow.
- lmflow-evaluator — LMFlow: Large Model Flow.
- lmflow-finetune — LMFlow: Large Model Flow.
- lmflow-finetuner — LMFlow: Large Model Flow.
- lmflow-inference — LMFlow: Large Model Flow.
- lmflow-inferencer — LMFlow: Large Model Flow.
- lmflow-pretrain — LMFlow: Large Model Flow.
- lmflow-pretrainer — LMFlow: Large Model Flow.
- lmflow-vision — LMFlow: Large Model Flow.
- lmquant — A package for evaluating quantization of large foundation models in deep learning.
- lmwrapper — An object-oriented wrapper around language models with caching, batching, and more.
- local-gemma — no summary
- lodestonegpt — 🤖 Modular Auto-GPT Framework built for Project Lodestone
- LongNet — LongNet - Pytorch
- loopgpt — Modular Auto-GPT Framework
- mantis-vl — Official code for "MANTIS: Interleaved Multi-Image Instruction Tuning"
- medusa-llm — Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads
- mexca — Emotion expression capture from multiple modalities.
- mm-poe — Multiple Choice Reasoning via Process of Elimination using Multi-Modal Models
- mmengine — Engine of OpenMMLab projects
- mmengine-lite — Engine of OpenMMLab projects
- mmengine-open — Engine of OpenMMLab projects
- mmlm — no summary
- moipg — This is a test for my python package upload to pypi
- MovieChat — Long video understanding
- naeural-core — Naeural Core is the backbone of the Naeural Edge Protocol.
- naifu — naifu is designed for training generative models with various configurations and features.
- nataili — Nataili: Multimodal AI Python Library
- nerfstudio — All-in-one repository for state-of-the-art NeRFs
- neural-rag — no summary
- neuralnest — NeuralNest: An open-source personal AI assistant designed for seamless data integration from various sources, offering versatile connections to large language models (LLMs), enhanced security, and human-in-the-loop options for a personalized AI experience.
- neurosis — A neural network trainer (for weebs)
- nixietune — A semantic search embedding model fine-tuning tool
- nl2query — no summary
- nstm — This is a test for my python package upload to pypi
- oat-llm — Online AlignmenT (OAT) for LLMs.
- olive-ai — Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs.
- open-gpt-torch — An open-source, cloud-native serving framework for large multi-modal models (LMMs).
- openbb-chat — Deep learning package to add chat capabilities to OpenBB
- openrlhf — A Ray-based High-performance RLHF framework.
- optillm — An optimizing inference proxy for LLMs.
- optimum-benchmark — Optimum-Benchmark is a unified multi-backend utility for benchmarking Transformers, Timm, Diffusers and Sentence-Transformers with full support of Optimum's hardware optimizations & quantization schemes.
- owlsight — Owlsight is a command-line tool that combines open-source AI models with Python functionality to create a powerful AI assistant.
- petals — Easy way to efficiently run 100B+ language models without high-end GPUs
- pgsocr — A command line utility for converting Blu-ray subs to SRT or ASS using AI Language Models.
- plotano — no summary
- promptcraft — PromptCraft: A Prompt Perturbation Toolkit for Prompt Robustness Analysis
- psruq-python — Code for uncertainty quantification with proper scoring rules.
- py-dreambooth — Easily create your own AI avatar images!
- pyiqa — PyTorch Toolbox for Image Quality Assessment
- pykoi — pykoi: Active learning in one unified interface
- pyllmsearch — LLM Powered Advanced RAG Application