Reverse Dependencies of auto-gptq
The following projects have a declared dependency on auto-gptq:
- airunner — A Stable Diffusion GUI
- airunner-nexus — Run a socket server for AI models.
- akasha-plus — Extension tools for akasha-terminal
- akasha-terminal — Document QA (RAG) package using langchain and chromadb
- auto-round — Repository of AutoRound: Advanced Weight-Only Quantization Algorithm for LLMs
- axolotl — LLM Trainer
- chatdocs — Chat with your documents offline using AI.
- exciton — Natural Language Processing by Exciton Research
- fastnn — A python library and framework for fast neural network computations.
- fms-hf-tuning — FMS HF Tuning
- geniusrise-audio — Audio bolts for geniusrise
- geniusrise-text — Text bolts for geniusrise
- geniusrise-vision — Huggingface bolts for geniusrise
- glayout — A human language to analog layout API with support for different technologies.
- gptdb — GPT-DB is an experimental open-source project that uses localized GPT large models to interact with your data and environment, so there is no risk of data leakage and your data stays 100% private and secure.
- gptq-Quantizer — A Python package for GPTQ quantization
- green-bit-llm — A toolkit for fine-tuning, inferencing, and evaluating GreenBitAI's LLMs.
- h2ogpt — no summary
- indic-eval — A package to make LLM evaluation easier
- lapet — Library that makes it easier to evaluate the quality of LLM outputs
- lazyllm-llamafactory — Easy-to-use LLM fine-tuning framework
- lighteval — A lightweight and configurable evaluation package
- llama2-wrapper — Use llama2-wrapper as your local llama2 backend for Generative Agents / Apps
- llamafactory — Easy-to-use LLM fine-tuning framework
- llamafactory-songlab — Easy-to-use LLM fine-tuning framework
- llm-quantkit — CLI tool for downloading and quantizing LLMs
- llmtuner — Easy-to-use LLM fine-tuning framework
- lm-eval — A framework for evaluating language models
- openbb-chat — Deep learning package to add chat capabilities to OpenBB
- optimum-benchmark — Optimum-Benchmark is a unified multi-backend utility for benchmarking Transformers, Timm, Diffusers and Sentence-Transformers with full support for Optimum's hardware optimizations & quantization schemes.
- ppromptor — An autonomous agent framework for prompt engineering
- readme-ready — Auto-generate code documentation in Markdown format in seconds.
- semantic-ai — Semantic AI RAG System
- sft-dpo-qlora — SFT-DPO-QLora Trainer Package
- xllm — Simple & Cutting Edge LLM Finetuning
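The list above comes from declared package metadata. As a minimal sketch (not part of the listing), the same kind of reverse-dependency check can be reproduced for a local environment by scanning installed distributions for a requirement on auto-gptq; this assumes the third-party `packaging` library is available and only finds packages that are actually installed, unlike the index-wide listing above.

```python
# Sketch: list installed packages that declare a dependency on auto-gptq.
# Assumes the `packaging` library is available (it ships with pip/setuptools).
from importlib.metadata import distributions
from packaging.requirements import Requirement


def installed_reverse_dependencies(target: str = "auto-gptq") -> list[str]:
    dependents = []
    for dist in distributions():
        # dist.requires is a list of requirement strings, or None.
        for req_string in dist.requires or []:
            try:
                req = Requirement(req_string)
            except Exception:
                continue  # skip malformed requirement strings
            # Normalize underscores/hyphens before comparing names.
            if req.name.lower().replace("_", "-") == target:
                dependents.append(dist.metadata["Name"])
                break
    return sorted(set(dependents))


if __name__ == "__main__":
    for name in installed_reverse_dependencies():
        print(name)
```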