Reverse Dependencies of loralib
The following projects have a declared dependency on loralib:
- adamix-gpt2 — PyTorch implementation of low-rank adaptation (LoRA) and Adamix, a parameter-efficient approach to adapt a large pre-trained deep learning model which obtains performance better than full fine-tuning.
- AdapterLoRa — A tool for adapting larger Transformer-based models, with quantization, built on top of the LoRa and LoRa-Torch libraries.
- chatopt — ChatOPT
- dp-transformers — Differentially-private transformers using HuggingFace and Opacus
- finetune-eval-harness — Finetune_Eval_Harness
- h2ogpt — no summary
- jac-nlp — no summary
- lazyllm — A Low-code Development Tool For Building Multi-agent LLMs Applications.
- lazyllm-beta — A Low-code Development Tool For Building Multi-agent LLMs Applications.
- llama-recipes — Llama-recipes is a companion project to the Llama models. Its goal is to provide examples for quickly getting started with fine-tuning for domain adaptation and for running inference on the fine-tuned models.
- oat-llm — Online AlignmenT (OAT) for LLMs.
- openrlhf — A Ray-based High-performance RLHF framework.
- openthaigpt — OpenThaiGPT focuses on developing a Thai chatbot system with capabilities equivalent to ChatGPT, able to connect to external systems and retrieve data flexibly. It is easily expandable and customizable, and is developed as free open-source software for everyone.
- rnaformer — RNAformer
- strategais — A Python library for deploying large language models (LLMs) in local environments.
- TokenProbs — Extract token-level probabilities from LLMs for classification-type outputs.
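All of the projects above build on the same idea loralib implements: low-rank adaptation (LoRA), where instead of fine-tuning a full weight matrix you learn a small low-rank delta. Below is a minimal pure-Python sketch of that arithmetic (hypothetical helper names; the real loralib packages this into PyTorch layers such as `lora.Linear`): the adapted weight is `W + B @ A`, where `B` is `d_out × r` and `A` is `r × d_in` with `r` much smaller than the matrix dimensions.

```python
# Sketch of the low-rank update behind LoRA, using plain lists of lists.
# Assumption: helper names (matmul, lora_update) are illustrative, not loralib API.

def matmul(X, Y):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_update(W, A, B, scale=1.0):
    """Return the adapted weight W + scale * (B @ A); W stays frozen."""
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

d_out, d_in, r = 4, 4, 1
# Frozen base weight (identity here, just for illustration).
W = [[1.0 if i == j else 0.0 for j in range(d_in)] for i in range(d_out)]
# LoRA factors: B starts at zero, so training begins exactly at W.
B = [[0.0] for _ in range(d_out)]   # d_out x r
A = [[0.1] * d_in]                  # r x d_in

full_params = d_out * d_in          # parameters in a full weight delta
lora_params = d_out * r + r * d_in  # parameters in the low-rank factors
print(lora_params, "trainable params instead of", full_params)

W_adapted = lora_update(W, A, B)    # equals W while B is all-zero
```

The parameter saving is the point: for realistic layer sizes (e.g. `d = 4096`, `r = 8`), the low-rank factors hold `2 * d * r` parameters instead of `d * d`, which is why the frameworks listed above (dp-transformers, openrlhf, llama-recipes, and others) lean on loralib for parameter-efficient fine-tuning.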