Reverse Dependencies of fschat
The following projects have a declared dependency on fschat:
- agentverse — A versatile framework that streamlines the process of creating custom multi-agent environments for large language models (LLMs).
- airoboros — Updated and improved implementation of the self-instruct system.
- aqueduct-llm — Aqueduct LLM Package
- auto-vicuna — An experiment with Vicuna.
- autobiasdetector — Tools for detecting bias patterns in LLMs
- bigdl-llm — Large Language Model Development Toolkit
- collie-bench — Official Implementation of "COLLIE: Systematic Construction of Constrained Text Generation Tasks"
- comfyui — An installable version of ComfyUI
- dbgpt — DB-GPT is an experimental open-source project that uses locally deployed GPT large models to interact with your data and environment. Because everything runs locally, there is no risk of data leakage and your data stays private and secure.
- eagle-llm — Accelerating LLMs by 3x with No Quality Loss
- easyjailbreak — Easy Jailbreak toolkit
- flashrag-dev — A library for efficient Retrieval-Augmented Generation research
- fschat-FlagEmbedding-worker — FlagEmbedding model worker for FastChat.
- garak — LLM vulnerability scanner
- gptdb — GPT-DB is an experimental open-source project that uses locally deployed GPT large models to interact with your data and environment. Because everything runs locally, there is no risk of data leakage and your data stays private and secure.
- ipex-llm — Large Language Model Development Toolkit
- llm-blender — LLM-Blender is an ensembling framework that attains consistently superior performance by leveraging the diverse strengths and weaknesses of multiple open-source large language models (LLMs). It mitigates weaknesses through ranking and integrates strengths through generation fusion to enhance LLM capability.
- lm-polygraph — Uncertainty Estimation Toolkit for Transformer Language Models
- longchat — LongChat and LongEval
- medusa-llm — Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads
- pyragnarok — A package for running RAG models, especially for the TREC 2023 Retrieval-Augmented Generation (RAG) track.
- rank-llm — A package for running prompt decoders like RankVicuna
- rewardbench — Tools for evaluating reward models
- tracllm — A context-tracing tool for LLMs
- trustllm — Toolkit for evaluating trustworthiness in large language models (TrustLLM)
- umbrela — A package for generating query-passage relevance assessment labels.
- universalmodels — A set of wrappers that allow multiple AI model sources to behave as Hugging Face transformers models
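All of the packages above list fschat in their dependency metadata. As a rough illustration of what such a declaration means in practice, the sketch below scans a local Python environment for installed distributions whose metadata requires fschat, using only the standard library. The simplified requirement-string parsing is an assumption for illustration, not the registry's own tooling, and it only sees packages installed in the current environment, not the full index.

```python
# Minimal sketch: find locally installed distributions that declare a
# dependency on fschat. Inspects only the current environment and uses a
# deliberately simplified parser for requirement strings.
from importlib.metadata import distributions

TARGET = "fschat"

def requirement_name(req: str) -> str:
    """Crudely extract the project name from a requirement string such as
    'fschat[model_worker]>=0.2.23 ; python_version >= "3.8"'."""
    name = req.split(";")[0].split("[")[0].split("(")[0].strip()
    for op in ("<", ">", "=", "!", "~", " "):
        name = name.split(op)[0]
    # Normalize roughly per PEP 503 so comparisons are case/format insensitive.
    return name.strip().lower().replace("_", "-").replace(".", "-")

for dist in distributions():
    for req in dist.requires or []:
        if requirement_name(req) == TARGET:
            print(f"{dist.metadata['Name']} {dist.version}: {req}")
            break
```

Running this in an environment where, say, rank-llm is installed would print that distribution alongside the exact fschat requirement string it declares.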