Reverse Dependencies of outlines
The following projects have a declared dependency on outlines (a sketch of such a declaration follows the list):
- alexandrainst_ragger — A repository for general-purpose RAG applications.
- autobiasdetector — Tools for detecting bias patterns in LLMs
- briton — Python component for using Briton
- camel-ai — Communicative Agents for AI Society Study
- classipypi — Neurosymbolic PyPI package classifier selector.
- codestory — A CLI tool for generating conventional commit messages using LLMs
- dblcsgen — DBLC Fast Structured Generation
- distilabel — Distilabel is an AI Feedback (AIF) framework for building datasets with and for LLMs.
- gigax — Call LLM-powered NPCs from your game, at runtime.
- hinteval — A Python framework designed for both generating and evaluating hints.
- latentscope — Quickly embed, project, cluster and explore a dataset.
- lexi-align — Word alignment between two languages using structured generation
- llm-vm — An Open-Source AGI Server for Open-Source LLMs
- mlx-omni-server — (no summary provided)
- optillm — An optimizing inference proxy for LLMs.
- outlines — Probabilistic Generative Model Programming
- outlines-haystack — Use `outlines` generators with Haystack.
- outlinesmlx — Probabilistic Generative Model Programming
- python-lilypad — An open-source prompt engineering framework.
- serverless-llm — (no summary provided)
- sglang — SGLang is yet another fast serving framework for large language models and vision language models.
- sglang-router — SGLang is yet another fast serving framework for large language models and vision language models.
- sieves — Rapid prototyping and robust baselines for information extraction with zero- and few-shot models.
- struct-ie — A Python library for structured information extraction with LLMs.
- TruthTorchLM — An open-source library for assessing the truthfulness of language model outputs. It integrates state-of-the-art methods, offers comprehensive benchmarking across a range of tasks, and works seamlessly with popular frameworks such as Hugging Face and LiteLLM.
- vllm-npu — A high-throughput and memory-efficient inference and serving engine for LLMs
- vllm-xft — A high-throughput and memory-efficient inference and serving engine for LLMs