Reverse Dependencies of tiktoken
The following projects have a declared dependency on tiktoken:
- keble-chains — Keble ai model toolkit
- keywordsai-eval — An evaluation package for LLM input output
- khoj — Your Second Brain
- knowledge-base-guardian — An LLM application to safeguard the consistency of documents in a knowledge base
- knowledgegpt — A package for extracting and querying knowledge using GPT models
- kogi — Kogi Programming Assistant AI Dog
- KosmosX — Transformers at zeta scales
- kube-agent — A Kubernetes copilot agent powered by OpenAI
- kube-copilot — Kubernetes Copilot
- l2mac — The LLM Automatic Computer Framework
- lagent — A lightweight framework for building LLM-based agents
- lamatic-airbyte-cdk — A framework for writing Airbyte Connectors.
- lanarky — The web framework for building LLM microservices
- langchain_1111_Dev_cerebrum — Building applications with LLMs through composability
- langchain-by-johnsnowlabs — Building applications with LLMs through composability
- langchain_dashscope — Integrates Alibaba Cloud DashScope models (Qwen and others) into LangChain
- langchain-g4f — LangChain gpt4free is an open-source project that assists in building applications using LLM (Large Language Models) and provides free access to GPT4/3.5.
- langchain-ibis — Building applications with LLMs through composability
- langchain-llm — langchain llm wrapper
- langchain-moonshot — An integration package connecting Moonshot AI and LangChain
- langchain-openai — An integration package connecting OpenAI and LangChain
- langchain-openai-limiter — Wrapper for LangChain & OpenAI API calls that uses OpenAI headers to handle TPM & RPM rate limits.
- langchain-plantuml — Subscribe to events via a callback and store them in PlantUML format, keeping them in a form that is easy to visualize and analyze.
- langchain-prefect — Orchestrate and observe tools built with langchain.
- langchain-qa-with-references — This is a temporary project while I wait for my langchain [pull-request](https://github.com/langchain-ai/langchain/pull/7278) to be validated.
- langchain-utils — Utilities built upon the langchain library
- langchain-xfyun — Use the iFlytek Spark LLM seamlessly in LangChain
- langchain_zhipu — Integrates Zhipu AI into LangChain
- langchaincoexpert — Building applications with LLMs through composability
- langchainmsai — Building applications with LLMs through composability
- langchainn — Building applications with LLMs through composability
- langfair — LangFair is a Python library for conducting use-case level LLM bias and fairness assessments
- langgraph-studio — no summary
- langpack — A library to package and deploy language model apps
- langplus — Building applications with LLMs through composability
- langrila — A useful tool for working with API-based LLMs
- langroid — Harness LLMs with Multi-Agent Programming
- langroid-slim — Harness LLMs with Multi-Agent Programming
- langsearch — Easily create semantic search based LLM applications on your own data
- LangTorch — Framework for intuitive LLM application development with tensors.
- langtrace-python-sdk — Python SDK for LangTrace
- languageassistant — An LLM-powered language learning assistant
- langup — Social network bot
- languru — The general purpose LLM app stacks.
- lanno — Let Large Language Models Serve As Data Annotators.
- latentscope — Quickly embed, project, cluster and explore a dataset.
- lazycodr — A CLI tool to help lazy coders get the work done with AI (commit messages, pull requests ...)
- lazyllm — A Low-code Development Tool For Building Multi-agent LLMs Applications.
- lazyllm-beta — A Low-code Development Tool For Building Multi-agent LLMs Applications.
- lazyllm-llamafactory — Easy-to-use LLM fine-tuning framework
- ldp — Agent framework for constructing language model agents and training on constructive tasks.
- learnbyvideo-whisper — Robust Speech Recognition via Large-Scale Weak Supervision
- lemon-rag — no summary
- letta — Create LLM agents with long-term memory and custom tools
- letta-nightly — Create LLM agents with long-term memory and custom tools
- lexi — Lexi is a local LLM-based solution - includes Chat UI, RAG, LLM Proxy, and Document importing
- liah — Insert a Lie in a Haystack and evaluate the model's ability to detect it.
- libvisualwebarena — An unofficial, use-at-your-own-risk port of the visualwebarena benchmark, packaged as a standalone library.
- libwebarena — An unofficial, use-at-your-own-risk port of the webarena benchmark, packaged as a standalone library.
- lighteval — A lightweight and configurable evaluation package
- lightning-gpt — GPT training in Lightning
- lightning-whisper-mlx — no summary
- lightrag — The Lightning Library for LLM Applications.
- lilac — Organize unstructured data
- lime-green — A cli based micro-framework for LLM evals
- linkedin-ai — Automate searching for jobs and submitting applications on LinkedIn using OpenAI
- linkml-store — linkml-store
- lion-service — no summary
- litellm — Library to easily interface with LLM API providers
- litemultiagent — no summary
- litenai — Python library for Liten AI Data platform
- live_illustrate — Live-ish illustration for your role-playing campaign
- llama-agentic-system — Llama Agentic System
- llama-cmdline — Llama CLI
- llama-index-core — Interface between LLMs and your data
- llama-index-g — Interface between LLMs and your data
- llama-index-legacy — Interface between LLMs and your data
- llama-models — Llama models
- llama-tokens — A Quick Library with Llama 3.1/3.2 Tokenization - source https://github.com/jeffxtang/llama-tokens
- llamafactory — Easy-to-use LLM fine-tuning framework
- llamafactory-songlab — Easy-to-use LLM fine-tuning framework
- llano — Let Large Language Models Serve As Data Annotators.
- llm-app — LLM-App is a library for creating responsive AI applications that leverage OpenAI/Hugging Face APIs to answer user queries from live data sources. Build your own LLM application in 30 lines of code, no vector database required.
- llm-bot — Python library for developing LLM bots
- llm-checks-common-functions — common functions, utilities using multiple DB and LLM options.
- llm-client — SDK for using LLM
- llm-commons — A Python wrapper for managing the OpenAI API and other LLMs.
- llm-cost-estimation — A Python library for estimating the cost of LLM API calls
- llm-docstring-generator — Code to generate docstrings for Python code using GPT-4 etc.
- llm-error-handler — A simple decorator for AI error handling in Python.
- llm-explorer — A Lakehouse LLM Explorer. Wrapper for spark, databricks and langchain processes
- llm-foundry — LLM Foundry
- llm-guard — LLM-Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure.
- llm-handler — A unified handler for LLM API calls across OpenAI, Anthropic, and Google
- llm-math-education — Retrieval-backed LLMs for math education
- llm-messages-token-helper — A helper library for estimating tokens used by messages.
- llm-monitor — 📈 Monitor your LLM integration with Galileo's LLM Monitor!
- llm-os — An open-source ChatGPT tool ecosystem where you can combine tools with ChatGPT and use natural language to do anything.
- llm-pdf-chat — Talk to your PDFs using an LLM.
- LLM-Performance-Benchmark — Benchmark the performance (output speed, latency) of OpenAI compatible endpoints