Reverse Dependencies of faiss-cpu
The following projects have a declared dependency on faiss-cpu (a minimal sketch of such a declaration and its typical usage follows the list):
- infervcpy — Python wrapper for fast inference with rvc
- instruct-qa — Empirical evaluation of retrieval-augmented instruction-following models.
- ipymock — A Tool that Allows You to Run PyTest within Jupyter Notebook Cells
- ir-axioms — Intuitive interface to many IR axioms.
- isek — ISEK Distributed Multi-Agent Framework
- itera — A beautiful git changelog generator with semantic search capabilities
- itp-interface — Generic interface for hooking up to any Interactive Theorem Prover (ITP) and collecting data for training ML models for AI in formal theorem proving.
- ivystar — python tools package of ivystar
- jarvis-ai-assistant — Jarvis: An AI assistant that uses tools to interact with the system
- jrvc — Libraries for RVC inference
- jshbtf0302 — State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch
- jupyter-ai — A generative AI extension for JupyterLab
- jyflow — Your personalized newsfeed for keeping up with research
- kf-d3m-primitives — All Kung Fu D3M primitives as a single library
- kgtk — no summary
- kilosort — spike sorting pipeline
- KNNN — An implementation of KNNN algorithm
- knowledge-base-guardian — An LLM application to safeguard the consistency of documents in a knowledge base
- knowledgegpt — A package for extracting and querying knowledge using GPT models
- koko-cli — no summary
- kozmoai — A Python package with a built-in web application
- kube-copilot — Kubernetes Copilot
- kwwutils — Add your description here
- labelkit — Build unstructured-to-structured data transformation pipelines
- lamini — Build on large language models faster
- lang2sql — Lang2SQL - Query Generator for Data Warehouse
- langchain_1111_Dev_cerebrum — Building applications with LLMs through composability
- langchain-by-johnsnowlabs — Building applications with LLMs through composability
- langchain-fmp-data — An integration package connecting FmpData and LangChain
- langchain-ibis — Building applications with LLMs through composability
- langchain-ray — LangChain leveraging Ray.
- langchain-xfyun — Use the iFLYTEK Spark large language model smoothly within LangChain
- langchaincoexpert — Building applications with LLMs through composability
- langchainmsai — Building applications with LLMs through composability
- langchainn — Building applications with LLMs through composability
- langflow — A Python package with a built-in web application
- langflow-law — A Python package with a built-in web application
- langflow-nightly — A Python package with a built-in web application
- langinfra — A Python package with a built-in web application
- langpack — A library to package and deploy language model apps
- langplus — Building applications with LLMs through composability
- languageassistant — An LLM-powered language learning assistant
- langwatch — Python SDK for LangWatch for monitoring your LLMs
- laserdato — A small example package
- latentis — A Python package for analyzing and transforming neural latent spaces.
- lazyllm — A Low-code Development Tool For Building Multi-agent LLM Applications.
- lazyllm-beta — A Low-code Development Tool For Building Multi-agent LLM Applications.
- ldsi — LDSI
- learn-genai — A package to learn Generative AI through practical examples
- lesa — A CLI tool to converse with any document locally using Ollama.
- lib-resume-builder-AIHawk — A package to generate AI-assisted resumes using GPT models
- libre-chat — Free and Open Source Large Language Model (LLM) chatbot web UI and API. Self-hosted, offline-capable, and easy to set up. Powered by LangChain and Llama 2.
- lightning-ir — Your one-stop shop for fine-tuning and running neural ranking models.
- lightrag — The Lightning Library for LLM Applications.
- linktransformer — A friendly way to link, aggregate, cluster, and de-duplicate dataframes using large language models.
- llm-agent-toolkit — LLM Agent Toolkit provides minimal, modular interfaces for core components in LLM-based applications.
- llm-bot — Python library for developing LLM bots
- llm-explorer — A Lakehouse LLM Explorer. Wrapper for Spark, Databricks, and LangChain processes
- llmebench — A Flexible Framework for Accelerating LLMs Benchmarking
- llmprototyping — A lightweight set of tools to use several LLM and embedding APIs
- llmutils — A few utility classes for working with LLMs
- llmvm-cli — Command Line LLM with client-side tools support.
- llmware — An enterprise-grade LLM-based development framework, tools, and fine-tuned models
- llmyaml — YAML-based LLM configuration and execution
- lmm-tools — Toolset for Large Multi-Modal Models
- local-deep-research — AI-powered research assistant with deep, iterative analysis using LLMs and web searches
- localrag — Chat with your documents locally.
- loguru-cli — An interactive command-line interface that brings intelligence to your logs.
- longtrainer — Production Ready LangChain
- lookaside — Lookaside cache in the LLM era
- lotus-ai — lotus
- maeser — A package for building RAG chatbot applications for educational contexts.
- magna-search — AI-powered embedding similarity search for documents
- mainframe-orchestra — Mainframe-Orchestra is a lightweight, open-source agentic framework for building LLM-based pipelines and self-orchestrating multi-agent teams
- manas-ai — A framework for building LLM-powered applications with intelligent agents, task decomposition, and RAG
- matchain — Record linkage - simple, flexible, efficient.
- materials-eunomia — Chemist AI Agent for Developing Materials Datasets with Natural Language Prompts
- mauve-text — Implementation of the MAUVE metric to evaluate text generation
- mcp-web-search — An MCP plugin for web search
- mdchat — a CLI that lets you chat with your markdown notes
- med-discover-ai — Med-Discover is an AI-powered tool designed to assist biomedical researchers by leveraging Retrieval-Augmented Generation (RAG) with fine-tuned LLMs on PubMed literature. It enables efficient document retrieval, knowledge extraction, and interactive querying from biomedical research papers, helping researchers find relevant insights quickly. The package supports both GPU-based embeddings (MedCPT) and CPU-friendly alternatives (GPT-4 embeddings), making it accessible for a wide range of users.
- meddiscover — MedDiscover is an AI-powered tool designed to assist biomedical researchers by leveraging Retrieval-Augmented Generation (RAG) with fine-tuned LLMs on PubMed literature. It enables efficient document retrieval, knowledge extraction, and interactive querying from biomedical research papers, helping researchers find relevant insights quickly. The package supports both GPU-based embeddings (MedCPT) and CPU-friendly alternatives (GPT-4 embeddings), making it accessible for a wide range of users.
- meerkat-ml — Meerkat is building new data abstractions to make machine learning easier.
- megabots — 🤖 Megabots provides State-of-the-art, production ready bots made mega-easy, so you don't have to build them from scratch 🤯 Create a bot, now 🫵
- memories-dev — Collective Memory Infrastructure for AGI
- memoripy — Memoripy provides context-aware memory management with support for OpenAI and Ollama APIs, offering structured short-term and long-term memory storage for interactive applications.
- metagpt — The Multi-Agent Framework
- metagpt-simple — The Multi-Agent Framework
- MeUtils — description
- microllama — The smallest possible LLM API
- mindsql — Text-2-SQL made easy in just a few lines of Python.
- mini-rec-sys — Toolkit to train and evaluate models for search and recommendations.
- mistral-vectordb — High-performance vector database with Mistral AI embeddings support
- moatless — no summary
- moatless-tree-search — no summary
- monocle-apptrace — Package with Monocle GenAI tracing
- mstr-robotics-magerdaniel — MicroStrateg(P)ython
- multi-swarm — A framework for creating collaborative AI agent swarms
- muzlin — Muzlin: a filtering toolset for semantic machine learning
- mw-adapter-transformers — A friendly fork of HuggingFace's Transformers, adding Adapters to PyTorch language models
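For context, a "declared dependency" above means the project lists faiss-cpu in its packaging metadata (e.g. requirements.txt or pyproject.toml) and then imports the library as `faiss`. The snippet below is a minimal, hypothetical sketch of that pattern using FAISS's standard flat L2 index; the dimensions and data are made up for illustration.

```python
# Minimal sketch of a downstream project that depends on faiss-cpu.
# The dependency is declared in packaging metadata, e.g.:
#   requirements.txt:  faiss-cpu
#   pyproject.toml:    dependencies = ["faiss-cpu"]
import numpy as np
import faiss  # the import name is "faiss" even though the wheel is faiss-cpu

d = 64                                          # embedding dimensionality (illustrative)
xb = np.random.rand(1000, d).astype("float32")  # database vectors (FAISS expects float32)
xq = np.random.rand(5, d).astype("float32")     # query vectors

index = faiss.IndexFlatL2(d)                    # exact (brute-force) L2 index
index.add(xb)                                   # add database vectors
distances, ids = index.search(xq, 4)            # top-4 nearest neighbours per query
print(ids.shape)                                # -> (5, 4)
```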