Reverse Dependencies of prometheus-fastapi-instrumentator
The following projects have a declared dependency on prometheus-fastapi-instrumentator:
- ai-service-wrapper — Update lifetime for API signature
- aligned — A data management and lineage tool for ML applications.
- amarillo-metrics — Prometheus metrics for Amarillo
- amora — Amora Data Build Tool
- autobiasdetector — tools for detecting bias patterns of LLMs
- bigiq-discovery — A Prometheus file and HTTP discovery for .......
- cloud-init-server — Simple HTTP serving cloud-init metadata files
- cognitivefactory-interactive-clustering-gui — A web application designed for NLP data annotation using Interactive Clustering methodology.
- delphai-fastapi — Package for FastAPI models
- devops-auth-service-project-2022-tim1 — no summary
- devops-event-service-project-2022-tim1 — no summary
- devops-message-service-project-2022 — no summary
- devops-offer-service-project-2022 — no summary
- devops-post-service-project-2022 — no summary
- devops-profile-service-project-2022 — no summary
- estimenergy — Estimate Energy Consumption
- evo-featureflags-server — Feature flags server
- fastapi-batteries-included — Batteries-included library for services that use FastAPI
- fastapi-rtk — A package that provides a set of tools to build a FastAPI application with a Class-Based CRUD API.
- fastapi-wire — Build FastAPI applications easily
- fastramqpi — Rammearkitektur integrations framework
- fortigate-exporter-discovery — A Prometheus file discovery for FortiGates, based on FortiManager
- g2w — Gateway that notifies Worksection tasks about events from Grafana and GitLab (e.g. commits)
- genius-agent — Create various chat agents based on YAML or JSON files of predefined configs
- inference — With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.
- inference-core — With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.
- inference-cpu — With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.
- inference-gpu — With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.
- infinity_emb — Infinity is a high-throughput, low-latency REST API for serving text embeddings, reranking models, and CLIP.
- infoblox-discovery — A Prometheus file and HTTP discovery for Infoblox
- kframe — Kaeus development framework
- leptonai — Lepton AI Platform
- leximpact-socio-fisca-simu-etat — French State Budget Simulation
- matter-observability — A template for Matter's observability library
- metrics-impetus — A package for setting up instrumentation in FastAPI applications.
- microbootstrap — Package for bootstrapping new micro-services
- mlem — Version and deploy your models following GitOps principles
- msaBase — General package for microservices based on FastAPI: Profiler, Scheduler, Sysinfo, Healthcheck, Error Handling, etc.
- opea-comps — Generative AI components
- opsml — Python MLOps quality control tooling for your production ML workflows
- qena-shared-lib — Shared tools for other services
- rest-model-service — RESTful service for hosting machine learning models.
- rockai — Python SDK for RockAI.online
- textembed — TextEmbed provides a robust and scalable REST API for generating vector embeddings from text. Built for performance and flexibility, it supports various sentence-transformer models, allowing users to easily integrate state-of-the-art NLP techniques into their applications. Whether you need embeddings for search, recommendation, or other NLP tasks, TextEmbed delivers with high efficiency.
- vllm-npu — A high-throughput and memory-efficient inference and serving engine for LLMs
- vllm-xft — A high-throughput and memory-efficient inference and serving engine for LLMs
- vulcan-ms-core — Vulcan Microservice Core Library
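
All of the projects above share the same integration point: wiring prometheus-fastapi-instrumentator into a FastAPI application so that Prometheus can scrape request metrics. The following is a minimal sketch based on the library's documented quick-start usage; the `/ping` route is a hypothetical example endpoint, not taken from any of the listed projects.

```python
from fastapi import FastAPI
from prometheus_fastapi_instrumentator import Instrumentator

app = FastAPI()

# Attach the default HTTP metrics (request count, latency, in-progress
# requests, etc.) to every route, and expose them at /metrics for a
# Prometheus server to scrape.
Instrumentator().instrument(app).expose(app)

@app.get("/ping")  # hypothetical example endpoint
async def ping() -> dict:
    return {"status": "ok"}
```

With this in place, a GET to `/metrics` returns Prometheus-format counters and histograms for the instrumented routes, which is the functionality each dependent project builds on.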