Reverse Dependencies of tritonclient
The following projects have a declared dependency on tritonclient:
- afilter — no summary
- afilter-vsp — no summary
- ariel-client-triton — Client utilities for the triton inference server
- BentoML — BentoML: The easiest way to serve AI apps and models
- bionemo-controlled-generation — Guided molecule generation via the BioNemo cloud service
- body-organ-analysis — BOA is a tool for segmentation of CT scans developed by the SHIP-AI group at the Institute for Artificial Intelligence in Medicine (https://ship-ai.ikim.nrw/). Combining the TotalSegmentator and the Body Composition Analysis, this tool is capable of analyzing medical images and identifying the different structures within the human body, including bones, muscles, organs, and blood vessels.
- bsrag-unstructured — A Python package with a built-in web application
- confai — build ai with config
- datumaro — Dataset Management Framework (Datumaro)
- drlx — DRLX is a library for distributed training of diffusion models via RL
- epoch8-smartrec-client — no summary
- example-pkg-krish9d — A small example package
- fastdeploy-llm — FastDeploy for Large Language Model
- fastnn — A python library and framework for fast neural network computations.
- fedml — A research and production integrated edge-cloud library for federated/distributed machine learning anywhere, at any scale.
- genai-perf — GenAI Perf Analyzer CLI - CLI tool to simplify profiling LLMs and Generative AI models with Perf Analyzer
- infer-client — Abstraction for AI Inference Client
- kentoml — no summary
- koinapy — Python client to communicate with Koina.
- kserve-mathking — KServe Python SDK
- labelr — Add your description here
- langchain-nvidia-trt — An integration package connecting TritonTensorRT and LangChain
- llama-index-llms-nvidia-triton — llama-index llms nvidia triton integration
- lpr-pkg — no summary
- m4-utils — Library of commonly used functions for machine learning and data science projects.
- metalm-xclient — Client for the Xuelang model inference service
- ml4gw-hermes — Inference-as-a-Service deployment made simple
- mlflow-tritonserver — Tritonserver Mlflow Deployment
- mlserver — MLServer
- msir-infer — Inference client for msir inference service
- OpenELM — Evolution Through Large Models
- openfoodfacts — Official Python SDK of Open Food Facts
- openvino-model-api — Model API: model wrappers and pipelines for inference with OpenVINO
- paddle-pipelines — Paddle-Pipelines: An End-to-End Natural Language Processing Development Kit Based on PaddleNLP
- remyxai — no summary
- robotoff — Real-time and batch prediction service for Open Food Facts.
- sagemaker — Open source library for training and deploying models on Amazon SageMaker.
- stochasticx — Stochastic client library
- streaming-infer — streaming_infer
- tah-example-pkg — no summary
- titan-iris — no summary
- triton-bert — easy to use bert with nvidia triton inference server
- triton-model-analyzer — Triton Model Analyzer is a tool to analyze the runtime performance of one or more models on the Triton Inference Server
- triton-model-navigator — Triton Model Navigator: An inference toolkit for optimizing and deploying machine learning models and pipelines on the Triton Inference Server and PyTriton.
- triton-requests — A high level package for Nvidia Triton requests
- triton-sushang — A unified Triton client for speech recognition and object detection.
- tritonclient-futures — A tritonclient wrapper built on the Python standard library's concurrent module and requests
- tritonv2 — project descriptions here
- tritony — Tiny configuration for Triton Inference Server
- ViT-TensorRT — A re-implementation of ViT containing utilities to convert to TensorRT engines and run in Triton.
- windmill-endpoint — sdk in python for windmill endpoint
- windmillendpointv1 — Add your description here
- windmilltritonv2 — project descriptions here
- wtu-mlflow-triton-plugin — W-Train Utils for MLflow Triton Plugin
- xuelang-Xclient — Triton Inference Server Client
- zerohertzLib — Zerohertz's Library
- zxftools — Toolkit