Reverse Dependencies of onnx-graphsurgeon
The following projects have a declared dependency on onnx-graphsurgeon:
- cabrnet — Generic library for prototype-based classifiers
- modelconv — Converter for neural models into various formats.
- nemo2riva — NeMo Model => Riva Deployment Converter
- nvidia-modelopt — Nvidia TensorRT Model Optimizer: a unified model optimization and deployment toolkit.
- olive-ai — Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs.
- onnx-shrink-ray — Shrinks the size of ONNX files by quantizing large float constants into eight bit equivalents, while leaving all calculations in floating point.
- onnxisolation — To isolate onnx.
- sparrow-zoo — no summary
- tensorrt-yolo — Your YOLO Deployment Powerhouse. With the synergy of TensorRT Plugins, CUDA Kernels, and CUDA Graphs, experience lightning-fast inference speeds.
- triton-model-navigator — Triton Model Navigator: An inference toolkit for optimizing and deploying machine learning models and pipelines on the Triton Inference Server and PyTriton.
- trt-cloud — A CLI utility for using TensorRT Cloud
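As a sketch of what "a declared dependency" means here: each of the projects above lists onnx-graphsurgeon in its packaging metadata, for example in `pyproject.toml` (the project name and version bound below are hypothetical, for illustration only):

```toml
# Hypothetical pyproject.toml fragment for a project depending on onnx-graphsurgeon
[project]
name = "example-onnx-tool"          # illustrative name, not a real package
version = "0.1.0"
dependencies = [
    "onnx-graphsurgeon>=0.5",       # assumed version bound for illustration
]
```

Equivalently, older projects may declare it via `install_requires` in `setup.py` or a line in `requirements.txt`.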