Reverse Dependencies of onnxruntime-tools
The following projects have a declared dependency on onnxruntime-tools:
- adapter-transformers — A friendly fork of HuggingFace's Transformers, adding Adapters to PyTorch language models
- amf-fast-inference — uses pruning and quantization to speed up inference
- cody-adapter-transformers — A friendly fork of HuggingFace's Transformers, adding Adapters to PyTorch language models
- farm-haystack — LLM framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data.
- fast-bert — AI Library using BERT
- jshbtf0302 — State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch
- mw-adapter-transformers — A friendly fork of HuggingFace's Transformers, adding Adapters to PyTorch language models
- quick-deploy — Quick-Deploy optimizes and deploys Machine Learning models as fast inference APIs.
- shbtf0302 — State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch
- tf-shb-gabriel-0302 — State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch
- transformers — State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
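Most of these projects pull in onnxruntime-tools to post-process exported ONNX models (graph optimization and quantization) before serving them. As a rough illustration of that usage, the sketch below assumes a transformer model has already been exported to `model.onnx`; the file name and the head/hidden-size values are placeholders, not taken from any of the listed projects.

```python
# Illustrative sketch only: optimize an exported ONNX transformer model
# with onnxruntime-tools. "model.onnx", num_heads=12 and hidden_size=768
# are assumed example values (BERT-base-like), not project-specific settings.
from onnxruntime_tools import optimizer

optimized_model = optimizer.optimize_model(
    "model.onnx",        # path to the previously exported ONNX graph (assumed)
    model_type="bert",   # fuses attention/LayerNorm patterns for BERT-style graphs
    num_heads=12,
    hidden_size=768,
)
optimized_model.save_model_to_file("model-optimized.onnx")
```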