Reverse Dependencies of transformers
The following projects have a declared dependency on transformers:
- armory-examples — TwoSix Armory Adversarial Robustness Library Examples
- armory-library — TwoSix Armory Adversarial Robustness Evaluation Library
- armory-testbed — Adversarial Robustness Test Bed
- arrakis-mi — A mechanistic interpretability library for nerds.
- artcraft — Image generation based on diffusers
- arthur-bench — validate models for production
- artificial-detection — Python framework for artificial text detection with NLP approaches.
- artkit — This section of the documentation provides detailed information
- arxiv-astro-summarizer — Scrapes arXiv astro-ph papers, summarizes the abstracts, and returns relevant papers according to user input.
- arxiv-bot — ArXiv component of AI assistant
- arxiv-summarizer — A happy toolkit for arxiv paper summarization and understanding.
- asap-ban-machine-model — no summary
- asdff — custom pipeline for auto inpainting
- asian-bart — Asian language bart models (En, Ja, Ko, Zh, ECJK)
- asian-mtl — Seamlessly translate East Asian texts with deep learning models.
- ask-terminal — Chat with your terminal, getting things done using natural language.
- askharrison — A package for the AskHarrison, including various GenAI tools
- askui-ml-helper — helper lib for askui open source tools
- aspect-based-sentiment-analysis — Aspect Based Sentiment Analysis: Transformer & Interpretability (TensorFlow)
- ASRChild — Package for ASRChild
- asrecognition — ASRecognition: just an easy-to-use library for Automatic Speech Recognition.
- asrp — no summary
- assert-llm-tools — Automated Summary Scoring & Evaluation of Retained Text
- Assess-Acceptability-Judgments — A library for assessing acceptability judgments
- Assistant — Your very own Assistant. Because you deserve it.
- ASTAligner — ASTAligner is designed to align tokens from source code snippets to Abstract Syntax Tree (AST) nodes using Tree-sitter for AST generation and various HuggingFace tokenizers for language tokenization. The library supports a wide range of programming languages and Fast tokenizers, enabling precise mapping between source code elements and their AST representations.
- astsnowballsplitter — A package for smartly splitting code into chunks
- asyncval — Asyncval: A toolkit for asynchronously validating dense retriever checkpoints during training.
- athena-starship — Paper - Pytorch
- atomgpt — atomgpt
- atorch — A pytorch extension for efficient deep learning.
- atradebot — atradebot package
- attention-map-diffusers — attention map for diffusers
- attention-sinks — Extend LLMs to infinite length without sacrificing efficiency and performance, without retraining
- attogradDB — A simple vector database for fast similarity search
- attribute-standardizer — BEDMess attribute standardizer for metadata attribute standardization
- audio-denoiser — A Python library for (speech) audio denoising.
- audio-file-translator — audio-file-translator. For Windows, macOS, and Linux, on Python 3
- audio-transcriber — Transcribe your .wav .mp4 .mp3 .flac files to text or record your own audio!
- audio-xlstm — Paper - Pytorch
- AudioAugmentor — Python package for simple application of wide range of audio augmentations.
- audiogen-agc — Audiogen Codec
- audioldm — This package is written for text-to-audio generation.
- audioldm2 — This package is written for text-to-audio/music generation.
- audiolm-pytorch — AudioLM - Language Modeling Approach to Audio Generation from Google Research - Pytorch
- audiolm-superfeel — AudioLM - Language Modeling Approach to Audio Generation from Google Research - Pytorch
- audiosr — This package is written for text-to-audio/music generation.
- audiossl — no summary
- AudioSummariser — Summarises the text generated from audio files for quicker resolution. The audio files are typically customer support recordings for now, but the use case can be extended to more dimensions. Sentiment is analysed and depicted visually.
- audiotextspeakerchangedetect — A Package to Detect Speaker Change based on Textual Features via LLMs & Rule-Based NLP and Audio Features via Pyannote & Spectral Clustering
- audiotoken — A package for creating audio tokens
- auditnlg — Auditing Generative AI Language Modeling for Trustworthiness
- augly-jp — Data Augmentation for Japanese Text
- auk-scrapegraphai — A web scraping library based on LangChain which uses LLM and direct graph logic to create scraping pipelines.
- auralis — A faster implementation for TTS models, to be used in highly async environments
- auto-deep-learning — Automation of the creation of the architecture of the neural network based on the input
- auto-gptq — An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm.
- auto-ner — End-to-end application for named entity recognition. Highlights: 1. Powered by GenAI 2. Few-shot learning 3. Training and inference pipelines
- auto-obsidian — A layer on top of your OS that understands what you're doing
- auto-quarot — Auto convert transformers models to QuaRot
- Auto-Research — Generate a scientific survey with just a query
- auto-round — Repository of AutoRound: Advanced Weight-Only Quantization Algorithm for LLMs
- auto-round-lib — Repository of AutoRound: Advanced Weight-Only Quantization Algorithm for LLMs
- auto-subtitle-llama — Automatically generate, translate and embed subtitles into your videos
- auto-vicuna — An experiment with Vicuna.
- auto1111sdk — SDK for Automatic 1111.
- autoacu — Interpretable and Efficient Automatic Summarization Evaluation Metrics
- autoads-test — Easy to use Ads library
- autoawq — AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference.
- autobert — AutoBert_pytorch
- autobiasdetector — tools for detecting bias patterns of LLMs
- autocontrastive-gen — Auto-Contrastive Text Generation
- autodatagen — no summary
- autodistill-albef — ALBEF module for use with Autodistill
- autodistill-altclip — AltCLIP model for use with Autodistill.
- autodistill-blip — BLIP module for use with Autodistill
- autodistill-detr — DETR module for use with Autodistill
- autodistill-distilbert — DistilBERT model for use with Autodistill
- autodistill-eva-clip — EvaClip module for use with Autodistill
- autodistill-florence-2 — Use Florence 2 to auto-label data for use in training fine-tuned object detection models.
- autodistill-kosmos-2 — Kosmos-2 base model for use with Autodistill.
- autodistill-owl-vit — OWL-ViT module for use with Autodistill
- autodistill-paligemma — Auto-label data with a PaliGemma model, or fine-tune a PaliGemma model using custom data with Autodistill.
- autodistill-siglip — SigLIP base model for use with Autodistill
- autodistill-transformers — Use object detection models in Hugging Face Transformers to automatically label data to train a fine-tuned model.
- autoFillMaskWithCandi — Automatically mask sentences from a given input where certain words vary, and fill-mask from given candidates
- autoFillMaskWithCandy — Automatically mask sentences from a given input where certain words vary, and fill-mask from given candidates
- AutoGGUF — automatically quant GGUF models
- autogluon.multimodal — Fast and Accurate ML in 3 Lines of Code
- autogluon.timeseries — Fast and Accurate ML in 3 Lines of Code
- autogoal-transformers — transformers algorithm library wrapper for AutoGOAL
- autologie — no summary
- automate-scalable-unsupervised-dataset-generation — A brief description
- automate-supervised-dataset-generation — A brief description
- automate-unsupervised-dataset-generation — A brief description
- autonbox — Auton Lab TA1 primitives
- autonomi-nos — Nitrous oxide system (NOS) for computer vision.
- autonon — Organon Automated ML Platform
- autopeptideml — AutoML system for building trustworthy peptide bioactivity predictors
- autoqrels — a tool for automatically inferring query relevance assessments (qrels)