Reverse Dependencies of transformers
The following projects have a declared dependency on transformers:
- autora-doc — Automatic documentation generator from AutoRA code
- AutoRAG — Automatically evaluate RAG pipelines with your own data and find the optimal structure for a new RAG product.
- autordf2gml — AutoRDF2GML: A Framework for Transforming RDF Data into Graph Representations for Graph Machine Learning.
- autoscriber — Python library for the autoscriber summarizer.
- autotrain-advanced — no summary
- autotrain-llm — autotrain_llm
- autotrain-vision — Automate the manual labelling and training for supervised models, YOLOv8
- autotransformers — A Python package for automatic training and benchmarking of Language Models.
- autoxx — LLM-based autonomous agent
- autrainer — A Modular and Extensible Deep Learning Toolkit for Computer Audition Tasks.
- avi-mmdet — Custom OpenMMLab Detection Toolbox and Benchmark
- avr — AVR is a voice anti-spoofing system that uses deep learning models to detect spoofed audio files.
- awca — A toolkit for making ancient world citation analysis, text summarization, paraphrasing and OCR for PDF to CSV
- awessome — awessome
- awq-cli — Command-line interface for AutoAWQ
- aws-fortuna — A Library for Uncertainty Quantification.
- aws-rag-bot — no summary
- awsa — AISAK-O agent for information collection and summary.
- axlearn — AXLearn
- AxlNLP — my nlp tools
- axolotl — LLM Trainer
- azureml-acft-accelerator — Contains the acft accelerator package used in script to build the azureml components.
- azureml-acft-image-components — Contains the code for vision model's components.
- azureml-automl-dnn-nlp — End to end deep learning models for NLP tasks in AutoML.
- azureml-evaluate-mlflow — Contains the integration code of AzureML Evaluate with Mlflow.
- azureml-metrics — Contains the ML and non-Azure specific common code associated with AzureML metrics.
- baal — Library to enable Bayesian active learning in your research or labeling work.
- babeltalk — Default template for PDM package
- babylon-sts — A powerful library for audio processing with advanced features for speech recognition, text translation, and speech synthesis.
- babymmlu — Sample implementation of babymmlu benchmark
- babyvec — Natural language embedding tools
- backprop — Backprop
- BahnarTextAugmentation — Bahnar Text Augmentation
- balm-antibody — BALM: Baseline Antibody Language Model
- bampe-weights — An alternative approach to building foundational generative AI models with visualizations using Blender
- band — BERT Application
- banglanlptoolkit — Toolkits for text processing and augmentation for Bangla NLP
- BanglaSpeech2Text — An open-source offline speech-to-text package for Bangla language.
- bardi — A flexible machine learning data pre-processing pipeline framework.
- bark — Bark text to audio model
- bartbroere-eland — [Development fork!] Python Client and Toolkit for DataFrames, Big Data, Machine Learning and ETL in Elasticsearch
- basaran — Open-source alternative to the OpenAI text completion API
- batch-inference — Batch Inference
- bayesian-lora — Bayesian LoRA adapters for Language Models
- BCEmbedding — A text embedding model and reranking model produced by Netease Youdao Inc., which can be used for dense embedding retrieval and reranking in a RAG workflow.
- bcqa — A Benchmark for Complex Heterogeneous Question answering
- bdci — Bjontegaard Delta Confidence Interval
- bdi-kit — bdi-kit library
- be-great — Generating Realistic Tabular Data using Large Language Models
- be-great-v — (A Fork)Generating Realistic Tabular Data using Large Language Models
- beam-ds — Beam Datascience package
- beatcraft — While you focus on the game logic, BeatCraft helps you make authentic music for your game
- bechdelai — Automating the Bechdel test and its variants for feminine representation in movies with AI
- beir-qdrant — Qdrant integration with BEIR, simplifying quality checks on standard datasets
- beit3-gml — Package for beit3, part of UniLM by Microsoft, customized for GML tasks
- bellek — My digital memory
- bellow — Implements a pushbutton interface to the Whisper transformer using a global hotkey
- belt-nlp — BELT (BERT For Longer Texts). BERT-based text classification model for processing texts longer than 512 tokens.
- benchformer — Transformers Language Models benchmarking tool
- bent — BENT: Biomedical Entity Annotator
- bert-classifier — Transformers based NLP classification models
- bert-deid — Remove identifiers from data using BERT
- bert-embeddings — Create positional embeddings based on TinyBERT or similar bert models
- bert-extractive-summarizer — Extractive Text Summarization with BERT
- bert-fine-tuning-text-classifier-lux — A library that leverages pre-trained BERT models for multilingual text classification (French, German, English, and Luxembourgish) with easy-to-use fine-tuning capabilities.
- bert-for-sequence-classification — Easy fine-tuning for BERT models
- bert-g2p — BERT with G2P
- bert-local — PyPI Package for Circles Local Google Bert Python
- bert-multitask-learning — BERT for Multi-task Learning
- bert-pruners — Pruning BERT models + magnitude pruning + onnx export
- bert-score — PyTorch implementation of BERT score
- bert-score-flex-plot-example — An unofficial fork of PyTorch implementation of BERT score
- bert-squeeze — Tools for Transformers compression using PyTorch Lightning
- bert-summarizer — Text Summarization Library based on transformers
- bert-text-classifier — Train modern text classification models in just a few lines
- bert-token-tagger — no summary
- bertagent — Quantify linguistic agency in textual data.
- BertCC — A context-aware Simplified to Traditional Chinese converter using BERT
- berteome — A library to analyze and explore protein sequences using BERT models
- BERTLocRNA — Predicting RNA localization based on RBP binding information in BERT architecture
- bertmoticon — multilingual emoji prediction
- bertnlp — BERT toolkit is a Python package that performs various NLP tasks using Bidirectional Encoder Representations from Transformers (BERT) related models.
- bertopic — BERTopic performs topic modeling with state-of-the-art transformer models.
- bertserini — An end-to-end Open-Domain question answering system
- bertserini-on-telegram — A library, based on PyTorch, that implements bertserini code.
- BERTSimilar — Get Similar Words and Embeddings using BERT Models
- bertstem — BERT model fine-tuned on chilean STEM lessons
- bertviz — Attention visualization tool for NLP Transformer models.
- besser-agentic-framework — BESSER Agentic Framework (BAF)
- besser-bot-framework — BESSER Bot Framework (BBF)
- best-shot — Take your best shot
- bettertool — A helper library for working with S3 and Hugging Face models
- bf-nlu — no summary
- bf-nlu-banki — no summary
- bgmhan — BGM-HAN: A PyTorch implementation of the BGM-HAN model for text classification
- bgnlp — Package for Bulgarian Natural Language Processing (NLP)
- bhsenti — Chinese sentiment analysis based on three-way classification
- bicleaner-ai — Parallel corpus classifier, indicating the likelihood of a pair of sentences being mutual translations or not (neural version)
- bicv-robotlib — Chinese sentiment analysis based on three-way classification
- bigcodebench — Evaluation package for BigCodeBench