Reverse Dependencies of einops
The following projects have a declared dependency on einops:
- transformer-lm-gan — Explorations into Transformer Language Model with Adversarial Loss
- transformerocr — Transformer-based text detection
- transformerx — TransformerX is a Python library for building transformer-based models using ready-to-use layers.
- transfusion-pytorch — Transfusion in Pytorch
- transganformer — TransGanFormer
- transpector — Visually inspect, analyse and debug transformer models. Aimed at reducing cycle times for interpretability research and lowering the barrier to entry.
- treeffuser — Probabilistic predictions for tabular data, using diffusion models and decision trees.
- treex — no summary
- triangle-multiplicative-module — Triangle Multiplicative Module
- trifast — Fast kernel for triangle self-attention.
- triton-transformer — Transformer in Triton
- TSB-AD — Time-Series Anomaly Detection Benchmark
- ttmask — CLI tool for mask creation in cryo-EM/ET
- tts — Deep learning for Text to Speech by Coqui.
- tts-with-rvc — TTS with RVC pipeline
- ttsim3d — Simulate a 3D electrostatic potential map from a PDB in pyTorch
- turbo-alignment — turbo-alignment repository
- txv — A Vision Transformer explainability package
- uformer-pytorch — Uformer - Pytorch
- uground-demo-test — Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents
- ullme — ULLME: A Unified Framework for Large Language Model Embeddings with Generation-Augmented Learning
- ultimate-rvc — Ultimate RVC
- unav — UNav is designed to help visually impaired people navigate
- uni2ts — Unified Training of Universal Time Series Forecasting Transformers
- UniCell — Universal cell segmentation
- unified-focal-loss-pytorch — An implementation of loss functions from "Unified Focal loss: Generalising Dice and cross entropy-based losses to handle class imbalanced medical image segmentation"
- UnifiedML — Unified library for intelligence training.
- uniformer-pytorch — Uniformer - Pytorch
- unit-scaling — A library for unit scaling in PyTorch, based on the paper 'u-muP: The Unit-Scaled Maximal Update Parametrization.'
- uniteai — AI, Inside your Editor.
- Unlearn-Diff — Unlearning Algorithms
- Unseal — Unseal: A collection of infrastructure and tools for research in transformer interpretability.
- unsupervised-on-policy — Unsupervised pre-training with PPG
- useful-moonshine — Speech Recognition for Live Transcription and Voice Commands
- uss — Universal source separation (USS) with weakly labelled data.
- v-pyiqa — PyTorch Toolbox for Image Quality Assessment
- vec2text — convert embedding vectors back to text
- vector-quantize-pytorch — Vector Quantization - Pytorch
- versatile-audio-upscaler — Versatile AI-driven audio upscaler to enhance the quality of any audio.
- versatile-audio-upscaler-fixed — Versatile AI-driven audio upscaler to enhance the quality of any audio -- Now supporting NumPy 1.26+
- vformer — A modular PyTorch library for vision transformer models
- vicregaddon — A lightweight and modular parallel PyTorch implementation of VICReg (intended for audio, but will try to be general)
- video-clip — AskVideos-VideoCLIP model
- video-dataloader-for-pytorch — A small example package
- video-diffusion-pytorch — Video Diffusion - Pytorch
- video-vit — Paper - Pytorch
- video2dataset — Easily create large video datasets from video URLs
- vietocr — Transformer-based text detection
- vision-architectures — Vision architectures
- vision-llama — Vision Llama - Pytorch
- vision-mamba — Vision Mamba - Pytorch
- vision-transformer — Flexible Vision Transformer (ViT) model for your needs.
- vision-xformer — Vision Xformers
- VisionDiff — Using the [Differential Transformer](https://arxiv.org/abs/2410.05258) in a vision-friendly way, similar to [VisionMamba](https://github.com/kyegomez/VisionMamba).
- visionts — Using a visual MAE for time series forecasting.
- visu3d — 3d geometry made easy.
- visualizing-training — Modelling Training Dynamics and Interpreting the dynamics
- vit-prisma — A Vision Transformer library for mechanistic interpretability.
- vit-pytorch — Vision Transformer (ViT) - Pytorch
- vit-pytorch-implementation — Vision Transformer (ViT) - Pytorch
- vit-rgts — vit-registers - Pytorch
- vitar — Paper - Pytorch
- vjepa — Modified from the official PyTorch codebase for the video joint-embedding predictive architecture, V-JEPA, a method for self-supervised learning of visual representations from video.
- vjepa-encoder — JEPA research code.
- vltk — The Vision-Language Toolkit (VLTK)
- VN-transformer — Vector Neuron Transformer (VN-Transformer)
- vocab-coverage — Chinese character coverage analysis for language models
- vocos — Fourier-based neural vocoder for high-quality audio synthesis
- voicebox-pytorch — Voicebox - Pytorch
- voicecover — no summary
- voltron-robotics — Voltron: Language-Driven Representation Learning for Robotics.
- vortex-fusion — Paper - Pytorch
- vsscunet — SCUNet function for VapourSynth
- VTON — This is a test for my Python package upload to PyPI
- vtransformer — no summary
- walloc — Wavelet Learned Lossy Compression
- wavemix — WaveMix - Pytorch
- welford-torch — Online PyTorch implementation for computing standard deviation, covariance, correlation, and whitening.
- wildtorch — WildTorch: Leveraging GPU Acceleration for High-Fidelity, Stochastic Wildfire Simulations with PyTorch
- wmh-seg — WMH segmentation for FLAIR images
- x-clip — X-CLIP
- x-dgcnn — X-DGCNN - Pytorch
- x-maes — X-MAEs - Pytorch
- x-mlps — Configurable MLPs built on JAX and Haiku
- x-transformers — X-Transformers
- x-unet — X-Unet
- x2vlm-gml — Package for X2-VLM, a vision-language model from ByteDance
- xarray-einstats — Stats, linear algebra and einops for xarray
- xcodec2 — A library for XCodec2 model.
- xcodec2-infer-lib — Trying to achieve M chip Macbook support for https://huggingface.co/HKUSTAudio/xcodec2.
- xfold — fold for everyone.
- xfuser — A Scalable Inference Engine for Diffusion Transformers (DiTs) on Multiple Computing Devices
- xinfer — Framework agnostic computer vision inference. Run 1000+ models by changing only one line of code. Supports models from transformers, timm, ultralytics, vllm, ollama and your custom model.
- xinference — Model Serving Made Easy
- xlens — no summary
- xLSTM — A novel LSTM variant with promising performance compared to Transformers or State Space Models.
- xtuner — An efficient, flexible and full-featured toolkit for fine-tuning large models
- yaib — Yet Another ICU Benchmark is a holistic framework for the automation of the development of clinical prediction models on ICU data. Users can create custom datasets, cohorts, prediction tasks, endpoints, and models.
- yet-another-retnet — yet-another-retnet
- yijian-community — YiJian-Community