Reverse Dependencies of Lightning
The following projects have a declared dependency on Lightning:
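A "declared dependency" means the package lists Lightning in its distribution metadata (its `Requires-Dist` entries). As a minimal sketch, you can check this locally for any installed package with the standard library's `importlib.metadata`; the helper name `depends_on` is hypothetical, not part of any listed project:

```python
from importlib.metadata import requires, PackageNotFoundError

def depends_on(package: str, target: str = "lightning") -> bool:
    """Return True if `package` declares `target` among its requirements."""
    try:
        # requires() returns the raw Requires-Dist strings, or None if none.
        reqs = requires(package) or []
    except PackageNotFoundError:
        return False
    # Strip environment markers (after ';') and compare the requirement name.
    return any(
        r.split(";")[0].strip().lower().startswith(target.lower())
        for r in reqs
    )
```

For packages that are not installed, the same information is available from the PyPI JSON API under the `requires_dist` field.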
- fusilli — Comparing multi-modal data fusion methods. Don't be silly, use Fusilli!
- fusion-bench — A Comprehensive Benchmark of Deep Model Fusion
- gcs-torch-dataflux — A GCS data loading integration for PyTorch
- geniml — Genomic interval toolkit
- gentab — A synthetic tabular data generation library.
- ggfm — no summary
- glasspy — Python module for scientists working with glass materials
- glaucus — Glaucus is a PyTorch complex-valued ML autoencoder & RF estimation python module.
- gluonts — Probabilistic time series modeling in Python.
- gluonts-customized — Add your description here
- goldenretriever-core — Dense Retriever
- gpnpytorchtools — A collection of pytorch tools created by GPN members
- gpt2-prot — Single NT/AA resolution biological GPT2 language modelling
- gputranad — PyTorch Lightning wrapper library for TranAD: Deep Transformer Networks for Anomaly Detection in Multivariate Time Series.
- graph-attention-student — MEGAN: Multi Explanation Graph Attention Network
- graphamole — Pretrained amino acid residue embeddings using masked graph modeling
- graphium — Graphium: Scaling molecular GNNs to infinity.
- grappa-ff — Machine Learned Molecular Mechanics Force Field
- gt4sd-trainer-hf-pl — Transformers trainer submodule of GT4SD.
- headline-detector — An Indonesian Headline Detection Python API.
- hf-ehr — Code for Context Clues paper
- high-order-layers-torch — High order layers in pytorch
- hnn-utils — Various utilities used throughout my research
- hydronaut — A framework for exploring the depths of hyperparameter space with Hydra and MLflow.
- hypatorch — HypaTorch: A library for abstract and visual model configuration
- ichigo-asr — Ichigo Whisper is a compact (22M parameters), open-source speech tokenizer for the whisper-medium model, designed to enhance performance on multilingual speech with minimal impact on its original English capabilities. Unlike models that output continuous embeddings, Ichigo Whisper compresses speech into discrete tokens, making it more compatible with large language models (LLMs) for immediate speech understanding.
- ichigo-whisper — Ichigo Whisper is a compact (22M parameters), open-source speech tokenizer for the whisper-medium model, designed to enhance performance on multilingual speech with minimal impact on its original English capabilities. Unlike models that output continuous embeddings, Ichigo Whisper compresses speech into discrete tokens, making it more compatible with large language models (LLMs) for immediate speech understanding.
- idrt — A library that uses deep-learning to match contacts based on text-fields such as name and email.
- imsy-htc — Framework for automatic classification and segmentation of hyperspectral images.
- inspiremusic — InspireMusic: A Fundamental Music, Song and Audio Generation Framework and Toolkits
- itipy — Package for translation between image domains of different astrophysical instruments.
- itwinai — AI and ML workflows module for scientific digital twins.
- iwpc — An implementation of the divergence framework as described here https://arxiv.org/abs/2405.06397
- jammy — A Versatile ToolBox
- jarvais — jarvAIs: just a really versatile AI service
- kaiko-eva — Evaluation Framework for oncology foundation models.
- kaparoo-lightning — A Python package for (personally) common and useful features for PyTorch Lightning.
- KapoorLabs-Lightning — Lightning modules for KapoorLabs specific projects
- kit4dl — Kit4DL - A quick way to start with machine and deep learning
- kooplearn — A package to learn Koopman operators
- kraken — OCR/HTR engine for all the languages
- kraken-didip — OCR/HTR engine for all the languages: a fork for the DiDip project.
- l2winddir — A package for l2 wind direction
- lai-bashwork — Use subprocess.Popen(bash=/bin/bash) to run scripts
- langmo — Toolbox for various tasks in the area of vector space models of computational linguistics
- latentis — A Python package for analyzing and transforming neural latent spaces.
- lbster — Language models for Biological Sequence Transformation and Evolutionary Representation.
- libmultilabel — A library for multi-class and multi-label classification
- lightcat — no summary
- lightning-bagua — Deep Learning Training Acceleration with Bagua and Lightning AI
- lightning-boost — PyTorch Lightning extension for faster model development.
- lightning-colossalai — Efficient Large-Scale Distributed Training with Colossal-AI and Lightning AI.
- lightning-cv — Cross validation using Lightning Fabric
- lightning-gpt — GPT training in Lightning
- lightning-habana — Lightning support for Intel Habana accelerators
- lightning-Hivemind — Lightning strategy extension for Hivemind.
- lightning-hpo — Lightning HPO
- lightning-ir — Your one-stop shop for fine-tuning and running neural ranking models.
- lightning-jupyter — JupyterLab component for Lightning Applications
- lightning-nc — no summary
- lightning-pose — Semi-supervised pose estimation using pytorch lightning
- lightning-template — A template wrapper for pytorch-lightning.
- lightning-toolbox — A collection of utilities for PyTorch Lightning.
- lightning-training-studio — Lightning HPO
- lightning-uq-box — Lightning-UQ-Box: A toolbox for uncertainty quantification in deep learning
- lightningnbeats — A Pytorch Lightning implementation of the N-BEATS algorithm with some extended functionality.
- lightningtrain — PyTorch Lightning Project Setup
- lightorch — Pytorch & Lightning based framework for research and ml-pipeline automation.
- lightray — Distribute a LightningCLI hyperparameter search with Ray Tune
- lit-commit-cb — no summary
- lit-ecology-classifier — Image Classifier optimised for ecology use-cases
- lit-mlflow — An improved Lightning mlflow logger
- litGPT — Hackable implementation of state-of-the-art open-source LLMs
- litlogger — The Lightning Logger of the Lightning.AI Platform
- llm4bi-embedder — Package including any embedder for the LLM4BI project
- luxonis-train — Luxonis training framework for seamless training of various neural networks.
- M18K — Mushroom RGB-D image dataset for object detection and instance segmentation
- macromol-gym-pretrain — Self-supervised pre-training for macromolecular data
- macromol-gym-unsupervised — An unsupervised macromolecular coordinate dataset
- maestro — Streamline the fine-tuning process for vision-language models like PaliGemma 2, Florence-2, and Qwen2.5-VL.
- mambular — A python package for tabular deep learning with mamba blocks.
- mapd — Package for applying MAP-D to your project
- mase-tools — Machine-Learning Accelerator System Exploration Tools
- matcha-tts — 🍵 Matcha-TTS: A fast TTS architecture with conditional flow matching
- matcha-tts-package — matcha-tts package by xp
- mate-cxinsys — MATE
- matgl — MatGL is a framework for graph deep learning for materials science.
- medical-data — Helpful data utilities for deep learning in medical image analysis/medical image computing
- medicraft — Medicraft synthetic dataset generator
- meds-torch — A MEDS PyTorch Dataset, leveraging an on-the-fly retrieval strategy for flexible, efficient data loading.
- minestudio — A simple and efficient Minecraft development kit.
- minirl — minirl - RL for Robot
- mlcolvar — Machine learning collective variables for enhanced sampling
- mlpforecast — Multilayer Perceptron Learning Models for Time Series Forecasting
- mml-core — This is the MML toolkit, targeting lifelong/continual/meta learning in Surgical Data Science.
- mmlearn — A modular framework for research on multimodal representation learning.
- mmv-im2im — A python package for deep learning based image to image transformation
- model2vec — Fast State-of-the-Art Static Embeddings
- modelgenerator — AIDO.ModelGenerator is a software stack powering the development of an AI-driven Digital Organism by enabling researchers to adapt pretrained models and generate finetuned models for downstream tasks.
- modlee — modlee package