Reverse Dependencies of resampy
The following projects have a declared dependency on resampy:
- adapter-transformers — A friendly fork of HuggingFace's Transformers, adding Adapters to PyTorch language models
- adversarial-robustness-toolbox — Toolbox for adversarial machine learning.
- allosaurus — a multilingual phone recognizer
- andrewAudio — A sound-object labelling machine learning model for use with Audacity. Uses VGGish for feature extraction and a Pytorch N-way classifier neural network for training.
- audio-separator — Easy to use audio stem separation, using various models from UVR trained primarily by @Anjok07
- audiossl — no summary
- audlib — A speech signal processing library with emphasis on deep learning.
- audtorch — Deep learning with PyTorch and audio
- augly — A data augmentations library for audio, image, text, & video.
- auraloss — Collection of audio-focused loss functions in PyTorch.
- basic-pitch — Basic Pitch, a lightweight yet powerful audio-to-MIDI converter with pitch bend detection.
- birdnet — A Python library for identifying bird species by their sounds.
- Brainfeatures — A toolbox to decode raw time-domain EEG using features.
- coqui-stt-training — Training code for Coqui STT
- d3net-spleeterweb — Unofficial Python package of D3Net implementation by Sony Research AI, used in Spleeter Web.
- desed — DESED dataset utils
- distil-primitives — Distil primitives as a single library
- dls-rvc — no summary
- easy-vc-dev — no summary
- elevenlabslib — Complete python wrapper for the elevenlabs API
- epilepsy2bids — Python library for converting EEG datasets of people with epilepsy to BIDS compatible datasets.
- ertk — Tools for processing emotion recognition datasets, extracting features, and running experiments.
- espnet — ESPnet: end-to-end speech processing toolkit
- fast-tts — no summary
- faunanet — A bioacoustics platform for the analysis of animal sounds with neural networks, based on birdnetlib
- frechet-audio-distance — A lightweight library of Frechet Audio Distance calculation.
- gurulearn — Library for ML model analysis, multi-image model support, CT scan processing, and audio recognition (adds a confidence feature to audio recognition; bug fixes)
- Harmonify — Harmonify: A project fork of RVC V2
- iarahealth-stt-training — Training code for Coqui STT
- iracema — Audio Content Analysis for Research on Musical Expressiveness and Individuality
- jarvis-akul2010 — A library built to make it extremely easy to build a simple voice assistant.
- jotts — JoTTS is a German text-to-speech engine.
- jrvc — Libraries for RVC inference
- koogu — Machine Learning for Bioacoustics
- kudio — Audio Toolbox™ KUDIO
- libmv — a library to create music videos
- librosa — Python module for audio and music processing
- macls — Audio Classification toolkit on Pytorch
- madarrays — Python package for audio data structures with missing entries
- mamkit — A Comprehensive Multimodal Argument Mining Toolkit.
- masr — Automatic speech recognition toolkit on Pytorch
- matchering — Audio Matching and Mastering Python Library
- medkit-lib — A Python library for a learning health system
- minispec — Minimal module for computing audio spectrograms
- MLProto — Modular Neural Network Prototyping for Stock Market Prediction
- msa-toolbox — MSA Toolbox
- mser — Speech Emotion Recognition toolkit on Pytorch
- mvector — Voice Print Recognition toolkit on Pytorch
- mw-adapter-transformers — A friendly fork of HuggingFace's Transformers, adding Adapters to PyTorch language models
- nemo-toolkit — NeMo - a toolkit for Conversational AI
- odin-ai — Deep learning for research and production
- opensesame-plugin-omexp — BRM_OMEXP plugins for OpenSesame
- paddlespeech — Speech tools and models based on Paddlepaddle
- pafts — Library that preprocesses audio for TTS.
- pipecat-ai — An open source framework for voice (and multimodal) assistants
- pitch-detectors — collection of pitch detection algorithms with unified interface
- ppacls — Audio Classification toolkit on PaddlePaddle
- ppasr — Automatic speech recognition toolkit on PaddlePaddle
- ppser — Speech Emotion Recognition toolkit on PaddlePaddle
- ppvector — Voice Print Recognition toolkit on PaddlePaddle
- promonet — Prosody Modification Network
- pyclarity — Tools for the Clarity Challenge
- resemble-enhance — Speech denoising and enhancement with deep learning
- rvc-infer — Python wrapper for inference with rvc
- sadtalker-z — sadtalker
- scikit-maad — Open-source and modular toolbox for quantitative soundscape analysis in Python
- shazbot — Sound Hierarchy Attribute Zeitgeist Before Oligarchy Take
- signalworks — Library to handle signal data and perform signal processing computations
- sonosco — Framework for training deep automatic speech recognition models.
- Sound-cls — no summary
- spaudiopy — Spatial Audio Python Package
- specmatch — Calculate an IR to match a spectrum
- target-approximation — Python implementation of the Target-Approximation-Model.
- testgailbot002 — GailBot API
- testgailbotapi — GailBot Test API
- testgailbotapi001 — GailBot Test API
- torch-utilities — Simplifying audio and deep learning with PyTorch.
- torchopenl3 — Deep audio and image embeddings in PyTorch, based on the Look, Listen, and Learn approach
- vadtk — A toy voice activity detection tool kit
- wavetabler — Wavetabler: A tool for generating wavetables from audio files
- wavmark — AI-Based Audio Watermarking Tool
- wilddrummer — Turn sound samples into drums.
- xumx-spleeterweb — Unofficial NNabla implementation of CrossNet-Open-Unmix (X-UMX), originally created by Sony Research AI.
- xumx-unofficial — Unofficial NNabla implementation of CrossNet-Open-Unmix (X-UMX), originally created by Sony Research AI.
- yeaudio — Audio ToolKit for Python
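A listing like the one above can be approximated for a local environment by scanning the metadata of installed distributions. Below is a minimal sketch using only the standard library's `importlib.metadata`; the function name is illustrative, and the scan covers only the packages installed in the current environment, not the full PyPI index used for this page.

```python
import re
from importlib.metadata import distributions

def reverse_dependencies(target: str) -> list[str]:
    """Return names of installed distributions that declare a dependency on `target`."""
    target = target.lower()
    hits = []
    for dist in distributions():
        for req in dist.requires or []:
            # A requirement string looks like "resampy>=0.4; extra == 'tests'";
            # keep only the leading distribution name before any version
            # specifier, extras bracket, or environment marker.
            m = re.match(r"[A-Za-z0-9][A-Za-z0-9._-]*", req)
            if m and m.group(0).lower() == target:
                hits.append(dist.metadata["Name"])
    return sorted(set(hits), key=str.lower)

print(reverse_dependencies("resampy"))
```

Note that this checks only declared requirements; conditional dependencies hidden behind extras or environment markers still match, since only the leading name is compared.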