Reverse Dependencies of whisper
The following projects have a declared dependency on whisper:
- audio-recaptchav2-solver — no summary
- buzz_agent — agent for buzz system
- cdt.ai — Cognitive Data Transformer for text, image, audio, and video processing
- gg_daigua — A handy utility tool
- Marketingtool — A tool module to help you do marketing
- nextgenjax — A JAX-based neural network library surpassing Google DeepMind's Haiku and Optax
- prompt4all — Prompt is all you need
- prompt4all-liteon — Prompt is all you need
- pyaudiocook — A Python package for audio recording and transcribing
- python-vcon — vCon conversational data container manipulation package
- s-d-pyannote — A speaker diarization pipeline made with pyannote
- salmon — A simple metric collector with alerts.
- sd-pyannote-v1 — A speaker diarization pipeline made with pyannote
- speaker-diarization-pyaudio — A speaker diarization pipeline made with pyannote
- stopes — Large-Scale Translation Data Mining.
- vaping — vaping is a healthy alternative to smokeping!
- vChatGPT — Verbal ChatGPT
- VocalForge — Your one-stop solution for voice dataset creation
- whila — WhiLa (Whisper-to-LaTeX) connects tools to convert spoken mathematics into LaTeX code. It includes a Speech-To-Text (STT) layer using OpenAI's whisper model and a Math-To-LaTeX (MTL) layer to render mathematics in LaTeX. The MTL layer is a Large-Language Model (LLM) for converting spoken math to legible LaTeX code. WhiLa aims to bridge the gap between writing math and the digital world, particularly for education and those unable to use conventional math writing techniques.
- whisper-pandas — WhisperDB Python Pandas Reader
- whisper_spln — Whisper SPLN is a remote service that allows a user to transcribe audio files to text using OpenAI's whisper.
- whisper-stenographer — Stenographer.py - a transcription tool for your content.
- youtube-transcriber1 — no summary
- yt-video-text-md — Fetch YouTube video transcripts and save them to markdown files.
- yt2text — Extract text from a YouTube video in a single command, using OpenAI's Whisper speech recognition model
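As context for what "a declared dependency" means here: a project appears in this list when it names whisper in its packaging metadata. A minimal sketch of such a declaration, using a hypothetical project name and the modern `pyproject.toml` format (older projects may use `install_requires` in `setup.py` instead):

```toml
# Hypothetical pyproject.toml fragment. Listing "whisper" under
# [project] dependencies is what registers a package as a reverse
# dependency of whisper.
[project]
name = "example-transcriber"  # hypothetical name, not a real package
version = "0.1.0"
dependencies = [
    "whisper",
]
```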