Reverse Dependencies of apache-beam
The following projects have a declared dependency on apache-beam, meaning apache-beam appears in their package metadata (a sketch of how to check such a declaration programmatically follows the list):
- aidevkit — Utility modules used in the AI development process
- airflow-census — This project is created to provide a simple way to ingest census data
- airflow-census-jeetendra — This is a project to ingest data from the census API
- apache-airflow — Programmatically author, schedule and monitor data pipelines
- apache-airflow-backport-providers-apache-beam — Backport provider package apache-airflow-backport-providers-apache-beam for Apache Airflow
- apache-airflow-providers-apache-beam — Provider package apache-airflow-providers-apache-beam for Apache Airflow
- apache-airflow-providers-google — Provider package apache-airflow-providers-google for Apache Airflow
- apache-flink — Apache Flink Python API
- armory-testbed — Adversarial Robustness Test Bed
- array-record — A file format that achieves a new frontier of IO efficiency
- array-record-beam-sdk — A set of Beam pipelines for converting TFRecord to Array Record
- atelierflow — An ML pipeline using Apache Beam for running experiments
- auto-tensorflow — Build low-code, automated TensorFlow What-IF explainable models in just 3 lines of code; aims to make deep learning on TensorFlow easy for the masses through a low-code framework, and to increase trust in ML models through What-IF model explainability.
- basic-pitch — Basic Pitch, a lightweight yet powerful audio-to-MIDI converter with pitch bend detection.
- beam-io-utils — Utilities for Apache Beam disk I/O (local or on GCP)
- beam-nuggets — Collection of transforms for the Apache Beam Python SDK.
- beam-postgres — Light IO transforms for Postgres read/write in Apache Beam pipelines.
- beam-postgres-connector — An I/O connector for PostgreSQL read/write in Apache Beam pipelines.
- beam-pyspark-runner — An Apache Beam pipeline Runner built on Apache Spark's Python API
- beam-sequencer — no summary
- beam-sink — An Apache Beam sink library for databases and other sinks.
- beametrics — A streaming pipeline that transforms PubSub messages into metrics using Apache Beam
- beamx — A small example package
- bigflow — BigQuery client wrapper with clean API
- biggerquery — BigQuery client wrapper with clean API
- biofit — BioFit: Bioinformatics Machine Learning Framework
- biosets — Bioinformatics datasets and tools
- bq-genomics-tools — Tools for BigQuery variant annotations
- BYOD — Bring Your Own Data! Self-Supervised Evaluation for Large Language Models
- carling — Useful transforms for supporting Apache Beam pipelines.
- census-consumer-complaint — Project has been completed.
- census-consumer-complaint-ineuron — Project has been completed.
- charmory — Adversarial Robustness Evaluation Library
- codalab — CLI for CodaLab, a platform for reproducible computation
- cubed — Bounded-memory serverless distributed N-dimensional array processing
- custom-workflow-solutions — Programmatically author, schedule and monitor data pipelines
- cxr-foundation — CXR Foundation: chest x-ray embeddings generation.
- dataflow-ext — An Extension Library for Dataflow (Apache Beam) in Python
- datalabs — Datalabs
- dataset-grouper — Dataset Grouper - A library for datasets with group-level structure.
- ddsp — Differentiable Digital Signal Processing
- dojo-beam — Apache Beam adapters and datasets for the Dojo data framework.
- dojo-beam-transforms — An Apache Beam collection of transforms
- dynamodb_pyio — Apache Beam Python I/O connector for Amazon DynamoDB
- example-lagrange-workflow — Example lagrange PyPI (Python Package Index) Package
- fact-filtering — A fact-checking project with multi-module, multi-model structure
- factory-ai — no summary
- firehose_pyio — Apache Beam Python I/O connector for Amazon Firehose
- genbq — Genomics tools for BigQuery
- geobeam — geobeam adds GIS capabilities to your Apache Beam pipelines
- gft — GFT (general fine-tuning), a little language for deepnets: 1-line programs for fine-tuning, inference, and more
- gft-cpu — GFT (general fine-tuning), a little language for deepnets: 1-line programs for fine-tuning, inference, and more
- google-weather-tools — Apache Beam pipelines to make weather data accessible and useful.
- graphite-datasets — tensorflow/datasets is a library of datasets ready to use with TensorFlow.
- hf-datasets — HuggingFace/NLP is an open library of NLP datasets.
- huhangkai — API test automation
- huhk — API test automation
- ingestor — Apache Beam pipeline that ingests data from a PostgreSQL database and writes it to GCS.
- klay-beam — Toolkit for massively parallel audio processing via Apache Beam
- klio — Conventions for Python + Apache Beam
- klio-audio — Library for audio-related Klio transforms and helpers
- klio-exec — Klio pipeline executor within a job
- kubric — A data generation pipeline for creating semi-realistic synthetic multi-object videos with rich annotations such as instance segmentation, depth maps, and optical flow.
- kubric-nightly — A data generation pipeline for creating semi-realistic synthetic multi-object videos with rich annotations such as instance segmentation, depth maps, and optical flow.
- leap-data-management-utils — LEAP / pangeo-forge-recipes extension library for logging data in Google BigQuery
- loads-pipeline — Loads Pipeline Workflow Package.
- magenta — Use machine learning to create art and music
- magenta-gpu — Use machine learning to create art and music
- mlcroissant — MLCommons datasets format.
- nlp — HuggingFace/NLP is an open library of NLP datasets.
- nuna-sql-tools — Nuna SQL Tools contains utilities to create and manipulate schemas and SQL statements.
- object-detection-by-ovi — TensorFlow Object Detection Library
- oracle-n — A custom model for sentiment analysis
- osiris-sdk — Python SDK for Osiris (Energinet DataPlatform).
- pangeo-forge-recipes — Pipeline tools for building and publishing analysis ready datasets.
- pangeo-forge-runner — Commandline tool to manage pangeo-forge feedstocks
- pano-airflow — Programmatically author, schedule and monitor data pipelines
- parallel-simulations — Helper class to orchestrate Monte Carlo simulations in parallel for an arbitrary number of models, with low-level parameter granularity.
- pdprecommender — no summary
- picsellia-tf2 — TensorFlow Object Detection Library
- pysql-beam — Apache Beam MySQL and Postgres I/O connector in pure Python
- ray-beam — A Ray runner for Apache Beam
- rechunker — A library for rechunking arrays
- Resiliparse — A collection of robust and fast processing tools for parsing and analyzing (not only) web archive data.
- rstojnic-tfds-nightly — tensorflow/datasets is a library of datasets ready to use with TensorFlow.
- sawatabi — Sawatabi is an application framework to develop and run stream-data-oriented Ising applications with quantum annealing.
- sd-beam-nuggets — Collection of transforms for the Apache beam python SDK. Forks the original Mohamed Haseeb repository to make some workarounds
- selector-standardization-beam — Data Standardization pipeline in Apache Beam for Selector project
- sentry-sdk — Python client for Sentry (https://sentry.io)
- sentry-sdk-pubsub — Python client for Sentry (https://sentry.io) with PubSub support
- seqio — SeqIO: Task-based datasets, preprocessing, and evaluation for sequence models.
- seqio-nightly — SeqIO: Task-based datasets, preprocessing, and evaluation for sequence models.
- servir-aces — Agricultural Classification and Estimation Service (ACES)
- sigma-dataflow-custom — no summary
- sqs_pyio — Apache Beam Python I/O connector for Amazon SQS
- t5 — Text-to-text transfer transformer
- tapas-table-parsing — Tapas: Table-based Question Answering.
- TDY-PKG — An implementation of TF-2, Detectron, and YOLOv5
- TDY-PKG-saquibquddus — An implementation of TF-2, Detectron, and YOLOv5
- temporian — Temporian is a Python package for feature engineering of temporal data, focusing on preventing common modeling errors and providing a simple and powerful API, a first-class iterative development experience, and efficient and well-tested implementations of common and not-so-common temporal data preprocessing functions.
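A "declared dependency" means apache-beam appears in a project's published package metadata (its Requires-Dist entries). As a minimal sketch of how such a declaration can be checked, the snippet below queries the public PyPI JSON API for a package's latest release; the helper name `declares_apache_beam` is illustrative and not part of any project listed above, and the name-normalization is an approximation of PEP 503 rather than a full PEP 508 parser.

```python
import json
import re
import urllib.request

def declares_apache_beam(package: str) -> bool:
    """Check whether a package's latest PyPI release declares
    apache-beam in its Requires-Dist metadata (requires network access)."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        info = json.load(resp)["info"]
    # requires_dist is a list of PEP 508 requirement strings, or null
    # if the package declared no dependencies.
    for req in info.get("requires_dist") or []:
        # Each entry looks like "apache-beam[gcp]>=2.33.0; extra == 'gcp'";
        # grab the leading distribution name and normalize it (approximating
        # PEP 503: lowercase, with "_" and "." treated as "-").
        name = re.match(r"[A-Za-z0-9._-]+", req).group(0)
        if name.lower().replace("_", "-").replace(".", "-") == "apache-beam":
            return True
    return False

if __name__ == "__main__":
    # beam-nuggets appears in the list above, so this should print True.
    print(declares_apache_beam("beam-nuggets"))
```

Note that this only inspects the latest release's declared requirements; a full reverse-dependency index like the one above is built by scanning the metadata of every package on PyPI.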