Reverse Dependencies of delta-spark
The following projects have a declared dependency on delta-spark (a minimal usage sketch follows the list):
- butterfree — A tool for building feature stores - Transform your raw data into beautiful features.
- calista — Comprehensive Python package designed to simplify data quality checks across multiple platforms
- data-ecosystem-python — Program Agnostic Data Ecosystem (PADE) - Python Services
- data-ecosystem-services — Program Agnostic Data Ecosystem (PADE) - Python Services
- data-quality-sdk — A Data Quality SDK for real-time metrics and checks.
- dbl-discoverx — DiscoverX - Map and Search your Lakehouse
- delta-hydro — Advanced Delta-Lake related tooling based on Apache Spark
- delta-sync — Syncing delta tables
- dlt-meta — DLT-META Framework
- dlt-utils-lib — no summary
- dq-suite-amsterdam — Wrapper for Great Expectations to fit the requirements of the Gemeente Amsterdam.
- fabric-data-guard — A library for data quality checks in Microsoft Fabric using Great Expectations
- fabric-fast-start — Fabric Fast Start is a set of tools to help you get started with Fabric.
- hopsworks — Hopsworks Python SDK to interact with Hopsworks Platform, Feature Store, Model Registry and Model Serving
- idg-metadata-client — Ingestion Framework for OpenMetadata
- jobsworthy — no summary
- katonic — A modern, enterprise-ready MLOps Python SDK
- kedro-datasets — Kedro-Datasets is where you can find all of Kedro's data connectors.
- koheesio — The steps-based Koheesio framework
- l2-data-utils — no summary
- labelspark — Labelbox Connector for Databricks
- lakehouse-engine — A configuration-driven Spark framework serving as the engine for several lakehouse algorithms and data flows.
- metamart-ingestion — Ingestion Framework for MetaMart
- mx-stream-core — Stream core package of mindx
- myetljob-run — My first ETL library
- openmetadata-ingestion — Ingestion Framework for OpenMetadata
- patek — A collection of utilities and tools for accelerating pyspark development and productivity.
- prt-databricks-simplify-7da-data-ingest — Provides creation of data layers from pdro S3 buckets
- pushcart — Metadata transformations for Spark
- pyeqx — no summary
- pyeqx-core — no summary
- pyspark-helpers — A collection of tools to help when developing PySpark applications
- pysparta — Library to help with ETL using PySpark
- pytest-dbx — Pytest plugin to run unit tests for dbx (Databricks CLI extensions) related code
- pytest-kuunda — pytest plugin to help with test data setup for PySpark tests
- python-prakashravip1 — Common Python Utility and Client Tools
- rialto — Rialto is a framework for building and deploying machine learning features in a scalable, reusable way. It provides tools that make it easy to define and deploy features and models, and to orchestrate their execution.
- rtdip-sdk — no summary
- schemon — no summary
- seedspark — Spark ETL Utility Framework
- spalah — Spalah is a set of PySpark dataframe helpers
- spark-batch — spark_delta_batch for automated bronze > silver > gold > mart processing
- spark-hydro — Advanced Delta-Lake related tooling based on Apache Spark
- spark-utils — Utility classes for comfy Spark job authoring.
- sparkpipelineframework — Framework for simpler Spark Pipelines
- sparkpipelineframework.testing — Testing Framework for SparkPipelineFramework
- sphinxHub — Basic library helpers
- stacks-data — A suite of utilities to support data engineering workloads within an Ensono Stacks data platform.
- trawler-on-lake — no summary
- tuberia — Tuberia... when data engineering meets software engineering
- xedro — Kedro helps you build production-ready data and analytics pipelines
- yetl-framework — Yet (another Spark) ETL framework
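
For context on what a declared delta-spark dependency looks like in practice, below is a minimal sketch of the typical setup: installing delta-spark (e.g. `pip install delta-spark`) and configuring a Delta-enabled SparkSession via `configure_spark_with_delta_pip`, the helper that the `delta` package (installed by delta-spark) provides. The app name and table path are placeholders.

```python
# Minimal sketch of a project using a declared delta-spark dependency.
# Assumes delta-spark and pyspark are installed; "delta-demo" and
# "/tmp/delta-demo" are placeholder values.
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

builder = (
    SparkSession.builder.appName("delta-demo")
    # Enable Delta Lake SQL extensions and the Delta-aware catalog.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaSparkSessionCatalog",
    )
)
# Adds the Delta Lake JARs matching the installed delta-spark version.
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Round-trip a small Delta table to confirm the setup works.
spark.range(5).write.format("delta").mode("overwrite").save("/tmp/delta-demo")
spark.read.format("delta").load("/tmp/delta-demo").show()
```

Most of the projects above layer their own abstractions (feature stores, data-quality checks, ETL frameworks) on top of this same session-configuration pattern.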