Reverse Dependencies of pyhive
The following projects have a declared dependency on pyhive:
- airflow-indexima — Indexima Airflow integration
- alo7-airflow — Programmatically author, schedule and monitor data pipelines
- apache-airflow — Programmatically author, schedule and monitor data pipelines
- apache-airflow-backport-providers-apache-hive — Backport provider package apache-airflow-backport-providers-apache-hive for Apache Airflow
- apache-airflow-providers-apache-hive — Provider package apache-airflow-providers-apache-hive for Apache Airflow
- apache-airflow-zack — Programmatically author, schedule and monitor data pipelines
- b-i — Python library for data projects
- cacheless-airflow — Programmatically author, schedule and monitor data pipelines
- chronify — Time series store and mapping library
- cobra-policytool — Tool for managing Hadoop access using Apache Atlas and Ranger.
- continual — Operational AI for the Modern Data Stack
- custom-workflow-solutions — Programmatically author, schedule and monitor data pipelines
- d22d — Migrate from database to database with two lines of code
- databricks-dbapi — A DBAPI 2.0 interface and SQLAlchemy dialect for Databricks interactive clusters.
- datasaku — A small example package
- dbgpt-ext — Add your description here
- dbt-ocean-spark — The Apache Spark adapter plugin for dbt
- dbt-pyspark — The Apache PySpark adapter plugin for dbt
- dbt-spark — The Apache Spark adapter plugin for dbt
- dbt-spark-livy — The dbt-spark-livy adapter plugin for Spark in Cloudera DataHub with Livy interface
- dbt-watsonx-spark — IBM watsonx.data spark plugin for dbt
- detectpii — Detect PII columns in your database and warehouse
- discovery-connectors — A concrete implementation of the Discovery Foundation project providing a library of connectors
- dragons96-tools — dragons96's personal Python utility package
- edu-airflow — Programmatically author, schedule and monitor data pipelines
- eft — An efficient finance tool for personal use
- enrichsdk — Enrich Developer Kit
- etl-ml — etl_ml is a tool that cleans dirty Excel or CSV data, sends it to FTP or a server, inserts it into a Hive database, and loads Hive data through a jump host into a pandas DataFrame for feature engineering and machine-learning model training
- evydcloud — EvydenceCloud
- ez-etl — ez-etl is an open-source Extract, Transform, Load (ETL) library written in Python. Configure a dict to read data from various data sources, transform it into the target format with code or built-in conversion algorithms, and write it to the target data source.
- featurebyte — Python Library for FeatureOps
- fosforio — FOSFOR-IO: read and write DataFrames from different connectors.
- fsqlfly — Flink SQL Job Management Website
- function-parser — This library contains various utilities to parse GitHub repositories into function-definition and docstring pairs. It is based on tree-sitter to parse code into ASTs and applies heuristics to parse metadata in more detail. It currently supports 6 languages: Python, Java, Go, PHP, Ruby, and JavaScript. It also parses function calls and links them with their definitions for Python.
- general-tools — general tools
- gio-importer — GrowingIO Importer is a metadata-creation and data-import tool for the GrowingIO CDP platform
- gio-importer-v41 — GrowingIO Importer is a metadata-creation and data-import tool for the GrowingIO CDP platform
- gio-importer-v42 — GrowingIO Importer is a metadata-creation and data-import tool for the GrowingIO CDP platform
- gio-importer-v43 — GrowingIO Importer is a metadata-creation and data-import tool for the GrowingIO CDP platform
- gio-importer-v44 — GrowingIO Importer is a metadata-creation and data-import tool for the GrowingIO CDP platform
- gio-importer-v45 — GrowingIO Importer is a metadata-creation and data-import tool for the GrowingIO CDP platform
- google-fhir-core — Core components for working with FHIR.
- google-fhir-views — Tools to create views of FHIR data for analysis.
- gptdb — GPT-DB is an experimental open-source project that uses localized GPT large models to interact with your data and environment. With this solution, you can be assured that there is no risk of data leakage, and your data is 100% private and secure.
- great-expectations — Always know what to expect from your data.
- great-expectations-cta — Always know what to expect from your data.
- heps-ds-utils — A Module to enable Hepsiburada Data Science Team to utilize different tools.
- hive-kernel — A hive kernel for Jupyter.
- hiveqlKernel — HiveQL Kernel
- honeycomb — Multi-source/engine querying tool
- idg-metadata-client — Ingestion Framework for OpenMetadata
- in-dbt-spark — Release for LinkedIn's changes to dbt-spark.
- ivystar — python tools package of ivystar
- johnshopeetools — A toolbox for credit policy analysis
- jupyterlab-data-explorer — A JupyterLab extension for Database
- jupyterlab-sql-explorer — A JupyterLab extension for Database
- laika-lib — A simple business reporting library
- llama-index-readers-hive — llama-index readers hive integration
- marvin-python-toolbox — Marvin Python Toolbox
- mcp-server-hive — A Hive MCP server for data analysis
- metamart-ingestion — Ingestion Framework for MetaMart
- metaphor-connectors — A collection of Python-based 'connectors' that extract metadata from various sources to ingest into the Metaphor app.
- mostlyai — Synthetic Data SDK
- muttlib — Collection of helper modules by Mutt Data.
- nordypy — Nordypy Package
- omniduct — A toolkit providing a uniform interface for connecting to and extracting data from a wide variety of (potentially remote) data stores (including HDFS, Hive, Presto, MySQL, etc).
- openmetadata-ingestion — Ingestion Framework for OpenMetadata
- pano-airflow — Programmatically author, schedule and monitor data pipelines
- ppextensions — PPExtensions - Set of IPython and Jupyter extensions
- pstyle — python DB-API paramstyle converter
- pydatafabric — SHINSEGAE DataFabric Python Package
- PyStellarDB — Python interface to StellarDB
- python-suanpan — Suanpan SDK
- qbiz-airflow-presto — A containerized Presto cluster for AWS.
- redash-stmo — Extensions to Redash by Mozilla
- refractio — REFRACT-IO: read and write DataFrames from different connectors.
- rspyutils — A feature extraction algorithm
- sdputils — Utility functions for data transformation inside the Semantix Data Platform.
- seatools — Python integration toolkit
- seqslab-connector — Atgenomix SeqsLab Connector for Python
- slickdeals-dbt-spark — The Apache Spark adapter plugin for dbt, slickdeals flavor
- snap_studio — SparklineData SNAP Studio
- soda-core-spark — no summary
- soda-sql — Soda SQL library & CLI
- soda-sql-hive — no summary
- soda-sql-spark — no summary
- spark-k8s-test — This project provides some utilities function and CLI commands to test Charmed Spark on K8s.
- sqlalchemy-databricks — SQLAlchemy Dialect for Databricks
- sqlDrafter — sqldrafter is a Python library that helps you generate data queries from natural language questions.
- sqlep — a tool for testing sql queries
- ssh-jump-hive — ssh_jump_hive is a tool that connects to Hive through a jump host and loads Hive data into a pandas DataFrame
- suanpan — Suanpan SDK
- suanpan-core — Suanpan SDK
- tikit — Kit for TI PLATFORM
- tikit-en — Kit for TI PLATFORM
- tikit-test — Kit for TI PLATFORM
- typed-blocks — Modular, event-centric Python library that simplifies development of typical stream applications by heavily exploiting Python's type system.
- whale-pipelines — A pared-down metadata scraper + SQL runner.
- yourtools — Python helper tools
- yoyo-indexima — Indexima migration schema based on yoyo