Reverse Dependencies of azure-storage-file-datalake
The following projects have a declared dependency on azure-storage-file-datalake:
- abn-amro-assessment-2024 — ABN Amro technical assignment package
- abn-amro-test — ABN Amro technical assignment package
- acryl-datahub — A CLI to work with DataHub metadata
- acryl-iceberg-legacy — Acryl maintained copy of Iceberg Python bindings. Iceberg is a new table format for storing large, slow-moving tabular data
- adls-acl — A small tool for managing Azure DataLake Store (ADLS) Access Control Lists (ACLs).
- adls-management — metadata lake file management
- agno-storage — Cloud agnostic storage service
- andreani-aa-ml — Common functions for ML projects used by the Andreani Advanced Analytics team
- apache-airflow-providers-microsoft-azure — Provider package apache-airflow-providers-microsoft-azure for Apache Airflow
- azdatalakefacade — A package to implement a singleton for Azure Data Lake Storage Gen2
- azfs — AzFS provides convenient Python read/write functions for Azure Storage Accounts.
- azure-ai-ml — Microsoft Azure Machine Learning Client Library for Python
- azure-lmdc — This library aims to generalize functions for integrating Azure and Python using the Azure SDK
- azure-storage-helper — no summary
- azurecloudhandler — Library to optimally handle some resources on Azure
- botdependencies — no summary
- checkbin — Visualization SDK
- cloudexplain — A package for explaining cloud-based solutions provided by cloudexplain GmbH.
- cloudpathlib — pathlib-style classes for cloud storage services.
- cloudsh — A Python CLI wrapping common Linux commands for local/cloud files.
- dagster-azure — Package for Azure-specific Dagster framework op and resource components.
- data-ecosystem-dependencies — Data Ecosystem Dependencies - Python (PADE)
- data-ecosystem-python — Program Agnostic Data Ecosystem (PADE) - Python Services
- data-ecosystem-services — Program Agnostic Data Ecosystem (PADE) - Python Services
- data-pipeline-tooling — A library for the Databricks Jobs API
- data-safe-haven — An open-source framework for creating secure environments to analyse sensitive data.
- davt-dependencies-python — Data, Analytics and Visualization Templates (DAVT) - Python Dependencies
- davt-services-python — Data, Analytics and Visualization Templates (DAVT) - Python Services
- do-data-utils — Functionalities to interact with Google and Azure, and clean data
- DukeDSClient — Command line tool (ddsclient) to upload/manage projects on the duke-data-service.
- equal-logger — A logging library for internal usage.
- fabric-ops — Fabric-ops is a library for automating tasks with Microsoft Fabric resources.
- fabric-testing — Testing functionalities for Microsoft Fabric
- fabric-user-data-functions — This package contains bindings and middleware required for Fabric functions built on Python.
- feathr — An Enterprise-Grade, High Performance Feature Store
- fs-azureblob — Azure blob storage filesystem for PyFilesystem2
- gen2-acl-bundle — Azure DataLake ACL setup bundle for the Pyfony Framework
- givvableutils — utility tools for givvable
- gordo-dataset — Gordo datasets and data providers
- increff-runner — Algo Runner For Increff CaaS
- ipp-ds — Library of the Data Science team at Ipiranga Produtos de Petroleo
- jobsworthy — no summary
- kukur — Kukur makes time series data and metadata available to the Apache Arrow ecosystem.
- metalake-file-management — metadata lake file management
- mlflow-by-ckl — MLflow: A Platform for ML Development and Productionization
- mlflow-by-johnsnowlabs — MLflow: A Platform for ML Development and Productionization
- mlflow-by-johnsnowlabs-v2 — MLflow: A Platform for ML Development and Productionization
- mlflow-saagie — MLflow: A Platform for ML Development and Productionization - forked for Saagie
- mlflow-skinny — MLflow is an open source platform for the complete machine learning lifecycle
- mlflow-tmp — MLflow: A Platform for ML Development and Productionization
- mosaicml-streaming — Streaming lets users create PyTorch-compatible datasets that can be streamed from cloud-based object stores
- nba-analytics — A package for collecting and analyzing NBA player data.
- npyetl — no summary
- Orange3-Azure-Data-Lake-Storage-Gen2 — Widgets to load and save tables to/from Azure Data Lake Storage (ADLS) Gen 2
- osiris-sdk — Python SDK for Osiris (Energinet DataPlatform).
- pade-python — Program Agnostic Data Ecosystem (PADE) - Python Services
- pade-python-dependencies — Program Agnostic Data Ecosystem (PADE) - Python Dependencies
- paeio — Utilities library for read/write operations and general data cleaning routines
- pano-airflow — Programmatically author, schedule and monitor data pipelines
- PBI-dashboard-creator — Automatically create PowerBI dashboards using the .pbir file type
- pre-ai-python — Microsoft AI Python Package
- pyarrowfs-adlgen2 — Use pyarrow with Azure Data Lake gen2
- pyeqx — no summary
- pyeqx-core — no summary
- pyrit-library — no summary
- pytestmsfabric — no summary
- q2label — Transfer your data to our datalake and integrate it into your computer vision pipeline. For more information, go to our website: www.q2label.com.
- renameit — File renaming tool
- robotframework-adls-library — Library for Robot Framework in which to interface with Azure Data Lake Storage Gen2 (ADLS)
- rtdip-sdk — no summary
- slalom-tapdance — Tapdance is an orchestration layer for the open source Singer tap platform.
- stacks-data — A suite of utilities to support data engineering workloads within an Ensono Stacks data platform.
- tab2neo — Clinical Linked Data: High-level Python classes to load, model and reshape tabular data imported into Neo4j database
- tapdance — Tapdance is an orchestration layer for the open source Singer tap platform.
- target-azure-storage — `target-azure-storage` is a Singer target for Azure Storage, built with the Meltano SDK for Singer Targets.
- terality — The Data Processing Engine for Data Scientists
- thds.adls — ADLS tools
- thds.mops — ML Ops tools for Trilliant Health
- uio — Universal IO (uio) library.
- valemo-data-query — no summary
- watchmen-storage-adls — no summary
- xcputils — Utilities for copying data
- ydata-sdk — YData allows you to use the *Data-Centric* tools from the YData ecosystem to accelerate AI development
- zdt — zdt
- zgl-streaming — Streaming lets users create PyTorch-compatible datasets that can be streamed from cloud-based object stores
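The packages above integrate with Azure Data Lake Storage Gen2 through this SDK. As a rough illustration of what such a declared dependency typically implies, the sketch below uploads and reads back a file with azure-storage-file-datalake; it assumes azure-identity is also installed for authentication, and the account URL, file system name, and file path are placeholders, not values taken from any of the listed projects.

```python
# Minimal sketch: upload and read back a file with azure-storage-file-datalake.
# The account URL, credential setup, file system name, and file path are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Connect to a (placeholder) storage account over the Data Lake (dfs) endpoint.
service = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)

# Work within an existing file system (container); the name "raw" is an assumption.
file_system = service.get_file_system_client(file_system="raw")

# Upload a small payload, overwriting any existing file at that path.
file_client = file_system.get_file_client("examples/hello.txt")
file_client.upload_data(b"hello from a downstream package", overwrite=True)

# Read the file back and print its contents.
content = file_client.download_file().readall()
print(content.decode())
```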