great_expectations.datasource

Package Contents

Classes

Datasource(name, data_context=None, data_asset_type=None, batch_kwargs_generators=None, **kwargs)

A Datasource connects to a compute environment and one or more storage environments and produces batches of data that Great Expectations can validate in that compute environment

PandasDatasource(name='pandas', data_context=None, data_asset_type=None, batch_kwargs_generators=None, boto3_options=None, reader_method=None, reader_options=None, limit=None, **kwargs)

The PandasDatasource produces PandasDataset objects and supports generators capable of interacting with the local filesystem (the default subdir_reader generator) and with existing in-memory dataframes

SparkDFDatasource(name='default', data_context=None, data_asset_type=None, batch_kwargs_generators=None, spark_config=None, **kwargs)

The SparkDFDatasource produces SparkDFDatasets and supports generators capable of interacting with the local filesystem (the default subdir_reader batch kwargs generator) and Databricks notebooks

SqlAlchemyDatasource(name='default', data_context=None, data_asset_type=None, credentials=None, batch_kwargs_generators=None, **kwargs)

A SqlAlchemyDatasource will provide data_assets converting batch_kwargs using the following rules:

class great_expectations.datasource.Datasource(name, data_context=None, data_asset_type=None, batch_kwargs_generators=None, **kwargs)

Bases: object

A Datasource connects to a compute environment and one or more storage environments and produces batches of data that Great Expectations can validate in that compute environment.

Each Datasource provides Batches connected to a specific compute environment, such as a SQL database, a Spark cluster, or a local in-memory Pandas DataFrame.

Datasources use Batch Kwargs to specify instructions for how to access data from relevant sources such as an existing object from a DAG runner, a SQL database, an S3 bucket, or a local filesystem.

To bridge the gap between those worlds, Datasources interact closely with generators, which are aware of a source of data and can produce identifying information, called "batch_kwargs", that datasources can use to get individual batches of data. They add flexibility in how to obtain data, such as with time-based partitioning, downsampling, or other techniques appropriate for the datasource.

For example, a batch kwargs generator could produce a SQL query that logically represents "rows in the Events table with a timestamp on February 7, 2012," which a SqlAlchemyDatasource could use to materialize a SqlAlchemyDataset corresponding to that batch of data and ready for validation.
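As a minimal sketch of that flow (assuming a configured SqlAlchemyDatasource named sql_datasource; the query text is illustrative):

    # batch_kwargs as a generator might emit them for that logical batch
    batch_kwargs = {
        "query": "SELECT * FROM events WHERE ts >= '2012-02-07' AND ts < '2012-02-08'"
    }
    # The datasource materializes the batch, ready for validation.
    batch = sql_datasource.get_batch(batch_kwargs)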

Opinionated DAG managers such as Airflow, dbt, prefect.io, and Dagster can also act as datasources and/or batch kwargs generators for a more generic datasource.

When adding custom expectations by subclassing an existing DataAsset type, use the data_asset_type parameter to configure the datasource to load and return DataAssets of the custom type.
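A hedged sketch of that configuration; the subclass body and the "my_module" location are hypothetical:

    from great_expectations.dataset import PandasDataset
    from great_expectations.datasource import PandasDatasource

    # A custom DataAsset type carrying custom expectations.
    class MyCustomPandasDataset(PandasDataset):
        _data_asset_type = "MyCustomPandasDataset"
        # custom expectation methods would be defined here

    # Configure the datasource to load and return the custom type via a
    # ClassConfig-style dictionary (module_name/class_name).
    datasource = PandasDatasource(
        name="pandas_custom",
        data_asset_type={
            "module_name": "my_module",  # hypothetical module holding the subclass
            "class_name": "MyCustomPandasDataset",
        },
    )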

Feature Maturity

Datasource - S3 - How-to Guide
Support for connecting to Amazon Web Services S3 as an external datasource.
Maturity: Production
Details:
API Stability: Medium
Implementation Completeness: Complete
Unit Test Coverage: Complete
Integration Infrastructure/Test Coverage: None
Documentation Completeness: Minimal/Spotty
Bug Risk: Low
Datasource - Filesystem - How-to Guide
Support for using a mounted filesystem as an external datasource.
Maturity: Production
Details:
API Stability: Medium
Implementation Completeness: Complete
Unit Test Coverage: Complete
Integration Infrastructure/Test Coverage: Partial
Documentation Completeness: Partial
Bug Risk: Low (Moderate for Windows users because of path issues)
Datasource - GCS - How-to Guide
Support for Google Cloud Storage as an external datasource
Maturity: Experimental
Details:
API Stability: Medium (supported via native 'gs://' syntax in Pandas and PySpark; medium because we expect configuration to evolve)
Implementation Completeness: Medium (works via passthrough, not via CLI)
Unit Test Coverage: Minimal
Integration Infrastructure/Test Coverage: Minimal
Documentation Completeness: Minimal
Bug Risk: Moderate
Datasource - Azure Blob Storage - How-to Guide
Support for Microsoft Azure Blob Storage as an external datasource
Maturity: In Roadmap (Sub-Experimental - "Not Impossible")
Details:
API Stability: N/A (Supported on Databricks Spark via 'wasb://' / 'wasbs://' URLs; requires local download first for Pandas)
Implementation Completeness: Minimal
Unit Test Coverage: N/A
Integration Infrastructure/Test Coverage: N/A
Documentation Completeness: Minimal
Bug Risk: Unknown
recognized_batch_parameters
classmethod from_configuration(cls, **kwargs)

Build a new datasource from a configuration dictionary.

Parameters

**kwargs – configuration key-value pairs

Returns

the newly-created datasource

Return type

datasource (Datasource)

classmethod build_configuration(cls, class_name, module_name='great_expectations.datasource', data_asset_type=None, batch_kwargs_generators=None, **kwargs)

Build a full configuration object for a datasource, potentially including batch kwargs generators with defaults.

Parameters
  • class_name – The name of the class for which to build the config

  • module_name – The name of the module in which the datasource class is located

  • data_asset_type – A ClassConfig dictionary

  • batch_kwargs_generators – BatchKwargGenerators configuration dictionary

  • **kwargs – Additional kwargs to be part of the datasource constructor’s initialization

Returns

A complete datasource configuration.
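A minimal sketch of the round trip, assuming build_configuration returns a dictionary (including class_name/module_name defaults) whose key-value pairs from_configuration forwards to the constructor:

    from great_expectations.datasource import PandasDatasource

    config = PandasDatasource.build_configuration(reader_options={"sep": ","})
    datasource = PandasDatasource.from_configuration(name="my_pandas", **config)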

property name(self)

Property for datasource name

property config(self)
property data_context(self)

Property for attached DataContext

_build_generators(self)

Build batch kwargs generator objects from the datasource configuration.

Returns

None

add_batch_kwargs_generator(self, name, class_name, **kwargs)

Add a BatchKwargGenerator to the datasource.

Parameters
  • name (str) – the name of the new BatchKwargGenerator to add

  • class_name – class of the BatchKwargGenerator to add

  • kwargs – additional keyword arguments will be passed directly to the new BatchKwargGenerator’s constructor

Returns

BatchKwargGenerator (BatchKwargGenerator)
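A hedged sketch of adding and then listing a generator; base_directory is an assumed keyword argument forwarded to the generator's constructor:

    datasource.add_batch_kwargs_generator(
        name="my_subdir_reader",
        class_name="SubdirReaderBatchKwargsGenerator",
        base_directory="data/",
    )

    # Each configured generator is reported as a dict with "name" and "type" keys.
    for generator in datasource.list_batch_kwargs_generators():
        print(generator["name"], generator["type"])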

_build_batch_kwargs_generator(self, **kwargs)

Build a BatchKwargGenerator using the provided configuration and return the newly-built generator.

get_batch_kwargs_generator(self, name)

Get the (named) BatchKwargGenerator from the datasource.

Parameters

name (str) – name of BatchKwargGenerator (default value is 'default')

Returns

BatchKwargGenerator (BatchKwargGenerator)

list_batch_kwargs_generators(self)

List the currently-configured BatchKwargGenerators for this datasource.

Returns

each dictionary includes "name" and "type" keys

Return type

List(dict)

process_batch_parameters(self, limit=None, dataset_options=None)

Use datasource-specific configuration to translate any batch parameters into batch kwargs at the datasource level.

Parameters
  • limit (int) – a parameter all datasources must accept to allow limiting a batch to a smaller number of rows.

  • dataset_options (dict) – a set of kwargs that will be passed to the constructor of a dataset built using these batch_kwargs

Returns

Result will include both parameters passed via argument and configured parameters.

Return type

batch_kwargs
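A small sketch of the translation, assuming passed parameters are folded into the returned batch_kwargs alongside configured defaults:

    batch_kwargs = datasource.process_batch_parameters(
        limit=1000,
        dataset_options={"caching": True},  # "caching" is an assumed dataset option
    )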

abstract get_batch(self, batch_kwargs, batch_parameters=None)

Get a batch of data from the datasource.

Parameters
  • batch_kwargs – the BatchKwargs to use to construct the batch

  • batch_parameters – optional parameters to store as the reference description of the batch. They should reflect parameters that would provide the passed BatchKwargs.

Returns

Batch

get_available_data_asset_names(self, batch_kwargs_generator_names=None)

Returns a dictionary of data_asset_names that the specified batch kwarg generator can provide. Note that some batch kwargs generators may not be capable of describing specific named data assets, and some (such as filesystem glob batch kwargs generators) require the user to configure data asset names.

Parameters

batch_kwargs_generator_names – the BatchKwargGenerators for which to get available data asset names.

Returns

{
  generator_name: {
    names: [ (data_asset_1, data_asset_1_type), (data_asset_2, data_asset_2_type) ... ]
  }
  ...
}

Return type

dictionary consisting of sets of generator assets available for the specified generators
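A sketch of walking the nested structure documented above:

    available = datasource.get_available_data_asset_names()
    for generator_name, payload in available.items():
        for asset_name, asset_type in payload["names"]:
            print(generator_name, asset_name, asset_type)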

build_batch_kwargs(self, batch_kwargs_generator, data_asset_name=None, partition_id=None, **kwargs)
class great_expectations.datasource.PandasDatasource(name='pandas', data_context=None, data_asset_type=None, batch_kwargs_generators=None, boto3_options=None, reader_method=None, reader_options=None, limit=None, **kwargs)

Bases: great_expectations.datasource.datasource.Datasource

The PandasDatasource produces PandasDataset objects and supports generators capable of interacting with the local filesystem (the default subdir_reader generator) and with existing in-memory dataframes.
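A minimal sketch of both access paths; in a full DataContext setup the batch_kwargs may carry additional keys, and the file path here is illustrative:

    import pandas as pd

    from great_expectations.datasource import PandasDatasource

    datasource = PandasDatasource(name="my_pandas")

    # From a file on the local filesystem ("path" key).
    file_batch = datasource.get_batch({"path": "data/events.csv"})

    # From an existing in-memory DataFrame ("dataset" key).
    df = pd.DataFrame({"id": [1, 2, 3]})
    memory_batch = datasource.get_batch({"dataset": df})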

recognized_batch_parameters
classmethod build_configuration(cls, data_asset_type=None, batch_kwargs_generators=None, boto3_options=None, reader_method=None, reader_options=None, limit=None, **kwargs)

Build a full configuration object for a datasource, potentially including generators with defaults.

Parameters
  • data_asset_type – A ClassConfig dictionary

  • batch_kwargs_generators – Generator configuration dictionary

  • boto3_options – Optional dictionary with key-value pairs to pass to boto3 during instantiation.

  • reader_method – Optional default reader_method for generated batches

  • reader_options – Optional default reader_options for generated batches

  • limit – Optional default limit for generated batches

  • **kwargs – Additional kwargs to be part of the datasource constructor’s initialization

Returns

A complete datasource configuration.
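A hedged sketch of datasource-level defaults for S3-backed CSV batches; the option values shown are assumed examples:

    from great_expectations.datasource import PandasDatasource

    config = PandasDatasource.build_configuration(
        boto3_options={"region_name": "us-east-1"},
        reader_method="read_csv",
        reader_options={"sep": ",", "encoding": "utf-8"},
        limit=10000,
    )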

process_batch_parameters(self, reader_method=None, reader_options=None, limit=None, dataset_options=None)

Use datasource-specific configuration to translate any batch parameters into batch kwargs at the datasource level.

Parameters
  • limit (int) – a parameter all datasources must accept to allow limiting a batch to a smaller number of rows.

  • dataset_options (dict) – a set of kwargs that will be passed to the constructor of a dataset built using these batch_kwargs

Returns

Result will include both parameters passed via argument and configured parameters.

Return type

batch_kwargs

get_batch(self, batch_kwargs, batch_parameters=None)

Get a batch of data from the datasource.

Parameters
  • batch_kwargs – the BatchKwargs to use to construct the batch

  • batch_parameters – optional parameters to store as the reference description of the batch. They should reflect parameters that would provide the passed BatchKwargs.

Returns

Batch

static guess_reader_method_from_path(path)
_get_reader_fn(self, reader_method=None, path=None)

Static helper for parsing reader types. If reader_method is not provided, path will be used to guess the correct reader_method.

Parameters
  • reader_method (str) – the name of the reader method to use, if available.

path (str) – the path to use to guess the reader method

Returns

ReaderMethod to use for the filepath
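An illustrative sketch of extension-based guessing; the real guess_reader_method_from_path may differ in coverage and return shape:

    import os

    # Assumed mapping from file extensions to pandas reader method names.
    _EXTENSION_READERS = {
        ".csv": "read_csv",
        ".tsv": "read_csv",  # typically paired with sep="\t" in reader_options
        ".parquet": "read_parquet",
        ".xlsx": "read_excel",
        ".json": "read_json",
        ".pkl": "read_pickle",
    }

    def guess_reader_method(path):
        """Return a pandas reader method name for the given path, or None."""
        _, extension = os.path.splitext(path.lower())
        return _EXTENSION_READERS.get(extension)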

class great_expectations.datasource.SparkDFDatasource(name='default', data_context=None, data_asset_type=None, batch_kwargs_generators=None, spark_config=None, **kwargs)

Bases: great_expectations.datasource.datasource.Datasource

The SparkDFDatasource produces SparkDFDatasets and supports generators capable of interacting with the local filesystem (the default subdir_reader batch kwargs generator) and Databricks notebooks.

Accepted Batch Kwargs (see the sketch after this list):
  • PathBatchKwargs ("path" or "s3" keys)

  • InMemoryBatchKwargs ("dataset" key)

  • QueryBatchKwargs ("query" key)
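Illustrative batch_kwargs for each accepted flavor (all values are assumed examples):

    path_kwargs = {"path": "/data/events.parquet", "reader_method": "parquet"}
    s3_kwargs = {"s3": "s3a://my-bucket/events.csv"}
    memory_kwargs = {"dataset": existing_spark_df}  # a pre-built Spark DataFrame
    query_kwargs = {"query": "SELECT * FROM events"}  # against tables registered in the session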

Feature Maturity

Datasource - HDFS - How-to Guide
Use HDFS as an external datasource in conjunction with Spark.
Maturity: Experimental
Details:
API Stability: Stable
Implementation Completeness: Unknown
Unit Test Coverage: Minimal (none)
Integration Infrastructure/Test Coverage: Minimal (none)
Documentation Completeness: Minimal (none)
Bug Risk: Unknown
recognized_batch_parameters
classmethod build_configuration(cls, data_asset_type=None, batch_kwargs_generators=None, spark_config=None, **kwargs)

Build a full configuration object for a datasource, potentially including generators with defaults.

Parameters
  • data_asset_type – A ClassConfig dictionary

  • batch_kwargs_generators – Generator configuration dictionary

  • spark_config – dictionary of key-value pairs to pass to the spark builder

  • **kwargs – Additional kwargs to be part of the datasource constructor’s initialization

Returns

A complete datasource configuration.
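A sketch of forwarding standard Spark properties to the session builder; the property values are assumed examples:

    from great_expectations.datasource import SparkDFDatasource

    config = SparkDFDatasource.build_configuration(
        spark_config={
            "spark.executor.memory": "4g",
            "spark.sql.shuffle.partitions": "8",
        }
    )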

process_batch_parameters(self, reader_method=None, reader_options=None, limit=None, dataset_options=None)

Use datasource-specific configuration to translate any batch parameters into batch kwargs at the datasource level.

Parameters
  • limit (int) – a parameter all datasources must accept to allow limiting a batch to a smaller number of rows.

  • dataset_options (dict) – a set of kwargs that will be passed to the constructor of a dataset built using these batch_kwargs

Returns

Result will include both parameters passed via argument and configured parameters.

Return type

batch_kwargs

get_batch(self, batch_kwargs, batch_parameters=None)

class-private implementation of get_data_asset

static guess_reader_method_from_path(path)
_get_reader_fn(self, reader, reader_method=None, path=None)

Static helper for providing reader_fn

Parameters
  • reader – the base spark reader to use; this should have had reader_options applied already

  • reader_method – the name of the reader_method to use, if specified

  • path (str) – the path to use to guess reader_method if it was not specified

Returns

ReaderMethod to use for the filepath

class great_expectations.datasource.SqlAlchemyDatasource(name='default', data_context=None, data_asset_type=None, credentials=None, batch_kwargs_generators=None, **kwargs)

Bases: great_expectations.datasource.Datasource

A SqlAlchemyDatasource will provide data_assets converting batch_kwargs using the following rules:
  • if the batch_kwargs include a table key, the datasource will provide a dataset object connected to that table

  • if the batch_kwargs include a query key, the datasource will create a temporary table using that query. The query can be parameterized according to the standard Python Template engine, which uses $parameter, with additional kwargs passed to the get_batch method (see the sketch after these rules).
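A sketch of that $parameter templating, assuming a configured SqlAlchemyDatasource named sql_datasource; the query_parameters key is an assumption about how substitutions reach the Template engine:

    batch_kwargs = {
        "query": "SELECT * FROM events WHERE event_date = $event_date",
        "query_parameters": {"event_date": "2012-02-07"},
    }
    batch = sql_datasource.get_batch(batch_kwargs)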

Feature Maturity

Datasource - PostgreSQL - How-to Guide
Support for using the open source PostgreSQL database as an external datasource and execution engine.
Maturity: Production
Details:
API Stability: High
Implementation Completeness: Complete
Unit Test Coverage: Complete
Integration Infrastructure/Test Coverage: Complete
Documentation Completeness: Medium (does not have a specific how-to, but easy to use overall)
Bug Risk: Low
Expectation Completeness: Moderate
Datasource - BigQuery - How-to Guide
Use Google BigQuery as an execution engine and external datasource to validate data.
Maturity: Beta
Details:
API Stability: Unstable (table generator inability to work with triple-dotted names, temp table usability, init flow calls setup "other")
Implementation Completeness: Moderate
Unit Test Coverage: Partial (no test coverage for temp table creation)
Integration Infrastructure/Test Coverage: Minimal
Documentation Completeness: Partial (how-to does not cover all cases)
Bug Risk: High (we know of several bugs, including inability to list tables, SQLAlchemy URL incomplete)
Expectation Completeness: Moderate
Datasource - Amazon Redshift - How-to Guide
Use Amazon Redshift as an execution engine and external datasource to validate data.
Maturity: Beta
Details:
API Stability: Moderate (potential metadata/introspection method special handling for performance)
Implementation Completeness: Complete
Unit Test Coverage: Minimal
Integration Infrastructure/Test Coverage: Minimal (none automated)
Documentation Completeness: Moderate
Bug Risk: Moderate
Expectation Completeness: Moderate
Datasource - Snowflake - How-to Guide
Use Snowflake Computing as an execution engine and external datasource to validate data.
Maturity: Production
Details:
API Stability: High
Implementation Completeness: Complete
Unit Test Coverage: Complete
Integration Infrastructure/Test Coverage: Minimal (manual only)
Documentation Completeness: Complete
Bug Risk: Low
Expectation Completeness: Complete
Datasource - Microsoft SQL Server - How-to Guide
Use Microsoft SQL Server as an execution engine and external datasource to validate data.
Maturity: Experimental
Details:
API Stability: High
Implementation Completeness: Moderate
Unit Test Coverage: Minimal (none)
Integration Infrastructure/Test Coverage: Minimal (none)
Documentation Completeness: Minimal
Bug Risk: High
Expectation Completeness: Low (some required queries do not generate properly, such as related to nullity)
Datasource - MySQL - How-to Guide
Use MySQL as an execution engine and external datasource to validate data.
Maturity: Experimental
Details:
API Stability: Low (no consideration for temp tables)
Implementation Completeness: Low (no consideration for temp tables)
Unit Test Coverage: Minimal (none)
Integration Infrastructure/Test Coverage: Minimal (none)
Documentation Completeness: Minimal (none)
Bug Risk: Unknown
Expectation Completeness: Unknown
Datasource - MariaDB - How-to Guide
Use MariaDB as an execution engine and external datasource to validate data.
Maturity: Experimental
Details:
API Stability: Low (no consideration for temp tables)
Implementation Completeness: Low (no consideration for temp tables)
Unit Test Coverage: Minimal (none)
Integration Infrastructure/Test Coverage: Minimal (none)
Documentation Completeness: Minimal (none)
Bug Risk: Unknown
Expectation Completeness: Unknown
recognized_batch_parameters
classmethod build_configuration(cls, data_asset_type=None, batch_kwargs_generators=None, **kwargs)

Build a full configuration object for a datasource, potentially including generators with defaults.

Parameters
  • data_asset_type – A ClassConfig dictionary

  • batch_kwargs_generators – Generator configuration dictionary

  • **kwargs – Additional kwargs to be part of the datasource constructor’s initialization

Returns

A complete datasource configuration.

_get_sqlalchemy_connection_options(self, **kwargs)
_get_sqlalchemy_key_pair_auth_url(self, drivername, credentials)
get_batch(self, batch_kwargs, batch_parameters=None)

Get a batch of data from the datasource.

Parameters
  • batch_kwargs – the BatchKwargs to use to construct the batch

  • batch_parameters – optional parameters to store as the reference description of the batch. They should reflect parameters that would provide the passed BatchKwargs.

Returns

Batch

process_batch_parameters(self, query_parameters=None, limit=None, dataset_options=None)

Use datasource-specific configuration to translate any batch parameters into batch kwargs at the datasource level.

Parameters
  • limit (int) – a parameter all datasources must accept to allow limiting a batch to a smaller number of rows.

  • dataset_options (dict) – a set of kwargs that will be passed to the constructor of a dataset built using these batch_kwargs

Returns

Result will include both parameters passed via argument and configured parameters.

Return type

batch_kwargs