Version: 1.2.1

Expectation

class great_expectations.expectations.Expectation(*, id: Optional[str] = None, meta: Optional[dict] = None, notes: Optional[Union[str, List[str]]] = None, result_format: Union[great_expectations.core.result_format.ResultFormat, dict] = ResultFormat.BASIC, description: Optional[str] = None, catch_exceptions: bool = False, rendered_content: Optional[List[great_expectations.render.components.RenderedAtomicContent]] = None, windows: Optional[List[great_expectations.expectations.window.Window]] = None)#

Base class for all Expectations.

For a list of all available expectation types, see the Expectation Gallery.

Expectation classes must have the following attributes set:
  1. domain_keys: a tuple of the keys used to determine the domain of the expectation.

  2. success_keys: a tuple of the keys used to determine the success of the expectation.

In some cases, subclasses of Expectation (such as BatchExpectation) can inherit these properties from their parent class.

They may optionally override runtime_keys and may set an explicit value for expectation_type. runtime_keys lists the keys that can be used to control output but do not affect the actual success value of the expectation (such as result_format).
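For illustration, here is a minimal sketch of how these attributes might be declared on a custom Expectation. The class name, field, and metric name are hypothetical, and the import path is an assumption based on recent Great Expectations releases; because ColumnMapExpectation already supplies suitable domain_keys, only the success keys are declared explicitly.

```python
from great_expectations.expectations.expectation import ColumnMapExpectation


class ExpectColumnValuesToBeDivisibleExample(ColumnMapExpectation):
    """Hypothetical Expectation: column values are evenly divisible by `divisor`."""

    # Parameters referenced in success_keys are declared as fields on the model.
    divisor: int = 1

    # domain_keys (e.g. "column") are inherited from ColumnMapExpectation;
    # success_keys name the parameters that determine whether the Expectation succeeds.
    success_keys = ("divisor",)

    # Hypothetical metric name; a corresponding map metric would also need to exist.
    map_metric = "column_values.divisible_example"
```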

Expectation classes must implement the following:
  1. _validate

  2. get_validation_dependencies

In some cases, subclasses of Expectation, such as ColumnMapExpectation, already provide correct implementations that can simply be inherited.
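As an illustrative sketch of a _validate implementation, the hypothetical BatchExpectation below compares the batch's row count to a threshold. The _validate signature and the "table.row_count" metric are assumptions based on recent Great Expectations releases; the class and field names are made up.

```python
from typing import Optional

from great_expectations.execution_engine import ExecutionEngine
from great_expectations.expectations.expectation import BatchExpectation


class ExpectBatchRowCountToBeAboveExample(BatchExpectation):
    """Hypothetical Expectation: the batch contains more than `min_rows` rows."""

    min_rows: int = 0

    # Metrics this Expectation depends on; their resolved values are passed to _validate.
    metric_dependencies = ("table.row_count",)
    success_keys = ("min_rows",)

    def _validate(
        self,
        metrics: dict,
        runtime_configuration: Optional[dict] = None,
        execution_engine: Optional[ExecutionEngine] = None,
    ) -> dict:
        observed = metrics["table.row_count"]
        return {
            "success": observed > self.min_rows,
            "result": {"observed_value": observed},
        }
```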

Additionally, they may provide implementations of:
  1. validate_configuration, which should raise an error if the configuration will not be usable for the Expectation (see the sketch after this list)

  2. Data Docs rendering methods decorated with the @renderer decorator.
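A validate_configuration override typically calls the parent implementation first and then raises if the supplied parameters are unusable. The sketch below extends the hypothetical class from the earlier attribute example; InvalidExpectationConfigurationError is assumed to be the appropriate exception type.

```python
from typing import Optional

from great_expectations.exceptions import InvalidExpectationConfigurationError
from great_expectations.expectations.expectation import ColumnMapExpectation
from great_expectations.expectations.expectation_configuration import (
    ExpectationConfiguration,
)


class ExpectColumnValuesToBeDivisibleExample(ColumnMapExpectation):
    divisor: int = 1
    success_keys = ("divisor",)
    map_metric = "column_values.divisible_example"  # hypothetical metric

    def validate_configuration(
        self, configuration: Optional[ExpectationConfiguration] = None
    ) -> None:
        # Let the parent class run its standard checks first.
        super().validate_configuration(configuration)
        # Then reject parameter values this Expectation cannot work with.
        if self.divisor == 0:
            raise InvalidExpectationConfigurationError("divisor must be non-zero")
```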

print_diagnostic_checklist(diagnostics: Optional[ExpectationDiagnostics] = None, show_failed_tests: bool = False, backends: Optional[List[str]] = None, show_debug_messages: bool = False) → str#

Runs self.run_diagnostics and generates a diagnostic checklist.

The output of this method is a thin wrapper around ExpectationDiagnostics.generate_checklist(). This method is experimental.

Parameters
  • diagnostics (optional[ExpectationDiagnostics]) – If diagnostics are not provided, they will be run on self.

  • show_failed_tests (bool) – If True, failing tests will be printed.

  • backends – A list of backends to pass to run_diagnostics.

  • show_debug_messages (bool) – If True, create a logger and pass it to run_diagnostics.
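A usage sketch; the built-in Expectation and column name are arbitrary choices, not part of this method's contract:

```python
import great_expectations.expectations as gxe

expectation = gxe.ExpectColumnValuesToNotBeNull(column="passenger_count")
# Returns the checklist as a string; failing tests are included in the output.
checklist = expectation.print_diagnostic_checklist(show_failed_tests=True)
```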

run_diagnostics(raise_exceptions_for_backends: bool = False, ignore_suppress: bool = False, ignore_only_for: bool = False, for_gallery: bool = False, debug_logger: Optional[logging.Logger] = None, only_consider_these_backends: Optional[List[str]] = None, context: Optional[AbstractDataContext] = None) → ExpectationDiagnostics#

Produce a diagnostic report about this Expectation.

This method's output is currently used, via its JSON structure, to populate the Public Expectation Gallery, and to enable a fast development loop for new Expectations by letting contributors quickly check the completeness of their implementations.

The contents of the report are captured in the ExpectationDiagnostics dataclass. You can see some examples in test_expectation_diagnostics.py.

Some components of the diagnostic report (e.g. description, examples, library_metadata) can be introspected directly from the Expectation class. Other components (e.g. metrics, renderers, executions) depend at least partly on instantiating, validating, and/or executing the Expectation class. For these components, at least one test case with include_in_gallery=True must be present in the examples to produce the metrics, renderers, and execution engines parts of the report. This is because get_validation_dependencies requires expectation_config as an argument.

If errors are encountered while running the diagnostics, they are assumed to be due to incompleteness of the Expectation’s implementation (e.g., declaring a dependency on Metrics that do not exist). These errors are added under the “errors” key in the report.

Parameters
  • raise_exceptions_for_backends – If True, raise an Exception when a backend fails to connect.

  • ignore_suppress – If True, ignore the suppress_test_for list on Expectation sample tests.

  • ignore_only_for – If True, ignore the only_for list on Expectation sample tests.

  • for_gallery – If True, create empty arrays to use as examples for the Expectation Diagnostics.

  • debug_logger (optional[logging.Logger]) – Logger object to use for sending debug messages.

  • only_consider_these_backends (optional[List[str]]) – If provided, only run diagnostics against these backends.

  • context (optional[AbstractDataContext]) – Instance of any child of the AbstractDataContext class.

Returns

An ExpectationDiagnostics report object.
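A usage sketch, assuming a built-in Expectation and restricting diagnostics to the pandas backend:

```python
import great_expectations.expectations as gxe

expectation = gxe.ExpectColumnValuesToNotBeNull(column="passenger_count")
diagnostics = expectation.run_diagnostics(
    only_consider_these_backends=["pandas"],
)
# Implementation problems collected during the run appear under the "errors" key
# described above.
print(diagnostics.errors)
```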

save()#

Save the current state of this Expectation.
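In practice, save is typically called after editing an Expectation retrieved from an Expectation Suite, so that the change is persisted back to the suite's store. A sketch, in which the suite name and the edited parameter are assumptions:

```python
import great_expectations as gx

context = gx.get_context()
suite = context.suites.get(name="my_suite")  # assumes this suite already exists

expectation = suite.expectations[0]
expectation.mostly = 0.95  # assumes this Expectation exposes a "mostly" parameter
expectation.save()  # persist the updated state of this Expectation
```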

validate_configuration(configuration: Optional[great_expectations.expectations.expectation_configuration.ExpectationConfiguration] = None) → None#

Raises an exception if the given configuration is not usable for this Expectation.