great_expectations.core.expectation_diagnostics.expectation_diagnostics

Module Contents

Classes

ExpectationDiagnostics()

An immutable object created by Expectation.run_diagnostics.

class great_expectations.core.expectation_diagnostics.expectation_diagnostics.ExpectationDiagnostics

Bases: great_expectations.types.SerializableDictDot

An immutable object created by Expectation.run_diagnostics. It contains information introspected from the Expectation class, in formats that can be rendered at the command line and by the Gallery.

It has two external-facing use cases:

  1. ExpectationDiagnostics.to_dict() creates the JSON object that populates the Gallery.

  2. ExpectationDiagnostics.generate_checklist() creates CLI-type string output to assist with development.
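The serialization entry point can be illustrated with a minimal stand-in (a plain dataclass with simplified, hypothetical field names, not the real class, which carries the full attribute set listed below):

```python
import json
from dataclasses import asdict, dataclass, field
from typing import List


@dataclass
class DiagnosticsSketch:
    """Minimal stand-in for ExpectationDiagnostics (illustration only)."""

    camel_name: str
    execution_engines: dict
    errors: List[str] = field(default_factory=list)

    def to_dict(self) -> dict:
        # SerializableDictDot subclasses expose a plain-dict form;
        # serializing that dict to JSON is what populates the Gallery.
        return asdict(self)


diag = DiagnosticsSketch(
    camel_name="ExpectColumnValuesToBeEven",
    execution_engines={"PandasExecutionEngine": True},
)
print(json.dumps(diag.to_dict(), sort_keys=True))
```

The real class holds many nested diagnostic objects, but the pattern is the same: convert to a plain dict, then dump as JSON.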

examples: List[ExpectationTestDataCases]
gallery_examples: List[ExpectationTestDataCases]
library_metadata: AugmentedLibraryMetadata
description: ExpectationDescriptionDiagnostics
execution_engines: ExpectationExecutionEngineDiagnostics
renderers: List[ExpectationRendererDiagnostics]
metrics: List[ExpectationMetricDiagnostics]
tests: List[ExpectationTestDiagnostics]
errors: List[ExpectationErrorDiagnostics]
maturity_checklist: ExpectationDiagnosticMaturityMessages
to_json_dict(self)

# TODO: <Alex>2/4/2022</Alex> A reference implementation can be provided once the circular import dependencies, caused by the relative locations of the "great_expectations/types/__init__.py" and "great_expectations/core/util.py" modules, are resolved.

generate_checklist(self)

Generates the checklist in CLI-appropriate string format.

static _check_library_metadata(library_metadata: AugmentedLibraryMetadata)

Check whether the Expectation has a library_metadata object

static _check_docstring(description: ExpectationDescriptionDiagnostics)

Check whether the Expectation has an informative docstring

classmethod _check_example_cases(cls, examples: List[ExpectationTestDataCases], tests: List[ExpectationTestDiagnostics])

Check whether this Expectation has at least one positive and negative example case (and all test cases return the expected output)

static _check_core_logic_for_at_least_one_execution_engine(test_results: List[ExpectationTestDiagnostics])

Check whether core logic for this Expectation exists and passes tests on at least one Execution Engine

static _check_core_logic_for_all_applicable_execution_engines(test_results: List[ExpectationTestDiagnostics])

Check whether core logic for this Expectation exists and passes tests on all applicable Execution Engines

static _count_positive_and_negative_example_cases(examples: List[ExpectationTestDataCases])

Scans examples and returns a 2-tuple with the number of cases where success == True and the number where success == False
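A possible shape for this counting pass, assuming each test case nests an expected `success` flag under an `output` key (the real ExpectationTestDataCases objects are richer; this is a sketch over plain dicts):

```python
from typing import List, Tuple


def count_positive_and_negative_cases(examples: List[dict]) -> Tuple[int, int]:
    """Return (num_positive, num_negative) counts over all test cases."""
    positive = negative = 0
    for data_case in examples:
        for test in data_case.get("tests", []):
            success = test.get("output", {}).get("success")
            if success is True:
                positive += 1
            elif success is False:
                negative += 1
    return positive, negative


examples = [
    {
        "tests": [
            {"title": "positive_case", "output": {"success": True}},
            {"title": "negative_case", "output": {"success": False}},
        ]
    }
]
print(count_positive_and_negative_cases(examples))  # -> (1, 1)
```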

static _count_unexpected_test_cases(test_diagnostics: ExpectationTestDiagnostics)

Scans test_diagnostics and returns the number of cases that did not pass.

static _convert_checks_into_output_message(class_name: str, maturity_messages: ExpectationDiagnosticMaturityMessages)

Converts a list of checks into an output string (potentially nested), with ✔ to indicate checks that passed.
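The pass/fail rendering can be sketched as follows. The real method consumes an ExpectationDiagnosticMaturityMessages object and supports nested checks; here the checks are simplified to flat (message, passed) pairs:

```python
from typing import List, Tuple


def convert_checks_into_output_message(
    class_name: str, checks: List[Tuple[str, bool]]
) -> str:
    """Render (message, passed) pairs as a CLI checklist, marking passes with ✔."""
    lines = [f"Completeness checklist for {class_name}:"]
    for message, passed in checks:
        mark = "✔" if passed else " "
        lines.append(f"  {mark} {message}")
    return "\n".join(lines)


print(
    convert_checks_into_output_message(
        "ExpectColumnValuesToBeEven",
        [
            ("Has a library_metadata object", True),
            ("Has an informative docstring", False),
        ],
    )
)
```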

static _check_input_validation(expectation_instance, examples: List[ExpectationTestDataCases])

Check that the validate_configuration method returns True

static _check_renderer_methods(expectation_instance)

Check whether all statement renderers are defined

static _check_linting(expectation_instance)

Check whether linting checks pass for the Expectation

static _check_full_test_suite(library_metadata: AugmentedLibraryMetadata)

Check library_metadata to see if Expectation has a full test suite

static _check_manual_code_review(library_metadata: AugmentedLibraryMetadata)

Check library_metadata to see if a manual code review has been performed