{"id":20374,"date":"2023-11-10T07:47:15","date_gmt":"2023-11-10T12:47:15","guid":{"rendered":"https:\/\/qxf2.com\/blog\/?p=20374"},"modified":"2023-11-10T07:47:15","modified_gmt":"2023-11-10T12:47:15","slug":"use-pytest-to-run-great-expectations-checkpoints","status":"publish","type":"post","link":"https:\/\/qxf2.com\/blog\/use-pytest-to-run-great-expectations-checkpoints\/","title":{"rendered":"Use pytest to run Great Expectations checkpoints"},"content":{"rendered":"<p>At <a href=\"https:\/\/qxf2.com\/?utm_source=pytest_GE&#038;utm_medium=click&#038;utm_campaign=From%20blog\">Qxf2<\/a>, we&#8217;ve successfully integrated <a href=\"https:\/\/greatexpectations.io\">Great Expectations<\/a> into the majority of our projects. We now have GitHub workflows in place to run Great Expectations checkpoints before deploying our applications to production. However, as our test suite expanded, we encountered a few challenges:<br \/>\n1. Triggering valid checkpoints.<br \/>\n2. Aggregating checkpoint results.<br \/>\nTo address these issues, we turned to <a href=\"https:\/\/docs.pytest.org\/\">pytest<\/a>. 
In this post, I&#8217;ll demonstrate how to modify your checkpoint scripts to make them compatible with pytest for seamless integration.<\/p>\n<hr>\n<h3>Modifying Checkpoint Scripts for pytest Compatibility:<\/h3>\n<p>This is how your Python checkpoint script will mostly look after you generate it using the command <code>great_expectations checkpoint script github_stats_checkpoint<\/code>:<\/p>\n<pre lang=\"python\">\r\n# Contents of github_stats_checkpoint.py module\r\nimport sys\r\n\r\nfrom great_expectations.checkpoint.types.checkpoint_result import CheckpointResult\r\nfrom great_expectations.data_context import DataContext\r\n\r\ndata_context: DataContext = DataContext(context_root_dir=\"great_expectations\")\r\nresult: CheckpointResult = data_context.run_checkpoint(checkpoint_name=\"github_stats_checkpoint\",\r\n                                                       batch_request=None,\r\n                                                       run_name=None,)\r\nif not result[\"success\"]:\r\n    print(\"Validation failed!\")\r\n    sys.exit(1)\r\nprint(\"Validation succeeded!\")\r\nsys.exit(0)\r\n<\/pre>\n<p>To ensure compatibility with pytest, you&#8217;ll need to make the following changes to your checkpoint scripts:<br \/>\n1. Rename the checkpoint script<br \/>\n2. Create a test function<br \/>\n3. Create a test marker<\/p>\n<h4>1. Rename the checkpoint script:<\/h4>\n<p>To enable pytest to discover and execute the test, rename the checkpoint script. The name should either start with <code>test_<\/code> (e.g. <code>test_name.py<\/code>) or end with <code>_test.py<\/code> (e.g. <code>name_test.py<\/code>), i.e., if the checkpoint script is named <code>github_stats_checkpoint.py<\/code>, rename it to <code>test_github_stats_checkpoint.py<\/code>.<\/p>\n<h4>2. Create a test function:<\/h4>\n<p>Revise the checkpoint script to include:<br \/>\na. A test function whose name follows the <code>test_<\/code> naming convention, e.g. <code>test_validate_stats<\/code>.<br \/>\nb. An assert statement in the test function to validate the checkpoint result.<\/p>\n<pre lang=\"python\">\r\n# Contents of the new test_github_stats_checkpoint.py module\r\nfrom great_expectations.checkpoint.types.checkpoint_result import CheckpointResult\r\nfrom great_expectations.data_context import DataContext\r\n\r\ndef test_validate_stats(): # a. Test function adhering to the naming convention\r\n    data_context: DataContext = DataContext(context_root_dir=\"great_expectations\")\r\n    result: CheckpointResult = data_context.run_checkpoint(checkpoint_name=\"github_stats_checkpoint\",\r\n                                                           batch_request=None,\r\n                                                           run_name=None,)\r\n    assert result[\"success\"], \"Validation failed\" # b. Assert statement to validate the status\r\n<\/pre>\n<p><strong>Note:<\/strong> For further information on Python test discovery conventions, you can refer to the <a href=\"https:\/\/docs.pytest.org\/en\/7.4.x\/explanation\/goodpractices.html#conventions-for-python-test-discovery\">pytest conventions<\/a>.<\/p>\n<h4>3. Create a test marker:<\/h4>\n<p>Enhance your test function by adding a marker. This allows you to select and run tests belonging to the same category. 
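Before adding the marker, note that pytest flags unknown marks with a <code>PytestUnknownMarkWarning<\/code>, so it is good practice to register every custom marker. A minimal <code>pytest.ini<\/code> entry for the <code>github_stats<\/code> marker used below could look like this (the description text after the colon is illustrative):<\/p>\n<pre lang=\"python\">\r\n# Contents of pytest.ini (marker registration)\r\n[pytest]\r\nmarkers =\r\n    github_stats: Great Expectations checkpoint tests for GitHub stats\r\n<\/pre>\n<p>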
<code>@pytest.mark.marker_name<\/code> adds a marker for the test function.<\/p>\n<pre lang=\"python\">\r\nfrom great_expectations.checkpoint.types.checkpoint_result import CheckpointResult\r\nfrom great_expectations.data_context import DataContext\r\nimport pytest\r\n\r\n@pytest.mark.github_stats\r\ndef test_validate_stats():\r\n    data_context: DataContext = DataContext(context_root_dir=\"great_expectations\")\r\n    result: CheckpointResult = data_context.run_checkpoint(checkpoint_name=\"github_stats_checkpoint\",\r\n                                                           batch_request=None,\r\n                                                           run_name=None,)\r\n    assert result[\"success\"], \"Validation failed\"\r\n<\/pre>\n<hr>\n<h3>How to run tests:<\/h3>\n<p>Now that you&#8217;ve adjusted the checkpoint scripts to align with pytest, running them becomes straightforward.<br \/>\nTo validate that the new test is discoverable, run the following command:<\/p>\n<pre lang=\"python\">\r\n$ pytest --collect-only tests\/data_validation\/great_expectations\/utils\r\n===================================================== test session starts =====================================================\r\nplatform xxxxxx -- Python 3.9.17, pytest-7.2.1, pluggy-1.0.0\r\nrootdir: \/project_root_dir_location, configfile: pytest.ini\r\nplugins: anyio-3.6.2\r\ncollected 1 item\r\n\r\n<Package utils>\r\n  <Module test_github_stats_checkpoint.py>\r\n    <Function test_validate_stats>\r\n<\/pre>\n<p>Once you have confirmed that the test is discoverable, use the following command to trigger only the relevant checkpoints and aggregate their results:<\/p>\n<pre lang=\"python\">\r\n$ pytest -v -m github_stats tests\/data_validation\/great_expectations\/utils\r\n=========================== test session starts ============================\r\nplatform xxxxxx -- Python 3.9.17, pytest-7.2.1, pluggy-1.0.0\r\nrootdir: \/project_root_dir_location, configfile: pytest.ini\r\nplugins: anyio-3.6.2\r\ncollected 1 item\r\n\r\ntest_github_stats_checkpoint.py::test_validate_stats PASSED                                [100%]\r\n\r\n===================== 1 passed in 0.12s ======================\r\n<\/pre>\n<hr>\n<p>There you have it: a suitable test runner for executing your checkpoints with just a few simple modifications.<\/p>\n<hr>\n<h3>Hire technical testers from Qxf2<\/h3>\n<p><a href='https:\/\/qxf2.com\/contact?utm_source=pytest_GE&#038;utm_medium=click&#038;utm_campaign=From%20blog'>Hire Qxf2<\/a> for our proficiency in technical testing and problem-solving skills. We extend beyond conventional test automation to address significant testing complexities, enabling your team to iterate swiftly and deliver software of exceptional quality. Allow us to assist you in optimizing your testing methodologies and enhancing the overall quality of your products.<\/p>\n<hr>\n","protected":false},"excerpt":{"rendered":"<p>At Qxf2, we&#8217;ve successfully integrated Great Expectations into the majority of our projects. We now have GitHub workflows in place to run Great Expectations checkpoints before deploying our applications to production. However, as our test suite expanded, we encountered a few challenges: 1. Triggering valid checkpoints. 2. Aggregating checkpoint results. To address these issues, we turned to pytest. 
In this post, [&hellip;]<\/p>\n","protected":false},"author":9,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[313,107],"tags":[],"class_list":["post-20374","post","type-post","status-publish","format-standard","hentry","category-great-expectations","category-pytest"],"_links":{"self":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/20374","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/users\/9"}],"replies":[{"embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/comments?post=20374"}],"version-history":[{"count":19,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/20374\/revisions"}],"predecessor-version":[{"id":20437,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/20374\/revisions\/20437"}],"wp:attachment":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/media?parent=20374"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/categories?post=20374"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/tags?post=20374"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}