{"id":20514,"date":"2024-01-19T08:00:30","date_gmt":"2024-01-19T13:00:30","guid":{"rendered":"https:\/\/qxf2.com\/blog\/?p=20514"},"modified":"2025-03-19T09:46:46","modified_gmt":"2025-03-19T13:46:46","slug":"a-brief-introduction-to-accessibility-testing-using-axe","status":"publish","type":"post","link":"https:\/\/qxf2.com\/blog\/a-brief-introduction-to-accessibility-testing-using-axe\/","title":{"rendered":"A brief introduction to Accessibility Testing using Axe"},"content":{"rendered":"<p>This post will discuss accessibility testing &#8211; specifically the portions of using Axe on your browser as well as integrating Axe with your automated test suite. We will also briefly discuss few nuances of introducing Accessibility testing into your team&#8217;s workflow. This post will NOT cover the basics of Accessibility tests, the standards used, etc.<\/p>\n<h3>Overview<\/h3>\n<p><a href=\"https:\/\/qxf2.com\/?utm_source=accessibility_testing=click&amp;utm_campaign=From%20blog\" target=\"_blank\" rel=\"noopener\">Qxf2&#8217;s<\/a> clients are startups and early stage products. They usually need quick &#8220;first versions&#8221; of different non-functional tests. The tests need to be implemented quickly and need to provide basic coverage. As the product matures, they hire more testers, increase coverage and implement better versions of the tests we put in place. In that context, Qxf2 has gotten experience implementing quick and dirty versions of performance tests, security tests and now, accessibility tests as well.<\/p>\n<p>This post will map to our experience. It will serve as a short guide to help you go from a beginner at Accessibility testing to integrating Axe with your automated tests.<\/p>\n<h3>Part I: Pick a tool<\/h3>\n<p>In our exploration of accessibility tools for accessibility testing of both internal projects and at client end, we initially looked up for a suitable candidate for accessibility testing. 
We noticed that many of the tools, like <a href=\"https:\/\/wave.webaim.org\/\" target=\"_blank\" rel=\"noopener\">WAVE<\/a>, had one thing in common: they relied on web URLs and lacked the ability to integrate with an automation suite, which was a key requirement for us.<\/p>\n<h5>Narrowing down to Axe from Deque<\/h5>\n<p>Our search for an alternative led us to <a href=\"https:\/\/www.deque.com\/axe\/\" target=\"_blank\" rel=\"noopener\">Axe<\/a>, a well-known tool in the accessibility testing domain. What stood out about <a href=\"https:\/\/www.deque.com\/axe\/\" target=\"_blank\" rel=\"noopener\">Axe<\/a> was its ability to integrate seamlessly with automation suites, making it a good choice for our accessibility testing needs. This prompted us to delve deeper into Axe&#8217;s functionality, intrigued by its potential to cater to both technical and non-technical users. The rest of this post explores how Axe addresses our accessibility testing requirements.<\/p>\n<p>While exploring the <a href=\"https:\/\/www.deque.com\/axe\/\" target=\"_blank\" rel=\"noopener\">Axe<\/a> tool, we found two ways to use it: through a browser plugin and by integrating it into our automation suite. We tried both.<\/p>\n<hr \/>\n<h3>Part II: Starting with axe DevTools<\/h3>\n<p>axe DevTools is a browser extension for testing web applications. It can be installed from the web store and, once installed, is available in the browser&#8217;s developer tools.<\/p>\n<h5>Scanning all pages in one go to get accessibility results<\/h5>\n<p>Once axe DevTools is installed, we navigate to the web application and open the developer tools. Clicking axe DevTools gives us two options:<br \/>\n1. Scan ALL of my page<br \/>\n2. 
Start an Intelligent Guided Test.<br \/>\nWe first tried &#8220;Scan ALL of my page&#8221; and, in a jiffy, it listed the accessibility issues.<br \/>\n<a href=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2023\/12\/scan_all_page.png\" data-rel=\"lightbox-image-0\" data-rl_title=\"\" data-rl_caption=\"\" title=\"\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-20516\" src=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2023\/12\/scan_all_page.png\" alt=\"scan all page\" width=\"1906\" height=\"942\" srcset=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2023\/12\/scan_all_page.png 1906w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2023\/12\/scan_all_page-300x148.png 300w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2023\/12\/scan_all_page-1024x506.png 1024w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2023\/12\/scan_all_page-768x380.png 768w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2023\/12\/scan_all_page-1536x759.png 1536w\" sizes=\"auto, (max-width: 1906px) 100vw, 1906px\" \/><\/a><br \/>\nIt also allows you to export the results in JSON, CSV, or JUnit XML format.<\/p>\n<h5>Intelligent Guided Test<\/h5>\n<p>Intelligent Guided Test is a more advanced way of doing accessibility testing. Using a simple question-and-answer format about the page and content under test, it leverages machine learning to help you quickly find and fix issues that are not detectable by automated testing alone.<br \/>\nWith Intelligent Guided Test, you can also make your testing more agile by focusing only on specific parts of the page, like a table, images or forms.<br \/>\nWe focused less on Intelligent Guided Test because we were more interested in integrating <a href=\"https:\/\/www.deque.com\/axe\/\" target=\"_blank\" rel=\"noopener\">Axe<\/a> with our automation suite, and the axe DevTools extension gave us a good start.<\/p>\n<hr \/>\n<h3>Part III: Integrating Axe with test automation<\/h3>\n<p>In this section we 
will show you how to integrate Axe with your test automation framework. We will focus on integrating Axe with <a href=\"https:\/\/github.com\/qxf2\/qxf2-page-object-model\" target=\"_blank\" rel=\"noopener\">Qxf2&#8217;s test automation framework<\/a>, but the steps remain similar for any framework of your choice.<\/p>\n<h4>1. Exploring axe-core<\/h4>\n<p>axe-core is the accessibility engine for automated Web UI testing and we will be using it as part of our automation suite moving forward. Let&#8217;s see how we can use it.<\/p>\n<h5>Installation of axe-core<\/h5>\n<p>Installing axe-core is straightforward. To install Axe with Selenium, use the pip package manager.<\/p>\n<pre lang=\"yml\">pip install axe-selenium-python<\/pre>\n<p>Axe-core has different sets of rules for WCAG 2.0, 2.1 and 2.2 at levels A, AA and AAA, along with best-practice rules that catch common accessibility issues, like ensuring every page has an h1 heading.<\/p>\n<h5>Content of the axe-core API<\/h5>\n<p>The <a href=\"https:\/\/github.com\/dequelabs\/axe-core\/blob\/develop\/doc\/API.md\" target=\"_blank\" rel=\"noopener\">axe-core API<\/a> consists of:<br \/>\naxe.min.js &#8211; a minified JavaScript file to be used in our framework. Axe injects this file into all frames to evaluate the contents of each frame. This is the file that helps identify accessibility problems.<\/p>\n<h4>2. Integration of axe-core with Qxf2 framework<\/h4>\n<p>As we started the integration, we thought it would be straightforward: inject the JavaScript, run it on the pages and collect the accessibility issues. However, we hit a few issues along the way. We will discuss them so that anyone facing the same problems doesn&#8217;t spend much time debugging.<\/p>\n<h5>Include axe.min.js in your code base<\/h5>\n<p>In general, there are two methods in Axe, viz. 
inject and run, which are often shown in online examples. But when integrating within a framework, we have to do a bit more than just call these methods directly. We dug a level deeper, checked the constructor of the Axe class and realized that we were missing an important param called <strong>script_url<\/strong>. script_url points to the file containing axe.min.js, which gets injected into the page. We had to place axe.min.js in our framework and point script_url to it before using any of the Axe methods. Once script_url was included, we called the parent class constructor through super() so that we could use the methods of the parent (Axe) class.<\/p>\n<pre lang=\"python\">import os\r\nfrom axe_selenium_python import Axe\r\n\r\nscript_url = os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\", \"utils\", \"axe.min.js\"))\r\n\r\nclass Accessibilityutil(Axe):\r\n    \"Accessibility object to run accessibility test\"\r\n    def __init__(self, driver):\r\n        super().__init__(driver, script_url)\r\n<\/pre>\n<h5>Write wrappers for the inject and run methods<\/h5>\n<p>The methods of interest were inject() and run() in the parent class. We wrote wrappers around them, as shown below.<\/p>\n<pre lang=\"python\">def accessibility_inject_axe(self):\r\n    \"Inject Axe into the page\"\r\n    try:\r\n        return self.axe_util.inject()\r\n    except Exception as e:\r\n        self.write(e)\r\n\r\ndef accessibility_run_axe(self):\r\n    \"Run Axe on the page\"\r\n    try:\r\n        return self.axe_util.run()\r\n    except Exception as e:\r\n        self.write(e)\r\n<\/pre>\n<p>With this, we were able to use the methods in our tests, but there was a problem: when we ran the accessibility test, we got one test result for all the pages. That would make it a mess to figure out which accessibility issue belongs to which page. 
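<\/p>\n<p>As an aside, the dict returned by run() follows the axe-core results format: the &#8216;violations&#8217; key holds a list of entries, each with fields like &#8216;id&#8217;, &#8216;impact&#8217; and the affected &#8216;nodes&#8217;. A small, framework-free helper to summarize a result by impact could look like this sketch (the sample data below is illustrative, not real Axe output):<\/p>\n<pre lang=\"python\">def summarize_violations(axe_result):\r\n    'Count Axe violations by impact level'\r\n    summary = {}\r\n    for violation in axe_result.get('violations', []):\r\n        impact = violation.get('impact', 'unknown')\r\n        summary[impact] = summary.get(impact, 0) + 1\r\n    return summary\r\n\r\n# Illustrative data shaped like an axe-core result\r\nsample = {'violations': [\r\n    {'id': 'image-alt', 'impact': 'critical', 'nodes': []},\r\n    {'id': 'color-contrast', 'impact': 'serious', 'nodes': []},\r\n    {'id': 'heading-order', 'impact': 'serious', 'nodes': []}]}\r\nprint(summarize_violations(sample))  # {'critical': 1, 'serious': 2}\r\n<\/pre>\n<p>A summary like this is handy for logging, while the full result is what goes into the per-page report.<\/p>\n<p>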
Since we are dealing with multiple pages, we need an accessibility report for each page. To handle this, we introduced a method in the PageFactory of our framework that returns all the pages. We then iterate over the pages in our tests, inject and run Axe, and create a result for every page.<\/p>\n<pre lang=\"python\">@staticmethod\r\ndef get_all_page_names():\r\n    \"Return the page names\"\r\n    return [\"main page\",\r\n            \"redirect\",\r\n            \"contact page\"]\r\n<\/pre>\n<h5>Example tests<\/h5>\n<p>This is how part of the test looks: it fetches the pages from PageFactory, iterates over each page and uses Axe.<\/p>\n<pre lang=\"python\">@pytest.mark.ACCESSIBILITY\r\ndef test_accessibility(test_obj):\r\n    \"Inject Axe and create snapshot for every page\"\r\n    try:\r\n\r\n        #Initialize flags for tests summary\r\n        expected_pass = 0\r\n        actual_pass = -1\r\n\r\n        #Get all pages\r\n        page_names = PageFactory.get_all_page_names()\r\n\r\n        for page in page_names:\r\n            test_obj = PageFactory.get_page_object(page,base_url=test_obj.base_url)\r\n            #Inject Axe in every page\r\n            test_obj.accessibility_inject_axe()\r\n            #Check if Axe is run in every page\r\n            run_result = test_obj.accessibility_run_axe()\r\n<\/pre>\n<p>Now that we could run the accessibility tests and get a result for every page, we wondered how useful the results would be. Whenever the UI changes, it is likely to introduce new accessibility issues, but the test simply runs and captures the latest result: there is no way to compare against the last run, know what changed, or judge whether a change is even acceptable. Looking into this, we came across snapshot testing, which lets us keep a reference snapshot and compare the results of each test run against it.<\/p>\n<h4>3. 
Using snapshot testing against Axe results<\/h4>\n<p>Using snapshots was simple. We had to install a plugin for snapshot testing with pytest.<\/p>\n<pre lang=\"yml\">pip install pytest-snapshot<\/pre>\n<p>We created a class that calls the parent Snapshot class using the super() method.<\/p>\n<pre lang=\"python\">import conf.snapshot_dir_conf\r\nfrom pytest_snapshot.plugin import Snapshot\r\n\r\nsnapshot_dir = conf.snapshot_dir_conf.snapshot_dir\r\n\r\nclass Snapshotutil(Snapshot):\r\n    \"Snapshot object to use snapshots for comparisons\"\r\n    def __init__(self, snapshot_update=False,\r\n                 allow_snapshot_deletion=False,\r\n                 snapshot_dir=snapshot_dir):\r\n        super().__init__(snapshot_update, allow_snapshot_deletion, snapshot_dir)\r\n<\/pre>\n<p>To generate snapshots for the first time, pass the following command-line argument while running the test script:<\/p>\n<pre lang=\"yml\">python -m pytest tests\/test_accessibility.py --snapshot-update<\/pre>\n<p>Passing this argument creates the snapshot directory along with the snapshot files the first time. If the snapshots already exist, it updates them with the latest copy of Axe violations on the test pages.<\/p>\n<h5>Using the DeepDiff library for snapshot comparisons<\/h5>\n<p>We use the &#8216;compare_and_log_violation()&#8217; method to compare the current test results with existing snapshots using the DeepDiff library. If no new violations are found while comparing against the baseline snapshot, the test passes. 
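<\/p>\n<p>In essence, the comparison boils down to checking which violation ids appear in the current run but not in the baseline, and vice versa. Here is a simplified, framework-free sketch of that idea (diff_violations is an illustrative helper, not the actual compare_and_log_violation() method, which relies on DeepDiff):<\/p>\n<pre lang=\"python\">def diff_violations(baseline, current):\r\n    'Return (new, resolved) violation ids between a baseline snapshot and the current run'\r\n    baseline_ids = {v['id'] for v in baseline}\r\n    current_ids = {v['id'] for v in current}\r\n    return sorted(current_ids - baseline_ids), sorted(baseline_ids - current_ids)\r\n\r\n# Illustrative snapshot and current-run data\r\nbaseline = [{'id': 'color-contrast'}, {'id': 'image-alt'}]\r\ncurrent = [{'id': 'color-contrast'}, {'id': 'link-name'}]\r\nnew, resolved = diff_violations(baseline, current)\r\nprint(new)       # ['link-name']\r\nprint(resolved)  # ['image-alt']\r\n<\/pre>\n<p>DeepDiff goes further than this sketch: it also reports changes within matching entries, such as a different count of affected nodes.<\/p>\n<p>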
If violations are resolved or new violations are found, they are logged in the console and in a separate file located in &#8216;.\/conf\/new_violations_record.txt&#8217;.<\/p>\n<pre lang=\"python\"># Use deepdiff to compare the snapshots\r\nviolation_diff = DeepDiff(existing_snapshot_dict,\r\n                          current_violations_dict,\r\n                          ignore_order=True,\r\n                          verbose_level=2)\r\n<\/pre>\n<p>So, with the above methods of Snapshot class, the test script runs accessibility checks on each page, compares the results with stored snapshots, and logs any changes found.<\/p>\n<p>Here is the complete test script of Axe accessibility testing.<\/p>\n<pre lang=\"python\">@pytest.mark.ACCESSIBILITY\r\ndef test_accessibility(test_obj, request):\r\n    \"Test accessibility using Axe and compare snapshot results and save if new violations found\"\r\n    try:\r\n\r\n        #Initalize flags for tests summary\r\n        expected_pass = 0\r\n        actual_pass = -1\r\n\r\n        #Get snapshot update flag from pytest options\r\n        snapshot_update = request.config.getoption(\"--snapshot_update\")\r\n        #Create an instance of Snapshotutil\r\n        snapshot_util = Snapshotutil(snapshot_update=snapshot_update)\r\n\r\n        #Set up the violations log file\r\n        violations_log_path = snapshot_util.initialize_violations_log()\r\n        snapshot_dir = conf.snapshot_dir_conf.snapshot_dir\r\n\r\n        #Get all pages\r\n        page_names = conf.snapshot_dir_conf.page_names\r\n        for page in page_names:\r\n            test_obj = PageFactory.get_page_object(page,base_url=test_obj.base_url)\r\n            #Inject Axe in every page\r\n            test_obj.accessibility_inject_axe()\r\n            #Check if Axe is run in every page\r\n            axe_result = test_obj.accessibility_run_axe()\r\n            #Extract the 'violations' section from the Axe result\r\n            current_violations = 
axe_result.get('violations', [])\r\n            # Log if no violations are found\r\n            if not current_violations:\r\n                test_obj.log_result(\r\n                    True,\r\n                    positive=f\"No accessibility violations found on {page}.\",\r\n                    negative=\"\",\r\n                    level='info'\r\n                )\r\n\r\n            #Load the existing snapshot for the current page (if available)\r\n            existing_snapshot = snapshot_util.initialize_snapshot(snapshot_dir, page, current_violations=current_violations)\r\n            if existing_snapshot is None:\r\n                test_obj.log_result(\r\n                    True,\r\n                    positive=(\r\n                        f\"No existing snapshot was found for {page} page. \"\r\n                        \"A new snapshot has been created in ..\/conf\/snapshot dir. \"\r\n                        \"Please review the snapshot for violations before running the test again. 
\"\r\n                    ),\r\n                    negative=\"\",\r\n                    level='info'\r\n                )\r\n                continue\r\n\r\n            #Compare the current violations with the existing snapshot to find any new violations\r\n            snapshots_match, new_violation_details = snapshot_util.compare_and_log_violation(\r\n                    current_violations, existing_snapshot, page, violations_log_path\r\n                )\r\n            #For each new violation, log few details to the output display\r\n            if new_violation_details:\r\n                snapshot_util.log_new_violations(new_violation_details)\r\n            #Log the result of the comparison (pass or fail) for the current page\r\n            test_obj.log_result(snapshots_match,\r\n                                 positive=f'Accessibility checks for {page} passed',\r\n                                 negative=f'Accessibility checks for {page} failed',\r\n                                 level='debug')\r\n\r\n        #Print out the result\r\n        test_obj.write_test_summary()\r\n        expected_pass = test_obj.result_counter\r\n        actual_pass = test_obj.pass_counter\r\n\r\n    except Exception as e:\r\n        test_obj.log_result(\r\n            False,\r\n            positive=\"\",\r\n            negative=f\"Exception when trying to run test: {__file__}\\nPython says: {str(e)}\",\r\n            level='error'\r\n        )\r\n\r\n    assert expected_pass == actual_pass, f\"Test failed: {__file__}\"\r\n<\/pre>\n<p>This is how a typical Axe test result looks.<br \/>\n<a href=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2024\/01\/axe_test_run.jpg\" data-rel=\"lightbox-image-1\" data-rl_title=\"\" data-rl_caption=\"\" title=\"\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-large wp-image-23185\" src=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2024\/01\/axe_test_run-1024x259.jpg\" alt=\"\" width=\"1200\" height=\"228\" \/><br 
\/>\n<\/a><br \/>\n&nbsp;<\/p>\n<hr \/>\n<h3>Part IV: Running Accessibility tests as a part of CI<\/h3>\n<p>As part of our <strong>Newsletter Automation<\/strong> project, we have integrated accessibility tests into our GitHub workflow. In addition to running API and UI tests as part of CI, we wanted to include accessibility tests as well. However, we debated where these tests should fit: before or after functional tests? The decision largely depends on the project\u2019s priorities and requirements. While we are still evaluating the best placement, we have currently positioned them before the functional tests.<\/p>\n<p>The next problem was the frequent failure of the accessibility tests. This is a common phenomenon: the application under test evolves and its UI changes frequently. But we don\u2019t want the CI to fail, stop and skip the other tests just because the accessibility tests failed.<br \/>\nSo, we used a parameter that allows the subsequent tests to execute even if the accessibility tests fail. Below is a snippet showing how we used if: always() so that the UI tests run after the accessibility tests, whatever their outcome.<\/p>\n<pre lang=\"yml\">- name: Run Accessibility Tests\r\n  run: |\r\n    cd tests\/integration\/tests\/accessibility_tests\r\n    python -m pytest test_accessibility.py --browser headless-chrome --app_url http:\/\/localhost:5000\r\n\r\n- name: Run UI Tests\r\n  if: always()\r\n  run: |\r\n    cd tests\/integration\/tests\/ui_tests\r\n    python -m pytest -n 4 --browser headless-chrome --app_url http:\/\/localhost:5000\r\n<\/pre>\n<p>For more details, you can refer to our GitHub workflow file via this <a href=\"https:\/\/github.com\/qxf2\/newsletter_automation\" target=\"_blank\" rel=\"noopener\">link<\/a>.<\/p>\n<hr \/>\n<h3>Part V: Introducing Accessibility tests to a team<\/h3>\n<p>Now you know how to add Accessibility testing to your test suite. 
While the technical side of things looks simple, there are problems you need to handle within your team. When you run accessibility tests for the first time, expect a lot of errors to show up. Your team is not going to be able to address everything. In fact, they might choose to address only the critical misses. So where does that leave us with CI and automation? Should the automated tests (and therefore the CI pipeline) fail all the time? Probably not. You do not want folks ignoring test results &#8220;because the tests are expected to fail&#8221;. Here is where your skill as a tester comes in. You need to work out a feasible path with your team: some sort of policy or agreement where the team commits to a timeline after which your automated accessibility tests will run. In the meantime, they can run your automated tests on their local machines as they make improvements. Further, you can also have policies in your CI to make sure no new accessibility issues are being introduced.<\/p>\n<hr \/>\n<h3>Conclusion<\/h3>\n<p>This post was meant to help testers quickly produce some accessibility testing results. We would strongly encourage you to read more about Accessibility standards and how to introduce them within the context of your team.<\/p>\n<hr \/>\n<h3>Hire technical testers from Qxf2<\/h3>\n<p><a href=\"https:\/\/qxf2.com\/?utm_source=accessibility_testing=click&amp;utm_campaign=From%20blog\" target=\"_blank\" rel=\"noopener\">Qxf2<\/a> collaborates with small teams and nascent products. Our testing professionals possess technical acumen and a broad understanding of contemporary testing methodologies. We go beyond conventional test automation by extending our testing proficiency to unconventional areas where testers typically don&#8217;t contribute value. 
If you&#8217;re seeking technical testers for your team, reach out to us today.<\/p>\n<hr \/>\n","protected":false},"excerpt":{"rendered":"<p>This post will discuss accessibility testing &#8211; specifically the portions of using Axe on your browser as well as integrating Axe with your automated test suite. We will also briefly discuss few nuances of introducing Accessibility testing into your team&#8217;s workflow. This post will NOT cover the basics of Accessibility tests, the standards used, etc. Overview Qxf2&#8217;s clients are startups [&hellip;]<\/p>\n","protected":false},"author":38,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[392],"tags":[],"class_list":["post-20514","post","type-post","status-publish","format-standard","hentry","category-accessibility-testing"],"_links":{"self":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/20514","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/users\/38"}],"replies":[{"embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/comments?post=20514"}],"version-history":[{"count":58,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/20514\/revisions"}],"predecessor-version":[{"id":23214,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/20514\/revisions\/23214"}],"wp:attachment":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/media?parent=20514"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/categories?post=20514"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/tags?post=20514"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}