{"id":4645,"date":"2016-08-29T00:00:07","date_gmt":"2016-08-29T04:00:07","guid":{"rendered":"https:\/\/qxf2.com\/blog\/?p=4645"},"modified":"2018-04-03T10:25:25","modified_gmt":"2018-04-03T14:25:25","slug":"better-failure-summary-using-pytest","status":"publish","type":"post","link":"https:\/\/qxf2.com\/blog\/better-failure-summary-using-pytest\/","title":{"rendered":"Better  pytest failure summaries"},"content":{"rendered":"<p>We, <a href=\"https:\/\/www.qxf2.com\/?utm_source=pytest_summary&amp;utm_medium=click&amp;utm_campaign=From%2520blog\">at Qxf2<\/a>, are really liking <a href=\"http:\/\/pytest.org\/\">pytest<\/a>. It is the most Pythonic test runner we have come across so far. We started using it with our GUI automation. We realized one thing very quickly &#8211; pytest&#8217;s reporting capabilities are configured to suit unit testing. We needed to reconfigure pytest&#8217;s reporting to suit our GUI automation framework. It was also hard to Google for the exact changes we wanted to do. So, once we figured out what to do, we decided to write a post showing you how to modify and control the different parts of pytest&#8217;s failure reports.<\/p>\n<hr \/>\n<h3>Setup<\/h3>\n<p>We realize that no single configuration of output message suits everybody. So we decided to provide you with a simple but comprehensive sample test. The sample will help you follow along with this post. 
And if you do not like how we tweaked pytest&#8217;s failure reporting, you can use the sample test to try out other configurations.<\/p>\n<p>Our sample test has the following features:<br \/>\na) writes out to stdout (stream handler)<br \/>\nb) writes out to a log file (file handler)<br \/>\nc) forces an assert to fail so we can show a failure summary<br \/>\nd) collects a list of meaningful failure messages that we would like to display<br \/>\ne) includes some test metadata (e.g. browser) we want displayed<\/p>\n<p>Here is the sample test code (file name is <code>test_pytest_summary.py<\/code>):<\/p>\n<pre lang=\"python\">\"\"\"\r\nA sample test writing out the error to the log and printing the failure message as the output.\r\n\"\"\"\r\n#START HERE\r\ndef test_my_output(): #1. Defining the method\r\n\r\n      #2. Debug\/progress messages\r\n      import logging\r\n      log = logging.getLogger('file_log') #A log handler\r\n      fileHandler = logging.FileHandler('file_log.log')\r\n      log.addHandler(fileHandler)\r\n      streamHandler = logging.StreamHandler() #Write out to the command prompt\r\n      log.addHandler(streamHandler)\r\n\r\n      log.error('Hello from log: You should not see me in the pytest summary') #Write into the log\r\n      print('Hello from print: You should not see me in the pytest summary')\r\n\r\n      #3. Results summary section\r\n      log.error('Hello from error: This is a summary of an error')\r\n      print('Hello from not-an-error: This is a summary of stdout aka not-an-error')\r\n\r\n      #4. Example list of failures that your test may have collected\r\n      test_metadata = 'firefox 45, OS X Yosemite'\r\n      failure_list = ['1. Ineffective kiss. Frog did not turn into a prince.','2. The Queen used rat poison in the apple.','3. The birds ate the breadcrumbs.']\r\n      assert 3 == 4\r\n\r\n#---START OF SCRIPT\r\nif __name__ == '__main__':\r\n      #5. 
Call the method\r\n      test_my_output()\r\n<\/pre>\n<h3>The pytest failure report<\/h3>\n<p>Run the above test code using the command <code>py.test test_pytest_summary.py<\/code>. You will notice that the failure summary looks like the image below:<\/p>\n<p><a href=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2016\/12\/img_1.png\" data-rel=\"lightbox-image-0\" data-rl_title=\"\" data-rl_caption=\"\" title=\"\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-4679 size-full\" src=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2016\/12\/img_1.png\" alt=\"img_1\" width=\"641\" height=\"569\" srcset=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2016\/12\/img_1.png 641w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2016\/12\/img_1-300x266.png 300w\" sizes=\"auto, (max-width: 641px) 100vw, 641px\" \/><\/a><\/p>\n<p>The pytest failure report has three parts:<br \/>\na) the failure\/traceback section<br \/>\nb) the captured stdout section<br \/>\nc) the captured stderr section<\/p>\n<p>The failure report is optimized for unit testing. For example, it shows the entire method that failed as part of the traceback. That may be a good approach for a unit test but is not very useful for a <a href=\"http:\/\/martinfowler.com\/bliki\/BroadStackTest.html\">broadstack test<\/a> with multiple checkpoints. We would rather have the GUI\/broadstack test go as far along as possible while collecting a list of failures and then display all the failures at the end. We also write numerous log messages to the console as part of our GUI automation. We would rather not have our entire verbose log displayed as part of the failure report.<\/p>\n<p><strong>NOTE:<\/strong> We are showing you how to suppress a stderr message too. 
We do not really know if this will ever be useful &#8211; but we thought, for the sake of completeness, that we should show you how to control that section too.<\/p>\n<hr \/>\n<h3>An improved pytest failure report<\/h3>\n<p>To get a better failure summary, you need to do the following:<br \/>\n1. Stop the failure\/traceback section from displaying the entire method<br \/>\n2. Add our human-friendly failure messages to the failure section<br \/>\n3. Flush the stdout\/stderr buffers and insert only the messages we want<\/p>\n<h4>1. Stop the failure\/traceback section from displaying the entire method<\/h4>\n<p>This is easy. pytest provides a <code>--tb<\/code> command line option to control the traceback section. We preferred the output when <code>--tb<\/code> was set to <code>short<\/code>. Run the above test code with <code>py.test test_pytest_summary.py --tb=short<\/code> and notice that the entire method is no longer displayed as part of the failure\/traceback section.<\/p>\n<h4>2. Add our human-friendly failure messages to the failure section<\/h4>\n<p>To add human-friendly failure messages, simply add a comma after the assert condition, followed by the message you want displayed. 
In our example, it will look like the snippet below:<\/p>\n<pre lang=\"python\">assert 3 == 4, \"\\n----TEST META DATA----\\n%s\\n----FAILURE-LIST----\\n%s\" % (test_metadata, '\\n'.join(failure_list))\r\n<\/pre>\n<p>If you run the test using the command <code>py.test test_pytest_summary.py --tb=short<\/code>, the output will look something like this:<\/p>\n<p><a href=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2016\/12\/img_2.png\" data-rel=\"lightbox-image-1\" data-rl_title=\"\" data-rl_caption=\"\" title=\"\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-4680 size-full\" src=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2016\/12\/img_2.png\" alt=\"img_2\" width=\"640\" height=\"356\" srcset=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2016\/12\/img_2.png 640w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2016\/12\/img_2-300x167.png 300w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><\/a><\/p>\n<p>Notice that the failure summary is formatted, that the details of the test method no longer appear in the traceback, and that the human-friendly failure messages are included in the failure section.<\/p>\n<h4>3. Flush the stdout\/stderr buffers and insert only the messages we want<\/h4>\n<p>To control the captured stdout\/stderr sections, we need to use pytest&#8217;s <code>capsys<\/code> fixture. The <code>capsys.readouterr()<\/code> call snapshots the output captured so far and empties the buffers. After the test function finishes, the original streams are restored. Using the <code>capsys<\/code> fixture this way frees your test from having to care about setting\/resetting output streams and also interacts well with pytest&#8217;s own per-test capturing.<\/p>\n<p>Modify the test as follows:<br \/>\na. Add an argument called <code>capsys<\/code> to the test function.<\/p>\n<pre lang=\"python\">def test_my_output(capsys):\r\n<\/pre>\n<p>b. Add these lines of code to the method wherever you want to flush the stdout and stderr buffers<\/p>\n<pre lang=\"python\">#3. 
The key lines of code\r\n      if capsys is not None:\r\n            out, err = capsys.readouterr() #Flushes the stdout, stderr buffers\r\n<\/pre>\n<p>This way, pytest will only output the stdout and stderr messages that were written after the buffers were flushed. If you do not want to lose the stderr messages, simply log <code>err<\/code> back as errors.<\/p>\n<p><strong>NOTE:<\/strong> Just as we finished writing this blog post, pytest came out with a <code>capsys.disabled()<\/code> option. We have not had enough time to experiment with this option yet. So, to learn more, check out their official post <a href=\"http:\/\/blog.pytest.org\/2016\/whats-new-in-pytest-30\/\">here<\/a>.<\/p>\n<hr \/>\n<h3>Putting it all together<\/h3>\n<p>Now our test code (<strong>test_pytest_summary.py<\/strong>) should look like this:<\/p>\n<pre lang=\"python\">\"\"\"\r\nQxf2 Services:\r\nA sample test writing out the error to the log and printing the failure message as the output.\r\nThis is a contrived example to help readers follow along with our blog post\r\n\"\"\"\r\ndef test_my_output(capsys): #1. Defining the method with the argument capsys\r\n      \"Contrived test method to serve as an illustrative example\"\r\n      #2. Debug\/progress messages\r\n      import logging\r\n      log = logging.getLogger('file_log') #A log handler\r\n      fileHandler = logging.FileHandler('file_log.log')\r\n      log.addHandler(fileHandler)\r\n      streamHandler = logging.StreamHandler() #Write out to the command prompt\r\n      log.addHandler(streamHandler)\r\n\r\n      log.error('Hello from log: You should not see me in the pytest summary') #Write into the log\r\n      print('Hello from print: You should not see me in the pytest summary')\r\n\r\n      #3. The key lines of code\r\n      if capsys is not None:\r\n            out, err = capsys.readouterr() #Flushes the stdout, stderr buffers\r\n\r\n      #4. 
Results summary section\r\n      log.error('Hello from error: This is a summary of an error')\r\n      print('Hello from not-an-error: This is a summary of stdout aka not-an-error')\r\n\r\n      #5a. Example test meta-data\r\n      test_metadata = 'firefox 45, OS X Yosemite'\r\n      #5b. Example list of failures that your test may have collected\r\n      failure_list = ['1. Ineffective kiss. Frog did not turn into a prince.','2. The Queen used rat poison in the apple.','3. The birds ate the breadcrumbs.']\r\n      assert 3 == 4, \"\\n----TEST META DATA----\\n%s\\n----FAILURE-LIST----\\n%s\" % (test_metadata, '\\n'.join(failure_list))\r\n\r\n#---START OF SCRIPT\r\nif __name__ == '__main__':\r\n      #6. Call the method directly; pass capsys=None since the fixture is available only when run via pytest\r\n      test_my_output(capsys=None)\r\n<\/pre>\n<hr \/>\n<h3>Run the test<\/h3>\n<p>Run the test using the command <code>py.test test_pytest_summary.py --tb=short<\/code>, where <code>--tb=short<\/code> is the flag for the shorter traceback format.<\/p>\n<p><a href=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2016\/12\/img_3.png\" data-rel=\"lightbox-image-2\" data-rl_title=\"\" data-rl_caption=\"\" title=\"\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-4681\" src=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2016\/12\/img_3.png\" alt=\"img_3\" width=\"642\" height=\"327\" srcset=\"https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2016\/12\/img_3.png 642w, https:\/\/qxf2.com\/blog\/wp-content\/uploads\/2016\/12\/img_3-300x153.png 300w\" sizes=\"auto, (max-width: 642px) 100vw, 642px\" \/><\/a><br \/>\nW00t! The pytest failure report is so much nicer now.<\/p>\n<hr \/>\n<p>And this is how we ended up re-configuring pytest&#8217;s failure summary to suit our GUI automation needs. 
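<\/p>\n<p>For readers who want to try the <code>capsys.disabled()<\/code> option we mentioned above, here is a minimal sketch. We have not verified this in our own framework yet, so treat it as a starting point rather than a recommendation: output printed inside the <code>with<\/code> block bypasses pytest&#8217;s capturing and goes straight to the terminal, while output printed outside it is captured as usual.<\/p>\n<pre lang=\"python\">def test_disabled_capture(capsys):\r\n      #Inside this block, output is not captured by pytest\r\n      with capsys.disabled():\r\n            print('This line bypasses pytest capturing and appears on the terminal')\r\n      #Outside the block, normal per-test capturing resumes\r\n      print('This line is still captured as usual')\r\n<\/pre>\n<p>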
If you have questions, please post them below and one of us will get back to you soon.<\/p>\n<p><strong>If you liked what you read, know more <a href=\"https:\/\/qxf2.com\/blog\/about-qxf2\/\">about Qxf2<\/a>.<\/strong><\/p>\n<hr \/>\n","protected":false},"excerpt":{"rendered":"<p>We, at Qxf2, are really liking pytest. It is the most Pythonic test runner we have come across so far. We started using it with our GUI automation. We realized one thing very quickly &#8211; pytest&#8217;s reporting capabilities are configured to suit unit testing. We needed to reconfigure pytest&#8217;s reporting to suit our GUI automation framework. It was also hard [&hellip;]<\/p>\n","protected":false},"author":13,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[107,18],"tags":[],"class_list":["post-4645","post","type-post","status-publish","format-standard","hentry","category-pytest","category-python"],"_links":{"self":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/4645","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/users\/13"}],"replies":[{"embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/comments?post=4645"}],"version-history":[{"count":23,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/4645\/revisions"}],"predecessor-version":[{"id":7241,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/posts\/4645\/revisions\/7241"}],"wp:attachment":[{"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/media?parent=4645"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/categories?post=4645"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/qxf2.com\/blog\/wp-json\/wp\/v2\/tags?
post=4645"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}