
White Paper wpsr0512

Five reasons why Test Automation fails

Sharon Robson, Knowledge Engineer and Software Testing Practice Lead, Software Education
B.Sc(Hons), Grad.Dip.IT, ISTQB CTFL, CTAL-ATA, CTAL-ATM, CTAL-TTA

May 2012

Executive Summary

Bill Gates once said, "The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency."

Many teams expect that automation will be a silver bullet that allows the testing of any solution or application to be completed quickly and effectively. However, the phrase "completed quickly and effectively" holds the key to the five main automation issues that teams face. The challenge is how to identify and solve them. What are these key issues?

1. One tool to rule them all - a single tool will do everything (how can a single tool assist in completing the work, getting coverage, or working effectively?).
2. Automated everything - all testing can be automated (this challenges the "completed" and "quickly" aspects of automation).
3. Automation is quick - quicker than manual (but is it really swifter than manual testing? How effective can quick testing be?).
4. Testers need to learn how to code - all automation uses code and testers need to be able to write code (a real challenge when aiming for completeness and effectiveness).
5. Automated testing proves the software is good - what is automation actually proving? (badly structured testing will not allow for complete or effective coverage either).

Let's now consider each issue in some detail.

(1) One tool to rule them all

"One tool to rule them all" is a common fallacy. Many organisations do not understand that test automation requires a framework: a group of tools working together to provide the complete solution. A tool that injects data into a system is very different from a tool that generates the data, and different again from the tools that store the data, collect the results of the activities and analyse the data. Reporting tools are also often overlooked. Getting complete and accurate coverage of a solution requires an understanding of the test framework needed, and to fully understand the test framework, organisations must understand what and where they are trying to test. An example of a complete test framework is illustrated below (Figure 1).

Figure 1 - Example Test Framework (components: scheduler; front-end GUI driver; back-end or middleware process driver; data repository; data or file store; performance monitor; output monitor; results analysis and comparator)

As can be seen in Figure 1, more than one tool is needed to automate any of the work involved in testing a solution. The focus of the tests needs to be clearly defined (front end, back end or storage layer, for example) and then the tools that will best support the type of testing required need to be identified.
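To make the framework idea concrete, the sketch below wires together minimal stand-ins for a few of the roles shown in Figure 1: a data generator, a process driver that pushes data through the system under test, a comparator, and a results reporter. It is a sketch only; every class name is invented and the "system under test" is a trivial placeholder, not a reference to any particular product.

# Minimal sketch of a test framework assembled from separate components,
# loosely mirroring the roles in Figure 1. All names are illustrative.

class DataGenerator:
    """Stand-in for a data generation tool."""
    def generate(self, count):
        return [{"id": i, "amount": i * 10} for i in range(1, count + 1)]

class ProcessDriver:
    """Stand-in for the driver that injects data into the system under test."""
    def execute(self, record):
        # Placeholder 'system under test': it simply doubles the amount.
        return {"id": record["id"], "result": record["amount"] * 2}

class Comparator:
    """Stand-in for a results analysis / comparison tool."""
    def check(self, record, actual):
        return actual["result"] == record["amount"] * 2

class Reporter:
    """Stand-in for a reporting tool that summarises outcomes."""
    def __init__(self):
        self.passed = 0
        self.failed = 0
    def record(self, ok):
        if ok:
            self.passed += 1
        else:
            self.failed += 1
    def summary(self):
        return f"{self.passed} passed, {self.failed} failed"

if __name__ == "__main__":
    generator, driver = DataGenerator(), ProcessDriver()
    comparator, reporter = Comparator(), Reporter()
    for record in generator.generate(5):
        outcome = driver.execute(record)
        reporter.record(comparator.check(record, outcome))
    print(reporter.summary())  # "5 passed, 0 failed"

The logic is trivial by design; the point is that each box in Figure 1 is a separate responsibility, and in a real environment each is typically a separate tool that has to be selected, integrated and maintained.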

Performance testing tools are vastly different from functional test tools, and the right tools must be used to generate both the required coverage and effective testing of the solution. By analysing and understanding the system under test, and the key tests needed to prove the results being looked for, the correct framework and architecture can then be assembled. Without this level of understanding the resulting test approach may lack key elements or components, which results in poor testing or, even worse, inaccurate or incomplete results that do not actually give the organisation or project the information it requires. Consideration must also be given to the integration of the tool sets and the cost of establishing the architecture and the tool environment, as these are often overlooked and can add substantially to the automation business case.

(2) Automated everything?

Along with the concept of "one tool to rule them all" comes the common belief that if you have a tool you can test everything. A good, solid understanding of what can and cannot be tested automatically is often not present in the project or organisation considering automation. There is a general lack of understanding about what testing is in general, what the quality attributes of a solution are, and where and when automation can benefit the system under test.

In traditional testing terminology, what you can automate depends on your context and your solution. A team needs a thorough understanding of what they are trying to prove about a system. Good performance testing requires tools to generate, manage and load the data and then collect the results of the test runs. However, usability testing cannot be automated: how can a tool tell anyone how usable the solution is? Brian Marick's test quadrants (Figure 2) are an excellent example of how the types of testing needed to prove an attribute of a solution vary depending on what you are trying to prove, and of how some types of testing, such as User Acceptance Testing, are best done manually.

Figure 2 - Brian Marick's Test Quadrants

Another common issue is the misconception that a functional test tool (one that emulates typical usage) can also test performance or reliability. Some can, but most cannot. Testers also need to recognise that the results from functional testing are very different from the results of performance or reliability testing. Functional testing is about whether or not the desired task is completed. Performance or reliability testing is about how many tasks, what types of tasks and how often the tasks can be done. These are totally different test results, and they usually require totally different tools to generate them.

Maintainability testing, testing looking for technical debt, and testing for regression absolutely require the use of a tool, as this work is usually done at the code level. Tools are required to view the code, assess the code, analyse the code and deliver reports on the code. With today's complicated and large code bases, doing these activities without tools is impossible.
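As a small illustration of why code-level testing is tool work, the sketch below uses Python's standard ast module to count the functions and branch points in a source file. It is a deliberately crude stand-in for the kind of analysis a maintainability or static analysis tool performs, an assumption-laden example rather than a description of any specific product.

# Crude sketch of code-level analysis: count functions and branch points
# in a Python source file. Real static analysis tools do far more.
import ast
import sys

BRANCHES = (ast.If, ast.For, ast.While, ast.Try, ast.With)

def analyse(path):
    with open(path, encoding="utf-8") as handle:
        tree = ast.parse(handle.read(), filename=path)
    functions = sum(isinstance(n, ast.FunctionDef) for n in ast.walk(tree))
    branches = sum(isinstance(n, BRANCHES) for n in ast.walk(tree))
    print(f"{path}: {functions} functions, {branches} branch points")

if __name__ == "__main__":
    # Usage: python analyse.py some_module.py (defaults to this file)
    analyse(sys.argv[1] if len(sys.argv) > 1 else __file__)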

(3) Automation is quick?

Once a decision has been made on what to test for and which tools to use, the next big issue in automation appears. Most teams only consider the speed benefit of automated test execution, but there are also costs in the set-up, writing and maintenance of the testware for automated tools. The following chart (Figure 3) shows a rough comparison of the time spent in each of the key activities in generating a test case and executing and managing the test suite. It is a fairly common breakdown, based on the number of hours required to complete the testing of a complex functional area covering a decision table with business rules.

Figure 3 - Relative Time for Test Tasks
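A back-of-the-envelope model makes these hidden costs visible. The hour figures below are illustrative assumptions only (they are not taken from Figure 3); they simply show that an automated test only pays for itself once it has been executed enough times to recover the scripting and ongoing maintenance effort.

# Break-even sketch for a single test case. All hour figures are
# assumptions for illustration, not measured data.

MANUAL_DESIGN = 1.0          # design and document the manual test
MANUAL_RUN = 0.5             # execute it by hand, once
AUTO_DESIGN = 1.0            # same design effort still applies
AUTO_SCRIPT = 4.0            # write/record and debug the script
AUTO_RUN = 0.05              # largely unattended execution
AUTO_MAINTAIN_PER_RUN = 0.2  # average upkeep per execution

def manual_cost(runs):
    return MANUAL_DESIGN + MANUAL_RUN * runs

def automated_cost(runs):
    return AUTO_DESIGN + AUTO_SCRIPT + (AUTO_RUN + AUTO_MAINTAIN_PER_RUN) * runs

if __name__ == "__main__":
    for runs in (1, 5, 10, 20, 50):
        print(f"{runs:>3} runs: manual {manual_cost(runs):5.1f} h, "
              f"automated {automated_cost(runs):5.1f} h")

Under these assumed numbers the automated test only becomes cheaper after roughly sixteen executions, and the break-even point moves further out as the per-run maintenance cost grows, which is exactly the effect of an unstable code base.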

Training testers to use a tool is usually overlooked in the business case. There is an assumption that testers can use any tool and also know how to test, or an assumption that tools are easy to use. Neither assumption is correct. Automation tools often become shelfware as test teams stop using them because they are too hard to use. Time must be invested in teaching testers how to use the tools effectively and efficiently, including test writing, data generation, test storage, test execution and test results analysis. Manual testing generally does not have this overhead: the tester writing the test understands how to test (or can be very quickly taught) and can then generate suitable tests within a short period of time using common techniques (word processor or spreadsheet based test cases).

Although automated test execution time is shorter, there is considerable overhead in writing (or recording) the scripts required to run the tests. The scripts have to cover all the steps and data sets that must be considered in the test execution. Doing this manually also takes time, and more or less time will be required depending on the skill of the tester and the area under test; however, all automated scripts take time to design and then to document the steps and data required to complete the test. If the testers are able to write the scripts themselves this time may be reduced somewhat, but if not it will increase. The same goes for the maintenance of the tests. This can be exceedingly difficult to manage if the team is working in an Agile environment where time is very short.

Maintenance of automated testware is also very expensive in time and energy. Often maintenance is only done after a test has failed for some reason: the reason for the failure has to be identified, the affected test cases isolated, the changes to the scripts and/or data made, and the scripts re-run. If the testers can do this themselves there may be a time saving, but if not the time costs will be high. Manual tests generally do not require as much maintenance, relying instead on the tester's cognitive ability to modify the tests as they are being run, not after they fail. The cost of maintenance is magnified if the system under test is subject to frequent or large changes. As a general rule, assume that the less stable the code base, the greater the cost of maintenance.

(4) Testers needing to learn code

One area where manual and automated testing will result in a similar time investment or cost is traceability. If the automated tests are generated with traceability in mind, the tools used to store and retain the tests may be used to calculate coverage and to isolate specific tests based on the part of the system being tested. A manual approach to this, using a Requirements Traceability Matrix (or similar), will generate the same output if the time and energy has been put into recording the key information in the manual tests and a suitable manual test repository has been developed.
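The coverage calculation described above can be sketched very simply. Assuming each automated test records the requirement identifiers it exercises (in a test management tool, or even a spreadsheet export), requirement coverage and the tests affected by a change can both be derived mechanically. The requirement IDs and test names below are invented for illustration.

# Sketch of requirements-to-tests traceability. IDs and names are invented.

requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

# The requirements each test claims to exercise.
test_traces = {
    "test_login_valid_user": {"REQ-001"},
    "test_login_locked_account": {"REQ-001", "REQ-002"},
    "test_password_reset": {"REQ-003"},
}

covered = set().union(*test_traces.values())
coverage = len(covered & requirements) / len(requirements)
print(f"Requirement coverage: {coverage:.0%}")                       # 75%
print(f"Uncovered requirements: {sorted(requirements - covered)}")   # ['REQ-004']

# Isolate the tests relevant to a requirement that has changed.
changed = "REQ-001"
impacted = [name for name, reqs in test_traces.items() if changed in reqs]
print(f"Tests to re-run for {changed}: {impacted}")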

Historically, test automation tools required the use of scripting or coding languages, which led to the belief that testers need to be able to write scripts or generate program code. This is a real problem in the world of automation. Testers have skills in analysing the solution and then designing the tests that will prove the conditions they need to prove; making testers learn programming tools often forces one skill set onto another. All testers should be able to roughly read code and understand the risks associated with its structure, but within the development team it is the programmers who have the scripting and programming skills.

Making testers learn how to generate scripts or programs at the command line increases the time it takes them to produce those scripts or programs: not only do they have to determine what and how to test, they then usually have to struggle with a new technology to record the test. Testers with limited programming skills will take too long to write their scripts, or will write incorrect, inaccurate or inefficient scripts. This is a waste of the test team's time at a point in the lifecycle where they generally have very little time available. Relying on the test team to generate scripts based on the code base or the provided solution also results in a more dynamic or reactive approach to testing, rather than the analytical and preventative approach that comes from testers being able to focus on the requirements and design, and to produce fit-for-purpose tests that can be run before the code base is generated.
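One common way of easing this tension (a general technique, not something prescribed in this paper) is to separate test design from scripting: testers express conditions and data as a table, and a small driver, written once by whoever has the programming skills, executes every row. The discount rule and values below are invented purely for illustration.

# Sketch of a data-driven test: testers maintain the table, a programmer
# maintains the small driver. The discount rule and data are invented.

def discount(order_total):
    """Hypothetical system under test: discount rate based on order total."""
    if order_total >= 1000:
        return 0.10
    if order_total >= 500:
        return 0.05
    return 0.0

# Test design expressed as data: description, input, expected result.
test_table = [
    ("below lower boundary", 499, 0.0),
    ("at lower boundary", 500, 0.05),
    ("between boundaries", 999, 0.05),
    ("at upper boundary", 1000, 0.10),
]

if __name__ == "__main__":
    for description, order_total, expected in test_table:
        actual = discount(order_total)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: {description} "
              f"(input={order_total}, expected={expected}, actual={actual})")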

(5) Automated testing proves the software is good?

The largest problem, however, is the belief that automated testing proves the code is good enough. All of the problems articulated above can lead to an automated test suite that proves very little. A lot of data may be pushed through the system, but unless it is pushed through in the right way at the right time it will prove nothing of value.

The Pesticide Paradox is a real issue when automating. The paradox is that the more a test is executed, the less the test proves. When a test is first executed it should either find the defect it was designed to find or prove that the defect is not there; both are valid and useful outcomes. However, the value of rerunning that test diminishes each time it is run; it is no longer looking for that bug. Most automated test suites are destined to become regression suites whose key purpose is to prove that the underlying functionality (or other testable attribute) has not changed in unexpected ways. Although this is a valuable outcome for some systems, consideration needs to be given to the value of such large, and potentially expensive to maintain, test suites that may not prove anything.

Conclusion

Automated testing is a very complex area for the development team to consider. To be done well it requires a holistic view of the activities, skills and goals of the team. To generate valuable and usable tests, the basis of the testing must be well understood and planned. The skills of the team must be considered and matched to the tool sets required, and the needs of the testing must also match those tool sets. Teams considering test automation must be very sure of what they are trying to prove, how they can prove it, and whether automation is actually the most effective and efficient way to prove it, allowing the testing to be completed quickly and effectively.

About the author - Sharon Robson

As Software Education's (SoftEd) Software Testing Practice Lead, Sharon develops and delivers Agile and Software Testing training courses at every level for corporate and government organisations, mainly in Australia. Sharon has been recognised as one of the 13 most influential women software testers in the world by US magazine Software Test & Performance.

She has spoken at local conferences and interest groups such as Software Testing Australia/New Zealand (STANZ), Agile Australia, the Australia New Zealand Testing Board (ANZTB) Conference (of which board she was also a founding member) and the Test Professional Network (TPN), and internationally at International Software Testing Qualifications Board (ISTQB) Special Interest Groups and Conferences and Software Test Analysis and Review (STARWEST), and has presented to the US Department of Defense. Sharon holds every level of ISTQB certification and is a Certified ScrumMaster. As well as a busy training and consulting schedule in 2012, Sharon will also present at SoftEd's new conference, Fusion, on how to solve estimation issues faced by testing teams and the wider software development group. You can email Sharon at sharonr@softed.com.

Software Education (SoftEd) is an independent SDLC training and consulting company with offices in Brisbane and Wellington, internationally recognised as local experts in software development training. For more details, or for a list of all Software Education courses, you can visit the SoftEd website or email info-nz@softed.com.

Software Education
Level 6, Petherick Tower
38 Waring Taylor Street
Wellington
info-nz@softed.com

Copyright 2012 Software Education. All rights reserved.