Evidence and perceptions on GUI test automation


Master of Science in Software Engineering
October 2017

Evidence and perceptions on GUI test automation
An Explorative Multi-Case Study

Chahna Polepalle
Ravi Shankar Kondoju

Faculty of Computing
Blekinge Institute of Technology
SE Karlskrona, Sweden

This thesis is submitted to the Faculty of Computing at Blekinge Institute of Technology in partial fulfillment of the requirements for the degree of Master of Science in Software Engineering. The thesis is equivalent to 20 weeks of full-time studies.

Contact Information:
Authors: Chahna Polepalle, Ravi Shankar Kondoju
University advisor: Deepika Badampudi, Department of Software Engineering
Industrial advisor: Magnus Andersson, Senior Specialist, Test Architect, Telecommunication company, Sweden

Faculty of Computing, Blekinge Institute of Technology, SE Karlskrona, Sweden

Abstract

Context. GUI-based automated testing is a costly and tedious activity in practice. Because GUIs are frequently modified and redesigned throughout development, the corresponding test scripts become invalid, which hinders automation. Substantial effort is therefore invested in maintaining GUI test scripts, and improper decisions often lead to rework or waste. Practitioners have consequently identified a need for decision support regarding when GUI test automation should begin, how it can be made easier, and which factors lead to waste in GUI-based automated testing. The current literature provides solutions for test automation in general and only a few answers specific to GUI-based testing. Such generic answers may not apply to GUI test automation, or to companies that are new to GUI development and testing. It is therefore necessary to validate whether the general solutions apply to GUI test automation and to elicit additional answers, not previously identified, from practitioners' opinions in an industrial context.

Objectives. Capture relevant information about the current approach to GUI test automation within the subsystems of a case company. Then identify, from both literature and practice, the criteria for when to begin automation, the testability requirements, and the factors associated with waste.

Methods. We conducted a multiple-case study exploring the opinions of practitioners in two subsystems of a Swedish telecommunication company that implement GUI test automation. Prior to the case study, a literature review identified answers in the scientific literature. A two-phase interview study collected employees' subjective opinions as well as their views on the evidence collected from the literature. A Bayesian synthesis method was then used to combine the practitioners' subjective opinions with the research-based evidence to produce context-specific results.

Results. We identified 12 criteria for when to begin automation, 16 testability requirements, and 15 factors associated with waste in GUI test automation. Each is classified into one of the categories SUT-related, test-process related, test-tool related, human and organizational, environment, and cross-cutting. We also found new answers not present in the existing literature in the domain of the research.

Conclusions. Validating the answers found in the literature revealed that answers applicable to software test automation in general are valid for GUI test automation as well. Since we incorporated subjective opinions to produce context-specific results, we learned that every practitioner has their own way of working. This study therefore helps develop a common understanding that supports informed, evidence-based decisions.

Keywords: GUI test automation, when to automate, testability requirements, automation waste, maintenance.

Contents

Abstract
Acknowledgments
1 Introduction
   1.1 Overview
   1.2 Research aim and objectives
   1.3 Structure of the thesis
2 Background and Related Work
   2.1 Automated GUI-based software testing
   2.2 Criteria for when to automate
   2.3 Testability requirements
   2.4 Factors affecting waste
   2.5 Related work
      2.5.1 Criteria for when to automate
      2.5.2 Testability requirements
      2.5.3 Factors affecting waste
   2.6 Research gap
3 Research Method
   3.1 Method selection
   3.2 Study design
      3.2.1 Literature review design
      3.2.2 Case selection and unit of analysis
         Overview of cases
      3.2.3 Case study design
      3.2.4 Data collection protocol
         Standard process documents
         Interviews
         Interview design
         Formulation of interview questionnaire
         Transcription
         Post interview
      3.2.5 Data analysis protocol
      3.2.6 Data synthesis protocol
4 Results
   Overview of results
   Interview results
      Summary of interviews
      Phase 1 results
         Current approach for GUI test automation within the subsystems
         Results of thematic analysis
   Literature review results
      Criteria for when
      Testability requirements
      Factors associated with waste
   Phase 2 interview results
5 Data synthesis and analysis
   Data synthesis
   Data analysis
      Criteria for when
         SUT-related
         Test-process related
         Test-tool related
         Human and organizational
         Cross-cutting
      Testability requirements
         SUT-related
         Test-process related
         Test-tool related
         Environment
      Factors associated with waste
         SUT-related
         Test-process related
         Test-tool related
         Human and organizational
         Environment
         Cross-cutting
6 Discussions and limitations
   6.1 Summary of findings
      6.1.1 RQ1. When is it suitable to begin GUI test automation?
      6.1.2 RQ2. What are the testability requirements for performing GUI test automation?
      6.1.3 RQ3. What are the factors that lead to waste in GUI test automation?
   6.2 Validity threats
      Construct validity
      Internal validity
      External validity
      Reliability/Conclusion validity
7 Conclusions and Future Work
   Future work
References
Appendices
   A Interview questionnaire
   B Transcription
   C Criteria for when
   D Testability requirements
   E Factors affecting waste
   F Prior probabilities
   G Data synthesis

List of Figures

- EBSE steps
- Snowballing process
- Overview of case selection and unit of analysis
- Distribution of interview participants
- Interview design
- Snapshot of the interview transcript with a note to change the initial questionnaire
- Thematic analysis framework
- Bayesian synthesis
- Test activities before product release
- An overview of GUI automation testing levels
- Generating initial codes
- Searching for themes
- Defining and naming themes
- Frequency of occurrence in research papers
- Practitioners' consensus results
- Validity level
- Synthesis results
- B.1 Snapshot of the transcription software
- C.1 Practitioners' opinions regarding criteria for when
- D.1 Practitioners' opinions regarding testability requirements
- E.1 Practitioners' opinions regarding factors affecting waste
- F.1 Calculation of prior probabilities
- F.2 Proportion of practitioners mentioning the variables in phase 1 interviews
- G.1 Data synthesis: prior probability, likelihood and posterior probability

List of Tables

- Summary of snowballing procedure
- Overview of the cases
- Team structure
- Overview of roles
- Summary of interview participants
- Initial codes
- Criteria for when
- Testability requirements
- Factors associated with waste

Acknowledgments

We would first like to thank our thesis supervisor Deepika Badampudi. The door to her office was always open whenever we ran into a trouble spot or had a question about our research or writing. She consistently allowed this thesis to be our own work, but steered us in the right direction whenever she thought we needed it. We would also like to thank Mr. Mattias Carlsson and Mr. Magnus Andersson for providing us with the opportunity to conduct our thesis at the telecommunication company. The experts who were involved in the interviews for this research project were extraordinary, and without their passionate participation and input this thesis could not have been conducted successfully. We would also like to acknowledge Prof. Claes Wohlin for providing feedback on the research topic and insights into the thesis work. We are gratefully indebted to him for his very valuable comments on this thesis. Finally, we express our profound gratitude to our parents for providing us with unfailing support and continuous encouragement throughout our years of study and through the process of researching and writing this thesis. This accomplishment would not have been possible without them. Thank you.

Chapter 1
Introduction

"Success in test automation does not come to those that automate more, but to those that automate better" [1].

1.1 Overview

Testing is a predominant activity in software engineering, irrespective of the development process followed or the development goal [2][3] and [4]. Test activities are prevalent in industry for quality assurance, mainly by evaluating whether the software product conforms to its requirement specifications [4]. Due to the demand for releasing new functionality quickly and the growing complexity of software, testing has become critical and increasingly difficult [5][4] and [6]. To balance the quality of testing against the increasing criticality and size of the software, practitioners have turned towards automation [4]. Organizations perceive software test automation as a means to achieve a better-quality product and as a long-lasting solution for reducing testing costs [1][7] and [8]. Automated software testing can be performed at various levels of the system, characterized as unit, integration and black-box tests [9] and [10]. Unit tests are applied at the lower levels, such as the source code of the system components, whereas integration and black-box tests are performed at the higher levels, e.g. the graphical user interface (GUI) [10] and [11]. Interacting with the system under test (SUT), especially through a GUI, is a challenging part of software test automation [6]. The reason is that GUIs change continuously during development, breaking their corresponding test scripts and thereby hindering test automation [12]. GUI test scripts must be maintained frequently to ensure they remain compliant and can be executed on a new version of the SUT [13]. As a result, implementing automation solutions for GUI-level testing is problematic, as it is a costly and tedious activity in practice [10] and [14]. The presence of GUIs varies across systems. For instance, GUIs are rarely available directly in telecommunication products unless specially required [15]. Hence, the GUI automation process is more challenging in the telecommunication industry, which is new to GUI development and testing in comparison to other domains

where GUI development is predominant, such as e-commerce and mobile applications. Despite the difficulty of performing GUI-level test automation, the aims of automated testing can still be achieved when it is established at the right time and with a suitable approach [7] and [16]. Thus, deciding when to begin automation and how to make it easier is a challenging question that is asked frequently [7][9][17] and [18]. To achieve the latter, the software can be improved in terms of testability through the identification of testability requirements [19]. These requirements ensure that a GUI product can be tested automatically in an effective and efficient manner [17] and [20]. As a substantial amount of effort is required before one begins automation, industry practitioners are keen on exploring answers to such questions [8] and [17]. Moreover, as GUIs are known for their frequent modifications, they cause unnecessary rework in maintaining the test scripts, termed "waste" [13]. It is therefore crucial to also identify the factors affecting the unnecessary maintenance of GUI automation test cases, which often results in waste. If such answers are known and followed, practitioners can keep the test effort under control and may find GUI test automation less tedious to perform [8] and [20].

Different studies have reported on decision-support approaches regarding when to begin automation ([18] and [21]), testability requirements [19] and factors affecting waste in automated testing [22]. These solutions are for test automation in general. For instance, product stability and defining the scope of the automation are criteria that must be fulfilled before one begins automation. The use of diagnostic and monitoring techniques has been identified as a testability requirement that makes test automation effective. Knowledge and experience are a primary factor associated with the unnecessary maintenance of automation scripts.

In addition, the literature also provides a few answers on testability requirements and maintenance factors in the context of GUI-based automated testing. The use of unique class names [23] is an example of a testability requirement, while the number of GUI components [13] is a maintenance factor applicable to GUI test automation. Though literature exists regarding such decisions, it is essential to validate whether the general variables apply to GUI test automation and to find additional variables that have not been identified previously. This is worth investigating because these variables might differ for GUI test automation, as its interface changes frequently and the GUI itself must be tested to verify its conformance to the GUI specifications [24] and [25]. Moreover, such generic answers might not suit every industrial scenario. As mentioned earlier, the telecommunication domain rarely uses or develops GUI-dependent products. Considering its relatively new adoption of GUI development and testing, such answers might not be applicable, and other variables that have not yet been identified may be influential in this context. A case organization with similar characteristics will

help to identify the answers from the subjective opinions of practitioners involved in GUI test automation. To address the research gap, this study explores the criteria that must be fulfilled when deciding when to begin GUI test automation and the testability requirements that make GUI automation testing effective and efficient. Additionally, the factors influencing rework in terms of maintenance of GUI test scripts, which leads to waste, are identified in the context of the telecommunication domain. A telecommunication organization identified a need to investigate the above-mentioned aspects of GUI test automation. This industrial need, coupled with the research gap identified in the literature, provides the foundation for performing our research in the area of GUI test automation. This thesis work was therefore performed in the Swedish telecommunication industry to fulfill the research objectives. The identified answers will support practitioners in making favorable decisions regarding when to automate and how to make automation easier, by considering evidence and being informed about maintenance factors. We implemented the evidence-based software engineering (EBSE) [26] process in industry, where we investigated two subsystems relatively new to the field of GUI development. A multiple-case study was performed within the two subsystems, which follow a similar approach to GUI-based automated testing but vary considerably in the size of the GUI developed. As part of the case study, a two-phase interview was conducted to collect the practitioners' subjective opinions, to gain an in-depth understanding of the various aspects of GUI test automation, and to gather their opinions on the evidence collected from literature. We adopted the Bayesian synthesis method proposed by Badampudi et al. [58] for combining the subjective opinions of practitioners with research-based evidence to produce context-specific results.
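As a rough illustration of what such a synthesis computes, the sketch below applies plain Bayes' rule to a single variable. The reduction to a binary outcome and all probability values are our own simplifying assumptions; the actual procedure follows Badampudi et al. [58].

```python
def posterior(prior: float, likelihood_true: float, likelihood_false: float) -> float:
    """Combine a literature-based prior with practitioner-opinion likelihoods.

    prior: P(variable holds), e.g. the proportion of reviewed papers reporting it.
    likelihood_true: P(practitioners endorse it | variable holds).
    likelihood_false: P(practitioners endorse it | variable does not hold).
    Returns P(variable holds | practitioners endorse it) via Bayes' theorem.
    """
    evidence = likelihood_true * prior + likelihood_false * (1.0 - prior)
    return (likelihood_true * prior) / evidence

# Illustrative numbers: a criterion appears in 6 of 10 reviewed papers
# (prior = 0.6), and most interviewed practitioners endorse it.
p = posterior(prior=0.6, likelihood_true=0.9, likelihood_false=0.2)
print(round(p, 3))  # 0.871
```

Intuitively, a variable that is common in the literature (high prior) and endorsed by most interviewees ends up with a high posterior, while practitioner disagreement pulls a literature-supported variable back down.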
The thesis' main contributions are as follows:

C1: Summarize the results found in literature and practice in terms of criteria for when to automate, testability requirements and factors associated with waste.
C2: Provide the results from existing literature to the practitioners, aiding them in making informed decisions by relying on evidence.
C3: Validate that the results found in literature apply to GUI test automation and to a telecommunication domain, by capturing the practitioners' opinions on the literature results.
C4: Identify additional criteria for when to begin automation, testability requirements and factors associated with waste, not found in literature, from the opinions of practitioners working on GUI test automation.

1.2 Research aim and objectives

The overall goal of the research is to explore the "when", "testability" and "waste" aspects of GUI-based automated testing in the context of the telecommunication domain. The aim is to support practitioners involved in GUI automation testing in making informed decisions by relying on evidence. Hence, the primary focus is on identifying the criteria for when, the testability requirements and the factors associated with waste in GUI test automation, by combining contextualized practitioners' opinions with research-based evidence. The following objectives help achieve the overall aim of the thesis:

O1: Identify the criteria that must be fulfilled before GUI automation testing begins, both in literature and in practice. (Mapped to C1 and C4)
O2: Having determined the criteria for when, explore the testability requirements that make GUI test cases feasible to automate in an effective and efficient manner, both in literature and in practice. (Mapped to C1 and C4)
O3: Determine the factors associated with waste when performing GUI automation testing, both in literature and in practice. (Mapped to C1 and C4)
O4: After identifying the required variables, translate evidence into practice and validate the results of the literature review. (Mapped to C2 and C3)

1.3 Structure of the thesis

The thesis is structured as follows. Chapter 2 describes the background and related work for this research, including an overview of automation testing and GUI test automation, and the foundations for criteria for when, testability requirements and factors affecting waste. In Chapter 3, we present an outline of the research methodology used for conducting the thesis, including the research questions, the multi-staged EBSE process and the case study design. A detailed description of the literature review results and the outcomes of the interviews conducted as part of the case study is given in Chapter 4.
In Chapter 5, the results of the Bayesian synthesis approach and the corresponding data analysis are presented. Discussions and the potential threats to the validity of the study are described in Chapter 6. Finally, conclusions and directions for future research are reported in Chapter 7.

Chapter 2
Background and Related Work

To gain insight into the context of the current research, the primary step is to understand the characteristics of automated GUI-based software testing and its associated challenges. The concepts of criteria for when to begin automation, testability requirements and factors associated with waste are then defined. In addition, we identified and compiled the above-mentioned concepts found in the scientific literature and present them as related work. This gives us a foundation and adequate knowledge about the research topic, which aids in understanding the approach followed for GUI test automation in the cases and in comprehending the opinions presented by the practitioners. It also facilitates the identification of the research gap.

2.1 Automated GUI-based software testing

GUIs are well-known interfaces in the software industry, as they provide user-friendly interactions with the functionalities of a product [27][28] and [29]. They are used to display information and give control of the system through graphical elements, or widgets [14]. As with all software, the behavior of a GUI and its underlying code must be tested to verify their correctness [30] and [31]. Due to the extensive use of GUIs in software systems, much emphasis is placed on GUI testing, as it is vital in making the entire system safer, more robust and more usable [32][33] and [28]. With the increase of GUI code in software, automated testing has become necessary to relieve stress on the testers and avoid manual errors during testing [6] and [29]. The growth of automated GUI testing in industry is slow, as it is difficult and differs from traditional software testing approaches [10] and [28]. As an example, Memon [32] mentions that it is not feasible to test a GUI with conventional test coverage criteria. Because the GUI elements and the underlying software code sit at different abstraction levels, it is difficult to set appropriate coverage criteria [10]. Pillai et al. [6] claim that interacting with the SUT through the GUI is a challenging part of software test automation. This is because GUIs are known for being frequently modified and redesigned through

the development process, which results in test solutions breaking and hampers test automation. GUI automation testing is more challenging than automated testing of other types of interfaces, such as command-line interfaces (CLIs) and application programming interfaces (APIs) [8]. This is because GUI automation testing must keep pace with design changes made to the GUI. In addition, it is challenging and always requires some amount of technical instrumentation to ensure the automation tool works with the GUI product developed [8]. Despite the challenges associated with automated GUI testing, the benefits of automated testing can still be achieved when it is established at the right time [16] and [7]. Moreover, it is vital to identify and consider testability requirements to achieve testability in the GUI product, ensuring its automation is feasible, efficient and effective [23]. It is also beneficial to identify the factors associated with waste, i.e. the additional effort spent in maintaining the GUI test scripts that break due to modifications made to the GUI [12]. Such knowledge helps practitioners keep the test effort under control [13].

2.2 Criteria for when to automate

It is essential to spend a substantial amount of effort on fulfilling certain criteria before automation testing begins [8]. When performed at the right time, the benefits of automation can be reaped; when it is not, automation can be a costly practice leading to frustration [8]. Examples of such criteria are defining the scope of the automation, using a dedicated and skilled team, and the product being stable [7]. It is crucial to identify and discuss which criteria need to be satisfied before one begins test automation. Done properly, this can be advantageous for organizations leaning towards automation, who generally view it as an expensive exercise [8]. Sometimes, after the testing of a software product has been automated, the automation team finds the tool inconvenient because it was selected in a haphazard way. Substantial effort is invested in choosing the right tool, which often turns out to be limited in features and lacking in user-friendliness [8]. The automation team usually has a false sense of security that the tool will manage the details surrounding automation, and therefore initiates automation development without a clear strategy, which leads to rework [34]. Nonetheless, if proper care is exercised and sufficient criteria are followed before automating the software product, the chosen tool will not only be cost-effective but will also ensure a great amount of flexibility and user-friendliness [8].

2.3 Testability requirements

Testability is the degree to which a software system or component has been designed to facilitate testing. It enhances test-design efficiency and makes the automation process easier and feasible [1] and [35]. In short, testability is anything that makes it easy to test the SUT in an automated way [7]. Thus, to ensure that a GUI product can be automated effectively and efficiently, it is important to identify and consider testability requirements [23]. Testability requirements are associated with two aspects: a) the design of the SUT for testability, and b) the design of the automated testing environment. These requirements act as a primary cost driver and as the main enabler or disabler of automated testing [22]. Testability is not solely a property of the SUT and its environment; it is also influenced by other characteristics, such as the test automation tool and the skill level of the people involved [7]. It is one of the criteria that must be satisfied before one begins automation, upon which the automation becomes easier [1]. Nowadays, software development projects commonly overlook testability, as testing itself is a time-consuming and expensive process [19]. Identifying testability requirements helps practitioners keep the testing effort under control and reduce maintenance cycles by making the automation process easier [19].

2.4 Factors affecting waste

GUI test automation often results in waste, which can be described as the additional effort spent in maintaining GUI test scripts in order to reuse them [13]. This maintenance is necessary to ensure the scripts remain compliant and can be executed on a new version of the SUT [12]. Maintaining such test cases is an expensive and cumbersome process. As a result, practitioners usually discard these test scripts to minimize maintenance costs [36].
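The kind of breakage behind these maintenance costs can be sketched in a few lines of illustrative code. The widget model and scripts below are hypothetical, not taken from any tool in this study; they simply show how a position-based lookup fails after a GUI redesign, while a lookup by a stable, unique identifier [23] survives.

```python
def find_by_index(widgets, index):
    """Brittle lookup: depends on widget ordering in the current GUI layout."""
    return widgets[index]

def find_by_id(widgets, widget_id):
    """Robust lookup: depends only on a stable, unique identifier."""
    return next(w for w in widgets if w["id"] == widget_id)

# Version 1 of a (hypothetical) dialog.
v1 = [{"id": "save_btn", "label": "Save"}, {"id": "quit_btn", "label": "Quit"}]
# Version 2 after a redesign: a new button is inserted first, shifting positions.
v2 = [{"id": "help_btn", "label": "Help"}] + v1

assert find_by_index(v1, 0)["label"] == "Save"
assert find_by_index(v2, 0)["label"] == "Help"       # script now hits the wrong widget
assert find_by_id(v2, "save_btn")["label"] == "Save"  # still correct after the redesign
```

Every position-based script touching this dialog would need manual repair after the redesign, which is exactly the maintenance effort, or waste, described above.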
The reasons for such maintenance issues depend on several factors, some common to automation testing in general and some stemming from the dynamic nature of the GUI [13]. For instance, missing test documentation is one factor that increases the maintenance cycles of test automation. Poor structure of the test scripts is another factor that causes waste by increasing maintenance time [37]. By identifying the factors that affect maintenance, organizations can identify improvement areas and create maintainable, robust tests.

2.5 Related work

In the following sections, we present an overview of the related studies.

2.5.1 Criteria for when to automate

According to a literature review [7], many decision-support approaches relating to "when to automate" have been proposed by researchers and practitioners. Dustin et al. [38] presented a comprehensive checklist to facilitate decision making for "when to automate". In their book, a number of criteria such as test-automation strategy, human skill set, testing framework and return on investment (ROI) are identified. Garousi et al. [21] proposed a modeling technique called system dynamics (SD) for the purpose of decision support on when to automate. It was developed through a case study as an extension of their previous work in [39] and [17]. The SD process aids decision makers in deciding whether, and to what extent, the industry should automate, but does not describe the criteria that must be satisfied before beginning automation. In another study, the author proposes a genetic-algorithm-based tool named the test automation decision matrix (TADM) to enable systematic decision making in the test-planning activity [20]. The tool indicates which test activities should be automated and which should be left manual. Planning a test automation strategy in varied contexts is important and difficult to achieve. Though the paper does not cover the criteria for when, the author stresses, as future work, the need for further research and empirical studies in this area. Keith [18] mentioned the importance of a precise analysis of ROI before beginning test automation. He recommended considering criteria such as the "rate of change of what is being tested", the "frequency of test execution" and the "usefulness of automation" before choosing which tests to automate. Another cost-benefit evaluation model, proposed in [40], determines whether or not a project is suitable for automated testing based on test-case prioritization. Similar cost-benefit evaluation models have been proposed in [41][42] and [17], but have shortcomings that are addressed in [40]. Ravinder [43] states that decisions regarding "when to automate" are crucial and must be made by the testing or development teams. He proposed eleven major criteria that must be satisfied for automation testing, including platform and operating system (OS) independence, customizable reporting and version control. Nonetheless, the paper is an overview of automation testing, and the criteria are neither elaborated nor empirically supported. As the above literature shows, several scientific papers highlight criteria for deciding whether, and to what extent, automation should be performed. However, there is limited empirical evidence providing a validated list of criteria that must be fulfilled prior to automation once the decision to automate has been taken. It is worthwhile to investigate the applicability of the criteria identified in literature to GUI automation testing, considering its typical characteristics described above. Moreover, additional criteria that have not yet been identified in the telecommunication industry need to be explored.
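To make the ROI-style reasoning above concrete, the following simplified break-even sketch (our own illustration, not a model from the cited papers) estimates how many test runs it takes for automation to pay off; frequent GUI changes enter as a per-run maintenance cost.

```python
import math

def break_even_runs(setup_cost: float, manual_cost_per_run: float,
                    auto_cost_per_run: float, maint_cost_per_run: float):
    """Smallest number of runs after which automating is cheaper than manual
    execution, or None if automation never pays off."""
    saving_per_run = manual_cost_per_run - (auto_cost_per_run + maint_cost_per_run)
    if saving_per_run <= 0:
        return None  # e.g. a rapidly changing GUI: maintenance eats the savings
    return math.ceil(setup_cost / saving_per_run)

# Hypothetical figures: 40 hours to script a suite; a manual run costs 4 hours,
# an automated run 0.5 hours plus 1.5 hours of script maintenance per run.
print(break_even_runs(40, 4.0, 0.5, 1.5))  # 20
```

With these hypothetical figures, the suite must run 20 times before automation is cheaper, which is why criteria such as the rate of change of what is tested and the frequency of test execution [18] dominate the decision.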

2.5.2 Testability requirements

Testability makes a software product easier to test in an automated manner, either by making the design of the tests easier or by making testing more efficient [19]. Though researchers and practitioners have assumptions about what makes software testable, a validated set of testability requirements has not yet been defined [19]. Pettichord [23] cites examples of testability requirements from his own experience, including assertions, resource monitoring and verbose output. Such requirements greatly help automation testers by providing reliable and convenient testing interfaces for automating effectively and efficiently. The paper also discusses how GUI test automation is affected by testability issues and the strategies for avoiding them. On a similar note, Patwa et al. [19] elaborate on the importance of testability and acknowledge that it must be considered even before a project begins. The paper also mentions a few examples of testability requirements related to GUI test automation, such as proper naming standards for user-interface elements and custom controls that tools can recognize. As future work, the paper identifies the need for further empirical studies to establish testability engineering as a part of software engineering research. A study at Microsoft [44] finds that the testability of the SUT must be considered in relation to the design and system architecture, to analyze how it affects the testing effort and the overall testing process. Karhu et al. [45] also view the testability of the software as a large concern, because poor code can make the testing process unreliable. The authors state that one common reason for automation failure is software designed and implemented without testability in mind. This increases the maintenance costs of the architecture and the automation tool. A case study performed by Liebel et al. [10] reports a lack of testability in the software product as an issue that made automated testing difficult. The paper, however, did not mention any testability requirements or give a detailed description of the issues. Xie et al. [46] suggest the strategic use of assertions and the creation of a minimal number of test cases as examples of testability requirements for automation. They further mention that decisions taken by designers impact not only usability but also testability. According to Alanen et al. [47], it is crucial to involve both testers and developers in understanding testability requirements in order to enhance software testability. They must gain insight into the design of the system, the requirements documentation and the code base to generate appropriate testability requirements. Furthermore, they must promote testability early in the development phase and make the SUT easier to test by providing guidelines that aid testability [23][19] and [47]. Though literature evidence exists for the testability requirements of software systems, no paper proposes a validated set of testability requirements to be considered for GUI automation testing through an empirical study
in a specific context. These testability requirements may differ for GUI automation, considering that GUIs are dynamic in nature, and there is always some technical challenge in making an automation tool work with a GUI product [8].

Factors affecting waste

Maintaining test cases is generally viewed as an expensive and time-consuming process by software developers [36]. A study at Accenture revealed that even simple changes to GUIs lead to 30% to 70% modifications to the test scripts, making 74% of the test cases unusable during GUI regression testing [12]. Valid empirical data on maintenance is limited, especially when it comes to GUI test automation [13]. In their systematic literature review (SLR) of the merits and drawbacks of automated testing, Rafi et al. [48] reported only four papers that describe the difficulties associated with the maintenance of automated test cases. Of these, only one study identified factors that affect maintenance, in the form of theoretical cost models [22]. Similarly, [12], [45] and [39] have stated several factors, both technical (standardized technological infrastructure and frequent changes to the underlying technology) and context dependent (undocumented architecture and poorly structured testware), which impact the maintenance of automated testing. Karhu et al. [45] performed a case study where they observed the factors that influenced the use of software test automation, such as taking maintenance costs as well as human factors into consideration. Their observations are supported by Berner et al. [22], who further propose that the design of the test suite architecture is also an important and commonly overlooked factor. In addition, the absence of proper testware documentation and the lack of guidelines for creating reusable and maintainable tests are further factors affecting waste [45], [22] and [13]. Moreover, Alègroth et al.
[13] reported thirteen factors influencing maintenance and their impact on automated GUI-based testing, represented by Visual GUI Testing (VGT). Furthermore, companies often abandon automated testing even after substantial investment, due to wrong expectations while implementing test automation [45], [22] and [13]. Considering the importance of identifying such factors and the vast amount of existing literature, it becomes important to validate whether these factors are applicable to GUI test automation and to find additional factors which have not been identified in an industrial scenario.

Research gap

There exists evidence regarding criteria for when to automate, testability requirements and factors affecting waste for automation in general, and a few of them for GUI test automation as well (some testability requirements in [23] and maintenance factors in [13]). However, such generic variables are often broad and largely ineffective as they try to resolve every problem [49]. A multi-vocal literature review [7] proposed 15 factors which support decision-making on whether and what to automate in software testing, but it does not discuss the criteria for when to begin automation. There is no SLR outlining the testability requirements influencing software test automation. Moreover, the SLR by Rafi et al. [48] reports only four papers discussing maintenance associated with automated test cases, and these are not specific to GUI test automation. This provides the motivation for conducting a literature review to find answers in our research area. Garousi et al. [7] identify the need to validate the research evidence in an industrial scenario for a specific context. It is likely that not all variables influencing GUI test automation are suitable for every industry. For instance, even though GUI testing is a vast domain, its availability varies across platforms; GUIs are rarely present in telecommunication products [15]. Thus, the identified evidence might not be applicable in such a context. It is worth investigating, as these variables may differ for GUI test automation in such a context owing to the dynamic nature of GUIs. Moreover, the GUI itself must be tested to verify its conformance to the specifications [25], [24] and [20]. As identified by the authors in [7] and [13], it is essential for researchers and practitioners to collaborate to identify additional variables and understand how such software test automation decisions and maintenance factors are affected in a specific context. As per Banerjee et al. [33], research on GUI testing is both timely and relevant. There has been a steady increase in the number of articles related to GUIs, mainly contributing towards improved testing techniques, tools and testing paradigms [33] and [14].
Nonetheless, only 13.23% of the articles have been published as a collaboration between researchers and practitioners, while 73.52% of the papers are published solely by academia [33]. There has been a lack of articles investigating the opinions of practitioners towards GUI test automation, as well as of articles asking researchers about their opinions on the current state of the art and on possible future research directions [14] and [33]. Overall, the lack of context-specific and validated evidence is the main motivation for this thesis. Such knowledge is necessary for industrial practitioners to assess whether these criteria, testability requirements and factors are applicable and valid in their context, so that they can make informed decisions by relying on evidence. As a result, this thesis is motivated by the industrial need coupled with the research gap identified from the literature. It provides a foundation for performing our research in GUI test automation to explore criteria for when to automate, testability requirements, and factors leading to waste in GUI test automation. Furthermore, this research validates the empirical evidence in an industrial scenario and contributes to the body of literature.
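One of the waste factors reported above, namely that even simple GUI changes can force large modifications to test scripts, is commonly mitigated by centralizing element locators, for example in a page-object style. The sketch below is hypothetical; the driver class, page class and element names are invented for illustration:

```python
# Hypothetical sketch of centralizing GUI locators (page-object style).
# All class and element names are invented for illustration.

class RecordingDriver:
    """Stand-in for a real GUI automation driver; records actions."""
    def __init__(self):
        self.actions = []

    def type(self, locator, text):
        self.actions.append(("type", locator, text))

    def click(self, locator):
        self.actions.append(("click", locator))


class LoginPage:
    # Locators live in ONE place; a GUI redesign means editing this
    # class only, instead of every test script touching the screen.
    USERNAME = "input-username"
    PASSWORD = "input-password"
    SUBMIT = "btn-login"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


driver = RecordingDriver()
LoginPage(driver).login("alice", "secret")
print(driver.actions[-1])  # ('click', 'btn-login')
```

If the login screen is redesigned, only the three locator constants need updating, rather than every script that exercises the screen.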

Chapter 3 Research Method

In this section, an outline of the research design employed to fulfill the research aims and objectives of this thesis is presented. Yin [50] defines research design as a logical plan for deriving answers from a set of initial research questions. It comprises a research method describing the type of study, a data collection method for gathering suitable data, and a data analysis method for interpreting the collected data.

3.1 Method selection

Experiments [51], case studies [50], surveys [52] and action research [53] are widely used empirical research methods in the context of software engineering. Experiments [51] require identification of all variables in the initial stages of the study, which is difficult in our research as the variables can be identified only after exploration. An on-going project is more suitable for examining the variables under study, which are not known at the beginning. Moreover, as the goal is to focus on GUI test automation in an industry with several people involved, it is not feasible to replicate such a scenario in a lab environment. A survey [52] is commonly known as research in the large, or research in breadth, as it collects an overview of a phenomenon with respect to a population. However, as the current research aims at gaining an in-depth understanding, it is deemed inappropriate. As mentioned in Section 1, the study is being conducted because there is a need from the industry and the problem being investigated is not well known among the practitioners working at the company. This requires the research to be conducted with the practitioners of the company to capture their opinions and experiences, ruling out the option of a survey. Action research [53] is carried out by intervening in a real-world setting and attempts to improve the situation by observing the effect of the intervention. Nonetheless, the objective of this research is not to intervene in the real-world
situation and observe its effects, as no change is made or action is taken. Only criteria, testability requirements and factors are explored, for which action research is not suitable as the researchers behave more as observers. Out of the several research methods used in software engineering, we have adopted the case study as our empirical method during this thesis work to achieve the research objectives. A case study is defined as "an empirical inquiry that investigates a contemporary phenomenon within its real-life context, especially when the boundaries between phenomenon and context are not clearly evident" [50]. As this study aims at inspecting a problem which is a need of the industry and not very well known among its practitioners, a case study is appropriate for investigating the problem and resolving it in a real-world scenario. Moreover, it is a qualitative approach and allows us to gain an in-depth understanding of the problem.

3.2 Study design

We perform an in-depth investigation using the evidence-based software engineering (EBSE) guidelines provided by Kitchenham et al. [26]. EBSE incorporates evidence into decision making to provide the best solution for a given practical problem. It comprises the following steps: 1) identify the need for information and translate it into a set of answerable research questions, 2) track down the best evidence to answer the questions, 3) critically appraise the evidence, 4) translate research evidence into practice by incorporating practitioners' opinions, and 5) evaluate the effectiveness and efficiency of the prior steps and identify improvement areas. We adopted steps 1, 2 and 4 for conducting the research and omitted steps 3 and 5. The collected evidence was superficially appraised for its validity, impact and applicability as a part of the snowballing procedure [54]. Hence, step 3 was not conducted in a formalized manner.
Step 5 was not considered, as our aim was not to reflect on the EBSE process itself to find improvement areas in software engineering practices. We have used a multi-staged EBSE research process in our study, wherein each subsequent step builds upon the previous one, as shown in Figure 3.1. In order to systematically close the gap between the outcomes of step 1 and step 2 of the EBSE process, we have used the Bayesian synthesis method in step 4.
Figure 3.1: EBSE steps.

The overall goal of the research is that practitioners involved in GUI test automation will be able to make favorable decisions regarding when to automate by relying on evidence, and are informed about testability requirements and maintenance factors. The following steps of the research lead up to our goal.

EBSE Step 1: The primary step was to identify the need for information (research scope, problem domain and management procedures) and translate it into a set of answerable research questions. This was derived from the case organization by having an elaborate discussion with the industry supervisor. It was further refined by forming an expert panel wherein the supervisor from BTH and the supervisor from industry collectively discussed and decided the research scope and problem domain. Moreover, it is important to gain an in-depth understanding of the GUI test automation approach, for which case studies are appropriate. This led to the formulation of three research questions which were asked in the case study. They are:

RQ1. When is it suitable to begin GUI test automation?

Description: The when question will be answered by exploring criteria which must be fulfilled to begin GUI test automation. Potential criteria may be that the test automation scope is set, mock-ups are ready, or the compatibility of the automation tool is checked. The goal is to identify, in literature and practice, the criteria for when to begin GUI test automation. (Mapped to O1 and O4)

RQ2. What are the testability requirements for performing GUI test automation?

Description: Testability requirements aim at improving test design efficiency and facilitating test automation. These requirements might be related to the design of the SUT, such as following proper test scripting standards, or to the design of the test environment, such as automatic installation and configuration procedures for the SUT.
The goal is to explore, in literature and practice, the testability requirements influencing GUI test automation and to organize them in a systematic manner. (Mapped to O2 and O4)
RQ3. What are the factors that lead to waste in GUI test automation?

Description: Potential factors may be knowledge/experience, as identified for test automation in general, and the number of GUI interface components in the test script, specifically for GUI test automation. Moreover, not all factors that lead to waste in test automation are generic, and it is possible that not all factors that affect GUI test automation have been found yet. The goal is to identify, in literature and practice, the factors associated with waste, i.e., the additional effort required for maintaining automated test scripts while performing GUI-based automation testing. (Mapped to O3 and O4)

EBSE Step 2: The second step is to gather the best evidence for answering the research questions. This evidence is categorized as research-based evidence and practitioner-opinion-based evidence. The criteria for when to automate (RQ1), testability requirements (RQ2) and mainly the factors associated with waste (RQ3) are identified through a literature review. A literature review is conducted for gathering research evidence, while interviews (case study) are conducted to capture the viewpoints of the practitioners regarding the aforementioned aspects of GUI test automation. The reason for conducting a literature review is two-fold. Firstly, it helped us gain an understanding of the context and background of the study (initial literature review) and capture information related to the problem domain by synthesizing evidence from multiple studies (targeted literature review). This provides knowledge and decision support [55]. Secondly, the information collected provides answers to the three research questions. These answers are used for validation of the evidence base as well as for integration with the subjective opinions of the practitioners, performed as a part of EBSE step 4.
EBSE Step 4: To translate the research evidence collected from the literature review into practice, we used the Bayesian synthesis method. It was selected because it provides a systematic approach to integrating subjective opinions with research evidence, thus making the evidence more suitable for the context and addressing the gap we identified related to researcher-practitioner collaboration [55]. Moreover, the evidence gathered from the literature review was validated, and additional context-specific variables were collected as a part of the case study to answer the research questions. To validate and use the research-based evidence obtained from the literature, it is necessary to integrate it with the opinions and experiences of the practitioners, who have concrete knowledge about the specific situations and circumstances encountered in the context. Such engagement of the practitioners in the learning process, combining the evidence with previous knowledge and experience, enables them to make informed decisions in each situation rather than simply following the proposed criteria or factors for GUI test automation.
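The intuition behind Bayesian synthesis can be stated schematically. The following is the generic form of Bayes' theorem, given here only as an illustration of how a prior (literature evidence) is combined with new data (practitioners' opinions), not as the exact model used in this step:

```latex
\underbrace{P(\theta \mid D)}_{\text{synthesized view}}
\;\propto\;
\underbrace{P(D \mid \theta)}_{\text{practitioners' opinions}}
\times
\underbrace{P(\theta)}_{\text{literature evidence}}
```

Here $\theta$ denotes a variable under study (for example, whether a given criterion applies in the context) and $D$ the opinions collected in the interviews; the posterior combines both sources of evidence.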
Literature review design

We conducted a literature review using a snowballing approach, following the guidelines proposed by Wohlin [54]. Our intention was to find, in a systematic manner, as many available sources addressing the research questions as possible. Hence, guidelines for systematic literature reviews (SLRs) are followed, although a full SLR is not conducted, as our study does not aim to perform an in-depth investigation of the research area by including thesis dissertations and grey literature. Moreover, our study does not seek trends and patterns in the existing literature [56]. The purpose of snowballing was to identify criteria for when to automate, testability requirements and factors associated with waste. The snowballing procedure was chosen to ensure that we included as many of the relevant sources as possible by conducting forward and backward snowballing. Moreover, it was selected over a regular database search because any new study cites at least one previously published paper in the same research area, which can easily be found through snowballing; hence, finding relevant papers is almost straightforward. In this scenario, snowballing is used for identifying additional papers using the list of references in a paper and the citations to it. Figure 3.2 illustrates the snowballing procedure.

Figure 3.2: Snowballing process.
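The iterative backward/forward snowballing loop described above can be sketched as follows; the citation graph, paper names and relevance check are toy stand-ins for the reference lists and Google Scholar citations actually used:

```python
# Schematic sketch of iterative snowballing over a toy citation graph.
# references[p] -> papers cited by p (backward snowballing);
# citations[p] -> papers citing p (forward snowballing).

def snowball(start_set, references, citations, is_relevant):
    selected = set(start_set)
    frontier = set(start_set)
    while frontier:  # iterate until an iteration finds no new studies
        candidates = set()
        for paper in frontier:
            candidates |= set(references.get(paper, []))  # backward
            candidates |= set(citations.get(paper, []))   # forward
        # Deduplicate and apply inclusion/exclusion (is_relevant).
        frontier = {p for p in candidates
                    if p not in selected and is_relevant(p)}
        selected |= frontier
    return selected


# Toy data: A cites B; C cites A; D cites C but is irrelevant.
refs = {"A": ["B"]}
cites = {"A": ["C"], "C": ["D"]}
result = snowball({"A"}, refs, cites, lambda p: p != "D")
print(sorted(result))  # ['A', 'B', 'C']
```

The loop terminates exactly as described in the text: when a backward and forward pass over the newest papers yields no further relevant, non-duplicate studies.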
First, we set the scope of the search strategy to cover criteria for when to automate, testability requirements and factors associated with waste in GUI test automation. Based on this, the following inclusion and exclusion criteria were applied.

Inclusion criteria

Research papers providing information regarding criteria for when to automate, testability requirements or maintenance factors, for test automation in general or GUI test automation in particular. We included papers describing success factors or best practices for test automation, or explicitly for GUI test automation, since they could help us identify criteria to be fulfilled before beginning automation or checking for testability. As an increase in the publication trend in this research area was seen from the year 2000 [33], 2000 was selected as the start of the search period. The end of the search period is 2017, as we started our research in this year.

Exclusion criteria

If a paper merely mentions the concept of GUI test automation without conducting an in-depth investigation within the scope of our search strategy, or only mentions it in related work, summary or future work, it is not included. Research papers which did not target any of our research questions are excluded; for example, papers on technical debt in GUI test automation, techniques for GUI test automation, and surveys on current software testing practices and improvements without any link to when criteria, testability requirements or maintenance factors. Studies which are not published in journals or conference proceedings are excluded, as are studies whose contribution is not assessable, i.e., not written in English, tutorials and presentations.

Start set: First, relevant search strings were formulated by identifying suitable keywords.
The search strings were: "when to automate" AND (GUI testing OR GUI test automation); agile AND (GUI testing OR GUI test automation); GUI test automation AND testability requirements; "testability requirements" AND (GUI testing OR GUI test automation); (GUI testing OR GUI test automation) AND (maintenance OR maintenance costs). For each search string, the first ten results in Google Scholar were examined by checking their abstracts and titles for relevance to generate a tentative start set. Google Scholar is comprehensive and more up to date, containing articles which are not indexed by other databases like Scopus and Inspec; thus, it was selected as our database. Next, the full text of each paper from the tentative set was studied. After applying our inclusion and
exclusion criteria, 9 studies were obtained, forming our final start set.

Snowballing: We conducted backward and forward snowballing to find sources of papers in the chosen research area. It is an iterative process which continues until no new studies are found. In each iteration, backward and forward snowballing of the studies in the start set were conducted simultaneously as follows:

Backward snowballing: Here, we examined the reference lists of papers from the start set to identify new papers to be included. For each reference, we checked the publication type and venue, and the context in which the study was referenced. If it satisfied the criteria, it was included as a candidate for selection.

Forward snowballing: In Google Scholar, forward citations of each paper in the start set, covering until 2017, were recorded and examined. The first level of screening was based on the information presented in Google Scholar. If this information was insufficient to make a decision, the abstract and the place of citation in the citing paper were studied. If required, the full text of the citing paper was examined.

The iterative process ended at the fourth iteration, as no new studies were found. Moreover, at each iteration, duplicates were removed, and the inclusion and exclusion criteria were applied upon reading the full text of the papers. A summary of the results of the snowballing procedure is presented in Table 3.1.

Case selection and unit of analysis

This thesis reports a multi-case study performed at a global and leading company in the areas of telecommunication and multimedia, at one of its sites located in Karlskrona, Sweden. The company provides various product solutions such as charging systems for mobile phones, network solutions and multimedia solutions.
Two subsystems that are part of a large-scale product, a convergent end-to-end customer business and management product, are the distinct cases studied at the company. Both subsystems employ several automated jobs to continuously integrate and test the product. As the company largely focuses on providing telecommunication services, it is relatively new to the field of GUIs and has identified the need to explore criteria and factors associated with GUI test automation. We find these subsystems of particular interest for identifying and understanding the various aspects of GUI test automation, as they are involved with the development and automated testing of the GUI at the site. Testing through the GUI is performed at unit, integration and black-box end-to-end levels, but the focus of the study is only on the integration and black-box levels. For GUI integration testing, a proprietary tool called basic integration test (BIT) is used, and for black-box end-to-end testing the Selenide framework, powered by Selenium, is
implemented. BIT tests are primarily written for testing specific widgets, whereas end-to-end flows are tested using Selenide. The unit of analysis of the study is the test automation approach implemented for the GUI within the two subsystems. An overview of the same is depicted in Figure 3.3.

Table 3.1: Summary of snowballing procedure. The table lists, for the start-set selection and each of the four iterations of backward and forward snowballing, the papers examined and the new selections; the final start set comprised 9 studies, and the fourth iteration (1 paper examined, 0 selected) yielded no new studies.

Figure 3.3: Overview of case selection and unit of analysis.

Overview of cases

The overview of the subsystems is presented in Table 3.2. The characteristics of each case are mentioned below.
Table 3.2: Overview of the cases (columns: Subsystem, Persons, GUI Size (LOC), GUI Unit Tests, GUI Integration Tests, GUI Black Box Tests).

Case 1 description, subsystem 1: Subsystem 1 consists of entities that offer a business configuration to the end users and are heavily dependent on the GUI. Their GUI sets up the specifications of the entities which are required to create product offerings. As seen from Table 3.2, they develop a large amount of GUI code and write a correspondingly large number of GUI test cases.

Case 2 description, subsystem 2: Subsystem 2 consists of several entities that offer services for executing the business configuration produced by subsystem 1. Their GUI is flexible and is used to set up the configuration for the services offered. They are not heavily dependent on the GUI, and hence the GUI produced is smaller in comparison to that of subsystem 1.

Case study design

The guidelines given by Runeson et al. [57] are used for conducting the case study. The design of the case study is flexible [57]; thus, the steps within the design plan are adjustable, which aids researchers in adapting to an improved understanding of the problem under study. Moreover, it enables us to incorporate additional unplanned data collection and analysis opportunities in the design.

Data collection protocol

We utilized multiple sources of evidence for data collection to enhance the credibility of the research results and conclusions [57] and [58]. Interviews and process documents were used for gathering the necessary qualitative data from the company.

Standard process documents

The company provides several standard process documents to assist employees in their work. Such documents existed for the GUI test automation process as well. These process documents include test analysis documents, test reports and GUI description documents. The process documents and authorization permissions were procured from the industry supervisor.
Process documents were reviewed to gain a basic understanding of the process and terminology followed in the company.
Interviews

Interviews are conducted to capture the perceptions of the practitioners and obtain qualitative data.

Interview planning and selection of participants

An invitation was sent to the interviewees over the company's dedicated Microsoft Outlook account. The invitation included the date, time and place of the interview along with an introduction of the researchers and the purpose of the study. The participants either accepted the invitation or suggested an alternative date and time, upon which the interview was rescheduled, an updated invitation was sent and confirmation was received. The interviews were conducted in an available meeting room in the company and were audio recorded after seeking the consent of the participants. This was done to ensure minimal outside interference, so that the recording was not affected by noise and the parties could hear each other adequately. The interviewees were selected such that the opinions of the practitioners involved with GUI test automation could be captured. As mentioned earlier, only the two subsystems within the site work with the GUI; hence, the participants for the interviews were chosen from these two subsystems. Firstly, a complete list of people working with GUI test automation was obtained from the industry. The overall team structure within the two subsystems is presented in Table 3.3.

Table 3.3: Team structure (columns: Subsystems, Teams, Teams working with GUI).

The roles selected for interviews are positions which are directly involved with test-related activities or are influenced by the outcome of the testing process. The overview of the roles is presented in Table 3.4. We used cluster sampling for selecting the interview participants, aiming to select at least one person randomly from each role in both subsystems. This approach of selecting participants based on differences among roles was taken to avoid replicating similarities in the study.
Moreover, the participants were selected based on their availability in the time period during which we could conduct the interviews. If more practitioners were available for a specific role, they were all selected and interviewed. The distribution of interview participants across the two subsystems is presented in Figure 3.4.
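The selection scheme described above, at least one practitioner chosen at random per role and subsystem from those available, can be sketched as follows; the roster and participant names are invented for illustration:

```python
# Hypothetical sketch of role-based participant selection.
# roster maps (subsystem, role) -> list of available practitioners.
import random


def select_participants(roster, rng):
    """Pick one person at random from each (subsystem, role) group."""
    chosen = {}
    for group, people in roster.items():
        if people:  # skip roles with nobody available
            chosen[group] = rng.choice(people)
    return chosen


roster = {
    (1, "Test architect"): ["P1"],
    (1, "Developer"): ["P2", "P3"],
    (2, "Test architect"): ["P4"],
    (2, "Developer"): ["P5"],
}
picked = select_participants(roster, random.Random(0))
print(len(picked))  # 4
```

With real data, the remaining practitioners for a role would also be interviewed when available, as the text notes; the sketch only shows the minimum one-per-role-per-subsystem selection.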
Table 3.4: Overview of roles.

1. Test architect: Responsible for the overall strategy within the subsystem; manages the automation framework and environment and checks the competence level of the team members.
2. Product Owner: Coordinates the test activities within the team; ensures that team members follow the code of conduct and manages the product as a whole.
3. Design lead: Aims to improve testing and development activities in the team; ensures proper test coverage is achieved.
4. Test lead: Responsible for the overall test activities such as test case design and implementation.
5. Developer: Uses the requirements for designing and implementing the GUI; also responsible for testing.

Figure 3.4: Distribution of interview participants.

Overall, 18 practitioners were approached, of whom a few were unavailable for participation. Thus, we interviewed a total of 12 participants from the two subsystems. Based on recommendations by Rowley [59], a good rule of thumb for new researchers is to aim for at least six to eight interviews lasting about one hour each. Moreover, we interviewed at least one practitioner per role per subsystem, thereby capturing opinions from different roles. Hence, we consider the size of our set of interview participants to be adequate.

Interview design

We adopted a semi-structured interview strategy [52] for all the interviews. As the study is exploratory, semi-structured interviews were suitable for gathering in-depth, high-quality information, as they allow follow-up questions to be asked when interesting points are raised by the interviewee [60]. Based on guidelines provided by Runeson et al. [57], each interview was divided into four themes and lasted approximately 90 minutes. The interview began with open-ended
questions, moved towards more particular and formulated questions, and then opened up again towards the end of the interview, forming a time-glass model [57]. The overall design of the interview is depicted in Figure 3.5.

Figure 3.5: Interview design.

The themes of the interviews were:

Preparation and experience: Introducing the researchers and presenting the research goals, objectives and interview format, followed by questions relating to the interviewee's background, experience, current role and responsibilities in GUI test automation.

Overview of GUI test automation: Questions regarding the end-to-end process (test activities, including pre- and post-activities) for performing GUI test automation and the tools used for performing it.

Criteria for when, testability requirements and factors associated with waste in GUI test automation: In this phase, the interviewees were asked to explain the activities they perform during the entire testing timeline, the things that should be ready before they begin GUI test automation, and when it is suitable to perform GUI test automation. In addition, they were asked for any tips, procedures or guidelines for achieving testability, and about problems relating to the SUT and environment when performing GUI test automation. Finally, they were asked to provide details regarding the problems they faced while maintaining GUI test cases and how such issues were handled.

Presenting the literature review evidence: In the last phase, the Bayesian synthesis method was utilized for integrating research evidence into practice by incorporating subjective opinions. Here, the evidence captured from the literature addressing the research questions was presented in tabular format to the interviewees. They carefully examined each item in the table to provide their opinions (whether they already practice it, its validity, its importance) and mentioned whether it was suitable to their context. This phase lasted for about minutes.

The first three themes are referred to as Phase 1 of the interview, while the last theme is called Phase 2 throughout the document. The aim was to capture the opinions of the practitioners relating to the various aspects of GUI test automation and then integrate the research-based evidence obtained through the literature review with the practitioners' subjective opinions. Due to the limited availability of the practitioners, Phase 1 and Phase 2 were conducted as parts of a single interview. However, it was ensured that the personal opinions of the practitioners on the three research questions were fully captured (EBSE step 2) before presenting the literature review results for executing the Bayesian synthesis method (EBSE step 4).

Formulation of interview questionnaire

A semi-structured questionnaire was designed by implementing the following three steps.

Formulation of questions based on literature review and process documents: Firstly, an initial questionnaire was formulated using the literature review results and the information in the process documents. The study of the background and related work from the literature and of the company documentation facilitated better communication between the researchers and the interviewees; thereby, the focus of the interviews lay on the more important aspects of GUI test automation rather than on understanding the basics. The questionnaire was designed to be open-ended and understandable to the researchers and interviewees, according to the suggestions provided in [59] and [61].
Review and update the questionnaire with the university and industry supervisors: It is crucial to create effective interview questions to gather maximum data from the interviews. The initial questionnaire was sent to the university and industry supervisors for review. Separate meetings were then set up with the supervisors to discuss their feedback, since their experience in conducting such interviews, or in being interview subjects themselves, helped enhance the questionnaire. It was revised until the supervisors and researchers reached a consensus on the final version. The final questionnaire differed from the initial one only in that the language was improved to reduce ambiguity and additional follow-up questions were added in case the primary questions were not answered by the interviewee. Moreover, the questions were framed at a level applicable to all roles of the interview participants.

Update the questionnaire based on a pilot interview: To find further limitations and weaknesses in the questionnaire, a pilot interview was conducted to refine it prior to the actual interviews. Moreover, as data analysis and data collection were performed simultaneously, the questionnaire was updated for upcoming interviews whenever new information emerged from the collected data. For example, one interviewee mentioned that a test analysis is conducted as part of the study phase prior to development and testing. Though the structure of this phase may vary from team to team, conducting a test analysis is a formalized process for all teams. An additional question to capture the various activities performed as part of the test analysis was therefore added to the questionnaire. The final interview questionnaire is presented in Appendix A. A snapshot of an interview transcript indicating the required modification to the questionnaire is illustrated in Figure 3.6.

Figure 3.6: Snapshot of the interview transcript with a note to change the initial questionnaire.

Transcription

All the interviews were transcribed, as an audio recording of each interview was available. The ExpressScribe transcription software was used to convert the audio into text. The audio files were imported into the software and renamed to the name and role of the interviewee along with the serial number of the interview. Each interview was transcribed by the end of the week in which it was conducted. The transcripts were then exported to Microsoft Word for further analysis and stored in a separate folder on both researchers' devices and on Google Drive. A snapshot of the transcription software is presented in Appendix B.

Post interview

Once an interview was finished, the interviewee was sent a follow-up message to thank him/her for participating.
In some interviews, when an interviewee was unaware of certain GUI-related aspects, he or she was asked to inquire and provide the details afterwards. This information was also collected and stored.

Data analysis protocol

Several qualitative analysis approaches exist, which are greatly diverse and complex; one of them is thematic analysis [62]. It is the most widely used analysis method in qualitative research [63]. As defined by Boyatzis [64], thematic analysis is "a method for identifying, analyzing, and reporting patterns (themes) within data". Given its flexibility, it allows for a rich and detailed description of data [62][65][64]. It differs substantially from other analytical methods such as grounded theory, which mainly focuses on theory development [66]. Since our research goal is not to build a theory or find theoretically bounded patterns, approaches such as grounded theory were not adopted for this study. Thematic analysis is mainly suitable for questions related to people's views and experiences [63]. It attempts to gather opinions, interpret something important about the data in relation to the research questions, and present patterns and meaning within the data set [62]. We adopted the six-step thematic analysis framework proposed by Braun and Clarke [62] to analyze the qualitative data. Braun and Clarke provide an overview of thematic analysis built on other qualitative analytic methods that search for themes in relation to ontological and epistemological variables. As our study focuses on capturing the opinions of practitioners, we selected the generic thematic-analysis approach. The same guidelines were used for categorizing the raw literature review results. An overview of the six-step framework is illustrated in Figure 3.7, and each step is elaborated below.

Figure 3.7: Thematic analysis framework.

Become familiar with the data

According to Braun and Clarke [62], the first step in the thematic analysis process is to familiarize ourselves with the available data. Bird [67] describes this step as a key phase of data analysis within interpretative qualitative methodology.
It is vital to immerse ourselves in the content through repeated reading before moving on to the next step, as this forms a bedrock for the rest of the analysis [62]. Moreover, whether one aims for a detailed analysis, is searching for themes, or is theoretically driven, it is important to become familiar with all aspects of the content [65]. To prepare for this step, all the audio recordings were transcribed as described in the Transcription section above. The time spent on transcription was valuable, as one develops a better understanding of the data while transcribing, and transcription fosters the close reading and interpretative skills required for analyzing the data [68]. With the final transcribed interview documents in hand, the entire content was read thoroughly by

both the researchers and reviewed by marking the important ideas and thoughts of the participants. Once this was done, the formal coding process began, wherein the highlighted information was assigned codes that give meaning to what the data represents. Throughout the entire analysis, the coding was continually defined and developed.

Generating initial codes

Boyatzis [64] defines a code as the fundamental element or segment of raw data that conveys meaning about a phenomenon. Coding-based qualitative data analysis can be performed either manually or with computer-assisted qualitative data analysis software (CAQDAS) such as NVivo [69] and MAXQDA [70]. Atherton et al. [71] state that both approaches have their own debatable benefits and drawbacks. Nonetheless, the choice of how to perform the analysis ultimately rests with the researcher, depending on the project requirements [71]. Thus, after carefully comparing both approaches, we decided to conduct the data analysis manually. After we became familiar with the data and gained sufficient insight into the crucial parts of the transcripts, we generated a list of initial codes. In this step, each interview transcript was scrutinized, giving full and equal attention to every data item. Next, the highlighted data were assigned codes to give meaning to the extracts. Both researchers individually developed initial codes from the transcripts, which were later finalized after careful discussion. When conflicts arose, the interview transcripts were revisited and a final decision was taken after careful analysis. During the coding phase, both authors kept the following in mind: a) code for as many potential themes as possible, and b) mark the data necessary to explain each code, to avoid loss of context [62]. The obtained initial codes are presented in Table 4.2 in Section 4.2.
Searching for themes

This phase switches the focus to analysis at the broader level of themes rather than codes, by sorting codes into potential themes. The essence of this step lies in how different codes can be combined to form themes [62]. To achieve this, the list of codes identified from the data set was put into a tabular format. Next, each code was carefully analyzed to identify whether any codes were conceptually similar or had interconnected meanings, so that they could be placed under our research questions and into separate categories. Codes were constantly compared prior to grouping to make the analysis process effective. In addition, as the analysis was performed in parallel with data collection, every newly captured code was compared with the others to differentiate it from the remaining codes. This led to the formation of categories of codes such as test-process related, test-tool related, SUT-related and others: all the codes related to the test process were placed under the test-process-related category, and so on. On a similar note, the set of 50

initial codes were placed into 5 categories with respect to our research questions, as shown in Figure 4.4 in Section 4.2.

Reviewing and refining themes

Once we had a set of candidate themes, this phase commenced with their refinement. As per Braun and Clarke [62], there are two levels of reviewing and refining themes: at level one the coded data extracts are reviewed, while at level two the themes themselves are validated against the data extracts. We performed both levels of reviewing and refining on our themes and corresponding codes.

Defining and naming themes

In this step, the obtained themes and corresponding codes were defined and labeled by iteratively revising the coded data [65]. The data from the literature review and the transcripts were consulted to provide correct labels for the codes and themes. For each emerging theme, a detailed analysis was conducted to identify the story conveyed by that theme, and we considered whether each theme contributes and fits into the broader research outcome. The final set of themes after labeling is depicted in Figure 4.5 in Section 4.2.

Producing final themes

In the last step, supporting data extracts were obtained for each theme and the findings for each theme were described. The final themes and their findings are presented in Section 5.2.

Data synthesis protocol

Upon gathering research-based evidence from the literature, EBSE step 4 commences, wherein we synthesize the evidence and integrate it with the practitioners' opinions [72]. The outcome of a literature review must be useful for practitioners and should be viewed as a guideline that can enable and facilitate good decision making by relying on evidence in software engineering (SE) research [55]. Knowledge translation is often implemented in an ad hoc manner by simply providing practitioners with the results of a literature review, which are generally objective and unhelpful [73] and [74].
Rather, it must be a research activity combining the researchers' evidence with the subjective opinions of practitioners to support evidence-informed decisions [73]. To achieve this objective, we adopted the Bayesian synthesis method proposed by Badampudi et al. [55]. Bayesian approaches synthesize data and provide inferences by incorporating the knowledge and experiences of practitioners [55] and [75]. Their flexibility lies in the efficient usage and synthesis of methodologically diverse findings from every available resource [55]. In short, Bayesian approaches are useful for synthesizing data

and translating knowledge to practitioners, suitable to their context, by integrating their subjective opinions in the data synthesis. Figure 3.8 depicts the Bayesian synthesis approach; its steps are described below.

Figure 3.8: Bayesian synthesis.

Prior probability: The opinions of 12 practitioners on various aspects of GUI test automation were collected. During the first phase of the interviews, the practitioners stated their personal experiences and opinions on GUI test automation, mentioning the variables without any supporting information from scientific research. We used thematic analysis to analyze their responses and generate codes and potential themes (categories) for the criteria for when, testability requirements and factors associated with waste. The information provided by each interviewee for each code was combined to yield the prior probability (the percentage of practitioners mentioning the code as valid and important) for that code being valid in the context. The calculated prior probabilities are shown in Section 5.1.

Likelihood: The likelihood represents what is already known, i.e. the research-based evidence from the body of literature [55]. Data was extracted from the literature by the researchers to address the three research questions. The extracted criteria and factors were encoded into categories. The themes (categories) generated through thematic analysis in the previous step were aligned with the categories derived from the literature, and new categories were added where required. The likelihood is calculated as the proportion of studies reporting a criterion or factor to be valid and important within the scope of each research question, as shown in Figure 4.6 in Section 4.3.

Posterior probability: A targeted literature review was conducted prior to the interviews in order to capture the practitioners' opinions on the data extracted from the literature.
In the second phase of the interviews, we presented the variables extracted from each study to the practitioners in a tabular format. Each practitioner provided opinions on the validity, applicability and importance of each variable in their context. This information was used to calculate the posterior probability (the proportion of practitioners stating the item to be valid in the context), combining the practitioners' subjective opinions with the research-based evidence; it is presented in Chapter 5. An overview of the calculated prior probabilities, likelihood and posterior is depicted in Figure 5.1 in Section 5.1.
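In this scheme, each of the three quantities is a simple proportion over its source population: practitioners in Phase 1, studies in the literature, and practitioners again in Phase 2. The bookkeeping can be sketched as follows; all counts below are hypothetical and do not come from the study.

```python
from fractions import Fraction

def proportion(mentions, total):
    # Share of sources (practitioners or studies) reporting an item as valid.
    return Fraction(mentions, total)

# Hypothetical tallies for one code, e.g. "product stability".
practitioners = 12       # interviewees in Phase 1 and Phase 2
prior_mentions = 9       # mentioned the code unprompted in Phase 1
studies = 20             # papers in the targeted literature review
study_mentions = 14      # papers reporting the criterion as important
posterior_mentions = 11  # practitioners validating it in Phase 2

prior = proportion(prior_mentions, practitioners)          # 9/12 = 3/4
likelihood = proportion(study_mentions, studies)           # 14/20 = 7/10
posterior = proportion(posterior_mentions, practitioners)  # 11/12

print(float(prior), float(likelihood), float(posterior))
```

Using exact fractions rather than floats keeps the proportions comparable across items regardless of the differing population sizes (12 practitioners versus however many studies report the item).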

Chapter 4

Results

4.1 Overview of results

A literature review and interviews were conducted to track down the best evidence for answering the research questions as part of EBSE step 2. The literature review produced research-based evidence, while the interviews captured the opinions of the practitioners working in the two subsystems. The Phase 1 interview results are presented first, followed by the literature review and Phase 2 interview results.

4.2 Interview results

This section presents the results acquired in Phase 1 of the 12 interviews conducted as part of EBSE step 2.

Summary of interviews

In order to obtain relevant qualitative data from the two subsystems within the industry, a total of 12 interviews were conducted. A summary of the interview participants is presented in Table 4.1. We ensured that at least one person from each role in both subsystems was interviewed. In addition, the experience of the participants ranged from one year (relatively new to the company or role) to more than a couple of years (highly proficient in their role). Such diverse characteristics of the participants in terms of experience enabled us to capture data from varied perspectives.

Phase 1 results

This section describes the GUI test automation approach followed within the two subsystems, as well as the criteria for when, testability requirements and factors associated with waste as identified from the opinions of the practitioners.

Interviewee      Subsystem     Experience   Role
Interviewee 1    Subsystem 1   7 years      Test architect
Interviewee 2    Subsystem 2   2 years      Developer/Tester
Interviewee 3    Subsystem 1   11 years     Test lead
Interviewee 4    Subsystem 2   7 years      Product owner
Interviewee 5    Subsystem 2   5 years      Design lead
Interviewee 6    Subsystem 1   9 years      Test lead
Interviewee 7    Subsystem     years        Product owner
Interviewee 8    Subsystem 2   22 years     Test architect
Interviewee 9    Subsystem     years        Developer/Tester
Interviewee 10   Subsystem 1   7 years      Design lead
Interviewee 11   Subsystem 2   12 years     Test architect
Interviewee 12   Subsystem 2   6 years      Test lead

Table 4.1: Summary of interview participants.

Current approach for GUI test automation within the subsystems

The test activities performed as part of the test automation strategy before a product release are common to both subsystems and are illustrated in Figure 4.1. This information was captured through the interviews and from relevant internal documents outlining the overall test flow and the tools used at each testing level. Test analysis is conducted as part of the business use-case study phase: the new requirements or changes are analyzed and the test scope is defined. This is a time-boxed activity whose results are documented in a pre-defined template. It is decided and specified whether GUI automation tests are required and which parts of the GUI code need to be tested. Moreover, the level and type of GUI testing is analyzed. Within both subsystems, GUI test automation is applied at the unit, integration and black-box levels.

Figure 4.1: Test activities before product release.

A combination of manual exploratory testing and automated testing is used for testing the GUI. The manual testing is performed to explore the behavior of the GUI; this provides the base information required for GUI automation in terms of test coverage. The test coverage provides details mainly for unit testing, but it also gives an overview of the components from which functional test coverage is derived. This input is used for generating the final regression test suite for GUI test automation. The unit and integration tests can be performed in parallel with GUI development, while black-box testing is conducted towards the end of development, as it requires the GUI to be almost fully developed. An overview of automated GUI unit testing, integration testing and black-box testing is presented below and shown in Figure 4.2.

GUI unit testing: At this level, the behavior of GUI components within the production code is tested. These tests verify the input and output of independent GUI code sections and any modifications made to the Document Object Model (DOM). They are short, simple, and independent of the results of other tests. When the situation demands, specific GUI unit tests are excluded from the test suites or re-written from scratch. The developers/testers in each team are responsible for writing the GUI unit test cases, which are stored in a Git repository. Both subsystems have automated the execution process such that on each push to the repository the GUI unit tests are automatically triggered and the test verdict is shown. As the runtime of the tests ranges from a few milliseconds to a second, the developers are motivated to make small changes to the GUI code and run the tests frequently to obtain feedback. The two cases utilize a proprietary GUI unit testing tool developed within the company.
GUI integration test: At this level, the functional coverage of each business use case is tested. It aims to test components such as widgets, regions or an entire application, and the flows in an application or a page. In addition to verifying the

input and output of each flow, the modifications to the DOM are tested along with mocked back-end services. This level provides opportunities to test both positive and negative flows. Like the unit tests, the GUI integration tests are the responsibility of the developers/testers of each individual team and are automatically triggered on every code push to the Git repository. The GUI integration tests are called BIT in both subsystems and are implemented using a proprietary tool. The runtime of BIT tests is usually longer than that of GUI unit tests, ranging from hundreds of milliseconds to seconds. These tests are more stable and easier to develop and maintain than the GUI black-box tests. This type of testing is seen as an effective way to assure quality, as it provides faster feedback than a full end-to-end test.

GUI black box test: At this level, it is ensured that the connections between the GUI, the server and other dependencies work in all circumstances. Every business use case has a corresponding black-box test case to verify that an entire use case can be configured in an end-user browser. Like the integration tests, the black-box test cases must be frequently maintained for use in regression testing. These tests are written by the developers/testers in each team towards the end of GUI development and are automatically run on each code push to the Git repository. The Selenium automation testing tool coupled with the Selenide framework is used for black-box testing in both cases; it performs real end-to-end GUI automation testing without mocked back-end services. Selenide tests are unstable by nature. Moreover, their runtime is very high, ranging from thousands of milliseconds to several minutes. The Selenium tests sometimes pass and sometimes fail, and it is difficult to trace the root cause of this issue as sufficient logs are not available in this specific case.
Subsystem 1 is exploring the possibility of employing and configuring BIT tests to support black-box testing and eliminating the Selenium test cases completely. Recently, this idea has gained considerable attention from the practitioners, and a lot of effort is being invested to make it work.

Figure 4.2: An overview of GUI automation testing levels.
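A common source of the instability attributed to the end-to-end tests is timing: a fixed sleep either wastes time or fires before the GUI is ready. Selenium's own explicit-wait mechanism (WebDriverWait) addresses this by polling a condition until a deadline. The hedged sketch below shows the underlying polling idea in plain Python, with no browser involved; `wait_until` and `element_present` are illustrative names, not part of the company's tooling.

```python
import time

def wait_until(condition, timeout=5.0, poll=0.05):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Unlike a fixed time.sleep(), the caller proceeds as soon as the
    GUI state is ready, and gets a clear TimeoutError otherwise.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)

# Usage with a stand-in for "element has appeared in the DOM":
state = {"ready": False, "calls": 0}

def element_present():
    state["calls"] += 1
    if state["calls"] >= 3:  # the element "appears" on the third poll
        state["ready"] = True
    return state["ready"]

assert wait_until(element_present, timeout=1.0, poll=0.01) is True
```

In a real Selenium test the condition would be a locator check (for example, an element found by ID), which is why stable IDs and condition-based waits recur among the testability codes later in this chapter.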

Results of thematic analysis

We employed the six-step thematic analysis framework proposed by Braun and Clarke [62] to analyze the interview data. The following sections present the outcome of each step of the adopted analysis technique.

Become familiar with the data

We thoroughly read the transcripts several times and highlighted important areas to become familiar with the data. This ensured that we could trace back the trend of the participants' opinions using highlighted quotations from the transcriptions. Moreover, by re-reading the transcripts, we gained an overall understanding of the phenomenon under study.

Generating initial codes

The preliminary results associated with the identification of the criteria for when, testability requirements and factors associated with waste were based on the second step of thematic analysis. In this step, a set of initial codes was generated after reading the transcripts and highlighting meaningful units in the raw data to reveal explicit and implicit meanings. This was an iterative process which spanned the entire interview period. An instance of the process is depicted in Figure 4.3: we highlighted important information given by the participants and assigned codes where needed.

Figure 4.3: Generating initial codes.

Some examples of the codes are "decide on test level", "decide test automation process", "dedicated and skilled team for development and testing of the GUI" and "use of large functions". A total of 50 initial codes were obtained and are listed in Table 4.2.

1. Decide on test level
2. Decide on test type
3. Decide on test coverage
4. Define test automation strategy
5. Decide test automation process
6. Check compatibility of GUI automation tool with the GUI framework
7. Upgrading automation tools to newer versions
8. Understand functionality of the system
9. Knowledge and experience
10. GUI should be almost developed
11. Test data should be ready
12. Product requirements should be clear and understandable
13. Check for availability of resources
14. Documentation and support for the automation tool
15. Perform manual testing before writing automated tests
16. Develop skeletons for GUI automation test cases
17. Check if the product is stable
18. Dedicated and skilled team for development and testing of the GUI
19. Use of cross functional teams
20. Build stable framework around basic GUI framework
21. Use of existing frameworks for GUI development
22. Use of effective logs
23. Use of IDs to access GUI components
24. Use of unique class names for distinct sections
25. Minimal number of anonymous functions
26. Test case readability and understandability
27. Proper test coding standards
28. Common method for navigation to the desired page or object in minimal steps
29. Test cases should be independent
30. Reset test data between test runs
31. Out of order sleep and wait ranges
32. Reduce number of assertions which check for status messages
33. Use of constants
34. Separate functionality in GUI code
35. System call back on loading a configuration
36. Number of GUI components
37. Timely feedback from developers
38. Consider testability requirements from the beginning
39. Standard set of testability requirements
40. Changes in system functionality
41. Choice of GUI test automation tool affects maintenance
42. Large number of automation test files
43. Organization of test suite
44. Inclusion of test data in test code
45. High attrition rate of GUI developers
46. Same internal structure for all GUI pages
47. Use of large functions
48. Broken test scripts
49. Length of test cases
50. Compatibility of test and development processes

Table 4.2: Initial codes.

During this process, we found that the two subsystems follow a similar process for GUI test automation and only differ in the size of the GUI code.
Moreover, there is not much difference in the codes obtained from the two subsystems. Thus, the following sections depict the outcomes for both subsystems together.

Searching for themes

In this step, we organized the codes into categories in order to give a shared meaning to sets of codes with varying attributes. This was based on our own judgment and understanding of the case under study. First, the codes were organized according to our research questions, i.e. criteria for when, testability requirements and

factors associated with waste. Next, they were placed into the categories which we found to best explain the codes: test-process related, test-tool related, SUT-related, human and organizational, and cross-cutting. Each category can be described as follows.

Test-process related: This category refers to characteristics of test cases or test suites and relevant testing activities. For instance, we inferred that the codes "decide test automation process" and "develop skeletons for GUI automation test cases" are testing activities that must be performed before one begins GUI test automation. Hence, they were placed under our first research question, criteria for when, and the category test-process related. In total, 20 codes were placed in this category.

Test-tool related: The quality, support and selection of appropriate GUI test automation tools play a major role in the automation process. This category includes codes like "upgrading automation tools to newer versions" and "check compatibility of GUI automation tool with the GUI framework". In total, 4 test-tool-related codes were obtained.

SUT-related: This category describes properties of the system under test (SUT). We found 17 codes belonging to it, such as "GUI should be almost developed", "build stable framework around basic GUI framework" and "same internal structure for all GUI pages".

Human and organizational: Human and organizational decisions also impact the automation process. 6 codes were placed in this category, including "check for availability of resources" and "timely feedback from developers".

Cross-cutting: On analyzing the data, we observed that certain codes have an implicit relationship with other categories, making them applicable in the context of more than one category. We named this category cross-cutting. For example, the code "standard set of testability requirements" is placed in the cross-cutting category.
Moreover, if a code answered more than one research question, it was assigned to the question most affected by it. For example, "documentation and support for the automation tool" is both a criterion for when and a testability requirement; however, after considering its description and applicability, it was placed under testability requirements. Thus, the set of 50 initial codes was placed into 5 categories with respect to our research questions; these 5 categories are our themes. Figure 4.4 illustrates the results of this step, with all the categories color-coded to enhance readability.
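Mechanically, this grouping step maps each code to one research question and one category. The toy sketch below shows the bookkeeping with a handful of codes taken from Table 4.2; the individual question/category assignments are illustrative (the study's actual mapping is the one in Figure 4.4), and `themes` is a hypothetical name.

```python
from collections import defaultdict

# (code, research question, category) triples -- assignments are illustrative.
coded = [
    ("decide test automation process", "criteria for when", "test-process related"),
    ("develop skeletons for GUI automation test cases", "criteria for when", "test-process related"),
    ("upgrading automation tools to newer versions", "criteria for when", "test-tool related"),
    ("use of IDs to access GUI components", "testability requirements", "SUT-related"),
    ("standard set of testability requirements", "testability requirements", "cross-cutting"),
    ("use of large functions", "factors associated with waste", "SUT-related"),
]

# Group codes first by research question, then by category within it.
themes = defaultdict(lambda: defaultdict(list))
for code, question, category in coded:
    themes[question][category].append(code)

for question, categories in themes.items():
    for category, codes in categories.items():
        print(f"{question} / {category}: {len(codes)} code(s)")
```

A structure like this makes the later refinement steps cheap: merging two codes, renaming a category, or moving a code to another research question is a local change to one entry.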

Figure 4.4: Searching for themes.

Reviewing and refining themes

The fourth step involved revising the obtained codes and refining them using our judgment. Upon revision, the codes "decide on test level", "decide on test type", "decide on test coverage" and "define test automation strategy" were re-written as "test automation scope", which captures them more clearly. Similarly, a few other refined codes, such as "test suite architecture", were obtained. To ensure the consistency of the analysis process and avoid premature or incomplete data analysis, we distanced ourselves from the data for a period and then re-analyzed the obtained codes to reveal any new insights. For example, the code "GUI test automation tool" was initially placed under the research question criteria for when. However, we found that it relates to both criteria for when and factors associated with waste; since the maintenance issues resulting from it are much greater, it was placed under the third research question.

Defining and naming themes

In this step, we sorted the codes into labels to convey the ideas developing from them. We consulted the interview transcripts and the literature to provide the labels. For example, a few participants mentioned synchronous behavior, which is similar to the code "system call back on loading a configuration"; hence, it was assigned that label. In the same manner, the remaining codes were assigned labels that we found mentioned in the interview transcripts and in the research-based evidence from the literature. The themes themselves did not require any additional defining and naming, as they were found to be sufficiently meaningful. Figure 4.5 depicts the labels obtained in this step.

Figure 4.5: Defining and naming themes.

Producing final themes

The supporting data extracts and findings of each theme are presented in Section 5.2. Figures F.1 and F.2 in Appendix F depict the proportion of practitioners stating a variable to be applicable to GUI test automation.

4.3 Literature review results

A literature review was conducted as per the design reported in Chapter 3. An initial literature review was performed to understand the state of the art in GUI test automation, whose results were presented in Section 2. Later, a targeted literature review was conducted to identify appropriate information for addressing our three research questions. This was required as part of EBSE step 2, wherein one needs to refer to the evidence base for creating solutions and knowledge [72]. Based on our literature review, we present the criteria for beginning automation, the testability requirements and several factors associated with unnecessary maintenance in automated testing. We selected the variables most appropriate to the subsystems under study and the strategies adopted by the teams to implement GUI test automation. The number of references for each proposed variable was determined from the availability and description of the information. Variables mentioned for automation in general but considered applicable to GUI test automation in particular are included in the study. The identified variables within each research question were further divided into the categories generated by the thematic analysis of the interview data, with new categories added where required. The category cross-cutting was assigned to a variable when it was applicable to more than one group. For example, the testability level of the SUT, a criterion for when, determines how feasible it is to test the system in an automated way. It is not only a characteristic of the SUT, but it

is also influenced by the type of automation tool and the skill level of the testers. Hence, it is placed in the cross-cutting category. An overview of the variables, describing their frequency of occurrence in the papers under consideration and their category, is illustrated in Figure 4.6 to show the level of attention on each variable. Next, we present a detailed description of our findings with respect to each of our research questions.

Figure 4.6: Frequency of occurrence in research papers.
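The frequency counts shown in Figure 4.6 follow a simple rule: the share of selected studies that mention a variable. A minimal sketch, using the reference lists for a few criteria from Table 4.3 and the pool of 7 studies selected for this research question; the function name `frequency_pct` is ours, not from the study:

```python
# Sketch: deriving the frequency of occurrence (Figure 4.6) from the
# reference lists in Table 4.3. The pool of 7 selected studies for the
# "criteria for when" question is stated in Section 5.1.

CRITERIA_REFS = {
    "Product stability": [18, 25, 7],
    "Defining scope of the automation": [7, 8, 1, 22],
    "Well-defined test process": [1, 35],
    "Test data availability": [8],
}
STUDIES_FOR_CRITERIA = 7  # papers selected for this research question

def frequency_pct(refs, pool_size):
    """Percentage of the selected studies mentioning a variable."""
    return round(100 * len(refs) / pool_size)

for name, refs in CRITERIA_REFS.items():
    pct = frequency_pct(refs, STUDIES_FOR_CRITERIA)
    print(f"{name}: {len(refs)}/{STUDIES_FOR_CRITERIA} = {pct}%")
```

The resulting percentages (43%, 57%, 29%, 14%) match the likelihood column later reported in Figure 5.1.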

Criteria for when

During the review, 10 criteria (applicable to test automation in general) which were most suitable to the context under study were identified to the best of our knowledge and are presented in Table 4.3. These criteria were divided into 5 categories using thematic analysis. Based on frequencies, defining the scope of the automation, selection of the right GUI automation tool and the testability level of the SUT are the most important criteria, which must be taken care of before beginning automation testing.

Category: SUT-related

Product stability [18][25][7] — The product should be stable in terms of functionality, such that adding new features does not disturb or affect the existing functionality. If the product is not mature enough, it will increase the number of false-positive defect reports from automated tests. Additional effort is then required for fixing and analyzing the defect reports, thereby reducing the benefits of automated testing. As GUIs are known to be modified throughout the development process, keeping tests up to date with recent GUI changes is a major concern in automated testing.

Category: Test-process related

Defining scope of the automation [7][8][1][22] — Before we begin to automate the testing of the product, it is important to define the scope or coverage of the automation tests. The scope of test automation defines which types of tests are to be performed at which test level. One can decide to automate testing of specific features or selective test cases of several features.

Well-defined test process [1][35] — It is necessary for the automation activities to be well defined and organized to avoid unpredictable results. As part of the test process, it is necessary to identify and define the testing approach so that the test activities are carried out in a systematic manner.

Test data availability [8] — The automation suite must contain all the test data necessary for the various test cases. The test data can be simple input, like numeric parameters passed to functions for testing several conditions, or complex input, like files for testing specific functionality.

Category: Test-tool related

Selection of the right automation tool [7][8][1][35] — The automation tool must be chosen very carefully before the test automation process begins. It is essential to select the tool that is most suitable for automating the testing of the product. As every tool has its own technical limitations, the tool must be evaluated to verify whether it can interact with the required attributes of a feature. For example, in GUI test automation it is important to verify whether the tool can capture the required data from the GUI and its child objects. Changing the testing tool in the middle of an ongoing project is inadvisable. At times, it is beneficial to create one's own scripting tool using a suitable scripting language.

Category: Human and organizational

Feasibility assessment [7][1][35] — Organizations must know in advance whether an automation project is feasible for their needs. They must confirm that their projects are technically and economically feasible, e.g., in terms of Return on Investment (ROI).

Dedicated and skilled automation team [7][8] — It is a misconception that the existing test team will dedicate a few hours to the automation process. It is advantageous to have a separate test team and automation team, as they perform distinct work. A high level of technical knowledge about the system and environment is required of the automation team to handle tasks like test tool acquisition, installation and configuration. Moreover, testing tools can be very complex and demand experts working on them.

Resource availability [1][35] — It is advisable for organizations to confirm and identify whether all resources necessary to automate tests are, or will be, available. Finding out in the middle of an ongoing project that there are insufficient resources to ensure its continuity can be problematic.

Category: Cross-cutting

Testability level of the SUT [7][1][22][35] — The testability level specifies whether a software system has been designed to facilitate testing. Testability enhances test design efficiency and simplifies automation. It is related to both the SUT and test-tool categories.

Compatibility of the development process [7] — One needs to check whether the test automation process functions efficiently in the chosen software development process, for example an agile methodology. It is related to both the test-process related and the human and organizational categories.

Table 4.3: Criteria for when

Testability requirements

On eliciting the criteria for when to automate from the literature, we found that testability is one of the criteria to be considered before one begins automation [1] and [35]. 10 testability requirements were identified and are shown in Table 4.4. As described earlier, testability requirements are not related to the SUT alone but are also affected by the automation tool and the characteristics of the automation tests. Hence, the testability requirements were divided into 4 categories using thematic analysis. Use of unique names for different windows and interface controls and support for custom controls are the two testability requirements identified as specific to GUI test automation. Based on frequency of occurrence in the literature, use of effective event log storage management is the most important testability requirement.

Category: Test-process related

Following proper test scripting standards [8][22] — As automation testing constitutes a mini development cycle, it is important to prepare and follow proper coding standards. It is vital to develop checklists for reviewing the test scripts. All software practices which are followed for developing the product and are applicable to the development of the automation test suite should be practiced.

Identifying and converting common steps into functions [8][22] — Common steps in the test cases must be identified and converted into functions. These can be placed in a common file and later be called by other test cases by passing suitable parameters as needed. This increases re-usability of the code and reduces time and effort.

Clean-up between test runs [8] — After execution of the automation suite completes, it must be ensured that the application is reverted to its original state. Generated temporary files must be deleted, and changes made to properties or configuration files must be reverted. Clean-up of the test suite must be ensured even on abrupt termination.

Category: SUT-related

Effective event log storage management [8][23][22][47][76] — Excessive logging can burden system performance, and the logs may become hard to decipher. It is important to determine what should be logged and how the storage should be managed. One should consider logging events such as significant processing in a subsystem, major system milestones, internal errors and unusual events (logged as warnings). Log entries should include timestamps and identify the subsystem that produced them. When reviewing logs, we can understand how different parts of the system interact, which can be used as a baseline for later testing. For example, verbose output is one such technique for logging events.

Usage of diagnostic techniques [23][22][47][76] — Diagnostic techniques are used for detecting bugs when they occur; without them, bugs become difficult to notice. In certain cases, internal data may be corrupt but might not result in a noticeable failure until further testing accesses that data. For example, assertions are used for diagnostic purposes.

Usage of monitoring techniques [23][22][47][76] — Monitoring techniques facilitate access to the internal workings of the code. Test points are an example of a monitoring technique which allows data to be inserted at different points in the system. They are useful for monitoring as well as for checking faulty data in the system.

Usage of fault injection techniques [23][47][76] — Fault injection aids in testing error-handling code. It injects environmental errors, like disk-full errors, bad-media errors or loss of network connectivity, which are difficult to simulate otherwise. Fault injection hooks are one such example.

Use of unique names for different windows and interface controls [23][47][76] — When generic names are used for windows, it becomes hard for the tool to learn all the controls in the interface. Moreover, tests won't be able to recognize whether the desired window is shown or a different window with the same name is displayed. Use of unique names eases the recognition of controls by the test tool.

Category: Test-tool related

Support for custom controls [22][23][16][19] — Custom controls are controls which are not detectable by GUI tools. They can be widgets or text boxes which, upon customizing, must be detected by the tool. Custom controls may vary from tool to tool; hence, we need to check the compatibility of the testing tools with the custom controls. Sometimes the tools need to be configured to recognize custom controls.

Category: Environment

Automatic installation and configuration [23][22] — Installation of software using install scripts that can automatically install the software on several machines with different configurations.

Table 4.4: Testability requirements

Factors associated with waste

12 factors which negatively affect the maintenance of GUI-based automation testing were identified from the literature and are presented in Table 4.5. Most of these factors were found in the context of GUI automation testing. Moreover, the corresponding mitigation strategy for each factor is also described. These factors were divided into 6 categories using thematic analysis. In terms of frequency of occurrence, knowledge and experience was observed to be the most important factor influencing maintenance of test scripts.

Category: Test-process related

Variable names and script logic [22][13] — Complex test scripts hinder the readability, re-usability and maintainability of the script code. Mitigation strategy: Define a clear and consistent test script architecture and define a naming convention for variables and methods.

Test case consistency [22][13] — If the automated tests are not run often and frequently enough, the test suite degrades into a state where the test cases are inconsistent and difficult to understand, which increases maintenance effort by a disproportionate amount. This degradation is a result of development or maintenance of the tested system or of changes to the system requirements. If these tests need to be repeated, the maintenance cost increases. Mitigation strategy: Frequently run and maintain the test scripts.

Broken test scripts [22][77] — Broken tests cause many problems. Updating broken tests takes time, and developers may not take the time to inspect all failing tests to distinguish regression failures from broken tests. They may instead choose to ignore or delete some failing tests from the test suite, thereby reducing its effectiveness. Repairing tests is tedious and time-consuming, especially when a large number of tests fail. Mitigation strategy: Developers must update the test code (and perhaps the SUT) such that the tests pass.

Test case length [13] — Long test scripts are less readable and are complex to maintain due to lack of understanding. Moreover, they take longer to execute and verify, causing frustration. Mitigation strategy: Use shorter test scripts. Provide support for a modular architecture that allows maintenance of subsets of a script independently from the rest of the script.

Loops and branches [13] — Usage of loops and branches in test scripts lowers their readability, thereby making script failure analysis more complex. Mitigation strategy: Keep the scripts as linear as possible; it is advisable to break loops and branches into separate scripts.

Testing of test code [22][8] — The test code sometimes does not detect an error or misses a failure due to a bug in the testware. Maintenance of such test scripts becomes more complex even if the test suite is well structured. Hence, the test code needs to be tested as well. Mitigation strategy: The test suite should be tested, at least superficially, with another test suite.

Structure of test suite architecture [13][22] — When test automation is performed with improper design, planning and documentation of the test suite architecture, additional effort is required to maintain the test scripts and automation infrastructure. A poorly structured test suite architecture will prevent developers from reusing scripts, as they do not understand them. Moreover, analyzing a failed test and locating the script for a test case becomes difficult. Mitigation strategy: Proper documentation specifying the requirements of the test suite architecture should be written before implementation. To avoid duplication of scripts, properly structure and document the test suite architecture such that it is easier to locate and use scripts for specific test cases.

Category: SUT-related

Number of GUI interaction components [13] — Maintenance cost increases with the number of interaction components present in the test scripts and becomes substantial for long scripts, as modified component properties are detected only during execution of the test script. Mitigation strategy: Verify whether a GUI interaction component whose properties have been modified is used more than once in the test script. If the tool supports it, try replacing the duplicate components with variables.

Category: Human and organizational

Knowledge and experience [13][22][45][48][12][78] — It is good practice to have GUI test automation performed and maintained by a domain expert, so that the analysis and implementation of test scripts is made easier by utilizing domain-specific knowledge. Mitigation strategy: Provide significant training to the practitioners before they start working with the GUI automation tool. Conduct workshops to update their domain knowledge.

Category: Test-tool related

GUI test automation tool [13][45] — Different GUI testing tools have varied functionality which is more or less suitable for specific contexts. Failing to pick the right tool for the right context can have a disproportionate effect on development and maintenance costs. Mitigation strategy: Requires more research in this area and can be taken up as future work.

Category: Environment

Simulator support [13][22] — A simulator is used to emulate software or hardware that is part of the system's operational environment. Adding or modifying functionality of the SUT causes simulators to stop working completely or partially. Partially working simulators cause test script failures, restricting the ability to maintain the corresponding test scripts. Mitigation strategy: Frequent maintenance of the test system's environment.

Category: Cross-cutting

System infrastructure [45] — Constant technological changes, in either the development or the product infrastructure, require keeping the test scripts up to date. This demands significant maintenance costs. It is associated with all the other categories. Mitigation strategy: Frequently run and maintain the test scripts.

Table 4.5: Factors associated with waste

4.4 Phase 2 interview results

To validate and use the research-based evidence obtained from the literature, it is necessary to integrate it with the opinions and experiences of the practitioners who have concrete knowledge about the specific situations and circumstances encountered in the context. Such engagement of the practitioners in the learning process, combining the evidence with previous knowledge and experience, enables them to make informed decisions in each situation rather than simply following the proposed variables for GUI test automation. This was conducted as part of EBSE step 4. To achieve our goal, the raw results of the literature review, before categorization through thematic analysis, were presented to the practitioners working within the two subsystems. A tabular format was used to present the evidence list of criteria

for when to automate, testability requirements and factors affecting waste. The practitioners provided information regarding each criterion or factor and mentioned whether they think it impacts decisions regarding GUI-based automation testing. Moreover, the practitioners were asked to state the validity of each item in their context and also whether they are implementing it. The results of the discussion are represented in Figures C.1, D.1 and E.1 in Appendices C, D and E. The differences in opinions were mainly due to the varying roles and the subjective choices of the practitioners. We arrived at the consensus results depicted in Figure 4.7 after keenly observing patterns in the raw data set and discussing the results with the practitioners. The raw data are presented in the Appendix and the aggregated results in the main document.

Figure 4.7: Practitioners' consensus results.

As seen from Figure 4.7 (consensus column), the variables for criteria and testability requirements are grouped into three scenarios: 1) the variable is valid and followed in the subsystems (A, X); 2) the variable is valid and partially followed in the subsystems (A, Y); 3) the variable is valid but not followed in the subsystems (A, Z). The value partially followed (Y) implies that the variable is followed but its application can be improved. For instance, the criterion product stability is assigned (A, X), as the teams evaluate the stability of the product prior to beginning test automation. Selection of the right automation tool is assigned (A, Y) because the black-box tests developed using the Selenium tool take a long time to execute; the practitioners see a need to adopt a more efficient tool. Dedicated and skilled automation team is assigned (A, Z), as the practitioners feel it is beneficial to have a separate team for GUI test automation, both for managing the test environment and for taking responsibility for the GUI test cases. However, it is assigned the value Z because there is no such team

in the current scenario, as the developers and testers work as a cross-functional team. A dedicated team will soon be formed, considering its importance. The factors associated with waste are only represented as valid in the subsystems (A) or valid in general but not in the subsystems (C). The factor loops and branches has a consensus value of C, indicating it is not applicable to waste in the subsystems while being valid for GUI test automation in general. The variables whose assigned value (from Figures C.1, D.1 and E.1) is A (valid in the context) or B (partially valid in the context) are considered for calculating the level of validity, while those with values C (valid in general but not in the context), D (not valid) and E (unsure) are not. None of the variables had a consensus value of D or E, showing that all the variables obtained from the literature (generic and GUI-specific) are applicable to GUI automation testing in the subsystems. The validity level of each criterion, testability requirement and maintenance factor from the literature review results was calculated based on the opinions and experiences of the practitioners and is illustrated in Figure 4.8. The probability is calculated as a percentage: the number of practitioners stating a variable to be valid (weight 1) or partially valid (weight 0.5) over the total number of interviewees (12).
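The weighting rule behind Figure 4.8 can be made concrete with a minimal sketch. The vote counts passed in below are illustrative only, not taken from the study data:

```python
# Sketch: validity level of a variable (Figure 4.8). Each "valid" answer
# weighs 1, each "partially valid" answer weighs 0.5, and the weighted
# sum is divided by the 12 interviewed practitioners.

INTERVIEWEES = 12

def validity_level(n_valid, n_partially_valid):
    """Validity level as a rounded percentage of the 12 interviewees."""
    weighted = n_valid * 1.0 + n_partially_valid * 0.5
    return round(100 * weighted / INTERVIEWEES)

print(validity_level(10, 0))  # 83: ten fully "valid" answers
print(validity_level(6, 6))   # 75: "partially valid" answers count half
```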

Figure 4.8: Validity level.

As part of EBSE step 4, the Bayesian synthesis method is used to synthesize the results of the literature review and of phases 1 and 2 of the interviews. It provides a meaningful interpretation of the outcome in the study context by utilizing the experience, knowledge and opinions of the practitioners who will be applying the researched aspects of GUI test automation. The results of the method are described in the following section.

Chapter 5

Data synthesis and analysis

This section provides an overview of data synthesis and analysis for the research study.

5.1 Data synthesis

As mentioned in Section 3.3, we employed a Bayesian approach to synthesize research-based evidence and utilize the knowledge and experience of industry practitioners. From the pre-existing subjective beliefs of 12 practitioners, we generated the criteria for when, the testability requirements and the factors affecting waste, which were categorized into common descriptive themes (categories) as presented earlier. The opinions of the practitioners were combined to yield the prior probability of each criterion or factor, representing its level of validity in the context. The prior probability was calculated as the percentage of interviewees who mentioned a criterion or factor as valid and important in their context, from Figure G.1 in Appendix G. For instance, the criterion resource availability is assigned a prior probability of 42%, as 5 out of 12 interviewed practitioners mentioned it in the first phase of the interview as a criterion which needs to be fulfilled prior to beginning automation. Next, relevant variables were extracted from the literature and categorized using the same themes obtained from the interviews, with a few new themes added. The likelihood of each item was calculated as the percentage of studies reporting it to be true, as shown in Figure 5.1. The likelihood of resource availability is 29%, as 2 out of the 7 studies selected for the criteria for when report and describe it as important. Lastly, the posterior probability was calculated by combining the practitioners' opinions with the research-based evidence: the proportion of practitioners stating that a criterion or factor identified from the literature is valid in their context is reported as the posterior probability. The criterion resource availability is assigned a posterior probability of 83%, as 10 out of 12 practitioners claimed the variable to be valid in the subsystems on viewing the literature evidence. The prior, likelihood and posterior probability of each criterion or factor is presented in Figure 5.1 and in Figure G.1 in Appendix G.
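The three numbers quoted for resource availability follow directly from the counts given in the text; a minimal sketch reproducing them (the helper name `pct` is ours):

```python
# Sketch: the three probabilities reported for "resource availability"
# in Section 5.1, each a simple proportion rounded to a percentage.

def pct(count, total):
    return round(100 * count / total)

prior      = pct(5, 12)   # 5 of 12 practitioners named it in phase 1
likelihood = pct(2, 7)    # 2 of the 7 selected studies report it
posterior  = pct(10, 12)  # 10 of 12 practitioners after seeing the evidence

print(prior, likelihood, posterior)  # 42 29 83
```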

Figure 5.1 lists, for each variable: S.no, category, code, prior probability, likelihood and posterior probability.

Criteria for when
1. SUT-related — Product stability: 17%, 43%, 67%
2. SUT-related — Availability of developed GUI: 58%, NA, 58%
3. Test-process related — Define scope of the automation: 100%, 57%, 100%
4. Test-process related — Well-defined test process: 75%, 29%, 100%
5. Test-process related — Test data availability: 33%, 14%, 75%
6. Test-process related — Develop test case skeletons: 42%, NA, 42%
7. Test-tool related — Selection of the right automation tool: 58%, 57%, 83%
8. Human and organizational — Dedicated and skilled GUI build & automation team: 33%, 29%, 42%
9. Human and organizational — Resource availability: 42%, 29%, 83%
10. Human and organizational — Feasibility assessment: NA, 43%, 83%
11. Cross-cutting — Testability level of the SUT: 42%, 57%, 92%
12. Cross-cutting — Compatibility of the development process: 17%, 14%, 50%

Testability requirements
1. SUT-related — Effective log storage management: 25%, 71%, 75%
2. SUT-related — Use of unique class names: 42%, 43%, 63%
3. SUT-related — Usage of diagnostic techniques: 25%, 57%, 75%
4. SUT-related — Define clear product requirements: 50%, NA, 50%
5. SUT-related — Use of GUI development framework: 58%, NA, 58%
6. SUT-related — Synchronous behavior: 58%, NA, 58%
7. SUT-related — Use of minimal number of anonymous functions: 25%, NA, 25%
8. SUT-related — Usage of monitoring techniques: NA, 57%, 75%
9. SUT-related — Usage of fault injection techniques: NA, 43%, 54%
10. Test-process related — Following proper test scripting standards: 17%, 29%, 92%
11. Test-process related — Identifying and converting common steps into functions: 25%, 29%, 92%
12. Test-process related — Clean-up between test runs: 42%, 14%, 92%
13. Test-process related — Perform manual testing prior to automation testing: 50%, NA, 50%
14. Test-tool related — Automation tool documentation/support: 17%, NA, 16%
15. Test-tool related — Support for custom controls: NA, 57%, 75%
16. Environment — Automatic installation and configuration: NA, 29%, 100%

Factors associated with waste
1. SUT-related — Number of GUI-interaction components: 16%, 14%, 92%
2. Test-process related — Broken test scripts: 33%, 29%, 75%
3. Test-process related — Structure of test suite architecture: 42%, 29%, 88%
4. Test-process related — Sleep functions: 58%, NA, 58%
5. Test-process related — Variable names and script logic: NA, 29%, 100%
6. Test-process related — Test case consistency: NA, 29%, 92%
7. Test-process related — Test case length: NA, 14%, 100%
8. Test-process related — Loops and branches: NA, 14%, 0%
9. Test-process related — Testing of test code: NA, 29%, 50%
10. Test-tool related — GUI test automation tool: 50%, 29%, 100%
11. Human and organizational — Knowledge and experience: 50%, 86%, 100%
12. Human and organizational — Feedback from developers: 8%, NA, 8%
13. Human and organizational — Attrition of GUI developers: 42%, NA, 42%
14. Environment — Simulator support: NA, 29%, 54%
15. Cross-cutting — System infrastructure: NA, 14%, 92%

Legend: variables with both a prior and a likelihood were found from both the literature review and practitioners' opinions; variables with likelihood NA were found from practitioners' opinions alone; variables with prior NA were found from the literature review alone.

Figure 5.1: Synthesis results.

As observed from Figure 5.1, for most of the variables the posterior probabilities have substantially increased over the prior probabilities. This shows that the practitioners' pre-existing subjective opinions shifted considerably and became more consensual after reviewing the research-based evidence. Some new variables influencing the when decisions, testability requirements and waste in GUI test automation, and valid in the context, were identified. These were not present in the practitioners' initial viewpoints, indicating the role of research-based evidence in channeling their pre-existing opinions. Some of the variables were already known and practiced by the interviewees; however, the research-based evidence helped to streamline their beliefs and present their opinions concretely. Furthermore, 11 variables identified in the prior distribution were not found in the body of literature. The reason is that we conducted a targeted literature review, and these variables could not be identified from the literature within the scope of the research questions.

5.2 Data analysis

This section analyzes the results obtained in the previous section to explicitly address the three research questions. It provides a description of the criteria for when, the testability requirements and the factors affecting waste in GUI test automation.

Criteria for when

During data synthesis, we identified a total of 12 criteria based on the discussion presented earlier. Out of the 12 criteria, 9 were obtained from both practitioners' opinions and the literature, 2 were identified solely from practitioners' opinions and 1 from the literature alone. For the criteria product stability and testability level of the SUT, the inclusion of research-based evidence substantially increased the posterior probability, indicating higher validity levels. Defining the scope of the automation, selection of the right automation tool and the testability level of the SUT were identified as the most important criteria from the literature and are also highly valid, as can be seen from the posterior probabilities. The theme test-process related contains the maximum number of codes, i.e., 4 criteria; the remaining codes are equally distributed among the other themes. Moreover, the likelihood, prior and posterior probabilities of 2 criteria of the test-process related theme are very high and are at a medium level for the other 2. This shows the high validity of the theme and its corresponding codes, signifying its importance. Each criterion is described in detail with respect to each of the categories shown in Figure 5.1.

SUT-related

Product stability: Product stability refers to the ease with which new functionality can be added without disturbing the existing functionality [8]. One of the architects mentions: "If the product is not stable, you may have changes affecting all parts of the GUI, I think you should not automate at all". However, one of the interviewees has some concerns regarding stability: "This is debatable. Our GUI started one and half years ago. In the initial stages it is known that more functionality will be added. When new functionality was added, we were erasing complete automation and doing from scratch. It was not at all stable". Another architect has a similar view: "It is not like if the product is not stable, I'm not gonna write any tests. Product stability is vital anyway". We can infer that considering product stability is indeed important, but it should not stop one from automating; it depends on the impact the change has on the product. If the change is small, one can continue with automation and simultaneously resolve the issue. Moreover, the increasing demand to deliver new functionality and time constraints have forced the developers to adopt automation even when stability issues persist. One of the developers says: "If you want to automate the tests, you need to at least identify potential stability issues. We know that if we change the class on something then we often have a chain of events that will fail a test". Hence, it is important for the practitioners to be aware of stability problems and address these issues.

Availability of developed GUI: "We can't really write the GUI tests without actually having something to write it on. So we develop first and test when it's done". A test responsible says: "When we write Selenium test cases the GUI and design need to be finished before we actually can write tests". For BIT tests (integration testing), the test cases can be written in parallel with development, while Selenium tests (black-box testing) require a finished GUI for automation. However, it is beneficial to explore and create basic test case skeletons prior to automation. This indicates that the validity of the criterion varies for different testing levels and increases with the level of testing.

Test-process related

Define scope of the automation: Before beginning automation testing, it is crucial to define the scope or required coverage of the automation tool [8]. It describes which features should be tested at which level. "It's good to set the scope so you know what to test where, else you will test too much or too little". In the current subsystems the scope consists of the parts to be covered by automated tests for BIT, Selenium and unit testing.

Well-defined test process: The testing process should be developed and

63 Chapter 5. Data synthesis and analysis 53 refined by the teams. The same process should be used by all the teams to ensure success with test automation."we have a test process and we follow that. It is important in which level you are supposed to do this test. We also have the test lead to go and talk to and reason about where to put things". Test data availability: Availability of test data facilitates the developer to focus on developing the GUI framework around the automation test environment. A test lead outlines the importance of test data as"we create a test case for GUI and using that test data and enter into the GUI and see that the outcome is as we expect so that we register everything and activate the product that we are testing" Test-tool related Develop test case skeletons: The final GUI test solution keeps changing and is rarely understood initially. Though the final test solution cannot be obtained, it is beneficial to develop a test case skeleton which puts together the main architectural components. The test cases can be evolved subsequently as more concrete requirements are found. This helps produce robust test cases and ensures timely delivery of the software system with quality code and reduced costs. A developer mentions, "When GUI test scripts are written in later stages, you discover issues with the GUI late in development. You get panic fixes in a later stage where we do quick fixes one day before the release. To avoid this, we can actually do it in a proper way and have a reliable way of working". Selection of the right automation tool: Though an additional effort is required to find and evaluate automation tools available in the market, it reaps benefits in the long run as compatible tools create tests effectively and quickly. According to a test architect, they have carefully chosen the automation tools required for the product. " We had directive to use a proprietary development platform for GUI development. 
To be used with the development platform for end-to-end testing, Selenium was an available option. Though Selenium tests are unstable and practitioners wanted to have everything with BIT, it is not a good approach, as you need to have a balance; you need to get end-to-end tests as well. But at the time, the only available option was Selenium". It is important to choose the right tool from the beginning, because otherwise one might spend a lot of time changing the tool in the middle of an ongoing project.

Human and organizational

Dedicated and skilled GUI build and automation team: In both subsystems, development and testing are done by the same team, as the activities are connected to each other. "I want the developers who are developing to actually write the tests as well, and we do manual testing". However, a few practitioners believe that it is beneficial to have a separate team focusing on GUI automation testing. As GUI development is relatively new in the telecommunication industry, not all practitioners are proficient at GUI design and testing. "Having a team for managing the GUI automation is a good idea".

Resource availability: Organizations must identify whether the necessary resources are available for automating the tests. "There are estimates being done every quarter to see how many teams we are, how much hardware do we need. And that is followed up continuously". However, it is difficult to predict a change in project scope, which makes planning in terms of resources problematic. To avoid such uncertainty, it is good practice to perform an initial resource check before automating the tests and to acquire additional resources if there is time pressure to deliver quickly. For instance, a GUI usually requires many tests, and the tests run for a long time. If a large number of automation tests need to be written in a stipulated amount of time, it is advisable to gather more resources. "We can't have a lot of people in the beginning. But if a lot of tests need to be written, we ask management for more resources".

Feasibility assessment: One must evaluate whether there is a need to automate something at all. For instance, if a very old product is not automated but has been on the market for several years with a low rate of faults, one must evaluate whether it is worth the cost to automate tests which most likely will not find anything. "Is it worth the money to automate tests which most likely not find anything? So, those kinds of evaluations, we always do. We always do an evaluation if there is a need to automate something or not".

Cross-cutting

Testability level of SUT: To achieve higher levels of testability, it is important to keep testability in mind from the beginning of development, so that the software can be designed to make test automation easier [19]. With increasing demand for new functionality and growing maintenance issues, improving testability greatly reduces the cost of testing [19]. "Testability has been very low prioritized, which has made the automation process a bit of a struggle". Focusing on testability from the earliest phases of the software life-cycle improves the effectiveness of the automation process and helps practitioners keep the test effort under control. "We have learned that we should take testability into consideration during the study phase". One of the test architects describes the need for testability: "We have failing regression cycles which might waste an hour or two if you want to get a package out. Going forward that would be the mitigation plan, to see how we can get the testability parts into it". Hence, without testability it is not possible to perform test automation in a stable manner, and it will demand
for workarounds to fix the issues, resulting in wasted time and effort.

Compatibility of the development process: In the cases studied, an agile methodology is followed for the software development process. Agile methods provide significant advantages in terms of suitability and correctness, because one can evaluate progress and correct the development at specific stages in the middle of an ongoing project. Thus, it is easier to track and resolve persisting errors, thereby increasing the testability of the software [79]. "GUI needs to be implemented in a way so that it can be tested. If we re-introduce something that makes it impossible to test, we have a way to change that during the development in agile processes. We can go back and re-do requirements".

Testability requirements

During data synthesis, we identified a total of 16 testability requirements based on the discussion presented in the preceding sections. Out of the 16 testability requirements, 6 were obtained from both practitioners' opinions and literature, 6 were identified solely from practitioners' opinions, and 4 from literature alone. The posterior probabilities of most of the testability requirements increased substantially after reviewing the literature evidence. This is because the literature helped the practitioners streamline their thoughts and provided a context in which to claim the testability requirements as valid. It can be observed that practitioners mentioned several testability requirements which were not identified in the targeted literature review. A new theme, environment related, was identified from literature, and its code automatic installation and configuration was claimed to be valid in the context by all the interviewed practitioners, indicating its importance. Berner et al. [22] mention that, when considering testability, the design of the software is highly important. The same trend can be observed in our findings. Most of the testability requirements (9 codes from literature and practice) fall under the theme SUT-related. Moreover, the practitioners' opinions introduce 4 new codes under SUT-related. This pattern indicates the high significance of the theme and its codes when implementing testability. Each testability requirement is described in detail with respect to each of the categories shown in the figure.

SUT-related

Effective log storage management: While designing the logs, it is important to verify that they contain enough information for another practitioner to understand the problem. One of the test leads mentions, "I think it could be clearer messages. If there is a configuration file error then it should say what configuration file and, in best case, what place in the configuration". For GUI test automation, it is advantageous to have snapshots indicating the parts of the GUI where an error occurred. The visuals, coupled with console logs, increase the effectiveness of the logs and help testers identify the error quickly. Thus, the effectiveness of the logs can always be improved depending on practitioners' needs.

Use of unique class names: Unique class names ensure that the automation tool can identify the right component when multiple components with the same identifier exist. For instance, if certain GUI components use the same class name, identifying the required component for testing becomes more complex, because the same class name is assigned in several places and tests often fail by detecting the wrong element. "It's hard to find the element that you need on the page if you have the same class name in a lot of places". Hence, using unique class names for the GUI components within the same section of the code produces robust test cases. "Using unique names reduces complexity a lot, and I believe that tests become more robust".

Use of diagnostic techniques: Assertions are used to perform such diagnosis in testing. "For every step we do, we assert to check if it is having correct values". Moreover, open-source tools are available which make the diagnosis easier. "We are using Chrome dev tools, and without the tool we wouldn't be able to debug GUI tests. You can always have more tests, but a tool still makes life easier". Such diagnosis is beneficial for all types of automation tests, and in particular for GUI automation testing. Several frameworks and assertion libraries exist for developing GUI test cases that diagnose whether the GUI elements return the expected behavior.

Define clear product requirements: As mentioned by a test architect, "To be testable, that could imply that the requirements are extremely clear and not fluffy so we know what we are supposed to verify; that is one kind of testability". A similar experience was presented by a test lead: "You don't want to have a requirement that is a one-liner. You need to know if you are able to develop tests that you can actually check that it is true or not". It is vital for product requirements to be clear, to ensure that the system is testable and can be verified efficiently by the tests.

Use of GUI development framework: The framework used for developing the GUI should itself be testable. A test architect mentions, "Selenium isn't that compatible with the GUI framework developed in the company, and hence we are trying to push our tests down to its integration tests. We seek more testing at that level and then just keep basic end-to-end flows on a black-box level, just to minimize the incompatibility". For higher testing levels, such as end-to-end testing in black-box tests, external automation tools might be required. To ensure compatibility of the development framework with the automation tool, the GUI and its underlying framework should be developed with testability in mind.
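The unique-class-name requirement above can be illustrated with a small locator helper. This is a hypothetical sketch, not code from the studied subsystems: the "DOM" is faked as an array of element descriptors, and findByClassName is an invented name.

```javascript
// Hypothetical strict locator: fails fast on missing or ambiguous class
// names instead of silently picking the first match, so a test fails for a
// clear reason rather than by interacting with the wrong component.
function findByClassName(elements, className) {
  const matches = elements.filter((el) => el.classNames.includes(className));
  if (matches.length === 0) {
    throw new Error(`No element with class "${className}"`);
  }
  if (matches.length > 1) {
    throw new Error(`Ambiguous class "${className}": ${matches.length} matches`);
  }
  return matches[0];
}

// A fake page: two buttons share the generic "btn" class, but each also
// carries a unique class of its own.
const page = [
  { id: "saveButton", classNames: ["btn", "save-product-btn"] },
  { id: "cancelButton", classNames: ["btn", "cancel-product-btn"] },
];

console.log(findByClassName(page, "save-product-btn").id); // "saveButton"
```

Locating by the shared "btn" class would throw, which is exactly the robustness property the practitioners describe: a unique name resolves to exactly one element.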

Synchronous behavior: Synchronous behavior is a mechanism for knowing whether the system is in a ready state after all asynchronous events have completed. Once the events are processed, the system should signal that it is ready for automation, without its behavior having to be checked manually. One interviewee mentions, "We want synchronous behavior, where all the asynchronous events have been propagated to the system. We need a way to know if the system is ready. The configuration that I sent you is processed already, so that I can run my test. And that is extremely difficult to achieve". A prepared and ready system state ensures the GUI tests run without timeout issues.

Use of a minimal number of anonymous functions: Minimizing the number of anonymous functions in GUI code is beneficial, as it makes test automation less difficult. Their use can be reduced by lifting a callback into a named, declared function. It is easy to test or mock such a function, rather than testing a large function just to exercise a small anonymous in-line function for its behavior. A developer mentions, "I try to minimize the use of anonymous functions, for instance, since they can be really hard to test".

Usage of monitoring techniques: In the current subsystems, for instance, a trace mechanism is implemented which allows breakpoints to be added in the code and its flow to be followed. Such techniques are useful for detecting stability, failure rates, and execution times, which is extremely important from a GUI test automation perspective. "We have trace. From a GUI black-box perspective you can activate trace and follow what's happening".

Usage of fault injection techniques: Fault scenarios cannot be simulated at the black-box level, while it is possible at integration levels. "There is no way to fake real errors in a black-box system. We can't fake that, but we are mocking that in previous test cases, like in unit and integration tests, to simulate those fault scenarios".

Test-process related

Following proper test scripting standards: Clear guidelines should be created and followed for writing robust test cases. Open-source tools are available which check code conventions and raise an alert when source or test code is not written in the prescribed way. "Sonar is a tool that does code analysis which you can configure with the required rules. It will give a critical warning, so it's a very good tool in making sure you write decent code". However, a test architect mentions that for GUI test automation, "Sonar doesn't help in creating robust and maintainable test cases, as it is a static code analysis tool". Thus, it is essential to create guidelines and a practical approach for writing robust test scripts.

Identifying and converting common steps into functions: Achieving modularity by converting common code into functions is a standard coding practice in development. It is important to extend the same principle to generating test cases. Common testing steps from several test cases should be found and transformed into functions, which can be placed in separate files. This facilitates re-usability of code, in turn reducing the maintenance effort. It is highly beneficial for GUI test automation, as GUI components are re-used in several places, and using functions reduces the required testing effort. "While writing Selenide tests, there are class names or widgets that can be common for many pages. So we suggest to have these widgets as a common one. We will write one method for the widget, and we can take the same thing for all the test cases".

Clean-up between test runs: In GUI testing frameworks, test cases are often dependent on each other, and modifications in one test case might affect the execution of subsequent test cases. For instance, one interviewee mentions, "In Selenide, sometimes when we have one failing test case, the following test cases will also fail as they can't find where to click, or they click in the wrong places. That's a problem in all GUI test frameworks". Moreover, manually resetting the data between test runs is problematic, since every step needs to be repeated and executed again. Thus, it is extremely important to automatically clear or reset data between test cases after each run.

Perform manual testing prior to automation testing: Owing to the nature of GUIs, which change frequently, this testability requirement provides an opportunity to understand the behavior of the GUI and implement the same behavior when generating automation test cases. Moreover, such a strategy enables errors that might cause GUI automation test cases to fail to be detected and resolved early in the development cycle. Testing manually also saves time in the long run, as the product will be more stable and contain fewer errors due to early testing. A test lead mentions, "Since it is GUI we cannot always do automation first; we start testing manually and then automation".

Test-tool related

Automation tool documentation/support: To perform test automation efficiently and accurately, it is necessary to refer to the documentation of the tool. This ensures that the full capability of the automation tool can be consulted and applied, making automation testing more organized and easy. A test lead claims, "I think the best part of the tool is that it is well documented
and it's easy to find others that use it, so you can get help when you get stuck in it". Moreover, access to user groups and forums holding discussions on the tool helps the tester identify a quick solution in case of a problem. This is important for making GUI automation tools work with the system being tested.

Support for custom controls: Custom controls can be widgets or text boxes which, once customized, must still be detected by the tool. This is a common problem faced by several GUI automation testers, as perceived from literature [23]. Sometimes the tools need to be configured to recognize the custom controls [23]. In the current subsystems, the tools can be configured to detect custom controls. "It will show up in the DOM as anyone else, so if you write test cases for it, if you say look for this button, it will find it". Moreover, plugins can be developed for the automation tools to improve their capability of recognizing different types of GUI components.

Environment

Automatic installation and configuration: This ensures that the software being tested is installed automatically, using install scripts, on several machines with varying configurations. Providing such install scripts enables practitioners to focus mainly on the development of automation test scripts and saves time by not configuring software manually. "This is very important, otherwise development would be standing still if we didn't have automation and config automatically".

Factors associated with waste

During data synthesis, we identified a total of 15 factors based on the discussion presented in the preceding sections. Out of the 15 factors, 5 were obtained from both practitioners' opinions and literature, 3 were identified solely from practitioners' opinions, and 7 from literature alone. The posterior probabilities of most of the factors increased substantially after reviewing the literature evidence, for the same reason as mentioned above. The factor loops and branches, identified in literature as influencing waste in GUI test automation, was found not to be applicable in the context, as loops and branches are not widely used in writing test cases within either subsystem. Feedback from developers was mentioned as valid by a single interviewee, indicating a very low posterior probability. Hence, these variables have been omitted from further discussion and are not considered highly valid in the context being studied. The literature facilitated identification of 2 new themes, environment related and cross-cutting. The corresponding codes, simulator support and system infrastructure, were validated as applicable to GUI test automation by practitioners in both subsystems. For factors associated with waste, most of the codes (8 factors from literature and practice) fall under the test-process related theme. This indicates the importance of the theme over the others, and it should be considered for reducing unnecessary maintenance of GUI test scripts. The remaining codes are uniformly distributed across the other themes. Each factor is described in detail with respect to each of the categories shown in the figure.

SUT-related

Number of GUI-interaction components: As the number of GUI-interaction components increases, the unnecessary effort required for maintaining the test scripts increases. It is beneficial to re-use GUI components to decrease programming effort and achieve the same internal structure across the product. The functionality of a component can then be verified by testing it in one particular place, as the same internal structure is followed. Though this facilitates a unified way of development, customizing a component in one particular place affects the maintenance of GUI automation test cases, as the same component is used in several other places. A product owner mentions, "All the GUI pages have some general components. But most of the pages also have specific components, and adding these breaks the existing automation test cases. So maintaining GUI automation test cases is a bit hard, as GUI should look the same way while having specific things".

Test-process related

Broken test scripts: Even a small change in the code can break a huge number of tests, and repairing broken test scripts is a time-consuming activity. Ideally, broken tests covering valid functionality need to be repaired. The problem of broken test scripts can be dealt with in a smart way by appropriately structuring the test suite architecture to reduce maintenance effort. "If you are changing code, you are breaking the old functionality. Then you should update it as you have made a change. If it's an expected change, the implementation must be done in a smart way to reduce the effort to update the tests".
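The "smart structure" the interviewee describes is commonly achieved by routing repeated interactions through shared step functions, so an expected GUI change is repaired in one place rather than in every script. A minimal sketch, with an invented recording driver standing in for a real Selenium or Selenide driver:

```javascript
// A fake driver that records interactions instead of driving a browser.
function makeDriver() {
  const actions = [];
  return { click: (selector) => actions.push(`click ${selector}`), actions };
}

// Shared step used by many test cases. If the navigation selectors change,
// only this function is updated, and every test that uses it is repaired
// at once.
function openProductList(driver) {
  driver.click("#main-menu");
  driver.click("#products-entry");
}

function testCreateProduct(driver) {
  openProductList(driver);
  driver.click("#new-product-btn");
}

function testDeleteProduct(driver) {
  openProductList(driver);
  driver.click("#delete-product-btn");
}

const d = makeDriver();
testCreateProduct(d);
console.log(d.actions); // ["click #main-menu", "click #products-entry", "click #new-product-btn"]
```

All selector names here are illustrative; the point is that the shared step, not each test, owns the knowledge of how to navigate the GUI.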
However, if the fix is highly complex, or if the test covers functionality which does not exist anymore, the test should be eliminated from the suite [77].

Structure of test suite architecture: A poorly structured test suite requires additional effort for maintaining the test scripts. Previously, one test file was created for each use case in the subsystems. This led to a huge number of test files residing in one directory. Whenever the GUI changed, the test cases broke, and one had to edit each of the affected test files to resolve the issue. Moreover, from a maintenance point of view, it is difficult to track down the defective file among so many. "It is difficult to understand the fix and the workaround implemented. All the files that are lying around everywhere. It's a big problem. There was no real structure when coding began". Architectural decisions must be taken after careful analysis and investigation. It is essential to produce standardized guidelines and document the structure of the test suite architecture.

Sleep functions: GUI automation testing differs from other types of automation testing in that the tests must be synchronized with the timing of the system being tested [13]. Thus, sleep functions are introduced in the scripts to catch the GUI element while executing the tests. The time parameter of sleep functions demands heavy development as well as maintenance effort in the test scripts. A developer describes the issues with sleep functions: "If you assume the page to load in 5 s, it might load in 5 ms. This increases execution time. It's important to do a continuous check to find the apt time. 1 s might work in most of the cases. But when the computer slows down, it results in blinking test cases". Sleep functions increase test case execution time and cause timeout issues, resulting in wasted maintenance effort. To mitigate this problem, it is advisable to develop the test scripts such that the tests wait for a GUI event.

Variable names and script logic: It is good practice to have a concise test script architecture and follow naming conventions for variables and methods. "You need good variable names and it should be readable". A design lead says, "The names affect very much. A continue button is named color blue. If we change that name, then we need to maintain that in the tests. The color blue is a bad name. It should better be continue button instead". Moreover, it is advisable to implement common class names and widgets used in several places as a single method, which can be used to test the common code elements. Thus, the complexity of the test scripts can be kept minimal.

Test case consistency: "If you have a maintenance team trying to execute the tests, and then we have a part that changes, and if they don't follow the same cycle, you will have increased maintenance costs". It is advantageous to implement a continuous integration framework which runs the tests every time code is pushed or committed. If the tests pass, the indicator should turn green; if not, red. Thus, the test cases should be run consistently, several times a day, to reduce their maintenance problems.

Test case length: "I'm a firm believer that all code should be short. It should fit on the screen so you can see it without scrolling". A large number of nested elements leads to long test cases, which can be hard to read and in which it is hard to find what is actually being tested. Moreover, in the case of integration and black-box testing, the test cases are usually long, since they need to test the entire GUI, while unit tests can be kept short. "We want to keep them short, but Selenium and integration tests are long as they need to go through the entire GUI. But others, like unit tests, are very short".

Testing of test code: "We found one test case last week where we thought
the test tested correctly but it didn't. It passed all the time. So it's important to test the test code". However, a few practitioners disagreed with this factor and saw it as a waste of effort. A consensus was reached that the test cases should at least be tested superficially. One possible solution is to run through the test case manually and verify that it works. "Run the test case manually. If there is a statement in the code then you need to make sure you have tested it manually".

Test-tool related

GUI test automation tool: A wrong choice of automation tool has a heavy impact on both the development and the maintenance of test scripts [13]. Using older versions of the automation tool is a liability, due to the tool's restricted support for new programming languages and features. For example, in the case being studied, practitioners had issues with the older version of the tool, since certain features were not available or not supported by it, although the fix had already been released in the new version. "Selenide doesn't support HTML5 drag and drop. There are workarounds, but not for the old Selenide version that we are using". Limitations in the automation tool will hamper test script development and maintenance.

Knowledge and experience: "We didn't have much experience with GUIs, as it is new for a telecommunication company. We assumed that, hiring a lot of young people with good knowledge into the product, they might have good experience in creating GUI automation tests. It turned out to be extremely not true". Practitioners lacking knowledge and experience tend to develop poorly structured GUI frameworks and unstable test scripts. "We should have taken the time to create basic frameworks and hired someone with great GUI competence to get the basics up. This would have helped us gain knowledge on developing maintainable suites". Thus, it is advisable for domain experts to perform GUI test automation.

Human and organizational

Attrition of GUI developers: A regular change of practitioners causes difficulty in re-assigning responsibility to others possessing knowledge of the code. Knowledge transfer becomes a tedious process when the person responsible for a specific part of the GUI is unavailable. According to the experience of a design lead, "Someone writes part of code and then quits. No one really owns that part anymore". Thus, maintenance effort is aggravated by the high attrition rate of practitioners working in the industry. Another interviewee mentions, "One guy developed major functionality and later quit the company. It's badly written and nobody takes the responsibility to improve it". Organizations generally recruit consultants to assist in development in order to cope with time pressure, but their fixed contracts hinder the continuity of GUI development and test automation, causing additional maintenance problems.

Environment

Simulators: Simulators are generally shared between teams in organizations to emulate certain parts of their environment. Thus, if there is any update or modification in the original environment, the same must be done in the simulated environment as well. "It needs to work with whatever functionalities are required of it, so it's a maintenance cost". This demands additional effort for updating the relevant test scripts. "As we are using different simulators, when there are any changes in the actual code, we need to update those as well".

Cross-cutting

System infrastructure: A product owner claims, "If we switch from Ymer to AngularJS it will affect maintenance, because we need to update a lot of GUI tests". If a technology change is introduced in the middle of an ongoing project, it requires an additional investment of effort to adapt to the change. For instance, one interviewee mentions, "When the GUI was introduced, Java designs were used, but later we switched to JavaScript (JS). As not many had JS competence, the language in itself has been a challenge. Writing the corresponding automation tests is a bigger challenge".
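One common mitigation for this kind of infrastructure change, not one described by the interviewees, is to keep test scripts behind a thin adapter so that a framework switch means rewriting the adapter rather than every test. A hypothetical sketch in which both "toolkits" and all names are invented for illustration:

```javascript
// Two fake UI frameworks with different APIs, standing in for an old and a
// new GUI toolkit.
const oldToolkit = { triggerButton: (name) => `old-toolkit pressed ${name}` };
const newToolkit = { press: (name) => `new-toolkit pressed ${name}` };

// Each adapter exposes the same stable interface to the tests.
function makeOldAdapter() {
  return { pressButton: (name) => oldToolkit.triggerButton(name) };
}
function makeNewAdapter() {
  return { pressButton: (name) => newToolkit.press(name) };
}

// A test written against the adapter interface survives the toolkit switch
// unchanged; only the adapter has to be rewritten.
function runSaveTest(ui) {
  return ui.pressButton("save");
}

console.log(runSaveTest(makeOldAdapter())); // "old-toolkit pressed save"
console.log(runSaveTest(makeNewAdapter())); // "new-toolkit pressed save"
```

The design choice is a trade-off: the adapter is one more layer to maintain, but it concentrates the cost of a technology switch in a single place.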

Chapter 6

Discussions and limitations

Section 6.1 outlines the summary of findings and inferences of our study. The potential threats to validity, and the corresponding strategies to mitigate or minimize them, are discussed in the section that follows.

Summary of findings

We identified 12 criteria for when to begin, 16 testability requirements, and 15 factors associated with waste in GUI test automation. From Figure 5.1 it can be observed that the posterior probabilities of all the variables identified from literature, except loops and branches, increased significantly over the prior probabilities. This factor has a likelihood of 14% in the literature base, while its posterior probability was reduced to 0%. The factor is applicable to waste in GUI test automation in general, but not in this context, as loops and branches are avoided when developing GUI test cases. The practitioners performing GUI test automation were already following most of the evidence-based variables, as inferred from Figures C.1, D.1, and E.1 (refer to Appendices C, D, and E). However, the Bayesian synthesis approach, which focused on combining the subjective opinions of the practitioners with research-based evidence, produced results specific to the subsystems. As every practitioner has their own way of working (interpreted from the opinions of practitioners described in Section 5.2), the variables mentioned are subjective. It is beneficial for all the practitioners to have a common understanding of how decisions are made, so that the subjective choices they make are more informed. This method facilitated validation of the literature evidence in the telecommunication domain and made industry practitioners better informed regarding various aspects of GUI test automation. It builds confidence among the practitioners that the GUI test automation process being followed, and the decisions being taken by them, are on par with the literature. Additionally, some new variables influencing GUI test automation were identified from the subjective opinions of the practitioners which were not found in literature. There is a lack of literature with respect to criteria for when to begin, testability requirements, and factors associated with waste specific to GUI automation testing. Most of these variables were identified because the study focused on GUI automation testing and the subsystems were relatively new to the field of GUI development and testing. Such a context facilitated the identification of new variables. The posterior probabilities are only indicative of the validity level of the criteria in the context and do not represent their importance. We believe that in every case where GUI test automation is performed, it is beneficial to understand, consider, and rank the criteria to obtain their importance in that specific context.

RQ1. When is it suitable to begin GUI test automation?

The 12 criteria for when to begin GUI test automation are classified into five categories, namely SUT-related, test-process related, test-tool related, human and organizational, and cross-cutting criteria. 10 criteria were identified from literature, while 11 were obtained from the subjective opinions of the practitioners, as described in Sections 4.2 and 4.3. The final set of context-specific criteria presented in Figure 5.1 was then obtained as an outcome of the Bayesian synthesis approach [55]. The entire set of criteria for when to begin identified in literature was found in the context of software test automation in general. Through this study, these criteria were found to be valid and important for GUI-based automation testing as well. As the category test-process related contains the maximum number of criteria, it signifies its importance over the other categories. The inclusion of research-based evidence modified the posterior probabilities of the variables under study. The posterior probabilities of product stability and testability level of the SUT increased substantially on presenting the literature to the practitioners.
The criteria possessing higher frequency of occurrence in research papers such as defining scope of the automation [7][8][1] and [22], selection of the right automation tool [7][8][1] and [35] and testability level of SUT [7][1][22] and [35] also have elevated posterior probabilities indicating their validity level for GUI test automation. On similar lines defining scope of automation and well-defined test process are the primary criteria which must be fulfilled prior automation mentioned by all the practitioners as valid for successful GUI test automation. One criterion, dedicated and skilled automation team was mentioned in two sources [7] and [8] and was identified to be applicable for GUI test automation. However the two subsystems were not practicing this criteria as cross-functional teams were used for development and automation testing. The importance was already realized among the practitioners and soon a separate GUI build and automation team will be formed. Testability level of SUT is a vital criteria for when to begin GUI based automation testing with a posterior probability of 92%. As identified from literature

[19], testability aids in performing test automation in a stable, effective and efficient manner. The importance of this criterion, coupled with the industry's need to identify testability requirements, is the motivation for our next research question.

RQ2. What are the testability requirements for performing GUI test automation?

The 16 testability requirements are classified into four categories, namely SUT-related, test-process-related, test-tool-related and environment. Ten testability requirements were identified from the literature and 12 were obtained from the subjective opinions of the practitioners, as described in Sections 4.2 and 4.3. As with the criteria for when to automate, the final set of context-specific testability requirements presented in Figure 5.1 was obtained as an outcome of the Bayesian synthesis approach [55]. Use of unique class names [23] and support for custom controls [23] are the two variables found in the literature to be specific to GUI automation testing, while the remaining eight testability requirements were found in the context of test automation in general. The general-category testability requirements were validated and found to be applicable for making GUI-based automation testing easier. The SUT-related category is found to be the most important when considering testability requirements, as it contains the largest number of them. This is also mentioned in the literature [22]. Automatic installation and configuration [22][23] is found to be a primary testability requirement placed under environment-related, a category introduced by the literature. All the practitioners stated this testability requirement as valid for GUI test automation.
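The GUI-specific requirement use of unique class names [23] is easy to make concrete: a locator keyed on a stable, unique identifier survives a layout redesign, whereas a positional locator silently starts matching the wrong widget. The sketch below uses a hypothetical widget tree (invented for illustration, not the case system's GUI):

```python
# Two versions of a hypothetical login screen, as flat widget trees.
login_v1 = [
    {"cls": "header-logo", "text": "Acme"},
    {"cls": "login-button", "text": "Log in"},
]
# After a redesign, a banner is added and the button text changes.
login_v2 = [
    {"cls": "banner", "text": "Welcome"},
    {"cls": "header-logo", "text": "Acme"},
    {"cls": "login-button", "text": "Sign in"},
]

def find_by_class(tree, cls):
    # Robust locator: keys on the unique class name.
    return next(w for w in tree if w["cls"] == cls)

def find_by_index(tree, index):
    # Fragile locator: keys on layout position.
    return tree[index]

# The class-based locator finds the button in both versions.
print(find_by_class(login_v1, "login-button")["text"])  # Log in
print(find_by_class(login_v2, "login-button")["text"])  # Sign in

# The positional locator breaks after the redesign:
print(find_by_index(login_v2, 1)["cls"])  # header-logo, not the button
```

Every positional or text-based lookup that breaks in this way is a test script that must be repaired by hand, which is precisely the maintenance waste examined in RQ3.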
Furthermore, the subjective opinions of the practitioners helped in identifying six new variables: define clear product requirements, use of a GUI development framework, synchronous behavior, use of a minimal number of anonymous functions, perform manual testing prior to automation, and automation tool support. The literature, in combination with the subjective opinions of the practitioners, helped in identifying a new category valid for GUI test automation, and six new testability requirements were identified solely from the practitioners' opinions. This is a major contribution to the body of existing literature. Testability is of paramount importance while developing large-scale software systems [19]. The literature indicates that it is vital for both software developers and testers, as it aids them in keeping the overall test effort under control, and that proper implementation of testability requirements reduces the maintenance effort of test scripts [19]. The results of the thesis study indicate the same and show that a testability mindset, and designing the SUT to incorporate testability requirements, have a significant influence on unnecessary maintenance waste of GUI test

automation.

RQ3. What are the factors that lead to waste in GUI test automation?

The 15 factors associated with waste in GUI test automation are divided into six categories, namely SUT-related, test-process-related, test-tool-related, human and organizational, environment and cross-cutting. Twelve variables were identified from the literature and eight were obtained from the subjective opinions of the practitioners, as described in Sections 4.2 and 4.3. The Bayesian synthesis approach [55] was followed for combining the subjective opinions of the practitioners with the research evidence to produce the final set of factors presented in Figure 5.1. Knowledge and experience is identified as a primary factor associated with unnecessary maintenance in the literature and also has a 100% posterior probability, indicating high validity in the context. Out of the final set of 15 variables, loops and branches has a posterior probability of zero and is perceived as not applicable in the context under study. Notably, 92% of the practitioners feel that it affects maintenance of GUI test scripts in general, but it is not valid in this context because loops and branches are not used for writing test cases. Most of the factors, such as number of GUI-interaction components, variable names and script logic, test case consistency, test case length, loops and branches, GUI test automation tool, knowledge and experience, and simulator support, were identified from the literature as applicable to GUI-based automation testing as well as software testing in general [13]. A few variables, such as structure of the test suite architecture and system infrastructure, are found to be highly relevant for maintaining GUI test scripts, even though literature was available for software test automation alone. The results of the study are on par with the literature and validate these factors in the context of the telecommunication industry.
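One waste factor raised by the practitioners and discussed below is the use of fixed sleep calls to synchronize test scripts with the GUI: a sleep that is too short causes flaky timeouts, and one that is too long inflates execution time. A common alternative is a polling (explicit) wait that returns as soon as a condition holds. The sketch below is a generic, language-agnostic illustration in Python — it is not the subsystems' Selenide-based setup, which provides its own built-in waiting:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or `timeout` elapses.

    Unlike a fixed sleep, this returns as soon as the GUI is ready,
    and the timeout bounds the worst-case delay explicitly.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one final check at the deadline

# Hypothetical usage: a condition that becomes true on the third poll,
# standing in for "the dialog has finished rendering".
state = {"polls": 0}
def dialog_rendered():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_until(dialog_rendered, timeout=1.0, interval=0.01))  # True
```

A fixed `sleep(2)` in the same place would cost two seconds on every run regardless of when the dialog appears; the polling wait costs only as long as the GUI actually needs.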
The test-process-related category contains the largest number of factors associated with waste, indicating its higher importance over the other categories. The use of research-based evidence in identifying the factors associated with waste aided in finding two categories, environment and cross-cutting, in addition to the ones found through the opinions of the practitioners. Three new factors applicable to GUI test automation were identified from the subjective opinions of the industry practitioners. Sleep functions are used for synchronizing GUI test scenarios with the system being tested; invalid sleep ranges cause timeout issues and increase the total script execution time, making maintenance of GUI test scripts difficult. A high attrition rate of GUI developers affects maintenance cycles, especially because the telecommunication industry is relatively new to GUI development and testing and knowledge translation is a tedious process. Feedback from developers is not considered to be valid, given its very low posterior probability. This shows the importance of considering both subjective opinions and research-based evidence in conducting the research study. Furthermore, the discussion presented in Section indicates that the testability level of the system has an impact on overall test script maintenance cycles. Improper implementation of testability requirements increases the maintenance effort of GUI test automation, and of other test automation in general, as identified from literature and practice.

6.2 Validity threats

Every empirical study is subject to various validity threats, which must be addressed during all phases of the study [57]. To validate the quality of empirical research, four tests are commonly used: construct validity, internal validity, external validity and reliability (conclusion validity) [50]. We identified and carefully addressed potential threats to validity during our research and took steps to reduce or mitigate them. Such early identification and mitigation of threats enhanced the success of our case study. We next discuss the validity of the multiple-case study presented in this thesis in the context of the aforementioned validity threats.

Construct validity

Construct validity reflects the degree to which the right measures are implemented to represent the researchers' viewpoints and the concept being studied [57]. The following steps were adopted to mitigate this threat. Selection of interviewees: There is a threat that case study results can be biased due to a biased selection of interviewees. As mentioned in Section 3.2.4, a complete list of people involved with GUI test automation in the two subsystems was first obtained. From this pool, each interviewee was selected randomly. The selection of representatives from the two subsystems was done with the following characteristics in mind: roles, understanding and knowledge of GUI test automation, and distribution across subsystems.
Hence, sufficient care was taken to ensure diversity across roles and subsystems among the selected people, which assisted in minimizing the risk of bias. Misinterpretation of interview questions: Another threat is misinterpretation of the interview questions by the interviewees, which results in the collection of data unrelated to what was being investigated. This risk was mitigated in two steps. First, the formulated interview questions were discussed with and reviewed by the BTH supervisor and the industry supervisor. Their feedback was used for revising the questions to increase their clarity and, in turn, minimize the possibility of misunderstanding. Next, a pilot study of our interview design was conducted with an industry practitioner, a test architect at the company, to help us get a clear understanding of the design. In addition, the data collected from the interviews was analyzed in parallel to identify any necessary modifications and adjustments to the interview questions. Prior to the interviews, the context of the study was clearly explained by email and in person before questions were posed to the interviewees. Interviewer bias: Another potential threat to construct validity is that the interviewer might ask probing questions to steer answers in their favor, which affects the outcome of the results. To mitigate this threat, we avoided spontaneous reactions to the interviewees' responses, instead simply nodding, which encourages interviewees to expand on their point; this indicates that the researchers are interested in what the interviewee is saying and that they can take their time to explain. Moreover, as seen from the interview questionnaire (refer to Appendix A), we avoided asking closed questions that invite a yes or no answer.

Internal validity

Internal validity examines the causal relationships and how the conclusions are drawn from the collected data [57][50]. Deriving incorrect conclusions from data analysis: There is a possible threat of poor data analysis resulting in improper conclusions due to the inexperience of the researchers. To mitigate this threat, the results obtained from the data analysis were discussed with the thesis supervisor and practitioners at the company. Such feedback enabled us to rectify improper conclusions drawn from the data analysis. Furthermore, evidence from the literature was used to support the validity of the conclusions drawn.
Additionally, data collection and analysis were performed simultaneously to ensure internal validity, which is one of the important mitigation strategies according to [80].

External validity

External validity is primarily concerned with the extent to which the findings of the research can be generalized to a specific context and are applicable beyond the investigated case [57][50]. A specific company: This threat is plausible as the study was conducted at a single company, making it difficult to generalize the findings to other organizations. Establishing contacts with other organizations and practitioners working on other projects was not feasible while being employed at the company. To mitigate the risk to generalizability, the context within which the study was conducted and the characteristics of each subsystem are explained in sufficient detail. This aided generalization of the outcomes, making it possible to map the results to other organizations with a similar context.

Reliability/Conclusion validity

Reliability is the extent to which the data and its analysis procedures are affected by researchers' bias [57][50]. It deals with repetition and replication, and in particular with whether the same result would be obtained if the study were redone in the same context [57]. Interpretation of data: There exists a possible threat to reliability due to the outcome of the thesis being affected by the interpretation of the researchers. This risk was mitigated by designing the study in such a way that data was collected from multiple sources of evidence. Process documents such as test reports and GUI description documents were reviewed to gain a basic understanding of the process and terminology followed in the company. Apart from such documents, interviews were conducted to capture the perceptions of the practitioners and obtain qualitative data for answering the research questions. Each interview was recorded, and its interpretations were validated by both researchers to ensure the reliability of the data. Moreover, the literature evidence was used in drawing conclusions from the interview data. Thus, utilizing more than one source provided us an invaluable advantage in achieving convergent lines of inquiry. Moreover, to make the study repeatable, a case study protocol was designed and followed, documenting every step as described in

Chapter 7

Conclusions and Future Work

Software test automation proves to be beneficial and ensures success only when implemented carefully [20]. There is a need for test-automation decision support, for both software testing practitioners and future research, in terms of when automation should begin and the testability requirements which must be met to make automation effective [20][19]. The need is aggravated in the case of GUI test automation, as GUIs are known for changing continuously during development and breaking their corresponding test scripts, thereby hindering test automation [12]. As maintenance of such test scripts is an expensive and cumbersome process, identifying the major factors leading to such waste becomes essential. Because the substantial effort invested might go to waste due to improper decisions relating to GUI automation testing, we consider such a study relevant and essential. Though evidence exists for these variables for automation in general, and some for GUI test automation in particular, as described in Sections 2.4 and 4.2, there are several other context-specific variables which are important to consider while automating GUI testing, and it must be validated whether the variables identified from the literature are applicable to GUI test automation in the telecommunication domain. This was achieved through interviews with telecommunication industry practitioners. We synthesized all the variables affecting GUI automation testing using the Bayesian synthesis approach [55], as mentioned in Section 3.3. Though the size of the GUI developed varied significantly between the two subsystems, the results were identical, as they follow a similar approach to GUI test automation. Performing the study in two similar subsystems and obtaining identical results increased the validity of the results obtained through the case study. The research study makes four contributions.
First (C1), we identified the criteria for when to automate, the testability requirements which make automation easier, and the factors associated with waste in GUI-based automation testing from both literature and practice, as shown in Figure 5.1. As perceived from the literature and the opinions of the practitioners, all three research questions are interrelated. The answers reveal that testability requirements are important criteria which must be fulfilled prior to beginning GUI automation

testing, and other types of automation testing in general. Furthermore, it was found that when the system is not developed to meet testability requirements, the overall unnecessary maintenance effort (waste) of test scripts increases, particularly in GUI-based automation testing. Hence, testability level is an important criterion for when to automate and has a significant influence on shortening the maintenance cycles. Second (C2), we translated research-based evidence for the three research questions into practice by combining the subjective opinions of the practitioners with the evidence, as mentioned in Section 5.1. This produced context-specific results which can be used by the practitioners to make more informed decisions regarding when to begin GUI automation testing, by relying on evidence and being cautious about maintenance factors. In this process, the practitioners become aware of and informed about the various aspects of GUI test automation, which builds confidence among them that the practices they follow are on par with the literature. As every practitioner has their own way of working, this study aids in developing a common understanding to make consistent subjective decisions based on evidence. Hence, this shows the importance of considering both the subjective opinions of the practitioners and literature-based evidence in performing a research study. Third (C3), we validated the answers found in the literature for the three research questions in the telecommunication domain, which is relatively new to the field of GUI development and testing. The variables applicable to software test automation in general are valid for GUI automation testing as well, as perceived from the discussion presented in Sections 5.2 and 6.1.
Fourth (C4), a few new variables influencing GUI test automation (for instance, availability of a developed GUI as a criterion for when to automate; synchronous behavior as a testability requirement; sleep functions as a factor associated with waste) were identified through the interviews, which collected the subjective opinions of the practitioners. The evidence from the literature is limited in terms of criteria for when to automate, testability requirements and factors associated with waste in GUI automation testing. This is an important contribution to the body of literature and a base for future academic research.

7.1 Future work

We plan to take up the following future work directions:

- Classify the decision-support variables regarding criteria for when to begin automation, testability requirements and factors influencing waste in GUI test automation by the level of testing, e.g., unit level, integration level, and black-box level.

- Perform interview studies and surveys to understand how modern development concepts such as Agile software development and DevOps influence the decision-support variables of criteria for when to automate, testability requirements and maintenance factors.

- Rank the identified criteria and factors to indicate the importance of each, so that practitioners can focus more on the criteria or factors of high priority. This can be achieved through an assessment process.

- Validate our initial set of variables in other industrial contexts. Moreover, not all criteria for when to begin automation, testability requirements and factors influencing waste might have been identified; identifying further variables will be part of our future work.

References

[1] A. Rodrigues, A. C. Dias-Neto, and A. Bezerra, TAPN: Test automation's pyramid of needs, in XIV Brazilian Symposium on Software Quality. SBQS, 2015, pp
[2] B. Agarwal, S. Tayal, and M. Gupta, Software engineering and testing. Jones & Bartlett Learning,
[3] R. S. Pressman, Software engineering: a practitioner's approach. Palgrave Macmillan,
[4] A. Bertolino, Software testing research: Achievements, challenges, dreams, in 2007 Future of Software Engineering. IEEE Computer Society, 2007, pp
[5] G. J. Myers, C. Sandler, and T. Badgett, The art of software testing. John Wiley & Sons,
[6] G. M. D. Gandhi and A. S. Pillai, Challenges in GUI test automation, International Journal of Computer Theory and Engineering, vol. 6, no. 2, p. 192,
[7] V. Garousi and M. V. Mäntylä, When and what to automate in software testing? A multi-vocal literature review, Information and Software Technology, vol. 76, pp
[8] N. Garousi, V. Senthil Anand and R. Bhavani, Software test automation - the ground realities realized, Theoretical and Applied Information Technology, vol. 43, no. 2, pp
[9] O. Taipale, J. Kasurinen, K. Karhu, and K. Smolander, Trade-off between automated and manual software testing, International Journal of System Assurance Engineering and Management, vol. 2, no. 2, pp
[10] G. Liebel, E. Alégroth, and R. Feldt, State-of-practice in GUI-based system and acceptance testing: An industrial multiple-case study, in Software Engineering and Advanced Applications (SEAA), th EUROMICRO Conference on. IEEE, 2013, pp

[11] C. Bertolini, G. Peres, M. d'Amorim, and A. Mota, An empirical evaluation of automated black box testing techniques for crashing GUIs, in Software Testing Verification and Validation, ICST 09. International Conference on. IEEE, 2009, pp
[12] M. Grechanik, Q. Xie, and C. Fu, Maintaining and evolving GUI-directed test scripts, in Proceedings of the 31st International Conference on Software Engineering. IEEE Computer Society, 2009, pp
[13] E. Alégroth, R. Feldt, and P. Kolström, Maintenance of automated test suites in industry: An empirical study on visual GUI testing, Information and Software Technology, vol. 73, pp
[14] T. Hellmann, E. Moazzen, A. Sharma, M. Z. Akbar, J. Sillito, F. Maurer et al., An exploratory study of automated GUI testing: Goals, issues, and best practices, University of Calgary, Tech. Rep.,
[15] A. Bachmutsky, System design for telecommunication gateways. John Wiley & Sons,
[16] C. Kaner, Avoiding shelfware: a manager's view of automated GUI testing [online]. 1998. Available from: kaner.com/pdfs/autosqa.pdf [accessed ].
[17] Z. Sahaf, V. Garousi, D. Pfahl, R. Irving, and Y. Amannejad, When to automate software testing? Decision support based on system dynamics: an industrial case study, in Proceedings of the 2014 International Conference on Software and System Process. ACM, 2014, pp
[18] K. Stobie, Too much automation or not enough? When to automate testing, in Pacific Northwest Software Quality Conference,
[19] S. Patwa and A. K. Malviya, Testability of software systems, International Journal of Research and Reviews in Applied Sciences, vol. 5, no. 1,
[20] V. Garousi and F. Elberzhager, Test automation: Not just for test execution, IEEE Software, vol. 34, no. 2, pp
[21] V. Garousi and D. Pfahl, When to automate software testing? A decision-support approach based on process simulation, Journal of Software: Evolution and Process, vol. 28, no. 4, pp
[22] S. Berner, R. Weber, and R. K. Keller, Observations and lessons learned from automated testing, in Software Engineering, ICSE Proceedings. 27th International Conference on. IEEE, 2005, pp

[23] B. Pettichord, Design for testability, in Pacific Northwest Software Quality Conference, 2002, pp
[24] A. M. Memon, M. E. Pollack, and M. L. Soffa, Hierarchical GUI test case generation using automated planning, IEEE Transactions on Software Engineering, vol. 27, no. 2, pp
[25] E. F. Collins and V. F. de Lucena, Software test automation practices in agile development environment: An industry experience report, in Proceedings of the 7th International Workshop on Automation of Software Test. IEEE Press, 2012, pp
[26] B. A. Kitchenham, T. Dyba, and M. Jorgensen, Evidence-based software engineering, in Proceedings. 26th International Conference on Software Engineering, May 2004, pp
[27] P. A. Brooks and A. M. Memon, Automated GUI testing guided by usage profiles, in Proceedings of the twenty-second IEEE/ACM International Conference on Automated Software Engineering. ACM, 2007, pp
[28] A. Ruiz and Y. W. Price, GUI testing made easy, in Practice and Research Techniques, TAIC PART 08. Testing: Academic & Industrial Conference. IEEE, 2008, pp
[29] A. Ahmed, Test automation for graphical user interfaces: A review, in Computer Applications and Information Systems (WCCAIS), 2014 World Congress on. IEEE, 2014, pp
[30] A. M. Memon, M. E. Pollack, and M. L. Soffa, Automated test oracles for GUIs, in ACM SIGSOFT Software Engineering Notes, vol. 25, no. 6. ACM, 2000, pp
[31] A. M. Memon and M. L. Soffa, Regression testing of GUIs, ACM SIGSOFT Software Engineering Notes, vol. 28, no. 5, pp
[32] A. M. Memon, GUI testing: Pitfalls and process, IEEE Computer, vol. 35, no. 8, pp
[33] I. Banerjee, B. Nguyen, V. Garousi, and A. Memon, Graphical user interface (GUI) testing: Systematic mapping and repository, Information and Software Technology, vol. 55, no. 10, pp
[34] P. Sabev and K. Grigorova, Manual to automated testing: An effort-based approach for determining the priority of software test automation, World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering, vol. 9, no. 12, pp , 2015.

[35] A. Rodrigues and A. Dias-Neto, Relevance and impact of critical factors of success in software test automation lifecycle: A survey, in Proceedings of the 1st Brazilian Symposium on Systematic and Automated Software Testing. ACM, 2016, p. 6.
[36] M. Mirzaaghaei, F. Pastore, and M. Pezze, Automatically repairing test cases for evolving method declarations, in Software Maintenance (ICSM), 2010 IEEE International Conference on. IEEE, 2010, pp
[37] K. Wiklund, S. Eldh, D. Sundmark, and K. Lundqvist, Technical debt in test automation, in Software Testing, Verification and Validation (ICST), 2012 IEEE Fifth International Conference on. IEEE, 2012, pp
[38] E. Dustin, T. Garrett, and B. Gauf, Implementing automated software testing: How to save time and lower costs while raising quality. Pearson Education,
[39] Y. Amannejad, V. Garousi, R. Irving, and Z. Sahaf, A search-based approach for cost-effective software test automation decision support and an industrial case study, in Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on. IEEE, 2014, pp
[40] M. Cui and C. Wang, Cost-benefit evaluation model for automated testing based on test case prioritization, Journal of Software Engineering, vol. 9, no. 4, pp
[41] R. Ramler and K. Wolfmaier, Economic perspectives in test automation: balancing automated and manual testing with opportunity cost, in Proceedings of the 2006 International Workshop on Automation of Software Test. ACM, 2006, pp
[42] S. Y.-l. C. Yuan-yan and L. X.-s. J. Zhi-gang, Cost benefits analysis of automated testing plan [J], Microcomputer Information, vol. 6, p. 090,
[43] R. V. Hooda, An automation of software testing: A foundation for the future, International Journal of Latest Research in Science and Technology, vol. 1, no. 2, pp
[44] L. Williams, G. Kudrjavets, and N. Nagappan, On the effectiveness of unit test automation at Microsoft, in Software Reliability Engineering, ISSRE th International Symposium on. IEEE, 2009, pp
[45] K. Karhu, T. Repo, O. Taipale, and K. Smolander, Empirical observations on software testing automation, in Software Testing Verification and Validation, ICST 09. International Conference on. IEEE, 2009, pp

[46] Q. Xie and A. M. Memon, Designing and comparing automated test oracles for GUI-based software applications, ACM Transactions on Software Engineering and Methodology (TOSEM), vol. 16, no. 1, p. 4,
[47] J. Alanen and L. Y. Ungar, Comparing software design for testability to hardware DFT and BIST, in AUTOTESTCON, 2011 IEEE. IEEE, 2011, pp
[48] D. M. Rafi, K. R. K. Moses, K. Petersen, and M. V. Mäntylä, Benefits and limitations of automated software testing: Systematic literature review and practitioner survey, in Proceedings of the 7th International Workshop on Automation of Software Test. IEEE Press, 2012, pp
[49] H. Heiskanen, M. Maunumaa, and M. Katara, Test process improvement for automated test generation, Tampere University of Technology, Department of Software Systems, Tech. Rep.,
[50] R. K. Yin, Case study research: Design and methods. Sage Publications,
[51] C. Wohlin, P. Runeson, M. Höst, M. C. Ohlsson, B. Regnell, and A. Wesslén, Experimentation in software engineering. Springer Science & Business Media,
[52] C. Robson and K. McCartan, Real world research. John Wiley & Sons,
[53] S. Easterbrook, J. Singer, M.-A. Storey, and D. Damian, Selecting empirical methods for software engineering research, in Guide to Advanced Empirical Software Engineering. Springer, 2008, pp
[54] C. Wohlin, Guidelines for snowballing in systematic literature studies and a replication in software engineering, in Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering. ACM, 2014, p. 38.
[55] D. Badampudi and C. Wohlin, Bayesian synthesis for knowledge translation in software engineering: Method and illustration, in Software Engineering and Advanced Applications (SEAA), th Euromicro Conference on. IEEE, 2016, pp
[56] S. Keele et al., Guidelines for performing systematic literature reviews in software engineering, Technical report, Ver. 2.3, EBSE Technical Report. EBSE, 2007.

[57] P. Runeson and M. Höst, Guidelines for conducting and reporting case study research in software engineering, Empirical Software Engineering, vol. 14, no. 2, p. 131,
[58] J. A. Maxwell, Qualitative research design: An interactive approach. Sage Publications, 2012, vol. 41.
[59] J. Rowley, Conducting research interviews, Management Research Review, vol. 35, no. 3/4, pp
[60] C. Robson, Real world research: a resource for social scientists and practitioners, p. 270,
[61] D. W. Turner III, Qualitative interview design: A practical guide for novice investigators, The Qualitative Report, vol. 15, no. 3, p. 754,
[62] V. Braun and V. Clarke, Using thematic analysis in psychology, Qualitative Research in Psychology, vol. 3, no. 2, pp
[63] M. Vaismoradi, H. Turunen, and T. Bondas, Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study, Nursing & Health Sciences, vol. 15, no. 3, pp
[64] R. E. Boyatzis, Transforming qualitative information: Thematic analysis and code development. Sage,
[65] M. Vaismoradi, J. Jones, H. Turunen, and S. Snelgrove, Theme development in qualitative content analysis and thematic analysis, Journal of Nursing Education and Practice, vol. 6, no. 5, p. 100,
[66] P. L. Rice and D. Ezzy, Qualitative research methods: A health focus, Melbourne, Australia,
[67] C. M. Bird, How I stopped dreading and learned to love transcription, Qualitative Inquiry, vol. 11, no. 2, pp
[68] J. C. Lapadat and A. C. Lindsay, Transcription in research and practice: From standardization of technique to interpretive positionings, Qualitative Inquiry, vol. 5, no. 1, pp
[69] P. Bazeley and K. Jackson, Qualitative data analysis with NVivo. Sage Publications Limited,
[70] E. K. Saillard, Systematic versus interpretive analysis with two CAQDAS packages: NVivo and MAXQDA, in Forum Qualitative Sozialforschung/Forum: Qualitative Social Research, vol. 12, no. 1, 2011.

[71] A. Atherton and P. Elsmore, Structuring qualitative enquiry in management and organization research: A dialogue on the merits of using software for qualitative data analysis, Qualitative Research in Organizations and Management: An International Journal, vol. 2, no. 1, pp
[72] T. Dyba, B. A. Kitchenham, and M. Jorgensen, Evidence-based software engineering for practitioners, IEEE Software, vol. 22, no. 1, pp
[73] B. A. Kitchenham, D. Budgen, and P. Brereton, Evidence-based software engineering and systematic reviews. CRC Press, 2015, vol. 4.
[74] D. Budgen, B. Kitchenham, and P. Brereton, The case for knowledge translation, in Empirical Software Engineering and Measurement, 2013 ACM/IEEE International Symposium on. IEEE, 2013, pp
[75] M. Dixon-Woods, S. Agarwal, D. Jones, B. Young, and A. Sutton, Synthesising qualitative and quantitative evidence: a review of possible methods, Journal of Health Services Research & Policy, vol. 10, no. 1, pp
[76] D. L. Mulder and G. Whyte, A theoretical review of the impact of test automation on test effectiveness, in Proceedings of The 4th International Conference on Information Systems Management and Evaluation ICIME 2013, 2013, p
[77] L. S. Pinto, S. Sinha, and A. Orso, Understanding myths and realities of test-suite evolution, in Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering. ACM, 2012, p. 33.
[78] C. Liu, Platform-independent and tool-neutral test descriptions for automated software testing, in Proceedings of the 22nd International Conference on Software Engineering. ACM, 2000, pp
[79] V. Kettunen, J. Kasurinen, O. Taipale, and K. Smolander, A study on agility and testing processes in software organizations, in Proceedings of the 19th International Symposium on Software Testing and Analysis. ACM, 2010, pp
[80] J. M. Morse, M. Barrett, M. Mayan, K. Olson, and J. Spiers, Verification strategies for establishing reliability and validity in qualitative research, International Journal of Qualitative Methods, vol. 1, no. 2, pp , 2002.

Appendices

Appendix A: Interview questionnaire

Phase 1

Theme 1: Introduction

The researchers introduce themselves and present the research goals, objectives and interview format.

Objectives of the interview and case study:
We are doing a thesis on GUI test automation. Our research aims to explore when to automate GUI tests and the factors that lead to maintenance in GUI test automation.

How interview data will be used:
We are simply trying to capture your thoughts and perspectives on GUI test automation. Your responses to the questions will be kept confidential. Your answers will be a valuable addition to our research and findings. The results of the research will help the company and practitioners gain sufficient knowledge about the less-known aspects of GUI test automation. The study will reveal the criteria that determine when to begin automation of GUI tests, the testability requirements that make GUI test automation easier, and the factors that affect maintenance.

Recording:
Ask consent for recording their responses, to increase the validity of the results and to ensure that we correctly interpret and put forward their opinions and perspectives.

Introductory questions
1. Which TPG do you work for?
2. What are the various roles that you have worked in relevant to testing?
3. How many years of experience do you have working in testing?
4. What are your responsibilities in GUI test automation?
5. In any TPG, as part of test activities before Release ready, GUI unit testing, GUI integration testing and GUI black-box testing are performed. Which testing level are you involved in?

Theme 2: Overview of the GUI test automation process
1. Can you describe the end-to-end process you use for performing GUI test automation?
2. Why have you selected the BIT and Selenide tools for performing GUI test automation over other available tools?
   a. Have you experienced any problems with these tools?
   b. Why do you think these problems occur?
   c. Despite facing these problems, why do you continue to use these tools?

Theme 3:

A. Criteria for when
1. As soon as you receive a customer request for a requirement, do you directly start writing GUI automation test cases?
   a. If they mention any activities:
      i. Can you mention those activities and describe them briefly?
      ii. How did this start or initiate?
      iii. Is it a widely accepted process across all TPGs, or is it dynamic based on products?
2. We have heard that there is an analysis phase prior to development.
   i. What do you do as part of the analysis phase?
   ii. Who is involved?
3. Can one start writing the test cases on their own, or do you need someone's approval?
   a. What conditions should be satisfied before giving the approval?
   b. To satisfy the conditions, what activities should be performed before giving the approval?
   c. Why do you perform these activities?
   d. How did this start or initiate?
   e. Is it a widely accepted process across all TPGs, or is it dynamic based on products?
4. Can you think of more activities that you do during the entire testing timeline, i.e., before, during and after GUI automation testing?
5. What are the things that should be ready before you start testing the GUI?
6. Which methodology do you follow for the GUI testing process?
7. Have you worked with any other methodology for performing GUI test automation?
8. Agile principles state that you can accommodate changing requirements. Does this have any influence on GUI test automation?
   a. How do you handle the change in requirements for GUI automation testing?
9. How do you think agile influences the GUI test automation process?
   a. Which agile principles affect the activities you mentioned previously?
   b. When do you perform GUI testing in an agile development environment? Do you have a dedicated sprint for GUI testing, or do you do it in parallel with development?

B. Testability requirements

We would now like to capture testability requirements, so first we will explain what testability is. A definition of a testability requirement is presented to the interviewee:

Testability indicates whether the software under test and its environment have been designed to facilitate testing. Testability increases test design efficiency and facilitates automation.

1. Do you follow any tips, procedures or guidelines for achieving testability in GUI test automation?
   a. Can you mention these and describe them briefly?
   b. How did you come up with these?
   c. Do all the team members or TPGs follow these practices, or is it something individual?
   d. Do you use any specific term or terms for these tips, procedures or guidelines when communicating among team members?
2. Do you face any problems relating to the SUT and environment when performing GUI test automation?
   a. Can you state these problems and describe them briefly?
   b. Why do you think these problems arise?
   c. What do you do to handle them?

C. Factors associated with unnecessary maintenance
1. Have you encountered any problems while maintaining GUI test cases?
   a. Can you state these problems and describe them briefly?
   b. Why do you think these problems arise?
   c. What do you do to handle them?
2. Are these problems common to all types of testing, or specific to GUI test automation?
3. Were there any other situations in which you felt maintenance of GUI test scripts was difficult?

Theme 4: Presenting the literature review evidence

Phase 2

We presented the evidence captured from the literature in three tables to the interviewees. Each table provides solutions from the literature addressing the research questions. Based on the gathered research-based evidence, questions such as the following were asked:

"In the interview, we did not come across some of the criteria, testability requirements and factors influencing waste in GUI test automation. However, we have identified a few answers from the literature. We would like to know your opinions. What do you think about this variable?"

Appendix B: Transcription

Figure B.1: Snapshot of the transcription software.

Appendix C: Criteria for when

Figure C.1: Practitioners' opinions regarding criteria for when.

Legend:
Valid to the context - A
Partially valid to the context - B
Valid in general but not to the context - C
Not valid - D
Unsure - E

Further information for options A and B only:
Followed - X
Partially followed - Y
Not followed - Z

Appendix D: Testability requirements

Figure D.1: Practitioners' opinions regarding testability requirements.

Legend:
Valid to the context - A
Partially valid to the context - B
Valid in general but not to the context - C
Not valid - D
Unsure - E

Further information for options A and B only:
Followed - X
Partially followed - Y
Not followed - Z

Appendix E: Factors affecting waste

Figure E.1: Practitioners' opinions regarding factors affecting waste.

Legend:
Valid to the context - A
Partially valid to the context - B
Valid in general but not to the context - C
Not valid - D
Unsure - E

Appendix F: Prior probabilities

Figure F.1: Calculation of prior probabilities. The figure records, for each interviewee (I1-I12), which of the following coded variables they mentioned.

Criteria for when:
1. Product stability (SUT-related)
2. Availability of developed GUI (SUT-related)
3. Define scope of the automation (test-related)
4. Well-defined test process (test-related)
5. Develop test case skeletons (test-related)
6. Selection of the right automation tool (test-tool related)
7. Dedicated and skilled GUI build & automation team (human and organizational)
8. Resource availability (human and organizational)
9. Testability level of the SUT (cross-cutting)
10. Compatibility of the development process (cross-cutting)

Testability requirements:
1. Effective log storage management (SUT-related)
2. Use of unique class names (SUT-related)
3. Usage of diagnostic techniques (SUT-related)
4. Define clear product requirements (SUT-related)
5. Use of GUI development framework (SUT-related)
6. Synchronous behavior (SUT-related)
7. Use of minimal number of anonymous functions (SUT-related)
8. Following proper test scripting standards (test-related)
9. Identifying common steps to convert them into functions (test-related)
10. Test data availability (test-related)
11. Clean-up between test runs (test-related)
12. Perform manual testing prior to automation testing (test-related)
13. Automation tool documentation/support (test-tool related)

Factors associated with waste:
1. Reusability of code (SUT-related)
2. Broken test scripts (test-related)
3. Structure of test suite architecture (test-related)
4. Sleep functions (test-related)
5. GUI test automation tool (test-tool related)
6. Knowledge/experience (human and organizational)
7. Feedback from developers (human and organizational)
8. Attrition of GUI developers (human and organizational)
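Among the factors associated with waste listed in Figure F.1 is "sleep functions": fixed sleeps in GUI test scripts either wait too long or not long enough. A common remedy, sketched here as a minimal, tool-agnostic illustration (the `wait_until` helper is hypothetical, not code from the thesis or from Selenide), is to poll for the expected GUI state instead of sleeping for a fixed time:

```python
import time

def wait_until(condition, timeout: float = 5.0, interval: float = 0.05) -> bool:
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Unlike a fixed sleep, this returns as soon as the GUI reaches the
    expected state, reducing both flaky failures and wasted wait time.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one final check at the deadline

# Example with a stand-in "GUI" state that becomes ready after ~0.1 s:
start = time.monotonic()
ready = lambda: time.monotonic() - start > 0.1
print(wait_until(ready))  # True
```

Test tools such as Selenide build comparable condition-based waiting into their element lookups; the sketch above only shows the principle.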

Figure F.2: Proportion of practitioners mentioning the variables in the phase 1 interview.
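If the prior probability of each variable is estimated as the proportion of phase-1 interviewees who mentioned it (an assumption about the estimator; the mention data below is illustrative, not the thesis data), the calculation can be sketched as:

```python
# Hypothetical phase-1 mentions: variable -> set of interviewees (I1..I12)
# who brought it up. Counts are made up for illustration only.
N_INTERVIEWEES = 12

mentions = {
    "Product stability": {"I1", "I3", "I4", "I7", "I9", "I11"},
    "Sleep functions": {"I2", "I5", "I8"},
}

def prior_probability(variable: str) -> float:
    """Prior = proportion of interviewees who mentioned the variable."""
    return len(mentions.get(variable, set())) / N_INTERVIEWEES

for var in mentions:
    print(f"{var}: prior = {prior_probability(var):.2f}")
```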

Appendix G: Data synthesis

Figure G.1: Data synthesis: prior probability, likelihood and posterior probability.
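Figure G.1 combines a prior probability with a likelihood to produce a posterior. For a binary hypothesis (a variable either is or is not valid in the context), the underlying Bayes' rule can be sketched as follows; this is a generic illustration, not the exact likelihood model used in the thesis:

```python
def posterior(prior: float, likelihood_if_true: float,
              likelihood_if_false: float) -> float:
    """Bayes' rule for a binary hypothesis H given evidence E:
    P(H|E) = P(E|H) P(H) / (P(E|H) P(H) + P(E|not H) P(not H))."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / evidence

# Example: prior 0.5 (half the phase-1 interviewees mentioned the variable);
# the phase-2 evidence is four times more likely if the variable is valid.
print(posterior(0.5, 0.8, 0.2))  # 0.8
```

The phase-2 validation answers (A-E codes in Appendices C-E) play the role of the evidence E in this update.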


More information

CMII-100G. CMII Standard for Integrated Process Excellence and. and

CMII-100G. CMII Standard for Integrated Process Excellence and. and CMII-100G CMII Standard for Integrated Process Excellence and and About this Standard How an organization does what it does has everything to do with processes. Every organization has a network of core

More information

Organization Realignment

Organization Realignment Organization Realignment Collective Wisdom Fresh Ideas We are all drawn to tangible, concrete ideas. Consequently we find that when we are considering organizational change, there is a strong temptation

More information

The importance of the right reporting, analytics and information delivery

The importance of the right reporting, analytics and information delivery The importance of the right reporting, and information delivery Prepared by: Michael Faloney, Director, RSM US LLP michael.faloney@rsmus.com, +1 804 281 6805 Introduction This is the second of a three-part

More information

Standing up to the semiconductor verification challenge

Standing up to the semiconductor verification challenge 43 Bill Butcher Standing up to the semiconductor verification challenge Companies should seek faster, more cost-effective ways to test the quality of complex system-on-a-chip devices. Aaron Aboagye, Mark

More information

Working better by working together

Working better by working together Working better by working together Deal Advisory / Germany We can help you Partner. / 1 A pragmatic approach to enhancing value through partnerships. Your vision. Our proven capabilities. Businesses thrive

More information

Available online at ScienceDirect. Procedia CIRP 28 (2015 ) rd CIRP Global Web Conference

Available online at  ScienceDirect. Procedia CIRP 28 (2015 ) rd CIRP Global Web Conference Available online at www.sciencedirect.com ScienceDirect Procedia CIRP 28 (2015 ) 179 184 3rd CIRP Global Web Conference Quantifying risk mitigation strategies for manufacturing and service delivery J.

More information

Mobile Payment: Critical Success Factors for FinTech Business Models

Mobile Payment: Critical Success Factors for FinTech Business Models Mobile Payment: Critical Success Factors for FinTech Business Models Bachelorarbeit zur Erlangung des akademischen Grades Bachelor of Science (B. Sc.) im Studiengang Wirtschaftswissenschaft der Wirtschaftswissenschaftlichen

More information

Adopting to Agile Software Development

Adopting to Agile Software Development doi: 10.1515/acss-2014-0014 Adopting to Agile Software Development Gusts Linkevics, Riga Technical University, Latvia Abstract Agile software development can be made successful, but there is no well-defined

More information

EXTERNAL EVALUATION OF THE EUROPEAN UNION AGENCY FOR FUNDAMENTAL RIGHTS DRAFT TECHNICAL SPECIFICATIONS

EXTERNAL EVALUATION OF THE EUROPEAN UNION AGENCY FOR FUNDAMENTAL RIGHTS DRAFT TECHNICAL SPECIFICATIONS EXTERNAL EVALUATION OF THE EUROPEAN UNION AGENCY FOR FUNDAMENTAL RIGHTS DRAFT TECHNICAL SPECIFICATIONS / / / 1) Motivation for this evaluation According to the founding Regulation (168/2007) of the Fundamental

More information

» Kienbaum 360 Degree Feedback

» Kienbaum 360 Degree Feedback » Kienbaum 360 Degree Feedback Develop leaders. Improve leadership quality. What we offer 2» The Challenge 3 Self-reflected, authentic, confident Why leadership quality is so important good leaders make

More information

Blackblot PMTK Competitor Analysis. <Comment: Replace the Blackblot logo with your company logo.>

Blackblot PMTK Competitor Analysis. <Comment: Replace the Blackblot logo with your company logo.> Blackblot PMTK Competitor Analysis Company Name: Name: Date: Contact: Department: Location: Email: Telephone: Document Revision History:

More information

Design Guide: Impact of Quality on Cost Economics for In-circuit and Functional Test.

Design Guide: Impact of Quality on Cost Economics for In-circuit and Functional Test. Design Guide: Impact of Quality on Cost Economics for In-circuit and Functional Test USA, CANADA, MEXICO, MALAYSIA, CHINA, UNITED KINGDOM Contact locations: www.circuitcheck.com Copyright Circuit Check,

More information

BASICS OF SOFTWARE TESTING AND QUALITY ASSURANCE. Yvonne Enselman, CTAL

BASICS OF SOFTWARE TESTING AND QUALITY ASSURANCE. Yvonne Enselman, CTAL BASICS OF SOFTWARE TESTING AND QUALITY ASSURANCE Yvonne Enselman, CTAL Information alines with ISTQB Sylabus and Glossary THE TEST PYRAMID Why Testing is necessary What is Testing Seven Testing principles

More information

The 9 knowledge Areas and the 42 Processes Based on the PMBoK 4th

The 9 knowledge Areas and the 42 Processes Based on the PMBoK 4th The 9 knowledge Areas and the 42 Processes Based on the PMBoK 4th www.pmlead.net PMI, PMP, CAPM and PMBOK Guide are trademarks of the Project Management Institute, Inc. PMI has not endorsed and did not

More information

Intermediate Certificate in Software Testing Syllabus. Version 1.4

Intermediate Certificate in Software Testing Syllabus. Version 1.4 Intermediate Certificate in Software Testing Syllabus February 2010 Background This document is the syllabus for the intermediate paper which leads to the practitioner level of qualification, as administered

More information

THE PURPOSE OF TESTING

THE PURPOSE OF TESTING Chapter 6 THE PURPOSE OF TESTING Context-Driven Overview of Quadrants Tests That Support the Team Tests That Critique the Product Quadrant Intro Purpose of Testing Managing Technical Debt Knowing When

More information

An Ordering Strategy for a Retail Supply Chain

An Ordering Strategy for a Retail Supply Chain An Ordering Strategy for a Retail Supply Chain Improving the Ordering Process between a Retail Brand Owning Company and its Distributors and Suppliers Master's thesis in the Master's Programme Supply Chain

More information

MANAGEMENT INFORMATION SYSTEMS

MANAGEMENT INFORMATION SYSTEMS Management Information Systems 1 MANAGEMENT INFORMATION SYSTEMS For undergraduate curriculum in business, major in management information systems. The Department of Supply Chain and Information Systems

More information

CHAPTER 1. Business Process Management & Information Technology

CHAPTER 1. Business Process Management & Information Technology CHAPTER 1 Business Process Management & Information Technology Q. Process From System Engineering Perspective From Business Perspective In system Engineering Arena Process is defined as - a sequence of

More information

Online Course Manual By Craig Pence. Module 12

Online Course Manual By Craig Pence. Module 12 Online Course Manual By Craig Pence Copyright Notice. Each module of the course manual may be viewed online, saved to disk, or printed (each is composed of 10 to 15 printed pages of text) by students enrolled

More information

Transitioning from Management to Engineering Ida Hashemi, Yun Tian

Transitioning from Management to Engineering Ida Hashemi, Yun Tian Transitioning from Management to Engineering Ida Hashemi, Yun Tian Computer Science Department, California State University, Fullerton Ya-fei Jia College of Computer Science, Beijing University of Technology,

More information

PAYIQ METHODOLOGY RELEASE INTRODUCTION TO QA & SOFTWARE TESTING GUIDE. iq Payments Oy

PAYIQ METHODOLOGY RELEASE INTRODUCTION TO QA & SOFTWARE TESTING GUIDE. iq Payments Oy PAYIQ METHODOLOGY RELEASE 1.0.0.0 INTRODUCTION TO QA & SOFTWARE TESTING GUIDE D O C U M E N T A T I O N L I C E N S E This documentation, as well as the software described in it, is furnished under license

More information

SOFTWARE TESTING REVEALED

SOFTWARE TESTING REVEALED SOFTWARE TESTING REVEALED TRAINING BOOK SECOND EDITION BY INTERNATIONAL SOFTWARE TEST INSTITUTE www.test-institute.org COPYRIGHT INTERNATIONAL SOFTWARE TEST INSTITUTE Dedication To all of the International

More information

REPORT 2016/033 INTERNAL AUDIT DIVISION

REPORT 2016/033 INTERNAL AUDIT DIVISION INTERNAL AUDIT DIVISION REPORT 2016/033 Advisory engagement on the Statement on Internal Control project at the United Nations Joint Staff Pension Fund 25 April 2016 Assignment No. VS2015/800/01 CONTENTS

More information

The SharePoint Workflow Conundrum

The SharePoint Workflow Conundrum SharePoint Workflow Conundrum Author: Geoff Evelyn The SharePoint Workflow Conundrum An examination of the workflow provisions around SharePoint, Office365, Windows Azure and what implications there are

More information

Communication Audit of the Academic & Career Advising Center. Table of Contents

Communication Audit of the Academic & Career Advising Center. Table of Contents Helping organizations reach new heights through effective communication Communication Audit of the Academic & Career Advising Center Table of Contents Mission Statement 4 Executive Summary 5 Introduction

More information

QUESTIONS NUMBER ONE (Total marks 20) NUMBER TWO (Total marks 20) NUMBER THREE

QUESTIONS NUMBER ONE (Total marks 20) NUMBER TWO (Total marks 20) NUMBER THREE NUMBER ONE QUESTIONS The growth of telecommunications has made information a key organisational resource, which requires careful management. a. Give your definition of an Information System. (5 b. The

More information

PSS E. High-Performance Transmission Planning Application for the Power Industry. Answers for energy.

PSS E. High-Performance Transmission Planning Application for the Power Industry. Answers for energy. PSS E High-Performance Transmission Planning Application for the Power Industry Answers for energy. PSS E architecture power flow, short circuit and dynamic simulation Siemens Power Technologies International

More information

GoldSRD Audit 101 Table of Contents & Resource Listing

GoldSRD Audit 101 Table of Contents & Resource Listing Au GoldSRD Audit 101 Table of Contents & Resource Listing I. IIA Standards II. GTAG I (Example Copy of the Contents of the GTAG Series) III. Example Audit Workprogram IV. Audit Test Workpaper Example V.

More information

A Model for CAS Self Assessment

A Model for CAS Self Assessment Introduction An effective Contractor Assurance System integrates contractor management, supports corporate parent governance and facilitates government oversight systems. The purpose of a CAS is threefold:

More information

Lectures 2 & 3. Software Processes. Software Engineering, COMP201 Slide 1

Lectures 2 & 3. Software Processes. Software Engineering, COMP201 Slide 1 Lectures 2 & 3 Software Processes Software Engineering, COMP201 Slide 1 What is a Process? When we provide a service or create a product we always follow a sequence of steps to accomplish a set of tasks

More information