A Framework for Maturity Assessment in Software Testing for Small and Medium-Sized Enterprises

Adailton F. Araújo 1, Cássio L. Rodrigues 1, Auri M. R. Vincenzi 1, Celso G. Camilo 1 and Almir F. Silva 2
1 Institute of Informatics, Federal University of Goiás, Goiânia, Goiás, Brazil
2 Development Department, Decisão Sistemas, Goiânia, Goiás, Brazil

Abstract
This paper proposes a framework for a software test maturity assessment model for small and medium-sized enterprises (SMEs) based on the TMMi model. The framework includes an evaluation questionnaire based on TMMi sub-practices, support material with examples of the artefacts needed to complete the questionnaire properly, and automated tool support for its application, enabling SMEs to carry out self-assessment. The model was applied to four SMEs.

Keywords: Maturity Assessment of Test Process, Software Test Process Improvement, TMMi, TMM

1. Introduction

Small and medium-sized software enterprises play a major role in world economic development, representing 99.2% of the world's software companies [1]. In 2010, computer programs developed in Brazil had a 35% share of the country's software market. This market is served by about 8,520 companies dedicated to software development, production and distribution, as well as to related services. A total of 94% of all software development and production companies (2,117) are classified as SMEs [12]. Moreover, companies with fewer than ten employees represent 93% of European companies and 56% of American companies [9]. Thus, most of the world's software development businesses are classified as SMEs [8], which shows their importance in the market.

The production of high-quality software is a challenge for SMEs. In general, they have only limited means to develop high-quality software [9]. The software market has become more demanding, and SMEs must continually improve their products to meet its demands. The quality of a system or product is largely determined by the quality of the process used to build it, and a sound process provides the foundation to maximize people's productivity and the use of technology, enhancing competitiveness in the market [7]. A good process alone does not guarantee a successful product, but a good process is necessary to create one [9]. Given the importance of process, the software industry seeks certification against models which emphasize discipline, such as CMMI (Capability Maturity Model Integration) [7] and MPS.BR (Software Process Improvement in Brazil) [10].

Companies of any size need a minimum investment to start a software process improvement program. Such an investment may be insignificant relative to a large company's income, but may not be feasible for an SME [2]. Furthermore, software process improvement in small businesses must pay special attention to how models and standards designed by and for large organizations are applied [16].

Testing is an important part of the development cycle and is essential to producing high-quality software [15]. General maturity models such as CMMI and MPS.BR, however, give testing limited visibility and require it only at high maturity levels, which SMEs rarely reach. In MPS.BR, which focuses on SMEs, only 8% of registered companies hold the higher-level certifications A, B, C and D [13], which require the verification and validation processes. The software industry has focused on process improvement to enhance its performance [15].
Test-specific maturity models were therefore created, such as TMM (Test Maturity Model) [3] and TMMi (Test Maturity Model integration) [4]. However, these models are not well suited to SMEs because of their high assessment costs and because they were originally conceived from the standpoint of large organizations. TMM is one of the most widely used models in the world, and TMMi is its evolution [3]. To date, there are no free assessment questionnaires available that SMEs can use as a self-evaluation tool. Thus, our work defines a framework for a software test maturity assessment model based on TMMi practices which can be applied by SMEs. Owing to financial restrictions, these companies need a method that enables them to assess their test process maturity themselves, since the investment required by a formal assessment is not compatible with their reality. They must also deal with low maturity levels in both their test and improvement processes. We therefore provide resources for companies to evaluate themselves without advanced knowledge of the model. Our main contribution is an objective questionnaire for maturity assessment that follows the TMMi model. For each question, examples are presented of how the company may meet the model's demands. Furthermore, automated tool support was developed to help with the assessment and the consolidation of results. This support may reduce assessment effort because it takes the relationships between the questions into account, thereby reducing the number of questions to be answered. The benefits for SMEs would be numerous: according to the Tassey report [14], the lack of an appropriate test process results in 1) an increased number of faults and lower quality; 2) higher development costs; 3) delays in delivery; and 4) higher tool support costs.

The remainder of this paper is organized as follows: Section 2 presents related work; Section 3 describes the steps taken to create the proposed framework; Section 4 describes the case study and the context in which the framework was applied; Section 5 presents the results and an assessment of the framework. Finally, conclusions are drawn in Section 6.

2. Related Works

Various works and tools have been developed to support the maturity assessment of development and test processes. Tayamanon et al. [15] propose a supporting tool based on TMM-AM which enables each company to evaluate its own test process. The tool was built by analysing each question in the TMM-AM questionnaire and registering, based on the IEEE 730 and 829 standards, all the work products needed to answer it. The tool's assessment is carried out by comparing each standard product to the one produced by the organization. Ng et al. [11] suggest that the downside of such an approach for SMEs is that most of them do not implement test documentation standards and end up creating customized documents. Companies frequently produce only partial documentation, a fact not taken into account by the tool, which hinders its use with SMEs and results in a subjective rating of each product.

Appraisal Assistant is an assessment tool developed by the Software Quality Institute (SQI) at Griffith University [6]. It is used to assess an organization's process capability or maturity based on assessment methods such as SCAMPI (CMMI) and ISO/IEC 15504, and it supports multiple reference models, including CMMI V1.1, V1.2 and V1.3, ISO/IEC 12207 and others. Although it targets the maturity of the development process, trials showed it is easy to use.

Höhn [5] presents a framework aimed at gathering general knowledge of the testing area and making it available to the community, in order to facilitate knowledge transfer, test process definition and improvement, and thereby quality. KITTool, part of this framework, carries out test maturity assessment based on the TMMi model. It provides two types of test process diagnosis: one based on TMMi goals and the other on its practices. In the first, the process areas and goals are identified and each goal representing the test process is given a grade (0, 4, 6, 8, 10). The second follows the SCAMPI assessment approach used with CMMI and presents the goals and practices related to each objective; the assessor then assigns a rating (0, 25%, 50%, 75%, 100%) based on a reference table indicating how much of the practice has been implemented in the assessed process. However, applying this tool requires a high level of knowledge of the TMMi model, and it does not focus on SMEs.
Thus, considering the related work, our framework differs in the following ways:
- it provides support for SMEs to carry out a self-assessment of their test process maturity;
- its automated support allows the evidence provided to be associated with each TMMi goal/practice to check whether it has been met, giving an overview of how the company applies the model and facilitating reassessment of the process as well as potential external audits;
- the automated support captures direct relationships between the questions of the assessment questionnaire, reducing the number of items that need checking and even revealing inconsistencies during the evaluation; and
- the framework was applied and assessed in a case study.

3. Creating the Framework

This section presents the steps taken to create the framework for software test maturity assessment based on the TMMi model.

3.1 Definition of the assessment questionnaire

There are five maturity levels in the TMMi model and each of them contains a set of process areas. Each process area has a set of goals, each goal has a set of practices and each practice has a set of sub-practices. A sub-practice is a detailed description that provides guidelines for interpreting and implementing a specific practice [4]. For example, the first column of Table 1 shows some of the sub-practices associated with the practice Define Test Goals, the goal Establish a Test Policy, and the process area Test Policy and Strategy, which belongs to maturity level 2.

Table 1: Questions based on TMMi sub-practices

Sub-practice | Question
1. Study business needs and objectives; | OR
2. Allow feedback to clarify business needs and objectives, if necessary; | OR
3. Define test goals which trace back to business needs and objectives; | (3) Are test goals defined based on business needs and objectives?
4. Review test goals with the interested parties; | OR
5. Reassess test goals whenever necessary, e.g. annually. | (5) Are test goals periodically reviewed?
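For illustration only, this level / process area / goal / practice / sub-practice hierarchy can be modelled as a handful of record types. The following Python sketch is ours, not part of the TMMi model or of the proposed framework, and all class and field names are hypothetical.

# Illustrative sketch (not from the paper): one way to represent the TMMi
# hierarchy described above; all names are our own choice.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubPractice:
    number: int
    text: str
    is_orientation: bool = False   # "OR" items: guidance only, no question derived

@dataclass
class Practice:
    name: str
    sub_practices: List[SubPractice] = field(default_factory=list)

@dataclass
class Goal:
    name: str
    practices: List[Practice] = field(default_factory=list)

@dataclass
class ProcessArea:
    name: str
    maturity_level: int            # 2..5
    goals: List[Goal] = field(default_factory=list)

# Example: the fragment shown in Table 1.
define_test_goals = Practice(
    name="Define Test Goals",
    sub_practices=[
        SubPractice(1, "Study business needs and objectives", is_orientation=True),
        SubPractice(3, "Define test goals which trace back to business needs and objectives"),
        SubPractice(5, "Reassess test goals whenever necessary, e.g. annually"),
    ],
)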

This level / process area / goal / practice / sub-practice structure holds for all process areas at levels 2, 3 and 4. Level 5 process areas do not contain practices or sub-practices, and the generic practices, which apply to all process areas, have no sub-practices either. Höhn's work [5] provides two forms of assessment: 1) based on the goals; and 2) based on the model's practices. Our analysis of the TMMi model showed that the detail provided by the sub-practices is crucial for SME self-assessment, since in this scenario the assessor does not know in detail what is required to achieve a given goal or practice. A questionnaire was therefore created to assess adherence to the TMMi model based on its sub-practices. The questionnaire was built according to the following criteria:
- Questions were created only for the sub-practices that represent a result to be achieved by the model. Sub-practices that are merely guidance for implementing a practice did not receive questions. Table 1 shows examples of questions based on sub-practices 3 and 5; sub-practices without related questions (1, 2 and 4) are marked OR (orientation);
- Traceability between questions and sub-practices was maintained to facilitate future improvement and maintenance of the questions. As shown in Table 1, the numbers in parentheses beside each question indicate this traceability;
- Whenever the model does not provide sub-practices (all the generic practices and some specific practices), a question was created from the practice's description. For instance, the practice Distribute Test Policy to Interested Parties, described as "Test policies and goals are presented and explained to the interested parties, whether or not they are involved with the test activity", yielded the question "Were test policies presented to the interested parties (whether or not they are involved with the test)?"; and
- Whenever the model does not provide practices (level 5 process areas), a question was created from the goal's description. For example, the goal Select Test Process Improvements, described as "Test process improvements are chosen when they contribute to the achievement of product quality as well as process development goals", yielded the question "Are test process improvements selected to achieve product quality and process development goals?"

As a result, a questionnaire covering all TMMi process areas was created with 261 questions. Twelve of them relate to the generic practices of the model and therefore apply to all process areas. Each question based on sub-practices was associated with one or more sub-practices of the same practice, and some sub-practices have no associated question. Questions derived directly from goals or practices were associated with exactly one goal or practice, given their greater specificity.

The maturity level is determined from the answers given in the questionnaire, according to the TMM assessment method [3]. The answers foreseen by this method are: Yes, No, Not Applicable, and Unknown.
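Continuing the illustrative types above, a questionnaire item can keep its traceability to the TMMi elements it was derived from, together with its answer and the evidence attached to a Yes. Again, this is only a sketch with hypothetical names and identifier scheme, not the framework's actual implementation.

# Illustrative sketch (not from the paper): a questionnaire item that keeps
# traceability to the TMMi elements it was derived from and its given answer.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Answer(Enum):
    YES = "Yes"
    NO = "No"
    NOT_APPLICABLE = "Not Applicable"
    UNKNOWN = "Unknown"

@dataclass
class Question:
    qid: str                      # e.g. "PA2.1-SG1-SP1.1-Q3" (hypothetical id scheme)
    text: str
    process_area: str
    goal: str
    practice: str
    sub_practices: List[int]      # numbers of the sub-practices it traces to, e.g. [3]
    answer: Optional[Answer] = None
    evidence: List[str] = field(default_factory=list)   # artefacts justifying a "Yes"

q3 = Question(
    qid="PA2.1-SG1-SP1.1-Q3",
    text="Are test goals defined based on business needs and objectives?",
    process_area="Test Policy and Strategy",
    goal="Establish a Test Policy",
    practice="Define Test Goals",
    sub_practices=[3],
)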
3.2 Definition of the support material

To help SMEs carry out a self-assessment even when the assessor has no advanced knowledge of the TMMi model, each question is accompanied by examples of typical work products that are commonly used as evidence that the organization achieves the result expected by the model. The TMMi model already provides examples for many of its sub-practices. The examples were therefore chosen according to the following criteria:
- For the practice-related questions, the examples provided by the TMMi model were used. Some of these lists are extensive, so, to prevent the support material from cluttering the assessment, a maximum of five examples was set per item; all examples were read, and those clearly related to the item were chosen;
- For the practice-related questions without examples in the TMMi model, examples were taken from other practices that detail the model's implementation; and
- For the questions that fit neither of the previous cases, the artefacts were proposed based on experience and/or other references.

Consequently, examples were produced for all 261 questions of the questionnaire, some of which are shown in Table 2. Questions (3) and (5) come from Table 1, whereas the others were created from other TMMi sub-practices.

Table 2: Some examples based on the TMMi model and on experience

Question | Support Material
(1) Were test project techniques selected to provide adequate test coverage regarding the risks of the defined products? | Typical work product: Test plans with test project techniques. Examples of test techniques: Equivalence Partitioning, Boundary Value Analysis, Decision Table, State Transition Testing, Use-Case Testing.
(3) Are test goals defined based on business needs and objectives? | Typical work product: Test goals which trace back to business goals. Examples of test goals: Validate the product for use; Prevent faults during operation; Verify conformity to external standards; Provide information on product quality.
(4) Was documentation developed to support the implementation of the testing environment? | Typical work product: Support documentation for the implementation of the testing environment. Examples: Guide for environment setup; Guide for environment operation; Guide for environment maintenance.
(5) Are test goals periodically reviewed? | Typical work product: Review of test goals. Examples of reviews: Meeting with the interested parties to review and discuss the need to change test goals (recorded in the meeting minutes).

For question (1), for instance, the following example techniques were indicated: Equivalence Partitioning, Boundary Value Analysis, Decision Table, State Transition Testing and Use-Case Testing.
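The support material itself can be stored as a simple mapping from a question to its typical work product and example artefacts, as sketched below with content taken from Table 2; the structure and names are hypothetical, not the framework's actual format.

# Illustrative sketch (not from the paper): support material attached to a
# question id. Ids and field names are hypothetical.
SUPPORT_MATERIAL = {
    "PA2.1-SG1-SP1.1-Q3": {
        "typical_work_product": "Test goals which trace back to business goals",
        "examples": [
            "Validate the product for use",
            "Prevent faults during operation",
            "Verify conformity to external standards",
            "Provide information on product quality",
        ],  # at most five examples per item, as defined in Section 3.2
    },
}

def show_support(qid: str) -> None:
    """Print the support material shown next to a question during assessment."""
    material = SUPPORT_MATERIAL.get(qid)
    if material:
        print("Typical work product:", material["typical_work_product"])
        for example in material["examples"]:
            print(" -", example)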

3.3 Definition of the automated support for assessment

An electronic spreadsheet makes it possible to view each question's support material, to record an answer (Yes, No, Not Applicable, or Unknown) and to associate evidence, i.e. artefacts produced by the company that support the Yes answers. Results are computed according to the TMM assessment method [3], which is also questionnaire-oriented. This method could be reused because TMMi process areas correspond to TMM maturity goals, and TMMi goals correspond to TMM maturity sub-goals.

To carry out the assessment, each goal has a set of related questions. A goal is regarded as achieved when the proportion of Yes answers is at least 50%, i.e. when its degree of satisfaction is medium, high or very high. Table 3 shows how a goal's degree of satisfaction is calculated.

Table 3: Goals' Degree of Satisfaction [3]

Degree of Satisfaction | Criteria
Very high | Yes answers above 90%
High | Yes answers between 70% and 90%
Medium | Yes answers between 50% and 69%
Low | Yes answers between 30% and 49%
Very low | Yes answers below 30%
Not Applicable | Not Applicable answers are 50% or more
Not Classified | Unknown answers are 50% or more

The classification of each process area relies on the classification of its goals (see Table 4), and a maturity level is satisfied when all of its process areas are achieved.

Table 4: Satisfaction of Process Areas [3]

Satisfaction | Criteria
Achieved | Achieved goals are 50% or more
Not achieved | Achieved goals are below 50%
Not Applicable | Not applicable goals are 50% or more
Not Classified | Not classified goals are 50% or more

A spreadsheet implements this method: given the answers, it automatically produces the results and establishes a maturity level for the company. Results are displayed in a summary and in a detailed version. The summary version reports the satisfaction of each process area and the resulting maturity level. The detailed version adds, for each process area, the degree of satisfaction of each of its goals and flags any questions left unanswered.
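To make the scoring rules concrete, the following sketch encodes our reading of Tables 3 and 4. It is not the actual spreadsheet; the function names are ours, and the cumulative treatment of maturity levels in maturity_level is an assumption on our part.

# Illustrative sketch (not the paper's spreadsheet): scoring per Tables 3 and 4.
from typing import List

def goal_degree_of_satisfaction(answers: List[str]) -> str:
    """Classify one goal from the answers to its questions (Table 3)."""
    if not answers:
        return "Not Classified"
    n = len(answers)
    if answers.count("Not Applicable") / n >= 0.5:
        return "Not Applicable"
    if answers.count("Unknown") / n >= 0.5:
        return "Not Classified"
    yes = answers.count("Yes") / n * 100
    if yes > 90: return "Very high"
    if yes >= 70: return "High"
    if yes >= 50: return "Medium"
    if yes >= 30: return "Low"
    return "Very low"

def goal_achieved(degree: str) -> bool:
    # A goal is achieved when its degree of satisfaction is medium, high or very high.
    return degree in ("Medium", "High", "Very high")

def process_area_satisfaction(goal_degrees: List[str]) -> str:
    """Classify one process area from the degrees of its goals (Table 4)."""
    if not goal_degrees:
        return "Not Classified"
    n = len(goal_degrees)
    if sum(d == "Not Applicable" for d in goal_degrees) / n >= 0.5:
        return "Not Applicable"
    if sum(d == "Not Classified" for d in goal_degrees) / n >= 0.5:
        return "Not Classified"
    achieved = sum(goal_achieved(d) for d in goal_degrees) / n
    return "Achieved" if achieved >= 0.5 else "Not achieved"

def maturity_level(level_area_results: List[List[str]]) -> int:
    """Highest consecutive level (starting at 2) whose process areas are all
    achieved, assuming levels must be satisfied cumulatively (our assumption)."""
    level = 1
    for areas in level_area_results:   # process area results for levels 2, 3, 4, 5 in order
        if areas and all(r == "Achieved" for r in areas):
            level += 1
        else:
            break
    return level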
3.4 Establishment of dependencies between questionnaire questions

A question X is considered dependent on a question Y when a positive answer to X is only possible if Y has previously received a positive answer. For example, take X = "Was the test project monitored throughout its lifecycle by comparing what was planned and what was accomplished in relation to potential risks?" and Y = "Were the test project risks identified, and was each risk analysed regarding its probability of occurrence, impact and priority?": it is not possible to monitor risks that were never identified. These relationships were established by examining the dependencies between all questions, producing a dependency matrix.

Establishing these dependencies has two distinct aims:
1) To detect inconsistencies in the answers - dependency relationships reveal inconsistencies in the questionnaire, i.e. Yes answers to questions that depend on a question answered No; and
2) To eliminate dependent questions - dependency relationships allow questions that depend on a question answered No to be skipped, reducing the number of questions to be answered.

The proposed framework makes both approaches available; they are illustrated in the sketch below. If a company chooses to reduce assessment time by eliminating questions, it will not be able to identify inconsistencies. The most adequate approach may be chosen according to the company's maturity: companies with more mature test processes may feel more confident about eliminating questions.
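Both uses of the dependency matrix can be sketched in a few lines. This is our own illustration with hypothetical question identifiers, not the tool's implementation; dependencies are kept as a mapping from a question id to the ids it depends on.

# Illustrative sketch (not the actual tool): using a dependency matrix to
# (1) flag inconsistent answers and (2) skip questions whose prerequisite is "No".
from typing import Dict, List, Set

# question id -> ids of questions it depends on (hypothetical ids)
DEPENDS_ON: Dict[str, List[str]] = {
    "Q-monitor-risks": ["Q-identify-risks"],
}

def find_inconsistencies(answers: Dict[str, str]) -> List[str]:
    """Return ids answered 'Yes' although some question they depend on is 'No'."""
    return [
        qid
        for qid, deps in DEPENDS_ON.items()
        if answers.get(qid) == "Yes"
        and any(answers.get(d) == "No" for d in deps)
    ]

def questions_to_skip(answers: Dict[str, str]) -> Set[str]:
    """Return ids that need not be asked because a prerequisite was answered 'No'."""
    return {
        qid
        for qid, deps in DEPENDS_ON.items()
        if any(answers.get(d) == "No" for d in deps)
    }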

4. Case Study

The framework was applied in four SMEs, referred to here as A, B, C and D for confidentiality reasons, in order to assess its suitability. All of them are involved in a test process improvement project and required a diagnosis of the actual state of their test process before implementing any improvement. To obtain this information, a first assessment was carried out as follows:
- the TMM questionnaire [3] was applied;
- the questionnaire was answered by 2-3 collaborators from each company, all members of the development/test team;
- no evidence of attainment of the model's practices was required.

The first assessment revealed a 67.8% rate of divergence among the answers given by collaborators of the same company. It was therefore concluded that the diagnosis might not reflect the companies' true test process maturity, and that the final results may have been affected by a combination of factors, such as the collaborators' lack of knowledge and the lack of clarity of some questions. These diverging results motivated the adoption of the TMMi model, for which examples were provided and evidence of practices was required. The second assessment was carried out as follows:
- the proposed framework, containing the TMMi questionnaire and the support material, was used;
- the questionnaire was answered by a single collaborator from each company, a member of the test team;
- for each positive answer in the spreadsheet, evidence (artefacts) showed how the company puts into practice what the model requires;
- once all answers were recorded and the evidence associated, company representatives assessed the framework, reporting the problems found and suggesting improvements;
- an audit was carried out, focused on the evidence recorded in the spreadsheet, to check whether the documentation adhered to the practices required by the model.

5. Results

Table 5 compares the maturity level results from the first assessment, which used the TMM model, with those from the second assessment, which used our framework. Satisfaction of a given maturity level by a company is represented by S and non-satisfaction by NS; results that differ between the two assessments are marked with an asterisk (*).

Table 5: Assessment Results

Company | Assessment | Level 2 | Level 3 | Level 4 | Level 5
A | 1 | S | S* | S* | NS
A | 2 | S | NS* | NS* | NS
B | 1 | NS | NS | NS | NS
B | 2 | NS | NS | NS | NS
C | 1 | S* | NS | NS | NS
C | 2 | NS* | NS | NS | NS
D | 1 | NS | NS | NS | NS
D | 2 | NS | NS | NS | NS

Companies A and C obtained different results in the two assessments, with a two-level difference in the case of company A. The comparison also revealed a significant difference in goal achievement (Table 6): the difference reached 69% for company C and 41% for company A. Considering that the second assessment was audited, its results better represent the companies' actual situation in software testing.

Table 6: Goals Achieved

Company | Assessment | % of goals achieved | Variation
A | 1 | 85% | 41%
A | 2 | 44% |
B | 1 | 8% | 8%
B | 2 | 0% |
C | 1 | 69% | 69%
C | 2 | 0% |
D | 1 | 0% | 0%
D | 2 | 0% |

A questionnaire was also used by the companies to assess the framework itself. Each of its questions offered four answer options, presented as rating ranges, of which exactly one should be chosen. Table 7 relates the questions applied to the option selected by each company: the company identifier (A, B, C and D) appears under the rating range it attributed to each question. For example, company A selected the same range (76-100%) for all questions except the third one.

Table 7: Questionnaire Assessment

Question | 0-25% | 26-50% | 51-75% | 76-100%
1 - What was the degree of clarity of the questions? | - | D | C | A, B
2 - What was the degree of clarity of the examples? | - | - | B, C, D | A
3 - What was the level of importance of the examples in understanding the questions? | - | D | A | B, C
4 - What was the level of importance of the examples in associating the evidence with the questions? | - | D | - | A, B, C
5 - What was the level of association between the vocabulary used in the questions and examples and your reality? | - | D | B, C | A

The results show that 80% of the answers fall in the 51-100% range, most of them in the 76-100% range. Company A, the only company that reached at least one maturity level, was the one that rated the framework highest and used it most. All answers in the 26-50% range came from company D (Table 7). As shown in Table 6, this company did not meet any goal in the TMMi assessment. Its feedback also revealed that, early in the assessment, it had concluded that its testing was ad hoc and that there would therefore be no evidence for any question in the questionnaire; i.e. the company could not assess the framework in more detail because it barely used it.
Although defined as part of the framework (Section 3.4), the dependency relationships between questions were not used in the second assessment to eliminate dependent questions. Table 8 shows, for each company, the percentage of questions that would not have to be answered if questions were eliminated on the basis of its No answers. Company D shows a 65% rate, the maximum reduction obtainable through the dependency relationships, since this company answered No to all questions.

Table 8: Reduction via Dependencies

Company | Reduction
A | 26%
B | 37%
C | 60%
D | 65%

6. Conclusions

This paper has argued that software development SMEs are important for the world economy and face a constant need to improve product quality to meet market demands. Considering the importance of testing for producing high-quality software, we created a framework for assessing test maturity based on the TMMi model that is suited to the reality of SMEs. The results show that the objective questionnaire, the support material attached to each question, and the spreadsheet that automatically processes the assessment results enable SMEs to self-assess their maturity in software testing, despite their low maturity levels.

However, in order to make the results more compatible with a company's reality, we suggest that: 1) the questionnaire be answered by a single company representative; 2) this person be a member of the test team or have deep knowledge of the company's test process; and 3) the assessment be evidence-based.

The companies assessed the questionnaire positively: 45% of the assessed items were rated in the 76-100% range, the highest available, and 80% were rated in the 51-100% range. The main obstacle reported was the large number of questions. This can be mitigated by improving the support for eliminating questions based on the previously defined dependency relationships, which would allow a reduction of up to 65% in the number of questions. Comparing the two assessments, we observed a variation of up to 69% in goal satisfaction. We also learned that misleading results may be produced when low-maturity companies carry out a self-assessment without material that supports the attribution of answers or without associating evidence. Our proposed framework addresses precisely these problems and therefore yields more realistic software testing results for SMEs.

The results of this assessment will be used as the basis for the development of a test process adequate for SMEs, one that takes their maturity level into account. The process will be deployed in all four companies through pilot projects, and further assessments will be carried out with the proposed framework to track the companies' maturity evolution. Finally, the framework will be made available to the community free of charge.

7. Acknowledgements

We are grateful to the four enterprises that participated in the survey for their invaluable help in this study. This work was partially supported by FAPEG.

References

[1] M. Q. A. A., S. Ramachandram, and A. M. A. An evolutionary software product development process for small and medium enterprises (SMEs). pages 298-303. IEEE, Oct. 2008.
[2] J. Brodman and D. Johnson. What small businesses and small organizations say about the CMM. pages 331-340. IEEE Computer Society Press, Aug. 2002.
[3] I. Burnstein. Practical Software Testing: A Process-Oriented Approach. Springer-Verlag, New York, NY, USA, 2002.
[4] TMMi Foundation. Test Maturity Model integration (TMMi), version 3.1.
[5] E. Höhn. KITeste - A Framework of Knowledge and Test Process Improvement. PhD thesis, USP, São Carlos, SP, Brazil, June 2011. (in Portuguese).
[6] Software Quality Institute (SQI), Griffith University. Appraisal Assistant, 2007.
[7] Software Engineering Institute (SEI). Capability Maturity Model Integration (CMMI), version 1.3.
[8] M. Khokhar, K. Zeshan, and J. Aamir. Literature review on the software process improvement factors in the small organizations. pages 592-598. IEEE Computer Society Press, May 2010.
[9] C. Laporte, S. Alexandre, and A. Renault. Developing international standards for very small enterprises. Computer, 41(3):98-101, Mar. 2008.
[10] M. A. Montoni, A. R. Rocha, and K. C. Weber. MPS.BR: a successful program for software process improvement in Brazil. Software Process: Improvement and Practice, 14(5):289-300, Sept. 2009.
[11] S. Ng, T. Murnane, K. Reed, D. Grant, and T. Chen. A preliminary survey on software testing practices in Australia. pages 116-125. IEEE, 2004.
[12] Brazilian Association of Software Companies (ABES). Brazilian software market: overview and trends. Technical report, June 2011. (in Portuguese).
[13] Softex. Evaluations published - MPS.BR, Mar. 2012. (in Portuguese).
[14] G. Tassey. The economic impacts of inadequate infrastructure for software testing. May 2002.
[15] T. Tayamanon, T. Suwannasart, N. Wongchingchai, and A. Methawachananont. TMM appraisal assistant tool. pages 329-333. IEEE, Aug. 2011.
[16] T. Varkoi, T. Makinen, and H. Jaakkola. Process improvement priorities in small software companies. page 555. Portland International Conference on Management of Engineering and Technology (PICMET), Aug. 2002.