'Good' Organizational Reasons for 'Bad' Software Testing: An Ethnographic Study of Testing in a Small Software Company.


Slide 1: 'Good' Organizational Reasons for 'Bad' Software Testing: An Ethnographic Study of Testing in a Small Software Company (Article Review)

Slide 2: Background on the Study
Ethnographic study:
- Direct observation
- Interviews
- Transcriptions
General details:
- Examines and evaluates the practicalities of software testing in a small company
- Illustrates that testing is a socio-technical discipline, not just a technical one
- Focuses on systems integration and acceptance testing
- Demonstrates the ambiguity involved in integration and acceptance testing

Slide 3: How the Study Was Conducted
- Observed work as it was happening
- Interviewed employees
- Data collection: note taking, video recordings, photographic recordings, audio recordings
- A total of 30 days of fieldwork across a 9-month period
- 'Ethnomethodological' approach: "This approach is to focus without prior hypotheses on understanding how plans and procedures are implemented in practice, how participants coordinate their work, how they reason about their work and organise their activities as a recognisable social accomplishment."

Slide 4: About W1REsys
- Developers of a 'write once, run everywhere' development environment
- Customers use it to develop XML-based applications for mobile phones and Pocket PCs
- The environment allows programs to run on various different mobile devices
- Targets generic applications over customer-specific ones
- Seven full-time employees, four of them programmers
- Practices elements of XP, but did not always employ pair programming or automated testing
- Customers in various industries (e.g., vehicle repair assistance, couriering and supply delivery)

Slide 5: W1REsys: Handling Requirements
- A fluid approach focused on survival: examine the immediate circumstances
  - What will bring in new customers?
  - What would be useful for current customers?
- The requirements-finding process is controlled by senior members (sales, marketing, training and strategy)
- The on-site "customer" is the customer relationship manager
- No contractual obligation to deliver specific requirements

Slide 6: W1REsys: Unit Testing
- Fully automated unit testing
- Do not practice test-driven development, nor do they test every method
- Tests are developed in a variety of ways:
  - Sometimes they write tests before they code
  - Older tests are adapted for new code after the code is written
  - Entirely new tests are written
- Full regression testing when integrating new code with old code
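
As a concrete illustration of this mix of test-first and test-after work, here is a minimal JUnit sketch. The paper publishes none of W1REsys' code, so the MessageFormatter class and its escapeXml method are hypothetical stand-ins; Java and JUnit 4 are assumed purely because the product targeted MIDP-era devices.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MessageFormatterTest {

    // Hypothetical class under test: invented for illustration only.
    static final class MessageFormatter {
        static String escapeXml(String s) {
            // '&' must be replaced first, or earlier escapes get re-escaped.
            return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
        }
    }

    // A test written before the code existed, pinning down expected behaviour.
    @Test
    public void escapesReservedXmlCharacters() {
        assertEquals("a &lt; b &amp; c", MessageFormatter.escapeXml("a < b & c"));
    }

    // An older test adapted for new code: plain text must pass through untouched.
    @Test
    public void leavesPlainTextUnchanged() {
        assertEquals("hello world", MessageFormatter.escapeXml("hello world"));
    }
}
```

Tests like these, once written, can run in full as a regression suite each time new code is integrated with old.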

Slide 7: W1REsys: Systems Integration Testing
- Performed during the leftover time between build and release; can continue post-release
- Whiteboard lists determine what to test: "we need to test this, this, this"
- Tests are produced during the testing process itself
- Questions for approaching testing:
  - "What seems sensible and possible given what time we have?"
  - "What do we know or think about users and use?"
- Successfully completed tests are reused to create demonstrations, to document new features, and to recruit new customers

Slide 8: Testing Web Services
- The requirement was developed without a request from a customer
- The requirement was investigated in the knowledge that it might be shelved
- The requirement was vague, making it unreasonable to develop a test beforehand
- A two-part solution was developed: a 'primitive' service and a 'complex' service
- Tested on two different types of device: a Pocket PC and a MIDP 2.0 mobile phone
- Tested against free web services
- A number of failures occurred; it was unclear whether the failed tests were due to mishandling the response or to a lack of response
- By the time the issues were sorted out, the testing period was over
- Testing served an additional purpose as documentation and demonstration
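
One way to narrow the ambiguity the testers hit (was the failure in our response handling, or did the service never respond?) is to separate transport timeouts from processing errors. The sketch below is a generic Java illustration, not W1REsys code; the endpoint URL is a placeholder and the 5-second timeouts are arbitrary assumptions.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class WebServiceProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; substitute any free web service under test.
        URL url = new URL("http://example.com/service");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);   // give up if no connection within 5 s
        conn.setReadTimeout(5000);      // give up if no response within 5 s
        try (InputStream in = conn.getInputStream()) {
            byte[] body = in.readAllBytes();
            // Reaching this point means the service *did* respond; any later
            // failure points at our own response handling (e.g. XML parsing).
            System.out.println("Got " + body.length + " bytes, status "
                    + conn.getResponseCode());
        } catch (SocketTimeoutException e) {
            System.out.println("No response from service: " + e.getMessage());
        } catch (IOException e) {
            System.out.println("Transport-level failure: " + e.getMessage());
        }
    }
}
```

Logging the two failure modes separately would have told the testers which side of the boundary each failure sat on, instead of leaving the question open until the testing period ran out.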

Slide 9: Testing the Message Push Server
- The decision was made to rebuild the message push server; the requirement was discovered through conversation
- Preparatory work and discussions about users and how to test
- Testing process:
  - The rebuild took 3 months (though they had estimated 3-4 weeks)
  - A test harness was installed on one machine; 1,000 messages each were then sent from five other machines
  - The initial test was successful
  - The second test had issues: 15 of the 5,000 connections failed
  - A major failure occurred when sending responses back to clients: a single message failure caused the whole system to fail
  - The initial solution was to handle failed messages by re-adding them to the end of the message queue
  - Another bug was discovered in the 'peek and remove' procedure
  - Once this was fixed, the test ran successfully
  - The testing process took 3 weeks
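
The paper describes the harness only at this level of detail, so the following is a speculative reconstruction of its shape: five concurrent senders standing in for the five client machines, 1,000 messages each, and a count of failed connections at the end. The host, port, and message format are all assumptions.

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicInteger;

public class PushLoadHarness {
    static final String HOST = "localhost"; // assumed server location
    static final int PORT = 9000;           // assumed server port
    static final int SENDERS = 5;           // stands in for the five client machines
    static final int MESSAGES = 1000;       // messages per sender

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger failures = new AtomicInteger();
        Thread[] senders = new Thread[SENDERS];
        for (int s = 0; s < SENDERS; s++) {
            final int id = s;
            senders[s] = new Thread(() -> {
                for (int m = 0; m < MESSAGES; m++) {
                    // One connection per message; any exception counts as a
                    // failed connection, mirroring the "15 of 5,000" result.
                    try (Socket sock = new Socket(HOST, PORT);
                         OutputStream out = sock.getOutputStream()) {
                        String msg = "sender=" + id + ";seq=" + m + "\n";
                        out.write(msg.getBytes(StandardCharsets.UTF_8));
                    } catch (Exception e) {
                        failures.incrementAndGet();
                    }
                }
            });
            senders[s].start();
        }
        for (Thread t : senders) t.join();
        System.out.println("Failed connections: " + failures.get()
                + " of " + (SENDERS * MESSAGES));
    }
}
```

Note that this counts only connection-level failures; catching the response-handling failure that crashed the server, where one bad message took down the whole system, would additionally require reading replies back on each connection.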

Slide 10: Observations
- W1REsys' priorities centre on survival:
  - The dynamics of real customer relationships
  - Using limited effort in the most effective way
  - The timing of software releases
  - The need to 'grow the market' for the system

Slide 11: Observations
Requirements testing:
- The fluid nature of W1REsys' requirements makes it impossible to create requirements tests for all requirements
- The adequacy of testing is often unclear until customers use a feature
- The role of testing was to demonstrate requirements rather than to discover defects in the design or implementation
Test coverage:
- The generic design of W1REsys' product makes it impossible to know exactly how a customer will use the system, and thus impossible to define what complete testing would be
- The need to retain customers and to add features that attract new ones made adequate test coverage difficult to achieve
- "Why waste resources on something that cannot be well defined in advance?"

Slide 12: Observations
Automating testing:
- The 'XP customer' is a W1REsys employee, so there are no real "customer-owned" tests
- They prefer general/vague requirements definitions and avoid writing "unambiguous" automated tests
- Automated tests cannot replace manual tests
Test scenario design:
- Test scenarios are informally derived throughout development and testing by imagining how users will use the system
- This requires understanding the customers and their needs
- Generic scenarios are incomplete and fragmented

Slide 13: Conclusions
- Testing problems are largely organizational:
  - Economic pressure to deliver software quickly
  - Increasing complexity
  - Strict quality requirements
  - Volatile system requirements
- It is not always practical to apply software testing research, which focuses on improving testing from a technical standpoint
- Practicality requires designing tests that meet the company's needs: tests that are effective while minimizing time and effort
- Testing is a socio-technical discipline