1 Testing: How much is enough?
Ian Ashworth, Coverity

2 Traditional Software Testing - Objectives
- Ensure the software all works as described in the requirements specification
- Make sure there are:
  - No bugs, especially crash-causing ones!
  - No issues affecting the business logic, control flow etc.
- Everything runs sweetly, with no performance bottlenecks
- We deliver a good user experience
- Oh, and don't take too long: we're shipping in n weeks!

3 Traditional Testing - Result
- Developers tend to do a minimal amount of testing
- The codebase is passed over to QA / test teams
- Bugs are found, but late in the SDLC
- Issues are referred back to Development for advice and fixing
- The cycle continues for another iteration
Consequently:
- A software release could be delayed, or
- Worse still, a product ships with bugs!

4 Testing Methods and Techniques
Objective: find as many errors as you can, without knowing where they are, what they look like, or how many there are.
- Test-driven development: write the test first, then engineer the code until it passes the test (see the sketch after this list)
- Function / unit / module testing
- White box, black box, grey box
- Fuzzing
- End-to-end, system, hardware/engineering tests
- Performance, security, regression, etc.
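
A minimal sketch of that test-first loop, in C++ with a bare assert() harness (the clamp() function and its tests are illustrative inventions, not from the talk):

```cpp
#include <cassert>

// Step 1: write the test first. It cannot even link until some
// clamp() exists, and it fails until clamp() is correct.
int clamp(int value, int low, int high);  // declared, not yet engineered

void test_clamp() {
    assert(clamp(5, 0, 10) == 5);    // value already inside the range
    assert(clamp(-3, 0, 10) == 0);   // below the range -> low bound
    assert(clamp(42, 0, 10) == 10);  // above the range -> high bound
}

// Step 2: engineer the code until the test passes.
int clamp(int value, int low, int high) {
    if (value < low) return low;
    if (value > high) return high;
    return value;
}

int main() {
    test_clamp();  // all asserts pass: this unit's TDD cycle is done
    return 0;
}
```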

5 Testing - When to stop?
When has a sufficient level of testing been reached?
- Zero bugs?
- Exhausted all possible pathways through the code?
- Ticked everything on your checklist?
- Run out of time to execute any further test cases?
- Usability: it all seems to be working well enough?
No. It's always a pragmatic balance of effort and cost versus time: the pursuit of perfection versus protection.

6 Coverity's approach
Recognises that developers are best placed to find and fix bugs:
- They introduced them, knowingly or not
- They know the code well, and the design/architecture too
- It is fresh in their minds as new features are being developed
- Generally, there are more developers than QA staff and testers
How can we better harness this resource? Education, guidance and efficiency = more effective testing.

7 Coverity - Static Analysis ++
Assists with:
- Test case design: inputs, pathways, exit paths etc.
- Locating gaps in a testing strategy
- Test effectiveness: Boolean satisfiability (see the sketch after this list)
- Assessment of risk: the impact of omissions and code change
- Providing useful supporting metrics and metadata
- Enabling governance of issues over a time period: tracking, charting, alerting and reporting
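
As a sketch of the path reasoning this enables (the function below is an invented example, not Coverity output): Boolean satisfiability lets an analyser prove that some branch conditions can never hold together, so certain paths are infeasible and no test case could ever cover them.

```cpp
// A satisfiability check over the two conditions shows the inner
// branch is infeasible: x > 10 && x < 5 has no solution.
int classify(int x) {
    if (x > 10) {
        if (x < 5) {     // unsatisfiable given the enclosing condition
            return -1;   // dead code: unreachable by any input, so a
        }                // coverage gap here is not a testing gap
        return 1;
    }
    return 0;
}
```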

8 Code Coverage - a popular test measure
- White box testing methodology
- Structural: how does the program work?
- Tests behaviour against the code design/function
- Employed during a module testing phase
- Code that has not been covered could contain bugs
- Coverage percentage = (statements executed / statements in the program) x 100 (worked example below)
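
In practice the counts come from instrumentation (with gcc or clang, for example, building with --coverage and reading the results back with gcov); the arithmetic itself is simple. A toy illustration (function and test are invented):

```cpp
// Three executable statements: the if-check and the two returns.
int abs_value(int x) {
    if (x < 0)       // statement 1
        return -x;   // statement 2
    return x;        // statement 3
}

int main() {
    // A single non-negative input executes statements 1 and 3 only:
    // coverage = (2 executed / 3 total) x 100 = ~67%, leaving the
    // negative branch (statement 2) untested.
    return abs_value(7) == 7 ? 0 : 1;
}
```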

9 Code Coverage - is it flawed? What does it achieve?
- Finds areas of the codebase not being called by test cases
- Gaps are spotted without any prior knowledge of the code
- Reliable and repeatable, which is taken to imply correctness
- More test cases increase your coverage percentage
- Bugs are assumed to be associated with control flow
= An indirect measure of testing quality
But what is an acceptable minimum percentage coverage?

10 Code Coverage - Minimum Percentage?
100% coverage? But does this mean no bugs? Is this really achievable?
- Bugs lurk in corner cases
- Tests are only as good as the checks they include
- Diminishing returns for increased effort
What is missing here? (See the example after this list.)
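
The missing ingredient is shown by the classic counterexample below (invented code, not from the slides): a single test achieves 100% statement and branch coverage of this one-line function, yet a corner-case bug survives untouched.

```cpp
// One statement and no branches, so any single call yields
// 100% statement and branch coverage of this function.
int midpoint(int a, int b) {
    return (a + b) / 2;  // overflows when a + b exceeds INT_MAX
}

int main() {
    // This check passes and the coverage report shows 100%...
    bool covered_and_green = (midpoint(2, 4) == 3);
    // ...yet midpoint(INT_MAX, INT_MAX) would overflow: undefined
    // behaviour that no coverage percentage could have revealed.
    return covered_and_green ? 0 : 1;
}
```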

11 Code Coverage - Questionable?
- Suitable minimum percentage: 70% / 80% / 90%? Can you rely on it?
- One size fits all? 100% for one function doesn't necessarily apply to another
- Treats all statements/structures as equally important
- Do all functions need to be tested?
- Should results be in terms of logical modules rather than functions?
- What is the impact of changes to a function or interface?
- What is the consequence of leaving gaps in our coverage?
- What is the Cost of Failure?

12 The Coverity Process
- Define policies: express your goals for testing and quality, and set benchmarks
- Collect data about testing activities: branch and line coverage, static analysis, metrics, metadata
- Analyse this information, with context: relate it back to your policies as violations
- Review test results alongside static analysis and other measures: full/partial violations, defects, kill paths, dead code etc.
- Govern with an automated workflow within the SDLC: triage, assign tasks, remediate issues

13 Coverity Policies
- Set varying coverage thresholds per module/file/function (a hypothetical sketch follows below)
- Note the critical areas: tighter/looser policy metrics
- Complexity (McCabe CCM)
- Ignore debug, logging, kill paths, exception handling
- Identify new code versus legacy code: relate to information within the SCM (two reference dates)
- Impact analysis (compare between test runs)
- Impact age: define change ripples
(Slide diagram: a call graph of functions such as f15, f34, f76... around a module Foo)
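
This is not Coverity's actual policy format; purely as a hypothetical sketch of the idea, a per-function threshold check with exemptions for debug and logging code might look like:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical policy record: thresholds vary per function, and
// debug, logging or kill-path code can be exempted outright.
struct FunctionPolicy {
    std::string name;
    double required;  // minimum acceptable coverage, 0.0 - 1.0
    double measured;  // coverage observed in the latest test run
    bool   exempt;    // e.g. debug, logging, exception handling
};

int main() {
    std::vector<FunctionPolicy> policies = {
        {"parse_request", 0.90, 0.93, false},  // critical area: tight threshold
        {"format_log",    0.50, 0.10, true},   // logging code: exempted
        {"apply_rules",   0.80, 0.62, false},  // violation: 62% < 80%
    };
    for (const auto& p : policies) {
        if (!p.exempt && p.measured < p.required)
            std::cout << "policy violation: " << p.name << " at "
                      << p.measured * 100 << "% (needs "
                      << p.required * 100 << "%)\n";
    }
    return 0;
}
```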

14 Test Advisor - delivers
- Actionable work items for improving quality and test cases: clear violations and omissions, with context for reviewing how important each one is
- Status of existing tests: the code they reach into, and their historical pass/fail status
- Focus for testing where it has the highest impact: the most recent/impacted changes, or a feature branch
- Full visibility into development testing efforts: defect density, technical debt, trends and charts
When all your Coverity Policies pass and show green, you know you're all done testing!

15 Enforce Consistent Quality, Security and Testing Standards, and Identify Risk

16 Monitor Adoption of Development Testing
Track impact on quality, security and testing over time

17 Identify Specific Policy Violations

18 Demo

19 Untested function, recently modified

20 Insufficiently tested function, recently modified

21 Triaging an untested, recently modified function

22 Identifying the author & test cases for a recently modified function requiring additional testing

23 Broad test coverage results across components of a codebase

24 Profile of quality, security and testing

25 Observing source control data inline with a test violation