Avoiding the Pesticide Paradox in Software Testing

Boris Beizer defined the Pesticide Paradox as: “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.”

In practice, this means that a test suite run repeatedly over the same code loses its ability to catch bugs. Worse, those same tests will also miss many of the new bugs introduced into the system by ongoing enhancements and fixes.

With agility gaining momentum, speed to market has become a decisive competitive factor, which makes the implications of this paradox all the more relevant. As we add automated testing to our mix of methods, we tend to rely on those tests indefinitely: we run them frequently and review them sparingly. If you are a tester, you know how easy it is to get attached to the tests you have added to the suite and to fall into the trap of relying on the same set of tests over time. The result is that the invisible bugs go unattended, only to be caught later in the SDLC or to slip into the release, a misstep that costs credibility and revenue.

To prevent these bugs from slipping into a release, or from being caught so late that they cause heavy losses, test suites need constant maintenance and updating, whether they are automated or manual.

But how should one go about keeping the tests relevant?

Constantly monitor changes

A tester who traces every structural and functional change, identifies new scenarios, and updates the existing test cases will increase test coverage and keep the suite aligned with new functionality, improving the chances of finding new defects.
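One lightweight way to monitor change, sketched below under the assumption of a conventional repository layout with src/ and tests/ directories and a release tag such as v1.0.0 (both hypothetical names), is to compare which source modules changed against which test modules changed:

```python
# Sketch: flag source modules that changed since the last release tag but whose
# test files have not been touched, so their scenarios can be reviewed.
import subprocess
from pathlib import Path

def changed_files(since: str = "v1.0.0") -> list[str]:
    """Return paths changed since the given git ref (hypothetical tag name)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", since, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def tests_needing_review(since: str = "v1.0.0") -> list[str]:
    changed = changed_files(since)
    changed_src = {Path(p).stem for p in changed if p.startswith("src/")}
    changed_tests = {
        Path(p).stem.removeprefix("test_")
        for p in changed if p.startswith("tests/")
    }
    # Modules whose code changed but whose tests did not: candidates for new scenarios.
    return sorted(changed_src - changed_tests)

if __name__ == "__main__":
    for module in tests_needing_review():
        print(f"tests/test_{module}.py may need new or updated scenarios")
```

A report like this does not replace the tester's judgement; it only points at the places where the suite is most likely to have gone stale.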

Track the bug statistics regularly

This gives you a clear picture of how effective your tests have been. If a test has not reported a bug in the last few runs, check whether it is worth moving to an archive. Doing this well means responding to regular test feedback, continually reviewing and renewing the tests, and keeping a sharp eye on the suite so that useless test cases do not pile up. Revisit, revise, and renew often, and change your test data.
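As an illustration, here is a minimal sketch of the idea, assuming you keep a simple record of how many bugs each test surfaced per run (the test names and run history below are made up):

```python
# Sketch: review per-test defect yield from recent runs and flag tests that have
# not caught a bug in their last N runs as candidates for the archive.
from collections import defaultdict

# Each record: (test_name, run_id, bugs_found_by_that_test_in_that_run)
run_history = [
    ("test_login", 101, 0),
    ("test_login", 102, 0),
    ("test_login", 103, 0),
    ("test_checkout_discount", 101, 1),
    ("test_checkout_discount", 102, 0),
    ("test_checkout_discount", 103, 2),
]

def archive_candidates(history, last_n_runs=3):
    """Return tests that found no bugs in their most recent last_n_runs runs."""
    by_test = defaultdict(list)
    for name, run_id, bugs in history:
        by_test[name].append((run_id, bugs))
    stale = []
    for name, runs in by_test.items():
        recent = sorted(runs)[-last_n_runs:]
        if all(bugs == 0 for _, bugs in recent):
            stale.append(name)
    return stale

print(archive_candidates(run_history))  # ['test_login']
```

A test flagged here is not automatically useless; the output is a prompt to review it, refresh its data, or retire it deliberately.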

Build variance into tests

This can be done in the design phase itself, where models can be designed to create different paths through, or to, the feature under test. Additional data can be created to cover the alternative flows. The aim is to ensure that the feature is exercised fully and in different ways. It is far easier to create this additional data while designing the tests than to go back and rework the complete set later, which makes this a good way to avoid the pesticide paradox.
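For example, with pytest this can look like the parameterized sketch below; apply_discount and the shop.pricing module are hypothetical stand-ins for whatever feature is under test:

```python
# Sketch: build variance into a suite with parameterized cases plus lightly
# randomized data, assuming a hypothetical apply_discount(price, code) function.
import random
import pytest

from shop.pricing import apply_discount  # hypothetical module under test

# Fixed cases cover the main path plus alternative flows (no code, boundary value).
CASES = [
    (100.0, "SAVE10", 90.0),   # happy path
    (100.0, "", 100.0),        # no code: price unchanged
    (0.0, "SAVE10", 0.0),      # boundary: free item
]

@pytest.mark.parametrize("price,code,expected", CASES)
def test_discount_known_paths(price, code, expected):
    assert apply_discount(price, code) == pytest.approx(expected)

def test_discount_varied_prices():
    # Fresh data on every run keeps the inputs from going stale; the seed is
    # included in the failure message so a failing combination can be reproduced.
    seed = random.randrange(10_000)
    rng = random.Random(seed)
    for _ in range(20):
        price = round(rng.uniform(0.01, 500.0), 2)
        result = apply_discount(price, "SAVE10")
        # Assumed property: a discount never raises the price or makes it negative.
        assert 0 <= result <= price, f"unexpected result for price={price}, seed={seed}"
```

The fixed cases document the known paths, while the varied data keeps the same feature from being exercised with the exact same inputs run after run.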

Go for Exploratory Testing

Exploratory tests are not identified in advance. Freed from dependence on scripted tests, exploratory testing draws on the breadth and depth of the tester’s imagination and product knowledge. This helps find more bugs than scripted testing alone and covers scenarios and cases that are normally ignored. After all, mechanized processes cannot think, but testers can. The human element should therefore be incorporated to enhance testing effectiveness and to escape the trap of repeating the same automated tests again and again.

Conclusion

To recap, there is no “foolproof” test suite that can discover all the bugs without ever being modified. If you rely on any one suite indefinitely, you will never notice how worn out it has become until a miserable product release makes it obvious.

A better way is to keep tabs on changes and review the suite regularly, adding more scenarios and cases as required. Keeping a hawk’s eye on bug statistics will tell you how effective your test suite really is. You can also keep adding extra test data and alternative paths during the design phase to build variance into the tests. Finally, human intelligence should be added to the testing process, since exploratory testing can surface bugs through cases and scenarios that scripted tests are unable to identify.

It also does no harm to pause and reflect, take peer reviews, or even start fresh if a component changes substantially. All of this helps contain the impact of the pesticide paradox; there is no guarantee that every bug will be caught this way, but a better and more efficient outcome can certainly be expected.