Emerging Trends in QA & Testing

The software testing space has undergone a significant transformation as technology advances at a rapid pace. Big Data, virtualization, and cloud-based applications are evolving quickly, and hyper-connected devices are shaping our future. Meanwhile, trends like mobile app testing, crowdsourced testing, and context-driven testing have reframed the testing and development landscape.

On top of this, tough competition has put pressure on testing teams to manage faster product releases without compromising on quality. As a result, traditional testing methods have taken a backseat while newer QA and testing trends rise to the challenge.

Let's look at the latest QA trends currently influencing the market.

Increased Automation Levels

In response to deployment-velocity requirements and the need for wider coverage, testing teams are adopting automation wherever possible, and this trend is set to grow. Meeting future demands for speed and quality will require more robust and innovative test automation strategies that shrink test execution turnaround and bug detection times.
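As a simple illustration of the kind of check teams automate first, here is a minimal pytest sketch of a smoke test; the endpoint and response fields are hypothetical placeholders, not any particular product's API.

```python
# A minimal pytest sketch of an automated smoke check.
# The base URL and the expected JSON field are invented placeholders.
import requests

BASE_URL = "https://api.example.com"  # placeholder service under test

def test_health_endpoint_returns_ok():
    """Fast smoke check suitable for running on every deployment."""
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200
    assert response.json().get("status") == "ok"
```

Checks like this run in seconds, which is why they are the first candidates for automation when release velocity matters.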

Agility & DevOps Will be the Norm

Agility as a concept has been around for quite a while; in practice, however, agility in testing is still not highly evolved, and the same holds for DevOps. Nevertheless, with delivery cycles getting shorter, traditional models are taking a backseat. Businesses that apply Agile and DevOps in their true sense will be the future frontrunners, with continuous integration (CI) and continuous delivery (CD) becoming central components of application lifecycle management.
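To make the CI/CD point concrete, here is a minimal sketch of a pipeline quality gate written in Python; it assumes a pytest-based suite, and the paths and flags are illustrative only, not a specific pipeline's configuration.

```python
# ci_gate.py - a minimal sketch of a CI quality gate, assuming a
# pytest-based suite; the test path and flags are illustrative only.
import subprocess
import sys

def main() -> int:
    # Run the automated suite; any failing test fails the pipeline.
    result = subprocess.run(["pytest", "tests/", "--maxfail=1", "-q"])
    if result.returncode != 0:
        print("Quality gate failed: blocking the release candidate.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```

A CI server would run such a gate on every commit, so a red suite stops the release candidate before it reaches delivery.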

TCoEs Will Grow in Number

The burgeoning business requirements for speed, quality, and cost-effectiveness have led testing companies to set up Testing Centers of Excellence (TCoEs), which are likely to grow in number in the near future. Their aim is to establish highly standardized QA and testing practices that deliver near zero-defect applications to clients and contribute to a positive shift in organizational culture. This is a must for giving companies the requisite competitive edge in the current market.

Security Will be a Big Concern

With the growth of cloud computing, mobility, and IoT, the focus on end-to-end security solutions will be paramount. Sensitive and confidential online data is highly vulnerable to cyber-attacks, forcing companies to dig deeper to eliminate leaks, code errors, and security holes. Open source security tools will be in demand, and security testing may evolve into a separate specialization dealing with the continuously changing nature and severity of attacks.
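One small, automatable slice of security testing is checking HTTP responses for missing security headers. Below is a minimal sketch; the target URL is a placeholder, and real security testing of course goes far beyond header inspection.

```python
# A minimal sketch of an automated security smoke check that flags
# missing HTTP security headers. The URL is a placeholder example.
import requests

EXPECTED_HEADERS = [
    "Strict-Transport-Security",  # enforce HTTPS on future visits
    "X-Content-Type-Options",     # block MIME-type sniffing
    "Content-Security-Policy",    # restrict where resources load from
]

def check_security_headers(url: str) -> list[str]:
    """Return the expected security headers the response lacks."""
    response = requests.get(url, timeout=10)
    return [h for h in EXPECTED_HEADERS if h not in response.headers]

if __name__ == "__main__":
    missing = check_security_headers("https://example.com")
    print("Missing security headers:", missing or "none")
```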

Context-Driven Testing Will Rise

This trend is emerging slowly but is likely to grow. Increasing diversity and device integration, which will only rise further, make it difficult for QA teams to rely on a single testing strategy. Teams must account for changing contexts while ensuring wider test coverage from varied angles, which is exactly what context-driven testing addresses.

Crowdsourced Testing Will Witness a Surge

Sophisticated software carries its own development and testing expenses, and in today's complex scenario companies may not be equipped with all the requisite testing resources, or may lack the budget required to test software in varied environments. Crowdsourced testing helps such companies manage costs while maintaining testing quality. The trend is gaining momentum and is likely to grow as testing requirements become more multifaceted.

Manual Testing Will Always Remain in Demand

Though automation will be critical to faster product releases, manual testing will remain an integral part of software testing. The wisdom, judgment, and experience of testers cannot be replaced by automation. Testers will, however, need to learn additional technical skills to remain competent.

Concluding Thoughts

Awareness of these emerging trends will help you prepare for upcoming testing challenges. With a readiness to learn, grow, and adopt them, you can plan, strategize, and navigate your future testing processes efficiently. These trends are likely to accelerate further with virtualization, predictive analytics, and machine learning, so keep a watch on the latest developments to remain competitive. Happy testing!

Is Test Plan a Dead Document in an Agile Environment?

Does the Agile Manifesto's value of 'working software over comprehensive documentation' mean no documentation at all? No, not at all!

However, in a competitive environment where speed is crucial, a number of teams have abandoned test plans altogether. Test plans have acquired a bad reputation as time-killers: they are hard to maintain, nobody seems to read them once the project is signed off, and even during the project they tend to become obsolete halfway through.

Yet another challenge is that these documents are rarely reviewed and are mostly copies of previously created plans, with no new insight or critical thinking involved.

That said, a hard question remains: without a clear outline to follow, isn't the project going to run into trouble?

This suggests we need to return to test planning, but with a different approach, one more in sync with the era of Agile, Scrum, and Lean. Heavy, thesis-like documents may be outdated, but test plans are not. Test plans with some amount of documentation are still required.

But first, let's understand what a test plan entails.

A test plan begins with a brainstorming session held before execution. Behind the document lies a thought process that identifies and defines the scope, approach, resources, and schedule of the intended test activities.

This exercise determines the types of tests needed over the course of development by dividing tasks according to needs, goals, and test types, and accordingly clarifies how much automated and manual testing will be required.
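As a sketch of how lightweight such a plan can be, the outline might even live as structured data next to the code; every field and value below is an invented example, not a prescribed schema.

```python
# A lightweight test plan captured as structured data rather than a
# heavy document - a sketch only; all fields and values are examples.
test_plan = {
    "scope": ["checkout flow", "payment gateway integration"],
    "out_of_scope": ["legacy admin console"],
    "approach": {
        "automated": ["regression", "smoke", "API contract"],
        "manual": ["exploratory", "usability"],
    },
    "resources": {"testers": 3, "environments": ["staging", "UAT"]},
    "schedule": {"per_sprint": "regression plus new-story tests"},
}
```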

Coming back to the current scenario, we need to do this in light of the collaborative spirit of agile, where things are more likely to be discussed and agreed upon at a daily standup, ensuring that everyone is on the same page and things move quickly.

In such a scenario, test plans must be slimmed down for agility, but they are still needed to keep teams from getting so focused on low-level user stories that they ignore the bigger picture. A test plan helps teams shed a myopic view and see the real, larger picture.

Likewise, agreements on special test types that are explicitly included or excluded should be communicated, collaborated on, and documented for transparency and speed. This part can be kept lightweight, but it cannot be dismissed altogether, for a sense of direction is required.

Besides, as agility has moved the process from a few teams to multiple teams working on one release, gaps, communication breakdowns, and integration issues are bound to happen, making it even more important to have a test plan with project specifications to refer to.

Hence, the main tenet is to keep the conversation moving around the test planning process and the right sequence of activities so as to provide a clear path, and a basic test plan does just that.

In addition, test plans are still required for compliance with regulatory agencies, for internal groups during an audit, and for contractual formalities where plans must be presented to the client.

In a nutshell, test plans cannot be done away with; however, they must be factual, precise, and short to fit the agile environment, and should at least contain the agreed-upon team plans, which are not bound to change very frequently, along with the special test inclusions and exclusions.

Moreover, if teams do not refer to this document regularly, or if it is never created at all, they may face issues post-release with no proper record to refer back to. Hence, the test plan is a living document rather than a dead one, and it holds significant value even in an agile framework.

Avoiding the Pesticide Paradox in Software Testing

Boris Beizer defined the Pesticide Paradox as: “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.”

This simply means that as the same test suite is run multiple times, it becomes ineffective at catching bugs. Moreover, these tests will also fail to catch new bugs introduced into the system by recurring enhancements and fixes.

With agility gaining momentum, speed to market is becoming the decisive factor for gaining a competitive edge, making the implications of this paradox all the more relevant. As we add automated testing to our mix of testing methods, we start relying on those tests indefinitely: we run them frequently and review them sparingly. If you are a tester, you know what it means to get attached to the tests you have added to the suite and to fall into the trap of relying completely on the same set of tests over time. The invisible bugs are left unattended, only to be caught later in the SDLC or to slip into the release, a faux pas leading to loss of credibility and revenue.

To prevent these bugs from being released, or from being caught late at great cost, test suites need constant maintenance and updating, whether automated or manual.

But how should one go about keeping the tests relevant?

Constantly monitor changes

A tester's ability to trace all structural and functional connections, identify new scenarios, and update existing test cases increases test coverage, supports new functionality, and thereby improves the chances of finding new defects.

Track the bug statistics regularly

Tracking bug statistics gives you a clear understanding of how effective your tests have been. If a test has not reported a bug in its last few runs, check whether it is worth moving to an archive. This requires responding to regular test feedback by continually reviewing and renewing tests, and keeping a sharp eye on the suite to remove the useless test cases that may be piling up. Revisit, revise, and renew often, and change your test data.
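A sketch of what this can look like in practice: mine your test-run history and flag tests that have not caught a bug in their last N runs. The data shape below is invented for illustration; real history would come from your test management or CI system.

```python
# Flag "stale" tests - ones with zero bugs found in their last N runs -
# as candidates for review or archiving. Data shape is illustrative.
from typing import Dict, List

def find_stale_tests(history: Dict[str, List[int]], window: int = 10) -> List[str]:
    """history maps a test name to bug counts per run, newest last.
    Return tests that found zero bugs across their last `window` runs."""
    return [
        name for name, bug_counts in history.items()
        if len(bug_counts) >= window and sum(bug_counts[-window:]) == 0
    ]

runs = {
    "test_login": [1] + [0] * 10,                     # stale candidate
    "test_checkout": [0, 0, 1, 0, 0, 0, 2, 0, 0, 1],  # still earning its keep
}
print("Review or archive:", find_stale_tests(runs))
```

Flagged tests are not deleted automatically; the statistics only tell you where to point your review effort.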

Build variance into tests

Variance can be built in during the design phase itself, where models are designed to create different paths through, or to, the feature under test. Additional data can be created to feed the alternative flows. The aim is to ensure that the feature is fully exercised in different ways. It is certainly easier to create the additional data while you are in the design process than to review the complete set of tests later, which makes this a good way to avoid the pesticide paradox.
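As a minimal sketch of building variance into test data, each run below draws fresh values instead of replaying one frozen fixture; the field names are illustrative, and logging the seed keeps any failure reproducible.

```python
# Generate varied test inputs per run so the suite does not exercise
# the same frozen data forever. Field names are invented examples.
import random
import string

def make_user(rng: random.Random) -> dict:
    name_len = rng.randint(1, 20)
    return {
        "name": "".join(rng.choices(string.ascii_letters, k=name_len)),
        "age": rng.choice([0, 17, 18, 65, 120]),  # boundary values
        "locale": rng.choice(["en_US", "de_DE", "ja_JP"]),
    }

seed = random.randrange(2**32)
rng = random.Random(seed)
print(f"seed={seed}")  # record the seed so failures can be replayed exactly
for _ in range(5):
    print(make_user(rng))
```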

Go for Exploratory Testing

Exploratory tests are not identified in advance. Free from dependence on scripted tests, exploratory testing draws on the breadth and depth of the tester's imagination and product knowledge. This helps find more bugs than scripted testing alone and covers scenarios and cases that are normally ignored. After all, mechanized processes cannot think, but testers can. The human element should be incorporated to enhance testing effectiveness and to escape the trap of repeating the same automated tests again and again.

Conclusion

To recap, there is no foolproof test suite that can discover all the bugs without ever needing modification. If you rely on any one suite for eternity, you will never know how worn out it has become until a miserable product release reveals it.

A better way is to keep tabs on changes and review the suite regularly, adding more scenarios and cases as required. Additionally, a hawk-eye on bug statistics will tell you how effective your test suite is. You can keep adding extra test data with alternative paths during the build phase to build variance into the tests. Finally, human intelligence should be added to the testing process, as exploratory tests help find bugs through cases and scenarios that scripted runs cannot identify.

Moreover, it does no harm to pause for reflection, take peer reviews, or even start fresh if a component changes substantially. All of this helps control the impact of the pesticide paradox; there is no guarantee that every bug will be caught this way, but a better and more efficient outcome can certainly be expected.

A Journey to the IoT World-II

This is the second part of a three-part series.

Testing Challenges in an IoT Framework

Continuing from our previous blog, we may rightly define the Internet of Things (IoT), our 'cobweb', as a gigantic new tech-wave that disrupts existing technologies and has no apparent parallel at present or in the near future. But what is so unique about it?

It is not an ordinary cobweb whose threads are only superficially connected; rather, each thread can sense the activities of every other thread and communicate with it in real time. Put differently, IoT means flawless communication among devices across internal and external environments through the real-time exchange of data and split-second information, enabling intelligent decision-making. Sounds fascinating? It certainly is exciting for users, but not so appealing for the testing world. Let us understand why.

Dealing with an Avalanche of Internet-Enabled Devices

The IoT framework adds to the existing heap of devices, making testing on every real device a sheer impossibility. A wide range of traffic patterns, big data, interface types, operating systems, networks, locations, and device-specific features poses a complex matrix of possible testing scenarios, making the QA task highly sophisticated and challenging.
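A sketch of how teams tame such a matrix: enumerate the full combinatorial space, then sample a budgeted subset to run. The dimension values below are illustrative, and dedicated pairwise tools (for example, the allpairspy library) can cut the space more systematically.

```python
# Enumerate a device/OS/network test matrix, then sample a manageable
# subset, since exhaustive runs are impossible at IoT scale.
# All dimension values are illustrative placeholders.
import itertools
import random

devices = ["sensor-A", "gateway-B", "wearable-C"]
os_list = ["Android", "iOS", "RTOS"]
networks = ["WiFi", "BLE", "LTE", "LoRa"]

full_matrix = list(itertools.product(devices, os_list, networks))
print(f"Full matrix: {len(full_matrix)} combinations")  # 3 * 3 * 4 = 36

random.seed(7)  # fixed seed so the sampled slice is reproducible
for combo in random.sample(full_matrix, k=8):
    print("schedule run:", combo)
```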

Difficulties in Ensuring Hyper-Connectivity Across the Multi-Layered IoT Architecture

With a multitude of sensors and actuators collecting huge chunks of data across multiple networks, dynamically collating and displaying streams of data in real time can cause storage-and-analysis paralysis. QA testing for device interoperability and seamless user interaction will therefore require numerous tests running over longer time spans to ensure reliability, compatibility, and security, which in turn hits the product's time to market. Besides, can security be ensured even after all that?

Security and Compatibility Concerns

The constant inflow of data streams makes data safety crucial. Verifying that data does not leak when transmitted from one device to another, and that it is properly encrypted, will require comprehensive testing solutions. Moreover, resolving compatibility issues when integrating various controller devices into existing systems for data generation is another challenge. This will not be a straightforward task; it will demand substantial knowledge and understanding, along with time considerations.

Hence, security issues and testing for backward compatibility with upgraded versions will be major areas of unrest for testers, especially when speed to market matters and a trade-off is not an acceptable option.
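One narrow, automatable encryption-in-transit check is confirming that an endpoint completes a TLS handshake with certificate validation. The sketch below uses Python's standard library; the host is a placeholder, and full IoT security testing covers far more than this.

```python
# Verify a device endpoint completes a TLS handshake with certificate
# validation and report the negotiated protocol. Host is a placeholder.
import socket
import ssl

def check_tls(host: str, port: int = 443) -> str:
    context = ssl.create_default_context()  # validates certificates by default
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

if __name__ == "__main__":
    print("Negotiated protocol:", check_tls("example.com"))
```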

An HP study reveals that 70 percent of IoT devices are vulnerable to attack, with IoT devices averaging 25 vulnerabilities per product, indicating an expanding attack surface for adversaries (source: Android Authority).

This reiterates the need for thorough end-to-end agile testing solutions. Is it easy? Let’s see.

No Substitute for Agility

The need for faster releases will pull agility into the mainstream. Both automated and manual testing may be required for IoT apps, but testers and organizations stuck with slow, traditional waterfall models will not survive without updating themselves.

Speed to market will be the key, and automation and better communication will demand radical changes in current testing approaches as well as in the organizational set-up. What does this imply?

Challenges in Adopting DevOps

DevOps will become the norm, as teams will be required to work together more seamlessly and communicate quickly to mitigate the higher technological risks. This will be a major bottleneck for traditional organizations, which will need a complete turnaround, not just in technological terms but also in how they deal with change. The need for agility and DevOps further implies increased dependency on open source frameworks that enable faster and more thorough testing across multiple platforms and devices.

Challenges of the Current Open Source Frameworks

Current open source frameworks may not be able to cope with increased platform fragmentation and future testing needs. They require testers to do considerable work around test automation development and framework setup, a mismatch with the sprawling requirements of the rapidly growing IoT sector.

Setting up a suitable framework for agile testing demands fast, continuous testing that keeps pace with rapid development and quick release patterns. With IoT this becomes even more complex, leading to longer test cycles that defeat the need for agility. It can also consume a huge chunk of a company's testing budget, posing a challenge particularly for small testing companies.

Can Testing Companies Manage their Budgets?

Acquiring next-gen automation tools to ensure faster, shortened SDLCs, together with the need for elaborate testing in the IoT context, could mean extensive costs for the company, especially on hardware and test infrastructure.

In addition, companies shifting from old traditional approaches to Agile and DevOps will need to spend heavily. Finally, if testers are not well trained, they will not be able to choose the right tools or use them wisely, adding further cost to the company.

Companies Will Need Skilled Testers

A lack of skills can burn a big hole in a testing company's budget. Emerging technologies like IoT demand new skill sets, which may require changing the current workforce or investing in extensive training, both of which imply higher costs.

Conclusion

In a nutshell, with the adoption of IoT, platform and device fragmentation will multiply testing complexities. Let's highlight the major problem areas:

  • Security will be a big challenge, and despite the need for longer test runs, speed to market will remain a priority, posing a trade-off.
  • Companies will be compelled to adopt Agile and DevOps, and manual testers will not survive without updating their skills.
  • The current open source frameworks will not be sufficient, and with the increase in the number of internet-enabled devices, platform and device fragmentation problems will grow.
  • The changing landscape, increased budgetary requirements, and the need for a radical change in management mindset will be of great concern, especially for small and mid-sized testing companies, some of which may be forced out of business.

Can testing companies prepare themselves before the IoT storm hits them in the face? We will sneak into probable solutions to these testing challenges in our third and final part: Managing the IoT Storm - Probable Solutions to the Testing Ordeals.