Service Virtualization In Software Testing

The complexity of the current software testing market impels businesses to strive for quality software delivered quickly and economically. DevOps and Agile workflows have made the software testing ride much smoother than traditional practices by introducing automation and enabling better communication, collaboration, and transparency. However, waiting on dependent components can stymie even the best of approaches, and this is where Service Virtualization (SV) can help speed things up.

What Is Service Virtualization

Service Virtualization involves using virtual services that emulate the behavior of essential components, enabling frequent and comprehensive testing even when key components of your system architecture are unavailable. Testing teams get a first-hand testing platform equipped with all the components of a real production environment, enabling them to test component-driven applications such as independent APIs, SaaS-based apps, and service-oriented architectures (SOAs).

Service Virtualization In Relation To Stubbing and Mocking

Modern applications are complex and rely on numerous dependent services. Combined with the growing complexity of software functionality, the rise of Agile software development has made it increasingly difficult for testers to manually develop the number, scope, and variety of stubs or mocks required to complete testing tasks in modern enterprise application development scenarios.

Service Virtualization should not be confused with unit testing using stubs and mocks, which are mere workarounds compared with properly architected SV technology. With stubs and mocks, the test suite simply bypasses the unavailable components, often leaving vital components out of the testing sphere until final end-to-end testing is conducted just before release. The major advantage of SV is that testing teams can test application behavior incrementally, against realistic virtual dependencies, before all components are fully available. This eliminates some of the major disadvantages of stubs and mocks, making SV a valuable asset for testing teams.
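To make the distinction concrete, here is a minimal, hedged sketch in Python (the pricing functions and endpoints are hypothetical, and real SV tools record and replay far richer behavior than this). A mock replaces the dependency inside the test process, whereas a virtual service runs as a separate stand-in endpoint that the code under test calls over the network, exercising its real integration paths:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from unittest import mock

# --- Hypothetical code under test --------------------------------------------
def fetch_price(base_url: str) -> float:
    """Call a pricing API that may be unavailable in a test environment."""
    with urllib.request.urlopen(f"{base_url}/price") as resp:
        return json.loads(resp.read())["price"]

def total_with_tax(base_url: str, quantity: int, tax_rate: float = 0.2) -> float:
    return fetch_price(base_url) * quantity * (1 + tax_rate)

# --- 1) Mock: the dependency is bypassed inside the test process --------------
def test_total_with_mock():
    with mock.patch(f"{__name__}.fetch_price", return_value=10.0):
        assert total_with_tax("http://ignored", quantity=2) == 24.0

# --- 2) Virtual service: a stand-in endpoint emulates the pricing API, so the
#        real HTTP and parsing code paths are exercised ------------------------
class FakePricingService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"price": 10.0}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def test_total_with_virtual_service():
    server = HTTPServer(("localhost", 0), FakePricingService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        assert total_with_tax(f"http://localhost:{server.server_port}", quantity=2) == 24.0
    finally:
        server.shutdown()
```

Because the virtual service sits where the real component would, it can be shared across teams and reused for integration, performance, and exploratory testing, which is where SV pays off over in-process mocks.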

Advantages of Service Virtualization in Software Testing

Let us highlight some of the key benefits Service Virtualization offers:

Speedy Delivery: In the current Continuous Delivery (CD) scenario, testing needs to occur alongside development, and this is especially desirable in the production of heterogeneous systems involving multiple layers of interdependent components, APIs, and third-party apps. It is no longer feasible to wait for QA teams to give the green signal for each and every component to be market ready; instead, the behavior of the connected components can be understood in a demo environment using SV. This leads to reduced timeframes and shorter release cycles. This is further validated by the 2015 voke Research survey, in which 34% of participants experienced a 50% or greater reduction in test cycle times, while 40% of participants saw their software release cycles decrease by 40% or more using SV.

Access to Otherwise Unavailable Resources: A complete end-to-end test can be conducted even when the dependent system components (third-party apps, APIs, and so on) of the app under test cannot be properly accessed or configured for testing, because SV simulates these dependencies. Moreover, almost all kinds of scenarios can be tested using SV, including varying levels of functionality, performance, and maintenance.
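For instance, varying performance and availability can be emulated by giving the stand-in service configurable latency and failure knobs. A small illustrative sketch, continuing the hypothetical pricing service above (the specific knobs are assumptions, not a standard SV feature set):

```python
import random
import time
from http.server import BaseHTTPRequestHandler

class ConfigurableFakeService(BaseHTTPRequestHandler):
    """Stand-in dependency with tunable latency and failure rate (hypothetical knobs)."""
    latency_seconds = 2.0   # emulate a slow third-party API
    failure_rate = 0.1      # emulate intermittent outages

    def do_GET(self):
        time.sleep(self.latency_seconds)          # performance scenario
        if random.random() < self.failure_rate:   # availability scenario
            self.send_response(503)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"price": 10.0}')
```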

Reduced Costs: Operational costs can be reduced significantly through a planned and systematic approach that shortens test environment configuration time, eases test environment access and setup, and eliminates interface dependencies. Moreover, since each component can be tested individually without waiting for the complete assembly, unit and regression testing can take place sooner and be more complete, and bugs and performance issues can be identified long before integration or user acceptance testing. Resolution therefore becomes possible early in the SDLC, saving huge remediation costs, and infrastructure and resource costs are also significantly reduced. This is further supported by an HPE Service Virtualization case study that reports cost savings of £1.94 million through SV.

Reduced Business Risks and Increased ROI: With the ability to test early and often, defects are exposed when they are easiest and least costly to fix. Early detection of bugs means a lower risk of defects slipping into the final product and faster delivery, ensuring that businesses stay ahead of their competition in a cost-effective way. This reduces the business risk of product failure and offers a superior ROI through speedy product delivery. The HPE Service Virtualization case study reports outstanding value for money using SV, yielding an ROI of 88.6%.

Better Quality: Actual product deployment scenarios can be mimicked with SV, making it easier for QA teams to identify issues and failures before the product goes live for users. Development errors are caught in good time through the shift-left approach and with enhanced scalability, ensuring a robust end product. As per the 2015 voke survey, 36% of participants reported a reduction in production defects of more than 41% by adopting SV, while 46% of participants experienced a more than 41% reduction in total defects, resulting in a superior quality product.

Service Virtualization, hence, reduces the time, effort, and cost of delivering secure, reliable, and compliant software by eliminating numerous software testing constraints. It is, therefore, a smart investment for software companies, culminating in measurable, tangible benefits. Happy testing!

Exploring the Business Value of QA

Transformative technologies are disrupting business as usual. Companies that fail to adapt and transform are likely to go out of business soon. Changes in regulatory policies and the technology landscape, globalization, rapidly evolving business needs, and dynamic customer demands have all necessitated robust testing programs across the application development ecosystem. As such, QA and testing service providers must offer testing services that can scale to support diverse application portfolios and quickly adapt to changing business needs. This has led to a big shift in the testing realm, and in the role of QA.

The criticality of QA in ensuring that a product is ready to go live is well recognized by IT organizations. Traditionally, the role of QA was restricted to serving as a safety net that catches bugs at the bottom of the waterfall; the introduction of Agile methodologies completely changed that, and QA teams are now involved throughout the SDLC. QA gets involved early in the project and has a large influence on business metrics.

The value that QA brings to businesses can be summarized as follows:

Articulating Business Value in the Entire Testing Process

QA teams comprehend business objectives by involving all the stakeholders in the process. They play a vital role by asking relevant questions and getting the requisite clarity about project requirements. They then map these project objectives to business metrics or KPIs. These KPIs are standardized, and stakeholder feedback is incorporated through constant dialogue. Based on the KPIs, a guideline for business process data is established. This lays a solid foundation for future work, against which data captured from the actual project is reviewed and measured.

Overall, this ensures that KPIs are oriented towards the business goals, and business goals are tied to the testing process all the time.

Delivering Fast Business Value

Speed to market, high quality and reliability are mandatory to meet the expectations of a digitally empowered consumer.

To fulfill this objective, business-driven QA teams work towards achieving maximum test coverage in optimal time through risk-based testing, as sketched below. Wherever feasible, test artifacts from a previous release are reused. Additionally, historical data and pattern analysis are used to predict defects accurately and early. Finally, with the adoption of Agile testing methodologies, CI, CD, and DevOps, overall delivery time has come down, giving businesses a competitive edge.
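As an illustration of what risk-based testing can look like in practice, here is a minimal sketch (the scoring model, weights, and test names are assumptions for illustration, not a standard): each test is scored by the likelihood of failure in the area it covers and the business impact of such a failure, and the riskiest tests run first within the available time budget.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    likelihood: int   # 1-5: how likely this area is to break (churn, defect history)
    impact: int       # 1-5: business impact if it does break
    minutes: int      # estimated execution time

def prioritize(tests: list[TestCase], time_budget_minutes: int) -> list[TestCase]:
    """Select the riskiest tests first, within the available time budget."""
    ranked = sorted(tests, key=lambda t: t.likelihood * t.impact, reverse=True)
    selected, used = [], 0
    for t in ranked:
        if used + t.minutes <= time_budget_minutes:
            selected.append(t)
            used += t.minutes
    return selected

suite = [
    TestCase("checkout_payment", likelihood=4, impact=5, minutes=30),
    TestCase("profile_avatar_upload", likelihood=2, impact=1, minutes=15),
    TestCase("login_sso", likelihood=3, impact=5, minutes=20),
]
print([t.name for t in prioritize(suite, time_budget_minutes=50)])
# -> ['checkout_payment', 'login_sso']
```

In a real program the likelihood scores would typically be derived from code churn and defect history rather than assigned by hand.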

Reduced Cost and Faster ROI

Businesses look for fast and scalable solutions while trying to keep costs to a minimum. Early lifecycle validation, or shift-left testing, performed by the QA team focuses on validating the deliverables of upstream lifecycle processes. This reduces the effort and cost of fixing bugs compared with doing so at a later stage, ultimately leading to reduced time to market and a good-quality end product. Moreover, progressive automation is helping to further streamline release processes by initiating automation at an early stage, optimizing QA effort and leading to a faster end product and hence quicker ROI.

Incorporating Changing Business Priorities

The role of QA is critical in dealing with changing business priorities because the business impact of releasing buggy or obsolete software is immense. Continuous improvement in a timely manner is vital for responding to business changes, and QA is responsible for ensuring these changes are incorporated through continuous communication with stakeholders. QA also verifies that significant quality parameters are analyzed precisely to measure the outcome of key business strategies.

This provides an overview of the value that QA brings to businesses. This value can be further increased when a company employs highly skilled testers who are also risk-aware. Future-facing businesses need testers who can help fuel innovation, are open to learning new techniques, and believe in customer-centric testing.

Investing in strong QA teams will ultimately strengthen a business's credibility in the market, as it becomes able to deliver high-quality, reliable products at lower cost and with faster time to market. According to the World Quality Report, QA and testing budgets are predicted to soar to 40% of development costs by 2019, and 52% of IT teams cite a higher number of releases as the reason for higher QA budgets.

Strategically investing in building a team of skilled testers will help set high-performing companies apart from the competition by enhancing user experience and engagement. Companies that neglect the role of QA will likely perish.

Emerging Trends in QA & Testing

The software testing space has undergone a significant transformation as technology advances at a rapid pace. Big Data, virtualization, and cloud-based applications are evolving quickly, and hyper-connected devices are shaping our future. Besides this, trends like mobile app testing, crowdsourced testing, and context-driven testing have completely reframed the testing and development landscape.

On top of this, tough competition has put pressure on testing teams to manage faster product releases without losing focus on superior quality. As a result, traditional testing methods are taking a backseat while the latest QA and testing trends rise to the challenge.

Let’s observe the latest QA trends that are currently influencing the market.

Increased Automation Levels

In response to deployment velocity requirements and the need to ensure wider coverage, testing teams are adopting automation wherever possible, and this trend is set to rise. Shrinking test execution turnaround and bug detection times will demand a more robust and innovative test automation strategy to meet the future need for speed and quality.

Agility & DevOps Will be the Norm

Agility as a concept has been around for quite a while; however, in terms of application, agility in testing is still not highly evolved, and neither is DevOps. Nevertheless, with delivery cycles getting shorter, traditional models are taking a backseat, and businesses that apply Agile and DevOps in their true sense will be the future frontrunners, with CI and CD becoming central components of the application lifecycle management process.

TCoEs Will Grow in Number

Burgeoning business requirements for speed, quality, and cost-effectiveness have led testing companies to set up Testing Centers of Excellence (TCoEs), which are likely to grow in number in the near future. Their aim is to establish highly standardized QA and testing practices that deliver near zero-defect apps to clients and contribute to a positive shift in organizational culture. This is a must in order to give companies the requisite competitive edge in the current market.

Security Will be a Big Concern

With the growth in cloud computing, mobility, and IoT, the focus on end-to-end security solutions will be paramount. Sensitive and confidential online data is highly vulnerable to cyber-attacks, requiring companies to dig deeper to eliminate leaks, code errors, and security holes. Open source security tools will be in demand, and security testing may evolve into a separate specialization dealing with the continuously varying nature and severity of security attacks.

Context-Driven Testing Will Rise

The trend is emerging slowly but is likely to grow. Greater diversity and device integration (both of which are likely to increase further) make it complex for QA teams to define a single testing strategy; context changes must be accounted for alongside wider test coverage from varied angles, which is exactly what context-driven testing provides.

Crowdsourced Testing Will Witness a Surge

Sophisticated software comes with its own development and testing expenses. In the current complex scenario, companies may not be well equipped with all the requisite testing resources and may lack the budget required to test software in varied environments. This drives demand for crowdsourced testing, which can help companies manage their costs while ensuring testing quality. The trend is gaining momentum and is likely to grow as testing requirements become more multifaceted.

Manual Testing Will Always Remain in Demand

Though automation will be critical to faster product releases, manual testing will remain an integral part of software testing. The wisdom, judgment, and experience of testers can never be replaced by automation. Nevertheless, testers will need to learn additional technical skills to remain competent.

Concluding Thoughts

These emerging trends will help you prepare for upcoming testing challenges. With a readiness to learn, grow, and adopt them, you can plan, strategize, and navigate your future testing processes efficiently. These trends are likely to accelerate with virtualization, predictive analytics, and machine learning, so keep a watch on the latest developments to remain competitive. Happy testing!

Is Test Plan a Dead Document in an Agile Environment?

Does the Agile Manifesto's preference for 'working software over comprehensive documentation' mean no documentation? No, not at all!

However, in a competitive environment where speed is crucial, a number of teams have abandoned test plans altogether. The plans have acquired a bad reputation as time-killers: they are hard to maintain, nobody seems to read them once the project is signed off, and even during the project they tend to become obsolete halfway through.

Yet another challenge is that these documents are rarely reviewed and are mostly copies of previously created plans, with no new insight or critical thinking involved.

Having said that, a difficult question remains: without a clear outline to follow, isn't the project going to run into trouble?

This suggests that we need to bring the test plan back, but with a different approach, one more in sync with the era of Agile, Scrum, and Lean. The heavy, thesis-like document may be outdated, but test plans are definitely not. Test plans with some amount of documentation are still required.

But first, what does a test plan entail?

A test plan begins with a brainstorming session held before execution. Behind the document is a thought process that identifies and defines the scope, approach, resources, and schedule of the intended test activities.

This exercise determines the types of tests that will be needed during the course of development by dividing the tasks according to needs, goals, and test types, and accordingly clarifies how much automated and manual testing will be required.

Coming back to the current scenario, we need to do this in the light of the collaborative spirit of agile, where things are more likely to be discussed and agreed upon at a daily standup meeting, ensuring that everyone is on the same page and things move quickly.

In such a scenario, test plans must be slimmed down for agility, but they are still needed to prevent teams from getting so focused on low-level user stories that they ignore the bigger picture. A test plan helps teams shed the myopic view and see the real, larger picture.

Also, the communication, collaboration, and agreements on special test types that are explicitly included or excluded should be documented for transparency and speed. We can keep this part lightweight, but we cannot dismiss it altogether, for a sense of direction is required.

Besides, as agility has moved the process from a few teams to multiple teams working on one release, gaps are bound to appear, along with communication breakdowns and integration issues, making it even more important to have a test plan with project specifications to refer to.

Hence, the main tenet here is to keep the conversation moving around the test planning process and the right sequence of activities in order to provide a clear path, and a basic test plan does just that.

In addition, test plans are still required for compliance with regulatory agencies, for internal groups during an audit, or for contractual formalities that require plans to be presented to the client.

In a nutshell, test plans cannot be done away with; however, they have to be factual, precise, and short to be in sync with the agile environment, and should at least contain the agreed-upon team plans that are not likely to change very frequently, along with the special test inclusions and exclusions.

Moreover, if teams are not referring to this document regularly, or the document is not created at all, testing teams may end up with issues post release and no proper record to refer back to. Hence, the test plan is a living document rather than a dead one and holds significant value even in an agile framework.

Avoiding the Pesticide Paradox in Software Testing

Boris Beizer defined the Pesticide Paradox as: “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.”

This simply means that as the same test suite is run over and over, it becomes ineffective at catching bugs. Moreover, these test sets will also fail to catch new bugs introduced into the system by recurring enhancements and fixes.

And with agility gaining momentum, speed to market is becoming the decisive factor for gaining a competitive edge, making the implications of this paradox all the more relevant. As we add automated testing into our mix of testing methods, we start relying on these tests for eternity: we run them frequently and review them sparingly. If you are a tester, you certainly know what it means to get attached to the tests you have added to the test suite and fall into the illogicality of complete reliance on the same set of tests over time. This simply means the invisible bugs will be left unattended, only to be caught later in the SDLC or to slip into the release, a faux pas leading to loss of credibility and revenue.

To prevent these bugs from being released, or from being caught late at great cost, test suites, whether automated or manual, need constant maintenance and updating.

But how should one go about making the tests relevant?

Constantly monitor changes

A tester's ability to make the structural and functional connections needed to identify new scenarios and update existing test cases will increase test coverage and support new functionality, thereby increasing the chances of finding new defects.

Track the bug statistics regularly

This gives you a clear understanding of how effective your tests have been. If a test has not reported a bug in its last few runs, check whether it is worth moving to an archive. This requires responding to regular test feedback by continually reviewing and renewing the tests, and keeping a sharp eye on the suite to remove useless test cases that may be piling up. So revisit, revise, and renew often, and change your test data.
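A simple way to track this is to record, per test, how long it has been since it last exposed a defect, and flag tests that have been silent for many consecutive runs as candidates for review or archiving. A minimal sketch, where the threshold and the data shape are assumptions for illustration:

```python
from collections import defaultdict

def stale_tests(run_history: list[dict], threshold: int = 50) -> list[str]:
    """
    run_history: one dict per test run in chronological order,
                 e.g. {"test": "login_sso", "found_defect": False}
    Returns tests that have not exposed a defect in their last `threshold` runs.
    """
    runs_since_defect = defaultdict(int)
    for record in run_history:
        name = record["test"]
        if record["found_defect"]:
            runs_since_defect[name] = 0      # reset the counter on a catch
        else:
            runs_since_defect[name] += 1     # another silent run
    return [name for name, silent in runs_since_defect.items() if silent >= threshold]
```

A flagged test is not automatically worthless (it may guard a critical invariant), but the list tells you where to focus review, refreshed test data, or new scenarios.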

Build variance into tests

This can be done in the design phase itself, where models can be designed to create different paths through, or to, the feature under test. Additional data can be created to cover the alternative flows. The aim is to ensure that the feature is fully exercised in different ways. It is certainly easier to create the additional data while you are designing than to rework the complete set of tests later, so this is a good way to avoid the pesticide paradox.
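One inexpensive way to build variance in is to parameterize the same test over multiple data sets and paths instead of hard-coding a single happy path. A minimal sketch using pytest's parametrize; the discount rules are a hypothetical feature under test:

```python
import pytest

def apply_discount(total: float, code: str) -> float:
    """Hypothetical feature under test."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    if code not in rates:
        raise ValueError("unknown code")
    return round(total * (1 - rates[code]), 2)

@pytest.mark.parametrize("total,code,expected", [
    (100.0, "SAVE10", 90.0),    # typical flow
    (100.0, "SAVE25", 75.0),    # alternative flow
    (0.0,   "SAVE10", 0.0),     # boundary value
    (19.99, "SAVE25", 14.99),   # rounding behaviour
])
def test_apply_discount(total, code, expected):
    assert apply_discount(total, code) == expected

def test_unknown_code_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, "BOGUS")
```

Property-based tools such as Hypothesis take this further by generating the variations for you.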

Go for Exploratory Testing

Exploratory tests are not identified in advance. Free from dependency on scripted tests, exploratory testing draws on the breadth and depth of the tester's imagination and knowledge of the product. This, in turn, helps find more bugs than scripted testing alone and covers scenarios and cases that are normally ignored. After all, mechanized processes cannot think, but testers can. The human element should thus be incorporated to enhance testing effectiveness and to escape the trap of repeating the same automated tests again and again.

Conclusion

To recap, there is no "foolproof test suite" that can discover all the bugs without ever needing modification. If you rely on any one suite for eternity, you will never know how worn out it is, and the result will be a miserable product release.

A better way is to keep tabs on changes and review the suite regularly, adding more scenarios and cases as required. Additionally, a hawk's eye on bug statistics will tell you how effective your test suite really is. You can also keep adding extra sets of test data with alternative paths during the design phase to build variance into the tests. Finally, the human element and intelligence should be added to the testing process, as exploratory tests can find bugs through cases and scenarios that scripted tests are unable to identify.

Moreover, it does no harm to pause for reflection, take peer reviews, or even start fresh if there is a major change in the component. This helps control the impact of the pesticide paradox; while there is no guarantee that all bugs will be caught this way, a better and more efficient outcome can certainly be expected.

A Journey to the IoT World - II

This is a three-part series

Testing Challenges in an IoT Framework

Following on from our previous blog, we may rightly define the Internet of Things (IoT), our 'Cobweb', as the new gigantic tech wave, disrupting existing technologies with no apparent parallels at present or in the near future. But what is so unique about it?

It is not an ordinary cobweb that is only superficially connected; rather, each thread of the cobweb can sense the activities of every other thread and communicate with it in real time. Stated differently, IoT implies flawless communication between devices across internal and external environments in real time through the exchange of data and split-second information, enabling intelligent decision-making. Sounds fascinating? It certainly is exciting for users; however, it is not so appealing for the testing world. Let us comprehend the reasons.

Dealing with an Avalanche of Internet-Enabled Devices

The IoT framework implies a further increase in the already large pool of devices, making testing on all real devices a sheer impossibility. A wide range of traffic patterns, big data, different types of interfaces, numerous operating systems, networks, locations, and device-specific features poses a complex matrix of possible testing scenarios, making the QA task highly sophisticated and challenging.

Difficulties in Ensuring Hyper-connectivity Across the Multi-Layered IoT Architecture

With a multitude of sensors and actuators collecting huge chunks of data over multiple networks, the task of dynamically collating and displaying streams of data in real time may cause storage and analysis paralysis. Quality assurance testing to ensure device interoperability and smooth user interaction will therefore require numerous tests to run over longer time spans to ensure reliability, compatibility, and security, in turn hurting the product's time to market. Besides, can security be ensured even after that?

Security and Compatibility Concerns

The constant inflow of data streams will make it crucial to ensure data safety. Scrutinizing that data does not leak when being transmitted from one device to another, and that it is properly encrypted, will require comprehensive testing solutions. Moreover, resolving compatibility issues when integrating various controller devices into existing systems for data generation is another challenge. This will not be a straightforward task; a lot of knowledge and understanding will be required, along with time considerations.
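As a small, hedged example of the kind of check involved (the hostname and acceptable protocol versions are placeholders), an automated test can at least verify that a device or gateway endpoint negotiates an encrypted connection with a modern TLS version:

```python
import socket
import ssl

def tls_details(host: str, port: int = 443) -> tuple[str, str]:
    """Return the negotiated TLS version and cipher for an endpoint."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher()[0]

def test_device_endpoint_uses_modern_tls():
    host = "device-gateway.example.com"   # placeholder endpoint
    version, cipher = tls_details(host)
    assert version in ("TLSv1.2", "TLSv1.3"), f"weak protocol negotiated: {version}"
```

Checks like this only scratch the surface; real IoT security testing also covers certificate validation, device authentication, firmware update paths, and payload-level encryption.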

Hence, security issues and testing for backward compatibility with upgraded versions will be major areas of unrest for testers, especially when speed to market matters and a trade-off is not an acceptable option.

An HP study reveals that 70 percent of IoT devices are vulnerable to attack and that IoT devices average 25 vulnerabilities per product, indicating an expanding attack surface for adversaries.


This reiterates the need for thorough end-to-end agile testing solutions. Is it easy? Let’s see.

No Substitute for Agility

The need for faster releases will pull agility into the mainstream. Though both automated and manual testing may be required for IoT apps, testers and organizations stuck with slow, traditional waterfall models will not survive without updating themselves.

Speed to market will be the key, and automation and better communication will require radical changes to current testing approaches and the organizational set-up. What does this imply?

Challenges in Adopting DevOps

DevOps will become the norm as teams are required to work together more seamlessly and communicate quickly to mitigate the higher technological risks. This will be a major bottleneck for traditional organizations, which will need a complete turnaround, not just in technological terms but also in dealing with the change. The need for agility and DevOps will further imply increased dependency on open source frameworks that can enable faster and more thorough testing across multiple platforms and devices.

Challenges of the Current Open Source Frameworks

The current open source frameworks may not be able to cope with increased platform fragmentation and future testing needs. They require testers to do a lot of work around test automation development and around setting up the frameworks themselves, which is a mismatch with the sprawling network requirements of the rapidly growing IoT sector.

Setting up a suitable framework for agile testing demands fast and continuous testing that keeps pace with development and quick release patterns. With IoT, this will get even more complex, leading to longer test cycles and defeating the need for agility. Further, it can consume a huge chunk of the testing budget set aside by companies, posing a challenge particularly for small testing companies.

Can Testing Companies Manage their Budgets?

Accessing next-gen automation tools to ensure faster and shorter SDLCs, along with the need for elaborate testing in the IoT context, could mean extensive costs for the company, especially for hardware and test infrastructure.

In addition, companies shifting from old traditional approaches to Agile and DevOps will need to spend heavily. Finally, if testers are not well trained, they will not be able to choose the right tools or use them wisely, adding further cost to the company.

Companies Will Need Skilled Testers

A lack of skills can blow a big hole in the testing budget. Emerging technologies like IoT will need new skill sets, and this may require changing the current workforce or extensive training, both of which imply higher costs.

Conclusion

In a nutshell, with the adoption of IoT, platform and device fragmentation will multiply testing complexity. Let us attempt to highlight the major problem areas:

  • Security will be a big challenge and despite the need for longer test runs, the speed to market will remain a priority, posing a tradeoff.
  • Companies will be compelled to adopt Agile and DevOps and manual testers will not be able to survive without updating their skills.
  • The current open source frameworks will not be sufficient. With the increase in the number of internet-enabled devices, platform and device fragmentation problems will grow.
  • The changing landscape and increased budgetary requirements, with a need for radical change in the management mindset, will be a matter of great concern especially for mid-sized and small testing companies who may be forced to go out of business.

Let us try to understand whether testing companies can prepare themselves before the IoT storm hits them in the face. We will look at probable solutions to these testing challenges in our third and final part: Managing the IoT Storm - Probable Solutions to the Testing Ordeals.