Service Virtualization In Software Testing

The complexity of the current QA software testing market impels businesses to strive for quality software, quickly and economically. DevOps and Agile workflows have made the software testing ride much smoother than traditional practices by introducing automation and enabling superior communication, collaboration, and transparency. However, waiting on dependent components can stymie even the best approaches, and this is where Service Virtualization (SV) can help speed things up.

What Is Service Virtualization

SV uses virtual services to emulate the behavior of essential components that are missing from your system architecture, enabling frequent and comprehensive testing in their absence. Testing teams thus get a comprehensive, first-hand testing platform equipped with all the components of a real production environment, enabling testers to test component-driven applications such as independent APIs, SaaS-based apps, and service-oriented architectures (SOAs).
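
To make the idea concrete, here is a minimal sketch of a virtual service using only Python's standard library. The "payment API" endpoint and its canned response are hypothetical examples, not any vendor's actual interface; real SV tools record and replay far richer behavior, but the principle is the same: the application under test talks to a stand-in that behaves like the real dependency.

```python
# A minimal "virtual service" emulating an unavailable payment API.
# The endpoint path and response payload are illustrative examples.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class VirtualPaymentService(BaseHTTPRequestHandler):
    """Responds with canned data, standing in for the real component."""
    def do_GET(self):
        body = json.dumps({"status": "approved", "latency_ms": 120}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep output quiet during tests

# Port 0 asks the OS for any free port; the server runs in a daemon thread.
server = HTTPServer(("127.0.0.1", 0), VirtualPaymentService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The application under test calls the virtual service exactly as it
# would call the real dependency.
url = f"http://127.0.0.1:{server.server_port}/payments/123"
response = json.load(urlopen(url))
print(response["status"])  # approved
server.shutdown()
```

Because the virtual service lives outside the test suite, every team and every test level can hit the same emulated dependency, which is what distinguishes this approach from per-test stubs.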

Service Virtualization In Relation To Stubbing and Mocking

Modern applications are complex and rely on numerous dependent services. Adding to the complexity of software functionality, the rise of Agile software development has made it increasingly difficult for testers to manually develop stubs or mocks in the number, scope, and complexity required to complete testing tasks in modern enterprise application development scenarios.

Service Virtualization should not be confused with unit testing via stubbing and mocking, which are mere workarounds, unlike properly architected SV technology. With stubs and mocks, the test suite simply ignores the unavailable components, often leaving vital components out of the testing sphere until a final end-to-end test is conducted prior to release. The major advantage of SV is that testing teams can virtually test application behavior incrementally, before all components are fully available. This largely eliminates the major disadvantages of stubs and mocks, making SV a valuable asset for testing companies.
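
For contrast, here is what the classic in-process mock looks like. The class and method names below are illustrative, not taken from any specific product; the point is that the mock exists only inside one unit test, so the real component's behavior never enters the picture.

```python
# A classic in-process mock: the unavailable inventory service is simply
# replaced for the duration of one unit test. Names are illustrative.
from unittest.mock import Mock

class CheckoutService:
    """The code under test, which depends on an inventory component."""
    def __init__(self, inventory):
        self.inventory = inventory

    def can_order(self, sku, qty):
        return self.inventory.stock_level(sku) >= qty

# The real inventory component is absent; a Mock stands in for this test only.
inventory = Mock()
inventory.stock_level.return_value = 5

checkout = CheckoutService(inventory)
assert checkout.can_order("SKU-1", 3) is True
assert checkout.can_order("SKU-1", 9) is False
inventory.stock_level.assert_called_with("SKU-1")
```

The mock answers only the exact questions this one test asks, which is precisely the limitation the paragraph above describes: nothing outside this test benefits from it, and the real component stays untested until final integration.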

Advantages of Service Virtualization in Software Testing

Let us highlight some of the key benefits Service Virtualization offers:

Speedy Delivery: In the current Continuous Delivery (CD) scenario, testing needs to occur alongside development, and this is especially desirable in the production of heterogeneous systems involving multiple layers of interdependent components, APIs, and third-party apps. It is no longer feasible to wait for QA teams to give the green signal for each and every component to be market-ready; rather, the behavior of the connected components can be understood in a demo environment using SV. This leads to reduced timeframes and shorter release cycles. This is further validated by the 2015 voke Research survey, in which 34% of participants experienced a 50% or greater reduction in test cycle times, while 40% of participants saw their software release cycles decrease by 40% or more using SV.

Access to Otherwise Unavailable Resources: A complete end-to-end test can be conducted even when the dependent components (third-party apps, APIs, and so on) of the app under test cannot be properly accessed or configured for testing; SV helps simulate these dependencies. Moreover, almost all kinds of scenarios can be tested using SV, including varying levels of functionality, performance, and maintenance.

Reduced Costs: Operational costs can be reduced significantly through a planned and systematic approach that shortens test environment configuration time, eases test environment access and setup, and eliminates interface dependencies. Moreover, since each component can be tested individually without waiting for the complete assembly, unit and regression testing can take place sooner and be more complete, and bugs and performance issues can be identified long before integration or user acceptance testing, making resolution possible early in the SDLC and saving huge remediation costs. Infrastructure and resource costs also drop significantly. This is further supported by the HPE Service Virtualization Case Study, which reports cost savings of £1.94 million through SV.

Reduced Business Risks and Increased ROI: With the ability to test early and often, defects get exposed when they are easiest and least costly to fix. Early detection of bugs reduces the risk of defects slipping into the final product and enables faster delivery, ensuring that businesses stay ahead of the competition cost-effectively. This reduces the business risk of product failure and offers a superior ROI through speedy product delivery; the HPE Service Virtualization Case Study reports an ROI of 88.6% using SV.

Better Quality: Actual product deployment scenarios can be mimicked with SV, making it easier for QA teams to identify issues and failures before the product goes live. Development errors are caught well in time through the shift-left approach, with enhanced scalability, ensuring a robust end product. As per the 2015 voke survey, 36% of participants reported a reduction in production defects of more than 41% by adopting SV, while 46% of participants experienced a more than 41% reduction in total defects, resulting in a superior quality product.

Service Virtualization, hence, reduces the time, effort and cost of delivering secure, reliable and compliant software by eliminating numerous software testing constraints. It is, therefore, a smart investment for quality assurance software companies, culminating in measurable, tangible benefits. Happy testing!!

What is Scrum, and Why You Should Adopt It

Hirotaka Takeuchi and Ikujiro Nonaka introduced the term ‘Scrum’ in the context of product development in their 1986 Harvard Business Review article, ‘The New New Product Development Game’. By definition, Scrum is a project development framework that emphasizes teamwork, collective accountability, transparency, and iterative progress towards a defined goal.

In the contemporary competitive environment, stakeholders are vying for speed to market, excellent product quality, and a quicker ROI. In addition, frequently changing business requirements need to be addressed continuously. This is where Scrum fits in: tasks are divided into short, fixed-length release cycles with adjustable scope to address frequently changing development needs. Scrum is unlike the traditional Waterfall model, which follows a step-by-step process to deliver a full-featured product; a major drawback of Waterfall is that any changes added later in the SDLC involve revisiting the earlier phases and reworking them. Scrum saves this effort.

The Scrum approach is open to change and welcomes it as long as it enhances customer experience. The Scrum development team starts working with the product owner early on to determine the minimum viable product (MVP), from which point incremental development proceeds until the full set of requirements is delivered. Scrum teams normally consist of five to seven members, and work is done in ‘Sprints’ with predefined timelines, each resulting in a fully tested product with additional functionality.

The three key roles in any Scrum Team are:

Product Owner: The key stakeholder, who is actively engaged with the Scrum team and is business-savvy with a clear understanding of what the product functionality should be. The product owner ensures that expectations for the end product have been communicated and agreed upon, prioritizes user stories for the product as required, and makes sure that no new requirements are assigned during the Sprint.

The Scrum Master: The champion of the Scrum, ensuring that the Scrum team is productive and making progress. They may take up any role in the team to finish the work required to move the Sprint forward, and in case of any obstructions, the Scrum Master follows up and resolves the issue. They also organize Sprint planning and stand-up meetings, reviews, and retrospectives to keep the Sprint moving.

Developer/Tester: Sprint teams consist of a mix of competencies working together, and the roles may rotate Sprint by Sprint. Testers, developers, database specialists, and support staff all work in close collaboration to develop and implement the defined features. There are no set rules or defined job descriptions; rather, it depends on what the team agrees upon. Overall, it is a ‘whole-team’ responsibility to deliver working software at the end of the Sprint.

Let’s understand the entire Scrum Process in brief along with where the above roles come into the picture.

The Scrum Process

  • The Product Owner creates a Product Backlog.
  • Sprint Planning takes place: based on priority, the team imports items from the Product Backlog into the Sprint Backlog and brainstorms how to implement them.
  • Daily Scrum meetings are conducted to assess the progress and share the impediments.
  • At the end of each Sprint, delivery teams ensure the work is in a potentially shippable state.
  • The Scrum Master ensures that the Sprint is moving forward, tasks are being completed on time, and impediments are removed.
  • Sprint ends with a Sprint Review, and a Sprint Retrospective to identify what went wrong and what went right.
  • For the next Sprint, the team pulls another prioritized chunk from the Product Backlog and begins working.
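
The backlog flow above can be sketched as a toy program: the team pulls the highest-priority items from the Product Backlog into a Sprint Backlog, limited by the Sprint's capacity. All story titles, priorities, and point values below are illustrative.

```python
# A toy sketch of Sprint Planning: pull stories from the Product Backlog
# in priority order until the Sprint's capacity is reached.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    priority: int     # lower number = higher priority
    points: int       # estimated effort

product_backlog = [
    Story("Password reset", priority=1, points=5),
    Story("Export to CSV", priority=3, points=8),
    Story("Login audit log", priority=2, points=3),
]

def plan_sprint(backlog, capacity):
    """Fill the Sprint Backlog with the highest-priority stories that fit."""
    sprint, used = [], 0
    for story in sorted(backlog, key=lambda s: s.priority):
        if used + story.points <= capacity:
            sprint.append(story)
            used += story.points
    return sprint

sprint_backlog = plan_sprint(product_backlog, capacity=10)
print([s.title for s in sprint_backlog])  # ['Password reset', 'Login audit log']
```

In real Scrum, of course, the team negotiates scope in the planning meeting rather than applying a mechanical rule; the sketch only captures the "prioritized pull, bounded by capacity" shape of the process.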

The cycle is iterative, and whenever the project ends, Scrum ensures that the most significant work has been completed. So you get a viable product at a lower cost in a short time span.

Let us check the benefits that Scrum offers to businesses.

Benefits of Scrum

Overall, the Scrum Framework offers the following benefits:

Quick Deliverables: The involvement of the Product Owner to progressively elaborate the requirements and to set priorities along with providing real time clarification reduces the time to market. ‘High value and risk’ requirements can be delivered before the ‘low value and risk’ requirements, with every Sprint resulting in a working product that is potentially shippable.
Increased ROI: Daily meetings, regular monitoring, continuous incorporation of market changes, and shorter predefined release cycles all lead to increased ROI. Regular stakeholder feedback enables early corrections, sparing a lot of time and money. Additionally, automation and up-front testing result in less wastage and faster deployment, and thus a better ROI. Finally, if the product has to fail, it fails faster.
Superior Quality: Regular inspection of the working product, with daily testing and Product Owner feedback in the development process, allows for early visibility of the quality issues and necessary adjustments. Sprint reviews and retrospectives allow for continuous improvement and thus a superior end product.
Increased Collaboration and Ownership: The complete team works together on the entire project, and decisions are made by consensus. Sprint Planning meetings help self-organizing, cross-functional teams set their pace and organize their work around the given business priorities; daily Scrum meetings, Sprint reviews, and retrospectives further enhance team spirit and collaboration.
Enhanced Customer Satisfaction: Scrum enables organizations to change the project and the deliverables at any point in time, resulting in the most apt release. Scrum thus embraces changing customer requirements leading to increased customer satisfaction.
Better Project Control: Regular feedback, the ability to address changing market demands, Sprint reviews, and daily meetings offer ample opportunities to keep the project under control and make timely amendments.
Transparency: Expectations are effectively met with Scrum as the key stakeholders are actively involved throughout the project. Continuous inspection and adaptation, and total transparency are the real benefits of Scrum.

Given these benefits, it would not be an overstatement to say that if an organization adopts Scrum in its true sense, everyone involved will be able to discover the real benefits Scrum brings along.

At Astegic, we have developed a Scrum framework specifically crafted for the QA and testing stages of product development: SDEFT (Scrum Driven Engagement Framework for Testing). SDEFT introduces a set of best practices that create a flexible framework allowing consistent and predictable delivery of results, resolving the critical client concerns of quality, agility, cost effectiveness, quicker ROI, and speed to market.

Requirement Analysis and the Role of QA

Requirement analysis is the process of determining user expectations for a new or modified product, and it is vital for effective QA software testing as it lays the foundation for the various stages of the SDLC. Without a thorough needs analysis and a clear road map, even the most efficient system architecture, well-written code, or comprehensive testing will prove futile.

Hence, the first step to kick off any project is “Requirement Analysis”. And to be useful, requirements should be documented, actionable, quantifiable, traceable, and directly linked to business needs or opportunities.

Requirement Analysis Process

A typical requirement analysis process should identify testable requirements through frequent interactions with the various stakeholders (client, business analysts, technical leads, etc.). The intent is to brainstorm and clarify any ambiguous requirements before moving to the next phase, and to precisely define the automation needs of the project.

For this, QA should be involved early on in the project ensuring that issues are identified before they become defects, even prior to the completion of the technical design. This, in turn, saves considerable time and money.

Any lacunae in requirement analysis will have a huge bearing on project timelines, quality, and costs, and may become the sole reason for project failure, because the further a bug travels into the STLC, the harder and costlier it becomes to detect and fix.


However, with the influx of new and disruptive technologies, analyzing and validating testing requirements is not an easy task. Besides, there are a few typical challenges that further complicate the requirement analysis process.

Common Challenges in Requirement Analysis

Normally, in the early stages, the project scope is not well-defined and there is a lack of clarity in the processes. Additionally, involving all the stakeholders, understanding their viewpoints, and precisely defining the requirements is not a straightforward task.

If the business needs are not clearly specified or there are conflicts around an ambiguous term, or there is a constant inflow of frequent new requirements, the task gets even more complicated.

Here, including the QA team in the process can assist in better understanding the requirements and delivering a quality product, as QA tends to be detail-oriented and looks for quantifiable metrics.

Role of QA

Quality analysts play a vital role in requirement analysis: they analyze each requirement from the specification document and then list high-level scenarios. Early involvement enables testers to plan their testing from a live understanding of the requirement definitions.

Any ambiguity is clarified and functionalities are defined by asking relevant questions and seeking explanations. Project risks are identified and a defect tracking methodology is established. Finally, a Requirement Traceability Matrix is prepared that maps and traces user requirements to test cases, ensuring all test cases are covered and no functionality is missed.
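
At its core, a Requirement Traceability Matrix is just a mapping from requirements to the test cases that cover them, checked for gaps before sign-off. Here is a minimal sketch of that idea; the requirement and test-case IDs are made up for illustration.

```python
# A minimal Requirement Traceability Matrix: map each requirement to the
# test cases that cover it, then flag any uncovered requirement before
# sign-off. All IDs are illustrative.
requirements = ["REQ-001", "REQ-002", "REQ-003"]

rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    # REQ-003 has no test case yet
}

uncovered = [req for req in requirements if not rtm.get(req)]
print(uncovered)  # ['REQ-003']
```

A real RTM usually also traces backwards (which requirement each test case exercises) and forwards into defects, but even this simple forward check catches the "no functionality is missed" problem the paragraph describes.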

Now, with everything in place, the tester is ready to sign off on the Requirement Analysis phase, having ensured that a sound foundation has been laid.

At Astegic, we have developed a comprehensive Testing Requirement Analysis Framework (TRAF) to cater specifically to the requirement analysis needs of businesses. TRAF assists in rapid QA/testing project set-ups and ensures early defect prevention, encompassing the three most critical aspects of the testing requirement process: scope, coherence, and consistency.

Equipped with experienced and skilled resources with access to the latest tools and technologies, we enable businesses to speed up their QA software testing requirement analysis time and achieve superior quality in a cost-efficient manner.

A Journey to the IoT World- I

This is the first part of a three-part series.

An Introduction to IoT

With lots of noise around the term, IoT, or the Internet of Things, is increasingly becoming the talk of the town. Still, many people are simply trying to grasp the fundamentals of what exactly IoT is, and Google is increasingly being fed technical jargon by users trying to understand the enigma. Let’s try to unveil this mystery.

IoT - A Cobweb

To use an analogy, IoT may be thought of as a cobweb spun by the internet, the ‘spider’ that crawls across any number of devices, connecting them to each other and to the user. Sounds interesting?

Stated plainly, IoT is all about connecting devices to the internet. This includes everything from washing machines, refrigerators, lamps, and wearable devices to almost anything you can think of with an on-off switch. In turn, these devices will be able to converse seamlessly with us, and with other devices and applications.

This implies that, effectively, the transformation caused by IoT will be felt across all facets of life. As per Gartner, the number of connected ‘things’ will be around 25 billion by the year 2020. But what does this connectivity mean to us?

Voyage from Cobweb to Silk

Ever thought of smart cities? Visualize smart traffic signals that control traffic in real time, or smart cars that automatically sense those signals. What if your alarm clock not only wakes you up but also tells your water heater to start heating water? IoT will creep into everything, which implies: if it’s not connected, it does not exist!

This is merely an iota of what IoT may be able to achieve. Innumerable possibilities with endless connections have made IoT a much-debated topic. Let us understand what it takes to weave a perfect cobweb.

The Layers of the Cobweb

Let’s unweave each layer of the cobweb to understand it better.

Fig: IoT Architecture Layers (Source: C# Corner)

The IoT architecture begins with connected devices or ‘things’ equipped with sensors and actuators that collect and emit data in real time. These devices need networks to communicate and become useful, which brings us to the Network and Communications layer, enabling rapid collection and transfer of data. Next comes the Management layer, which stores, manages, and analyzes this data intelligently. Finally, the managed information is released to the Application layer for proper utilization of the accumulated data.
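
The four layers above can be sketched as a toy pipeline: a sensor reading is collected, transported, analyzed, and finally consumed by an application. All thresholds, field names, and the "thermostat" scenario are illustrative assumptions, not part of any standard.

```python
# A toy walk through the four IoT layers: device -> network -> management
# -> application. Field names and the analysis rule are illustrative.
def device_layer():
    """'Things' with sensors emit raw readings."""
    return {"sensor": "thermostat", "celsius": 31.5}

def network_layer(reading):
    """The network/communications layer transfers the data onward."""
    return dict(reading, transported=True)

def management_layer(reading):
    """The management layer stores and analyzes the data."""
    reading["alert"] = reading["celsius"] > 30  # simple analysis rule
    return reading

def application_layer(reading):
    """The application layer turns managed data into something useful."""
    return "cooling on" if reading["alert"] else "idle"

action = application_layer(management_layer(network_layer(device_layer())))
print(action)  # cooling on
```

Each function stands in for an entire layer of real infrastructure (gateways, brokers, analytics platforms), but the chained call shows the one-directional data flow the architecture diagram describes.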

This is how a multi-layered IoT architecture works, but can it work flawlessly? After all, the cobweb may suffer from weaving anomalies.

Cobweb in Jeopardy

Security will be the major concern with all the threads of the cobweb intertwined. If someone hacks a piece of information from your smartwatch, it becomes easy to extract more information from all your connected devices.

This implies that privacy and data sharing will become complicated, and dealing with the tons of data accumulating from billions of devices will make storage, tracking, and analysis a challenging chore. This burdens the software QA testing community with the task of ensuring seamless integration of devices with internal and external environments, along with safeguarding information exchange across the multi-layered IoT architecture in a hyperconnected world.

Safeguarding the Cobweb

Testers are already witnessing testing troubles caused by escalating device and platform fragmentation, and the complexity is going to grow exponentially with IoT. Quality assurance software testing approaches that succeeded with previous innovations will face a paradigm shift, with a deluge of data coming from millions of sensors each day adding to the complexity of the testing landscape.

In the next part of this series, Testing Challenges in an IoT Framework, we will focus on the challenges the testing community is going to face.