Is Test Plan a Dead Document in an Agile Environment?

Does the Agile Manifesto’s value of ‘working software over comprehensive documentation’ mean no documentation at all? No, not at all!

However, in a competitive environment where speed is crucial, a number of teams have abandoned test plans altogether. Test plans have acquired a bad reputation as time-killers: they are hard to maintain, nobody seems to read them once the project is signed off, and even during the project they tend to become obsolete halfway through.

Yet another challenge is that these documents are rarely reviewed and are mostly copies of previously created plans, with no new insight or critical thinking involved.

But having said that, a hard question remains: in the absence of a clear outline to follow, isn’t the project going to run into trouble?

This indicates that we need to go retro, but with a different approach, one that is in sync with the era of Agile, Scrum and Lean. The heavy, thesis-like document may be outdated, but test plans are not. Test plans with some amount of documentation are still required.

But first, let’s understand what a test plan entails.

A test plan begins with a brainstorming session held before execution. Behind the document lies a thought process that identifies and defines the scope, approach, resources and schedule of the intended test activities.

This exercise determines the types of tests that will be needed during development by dividing tasks according to needs, goals and test types, and accordingly clarifies how much automated and manual testing will be required.

Coming back to the current scenario, we need to do this in light of the collaborative spirit of agile, where things are more likely to be discussed and agreed upon at a daily standup, ensuring that everyone is on the same page and things move quickly.

In such a scenario, test plans must be slimmed down for agility, but they are still needed to prevent teams from getting so focused on low-level user stories that they ignore the bigger picture. A test plan helps teams shed that myopic view and see the larger picture.

Also, agreements reached through communication and collaboration on special test types that are specifically included or excluded should be documented for transparency and speed. We may keep this part lightweight, but we cannot dismiss it altogether, for a sense of direction is required.

Besides, as agility has moved the process from a few teams to multiple teams working on one release, gaps, communication breakdowns and integration issues are bound to happen, making it even more important to have a test plan with project specifications to refer to.

Hence, the main tenet here is to keep the conversation moving around the test planning process and the right sequence of things in order to provide a clear path, and a basic test plan does just that.

In addition, test plans are still required for compliance with regulatory agencies, for internal groups during an audit, or for contractual formalities that require plans to be presented to the client.

In a nutshell, test plans cannot be done away with. However, they have to be factual, precise and short to stay in sync with the agile environment, and should at least contain the agreed-upon team plans that are not likely to change very frequently, along with the special test inclusions and exclusions.

Moreover, if teams do not refer to this document from time to time, or if it is not created at all, testing teams may end up with post-release issues and no proper reference to fall back on. Hence, the test plan is a living document rather than a dead one, and it holds significant value even in an agile framework.

QA Outsourcing Benefits Vs Costs

Some Considerations Before You Outsource QA & Testing Services

In the current market scenario, outsourcing has become an effective option for strategic management. However, when you decide to outsource your QA & Testing services, you should carefully weigh the related costs and benefits before making a final decision.

Improving the Credibility of Self-Service Analytics with Data Quality

In this age of analytics, intelligence is powered by algorithms and decisions are based on facts drawn from datasets. Self-service BI systems can help businesses make these data-driven decisions by analyzing the relevant datasets without needing IT professionals to produce reports or explain the facts.

In fact, with this data democratization, the entire data world has opened up to business users for timely and logical decision making. But the question to ponder is: how good is this data?

The fact is that the real value of self-service analytics has been undermined by poor data quality. ETL and data collection techniques of the past have been overwhelmed by raw, unstructured, crowd-sourced, fast-paced data inflows. As per the Experian Data Quality Survey, businesses believe that, on average, 27% of their data is inaccurate. This compels business leaders to make decisions based on unreliable, conflicting and siloed data, which in turn leads to business losses.

To overcome this problem, data has to be checked for accuracy, audited, and prepared for use by self-service analysts, and organizations are increasingly recognizing this. As per Gartner, the data preparation market will reach $1 billion by 2019, with 30% of organizations adopting some form of self-service data preparation.

Let us look at ways in which organizations can overcome these data challenges and ensure the data quality required by business users.

Getting the basics right

Begin with correct data entry, as this is still a big issue that needs to be taken care of. Data entry and system errors should be resolved at the origin rather than later in the ETL process, where fixing them becomes far more problematic.

Moreover, parameters for interpreting and standardizing data values need to be discussed and agreed upon within the organization.
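To make this concrete, here is a minimal sketch of applying agreed-upon standardization rules close to the point of entry. It assumes pandas and a hypothetical customers.csv file with email, country and phone columns; the specific rules are illustrative only.

```python
import pandas as pd

# Hypothetical input: raw records captured at data entry.
df = pd.read_csv("customers.csv")

# Agreed-upon standardization rules, applied before the data reaches ETL.
df["email"] = df["email"].str.strip().str.lower()
df["country"] = df["country"].str.strip().str.upper().replace(
    {"USA": "US", "UNITED STATES": "US", "U.S.": "US"}
)
df["phone"] = df["phone"].str.replace(r"[^\d+]", "", regex=True)

# Flag rows that still violate the agreed email format so they can be fixed at the origin.
valid_email = df["email"].str.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", na=False)
print(df.loc[~valid_email, ["customer_id", "email"]])
```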

Laying down clear guidelines

Companies should create a set of data management rules, policies, and procedures to eliminate errors, duplicate entries, and inconsistencies in their datasets and ensure enterprise data quality.

Likewise, accuracy expectations must be clearly specified for high-priority datasets, and companies should ensure that these datasets are validated against those expectations.
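As a sketch of what such validation could look like in practice (the dataset, column names, and thresholds below are hypothetical), each high-priority dataset can be checked against its agreed accuracy expectations on every load:

```python
import pandas as pd

# Hypothetical accuracy expectations agreed for a high-priority "orders" dataset.
EXPECTATIONS = {
    "order_id_completeness": lambda df: df["order_id"].notna().mean() >= 0.999,
    "positive_amounts": lambda df: (df["amount"] > 0).mean() >= 0.99,
    "no_duplicate_orders": lambda df: not df["order_id"].duplicated().any(),
}

def validate(df: pd.DataFrame) -> dict:
    """Return a pass/fail result for each expectation."""
    return {name: bool(check(df)) for name, check in EXPECTATIONS.items()}

orders = pd.read_csv("orders.csv")  # hypothetical source file
results = validate(orders)
failed = [name for name, ok in results.items() if not ok]
if failed:
    raise ValueError(f"Data quality expectations failed: {failed}")
print("All accuracy expectations met.")
```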

Moreover, assigning a data quality role or executive sponsorship to oversee the entire data management process can make a huge difference.

Executive Sponsorship

The executive sponsor can help establish a good governance structure by ensuring the right technological environment, defining operational resources and procedures, and creating a resource plan.

To be precise, executive sponsorship ensures sophisticated data quality management so that data change can be handled aptly for the users. Data Stewards must also be designated to preserve data integrity.

Data Change Management for Usability

Non-technical users working with these datasets will need to integrate and enhance data from different sources, and, in order to relate those sources, they will need to introduce changes, resulting in multiple copies of data.
To tackle this issue, good self-service BI governance, systems, and data quality solutions must be in place, enabling companies to organize the organizational data into distinct, logical data groups based on specific criteria while still keeping it all in a single place for a 360-degree view of business operations. Users can then be added and managed accordingly.

Data quality solutions can further assist in maintaining a logical store of personalized data changes, meaning and context, which could otherwise devolve into data chaos.

Hence, the shift from traditional models to a more fluid self-service approach is best facilitated by a robust data management system offering useful controls to ease the transition.

Master Data Management (MDM) System and other data quality solutions

An MDM system can help create a single source of truth for business processes through a blend of technology solutions including data integration, data quality, and business process management. Users can access any dataset from a single place and analyze and drill down into it.

Besides MDM, there are a number of data preparation platforms and tools that transform data for business users to analyze. Data cataloging provides a searchable repository of metadata for improved data management, and there are other self-service data preparation offerings as well.

To conclude, consistently standardizing, transforming and maintaining clean source data, and managing data quality through a robust data management system under the guidance of assigned data quality professionals, can help streamline the self-service data preparation process, ensuring better business decisions and earlier product releases.

About Security Testing – Part II

This is a two part series:
1.  Significance of Security Testing in an era of illimitable Cyber-Attacks
2. Open Source Security Testing Tools You Should Know About

Open Source Security Testing Tools You Should Know About

Continuing from our previous blog, we can begin by simply stating that security testing has become an inevitable part of software development, and a slack approach towards security can prove costly in terms of:

  • Incoherent website performance
  • Loss of customer trust
  • Loss of revenue
  • Possible legal implications

Hence, security testing cannot be taken lightly, and with the dawn of the highly connected IoT world, no organization can claim to have a foolproof security system in place.

This clearly points to the need for web security testing tools that proactively detect application vulnerabilities and secure websites.

From an array of Open Source Security Tools available in the market, we have made an attempt to discuss some of the popular ones you should know about:

Wapiti

Wapiti is a command-line application that performs black box scans. It supports both GET and POST HTTP attack methods. For beginners, it may be difficult to use, but for experts, it’s a great tool. Wapiti can detect vulnerabilities like file handling errors, database injection, XSS Injection, LDAP Injection and CRLF Injection.
Source: http://wapiti.sourceforge.net/
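As a quick illustration, a basic black-box Wapiti scan can be driven from a script; the target URL below is a placeholder, and exact module names and options may vary between Wapiti versions, so treat this as a sketch rather than a definitive invocation.

```python
import subprocess

# Launch Wapiti against a staging target, limiting the scan to SQL and XSS modules.
# Placeholder URL; confirm the flags against your installed Wapiti version.
subprocess.run(
    ["wapiti", "-u", "https://staging.example.com/", "-m", "sql,xss"],
    check=True,
)
```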

Vega

Vega is written in Java, is GUI based and runs on Linux, OSX, and Windows platforms. It can detect web-app vulnerabilities like blind SQL injection, header injection, stored cross-site scripting, shell injection and others. The tool can be extended using a powerful API written in JavaScript.
Source: https://subgraph.com/vega/

W3af

W3af is a web-app audit and attack framework that is effective against more than 200 vulnerabilities. It is developed in Python and is suitable for both beginners and experts. It identifies vulnerabilities like cross-site scripting, unhandled application errors, SQL injection, and PHP misconfigurations. It comes with both a graphical and a console interface.
Source: http://w3af.org/

Zed Attack Proxy (ZAP)

It is an easy to use integrated penetration testing tool for finding vulnerabilities in web apps. It is available for Windows, Unix/Linux and Mac platforms. It is ideal for both beginners and professionals. Besides other features, it also possesses features like port scanner, fuzzing, smart card support, and Anti-CSRF Token Handling. It can detect vulnerabilities like SQL injection, Blind SQL injection, File Handling and command execution.
Source: https://www.owasp.org/index.php/
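Beyond the GUI, ZAP also exposes an API. The sketch below assumes a locally running ZAP instance, its configured API key, and the official zapv2 Python client; the target is a placeholder and details may differ across ZAP versions.

```python
import time
from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

target = "https://staging.example.com"  # placeholder target
zap = ZAPv2(
    apikey="changeme",  # the API key configured in your ZAP instance
    proxies={"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"},
)

# Crawl the application, then actively scan what the spider found.
spider_id = zap.spider.scan(target)
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

ascan_id = zap.ascan.scan(target)
while int(zap.ascan.status(ascan_id)) < 100:
    time.sleep(5)

# Report the alerts (potential vulnerabilities) ZAP recorded for the target.
for alert in zap.core.alerts(baseurl=target):
    print(alert["risk"], "-", alert["alert"])
```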

IronWASP

IronWASP is a GUI-based vulnerability scanner that checks for over 25 different kinds of well-known web vulnerabilities. It supports detection of false positives and false negatives, and its reports are available in both HTML and RTF formats. An advanced user with Python/Ruby scripting expertise is best placed to make full use of the platform, but even an amateur can use many of the simple features IronWASP offers. It can detect vulnerabilities like SQL, header and XPATH injection, and cross-site scripting.
Source:  https://ironwasp.org/

Conclusion

With cyber threats on the rise, whether or not you already have changes planned for your security stack, using security tools early in the SDLC will reduce the security assessment workload performed before application deployment and will improve early detection rates, thus saving costs and increasing speed to market.

In a nutshell, organizations should make security a business priority, and adopt a well-defined integrated defense approach in this era of illimitable cyber-attacks.

About Security Testing – Part I

This is a two part series:
1. Significance of Security Testing in an era of illimitable Cyber-Attacks
2. Open Source Security Testing Tools You Should Know About

Significance of Security Testing in an era of illimitable Cyber-Attacks

Considering the number of breaches and security threats that currently exist, security testing has become a critical part of the Software Development Life Cycle (SDLC).

Even the most secure platforms have been invaded by hackers, be it Apple’s iCloud, NASA’s computers, or Sony’s email server, let alone the vulnerable ones. The staggering cyber-attack statistics from Hackmageddon stand as testimony to the fact that these threats are on the rise, and there appears to be no foolproof plan to safeguard against them.

Source: Hackmageddon

The figures are appalling, and so are the repercussions of security loopholes. As per Cisco 2017 Annual Cybersecurity Report, 22% of breached organizations lost customers, and 40% of them lost more than 20% of their customer base.

The consequences are obvious: data loss, loss of revenue, lawsuits, fines and other disruptive business implications.

But how does this happen?

As per the findings of the World Quality Report, 80% of these security breaches occur at the application layer and 86% have issues associated with authentication and access control. So, high-quality rigorous security testing is definitely required especially at these weak spots.

Security testing is also essential to guard against some of the most commonly executed cyber-attacks, such as malware, SQL injection, phishing, cross-site scripting (XSS), denial-of-service, and session hijacking attacks.
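As a simple illustration of what automated checks for a couple of these attack classes can look like, here is a minimal sketch using the requests library against a hypothetical search endpoint; it only flags obvious symptoms and is no substitute for a full scanner or manual penetration test.

```python
import requests

URL = "https://staging.example.com/search"  # hypothetical endpoint under test

XSS_PROBE = "<script>alert(1)</script>"
SQLI_PROBE = "' OR '1'='1"

def probe(value: str) -> str:
    """Send the payload in a query parameter and return the response body."""
    return requests.get(URL, params={"q": value}, timeout=10).text

# Reflected XSS symptom: the payload comes back unescaped in the response.
if XSS_PROBE in probe(XSS_PROBE):
    print("Possible reflected XSS: payload echoed without encoding")

# Crude SQL injection symptom: database error text leaking into the page.
body = probe(SQLI_PROBE).lower()
if any(err in body for err in ("sql syntax", "sqlstate", "odbc", "ora-")):
    print("Possible SQL injection: database error message in response")
```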

You may be surprised to know that, as per a study conducted by the Aberdeen Group involving more than 150 organizations, the average cost of remediating a single application security incident is approximately US$300,000.

No doubt, it’s a very costly affair, and with IoT being the face of the future, security testing will become paramount, as hyper-connectivity means a single loophole can result in huge data loss, the impact of which can be devastating.

All this points to the need for highly reliable security testing services that can uncover vulnerabilities in time and minimize application risk. It also implies that security has to be embedded in the SDLC right from the beginning, rather than treated as an afterthought.

Because security testing is an expensive endeavor, not all software development companies can afford to do it in-house, so outsourcing can be a good option in terms of both cost and time.

Dedicated testing services companies can be relied upon to have the requisite resources and expertise to employ critical testing techniques such as:

Vulnerability Scanning: Normally done using automated software to scan for known vulnerabilities.

Penetration Testing: A black-box approach to testing your applications for security loopholes. It simulates an attack by a malicious hacker to determine the vulnerabilities an attacker could exploit.

Ethical Hacking: The system is attacked from within to expose and fix the security flaws and loopholes.

Security Scanning: In addition to automated software scanning, manual assessment is performed to check log files, error messages, error codes and so on.

Risk Assessment: A technique to analyze and segregate risks into high, medium and low categories. This assessment further assists in strategizing to resolve these risks.

Security Review: This involves reviews of architecture diagrams, code, and documents, along with a gap analysis to ensure standards are adhered to and implemented aptly.

These techniques will certainly help combat probable security threats; however, the technical expertise and knowledge of the tester remain an irreplaceable asset.

Again, because this is an expensive endeavor, it is often more feasible to outsource security services to expert testing companies that hold the requisite ISO/IEEE certifications in addition to years of valuable experience, especially in this era of cyber warfare, newfangled cybercrimes and vicious cyber-attacks.

NASSCOM projects that the cyber-security market is likely to rise from the current US $1.5 billion to US $35 billion by the year 2025, and that nearly 1,000 startups will emerge in the security domain over the next 10 years.

Hence, the digital landscape is going to be the future war zone, and security testing will become a big and sophisticated discipline.

Conclusion

Security testing is highly relevant, both now and in the future. Organizations should either be prepared with end-to-end security testing solutions, involving both manual and automated processes, that can be embedded into the SDLC right from the initial stage, or they should outsource these solutions.

Overall, a good testing services company with a skilled and experienced team of testers specialized in emerging technologies will be the one to survive the impending cybersecurity onslaught. There are also many open source security tools available in the market that testing companies can use, and we will discuss them in our next blog.

Astegic, a pure-play QA & Testing services company with years of experience and learning, is adept at safely leveraging the convergence of cloud, mobility, social computing and web applications through security testing across multiple platforms and networks, and it is constantly adopting the latest tools and techniques to become the future market leader.

To know more about our security testing services, visit our page or contact our experts.

And don’t forget to read our next blog to find out about some popular open source security tools available in the market- Open Source Security Testing Tools You Should Know About

 

Product Quality And Speed To Market– How To Nail Both

Product Managers are constantly striving to achieve three key outcomes from their QA and testing teams: 1) increased speed to market, 2) reduced cost of testing, and of course, 3) improved software quality.

While most Product Managers will agree that the above three are indeed at the top of their priority list, many IT organizations lack the clear pathways and QA approaches required to achieve uncompromising software quality in a fast and cost-effective manner.

There are pragmatic practices that Product Managers can follow to achieve profitable outcomes from their QA and testing initiatives.

Below are the top five pointers that can help Product Managers to improve their software quality while still being able to accelerate release cycles:

1) Outline a focused approach to QA & testing: A focused approach to QA & testing optimizes testing performance. Application Development (AD) teams are increasingly pressured to build and deploy software fast to meet delivery deadlines, and very few organizations have the time or resources to deploy a comprehensive software quality assurance program. It is key to clearly define your QA goals and develop broad, traceable quality metrics.

2) Start off by setting the requirements right: An issue caught right at the start of a Software Test Life Cycle (STLC) is likely to cause less damage as opposed to a bug that goes into production. It’s quite obvious, yet many development teams face the consequences of neglecting this factor.

3) Automate where possible: Test automation will free up your resources from repetitive tasks, allowing them to focus on high-priority work and ultimately resulting in increased productivity, output and accuracy (see the sketch after this list).

4) Leverage task management and engagement best-practices: Managing frequent change requests while working against tight deadlines can be detrimental to product quality. Testing teams can avoid this trap by leveraging Jira boards and making the best of Scrum and Kanban methodologies.

5) Go Lean-Agile where distributed teams are involved: Lean-Agile methodologies focus the team on the most critical features and break cycles into sprints, making tasks more manageable and avoiding overloads.
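To illustrate point 3, a team might start by automating a repetitive API smoke check with pytest; this is a minimal sketch, and the base URL, endpoints, and payload fields are hypothetical.

```python
import pytest
import requests

BASE_URL = "https://staging.example.com/api"  # hypothetical service under test

@pytest.mark.parametrize("endpoint", ["/health", "/products", "/orders"])
def test_endpoint_is_up(endpoint):
    """Repetitive smoke checks like these are ideal candidates for automation."""
    response = requests.get(BASE_URL + endpoint, timeout=5)
    assert response.status_code == 200

def test_product_payload_shape():
    """A basic contract check that no longer needs manual verification each release."""
    product = requests.get(BASE_URL + "/products/1", timeout=5).json()
    assert {"id", "name", "price"} <= product.keys()
```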

Followed thoroughly, the above practices can enable Product managers to achieve their critical QA and testing goals.

Out With The Old, In With The New

Astegic unveils its new face with the launch of its new website.

Astegic also has a new logo.

Learn more about Astegic QA & Testing expertise, technology drivers and how these are deployed in real scenarios to deliver client success.

For 15 years, Astegic has been helping companies, from the Fortune 500 to startups, with their Testing & QA needs. Astegic has a dedicated Testing Center of Excellence (TCoE), specializing in providing solutions across Mobile, Cloud, and API testing.

With our in-depth experience across a wide range of industries, our clients receive innovative, best-in-class service and solutions. We offer near 24/7 rapid service from both Astegic’s US and India based testing operations. By combining manual functional testing with world-class automation capabilities, we are proud to support our diverse clientele’s mobile, web and cloud-based applications.

Follow Astegic on LinkedIn, Facebook and Twitter.

Avoiding the Pesticide Paradox in Software Testing

Boris Beizer defined the Pesticide Paradox as: “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.”

This simply means that as the same test suite is run multiple times, it becomes ineffective at catching bugs. Moreover, these test sets will also fail to catch new bugs introduced into the system through recurring enhancements and fixes.

With agility gaining momentum, speed to market is becoming the decisive factor for gaining a competitive edge, which makes the implications of this paradox all the more relevant. As we add automated testing to our mix of testing methods, we start relying on those tests indefinitely: we keep running them frequently and review them sparingly. If you are a tester, you know what it means to get attached to the tests you have added to the suite and to fall into the trap of relying on the same set of tests over time. This simply means the invisible bugs are left unattended, only to be caught later in the SDLC or to slip into the release, a faux pas leading to loss of credibility and revenue.
Source: IBM

To prevent these bugs from being released, or from being caught late at great cost, test suites need constant maintenance and updating, whether automated or manual.

But how should one go about making the tests relevant?

Constantly monitor changes

A tester’s ability to make structural and functional connections, identify new scenarios, and update existing test cases will increase test coverage and support new functionality, thereby increasing the chances of finding new defects.

Track the bug statistics regularly

This will give you a clear understanding of how effective your tests have been. If a test has not reported a bug in the last few runs, check whether it is worth moving to the archives. This requires responsiveness to regular test feedback, continually reviewing and renewing the tests, and a sharp eye on the suite to remove useless test cases that may be piling up. So revisit, revise and renew often, and change your test data.
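One lightweight way to act on this is to mine your own test-run history. The sketch below assumes a hypothetical CSV export with test_name, run_date and failed columns, and flags tests that have caught nothing in their last N runs as candidates for review or archiving.

```python
import pandas as pd

N = 20  # look-back window: the last N executions of each test

# Hypothetical export from your test management tool: one row per test execution,
# with `failed` recorded as 1 when the test caught a defect and 0 otherwise.
runs = pd.read_csv("test_runs.csv", parse_dates=["run_date"])

defects_caught = (
    runs.sort_values("run_date")
        .groupby("test_name")
        .tail(N)                      # keep only the last N executions per test
        .groupby("test_name")["failed"]
        .sum()
)

stale = defects_caught[defects_caught == 0].index.tolist()
print(f"{len(stale)} tests found no defect in their last {N} runs:")
for name in stale:
    print(" -", name)
```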

Build variance into tests

This can be done in the design phase itself, where models can be designed to create different paths through or to the feature under test. Additional data may be created to feed the alternative flows. The aim is to ensure that the feature is fully exercised in different ways. It is certainly easier to create the additional data while you are in the process than to review the complete set of tests afterwards. So yes, this is a good way to avoid the pesticide paradox.
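In practice, variance can be built in by parameterizing tests over explicit alternative paths and varying the data on each run. The pytest sketch below is illustrative only; the order-total function stands in for whatever feature is under test.

```python
import random
import string
import pytest

def make_order(quantity, coupon=None):
    """Stand-in for the feature under test: computes an order total."""
    total = 10.0 * quantity
    if coupon == "SAVE10":
        total *= 0.9
    return {"quantity": quantity, "total": round(total, 2)}

# Explicit alternative paths through the feature under test.
@pytest.mark.parametrize("quantity,coupon,expected", [
    (1, None, 10.00),
    (3, "SAVE10", 27.00),
    (5, "UNKNOWN", 50.00),
])
def test_order_total_paths(quantity, coupon, expected):
    assert make_order(quantity, coupon)["total"] == expected

def test_order_total_with_varied_data():
    """Fresh data each run keeps the suite from always walking the same path."""
    quantity = random.randint(1, 100)
    coupon = "".join(random.choices(string.ascii_lowercase, k=6))  # never "SAVE10"
    assert make_order(quantity, coupon)["total"] == pytest.approx(10.0 * quantity)
```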

Go for Exploratory Testing

Exploratory tests are not identified in advance. Free of dependency on scripted tests, exploratory testing draws on the breadth and depth of the tester’s imagination and knowledge of the product. This, in turn, helps find more bugs than routine testing and can cover scenarios and cases that are normally ignored. After all, mechanized processes cannot think, but testers can. The human element should thus be incorporated to enhance testing effectiveness and to escape the trap of repeating the same automated tests again and again.

Conclusion

To recap, there is no “foolproof testing suite” that can discover all the bugs without any modification. If you rely on any one suite for eternity, you will never know how worn out it is, and the result will be a miserable product release.

A better way is thus to keep tabs on changes and review the suite regularly, adding more scenarios and cases as required. Additionally, a hawk-eye on bug statistics will tell you how effective your test suite is. Besides, you can keep adding extra sets of test data with alternative paths during the build phase to build variance into the tests. Finally, the human element and intelligence should be added to the testing process, as exploratory tests can uncover bugs through cases and scenarios that the system is unable to identify.

Moreover, it does no harm to pause for reflection, seek peer reviews, or even start fresh if there is a major change in a component. This helps control the impact of the pesticide paradox. There is no guarantee that all bugs can be caught this way, but a better and more efficient outcome can certainly be expected.

A Journey to the IoT World-III

This is a three part series

Managing the IoT Storm- Probable Solutions to the Testing Ordeals

In our last blog, we highlighted the testing challenges that IoT brings along. Even though these are not easy to overcome, if managed efficiently, IoT can offer enormous benefits to organizations and societies.

As per Gartner, by 2020 IoT-enabled service models could save a trillion dollars a year in maintenance and service costs. Moreover, the McKinsey Global Institute predicts that IoT is set to have an economic impact of between $4 trillion and $11 trillion by 2025.

This definitely indicates that IoT will transform our lives beyond imagination. Hence, a rational approach is to recognize and understand its impacts and get prepared, in advance, to deal with this disruptive technology, which poses huge challenges to the testing world.

As per the World Quality Report 2016, key opportunities for IoT testing solutions include:

Source: Capgemini

Keeping these findings in mind, let’s attempt to offer some probable solutions to the testing ordeals we underscored in our last blog.

Dealing with an Avalanche of Internet-Enabled Devices

To leverage the real benefits of IoT, testing companies will need to create an end-to-end QA testing strategy covering the diverse set of embedded devices, applications, testing methodologies, and environments, while staying continuously up to date with the latest testing tools and their intended uses. Real-device testing will not completely lose its relevance, but it should be used prudently and selectively.

Surprisingly, as per WQR 2016, a number of organizations with IoT as part of their business still do not have a test strategy for IoT.

Source: Capgemini

Ensuring Hyper-connectivity Across Multi-Layered IoT architecture

Given the increasing number of devices, OSs, traffic patterns, varied UIs, and diverse networks, testing on the cloud with emulators and simulators can offer a plausible solution for seamless hardware-software integration in real time, unfailing device interoperability, and perfect user interaction.

Hence, the future focus will be on cloud testing and virtualization solutions, but this implies that a single hole in security can become a major threat.

Dealing with Security Concerns

The solution lies in adopting ‘multi-channel’ and ‘behavior driven’ testing models and approaches along with intensive platform migration testing.

Companies deploying controller devices will be compelled to consider the type of data they are handling, and implications of data leakage. Organically building high levels of security into the IoT devices and software along with regular security updates may offer a plausible but complex solution to ensure data security.

This implies that elaborate testing with lengthy test cycles will be required and that sensitive information must be encrypted, but not at the cost of sacrificing speed to market.

Applicability of Agile without exception

Agility will be the face of the future. Organizations still on traditional models should now adopt fast and responsive QA and testing solutions with an agile mindset. This necessitates moving to a DevOps platform.

Adopting DevOps

DevOps implies a cultural shift, and though the transition is not easy, its benefits will outweigh those of traditional testing approaches, which will become more or less dysfunctional over time. For DevOps, organizations need a plan that clearly streamlines how developers and ops teams will work together to reap the real benefits. Clearly delineating responsibilities and making people accountable will make the transition smooth.

Companies will need to implement the contemporary technologies essential to IoT and close all the key gaps on the way to implementing agility. Adopting open source frameworks will be a step in the right direction.

Open Source Frameworks

With countless sensors, millions of routers, gateways, and data servers, the scalability requirements will only be met via open source frameworks.

Moreover, these frameworks will assist testers in not getting stuck with a particular tool thus mitigating the risk of a potential ‘lock-in’, and organizations will be free to switch solutions as and when required.

Undoubtedly, future testing will need more innovative and smart open source frameworks and testing companies will be required to run longer test cycles for ensuring security and reliability of the app. Besides, continuous training of the staff and swift adoption of the agile framework will stretch the budget.

Managing the Budget

Budgets will definitely soar, and the only way out is to remain well-prepared. Companies still on Waterfall models should begin the transition, and those already Agile should invest in more research and understanding of the implications and challenges of IoT.

A separate budget should be set aside to train testers on the latest tools and technologies to avoid costly mistakes. Moreover, carefully planning and setting up Agile, scalable TCoEs with contemporary test service delivery models can be a good step.

TCoE

To promote an IoT ecosystem, setting up an Agile TCoE can be a good initiative for companies in their preparedness for the disruptive technology. This could help with enhanced quality and cost efficiency, along with instilling agility and clarity in the software development process and bringing the required cultural shift for organizations to deal with the changes needed in the IoT world.

Conclusion

To put it briefly, with IoT, QA testing will be a complex task with innumerable testing scenarios. The possible solutions to these testing challenges can be as follows:

  • For testing diverse highly connected devices, testing companies will be required to create a blueprint of an end-to-end QA strategy. Cloud and virtualization solutions will be the way forward.
  • Multi-channel and behavior driven testing models, with information encryption and intensive platform migration testing, will assist in dealing with possible security threats. And, organically building high levels of security within the devices will be a good solution.
  • Agility and DevOps will become an integral part of the testing process and hence companies should be well-prepared for that.
  • With increased reliance on open source frameworks, companies will be able to manage their budgets along with getting access to a highly scalable cloud platform for testing purposes. Additionally, training the testing team on the latest tools can help in getting a competitive edge.
  • Finally, setting up a TCoE and investing, if possible, in future research will be a major step in enabling the testing companies to remain ahead of the game.

Therefore, testing companies can follow these guidelines and initiate the organizational shift required to deal with the looming IoT storm.

A Journey to the IoT World-II

This is a three part series

Testing Challenges in an IoT Framework

Continuing from our previous blog, we may rightly define the Internet of Things (IoT), our ‘Cobweb’, as a new, gigantic tech wave, disruptive to existing technologies and with no apparent parallel at present or in the near future. But what’s so unique about it?

It’s not an ordinary cobweb that is only superficially connected; rather, each thread of the cobweb can sense the activities of every other thread and can communicate with it in real time. Stated differently, IoT implies flawless communication among devices across internal and external environments in real time, through the exchange of data and split-second information, enabling intelligent decision-making. Sounds fascinating? It certainly is exciting for users, but not so appealing for the testing world. Let us understand why.

Dealing with an Avalanche of Internet-Enabled Devices

The IoT framework implies a further increase in the existing heap of devices, making testing on all real devices a sheer impossibility. A wide range of traffic patterns, big data, different types of interfaces, numerous OSs, networks, locations, and device-specific features poses a complex matrix of possible testing scenarios, making QA testing highly sophisticated and challenging.

Difficulties in Ensuring Hyper-connectivity Across the Multi-Layered IoT architecture

With a multitude of sensors and actuators collecting huge chunks of data over multiple networks, the task of dynamically collating and displaying streams of data in real time may cause storage and analysis paralysis. Quality assurance testing to ensure device interoperability and perfect user interaction will therefore require numerous tests running over longer time spans to ensure reliability, compatibility, and security, in turn hitting the time to market for the product. Besides, can security be ensured even after that?

Security and Compatibility Concerns

The constant inflow of streaming data will make it crucial to ensure data safety. Verifying that data does not leak when transmitted from one device to another, and that it is properly encrypted, will require comprehensive testing solutions. Moreover, resolving the compatibility issues involved in integrating various controller devices into existing systems for data generation is another challenge. This will not be a straightforward task, and a lot of knowledge and understanding will be required, along with time considerations.

Hence, security issues and testing for backward compatibility with upgraded versions will be major areas of unrest for testers, especially when speed to market matters and a trade-off is not an acceptable option.

An HP study reveals that 70 percent of IoT devices are vulnerable to attack, and that IoT devices average 25 vulnerabilities per product, indicating an expanding attack surface for adversaries.

Source: Android Authority

This reiterates the need for thorough end-to-end agile testing solutions. Is it easy? Let’s see.

No Substitute for Agility

The need for faster releases will pull agility into the mainstream. Both automated and manual testing may be required for IoT apps; however, testers and organizations stuck with slow, traditional waterfall models will not be able to survive without updating themselves.

Speed to market will be the key, and automation and better communication will need radical changes in the current testing approaches along with the organizational set-up. What does it imply?

Challenges in Adopting DevOps

DevOps will become the norm, as teams will be required to work more seamlessly and communicate quickly to mitigate the higher technological risks. This will be a major bottleneck for traditional organizations, which will need a complete turnaround, not just in technological terms but also in terms of dealing with the change. The need for agility and DevOps will further imply increased dependency on open source frameworks that can enable faster and more thorough testing across multiple platforms and devices.

Challenges of the Current Open Source Frameworks

The current open source frameworks may not be able to cope with the increased platform fragmentation and future testing needs. They require testers to do a lot of work around test automation development and framework setup, which will be a mismatch for the sprawling network requirements of the rapidly growing IoT sector.

Setting up a suitable test framework for agile testing requires fast and incessant testing, an enhanced pace of development, and quick release patterns. With IoT, this will get even more complex, leading to longer test cycles and defeating the need for agility. Further, it can consume a huge chunk of the testing budget companies set aside, posing a challenge particularly for small testing companies.

Can Testing Companies Manage their Budgets?

Accessing next-gen automation tools to ensure faster, shorter SDLCs, along with the need for elaborate testing in the IoT context, could mean extensive costs for the company, especially on hardware and test infrastructure.

In addition, companies shifting from old traditional approaches to Agile and DevOps will need to spend extensively. Finally, if testers are not well trained, they will not be able to choose the right tools or use them wisely, adding further cost to the company.

Companies Will Need Skilled Testers

A lack of skills can blow a big hole in the budget of testing companies. Emerging technologies like IoT will need new skill sets, and this may require changing the current workforce or extensive training, both of which imply higher costs.

Conclusion

In a nutshell, with the adoption of IoT, platform and device fragmentation will increase testing complexity manifold. Let’s highlight the major problem areas:

  • Security will be a big challenge and despite the need for longer test runs, the speed to market will remain a priority, posing a tradeoff.
  • Companies will be compelled to adopt Agile and DevOps and manual testers will not be able to survive without updating their skills.
  • The current open source frameworks will not be sufficient. With the increase in the number of internet-enabled devices, platform and device fragmentation problems will grow.
  • The changing landscape and increased budgetary requirements, together with the need for a radical change in the management mindset, will be a matter of great concern, especially for mid-sized and small testing companies, some of which may be forced out of business.

Let us consider whether testing companies can prepare themselves before the IoT storm hits them in the face. We will look at probable solutions to these testing challenges in our third and final part: Managing the IoT Storm- Probable Solutions to the Testing Ordeals.