Automation is often treated like a magic bullet, a cure-all for increasing demands on testing personnel who face new network quality concerns, additional devices, and other challenges every day. However, the truth is that automating any process, especially a critical one like network testing, is fraught with pitfalls. These five best practices can help ensure the success of network testing automation.
1. Lay the foundation for automation success.
Successful implementation of an automated testing framework starts long before the first testing algorithm is ever written. It starts with situating automation in the context of your organization's strategic business goals. As with any IT initiative, getting buy-in from both business leaders and end users is critical. Start by conducting an audit to understand the testing ecosystem fully. The audit should capture the following:
- A complete inventory of all network devices and the vendors that support those devices
- The pipeline for application releases, along with their SLAs and maintenance windows
- Application scaling requirements in both production and non-production
- The environments that must be supported
- Application types and how applications interact with each other
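The audit items above are easiest to act on when they live in a structured catalog rather than a spreadsheet of free text. As a minimal sketch (the field names and sample records here are hypothetical, not a standard schema), the inventory might look like:

```python
from dataclasses import dataclass, field

@dataclass
class NetworkDevice:
    # Hypothetical fields; adjust to match your actual CMDB schema.
    hostname: str
    vendor: str
    model: str
    environments: list = field(default_factory=list)  # e.g. ["prod", "staging"]

@dataclass
class ApplicationRelease:
    app_name: str
    sla_hours: float         # e.g. allowed downtime per month
    maintenance_window: str  # e.g. "Sun 02:00-04:00 UTC"

# A minimal audit is a structured collection of these records,
# which later feeds the automation roadmap.
audit = {
    "devices": [
        NetworkDevice("core-sw-01", "Cisco", "Catalyst 9300", ["prod"]),
        NetworkDevice("edge-fw-02", "Palo Alto", "PA-440", ["prod", "staging"]),
    ],
    "releases": [
        ApplicationRelease("billing-portal", sla_hours=0.5,
                           maintenance_window="Sun 02:00-04:00 UTC"),
    ],
}

# Derived views, such as the vendor list, fall out of the catalog for free.
vendors = {d.vendor for d in audit["devices"]}
```

Even a catalog this simple answers roadmap questions directly (which vendors must the framework support, which apps have the tightest SLAs) instead of requiring a fresh manual survey each time.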
This process will support the development of an automation roadmap, so that everyone can see the benefits of the initiative and how it will progress incrementally to ensure minimal disruption. Business leaders will be more likely to invest in automation if they can see a clear path to ROI. End users will be less reluctant to change existing processes when they see that change will occur over time, with a pre-defined goal.
2. Choose the right tests to automate.
While automated tests can save significant time and money, there is also a cost associated with developing and maintaining an automated testing program. It's important to consider which tests are suitable for automation. Consider the following factors to determine whether a test should be automated:
- Test frequency: The more often a test is run, the more benefits can be gained from automation. Avoid wasting time on automating tests that will not become part of your regression suite or will only be run once.
- Stability: Automated testing during the development phase, when an application has unstable features, can be difficult. But once development is complete and the application has reached a stable phase, automation is a better option.
- Workflow complexity: Automating complex workflows might seem ideal, but it's actually a challenge to create these workflows because tests must be run sequentially, not in parallel. Workflow automation also requires a single path through an application, which can lead to missed issues with different paths.
- Pass/fail criteria: If the outcome of a test is objective and predictable, the test is a good candidate for automation. Tests where the result might vary, or require interpretation by a human, should still be completed manually.
- Human interaction: Some use cases require human interaction. For example, mobile app users might be interrupted by other notifications that a user must manually acknowledge. Or a login might include a CAPTCHA. Testing processes that include these actions should usually be done manually.
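The five factors above can be combined into a simple screening rule. This is an illustrative sketch, not an industry standard; the thresholds (such as "roughly weekly or more often") are assumptions you should tune to your own costs:

```python
def should_automate(frequency_per_month: int,
                    feature_stable: bool,
                    deterministic_result: bool,
                    needs_human_interaction: bool) -> bool:
    """Rule-of-thumb screen combining the factors above."""
    if needs_human_interaction or not deterministic_result:
        return False  # CAPTCHAs, subjective pass/fail -> keep manual
    if not feature_stable:
        return False  # wait until development settles
    return frequency_per_month >= 4  # roughly weekly or more often

# Nightly regression test: frequent, stable, objective -> automate.
nightly = should_automate(30, True, True, False)   # True
# One-off exploratory test: too infrequent to repay the investment.
one_off = should_automate(1, True, True, False)    # False
# Result requires human interpretation -> keep manual.
subjective = should_automate(30, True, False, False)  # False
```

Note that workflow complexity is deliberately left out of the function: it resists a yes/no encoding and is better judged per workflow.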
3. Prioritize based on ROI and cost.
It's not uncommon for automation initiatives to fail because teams try to automate too much all at once. A better approach is to start slowly, learn from mistakes, and build on small successes. Start with tests that will help maximize direct or indirect ROI.
Cross-device testing, for instance, is both time-consuming and expensive: according to Software Device News, a testing lab would have needed 50 different devices to test just 80% of possible combinations in 2018. That number has already grown, and as more IoT-enabled devices enter the market, experts predict that cross-device testing will demand exponentially more resources from telecommunications companies. Automating cross-device testing could save countless hours of manual effort.
Meanwhile, not all failures carry the same potential costs. Determine which failures carry the highest risk in terms of actual cost, which might include anything from equipment replacement to lost customers. Although it may be impossible to attach an exact monetary figure to each failure, you should have sufficient information to assess the financial risk associated with different failures. Give preference to automating tests that detect or prevent these high-risk failures.
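One hedged way to make this prioritization concrete is to score each candidate test by expected monthly value: labor saved plus the financial risk it helps mitigate. The formula and all figures below are illustrative assumptions, not benchmarks:

```python
def automation_priority(runs_per_month: int,
                        minutes_saved_per_run: float,
                        failure_cost: float,
                        failure_probability: float,
                        hourly_rate: float = 75.0) -> float:
    """Expected monthly value of automating one test:
    direct labor savings plus the risk exposure the test mitigates."""
    labor_savings = runs_per_month * minutes_saved_per_run / 60 * hourly_rate
    risk_mitigated = failure_cost * failure_probability
    return labor_savings + risk_mitigated

# Cross-device regression: frequent, and catches costly field failures.
cross_device = automation_priority(20, 45, failure_cost=50_000,
                                   failure_probability=0.02)
# Occasional config check: rarely run, low-cost failures.
config_check = automation_priority(1, 10, failure_cost=500,
                                   failure_probability=0.05)
```

Ranking candidates by this score naturally pushes frequent, high-risk tests like cross-device regression to the front of the automation backlog.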
4. Don't neglect workflows to address test failures.
The job of automating network testing doesn't end when the test scripts are written; it ends when those scripts are paired with a robust procedure for handling test failures. After all, automated testing detects failures and can even help identify their root causes, but it usually cannot fix them; human intervention is almost always necessary. Yet it's not uncommon for the results of automated testing to be ignored simply because there is no process in place to address them.
The best testing automation software actually supports these processes through smart features like automated notifications or dashboards that show only those test cases that require human attention. Be sure to consider these factors during the evaluation of any automated testing tool.
While these features certainly facilitate addressing issues discovered by automated network testing, they aren't enough. Develop and document a cohesive process that outlines who is responsible for addressing failures, how that person will be notified, and how issues should be escalated.
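A documented failure-handling process can itself be encoded so that routing is consistent. In this sketch, the severity levels, team names, and escalation timers are all hypothetical placeholders for your own policy:

```python
# Hypothetical escalation policy; owners, channels, and timers are illustrative.
ESCALATION = {
    "critical": {"owner": "noc-oncall",    "notify": "page",
                 "escalate_after_min": 15},
    "major":    {"owner": "network-team",  "notify": "ticket",
                 "escalate_after_min": 120},
    "minor":    {"owner": "network-team",  "notify": "dashboard",
                 "escalate_after_min": None},
}

def route_failure(test_name: str, severity: str) -> dict:
    """Return a handling plan for a failed automated test:
    who owns it, how they are notified, and when it escalates."""
    policy = ESCALATION.get(severity, ESCALATION["minor"])
    return {
        "test": test_name,
        "severity": severity,
        "owner": policy["owner"],
        "notification": policy["notify"],
        "escalate_after_min": policy["escalate_after_min"],
    }

plan = route_failure("bgp-peering-check", "critical")
```

Keeping the policy in one table makes the "who is responsible, how are they notified, how does it escalate" questions answerable at a glance, and auditable when the process changes.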
5. Continually evaluate the necessity of each automated network test.
Network testing requirements grow more complex and numerous each year thanks to an explosion of new use cases and regulations. Testing automation offsets these demands, but even automated tests consume resources: they need ongoing monitoring and maintenance by a human, and they demand IT resources such as data storage. Yet automated tests often get overlooked when the testing team "weeds out" unnecessary or outdated tests.
Automated network tests should be evaluated just as manual tests are. Discontinue any automated tests that, for instance, focus on network configurations or devices that are no longer supported. Removing these unnecessary tests will simplify your testing protocols (particularly your regression testing suite), decreasing the time and resources needed to finish comprehensive testing.
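A periodic pruning pass can be as simple as checking each automated test against the current device inventory. The catalog entries and model names below are hypothetical; in practice the supported-model set would come from the audit described in the first section:

```python
# Hypothetical supported-hardware set, ideally sourced from the device audit.
SUPPORTED_MODELS = {"Catalyst 9300", "PA-440"}

# Hypothetical test catalog entries: each test targets one device model.
tests = [
    {"name": "vlan-trunk-check", "model": "Catalyst 9300"},
    {"name": "legacy-hub-check", "model": "HubMax 100"},
]

def is_obsolete(test: dict) -> bool:
    """Flag tests targeting retired hardware; extend with checks for
    unsupported configurations or long-unused tests as needed."""
    return test["model"] not in SUPPORTED_MODELS

to_retire = [t["name"] for t in tests if is_obsolete(t)]
# to_retire == ["legacy-hub-check"]
```

Running a check like this on a schedule keeps the regression suite aligned with the hardware actually in service, rather than relying on someone remembering which tests have gone stale.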