As the average telco network, and thus telco testing, grows more and more complex, the value of automation is becoming increasingly obvious. Whether you’re a device tester staring down a mountain of use cases requiring verification or a product manager desperate to improve time-to-market, the ability to run through hundreds of use cases per day at the press of a button has obvious appeal. And yet, the way we discuss test automation for telecoms often feels a little abstract. Broader discussions of how and why testers should automate can leave readers wondering what, exactly, an automated test environment would look and feel like.
Or, more specifically, “What particular tools and elements do I need in a test framework to successfully automate testing?”
Out-of-the-box Mobile Devices
Let’s start with the most obvious: in order to automate most telco testcases, you’ll need test phones. For most activity that will take place on your core network, you’ll want both Android and iOS devices. More specifically, you’ll want out-of-the-box Android and iOS devices that are identical to the ones your subscribers will be using. Why is this important? Because rooted and jailbroken devices (which many test automation solutions employ in order to install testing software) can give you a skewed picture of how end-users will actually interact with your network. Especially in an era when latency times are shrinking, small differences in device functionality can have a big impact on perceived network quality. Naturally, for other use cases (like PSTN activities) you’ll potentially want analog modems and IP phones integrated into your workflows as well.
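The practical upside of out-of-the-box devices is that the framework talks to them over standard tooling like ADB rather than on-device agents. As a minimal sketch, here is how a framework might parse the output of `adb devices` to find test phones that are ready for use (the parser is our own illustration, and the serial numbers are made up; in practice the text would come from running `adb devices` via `subprocess`):

```python
def parse_adb_devices(output: str) -> list[str]:
    """Return serials of attached Android devices in the ready ('device') state.

    Skips the 'List of devices attached' header line and ignores phones
    that are still authorizing or reported as offline.
    """
    serials = []
    for line in output.strip().splitlines()[1:]:
        parts = line.split()
        if len(parts) == 2 and parts[1] == "device":
            serials.append(parts[0])
    return serials

# Example text in the format `adb devices` produces; serials are invented.
sample = """List of devices attached
R58M123ABC\tdevice
emulator-5554\toffline
"""
print(parse_adb_devices(sample))  # ['R58M123ABC']
```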
Host PCs/USB Hubs
Of course, the phones themselves are only useful to you if they’re integrated into a larger test framework, and that framework has to live somewhere. Thus, you’ll need a host PC to house the installation of the framework (barring a cloud deployment). This PC can also conceivably host the server that actually runs the tests on the devices, though you could use a separate PC if desired. Beyond that, you’ll also need a PC to host testcase development activities (though this too can be co-located with the computer that hosts device control and the framework itself).
In order to connect the mobile devices discussed above (and other relevant hardware) to the device server, you’ll also need switchable USB hubs. All of this might seem obvious, but understanding the divisions between the different servers and the connections between them and your devices can actually be crucial to conceptualizing the way the testing actually works.
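The word "switchable" matters here: the device server needs to be able to power individual hub ports on and off, for instance to recover a hung test phone without anyone walking to the rack. One common approach is a hub supported by the open-source uhubctl CLI. The wrapper below is our own sketch (the hub location and port values are placeholders), showing how a framework might build such a command for `subprocess`:

```python
def uhubctl_cmd(location: str, port: int, action: str) -> list[str]:
    """Build a uhubctl invocation to switch power on one hub port.

    uhubctl selects the hub with -l (location), the port with -p,
    and the action (on/off/cycle) with -a.
    """
    if action not in ("on", "off", "cycle"):
        raise ValueError(f"unsupported action: {action}")
    return ["uhubctl", "-l", location, "-p", str(port), "-a", action]

# Power-cycle the port a misbehaving test phone is plugged into.
# Location '1-1' and port 2 are placeholder values for illustration.
print(uhubctl_cmd("1-1", 2, "cycle"))
# In a real framework: subprocess.run(uhubctl_cmd("1-1", 2, "cycle"), check=True)
```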
Attenuators and Shielded Boxes
For most use cases, the ability to control mobile devices and run testcases on them through the appropriate servers will be sufficient for verifying service. There are, however, more complex use cases that might require additional equipment. For instance, if you’re testing handovers or SRVCC functionality, you’re testing features that are designed to kick in only when the subscriber is in motion (e.g. moving from an area of strong LTE coverage to an area where she can only connect to 2G/3G service). As such, you need a way to control signal strength remotely without actually having to wander around searching for dead zones. In order to incorporate this kind of service verification into your automation flow, you’ll need attenuators and shielded boxes. These elements can be controlled through the device server as if they were mobile devices, meaning that they can be incorporated into your test scripts (via keywords defined in your testcase database) and then run as needed. In this way, previously laborious and time-consuming use cases, from SRVCC to RAN functionality verification, can be performed quickly and easily, and can be incorporated into your standard regression suite without much added time or effort, resulting in fewer errors and improved network quality overall.
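To make the attenuator idea concrete, a handover test essentially scripts the subscriber's "walk" out of coverage as a ramp of attenuation levels. The sketch below is illustrative: the `Attenuator` class and the dB values are hypothetical stand-ins for whatever driver and thresholds your instrument vendor actually provides:

```python
class Attenuator:
    """Hypothetical attenuator driver. A real driver would talk to the
    instrument over SCPI, serial, or a vendor API rather than just
    storing the current level."""

    def __init__(self) -> None:
        self.db = 0.0

    def set_attenuation(self, db: float) -> None:
        self.db = db


def simulate_walkout(att: Attenuator, start_db: int,
                     stop_db: int, step_db: int) -> list[int]:
    """Step attenuation upward, as if the subscriber were walking away
    from an LTE cell toward 2G/3G-only coverage; return applied levels."""
    levels = []
    db = start_db
    while db <= stop_db:
        att.set_attenuation(db)
        levels.append(db)
        db += step_db
    return levels


att = Attenuator()
print(simulate_walkout(att, 0, 60, 20))  # [0, 20, 40, 60]
```

In a keyword-driven framework, a ramp like this would sit behind a single keyword (e.g. a "Fade LTE Signal" step), so the SRVCC scenario reads as plainly in the test script as a voice-call scenario does.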
Testcase Databases
Up to this point, we’ve mostly been talking about the physical hardware testers need in an automated environment (though naturally the software and firmware on that hardware has been a crucial point as well). Now, we’re going to turn to the less tangible tools that are critical to test automation success. For starters, there’s your testcase database. We spoke a little bit above about hosting your testcase development activity on the same machine as the automation framework itself, but your actual library of testcases isn’t necessarily going to live in the same place. Why? Because in a keyword-driven test environment you aren’t going to be developing all of your test scripts from scratch. Rather, you can utilize preexisting test libraries that are already populated with telco use cases. You might, for instance, develop some keywords and test scripts on your own while pulling others from libraries created by your automation provider. As such, these libraries are often kept on Git servers for easy access and high visibility, two elements that are crucial to success in a framework powered by easy test scripting and execution.
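The keyword-driven idea itself can be sketched in a few lines: a keyword is a named, reusable action registered once (written by you or pulled from a provider's library), and a test script is just a sequence of keyword invocations. Everything below (the registry, the "Make Voice Call" keyword, its arguments) is our own illustration, not any particular framework's API:

```python
KEYWORDS = {}


def keyword(name):
    """Register a function under a human-readable keyword name."""
    def decorator(fn):
        KEYWORDS[name] = fn
        return fn
    return decorator


@keyword("Make Voice Call")
def make_voice_call(caller, callee):
    # A real keyword would drive two test phones via the device server.
    return f"{caller} -> {callee}: connected"


@keyword("Send SMS")
def send_sms(sender, receiver, text):
    return f"{sender} -> {receiver}: '{text}' delivered"


def run_script(script):
    """Execute a test script given as (keyword, *args) tuples."""
    return [KEYWORDS[name](*args) for name, *args in script]


results = run_script([
    ("Make Voice Call", "PhoneA", "PhoneB"),
    ("Send SMS", "PhoneA", "PhoneB", "hello"),
])
print(results)
```

Because scripts reference keywords by name rather than implementation, the keyword libraries can live in Git and evolve independently of the scripts that call them, which is exactly why versioned, shared storage matters here.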
Protocol Analysis Tools
If your goal is to implement automated end-to-end tests as a telco operator, the tools we’ve laid out above will give you an ideal foundation for smarter, more efficient service verification. If your goal is to go beyond end-to-end testing, however, you may need some additional tools. But wait: what exactly do we mean by going beyond end-to-end? Simply put, it means getting test results that go deeper than whether a test passes or fails by providing signalling data about the system-under-test. By examining the activity of your network on a protocol level in this way, you can gain deeper insight into any potential errors and service disruptions that arise. What tools do you need to perform this kind of protocol-level analysis? Typically, you can gather this data by employing tools like tcpdump or Wireshark within your automation framework. Once you’ve gathered signal traces using these tools, your automation framework should incorporate that data into any test reports it generates. In this way, you can approach root cause analysis with a much more informed idea of what might be causing a given problem.
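As a sketch of how signalling data can flow into a report: suppose the framework captures a trace with tcpdump during a test run and a pcap decoder (such as pyshark or scapy) turns it into packet records. Summarizing per-protocol counts for the report is then a small fold over those records. The dict-based records below are an illustrative stand-in for a real decoder's output:

```python
from collections import Counter


def protocol_summary(packets: list[dict]) -> dict[str, int]:
    """Count packets per protocol so a test report can flag, say, the
    unexpected absence of SIP signalling in a failed call test."""
    return dict(Counter(pkt["protocol"] for pkt in packets))


# Illustrative records, shaped roughly as a pcap decoder might yield them:
trace = [
    {"protocol": "SIP", "info": "INVITE"},
    {"protocol": "SIP", "info": "180 Ringing"},
    {"protocol": "RTP", "info": "payload"},
    {"protocol": "DIAMETER", "info": "CCR"},
]
print(protocol_summary(trace))  # {'SIP': 2, 'RTP': 1, 'DIAMETER': 1}
```

A report that shows "2 SIP messages, then silence" next to a failed voice-call verdict points an engineer at the signalling layer immediately, which is precisely the root-cause-analysis advantage described above.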