In an ideal world, service verification for voice, data, and mobile broadband usage would look very different from how it does right now. Test cycles would be perfectly matched to update timelines, testers would be able to complete tests for the entire range of use cases with time to spare, and any bugs uncovered could be addressed before new updates were rolled out. Unfortunately, that’s not the world we live in. Instead, we’re stuck with update cycles that are often too short for thorough use case testing, and service verification begins to feel like an albatross around the neck of any given telco operator.
Because of this mindset, some service providers have a hard time measuring test lab ROI for things like voice and data verification. Sure, you’ll find some bugs and patch up some areas of poor network quality, but mostly it’s treated as a necessary evil: a time and money sink that can’t really be done away with. Of course, this view is a little blinkered. Efficient, effective testing can be a huge asset, offering telco operators a real competitive advantage, not just in providing the best service but also in keeping down costs. If you’re a test lab engineer, you already know this. The question is, how do you prove it? How can you measure the ROI of what you’re doing in a way that makes its mission-critical status obvious?
Does Testing Reduce Expenses?
Because what happens in the test lab isn’t the kind of thing that makes its way to the sales floor, it can be easy to overlook the ways in which successful testing can reduce costs. And yet, there are numerous areas in which this might happen:

- Daily regression testing for voice and data helps ensure high network quality over time, which leads to happy customers who keep paying for your services.
- Testing keeps businesses from rolling out software updates that aren’t ready, helping to prevent costly damage control measures if and when something goes wrong with your service.
- Done right, testing also provides documentation that future engineering projects can rely on, not just for service verification but for future builds.
Based on these methods of cost saving, we can work backwards to develop a crude ROI calculation. If you have any information on customer attrition rates (e.g., how many people switch away from your service, and for what reasons), you can sketch out your cost of low quality, i.e., how many bugs or outages it takes, on average, to lose a customer. It’s not an exact science, but loosely speaking you can assert that the number of bugs found in testing corresponds to a certain number of customers retained. From there, you can estimate the cost of the kind of catastrophic service failure we discussed above (based on previous examples from your company or others) and weight that number by how successfully you’re able to ward off those kinds of failures (the ones that result in costly PR damage control campaigns, to say nothing of the engineer-hours spent fixing the situation). Then, divide the total value by the resources, in person-hours and money, that you’ve spent on your voice and data tests.
Congratulations, you have the beginnings of an ROI calculation!
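To make the arithmetic concrete, here is a minimal sketch of that crude calculation. Every input figure below is a hypothetical placeholder, not a benchmark; substitute your own attrition data, failure-cost estimates, and lab spend.

```python
# Crude test lab ROI sketch. All numbers are illustrative assumptions.

# Cost-of-low-quality inputs
bugs_found_per_quarter = 40        # bugs caught in the lab per quarter
bugs_per_customer_lost = 8         # from attrition data: avg bugs/outages to lose a customer
revenue_per_customer = 600.0       # annual revenue per retained subscriber

# Catastrophic-failure inputs
failure_cost = 250_000.0           # est. cost of one botched rollout (PR + engineering)
failures_prevented_per_year = 0.5  # expected failures testing wards off annually

# Testing spend
engineer_hours = 2_000.0
hourly_cost = 75.0
tooling_cost = 20_000.0

# Value attributed to testing: retained revenue plus avoided failures
customers_retained = bugs_found_per_quarter * 4 / bugs_per_customer_lost
value = (customers_retained * revenue_per_customer
         + failures_prevented_per_year * failure_cost)

cost = engineer_hours * hourly_cost + tooling_cost
roi = value / cost
print(f"Estimated annual test lab ROI: {roi:.2f}x")
```

Swap in your own numbers and the same three-line structure (value, cost, ratio) still applies.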
From here, things get a little more complicated. Whether we’re talking about verification for a particular network update or just ongoing regression tests, we have to account for test coverage, i.e., the percentage of identified use cases that were actually verified in the test period. Why? Because the benefits and cost savings sketched out above can only be attributed to testing in cases where testers were actually able to verify service.
Thus, if a tester or team of testers is able to power through 75% of the relevant use cases before the go/no-go decision gets made, then the ROI we calculated in the last section should be adjusted accordingly. Of course, testers can prioritize configurations that are more likely to come up in the field, meaning that the share of real-world network traffic covered by an incomplete test run may be rather higher than the raw percentage would suggest. As a result, maximizing ROI becomes a complex balancing act: past a certain point, you’re unlikely to find bugs that will actually lose you customers, meaning that additional testing, by this logic, isn’t worth the additional engineer-hours.
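One way to formalize that adjustment is to weight each use case by the share of real-world traffic it represents, rather than counting cases equally. A rough sketch, with made-up traffic shares and a placeholder base ROI:

```python
# Coverage-adjusted ROI sketch. Traffic shares and base ROI are
# hypothetical; in practice they would come from field traffic data.

# (traffic_share, verified) for each identified use case
use_cases = [
    (0.40, True),   # the most common configuration in the field
    (0.25, True),
    (0.15, True),
    (0.12, False),  # lower-priority cases cut when time ran out
    (0.08, False),
]

# Raw coverage counts cases; traffic coverage weights them by field usage
raw_coverage = sum(verified for _, verified in use_cases) / len(use_cases)
traffic_coverage = sum(share for share, verified in use_cases if verified)

base_roi = 1.2  # the unadjusted ROI from the earlier calculation (placeholder)
adjusted_roi = base_roi * traffic_coverage

print(f"Raw use-case coverage:     {raw_coverage:.0%}")
print(f"Traffic-weighted coverage: {traffic_coverage:.0%}")
print(f"Coverage-adjusted ROI:     {adjusted_roi:.2f}x")
```

Note how prioritizing common configurations pays off here: three of five cases verified is only 60% raw coverage, but 80% of field traffic.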
Now, the idea that your ROI could go down because you tested too thoroughly might seem counterintuitive on its face, and it is. But it’s also a reflection of the wild increase over the past few years in the number of use cases that require testing, and of an increase in testing complexity overall. Essentially, voice and data verification are getting harder, requiring more hours and resources than ever before. As a result, it’s more and more difficult to demonstrate a positive ROI for a function that we all know to be fundamentally essential. What does this suggest? Simply that something about the way telco test labs operate has to change. Whether that means working towards automation or some other evolution in test lab operations, the current state of affairs can’t go on forever.
As it happens, changing the way you perform your tests makes calculating ROI much more straightforward than what we’ve been discussing above. You can determine the resources (in time and money) that your current testing requires and compare them to those of your new testing paradigm. Does the new paradigm save time? Then its ROI should be easy to demonstrate, even accounting for whatever startup cost is associated with it. If, in addition to being more time-efficient, your new method is better from the perspective of actual service verification, then you’ll see fewer bugs reaching the market and thus lower attrition among your current network subscribers. Add that to the time and money saved to arrive at your new test lab ROI.
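Under those conditions, the before/after comparison reduces to simple arithmetic. The sketch below uses invented figures (hours, rates, a one-time startup cost amortized over four years) purely for illustration:

```python
# Before/after comparison for a new testing paradigm (e.g. automation).
# All inputs are illustrative assumptions, not real lab figures.

def annual_cost(engineer_hours: float, hourly_cost: float, tooling_cost: float) -> float:
    """Total yearly spend for one testing approach."""
    return engineer_hours * hourly_cost + tooling_cost

current = annual_cost(engineer_hours=2_000, hourly_cost=75.0, tooling_cost=20_000)
automated = annual_cost(engineer_hours=600, hourly_cost=75.0, tooling_cost=60_000)

startup_cost = 40_000.0        # one-time migration cost
amortized = startup_cost / 4   # spread over four years

savings = current - automated - amortized

# If the new method also catches more bugs, fewer escape to the market,
# so add the retained-revenue gain on top (hypothetical figure here).
extra_retention_value = 15_000.0

new_paradigm_roi = (savings + extra_retention_value) / (automated + amortized)
print(f"Annual savings vs. current lab: ${savings:,.0f}")
print(f"New-paradigm ROI: {new_paradigm_roi:.2f}x")
```

The key design point is amortizing the startup cost: a one-time migration expense shouldn’t be charged entirely against the first year’s savings.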
LEARN MORE ABOUT TEST AUTOMATION
Are you looking for a solution to automate your service verification that includes both true end-to-end testing and full access to the systems under test?