According to a recent McKinsey study, network quality concerns are among the most important factors shaping a customer’s choice of mobile carrier. While pricing is still the most important one on average, survey respondents were also quick to list national and local coverage, network speed, and quality of 4G as critical deciding factors in carrier selection. In spite of the growing importance of network quality, however, McKinsey also found that the average quality of service for voice has decreased across Europe in recent years.
Sure, usage patterns among mobile phone users are changing, but this state of affairs still seems unsustainable. If a particular operator constantly makes gains in mobile broadband at the expense of voice functionality, they're likely to see an increase in churn. Why? Because customers increasingly think of their service as a holistic experience, rather than a set of discrete service offerings. Just because you offer the lowest latency for voice-over-IP doesn’t mean you’ll be forgiven for spotty geographical coverage, and vice versa. The question is: how can telco operators use the resources they have to improve network quality and meet ever-changing customer requirements?
One intriguing potential answer is testing automation.
Measuring Network Quality
The use of mobile data continues to grow exponentially each year, and with that explosive growth comes a corresponding increase in complexity. The number of potential test cases for “typical” mobile device usage, for instance, increases as the diversity in potential user experiences ticks up. A given user might be especially worried about latency times while using a particular app during periods where capacity is already strained, all while using a multi-band smartphone that potentially presents signaling problems. At the same time, another user might rely primarily on voice and messaging apps across the country, and thus be more concerned with packet loss and widespread coverage. There isn’t one objective KPI that perfectly encapsulates the needs of these two users, meaning your quality of service (QoS) verification is necessarily going to be somewhat of a patchwork, much like your service itself.
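To make the two-user contrast concrete, here is a minimal sketch of why a single KPI falls short. The measurement samples below are invented for illustration; in practice they would come from network probes or on-device agents. The latency-sensitive user cares about tail latency (a p95 exposes one congested period that a mean hides), while the voice-and-messaging user cares about average packet loss across sessions:

```python
import statistics

# Hypothetical samples: round-trip latency (ms) and per-session packet loss (%).
latency_ms = [48, 52, 61, 55, 49, 210, 50, 53]   # one congested outlier
packet_loss_pct = [0.1, 0.0, 0.4, 2.5, 0.2, 0.1]

def p95(samples):
    """95th percentile via nearest-rank, so a single bad period shows up."""
    ordered = sorted(samples)
    rank = max(0, round(0.95 * len(ordered)) - 1)
    return ordered[rank]

# User A (latency-sensitive app user): the mean looks fine, the tail does not.
print("mean latency:", statistics.mean(latency_ms))  # ~72 ms, looks acceptable
print("p95 latency:", p95(latency_ms))               # 210 ms, exposes congestion

# User B (voice/messaging user): average loss across sessions is what matters.
print("mean packet loss %:", round(statistics.mean(packet_loss_pct), 2))
```

Neither number substitutes for the other, which is exactly why verification ends up as a patchwork of per-use-case metrics rather than one headline figure.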
What this means, essentially, is that if you’re relying on a single metric like peak-hour bandwidth, latency, or total throughput, you may be missing crucial elements of the way that your network quality is perceived by end-users. Small differences in speed or capacity under lab-test conditions might be important to your average telco operator from a technical perspective, but the best technical stats don’t always translate into the best customer experience. Just as the diversity of devices and protocols your network must support is growing rapidly, so is the diversity of potential use cases, many of which are difficult to replicate without reproducing the exact usage conditions in question.
The Impact (and Limitations) of Testing
Because quality of service is more and more defined by performance in particular use cases, it stands to reason that network quality verification would become increasingly reliant on use case-based testing. Sure, your test lab will still need to use cutting-edge equipment to optimize your performance, but from the perspective of operational regression testing the focus should be less on verifying network quality under these conditions and more about meeting customer needs across the innumerable service configurations that arise across the various and disparate touchpoints on your network. This means putting less emphasis on simulated tests, or tests performed on rooted devices, and focusing instead on verifying the devices and scenarios that your users (the ultimate arbiters of your QoS) are likely to encounter.
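One way to operationalize use case-based verification is to treat each use case as plain data: a real device, a scenario, and the KPI threshold that matters for that scenario, rather than one lab-wide benchmark. The devices, scenarios, and thresholds below are hypothetical examples, not a prescribed catalogue:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    device: str       # the real end-user device, not a simulator
    scenario: str     # the user-facing situation being verified
    kpi: str          # the metric that matters for this scenario
    threshold: float  # pass if the measured value is <= threshold

# Hypothetical catalogue entries for illustration.
CATALOGUE = [
    UseCase("mid-range Android", "video call at peak hour", "p95_latency_ms", 150.0),
    UseCase("multi-band flagship", "app download on 4G", "packet_loss_pct", 1.0),
    UseCase("budget smartphone", "voice call while roaming", "packet_loss_pct", 0.5),
]

def verdict(case: UseCase, measured: float) -> str:
    """Compare a measured KPI value against the use case's own threshold."""
    status = "PASS" if measured <= case.threshold else "FAIL"
    return f"{case.scenario} [{case.kpi}={measured}]: {status}"

print(verdict(CATALOGUE[0], 120.0))
```

Keeping the catalogue as data makes it easy to review with non-engineers and to extend as new devices and scenarios appear on the network.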
By refocusing your service verification in this way, you can create better alignment between user experience and network functionality—i.e. you uncover the issues that are most likely to impact your users on a daily basis. If you can preempt the kinds of service shortfalls that really affect public perception before they occur—whether that’s by getting ahead of a new software update or by performing routine regression testing—you can keep your users from switching to a different operator. The only issue here is that the kind of testing we’re discussing can be prohibitively time-consuming. Your average test engineer may only be able to run through 6-10 use cases per day, meaning that any time you roll out a new update it’s likely that there will be some unexplored, unverified territory. This is where automation comes in.
The Power of Automation
We’ve seen the ways that use case-based testing can improve perceived quality of service within your network; now we’ll look to automation as a way to make this kind of testing feasible. Here, by automation we don’t mean simulated tests, nor do we mean volume testing performed on rooted devices. Rather, automation in this context means running scalable, repeatable tests on the same smartphones and tablets that end-users will ultimately be using. Using the kinds of workflows we’re talking about, you could potentially increase test volume by orders of magnitude, going from half a dozen or a dozen tests per day to hundreds. Thus, rather than prioritizing different configurations for verification and hoping that you get through the most common ones before you have to roll out the latest network update, you can work through test cases quickly and efficiently. In this way, the ideal alignment between QoS perception and service verification is made not just possible but cost- and time-effective.
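The scale argument is easy to see as a sketch. Even a small matrix of devices, scenarios, and network conditions multiplies out well past what a manual engineer covering six to ten cases a day can exhaust, while an automated harness can iterate the same matrix on physical devices daily. The `run_case` function below is a placeholder; in a real harness it would drive the physical device and collect KPIs:

```python
import itertools

# Hypothetical test matrix: real devices x user scenarios x network conditions.
devices = ["phone-A", "phone-B", "tablet-C", "phone-D"]
scenarios = ["voice call", "VoIP call", "video stream", "file download",
             "messaging", "web browsing"]
networks = ["3G", "4G", "roaming"]

# The cross product alone is 4 * 6 * 3 = 72 configurations -- more than a
# week of manual effort at 6-10 cases per day, but one automated run.
matrix = list(itertools.product(devices, scenarios, networks))
print(len(matrix), "configurations")

def run_case(device, scenario, network):
    # Placeholder: a real harness would drive the physical device here
    # (e.g. via an on-device instrumentation agent) and record measured KPIs.
    return {"device": device, "scenario": scenario, "network": network, "ok": True}

results = [run_case(*combo) for combo in matrix]
```

Because the matrix is just data, adding a new device or scenario grows coverage automatically instead of forcing a fresh round of manual prioritization.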
Not only does this give telco operators the tools to improve network quality from the point of view of customers’ changing needs, it gives them a foundation for continued success. Since automated processes like the ones we’re discussing are, by their nature, digital, scalable, and repeatable, testing a large volume of use cases can become an everyday practice, rather than an occasional, time-consuming ordeal. Different configurations can be documented, standardized, and tracked, meaning that, taking voice quality as an example, daily regression testing can help you collect data that can be analyzed over time. Based on this data analysis, you can continue to improve the quality of your voice services even when your (and your customers’) primary focus might be elsewhere. In the long term, the value of your data analytics will compound, leading to even further improvements to your network quality and improved ROI for your testing workflows.
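Staying with the voice-quality example, a minimal sketch of what "analyzed over time" could look like: daily regression runs emit a mean opinion score (MOS, the standard 1-5 voice-quality scale), and a simple drift check flags when the latest run falls more than one standard deviation below the running baseline. The dates and scores below are invented for illustration:

```python
import statistics

# Hypothetical daily regression output: mean MOS for voice calls per run.
daily_mos = {
    "2024-03-01": 4.1,
    "2024-03-02": 4.0,
    "2024-03-03": 3.9,
    "2024-03-04": 3.7,
    "2024-03-05": 3.6,
}

scores = list(daily_mos.values())
# Baseline and spread from all runs except the latest.
baseline = statistics.mean(scores[:-1])
spread = statistics.stdev(scores[:-1])
latest = scores[-1]

# Flag a drift before users notice it: latest run is more than one
# standard deviation below the baseline of the preceding runs.
if latest < baseline - spread:
    print(f"voice quality drifting: {latest} vs baseline {baseline:.2f}")
```

A check this simple is obviously not a production alerting system, but it illustrates the compounding point: the alert is only possible because the automated daily runs have accumulated comparable data to compute a baseline from.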