MVNOs (mobile virtual network operators) sometimes get a bad rap. This isn’t entirely without reason: a recent report found that MVNO download speeds across the U.S. are, on average, 23% worse than those of their host networks. While MVNO download speeds could often reach the same heights as their host networks’, they tended to offer those speeds less consistently, leading to lower “consistent quality scores,” i.e. lower marks for meeting minimum speeds and staying below allowable levels of jitter, latency, and packet loss.
Every year, Gartner gives a rundown of what it predicts will be the major strategic trends for companies to explore in the coming year, and 2019 was no different. The leading research and advisory firm has some lofty expectations for the year ahead, including practical blockchain applications, an increase in distributed cloud computing, more focus on transparency, and the democratization of expertise, among others. But one of its new strategic trends in particular caught our eye at SEGRON: hyperautomation.
As of a few years ago, more than 90% of software testers reported automating between 50% and 100% of their tests. Of the survey respondents who had automated, about a quarter saw ROI immediately, another quarter within six months, and another quarter within a year. Fewer than 10% failed to reach ROI at all. Naturally, the telecom domain is its own beast, and in all likelihood the adoption numbers there would look a little less robust, but an examination of similar trends adjacent to the telco industry can still be telling for network operators and testers.
Mobile banking is not just a fad. By 2020, the U.S. is expected to have more than 160 million mobile banking users, and in the UK mobile banking is already overtaking internet banking in popularity. This could be attributed to a number of factors, from simple convenience to the increasing primacy of mobile phones in general—but whatever the cause, the implication for banking and financial services businesses is pretty clear: you really need a robust mobile application that your subscribers can access on the go.
Let’s say you’re migrating all of your subscribers to a highly redundant HSS database. You know it’s going to be a long process, but you want to set an ambitious deadline and perform the migration before your next quarterly all-hands meeting. As such, you put together a list of the things that are most likely to cause delays. And what should be at the top of that list? Depending on how your current test operations function, service verification could be your prime suspect.
Let’s say you’re in a management position at a prominent European network provider, and you’re trying to assess network quality. Some of your biggest questions are about testing: How quickly are your regression test suites running after network updates? How frequently do your tests uncover bugs, and how do those bugs get resolved? To get answers, you contact your test team, who send you the most recent test reports. But you can’t make heads or tails of them, and your questions remain unanswered.
Let’s say that after all these years you’re finally ditching your legacy billing system in favor of something new and shiny. For a while now you’ve been hearing about the OSS/BSS solutions of the future, with their unprecedented ability to integrate with IP networks, and you’ve finally decided to take the plunge. When all is said and done, you’ll have transitioned from a system that was basically designed to tally up voice usage and SMS traffic and spit out appropriate bills, to something that offers your customers real-time tracking of their data usage so that they can manage their behavior accordingly. This has the potential to be a huge win, but there’s just one problem: how do you make sure it’s working?
When we talk about quality of service (QoS), we tend to focus on things like latency, jitter, and packet loss, i.e. the network conditions that users experience in their day-to-day mobile device usage. But back-office processes like provisioning and invoicing can have just as much impact on subscriber happiness in the long term, both because subscribers sometimes have to interact with billing systems and because smoother functioning in the back office often helps network operators improve service elsewhere. For this reason, savvy telco operations tend to think long and hard about how subscriber information is stored and accessed.
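Those front-of-house metrics are at least straightforward to quantify. As an illustrative sketch (the function name and timestamps are invented for this example, not taken from any particular tool), latency, jitter, and packet loss can all be derived from per-packet send and receive times; the jitter estimate here follows the RFC 3550 style of exponentially smoothed transit-time deviation:

```python
# Illustrative sketch only: deriving basic QoS metrics from packet
# send/receive timestamps. All names and numbers are invented.

def qos_metrics(sent, received):
    """sent/received: parallel lists of timestamps in ms;
    None in `received` marks a lost packet."""
    pairs = [(s, r) for s, r in zip(sent, received) if r is not None]
    lost = sum(1 for r in received if r is None)
    latencies = [r - s for s, r in pairs]
    avg_latency = sum(latencies) / len(latencies)
    # Smoothed jitter: each new |transit-time difference| moves the
    # estimate by 1/16 of the gap, as in RFC 3550.
    jitter = 0.0
    for prev, cur in zip(latencies, latencies[1:]):
        jitter += (abs(cur - prev) - jitter) / 16
    loss_pct = 100.0 * lost / len(sent)
    return avg_latency, jitter, loss_pct

sent = [0, 20, 40, 60, 80]           # packets sent every 20 ms
received = [50, 72, None, 115, 130]  # third packet never arrived
lat, jit, loss = qos_metrics(sent, received)
```

A consistency score of the kind mentioned earlier would then check measurements like these against minimum thresholds across many sessions, rather than looking at any single call.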
Way back in the day, if you were on the go and needed to make a phone call, you kept your eyes peeled for the nearest payphone. You would drop in your coins and the operator would connect you to your desired call recipient. After the time you paid for had run out, a live operator would butt in and let you know that in order to extend your call you needed to add more money. Today, the idea of something like this happening during a VoLTE call seems a little bit absurd—but in point of fact it represents a need that all telco operators still grapple with: real-time provisioning of services based on what the user has actually paid for.
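That payphone interaction maps surprisingly directly onto modern prepaid charging. As a minimal sketch (the class, tariff, and balances below are all invented for illustration, not any operator’s actual charging logic), a session gate might only grant the call minutes the remaining balance can cover:

```python
# Illustrative sketch of balance-gated service: the modern analogue of
# feeding coins into a payphone. Names and tariff are hypothetical.

class PrepaidSession:
    RATE_CENTS_PER_MIN = 10  # assumed flat tariff, in cents

    def __init__(self, balance_cents):
        self.balance_cents = balance_cents

    def authorize_minutes(self, minutes):
        """Reserve credit for the requested minutes; return how many
        minutes the remaining balance can actually cover."""
        affordable = self.balance_cents // self.RATE_CENTS_PER_MIN
        granted = min(minutes, affordable)
        self.balance_cents -= granted * self.RATE_CENTS_PER_MIN
        return granted

session = PrepaidSession(balance_cents=45)
first = session.authorize_minutes(3)   # granted in full
second = session.authorize_minutes(3)  # balance only covers one more minute
```

Real online charging systems handle this with standardized credit-control interfaces and quota reservations rather than a simple counter, but the principle is the same: no balance, no service.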
Let’s say you have a small system whose functionality you want to test, and you determine that there are five distinct use cases that require verification. As a technically adept test engineer with a fair amount of programming skill, what do you do? In all likelihood, you script up each test case individually, so that for each use case there’s a unique piece of code that runs to make sure everything is working smoothly. This makes sense, and as test automation has become more common (in some sectors, anyway), we’ve seen a lot more people doing just that. For a limited number of test cases, this is probably the smart thing to do. But what happens when it’s not half a dozen use cases that require scripting, but hundreds or thousands?
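To make the approach concrete, here is a hypothetical sketch of one-script-per-use-case testing against an imaginary `send_sms` function (every name here is invented for illustration). Three of the five cases are shown; each new use case means another hand-written function:

```python
# Stand-in for the system under test; invented for this example.
def send_sms(sender, recipient, text):
    if not sender or not recipient:
        return "error"
    if len(text) > 160:
        return "split"
    return "delivered"

# One unique piece of code per use case, as described above.
def test_basic_delivery():          # use case 1
    assert send_sms("A", "B", "hi") == "delivered"

def test_long_message_is_split():   # use case 2
    assert send_sms("A", "B", "x" * 200) == "split"

def test_missing_recipient():       # use case 3
    assert send_sms("A", "", "hi") == "error"

# ...use cases 4 and 5 would each need their own function, and so on.
for case in (test_basic_delivery, test_long_message_is_split,
             test_missing_recipient):
    case()
```

With five cases this is perfectly manageable; with hundreds or thousands, the duplicated setup and bookkeeping in all those hand-written scripts becomes the bottleneck.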