Lots of articles have argued that the first ever internet of things (IoT) device was a soda machine at Carnegie Mellon that was connected to the early internet in the 1980s. The machine was famously automated to let users on the local network see how recently the machine had been filled (and thus how cold the soda was). It’s a great story—but one device does not an internet make. In other words, what we in the modern era understand as the IoT isn’t really about individual devices; it’s about a tremendous volume of different devices all working together in tandem.
In the US right now, Verizon and T-Mobile are battling it out over who can provide the most extensive LTE coverage. As of OpenSignal’s 2018 report, the two were in a statistical tie: each was able to provide an LTE signal 93.7% of the time. These are historic highs for both companies, and objectively impressive numbers, but they still mean that each fails to provide an LTE signal more than one-twentieth of the time. As such, SRVCC (Single Radio Voice Call Continuity) remains critical to providing high quality of service (QoS). Why? Because it is currently the standard solution for ensuring mid-call handovers from all-IP LTE networks to circuit-switched 2G/3G networks.
When it comes to banking and financial services functionality, “almost” doesn’t count. From your end users’ perspective, they’re either able to complete their desired action—whether that’s checking their balance, transferring funds, or setting up automatic payments—or they’re not. They’re not going to spend a lot of time looking for workarounds; they’re simply going to register their displeasure by choosing a new app or a new service provider.
If there’s one thing your customers are certainly paying attention to, it’s their bills at the end of the month. We’ve all read about customers who have accidentally been charged enormous sums above and beyond the correct amount, and we’ve all no doubt recoiled in horror at the PR snafus that inevitably result for the telco operator in question. Needless to say, any steps that a business can take to avoid this kind of error are worth their weight in image consultations and damage control.
If McKinsey’s projections hold, the global insurance industry is going to look very different by 2030. By the firm’s estimates, the continued introduction of new technology like the internet of things (IoT) and artificial intelligence will radically change the way that most insurance providers do business—paving the way for smart, automated workflows that reduce much of the need for paperwork and manual interventions. As a result of these changes, McKinsey estimates that fully 25% of positions in the industry could be automated or consolidated by 2025, and that by 2030 the number of personnel associated with claims in particular could be reduced by more than 70%.
End-to-end testing: for many telco operators it’s the holy grail of service verification, but it can also be a slow, laborious process that adversely impacts time to market. Even if you’ve managed to automate your relevant equipment and collect success and failure data from the relevant endpoints, you might still find yourself in a position where hard-to-read data and hard-to-program use cases stop your end-to-end tests from running as quickly as you would like. When this happens, you’re in the uncomfortable position of either sacrificing high levels of test coverage by cutting the test off early, or delaying your network migration or device rollout to accommodate slow testing.
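One common way out of that trade-off is to run independent test cases concurrently rather than sequentially, so full coverage doesn’t have to mean long wall-clock times. As a minimal sketch (the case names and the trivial `run_case` function are hypothetical placeholders, not a real test harness):

```python
# Sketch: running independent end-to-end test cases in parallel so that
# maintaining full coverage doesn't require a long sequential run.
# The case names below are hypothetical placeholders for real checks.
from concurrent.futures import ThreadPoolExecutor

def run_case(name: str) -> tuple:
    # A real case would drive equipment and collect endpoint results;
    # here we simply report success for illustration.
    return name, True

cases = ["sms_delivery", "data_session", "voice_handover", "roaming_attach"]

# Run all four cases concurrently instead of one after another.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_case, cases))

failed = [name for name, ok in results.items() if not ok]
print(f"{len(cases) - len(failed)}/{len(cases)} cases passed")  # prints "4/4 cases passed"
```

In practice this only helps when the cases don’t contend for the same physical device or network resource, which is exactly the kind of constraint a test orchestration layer has to model.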
One of the top goals that every telecom operator aspires to is consistent service, and a big part of that consistency is tied to how well you can coordinate with other networks to offer high quality roaming service for your customers. Perhaps more so than in the past, users don’t want to comb through a lot of fine print about where their in-network coverage begins and ends—they simply want to be able to use Gmail while they’re out and about in the world without experiencing any glitches or service anomalies.
It’s no secret that test quality has a direct impact on quality of service, meaning that high-quality tests can and do correlate with telco operators’ ability to attract and retain customers. And yet, as telecommunications networks become more and more complex, maintaining high-quality tests for things like subscriber migrations, new network rollouts, and device acceptance is becoming more difficult and time-consuming than ever. Obviously, testers need to find a way to maintain coverage and quality levels—even in the face of growing network complexity—but the path to doing so is not always clear.
The digitization of telecommunications has led to the adoption of many software testing methodologies, including end-to-end (E2E) testing. Sometimes confused with system testing, E2E testing goes much further, validating the interoperability of different network components and their complex interactions.
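To make that distinction concrete, here is a minimal sketch of what an E2E test looks like in code: rather than checking one component in isolation, it drives a full subscriber-visible flow—attach, call setup, teardown—across several components and asserts on the end state. All class and method names here are hypothetical stand-ins, not any real operator API:

```python
# Minimal sketch of an end-to-end test across simulated network components.
# All names are hypothetical illustrations, not a real network API.

class RadioAccess:
    def attach(self, imsi: str) -> bool:
        # Pretend the subscriber successfully attaches to the network.
        return True

class CoreNetwork:
    def __init__(self):
        self.sessions = {}

    def setup_call(self, imsi: str, callee: str) -> str:
        session_id = f"{imsi}->{callee}"
        self.sessions[session_id] = "active"
        return session_id

    def teardown(self, session_id: str) -> None:
        self.sessions[session_id] = "released"

def test_voice_call_end_to_end() -> bool:
    """Drive the whole flow and assert on the subscriber-visible outcome."""
    ran, core = RadioAccess(), CoreNetwork()
    imsi = "001010123456789"
    assert ran.attach(imsi), "subscriber failed to attach"
    session = core.setup_call(imsi, "+15551234567")
    assert core.sessions[session] == "active", "call never became active"
    core.teardown(session)
    assert core.sessions[session] == "released", "session not released"
    return True

print(test_voice_call_end_to_end())  # prints True when the full flow succeeds
```

A system test might verify the core network’s session table in isolation; the E2E version above only passes when every component in the chain cooperates, which is what the end user actually experiences.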
We talk a lot on this blog about end-to-end testing, and we don’t plan to change that fact any time soon. Why? Because end-to-end still represents the only testing methodology that puts the needs of end-users at the center of the testing process—and end-user experience is only becoming more important in the ever-changing telecom domain. So, naturally we want to give our readers the tools and information they need to outline end-to-end tests within their networks in order to maintain a high quality of service. That’s why today we’re taking a deeper look at some specific instances of end-to-end testing, in order to provide a more concrete idea of what this methodology looks like in practice.