For the best possible results, your test lab needs to make use of the absolute latest in cutting-edge technology. Because this is the arena in which you test changes and additions to your network—including new connected devices like the internet of things (IoT)—you need to find a way to replicate existing network conditions as closely as possible. Otherwise, you might test out new service offerings or network adjustments in a lab setting only to find that your service doesn’t work correctly under real-life conditions—which could result in costly delays, outages, and potential subscriber churn.
One of the biggest concerns we hear from telco operators looking to automate their tests is the relative ease or difficulty of test case scripting. Telecoms aren’t typically staffed with a huge number of technically proficient developers, and a complex test case scripting language would therefore make it difficult to leverage non-technical personnel in the testing process. This is an eminently reasonable concern. After all, the fewer people there are in your organization who can understand the tests being performed, the more likely you are to find yourself saddled with a testing silo—which could potentially lead to slower bug fixes and poor alignment between testing and other functions.
At telco operators around the world, test engineers and operations managers are experiencing a bit of a conundrum. With networks growing in complexity by the day and manual testing getting more time consuming than ever, the need for automation is becoming obvious. At the same time, not all automation is created equal, and testers need to make sure they’re equipped with the right tools for the challenges that await them as modern networks continue to evolve.
Since the introduction of LTE in 2009, most telco operators have had to maintain three networks in parallel—2G, 3G, and LTE—with smooth interworking between the three of them, sometimes even in the course of a single phone call. Not only that, but operators need to offer their customers seamless connections with a host of other legacy systems, from ISDN to POTS. Each time the larger telco landscape gets more complex (which happens practically by the day), the presence of existing legacy systems amplifies that complexity—making life increasingly difficult for the testers tasked with verifying network functionality.
At most telco operators, end-to-end tests are the dream. Given that each new network update or service offering potentially requires dozens of test cases verifying conformance, acceptance, functionality, and performance, true end-to-end tests that validate the entire functioning of the network from the end-user’s perspective are daunting and often difficult to accomplish (especially if you’re testing by hand). As such, your average network tester might balk at the idea that end-to-end testing isn’t enough to ensure high network quality—considering that going end-to-end is no mean feat in itself.
MVNOs (mobile virtual network operators) sometimes get a bad rap. This isn’t entirely without reason: A recent report found that MVNO download speeds across the U.S. are 23% worse than those of their host networks on average. While MVNO download speeds could often reach the same heights as their host networks, they tended to offer those speeds less consistently, leading to lower “consistent quality scores” in terms of meeting minimum speeds and staying below allowable levels of jitter, latency, and packet loss.
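A “consistent quality” metric like the one the report describes can be sketched as the share of measurement samples that clear a minimum speed while staying under jitter, latency, and loss ceilings. The threshold values below are illustrative assumptions, not the report’s actual methodology:

```python
# Sketch: a "consistent quality" score — the share of measurement samples
# that meet a minimum speed AND stay under jitter/latency/packet-loss ceilings.
# Threshold values here are illustrative, not any report's real methodology.

THRESHOLDS = {"min_mbps": 5.0, "max_jitter_ms": 30.0,
              "max_latency_ms": 100.0, "max_loss_pct": 1.0}

def consistent_quality(samples, t=THRESHOLDS):
    """Fraction of samples that pass every threshold simultaneously."""
    ok = sum(1 for s in samples
             if s["mbps"] >= t["min_mbps"]
             and s["jitter_ms"] <= t["max_jitter_ms"]
             and s["latency_ms"] <= t["max_latency_ms"]
             and s["loss_pct"] <= t["max_loss_pct"])
    return ok / len(samples)

samples = [
    {"mbps": 22.1, "jitter_ms": 12, "latency_ms": 45, "loss_pct": 0.2},
    {"mbps": 3.8,  "jitter_ms": 15, "latency_ms": 60, "loss_pct": 0.1},  # too slow
    {"mbps": 18.0, "jitter_ms": 40, "latency_ms": 50, "loss_pct": 0.3},  # too jittery
    {"mbps": 25.5, "jitter_ms": 10, "latency_ms": 38, "loss_pct": 0.0},
]
print(f"consistent quality: {consistent_quality(samples):.0%}")
```

Note how an MVNO can have excellent peak samples (22 or 25 Mbps) and still score poorly: the metric punishes every sample that misses any one threshold.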
Every year, Gartner gives a rundown of what it predicts will be the major strategic trends for companies to explore in the coming year, and 2019 was no different. The leading research and advisory firm has some lofty expectations for the year ahead—including practical blockchain applications, an increase in distributed cloud computing, more focus on transparency, and the democratization of expertise, among others—but one of their new strategic trends in particular caught our eye at SEGRON: hyperautomation.
As of a few years ago, more than 90% of software testers reported that they were automating between 50 and 100% of their tests. Of the survey respondents who had automated, about a quarter saw ROI immediately, another quarter within 6 months, and another quarter within a year. Fewer than 10% failed to reach ROI. Naturally, the telecom domain is its own beast, and in all likelihood the numbers for automation adoption would look a little bit less robust—but an examination of similar trends adjacent to the telco industry can still be telling for network operators and testers.
Mobile banking is not just a fad. By 2020, the U.S. is expected to have more than 160 million mobile banking users, and in the UK mobile banking is already overtaking internet banking in popularity. This could be attributed to a number of factors, from simple convenience to the increasing primacy of mobile phones in general—but whatever the cause, the implication for banking and financial services businesses is pretty clear: you really need a robust mobile application that your subscribers can access on the go.
Let’s say you’re migrating all of your subscribers to a highly redundant HSS database. You know it’s going to be a long process, but you want to set an ambitious deadline and perform the migration before your next quarterly all-hands meeting. As such, you put together a list of the things that are most likely to cause delays. And what should be at the top of that list? Depending on how your current test operations function, service verification could be your prime suspect.
Let’s say you’re in a management position at a prominent European network provider, and you’re trying to assess network quality. Some of your biggest questions are about testing: How quickly are your regression test suites running after network updates? How frequently do your tests uncover bugs, and how do those bugs get resolved? To gain answers to some of these questions, you contact your test team, who send you the most recent test reports—but you can’t make heads or tails of them, and your questions remain unanswered.
Let’s say that after all these years you’re finally ditching your legacy billing system in favor of something new and shiny. For a while now you’ve been hearing about the OSS/BSS solutions of the future, with their unprecedented ability to integrate with IP networks, and you’ve finally decided to take the plunge. When all is said and done, you’ll have transitioned from a system that was basically designed to tally up voice usage and SMS traffic to spit out appropriate bills, to something that offers your customers real-time tracking of their data usage so that they can manage their behavior accordingly. This has the potential to be a huge win, but there’s just one problem: how do you make sure it’s working?
Often, when we talk about quality of service (QoS) we tend to focus on things like latency, jitter, and packet loss, i.e. the network conditions that users experience in their day-to-day mobile device usage. But back-office processes like provisioning and invoicing can often have just as much of an impact on subscriber happiness in the long term—both because subscribers sometimes have to interact with billing systems and because smoother functioning in the back office often helps network operators to improve service elsewhere. For this reason, savvy telco operators tend to think long and hard about storage of and access to subscriber information.
Way back in the day, if you were on the go and needed to make a phone call, you kept your eyes peeled for the nearest payphone. You would drop in your coins and the operator would connect you to your desired call recipient. After the time you paid for had run out, a live operator would butt in and let you know that in order to extend your call you needed to add more money. Today, the idea of something like this happening during a VoLTE call seems a little bit absurd—but in point of fact it represents a need that all telco operators still grapple with: real-time provisioning of services based on what the user has actually paid for.
Let’s say you have a small system whose functionality you want to test, and you determine that there are five distinct use cases that require verification. As a technically-adept test engineer with a fair amount of programming skill, what do you do? In all likelihood, you simply script up each test case individually, so that for each use case there’s a unique piece of code that needs to run in order to make sure that everything is working smoothly. This makes sense, and as test automation has become more common (in some sectors, anyway), we've seen a lot more people doing just that. For a limited number of test cases, this is probably the smart thing to do. But what happens when it’s not half a dozen use cases that require scripting, but hundreds or thousands?
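When the case count climbs into the hundreds, a common alternative to scripting each case individually is data-driven testing: one generic runner, plus a table of use cases that anyone can extend without touching code. A minimal sketch, assuming a hypothetical call-setup check (the protocol names, thresholds, and simulated measurements below are illustrative, not a real telco API):

```python
# Sketch: data-driven testing — one generic runner, many use cases.
# run_call_test is a stand-in for a real measurement; the simulated
# setup times and thresholds are illustrative assumptions.

def run_call_test(protocol, codec, expected_setup_ms):
    """Stand-in for a real call-setup test; returns (passed, measured_ms)."""
    simulated_setup_ms = {"VoLTE": 180, "VoWiFi": 220, "CSFB": 450}[protocol]
    return simulated_setup_ms <= expected_setup_ms, simulated_setup_ms

# Instead of one script per use case, the cases live in a data table.
# Adding a new case means adding a row, not writing new code:
USE_CASES = [
    {"protocol": "VoLTE",  "codec": "AMR-WB", "expected_setup_ms": 300},
    {"protocol": "VoWiFi", "codec": "AMR-WB", "expected_setup_ms": 400},
    {"protocol": "CSFB",   "codec": "AMR-NB", "expected_setup_ms": 400},
]

def run_suite(cases):
    """Run every case through the same generic test logic."""
    results = []
    for case in cases:
        passed, ms = run_call_test(case["protocol"], case["codec"],
                                   case["expected_setup_ms"])
        results.append((case["protocol"], passed, ms))
    return results

for proto, passed, ms in run_suite(USE_CASES):
    print(f"{proto}: {'PASS' if passed else 'FAIL'} ({ms} ms)")
```

The same pattern is what frameworks like pytest expose as parameterized tests; the point is that scaling from five cases to five hundred changes the size of the table, not the amount of scripting.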
Lots of articles have argued that the first ever internet of things (IoT) device was a soda machine at Carnegie Mellon that was connected to the early internet in the 1980s. The machine was famously automated to let users on the local network see how recently the machine had been filled (and thus how cold the soda was). It’s a great story—but one device does not an internet make. In other words, what we in the modern era understand as the IoT isn’t really about individual devices; it’s about a tremendous volume of different devices all working together in tandem.
Sometimes, the OS that comes built into an Apple or Android device can seem like it’s actively preventing you from doing what you want. And, to some extent, that’s true—Apple only lets you download the apps that are in the App Store because there are some actions that they want to prevent you from taking. As an end user, this often seems capricious and arbitrary. As a tester, it feels similar: why shouldn’t you be allowed to make the necessary changes and additions to the phone’s functionality to make end-to-end testing easier?
In the US right now, Verizon and T-Mobile are battling it out for who can provide the most extensive LTE coverage. And as of OpenSignal’s 2018 report, the two were in a statistical tie—each was able to provide an LTE signal 93.7% of the time. These are historic highs for the companies, and objectively impressive numbers, but that still means that they’re failing to provide an LTE signal 6.3% of the time. As such, SRVCC (Single Radio Voice Call Continuity) is still critical to providing high quality of service (QoS). Why? Because right now it’s the standard solution for ensuring mid-call handovers from all-IP LTE networks to circuit-switched call bearer 2G/3G networks.
If you’re a telco test engineer, you know that the world of telecommunications is getting more complex by the day. Your service needs to integrate with more devices than ever, and you still have to maintain interworking with any number of legacy systems. This is driving an increase in the number of test cases that require verification, and also a renewed push towards automation across the field. Whether you’re selecting an automation solution or simply trying to keep your testing process as efficient as possible, it’s important to keep tabs on the new challenges and opportunities that are likely to be coming down the pike.
Back in 2017, the UK launched its 5G Testbeds and Trials Programme, the stated aim of which was to support coordinated research into 5G technology and use cases among British telecom businesses, equipment manufacturers, and scientists. By their account, the creation of these 5G testbeds provides a crucial proving ground for technology that’s still taking shape, and whose full uses haven’t come close to being explored yet. The UK envisions a world in which 5G powers increased connectivity in rural areas, on roads, and along rail lines, as well as smart tourism and smart cities and towns, but they won’t know whether these goals are realistic until they’ve had the chance to test things out in a safe environment.
Backward compatibility is a noble goal for any communication system. For telecom networks in particular, it's an important part of providing seamless coverage across legacy platforms, which means you can keep customers happy without forcing them to upgrade their service constantly. By placing an emphasis on service in this way, you can help protect revenue over time. That said, ensuring legacy integration can often provide challenges for operators.
One of the most significant ways in which the rise of mobile phones and laptops has changed the business world is by encouraging a tremendous increase in mobility. Not only are employees at many businesses able to telecommute more easily than ever before, but folks traveling for business are able to stay connected so effectively that it can feel like they’ve never left the office at all. For obvious reasons, this makes life easier and more pleasant for a lot of folks, all while helping to make truly global enterprises more connected than ever. The cost of this increased freedom, however, is a subsequent increase in complexity for telco operators.
When it comes to banking and financial services functionality, “almost” doesn’t count. From your end users’ perspective, they’re either able to complete their desired action—whether that’s checking their balance, transferring funds, or setting up automatic payments—or they’re not. They’re not going to spend a lot of time looking for workarounds, they’re simply going to register their displeasure by choosing a new app or a new service provider.
As the Internet of Things (IoT) grows and expands, the number of different elements that will have to consistently connect to any given network is expanding with it. Of course, some of these elements are more impactful than others. For instance, as of March of 2018, an EU directive requires that all new passenger cars be equipped with an EU eCall system. Because every second can be vital after a serious accident, it’s essential that the eCall device transmits emergency data to the nearest emergency center (PSAP, or “public safety answering point”) and/or triggers an emergency call. This means that in every EU country your network must automatically relay relevant information (e.g. vehicle type, direction, number of passengers, engine type, VIN, GPS coordinates, etc.) from the eCall modem to the correct emergency center in case the driver is too incapacitated to speak.
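The payload the eCall modem relays—the “minimum set of data” (MSD)—can be sketched as a simple structure. The field names and JSON representation below are illustrative only; the real MSD is a binary ASN.1 structure defined in EN 15722:

```python
# Sketch: the kind of payload an eCall modem relays to a PSAP.
# Field names are illustrative; the actual MSD is a binary ASN.1
# structure (EN 15722), not JSON.

import json

def build_msd(vin, vehicle_type, engine_type, passengers,
              lat, lon, direction_deg, automatic=True):
    """Assemble an MSD-like record from the data named in the EU directive."""
    return {
        "vin": vin,
        "vehicle_type": vehicle_type,
        "engine_type": engine_type,
        "passengers": passengers,            # e.g. seatbelt-sensor count
        "position": {"lat": lat, "lon": lon},
        "direction_deg": direction_deg,      # heading helps identify the carriageway
        "activation": "automatic" if automatic else "manual",
    }

msd = build_msd("WVWZZZ1JZXW000001", "M1", "petrol", 2,
                48.2082, 16.3738, 270)
print(json.dumps(msd, indent=2))
```

From a testing perspective, each of these fields is something an end-to-end test has to verify arrives intact at the correct PSAP—automatically, and for every network configuration in every EU country.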
Technological change has been speeding up at a seemingly unprecedented pace in the 21st century, and in few places is that more evident than in the telecom sector. Just a few short decades have taken us from what would now be considered legacy telephony through four generations of broadband cellular networks—with a fifth on the way. In the meantime, things like VoLTE and VoWiFi have become commonplace offerings in your typical telco operator’s portfolio. The race to provide new service offerings and update the functionality of existing ones is underway, and it’s already pretty heated.
It’s become a fairly well established stereotype that people in many parts of the world can’t or won’t look up from their smartphones. Back in the day, the only reason to look at or think about your telephone was because it was ringing or because you intended to make a call fairly imminently. As a consequence, if a signaling error was preventing your home phone from completing calls as intended, you might not even notice for a few days. Nowadays, on the other hand, if your LTE service is interrupted for two or three minutes you’ll probably notice immediately—and you’ll be none too happy about it.
If there’s one thing your customers are certainly paying attention to, it’s their bills at the end of the month. We’ve all read about customers who have accidentally been charged enormous sums above and beyond the correct amount, and we’ve all no doubt recoiled in horror at the PR snafus that inevitably result for the telco operator in question. Needless to say, any steps that a business can take to avoid this kind of error are worth their weight in image consultations and damage control.
Let’s talk about people for a second: by and large, they can’t and shouldn’t work 24 hours a day, seven days a week. They rightly prefer to stick to working hours, and when their jobs call for them to work outside of that time they’re usually provided extra compensation accordingly. On top of that, one human being can really only do one thing at a time (studies show that people are actually really bad at multi-tasking), which means, for instance, they can either be running tests or fixing bugs at any given time—but not both.
If McKinsey has done their due diligence, the global insurance industry is going to look very different by 2030. By their estimates, the continued introduction of new technology like the internet of things (IoT) and artificial intelligence will radically change the way that most insurance providers do business—paving the way for smart, automated workflows that reduce much of the need for paperwork and manual interventions. As a result of these changes, McKinsey estimates that fully 25% of positions in the industry could be automated or consolidated by 2025, and that by 2030 the number of personnel associated with claims in particular could be reduced by more than 70%.
End-to-end testing: for many telco operators it’s the holy grail of service verification, but it can also be a slow, laborious process that adversely impacts time to market. Even if you’ve managed to automate your relevant equipment and collect success and failure data from the relevant end-points, you might still find yourself in a position where hard-to-read data and hard-to-program use cases stop your end-to-end tests from running as quickly as you would like. When this happens, you’re in the uncomfortable position of either sacrificing high levels of test coverage by cutting the test off early, or delaying your network migration or device rollout to accommodate slow testing.
Today, the insurance industry is in the midst of a digital transformation. Sure, there are gradations from one insurance provider to the next in terms of how far along they are and how they envision the future of the industry—but the general trend is that the world of pens and paper needs to give way to connected, intelligent workflows that can generate, validate, and pay out claims digitally. The result of this impulse is already being felt by end users—who are already more likely than they were a few years ago to make use of an app when interfacing with their insurers—but it’s being felt just as acutely by internal staff at insurance companies. After all, they need solid UX in order to do their jobs quickly and efficiently.
One of the top goals that every telecom operator aspires to is consistent service, and a big part of that consistency is tied to how well you can coordinate with other networks to offer high quality roaming service for your customers. Perhaps more so than in the past, users don’t want to comb through a lot of fine print about where their in-network coverage begins and ends—they simply want to be able to use Gmail while they’re out and about in the world without experiencing any glitches or service anomalies.
It’s no secret that test quality has a direct impact on quality of service, meaning that high quality tests can and do correlate with telco operators’ ability to attract and retain customers. And yet, as telecommunications networks become more and more complex, maintaining high quality tests for things like subscriber migrations, new network rollouts, device acceptance, etc. is becoming more difficult and time consuming than ever. Obviously, testers need to find a way to maintain coverage and quality levels—even in the face of growing network complexity—but the path to doing so is not always clear.
As recently as a few years ago, the idea of a smart home—in which all of your appliances and other sensors around your home are networked together digitally—still seemed more like science fiction than a fact of life. And yet, today you can walk into many new homes and use your smartphone to control the temperature and the lighting, you can preheat the oven remotely, and you can get alerts to your mobile device if your smoke detector or burglar alarm goes off. It’s the type of home that technologists have dreamed of for decades.
The digitization of telecommunications has led to the adoption of many software testing methodologies, including end-to-end (E2E) testing. Sometimes confused with system testing, E2E testing goes much further, validating the interoperability of different network components and their complex interactions.
At some point in their growth and development, most businesses regardless of industry eventually reach a point where they realize they can’t do it all themselves. Either they need help marketing their product, or some of their more tedious HR tasks could be outsourced, or their testing operations could be made more efficient by partnering with an outside agency. Some companies outgrow this stage and ultimately reclaim their ability to do things in-house, but others continue to grow with their partnerships intact. Neither approach is necessarily better than the other—but they both present distinct pros and cons.
We talk a lot on this blog about end-to-end testing, and we don’t plan to change that fact any time soon. Why? Because end-to-end still represents the only testing methodology that puts the needs of end-users at the center of the testing process—and end-user experience is only becoming more important in the ever-changing telecom domain. So, naturally we want to give our readers the tools and information they need to outline end-to-end tests within their networks in order to maintain a high quality of service. That’s why today we’re taking a deeper look at some specific instances of end-to-end testing, in order to provide a more concrete idea of what this methodology looks like in practice.
Let’s imagine that you’re a trendy new startup. You’ve got a new widget that lots of people are downloading that helps them track their runs, or manage their time more effectively, or connect with other members of their community. Sure, there are the usual set of information security concerns, and you have plenty of functionality to build out over time, but the occasional bug or service outage isn’t going to be the end of the world. While high quality testing is still mission critical, it might not feel like a life and death situation.
When technology changes and evolves—as it does almost constantly—in the telecom domain, it typically takes standard-setting bodies like 3GPP six months to a year to establish a new set of test cases for conformance testers. Once those test cases come out, there’s a flurry of activity while operators, device manufacturers, OEMs, and others attempt to verify compliance and interoperability with new and existing standards. The fact that it takes 3GPP a fairly long stretch of time does very little to lessen the time pressure that testers usually face when it comes to performing each new round of service verification.
The modern cycle of updates for telecom networks continues to speed up, and telco operators need to do the same in order to keep pace with the market. For some of you, this may be leading you to consider test automation for verifying service on your network. Sure, there’s plenty of content out there about automated testing, and some of it even pertains to the specific challenges that your company has—but it’s still more than a little bit daunting. You know you need a framework that can automatically test, for instance, audio quality for VoLTE service across a suite of modern and legacy devices, but how do you get started?
We’ve talked a little bit already on this blog about how automated testing is becoming a virtual necessity for telco operators. As increasing device fragmentation and the proliferation of different protocols continue to inflate the number of use cases that require testing, human testers are struggling to keep up. But, if you’re reading this, you probably know all that. More than that, you’re probably almost ready to take the plunge and seriously consider an automated testing solution for your business. The question at this point is, how do you choose the right tool?
Network quality has long been a top cause of customer churn for telcos. Yet organizations often continue to struggle with delivering adequate quality because the demand for more data has negatively impacted voice service. That demand will likely grow; according to a recent McKinsey study, consumer demand for data will increase by 40 to 80% per year, depending on customer patterns and geographic region. While data might seem more urgent, voice is still important. Telcos that wish to remain competitive are placing new emphasis on network quality testing.
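It’s worth pausing on what “40 to 80% per year” actually means once it compounds. A quick illustration of the scale (an arithmetic sketch, not a forecast for any particular network):

```python
# Sketch: what 40-80% annual growth in data demand compounds to
# over five years. Pure arithmetic, not a network forecast.

def compound_growth(annual_rate, years):
    """Multiplier on today's demand after `years` of compounding."""
    return (1 + annual_rate) ** years

for rate in (0.40, 0.80):
    print(f"{rate:.0%}/yr -> {compound_growth(rate, 5):.1f}x demand in 5 years")
```

At the low end, demand more than quintuples in five years; at the high end it grows nearly nineteen-fold—which is why capacity planned around today’s traffic can erode voice quality so quickly.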
According to a recent GSMA study, the IoT market will be worth $1.1 trillion and include about 25 billion IoT connections by 2025. The majority of those connections will be in the industrial and vertical industry segments (13.8 billion connections) and the smart home market (11.4 billion).
The most obvious benefits of automation for any industry include increased efficiency and decreased reliance on human employees. But for telcos, automation, and particularly automated testing, offers multiple other sources of ROI, from reduced time to market, to better implementation of the Continuous Delivery model.
Automation is often treated like a magic bullet, a cure-all for increasing demands on testing personnel who face new network quality concerns, additional devices, and other challenges every day. However, the truth is that automating any process, especially a critical one like network testing, is fraught with pitfalls. These five best practices can help ensure the success of network testing automation.
Let’s say you’re a telco operator pushing out a change to your billing platform. For many in the business world, the hope for a project like this is that the team behind it has a certain level of agility, meaning that they’re a cross-functional group that’s empowered to solve problems in a flexible manner within the company’s larger mission. Unfortunately, agility usually isn’t what we find in cases like these. Instead, we find “waterfall” projects where teams are constantly waiting for approval, wading through red tape, and carrying out pre-agreed plans even as potential challenges and hurdles come to light.
End-to-end testing—it’s all the rage right now in any number of industries, and with good reason. As global technologies become more connected and more thoroughly interwoven into the fabric of society, more thorough and efficient testing will become not just a luxury, but a necessity. As such, it should come as no surprise that a number of testing automation providers have cropped up in the past few years, covering everything from software security to telecoms. This, too, makes sense on the surface: how different could testing solutions for different industries really be? If you can automate verification for a new social media platform, why can’t you do the same for VoIP verification?
Let’s say you’re working through a whole slate of different test scenarios to verify service on a new network that you’re rolling out. One of your first tasks is tackling VoIP (voice over IP) tests, which, as it turns out, present some very particular challenges. Because jitter and latency in voice conversations can quickly render a call frustrating and incomprehensible, your tests have to seek out extremely granular data about packet loss and packet delay for a number of different use cases. In order to do so effectively, testers need a wealth of specialized knowledge.
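One concrete piece of that specialized knowledge is how jitter is actually measured. A minimal sketch of the interarrival jitter estimator defined in RFC 3550 (the RTP specification); the timestamps below are synthetic, whereas a real test would pull them from a packet capture:

```python
# Sketch: RFC 3550-style interarrival jitter from packet timestamps.
# Timestamps (ms) are synthetic; a real VoIP test would extract them
# from an RTP capture.

def interarrival_jitter(send_times, recv_times):
    """Running estimate J += (|D| - J)/16, per RFC 3550 section 6.4.1,
    where D is the change in relative transit time between packets."""
    jitter = 0.0
    for i in range(1, len(send_times)):
        d = ((recv_times[i] - recv_times[i - 1])
             - (send_times[i] - send_times[i - 1]))
        jitter += (abs(d) - jitter) / 16.0
    return jitter

# 20 ms packetization with slight network-induced variation:
send = [0, 20, 40, 60, 80]
recv = [50, 72, 91, 113, 130]
print(f"jitter ~ {interarrival_jitter(send, recv):.2f} ms")
```

The 1/16 smoothing factor means a single late packet barely moves the estimate, but sustained variation does—which is exactly why VoIP tests need many samples per use case rather than a single spot check.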
Okay, you’ve decided to take the plunge. You know that in order to keep pace with all of the new devices, network protocols, and use cases emerging every day in the telecom domain, you need to do something about your testing framework. Your internal engineers can’t manually test all the required use cases anymore, and service verification is only going to get more complex as technologies like 5G enter the scene. You’ve even done your research via Google and your professional network to see who the worthwhile vendors might be for a telecom testing solution. Now you get to the hard part: how do you choose between them?
Smart cities. Augmented reality. Net neutrality. The Internet of Things. These are just some of the buzzwords in telecommunications right now. They all indicate a technology-driven shift in the industry, one that has fundamentally changed the successful telecom business model. As telecom companies seek to cut costs and keep customers, they must also take advantage of emerging technologies to power innovation.
In 1877, Alexander Graham Bell demonstrated the possibility of making long distance telephone calls by calling the offices of The Boston Globe. As you can imagine, the press had a field day—and it’s easy to understand why. From the perspective of technological progress, transmitting voice communication over a trunk line successfully was a feat that would have been scarcely imaginable even a few decades before, and the research and experimentation that led to it would have been extremely complex. On the other hand, service verification would have been a breeze. Since the ceiling for voice quality back then was quite low, verification was simply a matter of placing a call and hoping that it went through successfully. If so: service verified.
For many telco operators, testing can seem like an onerous requirement. It’s often costly and time consuming, and as telecom networks grow more complex and customer use cases and devices become increasingly fragmented, verifying service with any level of confidence is harder than ever. Because of this high degree of complexity, testers need to achieve higher test coverage than ever before in order to maintain network quality—resulting in the relatively widespread adoption of end-to-end testing among those in the industry. Rather than testing voice protocols and Wi-Fi connectivity from a handful of user devices, testers are walking through entire systems and subsystems in the ways that users are likely to do.
In an ideal world, service verification for voice, data, and mobile broadband usage would probably look a lot different than it does right now. Test cycles would be perfectly matched to the timelines for updates, testers would be able to complete tests for the entire range of use cases with time to spare, and any bugs uncovered could be addressed before new updates were rolled out. Unfortunately, that’s not really the world we live in. Instead, we’re stuck with update cycles that are often too short for thorough use case testing, and service verification begins to feel like an unwanted albatross around the neck of any given telco operator.
Okay, let’s say you're one of the major telco operators in your geographic area, and in order to increase your competitiveness you’re hoping to be the first one to roll out a 5G network for mobile voice and data. You’ve spent months laying the groundwork and taking pains to get your equipment and protocols in line with the new standards, and you’ve done your market research to determine the level of demand among local users (including adoption of 5G-enabled devices, etc.). It’s “all steam ahead,” and the only question is how quickly you’ll be able to get your product to market.
Network demands have grown increasingly complex in recent years. For example, people's reliance on smartphones for everything from navigation to mobile banking means that networks must be more robust and secure than ever. Meanwhile, the proliferation of IoT-enabled devices has introduced new protocols and device configurations.
Let’s say you’re a telco service provider: after careful deliberation, you decide to migrate your network in order to improve bandwidth for your growing customer base. After some time, the hard part appears to be over—you developed a plan that involved key stakeholders, you sketched out the scope of the migration, and you updated all of your switches and other equipment as needed. Now, it’s just a matter of verifying that you’re still providing all of the services you think you’re providing.
In June of last year, the 3rd Generation Partnership Project (3GPP) set the official standards for standalone 5G, effectively paving the way for the era of true 5G functionality. It might be a little bit of an exaggeration to say that we’re now experiencing a race to create usable 5G networks and devices among the wireless carriers and device manufacturers of the world (Apple, for instance, has been forthright about its decision to wait until 2020 to roll out its first 5G-enabled smartphone), but the floodgates are certainly beginning to open—and carriers like Verizon and AT&T are already performing a limited rollout of 5G home and mobile networks. In Europe, a leading operator in the 5G space has already announced its support for the OPPO Reno 5G.
According to a recent McKinsey study, network quality concerns are among the most important factors affecting a given customer’s choice of mobile carrier. While pricing is still the most important one on average, survey respondents were also quick to list national and local coverage, network speed, and quality of 4G as critical deciding factors in carrier selection. In spite of the growing importance of network quality, however, McKinsey also found that the average quality of service for voice has decreased across Europe in recent years.
A few years ago, a tester working on a typical telco project could run through about 10 use cases per day. Now, that number is closer to 8 use cases per day. This trend might be worrying from the outside, but if you’re a test engineer within the world of telecommunications it really shouldn’t be shocking. After all, as the complexity of global networks skyrockets, it stands to reason that verifying service for any particular node or function would become incrementally more complex as a result. The question is: what can network operators do about it? How can you maintain standards and achieve a positive testing ROI in these increasingly difficult environments?
Today, SEGRON announced that it has raised 3M euros in a series A funding round from Credo and OTB. To any of you who have made use of SEGRON’s ATF to automate testing for your networks, devices, or services, we hope this won’t come as too much of a shock. Following a period in which we completed several successful commercialization trials in Germany, Switzerland, Austria, and elsewhere, we demonstrated that our product was truly the most innovative and comprehensive on the market.
In 2012, an OpenSignal study found that there were about 4,000 different Android device models on the market. Within a couple of years, that number had risen to 12,000, and it’s likely only gone up since then. As a device tester, you already know that there’s considerable diversity among your customers, and that their needs are going to vary on a case-by-case basis—but who knew there was so much diversity just in the devices themselves?