Today, the insurance industry is in the midst of a digital transformation. Sure, there are gradations from one insurance provider to the next in terms of how far along they are and how they envision the future of the industry, but the general trend is clear: the world of pen and paper needs to give way to connected, intelligent workflows that can generate, validate, and pay out claims digitally. End users are already feeling the results; they are more likely than they were a few years ago to use an app when interfacing with their insurers. But the shift is being felt just as acutely by internal staff at insurance companies, who need solid UX in order to do their jobs quickly and efficiently.
Unsurprisingly, this new slate of digital processes puts considerable testing pressure on organizations in the midst of their digital transformations. You have to verify functionality for actions to be taken by both internal and external users in order to ensure that you’re actually providing the services you think you are. This raises the question: how exactly do testing and user experience affect one another in these new and emerging spaces?
Testing Internal Applications
On some level, insurance companies are like other businesses: they want to keep customers relatively happy. And the only way they can do so is by empowering their own staff to meet customer needs, meaning that internal tools need to consistently offer robust functionality. Whether we're discussing claims for auto, home, life, or any other sort of insurance, testers need to verify that users of internal apps can log in, access relevant claims, calculate liability levels, verify policy adherence, access policies, and create and transfer data to the correct places. Users will, of course, be taking different actions based on their individual roles and permissions, so you'll need to ensure that, for instance, someone who has read-only access to policy information can't accidentally edit someone's policy.
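That read-only scenario is exactly the kind of thing a negative test should pin down. Here is a minimal sketch of such a check against an in-memory policy store; the role names, policy IDs, and store itself are illustrative assumptions, not any specific insurance system or framework.

```python
# Hypothetical role-based access check: a "reviewer" role is read-only,
# while an "adjuster" role may edit policy data.

class AccessDenied(Exception):
    """Raised when a role attempts an action it is not permitted to take."""

class PolicyStore:
    def __init__(self):
        # Illustrative in-memory data in place of a real policy database.
        self._policies = {"POL-1001": {"holder": "Jane Doe", "premium": 120.0}}

    def read(self, user_role, policy_id):
        # Both adjusters and read-only reviewers may view a policy.
        if user_role not in ("adjuster", "reviewer"):
            raise AccessDenied("role may not read policies")
        return dict(self._policies[policy_id])

    def edit(self, user_role, policy_id, field, value):
        # Only adjusters may modify policy data.
        if user_role != "adjuster":
            raise AccessDenied("role may not edit policies")
        self._policies[policy_id][field] = value

store = PolicyStore()
assert store.read("reviewer", "POL-1001")["premium"] == 120.0

# The negative case: a read-only reviewer must not be able to edit.
try:
    store.edit("reviewer", "POL-1001", "premium", 0.0)
    raise AssertionError("edit should have been rejected")
except AccessDenied:
    pass
```

The point of the sketch is the shape of the test, not the store: for every permission your roles define, there should be a companion assertion that the *absence* of that permission actually blocks the action.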
Of course, different users will be accessing these applications from different devices, potentially running different operating systems and web browsers. That means testers have to try out each possible combination of user permissions, devices, operating systems, and browsers to make sure users can take necessary actions under any of the conditions that might arise during their work. The employees themselves are users in this context, so testing the configurations likely to come up in their day-to-day work helps reduce the likelihood of bugs and outages that prevent them from doing their jobs. At the same time, internal tools should be aimed at driving positive outcomes for customers, so a robust testing program for your internal users also improves quality of service (QoS) for end users, who won't have to grapple with the kinds of delays in getting policies or claims sorted out that come from disempowered or stalled employees.
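One way to keep that combination space manageable is to enumerate it explicitly and drive tests from the resulting matrix. A minimal sketch, with illustrative (assumed) roles, devices, and browsers:

```python
# Enumerate the full permissions x device x OS x browser matrix for test
# planning. The specific values below are illustrative assumptions.
from itertools import product

roles = ["adjuster", "underwriter", "reviewer"]
devices = ["desktop", "tablet"]
systems = ["Windows", "macOS"]
browsers = ["Chrome", "Firefox", "Edge"]

matrix = list(product(roles, devices, systems, browsers))
print(len(matrix))  # 3 * 2 * 2 * 3 = 36 configurations to cover
```

Even this small example yields 36 configurations, which is why teams often prune the matrix to the combinations that analytics show actually occur in the field, or adopt pairwise coverage rather than exhaustive enumeration.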
Challenges in Optimizing QoS
Of course, application testing also needs to be performed on the end-user side in order to make sure that your customers can actually submit claims and update their policies. The QoS stakes are more visible here, and the challenges come into sharper focus as well.
- Device fragmentation: Outside of a controlled company setting, it’s more difficult to know what devices are being used to access your software. Are most users on desktops or mobile devices? What operating systems are they running? You need to account for a much wider variety of possible configurations in order to ensure that no one is left out in the cold.
- Access control/security: Though access control within your corporate structure is obviously important, it’s somewhat higher stakes once you open up your applications to traffic from end-users. Testers need to make sure that users can’t access other users’ information, for instance, or make edits to things that they shouldn’t be able to. By the same token, you’ll have to perform penetration tests to uncover less obvious kinds of security vulnerabilities that might be exploited by malicious users.
- Performance testing: Sure, you have to ensure success/failure for any given set of actions, but you also have to make sure that your application isn’t plagued by long latency times and laggy or buggy interfaces. These are the kinds of issues that—while not necessarily suggestive of huge structural issues—disproportionately frustrate customers.
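To make the performance point concrete, here is a minimal latency-budget check against a stubbed claim-submission handler. The 200 ms budget and the handler are illustrative assumptions; a real test would call a staging API rather than a local function.

```python
# Sketch of a latency-budget assertion. The handler below is a stand-in
# for a real network call, and the 200 ms budget is an assumed SLA.
import time

def submit_claim(payload):
    # Simulate some processing time; a real test would hit a staging endpoint.
    time.sleep(0.01)
    return {"status": "accepted", "claim_id": "CLM-42"}

start = time.perf_counter()
response = submit_claim({"policy": "POL-1001", "amount": 500})
elapsed_ms = (time.perf_counter() - start) * 1000

assert response["status"] == "accepted"
assert elapsed_ms < 200, f"latency budget exceeded: {elapsed_ms:.0f} ms"
```

Checks like this catch the "slow but technically working" failures that functional tests pass over, which, as noted above, are exactly the issues that disproportionately frustrate customers.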
These represent a few of the primary challenges that insurance providers face in trying to keep up service quality in the midst of a digital transformation. It might be obvious just from reading the above concerns that testing can quickly become a complex and time-consuming process, but the alternative is to risk losing customers to buggy software.
So far, we’ve looked at internal users and end users separately, but can a more holistic approach help testers verify their entire digital ecosystem? In point of fact, it can: through end-to-end testing. This means considering any given functionality as a complete action involving multiple endpoints. For a car insurance claim to be filed, an end user would have to initiate the process on her end, that information would have to make its way to an adjuster, and the adjuster would in turn manage the claim to the point where a confirmation notification could be sent back to the end user.
In this way, you capture the entire lifecycle of a claim within one testing structure, helping you to better align your verification with actual usage. The result, hopefully, is better user experience. Of course, going this route can also increase the complexity of your testing workflows. Why? Because it requires a clear, fully fleshed-out framework that maps out your entire software usage flow in depth. By putting this framework in place, however, you set yourself up for more consistent, repeatable testing, and even for future automation down the line.
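The claim lifecycle described above can be sketched as an explicit sequence of states, so that a single end-to-end test walks a claim from filing through adjustment to confirmation. The state names, functions, and notification step here are illustrative assumptions, not a prescribed design.

```python
# Sketch of an end-to-end claim lifecycle: filed -> adjusted -> confirmed.
# Each step asserts the state it expects, so a break anywhere in the chain
# fails the test at the exact handoff that went wrong.

notifications = []

def file_claim(claims, policy_id):
    # End-user side: a new claim enters the system in the "filed" state.
    claim = {"policy": policy_id, "state": "filed"}
    claims.append(claim)
    return claim

def adjust_claim(claim):
    # Internal side: an adjuster can only work a claim that was filed.
    assert claim["state"] == "filed"
    claim["state"] = "adjusted"

def notify_user(claim):
    # Back to the end user: confirmation only after adjustment completes.
    assert claim["state"] == "adjusted"
    claim["state"] = "confirmed"
    notifications.append(f"claim on {claim['policy']} confirmed")

claims = []
claim = file_claim(claims, "POL-1001")
adjust_claim(claim)
notify_user(claim)
assert claim["state"] == "confirmed"
```

Mapping the flow this explicitly is the "fully fleshed-out framework" work described above: once each handoff is a named step with a checkable state, the same map supports repeatable manual testing and, later, automation.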