Background
What is Argo Digital?
Argo Group is a large international business-insurance conglomerate. The group consists of twenty-five individual insurance “business units” (separate corporate entities that specialize in specific sets of coverage types in different countries), eighteen of which are US-based.
Argo’s Digital department builds (primarily web) products to improve various aspects of an individual business unit’s operations. Over its four years, Digital has built several successful products focusing on different parts of the Insurance life cycle. However, each product was custom built by small product squads for individual Insurance lines.
Need for a longer horizon
Although the department had proven it could build successful products, there were two challenges that emerged later in the products’ life cycles:
- Changes to business units’ policy rating guidelines proved cumbersome for our engineers to encode. Sometimes the annual changes would take several weeks to implement.
- Because our solutions were so specific to individual business units, they weren’t architected in a scalable fashion. If we wanted to add more insurance lines, we’d essentially have to rebuild the products from scratch.
Argo Portal Mission Statement
From these learnings, Digital leadership decided to combine the department’s resources to build its largest application to date – a portal designed to handle all aspects of the insurance policy life cycle, built on a modular Domain-Driven Design architecture.
What differentiated the Portal from previous Digital applications was its scalability: because the architecture was modular, new lines of insurance could be added with relative speed, as only one module would need to be adjusted.
User Research Methodology
The primary target users for the portal’s initial version were insurance brokers. Digital had long established a robust multi-channel user research and outreach program consisting of:
- Frequent user interviews (multiple squad initiatives with 3-4 broker interviews per quarter)
- Frequent user tests on new concepts (minimum of one round of 3-4 broker tests per month)
- Fullstory recordings of user behavior on Argo applications
- A research library of user insights on Dovetail
- Bi-weekly “Voice of the Customer” team Q&A sessions with various stakeholders in the insurance process
- Quarterly research visits to brokerage offices
- Constant meetings with internal Business Unit stakeholders – primarily Underwriters who interfaced with brokers daily
Background User Context
With so many research initiatives, Digital established a fairly in-depth understanding of the broker user context.
In the traditional Insurance submission process:
- A broker has a client (the insured) fill out either a generic or carrier-specific insurance application form
- The broker emails a scanned PDF of the filled form to a handful of selected underwriters at various carriers
- The selection of underwriters is heavily influenced by:
- Personal relationship between broker and underwriter developed over years of doing business together
- The specifics of the risk attributes of a particular client’s business and knowledge of which carrier’s risk appetite (and corresponding quote price) will match up best
- The underwriter gives the submission an initial appetite-check scan. If it looks like a good submission, they will likely email the broker with further questions about the insured
- Underwriters monitor their email inboxes throughout the day for new submissions. However, they get more submissions than they can feasibly analyze, so many develop shortened initial risk scans to suss out which submissions are worth looking into further.
- What they’re scanning for includes:
- Potential high risk red flags (which would trigger declination of business)
- Good premium to risk ratio (sizable enough business that fits in a standardized box of risk attributes)
- Good history of insurance policies with low claims
- Often the broker and underwriter engage in a back-and-forth over multiple emails or calls as the underwriter gathers more information about the insured’s risk attributes. This exchange may range from hours to several days depending on the size of the insured’s business, how complex the risk profile is, and how quickly the insured can provide documentation
- Eventually the underwriter will complete the risk assessment and generate a quote (based on an internal actuarial rating algorithm). This process is formally called Rating.
- Underwriter sends official quote document to Broker. Broker will present quote (typically along with multiple quote options from other carriers) to client
- If client accepts quote offer, the carrier will generate a binder (an official set of documents stating the terms of the policy pending a check that all provided information on the application is true). At this point coverage has officially started.
- Once the prerequisite check is complete, the insured receives official policy documents from the carrier (via the broker).
RQBI Diagram
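The lifecycle above maps onto the RQBI stages (commonly shorthand for Rate, Quote, Bind, Issue) shown in the diagram. A minimal sketch of that progression, with a deliberately toy rating formula – the stage names are industry-standard, but the factors and numbers are illustrative, not Argo’s actual actuarial model:

```python
from enum import IntEnum

class Stage(IntEnum):
    """The RQBI policy lifecycle stages (Rate, Quote, Bind, Issue)."""
    RATE = 1   # underwriter assesses risk and computes a premium
    QUOTE = 2  # official quote document sent to the broker
    BIND = 3   # client accepts; binder issued, coverage begins
    ISSUE = 4  # prerequisite checks pass; policy documents issued

def toy_premium(base_rate: float, exposure: float, risk_modifiers: list[float]) -> float:
    """Illustrative multiplicative rating formula (hypothetical, not Argo's):
    premium = base_rate * exposure * product of risk modifiers."""
    premium = base_rate * exposure
    for modifier in risk_modifiers:
        premium *= modifier
    return round(premium, 2)

# e.g. a $2/unit base rate on 10,000 units of exposure with a 1.2x risk modifier
print(toy_premium(2.0, 10_000, [1.2]))  # 24000.0
```

Real rating algorithms involve far more inputs, but the point is that each stage gates the next: no quote without a rating, no binder without an accepted quote.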
Problems to solve
There are multiple inefficiencies with the traditional process that the Portal was aiming to solve for. Some of the biggest ones include:
1.) No filtration process for “good” submissions
As stated, underwriters’ email inboxes are filled with more submissions than they can feasibly look at. Although they do their best to see as many as possible, inevitably some of the best submissions (ones with a strong risk-to-premium ratio) will either fall through the cracks, or another carrier will win the business just by responding faster. The flip side is that underwriters may spend unnecessary time on submissions that later turn out to be a poor fit for the carrier.
2.) Slow back-and-forth process: longer time to quote
Brokers’ primary aim in the submission process is to get to a quote as quickly as possible. The slow communication chain – the underwriter requests more information from the broker, who in turn must contact the insured, who then has to track down documentation, and all the way back – adds unnecessary time to the process.
3.) Inconsistency in individual underwriter quoting
A study published by Daniel Kahneman and colleagues highlighted that at major insurance carriers, individual underwriters’ judgement produced premium valuations for the same risk that varied by as much as 50%: https://btlaw.com/en/insights/blogs/policyholder-protection/2019/the-noisy-business-of-the-law-and-insurance-claims
4.) Poor data capturing
Probably the worst problem for carriers is that the traditional method does not effectively capture long-term risk data. The entire concept of the insurance industry rests on the premise that the more data a carrier has for a given type of risk, the more precisely it can quote that risk. The hand-written, scanned-PDF model provided no effective way to extract and store that data so it could later be cross-referenced against claims filed over the subsequent years of the policies’ life cycles.
Portal value proposition
The portal looked to solve these problems by providing a digital automated quoting process. The goal was to cut out any need for interaction with an underwriter for a high percentage of submissions. The premise was that brokers could fill out a digital submission form (roughly fifteen minutes) and upon submission get a quote instantly.
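The automated path boils down to a single routing decision: a submission that passes every eligibility check gets an instant quote; anything else is referred to an underwriter. A minimal sketch of that decision – the rule names and thresholds here are invented for illustration, not the portal’s actual appetite rules:

```python
def route_submission(submission: dict) -> str:
    """Return 'quote' for straight-through automated quoting, 'refer'
    when an underwriter needs to review. All rules below are hypothetical
    examples of appetite/eligibility checks."""
    rules = [
        submission.get("annual_revenue", 0) <= 5_000_000,   # size within appetite
        submission.get("claims_last_5y", 0) <= 1,           # acceptable loss history
        submission.get("state") not in {"EXCLUDED_STATE"},  # geographic eligibility
    ]
    return "quote" if all(rules) else "refer"

print(route_submission({"annual_revenue": 1_200_000, "claims_last_5y": 0, "state": "TX"}))  # quote
print(route_submission({"annual_revenue": 9_000_000, "claims_last_5y": 0, "state": "TX"}))  # refer
```

The share of submissions that fall into the “quote” branch is exactly the automatic-quote rate discussed in the broker research below.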
Broker goals and perceptions of quoting platforms
The introduction of online platforms in the business insurance industry was not a strategy unique to Argo. More and more carriers were introducing digital quoting options for more lines of insurance, and the industry has been shifting from a traditionally relationship-based model to one of digital automation.
Our exploratory research on Broker perception of digital platforms found mixed sentiments. Some brokers preferred the digital application processes and certain brokerages had dedicated “automation” teams who exclusively submitted business via the platforms. Others were more hesitant and preferred the human touch of the relationship building and price negotiations.
There was also a strong variance in broker perception amongst individual carrier platforms. Unsurprisingly, we found a strong correlation between broker sentiment and the survivability of the platform.
The strongest indicators we found for positive broker perception of a platform were:
1.) Reliability to get to a quote once filling out the application form
Our research indicated that a favorable threshold for automatic quote generation was 70% of submissions. If broker submissions were “referred” (determined to require underwriter review – and hence an elongated time to quote) more than 30% of the time, broker perception of the portal’s usefulness declined. If a submission was referred, brokers preferred to go directly to the underwriter, as they had more faith in receiving a response.
2.) Consideration of Broker data entry time
The biggest complaints from brokers were about scenarios in which they invested time filling out a form only to find out later that their submission wasn’t eligible on the platform. Essentially, if their submission was going to be rejected for some reason, they wanted to know as soon as possible so as not to waste any more time than necessary.
3.) Responsiveness on Portal related queries
We heard consistently positive feedback from brokers about portals that had features with responsive customer service. For example, one of the most successful portals had a chat feature with an average response time of under two minutes. This gave brokers faith that their requests would be serviced speedily and encouraged more time spent on the platform (and hence more business submitted on it).
My Responsibilities:
Customer Experience Squad
As the designer on the Customer Experience Squad (CXS) I was responsible for designing the central portal experience. For the first version of the portal, this would entail:
- Log in/Sign up
- New Business Home
- View Submission
- Start a new Submission (Clearance)
Design Process
From an information-architecture standpoint, my designs were heavily influenced by my analysis of successful portals Digital had built in the past (read more on my Design Inventory here) as well as competitor analysis from various research initiatives I had taken part in.
UI and Visual Design was being developed and fine-tuned concurrently with the development of our department’s Design System (also known as APL – read more about the design phase of that here). Later iterations of the portal designs almost exclusively implemented components from the Design System.
Page Designs
Portal Mockups
User Testing
We ran five rounds of user tests with 3-4 brokers per round: three rounds of moderated testing and two rounds trialing unmoderated testing tools (Usertesting.com and Playbook UX).
Testing methodology and artifacts
Each round of testing was focused on a set of user goals. We were testing whether users could navigate the information architecture and comprehend the cues and feedback for basic processes.
For each test, after determining which specific goals to test, I wrote up a script with task prompts and follow-up questions as well as a Google sheet for tracking results. We measured results of the tests based on whether users were able to successfully complete tasks or not.
We measured usability with a three level scale:
- Completed task with ease
- Completed task with some difficulty
- Failed to complete the task
User Testing Artifacts
We included comment areas for each task and sub-task to note any insights users shared during the testing sessions.
We also made video recordings of test sessions (which were all remote and on Zoom) so we could review them later and clean up our notes (with permission of course).
Testing Results
By and large, testing results were overwhelmingly positive, with the first two rounds recording 100% and 99% successful task completion (without hesitation) respectively. The third round, which tied together flows from two squads and delved into the more complex rating process, yielded slightly more learnings: 91% task completion, some of it with hesitation.
The later unmoderated testing trials were focused on adding further specific features to the flows.
Handoff to development
During the development phase, I wrote CSS specs for developers and attached them to the squad JIRA tickets to assist with styling.
Argo Portal Release
We began onboarding the first batch of brokers to the portal in June 2021, and business is starting to flow through it. Since it’s only been a matter of weeks, we’ll need more time to invite more brokers and track results.