Advisen Front Page News - Wednesday, May 11, 2022

Cutting out the 'data noise' in underwriting benefits insureds, brokers: expert

By Alex Zank, Advisen

By reducing "noise" in data gathering and analysis, insurers can offer a more streamlined application process and more reliable quotes in small commercial policies, says an underwriting expert at Verisk.

“When you have the high-quality data and analytics up front, that can really help reduce the number of times that the actual premium is different from what the insured was originally quoted,” said Tracey Waller, product director for small commercial underwriting at Verisk, in a recent interview with Advisen.

The goal for insurers is to reduce underwriting questions, automate underwriting, and ensure businesses have the coverage they need. Underinsurance is an issue throughout the industry, but particularly among small business policyholders, said Waller.

Machine learning can help insurers identify risks that may otherwise be overlooked in the application and underwriting process. For example, a more manual process could leave certain equipment underinsured, or overlook a professional exposure that an insured didn’t think to share or a broker didn’t think to inquire about, Waller said.

“When there is not a full understanding of the holistic risk that an insured presents, there’s always a chance they’re not going to have the insurance that they need,” she said. “AI [artificial intelligence] and machine learning, and just having high-quality data and analytics on hand, can help identify those things upfront without having to have the broker even ask the question, or have the insured answer the question.”

A Verisk analysis of businessowners policy (BOP) insurance data found about 53% of risks were generally misclassified, resulting in $6.5 billion worth of premium leakage in a single year.

“That means that there were [a] significant number of insureds [with] policies that didn’t contemplate all exposures,” Waller said.

Creating a complete risk profile of an insured requires the right kind of data, not just more of it. “The chances that you need a thousand pieces of information to write a policy are very slim; there’s probably just a few pieces,” Waller said.

Too much data can be noisy. Data noise refers to information that’s incomplete, inaccurate, outdated, or irrelevant for insurance purposes. Working with partners who have expertise in insurance is key in validating quality sets of data, added Waller.
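To make Waller's definition concrete, here is a minimal, hypothetical sketch of flagging "noisy" records before they reach underwriting. The `BusinessRecord` type, field names, and staleness threshold are invented for illustration; they are not Verisk's actual data model or validation rules.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class BusinessRecord:
    """A simplified third-party data record about a small business."""
    name: str
    classification: Optional[str]  # e.g., an industry/class code; None if unknown
    last_verified: date


def is_noise(rec: BusinessRecord, today: date, max_age_days: int = 365) -> bool:
    """Flag records that are incomplete or stale, per the article's
    definition of data noise (incomplete, inaccurate, outdated, irrelevant).
    Relevance and accuracy checks would need domain data and are omitted."""
    if not rec.name or rec.classification is None:
        return True  # incomplete: missing a field underwriting depends on
    return (today - rec.last_verified).days > max_age_days  # outdated


# Keep only records that pass the noise filter.
def clean(records: list[BusinessRecord], today: date) -> list[BusinessRecord]:
    return [r for r in records if not is_noise(r, today)]
```

The point of the sketch is that "less but validated" beats "more but noisy": a simple completeness-and-freshness gate discards records that would otherwise distort a quote.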

One way insurers can wind up with bad data is through a “bad match.” This happens when a search turns up information on a similar-sounding company – “Jones Plumbing” instead of “Jane’s Plumbing,” for example. Inaccurate data is worse than no data at all, Waller said.
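A rough sketch of how such a bad match can arise, using the standard-library `difflib` to score name similarity. The business names come from Waller's example; the naive top-1 scoring approach is illustrative only, not how any particular vendor matches records.

```python
from difflib import SequenceMatcher


def name_similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two business names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


query = "Jane's Plumbing"
candidates = ["Jones Plumbing", "Jane's Plumbing & Heating", "Acme Roofing"]

# A naive top-1 match on raw string similarity: the similar-sounding but
# wrong firm ("Jones Plumbing") outscores the correct business, whose
# longer registered name drags its ratio down.
best = max(candidates, key=lambda c: name_similarity(query, c))
```

Here `best` is `"Jones Plumbing"`, the bad match: character-level similarity alone penalizes the correct firm's longer legal name. That is why raw similarity scores usually need corroborating signals (address, classification code, owner name) before a record is attached to an insured.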

More insurance companies are embracing data and analytics, and the results are noticeable, noted Waller.

“What we’re seeing happening today, as more and more companies start to integrate data and analytics in their workflows for real, is that the industry has moved past exploration and moved into implementation – that’s the tipping point here,” she said. “We’re seeing significant gains, and those gains are happening in simply increases in quotes, the number of policies bound, [and] the amount of straight-through processing.”

Reporter Alex Zank can be reached at
