Global P&C Insurers Require Global Data Confidence

Disorganised data can result in underwriting errors, regulatory breaches, and tough questions for the front end of the business. That’s according to Nick Mair, co-founder and CEO of insurtech start-up DQPro.

Tech evangelists love to point out that more data has been created in the last few years than in all of human history before them. But if this data boom represents ‘the new oil’ – at the risk of repeating a much-abused analogy – then refining such a precious resource into reliable, useful underwriting intelligence is crucial to global P&C insurers’ success.

Gone are the days when the insurance business lacked for data. Nowadays it depends on it. The pace of business has accelerated even as the business itself has become more global. For today’s specialty underwriters, taking daily decisions across lines of business requires high-speed analysis of high-quality data.

All of this means that, without proper controls, the risks of data uncertainty have grown, posing an urgent question for underwriters and business leaders: when the chips are down, do you really have confidence in your data?

Without strong data controls in place across the globe, underwriters have every reason to feel nervous. They need reliable, timely information from their operations and data teams. When it isn’t there, opportunities are missed, and decisions taken quickly without data confidence can carry far bigger risks.

Those operations teams often waste time re-keying and manipulating data in spreadsheets and legacy systems. The potential for error is high, and delay is guaranteed. The likely result is losing an opportunity to a more competitive rival, or making a bad underwriting decision that comes back to haunt the Actuarial or Compliance department.
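To make the idea of an automated data control concrete, here is a minimal, hypothetical sketch in Python: a few validation rules run over incoming policy records, flagging issues before they ever reach an underwriter’s desk. The record fields and rules are illustrative assumptions only, not a description of DQPro’s product or any carrier’s systems.

```python
from dataclasses import dataclass

# A minimal, hypothetical data-quality check for incoming policy records.
# Field names and rules are illustrative assumptions, not a real schema.

@dataclass
class PolicyRecord:
    policy_id: str
    line_of_business: str
    gross_premium: float
    currency: str

VALID_CURRENCIES = {"USD", "GBP", "EUR"}

def check_record(record: PolicyRecord) -> list:
    """Return a list of data-quality issues found in a single record."""
    issues = []
    if not record.policy_id:
        issues.append("missing policy_id")
    if record.gross_premium <= 0:
        issues.append(f"non-positive premium: {record.gross_premium}")
    if record.currency not in VALID_CURRENCIES:
        issues.append(f"unrecognised currency: {record.currency}")
    return issues

# Flag bad records before they reach the underwriting team.
records = [
    PolicyRecord("P-1001", "Marine", 25000.0, "USD"),
    PolicyRecord("", "Aviation", -1.0, "XYZ"),  # typical re-keying errors
]
for r in records:
    for issue in check_record(r):
        print(f"{r.policy_id or '<unknown>'}: {issue}")
```

Even a check as simple as this, run automatically at the point of entry, removes the need for someone to spot such errors by eye in a spreadsheet.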

Insurtech – A $400bn Opportunity or Risk?

The data influx feeding this challenge is only going to grow. Insurtechs, which have so far largely focused on the sales and distribution side, are expected to create at least $400bn of additional premium by 2023, according to a recent Juniper Research study. That new premium will depend on data, and will itself create still more data.

This means risks as well as opportunities, and plenty of missed opportunities if the inflow is not marshalled properly. After all, what use are predictive analytics, risk modelling, machine learning and artificial intelligence without clean data to begin with?

Geographic expansion and M&A activity among insurers have also exacerbated the data challenges facing multinational businesses. Most of the largest specialty carriers active today are the product of mergers that combined a previous generation of insurance entities, along with a mixed bag of inherited systems and data.

The result can be specialty carriers doing business globally, in dozens of lines of business, on an unwieldy mess of legacy systems and software that was never designed with integration in mind. This is anathema to effective data controls, and the results are often tactical, disjointed or dysfunctional.

Data hygiene might seem like unglamorous, nuts-and-bolts detail, but it is only one short step behind the specialty underwriter. This is why the need for enterprise-wide data controls demands urgent acknowledgement and action from the industry.

Front-line decision-makers are only as good as the data at their fingertips. A lack of data oversight upstream leads to costly inefficiencies downstream which, in turn, can produce mistakes with serious underwriting or regulatory implications.

The carriers taking confident underwriting decisions for a strong 2019 will be those with true, global data confidence.