Confidential Computing: What It Is and Why It Matters for Insurance

June 25, 2021

By: Victor Boardman, Industry Business Lead at R3


It’s long been said that data is the new gold. It’s true that modern companies are data-driven and data-rich. In fact, it’s been reported that Facebook has over 50,000 data points on each of its 2.6 billion monthly active users, and Mastercard has over 13,000 data points on individual consumer behavior, global trade and every rung of commerce in between. “The data, and how we work with the data, is as important as the transactions themselves,” says Mastercard’s President of Operations and Technology, Ed McLaughlin.

Insurance is no different. Without data, insurance would not exist. From the very beginning of the modern insurance industry, data was captured, processed, and shared so that risks could be understood, priced, and transferred. Data remains at the core of insurance decision making, except now insurers have access to more data than ever before. In fact, entire business models have grown up around helping insurers process data and extract insights, ensuring they can provide customers with relevant, affordable, and sustainable products while managing their own business.

However, the challenges around protecting data are complex and onerous. Take, for example, GDPR. In the UK, the Data Protection Act 2018, which implements GDPR, runs to 354 pages of regulation on the processing of information relating to individuals. And the penalty for a GDPR violation? A maximum fine of £17.5 million or 4% of annual global turnover, whichever is greater!

Large and notable fines include:

    1. Google (€50m/£43.2m) – one of the first companies to be hit by a substantial GDPR fine, in 2019
    2. H&M (€35.3m/£32.1m)
    3. Tim – Telecom Italia (€27.8m/£24m)
    4. British Airways (£20m)
    5. Marriott International Hotels (£18.4m)

And, according to research from DLA Piper, across Europe in the 12 months leading up to January 27, 2021:

    • GDPR fines rose by nearly 40%
    • Penalties under the GDPR totalled €158.5 million ($191.5 million)
    • Data protection authorities recorded 121,165 data breach notifications (19% more than the previous 12-month period)

Beyond regulation, companies have internal governance and controls relating to data to ensure their own valuable intellectual property remains protected. Data is their lifeblood, used to make daily business decisions or drive key strategic initiatives to benefit customers, employees, and shareholders. In short, data really is gold.

So, how do companies ensure data is protected and will not be misused? The traditional answer: “soft” policy controls.

Internally, companies deploy vast resources to develop and maintain policies and procedures so that everyone acts to keep data protected. These policy controls require all individuals in the organization to “follow the rules,” backed by annual or periodic training and certification. Remember your last data protection training course?

These “soft” policy controls, often wrapped up in contractual terms and conditions, are also used when companies send data to 3rd party services that provide analytics.

These are so called “soft” policy controls as they rely on individuals following the rules. This is very difficult to validate, monitor and control, especially with independent 3rd party companies.

On a technical level, data can be kept secure when it’s stored using encryption (data at rest). And when the data is transmitted? That’s encrypted, too (data in transit). But how is data kept secure when it’s being processed? And how do you know data is being processed in the “right way?” This has until now been a significant weak point: even with “soft” policy controls, there has been no way of knowing whether data is being used in the way its owner intended, and no way to prove that the processor can’t see the data or won’t misuse it.
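The gap described above can be made concrete with a short sketch. The code below uses a deliberately toy XOR stream cipher (real systems use AES-GCM or TLS, not this) purely to show that data can stay encrypted at rest and in transit, yet must be decrypted in memory the moment conventional software processes it; all names here are illustrative, not from any real product.

```python
import hashlib
import secrets

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy stream cipher: XOR with a hash-derived keystream.
    Illustrative only -- never use this in place of real cryptography."""
    keystream = b""
    counter = 0
    while len(keystream) < len(plaintext):
        keystream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

key = secrets.token_bytes(32)
record = b"applicant=Jane Doe;salary=55000"   # hypothetical loan-application data

# Data at rest: the stored ciphertext reveals nothing on its own.
at_rest = toy_encrypt(key, record)
assert at_rest != record

# Data in transit: also sent encrypted (TLS plays this role in practice).
in_transit = toy_encrypt(key, record)

# Data in use: to run *any* computation, a conventional processor must
# first decrypt -- at this point the plaintext is visible to whoever
# operates the machine. This is the gap confidential computing closes.
in_use = toy_decrypt(key, in_transit)
assert in_use == record
```

The point of the sketch is the last step: without a trusted execution environment, the decrypted record sits in ordinary memory, readable by the service operator.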

Take the example of filling out an online loan application. You enter the data on the webpage, then the data is sent to the loan provider in a secure encrypted form and stored in their database that’s also encrypted. But you have absolutely no way of knowing how your data is being processed, who it’s being shared with, or that it is secure when being processed. And what applies to your personal data when applying for a loan also applies to your company’s data when sharing it with 3rd parties, be it data analytics service providers, industry bodies, peers, or regulators.

However, new technology known as confidential computing is now available to solve this problem. By closing the loop and providing true end-to-end data privacy, confidential computing ensures that data is strictly isolated during processing and that the data is only analyzed in the agreed-upon way. The data being processed is invisible and unknowable to anything or anyone else, including the operator of the service and hardware. Cryptographic proof is provided confirming your data is being processed only as agreed.

This new way to process data is enabled by hardware enhancements from leading manufacturers such as Intel. This hardware technology, known as Trusted Execution Environments (TEEs), goes far beyond the “soft” policy controls used historically: hardware-enforced controls provide technical assurance that the data will be used only as intended. We will go into the technical details in a subsequent blog.
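To give a flavor of how a hardware assurance differs from a policy promise, the sketch below mimics the shape of TEE remote attestation: the hardware measures (hashes) the exact code loaded into the enclave and signs that measurement, so the data owner can check the code is the one they agreed to before sharing anything. This is a simplified model, not Intel’s actual protocol; the HMAC stands in for the asymmetric signatures real TEEs use, and every name here is hypothetical.

```python
import hashlib
import hmac
import secrets

# Stand-in for a hardware root-of-trust key, which in reality is fused
# into the CPU and certified by the manufacturer.
HARDWARE_KEY = secrets.token_bytes(32)

def measure(code: bytes) -> bytes:
    """The TEE hashes ('measures') the exact code loaded into the enclave."""
    return hashlib.sha256(code).digest()

def quote(code: bytes) -> tuple[bytes, bytes]:
    """The hardware signs the measurement, producing an attestation 'quote'."""
    m = measure(code)
    return m, hmac.new(HARDWARE_KEY, m, hashlib.sha256).digest()

def verify(expected_code: bytes, measurement: bytes, signature: bytes) -> bool:
    """The data owner checks that the signature is genuine AND that the
    measured code is exactly the program they agreed to run on their data."""
    genuine = hmac.compare_digest(
        hmac.new(HARDWARE_KEY, measurement, hashlib.sha256).digest(), signature)
    return genuine and measurement == measure(expected_code)

agreed_analytics = b"def fraud_score(claims): ..."   # the agreed workload
m, sig = quote(agreed_analytics)
assert verify(agreed_analytics, m, sig)                    # code is as agreed
assert not verify(b"def exfiltrate(claims): ...", m, sig)  # swap detected
```

Unlike a contractual clause, a failed check here is mechanical: if the processor swaps in different code, the measurement no longer matches and the data owner simply never sends the data.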

The end result? Individuals and companies can now process and share their precious and valuable data, safe in the knowledge that it is kept secure in transit, at rest, and now, in processing. Crucially, the data is only processed in the agreed way, with hardware-backed assurances.

For insurance, the implications of this are numerous:

    1. Insurers can keep sensitive data in more secure environments and protect it from hosting providers or insider threats.
    2. Inbound: Insurers can prove with certainty that they are processing sensitive customer data only in the agreed way (e.g. for the purposes of risk assessment, fraud detection, or claims payments).
    3. Outbound: Data can be handed over to a 3rd party for analysis confident in the fact that the 3rd party cannot see the raw data and that the data is analyzed in the agreed way.
    4. Data can be pooled with competitors for industry benchmarking or fraud purposes where no one can see the raw data.

In this blog post, we have introduced confidential computing, which uses hardware controls to secure data during processing, and outlined how it can apply to insurance.

In our following blogs, we will discuss the technology behind these hardware controls, Trusted Execution Environments (TEEs) & remote attestation, along with specific insurance use cases and solutions.

Want to learn more?

Below are resources to learn more about Conclave and Confidential Computing: