Insurance fraud is a huge issue for the industry, estimated to cost insurance companies and agents globally more than US$40 billion per year. A particular challenge is “double dipping” fraud, where one insured event such as a motor insurance claim is claimed twice from two different insurers. But now a solution is at hand: ClaimShare, a leading-edge fraud detection application. Built on R3’s Corda Enterprise and Conclave platforms, ClaimShare was developed through a collaboration between IntellectEU and KPMG, and recently won the global Corda Challenge: Insurtech from R3 and B3i.
Using blockchain and confidential computing technologies to help insurers collaborate and mitigate fraud, ClaimShare is generating huge interest and excitement across the insurance sector. To share more about the solution, and the benefits and opportunities it opens up, R3 recently hosted a webinar where Victor Boardman, R3’s head of insurance for EMEA & APAC, was joined by ClaimShare director Chaim Finizola and Kami Zargar, ACA Director and Head of Forensic at KPMG in Belgium.
At the beginning of the session, Victor introduced Conclave and its revolutionary impact, which springs from its ability to allow information from different companies to be pooled for joint analysis while keeping the underlying raw information confidential. Explaining that confidential computing keeps data cryptographically secure at rest, in transit and also in processing, Victor expanded: “That means the data can be pooled between multiple insurance companies and brokers, safe in the knowledge that the underlying information can’t be seen by any party, not even the operator of the service.”
Why is such a solution needed? KPMG’s Kami took up the story: “By some estimates, insurance fraud is the second most common type of fraud after tax fraud…and it is estimated that only half of it is detected. So we have been working over several years with the insurers and regulators to define solutions that will help them detect and manage this issue within the bounds of what is possible, given that there have historically been limitations around data, etc, in terms of how to tackle this issue.”
A large proportion of the insurance fraud that’s going undetected is believed to be double-dipping fraud – and ClaimShare’s Chaim explained how the solution addresses this type of crime: “We’ve worked to create an application that would enable the sharing of claims data between different insurers to detect double dipping fraud without needing to make changes to their back office or front office systems. Previously, there has never been a technology that enabled the sharing of data in a fully compliant manner. Using Corda and Conclave as a platform was a game changer, allowing us to share data and match the sensitive data while being fully compliant.”
Chaim went on to provide the webinar attendees with a live demo of ClaimShare in action. To see this – and learn more about how ClaimShare provides insurers with a powerful new weapon in their battle against fraud – click here to view the webinar in full.
Want to learn more?
Here are some helpful resources to learn more about Privacy-Enhancing Computation (PEC), Confidential Computing and Conclave.
Hear from R3’s Victor Boardman on how data sharing can address ‘double dipping’ in insurance claims in a recent article on Insider Engage.
In a recent survey conducted by KPMG, 51% of banks reported a significant number of false positives resulting from their technology solutions, hampering efficiencies in fraud detection.
In an effort to reduce risk, institutions continually monitor and segment their customer data. They employ a host of different techniques to effectively do this, including software tools, precise risk scoring, behavior analytics, and more. However, they still may not get the “full picture” when it comes to a customer’s risk profile. Unknown knowns remain in part because customers often have relationships with more than one bank, insurance firm or provider. On top of this, financial crime, money laundering and other fraudulent activities remain difficult to detect because criminals purposely distribute their activities across different institutions, knowing that those institutions hesitate to share their data with one another.
Companies know that data sharing could hold the key to unlocking greater analytics and insights in their fight against fraud and financial crimes. That said, it rarely happens because of an inherent mistrust in how their data will be used. Concerns around data privacy, lack of control, and fear of proprietary information getting into the wrong hands outweigh the benefits of insights that might come from peers pooling data. But what if firms could receive technical assurances that their data will not be viewed or misused as it’s being pooled and analyzed? That they would be in complete control over how it will be used?
Well, now they can.
How Privacy-Enhancing Computation (PEC) can help fight fraud
While fraudulent activities have become more sophisticated, fortunately so has the technology that has the power to stop them. Techniques like Privacy-Enhancing Computation (PEC) provide protections around data privacy, confidentiality and integrity that make different entities feel comfortable pooling data securely across company lines, to gain insights and identify risk. This means that firms will now be able to combine both data and forces to fight financial crime. In fact, a recent Gartner report highlighted Privacy-Enhancing Computation as one of the top strategic trends for 2021 that IT leaders should assess.
PEC solves the challenge that has faced security experts for years: how to protect data as it’s being shared, analyzed and operated on, without exposing or viewing the underlying data. Protecting data in use is the next frontier of data security. Data encryption today covers data at rest and in transit but has not been extended to data in use—until now.
Encrypting data in use is critical to identifying the increasingly complex methods that are evolving to commit financial crimes because it allows firms to pool and analyze sensitive data without compromising on data privacy. In the insurance industry, for example, by using privacy-enhancing techniques, firms can pool, analyze, process, and gain insight from confidential data without exposing the underlying data itself, to identify possible fraud due to multiple claims being presented for the same insured event.
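To make the idea concrete, here is a deliberately toy sketch in plain Java of the kind of duplicate-claim check that could run over pooled data inside an enclave. This is not ClaimShare’s actual matching logic; the class, field names and fingerprinting rule are all invented for illustration.

```java
import java.util.*;

public class DoubleDipCheck {
    // A pooled claim record; fields are illustrative, not ClaimShare's schema.
    record Claim(String insurer, String policyHolder, String vehicleVin, String incidentDate) {
        // Hypothetical fingerprint: same person, vehicle and date implies the same event.
        String fingerprint() {
            return (policyHolder + "|" + vehicleVin + "|" + incidentDate).toLowerCase();
        }
    }

    public static void main(String[] args) {
        List<Claim> pooled = List.of(
                new Claim("InsurerA", "Jane Doe", "VIN123", "2021-03-01"),
                new Claim("InsurerB", "Jane Doe", "VIN123", "2021-03-01"));

        // Group claims by insured-event fingerprint.
        Map<String, List<Claim>> byEvent = new HashMap<>();
        for (Claim c : pooled) {
            byEvent.computeIfAbsent(c.fingerprint(), k -> new ArrayList<>()).add(c);
        }

        // Flag events claimed at more than one insurer.
        byEvent.values().stream()
                .filter(claims -> claims.stream().map(Claim::insurer).distinct().count() > 1)
                .forEach(claims -> System.out.println("Possible double dip: " + claims));
    }
}
```

In a real deployment the matching would be far more sophisticated (fuzzy matching, weighting, thresholds), and crucially it would execute inside the enclave, so that no insurer and no operator ever sees another party’s raw claims.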
This opens up a world of possibilities across industries. This concept is simple yet was nearly impossible to execute on beforehand due to data privacy concerns. R3, IntellectEU and KPMG have leveraged this technology to deliver ClaimShare, a platform developed to detect and prevent double-dipping across insurers, allowing competing insurers to collaborate to fight fraud.
What are the types of techniques included in PEC?
There are a variety of software and hardware-based methods to protect data in use. Some examples include: secure multi-party computation, homomorphic encryption, zero knowledge proofs and trusted execution environments (TEE). Each technique tackles the problem of how to securely protect data in use, with accompanying advantages and drawbacks.
Confidential Computing stands out as a stable, scalable and highly performant solution for a broad range of use cases. By performing computation in a hardware-based TEE, it prevents unauthorized access or modification of applications and data while in use, thereby increasing the security assurances for organizations that manage sensitive and regulated data.
Meet Conclave – a revolutionary new Confidential Computing platform
Conclave is a new privacy-enhancing platform from R3 that utilizes Confidential Computing. Conclave enables the development of solutions, like ClaimShare, that deliver insight from data shared across parties without actually revealing the data to any other party, thus maintaining controls over how data is used while addressing security and compliance-related obligations.
With Conclave-enabled apps, confidential data can be pooled and processed within an Intel® SGX enclave where none of the contributing parties, nor the enclave host, can access the data. End-users can audit the application’s source code and cryptographically verify the application will run as intended before providing sensitive data for joint pooling or analysis.
As a result, end-users can be confident while sending proprietary data outside of their organization, and software providers can build more predictive financial crime risk management and compliance solutions using sensitive data from multiple firms for cross-institutional insights.
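As an illustration of what that cryptographic verification might look like in code, here is a hedged sketch using Conclave’s enclave constraint API as described in its documentation. The hash value is a placeholder, and the exact package names should be checked against the SDK version you are using.

```java
import com.r3.conclave.client.EnclaveConstraint;
import com.r3.conclave.common.EnclaveInstanceInfo;

public class AttestationCheck {
    // Verify a remote attestation before uploading any sensitive data.
    public static void checkBeforeUpload(byte[] serializedAttestation) {
        EnclaveInstanceInfo attestation = EnclaveInstanceInfo.deserialize(serializedAttestation);
        // "C:<hash>" pins the exact enclave code measurement we audited;
        // "SEC:STALE" accepts machines slightly behind on microcode updates.
        EnclaveConstraint.parse("C:0123456789ABCDEF SEC:STALE").check(attestation);
        // If check() didn't throw, the code we audited is what's really running.
    }
}
```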
Here are some helpful resources to learn more about PEC, Confidential Computing, and Conclave.
Read the Gartner report on why Privacy-Enhancing Computation is a top strategic trend for 2021
Learn more about Conclave in a recent blog from R3’s Richard Gendal Brown, titled ‘Introducing Conclave‘
Hear from R3’s Victor Boardman on how data sharing can address ‘double dipping’ in insurance claims in a recent article on Insider Engage
Conclave is an application development platform that helps build secure applications that can process data confidentially. I gave a brief introduction to Conclave in my previous blog. Now let’s look at how we could build our very first app on Conclave.
What are we building?
We will build a secure auction app where bids from parties would remain confidential using an enclave. All bids would be processed within the enclave and the result of the auction would be revealed, without compromising the bids submitted by each participant.
Components of an application built on Conclave
Conclave has three major components: the Enclave, the Host, and the Client. So let’s get down to business and start building each component of our application on Conclave.
An enclave is a program that runs in a protected memory space that can’t be accessed by the OS, the kernel or the BIOS. Thus you can rest assured that the data sent to the enclave is completely confidential.
To build our secure auction app, we need to write an enclave program that takes bids from participants and processes them to determine the highest bid and hence the winner.
Conclave provides the Enclave class, which can be subclassed to build our own enclaves. Data can be sent to the enclave using the Conclave Mail API, which provides end-to-end encrypted communication between the client and the enclave.
We can use the receiveMail method to receive mail sent to the enclave. The userRoute parameter helps map mail to different clients. We keep two maps to store userBids and their public keys respectively. Note that Message is a user-defined model object we use to transfer data.
Conclave runs enclave programs on the GraalVM Native Image JVM, which doesn’t support Java serialization. It does, however, support Kryo, so we use Kryo for serialization.
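The post’s original listing isn’t preserved in this copy, so here is a reconstruction of what the enclave might look like. It assumes the receiveMail signature of the Conclave version current at the time (earlier betas also passed a numeric mail id), and Message, userBids and userKeys follow the names the post refers to, with the Kryo plumbing hidden behind a hypothetical Message.deserialize helper.

```java
import com.r3.conclave.enclave.Enclave;
import com.r3.conclave.mail.EnclaveMail;

import java.security.PublicKey;
import java.util.HashMap;
import java.util.Map;

public class AuctionEnclave extends Enclave {
    private final Map<String, Integer> userBids = new HashMap<>();   // userRoute -> bid
    private final Map<String, PublicKey> userKeys = new HashMap<>(); // userRoute -> sender key

    @Override
    protected void receiveMail(EnclaveMail mail, String userRoute) {
        // Message is the post's user-defined model; Kryo deserialization elided.
        Message message = Message.deserialize(mail.getBodyAsBytes());
        userBids.put(userRoute, message.getBid());
        userKeys.put(userRoute, mail.getAuthenticatedSender());
        // A special message from the auctioneer (not shown) triggers processBids().
    }
}
```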
Once all the bids have been submitted, the auctioneer can ask the enclave to process them and send the result to all participants.
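Again the original listing isn’t preserved here, but a reconstruction of the processing step might look like the following, with auctionAdmin holding the auctioneer’s key and routing string and sendMail wrapping Conclave’s postOffice and postMail calls (those two methods follow the Conclave Enclave API as documented).

```java
// Continues inside AuctionEnclave from the previous sketch.
private void processBids() {
    // The highest bid wins.
    Map.Entry<String, Integer> winner = userBids.entrySet().stream()
            .max(Map.Entry.comparingByValue())
            .orElseThrow(() -> new IllegalStateException("No bids received"));

    byte[] result = ("Winner: " + winner.getKey() + ", bid: " + winner.getValue()).getBytes();

    // Tell the auctioneer first, then every participant.
    sendMail(auctionAdmin.getKey(), auctionAdmin.getRoute(), result);
    for (Map.Entry<String, PublicKey> user : userKeys.entrySet()) {
        sendMail(user.getValue(), user.getKey(), result);
    }
}

// Encrypts a reply for one recipient and hands it to the host for delivery.
private void sendMail(PublicKey recipient, String route, byte[] body) {
    postMail(postOffice(recipient).encryptMail(body), route);
}
```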
We use auctionAdmin to store the auctioneer’s key and routing string, which are used later to send the result back to the auctioneer. Conclave’s post office feature creates the encrypted mail used for communication between the enclave and clients; our sendMail method wraps it.
That completes our enclave, let’s look at building our host component next.
Host programs load the enclave and serve as a proxy between the client and the enclave. Hosts are considered untrusted at all times, hence all communication between the host and the enclave is always encrypted.
The first thing we do when initializing our host is verify that the hardware supports running enclaves.
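A minimal sketch of that check, assuming the EnclaveHost.checkPlatformSupportsEnclaves API from the Conclave betas (passing true asks Conclave to try enabling SGX if it is merely disabled):

```java
import com.r3.conclave.host.EnclaveHost;
import com.r3.conclave.host.EnclaveLoadException;

public class Host {
    public static void main(String[] args) {
        try {
            // Throws if this platform can't run enclaves.
            EnclaveHost.checkPlatformSupportsEnclaves(true);
            System.out.println("This platform supports enclaves.");
        } catch (EnclaveLoadException e) {
            System.out.println("Enclaves are not supported: " + e.getMessage());
        }
    }
}
```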
Once we have verified the platform support, we can go ahead and load the enclave program.
When starting the enclave we supply a callback, which is used to send enclave responses back to clients. The MailCommand object contains the response content and the routing parameter used to map responses to different clients.
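A hedged sketch of loading and starting the enclave follows. The start signature and MailCommand types are as I recall them from the Conclave docs, and sendToClient is a hypothetical helper that writes to the right client’s socket.

```java
import com.r3.conclave.host.AttestationParameters;
import com.r3.conclave.host.EnclaveHost;
import com.r3.conclave.host.MailCommand;

public class EnclaveLoader {
    static EnclaveHost loadAndStart() throws Exception {
        // Load the enclave class by name.
        EnclaveHost enclave = EnclaveHost.load("com.example.auction.AuctionEnclave");
        // The callback receives commands emitted by the enclave, e.g. mail
        // to post back to clients, identified by routing hint.
        enclave.start(new AttestationParameters.DCAP(), commands -> {
            for (MailCommand command : commands) {
                if (command instanceof MailCommand.PostMail) {
                    MailCommand.PostMail post = (MailCommand.PostMail) command;
                    sendToClient(post.getRoutingHint(), post.getEncryptedBytes());
                }
            }
        });
        return enclave;
    }

    // Hypothetical helper: look up the socket for this routing hint and write to it.
    static void sendToClient(String routingHint, byte[] encryptedResponse) {
        // Elided for brevity.
    }
}
```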
Once the enclave has started, we can bring up a TCP server to accept client requests.
For simplicity, we use a plain TCP connection between the host and the client. You could use a more sophisticated protocol such as gRPC, or whatever suits you better.
All clients must verify the authenticity of the enclave before sending it confidential information, hence we send the attestation object to clients as they connect to the host. Clients can use this information to verify the measurement and decide whether they can trust the enclave.
Finally, clients can send their confidential information, which the host then forwards to the enclave.
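Putting those pieces together, a minimal sketch of the host’s serving loop might look like this; note that deliverMail’s exact signature has varied across Conclave versions (earlier betas also took a numeric id).

```java
import com.r3.conclave.host.EnclaveHost;

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class HostServer {
    // Accept clients, send each the attestation, then pump their mail inward.
    static void serve(EnclaveHost enclave) throws IOException {
        byte[] attestation = enclave.getEnclaveInstanceInfo().serialize();
        try (ServerSocket server = new ServerSocket(9999)) {
            while (true) {
                Socket client = server.accept();
                DataOutputStream out = new DataOutputStream(client.getOutputStream());
                out.writeInt(attestation.length);
                out.write(attestation);
                out.flush();

                DataInputStream in = new DataInputStream(client.getInputStream());
                byte[] mailBytes = new byte[in.readInt()];
                in.readFully(mailBytes);
                // The routing hint identifies this client, so the enclave's
                // replies can be routed back to the right connection.
                enclave.deliverMail(mailBytes, client.getRemoteSocketAddress().toString());
            }
        }
    }
}
```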
Now we are left with the final piece of the puzzle to complete our first application on Conclave. Let’s build our client.
First, we establish a connection with the host and receive the attestation object.
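A sketch of that first step, assuming the same length-prefixed framing as the host sketch above:

```java
import com.r3.conclave.common.EnclaveInstanceInfo;

import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;

public class AuctionClient {
    // Connect to the host and read the attestation it sends on connect.
    static EnclaveInstanceInfo receiveAttestation(Socket socket) throws IOException {
        DataInputStream in = new DataInputStream(socket.getInputStream());
        byte[] bytes = new byte[in.readInt()];
        in.readFully(bytes);
        EnclaveInstanceInfo attestation = EnclaveInstanceInfo.deserialize(bytes);
        System.out.println(attestation); // demo only: verify before trusting!
        return attestation;
    }
}
```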
For the purposes of this blog I have just printed the attestation info to the console. For real applications, however, the attestation should be verified before sending any information to the enclave.
In real-world use cases, the client would either have access to the enclave’s source code, which they could build themselves to reproduce the measurement, or they would use a trusted service provider to verify the attestation information.
The next step is to send the bid to the enclave.
Notice that we use the post office feature again to create the mail to be sent to the enclave. To get the bid, we take input from the user via the console.
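A sketch of the bid submission. createPostOffice and encryptMail follow the documented Conclave client API, while Message.serialize stands in for the post’s Kryo-based helper.

```java
import com.r3.conclave.common.EnclaveInstanceInfo;
import com.r3.conclave.mail.PostOffice;

import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;
import java.util.Scanner;

// Continues inside AuctionClient from the previous sketch.
static PostOffice sendBid(Socket socket, EnclaveInstanceInfo attestation) throws IOException {
    System.out.print("Enter your bid: ");
    int bid = new Scanner(System.in).nextInt();

    // The post office holds our sender key; only it can decrypt replies to us.
    PostOffice postOffice = attestation.createPostOffice();
    byte[] encryptedBid = postOffice.encryptMail(Message.serialize(new Message(bid)));

    DataOutputStream out = new DataOutputStream(socket.getOutputStream());
    out.writeInt(encryptedBid.length);
    out.write(encryptedBid);
    out.flush();
    return postOffice;
}
```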
Finally, we need to write the code for reading the response from the enclave. We use the post office’s decryptMail method to decrypt the encrypted message from the enclave and read the response.
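And a sketch of reading the result. decryptMail must be called on the same post office instance that encrypted our bid, since it holds the matching keys.

```java
import com.r3.conclave.mail.EnclaveMail;
import com.r3.conclave.mail.PostOffice;

import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;

// Continues inside AuctionClient: decrypt the enclave's reply.
static void readResult(Socket socket, PostOffice postOffice) throws IOException {
    DataInputStream in = new DataInputStream(socket.getInputStream());
    byte[] encryptedReply = new byte[in.readInt()];
    in.readFully(encryptedReply);
    EnclaveMail reply = postOffice.decryptMail(encryptedReply);
    System.out.println("Auction result: " + new String(reply.getBodyAsBytes()));
}
```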
Congratulations! We have successfully built our first application on Conclave.
The entire source code for this application we built is available here. If you wish to run the application, please follow our guide in the documentation here.
I hope you liked the tutorial and thanks for reading.
Another day, another vulnerability, another hack. Losing control of critical personal data feels random and inevitable (here’s one very recent example).
It’d be great if we could trust IT service providers, but we can’t. Even if they’re totally respectable pillars of society who have only the best intentions, the difficulty of keeping networks secure means their good will isn’t enough.
The computer industry has spent many years researching solutions. One of them is confidential computing (formerly known as trusted computing). Software running on a server can prove its identity over the internet via what’s called a remote attestation. The attestation contains the hash of the program, the fact that it’s been tamper-proofed by the hardware and an encryption key. Software outside the so-called enclave cannot get in to access its secrets, devices outside the CPU are blocked by transparent memory encryption, and clients can communicate with the enclave using the key.
It’s a simple concept yet with it you can solve many problems that previously required intractably slow or complex cryptography:
Any computation that combines data from multiple parties and doesn’t want a trusted intermediary.
Building decentralized services from peer-to-peer networks of untrusted nodes.
Blocking attacks on your servers by keeping the data the attackers want inside an enclave whilst pushing as much software as possible to the outside.
Only one problem: the idea is simple yet using it is hard.
But first, let’s briefly talk about how to design enclaves the right way.
The Zen Of Enclaves
As of January 2021 the best implementation of confidential computing is Intel SGX. It follows the UNIX philosophy of small programs, each doing one task well. An enclave is meant to be as small as possible and meant to do only one thing — computation on some data. Everything else should be kept outside the enclave: network handling, database management, monitoring, metrics, administration … all of it.
Some approaches to confidential computing don’t do this, and attempt to run an entire operating system and serving stack inside the enclave. This isn’t useless, but it’s also not really maximizing the benefits of the concept. There are two problems with it:
Putting a vulnerable software stack inside a protected memory space doesn’t make it secure. Enclaves erect a hard border between software components so malicious or hacked software on one side can’t get into the other, but that’s no use if the software the enclave is protecting is itself vulnerable. One way to minimize vulnerabilities is to just minimize the amount of code in the protected space that’s handling attacker-controlled data.
Remote attestation is a fundamental part of the concept. Users check what’s running before they upload their data. But, attestations just give you a SHA2 hash. To know what it means someone must audit the software that hashes to that value, and check that it really does what it claims to do. If your software stack changes every day the hash will change every day too so how can your users — or external auditors — possibly keep up?
Reflecting on the zen of enclaves we reach the following conclusions:
An enclave is a protected sub-module of your wider application, not an entire application or serving stack by itself.
An enclave is only core business logic that your users care about.
An enclave is a security weak point: coding errors inside the enclave render the protection useless.
Therefore plumbing — anything that’s neither here nor there from your users’ perspective — should be kept outside the enclave. Upgrades to it won’t change the hash reported to clients and thus won’t imply any additional audit work. The enclave itself should be written with tools that help us avoid coding errors.
A JVM is an excellent tool for writing enclaves because of its emphasis on combining performance and safety. Garbage collected and type safe code is provably free of memory management errors, which are still one of the most common ways software gets hacked. In Conclave we use the GraalVM Native Image JVM, which produces self-contained binaries with minimal memory usage.
There are always three components in any enclave-oriented application: the enclave, the host, and the client.
Conclave provides a client library that can be used to send and receive messages from the enclave. It works a little differently to how other enclave APIs do, so you can read about the justifications here and here.
Writing a simple app is straightforward. Follow the hello world tutorial to learn the additional steps required: mostly, this means configuring the build system and then checking the server-side code from the client using the Conclave API.
Today we launched the first stable version of Conclave. It’s been in beta for a while, and during that time we’ve done usability studies on the API to ensure it’s understandable and flexible. It’s free for individuals and early-stage startups who open source their code, and pricing for everybody else starts low and grows only as your solution itself gains adoption. In other words it’s free to experiment and learn. The documentation is available here.
This new release adds to beta 4 the following enhancements:
A better API for mail.
Padding for mails to ensure that message sizes can’t act as a side channel. Different padding policies are provided: you can pick between a fixed min size, max seen so far, a moving average or a custom policy.
The java.nio.file API is now connected to an in-memory file system. This enhances compatibility with libraries that expect to load data or configuration from files, whilst avoiding the complexities of running a full filesystem engine inside the enclave. For persistence use the mail-to-self pattern. (A small illustration follows after this list.)
A new script is provided to run Gradle inside a Linux container, on macOS. This can simplify running tests against a fully compiled enclave (i.e. not using mock mode).
Enclaves are now locked by default, i.e. multi-threaded enclaves are now opt-in rather than opt-out. This ensures a malicious host can’t multi-thread an enclave that’s not thread-safe.
GraalVM has been upgraded to 20.3, improving performance and compatibility. An upgrade to 21.0 will come soon which will add support for Java object serialization.
Various usability enhancements, bug fixes, and other safer defaults.
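As a small illustration of the java.nio.file item above, the following uses only standard file APIs; inside a Conclave enclave these now resolve to the in-memory filesystem rather than the host’s disk. The path and contents are invented for the example.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class InMemoryFsDemo {
    // Standard java.nio.file calls; inside an enclave nothing touches disk,
    // so libraries that expect config files keep working transparently.
    static String loadConfig() throws IOException {
        Path config = Paths.get("/app/config.properties");
        Files.createDirectories(config.getParent());
        Files.write(config, "mode=auction".getBytes(StandardCharsets.UTF_8));
        return new String(Files.readAllBytes(config), StandardCharsets.UTF_8);
    }
}
```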
R3 launched Conclave this week, a new platform that makes it easy to build privacy-protecting solutions, through the use of Confidential Computing. But what is Confidential Computing, why should you care and why is Conclave the platform to watch?
The data sharing dilemma
When you click “submit data” you lose control of your data. But what if there was a technological way to retain control?
Imagine you’re a hospital administrator whose mouse is hovering over the “submit” button. When you click that button some of your patients’ most sensitive healthcare records will be uploaded to a research firm which is performing a clinical trial. You have the patients’ consent; they desperately want to help advance medical science. But you’re still worried.
What would happen if a rogue employee at the research firm stole the data? What if the research firm is using your patients’ data in a way they didn’t agree to? It’s a scary thought.
And it’s not just hospitals. Organizations in all industries face this dilemma every single time they send sensitive data to a third party. The brutal reality is that the moment you send information to somebody else, you’ve just given up any technological control over it.
You’re reliant on privacy policies, goodwill, contracts and law. Those privacy policies that nobody reads? That’s the only thing protecting your most sensitive information when you send it somewhere else!
This is truly terrifying when you think about it. Yet we somehow regard it as normal: “It’s just the way computers work.”
Ultimately, when you send data to somebody else’s computer, you’ve handed over full control to them. They could change the algorithms that are running and you would never know. They could take a copy of your data and use it for their own purposes and you would never know. They could give you false results and you would never know.
But what if there was a way that you could know? What if you could know, for sure, what algorithm your service provider was running? What if you could know if it had changed? And what if you could know that your data remains protected at all times so the service provider could not observe or copy it even if they wanted to?
It turns out you now can!
Confidential Computing: protect your data when in someone else’s hands
Confidential Computing allows us to imagine a new weapon in our privacy arsenal: the ability to protect your data even whilst it’s in the hands of somebody else.
Imagine if the hospital administrator could examine the algorithms that the research firm will use before they upload their data, and that not even a rogue employee in the IT department of the research firm could see the patient data or change those algorithms.
Confidential Computing lets us imagine a future where you know, for sure, what will happen to your information after you click the ‘upload’ button.
And this technology isn’t limited to doctors and hospitals. Imagine a multi-national bank can now share data for analytics between its branches located in different jurisdictions, all without worrying about violating privacy laws. Or a group of insurers can detect fraudulent claims by cross-checking against each other’s private client data, yet never revealing what that data actually is.
That’s the future Confidential Computing techniques allow us to imagine.
But there’s a problem: the technology is hard to use and few people even know it exists.
After all, did you know this future was a possibility before you read the paragraphs above?!
Conclave makes Confidential Computing easy
This is why R3 developed Conclave: to make it easy to use this technology.
Conclave makes it possible for regular developers to write applications that their users can remotely ‘audit’. And Conclave makes it easy for these users to actually wield that power. Conclave makes the promise of “know exactly what will happen to your data when it’s in somebody else’s hands” a reality.
And we at R3 are working with our network of clients and partners to bring Conclave-enabled solutions rapidly to market so everybody can see for themselves how powerful this new technology could be.
Indeed, the insurance scenario I described above isn’t just an idea; ClaimShare is using Conclave to do just that!
Conclave is available as a 60 day trial if you’re an established firm. And if you’re an individual or an early-stage startup, you can use Conclave for free. The new world of Confidential Computing is so big and so under-explored that we at R3 want as many innovators as possible to join us in exploring the potential of this new technological super-power. All we ask is that if you take up the free option after any trial, you share what you learn by open sourcing your apps so others can follow in your footsteps. The potential of Confidential Computing is awe-inspiring. And Conclave is the key to mass adoption. Join us to make this new world a reality!
You might have heard the phrase “Data is the new currency” — very true indeed. Over the past decade, we have been inventing technologies that use data to benefit humans in ways previously unimaginable. Data is therefore precious and should be protected from misuse in every way possible. While we have ways to protect data from unauthorized access, something that has been neglected is the misuse of data by authorized personnel. We routinely rely on trusting some person or organization to be honest and to handle sensitive data properly. This is not a great idea, and it leads to serious problems in many cases.
Let’s take the example of a tender process. Suppose an organization issues an RFP (Request for Proposal) to invite bids for a particular project. While the interested parties submit their bids/proposals in a confidential manner, they are still at the mercy of the person or organization handling the process to keep those bids confidential. It is entirely possible that someone with access to this confidential information might leak it for their own benefit.
This definitely seems to be a huge problem. Someone needs to have access to the data to be able to process it. The only possible way to protect data in such cases is to process it without revealing it. But is it even possible?
Conclave is an application development platform that can be used to build enclaves. In simple words, an enclave is a small piece of software that runs in an isolated region of memory. Access to this region of memory is blocked to everyone, even privileged software like the kernel and BIOS. Thus the code and data in the enclave can’t be read or tampered with by anyone, not even the owner of the computer it runs on.
Enclaves require hardware support; Intel SGX (Software Guard Extensions) is one implementation of enclave-oriented computing. Conclave builds on SGX to give developers a toolkit for building enclaves using high-level languages like Java.
While SGX-enabled hardware is required to run apps in production, it’s not essential for development. You can run your application in simulation mode, which doesn’t require SGX hardware. Learn about the different enclave modes here: http://docs.conclave.net/tutorial.html#enclave-modes
Thus multiple parties can use a Conclave app to solve a multi-party computation problem without worrying about their data being compromised. Data is encrypted and sent to the enclave, where it is decrypted and processed, and the result is sent back. No one has access to the private data other than the enclave. And since enclaves are loaded in a protected memory space that can’t be accessed, the data can’t be tampered with.
Getting to know a Conclave powered app
Before we start building your first application on Conclave, we first need to understand some basics so that we know how to design the app.
An app built on Conclave has 3 major components:
Enclaves are the programs that are loaded in the protected memory space.
Hosts are programs responsible for loading the enclave and providing the resources it requires. They mostly act as a proxy between the client and the enclave. Hosts are untrusted and assumed to be malicious at all times, hence communication between the host and the enclave is encrypted.
Clients send encrypted data to the enclave for processing via the host. Conclave comes with the Mail API to ease the communication between enclaves and clients.
Clients don’t communicate with the enclave directly; they send encrypted messages to the host, and the host forwards them to the enclave for processing. Enclaves have their own key pair, which is used for encryption. So although data passes through the host, the host can’t tamper with it, since only the enclave can decrypt the messages using its private key.
But how does a client trust that a public key actually belongs to an enclave and not something pretending to be an enclave? To handle this issue, something called remote attestation is used.
Remote attestation is a piece of data containing the information needed to verify an enclave. Among other things, it contains a measurement: a hash generated from the fat JAR that is loaded into the enclave. The measurement can be verified by compiling the enclave source code; Conclave ensures that multiple builds of the same source code produce the same measurement.
This approach, however, can get a little complicated across upgrades, so a signing key can be used as an alternative: the enclave is signed with a specific key, and information about that key is included in the remote attestation.
Enclave Application Components Interaction
In addition to checking the remote attestation, clients can also ask Intel for an assessment of the enclave to verify that it is secure.
That should give you a brief idea of what Conclave is and how you could benefit from building on it.
We will look at how to build your first application on Conclave in my next blog. Stay tuned, and thanks for reading!
In 1671, a man named Thomas Blood almost managed to steal the Crown Jewels.
He achieved this by befriending the keeper of the jewels, Talbot Edwards. After gaining his trust, Blood convinced Edwards to let him into the Jewel House to show the jewels to Blood and his companions. Once the thieves had been let in, they knocked Edwards unconscious and took the jewels from right under his nose!
Why did Blood choose to steal the jewels in this way? What were his alternatives? He could have attempted to break into the Jewel House while the jewels were unattended. Alternatively, he could have waited until the jewels had to be moved, and attempted to steal them while they were being transported to their destination. In both scenarios, the jewels would have been under strong protection. It would have been nearly impossible to break into the Jewel House, and the jewels were likely to be guarded very closely while in transit. Instead, Blood took advantage of the trust he had gained to try to steal the jewels while they were most vulnerable: when the majority of their protection had been lifted.
Data exists in one of three states: at rest in local storage, in transit between two locations, and in use when it is being processed by applications. Jewels unattended in the vault are like data at rest, and jewels being transported to a new location can be thought of as data in transit. In both of these scenarios, modern encryption makes accessing the data infeasible.
When we want a third party to do something with our data, we have to decrypt it to allow them to use it. Third party services allow us to make the most of our data by providing software and hardware that lets us process it, but this comes with associated risk. Our data, like the jewels, is most vulnerable while in use.
Knowing this, how can we protect data that is in use while still allowing third parties to help us generate insight from it? Is there a way to leverage the power of cloud computing but protect our data from cloud providers? How can we ensure that the people using our data will not be able to misuse it in this vulnerable state?
What if Edwards had been able to place a secure, tamperproof barrier in front of the Crown Jewels, even while they were being viewed? What if he could have kept Blood bound to his promise that he only intended to view the jewels and nothing more?
In software, we can achieve this using secure enclaves, which create a trusted environment that is isolated from the operating system that hosts them. Enclaves can access and perform computations on data, while the data inside remains encrypted to the host operating system. We can cryptographically assert exactly what code is running inside our enclaves so we know that there will be no surprises.
Inviolable truths about the limits of technology govern every industry… until they don’t. “You can’t do that with client data” will soon be one of them…
Consumer-grade camera sales have collapsed. How could the manufacturers sit back and just watch?
The manufacturers must have been watching the smartphone revolution, and yet they concluded they could either retain a profitable niche or perhaps even outlast it as a passing fad.
What was going through their minds?!
What were they thinking when they saw the emergence of smartphones with cameras of ever greater resolution?
Couldn’t they see that the only future that faced them was oblivion?
I have a theory.
I suspect they looked at the form factors of smartphones and correctly concluded that the compromises needed to fit the lenses inside the body meant the optics would always be worse than ‘real’ cameras. It was perfectly reasonable to conclude from this that there would always be a good market for cameras that could take ‘high quality’ pictures, with the phones filling the high-convenience, low-quality, commodity niche that they didn’t want to compete in anyway.
Software and CPU power mean that optically inferior smartphones can outperform purpose-built cameras for all practical purposes.
But what they perhaps failed to appreciate were the advances in CPUs and software that would mean the photographs most people can take on the latest iPhones are better than they could ever have achieved with a standalone device.
Far from populating the low-value niche below standalone cameras, smartphones leapt ahead and relegated the standalone cameras to commodity status!
The manufacturers were right: there are hard physical limits on what the optical portions of smart phone cameras could ever achieve. But the ‘wildcard’ of advancements in software and CPU power meant the inviolable truth that ‘real’ cameras can always outperform commodity cellphones was neither true nor inviolable.
Technology Often Dictates Market Structure
Here’s another example: life in the London taxi industry used to be so simple. If you had a ‘black cab’ license you could pick up passengers on the street. Or you could get a ‘minicab’ license and focus on customers who had pre-booked. Two worlds. Two business models. No overlap.
If you can instantly match supply with demand and have pervasive connectivity, then the distinction between “black cabs” and “minicabs” melts away.
“That’s just the way things are.”
Until they weren’t.
We all know how the Uber story played out but we don’t always ask ourselves why and why then.
My answer is the emergence of pervasive connectivity and smartphones. You needed both.
A key insight on which Uber was built was that if passengers and drivers have pervasive connectivity then nothing stops you driving the time from ‘booking’ to ‘pickup’ to as close to zero as you like.
You can provide a ‘hail from the street’ experience under the licensing regime of ‘prebooking’ if you know where the drivers are at all times and the passengers have such confidence in the cellular network that they don’t worry about ‘booking’ until they’re ready for their ride.
You needed to believe that is where technology was heading or you couldn’t have pulled it off. Nobody could have justified the eye-watering investment in marketing, and driver and passenger incentives, without that belief.
In short: “minicab firms can’t do on-street pickups” was an inviolable truth until Uber and others saw where technology was going and realized it wasn’t going to be true much longer.
There are many examples of this sort of phenomenon when you start looking.
For example, it may turn out that 2020 proves that the ‘inviolable truth’ that companies need headquarters buildings also turns out to be false.
But, if it is false, it will only be because bandwidth and communications technologies have reached the right level of maturity.
It can be fun to find examples of this sort of thing in history too.
The entire structure of agricultural markets is driven by assumptions around our ability to predict the weather.
Another example of something people saw as an ironclad fact of life—“you can’t reliably predict the weather”—that resulted in the restructure of an entire market when technology proved the assumption to be false.
What these examples all have in common is that the structure of entire markets is often determined by deep-baked assumptions about the nature and limits of technology.
“Nobody will ever have a permanently-connected super computer in their pocket.”
“The physics of light means a smartphone camera can never outperform a standalone camera.”
“The inadequacies of video-conferencing mean the commercial property market has to keep throwing up tower blocks in business districts.”
“Nobody can make even directionally correct predictions about this year’s harvest so nobody can buy hedges or sell futures.”
And here’s the scary thing: we often don’t even realize we’re making these assumptions.
Take the world of ‘customer data’. Every firm has to deal with it.
Some firms’ businesses are based on it.
If you’re a stock exchange, your job is literally to store and process data from multiple customers, to identify pairs whose desires to buy and sell exactly match each other.
If you’re a fraud analytics firm, your job is to take lists of payment transactions from banks and scan them for evidence of money laundering or other illegal activity.
Customer-provided data is the lifeblood of these firms. It’s what makes them useful and valuable. And yet it’s also very risky. It can be hacked. It can be misused. It can be inadvertently commingled with inappropriate datasets.
It’s a wicked tradeoff: your business depends on access to sensitive data, and yet your business can also be destroyed if you don’t respect it!
So, many of these firms have built their businesses on the basis that there are some things they just aren’t allowed to do. There are some data sets your customers will never let you have. There are some data aggregations, especially across your customers, that you will never be allowed to do.
And here’s the thing: this is going to look just as misguided in a few years as failing to anticipate the revolution in photography that was unleashed when ‘software ate the world.’
The most corrosive of these assumptions are the ones where you say: “we’d love to provide this service but the customer data we need is just too hot to handle. We just can’t risk having it in the first place.”
When you send data to somebody else, there is nothing technological you can do to control how they use it. Or at least, not until now…
These assumptions are, of course, not misplaced. Data doesn’t come padlocked as if in a straitjacket that allows it to be used for one purpose, but nothing else. Once you have it, you can do anything with it. And this means so can a rogue employee, a poorly trained intern, a malicious hacker or a distracted programmer.
It doesn’t matter how valuable the service you envisage might be, the risk that the data could fall into the wrong hands or be used for a prohibited purpose is just too great. Those sorts of data are just too hot to handle.
But what if that assumption was false?
What if it were possible for your customers to wrap their data in a straitjacket before they sent it to you? What if it were possible to control exactly how customer data will be used and to be able to prove this to your customers before they sent it to you?
What new services could you offer if you unleashed yourself from fear? What groundbreaking new insights could you discover for your clients—or for the whole market—if you could safely combine datasets from different customers with their full knowledge and permission, because they were assured that was all you would do with it?
Could you identify suspicious payment transaction graphs that spanned dozens of banks but which looked entirely legitimate in isolation? Could you operate a provably fair ‘dark pool’ for under-served markets?
What could you do for your customers if you could prove to them exactly how you would use their data… where you didn’t have to worry about what else could happen to it whilst in your custody?
Well, you have to wonder no longer. Because such a thing is now possible. Confidential Computing is a technological breakthrough that allows customers to put a ‘straitjacket’ around their data and control how it will be used.
And Conclave from R3 puts this power into the hands of your own analysts and developers.
Remember when Apple first allowed you to put “1,000 songs in your pocket”?
Few of us now remember the days when we had to carry around a wallet of CDs and a discman if we wanted to listen to music on the move.
But notice how we barely even remember iPods, either! Something that seemed so new was itself first commoditised and then rendered redundant.
It’s probably a long time since you’ve ‘ripped’ a CD or paid for a song on iTunes. But you probably don’t even bother to download Spotify tracks for ‘offline’ use these days, so prevalent has high-quality pervasive connectivity become.
It’s a normal part of the technology lifecycle that a product that at first seemed ground-breaking soon becomes commoditised and accepted as the status quo… and sometimes even then obsoleted.
We know this is just the circle of technological life.
But that period of novelty, even if it is fleeting, is nevertheless a period of ambition, creation and opportunity. Even when you know something will be commoditised there can still be good money to be made from it whilst it’s still new.
Think about something as mundane as security on the web.
The once rare, but now ubiquitous green padlock in the URL bar is a simple visual cue to the end user of a website that the page is secure, and they can submit sensitive information to your server. As we all now understand, this is because the site uses HTTPS, which is designed to prevent anyone from reading or modifying the data you exchange with the website, made possible because of the SSL/TLS protocol that secures transmitted data.
When this was first introduced in 1994, very few websites used it. As adoption grew, we became subconsciously trained to look for it as web users. Firms who adopted it before their competitors could win business from the laggards.
Everybody knew it would soon be ubiquitous. But it didn’t happen immediately. There was opportunity even when you knew where things were heading.
Now we’re at the point where it’s table-stakes – we even use it for simple documentation sites, and any website owner that doesn’t use it is seen as negligent.
And the firms who mastered the technology early were well placed when it became an expected cost of doing business. If you were ever in doubt about the importance of mastering pivotal technologies before ubiquity, just look at the price of eCommerce specialists in 2020 as the retailers who failed to invest in their web presence went into full-on panic mode when the pandemic struck.
But how does this apply to data security in the world of enterprise technology?
As we head into 2021, we’re beginning this same process in the lifecycle of a previously niche technology: Confidential Computing. Those working on enterprise blockchain projects helped propel it into the mainstream, but its impact will spread far beyond as it helps us deliver on the promise of securing a business’s data whilst in use.
Securing business data whilst it’s being used? Aren’t security protocols such as HTTPS already meant to protect us like that?
Well, you might assume that, but…no.
Have you ever stopped to ask yourself what that little green security padlock actually means?
Secure in what way?
What does it actually represent?
What protection is it giving you?
What bad things could happen to you if the padlock wasn’t there?
And in any case, isn’t there a padlock when you browse sites like Facebook? And yet aren’t they appearing in the news regularly accused of “selling” or “misusing” your data? How can they do this if they have the padlock and the padlock means it’s “secure”?
The answer, of course, is that the padlock is there simply to ensure you really are logged in to facebook.com and not some other site. And it ensures that nobody can intercept your private information as it flows back and forth between your computer and Facebook’s data centres.
The padlock in your browser keeps your data safe as it travels to and from your favourite social media service. That’s important, of course.
But notice what that padlock doesn’t do.
That padlock doesn’t tell you anything about what Facebook will do with your data once it arrives. You just know you’re sharing your data with them and not somebody else.
In the world of business, where data is often a firm’s most valuable asset, this situation is no longer acceptable. Traders, for example, want to buy and sell stocks for the best prices in the most liquid venues. But they don’t want the operators of those venues using their orders to trade against them.
This is where Confidential Computing comes in.
This technology makes it possible to check what program is running on somebody else’s computer before you send your information, and to be sure that the owner of that computer can neither influence nor observe what’s happening.
And it’s going to utterly transform how we think about data security.
OK, but what does this have to do with blockchain?
In my last column I made the point that no technology stands alone. After all, market-level cooperation, which is the central promise of enterprise blockchain platforms, relies on accurate, timely and secure data sharing between firms.
But that’s not always the whole story. What if firms need to gain collective intelligence from data that needs to remain concealed? Blockchain has no answers to that question. But by integrating an adjacent technology – such as Confidential Computing – this challenge can finally be overcome.
The last five years of enterprise blockchain development have woken the business world up to the fact we can solve problems for entire markets in a way that we couldn’t in the past. Just look at some of the market-wide initiatives that are already live – Spunta Banca DLT for interbank reconciliation in Italy, B3i for the global insurance industry, and Contour and Marco Polo for trade finance. But that’s not to say it’s easy bringing so many different players together – in fact, it’s been much harder than many of us anticipated, and taken much longer. But it is possible – and the live use cases of this technology continue to grow month by month.
Ironically, however, as the technology moves towards widespread adoption, fewer and fewer businesses will realise that the platforms and apps they’re using are being powered by blockchain. It won’t be new or exciting anymore, it’ll just be there – and it will work.
Similar to that little green padlock.
As we head into 2021, Confidential Computing will begin its journey on this same lifecycle. Ever since blockchain firms began working with clients on their challenges, there has always been someone in the room who would say: “you don’t need a blockchain for that!” And guess what – sometimes they were right. In some scenarios, firms needed to collaborate at a market level but not everyone’s records needed to be synchronised.
The challenge was sometimes to bring together data to extract insight but without anybody seeing anybody else’s information – and this is what Confidential Computing is able to achieve. And so, the combination of these two innovations enables collaborative data processing without giving up privacy. This seemingly simple premise is in fact so revolutionary that it will enable businesses to gain a major competitive edge and grow market share in the coming years.
There have been some very high-profile examples of front running in this scenario – so imagine if a bank actively gave up its freedom to see your data by deploying Confidential Computing. Isn’t it possible it would actually grow its market share?
Or imagine multiple institutions being able to share all their transaction data with a third party via a blockchain-based anti-fraud solution, with the third party able to analyse it for fraud patterns without actually seeing any of the sensitive data. Would they not quickly become a market leader?
And that’s why the convergence of blockchain and Confidential Computing is my tip for 2021’s most meaningful development in the enterprise software world.
As the world’s software engineers come to view Corda and other blockchain platforms as just another tool in their toolkits, Confidential Computing – just like the green padlock – will begin the same journey towards adoption and ubiquity. Because with the massive benefits it offers businesses that value the privacy of their data, there’s no way it can be held back.
And even though it’s abundantly obvious to me that it will be the table-stakes for anybody processing other people’s data in a few years’ time, it’s also the case that those who master it in 2021 will enjoy an amazing period of competitive advantage when they’re the only ones in their industry who can make data security promises to their customers that their competitors could only dream of.
There’s a new version of Conclave out and about, with a whole lot of exciting new features. Writing secure enclaves in JVM languages just got easier, so download the SDK and let’s go over what’s been added.
Easy deployment to Azure
Azure probably has the best support for SGX of all the clouds right now, and in Beta 4 we added full support for it. Conclave now works out of the box on Azure Confidential Compute VMs, and without any need to get an approved signing key: you can self sign enclaves and go straight to ‘release mode’ on Azure. With this new support you can go from code complete to fully encrypted RAM in the cloud in five minutes flat. Read how to deploy your app to Azure to see how simple it is.
This work is enabled by our new support for Intel’s DCAP remote attestation protocol. DCAP comes with a variety of advantages, one of which is that your server no longer depends on Intel’s servers to start up. Instead your host contacts caching proxies run by your cloud provider to obtain the necessary data at first start, and thus has the same availability as the cloud itself.
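In practice this means a host running on an Azure Confidential Compute VM can start with DCAP directly. A minimal sketch, assuming the beta 4 API (the enclave class name is a placeholder):

```java
import com.r3.conclave.host.AttestationParameters;
import com.r3.conclave.host.EnclaveHost;

public class AzureHost {
    public static void main(String[] args) throws Exception {
        // No Intel-issued keys or SPIDs needed: DCAP attestation data is
        // fetched from the cloud provider's caching proxies on first start.
        EnclaveHost enclave = EnclaveHost.load("com.example.MyEnclave");
        enclave.start(new AttestationParameters.DCAP(), mailCommands -> { /* handle mail */ });
        System.out.println(enclave.getEnclaveInstanceInfo());
    }
}
```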
Smoother developer experience
We’re fanatical about developer experience here at R3 Towers, so this release continues our tradition of polishing the usability of the SDK.
Enclaves are primarily a Linux thing, so you previously needed Linux to compile your enclave for deployment. In Beta 4 the Gradle plugin automatically creates and uses a Docker container for building the enclave, which means compiling them on Windows and macOS now works transparently. You don’t need to do anything except installing Docker, and no Docker knowledge is required. If you like you can re-use the same container to execute your app in a Linux environment for testing. An explanation of how to do that is in the docs, and making that even more automatic is another way we plan to polish the development cycle in future.
We’ve simplified the enclave calling API. The EnclaveCall interface was unnecessary complexity, so we eliminated it. The Mail API was tweaked to make it easier to bind to network connections. Finally, the new remote attestation support allowed us to simplify startup: “attestation keys” and “SPIDS” are no longer required. The Hello World tutorial has been refreshed with this now unnecessary complexity removed. You’ll still need to deal with the older EPID protocol and its requirements if you want to deploy to hardware that isn’t of the latest generation—Conclave continues to support that.
System.currentTimeMillis now grants access to the host’s clock with enhanced side channel protections by avoiding an OCALL.
Our module dependencies are better isolated, so you can now use a different version of Kotlin and other libraries to the ones we use to develop the framework itself. This means you can now use Conclave with R3’s Corda blockchain platform. A sample showing this in action is under development.
A new EnclaveHost.capabilitiesDiagnostics API returns a wealth of detailed technical data about the machine’s CPUs, useful for debugging deployments and filing support tickets with R3.
We’ve improved Math.random and related non-secure random number APIs. In standard Java these RNGs are initialized from the system clock, but in an enclave that’s controlled by the host. Conclave now initializes these RNGs from the secure on-chip source of random entropy. The SecureRandom API has already used this source for some time.
The usual assortment of improved error messages, better API naming etc.
Automatic blocking of downgrade attacks
Mail is Conclave’s solution for both async messaging and data storage. In beta 4 we upgraded it to transparently integrate with SGX’s defense mechanism against downgrade attacks (the ‘security version number’).
A downgrade attack occurs when a security problem is found either with your enclave or the support infrastructure and the problem is fixed, but nothing forces the host to stay upgraded. It means the host is able to run the latest secure version when the user is submitting data and then downgrade it to a vulnerable version afterwards to gain illegitimate access. The SGX key derivation function allows enclaves to calculate the keys for their own version and any prior version. That means old enclaves can’t calculate the keys for new enclaves, but vice-versa works.
Normally you would need to explicitly manage “sealing keys” and this downgrade mechanism. In Conclave beta 4 mail identifies which version it was intended for and the infrastructure automatically calculates the correct key to decrypt it.
This allows you to restart an enclave without disrupting conversations taking place with clients, but it’s especially useful when mail is used to store persistent data using the ‘mail to self’ pattern. New enclaves can decrypt old data, and as that data becomes steadily replaced with newly encrypted mails, the system’s security re-seals itself.
SGX protects the state of a thread so the host can’t tamper with it. For that to work the enclave must reserve the memory up front for a fixed number of ‘thread slots’, into which the CPU can save and restore thread data.
This raises some questions:
What happens if you run out of thread slots?
How to set the number of slots you have?
How are host threads mapped to enclave threads?
In Beta 4 Conclave handles this complexity for you. If a host thread tries to call into an enclave or deliver mail whilst all the slots are taken by other threads, the host will wait until a slot frees up.
In rare cases you may accidentally write software that can deadlock due to this limit on simultaneously active threads, e.g. a call into an enclave starts a second thread and then waits for it to complete, which may block forever if the enclave has run out of slots. Conclave will now detect such deadlocks and abort the enclave with a helpful error explaining how to add more.
We re-read and refreshed the entire documentation website for this release, with new content and improvements on nearly every page. For example, we’ve not only made SGX threading significantly easier to use but also written new documentation on threads with diagrams to teach you what’s going on behind the scenes.
You can learn how to configure the threading support and many other things in our new configuration guide. This takes you through the different options available to tune the enclave at build time. The defaults are provided along with explanations of each setting.
The FAQ has new content based on questions we see often from new users: check it out!
Download Conclave Beta and get started
Conclave Beta 4 is available for non-production and evaluation use. Download the SDK today!