From The Blog

June 29, 2021

Conclave 1.1: Writing Confidential Computing Apps Just Got a Whole Lot Easier

We’re excited to announce the release of Conclave 1.1, the latest version of our platform that allows fast, simple development of Confidential Computing applications in Java, Kotlin or JavaScript. With this release we have really focused on developer experience, ensuring that we have a good range of sample projects covering a diverse set of applications including machine learning, as well as making big improvements to testing and debugging workflows through use of our improved ‘mock’ mode.

Furthermore, Conclave 1.1 has been tested on the latest 3rd Gen Intel Xeon Scalable processors, allowing for up to 512GB of encrypted memory per CPU.

Before we look at the new features, let’s recap some of the details of Confidential Computing and Conclave.

What problems are we solving?

When it comes to hosting a solution in the cloud, who do your customers trust with their sensitive data? Do they trust you as a software vendor? Do they trust the cloud service provider hosting the application? Do they need to trust that other customers will not get access to their data, or gain a competitive advantage through unauthorized use of that data?

Well, the good news is that with Confidential Computing, they do not need to worry about anyone using their data for any purpose other than those they approve and authorize.

The way this is achieved is via the use of a hardware-based Trusted Execution Environment (TEE) such as Intel SGX that isolates the code and memory used to process confidential data from the rest of the application. The software vendor can cryptographically prove that when a customer sends data to the application it can only be accessed inside an up-to-date secure TEE executing code that has been approved by the customer themselves.

Now, this sounds simple but in reality there are a lot of things to consider. How does the customer know the data is being processed by a real hardware TEE? How do they know what code is running inside the TEE? What happens if a vulnerability is found in the TEE implementation? How does the software vendor separate the business logic from the sensitive data processing?

There are lots of Confidential Computing domain-specific concepts to understand when solving these problems. Does that mean you need to be an expert in these concepts before you can implement and deploy your Confidential Computing application?

No! Not when using Conclave!

Conclave hides all of this complexity from the developer. You just develop a Java application as normal, making sure to keep your data contained within the ‘enclave’ part of your application. Conclave makes it really easy for the software vendor and the vendor’s customers to check that the enclave is running in an Intel SGX TEE and that the code running inside the enclave is exactly as expected.

As soon as the data leaves the customer environment it is encrypted in transit, at rest and most importantly, in use.

Let’s take a look at those new features in Conclave 1.1 then. We have made a number of improvements that make it easier for developers to get started, develop and test their Conclave applications.

Mock Mode

Firstly, we’ve completely redesigned the way ‘mock’ enclaves work. Conclave 1.0 included a special way of building your enclaves named ‘mock mode,’ which allows you to build and run your enclaves without ever leaving a Java environment. However, with Conclave 1.0 you needed to write code specifically to take advantage of ‘mock mode.’

With Conclave 1.1, ‘mock mode’ has now been fully integrated into the SDK, meaning you can switch between your production build and your mock build with a simple build parameter.
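For example, to build and run the host with a mock enclave you just pass the mode on the command line:

./gradlew -PenclaveMode=mock host:run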

One of the challenges when working with Intel SGX is in testing for scenarios that relate to the state of SGX itself. If a vulnerability is found within SGX, Intel quickly sends down an update to the Trusted Execution Environment. This causes all the encryption keys used by enclaves to be rotated, meaning that any secrets encrypted with the latest version of SGX cannot be read by the potentially vulnerable older version.

But how can a Conclave user test that this is indeed the case?

Well, with the new mock mode, Conclave makes it really easy to simulate changes to the SGX environment including version upgrades and downgrades, allowing tests to be written that check everything works correctly when this happens in a real SGX environment.

R3’s Sneha Damle has written a great blog on the latest mock features in Conclave 1.1.

Samples

Check out our new samples repository! Here you’ll find some great new samples including:

    • A sample showing how to use the Tribuo Java machine learning library in a Conclave enclave providing tools for classification, regression, clustering and model development
    • A sample Event Manager which gives a demonstration of how to host multi-party computations

In addition, you’ll find the CorDapp sample that is bundled with the Conclave SDK has been revamped to show how to integrate Corda network identities with Conclave.

Documentation

We’ve made loads of improvements to the Conclave documentation, ensuring it is accurate, easy to follow and generally really useful. The API documentation has also been given a facelift.

What else?

In addition to all the above, we’ve made loads of small improvements and fixes to make the developer experience better than ever. We hope you’ll agree that our hard work really does make it easy for you to write Confidential Computing applications.

Why don’t you try it out for yourself today?

Download Conclave 1.1 and see just how easy we have made it for you!

June 29, 2021

To Mock Or Not To Mock An Enclave

It is an exciting time: Conclave 1.1 is out and loaded with a bunch of new features and enhancements. In this blog I am going to talk about when you should compile and load an enclave in mock mode and how to do it, thus simplifying the development and testing when writing enclaves. Let’s start by downloading the SDK and the hello-world sample included within it.

Now, let’s talk about what an enclave is and how we use Conclave to build enclave applications. Then, we can explore the different modes you can use when building and running an enclave using Conclave. Finally, we’ll talk about ‘mock’ enclaves.

What is an enclave?

An Intel SGX enclave is a protected region of memory where the code executed and the data accessed are isolated in terms of:

    • Confidentiality: no one has access to the data
    • Integrity: no one can change the code and its behavior

No one can access this memory: not even processes running at higher privilege levels, the operating system, or the owner of the machine.

What is the role of Conclave when writing an SGX enclave?

Conclave builds on top of SGX, making it easier to develop enclaves in high-level languages, like Java or Kotlin.

An enclave is built and loaded as a shared object on Linux. When using Conclave, this is managed automatically for you. You develop your application in two parts: the ‘host’ and the ‘enclave.’

The host runs outside the enclave and is responsible for loading the enclave and connecting it with the outside world. The enclave is where you put your code that handles data, keeping it locked away and safe in a small, controlled environment.

The command shown here is an example of how to load and start a Conclave application on a Linux machine:

./gradlew host:run

But what if you’re using a Mac or Windows platform?

How to load enclaves on Mac or Windows

Conclave provides you with a container-gradle plugin, which provides a Linux environment for building your enclave using a Docker container. You can compile, load, start, and then run a host and enclave using the container-gradle plugin command.

./container-gradle host:run

When building an enclave for Intel SGX, Conclave compiles the code down to a native binary using the native-image capability in GraalVM. The resulting enclave benefits from a faster startup time and lower runtime memory overhead than if we had embedded an alternative JVM inside the enclave.

You might be wondering how to run an enclave if you don’t have an SGX-enabled CPU. This takes us to our next point.

Running an enclave in simulation mode

Conclave provides a ‘simulation’ mode for running enclaves on systems that don’t have an Intel SGX capability. Building and running Conclave applications in simulation mode uses the same process as production enclaves, except that you can run your application on a system that does not support SGX. However, simulation mode still needs to run on a Linux platform.

Again, the container-gradle script comes to our rescue on non-Linux platforms, allowing us to load our simulation enclave using a Linux docker container. If you build the ‘hello-world’ sample provided with the Conclave SDK, then by default when you run the container-gradle script, Conclave loads the enclave in simulation mode.

This mode still compiles your code down to a native binary, which can result in fairly lengthy build times. You might be wondering: is there another mode for developing your enclave that keeps build times as short as possible and makes debugging as easy as possible?

Running an enclave in modes other than simulation

Enclaves can also be built and run in debug and release modes. Both require an SGX-enabled CPU.

Debug mode lets you run your enclave code in a real SGX environment but with certain debugging features enabled. For instance, if you write to the console inside an enclave in debug mode, it will be shown in the console output. It is also possible to connect a debugger to a debug mode enclave; however, at present it is not easy to step through Java code in this mode. Although debug enclaves run in a real SGX environment, the debug capability opens the door to accessing data inside the enclave, so debug mode enclaves should never be used in production!

Use release mode enclaves when you want to deploy your enclave to production. Release mode enclaves give you the full protection provided by Intel SGX. Unlike with a debug mode enclave, any writes to the console inside the enclave never leave the enclave, which prevents you from accidentally leaking data via the console.

To build an enclave in debug or release mode on non-Linux systems, specify the mode in the container-gradle command:

./container-gradle -PenclaveMode=debug host:build

For Linux systems, use the below command:

./gradlew -PenclaveMode=debug host:build

So far, we have covered building an enclave in release, debug and simulation modes.

Now what if you want to test your enclave logic quickly, without the need to convert the enclave code to a native image? This is where mock mode comes into the picture.

How to run an enclave in mock mode

With the Conclave 1.1 release you can now load your enclave in mock mode. This means the enclave is loaded in the same JVM as that of the host, but no native image is created, no SGX-enabled CPU is required, and the build time is also reduced drastically. In mock mode, calling an enclave function is similar to calling a plain Java function. This is useful when you are continuously changing the enclave logic in the development phase and want to quickly test it. To run your enclave in mock mode, use the same script and provide “mock” as a parameter.

Unlike when using simulation, debug or release modes, the below command will work on macOS and Windows, as well as Linux. You do not need to use the container-gradle script for running mock mode enclaves on non-Linux platforms.

./gradlew -PenclaveMode=mock host:run

Unit testing your Enclave Module using Mock Enclave

To test the enclave logic you can write unit tests in the enclave module. You need to add the host dependency to your enclave build.gradle, as the enclave instance is loaded by the EnclaveHost class present in the host module.

testImplementation "com.r3.conclave:conclave-host"

You can then create an instance of the enclave by calling EnclaveHost.load.

EnclaveHost mockHost = EnclaveHost.load("com.r3.conclave.sample.enclave.ReverseEnclave");
mockHost.start(null, null);

Conclave automatically loads the enclave in mock mode by internally checking whether the enclave class file is available on the classpath. You can obtain the enclave instance using the EnclaveHost.mockEnclave property and then access the mock enclave’s internals.

ReverseEnclave reverseEnclave = (ReverseEnclave)mockHost.getMockEnclave();

Once you have the enclave handle, you can unit test the business logic in the enclave just as you would test any simple Java function. All the business logic written in your enclave class should ideally be unit tested in the enclave module using the mock enclave mode.
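As a sketch of what such a test might look like, here is a mock-mode unit test for the hello-world ReverseEnclave (which sends back its input reversed); adapt the class name and assertions to your own enclave:

import com.r3.conclave.host.EnclaveHost;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class ReverseEnclaveTest {
    @Test
    public void enclaveReversesItsInput() throws Exception {
        // Loaded in mock mode because the enclave class is on the test classpath.
        EnclaveHost mockHost = EnclaveHost.load("com.r3.conclave.sample.enclave.ReverseEnclave");
        mockHost.start(null, null);
        // In mock mode this call is a plain, in-process Java method call.
        byte[] response = mockHost.callEnclave("Hello".getBytes());
        assertEquals("olleH", new String(response));
        mockHost.close();
    }
}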

If you run the below command, by default the enclave is loaded in mock mode.

./gradlew enclave:test

Unit testing your Host Module using Mock Enclave

When you want to test your enclave in simulation mode or on real SGX hardware (debug or release mode), it makes sense to write an integration test in the host module. Use the below command to run the host tests; by default, the enclave is loaded in simulation mode:
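./gradlew host:test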

You can also take advantage of the fact that an enclave can be loaded in mock mode from the host test class, as shown by the below command. That said, when writing integration tests from the host, you would usually want to load your enclave in simulation mode or on real SGX hardware.

./gradlew -PenclaveMode=mock host:test

MockConfiguration

Sealing is the ability to encrypt and save data outside the enclave and make sure that only the appropriate enclave can decrypt it. When you build an enclave, the resulting enclave can be identified by two ‘measurements:’

    • MRENCLAVE is a hash of the code, static data and layout of the enclave binary and ensures the enclave cannot be modified before it has been loaded.
    • MRSIGNER is a hash of the public part of the key used to sign the enclave.

To encrypt the data, a key is used which is derived either from MRENCLAVE or MRSIGNER, and also version numbers for the software revocation level and the SGX TCB level. When data is sealed using MRENCLAVE, only that enclave can decrypt the data, hence updates to the enclave will leave the data useless. Sealing data to MRSIGNER allows different enclaves to access the data and also allows for enclave upgrades. Enclaves with higher revocation levels or TCB levels can read data sealed to lower levels, but enclaves with lower revocation or TCB levels cannot read data sealed by higher levels. You can test this behavior by passing in the different levels to a mock configuration as shown below.

MockConfiguration mockConfiguration = new MockConfiguration();
mockConfiguration.setRevocationLevel(1);
mockConfiguration.setTcbLevel(2);
mockConfiguration.setProductID(1);
EnclaveHost mockHost = EnclaveHost.load("com.r3.conclave.sample.enclave.ReverseEnclave", mockConfiguration);

Note:

Revocation level is the mock revocation level of the enclave; it is incremented when a security issue in the enclave or Conclave code is fixed.

TCB level is the current security state of the CPU platform identified by Intel. This also includes the CPU SGX microcode version.

Want to learn more?

Below are some helpful resources to learn more about Conclave and Confidential Computing:

June 25, 2021

Confidential Computing: What It Is and Why It Matters for Insurance

It’s long been said that data is the new gold. It’s true that modern companies are data driven and data rich. In fact, it’s been reported that Facebook has over 50,000 data points on each of its 2.6 billion monthly active users, and Mastercard has over 13,000 data points on individual consumer behavior, global trade and every rung of commerce in between. “The data, and how we work with the data, is as important as the transactions themselves,” says Mastercard’s President of Operations and Technology, Ed McLaughlin.

Insurance is no different. Without data, insurance would not exist. From the very beginning of the modern insurance industry, data was captured, processed, and shared to enable risks to be understood, priced and transferred. Modern day insurance is no different—data is at the core of insurance decision making, except now insurers have access to more data than ever before. In fact, entire business models have grown up to support the insurance industry to process data and extract insights, making sure insurers provide customers with relevant, affordable, and sustainable products, as well as manage their own business.

However, the challenges around protecting data are complex and onerous. Take, for example, GDPR: the UK’s Data Protection Act 2018 contains 354 pages of regulation concerning the processing of information relating to individuals. And the penalty for GDPR violation? A maximum fine of £17.5 million or 4% of annual global turnover, whichever is greater!

Large and notable fines include:

    1. Google (€50m/£43.2m). Google was one of the first companies to be hit by a substantial GDPR fine of €50m in 2019.
    2. H&M (€35.3m/£32.1m)
    3. Tim – Telecom Italia (€27.8m/£24m)
    4. British Airways (£20m)
    5. Marriott International Hotels (£18.4m)

And, according to research from DLA Piper, across Europe in the 12 months leading up to January 27, 2021:

    • GDPR fines rose by nearly 40%
    • Penalties under the GDPR totalled €158.5 million ($191.5 million)
    • Data protection authorities recorded 121,165 data breach notifications (19% more than the previous 12-month period)

Beyond regulation, companies have internal governance and controls relating to data to ensure their own valuable intellectual property remains protected. Data is their lifeblood, used to make daily business decisions or drive key strategic initiatives to benefit customers, employees, and shareholders. In short, data really is gold.

So, how do companies ensure data is protected and will not be misused? The answer: “soft” policy controls.

Internally, companies deploy vast resources to ensure policies and procedures are developed and maintained to ensure everyone acts to keep data protected. The policy controls require all individuals in the organization to “follow the rules” with annual/periodic training and/or certification. Remember your last data protection training course?

These “soft” policy controls, often wrapped up in contractual terms and conditions, are also used when companies send data to 3rd party services that provide analytics.

These are so called “soft” policy controls as they rely on individuals following the rules. This is very difficult to validate, monitor and control, especially with independent 3rd party companies.

On a technical level, data can be kept secure when it’s stored using encryption (data at rest). And when the data is transmitted? That’s encrypted, too (data in transit). But how is data kept secure when it’s being “processed?” And how do you know data is being processed in the “right way?” This has until now been a significant weak point because even with the “soft” policy controls there has been no way of knowing if data is being used in the way the owner of that data intended, and no way to prove that the “processor” can’t see the data or won’t misuse it.

Take the example of filling out an online loan application. You enter the data on the webpage, then the data is sent to the loan provider in a secure encrypted form and stored in their database that’s also encrypted. But you have absolutely no way of knowing how your data is being processed, who it’s being shared with, or that it is secure when being processed. And what applies to your personal data when applying for a loan also applies to your company’s data when sharing it with 3rd parties, be it data analytics service providers, industry bodies, peers, or regulators.

However, new technology known as confidential computing is now available to solve this problem. By closing the loop and providing true end-to-end data privacy, confidential computing ensures that data is strictly isolated during processing and that the data is only analyzed in the agreed-upon way. The data being processed is invisible and unknowable to anything or anyone else, including the operator of the service and hardware. Cryptographic proof is provided confirming your data is being processed only as agreed.

This new way to process data is enabled by enhancements in hardware from leading manufacturers such as Intel. This new hardware technology, called a Trusted Execution Environment (TEE), brings in a level of control that goes far beyond the “soft” policy controls used historically: hardware controls that provide technical assurances that the data will be used only as intended. We will go into the technical details in a subsequent blog.

The end result? Individuals and companies can now process and share their precious and valuable data, safe in the knowledge that it is kept secure in transit, at rest, and now, in processing. Crucially, the data is only processed in the agreed way with very secure hardware assurances.

For insurance, the implications of this are numerous:

    1. Insurers can keep sensitive data in more secure environments and protect it from hosting providers or insider threats.
    2. Inbound: Insurers can prove with certainty that they are processing sensitive customer data only in the agreed way (e.g. for the purposes of risk assessment, fraud detection, or claims payments).
    3. Outbound: Data can be handed over to a 3rd party for analysis confident in the fact that the 3rd party cannot see the raw data and that the data is analyzed in the agreed way.
    4. Data can be pooled with competitors for industry benchmarking or fraud purposes where no one can see the raw data.

In this blog post, we have introduced the new field of hardware controls to secure data during processing and how this can apply to insurance.

In our following blogs, we will discuss the technology behind these hardware controls, Trusted Execution Environments (TEEs) & remote attestation, along with specific insurance use cases and solutions.

Want to learn more?

Below are resources to learn more about Conclave and Confidential Computing:

June 22, 2021

How familiar are you with secure computing?

How familiar are you with secure computing? Have you used an enclave before? If you haven’t, then this is the right blog post for you.  Launching a secure service can be a daunting task. You’ve got to make a lot of decisions about system design, infrastructure, data storage policies, regulatory barriers…and of course, cost. It seems like there’s very little a software engineer can trust when it comes to developing technical infrastructure.

It seems like every month we’re made aware of new leaks and vulnerabilities in foundational technologies like SSL. This isn’t limited to specific organizations or application designs either. The Linux kernel has had over 60 vulnerabilities in 2021, and there’s been another VMware server vulnerability as I write this article. If you’re threat modeling for sophisticated actors or foreign government operations, it’s probably time to think about swapping your architecture for secure computing platforms. Depending on the circumstances, if an attacker can corrupt the operating system, or worse, the kernel, they can control the computer.

In a perfect industry, these technical issues would be the only ones to worry about—but they are only part of the problem. As developers, it’s easy to think about programs the way that a mathematician might. We have confidence in the software we write because we test it thoroughly. But as we saw above, data can be taken, protocols can be breached, and information can be leaked.

For example, in the securities market, many banks operate “dark pools,” essentially private exchanges for large market participants. One risk to those participants is that the hosts of some of these exchanges will front-run trades in violation of SEC rules. Even with a contract in place, the trust is with the institutions, and that trust is not always well-founded.  The truth is the hosts of certain kinds of data have incentives to misuse it. The computing world has only recently started to grapple with this problem, which is where tools like Conclave come in.

What is an enclave?

An enclave is a secure computing environment for the safe processing of data. Enclaves can verify exactly what algorithm will process the data they’re given. Imagine an ideal CPU that runs only programs matching the exact hash of the software they were prepared with. In addition, the data going in and out of this enclave is encrypted. The host machine couldn’t read it even if it wanted to.

This special CPU enclave works just like a normal Intel processor with the same set of instructions. In this case it’s implemented by Intel’s Software Guard Extensions (SGX).

Having a secure system like this is great, but it’s not useful to your users if you still must be trusted to host and maintain the system. But what if you could also prove to your users that the code you are hosting does exactly what you claim it does? This is possible for enclaves that support verifying the cryptographic hash of the entire body of code running within the enclave. Conclave accomplishes this through remote attestation, enabling users to have complete confidence in the software they’re using.

What is the enclave guarding against?

The primary uses for an enclave are instances where the host operating system can’t be trusted, or where an entity maintaining a computer for a particular use case can’t be trusted.

Enclaves protect against both external attacks over the web and attacks from “trusted” places like the kernel, operating system and the hypervisor. The CPU is in theory the only place safe from compromise at every level of the computing stack, and there’s not much that users can do other than switch to software that puts security at the top of its list.

The concept of the enclave assumes that the entire computing environment in which it will be run is hostile. The design of the tool is such that it can’t be run unless it’s within a specific CPU-level enclave (such as Intel SGX).

Physical security and side-channel attacks

A side-channel attack is an attack that gains information from the implementation of a computer system instead of leveraging weaknesses in the software itself. It may not come as a surprise that physics plays a crucial role here. Common attacks include cache attacks, power monitoring, timing attacks, acoustic cryptanalysis and more.

One of the most fascinating examples of this I’ve come across is that theoretically, it’s possible to get into a secure enclave if you have the physical chip and try to read the memory with an electron microscope, but this approach would require serious effort and potentially spoil the data. Of course, the attacker must have the physical device. Ideally, when the chip is manufactured, it creates its own key and uses that for all of its encryption going forward.

We don’t think about this much on the software side, but when it comes to the physical device itself you may wonder how an enclave is secured. When data is stored in an enclave it’s encrypted in memory, and only that enclave contains the key to read it. What a lot of hardware manufacturers do is design the chip so that it wipes any data on it once it’s physically tampered with. The idea is to store critical information in battery-backed static RAM so it spoils when tampered with or powered off.

You can see more information on this kind of hardware security on page 5 of this Microsemi design document. While I’m no expert, I’d imagine the approach of other semiconductor manufacturers is similar. If you’re curious to learn more about side channel attacks, I’d recommend reading the Conclave docs.

What is Conclave?

Conclave introduces a set of abstractions to enable a developer to interact with an enclave, along with a toolkit for development and emulation of secure computing applications that run within enclaves.

Conclave gives developers some basic abstractions within the architecture. To paraphrase the docs:

  • Clients exchange encrypted messages with enclaves by interacting with the host over the network.
  • Host programs just load enclaves. From a security perspective hosts are assumed to be malicious at all times.
  • Enclaves are classes that are loaded into a dedicated sub-JVM with a protected memory space, running inside the same operating system process as the host JVM. Because they don’t share heaps, the host may only exchange byte buffers with the enclave. Direct method calls also don’t work out of the box: that would require you to add some sort of RPC system on top. In this way it’s similar to interacting with a server over a network, except the enclave is fully local.

With all of this in mind, the architecture diagram below should start to come into focus:

Conclave architecture diagram

Where do things go from here?

This is one of the first tools of its kind to be made available for developers. You can use Conclave to build out secure computing applications that have never existed before.

A classic example of this kind of problem is Yao’s “millionaires’ problem,” a staple of secure multi-party computation. Imagine we wanted to write a program that would tell us which of the two of us has more money. Assuming we didn’t want to reveal our numbers, that would actually be quite tricky. One of the nice things about Conclave is it can make solving this kind of problem quite trivial. The entire enclave is simply a machine that compares two numbers. The enclave can be run by either of us, but our numbers are never revealed to each other, only the answer.
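As a toy sketch of that idea, assuming a Conclave-style Enclave subclass (the exact receiveMail signature can vary between SDK versions):

import com.r3.conclave.enclave.Enclave;
import com.r3.conclave.mail.EnclaveMail;
import java.nio.ByteBuffer;

public class RicherEnclave extends Enclave {
    private Long firstAmount; // remembered inside the enclave, never revealed

    @Override
    protected void receiveMail(EnclaveMail mail, String routingHint) {
        long amount = ByteBuffer.wrap(mail.getBodyAsBytes()).getLong();
        if (firstAmount == null) {
            firstAmount = amount; // first party's number: store it, reveal nothing
        } else {
            // Only the verdict leaves the enclave, never the amounts themselves.
            // (A fuller version would reply to both parties.)
            String verdict = amount > firstAmount
                    ? "The second party has more money"
                    : "The first party has more money";
            postMail(postOffice(mail).encryptMail(verdict.getBytes()), routingHint);
        }
    }
}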

This is only an instructive example. You can actually write fully trustable client-server programs where you can be sure that the host of a product or service will only use your data in exactly the way you expect. For example, you can isolate private keys inside the enclave instead of storing them in the regular filesystem. Another popular example is private machine learning. Hospitals and other medical groups would be able to exchange medical data to build more robust statistical models without leaking any private or identifying patient information. You can also have provably private searches, such as searching a database where a query is only handled by the enclave, and even the operator of that machine wouldn’t be able to know what the query was.

The use cases really are fascinating, and we’ve only just scratched the surface of what’s possible. If this kind of computing sounds interesting to you, I highly recommend downloading Conclave and trying it for yourself.

Want to learn more?

Here are some additional resources you might find helpful:

June 18, 2021

Supply Chain Resilience using Confidential Computing

The pandemic introduced the global public to the very real problems of supply chain continuity. It also changed the way that supply chains approach resilience.

Together, these changes are leading to some fundamental shifts in the technologies being used for supply chain management. Of these, confidential computing is one that targets the single most important underlying problem in supply chain management: how to process business-sensitive data in use. By using confidential computing, it is now possible to protect sensitive data throughout all stages of its lifecycle, as well as provide technical assurances that it can’t be misused. This opens up several opportunities for the supply chain industry to build and take advantage of new collaborative solutions.

To make this concrete, we will go through two categories of use cases: aligning suppliers and managing products.

Resilience doesn’t come from working harder

Crisis is not new to supply chain management. Earthquakes, port strikes, road closures, cyberattacks and cranky customs officers are par for the course. Normally, these exceptions are local, and so are the solutions. In the beginning of the pandemic, this is exactly what happened. The explosion in job openings for supply chain professionals indicates that many companies immediately responded by working harder.

But as the pandemic has stretched into 2021, supply chains are looking at ways to work smarter.  Unfortunately, this isn’t always easy to do. The old and outdated systems currently being used aren’t designed to establish trust between counterparties or verify that an organization’s data will be protected. However, with the advent of confidential computing, it is now possible to build systems that aggregate data across multiple parties to build new solutions that align supply chain partners.

Below we outline several ideas on how this new collaboration can be implemented in highly innovative ways.

Use case I: Alignment With Supply Chain Partners

  • Cost reduction – Managing day-to-day operations in supplier relationships often necessitates a delicate balance between demands and incentives. Today’s supply chains experience high levels of inefficiency due to the inability to jointly manage cost. Suppliers are often unlikely to share cost data beyond contractual mandates simply because they fear price cuts. Likewise, related costs such as those incurred for transportation and logistics are often not reported, even when they could be optimized through collaboration.
  • Load tendering – The same applies when industry data is aggregated. Load tenders and cost per mile in transportation are instructive examples. Both can be obtained from a variety of sources to analyze not just who is over- or underpaid but also to identify key trends. It is further conceivable that we will witness the emergence of blockchain solutions that initiate transactions such as the procurement of transportation and warehousing services based on bids or auctions. This is a natural evolution and we already observe the emergence of load matching platforms and data aggregators.
  • Scorecarding – Monitoring supplier performance, or Scorecarding, is typically performed in ERP systems today, but it is hard to obtain data that allows for direct comparisons between different suppliers. If scorecards could be securely shared not just across all suppliers of a manufacturer but between several manufacturers, an entirely different picture of supplier performance would likely emerge. It would be possible to extract best-practices that all parties could directly benefit from.

Use case II: Managing Products and Demand

  • Secure data collection – An obvious example where collaboration is easy to achieve is the management of product features, observation of consumer behavior and collection of usage data. Confidential computing allows manufacturers to securely collect data about product usage in the field without intrusion of privacy or data leakage to improve the products they build. The resulting insights are incredibly valuable, especially when they are shared among all relevant parties in a supply chain.
  • Collaborative planning – Today, forecast accuracy typically ranges from 25% – 75%. Demand planning becomes substantially harder the farther we move away from the point of sale, as it is impossible for upstream suppliers to anticipate demand without downstream market knowledge. Leveraging confidential computing means everyone can pool data to derive accurate forecasts while remaining confident they aren’t giving away their competitive advantage.  This brings down inventory levels across the entire chain while also optimizing costs. The more parties that participate, the more benefits there are for everyone.
  • Inventory planning – Suppliers can also record planning, order, inventory, and production data securely on a blockchain and aggregate the data across multiple organizations on a secure confidential computing platform. All participants are assured confidentiality and anonymity in this way while they benefit from the results of analysis. The key to such a solution is that it must be trusted by all parties and guarantee that the underlying data is inaccessible to everyone, including malicious actors.

Can You Afford to Wait?

In 2021, we’re already referring to “the time before.” Solving local exceptions with brute force is part of that time. The emergence of confidential computing as a feasible technology for data sharing and processing means that supply chain management can evolve on a more fundamental level than ever before.

The use cases described here are just a peek at some of the changes that are happening as industry dynamics change with technology. The question may not be whether you can afford to get started today, but rather whether you can afford to wait any longer.

Want to learn more?

Here are some helpful resources to learn more about Confidential Computing and Conclave.

June 14, 2021

Remote Attestation: Your X-ray vision into an enclave

Confidential Computing can prove to be a game-changer in enabling multi-party computation without the risk of data leakage or tampering. It allows multiple enterprises to share confidential information and run various algorithms on it without the risk of their data being seen by each other.

If you are new to Confidential Computing or Conclave, consider taking a look at this article for a brief introduction.

Confidential Computing could lead to huge benefits in various fields. For instance, we can now develop better machine learning models thanks to the availability of bigger datasets, which was previously not possible because of the risk of the data being compromised when shared between organizations.

It all comes down to sharing your confidential data with an enclave, where it is processed and the result returned. All well and good, but how do you know that the enclave in question is really authentic?

Remote Attestation

Remote attestation is the piece of information that helps us verify the authenticity of an enclave. It is a special data structure that contains the following information:

  • Information indicating that a genuine Intel CPU is running
  • The public key of the enclave
  • A code hash called the measurement
  • Information indicating whether the computer is up-to-date and configured correctly

The most important piece of information that we are interested in here is the measurement. It is a hash of the entire module, along with its dependencies, that is loaded into the enclave.

Every time a client wants to connect to an enclave and send confidential information for processing, it must first check the remote attestation of the enclave and verify its authenticity by comparing the measurement. The remote attestation can be requested from the host.
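As a sketch of what that check can look like on the client side, assuming Conclave’s client API and using the expected measurement from the example attestation shown below:

import com.r3.conclave.client.EnclaveConstraint;
import com.r3.conclave.common.EnclaveInstanceInfo;

// attestationBytes were requested from the host over whatever transport is in use.
EnclaveInstanceInfo attestation = EnclaveInstanceInfo.deserialize(attestationBytes);

// Accept only an enclave whose measurement matches the expected code hash,
// running the expected product, in a secure (patched, non-simulation) state.
EnclaveConstraint constraint = EnclaveConstraint.parse(
        "C:DB2AF8DD327D18965D50932E08BE4CB663436162CB7641269A4E611FC0956C5F PROD:1 SEC:SECURE");
constraint.check(attestation); // throws if the enclave is not acceptable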

Below is an example of remote attestation received from the host for an enclave running in simulation mode:

Remote attestation for enclave DB2AF8DD327D18965D50932E08BE4CB663436162CB7641269A4E611FC0956C5F:
— Mode: SIMULATION
— Code signing key hash: 80A866679B567D6B27F5EF9044C13CCB057E761AB8400AD09CC8D70208579611
— Public signing key: 302A300506032B657003210052C7DFDE99D81DF7FF05A2EBED5F8E25FC659A203FAFCA5B07B18CFFD3C5915E
— Public encryption key: F3F02623B55E908C556CE17A13DF385BA621E5D5BCDCDEA8E92E30D4397E0404
— Product ID: 1
— Revocation level: 0
Assessed security level at 2021-05-10T10:09:08.107702Z is INSECURE
– Enclave is running in simulation mode.

Conclave was developed so that any two builds of the same source code should always produce the same measurement. Thus developers can either generate the measurement themselves or rely on a trusted third-party service provider to provide the measurement of the enclave.

Since any update to the source code would change the measurement, it is guaranteed that the enclave does exactly what it says it does.

A note on upgrades

It’s pretty evident that any upgrade to the enclave code will result in a change in measurement. This would result in failure, since the client would no longer recognize the enclave. A potential solution is to maintain a whitelist of acceptable hashes.

Alternatively, a signing key could be used: as long as the enclave is signed with that key, it can be deemed authentic.
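Both approaches can be expressed with Conclave’s enclave constraints. A sketch, with placeholder code hashes and reusing the code signing key hash from the example above:

// Whitelist two known-good builds (a list of acceptable measurements)...
EnclaveConstraint.parse("C:<hash-of-old-build> C:<hash-of-new-build> PROD:1 SEC:SECURE");

// ...or trust any enclave signed with a known key, which survives code upgrades.
EnclaveConstraint.parse(
        "S:80A866679B567D6B27F5EF9044C13CCB057E761AB8400AD09CC8D70208579611 PROD:1 SEC:SECURE");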

Want to learn more?

Here are some helpful resources to learn more about Conclave and Confidential Computing.

June 03, 2021

A New Era of Privacy-Enhancing Technology Has Arrived

The next frontier for data privacy is fast approaching: according to analyst firm Gartner, by 2025 50% of large organizations will be looking to adopt privacy-enhancing computation (PEC) for processing data in untrusted environments and for multiparty data analytics. PEC is a cross-industry advance that will cause existing data privacy models and techniques to be radically disrupted, as it offers a new approach to protecting and sharing data across parties without actually revealing that data to anyone.

The appeal of data sharing is clear: sharing data across parties holds the key to unlocking greater analytics and insights, as well as identifying risks and detecting fraud. But if this is the case, why aren’t companies sharing data more freely? The answer is this: they are concerned about the data privacy and security risks that could come from doing so.

Fortunately, the solutions to these concerns are now at hand, with the introduction of Confidential Computing and other privacy-enhancing techniques that put firms in complete control over how their data will be used. To discuss the potential of these new privacy-enhancing technologies, R3’s Chief Technology Officer Richard Gendal Brown recently hosted a webinar where he was joined by two world-leading experts in the field: Michael Klein, Managing Director for Blockchain and Multiparty Systems Architecture at Accenture, and Paul O’Neill, Senior Director of Strategic Business Development at Intel.

Setting the scene, Richard mapped out the discussion in three stages: first, by scoping out the business problems around privacy that traditional technology can’t solve; second, by looking at some of the new technological approaches such as PEC that can solve these problems; and third, by examining how these technologies can actually be applied. According to Richard, “this isn’t a future-looking phenomenon. This is a collection of technologies that can be applied right now.”

So, what exactly is the business problem? Paul O’Neill of Intel said that looking across different industries – especially highly-regulated sectors such as healthcare and finance – the biggest challenge has been the rise of “incentivized collaboration.”

“Imagine you’re a hospital administrator, and you’re going to submit sensitive patient data and healthcare records to a research firm that’s going to perform a clinical trial with the patient’s consent,” Paul explained. “You desperately want to advance medical science. But as an enterprise, you’re worried. What happens if a rogue employee at the research firm steals that data? What if the research firm is using your patient’s data in a way that they didn’t agree to? To anybody involved in privacy, that’s really, really scary.”

What’s needed is a way for firms to know that their data remains protected at all times in a way that a third party cannot observe or even copy it – which is what technologies like Confidential Computing enable. However, these technologies are perceived as complex, and a recurrent theme during the debate was how to cut through this complexity to get to the core business issues. Accenture’s Michael Klein commented: “There are many techniques to encrypt data in use. Some are completely software-based, while some are hardware-based. And we can talk [to clients] about who they are actually trusting. Are they trusting the creator of the software or the creator of the hardware? And then, what are the features that the technique enables, and how ready is it to scale? I think those questions are probably the two biggest things that we encounter as we introduce our privacy-preserving functions or computations: helping our clients to understand that these are all valid techniques, and then choose the one that’s going to best fit their scenario and also scale to meet their needs.”

There isn’t room in this short blog to go into the full richness of the debate. To experience it, click here to watch the webinar recording in-full.

Want to learn more?

Here are some helpful resources to learn more about PEC, Confidential Computing and Conclave.

  • Hear from Gartner on why PEC is a top strategic technology for 2021 in a recent report.
  • Want to learn more about Conclave? Read Richard Brown’s recent blog post titled, “Introducing Conclave.”
  • Are you an app builder? Try a free 60 day trial of Conclave today.

May 14, 2021

Solving Double Pay-out Fraud in the Insurance Industry

Insurance fraud is a huge issue for the industry, estimated to cost insurance companies and agents globally more than US$40 billion per year. A particular challenge is “double dipping” fraud, where one insured event such as a motor insurance claim is claimed twice from two different insurers. But now a solution is at hand: ClaimShare, a leading-edge fraud detection application. Built on R3’s Corda Enterprise and Conclave platforms, ClaimShare was developed through a collaboration between IntellectEU and KPMG, and recently won the global Corda Challenge: Insurtech from R3 and B3i.

Using blockchain and confidential computing technologies to help insurers collaborate and mitigate fraud, ClaimShare is generating huge interest and excitement across the insurance sector. To share more about the solution, and the benefits and opportunities it opens up, R3 recently hosted a webinar where Victor Boardman, R3’s head of insurance for EMEA & APAC, was joined by ClaimShare director Chaim Finizola and Kami Zargar, ACA Director and Head of Forensic at KPMG in Belgium.

At the beginning of the session, Victor introduced Conclave, and its revolutionary impact which springs from its ability to allow information from different companies to be pooled together for joint analysis while keeping the underlying raw information confidential. Explaining that confidential computing keeps data cryptographically secure at rest, in transit and also in processing, Victor expanded: “That means the data can be pooled between multiple insurance companies and brokers, safe in the knowledge that the underlying information can’t be seen by any party, not even the operator of the service.”

Why is such a solution needed? KPMG’s Kami took up the story: “By some estimates, insurance fraud is the second most common type of fraud after tax fraud…and it is estimated that only half of it is detected. So we have been working over several years with the insurers and regulators to define solutions that will help them detect and manage this issue within the bounds of what is possible, given that there have historically been limitations around data, etc, in terms of how to tackle this issue.”

A large proportion of the insurance fraud that’s going undetected is believed to be double-dipping fraud – and ClaimShare’s Chaim explained how the solution addresses this type of crime: “We’ve worked to create an application that would enable the sharing of claims data between different insurers to detect double dipping fraud without needing to make changes to their back office or front office systems. Previously, there has never been a technology that enabled the sharing of data in a fully compliant manner. Using Corda and Conclave as a platform was a game changer, allowing us to share data and match the sensitive data while being fully compliant.”

Chaim went on to provide the webinar attendees with a live demo of ClaimShare in action. To see this – and learn more about how ClaimShare provides insurers with a powerful new weapon in their battle against fraud – click here to view the webinar in full.

Want to learn more?

Here are some helpful resources to learn more about Privacy-Enhancing Computation (PEC), Confidential Computing and Conclave.

April 12, 2021

Privacy-Enhancing Computation: The future of cross-institutional secure data sharing

In a recent survey conducted by KPMG, 51% of banks reported a significant number of false positives resulting from their technology solutions, hampering efficiencies in fraud detection.

In an effort to reduce risk, institutions continually monitor and segment their customer data. They employ a host of different techniques to do this effectively, including software tools, precise risk scoring, behavior analytics, and more. However, they still may not get the “full picture” when it comes to a customer’s risk profile. Unknowns remain, in part because customers often have relationships with more than one bank, insurance firm or provider. On top of this, financial crime, money laundering and other fraudulent activities remain difficult to detect because criminals purposely distribute their activities across different institutions, knowing that those institutions hesitate to share their data with one another.

Companies know that data sharing could hold the key to unlocking greater analytics and insights in their fight against fraud and financial crimes. That said, it rarely happens because of an inherent mistrust in how their data will be used. Concerns around data privacy, lack of control, and fear of proprietary information getting into the wrong hands outweigh the benefits of insights that might come from peers pooling data. But what if firms could receive technical assurances that their data will not be viewed or misused as it’s being pooled and analyzed? That they would be in complete control over how it will be used?

Well, now they can.

How Privacy-Enhancing Computation (PEC) can help fight fraud

While fraudulent activities have become more sophisticated, fortunately so has the technology that has the power to stop them. Techniques like Privacy-Enhancing Computation (PEC) provide protections around data privacy, confidentiality and integrity that make different entities feel comfortable pooling data securely across company lines, to gain insights and identify risk. This means that firms will now be able to combine both data and forces to fight financial crime. In fact, a recent Gartner report highlighted Privacy-Enhancing Computation as one of the top strategic trends for 2021 that IT leaders should assess.

PEC solves the challenge that has been facing security experts for years: how to protect data as it’s being shared, analyzed and operated on, without exposing or viewing the underlying data. Protecting data in use is the next frontier of data security. Data encryption today covers data at rest and in transit but has not been extended to data in use—until now.

Encrypting data in use is critical to identifying the increasingly complex methods that are evolving to commit financial crimes because it allows firms to pool and analyze sensitive data without compromising on data privacy. In the insurance industry, for example, by using privacy-enhancing techniques, firms can pool, analyze, process, and gain insight from confidential data without exposing the underlying data itself, to identify possible fraud due to multiple claims being presented for the same insured event.

This opens up a world of possibilities across industries. This concept is simple yet was nearly impossible to execute on beforehand due to data privacy concerns. R3, IntellectEU and KPMG have leveraged this technology to deliver ClaimShare, a platform developed to detect and prevent double-dipping across insurers, allowing competing insurers to collaborate to fight fraud.

What are the types of techniques included in PEC?

There are a variety of software and hardware-based methods to protect data in use. Some examples include: secure multi-party computation, homomorphic encryption, zero knowledge proofs and trusted execution environments (TEE). Each technique tackles the problem of how to securely protect data in use, with accompanying advantages and drawbacks.

Confidential Computing stands out as a stable, scalable and highly performant solution for a broad range of use cases. By performing computation in a hardware-based TEE, it prevents unauthorized access or modification of applications and data while in use, thereby increasing the security assurances for organizations that manage sensitive and regulated data.

Meet Conclave – a revolutionary new Confidential Computing platform

Conclave is a new privacy-enhancing platform from R3 that utilizes Confidential Computing. Conclave enables the development of solutions, like ClaimShare, that deliver insight from data shared across parties without actually revealing the data to any other party, thus maintaining controls over how data is used while addressing security and compliance-related obligations.

With Conclave-enabled apps, confidential data can be pooled and processed within an Intel® SGX enclave where none of the contributing parties, nor the enclave host, can access the data. End-users can audit the application’s source code and cryptographically verify the application will run as intended before providing sensitive data for joint pooling or analysis.

As a result, end-users can be confident while sending proprietary data outside of their organization, and software providers can build more predictive financial crime risk management and compliance solutions using sensitive data from multiple firms for cross-institutional insights.

What’s next?

Here are some helpful resources to learn more about PEC, Confidential Computing, and Conclave.

  • Read the Gartner report on why Privacy-Enhancing Computation is a top strategic trend for 2021
  • Learn more about Conclave in a recent blog from R3’s Richard Gendal Brown, titled ‘Introducing Conclave’
  • Hear from R3’s Victor Boardman on how data sharing can address ‘double dipping’ in insurance claims in a recent article on Insider Engage
  • Try a free 60 day trial of Conclave today

February 24, 2021

Building our First App on Conclave

Conclave is an application development platform that helps build secure applications that can process data confidentially. I gave a brief introduction to Conclave in my previous blog. Now let’s look at how we could build our very first app on Conclave.

What are we building?

We will build a secure auction app where bids from parties would remain confidential using an enclave. All bids would be processed within the enclave and the result of the auction would be revealed, without compromising the bids submitted by each participant.

Components of an application built on Conclave

Conclave has three major components: the Enclave, the Host, and the Client. So let’s get down to business and start building each component of our application on Conclave.

Enclave

An enclave is a program that runs in a protected memory space that can’t be accessed by the OS, kernel or BIOS. Thus you can rest assured that the data sent to the enclave is completely confidential.

To build our secure auction app, we need to write an enclave program which takes bids from participants and processes them to determine the highest bid and, in turn, the winner.

Conclave provides the Enclave class, which can be subclassed to build our own enclaves. Data is sent to the enclave using the Conclave Mail API, which provides end-to-end encrypted communication between the client and the enclave.
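Here is a condensed sketch of the bid-collection side (Message, its getBid accessor and the map names are assumptions used for illustration; the exact receiveMail signature can differ between SDK versions):

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Input;
import com.r3.conclave.enclave.Enclave;
import com.r3.conclave.mail.EnclaveMail;
import java.security.PublicKey;
import java.util.HashMap;
import java.util.Map;

public class AuctionEnclave extends Enclave {
    // Bids and bidder public keys, keyed by each client's routing string.
    private final Map<String, Integer> userBids = new HashMap<>();
    private final Map<String, PublicKey> userKeys = new HashMap<>();

    @Override
    protected void receiveMail(EnclaveMail mail, String userRoute) {
        // Message is our user-defined model object carrying the bid.
        Message message = deserialize(mail.getBodyAsBytes());
        userBids.put(userRoute, message.getBid());
        userKeys.put(userRoute, mail.getAuthenticatedSender());
    }

    private Message deserialize(byte[] bytes) {
        Kryo kryo = new Kryo();
        kryo.register(Message.class);
        try (Input input = new Input(bytes)) {
            return kryo.readObject(input, Message.class);
        }
    }
}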


We use the receiveMail method to receive mail sent to the enclave. The userRoute parameter helps map mail to different clients. We keep two maps to store the user bids and their public keys respectively. Note that Message is a user-defined model object we use to transfer data.

Conclave uses GraalVM Native Image to run enclave programs, which doesn’t support standard Java serialization. It does, however, support Kryo, hence we use Kryo for serialization.

Once we have all the bids submitted, the auctioneer can ask the enclave to process the bids and send the result to all participants. The processing logic is sketched below:
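(This continues the AuctionEnclave sketch above; postOffice and postMail follow Conclave’s Mail API, and the field and method names are assumptions.)

// Auctioneer details, captured when the auctioneer first mails the enclave.
private PublicKey auctionAdminKey;
private String auctionAdminRoute;

private void processBids() {
    // Find the highest bid without revealing any individual bid.
    Map.Entry<String, Integer> winner =
            Collections.max(userBids.entrySet(), Map.Entry.comparingByValue());
    byte[] result = ("Winning bid: " + winner.getValue()).getBytes();
    // Send the result to every participant, encrypted to their own key...
    for (Map.Entry<String, PublicKey> user : userKeys.entrySet()) {
        sendMail(user.getValue(), user.getKey(), result);
    }
    // ...and back to the auctioneer.
    sendMail(auctionAdminKey, auctionAdminRoute, result);
}

private void sendMail(PublicKey recipient, String route, byte[] body) {
    // postOffice(recipient) creates an encrypting post office for that key;
    // postMail hands the encrypted bytes to the host for delivery.
    postMail(postOffice(recipient).encryptMail(body), route);
}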


We use the auctionAdmin fields to store the auctioneer’s key and routing string, which are used later to send the result back to the auctioneer. Conclave provides the Postman feature for creating mail to send between the enclave and clients; it is used in the sendMail method.

That completes our enclave, let’s look at building our host component next.

Host

Host programs are used to load enclaves, and they also serve as a proxy between the client and the enclave. Hosts are considered untrusted at all times, hence all communication between the host and an enclave is always encrypted.


The first thing we do as we initialize our host is to verify the hardware support to run enclaves.
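A sketch, following the pattern of the Conclave hello-world host:

import com.r3.conclave.host.EnclaveHost;
import com.r3.conclave.host.EnclaveLoadException;

try {
    // Ask Conclave whether this machine can run enclaves; 'true' requests that
    // SGX support be enabled if it is present but currently switched off.
    EnclaveHost.checkPlatformSupportsEnclaves(true);
    System.out.println("This platform supports enclaves.");
} catch (EnclaveLoadException e) {
    System.out.println("Hardware enclaves are not supported: " + e.getMessage());
}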


Once we have verified the platform support, we can go ahead and load the enclave program.
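A sketch (AuctionEnclave is the enclave class from earlier; sendToClient is a hypothetical helper that writes bytes to the right client connection, and the attestation parameters depend on your hardware):

import com.r3.conclave.host.AttestationParameters;
import com.r3.conclave.host.MailCommand;

EnclaveHost enclave = EnclaveHost.load("com.r3.conclave.sample.enclave.AuctionEnclave");
enclave.start(new AttestationParameters.DCAP(), commands -> {
    for (MailCommand command : commands) {
        if (command instanceof MailCommand.PostMail) {
            MailCommand.PostMail post = (MailCommand.PostMail) command;
            // The routing hint tells us which client this encrypted reply is for.
            sendToClient(post.getRoutingHint(), post.getEncryptedBytes());
        }
    }
});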


When the enclave is started, we supply a callback which is used to send enclave responses back to the client. The MailCommand object contains the response content and the routing parameter to map responses to different clients.

Once the enclave has been started, we can bring up the TCP server to begin accepting client requests.
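A minimal sketch, accepting a single client for simplicity (the port number is arbitrary):

ServerSocket acceptor = new ServerSocket(9999);
Socket connection = acceptor.accept();
DataInputStream input = new DataInputStream(connection.getInputStream());
DataOutputStream output = new DataOutputStream(connection.getOutputStream());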


We are using a simple TCP connection for communication between the host and the client, for simplicity. You could use a more sophisticated protocol such as gRPC, or whatever suits you better.

All clients must verify the authenticity of the enclave before sending it confidential information, hence we send the attestation object to clients as they connect to the host. Clients can use this information to verify the measurement and decide whether they can trust the enclave.
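A sketch of sending the serialized attestation, length-prefixed, over our TCP connection:

byte[] attestationBytes = enclave.getEnclaveInstanceInfo().serialize();
output.writeInt(attestationBytes.length);
output.write(attestationBytes);
output.flush();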


Finally, the clients can send their confidential information, which the host then forwards to the enclave.
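A sketch (clientRoute is the routing string the host associates with this connection; note that some SDK versions also take a numeric id as deliverMail’s first parameter):

// Read an encrypted mail from the client and hand it, still encrypted, to the enclave.
byte[] mailBytes = new byte[input.readInt()];
input.readFully(mailBytes);
enclave.deliverMail(mailBytes, clientRoute);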


You could take a look at the final code here.

Client

Now we are left with the final piece of the puzzle to complete our first application on Conclave. Let’s build our client.


First we try to establish a connection with the host and get the attestation object.
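A sketch mirroring the host side (address and port are assumptions):

import com.r3.conclave.common.EnclaveInstanceInfo;

Socket socket = new Socket("localhost", 9999);
DataInputStream input = new DataInputStream(socket.getInputStream());
DataOutputStream output = new DataOutputStream(socket.getOutputStream());

// Read the length-prefixed attestation sent by the host and deserialize it.
byte[] attestationBytes = new byte[input.readInt()];
input.readFully(attestationBytes);
EnclaveInstanceInfo attestation = EnclaveInstanceInfo.deserialize(attestationBytes);
System.out.println(attestation);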


For the purpose of this blog I have just printed the attestation info to the console. For real applications, however, the attestation should be verified before sending any information to the enclave.

In real-world use cases, the client would either have access to the source code of the enclave, which they can build themselves to reproduce the measurement, or use a trusted service provider to verify the attestation information.

The next step is to send the bid to the enclave.
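A sketch (Message and the serialize helper are the Kryo model from the enclave section; the topic string is arbitrary):

import com.r3.conclave.mail.Curve25519PrivateKey;
import com.r3.conclave.mail.PostOffice;
import java.security.PrivateKey;

// Generate an ephemeral key for ourselves and build a post office keyed to the enclave.
PrivateKey myKey = Curve25519PrivateKey.random();
PostOffice postOffice = attestation.createPostOffice(myKey, "auction");

// Encrypt the bid so only the enclave can read it, then send it length-prefixed.
byte[] encryptedBid = postOffice.encryptMail(serialize(new Message(bid)));
output.writeInt(encryptedBid.length);
output.write(encryptedBid);
output.flush();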


Notice that we use the Postman feature again to create our mail to be sent to the enclave. To get the bid, we read input from the user on the console.
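For example:

Scanner scanner = new Scanner(System.in);
System.out.print("Enter your bid: ");
int bid = scanner.nextInt();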


Finally, we need to write the code for reading the response from the enclave. We can use the post office’s decryptMail to decrypt the encrypted message from the enclave and read the response.
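A sketch, again assuming a length-prefixed reply on the socket:

byte[] encryptedReply = new byte[input.readInt()];
input.readFully(encryptedReply);
// Only our post office, which holds our private key, can decrypt the enclave's reply.
EnclaveMail reply = postOffice.decryptMail(encryptedReply);
System.out.println("Auction result: " + new String(reply.getBodyAsBytes()));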



Congratulations! We have successfully built our first application on Conclave.

Source code

The entire source code for this application we built is available here. If you wish to run the application, please follow our guide in the documentation here.

I hope you liked the tutorial and thanks for reading.