Application security: 4 things to know about how the Confidential Computing Consortium helps
Securing today’s applications increasingly involves dependencies that a developer doesn’t control or have direct visibility into. That’s where trusted execution environments come in.
When we talk about security, the focus often comes down to the application developer or the system administrator – or perhaps we expand the purview to a newfangled IT expert like the site reliability engineer (SRE).
Avoiding basic security blunders, such as those tracked by OWASP for web apps, is certainly important. But the reality is that securing an application increasingly involves dependencies that an app developer doesn’t control or have direct visibility into.
That’s a problem that the Confidential Computing Consortium was created to help address.
What is the Confidential Computing Consortium?
The Confidential Computing Consortium (CCC), announced in August 2019 by The Linux Foundation, is “a community dedicated to defining and accelerating the adoption of confidential computing. Companies committed to this work include Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom, and Tencent.”
The primary focus is on encrypting data while it is being used to prevent buggy or malicious code in middleware, the operating system, or other software layers between the application and hardware/firmware from exposing that data to unauthorized parties. Current cryptographic approaches address data at rest and in transit, but the third and possibly most challenging step – providing a fully encrypted lifecycle for sensitive data – requires protecting data that’s actually being analyzed or otherwise put to some productive use.
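To make that gap concrete, here is a minimal Python sketch. The cipher is a deliberately toy keystream construction (not something to use in practice); the point is only to show that with conventional cryptography, data protected at rest must appear as plaintext in memory the moment you compute on it – the exposure window confidential computing targets:

```python
import hashlib

# Toy illustration only: a keystream "cipher" built from SHA-256.
# It exists just to show where plaintext must exist in memory.

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR is symmetric, so the same call encrypts and decrypts.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"storage-key"
salary = 95000

# Data at rest: stored encrypted on disk.
at_rest = xor_crypt(key, salary.to_bytes(4, "big"))

# Data in use: to compute with it (say, a 3% raise), we are forced
# to decrypt it into plaintext memory first -- where buggy or
# malicious code lower in the stack could read it.
plaintext = int.from_bytes(xor_crypt(key, at_rest), "big")
raised = plaintext * 103 // 100
print(raised)  # 97850
```

A TEE aims to shrink that plaintext window to hardware-protected memory that the OS, hypervisor, and other tenants cannot inspect.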
A matter of trust
Backing up, we face a number of fundamental problems in application security.
On the one hand, our software and hardware stacks are increasingly complex and sourced from an ever-wider array of suppliers. In open source software and elsewhere, this means we’re often seeing more experimentation, iteration, and innovation. However, it also means that we’re seeing more rapid change – together with the inevitable, often subtle, mistakes – up and down the hardware and software stack.
At the same time, we’re seeing an increasingly sophisticated and well-funded threat environment all the way up to the nation-state level.
The key question is: How can you have confidence in your computing platform, given these conditions?
As Mike Bursell, Red Hat chief security architect, puts it: “When you run any process, or you run any application, any program on a computer, it’s an exercise in trust. You are trusting that all the layers below what you’ve written, assuming you’ve written it right in the first place, are things you can trust to do what they say they’re going to do on the tin. I’ve got to trust my middleware; I’ve got to trust the firmware; I’ve got to trust the BIOS; I’ve got to trust the CPU or the hardware; the OS; the hypervisor; the kernel; all the different pieces of the software stack. I’ve got to trust them to do things like not steal my data, not change my data, not divert my data to somebody who shouldn’t be seeing it. So that’s a lot of pieces.”
[ Read also: Why IT leaders must speak risk fluently. ]
Enter trusted execution environments
Some level of inherent trust and independent verification will probably always be needed within our computing environments. For example, hardware vendors mostly need to be trusted at some level not to install backdoors – although we can monitor networks to gain some additional confidence that they have not in fact done so. However, the technique the CCC is most focused on today is the trusted execution environment (TEE).
Initial planned open source contributions to the CCC include the Intel Software Guard Extensions (SGX) Software Development Kit, Microsoft Open Enclave SDK, and Red Hat Enarx.
Enarx is a project providing hardware independence for securing applications using TEEs. As Red Hat security engineer Nathaniel McCallum puts it: “We want it to be possible for you to write your applications using the standard APIs that you already use, in the languages you already use, with the frameworks that you already use, and to be able to deploy these applications inside any hardware technology possible.
“One of the things we realized early on was that there’s a new technology called WebAssembly, which is being used in browsers all around the world,” McCallum says. “The thing that’s really interesting about WebAssembly to us is the capabilities that WebAssembly can deliver in conjunction with the WebAssembly system API. It is almost exactly the same set of functions that you can actually do inside these hardware environments. You get to write an application in your own language with your own tooling. You can compile it to WebAssembly.”
Enarx then lets you securely deliver that application all the way into a cloud provider and execute that remotely. “The way that we do this is, we take your application as inputs and we perform an attestation process with the remote hardware,” explains McCallum. “We validate that the remote hardware is in fact the hardware that it claims to be, using cryptographic techniques. The end result of that is not only an increased level of trust in the hardware that we’re speaking to; it’s also a session key, which we can then use to deliver encrypted code and data into this environment that we have just asked for cryptographic attestation on.”
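The attestation flow McCallum describes can be sketched roughly as follows. This is a heavily simplified illustration, not Enarx’s actual protocol: real TEE attestation relies on asymmetric signatures and vendor certificate chains, whereas this toy stands in a pre-shared HMAC key for both. The shape of the exchange is the same, though – challenge the hardware with a nonce, verify a signed measurement of the loaded code, then derive a session key bound to that exchange:

```python
import hashlib
import hmac
import secrets

# Simplified stand-ins (illustrative names, not real Enarx APIs).
DEVICE_KEY = secrets.token_bytes(32)          # provisioned into the hardware
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-runtime-v1").digest()

def hardware_quote(nonce: bytes) -> bytes:
    # The TEE "signs" (nonce || measurement of the loaded code).
    return hmac.new(DEVICE_KEY, nonce + EXPECTED_MEASUREMENT, "sha256").digest()

def verify_and_derive_key(nonce: bytes, quote: bytes) -> bytes:
    # Tenant side: check the quote against the expected measurement,
    # then derive a session key bound to this attestation exchange.
    expected = hmac.new(DEVICE_KEY, nonce + EXPECTED_MEASUREMENT, "sha256").digest()
    if not hmac.compare_digest(quote, expected):
        raise ValueError("attestation failed: untrusted hardware or code")
    return hashlib.sha256(b"session" + quote).digest()

nonce = secrets.token_bytes(16)               # fresh challenge per exchange
quote = hardware_quote(nonce)
session_key = verify_and_derive_key(nonce, quote)
# session_key can now protect the code and data delivered into the TEE.
```

The nonce prevents replay of an old quote, and because the session key is derived from the verified quote itself, only hardware that passed attestation can decrypt what the tenant sends next.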
Of course, you now need to trust the TEE code. However, it’s open source and has a small footprint, which makes it a lot easier to trust than the plethora of middleware on which many applications depend.
[ Want to learn more? Check out this podcast with Gordon Haff: https://grhpodcasts.s3.amazonaws.com/enarx1908.mp3. ]
TEEs are not the end game, of course – there is never an end game when it comes to security. It often seems like a never-ending arms race between increasingly sophisticated adversaries and sometimes not-so-sophisticated defenders.
Bursell believes that trusted platform modules (TPMs) may see a comeback, noting that they’ve been around since the early 2000s but haven’t seen much use. A big part of the problem is that TPM became tightly coupled to another three-letter acronym: DRM, or digital rights management. And as Bursell notes, “DRM has long been anathema to much of the open source community.”
However, Bursell adds, “There’s a new version of TPM 2.0, which is much improved, and people are beginning to realize there’s great benefit in using them. The thing about a TPM is it’s a hardware root of trust. It’s really good for that if you need to be building up levels of trust because you can’t do everything in Enarx yet. There are times you need to build up trust, and it’s a very good building block for those sorts of things.”
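The “hardware root of trust” Bursell describes rests on the TPM’s extend operation: each boot stage folds a measurement (hash) of the next stage into a Platform Configuration Register, so the final PCR value commits to the entire boot chain in order. Here is a minimal sketch of that hash-chaining idea – illustrative only, since a real TPM performs this in tamper-resistant hardware:

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # TPM-style extend: PCR_new = SHA-256(PCR_old || SHA-256(component))
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = b"\x00" * 32                       # PCRs start zeroed at reset
for stage in [b"firmware", b"bootloader", b"kernel"]:
    pcr = extend(pcr, stage)

# Swapping in a different component -- or reordering the chain --
# produces a different final PCR, so a verifier comparing against a
# known-good value can detect a tampered boot sequence.
tampered = extend(extend(extend(b"\x00" * 32, b"firmware"),
                         b"evil-bootloader"), b"kernel")
print(pcr != tampered)  # True
```

This is why a TPM works well as a building block for “building up levels of trust”: a single 32-byte register attests to everything that ran before the software you actually care about.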
[ Learn the do’s and don’ts of cloud migration: Get the free eBook, Hybrid Cloud for Dummies. ]