Monday, September 23, 2013

Thoughts on Intel's upcoming Software Guard Extensions (Part 2)


In the first part of this article, published a few weeks ago, I discussed the basics of Intel SGX technology and the challenges of using SGX for securing desktop systems, specifically focusing on the problem of trusted input and output. In this part we will look at some other aspects of Intel SGX, starting with a discussion of how it could be used to create truly irreversible software.

SGX Blackboxing – Apps and malware that cannot be reverse engineered?

A nice feature of Intel SGX is that the processor automatically encrypts the content of SGX-protected memory pages whenever they leave the processor caches and are stored in DRAM. In other words, the code and data used by SGX enclaves never leave the processor in plaintext.

This feature, no doubt influenced by the DRM industry, might profoundly change our thinking about who really controls our computers. This is because it will now be easy to create an application, or malware for that matter, that simply cannot be reverse engineered in any way. No IDA, no debugger, not even a kernel debugger, could reveal the actual intentions of the EXE file we're about to run.

Consider the following scenario, where a user downloads an executable, say blackpill.exe, which in fact logically consists of three parts:

  1. A 1st stage loader (SGX loader), which is unencrypted, and whose task is to set up an SGX enclave, copy the rest of the code into it (specifically the 2nd stage loader), and then start executing the 2nd stage loader (see the sketch after this list)...
  2. The 2nd stage loader, which starts executing within the enclave, performs remote attestation with an external server and, if the attestation completes successfully, obtains a secret key from that server. This code is also delivered in plaintext.
  3. Finally, the encrypted blob, which can only be decrypted using the key obtained by the 2nd stage loader from the remote server, and which contains the actual logic of the application (or malware).
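
To make this more concrete, below is a minimal sketch of the 1st stage loader. All the sgx_* calls, types, and constants are hypothetical placeholders for whatever OS/driver interface ends up wrapping the ENCLS leaf functions (ECREATE, EADD, EEXTEND, EINIT) and the ENCLU[EENTER] instruction from Intel's programming reference; this is not a real API:

#include <stddef.h>
#include <stdint.h>

/* Plaintext 2nd stage loader and the encrypted payload, both shipped
   inside blackpill.exe. */
extern const uint8_t stage2_loader[];
extern const size_t  stage2_loader_size;
extern const uint8_t encrypted_blob[];
extern const size_t  encrypted_blob_size;

int run_blackpill(void)
{
    /* 1. Create an enclave (ECREATE) and populate it with the 2nd
       stage loader pages (EADD + EEXTEND). Each page added extends
       the enclave measurement (MRENCLAVE). */
    sgx_enclave_t *encl = sgx_create_enclave(ENCLAVE_SIZE);
    sgx_add_pages(encl, stage2_loader, stage2_loader_size);

    /* 2. Finalize the measurement (EINIT). Any modification of the
       loaded code would yield a different measurement, and the
       remote attestation would later fail. */
    sgx_init_enclave(encl);

    /* 3. Enter the enclave (EENTER). The 2nd stage loader attests,
       fetches the key, and decrypts the blob entirely inside the
       enclave; the blob is useless to anyone without the key. */
    return sgx_enter_enclave(encl, encrypted_blob, encrypted_blob_size);
}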

We can easily see that there is no way for the user to figure out what the code from the encrypted blob is going to do on her computer. This is because the key will be released by the remote server only if the 2nd stage loader can prove, via remote attestation, that it indeed executes within a protected SGX enclave and that it is the original, unmodified loader code created by the application's author. Should a single bit of this loader be modified, or should it be run outside of an SGX enclave, or within a somehow misconfigured one, the remote attestation would fail and the key would not be released.

And once the key is obtained, it is available only within the SGX enclave. It cannot be found in DRAM or on the memory bus, even if the user had access to expensive DRAM emulators or bus sniffers. Nor can the key be mishandled by the code that runs in the SGX enclave, because remote attestation has also proven that the loader code has not been modified, and the author wrote the loader specifically not to mishandle the key in any way (e.g. not to write it out to unprotected memory, or store it on disk). The loader then uses the key to decrypt the payload, and this decrypted payload remains within the secure enclave, never leaving it, just like the key. Its data never leaves the enclave either...

One little catch is how the key is actually sent to the SGX-protected enclave so that it cannot be spoofed along the way. Of course it must be encrypted, but under which key? Well, we can have our 2nd stage loader generate a new key pair and send the public key to the remote server – the server will then use this public key to encrypt the actual decryption key. This is almost good, except that the scheme is not immune to a classic man-in-the-middle attack. The solution is easy, though – if I understand correctly the description of the new Quoting and Sealing operations performed by the Quoting Enclave – we can include the hash of the generated public key as part of the data that will be signed and put into the Quote message, so the remote server can also be assured that the public key originates from the actual code running in the SGX enclave, and not from Mallory somewhere in the middle.
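
In code, the binding could look roughly like this. The 64-byte REPORTDATA field of the EREPORT instruction, which ends up inside the signed Quote, is real; the wrapper functions and types below are hypothetical:

#include <stdint.h>

void attest_and_get_key(void)
{
    /* 1. Generate an ephemeral key pair inside the enclave. */
    keypair_t kp;
    generate_keypair(&kp);                       /* hypothetical */

    /* 2. Hash the public key into REPORTDATA, so it becomes part of
       what the Quoting Enclave signs. */
    uint8_t report_data[64] = {0};
    sha256(kp.pub, sizeof(kp.pub), report_data);

    /* 3. EREPORT produces a local report; the Quoting Enclave turns
       it into a signed Quote. The server verifies the signature AND
       checks that REPORTDATA matches the hash of the public key it
       received -- Mallory cannot substitute her own key without
       invalidating the Quote. */
    report_t report = sgx_ereport(report_data);
    quote_t  quote  = qe_get_quote(&report);
    send_to_server(kp.pub, &quote);

    /* 4. The server replies with the payload key encrypted to kp.pub;
       only this very enclave can decrypt it. */
}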

So, what does the application really do? Does it do exactly what its author has advertised? Or does it also “accidentally” sniff some system memory, or even read out disk sectors, and send the gathered data to a remote server, encrypted, of course? We cannot know this. And that's quite worrying, I think.

One might say that we blindly accept all proprietary software anyway – after all, who fires up IDA to review MS Office before use? Or MS Windows? Or any other application? Probably very few people indeed. But the point is: it could be done, and some brave souls actually do it. It could be done even if the author used some advanced form of obfuscation. It can be done, even if it takes a lot of time. Now, with Intel SGX, it suddenly cannot be done anymore. That's quite a revolution, a complete change of the rules. We're no longer masters of our little universe – the computer system – somebody else is.

Unless there was a way for “Certified Antivirus companies” to get around the SGX protection... (see below for more discussion on this).

...And some good applications of SGX

The SGX blackboxing has, however, some good uses too, beyond protecting Hollywood productions and making malware un-analyzable...

One particularly attractive possibility is the “trusted cloud”, where the VMs offered to users could not be eavesdropped on or tampered with by the cloud provider's admins. I wrote about such a possibility two years ago, but with Intel SGX this could be done much, much better. It will, of course, require a specially written hypervisor that sets up an SGX enclave for each VM; the VM can then authenticate itself to the user and prove, via remote attestation, that it is executing inside a properly set up SGX enclave. Note how this time we do not require the hypervisor to authenticate to the users – we just don't care: if our code correctly attests that it is in a correct SGX enclave, all is fine.
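
A very rough sketch of this flow, with purely illustrative names (no such hypervisor API exists today):

#include <stdint.h>

/* On the (untrusted) cloud host: the hypervisor wraps each VM in an
   enclave, so even a malicious admin cannot peek inside. */
void host_launch_trusted_vm(vm_image_t *image)
{
    sgx_enclave_t *encl = sgx_create_enclave(image->size);
    sgx_add_pages(encl, image->bootstrap, image->bootstrap_size);
    sgx_init_enclave(encl);              /* measurement is now fixed */
    run_vm_in_enclave(encl, image);
}

/* Inside the VM: prove to the remote user that we run in a properly
   set up enclave, then receive her secrets. */
void vm_prove_to_user(const uint8_t user_nonce[32])
{
    /* EREPORT + Quoting Enclave, as in the earlier sketch. */
    quote_t q = attest_with_nonce(user_nonce);
    send_to_user(&q);
    /* The user verifies the Quote and only then hands over her keys.
       Note that the hypervisor itself never authenticates to anyone. */
}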

Suddenly Google could no longer collect and process your calendar, email, documents, and medical records! Or how about a Tor node that could prove to users that it is not backdoored by its own admin and does not keep logs of how connections were routed? Or a safe bitcoin web-based wallet? It's hard to overestimate how much such a technology might do for bringing privacy to a wide population of users...

Assuming, of course, there is no backdoor for the NSA to get around the SGX protection and ruin all this goodness... (see below for more discussion on this).

New OS and VMM architectures

In the paragraph above I mentioned that we will need specially written hypervisors (VMMs) that make use of SGX in order to protect the users' VMs against the hypervisor itself. We could go further and put other components of a VMM into protected SGX enclaves – things that we currently, in Qubes OS, keep in separate Service VMs, such as networking stacks, USB stacks, etc. Remember that Intel SGX provides a convenient mechanism for building secure inter-enclave communication channels (sketched below).
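
The primitive behind such channels is local attestation. EREPORT and EGETKEY are real ENCLU leaf functions; the wrappers below, and the details of the MAC check, are simplified guesses:

#include <stdint.h>
#include <string.h>

/* Enclave A: produce a report targeted at enclave B. EREPORT MACs the
   report with a key derived for B, so only B can verify it. */
report_t report_for_enclave_b(const targetinfo_t *b_info,
                              const uint8_t report_data[64])
{
    return sgx_ereport_targeted(b_info, report_data); /* ENCLU[EREPORT] */
}

/* Enclave B: verify A's report using its own Report Key. */
int verify_report(const report_t *r)
{
    uint8_t report_key[16];
    sgx_egetkey(KEY_NAME_REPORT, report_key);         /* ENCLU[EGETKEY] */

    uint8_t mac[16];
    cmac_aes128(report_key, (const uint8_t *)&r->body,
                sizeof(r->body), mac);
    return memcmp(mac, r->mac, sizeof(mac)) == 0;
}

/* Once A and B have verified each other, the REPORTDATA fields can
   carry Diffie-Hellman shares to bootstrap an encrypted channel. */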

We could also take the “GUI domain” (currently this is just Dom0 in Qubes OS) and move it into a separate SGX enclave. If only Intel came up with solid protected input and output technologies that worked well with SGX, this would suddenly make a whole lot of sense (unlike today, when it is very challenging). What we would win this way is that a bug in the hypervisor would no longer be critical: an attacker who compromised the hypervisor would still have a long way to go to steal any real secrets of the user, because there would be no secrets in the hypervisor itself.

In this setup the two most critical enclaves are: 1) the GUI enclave, of course, and 2) the admin enclave, although it is conceivable that the latter could be made reasonably deprivileged, in that it might only be allowed to create/remove VMs and set up networking and other policies for them, but would no longer be able to read and write the memory of the VMs (Anti Snowden Protection, ASP?).

And... why use hypervisors at all? Why not use the same approach to compartmentalize ordinary operating systems? Well, this could be done, of course, but it would require a considerable rewrite of the systems, essentially turning them into microkernels (except that the microkernel would no longer need to be trusted), as well as of the applications and drivers, and we know that this will never happen. Again, let me repeat one more time: the whole point of using virtualization for security is that it wraps up all the huge APIs of an ordinary OS, like Win32, POSIX, or OS X, into a virtual machine that itself requires an orders-of-magnitude simpler interface to/from the outside world (especially true for paravirtualized VMs), and all this without the need to rewrite the applications.

Trusting Intel – Next Generation of Backdooring?

We have seen that SGX offers a number of attractive features that could potentially make our digital systems more secure and 3rd-party servers more trusted. But does it really?

The obvious question, especially in the light of recent revelations about the NSA backdooring everything and the kitchen sink, is whether Intel will have backdoors allowing “privileged entities” to bypass SGX protections.

Traditional CPU backdooring

Of course they could, no question about it. But then, Intel (as well as AMD) might have had backdoors in their processors for a long time, not necessarily in anything related to SGX, TPM, TXT, AMT, etc. Intel could have built backdoors into the simple MOV or ADD instructions, in such a way that they would automatically disable ring/page protections whenever executed with some magic arguments. I wrote more about this many years ago.

The problem with those “traditional” backdoors is that Intel (or a certain agency) could be caught using them, and this might have catastrophic consequences for Intel. Just imagine somebody discovering (during forensic analysis of an incident) that doing:

MOV eax, $deadbeef
MOV ebx, $babecafe
ADD eax, ebx

...causes ring elevation for the next 1000 cycles. All the affected processors would suddenly become equivalents of the old 8086 and would have to be replaced. Quite a marketing nightmare, I think, no?

Next-generation CPU backdooring

But as more and more crypto and security mechanisms get delegated from software to the processor, it becomes ever more likely that Intel (or AMD) could insert really “plausibly deniable” backdoors into their processors.

Consider e.g. the recent paper on how to plant a backdoor into Intel's Ivy Bridge random number generator (usable via the new RDRAND instruction). The backdoor reduces the actual entropy of the generator, making it feasible to later brute-force any crypto that uses keys generated via the weakened generator. The paper goes to great lengths describing how this backdoor could be injected by a malicious foundry (e.g. one in China) behind Intel's back, which is achieved by implementing the backdoor entirely below the HDL level. The paper takes a “classic” view of the threat model, with the Good Americans (Intel engineers) and the Bad Chinese (foundry operators/employees). Nevertheless, it should be obvious that Intel could have planted such a backdoor without any of the effort or challenges described in the paper, because they could do so at any level, not necessarily below HDL.
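
To see why reduced entropy is fatal, consider this toy sketch: if the trojan fixes all but 32 bits of the generator's internal state, an attacker who knows the trojan's constants can simply enumerate all candidate keys. The helper functions are, of course, hypothetical:

#include <stddef.h>
#include <stdint.h>

int brute_force_weakened_key(const uint8_t *ciphertext, size_t len)
{
    for (uint64_t seed = 0; seed < (1ULL << 32); seed++) {
        uint8_t key[16];
        derive_key_from_seed(seed, key);  /* mimics the trojaned RDRAND:
                                             only 32 bits actually vary */
        if (try_decrypt(ciphertext, len, key))
            return 1;   /* 2^32 trials: feasible in hours, versus an
                           honest 2^128, which is not */
    }
    return 0;
}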

But backdooring an RNG is still something that leaves traces. Even though a backdoored processor can apparently pass all external “randomness” tests, such as the NIST test suite, it might still be caught. Perhaps because somebody will buy 1000 processors, run them for a year, note down all the numbers generated, and then conclude that the distribution is not quite right. Or something like that. Or perhaps because somebody will reverse-engineer the processor, specifically the RNG circuitry, and notice that some gates are shorted to GND. Or perhaps because somebody at this “Bad Chinese” foundry will notice it.

Let's now get back to Intel SGX – what is the actual Root of Trust for this technology? Of course, the processor, just like for the old ring3/ring0 separation. But for SGX there is an additional Root of Trust, which is used for remote attestation: the private key(s) used for signing the Quote messages.

If the signing private key somehow got into the hands of an adversary, the remote attestation breaks down completely. Suddenly the “SGX blackboxed” apps and malware can readily be decrypted, disassembled and reverse engineered, because the adversary can now emulate their execution step by step under a debugger and still pass the remote attestation. We might say this is good, as we don't want irreversible malware and apps. But then, suddenly, we also lose our attractive “trusted cloud” – now there is nothing to stop an adversary who has the private signing key from running our trusted VM outside of SGX, while still reporting to us that it is SGX-protected. And so, while we believe that our trusted VM is trusted and unsniffable, and while we entrust all our deepest secrets to it, the adversary can read them all as if served on a plate.

And the worst thing is – even if somebody took such a processor, disassembled it into pieces, analyzed it transistor by transistor, recreated the HDL, and analyzed it all, everything would still look good. Because the backdoor is... the leaked private key that is now also in the hands of the adversary, and there is no way to prove that by looking at the processor alone.

As I understand it, the whole idea of having a separate TPM chip was exactly to make such backdooring-by-leaking-keys more difficult, because, while we're all forced to use Intel or AMD processors today, it is possible for e.g. every country to produce its own TPM, as it is a million times less complex than a modern processor. So, perhaps, Russia could use their own TPMs, whose private keys, they could be reasonably sure, have not been handed over to the NSA.

However, as I mentioned in the first part of this article, sadly, this scheme doesn't work that well. The processor can still cheat the external TPM module. For example, in the case of Intel TXT and TPM, the processor can produce incorrect PCR values in response to a certain trigger – in that case it no longer matters that the TPM is trusted and its keys not leaked, because the TPM will sign wrong values. On the other hand, we are now back to “traditional” backdoors in the processor, whose main disadvantage is that people might get caught using them (e.g. somebody might analyze an exploit which turns out to trigger a correct Quote message despite incorrect PCRs).

So, perhaps, the idea of a separate TPM does make some sense after all?

What about just accidental bugs in Intel products?

Conspiracy theories aside, what about accidental bugs? What are the chances of SGX being really foolproof, at least against those unlucky adversaries who didn't get access to the private signing keys? Intel's processors have become quite complex beasts these days. And if you also throw in the Memory Controller Hub, it's an unimaginably complex beast.

Let's take a quick tour back through some spectacular attacks against Intel “hardware” security mechanisms. I put “hardware” in quotation marks because really most of these technologies are software, like most things in electronics these days. Nevertheless, “hardware enforced security” does have a special appeal to lots of people, often creating an impression that these must be some ultimate, unbreakable technologies...

I think it all started with our exploit against the Intel Q35 chipset (slides 15+), demonstrated back in 2008, which was the first attack to compromise the otherwise hardware-protected SMM memory on Intel platforms (some earlier attacks against SMM assumed the SMM was not protected, which was the case on many older platforms).

This was shortly followed by another paper from us about attacking Intel Trusted Execution Technology (TXT), which found and exploited the fact that TXT-loaded code was not protected against code running in SMM mode. We used our previous Q35 attack against SMM, plus a couple of new ones, to compromise SMM, plant a backdoor there, and then compromise TXT-loaded code from there. The issue highlighted in the paper has never really been correctly patched. Intel has spent years developing something called the STM, which was supposed to be a thin hypervisor for SMM code sandboxing. I don't know whether the Intel STM specification has eventually been made public, how many bugs it might introduce on systems using it, or how inaccurate it might be.

In the following years we presented two more devastating attacks against Intel TXT (neither of which depended on compromised SMM): one that exploited a subtle bug in the processor's SINIT module, allowing us to misconfigure VT-d protections for TXT-loaded code, and another that exploited a classic buffer overflow bug, also in the processor's SINIT module, allowing us this time not only to fully bypass TXT, but also to fully bypass Intel Launch Control Policy and hijack SMM (several years after our original papers on attacking SMM, the old bugs had been patched, so this was also attractive as yet another way to compromise SMM for whatever other reason).

Invisible Things Lab has also presented the first, and as far as I'm aware still the only, attack on an Intel BIOS that allowed reflashing the BIOS despite Intel's strong “hardware” protection mechanism designed to allow only digitally signed code to be flashed. We also found out about a secret processor in the chipset used for execution of Intel AMT code, and we found a way to inject our custom code into this special AMT environment and have it executed in parallel with the main system, unconstrained by any other entity.

This is quite a list of significant Intel security failures, and I think it gives us something to think about. At the very least, it shows that just because something is “hardware enforced” or “hardware protected” doesn't mean it is foolproof against software exploits. Because, it should be clearly said, all of our exploits mentioned above were pure software attacks.

But, to be fair, we have never been able to break Intel's core memory protection (ring separation, page protection) or Intel VT-x. Rafal Wojtczuk has probably come closest with his SYSRET attack, an attempt to break ring separation, but ultimately Intel's excuse was that the problem was on the side of the OS developers, who didn't notice subtle differences in the behavior of SYSRET between AMD and Intel processors, and didn't make their kernel code defensive enough against the Intel processor's odd behavior.

We have also demonstrated rather impressive attacks bypassing Intel VT-d, but, again, to be fair, we should mention that the attacks were possible only on those platforms that Intel didn't equip with so-called Interrupt Remapping hardware, and that Intel knew such hardware was indeed needed and had been planning it a few years before our attacks were published.

So, is Intel SGX gonna be as insecure as Intel TXT, or as secure as Intel VT-x...?

The bottom line

Intel SGX promises some incredible functionality – the ability to create protected execution environments (called enclaves) within an untrusted (compromised) Operating System. However, for SGX to be of any use on a client OS, it is important that we also have technologies to implement trusted output and input from/to the SGX enclave. Intel currently provides few details about the former and openly admits it doesn't have the latter.

Still, even without trusted input and output technologies, SGX might be very useful in bringing trust to the cloud, by allowing users to create trusted VMs inside an untrusted provider's infrastructure. At the same time, however, it could allow the creation of applications and malware that cannot be reverse engineered. It's quite ironic that those two applications (trusted cloud and irreversible malware) are mutually bound together, so that if one wanted to add a backdoor to allow the A/V industry to analyze SGX-protected malware, that very same backdoor could be used to weaken the trustworthiness guarantees of the user VMs in the cloud.

Finally, a problem that is hard to ignore today, in the post-Snowden world, is the ease of backdooring this technology by Intel itself. In fact, Intel doesn't need to add anything to their processors – all they need to do is give away the private signing keys used by SGX for remote attestation. This makes for a perfectly deniable backdoor – nobody could catch Intel at it, even if the processor were analyzed transistor by transistor and the HDL line by line.

As a system architect, I would love to have Intel SGX, and I would love to believe it is secure. It would allow us to further decompose Qubes OS, specifically to get rid of the hypervisor from the TCB, and probably even more.

Special thanks to Oded Horowitz for turning my attention towards Intel SGX.

13 comments:

  1. With regard to Intel giving keys to the NSA, it is inconceivable that the NSA would not subpoena them. Therefore, one can rest assured that the NSA has the keys.

  2. "So, perhaps Russia could use their own TPMs, which they might be reasonably sure they use private keys which have not be handed over to the NSA."

    Could an external device, e.g. USB-attached, work as an independent TPM? The possibility of MITM seems generic to any TPM-like device; I'm interested to hear your thoughts on this.

  3. @jake: I don't think a USB-based TPM could be of any use, especially in a dynamic scheme such as TXT or SGX, because there is no equivalent of a "special wire" from the processor to the device which could be used to inform the device that a trusted event occurred (e.g. SENTER needs to inform the TPM that it should reset certain PCR registers). In the case of a USB device, I'm afraid, such signals could just be spoofed by malware.

  4. @jake: there are also similar (hardware) attacks against independent TPMs where you spoof certain events (e.g. reboots) (check out this paper if you're interested)

    @joanna: great paper about SGX!
    It is indeed most likely that an enclave has complete control over all memory, with the exception of other enclaves, and can snoop everything in memory. However, this is not required. In principle, the untrusted OS could set up the virtual memory where the enclave must reside. Besides, the enclave only cares that it is loaded somewhere in memory with its pages in the correct order, and that its pages remain in that order. That way the OS, even though untrusted, can control what unprotected memory the loaded enclave can access and what not. This is similar to the Fides architecture (disclaimer: I'm a co-author of that paper). It'll be very interesting to see how enclaves are exactly implemented. Can't wait for some more details from Intel.

  5. Hi Joanna, excellent post on SGX. It remains to be seen just how well these extensions will work or how they'll be implemented in real life. I personally believe we should throw out the whole x86 architecture rather than cobbling together "trusted" extensions.

  6. The Intel programming manual is available here:

    http://software.intel.com/sites/default/files/329298-001.pdf

    /CF

  7. Great analysis :)
    One thing about those enclaves is that they run at ring 3, and anything they do with the external world is visible to the OS. You noted this regarding the I/O, but if there is malware in an enclave – you can know exactly what it is doing (besides reading memory, probably). Every system call can be tracked, so the risk of malware is not that big (maybe a 'sleeping' malware that waits for the right moment is a problem, but still, you can monitor this).

  8. I wonder if SGX can protect against buffer overflow control-hijack attacks or ret-2-libc attacks.

  9. Have I missed something, or is the only thing stopping me from emulating the processor that I don't know the secret signing key?

    If so, this would mean that when this technology is used widely, this key would be targeted by many groups.
    And in contrast to software keys, there would be no chance to replace it.

  10. This comment has been removed by the author.

  11. A question, Joanna: you mentioned that we're not sure whether VT-x has a flaw that could lead to EPT or other functionality being bypassed.

    Recently some ideas have come to my mind about whether we can break VT-x or learn something about VT-x internal behaviors. Do you think this is valid thinking? See my blog -
    http://hypervsir.blogspot.com/2014/09/thoughts-on-vmxon-and-vmcs-regions-in.html

  12. I'm sure you are up on the newest research, but just in case you missed it, here is an interesting paper/presentation from Microsoft Research attempting, with some success, to use SGX to create a "trusted cloud" execution environment.
