Friday, August 30, 2013

Thoughts on Intel's upcoming Software Guard Extensions (Part 1)

Intel Software Guard Extensions (SGX) might very well be The Next Big Thing coming to our industry since the introduction of Intel VT-d, VT-x, and TXT technologies in the previous decade. It seems to promise what so far has never been possible – the ability to create a secure enclave within a potentially compromised OS. It sounds almost too good to be true, so I decided to take a closer look and share some early thoughts on this technology.

Intel SGX – secure enclaves within an untrusted world!

Intel SGX is an upcoming technology, and there are very few public documents about it at the moment. In fact, the only public papers and presentations about SGX can be found in the agenda of one security workshop that took place some two months ago. The three papers from Intel engineers presented there provide a reasonably good technical introduction to these new processor extensions.

You might think of SGX as a next generation of Intel TXT – a technology that never really took off, and which has had a long history of security problems disclosed by a certain team of researchers ;) Intel TXT has also been perhaps the most misunderstood technology from Intel – in fact, many people thought of TXT as if it could already provide secure enclaves within an untrusted OS. This, however, was never really true (even ignoring our multiple attacks), and I have spoken and written about it many times in the past years.

It's not clear to me when SGX will make it to the CPUs that we can buy in local shops around the corner. I would assume we're talking about 3-5 years from now, because SGX is not even described in the Intel SDM at this moment.

Intel SGX is essentially a new mode of execution on the CPU, a new memory protection semantic, plus a couple of new instructions to manage it all. So, you create an enclave by filling its protected pages with the desired code, then you lock it down, measure the code there, and, if everything's fine, you ask the processor to start executing the code inside the enclave. From now on, no entity, including the kernel (ring 0), the hypervisor (ring “-1”), SMM (ring “-2”), or AMT (ring “-3”), can read or write the memory pages belonging to the enclave. Simple as that!
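
To make this a bit more concrete, here is a minimal sketch of the enclave life cycle, assuming the instruction names used in the Intel workshop papers (ECREATE, EADD, EEXTEND, EINIT, EENTER); the sgx_* wrapper functions and the enclave_t handle below are purely hypothetical, invented just to illustrate the flow, and are not a real API:

#include <stddef.h>
#include <stdint.h>

typedef struct enclave enclave_t;   /* opaque handle -- hypothetical */

/* Hypothetical wrappers around the corresponding CPU instructions: */
extern enclave_t *sgx_ecreate(size_t size);                      /* ECREATE: set up the enclave     */
extern int sgx_eadd(enclave_t *e, const void *pg, uint64_t off); /* EADD: copy one page into it     */
extern int sgx_eextend(enclave_t *e, uint64_t off);              /* EEXTEND: extend the measurement */
extern int sgx_einit(enclave_t *e);                              /* EINIT: finalize and lock it     */
extern int sgx_eenter(enclave_t *e, uint64_t entry);             /* EENTER: start executing inside  */

#define PAGE 4096

int load_and_run_enclave(const uint8_t *code, size_t npages, uint64_t entry)
{
    enclave_t *e = sgx_ecreate(npages * PAGE);
    if (!e)
        return -1;

    /* Fill the enclave with the desired code, measuring each page as we go. */
    for (size_t i = 0; i < npages; i++) {
        if (sgx_eadd(e, code + i * PAGE, i * PAGE) != 0 ||
            sgx_eextend(e, i * PAGE) != 0)
            return -1;
    }

    /* Lock it down -- from now on no ring 0/-1/-2/-3 code can read or
     * write the enclave's pages. */
    if (sgx_einit(e) != 0)
        return -1;

    return sgx_eenter(e, entry);   /* run the code inside the enclave */
}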

Why have we had to wait so long for such a technology? OK, it's not really that simple, because we also need some form of attestation or sealing to make sure that the enclave was really loaded with good code.

The cool thing about an SGX enclave is that it can coexist (and so, co-execute) together with other code, such as all the untrusted OS code. There is no need to stop or pause the main OS and boot into a new stub mini-OS, as was the case with TXT (this is what e.g. Flicker tried to do, and it was very clumsy). Additionally, there can be multiple enclaves, mutually untrusted, all executing at the same time.

No more stinkin' TPMs nor BIOSes to trust!

A nice surprise is that the SGX infrastructure no longer depends on the TPM to do measurements, sealing and attestation. Instead, Intel has a special enclave that essentially emulates the TPM. This is a smart move, and it doesn't decrease security in my opinion. It surely makes us now trust only Intel vs. trusting Intel plus some-asian-TPM-vendor. While it might sound like a good idea to spread the trust between two or more vendors, this only really makes sense if the relation between trusting those vendors is of the “AND” type, while in this case the relation is, unfortunately, of the “OR” type – if the private EK key leaks from the TPM manufacturer, we can bypass any remote attestation, and we no longer need any failure on Intel's side. Similarly, if Intel were to have a backdoor in their processors, this alone would be enough to sabotage all our security, even if the TPM manufacturer was decent and played fair.

Because of this, it's generally good that SGX allows us to shrink the number of entities we need to trust down to just one: the Intel processor (which, these days, includes the CPU cores as well as the memory controller, and, often, also a GPU). Just as a reminder – today, even with a sophisticated operating system architecture like the one we use in Qubes OS, which is designed with decomposition and minimizing trust in mind, we still need to trust the BIOS and the TPM, in addition to the processor.

And, of course, because SGX enclave memory is protected against access from any other processor mode, an SMM backdoor can no longer compromise our protected code (in contrast to TXT, where SMM can subvert a TXT-loaded hypervisor), and neither should any other entity, such as the infamous AMT or a malicious GPU, be able to do that.

So, this is all very good. However...

Secure Input and Output (for Humans)

For any piece of code to be somehow useful, there must be a secure way to interact with it. In the case of servers, this could be implemented by e.g. including the SSL endpoint inside the protected enclave. However, for most applications that run on a client system, the ability to interact with the user via screen and keyboard is a must. So, one of the most important questions is how Intel SGX secures output to the screen from an SGX enclave, as well as how it ensures that the input the enclave gets is indeed the input the user intended.

Interestingly, this subject is not very thoroughly discussed in the Intel papers mentioned above. In fact, only one paper briefly mentions the Intel Protected Audio Video Path (PAVP) technology that apparently could be used to provide secured output to the screen. The paper then references... a consumer FAQ on Blu-ray Disc playback using Intel HD Graphics. There are no further technical details, and I was also unable to find any technical document from Intel about this technology. Additionally, the same paper admits that, as of now, there is no protected input technology available, even at the prototype level, although they promise to work on that in the future.

This might not sound very surprising – after all, one doesn't need to be a genius to figure out that the main driving force behind this whole SGX thing is DRM, and specifically protecting Hollywood media against the piracy industry. There would be nothing wrong with that in itself, assuming, however, that the technology could also have some other uses that really improve the security of the user (as opposed to the security of the media companies).

We should remember that all the secrets, keys, tokens, and smart cards ultimately exist to allow the user to access some information. And how do people access information? By viewing it on a computer screen. I know, I know, this is so retro, but until we have direct PC-brain interfaces, I'm afraid that's the only way. Without properly securing the graphics output, all the secrets can ultimately be leaked out.

Also, how do people command their computers and applications? Well, again, using these retro things called the keyboard and mouse (or touchpad). However secure our enclave might be, without secured input the app would not be able to distinguish intended user input from simulated input crafted by malware. Not to mention such obvious attacks as sniffing of the user input.

Without protected input and output, SGX might be able to stop the malware from stealing the user's private keys for email encryption or for issuing bank transactions, yet the malware will still be able to command this super-secured software to e.g. decrypt all the user's emails and then steal screenshots of all the plaintext messages (with a bit of simple programming, the screenshots could be turned back into nice ASCII text to save on bandwidth when leaking them out to a server in Hong Kong), or better yet, perhaps just forward them to an email address that the attacker controls (perhaps still encrypted, but using the attacker's key).

But let's ignore for a moment this “little issue” of the lack of protected input, and the lack of technical documentation on how secure graphics output is really implemented. Surely it is conceivable that protected input and output could be implemented in a number of ways, so let's hope Intel will do it, and do it right. We should remember here that whatever mechanism Intel is going to use to secure the graphics and audio output, it surely will be an attractive target of attacks, as there is probably a huge monetary incentive for such attacks in the illegal film copying business.

Securing mainstream client OSes, and why it is not so simple

As mentioned above, for SGX enclaves to be truly meaningful on client systems, we need protected input and output to and from the secured enclaves. Anyway, let's assume for now that Intel has come up with robust mechanisms to provide these. Let's now consider, further, how SGX could be used to turn our current mainstream desktop systems into reasonably secure bastions.

We start with a simple scenario – a dedicated application for viewing incoming encrypted files, say PDFs, performing their decryption and signature verification, and displaying the final outcome to the user (via the protected graphics path). The application takes care of all the key management too. All this happens, of course, inside an SGX enclave (or enclaves).
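
Assuming such an application existed, the split between the untrusted part (a normal OS process) and the enclave part might look roughly like the sketch below. All the function names here are hypothetical, made up only to illustrate where the trust boundary lies – they are not any real enclave runtime API:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical helpers -- the real interfaces would be defined by the
 * enclave runtime and by whatever protected graphics path Intel provides. */
extern int enclave_call_view(const uint8_t *blob, size_t len);            /* jumps into the enclave      */
extern int verify_signature(const uint8_t *blob, size_t len);             /* against trusted sender keys */
extern int decrypt_document(const uint8_t *blob, size_t len,
                            uint8_t **plain, size_t *plain_len);          /* keys sealed to the enclave  */
extern int render_to_protected_output(const uint8_t *plain, size_t len);  /* e.g. a PAVP-style path      */

/* --- runs OUTSIDE the enclave (untrusted OS process) ------------------ */
int view_document(const uint8_t *encrypted_pdf, size_t len)
{
    /* The untrusted side never sees the plaintext or the keys; it merely
     * hands the opaque blob to the enclave. */
    return enclave_call_view(encrypted_pdf, len);
}

/* --- runs INSIDE the enclave (trusted) -------------------------------- */
int ecall_view(const uint8_t *blob, size_t len)
{
    uint8_t *plain;
    size_t plain_len;

    if (!verify_signature(blob, len))          /* accept only signed documents */
        return -1;

    if (decrypt_document(blob, len, &plain, &plain_len) != 0)
        return -1;

    /* Display via the protected graphics path, so the plaintext never
     * leaves the enclave in the clear. */
    return render_to_protected_output(plain, plain_len);
}

The interesting thing about this toy example is how small the interface between the two halves is – a single call taking an opaque blob – and how quickly it grows once the app becomes more than just a viewer, which is exactly the problem discussed below.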

Now, this all sounds attractive and surely could be implemented using SGX. But what if we wanted our secure document viewer to become a bit more than just a viewer? What if we wanted a secure version of MS Word or Excel, with its full ability to open complex documents and edit them?

Well, it's obviously not enough to just put the proverbial msword.exe into an SGX enclave. It is not, because msword.exe makes use of a million other things that are provided by the OS and 3rd-party libraries in order to perform all the tasks it is supposed to do. It is not a straightforward decision to draw a line between those parts that are security sensitive and those that are not. Is font parsing security critical? Is drawing proper labels on GUI buttons and menu lists security critical? Is rendering of the various objects that are part of the (decrypted) document, such as pictures, security critical? Is spellchecking security critical? Even if the function of some subsystem seems not security critical (i.e. it does not allow the plaintext document to easily leak out of the enclave), let's not forget that all this 3rd-party code would be interacting very closely with the enclave-contained code. This means the attack surface exposed to all those untrusted 3rd-party modules will be rather huge. And we already know it is rather not possible to write a renderer for such complex documents as PDFs, DOCs, XLSs, etc., without introducing tons of exploitable bugs. And these attacks are now coming not from potentially malicious documents (against those we protect, somehow, by parsing only signed documents from trusted peers), but from the compromised OS.

Perhaps it would be possible to take Adobe Reader, MS Word, PowerPoint, Excel, etc., and just rewrite each of those apps from scratch so that they are properly decomposed into sensitive parts that execute within SGX enclave(s) and non-sensitive parts that make use of all the OS-provided functionality, and further define clean and simple interfaces between those parts, ensuring the “dirty” code cannot exploit the sensitive code. Somewhat attractive, but somehow I don't see this happening anytime soon.

But, perhaps, it would be easier to do something different – just take the whole msword.exe, all the DLLs it depends on, as well as all the OS subsystems it depends on, such as the GUI subsystem, and put all of this into an enclave. This sounds like a more rational approach, and also more secure.

Only notice one thing – we just created... a virtual machine with a Windows OS inside and the msword.exe that uses this Windows OS. Sure, it is not a VT-x-based VM, it is an SGX-based VM now, but it is largely the same animal!

Again, we come to the conclusion of why the use of VMs is suddenly perceived as such an increase in security (which some people cannot get, claiming that introducing a VM layer only increases complexity) – the use of VMs is beneficial because of one thing: it suddenly packs all the fat library- and OS-exposed APIs and subsystems into one security domain, reducing all the interfaces between the code in the VM and the outside world. Reducing the interfaces between two security domains is ALWAYS desirable.

But our SGX-isolated VMs have one significant advantage over the other VM technologies we have got used to in the last decade or so – namely, those VMs can now be impenetrable to any entity outside of the VM. No kernel or hypervisor can peek into their memory. Neither can the SMM, AMT, or even a determined physical attacker with a DRAM emulator, because SGX automatically encrypts any data that leaves the processor, so everything that is in DRAM is encrypted and useless to the physical attacker.

This is a significant achievement. Of course SGX, strictly speaking, is not a (full) virtualization technology, and it's not going to replace VT-x. But remember, we don't always need full virtualization like VT-x; often we can use paravirtualization, and all we need in that case is a good isolation technology. For example, Xen uses paravirtualization for Linux-based PV VMs, implemented with the good old ring 3/ring 0 separation mechanism, and the level of isolation of such PV domains on Xen is comparable to the isolation of HVMs, which are virtualized using VT-x.

To Be Continued

In the next part of this article, we will look into some interesting unconventional uses of SGX, such as creating malware that cannot be reverse engineered, or Tor nodes and Bitcoin mixers that could be reasonably trusted even if we don't trust their operators. Then we will discuss how SGX might profoundly change the architecture of future operating systems and virtualization systems, in a way that we will no longer need to trust (large portions of) their kernels or hypervisors, or system admins (Anti-Snowden Protection?). And, of course, how our Qubes OS might embrace this technology in the future.

Finally, we should discuss the important question of whether this whole SGX, while providing many great benefits to system architects, should really be blindly trusted. What are the chances of Intel building backdoors in there and exposing them to the NSA? Is there any difference between trusting Intel processors today vs. trusting SGX as the basis of the security model of all software in the future?

26 comments:

Anonymous said...

"Secure Input and Output (for Humans)" << do you know this paper? http://users.ece.cmu.edu/~jmmccune/papers/ZhGlNeMc2012.pdf

Anonymous said...

"It surely makes us now trust only Intel vs. trusting Intel plus some-asian-TPM-vendor. "

I'm assuming the writer is bright enough to understand that a) the NSA will know every so-called secret key Intel has, and b) Intel has to put a backdoor into every chip because the NSA says so?

And Intel must deny everything when asked, that's part of the order.

Same procedure as is in use with Microsoft and "trusted platform". Only it means that the hardware trusts MS/NSA, _against me_. For me it's worse than useless: a remote-controlled spying device.

Trusting any US company alone on any security issue means directly trusting that the NSA is not going to harass _you_.

That's the reality the US is living in now.

Joanna Rutkowska said...

@anon-who-distrusts-intel:

Bright might I be, or not, but at least I try ;)

http://theinvisiblethings.blogspot.com/2009/06/more-thoughts-on-cpu-backdoors.html

The problem is that there is no escape from trusting Intel or AMD today.

However, an important question is: does SGX offer Intel an easier/more deniable way to backdoor the processors? As mentioned in the post, I will discuss this topic in the part 2 of the article.

Anonymous said...

Hi
This might answer your thoughts:
"Upon delivery of an SMI to a processor supporting Intel MPX, the content of IA32_BNDCFGS is saved to SMM state
save map and cleared when entering into SMM [..] Thus, Intel MPX is disabled inside an SMM
handler until SMM code enables it explicitly"

Anonymous said...

"The problem is that there is no escape from trusting Intel or AMD today."

Yes, this is exactly right. So, what is the best option?

1) Trust Intel.
2) Trust AMD.
3) Don't use a computer for sensitive things. (Go back to using paper, like the Russian government recently announced.)
4) Open-source hardware (CPUs)?

This is a serious question I pose to you, Joanna! Which do you choose (and why)? Can 4 ever become a reality?

Anonymous said...

Oh, I almost forgot to say: This is a great post! Very informative, and your insight is always appreciated!

Joanna Rutkowska said...

@anon-talking-about-mpx-and-smm:

Hmm, but what do the MPX extensions have to do with SGX? Apparently I'm not bright enough to understand your comment. Please explain.

Joanna Rutkowska said...

@anon-who-advertises-cmu-paper:

A quote from the paper:

"[T]he HV also downgrades the graphics controller to basic VGA text mode".

No more comments... Academia always amuses me with how much people will do to get a paper published ;)

Anonymous said...

I honestly DO believe Intel has included a backdoor in their processors, because of the news that the NSA spent billions to break encryption. Joanna, is there no way to test if there are backdoors in their CPUs?

Anonymous said...

http://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html

In the light of those latest revelations, the "Intel/NSA Backdoor" scenario is, unfortunately, increasingly likely. Very disheartening.

We need people like you, Joanna! Keep up the good work.

Anonymous said...

nah don't call it advertise.. I was just interested in your thoughts on the concept in general. That the implementation is nothing but a PoC should be clear.

Ark-kun said...

>"just take the whole msword.exe, all the DLLs it depends on, as well as all the OS subsystems it depends on, such as the GUI subsystem, and put all of this into an enclave"
Look at Microsoft's Drawbridge research OS http://research.microsoft.com/en-us/projects/drawbridge/

Joanna Rutkowska said...

@Ark-kun:

So, what they describe is essentially a one-vm-per-app approach. Whether the "VM" is an actual fully virtualized VT-x VM, some form of a paravirtualized VM, or just a process in an OS with limited API surface, this all falls under the "use VMs for better isolation" paradigm. And, again, the whole point about using "VMs for better isolation" is to reduce interfaces between the VM and the rest of the system. And they're doing just that.

Regarding the specifics of their approach -- let me quote the "Limitations" section from the paper:

"Solving the problem of multiprocess applications is much harder particularly for applications that communicate through shared state in
win32k as is done in many OLE scenarios. We have considered, but not implemented, two possible designs. One is to load multiple applications into a single address space. Another is to run win32k in a separate user-mode server process that can be shared by multiple applications in the same isolation container."

Their first solution would be essentially "one-VM-for-multiple-apps", so a traditional one (like the one we use in Qubes). The latter approach (win32k in a separate server process) would be a reintroduction of fat interfaces between the isolated apps and the *trusted* part of the system, which win32k surely is!

In other words, back to what we use in Qubes OS, or back to what is currently in Windows :)

Also of note: "At the time of writing, Microsoft has no plans to productize
any of the concepts prototyped in Drawbridge"... and without MS producing it, there is no way we could have Windows Library OS ready to run real-world apps, as they also admit in the paper. Remember WINE on Linux? Same story. But this, of course, is more of a political problem, than a technical one. Albait an important stopper.

Anonymous said...

joanna: curious about your thoughts on isolating user input into a dedicated hardware device w/ 2 modes similar to KVM as proposed in this paper:

http://www.mitre.org/work/tech_papers/2012/12_0024/12_0024.pdf

Anonymous said...

Intel assumes that the OS is untrusted, and therefore SGX may live. This assumption isn't very accurate. I'm well aware of the current security issues, but let's keep in mind that they were built on a long chain of legacy support and bad design foundations, which has long been backed by the Intel architecture.
Maintaining support for the legacy architecture while adding a proprietary patch (SGX) isn't the way to go.
For the amount of time, money and effort that Intel has spent in redesigning its conception (and this has not yet started with software enabling) – I think we could have a much simpler chip design with a much better OS architecture on top of it.
Academic research made current OS design obsolete a decade ago, and we keep trying to hold on to the same crappy assumptions because we are used to them (only reprogrammed by big corporations to think the same).

Anonymous said...

Could you clarify how the trusted code is bootstrapped, in terms of keys?

That is, does each CPU have a public/private key pair, and if so, how does the software vendor know that it's a CPU and not software pretending to be a CPU and providing public keys?

Will only vendors who have their code signed by Intel be allowed to create enclaves?

In other words, how does the vendor<->CPU "handshake" work?

Joanna Rutkowska said...

@anon-who-asks-about-key-mgmt:

The Remote Attestation used by SGX has been described in more detail in this paper:

https://docs.google.com/file/d/0B_wHUJwViKDaSUV6aUcxR0dPejg/edit?pli=1

To answer your question let me quote the paper:

"Quoting Enclave verifies REPORTs from other enclaves on the platform using the Intra-platform enclave attestation method described above, and then replaces the MAC over these REPORTs with a signature created with a device specific (private) asymmetric key. The output of this process is called a QUOTE."

How 3rd parties are supposed to get the matching public key for verification is not discussed in the paper (or I'm missing it). I could imagine Intel will just publish the certificate(s) for each processor series somewhere. (Just like TPM manufacturers are supposed to publish certs for the TPMs' EK keys, which play a similar role in authenticating a real TPM.)

They also state that the actual device-specific key is not used directly for QUOTE signing, in order to prevent de-anonymization of users' machines; instead they use something they call EPID, which is a scheme based on DAA, used previously in the TPM for Remote Attestation. I haven't studied either DAA or EPID in detail, but the paper provides a link to another paper about EPID.
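
If it helps, here is how I read that flow, expressed as a rough sketch – only the REPORT -> QUOTE transformation comes from the quoted paper; all the type and function names (and sizes) below are made up for illustration:

typedef struct { unsigned char bytes[512]; } report_t;  /* MAC'ed structure from EREPORT */
typedef struct { unsigned char bytes[768]; } quote_t;   /* EPID-signed QUOTE             */

extern report_t ereport(const void *user_data);         /* in the application enclave    */
extern int      verify_report_mac(const report_t *r);   /* in the Quoting Enclave        */
extern quote_t  sign_with_epid(const report_t *r);      /* in the Quoting Enclave        */
extern int      epid_verify(const quote_t *q);          /* on the remote verifier's side */

int attest(const void *user_data, quote_t *out)
{
    /* 1. The application enclave produces a REPORT, MAC'ed with a key that
     *    only enclaves on the same platform can derive.                     */
    report_t r = ereport(user_data);

    /* 2. The Quoting Enclave verifies the MAC (intra-platform attestation)
     *    and replaces it with a signature made with the device-specific
     *    (EPID) private key -- the result is the QUOTE.                     */
    if (!verify_report_mac(&r))
        return -1;
    *out = sign_with_epid(&r);

    /* 3. The remote party verifies the QUOTE against whatever public EPID
     *    material Intel publishes for the processor family.                 */
    return epid_verify(out) ? 0 : -1;
}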

Arthur said...

There's some recent work on formally correct software that's heading towards real-world usability by verifying only a shim. There are obvious links to having a secure part of a chip.

See "Establishing Browser Security Guarantees
through Formal Shim Veriļ¬cation" http://goto.ucsd.edu/quark/usenix12.pdf

Anonymous said...

Intel just published 156 pages of material on SGX.

http://software.intel.com/en-us/intel-isa-extensions#pid-19539-1495

Morty29 said...

Sorry to be a bit late posting here – while I followed the TXT/TPM technologies closely (to the point of implementing my own dynamically loaded MLE from Windows!), I must have fallen behind, because I had not heard about SGX until now.

I haven't read much about the technology yet apart from some blog posts at Intel's web site and I'm not sure what is public.

However, reading your post I'm wondering about what you write:
>>>
Intel TXT has also been perhaps the most misunderstood technology from Intel – in fact, many people thought of TXT as if it could already provide secure enclaves within an untrusted OS. This, however, was never really true (even ignoring our multiple attacks), and I have spoken and written about it many times in the past years.
<<<

I don't understand what is meant here, because as far as I know, TXT can indeed be used to create trusted enclaves within the CPU, namely in the way Flicker or other custom-written MLEs do, which you also mention later in the article. While there are many practical problems with such an approach, and it also requires trust in the CPU, TPM and SINIT modules (which you have shown are not infallible – but such errors are still correctable, and the correction can be 'measured'), it does in theory provide an enclave.



(By the way, I don't think Flicker came up with the idea to load from within the OS, which seems to be suggested by this blog post – I think that was the whole point of the dynamic root of trust additions to the TPM 1.2 standard, which are employed by TXT. If one simply seeks to secure the first program loaded, standard TPM mechanisms, which don't require TXT, are sufficient.)

Joanna Rutkowska said...

@Morty29: the requirement to freeze the whole OS for the time when you want to run your trusted app (a TXT "enclave") is just ridiculous. Today's OSes are not MS-DOS anymore; you cannot just freeze them.

Also, regarding things that one needs to trust when using TXT -- don't forget about the BIOS and the SMM. The problem of TXT dependency on the BIOS/SMM has never been solved in a good way AFAIK. Even with the over-complex and somehow pathetic notion of the STM.

Looking back at TXT now, and seeing where the STM "solution" went, I consider it a big failure, even though in previous years I was somewhat excited about it. Good thing we will have SGX instead.

Morty29 said...

Great, thanks for your reply. Yes, I agree TXT is very impractical; my point was just that the same things could be achieved, so I think TXT and SGX are just two ways of achieving exactly the same end result. SGX seems more promising, and since it's new, presumably Intel believes in it, whereas it seemed TXT didn't get that much attention in recent years. What I especially like about SGX is that enclave entries/exits can now happen from user mode, and that the enclave code itself (upon entry) executes in user mode. Hopefully this would enable the technology

Joanna Rutkowska said...

@Morty29: as I explained above, TXT does *not* allow one to do the same things as SGX. SGX is architecturally stronger.

Anonymous said...

"because SGX automatically encrypts any data that leave the processor, so everything that is in the DRAM is encrypted and useless to the physical attacker." does not seem right to me.

I suppose content in the EPC is protected by memory access control while content swapped out is protected by encryption. Did I miss anything?

Alex Dubois said...

Hi Joanna,

Very good post, as always.

One clarification, I believe... SGX's main goal is for Intel to stay relevant in a cloud environment (not DRM). Next year, almost 50% of hardware will be ordered by cloud providers. They need capabilities that allow organizations to externalize their computation securely.

Anonymous said...

@Morty29 Intel TXT is not SMM-proof (as far as public knowledge goes), and also you can't have concurrent execution of trusted and untrusted environments. These two are solved in SGX.