Tuesday, September 22, 2009

Intel Security Summit: the slides

Last week I was invited to Hillsboro to speak at Intel's internal conference on security. My presentation was titled "A Quest To The Core: Thoughts on present and future attacks on system core technologies", and my goal was to quickly summarize the research our team has done over the last 12 months or so, and to explain why we're so keen on hacking low-level system components while the rest of the world is excited about browser and Flash player bugs.

The slides (converted to PDF) can be found here. As you will see, I decided to remove most of the slides from the "Future" chapter. One reason was that we didn't want to hint our competition as to some of the new toys we're working on ;) The other reason was that presenting mere thoughts about attacks, i.e. unproven thoughts, or, should I even say, feelings about future attacks, has little research value, and while I can understand such information being important to Intel, I don't see how others could benefit from it.

I must say it was nice and interesting to meet in person various Intel architects, i.e. the people who actually design and create the basic "universe" we all operate in. You can always change the OS (or even write your own!), but you must still stick to the rules, or "laws", of the platform (unless you can break them ;)

Wednesday, September 02, 2009

About Apple’s Security Foundations, Or Lack Thereof...

Every once in a while it’s healthy to reinstall your system... I know, I know, it’s almost heresy to say that, but that’s reality in a world where our systems are totally unverifiable. In fact, I don’t even attempt to verify whether my Mac laptop has been compromised in any way (most system files are not signed anyway). But sometimes you get this feeling that something might be wrong, and you decide to reinstall and start your (digital) life all over again :)

So, every time I (re)install a Mac-based system, I end up cursing horribly at Apple’s architects. Why? Because in the Apple world they seem to totally ignore the concept of file integrity, to such an extent that it’s virtually impossible to get any assurance that the programs I install are in any way authentic (i.e. not tampered with by some 3rd party, e.g. by somebody controlling my Internet connection).

Take any Apple installer package, e.g. Thunderbird. In most cases an installer package on the Mac is a .dmg file that represents an installation disk image. Now, when you open such a file under Mac OS, the system never displays any information about whether this file is signed (and by whom) or not. In fact, I’m pretty sure it never is signed. What you end up with is a .dmg file that you have just downloaded over plaintext HTTP, with absolutely no way of verifying that it is the original file the vendor really published. And you’re just about to grant admin privileges to the installer program that is inside this file -- after all, it’s an installer, so it must get root privileges, right (well, maybe not quite)? Beautiful...
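(As an aside, one can at least interrogate signatures by hand. Below is a minimal Python sketch of mine that merely wraps the stock codesign tool; this is an illustration, not anything the installer UI does for you, and note that signature support for .dmg images only appeared in later OS X releases.)

```python
#!/usr/bin/env python3
"""Ask Mac OS whether a downloaded bundle or image carries a code signature.

A sketch wrapping the stock `codesign` tool; assumes a reasonably
modern OS X (.dmg signature support is a later addition).
"""
import subprocess
import sys

def signature_info(path: str) -> str:
    # `codesign -dv` prints signing details to stderr, or an error such
    # as "code object is not signed at all" for unsigned files.
    result = subprocess.run(
        ["codesign", "-dv", "--verbose=2", path],
        capture_output=True, text=True,
    )
    return result.stderr.strip() or "(no output)"

if __name__ == "__main__":
    print(signature_info(sys.argv[1]))
```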

Interestingly, this very same Thunderbird installer, but for Windows, is correctly signed, and Windows correctly displays that information (together with the ability to examine the certificate) and lets the user choose whether to allow it to run or not.



Sure, the certificate doesn’t guarantee that Mozilla didn’t put a nasty backdoor in there, nor that the file was not compromised via a compromise of Mozilla’s internal servers. Nor that the certificate (the private key) wasn’t somehow stolen from Mozilla, or that the issuing authority didn’t make a mistake and issue the certificate to some random guy who just happened to be named Mozilla.

But the certificate provides liability. If it indeed turned out that this very Thunderbird installer was somehow malicious, I could take the signed file to court and sue either Mozilla or the certification authority for all the damage it might have done to me. Without the certificate I cannot do that, because I (and everybody else) cannot know whether the file was tampered with while being downloaded (e.g. by a malicious ISP) or whether my system was already compromised.

But in the case of Apple, we have no such choice -- we must take the risk every time we download a program from the Internet. We must bet the security of our whole system on the hope that at this very moment nobody is tampering with our (unsecured) HTTP connection, that nobody has compromised the vendor’s Web server, and, of course, that the vendor didn’t put any malicious code into its product (as we could not sue them for it).

So that sucks. That sucks terribly! Without the ability to check the integrity of the programs we want to install, we cannot build any solid foundations. It’s funny how people debate whether Apple implemented ASLR correctly in Snow Leopard, or whether NX is bypassable. It’s meaningless to dive into such advanced topics if we cannot even assure that on day 0 our system is clean. We need to start building our systems from the ground up, not from the roof! The ability to assure that the software we install has not been tampered with seems like a reasonable very first step. (Sure, it could be compromised 5 minutes later, and to protect against that we need other mechanisms, like the ASLR and NX mentioned above.)

And Apple should not blame the vendors for this situation (“Vendors would never pay $300 for a certificate”, blah, blah), as it is enough to have a look at the Windows versions of the same products and note that most of them do have signed installers (gee, even the open-source TrueCrypt has a signed installer for Windows!).

One should note that a few vendors, seeing this problem on the Mac, do publish PGP signatures for their installation files. This includes e.g. PGP Desktop for Mac, KeePassX, TrueCrypt for Mac, and a few others. But these are just exceptions, and I wonder how many users are disciplined (and savvy) enough to correctly verify those PGP signatures (in general it requires you to have downloaded the vendor’s key many months earlier and kept it in your keyring, to minimize the chance that somebody alters both the installer files and the key you download). Some other vendors offer pseudo-integrity by displaying MD5/SHA1 sums on their websites. That would make some sense only if the website on which the hashes are displayed was itself SSL-protected (a file signature is still the better option), as otherwise we can be sure that an attacker who is tampering with the installer file will also take care of adjusting the hash on the website... But of course this is never the case -- have a look e.g. at the VMware download page for Mac Fusion (one needs to register first). Very smart, VMware! (Needless to say, the VMware Workstation installer for Windows is properly signed.)
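Just to spell out what such a hash check amounts to in practice, here is a minimal Python sketch (the file name and the expected digest below are made up, and the check is obviously only as trustworthy as the page that published the digest):

```python
import hashlib

def file_digest(path: str, algo: str = "sha1") -> str:
    """Compute the digest of a file in chunks (installers can be large)."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical values: the vendor-published hash must itself come from a
# trustworthy (e.g. SSL-protected) page for this to mean anything.
expected = "0123456789abcdef0123456789abcdef01234567"
actual = file_digest("VMware-Fusion.dmg")
print("OK" if actual == expected else "MISMATCH -- do not install!")
```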

BTW, has anybody checked whether Apple’s own updates are digitally signed somehow?

All I wrote in this post is trivial. It should be obvious to every decently educated software engineer. Believe me, it really is much more fun for me to write about things like new attacks on chipsets or virtualization. But I have this little hope that maybe somebody at Apple will read this little post and fix their OS. Because I really like Apple products for their aesthetics...

Wednesday, August 26, 2009

PDF signing and beyond

Today I got an advertising email from GlobalSign (where I previously bought a code signing certificate for Vista kernel drivers some years ago) highlighting their new (?) type of certificate for signing Adobe PDF files. It made me curious because, frankly, I've recently been missing this feature more and more. After some quick online research it turned out that this whole Adobe Certified Documents Services (CDS) scheme is nothing new, as apparently even Adobe Reader 6.0 had support for verifying CDS certificates. The certificates are also available from other popular certification authorities, e.g. Entrust, VeriSign, and a couple of others.

So, I immediately felt stupid for not having been aware of such a great feature, which has apparently been out there for a few years now. Why did I think it was so great a feature? Consider the following scenario…

At our Invisible Things Lab resources page we offer a handful of files to download — slides and some proof-of-concept code. The website is served over plaintext HTTP. This means that if you're downloading anything over a public WiFi (hotel, airport lounge, etc.) you never know whether the PDF you actually get has been infected somewhere in the middle, e.g. by a guy in the lobby who is messing with the hotel WiFi.

So, one might argue that I should have paid a few hundred bucks for an SSL certificate for my website and started serving it over HTTPS. But here's the problem — I, like zillions of other small businesses and individuals, host my website with some 5-dollar-a-month, one-of-the-thousands hosting provider. I have zero knowledge about the people who work there and whether they can be trusted, and I also know nothing about (and have zero impact on) how secure (or not, for that matter) the server is. (The same applies to my cell phone carrier, ISP, etc., BTW.)

Now, the SSL certificate for the website "knows" nothing about how the files on my website should look, in particular whether they have been compromised or not. All an SSL certificate does is give assurance to the remote client that he or she downloaded the files that were actually on the server at the moment of downloading — whether they were the original ones authored by me, or ones maliciously modified by somebody who got access to the server.

So, the solution with an SSL certificate would work only if I trusted my web server, which could be assumed only if I ran my own dedicated server. That, however, would be overkill for a small company like ITL, especially since our business is not based on our web presence — in fact, the website is maintained mainly for other researchers and students, who can easily download our papers and code from there, and for reporters, so they can e.g. download a press release.

Surprisingly, the website has never been compromised, probably because it doesn't present an interesting target for any skilled person (or maybe exceptionally skilled people work at the hosting provider?). But I cannot know for sure, as I don't constantly monitor the hashes of all the files — that would require… well, a dedicated server running an SHA1-calculating script in a loop 24/7 :)
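(For the curious: such a monitoring script is trivial; it's a trusted machine to run it on that I lack. A Python sketch, with made-up paths:)

```python
import hashlib
import json
import time
from pathlib import Path

SITE_ROOT = Path("/var/www/itl")   # hypothetical document root
BASELINE = Path("baseline.json")   # hashes recorded at publish time

def snapshot(root: Path) -> dict:
    """Map every file under root to its SHA1 hex digest."""
    return {
        str(p): hashlib.sha1(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

if not BASELINE.exists():
    BASELINE.write_text(json.dumps(snapshot(SITE_ROOT), indent=2))

baseline = json.loads(BASELINE.read_text())
while True:
    for path, digest in snapshot(SITE_ROOT).items():
        # Alert only on modified files; newly added files are ignored here.
        if baseline.get(path) not in (None, digest):
            print(f"ALERT: {path} has changed!")
    time.sleep(3600)   # hardly needs to hash in a tight loop 24/7
```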

Of course, zillions of other websites work this very same way and present the very same problems.

Now, the ability to sign PDFs would be just a great solution here, because I could sign all those files with my certificate, and then all the people downloading stuff from ITL would know they are getting the original PDFs that were created on one of the ITL members' desktop computers, no matter how compromised the web server or the network connection might be.

For the same reasons, I would welcome it if others started doing the same, as currently I simply must assume every PDF I download from the net (and PDFs account for the majority of the file downloads I do) to be potentially malicious. So, I always open them in my Red or Yellow VM (depending on the source of the download), and only if everything "looks good" (a very fuzzy term, I know) might I decide to move the file to my host desktop (it's easier to work with PDFs on your host, and you should actually use your host desktop for something).

(Yes, I know, Kostya Kortchinsky, or Rafal, can sometimes escape from VMware, but I still believe that today the best isolation I can get on a desktop, without sacrificing much convenience, is via a type II hypervisor. It's horribly inelegant, but well, that's life.)

So, I read some more about this Adobe CDS, all excited about it and ready to spend a few hundred euros on a certificate, only to realize that it doesn't look as good as I thought.

The first disappointment comes from the fact that you must create the PDF using Adobe Acrobat software (not the Reader, but the commercial product). I've created all my PDFs using either Office (in the past) or iWork (today), and neither seems to offer a way to digitally sign a PDF. I would like a simple tool, say pdfsign.exe, that I could use to sign any PDF, no matter how I generated it. Also, not surprisingly, the Mac native PDF viewer (Preview) doesn't seem to recognize the digital signature, and I bet some Linux PDF viewers don't either.
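The core of such a hypothetical pdfsign-like tool would be embarrassingly small. Here is a sketch of detached signing in Python, using the third-party cryptography package (the key and file names are made up, and of course no PDF reader would understand such a detached signature; it merely illustrates the mechanism):

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def sign_file(path: str, key_path: str) -> None:
    """Write a detached RSA signature next to the file (path + '.sig')."""
    key = serialization.load_pem_private_key(
        open(key_path, "rb").read(), password=None)
    sig = key.sign(open(path, "rb").read(),
                   padding.PKCS1v15(),   # simple, widely understood padding
                   hashes.SHA256())
    open(path + ".sig", "wb").write(sig)

def verify_file(path: str, pubkey_path: str) -> bool:
    """Check the detached signature against a trusted public key."""
    pub = serialization.load_pem_public_key(open(pubkey_path, "rb").read())
    try:
        pub.verify(open(path + ".sig", "rb").read(),
                   open(path, "rb").read(),
                   padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Hypothetical usage:
# sign_file("quest_to_the_core.pdf", "itl_private.pem")
# assert verify_file("quest_to_the_core.pdf", "itl_public.pem")
```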

Worst of all, even Adobe Reader 9, which I tested under Windows and which correctly displayed all the CDS information, does one unbelievably stupid thing — it parses and renders the whole PDF before displaying the signature info. So, if you download a malicious PDF, Adobe Reader will happily open and parse it, without first asking whether you really want to open it (given that it is perhaps unsigned). At least I was unable to find an option that would force it to do that. So, if this PDF contained an exploit for the reader, it would surely get executed. Compare this with the (correct) behavior of Vista UAC, which presents the executable's signature details before executing it.

You can see how your software handles Adobe PDF signatures by looking e.g. at this example file signed by GlobalSign.

So, Adobe CDS, in the form it exists today, seems pretty useless as far as protection from potentially malicious PDFs is concerned (it surely has other positive applications, e.g. certifying the authenticity of a diploma).

But wouldn't it be great to have such a file signing mechanism globally adopted, and not only for PDFs, but for any sort of file, including ZIPs, tgz's, heck, even plain text files? And have our main OSes generically recognize those signatures and display unified prompts asking whether we want to allow an application to open the file or not? Perhaps, in some situations, we could even define policies for specific applications. This seems easy to do from the technical point of view — we just need to "hook" (oh, God, did I say "hook"?) high-level OS APIs like e.g. open() or CreateFile().
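To make the idea more concrete, here is a toy user-mode rendition of such a "hook" in Python: it wraps the interpreter's own open() and refuses to open covered file types unless a valid detached signature (as sketched above) sits next to the file. The policy and key file are made up, and a real implementation would of course live inside the OS, not in a single process:

```python
import builtins

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

_real_open = builtins.open
POLICY = (".pdf", ".zip", ".tgz")   # file types we insist on verifying
PUBKEY = "vendor_pub.pem"           # hypothetical trusted key

def _signature_ok(path: str) -> bool:
    # Use _real_open here so the check itself doesn't recurse through
    # the wrapper below.
    try:
        pub = serialization.load_pem_public_key(
            _real_open(PUBKEY, "rb").read())
        pub.verify(_real_open(path + ".sig", "rb").read(),
                   _real_open(path, "rb").read(),
                   padding.PKCS1v15(), hashes.SHA256())
        return True
    except (OSError, InvalidSignature):
        return False

def guarded_open(path, mode="r", *args, **kwargs):
    # Gate reads of covered file types on a valid detached signature.
    if "r" in mode and str(path).endswith(POLICY):
        if not _signature_ok(str(path)):
            raise PermissionError(f"{path}: signature missing or invalid")
    return _real_open(path, mode, *args, **kwargs)

builtins.open = guarded_open   # the "hook", installed process-wide
```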

What about PGP and the possibility of using it for signing any sort of file? Well, we use PGP a lot at ITL, but mainly for securing peer-to-peer communication (e.g. between us and our clients). There really is no good way to publish one's PGP key — the concept of the Web of Trust might be good for some closed groups of people, but not for publishing files "to the world". And, of course, the first thing an attacker who subverted the PDFs on our website would do is also subvert the PGP key displayed on the website. I also once tried to publish a PGP key to a key server, but got discouraged immediately after noticing it didn't use SSL for submission. BTW, does anybody know whether the key servers today use SSL? If not, how is trust established? Maybe email clients, e.g. Thunderbird, come with built-in PGP keys for select key servers?

So, I guess that was the main point of writing this post — to express how madly I would welcome a generic, OS-based, non-obligatory signature verification for files, based on PKI :)

Ah, before a dozen people jump to the comment box to tell me that digital signatures do not assure the non-maliciousness of anything — please don't, because I actually know that. In fact, it is not possible to assure the non-maliciousness of pretty much anything, especially without first strictly defining the ethical system we would like to use. What signatures provide is liability, so that I know whom to sue in case my naked holiday pictures get leaked to the public because of some malicious PDF exploiting my system. In that case I can sue either the actual person who signed the PDF (if this person is identifiable) or the certification authority that issued the certificate to a wrong (unidentifiable) person.

Tuesday, August 25, 2009

Vegas Toys (Part I): The Ring -3 Tools

We've just published the proof-of-concept code for Alex's and Rafal's "Ring -3 Rootkits" talk, presented last month at the Black Hat conference in Vegas. You can download the code from our website here. It's highly recommended that one (re)read the slides before playing with the code.

In short, the code demonstrates injection of arbitrary ARC4 code into a vPro-compatible chipset's AMT/ME memory using the chipset memory reclaiming attack. Check the README and the slides for more information.


The actual ARC4 code we distribute here is very simple: it sets up a DMA write transaction to the host memory every ca. 15 seconds in order to write the "ITL" string at predefined physical addresses (increased by 4 with every iteration). Of course, one can do DMA reads as well.


The ability to do DMA from the ARC4 code to/from the host memory is, in fact, all that is necessary to write a sophisticated rootkit or any other sort of malware, from funny jokers to secret sniffers. Your imagination (and good pattern searching) is the only limit here.
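(To illustrate the pattern-searching part with host-side Python, obviously not the ARC4 payload itself: here is how one might grep a physical-memory dump for the marker our PoC writes; the dump file name is hypothetical.)

```python
import mmap
import re

# Hypothetical dump of host physical memory, e.g. taken for verification.
with open("hostmem.bin", "rb") as f:
    mem = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Find every occurrence of the "ITL" marker that the PoC writes.
    for m in re.finditer(re.escape(b"ITL"), mem):
        print(f"marker at physical offset {m.start():#010x}")
    mem.close()
```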

Neither the OS, nor any software running on the host, can access our rootkit code, unless, of course, it used the same remapping attack we used to insert our code there :) But the rootkit might even cut off this avenue by locking down the remapping registers, thus fixing the vulnerability on the fly after exploiting it (of course, it would be insane for any AV to use our remapping attack in order to scan the ME space, but just for completeness ;)

An OS might attempt to protect itself from DMA accesses by the rootkit in the chipset by carefully setting up VT-d protections. Xen 3.3/3.4, for example, sets VT-d protections in such a way that our rootkit cannot access the Xen hypervisor memory. We can, however, access all the other parts of the system, which includes all the domains' memory (i.e. where all the interesting data are located). Still, it should be possible to modify Xen so that it sets up VT-d mappings so strictly that the AMT code (and the AMT rootkit) could not access any useful information in any of the domains. This, in fact, would be a good idea anyway, as it would also prevent other sorts of hardware-based backdoors (except for backdoors in the CPU).

An AMT rootkit can, however, get around such a savvy OS, because it can modify the OS's VT-d initialization code before the protections are set. Alternatively, if the protections were set before the rootkit was activated, the rootkit can force the system to reboot and boot it from the AMT Virtual CDROM (in fact, AMT has been designed to be able to do exactly that), which would contain rootkit agent code that modifies the to-be-loaded OS/VMM image so that it doesn't set up VT-d properly.

Of course, the proper solution against such an attack would be to use e.g. Intel TXT to assure a trusted boot of the system. In theory this should work. In practice, as you might recall, we have already shown how to bypass Intel TXT. This TXT bypass attack still works on most (all?) hardware, as there is still no STM available in the wild (all that is needed for the attack is a working SMM attack, and last month we showed 2 such attacks — see the slides from the BIOS talk).

Intel released a patch a day before our presentation at Black Hat. This is a cumulative patch that also targets a few other, unrelated problems, e.g. the SMM caching attack (also reported by Loic), the SMM nvacpi attack, and the Q45 BIOS reflashing attack (for which the code will also be published shortly).

Some of you might remember that Intel patched this very remapping bug last year, after our Xen 0wning Trilogy presentations, where we used the very same bug to get around the Xen hypervisor protections. However, Intel forgot about one small detail — namely, it was perfectly possible for malware to downgrade the BIOS to the previous, pre-Black-Hat-2008 version without any user consent (after all, this old BIOS file was also digitally signed by Intel). So, with just one additional reboot (but no user intervention needed), malware could still use the old remapping bug, this time to get access to the AMT memory. The recent patch mentioned above solves this problem by displaying a prompt during the reflash boot if reflashing to an older version of the BIOS, so a downgrade now requires user intervention (physical presence). This "downgrade protection" works, however, only if an administrator password is enabled in the BIOS.

We could get into the AMT memory on Q35, however, even if the downgrade attack were not possible. In that case we could use our BIOS reflashing exploit (the subject of the other Black Hat presentation).

However, the situation looks different on Intel's latest Q45 chipsets (which also have AMT). As explained in the presentation, we were unable to get access to the AMT memory on those chipsets, even though we can reflash the BIOS there and, consequently, get rid of all the chipset locks (e.g. the remapping locks). Still, the remapping doesn't seem to work for the one memory range where the AMT code resides.

This suggests Intel added some additional hardware to the Q45 chipset (and other Series 4 chipsets) to prevent this very type of attack. But we're not giving up on Q45 yet, and we will be trying other attacks as soon as we recover from the holiday laziness ;)

Finally, a nice picture of the Q35 chipset (MCH), where our rootkit lives :) The ARC4 processor is somewhere inside...

Thursday, July 30, 2009

Black Hat 2009 Slides

The wait is over. The slides are here. The press release is here. Unless you're a chipset/BIOS engineer kind of person, I strongly recommend reading the press release first, before opening the slides.

So, the "Ring -3 Rootkit" presentation is about vPro/AMT chipset compromises. The "Attacking Intel BIOS" presentation is about exploiting a heap overflow in BIOS environment in order to bypass reflashing protection, that otherwise allows only Intel-signed updates to be flashed.

We will publish the code some time after we get back from Vegas.

Enjoy.

ps. Let me remind my dear readers that the files hosted on the ITL website are not digitally signed and are served over a plaintext connection (HTTP). In addition, ITL's website is hosted on a 3rd-party provider's server, over which we have absolutely no control (which is the reason why we don't buy an SSL certificate for the website). Never trust unsigned files that you download from the Internet. ITL cannot be held liable for any damage caused by files downloaded from our website, unless they are digitally signed.

Friday, July 17, 2009

Interview

Alan Dang from Tom's Hardware did an interview with me. I talk there about quite a lot of things, many of which I would probably write about on this blog sooner or later (or already have), so I thought it might be of interest to the readers of this blog.

Friday, June 12, 2009

Virtualization (In)Security Training in Vegas

VM escapes, hypervisor compromises (via "classic" rootkits, as well as Bluepill-like rootkits), hypervisor protection strategies, SMM attacks, TXT bypassing, and more — these are some of the topics that will be covered by our brand new training on Virtualization (In)Security at the upcoming Black Hat USA.

The training offers quite a unique chance, I think, to absorb the results of over a year of research done by our team within... just 2 days. This will be delivered via detailed lectures and unique hands-on exercises.

Unlike our previous training on stealth malware (which will also be offered this year, BTW), this time we will offer attendees a bit of hope :) We will be stressing that some of the new hardware technologies (Intel TXT, VT, TPM), if used properly, have the potential to dramatically increase the security of our computer systems. Sure, we will be showing attacks against those technologies (e.g. TXT), but nevertheless we will stress that this is the proper way to go in the long run.

Interestingly, I'm not aware of any similar training that covers the security issues related to virtualization systems and bare-metal hypervisors. I hope we don't get into trouble with the Antitrust Commission for monopolizing this field ;)

The training brochure (something for your boss) is here.

The detailed agenda spanning 2 full days can be downloaded here.

The Black Hat signup page is here.