Ah, there is no feeling like seeing your name in the news while drinking your morning coffee... In this piece one Steve Riley, a senior security strategist at Microsoft, decided to "rebut" the results of our recent Black Hat research presentations.
Mr. Riley was quoted by ZDNet as saying:
"Her [Joanna Rutkowska] insistence is that you can replace the hypervisor without anybody knowing... Our assertion is that this is incorrect," Riley told the audience. "First of all, to do these attacks you need to become administrator at the root. So that's going to be, on an appropriately configured machine, an exceedingly difficult thing to happen."
Apparently, Mr. Riley has never seen the Black Hat presentations (or at least the slides) that he is referring to (oh wait, isn't that the typical case with all our "refuters"?)...
First, we never said anything about replacing the hypervisor. I really have no idea where Mr. Riley got this idea. Replacing the hypervisor - that would indeed be an insane thing for us to do!
Second, it is not true that the attacker needs to become an administrator "at the root" (I assume he means the root partition or administrative domain here). The attack we presented in our second talk, which exploited a heap overflow in the Xen hypervisor's FLASK module, could have been conducted from an unprivileged domain, as we demonstrated during the presentation.
Mr. Riley continues with his vision:
"Because you [the attacker] didn't subject your own replacement hypervisor through the thorough design review that ours did, I'll bet your hypervisor is probably not going to implement 100 percent of the functionality as the original one," Riley said. "There will be a gap or two and we will be able to detect that."
Well, if he had only taken the effort to look at our slides, he would have realized that, in the case of XenBluePill, we slip it beneath (not replace!) the original hypervisor and then run the original one nested. So all the functionality of the original hypervisor is preserved.
Mr. Riley also shares some other groundbreaking thoughts in this article, but I think we can leave them without comment ;)
This situation is actually pretty funny - on one side we have the words and feelings of a Microsoft executive, and on the other our three technical presentations, all the code that we released for those presentations, plus a few demos. Yet it is apparently still newsworthy to report what Mr. Riley's feelings are...
Let me, however, write one more time that I'm (still) not a Microsoft hater. There are many people at Microsoft whom I respect: Brandon Baker, Neil Clift, the LSD guys, Mark Russinovich, and probably a few more that I just haven't had the occasion to meet in person or have forgotten about at the moment. It is thus all the sadder that people like Mr. Riley are also associated with Microsoft, and, what's worse, that they are the face of Microsoft for the majority of people. Throwing a party in Vegas and Amsterdam once a year is certainly not enough to change Microsoft's image in this case...
Interestingly, if Mr. Riley had only attended our Xen 0wning Trilogy at Black Hat, he would have noticed that we were actually very positive about Hyper-V. Of course, I pointed out that Xen 3.3 certainly has a more secure architecture right now, but I also said that I knew (from talking to some MS engineers from the virtualization group) that Hyper-V is going to implement similar features in the next version(s) and that this is very good. I also praised the fact that it has only about 100k LOC (vs. about 300k LOC in Xen 3.3).
So, Mr. Senior Security Strategist, I suggest you do your homework more carefully next time before throwing mud at others and trying to negate the value of their work (thereby also wasting all the efforts of Microsoft's PR people).
On a separate note, I found it quite unprofessional that ZDNet's Liam Tung and Tom Espiner, the authors of the article, didn't ask me for a comment before publishing it. Not to mention that they also misspelled Rafal's name and forgot to mention Alex, the third co-author of the presentations.
Sunday, September 07, 2008
Saturday, September 06, 2008
Xen 0wning Trilogy: code, demos and q35 attack details posted
We have posted all the code that we used last month during our Black Hat presentations about Xen security, and you can get it here. This includes the full source code for:
1) The generic Xen Loadable Modules framework
2) Implementation of the two Xen Hypervisor Rootkits
3) The Q35 exploit
4) The FLASK heap overflow exploit
5) The BluePillBoot (with nested virtualization support on SVM)
6) The XenBluePill (with nested virtualization support on SVM)
Beware: the code is far from user-friendly, and it requires advanced Linux/Xen, C, and system-level programming skills to tweak some constants and run it successfully on your system. Please do not send us questions about how to compile or run it, as we don't have time to answer them. Also, do not send questions about how the code works - if you can't figure it out by reading our slides and the source code, then you should probably spend more time on it yourself. On the other hand, we would appreciate any constructive feedback.
The code is our gift to the research community. There is no warranty, and Invisible Things Lab takes no responsibility for any potential damage this code might cause (e.g. by rebooting your machine), for any potential malicious usage of this code, or for any other code built on top of it. We believe that by publishing this code we help to create more secure systems in the future.
We have also posted the full version of our second Black Hat talk, which now includes all the slides about the Q35 bug and how we exploited it. Those slides had to be removed from the original Black Hat presentation, as the patch was still unavailable at that time.
Tuesday, September 02, 2008
The three approaches to computer security
If we look at computer systems and how they try to provide security, I think we can group those attempts into three broad categories:
1) Security by Correctness
2) Security by Isolation
3) Security by Obscurity
Let's discuss those categories in more detail below.
Security by Correctness
The assumption here is obvious: if we can produce software that doesn't have bugs (nor any maliciously behaving code), then we don't have security problems at all. The only problem is that we don't have any tools to make sure that a given piece of code is correct (in terms of implementation, design, and ethical behavior). Still, if we look at various efforts in computer science, we will notice that a lot of work has gone into achieving Security by Correctness: "safe" languages, code verifiers (although not sound ones, just heuristic-based), developer education, manual code audits, etc. Microsoft's famed Security Development Lifecycle is all about Security by Correctness. The trouble is that all those approaches sometimes work and sometimes do not, sometimes they miss a bug, and there are also problems that I simply don't believe can be addressed by automatic code verifiers or even safe languages, e.g. logic/design bugs or deciding whether a given piece of code behaves maliciously or not (after all, this is an ethical problem in many cases, not a computer science problem).
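To make the "implementation bug" class a bit more concrete, here is a tiny, purely hypothetical C fragment (the save_name() helper is invented for illustration and not taken from any real project) showing the sort of off-by-one error that safe languages and code verifiers are meant to catch long before an attacker does:

#include <string.h>

/* Hypothetical helper: copy a user-supplied name into a fixed-size buffer.
 * The bound check uses "<=" where it should use "<", so a 16-character
 * name makes memcpy() write 17 bytes (including the terminating NUL)
 * into a 16-byte buffer - a classic implementation bug that a safe
 * language or a sound verifier would reject. */
void save_name(const char *name)
{
    char buf[16];
    size_t len = strlen(name);

    if (len <= sizeof(buf))          /* off-by-one: should be "len < sizeof(buf)" */
        memcpy(buf, name, len + 1);  /* +1 copies the terminating NUL byte */

    /* ... buf would be used here ... */
}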
To sum up: I think that in some more or less distant future (some people talk about a timeframe of 50 years or so) we will get rid of all implementation bugs, thanks to safe languages and/or sound code verifiers. But I don't believe we can assure the correctness of software at any level of abstraction higher than the implementation level.
Security by Isolation
Because of the problems with effectively implementing the Security by Correctness approach, people have, from the very beginning, also taken another approach, one based on isolation. The idea is to split a computer system into smaller pieces and make sure that each piece is separated from the others, so that if one gets compromised or malfunctions, it cannot affect the other entities in the system. Early UNIX user accounts and separate process address spaces, things that are now present in every modern OS, are examples of Security by Isolation.
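As a minimal illustration of the user-account flavor of isolation (a generic sketch, unrelated to the Xen code discussed elsewhere on this blog; the "nobody" account is just an assumption, a real daemon would use its own dedicated user), a typical UNIX daemon drops its privileges to an unprivileged account right after initialization, so that whatever happens in the worker code later is confined by the kernel's per-user isolation:

#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* Look up a commonly present unprivileged account (assumption). */
    struct passwd *pw = getpwnam("nobody");
    if (pw == NULL) {
        fprintf(stderr, "user \"nobody\" not found\n");
        return EXIT_FAILURE;
    }

    /* Drop group privileges first, then user privileges. */
    if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
        perror("dropping privileges");
        return EXIT_FAILURE;
    }

    /* From this point on the process runs as "nobody": even if the code
     * below gets compromised, the kernel's user-based isolation limits
     * what the attacker can touch (no root-owned files, no other users'
     * processes, etc.). */
    printf("now running as uid=%d\n", (int)getuid());
    return EXIT_SUCCESS;
}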
Simple as it sounds, in practice the isolation approach has turned out to be very tricky to implement. One problem is how to partition the system into meaningful pieces and how to set permissions for each piece. The other problem is implementation - e.g. if we take a contemporary consumer OS, like Vista, Linux, or Mac OS X, all of them have monolithic kernels, meaning that a single bug in any of the kernel components (think: hundreds of 3rd-party drivers running there) allows an attacker to bypass the isolation mechanisms the kernel provides to the rest of the system (process separation, ACLs, etc.).
Obviously the problem here is that the kernels are monolithic. Why not implement Security by Isolation at the kernel level, then? Well, I would personally love that approach, but the industry simply took another course and decided that monolithic kernels are better than micro-kernels, because it's easier to write the code for them and (arguably) they offer better performance.
Many, including myself, believe that this landscape can be changed by virtualization technology. A thin bare-metal hypervisor, like e.g. Xen, can act like a microkernel and enforce isolation between the other components in the system - e.g. we can move drivers into a separate domain and isolate them from the rest of the system. But again there are challenges here, on both the design and the implementation level. For example, we should not put all the drivers into the same domain, as that would provide little improvement in security. Also, how do we make sure that the hypervisor itself is not buggy?
Security by Obscurity (or Security by Randomization)
Finally, we have the Security by Obscurity approach, which is based on the assumption that we cannot get rid of all the bugs (just as in the Security by Isolation approach), but that we can at least make the exploitation of those bugs very hard. So it's all about making our system unfriendly to the attacker.
Examples of this approach include Address Space Layout Randomization (ASLR, present in all newer OSes, like Linux, Vista, OS X), StackGuard-like protections (again used by most contemporary OSes), pointer encryption (Windows and Linux), and probably some other mechanisms that I can't remember at the moment. Probably the most extreme example of Security by Obscurity would be to use a compiler that generates heavily obfuscated binaries from the source code and creates unique (on the binary level) instances of the same system. Alex did his PhD on this topic and is an expert on compilers and obfuscators.
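As a quick way to watch ASLR at work (a minimal sketch, assuming a Linux-like system with ASLR enabled and, for the code address, a binary built as position-independent), the program below just prints a few addresses; run it a couple of times and they should differ between runs, which is exactly what breaks an exploit that relies on hard-coded addresses:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int stack_var = 0;
    void *heap_ptr = malloc(16);

    /* With ASLR, the base addresses of the stack, the heap and (for
     * position-independent binaries) the code segment are randomized
     * at every execution, so these values change from run to run. */
    printf("stack: %p\n", (void *)&stack_var);
    printf("heap : %p\n", heap_ptr);
    printf("code : %p\n", (void *)main);

    free(heap_ptr);
    return 0;
}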
The obvious disadvantage of this approach is that it doesn't prevent the bugs from being exploited - it only makes meaningful exploitation very hard or even impossible. But if one is also concerned about e.g. DoS attacks, then Security by Obscurity will not prevent them in most cases. The other problems with obfuscating the code are performance (the compiler cannot optimize the code for speed) and maintenance (if we got a crash dump from an "obfuscated" Windows box, we couldn't count on help from technical support). Finally, there is the problem of proving that the whole scheme is correct, i.e. that our obfuscator (or e.g. our ASLR engine) doesn't introduce bugs into the generated code, leaving us with random crashes later (which we would most likely be unable to debug, as the code would be obfuscated).
I wonder whether the above categorization is complete or whether I have forgotten something. If you know an example of a security approach that doesn't fit here (besides blacklisting), please let me know!