Thursday, October 25, 2007

I just came back from Stockholm where I attended the Virtualization Forum and saw several quite interesting vendor presentations. One that caught my attention was a talk by VMware, especially the part about the new ESX 3i hypervisor, which was presented as "razor-thin". This "razor-thin" hypervisor will have, according to VMware, a footprint "of only 32MB".
My regular readers might sense that I'm being a bit ironic here. Well, in my opinion 32MB of code is definitely not a "razor-thin" hypervisor; it's not even close to a thin hypervisor... But why am I so picky about it? Is it really that important?
Yes, I think so, because the bigger the hypervisor, the greater the chance that there is a bug somewhere in it. And one of the reasons for using virtual machines is to provide isolation. Even if we use virtualization for business reasons (server consolidation), we still want each VM to be properly isolated, to make sure that if an attacker "gets into" one VM, she will not be able to 0wn all the other VMs on the same hardware… In other words, isolation of VMs is an extremely important feature.
During my presentation I also talked about thin hypervisors. I first referenced a few bugs that had been found in various VMMs in recent months by other researchers (congrats to Rafal Wojtczuk of McAfee for some interesting-looking bugs in VMware and Microsoft products). I used them as an argument that we should move towards a very thin hypervisor architecture, exploiting hardware virtualization extensions as much as possible (e.g. Nested Paging/EPT, IOMMU/DEV/NoDMA, etc.) and avoiding doing things "in software".
Nobody should really be surprised to see VMM bugs – after all, we have seen so many bugs in OS kernels over the years, so we will no doubt see more and more bugs in VMMs, unless we switch to very thin hypervisors, so thin that a single person could understand and verify their code. Only then will we be able to talk about a security advantage (in terms of isolation) offered by VMMs over traditional OSes.
I couldn't refrain from mentioning that the existence of those bugs in popular VMMs clearly shows that having a VMM already installed doesn't currently protect against the "Blue Pill threat" – contrary to a claim often made by some virtualization vendors, who notoriously try to diminish the importance of this problem (i.e. the problem of virtualization-based malware).
I also announced that Invisible Things Lab has just started working with Phoenix Technologies. Phoenix is the world leader in system firmware, particularly known for providing BIOSes for PCs for almost 25 years, and is currently working on a new product called HyperCore, which will be a very thin and lightweight hypervisor for consumer systems. ITL will be helping Phoenix ensure the security of this product.
The HyperCore hypervisor will use all the latest hardware virtualization extensions, e.g. Nested Paging/EPT, to minimize unnecessary complexity and to keep the performance impact negligible. For the same reasons, I/O accesses will go through almost natively, just as in the case of our Blue Pill...
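To give a rough idea of why nested paging keeps a hypervisor so simple, below is a minimal, purely illustrative C sketch of building an identity-mapped EPT hierarchy out of 2MB pages, so that guest-physical addresses simply equal host-physical ones – the kind of mapping that lets I/O pass through almost natively. The constants follow Intel's published EPT entry format, but the code is my own sketch, not actual HyperCore or Blue Pill source:

    #include <stdint.h>
    #include <string.h>

    /* Illustrative sketch only: an identity-mapped EPT hierarchy built
     * from 2MB pages, so that guest-physical == host-physical. With
     * nested paging a thin hypervisor needs little more than this,
     * instead of the complex shadow-paging code a software VMM needs. */

    #define EPT_READ     (1ULL << 0)
    #define EPT_WRITE    (1ULL << 1)
    #define EPT_EXECUTE  (1ULL << 2)
    #define EPT_2MB_PAGE (1ULL << 7)  /* "large page" bit in a PDE */
    #define EPT_MEM_WB   (6ULL << 3)  /* write-back type, leaf entries only */
    #define RWX          (EPT_READ | EPT_WRITE | EPT_EXECUTE)

    static uint64_t pml4[512]  __attribute__((aligned(4096)));
    static uint64_t pdpt[512]  __attribute__((aligned(4096)));
    static uint64_t pd[4][512] __attribute__((aligned(4096)));

    /* Maps the first 4GB one-to-one. In a real hypervisor the table
     * entries must hold *physical* addresses of the lower-level tables;
     * here plain pointers stand in for them to keep the sketch short. */
    uint64_t build_identity_ept(void)
    {
        memset(pml4, 0, sizeof pml4);
        memset(pdpt, 0, sizeof pdpt);
        memset(pd,   0, sizeof pd);

        pml4[0] = (uint64_t)(uintptr_t)pdpt | RWX;   /* covers 512GB */

        for (int i = 0; i < 4; i++) {                /* 4 x 1GB */
            pdpt[i] = (uint64_t)(uintptr_t)pd[i] | RWX;
            for (int j = 0; j < 512; j++) {          /* 512 x 2MB each */
                uint64_t gpa = ((uint64_t)i * 512 + j) << 21;
                pd[i][j] = gpa | RWX | EPT_2MB_PAGE | EPT_MEM_WB;
            }
        }
        /* This value would go into the EPTP field of the VMCS (plus the
         * memory-type and page-walk-length bits, omitted here). */
        return (uint64_t)(uintptr_t)pml4;
    }

Note how little there is here for a bug to hide in, compared to a software MMU that must intercept and emulate every guest page-table update.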
Speaking of Blue Pill – Phoenix is also interested in further research on Blue Pill, which will be used as a test bed for trying out various ideas – e.g. nested virtualization, which might be adopted in future versions of HyperCore to allow users to run other commercial VMMs inside their already-virtualized OSes. Blue Pill's small size and minimal functionality make it a convenient tool for experimenting. Phoenix will also support The Blue Pill Project, which means that some parts of our research will be available to other researchers (including code)!
In case you feel like having a look at my slides, you can get them here.
Wednesday, October 17, 2007
Thoughts On Browser Rootkits
Petko D. Petkov from GNUCITIZEN wrote a post about Browser Rootkits, which inspired me to give some more thought to this subject. Petko is an active researcher in the field of client-side exploits (e.g. the recent Adobe Acrobat PDF flaw), so it's no surprise that he's thinking about browsers as a natural environment for rootkits and malware. It's also quite common to hear the opinion these days that browsers have become so complicated and so universal that they are almost operating systems rather than just standard applications.
In his post Petko gives several ideas for how browser-based malware could be created, and I'm sure that we will see more and more such malware in the near future (I would actually be surprised if it didn't exist already). His main argument for creating "Browser Rootkits" is that they would be "closer to the data", which is, of course, indisputable.
The other argument is the complexity of a typical browser, e.g. Firefox or Internet Explorer. The situation here seems very similar to the one we have with "classic" operating systems like Windows. Windows is so complex that nobody (including Microsoft) can really spot all the sensitive places in the kernel where a rootkit might "hook", so it's not possible to effectively monitor all those places. We have a similar problem with Firefox and IE because of their extensible architecture (think of all those plugins, add-ons, etc.): although we could examine the whole memory of the firefox.exe process, we still would not be able to decide whether something bad is in there or not.
I'm quite sure that my little malware taxonomy could also be used to classify Firefox or IE infections. E.g. browser malware of type 0 would be nothing more than additional plugins, installed using the official API and not trying to hide from the browser's reporting mechanisms (in other words, they would still be visible when the user asks the browser to list all installed plugins). And we would have type I and type II infections: the former would simply modify some code (be it the browser's own code or that of some other plugin), while the latter would hook some function pointers or change some data only – all this to hide the offensive module.
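To make the type II case more concrete, here is a toy, self-contained C program (all names are invented for the example; no real browser is organized like this internally) showing why a data-only hook is so hard to spot: the attacker swaps a single function pointer, every byte of code stays intact, and yet all "submitted" data flows through the malware first:

    #include <stdio.h>

    /* Toy model of a type II infection: only data (a function pointer)
     * changes, no code is modified, so verifying code integrity alone
     * would not detect it. */

    typedef void (*submit_handler_t)(const char *form_data);

    static void real_submit(const char *form_data)
    {
        printf("browser: submitting '%s'\n", form_data);
    }

    /* The "sensitive place": a writable dispatch table, analogous to the
     * countless function-pointer tables a real browser (or an OS kernel)
     * keeps in memory. */
    static submit_handler_t handlers[] = { real_submit };

    static void evil_submit(const char *form_data)
    {
        printf("malware: captured '%s'\n", form_data); /* exfiltrate... */
        real_submit(form_data);  /* ...then behave normally to stay hidden */
    }

    int main(void)
    {
        handlers[0]("user=alice&pass=secret");  /* clean run */
        handlers[0] = evil_submit;              /* the data-only "hook" */
        handlers[0]("user=alice&pass=secret");  /* hooked run, code untouched */
        return 0;
    }

A type I infection, by contrast, would overwrite the body of real_submit itself – which is exactly the kind of modification that code-integrity checking does catch.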
BTW, there is a little problem with classifying JIT-generated code – should tampering with it count as a type I or a type II infection? I don't know the answer for now and I welcome all feedback on this. And we can even imagine type III infections of browsers, but I will leave that as an exercise for my readers :)
So, should we expect classic, OS-based rootkits to die and the efforts of the malware community to move towards creating browser-based rootkits? I don't think so. While browser-based malware definitely is, and will remain, a more and more important problem, it has one disadvantage compared to classic OS-based malware: it's quite easy to avoid, or at least to minimize the impact of, browser-based rootkits. It's enough to use two different browsers – one for sensitive and the other for non-sensitive operations.
So, for example, I use IE for all my sensitive browsing (e.g. online banking, blogger access, etc.), and Firefox for all the casual browsing, which includes morning press reading, Google searching, etc. The reason I use Firefox for non-sensitive browsing is not that I think it's more secure (or better written) than IE, but that I like using NoScript, and there is no similar plugin for IE...
Of course, an attacker still might exploit my non-sensitive browser (Firefox) and then modify the configuration or executable files used by my sensitive browser (IE). However, this would require write access to those files. This is yet another reason why one should run the non-sensitive browser with limited privileges, and technologies like UAC in Vista help to achieve this. I wrote an article some time ago about how one can configure Vista to implement almost-full privilege separation.
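For the curious, the mechanism such privilege separation boils down to on Windows can be sketched with a few Win32 calls: derive a restricted token from our own (dropping all its privileges) and start the browser with it. This is only a rough sketch; the setup described in my article does more (separate accounts, and on Vista one would also lower the token's integrity level), and the Firefox path below is just an example:

    #include <windows.h>

    /* Rough sketch: start a "non-sensitive" browser with a restricted
     * token derived from our own. Link against advapi32. */

    int main(void)
    {
        HANDLE tok = NULL, restricted = NULL;
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi = { 0 };
        /* Example path only. */
        wchar_t cmd[] = L"\"C:\\Program Files\\Mozilla Firefox\\firefox.exe\"";

        if (!OpenProcessToken(GetCurrentProcess(),
                              TOKEN_DUPLICATE | TOKEN_ASSIGN_PRIMARY | TOKEN_QUERY,
                              &tok))
            return 1;

        /* Drop every privilege the token holds. */
        if (!CreateRestrictedToken(tok, DISABLE_MAX_PRIVILEGE,
                                   0, NULL, 0, NULL, 0, NULL, &restricted))
            return 1;

        /* Allowed without special privileges, because the new token is a
         * restricted version of our own primary token. */
        if (!CreateProcessAsUserW(restricted, NULL, cmd, NULL, NULL, FALSE,
                                  0, NULL, NULL, &si, &pi))
            return 1;

        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
        return 0;
    }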
Of course, even if we decide to use two different browsers – one for sensitive and the other for non-sensitive browsing – an attacker still might be able to break out of the account protection via a kernel-mode exploit (e.g. exploiting one of the bugs that Alex and I presented in Vegas this year). However, that would not be browser malware anymore – that would be good old kernel-mode malware :)
A solution to this problem will probably be to use a virtual machine to run the non-sensitive browser. Even today one can download e.g. the Browser Appliance from VMware, and I think we will see more and more solutions like this in the coming years. This, BTW, will probably stimulate more research into VM escapes and virtualization-based malware.
Of course, the very important and sometimes non-trivial question is how to decide which type of browsing is sensitive and which is non-sensitive. E.g. most people will agree that online banking is sensitive browsing, but what about webmail? Should I use my sensitive or my non-sensitive browser for accessing my mail via the web? Using the sensitive browser for webmail is dangerous, as it's quite possible that it could get infected via some malicious mail sitting in our inbox. But using the non-sensitive browser for webmail is not a good solution either, as most people consider mail sensitive and would not want the possibly-compromised browser to learn the mailbox password.
I avoid this problem by not using a browser for webmail and by having a special account just for running the Thunderbird application (see again my article on how to do this in Vista). It works well for me.
Of course, one could do the same for browsers – i.e. instead of having two browsers (sensitive and non-sensitive), one could have three or more (maybe even three different virtual machines). But then the question is how many accounts we should use: one for email, one for sensitive browsing, one for non-sensitive browsing, one for accessing personal data (e.g. pictures)...? I guess there is no universally good answer; it depends on the specific situation (i.e. one configuration for a home user who uses the computer mostly for "fun", a different one for somebody using the same computer for both work and "fun", etc.).
On a side note – I really don't like the idea of using a web browser to do "everything". I like using a browser to do browsing, and specialized applications to do other things. I like having my data on my local hard drive. It's quite amazing that so many people these days use Google not only for searching, but also for email, calendaring and document editing – it's like handing over all your life's secrets on a plate! Google can now correlate all your web search queries with a specific email account, see who you are meeting with tomorrow evening, and even know what new product your company will be presenting next week, since you prepared your presentation using Google Documents. I'm not sure whether it's Google or people's naivety that disturbs me more...