Petko D. Petkov from GNUCITIZEN wrote a post about Browser Rootkits, which inspired me to give some more thought to this subject. Petko is an active researcher in the field of client-side exploits (e.g. the recent Adobe Acrobat PDF flaw), so it's no surprise that he's thinking about browsers as a natural environment for rootkits or malware. It's also quite common to hear the opinion these days that browsers have become so complicated and so universal that they are almost operating systems rather than just standard applications.
In his post Petko gives several ideas for how browser-based malware could be created, and I'm sure that we will see more and more such malware in the near future (I would actually be surprised if it didn't exist already). His main argument for creating "Browser Rootkits" is that they would be "closer to the data", which is, of course, indisputable.
The other argument is the complexity of a typical browser like Firefox or Internet Explorer. We seem to have a situation very similar to the one with "classic" operating systems like Windows. Windows is so complex that nobody (including Microsoft) can really spot all the sensitive places in the kernel where a rootkit might "hook", so it's not possible to effectively monitor all those places. We have a similar problem with Firefox and IE because of their extensible architecture (think of all those plugins, add-ons, etc.): even though we could examine the whole memory of the firefox.exe process, we still would not be able to decide whether something bad is in there or not.
I'm even quite sure that my little malware taxonomy could be used here to classify Firefox or IE infections. E.g. browser malware of type 0 would be nothing more than additional plugins, installed using the official API and not trying to hide from the browser's reporting mechanisms (in other words, they would still be visible when users ask the browser to list all installed plugins). And we would have type I and type II infections: the former would simply modify some code (be it the code of the browser itself or maybe of some other plugin), while the latter would only hook some function pointers or change some data – all in order to hide the offensive module.
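To make the type 0 / type II distinction more concrete, here is a toy Python sketch. It models a "browser" with a plugin registry; nothing here corresponds to any real browser's internals, it just illustrates the idea of hooking a reporting function pointer to hide a module:

```python
# Toy model of a browser plugin registry -- purely illustrative,
# not any real browser API.

class Browser:
    def __init__(self):
        self.plugins = []

    def install(self, name):
        # "Type 0" malware would use this official API and stay
        # visible in the plugin listing below.
        self.plugins.append(name)

    def list_plugins(self):
        return list(self.plugins)

browser = Browser()
browser.install("NoScript")
browser.install("EvilPlugin")
print(browser.list_plugins())   # both visible: a type 0 "infection"

# A "type II"-style infection: instead of patching any code, hook the
# reporting function pointer (here: rebind the method) so that the
# offensive module is filtered out of the listing.
original_list = browser.list_plugins
browser.list_plugins = lambda: [p for p in original_list()
                                if p != "EvilPlugin"]
print(browser.list_plugins())   # EvilPlugin is now hidden
```

A type I infection would instead modify the code of list_plugins itself; the observable effect is the same, but the detection strategy differs (code integrity checking vs. verifying data and function pointers).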
BTW, there is a little problem with classifying JIT-generated code – should that be a type I or a type II infection? I don't know the answer for now and I welcome all feedback on this. We can even imagine type III infections of browsers, but I will leave that as an exercise for my readers :)
So, should we expect classic, OS-based rootkits to die and the efforts of the malware community to move towards browser-based rootkits? I don't think so. While browser-based malware definitely is, and will continue to be, a more and more important problem, it has one disadvantage compared to classic OS-based malware: it's quite easy to avoid, or at least to minimize the impact of, browser-based rootkits. It's enough to use two different browsers – one for sensitive and the other for non-sensitive operations.
So, for example, I use IE to do all my sensitive browsing (e.g. online banking, blogger access, etc.), while I use Firefox for all the casual browsing, which includes morning press reading, Google searching, etc. The reason I use Firefox for non-sensitive browsing is not that I think it's more secure (or better written) than IE, but that I like using NoScript and there is no similar plugin for IE...
Of course, an attacker might still exploit my non-sensitive browser (Firefox) and then modify configuration or executable files that are used by my sensitive browser (IE). However, this would require write access to those files. This is yet another reason why one should run the non-sensitive browser with limited privileges, and technologies like UAC in Vista help to achieve that. I wrote an article some time ago about how one can configure Vista to implement almost-full privilege separation.
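For those who just want the rough idea, here is a minimal sketch of starting the non-sensitive browser under a separate, limited Windows account via the standard runas command ("websurfer" is a hypothetical account name; my article describes a more complete setup):

```python
import subprocess

# Start Firefox under a limited account; runas will prompt for the
# "websurfer" account's password. Path and account name are examples.
subprocess.run([
    "runas",
    "/user:websurfer",
    r"C:\Program Files\Mozilla Firefox\firefox.exe",
])
```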
Of course, even if we decide to use two different browsers – one for sensitive and the other for non-sensitive browsing – an attacker might still be able to break out of the account protection via a kernel-mode exploit (e.g. exploiting one of the bugs that Alex and I presented in Vegas this year). However, this would not be browser malware anymore – this would be good old kernel-mode malware :)
A solution to this problem will probably be the use of a virtual machine to run the non-sensitive browser. Even today one can download e.g. the Browser Appliance from VMware, and I think we will see more and more solutions like this in the coming years. This, BTW, will probably stimulate more research into VM escapes and virtualization-based malware.
Of course, a very important and sometimes non-trivial question is how to decide which kind of browsing is sensitive and which is non-sensitive. E.g. most people will agree that online banking is sensitive browsing, but what about webmail? Should I use my sensitive or my non-sensitive browser for accessing my mail via the web? Using the sensitive browser for webmail is dangerous, as it's quite possible that it could get infected via some malicious mail sitting in the inbox. But using the non-sensitive browser for webmail is not a good solution either, as most people would consider mail to be sensitive and would not want a possibly-compromised browser to learn the password to the mailbox.
I avoid this problem by not using a browser for webmail at all and by having a special account just for running the Thunderbird application (see again my article on how to do this in Vista). It works well for me.
Of course, one could also do the same for browsers – i.e. instead of having 2 browsers (sensitive and non-sensitive), one could have 3 or more (maybe even 3 different virtual machines). But the question is: how many accounts should we use? One for email, one for sensitive browsing, one for non-sensitive browsing, one for accessing personal data (e.g. pictures)...? I guess there is no single good answer here and it depends on the specific situation (i.e. one configuration for a home user who uses the computer mostly for "fun", a different one for somebody using the same computer for both work and "fun", etc.).
On a side note – I really don't like the idea of using a web browser to do "everything". I like using a browser to do browsing, and specialized applications to do other things. I like having my data on my local hard drive. It's quite amazing that so many people these days use Google not only for searching, but also for email, calendaring and document editing – it's like handing over all your life's secrets on a plate! Google can now correlate all your web search queries with a specific email account, see who you are meeting with the next evening, and even know what new product your company will be presenting next week, because you prepared your presentation using Google Documents. I'm not sure whether it's Google or people's naivety that disturbs me more...
Wednesday, October 17, 2007
6 comments:
It's worth noting that Google services are not protected with encryption. So working on data stored remotely allows not only Google to steal that data – even a passive eavesdropper can do it.
When I was creating my GMail account I was impressed that the entire HTTP session was encrypted using SSL (AFAIR no big free mail provider did that at the time). But Google has changed its encryption policy, and now only the login phase is protected (I think SSL turned out to be too CPU-expensive).
Luckily, POP3 connections are still encrypted (probably because only a few percent of people use this access method). So this is another reason to use a standalone mail client.
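For the record, a standalone client talks to Gmail over an encrypted POP3 session out of the box. A minimal Python sketch (the address and password are placeholders; pop.gmail.com port 995 is Gmail's documented POP3-over-SSL endpoint):

```python
import poplib

# Connect to Gmail's POP3 server over SSL -- the whole session,
# not just the login, is encrypted.
conn = poplib.POP3_SSL("pop.gmail.com", 995)
conn.user("someone@gmail.com")   # placeholder address
conn.pass_("secret")             # placeholder password
msg_count, mbox_size = conn.stat()
print(msg_count, "messages,", mbox_size, "bytes")
conn.quit()
```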
RE: anonymous' comment:
"But Google has changed its encryption policy, and now only login phase is protected"
Entire Gmail sessions are now encrypted via HTTPS – try starting with https://www.gmail.com instead of http://www.gmail.com (note the s).
"one could have 3 or more (maybe even 3 different virtual machines)."
That made me smile, Joanna, because it's just what I've been doing too. I've got 3 virtual machines, named White, Gray, and Black. The names are self-explanatory, I think. I don't use them just for browsers but for all applications that deserve to be sandboxed.
What I find particularly useful are VM snapshots. Whenever I need to run a suspicious application, I take a snapshot of the "Black" VM, boot it, and run the application within it. Then I simply revert to the last VM snapshot. It's an almost perfect form of "System Restore". VMware and VirtualBox (free and open source) have very good snapshot managers, and even Virtual PC can do it (albeit with only one snapshot).
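The whole cycle can even be scripted. A rough sketch using VMware's vmrun command-line tool (the .vmx path and snapshot name are placeholders; snapshot, start, stop and revertToSnapshot are standard vmrun commands):

```python
import subprocess

VMX = "/vms/Black/Black.vmx"   # placeholder path to the "Black" VM

def vmrun(*args):
    # vmrun ships with VMware Workstation/Server.
    subprocess.run(["vmrun", *args], check=True)

vmrun("snapshot", VMX, "before-suspicious-app")  # checkpoint first
vmrun("start", VMX)                              # boot and run the app...
# ...then throw away whatever the application did:
vmrun("stop", VMX, "hard")
vmrun("revertToSnapshot", VMX, "before-suspicious-app")
```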
hi joanna :)
i think we could see some popular plugins being hijacked as "malware hosts", possibly containing code that uses steganographic communication channels through popular file formats (pdf, mpeg, mp3, etc.) – if it's not being done already.
there is the problem for the malware writer of finding some way around verification of executable files.. but that's probably less of an issue.
if i have physical access to your machine, for example, and just want to spy on your browsing habits, i'm going to assume that you'll find an OS kernel rootkit quicker than a browser-based plugin :)
since you mentioned online banking, i believe a more subtle approach to spying on you (theoretically, of course!!) is to replace cryptographic components in the operating system.. that's again assuming i can do it physically.
it's a simple backdoor, but one that is much more difficult to detect.
k
everyone should read "Same-Origin Policy Part 2: Server-Provided Policies?"
i use multiple firefox profiles and name them things like "gmail", "gdocs", "gcal", "gread", "myspace", "reddit", "digg", etc. the google accounts use different usernames.
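for illustration only, here's roughly how such a per-identity profile setup can be scripted (python sketch; -CreateProfile, -P and -no-remote are standard firefox command-line switches, and the profile names are just examples):

```python
import subprocess

# One Firefox profile per web identity; creating a profile that
# already exists fails harmlessly.
profiles = ["gmail", "gdocs", "gcal", "gread", "myspace", "reddit", "digg"]

for p in profiles:
    subprocess.run(["firefox", "-CreateProfile", p])

# -no-remote lets several profile instances run side by side.
subprocess.Popen(["firefox", "-P", "gmail", "-no-remote"])
```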
one of the ideas not discussed in the above blog post and commentary is LocalRodeo and its concepts for preventing DNS rebinding (anti-DNS-pinning) and CSRF attacks. if XSS can also be prevented with NoScript, FireKeeper, and/or XSS Warning, then CSRF and other attacks become more and more unlikely. combine SafeCache and SafeHistory, PublicFox, and FormFox, and you basically have a good solution covering all the major web-based attacks.
Beyond the above, there are certainly other things you can do. Running Java, Flash, Acrobat, QuickTime, or JavaScript at all poses risks, but certain aspects of each can be turned off or restricted in your browser. Turn Java, Flash, Acrobat, QuickTime, etc. off (i.e. make the browser download the files instead of rendering them). For JavaScript, NoScript, XSS Warning, and FireKeeper should keep you in good shape, but other "Allow script" directives can be turned off as well, and all the about:config "dom.disable_window*" values can be set to "true" (a sketch of scripting this follows).
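as a rough illustration, here is how those prefs could be pushed into a profile's user.js from python (the profile directory is a placeholder – the real one sits under the firefox profiles folder; the pref names are real dom.disable_window* settings):

```python
import os

# Placeholder profile directory -- substitute your own
# (e.g. ~/.mozilla/firefox/<random>.<profilename> on Linux).
PROFILE_DIR = os.path.expanduser("~/.mozilla/firefox/xxxxxxxx.default")

prefs = [
    "dom.disable_window_move_resize",
    "dom.disable_window_open_feature.location",
    "dom.disable_window_open_feature.status",
    "dom.disable_window_status_change",
]

# user.js is re-read at every browser start and overrides prefs.js.
with open(os.path.join(PROFILE_DIR, "user.js"), "a") as f:
    for name in prefs:
        f.write('user_pref("%s", true);\n' % name)
```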
also of note is gnucitizen's post on XSS worms and mitigation controls, where pdp lays out general guidelines for browser vendors to implement policies, links to jeremiah grossman's thoughts on his blog, and even discusses the benefits of script signing with Tim Brown (who has done early work on this concept).
if you recall the arguments presented in open-source rootkit detection, you'll note that i typically talk about a few strategies for preventing rootkits that should apply universally. the idea is to start with a TPE (trusted path of execution); in the case of browsers, content-restrictions and httponly represent TPE. the second step is to have the running system decide what code can be run based on digital signatures (e.g. DigSig/DSI), and the final step is some basic form of browser-based IDS/IPS (such as found in FireKeeper – though in the matasano article i argue for kernel support of things like St. Jude / St. Michael, or grsecurity).
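the signature-based step boils down to a simple gate in front of code loading. a much-simplified python stand-in (real DigSig verifies RSA signatures on ELF binaries inside the kernel; here a plain hash whitelist with made-up values shows only the decision logic):

```python
import hashlib

# Hashes of code we have vetted -- hypothetical values.
TRUSTED = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def may_load(path):
    # Refuse to load anything whose hash is not on the whitelist.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in TRUSTED

if not may_load("plugins/some_addon.xpi"):   # hypothetical path
    raise PermissionError("unknown code refused")
```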
of course, these will not prevent all vulnerabilities in Firefox, because it is such a large codebase written in C++ with tons of plugins and add-ons, etc. one of my latest suggestions is to basically go mental and browse html/images with no javascript using this method (albeit it was sort of meant as a joke). however, i do think it would be easier to code review and fuzz test links or elinks to a very high assurance level. maybe other small browsers, such as PIE and Minimo, could also be looked at for security assurance purposes, although my guess is that today they are not even as secure as their full-size parents.
the final step is SSL – encrypting and anonymizing all your traffic. this is easy to do. start with tor and find (or set up) a CGIProxy running in SSL mode with a certificate signed by a root CA. Tor to the CGIProxy, SSL to everything else. you could also do something similar using an SSL VPN such as SSL-Explorer, especially if you want to handle traffic besides just HTTP. this may cause some sub-optimal traffic with SSL over SSL, or other TCP-over-TCP problems, but such is the price you pay for that extra level of assurance.
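as a toy illustration of the tor leg, here's how a python script can push its sockets through a local tor client's SOCKS port (9050 is tor's default; PySocks is a third-party module, and in the setup above you would point the request at your SSL-mode CGIProxy instead of the check page):

```python
import socket
import urllib.request

import socks  # PySocks: pip install PySocks

# Route every new socket through the local Tor SOCKS proxy.
# Note: this naive patch still resolves DNS names locally.
socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 9050)
socket.socket = socks.socksocket

# check.torproject.org reports whether the request really came via Tor.
print(urllib.request.urlopen("https://check.torproject.org/").read()[:200])
```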
I don't think virtual machines are a good solution. First of all, this means that we'll have to bump up our machines' specs in order to keep up with the demands.
Second, not every computer user is an IT guru (i.e. virtual machines are not a user-friendly solution); there are far more people doing online financial stuff than there are IT gurus doing online financial stuff.
Finally, this is a workaround, rather than a direct solution to the problem.
I use one browser, an email client, and an IM client, to keep things separated. I admit that I have several computers, and I tend to do the sensitive stuff on only one of them (so this is still the equivalent of having multiple browsers). But it just doesn't feel right.