I want to make a short philosophical comment about how some approaches to building security are wrong.
Let’s move back in time to the last decade of the 20th century, to the 90’s... Back in those days one of the most annoying problems in computer security was viruses, or, more precisely, executable file infectors. Many smart people were working on both sides, to create more stealthy infectors and also better detectors for them…
Russian virus writer Z0mbie, with his Mistfall engine and Zmist virus, probably came closest to the Holy Grail of this arms race – the creation of an undetectable virus. Peter Szor, Symantec’s chief antivirus researcher, wrote about his work in 2001:
Many of us have not seen a virus approaching this complexity for a few years. We could easily call Zmist one of the most complex binary viruses ever written.
But nothing is really undetectable if you have a sample of the malware in your lab and can spend XXX hours analyzing it – you will always come up with some tricks to detect it sooner or later. The question is – were any of the A/V scanners back then ready to detect such an infection if it was a 0day in the wild? Would any of today’s scanners detect a modified/improved Zmist virus, or would they have to count on the virus author being nice enough to send them a sample for analysis first?
Interestingly, file infectors stopped being a serious problem a few years ago. But this didn’t happen because the A/V industry discovered a miracle cure for viruses, but rather because users’ habits changed. People do not exchange executables as often as they did 10 years ago. Today people download an executable from the Web (legal or not) rather than copying it from a friend’s computer.
But could the industry have solved the problem of file infectors in an elegant, definitive way? The answer is yes, and we all know the solution – digital signatures for executable files. Right now, most of the executables (but unfortunately still not all) on the laptop I’m writing this text on are digitally signed. This includes programs from Microsoft, Adobe, Mozilla and even some open source ones like True Crypt.
With digital signatures we can "detect" any kind of executable modification, from the simplest ones up to the most complex, metamorphic EPO infectors as presented e.g. by Z0mbie. All we need to do (or, more precisely, all the OS needs to do) is to verify the signature of an executable before executing it.
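To make this concrete, here is a minimal, hypothetical sketch in C of such a check on Windows, built on the WinVerifyTrust API that Authenticode verification uses. In a real solution the OS loader would enforce this for every image it maps; this userland version only illustrates the idea:

#include <windows.h>
#include <wintrust.h>
#include <softpub.h>
#pragma comment(lib, "wintrust")

/* Returns TRUE if the file carries a valid Authenticode signature
   chaining to a trusted root certificate. */
BOOL IsExecutableTrusted(LPCWSTR path)
{
    WINTRUST_FILE_INFO fileInfo = { sizeof(fileInfo) };
    fileInfo.pcwszFilePath = path;

    GUID policy = WINTRUST_ACTION_GENERIC_VERIFY_V2;

    WINTRUST_DATA wtd = { sizeof(wtd) };
    wtd.dwUIChoice          = WTD_UI_NONE;     /* no UI, just a verdict */
    wtd.fdwRevocationChecks = WTD_REVOKE_NONE;
    wtd.dwUnionChoice       = WTD_CHOICE_FILE;
    wtd.pFile               = &fileInfo;
    wtd.dwStateAction       = WTD_STATEACTION_VERIFY;

    LONG status = WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &policy, &wtd);

    /* release the state data allocated by the verify call */
    wtd.dwStateAction = WTD_STATEACTION_CLOSE;
    WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &policy, &wtd);

    return status == ERROR_SUCCESS;  /* any modification breaks the hash */
}

A policy of "execute only if IsExecutableTrusted() returns TRUE" catches every byte-level modification of a signed file, no matter how clever the infection engine – the signed hash simply no longer matches.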
I hear all the counterarguments: that many programs out there are still not digitally signed, that users are too stupid to decide which certificates to trust, that sometimes the bad guys might be able to obtain a legitimate certificate, etc...
But all those minor problems can be solved and probably will eventually be solved in the coming years. Moreover, solving all of them will probably cost much less than all the research on file infectors has cost over the last 20 years. But that also means no money for the A/V vendors.
Does it mean we get a secure OS this way? Of course not! Digital signatures do not protect against malicious code execution, e.g. they can’t stop an exploit from executing its shellcode. So why bother? Because certificates allow us to verify that what we have is really what we should have (e.g. that nobody has infected any of our executable files). It’s the first step in ensuring the integrity of an OS.
The case of digital signatures vs. file infectors is a good example of how problems in security should be addressed. But we all know that the A/V industry took a different approach – they invested zillions of dollars into research on polymorphic virus detection, built advanced emulators for the analysis of infected files, etc. The outcome: lots of complex heuristics that usually work quite well against known patterns of infection, but that are often useless against new 0day engines, and that are so complex that nobody really knows how many false positives they can produce or how buggy the code itself is. Tricks! Very complex and maybe even interesting (from a scientific point of view) tricks.
So, do I want to say that all those years of A/V research on detecting file infections were a waste of time? I’m afraid that is exactly what I want to say here. This is an example of how the security industry took a wrong path, a path that could never lead to an effective and elegant solution. This is an example of how people decided to employ tricks instead of looking for generic, simple and robust solutions.
Security should not be built on tricks and hacks! Security should be built on simple and robust solutions. Oh, and we should always assume that the users are not stupid – building solutions to protect uneducated users will always fail.
Friday, August 03, 2007
Virtualization Detection vs. Blue Pill Detection
So, it’s all over the press now, but, as usual, many people didn’t quite get the main points of our Black Hat talk. Let’s clear things up... First, please note that the talk was divided into two separate, independent parts – the first one about bypassing Vista kernel protection and the second one about virtualization based malware.
The message of the first part was that we don’t believe it’s possible to implement effective kernel protection on any general purpose OS based on a monolithic kernel design.
The second part, the one about virtualization, had several messages...
- The main point was that detecting virtualization is not the same as detecting virtualization based malware. As hardware virtualization technology gets more and more widespread, many machines will be running with virtualization mode enabled, whether blue pilled or not. In that case blue pill-like malware doesn’t need to pretend that virtualization is disabled, as virtualization is expected to be in use for some legitimate purpose anyway. In that case a "blue pill detector" that is in fact just a generic virtualization detector is completely pointless.
Obviously in such scenarios blue pill-like malware must support nested hypervisors. And this is what we have implemented in our New Blue Pill. We can run tens of blue pills inside each other and they all work! You can try it yourself, but you should disable COM port debug output to run more than twenty nested pills. We still fail at running Virtual PC 2007 as a nested hypervisor (when its guest switches to protected mode), but we hope to have this fixed in the coming weeks (please note that VPC’s hypervisor doesn’t block blue pill from loading – see our slides for more info).
In other words, if somebody announces to the world that they can fight virtualization based malware using generic virtualization detectors, it is as if they claimed they could detect e.g. a botnet agent just by detecting that an executable uses networking!
- We have also decided to discuss how blue pill could potentially cheat those generic virtualization detectors, even though we don’t believe this will be necessary in the coming years, as everything will be virtualized anyway (see the previous point). Still, we decided to look into some of the SVM detection methods. First, we found out that many methods that people described as a way to detect virtualization do not work in the simple form in which they were described. We took a closer look e.g. at the TLB profiling methods that were suggested by several researchers as a reliable way to detect virtualization. However, all the papers describing this method missed the fact that some of the TLB caches are not fully associative, and one needs special effort (which means additional complexity) to make sure that e.g. the whole L2 TLB buffer gets filled. Obviously we provided all the necessary details of how to write those detectors properly (we even posted one such detector; a simplified sketch of the underlying idea appears after this list).
In other words, we believe that it will always be possible to detect virtualization mode using various tricks and hacks, but: 1) those hacks can be forced to become very complex, and 2) if virtualization is being used on the target computer for some legitimate purpose, all those methods fail anyway (see the first point).
- Some people might argue that maybe we should then build these virtualization detectors into all the legitimate hypervisors (e.g. the Virtual PC hypervisor), so that they at least know whether they are being run on a native machine or maybe inside blue pill. However, this approach contradicts the rules we use to build secure and effective hypervisors. These rules say that hypervisors should be as small as possible and that no 3rd party code should be allowed there.
Now imagine A/V companies trying to insert their virtualization detectors (which, BTW, would have to be updated from time to time, e.g. to support new processor models) into hypervisors – if that ever happened, it would be a failure of our industry. We need other methods to address this threat, methods based on documented, robust and simple mechanisms. Security should not be built on bugs, hacks and tricks!
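For the curious, below is a naive, hypothetical sketch in C (MSVC intrinsics) of the TLB profiling idea mentioned above. It deliberately shows the simple form: the page count and the threshold are guesses, and it ignores the associativity problem just discussed – which is exactly why such straightforward versions are unreliable in practice:

#include <windows.h>
#include <intrin.h>
#include <stdio.h>

#define NPAGES  512        /* assumption: more pages than TLB entries */
#define PAGE_SZ 4096

int main(void)
{
    unsigned char *buf = VirtualAlloc(NULL, NPAGES * PAGE_SZ,
                                      MEM_COMMIT | MEM_RESERVE,
                                      PAGE_READWRITE);
    if (!buf) return 1;

    int regs[4];
    volatile unsigned char sink = 0;
    unsigned __int64 t0, before, after;
    int i;

    /* 1. Touch every page so its translation lands in the TLB. */
    for (i = 0; i < NPAGES; i++)
        sink += buf[i * PAGE_SZ];

    /* 2. Time re-accesses while the translations are cached. */
    t0 = __rdtsc();
    for (i = 0; i < NPAGES; i++)
        sink += buf[i * PAGE_SZ];
    before = __rdtsc() - t0;

    /* 3. CPUID is commonly intercepted by a hypervisor; on CPUs without
       tagged TLBs the resulting world switch flushes the TLB. */
    __cpuid(regs, 0);

    /* 4. Time the same accesses again; a large jump suggests the TLB
       was flushed behind our back. */
    t0 = __rdtsc();
    for (i = 0; i < NPAGES; i++)
        sink += buf[i * PAGE_SZ];
    after = __rdtsc() - t0;

    printf("before: %I64u cycles, after: %I64u cycles\n", before, after);
    puts(after > 2 * before ? "TLB flush observed - hypervisor?"
                            : "no anomaly observed");
    return 0;
}

Again, this only demonstrates the principle – a serious detector would have to fill each TLB set explicitly and calibrate its threshold per CPU model, and on a machine legitimately running a hypervisor a positive result still proves nothing about blue pill.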
We posted the full source code of our New Blue Pill here. We believe that it will help other researchers to analyze this threat, and hopefully we will find a good solution soon, before it ever becomes widespread.
Happy bluepilling!
On a side note: now I can also explain (if this is not clear already) how we were planning to beat our challengers. We would simply ask them to install Virtual Server 2005 R2 on all the test machines, and we would install our New Blue Pill on just a few of them. Their wonderful detectors would then simply detect that all the machines have SVM mode enabled, but that would be completely useless information. Yes, we still believe we would need a couple of months to get our proof-of-concept to the level where we would be confident that we would win anyway (e.g. if they used memory scanning for some “signature”).
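To make the side note concrete, here is a hypothetical sketch in C of roughly everything a generic "SVM detector" can learn from user mode: CPUID function 0x80000001 reports in ECX bit 2 whether the processor supports SVM, while checking whether SVM is actually enabled (the EFER.SVME bit, MSR 0xC0000080, bit 12) requires ring 0:

#include <intrin.h>
#include <stdio.h>

int main(void)
{
    int regs[4];  /* EAX, EBX, ECX, EDX */

    __cpuid(regs, 0x80000001);
    int svm_supported = (regs[2] >> 2) & 1;   /* ECX bit 2 = SVM */

    printf("SVM supported: %s\n", svm_supported ? "yes" : "no");

    /* From a kernel driver one could additionally check:
         unsigned __int64 efer = __readmsr(0xC0000080);
         int svm_enabled = (efer >> 12) & 1;   // EFER.SVME
       But on a box legitimately running Virtual Server this says
       "enabled" anyway - which is exactly why the answer is useless
       for telling a blue pilled machine from a clean one. */
    return 0;
}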
BTW, you might be wondering why I introduced the “no CPU peak for more than 1s” requirement? I will leave finding the answer as an exercise in psychology for my dear readers ;)