Friday, August 31, 2007

Tricky Tricks

I want to make a short philosophical comment about how some approaches to building security are wrong.

Let’s move back in time to the last decade of the 20th century, to the 90’s... Back in those days one of the most annoying problems in computer security was viruses or, more precisely, executable file infectors. Many smart guys were working on both sides: some to create stealthier infectors, and others to build better detectors for those infectors…

Russian virus writer Z0mbie, with his Mistfall engine and Zmist virus, probably got closest to the Holy Grail of this arms race – the creation of an undetectable virus. Peter Szor, Symantec’s chief antivirus researcher, wrote about his work in 2001:

Many of us will not have seen a virus approaching this complexity for a few years. We could easily call Zmist one of the most complex binary viruses ever written.

But nothing is really undetectable if you have a sample of the malware in your lab and can spend XXX hours analyzing it – you will always come up with some tricks to detect it sooner or later. The question is – were any of the A/V scanners back then ready to detect such an infection if it was a 0day in the wild? Would any of today’s scanners detect a modified/improved Zmist virus, or would they have to count on the virus author being nice enough to send them a sample for analysis first?

Interestingly, file infectors stopped being a serious problem a few years ago. But this didn’t happen because the A/V industry discovered a miracle cure for viruses; rather, users’ habits changed. People do not exchange executables as often as they did 10 years ago. Today people download an executable from the Web (legally or not) rather than copy it from a friend’s computer.

But could the industry have solved the problem of file infectors in an elegant, definite way? The answer is yes and we all know the solution – digital signatures for executable files. Right now, most of the executables (but unfortunately still not all) on the laptop I’m writing this text on are digitally signed. This includes programs from Microsoft, Adobe, Mozilla and even some open source ones like e.g. True Crypt.

With digital signatures we can "detect" any kind of executable modification, starting from the simplest and ending with the most complex, metamorphic EPO infectors as presented e.g. by Z0mbie. All we need to do (or, more precisely, all the OS needs to do) is verify the signature of an executable before executing it.
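To make the verify-before-execute idea concrete, here is a minimal sketch in Python. It uses a deliberately tiny textbook RSA key (p=61, q=53 – utterly insecure, for illustration only) to stand in for a real signing scheme; the file bytes and key values are made up for the demo:

```python
import hashlib

# Textbook RSA with a toy key (p=61, q=53) -- insecure, illustration only.
N = 61 * 53          # public modulus (3233)
E = 17               # public exponent
D = 2753             # private exponent (17 * 2753 == 1 mod 3120)

def sign(data: bytes) -> int:
    """Publisher side: hash the executable, then apply the private key."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(digest, D, N)

def verify(data: bytes, signature: int) -> bool:
    """OS side: recompute the hash and compare with the 'decrypted' signature."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(signature, E, N) == digest

exe = b"MZ\x90\x00...program code..."   # stand-in for an executable image
sig = sign(exe)

print(verify(exe, sig))                 # True: unmodified file may run
print(verify(exe, (sig + 1) % N))       # False: a forged signature is rejected
# Any infector that patches the file changes its hash, so the signature
# (which the infector cannot forge) no longer matches, and the OS refuses
# to execute the file.
```

The point of the sketch is that detection requires no knowledge of the infector at all: any modification, however clever, breaks the signature.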

I hear all the counterarguments: that many programs out there are still not digitally signed, that users are too stupid to decide which certificates to trust, that sometimes the bad guys might be able to obtain a legitimate certificate, etc...

But all those minor problems can be solved and probably will eventually be solved in the coming years. Moreover, solving all those problems will probably cost much less than all the research on file infectors has cost over the last 20 years. But that also means no money for the A/V vendors.

Does it mean we get a secure OS this way? Of course not! Digital signatures do not protect against malicious code execution, e.g. they can't stop an exploit from executing its shellcode. So why bother? Because certificates allow us to verify that what we have is really what we should have (e.g. that nobody has infected any of our executable files). It’s the first step in ensuring the integrity of an OS.

The case of digital signatures vs. file infectors is a good example of how problems in security should be addressed. But we all know that the A/V industry took a different approach – it invested zillions of dollars into research on polymorphic virus detection, built advanced emulators for analyzing infected files, etc. The outcome: lots of complex heuristics that usually work quite well against known patterns of infection, but are often useless against new 0day engines, and are so complex that nobody really knows how many false positives they can produce or how buggy the code itself is. Tricks! Very complex and maybe even interesting (from a scientific point of view) tricks.

So, do I want to say that all those years of A/V research on detecting file infections were a waste of time? I’m afraid that is exactly what I want to say here. This is an example of how the security industry took a wrong path, a path that could never lead to an effective and elegant solution. This is an example of how people decided to employ tricks instead of looking for generic, simple and robust solutions.

Security should not be built on tricks and hacks! Security should be built on simple and robust solutions. Oh, and we should always assume that the users are not stupid – building solutions to protect uneducated users will always fail.

31 comments:

Peter J. Cranstone said...

Joanna,

Check out http://www.secure64.com – they have a secure OS capable of supporting root trust, coupled with 4 levels of privilege. It's the only secure OS I know of, and it runs on Intel's Itanium platform.

denis bider said...

The issue here is that the security vendors, as long as they exist separately from software vendors, cannot provide a solution that does not resort to tricks, because a real solution would mean modifying the development and deployment processes of software vendors, and it would put security vendors as separate companies out of business. There would be nothing to secure.

I'm not saying that the security industry is causing the problem, but it is definitely part of it; and when the problem is fixed - if ever - you will see it in the security industry going away. As long as that doesn't happen, the only thing they can do is continue to pretend that they can actually do something, and continue developing new tricks.

Anonymous said...

This is the first time I've decided to post a comment. I've been following your blog for a long time, but this particular story got my attention, because I can only say, "damn right". Damn right about stopping the fight against the symptoms, and damn right about educating users!

Miro

Evgeniy S. said...

Digital signatures don't solve all problems. They are a partial solution. What will we do with bodyless malware like Code Red and Slammer? With trojans that do not infect files? With exploits?

joanna said...

@Evgeniy: I think I made it clear in my post that digital signatures solve the problem of file infections, but do not solve the problem of exploits. Please read more carefully before posting comments next time;)

Evgeniy S. said...

If digital signing had been implemented in the DOS era, virus writers would simply have hacked the signature checks. DOS didn't run the CPU in protected mode, so such protections would have been basically useless.

Vess said...

Back in the 90s, RSA had a stranglehold patent on public key encryption and digital signing. Those of us who worked on free tools for this, like PGP, were hunted like something worse than software pirates - we couldn't even think about a wide implementation of executable code signing.

You're right about one thing - digitally signing executables will immensely improve the malware situation, as happened in the Symbian and Office macro worlds when digital signing was introduced.

However, digital signing is by far not a panacea. If you expect it to solve the malware problem, you're very much mistaken. It has a lot of problems of its own and can be bypassed in various ways - the space here is way too small to list them all; it would require a full-sized paper.

Jakub Debski said...

So, do I want to say that all those years of A/V research on detecting file infections were a waste of time? I’m afraid that is exactly what I want to say here.

The A/V industry (and the whole security industry) provides solutions for security flaws. They don't invest money in pointless research, but in research on fixing others' mistakes.

Oh, and we should always assume that the users are not stupid – building solutions to protect uneducated users will always fail.

Users are not stupid. They just don't want to learn about computer security, just as you don't want to waste time learning about the food assimilation process. Both could help you, but not having such knowledge causes problems only sometimes. Moreover, in the case of security, they would have to be educated all their lives, because threats evolve.

This is an example of how people decided to employ tricks instead of looking for generic, simple and robust solutions.

History shows that the simple and robust solutions have flaws that are not visible during analysis, design, and implementation.

I hear all the counter arguments: that many programs out there are still not digitally signed, that users are too stupid to decide which certificates to trust, that sometimes the bad guys might be able to obtain a legitimate certificate, etc...
But all those minor problems can be solved and probably will eventually be solved in the coming years. Moreover, solving all those problems will probably cost much less then all the research on file infectors cost over the last 20 year. But that also means no money for the A/V vendors.


I hear all the great solutions for security problems with just a few minor problems left to solve. I also hear about machines that produce free energy with just a few minor problems left to solve (Steorn etc.)...

joanna said...

@Evgeniy:
you seem to confuse the problem of protecting the integrity of the OS/security program itself with the effectiveness of the deployed security algorithm. In the DOS era, nothing could be protected against tampering, neither A/V scanners nor hypothetical OS signature enforcement code. Even today, on a modern OS like Vista x64, this is still hard to achieve, as Alex and I have proved more than once. This problem applies equally to A/V products.

@vess:
Good point about the RSA patents, but please note that A/V programs and the OS in the 90's were not free either!

@Jakub:
I can't really argue with your points, as they are too general - more demagogy than technical argument.

To all: In order to keep the entropy of this blog at a desirable level, please refrain from repeating truisms such as "digital signing is by far not a panacea" over and over again ;)

joanna said...

BTW, before all the A/V people start hating me, I want to make it clear that I don't blame them exclusively for "taking a wrong path". The OS vendor(s) is/are equally responsible for that, and maybe even more so. In general, as far as the security of the OS is concerned, 3rd party firms can't do much without cooperation from the OS vendor.

So, maybe the A/V industry had no choice? I don't think so -- they could have and should have pushed on the OS vendor(s) so that they could come up with the proper solution together. But, yes, using tricks is always easier... but that's a wrong path.

Vess said...

So, maybe the A/V industry had no choice? I don't think so -- they could have and should have pushed on the OS vendor(s) so that they could come up with the proper solution together.

OK, let's see. An A/V producer had the following choice: either (a) force Microsoft to re-design their OS from the ground up and convince every software producer to pay royalties to RSA every time that producer released a new executable, or (b) make a program for virus detection and removal entirely by themselves (nothing dependent on convincing third parties of anything), sell it and receive money for that themselves.

Which choice was the more realistic one and the one more likely to be made, hmm? :-)

joanna said...

Vess, I perfectly understand the business motivations behind the A/V industry. However, this blog is not focused on the business side of the IT security industry, but rather on the technical/scientific side.

Even though I'm now the CEO of my brand new company (in other words, a businesswoman), when writing this blog I'm still an (idealistic) researcher only :)

Evgeniy S. said...

I think A/V vendors had no choice in the early 90s. There was no widespread 32-bit protected-mode OS until the Windows 2000 release in 1999 (MS had to release the non-protected Windows ME even in the year 2000). But you are right, it was wrong to continue using the old A/V scanning technologies on protected OSes like Windows NT and Linux.

joanna said...

Evgeniy, but again - the problem of tampering with security code applies equally to A/V software as well.

Jakub Debski said...

Joanna, it's very hard to prove in a technical way that users can't be educated. I tried to explain it by analogy, which you took as demagogic and general.
The "minor problems" you write about are in fact very hard to solve, if not impossible.
Please read some papers about user education results, or read a good conclusion here.
If you are not convinced... I'm waiting for your user education program. Seriously, a short sketch is enough.

Evgeniy S. said...

It's easier to hijack a widespread OS-implemented protection (if one had been made for DOS =)) than to craft hacks for each antivirus separately.

joanna said...

@Jakub: Ironically, the paper you referenced only supports my thesis: "The only real solution is to make security a built-in feature of all computing elements." And then they write: "Digitally sign all information to prevent tampering and develop a simple way to inform users whether something is from a trusted source" :)

@Evgeniy: signature verification code could have been placed into A/V programs back in those days as well, instead of the complicated and never-good-enough techniques for "direct" detection of file infections.

Vess said...

Vess, I perfectly understand the business motivations behind the A/V industry. However, this blog is not focused on the business side of the IT security industry, but rather on the technical/scientific side.

Joanna, if you blame the industry for not implementing solutions that made no business sense, you live in a fantasy world. I can give you several other, much more potent methods of stopping viruses completely - limited functionality, limited sharing or limited transitivity (read Fred Cohen's papers). They are all great in theory. They all make the computer unusable - which is why nobody uses them. A viable solution must make business sense. Known-virus scanning in the 90s made such sense. Pushing for widely adopted code signing did not. The market has spoken.

signature verification code could have been placed into A/V programs back in those days as well, instead of the complicated and never-good-enough techniques for "direct" detection of file infections.

Joanna, you are missing the point. Code signing requires a PKI - a hierarchy of certifying authorities, support by the OS, etc. It's not something that can be implemented in one stand-alone program.

Way back then, of course, A/V programs did include some kind of self-checksumming in order to ensure their own integrity. But this was not code signing (it couldn't be - there was no PKI in place), it could be hacked by malware (and was), and it could be used only for self-protection - it couldn't serve as a universal anti-virus measure protecting arbitrary executables.

In fact, even back then, self-checksumming was pretty much pointless due to stealth viruses (a subject you're supposed to know something about, since you teach it).

There were many variations on the theme - programs that appended self-checksumming modules to other executables (a bad, bad idea), integrity checkers (which held a database of checksums separate from the protected executables), etc. None of them was good enough. Scanners were easy to understand and use, which is why they are what the users bought and used.
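For what it's worth, the "integrity checker with a separate checksum database" that Vess describes is easy to sketch. Here is a hypothetical minimal version in Python (file names and contents are invented for the demo):

```python
import hashlib
import os
import tempfile

def file_hash(path: str) -> str:
    """SHA-256 of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_baseline(paths) -> dict:
    """Record a known-good hash for each protected executable, kept separately."""
    return {p: file_hash(p) for p in paths}

def check(baseline: dict) -> list:
    """Return the files whose current contents no longer match the baseline."""
    return [p for p, h in baseline.items() if file_hash(p) != h]

# Demo: a temp file stands in for a protected executable.
workdir = tempfile.mkdtemp()
prog = os.path.join(workdir, "prog.exe")
with open(prog, "wb") as f:
    f.write(b"original program code")

baseline = build_baseline([prog])
print(check(baseline))             # []: nothing has been modified yet

with open(prog, "ab") as f:        # simulate a file infector appending its body
    f.write(b"\x90\x90virus body")
print(check(baseline))             # [prog]: the infection is detected
```

As the comment above notes, a stealth virus that intercepts file reads can feed such a checker the original, clean bytes - which is exactly why these tools failed unless the OS itself guaranteed an untampered view of the disk.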

Of course, one of the reasons for the failure was that this wasn't real code signing - which, as I already said, requires a PKI, and that simply wasn't present back then. It is present now, which is why it is a good idea for software producers to start signing their executables (we do). If widely adopted, it will help reduce the malware problem significantly. It won't solve it completely, though.

joanna said...

Dear Vess,

1) You confuse "business sense" from the A/V vendors' point of view with business sense from the user's point of view, trying to make them equal and then argue that what is not attractive for the A/V industry should also not be attractive for users. This obviously is not true.

2) Like many other people, you confuse the integrity protection of the OS/A/V code with infection methods that simply bypass A/V heuristics.

3) Everybody knows that certificates require a PKI! There was no reason why the industry couldn't have started building the PKI infrastructure 10 years earlier, instead of entering a pointless arms race with virus writers (which they were always losing and always will be).

Vess said...

You confuse "business sense" from the A/V vendors point of view with the business sense from the user's point of view

I do not. I am talking only about the business sense from the point of view of the A/V vendors. It was they who implemented and sold virus defenses, so they were the ones who were forced to develop solutions that made business sense from their point of view. You can't realistically blame them for doing so. (Well, there were some AV producers who experimented with solutions that made no business sense. They failed, of course.)

trying to make them equal and then argue that what is not attractive for the A/V industry should also not be attractive for users

Not at all. I am just saying that any sane vendor will only produce solutions that make business sense from his point of view, and that you're not being realistic by blaming them for doing so. Users might very well find it "attractive" if computers were given away at no cost - but no sane computer maker will distribute their goods like that.

you confuse the integrity protection of the OS/A/V code with infection methods that simply bypass A/V heuristics

I do not. The various virus and anti-virus techniques comprise chapters 3 and 4 of my Ph.D. thesis, so, believe me, I know what I'm talking about. (Google me.)

There is a set of techniques used by malware specifically against integrity-based protections - in fact, I have a paper dedicated to this subject.

Bypassing known-malware scanning and heuristics is just another set of tricks (among many) used by malware - used widely, because these AV techniques are widespread.

And you have to ensure the integrity of the OS and establish trusted execution paths, if you want to have any hope that your protection will work and won't be bypassed by malware.

There was no reason why the industry couldn't start building PKI infrastructure 10 years earlier

Which industry? There was a perfectly good reason why the OS makers (i.e., Microsoft), the software producers and the AV people didn't go this way - the RSA patent. In case you haven't noticed, the PKI started being built after the patent on digital signatures expired. (The patent on public key encryption expired later.)

Anonymous said...

Leaving aside all the arguments already made against your points on building a PKI infrastructure in the 90s: I remember at Black Hat you said it took you only 2.5 hours and $200 to obtain a digital certificate for your company to sign your own files. What's to stop malware writers from doing the same?

Similarly, the top 15 adware/spyware installers today are digitally signed, and nobody is actively (let alone promptly) revoking those certificates. In fact, in some strange cases where people have tried to revoke one of these certificates, it has taken over a month to do so.

How exactly do you plan on preventing malware writers from distributing their digitally signed malicious code?

when writing this blog I'm an (idealistic) researcher only :)

It shows :)

-- P

joanna said...

Users should be aware/smart enough not to trust publishers like e.g. "United Russian Coders, Inc." (sorry to all my Russian friends for this stereotypical example), even if the certificate was issued by a known CA.

This is what I mean when I say that we should assume users are not stupid. If we can't assume users are at least that aware, then all security will always fail.

Vess said...

@Anonymous: How exactly do you plan on preventing malware writers from distributing their digitally signed malicious code?

First, the mere presence of a digital signature is not sufficient, of course. Look at how this concept is implemented for macros in the current Office applications: by default, macros are not executed unless digitally signed with a key marked as trusted by the user. If they are merely signed by "Joe Blow", they still won't run - first the user must explicitly indicate that s/he trusts the executables signed with Joe Blow's key.

Second, Joanna isn't talking about stopping malware in general - she is talking only about preventing parasitic infection. If only signed executables were allowed to run (and if code signing weren't easily doable in an automated way on the target system), malware could still get in (signed) - but it couldn't infect other files, because doing so would invalidate their signatures. Of course, stealth tricks defeat that, so the integrity of the OS would have to be ensured first, or the whole idea simply won't work.

@Joanna: Users should be aware/smart enough and not trust publishers

If users were smart enough not to trust dubious programs, we wouldn't need anti-virus programs because these users wouldn't get infected in the first place.

This is what I mean when I say that we should assume users are not stupid.

Well, you're wrong in that assumption. I have a paper somewhere in which I show by statistical means that 97.24% of users make wrong decisions when faced with a virus problem. Users are not "stupid" (although a lot of them are) - they are just incompetent in security matters and not interested in changing this. They are interested in doing their jobs, not in becoming security experts. That's why "user education" will never work.

If we can't assume users to be at least that aware, then all the security will always fail.

Almost all users are that unaware and that's why all security almost always does fail. The biggest security hole is always between the chair and the keyboard. :-)

joanna said...

Vess, and all the other people who try to convince me that users are, and always will be, stupid: I can't really argue with those points, as this is a non-technical argument.

All I want is technology that would allow "smart" people to feel secure. Today we don't even have that :(

Yes, I know that there is money in protecting "unaware users". But there is also money in, e.g., tomato farming or coal mining ;)

Entropy said...

I fully support Joanna's position. The basic thing that people don't realize is that when a malware author is forced to sign his binaries with a certificate issued by a publicly trusted CA, the binaries are no longer anonymous and he can be held legally liable. Everyone will know his name and the CA knows his address.

The level of trust that signed binaries inspire cannot even be compared to the level of "trust" anonymous unsigned binaries inspire. They are apples and oranges.

And drivers? If I were Microsoft, Vista (whether 32-bit or not) would never run any unsigned driver -- not even if you press F8. A driver can do _anything_, and even the best AV software, the best firewall, the best OS security component will not prevent it. Once you run an unsigned driver, you can't be sure of anything.

mark zielinski said...

I would like to point out a common misconception that I have seen repeated since Microsoft's release of Windows Vista.

The process of cryptographically hashing and/or signing executables and dynamically linked libraries has been around for quite some time now, at least eight or nine years by this point. Microsoft is not the original inventor of this technology, and in fact many host-based products have used this approach in the past with much better implementations. I would also like to point out that cryptographically hashing executables is a much better approach than using digital signatures, as we have seen from the many problems that Microsoft's implementation has encountered. Microsoft's implementation is deeply flawed and will not address any problem sufficiently.

Additionally, I would like to point out that exploits can also be defeated by similar approaches. By intercepting and analyzing the instructions to be executed in real time, malicious instructions can be detected and defeated, rendering an exploit useless. This can be done either by cryptographically hashing and comparing the intercepted instructions, or by simply comparing each instruction and its order against the instructions contained inside the application, in addition to many other approaches.

joanna said...

@Mark - you write that:
Microsoft's implementation [of digital signatures] is deeply flawed and will not address any problems sufficiently.

Can you elaborate on this?

Additionally, I would also like to point out that exploits can also be easily defeated by using similar approaches. By intercepting and analyzing the instructions to be executed in real time, malicious instructions can be easily detected and defeated, rendering an exploit useless

I don't believe this will be practical to implement anytime soon (and maybe ever), because of the performance impact.

mark zielinski said...

Well, I think Microsoft's approach is flawed for a couple of reasons. First, as we've seen recently, Microsoft doesn't seem to do an adequate job of vetting requests and submissions for signatures. This has an immense impact on customers, as malicious device drivers or third-party device driver loaders can be created and submitted without being adequately analyzed. Second, with the current approach, Microsoft is the one that decides what's good and what's bad - and in light of the first problem, this isn't a good idea. With the other implementations that exist, which use hashing, at least the customer can say what they do and don't want on their systems, and there isn't some third party deciding this for them (a third party that could be lazy, incompetent, compromised, or all of the above). And finally, Microsoft's implementation does nothing to prevent malicious applications and libraries from executing on the system. As a result, attackers can locally execute exploits against problems in device drivers to bypass the protections.

I agree that the performance impact of analyzing instructions can be significant; however, I think there are ways to address this. I have been interested in researching and developing something like this, and in my research I have discovered some interesting things that can greatly reduce the speed impact. For instance, today's processors support single-stepping on branches, which can be leveraged to analyze an application's execution patterns in real time without needing to compare every instruction executed - this helps considerably while still achieving the same result.

joanna said...

@Mark:

1) Well, I think Microsoft's approach is flawed for a couple of reasons. First, as we've seen recently, Microsoft doesn't seem to do an adequate job of vetting requests and submissions for signatures.

You seem to confuse the fact that Vista allows loading kernel drivers signed by 3rd parties with what MS's WHQL does. AFAIK the WHQL verification process is pretty decent. It could probably be better, but that doesn't mean MS's implementation of digital signatures is flawed.

2) Second, with the current approach Microsoft is the one that decides what's good and what's bad.

I think that you're now negating what you said in the first argument...

3) With the other implementations that exist which use hashing

But digital signatures/certificates are actually a form of hashing - signed hashes, to be precise.

Please do not confuse the following things:
1) the actual implementation of digital signatures
2) the implementation of a PKI
3) the policy used by PKI to issue certificates
4) the policy implemented by MS which certs to trust
5) the policy used by users which certs to trust

mark zielinski said...

You seem to have misunderstood some of the points I made in my last post: 1) I was saying that Microsoft's approach neither considers nor addresses the entire problem at hand, so their approach is insufficient, as it still leaves the system vulnerable to attack. 2) I said nothing, either directly or indirectly, that implied hashing was not used in the process. What I was discussing was the difference between, and the benefits of, hashing the system's applications, dynamically linked libraries and device drivers in real time and comparing those hashes to a known baseline decided by the customer, versus simply verifying a digital signature on the system's device drivers, while not allowing the customer to decide what they do and do not want to load on their system, because that information is decided by a third party and not the customer. And 3) since that information is decided by a third party, if this third party is compromised, then the security provided by this feature is also worthless.

Joe said...

Hi

For me it's unclear where the border between a trick and a robust solution lies, and who decides which is which. E.g., is SVV just another trick which is easily bypassable, or a robust security solution?

Thanks