If we look at computer systems and how they try to provide security, I think we can categorize those attempts into three broad categories:
1) Security by Correctness
2) Security by Isolation
3) Security by Obscurity
Let's discuss those categories in more detail below.
Security by Correctness
The assumption here is obvious: if we can produce software that doesn't have bugs (nor any maliciously behaving code), then we don't have security problems at all. The only problem is that we don't have any tools to make sure that a given piece of code is correct (in terms of implementation, design and ethical behavior). But if we look at various efforts in computer science, we will notice that a lot of effort has been put into achieving Security by Correctness: "safe" languages, code verifiers (although not sound ones, just heuristic-based), developer education, manual code audits, etc. Microsoft's famed Security Development Lifecycle is all about Security by Correctness. The only problem is: all those approaches sometimes work and sometimes do not, sometimes they miss a bug, and there are also problems that I simply don't believe can be addressed by automatic code verifiers or even safe languages, e.g. logic/design bugs, or deciding whether a given piece of code behaves maliciously or not (after all, in many cases this is an ethical problem, not a computer science problem).
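To make that last point concrete, here is a minimal C sketch (with made-up names, purely illustrative) of a design bug that implementation-level tools cannot see: the code is perfectly memory-safe and compiles without warnings, yet its access-control logic is simply wrong.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: memory-safe code with a pure logic bug.
 * The author meant "known admin AND correct password" but wrote ||,
 * so anyone claiming to be "admin" gets in with any password.
 * No safe language or heuristic verifier flags this: the code is
 * "correct" at the implementation level, wrong at the design level. */
static int access_granted(const char *user, const char *password)
{
    return strcmp(user, "admin") == 0 || strcmp(password, "s3cret") == 0;
}

int main(void)
{
    printf("admin with wrong password: %s\n",
           access_granted("admin", "wrong") ? "GRANTED" : "denied");
    return 0;
}
```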
To sum it up: I think that in some more or less distant future (some people talk about a timeframe of 50 years or so), we will get rid of all the implementation bugs, thanks to safe languages and/or sound code verifiers. But I don't believe we can assure the correctness of software at any higher level of abstraction than the implementation level.
Security by Isolation
Because of the problems with effectively implementing the Security by Correctness approach, people have, from the very beginning, also taken another approach, which is based on isolation. The idea is to split a computer system into smaller pieces and make sure that each piece is separated from the others, so that if one gets compromised or malfunctions, it cannot affect the other entities in the system. Early UNIX's user accounts and separate process address spaces, things that are now present in every modern OS, are examples of Security by Isolation.
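As a minimal illustration of the address-space separation mentioned above, the C sketch below shows that after fork() each process owns a private copy of memory, so a (compromised) child cannot tamper with the parent's data:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* After fork() the child gets its own address space (copy-on-write),
 * so its write to 'secret' is invisible to the parent. This is the
 * basic isolation primitive every modern OS provides. */
int main(void)
{
    int secret = 42;
    pid_t pid = fork();
    if (pid == 0) {                 /* child */
        secret = 1337;              /* modifies only the child's copy */
        printf("child:  secret = %d\n", secret);
        exit(0);
    }
    wait(NULL);                     /* parent waits for the child */
    printf("parent: secret = %d\n", secret);   /* still 42 */
    return 0;
}
```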
Simple as it sounds, in practice the isolation approach turned out to be very tricky to implement. One problem is how to partition the system into meaningful pieces and how to set permissions for each piece. The other problem is implementation - e.g. if we take a contemporary consumer OS, like Vista, Linux or Mac OSX, all of them have monolithic kernels, meaning that a simple bug in any of the kernel components (think: hundreds of 3rd-party drivers running there) allows bypassing the isolation mechanisms the kernel provides to the rest of the system (process separation, ACLs, etc.).
Obviously the problem is that the kernels are monolithic. Why not implement Security by Isolation at the kernel level then? Well, I would personally love that approach, but the industry simply took another course and decided that monolithic kernels are better than micro-kernels, because it's easier to write code for them and (arguably) they offer better performance.
Many believe, myself included, that this landscape can be changed by virtualization technology. A thin bare-metal hypervisor, like e.g. Xen, can act as a micro-kernel and enforce isolation between the other components in the system - e.g. we can move drivers into a separate domain and isolate them from the rest of the system. But again there are challenges here at both the design and the implementation level. For example, we should not put all the drivers into the same domain, as this would provide little improvement in security. Also, how do we make sure that the hypervisor itself is not buggy?
Security by Obscurity (or Security by Randomization)
Finally we have the Security by Obscurity approach, which is based on the assumption that we cannot get rid of all the bugs (just as in the Security by Isolation approach), but we can at least make exploitation of those bugs very hard. So it's all about making our system unfriendly to the attacker.
Examples of this approach include Address Space Layout Randomization (ASLR, present in all newer OSes, like Linux, Vista, OSX), StackGuard-like protections (again, used by most contemporary OSes), pointer encryption (Windows and Linux) and probably some other mechanisms that I can't remember at the moment. Probably the most extreme example of Security by Obscurity would be to use a compiler that generates heavily obfuscated binaries from the source code and creates unique (at the binary level) instances of the same system. Alex did his PhD on this topic and he's an expert on compilers and obfuscators.
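To see ASLR at work, one can compile and run the little C program below twice (assuming an OS with ASLR enabled): the stack, heap and, for a position-independent executable, code addresses change between runs, which is exactly what deprives an exploit of hard-coded addresses.

```c
#include <stdio.h>
#include <stdlib.h>

/* Print the addresses of a stack variable, a heap allocation and the
 * code itself. Under ASLR, running the program twice yields different
 * values, so an exploit cannot rely on fixed addresses. */
int main(void)
{
    int stack_var;
    void *heap_ptr = malloc(16);

    printf("stack: %p\n", (void *)&stack_var);
    printf("heap:  %p\n", heap_ptr);
    printf("code:  %p\n", (void *)&main);  /* common extension */

    free(heap_ptr);
    return 0;
}
```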
The obvious disadvantage of this approach is that it doesn't prevent the bugs from being exploited - it only makes meaningful exploitation very hard or even impossible. And if one is also concerned about e.g. DoS attacks, then Security by Obscurity will not prevent them in most cases. The other problems with obfuscating the code are performance (the compiler cannot optimize the code for speed) and maintenance (if we got a crash dump from an "obfuscated" Windows box, we couldn't count on help from technical support). Finally, there is the problem of proving that the whole scheme is correct - that our obfuscator (or e.g. ASLR engine) doesn't introduce bugs into the generated code, and that we won't get random crashes later (crashes that we would most likely be unable to debug, as the code will be obfuscated).
I wonder if the above categorization is complete and whether I haven't forgotten about something. If you know an example of a security approach that doesn't fit here (besides blacklisting), please let me know!
I'm surprised nobody is working on an OS micro-kernel to run inside Domain 0 to provide separated drivers for the DomainU's. Domain 0 only needs to provide drivers and manage the DomainU's, and it doesn't even have to be really high-performing, since it is mostly handling relatively slow IO drivers for the DomainU's. It would seem to be an ideal place to start implementing a new micro-kernel OS using modern design principles, and eventually, when it became capable and fast enough, you could start running it in DomainU's alongside Linux, Windows, etc. This would help to greatly reduce one of the big vulnerabilities of the Xen way of providing security.
I sometimes describe cryptographic constructs as giving a designer the ability to concentrate and formalize secrecy (obscurity). Instead of having to hide everything, I can use crypto to encrypt, and then my problem is smaller - now I just have to protect the keys.
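A toy C sketch of that idea (a repeating-key XOR stands in for a real cipher here - this is an illustration, not real cryptography): once the data is encrypted, the ciphertext can sit in the open, and only the short key still needs protecting.

```c
#include <stdio.h>
#include <string.h>

/* Toy cipher (repeating-key XOR) standing in for e.g. AES.
 * NOT secure - it only illustrates the point above: secrecy of the
 * whole message is concentrated into secrecy of a short key. */
static void toy_cipher(unsigned char *buf, size_t len, const char *key)
{
    size_t klen = strlen(key);
    for (size_t i = 0; i < len; i++)
        buf[i] ^= (unsigned char)key[i % klen];
}

int main(void)
{
    unsigned char msg[] = "meet at dawn";
    size_t len = strlen((char *)msg);
    const char *key = "k3y";            /* the only secret left */

    toy_cipher(msg, len, key);          /* encrypt */
    printf("ciphertext: ");
    for (size_t i = 0; i < len; i++)
        printf("%02x ", msg[i]);
    printf("\n");

    toy_cipher(msg, len, key);          /* same operation decrypts */
    printf("plaintext:  %s\n", msg);
    return 0;
}
```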
Same thing with PKI: it concentrates the trust mechanisms into a smaller set of operations that can be more formally designed and managed.
trsm.mckay
Hi Joanna,
Very interesting post!
I am not sure how to categorize this concept, but I think there is room for deterrence, and also detection, delay, and response.
Security by deterrence seeks to prevent a threat from taking an action by inflicting consequences on the threat. Lack of attribution makes this difficult in the digital world.
Security by detection, delay, and response means identifying when an attack begins and holding the attacker off until a response process can stop the attack. Defense of a bank vault is the classic example: the intruder triggers an alarm while cutting into the vault, but the door holds until the police arrive. Digital examples involve increasing the time allowed between intruder accesses, but this model also tends to break down.
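A minimal C sketch of the "increasing the time between intruder accesses" idea (hypothetical numbers, purely illustrative): each failed attempt doubles the enforced delay, buying time for detection and response.

```c
#include <stdio.h>
#include <unistd.h>

/* Exponential back-off on failed authentication attempts: the delay
 * doubles each time, so a brute-force attacker is slowed down while
 * the alarm (logging/monitoring) has time to fire. */
int main(void)
{
    unsigned int delay = 1;                      /* seconds */
    for (int attempt = 1; attempt <= 5; attempt++) {
        printf("failed attempt %d: next attempt allowed in %u s\n",
               attempt, delay);
        sleep(delay);                            /* hold the attacker */
        delay *= 2;                              /* back off */
    }
    return 0;
}
```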
This comment is pretty much off-topic, but I wanted to alert Joanna and invisiblethings readers to one of the more startling aspects of the new Google Chrome browser and ask for opinions on the security implications. I'm hoping this might spur an investigation and a post.
Google Chrome installs on a per-user basis for both admin and limited users (under XP). With the plug-ins and Google Gears and Java capabilities, my question is:
"Is Google Chrome safe enough for me as an IT admin not to worry when my limited account users start installing Google Chrome on all the systems in my company?"
If not, I face a challenge in inhibiting the informal adoption of Google Chrome in my organization, because it appears that any limited user can install and use Google Chrome, taking me out of the loop on managing the security of the browser, certainly the most vulnerable part of the OS today.
Tracy, starting from Xen 3.2, on machines that have IOMMU/VT-d, one can assign devices to various DomU domains, moving the drivers out of Dom0 and into those DomUs, which are then called "Driver Domains". This is a damn cool feature and I love Xen for having it. It also effectively makes Xen a micro-kernel ;)
I think at some point we could replace the OS that runs in Dom0 (currently Linux or some BSD or Solaris) with something very small like MiniOS, which is used e.g. for the so-called stub domains in Xen 3.3 (another cool feature).
Hi Richard!
No, I haven't forgotten about detection. After all, I've spent quite a lot of time in the past researching various detection approaches, and I have always been the one trying to send the message: "prevention is not enough, we also need detection!". Gee, I even did it in our Xen 0wning Trilogy at Black Hat :)
But I think I made a silent assumption when coming up with this list: by "security mechanism" I mean "prevention mechanism". And here prevention could be defined as something that "doesn't allow even the first byte of the attacker's code to execute".
There is no doubt that we need detection/monitoring (at both the network level and the host level), but those techniques should always be treated as the last line of defense - more precisely, as something that verifies whether all the other prevention mechanisms indeed work.
We can imagine a system built using only one of the three approaches to security that I listed in my post, and we can imagine such a system being secure and usable (assuming the correctness, isolation or obfuscation is really *effective*). But I cannot imagine a system built only on the "security by detection" approach. Even if the detection were ideal (i.e. detecting all the threats), we should still always assume a non-zero response/reaction time, and this would mean that attackers always succeed to some degree (they are always able to pull some piece of information from my system), and probably also that my system can easily be put out of order (I assume many response approaches would effectively DoS me for a while).
As to deterrence - I'm not fully getting the concept of the security by deterrence approach in the digital world. Any example?
@my_two_cent:
Building your security policy on the fact that most browsers require admin rights for installation is plain wrong. I can always write a little program myself that would be e.g. a browser, make sure it doesn't require any admin rights for installation (or any installation at all), and just bring it from home to work.
If you want to limit users' ability to install and run programs, then you might want to look at the whitelisting mechanism present in all Windows versions since XP or 2000 (I forget the name right now). Needless to say, this is also bypassable (using an exploit for one of the legitimate programs), but that is much less trivial than bypassing the approach you're using.
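For illustration, here is a hypothetical C sketch of the core of such a whitelisting check (the paths and names are made up; real mechanisms also match on hashes and signatures, since a bare path is easy to spoof):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical allow-list: only programs on this list may execute.
 * A real implementation would verify a cryptographic hash or a
 * publisher signature, not just the path string. */
static const char *allowed[] = {
    "C:\\Program Files\\Internet Explorer\\iexplore.exe",
    "C:\\Windows\\notepad.exe",
};

static int is_whitelisted(const char *path)
{
    for (size_t i = 0; i < sizeof allowed / sizeof allowed[0]; i++)
        if (strcmp(path, allowed[i]) == 0)
            return 1;
    return 0;
}

int main(void)
{
    const char *candidate = "C:\\Users\\joe\\chrome_setup.exe";
    printf("%s -> %s\n", candidate,
           is_whitelisted(candidate) ? "run" : "blocked");
    return 0;
}
```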
Just to shed some light on the subject matter, I'd like to share with you my paper; maybe it'll add another dimension to the discussion.
ReplyDelete"Achilles Heel in the Philosophy of Prometheus Boundless Security", you can grab a copy at http://www.themutable.com
I think there is another approach:
Security by Intimidation.
Basically this consists of, instead of actually securing the system, going after anyone who says or publishes anything about the security of the system.
Like the French credit card that used a key that was too short.
Or having publications about certain network hardware classified as secret, as if that really improved security.
There also exists something that could be called "security by reducing". I mean: "do not store secrets", "minimize the time of data storage", "reduce functionality". That's similar to isolation - is it the same?
ReplyDeleteThis comment has been removed by the author.
I believe that defense-in-depth could be considered a fourth approach to security, being relatively independent of correctness, isolation, and obscurity. D-in-D is more than simply duplicating security functionality to protect against failure (i.e. to ensure correctness). Each layer in a deep defense should ideally also be designed so that, when broken, it leaves the attacker with fewer attack avenues or capabilities to employ on the next layer than he/she would have had if the previous layer was not there. In other words, good D-in-D should gradually reduce the attacker's degrees of freedom as he/she progresses through the defenses.
@alexander: I guess what you describe as "security by reducing" could be classified as either security by isolation or security by correctness, depending on the specific case.
ReplyDelete@louis: what you describe as "security by reduction" seems to be like a way to achieve security by correctness - you just hope to eliminate the potential bugs by disabling programs that might have them.
@anonymous: I think each layer of your "defense in depth" should be built using one of the 3 approaches I listed, so "defense in depth" itself is not a new way to provide security; it's just a marketing term for saying: "we have a few different security mechanisms implemented" (e.g. security by isolation and security by obscurity).
Louis,
If the commands are not there but I can execute commands, I can build the commands - for example by loading tools into the database and then storing them to disk, or via some other complicated form of file transfer.
I couldn't find the link, but I remember an interview with Alan Cox where he said that software is buggy because of lazy programmers; we can do much better, and actually do. He mentioned micro-processors as an example: bugs in them are really rare.
I think the first kind of security is actually very approachable, but there are a lot of lazy people in the world, just like me :)