Thursday, August 19, 2010

The MS-DOS Security Model

Back in the '80s, there was an operating system called MS-DOS. This ancient OS, which some readers might not even remember today, had a very simple security model: every application had access to all of the user's files and to all other applications.

Today, over two decades later, the overwhelming majority of people still use the very same security model... Why? Because on any modern, mainstream OS, be it Linux, Mac, or Windows, all user applications still have full access to all the user's files, and can manipulate all of the user's other applications.

Does this mean we haven't progressed anywhere since the MS-DOS age? Not quite. Modern OSes do have various anti-exploitation mechanisms, such as ASLR, NX, guard pages (well, Linux has had them since last week, at least), and more.

But in my opinion there has been too much focus on anti-exploitation and bug finding (and patching, of course), while almost nothing has been done at the OS architecture level.

Does anybody know why Linux desktops offer the ability to create different user accounts? What a stupid question, I hear you say: different accounts allow some applications to run isolated from the user's other applications! Really? No! The X server, by design, allows any GUI application to mess with all the other GUI applications displayed by the same X server (on the same desktop). So what good is a "random_web_browsing" user, if the Firefox running under this account can still sniff or inject keystrokes into all my other GUI applications, take screenshots of them, etc.?
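
To make this concrete, here is a minimal sketch (plain Xlib; no special privileges or configuration assumed) of a client that grabs a screenshot of the entire desktop with a single request. Keystroke sniffing is just as easy, e.g. by polling XQueryKeymap or using the XRecord extension:

```c
#include <stdio.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);  /* connect to the user's X server */
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    Window root = DefaultRootWindow(dpy);
    XWindowAttributes attr;
    XGetWindowAttributes(dpy, root, &attr);

    /* One unprivileged request grabs the pixels of the whole desktop,
     * including every other application's windows. */
    XImage *img = XGetImage(dpy, root, 0, 0, attr.width, attr.height,
                            AllPlanes, ZPixmap);
    if (img) {
        printf("grabbed a %dx%d screenshot of the desktop\n",
               img->width, img->height);
        XDestroyImage(img);
    }
    XCloseDisplay(dpy);
    return 0;
}
```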

[Yes, I know, user accounts also theoretically allow a single desktop computer to be shared among more than one physical user (also known as: people), but, come on, these days it's a single person who has many computers, and not the other way around.]

One might argue that progress in anti-exploitation, and also in safe languages, will make it nearly impossible to e.g. exploit a Web browser in the next few years, so there would be no need for a "random_web_browsing" user in the first place. But we need isolation not only to protect ourselves when somebody exploits one of our applications (e.g. a Web browser or a PDF viewer), but also, and perhaps most importantly, to protect against maliciously written applications.

Take a summer holiday example: imagine you're a scuba diver. Being also a decently geeky person, no doubt you will want some dive log manager application to store the history of your dives on a computer. There are a dozen such applications on the web, so all you need to do is pick one (you know, the one with the nicest screenshots), and... well, now you need to install it on your laptop. But hey, why should this little application, made by nobody-knows-who, be given unlimited access to all your personal files, work email, bank account, and god-knows-what-else-you-keep-on-your-laptop? Anti-exploitation technology would do exactly nothing to protect your files in this case.

Aha, it would be so nice if we could just create a "diving" user and run the app under this account. In the future, you could throw some advanced deco planning application into the same account, still separated from all the other applications.
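
Launching the app under such a dedicated account is trivial; a minimal sketch (run as root; the "diving" account and the divelog binary are hypothetical):

```c
#include <grp.h>
#include <pwd.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* look up the dedicated (hypothetical) "diving" account */
    struct passwd *pw = getpwnam("diving");
    if (!pw) {
        fprintf(stderr, "no such user\n");
        return 1;
    }

    /* drop to that account: supplementary groups, then gid, then uid */
    if (initgroups(pw->pw_name, pw->pw_gid) != 0 ||
        setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
        perror("dropping privileges");
        return 1;
    }

    execlp("divelog", "divelog", (char *)NULL);  /* hypothetical app */
    perror("execlp");
    return 1;
}
```

(In practice the app would also need to be granted access to your X display, e.g. via xhost, and that is exactly where the scheme falls apart, as explained next.)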

But, sorry, that would not work, because the X server doesn't provide isolation at the GUI level. So, again, why should anybody bother creating additional user accounts on a Linux desktop?

Windows Vista took a small step forward in this area by introducing integrity levels that, at least theoretically, were supposed to prevent GUI applications from messing with each other. But they didn't scale well (IIRC there were just 3 or 4 integrity levels available), and it still isn't really clear whether Microsoft treats them seriously.
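
For reference, here is a minimal sketch (Win32 API, Vista or later) of how a process can query its own integrity level; the predefined levels are SIDs whose final RID is 0x1000 (Low), 0x2000 (Medium), 0x3000 (High) or 0x4000 (System):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE tok;
    BYTE buf[256];
    DWORD len = 0;

    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &tok))
        return 1;

    /* the integrity level is stored as a mandatory-label SID in the token */
    if (GetTokenInformation(tok, TokenIntegrityLevel, buf, sizeof buf, &len)) {
        TOKEN_MANDATORY_LABEL *tml = (TOKEN_MANDATORY_LABEL *)buf;
        DWORD rid = *GetSidSubAuthority(tml->Label.Sid,
                        *GetSidSubAuthorityCount(tml->Label.Sid) - 1);
        printf("integrity level RID: 0x%lx\n", rid);  /* 0x2000 = Medium */
    }

    CloseHandle(tok);
    return 0;
}
```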

So, why we have user accounts on Linux desktops and Macs is beyond me. (I guess the Mac's window server doesn't implement any GUI-level isolation either; if I'm wrong, please point me to the appropriate reference.)

And we haven't even touched the problems that might arise from an attacker exploiting a bug in the (over-complex) GUI server/API, or in the (big fat) kernel (with its hundreds of drivers). For those attacks to become really interesting (like Rafal's attack that we presented yesterday), the user would have to already be using e.g. different X servers (switching between them with Ctrl-Alt-Fn), or some sandboxing mechanism, such as the SELinux sandbox, or, in the case of Vista, a scheme similar to this one.

15 comments:

WndSks said...

There are more than 4 integrity levels in Vista (the On-Screen Keyboard runs at Medium+X IL, etc.). The ILs are spaced 0x1000 apart, so there is plenty of space for custom levels. (I have never actually tried to create a process with a custom level, so I'm not sure if it is possible for normal apps to use custom levels at this point, or if we have to wait for Win8.)

Larry Seltzer said...

I think the model you're leaning towards might be a true microkernel model, where all system services run isolated in user mode, with all calls passing through a minimized amount of kernel-mode code.

There have been attempts to commercialize this model for ages, and some, e.g. QNX, were successful. But there's a difference between a small embedded system like QNX and a complex desktop GUI OS like Linux or Windows. They originally tried to make NT a microkernel and it was just too slow, and IBM tried to make a microkernel out of OS/2 ("Workplace OS", I think) and it took 40 minutes to boot.

So the short answer to your question is that there have always been performance trade-offs for this aspect of security, and we're only now, with virtualization and multiple cores and gigs of RAM, getting to the point where such isolation might be practical. But it's not yet mainstream.

Joanna Rutkowska said...

@Larry: I wrote nothing in the post that would even suggest a microkernel model. I only complained about the lack of isolation at the GUI level in mainstream OSes.

Also, please note that there was _no point_ (at least from a security point of view) in having microkernel OSes before 2009, as there were no laptops supporting VT-d, and without an IOMMU this model makes no sense.

visgean said...

joanna: you wrote "or in the (big fat) kernel (with hundreds of drivers)", which can seem like you want microkernels - they aren't fat :)

So, you are saying that there should be some separation of applications... and user files (information). That seems logical, but if the user had to set the access rights to files it could be really annoying ;)

RareCactus said...

I really like the Android security model. When you install an application, it asks for certain capabilities. Then, when it's installed, it has only those capabilities, and no others.

I wish it were more fine-grained, though. For example, a lot of applications request full network access just so that they can display some advertisements. Surely we can get more fine-grained than that. Also, it would be nice to be able to install an app but deny it some of the capabilities it requested. However, I can see why Google might be afraid of implementing this, because then users could effectively disable advertisements by blocking network traffic for certain applications.

I also like the Apple sandboxing API. If an application knows that it will never be writing anything to the filesystem, it can set a sandbox profile like "kSBXProfileNoWriteExceptTemporary". Effectively this causes it to drop the capability of writing to the filesystem forever. Even if you find a way to inject malicious code into the process, you will never be able to write to the filesystem (or at least, not without exploiting another vulnerability).
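
For illustration, a minimal sketch of that API (OS X's sandbox.h); once sandbox_init() succeeds, the process cannot undo the restriction:

```c
#include <stdio.h>
#include <sandbox.h>

int main(void)
{
    char *err = NULL;

    /* irreversibly drop the ability to write outside temporary directories */
    if (sandbox_init(kSBXProfileNoWriteExceptTemporary,
                     SANDBOX_NAMED, &err) != 0) {
        fprintf(stderr, "sandbox_init failed: %s\n", err);
        sandbox_free_error(err);
        return 1;
    }

    /* from here on, even injected code cannot write to the filesystem
     * (outside temp dirs) without exploiting another vulnerability */
    return 0;
}
```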

cmccabe@alumni.cmu.edu

ag4ve said...

At the end, you mentioned the SELinux "sandbox". Well, I think SELinux can be used to give you pretty much everything you are asking for (and possibly AppArmor too). The only problem with using SELinux to secure applications is that, AFAIK, you have to compile the configuration for every program/user you want to secure - this would be a PITA even if the configuration weren't so hard to figure out.

I think a happy medium would be for package maintainers to put SELinux configuration changes in their packages, so that security comes templated for most people.

Of course, there is also the chroot jail, which works for simple things like bind, web browsers and the like. This wouldn't work as well for your dive log unless you could export things out of the jail (weakening the security model).
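
For reference, the classic pattern looks roughly like this (run as root; /srv/jail and the jailed binary are hypothetical). Note that it confines only the filesystem view: if the X server's socket is reachable from inside the jail, there is still no GUI isolation:

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* confine the filesystem view to the (hypothetical) jail directory */
    if (chroot("/srv/jail") != 0) { perror("chroot"); return 1; }
    if (chdir("/") != 0) { perror("chdir"); return 1; }

    /* drop root before executing anything inside the jail */
    if (setuid(1000) != 0) { perror("setuid"); return 1; }

    execl("/bin/app", "app", (char *)NULL);  /* hypothetical jailed binary */
    perror("execl");
    return 1;
}
```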

Anonymous said...

Hello,
first of all, thanks for a great article. It really made me think about how I use my machine every day.
Since I'm a Mac person, I did some checking. It turns out that launching apps from root's or another user's shell (via the shell command "open") actually makes them run under my account. It's interesting because OS X doesn't allow me to run Firefox as anybody other than bilbo (i.e. my account). I'd have to switch user to ghost (i.e. my secondary, unprivileged account) to do random browsing. Such a scheme renders my (bilbo's) desktop unusable. I don't like it at all.
I haven't checked whether the secondary user's processes can listen to keystrokes, mouse events, etc. while running in a "switched" desktop, but I think I'll craft a small app to test that. Your article really made me think.
Once again, thanks :).

somebloke said...

"[Yes, I know, the user accounts allows also to theoretically share a single desktop computer among more than one physical users (also known as: people), but, come on, these days it's that a single person has many computers, and not the other way around.]"

Both scenarios are equally valid, especially in a corporate setting. For example, machines in 'hotdesk' areas may well be shared by multiple users who all have separate accounts on the machine (likely these are AD accounts, but from the local machine's perspective they are separate accounts with separate user profiles, user data areas, etc.).

And Citrix/MS Terminal Server solutions are effectively a number of users all logging into the same machine with different user accounts, when you think about it.

There are lots of examples where application separation gets really complicated. For example, if I run several word processors, they may all need access to the word processing documents on my machine, even those documents created by the other word processors - say, docs created in MS Word and docs created in OpenOffice. Similar situations exist for other application types, such as spreadsheet programs, graphics programs and so forth.

The other problem, of course, is the usual 'ease of use'/security trade-off. How could such a full-segregation security model be implemented between applications while staying user friendly?

Finally, the security model on most operating platforms requires that I elevate the application installer to 'admin' level so that the application can install files in 'system' areas, add entries to the Registry and so forth. At the point where I, as a user, have allowed this privilege elevation, it's pretty much 'game over' in any case, as the application installer has full access to my machine. Segregation between applications *after* installation therefore doesn't seem to have much point.

I'm not actually offering any solutions either, but that's only because I'm not sure what the solution is here. Is anything actually broken here?

Joanna Rutkowska said...

@RareCactus:

If an application knows that it will never be writing anything to the filesystem (...)

That assumes the app is not intentionally malicious, no?

@Shawn: No, SELinux, chroot, and similar Linux/Unix mechanisms DO NOT PROVIDE ISOLATION AT THE GUI LEVEL.

@somebloke: Of course! Requiring the user to run every app's installer as root/admin is a stupidity (as I wrote many years ago here).

Noah said...

Thank you for stating what seems to me so obvious, but which no one else in the security/IT industry seems to get. It's not about "removing admin rights" to stop people installing programs; it's about having separate sandboxed rights assigned to each program, so that each can access only the data it needs, a la the Apple iPhone model. People will always install unknown applications, if only to see what they do, but those applications should not be able to take over an entire system and its data.

Sam Bowne said...

Would it solve this problem to run each app in a different chroot jail?

Joanna Rutkowska said...

@Sam: No, no GUI isolation.

Ahmed Masud said...

Hi Joanna:

Interesting article. While I agree with the spirit of your argument, I think I disagree with some of the empirical points that you base it on.

DOS had only one operating mode (kernel mode, effectively), and everything that ran could poke at and do anything to any part of the operating system.

The security model in modern desktop operating systems is a Discretionary Access Control (DAC) model. While this still allows users to share their files and data, to say that these systems do not go beyond MS-DOS is perhaps a bit too harsh.

There is a clear separation of kernel-space and user-space processes in these modern systems. While the transition between them may be kept fairly lax, the separation is formal, does exist, and can be used to separate users to quite an extent. Interprocess communication paths within *nix-like systems (such as Linux and OS X) are well defined. In Windows NT-type kernels (NT, Vista, Win7, etc.) the kernel is a microkernel, and various components, including IPC, run within user space - an approach geared towards rapid development of closely knit applications that provides short-cuts to object embedding, etc.

You mentioned the fact that X allows one process to manipulate another. That is not as gross as you make it out to be. A properly configured X environment can and does provide some level of isolation. Not to take away from the argument that it's difficult to configure (X security cookies are still a black art), but to outright say that the mechanism is absent is a bit too severe. Moreover, if you look at the history of X, the whole idea was to have a dumb display (which would run your X server) that would eventually be able to display multiple applications; and, more to the point, the approach to display is also DAC. So if John allows Sally access to his DISPLAY, then that's John's prerogative, and Sally's good luck.

The other component is the lack of granularity within security systems. There has been a lot of progress on data at rest: file systems support POSIX extended attributes, encryption and more, so there is a lot of hope there. The same can be said for data in transit over the wire. The main place where security still falls short is the processing stage (running programs).

However, it's not for want of knowledge. Strong security models have existed for decades that provide sufficient separation of duty, control and isolation to achieve appropriate levels of secrecy, privacy and integrity. Orange Book Mandatory Access Control and Domain Control models (not to be confused with Windows domains) have been around since about 1984.

Modern systems have variants that provide, at some level, the ability to enforce security. Security-Enhanced Linux, for example, implements the ability to enforce various security models, including MAC, and is part of the core kernel. This extended security also encompasses GUI interfaces and the interprocess communication therein.

Security, especially fine-grained security, comes at a significant price in usability, speed and flexibility, and requires user training and understanding.

My company Trustifier (shameless plug) was founded on the very premise that high-end security as envisioned in the Orange Book, Common Criteria, DIACAP and the like has to be made easy to use in modern, distributed, heterogeneous operating environments.

We have to realise, however, that we cannot have it both ways: we cannot collectively turn up our noses at security by not learning about it and properly implementing it, while all the while complaining about the lack of security. Security is available, it is accessible, and it can be used. We just have to own up to the fact that, like anything else worthwhile, it takes time, effort, money and discipline to execute on it.

With Respect, Ahmed.

Simon Farnsworth said...

You might want to look into XACE and the way it integrates SELinux into X11.

It's not perfect, but it lets you bring the full power of FLASK/type-enforcement MAC to your GUI apps. The trouble, of course, is that writing FLASK/TE policy isn't easy - but it does get you the isolation from misbehaving apps that you want.

Joanna Rutkowska said...

@Ahmed:

Theoretically, most of the points you bring up are correct (except for the statement that NT is microkernel-based -- it is not, not even in theory). The reality, however, is much different:

1) The usermode/kernel-mode separation on Linux and Mac OS X is moot -- the user can load whatever kernel module she wants at will (unless you're some hardcore admin who manually recompiles the kernel with LKM support disabled, plus a few hardening patches, like /dev/(k)mem removal -- certainly not something that a desktop user should/could do). Vista/W7 tried to make this somewhat better by introducing a policy of loading only signed kernel modules, but it has been demonstrated several times that this doesn't work in practice (see e.g. Alex's and my BH 2007 presentation).

2) I don't agree with the comment about X server isolation -- please post an example config that would "properly configur[e] [the] X environment" so that it isolates e.g. my Firefox from the other apps. I would love to see one!

3) To the best of my knowledge, SELinux is currently unable to provide GUI-level isolation. If it were, then why would Dan Walsh (http://danwalsh.livejournal.com/), an SELinux developer, not be using it for his "sandbox -X" utility? Instead, he uses a dedicated (nested) X server (Xephyr) to provide GUI-level isolation from the main X server.

@Farnz:

AFAIK, SELinux's XACE hooks are not there yet (in fact this looks like a dead project); also see my argument above about why Dan Walsh does not use them for his sandbox.