Sunday, April 01, 2007

The Human Factor

When you go to some security conferences, especially those aimed at management staff, you might get the impression that the only problem mankind faces in the security field today is… that we’re too stupid and do not know how to use the technology properly. So we use those silly, simple passwords, allow strangers to look at our laptop screens over our shoulders, happily provide our e-bank credentials or credit card numbers to whoever asks for them, etc… Sure, that’s true indeed – many people (both administrators and users) make silly mistakes, this is very bad, and of course they should be trained not to make them.

However, we also face another problem these days… a problem of no less importance than “the human factor”. Namely, even if we were perfectly trained to use the technology and understood it very well, we would still be defenseless in many areas. Just because the technology is flawed!

Think about all those exploitable bugs in the WiFi drivers in your laptop, or vulnerabilities in your email clients (e.g. in your GPG/PGP software). The point is, you, as a user, cannot do anything to prevent exploitation of such bugs. And, of course, the worst thing is that you don’t even have any reliable way to tell whether somebody has actually successfully attacked you or not – see my previous post. None of the so-called “industry best practices” can help – you just need to hope that your system hasn’t been 0wned. And this is really disturbing…
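The post’s point about detection can be illustrated with a minimal sketch of a host-based integrity checker (purely illustrative – not any real tool, and the file paths and function names are made up). It hashes files and compares them against a known-good baseline; the catch, as argued above, is that a compromised kernel can simply feed this script clean file contents, so a “match” proves nothing:

```python
# Minimal host-based integrity check sketch (illustrative only).
# If the kernel itself is 0wned, it can lie to this script about
# file contents, so a passing check is not reliable evidence.
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(baseline):
    """Compare current file hashes against a {path: digest} baseline.

    Returns a {path: bool} map; True means the hash still matches.
    """
    return {path: sha256_of(path) == digest
            for path, digest in baseline.items()}
```

This is exactly the class of check that “industry best practices” recommend, and exactly the class that a kernel-level rootkit is designed to defeat.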

Of course, you can choose to believe in all this risk assessment pseudo-science, which can tell you that your system is “non-compromised with 98% probability”, or you can try to comfort yourself because you know that your competition has no better security than you… ;)
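Where a figure like “non-compromised with 98% probability” might come from can be shown with a toy Bayesian calculation (all numbers below are hypothetical, chosen only to reproduce that 98%): a scanner that misses 20% of real compromises, applied to a population where 10% of hosts are compromised, yields roughly that level of “assurance” from a clean scan.

```python
# Toy Bayesian sketch (all numbers hypothetical). A clean scan from an
# imperfect scanner still leaves a residual chance of compromise.
prior = 0.10        # assumed prior probability the host is compromised
detect = 0.80       # assumed probability the scanner flags a real compromise
false_alarm = 0.0   # assume the scanner never flags a clean host

# P(clean scan) = P(miss | compromised)*prior + P(no alarm | clean)*(1 - prior)
p_clean_scan = (1 - detect) * prior + (1 - false_alarm) * (1 - prior)

# Bayes' rule: P(compromised | clean scan)
p_compromised_given_clean = (1 - detect) * prior / p_clean_scan

print(f"{1 - p_compromised_given_clean:.0%}")  # prints "98%"
```

The “pseudo-science” complaint, of course, is that in practice nobody actually knows the prior or the miss rate, so the resulting percentage is mostly theater.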


denis bider said...

Indeed, it's quite disturbing. I hope your proposals for improving the security and verifiability of operating systems make headway.

In any event, it certainly seems like it will take 5-10 years before trustworthy OS fundamentals are ready for mass use, but I would prefer 10 years to 50.

Anonymous said...

The worst problem is the human factor. You talk about bad drivers! Humans develop drivers; the computer doesn't make errors in drivers. The human makes the errors, so a bad driver is the human factor too.

We must learn and improve our security knowledge.
I think basically all errors and all security problems are the human factor.

Unknown said...

I think educating users, and all the articles and policies saying you should educate users, is both important and unimportant. I know, this is a contradiction. Users should be educated as best we can, because they often detect the hack – the exploit coder is no more perfect than the rest of us. In "Cuckoo's Egg", Cliff Stoll detected the intrusion because the phone bill was incorrect. We as an industry also have to stop blaming the luser. Secure coding techniques and secure system design need to be just as important as building a bridge safely or making sure an airplane can fly on one engine. When the OS has the fault tolerance of a Boeing 777, then we will be getting somewhere. I think you are on the right track with your discussion. I might not understand everything you are saying, but some combination of a secure hypervisor and a trusted code base for the kernel makes sense. We should also stop expecting users to know whether they should press the "OK" button. --Michael

Cd-MaN said...

One quick philosophical comment about the "risk assessment pseudo-science" / probability part: afaik (I'm not a scientist), the current view in physics is that we all exist because of probability, in the sense that it is most probable for the atoms which form our bodies to behave such that we exist. However, this is all probability, and there exists a chance (although a very, very small one) that one day the atoms composing my body will move in such a way that I get disintegrated.

My point is that probability is part of our existence, and many things we take as certain are in fact things which are very probable, but not 100% probable. One should embrace and not fear the unknown.

Anonymous said...

It would be great if systems were safe. As far as we know, they are not.
To achieve safety, both the design and the implementation have to be perfect from a security point of view (not only the kernel, but even the installed software).
Most companies developing software do care about security. Unfortunately, their workers do not know all the possible ways to exploit their products. They are not "the hackers" – they are software developers. You can't blame them for that. Even you, a security expert, do not know all the paths an attacker can follow. Human factor.
Even if systems were built fulfilling all the modern requirements of safety, tomorrow a whiz kid will find a flaw starting a new branch of attacks; history has taught us that.
You do a good job pointing out security flaws. Although your work on the most advanced system compromises is impressive, you completely ignore "visible" attacks, which are more popular and still hard to find.
Using a zero-day exploit in Word or Acrobat Reader, someone can steal your data working in "stupid ring 3".
How often do you look at the code injected into processes? How often do you check what the user-mode threads do? How often do you look at the templates for new Office documents?
Are you sure that your Internet Explorer is not executed by an external application while your screensaver is active, opening a website and passing your data somewhere? ;)
The praised PatchGuard protects the most popular malware from being deeply monitored. That's why antivirus companies complain about it so much. It especially protects the malware that uses the human factor directly.

Anonymous said...

Human factor indeed, stupidity knows no bounds!

Anonymous said...

My view is that today's security problems are really a simple reflection of low-quality software. My current operating system is so complicated that reasonable test coverage is impossible. The result is bugs, lots of them. The operating system itself is so spread across thousands of shared libraries that I don't believe anyone even understands the big picture, much less has an "architecture" in mind. Today's operating system is very reminiscent of the "spaghetti code" associated with DOS some years back. The Winchester Mystery House also comes to mind.

When I think about what I do with a computer, and what others do with computers, I see no compelling reason for such complexity – especially in the light of non-backward compatibility.

Reduce the complexity and innovation will be forthcoming and the number of bugs will decrease.

At least that's how I see it.
Ed Bradford
b ed at nc dot rr . com

Anonymous said...

No one can know everything. Modern IT needs polymaths. A security expert or software developer can be an expert in only a small area. I think a software developer must be multiskilled. The best programmer is a security expert. The best security expert is a programmer.

Even if you knew everything, you could still make errors. It is the human factor.

David said...

Please pardon this expansive, questionable, and dissipative rant ... If one is destined for stupidity, then into stupidity he goes. Keep as many eggs in as many baskets as possible. You can never be safe and secure, that is an illusion. If you want to play the game, then be prepared to lose. What's more valuable, data, information, or wisdom? What is value? As anonymous says, software will always reflect our flaws as much as our brilliance.

Anonymous said...

"Just because the technology is flawed!"

I don't think the technology is flawed. I think when you combine the human factor and the word "variable" you get an answer.


Definition of "variable":

1. apt or liable to vary or change; changeable
2. capable of being varied or changed; alterable
3. inconstant; fickle
4. having much variation or diversity


Anonymous said...

Well, obviously mistakes can be reduced. Mistakes will happen, but they must occur at an acceptable rate, say 1 in a million.

HarryE said...

Being a humble user, and after watching the mighty IT departments and their monster machines blaming the poor user, I concluded:

- Computers are very naive machines: they believe whatever they are told.
- It is possible to record the input and reactions of the machines and make them act accordingly. If you tell a computer "black", ****???, XXX123, it will act on it, even if you do not know what it means. Face recognition? Who cares – you just say 11001000 or whatever the computer believes is a face and it will comply. A dog is smarter.
- Worse, computers are the best imitators and can be used to imitate anything.
- So, computer security cannot be achieved until computers think for themselves, and maybe not even then.

boris kolar said...

Yes, we can blame technology for much of our security chaos. However, it was the human factor that led us to this situation. Now, even eliminating the human factor of end users won't help, because the underlying technology is flawed.

Requiring administrative rights for every installation is a bad security policy, made by humans. Relying on millions of lines of code in the kernel and drivers being free of security vulnerabilities is also a bad security foundation, caused by the human factor. Allowing emails to run scripts on such a flawed operating system is another bad decision, also a human factor.

So in the end, we can conclude that our security problems are based on the human factor, but the problem is so widespread that fixing the human factor for end users is not enough.

Anonymous said...

Obviously the technology we use is flawed; I don't think anyone would argue that a bug in a browser should result in someone taking control of a computer.

I think you are misrepresenting the "human error" argument that many people, myself included, make. Most security problems at corporations are the result of incompetent management and lax configurations. It is possible to build a network that offers an acceptable level of security. Things can be segmented and separated. Most corporations can't and don't consider the case of an attacker with a tremendous amount of resources; given infinite resources, almost nothing is secure. Have you ever dealt with the security of a corporate network? If you have, you will quickly realize that invisible malware is the least of their worries. There are generally many, many more fundamental problems to fix.

Not all of the security problems of the real world can be avoided, and there's no reason to believe that we're going to have perfect avoidance in terms of IT security either; it's too expensive. That's why we have laws, courts and prisons. And yes, a host can be "owned" (I wonder who comes up with these types of words...) and be undetectable from its own perspective, but:
1 - It might be detectable from a network perspective
2 - Just because a host is "owned" doesn't mean that the corporation is in deep trouble, depending on the security measures in place.

For many corporations, having the right person murdered could be more destructive than "owning" a machine undetectably. Nobody goes around saying "they might murder our CEO and not be detected"; it's just obvious.

I am amazed at how much effort is spent on talking about and researching "owning" when there are so many (maybe less hip) real-world problems to solve.

Anonymous said...

There are all aspects of the "Human Factor" involved, from developers to end-users.
The technology is definitely flawed! The network protocols we still use (and abuse) today were designed decades ago for functionality, not security.
A lot of our current technology is really just stretching old technology in ways never conceived by its original designers, who simply could not have anticipated the nature of our present-day security environment.
To truly improve the security of the technology, the technology must be redesigned, from the ground up, with consideration to our current requirements, and those projected for the next decade or two.
Uh oh, wouldn't that cause compatibility problems? Imagine updating TCP/IP, Ethernet, even programming language revisions... Yes, absolutely it would. This is the larger picture of what Microsoft needs to do: rebuild from scratch to an architecture worthy of current and future security requirements.
Will it happen? No – not all at once, anyway. Did the "human factor" cause all of this? Yes – but I think it is more helpful to separate technology issues from human issues, so as to address each appropriately. Otherwise, everything gets jumbled into one big confusing mess.