[People] don't understand that the [compromised] operating system can impersonate the user at will!
The compromised OS could have saved your PIN to this [smart] card when you used it previously (even if you configured it not to do so!), and now it could immediately use the inserted card to authenticate as you to the bank and start issuing transactions on your behalf. And you won't notice any of this, because in the meantime it will show you a faked screen of your banking account. After all, it fully controls the screen.
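The scenario above can be sketched as a toy simulation. All the names here (SmartCard, CompromisedOS, etc.) are invented for illustration; this is not real PC/SC or banking code, just a model of why a cached PIN lets the OS authenticate at will:

```python
# Hypothetical model: the OS sits between the user and the card,
# so it sees (and can cache) every PIN the user types.

class SmartCard:
    """A card that signs challenges once unlocked with the correct PIN."""
    def __init__(self, pin):
        self._pin = pin
        self._unlocked = False

    def verify_pin(self, pin):
        self._unlocked = (pin == self._pin)
        return self._unlocked

    def sign(self, challenge):
        if not self._unlocked:
            raise PermissionError("PIN required")
        return f"signed({challenge})"   # stand-in for a real signature


class CompromisedOS:
    """Forwards the user's PIN to the card -- and silently caches it."""
    def __init__(self):
        self.cached_pin = None

    def user_enters_pin(self, card, pin):
        self.cached_pin = pin            # cache the PIN behind the user's back
        return card.verify_pin(pin)      # while doing what the user asked

    def attack(self, card, challenge):
        card.verify_pin(self.cached_pin)  # replay the PIN at will
        return card.sign(challenge)       # authenticate as the user


card = SmartCard(pin="1234")
os_ = CompromisedOS()
os_.user_enters_pin(card, "1234")            # legitimate login
token = os_.attack(card, "transfer $10000")  # later, without the user
print(token)  # signed(transfer $10000)
```

The card has no way to tell a replayed PIN from a typed one, which is the whole point: the second factor authenticates the card-plus-OS combination, not the human.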
The bottom line is that we cannot secure our digital lives if our client operating systems cannot be secured first.
But the introduction of tokens won't make our operating systems any more secure!
This article sparked lots of controversy, with many people considering it a fallacy to criticize two-factor authentication...
Today, I came across the news about Operation High Roller, discovered recently by McAfee and Guardian Analytics. They released a paper with some details about the attacks and the malware deployed. Some interesting quotes:
All of the instances that involved High Roller malware could bypass complex multi-stage authentication. Unlike recent attacks that collect simple form authentication data—a security challenge question, a one-time token, or PIN—this attack can get past the extensive physical (“something you have”) authentication required by swiping a card in a reader and typing the input into a field (see Two-factor Authentication sidebar).
The attack asks the victim to supply the information required to get around the physical controls of smartcard reader plus pin pad entry to generate a one-time password (or digital token).
Having collected all the information it requires for the entire transfer, the malware stalls the user and executes its transaction in the background using the legitimate digital token.
Multiple after-the-theft behaviors hide evidence of the transaction from the user. For example, the client-side malware kills the links to printable statements. It also searches for and erases confirmation emails and email copies of the statement. Finally, it also changes the transactions, transaction values, and account balance in the statement displayed on the victim’s screen so the amounts are what the account holder expects to see.
Defensive security is a difficult game, because one doesn't immediately see whether a given solution works or not. This is in stark contrast to other engineering disciplines (and to offensive security), where one usually has immediate feedback on whether something works well or not.
Say you want to build a redundant long-range video downlink for your unmanned, remotely operated helicopter -- you can throw in lots of money buying various high-gain antennas, circular antennas, antenna trackers, diversity systems, etc., but then ultimately you can verify your creation immediately by going out into a field, trying to fly a few miles away, and seeing whether you lose the video (usually in the middle of some life-threatening manoeuvre) or not. At least you can draw some lines around how good your solution is ("I can fly up to one mile away, but not more, unless there aren't that many trees around and the air is dry enough").
With security, especially with computer security, it is so different, because there is no immediate feedback. This results in various vendors pitching their products as wonderful solutions that just solve all the world's problems, even though what they're saying in those marketing materials might be pure nonsense... (BTW, congrats to Simon Crosby for apparently creating a Windows-hosted VMM in under 10k LOC! ;)
An often-made mistake is to say: "Perhaps there is a way to attack this solution, but then again, how much of the malware in the wild implements such attacks?" This is classical thinking in our industry, and, in my opinion, an inexcusable mistake! Let me say it clearly:
It doesn't matter what the malware in the wild does -- it matters what it could potentially do!
So, if a quick one-hour brainstorming session can point out potential attacks against some technology/product X, and we don't see a way to prevent them generically, then we should not bother implementing product X, because it will be defeated sooner or later. Let's not waste time on useless solutions; life's too short!
I mostly agree with you, but I think there is value in looking at what malware does, not only at what it could do. But this should all be done keeping in mind the cost of defensive techniques.
I dislike smartcards because the cost is too high, they are used as a liability-transfer mechanism, and, as we just saw, they don't do much for what they cost.
But Windows turning off USB auto-run: that seems like a low-cost thing, based directly on what malware does. Of course, there are so many other vulnerabilities, and the USB driver could have a flaw, but turning off auto-run is still useful.
You make some very good points in this article right up until the last paragraph. There you present anything that is not perfect and undefeatable security as a waste of time. All security is a matter of degree. Higher security just means that it takes longer to defeat, not that it can never be defeated. So yes, we need to implement security schemes even when we know there may be compromises we have not been able to defend against. You would not leave your home or car without locking the door, even though we all know that the locks can be defeated by a sufficiently determined and knowledgeable thief. It is the same with computer security: you will never be 100% secure, but you have to make the effort that is appropriate to that which you are securing.
@elfringham: No, I'm not saying it's useless to build products that are not 100% perfect. I'm saying that it's a waste of time to build security mechanisms that we can already, right at the beginning, show how to defeat. That's a significant difference!
Take our Qubes OS as an example -- I would never claim it's unbreakable; in fact, we list in our wiki the attack points on Qubes OS, i.e. the security-critical elements that might one day turn out to be buggy and get exploited:
Nevertheless, I think the Qubes architecture is reasonably secure, and I don't see any immediate weak points in its design, in contrast to many other security products/technologies where one can *immediately* point out problems.
Was reminded of this and your earlier article today, when I noticed this excited announcement: "The long awaited validation of the OpenSSL FIPS Object Module v2.0 ("2.0 module") is now complete"
"One very important difference to note is that a new requirement has been imposed on the distribution of the 2.0 module. The CMVP (the program granting the validation) has specifically disallowed the conventional process of downloading the source code distribution from a web site. To use the 2.0 module for production purposes where FIPS 140-2 validation is to be claimed the source must be obtained by a "secure path", and the most feasible such mechanism is transfer via physical media, i.e. a snail-mailed CD-ROM disk. We will provide such disks at no charge for as long as possible, see:
This will be a big deal for U.S. Govt. processing, setting standards for hardware and software security - subject to exactly the criticism you've provided.
I hope someone clues them in to this and the earlier blog posts, and gives "them" (GSA, HHS, etc.) an overview of the Qubes project.
Doors and locks are not useless even though a burglar can kick the door down or go through the window.
Having a security camera filming my house may not be totally useless to _me_ if it causes the thief to rob another house instead.
Similarly, using two-factor authentication may not be totally useless if it protects my transactions, as long as I have secured my OS by using Qubes :-), or by doing all my banking (and nothing else) in a VM, or by using Linux when the malware is written for Windows.
If it makes me a harder target than my neighbour then there is a good chance it will be of benefit.
Also, even if I use Windows, it could still increase the security of my Reddit account, since crackers will probably not care enough about that account to write an attack that can bypass two-factor authentication.
At work I have a smartcard that locks my screen when I remove it from the reader (the perceived risk being that my co-workers could abuse my account)... This implies that as long as I am in the office, malware which has managed to capture my PIN can abuse my smartcard.
It would help, to some extent, if the card reader had a hardware button that must be pressed to authorize each single authentication...
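The hardware-button idea can be modelled in a few lines. This is a hypothetical sketch, not any real reader's protocol (real devices with user-presence tests, e.g. FIDO tokens, implement this differently): each physical press authorizes exactly one operation, so a replayed PIN alone is no longer enough.

```python
# Invented model of a reader whose button must be pressed once per operation.

class ReaderWithButton:
    """Each physical button press authorizes exactly one signing operation."""
    def __init__(self, correct_pin="1234"):
        self._correct_pin = correct_pin
        self._presses = 0

    def press_button(self):
        # Only a human physically at the reader can do this;
        # malware on the host cannot fake it.
        self._presses += 1

    def sign(self, pin, challenge):
        if pin != self._correct_pin:
            raise PermissionError("bad PIN")
        if self._presses == 0:
            raise PermissionError("button press required")
        self._presses -= 1          # consume the one-shot authorization
        return f"signed({challenge})"


reader = ReaderWithButton()
reader.press_button()                       # user approves one transaction
print(reader.sign("1234", "pay rent"))      # works: signed(pay rent)
try:
    reader.sign("1234", "attacker transfer")  # malware replays the PIN...
except PermissionError as e:
    print(e)                                # ...but no button press is left
```

Note the limitation already hinted at above: malware can still swap *which* transaction gets signed during the user's legitimate press, unless the reader also displays the transaction details on its own trusted screen.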
Actually, I think the goal of two-factor authentication (as implemented by Google, for example) is to protect against phishing rather than malware. Phishing is arguably a lot easier than developing malware that steals your credentials, so two-factor authentication raises the required expertise level of the attacker significantly.
OTOH, I agree that thinking two-factor authentication helps against malware is naive...
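For the record, the one-time codes discussed here are typically generated per RFC 6238 (TOTP): an HMAC-SHA1 over a 30-second time counter, truncated to six digits. A minimal sketch, using the RFC's own test secret and vector, shows why the codes don't help against real-time malware: any code is valid for the whole time window, for whoever submits it first.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    counter = int(at // step)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"   # ASCII test secret from RFC 6238
print(totp(secret, at=59))         # -> 287082 (matches the RFC test vector)
```

The code proves only that someone holding the shared secret computed it recently; it says nothing about *which* transaction it authorizes, which is exactly the gap High Roller exploited.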
OK, two-factor authentication does help improve security, particularly two-factor using an SMS to a person's mobile device, as Google does. This means that the timing of such attacks is restricted and basic attacks are thwarted. (Also, if someone leaves a company, the physical means of two-factor authentication can be taken back. Of course, it can be cloned.)
So all this does is increase the difficulty of a hack -- yet, sadly, we keep thinking of it as a very, very secure mechanism.
From what I have heard of Flame, our entire trust in browser security is misplaced. As another commenter suggests, the best way to use internet banking or handle sensitive data would be via dedicated hardware, or a VM with a dedicated browser (with the usual SSL encryption and authentication mechanisms). Since Flame and Stuxnet can control hardware, the most secure way would be purpose-built devices.
Although for now, a dedicated browser will provide my required level of security. (Of course, a quick phone call to Visa to cancel my cards is my disaster-mitigation plan.)