Wednesday, March 25, 2009

Trusting Hardware

So, you're a decently paranoid person, running only open source software on your box: Linux, GNU, etc. You have the feeling that you could, if you only wanted to, review every single line of code (of course you will probably never do this, but anyway). You might be even more paranoid and also try running an open source BIOS. You feel satisfied and cannot understand all those stupid people running closed source systems like, e.g., Windows. Right?

But here's where you are stuck — you still must trust your hardware. Trust that your hardware vendor has not, e.g., built a backdoor into your network card's microcontroller…

So, if we buy a laptop from vendor X, which might be based in some not-fully-democratic country, how do we know they didn't put backdoors there? And not only to spy on Americans, but also to spy on their own citizens? When was the last time you reverse-engineered all the PCI devices on your motherboard?

Scared? Good!

Enter the game-changer: the IOMMU (known as VT-d on Intel). With proper OS/VMM design, this technology can address the very problem of most hardware backdoors. A good example of a practical system that allows for this is Xen 3.3, which supports VT-d and lets you move drivers into separate, unprivileged driver domains. This way each PCI device can be restricted to DMA only into the memory region occupied by its own driver.
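As a rough sketch of what that setup looked like in the Xen of that era (the device address, file names, and domain name here are invented for illustration; exact syntax varies between Xen versions):

```
# dom0 kernel command line: hide the NIC (BDF 0000:00:19.0 in this example)
# from dom0, so that only the pciback stub driver binds to it
pciback.hide=(0000:00:19.0)

# /etc/xen/net-driver-domain.cfg -- config for the unprivileged domain
# that will run the actual NIC driver; with VT-d active, the card can
# DMA only into this domain's memory
kernel = "/boot/vmlinuz-xen-domU"
memory = 128
name   = "net-driver-domain"
pci    = [ '0000:00:19.0' ]
```

The key point is the last line: the device is assigned to the driver domain, and the IOMMU is programmed so the card's DMA is confined to that domain.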

The network card's microcontroller can still compromise the network card driver, but nothing else. Assuming we are using only encrypted communication, there is not much an attacker can gain by compromising this network card driver, besides doing a DoS. Similarly for the disk driver — if we use full disk encryption (which is a good idea anyway), there is not much an attacker can gain from compromising the low-level disk driver.

Obviously the design of such a system (especially one used for desktop computing) is not trivial and needs to be thoroughly thought out. But it is possible today(!), thanks to these new virtualization technologies.

It seems, then, that we could protect ourselves against potentially malicious hardware. With one exception, however… we still need to trust the CPU, and also the memory controller (AKA northbridge, AKA chipset) that implements the IOMMU.

On AMD systems, the memory controller has long been integrated into the processor. Also Intel's recent Nehalem processors integrate the memory controller on the same die.

This all means we need to trust only one vendor (Intel or AMD) and only one component: the processor. But should we blindly trust them? After all, it would be trivial for Intel or AMD to build a backdoor into their processors. Even something as simple as:

if (rax == MAGIC_1 && rcx == MAGIC_2) jmp [rbx]

Just a few more gates in the CPU, I guess (there are apparently already about 780 million gates in a Core i7, so a few more should not make much difference), and no performance penalty. Remotely exploitable on most systems, and via almost any non-trivial program, I would guess. Yet totally undetectable by anybody without an electron microscope (and tons of skill and knowledge).

And this is just the simplest example, one that came to mind within a few minutes. I'm sure one could come up with something even more universal and reliable. The fact is — if you are the CPU vendor, it is trivial for you to build in an effective backdoor.

It's funny how various people, e.g. European government institutions, are afraid of using closed source software, e.g. Windows, because they are afraid of Microsoft putting backdoors there. Yet they are not concerned about using processors made by other US companies. It is significantly riskier for Microsoft to put a backdoor into its software, where even a skilled teenager equipped with IDA Pro can find it, than it is for Intel or AMD, where effectively nobody can.

So, I wonder whether various government and large corporate customers from outside the US will start asking Intel and AMD to provide them with the exact blueprints of their processors. After all, they already require Microsoft to provide them with the source code under an NDA, right? So, why not the "source code" for the processor?

Unfortunately there is nothing to stop a processor vendor from providing its customers with blueprints different from those actually used to "burn" the processors. So an additional requirement would be needed: that vendors also allow audits of their manufacturing process. Another solution would be to hire a group of independent researchers, equip them with an electron microscope, and let them reverse-engineer some randomly chosen processors… Hmmm, I even know a team that would love to do that ;)

A quick summary, in case you got lost already:
  1. On most systems we are not protected against hardware backdoors, e.g. in the network card controller.
  2. New technologies, e.g. Intel VT-d, can protect us against potentially malicious hardware (this requires a specially designed OS, e.g. specially configured Xen)…
  3. … except for the potential backdoors in the processor.
  4. If we don't trust Microsoft, why should we trust Intel or AMD?
BTW, in May I will be speaking at the Confidence conference in Krakow, Poland. This is gonna be a keynote, so don't expect new attacks to be revealed, but rather some more philosophical stuff about trusted computing (why it is not evil) and problems like the one discussed today. See you there!

31 comments:

  1. As always it is a pleasure to read from you.

    Thanks for sharing all this.

    /Paul

  2. I think it is not just the point of "not trusting Microsoft" that makes European authorities move from Windows to other OSes. It's more the basic idea of free software, and that it doesn't always have to be Microsoft. As I live in Germany, I can tell you that there isn't as much open source software in governmental institutions as there should be.

  3. Excellent! Have you considered that similar backdoors may be built into mobile phones, passports, metro cards, tickets, etc.? One could die of fear after all :)
    and I wish you good luck at Confidence ;)

  4. Joanna,

    If you haven't already seen this paper related to malicious hardware, you should give it a read:

    http://www.usenix.org/event/leet08/tech/full_papers/king/king_html/

    Regards,
    Jon Oberheide

  5. @wins mallow: I think you're a bit too paranoid. I really don't care if the gov builds backdoors into my passport, metro tickets, etc. What could they possibly gain?

    As for mobile phones -- well, being a moderately paranoid person, I treat all smart phones with a dose of reserve -- I don't put my PGP keys there, and I generally treat my phone as if others (e.g. the mobile operator) had access to it. So, they can read my unencrypted email -- so what, my mail service provider can do the same. PGP is the way to protect against this.

  6. Thank you for voicing these concerns.

    Why stop there? What about chips in vehicles having the potential to fake faults to drive sales of auto parts?

    It's just an example of the potential scale of the problem: it need not be a networked device; just follow the money.

  7. Oh, what you said is very scary! Ha~, but thanks for sharing all this.

  8. Take a look at this: Trojan Detection using IC Fingerprinting
    http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=4223234

  9. Back in 1984 Ken Thompson gave a Turing Award Lecture "Reflections on Trusting Trust" of which this is a familiar echo.
    http://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf

  10. Your readers are well aware! :-)

    But this post also reminded me of Kris Kaspersky's presentation at HITB 2008.

    http://conference.hitb.org/hitbsecconf2008kl/?page_id=214

    "Intel Core 2 has 128 confirmed bugs. Intel Itanium (designed for critical systems) looks more “promising”, carrying over 230 bugs. They have all been confirmed by Intel and described in errata section of their specification updates. Some bugs “just” crash the system (under quite rare conditions) while the others give the attackers full control over the machine."

    Anyway, the point you drew attention to is something I have been commenting on for years: the fact that software is closed doesn't mean it is more insecure.

    []´s

    Alberto Fabiano

  11. It was a pleasure to read this post, and I will continue reading the rest of the blog; you explain really clearly some things that are far from everyday language.

    As you say (I must confess that I prefer open and free software, but also use commercial software), commercial software is not less secure or more dangerous than open and free software.

    A pleasure to find your blog!

  12. Be happy paranoiac. :)

  13. BajoYo: It seems you missed the point about FOSS. The main asset of FOSS over closed source is that you can fix it (or hire someone to) if it is broken. How much closed source software remains unpatched for a loooong period?

    Patching hardware is a real problem anyway... :)

  14. call me old-fashioned, but I don't buy the paranoia related to hardware backdoors; it is still much easier to buy or blackmail humanware than to attempt to hide a backdoor in the hardware; all the best software rootkits are finally discovered, and not because somebody spends hours trying to find them, but because they somehow manifest their presence at some stage; so will hardware rootkits, because they need to either communicate what they intercepted or be fetched by somebody via physical contact; and if the somebody who finds it happens to be Russinovich-like, it is more than certain that the media will pick up the story and the rootkiting company's stock and image will be ruined forever; the strength is not in preventing rootkits in hardware or software, but in people; sorry for such a cliche, but we live in a world that works very much because we trust somebody; you take the plane, you eat bread, you breathe - are you sure you can check all the variables on the way?
    check this one out - http://www.youtube.com/watch?v=1Xhdy9zBEws

    with your proposed approach to giving blueprints to governments, or taking it even further - to masses, who will be able to understand them anyway? masses will rely on opinions of experts and experts are not independent as long as they work for somebody or can be bought/blackmailed/etc.

    not-fully-democratic country... as a matter of fact, living in a post-communist country you are experiencing more freedom than Western Europe, the US, or Australia... they actually believe they are free... the ultimate slavery

    you can't trust any single word in my post :)

  15. @the-old-fashioned-anon:
    Please note the distinction between a rootkit, which is something that "lives" and is active, vs. a backdoor -- something that could be waiting passively for ages, not doing anything by itself. E.g. some additional "if" clause that normally is never taken (here "if" == some additional logic gates).

    You are right, however, that once somebody decided to *use* the backdoor, then it is all over (well, in the ideal world at least -- in our world the vendor might just say it was an accidental bug...).

    But I see this as some form of an ultimate weapon that could be used e.g. in case of a war or terrorist attack, etc. Again, just building backdoors into processors seems totally safe for Intel and AMD. *Using* them might not be safe, but having them in place, just in case, seems like a reasonable move.

    BTW, of course I didn't mean Poland when I said "not-fully-democratic country" -- Poland is part of the EU and NATO, and it really is democratic. Also, there are no computer hardware vendors based in Poland AFAIK.

  16. @joanna from the-old-fashioned-anon:

    I do note the distinction between a rootkit and a backdoor as used in popular culture, but I don't agree with your clarification :) I believe that a rootkit doesn't necessarily imply the "active" part - we have sort of gotten used to rootkits that need to be executed on the system and then must be actively running to hide themselves, but what about passive rootkits?
    if we agree that the role of the rootkit is to be invisible (and to be invisible _only_), then hiding code in slack space or in sectors at the end of the physical disk is a very good (passive) rootkit functionality
    it is pretty close to your backdoor definition

    interesting observation about "all over"; I agree and the mysterious bug is a tempting idea... a scenario where the passive backdoor of some sort triggers in a printer/scanner when a specific document is being processed and activates some sort of an easter egg - it can be just preserving all the documents scanned after a specific event triggered and then after collecting enough information signal a critical fault which requires a specialist's help to repair it (it's Mr. backdoor data collector in fact)

    hmm when you think of it... modern scanners have such sort-of-backdoors to prevent scanning money - do they store the information about detecting such events anywhere? :)

    if we assume we actually have the infected processors in place, what could trigger them to do what they are supposed to do? time? a specific value of registers/memory content? how could such an event be delivered, though? if we look for similarities in the software world... for botnets, it's hard to control the target; for targeted attacks, we don't use a universal approach... hard to believe someone would go for such a massive investment in an area where so many things can go wrong... but of course, it's possible

    a very interesting idea ... perhaps it's time to start working on CPU-fuzzers :)

    regarding BTW
    I know you didn't mean your country; I think you meant... well, if you want to continue this topic offline, I would be happy to elaborate... i don't want to pollute your blog with off topics :)

  17. Interesting Post :) However personally I'm more scared of something much simpler. Vendor X sells all laptops/keyboards with a built-in keylogger.

    Nowadays it's just too easy/cheap to include a keylogger that is so small you won't notice it, and it can record everything you type.

    The only solution I can think of is an on screen keyboard, which is quite annoying to use.

  18. @yet-another-anon:

    Compared to a few additional gates in a 45nm processor (which already has 700 million of them), an additional keylogger can be spotted much more easily by others (e.g. reversing enthusiasts). And the vendor would risk going bankrupt once the keylogger is found. Also, a keylogging unit is definitely not something that the vendor could try to excuse as an "accidental bug".

  19. it may be trivial to build a backdoor, but it might be less trivial to bring home the 'win'. in general, i cannot advise this paranoia - it just doesn't add up to make practical sense. open hardware makes sense, but i'd give other, economic reasons. open source makes sense: you can't read source that is not there. i indeed believe that every line of code gets monitored over time. errors get published. that's exactly the reason the concept of trust is better developed in open environments: you must not lose it! the others _will_ get you before you reach critical distribution. in general, i have to trust 'the others', the press, and maybe these wireshark authors ;), but not my hardware, as long as it does its job. if keyloggers don't scale then cpu-bugs scale even less. in case of blade and/or virtual systems, i have to trust you.

  20. In addition to the King and Agrawal papers cited above, there was a nicely written article about the US government's take on "trusting hardware" (http://www.spectrum.ieee.org/may08/6171). The idea is that even if chips are designed properly, they are manufactured somewhere that may not be trustworthy. This is even worse than backdoors inserted at chip design, because randomly modified chips are unlikely to be detected by random inspection. And who has the time or resources to verify each and every chip they use?

    There's also a workshop that addresses these kinds of problems, including threats from design through chip fabrication: http://www.engr.uconn.edu/HOST/

  21. Great article! I'll just trust now in pencils and papers... rsrsrs

  22. this is exactly what Loic Duflot discussed in SSTIC 2008 : http://actes.sstic.org/SSTIC08/Bogues_Piegeages_Processeurs_Consequences_Securite/

    you have to understand French...

    Laurent

  23. "this is exactly what Loic Duflot discussed in SSTIC 2008 [...]" Well, no, it doesn't seem to be even close to the same. While I don't know French, I know how to grep through a PDF ;) And the document you quoted does not contain terms such as VT-d or IOMMU, which are key elements of my post here.

  24. Things are getting a little hot around here with all your talk of having a built-in backdoor into your network card micro-controller.

    I thought we'd agreed this was our little secret?

    :)

  25. IT'S IN YOUR FACE!!!
    Britain Lets Police Hack PCs Without Warrants

    http://www.foxnews.com/story/0,2933,476904,00.html

    http://www.youtube.com/watch?v=wlj7u3tOQ9s

    WE NEED OPEN HARDWARE!!!

  26. Another story: Internet runs on Cisco routers, those closed-everything beasts. Cisco simply rules the Internet!

  27. It's actually worse than you think. A CPU is too complex for a human to design at the physical level, so we have to rely on software to do the design. This software could be compromised without the knowledge of an honest engineer. Security is an illusion.

  28. Well, in answer to your question of why we should trust Intel / AMD, I guess the answer is that people don't! I mean, that has to be one of the main reasons China is building the Loongson processor.

    (Failing to login with OpenID because blogger can't handle the URL length in the login redirect!)

  29. @Ebrahim:
    The actual defense against backdoors in the infrastructure (e.g. Cisco routers) is good crypto protocols, e.g. SSL. Of course, sometimes an SSL implementation might be buggy (e.g. Debian), or the way we use it might be wrong (think sslstrip), but still Cisco would have to count on some bug somewhere. In practice we can effectively protect ourselves against evil network infrastructure (in fact, most of the research in computer security over the past few decades has focused on just this problem).

    On the other hand, we have simply no means of protecting ourselves against potential backdoors in CPUs (besides building our own processor design & production facility). We can protect against backdoors in all the other hardware components through VT-d/IOMMU, but not against backdoors in processors.

  30. Hi Joanna,

    Take a look at

    http://www.springerlink.com/content/jp07870p24560678/

    Questions:
    1. Do you think there is some way to avoid this problem with some sort of "defensive" software design?

    2. Do you think "open hardware" CPUs would help to avoid CPU backdoors? (P.S.: I think open hardware CPUs are as "obscure" as any other CPU.)

    Cheers,

    Hyperluz
