Observations: Opinion, arguments & analyses from the editors of Scientific American
Hacked Hardware Has Been Sold in the U.S.

The views expressed are those of the author and are not necessarily those of Scientific American.


Last week, an official at the Department of Homeland Security (DHS) told a congressional panel that hardware sold in the U.S. has been compromised by foreign agents. According to a report at Fast Company:

When asked by Rep. [Jason] Chaffetz [R-UT] whether [acting deputy undersecretary of the DHS National Protection and Programs Directorate Greg] Schaffer was aware of any foreign-manufactured software or hardware components that had been purposely embedded with security risks, the DHS representative stated that “I am aware of instances where that has happened,” after some hesitation.

In other words, hardware manufactured abroad has been embedded with malicious code, a problem described last year in Scientific American by John Villasenor, a professor of electrical engineering at the University of California, Los Angeles. The design of modern integrated circuits has become so complex, says Villasenor, that malicious agents could insert unwanted instructions into the circuits at some point in the process. “Given the sheer number of people and complexity involved in a large integrated-circuit design, there is always a risk that an unauthorized outsider might gain access and corrupt the design without detection,” Villasenor writes.

What’s at stake here? Villasenor uses the example of a cell-phone circuit that’s programmed to shut down millions of phones at a predetermined time. And that is a comparatively innocuous example. Villasenor writes:

The difficulty of fixing a systemic, malicious hardware problem keeps cybersecurity experts up at night. Anything that uses a microprocessor—which is to say, just about everything electronic—is vulnerable. Integrated circuits lie at the heart of our communications systems and the world’s electricity supply. They position the flaps on modern airliners and modulate the power in your car’s antilock braking system. They are used to access bank vaults and ATMs and to run the stock market. They form the core of almost every critical system in use by our armed forces. A well-designed attack could conceivably bring commerce to a halt or immobilize critical parts of our military or government.

What can be done? Villasenor advocates for circuits that are designed to police themselves, searching for abnormal activity in their sub-units and taking protective action if any is found. This would sacrifice a bit of performance, but protect the circuit as a whole.
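In software terms, Villasenor's self-policing idea might be sketched as a monitor that compares each sub-unit's activity against an expected operating profile. The classes, names and thresholds below are invented for illustration; they stand in for circuit-level machinery, not any real design:

```python
# Hypothetical sketch of a self-policing circuit: a monitor watches
# per-cycle activity counters on each sub-unit and flags anything
# outside its expected profile. Real monitors work in silicon; this
# only illustrates the concept.

class SubUnit:
    def __init__(self, name, expected_max_ops):
        self.name = name
        self.expected_max_ops = expected_max_ops  # assumed profile limit
        self.ops_this_cycle = 0

class Monitor:
    def __init__(self, units):
        self.units = units

    def check(self):
        """Return names of sub-units behaving abnormally, then reset."""
        anomalies = [u.name for u in self.units
                     if u.ops_this_cycle > u.expected_max_ops]
        for u in self.units:               # reset counters for next cycle
            u.ops_this_cycle = 0
        return anomalies

alu = SubUnit("alu", expected_max_ops=4)
radio = SubUnit("radio", expected_max_ops=1)
monitor = Monitor([alu, radio])

alu.ops_this_cycle = 3      # within profile
radio.ops_this_cycle = 7    # a radio firing 7 times per cycle is suspicious
print(monitor.check())      # -> ['radio']
```

The extra comparison logic is the "bit of performance" sacrificed: every cycle spends some work checking counters that, in an unmodified chip, would do nothing useful.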


Photo courtesy Karl-Ludwig Poggemann on Flickr


Comments (19)

  1. Butchfoote 4:19 pm 07/11/2011

    This is just about total crap. The hardware, i.e. Intel/ARM CPUs, isn’t the problem. They won’t pass self-test if any of their microcode fails.

    The problem is software, not the circuits.

  2. drafter 4:53 pm 07/11/2011

    A self-monitoring circuit will require even more code and real estate, thus actually making it more vulnerable. And who will monitor the monitor’s programmers or makers?

  3. blk 5:02 pm 07/11/2011

    This is a legitimate concern. US corporations have been sending pretty much all IC manufacture and hardware out of the country for decades now. And now IC and product design is also being sent abroad.

    You can no longer buy computers manufactured in the United States by US corporations from US components. Everything is made abroad, from CPUs, to memory, to computer monitors, to hard disk controllers, to the BIOS ROMs, including the software burned on those ROMs.

    When many of the companies that build these parts are owned by foreign governments, it creates a very real threat to our national security. That’s in addition to the economic insecurity caused by our lack of competitiveness in a field that we used to dominate.

    Yes, we design things like the iPad. But we cannot build them. And it won’t be long before we lose that single edge we have, because many corporations are sending their intellectual property out of the country to avoid paying US taxes.

  4. jtdwyer 5:06 pm 07/11/2011

    More than a self-monitoring circuit would be needed: a monitoring supersystem would be needed to ensure that all components (processors, microcode & software, etc.) continually operate within specifications. Of course, there are no specifications, compounding the difficulty…

  5. DutchGuy 5:29 pm 07/11/2011

    I used to work for a company that designed chips for ATMs, cameras and other high-end equipment.

    A CPU is a collection of hard-wired routines. Software invokes these routines via similar routines. It’s quite simple to embed hardwired routines that, e.g., give low-level access to the registers of a CPU via a ‘magic combination’. From there everything is possible, depending on your tools and framework. You may have similar control of other devices. Theoretically a worldwide shadow network with shadow OSes is possible. Direct access to, e.g., keyboard, mouse and screen subroutines. No virus scanner, firewall or even self-test would ever notice. Strictly by design.
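A toy software sketch of the ‘magic combination’ idea described above. The CPU model, trigger value and register names are all invented for illustration; a real backdoor would be wired into silicon, not written in Python:

```python
# Hypothetical backdoor sketch: an instruction handler that behaves
# normally for every documented input, but leaks privileged state when
# fed one specific, never-documented operand. A self-test exercising
# only specified inputs would never hit the hidden path.

MAGIC = 0xDEADBEEF  # invented trigger value

class ToyCPU:
    def __init__(self):
        self.registers = {"r0": 0, "r1": 0, "secret_key": 0x5EC2E7}

    def load_immediate(self, reg, value):
        if value == MAGIC:                  # hidden path
            return dict(self.registers)     # leak everything, incl. the key
        self.registers[reg] = value         # normal, documented behavior
        return None

cpu = ToyCPU()
assert cpu.load_immediate("r0", 42) is None   # looks completely normal
leaked = cpu.load_immediate("r0", MAGIC)      # trigger fires
print(hex(leaked["secret_key"]))
```

Note that from the outside the chip passes every functional test: only the one magic operand, out of billions of possible values, misbehaves.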

    I am not surprised this comes up as a possibility. Easter eggs in hardware are not exactly new. We already had them in the eighties, and those were for fun. Nowadays there are hardwired subroutines for debugging, sometimes causing bugs themselves.

    ATM chips are certified safe because they comply with a vast set of regulations. This set however does not cover everything down the line. The reason is simple: most people don’t have a clue what’s going on when a chip is designed.

    I didn’t see engineers working on secret ATM backdoors in our company. People were way too busy with deadlines and lunch. I don’t think backdoors are policy but it’s certainly possible with little effort.

  6. justanobody 5:57 pm 07/11/2011

    Thanks for the comment – very informative. It seems like a random sampling of a large number of computers/electronics would be enough to give us a really good idea as to whether this is even a problem, though. Perhaps that has been done and is precisely the reference Mr. Schaffer was making.

  7. jtdwyer 7:35 pm 07/11/2011

    But this is nonsense. How would a large-scale sampling of hardware or software to detect embedded ‘Trojan horse’ or ‘Easter egg’ destructive elements be achieved?

    There is no way to detect all or even many potential implementations of such destructive elements. Current security software focuses primarily on detecting viruses in subscribers’ software once they have been detected and analyzed in someone’s system – by identifying markers such as software modules that are larger than those distributed by the producer. A yet-undetected method of implementation cannot be identified or detected.

    Consider the problem to be analogous to inoculating a population against all potential flu viruses…

  8. byronraum 7:49 pm 07/11/2011

    Butchfoote, I cannot help but wonder if you actually read the article. If I gave you an Intel-like CPU, could you always tell with certainty if it had been hacked or not? Before you answer yes, please understand that this is a variant of the halting problem. It is mathematically provable that the answer is "no."

  9. jtdwyer 7:58 pm 07/11/2011

    The article states:
    "When asked by Rep. [Jason] Chaffetz [R-UT] whether [acting deputy undersecretary of the DHS National Protection and Programs Directorate Greg] Schaffer was aware of any foreign-manufactured software or hardware components that had been purposely embedded with security risks, the DHS representative stated that “I am aware of instances where that has happened,” after some hesitation."

    IMO, Mr. Schaffer’s hesitation was most likely required for him to assess how he could respond to promote his agency’s interests without saying anything that could be discredited… I don’t accept his claim without substantive evidence.

    That is not to say that this article is misleading – there is obviously an enormous potential for disruptive elements to be embedded in hardware and especially software, but there is no known generally effective method of detection.

    The article concludes by asking: "What can be done?" and answers:
    "Villasenor advocates for circuits that are designed to police themselves, searching for abnormal activity in their sub-units and taking protective action if any is found. This would sacrifice a bit of performance, but protect the circuit as a whole."

    In my professional opinion, this suggestion is utter nonsense!!

  10. DutchGuy 10:54 pm 07/11/2011

    Consider ‘hacks’ as bugs. Bugs can make a system vulnerable, and bugs are what virus writers and hackers look for. A low-level ‘backdoor’ on a chip would look like a bug or a set of bugs in the hardware. You would see rewired parts that form routines or instructions, if you follow the paths and decode what it means. Since features are very small and in large numbers, it’s hard to detect mismatches with the blueprint.

    The proposition of systems policing themselves is fine in theory (apart from the question "who is going to police the police"); in reality it would be like making a chip that detects bugs in itself. Systems already do that on some levels. Electronic circuits are never 100% reliable, and errors need to be corrected to keep a system stable. Finding design bugs on the fly is another story and much, much harder. A policing system would have to know what to look out for. In Windows you can spot suspicious behavior, like a strange process contacting the internet; on a chip it’s basically ones and zeroes.

    The natural stability errors are probably not the bugs a policing system would look for, since these are expected, yet exactly that could be abused. A ‘magic combination’ could interfere with the stability of a system by, e.g., tampering with the voltage at some predetermined key points. A malicious manufacturer could put some inferior connections in a chip, basically undetectable, that will burn out when some extremely rare conditions are met. Now, in the rare occasion where these conditions actually are met without some help from the outside, the chip would simply be broken. Happens all the time; nobody would notice. If a pattern in breakdowns is discovered, it would look like a bad batch. That also happens all the time.

    Anyway, in this case severe instability or a shutdown would be the payload and a policing system would never notice it. A small OFF ‘button’ would be fine enough for some hostiles.

    I can think of other nasty things, like bug triggering: safe systems could suddenly become vulnerable to malicious code at a higher level. The malicious code would look and act harmless as long as the bug has not been triggered.

    Virus makers and hackers look for unknown bugs in systems to exploit. In this strategy a malicious manufacturer would embed a sleeping, artificial bug and have the malicious code and trigger written for it, distributed in, e.g., driver software. It’s pure theory and far-fetched, but not impossible.

  11. SpottedMarley 8:14 am 07/12/2011

    so then, Unabomber manifesto… not so crazy after all

  12. RHoltslander 11:38 am 07/12/2011

    I found it a bit surprising and a little amusing that John Villasenor only identified "an unauthorized outsider" as a possible route. I suspect that the authorized insider is also a possibility, and I daresay a more likely route, especially considering the number of people involved in the development and manufacture of these things. Policing the thoughts, politics and feelings of everyone who is involved is impossible, if not also undesirable.
    And the suggestion that doing it all "state-side" would somehow solve this problem is a bit naive.

  13. eclipx 2:16 pm 07/12/2011

    There is absolutely no way to detect a decent implementation of hardware with spy routines built in. Say it was an all-in-one motherboard. All one would have to do is build a pass-through gate for a particular subset of data, say, a keyboard, and then pass all output to a rogue network adapter that simply sends the data out on the net. To detect this, you’d have to be on the ISP end and know what to look for. But then again, the data could also easily be encoded.
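A toy sketch of that pass-through idea. The class and buffers are invented stand-ins: a real implant would live in board logic, with a network adapter rather than a Python list as the exfiltration sink:

```python
# Hypothetical pass-through tap: every keystroke is forwarded to the
# host unchanged, so the board behaves identically to a clean one,
# while a shadow copy accumulates for exfiltration.

class PassThroughTap:
    def __init__(self, host_buffer, exfil_buffer):
        self.host = host_buffer     # the legitimate data path
        self.exfil = exfil_buffer   # stand-in for a rogue network path

    def on_keystroke(self, scancode):
        self.host.append(scancode)   # normal path: host sees everything
        self.exfil.append(scancode)  # shadow path: so does the attacker

host, exfil = [], []
tap = PassThroughTap(host, exfil)
for code in [0x23, 0x17, 0x1F]:   # a few scancodes typed by the user
    tap.on_keystroke(code)

assert host == exfil   # from the host's side, nothing is out of place
print(host)
```

The point the comment makes holds in the sketch: nothing observable on the host side distinguishes the tapped path from a clean one; only traffic analysis outside the machine could notice.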

  14. jcvillar 11:06 pm 07/12/2011

    What a surprise. We get greedy and rely on foreign nations to manufacture our chips with slave labor and guess what happens? Is there anyone here who didn’t expect this?

  15. rwstutler 5:17 am 07/13/2011

    It is nonsense in your ‘professional’ opinion? Doesn’t sound that way to me, though it envisions something like an Underwriters Laboratories cert, sticker or seal, to go along with a credible organization that can actually be trusted to vet the self-policing subassemblies, chips, or what-have-you.

  16. jcvillar 10:32 am 07/13/2011

    The only solution I can see is that security clearances be required for anyone writing the code for sensitive hardware and the code be examined to ensure it is to spec when the chip comes in from the manufacturer.

    On top of that, x-ray the chip to ensure no additional structures have been added to it.

  17. LarryL 10:39 am 07/16/2011

    "the code be examined to ensure it is to spec"

    It’s certainly possible to examine even a complex chip or system and verify that it behaves exactly as expected when given an expected set of inputs as called for in the specification, but how do you verify that it behaves politely for every possible set of *unexpected* inputs?

    Remember when cheat codes appeared in video games? A quality assurance process might be able to verify that a game performed exactly as expected when you press the expected combinations of up, down, left, and right. But if you tested only the expected specified inputs, how would you discover that "up up down down left right left right B A" would give your character unlimited lives? And as alluded to before, the number of possible unexpected inputs in a complex system is effectively infinite.
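The "effectively infinite" claim above is easy to make concrete with back-of-the-envelope arithmetic. The button count, sequence length and checking rate below are assumed numbers chosen only for illustration:

```python
# Rough illustration of why exhaustive input testing is hopeless.
# A game pad with 8 inputs pressed over a 10-step sequence already
# yields 8**10 distinct sequences; a chip with a 64-bit input bus
# has 2**64 possible values at every single clock cycle.

buttons, sequence_len = 8, 10
pad_sequences = buttons ** sequence_len
print(f"{pad_sequences:,} ten-press pad sequences")   # over a billion

bus_width = 64
per_cycle = 2 ** bus_width
checks_per_sec = 10 ** 9   # assume a generous billion checks per second
years = per_cycle / checks_per_sec / (3600 * 24 * 365)
print(f"~{years:.0f} years to enumerate one cycle's worth of inputs")
```

And that enumerates single-cycle inputs only; a trigger spread across a *sequence* of inputs, like the cheat code, multiplies the space again for every additional step.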

  18. jtdwyer 11:40 am 07/16/2011

    If it were that simple, we could also have an agency certify that your breakfast cereal contains only authorized genetically engineered plant material and that your vegetables do not contain E. coli bacteria. It’s an order of magnitude more difficult than ensuring that electrical products don’t catch on fire. Even UL has its slip-ups, such as exploding halogen bulbs and lamps that set curtains on fire.

  19. gmperkins 6:51 pm 07/22/2011

    The DutchGuy makes some good points.

    I’ll add some but this forum is not the place for details:

    * You can check hardware, and hardware can check itself. How thoroughly you go about this depends on how much extra cost you are willing to incur. Some simple, cost-effective checks will make hacking hard, due to how hardware "works". In fact, some of these checks are already in place to determine if manufacturing bugs are present. Extending those would help validate more of the chip and/or motherboard.

    * A very thorough discussion and proposals on this kind of stuff can be found in the literature of the Trusted Computing Group (TCG). Basically, you build trust from the "ground up".

    * CPUs are probably not that vulnerable; they have self-tests, and the companies do routine checks on batches. Changes to them could cause weird bugs (blue-screen crashes) that would raise red flags. Possible to hack, but difficult. => But other components aren’t checked quite as thoroughly, nor would they cause big problems if they occasionally acted up.
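One way to picture extending those existing checks: compare a digest of the shipped firmware or microcode image against a reference taken from a known-good part. This sketch uses Python's hashlib as a stand-in for whatever on-die mechanism a real design would use; the images and the flipped bit are invented:

```python
# Hypothetical integrity self-check: hash the firmware image and
# compare against a golden reference. Even a single flipped bit (a
# "sleeping bug" of the kind discussed above) changes the digest.

import hashlib

def firmware_digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

golden_image = b"\x90" * 64               # known-good reference image
golden = firmware_digest(golden_image)

shipped = bytearray(golden_image)
shipped[13] ^= 0x01                       # one flipped bit in the shipped part

print(firmware_digest(bytes(shipped)) == golden)   # -> False
```

The catch, as earlier comments note, is that this only detects deviations from the reference; if the backdoor is in the golden design itself, the check passes.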

    It is a reasonable threat and I am 99% certain nothing will be done about it until it causes some great catastrophe.

