A2: Analog Malicious Hardware

Nothing much to do with language theory but too cool not to share.

While the move to smaller transistors has been a boon for performance, it has dramatically increased the cost to fabricate chips using those smaller transistors. This forces the vast majority of chip design companies to trust a third party, often overseas, to fabricate their designs. To guard against shipping chips with errors (intentional or otherwise), chip design companies rely on post-fabrication testing. Unfortunately, this type of testing leaves the door open to malicious modifications, since attackers can craft attack triggers requiring a sequence of unlikely events that will never be encountered by even the most diligent tester.

In this paper, we show how a fabrication-time attacker can leverage analog circuits to create a hardware attack that is small (i.e., requires as little as one gate) and stealthy (i.e., requires an unlikely trigger sequence before affecting a chip's functionality). In the open spaces of an already placed-and-routed design, we construct a circuit that uses capacitors to siphon charge from nearby wires as they transition between digital values. When the capacitors fully charge, they deploy an attack that forces a victim flip-flop to a desired value. We weaponize this attack into a remotely controllable privilege escalation by attaching the capacitor to a wire controllable from software and by selecting a victim flip-flop that holds the privilege bit for our processor. We implement this attack in an OR1200 processor and fabricate a chip. Experimental results show that our attacks work, that they elude activation by a diverse set of benchmarks, and that they evade known defenses.

A2: Analog Malicious Hardware. K. Yang, M. Hicks, Q. Dong, T. Austin, D. Sylvester, Department of Electrical Engineering and Computer Science, University of Michigan, USA.

Comment by Yonatan Zunger, Head of Infrastructure for the Google Assistant:

"This is the most demonically clever computer security attack I've seen in years. It's a fabrication-time attack: that is, it's an attack which can be performed by someone who has access to the microchip fabrication facility, and it lets them insert a nearly undetectable backdoor into the chips themselves. (If you're wondering who might want to do such a thing, think "state-level actors")

The attack starts with a chip design which has already been routed; that is, it has gone from a high-level design in terms of registers and data, to a low-level design in terms of gates and transistors, all the way down to a physical layout of the wires and silicon. But instead of adding a chunk of new circuitry (which would take up space), or modifying existing circuitry significantly (which could be detected), the attacker adds nothing more than a single logic gate in a piece of empty space.

When a wire next to this booby-trap gate flips from off to on, the electromagnetic fields it emits add a little bit of charge to a capacitor inside the gate. If that happens just once, the charge bleeds off and nothing happens. But if the wire is flipped on and off rapidly, charge accumulates in the capacitor until it passes a threshold, at which point the gate fires and flips a target flip-flop (switch) inside the chip from off to on.

If you pick a wire which normally doesn't flip on and off rapidly, and you target a vulnerable switch (say, the switch between user and supervisor mode), then you have a modification to the chip which is too tiny to notice, which is invisible to all known forms of detection, and which, if you know the correct magic incantation (in software) to flip that wire rapidly, will suddenly give you supervisor-mode access to the chip. (Supervisor mode is the mode the heart of the operating system runs in; in this mode, you have access to all the computer's memory, rather than just to your own application's.)

The authors of this paper came up with the idea and built an actual microchip with such a backdoor in it, using the open-source OR1200 chip as their target. I don't know if I want to guess how many three-letter agencies have already had the same idea, or what fraction of chips in the wild already have such a backdoor in them."
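
To make the trigger dynamics described above concrete, here is a toy model of the charge-siphoning behavior. It is my sketch, not code from the paper, and the charge, leakage, and threshold constants are made-up illustrative values:

```python
# Toy model of the analog trigger: each toggle of the neighboring wire
# couples a little charge onto the capacitor, and the charge leaks away
# between toggles. All constants are illustrative, not from the paper.

CHARGE_PER_TOGGLE = 1.0    # charge added per transition (arbitrary units)
LEAK_PER_CYCLE = 0.05      # fraction of stored charge lost each clock cycle
TRIGGER_THRESHOLD = 15.0   # stored charge at which the victim flip-flop flips

def cycles_until_trigger(toggle_pattern):
    """toggle_pattern: one 0/1 per clock cycle (1 = the wire toggled).
    Returns the cycle at which the trigger fires, or None if it never does."""
    charge = 0.0
    for cycle, toggled in enumerate(toggle_pattern):
        charge *= 1.0 - LEAK_PER_CYCLE          # leakage every cycle
        charge += CHARGE_PER_TOGGLE * toggled   # coupling on a transition
        if charge >= TRIGGER_THRESHOLD:
            return cycle
    return None

# Occasional toggles, as in normal workloads: charge bleeds off, no trigger.
print(cycles_until_trigger([1, 0, 0, 0] * 250))   # None

# Sustained rapid toggling, the attacker's trigger sequence: fires quickly.
print(cycles_until_trigger([1] * 100))            # 27
```

The two runs show why the trigger eludes testing: a workload that toggles the wire only occasionally never charges the capacitor past the threshold, while sustained deliberate toggling fires it within a few dozen cycles.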

Beautiful

That looks impossible to find unless you know where to look.

That's all manner of cool …

… and utterly, utterly, evil.

visual diff?

I have not read the paper, so I might be a real ignoramus here: why can't they take random samples of chips and visually diff them against the designs they sent to the fab in the first place?

Overall, what this boils down to for me is that we need ruthless simplicity in everything so that we can more easily inspect every single thing. :-}

visual diff

From the title, I thought you were referring to the recent XKCD.

I'm not aware of the state of the art here. Do modern hardware development tools provide a reasonable rendering of the alleged product, i.e., one such that a visual diff wouldn't be full of false positives?

A different kind of defense

Here's a kind of defense I thought of a while ago as a countermeasure to fabrication attacks. At the time I did not have this particular attack in mind, though the defense should be effective against it as well. I believe the core of the following idea is an independent reinvention of something I later encountered in a paper somewhere, but I cannot currently find that paper.

Many CPU designs, including the OR1200, are good enough that they perform reasonably for many tasks when compiled to an FPGA (OR1200 OpenRISC Processor, Implementation information). In compiling a gate-level design to an FPGA, the compiler makes many arbitrary layout choices. Let's say that for every instance of the processor it makes these otherwise arbitrary choices randomly. Further checks on the randomness of the layout could ensure that no single gate or wire of the gate-level design always ends up at the same place in the FPGA layout.
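
As a toy illustration of the idea (my sketch; real FPGA place-and-route is far more constrained than this, and the gate and site counts are hypothetical), each instance gets an independently seeded layout, and we can check empirically that no gate is pinned to one site:

```python
import random
from collections import Counter

GATES = [f"g{i}" for i in range(8)]   # toy gate-level design
SLOTS = list(range(32))               # toy set of FPGA sites

def place(seed):
    """One randomized layout: assign each gate to a distinct random site."""
    rng = random.Random(seed)
    return dict(zip(GATES, rng.sample(SLOTS, len(GATES))))

# Across many independently seeded instances, no gate should be pinned
# to a single site (the randomness check proposed above).
instances = [place(seed) for seed in range(1000)]
for gate in GATES:
    sites = Counter(layout[gate] for layout in instances)
    assert len(sites) > 1, f"{gate} always lands in the same place"
```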

In that case the manufacturer of the FPGA would not be able to predict where on the FPGA chip any one part of any given instance of the CPU would be. Given the regularity of a benign FPGA chip, it would be hard to hide an exploitable corruption of the FPGA that could both elude detection and be effective against any one randomized layout of the chip.

Whether this randomization helps depends on the threat model. If the attacker needs a high probability of being able to attack a particular chosen instance, then randomization should help a lot. If the attacker need only succeed at attacking any small number of targets out of a much larger population, and if the cost-free randomization of layout choices is quite constrained, then randomized layout may not help much.
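
A back-of-the-envelope calculation shows the difference between these two threat models; the numbers below are purely hypothetical:

```python
# Suppose the backdoor only works when the victim flip-flop lands at one
# of N equally likely randomized positions, and M chips are deployed.
# Both numbers are made up for illustration.
N = 10_000       # hypothetical count of distinct layouts the victim can take
M = 1_000_000    # hypothetical deployed population

print(1 / N)     # 0.0001 -> attacking one *chosen* instance is unreliable
print(M / N)     # 100.0  -> attacking *some* instances may still pay off
```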

However, if the gate-level design is of a clocked deterministic processor, and if, say, we can fit three of them plus redundant comparison circuits onto the FPGA, then we can run three in parallel and compare the outputs at each clock cycle before reporting results off chip. If there is a disagreement, we report an error and stop rather than reporting the majority result. Each of the three replicas, of course, needs to have independently randomized layouts. In this design, the comparison and output circuits may be the remaining best targets for a coordinated attack. But requiring a coordinated attack is already much better than the status quo.
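
A minimal sketch of the comparison logic (mine, not from any paper; the step() interface on each replica is a hypothetical stand-in for clocking one randomized CPU instance and sampling its outputs):

```python
class LockstepError(Exception):
    """Replicas disagreed: halt and report instead of taking a majority vote."""

def run_lockstep(replicas, cycles):
    """Clock all replicas in parallel and compare their outputs each cycle.
    Each replica is assumed to expose a step() method returning that
    cycle's externally visible outputs (a hypothetical interface)."""
    for cycle in range(cycles):
        outputs = [r.step() for r in replicas]
        if any(out != outputs[0] for out in outputs[1:]):
            raise LockstepError(f"divergence at cycle {cycle}: {outputs!r}")
        yield outputs[0]   # report only values all replicas agree on
```

Stopping on any disagreement, rather than voting as in classical triple modular redundancy, reflects the threat model: a divergence here suggests tampering, not a transient fault to be masked.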

The price/performance of this CPU will be far from competitive, so it will only be viable where this degree of reliability and trustworthiness is worth the cost. The trustworthiness provided is still less than that of approaches like Ethereum (assuming a diversity of CPUs and manufacturers), but the costs are also far lower. Which approach to diverse redundancy and checking to use depends on these tradeoffs.