The irreducible physicality of security properties

The recent discussion around Safe and Secure Software in Ada included some debate about what is involved in proving software secure, and what role PLs play in this. I recommend two papers for further discussion:

  • First, Rao & Rohatgi (2001), EMpowering Side-Channel Attacks, which discusses a fairly new technology for gathering information from running systems by monitoring their EM emissions; and
  • Rae & Wildman (2003), A Taxonomy of Attacks on Secure Devices, which provides a synthetic classification of attacks on computer systems based on the attacker's degree of access to the machinery and the attacker's objectives, and which catalogues a range of attacks into the classification.

So I hereby advance three slogans:

  1. Security is physical: neither applications nor operating systems can satisfy elementary security properties, but only deployed computer systems. This is because elementary security properties are about what the attacker does, which ultimately has a physical basis;
  2. Security is non-modular: Programming languages and software engineering practices can ensure that software possesses properties helpful to security, but the properties are only meaningful in the context of a strategy to ensure a computer system satisfies its security policy;
  3. We should not talk of secure programming languages and secure programs: such talk does mislead; to talk instead of software being secureable might promote better understanding.

Edited following Dave Griffith's remarks.


Security is relative, but still meaningful

Whenever we prove a theorem saying that something is secure, this theorem only holds for the particular model that is used in the proof. Applying the theorem to something outside the model doesn't make it secure; that's just misusing the theorem. Indeed, that's what you're saying in your second point. It's true that these kinds of theorems don't model the whole physical reality; they target a specific part of a system. The rest of the system has to be shown to be secure by some other means. This doesn't make the theorem bad; it's still incredibly useful to know that one part of your system is actually completely secure. It's just that the theorem doesn't do all the work for us; we have to deal with the aspects that fall outside the assumptions of the theorem by some other means.

There is ongoing research in trying to include more and more of the physical world when modeling systems and proving them secure. People, including myself, are working on modeling timing, cache behavior and power consumption of programs in order to show that none of these aspects can be used by an attacker. So the situation is steadily improving.

As to your third point: security is a relative thing. Whenever you hear that something is secure, you should immediately ask yourself "Secure against what?" and "What assumptions did they make when showing security?". Things are only secure relative to the model and the assumptions used when showing security. Hence, I think it makes perfect sense to talk about secure languages and secure programs, bearing in mind that they are secure relative to a particular model.

Security is of importance

Security is important both in programming language design and in software engineering, including program and architecture style.

Overreaching for undisputed points

It's pretty well known that attacks on lower layers of an automation stack can easily subvert the security of higher layers. The only thing important about physicality is that the lowest layer of any automation stack must be physical, so if you have physical access then any level can be attacked.

It's also well known that no security is absolute, and that security involves looking at every component of a system, including hardware, OS, language, libraries, and application components.

Stung into sloganising

Stung by your criticism of triviality, I've reworked my undisputed points to make my intention in making them perhaps a little clearer; I shall change them from:

  1. Neither applications nor operating systems can satisfy elementary security properties, but only deployed computer systems. This is because elementary security properties are about what the attacker does, which ultimately has a physical basis;
  2. Programming languages and software engineering practices can ensure that software possesses properties helpful to security, but the properties are only meaningful in the context of a strategy to ensure a computer system satisfies its security policy;
  3. Talk of secure programming languages and secure programs is considered harmful!

to

  1. Security is physical: neither applications nor operating systems can satisfy elementary security properties, but only deployed computer systems. This is because elementary security properties are about what the attacker does, which ultimately has a physical basis;
  2. Security is non-modular: Programming languages and software engineering practices can ensure that software possesses properties helpful to security, but the properties are only meaningful in the context of a strategy to ensure a computer system satisfies its security policy;
  3. We should not talk of secure programming languages and secure programs: such talk does mislead; to talk instead of software being secureable might promote better understanding.

The content of these points is not original (except, perhaps, the third); they have been well broadcast by people such as Lauren Weinstein and Bruce Schneier. They are, however, routinely ignored when talking about security and its relationship to the things we are most concerned with here at LtU. Maybe sloganising them helps make the point sharper.

Languages can't be secure...

... but they can strongly encourage an insecure programming style. A classic example of this is the lack of bounds checking: it is possible to write secure code in a language without bounds checking, but it means that a single bug, a single missed or incorrect bounds check, is a security hole waiting to happen.
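
A hedged sketch of the point (hypothetical C, not taken from any real system): every other copy in a program may be guarded correctly, and the one site that was missed is still enough.

    #include <string.h>

    #define NAME_LEN 32

    /* Hypothetical request handler in a language without bounds checking. */
    void handle_request(const char *user_supplied)
    {
        char name[NAME_LEN];
        /* BUG: no check that user_supplied fits in name[]. A longer input
         * overruns the stack buffer: the classic entry point for
         * control-flow hijacking. */
        strcpy(name, user_supplied);
        /* ... */
    }

    /* The fix has to be repeated, correctly, at every such site. */
    void handle_request_checked(const char *user_supplied)
    {
        char name[NAME_LEN];
        strncpy(name, user_supplied, NAME_LEN - 1);
        name[NAME_LEN - 1] = '\0';
    }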

red herrings

Make a language which says it has no bounds checks but secretly does, so all the script kiddies go for the low-hanging fruit and get fake root control, instead of having to go for the more refined attacks which actually work.

(of course, i mostly jest.)

A buffer-overflow honeypot?

I like it. :)

I argue they can

I actually argue that languages can be secure. Or at least, that there's a whole layer of security we haven't really approached in languages. At the moment, I'm working on type analysis of system-level properties (read "SELinux" or "ACLs"), and yes, it is feasible, provided you have some runtime support. I'll try and post more details when the paper is written.

Depends on what you mean

Arguing that a programming language is secure is like saying that a burglar alarm, or a deadbolt, is "secure". While some programming languages have better security properties than others (and many have the property that a conforming program running on a correct implementation cannot violate the language's abstractions), as is noted above, that ain't enough. No PL can prevent someone with a logic analyzer from snooping the private key if they have access to the circuit board.

Security remains a *system* property. The attributes of the components do contribute to system security, but "secure" systems can be composed of insecure components, and secure components can be composed into insecure systems. I put "secure" in quotes to emphasize that security, like many things, is judged against requirements. The deadbolt on my front door is intended to keep out random riffraff. It won't keep out the cops, and it probably won't deter a dedicated and professional burglar. But it isn't intended to.

Granted

Although I'd rephrase this as "there's no such thing as a universal definition of security". On the other hand, what you can do with a programming language is enforce security policies. Now, I don't believe any language can enforce everything at once, but I believe in asymptotic convergence.

On the other hand, what you

On the other hand, what you can do with a programming language is enforce security policies.

You can enforce security policies modulo the language's abstraction level. Attacks which violate a language's underlying assumptions are still valid. For instance, the assumption that all memory accesses are equally expensive, which isn't true with caches.

[Edit: there has been much discussion on the capability security mailing lists, with the (perhaps biased) conclusion that object capabilities are thus far the only known language for expressing all enforceable access control policies. By this, I mean that all enforceable policies have a natural expression with object-capabilities, and policies which have no such natural expression have so far been shown to be ultimately unenforceable. There are compelling arguments and significant evidence for this conclusion. I'm curious what others think of it.]

Impressed

I have been impressed, certainly, by Mark Miller's writings about capabilities and the drawbacks of the ACL approach. I'm only beginning to get a grip on the literature, so perhaps I, too, have been seduced by the words of the E-ists.

I came from an OS

I came from an OS background, so Jonathan Shapiro was my guru in a sense. He's also done a lot more formal analysis of security models, including ACLs. His opinions swayed me more in this regard.

Background

I've spent a few years now following E and EROS, and I definitely feel that those efforts (and the ones preceding, e.g. on KeyKOS and Joule) collectively represent the gold standard for identifying security issues and addressing those that can be addressed. My current understanding is that object capability security doesn't pretend to allow you to prevent communication among colluding adversaries via side channels, because to do so would violate various properties of quantum mechanics; that is, systems that allow you to express such a constraint are actively misleading and therefore introduce security risks rather than reduce them.

I also find the taxonomy of types (for lack of a better term) that the E community has evolved quite compelling, as I've written elsewhere but am slightly too busy to look up just now.

I'm heartened by recent work on Emily (an object-capability secure subset of OCaml), but ultimately I continue to feel that the real win will be the embedding of all of these lessons learned in a new information-flow type system. This type system will almost certainly (IMHO) need to be dependently typed in order to benefit from the major insight of the object capability security community: that it is the dynamic patterns of association that evolve in the course of running a piece of software, and not just the static associations, that matter to the epiphenomenal goal of "security."

There's a ton of work to be done here yet. PLT does indeed have a great deal to say about it. Frankly, I find (my interpretation of) the thrust of the lead topic of this thread pointless at best, and damaging at worst; taken more seriously than it warrants, it could have the same deleterious effect on security research as a subdiscipline of PLT research that Gödel's incompleteness theorems had on number theory research, that the halting problem had on automated theorem proving for some 20 years, that the halting problem had on logic programming after the development of Prolog, etc.

object capability security

object capability security doesn't pretend to allow you to prevent communication among colluding adversaries via side channels, because to do so would violate various properties of quantum mechanics; that is, systems that allow you to express such a constraint are actively misleading and therefore introduce security risks rather than reduce them.

Precisely what I was trying to say: namely, that all policies expressible with object-capabilities are enforceable, while all inexpressible policies are unenforceable, even in principle.

I'm not very knowledgeable about information flow systems, so I remain skeptical. Since you see them as the future, perhaps you can recommend a good intro. I've downloaded a few papers, but just haven't dug in yet.

Unenforceable policies

It is sometimes amusing to see your own words repeated approximately. In this case, the statement isn't quite right. There are lots of policies that can be characterized as capability policies that are unenforceable. For example, you may have a desired policy that requires some decision procedure based on a global view of system state. In practice, you will soon discover that no entity in the system actually has such a view on the system state. In consequence, the policy is unenforceable.

The whole object-capability idea is more than just a way to express protection information. It also incorporates notions of extension (i.e. policies implemented by trusted programs that wield capabilities), constraints on update (i.e. that capabilities can only be transmitted over channels that are in turn authorized by capabilities, which has implications for graph update inductions), and locality of reference/visibility (i.e. well structured systems don't allow a "God's eye" view from within the system). As a community, I think that the cap-talk group sometimes attributes ideas and benefits to capabilities as a verbal shorthand that don't really hold up unless these other notions are brought to bear. All of these notions are consistent with the way that the major capability architects over the years have viewed their systems, but the term "capability" has come to be interpreted more narrowly as a means of expressing protection state rather than incorporating the broader system design criteria that exist in all real capability systems.

The comment that inexpressible policies are unenforceable is right, but in its original form it was "Any policy that cannot be realized by some use of the underlying protection mechanisms is unenforceable." This is true regardless of what the underlying protection mechanism might be. I sometimes follow that statement with the observation that the correct technical term for such a policy is "wishful thinking." As in "Wish in one hand, crap in the other, and see which one fills up first." Most of the security policies that people think they want turn out to be policies of the wishful variety. Unfortunately their desires do not abate when this is explained with simple illustrations.

And just to get back to the topic thread, I think that the reason that a lot of policies are wishful is that information security isn't physical. The intuitions that people have about what they want are often grounded in meat-world understandings that simply don't apply in the information world. In particular, people get tripped up over the fact that reference transfer is non-exclusive (i.e. the giver doesn't lose the reference) and offers durable access to future updates on the target object.

The thing that is physical is the process and practice of system penetration.

I was being a tad loose with

I was being a tad loose with the terminology, as an object-capability language doesn't really have "policies" per se; it has an operational semantics defining the valid transitions from one state to another. My statement is really about how closely the operational semantics models real information flow in general systems. "Expressible policies" is thus just a shorthand for "system states that are achievable or preventable, even when in contact with other, possibly unconfined systems", and "inexpressible policies" is obviously shorthand for the converse. The futility of delegation control as a means of access control is an oft-repeated example of an inexpressible policy.

but the term "capability" has come to be interpreted more narrowly as a means of expressing protection state rather than incorporating the broader system design criteria that exist in all real capability systems.

Too true. I'm still coming to terms with how "capability" is used in language theory, such as the calculus of capabilities, or other substructural logics for memory management. Capabilities are often equated with lambda names in object-capability languages, so managing capabilities as separate tokens took me some time to come to grips with. I suppose I now think of it as a form of sealer/unsealer, where the sealed box is distributed widely (values can be aliased), and the linear "capability" that must be used to access or modify a value is a more closely held unsealer.

[Edit: I'm curious what you consider a "capability policy" that is unenforceable?]

Poor little number theorists

I don't mean to be picky, but I have difficulty guessing where you might be going when you said the same deleterious effect ... that Gödel's incompleteness theorems had on number theory research .... It suggests something like the absurd hope that had it not been for mean old Kurt, number theorists would have had a chance at a positive resolution of Hilbert's 10th. I don't suppose you did mean that, but...

As a matter of fact, the 50 years after Gödel's result have seen great progress in number theory, marked principally by great cross-fertilisation with other core subdisciplines of mathematics. One of the most fruitful of these, Ramsey theory, led to the Paris-Harrington proof of the incompleteness theorem by non-logical means, which emphasises the degree to which incompleteness is a natural phenomenon within mathematics.

In any case, what kind of growth depends upon ignorance of certain facts?

Quite Right

I was indeed referring to the portion of those mathematicians engaged in the pursuit of the Hilbert program, not because I believe the outcome could otherwise have been positive, but rather because it wasn't widely appreciated at the time how much there remained to do in spite of Gödel's results.

The effect on automated theorem proving was, of course, even more profound:

At the time [of the development of the SAM prover in 1970], the state of the art was to translate all logical propositions into lists (conjunctions) of lists (disjunctions) of literals (signed atomic formulas), quantification being replaced by Skolem functions. In this representation deduction was reduced to a principle of pairing complementary atomic formulas modulo instantiation (so-called resolution with principal unifiers). Equalities gave rise to unidirectional rewritings, again modulo unification. Rewriting order was determined in an ad hoc way and there was no insurance that the process would converge, or whether it was complete. Provers were black boxes that generated scores of unreadable logical consequences. The standard working technique was to enter your conjecture and wait until the computer's memory was full. Only in exceptionally trivial cases was there an answer worth anything. This catastrophic situation was not recognized as such, it was understood as a necessary evil, blamed on the incompleteness theorems.

                                                                                —Interactive Theorem Proving and Program Development, Foreword

AI culture

And at the same time there was AUTOMATH.

Really, this quote says more about the unscientific culture that prevailed in AI at that time, than any harm in negative results. Had they not this excuse, they would have found another.

Side channels are orthogonal

The object-capability model doesn't really take a position on side channels. My view on this is basically as follows:

  • There is no known technique short of fully deterministic computation that can completely eliminate covert channels. Fully deterministic computation is not, in practice, useful for non-trivial problems. For one thing, fully deterministic computation requires us to do things like turn off caching. It's pretty draconian.

  • While reducing covert channel bandwidth may be important in some applications, a clever adversary can often pose a question in a way that can be answered in a single bit (yes/no). Given this, it is not always clear what the real-world, motivating use case for covert channel suppression actually is.
  • It's pretty silly to worry about covert channel issues until you have dealt with overt channel issues. That doesn't mean that EROS/Coyotos have ignored them, but let's get our priorities straight. Because of this, I think it's fair to say that capability systems are the only systems currently known in which covert channel suppression might even be interesting, because they are the only systems currently known for which overt channels have been fully modeled and controlled.
  • The mechanisms for dealing with covert channels largely have to do with implementation and design issues in the system's lowest-level multiplexing points. This is a resource scheduling issue which is largely orthogonal to conventional access controls. Because of this, the covert channel problem has relatively little to do with capabilities one way or the other. The use of capabilities to accomplish pervasive and clear designation is probably helpful, but only because it helps the system designer to avoid the inadvertent introduction of implicit authority on resources.
  • With all of the above being said, the mechanisms and techniques for covert channel reduction often have the side effect of reducing variance and improving system performance. Because of this, examination of covert channel issues may be motivated even if we can't fully suppress the channels.

Finally, I would note that the lexicon of the covert channel literature is just horrible. Storage channels, for example, generally arise from explicit actions that were overtly permitted but had unintended communications consequences. Classifying these as covert merely because the designer failed to properly evaluate the authorized operations is simply ridiculous.

There is no known technique

There is no known technique short of fully deterministic computation that can completely eliminate covert channels. Fully deterministic computation is not, in practice, useful for non-trivial problems. For one thing, fully deterministic computation requires us to do things like turn off caching. It's pretty draconian.

Can you (or someone) explain why keeping a computation deterministic would proscribe caching?

Caching can be a problem

Caching can be a problem when your model for computation includes duration. If it takes a different amount of time to access an address in cache vs. an address out of cache (a given when a cache is useful), and an algorithm has confidential data which determines the address to be accessed, then a covert channel may exist that leaks that data based on the duration of the computation.
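
A minimal hypothetical sketch in C of that situation (all names invented; it assumes only that a cache hit is measurably faster than a miss): the secret chooses which line of a table is touched, so the duration of the call correlates with the secret even though nothing is overtly communicated.

    #include <stddef.h>
    #include <stdint.h>

    #define TABLE_SIZE 4096
    static uint8_t table[TABLE_SIZE];

    /* Hypothetical lookup whose memory address depends on secret data.
     * Whether table[index] is already in cache, and therefore how long
     * this call takes, leaks information about `secret` to anyone who
     * can measure the duration of the computation. */
    uint8_t lookup(uint8_t secret)
    {
        size_t index = ((size_t)secret * 64) % TABLE_SIZE;  /* secret-dependent address */
        return table[index];                                /* hit vs. miss: fast vs. slow */
    }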

It seems to me the actual

It seems to me the actual constraint is that the timing of results must be deterministic when the entity you're giving the results to can measure time; or, stated more simply, the time of delivery is part of your result. Eliminating all sources of variation in the computation like caches (which I agree is draconian and impractical on modern hardware) is not the only way to achieve this.

Yes, I think you nailed it

Yes, I think you nailed it with regard to timing attacks at least. As others have reported in this thread regarding power measurements, direct clock measurement isn't the only measurable variable that can be used; it would seem that any direct or indirect sensor is a potential channel. Basically, all variables involved in a computation (power draw, timing, etc.) must be equivalent on all code paths, up to random noise, to completely eliminate any information leaking inadvertently. This may even necessitate hardware co-operation.
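
For the timing dimension at least, the usual mitigation is exactly that: do the same work on every path. A sketch (hypothetical C, not a vetted constant-time library; power draw would need further, possibly hardware-level, measures):

    #include <stddef.h>
    #include <stdint.h>

    /* Early-exit comparison: running time reveals how many leading bytes match. */
    int leaky_equal(const uint8_t *a, const uint8_t *b, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (a[i] != b[i])
                return 0;            /* returns sooner on an earlier mismatch */
        return 1;
    }

    /* Constant-time variant: touches every byte and accumulates the
     * differences, so (to first order) the timing no longer depends on
     * where, or whether, the inputs differ. */
    int constant_time_equal(const uint8_t *a, const uint8_t *b, size_t n)
    {
        uint8_t diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }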

Right, I agree with your

Right, I agree with your (and others') earlier observation that security is relative to a physical model. It does no good to isolate sensitive information from certain processes if your adversary is looking over your shoulder as you access that information.

However, I think there's a pretty big jump from getting network security right to defending against an attacker who can monitor your power consumption. The latter requires physical access and (if you're actually worried about that kind of thing) is probably best solved with a physical solution like a buffer device that draws constant power. The former exists at a much higher semantic level and is IMO most practically addressed with a language/runtime solution (not to mention addressing a much more likely mode of attack).

However, I think there's a

However, I think there's a pretty big jump from getting network security right to defending against an attacker who can monitor your power consumption.

True, but you still have to be careful. As the SSL timing attack I pointed out earlier demonstrates, there are still channels to be exploited. Further, while you can confine local clients and shield them from certain authorities that can be used to mount such attacks, network clients are unconfined, or must be presumed so.

From ambients to SElinux

This sounds very interesting. I've heard your name in connection with the ambient calculus, and you are one of the coauthors of the highly cited Using Ambients to Control Resources. Forgive me for going off on a tangent.

I've been persuaded by some criticisms by Alan Bawden that the ambient calculus represents a sufficiently unreal view of network interaction as to be useless for reasoning about just about anything real network people care about, and when applied to security properties I think the situation gets worse, because security properties are normally modelled in the ambient calculus internally (i.e. the program is put in parallel composition with all possible attackers, formulated by a modal operator in ambient logic), rather than externally, say through model-theoretic reasoning. This is just the kind of viewpoint I am trying to combat with this LtU story!

I'm guessing you have a more positive perspective on the ambient logic, and the research you are doing now is guided by realistic security concerns, so I'd be interested to hear, if you are interested in making it, the case for the usefulness of the ambient calculus.

Er, well...

I'm afraid I agree with Alan Bawden's criticism. I did attempt to apply Ambients to prove a fragment of the Linux scheduler, and the model ended up so different from the original that I couldn't convince myself of the validity of any result, either positive or negative. While this is hardly a scientific argument, it did speak against a direct use of Ambients.

On the other hand, this work on Ambients did lead, through another work on the pi-calculus, to a very-much-applied type system for a subset of Erlang. Unfortunately, due to administrative issues, I never found the time to fully publish it. Essentially, this type system could prove that [within the model] exposing functions as network-reachable services could not lead to successful Denial-of-Service attacks, i.e. the function could be made inaccessible but not the whole node.

Similarly, I have been part of an effort to define a fundamental calculus, partly based on Ambients, for component-based, distributed, dynamic programming, upon which a programming language and an API were designed. The effort is not complete yet, but could possibly lead to a complete architecture for safe distributed programming. Again, the Ambients did not come in directly, but some ideas from Ambient logics and bisimulations in Ambients did trickle in, along with a healthy dose of Boxed Ambients and co-capabilities.

So, my personal conclusion is that Ambients cannot be applied as directly as, say, Lambda-calculus, but that they are an interesting breeding ground for ideas. YMMV.

Applications of ambients

Many thanks! I should emphasise that my complaints about the ambient calculus are not about the quality of the formal work done on it, and, as you say, some interesting ideas have come out of subsequent work. It is rather the way that Gordon & Cardelli framed the early papers, suggesting they were addressing applications the calculus was unsuited for. I also think that, even given more moderate goals, the authors made some design mistakes with the calculus; particularly the decision to have a tree-shaped ambient topology.

Is the work that you describe on typing (some) Erlang the same system as in Towards a Resource-Safe Erlang (pdf)? The TR really doesn't say much at all about the ambient calculus.

I concur

I concur that the claims from the early papers were somewhat optimistic. I don't see how Mobile Ambients could be used directly to model any network-related protocol. In particular, the tree-shaped ambient topology is a limitation, and directly contradicts any kind of network I can think of.


Is the work that you describe on typing (some) Erlang the same system as in Towards a Resource-Safe Erlang (pdf)? The TR really doesn't say much at all about the ambient calculus.

Indeed. The work on ambients led to a work on pi-calculus which led to another work on pi-calculus which led to this work on Erlang. So no, I didn't actually quote the first work.

There are certain properties

Certain desirable security properties are known to be undecidable in ACL systems, and other problems are unsolvable. The Confused Deputy problem is an example of the latter (a quick search didn't turn up the undecidability result I was thinking of, so I'll search harder if there's interest). Do you somehow restrict the operations to solve undecidability? Can your analysis address the Confused Deputy problem?
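
For readers who haven't met it, a hedged, hypothetical sketch of the Confused Deputy in C (the names are invented for illustration): a privileged service writes to a caller-supplied name using its own ambient authority, so the caller can designate files it could never open itself; passing an already-open handle, a capability, combines designation with authority and sidesteps the problem.

    #include <stdio.h>

    /* Hypothetical "confused deputy": a service running with its own
     * ambient authority (e.g. setuid) appends to whatever path the
     * caller names. */
    void log_to(const char *caller_supplied_path, const char *msg)
    {
        /* BUG: the write happens with the deputy's authority, not the
         * caller's, so a caller can name a file (say, the deputy's own
         * billing file) that it could never open itself. */
        FILE *f = fopen(caller_supplied_path, "a");
        if (f) {
            fprintf(f, "%s\n", msg);
            fclose(f);
        }
    }

    /* Capability-style fix: the caller passes an already-open stream, so
     * the deputy exercises only authority the caller already held. */
    void log_to_cap(FILE *caller_opened_file, const char *msg)
    {
        fprintf(caller_opened_file, "%s\n", msg);
    }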

Ok, ok...

I realize that my earlier statement was bold (hopefully not trollish). And no, I don't pretend I'm solving anything specific. I'm just mentioning that there are approaches which take into account the fact that security is not one simple property, PCC being another one. Now, of course, as I mentioned, I don't believe that there is any language which solves everything at once.

Didn't mean to put you on

Didn't mean to put you on the ropes, I'm just curious. I look forward to the paper! :-)

Parallels

It is interesting that "security" issues in computers seem to parallel those in real life. On the lowest level we have unauthorized entry, burglary, or vandalism; for these we have passwords, firewalls, and permissions. Slightly higher up there are privacy issues like bugs, wiretaps, and covert channels; for these we have virus and spyware scanners. It seems easier to get rid of the bug than the covert channel. Even higher up there is the problem of a trusted agent that turns out to have malicious intent, like a program with a key logger. I sometimes look for CPU activity that I can't account for. Of course all this leads to full-scale information warfare, complete with attacks, counterattacks, and so on. Suddenly it is no fun anymore.

Did I leave out anything?

Two thoughts

Good point. Two comments:

  • The kind of EM analyses Rao & Rohatgi describe might be a bit like ESP...
  • The point about trust is ignored in Rae & Wildman's taxonomy.

Notes

A brief comment on Rao & Rohatgi and smart cards. The conclusions of the paper seem true for "smart cards", but they don't point out that the problem is probably also unique to smart cards. In general, electromagnetic radiation can be shielded, and this is the case with ordinary PCs. But PC shielding isn't perfect, and it would be interesting to see some results on PCs.

A second point that seems to be missed is that there are typically two types of language at work in a physically extended system. The separate parts communicate with a protocol language. Protocols can be very complex, but are often left out of discussions of extended systems, encryption being an exception.

Scaling R&R

My understanding of the R&R algorithm is that it is based on guessing what code the smartcard is running, deriving a predicted EM behaviour by some sort of flow analysis, and then using statistical analysis to check the guess against the observed behaviour.

Such a technique would become dramatically more difficult as the complexity of the machine increases. I don't know enough to say if it would be at all possible to apply against a PC, but my feeling is that it sounds hopeless.

Having said that, a more refined technique might completely change the picture. And what I gather about the TEMPEST specs suggests that the NSA takes this sort of side channel very seriously.

Security is an answer, not a question

The entire framing of this discussion seems to be missing something important. It is not meaningful to ask whether a system is "secure". It is only meaningful to ask whether a system is secure with respect to a defined set of threats or threat classes. For some threat categories, physical factors matter a whole lot. For others they matter relatively little.

The problem with the R&R case is that the threat model had to address attacks where the attacker has physical access and high motivation. The power analysis attack is not (conceptually) hard to beat once you understand the threat model. What is difficult is to beat it while simultaneously satisfying the low overall power consumption requirements of the target system. If you fail to achieve the low overall consumption objective, it ceases to matter whether the device is secure against power analysis attacks.

Hypothesis:

  • It is possible to design systems in which in-band attacks are eliminated by design at a given layer of the system, provided that layer and all layers beneath it can be fully specified. This becomes harder as you move up the functional stack, and security is often lost and/or mishandled at the application layer.
  • It is not possible to design a system in which side-channel attacks can be wholly prevented. It is only possible to design systems in which the cost of such attacks can be kept above a stated dollar value based on the technology available at the time of design.

The disturbing part about the recent remote pacemaker reprogramming exploit is not that it can be done. I've been saying that it could be done for almost 10 years now. The disturbing part is that it only required about $10,000 in equipment, and that money is only required for the analysis phase. Once analysis is completed, the attack can be cheaply reproduced. That is: the per-incident cost is low.

In this sense, the smart card power attacks are much less threatening. You want to be greatly concerned about master keying information that may be stored on the card, but the requirement that the attacker have physical access to the card goes a long way toward increasing the per-incident cost of attack.

R&R and R&W and AAR&R

Two points of clarification:

  1. The R&R paper does not describe a power analysis attack. It describes an attack based on measuring EM emissions, and so it requires proximity but not possession. How close? Well, the authors coauthored a follow-up paper, which I found as an unencumbered PDF: Agrawal, Archambeault, Rao & Rohatgi (2002), The EM Side-Channel(s): Attacks and Assessment Methodologies, which says usually no more than 1m away (1 wavelength of the EM radiation, here at 300MHz), but sometimes further away.
  2. I quite agree about defining threat classes, which is the point of the R&W paper I linked to. I think it is irresponsible to silently ignore awkward threat classes: indeed they may not be important for a particular software deployment, but there is, I think, an obligation to be explicit about the threat classes one does not handle. Morally, this point is obvious and does not need to be made, and yet, it does need to be made.

I've thought a bit about your point earlier that information security isn't physical, because of how human intuitions get in the way; it is only the process and practice of system penetration that is physical. This is an excellent point, and it belongs to a class of factors entirely ignored by the R&W taxonomy. Fixing this requires, apparently, a more complex story than the story I tried to tell in the lead article, full of reasoning about use cases.

The irreducible physicality of correctness

This post makes an important but trivial point. To see why it's both important and trivial, let's transpose the discussion to correctness. From section 5.1 of my thesis:

When we say that a program P is correct, we normally mean that we have
a specification in mind (whether written down or not), and that P
behaves according to that specification. There are some implicit
caveats in that assertion. For example, P cannot behave at all unless
it is run on a machine; if the machine operates incorrectly, P on that
machine may behave in ways that deviate from its specification. We do
not consider this to be a bug in P, because P's correct behavior is
implicitly allowed to depend on its machine’s correct behavior. If P’s
correct behavior is allowed to depend on another component R’s correct
behavior, we will say that P relies upon R [Tin92, Sta85, AL93, AL95,
CC96]. We will refer to the set of all elements on which P relies as
P’s reliance set.

The point is important, because for both security and correctness issues, we often forget to identify the reliance set to our peril. The point is trivial because, if the original post were taken seriously, it would result in a simple relabeling. To paraphrase:

We should not talk of correct programs, such talk
does mislead; to talk instead of software being
correctable (will run correctly when all members of its
reliance set are correct) might promote better understanding.

[emphasis taken from the original]

Would we improve discourse about either security or correctness by simply doing a global search and replace of "secure" -> "securable" and "correct" -> "correctable"? I think not. But we should strive to make explicit the usually-implicit reliance assumptions these concepts rest on. This post has done a service by emphasizing this issue.