Trade-offs with abstraction

Peter A. H. Petersen has written two weblog posts on abstraction and security:

The points he makes echo a theme dear to my heart: abstractions offer the programmer compositionality of construction when creating their programs and systems, but that notion of compositionality does not match all of the properties one needs to ensure that a system is secure. Cf. slogan #2, from my story The irreducible physicality of security properties:
 Security is non-modular: Programming languages and software engineering practices can ensure that software possesses properties helpful to security, but the properties are only meaningful in the context of a strategy to ensure a computer system satisfies its security policy;

Postscript — Before someone mentions it, Joel Spolsky wrote something quite relevant: The Law of Leaky Abstractions

Security is not non-modular,

Security is not non-modular, it's just differently modular: entities involved in a security-oriented decomposition might be drastically different from what is convenient to program. Of course, AOP was all about fixing this problem, but that didn't go anywhere.

These kinds of blog entries scare me, as someone might take them too seriously: of course you need abstraction to build any large or complicated system, unless your brain is extremely large and you can keep all the details in your head at once. You need even more abstraction as you add more people to a project, as it's the only way software engineering can scale. If anything, programmers need to learn how to use more abstraction, not less, and be comfortable with that.

One potential solution to leaky-abstraction problems is to build increasingly small and autonomous components with very tight/limited interfaces. The components can then interact so that the desired behavior of the system is more emergent than explicitly engineered. Removing centralized control could solve a lot of these problems.

Security is modular

Modularity is essential to being able to reason about a system. I don't see how one could make strong statements about the security of a system that one can't understand.

The trade-off is really against functionality. Increasing functionality usually leads to an increase in complexity, and an increase in complexity reduces one's ability to reason about the system and thus one's ability to make strong statements about its security.

As for emergent behavior, while interesting, I think it's a sign that a system has become intractable in its complexity. I think emergent behavior is a sign that the complexity due to component interactions is exceeding the reduction in complexity realized through modular design, and thus a sign that the system design needs to be reexamined.

very tight/limited interface

I don't understand this requirement. I'd suggest it is arbitrary and unnecessary. Also, it has been tried before under many different names.

I think the key is teaching engineers a wider vocabulary than simply "abstraction". You can't solve social problems with technology, something scientists overlook.

The more narrow the

The more narrow the interaction between components, the more you can understand their collective behavior through intuition and (gasp) statistics. This has worked for me as a practitioner, allowing me to build more complex systems that work somewhat reliably. Limiting interfaces basically means limiting communication: limiting who a component can talk to (you can't just reach through random-access memory and communicate with whomever you want) and what kind of state it depends on (the FP mantra that state is evil applies). We also have to eliminate temporal dependencies, even in systems where there is a lot of state to consider (and compensate using damage/repair dependency tracking).

Abstraction is of course a very general term, but its general meaning, as a simple interface for something more complex, is useful. This is definitely not a social problem, as writing complicated systems is difficult.

The more narrow the

The more narrow the interaction between components, the more you can understand their collective behavior through intuition and (gasp) statistics.

I do not want to understand through statistics what I can understand through other mathematical means. What I want is to be able to wiggle something in front of me and see what wiggles in the back.

And I like statistics, a lot. I like intuition, a lot. However, intuition is a subjective accidental complexity to accommodate.

Rather than say something should be "this wide" (which is what troubles me), I would say decompose the problem, figure out what the problem domain really is, and then once you've done a bit of analysis, do some design.

[Edit: The only form of intuition I think is an objective accidental complexity is theory of computation problems in PLT. For example, Edward A. Lee's The Problem With Threads argues for coordination languages on the basis of figuring out deadlock/livelock conditions in threading environments being too difficult, but these really help my basic objective of understanding the wigglies.]

This is definitely not a social problem, as writing complicated systems is difficult.

Some systems people seem to agree should be relatively easy to build, but aren't. This seems to indicate a social problem. When discussing limitations of our software at work, the hardest problem I face is "but we're already far ahead of what others are doing in this space". Developers get very complacent as they get older. I am pretty sure I know what innovations to make, but people only want me to improve things that immediately annoy them.

It doesn't matter how brilliant you are. That's why empowering them with a strong vocabulary at a young age is important, in my books. The other side of this social problem is that it will be another 20 years before techniques we use today are widely used throughout industry. Object-oriented was the same way; Grady Booch has a bunch of essays on this collected into a book called Best of Booch.

Capabilities encourage

Capabilities encourage modularity, abstraction and security. Conformance to a high level policy can then be checked by a sort of model checker like SCOLL and SCOLLAR. It would be interesting to have such policy specifications and checking built into the actual programming language though.

The Law of Leaky Abstractions

I like that Spolsky article. I think I'd argue that Spolsky's Abstraction Leak highlights a functionality problem of abstractions -- an imperfect abstraction leads to lower level detail leaking upwards and turning into higher level problems, kind of like an imperfect sewer leaking up into your basement. The general principle is perfection -- it's very difficult for an abstraction to perfectly abstract away the detail of the lower level.

In contrast, security issues hidden by abstraction layers are kind of like Abstraction Holes. You may not see misbehavior at the application layer, but an assumption you made about the properties of the abstractions at lower levels was false. It's like assuming that if there were a train coming, there would be lights flashing and bells ringing. I think the general principle here is trust -- it's not that the abstraction failed to abstract away a particular detail perfectly, but rather that the abstraction or layer simply doesn't do or provide something for which you trusted it.

The bottom line is that I think it's important to see that the knife cuts both ways.

The "Leaky Abstraction" Abstraction Leaks

Spolsky's Abstraction Leak highlights a functionality problem of abstractions -- an imperfect abstraction leads to lower level detail leaking upwards and turning into higher level problems

There is an assumption that you are making (that Spolsky's original article also made) that in theory there could be a perfect abstraction for all purposes. I think this misunderstands the whole notion of abstraction in a system.

An abstraction is like a model: you remove detail, whether by extracting it as a parameter, or pushing it down as "implementation" so that you can more clearly see some pattern of interest. Whether an abstraction is good or not can only be judged based on how well it clarifies the pattern that you are interested in.

Any complicated system is going to have many different patterns in it, and so it necessarily is going to have multiple abstractions that can describe it. The big challenge with security abstractions is that they seem to frequently clash with the abstractions that are otherwise good for the application.

To take an obvious example, the Model-View-Controller abstraction is generally considered very useful for both implementational and conceptual reasons. For these purposes, separating display, UI operations and data storage/retrieval gives the best pattern.

Unfortunately, this is usually horrible for security, since you normally want to secure the functionality "lengthwise", grouping together what you are allowed to see, to do, and to access as a logical unit.

So "abstraction" isn't the problem, it's that the abstractions that are best for describing the pattern of security often clash with the abstractions that highlight other more "core functionality" patterns.

I would consider security a

I would consider security a core functionality; security should never be an afterthought. Capabilities align the costs of abstraction and security so they are unified.

Woulda coulda shoulda

I would consider security a core functionality

A bit of a strawman? After all, I did provide scare quotes...

However, we can make this meaningful distinction: functionality is usually about enabling someone to do something; security is about preventing someone from doing something. The abstraction you are making of the user is radically different in these two cases...

security should never be an afterthought

I love sentences with "should" in them; they always tell me what is currently not happening. ;-)

Capabilities align the costs of abstraction and security so they are unified

I agree that capabilities are a great technique, maybe the best default technique, but they still force you to structure your functionality abstractions in units based on security concerns. That can still create the kind of abstraction clashes we are talking about.

functionality is usually

functionality is usually about enabling someone to do something; security is about preventing someone from doing something.

Hmm, I'm not sure I'd put it that way. Abstraction is about preventing clients from doing things too, namely violating invariants. Security properties are simply more invariants that need to be respected.
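To make that concrete, here is a minimal sketch (TypeScript; the Account class and its read-only facet are invented for illustration) of a single encapsulated invariant that is at once a correctness property and a security property:

```typescript
// Sketch: one encapsulated invariant serving both correctness and security.
// This class and its names are illustrative, not from any real library.
class Account {
  private balance = 0; // invariant: balance >= 0

  deposit(amount: number): void {
    if (amount <= 0) throw new Error("deposit must be positive");
    this.balance += amount;
  }

  withdraw(amount: number): void {
    // The same check is a functional invariant (no negative balances)
    // and a security property (clients can't overdraw).
    if (amount <= 0 || amount > this.balance) throw new Error("invalid withdrawal");
    this.balance -= amount;
  }

  // A read-only facet: a client holding only this object can inspect but not mutate.
  auditor(): { getBalance(): number } {
    return { getBalance: () => this.balance };
  }
}
```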

I understand what you're saying, but I'm beginning to think that the common viewpoint that separates security from other program properties is harmful, because then people think security is something you have to "add", rather than as an intrinsic part of every system.

but they still force you to structure your functionality abstractions in units based on security concerns.

But these concerns are clearly also application concerns, otherwise we wouldn't care!

The way I see it, we want a certain behaviour from a system, and I'm not sure separating the "security properties" of this system from the rest of its behaviour is helpful.

Don't lock security away

The way I see it, we want a certain behaviour from a system, and I'm not sure separating the "security properties" of this system from the rest of its behaviour is helpful.

I don't disagree with you that security is important and is an implicit functional requirement of most development projects. I'm not saying it should be separated.

Ultimately, I was describing, not prescribing.

A contributing factor to your concern that we have not yet mentioned is that security tends to be one of those feature sets that doesn't get defined or considered until it's not there, just like performance, reliability, maintainability, etc.

Many requirements documents act as though the sentence "Must be secure" and a CRUD vs user role chart constitute a fully-designed security model. ;-)

Fleshing out security requirements at the same time as all others might well improve the selection of abstractions to support them, but I still think there are the practical challenges I've mentioned for any security design.

I used to think that

I used to think that security was about restricting functionality in a program (there are some great temporal logic papers in this vein!). Over the past couple years, however, I've come to think of security as a feature. E.g., can I use google calendars without google knowing my interests, or perhaps can I post some photos to Facebook and only share them with my friends? The latter example has been attributed to a lot of the success of Facebook.

Reduced to semantics, these might be safety properties that limit control paths, but in terms of end-to-end building of software, they're much more like typical features: HCI concerns, boilerplate cruft issues, etc. I think this perspective is underemphasized in academia; bug finding, proofs, and program analyses are more publishable.

Corollary

Any complicated system is going to have many different patterns in it, and so it necessarily is going to have multiple abstractions that can describe it.

Any complicated system is going to have many different abstractions in it, and so it necessarily is going to have multiple patterns that can describe it.

Usually multiple GoF design patterns are at work in any given object collaboration.

Slithy toves

Usually multiple GoF design patterns are at work in any given object collaboration.

You're using "pattern" in a much more restricted sense than I am.

Within the definition I gave of "abstraction", the GoF design patterns are abstractions.

Pattern = abstraction

This is the sense in which I took you to mean "pattern":

http://www.jstor.org/sici?sici=0022-362X(199101)88:12.0.CO;2-2

(LtU doesn't like the URL...)

Lengthwise and Depthwise Security

you normally want to secure the functionality "lengthwise", grouping together what you are allowed to see, to do, and to access as a logical unit

Object capability security, in the sense you are utilizing the word, is more a "depthwise" security. It allows delegation and benefits from very fine-grained separation of facets at the primitive level. Without that separation you'll end up adding a layer-of-indirection to get it anyway.

Even the 'cell' (mutable state) primitive can benefit from this separation and delegation: you hand the read-end to some folks, and the write-end to other folks, and it is impossible to 'cast' between them.

If you want publish/subscribe (and yes, you do, pervasively) then you might even want three capabilities per cell: two dataflow capabilities (access, demand) that provide read/subscribe abilities and one actor capability (update) such that the system doing the update, the system starting/stopping the updates based on demand, and the system reading the data, can all be entirely distinct.
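As a minimal sketch of this separation (TypeScript; makeCell and the facet names are invented, not from any real capability library), assuming in-process objects rather than distributed capabilities:

```typescript
// Sketch of a capability-secure cell split into three facets. Holding one
// facet gives no way to "cast" it into another, since each facet merely
// closes over the shared state.
type Listener<T> = (value: T) => void;

interface ReadCap<T> { get(): T; subscribe(l: Listener<T>): () => void; }
interface WriteCap<T> { set(value: T): void; }
interface DemandCap { active(): boolean; } // producer can see whether anyone is listening

function makeCell<T>(initial: T): { read: ReadCap<T>; write: WriteCap<T>; demand: DemandCap } {
  let value = initial;
  const listeners = new Set<Listener<T>>();
  return {
    read: {
      get: () => value,
      subscribe: (l) => { listeners.add(l); return () => { listeners.delete(l); }; },
    },
    write: { set: (v) => { value = v; listeners.forEach((l) => l(v)); } },
    demand: { active: () => listeners.size > 0 },
  };
}

// Hand the three facets to three entirely distinct parties:
const { read, write, demand } = makeCell(0);
```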

But all this doesn't preclude the use of 'lengthwise' security abstractions, which group certain functionalities, where they are appropriate. The notion of abstracting not just an object, but a whole object configuration and its hooks into the environment (then exporting some fine-grained caps), is perfectly reasonable. [To make that stronger: 'object' systems without an object-configuration sublanguage are very much "primitives without means of composition"! A configuration language is a critical part of any OO/Actors/Process-Calculi system, and first-class support is very useful for reducing the need for mutable state, for cycle analysis, for dead-object elimination, and for sundry other optimizations. This support also seems to be conspicuously absent from many languages.]

Service-brokers, databases of capabilities and abstract factories to request even more capabilities, domain-addressed multicast systems, etc. are also available for that 'lengthwise functionality' purpose. Indeed, it's much easier to build these given fine-grained delegable capabilities to start with than it is to build such systems if you must implement your own facet-patterns and construct large facades.

And these service-brokers, databases, etc. can also be capability-secure. If you want MVC, then represent the model as a small, capability-secure database or factory of 'available' capabilities. If the model is very small, you could even use a simple record (though factories better support 'generic' code). Then pass a secure factory reference and the UI specification (interactive scene-graph) to the application/display-building procedure. Voila - separation of model, interactive scene-graph specification, and display, written in generic, composable, and flexible manner. With publish-subscribe, it can even be very high performance.
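A rough sketch of that shape (TypeScript again; all names are hypothetical), with the model exposed only as a factory of capabilities:

```typescript
// Sketch: MVC where the "model" is a small factory of capabilities rather
// than ambient shared state. All names here are invented for illustration.
interface CapFactory { request(name: string): unknown; }

function makeModel(): CapFactory {
  let title = "untitled"; // model state, reachable only via the facets below
  const caps: Record<string, unknown> = {
    "title.read": { get: () => title },
    "title.write": { set: (v: string) => { title = v; } },
  };
  return { request: (name) => caps[name] }; // hands out only registered capabilities
}

// The display builder receives only the factory plus a UI spec; it holds no
// ambient authority over the rest of the model.
function buildDisplay(model: CapFactory, spec: string): void {
  const titleCap = model.request("title.read") as { get(): string } | undefined;
  console.log(`[${spec}] title = ${titleCap ? titleCap.get() : "<no access>"}`);
}

buildDisplay(makeModel(), "main-window");
```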

The big challenge with security abstractions is that they seem to frequently clash with the abstractions that are otherwise good for the application.

I posit that this 'seeming' is a consequence of lacking familiarity with the secure architectures that can (and "should" - that word you like so much :-) be built around security abstractions.

Also, security does not need to be a source-code abstraction. That is, encapsulation to protect against accidental coupling of code implementation details, and security abstractions like capabilities, benefit quite well from being in entirely different stages of language processing. This allows cross-cutting concerns to be injected at the source-processing layer via automatic gathering and combining of pieces of code scattered throughout the codebase (i.e. allowing functions and small databases to be defined in hundreds of small pieces) without risk of conflicting with any security issues whatsoever.

You use the words "the whole notion of abstraction in a system" but the singularity in the word "the" is itself a leaky abstraction when considering staged languages, first-class parameterized closures, abstract factories and composable meta-factories, and so on.

Abstract Silver Bullet Factory

I posit that this 'seeming' is a consequence of lacking familiarity with the secure architectures that can (and "should" - that word you like so much :-) be built around security abstractions.

On the contrary. It is quite compatible with my description that you can privilege your chosen security abstraction instead and slice up the other functionality into fine bits (or cross-cutting layers or factories or whatever) if you like. That just shifts the pain of the design trade off in a different direction, which it would seem you would prefer.

I don't have a preference one way or the other, outside of particular design parameters.

security does not need to be a source-code abstraction

I do have a preference here. Source code is generally the best representation of the developers' mental model of the application, and I consistently prefer language technologies and techniques which reinforce and support that. Any techniques that make source code less representative of the complete mental model for the program are going to be less desirable in my design philosophy.

You use the words "the whole notion of abstraction in a system" but the singularity in the word "the" is itself a leaky abstraction when considering staged languages, first-class parameterized closures, abstract factories and composable meta-factories, and so on.

That's a lot of weight to put on a determiner. ;-)

The definite "whole notion of abstraction" was precisely defined elsewhere in the post, and was quite distinct from examples of particular abstractions, or kinds of abstractions, which you give. I feel pretty comfortable that the two uses of the same word can co-exist nicely without contradiction.

Boy, when I started my project to find sophistry in PLs, I didn't realize that the LtU community was going to be so enthusiastic in giving me examples... ;-)

RE: Abstract Silver Bullet Factory

RE: Abstract Silver Bullet Factory

Pithy title. :-)

I use factories to abstract capabilities (i.e. capabilities-on-request) in a highly composable manner (via fallbacks, via factories that return factories) that supports integration of static code with live services. I abstract the factories by turning them into capabilities (i.e. providing abstract interface to the factories).

It isn't a silver bullet, but it isn't the rubber-bullet 'substitution' based constructors and procedures provided by common OO and functional languages, either, and it doesn't require a framework of singletons and ambient capabilities to hook an application into a system, and also supports confinement and testing and a place to add policy. It's a nice, practical, lead bullet. It'll kill quite a few problems with ease, so long as they only need shallow penetration.
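Here is roughly what such composition might look like (TypeScript; withFallback, restrictTo, and the capability names are all invented for illustration):

```typescript
// Sketch of composable capability factories: fallbacks, and factories that
// yield narrower factories. All names here are illustrative.
type Factory = (name: string) => unknown;

// Consult the second factory only when the first one declines.
const withFallback = (primary: Factory, fallback: Factory): Factory =>
  (name) => primary(name) ?? fallback(name);

// Restrict a factory to a whitelist: the result is itself a capability.
const restrictTo = (allowed: Set<string>, f: Factory): Factory =>
  (name) => (allowed.has(name) ? f(name) : undefined);

// Static code and live services integrate through one composed factory.
const staticCaps: Factory = (name) =>
  name === "log" ? { write: (msg: string) => console.log(msg) } : undefined;
const liveServices: Factory = (name) =>
  name === "clock" ? { now: () => Date.now() } : undefined;

const appFactory = withFallback(staticCaps, liveServices);

// A factory derived from a factory: confinement for untrusted components.
const sandboxFactory = restrictTo(new Set(["log"]), appFactory);
```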

End-to-end properties (e.g. ensuring dataflows or event-flows can easily be kept demand-driven and support sharing, composition, level-of-detail and graceful degradation from end-to-end) requires some different tools. The factories themselves often have only limited distribution (such that policies surrounding their use in composing multi-component systems can be enforced).

Source code is generally the best representation of the developers' mental model of the application,

Most projects involve composing libraries and frameworks not written by local developers. Many projects involve large numbers of developers who rarely know more than a small segment of it. Useful 'applications' tend to primarily involve hooking into external systems, also not known to the developers.

I can't say how most developers actually think (seeing as I'm a sample size of one, and my understanding even of how I think is quite limited) but I posit that source code should not be the primary representation developers have for their application.

Developers should learn to understand their application as a component that integrates in stages into a wider system, one they cannot fully grok. I consistently prefer language technologies and techniques which reinforce and support that. Any techniques that make source code more representative of the environment, which must be part of any complete mental model of the application, are going to be less desirable in my design philosophy.

This means war! ;-)

That's a lot of weight to put on a determiner. ;-)

No less than it deserves. :-)

Reifying collections in the singular will cause far more problems than it solves.

The definite "whole notion of abstraction" was precisely defined elsewhere in the post, and was quite distinct from examples of particular abstractions, or kinds of abstractions, which you give.

My impression from your post (and again upon rereading it) is that it offers no recognition of staged notions of abstractions all being part of the same system. Nor do you acknowledge that staging of abstractions completely avoids some of the 'clashes' you were describing.

Admittedly, your design philosophy on software might be directing your attentions away from such possibilities. My philosophy has me calling attention back to them.

Boy, when I started my project to find sophistry in PLs, I didn't realize that the LtU community was going to be so enthusiastic in giving me examples... ;-)

Marc, you've been here for over five years. You should have known better. ^_^

Two software designers go into a bar...

My impression from your post (and again upon rereading it) is that it offers no recognition of staged notions of abstractions all being part of the same system. Nor do you acknowledge that staging of abstractions completely avoids some of the 'clashes' you were describing.

I'm not rejecting this approach in any way. The only thing I would point out is that staging abstractions imposes its own abstraction on the system. ;-) This will have its own strength and weaknesses, as with any design trade-offs.

Your point seems to be that you have found a consistent trade-off balance that you are happy with, which is great. If I knew more about your projects and their requirements I might well endorse a similar balance... for those projects.

The position I'm staking out in this thread is a step behind yours in the design process: I'm articulating the design trade-offs and tensions themselves, not committing to or repudiating any particular solution.

People need to come to grips with the problem itself before they can assess a proposed solution.

Marc, you've been here for over five years. You should have known better. ^_^

I can dream, can't I? ;-)

Design "Trade Offs"

you can privilege your chosen security abstraction instead and slice up the other functionality into fine bits (or cross-cutting layers or factories or whatever) if you like. That just shifts the pain of the design trade off in a different direction

I'm curious as to what the trade-off is.

Belief that trade-offs are necessary is countered quite effectively upon the realization that you can always take a well-engineered solution and find ways to make it complex, fragile, poorly performing, filled with gotchas that require switching to different implementations to maintain functionality on Wednesdays and Fridays, etc.

Making such properties more difficult to achieve by accident in favor of making the 'good' engineering properties easier to achieve is only a 'trade-off' to the most literal-minded. To others, it looks a lot like a win-win.

You say in your blog:

The truth is that no solution is perfect, all solutions require trade offs, and the secret of success is focusing on the must-haves while sacrificing the merely nice-to-haves or, more often, even the not-quite-so-must-haves.

I agree with the "no solution is perfect" part. But trade-offs are relative. You cannot discuss trade-offs without comparing two different solutions. As a consequence of this neat little property, there may indeed be no 'trade-offs' in choosing one solution over another by every relevant metric for 'closer to perfection' you manage to apply.

I won't assume 'sacrifice' is necessary. That's like giving up without even trying. I much prefer to study how to avoid a trade-off, e.g. by rearranging various primitives... or by 'staging' an abstraction in combination with a DSL and staged compilation.

Sometimes I can find something clever, offering much gain and almost no trade-offs relative to some other solution (at least not of the technical sort... I'll offer that 'familiarity' or 'lack-of-foresight-why-didn't-I-think-of-this-before-I-started-implementation-this-will-require-rework' can be a trade-off). More often I fail to do so, but I'm usually convinced something 'better' by every metric I could care to apply could be found with patience, effort, and a little ingenuity.

The problem is king

trade-offs are relative. You cannot discuss trade-offs without comparing two different solutions

I agree that trade offs are relative. The whole point of the post you reference is that trade offs are relative to the problem you are trying to solve, which exists in a particular context. Fixating on solutions before the problem is understood is a source of a lot of problems in IT...

I've seen too many wildly different projects in too many wildly different circumstances to assume that one solution will always be the best, however good I think that solution is for the problems I've run into.

I won't assume 'sacrifice' is necessary. That's like giving up without even trying.

I'm not recommending defeatism. I'm recommending the mature realization that you often can't have your cake and eat it too. You may find a ratio of cake eaten to cake saved that is not a "sacrifice" in one situation, but that feels like one in another.

Relative to "Problem"?

I certainly agree that properly identifying "the problem" is well over half the solution. But I'm not understanding what it is you mean when you say that "trade-offs are relative to the problem".

If you're able to trade one set of problems for another, I suspect you're in actuality bartering in accidental complexity from a choice in solutions. It is the solutions that are making the trade-offs.

If you mean to say: "you can't know the best solution until you know the problem", I can only agree provisionally.

Thing is, even without knowing the problem, I can eliminate quite a few solutions based on (among other things):

  • the environment in which any solution will need to be implemented
  • engineering properties that are desirable for every solution (including predictability, performance, efficiency, utilization, robustness, resilience, consistency, and more)
  • the vast, infinite space of contrived ways to make solutions worse by above said engineering properties

Further, whether you be a language-designer, systems engineer, or an applications architect, one can observe that "the problem" cannot possibly be king.

Why?

Because, for these entities, the problem is never completely specified, will change over time, will exist in a changing environment. They cannot know 'the problem'. It's the Problem Uncertainty Principle. If you try to know a problem too clearly, the problem will wag its tail excitedly, move, grow, eat, poop. Best one can do with a PUPpy is give it a playpen - which, losing the metaphor, refers to tackling a larger problem that surrounds the specific one on all known sides and gives it room to move and shift and generally be amorphous and incompletely specified.

Also, many concerns are inherently 'pervasive', meaning they impose constraints on their solutions from their environments. And these pervasive concerns are known: process accounting, resource reclamation, persistence, congestion control, concurrency, parallelization, demand-driven, resilience or self-healing, graceful degradation, disruption tolerance, privacy or digital-rights management, authority, load-balancing, utilization, delta-compression, versioning, undo-ability, composition, and the list goes on and on and on.

Important points about these 'pervasive' concerns:

  • which ones apply to your 'problem' will also change over time due to changing environments
  • if your abstractions don't support a given pervasive concern, you can forget achieving it without an overhaul
  • it is truly difficult to create solutions to pervasive concerns, especially when there is more than one, even for very specific problems (not that there is such a thing). Inventor's Paradox often applies.

I suspect that those who put the problem first will usually fail to consider the pervasive concerns that apply now and are likely to apply in the future. Further, I suspect that even if they do consider these properties they will often struggle and fail to come up with solutions.

It should be the language designer's responsibility to consider these, to design the language so the abstractions support more pervasive concerns than did the previous language... and to do so with minimal sacrifice.

I'm not one to assert that any one solution is always best, or that we should go about creating solutions in search of a problem. There are plenty of known problems to go around. I tend to favor multi-paradigm solutions that are glued together via layering, staging, and first-class configuration sub-languages.

I certainly recognize that some people get carried away by zealotry, creating cultures and communities around languages or paradigms, favoring the ideal rather than the real, getting defensive... and sometimes a little possessive. It was my impression, upon reading your comments in your blog, that this sort of culture and ideal-based design was the "fixating on solutions" to which you were objecting. Of course, given confirmation bias, that could just be my interpretation because that's what I like to hear.

I'm not recommending defeatism. I'm recommending the mature realization that you often can't have your cake and eat it too.

I prefer to wait for (or provide) proofs for where I suspect trade-offs must be made. Whether I can have some cake and eat it too depends largely upon the solutions available to me. Attempting to prove that a trade-off must be made will at the very least help clarify the exact nature of the trade-off, and will sometimes result in work-arounds to the tradeoff (often via splitting an abstraction or two and rearranging them).

You may find a ratio of cake eaten to cake saved that is not a "sacrifice" in one situation, but that feels like one in another.

I don't acknowledge "feels like" a sacrifice.

Intuition is a learned thing. What feels like a sacrifice or awkward at the start changes as we learn how the pieces fit together, as we grow familiar, as we learn more effective ways of thinking.

When discussing sacrifices between solutions, one can usefully discuss economic sacrifices (rework due to accepting new solutions, time spent in training) and technical sacrifices (engineering properties that become less controllable, less predictable under composition, or less optimal).

Yes, trade-offs

But trade-offs are relative. You cannot discuss trade-offs without comparing two different solutions.
...
Sometimes I can find something clever, offering much gain and almost no trade-offs relative to some other solution (at least not of the technical sort... I'll offer that 'familiarity' or 'lack-of-foresight-why-didn't-I-think-of-this-before-I-started-implementation-this-will-require-rework' can be a trade-off). More often I fail to do so, but I'm usually convinced something 'better' by every metric I could care to apply could be found with patience, effort, and a little ingenuity.

The sense of "better" here is the kind of better that comes from good craftsmanship. There can be no doubt that hard work by conscientious, knowledgeable, smart, and well-resourced developers delivers far more secure systems than, well, ...

This is not the point. Mozilla, say, really does face hard trade-offs when they write Firefox: they are layered above several operating systems, they interact with complex network protocols, they have to store sensitive information in user space. They face a highly competitive environment, where they have to be flexible and fast in developing their products. They really do face the kind of trade-offs that Peter was talking about.

haha

I really like your points. However, I intentionally did not make the assumption that there could be "a perfect abstraction for all purposes". A perfect, universal abstraction would have to expose all potentially meaningful aspects of the system. I think if you could do that, you might as well just refactor that system according to your perfect abstraction.

Actually, the argument I was trying to avoid making (and/or getting into) was whether it is even possible to ever create a perfect abstraction for any single purpose. I think it is possible to create a perfect abstraction for simple systems for some specific, well-defined purposes. But I said, "it's very difficult for an abstraction to perfectly abstract away the detail of the lower level [meaning for even a single purpose]" specifically because of what you mention -- an abstraction is necessarily limiting and/or obscuring some of the details to simplify an interface -- and often the assumptions made in the process of creating or using the abstraction result in undesired behavior.

I think an interesting analogue of abstraction is lossy compression. In making the abstraction, you've "compressed" the exposed interface of the system (for simplicity instead of disk space) but in the process of doing so you've eliminated details that later on you may wish you had -- or you've introduced artifacts that aren't actually part of the underlying system but that you're now stuck with.

I also think your point about useful security abstractions vs. useful "core functionality" abstractions is very astute. You describe the desired security as "lengthwise", but the way I think of it in my mind's eye is that we often break our systems into horizontal layers (e.g. OSI), while security must be a property that exists vertically through all the layers -- same point, orthogonal analogy.

Anyway, I'll be chewing on what you posted for some time -- thanks.

Systems of Abstraction

an abstraction is necessarily limiting and/or obscuring some of the details to simplify an interface -- and often the assumptions made in the process of creating or using the abstraction result in undesired behavior

I think you make an unnecessary assumption about coupling between interface and implementation of abstractions.

It is possible to separate strategies and policies for implementing abstractions from the abstractions themselves. Techniques for doing so include abstract factory (ideally with policy injection), Declarative Metaprogramming with preconds/postconds/invariants/assumptions (which is my candidate for the primary programming paradigm of the long-term future), abstract constructor, predicate dispatch, logical (domain-specified) subscriptions in event-driven architectures, and much more.

Substitution is a rather highly coupled technique for abstraction. In particular, substitution hides details but does not delay choice. This leads to a great deal more 'programmer control over implementation' than is often desirable. Substitution should (that word again) be regarded as just one tool for abstraction - and should be regarded as one of the more primitive tools for abstraction, to be used only when the programmer requires the level of control substitution offers.

With many abstraction techniques other than substitution, the "details" may be obscured but are by no means "limited" by the abstraction interface. Indeed, these details can be chosen heuristically to meet local and global policies, and (with first-class support - as in declarative metaprogramming) to most effectively combine with the implementations of the other, nearby abstractions. Abstractions that delay 'choice' open new opportunities rather than close them.

This easily results in 'desirable' behavior, selected more effectively than a programmer can usually manage, and often selected or tuned at runtime based on more information than a programmer can possibly possess.
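As an illustrative sketch of delayed choice (TypeScript; the Dictionary abstraction and policy names are invented), contrast substitution, where the call site commits to an implementation, with an injected policy that makes the choice later:

```typescript
// Sketch: policy injection delays the implementation choice that plain
// substitution would force the call site to make. All names are hypothetical.
interface Dictionary {
  get(k: string): string | undefined;
  set(k: string, v: string): void;
}

// Substitution: a caller that writes "new HashDict()" has committed up front.
class HashDict implements Dictionary {
  private m = new Map<string, string>();
  get(k: string) { return this.m.get(k); }
  set(k: string, v: string) { this.m.set(k, v); }
}

class ArrayDict implements Dictionary {
  private entries: [string, string][] = [];
  get(k: string) { return this.entries.find(([key]) => key === k)?.[1]; }
  set(k: string, v: string) {
    const i = this.entries.findIndex(([key]) => key === k);
    if (i >= 0) this.entries[i] = [k, v]; else this.entries.push([k, v]);
  }
}

// Delayed choice: the client states what it needs; a policy, not the client,
// picks how, possibly at runtime and with more information than the programmer had.
interface DictRequirements { expectedSize: number; mostlyReads: boolean; }
type DictPolicy = (req: DictRequirements) => Dictionary;

const defaultPolicy: DictPolicy = (req) =>
  req.expectedSize > 64 ? new HashDict() : new ArrayDict();

function makeDictionary(req: DictRequirements, policy: DictPolicy = defaultPolicy) {
  return policy(req); // the choice is made here, not at the site that uses it
}
```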

Substitution is a rather

Substitution is a rather highly coupled technique for abstraction. In particular, substitution hides details but does not delay choice. This leads to a great deal more 'programmer control over implementation' than is often desirable. Substitution should (that word again) be regarded as just one tool for abstraction - and should be regarded as one of the more primitive tools for abstraction, to be used only when the programmer requires the level of control substitution offers.

I agree with this. In particular, I see researchers suggest that by maintaining the Liskov Substitution Principle while allowing alternative re-use techniques, they are 'generalizing object-oriented programming'.

Substitution often not only fails to delay choice, but forces programmers to make choices. The best way to delay a choice is to avoid it altogether.

The Quota Problem

As usual, I love real world examples, rather than this airy stuff about how capabilities give you such and such. I understand the arguments, but they don't feel convincing because I've been burned in the past by snake oil salesmen. My bookshelf has lots and lots of books about offshoots from object-oriented and structured programming. They're mostly snake oil.

Here is a real world design issue to focus on.

Quotas.

How do you implement them? By user? By user role? By containers (i.e., directory quotas)? A composition of various options? How will it be composed? What about the opposite problem of quotas: reserving and guaranteeing a resource (such as disk space)? Is this done by user? By user role? By container?

Bonus points if you can explain why.

This problem is to showcase to some of you why conceptual integrity is very important. Knowing exactly what you are trying to build, and why, is how to achieve relatively stable abstractions. We should not aim for perfection, merely stability. If "Capabilities" are not a part of the agreed upon abstraction between the programmer and problem domain expert, then when the problem domain expert wants to adjust the capabilities of a piece of functionality, we have to note the abstraction is either unstable (and modify the source) or stable (and deny the change request). In this way, we can better achieve estimates.

Capabilities provide a stable model for security. They do not provide you with a stable way of structuring your problem domains correctly. So no silver bullet like AOP promised.

Inventor's Paradox

I suggest that 'quotas', as well as realtime guaranteed access to certain resources, might be more easily tackled by designing your language with "process accounting policies", with a general concept of resources (a resource being anything that is limited in distribution, including memory and CPU time, noting that the distribution task itself can consume the resource being distributed), and of 'resource goals/heuristics/invariants/policies/distribution/reclamation/revocation'.

Tackling 'quotas' might keep your nose too close to the bark to see usable patterns. In particular, quotas are just one distribution policy... and likely to prove an arbitrary and not-very-good policy that has poor engineering properties for composition, priority inversion, safety, and so on. That's a lot of accidental complexity you'll be trying to work into your language/framework/OS!

We should not aim for perfection, merely stability.

We should not aim for perfection, 'merely' stability, resilience, robustness or safety, various optimizability, distributability, configurable modularity, integration of policy, delay of choice (flexibility), ..., and 'symmetry' (composability without gotchas).

Capabilities meet quite a few of those needs: statically processed capabilities meet all of them except certain forms of configurable modularity, and dynamically processed capabilities meet all of them except certain forms of optimizability.

I think the language or framework or OS designer should sketch out a fairly large list of properties for the abstractions in use. More good-engineering properties embedded into the language means less time fighting the language (greenspunning one's way out of a turing-tarpit by use of frameworks) to achieve them.

It also makes for better programmers who know more about good design because they are wasting less time learning how to tame a specific language in order to achieve adequate design.

Stability is good, but not good enough. We don't need perfection, but we should try to get a few steps closer than "mere stability".

We should not aim for

We should not aim for perfection, 'merely' stability, resilience, robustness or safety, various optimizability, distributability, configurable modularity, integration of policy, delay of choice (flexibility), ..., and 'symmetry' (composability without gotchas).

I would say all of those fall out from a stable design, with perhaps the exception of distribution and optimization.

Optimization is incredibly fickle, as Xavier Leroy discusses in the July 2009 issue of Communications of the ACM. He showcases his CompCert project, a Coq-based compiler toolchain that performs provably correct optimizations. I think we still have a lot to learn in this area.

Likewise, distribution is a concept long discussed in the CS literature. J.H. Davenport anticipated Map-Reduce by over a decade, but he did not simplify his functional approach to just two operators (or three, in the case of Yahoo's Map-Reduce-Merge). Currently, Erik Meijer wants to see how his Volta project can allow writing a project against a single-tier model and then distributing it on demand across n tiers. It's a great idea, and has been talked about for decades theoretically, but now we need to actually put the theory into practice and see how it fares.

The major point here is that stable interfaces imply a lot of things methodologically have been done correctly. Stable doesn't mean "frozen", either. The Netscape browser APIs are frozen, but they're not particularly "stable", especially in the sense that we probably wish we had better plans for creating plug-ins.

Stable designs tend to correctly re-use component parts, and I just don't see a good way to achieve configuration without integration of policy.

I think the language or framework or OS designer should sketch out a fairly large list of properties for the abstractions in use

I agree completely. Use cases/user stories are a great way to flesh out what can be achieved. We "copy and paste" configuration examples from exemplary use cases of our object model often.

Capabilities meet quite a few of those needs: statically processed capabilities meet all of them except certain forms of configurable modularity, and dynamically processed capabilities meet all of them except certain forms of optimizability.

Sorry, I do not see the need to distinguish between static and dynamic capabilities. I file this under delay of choice.

Tackling 'quotas' might keep your nose too close to the bark to see usable patterns.

The underlying example I had in mind was actually Sun's ZFS, which does not support user or group quotas last I checked. In my experience, people seem to think user and group quotas are more fine-grained than container-based quotas.

It is true a quota is not a prudent primitive. Take for example a Stable Bloom Filter or queue. I file this under symmetry.

I would say all of those

I would say all of those fall out from stable design, with perhaps the exception of distribution and optimization.

That really depends on the environment against which you are specifying 'stability', no? I suspect you and I are assuming different baselines.

Stable designs tend to correctly re-use component parts, and I just don't see a good way to achieve configuration without integration of policy.

There are a number of ways to achieve configuration without integration of policy. How are you defining "good"? ^_^

Sorry, I do not see the need to distinguish between static and dynamic capabilities. I file this under delay of choice.

I do not mean for the programmer to make an explicit distinction; rather, I used the words 'statically processed' and 'dynamically processed' to imply staging (usually optimizer and compiler vs. runtime). Utility exists in distinguishing between the two when the time comes for dead-object elimination based on confinement within object configurations, formation of 'cliques' for automatic distribution or mobility, combining objects into continuations to enhance partial evaluation, etc.

In languages supporting object-configuration abstractions, object-configurations will often be largely static with just a few dynamic 'holes' - these being references to other objects, requests to a factory, or certain other details. When configurations are functionally composed, those dynamic holes are often filled with even more static capabilities. This quickly leads to large configurations that can be heavily preprocessed and optimized by tracing static capabilities, determining confinement, etc.

Stability, in the problem

Stability, in the problem domain, is my baseline. What is yours?

With regard to staging, I think the real key is to have an auditing machine that keeps track of when something is "frozen" in the stages pipeline. This then tells you what "Refresh" and "Recompile" mean.

Just speaking from experience, not theory.

Problem with Problem Domains

Stability, in the Problem Domain, is my baseline, too. But now we've just regressed the argument. ;-)

I'd consider 'stability in the problem domain' as applying to abstractions for modeling and describing a particular class of problems (e.g. description and layout for a menu) with little consideration of 'stability in the implementation domain'.

Implementation domain would include environmental constraints, such as programming environment (organization of source-code, compile and link orders, dependency issues) and runtime environment (realtime constraints, runtime extensibility, configurable modularity, security, etc.). It would also include performance goals.

Since I want more than "stability in the problem domain ignoring solution context and environment", I'd say one needs to consider much more than stability in the problem domain when developing abstractions.

With regard to staging, I think the real key is to have an auditing machine that keeps track of when something is "frozen" in the stages pipeline. This then tells you what "Refresh" and "Recompile" mean.

Please clarify: To which engineering properties is this auditing machine the key?

Pipelines and dataflows with automatic caching and the ability to subscribe and 'push' updates are pretty useful (speaking from experience and theory). Supposing you started with such a system built from text pages in an IDE, where would you be injecting "freezable" push-pull barriers to wait on 'refresh' or 'recompile' commands?

Great question

Supposing you started with such a system built from text pages in an IDE, where would you be injecting "freezable" push-pull barriers to wait on 'refresh' or 'recompile' commands?

The ideal answer is not to define injection points, but instead allow fault tolerance mechanisms to handle refresh and recompile. People naturally assume the right answer to a problem is to solve it very directly, but the difference between a good design and a bad design is that a good design simply tries to do the right thing and that's it. Today, we don't code this way, except for perhaps a few programmers who program automated truth maintenance systems. That's what happens when Systems Research Is Irrelevant: we get stuck in the UNIX way of doing things.

Most modern batch compilers use a coarse-grained file timestamp to decide whether a binary is frozen for a particular edit-compile-debug cycle. Nobody actually directly defines this timestamp; it is specified vis-à-vis the action of something else. This effectively separates control flow from synchronization.
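A toy version of that mechanism (TypeScript on Node; the file names are made up) might look like:

```typescript
// Minimal sketch of make-style freshness checking via file timestamps,
// the coarse-grained mechanism described above.
import { statSync, existsSync } from "fs";

function isStale(source: string, binary: string): boolean {
  if (!existsSync(binary)) return true; // never built
  return statSync(source).mtimeMs > statSync(binary).mtimeMs;
}

// Nobody sets the timestamp directly; it changes as a side effect of editing,
// which is what separates control flow from synchronization here.
if (isStale("main.c", "main.o")) {
  console.log("recompile main.c");
}
```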

Everything is already subscribed for by the nature of staging and sequencing. This is similar to the properties of Subsumption Architecture: Intelligence Without Representation.

BTW, a pipeline is a reification of a dataflow, in my book, so I'm not sure I understand your differentiation between the two? The only property a pipeline has that a dataflow doesn't is processor-specific "data hazards" (meta note: this is a leaky abstraction).

Please clarify: To which engineering properties is this auditing machine the key?

Key, in the sense that it is more foundational than existing approaches to building binaries. In much the same way that "names" are superfluous to an algorithm that addresses variables in a lexical scoping scheme.

Most of the time when I program I feel like today's batch compilers over-constrain my system. Even interactive compilers in IDEs are very batch-like, and their interactiveness is usually only to surface errors. I want static typing, but dynamic systems where the definition of what should be dynamic and static changes on the fly based on evolution to the system. In this way, const int i = 0; is really just a variable with version-specific metadata.

Dataflows and Pipelines

The ideal answer is not to define injection points, but instead allow fault tolerance mechanisms to handle refresh and recompile.

I'd certainly agree with that, at least in a live-programming environment. The 'fault tolerance' mechanisms could be something like automatically switching to new code once it both parses and passes all unit tests. I'd certainly like a WikiIDE to support live programming, but the ability to slice the bindings between a running service and the source code would be critical for securing some applications in such an environment.

People naturally assume the right answer to a problem is to solve it very directly

A C programmer won't solve signals or error-handling or persistence within an application very directly. I'm pretty well convinced that people naturally assume they have the right tools for the job. But even if they don't assume it, well, if all you have is a hammer you don't have much choice but to pound in the screws...

A pipeline is a reification of a dataflow, in my book, so I'm not sure I understand your differentiation between the two?

I'd generally expect a dataflow to be side-effect free, immediate-mode (independent of history), and have stateful semantics - as per spreadsheets and functional reactive programming. Pipelines in general could include side-effects and may carry 'events' or 'commands' rather than maintaining a 'state'.

A dataflow is a very limited class of pipeline, but has quite a few very nice engineering properties and automated optimizations available to it as a result of its limitations.
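A small sketch of the distinction (TypeScript; names invented): the dataflow value is always derivable from current inputs and carries no history, while the pipeline stage is driven by discrete events and may perform effects:

```typescript
// Dataflow: immediate-mode and stateful, like a spreadsheet cell; "sum"
// always reflects the current inputs and has no history of its own.
const a = { value: 2 };
const b = { value: 3 };
const sum = () => a.value + b.value;

// Pipeline: stages forward discrete events and may perform side effects.
type Stage<T> = (event: T, next: (e: T) => void) => void;

const logStage: Stage<string> = (e, next) => { console.log("saw", e); next(e); };
const upcase: Stage<string> = (e, next) => next(e.toUpperCase());

// Compose the stages and push an event through them.
const run = (e: string) =>
  logStage(e, (x) => upcase(x, (y) => console.log("out", y)));

run("hello");     // saw hello / out HELLO
a.value = 10;     // sum() now reads 13, though no event flowed anywhere
console.log(sum());
```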

It would be pretty nice if one could simply modify, say, the Firefox code at Mozilla and, when it's passed all its automated tests and security audits, automatically push updates to all users of Firefox without restarting their browsers. That would be an end-to-end pipeline, even including distribution, without any barriers. Achieving that in a manner users could trust and live with is beyond our current state of the art, though it's beginning to happen for "higher layer" code such as Chrome and Firefox extensions.

Once side-effects in a pipeline get involved in the picture, so do a variety of safety and reliability considerations, doubly so if you have distribution or realtime or optimization issues to deal with. E.g. one may wish to query a database in order to 'freeze' a copy of some existing data, allowing it to optimize a great deal further, and ensuring that the code will run predictably even if the database fails.

Most of the time when I program I feel like today's batch compilers over-constrain my system.

I understand the feeling.

A C programmer won't solve

A C programmer won't solve signals or error-handling or persistence within an application very directly.

By very direct, I meant bludgeoning the program with over-specification (noise). Many people think planning requires master plans, that sequencing requires explicit sequences, etc. In my experience, the more control I relinquish the easier planning becomes.

A clever C programmer solving error handling might best be seen in how Daniel J. Bernstein wrote qmail.

I'd certainly like a WikiIDE to support live programming, but the ability to slice the bindings between a running service and the source code would be critical...

I don't understand WikiIDE. c2 can be like that sometimes: the discussion is too hard to follow. Basically, when I think about the web and an IDE, I ask myself, "If I'm the chief software architect at Google and Sergei asks me to make our programmers more productive, what choices will I make that take advantage of our awesome supercomputing infrastructure?" That system architecture at Google will then likely mirror what the system architecture of the PC will look like 50 years from now. Ivan Sutherland famously called this anticipation the Wheel of Reincarnation.

Web and IDE

Basically, when I think about the web and an IDE, I ask myself, "If I'm the chief software architect at Google and Sergei asks me to make our programmers more productive, what choices will I make that take advantage of our awesome supercomputing infrastructure?"

When I think about the web and an IDE, I basically ask myself, "what choices will I make that take advantage of as many of the 6 billion units of gray matter on the planet as possible?"

The best answer I've found so far is a wiki-based IDE - stigmergy in programming on a massive scale, widespread refactoring across and between projects, zero-button testing, etc.

Taking advantage of the supercomputers is something our programs should be able to do, though. Languages supporting automatic distribution, redundancy for optimizations and resilience, load-balancing, and security enough that distributed supercomputers feel comfortable hosting them... are all good.

But those are features to make programming for the web easier, not programming on the web. You need both.

By very direct, I meant bludgeoning the program with over-specification (noise).

Ah. And I suppose that when you say 'bludgeoning the program with over-specification', I should interpret it as what I'd call 'struggling with the language and producing poorly composable, inefficient, and often buggy mechanisms to work around it' aka 'greenspunning'.

After all, when I say 'direct' I tend to mean 'the solution is simple, straightforward, correct, further composable, and largely free of semantic noise'.

It's so hard finding a common language. ^_^

The quotas cannot be just

The quotas cannot be just removed, because they are part of the tariffing model. Also, the accessible resources do not correspond to a particular object or a node in the object graph, but rather to a possibly large group of objects or methods corresponding to categories or "crosscutting concerns" in AOP lingo.

As I understand it, Z-Bo's major concern with CapS is that it doesn't address the problems of object models in the way AOP did, but just takes them for granted. AOP can be understood as a way to define "ambience" of some sort. It is inherently non-local, and it had a good rationale but not so convincing implementation techniques and concrete representations.

Access control lists have always been used to group large sets of unrelated objects and/or characterize abilities. Just take a 2G or 3G smartcard. A particular access condition can simply mean "readable/updateable by the network provider" or "readable/updateable by the user" (a partition according to roles), and this isn't a meaningless distinction.
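
To make that concrete, here is a toy sketch in Rust (the types and access conditions are invented for illustration, not any real smartcard API): each file carries an access condition naming the role permitted to read or update it, so one label governs a whole category of operations at once.

    // Roles in the spirit of 2G/3G smartcard access conditions.
    #[derive(PartialEq, Clone, Copy)]
    enum Role { NetworkProvider, User }

    // One access condition covers a whole group of operations.
    struct AccessCondition { read: Role, update: Role }

    struct ElementaryFile { ac: AccessCondition, data: Vec<u8> }

    impl ElementaryFile {
        fn read(&self, who: Role) -> Result<&[u8], &'static str> {
            if who == self.ac.read { Ok(&self.data) } else { Err("access denied") }
        }
        fn update(&mut self, who: Role, data: Vec<u8>) -> Result<(), &'static str> {
            if who == self.ac.update { self.data = data; Ok(()) } else { Err("access denied") }
        }
    }

    fn main() {
        let mut imsi = ElementaryFile {
            ac: AccessCondition { read: Role::User, update: Role::NetworkProvider },
            data: vec![0x08, 0x09],
        };
        assert!(imsi.read(Role::User).is_ok());
        assert!(imsi.update(Role::User, vec![]).is_err()); // only the provider may update
    }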

I don't have any concerns, I

I don't have any concerns; I was just inviting discussion based on concrete issues in system design.

Capabilities must be the most-discussed subject on Lambda the Ultimate with the fewest academic references or concrete examples. This is a blog, after all, so suppose I stumble upon it and read about capabilities. I don't see any links, and really just hear talk about a magic object-model lotion called Banana Boat E-Tan. I've read the gamut of E papers, so it doesn't bother me, except for this itch I've got for examples.

But your comments about AOP are correct. I am not sure I agree about "problems of object models", though; this is a slippery slope. Kiczales calls it the tyranny of the dominant decomposition, but others, such as proponents of frame-oriented programming, have voiced very similar criticisms without producing results. I am very skeptical of people who want to increase the parameterization of systems by generalizing black boxes. Increasing parameterization only increases the importance of everything in No Silver Bullet... we should be striving to remove parameterization, not embrace it. For more thorough critiques of AOP, I recommend the essays and journal articles of Friedrich Steimann, as well as the list of aspect-oriented bug patterns compiled by Sai Zhang et al.

A few years ago, we had a

A few years ago, we had a discussion about this on cap-talk, when a paper was published on adding memory accounting to Scheme without process partitions. Lots of good points in that thread.

Object-capability operating systems in the past handled quotas trivially, simply because data and capability storage were reified as explicit objects. See the Space Bank pattern in KeyKOS, EROS, and now CapROS. I believe Coyotos has the same pattern, though the primitives have changed slightly, IIRC.

The oldest capability system I'm aware of, NLTSS, also had storage accounting, but I'm not familiar with the details.

Finally, capability-secure PICT uses linear typing to ensure that ownership of message resources is actually transferred to the recipient, so quotas can be implemented by adding a storage manager here too.
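
PICT's linear types are beyond most mainstream languages, but Rust's move semantics give the flavor of ownership transfer (a sketch of the general idea, not of PICT's mechanism): once a message is sent, the sender can no longer touch its storage, so the receiver can be held accountable for it.

    use std::sync::mpsc;

    // The message owns its storage outright; there is no shared alias.
    struct Message { payload: Vec<u8> }

    fn main() {
        let (tx, rx) = mpsc::channel::<Message>();

        let msg = Message { payload: vec![0u8; 1024] };
        tx.send(msg).unwrap();
        // println!("{}", msg.payload.len()); // compile error: `msg` was moved
        // Ownership of the 1 KiB now rests with the receiver, which a storage
        // manager could debit from the receiver's quota upon delivery.
        let received = rx.recv().unwrap();
        assert_eq!(received.payload.len(), 1024);
    }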

I think the key is to make storage first-class so that it can be managed within the program. Capabilities are a design philosophy to reify all sources of authority as first-class objects, so I think capabilities lend themselves quite well to implementing quotas or any other policy.
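
A toy version of that idea, loosely after the KeyKOS/EROS Space Bank (all names here are invented): storage authority is an ordinary first-class object, allocation must go through it, and handing a subsystem a sub-bank simultaneously grants and bounds its quota.

    use std::cell::Cell;
    use std::rc::Rc;

    // Holding a reference to the bank *is* the authority to allocate,
    // and the quota travels with the reference.
    struct SpaceBank { remaining: Rc<Cell<usize>> }

    impl SpaceBank {
        fn new(quota: usize) -> Self {
            SpaceBank { remaining: Rc::new(Cell::new(quota)) }
        }

        // Every allocation is debited against this bank's quota.
        fn alloc(&self, bytes: usize) -> Result<Vec<u8>, &'static str> {
            let left = self.remaining.get();
            if bytes > left { return Err("quota exhausted"); }
            self.remaining.set(left - bytes);
            Ok(vec![0u8; bytes])
        }

        // Carve out a sub-bank, granting a subsystem at most `quota` bytes.
        fn sub_bank(&self, quota: usize) -> Result<SpaceBank, &'static str> {
            let left = self.remaining.get();
            if quota > left { return Err("quota exhausted"); }
            self.remaining.set(left - quota);
            Ok(SpaceBank::new(quota))
        }
    }

    fn main() {
        let root = SpaceBank::new(4096);
        let untrusted = root.sub_bank(1024).unwrap();
        assert!(untrusted.alloc(512).is_ok());
        assert!(untrusted.alloc(1024).is_err()); // the sub-bank runs dry, not the system
    }

A real space bank also lets the parent destroy a sub-bank and reclaim everything allocated from it; that part is elided here.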

Physicality is useful

One of the best security devices on my computer could be described as a physical device. It is the combination router and firewall that almost everyone has. It works because the program is in read-only memory and can't be altered or corrupted. Physicality is useful.

I knew I shouldn't have...

I should have taken the time to make sure that responses to this forum post were careful about how they use two tricky terms: security and modularity.

Security is generally used carelessly. Frequently, people talk of "having secure access" when they log in to their webmail account from a browser. In the context of reading your email on a laptop via SSL on the free WLAN at a coffee shop somewhere, that might mean one of two things: someone has a sense of safety because they are using one sort of encryption, or one possible encrypted protocol is safe against attack. If the other encrypted protocol is WEP, well, someone else with a laptop in the coffee shop might be able to use a chosen-plaintext attack against WEP and seize control of your email account.

The key to making the right claims about security is asking what kinds of attacks an attacker might use, and how one can defend against each attack (cf. Programming Satan's Computer). And the total class of attacks is constrained not by a programmer's model of the deployed system, but by physics. You cannot be sure that you have eliminated all possible avenues of attack.

If you can't have certain knowledge of all avenues of attack, then you can't know how they match up with the modularity of your design. This is not to say that some programming languages or design patterns are not better for security than others, since we do understand the compositionality of some classes of attack, and we do understand how certain issues in software can be disastrous for the defensibility of software systems.

I think very highly of such system design strategies as well-factored trusted code bases (which can leverage PL ideas intelligently) and capability-based security (likewise, as EROS &c. have shown). But these are not panaceas.

Software Security

The key to making the right claims about security is asking what kinds of attacks an attacker might use, and how one can defend against each attack

Since you're bringing up 'using security carelessly', I feel some need to point out that security is not just about defense against attacks (i.e. eliminating a vulnerability). One can also secure a system by eliminating a threat (e.g. by controlling incentives for an attack, or by increasing risks and costs). And system security equally involves survivability: robustness and resilience under both denial-of-service attacks and legitimate high loads.

the total class of attacks is constrained not by a programmer's model of the deployed system, but by physics; [...] if you can't have certain knowledge of all avenues of attack, then you can't know how they match up with the modularity of your design

Similarly, the total class of defenses is constrained not by a programmer's model, but by physics, including securing the hardware and social aspects. It's rather difficult to secure a system if a person is free to install keyloggers and such. Privileged insider spies, or even just careless persons uneducated in security protocols, are also a problem.

Some software security designs can 'enhance' hardware and social aspects, such as by taking advantage of hardware-level digital rights management or smart cards, by increasing risks through auditing, by peer-to-peer communication protocols that build and maintain a web of trust, or by integrating resilience to handle social weak points (i.e. users forget passwords, lose smart cards, and sometimes use insecure systems; hardware can be stolen or lost to an enemy; users can be placed under duress; etc.).

A very important concern is ensuring the system can be extended securely, such that programmers don't have incentive to work around the security system. Secure composition, and secure FFI, and secure distribution all reduce this incentive.

I take the rather extreme view that software security designs should take into consideration all of those things, and support them.

But software security doesn't need to handle "all avenues of attack". Physical security must be provided at the physical layer. Social security must be provided at the social layer. Software can support these layers by allowing more to be achieved securely in software (i.e. so systems administration involves fewer face-to-face visits and less sitting down at a user's keyboard) and by providing the right hooks (e.g. for auditing, or DRM, or alerts that trigger automatic chainguns, or making it easy to report duress in a manner hidden from the guy sitting over your shoulder with a gun). But software is fundamentally limited in its ability to handle attacks in higher layers.

So we don't worry about eliminating "all possible avenues of attack". Software only needs to eliminate software avenues of attack. And the first part of doing so is ensuring these avenues are well defined.

The secure design I've been working on involves security considerations in four layers (a sketch of the first layer follows the list):

  1. object capability (unforgeable names protecting access to actors, data flows, and event flows; these protect the host of untrusted remote software)
  2. secrecy-privileged capabilities (attaching certificate requirements for knowing capabilities; these protect the remote service from accidentally distributing secrets or private data to uncertified systems, and help limit damage in the event that such secrets are leaked; platform-level certs provide high-performance caching)
  3. cryptographic encode/decode primitive (allows programmers to build their own cryptography, supports rights escalation; being a primitive allows it to be installed into dataflows and event-flows, supports 'static typing', and enables a neat optimization where an encode and decode on the same platform cancel out)
  4. global ambient capabilities (allow, for example, a 'timer' event-flow or 'random number generator' actor created by platform A to be automatically distributed to platform B, or a neighbor of B, even if 'timer' is not a language primitive; this enables what is effectively secure distribution for FFI objects, still subject to secrecy privileges, and reduces the incentive to work around the system to achieve certain forms of latency and bandwidth optimization or disruption tolerance; I also support local ambient capabilities, for application integration)
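
Here is layer 1 in miniature, as Rust (everything in this sketch is invented for illustration): a capability is just an unforgeable reference, with module privacy standing in for unforgeability, so outside the module the only way to obtain a Cap is to be handed one.

    mod vault {
        // The private field makes Cap unforgeable outside this module.
        pub struct Cap { _private: () }

        pub struct Vault { contents: String }

        impl Vault {
            pub fn new(contents: &str) -> (Vault, Cap) {
                (Vault { contents: contents.to_string() }, Cap { _private: () })
            }
            // Access requires presenting the capability, not merely naming the object.
            pub fn read(&self, _cap: &Cap) -> &str { &self.contents }
        }
    }

    fn main() {
        let (v, cap) = vault::Vault::new("launch codes");
        println!("{}", v.read(&cap)); // the holder of the cap may read
        // let forged = vault::Cap { _private: () }; // compile error: field is private
    }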

In addition, automatic load balancing, redundancy, caching for dataflows, fallbacks for dataflows and event-flows, history for event-flows, termination properties up through the actor-configuration layer, and elimination of deadlock each help protect survivability.

Eventually, I'd like to separate the code provers from the language (i.e. so platforms can trust multiple provers to certify that code is valid, terminates, is effect-free, has certain realtime properties, etc.), and I'm still trying to fully grok zero-knowledge systems.

Anyhow, software security might not be a 'panacea', but the level of support it can provide to social and physical layer security is pretty darn high.

Layers

I like what you have written very much.

I hadn't expected to be criticised on my attempted clarification of what security is, which I think is OK, but on what modularity is, about which I said nothing.

To say that security isn't modular isn't to say that structure is of no use in trying to build secure systems, but rather that we can't assume the separate parts will interact only through the intended interfaces. In particular, it is hard to be sure that the "physical" layer will behave as the software layer assumes (say, ferrying IP datagrams with only failures of certain, well-understood kinds, and otherwise leaving the equipment alone), not least because of those annoying, misbehaving humans that infest the physical world.