## Automated Code Review Tools for Security

Gary McGraw, Automated Code Review Tools for Security. Forthcoming.

An introductory overview article about static analysis tools and how they can be used to improve software security. The article talks a bit about the history of Cigital's ITS4 tool.

### Source code analysis vs. static analysis

To be fair, this is (as described) an introductory article. But it seems to stop at the level of source code analysis for the C/C++ family. (McGraw seems to use the terms "static analysis" and "source code analysis" as if they are equivalent.) On the third page (p. 94 of the preprint) the author takes a small step...

> To increase precision, a static analysis tool must leverage more compiler technology. By building an abstract syntax tree (AST) from source code, such a tool could take into account the basic semantics of the program being evaluated.

It seems to me that tools such as FindBugs, which examines bytecode rather than source, offer some distinct advantages.

1. Source code availability is not required.
2. It's not necessary to re-invent a compiler front-end to go beyond simple source string matching (an issue raised in the paper).
3. Working against bytecode is at least a small step from syntax toward (hardware independent!) semantics, assuming one expects consistent code generation/optimization by the compiler.
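To make the third point concrete, here is a deliberately naive sketch of a bytecode-level "rule": scanning the raw bytes of a compiled class for the UTF-8 name of an API of interest in its constant pool. Real tools like FindBugs parse the class-file format properly rather than string-matching raw bytes, so treat this purely as an illustration that the compiled artifact alone (no source) is enough to match against; the class and method names are mine, not from any real tool.

```java
import java.io.*;

public class ByteCodeScan {
    // Returns true if the class-file bytes contain the given name. Method and
    // field names invoked by a class appear as UTF-8 entries in its constant
    // pool, so a plain byte search is enough for this toy rule.
    static boolean mentions(byte[] classBytes, String name) {
        byte[] needle = name.getBytes();
        outer:
        for (int i = 0; i <= classBytes.length - needle.length; i++) {
            for (int j = 0; j < needle.length; j++)
                if (classBytes[i + j] != needle[j]) continue outer;
            return true;
        }
        return false;
    }

    // Loads this class's own compiled bytes: no source code involved.
    static byte[] ownClassBytes() throws IOException {
        try (InputStream in = ByteCodeScan.class.getResourceAsStream("ByteCodeScan.class");
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            in.transferTo(out);
            return out.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        // A bytecode-level check: does this compiled class reference the name at all?
        System.out.println(mentions(ownClassBytes(), "transferTo"));
    }
}
```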

I'm wondering if there have been similar efforts in earlier languages (e.g. Smalltalk, UCSD Pascal, or even SNOBOL) with language-specific, VM-oriented implementations.

Compiled code analysis is often simpler, and works with third-party code. Source code analysis can be made to work in IDEs interactively, which makes it far more likely to actually get used. Both have their place.

The FindBugs Eclipse plug-in logs its findings to the standard "problems" tab (which is where Eclipse compiler errors and warnings are logged). It can also be configured (by checking a box in the preference panel) to run automatically whenever a Java source file is modified.

So, in a nutshell, it's essentially as interactive as Eclipse itself.

### non java jvm languages?

so in theory it can be used with e.g. Scala, Groovy, Jython?

### yes

Yes, it works with other languages. From what I've heard from people using it with Scala, it generates a lot of useless warnings and an occasional gem.

### "Modern" tools?

Is it me or are they missing a whole host of security coverage tools? Things such as, say, type-based analysis, abstract interpretation, etc.?

### Safety vs. Security

The article discusses 'security' in terms of tracking down 'unsafe' operations. And while safety, in particular the prevention of undefined operations, is a prerequisite for security, it is hardly a provider of security.

I'd really love to see more languages integrate capability security models - ideally something stronger than E's object capability model (which cannot resist eavesdropping and is therefore insufficient for secure mobile code in an open system).

### What do you mean by

What do you mean by eavesdropping?

### Eavesdroppers would include

Eavesdroppers would include debuggers and otherwise compromised language runtimes.

The E language security model, which is based wholly upon hiding and encapsulating object references (and making them practically unguessable for distributed operations), assumes that there is no omniscient and untrustworthy observer, such as a debugger or sniffer, watching the messages being passed between objects. I.e. it assumes that every language runtime is playing by 'the rules' of encapsulation in order to achieve its security.

This assumption fails in an open, distributed system where people can provide compromised runtimes. This means that E cannot be used for secure, open, and distributed or mobile programming unless one explicitly keeps track of 'trusted' systems and embeds gatekeepers for those systems. The extra overhead of tracking these trustworthy systems and carefully firewalling the others is one of those problems that would pop up again and again independently of the domain problems one is attempting to solve.

What I'd like (for some of the user stories I work with) is an approach to security that operates on an open, distributed network with a small fraction of untrustworthy processors (whose identities are unknown). To do this, one must minimize the benefits of compromising the security on untrustworthy systems. This means that compromising security can't be as easy as copying a few identifiers or an SPKI certificate.

I've worked on a few approaches that promise to accomplish my goals. These largely involve placing complex and formal extents and limitations on PKI certificate based capabilities (i.e. more complex than mere expiration dates). Designed properly, the limited extent of these certificates can ensure that 'stealing' these certificates (by copying them, for example) provides very little benefit for the thief. I've also unified capabilities and access control lists by allowing a certificate to state who may use it (in terms of the public key), denying signature authority to forward the thing, and stating revocation authorities so that a given certificate can be revoked.
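The extents described above can be sketched as a data structure plus a validity check. This is strictly my hypothetical reading of the design (field names, the string encoding of keys, and the revocation check are all illustrative assumptions, not any real SPKI profile):

```java
import java.time.Instant;
import java.util.Set;

public class CapCertDemo {
    // A capability certificate whose "expiration field" has been extended with
    // formalized extents: holder binding, delegation blocking, revocation authorities.
    record CapCert(String authorization,   // the signed permission, e.g. "update:sharedState"
                   String holderKey,       // public key of the only principal who may use it
                   Instant expires,        // ordinary expiration
                   boolean delegable,      // false = no signature authority to forward it
                   Set<String> revokers) { // authorities who may revoke this certificate

        // A presentation is valid only from the bound holder, before expiry,
        // and only if no listed revocation authority has revoked it. Copying
        // the certificate ("stealing" it) gains a thief nothing: the check
        // fails for any presenter other than the bound holder.
        boolean usableBy(String presenterKey, Instant now, Set<String> revokedBy) {
            boolean revoked = revokers.stream().anyMatch(revokedBy::contains);
            return holderKey.equals(presenterKey) && now.isBefore(expires) && !revoked;
        }
    }

    public static void main(String[] args) {
        CapCert cert = new CapCert("update:sharedState", "alice-pk",
                                   Instant.now().plusSeconds(3600), false, Set.of("issuer-pk"));
        System.out.println(cert.usableBy("alice-pk", Instant.now(), Set.of()));   // bound holder
        System.out.println(cert.usableBy("mallory-pk", Instant.now(), Set.of())); // a stolen copy
    }
}
```

In a real system the presenter's key would of course be established by a signature challenge rather than string comparison; the point is only that the extent checks are mechanical once formalized.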

My goal is to allow some optimizations such as using abstract analysis or partial evaluation to automatically verify that a given section of code would, in fact, be secure and that there is no need to check at runtime the security of much code running within the local machine. Other optimizations would include lazy PKI signatures and caching of certificates or recent security verifications.

But these approaches are untested and aren't mature enough to be very paper worthy. I'll consider writing about them after I have running examples.

In any case, I want something stronger than E's object capabilities, and I'm willing to look around or work to make it happen.

### it assumes that every

> it assumes that every language runtime is playing by 'the rules' of encapsulation in order to achieve its security.

I don't think this is the case. E makes no assumptions about clients. Its security is based purely on the Principle of Least Authority (POLA). Remote clients may certainly be malicious, and that's handled by doling out a minimal set of privileges so that any misuse is contained. Perhaps a concrete example illustrating your concerns would be useful here.
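POLA via object capabilities can be sketched in plain Java terms: instead of handing a client the full object, hand it a narrow facet, so that holding the reference *is* the (minimal) authority. The names here (`Store`, `Appender`) are illustrative, not from E:

```java
import java.util.ArrayList;
import java.util.List;

interface Appender { void append(String entry); }

class Store {
    private final List<String> entries = new ArrayList<>();
    void append(String e) { entries.add(e); }
    List<String> readAll() { return List.copyOf(entries); }
    // The facet is itself a capability: holding it conveys append-only authority,
    // with no path back to read or delete.
    Appender appendOnlyFacet() { return this::append; }
}

public class Pola {
    public static void main(String[] args) {
        Store store = new Store();
        Appender facet = store.appendOnlyFacet();
        facet.append("client entry");        // allowed: within the granted authority
        // A malicious holder of 'facet' can at worst append; misuse is contained.
        System.out.println(store.readAll());
    }
}
```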

> I've also unified capabilities and access control lists by allowing a certificate to state who may use it (in terms of the public key), denying signature authority to forward the thing, and stating revocation authorities so that a given certificate can be revoked.

I get a chill every time someone says they've unified ACLs and capabilities, because invariably, it means the worst of both worlds. ;-)

Seriously though, restricting delegation is almost *never* the right choice. If you try to prevent delegation, then you are preventing systems from being structured by POLA, which means any compromises are inevitably going to leak more valuable authorities than they otherwise would.

Consider your system as a language: restricting delegation amounts to preventing a function G from passing a reference it has to another function H. By trying to pass this parameter, G is trying to fulfill its function, but now G can't do its job. Can you imagine writing a program in a language in which the majority of parameters couldn't be passed to sub-functions? This is almost never the right thing to do, and for the rare cases when delegation needs this control, there are other ways. Permitting delegation is the right default.

Alan Karp has been doing lots of work using SPKI and SAML, using certs as authorizations rather than for identity as you imply you're doing; authorize, don't authenticate! It's definitely worth your time to read his stuff. His e-speak design is also worth looking into. Fred Spiessens' work on SCOLL and SCOLLAR is also worth reading, if you want a more formal understanding of the properties of object capability systems.

> In any case, I want something stronger than E's object capabilities, and I'm willing to look around or work to make it happen.

I'll go out on a bit of a limb here and say: there's nothing stronger than E's capabilities when dealing with potentially malicious clients, there are just designs with different properties. They've given this a lot of thought, believe me. :-)

So the real question is: what properties are you after that E does not provide?

### Clients...?

> I don't think this is the case. E makes no assumptions about clients. Remote clients may certainly be malicious, and that's handled by doling out a minimal set of privileges so that any misuse is contained. Perhaps a concrete example illustrating your concerns would be useful here.

Did I forget to mention the 'problem' of being forced to introduce games of playing gatekeeper and making hard distinctions between server and 'client'? ("This means that E cannot be used for secure, open, and distributed or mobile programming unless one explicitly keeps track of 'trusted' systems and embeds gatekeepers for those systems. The extra overhead of [explicitly] tracking these trustworthy systems and carefully firewalling the others is one of those problems that would pop up again and again independently of the domain problems one is attempting to solve.") I want open distributed programming systems - e.g. services that can, based on some selected triggers, execute code specified by other users under an authority specified by the user.

> I get a chill every time someone says they've unified ACLs and capabilities, because invariably, it means the worst of both worlds. ;-)

I've seen and cringed at many bastardized attempts myself. This unification is considerably easier to swallow: there are no actual ACLs; instead, every 'feature' that ACLs ever provided simply becomes part of the capabilities via extension of the expiration field (blocking the ability to "forward" authority, and limiting use of a certificate to a particular user).

There are reasonable arguments about how one can't *really* prevent capabilities from being passed around. I.e. one service could give away his private keys to allow others to spoof him. Or, similarly, one could always accept commands to use one's authority without filtering. A compromised agent can still offer the keys to the kingdom (or at least however much they have access to).

As I see it, ACLs aren't about capability. They're about responsibility, trust, and blame - figuring out where the buck stops. In real life, trust is not transitive. Even if I trust naasking, I might not trust everyone naasking trusts, and so when I give keys I tell naasking he's not allowed to make copies for all his friends.

I work with military tech. I can't think of a single general to whom I'd want to explain: "sir, fractions of your authority were delegated to a bunch of front-line operatives who then, effectively and technically, acted in your name based on their personal interpretations of your orders."

> E's security is based purely on the Principle of Least Authority (POLA).

E's security aims to achieve POLA. It is a goal, not a technical basis for E's security model. To accomplish this, E encapsulates object identifiers.

> Seriously though, restricting delegation is almost *never* the right choice. If you try to prevent delegation, then you are preventing systems from being structured by POLA, which means any compromises are inevitably going to leak more valuable authorities than they otherwise would.

Restriction of delegation should, for most capabilities, be discouraged (and shouldn't be the default). With that much I agree. Starting from "almost *never* the right choice" and making the further conclusion that delegation is "always the *wrong* choice", however, would require a much stronger argument than you provide.

In any case, I'll note that the ACL solution in use here can still be allowed to authorize session or transaction limited capabilities (e.g. delegating authority JUST for a given transaction) even if it doesn't allow signature authority (the ability to duplicate the certificate to allow access to it by another service/agent having its own private key). So POLA isn't really prevented; rather, control over delegation is forced to pass each time through a trusted "gatekeeper".

> Consider your system as a language: restricting delegation amounts to preventing a function G from passing a reference it has to another function H.

The inability to pass references is an artifact of E's object capability model (because it is based on encapsulation of object identifiers). It would be a non-issue in the solution I proposed, which does not rely upon encapsulation.

> Alan Karp has been doing lots of work using SPKI and SAML, by using certs as authorizations [...] it's definitely worth your time to read his stuff.

I did so, prior to producing the above solution. SPKI, without modification, is also subject to capability theft. My own solution is essentially SPKI with an unformalized set of authorizations (capabilities, essentially just signed values) and a formalized set of extents (scoping rules, revocation authorities, expiration, interdependencies (e.g. this capability is granted if and only if these other two are present), limitations, blocking signature authority, etc.).

> there's nothing stronger than E's capabilities when dealing with potentially malicious clients, there are just designs with different properties

In my mind all Turing complete languages are, to some approximation, equal in 'ability'. So for 'strength' or 'power' of a language to have any meaning at all it must be referring to non-functional properties.

> So the real question is: what properties are you after that E does not provide?

I want a capability system that works in an open, distributed programming environment without drawing lines in the sand between "Clients" and "Everyone Else". Indeed, I'd like the ability to support my "ideal" programming environment where the operating system and communication and interaction and debugging is happening in one really, really big language runtime.

Because E relies on encapsulation of object identifiers as the cornerstone for security, such features as reflection, debuggers, and compromised language runtimes will violate its security assumption. This is a property that I don't find acceptable in E.

So, one property that E does not provide would be: resistant to eavesdropping.

### Did I forget to mention the

> Did I forget to mention the 'problem' of being forced to introduce games of playing gatekeeper and making hard distinctions between server and 'client'?

I don't see these hard distinctions that you're seeing. What is a 'server' and what is a 'client' in your model? For the purposes of E, a 'server' is simply the object's vat, and the client is any object holding a reference to that object, whether in the same vat or not. The vat is the unit of failure and persistence. Does your model handle failure and persistence?

If you can more precisely define what "gatekeepers" and "trusted systems" are in the context of these E primitives, or how both E primitives and your primitives map to clients and servers, maybe I'll better understand what you're saying.

> there are no actual ACLs, and instead every 'feature' that ACLs ever provided simply becomes part of the capabilities via extension of the expiration field (blocking the ability to "forward" authority, and limiting use of a certificate to a particular user).

That last caveat sounds like an ACL to me. So what is a "user" in your model? Can a program operate under the authority of multiple users? If so, how easy is it to introduce Confused Deputies?

> There are reasonable arguments about how one can't *really* prevent capabilities from being passed around. I.e. one service could give away his private keys to allow others to spoof him. Or, similarly, one could always accept commands to use one's authority without filtering.

Right, in the capability world the last is called proxying. You can't prevent proxying, but there can be some added value to require proxying to access a particular resource. The pure capability revocation model is built on it, for instance.
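The "pure capability revocation model" mentioned here is often called the caretaker pattern: access is forced through a proxy the grantor controls, so the grantor can cut it off at any time. A minimal sketch (class and interface names are mine):

```java
import java.util.concurrent.atomic.AtomicReference;

interface Resource { String read(); }

public class Caretaker {
    // Holds the real target until revoked; null afterwards.
    final AtomicReference<Resource> target;
    // The only reference ever handed to the grantee: a forwarding proxy.
    final Resource proxy;

    Caretaker(Resource underlying) {
        target = new AtomicReference<>(underlying);
        proxy = () -> {
            Resource r = target.get();
            if (r == null) throw new SecurityException("capability revoked");
            return r.read();   // proxying: the grantee never holds the target itself
        };
    }

    void revoke() { target.set(null); }

    public static void main(String[] args) {
        Caretaker ct = new Caretaker(() -> "secret");
        System.out.println(ct.proxy.read());  // works while unrevoked
        ct.revoke();
        // ct.proxy.read() would now throw SecurityException
    }
}
```

Note this is exactly "added value to require proxying": revocation works only because every access is forced through the proxy.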

As for sharing keys, this is exactly the violation of POLA that I was referring to. It's not a viable way of building systems.

> In real life, trust is not transitive. Even if I trust naasking, I might not trust everyone naasking trusts, and so when I give keys I tell naasking he's not allowed to make copies for all his friends.

True, which is exactly where POLA comes into play. Clearly, if you want to delegate something to naasking you either depend on naasking for some function, or you want to share something with naasking for some purpose. Who are you then to say that naasking can't fulfill his role in whatever way he sees fit? It's poor abstraction to try to place restrictions on naasking other than authorities explicitly granted in a message, assuming naasking is not an entity you created or otherwise control.

Hence, you grant naasking only those authorities he actually requires and thus limit the scope of side-effects naasking can produce.

Even limiting delegation, you can do nothing to prevent proxying, so naasking can easily share his access with his friends without making copies of his authority.

> E's security aims to achieve POLA. It is a goal, not a technical basis for E's security model. To do this, E encapsulates object identifiers.

Yes, E aims for POLA, but it achieves it by combining designation with authorization (which requires memory/reference safety, eliminating ambient authorities, etc.). POLA naturally falls out of writing the program. There is a bit more to it than simple encapsulation.

> The inability to pass references is an artifact of E's object capability model (because it is based on encapsulation of object identifiers). It would be a non-issue in the solution I proposed, which does not rely upon encapsulation.

I don't understand this statement. What is this inability of E to pass references and how is this related to encapsulation?

> With that much I agree. Starting from "almost *never* the right choice" and making the further conclusion that delegation is "always the *wrong* choice", however, would require a much stronger argument than you provide.

Right, which is why I didn't say it. I think you will also be interested in the Horton protocol. Horton is starting to sound like a capability implementation of what you're describing, with responsibility, blame, and gatekeepers handling delegation (which I think maps to what capability folks call "membranes").

> SPKI, without modification, is also subject to capability theft.

What do you consider capability theft? Sorry if I seem to be pedantic, but you're using terms in a way that aren't familiar to me.

> Because E relies on encapsulation of object identifiers as the cornerstone for security, such features as reflection, debuggers, and compromised language runtimes will violate its security assumption. This is a property that I don't find acceptable in E.

But E has a distributed debugger. And there is no reason reflection couldn't also be supported, given certain safety properties are assured.

I also don't see how compromised language runtimes violate its security assumptions. Which runtimes are you referring to, local or remote ones? It's also the case that vulnerable cryptography libraries violate your platform's security assumptions, so I don't think that's a very interesting statement for the local runtime case. For the remote runtime, I'll simply state again that delegating an authority to a remote agent is already crossing a trust boundary, which is why the incentive is to minimize the scope of such authorities.

### What is a 'server' and what

> What is a 'server' and what is a 'client' in your model? [...] Does your model handle failure and persistence?

The capability system I described does not require a distinction between servers and clients. The capability system I described is also only a fractional part of any larger system process/service model, upon which it imposes very few constraints. The language into which I'm integrating it does handle failure and persistence in addition to behavior mobility, network optimizations, and disruption tolerance.

E does not support behavior mobility. Suppose you wished to send a behavior (e.g. a procedure or an object with methods or complex value that can be compiled into these things) to another vat, perhaps to cache messages until you receive a signal to deliver them, or to preprocess them to reduce network overhead, or to perform network dispatching so that the message doesn't need to return to your vat before heading off in the right direction. There are a number of reasons you might want to do this. The problem is that one loses encapsulation of the remote object identifiers. This, ultimately, undermines the E security - unless you can "trust" the other vat.

> If you can more precisely define what "gatekeepers" and "trusted systems" are in the context of these E primitives, or how both E primitives and your primitives map to clients and servers, maybe I'll better understand what you're saying.

By "trusted system" I refer to physical machines and hardware and "vats" that you'd trust to execute behaviors on your behalf. It is best understood in contrast to untrustworthy systems, which might dissect instructions you gave them in order to learn your secrets and use them to undermine your interests. In E, you'd not want to automatically distribute your 'vat' processing across potentially untrusted machines, and you'd also not want to mobilize behaviors/objects into untrusted vats, and so E has a great number of concerns about 'trusted systems'.

By "gatekeeper" I refer to the practice of passing commands through a chokepoint (e.g. a particular object) that is capable of such behaviors as checking well-formedness and deciding where to dispatch the message. In E, the entire basis for security is layers of gatekeepers. Gatekeepers fundamentally rely upon encapsulation as the technique to achieve security: that is, nothing interacts (communicates) with whatever is "behind the gate" without permission from a gatekeeper.

The association between the two is implicit: for reasons of security you cannot pass just any object references to a system that you do not wholly trust to properly interact with whatever walled garden your gatekeeper is protecting. So, instead, you pass references to 'gatekeeper' objects that can then observe, restructure, and properly discard or dispatch these messages.
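The gatekeeper pattern described here can be sketched in a few lines: outsiders hold a reference only to the gatekeeper, which checks well-formedness against its policy and dispatches permitted messages to the object it protects. All names here are illustrative, and a real gatekeeper would of course do far richer validation and routing:

```java
import java.util.Set;

public class Gatekeeper {
    interface Target { String handle(String command); }

    private final Target protectedObject;   // never exposed directly
    private final Set<String> allowed;      // the gatekeeper's dispatch policy

    Gatekeeper(Target protectedObject, Set<String> allowed) {
        this.protectedObject = protectedObject;
        this.allowed = allowed;
    }

    // The only path to the walled garden: observe, filter, then dispatch or discard.
    String submit(String command) {
        if (command == null || !allowed.contains(command))
            return "rejected";              // ill-formed or unauthorized message
        return protectedObject.handle(command);
    }

    public static void main(String[] args) {
        Gatekeeper gate = new Gatekeeper(cmd -> "done: " + cmd, Set.of("status", "ping"));
        System.out.println(gate.submit("ping"));     // dispatched to the protected object
        System.out.println(gate.submit("shutdown")); // discarded by the gatekeeper
    }
}
```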

It is the prevalence of these 'gatekeepers', in addition to the lack of support for behavior mobility, that really produces what I'd consider the 'hard' practiced distinction between 'client' and 'server'. In E, the gatekeeper object becomes a server, and whoever has a reference to it is a (potential) client, and there is no support for clients to embed parts of themselves in the server or vice versa.

I would prefer to avoid such hard distinctions.

> That last caveat sounds like an ACL to me.

Exactly why I said it unifies ACLs and capabilities. But it does avoid the issues of "list" management - one major cause of headaches associated with ACLs is figuring out who can be trusted to update the TCB, or the "list". And then forcing whichever poor sod is shouldered with the burden to get up and do it is even more of a pain.

So this (which essentially allows distributed ACL management) is actually better than ACLs, while still allowing all the "features" of ACLs (though I'm certain you'd dispute the use of the word "feature").

So what is a "user" in your model?

A "user" is either a Public Key, a URI that will return a Public Key, or a verifiable version of the latter with a specified User that is required to have signed the Public Key.

```
data User = K PublicKey | U PublicKeyURI | UT PublicKeyURI User
```

> Can a program operate under the authority of multiple users? If so, how easy is it to introduce Confused Deputies?

A process operates under a set of capabilities, which may come from multiple users. (One use case is that five out of seven authorities are required to perform a particular action.) In the language I'm working on (Awelon), the capabilities are associated implicitly with each continuation as part of the operating context (meaning that if you jump to a continuation, you also end up having whatever authorities were part of that continuation).

Confused Deputy is a problem when programmers (a) lack easy control over which authorities/hats operations are being performed under, and (b) 'default' to operating under agglomerated permissions (so the path of least resistance is also the least secure). These are both resolved in my language, but there are no rules regarding these design decisions.

> As for sharing [private] keys, this is exactly the violation of POLA that I was referring to. It's not a viable way of building systems.

I agree that you shouldn't encourage sharing of private keys. I'm just saying you can't prevent it.

> Clearly, if you want to delegate something to naasking you either depend on naasking for some function, or you want to share something with naasking for some purpose.

E.g. the latter might be a paid-for service.

> Who are you then to say that naasking can't fulfill his role in whatever way he sees fit?

I'm a legal entity involved in a service contract. =)

> The inability to pass references is an artifact of E's object capability model [...]

> I don't understand this statement. What is this inability of E to pass references and how is this related to encapsulation?

I'm afraid the context of this statement is now mostly lost. In essence, you objected to ACL style solutions because they don't allow one to delegate efforts to sub-tasks by passing object references. But it is clear to me that this objection was framed in terms of E's object capability model. Only object capability models would be unable to pass object references if emulating something like an ACL. My own solution is not an object capability model, and your original objection simply didn't apply.

> I think you will also be interested in the Horton protocol. Horton is starting to sound like a capability implementation of what you're describing, with responsibility, blame, and gatekeepers handling delegation (which I think maps to what capability folks call "membranes").

I'll look into it. I've always had an open mind for ideas worth stealing.

> SPKI, without modification, is also subject to capability theft.

> What do you consider capability theft? Sorry if I seem to be pedantic, but you're using terms in a way that aren't familiar to me.

I'm afraid this is my fault; I didn't make the context (what I mean by "open distributed systems" in avoidance of client/server boundaries) clear enough. One thing I meant to imply by such a context is the ability to introduce behaviors on remote machines (since there is no client/server boundary between machines).

If you wish to deliver a behavior to a remote system to perform after certain updates, for example, you may require that this callback execute with certain permissions (e.g. to update shared state). SPKI, because it only requires holding a certificate, would allow stealing these 'permissions' for nefarious uses. The solution I offer would allow more restrictions, such as asserting that the capabilities are usable only for communications with one particular remote process to perform a certain transaction and so on.

One could technically describe these limitations in the capability itself, but without their description being standardized they cannot be enforced automatically.

> But E has a distributed debugger.

I'm curious as to how it handles drilling into other people's vats. I imagine obfuscation of object identifiers as part of a common debugger service would be possible.

In any case, the issue of compromised runtimes (when supporting mobile behavior) remains.

### There are a number of

> There are a number of reasons you might want to do this. The problem is that one loses encapsulation of the remote object identifiers. This, ultimately, undermines the E security - unless you can "trust" the other vat.

Ok, now I'm starting to understand what you're saying. You're talking about mobile code, and the "trusted" runtime property you're referring to is maintaining the encapsulation of any authorities in a migrated object.

> It is the prevalence of these 'gatekeepers', in addition to the lack of support for behavior mobility, that really produces what I'd consider the 'hard' practiced distinction between 'client' and 'server'. In E, the gatekeeper object becomes a server, and whoever has a reference to it is a (potential) client, and there is no support for clients to embed parts of themselves in the server or vice versa.

I believe a more standard terminology for 'gatekeepers' is 'proxy objects'. There are definite advantages to mobile code over pure distributed references, and I would certainly be interested to hear how one can maintain the integrity properties when migrating objects to untrusted runtimes.

> So, instead, you pass references to 'gatekeeper' objects that can then observe, restructure, and properly discard or dispatch these messages.

In your mind, are these gatekeeper objects built and maintained by the runtime, or built and maintained by user code?

A "user" is either a Public Key, a URI that will return a Public Key, or a verifiable version of the latter with a specified User that is required to have signed the Public Key.

So a "user" is basically an unforgeable principal identifier, in one of the forms you listed.

> Confused Deputy is a problem when programmers (a) lack easy control over which authorities/hats operations are being performed under, and (b) 'default' to operating under agglomerated permissions (so the path of least resistance is also the least secure). These are both resolved in my language, but there are no rules regarding these design decisions.

Confused Deputies can easily arise in the presence of rights amplification for capability systems. As for ACLs, I don't have a clear solution for Confused Deputies (although many purported solutions suffer from TOCTTOU vulnerabilities).

The combination of designation with authorization makes it much easier to avoid Confused Deputies, but your description seems to imply that authorizations are separate from designation, which would further imply that you are increasing the danger of introducing Confused Deputies (and TOCTTOU vulnerabilities).

> I agree that you shouldn't encourage sharing of private keys. I'm just saying you can't prevent it.

Certainly, just like you can't prevent proxying in ACL systems, or delegation over authorized channels in capability systems. One should hope that the design is sufficiently flexible that sharing keys is not needed however.

> I'm a legal entity involved in a service contract. =)

Which implies a degree of validation of naasking's internals which completely eliminates the need for delegation control. So where's the problem again? :-)

> Only object capability models would be unable to pass object references if emulating something like an ACL. My own solution is not an object capability model, and your original objection simply didn't apply.

I'm trying to understand the operational semantics from the viewpoint of a calculus of sorts, either an object or lambda calculus. Your solution must be expressible in one of these given universality, so how do the ACLs affect parameter passing?

I'm thinking here of systems where parameter-passing is permitted, but which controls which clients may access a passed parameter. For instance, consider a substructural type system, where the ability to pass a reference is uncontrolled, but the ability to access that resource requires the presence of a linearly tracked capability. Thus, we are assured that only a singly-authorized entity with the linear capability can operate on the reference, no matter how many intermediate clients it passes through.

Pure capability systems support an operation called "sealing" to similar effect (only the client with the unsealer can extract the object).
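For illustration, a minimal sealer/unsealer pair in Python — a sketch of the pattern, not E's actual implementation (E uses its own brand mechanism; here a private table shared by a closure pair stands in for it):

```python
def make_brand():
    """Create a sealer/unsealer pair. Only a holder of the unsealer
    can recover a sealed object; everyone else sees an opaque box."""
    boxes = {}          # private table shared only by this closure pair

    def seal(obj):
        box = object()  # fresh, unforgeable token
        boxes[id(box)] = (box, obj)
        return box

    def unseal(box):
        stored_box, obj = boxes[id(box)]   # KeyError for foreign boxes
        if stored_box is not box:
            raise KeyError("not sealed by this brand")
        return obj

    return seal, unseal


seal, unseal = make_brand()
box = seal("payload")
# The box can be passed through untrusted intermediaries;
# only this brand's unseal() recovers the payload.
assert unseal(box) == "payload"
```

A box sealed by one brand cannot be unsealed by another, which is what lets a reference travel through untrusted hands without conferring authority.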

One thing I meant to imply by such a context is the ability to introduce behaviors on remote machines (since there is no client/server boundary between machines).

Yes, once you mentioned mobile code many of your statements made more sense. I'm not really sure that E has the problems you state however. As long as your code acknowledges that a mobile object migrating outside of your trust boundary can no longer be relied upon, I'm not seeing the difference (or as long as proper use of sealing sufficiently protects the references). I would probably have to better understand how the properties of your language mitigate the dangers of potentially malicious runtimes in the scenarios you have in mind.

I fear that speaking in generalizations is ultimately insufficient, and some concrete examples will need to be discussed when you feel your language is ready. :-)

### Re: gatekeepers

In your mind, are these gatekeeper objects built and maintained by the runtime, or built and maintained by user code?

Gatekeeper objects are normal objects, provided by users. It is only their application (a usage called the "gatekeeper pattern" where I learned my nomenclature) that earns them the name. The use of "proxy object", as I learned it, was rather specific to an indirection between two services (possibly with some filtering, but usually just to support non-local objects).

So a "user" is basically an unforgeable principal identifier, in one of the forms you listed.

Well, what I really listed was the identifier for a user. The user itself would be whatever agent holds the private key.

> Confused Deputies can easily arise in the presence of rights amplification for capability systems. [...] Your description seems to imply that authorizations are separate from designation, which would further imply that you are increasing the danger of introducing Confused Deputies (and TOCTTOU vulnerabilities).

My system hasn't been run through enough full-size experiments to know whether rights amplification is likely to be a problem. However, since the set of rights held by a piece of code is readily available (literally a pile of certificates sitting in a continuation variable), I expect it would become obvious fairly quickly. I don't attempt to prevent such accumulation, but I do aim to discourage it via other programming disciplines (in particular, via the design of the portion of the language dedicated to manipulating active capabilities).

Since what I propose gives certificates a strict subset of SPKI capabilities, one could probably look there to see how much of a problem the Confused Deputy becomes.

As for TOCTTOU vulnerabilities: given that I aim to support offline capabilities and disruption tolerance, it is physically impossible to avoid asynchronous behavior. One can specify revocation authorities for a certificate that will be honored by anyone who is prepared to honor that certificate.

Security isn't about providing absolute protection, because that is impossible; it's about raising the bar on effort-to-violate (robustness) and enabling effective recovery so that a violation doesn't become a festering sore like a rootkit (resilience). TOCTTOU errors are acceptable when all they mean is that a valid and authorized certificate gets used a while longer after being revoked. When certificates need to be revoked more quickly than that, the design of the communications will be adjusted to cover the problem.

> I'm a legal entity involved in a service contract. =)
>
> Which implies a degree of validation of naasking's internals that completely eliminates the need for delegation control. So where's the problem again? :-)

Hah! Tell that to Microsoft. =)

> I'm trying to understand the operational semantics from the viewpoint of a calculus of sorts, either an object or lambda calculus. Your solution must be expressible in one of these given universality, so how do the ACLs affect parameter passing?

ACLs don't affect parameter passing, except that they require proof that the parameters they received came from a particular user (one of those on their "list"). This could be achieved by signature. With capabilities, the concept of "list" would be extended to include "and anyone who can prove that so-and-so said they're okay is also okay"... e.g. "Anyone for whom the DOD has assigned need-to-know capabilities (MILTAC SECRET) AND (GEOLOC CONUS) is on the Access List for this piece of data."
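A toy sketch of the signature-based proof of origin (all names hypothetical; HMAC over shared keys stands in for real PKI signatures, purely to keep the example self-contained):

```python
import hmac
import hashlib

# Hypothetical ACL-guarded service: parameters must arrive with a
# signature proving they came from a user on the list.
USER_KEYS = {"alice": b"alice-secret"}   # shared keys (stand-in for PKI)
ACL = {"alice"}

def sign(user, payload: bytes) -> bytes:
    return hmac.new(USER_KEYS[user], payload, hashlib.sha256).digest()

def guarded_call(user, payload: bytes, signature: bytes):
    if user not in ACL:
        raise PermissionError("user not on access list")
    if not hmac.compare_digest(sign(user, payload), signature):
        raise PermissionError("signature does not prove origin")
    return f"accepted {payload!r} from {user}"

sig = sign("alice", b"request")
print(guarded_call("alice", b"request", sig))
```

The parameter itself passes through unchanged; the ACL only adds the verification step at the receiving end, which is the claim made above.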

I've not developed a calculus for this, though I'm sure one is possible.

> As long as your code acknowledges that a mobile object migrating outside of your trust boundary can no longer be relied upon, I'm not seeing the difference (or as long as proper use of sealing sufficiently protects the references).

Protecting the references won't help after they are 'unsealed' by someone outside your trust boundary. The problem is a consequence of the fact that holding the references implies capability. Attempting to get your code to "acknowledge" that mobile objects have migrated outside your trust boundary would require considerable user effort and just as much indirection as not using mobile code. E.g. a piece of mobile code intended to send messages to third-party services to which you have access (thus removing the indirection cost) couldn't "be relied upon", which means you'd need to send everything to yourself first.

> I fear that speaking in generalizations is ultimately insufficient, and some concrete examples will need to be discussed when you feel your language is ready. :-)

I agree.

### Protecting the references

> Protecting the references won't help after they are 'unsealed' by someone outside your trust boundary.

Trivially solved: don't leak the unsealer outside your trust boundary! ;-)

Do you purport to somehow prevent references from leaking outside of your trust boundaries, or simply that your system allows one to more easily visualize/draw the trust boundary via the familiar notions of "users" and ACLs?

> Attempting to get your code to "acknowledge" that mobile objects have migrated outside your trust boundary would require considerable user effort and just as much indirection as not using mobile code. E.g. a piece of mobile code intended to send messages to third-party services to which you have access (thus removing the indirection cost) couldn't "be relied upon", which means you'd need to send everything to yourself first.

As dynamic typing enthusiasts are fond of saying, code is data, so there is no fundamental limitation to E that says code cannot migrate, nor any requirement that mobile objects be encapsulated at the destination.

If another host will be performing some computation on your behalf, then the authorities required to carry out that computation will need to be transmitted regardless. Embedding these authorities in a mobile program or simply transmitting them in a message does not seem to alter the fundamental properties, so your architecture must provide some sort of additional, implicit validation of the remote host that must be made explicit in an object-capability system. I'm just having trouble understanding what form this validation takes.

What you've said about this so far, is that such mobile programs cannot be created because one cannot control what happens to the capabilities embedded within the program once the program arrives at the other host. This sounds like the same argument most people make regarding capabilities, re: leaking.

Consider this design which exposes no capabilities to untrusted hosts: the mobile program embeds only sealed capabilities, and the original host holds the only unsealer. The mobile program operates on promises which queue all message sends. The resulting message sequence is sent back to the original host, where they are applied to the unsealed objects. So part of the program executes remotely, and part locally from the viewpoint of the trusted host. The unsealer is then thrown away so the embedded capabilities are now useless.
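A minimal Python sketch of this promise-queuing design (all names hypothetical; a real system would seal the capabilities cryptographically, whereas here an ordinary lookup table on the trusted host stands in for the unsealer):

```python
class Promise:
    """Remote-side stand-in for a sealed capability: message sends
    are queued, not executed, so no authority is exercised remotely."""

    def __init__(self, target_id, log):
        self._target_id = target_id
        self._log = log

    def send(self, method, *args):
        self._log.append((self._target_id, method, args))
        return None  # results are not available on the remote host


# -- On the untrusted host: the mobile program runs against promises.
log = []
printer = Promise("printer", log)
printer.send("write", "hello")
printer.send("write", "world")

# -- Back on the trusted host: "unseal" the real objects and replay
#    the queued message sequence against them.
class RealPrinter:
    def __init__(self):
        self.lines = []

    def write(self, line):
        self.lines.append(line)


objects = {"printer": RealPrinter()}      # the unsealed capabilities
for target_id, method, args in log:
    getattr(objects[target_id], method)(*args)

assert objects["printer"].lines == ["hello", "world"]
```

As the description above notes, part of the program executes remotely (building the log) and part locally (replaying it); the untrusted host never touches a live capability.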

This lines up with your last statement ("a piece of mobile code intended to send messages to third-party services to which you have access, thus removing the indirection cost, couldn't 'be relied upon', which means you'd need to send everything first to yourself"), but I'm not clear on how the mobile code can be relied upon in your architecture, nor precisely what "relied upon" actually means in this context, i.e. relied upon not to leak authorities, to execute correctly/as specified, etc.

### Trivially solved: don't leak

> Trivially solved: don't leak the unsealer outside your trust boundary! ;-)

Trivially solved: just don't support mobile code!

Anyhow, one can attempt to track "trust boundaries" but it is rather difficult to do.

> Do you purport to somehow prevent references from leaking outside of your trust boundaries, or simply that your system allows one to more easily visualize/draw the trust boundary via the familiar notions of "users" and ACLs?

Neither. My design is to "let them leak" and formally limit the damage. The system does not use references as capabilities because those would make a leak of references correspond to a leak of authorization.

Instead, it uses tickets as capabilities, in a manner independent of the process model. Among the nice features of PKI-enabled tickets: they may contain a description of rules for application, and this description cannot be deleted without invalidating the signed hash on the ticket.

To limit the extent of 'leak' damage, I use a more complex system of 'rules' on these tickets than is normally the case (the normal case being that tickets have expiration dates, perhaps user names, maybe an identifier and an implicit revocation authority, but not much else).

Though it is still conjecture, based on point-case analysis it seems that it doesn't take very many rules to reduce the vulnerability window by arbitrary degrees. Even better, there is no 'incentive' for a compromised runtime not to enforce these rules, because the only system harmed by not enforcing them is the compromised runtime itself. It also turns out these rules can allow tickets to interact (e.g. this ticket is valid so long as you have two other tickets with capabilities matching pattern1 and pattern2, respectively).
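A toy sketch of rule-bearing tickets, including one rule that depends on another held ticket (all names hypothetical; in the real design these rules would travel under the ticket's signed hash rather than as live Python functions):

```python
import time

# A "ticket" is a capability with attached rules. A verifier honors
# the ticket only if every rule passes against the holder's context.
def not_expired(ticket, held):
    return time.time() < ticket["expires"]

def requires(pattern):
    """Rule: valid only alongside another held ticket whose grant
    matches `pattern` -- this is how tickets interact."""
    def rule(ticket, held):
        return any(pattern in t["grants"] for t in held)
    return rule

def valid(ticket, held_tickets):
    return all(rule(ticket, held_tickets) for rule in ticket["rules"])


clearance = {"grants": "MILTAC-SECRET",
             "expires": time.time() + 3600,
             "rules": [not_expired]}

data_access = {"grants": "read-data",
               "expires": time.time() + 3600,
               "rules": [not_expired, requires("MILTAC-SECRET")]}

assert valid(data_access, [clearance])   # companion ticket present
assert not valid(data_access, [])        # missing companion ticket
```

A leaked `data_access` ticket is useless on its own, which illustrates how a few rules can shrink the damage from a leak without any cooperation from the leaking party.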

There is a computational cost to these bulkier, more complex certificates, though. So I'm aiming some optimizations at the problem (such as static analysis that can prove certain tickets will be valid at relevant points in the code).

> As dynamic typing enthusiasts are fond of saying, code is data, so there is no fundamental limitation to E that says code cannot migrate, nor any requirement that mobile objects be encapsulated at the destination.

Indeed, and there is similarly no fundamental limitation to E that prevents you from undermining the language's capability security. Whether user code or a standard language library is performing the task of serializing behavior information isn't very relevant: so long as object references are contained in that behavior information, designation-based security is undermined.

> If another host will be performing some computation on your behalf, then the authorities required to carry out that computation will need to be transmitted regardless.

Yep, indeed! And that's why, if you want hosts to perform computations on your behalf, you need ways to limit how the authorities you transmit are used.

> Consider this design which exposes no capabilities to untrusted hosts: the mobile program embeds only sealed capabilities, and the original host holds the only unsealer [...]

This wouldn't support a number of desirable use cases, especially those that involve receiving replies, filtering messages, transforming messages, etc. on the host, with the intent of avoiding unnecessary network traffic.

### Liked everything but the over extended limb

> I'll go out on a bit of a limb here and say: there's nothing stronger
> than E's capabilities when dealing with potentially malicious
> clients, there are just designs with different properties.

Hi Sandro, thanks for the great explanation. With the exception of the above, I agree with all of it. But I'm not willing to go that far out on that limb. Modern cryptography in particular has shown the possibility of all sorts of interesting security properties that are not directly supported by capabilities or anything in E's security architecture.

A great example is Chaumian blinding. Of course, one could implement this in E, as one can in any other language, but that's just universality.

### Do you happen to have a link

Do you happen to have a link or paper ref for Chaumian blinding? Google doesn't produce any in-depth explanations. A post you made was the only informative thing I could find, and I'm still working through it.

I would think "designs with different properties" would imply that while any other system could certainly easily enforce things capabilities could not, it would be at the expense of things that capabilities can easily do. I look forward to seeing if that's the case!