Expressing usage constraints within the language

Let's suppose you have a language feature that only some code should have access to. For instance, Intensional Type Analysis/Reflection (ITA) can break abstraction barriers. Sometimes this sort of abstraction breaking is desirable, as in generic pickling/serialization, printing, and so on, but often it is not, since it violates parametricity theorems.

However, if we expose ITA as a language construct, then there are no restrictions on its use. So what we want is controlled abstraction breaking.

One solution is to expose ITA as a runtime-provided object/module called Inspector. Inspector's signature has no constructor, so there is no way to construct an instance of it. Instead, an instance of Inspector is a member of a standard set of such runtime objects that are provided to "main" and define the "initial state of the world". "main" can then delegate access to Inspector to whatever code it deems should have it.
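To make that concrete, here is a rough OCaml sketch of the capability design; the INSPECTOR and WORLD signatures, the type_of operation, and Toy_world are all made-up names, just to show the shape of the idea:

(* Inspector's signature exposes no constructor, so user code cannot
   create one; only the runtime can. *)
module type INSPECTOR = sig
  type t                            (* abstract: no constructor exposed *)
  val type_of : t -> 'a -> string   (* stand-in for an ITA operation *)
end

(* The standard set of runtime objects handed to main: the "initial
   state of the world". *)
module type WORLD = sig
  module Inspector : INSPECTOR
  val inspector : Inspector.t
end

(* Abstraction-breaking code must be handed the capability explicitly. *)
let generic_print (module W : WORLD) v =
  print_endline (W.Inspector.type_of W.inspector v)

(* main decides which code gets access to Inspector. *)
let main (module W : WORLD) =
  generic_print (module W : WORLD) 42

(* A toy runtime instance, standing in for what a real runtime provides. *)
module Toy_world : WORLD = struct
  module Inspector = struct
    type t = unit
    let type_of () _ = "<runtime type descriptor>"
  end
  let inspector = ()
end

let () = main (module Toy_world : WORLD)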

This is a capability design. However, I'm wondering if there's some way to express this constraint within the language itself, instead of resorting to a design pattern. In other words, keep ITA as a language construct, but somehow be able to express when it is legal to use ITA.

As an example of a possible solution, suppose we have a process language and two process constructors, one of which permits ITA expressions within it, the other of which doesn't:

expr  ::= process(expr) | lexpr | ita(expr)
lexpr ::= confined(lexpr) | ...
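
For concreteness, the same stratification can be written as a pair of mutually defined datatypes (OCaml here, purely for illustration); the point is just that the confined level has no ita form at all:

type expr =
  | Process of expr      (* ordinary process: may spawn processes and use ITA *)
  | Lifted  of lexpr     (* embedding of the confined sublanguage *)
  | Ita     of expr      (* intensional type analysis applied to an expression *)

and lexpr =
  | Confined of lexpr    (* confined process: no Process and no Ita constructor *)
  (* ... the remaining ITA-free forms elided, as in the grammar above ... *)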

So a confined process cannot create an ordinary process or invoke ita. However, the composability of the runtime Inspector object seems greater. Are there any other possibilities?


Exploring the space of language-enforced security

Since nobody's commented on this, I'll elaborate a bit and hopefully clarify. Basically, I'm exploring the options available for language-enforced security constraints for desirable features which can violate encapsulation and security properties (like reflection).

I am very familiar with capability security, that being my default approach, but we all know that anything computable can be expressed as a language. Since I can express the controlled propagation of "Inspector" implicitly via capabilities, I'm wondering what expressing this controlled use of Inspector directly as a language would look like, and what the trade-offs are compared to capabilities.

I'm also interested in type-based solutions, rather than strictly language solutions. There exist type systems for controlling use of resources, such as linear and uniqueness types, but they don't seem applicable in this particular scenario (please correct me if I'm wrong!).
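
For instance, the closest type-based shape I can imagine is indexing computations by whether they may perform ITA, so that confinement becomes a typing constraint rather than a separate process constructor. A rough OCaml sketch, with entirely made-up names:

(* The phantom index on a computation records whether it may perform ITA;
   run_confined only accepts ITA-free computations. *)
module Comp : sig
  type pure                                      (* index: ITA not allowed *)
  type ita                                       (* index: ITA allowed *)
  type ('a, 'perm) t
  val return       : 'a -> ('a, 'perm) t
  val bind         : ('a, 'perm) t -> ('a -> ('b, 'perm) t) -> ('b, 'perm) t
  val inspect      : 'a -> (string, ita) t       (* only available at index ita *)
  val run          : ('a, 'perm) t -> 'a
  val run_confined : ('a, pure) t -> 'a          (* rejects computations that inspect *)
end = struct
  type pure
  type ita
  type ('a, 'perm) t = unit -> 'a
  let return x = fun () -> x
  let bind m f = fun () -> f (m ()) ()
  let inspect _ = fun () -> "<type descriptor>"  (* stand-in for real ITA *)
  let run m = m ()
  let run_confined m = m ()
end

(* Accepted: an ITA-free computation run in a confined context. *)
let () = Printf.printf "%d\n" (Comp.run_confined (Comp.return 42))

(* Whereas something like
     Comp.run_confined (Comp.inspect 42)
   is a type error: (string, Comp.ita) Comp.t does not unify with
   ('a, Comp.pure) Comp.t. *)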

Pointers to anything obvious I may have missed, or links to existing papers, are much appreciated! I am aware of the recent SCOLL and SCOLLAR work on security property checking.

hrm, i'd suggest checking

hrm, i'd suggest checking out the phd work of Geoff Washburn for a look at one perspective on this issue!
http://www.cis.upenn.edu/~geoffw/research/papers/phd-proposal.pdf

Stack walking as Capability Security?

Excellent link, thanks! I'm particularly curious why stack walking is considered capability security. I've been dealing with object capability systems like EROS, E, etc. since about 2000, and the above classification strikes me as quite bizarre. Sure, the security tokens on the stack are unforgeable, but they do not bundle a reference with the authority to access it, nor are they delegable, all of which are critical features of capabilities.

He had some interesting references though, particularly A Systematic Approach to Static Access Control. I'll have to go through it all in more detail again.