Bell-LaPadula and type safety

Can someone confirm, refute, comment on, or point to the literature on my (somewhat inchoate) intuition that the access-control rules of the Bell-LaPadula model (specifically that a subject cannot read from objects of higher privilege, the "simple security property", and that a subject cannot write to objects of lower privilege, the "*-property") are just special cases of the safety of covariant results and contravariant arguments respectively?
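The intuition can be sketched concretely. In a toy Haskell fragment (my own illustration, not an established encoding; `Low`, `High`, and `up` are hypothetical names), treat the permitted flow "Low may become High" as an explicit coercion, and observe that result positions inherit it covariantly while argument positions inherit it contravariantly, mirroring the simple security property and the *-property respectively:

```haskell
newtype Low  = Low  Int
newtype High = High Int

-- The one permitted flow: public data may always be treated as secret.
up :: Low -> High
up (Low n) = High n

-- Covariant result: a producer of Low serves wherever a producer of High
-- is expected -- a High-cleared subject may read a Low object (no read up).
covResult :: (a -> Low) -> (a -> High)
covResult f = up . f

-- Contravariant argument: a consumer of High serves wherever a consumer of
-- Low is expected -- a High sink may be written by a Low subject (no write down).
contraArg :: (High -> b) -> (Low -> b)
contraArg g = g . up

main :: IO ()
main = do
  let readLow :: () -> Low
      readLow () = Low 42
      sinkHigh :: High -> Int
      sinkHigh (High n) = n
  let High n = covResult readLow ()
  print n                             -- 42
  print (contraArg sinkHigh (Low 7))  -- 7
```

The forbidden directions (a function `a -> High` used as `a -> Low`, or `Low -> b` used as `High -> b`) would require a coercion `High -> Low`, which is deliberately absent.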


JIF

For a relatively recent and popular starting point related to this view of information flow in a language, in case you haven't already, check out JIF (http://www.cs.cornell.edu/jif/) from Andrew Myers's group. You should be able to chase citations from there.

And, in ML-land:

There also exists FlowCaml.

Thanks, but ...

Both of these seem to be strongly-typed languages with mandatory access control bolted on. What I'm interested in is the perception that Bell-LaPadula safety just is type safety, and needs no extra mechanism, simply a notion of secure repositories as typed containers.
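The "secure repositories as typed containers" idea might be sketched with phantom types (a toy of my own, not from any of the cited systems; a real system would restrict `unlabel` at `High`, which is kept here only so the demo can print):

```haskell
data Low
data High

-- A container whose phantom parameter l is the security level.
newtype Labeled l a = Labeled a

unlabel :: Labeled l a -> a
unlabel (Labeled x) = x

-- Upward relabeling is always safe; the reverse function does not exist,
-- so declassification is ruled out by the type system with no extra mechanism.
upgrade :: Labeled Low a -> Labeled High a
upgrade (Labeled x) = Labeled x

-- Computation on secrets stays at High.
addHigh :: Labeled High Int -> Labeled High Int -> Labeled High Int
addHigh (Labeled x) (Labeled y) = Labeled (x + y)

main :: IO ()
main = do
  let pub = Labeled 3 :: Labeled Low Int
      sec = Labeled 4 :: Labeled High Int
  print (unlabel (addHigh (upgrade pub) sec))  -- 7
```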

I'm missing something

The above work does form a type system, and we see rules for functions.

You seem to be asking whether we can reuse the type system of some preferred language (Haskell, Java, a typed lazy lambda calculus, etc.), and how far we would have to go to build the equivalence. We typically define information leakage to occur at conditionals; you'd have to encode a new 'If' in Java to handle them in a way checkable by javac. At that point, the question really is an academic one, and I want to think about it :) More as an SE point than a PL point about what we're modeling: unless we're in the (I believe counter-productive) world of language-based object capabilities, typed orthogonality of labels vs. data is nice.
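As a rough sketch of what such an 'If' must do (my own toy, tracking labels at runtime; a static rule would reject rather than tag, but the label join is the same): the result of a conditional must carry at least the guard's label, otherwise the branch taken leaks the guard.

```haskell
data Label = L | H deriving (Eq, Ord, Show)
data Tagged a = Tagged Label a deriving (Eq, Show)

-- least upper bound on the two-point lattice L <= H
lub :: Label -> Label -> Label
lub = max

-- A label-aware conditional: the result is tainted by the guard's label.
lif :: Tagged Bool -> Tagged a -> Tagged a -> Tagged a
lif (Tagged lg g) (Tagged lt t) (Tagged le e)
  | g         = Tagged (lub lg lt) t
  | otherwise = Tagged (lub lg le) e

main :: IO ()
main = do
  let secretGuard = Tagged H True
      branch1     = Tagged L 1 :: Tagged Int
      branch2     = Tagged L 2 :: Tagged Int
  -- Branching on an H guard yields an H result, even though both
  -- branches are themselves L: the choice itself is secret.
  print (lif secretGuard branch1 branch2)  -- Tagged H 1
```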

You seem to be asking if we

You seem to be asking if we can reuse the type system of some preferred language

He was specifically asking if there was a correspondence between Bell-LaPadula and co/contravariance, or other well-known/existing type systems.

As more of a SE point than a PL point for what we're modeling, unless we're in the (I believe counter-productive) world of language based object capabilities

Counter-productive for what?

typed orthogonality of labels vs data is nice

I think the real questions for security analyses like you suggest are:

1. what are their security properties? (ie. what do they secure against, what patterns of safe co-operation or use do they enable, etc.)

2. how compositional are they?

I think the 2nd case is the killer for Bell-LaPadula. If anything results in coarse-grained authority management, we'll be no better off than we are now (ambient authorities, trojans, viruses, etc.). Non-compositional security models inevitably result in coarse-grained authority management; thus, non-compositional security models are doomed to failure when applied to securing general computation (though specific cases may sometimes benefit from such models).

not so clear

He was specifically asking if there was a correspondence between Bell-LaPadula and co/contravariance, or other well-known/existing type systems.

Co/contravariance are properties of type operators, not type systems, which was confusing. Anyway, the particular operator seemed to be an arrow. So, here's the well-known SLam calculus, which was early to formalize these connections and demonstrates cleanly that familiar arrows still follow subtyping, but for labels: http://www.cs.cornell.edu/andru/cs711/2003fa/reading/heintze98slam.pdf . JIF etc. build upon it -- I couldn't find anything in Andrew Myers's recent work showing it (the closest was for continuations, which generalize functions, but I didn't see it as clearly).

Counter-productive for what?
Counterproductive was too harsh, and you're right, I was being purposefully vague. For example, it does promote a good defensive code style. But consider the typical needs of software engineers. For example, it is important to know what the security policy for a program is. Solely using E/Caja notions means I'm stuck with a points-to analysis to see current behaviour, if I'm lucky enough for it to work, and that still isn't what I want (the policy, not the current implementation; the two may actually oppose each other in places). How about safety? Dynamically or statically, I need to ensure particular properties. It is possible in these systems if you coded them right, but I would rather it be automatically checked or synthesized. Finally, how about recreating work? It's hard building good policies and monitors -- can Mort really do all this correctly, and how hard is it to adapt to new rules in a large system?

I think the 2nd case is the killer for Bell-LaPadula. Anything resulting in coarse-grained authority management, and we'll be no better off than we are now (ambient authorities, trojans, viruses, etc.).
I'm not arguing against language-based approaches (esp. the statically or dynamically typed stuff), and am clearly interested in them, but suspect that there's an elephant in the room and we're at a turning point where, due to web programming and even more so due to rich web apps, these things finally matter. I further agree: fine-grained end-to-end security is essential (I really need to catch up on the language and implementation implications of all the attestation work!). However, separation of policy (constraints, types, whatever you want to call them) from program, and further abstractions that reach the user and average developer, still need a clearer story, and those two groups are fairly large bottlenecks to fine-grained approaches. Bad code will be written, and users will do stupid things, so I want to see more SE-level work, and believe safety gauges like policies and TPMs are important as part of that (redundancy, orthogonality, domain specialization, etc.). Who knows, perhaps machine learning can detect deleterious data-level behavior in apps as the next step of virus protection and this viewpoint will be moot. It's late and this paragraph is rambling, but I've been struggling to get a clear vision on these issues recently.

oops

1. Meant that to be a response to Naasking (I wish we could move posts..)

2. I had also meant to provide a link to a nifty usage of arrows to carve out a sublanguage in haskell for information flow. Control flow like sequencing or conditionals can violate non-interference and thus must be part of the analysis, so we see the contortions we must go through for the embedding: http://www.cis.upenn.edu/~stevez/papers/LZ06a.pdf . This paper is neat in that it checks noninterference for *dynamic* policies (and so is reasonably done at runtime, but for code that has not executed, so this is stronger than what a monitor can give you).
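In the same spirit, a drastically simplified toy (my own, far cruder than the paper's arrow embedding, and with hypothetical names `Flow`, `comp`, `runAt`): composition joins labels, so an observer needs clearance for everything a computation depended on.

```haskell
data Label = L | H deriving (Eq, Ord, Show)

-- A computation tagged with the least clearance needed to observe its result.
data Flow a b = Flow Label (a -> b)

-- Composition joins the labels of both stages, so control flow can never
-- drop below the level of data it passed through.
comp :: Flow a b -> Flow b c -> Flow a c
comp (Flow l1 f) (Flow l2 g) = Flow (max l1 l2) (g . f)

-- Only a sufficiently cleared observer may run the pipeline and see output.
runAt :: Label -> Flow a b -> a -> Maybe b
runAt clearance (Flow l f) x
  | l <= clearance = Just (f x)
  | otherwise      = Nothing

main :: IO ()
main = do
  let publicStep = Flow L (+ 1) :: Flow Int Int
      secretStep = Flow H (* 2) :: Flow Int Int
      pipeline   = publicStep `comp` secretStep
  print (runAt L pipeline 5)  -- Nothing: an L observer isn't cleared
  print (runAt H pipeline 5)  -- Just 12
```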

How about safety?

How about safety? Dynamically or statically, I need to ensure particular properties. It is possible in these systems if you coded them right, but I would rather it be automatically checked or synthesized.

That's why Fred Spiessens built SCOLL and SCOLLAR. His thesis is great.

Finally, how about recreating work? It's hard building good policies and monitors -- can Mort really do all this correctly, and how hard is it to adapt to new rules in a large system?

Hmm, I'm not sure I follow. The benefit of capabilities is that the security properties follow exactly the system's behaviour, which the developer is already reasoning about. The only thing missing is a way to specify higher-level policies (invariants) which the code must then be verified to maintain. So if you mean that Mort isn't skilled enough to perform this verification, I agree, hence the focus on building verification tools like SCOLL/SCOLLAR. Advances in type systems might help here too, as security properties are now also program properties. Perhaps refinement typing might be able to address this one day.

Bad code will be written, and users will do stupid things, so I want to see more SE level work

Because of this, I'm a big fan of representing as much as possible via the type system. Capability security is subsumed by referential transparency, so if we can find a way to decompose coarse-grained monads/effects into finer-grained ones (IO a => DelDir a), then the developer only needs to learn one tool to be able to specify and analyze both security properties and behaviour, and the code is almost self-documenting!
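A minimal sketch of that decomposition, assuming a hypothetical `DelDir` class (the name echoes the `IO a => DelDir a` example above; the `IO` instance just logs instead of deleting, for the demo):

```haskell
-- A fine-grained capability class: the only authority it grants is
-- directory deletion, rather than all of IO.
class Monad m => DelDir m where
  delDir :: FilePath -> m ()

-- For the sketch, the IO instance merely logs what it would do.
instance DelDir IO where
  delDir p = putStrLn ("delete " ++ p)

-- The type advertises exactly the authority this function needs:
-- a caller can tell at a glance it cannot, say, open sockets.
cleanup :: DelDir m => FilePath -> m ()
cleanup = delDir

main :: IO ()
main = cleanup "/tmp/scratch"  -- prints: delete /tmp/scratch
```

A test or audit can then supply a pure `DelDir` instance to observe exactly which deletions the code requests, which is the "self-documenting" property being claimed.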