From abstract interpretation to small-step typing

Patrick Cousot is well known for abstract interpretation, which among other applications offers a general approach to understanding and designing type systems. It thus surprises me that his work seems to have been discussed only cursorily on LtU.

Patrick Cousot and Radhia Cousot, Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints. POPL 1977, 238-252.

A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff) is the rule of signs. The text -1515*17 may be understood to denote computations on the abstract universe {(+), (-), (+-)} where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515*17 ==> -(+)*(+) ==> (-)*(+) ==> (-) proves that -1515*17 is a negative number. Abstract interpretation is concerned with a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515+17 ==> -(+)+(+) ==> (-)+(+) ==> (+-)). Despite its fundamentally incomplete results, abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried out in the absence of certainty about their feasibility, ...).
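
To make the rule-of-signs example concrete, here is a minimal sketch of that abstract domain in OCaml (my own rendering; the constructor names are not from the paper):

    type sign = Pos | Neg | Unknown          (* (+), (-), (+-) *)

    let neg = function Pos -> Neg | Neg -> Pos | Unknown -> Unknown

    (* The rule of signs for multiplication is exact on this domain. *)
    let mul a b = match a, b with
      | Unknown, _ | _, Unknown -> Unknown
      | Pos, Pos | Neg, Neg -> Pos
      | _ -> Neg

    (* Adding a positive and a negative loses all sign information. *)
    let add a b = match a, b with
      | Pos, Pos -> Pos
      | Neg, Neg -> Neg
      | _ -> Unknown

    (* -1515 * 17 abstracts to (-) * (+) = (-): provably negative. *)
    let _ = assert (mul (neg Pos) Pos = Neg)
    (* -1515 + 17 abstracts to (-) + (+) = (+-): the imprecise answer. *)
    let _ = assert (add (neg Pos) Pos = Unknown)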

Patrick Cousot, Types as abstract interpretations. POPL 1997, 316-331.

Starting from a denotational semantics of the eager untyped lambda-calculus with explicit runtime errors, the standard collecting semantics is defined as specifying the strongest program properties. By a first abstraction, a new sound type collecting semantics is derived in compositional fixpoint form. Then by successive (semi-dual) Galois connection based abstractions, type systems and/or type inference algorithms are designed as abstract semantics or abstract interpreters approximating the type collecting semantics. This leads to a hierarchy of type systems, which is part of the lattice of abstract interpretations of the untyped lambda-calculus. This hierarchy includes two new à la Church/Curry polytype systems. Abstractions of this polytype semantics lead to classical Milner/Mycroft and Damas/Milner polymorphic type schemes, Church/Curry monotypes and Hindley principal typing algorithm. This shows that types are abstract interpretations.

I started learning more about abstract interpretation for two reasons. First, abstract interpretation underlies CiaoPP, a program preprocessor mentioned here that uses the same assertion language for static and dynamic checking and optimization. Second, on our way to a logical interpretation of delimited continuations, Oleg and I built a type system using what we call small-step abstract interpretation.

Manuel Hermenegildo, Germán Puebla, and Francisco Bueno, Using global analysis, partial specifications, and an extensible assertion language for program validation and debugging. In The logic programming paradigm: A 25-year perspective, ed. Krzysztof R. Apt, Victor W. Marek, Mirek Truszczynski, and David S. Warren, 161-192, 1999. Berlin: Springer-Verlag.

We present a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors is first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be verified statically are translated into run-time tests. The framework allows the use of assertions to be optional. It also allows using very general properties in assertions, beyond the predefined set understandable by the static analyzer and including properties defined by means of user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis. In practice, this modularity allows statically detecting bugs in user programs even if they do not contain any assertions.

Oleg Kiselyov and Chung-chieh Shan, A substructural type system for delimited continuations. TLCA 2007.

We propose type systems that abstractly interpret small-step rather than big-step operational semantics. We treat an expression or evaluation context as a structure in a linear logic with hypothetical reasoning. Evaluation order is not only regulated by familiar focusing rules in the operational semantics, but also expressed by structural rules in the type system, so the types track control flow more closely. Binding and evaluation contexts are related, but the latter are linear.

We use these ideas to build a type system for delimited continuations. It lets control operators change the answer type or act beyond the nearest dynamically-enclosing delimiter, yet needs no extra fields in judgments and arrow types to record answer types. The typing derivation of a direct-style program desugars it into continuation-passing style.

Type checking is abstract evaluation

Under that general heading I shall now toot our own horn (:

The particular point of our paper is type checking as evaluation with `abstract values'. Whereas dynamic semantics, evaluation, deals with (dynamic) values like 0, 1, etc., static semantics, type-checking, deals with approximations like int. A type system is sound if it correctly approximates the dynamic behavior and predicts its outcome: if the static semantics predicts that a term has the type int, the dynamic evaluation of the term, if it terminates, will yield an integer.

Conventional type-checking is big-step evaluation in the abstract: to find a type of an expression, we fully `evaluate' its immediate subterms to their types. Our paper above describes a different kind of type checking that is small-step evaluation in the abstract: it unzips (pun intended: cf. Huet's zipper) an expression into a context and a redex. The type-checking algorithms in the paper are implemented in Twelf; the complete code is available. In particular, the well-commented file lfix-calc.elf implements, by way of introduction and comparison, small-step type-checking for simply typed lambda calculus.

The benefit of small-step typechecking is that, by `following' the control flow, it is more suitable for effectful computations. A more interesting result comes from the fact that static evaluation, unlike the dynamic one, goes `under lambda'. Thus the small-step static evaluation context is a binding context.

Colon becomes the new turnstile! Since the evaluation context is neither commutative nor associative but is subject to focusing rules, we obtain a framework for expressing various substructural and modal logics.

Abstract evaluation is metadata preprocessing

I've been floating the idea that all type systems are just formal descriptions of metadata on names (strong/manifest typing) or values (weak/latent typing).

On that note, could one use this system to evaluate other data beyond the usual typing information? If so, all sorts of optimizing tricks are now possible, and could be automatically inferred by the compiler on the fly.

Imagine a "loosely" typed language. The compiler treats it as a strictly typed, type-inferred language, transparently to the user. If the compiler finds that a value never goes below 0, it uses the unsigned versions of instructions on it. A value that never has a fractional component can be represented as an integer. In other words, the compiler could automatically lower all types as close to the target machine as possible, without requiring type declarations.
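
To sketch what I have in mind, a rough range analysis could look like the following (a hypothetical OCaml sketch of my own; no existing compiler is implied). If the inferred range of a value never goes below 0, the compiler is free to pick an unsigned representation:

    (* Abstract values are intervals; bounds may be +/- infinity. *)
    type interval = { lo : float; hi : float }

    type exp =
      | Const of float
      | Add of exp * exp
      | Abs of exp

    let rec range = function
      | Const c -> { lo = c; hi = c }
      | Add (a, b) ->
          let ra = range a and rb = range b in
          { lo = ra.lo +. rb.lo; hi = ra.hi +. rb.hi }
      | Abs a ->
          let r = range a in
          (* Sound but deliberately coarse over-approximation. *)
          if r.lo >= 0. then r
          else { lo = 0.; hi = max (abs_float r.lo) (abs_float r.hi) }

    (* Never below zero?  Then an unsigned machine type is safe. *)
    let can_be_unsigned e = (range e).lo >= 0.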

(I've been thinking about writing a Scheme compiler that would follow these principles, but I haven't had the knowledge (or the time) to write it.)

(Edit: Or is that what this discussion is all about? Forgive my ignorance.)

Nice!

This sounds like great work. I look forward to grappling with it.

Re discussion of Cousot's work on LtU, Kevin Millikin mentioned "Types as abstract interpretations" here.

Evaluation is abstract interpretation too

You just pick the most boring lattice ever: that of flat values (ie values with an added bottom) for ground types. And the usual information-theoretic lattice for functions. And now your transfer function is just plain evaluation, and is perfectly accurate. So you can see abstract interpretation as ``zooming out'' from values.
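
For concreteness, a minimal sketch of that flat domain (my constructor names; I also add a top element so that joins of distinct values are defined):

    type 'a flat = Bot | Val of 'a | Top

    let join a b = match a, b with
      | Bot, x | x, Bot -> x
      | Top, _ | _, Top -> Top
      | Val x, Val y -> if x = y then Val x else Top

    (* The transfer function for addition is just addition, lifted strictly:
       the 'abstract' answer is exact whenever both arguments are known. *)
    let add a b = match a, b with
      | Val x, Val y -> Val (x + y)
      | Bot, _ | _, Bot -> Bot
      | _ -> Top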

Looking at small-step semantics, coupled with abstract interpretation, is cool. I am quite sure that that technique can be used for a lot more than what you did with it -- and what you used it for was itself cool.

What I am more interested in is to know what properties can be reflected via abstract interpretation. Types are one, but what else? For some types (like integers), there are a lot of nice abstract domains that allow a lot of information to be tracked. But what about more general values, outside of type, what can be tracked?

But what about more general

But what about more general values, outside of type, what can be tracked?

I may be misunderstanding the question, but I believe there are a lot of applications of abstract interpretation during optimization. Some properties that I believe can be easily tracked using abstract interpretation are:

  • whether a function is known to halt (e.g. if f(x) halts and g(x) halts then f(x) + g(x) halts, because + halts for all inputs)
  • the size of a list
  • memory bounds checking
  • possible value sets
  • whether a function is recursive
  • complexity analysis

Not a random question

I was not asking that question randomly -- and you are not mistaken. All those things can be kept track of. I have a student using an Abstract Interpretation framework (coupled with a lazy constraint-based formulation of some domain equations) which is giving us some interesting results in the context of a computer algebra system. (No publication yet, but we're planning at least a Tech Report in the next month or so.) I was just fishing to see whether there were things I had missed! Hopefully that isn't against LtU rules -- I'll have to reread them.

More Ideas

Good to know I am on the right track then. So as for fishing for ideas, I think this is a good place to do so! I wouldn't expect any rules against it.

I am very interested in your work. I am also working on a similar idea: a term rewriting system for optimization based on constraint programming (like Ciao), but specialized for my purposes (optimizing stack-based intermediate code). I am interested in non-deterministic lazy evaluation, possibly with the aid of heuristics (e.g. priority evaluations, scores assigned to productions). Perhaps we can talk further off-line.

Nonetheless, to return to your original question, I have some other ideas about properties which can be tracked using abstract interpretation:

  • value relations (e.g. x = 1/y => inverse(x, y))
  • whether a function is evaluated 0, 1, or some other constant number of times
  • time dependent relationships (e.g. how close in time that two operations can be executed, or values examined)
  • identification of parallelizable code
  • what operations are executed on an ADT during its lifetime (and their frequency) so that we can decide the best implementation to use (e.g. cons list versus array)
  • whether or not functions can be easily inlined
  • the types used when executing polymorphic code (e.g. can the polymorphism be removed)
  • probability of occurrence of values (if my value is most often 42, then we can optimize for that)
  • computing the best guess for optimistic parallelization of sequential algorithms

Maybe some of these ideas are helpful?

Maybe relevant

I am interested in non-deterministic lazy evaluation, possibly with the aid of heuristics (e.g. priority evaluations, scores assigned to productions)

This makes me think of definitional trees as used in implementing (or reasoning about) Curry.

Stochastic programs

Also related are stochastic/rational programming languages such as IBAL.

Fishing rights

So as for fishing for ideas, I think this is a good place to do so! I wouldn't expect any rules against it.

Quite right.

Just to clarify this point a little, since it sometimes seems to be misunderstood: one important point about the policies is that they "were developed mainly to help new members understand the site, and to help maintain a high quality of discussion." They're not intended to inhibit reasonably informed, civil discussion of PL topics amongst regular members of the LtU community.

There is a policy, 4(a), about avoiding design discussions. However, I think a more important point is in the Purpose section, "As you become part of the community, you naturally have a larger impact on the topics under discussion." This is intended to mean that established members are welcome to post about things that they consider relevant. A problem only arises if people either object to the discussion for some reason, or if the discussion deteriorates in some way.

On the particular subject under discussion, there's a lot to discuss, so have at it! If it drifts too far from the subject of the posted paper, create a forum topic for it.

I have little idea how

I have little idea how abstract interpretation is involved in tracking the properties mentioned. For example, you might record

whether a function is evaluated 0, 1, or some other constant number of times

at runtime, but what are the entities you'd like to abstract from to achieve an approximation of your actual program?

Tracking Control Paths

Let's look at a subset of the stated problem:

Is an instruction at a particular location known to execute precisely one time?

You could answer that by interpreting a program, and labeling instructions as you visit them:

  1. not yet visited (the initial state)
  2. visited once (e.g. single call)
  3. known to be visited more than once (e.g. fixed loop, multiple hard calls)
  4. visited variable times (e.g. variable loop, variable conditional)

So the state transition diagram is:
1. -> 2 | 3 | 4
2. -> 3 | 4
3. -> 3 | 4
4. -> 4

Essentially, to perform this algorithm you execute the program non-deterministically: run-time variables are either set to known constants or left in a non-deterministic state, forcing the interpretation to execute all possible paths.
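
For concreteness, here is how I would read those labels as an abstract domain (a minimal OCaml sketch; the names are mine):

    type visits = NotVisited | Once | ManyFixed | Variable

    (* Transfer function: the instruction is reached once more on the
       current execution path. *)
    let step = function
      | NotVisited -> Once
      | Once | ManyFixed -> ManyFixed
      | Variable -> Variable

    (* Join: two alternative executions are merged; if they disagree on
       the count, it becomes Variable (the top of this little lattice). *)
    let merge a b = if a = b then a else Variable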

I am pretty sure that this qualifies as an abstract interpretation algorithm, but I could always be mistaken.

Non-determinism is not involved

In abstract interpretation, it might be more accurate to say that your program is run non-linearly rather than non-deterministically. In a way, you ``run'' all possible paths in AI, though how that ``running'' is done can be quite arbitrary.

I highly recommend the book Principles of Program Analysis to get into this more deeply. The point is that when your abstract domain is a complete lattice which satisfies the ascending chain condition (or is forced to do so via widening), then running all paths becomes something which always runs in finite time. The cost is that you get approximations rather than values as answers. [It turns out that this works for types exactly because types are approximations, and in well-typed programs you never encounter the 'top' of the domain.]
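
For a concrete picture of widening, here is the textbook interval widening operator (my OCaml rendering; the operator itself is standard). A bound that is still moving after an iteration is pushed straight to infinity, so a chain like [0,0], [0,1], [0,2], ... stabilises after finitely many steps:

    type interval = { lo : float; hi : float }   (* bounds may be +/- infinity *)

    let widen old next =
      { lo = (if next.lo < old.lo then neg_infinity else old.lo);
        hi = (if next.hi > old.hi then infinity else old.hi) }

    (* One widening step inside a growing loop: [0,1] widened with [0,2]
       jumps to [0, +infinity), and further iterations change nothing. *)
    let _ = assert (widen { lo = 0.; hi = 1. } { lo = 0.; hi = 2. }
                    = { lo = 0.; hi = infinity })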

So your rough idea above is correct, but you have the details muddled.

And the point is?

In abstract interpretation, it might be more accurate to say that your program is run non-linearly rather than non-deterministically. In a way, you ``run'' all possible paths in AI, though how that ``running'' is done can be quite arbitrary.

I'm not saying that AI has to be done non-deterministically, but simply that it can be done non-deterministically.

The point is that when your abstract domain is a complete lattice which satisfies the ascending chain condition (or is forced to do so via widening), then running all paths becomes something which always runs in finite time.

AI doesn't always have to halt to be useful. A non-deterministic algorithm can improve the chances of getting useful information even in a non-halting condition.

Examples?

You're speculating about what ought to be possible, rather than what existing abstract interpretation systems do, correct? If so, that would mean that the actual usefulness of the approach remains to be demonstrated.

Some confusion

I think I have identified a source of some confusion. I am using the definition of AI as "a partial execution of a computer program which gains information about its semantics without performing all the calculations."

This is an algorithmic definition of AI, as opposed to the formal mathematical one which I realize others here are apparently more familiar with: "a theory of sound approximation of the semantics of computer programs, based on monotonic functions over ordered sets, especially lattices."

You're speculating about what ought to be possible, rather than what existing abstract interpretation systems do, correct?

No, not really. What I am saying is based on the observation that many static analyses and compiler optimizations are actually a form of abstract interpretation (a point which I believe the OP was trying to get at). These usually yield increasingly helpful partial information while they execute, whether or not they halt. Thus they don't need to converge, and can be executed in a non-deterministic manner.

So by extension I am saying that convergence isn't necessary for AI to be useful, just like other static analyses don't have to halt to yield useful information. This really shouldn't be a contentious statement. It may be very hard to model formally using probabilistic functions and lattices (I wouldn't be able to) but algorithmically it is pretty trivial stuff.

A well-established way to deal with any algorithm that doesn't guarantee halting, but yields useful partial information, is simply to write a nondeterministic multi-threaded version of the algorithm. By nondeterministic I am suggesting that during partial execution you can deal with branches and loops by spawning new threads. This makes it non-deterministic because you can just halt (whether or not all threads converge) based on some arbitrary heuristic, and you can schedule threads arbitrarily.

What I am suggesting shouldn't be a new idea to other researchers, and I doubt it is. A quick search on 'nondeterminism + "abstract interpretation"' yields some apparently relevant papers:

If so, that would mean that the actual usefulness of the approach remains to be demonstrated.

Ad hoc non-deterministic AI algorithms already exist in many compilers; it's just that many programmers aren't really aware of what it is that they are doing in formal terms. So the usefulness is pretty clear, whether the formal models of AI have reached that point or not.

Finally, as an example of a non-deterministic AI algorithm, consider the identification of whether or not a control path executes.

You'll forgive me if I don't comment further on the subject. I have already spent too much time and energy on this discussion.

That helps

Thanks, that greatly clarifies what you were getting at in the comment I responded to. It wasn't the definition of abstract interpretation that was at issue for me — I have no problem with your informal definition[*]. I was just looking for some specifics, and incorrectly assumed that concrete examples might not exist.

Ad hoc non-deterministic AI algorithms already exist in many compilers

Good point.

[[*] Edit: at least as a definition of a kind of AI, if not all kinds.]

Your strategy seems to

Your strategy seems to be deriving a more abstract model of a program by running the program (concrete interpretation) with a prepared sample ("run-time variables are set to known constants"). Then you try to do code coverage by splitting the program flow into different continuations at branch points (or not? what do you mean by running it non-deterministically?) and doing some labeling/recording of information. However, I still fail to see AI here. What is the semantic domain in which your program is running? I also don't understand the need for special preparation of your sample.

Most of these should work

Abstract Interpretation should allow most of those properties to be tracked. You'll need to define things a lot more carefully (e.g. "what functions can be easily inlined" probably means you need to track the property of "sub-expressions only evaluated once"; otherwise you may cause duplication if you inline too aggressively).

At least, they can all work in theory. In practice, some of these might not satisfy the ascending chain condition or even have a computable widening operator (probability of occurrence of values comes to mind).

Type systems as Aspects?

I'm completely new to Aspect Oriented Programming, but when I came across the example in the Kiczales, et al. paper:

For example, mechanical engineers use static, dynamic and thermal models of the same system as part of designing it. The differing models cross-cut each other in that the different properties of a system compose differently.

...I could squint and sort of see that type systems (especially as an abstract interpretation) might be like aspects. The type universe and the value universe are just different ways of looking at the same program. Anyone know of research into this area? Or is this just way out there?