Is there a site to discuss some issues on abstract interpretation?

Hello guys,

I am a Ph.D. student working on a subject in theoretical computer science.

Since my current work is closely related to formal methods, especially abstract interpretation, I would like to know whether there is a specific site (forum, newsgroup, etc.) for discussing technical issues in abstract interpretation.

Apparently we don't do that here, if I understand correctly.

Thanks.
Z.


Perhaps there should be a

Perhaps there should be a companion site to LTU where we can discuss programming language implementation.

Abstract interpretation is a

Abstract interpretation is a program analysis technique. It is based on the idea of executing a program abstractly, but the art of implementing a language and the art of building an AI system have little overlap.
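To make the "executing a program abstractly" idea concrete, here is a minimal sketch of abstract interpretation over a sign domain. The domain, the abstract operators, and the tiny expression being analysed are all invented for illustration; real analysers work over program semantics, not hand-built expressions.

```python
# A minimal sketch of abstract interpretation over a sign domain
# (illustrative only; the domain and names are invented here).
# Abstract values: 'NEG', 'ZERO', 'POS', and 'TOP' (sign unknown).

def alpha(n):
    """Abstraction: map a concrete integer to its sign."""
    if n < 0:
        return 'NEG'
    if n == 0:
        return 'ZERO'
    return 'POS'

def abs_add(a, b):
    """Abstract addition on signs; 'TOP' when the result's sign is unknown."""
    if 'TOP' in (a, b):
        return 'TOP'
    if a == 'ZERO':
        return b
    if b == 'ZERO':
        return a
    if a == b:
        return a      # NEG+NEG=NEG, POS+POS=POS
    return 'TOP'      # POS+NEG could be anything

def abs_mul(a, b):
    """Abstract multiplication on signs."""
    if a == 'ZERO' or b == 'ZERO':
        return 'ZERO'
    if 'TOP' in (a, b):
        return 'TOP'
    return 'POS' if a == b else 'NEG'

# "Executing" x*x + 1 abstractly: whatever the sign of x, the result is POS,
# so the analysis proves the property without running the program on any input.
for x_sign in ('NEG', 'ZERO', 'POS'):
    print(x_sign, '->', abs_add(abs_mul(x_sign, x_sign), alpha(1)))
```

The loss of precision is visible in `abs_add('POS', 'NEG')`, which can only answer `'TOP'`: that is exactly the information thrown away by the abstraction.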

For the original question, I've seen no materials that succeed in giving a comprehensive or sufficiently representative beginner-level presentation (compared to, say, the great work Pierce has done with TAPL).

The "data flow analysis is model checking of abstract interpretations" series of papers might be a good starting point, depending on the student's background.

What is abstract

What is abstract interpretation useful for, if not for implementing languages (analyses for optimization, static checking) and tools that help you develop programs in the language (type inference for intellisense in an IDE for a dynamically typed language)?

I don't think that the OP was asking for introductory material, but rather for a venue to communicate about technical issues in abstract interpretation (because many people here feel LtU is for theory, not technical issues).

You're right -- AI is a very

You're right -- AI is a very general framework so you can phrase much of what a compiler does in terms of it. However, I more typically encounter it in verification literature rather than performance literature. Perhaps the disconnect here is that I think about the language implementation process as starting almost downstream from where AI is normally discussed (as with the paper I highlighted, compilation can certainly use it).

AI was discussed here many

AI was discussed here many times. LtU is not for design discussions, regardless of the topic. Other than that I am not sure I understand the original question.

Try asking your specific questions

Don't worry: if they are off-topic, then people will not be shy in telling you. If they are on-topic, then you've created an interesting thread. As far as more specific sites go, the program transformation wiki used to be good for AI, although I haven't been there in years, so YMMV.

Thanks. Here some more details

Thank you for all your replies.
I have been trying to find a forum for technical discussion of AI.

In AI, we lose information by abstraction. But how do we explain that, sometimes, our analysis results are insensitive to this loss of information? My current work starts from a low-level semantics of simple O-O languages. The first abstraction goes from the semantics of the stack, heap, etc. to obtain, soundly, a semantics of path vectors, which gives the value of each path expression. Here, as you see, we may lose the information about garbage, because references that are allocated but no longer pointed to are not represented by any path expression. But that does not matter.

However, sometimes we do care about the loss of information. For example, if I understand correctly, we probably don't like a Galois connection whose concretisation is not injective, since its abstract domain may contain irrelevant information, which can at least be abstracted away by a reduction operator, say \alpha\circ\gamma. That is to say, in this case we do care about the loss of information. So my question is: how can we formalize this kind of harmless loss of information, to make the design of abstract domains more rule-based rather than experimental?
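The reduction operator \alpha\circ\gamma can be sketched concretely. Below is a toy (sign, parity) product domain over a small finite universe so that \gamma is directly computable; all names and the domain itself are invented for illustration. The point is that two distinct abstract elements can concretise to the same set (\gamma is not injective), and \alpha\circ\gamma maps each to the sharpest representative.

```python
# A sketch of the reduction operator alpha∘gamma on a toy (sign, parity)
# product domain, over a finite universe so gamma is computable.
UNIVERSE = range(-4, 5)

SIGNS = {'NEG': lambda n: n < 0, 'ZERO': lambda n: n == 0,
         'POS': lambda n: n > 0, 'ANY': lambda n: True}
PARITIES = {'EVEN': lambda n: n % 2 == 0, 'ODD': lambda n: n % 2 == 1,
            'ANY': lambda n: True}

def gamma(a):
    """Concretisation: all integers in the universe matching both components."""
    sign, parity = a
    return {n for n in UNIVERSE if SIGNS[sign](n) and PARITIES[parity](n)}

def alpha(s):
    """Best abstraction of a nonempty set of integers in the product domain."""
    sign = next((k for k in ('NEG', 'ZERO', 'POS')
                 if all(SIGNS[k](n) for n in s)), 'ANY')
    parity = next((k for k in ('EVEN', 'ODD')
                   if all(PARITIES[k](n) for n in s)), 'ANY')
    return (sign, parity)

def reduce_(a):
    """alpha∘gamma: squeeze out information the concretisation cannot see."""
    return alpha(gamma(a))

# ('ZERO', 'ANY') and ('ZERO', 'EVEN') both concretise to {0}: gamma is not
# injective, and the reduction maps both to the sharper ('ZERO', 'EVEN').
print(reduce_(('ZERO', 'ANY')))   # -> ('ZERO', 'EVEN')
```

Here the parity component of `('ZERO', 'ANY')` carries no extra concrete meaning; that is the "irrelevant information" the reduction abstracts away.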

Sorry if my question is naive or vague. The reason I asked whether an AI-specific forum exists is simply that I don't see where else to put this kind of question.

Thank you for all your ideas.

Difficult in general

If I've understood you correctly, then you want to formalise the information lost by the abstraction operator in AI. My initial intuition is that this would be very difficult to do in general. Abstract interpretation is itself a formalisation for a class of analysis problems that can be approximated by finding iterative solutions to dataflow equations. Prior to Cousot and Cousot's work, each analysis problem had been tackled independently, with differing notions of abstraction and correctness.

The use of Galois connections in AI is actually a solution to the problem of showing that the analysis results are 'correct' with respect to some form of approximation. In this context the type of 'correctness' that is usually required is soundness under composition of abstraction / concretisation operators. It is true to say that information is lost under each of the abstraction operators, but this is normally necessary to ensure termination of the analysis.

As AI already offers a formal framework for reasoning about the amount of abstraction necessary to ensure termination, it is difficult to see what you could formalise in this area. Any reasoning about how much information loss is acceptable would depend on analysing the termination properties of the analysis. In particular, it would correspond to plugging in different decision procedures to determine particular properties of the language, and deciding whether or not they yield a least fixpoint in a finite period of time.
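The trade-off between information loss and termination can be illustrated with the classic interval-widening operator. The sketch below (names and the 1000-step cutoff are invented for illustration) analyses the abstract effect of a loop that keeps incrementing a counter: a plain join never stabilises on the infinite interval lattice, while widening deliberately loses precision by pushing the growing bound to infinity, reaching a fixpoint in two steps.

```python
# A minimal interval-widening sketch: iterating the abstract effect of a
# loop body "x = x + 1" starting from x = 0, with intervals (lo, hi).
INF = float('inf')

def join(a, b):
    """Least upper bound of two intervals."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    """Classic interval widening: any bound that grows is pushed to infinity."""
    lo = a[0] if b[0] >= a[0] else -INF
    hi = a[1] if b[1] <= a[1] else INF
    return (lo, hi)

def analyze(combine):
    """Iterate x |-> combine(x, x+1) from (0, 0) until a fixpoint is reached."""
    x = (0, 0)
    for i in range(1000):
        nxt = combine(x, (x[0] + 1, x[1] + 1))
        if nxt == x:
            return x, i + 1          # fixpoint and number of iterations
        x = nxt
    return x, None                   # did not stabilise within 1000 steps

print(analyze(widen))   # stabilises quickly at (0, inf)
print(analyze(join))    # the upper bound just keeps growing
```

Widening's answer `(0, inf)` is strictly less precise than what the joins were converging toward, but it is the price of guaranteed termination; this is exactly the kind of deliberate, justified information loss discussed above.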

This process is difficult enough that the (formal) tools available still require human interaction to design the abstraction operator, based on domain-specific knowledge about the analysis problem and the language under consideration. Attempting to automate this process in general would be very ambitious.

If you are looking for information about how much information loss is acceptable in each problem, then you will need to look for informal documentation. Designing abstract domains is still pretty much an art form as much as a science. While you will find many formalisations of abstract domains, the justifications for why the information loss is acceptable will only be found in the natural-language write-ups accompanying them.

Some (very generic) starting points for you: this old thread on lambda with some good references (there are many others), most of the past proceedings of SAS, and many papers on using AI in program analysis that have appeared at POPL.

There is not really much discussion of AI here on lambda. Depending on how specific your questions about analysis domains for languages are, there may be some overlap with the discussion topics here about languages. Other than that, sorry I can't provide you with a more useful answer.

Pretty informative answer, many thanks

and that old thread is quite nice