Inductive Synthesis of Functional Programs: An Explanation Based Generalization Approach

Emanuel Kitzelmann, Ute Schmid; JMLR Volume 7(Feb), 2006.

We describe an approach to the inductive synthesis of recursive equations from input/output-examples which is based on the classical two-step approach to induction of functional Lisp programs of Summers (1977). In a first step, I/O-examples are rewritten to traces which explain the outputs given the respective inputs based on a datatype theory. These traces can be integrated into one conditional expression which represents a non-recursive program. In a second step, this initial program term is generalized into recursive equations by searching for syntactical regularities in the term. Our approach extends the classical work in several aspects. The most important extensions are that we are able to induce a set of recursive equations in one synthesizing step, the equations may contain more than one recursive call, and additionally needed parameters are automatically introduced.
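To make the two-step approach concrete, here is a minimal sketch (my own illustration, not the authors' implementation) for inducing a `last` function on lists: step one rewrites each I/O example into a trace of datatype operations (`head`/`tail`) that explains the output, and step two generalizes the syntactic regularity in those traces into a recursive equation.

```python
# Step 1: rewrite I/O examples into traces over the list datatype.
# Each output is explained as a composition of head/tail on the input.
examples = [
    (["a"], "a"),
    (["a", "b"], "b"),
    (["a", "b", "c"], "c"),
]

def head(xs): return xs[0]
def tail(xs): return xs[1:]

def explain(inp, out):
    """Find a head/tail trace mapping inp to out."""
    trace, xs = [], inp
    while xs:
        if len(xs) == 1 and head(xs) == out:
            return trace + ["head"]
        trace.append("tail")
        xs = tail(xs)
    return None

traces = [explain(i, o) for i, o in examples]
# traces: ['head'], ['tail', 'head'], ['tail', 'tail', 'head'] --
# each trace is the previous one with one more 'tail', a syntactic
# regularity in the integrated (non-recursive) program term.

# Step 2: that regularity generalizes into a recursive equation:
#   last(x) = head(x)        if tail(x) is empty
#   last(x) = last(tail(x))  otherwise
def last(xs):
    return head(xs) if not tail(xs) else last(tail(xs))

assert all(last(i) == o for i, o in examples)
```

The real system works over conditional Lisp-style terms and can induce several mutually recursive equations with extra parameters at once; this sketch only shows the trace-then-generalize shape of the idea.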

Is this the future of programming?


I can't make up my mind

I need to read this in more detail (it's late, and I'm working on a talk, so no time right now) but it looks like an interesting development in automatic programming, something which we have discussed from time to time. The idea of providing a few examples and having the system infer a program is awesome. At the same time this stuff seems kinda crazy to me -- I just can't see it ever scaling to interesting programs. I really can't make up my mind.

It might be interesting to

It might be interesting to compare/contrast this with the techniques used to derive programs from specifications.

Connections with ILP

It's interesting to see explanation-based learning (EBL) back from its hiatus but, after a cursory read of the paper, I think they may have missed some connections with similar work in the inductive logic programming (ILP) literature. In particular, I would be interested in understanding the connection between their approach and Shapiro's algorithmic program debugging as well as Muggleton's Progol.

In the former, (recursive) logic programs are refined by finding a sequence of resolution steps that lead to a proof of a negative example (I/O pair). A faulty clause in that proof sequence is identified and replaced by its refinements. To me, this seems similar to generalising from program traces. Similarly, the "saturation" of a positive example in Progol finds the most specialised clause that is consistent with the example, which feels to me like a kind of program trace.
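The diagnose-and-refine loop being alluded to can be caricatured in a few lines. This is a toy propositional sketch of my own (the actual systems work over first-order clauses and resolution proofs, which are far richer): whenever a hypothesis clause covers a negative example, it is diagnosed as faulty and replaced by more specialised refinements.

```python
positives = [2, 4, 6]   # examples the target concept must cover
negatives = [3, 5]      # examples it must not cover

# A "clause" here is just a set of conditions that must all hold.
conditions = {
    "even": lambda n: n % 2 == 0,
    "small": lambda n: n < 10,
}

def covers(clause, n):
    return all(conditions[c](n) for c in clause)

def refine(clause):
    """Specialise a clause by adding one unused condition."""
    return [clause | {c} for c in conditions if c not in clause]

# Diagnosis loop: while some clause covers a negative example,
# replace the faulty clause by its refinements, keeping only
# refinements that still cover every positive example.
hypothesis = [set()]    # start from the maximally general clause
while any(covers(cl, n) for cl in hypothesis for n in negatives):
    faulty = next(cl for cl in hypothesis
                  if any(covers(cl, n) for n in negatives))
    hypothesis.remove(faulty)
    hypothesis += [r for r in refine(faulty)
                   if all(covers(r, p) for p in positives)]

# The surviving clauses cover all positives and no negatives.
assert all(any(covers(cl, p) for cl in hypothesis) for p in positives)
assert not any(covers(cl, n) for cl in hypothesis for n in negatives)
```

The analogy to the paper is loose but suggestive: both search a space of programs guided by how a candidate explains (or fails to explain) the examples.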

I can't find a reference right now but this stuff also reminds me of John Lloyd's work on inducing functional programs from examples in the language Escher.

The authors have a pretty good summary of related ILP techniques towards the end of the paper so it strikes me as strange that they would miss these, especially given that they cite the comprehensive survey by Flener and Yilmaz. That said, I didn't know of the 1977 work of Summers that they build upon. Perhaps I need to spend some more time reading their proposal.

It's an interesting area but an incredibly difficult problem to solve. The space of programs that satisfy a finite set of input/output pairs is very complex. Without an extremely strong search bias I can't see these methods working on anything but small problems.

Shapiro's Algorithmic Program Debugging