Lifted inference: normalizing loops by evaluation

Lifted inference: normalizing loops by evaluation. Oleg Kiselyov and Chung-chieh Shan. 2009 Workshop on Normalization by Evaluation.

Many loops in probabilistic inference map almost every individual in their domain to the same result. Running such loops symbolically takes time sublinear in the domain size. Using normalization by evaluation with first-class delimited continuations, we lift inference procedures to reap this speed-up without interpretive overhead. To express nested loops, we use multiple control delimiters for metacircular interpretation. To express loops over a powerset domain, we convert nested loops over a subset to unnested loops.
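To make the first claim concrete (this is not the paper's code, just a minimal sketch with made-up names such as `lifted` and `sum_over`): if almost every individual maps to one default value, a loop body can be represented as that default plus a finite list of exceptions, and an aggregate over the whole domain then costs time proportional to the number of exceptions rather than to the domain size.

```ocaml
(* A "lifted" loop body over a large finite domain: almost every
   individual maps to the same default; only a few are exceptions. *)
type 'a lifted = {
  domain_size : int;             (* |domain|                         *)
  default     : 'a;              (* value taken by almost everyone   *)
  exceptions  : (int * 'a) list; (* the few individuals that differ  *)
}

(* Aggregate over the whole domain without visiting each individual:
   cost is proportional to the number of exceptions, not domain_size. *)
let sum_over (f : float lifted) : float =
  let n_exc = List.length f.exceptions in
  let bulk  = float_of_int (f.domain_size - n_exc) *. f.default in
  List.fold_left (fun acc (_, v) -> acc +. v) bulk f.exceptions

let () =
  let f = { domain_size = 1_000_000;
            default     = 0.1;
            exceptions  = [ (17, 0.9); (42, 0.5) ] } in
  Printf.printf "sum over a million individuals = %f\n" (sum_over f)
```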

The paper is a bit hard to follow, but there are enough little tricks here to merit attentive reading. Or better yet, read the code. The basic PLT idea might be summed up as doing abstract interpretation on a shallowly embedded DSL using delimited continuations.
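As a rough sketch of that reading (not the paper's actual interface, which keeps model code in direct style by reinterpreting the probabilistic primitives through delimited continuations; the hypothetical names `PROB`, `flip`, and `Enumerate` below are made up for illustration): a model is ordinary OCaml code written against a small signature, and swapping in a non-standard implementation of that signature turns evaluation of the same code into exact inference.

```ocaml
(* A shallowly embedded probabilistic DSL: a model is ordinary OCaml
   code written against this tiny signature. *)
module type PROB = sig
  type 'a pm
  val return : 'a -> 'a pm
  val ( >>= ) : 'a pm -> ('a -> 'b pm) -> 'b pm
  val flip   : float -> bool pm   (* biased coin flip *)
end

(* A non-standard ("abstract") interpretation: instead of sampling one
   run, enumerate every weighted outcome of the embedded program. *)
module Enumerate : PROB with type 'a pm = (float * 'a) list = struct
  type 'a pm = (float * 'a) list
  let return x = [ (1.0, x) ]
  let ( >>= ) m k =
    List.concat_map (fun (p, x) -> List.map (fun (q, y) -> (p *. q, y)) (k x)) m
  let flip p = [ (p, true); (1.0 -. p, false) ]
end

(* A toy model: flip two coins, condition on at least one head,
   and return the first coin.  The empty list encodes failure. *)
let model =
  let open Enumerate in
  flip 0.5 >>= fun a ->
  flip 0.5 >>= fun b ->
  if a || b then return a else []

let () =
  List.iter (fun (p, a) -> Printf.printf "%B : %.3f\n" a p) model
```

In the paper's setting the reinterpretation goes further: loop bodies are normalized by evaluation so that work over interchangeable individuals is shared, which is where the sublinear running time comes from.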

Talk slides

Thanks! I just posted the slides for the talk.

Data is Code

I found this write-up by Edward Z. Yang, who recently attended Chung-chieh Shan's colloquium on Embedding Probabilistic Languages, to be a nice & accessible introduction to the PLT idea summarized by Ehud:
http://blog.ezyang.com/2010/09/data-is-code/