Lifted inference: normalizing loops by evaluation
Lifted inference: normalizing loops by evaluation. Oleg Kiselyov and Chung-chieh Shan. 2009 Workshop on Normalization by Evaluation.
Many loops in probabilistic inference map almost every individual in their domain to the same result. Running such loops symbolically takes time sublinear in the domain size. Using normalization by evaluation with first-class delimited continuations, we lift inference procedures to reap this speed-up without interpretive overhead. To express nested loops, we use multiple control delimiters for metacircular interpretation. To express loops over a powerset domain, we convert nested loops over a subset to unnested loops.

The paper is a bit hard to follow, but there are enough little tricks here to merit attentive reading. Or better yet, read the code. The basic PLT idea might be summed up as doing abstract interpretation on a shallowly embedded DSL using delimited continuations; a small sketch of the "lifted loop" intuition follows below.

By Ehud Lamm at 2009-08-14 02:34 | DSL | Functional | Implementation | Meta-Programming | other blogs
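A rough sketch, in OCaml, of why such loops can run in time sublinear in the domain size: represent a function over a large finite domain as a default value plus a short list of exceptional individuals, and fold over the exceptions instead of the whole domain. This is not the paper's code (which obtains the lifted representation automatically via normalization by evaluation and delimited continuations); the type lifted and the function sum_lifted are illustrative names made up for this example.

(* A "lifted" function over a large finite domain: a default value
   plus a short list of exceptional (individual, value) pairs. *)
type 'a lifted = { size : int; default : 'a; exceptions : (int * 'a) list }

(* Sum the function over the whole domain without visiting every individual;
   the cost is proportional to the number of exceptions, not to [size]. *)
let sum_lifted f =
  let bulk = float_of_int (f.size - List.length f.exceptions) *. f.default in
  List.fold_left (fun acc (_, v) -> acc +. v) bulk f.exceptions

let () =
  (* 1_000_000 individuals, almost all with weight 0.1, two exceptions *)
  let f = { size = 1_000_000; default = 0.1; exceptions = [ (17, 0.5); (42, 0.9) ] } in
  Printf.printf "sum = %f\n" (sum_lifted f)

The point of the paper is that one never writes this representation by hand: an ordinary inference procedure, written as a loop over individuals, is normalized into such a symbolic form by evaluation, without interpretive overhead.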