Lifted inference: normalizing loops by evaluation
Lifted inference: normalizing loops by evaluation. Oleg Kiselyov and Chung-chieh Shan. 2009 Workshop on Normalization by Evaluation.
Many loops in probabilistic inference map almost every individual in their domain to the same result. Running such loops symbolically takes time sublinear in the domain size. Using normalization by evaluation with first-class delimited continuations, we lift inference procedures to reap this speed-up without interpretive overhead. To express nested loops, we use multiple control delimiters for metacircular interpretation. To express loops over a powerset domain, we convert nested loops over a subset to unnested loops.

The paper is a bit hard to follow, but there are enough little tricks here to merit attentive reading. Or better yet, read the code. The basic PLT idea might be summed up as doing abstract interpretation on a shallowly embedded DSL using delimited continuations.

By Ehud Lamm at 2009-08-14 02:34 | DSL | Functional | Implementation | Meta-Programming
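To make the speed-up concrete, here is a minimal OCaml sketch (not the authors' code, and without their normalization-by-evaluation or delimited-continuation machinery): if the loop body yields the same result for almost every individual, the loop can be evaluated over a "default plus exceptions" representation in time proportional to the number of exceptions rather than the domain size. The names lifted and log_product are illustrative, not taken from the paper.

(* A "lifted" function on a domain of individuals 0 .. n-1:
   a default value plus finitely many exceptional individuals. *)
type lifted = { default : float; exceptions : (int * float) list }

(* Log-probability of n independent factors, one per individual.
   Run concretely this would be a loop of n multiplications; run
   symbolically it is |exceptions| additions plus one scaling. *)
let log_product n { default; exceptions } =
  let k = List.length exceptions in
  List.fold_left
    (fun acc (_, p) -> acc +. log p)
    (float_of_int (n - k) *. log default)
    exceptions

(* Example: a million individuals each contribute factor 0.5,
   except individuals 3 and 42. *)
let () =
  let f = { default = 0.5; exceptions = [ (3, 0.9); (42, 0.1) ] } in
  Printf.printf "log probability = %g\n" (log_product 1_000_000 f)

The sketch hard-codes the symbolic representation by hand; the point of the paper, as the abstract says, is to obtain the same effect by lifting an ordinary inference procedure itself, using normalization by evaluation with delimited continuations, so that no interpretive overhead or special data type is imposed on the user's program.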