
Arrows generalise monads and idioms

Two fresh papers from the Edinburgh theory stable:
  • Lindley, Wadler & Yallop, 2008. The Arrow Calculus (Functional Pearl). Submitted to ICFP.
  • Lindley, Wadler & Yallop, 2008. Idioms are oblivious, arrows are meticulous, monads are promiscuous. Submitted to MSFP.
    We revisit the connection between three notions of computation: Moggi’s monads, Hughes’s arrows and McBride and Paterson’s idioms (also called applicative functors). We show that idioms are equivalent to arrows that satisfy the type isomorphism A ∼> B ≅ 1 ∼> (A → B) and that monads are equivalent to arrows that satisfy the type isomorphism A ∼> B ≅ A → (1 ∼> B). Further, idioms embed into arrows and arrows embed into monads.
The first paper introduces a reformulation of the Power/Thielecke/Paterson/McBride axiomatisation of arrows, which the authors argue is more natural, and shows that arrows generalise both monads and idioms. The second paper studies the relationships between the three formalisations in greater formal depth; in particular, the results about applicative functors struck me as significant.
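For Haskell readers, the two isomorphisms correspond to familiar constructions: monads already embed into arrows via Control.Arrow's Kleisli (a value of type a -> m b), and an idiom gives a "static" arrow that packages a function inside the applicative functor. Here is a minimal sketch of my own (not taken from the papers) of the static-arrow direction:

    import Control.Arrow
    import Control.Category
    import Prelude hiding (id, (.))

    -- Monads embed into arrows via Control.Arrow's
    --   newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }
    -- matching the isomorphism  A ∼> B ≅ A → (1 ∼> B).

    -- Idioms embed into arrows as "static" arrows: a pure function
    -- wrapped in the idiom, matching  A ∼> B ≅ 1 ∼> (A → B).
    newtype Static f a b = Static (f (a -> b))

    instance Applicative f => Category (Static f) where
      id                  = Static (pure id)
      Static g . Static h = Static ((.) <$> g <*> h)

    instance Applicative f => Arrow (Static f) where
      arr f            = Static (pure f)
      first (Static g) = Static (fmap (\f (a, c) -> (f a, c)) g)

The Kleisli direction ships with Control.Arrow, so only the Static construction needs to be written out; the papers characterise exactly when an arrow arises in each of these two ways.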

Automatic Generation of Peephole Superoptimizers

Automatic Generation of Peephole Superoptimizers, Sorav Bansal and Alex Aiken, ASPLOS 2006.

Peephole optimizers are typically constructed using human-written pattern matching rules, an approach that requires expertise and time, as well as being less than systematic at exploiting all opportunities for optimization. We explore fully automatic construction of peephole optimizers using brute force superoptimization. While the optimizations discovered by our automatic system may be less general than human-written counterparts, our approach has the potential to automatically learn a database of thousands to millions of optimizations, in contrast to the hundreds found in current peephole optimizers. We show experimentally that our optimizer is able to exploit performance opportunities not found by existing compilers; in particular, we show speedups from 1.7 to a factor of 10 on some compute intensive kernels over a conventional optimizing compiler.

It's always fun to see a method that ought to be intractable rendered tractable through a combination of cleverness and strategically applied brute force.
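To make the flavour of the brute-force search concrete, here is a toy sketch of my own in Haskell -- not the authors' system, and the tiny two-register machine, instruction set, and function names are all invented for illustration. The idea is just: enumerate short instruction sequences, keep the candidates that agree with a target sequence on a handful of test states, and take the cheapest one. A real superoptimizer needs canonicalisation, fingerprinting, and a sound equivalence check rather than plain testing.

    import Data.List (minimumBy)
    import Data.Ord (comparing)

    -- A tiny two-register machine, invented for this sketch.
    data Instr = AddConst Int | MulConst Int | Neg | Swap
      deriving (Eq, Show)

    type State = (Int, Int)   -- (r0, r1)

    step :: Instr -> State -> State
    step (AddConst k) (a, b) = (a + k, b)
    step (MulConst k) (a, b) = (a * k, b)
    step Neg          (a, b) = (negate a, b)
    step Swap         (a, b) = (b, a)

    run :: [Instr] -> State -> State
    run = flip (foldl (flip step))

    -- Candidate instructions to enumerate over (kept tiny on purpose).
    alphabet :: [Instr]
    alphabet = [AddConst k | k <- [-2 .. 2]]
            ++ [MulConst k | k <- [-1 .. 2]]
            ++ [Neg, Swap]

    -- All sequences up to a given length: the brute-force part.
    sequencesUpTo :: Int -> [[Instr]]
    sequencesUpTo n = concat [sequence (replicate k alphabet) | k <- [0 .. n]]

    -- "Equivalence" here is only agreement on a few test states;
    -- a real tool must follow this with a sound verification step.
    testStates :: [State]
    testStates = [(0, 0), (1, 2), (-3, 5), (7, -7), (100, 1)]

    equivalentOnTests :: [Instr] -> [Instr] -> Bool
    equivalentOnTests p q = all (\s -> run p s == run q s) testStates

    -- Find the cheapest (shortest) candidate equivalent to the target,
    -- falling back to the target itself if nothing shorter is found.
    superoptimize :: [Instr] -> [Instr]
    superoptimize target =
      minimumBy (comparing length)
                (target : [ c | c <- sequencesUpTo (length target)
                              , c `equivalentOnTests` target ])

    main :: IO ()
    main = do
      print (superoptimize [Neg, Neg])                 -- expect []
      print (superoptimize [AddConst 1, AddConst 1])   -- expect [AddConst 2]

Each discovered rewrite would then be stored in a lookup table keyed by the original sequence, which is what makes the result usable as an ordinary peephole optimizer at compile time.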

I found their performance measurements suggestive but perplexing. Their results on their kernels are astoundingly good, which they claim is because they make use of the x86's SIMD instructions. But there is very little change on the SPEC benchmarks, and they subsequently suggest that their peephole optimizer catches many things that other compilers get via dataflow optimization. As a result, I don't feel I have a clear picture of what their optimizer does and doesn't do, or where it does and doesn't overlap with existing optimizations. But they definitely have enough to convince me that this is worth further study, so I really hope Bansal and Aiken publish more about this -- a tech report or journal paper with more systematic measurements and in-depth analysis would be really illuminating.