Types/Ordering of canonical FP compiler transforms and optimizations?

I'm very interested in the order in which various "typical" FP compiler optimizations are performed.

For example, in what order should one perform:

- lambda lifting vs. closure creation
- parameter augmentation
- beta reduction
- escape analysis
- pattern-match simplification and optimization
- representation selection for things like nullary constructors (True | False, etc.) within algebraic data types
- stack-map generation/consolidation for precise GC of call frames
- any potential "name mangling" to support some linking policy

and so on and so forth.

I'm very curious which other "high level" optimizations either 1) are typical, or 2) are not so typical but can add greatly to performance - emphasizing again, optimizations performed at some "higher level" IR, typical of mostly-functional languages.

Any information on the compilation of functional array-processing languages (APL or J or something else?) would also greatly pique my interest, as I'm curious about achieving "competitive" numerical performance in a functional-language context.

Let's call the above compiler IR(s) and optimizations the "FP stuff" - performed, presumably, prior to some later transformation to basic blocks of quads (and perhaps SSA and so on), i.e. some lower-level IR(s), upon which the compiler performs a more "traditional" (and, relatively speaking, very well documented) set of "low level" compiler optimizations.

My little personal "language lab" happens to be strict, just in case that matters.

Many thanks!

Scott

MLton Compiler Overview

The MLton wiki has a nice Compiler Overview that you might find interesting.

Fusion optimizations

I'm working on a runtime compiler for array languages and I've found the most exciting (and powerful) optimizations are things like map/reduce/scan fusion. These are the rough equivalent of affine loop optimizations for structured iteration. I don't know if anyone has studied how fusion optimizations interact with other passes, but I can at least say that a lot of opportunities for fusion appear after inlining and constant propagation.
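
To make that concrete, here's a minimal hand-fused sketch in Haskell (the function names are illustrative, not from any particular compiler). Map/reduce fusion absorbs the mapped function into the fold's step function, so the intermediate structure never exists:

```haskell
import Data.List (foldl')

-- Unfused: 'map' allocates an intermediate list (incrementally,
-- under laziness) that the fold then consumes.
sumOfSquares :: [Double] -> Double
sumOfSquares xs = foldl' (+) 0 (map (\x -> x * x) xs)

-- Map/reduce fusion by hand: one traversal, no intermediate list;
-- the mapped function has been absorbed into the fold's step.
sumOfSquaresFused :: [Double] -> Double
sumOfSquaresFused = foldl' (\acc x -> acc + x * x) 0
```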

Simpleton question, sorry -

Simpleton question, sorry - but does this in any way (via escape analysis, perhaps?) help address the theoretically needless production of garbage (temporaries) across successive array operations? It really counts when manipulating larger arrays.

Thanks

Scott

Sorry, I don't totally

Sorry, I don't totally understand your question. If you're asking whether fusion optimizations get rid of array temporaries, then yes, they certainly do.
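
For a concrete (hand-written, hypothetical) illustration of what gets fused away, compare a two-pass pipeline with its one-pass equivalent - the fused form is exactly the rewrite a fusion pass performs automatically:

```haskell
-- Two successive operations: the inner 'map' builds a temporary
-- whose only purpose is to feed the outer 'map'.
twoPasses :: [Double] -> [Double]
twoPasses = map (* 2) . map (+ 1)

-- After map/map fusion: the same result in a single traversal,
-- with no temporary in between.
onePass :: [Double] -> [Double]
onePass = map ((* 2) . (+ 1))
```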

Thanks

Thanks

Check GHC

There has been a lot of work on fusion in GHC. Regarding arrays specifically, you may enjoy this recent paper: Regular, shape-polymorphic, parallel arrays in Haskell.
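
As one concrete mechanism: GHC lets a library state fusion laws as rewrite rules that the simplifier applies during optimization. The map/map rule below is the classic example from the GHC user's guide, shown here as a standalone sketch (in practice GHC's own list-fusion rules may fire first):

```haskell
module MapFusion where

-- Tells GHC's simplifier that two successive maps may be collapsed
-- into one, eliminating the intermediate list wherever the
-- left-hand side appears after inlining.
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}
```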

Andre Santos' thesis

You should check out Andre Santos' thesis. He was a student of Simon Peyton Jones and worked on the optimizer in GHC. His thesis contains a section analyzing the order in which optimizations should be done.

http://research.microsoft.com/en-us/um/people/simonpj/papers/santos-thesis.ps.gz

Thanks

Thanks

CiteSeer link.