Lightweight Fusion by Fixed Point Promotion

Lightweight Fusion by Fixed Point Promotion, Atsushi Ohori and Isao Sasano.

This paper proposes a lightweight fusion method for general recursive function definitions. Compared with existing proposals, our method has several significant practical features: it works for general recursive functions on general algebraic data types; it does not produce extra runtime overhead (except for possible code size increase due to the success of fusion); and it is readily incorporated in standard inlining optimization. This is achieved by extending the ordinary inlining process with a new fusion law that transforms a term of the form f o (fix g.λx.E) to a new fixed point term fix h.λx.E' by promoting the function f through the fixed point operator.

This is a sound syntactic transformation rule that is not sensitive to the types of f and g. This property makes our method applicable to a wide range of functions, including those with multiple parameters in both curried and uncurried forms. Although this method does not guarantee any form of completeness, it fuses typical examples discussed in the literature and others that involve accumulating parameters, either in foldl-like specific forms or in general recursive forms, without any additional machinery.

In order to substantiate our claim, we have implemented our method in a compiler. Although it is preliminary, it demonstrates the practical feasibility of this method.

Deforestation is one of those optimizations every functional programmer who has ever had to rewrite a beautiful composition of maps and filters into an evil, ugly explicit fold has always longed for. Unfortunately, the standard lightweight fusion algorithms have trouble with examples as simple as foldl, and this paper has a very nice account of a simple algorithm that can handle it.
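To make the promotion concrete, here is a hand-worked instance of the fusion law in Haskell. This is my own sketch, not the paper's formal rule: we compose sum with a recursive map and promote sum through the fixed point, yielding a single recursive definition that builds no intermediate list.

```haskell
-- Unfused: a recursive map, whose result is then consumed by sum
mapInc :: [Int] -> [Int]
mapInc []       = []
mapInc (x : xs) = (x + 1) : mapInc xs

-- Promoting sum through the fixed point of mapInc: the new function
-- satisfies sumMapInc = sum . mapInc, with sum pushed into each branch
sumMapInc :: [Int] -> Int
sumMapInc []       = 0                         -- sum []
sumMapInc (x : xs) = (x + 1) + sumMapInc xs    -- sum ((x + 1) : rest)
```

On any input, sum (mapInc xs) and sumMapInc xs agree, but the fused version allocates no cons cells for the intermediate list.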



I read it before. While I can see the usefulness demonstrated by the examples, there are potentially many other cases where this technique is simply not applicable. The paper lacks depth in exploring its limitations.

I agree

I agree that the paper is rather weak on related work. Another thing they entirely fail to discuss is zip-like functions: functions which take several list arguments and which can be fused in all of those arguments. This is an example where many other fusion methods fail. It's especially surprising that they don't talk about this, since I believe their algorithm actually handles this case, although I have yet to work out an example in detail.
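For concreteness, here is the kind of target code such a fusion in both arguments would aim for. This is a hand-fused sketch of my own, not an output of the paper's algorithm: a zipWith consuming two mapped lists collapses to one loop with no intermediate lists.

```haskell
-- Unfused: two intermediate lists, consumed by zipWith
unfused :: [Int] -> [Int] -> [Int]
unfused xs ys = zipWith (+) (map (* 2) xs) (map (+ 1) ys)

-- Hand-fused in both list arguments: a single loop,
-- neither intermediate list is ever built
fused :: [Int] -> [Int] -> [Int]
fused (x : xs) (y : ys) = (x * 2 + (y + 1)) : fused xs ys
fused _        _        = []
```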

I'm also not that convinced that their fusion method will actually do any good in practice. In the paper they only demonstrate that it doesn't degrade the performance which is an incredibly weak demonstration. The identity transformation doesn't degrade performance either!

Another reason why I'm skeptical about the performance improvements that can be achieved with this method is the following. For short cut fusion as it is used in Haskell, the biggest win is not in removing the intermediate data structures; it's in the optimizations that are enabled after fusion is performed. The problem is that lightweight fusion doesn't enable nearly as many optimizations as short cut fusion does. An example will help illustrate my point. The important thing here is that we're dealing with higher order functions such as map and foldr. I'm going to pretend that lightweight fusion can handle several applications of the same function, so I'm going to use

map f o map g

as my example. Lightweight fusion would (if it could handle this example) yield the following code:

let map_map a b [] = []
    map_map a b (x::xs) = a (b x) :: map_map a b xs
in map_map f g 

Note how the two functions f and g never get "in contact". There is no easy way for the optimizer to see that they are applied in sequence to the same elements all the time.
Now short cut fusion would yield something isomorphic to

map (f o g)

Now this is a much nicer thing for the optimizer. Suppose (switching to Haskell syntax) that

f = (+2)


g = (+4)

then it's now an easy matter for the optimizer to generate the result

map (+6)

OK, addition is a silly example, but it extends directly to more complicated functions. It's clear that lightweight fusion is inferior to short cut fusion in this respect.
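Spelled out as runnable Haskell (my sketch of the commenter's point, with the names mapMap and shortCut chosen here for illustration): both forms compute the same list, but only in the second is the composition a single closed term that an optimizer can collapse to (+6).

```haskell
-- Lightweight-fusion shape: the two functions never meet in the loop body,
-- so nothing exposes their composition to the optimizer
mapMap :: (b -> c) -> (a -> b) -> [a] -> [c]
mapMap _ _ []       = []
mapMap f g (x : xs) = f (g x) : mapMap f g xs

-- Short-cut-fusion shape: the composition (+2) . (+4) is one expression,
-- which simplification can rewrite to (+6)
shortCut :: [Int] -> [Int]
shortCut = map ((+ 2) . (+ 4))
```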

I'm also not that convinced

I'm also not that convinced that their fusion method will actually do any good in practice. In the paper they only demonstrate that it doesn't degrade the performance which is an incredibly weak demonstration. The identity transformation doesn't degrade performance either!

To be fair, the authors were working with SML#, which is rather handicapped in this regard. Any functions defined in separate modules, such as standard library functions like map and fold, are opaque to SML#'s optimizer, so it can't fuse those definitions properly. They describe these problems towards the end of section 7.

So despite their promising fusion results in section 5 (a 50% reduction in allocations and a 33% reduction in execution time), actual programs compiled with SML# don't benefit much due to these problems. They also discuss possible solutions.

[Edit: does anyone know if this fusion technique has been used in a compiler that doesn't suffer from SML#'s limitations?]


I'm happy to see that they ponder these issues with call-by-value in mind too! Unfortunately, they conclude that the correctness proof relies on call-by-name semantics, which I interpret as meaning that the transformation might introduce termination: a program that diverged before fusion could terminate afterwards.

They define tail contexts, and use their rule AppDist to "pull out the head term of a case expression"; I get the impression this is closely related to the "turning programs inside-out" of The positive supercompiler.
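As I understand the rule the commenter describes, it distributes an application over the branches of a case expression. A minimal Haskell rendering of before and after (my reconstruction, not the paper's syntax; double and the two branch bodies are made-up examples):

```haskell
double :: Int -> Int
double = (* 2)

-- Before: double is applied to the whole case expression
before :: [Int] -> Int
before xs = double (case xs of
                      []      -> 0
                      (y : _) -> y + 1)

-- After AppDist-style distribution: double is pushed into each branch,
-- exposing its application to each constructor's result
after :: [Int] -> Int
after xs = case xs of
             []      -> double 0
             (y : _) -> double (y + 1)
```

Once the application sits inside each branch, later steps (inlining, further fusion) can work on double applied to a known constructor result.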

For FixPromote they briefly mention that =>+ is their transformation relation => "together with standard simplifying reduction rules". I would have loved to see all transformation rules collected together in a figure!