Trampolining Architectures

A trampolining architecture is a special case and extension of a monad which is useful in implementing multiprogramming. Five trampolining architectures, operating over the range of two trampolining translations, are presented. The effects of the architectures are cumulative. Some increase the breadth of multiprogramming facilities provided. Others demonstrate the potential for more efficient implementation. Finally, we demonstrate the applicability of trampolining to languages without closures.
We have discussed Trampolined Style more than once, but never this paper by the same authors (except Wand). Published one year later, it discusses more options and, more importantly, shows the relations between trampolines and monads.

If you are interested in the design and implementation of PLs and have never heard of trampolines - you should.
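For anyone meeting the idea cold: in trampolined style, each step of a computation returns either a final value or a thunk for the next step, and a driver loop (the "trampoline") keeps bouncing until it gets a value. A minimal sketch in JavaScript - the names `done`, `bounce`, and `trampoline` are my own illustration, not from the paper:

```javascript
// Trampolined style: each step either finishes ("done") or
// returns the next step as a thunk ("bounce").
const done = (value) => ({ done: true, value });
const bounce = (thunk) => ({ done: false, thunk });

// The trampoline itself is an ordinary loop, so the stack does not
// grow even though the computation is written as nested "calls".
function trampoline(step) {
  while (!step.done) step = step.thunk();
  return step.value;
}

// factorial written in trampolined style (accumulator-passing)
const fact = (n, acc = 1) =>
  n <= 1 ? done(acc) : bounce(() => fact(n - 1, n * acc));

trampoline(fact(100000)); // a plain recursion this deep would blow the stack
```

Note that `fact` never calls itself directly; it only describes the next call, and the loop makes it happen.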


Bouncy Bouncy

I feel like I should know what the Trampolined style is, having hung around LtU for all this time but...

From glancing over the papers, it appears the purpose is similar to OS task switchers/schedulers. I'm assuming this could include round-robin and priority-driven context switches. The difference with the trampolined style appears to be that it can be emulated within user programs (without having to resort to the OS).

Would this characterization be a close approximation?

Pretty close

It is also related to proper tail calls, CPS transformation, monads, futures, and continuations. Each of the cited papers is just 10 pages long (short?), so I urge you to read them.

Last time, I used trampolines to implement an interpreter for a concurrent, persistent1 PL with proper tail calls on a single Java thread. If you ever decide to implement full Scheme on the JVM (and you do not mind some loss of performance) - trampolines are for you!

Last time I checked (today), JavaScript didn't support TCO either, so if you happen to implement Scheme in JavaScript... ;-)
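Indeed - and the classic demonstration of what a trampoline buys you without TCO is mutual recursion. A sketch in JavaScript (`isEven`/`isOdd` are just illustration names):

```javascript
// Same trampoline machinery as usual
const done = (v) => ({ done: true, value: v });
const bounce = (thunk) => ({ done: false, thunk });

function trampoline(step) {
  while (!step.done) step = step.thunk();
  return step.value;
}

// Mutually recursive predicates, trampolined: instead of calling
// each other directly (which grows the stack in a language without
// proper tail calls), each returns a thunk describing the next call.
const isEven = (n) => (n === 0 ? done(true) : bounce(() => isOdd(n - 1)));
const isOdd = (n) => (n === 0 ? done(false) : bounce(() => isEven(n - 1)));

trampoline(isEven(1000000)); // fine, even though JS lacks proper tail calls
```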

1Theoretically, also distributed, but I never got as far as testing it on many machines. Well, it never left my laptop at all :-)


I think I first saw trampolines cited in the Distel paper by Luke Gorrie. He used to visit LtU frequently...

More in-depth (warning, categories)

A Monadic Categorical Semantics for Threads

Rather than relegating threads to be a machine-level concept, it is possible to describe thread semantics at the programming-language level. Here, a categorical semantics is given for a potentially multithreaded programming language. The thread behavior is packaged as a monad, and the semantics is described both directly and via a semantics-preserving "trampolining" translation into a lambda-calculus-like metalanguage.
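The "threads at the programming-language level" point can be glimpsed without any category theory: once every computation yields control after each bounce, a user-level scheduler can interleave several of them on one thread. A toy round-robin sketch in JavaScript (all names are my own; this is only the flavor of the idea, not the paper's semantics):

```javascript
const done = (v) => ({ done: true, value: v });
const bounce = (thunk) => ({ done: false, thunk });

// A user-level round-robin scheduler: run each trampolined
// computation for one bounce, then move to the next, until all finish.
function roundRobin(steps) {
  const queue = steps.map((s, i) => ({ id: i, step: s }));
  const trace = [];
  while (queue.length > 0) {
    const { id, step } = queue.shift();
    trace.push(id); // record which "thread" ran this time slice
    if (!step.done) queue.push({ id, step: step.thunk() });
  }
  return trace; // the interleaving, e.g. [0, 1, 0, 1, ...]
}

// Two tiny "threads" that each count down in trampolined style
const countdown = (n) => (n === 0 ? done(n) : bounce(() => countdown(n - 1)));

roundRobin([countdown(2), countdown(2)]); // → [0, 1, 0, 1, 0, 1]
```

No OS threads anywhere: the interleaving is entirely under the program's control, which is exactly what makes it describable at the language level.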

Probably a smoother introduction to trampolines and friends

Cheap (But Functional) Threads

The current work advocates the use of resumption monads as a useful abstraction for concurrent programming and verification rather than as a basis for denotational semantics. The purpose of this account, in part, is to provide an exposition on resumption monads so that the interested reader may grasp the literature more readily. To this end, when resumption monads are introduced in Section 4, the constructions are presented both as Haskell data type declarations and in the categorical notation of Moggi's lecture notes, so that the reader may compare the two formalisms side-by-side.

Despite the "cheap" title, the article seems to be of good quality (I've just read the beginning) and uses Haskell (and Haskell monads) extensively, so it has a more hands-on approach than the other cited papers.
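For non-Haskellers, the basic resumption monad from that literature (in Haskell, roughly `data Res a = Done a | Pause (Res a)`) can be transliterated almost mechanically. A rough JavaScript rendering - my own encoding, assumed rather than taken from the paper:

```javascript
// Resumption monad, roughly: data Res a = Done a | Pause (Res a)
const Done = (value) => ({ tag: 'Done', value });
const Pause = (rest) => ({ tag: 'Pause', rest });

// return/unit is just Done; bind sequences two computations,
// preserving every pause of the first before starting the second.
const bind = (m, k) =>
  m.tag === 'Done' ? k(m.value) : Pause(bind(m.rest, k));

// Run to completion, counting the pauses (the scheduling points)
function run(m) {
  let steps = 0;
  while (m.tag === 'Pause') { m = m.rest; steps++; }
  return { value: m.value, steps };
}

// A computation that pauses n times before yielding 0
const ticks = (n) => (n === 0 ? Done(0) : Pause(ticks(n - 1)));

// Sequencing two computations adds their pauses: 2 + 3 = 5 steps
run(bind(ticks(2), (x) => bind(ticks(3), (y) => Done(x + y))));
```

The pauses are where a scheduler could interleave another thread - that is the whole trick behind "threads as a monad".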

Thanks for posting that link, Andris

I was excited to read an article with both the words "trampolining" and "monad" in it, but was somewhat disappointed on opening the PDF. Although I can usually read Scheme fluently, I find that as soon as I see the word "monad", my brain loses the ability to follow any language but Haskell...

Hope that was useful

Another benefit of not using Scheme when describing trampolines is that people see more clearly that there are no gimmicks - the metalanguage does not have to support proper tail calls, first-class continuations, native threads, mutable cells, etc.

BTW, if you add some persistence and/or distribution to trampolines, you will get a pretty useful tool for many business applications. If you do that in Haskell - it means a useful tool in Haskell...

I was trying to do something along these lines in Java, and am currently lost somewhere on the object-relational boundary (no, Hibernate does not help - I suspect the problem is not the O/R mapping, but the O part of it).

Trampolining Architectures, STM, and Multi Core Processors.

I see three intriguing enabling technologies, two based on monads and one on hardware, starting to emerge in the multiprogramming area.

  • Trampolining Architectures
  • Software Transactional Memory
  • Multi Core Processors
So is there any way to integrate them, and would it make sense to try to do so?

Can STM mechanics and trampolining transformations be hidden behind the scenes, so that a programmer from a more conventional background could use an STM formalism for synchronization without having to learn anything about monads, whose mere name seems to strike terror in the hearts of the uninitiated?

If anyone would like to volunteer to write a short paper and/or webliography addressing this topic, we would like to post it on The Institute for End User Computing's web site. Just send email to

Peter J. Wasilko, Esq.
Executive Director & Chief Technology Officer
The Institute for End User Computing, Inc.

Refocusing in Reduction Semantics

A paper that is interesting in its own right (and I remember reading it on LtU, but cannot find the link!):

Refocusing in Reduction Semantics

The refocused evaluation function of a reduction semantics implements the transitive closure of the refocus function, i.e., a “pre-abstract machine.” Fusing the refocus function with the trampoline function computing the transitive closure gives a state-transition function, i.e., an abstract machine. This abstract machine clearly separates between the transitions implementing the congruence rules of the reduction semantics and the transitions implementing its reduction rules. [...] As a byproduct, we have shown how refocusing makes it possible to mechanically obtain an array of abstract machines [...] We have also shown how to refocus a quadratic-time context-based CPS transformer into a linear-time one.
This is not the abstract, just pieces I wanted to quote. Note the use of trampolines.