CPS for the win?

An old half-baked idea/dream from a cranky day-job programmer who doesn't really know what he's talking about:

  • Convert everything into CPS.
  • Each function call becomes a new continuation.
  • Have a top-level 'event loop' that runs continuations one at a time.
  • The runner checks the signature of the next function against a lookup table for hooks to run before and/or after the invocation.
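
A minimal sketch of that loop in Python, with every name invented for illustration (and the hook table keyed by function name rather than full signature, for brevity):

```python
# Hypothetical sketch, not a real framework. A "thunk" is a triple
# (fn, args, k): call fn on args, then hand the result to the
# continuation k, which returns the next thunk (or None to stop).
hooks = {}  # function name -> {"before": [...], "after": [...]}

def add_hook(name, when, fn):
    hooks.setdefault(name, {"before": [], "after": []})[when].append(fn)

def trampoline(thunk):
    value = None
    while thunk is not None:
        fn, args, k = thunk
        for h in hooks.get(fn.__name__, {}).get("before", []):
            h(fn.__name__, args)                  # e.g. logging, tracing
        value = fn(*args)
        for h in hooks.get(fn.__name__, {}).get("after", []):
            h(fn.__name__, args, value)
        thunk = k(value) if k is not None else None
    return value

def add1(x): return x + 1
def double(x): return 2 * x

# double(add1(5)) written as a chain of continuations, with a hook
# added dynamically at the "double" join point:
add_hook("double", "before", lambda name, args: print("calling", name, args))
result = trampoline((add1, (5,), lambda v: (double, (v,), None)))
```

Here result comes out as 12, and the before hook fires exactly once, when double is invoked.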

This gives super powers like:

  • AOP join points at every function. In languages that are more expression-oriented than statement-oriented (more pure-FP style), that means a lot of opportunities for join points.
  • More flexible logging: hooks at any join point can be added, removed, or tweaked dynamically at runtime.
  • Ability to run prod & test versions of any function side by side and compare behaviour.
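
The last bullet might look like this standalone sketch (names invented; in the CPS runner it would be installed as an after-hook at the join point rather than as a wrapper):

```python
# Run a candidate implementation in the shadow of the production one
# and report divergences; callers only ever see the production result.
def shadow(prod, test, on_mismatch=print):
    def wrapped(*args, **kwargs):
        expected = prod(*args, **kwargs)
        try:
            actual = test(*args, **kwargs)
            if actual != expected:
                on_mismatch(("mismatch", args, expected, actual))
        except Exception as exc:
            on_mismatch(("raised", args, expected, exc))
        return expected
    return wrapped

# Compare a correct and a buggy doubling function side by side:
mismatches = []
checked = shadow(lambda x: 2 * x, lambda x: x ** 2, mismatches.append)
values = [checked(2), checked(3)]   # x**2 agrees at 2, diverges at 3
```

Callers get [4, 6] either way; the divergence at 3 lands in mismatches instead of affecting behaviour.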

Developers would still interact with the code in its original format. Since, hypothetically, anything can be converted from imperative to monadic pure-FP style, even Java could be set up this way. The UX for adding hooks would be nice and all at a high level, looking like source code, not some internal runtime CPS hell.

Scheme

That's exactly how the first two implementations of Scheme worked, the original interpreter and the Rabbit compiler. Except without the hooks.

Your pipe dream idea

Your pipe dream idea presupposes a trampoline mechanism to call continuations.

While that works, I don't think it is particularly optimal. Most of the time you just want to branch to the next continuation.
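
A rough illustration of that trade-off in Python (which has no tail-call elimination, so the trampoline is exactly the workaround one reaches for; in a compiler with proper tail calls, branching to the next continuation is just a jump):

```python
# Direct style: invoking the continuation is an ordinary call, so a
# long chain grows the call stack.
def count_direct(n, k):
    return k(n) if n == 0 else count_direct(n - 1, k)

# Trampolined style: each step *returns* the next thunk to a loop, so
# the stack stays flat, at the cost of an allocation and a test per
# step -- the overhead the comment above is pointing at. (Toy code:
# it assumes the final answer is not itself callable.)
def count_tramp(n):
    def step(n):
        return n if n == 0 else (lambda: step(n - 1))
    v = step(n)
    while callable(v):
        v = v()
    return v
```

count_tramp(200000) finishes fine, while count_direct(200000, lambda v: v) blows the default recursion limit.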

Interesting point about the runtime hooks though...

Kernel

Once upon a time I started to write a Kernel interpreter that worked pretty nearly this way. My notion was to get everything working and then explore approaches to optimization (to understand better how much optimization can actually be done, independent of dogmatic claims). Each continuation was an object in the interpreter; an interpreter state consisted of a value and a continuation, and the basic method of such a state was step, which would send its value to its continuation and return a new state.

As for hooks, though, my guarded-continuation facility was meant to fix that broken element of the R5RS design. Interestingly (imho), although I never finished implementing that interpreter, my plan to learn from it about optimization of Kernel had already started to work, some, by offering an insight related to the runtime efficiency of continuation guarding.

Background: In Kernel, each continuation has, in general, a list of entry guards and a list of exit guards; usually both lists are empty, but a single primitive in the language constructs a continuation with guards. Each guard (each item on either list) consists of a selector (a continuation) and an interceptor (an applicative, i.e., a procedure). Continuation guards don't come into play when a continuation receives a value normally, but only when a value is abnormally passed (i.e., a first-class continuation is explicitly called). Normal flow of information to continuations arranges all continuations into a tree, with the dynamic extent of a continuation being the subtree of its descendants, and all the computations that normally contribute to it and to them.

When an abnormal pass sends a value from somewhere inside the dynamic extent of a continuation to outside its dynamic extent, then as the value crosses the boundary of that dynamic extent, a check is done to see whether one of the continuation's exit guards may be triggered. Each selector in the list is checked, to see if the destination of the abnormal pass is within the dynamic extent of that selector; and if any of the selectors matches the destination, the first matching one (and only the first) is triggered: its interceptor is applied to the value that's being abnormally passed, and the normally-returned result of the interceptor then continues along the path of the abnormal pass. This is close, though not identical, to the way Java exceptions work.

When an abnormal pass sends a value from somewhere outside the dynamic extent of a continuation to inside its dynamic extent, then as the value crosses the boundary of that dynamic extent, a check is done to see whether one of the continuation's entry guards may be triggered. Each selector in the list is checked to see if the source of the abnormal pass is within the dynamic extent of the selector, and at most one interceptor is thus applied.
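
A toy model of the semantics just described, with all names invented (this illustrates the mechanism, not Kernel's actual implementation): continuations form a tree, and an abnormal pass checks exit guards against the destination and entry guards against the source as it crosses each extent boundary.

```python
class Cont:
    """One node in the tree of continuations."""
    def __init__(self, parent=None, entry_guards=(), exit_guards=()):
        self.parent = parent
        self.entry_guards = list(entry_guards)  # list of (selector, interceptor)
        self.exit_guards = list(exit_guards)

    def within(self, other):
        """True iff self lies in other's dynamic extent
        (other is self or an ancestor of self)."""
        c = self
        while c is not None:
            if c is other:
                return True
            c = c.parent
        return False

def path_to_root(c):
    path = []
    while c is not None:
        path.append(c)
        c = c.parent
    return path

def abnormal_pass(value, src, dst):
    src_path, dst_path = path_to_root(src), path_to_root(dst)
    common = next(c for c in src_path if c in dst_path)
    # Exiting extents from src up to (not including) the common ancestor:
    # an exit guard fires iff dst lies within its selector's extent, and
    # only the first matching guard on each list is triggered.
    for c in src_path[:src_path.index(common)]:
        for selector, interceptor in c.exit_guards:
            if dst.within(selector):
                value = interceptor(value)
                break
    # Entering extents from just below the common ancestor down to dst:
    # an entry guard fires iff the *original* src lies within its selector.
    for c in reversed(dst_path[:dst_path.index(common)]):
        for selector, interceptor in c.entry_guards:
            if src.within(selector):
                value = interceptor(value)
                break
    return value

# A guarded extent whose exit guard intercepts values escaping to root:
root = Cont()
guarded = Cont(root, exit_guards=[(root, lambda v: v + 1)])
inner = Cont(guarded)
escaped = abnormal_pass(10, inner, root)   # crosses guarded's boundary
```

Here escaped is 11: the exit guard fired because the destination (root) lies within the selector's extent. A pass from inner to guarded would cross no guarded boundary and deliver its value untouched.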

So the exit guards are selected based on the destination of the abnormal pass, while the entry guards are selected based on the source of the abnormal pass (that's the original source of the abnormal pass: interceptors, as long as they return normally, do not alter the source of the abnormal pass, thus do not affect later entry guards within the same abnormal pass).

What should have been clear to me —I designed the language feature, after all— but wasn't until I implemented it, is that at the moment the interpreter begins an abnormal pass, when the source and destination of the abnormal pass become known, the entire arc of the abnormal pass can and should be scheduled, with a complete queue of all the interceptor calls that should normally occur during the pass. (Which can, btw, be done in linear time if one is allowed to exclusively mark continuations during the queuing operation, but afaict takes quadratic time if one isn't allowed to do that.)
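
The scheduling insight might be sketched like this, under the same kind of toy model (invented names again): when the pass begins, exclusively mark the ancestors of src and dst once, so every "is X within Y's extent" test becomes a constant-time mark check, and the whole interceptor queue falls out in time linear in the two path lengths.

```python
class Cont:
    def __init__(self, parent=None, entry_guards=(), exit_guards=()):
        self.parent = parent
        self.entry_guards = list(entry_guards)  # list of (selector, interceptor)
        self.exit_guards = list(exit_guards)
        self.src_mark = self.dst_mark = 0       # epoch of last marking

_epoch = 0

def schedule_interceptors(src, dst):
    """Build, up front, the queue of interceptor calls for an abnormal
    pass from src to dst (exit guards select on dst, entry guards on src)."""
    global _epoch
    _epoch += 1
    c = dst                        # mark dst and its ancestors ...
    while c is not None:
        c.dst_mark = _epoch
        c = c.parent
    c = src                        # ... and src and its ancestors
    while c is not None:
        c.src_mark = _epoch
        c = c.parent
    queue = []
    # Walk up from src; the first dst-marked node is the common ancestor.
    c = src
    while c.dst_mark != _epoch:
        for selector, interceptor in c.exit_guards:
            if selector.dst_mark == _epoch:    # dst within selector: O(1)
                queue.append(interceptor)
                break
        c = c.parent
    common = c
    # Walk up from dst to the common ancestor, then reverse into the
    # order in which those boundaries are actually crossed.
    entry, c = [], dst
    while c is not common:
        for selector, interceptor in c.entry_guards:
            if selector.src_mark == _epoch:    # src within selector: O(1)
                entry.append(interceptor)
                break
        c = c.parent
    queue.extend(reversed(entry))
    return queue

root = Cont()
guarded = Cont(root, exit_guards=[(root, lambda v: v + 1)])
inner = Cont(guarded)
queue = schedule_interceptors(inner, root)
value = 10
for interceptor in queue:          # run the pass with the prebuilt queue
    value = interceptor(value)
```

The epoch counter is what makes the marking "exclusive" without a cleanup walk: stale marks from earlier passes simply never match the current epoch.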