## Koka: a function-oriented language with effect inference

Koka is a function-oriented programming language that separates pure values from side-effecting computations, and the effect of every function is automatically inferred. Koka has many features that help programmers easily and correctly change their data types and code organization, while keeping a small language core with a familiar, JavaScript-like syntax.
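As a taste of the effect system, here is a small sketch (in current Koka syntax, which differs from the 2012-era syntax; the function names are illustrative, not from the post):

```koka
// Koka infers that sqr has no side effects: its type is
//   sqr : (x : int) -> total int
fun sqr( x : int ) : int
  x * x

// printing is tracked as an effect: the inferred type is
//   greet : () -> console ()
fun greet() : console ()
  println( "hello" )
```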

Koka builds on the idea of using row polymorphism to encode an effect system and the relations between effects. Daan Leijen is the primary researcher behind it, and his work has been featured on LtU before, mainly on row polymorphism in the Morrow language.

So far there is no paper available on the language design, just the slides from a Lang.Next talk (which does not seem to have video available at Channel 9), but Koka is on the program for HOPE 2012.


### Very nice. Some complaints:

Very nice.

Some complaints: the tutorial explains that the signature of while:

```koka
while : ( pred : () -> <div|e> bool, action : () -> <div|e> () ) -> <div|e> ()
```

does no harm, but it does not explain why it is used instead of the expected:

```koka
while : ( pred : () -> <e1> bool, action : () -> <e2> () ) -> <div|e1,e2> ()
```

I am not sure what the benefits of inferring a "divergence" effect are. The problem with it is well illustrated by this quote:

> Note that the type inference engine is currently not powerful enough to prove that this recursive function terminates

It then goes on to suggest an imperative version of Fibonacci as the solution! At that point you may as well just ignore the div effect type.
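For reference, the recursive definition in question looks roughly like this (a sketch in current Koka syntax; the tutorial's exact code may differ):

```koka
// the inference engine cannot see that n - 1 and n - 2 decrease
// towards the base case, so the inferred type carries div:
//   fib : (n : int) -> div int
fun fib( n : int ) : div int
  if n <= 1 then n else fib(n - 1) + fib(n - 2)
```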

I like the constructor-based updates, but does it mean this doesn't work:

```koka
p(address.street = "Madison Sq")
```

That is, update a field inside a field. It seems this requires writing:

```koka
p(address = p.address(street = "Madison Sq"))
```
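Spelled out with hypothetical struct definitions (current Koka syntax; the types are mine, for illustration), the nested update would look like:

```koka
struct address
  street : string
  city   : string

struct person
  name    : string
  address : address

// update a field inside a field: rebuild the inner record
// with its copy constructor, then the outer one
fun move( p : person ) : person
  p( address = p.address( street = "Madison Sq" ) )
```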

### inferring divergence effects

The presence of divergence complicates reasoning about programs. For example, a piece of code on which no other code has data dependencies can be discarded without affecting referential transparency, under some restrictions on its side effects. Diverging code, of course, must not be discarded.

Keeping track of an upper bound on divergence enables such reasoning, e.g. in an optimizing compiler.

While not a tight upper bound (computing one is undecidable anyway), such an inference gives an easy-to-compute bound. I conjecture that a lot of basic pattern-matching code fits this criterion (and is probably the code we want to optimise away, too). However, I have no hard evidence supporting this conjecture, nor do I know of studies examining the question.
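Plain structural recursion over an inductive datatype is the easy case: the recursive call is on a strict sub-part of the input, which I believe Koka's termination analysis handles. A sketch (current Koka syntax; `len` is an illustrative name, shadowing the standard library):

```koka
// the recursive call is on the tail xx, a structurally smaller
// argument, so the function can be typed without the div effect:
//   len : forall<a> (xs : list<a>) -> int
fun len( xs : list<a> ) : int
  match xs
    Nil         -> 0
    Cons(_, xx) -> 1 + len(xx)
```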

Another reason for tracking divergence is determining whether code is suitable to run on other execution targets. The application I have in mind is database queries, where recursive code cannot be run directly on the DB engine, even when it does terminate. (The system I saw this in was Wadler & co.'s Links web-programming language.)

### Unification rules for while

The slides go into a bit more detail (up to slide 11) about why `while` has this type, as motivation for row polymorphism for effects. Essentially, it simplifies the types (which helps inference) while letting unification solve the thorny issues.
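Concretely, because `pred`, `action`, and the result all share the single row `<div|e>`, a call site needs only ordinary unification: `e` is instantiated with whatever extra effects the caller's closures have, with no effect-union operation required. A sketch (current Koka syntax; the name `countdown` is illustrative):

```koka
// the closures read and write the local variable i; unification
// instantiates e with the state effect, the loop contributes div,
// and after the local state is discharged the type is roughly:
//   countdown : (n : int) -> div ()
fun countdown( n : int ) : div ()
  var i := n
  while { i > 0 } { i := i - 1 }
```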