LtU Forum

Refining Structured Type System

This is my first more serious paper on Structured Type System.

Every theory needs a syntax in which to express its elements, so the road to a theory about theories leads through syntax-defining land. Structured type system therefore begins by providing a flexible, generalized text parser that builds internal abstract syntax trees (ASTs) from input data. The other aspect of a theory about theories inevitably covers the meaning of the input data. This is called semantics, and this is where structured type system makes it possible to define deeper connections between the syntactic elements of ASTs. For this purpose, structured type system uses a kind of function known from the functional programming paradigm. These functions can process any data corpus, be it a natural or an artificial language translation, which in turn happens to be just enough for running tasks of any complexity that analyze existing data and calculate new data from an input.

In short, we use BNF-ish grammars as types for function parameters and function results. Some nice constructions can be made by combining grammars and functions. One of the most important properties of structured type system is its ability to extend grammars from outside the grammar definitions, based on function result types. It is fairly simple: wherever an expression of a certain type is expected, a grammar that produces that type can be used, and that gives us syntax extensibility. Conveniently, we can combine grammar definitions and their inputs in the same source code file.
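This is not the paper's actual notation, but the flavour of "grammars as types" can be sketched in Haskell with parser combinators, where a value of type Parser a stands in for a grammar whose parse results have type a (all names here are made up for illustration):

```haskell
-- A toy "grammar": a function from input text to a result plus leftover text.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

-- A grammar for integer literals; its result type is Int.
int :: Parser Int
int = Parser $ \s -> case span (`elem` "0123456789") s of
  ("", _)    -> Nothing
  (ds, rest) -> Just (read ds, rest)

-- Wherever an Int-typed expression is expected, any grammar producing an
-- Int may be plugged in -- the syntax-extensibility rule described above.
addExpr :: Parser Int -> Parser Int
addExpr p = Parser $ \s ->
  case runParser p s of
    Just (x, '+':s1) ->
      case runParser p s1 of
        Just (y, s2) -> Just (x + y, s2)
        Nothing      -> Nothing
    _ -> Nothing
```

For example, runParser (addExpr int) "2+3" parses both operands with the int grammar and yields the sum with no leftover input.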

I was hoping to get some feedback and criticism from this community before attempting to get more publicity for the paper. This is an important milestone for me, and I want to thank you all for being such an inspirational community during my research.

Cool stuff from recent conferences

I heard there was some good stuff. How about someone who was there posts the headline-worthy papers?

Céu: Structured Synchronous Reactive Programming (SSRP)

Céu is an Esterel-based synchronous language.

It appeared in LtU in the past in an announcement of the "SPLASH: Future of Programming Workshop" program.

In this new public version, we are trying to surpass the academic fences with a more polished work (docs, build, etc).

In summary:

  • Reactive: code executes in reactions to events
  • Synchronous: reactions run to completion in discrete logical units of time (there's no implicit preemption nor real parallelism)
  • Structured: programs use structured/imperative control mechanisms, such as "await" and "par" (to combine multiple awaiting lines of execution)

Structured programming avoids deep nesting of callbacks, letting programmers code in a direct/sequential/imperative style. In addition, when a line of execution is aborted, all allocated resources are safely released.

The synchronous model leads to deterministic execution and simpler reasoning, since it does not demand explicit synchronization from the programmer (e.g., locks and queues). It is also lightweight enough to fit constrained embedded systems.
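As a rough illustration (ordinary Haskell, not Céu), the run-to-completion discipline can be modelled as a pure fold over a stream of events, which is exactly why execution comes out deterministic:

```haskell
-- A reaction is a pure, run-to-completion step from (state, event) to
-- (new state, outputs); nothing can preempt it mid-way.
type Reaction s e o = s -> e -> (s, [o])

-- Driving a synchronous program is a left-to-right fold over its events.
run :: Reaction s e o -> s -> [e] -> (s, [o])
run react = go
  where
    go s []     = (s, [])
    go s (e:es) =
      let (s', out)   = react s e
          (s'', rest) = go s' es
      in  (s'', out ++ rest)

-- A made-up example: count button presses, emitting a message per press.
counter :: Reaction Int () String
counter n () = (n + 1, ["pressed " ++ show (n + 1)])
```

Since each reaction is a pure function and events are consumed one at a time, the same event stream always produces the same outputs, with no locks or queues in sight.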

We promote SSRP as a complement to classical structured/imperative programming, much as FRP now complements functional programming.

Archaeological dig to find the first Lisp example of the Y-combinator

I'm trying to find the first Lisp examples of the Y-combinator. Beyond that I am also trying to find the first time the Y-combinator was demonstrated using the factorial function and the mutually recursive definition of odd/even.

What works should I be looking at? The first Scheme paper references fixed-point combinators on page 16 and also shows the familiar LISP definition of the factorial function. But it does not express the factorial function using a fixed-point operator.
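For reference, the two demonstrations in question transcribe into Haskell like this (the early LISP versions would of course spell this with LAMBDA and LABEL instead):

```haskell
-- The fixed-point combinator; Haskell's recursive bindings allow a
-- direct definition that a pure lambda-calculus Y must encode.
fix :: (a -> a) -> a
fix f = f (fix f)

-- Factorial as the fixed point of a non-recursive functional.
fact :: Integer -> Integer
fact = fix (\self n -> if n == 0 then 1 else n * self (n - 1))

-- Mutually recursive even/odd: the fixed point of a pair of functionals
-- (the lazy ~ pattern keeps the knot-tying from diverging).
evenOdd :: (Integer -> Bool, Integer -> Bool)
evenOdd = fix (\ ~(ev, od) ->
  ( \n -> n == 0 || od (n - 1)
  , \n -> n /= 0 && ev (n - 1) ))
```

Here fact 5 evaluates to 120, and fst evenOdd / snd evenOdd are the even and odd predicates respectively.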

What would a modern imperative language look like? All the love here is for functional languages only...

After reading a lot about compilers and languages, I see that most research, if not all, is about functional languages and complex type systems.

Now that I'm toying with building one, I see that I'm biasing the language towards being functional; yet the truth is that I'm more of an imperative guy.

So I wonder what is new or forgotten in the world of imperative or non-functional languages, the more "mainstream" languages. Are Go/Rust/Swift all that's there?

If I wanted to build a language (more mainstream, imperative, etc.) with the wisdom of today, how would it look? Has it already been made? Maybe Ada or similar?

I would probably switch it to "const by default, variable optional", and use algebraic data types and the match clause, but I haven't thought about what else...

Inference of Polymorphic Recursion

In the following (Haskell) example, the type annotation on f is required:

f :: a -> (Int, a)
f x = (g True, x)

g True = 0
g False = fst (f 'a') + fst (f 0)

main = do
    print (fst (f True))

I can understand why in general, but I wonder if we could just decide to generalize arbitrarily in the order that declarations appear, so that in this case the type of f would be inferred, but if you switched the definition order you'd get a type error. When f is generalized, g would be constrained to Bool -> b, where b would be unified after generalization. Is this something that might work (but isn't done because it's arbitrary and makes definition order matter), or are there hard cases I need to consider?


Generic overload resolution

Kitten has ad-hoc static polymorphism in the form of traits. You can declare a trait with a polymorphic type signature, then define instances with specialisations of that signature:

// Semigroup operation
trait + <T> (T, T -> T)

instance + (Int32, Int32 -> Int32) { ... }

instance + (Int64, Int64 -> Int64) { ... }

This is checked with the standard “generic instance” subtyping relation, in which <T> (T, T -> T) <: (Int32, Int32 -> Int32). But the current compiler assumes that specialisations are fully saturated: if it infers that a particular call to + has type Int32, Int32 -> Int32, then it emits a direct call to the (mangled) name of the instance. I’d like to remove that assumption and allow instances to be generic, that is, partially specialised:

// List concatenation
instance + <T> (List<T>, List<T> -> List<T>) { ... }

// #1: Map union
instance + <K, V> (Map<K, V>, Map<K, V> -> Map<K, V>) { ... }

// #2: A more efficient implementation when the keys are strings
instance + <V> (Map<Text, V>, Map<Text, V> -> Map<Text, V>) { ... }

But this raises a problem: I want to select the most specific instance that matches a given inferred type. How exactly do you determine that?

That is, for Map<Text, Int32>, #1 and #2 are both valid, but #2 should be preferred because it’s more specific. There are also circumstances in which neither of two types is more specific: if we added an instance #3 for <K> (Map<K, Int32>, Map<K, Int32> -> Map<K, Int32>), then #2 and #3 would be equally good matches, so the programmer would have to resolve the ambiguity with a type signature.
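One common answer (a sketch of how it might work, not a description of Kitten's compiler) is to define "more specific" by one-way matching: instance head A is more specific than head B when B's head matches A's but not vice versa. In Haskell, ignoring kinds and the consistency of repeated variables:

```haskell
import Data.Maybe (isJust)

-- A tiny representation of instance heads.
data Ty = Var String | Con String [Ty] deriving (Eq, Show)

-- One-way matching: bind the pattern's variables (left) to make it equal
-- the target (right); the target's own variables are held rigid.
match :: Ty -> Ty -> Maybe [(String, Ty)]
match (Var v) t = Just [(v, t)]
match (Con c as) (Con d bs)
  | c == d && length as == length bs =
      concat <$> sequence (zipWith match as bs)
match _ _ = Nothing

-- A is more specific than B iff B matches A but A does not match B.
moreSpecific :: Ty -> Ty -> Bool
moreSpecific a b = isJust (match b a) && not (isJust (match a b))
```

With mapKV = Con "Map" [Var "K", Var "V"] and mapText = Con "Map" [Con "Text" [], Var "V"], moreSpecific mapText mapKV holds, so #2 beats #1; for heads like #2 and #3, neither matches the other, so they come out incomparable and the programmer must annotate.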


Hi, I wonder if someone can help resolve the conflict described below.

My system is an Algol like language which supports both functional and procedural code.
Now, I hate the way C handles lvalues: it doesn't extend well, one has to make horrible rules
and list all possible l-contexts, and attempts to do this in C++ are an abysmal failure.
My solution is elegant! Consider a record:

(x=1, y=42.1)

which has type

(x:int, y:double)

then this is a first class value, and the field names are projections:

x (x=1,y=42.1)

Since I have overloading, there's no conflict with the same field name in some other record:
the field name is just an overloaded function name.
To make it look nicer you can use reverse application:

(x=1,y=42.1) . x

Now, to get rid of lvalues we introduce pointer types and variables so that in

var xy = (x=1,y=42.1);
&xy <- (x=2,y=43.1);

This is really cool because & is not an operator,
just a way to write the value which is the address of a variable.
We use a procedure, written as an infix left arrow, which takes a pointer to T
as its first argument and a T as its second, and stores the value at the specified address. So it's all values.

To assign to a component, we introduce a second overload for each projection that takes a pointer argument and returns a pointer:

&xy . x <- 3;

This works for other products as well (tuples, arrays and structs).
So, we have a purely value based semantics and a sane type system.
In particular we have a very nice rule that relates values and objects.

So far so good, but now I have another feature called "compact linear types"
which are packed encodings of any type defined by the rule:
unit is compact linear, and any sum, product, or exponential of compact linear types is compact linear.
A small compact linear type is one that fits in a 64-bit machine word.
So for example the type 3 * 4 is a single 64 bit value which is a subrange of integer 0 thru 11.
Compact linear types with integer coercions used as array indices give polyadic arrays
(rank independent array programming).
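To make the packing concrete (a sketch in Haskell rather than the actual language): a value of type 3 * 4 is a pair (a, b) with 0 <= a < 3 and 0 <= b < 4, stored as the single word a * 4 + b, so projections are arithmetic and components have no address of their own:

```haskell
-- Pack a value of the compact linear type 3 * 4 into one machine word.
pack :: Int -> Int -> Int
pack a b = a * 4 + b  -- assumes 0 <= a < 3, 0 <= b < 4

-- Value projections work fine on the packed word...
proj0, proj1 :: Int -> Int
proj0 w = w `div` 4
proj1 w = w `mod` 4

-- ...but "storing through a pointer to a component" must read, modify and
-- rewrite the whole word, since the component is in effect a bitfield.
setProj1 :: Int -> Int -> Int
setProj1 w b = (w `div` 4) * 4 + b
```

For example, pack 2 3 gives 11 and setProj1 11 1 gives 9: updating one component rewrites the entire word, which is why an ordinary pointer projection, which expects an addressable component, cannot exist for these types.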

The problem is... compact linear type components are not addressable.
Projection functions work fine, but there are no overloads for pointer projections.

And so the conflict: polymorphic pointer projections are unsound:

proc f[T,U] (r: &(T,U)) (v: T) { r.0 <- v; }

will not work if r is a pointer to a compact linear object. I can think of three solutions:

(1) invent a new pointer type (destroys uniform pointer representation property)
(2) have distinct product, sum, and exponential operators for compact linear types
(3) use the same operators but introduce pack and unpack operations

Domain specific language for playing games

Writing computer games for people to play, even quite simple ones, is a surprisingly challenging task. Here I'm thinking of turn-based games like card and board games, puzzles, block-pushing and perhaps simple arcade games like Space Invaders. It would seem fairly obvious that large parts of the heavy lifting will be common from one to another and that differences in game play might well be encapsulated in a DSL.

Others have had the same thought, and there are some good reviews of the area. The GGP language GDL is a Datalog derivative, Zillions uses a Lisp dialect, and Axiom is a kind of Forth. There are several others, including PuzzleScript, CGL and VGDL. GGP in particular is the focus of a lot of AI work, though not so much on the UI side. Several of these projects appear dormant.

Considering the prevalence of games in the community and the number of people involved in writing them, there seems to be surprisingly little effort in this direction. I was wondering if anyone is aware of, or involved in, any active work in this area, particularly on the language side, before I go and invent my own.

Process Network for Effects, Monad Alternative

Monads are an awkward effects model in the context of concurrency. We get incidental complexity in the form of futures or forking threads with shared memory. The running program becomes entangled with the environment, which hinders persistence, mobility, and debugging. So I sought alternatives in the literature.

Kahn Process Networks (KPNs) seem like a very good alternative. From an external perspective, they share a lot of similarities with monads, except that we get more than one input (continuation) and output (effect) port, and thus can model concurrent operations without relying on effects or environment support. Internally, KPNs have a lot of similarities to free monads: we can compose KPNs to handle effects internally, translate them, and so on. Using KPNs as first-class values allows for dynamic structure and mobile processes.
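A toy model of the single-input, single-output case (ordinary Haskell, ignoring multiple ports and asynchrony): a process is a function on lazy streams, with laziness standing in for blocking reads on a FIFO channel, and wiring two processes together is just composition:

```haskell
-- A one-in, one-out KPN process as a function on (lazy) streams.
type Process a b = [a] -> [b]

-- Connecting an output channel to an input channel is composition,
-- much like layering handlers over a free monad.
wire :: Process a b -> Process b c -> Process a c
wire p q = q . p

-- A feedback network: the classic Fibonacci stream feeding itself.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
```

Determinism falls out of purity here: wire (map (+1)) (map (*2)) applied to [1,2,3] always yields [4,6,8], and the feedback loop in fibs needs no locks or scheduling to be well-defined.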

The main feature missing from KPNs is the ability to work with asynchronous inputs. But it is not difficult to add time to the model, and thus support asynchronous messaging and merges in a style similar to functional-reactive or flow-based programming (and somewhere between the two in terms of expressiveness). I doubt this is a new idea.

I've written about these ideas in more detail on my blog.

Reactive KPNs with open ports or channels also make a better FRP than most, having a far more direct API for pushing inputs and pulling outputs deep within a network.
