LtU Forum

A pretty printing algorithm

A pretty printing algorithm that works for strict languages, with an example implementation in OCaml.

For a relational language, how to treat KeyValues and Streams as relations, with operators that add or remove "columns"

I'm building a relational language where I wish to treat everything as a relation. The main point is to provide a common core of relational operators (aka queries) that works across all of them. A "relation" here is a table with a header of named fields, plus columns and rows.

I have a container that matches this directly; internally it is an array:

id name age
1 Jhon 18

Now I have other containers that are key/value, like a hashmap or a BTreeMap, and single-value ones, like generators, ranges, and vectors. Only generators, being read-only and forward-only, can be said to output a fixed schema.

You can say a hashmap, for example, is a relation that looks like:

Key Value
1 Jhon

And a vector one like:

it
1
2

For all of them, I want to provide the same operations, like:

table ?where #age = 18
kv    ?where #key = 1
array ?where #it = 2

Now my dilemma is what to do for operations that add or remove columns, like joins and projection:

table ?select *, #age + 10 as older
kv    ?select *, #value |> String.Upper as name
array ?select *, #it * 10 as calc

What could be the best option here?

1- Say that kv/array have fixed schemas and can't grow or shrink their headers

2- Say a KV has only the KEY fixed, and the value header is flattened, so it becomes:

[key, value, name]

The problem is not what to OUTPUT; I could just output tuples. It is how to REPRESENT the relation if it is more than the KV.

So, if I have 10 columns, what is the best option to put inside a KV? Consider that the relational model allows renaming columns, and adding and removing them.

Another problem is that KV containers like BTrees allow modeling... trees. If I instead think of a BTree as an index behind a table, the language loses the ability to "naturally" model trees. BTW, I want to have AGDTs.

P.S.: Another potential question is: what kind of container could work like a "table" but also like hashmaps/trees? How do you model a type that allows several "backends"?

P.S.: Originally asked here: but without good answers... so I am asking here with a bit more detail.
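
The multiple-backends question can be approached as one interface with several implementations. Here is a minimal, hypothetical sketch in Rust (the names `Relation`, `Scalar`, `rows`, and `filter` are invented for illustration, not taken from the post): each backend reports its own header, and a generic `?where` works over any of them.

```rust
use std::collections::BTreeMap;

#[derive(Clone, Debug, PartialEq)]
enum Scalar {
    I64(i64),
    Str(String),
}

// One relational interface, several container backends.
trait Relation {
    fn schema(&self) -> Vec<String>; // header of named columns
    fn rows(&self) -> Box<dyn Iterator<Item = Vec<Scalar>> + '_>;

    // A generic `?where`: keep the rows matching a predicate.
    fn filter(&self, pred: &dyn Fn(&[Scalar]) -> bool) -> Vec<Vec<Scalar>> {
        self.rows().filter(|r| pred(r.as_slice())).collect()
    }
}

// Backend 1: a plain table stored as an array of rows.
struct Table {
    header: Vec<String>,
    data: Vec<Vec<Scalar>>,
}

impl Relation for Table {
    fn schema(&self) -> Vec<String> {
        self.header.clone()
    }
    fn rows(&self) -> Box<dyn Iterator<Item = Vec<Scalar>> + '_> {
        Box::new(self.data.iter().cloned())
    }
}

// Backend 2: a BTreeMap viewed as a two-column [key, value] relation.
impl Relation for BTreeMap<i64, String> {
    fn schema(&self) -> Vec<String> {
        vec!["key".into(), "value".into()]
    }
    fn rows(&self) -> Box<dyn Iterator<Item = Vec<Scalar>> + '_> {
        Box::new(
            self.iter()
                .map(|(k, v)| vec![Scalar::I64(*k), Scalar::Str(v.clone())]),
        )
    }
}

fn main() {
    let mut kv = BTreeMap::new();
    kv.insert(1, "Jhon".to_string());
    kv.insert(2, "Mary".to_string());
    // kv ?where #key = 1
    let hits = kv.filter(&|r| r[0] == Scalar::I64(1));
    println!("{:?}", hits);
}
```

Under this sketch, operations that grow or shrink the header (projection, joins) would produce a generic `Table` result regardless of the source backend, which sidesteps the question of how to store extra columns inside a KV, at the cost of leaving the original container's indexing behind.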

Streaming Language Rewrite Processing (SLRP)

Streaming Language Rewrite Processing (SLRP) is a low-level computing model that is essentially characterized by limited-memory processors rewriting a much larger input stream. Computation under this condition requires multiple passes, but is highly amenable to pipelining and partitioning, and thus parallelism. SLRP processors may have side channels to external devices that we drive through ‘effectful’ rewrite rules.

TLDR: SLRP is a very promising alternative to the von Neumann architecture. SLRP offers advantages for security, concurrency, distribution, and scalability. However, SLRP performs poorly for data plumbing and loops because there is no pass-by-reference within the stream. This weakness can be mitigated by effects, shunting a region of the stream to a remote device and keeping a small placeholder – a reference.

The article develops a Turing-complete foundation for an SLRP machine code based on concatenative combinatory logic, and sketches several extensions. The challenge is that the operands for a combinator may be larger than a processor's memory; e.g., for a copy operator `c[X] == [X][X]`, the value `[X]` might be larger than what our processor can remember, so a processor cannot simply copy and paste. Solving this involves divide-and-conquer tactics and a notion of open-ended blocks.
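
The multi-pass flavor of the model can be illustrated with a toy rewriter (this is my own sketch, not the paper's machine code): a processor buffers at most `window` tokens, applies the identity rule `i [X] == X` when the whole block `[X]` fits in the buffer, and otherwise passes tokens through unchanged so that a later pass (or a downstream processor in a pipeline) can retry.

```rust
// Toy single-pass SLRP-style rewriter over a token stream.
// Rule: i [X] == X, applied only when [X] fits within `window` tokens.
fn one_pass(tokens: &[&str], window: usize) -> Vec<String> {
    let mut out = Vec::new();
    let mut i = 0;
    while i < tokens.len() {
        if tokens[i] == "i" && i + 1 < tokens.len() && tokens[i + 1] == "[" {
            // Look for the matching "]" without scanning past the window.
            let mut depth = 0;
            let mut end = None;
            for j in (i + 1)..tokens.len().min(i + 1 + window) {
                match tokens[j] {
                    "[" => depth += 1,
                    "]" => {
                        depth -= 1;
                        if depth == 0 {
                            end = Some(j);
                            break;
                        }
                    }
                    _ => {}
                }
            }
            if let Some(j) = end {
                // i [X] == X: drop `i` and the outer brackets, keep X.
                out.extend(tokens[i + 2..j].iter().map(|t| t.to_string()));
                i = j + 1;
                continue;
            }
        }
        // Block too large for this processor: defer to the next pass.
        out.push(tokens[i].to_string());
        i += 1;
    }
    out
}

fn main() {
    let pass1 = one_pass(&["i", "[", "i", "[", "a", "b", "]", "]"], 4);
    // The inner block fit the window; the outer `i [...]` waits for pass 2.
    println!("{:?}", pass1); // ["i", "[", "a", "b", "]"]
}
```

Running a second pass over the output reduces it to `["a", "b"]`, showing how a bounded-memory processor still completes the rewrite given multiple passes, which is exactly the pipelining opportunity the model exploits.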

Paper on ParaSail published in <Programming> journal V3.3; new release also available

We very recently published a long-ish article on the ParaSail programming language (ParaSail = Parallel Specification and Implementation Language) in the relatively new Journal on the Art, Science, and Engineering of Programming, Volume 3, Issue 3:

ParaSail: A Pointer-Free Pervasively-Parallel Language for Irregular Computations

There is also a new (version 8.0) release of the ParaSail binaries (Linux, Mac, and partial Windows) and sources at:

A direct download is available at:

Abstraction Tiers of Notations

DZone has published three of my articles:

A few key points from them:

  • Abstractions are not created equal; they can be sorted into tiers depending on the structural elements they use and the way these elements are used.
  • The tiers are closely tied to the cognitive development stages described by J. Piaget and M.L. Commons.
  • Each new tier of abstractions is built upon constructs of the previous tier.
  • Adoption of a new tier causes a major new generation of general-purpose programming languages to appear.
  • Based on the model, the next generation of general-purpose programming languages will likely be System-Oriented Programming, which will use ‘system’ as a new category of type that can be instantiated. Current dependency injection frameworks are early and incomplete attempts to implement this new abstraction.
  • Based on the model, the evolution of database languages should parallel the evolution of general-purpose programming languages, with the same abstraction tiers adopted.

Note, I’ve described some elements of this model in some discussions on LtU previously.

Looking for papers on covariance and contravariance


I am looking for papers and a particular paper I used to have on covariance and contravariance.

The paper I had many years ago (probably around 2006 or 2007, when I originally started looking into type theory and did not yet realize its relevance) contained many inference rules for covariance and contravariance, using plus and minus symbols within the rules.

I am now researching this area and am interested in any papers on this topic.

Many thanks in advance,

Aaron Gray

Functional Design Patterns - Relating Haskell Typeclasses to Software Design Patterns

Recently, while re-reading through the Haskell Typeclassopedia I thought it would be a good exercise to map the structure of software design-patterns to the concepts found in the Haskell type class library and in functional programming in general.

By searching the web I found some blog entries studying specific patterns, but I did not come across any comprehensive study. As it seemed that nobody did this kind of work yet I found it worthy to spend some time on it and write down all my findings on the subject.

I think this kind of exposition could be helpful if you are either:

- a programmer with an OO background who wants to get a better grip on how to implement more complex designs in functional programming
- a functional programmer who wants to get a deeper intuition for type classes.

I have set this up as a GitHub project with full source code for all examples:

The Patternopedia

POLA Would Have Prevented the Event-Stream Incident

POLA Would Have Prevented the Event-Stream Incident by Kate Sills

The JavaScript world was rocked this week by news that the popular npm package event-stream included malicious code that attempted to steal the private keys of certain Bitcoin users.

Since the attack was discovered, both the JavaScript community and the cryptocurrency community have been passionately debating how to prevent such an attack. At Agoric, we think this attack was entirely preventable, and the answer is POLA, the Principle of Least Authority.

This npm / event-stream debacle is the perfect teaching moment for POLA (Principle of Least Authority), and for the need to support least authority for JavaScript libraries. My talk Securing EcmaScript, presented to Node Security, explained many of these issues prior to this particular incident.

For LtU, my best explanation of POLA is Verify What? Navigating the Attack Surface given to the "Formal Methods Meets JavaScript" workshop at Imperial College.

Video on Unison/comparison to Haskell/Monads/distributed computing

Monads are passé, apparently.

Previously mentioned on LtU rather a long time ago.

Lambdas and objects as an existential type

I’m looking at type theory from the point of view of general-purpose programming language evolution, and I’ve found something that does not look nice.

There is a stage when function pointers, like the ones in C, were introduced into languages. Unlike lambdas, they are stateless: they can be evaluated using only their explicit arguments, constants (or global state), and calls to other stateless functions. Their type will be designated as:

T ⇨ A

Now let’s take the usual lambdas; their type will be designated as:

T → A

The relationship between these two types is the following:

A → B = ∃ T (T ⨉ (T ⨉ A ⇨ B))
apply:  (A → B) ⨉ A ⇨ B = 𝜆 ((t, f), a), f(t, a) // unpacking existential type here
curry: ((A ⨉ B) → C) ⇨ (A → B → C) = 𝜆(t, f) ((t, f), (𝜆((t1, f1),  a1) ((t1, f1, a1),  (𝜆((t2, f2, a2), b2)  f2(t2, (a2, b2))))))
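
The encoding above can be sketched concretely. The following is an illustrative Rust rendering (the names `Closure` and `apply` are mine; Rust's `fn` pointer type plays the role of the stateless T ⨉ A ⇨ B function): a lambda is a package of an environment together with a stateless function over that environment.

```rust
// A → B encoded as ∃T. T ⨉ (T ⨉ A ⇨ B): an environment of type Env
// packaged with a stateless function pointer over (Env, A).
struct Closure<Env, A, B> {
    env: Env,
    code: fn(&Env, A) -> B, // stateless: a plain function pointer, no captures
}

// apply: (A → B) ⨉ A ⇨ B — unpack the package, call code on (env, arg).
fn apply<Env, A, B>(f: &Closure<Env, A, B>, a: A) -> B {
    (f.code)(&f.env, a)
}

fn main() {
    let offset = 10;
    // The lambda `|x| x + offset` becomes env = 10 plus a stateless function.
    let add = Closure {
        env: offset,
        code: |env: &i32, x: i32| x + *env,
    };
    println!("{}", apply(&add, 32)); // 42
}
```

Currying works precisely because the package can stash a first argument in a new, larger environment, which a bare T ⇨ A function pointer has no place to do; that is the operational content of the curry equation above.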

Such a definition is needed to support currying and to capture the implicit environment, and it reflects what actually happens in the code. Every function of type T ⇨ A can be trivially converted to T → A; however, stateless functions support a different set of operations. In particular, currying is not supported. So it is not a subclass, but a separate kind of entity.

This existential type captures the difference between the structured programming paradigm and the object-oriented or functional programming paradigms. So, theoretically, when we try to study paradigms, we also need to make this distinction.

I’m interested in whether there are research papers that describe this distinction. The discussion in the TaPL book is somewhat unsatisfying, because explicit existential types are defined there using implicit existential types. Theoretically, type theory should start from stateless functions, to reflect the evolution path, rather than go in the reverse direction. Such a point of view might also be useful for reasoning about compiler transformations, so there might be papers on this equivalence in that area too.
