## Object Algebras

The ECOOP 2012 best paper award this year was given to Bruno Oliveira and William Cook for the paper "Extensibility for the Masses: Practical Extensibility with Object Algebras".

This paper is (yet another) solution to the expression problem. The basic idea is that you create a family of objects via an Abstract Factory. You can add new objects to the family by extending the factory as per usual, but the twist is you can also add new operations by overriding the factory methods to do other things, like evaluation or pretty printing.
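A minimal Java sketch of the idea (illustrative names, not the paper's exact API): the abstract factory interface is the algebra, expressions are written as functions over it, and each new operation is just another implementation of the factory.

```java
// The object algebra: an abstract factory whose carrier type A is left abstract.
interface ExpAlg<A> {
    A lit(int n);
    A add(A l, A r);
}

// Operation 1: evaluation, by instantiating the carrier to Integer.
class EvalAlg implements ExpAlg<Integer> {
    public Integer lit(int n) { return n; }
    public Integer add(Integer l, Integer r) { return l + r; }
}

// Operation 2: pretty printing, by instantiating the carrier to String.
class PrintAlg implements ExpAlg<String> {
    public String lit(int n) { return Integer.toString(n); }
    public String add(String l, String r) { return "(" + l + " + " + r + ")"; }
}

// Extending the data side: a new variant is just an extended factory.
interface MulAlg<A> extends ExpAlg<A> {
    A mul(A l, A r);
}

class AlgebraDemo {
    // Expressions are written against the abstract factory...
    static <A> A threePlusFour(ExpAlg<A> alg) {
        return alg.add(alg.lit(3), alg.lit(4));
    }

    public static void main(String[] args) {
        // ...and can be run under any operation.
        System.out.println(threePlusFour(new EvalAlg()));   // 7
        System.out.println(threePlusFour(new PrintAlg()));  // (3 + 4)
    }
}
```

The same `threePlusFour` term is interpreted two different ways without touching its definition, which is the essence of the extensibility claim.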

Bruno has also been collecting sample implementations that use Object Algebras to solve a simple instance of the expression problem.


### This is very similar

This is very similar to the encoding of finally tagless interpreters using higher kinds that I developed in 2009 for C# (edit: actually, make that 2008, first posted here on LtU).

The only difference is that this paper's factory cannot statically prevent using a string expression with a bool operation. That is, only some implementations will throw an error in this invalid case, not all implementations. For instance, the pretty printer won't throw an error, but the interpreter will. That's a very unfortunate property of this approach IMO, unless I've missed something that would prevent this?
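A small Java sketch of the issue being described (illustrative, not code from the paper): with a single carrier type, a type-incorrect term such as "if 1 then 2 else 3" compiles fine, and whether it fails depends entirely on which algebra you run it under.

```java
// One carrier type A covers both int- and bool-valued expressions,
// so the compiler cannot reject type-incorrect terms.
interface IntBoolAlg<A> {
    A lit(int n);
    A bool_(boolean b);
    A iff(A cond, A onTrue, A onFalse);
}

// Interpreter: the carrier is Object and iff casts the condition,
// so an integer condition fails only at runtime.
class IntBoolEval implements IntBoolAlg<Object> {
    public Object lit(int n) { return n; }
    public Object bool_(boolean b) { return b; }
    public Object iff(Object cond, Object onTrue, Object onFalse) {
        return ((Boolean) cond) ? onTrue : onFalse;  // ClassCastException on an int condition
    }
}

// Pretty printer: never inspects values, so the same bad term "works".
class IntBoolPrint implements IntBoolAlg<String> {
    public String lit(int n) { return Integer.toString(n); }
    public String bool_(boolean b) { return Boolean.toString(b); }
    public String iff(String cond, String onTrue, String onFalse) {
        return "if " + cond + " then " + onTrue + " else " + onFalse;
    }
}

class IllTyped {
    // "if 1 then 2 else 3": statically fine, semantically nonsense.
    static <A> A bad(IntBoolAlg<A> alg) {
        return alg.iff(alg.lit(1), alg.lit(2), alg.lit(3));
    }

    public static void main(String[] args) {
        System.out.println(bad(new IntBoolPrint()));  // prints: if 1 then 2 else 3
        bad(new IntBoolEval());                       // throws ClassCastException
    }
}
```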

You could possibly prevent this by distinguishing the different expression types for bool and int by representing them as different type parameters, but you quickly run head first into the lack of higher kinds. For instance, iff will accept any type for the then/else branches, not just expression types (unless you overload iff for every expression type).

That was the sole purpose of the Repr<T, B> type in my encoding, which specifies the type of the expression value T. Even though it uses casts, these casts are safe unless the client intentionally goes out of his way to crash his own program.

### Now that's a neatly

Now that's a neatly interesting use case for the CRTP that some dislike to see used in C# (e.g., Eric Lippert for one, if I recall correctly). I'm still confused a bit on how I could have missed your blog post so far, especially since Jacques Carette linked to his Finally Tagless paper recently in another LtU discussion thread. A very important paper, IMO, too, btw.

Unrelated to typing proper, but relevant to syntax at least, another use I had found for the CRTP was to reify and constrain the dynamic extensions/amendments one would make, at runtime, to PEG-based parser combinators. I'm eventually abandoning that idea, not because of the CRTP specifics, but because, from a strict design viewpoint, that much dynamism inexorably leads to untamable nondeterminism(*) because of the obvious Turing completeness it brings.

Your CRTP use case is much more excusable and relevant, I guess. :) Thanks again for sharing.

(*)which, I eventually decided, is not only overkill, but also mostly harmful for my initial problem domain and design goals (re: seeing languages themselves as first-class in the platform)

### Just remembered that I use

Just remembered that I use the exact technique from this paper in my parsing library because I didn't need higher kinded types. You can see the use in the test code for a simple calculator for which I defined an interpreter. The interface is then abstractly extended into an expression language, for which I provide two different interpreters, one based on an AST class hierarchy, and one based on closures.

### Thanks. Nice

Thanks. Nice food for thought.

### Here's a more detailed post

Here's a more detailed post on the Pratt parsing framework of which the previously linked interpreter is one example of an object algebra. See the section on extensible parsing.

### CRTP

I had not heard about CRTP before, but from the wiki page it looks like F-bounded polymorphism, or really just encoding open recursion and fixed points using templates/generics. This is related to the fact that most OO languages (including Java and C++) do not have inheritance at the interface/type level.
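For readers unfamiliar with the pattern, here is a common Java rendering of F-bounded "self types" (illustrative, not tied to object algebras): each subclass pins the type parameter to itself, so inherited methods can return the concrete subclass type.

```java
// F-bounded self type: T is constrained to be the subclass itself,
// so fluent methods inherited from Builder return that subclass.
abstract class Builder<T extends Builder<T>> {
    String name;

    @SuppressWarnings("unchecked")
    T self() { return (T) this; }

    T named(String n) { name = n; return self(); }
}

class HttpBuilder extends Builder<HttpBuilder> {
    int port;

    HttpBuilder port(int p) { port = p; return self(); }
}

class Crtp {
    public static void main(String[] args) {
        // named(...) returns HttpBuilder, not Builder, so port(...) chains.
        HttpBuilder b = new HttpBuilder().named("svc").port(8080);
        System.out.println(b.name + ":" + b.port);  // svc:8080
    }
}
```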

What I don't understand is how CRTP applies to Object Algebras. Can you explain?

### I think he meant it applied

I think he meant it applied to my encoding of a "brand" to ensure that subtypes of Exp<T> were properly typed to the implementing interpreter, so downcasts were guaranteed type-safe. Basically, take an object algebra:

    interface IIntBool<T>
    {
        T Int(int i);
        T Bool(bool b);
        T If(T cond, T then, T _else);
    }

Now lift T out into a common subtype with phantom types encoding the expression type, and a "brand":

    abstract class E<T, B> where B : IIntBool<T, B> { }

    interface IIntBool<T, B>
        where B : IIntBool<T, B>
    {
        E<int, B> Int(int i);
        E<bool, B> Bool(bool b);
        E<S, B> If<S>(E<bool, B> cond, E<S, B> then, E<S, B> _else);
    }

The B type parameter is the "brand" tying the subtype of E<T> to the interpreter it's for. Downcasts are safe except in the case where the client intentionally implements a new subclass of E with the same brand. I think it's as safe as you can get without higher-kinded types in Java/C#. The B type parameter is the CRTP pattern, since an implementation just passes itself as the type param, i.e. class PrettyPrinter<T> : IIntBool<T, PrettyPrinter<T>> { ... }.

### Brand Hell

Looks very familiar, although I never knew the name for this even. Also, it's best to have brands as classes in C# so you can take advantage of implicit coercions; otherwise your operators are going to be a bit dopey and you'll need to convert from Brand[SELF] to SELF a lot.

Also, CRTP is very similar to what we did with Jiazzi's open class pattern and my take on Scala's cake pattern (Jiazzi inspired both uses in Scala and Bling).

### Sorry for the confusion

Sorry for the confusion. My bad.

Indeed my remark about the CRTP was only indirectly related to your work re: object algebras. As naasking pointed out, it was more geared towards the use one can make out of that pattern during an implementation (among other possibilities) to express the sort of constraints that Torgersen's solution #4 to the Expression Problem requires (there, relevant to your object algebras).

AFAIC, I'm studying other ways to profit from the CRTP, only remotely related to the EP, granted, and from a rather different angle/point of view.

E.g. :

(this is valid C#)

    public abstract class PhraseOf<TLegacy>
    {
        protected PhraseOf() { Console.WriteLine("\r\n(In " + this.GetType().Name + ")"); }
        public abstract TLegacy CompilerFor<TInput>(TInput input);
    }

    public abstract class Language<TReified, TLegacy> : PhraseOf<TLegacy>
        where TReified : Language<TReified, TLegacy>, new()
    {
        public static readonly Language<TReified, TLegacy> Definition = new TReified();

        public static TLegacy operator /(string input, Language<TReified, TLegacy> language)
        {
            Console.WriteLine("Compiling ... " + input + " ...");
            return language.CompilerFor(input);
        }
    }

    // new ordered reified languages family, CIL-rooted
    public class CIL : PhraseOf<CIL>
    {
        public override CIL CompilerFor<TInput>(TInput input) { var compiler = new CIL(); /*...*/ return compiler; }
    }
    public class CSharp1 : Language<CSharp1, CIL>
    {
        public override CIL CompilerFor<TInput>(TInput input) { var compiler = new CIL(); /*...*/ return compiler; }
    }
    public class CSharp2 : Language<CSharp2, CSharp1>
    {
        public override CSharp1 CompilerFor<TInput>(TInput input) { var compiler = new CSharp1(); /*...*/ return compiler; }
    }
    public class CSharp3 : Language<CSharp3, CSharp2>
    {
        public override CSharp2 CompilerFor<TInput>(TInput input) { var compiler = new CSharp2(); /*...*/ return compiler; }
    }
    public class CSharp4 : Language<CSharp4, CSharp3>
    {
        public override CSharp3 CompilerFor<TInput>(TInput input) { var compiler = new CSharp3(); /*...*/ return compiler; }
    }

    public class OtherSerialization
    {
        public static readonly OtherSerialization CSHARP_1_COMPILER = new OtherSerialization("CSHARP_1_COMPILER");
        public static readonly OtherSerialization CSHARP_2_COMPILER = new OtherSerialization("CSHARP_2_COMPILER");
        public static readonly OtherSerialization CSHARP_3_COMPILER = new OtherSerialization("CSHARP_3_COMPILER");
        public static readonly OtherSerialization CSHARP_4_COMPILER = new OtherSerialization("CSHARP_4_COMPILER");

        public OtherSerialization(string content) { Content = content; }
        public string Content { get; private set; }
        public static implicit operator string(OtherSerialization value) { return value.Content; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            //doesn't type check: CSharp3 badUse = "..." / new CSharp3();
            //type checks ok: CSharp3 okUse = "..." / new CSharp4();

            CIL cil =
                "source of a C# 1 compiler written in CIL" /              //<-- reference implementation for language in (...) :
                (
                    "source of a C# 2 compiler written in C# 1" /         //<-- reference implementation for language in (...) :
                    (
                        "source of a C# 3 compiler written in C# 2" /     //<-- reference implementation for language in (...) :
                        (
                            "source of a C# 4 compiler written in C# 3" / //<-- reference implementation for :
                            new CSharp4()
                        ) // static type : CSharp3
                    ) // static type : CSharp2
                ) // static type : CSharp1
                ;

            Console.WriteLine();
            Console.WriteLine("----------------------------------------------------------------------------");

            CIL cil2 =
                OtherSerialization.CSHARP_1_COMPILER /              //<-- reference implementation for compiler in (...) :
                (
                    OtherSerialization.CSHARP_2_COMPILER /          //<-- reference implementation for compiler in (...) :
                    (
                        OtherSerialization.CSHARP_3_COMPILER /      //<-- reference implementation for compiler in (...) :
                        (
                            OtherSerialization.CSHARP_4_COMPILER /  //<-- reference implementation for :
                            new CSharp4()
                        ) // static type : CSharp3
                    ) // static type : CSharp2
                ) // static type : CSharp1
                ;
        }
    }

(note the role played by the CIL class type and its occurrences)

Which prints:

    (In CSharp4)
    Compiling ... source of a C# 4 compiler written in C# 3 ...

    (In CSharp3)
    Compiling ... source of a C# 3 compiler written in C# 2 ...

    (In CSharp2)
    Compiling ... source of a C# 2 compiler written in C# 1 ...

    (In CSharp1)
    Compiling ... source of a C# 1 compiler written in CIL ...

    (In CIL)

    ----------------------------------------------------------------------------

    (In CSharp4)
    Compiling ... CSHARP_4_COMPILER ...

    (In CSharp3)
    Compiling ... CSHARP_3_COMPILER ...

    (In CSharp2)
    Compiling ... CSHARP_2_COMPILER ...

    (In CSharp1)
    Compiling ... CSHARP_1_COMPILER ...

    (In CIL)

(if only for a rough, 20,000-foot glance at the sort of things I try to investigate, when I can... that is, rarely)

### Java *and* C++?

Just a nitpick here. Following your link, the example of something that cannot be done in Java can in fact be done in C++. The return type of an interface function can be redeclared in the subclass, so long as it doesn't break the original contract. In the example given, next() could return BiStream<T> in the subclass, and the usage example would compile without problem (and without CRTP). Here's a simple example.

In fact, according to this, you can also do it in Java as of JDK 5.0.

### Is it me

Is it me or is this new OO-solution the result of applying the dictionary construction (with minor adaptations to fit the target type system) to the Haskell solution (available as a sample implementation)?

Naturally you want the nominal subtyping of dictionaries (induced by instances) to correspond to subtyping of the classes that characterize them, i.e. object algebras.

### The main drawback

from a usability standpoint seems to be that the introductory forms for data must be expressed as a function over an object algebra.

This seems like it would often be at odds with separate compilation. While this scheme doesn't require that you control the definition of the data variants, it requires that you control their instantiation in order to instantiate them in terms of their Church encoding.

A more concrete example: if I have a separately compiled module that defines a parse tree type as well as a parser of type string -> parsetree, this scheme fails to allow me to add new operations over the parse tree because the library failed to encode the parse-tree as a function, instead giving it to me in concrete form.

This seems to make this primarily useful for designing a system to allow for retroactive extension, but less useful for allowing retroactive extension of existing systems.

### Separate compilation

I don't see how it's at odds with separate compilation. The types and interfaces work just fine in that case. However, you are right that it doesn't work very well if one side of the modularity boundary was written in the more traditional style, without object algebras.

I will point out that you *can* use object algebras as visitors. In this case the generic object structure can be visited with an object algebra to create a new structure. This corresponds to "replaying" the steps that created the value in the first place, but using a different algebra. This is similar to the way a fold works in functional programming.

The unification of factories and visitors lets us formulate a kind of build-cata law for visitors and factories. We left this out of the paper, but it is fairly simple:

    build(factory).accept(visitor) === build(visitor)

This says that a builder function (parameterized by a factory) which is then visited by a visitor will produce the same result as building using the visitor as a factory in the first place. We need an appropriate notion of equivalence === here for this to work. Also, it is possible to copy an object by visiting it with a factory:

    o.accept(factory) === o

These equalities are not necessarily valid unless the factories and visitor methods are faithful implementations of the abstract concept of a factory and visitor.
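The law can be sketched in Java (illustrative names, not the paper's code): building a concrete AST with a factory and then visiting it with an evaluator gives the same result as building with the evaluator directly.

```java
// The algebra, usable both as a factory and as a visitor.
interface ExpAlg<A> {
    A lit(int n);
    A add(A l, A r);
}

// A concrete AST whose accept "replays" construction on any algebra.
abstract class Exp {
    abstract <A> A accept(ExpAlg<A> v);
}

class Lit extends Exp {
    final int n;
    Lit(int n) { this.n = n; }
    <A> A accept(ExpAlg<A> v) { return v.lit(n); }
}

class Add extends Exp {
    final Exp l, r;
    Add(Exp l, Exp r) { this.l = l; this.r = r; }
    <A> A accept(ExpAlg<A> v) { return v.add(l.accept(v), r.accept(v)); }
}

// The factory side: building with it produces the concrete AST.
class AstFactory implements ExpAlg<Exp> {
    public Exp lit(int n) { return new Lit(n); }
    public Exp add(Exp l, Exp r) { return new Add(l, r); }
}

class EvalAlg implements ExpAlg<Integer> {
    public Integer lit(int n) { return n; }
    public Integer add(Integer l, Integer r) { return l + r; }
}

class BuildCata {
    static <A> A build(ExpAlg<A> f) { return f.add(f.lit(3), f.lit(4)); }

    public static void main(String[] args) {
        // build(factory).accept(visitor) === build(visitor)
        int viaAst = build(new AstFactory()).accept(new EvalAlg());
        int direct = build(new EvalAlg());
        System.out.println(viaAst + " == " + direct);  // 7 == 7
    }
}
```

Here `accept` is exactly the fold: visiting the AST re-runs the builder's construction steps against the second algebra.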

### This is an interesting

This is an interesting paper. This could be a valuable pattern for some common situations: scene graphs, composite pattern, and embedded languages.

Naturally, one could build a visitor atop the object algebra. :)

I'd be interested in a use-case analysis relative to ASTs and GADTs.

### Object algebras are a very

Object algebras are a very promising development. I am considering pursuing them as an expressive alternative to vau calculus. The Object Algebra can serve a similar role, but based on composition and communication rather than construction and pattern matching.

How far could this concept be taken, with a language design oriented towards them? Even our parsers could invoke an object algebra rather than return an AST - which might offer opportunity to: note errors, query to resolve ambiguity or obtain environmental switches (#define, etc.), and so on. I'm still thinking about how modules might be modeled as algebras or expressions.

Object algebras depend at least as heavily on an ADT/existential concept, expressed through the type system (leveraging the template system, in the examples given). I'll be thinking a bit on how they might interact with dependent types - something like typed protocols could be very valuable in this role.

### Something existential-like

Something existential-like and type constructor polymorphism are needed to fully exploit object algebras.

### I agree. Rank 2 generic

I agree. Rank 2 generic types are sufficient, e.g. for modeling this in Haskell. But first-class types (e.g. self-types or objects-as-types) would probably be better.

### Is it applicable to dynamically generated structures?

To use the technique described in the paper, it seems that a visited structure should be expressed as a function.

For example, the paper says, on page 8, that the following 'exp':

    Exp exp = new Add(new Lit(3), new Lit(4));

should be expressed as

    <A> A exp(IntAlg<A> v) {
        return v.add(v.lit(3), v.lit(4));
    }

Is this conversion possible for a dynamically generated structure?
For example, an abstract syntax tree is usually generated by a parser at run-time based
on an input program text.
Can I use the technique to visit such an abstract syntax tree?
As far as I know, there is no way in Java to produce a function
at run-time.

### See the link I provided

See the link I provided above to my Pratt parser. I use exactly this technique to map syntax to semantic constructors:

    interface IMathSemantics<T>
    {
        T Int(string lit);
        T Add(T lhs, T rhs);
        T Sub(T lhs, T rhs);
        T Mul(T lhs, T rhs);
        T Div(T lhs, T rhs);
        T Pow(T lhs, T rhs);
        T Neg(T arg);
        T Pos(T arg);
        T Fact(T arg);
    }

    class MathGrammar<T> : Grammar<T>
    {
        public MathGrammar(IMathSemantics<T> math)
        {
            Infix("+", 10, math.Add);   Infix("-", 10, math.Sub);
            Infix("*", 20, math.Mul);   Infix("/", 20, math.Div);
            InfixR("^", 30, math.Pow);  Postfix("!", 30, math.Fact);
            Prefix("-", 100, math.Neg); Prefix("+", 100, math.Pos);

            Group("(", ")", int.MaxValue);
            Match("(digit)", char.IsDigit, 0, math.Int);
            SkipWhile(char.IsWhiteSpace);
        }
    }

Interpreter:

    sealed class MathInterpreter : IMathSemantics<int>
    {
        public int Int(string lit) { return int.Parse(lit); }
        public int Add(int lhs, int rhs) { return lhs + rhs; }
        public int Sub(int lhs, int rhs) { return lhs - rhs; }
        public int Mul(int lhs, int rhs) { return lhs * rhs; }
        public int Div(int lhs, int rhs) { return lhs / rhs; }
        public int Pow(int lhs, int rhs) { return (int)Math.Pow(lhs, rhs); }
        public int Neg(int arg) { return -arg; }
        public int Pos(int arg) { return arg; }
        public int Fact(int arg)
        {
            return arg == 0 || arg == 1 ? 1 : arg * Fact(arg - 1);
        }
    }

Extending abstract semantics to support local bindings:

    interface IEquationSemantics<T> : IMathSemantics<T>
    {
        T Var(string name);
        T Let(T x, T value, T body);
    }

    class EquationParser<T> : MathGrammar<T>
    {
        public EquationParser(IEquationSemantics<T> eq) : base(eq)
        {
            Match("(ident)", char.IsLetter, 0, eq.Var);
            TernaryPrefix("let", "=", "in", 90, eq.Let);
        }
    }

There are a few different interpreters for the equation parser: one is an object algebra implementing an eval function, another is a function from environments to values. They are effectively the same.
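The environments-to-values style can be sketched in Java (the paper's language) roughly as follows. The names and the String-based Let signature are my own simplification for illustration, not the library's actual API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A cut-down equation algebra whose carrier is a function from
// environments (name -> value) to values.
interface EqAlg<T> {
    T lit(int n);
    T add(T l, T r);
    T varRef(String name);
    T let(String name, T value, T body);
}

class EnvEval implements EqAlg<Function<Map<String, Integer>, Integer>> {
    public Function<Map<String, Integer>, Integer> lit(int n) {
        return env -> n;
    }
    public Function<Map<String, Integer>, Integer> add(
            Function<Map<String, Integer>, Integer> l,
            Function<Map<String, Integer>, Integer> r) {
        return env -> l.apply(env) + r.apply(env);
    }
    public Function<Map<String, Integer>, Integer> varRef(String name) {
        return env -> env.get(name);
    }
    public Function<Map<String, Integer>, Integer> let(
            String name,
            Function<Map<String, Integer>, Integer> value,
            Function<Map<String, Integer>, Integer> body) {
        return env -> {
            // Extend a copy of the environment, so bindings stay local.
            Map<String, Integer> extended = new HashMap<>(env);
            extended.put(name, value.apply(env));
            return body.apply(extended);
        };
    }
}

class EnvDemo {
    public static void main(String[] args) {
        EnvEval a = new EnvEval();
        // let x = 2 + 3 in x + x
        Function<Map<String, Integer>, Integer> e =
            a.let("x", a.add(a.lit(2), a.lit(3)), a.add(a.varRef("x"), a.varRef("x")));
        System.out.println(e.apply(new HashMap<>()));  // 10
    }
}
```

No mutable interpreter state is needed: each constructor returns a closure, and Let threads the extended environment through the body only.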

### What's new here?

This paper seems to be a combination of "Type-Classes as Objects and Implicits" with dynamically-dispatched interfaces (as the types over which the type-classes are parameterized). It also seems like it will have a pretty big problem when you try to parameterize the interface over which you've parameterized your type-class; that is, when you try to parameterize a type-class over a type constructor (as is fairly common in Haskell and Scala).