Factor Mixins

Mixins, a very interesting post from Slava Pestov's Factor blog.

Factor's object system allows the following operations without forcing unnecessary coupling:

* Defining new operations over existing types
* Defining existing operations over new types
* Importing existing mixin method suites into new types
* Importing new method suites into existing types
* Defining new operations in existing mixin method suites
* Defining new mixin method suites which implement existing operations
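The thread is about Factor, but the flavor of these operations can be sketched in Python with `functools.singledispatch`, which plays a role loosely analogous to Factor's generic words: new operations over existing types, and existing operations over new types, without touching the original definitions. This is only an illustrative analogue, not Factor's actual mechanism, and the `describe` function and `Point` class are invented for the example.

```python
from functools import singledispatch

# A new operation defined over existing types: neither int nor list
# had to be modified or subclassed to participate.
@singledispatch
def describe(obj):
    return f"some object: {obj!r}"

@describe.register(int)
def _(obj):
    return f"an integer: {obj}"

@describe.register(list)
def _(obj):
    return f"a list of {len(obj)} items"

# A new type implementing the existing operation after the fact.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

@describe.register(Point)
def _(obj):
    return f"a point at ({obj.x}, {obj.y})"

print(describe(42))           # an integer: 42
print(describe([1, 2, 3]))    # a list of 3 items
print(describe(Point(1, 2)))  # a point at (1, 2)
```

What Python's singledispatch cannot express is the mixin half of the list: importing a whole method suite into an existing type, which Factor's mixin classes do directly.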

That's pretty much what I want from an object-functional language.



That's pretty much what I want from an object-functional language.

So, basically, you want an assembly shaper - or what a compiler writer might call a linker. Effectively, private, protected and public were some of the earliest assembly shaping mechanisms in programming.

And now programmers want more schema authoring techniques.


Not sure what you'd be talking about... Factor uses polymorphic inline caching for its object implementation, although it doesn't currently support multimethods. I hear Slava's planning on supporting them sometime in the future. Inline caching is sort of like continuous dynamic re-linking, but there is slightly more to it than that.
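For readers unfamiliar with the technique, here is a toy sketch of a polymorphic inline cache (PIC). It assumes a runtime that dispatches on the receiver's class; each call site remembers the classes it has already seen so that repeated calls skip the full method lookup, which is the sense in which it resembles "continuous dynamic re-linking". The `CallSite` class and shape examples are invented for illustration and say nothing about Factor's actual implementation.

```python
class CallSite:
    """One call site with its own small per-class method cache."""
    def __init__(self, selector, max_entries=4):
        self.selector = selector
        self.cache = {}        # class -> previously looked-up method
        self.lookups = 0       # slow-path lookups, counted for illustration
        self.max_entries = max_entries

    def call(self, receiver, *args):
        cls = type(receiver)
        method = self.cache.get(cls)
        if method is None:                 # slow path: full lookup, then cache
            self.lookups += 1
            method = getattr(cls, self.selector)
            if len(self.cache) < self.max_entries:
                self.cache[cls] = method   # "re-link" this site to the method
        return method(receiver, *args)

class Circle:
    def area(self):
        return 3.14159

class Square:
    def area(self):
        return 4

site = CallSite("area")
shapes = [Circle(), Square(), Circle(), Square()]
areas = [site.call(s) for s in shapes]
# Four calls, but only two slow-path lookups: one per receiver class.
print(site.lookups)  # 2
```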

Do you have a reference?

I was asking a question

I was asking a question more so than making a statement.

Therefore, I am not sure what you want a reference to. However, I can fill in the thought process behind my comments, since I think that is what you are really looking for.

In programming language design, one way of referring to languages is by their generation. Most object-oriented programming languages I've seen are third-generation languages (3GLs). Although most definitions of this term, including Wikipedia's, frame it in the positive sense, it makes more sense to define a 3GL in terms of its shortcomings and what it is clearly not.

Probably the first and foremost software engineering reference explaining this concept is John Lakos' Large-Scale C++ Software Design (ca. 1996). Today the book seems kind of crude; at the time, however, it was a groundbreaking argument for how to design very large systems. Up until that point, "Object Technology" conference papers were dominated by advocacy of "reuse" without any real methodology for assessing the quality of that reuse. Lakos, by contrast, explained in plain and simple terms what needed to be done to build modular, reusable systems on top of 3GLs.

One big shortcoming of 3GLs is dependence on a 3GL linker. In short, 3GL linkers all share the same characteristic design limitation: there is no separation between messaging and the addressing mechanism. Even Smalltalk, although it is a very high-level 3GL, does not completely separate the two; it merely unifies the syntax for properties, methods, and events. Aside from Beta, Smalltalk was one of the first major programming languages to do this. In every 3GL I've seen, collaboration via messages is tied to method addresses, which are in turn tied directly to procedure calls. With these 3GLs, navigating a UML collaboration and thinking in terms of methods both boil down to procedure calls. In this sense, these languages are simply battling for the title of highest-level assembly language possible.

That's what .NET's MSIL and C# are: MSIL is a rudimentary assembly language, and C# is a very high-level assembly language layered on top, with mscorlib.dll as the baseline for all assemblies. Yet another way to look at this is from the Mono viewpoint. Things like Mono.SIMD are C# libraries, but the VM treats them as very high-level assembly instructions, much like OpenGL ES or other graphics shader instruction sets. In fact, in a VM more modular than shrinkwrapped .NET, developers can use Mono.Linker to package their own distribution of .NET; it is simply a matter of choosing which ABIs to link to. The Mono.CSharp namespace contains the compiler service that is exactly the product of Mono.Linker: it shapes a source code base full of public method declarations (kept public for testability and evolutionary design purposes) into a bunch of internal declarations and four public ABIs. Likewise, compare how Microsoft's CLR team managed the port of .NET to Silverlight with how Novell's Mono team managed the similar port of Mono to Moonlight: the approaches differ starkly in how well each team absorbed what Lakos taught 13 years ago. Novell has no duplicate code; it just defines a link.xml file that shapes the assembly.

Similarly, say you have to host some code in an environment with a ridiculously dumb restriction on the format of method calls, prohibiting you from defining general n-ary methods. You can reshape the method signature by taking its name and appending With_CALLARGUMENT_As_CALLVALUE_ and so on. This is basically the opposite of a reader macro: you are not adding undue context to your parser, you are simply reshaping a generic method into a specific one. I realize this example sounds sketchy, but, ahem, it's the real world.
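The reshaping trick above can be sketched in a few lines. The exact mangling format here is reconstructed from the comment's description (With_ARGUMENT_As_VALUE_), and the `mangle` helper and `Resize` example are invented for illustration.

```python
def mangle(name, **kwargs):
    """Fold keyword arguments into the method name itself, producing a
    specific, fixed-arity name from a generic n-ary signature."""
    for arg, value in sorted(kwargs.items()):
        name += f"With_{arg}_As_{value}_"
    return name

# A generic call like resize(width=640, height=480) becomes one
# specifically named zero-argument-style method per argument combination.
print(mangle("Resize", width=640, height=480))
# ResizeWith_height_As_480_With_width_As_640_
```

Sorting the arguments makes the mangled name deterministic regardless of the order the caller supplied them in.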

Lakos' definition of a component, if I recall correctly, was a .h and a .cpp file combined. In the book, Lakos talks about three programs, adep, ldep and cdep; read the source code to find out what they do. (.NET and Mono are building much more robust tools these days, although Java is still way behind due to its incompletely designed open class system. I've written a blog post or two about Mono.Linker, for example, and I think one about Gendarme.) Later, in 1998, Clemens Szyperski refined these ideas into the computer science formalism known as component-oriented programming, aka "versionable objects". Among the many things Szyperski lists as important in a component model is a properties, methods, and events model. In his book, Component Software: Beyond Object-Oriented Programming, he meticulously covers every aspect of physical software design, including how to safely use callbacks through the judicious use of contracts (something much of the open source code I see does not use!). Accordingly, he also covers the linker. The book supersedes Lakos' work, but it is much heavier reading.

As an aside, not everything in Szyperski's book is great. For instance, he speaks too glowingly of his own BlackBox framework, including his ideas of cascaded message multicasting and domaincasting, which I don't like from a design standpoint. Sadly, these ideas effectively reappear in WPF, and I have to wonder why.

More generally, you can look at many other OOPLs as attempts to design a robust linker. Probably the most profound example of this is Karl Lieberherr's DemeterTools project, which champions his Demeter Method. One way of looking at Karl's work on propagation patterns is that he essentially created a DSL for a 3GL linker, based on a parts-centered object model that is trivial to typecheck (simplifying the linker logic). What's cool is that Karl formalized what I think of as the Law of Linkers: the Law of Demeter.
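For readers who haven't met the Law of Demeter, here is a minimal illustration: a method should talk only to its immediate collaborators, not reach through them into their internals. The `Wallet` and `Customer` names are invented for the example.

```python
class Wallet:
    def __init__(self, balance):
        self._balance = balance

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount
        return amount

class Customer:
    def __init__(self, wallet):
        self._wallet = wallet

    def pay(self, amount):
        # The customer delegates to its own wallet, its direct collaborator.
        return self._wallet.withdraw(amount)

customer = Customer(Wallet(100))

# Violation: navigating the object graph, coupling the caller to
# Customer's internal structure:
#     customer._wallet.withdraw(25)
# Demeter-conforming: ask the immediate collaborator to do the work.
print(customer.pay(25))  # 25
```

Seen through the linker lens, the law constrains which addresses a message send is ever resolved against, which is what keeps the propagation-pattern linker logic simple.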

What all these 3GLs also have very much in common is that they embrace the von Neumann architecture: they use procedural message passing, stack-based semantics like RAII, and so on.

Now, to decompress after all this, here is a link to Pat Helland's humorous explanation of components: Building Blocks and the Three Bears

Sorry, I'm not familiar with

Sorry, I'm not familiar with your terminology (assembly shaper, schema authoring).


I did say that a synonym for assembly shaper is linker; I stole this term from Miguel de Icaza's PDC 2008 Mono speech notes. I like it because it differentiates this from traditional linkers while capturing much of the spirit of what we're trying to accomplish, including using a linker as a "leader of the bootstrap" when bootstrapping "Compact Frameworks" from existing Mono frameworks. This is configurable modularity: you can re-target a different VM deployment setup just by adjusting your linker, which is much more general than having a single language to accomplish these things. Basically, I don't want the word "linker" to become the next buzzword, with a bunch of silly computer scientists creating an annual conference called LINKers, like they did with SOA and the Semantic Web. If you don't know much about linkers, I recommend John R. Levine's book Linkers & Loaders. John is the moderator of usenet://comp.compilers

Yet schema authoring is a little vague.

Schema authoring is a phrase used in XML-land, and it basically describes the process of authoring a schema. You can define schemas intensionally or extensionally: one method uses schema inference, and the other uses something like a DTD.
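The contrast can be sketched concretely: extensionally, a schema is inferred from the instances themselves; intensionally, it is authored up front, as a DTD or XSD would be. The dict-based "schema" representation and `infer_schema` helper below are invented for illustration.

```python
def infer_schema(records):
    """Extensional: derive a field -> type-name mapping from the data."""
    schema = {}
    for record in records:
        for field, value in record.items():
            schema[field] = type(value).__name__
    return schema

# Intensional: the schema is declared directly, independent of any data.
declared_schema = {"id": "int", "name": "str"}

samples = [{"id": 1, "name": "ant"}, {"id": 2, "name": "bee"}]
print(infer_schema(samples))                      # {'id': 'int', 'name': 'str'}
print(infer_schema(samples) == declared_schema)   # True
```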

This is an idea I think David Barbour explained very well a few weeks ago in the "Why are objects so intuitive?" thread: being poor at modeling is essential to OOP. If you read just about any paper on programming, you will see a common approach regardless of paradigm. Actually, this approach was originally advocated by hardware guys at IBM in the '70s, especially Nate Edwards. As H.S. Lahman pointed out to me recently on usenet://comp.object, Executable UML author Stephen Mellor was (probably) the first person to realize, in the '80s, that OO was the ideal vehicle for doing this. One of the better layman explanations of schema authoring I've read is by PocoCapsule/C++ developer Ke Jin, in particular his blog post [Domain-Specific Modeling] in [Inversion of Control] Frameworks.


Thanks for the detailed reply.

I've discovered that John Levine offers the manuscript of his Linkers & Loaders on the web.

With great power ...

Allowing flexible "injection" into the inheritance hierarchy, or of new methods into existing multimethods, is a double-edged sword. The benefits are well covered in the original post.

The down-side is that without restrictions, new code modules can change the meaning of existing (well-tested and/or third-party) modules. This makes separate development and testing difficult to achieve.

The rules around "sealing" in the Dylan language reference manual show the lengths one must go to in trying to make multimethods suitable for programming-in-the-large.

Scoped multimethods

Are there languages that support scoped multimethods: adding a method to a generic function in a limited (dynamic?) scope?

Aldor has "Post Facto Extensions"

These extend types with additional functionality within a scope. Thus types can be made richer as needed in client applications, libraries can be designed in tidy layers, etc. Aldor has had this since about 1991, but it was written up as a programming languages paper only for the 2006 DSAL conference.


ContextL ties methods to layers, which can be activated and deactivated. That should come quite close to what you asked for.