The Left Hand of Equals

The Left Hand of Equals, by James Noble, Andrew P. Black, Kim B. Bruce, Michael Homer, Mark S. Miller:

When is one object equal to another object? While object identity is fundamental to object-oriented systems, object equality, although tightly intertwined with identity, is harder to pin down. The distinction between identity and equality is reflected in object-oriented languages, almost all of which provide two variants of “equality”, while some provide many more. Programmers can usually override at least one of these forms of equality, and can always define their own methods to distinguish their own objects.

This essay takes a reflexive journey through fifty years of identity and equality in object-oriented languages, and ends somewhere we did not expect: a “left-handed” equality relying on trust and grace.

The essay covers a lot of ground, not only historical but also conceptual, such as what equality and objects actually mean. For instance, the authors quote Ralph Johnson on what object-oriented programming means:

I explain three views of OO programming. The Scandinavian view is that an OO system is one whose creators realise that programming is modelling. The mystical view is that an OO system is one that is built out of objects that communicate by sending messages to each other, and computation is the messages flying from object to object. The software engineering view is that an OO system is one that supports data abstraction, polymorphism by late-binding of function calls, and inheritance.

And contrast that with William Cook's autognosis/procedural-abstraction view, which we've discussed here before.

The paper's goal then becomes clear: "What can we do to provide an equality operator for a pure, autognostic object-oriented language?" They answer this question in the context of the Grace programming language. As you might expect from some of the authors, security and trust are important considerations.
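
The identity-versus-equality distinction the abstract describes is easy to demonstrate even outside an object-oriented language. Below is a small Haskell sketch (purely illustrative, and nothing to do with the paper's Grace design) that uses StableName as a stand-in for object identity, alongside the programmer-definable Eq instance:

    import System.Mem.StableName (makeStableName, eqStableName)

    data Point = Point Int Int deriving Eq   -- programmer-defined, structural equality

    main :: IO ()
    main = do
      let p = Point 1 2
          q = Point 1 2
      -- Equality the programmer controls: compares components.
      print (p == q)                      -- True
      -- Something closer to object identity: is it the very same heap object?
      np  <- makeStableName p
      nq  <- makeStableName q
      np' <- makeStableName p
      print (np `eqStableName` nq)        -- normally False: two distinct allocations
      print (np `eqStableName` np')       -- normally True: the same object observed twice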

Site migration

Update: The migration of LtU to new servers is complete.

If you notice any issues with the site, please post in this thread (if you can), or email me at antonvs8 at (gmail domain).

Original announcement appears below:

This evening (Sunday, US Eastern time), Lambda the Ultimate will be migrated to new servers.

The site will be offline for around 30 minutes, while this migration and some database maintenance is in progress.

The new platform is a shiny new Kubernetes cluster, which will enable some long-overdue improvements to the site in 2018.

An update will be posted in this thread once the migration is complete.

Compiling a Subset of APL Into a Typed Intermediate Language

Compiling a Subset of APL Into a Typed Intermediate Language

by Martin Elsman and Martin Dybdal

Traditionally, APL is an interpreted language ... In this paper, we present a compiler that compiles a subset of APL into a typed intermediate representation, which should serve as a practical and well-defined intermediate format for targeting parallel architectures through a large number of existing tools and frameworks. The intermediate language is conceptually close to the language Repa. It supports shape-polymorphic functions and types that classify shapes. The compiler takes a simplified approach to certain aspects of APL. Following other APL compilation approaches, the compiler is based on lexical (i.e., static) identifier scoping and has no support for dynamic compilation (APL execute).

The terseness of APL is legendary, for better or worse. I keep finding more and more papers from the Haskell community (and especially GHC contributors) on efficient (parallel) arrays in Haskell.
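
For readers who haven't used Repa, here is a minimal Haskell sketch (illustrative only; the function names are mine, and this is not the paper's intermediate language) of what "shape-polymorphic functions and types that classify shapes" look like in practice: the type variable sh stands for the array's shape, so the same code works at any rank:

    import Data.Array.Repa as R

    -- Scale every element of an array of *any* shape: vector, matrix, 3-D block, ...
    scale :: (Shape sh, Source r Double) => Double -> Array r sh Double -> Array D sh Double
    scale k = R.map (* k)

    -- Sum all elements, again without fixing the rank of the array.
    total :: (Shape sh, Source r Double) => Array r sh Double -> Double
    total = sumAllS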

People of Programming Languages Interviews

There is a growing set of fascinating interviews with PL folks at People of Programming Languages.

Exploiting Vector Instructions with Generalized Stream Fusion

Exploiting Vector Instructions with Generalized Stream Fusion

By Geoffrey Mainland, Roman Leshchinskiy, and Simon Peyton Jones.

A.k.a. "Haskell beats C".

Our ideas are implemented in modified versions of the GHC compiler and vector library. Benchmarks show that high-level Haskell code written using our compiler and libraries can produce code that is faster than both compiler- and hand-vectorized C.

This paper continues the promising line of research started in 1990 by Wadler (at least, that was how I first learned of deforestation). Of course, there has been a lot of development since then, but this paper introduces the interesting idea of carrying multiple representations of a stream, which could potentially change the game.
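
To make the "multiple representations" idea concrete, here is a toy Haskell sketch (not the vector library's actual types; the names are illustrative). A classic fusible stream is a step function plus a seed; generalized stream fusion bundles several views of the same sequence so each consumer can fuse with the view that suits it best, such as a chunked view that maps well onto SIMD loops:

    {-# LANGUAGE ExistentialQuantification #-}

    -- A stream in the classic stream-fusion style: a step function plus a seed.
    data Step s a = Done | Skip s | Yield a s
    data Stream a = forall s. Stream (s -> Step s a) s

    smap :: (a -> b) -> Stream a -> Stream b
    smap f (Stream step s0) = Stream step' s0
      where
        step' s = case step s of
          Done       -> Done
          Skip s'    -> Skip s'
          Yield a s' -> Yield (f a) s'

    -- Bundle several representations of the same sequence; each consumer
    -- picks whichever representation it fuses with best.
    data Bundle a = Bundle
      { elemwise :: Stream a     -- element-at-a-time view
      , chunked  :: Stream [a]   -- chunked view (lists stand in for fixed-width vectors)
      }

    bmap :: (a -> b) -> Bundle a -> Bundle b
    bmap f (Bundle e c) = Bundle (smap f e) (smap (map f) c)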

How efficient is partial sharing?

Partial sharing graphs offer a reduction model for the lambda calculus that is optimal in a sense put forward by Jean-Jacques Lévy. This model has seen interest wax and wane: initially it was thought to offer the most efficient possible technology for implementing the lambda calculus, but then an important result showed that the bookkeeping overheads of any such model could be very high (Asperti & Mairson 1998). This result had a chilling effect on the initial wave of excitement over the technology.

Now Stefano Guerrini, one of the early investigators of partial sharing graphs, has an article with Marco Solieri (Guerrini & Solieri 2017) arguing that the gains from optimality can be very high and that partial sharing graphs can come relatively close to the best possible efficiency, within a quadratic factor on a conservative analysis (which is relatively close in terms of elementary recursion). Will the argument and result lead to renewed interest in partial sharing graphs from implementors of functional programming languages? We'll see...
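
For a very rough intuition of what sharing buys (this is only the familiar call-by-need kind, not the graph machinery itself), consider this Haskell fragment:

    -- A stand-in for an expensive computation.
    expensive :: Integer -> Integer
    expensive n = sum [1..n]

    -- Duplicating the expression does the work twice; sharing it does it once.
    -- Optimal reduction pushes this idea much further: it also shares redexes
    -- that only come into existence inside function bodies, which ordinary
    -- call-by-need evaluation cannot.
    duplicated, shared :: Integer
    duplicated = expensive 1000000 + expensive 1000000
    shared     = let x = expensive 1000000 in x + x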

(Asperti & Mairson 1998) Parallel beta reduction is not elementary recursive.

(Guerrini & Solieri 2017) Is the Optimal Implementation inefficient? Elementarily not.

Is Haskell the right language for teaching functional programming principles?

No! (As Simon Thompson explains.)

You can't help but love the "exploration of the length function" at the bottom; it made me smile in the middle of running errands.

"8th" - a gentle introduction to a modern Forth

Found on the ARM community's embedded blog. It seems that Forth may be making a comeback.

"8th" - a gentle introduction to a modern Forth

8th is a secure, cross-platform programming language based on Forth which lets you concentrate on your application’s logic instead of worrying about differences between platforms. It lets you write your code once, and simultaneously produce applications running on multiple platforms. Its built-in encryption helps protect your application from hackers. Its interactive nature makes debugging and testing your code much easier.

As of this writing it supports 32-bit and 64-bit variants of:

  • Windows, macOS and Linux for desktop or server systems
  • Android (32-bit Arm only) and iOS for mobile systems
  • Raspberry Pi (Raspbian etc) for embedded Linux Arm-based systems.

...
8th differs from more traditional Forths in a number of ways. First of all, it is strongly typed and has a plethora of useful types (dynamic strings, arrays, maps, queues and more).

Other differences from traditional Forth appear to include automatic memory management and some kind of signed and encrypted application deployment.

[Edit: per gasche's comment, please note that 8th appears to be closed source. From their FAQ:

"Is 8th a GPL-Licensed product? No, it is a commercial product. None of the libraries it uses are under the GPL or LGPL. Due to the desire for security, 8th includes its required libraries in the binary, and the GPL family of licenses is therefore not appropriate."

Let the arguments about the effectiveness of security-by-obscurity begin. Source is apparently available if you buy an Enterprise license and sign an NDA.]

Project Loom: adding fibers and continuations to Java

Just saw this on Hacker News -- Project Loom: Fibers and Continuations for the Java Virtual Machine with the following overview:

Project Loom's mission is to make it easier to write, debug, profile and maintain concurrent applications meeting today's requirements. Threads, provided by Java from its first day, are a natural and convenient concurrency construct (putting aside the separate question of communication among threads) which is being supplanted by less convenient abstractions because their current implementation as OS kernel threads is insufficient for meeting modern demands, and wasteful in computing resources that are particularly valuable in the cloud. Project Loom will introduce fibers as lightweight, efficient threads managed by the Java Virtual Machine, that let developers use the same simple abstraction but with better performance and lower footprint. We want to make concurrency simple(r) again! A fiber is made of two components — a continuation and a scheduler. As Java already has an excellent scheduler in the form of ForkJoinPool, fibers will be implemented by adding continuations to the JVM.

I'm a fan of fibers, and this has quite a bit of interesting material in it for like-minded folks.
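
For a point of comparison, GHC's runtime has long offered exactly this kind of lightweight, runtime-scheduled thread. The sketch below (Haskell, not Loom's proposed Java API) spawns a hundred thousand of them, which would be prohibitive with OS kernel threads:

    import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
    import Control.Monad (forM_, replicateM)

    main :: IO ()
    main = do
      vars <- replicateM 100000 newEmptyMVar       -- one MVar per thread
      forM_ vars (\v -> forkIO (putMVar v ()))     -- each lightweight thread signals completion
      mapM_ takeMVar vars                          -- wait for all of them
      putStrLn "100000 lightweight threads finished"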

Copattern matching and first-class observations in OCaml, with a macro

Copattern matching and first-class observations in OCaml, with a macro, by Paul Laforgue and Yann Regis-Gianas:

Infinite data structures are elegantly defined by means of copattern matching, a dual construction to pattern matching that expresses the outcomes of the observations of an infinite structure. We extend the OCaml programming language with copatterns, exploiting the duality between pattern matching and copattern matching. Provided that a functional programming language has GADTs, every copattern matching can be transformed into a pattern matching via a purely local syntactic transformation, a macro. The development of this extension leads us to a generalization of previous calculus of copatterns: the introduction of first-class observation queries. We study this extension both from a formal and practical point of view.

Codata isn't well supported in most languages, IMO, and supporting it via a simple macro translation will hopefully make it easier to adopt, at least in languages with GADTs.
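
The paper targets OCaml, but since the translation only relies on GADTs, the gist can be sketched in Haskell as well (the names below are illustrative, not the paper's): observations on a stream become constructors of a GADT indexed by their result type, and a copattern definition becomes an ordinary pattern match on that GADT:

    {-# LANGUAGE GADTs, RankNTypes #-}

    -- Observations ("queries") on a stream, indexed by each observation's result type.
    data StreamObs a r where
      Head :: StreamObs a a
      Tail :: StreamObs a (Stream a)

    -- A codata value answers every observation it can be asked.
    newtype Stream a = Stream { observe :: forall r. StreamObs a r -> r }

    -- A copattern definition such as
    --     head (nats n) = n
    --     tail (nats n) = nats (n + 1)
    -- becomes pattern matching on the observation:
    nats :: Integer -> Stream Integer
    nats n = Stream answer
      where
        answer :: StreamObs Integer r -> r
        answer Head = n
        answer Tail = nats (n + 1)

    -- Using the stream: take the first k elements by composing observations.
    takeS :: Int -> Stream a -> [a]
    takeS k s
      | k <= 0    = []
      | otherwise = observe s Head : takeS (k - 1) (observe s Tail)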