LtU Forum

Non-transitivity of type unification

We know type unification is not transitive in general

a ~ b /\ b ~ c does not imply a ~ c

And it's easy to find triples where the two ends don't unify:

a |-> Int; b |-> b; c |-> Bool
a |-> (Int, a'); b |-> (Int, Bool); c |-> (c', Bool)

In all the examples I can make up, there are two outer types that don't unify, and a piggy in the middle that is either strictly more general than the outers (the bare b) or strictly subsumed by them ((Int, Bool)).

For some (malicious) testing, I want an example where each of the three pairings of types unifies, but the three types together do not. That is

a ~ b /\ b ~ c /\ a ~ c but not mgu(a, b) ~ c

Can anybody concoct such an example?
Or point me to something showing it's impossible.
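
For concreteness, here is a minimal first-order unifier one can use to test candidate triples (a sketch; the Ty type, unify, and the helper names are illustrative assumptions, not from any particular implementation). One triple that appears to fit the bill is a = (x, x), b = (Int, y), c = (z, Bool): each pair unifies, but mgu(a, b) maps a to (Int, Int), which cannot unify with c.

    import qualified Data.Map as M

    data Ty = TVar String | TCon String | Pair Ty Ty
        deriving (Eq, Show)

    type Subst = M.Map String Ty

    -- Apply a substitution, chasing variable bindings.
    apply :: Subst -> Ty -> Ty
    apply s t@(TVar v) = maybe t (apply s) (M.lookup v s)
    apply _ t@(TCon _) = t
    apply s (Pair l r) = Pair (apply s l) (apply s r)

    unify :: Subst -> Ty -> Ty -> Maybe Subst
    unify s t1 t2 = go (apply s t1) (apply s t2)
      where
        go (TVar v) t = bindVar v t
        go t (TVar v) = bindVar v t
        go (TCon a) (TCon b) | a == b = Just s
        go (Pair a b) (Pair c d) = do
            s' <- unify s a c
            unify s' b d
        go _ _ = Nothing
        bindVar v t
            | t == TVar v = Just s
            | occurs v t  = Nothing          -- occurs check
            | otherwise   = Just (M.insert v t s)
        occurs v (TVar w)   = v == w
        occurs _ (TCon _)   = False
        occurs v (Pair l r) = occurs v l || occurs v r

    ta, tb, tc :: Ty
    ta = Pair (TVar "x") (TVar "x")     -- a = (x, x)
    tb = Pair (TCon "Int") (TVar "y")   -- b = (Int, y)
    tc = Pair (TVar "z") (TCon "Bool")  -- c = (z, Bool)

    main :: IO ()
    main = do
        print (unify M.empty ta tb)  -- Just [x := Int, y := Int]
        print (unify M.empty tb tc)  -- Just [z := Int, y := Bool]
        print (unify M.empty ta tc)  -- Just [x := z, z := Bool]
        print (unify M.empty ta tb >>= \s -> unify s ta tc)  -- Nothing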


4th Workshop on Live Programming (LIVE 2018)
November 6, 2018
Co-located with SPLASH 2018, Boston, USA

The LIVE 2018 workshop invites submissions of new ideas for improving
the immediacy, usability, and learnability of programming. Live
programming gives the programmer immediate feedback on the behavior of
a program as it is edited, replacing the edit-compile-debug cycle with
a fluid programming experience; the best-known example is the
spreadsheet. The LIVE workshop is a forum for research on live
programming, as well as work on fundamentally improving the usability
of programming, through language design, assistive environments and
tools. This year we are reaching out to the CS Education community to
include ideas on making programming more learnable and teachable.


Keynote: Against The Current: What We Learned From Eve
Chris Granger


The shared spirit of LIVE is a focus on the human experience of
programming, and an interest in reconsidering traditional practices
and beliefs. Topics of interest include:

- Live programming environments
- Visual/Projectional programming environments
- Advances in REPLs/notebooks/playgrounds
- Programming by example/demonstration
- Advanced debugging and execution visualization techniques
- Language learning environments
- Language design for learnability and teachability
- Alternative language semantics/paradigms in support of the above
- Suggestive experiments and experience reports on teaching programming

Our goal is to provide a forum where early-stage work receives
constructive criticism. We accept short papers, web essays with
embedded videos, and demo videos. A written 250-word abstract is
required for all submissions. Videos should be no more than 20 minutes
long and papers no more than 6 pages long. We strongly recommend that
your submission use concrete examples to explain your ideas.


Submissions due: Fri August 17, 2018
Notification: Fri September 7, 2018
Workshop: Tue November 6, 2018

Anna: A KVS For Any Scale

Anna: A KVS For Any Scale, by Chenggang Wu, Jose M. Faleiro, Yihan Lin, Joseph M. Hellerstein:

Modern cloud providers offer dense hardware with multiple cores and large memories, hosted in global platforms. This raises the challenge of implementing high-performance software systems that can effectively scale from a single core to multicore to the globe. Conventional wisdom says that software designed for one scale point needs to be rewritten when scaling up by 10−100× [1]. In contrast, we explore how a system can be architected to scale across many orders of magnitude by design.

We explore this challenge in the context of a new key-value store system called Anna: a partitioned, multi-mastered system that achieves high performance and elasticity via wait-free execution and coordination-free consistency. Our design rests on a simple architecture of coordination-free actors that perform state update via merge of lattice-based composite data structures. We demonstrate that a wide variety of consistency models can be elegantly implemented in this architecture with unprecedented consistency, smooth fine-grained elasticity, and performance that far exceeds the state of the art.

This isn't strictly programming language related, so I didn't post this as a story, but actors and distributed systems are popular topics around here, and this builds on the CRDT and Bloom work discussed here before. The performance numbers under contention are certainly impressive compared to existing alternatives.

See also the accompanying article where the authors discuss this further.
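
To make "state update via merge of lattice-based composite data structures" concrete, here is a toy sketch in Haskell (illustrative only; this is not Anna's code or API): a join-semilattice class, a last-writer-wins register, and a key-value map that merges pointwise.

    import qualified Data.Map as M

    -- A join-semilattice: merge is associative, commutative, and
    -- idempotent, so replicas converge however updates are ordered.
    class Lattice a where
        merge :: a -> a -> a

    -- Last-writer-wins register: a timestamped value.
    data LWW v = LWW Int v deriving Show

    instance Lattice (LWW v) where
        merge a@(LWW t1 _) b@(LWW t2 _) = if t1 >= t2 then a else b

    -- Composite lattice: a map of lattices merges pointwise.
    newtype KVS k v = KVS (M.Map k v) deriving Show

    instance (Ord k, Lattice v) => Lattice (KVS k v) where
        merge (KVS m1) (KVS m2) = KVS (M.unionWith merge m1 m2)

    -- Two replicas accept writes independently, then exchange and merge.
    main :: IO ()
    main = do
        let r1 = KVS (M.fromList [("x", LWW 1 "a"), ("y", LWW 3 "b")])
            r2 = KVS (M.fromList [("x", LWW 2 "c")])
        print (merge r1 r2)  -- "x" from r2 (newer stamp), "y" from r1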

GADTs as guards

Is there some reason Haskell didn't expose GADTs as guards?

    -- Haskell
    data Term t where
        Lit :: Int -> Term Int
        App :: Term (a -> b) -> Term a -> Term b

    -- Guards
    data Term t
        = Lit Int                       where t = Int
        | App (Term (a -> b)) (Term a)  where t = b

It seems equivalent and semantically simpler.
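
For comparison, the payoff of either formulation is the same: pattern matching refines t. A standard evaluator over the GADT form above (a sketch):

    eval :: Term t -> t
    eval (Lit n)   = n                  -- matching Lit refines t to Int
    eval (App f x) = (eval f) (eval x)  -- matching App refines t to b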

Is there a functional language with explicit limits on the heap(s)?

I'm trying to make a research compiler which is supposed to run as a service, so I'd like to use a language with cooperative, userspace threads, such as Go or Erlang. The problem is that some user requests may consume too much memory, or possibly never halt. So I'd like to impose some restrictions, e.g., "requests from IP a.b.c.d may only use up to 80 MB of heap memory and run for 5 minutes tops".

Haskell has setAllocationCounter (also forkIO and STM), which would help, but it counts cumulative allocation and doesn't seem to take the GC into account. Erlang does let me limit the heap of each process, but many tasks will be computationally intensive, and Erlang is probably not a good pick for those.

Does anyone have a suggestion? I'd rather not have to create a new language, but it's a possibility; is there any research on such languages that could help?
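
For reference, a minimal sketch of the setAllocationCounter route in GHC Haskell (withLimits and the numbers are my own illustration; note the caveat above that this bounds cumulative allocation, not live heap after GC):

    import Control.Exception (SomeException, try)
    import Data.Int (Int64)
    import GHC.Conc (enableAllocationLimit, setAllocationCounter)
    import System.Timeout (timeout)

    -- Run an action with at most `bytes` of allocation and `usecs` of
    -- wall-clock time. When the counter underflows, the RTS throws
    -- AllocationLimitExceeded in this thread; timeout covers the
    -- non-halting case.
    withLimits :: Int64 -> Int -> IO a -> IO (Maybe (Either SomeException a))
    withLimits bytes usecs act = timeout usecs $ do
        setAllocationCounter bytes
        enableAllocationLimit
        try act

    main :: IO ()
    main = do
        r <- withLimits (80 * 1024 * 1024) (5 * 60 * 1000000) $
                 return $! sum [1 .. 1000000 :: Integer]
        print (fmap (either (const "killed") show) r)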

Anything recent happening with multi-stage programming?

I remember that 15 years ago, multi-stage programming was new and exciting. It promised simpler implementation of programming languages, better-than-macro support for strongly-typed languages, etc.

These days, I can't find any recent publications on the topic. Am I somehow looking in the wrong place? Has the topic fizzled out?

In particular, I'm currently working on something extremely close to multi-stage programming, applied to a JavaScript JIT compiler. I'm very surprised that nobody seems to have worked on seriously applying multi-stage programming to JIT compilers. Am I missing something obvious?
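
For readers who haven't seen it, the canonical multi-stage example still works nicely in GHC's typed Template Haskell (a sketch assuming GHC 9.x, where typed quotes have type Code Q a; MetaOCaml is the other usual vehicle):

    {-# LANGUAGE TemplateHaskell #-}
    module Power where

    import Language.Haskell.TH.Syntax (Code, Q)

    -- Staged power: the exponent is known at code-generation time,
    -- so the generated code is a flat product with no recursion.
    power :: Int -> Code Q Int -> Code Q Int
    power 0 _ = [|| 1 ||]
    power n x = [|| $$x * $$(power (n - 1) x) ||]

    -- In a separate module (TH's stage restriction):
    --   power5 :: Int -> Int
    --   power5 x = $$(power 5 [|| x ||])
    -- which compiles to x * (x * (x * (x * (x * 1)))).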


A third class of algorithms?

Just a small thing I've often wondered about. In math, people are often content to separate algorithms into two classes: those which terminate and those which don't; for the latter, the bottom symbol is usually introduced. But isn't there obviously a third class: those algorithms for which we don't know whether they terminate?

So, I've always wondered: don't you introduce a logical inconsistency by cleanly separating algorithms into those two classes? I.e., since you assert something you can't know, mustn't that be inconsistent?

Looking for views/the best answer/etc. Or, an open invitation to shoot with your best wisdom.

Generics and Reverse Generics for Dynamic Languages

Generics and Reverse Generics for Pharo

Generic programming is a mechanism for re-using code by abstracting specific types used in classes and programs. In this paper, we present a mechanism for adding generic programming in dynamically typed languages, showing how programmers can benefit from generic programming. Furthermore, we enhance the expressiveness of generic programming with reverse generics, a mechanism for automatically deriving new generic code starting from existing non-generic code. We implemented generics and reverse generics in Pharo Smalltalk, and we successfully used them to solve a problem of reusing unit test cases. This helped us to identify a number of bugs and anomalies in the stream class hierarchy.

Are Monads a Waste of Time?

I was thinking about this yesterday. If the point of functional code is that it is easier to reason about, should we like Monads? Suppose we have an interpreter, and that we construct a sequence of commands in the free monad. Given a program like this:

f = do
   s <- newRef 
   writeRef s 23
   writeRef s 42
   readRef s

Now, I have deliberately not specified the implementation of the interpreter: it could be pure, using state threading; it could be impure, using IORef; or it could be any other valid implementation in Haskell, as long as we don't allow "unsafe" code.
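
To make the pure option concrete, here is one sketch of such an interpreter (the free monad encoding and the names are illustrative): the "mutation" becomes explicit threading of a store.

    {-# LANGUAGE DeriveFunctor #-}
    import qualified Data.Map as M

    -- The command functor for the tiny ref language above.
    data Cmd k
        = NewRef (Int -> k)       -- allocate a ref; continuation gets its id
        | WriteRef Int Int k      -- write (ref, value)
        | ReadRef Int (Int -> k)  -- read a ref
        deriving Functor

    data Free f a = Pure a | Free (f (Free f a))

    instance Functor f => Functor (Free f) where
        fmap g (Pure a) = Pure (g a)
        fmap g (Free k) = Free (fmap (fmap g) k)

    instance Functor f => Applicative (Free f) where
        pure = Pure
        Pure g <*> x = fmap g x
        Free k <*> x = Free (fmap (<*> x) k)

    instance Functor f => Monad (Free f) where
        Pure a >>= g = g a
        Free k >>= g = Free (fmap (>>= g) k)

    newRef :: Free Cmd Int
    newRef = Free (NewRef Pure)

    writeRef :: Int -> Int -> Free Cmd ()
    writeRef r v = Free (WriteRef r v (Pure ()))

    readRef :: Int -> Free Cmd Int
    readRef r = Free (ReadRef r Pure)

    -- A pure interpreter: no mutation anywhere, just store threading.
    run :: Free Cmd a -> M.Map Int Int -> a
    run (Pure a)                _  = a
    run (Free (NewRef k))       st = run (k n) (M.insert n 0 st)
      where n = M.size st
    run (Free (WriteRef r v k)) st = run k (M.insert r v st)
    run (Free (ReadRef r k))    st = run (k (st M.! r)) st

    f :: Free Cmd Int
    f = do
        s <- newRef
        writeRef s 23
        writeRef s 42
        readRef s

    main :: IO ()
    main = print (run f M.empty)  -- 42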

The question is: just looking at the above code, it looks like "s" is mutated. Yes, the implementation in the monad could be pure and use state threading, but how does that help us reason about the above imperative program? How is the above program any better than the following JavaScript program:

function f() {
   var s
   s = 23
   s = 42
   return s
}
Ignoring syntax differences (because we can trivially translate from one syntax to the other), surely any argument we can make about the free monad with an interpreter for the first example applies equally to this code? Surely this means that the second, imperative program is no easier or harder to reason about than the first?

Hence my deliberately controversial title: why write an imperative program in a monad? Either we should actually write code in a visibly referentially transparent way, like this:

f = let s1 = 23 in let s2 = 42 in s2

Or we may as well just use an impure language to start with. (I am deliberately ignoring the fact that parts of a program could be pure in Haskell).

The second part of this question is: if making a program look as if it has state is as bad, from an analysis point of view, as actually having state, how could we prevent this in the type system? What loophole allows 's' to behave as if it had state, and how can it be closed?

Type systems for acyclic terms

I've been looking for type systems which guarantee that terms are acyclic, in a context with mutability, of course. Any references? Thoughts?
