It's type checking, Jim, but not as we know it.

A completely rewritten and improved version of my previous Going Against the Flow paper. Abstract:

This paper introduces a novel type system that can infer an object's composition of traits (mixin-style classes) according to its usage in a program. Such "trait inference" is achieved with two new ideas. First, through a polyvariant treatment of assignments (a := b) and implied field sub-assignments, assignment can usefully encode trait extensions, method calls, overriding, and so on. Second, rather than unify term types or solve inequality constraints via intersection, we instead propagate trait requirements "backward" to assignees, ensuring that trait requirements are compatible only for terms used in the program. We show how this system is feasible without type annotations while supporting mutability, sub-typing, and parametricity.

The main innovation in this work is a treatment of fields and assignment to support it. We have found that assignment between a term's fields is parametric with respect to the assignments of that term! Intuitive but semantically tricky. A soundness proof of this treatment is included in the paper, along with a feasible (but unproven) way of implementing it. Anyways, once that's done, we can model the entire type system in terms of assignments and fields, which I don't think has been done before.

Suggestion for formalizing this approach

Right now it seems like your operational semantics is tied up with your type system. It seems like you might be able to separate the two in a fruitful way. Consider this random example from your paper:

var x := new
if x.Pressed():
    x.Thickness := 10

Couldn't you start with an elaboration pass where you introduce unknown class parameters? i.e.:

var x := new C1
if x.Pressed():
    x.Thickness := 10

Here, C1 is an unknown class. If you know C1, then it's easy to give operational semantics to this snippet -- I understand your object system to be a fairly typical record of mutable fields. And so you can define that semantics (in terms of the class parameters) easily.

Explaining how to elaborate from the first snippet to the second snippet is where much of your semantics will need to be figured out. When you invoke 'new' inside of a function, you probably want the concrete class to be passed in as a parameter so that you get the polymorphism you're expecting, but you probably have to handle recursion specially to avoid introducing an infinite number of parameters. Casting your system into this framework might be elucidating for you.

Once you've defined this elaboration process, you'll be on firm ground when defining an inference procedure to solve for the missing class parameters. The constraints imposed by usage will be obvious, and you just have to check that your type and inference system either satisfies them all or generates an error.

I don't really understand.

I don't really understand. The record not only has fields, but its fields have fields, and so on. Those fields in turn are related by bounds that are purely driven by assignment. Every field (and its fields...) is a type variable, so elaboration into a typical parametric type system doesn't make sense: the target system wouldn't be expressive enough to express anything but fields one level down. I'm not even sure what those solutions would look like... maybe graphs? Flattening out the fields doesn't really help: that misses a major part of the type system.

Please don't get me wrong: I would like to do a syntactic formulation if it were at all possible. I haven't seen one yet. I suspect that an analysis-based definition is probably unavoidable if fields aren't to be unified with their bindings. But maybe something will come out of further discussion?

Probably I don't understand

Is it possible to give an example where the execution isn't explained by inferring implicit classes?


x := new
x.field := foo
x.field := bar

My understanding is that, if you were to step through that in a debugger, you could see all of the properties of x evolving as expected with each assignment. Is that correct? So the novel step of your compiler is to infer all of the fields of x that should be instantiated (and with what values) for each 'new'.

If you could toss out an example that you think would be hard to fit into my system, that would be very helpful for me to think about.

Note: I'm assuming that fields have objects for their values, which have fields, so we get fields of fields, etc.

Given an object allocation,

Given an object allocation, the traits that the allocation extends are fixed within its unit (trait), but the relationships of its fields are still very open. So:

trait Bar:
  var fieldA, fieldB

trait Foo:
  def fooA: 
    x := new
    return x

trait Baz:
  def baz1:
    var y := new
    var z := y.fooA
    z.fieldA := z.fieldB
    return z

So in this case, the type of new in Foo.fooA is a Bar, the type of new.fieldA is a Duck. But then in Baz, a relationship between fieldA and fieldB is established, which is unknown (and unfixed) in the allocation of the object. For the fields to remain open during allocation, I'm not sure how your approach would work.

But then in Baz, a

But then in Baz, a relationship between fieldA and fieldB is established, which is unknown (and unfixed) in the allocation of the object.

It's unknown based on that definition alone, but again, if you were to trace to the definition of `y` in a debugger, you'd see a specific set of traits instantiated, right? The instance would depend on who called `baz1` and how its return was used/unified.

I'm suggesting elaborating that like this:

trait Foo:
  def fooA[C1]: 
    x := new[C1]
    return x

trait Baz:
  def baz1[C1,C2]:
    var y := new[C2]
    var z := y.fooA[C1]
    z.fieldA := z.fieldB
    return z

A type system is needed to keep track of trait constraints on these class variables.

I see, but I'm not sure how

I see, but I'm not sure how it's helping. The composition of traits in an object allocation must remain invariant within its containing trait implementation; it's only the fields that can take on new trait extensions afterward, to help in the formation of other objects that pass through this object's fields. So C1/C2 could only parameterize the fields of the object, not the object itself.

I'm not sure how this could work with overriding:

trait BaseBaz:
  def baz1 # no implementation

trait Baz:
  override def baz1: ... # implementation above

In general, the constraint graph can be distributed across multiple terms that then meet via assignment, so overriding definitions will crash into uses of those definitions that occurred without knowing anything about them, integrating use and def results. The reassignment judgment is incredibly powerful in this way; perhaps too much so :)

The goal is to separate out

The goal is to separate out the operational semantics. I think I may have confused you by leaving the "traits" declarations in the elaborated code. Traits in this system would be pre-elaboration only. They constrain the class variables and help determine which objects get created, but they don't have any other operational effect. I don't see any reason that overrides would be a problem. (Edit: Or maybe I do. Let me think about it.)

The main idea is that assignments at the operational level are just ordinary assignments that overwrite the contents of a cell with a value (object reference). If you have an example that demonstrates why this understanding is wrong, that'd be useful.

Assignment is naturally

Assignment is naturally parametric, which we leverage to model the entire language. Methods don't really matter because they are easy to model as some fields and assignments.

The override case above could also be expressed like this:

X0.A := X0.B
X1.B := X1.C

X0 := X2
X1 := X2

There are no methods going on here, but we hit the same feature where the field assignments of X0 and X1 crash into each other at X2 and must be merged. YinYang will infer X2.A := X2.C while keeping X0 and X1 unrelated. If X2.A is a duck, then X2.C is also a duck, and duckiness doesn't seep into X0 or X1; nor do we have to know the specific objects that make it to X2.

Let's stop pretending that only methods can be parametric; fields are obviously parametric also. Any system that will work for YinYang has to handle the basic example above, which also gets at the essence of flexible overriding and subtyping.

OK, I'm not sure you

OK, I'm not sure you understood what I was asking, but I think you've answered it with this example. Your assignments don't have a usual interpretation. Let's look at this example:

X0.A := X0.B
X1.B := X1.C

X0 := X2
X1 := X2

Do those first constraints survive the latter ones? Does order matter (as it would if these were all imperative assignments), or is what you've written there equivalent to this?

X0 := X2
X1 := X2

X0.A := X0.B
X1.B := X1.C

Yes, order doesn't matter;

Yes, order doesn't matter; it's control flow insensitive. Those statements could execute in any order, any number of times, with arbitrary statements between them. Your order makes more sense; I stopped noticing the order in my examples :)

But I understood your fields

But I understood your fields to be mutable. How do you distinguish clobbering mutation from additional constraint imposition?

Reply back to level 0

Moving my reply back to the top level.

I think the fields *are*

I think the fields *are* mutable, it's only the analysis that doesn't consider order.

Long reply to nested comment

But I understood your fields to be mutable. How do you distinguish clobbering mutation from additional constraint imposition?

The fields are mutable, and we don't examine/analyze/use clobbering of field assignments. We don't examine which objects are assigned to a term at all, and instead do our analysis purely in terms of a control-insensitive assignment topology. Going back to my example:

X0.A := X0.B
X1.B := X1.C

We rely on the fact that, in each assignment, X0 (or X1) is bound to exactly one object during execution of that assignment; given concurrency, the compiler must cooperate here and load X0/X1 only once for the entire statement, which isn't unreasonable.

Now given:

X0 := X2
X1 := X2

It is possible that X0 is assigned to an object that was assigned to X2, and the same for X1, leading us to infer X2.A := X2.C: not that this assignment actually executes, just that the flow it establishes is now possible in the program. If we also had:

X3.D := X3.A
X0 := X4
X3 := X4

We would also infer X4.D := X4.B, completely independently of X2; that is, fields are type variables and assignments are parametric with respect to the common parent term of their left and right sides. We can do all of that without reasoning about control flow; only the operational semantics of our proof has to deal with it, and even then Lemma 2 lets us quickly abstract it away.
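As a sanity check, the inferences above can be modeled with a small Python sketch. This is a hypothetical reconstruction, not the system's actual rules: `infer`, `lcp`, and the two-rule closure are my own simplifications. Assignments are pairs of paths, replicated through their common parent and closed under flow transitivity.

```python
def lcp(a, b):
    """Longest common prefix of two paths (tuples of names)."""
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return a[:n]

def infer(assigns):
    """Close a set of assignments (lhs := rhs) under two rules:
    replication through the common parent, and flow transitivity."""
    derived = set(assigns)
    changed = True
    while changed:
        changed = False
        for (lhs, rhs) in list(derived):
            scope = lcp(lhs, rhs)
            for (t, u) in list(derived):
                new = None
                # 1. t := u where t is this assignment's common parent:
                #    replicate the whole assignment with t replaced by u
                if len(t) > 0 and t == scope and t != lhs:
                    new = (u + lhs[len(t):], u + rhs[len(t):])
                # 2. transitivity: lhs := rhs and rhs := u give lhs := u
                elif t == rhs and u != lhs:
                    new = (lhs, u)
                if new and new[0] != new[1] and new not in derived:
                    derived.add(new)
                    changed = True
    return derived

P = lambda s: tuple(s.split('.'))
program = {
    (P('X0.A'), P('X0.B')), (P('X1.B'), P('X1.C')),
    (P('X0'), P('X2')),     (P('X1'), P('X2')),
    (P('X3.D'), P('X3.A')), (P('X0'), P('X4')), (P('X3'), P('X4')),
}
result = infer(program)
assert (P('X2.A'), P('X2.C')) in result      # merged at X2
assert (P('X4.D'), P('X4.B')) in result      # merged at X4, independent of X2
assert (P('X0.A'), P('X0.C')) not in result  # X0 and X1 stay unrelated
```

The closure derives X2.A := X2.C and X4.D := X4.B while leaving X0 and X1 unrelated, matching the claims above; it omits the scope-weakening cases discussed below.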

There are cases where assignment's parametricity is non-existent; consider:

X5.A := X6.B

The common parent term in this assignment is root (the common parent of X5 and X6), and since root can't be assigned, this assignment will never be replicated parametrically. Instead, if:

X5 := X7

We only know that X7.A := X6.B. Likewise, the "scope" (common parent term) of an assignment can weaken as it is replicated through other terms; here is the example I use in my paper (with an explicit root):

R.T.A.F := R.T.G
R.T.A := R.B
R.B := R.T.C
R.T := R.S

Here we derive that R.S.C.F := R.T.G because the third assignment weakens the scope of the first assignment to root.

We can then reason about parametricity without ever going near a procedure or type variable; fields are type variables and assignments are as parametric as they can be without a control-sensitive analysis, which is necessarily expensive and hard for programmers to reason about anyways. Once we've done that, we can go back and add methods, traits, and so on encoded in terms of fields, where all operations on them (e.g. overriding, method call) can be encoded in terms of assignment for the purpose of type inference.

We can argue about whether the extra precision of doing everything through fields/assignment is worth it; I admit I don't yet fully understand the work on record or row polymorphism. We can also argue about whether this treatment is decidable/scalable/feasible. I believe it is, but it's a hard argument to make (hopefully the implementation will help).

Denotational semantics?

I'm basically lost. What does assignment mean in your language? Can you explain it without explaining how it works operationally? What does parametricity mean in this context?

I'm traveling this weekend, and so won't post any long replies until Monday.

Assignment doesn't deviate

Assignment doesn't deviate in meaning from other languages: its parametric behavior exists in Java, Smalltalk, ML, or any other language that supports mutable assignments. Operationally, the semantics of assignment are completely mundane and not interesting.

Assignment's parametric behavior can only be leveraged, however, through a non-trivial analysis. I'm not really sure how denotational semantics can be applied here, as it seems to be concerned with syntactic definitions, i.e., what one statement means in terms of its environment, whereas meaning in our treatment of assignment is tied up in the entire program. It becomes even weirder since our treatment is based on topology rather than more conventional flow. So honestly, I don't know: has someone done work to express denotational semantics over whole-program graphs? Perhaps my semantics are not compositional?

No, I'm loster than that

Assignments in Java depend on order. So it's not like those, evidently. What do you mean by parametricity? To me that's a result that captures the idea that you can only use values up to what you know about them. Is that what you mean?

I can write up what I thought you were doing (before this paper) and let you tell me what's wrong, but that will have to wait until later.

Ya, assignments do depend on

Ya, assignments do depend on order, operationally. But an analysis of assignments can be context/control-flow insensitive and still uncover their parametric properties, just not as precisely as if context/control flow were considered.

Put it this way: if we considered exact control flow completely (say by analyzing a trace of the program's execution), then we could compute exact type information for each term in the program because we know exactly how it will be used. Each assignment is completely parametric with respect to the object being assigned to, which is quite obvious in a trace where each instruction of the trace executes once (instructions that execute multiple times would be duplicated in the trace).

Ok, so that isn't very practical; maybe we could just do a very precise context/control-flow-sensitive analysis (e.g. k-CFA for k > 0): our types wouldn't be as good, but they would still probably be very good compared to what our type systems give us today. Parametricity falls directly out of the analysis being (very) polyvariant.

But that also isn't very practical. So what I'm claiming is that if we ignore control flow completely, don't bother to track object flows (but rather abstract over them), and don't bother to be context sensitive, we can still reliably extract very precise parametric behavior from an assignment, where parametric here means with respect to the terms assigned directly or indirectly to the common parent of the assignment.

This is a useful conversation, I appreciate your patience. I know this is really weird.

Okay. That's in line with

Okay. That's in line with what I understood before. So I may understand what you're doing after all. Here's a test example:

var x
x.scrollHeight = 10
x = new

The above would correctly infer that x was a scroll window, but would revert to the default height (10 would be clobbered by the later assignment). Yes?

If so, I think you should look more carefully at my proposal, because I don't think it suffers from the flaw you're suggesting (that it makes methods polymorphic but not fields). The reason that functions need class annotations is because they are the only places where 'new' can be called. The constraint system will capture the polymorphic nature of methods and fields.

In the case of your example,

In the case of your example, there would be a null pointer exception on the second line during execution. Perhaps you meant:

var x := new
x.scrollHeight := 10

A scrollbar would be allocated whose height was assigned to 10.

I don't understand how your approach works without functions, procedures, or methods. I'm only interested in the polymorphic nature of fields: assume you have a language with just records and absolutely no procedures at all; how does your approach work? Or are you saying that polymorphism should be limited to occurring only at method-call boundaries? Or that, somehow, method calls enable it?

I also don't understand how your approach handles overriding, where the caller has no idea about the callee's implementation of the method they are calling. So we have to "commit" to some set of type variables for a method without knowing how it will be implemented. If you want to think functionally, then compare with higher-order functions (virtual methods are not as expressive as HOFs, but they have similar signature issues).

I fixed my example to not

I fixed my example to not hit a null pointer exception.

What does polymorphism mean without methods or functions?

Overriding would work by solving constraints in a global setting, similarly to how I thought you were doing it: a setting where all constraints can be solved because you know how objects will be used.

Consider one method with two

Consider one method with two implementations:

trait Base:
  def Foo()

trait ExtA:
  var useHeight
  implement def Foo():
    var x := new
    x.scrollHeight := this.useHeight
    return x

trait ExtB:
  implement def Foo():
    var y := new
    return y

trait UsageA:
  def Bar(f):
    return f.Foo()

trait UsageB:
  def Main():
    f := new
    f.useHeight := 20
    var g := Bar(f)

Here, g in UsageB will most definitely be a scrollbar (and that type will be known). What should the constraint signature of Bar and Foo be then to remove fields as a concern in this analysis?

I'm just talking about a

I'm just talking about a naive expand-all-constraints-before-solving approach. I can write more tonight (US time). I think I would not worry about trait inheritance relations for signatures -- work directly with data flow. So if functions foo and bar can both be in a particular slot, then you need to be able to solve the constrained class parameters for both cases. But failed cases probably shouldn't be errors until that code path is actually taken. This is again going to be a global analysis and exponential worst case behavior, but it was my impression that you were already in that boat.

Worst case is bad (I'm not

The worst case is bad (I'm not sure if it's exponential), but the common case is quite good. In the above example, looking at data flow, the analysis should never have to examine ExtB to infer the type of g. I'm quite busy with the implementation right now to make that really so, which is the trickiest part of this work (i.e. I'm not so worried about soundness: I think I've shown that the system is sound, but showing feasibility means showing scaling).

Request for clarifications

I'll start back over here on the left.

I tried to write down this idea with a little more precision last night, but I got stuck on some details. When you invoke 'new', do you ever get back a non-tree DAG of objects? Do you allow x.Field1 := x.Field2 to be set up during construction of x := new? Are your references comparable?

Also, have you figured out recursion yet? For example, a list is a recursive data structure. It seems clear that you can't get an unbounded data structure to have full polymorphism in every slot all the way down. On the other hand, you need to correctly handle lists of lists, and give that case a reasonable type that doesn't merge the node type of the inner list with the node type of the outer list. It looks like you might need an explicit annotation to let the compiler know which nodes you're forcing to be the same type. Then all other nodes could be assumed fully polymorphic and you can report an error if you end up with infinitely many of them.

You can definitely

You can definitely write:

var x := new
x.Field1 := x.Field2

But it is nothing special: say Field1 and Field2 are in trait T; only the following assignments are inferred:

x := new
T := x
x.Field1 := x.Field2

In the assignment graph, you have an edge from T to x and an edge from x to new, along with edges between x.Field1 and x.Field2 (exactly as shown above); x is its own tree that happens to be connected from T. You create your own DAG, and inherit strong edges within T (via the reassignment judgment).

Recursion is what I'm working on, it is actually quite cool: we don't need to know the type of every term, just those materialized by the programmer in their code (ignoring encapsulated sub-objects for now, which require special treatment). Say we have:

trait Base:
  var val, next
trait T:
  def DoT():
    val.Quack() # Duck := T.val
    next.DoS()  # S := T.next

trait S:
  def DoS():
    val.Moo()  # Cow := S.val
    next.DoT() # T := S.next

T is recursive through S (S := T.next, T := S.next).

Now we create an x that is a T:

var x := new
x.DoT() # T := x

In Section 4, we use term height to determine what assignments to copy down to x, so we get decidability because the programmer can only materialize the term to a finite depth; e.g.

x.val           # a duck
x.next.val      # a cow
x.next.next.val # a duck

Life becomes interesting when fields "meet" under x because x is assigned to multiple terms:

x.DoT() # T := x
x.DoR() # R := x

Where R is:

trait R:
  def DoR():
    val.DoWildThing() # Wild := R.val
    next.DoQ()        # Q := R.next
trait Q:
  def DoQ():
    val.DoDomesticated() # Domesticated := Q.val
    next.next.DoR()      # R := Q.next.next

A meeting of R and T will work correctly:

x.val                          # a wild duck
x.next.val                     # a domesticated cow
x.next.next.val                # a duck
x.next.next.next.val           # a wild cow
x.next.next.next.next.val      # a domesticated duck
x.next.next.next.next.next.val # a cow

This all falls out directly from being lazy.

However, there are two major complexities here: first, you must materialize some weak edges between terms that meet to capture flows that transit through a meeting; e.g.

var i := x # T := x, R := x, i := x
while true:
  i.val := new
  # error: cannot create a wild domesticated duck cow
  i := i.next

Weak edges from T.val, S.val, R.val, and Q.val to i.val must be manifested because they all meet under x! This is recursive, but terminates because there are only a finite number of terms that can meet (there can be n^2 meetings, where n is depth, for recursive structures out of phase with each other, but in practice it's much less).

Additionally, for the height thing to work, which scaling depends on, we have to detect assignments to deeper terms and match them through assignments to shallower terms. The assignment to the deeper term might occur in S, while the matched shallower assignment might occur in R, or there might even be three terms involved in a single meeting (deeper, level, shallower), or even more. There is a reasonable way to deal with this, but this post is already too long!

Section 4 is incredibly important for the paper, but it is still such a mess, this is incredibly tricky!

I'm asking whether you can

I'm asking whether you can write only this:

var x := new

And then find that slots are aliased:

x.Field1.a := 0
x.Field2.a := 1
print x.Field1.a -- prints 1 due to aliasing

Is that possible? Or would you have needed to setup the aliasing explicitly with:

var x := new
x.Field2 := x.Field1

(I'm assuming that subobjects have their own identity and that aliasing is possible at all in these situations)

I'll look into your explanation of recursion later tonight. I counted three exclamation points in those final paragraphs, so expectations are high. ;)

Say:trait T: var Field1,


trait T:
  var Field1, Field2
  def init(): 
    this.Field1 := this.Field2

var x := new
x.Field1.a := 0
x.Field2.a := 1

That would definitely set the aliasing up as usual (assuming init is a constructor). Maybe what you are really asking is whether typical constructors are possible, and there is no reason why not (they don't help or hurt the type system).


Yes, whether arbitrary constructors are possible is indeed what I was asking about.

Weak edges

Can you explain weak edges? Specifically, you give the following example in your paper:

A := B
A := C
A.F := D

You say D < A.F and B < A, and then say that we must conservatively assume that D < B.F. I don't follow that step. I would think B < A should imply B.F < A.F. So we have D < A.F and B.F < A.F and C.F < A.F, capturing the intuition that anything you can do to A.F must also be doable to everything you put in A.F. My understanding of weak edges heads south from there.

weak assignment

Intuitively, if A := B and A.F := D, then A could possibly be assigned to an object bound to B, and B.F := D is definitely a possibility.

Weak assignments (:-) capture the essence of flow, so if A :- B, then B < A, but that is it: they are fundamentally weaker than strong assignment := (all strong assignments are weak assignments!). However, field sub-assignment (how A.F and B.F are related if A :- B) contains a latent backward object flow that can be activated when a field of the weakly assigned term is strongly assigned from another term. I used to have a unique operator for field sub-assignment, but it is kind of redundant since it can be expressed via the parents being weakly assigned (A :- B implies A.F and B.F are related by field sub-assignment).

Now, strong assignment (:=) likewise contains a latent forward object flow that is activated when paired with field sub-assignment, which is why it is stronger than :-. We can cash in on these latent flows using the reassignment judgment:

A := B
------ (strong-to-weak)
A :- B

A :- B
---------- (field subassignment)
A.F :- B.F

A :- B, A.F := D
---------------- (reassignment)
B.F := D

Note that we say A :- B, but this is really just getting at the field sub-assignment relationship between A.F and B.F with that latent backward object flow. Strong assignment then brings that backward flow out to derive B.F := D.

So we actually have three kinds of assignments to work with, and we could throw out weak assignment and work only with strong assignment, with its latent stronger forward flow, and field sub-assignment, with its latent backward flow. But that would require doubling our rules, since strong assignment and field sub-assignment share a common forward object flow that we can express easily as weak assignment.
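A toy fixpoint over the three judgments might look like this in Python. This is a sketch under my own simplifications (not the paper's formal system): field sub-assignment is applied only to fields that occur on the left of some strong assignment, which is the case the reassignment rule needs and keeps the closure finite.

```python
def close(strong):
    """Close a set of strong assignments (pairs of paths) under
    strong-to-weak, field subassignment, and reassignment."""
    strong = set(strong)
    lhs_terms = {x for (x, _) in strong}   # terms assigned to in the program
    fieldsub = set()                       # pairs related by field sub-assignment
    changed = True
    while changed:
        changed = False
        weak = strong | fieldsub           # strong-to-weak: every := is a :-
        for (a, b) in list(weak):          # field subassignment, applied only
            for x in lhs_terms:            # to fields occurring in the program
                if x[:len(a)] == a and len(x) == len(a) + 1:
                    pair = (x, b + x[len(a):])
                    if pair not in fieldsub:
                        fieldsub.add(pair)
                        changed = True
        for (af, bf) in list(fieldsub):    # reassignment: A :- B, A.F := D
            for (x, d) in list(strong):    # implies B.F := D
                if x == af and (bf, d) not in strong:
                    strong.add((bf, d))
                    changed = True
    return strong

A, B, D = ('A',), ('B',), ('D',)
derived = close({(A, B), (A + ('F',), D)})  # A := B and A.F := D
assert (B + ('F',), D) in derived           # the derived B.F := D
```

From A := B and A.F := D, the closure first derives the field sub-assignment A.F :- B.F and then cashes it in against A.F := D to obtain B.F := D, just as in the derivation above.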

Consider if we just had one assignment operator, say just strong assignment:

A := B
---------- (field subassignment)
A.F := B.F

A.F := B.F, A.F := D
-------------------- (reassignment)
B.F := D

This is tantamount to unification.

How the heck did I get here? I used to split the nodes up into high and low components, and then wire them to let transitivity do its thing. First, strong assignment:

A := B

A.H -> B.H
A.L -> B.H

And field sub-assignment:

A.F :+ B.F

A.F.H -> B.F.H
A.F.L <- B.F.H
A.F.L <- B.F.L

Weak assignment only exists to drive deriving more field sub-assignments:

A := B => A :- B
A.F :+ B.F => A.F :- B.F
A :- B => A.F :+ B.F

A :- B

A.H -> B.H

Note how the flow of split-node weak assignment is just the intersection of the flow of split-node strong assignment and field sub-assignment. If you pair up strong assignment with field sub-assignment, you can derive a strong assignment:

A :+ B 
A := C

C.H <- A.H -> B.H
C.H <- A.L <- B.H
       A.L <- B.L

So basically, B := C, and that is how reassignment works!
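That derivation can be checked mechanically; here is a small Python sketch of the split-node encoding (the node and function names are mine, following the edge listings above):

```python
def edges_strong(a, b):
    """A := B: forward flow from both of A's nodes into B's high node."""
    return {(a + '.H', b + '.H'), (a + '.L', b + '.H')}

def edges_fieldsub(a, b):
    """A :+ B: forward flow high-to-high, plus the latent backward flow."""
    return {(a + '.H', b + '.H'), (b + '.H', a + '.L'), (b + '.L', a + '.L')}

def reachable(edges, src):
    """All nodes reachable from src by following edges."""
    seen, todo = set(), [src]
    while todo:
        n = todo.pop()
        if n not in seen:
            seen.add(n)
            todo += [d for (s, d) in edges if s == n]
    return seen

# Pair field sub-assignment A :+ B with strong assignment A := C:
E = edges_fieldsub('A', 'B') | edges_strong('A', 'C')
# Transitivity wires up exactly the edges a direct B := C would add:
assert 'C.H' in reachable(E, 'B.H')   # B.H -> A.L -> C.H
assert 'C.H' in reachable(E, 'B.L')   # B.L -> A.L -> C.H
```

Both of B's nodes reach C.H through A.L, which is the edge pattern a direct strong assignment B := C would install.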

Even though this is quite explicit about forward and backward object flows, it is quite complicated to reason about and obviously very verbose; I also have no idea how to skip fields in such a system, which is essential to getting a reasonable implementation. It was a huge advance when I figured out that I could describe everything with strong/weak assignments and abandon the split-node model, leaving it to a reassignment rule to cash in on the backward and strong forward flows. Split nodes also provide no way to reason about parametricity, at least as far as I can tell so far.

Notation confusion

I see the main thrust is that you're using := in the example snippet I copied from the paper in a way that embodies both a subtyping constraint on values and assignability of fields. I gather you're saying here that you used to break these out into independent concepts and found it to be a pain, but it still seems confusing to me. I'll think about it and see if I either learn to love it or can suggest something else.

Couple things

Consider writing an immutable list:

trait ImmutableList:
    var isEmpty := True
    var head
    var tail

def cons(x, xs):
    var newlist := new
    require newlist = xs  -- HERE
    newlist.isEmpty := False
    newlist.head := x
    newlist.tail := xs
    return newlist

How would you write the line labeled HERE? Basically, it's a way to enforce that the list is homogeneous. The elements there aren't required to have exactly the same type, but should be unified such that whenever you attempt to use one element of the list it automatically infers those traits for all elements. It looks like you'd want some mechanism to handle that, even if you didn't use it to resolve recursion.

I still haven't figured out your :- syntax. Don't you need to keep track of a write bound and a read bound separately on each mutable field? The write bound will always be a refinement of (a more specific type than) the read bound, but it looks to me like you'd need both for each mutable cell. And I was thinking that maybe that's what you were accomplishing with :-, but then I don't see how that could distinguish between longer permutations like A.Field1_read.Field2_write.Field3_read, etc.

With a type annotation you

With a type annotation you could induce homogeneous typing artificially:

def cons(x, xs):
  var newlist := new
  newlist is xs      # typewise like xs := newlist 
                     # but a no-op at run-time
  newlist.head := x
  newlist.tail := xs
  return newlist

All requirements of xs become requirements of newlist, which transfers down to fields.

You don't have to keep track of read and write bounds, because on a field assignment, the reassignment judgment will connect the assigned term to whatever field could reach it. Longer permutations just result in larger graphs with more copied assignments.
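A rough Python sketch of that backward requirement flow (hypothetical: the `propagate` function, the 'Quackable' requirement, and the path encoding are all invented for illustration). Requirements attached to the assigned-to side of each assignment, including its fields, flow onto the assigning side; `is` is modeled as a type-only assignment.

```python
def propagate(assigns, reqs):
    """assigns: pairs (lhs, rhs) meaning lhs := rhs; reqs: path -> trait set.
    Requirements on the assigned-to side (and its fields) flow backward
    onto the assigning side until a fixpoint is reached."""
    reqs = {k: set(v) for k, v in reqs.items()}
    changed = True
    while changed:
        changed = False
        for (lhs, rhs) in assigns:
            for p in list(reqs):
                if p[:len(lhs)] == lhs:        # p is lhs itself or a field of it
                    q = rhs + p[len(lhs):]
                    got = reqs.setdefault(q, set())
                    if not reqs[p] <= got:
                        got |= reqs[p]
                        changed = True
    return reqs

xs, newlist, x = ('xs',), ('newlist',), ('x',)
program = {
    (xs, newlist),               # newlist is xs: typewise like xs := newlist
    (newlist + ('head',), x),    # newlist.head := x
    (newlist + ('tail',), xs),   # newlist.tail := xs
}
usage = {xs + ('head',): {'Quackable'}}  # somewhere, xs.head.Quack() is called
final = propagate(program, usage)
assert 'Quackable' in final[newlist + ('head',)]  # via `is`, down to fields
assert 'Quackable' in final[x]                    # and backward to the element
```

A usage requirement on xs.head transfers through the `is` annotation to newlist.head, and from there backward to the element x, which is the homogeneity the cons example asks for.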