Mixed-Site Variance

Ross Tate is calling for "Industry Endorsement" for his paper Mixed-Site Variance.

...this is an attempt to make industry experience admissible as evidence in academic settings, just like they do in industry settings.

Abstract:

Java introduced wildcards years ago. Wildcards were very expressive, and they were integral to updating the existing libraries to make use of generics. Unfortunately, wildcards were also complex and verbose, making them hard and inconvenient for programmers to adopt. Overall, while an impressive feature, wildcards are generally considered to be a failure. As such, many languages adopted a more restricted feature for generics, namely declaration-site variance, because designers believed its simplicity would make it easier for programmers to adopt. Indeed, declaration-site variance has been quite successful. However, it is also completely unhelpful for many designs, including many of those in the Java SDK. So, we have designed mixed-site variance, a careful combination of definition-site and use-site variance that avoids the failings of wildcards. We have been working with JetBrains to put this into practice by incorporating it into the design of their upcoming language, Kotlin. Here we exposit our design, our rationale, and our experiences.

It is also mentioned on JetBrains' Kotlin blog.


Actually...

Actually one of the better papers I’ve read—a simple presentation of a theoretically elegant solution to a practical problem. Now if only Kotlin were a bit more innovative...

Why is declaration site

Why is declaration site variance necessary at all? If you take a more structural view, it seems unnecessary. If you have a type Foo<T> where T is only used in output positions, then it's automatic that Foo<Int> is a subtype of Foo<Number>: you simply view Foo<Int> as Foo<T> with all T's replaced by Int, and then do what structural type checking would do. Use-site variance also has a simple structural interpretation: if you have Bar<T>, then writing Bar<in A out B> simply means that instead of substituting all occurrences of the type parameter with a single type, you substitute all occurrences in input position with A and all occurrences in output position with B.

You could even do a translation from the language with use-site variance to a subset of the language without it, by replacing each type parameter T with a pair of type parameters T_in and T_out, where T_in is substituted in input positions and T_out in output positions. So class Bar<T>{...} becomes class Bar<T_in,T_out>{...}, a usage of Bar<T> becomes Bar<T_in,T_out>, and Bar<in A out B> becomes Bar<A,B>. Or am I missing something here? (Of course this translation doesn't work with higher kinds, since for a non-ground Bar it's only by convention that Bar is covariant in its first parameter and contravariant in its second, but the same idea would work if you add that extra knowledge to the type checker.)

Full example:

class Bar<T> {
   T foo(T x) { ... }
   Bar<T> slice(int n, int m) { ... }
}

Becomes:

class Bar<T_in,T_out> {
   T_out foo(T_in x) { ... }
   Bar<T_in,T_out> slice(int n, int m) { ... }
}
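For what it's worth, Kotlin's actual use-site projections give a runnable (if restricted) version of this intuition: a single out projection keeps only the output positions of the parameter usable. Here is a minimal sketch; the Box class is invented for illustration, and Kotlin only allows one projection per use, not the combined in A out B form discussed above:

class Box<T>(private var value: T) {
   fun get(): T = value          // T in output position
   fun put(x: T) { value = x }   // T in input position
}

fun main() {
   val ints: Box<Int> = Box(1)
   val readable: Box<out Number> = ints   // OK: only the output side is kept
   val n: Number = readable.get()         // still allowed
   // readable.put(2.0)                   // rejected: input positions are projected away
   println(n)
}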

If you take a prescriptive

If you take a prescriptive typing view, variance annotations are also unnecessary.

Variance inference

What you're saying is not so much that variance isn't needed, but that it can be derived in many cases. This is trivially true for all cases where a type declaration is concrete, i.e. has a definition. OCaml does that for type aliases, for example.

In general, however, types can be abstract, and if you have higher-kinded abstract types then you need to look at their uses, i.e. do real variance inference. Just consider (using Scalaish syntax):

f[C[_]](x : C[Dog], y : C[Cat]) : List[C[Animal]] = [x, y]

You need to infer from the list expression [x, y] that C has to be covariant -- and this is still a simple case. I don't remember any work that explores this (but I haven't looked).

Another problem with inferring variance, at least in the case of classes, might be brittleness and/or exposure of implementation details. That is, a class might be variant unintentionally, and users might start relying on that, and then you cannot extend the class anymore (or modify its implementation) in ways you may have intended. Not sure how much of a practical problem that would be, but it's always better if you can make all parts of a typing contract explicit, at least as an option.

I'm not proposing to infer

I'm not proposing to infer variance for non-concrete types, or to remove variance; just that declaration-site variance seems to be unnecessary (plus a simple way to implement use-site variance, though since I know nothing about the existing approaches that may be well known).

As written, that function f looks to me like it should just be a type error. You'd have to write that function as follows:

f[C[_]](x : C[Dog], y : C[Cat]) : List[C[out Animal]] = [x, y]

To make sure that only the outputting methods of C are accessible for the elements in the list.

If you really want to avoid implicit variance of a concrete class, you could always add an extra dummy method that makes it invariant. Note that the variance of a generic class doesn't depend on its implementation, only on its type signatures. I agree that it may be undesirable for existing code to stop type checking when you add an extra method to a class, but I don't think this would be a big problem in practice. Arguably, it's only natural that changing a type signature may cause type errors in code that uses that type. Other constructs already have exactly the same situation: for instance, if you have object literals that can implement interfaces, then adding a method to an interface will cause existing code to signal type errors.
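As a concrete illustration that variance follows from the signatures alone (a Kotlin sketch, not from the paper): with declaration-site variance the compiler enforces exactly this, so what breaks covariance is adding a signature that mentions the parameter in an input position, regardless of the implementation.

class Producer<out T>(private val value: T) {
   fun get(): T = value
   // fun put(x: T) {}   // adding this single signature is rejected by the compiler,
                         // because T would then occur in an input ('in') position
}

fun main() {
   val p: Producer<Number> = Producer(1)   // covariance: Producer<Int> is a Producer<Number>
   println(p.get())
}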

Semantics of C[out T]

Andreas mentioned that it would be implicit that C was covariant. So he's thinking the inferred type would quantify only over covariant C. You seem to be thinking that each type functor has covariant and contravariant parts, as in your proposed translation above, so that you quantify over all functors C and then can ask for the covariant part. That seems like it's going to have a problem with higher rank types.

In any event, trying to sort through these issues seems like a good exercise to require of anyone contemplating rampant subtyping constraints for their new language.

Hm

Hm, okay, I guess I never felt very comfortable with use-site variance, and find this way of writing the example rather unintuitive. But YMMV.

I'm not sure how your variance splitting translation for generic parameters can be sufficient. Wouldn't you also need to split every class itself into a co- and contravariant version? If you think of nominal types (like classes) as a contrived form of (globally bound) existential type variables, then that would be a natural consequence. That is, shouldn't

class Bar { Bar f(Bar) }

become

class Bar_in { Bar_in f(Bar_out) }
class Bar_out { Bar_out f(Bar_in) }

Otherwise, I don't see how you could instantiate a generic with a class correctly.

A related question is how your scheme would deal with invariant uses. Wouldn't you actually need to split everything into three versions for that?

I'm not sure I follow what you're saying in the second paragraph. If there is no def-site variance, then the notion of "variance of a generic class" does not even exist, right? OTOH, if you do have def-site variance, then the variance of a generic class can depend on its implementation, because that may make certain assumptions about it, say, inside a method that self-references the class type internally.

It Should Always be Legal to Put a Turtle into an Array of...

It Should Always be Legal to Put a Turtle into an Array of Animals.

The authors "industry experience" appeal is B.S.

Java sucks because they designed arrays and primitives wrong from the start, and generics design had to deal with that mess.

While I think the author's idea is cleaner than wildcards, the appeal to authority weakens the paper considerably. On the other hand, since Kotlin is a JVM language and the JVM handles arrays in a stupid way that doesn't make sense once you have generics, I fail to see what we are actually doing to truly help the core education problem, or what we are doing to prevent run-time errors by catching them at compile time, with good error messages.

Another way to look at this is revealed by Andrey in his blog post. Rather than try to target "industry experience", he points out a simple decidability argument. It seems fair to me that type judgments should be decidable before we measure whether programmers can understand the outcomes. In other words, if a computer can't figure it out, what hope do people have?

My question is where this gets interesting. There are type systems which are inconsistent and still incredibly useful, even though they can lead to some wacky code getting executed. We already passed through this territory when debating Gilad Bracha's position statement, Subsumption At All Costs. Gilad is probably the best appeal to authority Ross Tate can hope for, even though Gilad isn't speaking specifically to Ross's scenario.

Exact Gilad quote

The type system is unsound, due to the covariance of generic types. This is a deliberate choice (and undoubtedly controversial). Experience has shown that sound type rules for generics fly in the face of programmer intuition. It is easy for tools to provide a sound type analysis if they choose, which may be useful for tasks like refactoring.
Gilad Bracha, Google's Dart announced (emphasis mine)

Weird science

I'm not sure I understood the aims here:

Our reviewers may very well like what we have to say, but they may not be comfortable accepting a paper founded on hearsay into a scientific venue. Indeed, after reading positive reviews and then being rejected, through side channels I was informed such concerns were this paper's downfall, and I was encouraged to try unconventional avenues. So, this is an experiment to enable industry to encourage research directions they would like explored despite the difficulty in scientifically backing claims. In other words, this is an attempt to make industry experience admissible as evidence in academic settings, just like they do in industry settings.

When empiricists look around, they gather data that both supports and contradicts their hypothesis. This sounds like only rounding up cases of the former. It's great to see someone like Ross Tate experimenting with non-analytic approaches to type systems, so my comment is more of a nit. I'm not sure of what's a good way to go about empirical PL research, so hopefully his process will be revealing!

Warning: controversial rant

Warning: controversial rant ahead.

Design is about solving problems; design solutions should be presented to the community so that we can critique and build on each other's solutions. Rigorous empirical, scientific, theoretical evaluations are nice to have, but not always possible nor very useful.

In my opinion, the problem is that the PL community, and the OOPSLA community specifically, has drifted away from their earlier focus on solving problems into some sort of scientific navel gazing. A focus on scientific evaluation has led to a plethora of low quality publications that fit the rigor requirement and have crowded out design papers that actually solve problems! So OOPSLA might accept 60 (!!) papers this year, but most of them will be uninteresting because...science!

It gets much worse than that: our publication venues are preventing us from being innovative and inventive; it is too difficult to do something new and get recognized for it. Instead, we have fallen into the trap of small incremental refinements that can take advantage of lots of previous rigor. None of the new languages coming out these days are really new at all.

Yes, interesting work can still be presented in more innovative venues like Onward! and Strange Loop, but why? Where are we going as a community?

Preaching to the choir

Rigorous empirical, scientific, theoretical evaluations are nice to have, but not always possible nor very useful.

As an analogy, cheap genetic sequencing caused a methodology revolution in bio (and a ton of lessons), but wasn't practical for a long time. I don't know how many Ph.D.s have been devoted to verification techniques -- what if we similarly invest in empiricism? That should bring down the cost of application, so my questions have been more about when they'd be worth it and what's the route.

I agree that dominant academic schools of thought are severely boxing in design research, but you already know that ;-) However, punishing empiricism sounds like barking up the wrong tree. E.g., empiricists have a poor showing at R1 institutions, and I can only think of one guy who might make the jump any time soon.

My attack on empiricism has

My attack on empiricism has nothing to do with the true empiricists, but with many people's misguided notion that empiricism is a real solution to our design research problems. We do not need to replicate the well known problems of HCI in PL!

Truly useful empirical research takes a lot of time, planning, resources, and has limited applicability even when performed correctly.

Part and parcel

My attack on empiricism has nothing to do with the true empiricists, but with many people's misguided notion that empiricism is a real solution to our design research problems.

It weeds out incorrect theories through testing and heralds new ones by revealing phenomena, so I have to disagree at least based on theory. In terms of expected impact for the next 5-10 years, I'm curious. The internet, source control, mechanical turk, quantitative revolutions in social sciences, etc. have revealed many opportunities.

We do not need to replicate the well known problems of HCI in PL!

Counting on HCI folks to solve PL problems is not a good idea.

Truly useful empirical research takes a lot of time, planning, resources, and has limited applicability even when performed correctly.

Brad Myers writes as much as well. I'm more optimistic in two ways. First, there's probably a lot of useful low-hanging fruit, such as Shriram's experiences with error messages. It shouldn't be so hard to run conceptually simple trials like that, so I suspect proper tooling and automation can take away a lot of the bite -- there are few established norms and techniques here. Second, data driven design is still an emerging concept. E.g., Stackoverflow is now the default tool for documentation and debugging help. I doubt it'll be the last data-driven tool.

Nails and hammers

The internet, source control, mechanical turk, quantitative revolutions in social sciences, etc. have revealed many opportunities.

There are plenty of new opportunities to know things; it's knowing things related to the problem you are trying to solve that is important. If this is your problem, or you can bias the problems you work on based on what you can objectively know or discover, great! But many of us do not have this luxury.

Counting on HCI folks to solve PL problems is not a good idea.

Nor is transplanting the dysfunctions of the HCI community into the PL community.

First, there's probably a lot of useful low-hanging fruit, such as Shriram's experiences with error messages. It shouldn't be so hard to run conceptually simple trials like that, so I suspect proper tooling and automation can take away a lot of the bite -- there are few established norms and techniques here.

Low-hanging fruit that is useful for something might not be useful for you. 99% of what UX designers do, however, is not within the realm of the low-hanging HCI fruit; I don't see why this would be any different for PL designers.

Second, data driven design is still an emerging concept. E.g., Stackoverflow is now the default tool for documentation and debugging help. I doubt it'll be the last data-driven tool.

I'm sure data has a big role to play in our future, but pure data-driven design as practiced at Google, say, is already known to fail. Data can't replace design, it can only augment it.

Big returns from automation, generality, and accumulation

Low-hanging fruit that is useful for something might not be useful for you. 99% of what UX designers do, however, is not within the realm of the low-hanging HCI fruit; I don't see why this would be any different for PL designers.

I think there'll be a bunch of general design and implementation techniques that pop out. Examples include Shriram's error message technique, Stackoverflow, and the 'rules of thumb' that come out of Andreas Stefik's work, such as how to name a for-loop construct.

It's also a great example of something that developers have little time to do today, so automation would make it scalable.

Whether empiricism belongs at every step of the way: it's easy to say 'no' today, but if research continues it won't be so obvious.

I'm sure data has a big role to play in our future, but pure data-driven design as practiced at Google, say, is already known to fail. Data can't replace design, it can only augment it.

Agreed -- I view data as an ingredient for creating new types of designs, and in the case of empiricism, a testing strategy, and more ambitiously, a step along the path towards understanding why designs succeed or fail.

A lot of heuristics involved,

A lot of heuristics are involved, and we still aren't sure what the quality of these signals is; worse than nothing is still a possibility.

Agreed -- I view data as an ingredient for creating new types of designs, and in the case of empiricism, a testing strategy, and more ambitiously, a step along the path towards understanding why designs succeed or fail.

Isn't it that some great designs leave a nice tingling aloe feeling in our brain and some horrible ones give us massive headaches? To understand why is to understand the self, and I bet we are a long way away from that. Some of it (beyond visuals even) really is aesthetics that we can't even begin to quantify, like a beautiful vs. an ugly syntax.

Magic bullet

Agreed, I'm not claiming a magic bullet. Instead, I expect several big and many small advances, and can even argue that they already have occurred. Some of the advances can even foster design by providing foundations (e.g., social or cognitive) rather than building on mud.

In a similar vein...

...you might enjoy Richard P. Gabriel's Onward 2012 paper, The Structure of a Programming Language Revolution. It's basically his personal reflections on how the programming languages field greatly increased the level of mathematical rigor it required, and how this changed the character of the field. While mathematical rigor is of course different from empirical rigor (and I have the ICSE/FSE rejections to prove it), RPG's complaint seemed very similar in spirit to your own.

It's a very interesting read, though I disagree with his assessment of what happened. In my view, what happened is the publication of the Wright/Felleisen methodology for proving syntactic type soundness. It is so easy, and works on so many different languages, that it radically lowered the cost of building sound type systems. So people made a lot more of them! I also think that the type systems gold rush is over (and has been for some time): people are now trying to design languages and prove programs with properties that go well beyond what progress/preservation can say.

This makes me wonder if there were any major empirical successes analogous to the Wright/Felleisen paper which triggered a transformation in the culture of the practically-oriented end of the field. Maybe the appearance of large open-source Java codebases with detailed commit histories?

I went to RPG's talk at

I went to RPG's talk at SPLASH; I think the main gist of his essay was about how reality is however we view things today, regardless of past meanings (incommensurability). I don't think this applies to my problem, which is not really about the past, but about how those who were given so much freedom to innovate and invent in the past have imposed a stupid system on the present that prevents us from having the same opportunities.

The paper I'm working on basically tears up all previous type systems work. I'm either being very inventive or I'm chemically imbalanced somehow (I haven't decided yet). I think some of our problem IS that we build too much on the past, and we don't try different paths not taken because we know we can't get published if we do.

please to keep on with the tearing

it is nice to see good radical thought.

Done.

Okay, did type theory. What's next?

I'm of the opinion that the field is now looking for the next major theme to focus on advancing. People have been working on static type theory for over a decade now, and the low-hanging fruit there (read: the really important, solvable problems) is gone.

So, from the point of view of 'languages are what people can make good programs with', what are the important, solvable problems now?

"What's next?" opinion

'languages are what people can make good programs with',

Whether a program is "good" or not is a good question and also a question that's beyond any possible theory of programming languages.

For example, sometimes it might be an ethical question. Suppose that we have a possible programming language feature the main function of which is that it helps enable writing programs for "the cloud". Or maybe instead it helps enable writing programs using 5,000 programmers none of whom really see the big picture of the program.

Making a language to help with those things, or deciding not to, or some other possibility --- these factor into whether or not language work is helping people to write "good" programs.

Political economy, such as the expected relations of authority and work splitting among programmers, figure heavily into the choices made by programming language designers and implementers.

One direction to explore, one that might contain some "low hanging fruit" of reasonable value, is to build a better theoretical understanding of the relationships between programming language constructs and the kinds of political economy that those constructs support, encourage, obstruct, or discourage.

Such concerns are latent in a lot of prior work on programming languages. Consider how we talk about various combinations of concepts of "safety", "modularity", and "separate (or independent) development".

Perhaps there is opportunity to better understand those relations. And perhaps, because the questions revolve around political economy, it is an interdisciplinary opportunity.

If we are going to spread bullshit like "Don't be evil." we need a better story about "good".

Unethical strategies -- strategies that scale?

For example, sometimes it might be an ethical question. Suppose that we have a possible programming language feature the main function of which is that it helps enable writing programs for "the cloud". Or maybe instead it helps enable writing programs using 5,000 programmers none of whom really see the big picture of the program.

Hmmm... If it's unethical to enable writing programs using 5000 programmers none of whom really see the big picture of the program, then wasn't OO methodology unethical in the first place? I mean, making plug-together sub-units that are as independent of each other as possible has been the whole focus of OO.

Hasn't our ideal been that someone who doesn't even hardly know programming can sit down, plug together parts made by hundreds of other people none of whom had any idea that their parts would be used in this program, tie it into some kind of UI generator, and get a program that works the way he wants?

How is that different from employing thousands of programmers who have no idea what programs their parts will be used in?

Ray
Bear

making plug-together

making plug-together sub-units that are as independent of each other as possible has been the whole focus of OO

That's an idealized view of OO. In practice, OOP is not compositional. There is no generic way to combine two objects. OOP has not historically been relevant to mashup designs or application-layer components. I am not aware of any OOP language that even achieves pluggable or composable subprograms, much less focuses on them. The focus of OO (in practice) seems to be inheritance-like relationships - prototyping, subtyping, interfaces.

Regarding models where thousands of programmers can develop systems together without anyone seeing the big picture, you are better off looking at multi-agent systems, topic-based publish-subscribe models, collaborative state models (e.g. tuple spaces, shared document editing, blackboard metaphor), and concurrent constraint models.

Hasn't our ideal been that someone who doesn't even hardly know programming can sit down, plug together parts made by hundreds of other people none of whom had any idea that their parts would be used in this program, tie it into some kind of UI generator, and get a program that works the way he wants? How is that different from employing thousands of programmers who have no idea what programs their parts will be used in?

They are duals. In the former case, the context names a reusable component. In the latter case, the plug-in component names the extensible context. The notions of "reusable component" vs. "extensible system" are not incompatible, but they certainly aren't the same.

There are related concerns for how state is modeled, i.e. is it part of the component or part of the context? What does it mean to have multiple 'instances' of a reusable component?

Verifying compiler

I respectfully disagree that all the "really important, solvable problems" are gone. I still have no practical way to invest a reasonable effort into proving that the code I write is correct with respect to the specification I can write -- in the cases when I know what such a specification is -- or at least some reassuring properties of this code.

I want to be able to write a specification or some properties along with my code. There would be an interaction loop with the verifier telling me which part it couldn't verify and why, during which I would add hints/annotations/proofs (of reasonable size) to the code, until the full verification completes.

I believe this is really important, and feasible; Tony Hoare argued as much when he proposed the Verifying Compiler as a Grand Challenge for Computer Science Research (pdf) in 2003. The ideas of how to proceed have been there for a long time now, but the proof automation technology is constantly improving. So we are "mostly there", but not quite. You need to solve the hard problems, and then you need to integrate the solutions into a mainstream-enough programming environment for people to use them.

So far I don't know of any such effort having been used outside the research team it originates from. There are known feasible ways to write proved-correct code, but they involve a radical change of programming techniques (e.g. writing the program directly in Coq, as CompCert does). There are also usable partial-correctness tools, e.g. SPARK/Ada or static analysis plugins for Java/C#, but to my knowledge they're not meant to specify functional correctness properties. Static contracts technology is coming close, but I'm not aware of realistic non-insider uses.

I still have no reasonable

I still have no reasonable way to write a specification that is correct with my intended goals. Heck, I have no reasonable way of figuring out what my goals are beyond writing code and experimenting.

Even if this magical verification technology comes out, I have a feeling that we would just get hung up on the specifications, which become more like code themselves. Perhaps we can do away with the specifications and get some assurances on null pointer exceptions and such, but I still don't think my life will be much better than it is now.

Not that I think this field isn't worthwhile; surely it is. Just that I think the role of safety and security is often overstated, with various bogeymen invoked to scare funding out of DARPA. The amount of resources consumed by security concerns in PL is so large that it has been a long time since anything related to real productivity increases was adequately funded. This is another one of my pet peeves.

That goal is an infinite regress.

If you want to prove that the code you write is correct with respect to the specification, what you're talking about is a "specification" language that you program in.

It's been my experience that the people who write specifications rarely produce a self-consistent specification in the first place (ie, a specification for a program which can exist without logical contradiction). Those who do produce a self-consistent specification leave so much ambiguity that there are an infinite number of different programs having vastly different semantics which could trivially be said to "meet" the specification, the vast majority of which would be rejected by the specification writers who will claim that that isn't what they meant. This is because they are trying to put in a "reasonable" effort rather than the amount of effort required to actually program.

The methodology of programming languages (and the specification language you require) is to be able to write down something that has one (or very few) self-consistent specific semantic interpretations. And generally in the case that there is more than one semantics which can result from code, we regard writing code that actually permits a difference in result depending on such ambiguity as "bad style" at the very least, or "a logic error" if we are being less charitable (Well, unless specifically randomizing for simulation or some such purpose).

If you develop a specification language that has only one (or very few) self-consistent specific semantic interpretations, then why bother programming at all? That language is a declarative programming language in the first place. If writing such a precise specification winds up being less work than just coding, then you'll have invented a really *good* programming language. Somebody will then write a compiler for it, turning specifications into programs. Then you'll wonder how to prove that the specifications are correct.

Writing such a detailed, consistent specification cannot be less effort (nor, unfortunately, less error-prone) than writing code, and for exactly the same reasons. The quest for such a specification language is the same as the quest for a better language to program in.

In fact, IIRC, this is how the Lisp language came into existence. McCarthy developed a language for formally reasoning about programs, provability, and semantics, and was as surprised as anybody when somebody wrote a compiler. And people can still write Lisp programs that don't meet specifications, or meet them without meeting actual requirements.

Specifications aren't computable

The general difference between a specification language and a programming language is that the latter needs to produce an executable of some sort.

It's been my experience that the people who write specifications rarely produce a self-consistent specification in the first place (ie, a specification for a program which can exist without logical contradiction).

This is precisely what makes specifications so useful. Your compiler can't verify that your axioms align with reality, but it can tell you that they're consistent. With a sufficient level of specification, this will identify the vast majority of bugs (in my estimation).

Platitudes

I know that specifications have problems themselves: they can be impossible to pin down, or be written down incorrectly. However, in my experience a lot of the small bricks of software we compose to build larger (and loosely defined) applications do have a clean interface and specification. Think of the functions of your standard library, the compiler of your language, the small command-line tools of your UNIX userland, etc. Specifying those should already be done, yet in practice we are still getting functional errors and security issues from this part of our software base.

Specifying often amounts to writing code; one good example is that the specification of a sorting function, depending on how you define the logical "is a sorted sequence" predicate, often precisely corresponds to an insertion-sort implementation. But note that the actual implementation of the sorting function can be much more complex, low-level, efficient, etc.
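As a toy illustration of that point (a Kotlin sketch with invented names, standing in for the logical predicate rather than for any particular verification tool): the "specification" below is a naive insertion sort kept simple enough to read, and the production code is only required to agree with it.

// Naive insertion sort: simple enough to read as a specification.
fun referenceSort(xs: List<Int>): List<Int> =
   xs.fold(emptyList<Int>()) { acc, x ->
       val i = acc.indexOfFirst { it > x }.let { if (it < 0) acc.size else it }
       acc.subList(0, i) + x + acc.subList(i, acc.size)
   }

// Stand-in for a more complex, optimized implementation.
fun productionSort(xs: List<Int>): List<Int> = xs.sorted()

fun main() {
   // A crude random check; a verifying compiler would prove agreement for all inputs.
   repeat(1_000) {
       val xs = List((0..20).random()) { (-100..100).random() }
       check(productionSort(xs) == referenceSort(xs))
   }
   println("production sort agrees with the reference specification on sampled inputs")
}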

The idea that code can serve as specification is well-known, and it is still a net gain if this code is simpler to write, maintain and reason about than the actual code running in production. (Moving the trusted code base from 50K lines of implementation to 1K lines of specification is a win.) This idea is an integral part of the Bedrock project for example:

Bedrock is [...] computational: Many useful functions are specified most effectively by comparing with "reference implementations" in a pure functional language. Bedrock supports that model, backed by the full expressive power of Coq's usual programming language.

Confirmation bias

It is easy to see cases where specification could have prevented problems, and much harder to see the many other cases where it couldn't. If I were to look at the standard libraries I used today: what does the formal specification for Canvas look like, anyway? How often do I bother sorting something, and how much less often do I bother writing a sort function? Real programming often isn't very algorithmic.

How many us would benefit from taking the Bedrock approach, considering the code we wrote today?

So, from the point of view

So, from the point of view of 'languages are what people can make good programs with', what are the important, solvable problems now?

Productivity, tools, having the computer do more thinking for us, lowering the threshold needed to become a good programmer and spending less time doing tedious crap. Languages are also still poorly designed, inconsistent, unprofessional, and such.

Unprofessional?

Productivity, tools, having the computer do more thinking for us, lowering the threshold needed to become a good programmer and spending less time doing tedious crap.

Okay, agreed on that.

I've been having some woolly ideas about formalizing the interface requirements between modules in a better way than has been done before which may play into that goal. Sadly, it amounts to more type theory. And I'm bored of type theory. Still, it may be worth explaining, or making a few blog posts about, because it really would make modular design a lot easier.

We've spent a lot of effort forcing subunits of programs (objects, if your style runs that way) to document what interfaces they PROVIDE. But as we plug them together, the problem of forcing them to document (and possibly even standardize) what interfaces they REQUIRE has become by degrees just as important. That problem is, as yet, nearly unaddressed.

As matters stand, nearly every "modular" program subunit comes with a forest of specific, non-substitutable dependencies and libraries that cost efficiency and memory and bloat our programs with redundant crap when some other object requires different dependencies and libraries that provide essentially the same services, but with different argument orders and/or minor additional or missing functions.

Simply speaking, you have modularity on the "provides" side - nice for building new stuff on top of - but utter rigidity on the "requires" side, so what supports the functionality is essentially frozen and accumulates efficiency-sucking layers between new code and hardware forever.

We've seen ad hoc efforts to do something about this, such as the rise of virtual-machine languages and cross-platform libraries. Versions of the same virtual machine or the same library implemented to run on different platforms are deliberately substitutable infrastructure. Ditto runtimes for many languages. Access to the screen, for example, may be accomplished in entirely different ways under Linux and on a Nook tablet, but the runtime abstracts access and provides a uniform interface.

But this kind of effort toward substitutability - even the effort to *document* requirements so that substitutability could be attempted - rises very rarely into the realm of code written by the people who use these systems. And it has become at least as important as what interface is provided.
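To make the "requires" side concrete, here is a hypothetical Kotlin sketch (ScreenAccess, StatusBar, and the two providers are invented names): the subunit documents the interface it requires as an explicit type, so any conforming provider can be substituted underneath it.

// The REQUIRED interface, documented and standardized as a type.
interface ScreenAccess {
   fun drawText(x: Int, y: Int, text: String)
}

// A subunit that PROVIDES a status bar and REQUIRES only ScreenAccess,
// rather than hard-wiring one specific graphics library.
class StatusBar(private val screen: ScreenAccess) {
   fun show(message: String) = screen.drawText(0, 0, message)
}

// Two substitutable providers of the same required interface.
class LinuxFramebuffer : ScreenAccess {
   override fun drawText(x: Int, y: Int, text: String) = println("fb[$x,$y]: $text")
}
class TabletCanvas : ScreenAccess {
   override fun drawText(x: Int, y: Int, text: String) = println("canvas[$x,$y]: $text")
}

fun main() {
   StatusBar(LinuxFramebuffer()).show("hello")   // same code, either backend
   StatusBar(TabletCanvas()).show("hello")
}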

Languages are also still poorly designed, inconsistent, unprofessional, and such.

Heh. I think that "unprofessional" means something which, apparently, you don't think it means. Let's face it, hardly anybody gets paid money for building a new language. Academics may get a paper or even a dissertation out of it, but the vast majority of new languages are still implemented "for the fun of it" or "for the hell of it" or "as an experiment" or "because I have pet peeves about programming languages that I want to address" by people who are both unacademic and unprofessional. Nothing is going to stop them, and IMO nothing should.

If you believe that more of them should be paid, or that more of them should be academically recognized, or that more professional and/or academic people *should* develop languages, that's ... well, heck, I'd like that too, but it's unlikely to happen.

Ray

Heh. I think that

Heh. I think that "unprofessional" means something which, apparently, you don't think it means. Let's face it, hardly anybody gets paid money for building a new language.

Pretend I work for a certain BigCorp where people do get paid to design languages (well, one language) that is used heavily internally by 1000+ people to solve certain problems. But they are not language designers, and the language is accordingly designed very poorly. Now stop pretending :) There is also a big difference between language design that goes to conferences like OSDI/SOSP and design that goes into our own PL conferences.

What you are talking about is a bit different and not something I'm worried about; people will always scratch itches and it would barely affect me or my work.

Okay, let's pretend that. What's your point?

Okay, that's maybe three or four people for the design, all at a particular BigCorp whose name we all know. Regardless of your opinion, they are indisputably professional language designers, and therefore the language was in fact designed professionally, whether it's up to your standard or not. About seven new languages, I think, have been built this way in the last twelve years.

I'm betting that hundreds of academics will unprofessionally produce scores of languages this year, and have done so every year, during that twelve-year span.

I'd further bet that six times that number of people who are neither academic nor professional have unprofessionally produced languages every year during the same period.

We still aren't talking

We still aren't talking about the same people, but I'm not going to get more detailed than that. BigCorp is a HUGE company, and the professionals who actually get paid to design languages are in a different building on a different campus, working on languages whose customers are much more external. More than 7 new languages are designed per year at BigCorp; if you are on the outside, you probably wouldn't know about any of them.

I don't know what you guys are talking about ...

... but I have made (and still make) compilers for custom languages as part of my job.

All of this "unprofessional" business and generalizing who makes these things and why sounds kind of snobby to me.

Probably because I'm on the receiving end of your judgement. ;P

Have to agree.

I agree. Lay or professional*, it makes no difference. I think the Jonathan Meades comment on architecture could be applied here: "the only school is talent." There are plenty of outsider languages being used daily.

(*)I've experience of both and let me tell you that getting paid only made the result more expensive.

I was referring to

I was referring to professional systems designers designing languages without any of the experience that we have, or even an interest in that experience. The result is as expected. At least if your company already has professional language designers, you might want to try to use them. Yet we are like UX designers in this regard: many don't even know we exist, and the job is done by some PM....

(Many language designers are titled as researchers or PMs also, but only because BigCorp doesn't know how to define our role exactly....)

Larry Wall, for example...

Larry Wall was a professional sysadmin. He was also a professional linguist (of human languages), trained in a particular theory of natural language called Tagmemic Linguistics.

His invention was Perl -- an attempt to create a computer language that uses pronouns, context, etc, in some of the nuanced ways that human languages do. Its semantics were largely inspired by Wall's natural-language work.

Was he a "Professional" programming language designer? No. Did he produce anything elegant and precise like professional programming language designers might have? No. Was his type system sound? Ha!! It was deliberately, intentionally, vague and subverted itself three times before breakfast!

Was he aware of programming language design literature? Maybe. But his goal, aside from making it easy to do the reports his sysadmin job required, was to try and apply natural-language ideas to computer code. He was deliberately trying to produce something that's vague and occasionally even ambiguous but in useful ways like human languages. He wouldn't have cared about the soundness of his type system, any more than we care about the (in)consistency of our pronoun applications, even if he was completely familiar with the type literature.

Whatever. I don't happen to like Perl, because I feel that those same ambiguities and vaguenesses get in my damn way when I'm programming. I also dislike the line-noise syntax, but I could get over that. But hey, pronouns in programming languages were a really neat idea!

My point was that an outsider like Wall can have a completely original idea in programming language design, that comes from a discipline outside programming language design. And doesn't necessarily value the same things that programming language designers do when those are contrary to the design he's trying to complete.

IMO, things like that -- although often they fail -- are valuable experiments and often yield one or two innovations that make the world better even if the overall vision doesn't. He's like a chair designer who thought more about the shape of human butts than the support engineering and what his elegant sculptured form and bold lines would look like in an architecture magazine. So maybe his chairs lack a solid balance, and tip over too easily, and aren't that elegant, but that seat cushion turns out to be a valuable innovation that nobody had been considering as something that might be important for chairs!

Ray

Contextual Adaptation and PLs

I would like to see more development of functions that can adapt to their context, i.e. representing a search in the function space similar to Coq strategies/tactics. What is normally called a 'function' today would then be a singular strategy. In some ways, this is analogous to a dictionary: a word or phrase can have many potential meanings but a 'best' meaning is chosen based on context of use.

Part of the compilation process would be selecting a concrete function - one that meets type requirements in context, and that optimizes other cost/preference/weighting criteria (which are also provided by the context). The 'search' for a concrete meaning is compile-time rather than runtime (though might be mixed a bit in cases of interpreters or live coding).

Usefully, a single function could have multiple types, and type-checking is valid if at least one of the types is valid.

I believe this sort of adaptive PL could make programming a lot closer to natural language, and namespaces more composable. The cost is separate compilation of libraries, but this could be mitigated by incremental compilation and caching.

[The `par` type, ⅋, from linear logic seems relevant to contextual adaptation.]

Adaption

This strikes me as a very cool idea, but could you give an example of some namespaces it would make more composable?

For example, 'draw' could

For example, 'draw' could have multiple definitions in different libraries - one for artists, another for cowboys, and so on. The correct meaning for 'draw' would be used in context.

Overall, this makes namespaces more compositional... you can sort of dump multiple libraries into one big dictionary, which creates a bunch of ambiguity, but then the ambiguity has its own resolution mechanism based on context. (In cases where ambiguity does remain, a compile-time warning can be issued and developers can simply add a bit more context.)

In practice, this should result in a somewhat more terse language since little disambiguation is needed for namespaces.
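Ordinary overload resolution is only a crude, per-argument analogue of the whole-context search being described here, but it shows the flavor. A minimal Kotlin sketch (Artist, Cowboy, and both draw functions are invented for illustration):

class Artist
class Cowboy

fun draw(a: Artist) = println("sketches a picture")
fun draw(c: Cowboy) = println("draws a revolver")

fun main() {
   draw(Artist())   // the context (argument type) selects the artist's meaning
   draw(Cowboy())   // same word, different meaning in a different context
}

The proposal above goes further: the "context" would include expected types, cost and preference criteria, and would be searched at compile time rather than fixed by a single argument position.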

Experience won't count as evidence

Experience informs and influences practice. It also informs and influences science. It will not count as science (today).

I have only skimmed the paper and still have to make up my mind re: mixed variance, but that introductory sentence suggesting reform of the scientific process is going to hit some conceptual obstacles. In the same way, I think design does not count as science (today).

The (today) qualification is important. It seems there used to be a time when the community was exchanging ideas on system design. I found Gabriel's "The Structure of a Programming Language Revolution" paper [1] quite revealing.

If we want to reintroduce system design into the scientific process, we have to work on recasting mere experience into some category of knowledge. What needs to happen for that is establishing a common vocabulary and some ground rules by which hypotheses can be formulated (and falsified).

[1] http://www.dreamsongs.com/Files/Incommensurability.pdf

I saw Gabriel's talk at

I saw Gabriel's talk at SPLASH last year. I thought it was interesting, but very insular and meta with respect to the academic community, and not very applicable to the field at large. Ya, different people in the community have different ideas about what mixins are, but at the end of the day we are just arguing amongst ourselves. Somehow we've gotten ourselves stuck in an academic feedback loop with little input from or output to the "real world."

If we want to reintroduce system design into the scientific process, we have to work on recasting mere experience into some category of knowledge. What needs to happen for that is establishing a common vocabulary and some ground rules by which hypotheses can be formulated (and falsified).

I think Crista Lopes has perhaps a better description of the problem. My primary thought is whether we should really be pretending to do science at all; the useful outputs of our field are generally in the realm of design, new engineering techniques, and mathematical theory. We can argue about whether theory is science (it really depends on your definition of the scientific process), but theory is about absolute truth with respect to a model that may or may not represent the real world, and has little to do with the scientific process (though it often informs it!).

We are going through an identity crisis: do we keep trying to shoehorn our field into a scientific method that barely seems to fit, and more importantly be useful, or do we go back to our roots of innovating, inventing, and reflecting? This debate is hardly unique to PL or computer science. I personally believe that the recent pushes for rigor have made our field much worse (just look at the noise in recent paper proceedings), not better.

why can't we all get along?

pushes for rigor have made our field much worse

I wish there were places for radical experimental thought that got good support and backup and funding, etc., and then places that either refine pre-existing stuff with more rigor, or take the dope-smoking ideas and try to see what rigor can be applied, to see if it makes things even chocolate+peanut-butter better.

Whatever, just give me

Whatever, just give me papers that are interesting again! Apply rigor where needed and useful, not blindly as an end in itself. We will have to lower the rigor bar to raise the interesting/relevant/useful bar; basically, we should reset our priorities.

What is real and what isn't

I don't know whether programming language research has become so dominated by rigor/theory/formal systems as you two deplore, but it is also beside the point.

All I said is that you cannot equate industry experience (as is) with science; one needs to at least turn the insights from experience into something that can be communicated, that is reproducible. Call it "hypotheses that are falsifiable".

All over the world people are in fact learning new stuff about languages, about systems, and some of it is general and would be interesting and useful. Somebody needs to find it out and write it up. There may be navel-gazing, also fragmentation in scientific disciplines, but that has not prevented good scientists from appearing and surprising everybody.

Mainstream programming languages and systems may not look like it, but they are all built by people standing on the shoulders of giants. The value of a real-world system is quite relative: it shifts over time, and what is innovation one year is merely considered technical debt the next, but foundations do indeed live on. It is not the PhD theses themselves, but the authors' ability to understand what is around them and apply this knowledge.

There could certainly be more direct knowledge transfer in both directions, but to say that there is none at all isn't true. I think it would be a worthwhile effort to document and codify some of the industry experience and engineering practices into some form that allows judgments -- and if these judgments satisfy basic requirements, then I don't think anybody could claim that this wasn't science. Hypotheses may be a lot weaker than "Any design based on X is going to be better than a design based on Y", but they still have to codify knowledge and teach the reader something that they didn't know or couldn't articulate before.

Finally, I think many people (especially my former PhD-student self) are tempted to place science as somehow morally higher than engineering and industry practice (that Crista Lopes post has many examples). It may be about solving different kinds of problems, but ultimately it is problem solving all the same. There is no general language of problem solving, but that doesn't mean we won't see new languages (both terminology and PL) and new results at those intersections.