Compilers as Assistants

The designers of Elm want their compiler to produce friendly error messages. The blog post shows some examples of helpful error messages from their newer compiler.

Compilers as Assistants

One of Elm’s goals is to change our relationship with compilers. Compilers should be assistants, not adversaries. A compiler should not just detect bugs, it should then help you understand why there is a bug. It should not berate you in a robot voice, it should give you specific hints that help you write better code. Ultimately, a compiler should make programming faster and more fun!


The compiler messages are

The compiler messages are much friendlier, but a bit verbose. And in my opinion they are still getting it wrong: people don't want stilted compilation feedback, they want full-on type debuggers.

What is the difference between a run-time error and a static type error? The former can be debugged, the latter can't! That is sort of the appeal of dynamic languages: they turn non-debuggable static type errors into dynamic faults that you can actually reason about. To bridge that gap, the compiler has to do much more than just dump information about an error to the terminal (simultaneously overwhelming and underwhelming the programmer).

+1

My favourite language in Uni was SML because once you got past the compiler your code (for homework, anyway) was most likely right. But then I'd complain that I had no debugger to apply to the damned type checker...

Not just for error messages

I'd like to see the idea of "compilers as assistants" extended to much more than just error messages. In particular, I would like to see compiler-developer interaction as the main strategy for code optimization.

Imagine some high-level code. While running the first test cases, the run-time system generates a profile much like for JIT compilation. It then tells the developer which of these observations could be used for optimization if they were true in general. If the developer confirms, the information enters into a compilation profile that is used both for code generation and for adding appropriate checks (static or dynamic).

Separation of concerns

... would suggest that compilers shouldn't be linters and vice versa. Because a linter doesn't have to produce object code fast, it can afford to work a lot harder on error messages and the like. Stephen C. Johnson's original lint paper explains the point of such a separation, even (or especially) with a feebly statically typed language like C.

C's lack of safety

I would argue that lint separation from the C compiler is one of the causes that has made C as insecure as we know it.

Had lint been part of C compilers when C became widespread in the industry, developers would have become used to writing C in a more secure way.

Static analysis in the C world only gained some adoption after Clang integrated it into the compiler.

The problem with separate tools is that they tend to be ignored.

And the problem with fools ...

... is that they tend to behave like fools. In my very new role as team lead, I was asked how I get developers to do code reviews rather than letting them pile up forever, or until some magic integration day. My answer was: By making it clear that reviewing a ticket counts as working a ticket, and that the activities of coding and reviewing count equally toward your reputation. This is relative to the amount of work done on each, obviously: a ticket may require many hours to develop even though only a one-line fix is needed to resolve it, a sort of NP-nature of tickets.

Fighting human nature

I used to agree with what you're saying here, but I've found that it's easier and more successful to solve problems by explicitly accounting for human nature.

Human nature is not to work for The Man at all

Yet many people can be motivated to do so with *money*.

You'd be surprised how far

You'd be surprised how far just having good management and a culture of "giving a f*ck" will get you. If you don't have access to those, then you might as well try aversion therapy (painful electric shocks until code is reviewed).

It should not berate you in

It should not berate you in a robot voice

I've never felt this way about compile time errors. Do any of you?

Is it that people these days are simply less resilient and want more coddling? I think the argument can be made that this reflects broader cultural trends.

UX is subjective

Personally, I can believe somebody could make terse but not obnoxious error messages. But mostly when messages are terse, they are also poorly designed. They arrive at the worst time: they are telling you that something is busted, you stupid human. And then they don't help. So in those situations, yes, it feels like the berating robot.

SYNTAX ERROR IN 30.
READY.

?

The old anecdote goes that the unix ed(1) editor has only a single error message, "?", and that "The experienced user will usually understand what went wrong.".

ed(1) error messages have these advantages:

1. They were never inaccurate or incorrect.

2. They were never misleading.

3. They discouraged attempting needlessly complicated solutions and encouraged thinking about what you were doing.

Also:

it feels like the berating robot.

If that is a wide sentiment, everyone who feels that way should smash every computer they can find since computers should never be in a position to berate.

Computers should be in a position to be laughably useless or harmlessly idiotic. After all, they are dumb, lifeless machines.

But if lots of people feel like the computer is berating them, then there is a social disaster going on.

Those who don't feel the beration

lack sufficient and necessary empathy to ever do a good job at UX, is my $0.02.

But there is an equal and opposite failure mode ...

... in which an ML error message says "Expected type int*int*int, found type int*int*int". (This may be an urban legend, but I definitely saw it mentioned somewhere even if I can't track it down today.) That is what I would call being berated by a robot, since it obviously either has no clue what you meant, or is reporting something as a failure that is not a failure at all.

It has been said that the problem with soft static typing in Scheme, the kind that rejects only programs that will always fail at run time, as opposed to conventional static typing that rejects any program that may fail at run time, is that you have to have a Ph.D. to understand the error messages. Dialyzer users in Erlang don't seem to have this problem, though to be sure Erlang is simpler and more syntactically rigid than Scheme.

Location, location

Long ago I learned, from a toy compiler for an extensible-syntax language, to stamp every compile-time structure — starting with ASTs as they're constructed bottom-up — with an associated region of the source code, and produce error messages with minimal description and precise identification of the region (illustrated to the extent possible, though that's a lot easier for small regions). For myself, I found it vastly easier to deduce what was wrong from a minimal hint with a clearly identified region of source than from the sort of error messages I'd mostly seen, which gave an elaborate description with, at most, a single point in the source (line number and column number). Later I tried adapting the same technique to a Scheme-like interpreter (okay, a Kernel interpreter :-), stamping a region on each AST and then on each first-class object, and as best I could tell it worked tolerably well in that setting too.
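The region-stamping idea can be sketched concretely. A minimal OCaml fragment, with all names invented for illustration, in which every AST node carries the source region it was built from so an error can quote a span rather than a single point:

```ocaml
(* Sketch: every AST node is stamped with the region of source it was
   built from; an error report quotes that span. Illustrative only. *)
type region = { start_pos : int; end_pos : int }  (* byte offsets *)

type expr =
  | Num of region * int
  | Add of region * expr * expr

let region_of = function
  | Num (r, _) | Add (r, _, _) -> r

(* A minimal report: short message plus the offending span of source. *)
let report src msg e =
  let r = region_of e in
  Printf.sprintf "%s: \"%s\"" msg
    (String.sub src r.start_pos (r.end_pos - r.start_pos))
```

For example, `report "1 + foo" "not a number" (Num ({ start_pos = 4; end_pos = 7 }, 0))` quotes the slice `"foo"` of the source rather than naming a line and column.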

And when they aren't

And when they aren't terse...they can be too verbose. The problem is there is no way to zoom in and zoom out.

I only look at compiler errors as a last resort

I look at the line number, fix the bug, and recompile. Only if that cycle breaks down do I start bothering with the actual error message.

No idea why one would need nice error messages, or even multiple error messages.

What I can't understand is

What I can't understand is why anyone would ever want to see compiler error messages on a command line or in a separate compiler error pane (just as bad). I mean, it's 2015; we know how to annotate text buffers. We can underline a token in an editor buffer and put an error message underneath... we have the technology but refuse to use it!

I know, I know, there are still people out there programming with teletypes rather than bitmapped displays, and we shouldn't alienate them.

Old BSD error(1)

... provided this facility: it would read standard Unix errors of the form filename:line:column:message, figure out from the name of the file what its comment syntax was, and insert the error into the correct line of the file as a comment, using patch(1)-like techniques.
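That filename:line:column:message shape is still the de facto standard for compiler diagnostics. A minimal sketch of the parsing half of such a tool (a hypothetical helper, not the original error(1)):

```ocaml
(* Sketch: split a classic file:line:column:message diagnostic into its
   parts, as error(1) had to before it could insert comments. *)
let parse_diagnostic line =
  match String.split_on_char ':' line with
  | file :: l :: c :: rest ->
      (try
         Some (file, int_of_string l, int_of_string c,
               String.trim (String.concat ":" rest))
       with Failure _ -> None)   (* line/column fields were not numbers *)
  | _ -> None
```

The message part is re-joined with `String.concat ":"` so that colons inside the message text survive.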

Second-system effect

If you start annotating text buffers, then you realize that there are many, many more things you could do, and then ambition leads to horrible overengineering. (e.g., Light Table/Eve/whatever they are up to now)

Basically what you or someone else needs to do is to figure out a minimal set of decent dev features, and then write a dragon book for IDEs. That way developers can implement the stuff in your book, declare victory, and go home. :)

(It should go without saying that this is not an entirely serious assessment...)

drinks & whiteboards

As a Joe Programmer In The Street, I have long had some random thoughts about such things. It would be a lot of cathartic fun to hang out with people who like to talk about it all and just brainstorm a toolkit / framework / philosophy about it.

only works if the line numbers are accurate

In several modern programming languages, the line number reported by the compiler is often misleading.

specific hints... Giving a

specific hints...

Giving a hint suggests knowing more than you let on. I certainly don't want compilers to give hints; I want them to let me know what should be fixed. But of course they don't know what I am trying to do, so they can only inform me of the problem. I don't want them to make assumptions (ass-u-me) about what I am trying to do. Maybe compAIlers should be in that business, but not compilers. Then again, perhaps betraying my being an old-fart, I find the talk of hints mildly passive-aggressive. This compilers-as-parents schtick is almost as bad as hearts-for-favs on twitter.

Hint: Did you mean

I implemented typo-warning hints in the OCaml programming language:

# List.itr;;
Error: Unbound value List.itr
Hint: Did you mean iter?
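The heuristic behind such a hint can be sketched as plain edit distance over the names in scope. An illustration only, not the actual compiler code:

```ocaml
(* Sketch: "did you mean" via Levenshtein edit distance. *)
let edit_distance a b =
  let la = String.length a and lb = String.length b in
  let d = Array.make_matrix (la + 1) (lb + 1) 0 in
  for i = 0 to la do d.(i).(0) <- i done;
  for j = 0 to lb do d.(0).(j) <- j done;
  for i = 1 to la do
    for j = 1 to lb do
      let cost = if a.[i - 1] = b.[j - 1] then 0 else 1 in
      d.(i).(j) <- min (min (d.(i - 1).(j) + 1) (d.(i).(j - 1) + 1))
                       (d.(i - 1).(j - 1) + cost)
    done
  done;
  d.(la).(lb)

(* Suggest the closest known name, but only if it is close enough that
   the hint is plausibly right (threshold 2 is an arbitrary choice). *)
let suggest unknown known =
  let best =
    List.fold_left (fun acc name ->
      let dist = edit_distance unknown name in
      match acc with
      | Some (_, d) when d <= dist -> acc
      | _ -> Some (name, dist))
      None known
  in
  match best with
  | Some (name, d) when d <= 2 -> Some name
  | _ -> None
```

With the names of `List` in scope, `suggest "itr" ["iter"; "map"; "fold_left"]` yields `Some "iter"`; a name too far from anything known yields `None`, so no misleading hint is printed.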

To me the name "Hint" is a short word, in a language that everybody understands, for "Heuristic advice". In particular, it insists that the hint may be wrong. In that regard, "Guess" would work just as well -- but then maybe you would be displeased by "Guess" as well?

I find these hints useful, and they make even more sense in the presence of editor support for automatically applying the suggested change. (In a command line I am always a bit frustrated when "git comit" results in a hint but I have to re-type the command: if it obviously knows what I mean, why not do it directly?)

Do you have a suggestion for a better name? The important point, I think, is to insist that this advice is partial (may be wrong), in contrast with the cold description of the failure that precedes.

I don't buy the general scepticism and the "our 2.0 interfaces are so above this" feeling that is common in this thread (not in your particular reply, but I haven't commented elsewhere). There is interesting work to be done on better-quality error messages, and quite often this work turns out to be useful and necessary for better-quality 2.0 interfaces too.

I used to believe that instant feedback would solve all syntax issues by waving a big red flag whenever a mistake is made, and that simply observing the change that makes the problem appear should be worth a thousand explanations. Turns out people hate to be interrupted in their train of thought by a big red flag and will always have a need to go back to the now-cold error trail. Better direct feedback is a part of the solution, change recording and replay, semantic edition, non-textual interface, yes, please do that, but I still want good error messages.

I hope you realize that I am

I hope you realize that I am at least being 50% tongue in cheek?

But I agree about cases like the one you gave. If one really wants to avoid the H word, how about: "Closest matches are X.iter, X.int, Y.itr"? Or maybe "Did you mean X.iter"? A hint would be "Read the documentation before the next code review session".

or...

Have you considered burger flipping?

Actually, tho, for me, it is just "indictment all the way down". I mean try some 5 Whys on why the human typed "itr" and had to wait for a compile cycle to find out.

So here again I don't actually think this really is pointing blame at the programmer. It is pointing blame at the entire development ecosystem's UX.

Hints/tips are great if they

Hints/tips are great if they can manifest correctly in an appropriate UI. Statically typed functional languages have a lot of low hanging fruit to grab here....

Red flags, if done right, don't have to be disruptive. VS has a 500 ms delay, and even then it's just an underline. In my own work, I just red-line and don't provide an explanation for some time, though that is probably not the right solution yet. Feedback has to be something that lies in the background, that you can totally ignore if you want. Some people just want nothing on their screen but code, though, just like people who fly biplanes aren't really thrilled about HUDs.

If you wanna be my compiler

Giving a hint suggests knowing more than you let on. I certainly don't want compilers to give hints; I want them to let me know what should be fixed. But of course they don't know what I am trying to do, so they can only inform me of the problem. I don't want them to make assumptions (ass-u-me) about what I am trying to do. [...] I find the talk of hints mildly passive-aggressive.

I had a reply in draft that was almost just like this, although in my case it isn't tongue-in-cheek. :)

"Hint:" feels like someone tapping me on the shoulder and trying to show me that they take pity on me for not finding something out myself. If it's in a canned error message, I've probably heard it several times already, so it's eventually 0% helpful information and 100% social signaling. And even if I do relinquish my control and give the compiler a chance to show me what it's got, that gets us nowhere. The compiler is constantly in a state of trying to be something for me that it can't be.

If the error messages in the blog post simply left off the word "Hint:" or changed it to "Note:" then I would be less annoyed. Other than that, they're pretty good error messages. I like that they highlight the relevant information and then get out of the way.

In my experience, people feeling persecuted by inanimate objects is a common and mostly harmless occurrence (e.g. when they're having a bad day and their car won't start), but I doubt we can improve that kind of experience with a more condescending error message. I think a nicer experience would be for errors to appear gradually and continuously as the user writes their code, rather than lurking in wait to ambush the user when they run the compiler, and for the error messages to be more interactive when they do appear.

Right on! (I said I was at

Right on! (I said I was at least 50% tongue in cheek; that still leaves a lot of room for tongue out of cheek.)

Compilers could at least be polite

Compilers could at least be polite enough to offer to fix the problems for you. Outside of an IDE, you'd never see a compiler offer to actually insert a semicolon in a file or rename a variable. From there it's just a hop, skip, and a jump to having the compiler at least try to fix an error and continue. If it has multiple choices, it's literally a matter of milliseconds to try a few possible changes and see which one fixes the most downstream errors. Compilers could actually get really good at fixing the most common kinds of errors for some class of users, like swapped or extra or missing arguments to API calls, with the fix just a tab or a space of confirmation away. Don't we want to make programming fun again and make the compiler into your best friend who never gets annoyed and knows the rules like the back of his hand?

The idea of trying a few

The idea of trying a few fixes and reporting on which one minimizes the number of downstream errors is cute. Are there any examples of compilers that do (did) that?

Parsing error recovery

This is a standard idea in the sub-field of "syntax error recovery" (finding a good way to continue parsing a file after a syntax error, either to offer good fix proposals to the user or to go ahead and report other errors in the rest of the program). It could also easily be done at the level of the type system; in a sense this is what happens with the type-directed name disambiguation implemented in many languages (pick the only possible choice that does not fail). For example, syntax error recovery, or typo-fixing of unknown identifiers, could take advantage of type checking to filter out more wrong candidates. To my knowledge this is not done -- but more interactive tools like smart auto-completion widgets are close to it.

Then the next idea is to start running fix candidates and see which ones pass the test suite. This is rather scary (what if the system generates code that erases my disk?) and I would only attempt it in programming systems with strong symbolic evaluation features (to compute results statically instead of running programs) or strong control over effects (to only compute in the pure fragment). I suspect the Smalltalk people would not mind being more daring with this.
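The "try candidate fixes, keep the one with the fewest remaining errors" loop is simple to sketch. Here `check` stands in for a real parser or type checker returning an error count; the toy checker below just counts unbalanced parentheses, and everything is hypothetical:

```ocaml
(* Sketch: among candidate fixed-up sources, keep the one for which the
   checker reports the fewest errors. Ties keep the earlier candidate. *)
let best_fix check candidates =
  List.fold_left (fun acc fix ->
    let errors = check fix in
    match acc with
    | Some (_, e) when e <= errors -> acc
    | _ -> Some (fix, errors))
    None candidates

(* Toy stand-in for a real checker: count unbalanced parentheses. *)
let toy_check s =
  let n = ref 0 in
  String.iter (function '(' -> incr n | ')' -> decr n | _ -> ()) s;
  abs !n
```

For instance, among the candidates `["(a b"; "(a b)"; "((a b)"]` the middle one is chosen, since it leaves zero errors. A real system would rank candidate insertions or renamings the same way.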

Recovery makes it worse, try interactive type error debugging.

In my experience, attempting recovery makes things worse. Look at the mess HTML parsing is in, where a new parser has to include all the broken heuristics of old parsers, or it won't work on 90% of the web content out there. I find the pages of errors produced by compilers that attempt to continue past the first error difficult to deal with. Sometimes stopping on the first error is better; unless the errors are annotated directly on the source, it is hard to process a wall of errors.

The work I did on compositional type checking is interesting in this respect: since every program fragment can be typed, you can find additional information about the type error by slicing the program at different points. I think this could function as a debug mode for typing, allowing an interactive investigation of the type error, which might be the way to go, considering how tools help debugging in mainstream languages.

I agree. In the old days of

I agree. In the old days of batch compiling, recovery was critical, since the cycle took a long time. Hence why I wonder whether the suggested heuristic (ranking possible corrections by their effect on downstream errors) was ever used in practice. Does anyone remember such a thing?

Don't go for maximum errors

That's not quite what I meant. I don't think compilers should try to correct the error in order to generate the maximum number of downstream errors. That's actually part of the problem. Rather, given a set of errors that are generated, can the compiler make reasonable suggestions to the programmer and offer a menu of choices to make and automatically rank them based on how many of the existing set of errors would be corrected? I'm thinking of cases where a single semicolon fixes a giant cascade of errors, which seems to happen a lot with clang, since it seems to drop declarations that have syntax errors and continue past them, leading to many unresolved type and method errors later. Compilers should be able to try simple fixes by the hundreds in the background and generally become "smarter" at offering successful ones to users.

Inaccuracies

My concern with that is that the semicolon might go in several different places, which may allow the compiler to change the meaning of the code depending on where it gets inserted. The programmer may also have made a second error (misspelling a keyword, for example). Now the heuristic may find that the most errors are fixed by inserting the semicolon at a point that hides the misspelled keyword, making semantic changes to the program more likely.

JavaScript seems to do a reasonable job of semicolon insertion, but that is because semicolons are semantically redundant in the language design: a plain line break can be interpreted to mean the same thing as a semicolon.

Redundancies

One of the reasons I was against semicolon inference in Scala at the time was because semicolons acted as redundant signaling for better error messages and error recovery in the IDE. Ya, they were redundant, but eliminating them was a classic PL-centric idea that failed to take into account tooling.

Incidentally, my current languages lack semicolons, but do not allow statements to span multiple lines, so newlines can unambiguously act as statement terminators.

+1

I have never understood the mental model where one says, "I know, I can make X more 'efficient' (e.g. optional semicolon inference), but only at the risk of introducing more possible confusion and ambiguity and chances for things to go oddly wrong! I like it!"

Ambiguity

The compiler should essentially do a limited breadth-first exploration of the space of possible actions and report back to the user its recommendation for what the most successful one is. If there are many such choices, it should be conservative and inform the user in a lightweight manner.

E.g. the compiler could say: semicolon could be inserted here, here, or here, and based on the number of errors fixed, I recommend here. Shall I do it? y/n

The error localization problem

As we try to make programming languages friendlier by inferring types, or software more reliable by doing complex program analyses, it becomes difficult even to know what point in the program to blame when something goes wrong. The error messages you get back are often quite misleading because they are telling you about the wrong place in the program. Danfeng Zhang has done some work on that for ML and Haskell in some recent papers published at POPL and PLDI, improving the accuracy of error localization in both languages to about 90%. Try the Haskell demo!

region

Just to note, somewhere above I remarked (probably not clearly enough to convey the point) that a significant obstacle to clear error messages is trying to place an error at a specific point in the source code, rather than a region in the source code. The Scheme-like (Kernel) error message system I alluded to gave a pretty minimal description of the error, but did fairly well at communicating to the programmer what was going on by pairing its minimal description with a region of the source code — either a partial expression for syntax errors or a complete expression for semantic ones.

Code slicing

The classic example is a function that uses an argument in two incompatible ways under type inference. They cannot both be correct, but most compilers have a forward bias, assuming the first use is correct and reporting the second. If we split the code into two fragments, each can be considered correct; it is only when they are joined that there is an error, and the point at which they are joined depends on the compiler. I think some kind of interactive type error explorer/debugger would be useful both for experts and for beginners to understand what is going on in the type system, how the error occurs, and what it means.
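The situation is easy to reproduce in an ML-style sketch: each use, taken alone, is well-typed, and only their combination on the same argument fails.

```ocaml
(* Each use, in isolation, is well-typed. *)
let use_as_int x = x + 1          (* forces x : int *)
let use_as_string x = x ^ "!"     (* forces x : string *)

(* let both x = (use_as_int x, use_as_string x)
   -- rejected: a forward-biased checker blames the second use (x as a
   string), even though the first use is no more "correct" than the
   second. Neither fragment alone is the error; the join is. *)
```

This is why "these two things are incompatible" is a more honest message shape than "this one thing is incorrect".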

That's a good point.

That's a good point. Messages should be along the lines of "these two things are incompatible" rather than "this one thing is incorrect". Of course, it might be possible to do more: if there are n uses and n-1 of them assume i is an int and one case assumes i is a string, the compiler might have a reasonable expectation of intiness.

Too aggressive type inference

This is where aggressive type inference starts to fail users. There is so little "ground truth" that when something goes wrong, it goes wrong in the compiler's internal fiction about the program, and it ends up dumping its problems on the user.

Muddiness

I've been trying for a while to articulate why Lisp is so like a ball of mud. Finally making progress, the thesis I'm pursuing is that — building on an essay flagged out on LtU a while back — Lisp minimizes the conversation between programmer and computer, leaving room for a conversation between human beings. Complicated type systems create more of a conversation between the programmer and the computer, tending to damp out the strengths of sapient thought in favor of a poor imitation of computing; using S-expressions for everything, garbage collection, bignums, trivial syntax, all minimize the conversation with the computer. The sort of region-oriented error messages I've been advocating here are also about describing things in a less computerese way, which admittedly is easier to do if the language is more like a ball of mud. It seems significant that the difficulty pointed out is type-related.

Runtime Error

Well, the misuse of the argument would still be an error. It may result in an unplanned coercion or cast, allowing the program to run but producing unflagged incorrect behaviour (which is bad, and results in things going catastrophically wrong), or it may result in a runtime error. Both cases are worse than catching the problem in type checking.

As a programmer, I find types facilitate dialog, as they allow me to express to other programmers the expectations and assumptions a function makes: "my 'max' function expects two integers and returns an integer". Further, I can express these requirements in a formal language, removing the ambiguity of natural language from the discussion and helping the other programmer more easily understand the requirements.

It's easy, when working with

It's easy, when working with computers, to come up with situations where the ambiguity of natural language is a bad thing. I think perhaps we need to think a lot more about when, and how, and why, it's a very good thing.

Generics

That's where I think generics (or type classes which are equivalent) are useful. Generics allow you constrained ambiguity.

That is a really interesting

That is a really interesting thought. I had always assumed that if I could only explain in enough detail then the computer would understand. But I can see that the opposite goal might be more reasonable. If people were as uncooperative as computers often are, then more conversations would be like the one I once had where a teacher tried to test my understanding of what a fractal was while I was trying to explain Ron Eglash's research.

In human communication, the better someone understands your goal and the structure and purpose of what you are doing, the less you have to say. However, I don't know how to apply that to language design.

As I've said before

As I've said before, when you implement an algorithm in a language that lacks type constraints and other constraints, you end up solving ONE problem with flexible tools.

When you have to shoehorn it into a typed language, you have a second problem. Sometimes that problem is much larger and requires you to iterate over combinations of types... Sometimes it is such a large problem that it makes the program too hard to express or to understand.

There is nothing that you can express with typing that you can't express with code, but mathematicians prefer LESS expressive languages because the weaker a language is, and the less you can express with it, the more you can prove about it.

In a lot of circumstances an engineer would prefer the most flexible, most expressive language.

That's what I find frustrating about LTU, the point of view here is programming as mathematics not programming as human expression.

Two languages

This is why I prefer the two-languages approach. A value-level language that is flexible and expressive (imperative, mutable, etc), paired with a meta-language that is a logic (declarative, immutable etc). The meta-language can access and manipulate meta-information about the value-level language (static-types are just one form of meta-information, constant values are another) and can be used to prove things about it. So I want both programming as mathematics (on the meta-level) and programming as human-expression (on the value-level). One nice thing about treating types as meta-information is that there is a clear distinction between statically inferable type information which is available to the meta-language, and run time dynamic types, which are not.

I like logic...

Logic languages allow programs to reorder in ways I find more convincing than just Haskell. They're a tool that IS useful despite not being so imperative.

Maybe I don't get Haskell, but the lazy evaluation isn't even well controlled: to be more useful, you should be able to tell it which results can be thrown away and recomputed, and manage saved results and their memory. It's a poor solution. Call it a leaky abstraction, because it literally leaks memory.

programming as mathematics?

As I see it, mathematics — traditional mathematics, not corrupted by influence from computer science — is a conversation between human mathematicians, profoundly flexible. If programming were actually like mathematics, it too would be profoundly flexible and wouldn't obsess over elaborate type systems.

TLA+

I believe this was pretty much the argument that Lamport made for the untyped nature of his TLA+ specification language (which was intended to be, as he put it, "just mathematics"). On the other hand, as Lamport himself says in a discussion of TLA+:

In principle, being untyped makes TLA+ significantly more expressive than Z. In practice, the inexpressiveness of Z’s type system is at worst a minor nuisance for writing the specifications that typically arise in industry.

It's amazing that math is

It's amazing that math is "tell the mathematician" your math while programming is "tell the computer" your program. But how do you even think up the math or the program? Is thinking purely linguistic?

Is thinking purely

Is thinking purely linguistic?

Imo, no.

Nice post

I found this to be a great read. :)

Then we still have to focus

Then we still have to focus on the thinking part, not just communicating what we have thought.

Thought provoking.

I shall have to dig back through your blog, some fascinating posts. Have you come across Julian Jaynes, "The Origin of Consciousness in the Breakdown of the Bicameral Mind"? I wondered what you thought of his theories of part of our own history being a relic of a pre-sentient period for our species. It is a very entertaining book if you have not come across it before.

to-read list

I've encountered references to that; I need to add it to my to-read list. Thanks for reminding me.

art vs. science

(i use that title jokingly)

I expect many people who like mathy static strong typing approaches have tried to do things in a 'human conversational' way and ended up getting exactly what they deserved from the machines!

The key is to get more information

The insight in Danfeng Zhang's work is that you can extract information about the "ground truth" from everything else going on in the program other than the actual constraint failure. Current compilers either tell you about the program point corresponding to the constraint that failed (which is very often the wrong place), or they just dump everything about all program points that might have contributed to the failure (which buries the real problem in a pile of chaff). His system uses probabilistic reasoning over the whole collection of constraints from the program analysis to find a small number of program points that explain the failures *and* for which there is not much evidence that they are otherwise consistent with the rest of the program. The result is that his system, SHErrLoc, generates answers that are remarkably accurate. He evaluated it empirically by using as ground truth the fixes that programmers applied to hundreds of real programs.
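Stripped of the actual probabilistic machinery, the ranking idea can be caricatured as scoring each program point by how many failed constraints mention it versus how many satisfied ones do. This is a toy illustration of the intuition, not SHErrLoc's real algorithm:

```ocaml
(* Toy scoring: a point is suspicious if it appears in many failed
   constraints and few satisfied ones. Constraints are represented as
   lists of the program points they mention. *)
let score failed satisfied point =
  let count cs = List.length (List.filter (List.mem point) cs) in
  count failed - count satisfied

(* Rank candidate points, most suspicious first. *)
let rank failed satisfied points =
  List.sort
    (fun a b -> compare (score failed satisfied b) (score failed satisfied a))
    points
```

A point mentioned only by failing constraints (no independent evidence that it is fine) floats to the top, which matches the informal description above.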

standard complaint against OCaml et al.

and it seems to be based on real experience (I seem to recall it happening to me more than once). So I've always suggested that some IDE should try the experiment of injecting the compiler's inferred types as regular 'manual' type annotations as we go along, so that we have a better chance of noticing early on when it has gone off the rails :-)

OCaml IDE

OCaml's IDE support has nowadays improved remarkably thanks to the Merlin project, an assistant process that interfaces with editors to provide OCaml-aware semantic editing functions. The Merlin authors, Thomas Refis and Frédéric Bour, have been doing excellent work on analysis (syntax and typing) of incomplete programs/buffers and on information gathering and error reporting, even pushing the state of the art in some areas (generic syntax recovery from an LR parser). Instead of asking the typer to insert type annotations, you can go over the program and ask Merlin what the inferred type is -- the point of recovery is to be able to provide reasonable information even for syntactically incorrect programs, or in the presence of an independent type error somewhere close.

Motivated by this work, François Pottier (the author of the Menhir parser generator) has been working on better error messages from LR parsers. The first result of this line of work is an LR parser for the C99 grammar whose error messages equal or improve on GCC's and Clang's; see the (second) draft "Reachability and error diagnosis in LR(1) parsers" (the comparison is Figure 8 on page 9).

I would be very excited to see a collaboration with the SHErrLoc people, of the kind that happened with GHC, but you need to put the right people in the same room and it's not very simple to make happen.

Even better counterexamples

Another parser generator that generates better error messages for LR parsers is the Polyglot version of CUP (PPG). This work by Chin Isradisaikul is described in a PLDI'15 paper. The idea is to search the LR state machine to find concise counterexamples that explain why a grammar is ambiguous or, at least, has a parsing conflict.

It's a popular PhD thesis subject

Seems every two years or so, some PhD student writes a thesis about error reporting. I remember a similar one on the Utrecht Haskell compiler (Helium). (I loosely used that algorithm to implement my own unstable typechecker for a language I once wrote.)

It's popular since there's no neat answer: any set of constraints you solve can give you different errors depending on the order you choose to solve them.

I am not interested in it anymore. Bottom-up is much faster and gives neat errors too; which is good enough.

bad link

The link to the article is incorrect - there's a trailing double-quote that threw me for a second. The real link is:

http://elm-lang.org/blog/compilers-as-assistants

Smart quotes

Thanks. Somehow I used smart quotes around the href.