## Error reporting strategies during parsing

Although I've seen lots of great papers on error recovery during parsing, I haven't been able to find much describing how parse errors are reported.

A couple of the problems I'm having:

1. if the grammar requires lots of lookahead and backtracking to parse, there's potentially *tons* of possible errors to report if the input is invalid. But reporting all of them might be overwhelming/confusing and probably unhelpful
2. the position reported for an error often doesn't correspond with the actual position at which the user made the mistake
3. reporting errors inside nested structure

Are there any papers describing error reporting strategies, or parser generators or combinator libraries implementing simple but effective strategies?

### incremental

If you just parse while the user is typing, then you can quickly stick to good parse information and avoid widely propagating parse errors when one occurs (report the parse error once, then stick to the good parse information for the rest).
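As a rough sketch of that idea (all names hypothetical, with a toy stand-in grammar): keep the last good parse around, and when a reparse fails, record a single error at its position while continuing to use the stale tree.

```python
# Sketch: surface one parse error while keeping the last good parse.
# `parse` is a stand-in grammar (comma-separated integers) that raises
# ParseError at the first offending position.

class ParseError(Exception):
    def __init__(self, pos, message):
        super().__init__(message)
        self.pos, self.message = pos, message

def parse(text):
    items, pos = [], 0
    for chunk in text.split(","):
        if not chunk.strip().isdigit():
            raise ParseError(pos, f"expected integer, got {chunk.strip()!r}")
        items.append(int(chunk))
        pos += len(chunk) + 1
    return items

class EditorParseState:
    """On each edit, reparse; keep the stale-but-good tree on failure."""
    def __init__(self):
        self.last_good_tree = None
        self.error = None          # (pos, message) to show under the token

    def on_edit(self, text):
        try:
            self.last_good_tree = parse(text)
            self.error = None
        except ParseError as e:
            self.error = (e.pos, e.message)

state = EditorParseState()
state.on_edit("1, 2, 3")       # good parse: tree [1, 2, 3], no error
state.on_edit("1, 2x, 3")      # bad edit: old tree kept, one error at pos 2
```

The editor would then render `state.error` as meta-text under the offending token while everything else keeps using the last good tree.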

I report my parse errors in the editor as meta-text under the token where the parse error occurred.

If you want decent error reporting and recovery, you should avoid the generator/combinator approach. I'm not sure if there are any papers on how these tools have dealt with error reporting, but I looked before and didn't find anything up to date or promising. PEGs are supposed to be pretty good, and Grimm has done some work in this area.

### Thanks for the pointer about

Thanks for the pointer about PEGs -- is the advantage of using them the prioritized choice?

### Do you know of any good

Do you know of any good writeups of incremental parsing strategies? I don't know the field at all, and my (quick & lazy) google scholar search only turned up a bunch of off-topic papers.

### Nothing that is very modern

Nothing that is very modern or useful. Keep in mind that incremental parsing is useful for just two reasons: better error recovery that naturally considers edit history, and preservation of type information, because typing, not parsing, is actually the slow computation that we need to care about. Most of the papers are written from the perspective of performance, which isn't very interesting in most use cases.

Many of the earlier language-aware editors seem to avoid being incremental and instead just try to be really fast. The Visual Studio C# presentation compiler is supposedly incremental, but details are lacking. I'm currently writing a short historical retrospective for the SLE parsing workshop, but all those details will be biased and vague.

> if the grammar requires lots of lookahead and backtracking to parse, there's potentially *tons* of possible errors to report if the input is invalid. But reporting all of them might be overwhelming/confusing and probably unhelpful

Could you make the parser interactive and let the user configure tracing, set breakpoints on the creation of partial parse trees, investigate state, dump partial parse trees, choose grammar rules to apply, and so on?

See "BUP: A Bottom-Up Parser Embedded in Prolog", Matsumoto, et al. for an example of an interactive parser which seems to be a textbook example of why meta-circular interpreters can be useful. Their routines generate Prolog code which implements the parser for the user's grammar, and it looks like their optional debugger can interpret this code knowing when it's building trees or looking for applicable grammar rules. This debugger can interact with the user when it builds a partial parse tree or a grammar rule fails.
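A rough sketch of what such interactive hooks might look like in a recursive-descent setting (hypothetical API, not BUP's): thread a user-configurable trace callback through the parser, so rule entry and exit can be logged, inspected, or used as breakpoints.

```python
# Sketch: a recursive-descent parser with a user-configurable trace hook.
# The hook fires at every rule entry and exit; a user could log events,
# inspect parser state, or call breakpoint() for rules of interest.

def no_trace(event, rule, pos, result=None):
    pass

class Parser:
    def __init__(self, text, on_rule=no_trace):
        self.text, self.pos, self.on_rule = text, 0, on_rule

    def rule(self, name, body):
        self.on_rule("enter", name, self.pos)
        result = body()
        self.on_rule("exit" if result is not None else "fail", name, self.pos, result)
        return result

    def digits(self):
        def body():
            start = self.pos
            while self.pos < len(self.text) and self.text[self.pos].isdigit():
                self.pos += 1
            return self.text[start:self.pos] or None
        return self.rule("digits", body)

events = []
p = Parser("123abc", on_rule=lambda *args: events.append(args))
p.digits()   # trace: ('enter', 'digits', 0) then ('exit', 'digits', 3, '123')
```

Swapping the lambda for a hook that conditionally calls `breakpoint()` on chosen rule names would give exactly the "set breakpoints on partial parse trees" interaction described above.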

### PEG + cut error reporting can be pretty good

Take a look at my python PEG combinator library (here); it implements the best automatic-error-reporting I've ever seen in a parser generator/combinator library. The basic approach is to dump the state of the parser when an error occurs, reporting both the parser-stack that brought it to that location as well as the exact parser which failed. This sounds extremely scary, but actually comes out pretty nice. Here's the error message for a typo in a rather big json blob with multiple layers of nesting:

```
ParseError: index: 456, line: 19, col: 31
json_exp / obj / pair / json_exp / array / json_exp / obj
"number": 646 555-4567"
              ^
expected: '}'
```


Not bad for an autogenerated error message. I don't think you'll be able to get much better without delving into psychology and trying to guess what people intended when they make mistakes.

In cases with lots of backtracking (e.g. a PEG parser like string | number | object | array | ...) there's not much you can do but report "all these things failed", but in the vast majority of these cases, you can insert a few cuts to kill off the backtracking behavior after the choice is confirmed, which makes errors much more specific.
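A rough sketch of the stack-dumping idea (hypothetical combinator names, not the library's actual API): keep a stack of rule names as the parser descends, and include the stack and position in the error when the deepest parser fails.

```python
# Sketch: maintain a stack of rule names and dump it on failure, in the
# spirit of "json_exp / obj / pair / ...  expected: '}'".

class ParseError(Exception):
    def __init__(self, stack, pos, expected):
        super().__init__(f"{' / '.join(stack)} at index {pos}: expected {expected!r}")
        self.stack, self.pos, self.expected = list(stack), pos, expected

class P:
    def __init__(self, text):
        self.text, self.pos, self.stack = text, 0, []

    def rule(self, name, body):
        self.stack.append(name)
        try:
            return body()
        finally:
            self.stack.pop()

    def lit(self, s):
        if self.text.startswith(s, self.pos):
            self.pos += len(s)
            return s
        raise ParseError(self.stack, self.pos, s)

# Tiny grammar: group = '(' item ')' ; item = '7'
def item(p):
    return p.rule("item", lambda: p.lit("7"))

def group(p):
    return p.rule("group", lambda: (p.lit("("), item(p), p.lit(")")))

err = None
try:
    group(P("(8)"))
except ParseError as e:
    err = e
# err message: "group / item at index 1: expected '7'"
```

The stack dump costs almost nothing to maintain and is exactly the "parser-stack that brought it to that location" part of the message.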


Of course you can do much better than this if you take into account the programmer's sequence of edits over the entire editing session when generating the error message. It's just that most compiler implementers refuse to admit that programs are written in editors.

### Are there any examples of

Are there any examples of IDEs (research or otherwise) that do this? I've used quite a number of them and I've never seen it happen. Any path-dependence in the editor is always just the editor slowly catching up to its path-independent, stateless error reporting.

### Definitely. The scala v2 IDE

Definitely. The Scala v2 IDE did this, but they ripped that out in favor of laziness and went back to stateless parsing in the current version. I do it in all my research languages; I'm planning to write this up for the SLE parsing workshop at SPLASH.

Most of the delays built into stateless editor parsers are intentional, not there because parsing is slow. They just ran way too many user studies that revealed that programmers hate stateless syntax error messages.

### Great idea! I've used the

Great idea! I've used the same cut approach.

However, since I couldn't really find any writeups of it, I've been expecting to hit some nasty snags at some point. For instance, I was basically doing manual grammar analysis to determine where I could add cuts without shutting off *good* backtracking -- and I definitely make mistakes.

What problems have you faced using this approach? Do you manually add cuts?

### I added cuts manually; the

I added cuts manually; the biggest thing I've ever written in it personally is a JSON parser, which is only a few dozen lines, and the grammar is pretty straightforward so it wasn't difficult. In theory you could automatically add cuts for some cases; this paper:

http://ialab.cs.tsukuba.ac.jp/~mizusima/publications/paste513-mizushima.pdf

goes through how it might be done, but I don't pretend to understand their technique or its limitations.

Cuts are integral to having good error reporting in PEGs: without cuts, failures would cause every choice node ever taken to backtrack and you get one big "all these rules failed: (a | b | c | d | e | ...everything...)": not nearly as useful.

I think the main reason that PEGs are nice, though, is the fact that PEG grammars basically express a very familiar recursive descent control flow, similar to code in any other programming language.

That means that it's trivial to trace what gets parsed first, what gets parsed next, and so on. It also means that instead of getting obscure shift/reduce conflict errors, you get nice stack traces which are very informative as to what the parser was trying to do and why it failed.

### That was a really nice read;

That was a really nice read; thanks for posting it. Unfortunately, I couldn't find their earlier paper where they first introduced cut ("Improvement technique of memory efficiency of packrat parsing").

The automatic insertion points they identified are very helpful; I wonder what additional ones there might be. Do you know of any follow-up works?

Although they claim that restricting backtracking does improve error messages, they don't actually get into this topic; I would have enjoyed reading their thoughts/experiences on that. I wonder how different desired error reporting would be from the errors reported by parsers with automatically inserted cuts.

### Although they claim that

> Although they claim that restricting backtracking does improve error messages, they don't actually get into this topic; I would have enjoyed reading their thoughts/experiences on that. I wonder how different desired error reporting would be from the errors reported by parsers with automatically inserted cuts.

I think they're talking about automatic-inserted-cut vs. no-cut, rather than automatic-inserted-cut vs. manual-inserted-cut. I can't really see how the latter would be true.

The former is pretty obvious once you start working with recursive-descent PEG combinators and cuts: take a look at this example to see if it makes sense to you.

In short, without cuts, a parse failure means that every ordered-choice that was ever selected will be backtracked in an attempt to find another choice which is successful. This means that you end up with the "I tried parsing all choices at the start of the input, none worked" behavior I described above and demonstrate in the example, since many parsers have, at the very root, one very big ordered-choice node, and that's the one that gets backtracked to.

Using cuts prevents this backtracking, since every choice before the cut is committed to, so the parser can narrow down the possible-locations-for-error to "anywhere after the last cut". In the (pretty common, in my experience) case where there are no other choice nodes after the last cut, it can narrow down the error to exactly that last mini-parser which failed, which is what gives the nice error messages I showed earlier.
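A minimal sketch of that mechanism (hypothetical names, not any particular library's API): a soft failure lets the ordered choice backtrack, while a cut set after a committing prefix turns later failures into hard errors at the exact failing parser.

```python
# Sketch: cut turns a backtrackable failure into a committed one.
# Grammar: value = '(' !cut '7' ')'  /  '7'
# Once '(' has matched, the first alternative is committed; a later
# failure must be reported there, not fall through to the '7' branch.

class Fail(Exception):
    """Soft failure: an enclosing choice may backtrack."""

class HardFail(Exception):
    """Committed failure: reported as-is, no backtracking."""
    def __init__(self, pos, expected):
        super().__init__(f"at index {pos}: expected {expected!r}")
        self.pos, self.expected = pos, expected

class P:
    def __init__(self, text):
        self.text, self.pos, self.cut = text, 0, False

    def lit(self, s):
        if self.text.startswith(s, self.pos):
            self.pos += len(s)
            return s
        raise (HardFail(self.pos, s) if self.cut else Fail())

    def choice(self, *alts):
        start = self.pos
        for alt in alts:
            saved, self.cut = self.cut, False   # fresh cut scope per alternative
            try:
                return alt()
            except Fail:
                self.pos = start                # soft failure: try next alternative
            finally:
                self.cut = saved
        raise Fail()

def value(p):
    def parens():
        p.lit("(")
        p.cut = True      # '(' seen: commit to this alternative
        v = p.lit("7")
        p.lit(")")
        return v
    return p.choice(parens, lambda: p.lit("7"))

assert value(P("7")) == "7"     # second alternative, found by backtracking
err = None
try:
    value(P("(7]"))             # committed, so the ')' failure is reported
except HardFail as e:
    err = e
# err points at index 2, expecting ')'
```

Without the `p.cut = True` line, the `(7]` input would soft-fail the first alternative, backtrack to the start, fail the `'7'` alternative too, and yield only a vague "no alternative matched" at position 0.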

### Sorry, I wasn't very clear

What I meant was, there could be a difference between the errors reported by an automatic-inserted-cut parser and the ideal errors I'd like to have reported.

In that case, I wonder how the parser would have to be modified to provide the desired error reporting. To use the example from your project, suppose I wanted something like 'expected } or "' instead of the automatically generated 'expected }'.

In other words, it seems to me that the cut strategy has good error reporting properties as a side effect, not as its main goal (even though the error reporting is indeed quite good).

BTW, thanks for posting the link to your project. Some really good material in there.

### And for LR parsers?

I have a related question: is there a good way to add reasonable error messages to an existing LR parser (using yacc or something related), which currently only prints a (reasonably accurate) error position with a generic "Syntax Error" message?

There is the "ERROR" token that yacc/bison handle, but first, nobody seems to know for sure whether it could create conflicts in the grammar (or how to avoid that), and second, I haven't heard any success story of it producing improved messages for a reasonable amount of work.

### Yes, LR parsers have the easiest story for good error messages

There's a really gorgeous technique for implementing good error messages with LR parsers, which doesn't require *any* modifications to the grammar. It's described in a 2003 ACM TOPLAS paper, Generating LR Syntax Error Messages from Examples, by Clinton L. Jeffery.

The idea is that the parse state and input token can be viewed as a summary of the context. So, to automatically generate good error messages, you do the following:

1. First generate an LR parser from a grammar as usual.
2. Then, construct a list of erroneous programs and an English description of the error.
3. Run the parser on each erroneous program to find the parse state when the error happens.
4. Use this state info to generate an error routine which checks the state/input, and then issues the appropriate error description.
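The outcome of those steps can be sketched as a simple table lookup keyed on the parser configuration; the state numbers and messages below are invented for illustration.

```python
# Sketch: Jeffery-style messages as a (parse state, lookahead token) table.
# The state numbers and messages here are made up; in practice each entry
# comes from running the generated LR parser on one example erroneous
# program and recording the state it was in when the error fired.

ERROR_MESSAGES = {
    (42, "ELSE"):   "this 'else' has no matching 'if'",
    (42, "RBRACE"): "unbalanced '}'",
    (17, "SEMI"):   "unexpected ';' inside an expression",
}

def report_syntax_error(state, lookahead):
    # Fall back to the generic message for (state, token) pairs
    # no example program has covered yet.
    return ERROR_MESSAGES.get((state, lookahead), "syntax error")
```

The appeal is that the table grows independently of the grammar: each new confusing error only needs one more example program and one more entry.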

Russ Cox describes how he implemented this technique for Go in this blog post.

### Crowdsource that.

Crowdsource that. Generate some summary of LR state and tokens that can be reused, get people to provide their own English descriptions of what went wrong, and keep a database on a website (and a local copy)...

### Perhaps we should learn how

Perhaps we should learn how to report parse errors from FORTH and J. I've never heard programmers for those languages complain about the quality of their parse error messages.

### Why are we still reporting

Why are we still reporting errors anyways? Just put a line under the token with a simple message in the meta-text region of the line.

If we keep our syntax simple enough (and really, we don't need complicated syntax for anything), then it is good enough.

### Yeah, I agree with that.

Yeah, I agree with that. We'll still be dealing with semantics issues, but even those are well addressed by highlighting or drawing attention to the area.

### highlighting

The first nontrivial program I ever wrote (iirc), back in 1986, was a compiler for a language with extensible syntax. It kept a table of context-free rules, and used a modified form of left-to-right recursive-descent parsing with a provision for lazy left-recursion. When it came to syntax error messages, since the grammar wasn't even known before compile-time, I output the last several lines leading up to the point of the error with highlighting on the smallest nontrivial incomplete expression. The result was stunning to me at the time; I'd never experienced a system where the meaning of syntax error messages leapt off the screen so readily.

Several years later, when I read Henning Christiansen's list of unresolved challenges for adaptive grammars (which would eventually lead to my master's thesis), one challenge I couldn't identify with was producing useful parse error messages for adaptive grammars; my experience didn't prepare me to see that as difficult.

In retrospect, I suppose some part of this was that even then I accepted as "obvious" some grammatical constraints to prevent the grammatical structure from becoming innately opaque to the programmer. Years later I'd formalize some of that (independent of parsing algorithm) in a paper on Well-behaved parsing of extensible-syntax languages.

### Agreed

Agreed, but I'd just like to mention a couple points:

I'd view determining the token where an error occurs as a form of error reporting. As far as I know, it's not trivial for many existing grammars. Backtracking and lookahead complicate things. Plus, the error that the parser detects and reports may not be the error that the human actually made; this could be problematic even with very simple grammars.

What about parsing situations where the input text wouldn't be produced interactively? For example, say I have to parse some JSON from an AJAX request and the parser finds a problem. Or if the file is huge -- in these cases, I think error reporting might still have some value.

### Why should a JSON packet be

Why should a JSON packet be sendable if it can't be parsed? The reality is that we ship around untyped stuff that might be garbage. But if garbage is shipped, why isn't just a crash/abort appropriate? You want a better abort message?

Debugging the component that produced garbage for parsing is probably better left to debugging the unparsing step.

### if garbage is shipped, why

> if garbage is shipped, why isn't just a crash/abort appropriate? You want a better abort message?

Taking this question in the context of software in general, yes, I would like some better abort messages when my applications crash.

I would much prefer my software doesn't crash, of course (except in the sense of crash-only software).