
Context Sensitivity and relational comparison operators

I'm elbows-deep in a language implementation, and since (for a new experience) it's not some kind of LISP, I'm dealing with infix operators and implicit operator precedence. I solved a little problem and thought you all might find my rambling thoughts about it interesting.

I noticed that treating relational comparison operators strictly as context-free binary operators matches the way they work in many programming languages, but does not match the way they're used in mathematics (and in some other programming languages).

When I write
A < B < C < D

assuming we have a context-free parser and left-associative relational operators, this is
(((A < B) < C) < D)
and we wind up comparing the result of a comparison. But this is not what a mathematician means when they write that expression. Standard infix math notation wants it interpreted as
(A < B) && (B < C) && (C < D).

Engineering considerations say we want that interpretation, but with each shared subexpression evaluated only once and with short-circuit semantics.
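
(As an aside: Python's chained comparisons happen to give exactly this reading already, which makes it a convenient way to watch the target semantics; this little sketch just instruments the operands with a made-up noisy helper so you can see the evaluation order.)

def noisy(name, value):
    # Print when an operand is evaluated, so we can watch evaluation order.
    print("evaluating", name)
    return value

# A < B < C < D with the mathematician's reading: each operand is evaluated
# at most once, and evaluation stops at the first failing comparison.
result = noisy("A", 1) < noisy("B", 5) < noisy("C", 3) < noisy("D", 7)
# Prints "evaluating A", "evaluating B", "evaluating C"; D is never touched,
# because 5 < 3 already failed.  The whole thing behaves like
#   (A < B) && (B < C) && (C < D)
# with B and C each evaluated once, and with short-circuiting.
print(result)    # False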

When generating code for a C program, a relop doesn't check whether its left subtree is another relop; all it knows is that the left subtree returned #false (or #true), so it compares its right subtree to that value. C treats booleans as a subrange of the numbers [0..1], so the comparisons have humorous results that require "inside" knowledge about the binary representation of booleans to comprehend. We get single subexpression evaluation but no short-circuit semantics.
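
(Python's booleans happen to be a subrange of its integers too, so forcing the left-associative grouping with parentheses reproduces the C result; this is just an illustration, not C itself.)

A, B, C = 1, 3, 2

# The context-free, left-associative reading: ((A < B) < C).
# (1 < 3) is true, true as a number is 1, and 1 < 2 is true.
print((A < B) < C)    # True, even though 3 < 2 is plainly false

# The chained, mathematician's reading, for contrast:
print(A < B < C)      # False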

In languages like Java, comparing booleans and numbers is a static type error, because static type errors are probably more useful than humorous results that require inside knowledge about the binary representation of booleans to correctly interpret. The chained form simply doesn't compile, so you get neither single subexpression evaluation nor short-circuit semantics out of it; you have to write the expansion by hand.

If it's a logic-descended language, the comparison may return NIL or a non-NIL value, and the non-NIL value it returns is the value of the last subexpression evaluated, meaning the value of the right subtree. This treats numbers as a subrange of boolean values (all of them are #true). Any relop whose left subtree is NIL returns NIL without evaluating its right subtree. An analogue of the mathematician's interpretation of chained relational operations falls out "naturally", and if you never do numeric operations on a value you got in a context that encouraged you to think of it as a boolean, you will never notice the difference. You get single subexpression evaluation, you get short-circuit semantics - all seems good!
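
(Here's a minimal sketch of that behavior, using Python's None as a stand-in for NIL and a thunk for the right subtree so the short-circuit is visible; lt and its thunk argument are names I'm making up for illustration, not anything out of a particular Lisp.)

def lt(left, right_thunk):
    # Logic-language style comparison: return NIL (None here) on failure,
    # otherwise the value of the last subexpression evaluated, i.e. the
    # right subtree.  A NIL left subtree short-circuits: the right subtree
    # is never evaluated at all.
    if left is None:
        return None
    right = right_thunk()
    return right if left < right else None

A, B, C, D = 1, 5, 3, 7

# A < B < C < D, written as nested relops over the chain.
result = lt(lt(lt(A, lambda: B), lambda: C), lambda: D)
print(result)    # None -- 5 < 3 fails, and D's thunk is never called

# A passing chain returns the value of its last subexpression:
print(lt(lt(lt(1, lambda: 2), lambda: 3), lambda: 4))    # 4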

But it breaks static type checking. This means NIL is its own type: relational operators have to check for it at runtime, and it can get returned to continuations that expect numbers. So now everything that uses booleans or numbers has to do runtime type checking. From a static type checking POV, treating numbers as a subrange of booleans is even more expensive than treating booleans as a subrange of numbers.

When I'm traversing the abstract syntax tree after parsing, I can have different code templates for emitting relational operations, and I pick one depending on whether a subtree or the parent node is or is not another relop. So I get static type checking, I get the mathematician's interpretation of chained relational operators, and I get the efficiency of caching a single evaluation instead of expanding a macro and evaluating something a second time... all is good.
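
(Roughly the shape of it, sketched in Python over a toy tuple AST rather than my real node types; the node format, the names, and the generated code are all invented for illustration.)

RELOPS = {"<", "<=", ">", ">="}

def is_relop(node):
    return isinstance(node, tuple) and node[0] in RELOPS

def flatten_chain(node):
    # Turn the left-leaning relop tree (((A < B) < C) < D) back into
    # operands [A, B, C, D] and operators [<, <, <].
    op, left, right = node
    if is_relop(left):
        operands, ops = flatten_chain(left)
    else:
        operands, ops = [left], []
    operands.append(right)
    ops.append(op)
    return operands, ops

def chain_template(operands, ops):
    # Chain template: each operand evaluated once, bail out at the first
    # failing comparison, deliver a boolean at the end.
    lines = ["def chain():", f"    left = {operands[0]}"]
    for op, operand in zip(ops, operands[1:]):
        lines.append(f"    right = {operand}")
        lines.append(f"    if not (left {op} right): return False")
        lines.append("    left = right")
    lines.append("    return True")
    return "\n".join(lines)

def compile_relop(node):
    # Template choice: a relop whose left child is another relop is the root
    # of a chain and gets the chain template; a lone relop gets the ordinary
    # binary-comparison template.
    op, left, right = node
    if is_relop(left):
        return chain_template(*flatten_chain(node))
    return f"{left} {op} {right}"

ast = ("<", ("<", ("<", "A", "B"), "C"), "D")    # parse of A < B < C < D
print(compile_relop(ast))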

But now I've broken referential transparency: identical subexpressions encountered in different contexts mean different things. All the type checking can be done statically, but the relational ops other than the root of the chain are making a choice between jumping to their own continuation site in their parent relational op (to return a number) or jumping to the boolean continuation at the call site of the whole chain (to return #false). Only the root of the chain is special; it has no numeric continuation, and can jump instead to the exit continuation to return #true. This is statically typed return polymorphism.
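
(The same thing sketched in Python, with ordinary closures standing in for the continuations; the names are invented and this is just the shape of the control flow, not the code I actually emit.)

def relop_inner(left, right_thunk, numeric_k, false_k):
    # A relop below the root of the chain: on success it "returns a number"
    # by jumping to its continuation site in the parent relop with the right
    # operand; on failure it jumps straight to the chain's boolean
    # continuation with #false, skipping the rest of the chain.
    right = right_thunk()
    return numeric_k(right) if left < right else false_k()

def relop_root(left, right_thunk, exit_k, false_k):
    # The root of the chain has no numeric continuation; it jumps to the
    # exit continuation to return #true, or to the boolean continuation
    # to return #false.
    right = right_thunk()
    return exit_k(True) if left < right else false_k()

def chain(A, B, C, D):
    # A < B < C < D compiled into explicit continuation jumps.
    exit_k = lambda value: value
    false_k = lambda: exit_k(False)
    return relop_inner(
        A, lambda: B,
        numeric_k=lambda b: relop_inner(
            b, lambda: C,
            numeric_k=lambda c: relop_root(c, lambda: D, exit_k, false_k),
            false_k=false_k),
        false_k=false_k)

print(chain(1, 2, 3, 4))    # True
print(chain(1, 5, 3, 4))    # False, and D is never evaluated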

I was considering whether to be upset about breaking referential transparency, but then I realized people are using languages with exceptions all the time without complaining about it, so referential transparency can take a walk, I guess, if it gets us static type checking.