Behavioral subtyping and errors

Forgive the elementary nature of this question, but lacking the correct terminology, I'm having trouble getting started.

Consider the definition of a class A, together with a structural invariant (in the Guttag/Liskov/Meyer sense of that term). I'm trying to show that a certain class B is a "sort of" subtype of A, in the sense that it obeys the Liskov/Wing substitution principle on all input for which the invariant holds, but not on input that causes the invariant to fail (in particular, methods of B will result in an error when the invariant does not hold, even if the same methods return normally for A objects).

Mind you, I'm not talking about the requirement that B preserve the same invariants as A (which is a basic component of the subtyping relationship). Rather, both have the same purported invariant, but B includes runtime invariant checks that turn a violation into an error, while the same methods in A might return normally even though the encapsulated state now violates the invariant. If the invariant truly holds in both A and B instances in all cases, then subtyping holds. It's only when the invariant fails (i.e., there is an implementation bug) that the difference shows up.
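For concreteness, here is a minimal Java sketch of the situation (the `Counter`/`CheckedCounter` names and the `count >= 0` invariant are illustrative, not from any particular codebase):

```java
// Purported invariant: count >= 0.
class Counter {
    protected int count = 0;

    // Bug: decrement can drive count below zero, silently violating the
    // invariant, yet the method returns normally.
    public void decrement() {
        count--;
    }

    public int get() {
        return count;
    }
}

// Same invariant, but checked at runtime: where Counter returns normally
// with corrupted state, CheckedCounter fails loudly.
class CheckedCounter extends Counter {
    @Override
    public void decrement() {
        super.decrement();
        if (count < 0) {
            throw new IllegalStateException("invariant violated: count < 0");
        }
    }
}
```

On inputs where the invariant holds, the two classes are observationally identical; they diverge only on the buggy path.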

Is there an accepted term for this kind of "subtyping"? What is known about it?

Typing Abstraction Violations?

It seems to me you're looking for a contradiction in terms...

I don't think so, no, but

I don't think so, no, but perhaps my question was unclear. What I'm trying to express is the idea that if A instances are free of invariant errors (i.e. we really do have an invariant for A), then B is a subtype of A. It may be the case, however, that a method in A contains an error which causes the purported invariant to fail, yet the method returns normally anyway, and may even satisfy the pre/post-conditions of its contract.

In other words, the invariant failure is undetectable across the call boundary, because it isn't checked. The corresponding B instance would, in this case, fail, because it includes the check.


You are trying to express a concept in your language that is, ultimately, predicated on an object NOT being of the type it purports to be.

Technically, B's type is a subtype of A's type, and A is just implemented incorrectly if it violates its purported type. This would be clearer if you separated the type from the implementation.

In their role as types, you should be able to expose preconditions and postconditions to clients of the object, and you really shouldn't need to check them within each object (ideally that would be handled by static analysis; failing that, less ideally, by a dynamic type check). So it isn't clear to me you should call your example anything but an ill-typed program.

Okay, I think I get it

I'm looking at a way of adding these checks dynamically to Java code, so encoding them in the type system isn't possible. Static analysis is certainly a desirable approach, but it's the dynamic check that I'm really after here (and already know how to do, I just want to make sure I'm discussing it in a clueful way).
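One way to add such checks dynamically, without touching the class itself, is a `java.lang.reflect.Proxy` wrapper that re-checks the invariant after every call. A sketch, assuming the object implements some interface (the `Stack` interface and the `size() >= 0` invariant here are hypothetical):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

interface Stack {
    void push(Object o);
    Object pop();
    int size();
}

final class InvariantChecker {
    // Wrap 'target' so that the purported invariant is checked after
    // every method call; a violation becomes an IllegalStateException.
    static Stack checked(Stack target) {
        InvocationHandler handler = (proxy, method, args) -> {
            Object result = method.invoke(target, args);
            if (target.size() < 0) {   // the purported invariant
                throw new IllegalStateException("invariant violated");
            }
            return result;
        };
        return (Stack) Proxy.newProxyInstance(
                Stack.class.getClassLoader(),
                new Class<?>[] { Stack.class },
                handler);
    }
}
```

The wrapped object plays the role of B: it behaves exactly like the underlying A instance until the invariant fails, at which point it errors instead of returning normally.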

Anyway, your broader point about broken implementations makes sense. Perhaps the way I want to frame this is that instances of B are subtypes of correct instances of A, but for incorrect instances (specifically, those for which the invariant fails), a B instance will expose this fault where the corresponding A instance may not. For these ill-typed cases, the notion of subtype doesn't even make sense, so we don't care if the behavior isn't the same.

Am I missing anything else here? Anything else I should know about related work (besides the textbook stuff and the Liskov/Wing paper)?


You have type I, an interface type for which membership can be statically checked by Java. Then you have type T, a subtype of I that imposes behavioral constraints for which membership cannot be statically enforced by Java. Finally, you have classes A and B whose instances are of type I, but only B's instances are of type T.
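A tiny sketch of that distinction, with hypothetical names (the behavioral constraint here, "size() is never negative", stands in for the invariant):

```java
// Type I: membership is statically checkable by Java.
interface I {
    int size();
}

// A's instances are of type I, but this one violates the behavioral
// type T — Java cannot detect that statically.
class A implements I {
    private int n = -1;   // bug: invariant already broken
    public int size() { return n; }
}

// B's instances are of type I and enforce membership in T dynamically.
class B implements I {
    private int n = 0;
    public int size() {
        if (n < 0) throw new IllegalStateException("not of type T");
        return n;
    }
}
```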

That makes sense

Thanks, David and Matt, for helping clarify this.