Denotational semantics with other notions of convergence

In a typical domain-theoretic denotational semantics, a primitive value converges when it is non-_|_, and functions converge in the limit, pointwise. Are there similar denotational semantics out there in which convergence is defined differently?

I've found myself thinking "Why just non-_|_ for values?" I can partially answer my own question: it makes the semantic approximation order easy to define, and monotonicity with respect to that order is easy to ensure (for anything finitely computable, I think). Doing it otherwise would require every value to belong to a metric space whose metric defines convergence, plus a way to specify what a function returns at any finite approximation level, and monotonicity would be hard to ensure. Monotonicity might be too restrictive to be useful anyway...
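
To make the "otherwise" concrete, here is a rough Haskell sketch of what I have in mind; the names (Metric, dist, Approx, approxAt) are my own inventions, not an existing library:

    -- Every value type carries a metric that convergence is judged by.
    class Metric a where
      dist :: a -> a -> Double

    instance Metric Double where
      dist x y = abs (x - y)

    -- A value is given by its finite approximations: what you know at stage n.
    newtype Approx a = Approx { approxAt :: Int -> a }

    -- "Converges" would mean the stages form a Cauchy sequence under dist;
    -- nothing here enforces that, it only exposes the stages.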

But it might be cool. In languages defined with non-_|_ convergence, you can't, say, define a value that depends on all the values of an infinite list, even when it's well-defined. Infinite sum, for example. You can make a list of prefix sums and carry out the limit yourself, but I'd like the language to do it for me. If it doesn't converge, that's like writing an infinite loop: my fault.
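
Here is the by-hand version in Haskell; the stopping rule (compare successive partial sums against a tolerance) is purely my illustration, and it is exactly the kind of guesswork I'd rather not write myself:

    prefixSums :: Num a => [a] -> [a]
    prefixSums = scanl1 (+)

    -- "Carry out the limit yourself": stop when two consecutive partial sums
    -- are within eps of each other. This can be fooled, and it loops forever
    -- if the tolerance is never met.
    limitByHand :: Double -> [Double] -> Double
    limitByHand eps (x:y:rest)
      | abs (y - x) < eps = y
      | otherwise         = limitByHand eps (y : rest)
    limitByHand _ xs = last xs

    -- Example: the geometric series 1/2 + 1/4 + 1/8 + ...
    example :: Double
    example = limitByHand 1e-12 (prefixSums [0.5 ^ n | n <- [1 :: Int ..]])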


Operationally?

A language with those denotational semantics would, it seems to me, have a difficult time with an operational semantics.

Without peeking into function definitions, you can't be sure that an infinite sum converges; there's always the possibility that the next term will be too "large." After evaluating any length-n prefix of the infinite sequence, no matter how fast you have seen it shrinking so far, you can't be sure that the function doesn't evaluate to 37 for all terms after the nth. When does the implementation guess that n is large enough that the sequence doesn't converge and report an error? Alternatively, when does it guess that the sequence does converge and report an (approximate) value that may be in error? Such an erroneous report is different from writing an infinite loop: it is still "the programmer's fault", but instead of introducing non-termination it introduces potentially unbounded error.
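
To make that concrete (a toy example of my own, not anything from the post): two term sequences that agree on their first 1000 terms, only one of which has a convergent sum. No finite prefix can tell them apart, so any fixed cut-off reports the same answer for both.

    convergentTerms, divergentTerms :: [Double]
    convergentTerms = [1 / 2 ^ n | n <- [1 :: Int ..]]       -- sums to 1
    divergentTerms  = take 1000 convergentTerms ++ repeat 37  -- then "too large" forever

    -- Partial sums up to any n <= 1000 are identical for both sequences,
    -- so an implementation that stops there cannot distinguish them.
    partialSum :: Int -> [Double] -> Double
    partialSum n = sum . take n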

Discharging the assumption that the sum converges runs into Rice's theorem, and now you are building a more complicated type system to conservatively and statically approximate convergence.

I may be wildly misreading (sometimes the jargon gets me); if so, I would appreciate a nudge back on track.

Not misreading

You're not misreading. There are definitely operational issues to work out.

Operationally, it's tricky, as you say. I don't envision a "push Enter and wait for a definite answer" sort of interaction, so there would be no need for any static or dynamic system to determine when a finite approximation is "close enough". I've thought of two alternatives. One is background computation, in which the approximations get better as you wait. It'd be kind of like FRP. The other alternative has the user asking for better and better approximations as he decides he wants them.
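
As a rough sketch of the second alternative (the names Improving, improvingSum, and refineTo are hypothetical, not an existing interface): an answer is a stream of successively better approximations, and the user pulls on it as far as they care to.

    -- An answer is an infinite stream of approximations, assumed to improve.
    newtype Improving a = Improving [a]

    -- The infinite sum from the original post, as an improving value:
    -- the n-th approximation is the n-th partial sum.
    improvingSum :: [Double] -> Improving Double
    improvingSum terms = Improving (scanl1 (+) terms)

    -- The user asks for a better approximation whenever they decide they want one.
    refineTo :: Int -> Improving a -> a
    refineTo n (Improving xs) = xs !! n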

In both alternatives, the metric that defines the output value's metric space would be available somehow, so the user could get a "feel" for whether it's converging. If he wants to prove convergence - e.g. for all inputs, the sequence of output values is Cauchy - that'd be analogous to proving termination on all inputs.
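
One way to get that "feel", again with made-up names: expose the distances between successive approximations so the user can watch whether they shrink. Shrinking residuals suggest, but of course don't prove, that the sequence is Cauchy.

    residuals :: (a -> a -> Double) -> [a] -> [Double]
    residuals dist xs = zipWith dist xs (tail xs)

    -- e.g. residuals (\x y -> abs (x - y)) (scanl1 (+) [1 / 2 ^ n | n <- [1 :: Int ..]])
    -- yields a sequence tending to 0.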

The thing that worries me is that it seems the semantics would (roughly) move all the limits to the outside of the program and combine them into a single limit. Interchanging limits isn't always legal - e.g. for a(n, m) = n / (n + m), taking the limit in m first gives 0, but taking it in n first gives 1 - so I'd have to pin down exactly when it is, and that seems tricky - and also a lot like domain theory.

What's your intuition for convergence?

I'm not sure I understand what you mean by convergence. Especially in lazy languages, one can have values that are not completely defined, but are not _|_.

For example (1, _|_), or 1:2:3:_|_ (the list [1, 2, 3] with an undefined tail), where the _|_ is of the appropriate type.
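
In Haskell these look like the following, with _|_ written as undefined; the defined parts can be observed without touching the undefined ones:

    pairWithBottom :: (Int, Int)
    pairWithBottom = (1, undefined)

    listWithBottomTail :: [Int]
    listWithBottomTail = 1 : 2 : 3 : undefined

    -- fst pairWithBottom          ==> 1
    -- take 3 listWithBottomTail   ==> [1,2,3]
    -- length listWithBottomTail   ==> _|_ (it forces the undefined tail)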

I also recall a paper by Martín Escardó that starts operationally, defines when one computation is observationally below another (whenever the former returns a ground value, so does the latter), and then obtains domains, topologies, convergence, etc. So these notions appear to be ingrained in recursive functions.

I'll try to find the actual paper later today.