
sublanguages of CTM's Oz

I've been reading Van Roy and Haridi's Concepts, Techniques, and Models of Computer Programming. It is an exciting and thought-provoking work. I have one major criticism of it that I thought LtU participants might be interested in discussing. While Oz does a great job of incorporating multiple programming models into a single language, to me what it lacks is a way of constraining oneself to a particular model or set of models, and of demonstrating to others that these constraints have been met.

For example, I would like a way to assert that a function is declarative (i.e. is truly a function), and have this be enforced. I suppose this would have to be enforced at run-time to fit with the dynamically typed language design. Also, I'm not sure how this would be enforced; a conservative strategy would be to consider any use of NewCell or Exchange as non-declarative, but this would yield "false positives" in cases where state is used in a function that is still observationally declarative (e.g. is just using state for memoization).
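To make the conservative strategy and its false positives concrete, here is a minimal sketch in Python (for illustration only; Cell and declarative are hypothetical stand-ins for Oz's NewCell/Exchange and the proposed assertion):

_depth = 0                        # > 0 while a @declarative call is running

class Cell:
    """Stand-in for Oz's NewCell/Exchange."""
    def __init__(self, value):
        self.value = value
    def exchange(self, new):
        if _depth:                # conservative rule: any mutation is flagged
            raise RuntimeError("stateful operation in declarative code")
        old, self.value = self.value, new
        return old

def declarative(f):
    """Run-time assertion that f performs no cell operations."""
    def wrapped(*args):
        global _depth
        _depth += 1
        try:
            return f(*args)
        finally:
            _depth -= 1
    return wrapped

_cache = Cell({})

@declarative
def fib(n):
    memo = dict(_cache.value)
    if n not in memo:
        memo[n] = n if n < 2 else fib(n - 1) + fib(n - 2)
        _cache.exchange(memo)     # flagged at run time...
    return memo[n]

fib(10)   # ...raises RuntimeError, a false positive: fib is
          # observationally declarative (same input, same output)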

Perhaps I want something more like monads in Haskell, where the fact that a function is not observationally declarative is reflected in its type (and in the way it must be used). But this may be an inappropriate comparison, because Haskell is a one-model language and is only simulating a stateful model via monads. And of course it is statically typed. In fact, the CTM authors specifically mention (page xxi) that they disfavor the monadic approach to state because it is too explicit, violating modularity by making the use of state change the interface of a function (if I understand them correctly).

On the flip side, I'd say modularity is better supported by a language in which design contracts (e.g. whether you use state) can be enforced. It all comes down to whether you think the use of state is a private or a public matter, or perhaps a choice the programmer can make either way (and if the programmer chooses to keep that choice private, we are forced to make the conservative assumption that he did use state).

Reversible generators

Python has a language feature that's been brought up here a couple of times before, whereby a function can return a value but remember its current point of execution, and resume execution from there the next time the function is called. Here's a simple example:

def myrange(a, b):
    while a <= b:
        yield a   # hand back a value; resume here on the next request
        a += 1

I'm curious if any work has been done on making generators more arbitrarily manipulable. If I compute reverse(myrange(1, 10)), Python will have to do the entire generation at once and then perform the reversal. This eliminates one of the key advantages of generators -- producing only one value at a time, and thus saving memory and potentially latency.
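Concretely, with today's semantics, the reversal cannot happen lazily:

reversed(myrange(1, 10))                 # TypeError: generators aren't reversible
list(reversed(list(myrange(1, 10))))     # works, but materializes all ten values first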

What I'm curious about is whether anyone has done any work on what constraints would be necessary on generators (obviously they couldn't contain arbitrary Python code, as they can now) to make a reverse() possible that still generates each value one at a time, just in reverse order. Presumably such a solution would be powerful enough to implement things like step too (currently, to get the effect of step with a generator, you just have to throw values away).
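Python's built-in range already hints at the kind of constraint that might suffice: because it is described by data (start, stop, step) rather than by arbitrary code, both reversal and striding stay lazy. A hand-rolled sketch of the same idea (MyRange is an illustrative name, not anything from the post above):

class MyRange:
    """Described declaratively by its bounds, so it can be walked
    in either direction without buffering any values."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __iter__(self):
        x = self.a
        while x <= self.b:
            yield x
            x += 1
    def __reversed__(self):          # reversed() uses this, lazily
        x = self.b
        while x >= self.a:
            yield x
            x -= 1

list(reversed(MyRange(1, 10)))       # [10, 9, ..., 1], one value at a time
list(range(1, 11)[::2])              # built-in range even gets "step" for free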