Fudgets -- Purely Functional Processes with applications to Graphical User Interfaces
Magnus Carlsson and Thomas Hallgren
Doctoral thesis for the degree of Doctor of Philosophy, 1998
Carlsson and Hallgren's 1998 thesis on functional window gadgets is of course not new. However, I want to call attention back to it because it discusses the idea of an Oracle. I submit that the notion of a system that supplies oracles (as many as needed) from the runtime environment provides the best way to account for nondeterminism/indeterminism in a notation or a framework of reasoning that embraces Referential Transparency. I submit that Oracles should be more on the minds of those who write about designs for pure functional programming languages and systems than they seem to be today.
(Unfortunately, the term "Oracle" in this meaning is not in practice Googlable, because anything you'd turn up would be swamped by the returns about the Oracle brand database management system.)
When I say indeterminism, I mean the kind where information comes from various sources over rough roads, and the right thing to do is to act differently, with different output, depending on which piece of information happens to arrive first. The right thing to do for the sake of the humans the computer is interacting with may also depend on the time required for various calculations, so there needs to be a way to model the fact that a routine asks for a calculation that actually takes up time. A referentially transparent language can talk about what has to be calculated, but it has trouble with the fact that calculations cost something, and that one may finish before or after another, or before or after some system input (e.g., a command from a human to go get some other information, to query the status of the calculation, to change the resources devoted to it or the priority of their use, to suspend it, or to cancel it).
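For contrast, here is how that race is usually expressed today: in the IO monad, outside the pure semantics this post is asking about. A minimal sketch using the `race` combinator from the async package, offered only as the status quo, not as the proposal:

    import Control.Concurrent (threadDelay)
    import Control.Concurrent.Async (race)

    main :: IO ()
    main = do
      -- Two sources with different (simulated) arrival times.
      r <- race (threadDelay 200000 >> pure "slow source")
                (threadDelay 100000 >> pure (42 :: Int))
      -- r :: Either String Int; whichever action finished first wins.
      print r   -- prints Right 42

The choice here lives entirely in the runtime; nothing in the types or the denotation accounts for it, which is exactly the gap the oracle idea tries to fill.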
Given an oracle o from the environment, and two data sources a and b, we can have a function decide, where
decide(o, a, b)
returns something like (Left, a) if a is available before b, or (Right, b) if b is available first (assume Left and Right are just symbols, distinct from one another). This operation consumes the oracle o; no attempt must be made to use it again. The referentially transparent explanation for how this function application can determine which computation finished first is that o, by its magic knowledge from the gods, knew all along which calculation was destined to finish first; hence the metaphor of an oracle as the term for the concept. Of course, an oracle can't be printed. It's used just once, at the point where the reality of the outcome is going to be determined, and the prediction never fails, since it can't in practice be tested as a prediction.
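A hypothetical sketch in Haskell, with the (Left, a)/(Right, b) pairs rendered as the standard Either type. The names Oracle and decide are mine, not taken from the Fudgets thesis; a real system would have the runtime mint each Oracle, whereas here its verdict is faked with a Bool just so the code runs:

    -- Oracle is opaque to the program: it carries the god-given verdict
    -- of whether a's computation finishes before b's.
    newtype Oracle = Oracle Bool

    -- decide consumes the oracle; it must not be applied to it again.
    -- Referential transparency holds because the outcome was fixed when
    -- the oracle was created, not when decide is evaluated.
    decide :: Oracle -> a -> b -> Either a b
    decide (Oracle aFirst) a b
      | aFirst    = Left a
      | otherwise = Right b

    main :: IO ()
    main = do
      let o = Oracle True                 -- pretend the runtime supplied this
      print (decide o "answer from a" (42 :: Int))   -- ==> Left "answer from a"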
An alternative way to model indeterminism within referential transparency is via the sets of Hughes and O'Donnell. I thank Fergus Henderson for his easy-to-understand explanation of the Hughes/O'Donnell approach. The output of an indeterminate function application is understood to be the set of all possible results; the implementation, however, only ever emits one of them at the exit of the system. I used to think I could articulate an argument for why this approach would not satisfy as well as the oracles approach. In the Hughes/O'Donnell sets account, an intermediate result is not understood to contain any decision about which result was computed (although of course in the implementation the decision would be made and only its result stored). Since the result in concept contains all the possible results, I'm not sure there is a semantically sound way to think about storing it persistently so as to avoid having to repeat long calculations after a crash of volatile memory. Maybe by and by I'll think of an argument again on the comparison of these approaches. Or maybe someone else will.
Does anyone propose a third answer?
It seems to me that in the design of a programming system demonstrating orthogonal persistence (transparent persistence), the consumption of an oracle may be a significant event. Or maybe not; maybe in such a design all intermediate results are as significant as the time it took to calculate them. The truly significant event is system output, because the output embodies the decisions taken at all the indeterminate choice points that contributed to it, and subsequent outputs have to honor those choices.