splitting the program into formalizable vs. non-formalizable parts?

While reading up on OO and Algorithms vs. Interaction, I re-encounter the adage that OO's flexibility is at odds with verification formalisms. It makes me wonder what various, er, formalisms people use to split a program up into those parts which are a non-verifiable 'interaction machine' vs. those parts which are formalizable algorithms, cf. monads used to make sequencing explicit. (I shall be reading more of Wegner's papers. [Edit: and old LtU threads about 'interactive machines'.])
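(To make the kind of split I have in mind concrete, here is a minimal Scala-flavored sketch of my own, not taken from Wegner or anyone else: the pure, formalizable part lives in ordinary functions, while the 'interaction machine' part is quarantined behind a small IO-like wrapper whose sequencing is made explicit by flatMap.)

final case class IO[A](run: () => A) {
  // Sequencing is explicit: the next action is built from the result of this one.
  def flatMap[B](f: A => IO[B]): IO[B] = IO(() => f(run()).run())
  def map[B](f: A => B): IO[B] = IO(() => f(run()))
}

// Formalizable: a total, pure function we can reason about equationally.
def parse(line: String): Option[Int] = line.trim.toIntOption

// Non-formalizable "interaction machine" bits: they talk to the outside world.
def readLine: IO[String] = IO(() => scala.io.StdIn.readLine())
def putLine(s: String): IO[Unit] = IO(() => println(s))

// Only the IO layer knows about ordering and effects; the pure core stays verifiable.
val program: IO[Unit] =
  readLine.flatMap(line =>
    putLine(parse(line).fold("not a number")(n => s"doubled: ${2 * n}")))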

I followed the link...

...to a set of slides full of dangerous fatuous nonsense. Extremely depressing. I'll have to follow up later in more detail, but consider that there are already several extant formalizations of OO.

The bottom line is that there is no aspect of software, including interaction, that isn't amenable to formalization. Whether such formalization would be worth it in any given case or not is a separate matter.

Confused

I don't understand what is "dangerous fatuous nonsense" here (in your opinion). What Wegner is discussing is not that different from what Carl Hewitt recently started a thread here on LtU discussing. Actually, Wegner's proposals about interaction vs. algorithms are a recapitulation of 40 years of evolution of one branch of software engineering/systems modeling.

I think a much more interesting issue, which I've been thinking a lot about, is whether Dina Goldin's Persistent Turing Machines are a useful formalism.

I think the even deeper question that Wegner, and frankly Carl Hewitt, are both avoiding is the extent to which properties of interaction can be cleverly avoided - and when they are clearly necessary. For example, interactive machines are fundamentally coinductive and can model hidden state (see the sketch below). Such hidden state also implies the ability for an "object" to "backtrack" through a "journal" of its "history". However, misused, such hidden state can require the object to "guard" every sequential interface with checks on the method parameters. Invoking such a guarded interface then amounts to more than a mere fire-and-forget message-send, since in order for the sender to invoke an interface on the receiver "correctly", it must know something about the receiver's implementation.

This sort of structured analysis, the sort of reasoning employed regularly by software engineers seeking to build maintainable solutions that accurately correspond to problem domains, appears to be completely lacking from academic computer science. I therefore think that while the expressive power needs to come from researchers, the practical motivation and subsetting of that expressive power needs to come from engineers.
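(As a rough illustration of what I mean by "fundamentally coinductive", here is a tiny sketch of my own, in Scala-ish notation rather than anything from Wegner or Hewitt: an interaction machine is given entirely by how it answers each input, forever, with whatever state it has hidden behind the step function.)

trait Machine[I, O] {
  // All an observer can do is feed an input and get an output plus the next machine.
  def step(in: I): (O, Machine[I, O])
}

// Hypothetical example: a running total whose state n is hidden inside the closure.
def totaler(n: Int): Machine[Int, Int] = new Machine[Int, Int] {
  def step(in: Int): (Int, Machine[Int, Int]) = (n + in, totaler(n + in))
}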

If you don't think this matters, just imagine Truly 3D UI toolkits. Microsoft Research's Matt MacLaurin has predicted all major OSes will have 3D UI toolkits by 2020, with true support for focus and perspective, in order to handle the massive data visualization needs we've created as a society obsessed with information -- you cannot accomplish these things using the WPF widgets available today (except perhaps for the Canvas, which makes good use of retained-mode architecture, but ultimately lacks optimizations at this point in time). Wegner's background includes his project in the 1990s on expressing compound, active document technology. It would be a mistake not to take a look at next-generation graphics as the toy experiment for the sort of theory Wegner is proposing.

Confusing

I agree with Paul. I haven't yet read anything along the lines of "interactive Turing machines" that wasn't either stating the obvious or overstating its importance. We can formally study the behavior of machines from the inside of the box - if this input comes in, this output goes out. If you want to talk about what those inputs and outputs mean in the context of the machine's use, then you have to formalize the context. Duh! As for 3D UI toolkits, the same situation certainly applies. If you're going to reason about the correctness of output images presenting 3D scenes to a human viewer, then you're going to need some physical model of the world in which to perform that reasoning. If you're willing to axiomatize the meaning of the inputs and outputs, then reasoning becomes straightforward.

The slide about Penrose's claims about mathematicians and machines borders on wrong (though it's hard to pin down what claims that slide is making). Look here for a discussion of what's wrong with Penrose's argument. It doesn't require concluding that physical systems compute more than Turing Machines.

More to Come...

...when I'm not at work, but:

My immediate reaction is to the claims that neither OO nor "interaction" can be formalized, and to the final slide claiming a new foundation for CS, AI, etc. etc. etc. These kinds of wild claims have been with us for the entirety of CS' history, and basically, whenever anyone says that CS needs/is going to get a new foundation, I reach for my wallet, because I'm being had.

With that said, ironically, the material on induction and coinduction does look harmless enough, as far as it goes. But once you admit that you can model interactive processes coinductively, I wonder what's left in "formalization" that you "can't do?" Ditto OO, which has a number of formal calculi at this point.

So my problem is that I see a couple of claims that aren't merely falsifiable but are false, just enough reasonable math and logic to sound convincing, and then the wild claims about a New Kind of Computer Science. It's that kind of thing that makes me keep my eyes open for the exits, Jason Bourne style.

Update: I neglected to add that when I claim that interaction is formalizable, I'm thinking precisely of the work on FRP, and agree wholeheartedly with "It would be a mistake not to take a look at next generation graphics as the toy experiment for the sort of theory Wegner is proposing," to the extent that I feel I understand what Wegner is proposing.

OO being harder to formalize

Cook (as seen on LtU) said something along the lines of OO being hard to verify, but not as extreme as the gist I get from Wegner.

Object-oriented programming has caused significant problems for verification efforts [34, 45, 2]. This is not surprising if you understand that object-oriented programming is high-order procedural programming; objects are a form of first-class procedure value, which are passed as arguments and returned as values everywhere. It is difficult to verify programs that combine first-class higher-order functions and imperative state.

The connection has been more

The connection has been more recently shown and discussed (see on LtU). I still find it odd that much of literature ignores this (which, as hinted by Wegner's informal discussion, should be intuitive even if not in a precise way).

Btw, Paul, you might want to check Wegner's wikipage & publications to understand where he's coming from. The tone of your post threw me off a little; perhaps you already know and this is just a hot-button issue :)

The connection has been more

An OO program adhering to the Liskov Substitution Principle is effectively the same as a first-order program. The reason is that when you adhere to it, you have ensured that a method call on a subclass satisfies the contract of its superclasses. Since at runtime any method call can only invoke a method on a subclass of the static type of the expression, this means you don't have to worry about which code actually gets run dynamically; the contract of the superclass is always a sound approximation to its behavior.

Of course, this is also why no interesting OO programs adhere to the LSP: higher-order behavior is just too useful! (For a concrete example, consider what the spec of a Function class with Add and Multiply subclasses has to be, and then try to verify a map procedure that uses an arg of type Function.)

I didn't mean to get into

I didn't mean to get into the subclassing vs. subtyping issue (and it seems orthogonal, right?). As an example of what I had in mind: as part of the resurgence of dynamic languages, we're seeing a lot of "Analysis X for Dynamic Language Y" papers. For the same Y, alias/points-to analysis X1 will ignore results from control-flow analysis X2 (and vice versa). I'd rather not point fingers at particular authors, however.

There are several good reasons for focusing on points-to for OO / (heavily) imperative languages. For example, when positioning a paper, it seems points-to is better studied (empirically, at least), providing a clearer basis for comparison. As another, it seems points-to is more directly applicable to optimization. Anyways, this is just a pet peeve of mine and I don't mean anything deep here.

No worries...

...this is just my take on why the verification community has historically ignored the higher-order aspects of OO programming. To put it (too) glibly, the LSP provided a convenient way to pretend OO was the same as Hoare logic. :) Luckily, though, this phase is over: as tool support gets better and verifying significant programs looks more and more feasible, ignoring all the stuff we need to prove actual programs correct looks less and less reasonable.

Classical problems?

Neelk, first: thanks for the wonderful perspective. You state academically what I mostly come across as a barbarian seeking to smash things (probably not too far off from the truth).

Second, do you know of any good problems to demonstrate the verification of correct behavior? One perspective is that inheritance is essentially a communication protocol. This viewpoint was expressed by some in the mid-90s and even got into Communications of the ACM, but nothing was ever really done with it. Matt Parkinson at Microsoft Research has recently started re-promoting this view of re-use between a client subclass and its superclass, advocating separation logic.

The essential difficulty with subclassing (answering Leo's question about orthogonality) is that subclassing has the potential to divide an abstract state or, worse, hidden state into several substates. With hidden state, you just don't know this is taking place, so any delegation to the state-process of the superclass can amount to out-of-band communication. Although subtyping and subclassing can be considered orthogonal, combining both actually requires quite a bit of specification effort (unless you view LSP as necessary and not merely sufficient and thus reduce the degrees of freedom in the system from the start) -- at least right now.

Aspect-oriented programming has these similar communication protocol problems, by the way. Most people just aren't aware of the gotchas. See:

(1) On Identifying Bug Patterns in Aspect-Oriented Programs by Sai Zhang (who is a student working on debugging tools like Celadon and AutoFlow for aspect-oriented programs)
(2) Exception Handling Bug Patterns in Aspect Oriented Programs

These are just like the Fragile Base Class problems, and it is important to realize their similarity. The only major difference is that abstractions are composed sideways (horizontally) with aspects, and so making a value judgment on the quality of the abstraction is different from the abstraction failure in the Fragile Base Class problem case (IMHO).

Function query

consider what the spec of a Function class with Add and Multiply subclasses has to be, and then try to verify a map procedure that uses an arg of type Function

I'm unclear what you intend here.

In the Function case, the contract imposed would be something like "give me two numbers, and I'll give you a number back". That is a pretty strong guarantee in computational terms.

Obviously, there are going to be many interesting facts about Add and Multiply you can't verify from this, but you could certainly verify that your map procedure will produce a result of the appropriate type, it won't blow up, etc.

What are you suggesting is missing?

The problem...

...isn't in the abstract Function spec -- it's in the verification of anything that uses an abstract function. Concretely, let's think about how we might specify an abstract Function class, and then write some code to fold it over a collection.

The spec of AbstractBinaryFunction will be something like:

 { false } AbstractBinaryFunction.apply(a : int, b : int) { true }

This says that apply, at the abstract class, can never be invoked (precondition false), and you won't learn any meaningful information from such a call (postcondition true), which makes sense. In particular subclasses, we can get more useful behavior by specializing the spec:

 { true } Adder.apply(a,b) { result = a + b }
 { true } Multiplier.apply(a,b) { result = a * b }

Both of these are legit, under the LSP, because false entails true (we're weakening the preconditions), and result = op(a, b) entails true (we're strengthening the postconditions). So these two subclasses implementing a particular function refine the spec of apply and are legitimate subclasses of AbstractBinaryFunction.

However, we can't prove things about clients that use an argument of type AbstractBinaryFunction. Suppose we have an implementation of fold like:

class MyCollection { 
    ...
    def fold(f : AbstractBinaryFunction, init) { 
        var acc = init;
        for (x in this.elements) {
           acc := f.apply(x, acc)  // PROBLEM HERE
        }
        return acc
    }
}

The trouble happens at // PROBLEM HERE. All we know about f is that it's a binary operation, and so to invoke it we need to meet its precondition false, which is impossible! But: obviously this is a perfectly good OO program. So the upshot is that we need to give specs for methods which are genuinely parameterized by the specs of their arguments, and can't just rely solely on semantic subtyping.
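For what it's worth, here is a rough sketch (my own notation, not anything from the thread) of what such a parameterized spec could look like, in the style of higher-order program logics: the caller supplies an invariant R, and fold's precondition nests a triple describing f:

 { R(init) /\ (forall x in this.elements. R(x)) /\
   (forall a, b. { R(a) /\ R(b) } f.apply(a, b) { R(result) }) }
 this.fold(f, init)
 { R(result) }

Instantiating R as "is an int" recovers the Adder/Multiplier cases above, while a stronger R lets a client prove something useful about the accumulated result.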

A different problem

The spec of AbstractBinaryFunction will be something like:

 { false } AbstractBinaryFunction.apply(a : int, b : int) { true }

I can't help but feel that this is the wrong place to capture the uninstantiability of an abstract class. You might want to make the precondition false for your "new" operator (or the equivalent thereof) when applied to such a class, but for the purposes of specifying "apply", "AbstractBinaryFunction" needs to be inhabited (i.e., "there exists an x : AbstractBinaryFunction" needs to be part of the precondition).
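To make that concrete, one way to render the suggestion (my sketch, not the poster's) is to push the false precondition onto instantiation and leave apply with a trivially satisfiable one:

 { false } new AbstractBinaryFunction() { true }
 { true }  AbstractBinaryFunction.apply(a : int, b : int) { true }

The subclass refinements of apply then go through exactly as before, and fold's call site only has to establish true, though, as noted above, the trivial postcondition still tells the client nothing useful about the result.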

This does hit on the ambiguity of class as "object template" vs. class as "type" or "classification", but I don't see why from a semantic specification point of view we can't sort that out, since one use is only relevant during object instantiation and the other use is only relevant during member dereference.

Making such a distinction is part of the competence of a proficient "speaker" of an OO language, and it seems to me any account of the semantics of the language or of a program in that language must take that into account.

Minor point

So the upshot is that we need to give specs for methods which are genuinely parameterized by the specs of their arguments, and can't just rely solely on semantic subtyping.

If the type of fold (as a method with R a class parameter) is:

fold: forall T. AbstractBinaryFunction[T,R,R] -> R -> R

You can view what's going on as subtyping, rather than type parameter passing. For any K, you have the relation:

forall T. AbstractBinaryFunction[T,R,R] -> R -> R <: AbstractBinaryFunction[K,R,R] -> R -> R

Under this view, at the time of invocation the caller notes that a subtyping relation holds for a particular K, as opposed to substituting K as a type parameter.

Edit: Fixed bungled direction of the subtyping

Further reading leads to further incredulity

Leo,

Following your suggestion to Paul, I went to Wegner's publication page and briefly read a couple more papers. The first was a survey of why interactive TMs are more powerful than regular TMs. The argument boils down to: a regular TM can't drive a car (or smell a rose or fall in love). It's true, but anyone who knows what they're doing already understands this.

Next, I saw this paper, Paraconsistency of Interactive Computation, which from the abstract claims to argue that:

We show that first-order logic cannot model interactive computation due to the incompleteness of interaction. We show that interactive computation is necessarily paraconsistent, able to model both a fact and its negation, due to the role of the world (environment) in determining the course of the computation.

I'm open to, but quite skeptical of, paraconsistent logics, so I thought this would be a great paper to read. I was quite disappointed by the paper. The argument in favor of paraconsistency was weak to the point of non-existence. I found the reasoning about the significance of completeness and countability to be muddled throughout and occasionally borderline wrong. Take this statement for example:

Godel showed that arithmetic over the integers cannot be completely formalized by logic, because not all its properties are syntactically expressible as theorems.

As if Godel's proof was a simple cardinality argument. So I'm still skeptical about this stuff. But maybe I haven't read the right paper (feel free to recommend one) or the stuff I've read has gone over my head.

if i grok that link

it sounds like functional style is easier to analyze when it is done in an OO style, which is ironic given this here page's thread.

Apologies

Yeah, upon re-reading it it puts me off, too, and I should apologize for that.

I find myself in a bit of a quandary, though, because in addition to having studied computer science, I've also studied physics, and part of my computer science studies included artificial intelligence. So the material on Penrose and his theories of brain function by principles unknown to physics is quite familiar to me. My copies of "The Emperor's New Mind" and "Shadows of the Mind" are sitting on my shelves at home. :-) The problem is that what Penrose is doing in these cases isn't physics; it's philosophy. His theories in this area enjoy no experimental support whatsoever. To be fair, that merely puts his theories in the same category as String Theory and M Theory, but then, String Theory and M Theory aren't physics either, for exactly the same reason.

So this is what I mean when I say things like "dangerous fatuous nonsense:" "Fatuous," because there's a spectacular amount of impressive verbiage involved. "Dangerous," because, lacking evidentiary support, the statements nevertheless claim the mantle of respectable scientific method and perpetuate the pseudo-science that the study of physics in particular has been for the past 30+ years. "Nonsense," because to the extent that falsifiable claims are made, they are, in fact, falsified by extant experimentation, which thus far overwhelmingly supports the validity of General Relativity, the Many-Histories Interpretation of the Quantum Mechanics, and the Extended Standard Model of Particle Theory.

If you want real cutting-edge research into the physical nature of computing, there's the entirety of David Deutsch's and others' work in Quantum Computing... all of which directly contradicts at least the claims in the slides in the original post. That is the source of my harsh reaction, which nevertheless was overly aggressive in tone. Hopefully this post serves as a more substantive, less invective, reaction.

Well, having scanned the link and your post

I couldn't agree more?

which thus far

which thus far overwhelmingly supports the validity of [...] the Many-Histories Interpretation of the Quantum Mechanics

I'm not aware of any evidential preference of many-histories over something like the Bohm Interpretation of QM. I'd appreciate any pointers, even privately if this is going too far OT.

Sorry

I don't understand your objections.

I'm also not familiar with David Deutsch. My friend Dan Carrero, who is a scholar-award grad student in physics at RPI, convinced me while I worked at BNL that I should stick with understanding computer science as best as possible and not dilute my focus with physics, because there are so many theories, all of them possessing some partial explanation of things, and I could literally spend a lifetime trying to understand them all and tease them apart. I'm not going to run to him to decode what you just said, either. If I come to understand some theory in physics, it will be by accident, e.g. through category theory or algebra. I am otherwise not going to be impressed by a vague reference to some cutting-edge quantum computing theory, "all of which directly contradicts at least the claims in the slides"; I am going to stick to what I understand and can wrap my head around. So I don't see this as more substantive (but I am not the only member of the audience). At least point to a specific example, so I know you are not pulling my nose and hiding it.

If you wanted to criticize Wegner, you could argue that some of his papers have been small increments (not much progress) and close to Least Publishable Units (LPUs) -- even Persistent Turing Machines is really Goldin's work -- which is still offensive, but probably not as bad as "dangerous fatuous nonsense". And many of those LPUs were published in places with a wider audience than COOPL researchers, his main target audience in the '80s and early '90s.

Can you simply carve out your objections in terms of why you don't like the fact Wegner wants his own formalization of OO? That was your initial objection. And at least that I can understand.

Here is another way to understand Wegner's argument for interaction:

Turing Machines cannot model commitment, and commitment is fundamental to object-oriented computing, since you cannot undo a commitment to, say, a strategy. This was his rationale in 1992, when he presented his argument for why the Japanese 5th Generation Computing Project could not possibly achieve its objective of logic programming == computing. [Note: Carl Hewitt appears to be saying that he knew this back in 1972, which sounds about right to me. Joe Goguen also appeared to know this independently in the 1970s, with his work on applying category theory to general systems theory and the modeling of interactions. Hewitt also had an important result in the Japanese 5GCP: "Guarded Horn clause languages: are they deductive and Logical?", which was Hewitt's initial foray into showing Kowalski that he was wrong that computation == controlled deduction, and into conjecturing that actors are more expressive interpreters of logic than logic itself. Hewitt has lately been inspired by John McCarthy, who has been interested in Elephant 2000, recasting Actors in terms of Speech Acts. So Wegner talking about his results from 1992 without mentioning Hewitt is a little weird. Vice versa.]

Sidenote: I do think that some of the slides in Wegner's powerpoint are ironic, such as the reference to Plato's Cave, where prisoners are shackled together and can only see shadows on the cave wall, and interpret the shadows based on observation rather than true knowledge of what creates them. The man who frees himself from the shackles and escapes the cave, only to be blinded by the glory of the sun, returns to the cave and its remaining shackled prisoners and tries to describe what he has seen, only to be murdered by them, because he can no longer interpret the shadows on the cave's wall. Sub-sidenote: Wegner himself suffered a serious brain injury in 1999, when he was hit by a bus! Prior to that, he had made many contributions to the theory of concurrent object-oriented languages and also collaborated with Cardelli on the famous '85 data abstraction paper.

What's the point?

Z-Bo: I'm also not familiar with David Deutsch... I'm not going to run to him to decode what you just said, either.

Then how do you expect (if you do expect) to be able to understand my critique? In case you should reconsider, his home page is here, and an excellent launching-off point summarizing my observations can be found in this interview.

Z-Bo: If I come to understand some theory in physics, it will be by accident e.g. through category theory or algebra. I am otherwise not going to be impressed by a vague reference to some cutting-edge quantum computing theory, "all of which directly contradicts at least the claims in the slides"; I am going to stick to what I understand and can wrap my head around.

Then your understanding is going to remain limited by your unwillingness to study.

Z-Bo: At least point to a specific example, so I know you are not pulling my nose and hiding it.

I already did: Wegner makes reference to Roger Penrose's unsupported claims about the way the brain functions. In doing so, he's left the field of science altogether, nevermind whatever you might think about formalizing object-orientation.

Z-Bo: Can you simply carve out your objections in terms of why you don't like the fact Wegner wants his own formalization of OO? That was your initial objection. And at least that I can understand.

Perhaps I misunderstood, but my objection was actually to what I took to be Wegner's claim that OO can't be formalized. As I wrote in More to Come..., "My immediate reaction is to the claims that neither OO nor 'interaction' can be formalized..." Assuming that I understood correctly, Wegner can then be dismissed out of hand, since both OO and interaction have been formalized—several times, in fact.

All of the work on logic programming, actors, etc. is very interesting and very important. I fully expect, eventually, to see a unification of their principles. In fact, part of my argument is precisely that it all leads, ultimately, to quantum computing and quantum information theory, which in turn ultimately leads to the many-histories interpretation of the quantum mechanics (more about that in another reply). But we won't get there by taking more multi-decade excursions into nonsense like Penrose's theories of brain function, String Theory, or M Theory.

Some feedback

Obviously, you have a lot going on in your mind right now, and it sounds impressive, but it is not important to me. Forgive me, but I have a much narrower focus on pure mathematics and applied math (to Comp Sci, Neurology and Economics).

Then how do you expect (if you do expect) to be able to understand my critique?

No offense, but I saw no (de-)constructive critique of Wegner's work in your replies, just a claim that his position is not falsifiable and then a wild tangent into string theory and M theory not being falsifiable (and since I am not a physicist, I have no idea what the scientific method of materials and experimentation requires for falsifying those theories). I also saw no (de-)constructive critique of Wegner's work in Hewitt's reply, so you are not alone in coming up short in my eyes. Hewitt gave me references to work of his I'm familiar with, but how that maps to a "misunderstanding" by Wegner is hard to tell.

Bottom line: Critiques must be constructive or deconstructive, and the more self-contained the better.

I already did: Wegner makes reference to Roger Penrose's unsupported claims about the way the brain functions.

So by making a small reference to Penrose's famous claim, he is instigating "dangerous fatuous nonsense"? Then what are Penrose's books STILL doing on your shelf? Just go burn them now :)

In doing so, he's left the field of science altogether, nevermind whatever you might think about formalizing object-orientation.

He clearly states in his slides that the interaction patterns among objects are nonformalizable from static descriptions alone. That is correct, because in order to formalize them you would need to know their representations and something about the order in which objects take messages off their queues or just flat-out ignore messages. He is not spitting on any object formalizations in stating this; e.g., this is completely orthogonal to issues such as self-application vs. self-reference. Note how on Slide 9 he mentions, under "dynamic interaction", the role of "scenarios, use cases". In OO, it is the responsibility of the developer to resolve collaborations by assigning external events to an existing transition. If at run-time there is no policy for handling an external event, then there is a sequencing bug in either the collaborator or the object receiving the message. This then opens up a very obvious question about how to design object-oriented languages: why aren't y'all directly supporting this already in y'all's programming languages?

Wegner argues "testing is the only game in town", which is true for the hardest case, where you cannot inspect representations at all and must operate under mutual suspicion of failure to do what the collaborator expects, e.g., things like Rinard's comfort zones. Note that if you have an object collaborate with another object, then you've changed the context in the system. This side-effect is the hardest case, because plugging different objects into the end of a collaboration is trivial only if you reduce the power of objects. Wegner is clearly stating that this means incomplete specification. For practical subsetting of this expressive power, think about how you might design virtual machines for safe object-orientation and what you can do to make the virtual machine play a stronger role in safety.

Note that I also stated that the difficulty in pushing functional programming beyond its current status is probably primarily methodological*. As H.S. Lahman once pointed out to me, software engineers need to learn that even with testing, you still need process to reach statistically insignificant error rates, e.g. Six Sigma.

* Although, whether monads are a good way to model all side-effects in functional programming is a language design question, not a process question.

Then your understanding is going to remain limited by your unwillingness to study.

It is possible that by studying physics I am doing more harm than good to myself. I read plenty already - listening to physicists and their wild theories at BNL for two summers convinced me there is no major gain in understanding things I care about that can come from studying physics, other than the practice of building physical models (and I get this practice from any applied math). Bottom line: I am not unwilling to study, but I am prudent in what I choose to learn. Understanding will always remain limited, insofar as the scarce time we have on this mortal coil. If I learn pure math related to physics, it will be an accident.

Thank You

That's a lot of thorough, thoughtful response to digest. Let me just take a few points to respond to in the time I have before me.

Z-Bo: No offense, but I saw no (de-)constructive critique of Wegner's work in your replies, just a claim that his position cannot be falsifiable and then a wild tangent to string theory and M theory not being falsifiable (and since I am not a physicist, I have no idea what the scientific method of materials and experimentation requires for falsifying those theories).

Fair enough. My contention is that Wegner's assertions regarding OO's formalizability, to the extent that they are related to the Penrose material he included in his slides, aren't even science, by virtue of not being falsifiable in the sense of Popperian scientific philosophy, with which I'm quite sure you're familiar. My comments about String Theory and M Theory are only intended to be additive with respect to my observation that Penrose isn't alone, or even particularly unusual, in pursuing non-science while claiming the mantle of science. In that sense, I believe it's analogous to your observation that neither Carl Hewitt nor I have offered (de)constructive arguments against Wegner. That is, my point is that I'm not singling Wegner out—quite the contrary.

Z-Bo: So by making a small reference to Penrose's famous claim, he is instigating "dangerous fatuous nonsense"?

This is a very good, important question that suggests I need to rely on more than the slides. With that said: I saw a path from Penrose to the closing slide's claim of a "new foundation" for CS, AI, etc. I have a very strong negative reaction to that, and it persists, partially because of 30+ years of abject nonsense in physics (so that can't possibly serve as a "new foundation for CS, AI...") and partially because it's proving difficult enough to get CS and AI practitioners to pursue the foundations we already have, which are rock solid theoretically and have proven eminently practical to the extent they have been pursued at all (here I'm thinking of Data Analysis: A Bayesian Tutorial and related work).

Z-Bo: Then what are Penrose's books STILL doing on your shelf? Just go burn them now :)

What kind of critic would I be if I weren't intimately familiar with, and failed to periodically revisit, the opposing camp? :-)

Z-Bo: He clearly states in his slides that the interaction patterns among objects are nonformalizable from static descriptions alone.

I question both whether this is as clear as you state and, more importantly, even if it is clear, whether it's true. It seems quite clear to me that it's not, at least not as stated. My assumption is that what's meant is that the embodiment of objects in a physical system gives rise to circumstances in which a physical event not accounted for in the formalization can lead to emergent behavior of the system that is inconsistent with the formalization. Assuming that this interpretation is fair, it seems uncontroversial, and in fact I would argue that it's trivial. I think this is supported by...

Z-Bo: Wegner argues "testing is the only game in town", which is true for the hardest case where you cannot inspect representations at all and must operate under mutual suspicion of failure to do what the collaborator expects...

If you can't assume that your objects do what they're supposed to do, then yes, all you can do is try them and (maybe) find out. :-)

Z-Bo: Software engineers need to learn that even with testing, you still need process to reach statistically insignificant error rates, e.g. Six Sigma.

We're in vehement agreement. The whole point of my argument, ultimately, is that if you can't make static logical assumptions about the behavior of your system, then you can do statistical modeling of them and even apply machine learning to the problem so that your system gets better and better at doing what it's supposed to do over time. My objection to the "can't formalize" argument is that the statistical modeling and machine learning itself is formal. If the counterargument is "but you can't do it statically," my response is that I never claimed that you could; I claim that "statically" is not part of Wegner's argument, and if it is, then the argument is trivial.

Z-Bo: Bottom line: I am not unwilling to study, but I am prudent in what I choose to learn. Understanding will always remain limited, insofar as the scarce time we have on this mortal coil. If I learn pure math related to physics, it will be an accident.

This is all certainly fair, so please let me attempt to refine my point. If, and only if, Wegner's arguments derive from theories that purport to be from physics, but have no experimental support, then, and only then, his arguments themselves admit no chain of inference from their premises to their conclusions. The follow-on argument then might be that that's not as damaging as it may seem, because the experimental support might yet be forthcoming. I introduced the work on the many-histories interpretation of the Quantum Mechanics and quantum computing generally to support my assertion that we are already developing quite good experimentally-supported models of information and computing that, to my reading at least, seem strongly to contradict the notion that CS and AI are amenable to new foundations, let alone in need of them.

With all of that said, it's a lot of weight to put on a set of slides, and I do intend to give Wegner a more thorough reading than I've had the opportunity to at this point.

Thanks again for the thoughtful dialogue!

I find it odd that

I find it odd that mathematicians who make progress in their field by puzzling over "conjectures" are seen as doing just that, while physicists who work on physics conjectures are considered "crackpots".

I believe this is a more recent trend. When I studied math+phys in the 1990s the aggression level was lower.

Conjecture vs. Theory

There's nothing at all wrong with making conjectures. It's just that conjectures don't become theories until and unless they acquire a body of falsifiable predictions, i.e. experiments by which they can, at least in principle, be refuted.

For example, Einstein's General Relativity was, upon publication, a theory: it included hypotheses that were falsifiable in principle. It just took another 30 years or so for it to become falsifiable in practice.

Roger Penrose's theories of brain function, String Theory, and M Theory don't fall into the same category: they offer no falsifiable hypotheses. In the cases of String and M Theory, they don't even do so in principle. This makes them philosophical, rather than scientific, theories. My best understanding of Penrose's theories is that they fall into the same category, although I stopped following Penrose some time ago, so it's somewhat more likely that things could have evolved there.

In the mid-1990s it still seemed possible that String Theory could redeem itself and become an actual science. Instead, things got farther from any actual science with the introduction of M Theory. There's still no experimental support for either. In the meantime, the successes of Quantum Computing began to mount, and those experiments in particular lent support to the Many Histories Interpretation of the Quantum Mechanics.

Unfalsifiable?

those experiments in particular lent support to the Many Histories Interpretation of the Quantum Mechanics.

Um. Perhaps I'm missing something. It seems as if an interpretation of QM would be inherently non-falsifiable (unlike the theory of which it is an interpretation).

Testing many universes

Just arrange an experiment where a certain quantum measurement results in your death, and then perform this experiment until you're satisfied with your confidence in the theory.

Darwinian physics

I see. Over time, the fraction of the remaining population of physicists with confidence in the theory should approach 100%.

Good Point

You're in good company; at least one physicist that I know of has made the observation that we should quit calling it "the many-histories interpretation" because it's the only interpretation that has experimental support.

Note that there also exist

Note that there also exist mathematical conjectures that are not falsifiable.

Well, when a research

Well, when a research program becomes complex and sprawling, falsificationism can easily be circumvented: derive some testable predictions from your theory, and when they turn out to be wrong, that was just a blind alley and one moves on to the other 10,000 possible branches, or to something completely different. That's how people work on conjectures, with the exception that mathematicians don't need empirical data for "branch prediction".

This isn't philosophical work (I don't know what you think philosophy is?), and it isn't actually mathematics either, because the formalism only has to be "physicist-correct", i.e. within the bounds of realist assumptions; so it must be "crackpottery" or pseudo-science. I don't want to defend string theory, which doesn't look very promising any more today, or any other attempt at quantum gravitation, but I'd say this is sloppy philosophy.

From the paper: Many signs

From the paper:

  • Many signs that Turing machines (TMs) cannot model OOP
      ◦ knowing what OO is: not possible by algorithmic models
      ◦ OO can be defined only by interactive models of computation
  • Goal:
      ◦ negative: show OOP is not expressible by, reducible to TMs
      ◦ positive: unifying model for OOP, AI, networks, graphics, HCI

What does a Turing machine have to do with Types, OO or otherwise? A TM is purely operationally defined.

That being said, since a TM and the untyped lambda calculus are equivalent in computational power, and the lambda calculus can encode data quite easily as tuples, you can just pass around both the state and the implementation of the "virtual functions" that your object will use, giving you late binding. It follows that a TM is capable of this as well.
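(A minimal sketch of that encoding, in Scala rather than raw lambda calculus, and entirely my own illustration rather than anything from the post: an "object" is just a value bundling its state with the functions that interpret it, so the client dispatches through whatever implementation it was handed, i.e. late binding.)

// An "object" is a pair of state plus the "virtual functions" that act on it.
case class Counter(state: Int, incr: Int => Int, show: Int => String)

// Client code is written once against the record of functions...
def bump(c: Counter): Counter = c.copy(state = c.incr(c.state))
def render(c: Counter): String = c.show(c.state)

// ...and behaves differently depending on which implementation is passed in.
val byOne = Counter(0, _ + 1, n => s"count=$n")
val byTwo = Counter(0, _ + 2, n => s"<<$n>>")

render(bump(byOne)) // "count=1"
render(bump(byTwo)) // "<<2>>"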

There is more to computation than (recursive) functions

Although the work of Goldin and Wegner may seem superficially similar, it unfortunately failed to comprehend previous publications on the Actor Model, including [Hewitt, Bishop and Steiger 1973], [Hewitt 1977], and [Hewitt and Agha 1988]. For further discussion, please see Common sense for concurrency and inconsistency tolerance using Direct Logic™ and ActorScript™: Industrial strength integration of local and nonlocal concurrency for Client-cloud Computing.
  • Carl Hewitt, Peter Bishop and Richard Steiger. “A Universal Modular Actor Formalism for Artificial Intelligence” IJCAI 1973.
  • Carl Hewitt. “Viewing Control Structures as Patterns of Passing Messages” Journal of Artificial Intelligence. June 1977.
  • Carl Hewitt and Gul Agha. “Guarded Horn clause languages: are they deductive and Logical?” International Conference on Fifth Generation Computer Systems. Ohmsha 1988.

Interesting

The material on Direct Logic is fascinating, and I'm intrigued by ActorScript as well. But I can't seem to find an implementation anywhere. Is there one?

Question

In Direct Logic, given "the number of leaves on a clover is 3" and "the number of leaves on a clover is 4", can you not conclude 3=4? And doesn't that deduction result in "explosion" (at least in the theory of clovers)?

Direct Logic doesn't seem to capture this aspect of "common sense": stop when you reach an obvious contradiction. Hopefully a software engineer who found that the number of leaves on a clover was documented as both 3 and 4 wouldn't just plug ahead, thrilled at how easy it had become to prove new theorems. Hopefully, they would back up and try to figure out which assumption was incorrect, and then fix the documentation.

In practice, inconsistencies are not derived from flaws in logic or mathematics, but arise because someone somewhere entered a fact wrong. It seems to me any future approach to "scaling logic in the cloud" is going to involve automated help in detecting inconsistencies and rooting them out.

And there is more to large software systems than computations

Indeed, the Direct Logic research programme is very ambitious and makes strong sense for the long term, imho, at least as far as I believe I have understood it. I really hope it will soon receive much more attention and support, which its papers already largely deserve.

But in a related field of interest, regarding the formalizable vs. non-formalizable aspects of software, I'd venture that there is maybe another domain, a sort of dual of Direct Logic and its concern with the consistency-"awareness" of formal methods/logic systems.

As I see it, as much as Direct Logic could shed very useful light on how to acknowledge the pragmatic need to redefine the scope of consistency-based ontologies, for the sake of (still) being able to predict (*) runtime behaviors, I believe there are also almost-virgin fields to explore: at the human-initiated construction time (or, if you wish, "modeling-processor negotiation time") of "large software systems" (whatever you decide "large" means for you/your business).

I am speaking of the time after design time but before the well-known, so-called "runtime", where computation and the predictability of computer/Turing-machine-generated "values" (in whatever form they are serialized) become more and more prevalent, and which ends up being the one time so many researchers are still obsessed with (for very good reasons, I readily admit).

Of course, if one considers generated code, seen as yet another computed artifact of an outer processor's context, to be just another "value" among others, then you have basically shifted back to the points and concerns that Direct Logic tries to explore in the first place.

But if, conversely, you are willing to accept the idea that any genuinely new formalization will be given a human-devised meaning a priori, with more or less ad hoc laws of usage for a given stage/context of the rest of the state of the art (in modeling, languages, and systems execution), then you may prefer to consider intent-purpose relations instead for the formalisms we devise as humans -- knowing we may have to combine their processors' "intended" semantics, or make them collaborate, with others that are not even known yet (and whose possible future worlds of "purposes" are not known either).

My current feeling is that these intent-acknowledging, intent-equipped formalisms are, unlike in computability-focused topics, more likely related to the need to redefine the scope of the "completeness side of things" -- at the very moment you introduce these new intent-purpose pairs, and while being fully aware that you are introducing them as such. By "intent" one could also read "announced capabilities", and by "purpose", "announced usage context" for an intent already acknowledged and accepted. (Still very much... TBD, here. Obviously.)

I believe this is much less ambitious than the Direct Logic programme, as formalized intent-purpose pairs will always depend mostly on the freedom you accept, or can afford, in defining the "granularity" of their interpretation by the platform (in between design time, done by humans on one end, and execution time, done by computers on the other), but it would still be somewhat useful to investigate eventually (e.g., as in what I had previously called for thoughts about).

But of course, it is then about finding means of reifying these intent-purpose pairs at some point and making them the official/standardized "glue" between your languages (for the machine to bridge them at the level of their processors, between design time and runtime, in what I called "construction time"; i.e., "a la makefile", if you absolutely want the compile+link metaphor, but at a higher abstraction level, since it would eventually be made available as a standard, cross-ontology service of the modeling platform itself). I don't pretend to have definitive clues that it will be much easier to implement, per se, than the tools the Direct Logic programme hopes for, but I think it is at least worth beginning to reflect on it.

But, well, yes, that is still probably 90+% speculative. Agreed. Far less firmly rooted in any rationale worth the name than DL's proposals, anyway.

My bottom line, to get back to the actual question of this thread, is that, like Paul, I find very suspect, if not clumsily specious, the kind of situation where one is asked to drop a well-known, well-proven theory for the benefit of a totally new one which, surprise, ALSO ends up discouraging you from any form of elaborated formalization that could grow, for you to build upon, retest, refine, etc.

If that is not a "depressing" conclusion (about formalization)... then I'm not sure how to qualify a stance where you want to talk about something but surrender, a priori, on phrasing it structurally enough to communicate your ideas about it to others. (**)

(*) Or at least organize them enough to keep them "loose but still sensical", as I intuit part of the DL programme to be doing; but I may be wrong -- please point out what I totally missed or misunderstood, thank you.

(**) Attempting a rough parallel: even E=mc^2 did not discard any of Newton's laws; it only put them in a different, broader perspective (one they were not yet in, in Newton's time). More importantly, E=mc^2 is not the simple formula it is famous for being. It is the catchy tag made up for the public, the tip of an iceberg of many other, pretty "hairy" differential formulas -- but still the same differential calculus, just with more and richer formulas. The real, genuine value of Einstein's theories lies not so much in the form of his results as in the new hypotheses and constants/axioms themselves, expressed in the legacy language, about his object of study. And then, yes, those same new axioms yielded unexpected and interesting new "truth" formulas, against which easily reproducible experiments, with easily observable results, concluded with more (and more often) satisfaction than Newton's "truth". Until... well, the successor to Einstein's arrives one day, if ever.

Goldin and Wegner: clarification request

Could you clarify what Goldin and Wegner missed? I understand your criticisms of Petri nets and also of Kowalski's attempts at coming up with a logical basis for Planner (e.g., SL Resolution and SLD Resolution), and how you have compared these to the actors model. However, I am unaware of any critique you've made of Goldin and Wegner. I generally try to absorb your programming language critiques, as I think historically you've been dead-on accurate and fair.

I am not going to conjecture what you meant by "[Wegner] failed to comprehend [the] Actors Model". Just please tell me. I can guess, because I've read all those papers you've cited and also have copies of Clinger's and Agha's respective Ph.D. theses. But guessing stinks, especially when I've grabbed your attention.

Let me also further qualify my argument that practical motivation is limited here: I've personally always wondered why you've never mentioned the object capability model and its connection to the actors model; disappearing actors is similar to revocation of a capability. Mark Miller has even suggested you gave the clearest description of object capabilities in your explanation of actors:

Hewitt's original Actors still seem to me like the clearest formal statement of the capability computation model. But, as far as I remember, there are no references between his work of the 70s and the capability OS work happening at the same time. And I don't think Hewitt ever says "capability". It looks to me like a completely independent discovery. From Hewitt's writings of the time, he clearly saw the security implications of the formalism, and may have been the first to realize that it could be a basis for transparent secure distributed computing without a shared TCB. Unfortunately, Hewitt is not a great communicator, and, being in the MIT AI culture, he eventually stopped caring about security.

Postscript: I think JK Iliffe was probably the first to realize, in 1967, that transparent secure distributed computing without a shared TCB was best done through object-segmentation of representations. This can be debated, since he did not directly describe it so concisely.

Pnueli, Milner

I was surprised to see Turing Award automata guru Pnueli publish something with a title like "reactive systems cannot be modeled by algorithms?"

I can't find the links to his or Milner's work. Are they online?

Yes

Amir Pnueli.

Here is a good overview of why he is an important figure in Computer Science history: Amir Pnueli and the Dawn of Hybrid Systems

I think, it's was by now...

Unfortunately...