On the Unusual Effectiveness of Logic in Computer Science

In 2001, Moshe Vardi organised a workshop devoted to the topic of The Unusual Effectiveness of Logic in Computer Science, with papers covering such topics as "Logic as the calculus of computer science" (Vardi) and "Descriptive complexity" (Immerman). Later, a gang consisting of Halpern, Harper, Immerman, Kolaitis, Vardi, and Vianu published a likewise-named 24-page article in the July 2002 issue of the Bulletin of Symbolic Logic.

The title is derived from Wigner's famous article on The Unreasonable Effectiveness of Mathematics in the Natural Sciences, which raised and attempted to answer an important question: why should mathematics have been so useful to natural scientists? With respect to logic, my answer for the effectiveness of logic in computer science is that, while computation is a physical phenomenon, it is a phenomenon best understood via powerful abstractions, and the most powerful abstractions we have at the moment come from mathematical logic, because of the fundamental relationship of Turing completeness to Gödelian incompleteness.

Links derived from Richard Zach's Motivating Intro Logic for Philosophy majors (and others).

More links...

C-H?

The Vardi paper is nice, and I use it frequently when trying to add more logic courses.

But really, how can you motivate logic, without mentioning Curry-Howard? ;-)

Models-of-computation-centrism versus language-centrism

I'd say that the formulae-as-types correspondence is important to the language-centric view of computation, but not so important to the models-of-computation-centric view of theoretical computer science, which has the upper hand in theory and is favoured in Vardi's view. The latter view has the more compelling story to tell about why the science of computation needs powerful abstractions to yield its secrets, IMO.

Top dog?

The real question is whether you are trying to motivate CS people to study logic, or motivate logic people to study CS...

I wonder...

I guess the aim is both. I think logicians have been slower to appreciate the value of the study of computation for their discipline than computer scientists have been to see that logic is relevant to what they are up to. I have the advantage that I teach logic to people who have enrolled in an MSc programme in "Computational Logic", so I have the luxury of preaching to the converted...

That's what I thought

I think logicians have been slower to appreciate the value of the study of computation for their discipline

And here C-H would be a useful argument, no?

Logic was first

There's definitely a two-way transfer of knowledge via C-H, though my impression (which could simply be because I "see" more of the CS side) is that it's biased toward the CS side (i.e. "we" are getting more). I think part of this is simply because logic has been around longer than computer science: a large body of logic existed before the Curry-Howard correspondence was really appreciated. There are plenty of current research papers still "CH-ing" logics, and, say, Kolmogorov's embedding of classical propositional logic into intuitionistic propositional logic should look very familiar (to a computer scientist, it is essentially a continuation-passing-style translation).

That said, logicians no doubt still have a lot to gain from CH.
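To make the correspondence concrete, here is a small illustrative sketch in TypeScript (all names here are my own, not from any of the papers mentioned). Under Curry-Howard, implication corresponds to function types, so a proof of A → (B → A) is just the K combinator; and while excluded middle has no intuitionistic proof term, Kolmogorov-style double-negation translation gives it one, and the term reads exactly like continuation-passing code:

```typescript
// Curry-Howard sketch: propositions as types, proofs as programs.
// "never" plays the role of falsum, so ¬A is A → never.
type Not<A> = (a: A) => never;
type Either<A, B> = { tag: "left"; value: A } | { tag: "right"; value: B };

// A proof of A → (B → A): the K combinator.
const k = <A, B>(x: A) => (_: B): A => x;

// Excluded middle (A ∨ ¬A) has no intuitionistic proof term, but its
// double-negation translation ¬¬(A ∨ ¬A) does. Note the CPS flavour:
// the "refutation" f is invoked like a continuation.
const dnEM = <A>(f: Not<Either<A, Not<A>>>): never =>
  f({ tag: "right", value: (x: A) => f({ tag: "left", value: x }) });
```

Nothing here is specific to TypeScript; the same terms typecheck in Haskell or any language with parametric polymorphism and an empty type.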

Some links

  1. There's a Wikipedia article on Wigner's paper.

  2. There was a heated discussion on Wigner's paper on the Foundations of Mathematics mailing list which started with this excellent mail from Solomon Feferman. I'll link to more in due course.

some more links

Halpern et al.'s paper also mentions The Unreasonable Effectiveness of Mathematics by R. W. Hamming. (Another copy of Hamming's paper is hosted by the [now defunct] MATC project at Dartmouth alongside Wigner's article.)

I'm more amazed by the unreasonable...

...effectiveness of category theory in logic, computer science, physics and mathematics. It's astounding just how many constructions turn out to be exactly the same thing when looked at from a suitable level of abstraction. Years ago I studied algebraic topology (whence, I believe, category theory was originally spawned), and it's fascinating to see many of the same tools (some of them seemingly tailored just for topology) hard at work in completely different areas like computer science and logic.

un-astounding

I disagree -- that's precisely what's un-astounding about category theory, or any suitably abstract endeavor. Abstraction is about information hiding, or erasing details. When you erase enough details, any two things look the same.

The moon is a pizza

When you erase enough details, any two things look the same.

Hmmm. But it makes a big difference to the usefulness of this sameness exactly what details you erase and what you keep. As I've said before, in a totally different context, not all abstractions are created equal.

I could say that you and a jack-o'-lantern are the same because you both have faces, but I would hope that this analogy tells me very little about you that is of interest.

CT, on the other hand, shows that the concept of compositional mappings (a very specific abstraction) has deep ramifications throughout mathematics.

It isn't necessarily a shock that such abstractions exist for a discipline like mathematics, but it is far from the case that some other random abstraction could do just as well.
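As one tiny illustration of what "compositional mappings" buys you (a sketch; `Maybe` and `mapMaybe` are names I made up, not library API), the same structure-preserving map interface, subject to the same laws, fits containers as different as built-in arrays and a hand-rolled optional type:

```typescript
// A structure-preserving map for an optional type, mirroring Array.map.
type Maybe<A> = { tag: "nothing" } | { tag: "just"; value: A };

const mapMaybe = <A, B>(f: (a: A) => B, m: Maybe<A>): Maybe<B> =>
  m.tag === "just" ? { tag: "just", value: f(m.value) } : { tag: "nothing" };

// Both Array.map and mapMaybe obey the functor laws:
//   map(id)    = id
//   map(g ∘ f) = map(g) ∘ map(f)
// and it is these laws, not the particular containers, that category
// theory studies.
```

The point is that "functor" is a very specific abstraction (it must respect identity and composition), which is why instances of it in unrelated fields share non-trivial consequences.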

And I disagree too!

I'm not surprised that the strategy of erasing details works; after all, it's trivial that things will look more alike if you erase details. What I'm surprised by is that after erasing enough details to make algebraic topology look like logic you don't end up with a trivial theory. You still have a rich and complex theory, and that theory is useful for working in other fields too.

A priori, logic and topology look quite different, and, in fact, objects and arrows in category theory and logic represent very different beasts. A sheaf in topology looks nothing like a sheaf in intuitionistic logic, and yet they are both sheaves and are both defined by the same non-trivial categorical definition. This is quite far from things looking similar because they've been smudged into each other a bit. This is a deep commonality between two highly complex structures.

re: And I disagree too!

What I'm surprised by is that after erasing enough details to make algebraic topology look like logic you don't end up with a trivial theory.

A quote from G. Polya along the same lines:

There are two kinds of generalizations. One is cheap and the other is valuable. It is easy to generalize by diluting a little idea with a big terminology. It is much more difficult to prepare a refined and condensed extract from several good ingredients.

I can't find the original quote at the moment. I seem to vaguely remember that Polya mentions group theory as an example of "valuable generalization" and refrains from mentioning any specific examples of cheap generalization.

Another quote on shallow generalizations:

Hermann Weyl, one of the foremost mathematicians of this century, expressed in 1951 his contempt for pointless generalizations, asserting: "Our mathematics of the last few decades has wallowed in generalities and formalizations." Another authority, George Polya, in his Mathematics and Plausible Reasoning, supported this condemnation with the remark that shallow, cheap generalizations are more fashionable nowadays.

Got a hammer but lost my nail

I am sorry; category theorists, please do whatever you want, but I definitely know it ain't my thing.

A sheaf in topology looks nothing like a sheaf in intuitionistic logic, and yet they are both sheaves and are both defined by the same non-trivial categorical definition.

Hmm, so we expressed the same thing in two formalisms and can do so in a third as well. Exactly what did we gain?

This is quite far from things looking similar because they've been smudged into each other a bit. This is a deep commonality between two highly complex structures.

Which is exposed by a third non-trivial complex structure in a third non-trivial formalism?

I remember I stopped dead on category theory when (a) equality symbols were used in definitions of functors but no one could explain what that equality symbol actually meant, (b) in a number of articles mathematical equality was often equated with computational equality in an indifferent and incorrect manner, and (c) PL type theory drifted away from studying usable types for programmers to, well, unusable types for everyone. ;-)

CT always reminded me of a not-too-nice but famous joke: "What is the difference between a computer scientist and a train?" (The joke is on me, since this is actually not meant as a rant.)

Let's get hammered

I definitely know it ain't my thing

Then why comment on it? ;-)

expressed the same thing in two formalisms and can do that also in a third. Exactly what did we gain?

To me this is the same as saying "Well, we already know how to add and multiply integers, what good is group theory?".

Having multiple equivalent formalizations of the same mathematical objects is often quite useful. (I could generalize to multiple different perspectives on ANY domain of knowledge.)

(a) ...noone could explain what that equality symbol actually meant... (b) ...mathematical equality was often equaled with computational equality in an indifferent and incorrect manner

You will have to give examples. Are you sure YOU know what equality is? ;-)

famous joke "What is the difference between a computer scientist and a train?"

So famous I've never heard it, and all my Google powers could not find it. ;-)

Don't leave me hanging, what's the punchline?

Ah well, bad hair day

Damned, actually typed in a reply but forgot to submit.

Then why comment on it? ;-)

Uh, well I had that one coming. See title ;-)

To me this is the same as saying "Well, we already know how to add and multiply integers, what good is group theory?".

Ok, but in type theory I miss the added value of CT.

You will have to give examples. Are you sure YOU know what equality is? ;-)

I know when I am using intensional, extensional, or some informal (close-to-Leibniz thingy) equality.

Famous joke among students I guess.

"What is the difference between a computer scientist and a train?"

A train stops when it loses track.

Hair of the dog

Ok, but in type theory I miss the added value of CT.

It provides a nice formalism to investigate type theory, and known results from CT can be used to illuminate type theoretic issues. What's to miss?

I know when I am using intensional, extensional, or some informal -close to Leibnitz thingy- equality

To make these distinctions you need to have a pre-existing model that you are working with. Typically, CT equality works across different models of equality, so it's more general.

If you don't like that, just plug in a convenient or obvious model for the category in question and use that; doesn't really hurt anything most of the time.

A train stops when it loses track.

Groan. Guess I shouldn't have asked. ;-)
(Not sure it really says anything particular about CS though; could plug in any discipline and it still would be equally unfunny. ;-) )

Mochito!

Hehe, nah, probably it just boils down to taste ;-). I don't know: I didn't find the examples I saw illuminating. Seems I am more the type to enjoy reading about, whatever, Winograd's version of Strassen's matrix multiplication.

To make these distinctions you need to have a pre-existing model that you are working with. Typically, CT equality works across different models of equality, so it's more general.

Which makes it also very easy to make mistakes (think of the usual mistakes in denotational semantics that come from confusing extensional and intensional properties of statements in PLs); but, ok, these errors are made apart from CT too, so, well, you got a point there.

(Not sure it really says anything particular about CS though; could plug in any discipline and it still would be equally unfunny. ;-) )

Yeah, actually, I think it was once used as an opening statement in a speech on another topic to stir up the audience.

I am going for my drink now ;-)

Exactly what did we gain?

Understanding. If you use category theory in only one way there isn't much to gain. But every time you move to a new domain and don't have to learn new notation because you've used it already in another domain it makes life easier. I guess I'm also biased as I first met categories in a field where the payoff from using them was pretty large: topology.

If someone is using equality symbols without being able to explain what they mean then it isn't a problem with category theory, it's a problem with people not knowing what they are doing. This is common in many fields of study :-)