What is the dual of { } ?

I have a question for the logicians and category/set theorists among LtU readers.

Background:

I'm currently dealing with a simple matching algorithm for strings. One can create a search pattern either by building up a set of admissible characters {a, b, c, ... } from the empty set {}, or by starting from a pattern that matches basically any character (like the . operator in regexps) and withdrawing a set of characters that shall not be matched:

ANY - {a, b, c, ... }

For disambiguation purposes I'm interested in making all those sets disjoint. That characters are being matched is not really significant to the discussion; it's just an implementation detail.

The question is about the status of ANY. Full ZF set theory seems to be far too much specification. It entirely suffices that each of those sets S can be finitely constructed, or finitely de-constructed as ANY - S. In one case one starts with a "set of no entities", in the other with a "set of all entities". The latter clearly violates the principle that a set is constructed after its elements. I know ANY isn't entirely insane, because I have a working implementation with the usual relations like union, difference, subset and intersection.
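
For concreteness, here is a sketch of such a finite/co-finite representation. This is my own illustrative Python, not the poster's actual code: a set is either a finite collection of listed elements, or ANY minus such a collection, and the usual relations stay computable.

```python
# Hypothetical sketch: a set is either finite ("exactly these elements")
# or co-finite ("ANY minus these elements").

class CharSet:
    def __init__(self, elems, cofinite=False):
        self.elems = frozenset(elems)  # the finitely many listed elements
        self.cofinite = cofinite       # True means "ANY - elems"

    def __contains__(self, x):
        return (x in self.elems) != self.cofinite

    def is_empty(self):
        return not self.cofinite and not self.elems

    def complement(self):
        return CharSet(self.elems, not self.cofinite)

    def union(self, other):
        if not self.cofinite and not other.cofinite:
            return CharSet(self.elems | other.elems)
        if self.cofinite and other.cofinite:
            # (ANY - A) | (ANY - B) = ANY - (A & B)
            return CharSet(self.elems & other.elems, cofinite=True)
        fin, cof = (other, self) if self.cofinite else (self, other)
        # F | (ANY - C) = ANY - (C - F)
        return CharSet(cof.elems - fin.elems, cofinite=True)

    def intersection(self, other):
        # De Morgan: A & B = complement(complement(A) | complement(B))
        return self.complement().union(other.complement()).complement()

    def difference(self, other):
        return self.intersection(other.complement())

    def issubset(self, other):
        return self.difference(other).is_empty()

EMPTY = CharSet([])               # {}
ANY = CharSet([], cofinite=True)  # the "dual" of {}
```

Every operation closes over this two-case representation, so ANY never has to be enumerated.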

What I'd like to know is whether there is a more in-depth treatment of ANY in the literature. Given the above motivation, I expect more interest from computing scientists than from classical mathematicians.

Equivalent to bot

Assuming I understand you correctly, Any is analogous to Bot in certain type systems.

Pragmatically though, I'd look into making Any an alias for the set containing all possible values rather than a special token that is a superset of all sets.

Any

You can model these matching algorithms as computable functions on strings which return TRUE or FALSE. Then "none" is the function that returns FALSE regardless of its input, and "any" is the function that returns TRUE regardless of its input.
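
A minimal sketch of this view in Python (the combinator names are mine): a matcher is a predicate on strings, and the set operations become boolean combinations of predicates.

```python
# Matchers as predicates on strings: "none" and "any_" are the constant
# predicates, and the set operations are boolean combinators.

def none(s):
    return False  # matches no string: plays the role of {}

def any_(s):
    return True   # matches every string: plays the role of ANY

def union(p, q):
    return lambda s: p(s) or q(s)

def intersection(p, q):
    return lambda s: p(s) and q(s)

def difference(p, q):
    return lambda s: p(s) and not q(s)

# ANY - {a, b, c} from the original post:
b = difference(any_, lambda s: s in {"a", "b", "c"})
```

Note that ANY is no more problematic here than {}: both are just constant functions.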

A "set of all sets" isn't paradoxical in itself. The paradoxes come from adding such a set to a set theory such as ZF which allows unstratified comprehension over sets. Alternate set theories like Quine's NF allow a "set of all sets", but restrict comprehension so that paradoxes can't arise.

If I understand...

You can model matching rules as sets of strings, with ANY being the set of all strings and {} being the empty set. In this case, your universe is the powerset of the set of all strings. There's no theoretical problem with this.

It sounds like maybe your concern is that such uncountable sets require stronger set theory than you think is warranted. In that case, do what Tim suggests and take your universe to be predicate expressions rather than sets. If desired, you can work in a quotient space of expressions that compute equal sets. In this way you can keep your model countable.

Regular languages...

...are closed under intersection, union, sequential composition, complement, and Kleene closure.

Each of intersection, union, and sequential composition has a unit. The unit of intersection is the wildcard that matches *all* strings. The unit of union is the void regexp that matches *no* strings. The unit of sequential composition is the regexp matching only the empty string.
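
As a quick sanity check, the three units can be written down in Python's re module (using fullmatch so a pattern stands for a whole language; the never-matching pattern is expressed as an empty negative lookahead, which can never succeed):

```python
import re

top  = r'(?s).*'  # unit of intersection: matches all strings
void = r'(?!)'    # unit of union: matches no string at all
eps  = r''        # unit of sequential composition: matches only ""
```

The unit laws then read, e.g., void|p accepts exactly the strings p accepts, and eps followed by p is just p.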

Is this what you want?

Bertrand Russell invented type systems

to get around his paradox, and when later applied to programming languages they proved very good at eliminating it from those too.

So if you're happy to use a typed model where a Set of Sets is a different type of object (in a formal sense) from a Set of (say) Characters, then it becomes easier to construct an Any object. E.g., as someone suggests, if you implement sets as functions to Bool:

type Set a = a -> Bool    -- a set represented by its characteristic function

anySet :: forall a. Set a -- a polymorphic recipe: one "Any" per element type a
anySet _ = True           -- every value of that type is a member

Note that although the type polymorphism makes this look a bit like a universal set of all sets, really it's more of a recipe that, if you choose a type a, will give you an Any object for sets of that type. If you try something like the following:

setBelongsToSelf x = x x

you get

Type error: Russell rises from the grave to object
(OK actually: "Occurs check: cannot construct the infinite type: t = t -> t1"...:-)

On the other hand if you want an untyped model where, say, a Set of Sets exists in the same universe as a Set, or a function exists in the same universe as its domain, then things get complicated. If you restrict the notion of 'set' or 'function' to 'computable set' or 'computable function', though, you find you can avoid paradoxes.

As I remember (from a long time ago...), domain theory is the place to look if you're interested in this; it has various clever techniques for constructing these kinds of models for the lambda calculus.

Well, I'm not quite happy

Well, I'm not quite happy with the terminology.

ANY is not constructed in any way; it is primitive, just like {}.

From a mathematical point of view it is rather unsophisticated, and one doesn't have to go through the history of Russell-paradox avoidance and language surgery. It is not entirely necessary to fit it into an outdated research program.

Usually one doesn't talk about constructing new sets from {} using the union operator

A = {} ∪ {a,b,c,...}

because {} is just the union identity and doesn't add anything to our understanding of A.

It makes sense however to construct new sets from ANY using subtraction

B = ANY - {a,b,c,...}

Talking about characters and such serves motivational purposes, because I can't see another use case right now.

I'd rather compare this with non-standard analysis. In NSA a new symbol ϵ is introduced as a positive number that is smaller than any positive real. Even if one rejects this meaning as nonsensical, it provides a good base for our intuition.

OK... well I'm still not

OK... well I'm still not sure quite what you're asking, or how much you already know.

If you just want to write a language which includes such a construct, you should find it easy.

It seemed you were concerned about ZF and paradoxes though.

If you want a simpler formalism to base this kind of semantics on, there are of course lattices, Boolean algebras, etc. Mathematicians have been talking about a Top object in these contexts for a long while. You don't need a whole set theory to model them in, far from it, nor anything like the kind of cleverness required to introduce non-standard analysis.

About ZF,

there's virtually no reason I can think of to base programming language semantics on the ZF formulation of set theory.

Constructive theories are a much more natural fit when talking about computation, see eg:
http://en.wikipedia.org/wiki/Constructivism_(mathematics)
http://en.wikipedia.org/wiki/Constructive_type_theory

That said, I'm now wondering if anyone knows of any weird and wonderful programming languages with semantics based on ZF...

With diffidence...

there's virtually no reason I can think of to base programming language semantics on the ZF formulation of set theory. ... Constructive theories are a much more natural fit when talking about computation

I can see why you'd not like ZF, as it doesn't embed an easy to work with model of computation. But as for reasons to work with a set based formalism rather than some constructive type theory:

1. The law of the excluded middle. It works, and things proven by contradiction tend to be true. There are occasionally things that are easier proved by contradiction.

2. Types as sets. It seems very appealing to be able to reason about "x:T" as a simple boolean proposition rather than having this as a special syntactic form.

3. Natural subtyping. Related to the previous, it's nice to be able to reason about subtypes (particularly refinement types) "for free".

4. Intuitive denotations. Maybe it's my math background, but the semantic underpinnings of a set based approach seem more intuitive. For this reason, the correctness of ZF seems more intuitive. If CIC didn't have a consistency proof grounded in set theory, I'm not sure I'd feel comfortable believing it.

I can see how to develop a nice programming language starting from a set-based formalism, but I confess I'm not an expert on constructive logic or the calculus of constructions. Maybe those are all foolish reasons not to like constructive type theories?

This is a theological argument about orthogonal issues

There are intuitionistic and classical set theories, as well as intuitionistic and classical higher-order logics. Neither classical nor constructive logics have a monopoly on useful models of computation.

The canonical example is adding call/cc to the simply typed lambda calculus. This creates an axiomatization of classical logic from constructive logic.

The cool thing in this area is Laurent's LLP, which harmoniously integrates classical and constructive logic into a cohesive whole. And of course, everybody seems to be buzzing about Call-By-Push-Value these days...

I can sympathize. Due to some of the same background, I'm much more comfortable with (hyper-)sets than HOL, and for what little it's actually saying, I am more comfortable with HOL than any abstract kind of category theory. Although I am often amused by light-hearted trash-talking of category theory, I am actually rather agnostic about the differences at this point. I don't know enough to competently judge, and I am more interested in going deeper into HOL than ZF.

(Hmm... there is a humor page out there to the effect of 25 generic questions to derail a math talk or ask afterwards to hide the fact you didn't understand anything, but I can't seem to find it.)

Postscript: To clarify, I was mostly disagreeing with citylight's assertion that ZF wasn't a good choice for modelling semantics, and Matt's suggestion that the law of the excluded middle was an inherent advantage of set theory. The rest of Matt's post seemed alright, although I'm indifferent whether or not CIC's consistency proof in terms of ZF actually means that much to me, as only God can know that ZF is consistent. (Although I do believe in ZF, so it's a good sign.)

I'm also amused by light-hearted trash-talking of set theory, by the way. :-)

800lb gorilla of logical strength

You missed the truly heavyweight reason to choose set theory: theories based on ZF have by far the strongest consequence relations, and by far the strongest consistency strengths (in terms of the Gödelian hierarchy). This means that if you use a ZF-based theory to reason about a programming language —all things to do with the quality of the construction of your semantics being equal— you will be able to prove more things about the PL than in a weaker theory, say CIC.

Of course, you shouldn't think that finding the axioms of a theory intuitive provides much in the way of reason to think the theory consistent. Given a formal theory that sits high up in the hierarchy of consistency strengths, it follows we should have less confidence in its consistency than in theories sitting lower down. And indeed, the most widely circulated reasons given for believing in the consistency of ZF are rather handwavy.

I'd guess that the semantic underpinnings of a set based approach seem more intuitive, not because they are well grounded, but because set theory can be used to reason about its own models in a way that unleashes its inferential power on its own semantics.

Postscript: As a biographical aside, I learnt axiomatic set theory long before I discovered structural proof theory.

CIC...

[Ok, edited first paragraph: Initially posted somehow thinking CIC referred to simple calculus of constructions].

Why do you say CIC is weaker than ZFC? My understanding is that CIC and ZFC are essentially the same consistency strength - one can model the other if you're willing to just postulate bigger sets in ZFC or more universes in CIC.

Of course, you shouldn't think that finding the axioms of a theory intuitive provides much in the way of reason to think the theory consistent. Given a formal theory that sits high up in the hierarchy of consistency strengths, it follows we should have less confidence in its consistency than in theories sitting lower down.

My point was that I have more a priori confidence in ZFC based on natural intuitions. If the only reason we trust the consistency of some formal system is because we have a ZFC proof of its consistency, then I see ZFC as more foundational.

CIC & natural intuitions

The paper you refer to (Werner 1997), provides a family of axioms to add to CIC that allow, essentially, a chain of extensions of CIC that can do something along the lines of classical set theoretic introspection on theories below them. You need the first such extension to interpret ZFC, the second to interpret ZFC plus the existence of an inaccessible cardinal, etc. CIC does not interpret ZFC. The paper provides bounds on consistency strength, but does not explicitly talk about them. Good call, btw, I enjoyed reading that paper.

I don't believe in 'natural' intuitions in higher-order logic. If we really understood set theory, then we would be able to construct 'natural' large cardinal axioms that could settle the continuum hypothesis. To be more explicit, except for the redundant axioms, ZFC is very much more than the sum of its parts: Fraenkel's comprehension axiom is very easily interpreted in a weak set theory that admits constructive models, but when put in combination with ZC it has very radical and difficult to fathom implications for the structure of set theory.

If we can construct a model for a theory in ZFC, this provides a powerful sanity check for that theory. And since there is no particular reason to doubt the consistency of ZFC, then there is no particular reason to doubt the consistency of one's theory. But this does not provide the kind of well grounded consistency argument that justifies deep confidence in a theory, the way that, say, Gentzen's consistency proof for arithmetic is grounded in elementary combinatorial intuitions.

I should add that I do believe in the consistency of ZFC, but my reasons are rather intricate, and they justify a belief that is of a somewhat hypothetical nature.

Sets and such

The paper you refer to (Werner 1997)...

I had forgotten about needing to add a functional axiom of choice to CIC to do the ZFC encoding, but that makes good sense. The main take away I have from the paper is summarized in the abstract: "both hierarchies of logical formalisms interleave w.r.t. expressive power and thus are essentially equivalent."

If we really understood set theory, then we would be able to construct `natural' large cardinal axioms that could settle the continuum hypothesis.

Isn't that like saying that if we really understood the natural numbers, we could give an answer to the Goldbach conjecture?

I think Gödel's result casts serious doubt on our being able to find a simple finitary proof of ZFC's consistency in the way that we have a simple proof for arithmetic. This seems reasonable to me though, as the objects ZFC is meant to model are not finitary.

If you can make the leap to accepting subsets of the naturals as objects, getting to higher cardinals shouldn't be a problem, and then you have an intuitive "model" of a fragment of ZFC. All of the "counterintuitive" results about set theory that I've come across (e.g. Löwenheim–Skolem) really aren't that counterintuitive once you've wrapped your head around them.

Understanding mathematical structures

Somehow this discussion doesn't feel very LtUish...

"both hierarchies of logical formalisms interleave w.r.t. expressive power and thus are essentially equivalent."
Sure, but there is a big difference between CIC and its extensions: CIC is a constructive theory, in a somewhat strange sense, and its extensions are not. The behaviour of the hierarchy really doesn't tell us anything new about CIC: we already knew that it had models in ZFC.

Isn't that like saying that if we really understood the natural numbers, we could give an answer to the Goldbach conjecture?

In one sense yes, in another sense no. Our failure to prove the Goldbach conjecture shows how little we understand the distribution of the prime numbers, and that is a sense that we don't understand the natural numbers. But this is not a complaint about our axiomatisations: if Goldbach's conjecture is false, then the weakest theories of arithmetic suffice to show it, whilst Harvey Friedman informally polled a number of mathematicians, all of whom had some confidence that the conjecture could be settled within existing theories of arithmetic.

The Continuum Hypothesis is a different story. Wrestling with it goes back to Cantor's failed proof of it, right at the beginning of set theory, before Zermelo's first axiomatisation. The embedding of arithmetic in set theory is perfectly clear, but the embedding of analysis is trickier: can the continuum usefully be described using an uncountable set of cardinality less than the set of real numbers? There is no fact of the matter in those set theories we understand best, nor are there good candidates for LCAs that settle the matter.

I think Gödel's result casts serious doubt on our being able to find a simple finitary proof of ZFC's consistency in the way that we have a simple proof for arithmetic.
"Finitary" is a distraction: we can't even prove first-order logic consistent with finitary methods. I disagree with your claim even if we raise the bar to mean combinatorial, since the result casts exactly the same doubt on relatively weak theories, such as Peano Arithmetic, that do have combinatorial consistency proofs. The results described at Wikipedia's Ordinal Analysis page give a glimmer of insight into the problem: we can't find combinatorial bases for theories with more than the weakest second-order comprehension axioms.

then you have an intuitive "model" of a fragment of ZFC.
I'll try another tack on intuitive: intuitive is nice, because it means you can work with a theory. ZFC is intuitive, and I like how easy it is to find intuitive models for it and its many intuitive large cardinal axioms —besides the many LCAs I don't find intuitive— but this cornucopia of intuition does not provide me with one iota of confidence in the consistency of ZF set theory, just as the intuitive nature of Frege's inconsistent theory of the Grundgesetze provided no such confidence.

Intuitiveness is an internal property of a theory. Confidence in the consistency of the theory needs to come from outside the theory.

Löwenheim–Skolem
It makes clear the important difference between what theories about uncountable structures purport to define, and what they actually do define. So ZFC aims to describe, among many other things, the uncountable set of functions Nat -> Nat, but only succeeds in describing the countable subset of functions that it can prove total.

Intuitiveness vs. intuitive correctness

Somehow this discussion doesn't feel very LtUish...

As the discussion drifts more to feelings and intuition, I concur. We can wind this down...

... and that is a sense that we don't understand the natural numbers. But this is not a complaint about our axiomatisations: ...

The former is the sense I meant, but of course Gödel's incompleteness result provides an example of axioms for arithmetic failing in your other sense. I don't think there's anything we're disagreeing about here.

I disagree with your claim even if we raise the bar to mean combinatorial, since the result casts exactly the same doubt on relatively weak theories, such as Peano Arithmetic, that do have combinatorial consistency proofs.

Such a combinatorial consistency proof of the consistency of ZFC could not be formalized in ZFC, right?

Intuitiveness is an internal property of a theory. Confidence in the consistency of the theory needs to come from outside the theory.

This is the crux of our disagreement, but it may just be terminological. I'm not talking about intuitiveness as in, "wow, this iPod interface is intuitive" - I'm talking about intuitive correctness. For example, Peano Arithmetic is consistent. And this fact is intuitively obvious, even in the absence of any combinatorial consistency proof, because the theory is modeled by the natural numbers. Every axiom and every inference rule correctly describes the naturals, and the naturals exist, QED. While the infinite sets that are to model ZFC are not as concrete as the natural numbers, I think the intuitive correctness is still there.

[Edit: "Grundgesetze" isn't the system I thought it was, but AFAICT it suffers the standard problem with naive impredicativity of not considering that there may be non-termination of recursion in the valuation function for supposed boolean predicate expressions.]

[Löwenheim–Skolem...] So ZFC aims to describe, among many other things, the uncountable set of functions Nat -> Nat, but only succeeds in describing the countable subset of functions that it can prove total.

I don't like the way you've phrased this. ZFC successfully describes the uncountable set of functions Nat->Nat, but it doesn't describe them completely, and there are other, different structures that it also describes.

Update: Private mail sent. Apologies as well if this was too off topic. I've also made a few edits for clarity.

Moving from LtU

There are two points that suggest a misunderstanding that I think are best taken to email, and one point, regarding intuitive correctness, that is interesting enough to be conducted with some visibility; I will summarise my view on my weblog, sooner or later.

I agree entirely with Ehud's point about clogging up the forum with conversations of interest to few, and apologise for having done just that.

Not well grounded

I'd guess that the semantic underpinnings of a set based approach seem more intuitive, not because they are well grounded, but because set theory can be used to reason about its own models in a way that unleashes its inferential power on its own semantics.

Also, I'm not sure what you mean by this. ZFC seems very well grounded to me. Would you deny that it makes sense to consider an object such as "all subsets of the Natural numbers?" Or do you think that some axiom or inference rule doesn't capture our intuition of how such objects should behave?

Finally, I don't see what's special about the self-referential power of ZFC as compared to CIC, which can implement and prove things about an evaluator for itself (in a higher universe).

I'll try to find something in your biographical aside to argue against in another post :).

IO Error: 0xDEADBEA7 stack dump... (Blork!)

Also, I'm not sure what you mean by this. ZFC seems very well grounded to me.

Just to point out a potential communication error: it was not stated that the set-based approach is not well grounded. Rather, it was stated that the (intuitive) benefits of the set-based approach have a cause other than its being well grounded (i.e., they are independent of whether it is well grounded).

... Can't say I agree one way or another. I suspect that what is 'intuitive' to us depends mostly upon our prior exposure and efforts to achieve those critical 'aha!' moments. I suspect that, for the uninitiated, simple ideas like recursion are often far more difficult to grok than more complex ideas constructed of simple parts we already understand (e.g. iteration).

Set theory is in some ways more complex than, say, Category theory. How many people here really grok Category theory? Is it less intuitive, or do we simply have less exposure? Related to that attack on the biographical aside you're putting off: Charles may find set approaches 'more intuitive' only because he learned them earlier and more thoroughly... though I suspect he stated this as a disclaimer.

Oops, thanks

Just to point out a potential communication error: it was not stated that set-based approach is not well grounded.

Thanks. You're right - I read too much into that.

Regarding intuition, I can't say much about what others should find intuitive or even what influenced my own intuition. I don't think I've seen an attempt to justify the existence of interesting categories based solely on intuitive first principles. Do you know of one?

I don't think I've seen an

I don't think I've seen an attempt to justify the existence of interesting categories based solely on intuitive first principles. Do you know of one?

I do not. On the other hand, I haven't a clue how "interesting category" might be defined, and I do not count myself among those who fully grok category theory. I have some difficulty getting my head around the various morphisms and topologies. The extent of my understanding is more limited to inductive and coinductive structure, folds and unfolds... the sort of minor take away from category theory traditionally applied in PLT.

Sets of nats & intuitions again

Would you deny that it makes sense to consider an object such as "all subsets of the Natural numbers?"

No, but then again the mathematical structure is fantastically complex. Consider the Löwenheim–Skolem theorem: this tells us that every first-order theory has countable models, and so if a first-order theory is complete with respect to an uncountable structure, then that structure is uncountable in a degenerate way, such as is the case for the set of all subsets of the natural numbers.

It is easy to trip over when one tries to grasp such complexity. Martin-Löf's intensional type theory without universes can reason about the type Nat -> {0,1}, in its unambitious way.

Or do you think that some axiom or inference rule doesn't capture our intuition of how such objects should behave?

Well, there are good intuitions, and there are bad intuitions, and it's not always easy to tell which are which. If I were to be flippant, I would point you to our financial crisis.

But more seriously, it's not that we can't have confidence in a small intuition in isolation; it is more that —to echo a point I made in my other post— we find it hard to keep track of what happens to intuitions in combination. And often what we write down when we try to formalise our ideas goes beyond our original intuition: this rather seems to be the case with both Frege's Axiom V and Martin-Löf's original type theory.