Fixed points considered harmful

Fixed points have been an important topic ever since Church invented the lambda calculus in the early 1930s.

However, the existence of fixed points has unfortunate consequences:
* In a programming language, they preclude having strong types.
* In logic, they lead to inconsistency.

Fortunately, types intuitively preclude the existence of fixed points, as in,
e.g., ActorScript and Direct Logic.
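The point can be illustrated with a sketch (in Python, which is dynamically typed and so permits it): the Y combinator below builds a fixed point of an arbitrary functional, and the self-application x(x) at its core is precisely what a simple type system rejects, since no simple type σ can satisfy σ = σ → τ.

```python
# Curry's Y combinator, eta-expanded so it works under eager evaluation.
# The self-application x(x) is untypable in the simply typed lambda
# calculus, which is the sense in which types preclude fixed points.
def Y(f):
    return (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Use the fixed point to tie the recursive knot of factorial:
fact = Y(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

Here `fact(5)` evaluates to 120 even though `fact` was built with no explicit recursion anywhere.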


Categories typically realise many axiomatic uncountabilities.

What does "Types categorically axiomatise uncountably many Reals" mean? I understand types, categories, axioms, uncountable things and what a real number is, yet that sentence means nothing to me.

You cannot realise uncountably many Reals, categorically or otherwise. The human brain, with a finite number of neurons, cannot contain infinitely much of anything. The universe, with finitely many qubits, cannot contain infinitely much of anything. Realising an infinity of anything is impossible. You can axiomatically describe the infinity using the Peano/Dedekind axioms, and that is how the human brain reasons about them. You reason about Pi by approximating its value or by series expansion.
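The point about approximating Pi can be made concrete: a partial sum of the Leibniz series uses only finite memory and finitely many terms. (`leibniz_pi` is an illustrative name, not from the thread.)

```python
def leibniz_pi(terms):
    # Partial sum of pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    # Only finitely many terms are ever computed; pi itself never "exists"
    # in memory, only a finite approximation of it.
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))
```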

What we have here, I think, is a classic philosophical category error. Referencing Wittgenstein, we have one language game for countable things, things that exist, like how many stones are in this pile. We then have an abstract language game, based on induction, where we say: "the sun has come up every morning so far, therefore we expect that it comes up every morning forever". Despite the fact that this is not true, we can imagine a universe in which it is true. What we have is concrete observed reality, and an abstract fantasy.

Now we find we can reason about infinities and come up with sensible results (second order logic), and we know we can still count finite sets (first order logic). We even share words between the language games, such that we might have a first order concept of addition and a second order concept of addition. The mistake is to try to make those two concepts of addition the same. They share the same name because they seem to do similar things in both language games, but they are not the same thing. By trying to make them the same thing you introduce an error, in that you can now say you have uncountably many countable things.

You can define an interpreter for second order logic in first order logic, and I think this is how the human brain reasons about second order things; after all, the brain is constructed from a finite number of neurons. In this case the rules and conclusions about second order addition clearly do not apply to the host (first order) system.

Wrong categorical

What does "Types categorically axiomatise uncountably many Reals" mean? I understand types, categories...

Don't forget that "categorical" is a concept in model theory and is not especially related to "categories."

Categorical

Okay, so it means one model up to isomorphism. This seems irrelevant to realisability, and does not change my reasoning above.

Constructable Reals

I disagree that the constructable Reals are uncountable, since we can Gödel-number any generator function of Real numbers (in the same way we can count using the Natural numbers). "Constructable" has to take into account finite time and memory. You cannot construct uncountably many Real numbers in this universe.

I can construct a sentence that talks about uncountable Reals, but that sentence does not actually contain all the Reals. This is simply a statement in second order logic, and all statements in second order logic (that you can write down) have a finite Gödel number.
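The Gödel-numbering claim can be sketched directly: the hypothetical helper below injectively maps any finite description (for instance, the source text of a generator function for a real) to a natural number, so the describable generators are countable.

```python
def godel_number(text):
    # Injective map from finite strings to naturals: read the UTF-8 bytes
    # as base-257 digits with values 1..256, so no digit is zero and
    # distinct texts always get distinct numbers.
    n = 0
    for b in text.encode('utf-8'):
        n = n * 257 + (b + 1)
    return n
```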

re I disagree that the constructable Reals are uncountable.

The constructable reals are countable.

They are uncomputably countable.

Go figure.

Types mean that "I am not provable" is not a sentence of Math

Types mean that "I am not provable" is not a sentence of Mathematics.

However, types must be powerful enough that all of Mathematics can be done using types.

Human Brain and Finite Gödel Numbers.

Clearly the Gödel number corresponds to the neuron numbering in the network. The human brain has a finite number of neurons (hence all Gödel numbers assigned are finite), yet we can understand and reason about real numbers. Effectively, any real number you can imagine has a finite Gödel number. Hence sentences and propositions are countable, because their representation in the network is finite. Literally anything you can think or reason about has a finite Gödel number. Conversely, the human brain cannot reason about, or even imagine, anything with a non-finite Gödel number. Hence your system must have finite Gödel numbers, as you can reason about it using a finite number of neurons.

BTW Hewitt: another typo and a rough spot in DL paper

In the DL paper footnote where the constructions for types are given, the construction for Terms of some type σ is given twice. (The first time it is footnoted so probably the second one is the typo.)

Also, Terms is nowhere else used or explained, which is slightly rough. A sentence, perhaps.

overview of the DL paper

AFAICT,

CDL gives a theory of types with primitive types Boolean, ℕ, Sentence, Proposition, Proof, and Theory. From previously constructed types, new types can be constructed for disjoint unions, functions, and so forth.

Axioms of construction are given for ℕ.

A strong axiom of induction is given for ℕ, and as a result it has but one model.

Axioms of construction for typed sets are given but there is no set-specific strong axiom of induction. Sets are established by a characteristic function.

(Therefore, btw, I was wrong earlier when I said in a nearby thread that DL lacked a universal set for ℕ, but I think I was still correct that it lacks a universal set for Set◁ℕ▷. We don't have a provably universal powerset of the naturals because we have no categorical induction axiom for Set◁ℕ▷. We do have a universal set for ℕ itself because the categorical induction axiom for naturals lets us prove that all naturals are found in the set given by the characteristic function that always returns true. Hope I have that right now.)

The paper doesn't give (or I missed) axioms for Boolean but these are not too difficult to come up with. :-)

Rules of construction are given for typed expressions and sentences, which can be abstracted to form propositions. No axiom of induction is given for these.

A conservative but natural set of rules of inference is given.

Here is one big point:

Unlike what people are used to:

CDL does not give a strong axiom of induction for types. The set of types is open-ended. You can extend DL with new types.

CDL does not define any particular construction rules for functions, families, etc. You can invent new ways to introduce objects of this kind.

CDL is rigorously open-ended/incomplete.

One crude reason you cannot enumerate the proofs of CDL is that CDL does not give you enough specification of proofs to be able to do that! (At the same time, you can enumerate all of the natural numbers in CDL, because the axioms for natural numbers give you enough to do that.)

The core of CDL is deliberately ambiguous in an unusual way. It is not simply that there are multiple possible models of CDL. The ambiguity is deeper than that. There is no model for CDL -- only for CDL plus additional axioms (which we could call the "sub-theories" of CDL).

The propositions of CDL are "uncountable" because it is possible to define theories within CDL in which, for example, we have a kind of constructive way to choose a truly random real number.

If you find uncountable propositions unpalatable, stick to some other sub-theory of CDL.

CDL is not the kind of "foundation" wherein we have specified, once and for all, all the axioms we need.

CDL is the kind of "foundation" wherein we have axioms atop which we expect to add additional axioms and definitions.

Direct Logic intention: a rigorous mathematical foundation of CS

Direct Logic is intended to be a rigorous mathematical foundation for Computer Science. However, it is still relatively new and consequently may have some bugs, although it has been presented to and reviewed by world-class experts during its development. Thomas has presented an excellent overview in his post above :-) The current published articles on Direct Logic are designed for readers who are already experts. It would be very helpful to have articles for other audiences.

Of course, there is still much work to be done including some open issues to be addressed.

The core of CDL is

The core of CDL is deliberately ambiguous in an unusual way. It is not simply that there are multiple possible models of CDL. The ambiguity is deeper than that. There is no model for CDL -- only for CDL plus additional axioms (which we could call the "sub-theories" of CDL).

Hmmm, that seems impossible. A model for theory T+S is also a model for theory T or theory S.

BTW, can you explain the difference between a set and a type? I asked Hewitt about this previously and didn't get a good answer.

no model re the core of cdl

The crude reason that there is no model for CDL is that the definition contains deliberate gaps. Someone could ask why you modeled the type "Proof" that way, and you will find no textual support for any answer you could give.

Many of the parts of the paper that relate CDL to history provide rationales for those definitional gaps and the resulting blocks on inference. A curious side effect is you get things like the self-consistency proof and the inferential-incompleteness proof -- both signs that the deliberate gaps are well chosen.

BTW, can you explain the difference between a set and a type?

I think so but not very well yet. I'll work on it.

Models

The crude reason that there is no model for CDL is that the definition contains deliberate gaps. Someone could ask why you modeled the type "Proof" that way, and you will find no textual support for any answer you could give.

Oh I get it. You mean the situation we find ourselves in -- where we propose a model for DirectLogic but then Hewitt drops by to supply a new axiom -- isn't just an annoyance, but is rather the essence of DirectLogic. DirectLogic isn't a single logic, but a family of logics branching through the multiverse.

I think you're being bamboozled.

Edit: Since sarcasm can easily come across as mean, which isn't my intention at all, let me add that I really just mean to express skepticism in the above. There's an air of mysticism about the way that you describe DirectLogic that seems very undesirable to me.

not a multiverse re models

DirectLogic isn't a single logic, but a family of logics branching through the multiverse.

Not quite, no. There's only one universe here.

DL is simultaneously three things:

1. DL is the metatheory of DL.

2. DL is a set of axioms any(?) useful theory can safely include.

3. DL is a meta-theory for all theories that include it.

Since DL is a meta-theory for all theories that include it, all of the theorems of all of those theories are meta-theorems of DL.

In some sub-theories, proofs may be enumerable.

In other sub-theories, not necessarily.

DL, as a meta-theory-for-all, does not assume (or allow you to prove) that proofs are enumerable.

re bamboozled

BTW:

I think you're being bamboozled.

If you mean by Hewitt, I don't think so. If you mean by certain commentators with their inane questions or absurd provocations, well, who do you think they are fooling?

One model per type

The core of CDL is deliberately ambiguous in an unusual way. It is not simply that there are multiple possible models of CDL. The ambiguity is deeper than that. There is no model for CDL -- only for CDL plus additional axioms (which we could call the "sub-theories" of CDL).

Hmmm, that seems impossible. A model for theory T+S is also a model for theory T or theory S.

Yeah. I generously read this as saying "minus" instead of "plus." :) It seems to me that if CDL is consistent, any given closed subtheory of CDL has lots of models corresponding to possible extension paths CDL could take -- one of which might be your model that conflates provability with truth-in-the-model.

I think when Hewitt's trying to claim categoricity of the models of integers, sets, etc., there's a sense in which every type in CDL has its own "mini-model" that coexists with the mini-models for all the other types. Once one of these mini-models is categorical in itself, it will remain categorical in every potential model of CDL. (Is there a more standard word than "mini-model"?)

Well, I'm speaking on behalf of a system that isn't rigorous to me and that isn't designed by me, so I can hardly evaluate the truth of what I'm saying.

re one model per type

AFAIK:

Some subtheories have no models because of definitional incompleteness.

Some subtheories have multiple models.

Some subtheories have a unique model up to isomorphism.

(Which kind of describes the territory of mathematical discourse. :-) )

Such a subtle, slick theory.

This is really subtle and hella slick. For example, there is no provably universal set of sets of naturals because the only way to conduct such a proof would be in terms of the strong axiom of induction over naturals. Yet if you could do that, you would have to first enumerate the subsets of the naturals.
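The enumeration obstacle here is Cantor's diagonal argument, which can be sketched as follows (an illustrative Python rendering, with subsets of ℕ represented by their characteristic functions):

```python
def diagonal(enumeration):
    # Given any purported enumeration n -> (characteristic function of the
    # n-th subset of N), return the characteristic function of a subset
    # that differs from the n-th enumerated subset at position n.
    return lambda n: not enumeration(n)(n)
```

No matter what enumeration you supply, its diagonal is a subset of ℕ that the enumeration misses.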

Sets◅ℕ▻ is a type, but *not* a set in Mathematics

Thanks Thomas!

Sets◅ℕ▻ is a type (unique up to isomorphism), but Sets◅ℕ▻ is not a set in Mathematics thereby providing all of the intuitive sets over ℕ (which is sometimes called "ordinary mathematics").

Separate theories can be provided for more speculative extensions.

Finite and Infinite

I am in two minds about this. On the one hand, a finite system makes sense, and is certainly going to be more efficient. On the other hand, it is useful to have unbounded representations.

I still think I want to separate the ideas of DL from actors, and maybe even the parametric types.

- How would you model integer rings in DL?
- Does the compiler convert sentences into propositions?
- What messages does a type accept?

re How would you model integer rings in DL?

This probably has minor bugs, and more seriously I have no reason to believe it's stylistically any good, but:

Group◁σ▷ is the type of all groups using sets of type σ

Every group is defined by a set, along with "dot" and "inverse" functions that obey certain axioms.

∀g:Group◁σ▷ ⟶ ∃S: Set◁σ▷, ∙:σ[σ,σ], inv:σσ ⟶
   GroupSpecification[S,∙,inv]
     where
      GroupSpecification[S,∙,inv]
        ⇔   (∀a,b∈S ⟶ a∙b ∈ S) 
          ∧ (∀a,b,c∈S ⟶ ((a∙b)∙c) = (a∙(b∙c)))
          ∧ (∃e∈S ⟶∀a∈S ⟶ (a∙e = a = e∙a) ∧ (a∙inv[a] = e = inv[a]∙a))
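As a sanity check, the three clauses of GroupSpecification can be verified by brute force on a finite carrier set. The Python sketch below is illustrative only (it is not DL notation, and `satisfies_group_spec` is a made-up helper name):

```python
from itertools import product

def satisfies_group_spec(S, dot, inv):
    # The three clauses of GroupSpecification, checked exhaustively:
    closed = all(dot(a, b) in S for a, b in product(S, S))
    assoc = all(dot(dot(a, b), c) == dot(a, dot(b, c))
                for a, b, c in product(S, S, S))
    identity = any(all(dot(a, e) == a == dot(e, a) and
                       dot(a, inv(a)) == e == dot(inv(a), a)
                       for a in S)
                   for e in S)
    return closed and assoc and identity

# Example: the integers mod 5 under addition form a group.
Z5_is_group = satisfies_group_spec(set(range(5)),
                                   lambda a, b: (a + b) % 5,
                                   lambda a: (-a) % 5)
```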

By the way, if you are interested in doing abstract algebra, I think(?) you can construct a closed group theory this way:

  GroupTheory Φ ⇔ ∃σ:Type, g:Group◁σ▷ ⊢ Φ

Also, after defining ℤ, you could prove:

Theorem: GroupSpecification[Integers, +, −]

  where ∀i:ℤ, ⟶ 
   i ∈ Integers ⇔ inCountableUniverse[countIntegers[i]]
      where countIntegers[i:ℤ]:ℕ → (i >= 0) ? i * 2 : (abs(i)*2 - 1),
            inCountableUniverse[x:ℕ] → True
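The countIntegers map in the theorem is the usual zig-zag bijection between ℤ and ℕ; transcribed into Python for illustration:

```python
def count_integers(i):
    # Zig-zag bijection Z -> N: 0->0, -1->1, 1->2, -2->3, 2->4, ...
    return i * 2 if i >= 0 else abs(i) * 2 - 1
```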




Forall g?

I thought DL did not have a universal quantifier?

quantifier elimination in DL re Forall g?

(Thanks for this question.)

I thought DL did not have a universal quantifier?

The uses of quantifiers here are just a short-hand.


Propositions can be formed according to this rule:

  If σ1,σ2:Type, x1:σ1, and x2:σ2, then
    (x1:x2):Proposition
 
The universal quantifier is a short-hand:

  ∀x:σ ⟶ Φ  ≡  x:σ ⇒ Φ

The existential quantifier is short-hand for introducing a constant with no specific construction:

  ∃x:σ ⟶ Φ  ≡  x0:Constant◁σ▷ ∧ Φ

In the context of set theory:

  ∀x∈S◁σ▷ ⟶ Φ  ≡  ∀x:σ ⟶ (x ∈ S) ⇒ Φ
                ≡  x:σ ⇒ ((x ∈ S) ⇒ Φ)

  ∃x∈S◁σ▷ ⟶ Φ  ≡  ∃x:σ ⟶ (x ∈ S) ∧ Φ
                ≡  x0:Constant◁σ▷ ∧ (x0 ∈ S) ∧ Φ


For example, the group theory axioms begin:

  ∀g:Group◁σ▷ ⟶ ∃S: Set◁σ▷, ∙:σ[σ,σ], inv:σσ ⟶
     GroupSpecification[S,∙,inv]
      where [...definition of GroupSpecification...]

which could be (less expressively but equivalently) written:

  g:Group◁σ▷ ⇒
      Sg:Constant◁Set◁σ▷▷
    ∧ ∙g:Constant◁σ[σ,σ]▷
    ∧ invg:Constant◁σσ▷  
    ∧ GroupSpecification[Sg,∙g,invg]
      where [...definition of GroupSpecification...]

Modeling machine integers

So this would allow a model of, say, 64-bit integers? I am slightly concerned about the size of the model; it's not what one would call a concise description.

Back to my dilemma. Starting with machine integers and zero-terminated strings, we can model unbounded integers starting from machine words (bounded by the addressable memory of the machine implemented on).

Which is the better approach? At the moment I still prefer starting with the machine word and building up, as it mirrors the real world.

In fact the two may be complementary, DL being top down, and implementations being bottom up.
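The "bottom up" direction can itself be sketched: represent unbounded naturals as little-endian lists of fixed-width machine words ("limbs") and implement schoolbook carry addition on top. An illustrative Python sketch, assuming 64-bit limbs:

```python
WORD = 2 ** 64  # one 64-bit machine word per limb

def limb_add(a, b):
    # a, b: unbounded naturals as lists of limbs, least significant first.
    # Schoolbook addition with carry propagation between machine words.
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % WORD)
        carry = s // WORD
    if carry:
        out.append(carry)
    return out
```

For example, adding 1 to the largest single-limb value overflows into a second limb.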

closure axioms (re How would you model integer rings in DL?)

I think I missed one axiom here:

∀g:Group◁σ▷ ⟶ ∃S: Set◁σ▷, ∙:σ[σ,σ], inv:σσ ⟶
   GroupSpecification[S,∙,inv]
     where
      GroupSpecification[S,∙,inv]
        ⇔   (∀a,b∈S ⟶ a∙b ∈ S) 
          ∧ (∀a,b,c∈S ⟶ ((a∙b)∙c) = (a∙(b∙c)))
          ∧ (∃e∈S ⟶∀a∈S ⟶ (a∙e = a = e∙a) ∧ (a∙inv[a] = e = inv[a]∙a))


By the way, if you are interested in doing abstract algebra, I think(?) you can construct a closed group theory this way:

  GroupTheory Φ ⇔ ∃σ:Type, g:Group◁σ▷ ⊢ Φ

An additional axiom is needed to express that if a set, dot operator, and inverse operator exist that satisfy the axioms, then a corresponding group exists.

∀σ:Type, Sg:Set◁σ▷, ∙g:σ[σ,σ], invg:σσ ⟶ 
  GroupSpecification[Sg,∙g,invg] ⇔ g:Group◁σ▷

Finally, closure. Groups are uniquely identified by their components:

∀σ:Type, g,h:Group◁σ▷ ⟶

  g = h ⇔   (Sg = Sh)
          ∧ (∙g = ∙h)
          ∧ (invg = invh)