Fixed points considered harmful

Fixed points have been an important topic ever since Church invented the lambda calculus in the early 1930s.

However, the existence of fixed points has unfortunate consequences:
* In a programming language, they preclude having strong types.
* In logic, they lead to inconsistency.

Fortunately, types intuitively preclude the existence of fixed points, e.g.,
ActorScript and Direct Logic.

Higher Order Functions.

Are functions first class in ActorScript? How can I pass an actor to an actor as a message?

Addresses of Actors can be sent in messages

Addresses of Actors can be sent in messages.

Of course, a procedure is an Actor, which can be invoked by sending it arguments in a message.

Y-Combinator

So if I have an actor that passes its own address to the actor address passed to it, you have the Y-combinator, which calculates the fixed point. I thought fixed points were not possible?

Types: fixed points do not exist in ActorScript and Direct Logic

Types mean that fixed points do not exist in ActorScript and Direct Logic.

Can I create the Y-Combinator as a procedure?

What prevents me declaring the Y-Combinator then? Which specific typing rule?

I have seen in the examples procedures which call themselves (like the factorial example), so if a procedure can pass a message to itself, and can pass its address as a message to another procedure, you can create the Y-Combinator.

The Y combinator cannot be defined in ActorScript

The Y combinator cannot be defined using ActorScript:

 
PropToProp ≡ [Proposition]↦Proposition
Helper.[f:PropToProp]:PropToProp ≡ [x:([⍰]↦PropToProp)]→ f[x.[x]] 
Fix.[f:([PropToProp]↦PropToProp)]:PropToProp ≡ (Helper.[f]).[Helper.[f]]

The missing strict type ⍰ does not exist.

Factorial?


Factorial∎[n ←9, accumulation ←1] ≜
 n � 1 ⦂ accumulation ⦶
 (> 1) ⦂ Factorial∎[n–1, n*accumulation] ⍰▮

I can only easily type ASCII, so hopefully you can understand the following without the specific unicode characters:

Y.[f] = f.[Y.[f]].

How do I indicate 'f' is the address of an Actor? How does the type system deal with addresses of Actors?

Y combinator *cannot* be defined in ActorScript

With types made explicit:

Factorial.[n:Integer]:Integer ≡
 n=1 � True ⦂ 1
        False ⦂ n*Factorial.[n–1]

However, Y.[Factorial] is an infinite loop with the following (mistaken) definition of the Y combinator:

Y.[f:([Integer]↦Integer)]:Integer ≡ f.[Y.[f]]

Well. I won't use Actorscript then.

I can give typed Y combinators in Haskell and ML; with some verbosity in C++ even. It isn't hard, I needed to think about the ML version for half an hour and naturally defined an eager version for it.
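
For example, a Haskell version can go like this (a sketch of the usual trick: a recursive newtype names the type that plain self-application would need, so the definition type-checks; the ML version is the eager, eta-expanded variant of the same thing):

    -- A typed Y in Haskell: Rec a is the recursive type standing in for
    -- "something that, applied to itself, yields an a".
    newtype Rec a = Rec { unRec :: Rec a -> a }

    y :: (a -> a) -> a
    y f = g (Rec g) where g w = f (unRec w w)

    -- y (\fac n -> if n <= 1 then 1 else n * fac (n - 1)) 5  ==  (120 :: Integer)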

Why would I use Actorscript if I can naturally define a Y combinator in most other typed languages?

The Y combinator is *not* practically useful for anything

The Y combinator is not practically useful for anything.

What nonsense

Since my small declarative language doesn't have local recursive function definitions I use it all over the place?

But, granted, in a better language you don't really need it. But it's a nice to have if it can be expressed.

It just shows other languages have more expressive type systems than Actorscript. So, I won't use Actorscript.

Y combinator: an obscure, convoluted hack to do recursion :-(

The Y combinator is an obscure, convoluted hack to do recursion :-(

Yah. But it works

Works fine in my language.

    def fix: ((a -> b) -> a -> b) -> a -> b =
        [ f -> f [x -> (fix f) x] ]

    def long_to_text: long -> text = 
        zero = int_to_long 0;
        ten  = int_to_long 10;
        c = [ 0 -> '0' | 1 -> '1' | 2 -> '2' | 3 -> '3' | 4 -> '4'
            | 5 -> '5' | 6 -> '6' | 7 -> '7' | 8 -> '8' | 9 -> '9' ];
        p = fix [ p, nil,       x -> cons x nil
                | p, cons y yy, x -> cons y (p yy x) ];
        s = fix [ s, n ? n < zero    -> cons '-' (s (zero-n))
                | s, n ? n < ten     -> cons (c (long_to_int n)) nil
                | s, n            ->
                    n0 = n / ten; n1 = n - (n0 * ten);
                    p (s n0) (c (long_to_int n1)) ];
        [ n -> _txt (s n) ]
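
For comparison, the same fix rendered in Haskell (my translation, not the original language) is just named recursion, eta-expanded so it also works under eager evaluation:

    -- Named-recursion fixed point, eta-expanded like the definition above.
    fix' :: ((a -> b) -> a -> b) -> a -> b
    fix' f = f (\x -> fix' f x)

    -- fix' (\fac n -> if n <= 1 then 1 else n * fac (n - 1)) 5  ==  120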

Recursion + First Class Functions

The Y-Combinator is a natural consequence of having recursion and first class functions. It looks like you allow general recursion, which must mean you do not have first-class functions. You claim you can pass the address of one actor to another (which would seem to be similar to first-class functions) so I am trying to understand the details of the difference (because the exact details are important).

Types mean that Y combinator can't be implemented in ActorScript

Types mean that the Y combinator can't be implemented in ActorScript

Which is a great reason not to use ActorScript.

Which is a great reason not to use ActorScript.

If it's your own language why leave out "local recursion"?

"Since my small declarative language doesn't have local recursive function definitions I use it all over the place?"

I realize that code generation is a bit more complicated with recursion, but let me guess that you don't have looping constructs either - so if you want efficient looping you need to handle recursion specially anyway.

I like minimalism

I like minimalism. I don't handle recursion in any special manner, it's just that only top level definitions may refer to themselves - each top level definition becomes a (trampolined) C routine.

I left it as a 'nice to have' for some later version.

What is not allowed?

So what is the restriction that prevents that definition of the Y combinator? [Integer]->Integer and Integer both seem reasonable types?

Y fixed point procedure is not validly typed

The Y fixed point procedure on integers is not valid because types must be constructively defined:

IntegerToInteger ≡ [Integer]↦Integer
IntegerFunctional ≡ [IntegerToInteger]↦IntegerToInteger 
Nonexistent ≡ [Nonexistent]↦IntegerFunctional    // not allowed

Helper.[f:IntegerFunctional]:Nonexistent ≡ 
  Let g.[x:Nonexistent]:IntegerFunctional ≡ f[x.[x]]。 
    g 

Y.[f:IntegerFunctional]:IntegerFunctional ≡ (Helper.[f]).[Helper.[f]]

Edit: For example, the following is a valid definition of an Integer functional:

FactorialFunctional.[f:IntegerToInteger]:IntegerToInteger ≡
   Let g.[n:Integer]:Integer ≡ 
          n=0 �
            True ⦂ 1
            False ⦂ n*f.[n-1]。
     g

Consequently, FactorialFunctional.[Factorial] is equivalent to Factorial.

Not getting it

I'm obviously still not getting it. I don't even see why you need "Nonexistent". Surely the type you want for an integer fixed point is:

Y : [[Integer] -> Integer] -> [Integer] -> Integer

So you could pass a function like:

f x = x + f (x / 2)
Y f 1

Should return the result 2. The first argument to Y is clearly a function from integer to integer, the second argument is simply an integer, and it returns an integer. There are no complicated or unrealisable types. Of course the definition of Y at the value level might be tricky, but it is certainly well typed.

Important for types that Y procedure definition is not valid

It is important for types that Y definition (above) is not valid.

Invalid types like Nonexistent (above) are not constructively defined.

Nesting

Aside from why you might not want to allow them, I don't really see what you are disallowing. So do you not allow higher order functions (actors), where the argument of one function is another like:

[[Integer] -> Integer] -> ...

Or do you not allow functions that return functions like:

... -> [Integer] -> Integer

Edit: I might have a vague clue that you don't allow construction of a function like 'Y', but I am not sure how. I don't see why you need "Nonexistent"? Surely you just need IntegerToInteger and IntegerFunctional. The type of the function you want to calculate the fixed point of is "IntegerToInteger" and the type of the Y combinator is "IntegerFunctional". It seems you can define the Y combinator without "Nonexistent", so why is it even there?

The Y fixed point has a definite definition: but no valid types

The Y fixed point procedure has a definite definition that has no valid types.

Valid types

That's an odd way of putting it. Don't you mean the procedure fails to type check?

The question is why it fails to type check. Let's start with valid types:

fix<a> = [[a] -> a] -> [a] -> a

Ignoring what procedure has this type, and how the type is generated, is the above a valid type for a procedure?

Parameterized Y fixed point procedure: not typed

The parameterized Y◅aType▻ fixed point procedure is not valid because types must be constructively defined:

SingleArgument◅aType▻ ≡ [aType]↦aType
Functional◅aType▻ ≡ [SingleArgument◅aType▻]↦SingleArgument◅aType▻ 
Nonexistent◅aType▻ ≡ [Nonexistent◅aType▻]↦Functional◅aType▻   // not allowed

Helper◅aType▻.[f:Functional◅aType▻]:Nonexistent◅aType▻ ≡ 
  Let g.[x:Nonexistent◅aType▻]:Functional◅aType▻≡  f[x.[x]]。 
    g 

Y◅aType▻.[f:Functional◅aType▻]:Functional◅aType▻ ≡ (Helper◅aType▻.[f]).[Helper◅aType▻.[f]]

Why Y can't be defined in DL (simple) (re "not getting it")

Let's suppose:

  factorial == Y[fixes-as-factorial]
            == fixes-as-factorial[Y[fixes-as-factorial]]


  shorthand:

  faf == fixes-as-factorial
  n2n == [Natural] |-> Natural

  i.e.

  factorial : n2n

  factorial == Y[faf] == faf[Y[faf]]

  faf's type?  One could say:

  faf : [typeof(faf)] |-> n2n 

  but...

A few DL types are primitive.

Additional DL types may be introduced by deriving a new type that is distinct from all earlier defined types.

No other types exist.

Notice that the purported type of faf is:

  faf : [typeof(faf)] |-> n2n 

To construct the type of faf one must have already constructed the type n2n and also the typeof(faf) itself.

Since the type of faf must be constructed before it can be constructed, it must not exist.

Simple function and Recursive Types

But I can have a simple function with the same type as the fixed point. For example:

g f = \x . f (f x)
has the type 
g<a> : [[a] |-> a] |-> [a] |-> a

And I can pass it a function like increment by one, and 'g' will return a function that will increment by 2.

This has the same type as the fixed point, but performs another calculation.
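
Concretely (a Haskell sketch of that example):

    -- Same type shape as a fixed-point operator, but it merely applies f twice.
    g :: (a -> a) -> a -> a
    g f = \x -> f (f x)

    -- g (+ 1) 5 == 7   -- "increment by one" comes back as "increment by two"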

So as far as I see, there is nothing wrong with the _type_ of the fixed point function.

Now consider "Mathematica": we could pass a symbolic representation of the function 'f' to a fix function that performs algebraic manipulation of the symbolic representation and returns the fixed point (for example, calculates the fixed point of infinite arithmetic or geometric progressions).

In that sense 'Y' can be defined as a meta-function that operates on functions without any self-referential definitions.

edit: I think I see, the type equation solver is not able to solve a type like:

X = X |-> X

So effectively you don't allow some generalisations of type variables? Because traditionally:

X = forall a . a -> a
X = forall a . (a -> a) -> (a -> a)
X = forall a . ((a -> a) -> (a -> a)) -> ((a -> a) -> (a -> a))

The top (a -> a) is the most general type and it includes all those below. Now we know the ActorScript types we are discussing don't have universal quantification, but we can represent the above parametrically:

X<a> = [a] |-> a

Now we can provide "[a] |-> a" or "[[a] |-> a] |-> [a] |-> a" as the parameter to the above type X.

So the type of "faf" can be derived to be:

faf<a> = [a] |-> a

Which I guess would become an infinite expansion:

faf<a> = [a] |-> a, 
[[a] |-> a] |-> [a] |-> a
etc...

So it appears the important bit is the difference between parametric and universally quantified types, although we could recover the Y combinator with parametric types by permitting recursive types, which using "mu" notation would give something like the type:

f<a> : [a] -> a
Y : mu b . f<b>

Y fixed-point procedure is typed; but not its implementation

The Y fixed-point procedure itself has a typed signature; but its implementation is not typed.

no recursive types re: Simple function and Recursive Types

So effectively you don't allow some generalisations of type variables?

I guess you could say that.

Types are defined by a constructive induction. Therefore they are partially ordered by a kind of ranking.

The lowest ranked types are: Boolean, ℕ, Sentence, Proposition, Proof, Theory.

New types may be constructed from earlier constructed types. For example, if σ1, σ2:Type then there is a function type: [σ1] ↦ σ2.

A function type is of higher rank than its domain and codomain.

Consequently, a function type is always distinct from its domain and codomain.

Hewitt's earlier described helper function and my "faf" (fixes as factorial) both have the same, non-existent type.

   We know:

   factorial : [ℕ] ↦ ℕ

   Assume to derive a contradiction that:

   factorial ≡ Y.[faf]

   where

   Y : [typeof(faf)] ↦ [ℕ] ↦ ℕ

   and where Y, defined in the customary way, gives us:

   Y.[f : typeof(faf)]:[ℕ] ↦ ℕ ≡ Helper.[f].[Helper.[f]]

   since we know that Y.[faf] is factorial, we can read off
   the domain and codomain of Helper:

   Dom(Helper) ≡ typeof(faf)

   Cod(Helper) ≡ [ℕ] ↦ ℕ

   that is to say:

   Helper : [typeof(faf)] ↦ [ℕ] ↦ ℕ

   the whole concept of faf, which when fixed yields factorial,
   is that faf applied to itself is factorial.  From this it follows
   that we can read off the type of faf:

   faf : [typeof(faf)] ↦ [ℕ] ↦ ℕ

   By inspection we can see that the types of faf and Helper
   are the same and therefore:

   Helper : [typeof(Helper)] ↦ [ℕ] ↦ ℕ

   these constructions don't work in DL, because they have it that:

   Dom(faf) ≡ typeof(faf)

   and

   Dom(Helper) ≡ typeof(Helper)

   yet in DL, a function type can only be defined in terms of a
   lower ranking domain and a lower ranking codomain.

So, no, types don't "generalize".

Interestingly, a syntactically recursive definition of factorial is perfectly fine.

   factorial.[x:ℕ]:ℕ ≡ if (x = 0) 1 else x * factorial.[x-1]

and as Hewitt has mentioned we can define a (particularly uninteresting) functional:

  FactorialFunctional.[f:IntegerToInteger]:IntegerToInteger ≡
     Let g.[n:Integer]:Integer ≡ 
            n=0 �
              True ⦂ 1
              False ⦂ n*f.[n-1]。
       g

and thus FactorialFunctional.[Factorial] is equivalent to Factorial.

Still, FactorialFunctional is not a function that when self-applied yields Factorial.

Type System Using Big Lambda

Okay, I think I now understand what is going on with the type system. It seems it has big-lambda and no universal or existential quantification, so in more common type system notation we would write:

Y<a> = [[a] -> a] -> [a] -> a

as

Y : \a . (a -> a) -> a -> a

I think that captures all the same properties, except the message arity, which I am not concerned with for the moment.

Edit: I am again a bit confused, what happens with something like the identity function:

id : forall a . a -> a
id x = x

If this were written with a type parameter you would get something like;

id : \a . a -> a 

However there is only a single function here, so if we try and use it:

id "hello"
id 27

We cannot bind it to two different types. It is the universal quantification of the type variable that allows it to be both a string and an integer at the same time (as there is only single function).

How can we define a re-usable polymorphic function without universal quantification?

"How can we define a re-usable polymorphic function without uni"

How can we define a re-usable polymorphic function without universal quantification?

Every function has a definite domain and co-domain. There is no "any" type.

In ActorScript, a "parameterized type" can be defined. The class of typed identity functions could be defined (modulo my ActorScript errors):

  SingleArgument◁aType▷ ≡ [aType] ↦ aType

  Id◁aType▷ ≡ 
    Actor implements SingleArgument◁aType▷ using
      [x] → x ▮

Actorscript supports "polymorphism" in roughly the manner of "generics". Different actors can implement the same type as one another using different implementations.

A longer example begins on page 33 of the ActorScript paper.

p.s.: Hewitt - the appendix 4 hex value for "end" (▮) is wrong.

Thanks Thomas!

Thanks for noticing the typos!
They will soon be corrected on HAL.

It is also possible to define Id as follows:

Id◁aType▷.[x:aType]:aType ≡ x ▮

Emacs abbrev file for DL & Actorscript?

Is there an already existing Emacs abbrev file somewhere to translate IDE ascii into the assorted unicode used by DL and ActorScript?

Something Odd

There is something odd here. There is only one 'id' function, so once you have grounded its type parameter to, say, Integer, it is only Integer and cannot be anything else at the same time.

re Something Odd

A polymorphic function like Id can be applied to arguments of any type that has the property that the definition of Id type-checks if "aType" is replaced by the type in question.

It so happens that the definition of Id is such that it will type check no matter what type you replace aType with.

Other polymorphic functions are not necessarily that general.

That's not it.

In other type systems, there is only one id function. It can only have one type. The universal quantifier says this single id function accepts any type, but there is still only one id function. You can think of this in the code output by the compiler itself: there is a single function that accepts any type.

So a type with a parameter like: \a . a -> a once bound to an Integer can only accept integers, and hence the same function cannot be used on strings as well.

In ActorScript types this is different because there is some kind of assumed operation on the functions (probably elaboration like Ada generics), where every time we use a function like id on a type we have to make a copy of it, and elaborate it on that type to generate a specialised id function. We then have to keep track of every type id is already elaborated on, so that we don't proliferate unnecessary copies of the function.

At the moment I cannot think how to represent such a constraint in a formal type system; I suspect it requires introducing the notion of a set of types, something like:

{\a . a in T => a -> a}

I.e., we have a set of functions, each of which accepts a type parameter that must be in the set of types T, which is all the types the function is applied to in the whole program.

Many functions

You can think of this in the code output by the compiler itself: there is a single function that accepts any type.

Would that not depend on the language and the compiler? In the examples that I've seen the function would be specialised to a type signature, so that in the generated code there would be a function id_integer_integer and another id_string_string. They would both have originated from a common description but at the point that they were instantiated over a concrete type they became separate functions. This depends on what style of polymorphism is in use. C++ uses this approach and then mangles the types into the names. I think (very fuzzy memory) that Haskell does the same, or it throws an error at the point that the function cannot be instantiated with the types (i.e. the constraints cannot be met)?

The claim that there is only one id function is after all just a claim that is relative to what is syntactically valid. If one is working in a system without polymorphism at all then it is clear there is no universal id function. If one is working in a system where polymorphism defines a template that can be instantiated into functions, then the universal id is not actually a function.

Logic

I think the type system, as related to a logic or an algebra, has these intrinsic properties. I don't think C++, Java etc. have a proper algebraic type system.

Haskell types are implicitly universally quantified, and you can read them as:

id :: forall a . a -> a
id x = x

With exactly the semantics I describe above.

The claim about only one id function is based on the logic underlying the type system, nothing to do with the language or its semantics.

Soundness is defined by a type system that admits only well-formed programs, i.e., the logic of the type system matches the semantics of the language. Therefore it must be possible to have unsound type systems (where the logic does not match the semantics) and even type systems with no language.

Simply saying:

a -> a

Without introducing 'a', this is not a complete statement: a is free in the expression, and it must be introduced in order for this to be a meaningful statement in logic, for example:

\a . a -> a
forall a . a -> a
exists a . a -> a

'->' is just an infix functor (logic terminology, not category theory). You could alternatively write:

\A . arrow(A, A)

Using upper case for logic variables and lower for atoms. The problem is, for none of these options above does the logic match the semantics of ActorScript types.

System F

What is it about the DL types that you find odd? Isn't this scheme just restricting to no higher rank polymorphism? He seems to be calling quantified types "roughly generics" and distinguishing them from types. This corresponds to requiring all quantifiers be at top level in types. Also, look at the System F encoding of Haskell types. A value of quantified type is encoded as a function using big lambda.

No universal quantifier

So in system F, "id" is:

|- /\a . \x^a . x : forall a . a -> a

So the type is still universally quantified. ActorScript types we are told have no universal quantification.

You cannot assume free variables are universally quantified, see:

http://math.andrej.com/2012/12/25/free-variables-are-not-implicitly-universally-quantified/

Right

In System F, you'd have id = \a:*. \x:a. x. I understand him to be saying that id isn't a proper value, but is a family of values. In other words, he considers (id Int) and (id Char) to be values, but id isn't a proper value. Quantified types in System F would correspond to "types" of these families, but he doesn't call them "types", a term he reserves for ordinary types of values. So (\a:*. \x:a. x) has a "family type" of roughly (forall a:*. a -> a). But he doesn't allow function families to be passed first class - no higher rank polymorphism.

I'm filling in some blanks, but this setup doesn't seem particularly unusual to me.

How to express as a type system.

So what extensions to a simply typed lambda calculus would be needed to support this kind of typing?

I am not saying it's unusual (Ada generics work this way) but they are largely ad-hoc. How would this relate to a formal type-system, or to logic?

Edit: I think I am thinking of the type level lambda the wrong way. Perhaps the actor type for "id" could be written:

/\a . a -> a

Just weaken System F

I think you just need to split the grammar for System F so that family values (big lambda) and family types (forall) don't produce ordinary values and types, but rather produce value families and type families:

V = x | \x:T. V | V V | primitive
VF = V | \a:*. VF
T = a | T -> T | primitive
TF = T | forall a:*. TF
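
Rendered as data types (a Haskell sketch; the constructor names are mine):

    -- Ordinary types and terms.
    data Ty = TyVar String | Arrow Ty Ty | TyPrim String
    data Tm = Var String | Lam String Ty Tm | App Tm Tm | Prim String

    -- Families: quantifiers and big lambdas only wrap the top level,
    -- and a family is not itself an ordinary type or term.
    data TyFam = MonoTy Ty | Forall String TyFam
    data TmFam = MonoTm Tm | BigLam String TmFam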

Edit: And to answer your second question... I'm not arguing that it's a great type system. Just that it doesn't seem particularly odd to me.

Translation to logic

So with a universally quantified type we can translate to logic quite easily, using Prolog syntax:

forall a . a -> a   <=>   arrow(A, A).

What would an actor type look like with the implicit universal quantification of the Prolog clause? Clauses have implicit universal quantification. If we take that away, once a clause is bound to a value, it retains that value for the rest of the program (generalisation does not happen).

We could imagine abstracting the values from the Prolog clause using a lambda as in:

\A . arrow(A, A).

Now I guess we have to know the argument 'A' before we can match and unify the clause. It seems kind of useless as a logic because we can no longer return a value, or propagate an unknown type.

Family polymorphism would

Family polymorphism would only exist at top level. If you want to pass a value first class you have to choose a (non-family) type for it. I agree that this makes for a weak logic.

Choosing

Presumably you have to choose first, which means you cannot operate on unknown variables, which would seem to make inference harder.

System F + ActorScript Types

How would you adapt system-F to support both the normal universal quantification, and types like Actor Script types?

Possible to define Fix◅aType▻

How ActorScript types work in the first place is already a good question, but at the moment I suspect it's very close to System F and the kind of polymorphism you expect. Here's something Hewitt said elsewhere in the thread:

PS. Also, it is possible to define Fix◅aType▻ although that doesn't create an untyped fixed point to create "self-referential" sentences.

Signature but no valid implementation of Y◅Sentence▻

I fixed the misleading statement in my previous post.

There is a signature but no valid implementation of Y◅Sentence▻. There are no "self-referential" sentences in Mathematics.

Separate thing

I don't see how a fixed point combinator would lead to "self-referential" sentences anyway. To me, they're independent topics at the moment. (Maybe there's something I'm missing.)

Martin-Löf, function identity, and polymorphism

Per Martin-Löf wrote (in "Constructive mathematics and programming", in "Proceedings of the sixth international congress for logic, methodology and philosophy of science," 1982):

I do not need to enter the philosophical debate as to whether the classical interpretation of the primitive logical and mathematical notions (proposition, truth, set, element, function, etc.) is sufficiently clear, because this much is at least clear, that if a function is defined as a binary relation satisfying the usual existence and unicity conditions, whereby classical reasoning is allowed in the existence proof, or a set of ordered pairs satisfying the corresponding conditions, then a function cannot be the same type of thing as a computer program. Similarly, if a set is understood in the sense of Zermelo, as a member of the cumulative hierarchy, then a set cannot be the same kind of thing as a data type.

It is a classical notion, of the sort from which Martin-Löf removes himself, that one can say:

In other type systems, there is only one id function. It can only have one type.

That is problematic in a constructive type theory since it entails the existence of a "type of all types".

The type-parametric definition of Id◅aType▻, illustrated in earlier comments, is not a constructively well-defined function because it has no definite type: no domain and codomain. It is "something else", not a "function".

The type-parametric Id◅aType▻ can be regarded as a kind of idiosyncratic mathematical object. The parameterized type signature is one kind of abstract grammar tree that describes a class of function types. The full generic definition of Id is a grammar tree that describes a class of Actor implementations. (It is really an internal detail of implementation whether or not the class of Actor implementations can all be compiled to a single machine code or whether different members of the class must be compiled separately).

In addition to being a programming language, Actorscript is a suitable language for expressing DL terms, such as the definitions of mathematical functions.

If you are coming from a classical-logic-and-set-theory point of view that might seem odd because, well, it is different from that tradition.

What does this have to do with software engineering?

I have no idea what's being talked about here, even though I know some perfectly good meanings for the phrase "fixed point". But even having no idea what's being said, I am ever more firmly of the opinion that none of this could ever have anything to do with the act of engineering software, which, after all, is what programming languages are for.

Can anyone explain, defend the proposition that there is anything here that could affect the actual implementation of software?

The bounds of logic define the bounds of reasoning about software

I am not really in a position to defend these people, because my own opinion is that there are too many mathematicians and logicians in CS and consequently SE is both misunderstood and taught badly, but the title is more or less the argument.

We want correct software. One manner of achieving that is through formal means, proving software correct. The problem with proving software correct is that maybe Godel blew up that idea a priori. Therefore there is a need to establish maximally strong logical frameworks to prove software correct. Hewitt thinks he did that with a certain logic I don't understand.

That's about it.

Inconsistency Robustness: best we can do for large software

Inconsistency Robustness is the best we can do for large software systems, which are pervasively inconsistent.

My software works.

My software works.

Why: Windows, iOS, Linux, etc. always have huge numbers of bugs?

Why do Windows, iOS, Linux, etc. always have huge numbers of bugs?

They don't

Whatever gave you that impression? My software just runs? Actorscript cannot prevent the occasional 'logical' (algorithmic design) bug from appearing either; how does it prevent division by zero errors?

In practice, my software works and I have uptimes of years.

A difference between tools and proofs

"too many mathematicians and logicians in CS"

Especially in the field of programming languages there's a difference between an engineer and a mathematician.

Mathematicians usually start with severely limited domains because in any new area, you can usually only write proofs about things that are very simple and limited... even as an area evolves, the mathematical notations for it remain as minimal as possible because being able to write proofs is enough.

An engineer wants tools that are expressive and that's the opposite of limited. Also an engineer just wants to be able to write something that he can reason about - he doesn't need a proof let alone a automated proof. A system that's designed to prove various (often trivial) properties about his programs may just get in the way of getting the work done. It may make the work impossible, even.

Mathematicians and logicians often get everything wrong

Yeah well. There's that, and there's this: despite the fact that there are scientists who are great engineers, often they'll simply get everything wrong.

It's a real hassle. In the small country I live in, students who want to go to a university after college are advised not to go to CS departments with a high theoretical focus because they a) will not learn an additional skillset, and b) will need to unlearn lots of stuff when entering industry from that department.

Unless you want a career as a scientist, there's simply not much there in most theoretical CS departments. And even what is there is often somewhat dubious.

(Don't blame me, I didn't know either until I became a college lecturer.)

Everything in LTU is about types these days

I'm not even convinced that types are good for programming.

For one thing I've noticed that everything you can do with types that actually affects the result of a computation (rather than limit what can compile) could also be written in something like Prolog, and the Prolog would be much more expressive and less limited.

As for types that are there to prove that your program does the right thing... well not everyone needs that, not all the time. And most of the time, the style of programming it forces is a lot slower than the alternative.

People writing in overly typed languages have to solve TWO problems, one is the algorithm - the other is fitting it into the type system. If you delete the second problem, a lot of problems become trivial.

Types ARE a language, but they're one of those severely limited domain languages that mathematicians love because they can make proofs about them.

But no one here even seems to notice expressiveness. Lack of limitations may make programs hard to make proofs about, but they make them easier to write.

And for god's sake they're arguing about the y-combinator, something that is useless in any real language where a function can refer to itself. Why would they assume a function can't - oh they had some calculus where they thought they couldn't make a proof if functions could refer to themselves. Useful in the real world? Absolutely not.

Scientists often concentrate on the trivial

Yeah well. Parsing is 2% of the effort of implementing a compiler and typing is maybe another 10%.

Typing for a programming language, well, someone should write a good book about it once because I gather it's a mostly finished subject.

It's just one of those things you can produce nice greeklish about which will get published because it looks sciency. It's also surprisingly hard to get right, so that doesn't bother me too much, but anyone who thinks a compiler is about parsing and typing is a fool; and it looks CS delivered another generation of those.

The type-theory academics

The type-theory academics have this deep belief, I think, that if they can just keep at their approach until they get it right enough it'll be so wonderful that it'll win everyone over. Which may be true... it's good that academia can explore things like that with very long lead times... but in this case I think they're barking up the wrong tree. Your point about two problems, one algorithm and one typing, seems closely related to the argument I've been making lately against types. It's a little scary to me that I've gotten some objections when I suggest that the first duty of program syntax is to be readable. By which I mean, clearly express the algorithm. Types, I've concluded, are essentially a tying of program syntax to reasoning about programs (through the imho over-hyped Curry-Howard correspondence), but once you tie program syntax to both things, the demands of making these two things clear pull the syntax in different directions. If correctness proofs are to be practical, you need proof and algorithm both to be clear, which you can't get if you tie both to a single syntax.

I suspect that mathematicians don't like algorithms

One end point of programming is where you specify a problem and the software is the programmer.

We may even get there eventually.

I know none of what they're arguing about is declarative programming but I'm still tempted to make the joke that mathematicians imagine that what they need to get right is the ability to specify problems and the magical programmer in a box is a trivial problem left up to the reader.

But I do notice that nothing on this blog is about algorithms, at best it's about trivial classifications of algorithms, such as whether they can be proved to terminate.

Donald Knuth seems a likely

Donald Knuth seems a likely counterexample.

I've felt strongly for decades that the mental talent needed to write a good program is the same as needed to write a good proof, making it fascinating that there's this huge gulf between the two disciplines. Revisiting that insight now, I'd add that it's crazy to try to constrain the syntax such that the same syntax is ''both'' a good program ''and'' a good proof at the same time. A bit like trying to come up with two grammatically correct sentences in two different languages such that the two sentences sound the same.

Could you come up with examples?

It would be interesting to me if you could come up with a proof syntax and a program syntax - keep them separate and show a program and a second meta-program that declares things about it or proves things about it.

Assume that I can't already read the notations they use around here and describe whatever one you come up with.

I already realize that there are some programming constructs that will throw monkey wrenches into proofs... I think we both agree that you have to allow those and that you should also be able to tell your prover to assume things that it can't prove. Such as my trivial example that a REPL will eventually end its loop (because the user won't want it to run forever).

As an aside I don't think that proof of termination is usually useful.

Prolog & Lambda Calculus

I have something like this: the proof language is like Prolog and the programming language is effectively a lambda calculus.

Types are inferred from the lambda calculus, and these become the terms in the meta-language.

I remember a talk...

that I've failed to find reference to. It was a Microsoft thing with a research version of their SAT solver that they never released. It was making proofs about routines in a subset of C#.

But I don't remember it showing any code, just talking about it.

They apparently used it to prove some properties about a security protocol routine. That was the selling point, that you could use proof to improve security. SAT solvers are supposedly pretty good at proving that there is no input that will break something, or finding an input that will.

It's a bit mind boggling though, the gap between proving something about some code using types and siccing an advanced SAT solver on the code to try to break it.

My initial enthusiasm for SAT solving has tempered

My SAT solving enthusiasm has tempered considerably since I started experimenting with it. The initial selling point was that it could solve problems with thousands of variables, or orders of magnitude more.

Turned out that all these variables are the result of translating a digital circuit into CNF. I.e., a NAND term 'x|y' becomes 'z <=> x|y' becomes '(-x \/ -y \/ -z) /\ (z \/ x) /\ (z \/ y)' according to the following table.

x y z
0 0 1
0 1 1
1 0 1
1 1 0

A digital circuit is trivially translated to CNF by assigning each gate output a variable and using the above encoding.
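
A quick sanity check of that encoding (a Haskell sketch enumerating all eight assignments):

    -- The three clauses hold exactly when z is the NAND of x and y.
    nandClauses :: Bool -> Bool -> Bool -> Bool
    nandClauses x y z = (not x || not y || not z) && (z || x) && (z || y)

    check :: Bool
    check = and [ nandClauses x y z == (z == not (x && y))
                | x <- [False, True], y <- [False, True], z <- [False, True] ]
    -- check == True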

All those 'thousands' of variables just imply a SAT solver is a very efficient manner of finding truth assignments over a digital circuit. But, unfortunately, it looks like all NP results hold, and maybe one can check only slightly more difficult circuits than through the old means of finding assignments over circuits. SAT solvers are simply efficient because the CNF representation is amenable to a solver written in C; i.e., the CNF representation fits neatly into arrays, and arrays can efficiently be accessed locally.

Looks like Knuth dove down the same rabbit hole, so I guess I am in good company, but fundamentally nothing much changed. It's just efficient local search.

SMT not just SAT

(Satisfiability modulo theories.) Not that I really understand SMT. SMT solvers can solve constraints over specific theories or combinations of them. So the programming one probably knows something about arrays and numbers, not just booleans.

I rented a text book on it, and was very sad to note that I tried to convert the rental into a purchase a few days too late to get rental as credit toward the purchase. $40 down the drain. >.>

That said I didn't read enough to understand the magic... And there are new algorithms that weren't in the book, more efficient ways to combine theories.

I have some thesis bookmarked though.

Microsoft claims to have an unreleased SMT solver for symbolic solutions to nonlinear equations that they claim is almost as powerful (and as fast) as Mathematica on them. I've forgotten what that kind of solver is called. Elliptical something :/

I remember some papers on solvers. Someone first sicced a boolean SAT solver on a bunch of constraint problems, treating each bit of the numbers involved as a separate variable. It beat every MiniZinc solver around.

Then he did another paper where he tried an SMT solver that had theories about numbers and maybe a few other things. It beat the simple SAT solver and also all of the traditional constraint solvers.

Note that the constraint solvers could use search space hints and the SAT and SMT solvers couldn't - but SAT and SMT still beat them.

SAT, QBF, to SMT

It'll likely depend on the problem. SAT solvers will likely 'discover' good ways of searching the search space if they are there, even strange strategies you wouldn't think of yourself, so the hints one can provide are probably superfluous.

But if you start doing SMT, which is roughly combining many techniques, the problem matters the most. If you're simply not doing a lot of propositional reasoning, a SAT solver becomes overhead over pure symbolic reasoning; it's likely that trivial. So I somewhat expect pure 'chaos' as the outcome of comparing SMT solvers.

Many problems can be SAT-solved efficiently

SAT solvers are commonly used in industry because, even though they take exponential time in the worst case, they're efficient for large classes of problems. In some domains, the "exponential time" worst-case is nowadays a mostly theoretical concern. One example I'm somewhat familiar with is discussed in http://arxiv.org/abs/1506.05198 (though I didn't read that particular paper yet).

Of course, if you have a genuinely NP-complete problem, translating to SAT probably won't help; but often it's easier to encode your problem for a SAT-solver than to figure out a specific algorithm specializing all the SAT-solving heuristics to your domain.

True, but also Cherry Picking

The models in the paper aren't very large. 64K variables sounds large but isn't when you keep in the back of your mind that that corresponds to a digital circuit of 64K gates; that's pretty small.

Moreover, the experiment is run on models which encode implication chains, I gather. That's not that difficult for a SAT solver.

SAT solving becomes exponentially harder on trivial problems like multiplication.

So. Nice application domain, though a bit cherry picked.

I never claimed SAT solvers are without their uses; but given the initial enthusiasm that solving NP problems might be within reach, I can only conclude they (mostly) didn't make a dent in those problems.

I don't remember

if it was Knuth or Dijkstra who hated Lisp, but one of them was made ill by the notion of algorithms that can do things that make proofs impossible, and said that the punning between data and code was an abomination.

It just shows that someone whose job is to prove things about algorithms has totally different needs than someone whose job is to write programs.

By the way, what I'm going to be doing for a few hours is to turn a normal language into a homoiconic one by adding an accessible parser and parse tree format and embedding such things in. I guess all these proof-obsessed people don't see much use for homoiconic languages. Prolog was homoiconic without being unreadable, but that sort of advance disappeared with it. Haskell isn't homoiconic, notice.

homoironic

As I recall, McCarthy defined Lisp in Lisp as a demonstration of mathematical superiority of the language.

homoironic? lisp 1.5 manual contradicts

On the very first page:

The second important part of the LISP language is the source language itself which specifies in what way the S-expressions are to be processed. This consists of recursive functions of S-expressions. Since the notation for the writing of recursive functions of S-expressions is itself outside the S-expression notation, it will be called the meta language. These expressions will therefore be called M-expressions.

Third, LISP can interpret and execute programs written in the form of S-expressions. Thus, like machine language, and unlike most other higher level languages, it can be used to generate programs for further execution.

The language was defined via a recursive function notation called M-expressions. A program in lisp was able to interpret a lisp program expressed as S-expressions, but this put it in a practical category alongside machine language rather than a vague "mathematically superior" category.

LISP had the practical advantage that you could write programs that would generate and then execute code.

SCHEME (after its AI lab days) did its best to turn its back on this practical utility and struggled to hint vaguely at some kind of quasi-mathematical utility. Because hygiene and lexical scope and CPS and Strachey or something.

Yeah

early standards of scheme didn't specify eval, though most versions had it.

Also I guess taking it out, or pretending it isn't there allows a bunch of optimizations and I think some people in scheme wanted to focus on the optimizations.

better metaprogramming

"LISP had the practical advantage that you could write programs that would generate and then execute code."

That's not enough. I want languages that have libraries to help you analyze code, transform it, specialize it, optimize it...

Why did the lisp community never get too deeply into metaprogramming?

For that matter, why not have a nice parser? At least the guy who invented FORTH had the excuse of not being an expert in CS.

Why did the lisp community

Why did the lisp community never get too deeply into metaprogramming?

Well, here's some interesting deep magic I noticed when researching my dissertation.

My topic, of course, was fexprs, and as I think I've remarked at some point or other on LtU, I've always had at least half in mind that an interpreter for a Lisp with fexprs might be run at "compile" time to generate the compiled image of a program, to interestingly useful effect. Let's put a pin in that and come back to it.

Turns out that Shriram Krishnamurthi in his dissertation — which I couldn't have helped noticing since he was on my dissertation committee — was proposing something he called micros, which are a bit like macros except that where macros do a transformation from source-language to source-language, micros do a transformation from source-language to target-language. When a micro wants to it can recursively invoke the compilation process to transform any source-language expression into the target-language, using a function called dispatch. Which is uncannily similar, in a sort of inside-out way, to what fexprs do. From my dissertation:

Each fexpr specifies a computation directly from the (target-language) operands of the fexpr call to a final result, bypassing automatic operand evaluation and thus giving the programmer complete semantic control over computation from the target language. Each micro specifies a translation directly from the (source-language) operands of the micro call to a target expression, bypassing automatic operand translation and thus giving the programmer complete syntactic control over translation from the source language. Thus, micros are to translation what fexprs are to computation. The parameters of the analogy are that micros bypass processing that would happen after a macro call, and are inherently syntactic; while fexprs bypass processing that would happen before a procedure call, and are inherently semantic. The analogy also extends to internal mechanics of the devices: micros [...] rely heavily on a function dispatch that explicitly performs source translations, compensating for the loss of automatic operand translations — just as fexprs (treated here) rely heavily on a function eval that explicitly performs evaluations, compensating for the loss of automatic operand evaluations.

When you start comparing fexprs at compile-time to micros... I suspect fexprs turn into a more fluent device subsuming the capabilities of micros.

I've done something along those lines

I've actually made a couple of macro systems with the specific idea in mind that I'm making a Kernel-style fexpr phase. It's pretty nice for defining local macros, because it's just a matter of appending a binding to a first-class environment.

I also return dedicated AST objects--much like a target language expression, but potentially with extra methods like listing the free variables or doing some visitor pattern code-walking. Since I haven't put my own languages to much actual use, I'm not quite sure what methods I'm going to want yet.

I'm gonna add this to an existing language

using a hidden code to code translator and taking advantage of the fact that it's a dynamic language.

Lua might not be to the taste of people here, but it's popular enough, has a tiny, fast tracing jit. And it has a guarantee of tail call elimination.

That ought to be enough to bestow on it all sorts of powers.

It also has the advantage of being really simple and small.

The one thing it's missing is threads. It has coroutines and exceptions though.

On the negative side a lot of its abstractions are very slightly leaky.

re: metaprogramming

Yes, not too long ago I noted code writing code would help some problems, so this metaprogramming and homoiconicity sub-topic is one I care about. In particular, I want something similar to what you want:

I want languages that have libraries to help you analyze code, transform it, specialize it, optimize it...

Such libraries would have been inhibited in the past by platform dependencies on where code is located and how it is accessed, specifically the file system when all code versions cannot fit in memory. (And even if you could fit every incremental rewrite in memory, you still need to output somewhere.) That's one reason I like the idea of a language knowing about an abstract file system (afs, or VFS virtual file system), so abstract storage manipulations can be written in portable form, only needing a platform specific mapping in each particular concrete implementation. It's weird when a PL delegates part of its semantics to however the OS file system happens to work.

In the mid 90's you would just run out of resources, like RAM, trying to virtualize where everything was stored in a moderate sized code project. Now it's not a problem at all, as long as you work in a low level language like C for the basic infrastructure. Using suitable path conventions in a tree-structured space, you can present code before and after analysis, transformation, specialization, etc., as located in different sub-spaces, then populate things on demand when needed, or access old persistent versions. When an abstract FS lives across sessions, you can do long-lived code versioning with early and later versions both reachable at once, and perhaps used by different processes.

My focus is mainly having both sync and async versions of the same codebase, at different paths, where one derives from the other via continuation passing style, plus binding to a specific process model. I want to write general algorithms in high level style in Lisp, then translate to C, then rewrite that in as many variants as needed, sometimes dynamically when debugging. (I wish that last version had been rewritten to instrument this stuff ... okay let's do that and debug the new process linking that version of code.) I want to keep all the intermediary results, showing the work, instead of just having one original source code and one magic executable at the end. Part of the point is to lower the activation cost needed to move some small thing sideways and try it out, without interfering with other stable versions.

earlier than Lisp 1.5

I was thinking of McCarthy's HOPL paper, which would be his memories in 1978 of the thinking that went into the design of Lisp leading up to Lisp 1. Worth rereading; pdf. I see he does mention proving things about programs and in that regard, pure Lisp. The passage I'd in mind would be just after that:

Another way to show that LISP was neater than Turing machines was to write a universal LISP function and show that it is briefer and more comprehensible than the description of a universal Turing machine. [...] Writing eval required inventing a notation representing LISP functions as LISP data, and such a notation was devised for the purposes of the paper with no thought that it would be used to express LISP programs in practice. [...]

S.R. Russell noticed that eval could serve as an interpreter for LISP, promptly hand coded it, and we now had a programming language with an interpreter.

Also relevant to the current discussion is a bit hidden in the last ellipsis there:

D.M.R. Park pointed out that LABEL was logically unnecessary since the result could be achieved using only LAMBDA - by a construction analogous to Church's Y-operator, albeit in a more complicated way.

re earlier than lisp 1.5

That's an excellent resource.

minimum program size

Typing adds very little to algorithms.

Do CS people ever think about the core of an algorithm and ask "what's the notation that can express an algorithm in the smallest possible form?"

Not make proofs about it, just express it.

Because that comes close to corresponding to what's important to people. The shorter a program is, often the easier it is to understand.

Typing adds cruft that doesn't get you to your answer. And no I'm not saying that lambda calculus that has nothing but closures is enough, I'm saying that you don't need to name types or specify them very often. You can have tuples and lists and arrays and numbers etc.

Name a language

after Andrey Kolmogorov.

utility of fixpoint combinators

I find fixpoint combinators quite useful in contexts where I want streamable code - e.g. a stream of low level commands for teleoperation, occasionally sending a short loop. There are simplicity advantages, too, e.g. with fixpoints we can use a simple macro expansion layer as an alternative to a full linker. And it's a lot easier to extract and reuse code that isn't entangled with a namespace. Avoiding named recursion isn't just about proofs. There is operational utility, at least for some domains.

"with fixpoints we can use a simple macro expansion layer as an

alternative to a full linker"

I don't understand what you said here. Can you give an example and explanation?

macro expansion languages

A typical example for name-based recursion:

factorial n =  
  if (n < 1) then 1 else 
  n * factorial (n - 1)

Here, the definition of factorial contains the name 'factorial' which is bound using an external semantics and operational model for namespaces. This code becomes entangled with the namespace. You cannot trivially eliminate the namespace, e.g. by naive expansion of each word into its definition. Attempting to do so:

factorial n =  
 if (n < 1) then 1 else 
 n * 
   factorial (n - 1)

factorial n =  
 if (n < 1) then 1 else 
 n * 
   if ((n - 1) < 1) then 1 else 
   (n - 1) * 
     factorial ((n - 1) - 1)

factorial n =  
 if (n < 1) then 1 else
 n * 
   if ((n - 1) < 1) then 1 else
   (n - 1) *
     if (((n - 1) - 1) < 1) then 1 else
     ((n - 1) - 1) * 
       factorial (((n - 1) - 1) - 1) 

...

Obviously, any such effort, performed statically, would be divergent. Thus, the burden of managing names and binding becomes deeply entrenched in our runtime operational model. Namespaces might be reified as runtime 'environments'. Fixpoint combinators can avoid this issue.

fix = λf.(λx.f (λv.((x x) v))) (λx.f (λv.((x x) v)))
primRec = λplus.λzero.(fix λr.λn.(if (n < 1) then zero else plus n (r (n - 1))))
factorial = primRec (*) 1

Every place a toplevel symbol is used, we can statically replace that symbol by its definition. In the operational model, this namespace containing 'factorial' and 'fix' essentially becomes a macro expansion layer. Of course, for cache friendly operation you'll still want to limit expansion and reuse compiled subprograms. But even there the clean staging is a simplifying aspect. Cache friendly compilation becomes a separable concern because it's no longer strongly entangled with namespaces or linkers.

More important to me is the ability to eliminate the namespace or treat it as a separate stage. This is convenient for a lot of domains where one might benefit from mobile code, streaming code, or dynamically generated code. Attempting to manage remote or time-varying namespaces is stateful, awkward, and (usually) buggy.

uhm

so you maximally expand every named function in place, recursively?

uh

and you call this a "namespace" problem and an alternative to linking...

uh

"Doctor it hurts when I do this"

"Don't do that"

All I can say is that it takes a mathematician to even think of something that naive as a solution to linking and optimization. The amazing thing is that your solution works.

Well it works until you try specializing it for constants, and take the parameters to your y-combinator as constants. Then you're back where you started.

so you maximally expand

so you maximally expand every named function in place, recursively?

Due to cache-friendly compilation issues, if you do maximally expand everything in place, you'll eventually need to apply a compression scheme to reduce common subprograms. Though, that isn't a bad way to go. It can be useful to expand everything, optimize as far as possible, then find common subprograms that aren't aligned with how the programmer thinks about it, but instead with how the optimizer thinks about it.

All I can say is that it takes a mathematician to even think of something that naive as a solution to linking and optimization.

This isn't really about optimization. It's primarily about having a simple model for mobile and streaming code and generated code, contexts where maintaining a namespace is awkward. ("Doctor, it hurts when I combine namespaces with streaming code." "Don't do that.")

And I'm a programmer, not a mathematician. Many programmers do favor simple solutions because they're less likely to be buggy or to interfere with other features. I've been burned by enough compiler bugs and feature interactions that I have trust issues with any language feature that isn't simple.

Well it works until you try specializing it for constants, and take the parameters to your y-combinator as constants. Then you're back where you started.

I used a Z combinator above. So it isn't a problem to specialize for a constant function. If you have both a constant function and a constant argument, I suppose you could evaluate the whole expression at compile time to its final result. (Modulo divergent loops, of course.) I actually use a variant on the Z combinator that contains 'f' only once, rather than twice, to avoid doubling the serialization cost of the partially applied (or curried) form. I haven't translated the variation I use to lambda calculus, though it should be straightforward.

If you perform full beta reduction in an optimizer, you will need to carefully handle expansion of loops. I think it's fine if the optimizer needs to be careful when unrolling loops. That isn't really a change from where we were before. Unlike namespace or linker layers, the optimizer is intimately concerned with evaluation semantics. I don't believe it's accurate to say that "you're back where you started".

I shouldn't get into this

1) because it's the tar-pit that's eating LTU
2) because I don't know anything about it
Do you hear a "but" coming? There's a "but" coming.

"The problem with proving software correct is that maybe Godel blew up that idea a priori."

I don't have a great impression of Godel's achievement, even without really understanding it. What I can see is the reactions it got: people were way too impressed with how sweeping it sounded, and so instead of subjecting it to mathematical rigor and changing their definitions until Godel's proof no longer attacks mathematics but just says something limited and useful, they just kept huffing the title like it was gasoline fumes to get high off of. Deep, man!

So you start with the assumption that propositions have to be true or false and provable and that blew up in your face... so break it down and turn it into something useful. Also don't break it down in a way that just confuses you until you are no longer capable of asking questions that lead to your paradox. If you say "hey, I've invented a notation so limited that I can't ask any problematic questions in it" that doesn't sound like much of an improvement. It would be better to understand how things go bad. No one threw away division just because it isn't defined at zero. No one threw away zero either. No one threw away subtraction just because you can write 3-7...

don't tease the sharp-witted human animals

What are you trying to get done? :-) Sometimes people like to talk about things because the vein of exploration is fun, not because the goal is reachable. When goals inhabit an abstract philosophical space, they move around a lot. (Smart guy pops back into a room, after a jaunt to alternate dimension, but now he has a facial tic. Cracking a faint grin he notes, "Wow, all your furniture holds still, I forgot how nice that was.")

Proving software correct requires too much perfection. But types can be used to tighten up tolerances in specs so misfit parts show themselves earlier, especially when folks tweak large bodies of existing code, playing with fire. It helps to be told whether round pegs are going in round holes, even if you aren't being warned a peg is made of wood when it needs to be steel.

The practical idea of tolerance, as in parts in mechanical engineering, is useful engineering, but it's fuzzy. It's tempting to hope digital tolerances need not be fuzzy as well, because the possibility of perfection seems present, even when it's not, because change is also ever present.

There are only a few people understanding Godel

Well. Don't worry about not understanding Godel's incompleteness results, because there are really not a lot of people in this world who do understand them. You're in good company. It's a bit like 'understanding' QM.

I personally don't even understand why he hasn't simply proven inconsistency instead of incompleteness.

That thing that bugs me about Gödel

I am bored and things are slow on LtU. So, I am going to ask that thing: That thing that bugs me about Gödel. From the Wikipedia page [1]:

"If p were provable, then Bew(G(p)) would be provable, as argued above. But p asserts the negation of Bew(G(p)). Thus the system would be inconsistent, proving both a statement and its negation. This contradiction shows that p cannot be provable."

Why? Why not simply stop before the 'This'-sentence and simply conclude that the system is inconsistent?

No idea.

[1] http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems

contrapositive

The contrapositive of the implication (P provable implies inconsistent) is (consistent implies P not provable). In other words, that final italicized conclusion is assuming the system is consistent. Does that help?

Yeah. But why?

Why not simply drop the assumption that the system is consistent and write it like this:

"If p were provable, then Bew(G(p)) would be provable, as argued above. But p asserts the negation of Bew(G(p)). Thus the system is inconsistent, proving both a statement and its negation."

Isn't it a bit odd to derive unprovability since otherwise the system is inconsistent? I see no reason to assume the system is consistent?

The first incompleteness

The first incompleteness theorem states that a system cannot be both consistent and complete. This is equivalent to the implication (consistent implies incomplete). Your quote gives the proof (consistent implies p not provable). Wikipedia also includes the other half, (consistent implies (not p) not provable). Combining these two gives the theorem, as incompleteness means that there is some p, such that neither p nor (not p) is provable.
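Spelled out (my paraphrase), with Con for "the system is consistent": the two halves are Con ⇒ ⊬p and Con ⇒ ⊬¬p, and combining them gives

Con ⇒ (⊬p ∧ ⊬¬p)

where the right-hand side is exactly incompleteness, witnessed by p.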

I get that

But I can simply drop everything about completeness, drop the whole definition even, assume that systems are either consistent, or not, and simply interpret that Godel derived an inconsistency in an inconsistent system.

Much more direct, if you ask me.

But isn't completeness the

But isn't completeness the whole point? I'm not sure I get what you're driving at here. What's the point of the interpretation "inconsistent systems are inconsistent"? This would seem to have no information content. Is it that, if this is "all" Gödel showed, then we don't need to feel uncomfortable or worried about the result?

Looks like cyclic reasoning to me

In order to get out of inconsistency of math, Godel introduced the notion of incompleteness, and subsequently proves that unprovable sentences must exist.

What if I don't try to get out of inconsistency with a definition of incompleteness but simply go back pre-Godel, drop the notion of completeness, and assume systems are either consistent or not?

I simply don't see what's wrong with an interpretation that he has proven (logic over) arithmetic inconsistent. As in: maybe the quantifiers are wrong, or something else.

Maybe we should all be finitists?

What's the difference?

Pre-Gödel, it was hoped that it may be possible to formalize mathematics in a way that was complete; i.e., that for every P, either P or its negation is provable. Gödel showed that this is not possible. That is, we had to "drop the notion of completeness". I wouldn't phrase it as Gödel introducing some new notion called incompleteness. Rather, he showed that this old notion called "completeness" was doomed to failure, and had to be given up. I don't see a difference in the result from what you suggest: drop the notion of completeness, and assume systems are either consistent or not. (This latter point resting on the law of excluded middle).

Guess a difference of focus on Consistent and Complete Logic

Some focus on logics where maybe everything can be proven mechanically, I suppose.

The idea that mathematics

The idea that mathematics should be founded on a set of axioms — and thus, that everything that's true should follow from those axioms, completeness — was less than a hundred years old when Gödel came out with his Theorems. Math had always been based on intuition, and there was a crisis going on in mathematical foundations in the early nineteenth century because they'd been obliged to swallow complex numbers and had indigestion; trying to justify complex numbers using intuition was not working well; the closest they came was the geometric interpretation of complex arithmetic in the Gaussian plane. Folks were looking for some kind of number that would have a 3D geometric interpretation, because they really, really wanted to be able to treat a directed magnitude in physics as a "number", and when Hamilton came up with numbers that were like that (quaternions) using an axiomatic approach, the axiomatic approach gained hugely in credibility. Anyway, the point is that the expectation of completeness was a fairly recent thing when Gödel's Theorems appeared; you might even date it from Frege's work, in which case it was about as old then as Lisp is now.

One more attempt

This is a response to your final parenthetical question, and a last attempt to help clarify things, which I think I've so far failed to do.

The structure of the proof is to derive a contradiction from the assumption of the provability of P (and also, of not P). "Provability" is a key assumption, because the code of the proof of P is a necessary ingredient to derive the contradiction. So we can't "simply stop" before mentioning provability; it's what gets us off the ground.

Two assumptions lead to the contradiction. We assumed consistency and we also assumed that we had a proof derivation for P in hand. It's not logically sound to say the first assumption, consistency, is definitely the wrong one. ("We've proved inconsistency. Let's stop here.") We are also forced, out of logical necessity, to consider the possibility that, perhaps we didn't have the proof code for P in hand after all.

No. I got it.

[EDIT: Ah. Forget it. That didn't make much sense.]

Gödel's results invalid; Church/Turing proved incompleteness

Gödel's results are invalid because he made use of a "self-referential" proposition that leads to inconsistency in mathematics. Using computational undecidability of the halting problem, Church/Turing provided a valid proof of inferential incompleteness of the Peano/Dedekind theory of natural numbers.

See page 14 of Inconsistency Robustness in Foundations.

To try to be a little more

To try to be a little more helpful: it would be fine to leave that first implication as (P provable implies inconsistent), since, combining with (not P provable implies inconsistent) gives the main theorem in the form (complete implies inconsistent). This would be logically equivalent to the argument Wikipedia gave (at least with classical logic).

Wittgenstein: Gödel's "self-ref" prop leads to inconsistency

Wittgenstein showed that Gödel's "self-referential" proposition leads to inconsistency in mathematics:

Let us suppose I prove  the improvability (in Russell’s system) of P
[i.e., Suppose ⊢⊬P where P is Gödel's “self-referential” proposition "I am not provable." so that P⇔⊬P] 
then by this proof I have proved P [i.e., ⊢P].
    Now if this proof were one in Russell’s system [i.e., ⊢⊢P]—
I should in this case have proved at once that it belonged [i.e., ⊢P] and
did not belong [i.e., ⊢¬P because ¬P⇔⊢P] to Russell’s system.
    But there is a contradiction here! [i.e., ⊢P and ⊢¬P]

See page 7 of Inconsistency Robustness in Foundations.

My understanding of your

My understanding of your reading of that quotation (I'm not concerned with Wittgenstein's intention) is that it proves this:

(*) From PM ⊢ ¬"PM ⊢ P", which is equivalent to PM ⊢ P, it follows that PM ⊢ ¬P. In other words, supposing PM ⊢ ¬"PM ⊢ P", it follows that PM is inconsistent.

I'm using PM for Principia Mathematica (i.e., Russell's system) and "PM ⊢ Q" (quotation marks included) as shorthand for Prov(#Q).

So far, so good. This is just a restatement of part of Gödel's proof. Now, if PM is consistent, then it follows that our supposition, which was equivalent to PM ⊢ P, must have been wrong. So didn't we just prove PM ⊢ ¬"PM ⊢ P" ?! I.e., didn't we just prove our supposition, so that the conclusion of inconsistency necessarily follows? No, we did not show that ¬"PM ⊢ P" follows from the rules of PM. At the very least, we'd need to include our assumption that PM is consistent in the context, to get something like

PM ⊢ (¬"PM ⊢ ⊥" → ¬"PM ⊢ P").

This leads in, of course, to the second theorem ...

It's possible I've misread your reading, and ultimately it was my guess as to what the "inconsistency in mathematics" could refer to, as it wasn't explicitly spelled out. I assumed, in particular, that it had to mean more than just (*), as otherwise you're just agreeing with Gödel.

Do you understand the border of Consistent and Complete?

I briefly looked at consistent and complete arithmetic. Informally, I suppose I can abbreviate that as: with addition alone (e.g., Presburger arithmetic), consistent and complete logics exist; once you start multiplying, it ends.

Is there some informal easily grokkable reason why this is? Or did I get it wrong?

NVM

I think I got it.

Wittgenstein: ⊢P and ⊢¬P where P is Gödel's "self-ref" prop

By the proof that I quoted above, Wittgenstein showed ⊢P and ⊢¬P where P is Gödel's "self-referential" proposition.

Ok. This confirms that your

Ok. This confirms that your reading was what I thought it was.

Was that all you sought to accomplish with this response?

The original comment of yours I replied to made a strong claim. I presumed your intent in posting was to convince others of this claim. My response to you was offered in the spirit of helping you with this goal: I took pains to carefully spell out the understanding I came to of what you said, and also to carefully explain exactly why I was not convinced. This is so that you can change my mind! Point out where I misunderstood, where my attempt to be precise didn't match your intent, where I took a step of logic that wasn't valid.

Your reply was fine; it did give me a small piece of information about what you think. But, surely you couldn't have thought it would do anything to help convince me? Do you care about convincing anyone? I'm just one person, and I'm no expert; there's no need to engage me or my analysis. But you don't seem to engage anyone else, either.

There's a glaring missing reasoning step here. How do you get from "Suppose X, then ⊢P and ⊢¬P" to "⊢P and ⊢¬P"? I proposed one possibility, but also showed that it was an error. What's your alternative?

Wittgenstein: Suppose ⊢⊬P; P is Gödel's “self-referential” prop

Hi James!

I was responding to your question "What is the contradiction?".

Summarizing Wittgenstein:

If we suppose Gödel's 1st incompleteness result is provable
(i.e., suppose ⊢⊬P where P is Gödel's “self-referential” proposition), then
there is a contradiction.

Regards,
Carl

Final response

Notice that this summary is quite different than the one in your previous comment. It's important to be precise! Your new summary still looks inaccurate, for what it's worth.

The understanding I came to, and carefully laid out in my first reply to you, remains intact. I'm not being obstinate here. This latest version of your assertion is more or less what I suspected you thought in the first place, and what I responded to.

Shall we leave it at that? Now that my replies have hit 0 technical content, I don't see the point of continuing, on my side. You could always engage me on the substance of my initial reply...

Wittgenstein: Gödel's "self-ref" prop leads to inconsistency

James,

My summary accurately summarizes the structure of Wittgenstein's argument from the box in my post.

Unfortunately, the notation that you used in your post is not the best :-(
Your best bet may be to abandon this notation and go over to something more modern.

I will be happy to respond to any questions that you may have about modern, more precise notation.

Regards,
Carl

notation

Get back to me with a working computer implementation of Classical Direct Logic, so that I can write proofs and have the computer verify them. Then I'll implement the first incompleteness theorem for you. I don't think anything else would satisfy us both as to using "modern notation" and as to being "precise."

I am not confused. I have no questions for you. I challenged you to convince me, and gave you all the help I could to do so. Obviously, I remain unconvinced. If you can point to something specific in my notation that was not precise or was not modern, I'd be happy to clarify.

Direct Logic proves the 1st incompleteness theorem for ℕ

Classical Direct Logic clearly proves Gödel's 1st incompleteness theorem for ℕ using the proof of Church/Turing. See page 14 of Inconsistency Robustness in Foundations.

However, Gödel's 2nd incompleteness result is false because Mathematics proves its own consistency :-)

Right, you won't be

Right, you won't be surprised that the theorem is provable. But the point is I would implement Gödel's proof. Not that I think if you saw a computer-verified version of Gödel's proof using your own system, you'd give up your claim that it is invalid.

If p were provable

"If p were provable, then Bew(G(p)) would be provable, as argued above. But p asserts the negation of Bew(G(p)). Thus the system would be inconsistent, proving both a statement and its negation. This contradiction shows that p cannot be provable."

Why? Why not simply stop before the 'This'-sentence and simply conclude that the system is inconsistent?

Abbreviating that quote: "If p were provable, then [the system would be inconsistent]." We can't just upgrade that to "the system is inconsistent" without justification. (If for some reason we knew p were provable, that would be justification.)

See the other thread

In my view, he simply needs to show his logic consistent first. [found the bug in my reasoning, I read the Wikipedia page argument wrong.]

Needs to show which logic

Needs to show which logic consistent first? His meta-logic?

I wouldn't know

Probably both. I wouldn't know. Honestly, I just read Godel's argument superficially. And somehow I took the contradiction constructively; i.e., as a procedure of turning a provable into an unprovable, I guess.

But it's all very superficial reasoning where I liberally 'erase' the encodings.

It all made a lot more sense

It all made a lot more sense to me (and somebody around here was pointing me to where somebody else had done the same earlier) to cast Gödel's results in terms of Turing machines, making it possible to do a proof that's scarcely more complicated than the routine proof that the halting problem is undecidable. I worked it all out in the form of a blog post.

I am not entirely sure that you can do that

I wouldn't know if you can look at Godel through the eyes of Turing machines, to be honest. Guess I should buy GEB, it's all a long time ago.

Thing is, I can read Godel's argument and superficially agree with it too.

Too sloppy I guess.

When I wrote my

When I wrote my dissertation, I extracted this overview of the proof:

For sufficiently powerful M, one can construct a proposition A of M that amounts to "this proposition is unprovable". If A is proven, that would show that A is false, thus would constitute a proof of not-A; if not-A is proven, that would constitute a proof of A; so if M is consistent, both A and not-A are unprovable. (This is Gödel’s Theorem, that M must be either incomplete or inconsistent.) But then, a proof that M is consistent would constitute proof that A and not-A are unprovable; and a proof that "A is unprovable" is a proof of A, and a proof of A means that M is inconsistent.

What's unsatisfying about this, to me, is that it doesn't explain how proposition A is supposed to "amount to" this-proposition-is-unprovable, and I was motivated to write a proof where the only details omitted are unimportant tedium, rather than key points like that. It's important to make clear that there is no explicit self-reference involved; it's just that there must be some arithmetic function, which the logic M can consider statements about, that behaves identically to M.

Constructive procedures

There are three constructive procedures in the proof (of Rosser's version):
* the procedure that, given the object system, produces the sentence P;
* a procedure taking as input a proof of P and producing a proof of not P;
* in Rosser's version, a procedure going the other direction, taking a proof of not P and producing a proof of P.

We have all this without assuming consistency. Note, you refer to an additional "assumption" of possible incompleteness, but this is an absence of an assumption (of completeness); it doesn't require assuming anything. The only conclusion we can draw at this point is that, if we could ever find inputs for the last two procedures, we would then know for sure that the object system is inconsistent. The detail that the final two procedures operate on proofs and not on sentences/propositions is a rather critical one. If you want to say that the object system is either consistent or it isn't, that is an extra step. The case of inconsistency is not very interesting, as the object system then proves all sentences. Nothing has been done to rule out the other case, that of consistency. We haven't constructed an inconsistency yet, not without an input to one of the last two procedures.
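Schematically (my own shorthand, with Proof(·) standing for a proof object in the object system):

construct : ObjectSystem ↦ Sentence   (produces P)
flip1 : Proof(P) ↦ Proof(¬P)
flip2 : Proof(¬P) ↦ Proof(P)

Nothing here produces a proof object out of thin air; to run flip1 or flip2 you must already have one in hand.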

Yah well.

I guess a proof of inconsistency of the object logic would render the meta logic inconsistent, as well. And Godel would have proven nothing.

Nah. I found it. It's the wording of the wikipedia page. It reads somewhat like "we found a contradiction, that cannot be, therefore we go over to concluding unprovability" whereas they mean "assume something, contradiction, therefore not something. Repeat for reverse case. Unprovable."

Got it.

Wittgenstein: "prove the improvability of P" means "⊢⊬P"

By "prove the improvability of P," Wittgenstein meant ⊢⊬P.

Assuming Godel's ⊢⊬"I am not provable." leads to inconsistency

Wittgenstein's argument was that assuming Godel's result ⊢⊬"I am not provable." leads to inconsistency in mathematics.

I wouldn't worry too much

I wouldn't worry too much about it. Hewitt's claims are extraordinary, which, as the saying goes, calls for extraordinary evidence; extraordinary evidence has failed to emerge, despite whatever Hewitt or others here (including me for years, until recently) have done. Until and unless it does emerge, this stuff won't be useful for implementation or theory.

Necessary to read an article in order to properly critique ;-)

John,

It is necessary to read and understand an article in order to properly critique it ;-)

Regards,
Carl

Total FP for theorem-proving

A language whose computations always terminate (and therefore has no general recursion or Y combinator) is useful because the type tells us that we *will* get a value--not only that we might. When using the type system as a logic, if we could reach arbitrary conclusions (types) by going into infinite loops, the logic would be unsound.
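To make that concrete, here is a tiny Haskell sketch (my example; nothing below is specific to the languages mentioned next): with unrestricted recursion every type is inhabited, so under a propositions-as-types reading a well-typed term no longer counts as evidence of anything.

-- With general recursion a term of *any* type can be written:
bogus :: a
bogus = bogus   -- "proves" every proposition by looping forever

A total language rejects this definition because it carries no termination evidence, which is exactly what keeps the corresponding logic sound.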

When there's no general recursion (or any other chance of nontermination), it's called total functional programming. That is, the functions are total functions, not partial ones. We can see this in Coq, Agda, Epigram, and Idris. Agda and Idris are particularly notable for being designed for programming, as opposed to Coq's theorem-proving focus.

I think Idris has effect types and syntactic sugars so that it's easy to switch between a verifiable total FP style and a flexible ML-like style for different parts of a program.

(Anyway, this is a thread started by Hewitt, so it's probably going to involve extraordinary claims at some point. Total FP is already a pretty hard sell if you're not already trying to do program verification using a type system.)

No general recursion

I understand the restriction of general recursion and how termination can be guaranteed by things like structural recursion, for example languages like Epigram.

This sounds like something different, but it could just be that it is not being explained very well.

Procedures in ActorScript can be partial

Procedures in ActorScript can be partial. For example Factorial.[0] does not converge using the following definition:

Factorial.[n:Integer]:Integer ≡
 n=1 � True ⦂ 1
        False ⦂ n*Factorial.[n–1]

Why doesn't that lead to

Why doesn't that lead to logical inconsistencies?

Total math functions: different from executable procedures

Total mathematical functions are different from executable procedures with a different notation:
* Total mathematical function f:NaturalNumber→Boolean so that f[3] is either True or False
* Potentially partial procedure g:NaturalNumber↦Boolean so that g.[3] might diverge

Is that what he's saying?

Just something like "don't write loops unless you can prove that they will end?"

Is there something horrible hiding in there like "force a type system on people that makes it hard to write loops?"

You know, it's impossible to define a loop in SQL, so there's long been a place for limited languages, but I wouldn't want to write an application in one.

Ask him how he would implement a REPL in his ideal language. One that loops till the user asks it to end.

Termination

Actually, he said my definition of the Y-combinator was wrong (it's not, but it is the lazy version, so it was the wrong one for an eager language); he did not say it would be rejected. Although, on re-reading, the choice of the word "erroneous" is a bad one, as it is ambiguous as to whether the definition is valid (just not a definition of the Y-combinator) or invalid.

I was asking about this whole topic

When he says "fixed points considered harmful" is he talking about flow of control, ie, just looping?

Types are fundamental to programming languages

Types are fundamental to programming languages and consequently untyped programming languages are a mistake.

Fixed points hark back to the invention of programming languages in the lambda calculus and were used to show that the lambda calculus is a universal programming language for functions on numerals.

However, types mean that fixed points do not exist.

Some type systems permit fixed points.

I agree types are fundamental, and I also prefer strongly typed languages. However other type systems admit fixed-points, so to say "types mean that fixed points do not exist" is clearly wrong. So what do you mean? How does your type system differ from other type systems such that fixed-points cannot exist? Surely if you allow general recursion fixed points do exist (even if not explicit).

The universal type does not exist

The universal type does not exist.

Consequently, there are no fixed points.

Universal Quantification.

Hold on, as most type systems do not have a universal type (a '*' or 'anything' type), it sounds like you are talking about the universal quantifier. Does that mean you cannot write generic algorithms, for example the identity function:

id : forall a . a -> a

Is really useful, but the next simplest generic function is also useful:

swap : forall a, b . (a, b) -> (b, a)

How would you express these functions in ActorScript or DirectLogic?

Parameterized types

The following is an identity procedure:

Identity◅aType▻.[x:aType]:aType ≡ x

The following is a swap procedure:

Swap◅aType, anotherType▻.[x:aType, y:anotherType]:[anotherType,aType] ≡ [y,x]

Implicit Universal Quantification

Isn't the universal quantification implicit?

forall a . Identity<a>.[x:a]:a = x
forall a, b . Swap<a, b>.[x:a, y:b]:[b, a] = [y, x]

How about:

Y<a>[f:[a]->a]:a = f.[Y.[f]]

He probably needs at least the eager variant

In eager languages you need to think about the order of evaluation when using a fixed point, otherwise it'll just unwrap the definition of Y an infinite number of times. By making an extra argument explicit you can make sure you get a fixed point operator with the right semantics.

let rec fix f x = f (fix f) x (* note the extra x; here fix f = \x-> f (fix f) x *)
 
let factabs fact = function   (* factabs has extra level of lambda abstraction *)
   0 -> 1
 | x -> x * fact (x-1)
 
let _ = (fix factabs) 5 

I imagine ActorScript is eager; therefore, at minimum, he needs to supply an extra argument. But then, I gather, his type system blocks the definition.

Lazy vs Eager

Sorry, too much Haskell. I prefer eager though. I am trying to understand how it blocks the definition.

You need to think in the Operational Model

Roughly, you need to think in the operational model of both languages, which can be thought of as evaluating thunks (stack frames in the heap).

If you have

fix f = f (fix f)

then an application gets rewritten eagerly like

fix f x
= f (fix f) x
= f (f (fix f)) x
= ...

Because fix is a unary function, fix f is an application with all of its parameters supplied, and eager evaluation forces it to be rewritten, so the unfolding never terminates.

If you have

fix f x = f (fix f) x

then

fix f x 
= f (fix f) x (* fix f cannot be rewritten since it lacks an argument *)
= ...         (* f is rewritten *)

And fix f cannot be rewritten since it lacks an argument. It's a curried (partial) application, so it is pushed as a thunk to the heap, and f is evaluated instead.

(Guess I overdid delayed evaluation in my own definition. I'll take another look at it at some point.)

Ah. I do it a bit differently in Hi.

 fix = [ f -> f [x -> (fix f) x] ]

Then

fix f x
= [ f -> f [x -> (fix f) x] ] f x
= f [x -> (fix f) x] x
= .. (* f gets rewritten since the argument is a lambda term *)

You can pick either. Guess the ML one is neater but that Wikipedia page didn't exist when I needed my own eager fixed points.

fix and hi

Are you still working on Hi? Your profile reference to www.hi-language.org has been broken for a long time.

A Z-combinator (eager) fixpoint can be expressed with types, too.

fix :: ((a→b)→a→b) → (a→b)
fix f = f (fix f)
-- is equivalent to
fix f a = f (fix f) a

In any case, depending on the namespace to tie the knot here seems like cheating to me. You're essentially using the namespace as an implicit fixpoint. You aren't really implementing fixpoint, you're just leveraging one that the compiler implements. This would be obvious if you were to model the namespace explicitly.

Y as a unary or dyadic combinator. And Hi.

The only difference is whether you define Y as a unary or dyadic combinator. If you define it as a dyadic combinator, you can rely on (some languages') operational semantics that 'fix f' simply will not be expanded but evaluation will be deferred until fix receives two arguments.

Namespaces have nothing to do with it?

Yah. After a five year break I decided to implement a Hi interpreter/compiler in C++ since the bootstrap failed to deliver performance. (It's one of those nasty features of bootstrapping: you can only measure performance after you've gone through the whole process.)

I was roughly three orders of magnitude, 1000x, off of what I think I would need to deliver an acceptable tool. Fixing the bootstrap would require an effort similar to Haskell/ML's; i.e., twenty to a hundred man-years, which I don't have.

So I decided to write an interpreter/compiler in C++. Fast interpreter, slow language. I'll hopefully end up with something like Python/Mathematica speed.

But it's tedious and I often don't really feel like it, since I already went through the whole process once.

My blog has some information on it.

namespace (singular)

Typically 'namespace' is used in context of distinguishing many namespaces, e.g. `math:tan` vs. `color:tan`. But the difference between zero and one namespace is more interesting. Many languages, such as lambda calculus, don't have even one namespace.

With `fix f = f (fix f)` you're relying on a fixpoint in the space of named functions to bind the 'fix' on the LHS to the 'fix' on the RHS. The fixpoint behavior is pushed up to your compiler or linker, which cannot naively inline the definition in place of every symbol. (In this sense, the namespace becomes relevant to the operational model.)

You could express fixpoint without relying on the namespace, but it would typically be more verbose. E.g. `Z = λf.(λx.f (λv.((x x) v))) (λx.f (λv.((x x) v)))` in lambda calculus.

I'll check out your blog on Hi language. Thanks.

Oh, no problem

The blog is more of a log which I use to order my thoughts sometimes as well as log what I was thinking about at what point during the development.

On second thought, forget about the blog. It's not informative enough and the language didn't change. There's a working link with a bootstrap (don't download it) and some pdf's on the language if you're wondering about what this small declarative language is about.

But I doubt I'll ever find a lot of users. Haskell/ML have evolved far beyond what is reachable for me, and I am not sure the combination pure/eager is even what people want. People want to write for loops in scripting languages.

The language will probably remain an oddity unless I can find some use-case for it.

Implicit Universal Quantification 2

If I define:

Identity<a>.[x:a]:a = x
Swap<a, b>.[x:a, y:b]:[b, a] = [y, x]

Are the 'a's different? If they are then they must be introduced with an implicit universal quantifier, like in Prolog clauses.

Erroneous definition of the Y combinator

The following is not the definition of the Y combinator:

  
Y◅aType▻.[f:([aType]↦aType)]:aType ≡ f.[Y.[f]]

Missing the point

Sure, let's use the eager variant then:

Y<a>.[f:[[a]->a]->[a]->a)]:[a]->a = f.[Y.[f]].[x]

Postponing execution doesn't help implement Y combinator

Postponing execution doesn't help implement the Y combinator in ActorScript because the following is not the definition of the Y combinator [edited for clarification]:

  
Y◅aType▻.[f:([aType]↦aType)]:aType ≡ postpone f.[postpone Y.[f]]

Incorrect

What do you mean, incorrect? The Y-combinator is not incorrect because it exists, and the Haskell version of that code works fine:

{-# LANGUAGE ScopedTypeVariables #-}

fix (f :: (a -> a) -> a -> a) = \(x :: a) -> (f (fix f) x :: a)

fact f x | x == 0 = 1
fact f x | x /= 0 = x * f (x - 1)
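For what it's worth, that definition runs as expected; a usage line like the following (my addition) prints 120:

main :: IO ()
main = print (fix fact 5)   -- 5 * 4 * 3 * 2 * 1 = 120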

Is there an ActorScript interpreter that I can use?

Can you explain to me why the above definition of the Y combinator would not work. What would the type error message be?

The Y combinator *cannot* be defined using ActorScript

The Y combinator cannot be defined using ActorScript:

 
SingleArgument◅aType▻ ≡ [aType]↦aType

Helper◅aType▻.[f:SingleArgument◅aType▻]:SingleArgument◅aType▻ ≡
                [x:([⍰]↦SingleArgument◅aType▻)]→ f[x.[x]]
 
Fix◅aType▻.[f:([SingleArgument◅aType▻]↦SingleArgument◅aType▻)]:SingleArgument◅aType▻ ≡ 
                (Helper◅aType▻.[f]).[Helper◅aType▻.[f]]

The above attempted definition of the Y combinator does not work because the missing strict type ⍰ does not exist.

Why is that?

Why is that type not:

x : ([SingleArgument<aType>]->SingleArgument<aType>)

Also I don't see what is wrong with this definition:

Y<a>.[f:[[a]->a]->[a]->a)]:[a]->a = (f.[Y.[f]]).[x]

Y combinator

The Y combinator is far from the only fixpoint combinator. Can you prove ActorScript expresses none of them? As I understand it, ActorScript is Turing complete. If so, you certainly can model within ActorScript a language where you can express a fixpoint combinator and an interpreter for it.

Partial functions exhibit the same 'considered harmful' problems you ascribed to fixpoints. So I'm not understanding why your attack focuses on fixpoint combinators in particular, much less the Y combinator even more specifically.

I know of only one logic in which proof-by-failed-attempt might be considered valid. It's a very inconsistency robust logic, too. But, for various reasons, drunken logic doesn't get much respect from the academic community.

No fixed point implementation can be expressed in ActorScript

It can be proved that no implementation of Fix can be expressed in ActorScript.

But a formal proof can be tedious :-(
The basic idea is that every ActorScript type has finite rank but an implementation of Fix requires a type that is not of finite rank.

Hewitt, you seem to avoid ...

But the proof is tedious :-(

Yeah, right. At every turn you provide an excuse for avoiding clarity and rigor.

Now tell me again to go ask a specific question about the N+1th version of this or that paper.

How about this?


SingleArgument◅aType▻ ≡ [aType]↦aType

Fix◅aType▻.[f:([SingleArgument◅aType▻]↦SingleArgument◅aType▻)]:SingleArgument◅aType▻ ≡
 [x:aType]↦(f.[Fix.[f]]).[x]

Doesn't this one work?

I'm still wondering about this fixpoint definition in particular. Doesn't this work?

Several attempts by Keean Schupke, marco, and dmbarbour elsewhere in this thread have had simple flaws, such as forgetting to introduce the variable x before using it. Maybe my definition has a flaw too, but I don't see it yet.

(This came up again when I mentioned my lingering doubt in this thread.)

Hey! My fixpoint combinators

Hey! My fixpoint combinators are purrfect!

Purrfect

Yeah. :) I don't see anything wrong with them except that they're not in ActorScript notation. I had assumed Hewitt was ignoring them on that basis.

Apparently my writing in ActorScript notation didn't make much difference, so I don't know anymore!

re doesn't this one work

I'm still wondering about this fixpoint definition in particular. Doesn't this work?

I don't think so. How would you reconcile it (to create a Goedel fixpoint sentence) with the rules for constructing sentences at the bottom of page 27? The one that begins "x:Sentence ⇔ x constructed by the rules below:"

Not talking about Gödel here

Drat, I got the lambda syntax wrong. The syntax []↦ is for executable procedure types, and the syntax []→ is for executable procedure lambdas. I also wrote Fix.[f] when perhaps I should have written Fix◅aType▻.[f]. Here's an update:

SingleArgument◅aType▻ ≡ [aType]↦aType

Fix◅aType▻.[f:([SingleArgument◅aType▻]↦SingleArgument◅aType▻)]:SingleArgument◅aType▻ ≡
 [x:aType]→ (f.[Fix◅aType▻.[f]]).[x]

Now for something else you're saying...

to create a Goedel fixpoint sentence

In this subthread, I'm not talking about fixed points for sentences, nor of fixed points for ActorScript's total mathematical functions. I'm just talking about fixed points for ActorScript's partial executable procedures.

Hewitt gave an example of a recursive partial procedure Factorial, so it seems like even if this combinator exists, it won't cause any emergence emergency.

However, Hewitt specifically argued that this fixed point operator did not exist, with an argument based on showing a failed implementation attempt. My reply offers an implementation with the same type signature as that attempt. Hewitt responded by objecting to a slight (broken) variation. (The variation uses x without even establishing a binding for x first. (EDIT: Hewitt has fixed this.)) Is his remaining objection that my definition doesn't work, that his variation doesn't work, or just that it isn't "the Y combinator" in particular?

Anyway, this stuck in my mind because it's a good example of why I'm suspicious of Hewitt's arguments of the form "The missing type ⍰ does not exist." As (hopefully) constructive advice for Hewitt, I think the proofs that use this technique need to be rewritten... or at least elaborated upon to explain why the failed attempt is the only attempt that could have worked.

Direct Logic uses typed recursion instead of the Y combinator

This is an excellent question :-)

Direct Logic uses typed recursion instead of the Y combinator.
It is easy to prove that using typed recursion in Direct Logic, every type has finite rank.

The claim is that fixed points cannot be defined in Direct Logic because of type constraints, e.g. the definition of the Y combinator does not work in Direct Logic. Unfortunately, I do not have an elementary proof of the claim ready to hand :-(
However, you may be able to construct one or find one in the literature :-)

Edit: Also, it is possible to give a signature for Fix◅aType▻ (but not an implementation) so that Fix◅Sentence▻ doesn't create a "self-referential" sentence.

I almost see it

This is an excellent question :-)

And that's an informative response, thanks!

I know that in the simply typed lambda calculus, fixed point combinators don't exist because every type has finite rank. I had assumed the Foo◅aType▻ brackets were something like ML-style polymorphism, which would have made this combinator possible. However, I can imagine at least one variation that wouldn't support fixed points, and that's if every occurrence of Foo◅aType▻ were automatically replaced with its definition (much like Lisp macroexpansion).

I'm not sure if that will give me the proper intuition of how this works in your system, but it gets me unstuck, I guess. :)

It's extremely interesting to think that this is a system with a sort of graph-like scoping policy for recursive executable procedures, but also strict types (reminiscent of total FP) that happen to prohibit a fixed point combinator.

David was talking about how most implementations of fixed point combinators rely on the recursive nature of the namespacing mechanism, and how he prefers using fixed points directly. It looks like ActorScript decisively commits to recursion-by-namespacing, drawing a correspondence between interacting Actors and interacting definitions. (I hope I'm not again seeing something that isn't actually there....)

Types have raised very subtle issues ever since Russell

Thanks!

Types have raised very subtle issues ever since Russell started his investigations.

Of course, it is not settled that ActorScript and Direct Logic have got types right ;-)

As mentioned at the top of this topic, untyped fixed points can do a lot of damage if they exist.
Consequently, it would be great to have a formal proof that they don't exist in ActorScript and Direct Logic.

PS. Of course, it is easy to prove that fixed points do not exist in the simply-typed lambda calculus.

re not talking about Goedel here

I have nothing to add to Hewitt's reply, other than that I was mistakenly assuming you were trying to define Y for the purpose of creating a Direct Logic realization of Goedel's second incompleteness so-called theorem.

Hah

so-called theorem

Now you're just trolling.

Not trolling: Gödel's "so-called theorem" 2nd Incompleteness

Gödel's "Second Incompleteness Theorem" has come under serious question as to whether or not it applies to Mathematics.

Consequently, Thomas was legitimately allowed to call it a "so-called theorem".

Note that the Church/Turing theorem of inferential incompleteness of the Peano/Dedekind categorical axiomatization of ℕ still stands.

So what

The claim was that Goedel never worked out his proof. So what. Someone else did, peer reviewed and everything.

Goedel 2nd is understood by some. Not me. But here is Boolos.

Furthermore, you like to troll. I've seen you calling CSP a "programming language" and making other claims which "stimulate discussion".

But personally, that doesn't bother me. You like to take the knives out in a discussion; I don't believe in discussions in hushed tones while walking around lily ponds either.

But my critique remains, on glancing through your papers, things aren't sufficiently pinned down to discuss reasonably.

"2nd Incompleteness" depended crucially on "I am not provable."

The problem is not that Gödel didn't present a formal proof. Gödel's "2nd Incompleteness" writing and subsequent work by others depended crucially on the existence of the Mathematical sentence "I am not provable", allegedly constructed using a non-existent fixed point (sometimes called the Diagonal Lemma).
Also, Gödel's "2nd Incompleteness" so-called theorem has now been contradicted by a theorem of Mathematics.

PS. Communicating Sequential Processes (CSP) was the name that Hoare gave to a programming language.
Later he decided to use the same name for something else.
Consequently, the name is now ambiguous.

PPS. Comments and suggestions are greatly appreciated on the articles that you have had a hard time understanding.

Discuss it with a logician

Unless you make that precise, I don't understand what you're talking about. I've read the Stanford expose, don't see anything wrong with it, don't have a problem with effective enumerability, don't see problems with a diagonal lemma, don't see explicit self-reference, don't find quoting a big problem, and doubt that types will lead to a weaker logic where the construction is prohibited. (And doubt that the construction of a weaker logic makes a lot of sense; you end up with a weaker logic, but the result still holds.)

To me it only signifies that discrete thought about a continuum is effective, but when turned on itself leads to absurdity. And I find that natural, since I believe in Panta Rhei. So, you're discussing this with the wrong person anyway since I believe most thought is fundamentally flawed anyway.

But I am not trained a logician. Write it down and get it peer reviewed. That's all I can say. Discussing this on LtU is maybe entertaining to some but leads nowhere.

PPS: Historically it is nice to know CSP started out as a programming language. But that's irrelevant. You're just teasing. You know it has been widely successfully used as a formal language.

re "Boolos" link

Goedel 2nd is understood by some. Not me. But here is Boolos.

Boolos does not understand it either, as evidenced by the passage that says:

In fact, if math is not a lot of bunk, then no claim of the form "claim X can't be proved" can be proved.

Sigh

Boolos means for any fixed sufficiently powerful formal system called "math" and, of course, this is correct.

re Sigh


Boolos means for any fixed sufficiently powerful formal system called "math" and, of course, this is correct.

Far from correct, Boolos' claim is trivially either false or "not even false", depending on how you read it.

In DL, as an example, you can prove that you can not prove 2+2=5 very directly. In that sense, Boolos' statement is false.

What we might generously suppose Boolos means is that even if we can prove the non-provability of 2+2=5, we still have not ruled out the possibility of an inconsistent system (in which we can also prove the provability of 2+2=5).

But that is different. That is saying that you can't rule out that a (sufficiently powerful) system may yet be revealed to be inconsistent.

The ability to prove that you can't prove 2+2=5 does not imply inconsistency (but neither does it rule it out).

His claims are standard

To prove that DirectLogic can't prove 2+2=5 means to prove that there is no chain of inference admissible in DirectLogic that leads to the conclusion 2+2=5. Note: this has nothing to do with the presence of unicode symbol U+22A2 in the DirectLogic grammar.

The system of "math" (call it M) Boolos has in mind allows proof by contradiction. Thus if M is inconsistent, then M proves absolutely anything. Conversely, if there's any proposition M does not prove, then M is consistent. If you can prove that "2+2=5" is not provable, then you can prove M consistent.
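In symbols (my paraphrase, with ⊢ meaning provability in M):

M inconsistent ⇒ M ⊢ Q for every Q
M ⊬ Q for some Q ⇒ M consistent   (contrapositive)
M ⊬ "2+2=5" ⇒ M consistent   (the instance at issue)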

that's a cool hair to split (re his claims are standard)

To prove that DirectLogic can't prove 2+2=5 means to prove that there is no chain of inference admissible in DirectLogic that leads to the conclusion 2+2=5.

That proof is trivial (hence "not dispositive" aka "doesn't put the question to bed") in Direct Logic.

Goedel's 2nd is popularly but incorrectly understood to mean that Direct Logic's trivial proof of the unprovability of "2+2=5" is impossible in a system that is both consistent and able to contain a model of ℕ.

The catch to the trivial proof is that, who knows, maybe it can also be proved that "2+2=5" is provable in DL. I'm confident it can't but my confidence is empirical. I regard it as a scientific question.

In Goedel's 2nd theorem argument, instead of the naked turnstile ("⊢"), he has a predicate over an encoded model of the subject system, along with an implicit assumption of the derivability axioms.

As it turns out, therefore, Goedel is talking only about a pretty confined class of systems: systems with those axioms and supporting those particular definitions. If a system doesn't have those axioms or doesn't accept those particular definitions, the argument called Goedel's 2nd theorem just does not apply. Period.

The informal error that is popular, and that Boolos and you are reiterating, is that the derivability axioms and the validity of those definitions are inevitable in a "suitably powerful" fully general foundation of math.

DL gives you a pretty compelling counter-example.

Meta-level claims

Again, it is not correct to make the meta-level claim that "DirectLogic proves that DirectLogic cannot prove 2+2=5" on the basis that DirectLogic proves "¬ ⊢ 2+2=5". In order to make that claim at meta-level, you must judge at meta-level that "⊢" means provability in DirectLogic. Unfortunately, this meta-level judgement is not sound.

The proof is pretty straightforward, I think: Suppose that we have a model that interprets the naked turnstile as asking for the existence of a certain proof object, encoded as a natural number. Then we can soundly expose the way we're encoding things to the logic. Once we do that we will run afoul of Godel in the ordinary way and our system will be inconsistent, contradicting the assumption that it has a model.

Edit: Oops. That reasoning was sloppy. You might not get into trouble with Godel after all, since by exposing reasoning to the logic, you change the logic. So maybe this is sound. I'll have to think more later.

meta-level claims and Goedel's 2nd

There is no meta-level in the argument sketched for Goedel's 2nd theorem.

Suppose that we have a model that interprets the naked turnstile as asking for the existence of a certain proof object, encoded as a natural number.

I can't suppose that since it would not be an accurate model of DL.

Can assume number encoding

I think that part was fine. You can always choose numbers to encode your symbols.

But my reasoning was flawed. See my edit if you haven't.

Edit: yeah, the only thing I see about DL that doesn't look sound is that it proves that its turnstile isn't computationally enumerable. Even if it were possible to interpret it soundly, I don't think it should count as proving consistency unless that's the only interpretation.

Gödel numbers are irrelevant to inferential incompleteness

Gödel numbers are pretty much irrelevant to inferential incompleteness.
See the Church/Turing proof of the inferential incompleteness of the closed theory ℕ.

Church's Paradox shows that the theorems of Mathematics are not computationally enumerable.
Direct Logic turned Church's argument around into proving the theorem that Mathematics is open, i.e., inexhaustible. :-)

However, there are still grounds for concern that Direct Logic could be inconsistent, regardless of the correct proof that Mathematics is consistent.

Not the Y fixed point procedure

The above definition of Fix is not the Y fixed point procedure.

Z combinator, then?

I see what you're saying--that this operator is only able to derive fixpoints that are functions, as opposed to fixpoints of arbitrary type--but the actual Y combinator is almost beside the point.

When Keean Schupke first brought up "the Y-combinator" in this thread, and when marco was saying "Why would I use Actorscript if I can naturally define a Y combinator in most other typed languages?" I think they didn't necessarily need the Y combinator itself. They evidently would be satisfied with the "eager variant," the Z combinator.

Back in this post of yours, you demonstrated a flawed way to assign types to the usual untyped lambda calculus implementation of the Y combinator:

The Y combinator cannot be defined using ActorScript:

 
SingleArgument◅aType▻ ≡ [aType]↦aType

Helper◅aType▻.[f:SingleArgument◅aType▻]:SingleArgument◅aType▻ ≡
                [x:([⍰]↦SingleArgument◅aType▻)]→ f[x.[x]]
 
Fix◅aType▻.[f:([SingleArgument◅aType▻]↦SingleArgument◅aType▻)]:SingleArgument◅aType▻ ≡ 
                (Helper◅aType▻.[f]).[Helper◅aType▻.[f]]

The above attempted definition of the Y combinator does not work because the missing strict type ⍰ does not exist.

The final type signature you used is specific enough that the Z combinator would be a valid implementation, so I think I mistook you for saying Z itself couldn't be defined. Sorry for my contribution to this misconception.

The implementations Keean Schupke, marco, and I gave are pretty standard implementations of Z. Actually, if I were defining Z itself, I'd give it a slightly more relaxed type than the type of Fix above:

Z◅a, b▻.[f:([[a]↦b]↦([a]↦b))]:([a]↦b) ≡
    [x:a]→ (f.[Z◅a, b▻.[f]]).[x]

If combinators like this can be defined and used for constructing recursive functions, even if the actual Y combinator does not have a well-typed definition, I think that'll be a pretty nice clarification.

Z combinator does not exist

Omitting types, just looking schematically at the definition, the Z combinator would be:

  Z.[g] -> [x] -> g.[[v]->x.[x].[v]][[v]->x.[x].[v]]

Immediately, by inspection, we can notice the critical sub-expression:

  x.[x]

Here, "x" is presumptively a function whose domain is the function type itself.

In DL's constructive type theory, a function type can only be formed from domain and codomain of lesser rank. Therefore, "x" has no valid type.
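For comparison, the same obstruction is visible in Haskell (my illustration, not ActorScript): selfApply x = x x is rejected because it would need the "infinite" type t ~ t -> b, and it only becomes typeable once you introduce a type that mentions itself:

-- A type that refers to itself is exactly what a finite-rank
-- discipline rules out.
newtype Wrap b = Wrap { unwrap :: Wrap b -> b }

selfApply :: Wrap b -> b
selfApply x = unwrap x x

If no such self-referential type can be formed, then "x" indeed has no valid type.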

Lesser Rank

Isn't the lesser-rank restriction more restrictive than necessary to avoid paradoxes? Don't you only need the rank to be lesser or equal for non-negated application, and only strictly lesser for negated application (as in datalog's restrictions).

re lesser rank

Don't you only need the rank to be lesser or equal for non-negated application, and only strictly lesser for negated application (as in datalog's restrictions).

I don't really know anything about datalog.

By "negated application" do I guess correctly that you are talking about modeling functions as predicates expressing a relation between domain and codomain, and in particular terms which logically negate such a predicate?

In DL and ActorScript logic programming, propositions are sentences of Boolean type. They are all of a single type. Rank of types do not come into question.

Proposition values are ranked, but all those values are the same type.

Rank of types,

Surely propositions cannot be both Boolean and inconsistency robust (as Boolean algebra has the law of the excluded middle). I would have thought propositions need to be Heytingean, or whatever the type of truth for Direct Logic is?

By negated application I mean depending on the negation of some other proposition. "The set of all sets that are not members of themselves" is only a paradox because of the negation. "The set of all sets that are members of themselves" is not a paradox. So you only need to require strictly lesser rank when there is negation; allowing less-than-or-equal rank in the non-negated case is sufficient.

Inconsistency-robust logic

I'm not really sure if "inconsistency-robust" really means "paraconsistent" (or, at least, if the former is a special case of the latter), but, if this is the case, what you want to keep out of the logic is the principle of explosion, not (necessarily) the law of the excluded middle.

Paraconsistency is a special case of Inconsistency Robustness

Paraconsistency is a special case of Inconsistency Robustness. For example, intuitionistic logic is paraconsistent but not inconsistency robust.

Intuitionistic entailment is explosive

How is intuitionistic logic paraconsistent? If I understand correctly, "paraconsistent" means "the entailment relation isn't explosive", i.e., there are inconsistencies from which not everything is derivable. Under this definition, intuitionistic logic isn't paraconsistent.

That being said, I'm very interested in your idea of inconsistency robustness, as a paradigm for developing and using distributed databases. I can't say I fully understand the exact meaning you associate to the term "inconsistency robustness", but it is clear to me that if there are multiple sets of evolving beliefs about the same domain of discourse, it will be the norm, not the exception, that some of these beliefs will contradict each other.

Could you tell me what papers I should begin reading?

Intuitionistic logic is paraconsistent

Intuitionistic logic is paraconsistent in that it is not the case that every proposition can be inferred from a contradiction.

For more information on Inconsistency Robustness, see Formalizing common sense reasoning

Explosion only to negated propositions

I think Hewitt is saying that in intuitionistic logic, a contradiction lets you derive every negated (and double-negated...) propositon, but not necessarily the non-negated ones.

Why not?

Presentations I've read have an elimination rule for contradiction that allows you to conclude any other proposition. Wikipedia agrees. See the rule labeled FALSE.
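For reference (my addition), the rule in question is ex falso quodlibet, i.e., ⊥-elimination:

⊥ ⊢ A   for any proposition A

Standard intuitionistic natural deduction includes it; the logic that drops it is usually called minimal logic.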