Buried Treasure - No Fluff Just Stuff

Buried Treasure pdf

The trend toward dynamically typed languages is both widespread and strong. Less obvious, though, is a resurgence of interest in functional programming and functional languages. ... Haskell ... Objective CAML ... Scala ...

Is it double-think that allows the promotion of both dynamically typed languages and, by name, Haskell, OCaml, and Scala, without any acknowledgement that those powerful and efficient functional languages are ... statically type checked?

It does have bearing in that

It does have bearing in that if a static language could guarantee "the absence of classes of misbehavior", then it should be able to do that no matter how I formulate my program, as long as it type checks. If my customer has heard of the virtues of static typing and demands that I write the program in such a language, but I myself prefer dynamic typing, one way to circumvent the whole thing might be to write the program in a dynamic language and then compile it to the static one, or run it on an interpreter written in the static one. That way I can show my static code to the customer.

But the question is, can the static type checker prove the absence of classes of misbehavior before delivering the software (applied, that is, to my static code)? If it can do that, it obviously can do it for dynamic languages too, and if it can't, well, it's not much of a proof then, is it?
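To make the scenario concrete, here is a minimal sketch (in Haskell, with made-up names) of how such an embedding usually looks: the dynamic language lives inside the static host through a single universal value type. Note what the host's type checker actually proves: properties of the interpreter, not of the interpreted program, whose misbehavior shows up as the explicit error cases at run time.

    data Value = VInt Int
               | VBool Bool
               | VFun (Value -> Either String Value)

    data Expr = Lit Value
              | If Expr Expr Expr
              | App Expr Expr

    -- The host type system guarantees this function handles every case;
    -- it guarantees nothing about the Expr being evaluated.
    eval :: Expr -> Either String Value
    eval (Lit v)    = Right v
    eval (If c t e) = do
      cv <- eval c
      case cv of
        VBool b -> eval (if b then t else e)
        _       -> Left "run-time type error: condition is not a boolean"
    eval (App f x)  = do
      fv <- eval f
      xv <- eval x
      case fv of
        VFun g -> g xv
        _      -> Left "run-time type error: applying a non-function"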

Your interpreter's likely to

Your interpreter's likely to end up pretty well sandboxed :-)

Well yeah, I would think

Well yeah, I would think sandboxing is the only way to really get more guaranteed safety (no matter what kind of typing your language has).

But seriously, I don't see

But seriously, I don't see why the evaluator (interpreter/compiler) would be more sandboxed than its target language.

The host language might

The host language might enforce it.

In what way would that be

In what way would that be different from the host language being sandboxed?

The host language may not be

The host language may not be sandboxed per se, but it might force various properties to be proven before you can do something that'd be subject to sandboxing in other systems - if you can't prove them by any other means, you'll end up having to build your own sandbox.

No bearing

Follow your reasoning to its logical conclusion. Ultimately the program is translated to machine code. It would be trivial to compile to statically typed assembly language. Would you then hand this assembly to the customer and say "Here you go, it's statically typed just like you asked?"

By translating you lose information. It is possible to prove the higher-level source code correct without being able to prove the translated code correct. That is, given just the lower-level code, you can't make the same guarantees as you could given the higher-level code.

The point was that any proof

The point was that any proof that was valid for the target language has to be valid for the source language. Not the other way around.

Tcl

Funny you should mention Tcl. The original idea behind Tcl was that it would be a small dynamically-(un)typed interpreter for tying together components written in statically-typed languages (primarily C). Many people still use Tcl in this way, and many good libraries for Tcl are written in C (although many prefer writing Tcl code to writing C, understandably). So when, e.g., Paul Snively says that developing a better C/Java "[...] might not be enough to persuade the Python, Ruby, Perl, Tcl... communities, basically because nothing is likely to be," this is a little unfair. The Tcl community certainly haven't abandoned static typing; they just use two languages. I also don't think many would be hostile to using a better language instead of C (and some do). Replacing Tcl itself with a statically typed language would be a harder sell. Not impossible, but you'd have to demonstrate the same breadth of functionality and ease of gluing together disparate components that Tcl manages (that means good cross-platform, high-level process control, pipes, files, sockets, C-FFI, etc.), along with simple GUI libraries, easy EDSL creation, etc. (Maybe a Meta-AliceOCaml would come close.)

To compile or not to compile

Given an interpreter for a dynamic language, how do I create a compiler for that language? The problem seems to be deciding what is static in the input source, or conversely what is truly dynamic. The dynamic part becomes the input to the compiled program, and the static part is partially evaluated along with the interpreter to get an executable. The problem is to divide the source code into static and dynamic parts. I often find that there may not be a unique division into static and dynamic, so I just don't compile it.

EDIT: On LtU it is sometimes said that an interpreter is just another program. We can also say that it is a compiled program whose input is the supplied source. The developer of such a program has made an implicit decision that the source should be treated dynamically. If there are static parts, then a different interpreter should have been written.
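As a minimal sketch of that static/dynamic division (Haskell, using the classic example): treat the exponent as the static part and the base as the dynamic part, and specialization folds the static part away, which is all "compiling by partial evaluation" amounts to here.

    -- General program: both arguments dynamic.
    power :: Int -> Int -> Int
    power 0 _ = 1
    power n x = x * power (n - 1) x

    -- Specialized to the static exponent 3 (done by hand here, which is
    -- exactly what a partial evaluator would do mechanically): only the
    -- dynamic argument remains.
    power3 :: Int -> Int
    power3 x = x * (x * (x * 1))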

You Know This One :-)

Hank Thediek: Given an interpreter for a dynamic language, how do I create a compiler for that language?

You know the answer to this one already: apply a self-applicative partial evaluator to the interpreter, à la Futamura's Second Projection. :-)
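For anyone who hasn't met the projections, here is a type-level sketch (Haskell-flavoured; mix and run are assumptions standing in for a real partial evaluator and evaluator, not library functions):

    -- Program texts, phantom-typed by the function they denote.
    data Prog a b = Prog String

    run :: Prog a b -> a -> b              -- an evaluator for program texts
    run = undefined                        -- assumed, not implemented here

    mix :: Prog (s, d) r -> s -> Prog d r  -- partial evaluator: specialize a
    mix = undefined                        -- program to its static input

    -- Given an interpreter  interp :: Prog (Source, Input) Output :
    --
    --  1st projection:  mix interp src      :: Prog Input Output
    --                   (a compiled version of src)
    --  2nd projection:  mix mixProg interp  :: Prog Source (Prog Input Output)
    --                   (a compiler), where mixProg is mix's own program text
    --  3rd projection:  specializing mix to mixProg yields a compiler generator.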

Program specialization

See, for example, this for an explanation of "self-applicative partial evaluation". This really is interesting and gets to the heart of many language issues. Somewhere in these pages there may be an explanation of why so many dynamic languages don't compile (I mean, aren't compiled).

The difference between a

The difference between a compiler and an interpreter is rather more subtle. What's dynamic and what's not depends on the underlying model (the "target language"), and in any case it's not just a matter of types.

A good topic for a paper...

would be "Why do all mentions of about typing turn into hugely long debate threads?" I find it strange that types brings out such passion :)

Typing Debates

I don't have a paper about this, but I have a thesis to submit:

A type is information about a piece of data that is known at compile time.

Since it is possible to analyze a program in an infinite number of ways, and since it is impossible to analyze it fully (in that ol' Halting Problem sense), there is room for endless work, endless speculation, and endless debate.

I admit that this is a naive definition of 'type', and that it has been refuted mathematically, but I still don't see exactly why it isn't true. In fact, someone please tell me why this isn't true.

Putting On My Anton Hat :-)

Intuitively, it isn't true because the information doesn't disappear into the ether at runtime. Granted, given "type erasure," you no longer have the name of the type, or lots of other metadata about the type, for that matter. Nevertheless, even in a type-erased statically-typed language, the type that you know of at compile time imposes some constraints on the runtime behavior of the data of that type, otherwise the whole exercise would be pointless: your compiler would infer, or you would provide annotations for, "types" that were just meaningless syntactic noise.

So even in statically-typed languages, there's a subtle relationship between the compile-time type of something, and the runtime behavior of that thing. What exactly that relationship is is the subject matter of programming language design and type theory. The frequently-mentioned "Types and Programming Languages," or TAPL, does an excellent job of (briefly) surveying several of the possible interpretations of the relationship, while basically settling on the small-step operational semantics interpretation for the remainder of the exposition (like the rest of academia and industry, to be fair).
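A small, hedged illustration of that constraint surviving erasure (plain Haskell; the module and names are just for the example): the NonEmpty wrapper below is erased at run time, but because its constructor isn't exported, the compile-time type still guarantees something about run-time behavior, namely that safeHead never sees an empty list.

    module NonEmptyExample (NonEmpty, fromList, safeHead) where

    newtype NonEmpty a = NonEmpty [a]   -- representation erased at run time

    -- The only way to obtain a NonEmpty value is through this check.
    fromList :: [a] -> Maybe (NonEmpty a)
    fromList [] = Nothing
    fromList xs = Just (NonEmpty xs)

    safeHead :: NonEmpty a -> a
    safeHead (NonEmpty xs) = head xs    -- safe by construction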

Known by whom?

A type is information about a piece of data that is known at compile time.

In a very general sense, I don't see a real problem with that definition. I don't know that it's "been refuted mathematically". I'd say it's been refined mathematically, with the most important formal refinement being that a type can't really exist without a type system.

However, with respect to the distinction between static and dynamic type-checking, an important issue to consider is who or what "knows" the information in question at compile time, i.e. statically. In a statically-checked language, the compiler is aware of the information. In a dynamically-checked language, the compiler (or interpreter) may not be aware of the information, but the programmer may very well be.

In fact, the point that started the "Why type systems are interesting" series of threads is that programmers must be statically aware of a great deal of this kind of information, in any language, in order to write correct code. You can't do much with data without knowing something about its (static) type.

This is what makes it possible to do type inference even on programs in dynamically-checked languages: because programs contain implicit static information which can be recovered by analysis ("latent types").

One caveat is that because "it is possible to analyze a program in an infinite number of ways", latently-typed programs don't have just one correct type scheme, in general. From the formal perspective, it can be argued that without a definite type scheme for a program, the program doesn't really have any types — or equivalently, that it has just one universal type. A pragmatic response to this is that latently-typed programs have informal type schemes, and programmers reason about their code quite routinely (and often unconsciously) using those informal type schemes.

As a separate issue, it's common for users of dynamically-checked languages to conceive of types as a kind of runtime entity, because that's how types are presented in those languages. This tends to lead to some misconceptions about types in the more formal sense. The most pernicious misconception is the idea that "static" type systems have no relevance to dynamically-checked languages — nothing could be further from the truth.

As One of the Offending Parties...

...I'll tackle this one. :-)

At the moment, whether to create/use a language that is dynamically typed or statically typed is probably the most fundamental decision you can make. It affects nearly everything else about the language and, hence, the culture surrounding that language.

The worst part is that you have folks like me: converts. As I've written before, I spent decades in Lisp and Smalltalk before learning about type inference, static typing, and where it's all going, so I'm like the ex-smoker when it comes to smoking. Obviously, it's time once again to apologize for my bombast—I think I have some good points, but I articulate them extremely poorly!

Meanwhile, where we find common ground is where Anton very gently tries to take us: somewhere between the extreme dynamism of a Python or Ruby and the extreme staticness of a Haskell or O'Caml, somewhere where the language might make inference tractable, but (like CMU CL or SBCL) won't reject a program that it can't type—it's just that the warnings might actually be helpful, unlike CMU CL's or SBCL's. :-) There's still much work to do on establishing whether a powerful type system (dependent types?) might not give us the expressive power that dynamic typing fans appreciate, or whether partial inference/evaluation might not enable us static fans to have what we want without preventing dynamic fans from saying "the fact that expression X doesn't type is acceptable to me."

I'm hopeful that an implementation of "The Next Mainstream Programming Language" will go a long way towards resolving the question. But it might not.

Diversity

I think for most people, the passion is not so much about dynamic typing, but about diversity and freedom of choice. In industry it seems that use of any "alternative" technologies is an anti-pattern, and many usable alternatives happen to have dynamic typing -- probably because it's easier to define and implement such a language :)

It's funny how fuzzy the notion of static/dynamic can be in large projects, however. Consider that a large number of production apps using DT languages use a RDBMS backend, which has well-defined data types. Thus the system as a whole has a very large static safety net as far as persistent data consistency is concerned. (though it's not totally static, since you can modify table definitions while an app is running)

Ingredients for a religious debate

Why do all mentions of typing turn into hugely long debate threads?

1. Because types are important: they're a fundamental part of how we give meaning to information.

2. Because most people are most comfortable with a single typing paradigm, and have misconceptions about all other typing paradigms.

3. Because people have misconceptions even about the typing paradigm and type system they're most familiar with.

For added insight, substitute "religion" for "type" (with context-appropriate variations) and "life" for "information".

True

But most of the debate in this thread has either been done before or is off-topic, with the off-topic parts having been done before as well. The original post had a good question: there is a trend towards dynamically checked (non-FP) languages, but there is also a trend towards statically checked FP languages. What gives?

It's a good question, but I think the answer lies more in trend analysis than theory: how many people are moving into Ruby, and what did they know before? What did the new ST-FP adopters know before? Were they heavily into OOP? How much do they care about type safety, runtime performance, type system overhead, etc.? Is this really just about the language or extensive libraries it comes with, or the development tools?

With so many variables, it is very normal to get a hundred responses. If we had an authoritative Types LtuPedia entry, we could cut the number down by quite a bit. But the question would still be very broad.

What gives?

What gives?

It's extremely simple. This is kind of like the old chestnut that the number of murders in Thailand is correlated with the number of ice cream cones sold in New York. The connection is that there is a trend toward "higher level" languages in general. About a year or two ago (and probably still now), Kenny Tilton on comp.lang.lisp was crowing about an increased flux of Common Lisp newcomers, and of course Haskell has seen surprisingly large increases as well. One thing these languages have in common is that pretty much anyone here (on LtU) would prefer to see either of them used as opposed to the current mainstream languages.

Not quite

Although I find it likely that Lisp enjoyed an increase in popularity recently, this is not what the paper focuses on (IIRC it doesn't even mention Lisp). The trend (according to the paper) is towards either dynamically checked but still imperative languages (which includes Ruby, Python, etc., but excludes Lisp), or statically checked FP languages (which excludes C#). It's the XOR-ing of the language features that could pose an interesting question, but if you agree that there's also a trend towards Lisp, the question is no longer that interesting.

Fact-check

If you're referring to the Buried Treasure paper, it mentions Lisp quite a few times, including to say that "A surprising number of people are discovering (or rediscovering) Lisp, due in part to the popular essays of Paul Graham." (As an aside, more recently, Peter Seibel's book probably has something to do with it, too.)

Whoops

Yes, I was. Apparently, I remember Isaac's summary above better than the paper itself. Thanks for the correction. In this case, the paper seems to read like: "there's a trend towards anything that is not C++".

RE: What Gives?

The connection is that there is a trend toward "higher level" languages in general. About a year or two ago (and probably still now) Kenny Tilton on comp.lang.lisp was crowing about an increased flux of Common Lisp newcomers,

And then they see a couple posts from Kenny and they're off to greener pastures.

Types are fundamental?

[...]types are important: they're a fundamental part of how we give meaning to information.

Are they really fundamental to meaning? You could presumably define the meaning of programs entirely at the level of terms. It would just be rather cumbersome (e.g. defining addition by enumerating every possible sum of two numbers). Naïvely, I think of types as being terms in another language. So while types help in giving meaning, you still have to give meaning to the types. Or am I really missing a trick here? (Should probably get round to reading TaPL at some point).

For added insight, substitute "religion" for "type" (with context-appropriate variations) and "life" for "information".

Oh boy! (Point 1...)

Types tend to be associated

Types tend to be associated not just with sets (or set-like things) of values but also with varieties of data from outside the system - there's a big difference in meaning between C source and TeX source despite both being sequences of characters, for example. You really can't make that distinction at the term level without carrying around what amounts to a type tag.

Types as fundamental

Categorization is certainly fundamental to human cognition, particularly problem solving. While one could certainly create a model of machine computation with no intrinsic categorization of data or functional elements, human developers wouldn't be able to understand it without imposing a categorization on it. Those categories are going to end up looking something like types-as-we-know them, I'd imagine.

Fundamentalism

I agree with the previous two replies. More specifically:

You could presumably define the meaning of programs entirely at the level of terms. It would just be rather cumbersome (e.g. defining addition by enumerating every possible sum of two numbers).

Why do I get the feeling that Tcl has something to do with this? ;)

Note that the claim wasn't intended to be restricted to formal types — even in a language with weak type capabilities, if you write code that encodes values in a certain way, and you reason about programs based on whether you're dealing with a particular kind of encoded value, then I'd say you're using informal types to give meaning to those values.

Re "defining addition by enumerating every possible sum of two numbers", what's a number? How do you know when a value is a number? Once you've figured out that a value is a number, how do you know what you can do with it? How do you determine the meaning of some piece of data like "42"? How do you know it's not a Klingon insult?

Talking about "numbers" implies you have at least one type. The existence of an operation like addition, which is well-defined on some kinds of values and undefined on others, implies the existence of a type.

Even in a completely untyped language, such as the untyped lambda calculus or the SK combinator calculus, a programmer has to classify values and reason about programs in terms of those classifications, i.e. types. To do without types of any kind (formal or informal), you'd have to find another way to give meaning to information, that doesn't involve classification.

Naïvely, I think of types as being terms in another language. So while types help in giving meaning, you still have to give meaning to the types. Or am I really missing a trick here?

I don't see that this is at odds with "types are a fundamental part of how we give meaning to information" — in fact, this two-tier perspective illustrates the point. Types provide a connection between programs and the meaning we assign to the types — whether it's a formal meaning given by a formal type system, or an informal meaning in the programmer's mind. If you take away the types, you lose that connection to the meaning of the types, and your programs and data lose meaning.

There are other things in a language that help provide meaning, by connecting the code to something external — for example, names. But if you're given a function and told its type, you can successfully apply the function, even if its source code contains no meaningful names. If you don't know the type of the function, the first thing you have to do to be able to use it successfully is figure out what types of values it expects, and returns. It doesn't matter whether the types in question are part of the language, or only exist implicitly in the code and the programmer's mind: you're still using types to give meaning to values (information).

Re "defining addition by

Re "defining addition by enumerating every possible sum of two numbers", what's a number? How do you know when a value is a number? Once you've figured out that a value is a number, how do you know what you can do with it? How do you determine the meaning of some piece of data like "42"? How do you know it's not a Klingon insult?

The "numbers" in this case are the terms that I explicitly enumerate, of course. I know what operations I can perform on individual terms, because I define the operations over individual terms. I don't have to define what a "number" is (as a type) to assign meaning to individual number-terms. If you are saying that the mere act of enumerating some terms while avoiding others constitutes defining a type, then yes, types are unavoidable.

Types provide a connection between programs and the meaning we assign to the types — whether it's a formal meaning given by a formal type system, or an informal meaning in the programmer's mind. If you take away the types, you lose that connection to the meaning of the types, and your programs and data lose meaning.

But here, "meaning" is defined in two stages: first terms are categorised into types, and then types are assigned meaning. My contention was that the first step is not fundamental, in that we could assign meaning directly to terms; types are an (extremely) useful convenience, but not fundamental. If I understand your response correctly, it is that any assigning of meaning to terms that includes some terms while excluding others represents some form of type distinction. If that is the case, then yes, it would be impossible to assign meaning to terms without types. I'm not sure I'm willing to accept that definition of "type", however, but perhaps I need to give it more thought.

Types and meaning

First, let me say that in practice, I think my claim is entirely uncontroversial: programmers do in fact use types as a fundamental part of how they reason about programs and their meaning, regardless of the language they use. But I also think that they're fundamental in a deeper sense, so:

If you are saying that the mere act of enumerating some terms while avoiding others constitutes defining a type, then yes, types are unavoidable.

It's not just enumerating the input that a particular function accepts — having done that, you also arrange your program so that calls to a function provide it with the kinds of values it expects. This means that the program's terms are implicitly classified according to the kinds of values they receive. In addition, you name these classifications: you used the term "number", for example. So now we have a classification with a name, and a set of values that "inhabits" that classification, which can be used to classify program terms. Further, programmers reason about these classifications when writing code: it's necessary to produce a working program.

A program that does something useful, even in the kind of untyped language you're describing, has an unavoidably strong degree of organization in this respect. The definition of "number" as a type may not be explicit in the language or program, but it is implicit. In a program which enumerates the values it accepts, such a definition can quite easily be inferred — an inferencer would examine functions to determine what values they accept, then check that calls to those functions provide the expected kinds of value. The fact that type definitions can be recovered even from an untyped program is not some kind of coincidence: programmers organize programs so that they have that structure, and they couldn't reason about programs without it.
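A tiny illustration of that recovery, with Haskell's inferencer standing in for the analysis (the definitions are deliberately unannotated, and the names are mine): nothing below declares a type, yet the way each term uses its argument pins down a classification, which the inferencer reports back as area :: Floating a => a -> a and describe :: Bool -> [Char].

    area r = pi * r * r

    describe flag = if flag then "it holds" else "it doesn't"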

But here, "meaning" is defined in two stages: first terms are categorised into types, and then types are assigned meaning. My contention was that the first step is not fundamental, in that we could assign meaning directly to terms; types are an (extremely) useful convenience, but not fundamental.

Eliminating the first step, e.g. by removing explicit types from a language, hides the classification of terms, but programmers still have to perform those classifications, structure their programs around them, and use them in order to reason about programs and their meaning.

As a rather relevant aside, formal type theory simply identifies this implicit information and formalizes its use — it's not some artificial, optional construct. The original version of type theory, in logic and set theory, arose as a response to Russell's Paradox. Russell's Paradox results from a type error, and it was necessary to invent (or discover) type theory in order to characterize the error. The solution to the paradox centers around the observation made on the linked page:

"before a function can be defined, one first has to specify exactly those objects to which the function will apply. (For example, before defining the predicate "is a prime number," one first needs to define the range of objects that this predicate might be said to satisfy, namely the set, N, of natural numbers.)"

Without this constraint, we end up with inconsistent systems that generate paradoxes and arbitrary conclusions: systems with no meaning. This is a formal theoretical sense in which types are fundamental to meaning, in logic as well as computing.

Good points

First, let me say that in practice, I think my claim is entirely uncontroversial: programmers do in fact use types as a fundamental part of how they reason about programs and their meaning, regardless of the language they use.

Yes, in practice I agree with this entirely.

Without this constraint, we end up with inconsistent systems that generate paradoxes and arbitrary conclusions: systems with no meaning.

This is a very important point, I think, and a pretty compelling argument that types are fundamental. I'd forgotten the importance of Russell's Paradox, and what I did remember was clouded by a misreading of Gödel, Escher, Bach, which says (p. 21):

To all appearances, then, this theory of types, which we might also call the "theory of the abolition of Strange Loops", successfully rids set theory of its paradoxes, but only at the cost of introducing an artificial-seeming hierarchy, and of disallowing the formation of certain kinds of sets — such as the set of all run-of-the-mill sets. Intuitively, this is not the way we imagine sets.

In reading Hofstadter's distaste for types, I managed to forget that they do indeed solve the problem.

In Dr. Hofstadter's Defense...

... he'd likely agree that the categorization is useful, but disagree that it's fundamental. He'd also likely be interested in sharpening the notion of "run-of-the-mill."

But to be sure, something that seems fundamental to his philosophy, and that certainly is a central theme of GEB, is precisely the importance of SLOTH (Strange Loops, Or Tangled Hierarchies) in a discussion of meaning and intelligence.

Obligatory name-dropping: my fondest memories of my undergraduate years are of hanging out in Lindley Hall chatting with Dan Friedman ("The Little Schemer," etc.) and Douglas Hofstadter ("GEB," etc.)

Not too relevant

Russell's Paradox results from a type error[...]

This statement doesn't even make sense. There is (or was) no type system for Russell's Paradox to be ill-typed in, and the type system developed was designed to make the statement of Russell's Paradox ill-typed.

Invented vs. Discovered

Derek Elkins: This statement doesn't even make sense. There is (or was) no type system for Russell's Paradox to be ill-typed in, and the type system developed was designed to make the statement of Russell's Paradox ill-typed.

Whether the statement makes sense or not depends upon whether you consider mathematical logic to be descriptive of objective reality, and therefore the principles arising from it constitute "discovery," or whether you consider mathematical logic to be a particularly ancient and popular mind game that, whether by design or by accident, seems to map nicely to a variety of experienced phenomena, and therefore the principles arising from it constitute "invention." At the time that type theory arose to handle Russell's Paradox, the consensus belief among mathematical logicians was the former, and so it is perfectly sensible to say "Russell's Paradox results from a type error," and isn't it great that we've now discovered type theory, in much the same way that Maxwell discovered Maxwell's Equations? Of course, before too much more time goes by, along comes Gödel, who promptly reveals mathematical logic to be a very ancient shell game and very nearly drives people like John Von Neumann to suicide. More recently, along comes Gregory Chaitin, who demonstrates that not only is even the most seemingly-basic mathematical logic either incomplete or inconsistent, but the overwhelming majority of it is simply meaningless—most truths in the system are true by accident!

Still, here we are, most of us happily fumbling along writing software expressing a tiny subset of these accidental truths, and even encoding Russell's Paradox as a programming language feature without too terribly much ill effect.

Russell's Paradox results

Russell's Paradox results from a type error," and isn't it great that we've now discovered type theory

That would work if there were a unique way to resolve Russell's paradox and a unique type theory that did so, but this is not the case. I can easily see some people taking this tack anyway. For example, reading Chapter 16 of Roger Penrose's "The Road to Reality" is annoying and disturbing (at least to me). I view Roger Penrose as "more Plato than Plato".

Incidentally, not that anyone cares, my philosophy of mathematics is that mathematics (and every other thought endeavor) is based on how humans perceive the universe; thus math is not baked into the universe (I am definitely not any kind of Platonist), but there is still objective (though human) content to math, since humans perceive the world in more or less the same way because we are made more or less the same.

Type theory


isn't it great that we've now discovered type theory, in much the same way that Maxwell discovered Maxwell's Equations?

What type theory do you have in mind?


Of course, before too much more time goes by, along comes Gödel, who promptly reveals mathematical logic to be a very ancient shell game

What exactly are you referring to?


and very nearly drives people like John Von Neumann to suicide.

Could you provide a reference about "nearly drives people like John Von Neumann to suicide"?


More recently, along comes Gregory Chaitin, who demonstrates that not only is even the most seemingly-basic mathematical logic either incomplete or inconsistent

Did he really say that? According to Gödel's completeness theorem, predicate logic is just the opposite, that is, consistent and complete.

Relevant my hyperbaton is

This statement doesn't even make sense.

An important part of the original sentence was elided:

"Russell's Paradox results from a type error, and it was necessary to invent (or discover) type theory in order to characterize the error."

Obviously, it would have been more technically precise to say "Type theory was invented in order to resolve Russell's Paradox". However, the latter sentence rates a zero on the Douglas Adams rhetoric scale, whereas my original hysteron-proteron rates a positive, if minuscule, amount.

More seriously, thanks to Paul for describing the Platonic implications of my use of the word "discover". But even if you're not remotely Platonistic, Russell's Paradox still needs to be addressed, and is an "error" in many contexts, in that it is evidence of an inconsistent system. The paradox obviously becomes a type error if you introduce type theory to resolve the inconsistency.

Of course, we know that other theories have been developed to address Russell's Paradox, including ZF set theory with its Axiom of Foundation. I wasn't deliberately trying to exclude those other possibilities, but I don't think that ZF, at least, is particularly relevant in the context of programming languages, especially considering that the Axiom of Foundation is very similar to Russell's type restrictions anyway.

But I admit that the possibility that some alternative could replace types can't be completely ruled out, so the claim that types are a fundamental part of how we give meaning to information should be qualified. Perhaps I'll be permitted to get away with saying that types are fundamental to meaning, but if types were replaced by some other construct, then that construct would be similarly fundamental to meaning.

The point is really that types play a critical role in connecting programs and data to whatever external things they're supposed to be modelling, and they can't be thrown out without replacing them with something that has roughly equal ability to do that. Simply using a completely untyped language isn't enough, because comprehending the meaning of an untyped program requires reasoning about type-like entities: sets of values, and which terms can accept values from which sets. This very distinctive aspect of the structure of programs can't simply be ignored — it has to be accounted for somehow, in order to make sense of programs.

Axiom of Foundation


Of course, we know that other theories have been developed to address Russell's Paradox, including ZF set theory with its Axiom of Foundation. I wasn't deliberately trying to exclude those other possibilities, but I don't think that ZF, at least, is particularly relevant in the context of programming languages, especially considering that the Axiom of Foundation is very similar to Russell's type restrictions anyway.

The AF is irrelevant to Russell's paradox. What prevents the paradox (in ZF) is the Axiom of Separation.

(What set theory do you regard as relevant to programming languages?)

Axiom of forgetfulness

The AF is irrelevant to Russell's paradox. What prevents the paradox (in ZF) is the Axiom of Separation.

Thanks, that was a silly mistake (looking up an axiom's name via Google can be treacherous!). The Axiom (schema) of Separation is the one I was actually thinking of, since it resembles Russell's type restriction.

(What set theory do you regard as relevant to programming languages?)

Sorry, I wasn't clear. I meant to refer specifically to ZF's relevance to the question of the relationship between a program and its meaning, i.e. set theory alone doesn't provide a complete account of that. [Edit: by that I mean that you need to describe how ZF applies to programs.]

Russell's paradox


There is(/was) no type system for Russell's Paradox to ill-typed in, and the type system developed was designed to make the statement of Russell's Paradox ill-typed.

What specific type system that prevents Russell's paradox do you have in mind? For example, the 1971 Martin-Löf type theory was shown to be vulnerable to the paradox. Besides, the most popular set theory (ZF) prevents the paradox without resorting to typing.

What specific type system

What specific type system that prevents Russell's paradox do you have in mind?

Russell's own theories qualify, as well as e.g. simply typed lambda calculus [edit: and System F]. I may be missing your point?
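To make that concrete (Haskell standing in for the calculi; loosely, the set-membership x ∈ x corresponds to the self-application x x): simple types reject the Russell-style term outright, while adding an explicit recursive type lets you express it again, at the price of nontermination rather than inconsistency.

    -- Rejected by the simply typed discipline (occurs check: a ~ a -> b):
    -- selfApply f = f f

    -- With a recursive type the same shape becomes expressible:
    newtype U a = U { unU :: U a -> a }

    selfApply :: U a -> a
    selfApply x = unU x x

    omega :: a
    omega = selfApply (U selfApply)   -- well-typed, but loops forever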

What specific type system

If you mean Russell's 'ramified types', then you are quite right.

Simpler hypothesis

My hypothesis is that most programmers don't know the first thing about programming language semantics and programming language design. They think they understand typing (though they really don't) because they program in languages with explicit typing, and their understanding is always limited to syntactic issues, of which they think typing is one.

Long time readers know that I am usually very tolerant, but enough is enough. It's just plain silly to engage in these discussions.

My hypothesis is that most

My hypothesis is that most programmers don't know the first thing about programming language semantics and programming language design. They think they understand typing (though they really don't) because they program in languages with explicit typing, and their understanding is always limited to syntactic issues, of which they think typing is one.

I agree.

Long time readers know that I am usually very tolerant, but enough is enough. It's just plain silly to engage in these discussions.

Quite the contrary; I thought it was actually going somewhere. Unfortunately, it seems it ended a couple of days ago. I'm still awaiting replies :-)