Definition of Type

I am planning on using the following definition of type for my documentation on Heron:

A type is a semantic classification of a set of values. In other words it is a set of values which share some kind of information, whether it is intensional, extensional or otherwise.

Any comments?

Hmmm

The most common type systems for functional languages are known not to have models in Set (like the polymorphic lambda calculus, from results of Reynolds and Girard in the mid-70s), so talking about 'sets of values' is quite dangerous if you are trying to be formal, albeit quite useful intuitively.

I do like the generality of 'some kind of information, whether it is intensional, extensional or otherwise'. But that would then include most 'effects' systems as type systems. Not that I necessarily disagree, but there must be a reason why so many researchers make a difference between types and effects.

Also, a lot of type systems are syntactic, like the ones based on Hindley-Milner. They are meant to reflect something semantic, but they are explicitly defined to be syntactic. Using a purely semantic definition makes type inference that much more difficult to explain.

You can even use it as a formal definition!

This is a big digression from cdiggins's question, but it turns out that the polymorphic lambda calculus does have set-theoretic models, if you work in constructive set theory.

"known not to have models in

"known not to have models in Set"

What do you mean by that? Can you give an example?

"but there must be a reason why so many researchers make a difference between types and effects"

I am unaware of what an effect or an effects system is and how it is distinguished from a type.

"Also, a lot of type systems are syntactic "

I don't see what you are getting at here. Syntactic doesn't mean "not semantic", as you seem to be implying. Semantic means to convey information, which is exactly what a type does. It conveys extra information about a set of values, even if that information is nothing but a label, a behaviour, or a representation.

Using the Merriam-Webster dictionary online, semantic is defined as "of or relating to meaning in language". Meaning is defined as "the logical connotation of a word or phrase", which I think can be applied appropriately in this case to a value. Finally, connotation is defined as "an essential property or group of properties of a thing named by a term in logic".

types in Heron

The online documentation for Heron says,

Heron is statically typed, this means every variable and expression has a type that is known at compile-time.
And it proceeds to specify the various types that are available in Heron. This very satisfactorily expresses the essential meaning of "type" in the context of Heron and other statically typed languages. You might add something about constraining values, though I wouldn't call it a definition. You could also mention that static type analysis catches many errors at compile time.

"Type System"

I think it's easier and more productive to define "type system" instead of directly and independently defining what "type" means.

How about this?

Isn't a type system just a functor from a category of sentences to a category of types?

Anything less generic might be too restrictive :-)

This requires type to be predefined

This definition contradicts Ehud's point, because it defines a type system in terms of types.

What does 'sentence' mean in computer science?

It depends

This requires type to be predefined

Not really. Let me rephrase it:

Let Sentences be a category built from the abstract syntax of a PL (for most PLs defined inductively, i.e., algebraically). Then by a type system we mean any functor from Sentences to some category T. We call T a category of types.
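
A Haskell sketch of the object part of such a functor (the names are invented, and its action on morphisms, e.g. substitutions, is glossed over):

    -- "Sentences": an invented abstract syntax; "T": an invented category of types.
    data Sentence = Num Int
                  | Str String
                  | Pair Sentence Sentence
                  deriving Show

    data T = TNum | TStr | TProd T T
           deriving Show

    -- The object part of the functor: every sentence is sent to a type.
    typeOf :: Sentence -> T
    typeOf (Num _)    = TNum
    typeOf (Str _)    = TStr
    typeOf (Pair a b) = TProd (typeOf a) (typeOf b)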

Type System

The concept of 'type' predates the concept of a 'type system'. As a programmer for over 20 years, I have never been concerned with what a 'type system' is. I think any attempt to define a 'type system' independently of the concept of a 'type' leads to academic confabulations.

Re: Type System

cdiggins: The concept of 'type' predates the concept of a 'type system'. As a programmer for over 20 years, I have never been concerned with what a 'type system' is. I think any attempt to define a 'type system' independently of the concept of a 'type' leads to academic confabulations.

Whereas I think that any attempt to define a "type" independently of the concept of a "type system" leads to hand-waving and, far more often than not, incoherence.

"It turns out that a fair amount of careful analysis is required to avoid false and embarrassing claims of type soundness for programming languages. As a consequence, the classification, description, and study of type systems has emerged as a formal discipline." — Cardelli 1996

"Q: Why bother doing proofs about programming languages? They are almost always boring if the definitions are right.
A: The definitions are almost always wrong." — Pierce 2002

"Just because you've implemented something doesn't mean you understand it." — Brian Cantwell Smith

"The fundamental problem addressed by a type theory is to insure that programs have meaning. The fundamental problem caused by a type theory is that meaningful programs may not have meanings ascribed to them. The quest for richer type systems results from this tension." — Mark Manasse

hand-waving afoot ?

Whereas I think that any attempt to define a "type" independently of the concept of a "type system" leads to hand-waving and, far more often than not, incoherence.

You haven't addressed my definition. You have neither shown it to be hand-waving nor incoherent. Nor have you attempted to provide a satisfactory definition of type dependent on a definition of type-system.

Foot Waving?

cdiggins: You haven't addressed my definition. You have neither shown it to be hand-waving nor incoherent. Nor have you attempted to provide a satisfactory definition of type dependent on a definition of type-system.

Nor did I claim to have, nor am I willing to recapitulate the history of type systems. If you care to search the archives, you'll find at least two lengthy threads in which I made such an attempt, overreaching on some occasions, and winding up exactly where I started: with an acknowledgement, covered very succinctly in the last quote that I posted, that "dynamic types" are important and worthy of study, but agreeing wholeheartedly with the observations of Cardelli, Pierce, and Smith, which I would loosely summarize as "attempting to talk about 'types' outside what has already been established in type theory is incredibly foolish."

I see that one of my previous interlocutors, Anton, has also replied. I'm certain that he has very constructive things to say, so I will go read them now.

More light, less heat

The concept of 'type' predates the concept of a 'type system'.

This is like saying the concept of `parent' `predates' the concept of `child'. You cannot speak of one without the other, because they are defined relative to each other. A type is something that belongs to a type system, and, to a first approximation, a type system is a collection of types.

If you cannot see this, then you would see it once you tried to give a mathematical definition of type, rather than one in prose.

As a programmer for over 20 years, I have never been concerned with what a 'type system' is.

I will grant you the courtesy of interpreting this to mean that, since you don't care about what a type system is, neither do you expect Heron users to.

If that is so, then I have to wonder why you even make such a big fuss about how to define `type'. The definition you give above is clearly of no practical use to Heron users.

I think any attempt to define a 'type system' independently of the concept of a 'type' leads to academic confabulations.

You seem to regard academic notions of `type' and `type system' as pointless.

Let me give you an example of an `academic' definition of `type'. In the simply-typed λ-calculus: o is a type; if A and B are types, then A -> B is a type; there are no other types.
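
For concreteness, here is that grammar written out as a datatype, in Haskell since I don't know Heron's syntax; the constructor names are my own:

    data Ty = O             -- the base type o
            | Arrow Ty Ty   -- the function type A -> B
            deriving (Eq, Show)
    -- o, o -> o, (o -> o) -> o, and so on; nothing else is a type.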

Now, which do you think is more useful: `a type is a semantic classification blah blah blah set blah blah blah extensional blah blah blah ', or the definition I gave above, which is absolutely precise because I gave it relative to a type system?

I have some advice for you. The abstract question, "What is a type?", is a matter for philosophers not programmers. It has nothing to do with Heron, and you are clearly not equipped to answer it. Forget about it. Rather try to answer the question, "What is a Heron type?"

[Andris wrote:] Isn't a type system just a functor from a category of sentences to a category of types?

This would be an unusual definition.

It is far more orthodox to say that a type system is simply a category, period. There is a notion of type system as a functor (see Blute, Cockett and Seely, The Logic of Linear Functors, MSCS 12(4), 2002, pp. 513-539), but it is better thought of as a way to integrate two type systems. The functor is also prominent in fibrational and presheaf models of polymorphic calculi, but of course not every calculus is polymorphic.

`a type is a semantic classi

`a type is a semantic classification blah blah blah set blah blah blah extensional blah blah blah ',

That simply indicates both your ignorance and your arrogance. I will not respond further to your post.

Decomposition

[type] is a set of values which share some kind of information, whether it is intensional, extensional or otherwise.
Rewrite, since everything after the comma is irrelevant:
[type] is a set of values which share some kind of information
So now we have 'type' as a subset of all values with the condition that the members "share some kind of information".

So, if the definition is going to be well-defined, is {a,b,c} a type? What information do these members share? How about {1,square,{a,b,c}}? Is it a type? What information do the members share?

Please differentiate your 'type' from a set.

Those are sets of values, not necessarily types

So, if the definition is going to be well-defined, is {a,b,c} a type? What information do these members share? How about {1,square,{a,b,c}}? Is it a type? What information do the members share?

There is no shared information implied in those sets; they are simply sets of values. If they were defined to be types (which doesn't appear to be the case), then I would expect them at the very least to have a label or type identity associated with them.

Labels

There is no shared information implied in those sets
Yes, there is. The information is the set membership.
have a label or type identity associated with them
Fine, then label or 'type identify' {a,b,c} as set1 and {1,square,{a,b,c}} as set2.

Now are they types?

On types and masturbation

Hey, I'm sorry if you took that the wrong way, but my point was that trying to explain an abstract word, like `type', using a lot of abstract words which most people don't understand, like `set' and `extensional', is not going to help anyone who doesn't understand `type' already. It fuzzifies rather than clarifies.

Hell, I love abstract words. They're powerful and useful, and I can spin mighty and elegant webs with them. But. Sometimes when I look at my own writing I see that I don't actually understand what I've written, and that I'm actually using the words to cover up my own ignorance, and counting on other people's ignorance not to betray me. And that's when I look for more concrete ways to write things.

Now, you may call me arrogant, but not ignorant. The truth is that I understand very well what a set is, what `semantic' means and what `extensional' and `intensional' mean. And I also know what a `type' is, probably better than anybody else here, both in the context of programming languages and in mathematics, not least because I've defined and analyzed dozens of type systems.

And I am trying to give you some advice, which is commonly sloganized as this: Keep It Simple, Stupid. Or rather, Keep It Concrete, Comrade.

I've been on this site for quite a while now, and I used to get into all sorts of philosophical discussions about this and that. And, believe me, I can sling bull with the best of 'em. But now I avoid it because, ultimately, an argument about such abstract things can only be settled by a long, informal tract (as in, something the length of a book) or a short, formal article, which assumes some mathematical knowledge. Neither of those things is suitable for publication on a site like this, and no one has the time to write them. That's why we have things like periodicals, and editors, and referees.

If you want an honest definition of `type', with no qualifier, that is, `type' in the most general sense of the word, then you have to get it from someone who knows lots of type systems, and it is clear that you don't. Who does? Well, Pierce does, and so does Reynolds, who someone quoted in the other thread:

Type structure is a syntactic discipline for enforcing levels of abstraction.

Pierce and Reynolds give reasonable definitions of `type'.

What I suggested to you is, forget that; instead add a qualifier and just define `Heron type'.

Then, when you've done that, you are obligated to check yourself, and I wonder if you know what that means. For example, you claim a type is a `set' of `values'. If this is true for Heron, then it means that 1) you also need a definition of `Heron value', and 2) you need to exhibit a model of Heron in which each type is interpreted by a set, which 3) requires that you define what a `model' of Heron is, which 4) requires that you show how each Heron program is modeled as a function, or relation, or whatever. Then you show your model is sound.

Once you've done all that, then, finally, you are in a position where it might be useful to quibble about the rest of the definition, like whether `share some kind of information, whether it is intensional, extensional or otherwise' actually adds anything meaningful to the definition. I think it doesn't, and that is why, to me, all of that reads pretty much like blah blah blah.

But anyway, by that time, your definition will be so Heron-specific that no one here will be interested in dissecting it, because we all have our own lives. Instead, if you are smart, you will read up on archetypal type systems and languages, like simply-typed λ-calculus, because that is something any knowledgeable person can discuss, and it's small enough to have a meaningful discussion about, and you will start a discussion about that, and quibble about what a type means there, and we'll have a big long thread on the meaning of `type' in STLC and, if you are very lucky, then you might learn something from that which you can then, in private, apply to your favorite language Heron. And then we may all benefit from it.

But at present, all we are doing here, all of us, fussing and fighting, is masturbating and wasting time. And now I will stop wasting your time, because I'm sure this is all lost on you anyway.

Another amendment

[Andris wrote:] Isn't a type system just a functor from a category of sentences to a category of types?
[Frank wrote:] This would be an unusual definition. It is far more orthodox to say that a type system is simply a category, period.
Now I seem to remember reading something like this - base types are (represented by?) objects (or actually morphisms from some dedicated object? initial?), monomorphic function types are morphisms, currying requires all cartesian products (to exist), sum types require all coproducts, polymorphic types are natural transformations...

If something like this is usually called a type system, fine. Then I guess the functor I "defined" is some kind of binding or application of the type system to a specific (formalization of the abstract syntax of a) PL. I am getting curious how this formalization would look for some real language (bonus points if it's not a functional one)...

Factoring it along denotations

At the risk of demonstrating my ignorance once more: if the PL in question has a denotational semantics (are values a special case of denotations?), then "my" functor from Sentences to Types factors into a denotational functor from Sentences to Denotations (Values?) and a simpler typing functor from Denotations to Types.

[on edit: quick Googling revealed that this functor from the category of syntactical objects to the category of denotations is sometimes called "interpretation".]

Only 20 years? ;)

The concept of 'type' predates the concept of a 'type system'. As a programmer for over 20 years [...]

Formal type systems, and the notion of 'type' that goes along with them, date back at least to Bertrand Russell in the early 1900s, long before there were any programmers. ;) The modern notion of types in mathematical logic and computer science follows the same basic approach. Types in programming languages are just one part of this much broader field.

It's relatively straightforward to map these formal type systems onto statically-typed programming languages, because a key point is that type systems are all about classifying terms based on the (kinds of) values they can produce or accept. That's exactly how types in a statically-typed programming language are used. Types in this context aren't associated directly with values, they're associated with terms in a program (or a proof, as the case may be).

The connection to sets of values is that the type restricts the set of values that can flow through a term. This is the sense in which it is said that type systems are "syntactic" - because terms are syntactically tagged with types, either explicitly or implicitly (e.g. via type inference).
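
To make the "syntactic" part concrete, here is a toy checker, in Haskell rather than Heron and with made-up constructors, that assigns a type to a term by looking only at the term's shape, without ever evaluating it:

    data Term = LitInt Int
              | LitBool Bool
              | Add Term Term
              | If Term Term Term
              deriving Show

    data Ty = TInt | TBool deriving (Eq, Show)

    -- The type of a term is computed from its syntax alone;
    -- Nothing means the phrase is rejected as ill-typed.
    typeOf :: Term -> Maybe Ty
    typeOf (LitInt _)  = Just TInt
    typeOf (LitBool _) = Just TBool
    typeOf (Add a b) =
      case (typeOf a, typeOf b) of
        (Just TInt, Just TInt) -> Just TInt
        _                      -> Nothing
    typeOf (If c t e) =
      case (typeOf c, typeOf t, typeOf e) of
        (Just TBool, Just tt, Just te) | tt == te -> Just tt
        _                                         -> Nothing

So typeOf (If (LitBool True) (LitInt 1) (LitInt 2)) is Just TInt, while typeOf (Add (LitInt 1) (LitBool True)) is Nothing: the second phrase is rejected without ever being run.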

Things get less simple when talking about so-called untyped programming languages. Programmers in these languages always have some theory about the types that flow through a given term - it would be impossible to write useful code otherwise. In such programs, types can be said to be latent - terms do have latent types, even if the programmer's theories about them happen to be wrong in some cases. However, this latent type information about terms is not available to a language implementation - instead, the language tags values with information relating to types, to allow it to detect invalid operations, e.g. operators being applied to the wrong type of value.

However, as many type theorists will delight in telling you, these tags are not types in the type-theoretic sense. The formal explanation of this is simple: by definition, types exist within the context of a formal system, i.e. a type system, which is expressed in a formal language. In a statically typed programming language, the formal language of types is embedded in the syntax of every program. In an untyped programming language, there is no formally expressed equivalent, which is why these languages are called untyped.

It's not even the case that the tags in an untyped language are a proper mapping of latent types into the runtime environment. Tagging values still doesn't tell you much about terms. The tags associated with values are often narrower than the (latent) types of the terms they flow through, and in some cases the tags may not be capable of expressing all latent types. An example of this is in the case of first-class functions. Functions have types like (a -> b), but untyped languages will typically tag a function value (i.e. a first-class function) with a tag which means "function", but which doesn't express the actual type of the function.
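
A rough Haskell model of such tags (the names are invented for illustration):

    data Value = VNum Double
               | VStr String
               | VFun (Value -> Value)   -- the tag says only "function"; the
                                         -- argument and result types (a -> b)
                                         -- are not recorded anywhere

    -- The tags let the implementation detect invalid operations at run time...
    add :: Value -> Value -> Value
    add (VNum x) (VNum y) = VNum (x + y)
    add _        _        = error "run-time type error: + applied to a non-number"

    -- ...but the tag on a function value says nothing about what the
    -- function accepts or returns:
    inc :: Value
    inc = VFun (\v -> add v (VNum 1))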

For your definition, since Heron is statically typed, you might want to modify the definition to make the connection between types and terms in the language. Also, as has been pointed out, it is more proper to define 'type' in terms of a type system.

If Heron were untyped, the formal ground would be shakier, and it's quite likely that what it called types wouldn't be quite the same thing as what mainstream type theory considers a type.

In Cardelli's "Type Systems", he gives a definition of type as "A collection of values. An estimate of the collection of values that a program fragment can assume during program execution." Earlier, he writes "A program variable can assume a range of values during the execution of a program. An upper bound of such a range is called a type of the variable." This is later extended from variables to "all expressions in a program". Note the emphasis on the types of variables and expressions.

(Corrections welcome)

Dead Philosopher's Society

Formal type systems, and the notion of 'type' that go along with them, date back at least to Bertrand Russell in the early 1900's, long before there were any programmers.

I see no connection between data types in computer science and Russell's notion of type. I don't think dead philosophers have to be referenced to explain what a data type is.

The connection to sets of values is that the type restricts the set of values that can flow through a term. This is the sense in which it is said that type systems are "syntactic" - because terms are syntactically tagged with types, either explicitly or implicitly (e.g. via type inference).

Pardon my frankness, but this is hogwash. Every programming language I am familiar with has the same basic concept: values and expressions may or may not have a type. A type provides extra information about a given value or expression, which might be extensional information (e.g. behavioural), intensional information (e.g. structural) or other. The concept of "term" is irrelevant to the implementation, design and use of programming languages. They may be useful for proving things about type systems, but are absolutely not necessary for providing a definition of type, nor for understanding what a type is.

In Cardelli's "Type Systems", he gives a definition of type as "A collection of values. An estimate of the collection of values that a program fragment can assume during program execution." Earlier, he writes "A program variable can assume a range of values during the execution of a program. An upper bound of such a range is called a type of the variable." This is later extended from variables to "all expressions in a program". Note the emphasis on the types of variables and expressions.

Cardelli provides an incorrect definition of type. A variable's type is not simply an upper bound on a range of values. A type can be unbounded. The lower bound is also as important as the upper bound. Furthermore, a set of values is not necessarily a type, as two types can have the same range of values and an arbitrary range of values is not necessarily a type.

Sets of Potential Values

cdiggins: Furthermore, a set of values is not necessarily a type, as two types can have the same range of values and an arbitrary range of values is not necessarily a type.

This is only as true as your design makes it. I presume that Tim Sweeney is otherwise occupied :-) so let me refer you to his interesting posts on David McAllester's Ontic language, which you can find in threads rooted here, here, here, etc. etc. etc.

Upon re-reading some of Tim's posts, I think you might find this one particularly pertinent; it offers a rather detailed definition of Tim's understanding of the meaning of "type" in the context of a modern programming language. Following that is a very useful dialogue between Tim and Frank Atanassow in which some of these definitions are challenged and expanded upon.

You've never used a typed PL?

Every programming language I am familiar with has the same basic concept: values and expressions may or may not have a type.

Even in C, everything has a type. There may be a lot of void *'s, but everything is typed nonetheless. I can thus conclude that you meant something different. Exactly what, I couldn't say.

The concept of "term" is irrelevant to the implementation, design and use of programming languages.

I think there are many people who have been involved in the implementation and development of programming languages that would disagree with you.

I meant:

I meant that when you consider all values and expressions in the set of programming languages that I am familiar with, sometimes a value or expression is typed, sometimes it isn't. Some languages obviously are strongly typed, some languages aren't.

I think there are many people who have been involved in the implementation and development of programming languages that would disagree with you.

That doesn't dispute my point at all. What about terms makes them crucial to the understanding of programming languages?

irrelevant/non-crucial

You said:

The concept of "term" is irrelevant to the implementation, design and use of programming languages.

That's obviously a different statement than "Terms are not crucial in understanding or designing every programming language." There's a difference in quantification: universal/existential. Your original statement implied ". . . [all] programming languages," which you have yet to support.

The inexpressible importance of terms

I meant that when you consider all values and expressions in the set of programming languages that I am familiar with, sometimes a value or expression is typed, sometimes it isn't.
I think what you're referring to is that some terms have explicit type annotations, and others don't. However, statically typed programming languages, and the type theories which describe their type systems, follow the rule described in the Heron docs: "every variable and expression has a type that is known at compile-time". In that sense, every term in these languages has a type. When analyzing types in a program, the presence or absence of a type annotation is not particularly important, as long as the type of the term can be determined.
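
A Haskell example rather than a Heron one, purely to illustrate:

    -- No type annotation anywhere, yet the compiler determines a type
    -- for every term here before the program is ever run:
    twice f x = f (f x)   -- inferred: twice :: (t -> t) -> t -> t
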
What about terms makes them crucial to the understanding of programming languages?

"Term" is just a term (heh) which means the same thing as "variable and expression" in the Heron quote above. It's just a little more concise than "variable and expression".

One could equivalently say "What about variables and expressions makes them crucial to the understanding of programming languages?" Hopefully you won't want to argue that point. ;)

Don't count out dead philosophers until you've understood them

I see no connection between data types in computer science and Russell's notion of type.

Just as one example, a connection to Russell is made in the first two pages of Pierce's book "Types and Programming Languages". However, I've seen various references to that connection, which isn't surprising, because it's an obvious one. Types as defined by Russell and his successors were focused on being able to reject formulae that made some claim about relationships between values that didn't fit the type system. It's this ability to reject formulae that is the key goal of type systems in logic. The type systems of statically typed languages are similarly focused on rejecting inconsistent terms. I'd say that's the primary basis of the connection, although the similarities go some way beyond that.

Pardon my frankness, but this is hogwash. Every programming language I am familiar with has the same basic concept: values and expressions may or may not have a type. A type provides extra information about a given value or expression, which might be extensional information (e.g. behavioural), intensional information (e.g. structural) or other. The concept of "term" is irrelevant to the implementation, design and use of programming languages. They may be useful for proving things about type systems, but are absolutely not necessary for providing a definition of type, nor for understanding what a type is.

I'm not sure if you're reading something into it that I didn't mean, but I used "term" to indicate more or less what you describe as "a given value or expression" - think of a term in a grammar. Your claim that "the concept of 'term' is irrelevant to the implementation, design and use of programming languages" indicates to me that we must be talking at cross purposes, since the concept of 'term' seems essential to any kind of programming language, typed or untyped.

Every programming language I am familiar with has the same basic concept: values and expressions may or may not have a type

In statically typed programming languages, every term has a type, even if it's implicit. Otherwise, they're usually not considered statically typed. Further, compilers for these languages reject terms whose types aren't consistent. That capability is what type theory focuses on.

When you say "a type provides extra information about a given value or expression", perhaps you're focusing on the fact that type information can also be used for purposes other than rejecting inconsistent phrases. That may be so, but I was really just describing what type theory focuses on, which is borne out by the definitions provided in the type theory literature. As I pointed out, this traces back to the original motivations of the philosophers who invented the earliest type theories — the consequences of which are broader than you give it credit for. As another example, and since you've rejected Cardelli's definition, here's the definition of "type system" given by Pierce:

A type system is a tractable syntactic method of proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute.

Pierce used "phrases" where I used "terms". It should be noted that Pierce points out the difficulty of defining the term "in a way that covers its informal usage by programming language designers and implementors but is still specific enough to have any bite". My explanation which you described as "hogwash" explains the sense in which type systems are a "syntactic method".

To bring this back to something concrete, are there terms in Heron which have no type that can be determined syntactically, i.e. without running the program? (That's assuming, of course, that there are terms in Heron ;)

I think I understand what is going on

The type systems of statically typed languages are similarly focused on rejecting inconsistent terms. I'd say that's the primary basis of the connection, although the similarities go some way beyond that.
I think I now understand what the source of contention is.

For the most part people are talking about "type" in the sense of logical type systems, which can only be defined in the context of a type system. I had been talking about "type" in the sense of "data type" as used in programming languages to describe the extra information associated with certain sets of values.

I hope it can be agreed that there exists a mapping from a programming language data type to the concept of a type in a logical type system, but that they are indeed separate concepts which should not be used interchangeably.

Thanks for explaining your position so patiently and clearly Anton.

Not quite

I think I now understand what the source of contention is.

For the most part people are talking about "type" in the sense of logical type systems

For the most part, I was talking about type theory in computer science, but also pointing out the connections to earlier logical type theories. A big reason that there are strong connections between CS type theory and type theories in mathematical logic is that programming languages are formal languages, and mathematical logic is largely concerned with dealing with formal languages. In that context, programming languages are just a special case.
I had been talking about "type" in the sense of "data type" as used in programming languages to describe the extra information associated with certain sets of values.
Do you have any references for the definition of data type that you're thinking of?
I hope it can be agreed that there exists a mapping from a programming language data type to the concept of a type in a logical type system, but that they are indeed separate concepts which should not be used interchangeably.

When it comes to statically typed programming languages, the CS type systems I was talking about are intended as formalizations of the type systems and types in those languages. As such, they currently provide the most rigorous definition of "type" in those languages. Without that formalization, you have an informal definition. I think that's the real source of contention: the mismatch that often occurs between an informal definition and a formal one.

Fear

Anton van Straaten: When it comes to statically typed programming languages, the CS type systems I was talking about are intended as formalizations of the type systems and types in those languages. As such, they currently provide the most rigorous definition of "type" in those languages. Without that formalization, you have an informal definition. I think that's the real source of contention: the mismatch that often occurs between an informal definition and a formal one.

An excellent point! Couple that with the fear that some of us feel after having attempted to develop and maintain large systems in latently-typed languages and bumped into some "language corners" or just "bookkeeping issues" in keeping the code coherent, and perhaps some can arrive at an understanding as to why we invest in the study of richer static type systems.

This is a very real-world concern. Even in my day job, it tends to show up as someone coming to me and saying "See how simple this API is?" and what they offer as a simple API is one with scads of functions taking and returning strings, laden with side-effects, and needing to be called in just... one... particular... order. So you need to know what to call when, what format the strings are in, how to parse the ones you get back... none of which is explicit in the API. It's simple because none of the heavy lifting is expressed (or, in some cases, expressible) in the language itself. This is the kind of thing that I fight, even in "statically typed" languages like C and C++.

Every argument can be countered...

"It turns out that a fair amount of careful analysis is required to avoid false and embarrassing claims of type soundness for programming languages. As a consequence, the classification, description, and study of type systems has emerged as a formal discipline." — Cardelli 1996

The study of formal type systems is way older than the study of type systems in programming languages. And a lot of type theory isn't about programming languages but about different kinds of logics.

"Q: Why bother doing proofs about programming languages? They are almost always boring if the definitions are right. A: The definitions are almost always wrong." — Pierce 2002

So are the proofs. (Hmm, some Martin and, uhm, Löf, right? ;-)

"Just because you've implemented something doesn't mean you understand it." — Brian Cantwell Smith

If you think you understand something it usually just means you don't know how to implement it.

"The fundamental problem addressed by a type theory is to insure that programs have meaning. The fundamental problem caused by a type theory is that meaningful programs may not have meanings ascribed to them. The quest for richer type systems results from this tension." — Mark Manasse

Unfortunately, it seems a lot of meaningful programs can more easily be written in very poor/dynamically typed languages. Where is the study which proves we need these rich type systems?

I see no connection between data types in computer science and Russell's notion of type.

We're not talking about data types. Just types in general. And hey, both applications use types somehow for solving semantic issues.

A type system is a tractable syntactic method of proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute.

Not only to prove absence, but also to prove presence of properties such as, say, resource utilization or timely completion.

 

Definitions have different meanings in different contexts. I think it was Nietzsche who introduced the term "angry consciousness" for people who are afraid of accepting paradoxes or diversity.

It seems more worthwhile to stick to the original question here: was the given definition of a type in Heron good? I personally don't like the "it is a set of values" but would stick to "it describes a set of values" or even "it describes the structure of a set of values" and drop the rest.

Cheers, Marco

Poor rich type systems

Unfortunately, it seems a lot of meaningful programs can more easily be written in very poor/dynamically typed languages. Where is the study which proves we need these rich type systems?

In Manasse's sense, dynamically typed systems are richer than statically typed systems. The reason why so much effort is spent devising static type systems is that, though they deliver less, what they deliver they deliver in good time, i.e. before you run your program.

Nice!

cas: In Manasse's sense, dynamically typed systems are richer than statically typed systems. The reason why so much effort is spent devising static type systems is that, though they deliver less, what they deliver they deliver in good time, i.e. before you run your program.

Thanks, cas, for doing such a great job of explaining, in two sentences, what I've taken many dozens of posts in a futile attempt to explain! :-)

Not a counter, just a corollary

A type system is a tractable syntactic method of proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute.

Not only to prove absence, but also to prove presence of properties such as, say, resource utilization or timely completion.

Proving absence of some class of behavior proves the presence of the class of behavior which remains after the class whose absence has been proved is removed.
:)

Is this constructive?

Or intuitionistic? :)

Well, I spotted the smile...

constructive + intuitionistic = ...

I think I read a comment by Per Martin-Löf, and, uhm, from what I remember, that particular mistake was best described as ... holistic ;-)

Cheers, Marco

Denial of the antecedent?

Proving absence of some class of behavior proves the presence of the class of behavior which remains after the class whose absence has been proved is removed. :)

I think I see what you mean, but I'm wondering if the wording you used is accurate. It seems like it would allow: "I have proved that there are no type errors, so any behavior that is not a type error is definitely present".

I agree that disproving something is a way of proving something else, but does anybody know a better way of stating that? Or am I mistaken about it being incorrect in the first place?

Proof of absence is not absence of proof :)

My original sentence was something I came up with on a first pass, but its tortuousness appealed to me so I decided to leave it as it was.

Note that it claims a proof of "the presence of the class of behavior which remains..." It was not intended to claim the presence of anything that wasn't already there. However, talking about the presence of a class tends to imply that the entire class is present, which is misleading and leads to the conclusion you proposed.

I think it's easier to describe it in terms of type systems: type systems reject program phrases which don't satisfy certain constraints. The remaining phrases, in any given program, are thus guaranteed to satisfy those constraints.

Meaningless meaningful programs

I know that, although Manasse's statement seems paradoxical. (My remark was just made to show that every argument can be countered.)

If there is no encompassing definition for type, or type system, and it seems there are very different definitions of it, then the original poster is right that it only makes sense to provide one in his specific Heron context. And it only makes sense for us to comment on that definition in that context.

Cheers, Marco

Note: hit the wrong button. This is a reply to a post by cas.

Math is hard

Math is hard.

Hard or not (it's too hard for me, anyway), I believe it's also a complete and useful description of a type system, in the context of which "type" has a fairly unambiguous meaning (that is not necessarily the same as the meaning of "type" in some other system).

Dominic Fox: Hard or not (it'

Dominic Fox: Hard or not (it's too hard for me, anyway), I believe it's also a complete and useful description of a type system, in the context of which "type" has a fairly unambiguous meaning (that is not necessarily the same as the meaning of "type" in some other system).

Right. That's "Algorithm W," the tried and true algorithm for Hindley-Milner type inference. Dates to 1975 or so. There are more interesting algorithms, defining more interesting type systems, in the literature. The oft-cited "Types and Programming Languages" offers several, and that's without even getting into dependent types and the Calculus of Constructions and so forth.
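
For the curious, the heart of it is unification of type expressions; here is a minimal sketch in Haskell (the names and representation are illustrative, not taken from any particular implementation):

    import qualified Data.Map as M

    data Type = TVar String        -- a type variable, e.g. a
              | TCon String        -- a base type, e.g. Int
              | TArr Type Type     -- a function type, a -> b
              deriving (Eq, Show)

    type Subst = M.Map String Type -- substitution: type variable -> type

    apply :: Subst -> Type -> Type
    apply s t@(TVar v) = M.findWithDefault t v s
    apply _ t@(TCon _) = t
    apply s (TArr a b) = TArr (apply s a) (apply s b)

    -- Compose substitutions: apply s1 over the range of s2, then union.
    compose :: Subst -> Subst -> Subst
    compose s1 s2 = M.map (apply s1) s2 `M.union` s1

    -- Free type variables, used for the occurs check.
    ftv :: Type -> [String]
    ftv (TVar v)   = [v]
    ftv (TCon _)   = []
    ftv (TArr a b) = ftv a ++ ftv b

    -- Bind a variable to a type, failing if the variable occurs in it.
    bind :: String -> Type -> Maybe Subst
    bind v t
      | t == TVar v    = Just M.empty
      | v `elem` ftv t = Nothing
      | otherwise      = Just (M.singleton v t)

    -- Unify two types: the most general substitution making them equal,
    -- or Nothing if no such substitution exists.
    unify :: Type -> Type -> Maybe Subst
    unify (TCon a) (TCon b) | a == b = Just M.empty
    unify (TVar v) t                 = bind v t
    unify t (TVar v)                 = bind v t
    unify (TArr a1 b1) (TArr a2 b2)  = do
      s1 <- unify a1 a2
      s2 <- unify (apply s1 b1) (apply s1 b2)
      Just (compose s2 s1)
    unify _ _                        = Nothing

Algorithm W proper threads substitutions like these through a traversal of the term, generalizing at let-bindings; the above is only the unification step at its core.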

To the extreme

Math is extremely hard on everyone, especially when you try to understand formal logic systems which try to develop models for "math" (if such a thing is possible; even the T-Shirt text you linked to seems informal to a logician). I sometimes still dive into it, but there are few "moments of perfect clarity" in the formal logic field, it seems.