No! (As Simon Thompson explains.) You can't help but love the "exploration of the length function" at the bottom. Made me smile in the middle of running errands.

I saw nothing on the Elm site that indicated a type system rich enough to describe integer divide without possibility of a runtime exception, nor did I see any pointer to a real description of the type system. Can somebody offer either a link or an explanation? Thanks!

Elm just returns 0 for integer division by 0.
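For contrast, here is a quick Haskell sketch (hedged; `safeDiv` is an invented name, not a library function): Haskell's `div` does raise a runtime exception on a zero divisor, and a total variant mirroring Elm's return-0 convention is a one-liner:

```haskell
import Control.Exception (ArithException, evaluate, try)

-- Elm-style total integer division: return 0 on a zero divisor.
safeDiv :: Int -> Int -> Int
safeDiv _ 0 = 0
safeDiv n d = n `div` d

main :: IO ()
main = do
  print (safeDiv 7 2)  -- 3
  print (safeDiv 7 0)  -- 0, where Haskell's `div` would throw
  r <- try (evaluate (7 `div` 0)) :: IO (Either ArithException Int)
  print r              -- Left divide by zero
```

Note that Elm's choice is about totality, not correctness: the type system doesn't rule the division out, the language just picks an answer.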

Yes, Haskell deserves to get beaten up regularly for the stupid decision about the length function. (It's been criticised frequently and vehemently on Haskell forums; the defence seems to be "we spent so long arguing about it that in the end it became a rushed job and somebody just got on with it". We know who that someone was, and we know they didn't even communicate the decision until after it was released.)
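For readers who missed the controversy: in GHC 7.10 (base-4.8), `length` was generalised from lists to any `Foldable`, so it now typechecks, and silently returns 1, on a pair:

```haskell
-- Since base-4.8 (GHC 7.10): length :: Foldable t => t a -> Int
main :: IO ()
main = do
  print (length [1, 2, 3])              -- 3: the list case everyone expects
  print (length (True, "hello"))        -- 1: a pair is Foldable over its second component
  print (length (Just 'x'))             -- 1
  print (length (Nothing :: Maybe Char)) -- 0
```

The pair case is the one that generates the bug reports: no type error, just a quietly surprising answer.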


Thompson says of other possible languages that there aren't the mature texts that Haskell has; but there aren't mature texts for Haskell as it is today, especially ones that describe what's going on with length. All the mature texts describe Haskell 98, quite possibly including the pre-monad approach to IO (as in the "Gentle Introduction").

As I monitor proposals for extensions, I spend most of my time lamenting that what seemed to be such a clean language has become bloated with strange notations, and especially with impenetrable error messages to do with higher-kinded skullduggery that I never suspected I was using.

Thompson concludes by asking: why not go for subsets of Haskell? There is a subset of Haskell that's stable and close to the mature texts: Hugs. It has just one tiny annoyance, to do with overlapping instances combined with functional dependencies, and there's a clean way to fix that. Hugs is a self-contained download that, for teaching purposes, doesn't involve dependency hell on some vast infrastructure of libraries with bizarre definitions of length and friends.

Oh, and I'd love for someone to revive TRex.

The horrible "length" error message in a certain case is an example of the fact that you want language "levels", giving beginners an explicit way to remain within a simpler subset of the language. Having higher-kinded type classes leak into an error message is an example of "abstraction leak", where someone in a beginner scenario is given information expressed in terms of things they don't know. The boundary that leaks is the (implicit, unspoken) simple-language-subset boundary, and it needs to be explicitly designed.

I have thought about this issue on-and-off for a while and my current approach is that this is an interesting instance of "multi-language programming system", that can be worked on using tools for this larger class of programming systems. The "teaching subset" and the full language are two languages that are mixed in the same system (if you design a library for your students to use, you may restrict its interface, but the implementation will use the full language), and we need ways to speak of what it means for these two languages to gracefully interact together.

The Racket teaching-languages tower previously mentioned is a good step in this direction, but no one has studied what *static* guarantees can be given/respected by the various language subsets, in terms of interacting with each other. This question feels more natural when working with typed languages, but it is of interest for any combination of languages, typed and untyped.

I started working on this question last year; see a draft article and the corresponding talk slides, which actually discuss more of the high-level picture.

What would be really great would be a sequence of languages that gradually add features as they are taught.

There is only one way to do this properly, and that is to use literate programming, where the tutorial material IS the language.

Unfortunately, that is really hard to do because the logical structure of ideas is "spread out" in code, especially in functional code, which is one reason the functional programming style was dropped in favour of the data-driven (OO) style several decades ago. That criticism remains valid.

I'm trying to construct a tutorial that introduces my language Felix, starting at a beginner level and getting more difficult as one progresses, but it is HARD because things are interrelated. In fact, the very early introduction of ideas is the hardest part, I think. Once the basics are mastered, extensions can be introduced more easily.

There are a lot of issues. One of them is the "small white lie" problem. Often you have a general construction and a special case, but you need to introduce the special case first and then generalise it. But it's a lie, because the *system* does not work by extension.

For example, one can introduce various integer and floating point types and explain the operations. Then one can generalise to algebra represented by type classes. The problem is the "special purpose" library code doesn't exist: in the real world the non-typeclass based code was *refactored* when the generalisation was done. So an accurate record of the progress of ideas is actually historical.
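A tiny Haskell illustration of that refactoring (the names `sumInts`, `sumDoubles`, and `total` are invented for this example): the monomorphic versions a tutorial would introduce first no longer exist in the real library; only their `Num`-generalised replacement does:

```haskell
-- The "special purpose" code a tutorial introduces first...
sumInts :: [Int] -> Int
sumInts = foldr (+) 0

sumDoubles :: [Double] -> Double
sumDoubles = foldr (+) 0

-- ...and the type-class generalisation that replaced both of them
-- when the library was refactored.
total :: Num a => [a] -> a
total = foldr (+) 0

main :: IO ()
main = do
  print (sumInts [1, 2, 3])           -- 6
  print (total [1, 2, 3 :: Int])      -- 6: same code, now one definition
  print (total [0.5, 0.5 :: Double])  -- 1.0
```

Teaching `sumInts` first and then `total` reproduces the history of ideas, but the codebase you point students at contains only the endpoint.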

Similarly, in Felix everything is polymorphic. Non-polymorphic functions are polymorphic functions that happen to have zero type variables. Zero is a special case of a number, but historically Euclid didn't have zero, and the greatest common divisor algorithm was written by considering the positive and empty cases separately. The generalisation to include zero as a natural number is simpler in some ways but harder in others: now we have to explicitly check the unified algorithm at the zero boundary, a condition the original algorithm excluded by construction.
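The gcd point, sketched in Haskell (both definitions invented for illustration): Euclid's positive-only, subtraction-based algorithm versus the unified modern version, where zero becomes the base case of a single recursion and must be checked explicitly:

```haskell
-- Euclid's original idea: positive inputs only, subtraction-based,
-- with the equal case handled separately (no zero anywhere).
gcdPos :: Int -> Int -> Int
gcdPos a b
  | a == b    = a
  | a > b     = gcdPos (a - b) b
  | otherwise = gcdPos a (b - a)

-- The unified version: zero is admitted as a natural number
-- and becomes the base case of one uniform recursion.
gcdNat :: Int -> Int -> Int
gcdNat a 0 = a
gcdNat a b = gcdNat b (a `mod` b)

main :: IO ()
main = do
  print (gcdPos 12 18)  -- 6
  print (gcdNat 12 18)  -- 6
  print (gcdNat 12 0)   -- 12: the boundary case the old algorithm never faced
```

The unified version is shorter, but the `gcdNat a 0` line is exactly the boundary condition the tutorial-order presentation has to go back and justify.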

Is Haskell a good teaching language? You have to be kidding. I have a lot of programming experience and I can't make head or tail of it. There isn't enough redundancy, especially not enough type annotations, to actually see patterns in the code. People think understanding functional concepts is hard. They're wrong. It's the *syntax* of a language that is hard (and knowing how to understand the huge libraries most languages come with). This is also why Scheme is a woeful language. Too many parens. No redundancy. It's also why type inference is a really bad idea. The resulting code is unreadable and isn't amenable to refactoring, because one has no idea what type things are, because it isn't written down. Redundancy requirements vary from 15 to 30% depending on level of expertise. Haskell is like 5%. Incomprehensible. Of course mathematics is the worst language; it's so compacted that most maths is 30% *ambiguous*, and you need to understand it already to read the notation. [Numbers invented for illustration only]
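The redundancy complaint can be made concrete with a contrived example (`mergeLens` and `mergeLens'` are invented names). Both definitions compile; for the first, the reader must run type inference in their head, and the inferred type is the fully general `Foldable` one. The "redundant" signature on the second documents the intent:

```haskell
-- Fully inferred: the type GHC assigns is
--   (Foldable t1, Foldable t2) => t1 a -> t2 b -> Int
-- which the reader has to reconstruct mentally.
mergeLens xs ys = length xs + length ys

-- The same function with the signature written down.
mergeLens' :: [a] -> [b] -> Int
mergeLens' xs ys = length xs + length ys

main :: IO ()
main = print (mergeLens [1, 2] "abc", mergeLens' [1, 2] "abc")  -- (5,5)
```

Whether the annotation is "redundancy" or documentation is exactly the disagreement in this thread.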

Simon mentions, but never actually responds to, the question of language levels à la Racket.

I'm failing to see an issue here that is not addressed by the Racket approach.

You don't have to like the built-in languages: the whole point of Racket is that you can build your own language about as easily as building a library. Several textbooks come with their own sets of language levels. Some are even typed: I know because mine (PLAI, second edition) is one of them (the plai-typed language is Matthew Flatt's implementation of H-M type inference).


Indeed, this is the problem with pointing to some specific subset (e.g., Hugs): it won't work for everyone. You need the ability to build your own.
