Announcing Lang.NEXT 2014 - Registration is Free and Now Open

Lang.NEXT is a cross-industry conference on what’s trending in programming languages (optional types, anyone?). It occurs every couple of years and is sponsored by Microsoft. We bring together a broad range of language experts as speakers as well as audience, and focus on a lively and intense exchange of ideas, straddling the usual barriers between different segments of the community, and between academia and industry.

I am very happy to let you know that the 2014 Lang.NEXT conference is now open for registration.

Here is the 2014 program (more details to appear!), which includes keynotes by C++ creator Bjarne Stroustrup and functional programming fundamentalist (reformed?) Erik Meijer. We have a great list of speakers for this iteration of Lang.NEXT. Please join us on May 29-30, 2014 in lovely downtown San Francisco at the Microsoft office at 835 Market Street. Attendance is free of charge (and there will be free beer, breakfast/lunch, and an attendee party, a first for Lang.NEXT!). Registration is required and seating is limited. If you register, then please do actually attend.

Hoping to see you there!

Charles and the Lang.NEXT team (Mads, Dustin, Jonathan and Luke)


Erik Meijer is back to his old self, scrap the reformed

Just read this latest ACM entry. I couldn't disagree more but I guess he has got a company to push now.

Looks a bit like a European thing to me; people starting to believe their own bullshit. Then again, guess it might make him rich. Who knows?

Erik has some valid points,

Erik has some valid points, at least. You can't just dismiss them with the sheer power of grumpiness. (If that worked, the world would already be a lot more to my liking!)

Erik does have some valid

Erik does have some valid points, and so let the debate begin (start here). He puts up a lot of strawmen and exaggerations to make his argument, though. You can mix imperative and functional programming in C# without going totally bananas about it.

Well I can

Man, I just don't feel like repeating what I've said here on LtU over the last ten years. And it doesn't make much sense to even discuss it. These people have (career) agendas; it isn't a very rational process.

(Though I just saw that a year ago he proclaimed that "Obsession with Monads is a medical condition." Great, that only took ten years.)

I'll bite anyway

No. My main beef is that he reverses a number of arguments. People once started off with the idea "Wouldn't pure lazy programming be great!" Yah, that would be great; if you can get it to work, and I am not sure the verdict is in on that one. And then that implied a number of things, like monadic IO and various manners of dealing with side effects. And then they started selling these implied trade-offs as great ideas.

Most of his remarks don't make much sense unless you buy into the idea that we should do everything lazy and pure.

It's easy to complain about

It's easy to complain about what can go wrong with imperative programming, and by the principle of grass-is-always-greener one would want to eradicate all things imperative. (One might then complain about what can go wrong under the new regime, and hark back to the good old days of imperative programming, which doesn't strike me as any less naive.) Myself, I want to understand what's good about imperative programming, because there's obviously something going on with it, and if we don't grok what that is, when we discuss whether or not to eliminate it we literally don't know what we're talking about.

It's easy to complain about

It's easy to complain about what can go wrong with imperative programming

Unfortunately the discussion often tends to get framed quite badly by both sides. No-one that I know of is actually arguing for eliminating imperative programming, even if they say they are. The actual functional argument is that mutable state should be better managed and only used when needed, rather than gratuitously or by default.

With that in mind, the issue being complained about is "imperative languages", not imperative programming. You certainly want a language that supports imperative programming well, but an imperative language that uses an imperative model by default is a bit more of a suspect idea.

One factor that contributes to confusion in these discussions is that the majority of imperative programmers today haven't actually done any significant work with a so-called "purely functional" language. When they've studied those languages, the pedagogical materials tend to start out by ignoring mutable state entirely. This is justified as a "lie to children", less condescendingly known as Wittgenstein's ladder. Wittgenstein wrote the following about this:

"My propositions serve as elucidations in the following way: anyone who understands me eventually recognizes them as nonsensical, when he has used them—as steps—to climb beyond them. (He must, so to speak, throw away the ladder after he has climbed up it.) He must transcend these propositions, and then he will see the world aright."

Taking this to heart, we can conclude that the idea of truly purely functional general-purpose languages - languages with no support for mutable state - is nonsensical, as is the idea that imperative programming should be entirely eliminated. When you see those propositions being discussed, you've almost certainly run into a case of this:

when we discuss whether or not to eliminate it we literally don't know what we're talking about.

The real debate is, or should be, about things like how best to manage effects, how best to modularize and compose state, and how to improve the existing prevalent model of imperative-by-default.

In that context, the argument that the imperative status quo is just fine is a lot more difficult to defend.

Haskell just has great sales

"The actual functional argument is that mutable state should be better managed and only used when needed, rather than gratuitously or by default."

Yeah, well. I find it rather revealing that roughly the only people in the world worrying about this are people with a background in Haskell. And they seem to have forgotten that Haskell needed to factor out side effects because lazy evaluation is an unpredictable operational model. They sold that as a great idea, but it's really doubtful whether you should keep track of side-effecting code.

Maybe you can make code better by keeping track of side-effecting code; it's a nice research project. But it reminds me too much of the "should we fully annotate exceptions on every function declaration?" debate. Nah, you shouldn't.

Haskell just has great

Haskell just has great sales

I doubt you can explain its success based on that alone.

I find it rather revealing that roughly the only people in the world worrying about this are people with a background in Haskell.

I don't think that reveals what you think it does, because people who worry about these issues will sooner or later discover Haskell. The main observation here is that Haskell is the only reasonably popular general-purpose language that chose to focus on its particular set of properties, so it ended up as the poster child for a purer style of functional programming.

That may be a disadvantage in terms of a productive debate about improving programming languages, because it makes it seem more like advocacy for a specific programming language than about improving programming models.

Maybe you can make code better by keeping track of side effecting code

It's not just about keeping track of it, it's about relying less on it when it's not necessary, and having the language features to support that.

It isn't a lie when you tell it to a student

No, there is something to it. Like Prolog's DFS unification, lazy evaluation is a more expressive model. And purity often helps in writing clean code. But it's doubtful that purity and laziness buy you much, and it's very doubtful that looking at the world from a categorical perspective will buy you anything. (For instance, monads are more of a disease compared to the nice functional code you want people to write. Even Haskellers now admit that.)

Students buy into it. It solves a number of problems. That's fine. I personally won't touch it. (Though I taught it as a lecturer since I want students to understand FP.)

(There are no students here I hope.)

inference and monads

I think mutability contained within a function that appears pure to the outside should be possible, provided the encapsulation is guaranteed by the compiler. I think mutable objects are fine, as they represent a unit of enclosed mutability and are semantically similar to a state monad.

Imagine that you have a language that looks like 'C' but everything has the correct monadic type. Imagine monads extended with row types allowing monadic mixins derived by type inference. With the right type system the gap between imperative and functional can be closed.
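
The encapsulated-mutability point in the first paragraph is exactly what GHC's `ST` monad provides: the type of `runST` guarantees no mutable state escapes. A minimal sketch (the function name is illustrative):

```haskell
import Control.Monad.ST
import Data.STRef

-- sumST uses a mutable reference internally, but runST's rank-2 type
-- ensures no mutable state escapes, so the function is pure from the
-- outside.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

-- sumST [1, 2, 3] evaluates to 6, with no observable effects.
```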

Wittgenstein's ladder is

Wittgenstein's ladder is profoundly misguided. On the one hand it produces experts who believe the lies (which is why so many scientists don't understand what science is); on the other hand, outsiders who believe the lies since they go no further (which is why so many non-scientists don't know what science is); and on the gripping hand, outsiders who, told one thing and then another contradictory thing, go away in disgust, confirmed in their belief that all academic learning is bullshit (which is why so many non-scientists mistrust science).

I suspect this stuff about managing mutable state may be already too elaborate to really grok what the imperative paradigm is about (rather as, in quantum physics, by the time you're working with operators on Hilbert spaces you're already too far down to see the forest for the trees).

Wittgenstein's ladder is

Wittgenstein's ladder is profoundly misguided

The current example certainly supports that case. There are real issues with introducing the details of a complex topic gradually, but I'm not a pedagogy expert.

I suspect this stuff about managing mutable state may be already too elaborate to really grok what the imperative paradigm is about

It's difficult to grok what the imperative paradigm is about in an imperative language, because the paradigm is too pervasive, so there's nothing to compare it to - it's just the way you do everything. That's another factor that makes for a difficult debate.

Four kinds of time

I've got mostly just one thought to show for my struggles with the imperative concept, thus far: that there are four different kinds of time involved. There's time for the programmer, which is of concern for revision management. Time for the compiler, which is of concern for abstraction. Time for the processor at runtime, which seems the central concern for imperative programming. And time in the real world with which the running program interacts, which is tangled up with processor time. Maybe imperative programming is only concerned with the last two; but I'm not too sure there isn't some deep interaction with the first two.

Processor time can be

Processor time can be eliminated as a concern for the imperative programmer, which is the focus of my research*. Actually with live programming, we also deal with compiler time in a more elegant way, creating a feedback loop that optimizes programmer time.

What you have left is real time, and there is no way to get rid of that without getting rid of the arrow of time.

* One can argue whether Glitch is truly imperative, or rather something else. But the state operations, like adding to a stateful set and setting a stateful cell, are still there, making it feel at least imperative. Anyway, this is completely orthogonal to monads and whether your effects are explicit or not.



Bonus points

Bonus points for using both "gripping hand" and "grok" in a single post.

Pick a side, we're at war

People once started off with the idea "Wouldn't pure lazy programming be great!" Yah, that would be great; if you can get it to work, and I am not sure the verdict is in on that one.

I posted another comment about the purity issue, but the situation with laziness is similar: using Haskell as the obvious example, real Haskell programs are filled with explicitly specified strictness, and the reality is that both strictness and laziness are useful. So as with pure vs. imperative, the real debate shouldn't be about which one to choose.
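
For readers who haven't seen what that explicitly specified strictness looks like, a common case is forcing an accumulator with a bang pattern so a fold doesn't build a chain of thunks (a sketch, not from the thread):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Without the bangs, go's accumulator would build up a chain of
-- unevaluated thunks (0 + 1 + 2 + ...); the bangs force it at each
-- step, making the loop run in constant space.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs
```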

These people are here for my entertainment

Man. I really don't choose sides in this. I like the idea of pure lazy functional programming, but I am just waiting to see how the war will play out.

Personally, I don't like most Haskell code when I read it so I wonder what went wrong there. And I don't like that programmers need to fall back to strictness annotations when writing real code, so I doubt Haskell is really useful as a GPL. (I see that as another bad hack; people were just mistaken that lazy evaluation is that useful.)

They push it as a GPL, and I agree someone should. But I expect they're aiming at the wrong problem domain. I expect it'll end up like Prolog: a nice operational model for some problems, but you pay too much for the abstraction; overkill for normal programming, but nice for scientific programs.

Sorry, but I really can't care much about it.

My point is not that

My point is not that Erik is right about everything, or that the whole world should be programming in Haskell or an equivalent. That might in fact be an improvement, but it's not going to happen any time soon.

I'm saying he raises some valid points. On the "mostly functional" issue, I'd have to do some work to figure out exactly how aligned my position is with his, but what I would say is that embedding pure approaches in an imperative-by-default environment is unlikely to be optimal. It needs to be the other way around.

Erik should stick to research

Maybe it's a teaser but I don't see anything there except for reversal of a great many arguments and a possible research proposal.

He completely ignores Lisp, ML, and Scala and it's all written from a Haskell perspective. (This is how the Mormon faith was born I gather.)

I don't believe in separating purity from side-effecting/imperative code. If it were that great, he'd need to show it just once. It's at best a research proposal with too many contrived arguments from the Haskell-popularized discourse. I am not buying.

Depends on how you define side-effects

I think an interesting discussion is: when is a side effect a side effect (before thinking about whether they should be separated)? Is a mutable container a side effect? I don't think it is if it is local. Consider a function that internally uses a mutable array, but whose result depends only on its arguments; I don't see any side effects there. If you pass a mutable object into a function by reference, are there any side effects if the function's output depends only on its input arguments? What if that object is a handle for a database that you can read and write to? There are some things that you can't have, like static variables in functions and global variables.

Whether a function is referentially transparent seems to depend on a certain point of view. For example a function that reads from a file would seem to not be referentially transparent, but if we see the argument to the functions as the actual file-data rather than a file-handle, then surely calling the function with the same data would produce the same result. Of course in this case the file-argument would be mutated by the function, so calling the function again is not actually calling it with the same argument. This requires certain semantics to be obeyed, for example copying the 'file-handle' should actually copy the file (and the current state, like open/closed and the current read/write location). In effect the file behaves like a list in a pure functional language.

Fresh Variables, logging and procs. Products and Productivity

Nah. Sorry. I still think you're buying into a silly Haskell-related story someone once sold you. I agree with the array example. I think you can even do this in Clean with uniqueness typing; no idea what Haskell makes of it.

Have you ever tried making fresh variables while you transform a program? In a pure setting, in some cases, you'd need to thread a monad throughout the whole program to make sure the variable is fresh with respect to the term. In an impure setting (this is what I did) you can just increase a global counter. And that leads to readable code. I am not even sure math is that pure. I am certain there are typing schemes with rules like "v ∉ FV(t), v is fresh" where they magically pull a fresh variable w.r.t. the total term out of a hat.
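
For concreteness, here is a hedged sketch of the pure side of that tradeoff: a fresh-name supply threaded with a `State` monad (assuming the mtl library; the names `FreshM` and `freshVar` are illustrative, not from the thread):

```haskell
import Control.Monad.State

-- A fresh-variable supply threaded through the program as a monad,
-- instead of bumping a global counter.
type FreshM = State Int

freshVar :: FreshM String
freshVar = do
  n <- get
  put (n + 1)
  pure ("v" ++ show n)

-- Every function that needs fresh names must now run in FreshM;
-- this is the threading the comment above objects to.
renameAll :: [a] -> FreshM [String]
renameAll = mapM (const freshVar)

-- evalState (renameAll "xy") 0 yields ["v0", "v1"]
```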

Or logging. Maybe you'll want to insert trivial logging around your functional webserver program regarding transactions being made. BOOM! Now you'll need to thread a logging monad throughout all of your beautifully crafted functional code.

Purity is a pain.

Then there is the fact (I forget where I read it) that people have already tried this in a naive manner, making a distinction between functions (pure) and procedures (impure). The lesson learned was that very often procedures needed to be coerced into functions at a later stage to keep everything working and reasonably straightforward.

People build products; they need to create them fast. I am betting the productivity hit of doing stuff too neatly is insurmountable.

It's not Halloween yet

Or logging. Maybe you'll want to insert trivial logging around your functional webserver program regarding transactions being made. BOOM! Now you'll need to thread a logging monad throughout all of your beautifully crafted functional code.

This is a bogeyman that imperative programmers use to scare children. In practice, a non-trivial application would already be using a monad or monad transformer stack, and the change you're describing would be performed centrally and easily. In the web server example, the Haskell web server libraries all provide parameterized monads to make it completely trivial.
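
A hedged sketch of what "performed centrally" means (hypothetical names; assuming the mtl `WriterT` transformer): the logging layer is added once to the application's backbone monad, and code already running in that monad does not change:

```haskell
import Control.Monad.Writer

-- The application's backbone monad; adding the WriterT layer here is
-- the single central change needed to introduce logging.
type App = WriterT [String] IO

logMsg :: String -> App ()
logMsg s = tell [s]

-- A hypothetical request handler running in the backbone monad.
handleRequest :: String -> App String
handleRequest req = do
  logMsg ("handling " ++ req)
  pure ("response to " ++ req)

main :: IO ()
main = do
  (resp, logs) <- runWriterT (handleRequest "/index")
  putStrLn resp
  mapM_ putStrLn logs
```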

Of course in the scenario where a complete newbie is for some reason writing a purely functional web server without reference to standard practices, yes, they'll find out at some point why monads are useful. That's called learning the hard way, and it's not necessarily a bad thing. If you want to avoid it, you just need to read a relevant book first.

So when you write "Purity is a pain" and "I am betting...", you seem to be describing the classic fears of imperative programmers about purity, not the practical reality.

Frankly the biggest problem with this is that if you tell an average mainstream language user that in order to do logging, they need to modify the monad transformer stack definition, they'll look at you as though you just sprouted horns. And they probably have a point.

No deal. Old argument, another dirty fix

There is no point in threading a 'monad stack' throughout your whole program. If I need to thread the world throughout my whole program, I'll write a preprocessor to annotate all functions with that stack so that I'll just have my global variables; something the compiler writers should have done. Why do I need to do all the work? They need to come up with something better or I am not buying.

I don't know if it's what

I don't know if it's what you're going for, but I've taken to reading all your comments to myself in a Rodney Dangerfield voice.


I am trying to remain polite too.

Still not how it works

You're still speculating about things that don't happen in practice. In practice, such monads are never threaded through the whole program - only through the application's backbone, for the kind of monad we're discussing.

From there, pure functional code may be invoked, or functions that use some other monad. This makes the package of state and effects that a given piece of code relies on explicit in the type, which is very helpful when growing and maintaining a program over time.

Elsewhere you wrote "If [separating purity and side-effecting/imperative code] would be that great, [Erik would] need to show it once." But these benefits have been experienced, at least anecdotally, by thousands of programmers who've all had similar good experiences with it. Can you suggest some way that this could be "shown"?

They need to come up with something better or I am not buying.

That's a perfectly reasonable attitude for a programmer to have, assuming they're not trying to develop a language themselves, or doing the whole mind-expanding thing with unfamiliar paradigms. But as I've already pointed out, the real consequence of the issues Erik is raising is that there's still a lot of room for improvement of mainstream languages. Think of Haskell as a proof of concept if you like. The concepts are more important than their specific incarnation. The strength of the incarnation is just a surprisingly useful bonus.

Something better

They need to come up with something better or I am not buying.

They have: algebraic effects and handlers. They've even been implemented in Haskell, by the Oleg.

These techniques aren't mainstream by any means; they arrived very late in the development of Haskell's ecosystem. But a few new languages (e.g. Koka and Kitten) are using them. What they offer is a much flatter (not a stack), more efficient, composable, and fine-grained model for threading effects through purely functional programs.

If you need global variables, you could use one effect per variable. Or one for the group. How you break things up is pretty flexible.
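
Since "algebraic effects and handlers" can sound abstract, here is a minimal self-contained sketch of the idea in Haskell: an effect (`Tick`) is just a request, and a handler (`runCounter`) gives it meaning. This is a toy free-monad encoding with illustrative names, not the optimized freer-monad machinery from the libraries mentioned above:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- The effect signature: a single operation requesting a fresh number.
data Counter k = Tick (Int -> k)
  deriving Functor

-- A minimal free monad over the effect signature.
data Free f a = Pure a | Op (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a) = Pure (g a)
  fmap g (Op fa)  = Op (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g <*> x = fmap g x
  Op fg  <*> x = Op (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a >>= g = g a
  Op fa  >>= g = Op (fmap (>>= g) fa)

tick :: Free Counter Int
tick = Op (Tick Pure)

-- The handler assigns meaning to the effect: here, a pure counter.
-- A different handler could log, randomize, or thread real state.
runCounter :: Int -> Free Counter a -> a
runCounter _ (Pure a)      = a
runCounter n (Op (Tick k)) = runCounter (n + 1) (k n)

example :: (Int, Int)
example = runCounter 0 ((,) <$> tick <*> tick)
-- example evaluates to (0, 1)
```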

Fresh Variables

I don't see any difference: in C++, if you wish to generate fresh variables, the number source must be routed to everywhere you want to use it, just like in Haskell. If we are writing multi-threaded code (as would be sensible on modern architectures) then you cannot use a global variable, as the locks make it inefficient, and global variables limit code reuse too; you probably want to write the code in a reusable library of some kind so you can use it in other projects.

The solution in C++ or Haskell is a splittable generator: when you split, it fixes the least-significant bit to an even (0) value in one copy and an odd (1) value in the other. You can then pass these into each thread and have each thread get a unique sequence of fresh variables with no overhead. In C++ you will need to pass these around as thread arguments.
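
A hedged sketch of such a splittable supply (illustrative names; real library implementations are more careful about bit allocation):

```haskell
-- Supply n step generates n, n + step, n + 2*step, ...
data Supply = Supply Int Int

newSupply :: Supply
newSupply = Supply 0 1

fresh :: Supply -> (Int, Supply)
fresh (Supply n step) = (n, Supply (n + step) step)

-- Splitting doubles the stride: one copy keeps the even positions of
-- the parent's sequence, the other the odd ones, so two threads
-- holding the halves can never generate the same value.
split :: Supply -> (Supply, Supply)
split (Supply n step) = (Supply n (2 * step), Supply (n + step) (2 * step))
```

Splitting the initial supply gives one half producing 0, 2, 4, ... and the other 1, 3, 5, ..., matching the even/odd least-significant-bit scheme described above.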

For the Haskell web server you will have to use a monad, as without it there is no guarantee that things will happen in the correct order, due to laziness. For me this is a problem, and I think Haskell takes the wrong approach with its everything-is-lazy default. I think a language where both data and codata are supported is a start, and that it should be lazy where possible, strict where necessary. This means it needs awareness of side effects, as the order of side effects should be preserved, whilst the order of side-effect-free operations does not matter. This is also vital for automatic parallelisation.

But I don't think it needs to be a pain - the compiler should be able to infer the side-effects as part of type inference. I think you could write a loop that looks like plain 'C' code, and the compiler finds the side-effect free part and parallelises it, joining everything back up so that your logging statements still occur in the same order.

The technical debt of building products too fast is a reality for many companies.

Badly written monadic Bash code

Yah. That all sounds great. So I took a look at a monadic stack.

-- file: ch18/CountEntriesT.hs
module CountEntriesT (listDirectory, countEntries) where

import CountEntries (listDirectory)
import System.Directory (doesDirectoryExist)
import System.FilePath ((</>))
import Control.Monad (forM_, when)
import Control.Monad.Trans (liftIO)
import Control.Monad.Writer (WriterT, tell)

countEntries :: FilePath -> WriterT [(FilePath, Int)] IO ()
countEntries path = do
  contents <- liftIO . listDirectory $ path
  tell [(path, length contents)]
  forM_ contents $ \name -> do
    let newName = path </> name
    isDir <- liftIO . doesDirectoryExist $ newName
    when isDir $ countEntries newName

Code like this shouldn't make it past the QRB.

I agree coding functionally is fun and does buy you some extra things if done right. But neither the referential transparency/purity nor the laziness of Haskell bought you anything in the above example. It's straightforward, trivial imperative code with all the drawbacks associated. This is how most production code will look.

And then the next thing is that programmers probably won't do very lofty things even if functional programming is fun. They rely on libraries for that. Like anyone, they're stuck with the fact that most code isn't that interesting.

I think I know where it'll end. They'll probably one day add an untyped, Bash-like front end to Haskell and sell that as the next great thing, such that you can create webservers and stuff with ease. (That'll take another ten or twenty years, probably.)

No idea if anyone will ever create something interesting with it though there probably are some application domains.

Commercial Haskell

I wrote an aggregating property search engine in Haskell, and built my own web service framework to host it, including a relational algebra mapping to SQL like HaskellDB/Linq. The clever stuff was in the parsing of the HTML, with an error-correcting parser that you could easily add 'fix-up' rules to, emulating the way browsers auto-correct bad HTML. It also included an HTTP client to pull the data, and a tree-difference algorithm using something like edit-distance on trees...

Then again I agree that the above monadic code looks overly complex. However you should consider that the syntax can look like an imperative language, and still have an inferred monadic type.

I'll consider that

However you should consider that the syntax can look like an imperative language, and still have an inferred monadic type.

True. In the above example, the monadic type tells you that you've no guarantee that whatever you've written is in any way correct by composition; it therefore didn't buy you anything, and the monadic type got in the way of writing whatever they aimed to do in a trivial manner.

It doesn't look overly complex. It's line noise.

After thirty years of partying in obscure places, Haskell can now boast that people can construct webservers with a bit more pain than if you did it in Perl.


Convincing example selection

Code like this shouldn't make it past the QRB.

You took a contrived example from an introductory book, which was presumably deliberately designed to be standalone as opposed to building on relevant abstractions. You're surely aware of the reasons that examples like this are not representative of real code.

Yet from this you generalize that "this is how most production code will look."

This isn't very convincing.

But even if we accept your premise here, the point is not whether default purity can blow you away with eight lines of embedded imperative code - the point is how those eight lines fit into a larger context, how you can compose them with other code, how you can reason about the resulting program, and how you can maintain it over time.

There are many pieces of code in most programs that aren't pretty. That's exactly why it's so useful to be able to impose strong boundaries between code using default purity combined with types, typed effects, and automatic proof and enforcement of those properties.

But neither the referential transparency/purity or the laziness of Haskell bought you anything in the above example.

Because it's directly using system-level filesystem functions, for the purposes of the example.

It could easily be abstracted, for example reading a directory into a lazy tree data structure. You could then manipulate that structure purely and lazily, and implement things like the Foldable and Traversable typeclasses on it.
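
Sketching that suggestion (hypothetical types; a `buildTree :: FilePath -> IO (Tree (FilePath, Int))` is assumed, not shown): a rose tree with derived `Foldable` and `Traversable` lets the counting logic become a pure fold over an IO-built structure:

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

-- A rose tree of directory entries; each node carries a path and its
-- entry count. Only the hypothetical buildTree would perform IO.
data Tree a = Node a [Tree a]
  deriving (Functor, Foldable, Traversable)

-- Pure consumers, via the derived Foldable instance:
totalEntries :: Tree (FilePath, Int) -> Int
totalEntries = sum . fmap snd

listing :: Tree (FilePath, Int) -> [(FilePath, Int)]
listing = foldr (:) []
```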

But even with the example you gave, you're focusing only on the innards, and ignoring the broader context. It's certainly true that for 8-line programs, we could just as easily use Javascript.

Looking at that function's type, we can see from its use of WriterT that it uses a pure functional approach to build a list of results. We can also see that since it's parameterized over the IO monad, that the overall function isn't pure. The use of the IO monad provides information about the nature of the impurity.

If we had other code using some other effectful monad, we'd know that the side effects from this IO code can't interfere with the behavior of that monad (assuming IO doesn't crash or hang or something.) We also know what kinds of functions we can directly and safely compose countEntries with, and what kind of adapter would be needed to compose it with incompatible functions. We can extract the monoid from WriterT and pass it to pure code.

All this knowledge allows us (and the compiler!) to reason reliably about systems on a larger scale, often without even looking at internal implementations. This is something that's much less simple when mutable state is pervasive and not exposed via types.

I agree coding functionally is fun, does buy you some extra things if done right.

Good! So am I more or less correct in summarizing your objection as follows: most code won't benefit from these extra things either because of the nature of the code (too imperative or too middleware-ish), or because in practice it'd be badly written by people who don't do it right?

Again, even if we accept that most programmers write code as though they were writing standalone contrived examples in introductory books, that's an even stronger argument for default purity, not against it. Even when it's not "done right", default purity and typed effects help limit the scope of the damage, and make refactoring and maintenance easier.

Plenty of systems, probably most, start out being written in a hurry and then have to be evolved into something more extensible and maintainable. Approaches that can help manage that are important.

First example I Googled but fully generalizable

Nah, it was the first example I Googled. I do imagine the code doesn't look as contrived to a Haskeller as it does to me. But it is what I expected, and fully generalizable.

I agree I generalize from a sample size of one but that was partly since I don't like talking in abstractions anymore. I can make a lot of hypothetical objections against arguments which sound great, and I did that previously on LtU, but those discussions end in philosophy. But this example was exactly what I expected.

I am old. My experience with lazy pure programming predates the invention of monadic IO, or monadic programming. The aim of the original research was to implement a more expressive language in which programs would be more or less correct by construction; or could be proven correct through algebraic means. That was more or less true for small, sometimes medium sized, examples.

Now I look at what has remained true of that story:

Laziness isn't used in practice

After having implemented a number of algorithms, where laziness may play a small role in some of them (probably written by others, in libraries), people will glue all the code together in an imperative manner, never using or touching laziness. Worse, laziness sometimes pops up in an unpredictable manner, forcing them to default to strictness; it isn't a very useful operational model.

Purity doesn't scale

Being pure, to me, implies that you'll need to thread complex information explicitly throughout your program, and, in my view, that is a form of vendor lock-in. At some point you'll want some change in your program, and that will force you to refactor a substantial amount of your code. I don't buy that pure programming scales.

Monadic programming is an aberration since it neutralizes any "correct by construction" argument

I object to monadic programming and find it one of the worst manners of organizing code. First, it is a mistaken view, usually concentrating on one of the least interesting aspects of rich mathematical structures. Second, and worse, everything you do with a monad is programming against an 'imperative' interface where the semantics of what you program against is always hidden in a run function; that immediately throws away any "correct by construction" argument.

If you don't appreciate purity then don't use it

If it isn't "correct by construction", then why bother using a referentially transparent programming language? There is no purpose whatsoever anymore in using Haskell.

Haskell has no programming in the large constructs

Haskell deliberately hides from you the best software practice of programming against interfaces, since they don't want you to know that you're usually concentrating on the most trivial parts of rich algebraic structures. This is one of the biggest lies in Haskell: they make you believe you can do stuff in a nice categorical manner by deliberately never making explicit the rich algebraic interface you're programming against. Haskell doesn't have any "software engineering in the large" concepts for organizing code. (Or they are not using them right. I am not sure about that one yet.)

(Ah well. A bit harsh. My opinion is that Haskellers try to write code by concentrating on the unimportant trivialities of rich algebraic structures.)

Monads don't stack nicely

I don't buy that your monads stack nicely. Purity dictates that at some point you'll end up threading a rich structure throughout your code. I don't believe that most code can be organized in such a manner that you can factor out all dependencies as nicely as proclaimed. Worse, instead of organizing the code neatly I am now stacking monads, abhorrent machinations to avoid, in an unrefactorable manner. They chose the worst possible manner of composing functional programs.

So you do end up with code as described in the example, and that example generalizes fully. Laziness isn't used or even useful, purity is abandoned with the monadic imperative interface, the code doesn't scale, it can hardly be refactored, it is composed badly, and there is no implied correctness. The only good thing about it is that it is well-typed. But that is true for lots of imperative code, and at least that code is usually organized neatly.

The end result:

Badly composed unreadable imperative code which can't be refactored, which doesn't use purity or laziness, which cannot be reasoned over, and where the types are in the way. Bash does it better.

Sorry. But Haskell, at the moment, is one of the worst programming languages around for anything but small academic projects. And since they write badly organized imperative programs, I don't even know how Meijer can object to introducing small imperative constructs into the language, since purity, laziness and correctness went out of the window a decade ago anyway.

If I look at the example code I have no idea what you're dreaming about. Someone sold you a bullshit story and you bought it. True.

Functional Bash

Taking it one step further. I think that the reasonable success of Haskell is explained by the fact that there is a market for a functional Bash. Some sloppy imperative language with a good functional core.

But laziness, purity, category theory and correctness have little to do with it, though they tickle programmers' academic interest. Haskell is just the only language which fits that niche. If Python had the syntax for simple functional programming, a library, and good enough performance, it would destroy Haskell in a heartbeat.

Python has lambda and closures

Python does have the syntax for simple functional programming (lambda and closure support). Early Python (pre-2.0, I think) did not, but JavaScript has had closures from the beginning. In fact JavaScript is pretty good for functional programming (monads in JavaScript, and using the error-continuation monad with jsnode). The only problem with monads in JavaScript is that they only really work for the imperative stuff with the do-notation syntax in Haskell. Why would you want monads in JavaScript? Because monads allow programmed syntax binding, which you can use so that each statement calls the next immediately or on its success callback. This lets you write apparently simple, linear, single-threaded code that hides all the events and callbacks, but it unfortunately becomes a mess of brackets, bind functions and lambdas without do-notation.
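The "programmed syntax binding" point can be sketched in a few hypothetical lines of Python (`unit`, `bind`, `add_one`, `double` are invented names): each step takes a callback, and bind wires one step's success callback to the next, giving a linear-looking chain without do-notation, at the cost of exactly the lambda noise described above.

```python
def unit(x):
    # Lift a value into a callback-taking step.
    return lambda cb: cb(x)

def bind(m, f):
    # Run m; when it "succeeds", feed the result to f to get the next step.
    return lambda cb: m(lambda x: f(x)(cb))

def add_one(x):
    return lambda cb: cb(x + 1)

def double(x):
    return lambda cb: cb(x * 2)

# Each statement calls the next via its success callback:
prog = bind(bind(unit(20), add_one), double)
prog(print)  # prints 42
```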

For me, what Haskell has that other languages don't is a sane type system. ML has problems with references escaping from applicative modules, which don't exist in Haskell. C/C++ have ad-hoc systems that get extended with things like templates to do anything useful. Python and JavaScript have no static type system at all, so they are no good for generic programming; yes, you can do runtime dynamic dispatch, but you could do that in BASIC with a numeric type tag. I don't mean to pick on these languages specifically; they are just examples.

Python cannot have the required performance, because of dynamic typing, and even the developers of Python's new interpreter (PyPy) needed a statically typed version of Python (RPython) for type safety and performance. So I would be looking for a static type system with parametric polymorphism.

I should admit at this point that pretty much all my company's product development is done in Python and JavaScript. For my research I use C++ or Ada these days. I was very into Haskell for about 10 years, but Stepanov's "Elements of Programming" taught me to appreciate imperative programming again. Stepanov is an old-school mathematician, and his view was that mathematics needs to embrace the advances of Turing and von Neumann, not that computing should be held back by the limitations of mathematical techniques for analysing mutable references.

Haskell takes a while to get used to. It seems to have limitless possibilities at first, but at the end of the journey it's just another limited language, like a looking-glass C++. I can see bits of C++/Ada and Haskell that I like, and bits that I don't. If I could only synthesise the good bits into a single language, I could actually get on with writing applications. I guess it should be assumed that everyone on LtU is unhappy with current languages in some way, or they would not be interested in developing new languages.

ML has problems with

ML has problems with references escaping with applicative-modules which don't exist in Haskell.

Could you give a link or short explanation? While I'm unfamiliar with ML's module system firsthand, my own module system design has a lot in common with it, so I'd like to watch out for problems like this.

Applicative Functors and References

Some examples from Andreas Rossberg are in this video, from about 10:58 to 12:30. More examples follow; it's worth watching the whole thing if you haven't already.

These problems are to do with impurity (the fact that ML has references), and Haskell because it is pure does not have these problems. The restrictions needed in ML occur naturally in Haskell because state has to be in a monad, and monads are generative by definition. So rather than having to protect the functor definition because you have pervasive references (ML), Haskell protects the references, allowing free use of applicative functors. In Haskell this comes for free with the type system, in ML type languages you have to add special restrictions to keep things safe.

Hold your horses

Thanks for the plug. But hold your horses. First, the problems I discuss are only with applicative functors, which are currently only relevant to OCaml, not "ML" in general. Second, the trickier part of the problem is "merely" about leaky abstraction, not type safety. Third, only some of the ways in which the problem materialises have to do with impurity, others come up in completely pure programs -- and if Haskell did have applicatively parameterised modules, then it would have them as well. And finally, we show how to address the issues without "special restrictions" (see our paper for more details).

Generalising from this to a broad statement about the relative sanity of the type systems of Haskell and ML seems rather bold. Especially given that, for example, Haskell + unsafePerformIO (i.e., Real World Haskell) is plainly unsound, because it lacks those "special restrictions".

Applicative Functors

I think I did say the problem was with applicative functors. ML may not have them at the moment, but they are desirable to have. So we agree on that part.

However, I think it is a type system problem. The ML type system permits the leaky abstraction to exist because it does not control side effects. I see this as a limitation of the type system of ML-style languages that Haskell does not have. Haskell has applicative functors already and it does not have these problems. I don't think Haskell would have these problems with a well-implemented module system, as a module would have to be pure to be applicative; as soon as it contained any mutable state or IO, the module's type would have to be in that monad, which would make it generative.

When a function has a type "a -> b", there must by definition be no side effects, as the type system has no knowledge of them and cannot make effective decisions about the safety of the code. If there are side effects (not contained by monads), there are really extra hidden parameters not captured by the type.

Side effects not the only issue

I agree that not tracking effects in types can lead to problems. However, you seem to be ignoring a significant part of the talk: the more subtle problem with making abstraction safe under applicative functors is that you actually need to simulate a form of dependent typing to define the right type equivalence. This is by no means "free" in Haskell either.

When a function has a type "a -> b" there must be by definition no side-effects

I don't see how this follows; I think you are mixing up a few things. What would be unsafe about "fun f x = (print "Ehud"; f x)" or "fun f x = raise Domain"?

Associated Types

I am indeed ignoring some of the talk (although it is not relevant to the point; I did recommend watching the rest of it too). Just because there are also other problems does not take away from the fact that the reference problems exist, and I didn't want to complicate matters with the problems of associated types. I agree that at that point you may be reaching the limit of what Haskell's type system can cope with gracefully, but it does seem more able to cope with these unforeseen extensions.

We both agree that "not tracking effects in types can lead to problems". Providing safe examples does not negate that point, though I will admit not all uncontrolled uses of side effects are unsafe in a strict language. However, if the type system is not aware of the side effects, it is limited in what it can do about rejecting the unsafe examples during type checking.

Types as theorems, Values as proof

What does "f :: a -> b" mean in terms of a theorem? In Coq the type is the theorem, and you provide a value-level implementation to prove the theorem. What does the theorem "f :: a -> b" actually state, and what is a valid proof of it? My interpretation is that it says "there exists a function from a to b" (and nothing else: no exceptions, side effects, etc., or they should form part of the theorem), and the implementation should be an example that proves such a thing exists. If the example includes a print, which does IO, it does not prove the theorem stated; it might be the proof of a different theorem like "f :: a -> IO b".

Bad argument

First of all, when you pick your preferred definition of what something should mean in your mind, and that doesn't match what it does mean in a given language, then that does not imply that there's anything inherently wrong with the language's meaning.

Second, in Haskell,

f :: a -> b
f x = f x

does not prove anything either, so perhaps you should pick your arguments more carefully. ;)

Type Discussions

On the other hand, if something has been given an arbitrary meaning, it's good to question it. It would seem that if Haskell and ML both exist within the same type universe (which makes sense to me), there should be an agreed way of discussing things. That ML and Haskell both use -> to mean different things is an issue. We could either have a special arrow for pure functions or a special arrow for impure functions. Which would you prefer?

ML seems to be inconsistent in that the first arrows in a multi-argument function are side-effect free, while the last can have side effects; they really ought to be written differently, as they mean different things.

Different semantics

The semantics of the arrow type differs between Haskell and Coq as well. Or between almost any pair of languages you can find. From the perspective of computational logic, Haskell's arrow is impure, because it includes partiality (non-termination is an effect). Why is one choice more arbitrary than the other? The meaning of the Int type is not integer in Haskell, it's pointed integer (i.e., includes bottom). Is that a problem, too?

Different languages have different semantics. If you really want to canonicalise them all then good luck with that.

Common Semantics

The easiest to define would seem to be a pure arrow, which seems to be a non-arbitrary choice.

Type theory exists independently of any programming language, should we just ignore that, and carry on using ad-hoc definitions?

Type theory describes a wide

Type theory describes a wide class of different languages and models, each with its own feature sets (linearity, conservation, divergence, effects models, polymorphism, subtyping, type-state, traits, type-level computations, constraints, etc.). What 'purity' means can vary from one type system to another. In many ways, design of a language's type system is design of the language. There is no singular type theory to avoid ad-hoc definitions. There is no 'common semantics'.

Whether a design choice is 'arbitrary' or not is a separate issue, more regarding the motivations and thought processes of the chooser than the choice itself.


It's a disappointing answer, but probably right. With Curry-Howard enabling types as propositions in a logic language (like Prolog), it seems there is a more natural semantics: the one which produces the same result as lifting the values to the type level and evaluating the logic program. But perhaps that's just my choice. Maybe the mistake was Haskell using the same arrow and not differentiating it? Perhaps I should define a specifically pure arrow to avoid confusion, something like :-P

Could we not define a common semantics? It would have to have sufficient expressivity to differentiate the languages. Could we not start with similar languages (like Haskell and ML)?

Many different logics

There isn't one logic or type theory to which you can apply Curry/Howard either. There are many.


Indeed, but I think the idea of defining a common semantics between similar languages like Haskell and ML has some merit. At least then we could have conversations without having to specify the semantics of the particular arrow we are using in an informal way.

Phantom problem

I don't recall ever having witnessed the kind of confused conversation you are alluding to. When you discuss a type system, you need to define it first, and what you define determines the meaning of various syntax. No amount of syntax standardisation will change that.

Also, you are far too fixated on the arrow type. All types potentially have different meanings in different systems. In particular, you won't find a single type constructor that has the same meaning in both Haskell and ML -- because ML types describe values, whereas Haskell types describe computations (with non-strictness and partiality, every type is inhabited by ⊥ -- which also means that naive Curry/Howard does not apply to anything in Haskell).

One big difference

Is that in Haskell the operational semantics of a block of code relies on its type. This is usually a no-go area in PL, but they went ahead with it, I gather to leverage the idea of type classes as much as they could. Research.

That somehow should translate to a big difference between types of the two languages.

I don't see that

Both laziness and strictness are just reduction strategies. And because they are just reduction strategies, any example you can give in one system has a counterexample in the other. They can trivially simulate each other; whatever you say about one of them holds for the other. (Okay, going from lazy to strict is a challenge, but for programming languages I don't see why it matters.)

At least, that has been my opinion for years. Glad to be proven wrong on that one.


Man. UnsafePerformIO tells you another thing about Meijer's ACM story: even Haskell doesn't do common subexpression elimination (much). Nice idea, thinking you could optimize pure programs better. Didn't work out; bad call. (Unless they track it, but I severely doubt that.)

I agree with you that -> in both ML and Haskell are roughly the same thing, namely composition of impure functions.

I don't think keeping track of purity will pan out. Everything is impure and therefore hard. Bad luck.


The compiler does in fact track that, thanks to NOINLINE pragmas, and documents the kinds of optimizations users should be wary of (typically let-floating).

I'm surprised by your bitterness in this whole discussion. I might have been interested in debunking your point that "monads are only helpful to get tenure", but only in a discussion that is not nested so far on the right, and in an atmosphere where we're talking instead of just reading whatever provocative "real programmer" stuff you want to throw at functional programming.

Comedy Central

I am not bitter. I watch Comedy Central and read academic papers.

(Btw. I like pure programming. I have no feelings about real programmers. The history of things is just interesting.)


Dude. Miranda hackers once found a useful idiom and that took off. Everything after that is pure entertainment to me.

Notations and concepts

I think you are asking too much of the mapping between theory and practice. I consider it true that ML and Haskell live in the same type theory (of course they rather share a common formalized subset), and I don't see what the problem is with using the same convenient shorthand notation "->" to mean two different things in the two languages.

Language design is full of choices and compromises, and two given languages will always have made different choices on the meaning of a common notation. It's a freedom that designers have to pick the most convenient notation for what they want to emphasize in their language, and it doesn't stop us from understanding each other and even establishing formal results that are relevant in both settings.

(Also your notion of multi-argument function is constructed, not a primitive of the language. You can define multi-argument functions that make side-effects between argument applications, and while that is generally considered bad style it is occasionally convenient.)

Maybe another example would help. I am of the opinion that binding constructs, for example the value-binding construct "let", should be available in a non-recursive and a recursive form. In terms of abstract syntax you could speak of "let rec" and "let nonrec". Yet different languages make different choices about which of those two semantics gets the convenient shorthand, just "let": in ML, "let" is "let nonrec", and in Haskell "let" is "let rec". And there is a good reason for it, which is that lazy evaluation plus partiality make it trivial to take fixpoints, while in ML there is a restriction on which forms of "let rec" are meaningful. I regret that both languages don't *also* export the long form corresponding to their default, but other than that there is no problem with two languages making different choices about which "path of least effort" they want to encourage.

Informal definitions

The problem is that when I write a type like a -> b, there is no clear understanding of what I mean. In order to be understood across the community, I should be writing a -> b [where '->' is call-by-value function evaluation with side effects], and this is an informal definition. Is there not a shorthand way of writing this, or of giving a formal definition?

I have advocated the view that language design is the art of compromise in the past, so whilst I agree with this, my point is more that one should be able to talk about and explain those compromises in some kind of formal setting.

I didn't realise you could have side effects at the application of any argument. I think it would be nice to know this from the type, as in: a -> b => c -> d => e, but at least that makes ML consistent in its use of '->' to mean "possibly has side effects".

With the recursive definition of let, does that suggest that there would need to be both a recursive arrow and a non-recursive arrow? Presumably there are actually three categories of arrow in this dimension: non-recursive, always-terminating, and potentially non-terminating?

We have the terminology

I regularly discuss language design with people that know both ML and Haskell but make regular use of only one of those, and I don't remember a single situation where the ambiguity about `->` outside any context has been a problem. I'd say it's always clear from the context what people mean, and it's trivial to ask a question to make sure you follow. You start needing different notations when you design a system that has different flavors of arrows, but then people explicitly list and explain them so that is not a problem either.

Also, you seem to be considering that the meaning of the arrow "->" is defined as the set of programs that have this type -- and thus that moving from a pure to an impure language changes the notion of "->". That's an entirely reasonable point of view that is well-understood, but I'm not sure it is absolutely consensual: some other people think of separate worlds of types and programs, with say Haskell and ML sharing the same syntactic type language, but difference in dynamic semantics (independently from the type system) having the property that programs accepted at this type will have different properties in the two cases.

They're adding interfaces to Haskell as we speak

Seen the proposal somewhere. True story.

(Found it)

I know

Of course, and you might want to check where the paper is coming from. ;) (Also, Scott would probably be able to say a word or two about Keean's enthusiastic claim that stuff like this would be cleaner in Haskell...)

Anonymous pattern matching rewrites

I wrote small functional programs in Python and JavaScript too. Probably because I am a bit thick, a bad programmer. The simple model of functional programming is, I think, the only model which really sticks in my mind. (I noticed Meijer has the same thing. He can't remember C# keywords or their meaning. C# needs to be in IT curricula or students will just not pick it up.)

But it always went wrong. The idea in both languages is that as long as you have a lambda approximation, you can do functional programming. That's technically correct, but in both cases the syntax is against you. Python I don't remember, but in JavaScript nesting lambdas means writing an awful number of keywords, curly braces and return statements. And you don't want lambda; you want an anonymous pattern-matching rewrite, for various reasons.

Then the second problem is that functional programming is usually a slow model of doing things. You need a tremendous amount of optimizing machinery and a fast operational model to get near the performance you need to do any real functional programming. And Python is slow. Damned slow, even.

I once wrote a language. If there is anything to take away from that, it is that syntax matters. A lot. All you need for pragmatic FP, in the sense that you want to give programmers the ability to leverage the "correct by construction" idea, is a simple syntax for records and an anonymous pattern-matching rewrite. And performance. The rest simply doesn't matter much.

(And you should always provide bindings to C, of course.)

Add a static type system with type inference

I think this is spot on... but I would add a good type system that has complete inference for the non-higher-order polymorphic cases; I quite like first-class polymorphism like this:

Do-ing it wrong?

Because monads allow programmed syntax binding

Not buying it. A monad is a trivial observation on a (usually rich) algebraic structure. Leveraging that triviality to a software engineering principle for FP is an incredibly bad idea.

Remember the parser combinators I use? They have two combinators with a bind-like type. Rationale: I wanted LL(1) and backtracking LL(k).

Example. Suppose this is your grammar:

P = Q | R
Q = "a", "b", "c" 
R = "a", "c", "b"

Since you want to remain close to the original grammar but don't feel like explicitly writing all the code for chaining rules and dealing with error conditions, you go for higher-order programming to define an algebra which will solve your problem pragmatically. You're going to backtrack on the first two symbols of the alternative: LL(2).

Informally, you want your code to look like this (without attribute calculations):

def P = Q + R
def Q = (match "a" & match "b") && match "c" // try match "ab" but report an error if "c" fails
def R = (match "a" & match "c") && match "b"

The & allows a second alternative to fail. The && doesn't allow the second alternative to fail but will report an error message at that point that a non-failing alternative didn't match. Since you want to calculate a result with respect to whatever you've recognized, you give both & and && a monadic bind type.

Now I ask you: Which one of the two combinators & and && do you want to lift to a monad such that you can use the do notation?

None. I really, really, really don't care that & and && have a monadic bind type; I don't even want to know. The right manner of writing functional code is analyzing and solving the problem at hand; the wrong manner of writing functional code is shoehorning everything into a monad.
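As a concrete illustration, here is a hypothetical Python sketch of the two combinators (`soft_bind` for '&', `hard_bind` for '&&', `alt` for '+'; all names invented). Both have the bind shape, and the algebra is usable without blessing either one as "the" monad:

```python
def match(c):
    # Single-character parser: returns (value, rest) or None on failure.
    def p(s):
        return (c, s[1:]) if s[:1] == c else None
    return p

def soft_bind(p, f):
    # The '&' of the example: if anything fails here, back out quietly
    # so an outer alternative can be tried.
    def q(s):
        r = p(s)
        if r is None:
            return None
        v, rest = r
        return f(v)(rest)
    return q

def hard_bind(p, f):
    # The '&&': once we have committed this far, failure is a reported error.
    def q(s):
        r = p(s)
        if r is None:
            raise SyntaxError("committed alternative did not match")
        v, rest = r
        return f(v)(rest)
    return q

def alt(p, q):
    # Ordered choice: try p, fall back to q.
    return lambda s: p(s) or q(s)

def success(v):
    return lambda s: (v, s)

# P = Q | R;  Q = "a","b","c";  R = "a","c","b"
# Backtrack on the first two symbols (LL(2)), commit after them.
Q = soft_bind(match("a"), lambda a:
    soft_bind(match("b"), lambda b:
    hard_bind(match("c"), lambda c:
    success(a + b + c))))
R = soft_bind(match("a"), lambda a:
    soft_bind(match("c"), lambda c:
    hard_bind(match("b"), lambda b:
    success(a + c + b))))
P = alt(Q, R)

print(P("abc"))  # ('abc', '')
print(P("acb"))  # ('acb', '')
```

On input "abx", Q commits after "ab" and then raises, rather than letting R be tried, which is the reported-error behavior described above.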

Academics are not stupid. Most of them know monads are a horrible manner of writing code, but they want to teach you some category theory and Haskell, and they need to sell both as a success. (I agree that category theory probably gives you a few extra instruments for writing functional code. But after a two-month course you should immediately forget all of that.)

You don't want do. Do is for students who can only implement imperative programs. And that something is a monad is a non-observation. I am sorry, it is just some bullshit which spun out of control. (Though the good thing about it is that Haskell now has a large library of categorical idioms. But I doubt they can leverage that further.)

You're not in school anymore. Move on. (Unless you want tenure somewhere; in that case you should disagree fervently with me.)

Types are a tool

How about:

q = do
    match "a"
    option "b"
    match "c"

The right way to write any program is to choose the right tool for the right job, but all languages have preferred idioms that are more elegant if you can reframe the problem to fit with that way of working.

How do you think IO should be handled in a functional program? Consider an application that needs to be used in the context of a modern event-based UI.

You didn't get the example

With attribute calculations, monadic style, you get:

def q: parser text =
    match "a" & \c0 ->
    match "b" && \c1 ->
    match "c" && \c2 ->
        success (c0 + c1 + c2)

There is no good choice for making either (&, success) or (&&, success) an instance of a monad, because you should categorically refuse to shoehorn your algebra into a category. It doesn't buy you anything.

I think I did.

All I see is a bad program now: if '&' allows the second argument to fail, then 'c1' will never get defined. The return type cannot possibly be c0 + c1 + c2, because "b" is optional.

In my solution the list of accepted characters is in the hidden state. So it can return precisely succ "ac", succ "abc" or fail, depending on the input. I am working with the original description:

The & allows a second alternative to fail. The && doesn't allow the second alternative to fail but will report an error message at that point that a non-failing alternative didn't match.

This says that the second argument of '&' is optional, as it can fail, but the second argument of '&&' is not optional. This is precisely what I have in the solution above (ignoring attributes, as you did in the original example). If you want to add attribute calculations, what is wrong with this (assuming we are in the error monad, so failure is an exception)?

    a <- match "a"
    b <- option "b"
    c <- match "c"
    return $ case b of
        Nothing -> a + c
        Just b -> a + b + c

This code works in ML and my bootstrap

Come on, man. You know how monads work. How do you think they thread exceptions in Haskell?

The binding looks wrong

Okay, maybe I'm being stupid, but the binding looks wrong:

def q: parser text =
    match "a" & (
        \c0 -> match "b" && (
            \c1 -> match "c" && (
                \c2 -> success (c0 + c1 + c2))))

So it's not a monad? The problem is that we do not fail if "b" does not match, but in that case what is in 'c1'? I used the Maybe datatype in my example above; how does this deal with failure of match "b"?

It's a 'double' monad, if you insist

&: parser a -> (a -> parser b) -> parser b   // bind0
&&: parser a -> (a -> parser b) -> parser b  // bind1
success: a -> parser a                       // return

You're forgetting that it just has the correct type. There is no guarantee that whatever you're passing to bind is evaluated. This is always true, and in case of a monad, or more explicit exception monad, doubly true.

But it is so common that I saw they made raise or something part of the Monad class too if I remember correctly. The hack made me laugh.

Magic '+' Operator

Does that make it a Diad?

So what happens to "c0 + c1 + c2"? Is this going to be a runtime error? I guess I just don't get what the type returned from match is, such that c1 is both optional and can be an argument to '+'.


No idea what a Diad is, but excellent suggestion. If anything fails, for whatever reason, the function/closure is never evaluated.

If an exception is thrown in my language it exits to the closest catch.

Definitely Magic '+'

I don't know what a Diad is either... but apparently I meant "Dyad". However, it appears to be the least common generalisation of a monad and a comonad, and not something with two binds.

Okay, but '+' is a functional operator and not monadic, so it obeys value semantics and cannot fail.

You have a totally monadic language where every operator is in the exception monad, and you don't like monads :-) Effectively you have (in the error monad)

    tmp1 <- add_with_error a b
    tmp2 <- add_with_error tmp1 c
    return tmp2
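A hypothetical Python rendering of that desugaring (Ok, Err, bind and add_with_error are invented names): every '+' becomes a bind in a result type, and a failure at any step short-circuits the rest.

```python
Ok = lambda v: ("ok", v)
Err = lambda msg: ("err", msg)

def bind(r, f):
    # Short-circuit on error; otherwise pass the value on.
    tag, v = r
    return f(v) if tag == "ok" else r

def add_with_error(a, b):
    return Ok(a + b)

# a + b + c, fully desugared into the error monad:
result = bind(add_with_error(1, 2), lambda t1:
         bind(add_with_error(t1, 3), lambda t2:
         Ok(t2)))
print(result)                                   # ('ok', 6)
print(bind(Err("boom"), lambda v: Ok(v + 1)))   # ('err', 'boom')
```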

+ is concatenation

Dude, now you're just tossing words around. Everything in Haskell is in an exception monad, that's true. Technically I am not sure about my language.

But if you want to know why I don't like monads, or the usage of the word/mental model in Haskell: read this and count the number of occurrences, plus the few hundred thousand objections I must have posted on LtU about it, as well as a number of objections in published literature which were blissfully ignored.

But anyway. Nobody knows what a monad is except that it provides for tenure. Don't worry about it.

Not Really

Sorry, I did guess that "+" might be concatenation, but I wasn't sure. In which case it does make sense, provided "match" returns a string.

"Everything in Haskell is in an exception monad, that's true". I thought my point was the opposite: "+" in Haskell is functional, which means it is applicative, and "1 + 2" is the same as "3" as far as the language semantics go; it cannot fail, it's pure, and it has no side effects.

The whole point of pure functional programming is that an operator like '+' cannot fail, so if the program compiles, it cannot throw an exception at runtime and this has been statically proved by the compiler.

Now, whether you place value on that guarantee or not is a matter of opinion.

I am not a fan of monad transformers either, and to me they seem to be due to the lack of a row type in Haskell's type system. You get into a whole heap of composition / compatibility problems with monad transformers, and they are symptomatic of trying to push an idiom too far. Just stick to the IO monad and other simple monads, or write your own multi-function monads, and they are usable.

Monads provide tenure

The above line is all there is to a monad, sorry. Apart from that: my language is strict and impure, deliberately, by design.

Purity, great. How do you think Haskell treats division by zero?

Division by Zero

Haskell doesn't, and it's a problem (as it is in many languages). My own approach is to have a "not-zero" type, and define division as (/) :: (Number n, NumberNotZero m) => n -> m -> n. This is combined with path-dependent types, so that:

if (x != 0) {
     // inferred type of x is constrained to be NumberNotZero
} else {
     // inferred type of x is constrained to be Zero
}

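Short of full path-dependent types, a weaker version of this guarantee is available in Haskell today via a smart constructor. A minimal sketch; `NonZero`, `nonZero`, and `safeDiv` are illustrative names, not a standard API:

```haskell
-- A divisor proven non-zero at the point of construction.
-- Keeping the NonZero constructor abstract (not exported)
-- would make the invariant unforgeable.
newtype NonZero = NonZero Double

nonZero :: Double -> Maybe NonZero
nonZero 0 = Nothing
nonZero x = Just (NonZero x)

-- Division that cannot fail at runtime: the type demands a non-zero divisor.
safeDiv :: Double -> NonZero -> Double
safeDiv n (NonZero d) = n / d
```

The `Maybe` case split plays the role of the if/else refinement above: in the `Just` branch the divisor is statically known to be non-zero.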

You're smarter than that. It cannot be done.

Why Not?

The inequality must be from a pure, terminating subset of the language, depending on compile time known values only. If this is not the case, it will not function as an effective guard against division-by-zero.

m = collatz(n) - 1

Obfuscated Programming

Nobody is going to use the collatz function to do a division-by-zero check. That would be obscure, unreadable, unmaintainable code. Assuming it would fail the termination check:

if (x != collatz(n) - 1) {
    print y / x
}

Would fail to compile, with a "possible division by zero" error. The programmer would correct by doing:

if (x != collatz(n) - 1 && x != 0) {
    print y / x
}

So in rare and obscure cases, the programmer may have to add a more readable and direct check, but I am okay with that. (In other words the compiler only accepts the subset of code it can statically prove is okay, just like with static type systems in general).

Preconditions lifted to type constraints

I was a bit slow; I didn't fully get the previous example. Yeah, I guess you can add pre- and postconditions to types. I've seen it tried for resources, and you can do it explicitly in a number of functional languages, including Haskell, too. Good luck with it.

How do you think IO should be handled in a functional program?

Dude, I don't have all the answers. But if you're asking: in a lazy language you're somewhat forced into purity, so monadic IO wouldn't be bad; but you can always choose one of the equivalent models or fall back to list-processing functions.

In a strict language? Well, doodle purity and monads then, and just use calls or callbacks.

A monad is a trivial

A monad is a trivial observation on a (usually rich) algebraic structure. Leveraging that triviality to a software engineering principle for FP is an incredibly bad idea. [..] monads are a horrible manner of writing code but they want to teach you some category theory and Haskell and they need to sell both

So, you're saying that monads aren't good enough for FP? They're too trivial? or too generic?

The right manner of writing functional code is analyzing and solving the problem at hand, the wrong manner of writing functional code is shoehorning everything into a monad.

If I develop a structure specific to solving a problem at hand, and it happens to be convenient to compose operations on this structure monadically (which happens often because monads are trivial), what's the problem? Why not call it a monad? What is the wisdom of: "don't care & and && have a monadic bind type, I don't even want to know."

Which one of the two combinators & and && do you want to lift to a monad such that you can use the do notation?

I don't use the do notation. (I'm quite happy to use '>>=' and '>>' and especially '>=>'.) Is your problem with monads in Haskell that they've been elevated onto some sort of pedestal by the syntactic sugar of do notation?

To answer your actual question, I'd lift `&&` to the monad, and use some mechanism like `try parser` for backtracking. As it stands, I think your current syntax could be a little confusing with respect to associativity.
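To illustrate the '>=>' point: Kleisli composition chains effectful steps without any do notation. A small sketch, where `halve` and `quarter` are made-up names:

```haskell
import Control.Monad ((>=>))

-- halve succeeds only on even numbers
halve :: Int -> Maybe Int
halve n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

-- Kleisli composition: a failure in either step propagates
quarter :: Int -> Maybe Int
quarter = halve >=> halve
```

`quarter 8` yields `Just 2`, while `quarter 6` yields `Nothing` because the second halving fails.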

A hundred thousand objections I don't want to repeat

But none of them matter much, though it makes me a bit angry sometimes. Let's just say that Haskellers, in my opinion, are living in monad hell. There are good reasons for that, but it all boils down to this: in the end, it's just bad to lift trivial categorical observations to engineering principles. Category theory, in this setting, usually just makes the trivial explicit; it may give you a reusable idea, but that's it. Maybe that's all that's wrong with Haskell at the moment: bad engineering with type classes, in the sense that they are using them wrong.

As far as the parser combinators go. I can read the grammar and the attribute calculations. Correct by construction. Of course I am not going to introduce do.

engineering principles

it's just bad to lift trivial categorical observations to engineering principles

I think you misread the situation. Use of monads is a consequence of other principles (equational reasoning, composition, once and only once abstraction (reusable patterns)), not a principle in itself. People seek alternatives to monad stacks (such as algebraic effects) because those alternatives even better achieve some principles (such as composition). Monads have not themselves been lifted to an engineering principle.

I can read the grammar and the attribute calculations. Correct by construction. Of course I am not going to introduce do.

Are you suggesting you would be unable to read your grammar or maintain an equivalent level of correctness by construction when using a 'do' notation?


Tell me where it would improve the situation.

One Improvement

(I assume by `it` you refer to monads. If I'm wrong, please clarify.)

By providing a Monad instance, you'd be able to use many higher-order functions that have been written to work with monads, e.g. `mapM`, `join`, and `unless`. And after developers learn a few of these higher order functions, they won't need to learn (or implement) a new vocabulary of basic combinators for each problem domain.
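For example, with Maybe's Monad instance the generic combinators apply unchanged; `safeHead`, `firsts`, and `flatten` are illustrative helpers:

```haskell
import Control.Monad (join)

-- A safe head that signals failure in Maybe
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

-- mapM (from the Prelude) succeeds only if every element succeeds
firsts :: [[a]] -> Maybe [a]
firsts = mapM safeHead

-- join (generic for any monad) collapses a nested Maybe
flatten :: Maybe (Maybe a) -> Maybe a
flatten = join
```

Once Maybe is known to be a Monad, `mapM`, `join`, `unless`, and friends come for free; no option-specific combinator vocabulary is needed.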

I'm curious why you ask, though. Suppose I had answered: "monads don't improve the situation". How would that support your seemingly extreme position? I think it would only support a neutral attitude towards monads.

Push, Pull, Call, Callback, Duality

I was watching a Meijer presentation in which he showed that things have (categorical) duals.

Did he: A) Give a nice talk. B) Show a profound insight. C) Teach the average programmer something. D) Make an intuition explicit. E) Underestimate the average programmer. F) Expose a triviality. G) Need to stop talking.

Take your pick. I really can't tell. And, if it is profound, can you lift that to an engineering principle? That's a research project.

But programmers write enormously complex code with ease. If Haskell is lifting trivialities to engineering principles I have no idea whether that pays off. Maybe there are less trivial things or scenarios where it pays off.

And Meijer has a problem with complexity anyway; it doesn't fit his neat functional mindset. Maybe you're not helping any programmer by blowing up trivial idioms. -- Left foot... Right Foot... Maybe that's all they need to know to get it right.

But it doesn't show from the Haskell code I read; maybe they've got the way you should use the interfaces reversed. Or maybe you should just know a number of principles and use them. No idea.

But I think I've shown, at least for parser combinators, that it doesn't pay off to make the trivial explicit for that algebraic structure. Worse, I managed to write (though I probably should clean it up) a large number of functions following a tailored algebra, or a rich signature, which fitted some informal specification pretty nicely.

And it can't be shown where making the trivial explicit in that rich structure would help me. And I don't do philosophy, it needs to be shown in code.

(Maybe monads are a lousy example since I already had lots of objections against that anyway.)

If Haskell is lifting

If Haskell is lifting trivialities to engineering principles

It seems to me that you have the relationship backwards. Haskell is applying engineering principles to factor out common trivialities. Which is useful.

I think I've shown, at least for parser combinators, that it doesn't pay off to make the trivial explicit for that algebraic structure.

I would say you've shown that you don't need monads, and that a singular choice for monadic bind can seem arbitrary (in a similar sense that `Ord` for booleans can be arbitrary).

Architecture: Everything with a door is a Donad

Just think the above one through in a scenario where engineering would put Donads central.

I suspect a reference to

I suspect a reference to topology would be more appropriate.

More suitable than a crude comparison

"Nothing sharpens the mind more than a good hanging." - Yeah, after my rant there probably is a small conclusion if one could agree that there may be a point to it.

I think that academically, I am right. So what to do? What I think you do want from Haskell, in an optimal mathematical sense, is that you can declare an algebraic structure, a signature, and specify, seamlessly, that certain aspects of it have a certain (categorical) structure. Where by seamlessly I mean that you don't rename the operators (in the example, && doesn't rename to bind >>=) but you do get all the other higher-order operators on that part of the structure.

Nice research idea. Maybe worth it since I think that, because of purity, Haskell will never really make it much further than as a good prototyping tool for academic research and as an incubator for new ideas for other languages. (Unless you can do this already, I don't know Haskell that well.)

Then again. Maybe a pony is just good enough.

Are ML function types wrong?

The monad serves two purposes in Haskell: strictness ordering, and uniqueness (as the result must be 'deconstructed' each time). This comes back to the discussion about applicative vs generative. a <- getchar is generative, as getchar has the type IO a; getting a value out of IO a has generative semantics, not applicative semantics. Both Haskell and ML adopt generative semantics for type constructors / destructors, which is what enables the monad to package IO. The monadic binding itself is only useful in Haskell for sequencing operations in a lazy language.

What this means (I think) is that in a strict, pure language, you do not need monads, you just need type constructors to be pure. So you just need getchar :: IO a but IO does not need to be a monad, and you don't need the return or bind methods. Something like:

case getchar of
    IO a -> ... do something with a

A further observation is that this problem comes down to the fact that we can't distinguish between a value and a function in the type system. A value and a nullary function both have the type 'a' (and it's not just nullary functions, but all functions that have received all their arguments). So you can either distinguish between pure and impure functions, or you could wrap all functions in a type constructor. You could then introduce a general syntactic sugar for deconstructing that class, for example '()', so that:

fn :: Fun a
fn() :: a

This would also allow distinguishing between applying arguments and evaluating a function.

f1 :: Int -> Int -> Fun Int
f2 = f1 1 2 :: Fun Int
f1(1, 2) :: Int
f2() :: Int

Which would allow all the arguments to be supplied to a function without actually evaluating it, but all functions would have generative semantics.

So my suggestion is that ML functions (which are impure and therefore generative) should really have a type like a -> b -> Fun c


I haven't been reading the discussion and came for the catchy title of this post. I don't understand what you mean with IO (and other parametrized types) being applicative or generative. From the previous discussion with Andreas I assume you know what applicative/generative mean in the case of functions, I just don't see the relation you make with parametrized types (which I would guess are always all "applicative").

I also don't understand the rest of your message:

- the part about pure languages not needing monads is confusing to me, as I use monads precisely to express effects that the language I'm using doesn't support in direct style (or because its effect-typing system is lacking and I wish better type-level tracking of the effects used). You can always build "effect ASTs" directly (so you don't "need" monads), but the do-notation is often convenient to make the syntax looks more like direct style.

I think I understand the "Fun" part, and it's not a new observation. If your language has a built-in IO monad (that in ML would contain exceptions, references, and I/O), you can define the effectful arrow (a => b) using the pure arrow type and the IO monad: (a -> IO b). Conversely, if your language has a notion of impure arrow (short of having an actually fine-grained effect system) (a => b), you can define the underlying IO monad, IO a, as (unit => a). This lets you distinguish "pure curried" functions, of the type (a -> b => c) or, equivalently, (a -> b -> unit => c) which corresponds to (a -> b -> IO c), from the evil functions that may perform effects as soon as their first argument is given, (a => b => c).

Agree in part

I agree about the effectful arrow, and was going to make that point myself, although it's not a monad but 'generation' (as in, a normal arrow is application, with reference to applicative vs generative semantics).

I think you are wrong about parametrised types. Constructing and destructing them is generative, or packing IO with a constructor would not work; remember, a monad is just a constructor class defining bind and return. There is no magic in the bind and return functions.

The bit you are missing is that in a strict language you do not need the sequencing (bind) of a monad, as function evaluation is strict anyway. You could just use 'let':

let a = unIO getchar
    b = unIO getchar
in unIO (print (a + b))

Wrong maybe, confused certainly

I have *no idea* what you mean by a parametrized type being "generative".

In the hope of generating better mutual understanding, here is what *I* think the IO monad is. It is quite different from the sort of code you write.

In a pure dialect of ML, here is an example of using IO. Interestingly, I inlined the content of the (fairly simplistic) IO library, so that what happens is clear. But users are free to think of get_line, put_line etc. as magic hand-optimized implementations that play tricks with purity, and maybe in the next version of the library it will evolve in such a way.

type 'a io =
| Get of (string -> 'a io)
| Put of string * 'a io
| Return of 'a

let get_line = Get (fun s -> Return s)
let put_line s = Put (s, Return ())
let return v = Return v
let rec (>>=) m after = match m with
  | Get cont -> Get (fun s -> cont s >>= after)
  | Put (s, next) -> Put (s, next >>= after)
  | Return v -> after v

let (>>) m f = m >>= fun () -> f

let main : unit io =
  get_line >>= fun a ->
  get_line >>= fun b ->
  put_line a >>
  put_line b

Now, in an *impure* dialect of ML, here is an eval : 'a . 'a io -> 'a function that actually does the gritty work of running the programs returned by the "main" functions of the previous language.

let rec eval = function
  | Get cont ->
    let s = read_line () in
    eval (cont s)
  | Put (s, next) ->
    print_endline s;
    eval next
  | Return v -> v

In neither language is there a need for type-level effects like generativity.

Function types

By applicative I mean the same as an applicative functor (the result is the same if applied twice). So:

a = getchar iostream
b = getchar iostream

if getchar were applicative/pure, a and b would always be the same; if it's impure/generative, a and b could be different. I think this is the same thing as applicative vs generative functors, but at the value level.

I think this is not a problem for ML as it does not allow nullary functions. This implies the special magic is in the '->', which is a type constructor; but does that mean ML expects side effects on applying any argument? As you say, it might be better written:

a -> b -> c => d

So you could see my suggestion above as a way to allow nullary functions in ML by moving the magic into unary type constructor, rather than having a special arrow.

What is happening in Haskell, where functions are pure? In a function f :: a -> IO (), applying f to its argument does not produce the side effects, so this arrow is applicative (pure) like the first two arrows in the ML function above. The magic happens when deconstructing the IO type (or is it on construction?).

It seems to me the final arrow in an ML function, and the IO type deconstructor, have generative semantics (each time you evaluate them you can get a different value). Maybe I just mean impure, and am confusing two levels of terminology? Applicative functors == pure functions, generative functors == impure functions, which implies applicative == pure and generative == impure? I don't think this detracts from the principal concern.

Monads and effects

You are throwing together so many only loosely related things here that I don't know where to start. But if you are genuinely interested in understanding the relation between monads and function types and effects better, Wadler's paper on the topic might be a good start.

Looks to agree with what I said

Wadler seems to agree (although unsurprisingly he expresses it better than I do, and with nicer symbols):

If the call-by-value translation of τ is τ†, then we have that (τ → τ′)†, where → represents a function that may have side effects, is equal to τ† → T τ′†, where → represents a pure function with no side effects.

I was using IO for T and () for call-by-value translation.

I do not believe that these concepts are loosely related. I think you have found that applicative functors must be pure, and impure ones must be generative. I think this is related. The type system has to maintain state to generate the new unique type (type level side-effects?), which parallels side-effects at the value level.

A well-sold bullshit story

Whatever, Andreas. A monad is just a form of function application so abstract that, loosely speaking, it doesn't have any discernible semantics. Therefore you can do anything with it, and nobody can ever get what you're talking about, because you aren't talking about anything.

Haskell uses monadic IO because they can then express input/output relative to a meaningless, undefined thing. It's a hack in a pure language; if you cannot express IO purely, then you simply don't.

People will and can write for eternity papers about monads because they are meaningless things.

Monads provide for tenure. That's all there is to it.

(I guess I should explain it more mathematically. Technically, a monad is an algebra without observers. Therefore anything goes. You can substitute any algebra without observers for a monad to dump your effectful computation in. It's an algebraic, logical sink.)

Well, I went overboard with that one

Suffice to say that I just don't believe the author got it right.

ladder of abstraction

Left foot... Right Foot... Maybe that's all they need to know to get it right.

I think I understand where you're coming from here. After having grasped monads for a lot of different use-cases, they're becoming a low level abstraction to me. But, before then, I recall a time where I struggled with the concept.

Haskell does get one thing backwards: its ladder of abstraction seems to be upside down. Rather than starting with concrete concepts - like specific parser combinators - it often forces developers to grasp high-level concepts like Monads just to use a parser combinator. The Monad typeclass is the only composition interface provided (well, together with Functor and Applicative). To comprehend a concrete instance like 'Parsec', users must climb the rungs of monads, transformers, and streams.

Thinking in abstract terms is difficult: it comes late in children, it comes late to adults as they learn a new domain of knowledge, and it comes late within any given discipline. -- T. Green and A. Blackwell, Cognitive Dimensions of Information Artefacts: A Tutorial; via Toomim, Managing Duplicated Code with Linked Editing

I feel there's a lot we could do to build a language so that developers can, in each new domain, start concrete and grow into the useful abstractions. But this doesn't make monads useless. They're useful once grasped. They're just misplaced on that ladder.

I agree.

I agree.


It is a teaser. Erik likes Scala very much; he co-authored the Scala-based Coursera Reactive Programming course with Martin Odersky and Roland Kuhn. And here are some of his recent tweets mentioning Scala (@headinthebox):

Dec 19: What I love about Scala is that, unlike most typed languages, I can *always* write what I want, and the type system never runs out of steam.

Feb 3: ==> Time to switch to Scala dude!

Feb 3: @ashic Maybe I read too many POPL papers, but Scala matches 1:1 with how I expect a sophisticated PL to behave. Hardly ever hit a wall.

Feb 23: I'll be here as well, sharing my #Scala love story:

Feb 25:
My tool of choice as well. Every C# developer should really try #Scala. You won't look back. I promise. #intellij

Mar 14: @milessabin @jedws I actually like the explicitness of implicits in Scala.

Mar 20: @kirkwoll You should. Or not. Once you used Scala it hurts to go back to C#. Hurts really, really bad.

Apr 27: @statebox There is a reason I am a Scala fan ;-)

I suffered from Scala

I suffered from Scala withdrawal after leaving EPFL and joining Microsoft. I agree that Scala is the superior language, and wish we had something more like it on the .NET platform. However, C# is at least much better than Java, and quite workable. It has also pushed me into a "worse is better" mindset, which actually promotes some efficiency (not chasing elegant abstractions and just settling more quickly).

That was then, and my impression of Scala these days is that it has become a much more functional language in terms of community, libraries, and language improvements. Scala is no longer considered the next generation OOP language, although it definitely has all of the nice OO features needed to fill that role!

This has led to somewhat of a disruption in the OO community as of late: we have once-niche languages growing in popularity (JavaScript), and the truly advanced languages (Scala) taking the FP banner rather than the OOP one. But this is just temporary, I think.

Well I hope so

Because I haven't even started yet dissecting half his arguments. And I probably shouldn't.

The Day 2 panel

The Day 2 panel is going to be fun!

I'll be there..

Representing... uh...

Sessions are coming online

Sessions are starting to show up for on-demand viewing (on any platform).