YOW! Lambda Jam 2017: John Hughes - Why Functional Programming Matters

Why FP still matters (video)...

27 years ago I published “Why Functional Programming Matters”, a manifesto for FP–but the subject is much older than that! In this talk I’ll take a deep dive into its history, highlighting some of the classic papers of the subject, personal favourites, and some of my own work. At the end of the day, four themes emerge that characterize what I love about the subject.


I find this interesting from

I find this interesting from two perspectives. On one hand, it's an insider's view of pure functional programming, and it's always interesting to hear what insiders think is important about their field. On the other hand, it's a great example of how a True Believer tells history in a way that reinforces the answer they want to get. I imagine some far-future researcher, who is a True Believer in some other approach, going over the same history, with the same inflection points, and making it appear that the same decisions that Hughes advocates were obvious blunders. That far-future researcher is no more right than Hughes, of course. I'd prefer a rendition of history that recognizes the branching possibilities rather than making any path look like the one right choice; and that is really hard to do, especially in prospect.

An obviously oversimplified inflection point is lazy argument evaluation. The first moment I noticed a goal-directed rendition of history was about eleven and a half minutes in, where he shows an equivalence in Lisp and says it's obviously wrong — and I was thinking, well of course it's wrong, it says two source expressions in Lisp are equivalent, which is a complete non-starter because Lisp is an interpreted language; those source expressions are passive data. What he had in mind was that functions in Lisp might have side-effects. Ah, perspective.

Nice overview of key ideas and concepts with lots of examples.

I found his narrative to be both interesting and informative. I was familiar with a lot of his examples but a few were new and he had some interesting insights into their relevance. I remember the prototyping paper and the interesting point for me wasn't so much that the Haskell prototype was a lot shorter (some of the other languages like Ada allowed very similar solutions) but that no one programming in those other languages thought of that solution because it somehow wasn't as "natural". It was a very good example of language shaping thought.

FP is excellent at that niche of problems it is excellent at!

What is there to say? Yah, FP solves some problems really well, but so does Prolog, or COBOL for that matter. Still, there is little hope you'll ever write a real-time OS or an application like Microsoft Word in a functional style.

The only thing I hope for is better languages in the future that allow for functional programming solutions when warranted. But from the slow and verbose adoption of FP primitives in 'standard' languages, one can also conclude that FP 'doesn't matter much.'

COBOL

I had occasion to use COBOL for several summers, years ago. I was deeply impressed; it was always the target of jokes, but actually experiencing it I found it to be a really good language. For the things it's good at, of course.

Look at all this mathematical beauty!

Yah, I think the love affair of academics with FP basically boils down to:

Look at all this mathematical beauty! Surely that must give better software engineering practices!

Well, it doesn't. The vast majority of applications are rather mundane accounting on (the presentation and timely delivery of) structured data. Something OO serves rather well.

That doesn't mean that there isn't an important niche to cover with FP, or other languages. It's just that usually lazy evaluation and referential transparency won't pay off in the broader scheme of what needs to be implemented.

"Cute," albeit a bit too derogatory, is simply usually correct.

(To elaborate: Sure, it's nice when a solution only costs you 84 lines of Haskell instead of a thousand in an Algol-derivative. But that simply doesn't mean much in the scope of programs which need maybe millions of lines of code to implement the usual functionality around that.)

The vast majority of

The vast majority of applications are rather mundane accounting on (the presentation and timely delivery of) structured data. Something OO serves rather well.

I agree that most programs are simply operating on structured data, but I disagree with your last statement. "Structured data" is literally just a record. Adding methods and inheritance doesn't really help you with any meaningful challenge in expressing or operating on this data. FP languages typically feature bare records, which are much simpler to work with for these problems (the advantages of purity are debatable).
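
To make "bare records" concrete, here is a minimal Haskell sketch (the Invoice type and its fields are invented purely for illustration): the structured data is just a record, and operating on it is just writing ordinary functions over it.

    -- A bare record plus an ordinary function over it: the mundane
    -- "accounting on structured data" case needs nothing more than this.
    data Invoice = Invoice
      { customer   :: String
      , lineTotals :: [Double]
      } deriving Show

    invoiceTotal :: Invoice -> Double
    invoiceTotal inv = sum (lineTotals inv)

    main :: IO ()
    main = print (invoiceTotal (Invoice "ACME" [12.5, 7.5]))  -- 20.0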

In my experience, programs are mostly composed of subsystems that operate on simple data as you say, and subsystems that compose and glue together other subsystems for more complex behaviours. OO-type abstractions are better suited to the latter, not the former. So FP at the lowest level with higher-level object-style abstractions seems like the better default, given my experience.

Reinstantiable module declarations

Adding methods and inheritance doesn't really help you with any meaningful challenge in expressing or operating on this data:

Well, I disagree. Roughly, software engineering practices went along this road.

  1. Encode the structured data as a record.
  2. Hide, encapsulate, the structured data/records because operations on data are more important, and you can somewhat liberally switch between models. That gave ADTs in the form of modules.
  3. Make module declarations reinstantiable through OO. (OO also opened up a plethora of other software engineering practices I don't touch upon.)

Personally, I hate OO and like FP much more. Still, I wouldn't encode a Microsoft Word in an FP style; you're squarely back at step 1 in the best practices of software engineering. Encapsulated, reinstantiable, hidden state just works out pretty well.
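
For what it's worth, a rough Haskell sketch of that three-step road (all names invented for illustration) could look like the following: step 1 is the bare record, step 2 hides it behind an ADT-style interface, and step 3 is a "reinstantiable module", i.e. hidden mutable state bundled with its operations and creatable any number of times.

    import Data.IORef

    -- Step 1: the structured data as a plain record.
    data AccountData = AccountData { owner :: String, balance :: Int }

    -- Step 2: an ADT-style interface; callers use these operations
    -- instead of touching the representation directly.
    newAccountData :: String -> AccountData
    newAccountData who = AccountData who 0

    deposit :: Int -> AccountData -> AccountData
    deposit n a = a { balance = balance a + n }

    -- Step 3: a "reinstantiable module": hidden, mutable state plus the
    -- operations on it, instantiable as often as you like (an object, roughly).
    data Account = Account { pay :: Int -> IO (), currentBalance :: IO Int }

    newAccount :: IO Account
    newAccount = do
      ref <- newIORef 0
      pure Account
        { pay            = \n -> modifyIORef' ref (+ n)
        , currentBalance = readIORef ref
        }

    main :: IO ()
    main = do
      acct <- newAccount
      pay acct 10
      pay acct 32
      currentBalance acct >>= print  -- 42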

I submit that neither OO nor

I submit that neither OO nor FP give you any clue how to approach something like implementing a simple text editor (let alone Word). Whether they should is, of course, debatable.

panaceas

What I most dislike about OO and FP is the way their respective fanatics want them to be used for everything. (The only PL that should be used that way is Lisp! :-P)  A memorable experience in my undergraduate career, though, was a class called something like Simulation with GASP IV. GASP IV is a FORTRAN IV framework for simulation, and was really quite fun to use. The structure of the system was excellent for simulation, and unmistakably was straight object-orientation. If I didn't know better, I'd think the object-oriented paradigm was invented for simulation.

Lisp and OO

Two comments. First, OO WAS invented for simulation! The language was called Simula and it is an extension of Algol to include classes. C++ was, in fact, inspired by Simula. Simula was around at the same time as Fortran; in fact, when I learned CS at USyd they had a Fortran II compiler and Simula (no Algol compiler, just use Simula).

The pervasive power of Lisp (well, I never wrote Lisp, but I can do a bit of Scheme) comes from two sources: the first is obviously the ability to construct code as data and then actually execute it. The second source is hard to grasp because it is badly presented: the existence of explicit continuations. This means you have a deep ability to build control structures that are entirely absent from weaker languages based on subroutines.
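
The point about building control structures from continuations can be sketched in Haskell as well as Scheme; a hedged illustration using callCC from Control.Monad.Cont (the findFirst function is made up for the example): early exit from a traversal, without any dedicated language construct for it.

    import Control.Monad.Cont

    -- `exit` is a first-class continuation; calling it jumps straight
    -- out of the traversal with the result.
    findFirst :: (a -> Bool) -> [a] -> Maybe a
    findFirst p xs = runCont (callCC search) id
      where
        search exit = do
          mapM_ (\x -> if p x then exit (Just x) else pure ()) xs
          pure Nothing

    main :: IO ()
    main = print (findFirst even [1, 3, 5, 4, 7 :: Int])  -- Just 4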

sarcasm and ball-of-mud-ness

I really do apologize for the sarcasm; I should know better. It's the sort of thing emoticons were invented for, after all. If I'd suitably marked my crack about OO and simulation I could have saved us all time and effort.

Re the power of Lisp, I suggest it goes deeper than those specific features you name, although they're well-chosen. (The first you mention relates to interpretation versus compilation, which I blogged about a while back.) The deeper power of Lisp (on which my draft blog post is incomplete :-) I reckon is to do with whether programming should be a dialog between human and computer, or a dialog between humans. Lisp at its best minimizes interaction with the computer in favor of providing a medium for the programmer to explore their ideas.

Hide, encapsulate, the

Hide, encapsulate, the structured data/records because operations on data are more important

I agree, operations on data are more important. So why are you encapsulating all of that important data in your module so that I can't write more operations on it in my module? Encapsulation at this level is just counter-productive.

Encapsulation is important for service-type abstractions, since it permits reuse, proxying, and so on, but less so for data manipulations.

Because it works^tm?

Because that's the proven approach that usually works out, according to the experience of most people.

There's little use in discussing this, honestly. We have best practices because software engineering isn't entirely rational; that is, you'll likely be able to provide some rationalization for a specific SE solution to some problem, but you're not likely to cover all cases with it. Moreover, your carefully constructed rationalization is often likely to be both incorrect and completely bonkers according to other people.

That isn't a nice answer to people who prefer the universe neat and ordered (academics, teachers, and certainly mathematicians), but it is the reality of software engineering. Software engineering isn't neat; rather it is messy, tricky, and half of the time you won't exactly know why you're doing things. Except for experience.

Nearly anything can be made

Nearly anything can be made to work (Turing tarpit). There's a thread on HN right now about the disaster that is C++ compilation, and yet it's still used quite widely.

That OO can be used in the domain we've discussed doesn't entail OO is well suited to that domain. There have been plenty of threads here on LtU exploring why technical merit doesn't always win out.

And what is experience if not the application of a rational case analysis? This seems inconsistent with your claim that software engineering isn't rational. We may not have a perfect, or even great, formal model, but that's not to say that such a thing doesn't exist, or that there isn't a principled way to approach it given what we know (even if we know little).

We don't even know what OO is...

Mwa, experience is trying things and looking around in the world to see what approaches survive. I generally see the adoption of languages and techniques by industry as a Darwinian process, the fit survive. I explained that by "not fully rational" I mean "no good rationalization can usually be given."

I don't know how old you are but I remember the 'switch' from imperative to OO. It came as a surprise to most academics. It is trivial to understand the evolution of it (as I said, from explicit records, to 'algebraic' modules with hidden state, to reinstantiable modules) but in the beginning, nobody knew quite how to get their mind around it. Is it an actor model, or should you look at it as message passing, a process algebra? A natural fit for Von Neumann architectures? Does it have something to do with verbs and nouns? What makes a language OO anyway, is it just a style of programming? Or are objects something akin to (finite state) automata or CSP processes? Is a type system mandatory? If so, which one?

Yah, all of that and none of it at the same time. Depends on how you want to exploit it, maybe. And someone else clever will see something other in it.

Tangent: I recall some professor making a list of around two thousand requirement categories (or so; I forget) for embedded systems. Ranging from user interface design, real-time constraints, data handling, fault tolerance, and normalization constraints, to conforming to international law, or even fire hazard handling.

"Just use FP!" Yah, cute answer - the world and the problems you're solving are far more complex than that.

I used to have lots of academic "Just. Do. It. Like. This!" books. There is often something to be learned from them, but apart from some takeaways, they are usually not worth the money.

If there is a principled manner to software engineering, I haven't seen one yet. Maybe several thousand principled approaches to software engineering, true that. All of them true, all of them flawed.

Why would a principled approach exist? You can bash C++, but as it stands an enormous amount of software runs on it, and Haskell, from a language perspective, is 'best' described as nothing more than a blown-up expression evaluator with some odd type system on top. So much for the principled approach. (And for an expression evaluator, where's that website with seven different ways of composing programs in Haskell?)

For the moment, we're just stuck with what we've got. And principled approaches don't seem to beat ad-hoc systems like C++, which came into being through a long history, haphazard mistakes, and just tinkering to make something that fit the machine somewhat better.

[EDIT: This was a bit written in haste but I imagine you get the gist.]

I generally see the adoption

I generally see the adoption of languages and techniques by industry as a Darwinian process, the fit survive.

Sure, as long as you don't define "fit" as "better suited to actually solving the problem at hand". Fashion industry trends are also Darwinian in this sense, where "fitness" is not "keeping us dry and warm", but serves more nebulous social purposes, like social signaling of various sorts.

I'm not sure that any discipline that hopes to be labeled "engineering" should be influenced by such "Darwinian" evidence.

OO provided some much-needed abstraction over the typical procedural methods of program construction, but it wasn't/isn't the only game in town, and it wasn't/isn't necessarily the best either.

And principled approaches don't seem to beat ad-hoc systems like C++

Define "best". What metric are you using exactly?

Hehe, we disagree on principle

It's a capitalist system, fit means you end up with systems you can sell for profit.

There is no engineering without best practices. The other field is called research.

But I prefer to call myself a Marxist. So there's that too.

Principal Software Engineering

My take on this is that I agree, and the problem is the use of the word "Engineering". Anyone engineering software is an idiot :-) The proof is that engineering follows principles and methods. But if you're a good programmer, there cannot be any pattern or rules to follow, because when you learn one *you automate it away*. Therefore, software development must be considered as Knuth said: it's an ART.

Applying patterns anyway

But if you're a good programmer, there cannot be any pattern or rules to follow, because when you learn one *you automate it away*.

Taking that in reverse, any time you automate something away (and in programming, you almost can't avoid it), you must have had a pattern you followed imperfectly beforehand, which you now follow in a more consistent and more maintainable way. That gives us reason to believe that what you were doing before was a kind of poor application of patterns all along.

The way I juxtaposed that makes it sound like in programming, "you're never quite applying patterns yet" and "you were applying patterns all along" are two sides of the same coin, but I think there's a tiebreaker:

If you wrote some code once but arrive on a project that's built on a different language, requires a different copyright license, and is supported by a different team, then it may not be legally and socially feasible to use your old code. You might have to solve the same problems again, not using your old code exactly, but still recognizing the patterns you recognized before.

Maths

Whilst I take your point, I think the emphasis is wrong. Mathematical beauty DOES give better engineering practices. The problem is that functional programming is not all of mathematics, just the easiest part to study, because set theory has been around in maths for a very long time.

What we need is not to abandon mathematics, but to recognise that the functional approach is only a small part of it. The key problem with traditional maths is that it is a declarative theory; that is, it deals with space and spatial relationships, when we all know computers deal with action.

Control flow and ordering are central to computing, yet functional programming tries to abstract them away. You can't extend functional programming to include effects. You need instead to compose it with a temporal theory.

People doing linear algebra have vector spaces (spatial) and linear operators (actions). But they can also switch to the dual space at leisure, where the vectors become linear functionals (actions) and the operators become objects (spatial). It is lovely maths because the dual of the dual is an isomorphism. This symmetric system is the goal for programming. Sticking with the spatial model is weak, and trying to extend it to add effects is silly when the solution is to invert over to the dual space. However, programming is nonlinear and the double dual isomorphism won't hold; it's more like some kind of adjunction.
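
For reference, the double-dual fact being invoked is the canonical evaluation map from standard linear algebra (nothing language-specific): each vector becomes the "evaluate at this vector" functional, and for finite-dimensional V the map is an isomorphism.

    \[
      \iota : V \to V^{**}, \qquad \iota(v)(\varphi) = \varphi(v)
      \quad \text{for all } \varphi \in V^{*}, \qquad
      \iota \ \text{an isomorphism when } \dim V < \infty.
    \]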

Felix integrates functions (spatial) and coroutines (temporal). Coroutines generalise procedures in a way that lets one reintroduce purity and transparency. The trick is that ordinary procedures have no return type so they can modify any visible variable, which means they cannot be abstracted into black boxes like functions. Coroutines allow purity because channel I/O abstracts them into black boxes again. A pure coroutine is one whose only observable effects are I/O on channels passed as parameters. Connecting circuits abstracts the channels away.

In Felix I can observe this but not enforce it. I don't have a suitable system of control types.
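
Roughly, the "pure coroutine" idea in more familiar terms, sketched with ordinary Haskell threads and channels rather than Felix's own coroutines (so this only illustrates the concept, it is not Felix code): a component whose only observable effects are reads and writes on the channels it is handed, wired into a circuit from outside.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan
    import Control.Monad (forever, replicateM_)

    -- The component's only observable effects are I/O on the channels it
    -- is passed; internally it is a black box, like a pure function.
    doubler :: Chan Int -> Chan Int -> IO ()
    doubler inp out = forever $ do
      x <- readChan inp
      writeChan out (2 * x)

    main :: IO ()
    main = do
      a <- newChan
      b <- newChan
      _ <- forkIO (doubler a b)              -- connect it into a circuit
      mapM_ (writeChan a) [1, 2, 3]
      replicateM_ 3 (readChan b >>= print)   -- 2, 4, 6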

Vector inner product, Algol 60 vs Lisp

So I'm listening to that lecture. Somewhere around the 19-minute mark I'm wondering: since floating-point addition is not associative, is there any guarantee that running the same Lisp program twice will give the same result both times?

Associativity

Just to point out, traditionally Lisp would use rational numbers, and addition would be associative unless the bignums get so big the program runs out of memory.
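
Both points are easy to demonstrate in Haskell (arbitrary numbers, just for illustration): with Double the grouping of a sum changes the answer, while with exact Rational it does not.

    import Data.Ratio ((%))

    main :: IO ()
    main = do
      -- Floating point: the two groupings give different answers.
      print $ ((0.1 + 0.2) + 0.3 :: Double) == (0.1 + (0.2 + 0.3))   -- False
      -- Exact rationals: addition really is associative.
      print $ ((1 % 10 + 2 % 10) + 3 % 10 :: Rational)
                == (1 % 10 + (2 % 10 + 3 % 10))                      -- True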

FP vs HPC

Ok, rational numbers would be a way out.

Btw, later in the talk he discusses numerical integration. He is not entirely without knowledge, describing Aitken extrapolation; on the other hand, I really wish he had mentioned numerical cancellation, which throws a big spanner in his works. Or at least stands with a big spanner at the ready to trip up the unwary.
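
For anyone who hasn't run into it: cancellation means that subtracting nearly equal numbers (exactly what the differences in an Aitken-style acceleration do) wipes out most of the significant digits. A tiny Haskell illustration with an arbitrary value:

    main :: IO ()
    main = do
      let x = 1.0e-8 :: Double
      print x              -- 1.0e-8
      print ((1 + x) - 1)  -- roughly 9.9999999e-9: about half the digits are gone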