What Does Functional Programming Mean?

What Does Functional Programming Mean? by Tony Morris

This presentation is intended for the February 2010 meeting of the Brisbane Functional Programming Group http://www.meetup.com/Brisbane-Functional-Programming-Group-BFG/.
The term functional programming has become common nomenclature, but its definition remains ambiguous. There are many de facto definitions, all of which fail to identify the important point of functional programming.

The argument is that referential transparency is what defines functional programming, which makes Scala and F# 'poor' for FP, or at least poor for achieving the compositionality goals of FP.

In my opinion there are ways to achieve compositionality without limiting oneself to the strict boundaries of pure programs, as long as one clearly defines which operations stray from these boundaries (cf. Haskell monads...).

Meh. Not worth reading.

Meh. Not worth reading. There isn't even a proper argument presented, just hand-wavy claims that 'this misses the point' and 'that is off the mark', followed by a claim that functional is about referential transparency, and a discussion of some benefits of purity.

I would note that referential transparency can coexist with impurity. This is observable in, for example, concurrent constraint and temporal logic models where effects are possible but idempotent in any given instant.

And composition is orthogonal to both purity and RT. Though, I grant that quite a few modern languages are 'poor' for composition.

It is an opinion piece, not

It is an opinion piece, not a scientific paper.

Interesting point about referential transparency coexisting with impurity.

Your last paragraph is what I am really interested in. Does purity or referential transparency take us further on the road to composability? Can you explain how they are 'orthogonal'?

Composition vs. Purity

Composition is about:

  • A standard set of composition operators.
  • A set of operands closed under composition operators (such that the result can be further composed by the same operators).
  • A set of compositional properties. A property is compositional if we can reason about it inductively across composition operators - i.e. if we know the compositional properties of operands F and G then we will know the compositional properties of F*G, for every composition operator '*', without peeking inside F or G.

So when we speak of composition, what we really want is a rich set of composition operators and properties, without overly constraining the operands. All sorts of properties may be compositional under a few operators (e.g. being 'even' or 'positive' for integers is compositional under addition and multiplication), but the trick is finding useful properties that are compositional under every operator. Often we can achieve extremely rich composition, but it becomes far too domain-specific because we've limited ourselves to, for example, graphics.
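
To make the inductive reasoning concrete, here is a minimal Haskell sketch (the Expr type and function names are my own, not anything from the presentation): the evenness of a composite expression can be decided from the evenness of its operands alone, without peeking at their values.

-- A tiny expression language closed under two composition operators.
data Expr = Lit Integer
          | Add Expr Expr   -- composition by addition
          | Mul Expr Expr   -- composition by multiplication

-- 'Evenness' is compositional: the property of (Add f g) and (Mul f g)
-- is determined by the property of f and of g alone.
isEven :: Expr -> Bool
isEven (Lit n)   = even n
isEven (Add f g) = isEven f == isEven g   -- even+even and odd+odd are even
isEven (Mul f g) = isEven f || isEven g   -- even times anything is even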

Compositional properties are critical for local reasoning about large programs. For example, if 'liveness' is compositional, then we can say something like: "If F doesn't deadlock, and G doesn't deadlock, then F|G doesn't deadlock." Compare this to the typical multi-threading-and-locks model, where we can compose two independently safe programs and have deadlocks or race-conditions between them: developers would need to peek inside F and G to determine whether each composition is safe - not a proposition that scales well! By nature, compositional properties are also good candidates for regulation by a type-system, though I do not believe we should 'constrain' by every compositional property.

Does 'purity' help composition?

Purity itself is a compositional property (e.g. if F is pure and G is pure, then F o G is pure). Purity is a valuable compositional property that supports reasoning about safe refactoring (e.g. abstracting or reordering a calculation), optimizations (eliminating redundancy, dead-code elimination, partial evaluation), and security (confinement, isolation).
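
As a small illustration in Haskell (the names are hypothetical): referential transparency licenses abstracting or sharing a repeated pure expression without changing the program's meaning, which is precisely the safe-refactoring and redundancy-elimination benefit.

-- These two definitions are interchangeable because 'pi * r * r'
-- has no effects; a programmer (or an optimizing compiler doing
-- common-subexpression elimination) may rewrite one into the other.
area :: Double -> Double
area r = (pi * r * r) + (pi * r * r)

area' :: Double -> Double
area' r = let a = pi * r * r in a + a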

But a valid question is whether purity overly constrains the operands. We cannot, for example, compose open or distributed systems and services if we are constrained by purity. If we cannot compose external services, we are forced to reinvent them locally. To reinvent services locally seems to counter many of the benefits we were aiming to achieve from purity - such as eliminating redundancy, and safe refactoring of an application into services.

Further, purity severely constrains how we emit 'output' or model side-effects. E.g. if we decide to emit sound, we must propagate that decision to the edge of a pure model. Modeling this propagation becomes its own problem for developers, who end up reinventing imperative models (like monads) with all their fallacies and compositional weaknesses.
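
A sketch of that propagation cost in Haskell (World, Sound, and every function here are hypothetical stand-ins): once a function deep inside the model decides to emit sound, its type, and the type of every caller, must change to carry the effect toward the edge.

data World = World { pos :: Int }     -- hypothetical model state
type Sound = String                   -- stand-in for an audio event

advance :: World -> World
advance (World p) = World (p + 1)

collided :: World -> Bool
collided (World p) = p == 0

-- Before the sound decision this could simply be World -> World.
-- Afterwards the effect is threaded outward through the result type:
step :: World -> (World, [Sound])
step w = (advance w, [ "beep" | collided w ])

-- ...and every caller changes shape to propagate the sounds upward.
-- This plumbing is exactly what a Writer monad would package up.
run :: World -> (World, [Sound])
run w0 = let (w1, s1) = step w0
             (w2, s2) = step w1
         in (w2, s1 ++ s2)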

I would posit that constraining yourself to purity can actually hurt composition, on an absolute scale.

The solution isn't to abandon purity for the opposite extreme, but rather to find a middle-ground programming model that weakens purity while maintaining its abstraction and refactoring benefits and achieving useful compositional properties. The qualifying models I'm familiar with use explicit time, declarative concurrency, or logical monotonicity. Temporal logic, synchronous reactive programming, and RDP are the closest to this balance I've seen.

LtU always repeats itself four times

There's an echo of Peter van Roy's lightning strikes claim in what you wrote above.

"What does X mean?" rarely illuminates

If discussion already has a tendency to devolve into bickering over semantics, then why start with a prompt that focuses on semantics? This same problem seems to plague anybody trying to have a serious conversation about OOP - both proponents and detractors.

If the position taken here is that referential transparency is a desirable property, then why not talk about referential transparency directly from the start, rather than conflate it with FP?

Bickering over semantics

Would you prefer bickering over syntax?

There is no point in discussing OOP or FP, nor even RT, unless we have definitions that make effective distinctions. We should not argue OOP is good, bad, or ugly without first distinguishing OOP from the alternatives.

The problem is that words become politically charged. I.e. when there is a lot of buzz about OOP, people will find weak, pissant excuses to call everything and anything OOP. Rather than jumping on the OOP bandwagon, they drag the OOP bandwagon into the tarpit - weaken its definition until it becomes part of the amorphous blob.

Then we end up with 'OOP weenies' who cannot precisely define OOP. Hand-waving fools, every one of them.

That said, I grant it's somewhat pointless and unproductive to fight the blob. (A blob has no strong 'points', but is pretty good at ignoring your own no matter how many times you stab it.)

I think you misread Tim

I think Tim was suggesting a different style of communication, such as narrowcasting. Politicians use narrowcasting on campaign trips, tailoring their message differently to farmers than to lawyers. Likewise, most people will derive zero intellectual content from "What does X mean?", especially when all they care about is "How can I use X to do my job better?"

This presentation is not for me (and maybe others), because it doesn't explain "How can I use X to do my job better?"

Also, the paper the presenter cites at the end, The Essence of Functional Programming, only shows one side of the story. While monadic control can make a program more modular, it can also decompose a program in a way that makes it less modular, for example whenever you need true mutable state! Monadic control requires sequentially threading a computation, and cannot directly express true mutable state. Readers can refer to the State & Modularity LtU thread, and to Concepts, Techniques, and Models of Computer Programming, for further explanation.
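
For readers unfamiliar with the threading being described, here is a minimal Haskell sketch using the State monad from the mtl package (tick and three are my names): every use of the state is explicitly sequenced, one step after another.

import Control.Monad.State

-- A counter threaded through a computation; no "action at a distance".
tick :: State Int Int
tick = do n <- get
          put (n + 1)
          return n

three :: State Int (Int, Int, Int)
three = do a <- tick
           b <- tick
           c <- tick
           return (a, b, c)

-- runState three 0 == ((0,1,2),3)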

In this regard, many people develop a better understanding and perspective on what something is by also seeing examples of what it is not. "What is Coke? It's not Pepsi!" Such brilliant messages have been exploited by marketers since the dawn of psychology, and teachers would be smart to learn a few tricks from them. While the presenter does try to frame an anti-FP example, the example is framed as "something useless you might be doing, and not realizing how useless it is". Without assessing the underlying trade-offs of referentially transparent code, readers don't come away illuminated; they come away with new rote practices to replace the old rote practices.

True state [conclusions disputed]

At the risk of being off-topic for this inane subthread, I would like to state for the record that while Peter van Roy's arguments in your linked 2003 LtU post are interesting and expose a flaw in Haskell's approach to state, the conclusion that true "spooky action at a distance" mutability is the key to modularity is the wrong one. Frank Atanassow sketches an alternative approach in that thread that would address the modularity concern without all the downsides of true state, and I think there are other approaches that would work.

State continuum

There are many design options here, which have been discussed on LtU before.

Peter's example is valid if you want completely transparent effects. In other words, when the problem requires that control coordination be implicit, two modules cannot coordinate explicitly. Some real-world problems have this characteristic, including famous problems in control theory with information-theoretic overlap, where the information patterns dictate the optimal control.

In control theory, there is a grab bag of tricks for dealing with state, such as model predictive control, which runs a model "backwards in time" to check whether "we are now where we thought we should be". This sort of real-time calibration is one "approach" to the downside of true state: model what you think will happen, then correct on-the-fly so that what you think will happen actually does happen. While we can't physically implement dataflow programming concepts such as anti-causality, we can compute "approximations".

Re: True State

I'm not convinced Frank Atanassow's solution does avoid the downsides of true state. He suggests modeling programs in a global state monad, and he claims "that you retain the ability to easily reason about the program as it will normally be used, in the identity monad", but this isn't actually true if some modules can observe the state. Applying a function twice in the state monad has a different meaning than applying it twice in the identity monad. When we model our applications atop an abstract monad, our reasoning is limited by some rather weak monad theories.
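
To make that observation concrete, a small Haskell sketch (the names are mine): running the same action twice is observable in the state monad but invisible in the identity monad, so reasoning "as if in Identity" is unsound once a module can observe the state.

import Control.Monad.State
import Control.Monad.Identity

-- In Identity, binding the same action twice gives equal results.
twiceId :: Identity (Int, Int)
twiceId = let m = return 1 in do a <- m; b <- m; return (a, b)
-- runIdentity twiceId == (1,1)

-- In State, the second run of the *same* action observes the first.
twiceSt :: State Int (Int, Int)
twiceSt = let m = modify (+1) >> get
          in do a <- m; b <- m; return (a, b)
-- evalState twiceSt 0 == (1,2)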

Modeling imperative stateful programming with pure functions does offer a few nice tricks (e.g. potential for backtracking, capturing worlds) but still inherits the fallacies of the imperative stateful model. So a lot of functional reasoning is lost. And, as PvR notes, the real issue is modularity in presence of concurrency...
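
For instance, a quick Haskell sketch of the "capturing worlds" trick (Game and the moves are hypothetical): because the modeled world is an ordinary value, backtracking amounts to keeping the old value around.

data Game = Game { score :: Int } deriving Show

riskyMove :: Game -> Game
riskyMove g = g { score = score g - 10 }

tryThenMaybeUndo :: Game -> Game
tryThenMaybeUndo g0 =
  let g1 = riskyMove g0
  in if score g1 < 0 then g0 else g1   -- restore the captured world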

I don't believe pervasive state (especially imperative state) is the right answer any more than I believe purity is. Both are harmful to modularity, extensibility, concurrency, and composability. That you must model a non-pure model within the pure one in order to achieve modularity goals serves as evidence of this, not as a counterpoint.

Use X to do my job better

I agree that your question is a more useful one than 'What does X mean?', and that this may be a better interpretation of Tim's position.

Misinterpretation

I probably should have considered that my use of "semantics" would confuse people. I meant "semantics" in the sense of "what (English) words mean," rather than in the PLT sense.

I was not foolishly advocating that a syntactic comparison would be in some way better, and I'm slightly embarrassed if my original post admitted that interpretation...

I was advocating that we should focus on the interesting properties or design choices in programming languages (and their consequences for programmers), rather than on the contentious, and ultimately meaningless, way that we decide which languages are "FP," or "OOP," or what-have-you.

The real meaning

I've concluded that a programming language is functional if and only if variables are immutable by default. The only misclassification by this rule I know of is Scheme, and even there the Racket dialect corrects the problem. :)

Not sufficient

There are many programming models with immutable-by-default variables, but I wouldn't exactly call Datalog or the Actors model 'functional'. I suspect there are some other criteria you're considering implicitly that belie the first word in 'if and only if'.

Not true of Racket

This isn't true of Racket; e.g.:


#lang racket
(define x 1)
(set! x 2)
x ;; prints 2

However, in Racket variables are immutable outside of the module in which they're defined, which makes determining whether a variable is mutated easy.