Seeking thoughtful criticisms of functional programming and languages

I'm having zero luck with my Google and CiteSeerX searches.

I'm not looking for some particular language advocate's uninformed criticisms. Instead I'm looking for at least potentially well-grounded critiques based on, for instance:

  • implementation state of the art;
  • performance or memory footprint;
  • issues regarding separate compilation and link times for large, multi-developer programs;
  • problem domain applicability;
  • OS integration mismatch;
  • feature/library immaturity of language designs to date;
  • library interoperability difficulties;
  • pedagogical issues (especially training of working programmers);
  • debugging or other programming pragmatics issues;
  • language implementation difficulty;
  • critiques of one or more particular features commonly found in (some) functional languages;
  • awkwardness of implementing/handling certain data structures or very common programming idioms/sub-problems;
  • and so on.

Just some ideas. I can't find much of anything so far.

Quality experience reports along the same lines would also be very welcome, perhaps even more interesting than critiques on more general principles and judgments.

Mucho thanks!

Scott

I believe this was the

I believe this was the classic analysis of Standard ML: http://www.springerlink.com/content/f5845222291p2g18/ . For some reason I thought Appel had one as well, but I didn't see it. Sure wish there were more of these...

In terms of academic criticisms, hopefully that one will lead you to subsequent ones.

[[There's of course the Backus paper on the von Neumann style ("Can Programming Be Liberated from the von Neumann Style?"), but I don't think it's as helpful here]]

Edit: you might have some luck on the security audit side of things. E.g., I believe David Wagner @ Berkeley had some relevant critiques.

Thanks; here's a freebie reference to the paper

ML is purrfect.... ML

ML is really great, but some additions:

1. The syntactic conventions of ML. My biggest 'pain' is that it doesn't use 'applicative syntax' (if that's the term) uniformly, the way Haskell does.

2. There's no real discussion of functors. I believe type classes are (in some uses) better than functors, but I don't really understand the latter, or the difference. Was there a real need for them anyway? I.e., doesn't a record also solve the problem functors intend to solve?

3. The type system infers types [for values] from their expressions. That is a design decision with some drawbacks. It could have been mentioned; maybe I missed it.

[4. Why rec? ]

Functors

Since MacQueen essentially invented ML functors, he is unlikely to criticise their presence ;-). Instead, you'll notice in the paper that he criticises the absence of higher-order functors in SML.

And no, records don't even come close to what modules provide.
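To make that concrete, here's a minimal sketch (OCaml syntax, but the point carries over to SML; all names are invented) of one thing a record of functions cannot give you: a functor can return a module whose type is held abstract by its signature, so the compiler guards the representation invariant for every instantiation.

```ocaml
module type ORDERED = sig
  type t
  val compare : t -> t -> int
end

(* A functor: a module-level function from any ordered type to a set
   module over that type, with the set representation kept abstract. *)
module MakeSet (Elt : ORDERED) : sig
  type t                          (* abstract: clients never see the list *)
  val empty : t
  val add : Elt.t -> t -> t
  val mem : Elt.t -> t -> bool
end = struct
  type t = Elt.t list             (* invariant: sorted, duplicate-free *)
  let empty = []
  let rec add x = function
    | [] -> [ x ]
    | (y :: ys) as l ->
        let c = Elt.compare x y in
        if c = 0 then l else if c < 0 then x :: l else y :: add x ys
  let rec mem x = function
    | [] -> false
    | y :: ys ->
        let c = Elt.compare x y in
        c = 0 || (c > 0 && mem x ys)
end

module IntSet = MakeSet (struct type t = int let compare = compare end)
```

A record of functions could bundle `empty`, `add`, and `mem`, but it cannot introduce the new abstract type `t` that ties them together and hides the invariant.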

Yah

Guess I had that coming. ;) Actually, I think I understand functors, and how they are compiled, but, ... Ah well, no but.

I'm inclined to agree - ML kinda rules :-)

My favorite ML so far is Alice ML - just a great feature set and distribution, conveniently already setup for x-platform GUI via GTK, and so on and so forth. Unfortunately, a sort of "end-of-life" announcement appeared some months back :-(

SML is also a nice ML: a straightforward feel to the language, native compilation, good documentation and so on. But it doesn't even have x-platform sockets implemented (Windoze lacking) last I checked - which I interpret as a lack of will, manpower or both to support something approximating "industrial" application development. Pity.

I'm trying to get into OCaml, just because it's popular and well supported, but I find nearly every aspect of it has some infuriating little quirk or another (no, that's NOT intended as any kind of studied critique I can defend in an articulate manner).

I'd love to see Alice ML's language and libraries implemented on top of SML's compiler technology, but hell, that's just a pipe dream :-(

S.

p.s. IMHO, MLs have a problem: too much emphasis on a convenient top level (I don't care about pedagogy at all) and way too complicated mechanics for just building good old static applications or shared libs. I'd love to see a simple old-style Turbo Pascal/Turbo C style project + files + "Break Point/Build/Run/Debug" menu style IDE.

Or maybe better yet, a Smalltalk 3-4 pane style IDE (but centered on projects targeting static compilation of apps and libs) that hid the underlying file structure altogether.

Again, a man can dream ....

MLWorks

Whatever happened to Harlequin ML or MLWorks or whatever it was called? That was pretty much your dream of an ML IDE IIRC. Here's a link to some documentation:

http://www.cs.cmu.edu/~fp/courses/97-212/mlworks-intro.html

It's still out there, if you dig...

I don't mean to resurrect this old thread unnecessarily, but I just wanted to say you can still find (at least) the Windows (95!) demo version of MLWorks if you dig deep enough with your favorite search engine.

I remembered coming across it circa 2002, and being somewhat amazed someone still had a copy of it; some months ago I went digging again and, though I remembered next to nothing about how I found it the first time, I did manage to find it again after a bit more effort.

How useful this is -- either in general, or almost three years after Chris mentioned it above -- is perhaps questionable though.

Documentation?

"SML is also a nice ML, straight forward feel to the language, native compilation, good documentation and so on"

Wait, good documentation? I've found a number of good tutorials, but no good comprehensive reference for the core language (free on the web) as opposed to the basis library.

Appel did indeed write a

Appel did indeed write a critique of Standard ML (actually SML '90). A previous LtU thread discussing his paper can be found here.

Do you mean purely

Do you mean purely functional, or functional in the broader sense? For purely functional, Peter van Roy has written some interesting thoughts on why modularity requires state, in CTM I believe (or perhaps in a paper).

Declarative memoization

Lyle Kopnicky had an interesting couple of posts with the above title on the mozart/oz mailing list a while back, about using the Y combinator to work around some of this restriction.
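I don't have those posts at hand, but the open-recursion shape involved looks roughly like this OCaml sketch (names invented): the function's recursive calls go through a parameter, and a fixpoint combinator ties the knot, so memoization can be layered on without touching the function body. (The table below is encapsulated mutable state; the Oz posts do a declarative variant, but the knot-tying is the same idea.)

```ocaml
(* A memoizing fixpoint: supplies the recursive knot for any "open"
   function of type (int -> 'a) -> int -> 'a. *)
let memo_fix (f : (int -> 'a) -> int -> 'a) : int -> 'a =
  let table = Hashtbl.create 16 in
  let rec go n =
    match Hashtbl.find_opt table n with
    | Some v -> v
    | None ->
        let v = f go n in
        Hashtbl.add table n v;
        v
  in
  go

(* Open-recursive Fibonacci: no `let rec`, no knowledge of memoization. *)
let fib_open self n = if n < 2 then n else self (n - 1) + self (n - 2)

let fib = memo_fix fib_open     (* fib 40 now returns immediately *)
```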

The Paradigms for Dummies

The Paradigms for Dummies document says something to that effect (section 4.4, page 27): "We give a scenario to show how we can design a modular system by using named state. Without named state, this is not possible."

But the example was not well motivated. The state was for profiling, not for modularity.

Dave Thomas Rocks

Check out fellow Canadian Dave Thomas.

You'll see that not all of these articles are about functional programming, but I'm counting on you to be able to navigate an index.

Dave Thomas's papers

Not sure if this

Not sure if this is the type of criticism you are looking for (these are mostly engineering-type problems), but the reasons I still program in non-functional languages are:

Lack of ad-hoc polymorphism, i.e. no operator overloading, leads to namespace crowding. Type classes help address this somewhat, but it is still a pain point (see the sketch after the third point below).

Lack of sugar. Despite having clearly inferior support for map/filter/reduce, most scripting languages provide cleaner string processing functionality because of OO-enabled operator overloading and plenty of sugar.

State. Again, an obvious one, but the lack of state is also a plus for pure functional languages. It is a trade-off, and is one of the main features that makes pure functional languages unique. For me this is clearly a feature and not a bug. I do think, though, that many of the common problems caused by the lack of state could be addressed with a good helping of sugar on the part of the compiler.
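To illustrate the first point, OCaml itself exhibits the namespace crowding directly: without overloading, every numeric type carries its own operator spellings. (A toy sketch, nothing more.)

```ocaml
(* Separate operators per type: + for int, +. for float, and so on. *)
let sum_ints xs = List.fold_left ( + ) 0 xs        (* int list -> int *)
let sum_floats xs = List.fold_left ( +. ) 0.0 xs   (* float list -> float *)

(* With Haskell-style type classes, a single overloaded `sum` would
   cover both; here the distinction is forced into every name. *)
```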

Not so much a criticism...

Not so much a criticism, but you might be interested in Disciplined Disciple.

Disciplined Disciple is a language built on top of Haskell that adds such things as destructive updates without losing type inference and compiler optimisations.

The two papers at the top of the page talk about many of the problems with purely functional programming and offer ways around them.

Very odd that there aren't more head-on criticisms

I guess I just have prosaic questions that I think are obvious and beg answers:

  • Why aren't typical desktop apps written (yet) in functional languages as a matter of course (office suites, image viewers, browsers, etc.)?
  • Given the already sophisticated compiler technology invested in many (most? all?) machine-compiled functional languages, why do these usually require and take recourse to an "FFI," instead of directly supporting machine-level data representations and multiple calling conventions (as Pascal and C do)?
  • Why aren't there "modern" IDEs for more functional languages (Scala is oddly well supported by several IDEs), even some of the most popular ones? The basic IDE is a pretty ancient and relatively "low rent" developer tool, going back to various Smalltalks, various Lisps, C, Pascal, Basic, etc.
  • Why does functional language adoption still seem so low across corporate America?
  • Why is there no ongoing "wave" of safer, more reliable, shorter, more maintainable and more portable functional language implementations of the common or most important Unix utilities, services, even server programs (sendmail, httpd), etc.?

Are these just issues of will, or casualties of most research cultures? Or are these language design problems? Or are better designs that might address some of these issues still constrained by the functional PLT state of the art?

Inquiring minds like to know, if only because I'm interested in functional language design, implementation, and adoption "challenges": the pragmatic challenges, perhaps, in addition to some of the very interesting theoretical problems often discussed here on LtU.

Mucho thanks!

Scott

Barrier to entry issues

I was going to post something earlier, but the main problems are human factors: a lot harder to nail down, and a lot harder to describe (and, to be honest, often maligned on LtU).

From my seat in corporate America, pure functional programming will never gain widespread adoption. The trend of corporate programming languages has gone steadily towards making programs easier to reason about, expanding the pool of employable developers. Languages that focus on functional programming go directly against that trend.

Too many programmers I've dealt with in my career have problems with the most simple of functional concepts introduced to mainstream languages. If those little things can't gain widespread adoption with programmers, how can the languages focused on them?

FP and reasoning

The trend of corporate programming languages has gone steadily towards making programs easier to reason about [...]. Languages that focus on functional programming go directly against that trend.

Er... what? There must be some subtle use of irony in that sentence that I am missing entirely.

There's a perspective it

There's a perspective from which it seems plausible for a moment - the one that says the average programmer is no good at reasoning about anything higher-order. It doesn't work, though: higher-order programming using objects as function wrappers is already widespread.
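For instance, the ubiquitous listener/comparator idiom, rendered here as a hypothetical OCaml sketch: an object wrapping a single method is a closure in disguise, and code that takes one is higher-order, whatever the language calls it.

```ocaml
(* An object type wrapping one function-shaped method. *)
type int_fn = < apply : int -> int >

(* A higher-order function that takes the wrapper, not a bare closure. *)
let twice (f : int_fn) (x : int) : int = f#apply (f#apply x)

let incr_wrapper : int_fn = object method apply n = n + 1 end

let three = twice incr_wrapper 1    (* = 3 *)
```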

That, and

That, and the fact that most imperative programmers tend to be quite naive about how difficult it is to really reason about stateful programs properly.

Why should we care at all

If really reasoning about programs properly doesn't produce a gigantic universe of production (industrial, whatever) software applications, who cares?

Are these concerns just for some tiny minority of people who don't make a statistically significant contribution to the universe of software that users find productive or delightful?

S.

Didn't claim that

Just to be sure, I didn't claim that people care, RM Schellhas did.

At the moment, yes

There are specialty areas where reasoning formally is important in production. For most things it isn't. So when people talk here about "reasoning about programs" they're talking about something that is still a 25 year goal. In the shorter term, the lever in FPLs has to do with better mechanisms for reusable abstractions and reusable programming, stronger typing, and so forth.

And when you look, you can see a lot of those features migrating into C# and other languages slowly but steadily. Which is one way that FPLs are having impact.

sure, it could be

if the system could reason about itself and tell me when things are broken (effect systems with good inference, say) then i think that would be a win.

Not a bang but a whimper

Very odd that there aren't more head-on criticisms

If you do some poking around in LtU history, you'll find that this subject was over-discussed at one point, so there is no shortage of material if you look for it.

Functional ideas are gaining traction and I think will continue to do so, since designs with less state produce more robust programs.

The most damning thing about functional programming isn't all that damning: it's merely inconvenient for certain basic tasks.

Many programming tasks are I/O related (even "Hello World!" starts with I/O), and we all know that I/O is naturally stateful. Sure you can simulate stateless I/O, but it is simply not as convenient most of the time.

For most people, going whole hog on FP is doing something that is "good for you" instead of doing something that "tastes good", so I think the pure functional approach will never be the most popular or default approach for most things.

But I think the hybrid approach you see in many modern languages (Scheme, Scala, Clojure, OCaml, etc) that enables both styles as appropriate and convenient will continue to gain momentum, and I hope and believe that in some not-so-distant future (10 years? 20 years?) this question of "whither FP?" will be irrelevant, since FP techniques and approaches will have been fully and unostentatiously absorbed into the mainstream of PLs.

Re: the hybrid approach you see in many modern languages

The approach you see in most languages now (including the ones you mentioned) is to bolt effectful operations onto the side of a functional core. A better approach that I expect will eventually catch on is to model the functional requirements of code that deals with effects, such that the code can be interpreted by the environment. From the point of view of the code, the environment can then be either physical action by a machine or virtual execution by some other piece of code. With some sugar, this approach can be every bit as convenient for the common cases, and much more convenient for advanced cases that involve custom control flow and virtualization.
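A minimal sketch of what I mean, in OCaml (all names invented): the code builds a pure description of its effects, and the environment decides whether to interpret that description physically or virtually.

```ocaml
(* Effectful code as a pure value: a tree of requests plus continuations. *)
type 'a io =
  | Done of 'a
  | Print of string * 'a io           (* print a line, then continue *)
  | Read of (string -> 'a io)         (* continue with a line of input *)

(* Physical interpretation: actually perform the I/O on the machine. *)
let rec run_real : 'a io -> 'a = function
  | Done x -> x
  | Print (s, k) -> print_endline s; run_real k
  | Read k -> run_real (k (read_line ()))

(* Virtual interpretation: run the same description purely against a
   scripted input, collecting output - e.g. for testing or replay. *)
let rec run_virtual inputs acc = function
  | Done x -> (x, List.rev acc)
  | Print (s, k) -> run_virtual inputs (s :: acc) k
  | Read k ->
      (match inputs with
       | i :: rest -> run_virtual rest acc (k i)
       | [] -> failwith "input script exhausted")

let greet = Print ("name?", Read (fun n -> Print ("hi " ^ n, Done ())))
```

`run_real greet` talks to the console, while `run_virtual ["Scott"] [] greet` is just the value `((), ["name?"; "hi Scott"])`.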

Pudding to be eaten?

A better approach that I expect will eventually catch on is to model the functional requirements of code that deals with effects, such that the code can be interpreted by the environment. From the point of view of the code, the environment can then be either physical action by a machine or virtual execution by some other piece of code.

Do you have an extant example of what you mean by this?

The closest thing I can

The closest thing I can think of is the Data Types à la Carte paper.

Pudding Samplers

Pretty much everything by Conal Elliott, such as Fran.

eagerly looking forward to that

thanks to all those seriously hard-working plt folks, pure functional will eventually solve the problems it faces, and then we'll be able to write hello world in a language that both is good for you & tastes good. in the meanwhile i hope that hybrid languages don't leave people just comfortable enough to not want to pursue or adopt those future best of all possible worlds fp languages.

Panglossian Programming

those future best of all possible worlds fp languages

I'm not sure that there is such a beast.

The logic here seems to be that if X is good, then 100% pure X is best. Personally, I find that a little bit too linear.

Though I certainly think that the default amount of statelessness for programs should be much, much greater than it is today, I am equally convinced that for some problems the stateful solution is the most natural and manageable one.

I'm perfectly happy for people to continue to tilt at this windmill though, since who knows what great idea they might come up with along the way...

Two different issues

It's easier to reason about programs that are written in pure functional style, but I agree with you that that's not universally true and that certain problems inherently involve state or are best approached with stateful solutions. But whether or not we want a functional coding style exposed to programmers for a given problem is a separate issue from whether we want a pure interpretation of that code.

[Which, I'm arguing, we definitely do. A pure interpretation lets us give the code a precise meaning even without understanding the interpretation of the effects. Also, it lets us implement the effects in code, so long as we get all the types right.]

Reactive Demand Programming

My most recent assault on that windmill... not pure functional, mostly idempotent and with a bare minimum of state. (It sure is nice that windmills don't fight back.)

Conceptual revolutions...

... don't happen by overturning current practice in one day. They happen by providing migration paths. Hybrid languages like BitC are a necessary part of this. Languages with bolt-on statefulness aren't good enough to really support migration. It remains to be seen whether a language more like BitC or Cyclone is enough, but that's certainly the hope.

Functional Islands need Bridges

I certainly agree that the 'hybrid' approach that is common today leaves much to be desired. Functional-imperative hybrids lay vicious ruin to the nice functional properties for abstraction, composition, parallelization, and optimization. However, the approach you describe leaves a critical gap unresolved: assignment of the 'function' to the 'environment'. In particular:

  1. pure functions cannot self-publish - which means one must escape the functional paradigm to assign a function to a given environment; this also applies to extension, maintenance, or upgrade
  2. pure functions cannot encapsulate authority - which means the 'environment' must contain excessive ambient authority

Reducing explicit use of state should help with concurrency issues. More generally, we want to reduce temporal coupling (the extent to which present behavior of a system depends upon system history). But we can't ignore life-cycle (development, integration, maintenance, upgrade, extension), nor may we ignore security (which is not separable from life-cycle).

For years I was focusing on layered approaches, where a pure and deterministic core (usually total functional) is in a lower layer and is used for modeling needs, and above that is stateless FRP with encapsulated behaviors, and above that is something like actors model (committed choice, concurrent processes, and explicit state where necessary). Importantly, this means the higher-layer composition is possible without escaping the language (which cannot be achieved by 'functional interpreted by environment' mechanisms as seen in Haskell). Peter van Roy also promotes a layered discipline (though Oz doesn't force the discipline on you, meaning that hybrid properties are exhibited with regards to optimization, modularity, abstraction, etc.). Effect typing systems can force a layered discipline.

I spent the last year attempting to fit more and more of my service set into the 'stateless FRP', with the goal of supporting live-programming and upgrade. This led to a recent eureka moment, and I'm now pursuing an alternative model - a hybrid of functional-reactive with 'declarative' side-effects that I'm calling Reactive Demand Programming (RDP). [was Constraint Reactive Programming (CRP)]

By 'declarative' side-effects, I mean side-effects that match the properties of declarations: continuous, concurrent, idempotent, and reactive (if declaration binds to changing entity). A close model is Concurrent Constraint Programming, especially the timed variations based on temporal logic, but RDP trades global consistency for local security (and thus allows antagonistic and distributed agents, whereas CCP would generally break down under disruption or hostility).

Functional purity is a stronger assurance than we need - stronger than we want because it forces us to escape the programming model to securely modularize systems. But embracing effects doesn't require a hybrid approach, and we should be able to support 'declarative effects' as a big part of our languages.

Pure functions, not purely lambdas

From my point of view, the importance of purity in a foundation is in being able to reason about pure values from within a logic. It's not that the universe of discourse can consist only of lambda abstractions. Take for example any computation that is to incorporate non-determinism provided by your language runtime. The way I'd reason about the correctness of such a thing from within a pure logic is to posit that the runtime depends on unseen environment variables in a deterministic way. Then you can reason about the behavior of "a run-time" in a pure way.

Furthermore, you can provide a conforming runtime implementation, meaning one that satisfies all the required postulates, using pure computations. Such a virtual runtime could use a pseudo-random number generator to implement non-determinism, or branch and collect a set of possible results, or do something else. There is just more power in this approach than having computations tied to black-box effects that you can't reason over inside the language.
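A sketch of the oracle idea (OCaml, names invented): nondeterministic choice is a pure function of unseen environment input, and the runtime decides what that input is.

```ocaml
(* The oracle answers the n-th nondeterministic choice. *)
type oracle = int -> bool

(* Code under test threads the oracle; it cannot tell where the answers
   come from, so reasoning about it stays pure. *)
let pick (o : oracle) (n : int) (a : 'a) (b : 'a) : 'a =
  if o n then a else b

(* One conforming runtime: a deterministic pseudo-random oracle. *)
let prng_oracle (seed : int) : oracle =
  fun n -> Hashtbl.hash (seed, n) land 1 = 1

(* Another: for a single choice, branch and collect both outcomes. *)
let both (f : oracle -> 'a) : 'a list =
  [ f (fun _ -> true); f (fun _ -> false) ]
```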

Honestly, I'm not sure I grokked all of your above criticisms of my approach. I gathered them to be about trying to model everything with lambdas, but if you still see problems with my approach, please let me know.

Regarding CRP, you're really tackling a much more ambitious goal than I am right now. You seem interested in finding universal abstractions that have a whole slew of desirable properties. I'm still not convinced that such a universal architecture exists [at least, not if you extend the list of desirable properties to include the ones that you didn't think of]. Have you thought about how you would compile such a rich run-time model to take advantage of SIMD parallelism? The reason for tiers, as I see it, is to enable writing code that makes as few assumptions about the architecture as possible, so that it's more generally applicable.

Not about lambdas.

By 'pure function', what I mean is this: semantic effects are strictly limited to a returned value, and output is deterministically computable from input. I do not exclude dedicated support for a wide variety of values, identifiers, etc. Some of the best language tools we have for modeling - such as logic programs and term-rewrite systems - can readily be expressed as 'pure functions' from one set of relationships to another.

The above criticisms of the pure functional approach are tied directly to those fundamental properties of functions, not to any particular expression of them (such as lambdas).

you can provide a conforming runtime implementation, meaning one that satisfies all the required postulates, using pure computations

This will regress the issues. I once heard it explained as: "Pure turtles, all the way down."

Where before you were faced with the problem of assigning a function-describing-an-application to an environment, you are now faced with three problems: choosing an application payload, assigning each application in the set to the correct elements of the function-describing-an-environment, and finally assigning the function-describing-an-environment to its own environment (i.e. a set of hardware).

I can't speak for others, but I've never found infinite regression a satisfactory answer to any problem. General purpose programming must cover the full software life-cycle [rather than regress it], and must do so without requiring violation of security. (How would you upgrade or maintain a specific application in the above scenario without also achieving authority to manipulate all of the applications?)

posit that the runtime depends on unseen environment variables in a deterministic way

Yes, and you can model those unseen variables as oracles or the like. In this manner, any indeterminism flows from the environment into the expression, and the extent of that flow can be controlled.

I've studied these techniques, but I have been unable to apply them to solve security and life-cycle problems for programming-in-the-large. I would be delighted to have a solution.

You seem interested in finding universal abstractions that have a whole slew of desirable properties. I'm still not convinced that such a universal architecture exists [at least, not if you extend the list of desirable properties to include the ones that you didn't think of].

That isn't quite my approach. I'm interested in finding a set of abstractions, or possibly a layered set, that will allow me to ensure a whole slew of desirable properties for programming-in-the-large. And the properties I described for RDP are only those I can ascribe to RDP; that is, there are many desirable properties that I was unable to achieve with that particular design.

With regards to the properties (desirable or undesirable) that I failed to anticipate: my philosophy is that we can only advance state-of-the-art by tackling the major problems that exist today - i.e. by identifying things that are repeatedly and badly reinvented, or that require uncomposable frameworks, and finding ways to improve their abstraction and composition, and assure their quality. Better tools will only expand the current limits; when we reach them again, we'll have the information necessary to build yet another generation of tools.

Have you thought about how you would compile such a rich run-time model to take advantage of SIMD parallelism?

I've given considerable thought to data parallelism in general. SIMD parallelism is rather specific to vector mathematics and domains that are expressed by those techniques. Nothing in RDP would hinder this, but nothing in RDP helps with it either. That leaves an opportunity to find a way to support it.

The reason for tiers, as I see it, is to enable writing code that makes as few assumptions about the architecture as possible, so that it's more generally applicable.

It is unclear to me how this relates to anything I asserted. I did mention layered languages, but that concept is unrelated to 'layered architectures' of the sort seen in the OSI model.

RE: Infinite regress. The

RE: Infinite regress. The fact that you can implement a virtual environment for a computation isn't meant to replace your ability to run it in an actual environment. As for how programs specify that they want to run in an actual environment? They don't. I get to decide how to map the program to the environment when I go to run the program. The program has no need to specify "real non-determinism" because nothing in the pure computations of the program will ever be able to tell a difference anyway.

How would you upgrade or maintain a specific application in the above scenario without also achieving authority to manipulate all of the applications?

It's hard for me to follow this as it seems to depend on some specific security model that you have in mind. Live upgrade is just another feature to be architected and layered above the core language. Security policies are going to be enforced by the OS or whatever is deciding whether you can hook some program up to system resources.

Regarding your CRP, it sounds as though I have gotten the wrong impression of it. It sounded to me from your post that you had abandoned the layered approach in favor of this newfangled CRP stuff. I must have read it wrong. And I do have your CRP page bookmarked - it's just a layer or two above where I am now.

RE: Infinite regress

The fact that you can implement a virtual environment for a computation isn't meant to replace your ability to run it in an actual environment.

Sure. I was only noting that regression doesn't fix issues, only moves them. Ultimately, it means moving those problems outside the paradigm.

By nature, a paradigm that requires escape to solve certain problems is not sufficient for general-purpose programming. This isn't to say it is bad, only that it needs companions.

As for how programs specify that they want to run in an actual environment? They don't. I get to decide how to map the program to the environment when I go to run the program. [...] Live upgrade is just another feature to be architected and layered above the core language. Security policies are going to be enforced by the OS or whatever is deciding whether you can hook some program up to system resources.

Indeed. You escape the paradigm and introduce external elements (such as yourself, or the OS, or a pervasive higher layer) to provide life-cycle and security features. Now, if you review the problems I listed for functional programming, I think you'll find a close match for these problems you are hand-waving to other layers and external agents.

For programming-in-the-large, life-cycle and security are not separable concerns. Thus, unless you find some clever way to handle life-cycle and security, pure functional programming won't by itself be good for more than programming-in-the-small.

It sounded to me from your post that you had abandoned the layered approach in favor of this newfangled CRP stuff.

That impression is accurate: RDP [was CRP] does cover the layers and feature-set I was exploring earlier (functional below FRP below actors model).

What I found strange was your assertion that layers are about writing programs that minimize architectural assumptions. (This was confused further by your reference to SIMD in the same paragraph, leading me to wonder whether you meant CPU Architecture.)

To me, software architecture describes how components are integrated and communicate with one another - i.e. process models, communications models, security models, and life-cycle. Whenever you write a program to control an environment, at least if you have any goals towards safety or correctness, you are making explicit assumptions about the underlying architecture - i.e. that it can accept your output, maintain your inputs, keep persistence if needed. Layered languages offer little support at all for ignoring architecture. Where layered languages help is integrating with it, i.e. because you can choose a most-appropriate layer.

Escaping the paradigm

I still [don't] buy the argument that you need to escape the functional paradigm beyond a monadic-like encoding. The compiler isn't going to be able to produce new hardware, so at some point there's gonna be - not just a non-function - but non-software involved in making anything actually happen. And since most hardware manufacturers don't interpret functional code directly, there will need to be some translation to whatever assembly code they provide. But that could be a relatively thin layer.

Furthermore, even if you want to expose the native assembly language of a processor to your language, you can do that in a pure way. The language can embed a description of the opcodes and how they update machine state over time, and then you can prove within your logic that a certain sequence of opcodes produces the same effect as that monadically encoded function. Or, probably, you'd prove a more general result: that your compiler works.
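A toy rendering of that in OCaml (the instruction set is invented): opcodes are data, machine state is a value, and execution is an ordinary function, so claims about opcode sequences become claims about `run`.

```ocaml
type opcode = Push of int | Add | Dup

type machine = { stack : int list }

(* The embedded semantics: how each opcode updates machine state. *)
let step (m : machine) (op : opcode) : machine =
  match op, m.stack with
  | Push n, s -> { stack = n :: s }
  | Add, a :: b :: s -> { stack = (a + b) :: s }
  | Dup, a :: s -> { stack = a :: a :: s }
  | _ -> failwith "stuck"

let run (prog : opcode list) (m : machine) : machine =
  List.fold_left step m prog

(* run [Push 2; Push 3; Add] { stack = [] } = { stack = [5] } *)
```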

My point regarding architecture (with SIMD as an arbitrary example of something specialized) is that if every feature is implemented in terms of a run-time model that does constraint satisfaction, then I'd think you'd have a harder time. But maybe reactive constraints are the semantics, while you have strategies for compiling them to specialized runtimes in important cases.

right software to the right device at the right time

The issues I described earlier are unrelated to producing software that can run on a given device or in a given environment. The life-cycle and security issues are about the assignment of the right software, to the right device, at the right time (and with proof of the right authority).

Perhaps you could express all the relevant life-cycle issues within a monadic-like encoding. However, I posit: if you do so, the effects on code will be utterly pervasive, such that only small fragments of your applications exhibit the advantageous properties of functional purity.

If you need a motivating example, review the scenario described here. That example is complex enough that you can't hand-wave the life-cycle issues away with an "I'll do it by hand!" approach.

if every feature is implemented in terms of a run-time model that does constraint satisfaction, then I'd think you'd have a harder time

RDP is about communication, composition, and maintenance of constraints. The system in-the-large drives continuously towards 'eventual consistency' with changing constraints. However, RDP makes no attempt to define what constraints mean to their observer: constraints are expressed as simple values (or sets of values), and each locally observed set of constraints is solved by ad-hoc, local mechanisms. Those mechanisms typically include functional programming. If the function involves large vector operations, its implementation might leverage SIMD vector operations.

It seems to me that

It seems to me that assigning software to devices is the job of the OS, which itself is just another piece of software to be produced. I read the referenced post, and it seems like a reasonable (though imprecise) list of requirements. I'm not going to think through an architecture, but I'm pretty comfortable that any architecture can be mechanically translated to a pure encoding in a straightforward way.

if you do so, the effects on code will be utterly pervasive, such that only small fragments of your applications exhibit the advantageous properties of functional purity.

As I was saying earlier, the point of purity isn't just the advantageous properties of simplicity of reasoning. It's that your language understands the meaning of its effects from an inside-the-box point of view. That brings logical simplicity and extra power.

the point of purity isn't

the point of purity isn't just the advantageous properties of simplicity of reasoning. It's that your language understands the meaning of its effects from an inside-the-box point of view. That brings logical simplicity and extra power.

You contradict yourself: either you do not have logical simplicity, or you do have advantageous properties of simplicity of reasoning.

With regards to an "inside-the-box POV": that much can be achieved by being rid of ambient authority. This is a very useful property for reasoning about a system, and is the whole basis for capability security. But purity is a much stronger property than is required to achieve confinement.

Purity constrains you such that not only must your language understand the meaning of its effects, but so must all users of any function that might exhibit 'output effects' or require reply. Fundamentally, this means pure code cannot encapsulate authorities and that pure code in the 'monadic driver' approach will accept a lot of 'extra' arguments that aren't actually required locally in order to forward replies or carry requests. These extra arguments hinder reasoning about security or 'least authority'. They also weaken the "inside-the-box POV" for any code that is either in the monadic layer or that logically accesses a history of inputs.

Thus, purity actually hinders reasoning about security and life-cycle relative to the capability model approaches to confinement.

For RDP, I do follow the connectivity rules for object capability model, and thus support its confinement properties.

assigning software to devices is the job of the OS, which itself is just another piece of software

Besides being another clear appeal to regression, that seems a rather unusual view. How does the OS learn which software is available to it? From where comes the policy for selecting one bit of software instead of another?

Batteries not included

You contradict yourself: either you do not have logical simplicity, or you do have advantageous properties of simplicity of reasoning.

It's a distinction I may have worded poorly, but not a contradiction. My first 'simplicity' refers to the ease of a programmer trying to understand the correctness of code built on top of the system, whereas the second refers to the ease of validating the underlying system.

I'm sure it's possible to develop some kind of logic to formally reason about the purity of a capability system based on the idea that capabilities cannot be created inside the system, but I doubt that approach will be as clean as a foundation built on functions.

Purity constrains you such that not only must your language understand the meaning of its effects, but so must all users of any function that might exhibit 'output effects' or require reply.

I think this (and the paragraph it was a part of) helped me understand your point of view. You want certain separation properties for security reasons and think a monadic driver is going to violate the properties you want. But my language doesn't come pre-built with all of the security policies you have in mind. The monadic driver is the thing that implements those security policies. Whereas you've carefully included a security architecture as part of your language, such security policies would be provided by a library in the language I'm describing. The thing I'd argue, though, is that whatever policies you've decided upon could be implemented in a straightforward fashion. The monadic driver in such an approach will be analogous to whatever runtime layer you provide in your system. And again, note the power of this approach: we can reason about the correctness of our runtime with respect to whatever security properties we want from within the logic, and we can provide multiple implementations.

Pervasive Monadic Style devolves to Language Primitive

My first 'simplicity' refers to the ease of a programmer trying to understand the correctness of code built on top of the system, whereas the second refers to the ease of validating the underlying system.

Those are the same. Programmers and analysis tools in advance of the primary runtime (IDEs, compilers, type-checkers, linkers, even unit tests) have the same information available to them. If that information is not manifest in the code (e.g. for type-inference), it may still be made available through an IDE. (Even test results may be expressed from the IDE.)

it's possible to develop some kind of logic to formally reason about the purity of a capability system based on the idea that capabilities cannot be created inside the system, but I doubt that approach will be as clean as a foundation built on functions

Capabilities can be created inside the system, though they cannot reference anything outside the system. This is quite important when composing untrusted code and libraries. And reasoning 'formally' about any system only requires some properties that you know to hold true over compositions. For object capability systems, those properties regard aliasing and endowment of authority (which are, for simplicity and performance, tightly coupled).

You want certain separation properties for security reasons and think a monadic driver is going to violate the properties you want. But my language doesn't come pre-built with all of the security policies you have in mind. The monadic driver is the thing that implements those security policies.

I understand your point. I disagree with its relevance for two reasons:

  1. Security and life-cycle issues must apply to code that you aren't responsible for developing. Not only must you link code that you should not fully trust, but the converse is even more often the case: you are the one developing the code that other people shouldn't fully trust. Thus, the security policies you develop are irrelevant for programming-in-the-large.
  2. Therefore, "if you do [support security as a monadic architecture], the effects on code will be utterly pervasive [involving all code you write and most code you compose], such that only small fragments of your applications exhibit the advantageous properties of functional purity [and the vast majority must be reasoned about in terms of the monadic architecture's guarantees]".

I'll grant you might model capability systems using a monadic style. Where I disagree is with your position that you can both use a pervasive-to-the-point-of-ambient monadic style and achieve the simplicity of functional reasoning.

Whereas you've carefully included a security architecture as part of your language, such security policies would be provided by a library in the language I'm describing.

Well, not quite. Capabilities are certainly not a security architecture. What they are is a pervasive primitive that allows you to build and reason about a variety of "patterns of robust composition". One will express these patterns within libraries and frameworks.

whatever policies you've decided upon could be implemented in a straightforward fashion. The monadic driver in such an approach will be analogous to whatever runtime layer you provide in your system. And again, note the power of this approach: we can reason about the correctness of our runtime with respect to whatever security properties we want from within the logic, and we can provide multiple implementations.

Except that secure composition is about integrating with code written by untrusted developers; and, therefore, the monadic driver you mention must be well standardized and as pervasive as the language itself, and therefore we are forced to reason about the correctness of our runtime and compositions with respect to the pervasive monadic driver, and therefore none of what you just said bears a truth applicable in practice.

Monads and Reasoning

I'll grant you might model capability systems using a monadic style. Where I disagree is with your position that you can both use a pervasive-to-the-point-of-ambient monadic style and achieve the simplicity of functional reasoning.

There's a trivial counterexample to that, namely the identity monad. So the question becomes one of which effects may be present in a given piece of code and how they alter our ability to reason. Presumably effects are mediated by capabilities, and hopefully exposed to the type system via an indexed monad or similar.

There's at least one important non-identity case where the impact on reasoning is minimal: locally "output only" effects, in whatever number and with whatever consequences for the rest of the system, can be reasoned about by replacing the monad with a Writer yielding a list of outputs. It should be possible to use an FRP-like style in other locations as well, for whatever that buys you.

It's not global simplicity of course, but you might find it worth hanging on to anyway.
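For concreteness, a toy Writer in OCaml (a sketch, not a library API): the "output only" effects become an ordinary list of outputs paired with the result, so equational reasoning applies unchanged.

```ocaml
(* A computation is just a value together with the outputs it emitted. *)
type ('a, 'w) writer = 'a * 'w list

let return (x : 'a) : ('a, 'w) writer = (x, [])

let ( >>= ) ((x, out) : ('a, 'w) writer) (f : 'a -> ('b, 'w) writer)
    : ('b, 'w) writer =
  let y, out' = f x in
  (y, out @ out')

let tell (w : 'w) : (unit, 'w) writer = ((), [ w ])

let example : (int, string) writer =
  tell "opened file" >>= fun () ->
  tell "wrote header" >>= fun () ->
  return 42
(* example = (42, ["opened file"; "wrote header"]) *)
```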

Monads vs. Capabilities for Reasoning about Effects

I'll grant the trivial example. And you could say the same of the 'maybe' monad and others that are, in every meaningful sense, just local variants. The issue arises with pervasive transport of effects across a monadic membrane. For example, when:

  • the correctness, safety, progress, or time-space performance of an applied function assumes a dependency between output of a monadic command and future input from the 'environment'
  • the life-cycle or update of sub-functions (i.e. libraries, services) over time becomes a property embedded in the environment, and thus relationships between functions have a dependency on the environment
  • correctness of a function-as-application across disruption (e.g. loss of node power) effectively requires persisting the current 'state' embodied by the function (state, by nature, includes anything other than source code that is required to regenerate the system)

These are not properties of monads in general. However, they are peculiar to the monadic-style 'functions-as-environment-drivers' techniques that Matt promotes.

Presumably effects are mediated by capabilities, and hopefully exposed to the type system via an indexed monad or similar.

Capabilities can be used to mediate effects readily enough. For example, rather than just output effects, one can also mediate input-effects and even such properties as indeterminism via use of capabilities. In fact, I highly recommend using them for these purposes: if something like indeterminism or temporal coupling (dependency on history) doesn't need to be an ambient authority, do not offer pervasive access.

You may also apply effect-typing to the capabilities themselves (i.e. input-only caps, output-only caps, near caps, far caps, green eggs and caps). I spent at least six years convinced that this is the best way to approach the problem, though my work on CRP has my faith faltering.

Utility of capabilities requires encapsulation of effects, such that you can limit how much of an implementation is exposed to each user of a capability. While pure functions meet the technical requirements of capability systems, they don't support any non-trivial applications or compositions of them.

at least one important non-identity case where the impact on reasoning is minimal: locally "output only" effects, in whatever number and with whatever consequences for the rest of the system, can be reasoned about by replacing the monad with a Writer yielding a list of outputs. [...] It's not global simplicity of course, but you might find it worth hanging on to anyway.

Unfortunately, pervasive existence of that feature would have serious negative consequences for reasoning about secure composition. The circumstances where these consequences manifest are not at all uncommon: less-trusted code calls more-trusted code, or vice versa:

  1. When less-trusted code calls more-trusted code: the ability for the less-trusted code to capture and transform these output effects before forwarding them onward to the environment means that the environment cannot trust the integrity of the specified effects any longer.
  2. When more-trusted code calls less-trusted code: the effects generally seem to come from the more-trusted code. This subjects the effects to the confused deputy problem. Auditing output effects is rarely useful for preventing the security violation: it is essentially impossible to distinguish benign effects from malign effects without a more global view of the system and the whole plan. (Auditing is mostly useful for the finger-pointing game after the violation.)

I posit that in practice, the vast majority of effects cross trust boundaries. This is especially true for programming-in-the-large, but it also applies to libraries and such. Thus, the circumstances in which the above issues apply are quite common. Of course, the vast majority of computations don't require expressing or reasoning about 'effects'; the above should not be read as 'the vast majority of code'. I still favor purity for the 90% of the code that is not effectful.

For monadic systems, one could presumably push all communications that cross trust boundaries into the monadic layer - i.e. by producing outputs that, for correct and consistent operation, depend upon future inputs. But, as I was arguing above, this approach must become very pervasive, and it means you may no longer reason about those effects in a functional manner.

I should note: capability systems are able to express the sort of "Writer yielding a list of outputs" confinement or interference you mention above. However, it is not a pervasive ability: one may intervene only for new, locally constructed capabilities. And, therefore, there is no risk of this sort of confinement interfering with a more global security analysis.

Monads make it too easy to transfer authority

If I correctly understand your point: you assert that monads, because they provide "automagic piping" of {environment, effects, authority} across function calls, make it too easy to transfer authority.

In a monadic style, the 'de facto' calling convention is `m >>= f` instead of `f x`, which has the downside of transferring all the authority acquired in computing `m` to the potentially untrusted function `f`. Or, conversely, of making the environment coming from the trusted `f` look secure, when it was possibly influenced by the untrusted `m`.
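In other words, something with the shape of this toy reader monad (invented for illustration), where the environment stands in for acquired authority:

```ocaml
(* The environment is threaded invisibly through every bind. *)
type ('env, 'a) m = 'env -> 'a

let return (x : 'a) : ('env, 'a) m = fun _ -> x

let ( >>= ) (m : ('env, 'a) m) (f : 'a -> ('env, 'b) m) : ('env, 'b) m =
  fun env -> f (m env) env    (* `f` receives the full environment too *)
```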

Your thesis is thus: "when a monadic approach is used, it leads *in practice* to careless binding over authority-carrying monads".

Is that correct?

Boy, am I totally in the dark

I don't even see the motivating examples behind this entire discussion. I don't know if I'm even thinking in the right problem domain, but IMHO, 1) large server farms will be managed tightly and manually behind firewalls; 2) more widely distributed computations (SETI@Home style) will be controlled via proprietary means, and not at the language level (VM security policies may or may not play a role).

I'm just missing what this entire debate is about in terms of real world scenarios - and what, if anything, it has to do with the relatively prosaic subject of this thread that I started :-)

S.

Real-world scenarios are

Real-world scenarios are already widespread. There have been repeated, half-successful attempts to support general purpose code for everything from word documents and spreadsheets to database queries and web browsing. Most of these approaches fail to meet their larger goals because either (1) they fail to accommodate security requirements, or (2) the application or server component with which they integrate fails to achieve internal security requirements. The consequence of this failure is generally that we resort to even-less-secure techniques, such as transfer of an executable file.

Firewalls and Sandboxes and such tend to limit system accessibility and expressivity far too much. I.e. if I have authority to both your webcam and your display, I should be able to send to you a single piece of code that will read from your webcam and write to your display. To the degree possible, it shouldn't matter where I am (i.e. firewalls mustn't interfere), because that would cause accessibility issues. Any resource is insecure to the extent it has accessibility issues for authorized users: that is a 'denial of service'. And if the code I send to your display is run in a sandbox, it almost certainly won't be able to directly utilize my authority for your webcam... which, again, is an accessibility issue.

Security must not require firewalls and sandboxes. Those are artifacts of computer insecurity.

I stumble across these issues regularly. From another comment:

Services outside the TCB are often critical to the correct behavior of a system and to achieving confidence in it. In my own domain, this includes flight controllers for UAVs, or obstacle avoidance on a USV or UGV, or sane link-lost behaviors. And yet all of these services must interact with a plethora of external interests. A flight controller must interact with mission management payload behaviors (such as tilting to give a camera a better view of a region, or to get a better shot). A UGV with a payload with a drill-bit in the ground (or worse, one with a robotic arm involved in attaching an explosive charge to an IED) cannot come running straight back to mama when link is lost. A USV drawing a tether line (used for many sea sensors and systems) can't just 'stop' to avoid a collision, because the rotors and line might interact in unusually bad ways (damaging one another or entangling). And all these payloads are pluggable.

In the general case, anyone with authority to interact with a component tied to a device (i.e. a 'capability' in the capability model) must be able to place software on or near that device in order to represent interests (and continue gathering data) during a link loss, disruption, jamming, or even intentional communications silence (for stealth). That is, high-confidence, general-purpose secure systems must not couple authority and transport-layer connectivity. Payloads, operators, coordinating platforms (forward observers, loitering munitions, refueling stations, traffic controllers), and such must be capable of distributing general-purpose untrusted code to one another to represent long-lived interests and communications. One must be able to run this code, have it interact or coordinate with multiple subsystems - including external systems (thus forbidding 'sandboxes') - to which the sender of the code already possesses authority.

I believe any solution will necessarily be language-based, because one of those critical security features for generalization across domains is the ability to transport general purpose code.

Much disagreement

First, I agree with you and Sandro about capabilities being created - I meant something like external capabilities. That comment was really off on a tangent, though, so I don't want to dwell on it.

I disagree with you that my two uses of 'simple' are the same. Here's another shot: the ease of using a logic to prove things is not necessarily related to the ease of proving that the logic is sound.

I also disagree with your point 1 and think this gets to the heart of the matter. Sure, if I'm building services that run on a certain platform, then my services lose value as the availability of that platform decreases (e.g. if I'm developing Flash games, I want everyone to have Flash). My goal, though, is not to build a platform, but to build a language that allows platforms to be built. To me, this seems to be another instance of layers vs. monolithic design.

Finally, I don't think it's correct to say that the security policies I develop aren't relevant to programming in the large - not if adherence to the library security policies can be checked by the type system.

I do not see how your point

I do not see how your point on "ease of proving the logic is sound" vs. "ease of using the logic to prove things" applies to anything written above.

And with regards to "the heart of the matter", it seems to me that most of our disagreements are related to your repeated appeals to infinite regression. You should know better than to perform induction without solving the base case.

Languages are platforms. You can build platforms atop a language: those are called 'frameworks', and they exhibit a programming language and properties of their own. In practice, applications written atop different frameworks will inevitably need to interact securely, even while exhibiting development and maintenance life-cycles. I'm a fan of meta-programming, but a language unsuitable for composing and integrating the resulting applications is, by nature, unsuitable for meta-programming. The relevant issues must be solved in the 'base case' platform: the language.

Perhaps you imagine the language plus some undefined underlying Operating System will allow you to reason about secure composition and life-cycles. If so, then I disagree. Regressing to an undefined environment can only hinder reasoning about effects from an application. Further, there are no existing operating systems that support the level of service composition, security, performance, or end-to-end system reasoning we can achieve via languages.

To me, this seems to be another instance of layers vs. monolithic design.

That's a false dichotomy. Monolithic design and layering are independent properties. Indeed, layers often exist in monolithic designs (such as the ring security model, or stovepipe systems). Avoiding a monolithic design is about decentralization and distribution of authority, responsibility, security policy, resource management policy, software life-cycle, and so on.

My interest is open distributed systems programming. Monolithic designs contradict most of the properties I seek. I do not promote them.

I don't think it's correct to say that the security policies I develop aren't relevant to programming in the large - not if adherence to the library security policies can be checked by the type system

There are extents to which a flexible type system can express and enforce security concerns (especially with regard to first-class ADTs). This can be very useful for programming-in-the-small - i.e. verifying the inner workings of a complex framework.

But for programming-in-the-large, not only would you need standardized ways to express these issues, you would also need full implementation details of the systems with which you are integrating. Without those full implementation details, you cannot trust whatever security policies are described by the interface published to you. Demanding full implementation details is often not feasible for programming-in-the-large (for a variety of reasons, including the maintenance life-cycle).

For these reasons, reasoning about security for programming-in-the-large should not rely heavily upon type-systems. The same applies to most other end-to-end system properties. I'll make an exception here: feel free to use types to the extent that they can be checked, audited, or enforced locally and, hopefully, inexpensively. (Linear types for exclusive transfer of capabilities are an example of a type that can be enforced locally, but for which the operation becomes very expensive when crossing host-host boundaries in a distributed program.)

In any case, any standard for how security issues are expressed for programming-in-the-large will, by nature, be pervasive. Once you account for the need to enforce these locally, i.e. in a distributed program, they also require special handling from the underlying language runtime. Between standardization and special support, it seems silly to not recognize the result to be a new language.

Seeking examples

I still don't see any infinite regression (except maybe this discussion). If you could suggest a minimal feature set that you think would be problematic for my approach, I wouldn't mind thinking through that example. Even if you end up wanting to argue that it's not any small set of features, but the interaction of all of them, that makes things hard, I still think it would be easier to evaluate that argument with something concrete in mind. Otherwise, I end up responding with 'nuh-uh' to statements like this:

But for programming-in-the-large, not only would you need standardized ways to express these issues, you would also need full implementation details of the systems with which you are integrating.

I still don't see any

I still don't see any infinite regression (except maybe this discussion).

You have, thus far, expressed the following regressions:

  1. "assigning software to devices is the job of the OS, which itself is just another piece of software" - you answer that FP software puts off this problem to 'just another piece of software' which, presumably, is FP; therefore, you didn't actually answer the question.
  2. "you can provide a conforming runtime implementation" - which, in context, regresses the issue of assignment of an application to its environment to the issue of assigning a whole conforming runtime to its environment
  3. "My goal, though, is not to build a platform, but to build a language that allows platforms to be built." - implies your belief that platforms don't need a base

If you could suggest a minimal feature set that you think would be problematic for my approach, I wouldn't mind thinking through that example.

The problem is the environment. Programming-in-the-large is about integration of systems; there is no such thing as a 'compile-time' because some systems or services are always active, and you must integrate with them. A primary mechanism for integration of services is code-distribution, whether it be JavaScript sent to a web browser, a query sent to a database, a Haskell library from hackage, or even an executable file. Much of this code represents only a single 'role' in a larger system, and therefore you'll never have a complete view of the code or system.

Different code within any given application will thus involve many different security concerns. You shouldn't even trust all your libraries equally, since in-the-large libraries are essentially 'runtime distributed code' with a slower life-cycle.

As noted in my reply to Philippa, naive approaches to dealing with effects across 'trust' or security-policy differentials can introduce problems.

Don't you have the same

Don't you have the same problem in any language (for example C)? C can't start a new thread either; you need to use the interface that the hardware provides.

RE: same problem in any language

No, the same problem is not common to all languages. C is able to specify which software runs on which hardware without regressing to an external agent. (It isn't very good at that job, which results in comments here, but it is capable of it.)

There is no 'hardware interface' for creating a new thread.

But there is. You need to

But there is. You need to configure things, set up interrupts and go to user level. Also if you want to do I/O you use the x86 in and out instructions, which to my knowledge you don't have access to in C (without inline assembly).

In C you can jump execution to a location in memory where you have loaded a program. You can't do this in memory-safe programming languages (functional or not), so you have to resort to interpretation of the user programs. In practice it doesn't do what you want, but in theory it does?

You imagine a specific

You imagine a specific implementation of threads. Remember that threads express concurrency, but do not guarantee parallelism.

In C you can jump execution to a location in memory where you have loaded a program. You can't do this in memory-safe programming languages (functional or not), so you have to resort to interpretation of the user programs. In practice it doesn't do what you want, but in theory it does?

Interpreting programs works well enough, especially if your language can perform significant partial evaluation. The problem with assigning software to hardware is unrelated to how the software is interpreted. The problem is expressing the assignment itself - i.e. program extensions and upgrades, including device-drivers. You cannot express this in a purely functional language. Even expressing it monadically doesn't help until such a time as the monadic 'language' is accepted widely and, thus, becomes a de-facto language of its own, having its own properties. But, at that point, you are no longer writing code in the purely functional language.

In C you can jump execution

In C you can jump execution to a location in memory where you have loaded a program.

Actually you can't - at least, not while conforming to the C spec, where casting data pointers to function pointers is undefined behaviour.

You can think of the monadic

You can think of the monadic program as bytecode for a virtual machine. All the same options for native interpretation or compilation exist for this "bytecode" that exist for other languages. I even commented that you can, in a pure logic, prove the VM implementation correct relative to an axiomatization of the hardware. As Jules asks, how does this differ from the situation with C?

The distinction between the OS and other programs was to address your security concerns. The OS would presumably be written in a monad with full authority over the local machine. It would then expose a more disciplined capability scheme to the applications running on top of it.

Pervasive Monadic Style devolves to Language Primitive, ctd.

You can think of the monadic program as bytecode for a virtual machine.

I can't think of monadic program as "bytecode for a virtual machine" unless that VM is standardized enough to solve the related life-cycle issues (code distribution, application integration, etc.). Thus, that statement of yours is only true if I standardize the monadic language to the point that it is as pervasive - or more so - than the language in which it was written, such that reasoning about the vast majority of programs is in terms of the monadic language rather than the functional one.

You have been insisting that you can have it both ways: reasoning about programs in terms of functional properties AND integration of application and service components in terms of a monadic language so pervasive it penetrates the inner workings of most services and applications. You cannot! Programmers would need to reason about non-trivial programs in terms of the properties offered by the monadic language, and the underlying OS/VM.

In such a world, you might as well embrace the standardized, widespread monadic language as 'primitive' - the language that allows the whole system to glue together. Doing so won't change how you reason about anything. The pure functional language itself would only be suitable for in-the-small programming - i.e. for specific algorithms and small fragments of larger applications and services. That is still an important role.

you can, in a pure logic, prove the VM implementation correct relative to an axiomatization of the hardware. As Jules asks, how does this differ from the situation with C?

The notion of standardizing a monadic architecture for widespread integration of components really doesn't differ much from C. In fact, you could match C's basic operators and call the result C. It would be 'pure functional' in a hand-wavy 'heap-passing style' sort of way. Then you could try to prove its implementation correct.

The distinction between the OS and other programs was to address your security concerns. The OS would presumably be written in a monad with full authority over the local machine. It would then expose a more disciplined capability scheme to the applications running on top of it.

Security concerns cannot be separated. There is no point at which the Applications start and the OS stops. I find this distinction to be a serious conceptual mistake promulgated by the monolithic OS kernels (the kernel is NOT the OS).

Attempting to address my security concerns by dumping them upon a 'separate' OS and ignoring composition between applications as a fundamental part of the OS is likely to have me mounting my hobby horse, grabbing lance and shield, and skewering you.

Capability discipline cannot just apply below an application. It must also apply within them, especially once you begin working with extensible applications and inter-process communication (i.e. apps that can receive queries or JavaScript extensions).

I'm not saying anything too deep

All I'm saying is that if interpretation of the meaning of effects is done by the environment, rather than your program, then it's easy to give other semantics to that program from within your language and the inside-the-box semantics of your program are very simple and don't involve a description of effects at all. The rest of what you need to reason about the environment doesn't change - you do it the same way whether your programs are monadic or not. As a side point, I note that reasoning about the environment can be done in a pure way as well, and that's true whether you use monadic programs or not.

Meaning of Effects

Most 'impure' programming models with effects also leave the meaning of those effects to the 'environment'. For example, the meaning of a message is decided by its recipient.

But with pure functional programs, you cannot express which environment gets to interpret the effects. Thus, you cannot make assertions regarding integrity or security of interactions involving multiple environments.

I think the point you and

I think the point you and David are disputing is quite different. As I pointed out, if you're starting with a memory-safe core language you're already capability-secure. Whether you choose to retain the capability security properties via FFI extensions is up to you.

You are correct that you can certainly define a monadic library interface to selectively violate capability security properties, and thus confine those violations to selective parts of the program, but only by giving up the compositional security properties your language already enjoys.

David is simply saying that this is an unacceptable tradeoff, or would only be acceptable assuming these violations were so prevalent that it should be expected in all programs written in this language.

You are correct that you can

You are correct that you can certainly define a monadic library interface to selectively violate capability security properties

I never meant to make that point. We've been all over the map, but I was imagining David at one point to be asking how you build, in a library, a monadic layer that oversees the safe exchange of capabilities between subprograms also written in monadic style.

I never meant to make that

I never meant to make that point. [monads can violate capability security]

Nevertheless, a monadic interface can bypass the ordinary parameter-passing requirements of a referentially transparent language, and thus violate capability security properties.

but I was imagining David at one point to be asking how you build, in a library, a monadic layer that oversees the safe exchange of capabilities between subprograms also written in monadic style.

Sounds tricky. Monads are specifically designed to avoid the need for explicit parameter passing, the latter of which is required for capability security. Perhaps some solution with sealers/unsealers and a dynamic permission check. I'll have to think about it.
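
For what it's worth, here is a minimal sketch of the classic sealer/unsealer pattern with a dynamic check, in Haskell. It assumes nothing beyond Data.IORef, and the names (newSealer, Sealed) are illustrative rather than any real library's API:

    import Data.IORef

    -- A sealed box is just an action that deposits its payload into the
    -- private slot of the sealer/unsealer pair that created it.
    newtype Sealed a = Sealed (IO ())

    newSealer :: IO (a -> Sealed a, Sealed a -> IO a)
    newSealer = do
      slot <- newIORef Nothing          -- private channel shared by the pair
      let seal x = Sealed (writeIORef slot (Just x))
          unseal (Sealed deposit) = do
            writeIORef slot Nothing     -- clear any stale value
            deposit                     -- only boxes from THIS sealer write here
            mx <- readIORef slot
            case mx of
              Just x  -> return x
              Nothing -> ioError (userError "unseal: box from another sealer")
      return (seal, unseal)

Only code holding the matching unsealer can recover the payload; a box from a different sealer writes into a different slot, so the dynamic check fails.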

I'm sure it's possible to

I'm sure it's possible to develop some kind of logic to formally reason about the purity of a capability system based on the idea that capabilities cannot be created inside the system, but I doubt that approach will be as clean as a foundation built on functions.

Capabilities can be created inside a system. In fact, every reference, local or top-level, is a capability. Capability discipline basically boils down to ensuring that all ambient capabilities, ie. top-level bindings, are transitively immutable and side-effect free. Thus, all authority flows by parameter passing, where "authority" is generally defined as "ability to induce side-effects".

Therefore the pure lambda calculus and all of its typed variants are capability systems. They become non-capability systems as soon as the designers introduce ambient authorities, like openFile. To some extent this can be mitigated if authorities like openFile are "fully virtualizable", ie. by an in-language chroot jail or similar.

So a capability discipline is just an additional constraint on the host language, and so is just as "clean" as that host language.
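
To make "fully virtualizable" concrete, here is a minimal Haskell sketch under the assumption that programs receive their file operations as an explicit record (the names FS, jail, etc. are made up for illustration):

    -- Clients are written against this record instead of the ambient readFile.
    data FS m = FS { fsRead :: FilePath -> m String }

    -- The full-authority instance, for trusted setup code only.
    realFS :: FS IO
    realFS = FS readFile

    -- An in-language "chroot jail": same interface, restricted authority.
    jail :: FilePath -> FS m -> FS m
    jail root fs = FS (\p -> fsRead fs (root ++ "/" ++ p))

    -- A client can only ever exercise the authority it was handed.
    readConfig :: FS m -> m String
    readConfig fs = fsRead fs "config"

With this discipline, readConfig (jail "/sandbox" realFS) cannot reach outside /sandbox (a real jail would also have to reject ".." tricks).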

Pure functions and authority encapsulation

2. pure functions cannot encapsulate authority - which means the 'environment' must contain excessive ambient authority

I'm sorry to ask a beginner question in such an insightful debate, but could you elaborate a bit on this?

I intuitively understand a part of what you mean: pure functions cannot get information/authority without it appearing in the input type of the function (or generally in the type of the value). For example in Haskell, the only reason you can "do IO" is that there is a standardized value named 'main' which has type "IO ()", with a special meaning which connects it to the environment of the program. Without a global value in the IO monad, you couldn't do input/output. The authority to do IO is given by the environment.

What I don't understand is what exactly you mean by "encapsulate authority". Hiding the use of, say, IO, inside values with pure types? That doesn't seem desirable and probably contradicts what you said in an earlier thread. It must be something else (pardon my general ignorance about the capability-security model).

I'm also not sure how you characterize excessive ambient authority. Is the publishing of an "authority to do I/O" to any Haskell program an excessive ambient authority?

I also read your page on Constraint Reactive Capability Programming. It is quite terse and abstract. I think you should consider adding examples (the few already there really help).

RE: "encapsulate authority"

What I don't understand is what exactly you mean by "encapsulate authority". Hiding the use of, say, IO, inside values with pure types?

My reply to Philippa might help clarify this matter. In short, I refer to the ability to protect the integrity and informational content of 'effects' from the user of a function, in a fine-grained manner.

Use of g :: a -> IO b would allow encapsulation of some effects inside g. But, if f uses g, then f also gains access to IO. Thus, the authority granted by the IO monad was not encapsulated within g. In this case, f has access to foreign functions, the three standard IO channels, sockets and the Internet, and so on.

You might reasonably note that f does not have access to an IORef, MVar, or TVar encapsulated within g without a shared reference to it. That is, authorities to variables are not granted by holding the IO monad alone; they are also capability secure. If we forbid unsafePerformIO, it becomes feasible to develop a monadic alternative to IO that offers no ambient authority.
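
As a minimal sketch of what such an alternative might look like - assuming the usual module-level export control, with all names (SafeM, FileCap, readCap) invented for illustration - SafeM is IO underneath, but since its constructor is hidden, code in SafeM can only cause effects reachable from capabilities it is explicitly handed:

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    module SafeM (SafeM, runSafeM, FileCap, mkFileCap, readCap) where

    -- Constructor deliberately not exported: no way to smuggle in raw IO.
    newtype SafeM a = SafeM (IO a)
      deriving (Functor, Applicative, Monad)

    runSafeM :: SafeM a -> IO a
    runSafeM (SafeM io) = io

    -- A capability granting read access to one particular file.
    newtype FileCap = FileCap FilePath

    -- Only trusted setup code should mint capabilities.
    mkFileCap :: FilePath -> FileCap
    mkFileCap = FileCap

    readCap :: FileCap -> SafeM String
    readCap (FileCap p) = SafeM (readFile p)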

But reasoning about monadic code is very different from reasoning about pure code: with pure code we can always commute lexically independent expressions, eliminate redundant expressions, eliminate expressions whose values are unused, and substitute the full expression for an assigned variable. We cannot, in general, do any of this for monadic code. Monads aren't the only means to encapsulate authority. I've developed a computing model, RDP, that is much 'closer to purity': RDP allows arbitrary expressions to be commuted, redundant expressions to be eliminated, and expressions to be substituted for assigned variables. But I believe that it will never be possible to eliminate code that encapsulates authorities on the basis that the value from the expression is unused. Thus, code that encapsulates authorities will never be pure.
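
To make the reasoning gap concrete (a throwaway example, not anyone's code from this thread): transformations that are always valid for pure expressions change observable behaviour for monadic ones.

    -- Pure: redundant expressions may be shared or duplicated freely.
    pureTwice :: Int
    pureTwice = f 1 + f 1            -- identical to: let y = f 1 in y + y
      where f = (* 2)

    -- Monadic: the "same" transformation is not meaning-preserving.
    readsTwo :: IO String
    readsTwo = do a <- getLine       -- reads TWO lines
                  b <- getLine
                  return (a ++ b)

    readsOne :: IO String
    readsOne = do a <- getLine       -- reads ONE line; not equivalent
                  return (a ++ a)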

So pure code won't encapsulate authority (it belongs to the environment, and thus available to all users of a function) and code that encapsulates authority isn't pure (can't be replaced by the value, or eliminated if that value is unnecessary). QED.

I'll note that when I mentioned advantages of functional programming in that earlier thread, I was not talking about the functional 'drivers' approach pursued in some FPLs, which generally assume a relationship between past outputs and future inputs.

I'm also not sure how you characterize excessive ambient authority.

Many language designers see the ability to create new values (which might need space on the heap or to be GC'd) as being a reasonable ambient authority. But it is still an ambient authority, and may be abused. Thus, there are ambient authorities that are not considered excessive. Some might suggest that an authority to random numbers or the current time should be ambient. I would reject this.

Generic I/O is not an ambient authority in Haskell, even though it is often an excessive authority for specific Haskell programs.

Effect and State

Many programming tasks are I/O related (even "Hello World!" starts with I/O), and we all know that I/O is naturally stateful. Sure you can simulate stateless I/O, but it is simply not as convenient most of the time.

Functional programming is excellent when dealing with IO, as shown by how easy it is to patch together a compiler in a functional language.

Functional programming is excellent when dealing with state - just not [concurrent] mutable state in pure functional languages. The worst thing you can say about that is that you sacrifice performance and convenience for purity and safety/robustness.

What functional languages, especially lazy ones, are bad at is dealing with sequential effects. That's not state, although the two are easily confused when dealing with the "World State" IO monad.

A matter of taste

It's worth noting that opinions vary widely as to whether even lazy functional languages are any worse at dealing with sequential effects than directly imperative languages - not everyone shares your distaste for monads and related techniques (e.g. indexed monads, applicative functors/idioms), and there's a good argument to be had that they're an improvement on doing the same imperative things directly in a host language with no effect system.

My distaste

is with fanboys, not monads.

I think that's a terrible

I think that's a terrible example. Compilers don't do I/O so much as they are just one big function.

Actually, I think one of the reasons FP proponents are more convinced of the superiority of functional programming than is necessarily warranted is due to their high degree of applicability to writing compilers, where almost all I/O is input to a function at one end and output at the other end, where the input and output are defined in specifications, where there is an unambiguously correct mapping from source to target, where the data is inherently tree structured, where the structure of the computation involves recursive pattern matching and transformation, where there is virtually no dynamic code loading necessary, etc. It's a match made in heaven; and if all you do is geek on languages and compilers - especially in an academic context - of course it's self-evident that functional programming is superior to everything else.

A better example of more complicated I/O might be an IDE: loading up a project with sub-projects, not just once but multiple times, in and out, from a steady state of the event loop; displaying various interactive views of the data; loading up plugins dynamically, which may dynamically modify the UI layout and hook almost anything throughout the depths of the app; the state machine of debugging, and the interplay between the debug monitor, the debuggee, the compiler evaluator, the editor and watch window displays, many on different threads, different processes or even different machines running different OSes on different architectures, with I/O through the display, the keyboard, mouse, network, USB cable, serial cable, etc., all happening at the same time.

Stuff goes In, Stuff goes Out

I think you just made my point exactly.

Good at IO, bad at effects/events.

Have you tried structuring

Have you tried structuring programs Erlang-style using Haskell's lightweight concurrency? It's another case of embedding an external paradigm, but it's pretty effective once you've got a few patterns down and is exactly how I'd attack an IDE-like problem. There's a certain amount of room for irritation with the lack of extensible records and variants, admittedly. But it's not worse than a good many imperative languages which you don't seem to have a problem with.
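
For the record, the basic pattern is only a few lines. A minimal sketch using only Control.Concurrent (the Msg type and actor names are invented for illustration):

    import Control.Concurrent

    -- An Erlang-style "actor": a lightweight thread owning a mailbox.
    data Msg = Ping (Chan String) | Stop

    actor :: Chan Msg -> IO ()
    actor inbox = do
      msg <- readChan inbox
      case msg of
        Ping reply -> writeChan reply "pong" >> actor inbox
        Stop       -> return ()

    main :: IO ()
    main = do
      inbox <- newChan
      _     <- forkIO (actor inbox)
      reply <- newChan
      writeChan inbox (Ping reply)
      putStrLn =<< readChan reply         -- prints "pong"
      writeChan inbox Stop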

Weaknesses and Problems

Pointing out a weakness of a language doesn't mean I have a problem with it.

It is (mostly) easy to extend an IDE in Java/C++, but not so, per se, in Haskell. Even if it were as easy to do in Haskell as in Erlang, there remains the point that it - for most people - is easier in Java/C++/most OO languages.

[ Though I believe it's solvable, to make it easy, even in Haskell. ]

Shifting Goalposts

I notice that suddenly we're talking about extending the IDE rather than writing it in the first place. The most immediate problem with Haskell there is the lack of extensible records, variants and other datatypes (Java and C++ use classes for this purpose), but this is hardly an intrinsic property of lazy functional languages as a whole! Rather, it's a historical accident that's difficult for Haskell to shift but not particularly difficult to deal with in a fresh design.

That aside, the ease of extensibility is significantly a property of the IDE at hand. There are, I'll grant, an awful lot more examples of good-enough software architecture in C++ or Java than in Haskell. But I'm not sure what "not so per se in Haskell" can mean if it's not referring to the extensible types issue.

Nor do I see what the "for most people" point has to say of value. Can you raise something that's more than a familiarity issue? Frankly, Erlang beats the pants off C++ or Java for handling complex, event-laden IO - just as it was designed to!

Pause and Effect

At the moment, I believe, OO just matches development of visual window-like interfaces very well, since extensible type-safe state combined with imperative concurrent message passing is almost a one-to-one match.

You believe that "The most immediate problem with Haskell there is the lack of extensible records, variants and other datatypes". Besides it not being true ;), I have the feeling you refer here to that it is hard to mimic OOs subtyping in Haskell without, say, extensible records. Which may well be true, I am not sure.

But then, isn't the point you're making: FP is bad at IDE development since FP isn't OO, and all will be well when we change our FP language to an OO one? Huh? What happened to combinator libraries?

The other point is, I guess, about laziness and effects. Half the reason for the existence of the IO monad is that, since on-demand evaluation makes it more difficult to know when a side-effect can occur in a lazy language, effects need to be 'sequentialized' explicitly. Yes, it's a weakness, since in most other languages you can state effects directly, and you know when they occur, and there is no extra plumbing or syntax needed. Purity and encapsulation of state may be seen as arguments that that is a good thing; I personally don't believe so.

[ Title is an easter egg, btw. ]

Extensible data isn't unique

Extensible data isn't unique to OO, and the model I'm suggesting for IO isn't OO either, although it's related. That's the beauty of a language like Haskell: it's a far better metalanguage than an OO language is, which means I can turn it into something better-suited to the task at hand than OO itself. And then I can go build combinator libraries and custom control structures and all the rest of the fun FP toys on top of that - I'm not limited to using the object language's features to express myself. Or to one object language, for that matter.

It's true that lazy functional languages need a little extra syntax in order to handle sequential effects, but this is a trade-off for the power mentioned above. To call it simply "bad" is extremely misleading. Informed programmers choose to use Haskell for tasks involving complex IO because it's easy to handle, not because they're fanboys. You might consider this cheating, but OO programs cheat in just the same way to handle other tasks - they're just not as good at it.

Propaganda Philippa

There is so much wrong with the above post that I think it's best we don't talk to each other. I will not discuss propaganda.

Funnily enough you're

Funnily enough you're leaving the strong impression that you're engaging in propaganda yourself. That's why we're arguing in the first place, because your arguments don't take account of common ways that lazy functional languages are used.

If you have a reasoned critique, please make it. But it needs to engage with the ideas above. Embedding languages (small and large) in Haskell is extremely common practice, it's what combinator libraries are all about. But if not? Simply saying a class of languages is bad and then calling people fanboys and accusing them of propaganda doesn't belong on LtU.

Halelujah, Hosanna and Haskell

I'm pointing out the relative strengths and weaknesses of a known language: Haskell. Where is any claim made that I ever found Haskell a bad language? I love it to death. Wild accusations on your part.

Purity is nice, but it is unknown whether it scales. I doubt it.

Laziness is nice, it gives a very declarative language, but you run into performance (time and space) quirks. That is _known_.

Functional programming excels at writing DSLs and combinator languages, yeah, I agree. And yes, I see you repeat my argument and use it against me.

Functional programming doesn't excel at things OO is better at. Big deal. I dare state it, do you?

You stated I have a problem with monads. I don't - how can I have something against a mathematical structure? I have a problem with people who think it's a good approach, or even the _final_ approach; that falls in the category of uninformed fanboyism. I disagree with that; it just solves a/some problems. End of story.

You're a strong proponent, but blind. And, as far as it is interesting to discuss belief systems with people who run around chanting on bare feet, it usually doesn't go anywhere.

So, let's don't, or do, whatever we feel like. ;)

[ Actually, I started off discussing FP, not Haskell. Different things. ]

I'm pointing out the

I'm pointing out the relative strengths and weaknesses of a known language: Haskell. Where is any claim made that I ever found Haskell a bad language? I love it to death. Wild accusations on your part.

You're making flimsily justified statements about the weaknesses of a known language, and you started by making statements about lazy functional languages as a whole. I never made the claim that you found or called Haskell a bad language. You have said Haskell is bad at specific things that a good many people consider it to be good at. In the meantime, you imply that anyone who does is a fanboy and directly accuse me of propaganda. If you want to talk about wild accusations, look in the mirror.

Purity is nice, but it is unknown whether it scales. I doubt it.

And that's okay, because Haskell doesn't have to be used entirely for the pure functional programming paradigm. It's quite capable of being used as the core of a multiparadigm language built with libraries around it, and this is a common way to use it. Having the pure functional paradigm at its core offers it particular strengths in comparison to the likes of Oz or C++ or even a closer relative like Ocaml.

Functional programming excels at writing DSLs and combinator languages, yeah, I agree. And yes, I see you repeat my argument and use it against me.

I've long argued in favour of using Haskell in this way. You seemed to imply that there's something wrong with doing it when it embeds another paradigm, though.

Functional programming doesn't excel at things OO is better at. Big deal. I dare state it, do you?

Except that I prefer concurrent message passing for the same tasks, something which a number of functional languages support well. You haven't engaged with this idea at all, you've merely dismissed it out of hand.

You stated I have a problem with monads. I don't - how can I have something against a mathematical structure? I have a problem with people who think it's a good approach, or even the _final_ approach; that falls in the category of uninformed fanboyism. I disagree with that; it just solves a/some problems. End of story.

It's reasonable to say you have a problem with monads as used in Haskell, so you can leave the word games behind. It takes more than your mere disagreement to render something uninformed fanboyism.

So, let's don't, or do, whatever we feel like. ;)

You appear to feel like repeatedly insulting and smearing people. Stop it.

In a religious debate, you bring tomatoes

What else?

I am pointing out weaknesses of a paradigm, and you repeatedly counter it with "Yes but in Haskell that is solved by ..."

That is not the same argument. I can program FP in assembly if I want, I can program OO in Haskell if I want. Doesn't mean it is reasonable to state that it is just as easy to do OO in an FP language as in an OO language.

Stating that I have a problem when I write a critique is the exact fanboyism associated with a religious debate. The insult is on your part. Accept it.

I am pointing out weaknesses

I am pointing out weaknesses of a paradigm, and you repeatedly counter it with "Yes but in Haskell that is solved by ..."

The pure functional paradigm (in the Van Roy sense) is not bad at sequencing effects, it simply doesn't have them. But this is not true of functional programming languages, and it is not true when FP is used as a metaparadigm - an entirely valid and appropriate use of the FP paradigm which gave us, amongst other things, the world's first ANSI- and ISO-standardised OOPL in the form of Common Lisp.

I used an example in Haskell of embedding a paradigm other than OO in order to handle IO. FPLs have to embed another paradigm to handle IO to some extent, whether simple imperative programming or something else, so this is hardly an unreasonable thing to bring to the table given that we were admitting IO to the discussion in the first place. You then responded specifically with regard to Haskell again, so arguments about how Haskell handles something are fair game. It's worth noting that a good many other FPLs support message-passing concurrency in some form - Haskell's support is descended via Concurrent ML, for example.

That is not the same argument. I can program FP in assembly if I want, I can program OO in Haskell if I want. Doesn't mean it is reasonable to state that it is just as easy to do OO in an FP language as in an OO language.

It depends on the FP language at hand, but it's a lot easier to do OO in an FPL with reasonable support for extensible datatypes than to do FP in an OOPL without first-class functions. There is in fact a strong tradition of OO programming in FPLs, to the extent that much active research on OO programming has been carried out in them.

That said, I never made the statement that it is as easy to do OO in an FP language as an OO language. All I said was that using concurrent message-passing within an FPL allows complex IO to be handled as well as in an OOPL. If anything I'm inclined to consider this an understatement: the primary use for Erlang (the most obvious concurrent message-passing language) is in managing complex IO.

Personally, if libraries/platform support were not an issue and I wanted to use an OO approach to something, my language of choice would be functional with strong support for OO rather than a ground-up OO language. But it's worth remembering just how close languages like Smalltalk are to functional languages themselves. The gap between OO and strict, impure FP is pretty narrow until you involve a type system and this is something a critique of FP in the wider sense needs to account for.

Stating that I have a problem when I write a critique is the exact fanboyism associated with a religious debate. The insult is on your part. Accept it.

Idiomatically, having a problem with something does not mean a fault on your part. Suggesting that you have a problem with a language isn't an insult, it's just a statement about taste. You've made it quite clear that Haskell used in the ways I've described is not to your taste. So no, the insult isn't on my part at all.

If you want to accuse someone of a "religious debate", it helps to have actually engaged with their points instead of trying to dismiss them as propaganda. If you had simply said that you were talking strictly about the functional paradigm per Van Roy, I would have related that to functional languages and functional programming in the wider sense and left it at that.

Scaling purity

Purity is nice, but it is unknown whether it scales. I doubt it.

I don't want to wade into the rest of the mud fest here, except to suggest that it is one.

On this technical point, however, I think it's worth noting that the issue is much larger. Speaking subjectively, I agree that purity does not scale, and in selected but important applications one is hard-pressed to make it perform. Monads certainly do not compose well in certain ways. State also does not scale, in part because it frequently embeds hidden assumptions in an implementation that lies far from the point of dependency.

So this should lead us to two questions:

  1. What does scale, and what empirical or anecdotal evidence do we have for that scale?
  2. What scales in the human sense? One reason to hire Haskell/O'Caml people is that they are a self-selected bunch. A cynical take would be that it's worth the hassle of letting them program in Haskell/O'Caml for the sake of the brain power. So how much of the scale problem is technical and how much is people?

Personally, I believe strongly in disfunctional programming and object-disorientation, but both of you already knew that. :-)

Mainstream languages

Mainstream languages obviously do scale. Just look at the enormous applications written in C++. The question is not whether they scale but how well they scale relative to a pure language.

Look on my applications, ye Mighty, and despair!

Given the observed defect rates in large applications based on mainstream tools, and the absurdly high failure rates (something like 80%?) for large software projects, we can safely conclude that no, mainstream development practices do not scale. While some large applications survive the carnage by a combination of luck, skill, or sheer stubbornness, the field of software development as it stands today is, for the most part, a glorious monument to failure.

Rather, the question is where and why things go wrong, and to what extent alternative tools (such as programming languages) can help us avoid pitfalls.

Scaling is relative

Mainstream languages are the only thing known to work. Of course they don't work well enough, that's why we're trying to find something better. "Does X scale?" is not a meaningful question. The question is "Does X scale better than Y?". The current state of affairs is that mainstream languages work well enough to develop production quality operating systems, word processors, etc. It is unknown whether this can be reasonably done in purely functional programming languages.

Monads certainly do not

Monads certainly do not compose well in certain ways.

I suspect something can be learned from examining the issues around this - why we want that composition in the first place, what we can loosen up to allow it and what we can do instead of it.

This may or may not be of some interest, by the way: The Monad Zipper.

Don't blame the math

I agree, that's exactly the right question. Asking why monads don't compose is like asking why we can't add scalars and vectors.

My own view of the situation is that monads combine language and interpreter in a way that's not quite what we usually want. If we want to represent computations with effects, there is a natural way to combine a pair of them - do one, then the other, and produce all effects along the way. If one computation computes the integer 4 and makes the sound card beep, and another computation increments its argument and makes the screen flash, I'd expect their composition to make a beep, a flash, and return 5.

It's obvious how to do this when you're talking about monads that just run a linear course emitting effects along the way. The problem is with monads like List, etc. that introduce a non-trivial control flow. The solution is to separate the abstract program from its interpretation. Combining programs is then easy. Combining interpreters in general isn't easy, but that's because that's an operation that wasn't really well defined in the first place.
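
A minimal sketch of that separation in Haskell - a free monad over an effect functor, with all names illustrative - where the program merely records effects, and composing two programs needs no commitment to an interpreter:

    {-# LANGUAGE DeriveFunctor #-}

    data Effect k = Beep k | Flash k deriving Functor

    -- The abstract program: a tree of effects with results at the leaves.
    data Prog f a = Done a | Step (f (Prog f a))

    instance Functor f => Functor (Prog f) where
      fmap g (Done a) = Done (g a)
      fmap g (Step s) = Step (fmap (fmap g) s)

    instance Functor f => Applicative (Prog f) where
      pure = Done
      Done g <*> x = fmap g x
      Step s <*> x = Step (fmap (<*> x) s)

    instance Functor f => Monad (Prog f) where
      Done a >>= k = k a
      Step s >>= k = Step (fmap (>>= k) s)

    beep, flash :: Prog Effect ()
    beep  = Step (Beep  (Done ()))
    flash = Step (Flash (Done ()))

    -- Composition is just sequencing; no interpretation has been chosen yet.
    prog :: Prog Effect Int
    prog = do { beep; flash; return 5 }

    -- One possible interpreter; a test harness would be another.
    runIO :: Prog Effect a -> IO a
    runIO (Done a)         = return a
    runIO (Step (Beep  k)) = putStrLn "beep"  >> runIO k
    runIO (Step (Flash k)) = putStrLn "flash" >> runIO k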

Thanks for the link to the monad zipper paper. I'll give it a look.

Asking why monads don't

Asking why monads don't compose is like asking why we can't add scalars and vectors.

Indeed. Really we're talking about composing features, via a mechanism such as monad transformers (but there are other options around). And it turns out that they compose... subtly.

My own view of the situation is that monads combine language and interpreter in a way that's not quite what we usually want.
[..]
The problem is with monads like List, etc. that introduce a non-trivial control flow.

Definitely, although I prefer Error(T) as an example. The two combinations with StateT yield a difference that's easily appreciable as useful rather than junk: should exceptions leave state untouched, or should they roll it back as if the computation is a transaction?
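
Concretely - a small sketch using the modern mtl names, with ExceptT standing in for ErrorT - the stacking order alone decides the policy:

    {-# LANGUAGE FlexibleContexts #-}
    import Control.Monad.State
    import Control.Monad.Except

    action :: (MonadState Int m, MonadError String m) => m ()
    action = do put 42
                throwError "boom"

    -- ExceptT over State: the state update survives the error.
    keepsState :: (Either String (), Int)
    keepsState = runState (runExceptT action) 0   -- (Left "boom", 42)

    -- StateT over Except: the error discards the state, as in a transaction.
    rollsBack :: Either String ((), Int)
    rollsBack = runExcept (runStateT action 0)    -- Left "boom"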

The solution is to separate the abstract program from its interpretation. Combining programs is then easy. Combining interpreters in general isn't easy, but that's because that's an operation that wasn't really well defined in the first place.

I'm not entirely convinced: what happens when programs care about transactionality or lack thereof? Perhaps they expect transactional failure, perhaps they want to see the state when the failure occurred, perhaps they even want to take the state, push it into some other channel and then roll it back?

That said, I certainly try to keep track of features (capabilities?) that my computations use. It's the first step towards abstracting away from the specific monad implementation you used, which rapidly becomes an engineering issue. I still prefer to build the monad using transformers where I can because it gives me less chance to screw up, and I've been known to bolt on ErrorT for local use in functions all of ten lines long.

For what it's worth, I feel I have an understanding of how control flow works when monad transformers are stacked together. For me, it's defined and even behaves in a way I have some experience reasoning about in small cases (and transformers like state or even output features like the screen flashing have the benefit that they commute and can be treated as one big layer, although screen flashing can't be undone by ErrorT!). It's "just" a priority-based rules system.

I'd still like to see some specific cases where "monads don't compose" and there was a reasonable desired outcome. There's definitely a way to go on doing the engineering around transformer stacks, though the monad zipper is a nice step forward. Similarly we've a way to go in implementing transformer stacks efficiently!

Monad transformers and stuff

Monad transformers still suffer from the lack of separation between the program and its interpretation. Take your error and state example. The program that raises errors and modifies state doesn't ever need to know which of those policies are desired. From the view inside of the program, raising an error is the end.

I'm not entirely convinced: what happens when programs care about transactionality or lack thereof? Perhaps they expect transactional failure, perhaps they want to see the state when the failure occurred, perhaps they even want to take the state, push it into some other channel and then roll it back?

I'm maybe not exactly sure what you're getting at, so I'll try two answers. One is that you can write the interpreter to implement whichever of those behaviors you expect. While interpreters don't compose in general, you can write little transformers (basically, the interpreter part of monad transformers) that peel off effects, and you can apply those in whatever order you want. For transactional memory, I'd expect to see an interpreter that peels off the memory access and atomic signals, exposing the other signals. You could provide machinery to queue up other signals if they don't yield a response, but there's not too much you can do about getKey in the middle of a transaction.

The second is that, with a sufficiently powerful type system, you can give your program a type that indicates that it can only be interpreted in a manner that is consistent with it having a semantically useful meaning. So in theory, if your program relies on certain sequences of actions happening atomically, you can insist that it's interpreted in such a way with types.

Monad transformers still

Monad transformers still suffer from the lack of separation between the program and its interpretation. Take your error and state example. The program that raises errors and modifies state doesn't ever need to know which of those policies are desired. From the view inside of the program, raising an error is the end.

The program that merely raises and modifies state only needs to know that the monad it lives in provides state and error raising, yes. This isn't incompatible with monad transformers, the program just gets a polymorphic type.

A program that also catches errors may care very much whether errors roll back state changes or not, at which point the transformer stack appears to hold useful information.

I'm not entirely convinced: what happens when programs care about transactionality or lack thereof? Perhaps they expect transactional failure, perhaps they want to see the state when the failure occurred, perhaps they even want to take the state, push it into some other channel and then roll it back?

I'm maybe not exactly sure what you're getting at, so I'll try two answers. One is that you can write the interpreter to implement whichever of those behaviors you expect. While interpreters don't compose in general, you can write little transformers (basically, the interpreter part of monad transformers) that peel off effects

I think we already have this separation: polymorphic typing gives you the interface/implementation separation and the monad implementation essentially is the interpreter (and its syntax).

The second is that, with a sufficiently powerful type system, you can give your program a type that indicates that it can only be interpreted in a manner that is consistent with it having a semantically useful meaning. So in theory, if your program relies on certain sequences of actions happening atomically, you can insist that it's interpreted in such a way with types.

Sure, a sufficiently powerful type system solves a lot of ills and Haskell's is often good enough already. What I'm interested in is how we talk effectively about this. Do we just stick with the transformer stack and write predicates about it like "Error below State"? Can we do better, and if so how? What would a taxonomy of effect interactions look like?

Monad transformers still considered harmful

You're right. If the only types you ever assign to "programs" are polymorphic in any monad with a suitable constructor set, then I think you get something isomorphic to what I'm advocating.

Why do you need the monad transformer then? Suppose that instead of writing foo :: ErrorT String IO Int, we write foo :: (IOMonad m, ErrorMonad m) => m Int. When do we need to commit to using ErrorT? Well, we need to know it's an ErrorT when we're going to call runErrorT. So why couldn't we just have a function runErrorMonad :: (forall m. (IOMonad m, ErrorMonad m) => m Int) -> (forall m. IOMonad m => m (Either String Int)) that introduces the ErrorT instance behind the scenes? The only difficulty in Haskell seems to be writing such a beast that's polymorphic in its constraints. Seems like I've seen something about that, but I don't know.
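
For what it's worth, such a beast is expressible in GHC with rank-2 types. A sketch using the modern mtl classes (MonadIO and MonadError standing in for the hypothetical IOMonad and ErrorMonad):

    {-# LANGUAGE RankNTypes, FlexibleContexts #-}
    import Control.Monad.Except
    import Control.Monad.IO.Class

    -- Instantiate the polymorphic program at ExceptT String m, then peel
    -- that layer off, leaving the rest of the constraints polymorphic.
    runErrorMonad :: (forall n. (MonadIO n, MonadError String n) => n Int)
                  -> (forall m. MonadIO m => m (Either String Int))
    runErrorMonad prog = runExceptT prog

The ExceptT instance still exists behind the scenes; only the interface hides it.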

Anyway, if you work through this approach, you get something that's quite simple. The main entities of interest are only ever "programs" (I call them "processes" in my language), and combining them is straightforward - you always leave flow control to be decided by the combining code. Actually, if you consider the effect parameters as messages, it looks like a minimalist object-oriented scheme.

Do we just stick with the transformer stack and write predicates about it like "Error below State"?

That sounds horrible :).

Monad transformers considered unavoidable?

Why do you need the monad transformer then? Suppose that instead of writing foo :: ErrorT String IO Int, we write foo :: (IOMonad m, ErrorMonad m) => m Int.

This is what happens automatically, in fact. The operations on ErrorT all come from appropriate typeclasses, so type inference gives you types very much along those lines.

When do we need to commit to using ErrorT? Well, we need to know it's an ErrorT when we're going to call runErrorT. So why couldn't we just have a function runErrorMonad :: (forall m. (IOMonad m, ErrorMonad m) => m Int) -> (forall m. IOMonad m => m (Either String Int)) that introduces the ErrorT instance behind the scenes?

Why bother? The types have near-identical effects, the only difference being that if client code knows about and specifies ErrorT it can use runErrorT but not runErrorMonad. To put it another way, runErrorMonad is just a wrapper around runErrorT and we need the monad transformer somewhere in the concrete regardless of how we handle the interface in the abstract.

Do we just stick with the transformer stack and write predicates about it like "Error below State"?

That sounds horrible :).

Of course it does! But we need at least some way to talk about the interactions, or we're left taking each others' word that some combination really does give us TransactionalState. Either there's some better alternative (which I'd love to hear about!) or we're stuck with the transformer stack model[1] as the core of how we reason about interacting effects and we need to learn how to actually do that.

None of this means that we can't introduce the concept of TransactionalState and say that "Error below State" provides it, of course.

[1] Though not necessarily a static, monad transformer stack - we might locally stack an applicative transformer somewhere, and there's at least one known interaction (delimited continuations and dynamically-scoped variables) where the sensible model involves a dynamic transformer stack.

Why bother...

Because the 'only programs' approach is much easier to understand, IMHO. You never write down a type like (ErrorT String (StateT blah)). You instead write runErrorState and runTransactionalState - the incremental interpreters. Those functions are pretty trivial to write and you can usually tell what they did by just looking at their types. You build runErrorState from runError and runState in whichever order is correct for your need. The way we talk about 'TransactionalState' is to, when you want those semantics, call 'runTransactionalState'.

Regarding your [1], do you have a URL or search terms that will point me to a paper on that? Does Oleg have a paper on that? I'd like to see how that dynamic transformer stack would map to my language. [Edit: I found the Oleg paper 'Delimited Dynamic Binding' that you brought to mind. Let me know if there's something else.]

Monadic non-composition

Definitely, although I prefer Error(T) as an example. The two combinations with StateT yield a difference that's easily appreciable as useful rather than junk: should exceptions leave state untouched, or should they roll it back as if the computation is a transaction?

The issues can be more subtle than seems to be generally appreciated; the "lazy" StateT transformer doesn't really compose with the Error monad because it will transmute into a strict state monad. At the risk of sounding like a broken record, I have three examples in my posts Are continuations really the mother of all monads? and Fun with the Lazy State Monad.

I need to write another article pointing out that these three examples fail to terminate when combined not only with continuations, but also with ErrorT and a wide variety of other monads. The transmutation of the lazy state monad is a more widespread phenomenon than I appreciated when I wrote those articles.
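
To illustrate the phenomenon (a toy example in the spirit of those articles, not taken from them): the lazy state monad can productively thread state through an infinite computation, but the same code under an Either-based transformer must force each step's Right constructor before producing any output, and so diverges.

    import Control.Monad.State.Lazy

    -- An infinite, lazily produced list of naturals threaded through state.
    nats :: State Int [Int]
    nats = do n  <- get
              put (n + 1)
              ns <- nats              -- never "finishes", yet...
              return (n : ns)

    main :: IO ()
    main = print (take 5 (evalState nats 0))   -- [0,1,2,3,4]

    -- At type StateT Int (Either e) [Int] the same definition loops forever:
    -- each bind inspects the Either constructor before yielding anything,
    -- so the state monad has effectively become strict.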

I add scalars and vectors

I add scalars and vectors all the time.

Possibly

Well, C/C++/Pascal/Modula/Ada/ML? scale in the human [and software] sense; that is known to some extent.

And that hints that modular, imperative, impure programming seems to scale best for programming in the large. I think there is a future for (impure) FP languages, but the time-window may have passed. I am not even sure garbage-collected languages scale.

modularity is the key to scalability

Big programs that work well always have a lot of modularity, in my experience. Even in those written in languages without great support for modularity, you see a lot of structure and organization in high quality programs. And in my experience modularity depends a lot on a disciplined approach to managing state. As soon as people start engaging in the free-for-all that many languages permit, things start getting real mysterious real quickly.

We did have serviceable

We did have serviceable IDEs prior to widespread adoption of OOP languages. For a few years early in my career, I coded up more custom WIMP applications than I can remember, minus access to OOP languages (at work; I fiddled with the old Smalltalk/V 286 system at home).

S.

Prior to widespread adoption of OOP languages...

we (mostly) programmed OO in non-OO languages.

Is it such a weakness?

Half the reason for the existence of the IO monad is that, since on-demand evaluation makes it more difficult to know when a side-effect can occur in a lazy language, effects need to be 'sequentialized' explicitly. Yes, it's a weakness

I'm not arguing lazy vs. strict at all, but just a tiny bit of extra syntax around sequentially evaluated code shouldn't be particularly onerous all by itself - given all the other syntactic cruft we put up with to get anything real done in almost any language.

Now, *certain types of apps* might end up using the IO monad A WHOLE LOT, but so what? Just the nature of certain programming problems/problem domains/or whatever.

I guess my point is that in itself, I wouldn't think this by itself would be such a show stopper with regard to pragmatic, industrial use of Haskell or Miranda or whatever other lazy functional languages lurk out there.

Or again, maybe I'm missing something.

S.

It's not the syntax only

Now, *certain types of apps* might end up using the IO monad A WHOLE LOT, but so what? Just the nature of certain programming problems/problem domains/or whatever.

I guess my point is that in itself, I wouldn't think this by itself would be such a show stopper with regard to pragmatic, industrial use of Haskell or Miranda or whatever other lazy functional languages lurk out there.

Laziness, monads and purity all just stand in the way of productive programmers. The fact that, yes, you can make it work in a certain manner doesn't mean it's appealing, the best way, or that you should.

As it stands now, time-to-market just dictates that most programs are best written in Java/C#/C++ with an IDE. There's a niche for FP sure, but it'll take time.

[ Can't believe I am having this discussion again. ]

At the moment, I believe, OO

At the moment, I believe, OO just matches development of visual window-like interfaces very well, since extensible type-safe state combined with imperative concurrent message passing is almost a one-to-one match.

I personally don't think it's a good match, but in any case there's ample evidence that it's not the only good match. I very much prefer a dataflow or FRP-like mechanism for specifying data dependencies and constructing UI control graphs. Conal has shown, with Fran and other projects, that FRP integrates very nicely with UI.

This supports Philippa's argument that Haskell just requires a different approach, and automatically discounting any approaches that fall outside a particular mindset isn't fruitful.

Non-OO UI

Exactly. My go-to approach at the moment would be message-passing concurrency at lower levels, and I imagine I'd end up building dataflow tools on top for higher level work. Building dataflow on top of OO in various ways isn't unheard of in other languages, of course!

What is the evidence that

What is the evidence that FRP works well for GUI applications? What are interesting (i.e. not applications that are trivial in any model) applications written using FRP and why was FRP advantageous?

This has been discussed

This has been discussed various times on LtU, so I'll just mention some keywords: FrTime, Fran, Yampa, and genuinely functional user interfaces. FRP itself is not necessarily the pinnacle; I usually use "FRP" as a shorthand for "declarative/functional change propagation". Bling is a non-FRP approach that I believe mostly fits my criteria.

Jules is asking a much

Jules is asking a much tougher question. He is asking you: Show me the equivalent of Visual Studio written in Fran or Yampa. He wants to see how well it scales to large real world software engineering projects.

The Yampa link is to a

The Yampa link is to a thesis about a 3D game. As a closing thought, if we required Jules' level of proof we would never have switched from assembly to C. Modern UI toolkits are clearly non-composable. If you take these UI elements and add signals, change propagation is automatic and composition becomes much easier. You certainly pay some overhead for this automatic book keeping, but given Yampa was successfully used for a simple 3D game, its performance is usable enough for Visual Studio-like programs, and your problems are not going to be any worse than if you were managing all of this state manually.

Clarification request

I wonder if you could clarify what you mean by modern UI toolkits not being composable. The PME (properties, methods, events) model of components as used by Visual Basic, Delphi's VCL and .NET's WinForms has been composable enough to support commercial markets in third-party components and controls.

Delphi's IDE is itself composed out of components written in Delphi.

I agree they have been

I agree they have been "composable enough" to support those markets, but that's not the same as saying they are composable. Assembly was high-level enough to write plenty of programs prior to the adoption of higher-level languages; Herculean effort can circumvent any number of deficiences.

Certainly existing toolkits have reusable components, but you will not get the same level of composition as you will find in something like Bling. Events require significant duplication and stateful reasoning: you need an event per property to track stateful changes, and mutation everywhere to propagate them. FRP-like solutions unify and automate both of these into a signal abstraction. Surely you can see how that would solve a number of problems and result in more composable, dynamic behaviour.
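
To make "unify into a signal abstraction" concrete, here is a tiny sketch of the idea (a toy Signal type, not Bling's or any FRP library's actual API). A derived property is written once as a function of its inputs; consistency holds by construction, with no per-property event handlers or mutation:

    type Time = Double

    -- A signal is a time-varying value.
    newtype Signal a = Signal { at :: Time -> a }

    instance Functor Signal where
      fmap f (Signal g) = Signal (f . g)

    instance Applicative Signal where
      pure x = Signal (const x)
      Signal f <*> Signal x = Signal (\t -> f t (x t))

    -- A stand-in for a real input device.
    sliderValue :: Signal Double
    sliderValue = Signal (\t -> sin t)

    -- A derived property: always twice the slider, with no event wiring.
    rectWidth :: Signal Double
    rectWidth = (* 2) <$> sliderValue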

I'm not saying that you need

I'm not saying that you need this kind of proof before you should use something. I'm just saying that you need this kind of proof to conclude that FRP works at least as well as mainstream approaches.

I agree that a 3D game is interesting, but games are not GUI applications. The FRP toolkits you mention all seem to be focused on games, or at least on things that are mainly solutions to differential equations. GUI applications don't work like this at all. Also, this 3D game is several orders of magnitude simpler than something like Visual Studio, a word processor or a web browser.

All higher-level UI toolkits

All higher-level UI toolkits are based on lower-level drawing toolkits. If Yampa works at this lower level, then a higher-level toolkit derived from it can retain its composable properties.

Further, Bling and Fruit from "Genuinely Functional UIs" are examples of such higher-level toolkits, so I gave you examples on both levels.

Finally, I agree that VS and browsers are more complicated, but given that FRP-like approaches result in simpler and more composable small programs than the corresponding OO/imperative toolkits, what reason is there to believe that OO/imperative toolkits will suddenly overtake them as program complexity scales up?

I rather think there is enough evidence for a PLT person to conclude that something FRP-like is the way forward here. The only concern to my mind is performance. FRP is to mutation/change propagation as garbage collection is to memory management: its performance will be more than sufficient for the vast majority of programs, but there will be some domains where it won't meet certain requirements (performance, resource use, latency, etc.).

Further, Bling and Fruit

Further, Bling and Fruit from "Genuinely Functional UIs" are examples of such higher-level toolkits, so I gave you examples on both levels.

Perhaps. But where are the applications? In Genuinely Functional UIs I see an example application consisting of a button which, when clicked, shows a message. Hardly a real world application. The other example is an ultra-bare-bones Pong *game*.

Bling is already much more interesting from a GUI application perspective with its declarative constraints. But it too seems to be focused on physics and animation.

I rather think there is enough evidence for a PLT person to conclude that something FRP-like is the way forward here.

Yes! But please don't focus on floating point only. Animation, physics and integration are all floating-point work and play a very tiny role, if any, in most GUI apps. Performance of GUI glue code is also a non-problem in GUI apps. The research on Fudgets was an excellent start.

Several pieces of production

Several pieces of production software were developed with Flapjax, an optimized** FRP embedding for web apps.

I agree with Jules' assessment that developing a framework is insufficient for concluding success. Promise, yes, which is sufficient for a lot of research questions. However, for many other conclusions, an evaluation of writing software in the large helps (and presents a useful feedback loop for the idea). This was also one of the takeaways of Philip Wadler's paper linked in this same thread.

**I'd argue Bling is also optimized as it is an interface to atypically used hardware for UIs.

Odd comment on Bling's home page

"Students, artists, researchers, and hobbyists will also find Bling useful as a tool for quickly expressing ideas or visualizations. Bling's APIs and constructs are optimized for the fast programming of throw away code as opposed to the careful programming of production code."

What's that all about?

S.

Good question. Basically, I

Good question. Basically, I was trying to go after the Processing.org crowd as a potential user base for Bling. That hasn't worked out, I think. I'm sort of stumped: what I've written is a weird UI programming library for my own use (and any interns that I force to use it), without a real plan (or means) for attracting a user base. In my new role at MSRA, my research has moved away from UI/graphics languages and into the cloud (so to speak).

I keep forgetting about

I keep forgetting about Flapjax, but yes that's also an excellent example!

As for concluding success, I suppose it depends what is meant. Something FRP-like is a better primitive on which to build a UI framework, so I would say computer scientists have successfully developed a better foundation. All that remains is a lot of elbow grease to develop a fully fleshed out framework, and no doubt some refinements to make it as pleasant as possible. So it's certainly not successful in the elbow-grease department, but that wasn't my point: I merely pointed out that OO is not the only "good match" for UIs, and in fact is not really a good conceptual match at all IMHO, as I prefer an FRP-like foundation.

OO versus FP {GUIs, IDEs}

At the moment, I believe, OO just matches development of visual window-like interfaces very well, since extensible type-safe state combined with imperative concurrent message passing is almost a one-to-one match.

I don't know how true that really is. I'm not a GUI guru, but in terms of expressing myself, Erlang/TK is by far my favorite of those I have tried to date. The bigger issue probably is that functional programmers tend not to write GUIs on a daily basis, and that when they do write GUIs they are somewhat more prone to reinventing the wheel and creating their own GUI library.

Also, GUI libraries are often treated as second-class citizens in the FP toolchain; languages such as Python and Smalltalk have long shipped with stable GUI libraries, and there are a number of well-maintained C and C++ GUI libraries that are relatively easy to get working. By contrast, functional GUI libraries are often difficult to get working, and are frequently abandoned.

FP is bad at IDE development

I don't think anybody really knows whether this assertion is true or not. I certainly don't. Although Emacs Lisp isn't particularly functional, writing GNU Emacs extensions is a heck of a lot easier than writing Visual Studio/Eclipse/NetBeans plugins.

I think the bigger issue is that writing a really good IDE is remarkably difficult in any language, not least because of the need to integrate with third party compilers, debuggers, etc., which tend to have very large and poorly documented APIs, if they even have a programmatic interface at all.

There are plenty of

There are plenty of criticisms; it's just that (in my experience) functional programming advocates sometimes focus on different priorities than working programmers.

In my limited experience with ML, the biggest issues I saw were pragmatic and human factors issues:

1. Higher-order functions are hard to wrap your brain around. I'd speculate wildly that only a small fraction of programmers can do so effectively and feel comfortable doing it, and even then, it is not easy. Thus languages like ML tend to appeal to programmers who are two sigmas away from the mean. So I find it hard to imagine ML gaining mainstream acceptance. And there are powerful network effects which penalize languages that are hard for many/most programmers to learn; this affects tool development, and it also affects collaborative projects (if you are the only programmer on your team who feels comfortable with ML, what do you think are the odds that the team is going to adopt ML? if your boss is worried that when you leave he will have difficulty hiring another strong ML programmer, what do you think his preferences on programming language are going to be?).

2. Integration with the underlying platform is vastly inferior, and programming tools are often inferior too. If I want to write systems code that interacts with specific low-level OS APIs, C offers much better support. (Something as simple as pipe+fork+dup etc. becomes a headache in ML.) Similarly for integration with libraries, and for support from IDEs, debuggers, profilers, and other tools for developers. Languages like Java and C and C++ have tons of libraries; ML is a bit behind the curve. Of course this is not "fundamental", but you asked why adoption in corporate America is so low, and for adoption, these kinds of factors are critical.

I'd hypothesize that PL researchers may have a different set of priorities than a working programmer; e.g., many PL researchers might place a higher priority on mathematical elegance and formal semantics, whereas many working programmers might place a higher priority on strong platform support, ease of learning, and widespread adoption. This is understandable and I don't think it reflects some kind of moral failure on either side's part; it just represents the differing roles of the two communities.

A parting comment, which is probably unfair: ML is great for writing compilers, which might not be a huge surprise, given that it comes out of a community of researchers who spend a lot of time writing compilers. For writing some other kinds of systems code, though, the experience isn't always the same.

Anyway, those are personal, unscientific opinions, possibly unfair or unsupported by the evidence.

Higher-order functions are

Higher-order functions are hard to wrap your brain around.

I think C# and LINQ have disproven this common misconception. Higher-order functions are easier to work with than objects or interfaces, which is what you'd have to replace those parameters with. Compare IComparer<T> with Func<T, T, bool>.

Integration with the underlying platform is vastly inferior, and programming tools are often inferior too.

This I agree with. Tools are the domain of industry though, so it's a catch-22. Haskell seems to be getting lots of work here though, and F# is getting MS's full support and integration into Visual Studio, so there's progress being made.

Evidence? Just because

Evidence? Just because higher-order functions are more accessible in LINQ/C# doesn't mean they are easier for developers to understand. In fact, where LINQ really wins is when it can hide the higher-order functions under some nice from/where/select syntax. There are other cases in UI programming (events, signals) where getting rid of explicit higher-order functions is a net win.

The entire LINQ framework

The entire LINQ framework and the accolades it's been getting are my evidence. The query syntax makes some things clearer, and some things harder.

As for "easier to understand", lambdas are certainly easier than having to define the equivalent object or interface explicitly. I will be absolutely astonished if anyone thinks that implementing an IComparer or IEqualityComparer is easier than just passing in the corresponding lambda, particularly for beginners.

Humans can only deal with so much abstraction, but my argument is that higher-order functions as used in .NET provide no more abstraction than .NET developers were already capable of handling (they were using interfaces and objects instead of lambdas). Lambdas just brought economy of expression, not greater abstraction.

Fair enough. I don't equate

Fair enough. I don't equate higher-order functions with lambdas; I would also include delegates in the mix as being complicated. So in my view, lambdas make delegates syntactically easier to express, but don't really change the complexity of the language (i.e., they don't make things easier).

I'm not sure about the "accolades" LINQ has been getting; it could just be hype and buzz, and it's hard to tell how successful it has been. Although I agree it's useful, I don't use it enough in my mostly UI-oriented C# programs to say it is a must-have. I do use the companion expression tree API a lot, however, but I'm probably in the minority on that :)

I am much more partial to declarative solutions for comparing things, which shouldn't be a surprise to anyone who knows my work. You can, for example, describe what property you want to sort on (say in LINQ we add SQL's "sort by"), or we can add annotations to a class definition on what properties should be used in sorting (and use object views to support multiple sorting schemes).

Fair enough. I don't equate

Fair enough. I don't equate higher-order functions with lambdas; I would also include delegates in the mix as being complicated. So in my view, lambdas make delegates syntactically easier to express, but don't really change the complexity of the language (i.e., they don't make things easier).

I'll return to my example of IEqualityComparer. Which is easier:

public bool Compare<T>(T lhs, T rhs, IEqualityComparer<T> compare)
{
  return compare.Equals(lhs, rhs);
}
public class CompareInt : IEqualityComparer<int>
{
  public bool Equals(int lhs, int rhs) { return lhs == rhs; }
  public int GetHashCode(int x) { return x; } // required by the interface
}
...
Compare(0, 2, new CompareInt());

Or this:

public bool Compare<T>(T lhs, T rhs, Func<T, T, bool> compare)
{
  return compare(lhs, rhs);
}
...
Compare(0, 2, (lhs, rhs) => lhs == rhs);

I think the latter is considerably easier because it's succinct, directly expresses the intent and is entirely local. There are plenty of similar single-method interfaces in the .NET class libraries, and defining class instances starts to get tedious.

I also think lambdas/delegates make the language more complex, since you now have another concept beyond classes and interfaces to learn, but they make expressing simple programs simpler, and what are complex programs but a composition of simpler programs? So I'm afraid I have to disagree with every point in your paragraph. :-)

In any case, the original point was that higher-order functions (HOF) are somehow "too advanced" for most programmers, but any moderately usable collections library already has higher-order operations. Unfortunately, without HOF they must use clunky means to express them.

I don't use it very much in my mostly UI-oriented C# programs to say it is a must have.

The LINQ interface will soon get much more use via the Rx framework. You'll also note that every new MS product now primarily uses delegates instead of interfaces for callbacks (see parallel extensions, code contracts, Rx, etc.). They reserve interfaces for a more complex protocol, like Rx's IObserver whose semantics is defined by multiple operations. Even then, they provide a static function to construct an IObserver using 3 delegates.

Re: declarative solutions, I'm not against them, but annotating fields is too inflexible, and adding "sort by" is just syntactic sugar over the OrderBy combinator. It's not a huge win in this case, which is why MS left it out.

The typing (keyboard kind)

The typing (keyboard kind) is greater with delegates, but I argue that the complexity is the same. I also argue that the declarative "select * in list sort by some_property ascending" is probably clearer still than solutions that involve callbacks. I'm not disagreeing with you that closures are better than delegates for expressing callbacks, just that we can get rid of callbacks also.

I'm not so bullish on Rx, as discussed in other threads I believe there are better ways of managing event stream complexity.

I still wish C# supported anonymous inner classes though, it would really help out in the case where a real interface is needed.

I'm not disagreeing with you

I'm not disagreeing with you that closures are better than delegates for expressing callbacks

I'm confused. Delegates are closures. Did you mean "interfaces"?

just that we can get rid of callbacks also.

I'm curious how you would do so and still retain the flexibility of composition every program needs. You can't introduce a new keyword every time you need to add a new operation to the language. Type classes allow implicit resolution of these sorts of constraints, but you still need explicit overloads or you get duplication, i.e. Haskell's sort/sortBy. There are plenty of proposals to integrate explicit with implicit overloading in order to solve this problem, but explicit overloading isn't much simpler than providing a lambda, and as my example showed, a lambda is sometimes just easier because it's a local and direct expression of intent.
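
To ground the sort/sortBy point, a small Haskell sketch (mine, not from the thread) of why the explicit-overload version is often no burden, since the ordering is a local, direct argument:

import Data.List (sortBy)
import Data.Ord (comparing)

-- sortBy takes the comparison as an ordinary higher-order argument.
byAge :: [(String, Int)] -> [(String, Int)]
byAge = sortBy (comparing snd)

-- Plain sort would use the default Ord instance; sortBy lets the call
-- site state its intent directly, just like passing a lambda in C#.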

I'd like to turn this around and ask, where is the evidence that novice or experienced programmers find higher-order functions problematic or confusing? I personally think it's always been a myth, and am curious where it started. Programmers have been using HOF since qsort was added to C's standard library.

Re: anonymous inner classes, I don't have much use for them, but one thing C# should have taken from Java but didn't is static interface members. The factory pattern is just no substitute.

I think C# and LINQ have

I think C# and LINQ have disproven this common misconception. Higher-order functions are easier to work with than objects or interfaces, which is what you'd have to replace those parameters with. Compare IComparer<T> with Func<T, T, bool>.

Just because the majority of C# programmers can use simple anonymous functions to work on collections doesn't make them capable of adopting a functional programming language full time.

You don't need to do

You don't need to do anything much more complicated to be an extremely proficient user of libraries and get things done in an FPL, especially an impure one. It's true that the libraries have to exist, and that they will contain more complex idioms. But this is true wherever you go. The majority of C# programmers don't actually write anything that requires such sophistication, and working in a functional programming language with suitable libraries full time will resemble working in a DSL full time.

Not so sure about programmer (in)capabilities

In the earlier C++ days, I subscribed to "C++ Report" and got a kick out of reading all the "language lawyer" articles about templates and such.

I'd bet good money that most C++ programmers can't even follow the language design/use nuances in most of those old articles - but SO WHAT? There are tons of average programmers happily (or not) using C++ and templates and operator overloading and on and on everyday.

In fact, it's arguable that since the dawn of the early industrial age, the vast, vast majority of tool users of any kind have not had a particularly deep or thorough understanding of their tools.

Again, big whoop? Tool innovation, tool use, productivity gains and overall progress happened anyway.

So why can't programmers use some functional language to get their boring day-to-day work done in a typically mediocre way with only a rudimentary understanding of their loverly *functional language* tools?

S.

So why can't programmers

So why can't programmers use some functional language to get their boring day-to-day work done in a typically mediocre way with only a rudimentary understanding of their loverly *functional language* tools?

A C++ programmer who doesn't define their own templates or operator overloads isn't changing the way they think about their problem domain. They're still writing imperative or OO-style code, just with fewer tools in the box. Transitioning to functional programming is more than just swapping out the tools.

Transitioning, Learning, etc.

I think you are completely right, so long as we recall that learning the imperative programming paradigm in the first place is (was) not a cakewalk for the vast majority of developers: classes/objects/virtual member functions/interfaces or pure virtual classes/etc., pointers, memory management, trees/linked lists/hash tables/other data structures, recursion, assignment as copy vs. assignment by reference, bitwise arithmetic, "computer-ese" numeric type systems in general, and on and on an on.

I'm sure the profs on LtU can think of many more mountains that beginners have to climb just to become reasonably proficient with Pascal, C, C++, Java or whatever.

Coming back to my search for reasoned critiques of FP, I am actually presuming, for real, that FP has something very concrete to offer most, if not all, "average working programmers." Just from software development's brief history, it seems reasonable that a transition to FP may be too hard for some. Transitioning can be hard.

But presuming FP has so much to offer at least some very significant subset of working programmers, one would think:

  • many would undertake the transition (as the multi-decade transition to OOP, structured programming before that, etc.)
  • most beginners would naturally gravitate to FP tools early in their education and careers.
  • companies would naturally transition to FP for the same reasons as their earlier transitions to new development tools - higher productivity and more reliable programs (add your own favorite application and development life cycle quality criteria).

Or am I wrong in assuming that FP clearly yields real world benefits both to the development process and to the resulting applications in most, if not all, common problem domains? If I'm wrong, why do people care about FP at all?

For now, I'm assuming these benefits are real, whether still potential or already actual. Whether FP delivers these goods via Haskell, some ML (SML, OCaml, F# ...), Oz, Clean, Scala, Scheme or some other language yet to "get it right" is the open question to me.

I'm particularly interested in that last option - some language, or tool set, or set of language features required to "get it right."

This motivates my seeking out of critiques: of FP in general, of particular FP languages, of how applications perform, of inadequate language level access to machine resources or libraries, of inadequate tool sets or accompanying development methodologies for larger projects and teams, negative experience reports and so on.

My inquiring mind still seeks the route to a general purpose software development world made better by FP.

S.

In my limited experience

Or am I wrong in assuming that FP clearly yields real world benefits both to the development process and to the resulting applications in most, if not all, common problem domains?

In my limited experience, you're wrong in that assumption. Or at least that the real world benefits don't translate directly to productivity gains for most problem domains.

I think you are completely right, so long as we recall that learning the imperative programming paradigm in the first place is (was) not a cakewalk for the vast majority of developers

Indeed. Now, is learning functional programming harder than that? I tend to think it is. I mean, kids can learn and understand lists of tasks, or the set of objects necessary to complete a task. When does the average person learn the mathematical concept of a function? With the current state of education in most places, isn't it never?

I'm not quite sure how to alleviate these two problems, but as long as most companies don't gain much benefit and bear the burden of long transition/developer spin-up times, they're not going to make that transition.

In my limited experience,

In my limited experience, you're wrong in that assumption. Or at least that the real world benefits don't translate directly to productivity gains for most problem domains.

Some core principles of functional programming would benefit any domain in which you'd want to use unit testing. In other words, just about every domain can benefit from these principles. At the very least "small reusable functions", and "functions depend only on their parameters". I think this is uncontroversial.

I personally think immutability and higher-order functions are also beneficial, but those tend to be more controversial, albeit less so in our increasingly concurrent architectures.

Sure, these are all surefire benefits.

Sure, these are all surefire benefits.

But these can be (and largely have been) adopted without transitioning to a functional language and incurring those downsides. I might be assuming too much in scottmcl's post, but looking at the transition to an FPL, I don't think you get more than the status quo, where these principles are already used in mainstream languages.

Real world benefits are irrelevant

as long as most companies don't gain much benefit and have the burden of long transition/developer spin-up times they're not going to make that transition.

Honestly, whether or not they'd gain much benefit doesn't really matter; it's the transition time that's significant. Small-scale incentives in the corporate world usually have a short-term bias, meaning that (all else equal) if something is a net loss for a nontrivial period of time it won't be adopted even if it offers substantial gains afterwards.

Overcoming that effect requires external pressure of some sort. FP is still "weird" to many mainstream programmers, so there's little bottom-up pressure, and there's not much in the way of glamor/hype/marketing for FP that higher-ups would be aware of, so no top-down pressure.

In other words, programming language adoption--like many other things--comes down primarily to social factors. Objective benefits to using a language are a tangential issue at most.

The tyranny of the predominant paradigm

This is an old debate, but...

Back when I learned programming, it felt like ages until I grokked pointers, i.e., nested state. Recursion and first-class functions, OTOH, were totally obvious to me. Given that the majority of today's programmers deal with state and pointers on a daily basis, I see only 3 possible explanations:

1.) I'm weirder than thou.

2.) Most arguments just display experience bias and/or prejudices about the relative complexity of these concepts.

3.) Most programmers haven't actually grokked state and pointers, they just hack their way through.

I tend to believe in 2) - it's all about exposure.

Design and Tools

1. Higher-order functions are hard to wrap your brain around. I'd speculate wildly that only a small fraction of programmers can do so effectively and feel comfortable doing it, and even then, it is not easy. Thus languages like ML tend to appeal to programmers who are two sigmas away from the mean. So I find it hard to imagine ML gaining mainstream acceptance. And there are powerful network effects which penalize languages that are hard for many/most programmers to learn; this affects tool development, and it also affects collaborative projects (if you are the only programmer on your team who feels comfortable with ML, what do you think are the odds that the team is going to adopt ML? if your boss is worried that when you leave he will have difficulty hiring another strong ML programmer, what do you think his preferences on programming language are going to be?).

I agree with the two sigma comment; not everyone can build a parser combinator library from scratch. The question is: how bad is that?

I.e., if 95% of applications developed are of limited functionality, where libraries are just used and stitched together, then why not leave the designing of the libraries to the 5% who can? And then, those 95% can learn, and possibly it'll turn out that with enough experience a large number of them will become proficient at it, just like beginner OO programmers evolve into experts.

There is also the possibility that the 5% who currently can is a low number only because these are the ones who pick it up easily, and adoption is not widespread enough for those who take more time to ever grow into experts at it.

And given that, a functional language might be an ok-ish alternative, since I think the paradigm simply allows more complex libraries to be built more easily than any other.

Amen, brother

Conversely, we should recognize that the same 95% who can't make a DFA state machine from a Lex-style specification might very well be able to code up actuarial analysis programs or bioinformatics data-munging applications that the 5% of typical "system software" developers could never accomplish.

What's fortunate is that it only takes 5% of the developers to be the system software library builders to enable the other 95% to build the actually useful, user- and domain-expertise-centric applications that drive the entire enterprise of software development in the first place.

S.

Can't resist

And there are powerful network effects which penalize languages that are hard for many/most programmers to learn...

I completely agree. But this raises the question of how we explain the success of C++...

same as java

money behind marketing.

I think that's far too

I think that's far too facile. C was a simple language, with a systems programming reputation behind it. Its main competitor, Pascal, was seen as a children's toy (and without vendor extensions, it was), so all the cool kids wanted to move to C, type safety etc. be damned.

C++ was a path to a kind of object orientation, the new fad that formalized a handful of what we would now call design patterns that were already in use in C, one way or another. People transitioned because they could write "C with classes", i.e. take or leave as much of the new language as they wanted. But this optional nature, and deep integration with the C model of compilation, left C++ with deep structural problems, so that when Java came along and basically made the OO unoptional and abandoned any pretense of C compatibility outside basic statement and expression syntax, it was readily adopted. The timing worked out well for Java too, as machines were just starting to get large enough for the space inefficiencies of GC not to matter as much, and the (ultimately unfounded) hype of applets rode the web wave.

Hard to learn

There's a pretty big difference between a language that is "hard to learn" in the sense of knowing all its features and/or passing the language-lawyer bar, vs. a language that is hard to learn in the sense that most find it a head-scratching exercise to translate the pseudo-code from their Algorithms 101 book into something they can use in said language.

Exactly!

Exactly!

Why?

C is the closest thing to a portable, allows-a-lot-of-hacks, reliable, fast abstract programming language. It just beats the pants off all other systems languages when it comes to performance. You can go down to the bare metal, and it seems to sit at just the right level of abstraction to be able to program 99% of all applications reasonably easily.

The thing with C is that there are no real reasons _not_ to program in it.

C++ solves a lot of problems with C, has tight integration with it, introduces new problems, but goes well with the OO and Visual Applications paradigms.

Most of the pool of C programmers naturally advanced to C++. Seems normal in hindsight.

C++ is the best C we've got...

Or so I used to counsel my developer employees :-) The ability to pick and choose some (or none) of its various features in one's coding is a nice virtue of the language.

My big problem with C or C++ as the best design for a general purpose programming language is that the lack of GC greatly hinders the family of cross-module abstractions where responsibility for data liveness and cleanup is not easily pegged to a particular module, as well as a whole host of other well-known, lovely programming idioms that beg for GC.

Have any LTU folks taken a serious look at D, a language in the spirit of C++ that supports GC?

S.

Obligatory Superfluous Comment

The ability to pick and choose some (or none) of its various features in one's coding is a nice virtue of the language.

And also its biggest problem. (No experience with D, btw.)

Minor point

Why do these usually require and take recourse to an "FFI," instead of directly supporting machine-level data representations and multiple calling conventions (such as Pascal and C do)?

Because that turns out to be difficult. I used to say "straightforward". Having done it in BitC I no longer think so. There are interactions between low-level representation and other features that are just nasty.

And researchers aren't paid to do work where the devil is in the details. If it can't be demonstrated in concept form in 18 months it's not fundable research these days. And once you get your publications out of it, continuing to work on something for the sake of "completion" is career-limiting behavior.

We (i.e. all of computer science) have a serious tech transfer gap.

This is ironic, because the

This is ironic, because the big problem in software engineering is always the devil in the details. The whole reason we need better tools and languages is so that they work better for various kinds of detail devils.

What about the CLR and Scala 2.8?

Maybe not a trivial compilation and run-time strategy, but these, IIRC, generate new code for generic data structures and generic functions over "non-pointer" data types - scalars, doubles, even small "value semantics" structures (at least in the CLR, I believe).

Then one can safely use stack frame and heap value structure descriptors for perfect garbage collection, removing all tags from run-time data values excepting the normal headers for heap-allocated objects (such as good old malloc also uses). Yes, there might be some exceptional circumstances I'm not considering :-)

The stack frame descriptors address the problem of potentially different calling conventions over function abstraction boundaries, helping the gc along its merry way during a stack scan. Therein also likely lies a solution to pointers into the middle of data structures.

Per another recent inquiry of mine, it might well be feasible to treat calling convention as yet another function type parameter, which in turn would be accounted for by all HOF, generating new versions of, say "foldr" and "map", as necessary. Another alternative is to auto-wrap alternative calling conventions in one, or just a few, "universal calling convention" stubs, this again allowing for alternative calling conventions in HOF, if with a slight loss in speed (depending upon the cost of the generated stub function relative to the manifestly passed/called function).

These directions may lead to non-trivial, trippy program analysis, and more complex, trippy run times, but it seems like at least a mostly satisfactory solution to more seamless integration with the OS and external libraries might be on the horizon - given the will of the language designers and implementers.

S.

How enterprises use functional languages, and why they don't

A critique of functional programming appears in my paper How enterprises use functional languages, and why they don't.

Thank you!

Thank you!

Ordinary programmers are smart enough, with the right context.

I remember how much trouble my college classmates had with Scheme, and the idea, new to them at the time, of first-class functions. As far as they were concerned, functions as values that could be stored in variables, returned from other functions, passed as arguments, etc., violated something profound in their universe of thought.

Now, those same people managed to quickly master the idea of OOP, in which functions-and-other-data are aggregated into "objects" and passed around, stored, returned, etc. It's actually more complex and constraining than the idea of first-class functions, but somehow, as long as the functions are subordinate to some defined type, they don't have a problem with the idea of the first-class object which contains functions.

It got weirder though. Even after they had the hang of specializing virtual methods for different types in C++, and thought overloaded operators were very cool and handy, they hit a conceptual wall when an instructor tried to introduce generic functions.

They got inheritance. With a little work they got multiple inheritance. Once they had those, they got interface classes, and mixins, and virtual methods. But they could not figure out extensible records, property lists, and duck typing, which are what all of the above constrain and approximate.

They had no problem with memorizing 23 levels of operator precedence in C, and mastered the highly context-sensitive syntax of Perl with just a few complaints, but were baffled when trying to express the same abstract syntax trees with parentheses. "Normal" brains apparently deal with complicated syntax more easily than with visually apparent structure.

I don't understand where the psychological barriers are, exactly. But what I saw as an undergrad, I've continued to see working in industry. You take some idea, some abstraction, and people don't get it. You take the same idea, wrap it up in some kind of model that gives them a comfortable way to think about it, possibly along with some other ideas, and suddenly it makes sense to them.

These mental "wrappers" usually constrain the underlying idea somewhat. It's as though some rules for using it, even if the rules limit its usage and aren't intrinsic or necessary to the idea, give people some traction for reasoning about and working with it.

Anyway, the effect is dramatic. OO wraps the profound idea of first-class functions, and suddenly the everyman programmer is transformed into a genius who masters working with the profound idea, even if he still doesn't "get" it or know what to do with it when you try to explain the idea itself. Syntax wraps the idea of structure, and suddenly the everyman programmer is a genius who easily expresses and parses complicated expressions whose structure visually represented he doesn't even understand. Virtual methods wrap and constrain the idea of generic functions, and suddenly the everyman programmer is a genius who gets how to use them. And so on.

What kind of idea the wrapper has to be is completely mysterious; diverse fits of inspiration have given us all our major industry-adopted programming paradigms, as well as thousands that never went anywhere important. If I could figure out what mental models people need to get particular PL ideas, what ways of thinking about something would be comfortable for most of the "average programmers" out there, then I'd be a supergenius rock-star programming language designer.

If you can figure out how to wrap up functional programming ideas in some concept or framework that they can think about comfortably, ordinary programmers will become geniuses who will master it.

Just a thought: nouns and verbs

Re conceptual frameworks (bear with me a moment, I really am making a point about programming languages):

There is a school of thought concerning language, to the effect that nouns are good and verbs are nasty. My own encounters with this are recent, perhaps because I've only been noticing allusions as they go by, rather than looking for historical precedents. A few years ago a French author wrote a novel with no verbs, and said some pretty bad things about verbs in the preface (as the Wikipedia article notes). More surprising to me was a passage written by a prominent conlanger just last year waxing eloquent on verbs bad, nouns good (here; a search for either "ugly" or "glorious" takes you to the right paragraph) - granted, he was praising a conlang with no verbs, but that doesn't seem to weaken my point. BTW, one of his conlangs has 57 noun cases ("in honor of the ketchup co."), so, as you say, it's not about complexity.

I don't belong to that school of thought. I like verbs (even though I'm unfond of complicated systems of conjugation). My conlangs tend toward explorations of the nature of verbs, and the nature of coercions back and forth between verbs and nouns. But I also like first-class functions and, to be perfectly honest, kind of dislike object-orientation. So could it be that most people find nouns easier to deal with, and that's why they find OOP easier to deal with? In which case, the way to make FP work for the ordinary programmer is to noun-ify it. (My backup theory is that the trouble is the way FP is being presented, which has been shaped by verb-lovers and therefore doesn't work well for noun-lovers.)

Please back away from the lexical item, sir

So could it be that most people find nouns easier to deal with, and that's why they find OOP easier to deal with? In which case, the way to make FP work for the ordinary programmer is to noun-ify it.

Be careful John, or you and LtU will end up as whipping boys on Language Log. ;-)

Ironically

Ironically, I would argue that object-oriented programming is much heavier on verbs than functional programming. The paradigm notwithstanding, the meat of any code usually is method/function calls. And there, OO uses imperative verbs where FP prefers declarative nouns. I.e., FP already is "nounified".

For example, various OO style guides require all getter kinds of methods be named like "GetEnvironment", describing the imperative to "go and retrieve the environment", whereas in the functional approach the respective function is typically named "environment", because the environment "just is", and the function denotes it.
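
As a tiny Haskell illustration (the types here are invented for the example, not taken from any style guide):

-- In a functional language the "getter" is just a noun: a function
-- that denotes the value.
data Environment = Environment { homeDir :: FilePath }

data App = App { environment :: Environment }

-- Usage reads declaratively: `environment app` is "the environment of
-- app", where an OO style guide would prescribe `app.GetEnvironment()`.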

I think this isn't really

I think this isn't really true. Getters and setters are generally used to avoid the versioning and encapsulation pitfalls of public fields; they are just a manual implementation of what other OO languages call properties. To describe properties as verb-oriented would be pretty strained, I think.

Yet

Yet it's what I see used. In any case, that was just one random example. (I could now rant on about how the idea - proposed by certain OO style guides - of having different naming conventions for plain getters and for "computational attributes" defeats the whole purpose of having getters as an abstraction...)

The idea of computational

The idea of computational attributes being different from plain field-backed getters usually comes down to computational cost concerns, I think. If the computed attribute is a trivial function of state, there's no harm in exposing it as a property / naming it with a getter convention. But if it involves significant computation, such that the client of the class ought to be aware of the fact before using the attribute as if it were a field, it is not really breaking the abstraction to make a distinction - rather, it's a means of advertising the efficiency of the abstraction in the API. See section "Properties vs. Methods" in Property Usage Guidelines in the MS .NET reference for an example rationale.

This is another thing that comes up in functional programming; particularly with lazy languages, it can be hard to see the space and time complexities and where they will be spent, as the point of the definition or invocation of the abstraction isn't necessarily the same as the point of use. The section "2.2.4 Totally Inappropriate Data Structures" in Worse is Better is also relevant here, where (I suspect) Lisp's homoiconicity works against it by not making much syntactic distinction between fundamentally differently performing abstractions.
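
A standard Haskell instance of this (my example, not one from the Worse is Better paper): two sums that read almost identically but have very different space behaviour, precisely because laziness keeps the cost out of the syntax:

import Data.List (foldl')

sumLazy, sumStrict :: [Int] -> Int
sumLazy   = foldl  (+) 0   -- builds an O(n) chain of thunks before collapsing it
sumStrict = foldl' (+) 0   -- forces the accumulator at each step, O(1) space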

conceptual models

I notice the same syndrome in myself: when I see an abstract idea, I can't grasp the implications until I have made a concrete example.

When I was working on an equation to calculate how the recombination ratio increases with distance along the chromosome (given the recombination ratio between a and b, which are one unit apart), as soon as I'd start to explain what I was doing so I could ask for help, people would get an overwhelmed expression on their faces and quickly explain that they didn't know much about DNA. I then had the idea of reframing the problem to be about dice:
You have N dice; if, when you roll them, you end up with an odd number of '1's facing up, then you get a point; otherwise you get nothing. For any given number of dice (and for any given number of sides), what percentage of the time will you get points?
This was much more successful; people were able to very quickly understand what I was trying to do and were willing to try and figure it out.
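
(For reference - my addition, not the original commenter's - the dice version has a tidy closed form. With $N$ dice each showing a '1' with probability $p = 1/s$, expanding $((1-p)+p)^N$ and $((1-p)-p)^N$ and subtracting cancels the even terms of the binomial sum, so

$$P(\text{odd number of 1s}) = \frac{1 - (1 - 2p)^N}{2}.$$)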

I have a fun logic puzzle for you.
There are four cards; each has a letter on one side and a number on the other. You were told by a friend that if a card has a K on one side then it always has a 2 on the other side. You see these four cards:
A 2 K 7
Which cards do you have to flip over to check whether your friend was mistaken?

You're at a party to make sure that there isn't any underage drinking going on. You see:
an adult with a drink you can't identify,
someone of unknown age with a glass of water,
a teenager with a drink you can't identify,
and someone of unknown age with a cup of beer in their hand.

Whose ID do you have to check?

Doing it the wrong way

Which cards do you have to flip over to check whether your friend was mistaken?

Unlike the complex semantics of "if" in natural language, material implication in formal logic requires only that the consequent not be false when the antecedent is true. If the antecedent is false, the implication is true either way.

Thus, the hypothesis would be falsified only if the K has a non-2 on the other side, or if the 7 has a K on the other side, so just those cards must be checked.

Whose ID do you have to check?

Everyone's, including the alleged water-drinker. The legal consequences for enabling underage drinking are too severe to assume that someone who looks legal actually is, or that something that looks like water isn't vodka. Better to play it safe.

;)

Hiding stacks

Because the 'only programs' approach is much easier to understand, IMHO. You never write down a type like (ErrorT String (StateT blah)).

But I only ever write that type somewhere that cares about the implementation anyway. And personally I don't bother because it can be inferred from the implementation - or I bother once, to define a newtype for the monad I just implemented. Give me partial type annotations and I wouldn't even have to do that, I'd just write something like this:

newtype MyMonad a = _transformerStack a
newtype MyMonadResult a = _result a

runMyMonad :: {- parameters -> -} MyMonad a -> MyMonadResult a
runMyMonad {- parameters -} = runLayer . runAnotherLayer . etc

We can abstract away from it all we want, but there's really no point in trying to conceal it at the point of implementation.

You build runErrorState from runError and runState in whichever order is correct for your need. The way we talk about 'TransactionalState' is, when you want those semantics, to call 'runTransactionalState'.

Yeah, so now the notion of TransactionalState has no type-level description (as opposed to the label, TransactionalState) and lives entirely in runError . runState initialState. It's ducking the question, and to the extent it's not, we're still left talking about the transformer stack via its term-level representation. What happens when I want to know if some other interpreter fulfills the criteria for TransactionalState? We're stuck watching them write the typeclass instance; there's no way to know if it actually does the right thing.

To put this another way: I want to talk about the semantics of effect interactions, what language(s) do I have to do this in? So far we've got the effect stack and that's it. Hiding the effect stack from the type system thus weakens the type system's ability to reason about effects.
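
To make the term-level point concrete, a minimal Haskell sketch (mine; it uses ExceptT, the modern replacement for ErrorT, and the names NonTx/Tx are invented) showing that the stacking order is exactly where the transactional semantics live:

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)
import Control.Monad.Trans.State (StateT, runStateT, put)

-- Errors inside, state outside: state survives a failure.
type NonTx a = ExceptT String (StateT Int IO) a

runNonTx :: NonTx a -> Int -> IO (Either String a, Int)
runNonTx m s = runStateT (runExceptT m) s

-- State inside, errors outside: a failure rolls the state back.
type Tx a = StateT Int (ExceptT String IO) a

runTx :: Tx a -> Int -> IO (Either String (a, Int))
runTx m s = runExceptT (runStateT m s)

demoNonTx :: NonTx ()
demoNonTx = lift (put 42) >> throwE "boom"
-- runNonTx demoNonTx 0 ==> (Left "boom", 42): the write survived

demoTx :: Tx ()
demoTx = put 42 >> lift (throwE "boom")
-- runTx demoTx 0 ==> Left "boom": the write was discarded

The only thing distinguishing the two is the stacking order, which is precisely the complaint above: 'transactional' has no description other than the stack itself.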

Wow.

Some of the things that happened in this thread are exactly why we have a policies document (and a page about the spirit of LtU). Please, have a look.

Somewhat incredibly, some significant points and questions managed to surface, which will hopefully be turned into independent threads, or even homepage items.

Honest Question

I've dabbled a bit in functional and functional-style programming, but there's one point that I don't believe has been answered in the points above: what happens if you need to expand a low-level operation in the bowels of your application to do something it didn't need to do before, without having to thread new arguments through the chain?

For example, I recently added some debug stats gathering to our low-level vector operation, to report on how many times certain operations were called. This was easy to add and even to make thread-safe as a temp hack. How would you perform this kind of operation in a 100% pure language?

This is at the bottom layer of a 100,000-line app, written in a mostly functional style. Please don't focus too much on the counting part, but on the injection of new functionality into base layers, without having to rewrite the upper layers.

That's a really good

That's a really good question. One can of course cheat (use unsafePerformIO). Additionally, if you're already in some sort of monad stack (which you may often be, but of course not necessarily) then adding an additional effect to the stack is simple, even if that stack doesn't include IO (i.e. it is pure).

Starting with some good abstractions and having type inference really helps as well -- many upper level operations might in fact be parametric over certain aspects of the results they get, or if you're threading data structures through anyway, those data structures could easily take an additional field.

For the specific issue raised though, in GHC one can simply add a cost center pragma and run with profiling and the operation count is maintained for you.
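
For the operation-counting case specifically, a sketch (the function and module names here are made up): annotate the operation with an SCC pragma, compile with profiling, and read the entry count from the report - no types change anywhere:

module Main where

dotProduct :: [Double] -> [Double] -> Double
dotProduct xs ys = {-# SCC "dotProduct" #-} sum (zipWith (*) xs ys)

main :: IO ()
main = print (sum [dotProduct [1..100] [1..100] | _ <- [1 :: Int .. 1000]])

-- ghc -prof -rtsopts Main.hs && ./Main +RTS -p
-- Main.prof then lists the "entries" for the dotProduct cost centre,
-- i.e. exactly the call count asked about.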

So part of the answer should be, I think, that certain types of common operations which are about instrumenting the code should be orthogonal to the code itself, and handled in other fashions.

None of these is fully satisfactory, but in practice, they suffice without too much work.

An improvement in refactoring tools would be a big help too.

There's no universal answer

Using unsafePerformIO ruins the whole "100% pure language" thing, of course...

That said, since Haskell is the closest thing I'm aware of to a fully pure language that gets general-purpose use, the impression I get from Haskell veterans on this question is: For debugging purposes, just discard purity--the module Debug.Trace does exactly this, giving you old-fashioned printf debugging via unsafePerformIO. For production code, avoid thinking of making a function stateful when it previously wasn't as a minor change, and adjust your design and refactor accordingly. In many cases the refactoring will be easy or completely trivial, but there's no generally-applicable answer.
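
To illustrate the Debug.Trace route (a minimal sketch; fib is just a stand-in function):

import Debug.Trace (trace)

-- trace :: String -> a -> a; it impurely prints its first argument
-- when the result is forced, then returns the second unchanged.
fib :: Int -> Int
fib n
  | n < 2     = n
  | otherwise = trace ("fib " ++ show n) (fib (n - 1) + fib (n - 2))

-- fib 4 prints "fib 4", "fib 3", "fib 2", "fib 2" (the order depends on
-- evaluation order) and evaluates to 3; the type stays Int -> Int.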

avoid thinking of making a

avoid thinking of making a function stateful when it previously wasn't as a minor change, and adjust your design and refactor accordingly.

I'd replace "stateful" by "monadic" here, since the point is more general than stateful vs. stateless code. Any kind of monadic code must negotiate some kind of sequencing which was not part of the original pure design, as well as deal with the 'side effects' of the specific monad. (And of course, the non-monadic portion of the code must be modified as well to handle the monadic type.)

This is probably why adding more layers to an existing monad stack is much easier than converting code which was written in non-monadic style.

Monads are a symptom, not the disease

I'd replace "stateful" by "monadic" here, since the point is more general than stateful vs. stateless code.

You're right that it's the "side effects" that matter, but I'd still stick with "stateful". To be more precise, the issue is how far from the function the "side effects" need to matter. If you discover that a pure function deep in the program needs to return an error for some values and the only functions that use it can handle its failure properly, you just change it to use Maybe or Either, tweak the other functions, and call it good. Or maybe you have a few functions that need to return multiple results (i.e., the list monad)--add a few list comprehensions, extract the single result you want, everything else is fine.

On the other hand, if you have a pure function deep in the program that needs a piece of data which is only used far away in the call graph, it can be a huge ordeal to propagate that bit of "global state" all the way down, even though this doesn't require any additional sequencing--it means adding a new parameter to every intermediate function, none of which use it for anything other than passing the value along. Ugh!

Of course that's just manually reimplementing the Reader monad, but the relevant part is adding non-local dependencies to the code. In Haskell, the way you do this is typically monadic; Reader for pure non-local parameters, Writer for pure non-local return values, State for both combined, and IO for impure state. In each case, the headache is due to needing to connect two widely separated parts of the program (IO being the worst-case scenario of connecting code to the program's entry point).

A monad stack is of course a pre-existing connection between everything that uses it, which is why threading something extra along that connection is easy.
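
A small sketch of the Reader case (my example; Config and verbosity are invented names):

import Control.Monad.Reader

data Config = Config { verbosity :: Int }

-- Only the function that actually uses the config mentions it...
deepFunction :: Reader Config String
deepFunction = do
  v <- asks verbosity
  return (if v > 1 then "chatty" else "quiet")

-- ...while the layers in between just inhabit the same monad instead
-- of threading an extra parameter along by hand.
middleLayer :: Reader Config String
middleLayer = deepFunction

topLevel :: String
topLevel = runReader middleLayer (Config { verbosity = 2 })

The pre-existing Reader "connection" is what makes later additions cheap; retrofitting it onto plain pure functions is the ordeal described above.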

Implicit Configuration Parameters

if you have a pure function deep in the program that needs a piece of data which is only used far away in the call graph, it can be a huge ordeal to propagate that bit of "global state" all the way down

Functional Pearl: Implicit Configurations

Note that the approaches described in Implicit Configurations are not monadic.

Exception proving the rule

Come now, that's hardly fair! Statements about anything involving Haskell's type system being difficult clearly don't apply to Oleg. The rest of us use types to convince GHC our programs meet its standards; GHC uses types to try to convince Oleg it's up to his standards.

Logging

I agree on the Oleg comment.

Still, purity doesn't handle basic things like, for instance, logging very well. Think about it: it's something you want in a lot of places in your program, and you usually insert it very deep, i.e., there where the data is. The only solution in a pure language is to thread it [the logging module] throughout the whole program, which makes the purpose of purity moot.

Could be worse

You have a point in general, but given the other benefits of purity I'll bite the bullet and deal with it if need be--and certainly I'd like it as the default. I think there's a case to be made for more fine-grained distinctions, as well. If I'm forced to break purity, allowing write-only impurity via specific logging functions is vastly less problematic than throwing open the door to anything and everything up to launching the missiles. The approach of "oh well, everything's impure, let's try and work with that" appeals very little, even less than having giant ugly stacks of monad transformers everywhere.

That said, in specifics and assuming Haskell again, I'm not convinced how often logging pure code would make sense. Usually if I care to log something it's either: for debugging purposes, in which case unsafePerformPrintfDebugging is perfectly fine by me; because important things are actually happening, which implies an IO interface which I can log to along with performing whatever actions; or due to requirements involving data persistence or transactionality, which if it's that important I'll probably design the program around that goal from the beginning.

Under what circumstance would one want to add endemic logging with no prior expectation, to a large chunk of pure code, that will run long-term in a production environment? I'm really not seeing the scenario here.

Thanks

Thanks for the replies, they're all very interesting. I guess part of the question is whether it's possible (or should be possible) for abstraction to hide whether something uses state or not. If it is possible, then you shouldn't have to pass something down, as the intermediate layers shouldn't have any idea of the need.

I think you can draw parallels with IoC and dependency injection here. In cases like that, the fact that a low-level component suddenly needs to use a DB connection (even if its use is abstracted) does have an impact, because you need to pass the factory for the connection through the intermediate layer. This is something I'm very interested in as I try to push unit testing and testable components at my workplace.

Either way though, I think it can be hard to deny the fact that this can make software harder to maintain/modify in the short term, simply because it's harder to control the scope of changes, and every project I've come across, however well intentioned, occasionally needs a "tweak" at a low level that isn't strictly "nice" from an architecture perspective.

Hidden state

Objects can model hidden state, not just abstract state. In this way, not all of an object may be observable based on the I/O relation provided by the object's sequential interfaces.

It's not that an abstraction hides whether something uses state. It is that the abstraction allows hidden state. See Brock-Ackerman Anomaly for a good discussion of this, since it is important to take into account when objects with hidden state interact. In OO programming, especially with inheritance, this is extremely common and non-robust communication protocols have fancy nicknames like "Fragile Base Class".

Language or Programming

I think you're missing the point with logging. A large number of (fail-safe/robust/traceable) applications need logging (who did what, when, why - why did the program fail), and you're trivializing the issue. [Medicine, avionics, and banking applications come to mind.] Logging requirements may be pretty severe and high-priority. For example, logging should be fast/unobtrusive, able to be inserted between specific code points for robustness, able to log to different places, able to switch between outputs after/at a specific time point, able to trigger backup, etc.

You probably can't even meet all of the above requirements with unsafePerformIO alone, but I am not sure.

The question is whether purity should be a language feature or a programming style. I like pure programming, and most of what I write is pure, but I personally like ML languages better, which don't enforce a hindering straitjacket on the programmer. And, Imho, Haskell just misses out there.

False Dichotomy

There are many options other than purity as 'language feature vs. programming style'. Effect typing and substructural typing (linear/unique types) are among them. You might also study declarative side-effects, as you might see in concurrent constraint programming with temporal logic. Logging is one domain that would be especially easy to express with declarative side-effects.

Anyhow, if logging is a functional requirement, then you build it into the system. For Functional Programming designs, this usually means using monads or arrows to emit a logging stream. Alternatively, an AOP style might be able to 'weave' logging into an otherwise pure program.
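
For the monadic variant, a minimal sketch using the Writer monad from mtl (function names are just for illustration):

    import Control.Monad.Writer

    double :: Int -> Writer [String] Int
    double x = do
      tell ["doubling " ++ show x]
      return (x * 2)

    pipeline :: Int -> Writer [String] Int
    pipeline x = double x >>= double

    -- runWriter (pipeline 3) == (12, ["doubling 3", "doubling 6"])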

The "hybridized" Functional/Imperative styles, such as ML, have a huge cost associated with them. The imperative side-effects are not idempotent or concurrent, and so a lot of abstractions and optimizations are difficult to validate, especially once you start introducing parallelism. ML just misses out there.

You missed the point entirely

As I stated before, if logging is required at a lot of code points in the program, then purity by necessity means passing around a 'logging' module throughout the whole program, be it explicitly, with monads, or with whatever construction you can think of. Again, this makes the point of purity moot; it doesn't scale well.

It is really straightforward. The amount of (difficult) code you need to write to use some feature that should simply be present throughout the whole program will always be annoyingly large, at the cost of the programmer, at the cost of not being able to easily implement reusable modules, and at the cost of good SE practice.

If logging is required at a

If logging is required at a lot of code points in the program, then purity by necessity means passing around a 'logging' module throughout the whole program, be it explicitly, with monads, or with whatever construction you can think of.

Except that isn't true. Pure functional code perhaps requires that, but (for example) various forms of pure logic programming, concurrent constraint programming, or other classes of declarative concurrency would not.

this makes the point of purity moot; it doesn't scale well

I agree; I do not believe purity scales well to programming in-the-large (that is: to extension/plug-in, maintenance, distribution, end-to-end integration, software lifecycle; programming in-the-large isn't about lines-of-code). I've a few comments in this topic already describing this opinion.

But hybridized functional/imperative, as ML uses, also scales poorly, hindering parallelism, reordering, persistent or distributed operation, various subprogram abstractions and optimizations.

Thus one may profit from finding a third option: interweave pure and impure code, and control effects to ensure they can be understood locally. There are many approaches to accomplish this, and I mentioned a few in the prior post.

The amount of (difficult) code you need to write to use some feature that should simply be present throughout the whole program will always be annoyingly large, at the cost of the programmer, at the cost of not being able to easily implement reusable modules, and at the cost of good SE practice.

That isn't true, not even for the pure functional scenario. Regarding lines-of-code, there is nothing to be gained by favoring an imperative language vs. a monadic or arrow style within a pure language. In both cases, you are implicitly passing around the environment.

Explicit

Regarding lines-of-code, there is nothing to be gained by favoring an imperative language vs. a monadic or arrow style within a pure language. In both cases, you are implicitly passing around the environment

The lines-of-code remarks were regarding a pure language, and passing monads around is something I would call explicit.

And just the fact that you need to write down extra characters means that you actually go against a good (if somewhat skewed) definition of loose coupling. The cost of adding, or later removing, a (possibly program-wide) logging module from the functions, or their types, of another module is just unnecessarily prohibitive.

Why does it need to be explicit?

just the fact that you need to write down an extra number of characters

You're ignoring the possibility of code being polymorphic in its effects (or monad). Not having the type system aware of logging has its own set of problems.
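
For instance, a rough sketch using mtl's MonadWriter class: 'step' is polymorphic in its monad, so the caller chooses the logging effect (or discharges it purely) at the boundary, and type inference propagates the constraint for free.

    {-# LANGUAGE FlexibleContexts #-}
    import Control.Monad.Writer

    step :: MonadWriter [String] m => Int -> m Int
    step x = do
      tell ["step " ++ show x]
      return (x + 1)

    -- one instantiation: discharge the effect purely
    runPure :: Int -> (Int, [String])
    runPure = runWriter . step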

Being "explicit" is about

Being "explicit" is about syntax - nothing more, and nothing less. A hidden parameter is implicit, by definition. Calling 'explicit' the passing of monads without actually looking a the syntax is not reasonable.

Being 'explicit' is orthogonal to pure or imperative evaluation semantics. One can pass around imperative environments just as easily as monads. One can talk about lexical and dynamic scoping, explicit vs. implicit parameters and return values, just as easily for a pure functional language as for an imperative one.

Your claim that "you need to write down an extra number of characters" for pure languages is simply not true.

Sure, logging will require you to write something extra. But to run a comparison, that must be measured relative to adding logging to an imperative language, dealing with dependency injection or weaving of logging functions and logging-level parameters, and dealing with all the same issues of introducing logging across library or module boundaries.

Weaving

Sure, logging will require you to write something extra. But to run a comparison, that must be measured relative to adding logging to an imperative language, dealing with dependency injection or weaving of logging functions and logging-level parameters, and dealing with all the same issues of introducing logging across library or module boundaries.

Which is exactly what I am thinking of. When it comes to logging, an imaginary impure Haskell program would just be shorter. And yeah, well, if you start hiding stuff through a monad, which you'd still need to weave through a program, and push the envelope far enough to call that implicit, then I'll argue that, by your definition, I might as well call C an (implicitly) pure programming language.

When it comes to logging, an

When it comes to logging, an imaginary impure Haskell program would just be shorter.

As a rule, imaginary programs tend to have very few lines-of-code. So I cannot help but be unimpressed by this claim of yours.

yeah, well, if you start hiding stuff through a monad, which you'd still need to weave through a program, and push the envelope far enough to call that implicit, then I'll argue that, by your definition, I might as well call C an (implicitly) pure programming language.

Even if you plan to argue that C code might be viewed as a global heap-passing style, that argument will fall apart upon introducing concurrency/threads, failure handling, pluggable modules, and IO.

Pure code can easily surpass C in all those aspects for manageable software engineering. For impure paradigms to be effective contenders, they really must introduce a security model and a sane concurrency model. And, even then, one would do well to keep purity for the data-modeling elements.

Referential Transparency

When it comes to logging, an imaginary impure Haskell program would just be shorter.

As a rule, imaginary programs tend to have very few lines-of-code. So I cannot help but be unimpressed by this claim of yours.

Since one wouldn't need to pass a reference explicitly in a non-referentially-transparent language, it would almost be shorter by definition. It is that easy.

Are you a Jehovah's Witness, by any chance?

Purity with Implicit Parameters

Since one wouldn't need to pass a reference explicitly in a non-referentially transparent language, it would almost be shorter by definition, it is that easy.

[edit] We are talking about purity, not referential transparency!

One doesn't need to pass a reference explicitly even in a pure language. Syntactic sugar, type inference, attribute grammars, dynamic scope, and any number of other means exist that allow one to pass or return arguments without reflecting this directly in the syntax. In these cases the extra syntax is needed if you wish to control (interrupt, transform, filter) the hidden flow of information.
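
GHC's implicit parameters are one concrete example: the parameter is threaded by the type system rather than written at each call site, and the program remains pure. A small sketch (?logLevel is a name I made up):

    {-# LANGUAGE ImplicitParams #-}

    verbose :: (?logLevel :: Int) => String -> [String]
    verbose msg = if ?logLevel > 0 then [msg] else []

    -- no explicit passing here; the constraint is inferred
    caller :: (?logLevel :: Int) => [String]
    caller = verbose "deep in the program"

    main :: IO ()
    main = let ?logLevel = 1 in mapM_ putStrLn caller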

Similarly, imperative languages adhering to the object capability model would require you to pass arguments around for any 'environment' in which communication is to take place. Again, said parameters can be explicit or implicit (using syntactic sugar, attribute grammars, dynamic scope). If they are implicit, security demands that one can use a little extra syntax to control the flow of these extra arguments. Object capability languages (of the non-trivial variety) are not pure.

You mis-attribute a property of syntax as a property of purity.

Bull

You either pass the reference, or you don't. If you don't, or do so only implicitly, the language ain't pure. C has exactly the attribute you describe, that you don't have to pass all references explicitly, and it is therefore called an impure language.

Attribute grammars, AOP, or whatever you want to toss in all are irrelevant to the subject.

Man, take it from a teacher. You just don't want to admit that you're wrong when you are. Hardly academic and something you can get rid of with a good psychologist or counselor.

some harmless advice

Check out the 'harmless advice' functional AOP paper: it shows how one can verifiably weave in advice without destroying reasoning about the original program. I don't remember the achieved property there (some notion of integrity violation...), but obviously you can keep going, such as by ensuring advice is terminating.

I think there's some disagreement on definitions here; once you pick one, I'd expect you can twist this approach to fit it. Do you have a counter-example?

You either pass the

You either pass the reference, or you don't. If you don't, or do so only implicitly, the language ain't pure.

You do not need to explicitly pass parameters around in order to have purity. Go educate yourself on point-free functional programming, attribute grammars, Haskell's monadic syntactic sugar, and my earlier reference to Oleg's Implicit Configurations (which uses typeclasses and type-inference) - to name just a few bodies of knowledge that contradict your claim.

Even when arguments are implicit, purity makes a huge difference with regard to data parallelism, partial evaluation, semantics under laziness, various optimizations (lifting expressions out of loops, lifting duplicate expressions, eliminating expressions whose outputs aren't needed), suitability for declarative sub-program abstractions (which might duplicate expressions where convenient, eliminate said duplication where inconvenient, or reorder them arbitrarily - all infeasible in imperative code), and various other properties. All the properties of pure code still apply.

C has exactly the attribute you describe, that you don't have to pass all references explicitly, and it is therefore called an impure language.

C is impure for reasons independent of the fact that you do not need to pass all references explicitly. Indeed, there are many impure languages where you do need to explicitly pass all transitively mutable references, such as the object capability languages. Go study a few of those (E language, for example).

One might reasonably consider 'imperative' to describe code that advances logical time (or, equivalently, world-state) across sub-expressions, especially if you cannot prevent this behavior. As a consequence, the reordering or replication of sub-expressions can change the semantics of imperative programs. All this is independent of whether the time or world state was passed in implicitly vs. explicitly.

Implicitly advancing (mutating) time or world-state is different from implicitly forwarding the same.

Man, take it from a teacher. You just don't want to admit that you're wrong when you are. Hardly academic and something you can get rid of with a good psychologist or counselor.

I suggest you review site policy.

Transparent logging

It seems to me that logging is only a problem for referential transparency if the program that is doing the logging can also read the log file/stream. As far as I can see, you can add some sort of write-log primitive to a pure functional language without any problems, provided the result is not observable.

Of course, if I were to do that I would want to generalize such a feature a bit more.
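
For what it's worth, such a primitive can be sketched today with unsafePerformIO - it is essentially the same trick Debug.Trace uses - though in practice one must worry about inlining and evaluation order (hence the caveat below about order and multiplicity):

    import System.IO (hPutStrLn, stderr)
    import System.IO.Unsafe (unsafePerformIO)

    -- the message goes to stderr as a side effect; nothing in the
    -- program can observe the result, so values remain referentially
    -- transparent, but order/multiplicity is at the mercy of laziness
    writeLog :: String -> a -> a
    writeLog msg x = unsafePerformIO (hPutStrLn stderr msg) `seq` x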

Can matter

It matters if you care about the order/multiplicity of the things logged.

... which brings us to the

... which brings us to the harmless advice paper.

Not really, no

There are roughly two cases where "logging" a piece of pure, referentially transparent code makes sense: capturing intermediate values taken by parts of an expression, where you care about the logical structure more than the exact order of computation; or recording what the program was doing at each point in time, where you care about the order and multiplicity only in that the logging should not change the runtime behavior.

Referentially transparent code is, by definition, not really doing anything; it's an inert definition, determining the actual value of which is an implementation detail. It may often be worthwhile to log said implementation, in order to examine those details, but this isn't the same thing as "logging" the code itself and shouldn't be thought of as such.

Treating a referentially transparent value as if it was just an impure program with some annoying restrictions imposed, and thus wanting to "log" what it's "doing" in some sense, is almost always misguided. Much of the value of pure code is that it need not have a simple one-to-one relationship with what the physical computer is doing at each successive moment; thinking as if that relationship did hold will lead only to heartbreak.

Write-only logging primitives are just an invasive way of annotating code to indicate that you want the runtime to keep a log of what it's doing; obviously useful and safe, but a less invasive means would be nicer, I'd think.

If you don't care, you don't care

Order of evaluation of pure code doesn't affect the result, but there still is a natural order in the source text. All things equal, I think I'll take my logging once and in order.

What's your view on incorporating random number generation into an otherwise pure program?

I think a lot depends on

I think a lot depends on what information the logs are supposed to provide. It would help if language implementations provided built-in support for monitoring computations--"record the value of this intermediate data structure", "keep a log of each time this function is called", "log any value bound to this identifier if evaluated, or log that a bound value has gone out of scope as an unevaluated thunk", &c.

Unfortunately, that sort of thing doesn't seem to be a high priority for people working on such languages--consider the situation with Haskell debuggers, for instance.

What's your view on incorporating random number generation into an otherwise pure program?

Also somewhat dependent on context, but I'm mostly inclined to say that it's another case where a lack of language primitives can force (apparently benign) impurity for pragmatic reasons. In this case, I suspect the ideal program would be pure and nondeterministic, with each result representing a probabilistic sample chosen from the space of possible outputs.

If memory serves me, nondeterminism actually forms a monad in a very natural way, and as such can be implemented as you'd expect in Haskell (the List monad being the simplest example), but then we're back to lifting and other monadic cruft, not to mention all the joys of monad transformers.
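
For instance, a tiny sketch of the List monad read as nondeterminism, where each bind explores every possible choice:

    import Control.Monad (guard)

    pairsSummingTo :: Int -> [(Int, Int)]
    pairsSummingTo n = do
      x <- [1 .. n]        -- nondeterministic choice
      y <- [1 .. n]
      guard (x + y == n)   -- prune failed branches
      return (x, y)

    -- pairsSummingTo 4 == [(1,3), (2,2), (3,1)]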

Lacking more direct support from the language, I wouldn't shed too many tears over introducing stochastic computation via carefully controlled impurity, but only after giving proper consideration to explicit nondeterminism and/or threading state for a PRNG.
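
Threading PRNG state explicitly, for reference, stays entirely pure and reproducible; a minimal sketch with System.Random:

    import System.Random (StdGen, mkStdGen, randomR)

    -- each call consumes one generator and returns the next
    rollTwo :: StdGen -> ((Int, Int), StdGen)
    rollTwo g0 =
      let (a, g1) = randomR (1, 6) g0
          (b, g2) = randomR (1, 6) g1
      in ((a, b), g2)

    -- rollTwo (mkStdGen 42) always yields the same pair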

Pure and Non-deterministic?

I suspect the ideal program would be pure and nondeterministic, with each result representing a probabilistic sample chosen from the space of possible outputs.

Methinks "pure and non-deterministic" is a contradiction in terms (unless you mean in the sense of NDFAs).

Perhaps your 'ideal' is something weaker than purity, yet that still has useful declarative qualities (such as semantics independent from order of expression). For the particular issue of indeterminism, you might look into stochastic concurrent constraint programming, or at some indeterministic modeling or term-rewrite languages such as Alloy and Maude.

I understand that, though for different reasons.

I consider purity insufficient for in-the-large programming, but I wish to keep many declarative advantages even while handling communication effects (such as logging). Some of my recent work towards a declarative yet secure communications model is described at Reactive Demand Programming.

But for the particular case of true-random bits, I suggest they be logically generated at some 'edge' of the system and then work their way into it. This allows you to preserve referential transparency and idempotence, and to effectively support reactive programming and caching - even after giving up on full purity. You lose a lot by introducing indeterminism arbitrarily.

If you are okay with a PRNG, ISAAC is relatively secure.

It would help if language implementations provided built-in support for monitoring computations [...] consider the situation with Haskell debuggers, for instance.

Glasgow Haskell has a decent profiler, if that sort of thing is what you're looking for.

For very large values of nondeterministic

Methinks "pure and non-deterministic" is a contradiction in terms (unless you mean in the sense of NDFAs).

Yes. Nondeterministic in the sense of producing a set of all possible outputs in some statistical distribution; the entire output is thus interchangeable with the computation itself. Often this is conceptually what one would like to have, but it leaves something to be desired from a pragmatic implementation standpoint.

Stochastic sampling from an idealized distribution is one reasonable approximation. Feeding random bits in from outside sounds roughly equivalent, I think? I'm not sure what the best way to handle the issue is. Thanks for the link, looks interesting--will take a look later, when time allows.

Glasgow Haskell has a decent profiler, if that sort of thing is what you're looking for.

It does, but I did mean debugger. There's one built into GHCi, of course, but the conventional stepping debugger approach doesn't work well in Haskell, for obvious reasons. Other tools exist as well, though fortunately, actually needing a debugger is surprisingly rare.

NDFA-like computes

The 'all possible outputs in some statistical distribution' is doable, though you'd probably wish to pursue it more as a Logic Programming problem (Probabilistic or Bayesian logics) or Term Rewriting system (akin to CafeOBJ, for which each 'term' is logically a set of all possible values). These would give you the option to focus your attention on 'what is the probability of this particular, interesting outcome?'. Evaluation strategies can then bound the probability (at least X%, at most (X+d)%), and increasing the confidence in a prediction could be achieved lazily.
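
The 'term as a set of weighted values' idea also has a well-known pure rendering as a probability-distribution monad; a minimal Haskell sketch (unoptimized, and the names are mine):

    newtype Dist a = Dist { runDist :: [(a, Double)] }

    instance Functor Dist where
      fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

    instance Applicative Dist where
      pure x = Dist [(x, 1)]
      Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

    instance Monad Dist where
      Dist xs >>= k = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x) ]

    coin :: Dist Bool
    coin = Dist [(True, 0.5), (False, 0.5)]

    -- 'what is the probability of this particular, interesting outcome?'
    probOf :: (a -> Bool) -> Dist a -> Double
    probOf ok (Dist xs) = sum [ p | (x, p) <- xs, ok x ]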