CUFP write-up

A write-up of the Commercial Users of Functional Programming meeting held this October is available, for those of us who didn't attend. The write-up is well written and thought provoking (it was written by Jeremy Gibbons, so that's not a surprise).

The goal of the Commercial Users of Functional Programming series of workshops is to build a community for users of functional programming languages and technology. This, the fourth workshop in the series, drew 104 registered participants (and apparently more than a handful of casual observers).

It is often observed that functional programming is most widely used for language-related projects (DSLs, static analysis tools, and the like). Part of the reason is surely cultural: people working on projects of this type are more familiar with functional programming than people working in other domains. But it may be worthwhile to discuss the other reasons that make functional languages suitable for this kind of work. There are plenty of them, many previously discussed here (e.g., Scheme's uniform syntax, hygiene, DSELs), but perhaps the issue is worth revisiting, seeing as this remains the killer application for functional programming, even taking into account the other types of project described in the workshop.
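
To make the language-applications point a bit more concrete, here is a hedged sketch (in Scala; the names are invented for illustration and are not from any particular library) of the kind of thing that makes language work pleasant in functional languages: parsers are just values, built from smaller parsers with higher-order functions.

    // Parsers as ordinary values: a parser consumes input and either
    // fails or yields a result plus the remaining input.
    object TinyParsers {
      type Parser[A] = String => Option[(A, String)]

      // A single decimal digit.
      def digit: Parser[Char] =
        in => if (in.nonEmpty && in.head.isDigit) Some((in.head, in.tail)) else None

      // Zero or more repetitions of p.
      def many[A](p: Parser[A]): Parser[List[A]] =
        in => p(in) match {
          case Some((a, rest)) => many(p)(rest).map { case (as, r) => (a :: as, r) }
          case None            => Some((Nil, in))
        }

      // Transform a parser's result.
      def map[A, B](p: Parser[A])(f: A => B): Parser[B] =
        in => p(in).map { case (a, rest) => (f(a), rest) }

      // A naive number parser: a run of digits (no error handling here).
      val number: Parser[Int] = map(many(digit))(ds => ds.mkString.toInt)

      def main(args: Array[String]): Unit =
        println(number("123abc"))   // Some((123,abc))
    }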

Getting to the point

Talking about languages and functional programming, I am reminded of my favorite Lisp book of all time. I learned Lisp originally because I wanted to write simple language interpreters on PCs. This was nearly impossible on early PCs because you needed function pointers, which weren't available in Basic and were tough to use even if you had Pascal. But there is a "one page" solution using Lisp. The Winston and Horn book is possibly the best programming book I ever saw. It gets right to the point and has lots of short bits of code that you can use to get started.

There is an interesting trend in the Winston Lisp books. The early editions, like the 1981 one, are shorter, more practical, and more to the point. As time goes on they seem to get longer and more obscure, with more and more space devoted to smaller and smaller problems. This actually seems to be the case with programming in general. Is there anything wrong with an interpreter written in half a page of code?
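
For what it's worth, the half-page claim still more or less holds for a toy language. A hedged sketch (in Scala rather than Lisp; the AST here is made up for illustration, not taken from the book):

    // A complete interpreter for a tiny expression language: an algebraic
    // data type for the syntax and one pattern-matching eval function.
    object HalfPageInterp {
      sealed trait Expr
      case class Num(n: Int)                            extends Expr
      case class Var(name: String)                      extends Expr
      case class Add(l: Expr, r: Expr)                  extends Expr
      case class Mul(l: Expr, r: Expr)                  extends Expr
      case class Let(name: String, v: Expr, body: Expr) extends Expr

      type Env = Map[String, Int]

      def eval(e: Expr, env: Env): Int = e match {
        case Num(n)          => n
        case Var(x)          => env(x)                  // unbound names just throw
        case Add(l, r)       => eval(l, env) + eval(r, env)
        case Mul(l, r)       => eval(l, env) * eval(r, env)
        case Let(x, v, body) => eval(body, env + (x -> eval(v, env)))
      }

      def main(args: Array[String]): Unit = {
        // let x = 2 in x * (x + 3)  ==>  10
        val prog = Let("x", Num(2), Mul(Var("x"), Add(Var("x"), Num(3))))
        println(eval(prog, Map.empty))
      }
    }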

Mostly that it doesn't scale

Mostly that it doesn't scale up when you start doing more complicated things. I'm not modelling my compiler code on Jack Crenshaw's "Let's Build a Compiler!" these days either, much as it was an important starting point for me.

"If you build it, they will come"

Development environment = programming language + tools + libraries + support. From the report, it's obvious that most, if not all, functional languages miss one or more of the elements that make up a development environment. Academia is not interested in providing the missing elements, so we are in a catch-22 situation.

The FP community should, at some point, stop developing programming languages and compilers and get to work providing the missing elements.

"If you build it, they will come"...

Already happening

F# looks like it's going to lead the way here.

Re: F# (and Scala, and Quark)

A drawback from my perspective is that they use libraries which are focused on mutable data structures. I presume that probably doesn't matter diddly to the majority of 'real' programmers out there, and won't stand in the way of their adoption.

(As a further aside, I wonder how bad it would be to hack up some wrappers (Java already has something along those lines in the more recent Collections stuff) for turning all of those mutating libraries into ones that return new copies and leave the originals unmolested - something like the sketch below? Of course, that doesn't work easily when you pass things off to other libraries.)
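
For what it's worth, a minimal sketch of the kind of copy-returning wrapper the aside imagines (Scala; the CopyOnWrite name and its methods are invented here, not an existing library). The caveat about handing things off to other libraries still applies, since they will see a plain java.util.List again.

    import java.util.{ArrayList, Collections, List => JList}

    // Wraps a Java list and never mutates it: every "update" copies.
    final class CopyOnWrite[A] private (private val underlying: JList[A]) {
      // Return a new wrapper holding a copy with `a` appended;
      // the original list is left untouched.
      def added(a: A): CopyOnWrite[A] = {
        val copy = new ArrayList[A](underlying)
        copy.add(a)
        new CopyOnWrite(copy)
      }

      // Hand callers a read-only view so nobody mutates our copy behind our back.
      def toJava: JList[A] = Collections.unmodifiableList(underlying)
    }

    object CopyOnWrite {
      def apply[A](xs: JList[A]): CopyOnWrite[A] =
        new CopyOnWrite(new ArrayList[A](xs))
    }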

Hard to have it both ways

One of the reasons that languages like F# and Scala can get wider traction is that they have easy access to such huge libraries of functionality. As you say, though, those libraries are largely focused on mutable structures.

I can't speak for F#, but I can say that Scala also comes with a small standard library that includes immutable versions of common functional collections: list, set, map, and so on. They'll be of less use when interacting with Java libraries, of course, but for working within the Scala world they do just fine.
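
For concreteness, here is what using those immutable collections looks like (plain Scala, nothing beyond the default standard library): every "update" returns a new collection and leaves the original untouched.

    object ImmutableCollectionsDemo {
      def main(args: Array[String]): Unit = {
        val xs = List(1, 2, 3)
        val ys = 0 :: xs                         // new list; xs is unchanged
        val s  = Set("a", "b") + "c"             // new set with "c" added
        val m  = Map("one" -> 1) + ("two" -> 2)  // new map with an extra entry

        println(xs)  // List(1, 2, 3)
        println(ys)  // List(0, 1, 2, 3)
        println(s)   // Set(a, b, c)
        println(m)   // Map(one -> 1, two -> 2)
      }
    }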

If F# doesn't come with such collections it should be easy enough to build them.

For working with Java libraries that expect standard Java collections, another nicety is that Scala has implicit conversions, which can allow you to work with Java collections using higher-order functions - e.g., you could add "map" and "bind" to a Java List. Combining that with Java immutability wrappers gives you something that works reasonably well - though not necessarily as efficiently as true functional structures would.
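
A rough sketch of what such a conversion might look like (the RichJList name and its map method are invented here for illustration; this is not any particular library's API):

    import java.util.{ArrayList, List => JList}

    object JavaListOps {
      // An implicit wrapper that adds `map` to any java.util.List,
      // returning a fresh list rather than mutating the original.
      implicit class RichJList[A](private val xs: JList[A]) {
        def map[B](f: A => B): JList[B] = {
          val out = new ArrayList[B](xs.size)
          val it  = xs.iterator()
          while (it.hasNext) out.add(f(it.next()))
          out
        }
      }
    }

    object RichJListDemo {
      import JavaListOps._
      def main(args: Array[String]): Unit = {
        val names: JList[String] = new ArrayList[String]()
        names.add("scala"); names.add("java")
        println(names.map(_.toUpperCase))  // prints [SCALA, JAVA]
      }
    }

A flatMap-style "bind" could be added the same way.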

If you build it, they still won't know it's there

If you build it, they still won't know it's there. That's why F# is going to get most of the attention there - MS can do the job properly, both in developing and in marketing it.

It's also worth mentioning that it's much harder building good tools for languages without sufficient reflection!

It all depends on how accessible it is.

If you build it in such a way that is easily accessible, they will come.

For example, when I visit www.haskell.com, I would like to see:

1) a 'download tools' button which allows me to download the compiler and IDE for my operating system.
2) a 'download sdk' button which allows me to download the sdk.
3) a 'libraries' page.
4) a 'tutorials' page.
5) a CUFP page.

The tools download should include:

1) compiler
2) IDE

The tools should be installable with the normal install procedure of the O/S I am using.

The IDE should include debugging.

The SDK should include:

1) GUI
2) database
3) collections, algorithms, and data structures
4) networking
5) I/O
6) XML
etc.

The libraries page should host all sorts of 3rd party extensions to the sdk.

The documentation of all the libraries should be uniform.

All the above might exist for Haskell in one form or another, but they are not organized. Personally, I can't track down everything I need for developing applications, and I'm certainly not willing to fight against different installation methods, different compilers, and different help files.

Perhaps I am spoiled by Java and .NET, after all :-)...

Interest and resources

AM wrote: Academia is not interested in providing the missing elements

It might be fairer to say that those academics who have the interest don't have the resources to do the job properly.

Not their job

I don't think it's really their job either. Academia is meant to do research, not product development. Given that, it actually is amazing how much development already is done in CS-related academia. Compare that to other branches of science.

Academia could organize it, at least.

I agree, it's not academia's job to do it. I certainly don't expect professors to start coding libraries. But they have the resources: a huge number of students plus a huge number of open-source developers. They could, at least, organize it.

If it were up to me, I would make a site where standards and APIs for my favorite language could be collaboratively designed, published, and tested. I would look to the most popular APIs for design ideas if I did not have experience in a field.

The nature of functional programming languages makes it easy to just hand off specs to somebody and then gather the results.

For example, a GUI library usually has around 30 widgets... If I had, let's say, 15 developers, a very good GUI library could be created in one month, given that each developer can finish an average widget (coding, testing, etc.) in around 15 days.

You all need to understand how important support is. You can't just expect companies to use software some guy made in his own spare time.

For example, I am right now trying to write a C++ application (C++ was requested by the customer) which uses garbage collection. I can't, because the Boehm collector (the only one that exists for C++ right now) crashes in a multithreaded environment. I have been waiting a week now for a response from Mr. Boehm.

While debugging the GC problems, I decided I wanted a debug version of pthreads-win32, in order to see why a certain pthreads function returned a certain value. I tried to compile the sources (available online), but they just will not compile.

I am now dependent on two people's appetite for working on their own projects... I am waiting for a response from one of them, hoping that I can get a meaningful answer so I don't have to go bother the other one with my problems.

It's situations like these that managers want to avoid. It does not matter that your language is the very best in the universe. What matters is how long it will take for my project to finish, and whether I will be able to get support in the future when the client requests an upgrade, etc.

Still not their job

Academia could organize it, at least.

Researchers have two core jobs: publishing and getting funding (and the cynic in me isn't so sure about publishing). Organizing large scale projects, open source or not, only fits into a researcher's job description to the extent that it provides both.

It's unlikely that there's a lot of grant money floating around for "Using Open Source to Create Better FP Tooling: A Study in Herding Cats to Build Some Really Pragmatic Stuff That Isn't Terribly Interesting From a Theoretical Standpoint but Darn, It's Useful."

The result is that organizing a large project must be a side project for the researcher. There's plenty of anecdotal evidence for how hard it is for non-researchers to build a successful project "on the side." Now imagine you're a researcher on the "publish or perish" mouse wheel. Every second you spend working on organizing the project is another second you're not working towards your next paper or grant.

That any broadly useful tools ever come out of the laboratory is simply a testament to the passion and dedication these people bring to the field they love.

If the rest of us want these tools to be at the next level then we have two choices: wait for uncle Microsoft or grandpa IBM to build it or invest sweat into our own projects.

For example, a GUI library

For example, a GUI library usually has around 30 widgets... If I had, let's say, 15 developers, a very good GUI library could be created in one month, given that each developer can finish an average widget (coding, testing, etc.) in around 15 days.

Ah, but some people would disagree. In particular, the "nine times as hard" rule applies here. And managing fifteen devs is much, much harder than it looks :)

Is there any inherent reason

Is there any inherent reason why computing scientists can't do networking but physicists can?

CS vs. physics

Physicists seem better at BSing their funding agencies about how their theories are going to turn out correct, honest, if only they pump another billion or two into the next supercollider. One of those dozen or so postulated hidden dimensions has to be around here somewhere! Maybe they're hiding under a pile of dark matter! ;-P

Seriously, I'm not sure what you're referring to. The founding of the web? Something else?

Seriously, I'm not sure what

Seriously, I'm not sure what you're referring to. The founding of the web? Something else?

With "networking" I really meant relationships and having some shared goals.

Not sure I agree with this...

...if you look at the existence of the major conferences such as ICFP and POPL, you see that computing science does, in my opinion, a fairly good job of self-selecting subsets with shared interests and goals. To Achilleas' point, however, what often isn't present is what I would call the goal of productizing the results of the research. Consider, for example, that the Scala group itself maintains the Scala Plug-In for Eclipse, that the Scala Book is available, and that the Lift web framework is no longer a one-man project. These, along with Scala's ability to use the huge library of existing Java code, represent, I think, a healthy blend of the researcher's goal of improving the state of the art and the engineer's pragmatic need to get real work done using popular, well-documented and maintained tools.

If someone asked me to start a web application today, I'd almost certainly do it in Scala and Lift.

Small electron colliders[*] are cheap

The non-serious part of my previous comment already addressed that. The organization you linked to is currently primarily concerned with a $7.5 billion project to build a device without which a major branch of physics will be dead in the water. Unlike the case with supercomputers and CPUs, the LHC is not something industry is about to build for its own purposes. So 111 nations are collaborating to help build this thing. That's a wonderful example of international collaboration, but what do you propose the equivalent should be in computing science?

Most areas of computing science have a low barrier to entry [cost-wise], so those sorts of collaborations aren't necessary. In physics, that collaboration is essential just to test certain theories.

Any cross-discipline comparison of this sort is quite meaningless without some sort of concrete thesis which attempts to account for the huge differences in the nature of the two fields.

[*] CPUs

Networking and shared goals?

Haskell was born exactly the way you described.

Language apps

If goals are the issue, then it is the applications of functional programming that matter, as Ehud suggests. Functional applications are about "language" problems. These are the killer apps of the functional arena. This is probably what I had in mind when I posted about the Winston Lisp books above. The book is almost nothing but language apps. This is certainly what interests me about this area. For me the issue is not so much promoting this or that language as developing better tools for doing applications. These are likely to be one-of-a-kind languages.