Questions Five Ways

I think one of the better ideas I've had on this blog is my Questions Five Ways series. For each post, I'll ask a guiding question of five leading hackers, some from the Ruby community and some from outside it.

So far the questions have been about concurrency, code reading, and static code analysis & testing. I understand Pat is interested in hearing suggestions for future topics.

I find the discussion about concurrency interesting. Naturally we have been urging people to look at Erlang for quite a while, and Haskell parallelism is also a frequent topic here. It is nice to see how these things are becoming more mainstream. It also means it is about time we moved on to new things...

Code reading is, of course, near and dear to me.


What's next?

It is nice to see how these things are becoming more mainstream. It also means it is about time we moved on to new things...

I think that hits it on the head, Ehud.

I must see five mentions per day each of Haskell, Erlang, functional programming, closures, etc. in the mainstream programming blogosphere. I don't know how much of a role LtU played in popularizing these ideas and technologies, but there is no doubt that they have been popularized.

As you say, this is a very good thing. But what can we do for an encore? ;-)

There is still some interesting stuff going on in the theory world (e.g. Wadler's relatively recent work on blame typing and idioms), and on the engineering side I would like to see all PLs get the kind of "deep" tool support that Java has in the form of full-featured IDEs. (The Rubyists in the OP article don't seem that interested in this part...)

But I must confess, I'm starting to find it harder to find new PLT goodness to feed my habit. ;-)

I wonder... Is it the world

I wonder... Is it the world that has changed, or is it just us old farts?

Kids these days...

I wonder... Is it the world that has changed, or is it just us old farts?

A bit of both I think. ;-)

The world has changed to the extent that we are like the people who built a small town on the frontier only to look up one day and find that the suburbs have been built up around us.

We've changed in that we've used up the ready-to-hand content; it's pretty hard at this stage to stumble across something "new and exciting" like the lambda calculus that just happens to have been waiting around for us to discover for several decades.

It sucks getting old. ;-)

New PLT goodness

I'm starting to find it harder to find new PLT goodness to feed my habit.

Pessimism completely unwarranted! On the contrary, it's nice to move on from always talking about the same old stuff. Languages are moving to a higher level of abstraction. We're seeing very nice libraries that bring programming to a new level. For example, the Scalaris and Beernet libraries in my own SELFMAN project. They provide a simple abstraction, a transactional object store, and mask very well the problems of coherence and imperfect failure detection in distributed applications. The heavy lifting is done by the Paxos uniform consensus algorithm, which was considered rocket science just ten years ago. IMHO, this is where the future is: strong algorithms will lift languages to higher levels of abstraction. We are slowly but surely taking off from the ground level, where language operations are just fancily wrapped machine instructions.

Optimism requires patience

We are slowly but surely taking off from the ground level, where language operations are just fancily wrapped machine instructions.

I agree. We've been mired at ground level because of technological limits - things like unconstrained mutability. We've only just begun to reach the point where some niche languages support rising above that, but there's relatively little experience with doing it for real at significant scale, which would provide impetus for much additional invention and innovation.

One problem is that technology transfer from academia to industry is still hampered by the weak foundations provided by the mainstream languages. Which might help explain the malaise being described here.

Danvy has occasionally referenced one of Piet Hein's Grooks:

T. T. T.

Put up in a place
where it's easy to see
the cryptic admonishment
T. T. T.

When you feel how depressingly
slowly you climb,
it's well to remember that
Things Take Time.

The present's so bright, I gotta wear shades

One problem is that technology transfer from academia to industry is still hampered by the weak foundations provided by the mainstream languages. Which might help explain the malaise being described here.

I think you are describing the old malaise that we were comfortable with. ;-)

The new malaise (at least for me) is that some of our favorite PLT ideas have gained a beach-head in the mainstream and there is now an inner call to find new frontiers.

Maybe that is where Peter lives. ;-)

Where's the light source?

(Some of) the ideas have gained a beach-head in the mainstream, but I'm saying that being able to really build on those ideas, which is an obvious next step, is still significantly inhibited by the limitations of the legacy languages, not to mention that most people don't even consider them legacy languages.

We're also saddled with legacy VMs like .NET and JVM, which punish all attempts to implement semantically different languages on top of them. It can be done, as languages like Scala, Clojure, and F# demonstrate, but they all also demonstrate the limitations of their foundation.

Throwing some ideas over the wall and being happy when they're recognized seems to me to miss the bigger problem. A healthy situation would involve more back and forth - having the ideas being pushed to the max in real projects in industry, and having academia respond to some of the challenges that creates. That's not really happening yet with the ideas we're discussing, so to me, their increased mindshare is only the beginning.

That's a very good point.

That's a very good point. New directions should be inspired by both internal theoretical concerns and real world pressures. The point of the malaise, I guess, is that neither seems at the moment to lead to something radically new or different.

Shine on, you crazy Ruby

not to mention that most people don't even consider them legacy languages.

While not everyone is on the bandwagon yet, I think the momentum is definitely with the understanding that they are legacy languages.

The Java vanguard, for example, long ago fled to the newer languages such as Python, Ruby, Scala, etc, and the discontent that prompted that flight has seeped into the mainstream.

It can be done, as languages like Scala, Clojure, and F# demonstrate, but they all also demonstrate the limitations of their foundation.

With enough demand from these directions, there is likely to be a movement to fix or replace the VMs to solve these problems. Even without perfection, you can go a long way with incremental improvements and the mental tools that PLT can give you.

That's not really happening yet with the ideas we're discussing, so to me, their increased mindshare is only the beginning.

The game isn't over, for sure, but these ideas have crossed the chasm. If they are really as good as we've been saying they are (obviously I think so), their utility will get them established just as thoroughly as, say, Structured Programming has long been.

A healthy situation would involve more back and forth - having the ideas being pushed to the max in real projects in industry, and having academia respond to some of the challenges that creates.

This does go on, just not as directly as we might expect. The two domains don't have quite the same goals and intentions, after all. And I personally don't think that is a problem, since that means that some of us can occupy the niche that bridges the two ecosystems.

I guess I just want some new ideas to learn about and champion. I've started to take the old ones for granted...

Tasty!

For example, the Scalaris and Beernet libraries in my own SELFMAN project.

Thanks for the reminder about SELFMAN; I've always meant to keep up with it more than I have.

I almost added distributed computing to my list of areas of current interest, but didn't have any recently read papers to refer to. ;-)

However, my PLT taste buds would be even more excited if you could show how these new libraries could be replaced with elegant extensions of the kernel language...

Extensions to the kernel language

if you could show how these new libraries could be replaced with elegant extensions of the kernel language

There you've got me. That's work in progress. There is some good stuff out there, like Raphaël Collet's Ph.D. thesis (transparency for fault tolerance) and Mark S. Miller's Ph.D. thesis (transparency for security), but we still haven't embedded Paxos in the kernel language. It's not easy to figure out what should go in the kernel language (as primitives) and what should not. A common mistake is to put too much in the language. Even garbage collection and virtual memory (now usually taken for granted) took a long time to be accepted. RMI is probably a mistake and will have to be taken out eventually. The jury is still out on STM.

PL vs application

It's not easy to figure out what should go in the kernel language (as primitives) and what should not.

I guess the more general question is: how do you distinguish innovations in PLs per se, versus "merely" an innovative application of existing PL technology?

Composability

Composability

Music can be recognized by its...

Composability

Care to elaborate?

Merely an application...

I guess the more general question is: how do you distinguish innovations in PLs per se, versus "merely" an innovative application of existing PL technology?

Operating systems, APIs, protocols, and frameworks have all been likened (very reasonably) to languages. They often can be judged by the same properties: transparency, security, orthogonal persistence, modularity, robustness, etc.

Applications, on the other hand, have failed to achieve such high praise as being likened to PL. Most especially, they aren't composable. Why aren't they composable? Because applications are defined/described by their behavior, not by an interface with a service contract.

It's a subtle difference, but a significant one.

Other things than languages can be composed, of course, though 'music' does not seem to be one of them... At least, if I were to be pedantic about it, I'd note that 'musical composition' typically seems to involve developing (in a language of some sort) a program or description of sounds to be played against time, as opposed to literally arranging and composing 'music' by grabbing sounds out of the air and shoving them through space-time.

cat -n file.txt | sort +5 | grep "todo"

Applications ... aren't composable. Why aren't they composable? Because applications are defined/described by their behavior, not by an interface with a service contract.

How does the classic UNIX approach of creating many small apps that do one thing well, and can be composed using pipes, fit into that argument?

They are composable, but

They are composable, but only because of the anemic data model: the string is the only data type. We're all looking for something better.
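
To make the contrast concrete, here is a miniature sketch of what "something better" might look like: the same pipeline shape as the UNIX example, but with stages exchanging structured records instead of flat strings. The record type and stage names below are hypothetical, invented for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Line:
    number: int   # the structure `cat -n` would otherwise flatten into text
    text: str

def number_lines(texts):
    # analogous to `cat -n`, but attaching a typed field instead of prose
    return [Line(i + 1, t) for i, t in enumerate(texts)]

def grep(records, needle):
    # analogous to `grep`, but filtering on a named field
    return [r for r in records if needle in r.text]

def sort_by_text(records):
    # analogous to `sort`, but keyed on a field rather than a column offset
    return sorted(records, key=lambda r: r.text)

pipeline = sort_by_text(grep(number_lines(["b todo", "a todo", "done"]), "todo"))
```

Each stage composes like a UNIX filter, but the "protocol" between stages is a typed record rather than a byte stream, so no stage has to re-parse its input.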

"Composable with rich data models"?

We're all looking for something better.

Indeed. So it would seem that composability alone is not a sufficient criterion for distinguishing PL innovations from applications of existing PL technology.

No true Scotsman...

Those happen to be commands in a shell-language. I'd note that commands designed for integration with the shell, such as those you name above, are really the UNIX equivalent of functions. C had several design criteria just for making that sort of integration easier (the support for args, env, having 3 file-descriptors open).

The behavior resulting from invoking those commands should be considered an application of the shell language. ^_^

When is an application not an application?

I'd note that commands designed for integration with the shell, such as those you name above, are really the UNIX equivalent of functions.

You could look at them that way. I see them rather as applications that have been designed for (among other things) composability. Or is your contention that an application designed to be composable ceases to be an application and becomes a language primitive (even if it can also be used as a stand-alone application)?

So here's a question: What

So here's a question: What are the design constraints for achieving compositionality?

PEBKAC

I'm not sure. I suppose that the existence of an interface accessible to other components is a prerequisite (keyboard-and-screen-as-interface seems less than useful for composition unless you're willing to include human-at-keyboard as an intercomponent connector within your system - although there are certainly some systems that work that way). For UNIX apps the interface is of course stdin/stdout. I would assume that some form of agreed protocol at the interface (what David has been calling a "service contract") is also required. For UNIX apps that seems to have been "stream of bytes". For proper compositionality we also require that a composition can itself be treated as a component. That seems to imply a certain uniformity to the interfaces and protocols. Beyond that, I don't have any good answers yet.
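
The "composition is itself a component" requirement can be illustrated with a minimal sketch (hypothetical helpers; bytes-in/bytes-out stands in for the stdin/stdout protocol):

```python
# Components share one interface: bytes in, bytes out (the "agreed protocol").
def upper(data: bytes) -> bytes:
    return data.upper()

def reverse_lines(data: bytes) -> bytes:
    return b"\n".join(reversed(data.split(b"\n")))

def compose(*stages):
    # the composition satisfies the same contract as any single stage,
    # which is the uniformity requirement described above
    def pipeline(data: bytes) -> bytes:
        for stage in stages:
            data = stage(data)
        return data
    return pipeline

shout_backwards = compose(upper, reverse_lines)
# shout_backwards can be used anywhere a single component could be,
# including as a stage inside another compose(...)
```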

Referential transparency?

Referential transparency?

Required, or just nice to have?

Referential transparency is nice to have, and it certainly makes components easier to reason about. But I'm not certain it's required. For example, we probably want I/O components, but they're not usually considered referentially transparent.

r.t =/= pure

r.t. =/= pure. I do think you want "same inputs entail same outputs" to hold (in general; there are a few exceptions I can think of).
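
One way to make the distinction concrete (a sketch; the caching and counter helpers are invented for illustration): a function can have internal effects yet still map same inputs to same outputs.

```python
_cache = {}

def slow_square(n):
    # internally effectful (it mutates a cache), yet referentially
    # transparent: the same input always yields the same output,
    # so a caller may substitute the call with its value
    if n not in _cache:
        _cache[n] = n * n
    return _cache[n]

_counter = [0]

def stamped(n):
    # not referentially transparent: same input, different outputs
    _counter[0] += 1
    return (n, _counter[0])
```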

No argument there.

No argument there. Engineers designing (for example) a circuit have to include all sorts of design margins to accommodate the fact that their circuit components are not properly referentially transparent (the input->output mapping can vary slightly due to internally generated noise, thermally-induced changes in resistance, and a host of other problems). In principle software components can be free of those issues. However, given that it is possible to design systems from components that aren't precisely referentially transparent, I'm hesitant to say that RT is a hard requirement for compositionality. And as you say, there is a place for exceptions.

Of course. One reason this

Of course. One reason this is a good property to mention is that when you decide to go the side-effect route, you often lose compositionality, which you may later regret (this just happened to me this afternoon...)

Capabilities Abstraction

What are the design constraints for achieving compositionality?

I posit the following:

  1. capabilities abstraction: ability to pass capabilities (links or object references) into the definition for objects; UNIX provides this only via the 3 file-descriptors (which the shell links under-the-hood). Without capabilities abstraction, composition can only occur incidentally via a shared name-space. Corresponds roughly to parametricity for functional composition.
  2. configuration language: a language just for defining a composition! Shell pipes, functional composition, visual dataflow, etc. all qualify.
  3. common data model: objects speak the same language as their recipients expect
  4. behavioral contract: objects in the configuration know what to expect of one another behaviorally, e.g. if replies or feedback are involved.
  5. separate naming and construction: objects are named or links are provided independently of objects being defined or constructed. (For cyclic composition.)
  6. well-defined partial constructs: either there are no construction-time communications behaviors (silent construction), or said behaviors are delayed until after every object in the configuration is constructed, or the environment can delay messages to objects that have not yet been constructed. (Frees environment from ordering-issues in construction, necessary for cyclic construction.)
  7. transparent live reference: aka dependency injection. Naming or linking live objects is just as easy as declaring new objects. UNIX fails here in a big way. Necessary for secure live-object composition. related: 'new' considered harmful
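
Items 1, 2, and 5 above can be sketched in a few lines (the `Logger`/`Service` names are hypothetical): the component names no globals and receives its only capability as a parameter, while naming and wiring happen in a separate configuration step.

```python
class Logger:
    def __init__(self):
        self.lines = []
    def write(self, msg):
        self.lines.append(msg)

class Service:
    # capabilities abstraction (item 1): the service touches no shared
    # name-space; its only link to the world is the reference passed in
    def __init__(self, log):
        self.log = log
    def handle(self, request):
        self.log.write("handled " + request)
        return "ok"

# configuration "language" (item 2): naming and wiring happen here,
# separately from the component definitions (item 5)
config = {"log": Logger()}
config["svc"] = Service(config["log"])
```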

For even more fun, consider abstract compositions, functional compositions, and composable abstract compositions... which I'll define as:

abstract composition: a partially-defined composition but with a few holes to fill - minimally a couple missing objects or parameters. This generally means black-box composition is necessary, which adds some extra complications to the 'data model' and 'behavioral contract' issues because the developer of the abstract composition needs to communicate the communication and behavior requirements to the user of the abstract composition.

functional composition: an abstract composition where the absolute count of initial objects in the configuration is determined by functional abstractions. A simple example would be a composition with a number of repeated filters based on the input to the function. This sort of composition is often difficult to perform in a visual configuration language.

composable abstract composition: an abstract composition A that accepts an abstract composition B as an input then provides input to the abstract composition specifically including references or links to objects introduced by A and vice versa. Example:

  1. abstract configuration AbstractA accepts abstract configuration AbstractB as input, creating a concrete configuration A
  2. configuration A introduces objects X, Y, Z.
  3. configuration A passes references to X and Y into AbstractB, creating a concrete configuration B
  4. configuration A defines objects X and Z with reference to objects defined in configuration B.

If that sort of composition can be completed, especially in combination with functional composition, some fairly interesting constructs become easy to define.
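
The four numbered steps above can be sketched as ordinary functions over dictionary-shaped configurations (an illustrative model, not an implementation):

```python
def abstract_b(x_ref, y_ref):
    # AbstractB: given references to X and Y from the enclosing
    # configuration, introduce an object of its own that links to them
    return {"B1": {"links": [x_ref, y_ref]}}

def abstract_a(abstract_b):
    # step 1: AbstractA accepts AbstractB as input
    # step 2: A introduces (names for) objects X, Y, Z
    x, y, z = "X", "Y", "Z"
    # step 3: A passes references to X and Y into AbstractB
    b_part = abstract_b(x, y)
    # step 4: A defines X and Z with reference to objects defined in B
    a_part = {
        x: {"links": list(b_part)},
        y: {"links": []},
        z: {"links": list(b_part)},
    }
    return {**a_part, **b_part}   # the concrete configuration A

config = abstract_a(abstract_b)
```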

Another question to ask is: What are the design constraints for achieving significant static partial-evaluation optimization and dead-code elimination from configurations?

My answer was: strong object capability security (so objects don't communicate except through capabilities), encapsulation of configurations (output from configuration is just a few limited exports), silent constructors (allows dead-code elimination of objects that aren't reachable from the exported objects)...

[Edit appending to 'optimizable' composition] Oh... and (since Ehud mentioned it, though for me it was just assumed) referential transparency of the configuration language, including evaluation of its abstract forms, with total functional programming (guaranteed termination) in order to allow a maximum degree of partial-evaluation and static optimization within abstract configurations.

Not an Application until Invoked

Or is your contention that an application designed to be composable ceases to be an application and becomes a language primitive (even if it can also be used as a stand-alone application)?

Not quite. I don't see commands of any sort, not even 'firefox', as applications. The application is the behavior - the result of invoking the command. That's the critical difference.

Shell commands are composable. Shell commands, however, are part of a shell language. The pipe-composable shell commands happen to follow a variety of service contracts with regards to pulling data from one location and pumping data to another and accepting arguments from certain sources (most notably the command-line arguments and environment). Shell commands can be composed, and the result of the composition is a program... in a language... it's still composable.

That program is turned into an application by the shell runtime environment when you press ENTER.

And, beyond that point, there is no standard way to compose the application, at least not from outside of it.

Applications that act as servers or language-interpreters or operating systems, though, may happen to provide compositional properties. I'm particularly fond of the abstract-factory plugin class of composition, as is common to browsers. Applications may achieve composition by providing a new language, service-contract, protocols, etc. from within.

Composing behaviors

I don't see commands of any sort, not even 'firefox', as applications. The application is the behavior - the result of invoking the command. That's the critical difference.

But it's precisely the composition of the behaviour of a Firefox application with the behaviour of a server application that produces a useful result (often a new application, e.g. a webmail application). Does that make the behaviour of Firefox part of a "distributed application" language? Sure, the browser and the server use a common language (HTTP) to mediate their interactions, but that language doesn't define the composition (i.e. specify the components and the fact that they're connected - that's defined dynamically by the user).

Applications may achieve composition by providing a new language, service-contract, protocols, etc. from within.

Do those applications then become part of some (perhaps implicit) compositional language?

We seem to be getting a bit off-topic here. The original question asked how one distinguishes PL innovations from applications of existing PL technology. You indicated that "composability" was the answer. I'm still not sure I see how composability alone allows you to make the distinction. As you have admitted, many things can be composed, and they don't necessary require a programming language in order to generate the composition.

Am I part of a language?

Applications may achieve composition by providing a new language, service-contract, protocols, etc. from within.

Do those applications then become part of some (perhaps implicit) compositional language?

Am I part of the English language simply because I read and write it? What about C++? I read and write that, too.

It's a metaphysical position that strikes me as downright awkward, but I can't defend against it from any objective standpoint if you believe so. The idea that the application is part of the language it participates in should, for logical consistency, have the same answer.

I would consider a running Firefox instance to be a participant in many languages, protocols, and frameworks (HTTP, HTML, JavaScript, DOM, Gecko, XUL, Netscape plugins, etc.). I would not consider it to be part of any of these. I'm pickier than most when it comes to language, though. I'm curious what others think.

The original question asked how one distinguishes PL innovations from applications of existing PL technology. You indicated that "composability" was the answer. I'm still not sure I see how composability alone allows you to make the distinction.

There may be other distinctions we could make, but we only need one, and we'd be wise to make it a useful distinction. Composability is sufficient to make a useful distinction between what we typically consider applications and what we typically consider languages.

If it's a composable artifact of PL, then we can still usefully measure it by the compositional properties of languages: security, transparent locality, abstraction, orthogonal persistence, safety, symmetry, etc. If we can measure it by those properties, we might as well call it a language (as we do for frameworks, operating systems, APIs, ...).

If we can't or won't compose it, then there's no point in examining the compositional properties that we use to describe languages, and so we might as well call it an application.

We seem to be having trouble with composition

Am I part of the English language simply because I read and write it? What about C++? I read and write that, too.

It's a metaphysical position that strikes me as downright awkward, but I can't defend against it from any objective standpoint if you believe so. The idea that the application is part of the language it participates in should, for logical consistency, have the same answer.

I'm afraid that you misunderstood my question. I wasn't suggesting that an application that uses a language somehow becomes part of that language. Rather, I was trying to understand the claim you seemed to me to be making: that "things which are composed" become part of a language simply because they can be composed. In the case posed in my question, the language that composable objects become part of would not be the language used at the interface (that would be the common data model and behavioral contract you have referred to in other posts), but rather the language used to define compositions (what you have elsewhere called a configuration language). At least, that was what I took your point to be, and I was trying to make sure I understood it.

RE: Extensions to kernel language

On the transactions side, STM modified for the Actors Model or for Process Calculi seems promising. Both of them are naturally distributed. I developed the Actors Model version for compatibility with the Object Capability Model.

Scalaris also aims at redundancy and regeneration and automatic distribution and load-balancing. While I still rely on intelligently written libraries to create the most robust, self-healing system possible, I do provide some kernel support for these in my Awelon language at the 'object configuration' layer.

  • the configuration may declare each actor to be 'nearby' one or more other actors, which will influence automatic mobility/distribution/duplication of actors (within limits of a trust model). Purely asynchronous actors may be duplicated for other reasons, too.
  • the configuration may declare each actor to 'depend' upon one or more other actors. This serves a quadruple purpose. Given actor Alice depends on actor Bob:
    1. if Bob 'dies', then Alice will (if possible) regenerate Bob. To this end, redundant copies of stateful data will automatically be distributed and managed, within the limits of the trust model.
    2. if Bob 'dies', then Alice will also die (if unable to regenerate Bob). This causes cascading destruction, reducing issues of 'partial failure' (which are rather difficult issues!)
    3. there is a special class of 'suicide' actor that kills itself upon request and, after dying from suicide, can't be reincarnated or regenerated. This allows destruction on demand where explicitly added, which is rather useful given the limitations of distributed GC. Limiting this capability to a particular actor helps with object capability model.
    4. Explicit dependencies are great food for heuristic GC, providing semantic decision support beyond mere reachability.
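
The first two dependency behaviours above can be sketched as a toy supervisor (hypothetical; not the Awelon implementation): on a dependee's death, a dependent first attempts regeneration via a factory, and cascade-dies when no factory is available.

```python
class Actor:
    def __init__(self, name, factory=None):
        self.name, self.factory, self.alive = name, factory, True

class Supervisor:
    def __init__(self):
        self.depends = {}   # dependent -> dependee

    def depend(self, alice, bob):
        self.depends[alice] = bob

    def kill(self, actor):
        actor.alive = False
        for alice, bob in list(self.depends.items()):
            if bob is actor and alice.alive:
                if bob.factory is not None:
                    # behaviour 1: Alice regenerates Bob (a real system
                    # would rebuild state from redundant copies)
                    bob.alive = True
                else:
                    # behaviour 2: cascading destruction
                    self.kill(alice)
```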

Other than destruction via dependency and explicit destruction of suicide actors, all language-provided actors are 'persistent'. They might be heuristically garbage-collected, though. The automatic regeneration, along with the intelligent redundancy and self-healing in other 'standard' libraries (like data-distribution service, document GUIs, etc) minimizes the damage caused by imperfect GC.

At the runtime/implementation level, I support introducing and distributing capabilities that are not primitive to the language, such as access to ambient information (time, random numbers, caches) that you wouldn't want to cross the network to reach.

  1. ambient plugin-abstract-factory-provided actors that can represent global resources outside the language. These can be (and will be) duplicated in other compatible runtimes. Examples would be random number generators, access to time or clock information, access to the local/ambient HTTP cache, etc. This avoids the situation where, say, actors would need to reference an HTTP cache, a clock, or a random-number-generator on another machine. It's also object-capability safe.
  2. object plugin-abstract-factory-provided actors, which can migrate and be made redundant (in the 'dependency' sense above) and persistent but always have a 'primary' copy. Used mostly to represent subscriptions (e.g. a subscription to the clock - would be a problem if duplicated and started sending double signals!), but can represent any unique interaction between Awelon actors and ambients. I.e. this would allow a subscription to a clock to be automatically distributed or migrate to use the local clock on another machine more 'nearby' the subscribing actor, as opposed to sending clock signals across the network.
  3. local ambient/object actors aren't sharable but serve similar purpose, albeit to support access to application-local resources (like discovering and subscribing to joystick inputs from a browser linking the runtime). These simply make the plugin system more fully-featured.
  4. volatile actors may be provided by the application linking the runtime, and hook arbitrary application resources that inherently die (and cannot be regenerated) upon death of the application.

I'm still working on implementation, but the design seems sound.

I was rather proud of the discovery that the 'virtual machine' or implementation should be designed to support certain capabilities that aren't made from the language itself, most especially the ambients and objects. When I eventually re-implement Awelon in Awelon, I'll definitely keep some form of support for 'plugin extensions' to the runtime to add 'capabilities' not available to constructors within the language, but that can still be migrated or duplicated across machines.

Anyhow, Scalaris at its lower layer aims to achieve a somewhat less internally secure version of transactions, and at its higher level supports the applications. I tend to think the layering is... upside down, or at least that's my impression. For Oz, too. The layers of Scalaris and Oz seem to me to build upon one another in the opposite direction they should for ideal performance, security, safety, and clueless composition. I can't help but imagine a lot of self-discipline will be needed in programming to regain those properties.

Direction of layers

The layers of Scalaris and Oz seem to me to build upon one another in the opposite direction they should for ideal performance, security, safety, and clueless composition.

Can you explain?

RE: Direction of Layers

I probably cannot explain very well, not without sitting down and really formulating my thoughts. But I'll offer an attempt. I'll note here that by 'Oz' I mostly meant 'Mozart', and in particular its use of logic variables (and streams and trees of such) as rather expensive communication primitives with a rather poor robustness and resilience profile in the face of partial failure and network disruption.

I'll focus this response on the Scalaris layering. A simple formulation of the layers in Scalaris vs. a Suggested alternative:

Layer   Scalaris      Suggested
1       Server/App    Transaction
2       Transaction   Replication
3       Replication   Server/App/State
4       State

In the Scalaris system, users talk to the application, the application starts a transaction and names various DHTs and performs the updates, and the transaction manager commits or aborts. While replication & self-healing are transparent to the application, the application itself is not subject to transparent replication. State in Scalaris is the distributed hash table (DHT).

In the Suggested system, users start a transaction, then talk to one or more services; those services may also talk to one another and may potentially create more services. The user then commits the transaction, and the transaction manager dutifully fires off the messages to achieve commit or abort. In this case, composition is possible, so the service itself may be the initial 'user' of other services and may itself be constructed of services. Service and state are each subject to replication and self-healing. Services introduced during a transaction have their life-cycle subject to the transaction, thus allowing programmers to introduce whole applications without exposing any partially-defined states. Fundamental state is a cell (get/put one value, or subscribe and receive updates), but state is just another service, and services may emulate a cell. DHTs could be constructed readily enough.
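
The described flow - messages buffered until commit, and transaction-scoped service life-cycles - might be sketched as follows (an illustrative model only; plain lists stand in for mailboxes and service state):

```python
class Transaction:
    def __init__(self):
        self.pending = []      # messages buffered until commit
        self.created = []      # services whose life-cycle is tx-scoped

    def create(self, service):
        # services introduced during the transaction live or die with it
        self.created.append(service)
        return service

    def send(self, target, msg):
        # nothing reaches the target before commit
        self.pending.append((target, msg))

    def commit(self):
        # the transaction manager "fires off the messages"
        for target, msg in self.pending:
            target.append(msg)
        self.pending.clear()

    def abort(self):
        # no partially-defined state is ever exposed
        self.pending.clear()
        for s in self.created:
            s.clear()
        self.created.clear()
```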

In terms of composition, the Scalaris design doesn't allow applications to compose transactionally. Even if two different applications share reference and subscription to a common DHT, they won't be informed of one another's modifications until after commit. There are no 'transactional events'.

Security is hurt directly as a consequence of the composability being hurt. In particular, applications become monolithic because they must simultaneously carry capabilities/references to every DHT they wish to involve in the transaction. Even assuming 'ideal' unforgeable names for DHTs (capabilities) and proper certificate-based limitations on replication across DHT platforms (due to extent of trust capability in an open system), and a Transaction Manager protocol that doesn't hold capabilities (allowing untrusted platforms to host/replicate the manager), this issue remains. There are ways around it, but they fundamentally involve turning the Scalaris application design into the Suggested approach.

The Suggested approach can be used to compose applications, services, and state externally and internally without distinction. This allows fine-grained object capability security within the application. The same certificate trust-based replication limits can apply to avoid replicating services to untrusted platforms in an open network. This approach does have its own cost, but a relatively minor one. In particular, transaction identifiers - enough for the service platform to connect to the transaction manager - must be part of the protocol... passed around sort of like cookies, and recognized by everyone. (Indeed, HTTP cookies are sufficient.) It extends well to hierarchical transactions.
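The cookie-like transaction identifier might look something like the following. This is a toy protocol with invented names (begin_transaction, call_service, and the services themselves are all hypothetical), shown only to make the "passed around like cookies, recognized by everyone" idea concrete:

```python
# Sketch: a transaction identifier rides along with every request, much
# like an HTTP cookie, so any service it reaches can find its way back
# to the transaction manager. All names are illustrative, not a real
# protocol.
import uuid

MANAGERS = {}  # txn-id -> manager endpoint, known to the platform

def begin_transaction(manager_endpoint):
    txn_id = str(uuid.uuid4())
    MANAGERS[txn_id] = manager_endpoint
    return txn_id

def call_service(service, payload, txn_id):
    # The id is part of the message envelope; the receiving service can
    # enlist with the manager and forward the same id to sub-services,
    # which is what makes hierarchical/composed transactions work.
    request = {"txn": txn_id, "body": payload}
    return service(request)

def order_service(request):
    txn = request["txn"]
    # (would enlist with MANAGERS[txn] here, then call a sub-service,
    # propagating the same transaction id)
    return call_service(inventory_service, {"item": "widget"}, txn)

def inventory_service(request):
    return {"txn": request["txn"], "reserved": True}

txn = begin_transaction("manager.example:2pc")
reply = call_service(order_service, {"order": 1}, txn)
print(reply["txn"] == txn)  # True -- the id survived two service hops
```

The design point is that no service needs to hold capabilities to every other participant up front; each just recognizes and forwards the identifier.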

As far as safety and performance, Scalaris will perform well enough in its designed role. The layering won't hurt it. But the need for workarounds to compose applications and to add 'behavior' to the transactions eventually will hurt performance. Perhaps the SELFMAN project will overcome some of those issues.

LtU's role & tools

LtU has certainly helped me look at functional programming (and languages) along with related technologies.

As for tools in the Ruby world, I dunno that we're not interested. More that we're interested in tools that aren't tied to a specific IDE. Maybe I'm misunderstanding by 'deep' tool support though. Can you be more specific?

Tool support

As for tools in the Ruby world, I dunno that we're not interested. More that we're interested in tools that aren't tied to a specific IDE.

I don't know about the "we". As time goes on, fewer and fewer people are interested in just Vim and Emacs modes when programming.

But Ruby already has good IDE support. It's just that it's next to impossible to do "everything" that the Java support in IDEs give.

If Java did one thing, it made many understand the importance of tooling.

Maybe I'm misunderstanding by 'deep' tool support though. Can you be more specific?

Just look at Java support in Eclipse, Netbeans, Intellij, or C# support in Visual Studio with the Resharper plugin.

Deep tools

Maybe I'm misunderstanding by 'deep' tool support though. Can you be more specific?

Dave's examples are the same ones I would give. The key thing about them that is different and more powerful is that these IDEs have a more semantic understanding of the source code which goes well beyond the merely syntactic understanding required for, say, source code pretty printing.

A suite of powerful refactorings, for example, makes it much easier to make complex global design improvements to the source code, and it goes a long way towards making the source code the embodiment of the mental model for the program.

Extensive use of metaprogramming and eschewing of static guarantees on source code make it harder to provide this kind of support. To some extent, a language must be designed from the ground up to allow for this kind of support, and Java, despite its failings, has shown how much can be done in this direction.

mainstream programming blogosphere

I think the intersection of mainstream and blogosphere captures a very small (and quite possibly unrepresentative) portion of mainstream. Similarly the open source community seems to be a lot more progressive (after all a lot of them don't have to answer to anybody about how to invest their efforts) and very vocal about their preferences.

However, I don't think they're actually convincing the remainder of mainstream programmers or their management yet. Probably they're still being considered the "cool" kids who go on and on about the Subaru Impreza WRX STi by the grown-ups who reiterate that the car for the job has been and still is the Mercedes...

Aggregation for the masses

I think the intersection of mainstream and blogosphere captures a very small (and quite possibly unrepresentative) portion of mainstream.

I will grant it is hard to tell objectively one way or the other. My subjective sense, based on programmers I've known, is that a significant proportion of all programmers follow at least one of the programming aggregator feeds, and that is where I'm seeing a LOT of references to "non-mainstream" languages and language issues.

It wasn't that long ago that Java was the cool new kid on the block, and look where it is now.

F# in Visual Studio 2010 by default

To me, VS 2010 being released will be the turning point. Millions of dotnet programmers will have F# in their IDE by default. Once they start playing around with the REPL, and have the ability to just highlight some code, alt-enter to run it, and not have to deal with so much boiler-plate, mainstream C# and Java development will seem a bit antiquated.

I'm not saying that F# or Scala will be mainstream languages for your average corporate development, but the languages themselves will be mainstream.

The Visual Studio support for F# is really incredible. I hope the Scala plugin guys take notice.

Call me a cynic, but somehow

Call me a cynic, but somehow I didn't exactly conclude from that article that these "things are becoming more mainstream."

If anything, there seems to be a lot of questions to answer about why they aren't more mainstream, even given this sort of exposure.

I just came across this post the other day, and even though I do think there is an agenda, there is a certain point to it as well.

I feel as if the PL community is becoming more fractured or something.

Incidentally, I'm wondering if F# will spur the adoption of OCaml in any way.

PL is man's own Babel. The

PL is man's own Babel. The fracturing started with the competition between Lisp and Fortran, if not before...