"Fortress Wrapping Up"

It appears that the Fortress project is wrapping up. I'd be interested in seeing more commentary on lessons learned.

Surprising that the project died completely

GLS: "Nevertheless, over the last few years, as we have focused on implementing a compiler targeted to the Java Virtual Machine, we encountered some severe technical challenges having to do with the mismatch between the (rather ambitious) Fortress type system and a virtual machine not designed to support it (that would be every currently available VM, not just JVM). In addressing these challenges, we learned a lot about the implications of the Fortress type system for the implementation of symmetric multimethod dispatch, and have concluded that we are now unlikely to learn more (in a research sense) from completing the implementation of Fortress for JVM."

It's surprising to me that these technical challenges were so severe that they appear to have made the entire project uneconomical. That is, it seems -- reading between the lines -- that they couldn't feasibly productize the Fortress work on top of the JVM. Either that, or they didn't get any interest from Oracle management in productizing Fortress.

In any case, I very much hope they write a paper detailing the specific hindrances to building Fortress's type system on top of the JVM, since that seems to have been the death knell for the whole project....
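
To make the mismatch GLS alludes to a bit more concrete: JVM bytecode only dispatches dynamically on the receiver, while a Fortress multimethod picks the most specific method from the runtime types of all its arguments. A minimal Java sketch (my own illustration, not the Fortress compiler's actual scheme):

    interface Shape {}
    class Circle implements Shape {}
    class Square implements Shape {}

    class MultiDispatchSketch {
        // Java overloads are selected at compile time from static types...
        static String collide(Circle a, Circle b) { return "circle/circle"; }
        static String collide(Circle a, Square b) { return "circle/square"; }
        static String collide(Shape a, Shape b)   { return "shape/shape"; }

        // ...so a Fortress-style multimethod, which needs the most specific
        // method for the *runtime* types of both arguments, has to be
        // re-implemented by the compiler, e.g. as an instanceof cascade:
        static String dispatch(Shape a, Shape b) {
            if (a instanceof Circle && b instanceof Circle) return collide((Circle) a, (Circle) b);
            if (a instanceof Circle && b instanceof Square) return collide((Circle) a, (Square) b);
            return collide(a, b);
        }

        public static void main(String[] args) {
            Shape a = new Circle(), b = new Square();
            System.out.println(collide(a, b));   // "shape/shape": overload chosen statically
            System.out.println(dispatch(a, b));  // "circle/square": dispatch on runtime types
        }
    }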

I am not surprised...

It's not that easy to target an arbitrary virtual machine. For instance, almost all compilers I know target an abstract assembly in which the basic allocation unit is a cell, which may hold either a pointer or an integer of register size, or sometimes another small value. Moreover, a number of target assembly languages go on to allow pointer arithmetic in order to implement fast branching. AFAIK, neither is available on the Java VM, at least not without a performance hit.
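
To make the cell point concrete, here's a tiny Java sketch (my own hypothetical illustration, not anything from the Fortress code base): the only universal cell available on the JVM is Object, so integer cells must be boxed, and pointer arithmetic on references isn't expressible at all.

    class CellSketch {
        public static void main(String[] args) {
            // A compiler's "frame of uniform cells" can only be an Object[]:
            Object[] frame = new Object[2];
            frame[0] = Integer.valueOf(42);   // an integer cell costs a heap allocation (boxing)
            frame[1] = new int[]{1, 2, 3};    // a pointer cell is fine, but opaque

            // Reading the integer back pays a checked cast and an unboxing step:
            int x = (Integer) frame[0];

            // There is no bytecode-level equivalent of (ptr + offset) or of a
            // computed jump through a table of addresses; such idioms have to
            // be simulated with switches or virtual calls instead.
            System.out.println(x);
        }
    }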

Then there is RTTI, which may not map onto the Java manner of keeping track of types, forcing one to generate another layer of RTTI on top of Java.
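
A hedged sketch of what such an extra layer can look like: a JVM object only carries its erased class, so a compiler whose source-level types don't coincide with Java classes may pair each value with its own descriptor. (The names below are made up for illustration; this is not Fortress's actual representation.)

    import java.util.List;

    // A language-level type descriptor, carried alongside the Java object.
    final class TypeDesc {
        final String name;                        // e.g. "List[ZZ32]"
        TypeDesc(String name) { this.name = name; }
    }

    final class TaggedValue {
        final Object value;
        final TypeDesc type;                      // the extra layer of RTTI
        TaggedValue(Object value, TypeDesc type) { this.value = value; this.type = type; }
    }

    class RttiSketch {
        public static void main(String[] args) {
            TaggedValue xs = new TaggedValue(List.of(1, 2, 3), new TypeDesc("List[ZZ32]"));
            // getClass() reports only the erased implementation class; the
            // source-language type must come from the extra descriptor:
            System.out.println(xs.value.getClass().getName());
            System.out.println(xs.type.name);
        }
    }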

Then there is the fact that it is a mostly functional language, meaning it will allocate an enormous amount at runtime, which might be slow in Java. On top of that, I don't know how efficiently Java reduces closures, which may be another runtime penalty.
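
And a small illustration of the allocation concern (again just a Java sketch, not a statement about the Fortress runtime): every capturing closure on the JVM is a heap object, and generic higher-order code boxes primitive arguments and results, so tight functional loops can allocate on nearly every iteration.

    import java.util.function.Function;
    import java.util.function.IntUnaryOperator;

    class ClosureSketch {
        public static void main(String[] args) {
            int k = 3;
            // A capturing lambda becomes a heap-allocated closure object:
            IntUnaryOperator addK = x -> x + k;

            // The generic version boxes its argument and result on each call:
            Function<Integer, Integer> boxedAddK = x -> x + k;

            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) {
                sum += boxedAddK.apply(i);   // boxing allocations on most iterations
            }
            System.out.println(sum + " " + addK.applyAsInt(1));
        }
    }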

Lastly, Fortress sits in the same solution space as Haskell and ML. Why target a VM inefficiently when there are other languages that already fill the same niche and enjoy a larger audience?

For a sufficiently complex language, there are usually more reasons not to target a Java VM, or .NET for that matter. Targeting a Java VM is mostly for untyped scripting languages, or 'cheap' Java knock-offs. (Or those languages which can get away with an AST reducer.)

(I saw Martin Odersky's post on the other thread. I should have mentioned Scala, of course, as yet another language extending Java and filling the same niche of JVM-targeting languages.)

Productizing languages is difficult

Having done market research for developer-centric HPC tools, I can tell you that there really isn't a market for a productized Fortress. There are people out there who might use it for free, but almost no one will pay for it. And, even if they did, there aren't enough people willing to pay to make a strong business case.

The starting market for developer tools in the HPC space is developers who work on scientific or HPC projects. Across the world, that's on the order of 1k-10k developers total, across all scientific domains. Note that this excludes technical end users such as those who may use Matlab or IDL, though that market isn't much larger.

Assuming (generously) that 10% of those developers would be willing to switch languages and pay for Fortress, you have 100-1000 paying customers, total. A list price of $10k would give you $1M - $10M in revenues. The lower end of that would barely support development and leave nothing for marketing/sales/support. The middle would support it, but with no profits. And, without recurring revenue (e.g., customers buy a subscription rather than a license), the business wouldn't be sustainable.

Really? Is this including

Really? Is this including finance, oil rigs, graphics shops, ..? Also, you hit muddy waters wrt cloud computing, mobile, and embedded.

When a company makes a new language...

...the business plan is generally not to make money by selling copies of the toolchain. The plan is to develop software more efficiently (or, in rare cases, to develop more efficient software) by using the language internally. If the company is large, users outside the company are welcomed, to provide potential future hires for HR. If the company is not as large, outside users are welcome so that the language gets an ecosystem of libraries to help it flourish. Charging anything at all for 1.0 of the essential toolchain seems like a good way to kill a language at birth.

DARPA money

In this case, the company didn't make a new language; a particular research team did. The business plan was to fund some portion of Sun's research using DARPA money. The DARPA program solicited something that Sun people wanted to do, so they pursued it.

The CRASH program at DARPA, in my opinion, was pointless. The problem is that tagged hardware doesn't generalize, requires non-commodity memory hardware (and therefore isn't deployable), and simply isn't necessary or helpful to achieve the results that Howie sought. It's instructive to remember that the AS/400 started with tagging at all layers of the hardware. Over the years, that hardware support was progressively abandoned in favor of software approaches, with no loss of function and a significant increase in performance. The LISP world went through a similar experience.

Some people have offered the opinion that Howie pushed tags as a way of forcing the research community out of well-worn ruts in their thinking. Personally, I don't find this notion very likely, but I suppose it is possible.

The right way to think about hardware tags is that they are a non-generalizable form of type tagging. We have better, generalizable ways of doing that today than we had 40 years ago when hardware tags last seemed like a good idea.

This is really interesting

This is really interesting. I am surprised I haven't heard of CRASH before, given my interest in both PL and biological computation. Anyway, did any interesting PL work come out of it?

CRASH is an ongoing program

CRASH is an ongoing program -- we're in the midst of year 3 of 4. It's funding a large number of different research projects, but the one shap refers to is this one: http://www.crash-safe.org/ . More info about CRASH is here: http://www.darpa.mil/Our_Work/I2O/Programs/Clean-slate_design_of_Resilient_Adaptive_Secure_Hosts_%28CRASH%29.aspx

Full disclosure: much of my work over the past few years has been funded by the CRASH project, working on types, contracts, virtual machines, and extensible languages, all in Racket.

Better Racket than rockets!

Better Racket than rockets!
The SAFE site seems pretty lean, though. Given your disclosure I presume it does not reflect the extent of the project.

SAFE is actually quite

SAFE is actually quite ambitious, featuring a new architecture, a new OS, two new PLs, and new software. A position paper on the design from a year ago is here: http://www.crash-safe.org/node/9

still tagged

As far as I know, the AS/400 successors still use tagged hardware today, so it may not be an ideal example. The increase in performance wasn't because they abandoned tags, but because they replaced their quirky special-purpose very CISCy architecture with a modern RISC, with a minimum of hardware support for their tagging needs.

I think they are gone

It is certainly possible that I am recalling incorrectly, but here's my memory of matters:

Tags on the AS/400 memory were gone when I joined IBM circa 1999. The extra per-register tag bit on IBM's implementation of the PowerPC went away while I was there.

The externally visible architecture remains tagged, but I'm pretty sure that the hardware implementation is no longer tagged.

not so sure

You no doubt have better inside knowledge about this, then, but from Soltis's description of the architecture it's quite clear that there are tags in hardware: one tag bit for each 128 bits of memory, mainly serving as a forgery check for system-generated pointers. The tag bit can only be set by a special instruction; any ordinary write to memory will clear it. Registers aren't tagged, only memory.

It is unclear how this could be emulated in software without trapping every write to memory, incurring hideous overhead.

This seems to be very confused

I don't know if you're intentionally running these together, but the DARPA HPCS program that funded Fortress (along with X10, Chapel, and a much larger amount of hardware work) is totally different from the DARPA CRASH program that, among many other projects, funds the SAFE research project (at Penn, Harvard, Northeastern, and BAE), which is looking at tagged architectures.

If I have more time later, I can say more about the other issues here.

Yes. I'm confused

I did indeed mix them up. Thanks. I was actually talking to JMS about the SAFE work yesterday.

Thanks, Sam.

economics

It's surprising to me that these technical challenges were so severe that they appear to have made the entire project uneconomical.

Projects are not, in and of themselves, economical or uneconomical. They exist in a larger economic context that has a say in that question. The Fortress project moved from one management regime, sales force, CFO, etc. to another. Any outsider's speculative guess as to why the Fortress project is winding down has to take that transition into account.

I'm inclined towards:

[...] didn't get any interest from Oracle management in productizing Fortress.

That would be consistent with the way Oracle has "in-housed" other free software projects they got during acquisitions.

Here's my outsider speculation: At Sun, the prospect of bringing lessons learned during Fortress R&D to the community process and improving JVM in an orderly way was plausibly strategic. Those conditions changed. It wasn't and isn't Oracle's strategy.

Whatever you want to say about Sun's strategies, it seems to me that in the years before the end they saw benefit in pushing the envelope of what was commonly available as free software. For example, even if Fortress itself were never "the next big thing" -- what would it have been like to see the JVM grow new features that would let other languages support symmetric multimethod dispatch? Or, for that matter, to see other VMs/run-time systems gain such features, following Sun's example?

It seems to me that Sun Research was inclined to look for and surf those big "waves" of common practice -- the introduction of Java for the web, way back, as an early example.

Oracle's approach looks rather different. I think the team, in essence, got fired because Oracle never had any deep interest in the project from the start, but it would have been impolitic to just kill it right off the bat.

stillborn

The Fortress project had apparently been dead long before its termination was announced, and this has nothing to do with Oracle or their economic policy, or with implementation difficulties. How much interest did Fortress generate in the world over a decade? Where are the hordes of enthusiasts behind it? Checking either or both of java.net/projects/projectfortress/lists and blogs.oracle.com/projectfortress, the answers are quite obvious. Even the project wrap-up announcement has received no comments so far.
So I see the really interesting question as not ‘why was the project closed’ but ‘why was it never alive right from its birth’.

Data distributions

Considering that they came out of DoD HPCC and the directions that were wanted there, I was always disappointed that they did parallelism only in the most trivial way. In that link there is an acknowledgement that data distributions were something that they had wanted to do but never got around to.

Personally I would swap any amount of syntax and fancy types for a workable parallelism design. At least Chapel made good progress there.

HPCC is hard

If you put a number of domain experts together, define a simple language around all the idioms, and glue the underlying C libraries to that simple language, you'd have a reasonable chance of making the programmer's job in HPCC easier. And I guess Chapel is doing that.

To be honest, I don't think gluing an elaborate Java on top of a number of C libraries, which do their work well, makes a lot of sense, since I estimate that programmers will be put off by the drawbacks of the high-level language. I would expect program development to be a factor of two or three slower compared to an easier language.

And as another poster said, there is no business case for Sun in HPCC, but there is one for Cray, so it's no wonder they're developing Chapel.

It seems like Fortress had

It seems like Fortress had an identity crisis from the beginning. It came out of the DARPA HPCC program but hardly dealt with parallelism at all, and unsurprisingly it was not picked up for phase II funding. It was never clear to me whether it was a production language emphasizing parallelism or a research project about LaTeX-style syntax typesetting and a statically typed LISP-y multiple-dispatch mechanism.

Pretty much all the things

Pretty much all the things you say here are not correct. Parallelism, structured with what we called distributions, is fundamental to the language. In fact, the default Fortress for loop is parallel. Also, the mathematical syntax and multiple dispatch were part of the language at the beginning.
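
(For readers who want something concrete: a rough JVM analogue of "the default for loop is parallel" is an explicitly parallel stream. The snippet below is Java, offered only as an analogy, not Fortress syntax.)

    import java.util.stream.IntStream;

    class ParallelForAnalogy {
        public static void main(String[] args) {
            // In Fortress the plain for loop runs its iterations in parallel
            // by default; in Java you have to opt in explicitly:
            long sumOfSquares = IntStream.rangeClosed(1, 1_000)
                                         .parallel()
                                         .mapToLong(i -> (long) i * i)
                                         .sum();
            System.out.println(sumOfSquares);   // 333833500
        }
    }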

Perhaps I came across as

Perhaps I came across as overly critical; from the outside it seemed that all the parallelism was just Map/Reduce. At least that (and the fancy syntax) was what got emphasized in Guy Steele's talks.