The Fortress Language Spec v0.618

We've already discussed the Fortress project at Sun's Programming Language Research lab. The project seems to be progressing well: they have now released a preliminary version of the language spec as a tech report. This should help assuage the folks who were grumbling about the hype and market-speak of Sun's promotional article. Their PL research team includes some real Smarty McSmartpants, so I expect really good things from them.

From a cursory glance at the spec, there seem to be many interesting features, including:

  • an advanced component and linking architecture
  • objects and traits (which look similar to Moby and Scala)
  • multiple dispatch
  • type inference
  • parameterized types
  • extensible syntax (with a conspicuous reference to Growing a Language)
  • first-class functions
  • proper tail calls
  • labelled jumps, exceptions
  • contracts
  • some interesting parallelism features (including futures, parallel loops, atomicity primitives, transactions)
  • matrices
  • comprehensions
  • bignums, fixnums (signed and unsigned), floating-point, imaginary and complex numbers
  • dimensions

Future work includes proving type soundness, apparently. :)

It is so much fun when the luminaries start working on new languages. History in the making!

Looks familiar

(although I retracted my comment as not really pertinent, some people seem to have answered it, so I put it back)

It looks very much like Scala, doesn't it ?

Sort of

Fortress is similar to Scala: it's an object-oriented language with traits, functions, and generic types.

Fortress is also similar to Fortran: it's a language designed for the needs of high-performance computation, with special support for arrays and matrices as complex data structures, and ways of explicitly parallelizing computation in a shared-address-space model.

There are also new things in Fortress: the component system, the syntax, and the model of generic types were all developed specifically for Fortress.

Just as an example, the following two expressions are the same:

  a sin x log log x
and
  a * (sin (x)) * (log (log (x)))

Funny you should mention that

Fortress looks very promising -- but I must admit I did a double take when I saw the syntactic ambiguity (for human readers) between multiplication and function application.

While an expression like "a sin x log log x" may be obviously unambiguous because we recognize sin and log as functions, what about "a foo x bar quux x"? In almost all other languages, it's possible to identify both function application and multiplication purely by syntactic inspection, without having to look elsewhere to figure out whether some name represents a function or not. Unless I'm missing something, I would vote strongly against allowing such ambiguity in a language.
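
To make the concern concrete, here is a toy Python sketch (not Fortress's actual grammar) of a parser for juxtaposed factors. The only way it can decide between application and multiplication is to consult a table of known functions:

```python
# Toy sketch, not Fortress's actual grammar. Grammar:
#   expr   := factor+            (adjacent factors multiply)
#   factor := FUNC factor | VAR  (FUNC-ness comes from a symbol table)
import math

def parse_factor(tokens, i, funcs):
    """Whether a name is a function can only be decided by lookup."""
    name = tokens[i]
    if name in funcs:
        arg, j = parse_factor(tokens, i + 1, funcs)
        return ("apply", name, arg), j
    return ("var", name), i + 1

def parse_expr(tokens, funcs):
    factors, i = [], 0
    while i < len(tokens):
        factor, i = parse_factor(tokens, i, funcs)
        factors.append(factor)
    return ("product", factors)

def evaluate(node, env, funcs):
    kind = node[0]
    if kind == "var":
        return env[node[1]]
    if kind == "apply":
        return funcs[node[1]](evaluate(node[2], env, funcs))
    result = 1.0
    for factor in node[1]:
        result *= evaluate(factor, env, funcs)
    return result

env = {"a": 2.0, "x": math.e}
funcs = {"sin": math.sin, "log": math.log}
tree = parse_expr("a sin x log log x".split(), funcs)
# tree is a * sin(x) * log(log(x)) -- but only because the parser was
# told that sin and log are functions; with unknown names like foo and
# bar, the very same token shape becomes a chain of multiplications.
```

Rename `sin` and `log` to names the table doesn't know, and the same six tokens silently become five multiplications: exactly the inspection problem described above.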

The rest looks great, though. :)

Precision vs Math

One of the goals of the Fortress syntax has been to express mathematical notation as closely as possible. That means that some things require context-sensitive disambiguation, because that's how mathematical notation works. However, I doubt people will use juxtaposition for function application for all their functions, without any parens.

juxtaposition for function application

However, I doubt people will use juxtaposition for function application for all their functions, without any parens.
Consider Haskell and (ab)use of the $ operator.

I would prefer requiring multiplication signs where necessary -- it should make type errors much much easier to understand.

Impressive, but Cluttered

I wonder if there is actual room for language research anymore. Does every research language need to be cluttered with multiply inherited OO, type-inference, multiprocessing, safety checking, and exception primitives? In this case, I think that there is some good research being done here on multiprocessing primitives, but other than that, this is just a mishmash of features from paths that have been pretty well trodden in the past.

Is the goal of this language to be a research language, or is it to be a trial balloon for a successor to FORTRAN? If the former, it seems way too cluttered with extraneous features to give useful results either on a theoretical level or with respect to being able to disentangle properties of individual features from that of the gestalt. If the latter, well, it seems that a bunch of features that seem beside the point have been added for no particularly great reason.

Make no mistake - the number of constructs and themes attacked in this language is certainly ambitious (and impressive), but I can't shake the feeling that putting all of this stuff together seems to make the language really cluttered. There seems to be no unifying structure or elegance. Half of me says that this is actually a good thing - certainly a system that might initially seem cluttered has much space for finding Alexander's "Quality Without a Name". But the other half says that sometimes clutter is just clutter.

I really don't know and I'm really trying to make sense of this. I just don't see a need for yet another "kitchen sink" language with gutter syntax. Can someone explain to me what the actual goal of this language is? If they actually wanted to build a "secure FORTRAN" (as the introduction states), why didn't they just start with APL or Sisal constructs (e.g., arrays, functions, maps, etc.) and grow from there? Or make an actual mathematical language with operators and types corresponding to that domain? Tossing everything else in just because "it's needed in a modern language" without first examining whether or not that feature brings anything to the table seems perverse.

Ultimately the question is what the purpose of a "secure FORTRAN" really is. And is FORTRAN really insecure at all? If you look at FORTRAN as a domain specific language whose job is to compile closed mathematically-oriented code into a quick binary, then it's pretty secure inherently (or at least a subset of it could easily be made to be). As such, there's a bunch of stuff here that just doesn't fit. On the other hand, if you look at FORTRAN as a general purpose language then you need all of this stuff to make it "generally useful". I just wonder if the whole notion might be better served by an interoperable pair of languages, one which is inherently secure but limited to a closed mathematical domain, and a second one that has objects and exceptions and clowns (it's not a real language without clowns, you know).

Or maybe I'm just cranky today :-).

Academic vs "Real" languages

Fortress is intended to be a language real people use. That means it needs the features people need, not just what makes for publishable research.


It's also a language produced by a research lab, so it has new (and hopefully interesting) ideas.

Tossing everything else in just because "it's needed in a modern language" without first examining whether or not that feature brings anything to the table seems perverse.

We certainly didn't just add features without thought. You'll note that the language doesn't bear a particularly strong resemblance to the languages its authors have worked on previously (Scheme, Java, Common Lisp, Fortran, Haskell, Modula-3, ML). We thought hard about what would make a language useful.


Conversely, some things are needed in a modern language, like a way to structure software. And some things are needed by the particular goals of this language, like support for scientific computation.


Finally, Fortress is indeed not a small language. But no real-world language is. OCaml is not small, neither is MzScheme or Python. GHC Haskell isn't small either, nor is Java or C#.

A real advance for real users

In fact, I would bet that real users would benefit more from a type system that inferred matrix dimensions concretely if possible and abstractly if necessary than from set comprehensions.

For instance,

A, B: matrix
C = A * B
D = C + 1

tells us quite a bit about the required sizes of A, B, C and D. Getting matrix sizes wrong by forgetting a transpose or some such is one of the most common mistakes in numerical programming. Fixing that will fix much more code than comprehensions ever will.
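
The kind of checking suggested here can be sketched in a few lines of Python. The `Shape` class and its symbolic row/column names are invented for illustration, not anything claimed by the Fortress spec:

```python
# Sketch of matrix-shape checking with symbolic dimensions ("m", "n", ...).
# Shapes are tracked and checked without ever touching matrix data.
class Shape:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols

    def matmul(self, other):
        # (m x n) * (rows' x cols') requires n == rows'
        if self.cols != other.rows:
            raise TypeError(f"cannot multiply {self.rows}x{self.cols} "
                            f"by {other.rows}x{other.cols}")
        return Shape(self.rows, other.cols)

    def add(self, other):
        if (self.rows, self.cols) != (other.rows, other.cols):
            raise TypeError("addition requires identical shapes")
        return Shape(self.rows, self.cols)

A = Shape("m", "n")
B = Shape("n", "p")
C = A.matmul(B)              # inferred: m x p
D = C.add(Shape("m", "p"))   # fine
# A.matmul(A) raises unless n == m: the forgotten-transpose bug,
# caught before the program ever runs.
```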

Will build AI for fun or profit

Look at Hongwei Xi's work...

..on Dependent ML. Checking matrix sizes was the first thing I thought of when I read his work.

Resemblance

You'll note that the language doesn't bear a particularly strong resemblance to the languages its authors have worked on previously (Scheme, Java, Common Lisp, Fortran, Haskell, Modula-3, ML).

From a first reading of the spec I would say quite the contrary - its heritage from most of these shows through. This is not intended as criticism, of course.

I also suppose the design is far from frozen and that it will still change in more or less radical ways. In many respects it already looks very nice, but I'm slightly puzzled by a few design choices.

No doubt the authors have thought about this a lot more than I have, but the language does not look very amenable to partial parsing (mostly because of the way juxtaposition is used for multiplication and function application). It feels like the C typedef mistake again: difficult to parse a piece of code without knowing what the identifiers stand for. Today partial parsing is useful not only in editors (where it has many uses) but also debuggers, static code analysis tools, interface generators etc.

I have way too much respect for the authors to believe that they have asked computational scientists what they would have liked in a language and used that as a base, but frankly it appears as if someone wanted to lift mathematical notation directly from paper into code. This is only superficially attractive, since the notations of code and maths are used in very different ways and should be optimised accordingly.

Writing for two audiences...

I understand your concern, and I believe that there is a lot of clunkiness in the language as well. However, I think that this is on purpose, because the purpose of the language is to provide two levels of use. On one level, you'll have those researchers who don't know how to program really well (and who don't want to learn either) but who need a program to approximate some values. On the other level you'll have your researchers who do understand programming and will put together libraries in Fortress (using that nifty AST syntax) that will allow those on the other level to do more stuff intuitively.

That being said, I think that this approach betrays the initial statement that they are writing a one-purpose language. I also have a problem with the idea that there'll need to be a special editor to write code in the language, and again I think they compromised on this because they are trying to create a language for two audiences.

What's So Special About Streams of Characters?

Nathaniel Dean: I also have a problem with the idea that there'll need to be a special editor to write code in the language, and again I think they compromised on this because they are trying to create a language for two audiences.

I'm not so sure about this. First of all, I've long missed the "structure editors" of some of the first Lisp environments I ever used. Later, I was exposed to VaxTPU and LSE (Language-Sensitive Editor), and I thought they were heading in the right direction: why should it be possible to make a syntax error? The editor should prevent it! What we hand to the compiler should already be an abstract syntax tree instead of a stream of characters.

Finally, if you look at something like Epigram, it might begin to become apparent that when dealing with very expressive type systems, it's helpful to be able to engage in a dialogue when ambiguities in type inference arise, and it's good to be able to exploit better means of human/machine interaction than just reading streams of characters and typing streams of characters.

So I'm not at all convinced that a modern language requiring something more than a text editor to interact with it is a priori a bad thing.

Language Representation

What we hand to the compiler should already be an abstract syntax tree instead of a stream of characters.

I very much agree with this statement. However, it somewhat implies blurring the distinction between editor and compiler, doesn't it? I mean, the editor has already "compiled" the code to an AST. Alternatively, the concrete syntax can be seen as just a "view" of the AST provided by the editor. Which leads to the idea that it should be possible to have multiple, customised views (concrete syntaxes) of the same abstract language.

In the same way that I can currently set up my editor to display keywords in green, or strings in bold and red, I should be able to display my Set type with curly braces and commas, while someone else's view represents them as the keyword "set" followed by a list of elements separated by whitespace, perhaps. Of course, it's considerably more difficult to create a consistent syntax than to just pick some colours, but I think it should be possible, at least. You could package up a complete, consistent syntax definition as a "skin" for some language. If done at the right level, I think it could work quite well, and allow people to move beyond surface-syntax reasons for disliking a language.

Maybe that way madness lies, but I think it would be interesting to try it. Some editors, like the new XCode, allow you to see a different high-level view of the code (graphically), but being able to change the view of the low-level syntax would also be useful, I think.
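
A minimal Python sketch of the "skins" idea, with two invented renderers acting as interchangeable views over the same abstract value:

```python
# Two invented "skins" rendering the same abstract value. The model
# is the set itself; each renderer is a replaceable concrete syntax.
def render_braces(node):
    if isinstance(node, (set, frozenset)):
        return "{" + ", ".join(sorted(render_braces(e) for e in node)) + "}"
    return str(node)

def render_keyword(node):
    if isinstance(node, (set, frozenset)):
        return "set " + " ".join(sorted(render_keyword(e) for e in node))
    return str(node)

value = {1, 2, 3}                 # the model
print(render_braces(value))       # one view:     {1, 2, 3}
print(render_keyword(value))      # another view: set 1 2 3
```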

As editors become more complicated, it seems that they will share quite a bit of code with compilers/interpreters. So it probably makes sense to break up the compiler and make all the parts available as libraries. This, in turn, seems to lead down the road to greater reflective and meta-programming capabilities for programming languages: if the parser, interpreter, optimiser, and compiler etc are all available as standard libraries for your language, then it seems a fairly natural step to use them on your own code.

Two-way street

These are all interesting thoughts by both Paul and Neil.

The current one-way street of parsing a concrete syntax representation to an abstract syntax tree would change to a two-way street of parsing and pretty-printing with these ideas. The canonical representation of code would no longer be a certain concrete syntax, but rather the abstract syntax (for which by Neil's idea more than one concrete syntax can exist).

However, I think subtle problems can arise here. Usually, you lose information when parsing concrete syntax (on purpose, of course), but this also means you cannot recover the exact same concrete representation, and instead have to use a generic pretty-printing algorithm. This is a problem inherent to the sketched setting, but people don't tend to like it when their own markup gets lost.

Another problem is what to use as 'the' abstract syntax. If it is abstract enough, more than one construction in a given concrete syntax can map onto a single abstract syntax construction, so you lose even more information in the parsing/pretty-printing pipeline. (E.g. the abstract syntax comes with only one type of loop construct, but a concrete syntax provides several.)
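
Python's own parser (3.9+, for `ast.unparse`) demonstrates exactly this loss: user formatting, redundant parentheses, and comments do not survive the parse/pretty-print round trip:

```python
# Round trip through Python's parser: spacing, redundant parentheses,
# and comments are all discarded at parse time and cannot be recovered
# by the pretty-printer.
import ast

source = "x  =  (1 +  2)  # my spacing, my parens, my comment"
printed = ast.unparse(ast.parse(source))
print(printed)  # x = 1 + 2
```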

There and Back Again, Refactoring Browsers.

Smalltalk refactoring browsers turn a parse tree back into source code by allowing the user to specify pretty printing settings.

Anything from the indentation of lines to the spaces between a function name and its parentheses can be configured. More information on formatting abstract syntax trees is on page 89 of Don Roberts's Ph.D. thesis, Practical Analysis for Refactoring.

The parse tree used in the original refactoring browser is an extended version of the built-in tree, the extended version includes comments and other information usually lost, see page 84 of the same thesis.


--Shae Erisson - ScannedInAvian.com

Sort of, but not quite

and I thought they were heading in the right direction: why should it be possible to make a syntax error? The editor should prevent it! What we hand to the compiler should already be an abstract syntax tree instead of a stream of characters.

In practice, this turns out to be too intrusive, as people want not just syntactically valid translations, but also wish to do the sort of syntactically invalid translations that they are used to from editing text. That's reasonable, as there are often editing "shortcuts" that take you from one valid state to another very easily, but require going through invalid states.

The state of the art in real-life-use editors is that the editor is responsible for creating the best syntax tree it can on the fly, and warning the user interactively when their code is in an invalid state. Automated refactorings can use this tree for large- or small- scale program transformations, and a parallel analysis thread is responsible for running tree traversals and highlighting possible non-compile-fatal error states found (dead code, deprecated or non-performant usages, portability concerns, likely bug patterns). A modern IDE may ship with dozens of automated transformations and hundreds of automated error detectors ("inspections" or "audits", in marketing-speak) available to the user. "Structural" search and replace is available for user-created parse-tree search or transformation, but good old textual search-and-replace is supported as well.

Best of both worlds, really

In practice, this turns out to be too intrusive, as people want not just syntactically valid translations, but also wish to do the sort of syntactically invalid translations that they are used to from editing text.

I just finished reading Generative Programming and the chapter on Intentional Programming was, essentially, non-stop praise for the idea that programmers don't enter text in the traditional sense. I found the whole idea unsettling for the same reasons you mentioned.

You have to allow me to make mistakes! :)

Epigram, again

As Paul Snively said earlier, see Epigram for a best-of-both-worlds approach to free text editing and syntax directed editing (and a very cool typesystem that makes it all possible).

--Shae Erisson - ScannedInAvian.com

Baroque concoction of really neat ideas

I know I'm not the target audience for this language, but it seems like a language that was designed without the kind of external validation that only real code can give you. I went through the spec, and every time I came across yet another really neat feature I found myself pitying anyone asked to maintain code written in a language with this many features in version 1.0. Given that historically it's been easier to _add_ features to languages than to remove them, adding all these features now seems like a mistake. It feels like the kind of premature optimisation that is going to make it hard to change this language when real users get their hands on it.

Real users are likely to ask for even more features and Fortress seems doomed to become a baroque concoction of really neat ideas. That's assuming that anyone is ever able to implement all these features. The world doesn't need another Perl 6 no matter how celebrated the inventors.

Just once I'd like to see a small clean language (like Steve Dekorte's Io) get this kind of corporate backing.

Make no mistake

[Spam deleted. -- Ehud]

Parallelism

[Still skimming the ref]

How do the parallel features handle arbitrary classes? E.g. Could I design custom data collection classes with appropriate traits and have the compiler figure out the best way to parallelize them according to memory locality, or state of the heap or something? Are there constraints on effects imposed by the parallel execution?

implementation time frame?

This looks like an excellent general-purpose functional language... but it looks like it's currently just a spec. Do they plan to implement it? Are there any implementations currently in progress?

Offtopic, but..

I wonder if v0.618 is a reference to φ (1/φ ≈ 0.618).

It'd be cool if they versioned the spec a la TeX version numbers.

About Traits... and language research

A link for those that want to know more on Traits:
http://www.iam.unibe.ch/%7Escg/Research/Traits/index.html. The TOPLAS paper gives a good overview.

Note that I worked on Traits myself, but I just thought that the link might interest some of you.

In response to one of you who asked whether there is room for actual language research: I think there is (obviously). The problem is not that a lot of things already exist: that is really great! One of the interesting things is in seeing what drove the decisions behind some of these older inventions, and seeing whether they are still valid. For example, Smalltalk needed a lot of memory, and hence needed expensive hardware to run on. But this is no longer the case. Another example: my current desktop machine has two gigabytes of memory, while most languages we use were conceived in times when memory was counted in kilobytes. So maybe we can put that to good use and come up with a new language addressing issues faced by developers today, while taking advantage of our hardware. I compare this to disciplines much older than computer science (take physics, for example), where there is also still room for a lot of innovation in seemingly fully understood areas. And suddenly constants are no longer constant, and atoms no longer atomic.

Why not allow LaTeX syntax?

The ability to use Unicode for more mathematical notation is nice, but I wonder why they did not simply adapt the syntax of LaTeX as an ASCII-based alternative for entering it. Practically everyone already knows it.

Wow, nice language.

It seems a bit like my NICOL...(but what do I know?)

Seriously now, I don't think such bloated languages are needed. What is needed is a programming language that is assembly, has nice S-expression syntax, and can also be 'executed' at compile time. Then all other constructs can be made with this basic mechanism.

(but as I said above, what do I know? :-) )

Looks like Frink

It sounds like many of the design goals and intended audience overlap quite a bit with Frink, which does a lot of things Fortress is trying to do. It makes me wonder how much of this has been thought through fully. For example, one of the samples shows:

spin:AngularVelocity := 2 π radian / 226 million year

This implies that the designers have either decided:
  1. To make division a lower precedence operator than multiplication. This would be surprising in a language intended to replicate normal mathematical notation.
  2. To try and do some disambiguation based on context. Unfortunately, this approach punishes those who use or expect normal mathematical notation, and does some very unexpected things as I've written about in the Frink FAQ.
  3. This example should be rewritten with parentheses.
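
Dropping the units and keeping only the numbers, the two candidate readings of that expression differ enormously, which is why the precedence choice matters; a quick Python check:

```python
# The two candidate readings of "2 pi radian / 226 million year",
# with units dropped and only the numbers kept:
import math

two_pi, n, million = 2 * math.pi, 226, 1e6

flat_left_to_right = two_pi / n * million     # (2 pi / 226) * 10^6
tight_juxtaposition = two_pi / (n * million)  # 2 pi / (226 * 10^6)

# The readings differ by a factor of 10^12, so the language has to
# commit to one precedence rule (or force parentheses).
print(flat_left_to_right / tight_juxtaposition)
```
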
I'd like to know more about numerical types. I'm curious to know if there will be a "do the right thing" type of numerical type, so that, say, a number is automatically promoted from integer to arbitrary-size integer to rational (say, 1/2 doesn't become 0) to floating-point through complex, etc., so the programmer doesn't have to micromanage that throughout their program.

Of course this is slower than making the programmer figure it all out manually, but it's very useful. It's not easy to reconcile the goals of Doing the Right Thing and Doing It Fast. :)
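
As a sketch of what such promotion buys, Python's `fractions` module stands in for the rational rung of such a tower: 1/2 stays exact instead of truncating to 0.

```python
# fractions.Fraction as a stand-in for automatic numeric promotion:
# 1/2 stays an exact rational instead of truncating to 0.
from fractions import Fraction

half = Fraction(1, 2)
print(half + Fraction(1, 3))  # 5/6, still exact
print(float(half))            # 0.5 once floating point is requested
print(1 // 2)                 # 0: what fixed-integer division gives
```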

More disambiguation: if you have a function called "x" and a variable called "x", what does x(x+1) do? Is it a function call or just another mathematical expression?

In any case, I'm quite interested in the language's progress. A lot of the things that this team is working on are things that I'm working on too.

Precedence

I guess they made division lower precedence only than the invisible multiplication operator, which is normal in mathematical notation.

Citation?

Do you have a citation for this? I've never heard of anyone treating "invisible" multiplications any differently, and would find it very surprising indeed. It's certainly not the case in established languages like, say, Mathematica that follow standard mathematical notation very closely.

I'd also find the language to be pretty (surprising|untrustable) if it completely changed its behavior if you switched an implicit multiplication to be explicit.

Math Typography in Programming

Sam Tobin-Hochstadt: Fortress syntax [will] express mathematical notation as closely as possible. That means that some things require context-sensitive disambiguation, because that's how mathematical notation works.

Anton van Straaten: I would vote strongly against allowing such ambiguity in a language.

Anton is right. You want a model-view distinction. The view can float, ambiguously or otherwise, on a clear semantic model. That is how Wolfram Research solved these problems for Mathematica in its multi-year typography effort. The input/output techniques are good and cover much of the ground you want. There is also provision for context-sensitive disambiguation where the user demands it.

My (perhaps false) impression is that Fortress people are not deeply intimate with MMA. That is sad, since Guy is Mr. Lisp and MMA is like Lisp in important ways. It's a large system and Guy's brief comments fail to do it justice (I know he wasn't trying, but still). Saying that it can return large expressions (true) bypasses its interactive math interface, a world gold standard. I've used MMA for something like 15 years. In the typography department it has much to offer, and the similarities to Fortress goals are striking.

(Edit: Private conversations with Guy indicate the impression was correct. Their information on MMA is basically second-hand. Guy also sometimes uses the term "notation" when he really means "user input encoding" - user key/mouse gestures to put something on screen.)

Not everything is perfect. I pester Wolfram all the time about minor semantic issues (they never make ComplexJ stay put, legacy holdovers C, E, and I are evil, the rational domain is ill-defined, existential queries are hard to pose) but the typeset i/o is excellent and arouses few complaints. Whatever one thinks about Mathematica's programming language and syntax, the typeset i/o is gorgeous and intuitive. I'd like to review it without becoming a commercial.

Someone will mention TeX. Let's be clear that MMA i/o is true user i/o, not static typesetting markup. Mathematica, for its part, can already handle TeX and MathML import/export and copy/paste. It even includes TeX aliases for those who live and die by them. (Edit: You can also define your own input aliases and in fact, entire notational systems.) MathML is better for markup anyway, since it captures math semantics, and enjoys W3C and math vendor support.

Mathematical typography is a huge undertaking. Please don't reinvent it. Help someone already working on it. The place to start Fortress' editor is a working, open-source MathML editor.

Mathematics is a weird language with layout rules not captured by Unicode qua Unicode. Unicode alone does not even address human-language layout problems; cf. Graphite. So my feeling about "special characters" is this: using Unicode is fine, but you'll have to exceed Unicode anyway to represent mathematics. I happen to like Mathematica's special-character handling. Each character has a glyph, but also layout rules (a summation sigma grows with its argument), an ASCII name ( \[DifferentialD] ), and a short alias ( escape-dd-escape ) for the ASCII name. The names mean that Mathematica files can be, and in fact are, stored strictly as 7-bit ASCII; they don't need Unicode at all. That was an intentional design decision. These files retain provision for bitmap caching and such. Unicode serialization is fine, if Fortress wants to do that.

In the model/view sense, Mathematica calls views "forms," of which there are many. TraditionalForm and StandardForm are typeset views. Both work for input and output. (The reference pages show explicit calls which are not required in general.) FullForm is the direct AST or underlying model, if you please, serialized into 7-bit ASCII.

StandardForm is TraditionalForm minus ambiguities. Compare. Typically you'll see tokens appearing more like programming function calls. You might also see specialized characters with unique semantics like DifferentialD. Unambiguous StandardForm is the default. OutputForm is an ASCII-art equivalent, the old default output from pre-typography days. It works on ASCII text terminals.

Dave Griffith: The state of the art in real-life-use editors is that the editor is responsible for creating the best syntax tree it can on the fly, and warning the user interactively when their code is in an invalid state.

That's how Mathematica's editor works. There is an underlying FullForm at all times. For typesetting I generally use the palettes rather than keyboard navigation. Either way, input is like filling out forms, you just tab from box to box and type numbers or expressions. Even in the temporary state (empty boxes) there is a FullForm underneath. Autocopy facilitates interaction by turning typeset output into typeset input on the fly. One can interact with Mathematica in 7-bit ASCII as well, and you can even force that mode by using the kernel stand-alone. The kernel can also manipulate editor objects programmatically.

That is for programmers only. The basics anyone can use. Non-programmers sometimes get confused about symbol definitions ("where do they go?") and prefer systems like MathCAD which offer a spreadsheet-like interface: if it's on the screen, then it's defined, but if it's not on the page, it's not. Broken formulas are colored red. MathCAD also supports MathML last time I checked. Speaking as a programmer myself, I find MathCAD more difficult to use and limiting, but understand its appeal to non-programmers. Well, MMA 5.1 at least includes a GUI kit.

Mathematica does not do physical units, but Fortress should. The stock advice in Mathematica is to define unit symbols ("Second") and multiply them against expressions. The problem is that these symbols are no different from variables, and thus enter into formulas. What you need is a separate type system which merely tags expressions without modifying them. See my prior remarks on units.
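
A minimal sketch of that "tag, don't substitute" approach in Python; the `Quantity` class and its unit dictionaries are invented for illustration and are not Mathematica's or Fortress's API:

```python
# Invented Quantity class: units tag a value and are checked, but
# never enter the formula the way a bare "Second" symbol would.
class Quantity:
    def __init__(self, value, units):
        self.value = value
        self.units = {d: p for d, p in units.items() if p}  # drop zero powers

    def __mul__(self, other):
        units = dict(self.units)
        for dim, power in other.units.items():
            units[dim] = units.get(dim, 0) + power
        return Quantity(self.value * other.value, units)

    def __add__(self, other):
        if self.units != other.units:
            raise TypeError(f"unit mismatch: {self.units} vs {other.units}")
        return Quantity(self.value + other.value, self.units)

metre = Quantity(1, {"m": 1})
second = Quantity(1, {"s": 1})
speed = Quantity(3, {"m": 1, "s": -1})
distance = speed * Quantity(10, {"s": 1})  # 30, units {"m": 1}: s cancels
# metre + second raises TypeError: the tag is enforced, yet no spurious
# unit symbol ever appears inside the arithmetic itself.
```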

Now we tread semantic territory that I do not really want to cover. I may post a separate message when time allows.

Solid Commentary

You are very correct about the primal importance of the GUI/editor on the path toward Fortress really shining as a glyphical language.

I have some conceptual work here regarding a similar environment for Common Lisp, for implementation with Cocoa:

http://nimbusgarden.com/ben/_HCI_/SeeLisp/SeeLisp_files/SeeLisp.038-004.png

http://nimbusgarden.com/ben/_HCI_/SeeLisp/SeeLisp_files/SeeLisp.033-001.png

http://nimbusgarden.com/ben/_HCI_/SeeLisp/

http://nimbusgarden.com/ben/_HCI_/

...and would like to repurpose this work toward a Bidirectional Unicode MathML-based editor for use as a pretty-print Fortress editor/environment.

Is anyone familiar with the most active projects along these lines (MathML&vector graphics Fortress editors)? My initial survey did not reveal lots of activity. The Eclipse plugin perhaps?

Is it too early for such an effort?

PSciCo Project

Prof. Robert Harper (CMU): The goal of the PSciCo Project is to explore the use of advanced language technology for scientific computing. There is at present a sharp division between prototyping languages, such as Mathematica and MatLab, and production languages, such as C and Fortran, for scientific computing. Prototyping languages are convenient to use interactively, support symbolic and numeric computation, but tend to be quite inefficient and semantically suspect. Production languages are batch-oriented, provide little or no support for symbolic computation, but tend to produce very efficient code. The goal [is to] eliminate this distinction.

Quoted. Editorial note: Actually, math languages can be blindingly fast, since their internal algorithms are written in C and heavily optimized by numerical experts. I consider speed more an issue of judicious language use than intrinsic flaws, though there are days.

Axiom seems similar

Tim Daly, proprietor of Axiom, wrote as follows in May 2004:

My goal isn't to solve physics/math problems. My goal is to build a system that will be used by computational mathematicians 30 years from now. Once this is the stated goal several things become clear.

One clear problem that every system suffers from is that the research papers are disconnected from the code. Mathematicians do the research and programmers do the code. Usually it is the same person with two mindsets. So the math mindset writes the theory with theorems and proofs then publishes it, possibly making claims (with no way to verify the claims by others). The programmer mindset writes the code which hopefully correctly implements the theory but never publishes it. Or publishes it as a "contribution" to some system....

The gap between the theory and the implementation (I call it the impedance mismatch) is too large for most systems....Systems like Axiom are much closer to the mathematics. But not close enough. We need systems that span this gap in carefully structured ways so we can be efficient without being obscure....

My current...claim is that we need to start with an old idea "Literate Programming" and evolve it to suit the needs of the next generation Computational Mathematician.

LtU also discussed Axiom.

a PL better than Java deemed an enormous challenge

I don't think I've seen this mentioned on LtU yet: The Soul of a New Programming Language, an interview with Guy Steele.

The opening paragraph:

Guy Steele leads a small team of researchers in Burlington, Massachusetts, who are taking on an enormous challenge -- create a programming language better than Java.

It was

I don't think I've seen this mentioned on LtU yet: The Soul of a New Programming Language, an interview with Guy Steele.

I guess it was mentioned. The OP contained this link, or did you mean something else?

Paul Graham Quotes Guy Steele

"We were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp." - Guy Steele, co-author of the Java spec (link)

MathML Links

MathML Editors; MathML E-list.

Don't forget that OpenOffice has an equation editor.