alt.lang.jre @ IBM developerWorks

Welcome to the new series alt.lang.jre. While most readers of this series are familiar with the Java language and how it runs on a cross-platform virtual machine (part of the JRE), fewer may know that the JRE can host languages besides the Java language. In its support for multiple languages (including some that predate the Java platform), the JRE offers a comfortable entry point for developers coming to the Java platform from other backgrounds. It also gives developers who have rarely, if ever, strayed from the Java language the opportunity to explore the potential of other languages without sacrificing the familiarity of their home environment.

This series may become an amusing resource.

The first installment is about Jython.

Making Asynchronous Parallelism Safe for the World


Guy L. Steele, Thinking Machines Corporation, 1990

    We need a programming model that combines the advantages of the synchronous and asynchronous parallel styles. Synchronous programs are determinate (thus easier to reason about) and avoid synchronization overheads. Asynchronous programs are more flexible and handle conditions more efficiently.

    Here we propose a programming model with the benefits of both styles. We allow asynchronous threads of control but restrict shared-memory accesses and other side effects so as to prevent the behavior of the program from depending on any accidents of execution order that can arise from the indeterminacy of the asynchronous process model.
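A tiny sketch (not from the paper, names invented) of the restriction Steele describes: threads run asynchronously, but shared-memory side effects are confined so the result cannot depend on scheduling order.

```python
import threading

def determinate_sum(chunks):
    """Sum a list of lists using one thread per chunk.

    Each thread writes only to its own slot of `results`, so no
    mutable location is touched by more than one thread; any
    interleaving of the threads yields the same answer.
    """
    results = [0] * len(chunks)

    def work(i, chunk):
        results[i] = sum(chunk)   # side effect confined to slot i

    threads = [threading.Thread(target=work, args=(i, c))
               for i, c in enumerate(chunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)           # deterministic combine step
```

The combine step happens only after all threads have joined, which is what makes the program determinate despite the asynchronous execution.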

This paper and the ones below may be of special interest in light of the current discussion of modern parallel computer architectures.

Richard Feynman and the Connection Machine

by way of lemonodor

An entertaining article by Danny Hillis about Richard Feynman's work at Thinking Machines Corporation on the Connection Machine.

We've mentioned the Connection Machine's data-parallel programming style on LtU before, and Connection Machine Lisp remains my all-time favourite paper in computer science.

Functional programming with GNU make

One of the gems squirreled away on Oleg's site is "Makefile as a functional language program":

"The language of GNU make is indeed functional, complete with combinators (map and filter), applications and anonymous abstractions. That is correct, GNU make supports lambda-abstractions."

Although I've classified this under Fun, Oleg exploits
the functional nature of Make for a real, practical application:

"...avoiding the explosion of makefile rules in a project that executes many test cases on many platforms. [...] Because GNU make turns out to be a functional programming system, we can reduce the number of rules from <number-of-targets> * <number-of-platforms> to just <number-of-targets> + <number-of-platforms>."

See the article for a code comparison
between make and Scheme, and check out the Makefile in question.
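To make the quoted claim concrete, here is a minimal Makefile sketch (mine, not from Oleg's article; all names invented) of the features it mentions: `call` applying a user-defined abstraction, `foreach` as map, and `filter`:

```makefile
# An "anonymous abstraction": a function of two arguments.
pair = $(1)/$(2)

TARGETS   := t1 t2 t3
PLATFORMS := linux solaris

# map: apply 'pair' over the cross product of the two lists.
# We write N + M definitions, yet obtain N * M results.
runs := $(foreach t,$(TARGETS),$(foreach p,$(PLATFORMS),$(call pair,$(t),$(p))))

# filter: keep only the linux runs.
linux_runs := $(filter %/linux,$(runs))

show:
	@echo $(runs)        # t1/linux t1/solaris t2/linux ...
	@echo $(linux_runs)  # t1/linux t2/linux t3/linux
```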

What's up guys?

So I am busy for a couple of days, and not one editor manages to post something new? I am disappointed...

Maybe it's time we recruited some more contributing editors. If you are a regular and want to join, let me know.

Generics in Visual Basic 2005

You knew it couldn't be far behind, right?

Defining and Using Generics in Visual Basic 2005 on MSDN has the details.

New Chip Heralds a Parallel Future

Missing no chance to stand on my soapbox about the need for easy PL retargeting, I bring you insights from Paul Murphy about our parallel-processing, Linux future.

[T]he product has seen a billion dollars in development work. Two fabs...have been custom-built to make the new processor in large volumes....To the extent that performance information has become available, it is characterized by numbers so high that most people simply dismissed the reports....

The machine is widely referred to as a cell processor, but the cells involved are software, not hardware. Thus a cell is a kind of TCP packet on steroids, containing both data and instructions and linked back to the task of which it forms part via unique identifiers that facilitate results assembly just as the TCP sequence number does.

The basic processor itself appears to be a PowerPC derivative with high-speed built-in local communications, high-speed access to local memory, and up to eight attached processing units broadly akin to the Altivec short array processor used by Apple. The actual product consists of one to eight of these on a chip -- a true grid-on-a-chip approach in which a four-way assembly can, when fully populated, consist of four core CPUs, 32 attached processing units and 512 MB of local memory.

Paul follows up with a shocker.

I'd like to make two outrageous predictions on this: first that it will happen early next year, and secondly that the Linux developer community will, virtually en masse, abandon the x86 in favor of the new machine.

Abandonment is relative. The new processor will emulate x86 no problem, as Paul notes. In the PowerPC line, already today we have Linux for PowerPC complete with Mac OS X sandbox. From a PL standpoint, however, this development may cattle-prod language folks off their x86 back ends and into some serious compiler refactoring work. I hope so!

Eric Gunnerson's JavaOne report

This may be of interest to LtU readers.

Most of the specific language features were discussed here previously, but the C# perspective may make this worth a look.

Database Abstraction Layers and Programming Languages

From time to time I like to return to the issue of database integration, only to once again remark that the difficulty of creating good database APIs (as opposed to simply embedding SQL) is the result of the poor programming facilities provided by most programming languages (e.g., no macros for syntax extension, no continuations or first-class functions to handle control flow, etc.).

Why return to this topic today? Jeremy Zawodny argues on his blog that Database Abstraction Layers Must Die!

Along the way he says,

Adding another layer increases complexity, degrades performance, and generally doesn't really improve things.

So why do folks do it? Because PHP is also a programming language and they feel the need to "dumb it down" or insulate themselves (or others) from the "complexity" of PHP.


Why do we need an abstraction layer anyway?

The author uses an argument I hear all the time: If you use a good abstraction layer, it'll be easy to move from $this_database to $other_database down the road.

That's bullshit. It's never easy.

Double ouch, but true enough. Databases are like women (can't live with them, can't live without them), and getting rid of one can be as painful as divorce...

So what's the solution? Surprise, surprise: use a library. But isn't that an abstraction layer? Of course it is.

What Jeremy advocates is plain old software engineering and design. Everyone should do it. I can't believe anyone does anything else.

But wait. I just told you it's hard to build such a library, since programming languages make the design of such libraries hard (e.g., should you use iterators, cursors, or return record buffers? should your library's database access routine be as flexible as a select statement?). So we design libraries that aren't very good, but hopefully are good enough.

And that's the question I put before you. We all know about coupling and cohesion. We all know about building software abstractions. Are our tools for building abstractions powerful enough for this basic and standard abstraction: the database access abstraction layer?
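To make the design question above concrete, here is a hypothetical Python sketch (all names mine) of the two access styles a library author must choose between: a lazy, cursor-like iterator versus a fully materialized record buffer.

```python
import sqlite3
from typing import Iterator

def iter_rows(conn: sqlite3.Connection, sql: str, params=()) -> Iterator[sqlite3.Row]:
    """Iterator style: rows are produced lazily; the cursor stays
    open while the caller consumes them."""
    cur = conn.execute(sql, params)
    try:
        yield from cur
    finally:
        cur.close()

def fetch_rows(conn: sqlite3.Connection, sql: str, params=()):
    """Record-buffer style: one call, whole result set in memory."""
    return conn.execute(sql, params).fetchall()

# Demo data
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("ann", 30), ("bob", 25)])

names_lazy = [row["name"] for row in
              iter_rows(conn, "SELECT * FROM users WHERE age > ?", (20,))]
buffered = fetch_rows(conn, "SELECT * FROM users")
```

Neither style is as flexible as a raw select statement, which is exactly the tension the question above points at: the host language forces the library author to pick one shape for results in advance.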

Type-Based Optimization for Regular Patterns

Type-Based Optimization for Regular Patterns, by Michael Y. Levin and Benjamin C. Pierce. WWW Workshop on High-Performance XML Processing, May 2004.

We describe work in progress on a compilation method based on matching automata, a form of tree automata specialized for pattern compilation, in which we use the schema of the value flowing into a pattern matching expression to generate more efficient target code.

A set of slides is also available.