
The new old or The "Return" to Concurrency

While building a fairly complex pipeline of operations for a content management system I am developing, I found myself resorting to the old Unix way of doing things. I need to process a large set of data (emails), so I set up a pipeline of coprocesses, with the messages passed between processes each referring to some chunk of email on disk:

     cp1 | cp2 | cp3 | cp4 | cp5 .. cp12

While this may seem trivial to most people here, I was struck by how profound this classic (20-30 year old) approach is. Yes, I know that Unix (shell) pipes are limited because they are unidirectional, but if I had followed the status quo these days, the implementation would have been a monolithic OO app (with cp1-12 being objects passing messages to each other) or perhaps something more FP (with cp1-12 being a chain of pure function calls).

Instead, here we have a truly concurrent solution that takes advantage of multiple CPUs, uses message passing, and enforces strict encapsulation -- all in a language-neutral architecture.

This came about as an experiment in using a severely restricted language (in this case AWK) to implement a fairly complex application. Working under Unix with minimal tools is yielding ways of thinking I haven't considered since my hardcore Unix days in the '80s.
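
Roughly, each stage is a small AWK filter that reads one message per line on stdin, does its work, and writes the message on to stdout. A minimal sketch of one stage -- the tab-separated "path, flags" message format here is only an illustration, not the actual protocol:

     #!/usr/bin/awk -f
     # cp3: one illustrative stage of the pipeline.
     # Each input line is a message naming a chunk of email on disk,
     # assumed here to be "path <TAB> flags".
     BEGIN { FS = OFS = "\t" }
     {
         path = $1; flags = $2
         # ... do this stage's work on the file named in the message ...
         # then pass the (possibly amended) message downstream
         print path, flags
         fflush()    # keep messages flowing between stages
     }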

While this may sound like just a simple workflow problem, my app has some conditional variability in play: some processing may need to be excluded from the workflow. But that too can be handled by traditional Unix piping: if a process has no work to do on certain data (or is instructed by the previous process not to touch it), the data is simply passed along, untouched, to the next process.
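
A pass-through is then just a guard clause at the top of a stage. A rough sketch, again assuming the illustrative "path <TAB> flags" format and a hypothetical "skip" flag set by an earlier stage:

     # cpN: forward messages this stage should not touch (illustrative).
     BEGIN { FS = OFS = "\t" }
     $2 ~ /skip/ { print; fflush(); next }   # pass along untouched
     {
         # ... this stage's real processing on the chunk named in $1 ...
         print $1, $2
         fflush()
     }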

Nothing mind-boggling here, but it did strike me as interesting from the perspective of a monolithic super-language versus small languages in a Unix environment.

Fortran articles online

I have the pleasure of thanking ACM for granting permission to post the full texts of five ACM-copyrighted articles to the FORTRAN/FORTRAN II web site at the Computer History Museum. Here they are; for those already in the ACM Digital Library, we also link to the canonical ACM version via its DOI (Digital Object Identifier).

Once again we owe a big thank you to Paul McJones.

Dataflow languages and hardware - current status and directions

Having been interested in dataflow languages and hardware for almost three weeks now, I have found very little information about these topics. The most interesting was the WaveScalar dataflow processor, mainly because of the recency of the work.

It seems (from the Google index) that dataflow programming is somewhat out of vogue.

Anyway, does anyone have any information about dataflow language implementations and hardware support for that computing paradigm?

And why does anything dataflow-based seem to be outside the mainstream?