The End of the GPU Roadmap

The slides from Tim Sweeney's High Performance Graphics 2009 keynote expand upon his Next Mainstream Programming Language talk, which we discussed some years back. To summarise, Tim makes the following points (in my interpretation):

  • The current GPU pipeline is too limiting and too difficult to program
  • The CPU roadmaps from Intel and others show a convergence between the CPU and GPU. We're going to see a lot of cores with vector units hanging off a shared cache. The cores are getting simpler, but programming them is getting harder
  • Programs that are purely functional can be automatically vectorised, and the new CPUs are going to extend the scope of vectorisation to general programs
  • Economic reality demands tools to make programming this hardware easier. New languages seem the only viable way forward

It's an interesting talk, with a bit more detail at the bit-bashing end than the previous one. The question is: who is going to make this language?

Certified Web Services in Ynot

In this paper we demonstrate that it is possible to implement certified web systems in a way not much different from writing Standard ML or Haskell code, including use of imperative features like pointers, files, and socket I/O. We present a web-based course gradebook application developed with Ynot, a Coq library for certified imperative programming. We add a dialog-based I/O system to Ynot, and we extend Ynot’s underlying Hoare logic with event traces to reason about I/O behavior. Expressive abstractions allow the modular certification of both high level specifications like privacy guarantees and low level properties like memory safety and correct parsing.

Ynot, always ambitious, takes another serious swing: extracting a real web application from a proof development. In some respects the big news here is the additional coverage that Ynot now offers in terms of support for file and socket I/O, and the event trace mechanism. But there's even bigger news, IMHO, which is the subject of another paper that warrants a separate post.

A Java Fork/Join Framework

Doug Lea: A Java Fork/Join Framework, Proceedings of the ACM 2000 conference on Java Grande.

This paper describes the design, implementation, and performance of a Java framework for supporting a style of parallel programming in which problems are solved by (recursively) splitting them into subtasks that are solved in parallel, waiting for them to complete, and then composing results. The general design is a variant of the work-stealing framework devised for Cilk.

This work is about to be incorporated into Java 7 as jsr166y:

Parallel*Array (often referred to as PA) and its planned follow-ons for sets and maps, provide an easier/better way of routinely programming to take advantage of dozens to hundreds of processors/cores: If you can think about a programming problem in terms of aggregate operations on collections of elements, then we can automate parallel execution. This generally pays off if either you have lots of elements, (in which case, it works well even if the operations are small/cheap), or if each of the operations are time consuming (in which case it works well even if there are not a lot of elements). To take advantage of this though, the aggregate processing must have a regular structure, which means that you must be able to express things in terms of apply, reduce, filter, map, cumulate, sort, uniquify, paired mappings, and so on.
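The recursive split-fork-join-compose pattern the paper describes can be sketched in a few lines. This is Python rather than Java, and it forks a thread per subtask purely for illustration; Lea's framework instead multiplexes tasks onto worker threads with work-stealing deques.

```python
import threading

THRESHOLD = 8  # below this size, solve the subproblem directly

def fork_join_sum(xs):
    """Sum a list by recursively splitting, solving halves in parallel,
    waiting for both, and composing the results."""
    if len(xs) <= THRESHOLD:
        return sum(xs)                       # small task: sequential base case
    mid = len(xs) // 2
    results = [None, None]

    def run(i, part):
        results[i] = fork_join_sum(part)     # subtask recurses the same way

    left = threading.Thread(target=run, args=(0, xs[:mid]))
    right = threading.Thread(target=run, args=(1, xs[mid:]))
    left.start(); right.start()              # fork
    left.join(); right.join()                # join
    return results[0] + results[1]           # compose

print(fork_join_sum(list(range(100))))       # → 4950
```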

A Reactive Model-based Programming Language for Robotic Space Explorers

Michel Ingham, Robert Ragno, and Brian Williams, A Reactive Model-based Programming Language for Robotic Space Explorers, Proceedings of the 6th International Symposium on Artificial Intelligence, Robotics and Automation in Space, Montreal, Canada, June 2001.

Model-based autonomous agents have emerged recently as vital technologies in the development of highly autonomous reactive systems, particularly in the aerospace domain. These agents utilize many automated reasoning capabilities, but are complicated to use because of the variety of languages employed for each capability. To address this problem, we introduce model-based programming, a novel approach to designing embedded software systems. In particular, we introduce the Reactive Model-based Programming Language (RMPL), which provides a framework for constraint-based modeling, as well as a suite of reactive programming constructs. To convey the expressiveness of RMPL, we show how it captures the main features of synchronous programming languages and advanced robotic execution languages.

Reactive programming has been discussed on LtU quite a bit in the past. Functional reactive programming, in particular, has got a lot of attention. But I don't think we've covered reactive model-based programming before. This is an interesting approach that combines constraint-based modeling with reactive programming, and compiles programs down to hierarchical constraint automata for execution. Whereas reactive languages like Esterel focus on manipulating signals, RMPL works by manipulating (potentially hidden) system states.

Translation of Tree-processing Programs into Stream-processing Programs based on Ordered Linear Types

Translation of Tree-processing Programs into Stream-processing Programs based on Ordered Linear Types, Koichi Kodama, Kohei Suenaga, Naoki Kobayashi, JFP 2008.

There are two ways to write a program for manipulating tree-structured data such as XML documents: One is to write a tree-processing program focusing on the logical structure of the data and the other is to write a stream-processing program focusing on the physical structure. While tree-processing programs are easier to write than stream-processing programs, tree-processing programs are less efficient in memory usage since they use trees as intermediate data. Our aim is to establish a method for automatically translating a tree-processing program to a stream-processing one in order to take the best of both worlds.

We first define a programming language for processing binary trees and a type system based on ordered linear types, and show that every well-typed program can be translated to an equivalent stream-processing program. We then extend the language and the type system to deal with XML documents. We have implemented an XML stream processor generator based on our algorithm, and obtained promising experimental results.

Linear logic is what you get when you give up the structural rules of weakening and contraction -- it's logic when you cannot reuse or forget any hypothesis you make in the course of a proof. This means that every formula is used exactly once, which makes it useful (via Curry-Howard) for programming, as well: linear types give us a way of tracking resources and state in a type system. Intuitively, you can think of it as a kind of static analysis that ensures that an object's runtime reference count will always be one.

However, linear logic retains the structural rule of exchange, which lets us use hypotheses in a different order than we introduced them. Ordered logic is what you get when you drop exchange, as well. (Amusingly, it was invented long before linear logic -- in the 1950s, Lambek introduced it with an eye towards applications in linguistics: he wanted to express the difference between one word occurring either before, or after, another.)

This means that with ordered logic you can express in your types the order in which data is used, which the authors here exploit to design a language whose typechecker statically rules out memory-inefficient programs.
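The two styles being translated between can be seen in a small sketch (Python rather than the paper's calculus; the token encoding is invented). The tree version materialises an intermediate tree; the stream version consumes each token exactly once, in order — precisely the discipline an ordered linear type tracks.

```python
# A preorder token stream encoding a binary tree:
# "node" followed by its two subtrees, or a ("leaf", n) token.
stream = ["node", ("leaf", 1), "node", ("leaf", 2), ("leaf", 3)]

def parse(tokens):
    """Tree-processing: build the whole tree before working on it."""
    t = next(tokens)
    if t == "node":
        return ("node", parse(tokens), parse(tokens))
    return t  # a ("leaf", n) token

def tree_sum(tree):
    if tree[0] == "leaf":
        return tree[1]
    return tree_sum(tree[1]) + tree_sum(tree[2])

def stream_sum(tokens):
    """Stream-processing: fold over tokens directly, no intermediate tree.
    Each token is read exactly once, in input order."""
    total = 0
    for t in tokens:
        if t != "node":
            total += t[1]
    return total

print(tree_sum(parse(iter(stream))))  # 6, via an intermediate tree
print(stream_sum(iter(stream)))       # 6, in constant extra space
```

The paper's contribution is deriving the second function from the first automatically, with the ordered linear type system guaranteeing the derivation is safe.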

Going functional on exotic trades

Simon Frankau, Diomidis Spinellis, Nick Nassuphis, Christoph Burgard. Going functional on exotic trades. JFP 19(01):27-45.

The Functional Payout Framework (fpf) is a Haskell application that uses an embedded domain-specific functional language to represent and process exotic financial derivatives. Whereas scripting languages for pricing exotic derivatives are common in banking, fpf uses multiple interpretations to not only price such trades, but also to analyse the scripts to provide lifecycle support and more. This paper discusses fpf in relation to the wider trading workflow and our experiences in using a functional language in such a system as both an implementation language and a domain-specific language.

Section 3 is a nice discussion of why Haskell was chosen. Section 4 illustrates one of the major benefits of using DSLs, namely that different backends can be applied to programs written in the DSL, allowing various types of runtime behavior and analysis without the need to re-code each program for each purpose (thus n+m, rather than n*m programs).
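The n+m point is easy to see with a toy deep-embedded payout DSL. The names below are invented for illustration and are not fpf's actual combinators (and this is Python, not Haskell): one script, two independent backends.

```python
# A toy deep embedding of a payout language.
class Const:
    def __init__(self, v): self.v = v

class Obs:  # an observable market value, looked up by name at pricing time
    def __init__(self, name): self.name = name

class Add:
    def __init__(self, a, b): self.a, self.b = a, b

def price(e, market):
    """Backend 1: evaluate the payout under a market scenario."""
    if isinstance(e, Const): return e.v
    if isinstance(e, Obs):   return market[e.name]
    return price(e.a, market) + price(e.b, market)

def show(e):
    """Backend 2: render the same script, e.g. for trade documentation."""
    if isinstance(e, Const): return str(e.v)
    if isinstance(e, Obs):   return e.name
    return f"({show(e.a)} + {show(e.b)})"

trade = Add(Obs("FTSE"), Const(100))
print(price(trade, {"FTSE": 6000}))  # → 6100
print(show(trade))                   # → (FTSE + 100)
```

Adding a new analysis (another backend) or a new trade (another script) is one new piece of code, not one per combination.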

Another topic that may be worth discussing is the authors' dislike of point free style.

What we really need, of course, are DSLs for financial regulation (and possibly for specifying various definitions of honesty), but that's a separate issue...

Purpose-Built Languages

Mike Shapiro, Purpose-Built Languages, ACM Queue vol. 7, no. 1, February 2009.

Mike Shapiro from Sun Microsystems examines the evolution of what he calls "purpose-built languages" (essentially domain-specific languages). Shapiro discusses the way that such languages have often been a key part of the development of larger software systems, and describes how many of them have sprung into existence and evolved without formal design. In particular, he traces the evolution of the adb debugger language. As Shapiro says:

The debugger tale illustrates that a little purpose-built language can evolve essentially at random, have no clear design, no consistent grammar or parser, and no name, and yet endure and grow in shipping operating systems for more than 40 years. In the same time period, many mainstream languages came and went into the great beyond... For purpose-built languages, a deep connection to a task and the user community for that task is often worth more than clever design or elegant syntax.

The same article also appeared in the April 2009 issue of Communications of the ACM.

The Art of the Propagator

The Art of the Propagator, Alexey Radul and Gerald Jay Sussman.

We develop a programming model built on the idea that the basic computational elements are autonomous machines interconnected by shared cells through which they communicate. Each machine continuously examines the cells it is interested in, and adds information to some based on deductions it can make from information from the others. This model makes it easy to smoothly combine expression-oriented and constraint-based programming; it also easily accommodates implicit incremental distributed search in ordinary programs. This work builds on the original research of Guy Lewis Steele Jr. and was developed more recently with the help of Chris Hanson.

I just ran across this tech report. I haven't read it yet, but the subject is particularly well-timed for me, since I just finished a correctness proof for a simple FRP system implemented via imperative dataflow graphs, and so constraint propagation has been much on my mind recently. It's pretty clear that constraint propagation can do things that FRP doesn't, but it's not so clear to me whether this is a case of "more expressiveness" or "more fragile abstractions".
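The flavor of the model can be conveyed with a minimal sketch (my own Python toy, not Radul and Sussman's Scheme system): cells hold partial information, propagators watch cells and add deductions to others. A bidirectional adder shows how the same network answers questions in any direction.

```python
class Cell:
    def __init__(self):
        self.content, self.watchers = None, []

    def add_content(self, v):
        if v is None or v == self.content:
            return                        # nothing new to learn
        if self.content is not None:
            raise ValueError("contradiction")
        self.content = v
        for p in self.watchers:           # wake every attached propagator
            p()

def adder(a, b, total):
    """A bidirectional constraint a + b = total, built from deductions:
    knowing any two of the cells determines the third."""
    def propagate():
        if a.content is not None and b.content is not None:
            total.add_content(a.content + b.content)
        if total.content is not None and a.content is not None:
            b.add_content(total.content - a.content)
        if total.content is not None and b.content is not None:
            a.add_content(total.content - b.content)
    for c in (a, b, total):
        c.watchers.append(propagate)

a, b, total = Cell(), Cell(), Cell()
adder(a, b, total)
total.add_content(10)
a.add_content(3)
print(b.content)  # → 7, deduced by the network
```

Unlike a one-directional FRP dataflow edge, nothing here distinguishes inputs from outputs; that symmetry is one of the things FRP does not give you.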

D is for Domain and Declarative

The list of accepted papers is out for the IFIP Working Conference on Domain Specific Languages. Happily for me, the program reveals much interest in languages for reasoning, decision making, and search. Even among people who are not my coauthors. :)

Declarative programming tends to attract skepticism because it has the reputation of poor and hard-to-control performance. The approach of DSL embedding appears to ameliorate this problem, and the success of SAT solvers appears to chip away at this reputation.

Meanwhile, the call for papers is out for Principles and Practice of Declarative Programming 2009, which has a venerable program committee. The submission deadline is May 7.

Swift: making web applications secure by construction

Swift is a language-based approach to building web applications that are secure by construction. Swift applications are written in the Jif language, a Java-based language that incorporates "security-typing" to manage the flow of information within an application. The Swift compiler automatically partitions the application code into a client-side JavaScript application and a server-side Java application, with code placement constrained by declarative information flow policies that strongly enforce the confidentiality and integrity of server-side information.
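The core idea of security-typing can be sketched with a two-point label lattice. Jif enforces these checks statically in the type system; the Python below (with invented names) performs the analogous check dynamically, just to show what "constrained by information flow policies" means.

```python
# A two-level security lattice: information may flow upward only.
LEVELS = {"public": 0, "secret": 1}

class Labeled:
    """A value tagged with a confidentiality label."""
    def __init__(self, value, label):
        self.value, self.label = value, LEVELS[label]

def flow_to(source, sink_label):
    """Permit a flow only if the sink is at least as secret as the source."""
    if source.label > LEVELS[sink_label]:
        raise PermissionError("illegal downward flow")
    return source.value

password = Labeled("hunter2", "secret")
print(flow_to(password, "secret"))   # ok: secret data to a secret sink
# flow_to(password, "public") would raise: secret data may not reach
# a public sink -- e.g. client-side JavaScript in Swift's partitioning.
```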

Swift was recently featured in the "Research Highlights" section of the Communications of the ACM, as a condensed version of an earlier conference paper. The original conference paper is Stephen Chong, Jed Liu, Andrew C. Myers, Xin Qi, K. Vikram, Lantian Zheng, and Xin Zheng, Secure web applications via automatic partitioning, Proceedings of the 21st ACM Symposium on Operating Systems Principles (SOSP'07), pages 31–44, October 2007.

Jif has been mentioned previously on LtU here and here.
