Computer music: a bastion of interactive visual dataflow languages

The computer music community has been building systems to aid music composition and performance for over 40 years. Its developers have not been much influenced by mainstream programming fads; they have been driven mainly by the needs of composers and performers. Current systems keep the principal original metaphor, the patchboard: a collection of live modules ("patches") connected by cables, easily inspected and modified on the fly. This metaphor has full GUI support and is extended with interactive visual tools for abstraction and exploration, such as the maquette. The language semantics are based on deterministic concurrency, with controlled use of nondeterminism and state.

Current systems are full-fledged programming platforms built around visual dataflow languages. Two examples among many are the Max/MSP platform, for controlling and generating music performances in real time, and the OpenMusic design studio for composers. Max/MSP has two visual dataflow languages: one for control and one for real-time waveform generation. OpenMusic has a visual dataflow language that composers manipulate interactively, supported by a rich set of tools.

These systems are actually general-purpose programming platforms. They show that visual dataflow can be made both practical and scalable. In my view, this is one promising direction for the future of programming.
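
To make the patch-and-cable metaphor concrete for readers who have not used these systems, here is a minimal sketch in OCaml. It is not Max/MSP or OpenMusic code; the sample rate and patch names are invented for illustration. Each "patch" is a stream transformer, and "cabling" is simply plugging one patch's output into another's inlet.

    (* A minimal sketch (not Max/MSP or OpenMusic code) of the patch-and-cable
       idea: each "patch" is a stream transformer, and "cabling" is function
       application.  Sample rate and patch names are illustrative. *)

    type signal = unit -> float          (* pull one sample per call *)

    let sample_rate = 44100.0

    (* A sine-oscillator patch. *)
    let osc freq : signal =
      let t = ref 0.0 in
      fun () ->
        let v = sin (2.0 *. Float.pi *. freq *. !t) in
        t := !t +. 1.0 /. sample_rate;
        v

    (* A gain patch: scales whatever is plugged into its inlet. *)
    let gain amount (input : signal) : signal =
      fun () -> amount *. input ()

    (* A two-inlet mixer patch. *)
    let mix (a : signal) (b : signal) : signal =
      fun () -> (a () +. b ()) /. 2.0

    (* "Cabling": two oscillators through gains, into a mixer. *)
    let patch = mix (gain 0.5 (osc 440.0)) (gain 0.3 (osc 660.0))

    (* Pull a few output samples from the patch. *)
    let samples = List.init 5 (fun _ -> patch ())

Real patcher languages add GUI editing of the graph and real-time audio scheduling, but the wiring discipline is roughly this.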

Why Did M.I.T. Switch from Scheme to Python?

Daniel Weinreb has a short investigative piece about why MIT's well-known 6.001 course based around SICP and Scheme has been replaced with Python:

I’ve been seeing mail and blog postings, particularly from people in the Lisp community, about why MIT has switched from using Scheme to Python in the freshman core curriculum for the department of Electrical Engineering and Computer Science.

At the International Lisp Conference, Prof. Gerry Sussman gave a short impromptu talk explaining the new freshman curriculum. Just to get a second opinion, I later called Prof. Jacob White, one of the designers of the curriculum and lecturers for the core courses. He confirmed Gerry’s description.

Asking why they changed languages is, in some sense, the wrong question.

Also discussed previously on LtU here.

A Brief, Incomplete ... History of Programming Languages

LtU Contributing Editor James Iry has written a brief history covering every prominent programming language and inventor:

A Brief, Incomplete ... History of Programming Languages

However, some of the details seem open to question. Perhaps LtU readers could help him iron out any historical inaccuracies.

LNgen

LNgen

LNgen generates locally nameless infrastructure for the Coq proof assistant from Ott-like specifications. Its output is based on the locally nameless style advocated in Engineering Formal Metatheory and includes all of the "infrastructure" lemmas associated with that style. Compared to Ott's locally nameless backend, LNgen favors generating a large collection of infrastructure lemmas over handling complex binding specifications and methods of defining syntax and relations.

There are really three stories here:

  1. Coq 8.2 shipped a while ago.
  2. Ott, a tool for PLT semantics work, has acquired a backend in support of the increasingly popular "locally nameless" representation of binding structure in mechanized programming language metatheory (sketched below).
  3. LNgen is another tool, using a subset of Ott syntax, that addresses the same issues but takes a slightly different approach from Ott's new backend.

From the U. Penn folks who brought us the Coq tutorial at POPL '08.
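
For readers who have not seen the representation, here is a minimal sketch in OCaml. It is illustrative only; LNgen itself generates Coq definitions and lemmas, not this code. In the locally nameless style, bound variables are de Bruijn indices and free variables are names, and the "open" and "close" operations below are the kind of infrastructure these tools generate boilerplate lemmas about.

    (* A minimal sketch of the locally nameless representation of lambda
       terms (illustrative only; not LNgen's generated Coq code). *)

    type term =
      | BVar of int           (* bound variable: de Bruijn index *)
      | FVar of string        (* free variable: a name *)
      | Abs  of term          (* abstraction; index 0 in the body is its variable *)
      | App  of term * term

    (* "Open" a binder's body: replace bound index [k] with free variable [x].
       Used when descending under an abstraction. *)
    let rec open_at k x = function
      | BVar i -> if i = k then FVar x else BVar i
      | FVar y -> FVar y
      | Abs body -> Abs (open_at (k + 1) x body)
      | App (t1, t2) -> App (open_at k x t1, open_at k x t2)

    (* "Close" a term: replace free variable [x] with bound index [k].
       Used when building an abstraction from a term with a free name. *)
    let rec close_at k x = function
      | BVar i -> BVar i
      | FVar y -> if y = x then BVar k else FVar y
      | Abs body -> Abs (close_at (k + 1) x body)
      | App (t1, t2) -> App (close_at k x t1, close_at k x t2)

    (* Example: opening the body of \. 0 (i.e. \x. x) with the name "x". *)
    let opened  = open_at 0 "x" (BVar 0)        (* = FVar "x" *)
    let rebuilt = Abs (close_at 0 "x" opened)   (* = Abs (BVar 0) *)

The appeal of tools like Ott's backend and LNgen is that the lemmas relating open, close, substitution, and free variables are entirely mechanical, so they can be generated rather than proved by hand.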

Achieving Security Despite Compromise Using Zero-Knowledge

Achieving Security Despite Compromise Using Zero-Knowledge

One of the important challenges when designing and analyzing cryptographic protocols is the enforcement of security properties in the presence of compromised participants. This paper presents a general technique for strengthening cryptographic protocols in order to satisfy authorization policies despite participant compromise. The central idea is to automatically transform the original cryptographic protocols by adding non-interactive zero-knowledge proofs. Each participant proves that the messages sent to the other participants are generated in accordance with the protocol. The zero-knowledge proofs are forwarded to ensure the correct behavior of all participants involved in the protocol, without revealing any secret data. We use an enhanced type system for zero-knowledge to verify that the transformed protocols conform to their authorization policy even if some participants are compromised. Finally, we developed a tool that automatically generates ML implementations of protocols based on zero-knowledge proofs. The protocol transformation, the verification, and the generation of protocol implementations are fully automated.

This is the follow-up to this story. The prior work did not account for compromised participants. This work does.

I continue to be excited about the prospect of this previous story's work being applied to the type system described in this story, possibly resulting in an awesome new language for developing secure software.

An Interview with the Diamondback Ruby Team

On Ruby has an interview with two members of the Diamondback Ruby team.

There is a long-discussed tension between statically [...] typed languages like Java and dynamically [...] typed languages like Ruby. My hope has been to discover how to include the best aspects of static and dynamic typing in a single language. We all really like Ruby's design and features, and felt it was the right language to start with. As we go, we'll look to derive general design principles that make sense for most any programming language, to simplify and improve the process of programming generally.

The interview covers DRuby's goals as well as a bit about its OCaml-based infrastructure. More technical information about DRuby's type system can be found in Static Type Inference for Ruby.

Purpose-Built Languages

Mike Shapiro, Purpose-Built Languages, ACM Queue vol. 7, no. 1, February 2009.

Mike Shapiro from Sun Microsystems examines the evolution of what he calls "purpose-built languages" (essentially domain-specific languages). Shapiro discusses the way that such languages have often been a key part of the development of larger software systems, and describes how many of them have sprung into existence and evolved without formal design. In particular, he traces the evolution of the adb debugger language. As Shapiro says:

The debugger tale illustrates that a little purpose-built language can evolve essentially at random, have no clear design, no consistent grammar or parser, and no name, and yet endure and grow in shipping operating systems for more than 40 years. In the same time period, many mainstream languages came and went into the great beyond... For purpose-built languages, a deep connection to a task and the user community for that task is often worth more than clever design or elegant syntax.

The same article also appeared in the April 2009 issue of Communications of the ACM.

Branching Time vs. Linear Time: Semantical Perspective

Sumit Nain and Moshe Vardi, Branching Time vs. Linear Time: Semantical Perspective, invited ATVA'07 paper.

...this paper puts forward an, admittedly provocative, thesis, which is that process-equivalence theory allowed itself to wander in the “wilderness” for lack of accepted guiding principles. The obvious definition of contextual equivalence was not scrupulously adhered to, and the underspecificity of the formalisms proposed led to too many interpretations of equivalence.

In revisiting the notion of process equivalence, which is a fairly central part of concurrency theory, Nain and Vardi end up arguing in favor of a purely trace-based notion of equivalence and the use of linear-time logics. This in turn leads to a rejection of bisimulation as a tool for establishing process equivalences:

The gist of our argument is that branching-time-based notions of process equivalence are not reasonable notions of process equivalence, as they distinguish between processes that are not contextually distinguishable. In contrast, the linear-time view does yield an appropriate notion of contextual equivalence.
...

In spite of its mathematical elegance and ubiquity in logic, bisimulation is not a reasonable notion of process equivalence, as it makes distinctions that cannot be observed.

They take pains to point out that they are not claiming that bisimulation or CTL should be abandoned or are not useful. Rather, their emphasis is on the fact that bisimulation is not a contextual equivalence and is therefore not appropriate for establishing equivalence between (for example) a specification and its implementation. As they say in the conclusion of the paper:

While one may not realistically expect a single paper to overwrite about 30 years of research, a more modest hope would be for this paper to stimulate a lively discussion on the basic principles of process-equivalence theory.
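
To make the distinction concrete, here is a small sketch in OCaml (not from the paper) of the textbook example behind the argument: the processes a.(b + c) and a.b + a.c have exactly the same traces, yet are not bisimilar, because after an a-step the right-hand process may be committed to b or to c while the left-hand one is not.

    (* A minimal sketch: trace equivalence vs. bisimilarity on finite
       CCS-like process trees.  Illustrative only. *)

    type process =
      | Nil
      | Prefix of string * process      (* a.P *)
      | Choice of process * process     (* P + Q *)

    (* One-step transitions of a process. *)
    let rec steps = function
      | Nil -> []
      | Prefix (a, p) -> [ (a, p) ]
      | Choice (p, q) -> steps p @ steps q

    (* All finite traces (these processes are acyclic, so this terminates). *)
    let rec traces p =
      [] :: List.concat_map
              (fun (a, p') -> List.map (fun t -> a :: t) (traces p'))
              (steps p)

    let trace_set p = List.sort_uniq compare (traces p)

    (* Bisimilarity for acyclic processes: every step of one must be matched
       by an equally labeled step of the other into a bisimilar state, and
       vice versa. *)
    let rec bisimilar p q =
      let matched xs ys =
        List.for_all
          (fun (a, p') ->
             List.exists (fun (b, q') -> a = b && bisimilar p' q') ys)
          xs
      in
      matched (steps p) (steps q) && matched (steps q) (steps p)

    (* P = a.(b + c)   vs   Q = a.b + a.c *)
    let p = Prefix ("a", Choice (Prefix ("b", Nil), Prefix ("c", Nil)))
    let q = Choice (Prefix ("a", Prefix ("b", Nil)),
                    Prefix ("a", Prefix ("c", Nil)))

    let same_traces = (trace_set p = trace_set q)   (* true  *)
    let bisim       = bisimilar p q                 (* false *)

Here same_traces evaluates to true while bisim evaluates to false; that bisimulation makes this extra distinction, which no context can observe, is precisely the authors' complaint.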

Intentional tool released

Apparently, Intentional finally shipped.

Martin Fowler's overview.

Download video presentation via MSDN.