Peter Van Roy: Programming Paradigms for Dummies

Van Roy, Peter (2009). Programming Paradigms for Dummies: What Every Programmer Should Know. In G. Assayag and A. Gerzso (eds.), New Computational Paradigms for Computer Music, IRCAM/Delatour, France.

This chapter gives an introduction to all the main programming paradigms, their underlying concepts, and the relationships between them. We give a broad view to help programmers choose the right concepts they need to solve the problems at hand. We give a taxonomy of almost 30 useful programming paradigms and how they are related. Most of them differ only in one or a few concepts, but this can make a world of difference in programming. We explain briefly how programming paradigms influence language design, and we show two sweet spots: dual-paradigm languages and a definitive language. We introduce the main concepts of programming languages: records, closures, independence (concurrency), and named state. We explain the main principles of data abstraction and how it lets us organize large programs. Finally, we conclude by focusing on concurrency, which is widely considered the hardest concept to program with. We present four little-known but important paradigms that greatly simplify concurrent programming with respect to mainstream languages: declarative concurrency (both eager and lazy), functional reactive programming, discrete synchronous programming, and constraint programming. These paradigms have no race conditions and can be used in cases where no other paradigm works. We explain why this matters for multi-core processors, and we give several examples from computer music, which often uses these paradigms.

I have not found this paper in the LtU archives, but I thought it would likely be of interest to this community. Of course, the author is well known here (e.g., his book Concepts, Techniques, and Models of Computer Programming). I like the bird's-eye view of this paper.


Diagram

Relevant story: Diagram showing all programming paradigms and their relationships, which seems to be the same as fig. 2 on p.13.

It's this kind of Van Royesque work, the sorting out of design concepts, that I was complaining was lacking in the Semantics: Logic vs PL thread. Perhaps we need a Semantic Paradigms for Dummies.

This is the first mention of the paper, I think, and should be promoted to the front page.

Diagram revised

Yes, the diagram is a slightly revised version of the one discussed before. As far as I understand, the diagram was made for a talk at a Symposium at IRCAM in 2007, and this paper/chapter is part of a forthcoming book "New Computational Paradigms for Computer Music".

BTW: I reformatted the announcement to match the common format of front page stories of this kind.

[Promotion] Seconded

Good stuff.

Done.


Outside-visible named state ?

There is one point in the paper that I found surprising, or at least counter-intuitive. Page 27, emphasis mine:

A component without named state, once proved correct, always stays correct. Correctness is not so simple to maintain for a component with named state. A good rule is that named state should never be invisible: there should always be some way to access it from the outside.

I would rather suppose that state hiding is generally a good idea, and that it is not a problem for a component to use state internally as long as the effects are not observable, which seems in direct contradiction with making state "visible to the outside". Do I misunderstand the idea, or do you have a more detailed explanation?

There ought to be different levels of access

regardless of the presence of state--what is visible via an API is frequently a subset of what is visible via introspection or with a debugger.

Sometimes, named state is necessary to support a stateful abstraction--a BankAccount object by design will hold different values for its AccountBalance attribute (however derived) at different times. Other times, named state may support a stateless abstraction, for example, with explicitly encoded memoization in a language without a delay primitive or default lazy evaluation.
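
To make the second case concrete, here is a minimal OCaml sketch (mine, not from the comment or the paper) of memoization built on an explicit named-state cell, a Hashtbl, hidden behind an observably pure interface:

(* A minimal sketch of named state -- here a hash table -- supporting a
   stateless abstraction. The cache is internal; callers see only a
   pure-looking function. All names are illustrative. *)
let memo_fib : int -> int =
  let cache : (int, int) Hashtbl.t = Hashtbl.create 64 in
  let rec fib n =
    match Hashtbl.find_opt cache n with
    | Some v -> v
    | None ->
        let v = if n < 2 then n else fib (n - 1) + fib (n - 2) in
        Hashtbl.add cache n v;
        v
  in
  fib

(* memo_fib 40 returns the same value as an unmemoized fib; the named
   state is observable only as a timing difference, not through the
   interface. *)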

One other notion regarding state might merit some expansion. The Big Chart, which has been expanded quite a bit since its last appearance on LtU, notes a difference between "cell" and "local cell"; the latter is defined in the context of concurrent languages that permit what is essentially thread-local storage, where state is not visible across process boundaries. "Cell" without the local qualifier refers either to mutable cells in sequential languages, or to shared state in concurrent languages such as Java. I think this is a useful dichotomy to make, but one which merits further consideration.

Furthering the confusion, most of the examples cited for "local cell" are languages or systems which eschew shared state because of the well-known reliability and scalability problems with monitor-based synchronization, even when the system in question has multiple threads running on the same processor. Yet the final example is tuple spaces a la Linda, the purpose of which is to simulate monitor-based shared memory on a distributed system. It's not a full simulation, obviously: since individual transactions with the tuple space are atomic but explicit locking of the tuple space is not available, tuple spaces don't suffer from deadlock. Still, the inclusion of a means to simulate shared memory on hardware that doesn't support it easily seems to violate the spirit of the "local cell", even though tuple spaces operate on top of message passing in the distributed case. (Of course, the same is true for RDBMSs; computers running enterprise apps do not share state with the DB server. Instead messages--SQL commands/queries and their results--are exchanged between client and server, and higher levels of synchronization are handled on top of the IPC mechanism.)

Local or External State

Encapsulating and hiding state seems to be one of those questionable 'best practices' that causes lots of unnecessary headaches (e.g. for persistence, maintenance, upgrade, extension, live programming). I came to a similar conclusion in my article local state is poison.

I wonder how much I was influenced by this paper. I remember having read it, but don't remember the details, and my conclusion seems to come from a different direction.

In any case, external (hence accessible) named state can still be secured, e.g. using cryptographic names (PKI or HMAC or proof-of-work or sparse capabilities) or secure partitioning (chroot sandbox, directory caps, opaque paths).
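
As a crude illustration of the cryptographic-name option (my own sketch, not from the comment), an external state cell can be indexed by a name derived from a secret, so it remains accessible from the outside yet unguessable; OCaml's stdlib Digest module (MD5) stands in here for a real keyed MAC such as HMAC-SHA256:

(* Sketch only: a "sparse capability" name for externally visible state.
   Digest is stdlib MD5, used purely for illustration; all identifiers
   below are made up. *)
let capability_name ~secret ~id =
  Digest.to_hex (Digest.string (secret ^ "/" ^ id))

let store : (string, string) Hashtbl.t = Hashtbl.create 16

let () =
  let cap = capability_name ~secret:"s3kr1t" ~id:"account-42" in
  Hashtbl.replace store cap "balance=100";
  (* The state is reachable by anyone holding the name, but the name
     cannot be guessed without the secret. *)
  match Hashtbl.find_opt store cap with
  | Some v -> print_endline v
  | None -> print_endline "no such capability"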

'Closures' good for AOP?

Peter says in the section on Closures that they are 'good for AOP'. I'd need to disagree. They're good for many things, but AOP is for working around the weaknesses of closures: closures usually fail to integrate concerns relevant to the caller or the caller's environment. Just passing closures into closures can help, but doesn't fix the problem, either.

Admittedly, I don't believe 'weaving' is all that great either.

Something closer to IBM's Hyperspaces (require a feature, provide an aspect of a feature, combine provided feature aspects--scattered throughout the code, gathered by the compiler, then combined by a user-defined function), plus extensive language support for common concerns (like security, concurrency, etc.), plus some support for dynamic scoping (or other forms of implicitly threaded context) for introducing new concerns (metaprogramming), seems a reasonable 'multi-concept' approach to AOP that doesn't require weaving.

But closures... by themselves, closures aren't nearly effective enough.

Lisp and Haskell have certainly put their brands of implicitly threaded context (special vars and monads) to good use.
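
As a rough OCaml analogue of that kind of implicitly threaded context (my own sketch; the names current_logger and with_logger are made up), a Lisp-style "special variable" can be approximated by a context cell that is rebound for the dynamic extent of a call:

(* Sketch of a dynamically scoped context variable. Fun.protect restores
   the previous binding even if the thunk raises. *)
let current_logger : (string -> unit) ref = ref (fun _ -> ())

let with_logger logger thunk =
  let saved = !current_logger in
  current_logger := logger;
  Fun.protect ~finally:(fun () -> current_logger := saved) thunk

let log msg = !current_logger msg

let () =
  with_logger
    (fun m -> print_endline ("LOG: " ^ m))
    (fun () -> log "inside the dynamic extent");
  log "outside: silently dropped"

Any code called inside the dynamic extent picks up the concern implicitly, without threading an extra parameter through every signature, which is the appeal being pointed at here.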

For non-cflow advice...

...closures and state certainly give you all the raw ingredients you need.

(* An "AOP-style fixed point" -- use backpatching to build a recursive function,
   and then return the ref cell (aka join point) so people can mutate it.
   Peter Landin may have invented this method, btw -- though he was too 
   disciplined to let the ref cell escape!
*)

let aop_fix f =
  let joinpoint = ref (fun _ -> assert false) in
  let () = joinpoint := f (fun z -> !joinpoint z) in
  ((fun z -> !joinpoint z), joinpoint)

(* Here's how you can add before and after advice to a function. By 
   updating the joinpoint/recursive-ref, we can add code to be run on
   recursive calls to a function.
*)

let before_advice advice joinpoint =
  let f = !joinpoint in 
  joinpoint := (fun x -> let () = advice x in f x)

let after_advice advice joinpoint =
  let f = !joinpoint in
  joinpoint := (fun x ->
    let v = f x in
    let () = advice x v in
    v)

(* Of course, every example must be factorial. Notice that we get a 
   function we can call, as well as a join point fjoin. 
*)

let (fact, fjoin) =
  aop_fix (fun fact n -> if n = 0 then 1 else n * fact (n - 1))

(* Add some advice to log what the argument is. *) 

let () =
   before_advice (fun n -> Format.printf "Called with: %d\n" n) fjoin

(* We get the following results *)

# fact 5;;
Called with: 5
Called with: 4
Called with: 3
Called with: 2
Called with: 1
Called with: 0

- : int = 120
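
For completeness, after_advice (defined above but never exercised in the original post) can be attached the same way; a hypothetical follow-up:

let () =
  after_advice (fun n v -> Format.printf "fact %d = %d\n" n v) fjoin

(* A subsequent call such as fact 3 would now log the argument on entry
   (before advice) and each intermediate result on return (after advice). *)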

Cflow cannot be simulated this way, since it's basically reflection on your continuation. Of course, since it positively grinds contextual equivalences into dust, this is probably a good thing. :)