LtU Forum

Sad News - Ken Anderson Dies Unexpectedly at a Conference

Forwarded message from Timothy J Hickey
-----

Cc: peter@norvig.com, Geoffrey Knauth, Kathleen Huber
From: Timothy J Hickey
To: JScheme Developers, JScheme Users
Subject: [Jscheme-user] Sad News about Ken Anderson
Date: Sat, 22 Jan 2005 09:37:37 -0500

Dear JScheme community,

I'm sorry to bring you the very sad news that Ken Anderson,
one of the co-developers of JScheme, died last night at the
Spam conference in Cambridge. He was in great spirits, talking
about JScheme, when he collapsed mid-sentence.

Ken was one of the three main developers of JScheme.
JScheme had two parent languages -- Peter Norvig's SILK
and Tim Hickey's Jscheme applet. Ken came onto the project
in the beginning (1997) and built the Scheme-Java interface that became
the javadot notation. He was responsible for the beautifully designed
cache techniques that make javadot so efficient. He also
was a tireless developer, chasing down bugs, dreaming up and
seriously analyzing proposed new optimizations. He was a
talented software engineer who loved building an elegant
and simultaneously practical language. He was also a really
nice guy. Kind, generous, thoughtful. The world is a poorer
place for his passing.

Ken touched many lives and brought many communities together.
I worked with him on JScheme for 2-3 years (publishing two papers)
before I met him in person (or even knew what he looked like!)
We've met regularly since to plot the future of JScheme and to
share our war stories of JScheme in the world. He has played
a major role in the Lightweight Language conferences and was
also an active player in the anti-Spam effort. I will miss him,
as I'm sure many others will as well.

JScheme will remain an active project and is very much part
of what Ken has given to the world. I am honored to have been
his friend and codeveloper.

Sincerely,

---Tim Hickey---

Dynamic Eager Haskell

I was looking at some memory profiles of a misbehaving Haskell program a while back, and the electrical engineer in me couldn't help but think that the graphs looked very much like step responses. It got me thinking that lazy programs with a space leak might be thought of as an unstable dynamic system (a pole in the right half-plane). You deal with problems like that with control theory. But how do you apply it to a Haskell program?

Maybe with something like Eager Haskell. My understanding of Eager Haskell is that it tries to execute your program eagerly, and when it starts to exhaust memory (from trying to evaluate an infinite list, etc.) it falls back to laziness. That doesn't seem very sophisticated. But what if the "eagerness" knob were available to the program itself, or perhaps to another agent? You could then start to think about building better controllers.

What variables would we want to control? I don't know, and it would probably depend on your algorithm, but interesting candidates might be: the rate of change of memory usage, the speed at which characters are written to your output file, the ratio of memory usage to CPU usage (i.e. we're creating a lot of thunks that aren't being evaluated), etc.

I could also imagine that applying a dose of control theory to the behavior of lazy programs might make it easier to reason about their resource usage. Heck, even if it didn't lead to anything useful, it might be interesting to put a PID controller into your runtime system, just to see what happens.
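For the curious, the "PID controller in your runtime" idea amounts to a textbook control loop. The sketch below is entirely hypothetical: no Haskell runtime exposes an eagerness knob, and the class name, gains, and memory-usage measurement are all invented for illustration; only the PID arithmetic itself is standard.

```java
// Hypothetical sketch: a PID loop whose output would nudge a runtime's
// "eagerness" knob up or down based on memory usage. All names here are
// invented for illustration.
class EagernessPid {
    private final double kp, ki, kd;   // proportional, integral, derivative gains
    private double integral = 0.0;
    private double prevError = 0.0;

    EagernessPid(double kp, double ki, double kd) {
        this.kp = kp;
        this.ki = ki;
        this.kd = kd;
    }

    /**
     * One control step. error = setpoint - measured (e.g. target heap size
     * minus current heap size); dt is the time since the last step. The
     * return value would be the adjustment applied to the eagerness knob.
     */
    public double step(double setpoint, double measured, double dt) {
        double error = setpoint - measured;
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
}
```

The controlled variable could be any of the candidates above; tuning the three gains is where the actual control theory would come in.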

Advanced Topics in Types and Programming Languages

The long-awaited sequel to Types and Programming Languages by Benjamin Pierce is out! Check it out here:
Advanced Topics in Types and Programming Languages.

It is available from Amazon. I'm certainly getting a copy. TAPL was great!

JVM-based scripting languages poll

A recent poll found that Jython was the favorite JVM-based scripting language among those polled, weighing in at 59%. Groovy came in second at 15%.

Santa Claus in Polyphonic C#

We've seen the Santa Claus problem before, but that was a few Xmases ago.

Jingle Bells: Solving the Santa Claus Problem in Polyphonic C#

Incompleteness in semantics and parallel-or

I remember seeing a link on LtU to some lecture notes explaining that in (denotational?) semantics of simple imperative programming languages, the semantics would have a serious hole if parallel-or was not included; the strong-exists operator made things even better.
I have searched and searched the archives, and cannot find any post that remotely resembles this. Help!

Asynchronous Middleware and Services

An interesting CFP, that given the discussions here recently should interest at least a couple of readers.

Speech-to-text friendly programming languages

I have a few friends who have severe repetitive stress injury and can effectively no longer type for long periods of time.

I'm trying to consider an environment and a language that would be suited to speech-to-text input. My first thought on the base language is Standard ML, since definitions are self-contained and require no "punctuation", e.g.

val x = 5
val y = 7

is valid and complete without double-semicolons at the end. Thus, you could say "let x be five, let y be seven" and produce the above code without too much interpolation.

That said, there would have to be a grammar that translated a precise speech into ML and to be really effective, the ML generated would have to be constantly reparsed and kept in a symbol-table state so that the speech processing program could use the inherent structure of the underlying language to disambiguate slurred or otherwise ambiguous speech. Another good reason to use Standard ML is that parsed ML contains more information than many other languages due to type-safety putting restrictions on variable/function usage -- more disambiguation possible.
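As a minimal sketch of the first translation step described above, here is a hypothetical mapping from a fixed spoken pattern to an SML `val` binding. The phrase grammar and the number-word table are invented for illustration; a real system would, as argued above, disambiguate against a full parse and symbol table rather than pattern-match on words.

```java
import java.util.Map;

// Hypothetical sketch: translate one fixed spoken pattern into an SML
// `val` binding. The grammar and the number-word table are invented.
class SpeechToMl {
    private static final Map<String, String> NUMBERS =
        Map.of("five", "5", "seven", "7");

    /** Translate a phrase like "let x be five" into "val x = 5". */
    public static String translate(String phrase) {
        String[] w = phrase.trim().split("\\s+");
        if (w.length == 4 && w[0].equals("let") && w[2].equals("be")) {
            return "val " + w[1] + " = " + NUMBERS.getOrDefault(w[3], w[3]);
        }
        throw new IllegalArgumentException("unrecognised phrase: " + phrase);
    }
}
```

The interesting work is exactly what this sketch omits: using the type information of already-parsed code to decide between acoustically similar candidates.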

Does anyone have any thoughts on other requirements for such a beast, or pointers to research that has already been done that I might not find through an old-fashioned ACM/Citeseer/DBLP search?

Non-Deterministic Interaction Nets

Non-Deterministic Interaction Nets

Abstract

The Interaction Nets (IN) of Lafont are a graphical formalism used to model parallel computation. Their genesis can be traced back to the Proof Nets of Linear Logic. They enjoy several nice theoretical properties, amongst them pure locality of interaction, strong confluence, computational completeness, syntactically-definable deadlock-free fragments, combinatorial completeness (existence of a Universal IN). They also have nice "pragmatic" properties: they are simple and elegant, intuitive, can capture aspects of computation at widely varying levels of abstraction. Compared to term and graph rewriting systems, INs are much simpler (a subset of such systems that imposes several constraints on the rewriting process), but are still computationally complete (can capture the lambda-calculus). INs are a refinement of graph rewriting which keeps only the essential features in the system.

Conventional INs are strongly confluent, and are therefore unsuitable for the modeling of non-deterministic systems such as process calculi and concurrent object-oriented programming. We study four different ways of "breaking" the confluence of INs by introducing various extensions:

  • IN with Multiple (reduction) Rules (INMR) Allow more than one reduction rule per redex.
  • IN with Multiple Principal Ports (INMPP) Allow more than one active port per node.
  • IN with MultiPorts (INMP) Allow more than one connection per port.
  • IN with Multiple Connections (INMC) Allow hyper-edges (in the graph-theoretical sense), i.e. connections between more than two ports.

The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software

Herb Sutter has this fascinating article discussing the notion that programmers haven't really had to worry much about performance or concurrency because of Moore's Law. However, the CPU makers are running out of ways to make CPUs faster in terms of raw MHz, and instead will be reduced to opting for multi-CPU and multi-core approaches.

As a consequence, he believes that concurrency will become a first-class concern for more developers (including language designers), and that there will be a resurgence of interest in what he calls 'efficient languages'.

I'm currently doing a lot more work with highly concurrent GUI applications in Java and discovering that there's a shortage of sensible guidelines out there on designing applications in the face of pervasive multi-threading or even ways of re-organising standard design patterns like MVC to handle inter-thread communications.

Since the only mainstream language I know of with good concurrency features built in is Java 5, I'm left wondering how people handle these issues in other languages.

Does anyone know of any research about designing for concurrency in rich client applications? Ideally in a style that minimises the amount of locking involved?
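One common answer to the "minimise locking" question, in Java at least, is thread confinement: keep all model state on a single thread and feed it mutations through a queue, so the model itself needs no locks (Swing's event-dispatch thread uses the same idea). The class below is an illustrative sketch, not an established API; `ModelThread` and its method names are invented.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: confine all model state to one thread and communicate via a
// queue, so the model needs no explicit locking. Names are invented.
class ModelThread implements Runnable {
    private final BlockingQueue<Runnable> updates = new LinkedBlockingQueue<>();

    /** Called from any thread (e.g. a GUI listener): enqueue a mutation. */
    public void post(Runnable update) {
        updates.add(update);
    }

    /** Process one pending update; returns false if the queue was empty. */
    public boolean runOne() {
        Runnable r = updates.poll();
        if (r == null) return false;
        r.run();
        return true;
    }

    /** The model's event loop: all mutations happen on this one thread. */
    @Override public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                updates.take().run();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Views would subscribe by posting a "read and publish" update to the same queue, which sidesteps most of the lock-ordering problems that plague conventional MVC under pervasive multi-threading.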
