LtU Forum

ll-discuss's new home

For all those (like me) who didn't get the memo, I just found out that the seemingly defunct ll1-discuss mailing list has in fact found a new home at:

http://lists.csail.mit.edu/pipermail/ll-discuss/

Full abstraction is not very abstract (via comp.lang.scheme)

There's been a raging battle on comp.lang.scheme for two months and 300+ posts about interpreters, denotational semantics and compositionality. I've skipped most of it, but this post from Will Clinger was instructive to me, particularly the part on full abstraction:

In other words, one of the most un-abstract semantics that could be imagined is fully abstract. Examples like this are part of the reason full abstraction is no longer regarded as a terribly useful concept.

It turns out that full abstraction is more easily achieved by making the operational semantics *less* abstract than by making the denotational semantics *more* abstract. Hence a fully abstract pair of semantics is likely to be *less* abstract than a pair of semantics that are *not* fully abstract.
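For context: a denotational semantics is *fully abstract* with respect to an operational semantics when denotational equality coincides with contextual (observational) equivalence. Here's a minimal Haskell sketch of the contextual side (my illustration, not from the thread): two terms that a naive semantics might identify, separated by a context built from `seq`.

```haskell
-- Two terms a naive denotational semantics might equate: a diverging
-- value of function type, and a function that diverges when applied.
f, g :: Int -> Int
f = \_ -> undefined
g = undefined

-- The context  C[.] = (. `seq` ...)  separates them operationally:
main :: IO ()
main = do
  putStrLn (f `seq` "f reached WHNF")  -- prints: f is already a lambda
  putStrLn (g `seq` "never printed")   -- hits bottom: seq forces undefined
```

A semantics that equates `f` and `g` cannot be fully abstract for Haskell with `seq`; which contexts the operational side provides is exactly what full abstraction is sensitive to.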

Why type systems are interesting

In the Explaining Monads thread, Luke Gorrie wrote:
If someone could next show what makes type systems so interesting in terms that I'll understand then I'll be well on my way into the modern functional programming world.. :-)

When you write code, even in the most dynamic language, you always know something about the types of the values you're operating on. You couldn't write useful code otherwise - if you doubt that, try coming up with some examples of operations you can perform on a value without knowing something about the type of that value.

Type systems provide a way to encode, manipulate, communicate and check this type knowledge. In an earlier thread, you mentioned that you write the types of functions in the comments. That's an informal type system! Why do you use a type system? Because type systems are useful, important, and as a result, interesting.
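To make that concrete, here's a minimal Haskell sketch (the function and its name are my own illustration, not from the thread) of the same knowledge moving from a comment into a checked signature:

```haskell
-- In a dynamic language the type knowledge often lives in a comment:
--   average : list of numbers -> number
-- A static type system states the same claim where the compiler can
-- check it, at the definition and at every call site:
average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)
```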

The question didn't mention the word "static", but I'll raise it anyway. The type knowledge I'm talking about is necessarily mostly static - you know it when you're writing the program, and it's knowable when you're reading the program. Since we can't write useful code without it, it's also very important knowledge. That raises the question: if this important static knowledge is present in our programs, shouldn't we make it explicit somehow?

Further, if possible, since we exploit and depend on this knowledge on almost every line of code we write, shouldn't we check it for validity as early as we can, even while we're writing the programs (e.g. within an IDE), perhaps before it's possible to run unit tests on them? Given this ability, why wouldn't you take advantage of it?
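As a hedged illustration (again my own, with invented names): the checker validates the signature's claim on every line, before any test could run.

```haskell
-- The checker confirms this claim as soon as the code is written:
countWords :: String -> Int
countWords s = length (words s)

-- Changing the body to, say,  words s  is rejected at compile time
-- (a type mismatch between [String] and Int), typically flagged by the
-- IDE as you type; no unit test is needed to catch it.
```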

Of course, the answer to that is that encoding a program so that every term can have its type statically checked introduces some constraints. But as I've argued before, the alternative of avoiding all static encoding of type information (other than in comments) is mostly a reflection of weaknesses in the state of the art.

Summary of techniques / approaches / models / languages for parallel computation

Hi,

Can anyone point me to a survey paper for approaches to parallel computation?

Googling, I've found Models and Languages for Parallel Computation (Skillicorn and Talia), which is a pretty good example of what I'm looking for (from an initial skim). However, it's from 1996 - I'm wondering if there's anything newer (or just complementary).

Thanks,
Andrew

Looking for the source of a quote

I know I should be able to find it on Google, but I can't: Who was it who said something like, "Programming languages should be designed primarily for humans to read and only secondarily for machines to execute"?

Old computer science and technical books worth searching for

There have been several threads recently about buying old computer science books, so I thought I'd make it into its own topic. What obscure books are out there that are worth searching for?

One that I've found interesting is "Lucid: The Dataflow Programming Language." There's also a follow-up, "Multidimensional Programming" (which I haven't seen).

Occam books are also eye-opening.

And I have a copy of "Programming in Parlog" (a parallel Prolog variant) which I have yet to read.

Explaining monads

I am off to bed, so I'll get back to this tomorrow per Luke's request.

For the time being I recommend section 2 of Wadler's Monads for functional programming.
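In the meantime, a minimal sketch in the spirit of that section (my paraphrase, not Wadler's exact code): an evaluator where the Maybe monad threads the possibility of failure through division, so the evaluator itself stays free of explicit error plumbing.

```haskell
-- A tiny expression language whose only effect is failure (division
-- by zero). Maybe's bind propagates Nothing automatically.
data Term = Con Int | Div Term Term

eval :: Term -> Maybe Int
eval (Con n)   = Just n
eval (Div t u) = do
  a <- eval t
  b <- eval u
  if b == 0 then Nothing else Just (a `div` b)

-- eval (Div (Con 6) (Con 2))  ==>  Just 3
-- eval (Div (Con 1) (Con 0))  ==>  Nothing
```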

Understanding continuations

In my never-ending quest to grasp continuations (stupidly, apart from actually trying to use them in a supporting language), I found this "continuation sandwich" example interesting.

I wrote down my thoughts on what I still don't understand. If anyone would be willing to comment there or here and help me correct my still-confused understanding of continuations, I'd be very grateful.
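For anyone else puzzling along, here's a hedged sketch of the capture-and-jump idea using Haskell's Cont monad (my illustration, not code from the linked post):

```haskell
import Control.Monad.Cont  -- from the mtl package

-- callCC hands us k, the continuation representing "the rest of the
-- computation from here". Invoking k jumps back out with its argument.
sandwich :: Cont r String
sandwich = callCC $ \k -> do
  _ <- k "sandwich in hand"      -- jump out immediately
  return "this line never runs"  -- skipped: k abandoned the rest

main :: IO ()
main = putStrLn (runCont sandwich id)  -- prints: sandwich in hand
```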

SIGAPL

SIGAPL is showing signs of life. After skipping a year while a new slate of officers reorganized, *Quote Quad* (the SIGAPL newsletter) Volume 34, Number 1 appeared in my mailbox. The SIG has fallen on some hard times, but I think it's well positioned for a comeback. With cheap, fast hardware and service-oriented architectures, the productivity advantages of *Array Programming Languages* are compelling for some applications.

Sun R&D efforts

Here's a flash cartoon outlining the future of Java after the settlement. Warning: Geek humor alert. :-)
