archives

SK Calculus not Considered SeKsy?

note: thanks to Ehud's grace (or lack of good judgement, depending on your viewpoint) I have been granted powers of editor ;-) Please feel free to offer me suggestions, in private or in public, on how I can better address the needs of the community in what I choose to post or write. I hope the contributions of a neophyte can still be appreciated.

- Christopher (cdiggins@gmail.com )

For my first post as editor I thought I'd bring up an observation I made while perusing the literature: the typed SK calculus appears to see very little use in theoretical papers. In my studies I have been quite surprised it hasn't come up more often, since it is elegant and easy to understand.

I encountered the typed SK calculus first in Laufer and Odersky 1993. A web search for the phrase "typed SK calculus" yielded a brief usage on the Haskell Wiki in a discussion on Generalized Data Types.

I first encountered the SK calculus without types when reading Brent Kerby's A Theory of Concatenative Combinators which is an informal examination of combinators in Joy.

As a refresher, the typed SK calculus is made up of only two operations, S and K:

  K : a -> b -> a
  S : (a -> b -> c) -> (a -> b) -> a -> c
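As a quick illustration (a minimal Haskell sketch, since the two types above are exactly those of Haskell's `const` and the standard S combinator), the operations can be written out directly, and the identity combinator falls out as the classic derivation I = S K K:

```haskell
-- K discards its second argument.
k :: a -> b -> a
k x _ = x

-- S distributes an argument to both a function and its argument.
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

-- The identity combinator is derivable rather than primitive: I = S K K.
i :: a -> a
i = s k k
```

Here `s k k x` reduces to `k x (k x)`, which reduces to `x`, so `i` really is the identity, and the types above are just what Haskell infers for these definitions.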

So the questions I am pondering are: why is the lambda calculus so much more popular? Is it because Scheme and other similar languages more closely resemble the lambda calculus? If there were a language based on the SK calculus, would it become more popular? I also wonder what applications there are for the SK calculus, and what place it should occupy in my programming language theory arsenal.

A reflective functional language for hardware design and theorem proving

J. Grundy, T. Melham, and J. O'Leary, A reflective functional language for hardware design and theorem proving, Journal of Functional Programming, 16(2):157-196, March 2006.

This paper introduces reFLect, a functional programming language with reflection features intended for applications in hardware design and verification. The reFLect language is strongly typed and similar to ML, but has quotation and antiquotation constructs. These may be used to construct and decompose expressions in the reFLect language itself.

reFLect is apparently a follow-on to the earlier FL language (which I asked about in an earlier thread), the language used in Intel's Forte hardware verification environment. Within Forte, reFLect can be used to perform a seamless mix of theorem proving and model-checking (invocation of a model-checker is just a reFLect function call, with the source text of any call to the model-checker that returns true becoming a theorem of the logic). Tom Melham's reFLect page has a little more information, and also points out that a Linux version of the Forte system, including an implementation of reFLect, is freely downloadable from Intel.

Stephen Wolfram forecasts the future

From this article:

A dominant theme of the past 50 years has been the growing understanding of how to create computer programs for specific tasks. A central theme of the next 50 will be the systematic exploration of the "computational universe" of all possible programs.

In recent years we have discovered that very small programs can produce immensely rich and complex behaviour. As we explore the computational universe this fact will become crucial to science, art, technology and our whole way of thinking. Today when we create things, we expect to do it by explicit human effort. Fifty years from now, it will more often be done by automatically mining the computational universe. Perhaps we will even find the ultimate model of our universe that way. No doubt many industries will be created from technology discovered in the computational universe - whether for programmable nanostructures, algorithmic drugs, mass-customised art or generative microeconomics.

Just as scientific concepts such as momentum and natural selection are commonplace in our thinking today, so in 50 years will be concepts like computational irreducibility and the principle of computational equivalence. I expect the children of 50 years from now will learn cellular automata before they learn algebra.

Recursion Parallel Prolog

Reform Prolog is a recursion-parallel implementation of Prolog. By executing all invocations of a recursive predicate in parallel, data-parallelism can be exploited. In contrast to most other parallel Prolog systems, Reform Prolog thus exploits structured parallelism; in contrast to present-day parallel Fortran systems, Reform Prolog can parallelize programs with procedure calls, pointer data structures, and arbitrary data dependencies into doacross-style loops.

The Reform Prolog project ended in 1996 and somehow morphed into the High Performance Erlang project (which should be no surprise, as Erlang has clear roots in the Prolog world, though the concurrency model is different).

Practical Laziness

Hi y'all!

I have a language that I've been working on for a while that I intend to be lazy. At this point, the "flavor" of the language is pretty much set, but I'm still working out some of the lower-level details, and I'm wondering about the implementation of laziness. Strict evaluation is certainly easy to implement, since one never bothers to create thunks, but for non-strict evaluation, is there any literature on the practical elements that go into choosing how frequently to turn thunks into values? Do most implementations wait until they hit conditional forms or IO, or do they tend to put upper bounds on how large a chain of thunks can get, or ... ?
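For what it's worth, the usual answer in call-by-need implementations (GHC being the standard example) is the first option: thunks are forced only when a value is actually demanded, i.e. at pattern matches, conditionals, primitive operations, and IO, not on any size bound. A small Haskell sketch, using `Debug.Trace.trace` purely as an observation device to show when forcing happens:

```haskell
import Debug.Trace (trace)

-- The trace message fires only when this value is actually evaluated,
-- letting us observe when the thunk is forced.
expensive :: Int
expensive = trace "forced!" (sum [1 .. 1000])

demo :: Bool -> Int
demo useIt =
  let x = expensive          -- binds a thunk; nothing is evaluated yet
  in if useIt then x else 0  -- the thunk is forced only on the True branch
```

Calling `demo False` never prints "forced!", while `demo True` does: the conditional is the point of demand. Long chains of unforced thunks are a real engineering concern (they are the classic source of space leaks), but the standard remedy is programmer- or analysis-directed strictness (e.g. `seq`, strictness analysis in the compiler) rather than an automatic cap on thunk-chain length.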

thanks!

JimDesu