Relating FFTW and Split-Radix

Relating FFTW and Split-Radix. Proc. of ICESS'04, the First International Conference on Embedded Software and Systems, December 9-10, 2004, Hangzhou (Zhejiang), China.

This ongoing effort to reproduce an efficient implementation of the FFT using staging and abstract interpretation attempts to answer the question "How can we get the raw performance of hardware without giving up the expressivity and clarity of software?"

Here's how Oleg describes the contribution of this paper:

One may think that generating a truly optimal power-of-two FFT is straightforward: we generate the naive radix-2 FFT code, and then optimize it, removing trivial multiplications (x*1), trivial additions (x+0), etc. That was the approach demonstrated previously.

And the bottom line is: the improved code is better than the original, but still worse than the best (FFTW) code. Here's why: if we consider an expression (sin(pi/4) * x + cos(pi/4) * y), we may think that it can be optimized to sin(pi/4)*(x+y), thus saving one multiplication. Alas, the computed sin(pi/4) and cos(pi/4) are not _exactly_ equal. Therefore, no optimizer is able to detect that they are a common factor. The previous paper, BTW, did handle the case of sin(pi/4) -- and still fell short of FFTW. To achieve the optimum, more complex trigonometric transformations have to be applied.

This paper shows that to achieve the best quality of the generated code, we have to use domain knowledge. In the case of FFT, we use the fact that it is a linear transform whose factors are roots of unity. This gives us an abstract representation of terms amenable to exact trigonometry. We can essentially manipulate the terms symbolically, and then concretize our abstract representation into code. In the end, we generated FFT code that exactly matches the number of FP operations of FFTW. Somewhat unexpectedly, when we changed the function that generates the code for general complex multiplication, we obtained `split-radix FFT' -- another well-known FFT technique.
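
Two of the points in that quote can be made concrete in a few lines. Below is a hedged Haskell sketch (the paper itself uses MetaOCaml; Root and the helper names are mine): first the floating-point trap, then the abstract representation of twiddle factors as exact roots of unity.

    sinPi4, cosPi4 :: Double
    sinPi4 = sin (pi / 4)
    cosPi4 = cos (pi / 4)
    -- On typical platforms sinPi4 /= cosPi4: the two constants differ in
    -- the last bit, so no generic optimizer may treat them as a common factor.

    -- The remedy: represent w_n^k = exp(-2*pi*i*k/n) abstractly by the
    -- pair (k, n). Multiplication is then exact integer arithmetic, so
    -- identities such as w_8 * w_8 = w_4 hold by construction.
    data Root = Root Int Int          -- Root k n stands for w_n^k
      deriving (Eq, Show)

    mkRoot :: Int -> Int -> Root      -- reduce k mod n, assuming n >= 1
    mkRoot k n = let k' = k `mod` n
                     g  = gcd k' n
                 in Root (k' `div` g) (n `div` g)

    mulRoot :: Root -> Root -> Root   -- add exponents over the lcm order
    mulRoot (Root k1 n1) (Root k2 n2) =
      let n = lcm n1 n2
      in mkRoot (k1 * (n `div` n1) + k2 * (n `div` n2)) n

    -- Concretize to floating point only when emitting code:
    toCosSin :: Root -> (Double, Double)
    toCosSin (Root k n) =
      let t = -2 * pi * fromIntegral k / fromIntegral n in (cos t, sin t)

    -- mulRoot (mkRoot 1 8) (mkRoot 1 8) == mkRoot 1 4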

Oleg points out that the point isn't that they managed to reproduce the FFTW results. The crucial point is that we know exactly which identities (i.e., axioms) contributed to the optimum. The search was principled rather than heuristic, and the code is generated in a single pass. There are no manipulations on the code: it is generated just right.

Datatype Laws without Signatures

Using the well-known categorical notion of `functor' one may define the concept of datatype (algebra) without being forced to introduce a signature, that is, names and typings for the individual sorts (types) and operations involved. This has proved to be advantageous for those theory developments where one is not interested in the syntactic appearance of an algebra.

Does it sound like "a module without a signature"?

If you like programming with bananas, lenses, and other weird things, you might like this paper as well.
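
For the Haskell-inclined, here's a hedged sketch of the functorial view the abstract alludes to; Fix, Algebra, cata, and NatF are the usual folklore names, not the paper's notation.

    {-# LANGUAGE DeriveFunctor #-}

    -- A datatype is the fixed point of a functor; an algebra is any
    -- function of type f a -> a. Note that the theory needs no signature:
    -- no names for the sorts and operations involved.
    newtype Fix f = In { out :: f (Fix f) }

    type Algebra f a = f a -> a

    -- The catamorphism ("banana brackets") folds any Fix f with an algebra.
    cata :: Functor f => Algebra f a -> Fix f -> a
    cata alg = alg . fmap (cata alg) . out

    -- Example: the naturals as the fixed point of NatF.
    data NatF x = Zero | Succ x deriving Functor

    toInt :: Fix NatF -> Int
    toInt = cata alg
      where alg Zero     = 0
            alg (Succ n) = n + 1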

PS: OTOH, if you are sceptical about bialgebraic programming, then dialgebraic is definitely not for you.

Y in haskell

From the Haskell mailing list.

People often wonder about Y in Haskell, so I think it is worth having this link in the archive. Nothing new here for Haskell mavens, though.
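
For the archive, then, a minimal sketch of the standard answers -- folklore, nothing beyond what the linked thread covers:

    -- The recursive definition is immediate, thanks to laziness:
    fix :: (a -> a) -> a
    fix f = f (fix f)

    -- A non-recursive Y needs a newtype: the type system otherwise
    -- rejects the self-application at the heart of the combinator.
    newtype Mu a = Mu (Mu a -> a)

    y :: (a -> a) -> a
    y f = (\h -> h (Mu h)) (\x -> f (unMu x x))
      where unMu (Mu g) = g

    fac :: Integer -> Integer
    fac = y (\rec n -> if n <= 1 then 1 else n * rec (n - 1))
    -- fac 5 == 120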

On the Unusual Effectiveness of Logic in Computer Science

In 2001, Moshe Vardi organised a workshop devoted to the topic of The Unusual Effectiveness of Logic in Computer Science, with papers covering such topics as "Logic as the calculus of computer science" (Vardi) and "Descriptive complexity" (Immerman); later, a gang consisting of Halpern, Harper, Immerman, Kolaitis, Vardi, and Vianu published a likewise-named 24-page article in the July 2002 issue of the Bulletin of Symbolic Logic.

The title is derived from Wigner's famous article The Unreasonable Effectiveness of Mathematics in the Natural Sciences, which raised, and attempted to answer, an important question: why should mathematics have been so useful to natural scientists? With respect to logic, my answer for the effectiveness of logic in computer science is this: while computation is a physical phenomenon, it is a phenomenon best understood via powerful abstractions; and the most powerful abstractions we have at the moment come from mathematical logic, because of the fundamental relationship of Turing completeness to Gödelian incompleteness.

Links derived from Richard Zach's Motivating Intro Logic for Philosophy majors (and others).

More links...

Sam Ruby: Continuations for Curmudgeons

This essay is for people who, in web years, are older than dirt. More specifically, if there was a period of time in which you programmed in a language which did not have garbage collection, then I mean you. For most people these days, that means that you had some experience with a language named C.

It is a pretty safe bet that -- despite your deep technical background -- you find that you have some genetic defect that makes it completely impossible for you to understand articles with titles like Continuations Made Simple and Illustrated. And for some reason, that bothers you.

A nice blog post that explains a number of basic PL concepts (value vs. reference, continuations, closures, coroutines) using examples from a bunch of popular languages (C, JavaScript, Ruby, Python, BASIC, Java).
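
A hedged one-screen rendering of the central notion in Haskell (my example, not Ruby's): a continuation is "the rest of the computation", passed around as an ordinary function that each step calls instead of returning.

    addK :: Int -> Int -> (Int -> r) -> r
    addK x y k = k (x + y)

    squareK :: Int -> (Int -> r) -> r
    squareK x k = k (x * x)

    -- Direct style would be: square (2 + 3). In CPS, control flow is
    -- threaded explicitly through the continuations:
    main :: IO ()
    main = addK 2 3 (\s -> squareK s print)   -- prints 25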

The Glasgow Haskell Compiler Survey - GHC needs your feedback!

If you're a GHC user, the Glasgow Haskell Compiler headquarters needs your feedback! See Simon Peyton-Jones's original message, or go directly to the user survey. Here's a quote from the original message:

We'd like to hear from *absolutely everyone* who uses GHC: whether you
are a first-year undergraduate or a famous professor; whether you use
GHC for industrial applications or recreation; whether you use
higher-rank polymorphism or have only just learned what functional
programming is. Everyone!

We'll use the information to guide our future priorities, and we'll
publish some kind of summary of what we learn in due course. It's all
anonymous, of course, unless you choose to say who you are.

Chris Coyne's Context Free Design Grammar, and SCIgen - Randomly generated CS papers.

Chris Coyne's Context Free Design Grammar was just mentioned by Perry Wagle on the #haskell IRC channel. It's a beautiful use of simple concepts in a surprising manner. If you haven't seen the examples, you're missing out.
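
The core trick is easy to convey, if not the artistry. Here is a toy sketch (mine, not Coyne's system) of expanding a context-free grammar by choosing productions pseudo-randomly; a small linear congruential generator keeps it dependency-free.

    type Grammar = [(String, [[String]])]

    toy :: Grammar
    toy = [ ("S",  [["NP", "VP"]])
          , ("NP", [["the", "N"]])
          , ("N",  [["compiler"], ["continuation"], ["monad"]])
          , ("VP", [["V", "NP"]])
          , ("V",  [["optimizes"], ["captures"]]) ]

    lcg :: Int -> Int                  -- toy pseudo-random number generator
    lcg s = (1103515245 * s + 12345) `mod` 2147483648

    -- Expand a symbol, threading the generator state; symbols with no
    -- productions are terminals.
    expand :: Grammar -> Int -> String -> (Int, [String])
    expand g seed sym = case lookup sym g of
      Nothing    -> (seed, [sym])
      Just rules ->
        let s'   = lcg seed
            rule = rules !! (s' `mod` length rules)
            step (s, ws) x = let (s2, w) = expand g s x in (s2, ws ++ w)
        in foldl step (s', []) rule

    main :: IO ()
    main = putStrLn (unwords (snd (expand toy 2005 "S")))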

Chris mentions on the download page that he's gotten a lot of hits because he's linked from SCIgen, the recently slashdotted automatic CS paper generation project at MIT. Even more amazingly, one of their randomly generated papers was accepted.

CogPrints - if you believe PLs are languages

CogPrints is an electronic archive for self-archived papers in any area of psychology, neuroscience, and linguistics, and many areas of computer science (e.g., artificial intelligence, robotics, vision, learning, speech, neural networks), philosophy (e.g., mind, language, knowledge, science, logic), biology (e.g., ethology, behavioral ecology, sociobiology, behaviour genetics, evolutionary theory), medicine (e.g., psychiatry, neurology, human genetics, imaging), anthropology (e.g., primatology, cognitive ethnology, archeology, paleontology), as well as any other portions of the physical, social, and mathematical sciences that are pertinent to the study of cognition.

I cite it here mostly because of CS/Language, but other categories are marginally related to PLT as well (I believe "cog" in the name of the site stands for "cognitive", and we used to discuss cognitive problems in the past ;-) ).

Links

You can see slides from the Links meeting here and commentary and pictures here. (Thanks to Ethan Aubin for already starting a thread under the former, and to Ehud Lamm for inviting me to guest blog.)

Ethan Aubin writes:

So why do we need a new language? What cannot be accomplished with existing frameworks? There is a slide following this asking why you can't do this in Haskell or ML, but I don't know why they (or even Java/PHP/etc.) aren't enough.
Let me try to answer this. Links is aimed at doing certain specific things.
  • (a) Generate optimized SQL and XQuery directly from the source code -- you don't need to learn another language, or work out how to partition your program across the three tiers. This idea is stolen from Kleisli. You need to build it into the compiler, so Haskell, Ocaml, or PLT Scheme won't do.
  • (b) Generate Javascript to run in the browser directly from the source code -- you don't need to learn another language, or work out how to partition your program across the three tiers. We're hoping to provide a better way to write AJAX style programs. Again, you need compiler support -- vanilla Haskell, Ocaml, or PLT Scheme can't do this.
  • (c) Generate scalable web interfaces -- where scalable means all state is kept in the client (in hidden fields), not in the server. PLT Scheme and WASH provide the right interface, but they are not scalable in this sense, precisely because making them scalable involves fiddling with the compiler. (Felleisen and company have pointed out the right way to do this, applying the well-known CPS and defunctionalization transformations; a toy sketch of the recipe follows below.)
So that's my argument for a new language.
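
Point (c) is worth a tiny illustration. The following is my own hedged Haskell sketch of the CPS-plus-defunctionalization recipe, with hypothetical names throughout -- not Links or PLT code: the continuation ("the rest of the dialogue with the user") becomes a first-order value, serialized into a hidden field and reconstructed on the next request, so the server keeps no state.

    data Kont                      -- the defunctionalized continuation
      = AskName                    -- next: ask for the user's name
      | AskAge String              -- name collected; next: ask for the age
      deriving (Show, Read)        -- Show/Read stand in for (de)serialization

    -- One request/response step: apply the resumed continuation to the
    -- submitted input; return the page text and the next continuation.
    step :: Kont -> String -> (String, Maybe Kont)
    step AskName       input = ("Hi " ++ input ++ ", how old are you?",
                                Just (AskAge input))
    step (AskAge name) input = (name ++ " is " ++ input ++ " years old.",
                                Nothing)

    -- Round-trip through the "hidden field":
    demo :: (String, Maybe Kont)
    demo = let (_, Just k) = step AskName "Ada"
               hidden      = show k          -- embedded in the form
               k'          = read hidden     -- parsed on the next request
           in step k' "36"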

Is it a good enough argument? Is this enough of an advantage to get folk to move from PHP, Perl, or Python? Not clear. I suspect that if it is good enough, a major motivating factor is not going to be anything deep, but simply the fact that being able to write everything down in one language instead of three or four will make people's brains hurt less.

Ethan Aubin also writes:

Wadler goes into the FP success stories: Kleisli, XDuce, PLT Scheme (Continuations on the Web), Erlang. If you take the benefits of these individually, you've got a language which solves the 3-tier problem better than what we have now, but I don't think it meets the criteria of "permitting its users to do something that cannot be done in any other way". So, I'd like to ask all the perl/php/asp/pythonistas on LtU: what is the killer app that your language cannot handle?
I'd love to see answers to this question!

Dominus talks about HO Perl

Via Xavier Noria, an interview with Mark Jason Dominus in The Perl Review, mostly about Higher-Order Perl [2].

Don't get too overheated...

Postscript: Add your links to this subnode.