
print.google.com

Very relevant to the discussion in Journals and papers: Google has started a library of scanned, out-of-print books found in university libraries.

Via Greg Restall and Slashdot.

UCPy: Reverse Engineering Python

Interesting paper from PyCon 2003 (so yes, it's old news - but new on Lambda, as far as I can see): UCPy: Reverse-Engineering Python.

The project partly entails replacing the "CISC-style" Python VM with a much smaller "RISC-style" VM. The authors' comments on this decision are worth considering in the light of recent discussions about the design of the Parrot VM.

Speed and semantics in CTM Chap. 1

Because it seems highly regarded among LtU members, I've started reading CTM. Already Chapter 1 has provoked some thoughts/questions.

In Section 1.7 (Complexity), the authors introduce an optimization, which, in simplified form, is to use

L=expr
{Fn L L}

instead of

{Fn expr expr}.

This is advertised as reducing the complexity of the enclosing algorithm from 2^n to n^2.
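To make the counting concrete, here is a minimal sketch of the two shapes (Slow and Fast are my own stand-in definitions in Oz, not the book's example, and the per-level work here is constant rather than linear):

declare
fun {Slow N}
   if N==0 then 1
   else {Slow N-1} + {Slow N-1}   % the recursive expr is evaluated twice:
   end                            % T(N) = 2*T(N-1) + c, i.e. O(2^N)
end
fun {Fast N}
   if N==0 then 1
   else L in
      L = {Fast N-1}              % the recursive expr is evaluated once and shared:
      L + L                       % T(N) = T(N-1) + c, i.e. O(N)
   end
end
{Browse {Slow 20} == {Fast 20}}   % same value, very different running time as N grows

In the book's example the non-recursive work at each level is O(n) (list operations on a row of length about n), so sharing brings the cost down from 2^n to n^2 rather than to n.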

I have no doubt that in practice, i.e. using their implementation of the language (and many (all?) implementations of similar languages), this speedup holds.

But my question is: what in the semantics of the language makes their speedup claim true?

At this point in the book their language lacks side effects, so, in theory, couldn't a compiler take advantage of referential transparency and "hoist" (is that the right word? it seems to be usually applied to loops) the evaluation of expr outside the function call, i.e. perform the suggested optimization automatically?
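Concretely, the rewrite I have in mind is something like the following (using the same Fn, expr and L names as above, and assuming expr is pure): the compiler would turn

{Fn expr expr}

into

local L in
   L = expr
   {Fn L L}
end

behind my back, which is exactly the hand-applied optimization.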

In general, what (if anything) does the semantics of a language allow you to conclude about complexity?

Perhaps an operational semantics allows you to draw such conclusions, but I really don't see how a denotational one does. And if an operational semantics does allow conclusions about complexity, does that mean an optimizing compiler could violate the semantics (even if such a "violation" were welcome because it resulted in a speedup)?