Speed and semantics in CTM Chap. 1

Because it seems highly regarded among LtU members, I've started reading CTM. Already Chapter 1 has provoked some thoughts/questions.

In Section 1.7 (Complexity), the authors introduce an optimization which, in simplified form, is to use L=expr instead of {Fn expr expr}. This is advertised as changing the enclosing algorithm's complexity from 2^n to n^2. I have no doubt that in practice, i.e. using their implementation of the language (and many (all?) implementations of similar languages), this speedup holds. But my question is: what in the semantics of the language makes their speedup claim true? At this point in the book their language lacks side effects, so, in theory, couldn't a compiler take advantage of referential transparency and "hoist" (is that the right word? it seems to be usually applied to loops) the evaluation of expr outside the function call, i.e. perform the same optimization they suggest?

In general, what (if anything) do the semantics of a language allow you to conclude about complexity? Perhaps an operational semantics allows such conclusions, but I really don't see how a denotational one does. And if an operational semantics does allow complexity conclusions, does this mean that an optimizing compiler could violate the semantics (even if such a "violation" were welcome because it resulted in a speedup)?

By bdenckla at 2005-03-22 20:00 | LtU Forum
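For readers without the book at hand, the optimization under discussion comes from CTM's Pascal-triangle example. The sketch below is an Oz approximation of that example, not a verbatim quote of the book's code; the helper definitions (ShiftLeft, ShiftRight, AddList) are my own reconstruction of what the chapter uses. In the first version each level evaluates the recursive call twice, so the call count roughly doubles per level (about 2^N); in the second, the result is bound once to L, giving N calls with O(N) list work each, hence roughly N^2.

declare
fun {ShiftLeft L}
   case L of H|T then H|{ShiftLeft T}
   else [0] end
end

fun {ShiftRight L} 0|L end

fun {AddList L1 L2}
   case L1 of H1|T1 then
      case L2 of H2|T2 then
         H1+H2|{AddList T1 T2}
      end
   else nil end
end

% Doubly recursive version: {Pascal N-1} appears twice,
% so the number of calls grows like 2^N.
fun {Pascal N}
   if N==1 then [1]
   else
      {AddList {ShiftLeft {Pascal N-1}} {ShiftRight {Pascal N-1}}}
   end
end

% The L=expr optimization: the recursive result is computed once
% and bound to L, giving N calls of O(N) work each -- about N^2.
fun {FastPascal N}
   if N==1 then [1]
   else L in
      L={FastPascal N-1}
      {AddList {ShiftLeft L} {ShiftRight L}}
   end
end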