Keep Blogging Worker Bee!

Jacob Matthews, of PLT fame, is writing a blog currently covering PLDI. It's an interesting and amusing read; check it out!

I think that Jacob sometimes

I think that Jacob sometimes doesn't get what the authors at PLDI meant. For example, the code placement for branch prediction seems to me like a very good idea. Modern branch predictors are already quite accurate, so shaving off a few percent of the remaining mispredictions is well worth it, in my opinion, especially when the technique incurs no runtime overhead.
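To make that concrete: the paper's technique works at the level of code layout inside the compiler, but the simplest widely available cousin of the idea is GCC's __builtin_expect hint, which lets the compiler arrange generated code so the common case is the straight-line fall-through path. Here's a minimal sketch in C; the likely/unlikely macros and the sum_positive function are illustrative names of my own, not anything from the paper, and this is a far simpler mechanism than what the paper actually implements.

    #include <stdio.h>

    /* Linux-kernel-style hint macros built on GCC's __builtin_expect.
       The hint costs nothing at runtime; it only changes how the
       compiler lays out the generated code. */
    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)

    int sum_positive(const int *buf, int n)
    {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            /* The rare error path gets branched around, so the hot
               loop body stays on the fall-through path. */
            if (unlikely(buf[i] < 0))
                return -1;
            sum += buf[i];
        }
        return sum;
    }

    int main(void)
    {
        int data[] = {1, 2, 3, 4};
        printf("%d\n", sum_positive(data, 4));
        return 0;
    }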

very likely guilty as charged

Though on the subject of that particular paper: the problem with optimizing for that sort of thing is that the gory details of the branch prediction hardware are undocumented and tend to change from minor chip revision to minor chip revision, so buckets and buckets of effort spent getting a 3% performance increase on a 3GHz Pentium 4 may well not buy you anything on a 2.5GHz Pentium 4, much less an AMD chip (this is the same reason it's difficult to tune an allocator to a particular caching strategy, for instance). If the hardware actually exposed what it was doing to compilers, these techniques would be far more tractable; since it doesn't, it seems like an awful lot of effort for a small gain.
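The allocator aside can be made concrete with a small sketch in C. A common trick is padding hot data structures to an assumed cache-line size to avoid false sharing; the constant below is a compile-time guess (64 bytes is typical of current x86 parts, but nothing guarantees it elsewhere), which is exactly the kind of baked-in microarchitectural assumption that goes stale across chips. The names here are hypothetical.

    #include <stdio.h>

    /* A compile-time guess at cache geometry.  Right for most current
       x86 chips, not guaranteed for the next revision or another
       vendor -- the portability problem described above. */
    #define ASSUMED_CACHE_LINE 64

    struct padded_counter {
        long value;
        /* Pad so two counters updated by different threads don't share
           a cache line -- but only if the guess matches the hardware. */
        char pad[ASSUMED_CACHE_LINE - sizeof(long)];
    };

    int main(void)
    {
        printf("padded_counter occupies %zu bytes\n",
               sizeof(struct padded_counter));
        return 0;
    }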

Don't get me wrong: it was a huge amount of effort, and the fact that he was able to get it working at all is pretty impressive. But to me the most valuable lesson from the work wasn't that you ought to run out and implement exactly this strategy in all your compilers; it was the observation that the abstraction layers we create between the various components of the system do have costs, and that if they weren't quite so abstract we could get more performance out of them, at the cost of losing some of the benefits of abstraction. That's a tradeoff that shows up all the time, and that showed up at PLDI in particular this year, and there's no reason to believe we've got every abstraction layer perfectly tuned, so it's definitely worth looking into.

Anyway, I should point out that most of the PLDI papers are already available online, so it's easy to read them yourself and form your own opinion. That paper in particular is by Daniel Jimenez and is available on his website in PDF and PostScript.