Partial evaluation applied to high-speed lighting preview

Ostensibly, The Lightspeed Automatic Interactive Lighting Preview System by Ragan-Kelley et al. is about graphics, not programming language theory, and the abstract seems to confirm that impression:

We present an automated approach for high-quality preview of feature-film rendering during lighting design.

However, I think it is an interesting example of a practical application of partial evaluation and specialization. 3D rendering is computationally expensive, but graphics artists often make incremental changes rather than rendering fresh images from scratch. Lightspeed performs static dependency analysis of code written in the C-like RenderMan Shading Language (RSL). The code is then specialised into two parts: a static part (including a large 'cache' containing a partial computation) runs in the usual way, as RSL on (a virtual SIMD machine on) the CPU, while a dynamic part is generated to run on a different architecture: the GPU. The dynamic part contains only the parameters that artists need to update frequently, resulting in fast previews.
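To make the split concrete, here is a minimal sketch in plain C (not RSL or Cg; all the names here are mine, not the paper's) of the kind of division the dependency analysis produces:

    /* Hypothetical names throughout; Lightspeed derives this split
       automatically from the shader source. */
    #include <stdio.h>

    typedef struct { float r, g, b; } Color;

    /* Stub standing in for an expensive, light-independent lookup. */
    static Color texture_lookup(float u, float v)
    {
        Color c = { u, v, 0.5f };
        return c;
    }

    /* Static part: depends only on the fixed view/geometry/textures.
       Run once through the ordinary renderer; the result is cached. */
    static Color shade_static(float u, float v)
    {
        return texture_lookup(u, v);
    }

    /* Dynamic part: depends on the parameters the artist keeps tweaking.
       This is the code that would be regenerated to run on the GPU,
       reading the cache instead of redoing the static work. */
    static Color shade_dynamic(Color cached, Color light, float intensity)
    {
        Color out = { cached.r * light.r * intensity,
                      cached.g * light.g * intensity,
                      cached.b * light.b * intensity };
        return out;
    }

    int main(void)
    {
        Color cached = shade_static(0.25f, 0.75f);    /* computed once */
        Color warm = { 1.0f, 0.8f, 0.6f };
        for (int frame = 0; frame < 3; frame++) {     /* cheap per tweak */
            Color c = shade_dynamic(cached, warm, 0.5f + 0.25f * frame);
            printf("%f %f %f\n", c.r, c.g, c.b);
        }
        return 0;
    }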

PS Note that 'lighting' in rendering-speak can mean any computation performed on a piece of geometry at the point where you would compute the interaction between light and surface. Often it doesn't actually mean lighting.

(Disclaimer: I'm not an author on this paper but I did have an almost negligible bit of involvement.)


Metaprogramming is the key to interaction performance

See also Oleg's notes -- search for "Pixar" -- at http://okmij.org/ftp/papers/USENIX06-impressions.txt

Partial Evaluation in rendering

Many of the old software renderers in the 1990s patched binary constants into existing x86 code at offsets generated by an assembler. You could call that partial evaluation!
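For a flavour of the trick, here is a minimal sketch (x86-64 C on POSIX; the old renderers patched 16/32-bit code in place rather than allocating with mmap, and every name and offset here is illustrative):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Template: mov eax, imm32 ; ret.  The 4-byte immediate starts at
       byte 1 -- the "offset generated by an assembler". */
    static const uint8_t template_code[] = { 0xB8, 0, 0, 0, 0, 0xC3 };
    enum { IMM_OFFSET = 1 };

    int main(void)
    {
        /* RWX mapping for simplicity; hardened systems with W^X
           policies may refuse this. */
        uint8_t *code = mmap(NULL, sizeof template_code,
                             PROT_READ | PROT_WRITE | PROT_EXEC,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (code == MAP_FAILED) return 1;
        memcpy(code, template_code, sizeof template_code);

        uint32_t step = 0x12345678;           /* the constant being specialised */
        memcpy(code + IMM_OFFSET, &step, 4);  /* patch it into the instruction */

        uint32_t (*fn)(void) = (uint32_t (*)(void))code;
        printf("%08x\n", fn());               /* prints 12345678 */
        return 0;
    }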

Didn't Wolfenstein 3D do this?

I was under the impression that some of the code was itself a 'render': the mapping from screen space to texture space for each column of the screen was hard-coded into assembly language routines. Or something like that. It'd be nice to have a programming language that offered a principled way of doing this, and I think it's interesting to think about the kinds of structures such a language might require.

Yes, it did

I believe that's why there was a noticeable delay when resizing the viewport: it had to regenerate all the column drawing code.

However, I think that technique was only worth it when running on a 286 processor, and regular drawing methods were superior on later generations of Intel processors.

Program slicing vs. partial evaluation

If I understand correctly, Ragan-Kelley's work is more akin to program slicing than partial evaluation. The use of the term "specialization" in this context dates back to Guenter, Knoblock, and Ruf's work, which is similar in spirit. The key difference between slicing and specialization is that slicing does not perform value-specific optimizations like constant folding.
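A toy illustration of that difference (plain C, not taken from either paper), with gain a static input and light a dynamic one:

    /* Original shader-ish code. */
    float shade(float gain, float light, float *aov)
    {
        float k = gain * 2.0f;   /* depends only on the static input */
        *aov = gain + light;     /* irrelevant to the sliced output */
        return k * light;
    }

    /* Slicing on the return value drops the *aov statement, but k is
       still computed at run time: */
    float shade_sliced(float gain, float light)
    {
        float k = gain * 2.0f;
        return k * light;
    }

    /* Specialising on gain = 0.5 additionally performs the
       value-specific optimization: k folds away to a constant. */
    float shade_specialized(float light)
    {
        return 1.0f * light;
    }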

Depends on how you slice it

more akin to program slicing than partial evaluation

By specialising values, the Cg compiler on the back end can then perform constant folding. So, in effect, the entire system does get the advantages of constant folding.

Data specialization

Dan is right here -- it does extensive constant folding, both from the static inputs and from the post-caching value analysis ("cache compression"). In practice, this is mostly achieved using the Cg compiler, but it is very much a conscious part of the system design.

As such, I consider this a direct follow-on to the data specialization technique Knoblock and Ruf first proposed in the context of shaders. It's not just about slicing and translating the source shaders, but about automatically generating a cache and significantly reducing the preview shader through specialization (here, mostly value analysis and constant folding).
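As a rough sketch of the post-caching value analysis (hypothetical C, not the system's actual code): after the static pass fills the cache, any channel that turns out to hold the same value at every shading point can be dropped from the cache and emitted as a literal in the generated Cg, where the Cg compiler folds it further:

    #include <stdbool.h>
    #include <stddef.h>

    /* Returns true (and the value) if a cached channel holds the same
       constant at every shading point, so it can be folded into the
       generated dynamic code instead of being stored and fetched. */
    static bool channel_is_constant(const float *channel, size_t n,
                                    float *value)
    {
        if (n == 0) return false;
        *value = channel[0];
        for (size_t i = 1; i < n; i++)
            if (channel[i] != *value)
                return false;
        return true;
    }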

Dunno About Wolfenstein...

...but I'd bet money that the software renderer in Unreal did. :-)

Seriously, it's interesting that Mark Leone, who is well-known in Standard ML and Run-Time Code Generation (RTCG) circles, wound up at Pixar.

A friend and former Activision colleague is involved in GPGPU in some way (I should get him to explain more about it). For some time I was interested in libsh, but it's no longer maintained since the project participants went commercial, and libsh never really quite made it on Mac OS X anyway. Nevertheless, the papers, and possibly the book (I haven't read it), behind the work are probably worth a look.

I read a large chunk of the book

libsh is an amazing piece of work, but I think the main thing I learnt from the book was that C++ might not be the best choice of 'host' language for a shading DSEL.

BTW, Mark Leone is one of the authors of this paper, which tackles the same problem of interactive lighting. But I don't think Pixar's own approach uses the same kind of general-purpose static analysis of source code and partial evaluation as this new paper.

Ray tracing

There's also been a bunch of work done on applying partial evaluation to ray tracing. The first paper I know of is Mogensen's master's thesis from 1986, "The Application of Partial Evaluation to Ray Tracing". A more recent paper out of the same research group is Holst, "Partial evaluation applied to ray tracing".

Partial evaluation

You're right, there is tons of prior work in partial evaluation applied to rendering -- and indeed, I cited Mogensen's thesis for exactly this reason. Parameterized Ray Tracing is another major early(-ish) work in this area, but different techniques date back to the '70s.

As distinguished from partial evaluation of an entire ray tracer, this project is really about partial evaluation of shaders, and looks specifically at static analysis applied to these domain-specific programs. With respect to the overall rendering process, the static/dynamic division is fixed by the overarching assumption of fixed view/geometry and varying lighting. Caching is performed at a very specific part of the overall rendering pipeline, so we avoid worrying about the renderer as a whole wherever possible.
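As a rough picture of where that cache sits (hypothetical C; in the real system the cached records are richer and the loop runs on the GPU): one record per visible shading point, filled once, then a cheap relighting pass per parameter change:

    #include <stddef.h>

    /* One cached record per visible shading point, computed once from
       the fixed view and geometry. */
    typedef struct { float albedo[3]; float normal[3]; } CachedPoint;

    /* The per-change pass: reads the cache and applies only the
       light-dependent arithmetic. */
    static void relight(const CachedPoint *cache, size_t n,
                        const float light_dir[3], const float light_rgb[3],
                        float (*out)[3])
    {
        for (size_t i = 0; i < n; i++) {
            float ndotl = cache[i].normal[0] * light_dir[0]
                        + cache[i].normal[1] * light_dir[1]
                        + cache[i].normal[2] * light_dir[2];
            if (ndotl < 0.0f) ndotl = 0.0f;       /* clamp backfacing */
            for (int c = 0; c < 3; c++)
                out[i][c] = cache[i].albedo[c] * light_rgb[c] * ndotl;
        }
    }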