As most of us learned in high school physics, ray tracing is an idea that has been around for centuries. Global illumination in computer graphics has a somewhat shorter history.
Early computer graphics shading models proposed by Henri Gouraud and Bui Tuong Phong, among others, computed reflections based on the spatial relationship between light sources and a point on a surface. Somewhat later, Jim Blinn developed a reflection model derived from Ken Torrance’s work describing inter-reflections of the microscopic structures of surfaces. (That model was later expanded by Rob Cook and Ken Torrance himself into the widely used Cook-Torrance shading model.) Shortly thereafter, Blinn and Martin Newell published a paper describing environment mapping, which replaced light sources with a 360-degree texture map of the surrounding environment.
It was this development of environment mapping that caught my attention. The rendering of a shiny teapot with doors and windows reflected in its surface yielded a level of realism that I had never imagined for computer-generated imagery. I stared at the image for hours and wondered how one could ever improve it. Environment mapping did have one huge limitation: it could not accurately render reflections of objects close to the object being rendered.
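To illustrate why that limitation exists, here is a minimal sketch of an environment-map lookup, assuming a latitude-longitude map; the function names and layout are illustrative rather than Blinn and Newell's actual implementation. The lookup is indexed only by the reflected direction, never by the position of the point being shaded, so nearby objects cannot be reflected correctly.

```python
import math

# Minimal sketch of a latitude-longitude environment-map lookup.
# Only the reflected view direction indexes the map; the position of the
# shaded point is never used, which is why reflections of nearby objects
# come out wrong.

def reflect(view, normal):
    # r = v - 2 (v . n) n, with 'view' pointing toward the surface
    d = 2.0 * sum(v * n for v, n in zip(view, normal))
    return tuple(v - d * n for v, n in zip(view, normal))

def lookup_environment(env_image, view, normal):
    rx, ry, rz = reflect(view, normal)
    theta = math.acos(max(-1.0, min(1.0, ry)))   # polar angle from the +y axis
    phi = math.atan2(rz, rx)                     # azimuth around the y axis
    u = (phi + math.pi) / (2.0 * math.pi)        # map direction into [0, 1)
    v = theta / math.pi
    rows, cols = len(env_image), len(env_image[0])
    return env_image[min(int(v * rows), rows - 1)][min(int(u * cols), cols - 1)]
```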
Figure: An image from Turner Whitted’s 1979 SIGGRAPH paper featuring a structure resembling the Bell Labs building in Holmdel, New Jersey, where Whitted’s work was done.
Clearly, the effects of Phong’s model are local, and Torrance’s model and its derivatives are microscopic in scale. It seemed natural to label Blinn and Newell’s model as “global” in scope; hence the term “global illumination.” The question then became how to implement it without the limitations of the environment map.
The availability of frame buffers changed the computer graphics landscape dramatically. Rather than having to render images at the refresh rate of CRTs, one could render at any speed and view a static image on the screen. Ed Catmull’s subdivision algorithm coupled with a z-buffer could render arbitrarily complex curved surfaces into a frame memory in a matter of minutes rather than milliseconds. Blinn and Newell’s environment mapping required tens of minutes per frame. This trend toward higher realism at slower rates gave me the courage to try something even slower.