As most of us learned in high school physics, ray tracing is an idea that has been around for centuries. Global illumination in computer graphics has a somewhat shorter history.
Early computer graphics shading models proposed by Henri Gouraud and Bui Tuong Phong, among others, computed reflections based on the spatial relationship between light sources and a point on a surface. Somewhat later, Jim Blinn developed a reflection model derived from Ken Torrance’s work describing inter-reflections of the microscopic structures of surfaces. (That model was later expanded by Rob Cook and Ken Torrance himself into the widely used Cook-Torrance shading model.) Shortly thereafter, Blinn and Martin Newell published a paper describing environment mapping, which replaced light sources with a 360-degree texture map of the surrounding environment.
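For reference, Phong’s model in its classic textbook form (the notation below is mine, not drawn from the original papers) sums an ambient term with a diffuse and a specular term for each light source:

```latex
I \;=\; k_a\, i_a \;+\; \sum_{\text{lights}} \Big[\, k_d\,(\hat{N}\cdot\hat{L})\, i_d \;+\; k_s\,(\hat{R}\cdot\hat{V})^{\alpha}\, i_s \,\Big]
```

Here \hat{N} is the surface normal, \hat{L} the direction to the light, \hat{R} the mirror reflection of \hat{L}, \hat{V} the direction to the viewer, and \alpha the shininess exponent; the k coefficients describe the material and the i terms the light intensities. Every quantity depends only on the light sources and the local surface geometry, which is exactly what makes such models “local.”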
It was this development of environment mapping that caught my attention. The rendering of a shiny teapot with doors and windows reflected from its surface yielded a level of realism that I had never imagined for computer-generated imagery. I stared at the image for hours and wondered how one could ever improve it. Environment mapping did have one huge limitation, though: it could not accurately render reflections of objects close to the object being rendered.
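To make that limitation concrete, here is a minimal sketch of an environment-map lookup, assuming a latitude-longitude image; the names and layout are my own illustrative choices, not taken from Blinn and Newell’s paper. The lookup depends only on the reflected direction, never on the surface point’s position, so an object sitting right next to the reflector simply cannot appear in the reflection:

```python
import math

# Sketch of an environment-map lookup, assuming a latitude-longitude
# image `env` indexed as env[row][col]. Layout and names are my own;
# the key property is real: the lookup uses only the reflected
# *direction*, never the surface position, so nearby objects cannot
# show up in the reflection.

def reflect(d, n):
    """Mirror direction d about the unit normal n: r = d - 2(d.n)n."""
    dn = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dn * ni for di, ni in zip(d, n))

def env_lookup(env, direction):
    """Convert a unit direction to spherical coordinates, then texels."""
    x, y, z = direction
    u = (math.atan2(z, x) + math.pi) / (2.0 * math.pi)  # longitude in [0,1)
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi     # latitude in [0,1]
    rows, cols = len(env), len(env[0])
    return env[min(int(v * rows), rows - 1)][min(int(u * cols), cols - 1)]

# Typical use at a shading point:
#   color = env_lookup(env, reflect(view_dir, normal))
```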
(Figure: an image from Turner Whitted’s 1979 SIGGRAPH paper featuring a structure resembling the Bell Labs building in Holmdel, New Jersey, where Whitted’s work was done.)
Clearly, the effects of Phong’s model are local, and Torrance’s model and its derivatives are microscopic in scale. It seemed natural to label Blinn and Newell’s model as “global” in scope. Hence the term “global illumination.” The question then became how to implement it without the limitations of the environment map.
Because ray tracing is so incredibly simple, it should have been an obvious choice for implementing global illumination in computer graphics. Ray casting for image generation had been pioneered by Arthur Appel at IBM and commercialized by Robert Goldstein and his associates at MAGI. MAGI had originally used multi-bounce ray tracing to track radiation within tanks. In my own early career I had worked in ocean acoustics, and I remembered a diagram of ray-traced sound being refracted through varying depths of the ocean and reflected from the surface. Eventually the memory of that diagram resurfaced, and it became clear to me how to improve upon the global illumination method that Blinn and Newell had initiated.
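The core of the idea fits in a few lines. Below is a sketch of that multi-bounce recursion in Python, assuming a scene of spheres and a single hard-coded light; the names and structure are illustrative, not Whitted’s actual implementation:

```python
import math

# Sketch of recursive multi-bounce ray tracing, assuming a scene of
# spheres given as (center, radius, color, reflectivity) tuples and one
# hard-coded directional light. Names are illustrative, not historical.

def add(a, b):    return tuple(x + y for x, y in zip(a, b))
def sub(a, b):    return tuple(x - y for x, y in zip(a, b))
def scale(a, s):  return tuple(x * s for x in a)
def dot(a, b):    return sum(x * y for x, y in zip(a, b))
def normalize(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

def hit_sphere(origin, direction, center, radius):
    """Nearest positive ray parameter t, or None (direction is unit length)."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    disc = b * b - 4.0 * (dot(oc, oc) - radius * radius)
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

LIGHT_DIR = normalize((1.0, 1.0, -1.0))  # direction toward the light

def trace(origin, direction, scene, depth=0, max_depth=4):
    """Local shading at the nearest hit, plus a recursively traced
    reflection ray: the multi-bounce idea described above."""
    if depth > max_depth:
        return (0.0, 0.0, 0.0)              # stop the recursion
    nearest = None
    for center, radius, color, refl in scene:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, center, color, refl)
    if nearest is None:
        return (0.2, 0.3, 0.5)              # background "sky"
    t, center, color, refl = nearest
    point = add(origin, scale(direction, t))
    normal = normalize(sub(point, center))
    local = scale(color, max(0.0, dot(normal, LIGHT_DIR)))   # diffuse term
    mirror = sub(direction, scale(normal, 2.0 * dot(direction, normal)))
    bounced = trace(point, normalize(mirror), scene, depth + 1, max_depth)
    return add(scale(local, 1.0 - refl), scale(bounced, refl))

# e.g. trace((0, 0, -5), (0, 0, 1), [((0, 0, 0), 1.0, (1, 0, 0), 0.3)])
```

At each hit, the local shading is blended with whatever a recursively traced reflection ray returns, which is how reflections of nearby objects, impossible with an environment map, fall out naturally.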
My own hesitation about using ray tracing was simply a concern about performance. There has always been a bit of a divide between computer graphics for real-time interaction and graphics for film. General Electric’s lunar landing simulator obviously had to run in real time to train astronauts. This constraint carried over to David Evans and Ivan Sutherland’s flight simulators, as well as to neighboring research at the University of Utah. Henri Gouraud’s smooth shading ran in real time at essentially no cost. A little-known chapter of Bui Tuong Phong’s dissertation even describes circuitry for real-time shading, although most implementations of Phong shading did not run in real time.
The availability of frame buffers changed the computer graphics landscape dramatically. Rather than rendering images at the refresh rate of a CRT, one could render at any speed and view a static image on the screen. Ed Catmull’s subdivision algorithm coupled with a z-buffer could render arbitrarily complex curved surfaces into frame memory in a matter of minutes rather than milliseconds. Blinn and Newell’s environment mapping required tens of minutes per frame. This trend toward higher realism at slower rates gave me the courage to try something even slower.
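For readers who have not seen it, the z-buffer reduces visibility to a per-pixel depth comparison, which is why scene complexity was limited only by rendering time. This minimal sketch uses my own assumed fragment format, not Catmull’s code:

```python
# Sketch of the z-buffer idea; the fragment format and names are my own
# assumptions. Visibility becomes a per-pixel depth comparison, so
# surfaces can be rasterized in any order, and scene complexity costs
# only time, not algorithmic structure.

WIDTH, HEIGHT = 640, 480
depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def write_fragment(x, y, depth, color):
    """Keep a fragment only if it is nearer than what is already stored."""
    if depth < depth_buffer[y][x]:
        depth_buffer[y][x] = depth
        frame_buffer[y][x] = color
```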