next up previous
Next: Mathematical foundations Up: Semi-Automatic Generation of Transfer Previous: Relationship to edge detection,

Previous work

The canonical references for direct volume rendering of scalar fields are Levoy [Lev88] and Drebin et al. [DCH88]. Levoy presents two transfer function designs for visualizing two distinct classes of data. One transfer function is for visualizing isovalue contours in smoothly varying data; the other is for displaying boundaries in datasets containing adjoining regions of relatively constant value. Both assign opacity as a function of data value and gradient magnitude. Drebin et al. use a transfer function that sets opacity as a function of data value alone, and differentiate between materials by assigning different colors to separate ranges of data values. Today, most transfer functions used in volume rendering are still set according to the methods presented by these two authors. Examples of recent work employing Levoy-style transfer functions are described in [Mur95] and [YESK95]; an example of transfer functions based on data value alone is described in [PMea96].
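To make the role of gradient magnitude concrete, the general shape of Levoy's isovalue-contour opacity function can be sketched as follows. This is a simplified rendition, not a reproduction of the formulation in [Lev88]; the function and parameter names are mine.

```python
import numpy as np

def isocontour_opacity(value, grad_mag, iso_value, alpha_max, thickness):
    """Opacity as a function of data value and gradient magnitude, in the
    spirit of Levoy's isovalue-contour transfer function.  A voxel whose
    value lies within a gradient-scaled window around iso_value receives
    nonzero opacity, so the rendered shell has roughly constant thickness
    regardless of how steeply the data varies."""
    grad_mag = np.maximum(grad_mag, 1e-8)            # avoid division by zero
    distance = np.abs(value - iso_value) / grad_mag  # distance to contour, in voxels
    return alpha_max * np.clip(1.0 - distance / thickness, 0.0, 1.0)

# A voxel exactly on the isovalue gets full opacity; a distant one gets none.
on_contour  = isocontour_opacity(100.0, 5.0, 100.0, 0.8, 2.0)  # 0.8
off_contour = isocontour_opacity(130.0, 5.0, 100.0, 0.8, 2.0)  # 0.0
```

Dividing by gradient magnitude is what distinguishes this from a transfer function of data value alone: in steep regions a wide range of values maps onto a thin spatial shell.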

Direct volume rendering research today tends to focus either on making the rendering algorithms faster, or extending existing rendering methods to work with a wider variety of data sets. Exciting developments include the shear warp factorization [LL94], and the use of texture maps for rendering at interactive rates on machines with 3-D texture memory [GK96]. Examples of existing techniques applied to a new domain are the extension of splatting [Wes90] to non-rectilinear volumes [Mao96], and using ray casting for vector field visualization [Fru96]. Often, direct volume rendering is not the tool of choice for visualization of scalar fields, and isosurface rendering is used instead, as in geology [PBL96] or computational medicine [SJM95]. The generation of polygonal isosurfaces is still an area of active research [IYK96,SHLJ96].

To date there has been little research into the generation of transfer functions for direct volume rendering. Two recent papers describe methods for assisting the user in the exploration of possible transfer functions. The first method [HHKP96] uses genetic algorithms to ``breed'' a good transfer function for the dataset in question. The system randomly generates a set of transfer functions and renders a small image for each one. Presented with this set of renderings, the user picks the few that seem to best display the volume data, and a new population of transfer functions is stochastically generated based on those picks. This process iterates until the user feels that the best transfer function has been found. Alternatively, an image-processing metric such as entropy, variance, or energy can serve as an objective fitness function to evaluate the rendered images without human guidance, in which case the process eventually converges on a transfer function that maximizes the fitness function. The method succeeds in generating good renderings, and frees the user from having to edit the transfer function manually.
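The generate-evaluate-reproduce loop can be sketched as below. This is a toy illustration, not the algorithm of [HHKP96]: the representation of a transfer function as a 16-entry opacity table and all names are hypothetical, and the `render` and `fitness` callables stand in for the volume renderer and for either the user's picks or an objective image-processing metric.

```python
import random

def evolve_transfer_functions(render, fitness, generations=10, pop_size=8):
    """Toy genetic loop: transfer functions are opacity lookup tables;
    each generation keeps the fittest candidates (judged by their
    renderings) and breeds mutated offspring from them."""
    def random_tf():
        return [random.random() for _ in range(16)]   # 16-entry opacity table
    def mutate(tf):
        return [min(1.0, max(0.0, v + random.gauss(0.0, 0.1))) for v in tf]

    population = [random_tf() for _ in range(pop_size)]
    for _ in range(generations):
        # score each transfer function by the rendering it produces
        scored = sorted(population, key=lambda tf: fitness(render(tf)), reverse=True)
        parents = scored[:pop_size // 2]              # selection step
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=lambda tf: fitness(render(tf)))
```

Because the parents survive unchanged into the next generation, the best transfer function found so far is never lost, but nothing guarantees the population ever contains a good one.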

The second method [MABea97] addresses the problem of ``parameter tweaking'' in general, with applications to light placement for rendering, motion control for articulated-figure animation, and transfer functions in direct volume rendering. Here the goal is not to find the one best parameter setting, but to present as wide a variety of parameter settings as possible, relying on a user-specified metric to determine similarity between rendered images. The system randomly generates a very large set of transfer functions, then arranges the small thumbnail renderings produced by these transfer functions into a two-dimensional layout called a ``design gallery.'' The user peruses these thumbnails and selects the most appealing rendering. When applied to volume rendering, this method also seems to find reasonable transfer functions, but, like the genetic algorithm method, at the cost of completely divorcing the user from first-hand experimentation with transfer functions.
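The underlying idea of culling a large random pool down to a small, mutually dissimilar set under a user-specified metric can be sketched with greedy farthest-point selection. This is an illustrative stand-in, not the dispersion or layout method actually used in [MABea97].

```python
def select_gallery(candidates, distance, k):
    """From a large pool of randomly generated parameter settings, pick k
    whose renderings are mutually dissimilar under a user-specified
    distance metric: repeatedly take the candidate whose nearest
    already-chosen neighbor is farthest away."""
    chosen = [candidates[0]]
    while len(chosen) < k:
        # the candidate farthest from everything already chosen
        best = max(candidates, key=lambda c: min(distance(c, s) for s in chosen))
        chosen.append(best)
    return chosen

# With scalar "renderings" and absolute difference as the metric,
# the selection spreads out across the range of candidates.
gallery = select_gallery(list(range(10)), lambda a, b: abs(a - b), 3)  # [0, 9, 4]
```

In the real system the `distance` argument would compare thumbnail images, which is exactly where the user-specified similarity metric enters.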

It is important to identify the idea behind both of these approaches. Instead of interacting with the parameter space of transfer functions directly, the user explores it indirectly, seeing only the effect of each parameter setting on the rendering. Rather than creating a better interface for transfer function editing, the methods avoid the issue entirely by having the user pick transfer functions based on the resulting rendered images. In both cases, the process that generates the transfer functions is random, so the user can only choose among those presented, and the quality of the final transfer function is always uncertain. The genetic algorithm method is not guaranteed to find a good transfer function, regardless of how long the user ``breeds'' them. The design gallery algorithm may or may not happen upon a good setting in its initial random generation of transfer functions, and the user would have to restart the process from the beginning to seek better settings. Significantly, in both cases the generated transfer functions are not constrained by any measured properties of the data.

It should also be pointed out that both methods generate transfer functions that are good only for renderings from one particular viewpoint. If the viewpoint changes, the genetic algorithm will iterate differently, and the layout algorithm for the design gallery may produce completely different results. In this sense, the two methods are not so much for finding good transfer functions as for finding good renderings.

While the two methods above could be termed ``image-driven'', the approach described in this thesis is ``data-driven''. It applies solely to those domains of volume rendering where the user's goal is to visualize the surfaces of materials represented in the volume. The method for transfer function generation is entirely derived from that goal. In this sense, the method described here has a level of validity or correctness not attained by either of the previous methods.

Although the previous work described above on the specific problem of finding transfer functions for direct volume rendering is not suitable for our needs, there is other work on related problems that is relevant. Bergman et al. [BRT95] address the problem of colormap selection in visualizing two-dimensional scalar fields, to aid in the production of ``false color'' images. Because the human perceptual system responds to variations in hue differently than to variations in intensity, the authors have developed a tool that chooses a colormap based on the user-specified goal of the visualization task, as well as the data's spatial frequency characteristics. More recently, Bajaj et al. [BPS97] describe the ``contour spectrum,'' a tool for assisting the user in selecting isovalues for effective isosurface renderings of unstructured triangular meshes. By exploiting the mathematical properties of the mesh, important measures of an isosurface, such as surface area and mean gradient magnitude, can be computed with great efficiency, and the results of these measurements are integrated into the same interface used to set the isovalue. Because the interface provides a compact visual representation of the metrics evaluated over the range of possible isovalues, the user can readily decide, based on their rendering goals, which isovalue to use. The importance to this thesis is that the contour spectrum is a good example of how an interface can use measured properties of the data to guide the user through the parameter space controlling the rendering.
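As a rough illustration of the kind of per-isovalue metric such an interface can display, the following sketch counts, for each candidate isovalue, the grid cells straddled by the isosurface, a crude proxy for surface area. Bajaj et al. compute exact metrics on unstructured triangular meshes; this regular-grid version and its names are hypothetical.

```python
import numpy as np

def cell_count_spectrum(volume, isovalues):
    """For each candidate isovalue, count the cubic grid cells whose
    corner values straddle it.  A cell contributes to the isosurface
    exactly when the isovalue lies between its minimum and maximum
    corner values, so this count grows with isosurface area."""
    v = volume
    # the eight corner-value arrays of every cell in the grid
    corners = [v[i:v.shape[0]-1+i, j:v.shape[1]-1+j, k:v.shape[2]-1+k]
               for i in (0, 1) for j in (0, 1) for k in (0, 1)]
    lo = np.minimum.reduce(corners)   # per-cell minimum corner value
    hi = np.maximum.reduce(corners)   # per-cell maximum corner value
    return [int(np.count_nonzero((lo <= t) & (t <= hi))) for t in isovalues]
```

Evaluating such a metric across the whole range of isovalues, and plotting the result alongside the isovalue slider, is the essence of what makes the contour spectrum a data-driven interface.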
