

Conclusions and future work

In the volume rendering community, transfer function specification is acknowledged to be one of the remaining hard problems that need to be solved before direct volume rendering gains widespread acceptance in a variety of disciplines. This thesis has presented a solution to this problem for situations where the goal of the rendering is to visualize material boundaries in scalar volume datasets, using transfer functions which assign opacity only. The solution is based on the ``histogram volume'', a data structure which measures the position-independent relationship between data values and their derivatives throughout the volume.
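The data structure can be sketched as a joint histogram over data value and derivative measures. Below is a minimal sketch in Python/NumPy, assuming central-difference derivatives and approximating the second derivative along the gradient direction as the directional derivative of the gradient magnitude; the thesis's actual derivative measurement scheme differs in detail:

```python
import numpy as np

def histogram_volume(data, bins=(64, 64, 64)):
    """Joint histogram of (data value, gradient magnitude, second
    derivative along the gradient), tallied over all voxels.
    A sketch only; derivative measurement here uses simple
    central differences via np.gradient."""
    data = data.astype(float)
    gz, gy, gx = np.gradient(data)
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)
    # Approximate the second derivative along the gradient as the
    # directional derivative of |grad f| along the gradient direction.
    ggz, ggy, ggx = np.gradient(gmag)
    d2 = (gx * ggx + gy * ggy + gz * ggz) / (gmag + 1e-12)
    samples = np.stack([data.ravel(), gmag.ravel(), d2.ravel()], axis=1)
    hist, edges = np.histogramdd(samples, bins=bins)
    return hist, edges
```

Because the histogram discards voxel positions and keeps only the value/derivative relationship, it is exactly the position-independent summary the analysis requires.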

At a general level, the histogram volume allows opacity function generation to be separated into two steps-- the first automatic, the second controlled by the user. After the histogram volume has been created, it contains information which permits the automatic calculation of the position function, mapping from data value to a signed distance to the boundary center. The user can then create the boundary emphasis function to map from this spatial domain to opacity. The composition of these two steps is an opacity function. Fortunately, the second mapping is easy to create, since the user is controlling only the character of the rendered boundary, while the difficult portion-- detecting and locating the boundaries in the data value domain-- has been done automatically with the histogram volume. With the techniques presented here for semi-automatic opacity function generation, a large amount of guesswork has been removed from the process, and the interface has been simplified and made more intuitive.
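In code, the two-step structure is simply function composition; the following sketch uses stand-in functions (the names `position` and `emphasis`, and the specific formulas, are illustrative, not the thesis's implementation):

```python
import numpy as np

def opacity_function(values, position, emphasis):
    """Compose the automatically computed position function p(v)
    (data value -> signed distance to the boundary center) with the
    user-drawn boundary emphasis function b(x) to get opacity b(p(v))."""
    return emphasis(position(values))

# Illustrative stand-ins: pretend the boundary center sits at v = 0.5,
# and the user asks to emphasize a thin shell around that center.
position = lambda v: v - 0.5
emphasis = lambda x: np.exp(-(x / 0.1) ** 2)

v = np.linspace(0.0, 1.0, 5)
alpha = opacity_function(v, position, emphasis)
```

Here `alpha` peaks at v = 0.5 and falls off on either side; the user shapes only `emphasis`, while `position` is derived automatically from the histogram volume.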

This thesis has also demonstrated the usefulness of two dimensional opacity functions, first proposed by Levoy in 1988 [Lev88] but used only infrequently since then. Fortunately, the same interface for setting the boundary emphasis function can be used to control either one or two dimensional opacity functions with equal ease.

The histogram volume analysis which is used to generate the position function is based on a certain mathematical boundary model, but there are other possible ways of analyzing the histogram volume to extract the necessary information. We have shown that every boundary that occurs in a dataset produces a characteristic curve in the histogram volume. It would therefore be fitting to use an object recognition technique from computer vision, such as the Hough transform, to find those curves wherever they appear in the histogram volume. The use of the Hough transform for this purpose was briefly explored, but concerns about its computational cost, mathematical complexity, and the correctness of its implementation led to its being abandoned in favor of the much simpler techniques presented here. Nonetheless, the Hough transform and related methods still warrant further exploration.
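To illustrate the voting idea behind such an approach, here is the textbook Hough transform for straight lines in a 2D binary image; detecting the boundary curves of the histogram volume would require a parameterization suited to those curves, so this is only a sketch of the general mechanism, not the method considered in the thesis:

```python
import numpy as np

def hough_lines(binary_img, n_theta=180, n_rho=64):
    """Classic Hough voting: each foreground pixel votes for every
    (rho, theta) line passing through it; peaks in the accumulator
    correspond to lines present in the image."""
    h, w = binary_img.shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    ys, xs = np.nonzero(binary_img)
    for t_idx, theta in enumerate(thetas):
        rho = xs * np.cos(theta) + ys * np.sin(theta)
        r_idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        np.add.at(acc, (r_idx, t_idx), 1)
    return acc, thetas, rhos
```

The computational cost concern is visible even here: the accumulator grows with the parameterization's dimensionality, and boundary curves need more parameters than lines do.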

One important possible improvement to the presented technique is better modeling of the ideal boundary. Each imaging modality, such as CT and MRI, has a characteristic frequency response, which in turn determines the nature of measured boundaries. We assumed Gaussian frequency attenuation, which made the computation of the position function extremely easy. However, a more accurate mapping from data value to position should be possible with more sophisticated models of frequency response, including those that model directional variation in frequency response, a characteristic of all standard scanners. This may necessitate measuring different kinds of derivatives, or recording them in formats other than a histogram volume, but the mapping from position to opacity could still be done with boundary emphasis functions.
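To see why the Gaussian assumption makes the computation easy: a step edge blurred by a Gaussian of width sigma satisfies f''(x) = -(x/sigma^2) f'(x), so the signed distance to the boundary center follows directly as x = -sigma^2 h(v)/g(v), where g and h are the average first and second derivatives measured at data value v. A sketch, with illustrative variable names:

```python
import numpy as np

def position(h_bar, g_bar, sigma, eps=1e-12):
    """Position function under the Gaussian boundary model:
    signed distance to the boundary center, x = -sigma^2 * h/g.
    Guards against division by a vanishing first derivative."""
    g_safe = np.where(np.abs(g_bar) < eps, eps, g_bar)
    return -sigma**2 * h_bar / g_safe
```

A quick sanity check: for sigma = 1, the blurred step has f'(x) proportional to exp(-x^2/2) and f''(x) = -x exp(-x^2/2), and the function above recovers x exactly. A non-Gaussian frequency response breaks this closed form, which is why richer boundary models would demand a different analysis.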

There is also at least one interesting generalization of our method. MRI scanners can measure many different quantities (proton density, spin decay rate, diffusion, etc.), and many MRI datasets contain vector information, with two, three, or more separate measurements at each voxel. Within each material, these separate data values will be roughly constant. Because this characteristic of material uniformity was the main driving assumption behind the development of the techniques for scalar data, it should be possible to extend them to non-scalar data which shares that characteristic. Though the data structures and methods of derivative measurement would be different, the algorithms and results would be largely the same: all the boundary regions in the volume would be made visible more easily, and the user would again enjoy high-level control over the rendered boundaries. There would be the additional benefit of robustness in the boundary detection, because the spatially coincident boundaries in all the data channels would reinforce one another.

Besides these fundamental changes and generalizations of the algorithm, there are a number of smaller modifications which deserve exploration. There is room for improvement in how the directional derivatives are measured. The histogram volume analysis stage relies on a method for collapsing either an entire slice or a scanline of the histogram volume down to one value. Currently an average value is used, but it might be advantageous to use another measure, such as mode or median. Finally, in its current form, the algorithm cannot account for noise in the data beyond a simple threshold mechanism which crudely compensates for the ambient gradient magnitude caused by noise-induced fluctuations in data value. Better noise handling, either as a pre-processing filter or as an integral part of the derivative measurement, is certainly an area for further investigation.
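To make the collapse suggestion concrete, here is a sketch of the scanline collapse step, contrasting a weighted mean (the current behavior) with a weighted median (the robust alternative). The function name and signature are hypothetical, not the thesis's implementation:

```python
import numpy as np

def collapse(counts, centers, stat="mean"):
    """Collapse one scanline of the histogram volume (hit counts per
    derivative bin, with given bin centers) to a single representative
    derivative value."""
    counts = np.asarray(counts, dtype=float)
    centers = np.asarray(centers, dtype=float)
    if counts.sum() == 0:
        return 0.0
    if stat == "mean":
        return float(np.average(centers, weights=counts))
    # Weighted median: first bin center at which the cumulative
    # hit count reaches half the total.
    cum = np.cumsum(counts)
    return float(centers[np.searchsorted(cum, cum[-1] / 2.0)])
```

With a few stray hits in a far-out bin, the weighted mean is pulled toward the outlier while the weighted median stays with the bulk of the hits, which is precisely the robustness that motivates trying measures other than the average.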

Direct volume rendering has proven itself to be a potentially very valuable tool in a wide variety of applications. The power of the tool is currently limited by the difficulties its users face in setting the transfer function. This thesis has offered a solution to the problem for rendering an extremely important class of scalar volume datasets-- those in medical and biological applications-- where the goal is visualizing the structure of material boundaries. In keeping with the spirit of direct volume rendering, whereby the data values map to the rendered image without any geometric pre-computation or algorithmic alteration, this method makes the boundaries visible based on a fundamental relationship measured in the dataset itself. The position-independent relationship between the data value and its first and second derivatives allows for two advancements: automatic detection of the boundaries present in the volume, and an intuitive interface for controlling how those boundaries are rendered.

The traditional trial-and-error path to a useful transfer function has been replaced by a novel and powerful high-level interface for transfer function editing. The core idea is to let the data help the user find the transfer function. Coupling the interface for transfer function specification to the underlying dataset guarantees a degree of appropriateness of the transfer function for the task of visualizing the structure of material boundaries. It is the author's hope that this idea will have relevance to other visualization applications where the promise of direct volume rendering has yet to be realized.

