Texture Based Volume Visualization

Apart from using the volume representation as a sophisticated surface representation, I am also interested in techniques for visualizing volumes. For instance, texture based volume visualization is fast becoming a very powerful technique. This page is a small gallery of images created using diverse texture based methods.

Acknowledgments

All software used to create the images below was written by me, but the techniques were found in the literature. See for instance the following publications:


K. Engel: Interactive High-Quality Volume Rendering with Flexible Consumer Graphics Hardware. Eurographics State-of-the-Art Report (STAR). University of Stuttgart.

High-Quality Volume Graphics on Consumer PC Hardware. SIGGRAPH 2002 Course Notes, no. 42.

The MR scan used for most of the images below is a scan of Lars Pedersen. He was scanned at the Danish Research Center of Magnetic Resonance, H:S Hvidovre Hospital.

Volume Visualization using 2D Textures

The simplest form of volume visualization employs only 2D texture mapping, which is supported by even the cheapest graphics cards available.

To render a volume we need to store the volume as a stack of slices. We keep three stacks of slices for the same volume. Each stack is perpendicular to one of the three major axes, and we always choose the stack whose slices are most nearly perpendicular to the viewing direction.
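
The stack selection itself is a one-liner: in the volume's own coordinate system, the slices most nearly perpendicular to the view are those stacked along the axis where the viewing direction has its largest component. A minimal sketch, assuming the viewing direction has already been transformed into the volume's model space (the function name is mine):

    #include <cmath>

    // Return 0, 1 or 2: the major axis along which the viewing direction
    // has its largest component. Render the slice stack stored along this axis.
    int select_stack(const float view_dir[3])
    {
        int axis = 0;
        for (int i = 1; i < 3; ++i)
            if (std::fabs(view_dir[i]) > std::fabs(view_dir[axis]))
                axis = i;
        return axis;
    }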

Then each slice is rendered back to front. We have to map the densities in the volume to opacity and colour; the simplest approach is to use the volume density as both. The slices are composited using the over operator:

pxl_colour = opacity*colour + pxl_colour*(1-opacity)

where pxl_colour is the colour stored in the framebuffer. A final, important idea is to use the alpha test to reject new fragments (incoming pixel values) if the opacity is below some threshold.

This method requires only basic OpenGL functionality (no extensions).
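
The whole inner loop fits in a few lines of OpenGL. Below is a minimal sketch, assuming the slices have already been uploaded as luminance-alpha textures (density in both channels) and that the caller has sorted the traversal so the most distant slice comes first; the function and its arguments are hypothetical:

    #include <GL/gl.h>
    #include <vector>

    // Render one stack of slices back to front with the over operator.
    void render_slices(const std::vector<GLuint>& stack, float threshold)
    {
        glEnable(GL_TEXTURE_2D);

        // The over operator: src*alpha + dst*(1 - alpha).
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        // Reject fragments whose opacity is below the threshold.
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, threshold);

        const int n = int(stack.size());
        for (int i = 0; i < n; ++i) {
            glBindTexture(GL_TEXTURE_2D, stack[i]);
            float z = -1.0f + 2.0f * float(i) / float(n - 1); // slice position
            glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex3f(-1, -1, z);
            glTexCoord2f(1, 0); glVertex3f( 1, -1, z);
            glTexCoord2f(1, 1); glVertex3f( 1,  1, z);
            glTexCoord2f(0, 1); glVertex3f(-1,  1, z);
            glEnd();
        }
    }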

We can identify a number of weaknesses with this simple approach. First, we need to store three sets of slices. Secondly, the quality of the image depends on the viewing angle. Finally, we get a slight visual pop whenever the viewing direction changes enough that a different stack is selected.

In the images below, the viewpoint is turned slightly, just enough that the most perpendicular slice stack changes. This illustrates the popping:

Volume Editing

Volume editing is very simple. All we have to do is change voxel values and then rebind the textures. This allows us to interactively change a volume.
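
As a small illustration, here is a sketch of a single-voxel edit, assuming an 8-bit density volume kept in CPU memory alongside one 2D texture per slice. For brevity the sketch assumes single-channel (luminance) textures; with three stacks, the corresponding slice in each stack would need the same update. All names are mine:

    #include <GL/gl.h>

    // Set one voxel to zero and re-upload only the affected row of the
    // affected slice texture.
    void erase_voxel(unsigned char* volume, const GLuint* slice_tex,
                     int W, int H, int x, int y, int z)
    {
        volume[(size_t(z) * H + y) * W + x] = 0;   // edit the CPU copy

        // Rebind (re-upload) just the row of the slice that changed.
        glBindTexture(GL_TEXTURE_2D, slice_tex[z]);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, y, W, 1,
                        GL_LUMINANCE, GL_UNSIGNED_BYTE,
                        volume + (size_t(z) * H + y) * W);
    }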

This method was used to edit the Lars Pedersen data in order to show the brain tissue.

While this technique would be more powerful if it were combined with some segmentation technique, it is interesting just how easy it is to write a simple software system for volume visualization and editing. In fact, students in our graphics course create a texture based volume visualization and editing tool as an exercise.

Volume Visualization using 3D Textures

A more attractive approach is to use 3D textures. The same rendering technique can be used, but the slices are now always exactly perpendicular to the viewing direction. This is possible because a 3D texture is like a solid material: applying it to an arbitrary plane is like cutting a thin sheet out of that material.

This method is also not perfect. Instead of popping artifacts we get some motion artifacts, because the slice geometry changes from frame to frame. However, with 3D textures it is easy to change how many slices to use.
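
A sketch of the slicing, assuming the volume fills the unit cube in model space, that inv_modelview maps eye space back to that cube, and that z0/z1 bound the volume in eye space with z0 farthest from the viewer (all names are mine):

    #include <GL/gl.h>

    // Draw view-aligned slices through a 3D texture (OpenGL 1.2 or the
    // EXT_texture3D extension). The texture is assumed to have been
    // created with glTexImage3D from the volume data.
    void draw_view_aligned_slices(GLuint tex3d, const float* inv_modelview,
                                  int num_slices, float z0, float z1)
    {
        glEnable(GL_TEXTURE_3D);
        glBindTexture(GL_TEXTURE_3D, tex3d);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        // Texture coordinates are eye-space positions mapped back into
        // the volume by the inverse modelview matrix.
        glMatrixMode(GL_TEXTURE);
        glLoadMatrixf(inv_modelview);

        // Draw the slice quads directly in eye space.
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();

        for (int i = 0; i < num_slices; ++i) {   // back to front
            float z = z0 + (z1 - z0) * float(i) / float(num_slices - 1);
            glBegin(GL_QUADS);   // large enough to cover the volume on screen
            glTexCoord3f(-1, -1, z); glVertex3f(-1, -1, z);
            glTexCoord3f( 1, -1, z); glVertex3f( 1, -1, z);
            glTexCoord3f( 1,  1, z); glVertex3f( 1,  1, z);
            glTexCoord3f(-1,  1, z); glVertex3f(-1,  1, z);
            glEnd();
        }
        glPopMatrix();
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);
    }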

Isosurface Visualization using 3D Textures and Cubemaps

Cube mapping is a powerful technique for simulating specular reflections on graphics hardware. Specular reflection, however, requires a surface normal; fortunately, we can precompute the normals and store them in the colour channels of the volume. With this in place, it is possible to render Lars as if his head were a bust made of shiny metal.
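
The precomputation is a central-difference gradient per voxel, range-compressed to fit in unsigned bytes. A minimal sketch, assuming a single-channel density volume with values in [0,1] (the helper names are mine):

    #include <algorithm>
    #include <cmath>

    // Sample the density volume with clamping at the borders.
    inline float density(const float* d, int W, int H, int D,
                         int x, int y, int z)
    {
        x = std::clamp(x, 0, W - 1);
        y = std::clamp(y, 0, H - 1);
        z = std::clamp(z, 0, D - 1);
        return d[(size_t(z) * H + y) * W + x];
    }

    // Compute the (negated) central-difference gradient at a voxel,
    // normalize it, and pack it into RGB; the density goes in alpha.
    void pack_normal(const float* d, int W, int H, int D,
                     int x, int y, int z, unsigned char rgba[4])
    {
        // Negated gradient: points outward where density falls off.
        float gx = density(d,W,H,D, x-1,y,z) - density(d,W,H,D, x+1,y,z);
        float gy = density(d,W,H,D, x,y-1,z) - density(d,W,H,D, x,y+1,z);
        float gz = density(d,W,H,D, x,y,z-1) - density(d,W,H,D, x,y,z+1);
        float len = std::sqrt(gx*gx + gy*gy + gz*gz);
        if (len > 0.0f) { gx /= len; gy /= len; gz /= len; }

        // Range-compress [-1,1] to [0,255].
        rgba[0] = (unsigned char)(255.0f * (0.5f * gx + 0.5f));
        rgba[1] = (unsigned char)(255.0f * (0.5f * gy + 0.5f));
        rgba[2] = (unsigned char)(255.0f * (0.5f * gz + 0.5f));
        rgba[3] = (unsigned char)(255.0f * density(d,W,H,D, x,y,z));
    }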

Isosurface Visualization using 3D Textures

If normals are stored in the volume, we can also render isosurfaces in a simpler way. In the example below, register combiners (a fragment shading facility available on NVIDIA NV2x graphics cards) were used to compute ambient, diffuse, and specular illumination for each pixel.
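
The combiner setup itself is quite NVIDIA-specific, but the per-pixel computation it implements is just the usual lighting model. Written out in plain C++ for clarity (a sketch, not the combiner setup itself; n is the normal unpacked from the texture's RGB back to [-1,1], l and h the unit light and half-angle vectors):

    #include <algorithm>

    // Ambient + diffuse + specular illumination from a stored normal.
    // Register combiners have no pow(), so the specular exponent is
    // approximated by repeated squaring of n.h.
    float shade(const float n[3], const float l[3], const float h[3],
                float ka, float kd, float ks)
    {
        float ndotl = std::max(0.0f, n[0]*l[0] + n[1]*l[1] + n[2]*l[2]);
        float ndoth = std::max(0.0f, n[0]*h[0] + n[1]*h[1] + n[2]*h[2]);
        float spec = ndoth;
        for (int k = 0; k < 4; ++k)   // (n.h)^16 via four squarings
            spec *= spec;
        return ka + kd * ndotl + ks * spec;
    }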

Some things are worth noting. If we zoom in, it is possible to see some faint noise in the shading. This is probably due to the fact that the pixel pipeline is only eight bits wide. Fortunately, this is changing fast! The shading is otherwise pretty smooth, but some artifacts from the slicing are still visible around the eyes.

Slab-based Volume Visualization using 3D Textures

A great recent idea due to Klaus Engel et al. is to look at two slices at a time. If, for a given pixel, the value at the first slice is below the isovalue and the value at the next slice is above, the isosurface must cross the slab between them, and we can linearly interpolate between the gradients stored in the two slices to obtain the gradient at the isovalue.

This results in very smooth shading. Using this method removes many artifacts from the Lars Pedersen visualization.
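
The per-pixel test and interpolation are straightforward. A sketch in plain C++ (on the GPU this runs as a fragment program; the names are mine):

    #include <cmath>

    // Given the densities f and b at the front and back slice of a slab
    // and the gradients gf and gb stored there, decide whether the
    // isosurface crosses the slab and, if so, interpolate the gradient
    // to the crossing point.
    bool slab_isosurface(float f, float b, float iso,
                         const float gf[3], const float gb[3],
                         float n_out[3])
    {
        if (f == b || (f - iso) * (b - iso) > 0.0f)
            return false;                    // no crossing in this slab

        float t = (iso - f) / (b - f);       // crossing position in [0,1]
        for (int i = 0; i < 3; ++i)
            n_out[i] = gf[i] + t * (gb[i] - gf[i]);

        float len = std::sqrt(n_out[0]*n_out[0] + n_out[1]*n_out[1]
                              + n_out[2]*n_out[2]);
        if (len > 0.0f)
            for (int i = 0; i < 3; ++i)
                n_out[i] /= len;
        return true;                         // shade with n_out as before
    }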

The candypig below is a synthetic volume created using my level-set based volume sculpting system. This volume is a distance field (i.e. voxel values change linearly with distance to the surface) and pretty close to the ideal type of data for the slab based visualization shown above. The images below show the candypig rendered using just slices (left) and slabs (right):

Finally, I have applied this method to a CT scan of a skull:

Ray Casting in a Fragment Program

Texture based volume rendering along the lines discussed above is very different from software based ray casting. However, it is now possible to write fragment shaders that perform complicated operations. This means that ray casting algorithms very similar to software ray casting have now become feasible in a fragment program [Kruger et al.]

I have implemented a simple ray casting algorithm using the OpenGL Shading Language. In one call to a GLSL fragment program, a single ray is cast through the volume. Of course, the program is simplistic: there is a fixed number of iterations and hence no early termination. The program doesn't even detect whether it is outside the volume. However, this was as far as I was able to get on a GeForce FX 5900 with the self-imposed constraint that one iteration of the fragment program should equal one entire ray traversal.

Of course, to compute the ray directions it was necessary to first render the front and then the back side of the cube containing the volume. To get the desired accuracy, the results were collected in floating point textures; for this purpose, Mark Harris's RenderTexture class was used.
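
Below is a minimal sketch of such a single-pass ray caster, written as a GLSL fragment shader embedded in a C++ string. It assumes the front-face and back-face positions of the cube (in volume coordinates) have been rendered into the two textures bound to entry and exit, as described above; the names and the iteration count are illustrative:

    // Single-pass ray casting fragment shader (GLSL), as a C++ string.
    static const char* raycast_fs = R"(
        uniform sampler3D volume;   // densities and colours
        uniform sampler2D entry;    // front-face positions of the cube
        uniform sampler2D exit;     // back-face positions of the cube
        uniform vec2 viewport;      // framebuffer size in pixels

        void main()
        {
            vec2 p     = gl_FragCoord.xy / viewport;
            vec3 start = texture2D(entry, p).xyz;
            vec3 stop  = texture2D(exit, p).xyz;
            vec3 delta = (stop - start) / 100.0;

            vec3 pos    = start;
            vec4 result = vec4(0.0);
            for (int i = 0; i < 100; ++i) {   // fixed count, no early exit
                vec4 src = texture3D(volume, pos);
                // front-to-back compositing with the over operator
                result.rgb += (1.0 - result.a) * src.a * src.rgb;
                result.a   += (1.0 - result.a) * src.a;
                pos += delta;
            }
            gl_FragColor = result;
        }
    )";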

The image below shows the well-known teddy bear model. The quality leaves something to be desired, but that is, of course, due to the fact that only 100 samples were used. More than that and the program would not run on an FX5900. With several passes, far better results could have been obtained. To give an indication of what is possible, the Lars Pedersen data set was rendered with a very small step size. This gives better quality, but only the face can be rendered in 100 iterations.

Conclusions

I find that ray casting was surprisingly simple to implement, while preintegration and view aligned slices required jumping through some hoops that were a bit tricky. On the other hand, my implementation was simplistic, lacking early termination for instance. However, that should be simple to address on a modern GPU. The texture based volume visualization methods in general are cool, but they do have some limitations. The lack of high numerical precision in the fragment pipelines of modern graphics cards has been addressed, and now we can use floating point arithmetic. A remaining problem is volume size: large volumes typically need to be split up into smaller pieces that are rendered separately.
Andreas Bærentzen, jab 'at' imm.dtu.dk
Last modified: Wed Sep 8 10:52:07 CEST 2004