Hardware accelerated volume rendering

Written by Paul Bourke
November 2003


The following outlines a programming project aimed at making full use of OpenGL accelerated hardware to render volume datasets at interactive frame rates (for example, greater than 10 frames per second). Only datasets that fit into main memory were considered; that is, disk spooling was not supported. In addition to "normal" use on a workstation, another goal was for the software to support stereoscopic projection, using either frame sequential stereo or the side-by-side stereo pairs commonly used in passive stereoscopic projection environments.


No matter how fast the graphics hardware, there will always be volumes of sufficient resolution to make interactive rates impossible; at the time of writing, with the graphics cards available (nVidia FX2000 and Wildcat 6110), this occurred at around the 500x500x500 size. A number of approaches were taken to improve interactivity:

  • The operator may choose the level of subsampling of the native volume resolution. This is typically by a factor of 2, 4, or 8. This enables quick exploration, testing of voxel to colour mappings, and view point considerations. A single keystroke or menu selection changes the subsampling level.

  • When the operator is interacting with the model it is subsampled by a factor of two in all dimensions, typically resulting in a factor of 8 performance improvement. Interestingly, the subsampled data often reveals additional structure: with half the number of contributing texture planes, the overall transparency increases.

  • Instead of always rendering the whole volume, a smaller subvolume (shown in yellow in the images on the right) can be moved around the whole volumetric dataset (shown in grey). This was initially done for performance reasons but turned out to be a powerful way of exploring large datasets, particularly when the camera is inside the data volume.
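The factor-of-two subsampling used during interaction can be sketched as a box average over each 2x2x2 block of voxels. This is an illustrative sketch only (the function name and memory layout are assumptions, not the actual glvol code), for an 8-bit volume with even dimensions:

```c
#include <stdlib.h>

/* Halve an 8-bit volume in each dimension by averaging every 2x2x2
   block of voxels. Assumes even dimensions and x-fastest layout.
   Returns a newly allocated (nx/2)*(ny/2)*(nz/2) volume, or NULL. */
unsigned char *subsample2(const unsigned char *vol, int nx, int ny, int nz)
{
    int sx = nx / 2, sy = ny / 2, sz = nz / 2;
    unsigned char *out = malloc((size_t)sx * sy * sz);
    if (!out) return NULL;

    for (int k = 0; k < sz; k++)
        for (int j = 0; j < sy; j++)
            for (int i = 0; i < sx; i++) {
                int sum = 0;
                for (int dk = 0; dk < 2; dk++)
                    for (int dj = 0; dj < 2; dj++)
                        for (int di = 0; di < 2; di++)
                            sum += vol[((size_t)(2*k + dk) * ny + (2*j + dj)) * nx
                                       + (2*i + di)];
                out[((size_t)k * sy + j) * sx + i] = (unsigned char)(sum / 8);
            }
    return out;
}
```

Applied twice this gives the factor-of-4 level, and so on; each halving reduces the number of textured planes (and the voxels fetched) by a factor of eight.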


The algorithm uses texture mapped planes implemented in OpenGL. The textures are extracted from the volumetric data after applying a colour/opacity mapping function. This function is shown in the lower left corner of the images on the right: the pink region is the histogram of the volumetric data, the colour map runs along the bottom of the histogram, and the white curve indicates the opacity. At the time of writing, 8, 16, and 32 bit integer voxel value datasets are supported, as per the vol file format.
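The colour/opacity mapping can be thought of as a 256-entry RGBA lookup applied to each voxel before the slice textures are uploaded. A minimal sketch for the 8-bit case; the table layout and names are hypothetical, not the actual vol/glvol structures:

```c
#include <stddef.h>

/* One RGBA texel per voxel: colour from the colour map, alpha from the
   opacity curve (the white curve drawn over the histogram). */
typedef struct { unsigned char r, g, b, a; } RGBA;

void map_voxels(const unsigned char *vox, RGBA *texel, size_t n,
                unsigned char cmap[256][3],  /* colour map           */
                unsigned char opacity[256])  /* opacity curve, 0-255 */
{
    for (size_t i = 0; i < n; i++) {
        unsigned char v = vox[i];
        texel[i].r = cmap[v][0];
        texel[i].g = cmap[v][1];
        texel[i].b = cmap[v][2];
        texel[i].a = opacity[v];  /* drives the OpenGL blending */
    }
}
```

Because the mapping is a pure lookup, editing the colour map or opacity curve only requires re-running this pass and re-uploading the textures, not touching the raw volume.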

The orientation of the planes depends on the relative position of the virtual camera. One of three sets is used, each set perpendicular to one of the axes of the volumetric cube. The set chosen is the one closest to being perpendicular to the camera view direction; see figures 1 and 2 for examples of two of the three possible plane orientations (grey lines within the yellow subcube). As the camera passes across the 45 degree edges of the inspection subvolume, the plane sets are swapped and a new texture set computed. As long as the camera stays within the current quadrant, no reference need be made to the volumetric data. The textured planes are drawn from back to front so that the OpenGL blend operations work correctly (the depth test is disabled).
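Selecting which of the three plane sets to draw reduces to finding the axis most nearly parallel to the view direction (equivalently, the plane set most nearly perpendicular to it); the swap happens exactly when the camera crosses a 45 degree boundary, where two components of the view vector have equal magnitude. A sketch of that test (axis conventions here are an assumption):

```c
#include <math.h>

/* Return the plane-set axis to use for the current view direction:
   0 = planes perpendicular to x, 1 = y, 2 = z. The chosen axis is the
   one whose component of the view vector has the largest magnitude,
   so the textured planes face the camera as squarely as possible. */
int plane_set_axis(double vx, double vy, double vz)
{
    double ax = fabs(vx), ay = fabs(vy), az = fabs(vz);
    if (ax >= ay && ax >= az) return 0;
    if (ay >= az) return 1;
    return 2;
}
```

Only when this result changes does a new texture set need to be computed; within a quadrant the existing textures are reused, and the planes of the chosen set are submitted from back to front with the depth test disabled so that blending accumulates correctly.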

Figure 1: Orientation of planes

Figure 2: Orientation of planes


Usage: glvol [options] volfile
          -h     this text
          -f     full screen
          -s     active stereo
         -ss     dual screen stereo
          -m s   load map file
          -M s   load marker file
          -r n   initial resolution (1,2,3,4)
         -vx n   initial view volume x size
         -vy n   initial view volume y size
         -vz n   initial view volume z size
          -c s   load camera file
          -a     start in auto rotate mode
          -A n   set tween frames for animation
Key Strokes
  arrow keys     move inspection box
       mouse     rotate/roll camera
           h     move camera to home position
           f     focus camera to current inspection cube
           c     center inspection volume
           w     write current window to a TGA file
           W     write frames to TGA file continuously
         i,k     translate, up/down
         j,l     translate left/right
 x,X,y,Y,z,Z     automatic scanning along axes
         <,>     move camera forward/backward
         +,-     change inspection box size
          f1     toggle decoration display
          f2     toggle texture display
          f3     toggle histogram display
          f4     toggle text information
          f5     toggle camera tracking
          f6     toggle autorotate
          f7     toggle 3d cursor
      1,2,3,4     set subsampling level to 1,2,4,8
           q     quit

MRI data

Volumetric data courtesy of Brent Tully

Helix waves

2dF subvolume

Screen dump


Rendering and visualisation of furnace data

Written by Paul Bourke
May 2004

Temperature distribution

Oxidant distribution

Nitrous Oxide distribution

Flow pattern

Visualisation of confocal microscope data

Written by Paul Bourke
March 2000

Confocal microscope data from Swinburne University
School of Biophysical Sciences and Electrical Engineering.


The following documents a visualisation exercise, namely to interactively explore 3D voxel based data from a confocal microscope. The software that came with the microscope provided the ability to view semi transparent rendered images but their generation was slow and the depth cues weren't strong. The goal then was to create interactive 3D models and present them in an existing 3D stereo pair projection environment.


The following illustrates the images as they are saved from the confocal microscope software, which writes a multilayer, 16-bit TIFF file. Selected slices from that file are presented below for the dataset being demonstrated here. The original images are 512 pixels square, while the examples below are presented at 200 pixels square. The original file consisted of 18 layers; as is typical of this sort of acquisition, the sampling in one direction (18 samples) is much coarser than the sampling in the other two directions (512 samples).

Subsampling voxel space

The first stage in processing the images was to subsample in the image plane by a factor of 8. It turns out that in this application the in-plane dimensions are greatly oversampled and a factor of 8 is not unreasonable to smooth out the noise in the in-plane data. It also conveniently brings the voxel cube into more appropriate/manageable dimensions of 64x64x18.
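The in-plane reduction can be sketched as a simple box average; a hypothetical helper, assuming a square 16-bit image whose side is a multiple of the factor (512 → 64 for f = 8):

```c
/* Downsample an n x n 16-bit image by integer factor f, averaging each
   f x f block (e.g. n = 512, f = 8 gives a 64 x 64 result in out). */
void downsample2d(const unsigned short *in, int n, int f,
                  unsigned short *out)
{
    int m = n / f;
    for (int j = 0; j < m; j++)
        for (int i = 0; i < m; i++) {
            long sum = 0;
            for (int dj = 0; dj < f; dj++)
                for (int di = 0; di < f; di++)
                    sum += in[(j*f + dj) * n + (i*f + di)];
            out[j*m + i] = (unsigned short)(sum / (f*f));
        }
}
```

Averaging rather than decimating is what provides the noise smoothing mentioned above: each output pixel pools f*f input samples.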

Marching cubes

The marching cubes algorithm is applied to the voxel space to derive isosurfaces at chosen isocontour levels. Two examples are shown below: in the left example the resulting triangular facets are shown in wire frame; the right-hand image shows the same data and view as a shaded model. The normal at each vertex is derived by averaging the normals of the faces that share the vertex.

Isolevel: 200

Isolevel: 200

The noise that one sees in the top right image makes the 2D views very hard to interpret, but in stereo it adds a visually powerful depth effect and even makes the geometry of the rest of the structure clearer. These models have around 15,000 triangular facets; at this size they can easily be rotated interactively (around 20 frames per second) even on cards at the low performance end of the OpenGL market.

Multiple isosurfaces

The following image on the right shows multiple isolevel surfaces, each made semi-transparent. While the structure isn't as clear in this static image, it is very clear when it can be interactively inspected in stereo (the extra contour levels only added another 10,000 facets to the model).

Isolevel: 200

Isolevel: 120,160,200