CubeRender

Fast 360 degree 3D model exploration technique using 6 precomputed views mapped onto the interior faces of a cube

Written by Paul Bourke January 1991

Image sets and other resources

OpenGL -- Radiance -- GeomView -- Macintosh -- VRML -- PovRay


Introduction

This report discusses an interesting technique that makes it possible to interactively view a high quality rendered environment from a single position using only 6 precomputed renderings. The technique has great potential for architectural presentation of rendered scenes because it combines fast (interactive) user control over the view direction with very high quality renderings.

The problem

The realism now possible with many rendering packages can be very attractive as a presentation tool. The considerable time required (often hours) to perform such renderings is not a concern because they are performed well ahead of the presentation. At the other end of the scale is the user controlled walk-through experience where the user may choose to explore the environment at their leisure. In order to achieve 15 to 20 frames per second, only very simple models with crude rendering techniques can be attempted; there simply isn't computer hardware fast enough to perform high quality rendering of geometrically complicated models. The question then is: what solutions exist between these two extremes? How can one present a 3D environment so that the user can explore it interactively and at the same time view it with a high degree of realism?

A previous attempt

One approach, which was the topic of a previous report by myself, is to precompute many views at many positions within the 3D model. In the example I demonstrated, 8 views (45 degree steps) were created for each view position. The view positions were all at an average human eye height and lay on a regular grid aligned to fit within the interior of the environment, a room in the example. The correct view was retrieved from the database of precomputed views as the user moved from node to node and turned between view directions. In this solution both the movement between nodes and the changes in view direction at a node are discrete. Because of the potentially large number of images necessary, both these discrete steps were quite large: 45 degrees for turning and 2 meters for movement. Even this resulted in weeks of rendering time as well as hundreds of megabytes of disk storage.

This solution

The approach taken here is to separate the exploration process into two activities: moving from one position to another in the scene, and turning one's head while remaining at rest. Since an acceptable way of exploring an architectural environment is to move to a position and then look around, this solution slows down the rate at which movement between positions can be achieved but greatly speeds up the way the viewer can look about from a particular position. It is expected that, in implementations of this technique, while the movement between positions remains discrete, the changes in view direction can become continuous and unconstrained.

Consider a viewing position within a 3D environment and make 6 renderings from this position. These 6 renderings are taken along each of the coordinate axes (positive and negative); each is a perspective view with a 90 degree camera aperture. Another way of imagining these 6 views is as the projection of the scene onto each of the faces of a unit cube centered at the view position.

Once these 6 images have been generated, forget about the original model and create a new model consisting only of a unit cube centered at the origin, with each of the 6 images applied as a texture (image map) to the corresponding internal face of the cube. The interesting part is that if the inside of the cube is viewed from the origin, the seams of the cube cannot be seen (given that the cube is rendered with ambient light only). Indeed, as the view direction is changed, what one sees is the same as what one would see with that view direction in the original model.
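
To make the geometric argument concrete, the following is a minimal C sketch (purely illustrative, not code from any of the viewers described later) of how a view ray leaving the cube centre selects a face and a texture coordinate. The dominant component of the direction picks the face; scaling the ray so that component reaches the face at +/-0.5 and shifting the remaining two components into [0,1] gives the position on that face, which is exactly where the original 90 degree rendering stored the corresponding pixel. The face names loosely follow the Radiance cube model given later; the in-plane orientation (which component maps to "u" and whether it is flipped) depends on the cube mapping convention of figure 8 and is not resolved here.

    #include <math.h>

    typedef enum { LEFT_F, RIGHT_F, FRONT_F, BACK_F, BOTTOM_F, TOP_F } Face;

    /* Classify a view direction (dx,dy,dz) from the cube centre.
       Returns the face struck and the in-plane coordinates in [0,1]. */
    Face CubeFace(double dx, double dy, double dz, double *u, double *v)
    {
       double ax = fabs(dx), ay = fabs(dy), az = fabs(dz), t;

       if (ax >= ay && ax >= az) {          /* strikes an x = +/-0.5 face */
          t = 0.5 / ax;
          *u = t * dy + 0.5;  *v = t * dz + 0.5;
          return (dx > 0) ? RIGHT_F : LEFT_F;
       }
       if (ay >= az) {                      /* strikes a y = +/-0.5 face */
          t = 0.5 / ay;
          *u = t * dx + 0.5;  *v = t * dz + 0.5;
          return (dy > 0) ? BACK_F : FRONT_F;
       }
       t = 0.5 / az;                        /* strikes a z = +/-0.5 face */
       *u = t * dx + 0.5;  *v = t * dy + 0.5;
       return (dz > 0) ? TOP_F : BOTTOM_F;
    }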

The advantage is that the "rendering" of the textured cube can be done very quickly. Very little is needed in the rendering pipeline: there are no light sources, only ambient light, and no reflected rays need to be computed. In fact many graphics engines have fast built-in texture mapping routines ideal for exactly this sort of operation.

Example 1

As an example, the following shows the 6 precomputed views from a computer based 3D model created by Matiu Carr.


Figure 1

The views are arranged as if the cube they are applied to is folded out. An alternative method of folding out the cube is shown in figure 2 where the top and bottom faces are cut into quarters.


Figure 2

As can be seen there are no "gaps", although there are discontinuities at the seams. With some imagination you can convince yourself that the discontinuities disappear when the cube is folded back together. Two views from the interior of this cube are shown below on the left; the same views are shown on the right, this time with the edges of the cube drawn in.


Figure 3


Figure 4

Looking at the images on the left, it is hard to imagine that you are viewing only the walls of a cube with murals painted on them. It is quite easy to imagine the painted walls when the edges of the cube are visible as in the images on the right.

Here is a movie generated using the approach described above. (Raw images)

Example 2

Model by Bill Rattenbury. (Raw images)


Figure 5 - The folded out cube.


Figure 6 - The views on the left have the edges shown.


Figure 7 - Here is the cube viewed from the outside.


Geomview Viewer

If you have Geomview with texture capabilities then the following 6 OOGL files will allow experimentation. Be sure to locate the camera at the origin ([W]reset) and just use the [o]rbit tool.


Macintosh viewer

A Macintosh viewer was written for evaluation purposes. It displays the view for any user chosen view direction and gives the user control over the camera aperture and window size. The camera view direction can be entered directly as a vector, or the left-right and up-down arrow keys will rotate the view direction in the horizontal or vertical plane respectively by a user specified angle increment.
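
In outline, the arrow key handling amounts to rotating the view direction vector about either the vertical axis (left/right) or a horizontal axis perpendicular to the view (up/down). The following C sketch of that idea is not taken from the viewer's source; it assumes z is the vertical axis, as in the Radiance cube model later in this document.

    #include <math.h>

    typedef struct { double x, y, z; } XYZ;

    /* Left/right arrows: rotate the view direction about the vertical (z) axis. */
    XYZ RotateHorizontal(XYZ d, double angle)
    {
       XYZ r;
       r.x = d.x * cos(angle) - d.y * sin(angle);
       r.y = d.x * sin(angle) + d.y * cos(angle);
       r.z = d.z;
       return r;
    }

    /* Up/down arrows: change the elevation of the view direction while
       keeping its heading and length, avoiding the exactly vertical case. */
    XYZ RotateVertical(XYZ d, double angle)
    {
       XYZ r;
       double h = sqrt(d.x * d.x + d.y * d.y);      /* horizontal length  */
       double len = sqrt(h * h + d.z * d.z);
       double phi;

       if (h < 1e-10)                               /* looking straight up or down */
          return d;
       phi = atan2(d.z, h) + angle;                 /* new elevation angle */
       r.x = d.x * len * cos(phi) / h;
       r.y = d.y * len * cos(phi) / h;
       r.z = len * sin(phi);
       return r;
    }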

The viewer can also be used to view user generated environments, given that the user can create the 6 precomputed views correctly. Examples of six precomputed view images are supplied with the Macintosh viewer as PICT files. The orientation of the 6 views with respect to each other must match the example in figure 1; this orientation is shown explicitly in the following cube mapping diagram.


Figure 8 (Raw images)

The Macintosh viewer can also show where the edges of the cube are; this is nice for demonstration purposes and was used to generate the images shown in this document.


POVRAY Example

Raw images, POVRAY scene, POVRAY ini

The following example was created by Joseph Strout and demonstrates how the 6 views might be created using POVRAY.


Figure 9


Radiance Example

If you are using Radiance then the following two listings contain the code necessary to create the appropriate texture mapped cube from a position within your favourite Radiance model. They are also provided with this document as two scripts, MAKE6 and MAP6. Your use of these for a Radiance model called "mymodel.rad", looking from position (x,y,z), might be something like the following; of course you will need to substitute your favourite or necessary options in the first two steps.

	oconv mymodel.rad > mymodel.oct
	make6 x y z mymodel
	map6 x y z | oconv - > x_y_z.oct
	rview	-av 1 1 1 -ab 0 -ps 1 -dr 0 -lr 0 \
		-vh 90 -vv 90 -vp 0 0 0 x_y_z.oct

The following are the Radiance rpict calls required to create the 6 views from one view position. The important things here are the up vectors for the top and bottom views, so that the mapping onto the cube works correctly later on.

#
# Call this with four parameters.
# The first three are the camera position coordinates,
# the last is the oct file name (.oct assumed).
#
# One rpict call per cube face; the output file name suffix encodes
# the view direction, eg p+100 is the view along the +x axis.
#
# View along +x
rpict -vp $1 $2 $3 -vd 1 0 0 -vh 90 -vv 90 \
      -av .1 .1 .1 \
      -x 300 -y 300 \
      $4.oct > $1_$2_$3_p+100.pic
# View along -x
rpict -vp $1 $2 $3 -vd -1 0 0 -vh 90 -vv 90 \
      -av .1 .1 .1 \
      -x 300 -y 300 \
      $4.oct > $1_$2_$3_p-100.pic
# View along +y
rpict -vp $1 $2 $3 -vd 0 1 0 -vh 90 -vv 90 \
      -av .1 .1 .1 \
      -x 300 -y 300 \
      $4.oct > $1_$2_$3_p0+10.pic
# View along -y
rpict -vp $1 $2 $3 -vd 0 -1 0 -vh 90 -vv 90 \
      -av .1 .1 .1 \
      -x 300 -y 300 \
      $4.oct > $1_$2_$3_p0-10.pic
# View along +z (top), up vector forced to +y
rpict -vp $1 $2 $3 -vd 0 0 1 -vu 0 1 0 -vh 90 -vv 90 \
      -av .1 .1 .1 \
      -x 300 -y 300 \
      $4.oct > $1_$2_$3_p00+1.pic
# View along -z (bottom), up vector forced to +y
rpict -vp $1 $2 $3 -vd 0 0 -1 -vu 0 1 0 -vh 90 -vv 90 \
      -av .1 .1 .1 \
      -x 300 -y 300 \
      $4.oct > $1_$2_$3_p00-1.pic

The following is the Radiance model of a cube with the 6 views mapped on as colorpict textures. Replace $1, $2, and $3 with the coordinates of your view position. It is this model which is rendered using rview or rpict, remembering that the view position should be (0,0,0) and that the ambient light level needs to be high since there are no light sources.

void plastic flat
0 0
5 1 1 1 0 0
flat colorpict top
13 red green blue $1_$2_$3_p00+1.pic picture.cal pic_u pic_v 
   -t -.5 -.5 0 -ry 180
0 0
flat colorpict bottom
11 red green blue $1_$2_$3_p00-1.pic picture.cal pic_u pic_v 
   -t -.5 -.5 0
0 0
flat colorpict left
15 red green blue $1_$2_$3_p-100.pic picture.cal pic_u pic_v 
   -t -.5 -.5 0 -rz 90 -ry 90
0 0
flat colorpict right
15 red green blue $1_$2_$3_p+100.pic picture.cal pic_u pic_v 
   -t -.5 -.5 0 -rz -90 -ry -90
0 0
flat colorpict back
13 red green blue $1_$2_$3_p0+10.pic picture.cal pic_u pic_v 
   -t -.5 -.5 0 -rx 90
0 0
flat colorpict front
15 red green blue $1_$2_$3_p0-10.pic picture.cal pic_u pic_v 
   -t -.5 -.5 0 -rz 180 -rx -90
0 0
top polygon p1
0 0 12         -0.5               -0.5                0.5
               -0.5                0.5                0.5
                0.5                0.5                0.5
                0.5               -0.5                0.5
bottom polygon p2
0 0 12         -0.5               -0.5               -0.5
                0.5               -0.5               -0.5
                0.5                0.5               -0.5
               -0.5                0.5               -0.5
back polygon p3
0 0 12          0.5                0.5               -0.5
                0.5                0.5                0.5
               -0.5                0.5                0.5
               -0.5                0.5               -0.5
front polygon p4
0 0 12          0.5               -0.5               -0.5
               -0.5               -0.5               -0.5
               -0.5               -0.5                0.5
                0.5               -0.5                0.5
left polygon p5
0 0 12         -0.5               -0.5               -0.5
               -0.5                0.5               -0.5
               -0.5                0.5                0.5
               -0.5               -0.5                0.5
right polygon p6
0 0 12          0.5               -0.5               -0.5
                0.5               -0.5                0.5
                0.5                0.5                0.5
                0.5                0.5               -0.5


VRML Example

A VRML example, a text version, the raw images.

For this to function properly your VRML player must support the following
  • Allow view direction rotation without moving the camera position from the origin.
  • The surfaces need to totally ignore light position by either supporting ambient light only, or by honouring the ambientColor settings for the surfaces of the cube.
  • Any camera based "headlight" needs to be turned off.


OpenGL Example

OpenGL is ideally suited to this technique as long as your OpenGL implementation has good texture mapping support. It is simply necessary to create the 6 faces of a cube, specifying the texture coordinates, and map the six images onto the faces appropriately.
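
As an illustration of the idea, a single face might be drawn as follows. This is only a sketch and is not extracted from the program provided below: "texid" is assumed to already hold a texture created with glGenTextures()/glTexImage2D() from the image rendered looking along -y, the cube here has side 2 and is centred at the origin, and the texture coordinate order must be chosen to match the cube mapping of figure 8. The other five faces follow the same pattern with their own textures, and lighting is assumed to be disabled so the images appear at full brightness.

    #include <GL/gl.h>

    /* Draw the y = -1 face of a 2 unit cube centred at the origin,
       textured with the precomputed view looking along the -y axis. */
    void DrawFrontFace(GLuint texid)
    {
       glEnable(GL_TEXTURE_2D);
       glBindTexture(GL_TEXTURE_2D, texid);
       glBegin(GL_QUADS);
       glTexCoord2f(0.0, 0.0); glVertex3f(-1.0, -1.0, -1.0);
       glTexCoord2f(1.0, 0.0); glVertex3f( 1.0, -1.0, -1.0);
       glTexCoord2f(1.0, 1.0); glVertex3f( 1.0, -1.0,  1.0);
       glTexCoord2f(0.0, 1.0); glVertex3f(-1.0, -1.0,  1.0);
       glEnd();
       glDisable(GL_TEXTURE_2D);
    }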

The source code to a simple OpenGL program that implements this technique along with 6 example textures is provided here. To compile it you will need to have the GL libraries as well as the GLUT libraries correctly installed.

The "usage" for the viewer supplied is as follows, note the construction line toggle which is nice for showing people where the edges of the cube actually are.

Usage:    cuberender -x nnn -y nnn [-h] [-f] [-c]
      -x nnn   width of the images, required
      -y nnn   height of the images, required
          -h   this text
          -f   full screen
          -c   show construction lines
Key Strokes
  arrow keys   rotate left/right/up/down
  left mouse   rotate left/right/up/down
middle mouse   roll
 right mouse   menus
         <,>   decrease, increase aperture
           c   toggle construction lines
           q   quit

The example images provided here are 512 pixels square; using a 4D51T card in a DEC Alpha the view could be rotated at around 10 frames per second.

Antialiasing

The code provided above uses GL_NEAREST in the calls
    glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);

This means that only the single nearest texel is used for each displayed pixel, which can lead to aliasing artifacts. One way around this is to use GL_LINEAR instead, in which case a weighted 2x2 average is formed; however, this requires special handling at the edges of each face. OpenGL provides support for this, but it means that you need to create textures that have a 1 pixel border, where the border holds the appropriate pixels from the adjacent faces.

So the calls would change as follows
    glTexImage2D(GL_TEXTURE_2D,0,4,w,h,0,GL_RGBA,GL_UNSIGNED_BYTE,bottom);
becomes
    glTexImage2D(GL_TEXTURE_2D,0,4,w+2,h+2,1,GL_RGBA,GL_UNSIGNED_BYTE,bottom);
and
    glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
become
    glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
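
The fiddly part is constructing the (w+2) by (h+2) bordered images in the first place. The following sketch shows the idea for one border only, assuming the face images are held as w by h RGBA byte arrays; which neighbouring face feeds which border, and whether its edge row must be reversed, depends on the cube mapping of figure 8, so the pairing used here is illustrative rather than definitive.

    #include <string.h>

    /* Copy a w x h RGBA face into the interior of a (w+2) x (h+2) RGBA
       buffer and fill one border row from an edge row of a neighbouring
       face.  The remaining three borders and the four corner pixels are
       filled the same way from the other adjacent faces. */
    void MakeBorderedFace(const unsigned char *face,      /* w*h*4 bytes     */
                          const unsigned char *neighbour, /* w*h*4 bytes     */
                          unsigned char *bordered,        /* (w+2)*(h+2)*4   */
                          int w, int h)
    {
       int j;

       /* Interior: row j of the face goes to row j+1, starting at column 1. */
       for (j = 0; j < h; j++)
          memcpy(bordered + ((j + 1) * (w + 2) + 1) * 4, face + j * w * 4, w * 4);

       /* One border row (row 0), taken here from the last row of the
          neighbouring face; the correct row and direction depend on the
          cube mapping. */
       memcpy(bordered + 1 * 4, neighbour + (h - 1) * w * 4, w * 4);
    }

With the borders in place GL_LINEAR can average across the seams; the texture wrap mode should probably also be set to GL_CLAMP (rather than GL_REPEAT) so that the border texels are the ones blended in at the very edge of each face.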

Comparison between CubeView and Apple QuickTime VR

The characteristics are ordered in three sections: those which are clearly in CubeView's favour, those for which there is little difference, and finally those in QuickTime VR's favour. Most of the items in QuickTime VR's favour arise simply from a lack of work done to refine this technique; they are not inherent to the technique itself.
Attribute                               CubeView         QuickTime VR
---------------------------------------------------------------------------
Full 360 vertical viewing               Yes              No
Image quality                           Excellent        Lossy compression
Ease of scene generation                Very easy        More difficult
Multiplatform scene generation          Yes              Maybe one day
Distortion                              None             Some
Full camera attribute control           Yes              Not yet
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Full 360 horizontal viewing             Yes              Yes
Based on precomputed information        Yes              Yes
Suitable for computer generated scenes  Yes              Yes
File sizes                              Similar          Similar
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Multiplatform playback support          Could be         Yes
Fast playback (interactive)             Needs work       Yes
Multiple nodes                          Needs work       Yes
Object nodes and views                  Needs work       Yes
Suitable for photographic scenes        Difficult        Yes
---------------------------------------------------------------------------
