There are three frequently used techniques for rapidly displaying photographic or computer generated surround environments: cylindrical panoramas, spherical maps, and cubic maps. In the mid 90s panoramic and spherical maps were popularised by Apple with their QuickTime VR software; more recently (2000) that technology was extended to handle cubic maps. In all three cases images are mapped onto some geometry (cylinder, sphere, cube) with a virtual camera located in the center; depending on the performance of the host hardware and software, the user can interactively look in any direction. This can lead to a strong sense of immersion, especially if the environment is projected onto a wide display that fills a significant part of the viewer's field of view. One might ask how a greater sense of immersion can be achieved and, in particular, whether stereoscopic projection is possible. It turns out that stereoscopic 3D cylindrical panoramas are straightforward to create, and the rest of this document will discuss the process for computer generated stereoscopic 3D panoramas. A more recent addition to this document describes an interactive viewer and shows examples of stereoscopic panoramas of real world environments.
As with all stereoscopic 3D projection it is necessary to create two images from slightly different viewpoints, corresponding to the two human eyes; in this case we need to create two panoramas. Many rendering packages support panoramic cameras, but they are modelled as a single centered camera. For a stereoscopic panorama one instead creates each panorama using a camera with a narrow horizontal field of view and a wide vertical field of view: a large number of these slice renderings are calculated as the camera is rotated, and the resulting slices are stitched together to form the final panorama. In the following example, 360 one degree slices are created and stitched together to form the panorama below.
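The bookkeeping for the slice renderings can be sketched in a few lines of Python. This is an illustration only; the function name and parameters are my own and do not correspond to any particular renderer's API.

```python
def slice_plan(pano_width_px, slice_deg=1.0):
    """Plan the slice renderings that are stitched into a panorama.

    Returns the number of slices, the pixel width each slice
    rendering contributes, and the camera heading (in degrees)
    at which each slice is rendered.
    """
    n_slices = int(round(360.0 / slice_deg))
    slice_px = pano_width_px // n_slices            # columns per slice
    headings = [i * slice_deg for i in range(n_slices)]
    return n_slices, slice_px, headings

# The 3600 pixel wide panorama built from 1 degree slices:
n, w, _ = slice_plan(3600, 1.0)
print(n, w)   # 360 slices, each rendered 10 pixels wide
```

Each slice is rendered at its heading with the narrow horizontal aperture, then the columns are concatenated left to right.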
The reason one needs to resort to such a scheme is that the cameras, unlike those of a normal panorama, don't rotate about the center of the camera but rather about a rim with a radius of half the intended eye separation. Two possible topologies are illustrated below: in the first the view direction vectors for each camera are parallel to each other; in the second they are toed in and meet at what will be called the focal length (the distance for zero parallax). To summarise, in either geometry the left and right eye cameras rotate by some small amount (say 1 degree) and a rendering is performed with a perspective camera with a 1 degree horizontal aperture and a larger vertical aperture (e.g. 90 or 120 degrees, always less than 180). The exact settings that ensure the slices join properly depend on the rendering software being used.
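The two rig topologies can be made concrete with a small sketch that returns the camera position and view direction for one slice. This is a 2D (top-down) illustration under my own conventions: the rig center is at the origin, the rig direction at angle zero points along +y, and eye = -1/+1 selects the left/right camera; none of these names come from a particular package.

```python
import math

def rig_camera(phi_deg, r, eye, focal=None):
    """Position and view direction of one slice camera (top-down 2D).

    phi_deg : rig rotation angle in degrees
    r       : half the intended eye separation (rim radius)
    eye     : -1 for the left eye camera, +1 for the right
    focal   : zero parallax distance for the toed-in rig,
              or None for parallel view directions
    """
    phi = math.radians(phi_deg)
    # Each camera sits on the rim, offset sideways from the rig center.
    pos = (eye * r * math.cos(phi), -eye * r * math.sin(phi))
    if focal is None:
        # Parallel topology: both cameras share the rig direction.
        look = (math.sin(phi), math.cos(phi))
    else:
        # Toed-in topology: both cameras aim at the point a focal
        # length along the rig direction, where parallax is zero.
        tx, ty = focal * math.sin(phi), focal * math.cos(phi)
        dx, dy = tx - pos[0], ty - pos[1]
        d = math.hypot(dx, dy)
        look = (dx / d, dy / d)
    return pos, look
```

Rotating phi_deg in 1 degree steps and rendering a 1 degree wide slice at each pose reproduces the scheme described above.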
For the toed-in setup the final panorama is automatically aligned: objects that are the focal length away from the camera will be at zero parallax, and so the two panoramas can be projected without any horizontal offset applied.
For parallel view directions the final panoramas need to be shifted horizontally with respect to each other. This can be seen in the image below for an object at the focal distance: in order for it to be at zero parallax, the solid red line in the left image needs to be lined up with the solid blue line in the right image.
The degree of horizontal shift is easy to calculate given the geometry above. If r is half the eye separation and f is the focal length (measured from the center of the rig), then the camera position, the rig center, and an object at the zero parallax distance form a right triangle, and the angle theta is given by

   sin(theta) = r / f
And so the pixel shift is just this angle as a proportion of 360 degrees, multiplied by the panorama width. The shift can either be applied when joining the slices together to form the panorama (recommended) or it can be applied within the stereoscopic panorama viewer.
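A minimal sketch of the calculation, assuming (as in the geometry described above) that the zero parallax distance is measured from the center of the rig and that the shift is the one applied to each eye's panorama; the function name and example values are illustrative only.

```python
import math

def pixel_shift(eye_sep, focal, pano_width_px):
    """Horizontal shift in pixels for the parallel-view rig.

    eye_sep       : intended eye separation (rim diameter)
    focal         : zero parallax distance from the rig center
    pano_width_px : width of the full 360 degree panorama in pixels
    """
    r = eye_sep / 2.0
    theta = math.degrees(math.asin(r / focal))   # offset angle
    return theta / 360.0 * pano_width_px

# e.g. 6 cm eye separation, zero parallax at 2 m, 3600 pixel panorama:
print(pixel_shift(0.06, 2.0, 3600))   # roughly 8.6 pixels
```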
Capture using one camera
It is possible to capture stereoscopic panoramic images using a single camera, generally with a wide angle lens. The camera is rotated around a circle, as shown below, in small steps.
A strip of pixels is extracted from each image, and these strips are placed next to each other to create the left and right eye panoramic images. Depending on which pair of strips is chosen, the effective eye separation of the panoramic images can be varied; see the inner circle above.
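The relationship between the chosen strips and the effective eye separation can be sketched as follows. The assumption here (consistent with the inner circle in the figure) is that a strip taken at some angle away from the image center views along a ray tangent to an inner circle whose radius is the capture radius times the sine of that angle; the names and numbers are illustrative.

```python
import math

def effective_eye_separation(capture_radius, strip_angle_deg):
    """Effective eye separation when stitching off-center strips.

    A strip strip_angle_deg away from the image center views along
    a ray tangent to an inner circle of radius
    capture_radius * sin(strip_angle_deg); a symmetric left/right
    strip pair therefore behaves like two eyes separated by twice
    that inner radius.
    """
    r_inner = capture_radius * math.sin(math.radians(strip_angle_deg))
    return 2.0 * r_inner

# Camera on a 30 cm arm, strips taken 6 degrees off center:
print(effective_eye_separation(0.3, 6.0))   # about 6.3 cm
```

Choosing strips further from the image center increases the effective eye separation, which is why a single rotating camera can produce stereo pairs with a tunable stereo base.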
Example using PovRay
In order to facilitate the creation of the rotating camera rig in PovRay a camera include file was created as follows. This should be included into the PovRay scene file in place of any other camera specification. It makes a number of assumptions (for example, up is the y axis) but it gives the basic idea.
To see how one might use this here are the ini and pov files for the left and right panoramics for a scene courtesy of Joseph Strout: test1left.ini, test1left.pov, test1right.ini, test1right.pov. The ini file creates a 360 frame animation with a 1 degree wide camera. The final panoramic in this case will be 3600 pixels by 1800 pixels.
Update (Nov 2007): a custom camera for PovRay that renders a stereoscopic panoramic image pair directly.
Stereo-capable panoramic viewer
Writing a panoramic viewer based upon OpenGL is "trivial": it only requires a cylinder with the panorama mapped as a texture. Writing a stereoscopic viewer is not much more difficult.
The main complication for high resolution panoramas is the texture memory available and the largest texture supported. For example, a 4096 by 2048 texture will usually require 32 MB. Many OpenGL drivers place modest limits on the largest texture size; the way around any such restriction is to tile the panorama in N by N pieces on the cylinder.

Examples
Extensions, October 2003
The viewer originally written for cylindrical stereoscopic panoramic images has been extended as follows.
Usage: panoramic [options] tga_file_name [right_tga_file_name]

Options
   -h         this text
   -f         full screen
   -F         half full screen
   -2         stereo panoramics (default: mono)
   -a n       vertical camera aperture (default: 60)
   -t n       width in degrees (default: 360)
   -r n       start in rotate mode, speed n (default: 0)
   -s         spherical map (default)
   -c         cylindrical map
   -l         planar map
   -dt n      delta theta (default: 360)
   -up        constrain up vector (default: on)
   -0         constrain to plane (default: off)
   -fps n     set target fps to n (default: 40)
   -e n       edge blend mode, n pixel overlap
   -eb s      edge blend settings read from file
   -wall n    which vroom wall (1 to 8)
   -server n  server mode, n clients
   -client s  client mode, name of server

Interaction
   arrow keys    rotate left/right/up/down, pan for planar map
   left mouse    rotate, pan for planar map
   middle mouse  roll
   right mouse   pop up menus
   [,]           roll
   a,A           autospin faster/slower
   +,-           zoom in/out
   w             window dump
   1..8          VROOM wall ID
   e,E           increase/decrease eye separation
   f,F           increase/decrease focallength
   q             quit
Some stunning examples of real world stereoscopic panoramas have been captured by Peter Murphy. An example showing the left eye of a stereo pair is given below; this is a full spherical panoramic image, the original being around 4000 pixels wide.
The following shows the left and right views from within the viewer; normally these would be displayed full screen on a dual display card and viewed through a dual projector passive stereo system. Using above average graphics cards (at the time of writing) this viewer was readily able to display 4096 pixel stereo panorama pairs at 30 frames per second.
And finally, two images showing the geometry of the underlying textured cylinder and sphere.
An obvious extension is to add computer generated elements to the environment, such as avatars. To do this correctly the added geometry needs to be rendered in the correct perspective, it may need to be occluded by geometry in the panorama, and it needs to be illuminated consistently with the lighting of the panorama. The first steps towards achieving this are illustrated below: the sun position is determined; one or more ground planes are positioned so that any additional geometry lies at the correct vertical position and moves into the foreground or distance correctly; and, if the outlines of objects in the scene are known (such as the gravestone), any added geometry that moves behind such an object will be occluded by it.
Extensions, April 2004
A number of changes were made to improve performance, and two new map formats are now supported: high resolution stereo planar images and stereoscopic cubic maps. An example follows, courtesy of Peter Murphy.
Left eye cubic maps as unwrapped cube.
Added a new cubic map type: the viewer now supports 6 face cubic maps as well as 4 face cubic maps. Performance has been greatly improved, as has the support for higher quality/resolution images. The largest cubic map attempted has been 4 x (4096x4096) in stereo, with a frame rate of more than 75 fps (limited by vertical refresh synchronisation). Indeed, the frame rate is now limited not by the size of the panorama but by the display size and the camera aperture. The limit on the size of panorama that can be handled is dictated by system memory. The largest stereoscopic spherical map attempted is 8192 x 8192, again with a vertical refresh limited frame rate on a 1024x768 stereo display.

Update, October 2004
Experiments in augmented characters filmed in stereo.
Update, October 2005
Interface with Intersense tracker