Multiwall and offaxis projection

Written by Paul Bourke
February 2003

AutoCAD Model courtesy of Chris Little, Stephanie Phan, Andrew Hutson,
Frank Sears from the Melbourne University School of Architecture

The images shown here were created for a number of purposes: to illustrate the need for offaxis frustums for immersive environments, to test the movie playing performance of the recently released Matrox Parhelia 3 display cards, to demonstrate correct image creation and presentation at the CAEV (3 wall projection system at Melbourne University), and finally to test a more general camera specification in PovRay.

The images below on the right illustrate the projection type. The red lines are the projection planes and the blue lines the borders of the frustums. These are top views, but the same offaxis frustum concept also applies vertically. For a correct (or minimally distorted) projection, the viewer should be located at the intersection of the blue lines. The key concept is that, for a correct projection, the frustums change with the viewer's position (requiring viewer head tracking) and are generally asymmetric.

All the images are 3072 by 768 pixels and should therefore be a one to one match with projector pixels. All projections assume screens 2.4m wide and 1.8m high (4:3 ratio), with the side screens toed in at an angle of 21 degrees (the geometry of the test environment). It could be argued that the side screens would be better toed in at more than 21 degrees so as to bring the "sweet spot" (for 3 symmetric frustums, example 1) closer to the center of the room.

Example 1

3 symmetric frustums
View from focal point, 6.5m from center screen


This is commonly the projection used when the rendering software being used only supports symmetric frustums.

Example 2

Single symmetric frustum
View from focal point, 6.5m from center screen


This is the simplest render option, often chosen simply because it is the easiest to implement.

Example 3

Single symmetric frustum
View 3.1m from center screen


Similar to example 2; this projection can be applied at any position along the normal to the center screen.

Example 4

Cylindrical projection
View from focal point, 6.5m from center screen


This is a good approximation to example 1 and can be performed in a single render. Unfortunately it only works for a single viewer position.

Example 5

1 symmetric, 2 asymmetric frustums
View 3.1m from center screen


Example 6

1 symmetric, 2 asymmetric frustums
View 2.0m along the normal of the left screen


Example 7

3 asymmetric frustums
View 2.0m from the left edge of the left screen


This is an example of the most general case where all the frustums are asymmetric.

Example 8

1 symmetric, 2 asymmetric frustums
View 1.0m from center screen


An example of a typical viewer position for highly immersive applications. This isn't common with front projection, since a viewer this close is likely to block the light from the projectors.

  • Those examples that give undistorted views only do so if the viewer stands in the right place. The distorted projections are examples 2, 3, and 4.

  • Correct views can be created for any position in the room; in general this requires one or more off-axis frustums.

  • I claim that the errors in the incorrect projection examples are most noticeable in animated (camera or object) content.

  • All images have been created using PovRay and custom offaxis frustum code.

Multiple wall projection

Using independent computers and OpenGL

December 2000
Written by Paul Bourke


This document describes an approach to presenting 3 dimensional environments on multiple screens (projected onto large walls). The key challenges for this project included the following.

  • The content must be interactive, typically around 30 frames per second.
  • Independent computers will be used to drive each display.
  • Relatively low cost Linux machines with consumer grade OpenGL hardware will be used.
  • The displays must be "perfectly" synchronised; it isn't acceptable to have noticeable delays between the content on each display.
  • The orientation of the displays must be very general, coping with any number of displays, different angles and orientations.

The hardware setup for this experiment was minimal, standard Linux (Intel based) boxes with GeForce OpenGL cards and 100MBit networking (10MBit was also shown to be satisfactory for the small communication volumes needed). While most testing was performed simply using three monitors placed in the right orientation, the system was operated in a couple of 2 and 3 wall installations with projectors creating a seamless wide angle image.

Intermachine communication

In order to ensure that the displays on each computer updated exactly in sync, a simple socket library was written using socket streams over TCP/IP. (All the machines were on a 100MBit network.) Note that even if all the computers were identical and updating as fast as they could, they wouldn't stay in synchronisation. The main source of variation occurs when the geometry complexity in one view is much greater than in the other views; since OpenGL performs frustum culling, the more complex views render more slowly.

The software was written with one machine acting as the server and all the others as clients; the same binary served as both client and server. A few messages were defined that are exchanged between the clients and the server; these could be extended for more complicated environments, for example where the geometry is changing. The messages are briefly explained below.

  • vp x y z
    Change the view position to (x,y,z). Server -> client.
  • vd x y z
    Change the view direction vector. Server -> client.
  • update
    Draw geometry to the back buffer. Server -> client.
  • swap
    Swap buffers, back buffer to front buffer. Server -> client.
  • flags f1 f2 f3 ....
    A way to pass general flags to the clients from menus chosen on the server. For example, rendering mode, models choice, wall configuration, etc.
  • ready wallid
    As a result of an "update" from the server, a client sends this message when it has finished drawing to its back buffer. Client -> server.
  • quit
    The user chooses to quit while operating the server; "quit" is a menu item. Server -> client.
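The article doesn't include the protocol code, but the wire format implied by the message list above is a simple one-line text message. A hypothetical sketch of formatting and parsing the "vp" message (function names and exact layout are my assumptions):

```c
#include <stdio.h>

/* Format a view-position message as sent from server to clients.
   Returns the number of characters written (excluding the terminator). */
int format_vp(char *buf, int len, double x, double y, double z)
{
    return snprintf(buf, (size_t)len, "vp %g %g %g", x, y, z);
}

/* Parse a received line; returns 1 and fills (x,y,z) if it is a "vp"
   message, 0 otherwise (the caller then tries the other message types). */
int parse_vp(const char *line, double *x, double *y, double *z)
{
    return sscanf(line, "vp %lf %lf %lf", x, y, z) == 3;
}

/* Round-trip self check: format a message, then parse it back. */
int vp_roundtrip_ok(void)
{
    char buf[64];
    double x, y, z;
    format_vp(buf, sizeof buf, 1.5, 0.0, -2.0);
    return parse_vp(buf, &x, &y, &z) && x == 1.5 && y == 0.0 && z == -2.0;
}
```

The other messages ("vd", "update", "swap", "flags", "ready", "quit") would be handled the same way, dispatching on the first word of the line.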

So, the general flow is as follows. Whenever the user (controlling the server) chooses a menu item, that information is sent to the clients with the "flags" message. As the user drags the mouse controlling the camera position and orientation, the "vp" and "vd" messages are sent to the clients. Every 30ms the server tries to refresh its display: it sends the "update" message to each client and draws the geometry to its own back buffer. The server then waits for "ready" messages from each client; when every client is ready, the server sends a "swap" message, at which point an essentially instantaneous buffer swap is performed on all machines.
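The update/ready/swap exchange is essentially a per-frame barrier. A sketch of that barrier logic in isolation, with the TCP/IP sends replaced by a counter (my simplification, not the article's actual code):

```c
#include <stdbool.h>

#define NCLIENTS 3   /* one client per wall in the test configuration */

typedef struct {
    int ready_count;  /* "ready" messages received this frame */
    int swaps_sent;   /* stands in for broadcasting "swap" over the network */
} server_state;

/* Called when a "ready wallid" message arrives from any client. */
void on_ready(server_state *s)
{
    s->ready_count++;
}

/* Broadcast "swap" only once every client's back buffer is complete. */
bool try_swap(server_state *s)
{
    if (s->ready_count < NCLIENTS)
        return false;        /* still waiting on at least one client */
    s->ready_count = 0;      /* re-arm the barrier for the next frame */
    s->swaps_sent++;         /* the real server sends "swap" here */
    return true;
}

/* One simulated frame: no swap until all three clients report ready. */
int barrier_demo(void)
{
    server_state s = {0, 0};
    on_ready(&s);
    on_ready(&s);
    if (try_swap(&s))
        return -1;           /* would be an early, unsynchronised swap */
    on_ready(&s);
    try_swap(&s);
    return s.swaps_sent;
}
```

Because no machine swaps until the slowest one has finished drawing, all displays flip front buffers on the same frame.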


Strict performance tests haven't been performed because they depend on so many factors (scene complexity and type, OpenGL hardware, machine characteristics, ....). For the particular configuration used here, for a scene with just enough geometry that it could only just render at 30fps on a single machine, introducing 3 synced machines incurred a penalty of less than 2fps. While no testing has yet been done on performance scaling for very large numbers of walls, there was no further loss of frame rate with 5 walls.
It is of course not necessary for all the machines in the "pool" to be of identical performance; the final frame rate will be limited by the slowest OpenGL hardware in the pool.

OpenGL projection

OpenGL is especially suited to this type of projection because it has very general frustum functionality (glFrustum()) which allows one to create the so-called "off-axis" projections needed. The standard perspective projection offered by most rendering engines is unsuitable because it assumes the view position lies on the line that is normal to the projection plane and passes through its center.
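As a sketch of where the glFrustum() arguments come from: given the eye position relative to a wall, the frustum edges are found by scaling the wall edges back to the near plane with similar triangles. The coordinate conventions below are my own; the results would be passed straight to glFrustum(left, right, bottom, top, near, far).

```c
#include <math.h>

/* Frustum extents at the near plane for an eye at (ex, ey, ez) in front of
   a wall occupying [-w/2, w/2] x [-h/2, h/2] in the plane z = 0.
   When ex = ey = 0 this reduces to the usual symmetric frustum; any
   off-centre eye yields the asymmetric (off-axis) case. */
typedef struct { double left, right, bottom, top; } extents;

extents offaxis_extents(double w, double h,
                        double ex, double ey, double ez, double nearz)
{
    extents e;
    double s = nearz / ez;            /* similar triangles: wall -> near plane */
    e.left   = (-w / 2.0 - ex) * s;
    e.right  = ( w / 2.0 - ex) * s;
    e.bottom = (-h / 2.0 - ey) * s;
    e.top    = ( h / 2.0 - ey) * s;
    return e;
}
```

For the 2.4m by 1.8m wall with the eye on the centre normal the extents are symmetric; moving the eye to the right shrinks the right extent and grows the left one, exactly the asymmetry shown in the top-view diagrams above.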

Wall configuration

The wall specification is totally general. So while all the examples above show a 3 wall environment where all the walls are in line, in practice the tests were mostly performed on 3 walls with the two side walls toed in by about 30 degrees. Another arrangement is shown below where the walls are arranged vertically. A further application for 2 synced machines is passive stereographics, where one machine handles the left eye and the other machine the right eye. This functionality was also used in the testing, as was a 5 wall environment.

The walls are specified by their centroid, normal, and up vector. These are all defined in real world coordinates, that is, one can measure a particular configuration directly and enter those parameters into the software as is. In addition the user position, or in the case of an audience the "sweet spot", is also required, again in real world coordinates. While this is a very convenient coordinate system for defining the walls, it does require that the geometry being displayed is scaled to an appropriate size.
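A sketch of turning that specification into geometry (my own construction; the article doesn't list its code): from the centroid, unit normal, and unit up vector, the wall's right vector and corner positions follow directly, and the corners then feed the off-axis frustum calculation.

```c
#include <math.h>

typedef struct { double x, y, z; } vec3;

static vec3 cross(vec3 a, vec3 b)
{
    vec3 c = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return c;
}

/* Corners of a wall given in real world coordinates by its centroid,
   unit normal, unit up vector, and physical size.
   Order: lower-left, lower-right, upper-right, upper-left. */
void wall_corners(vec3 centroid, vec3 normal, vec3 up,
                  double width, double height, vec3 corner[4])
{
    vec3 right = cross(up, normal);   /* third wall basis vector */
    double sx[4] = { -1, 1, 1, -1 };
    double sy[4] = { -1, -1, 1, 1 };
    for (int i = 0; i < 4; i++) {
        corner[i].x = centroid.x + sx[i] * 0.5 * width  * right.x
                                 + sy[i] * 0.5 * height * up.x;
        corner[i].y = centroid.y + sx[i] * 0.5 * width  * right.y
                                 + sy[i] * 0.5 * height * up.y;
        corner[i].z = centroid.z + sx[i] * 0.5 * width  * right.z
                                 + sy[i] * 0.5 * height * up.z;
    }
}

/* Self check: a 2.4m x 1.8m wall facing +z, centred 0.9m above the floor,
   has its lower-left corner at (-1.2, 0, 0). */
int wall_demo(void)
{
    vec3 c[4];
    vec3 centroid = { 0, 0.9, 0 }, normal = { 0, 0, 1 }, up = { 0, 1, 0 };
    wall_corners(centroid, normal, up, 2.4, 1.8, c);
    return fabs(c[0].x + 1.2) < 1e-12 && fabs(c[0].y) < 1e-12
        && fabs(c[0].z) < 1e-12;
}
```

Because everything is measured in real world units, the same corner positions serve both for aligning the projectors and for building the per-wall frustums.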


  • A variation of the above has been implemented for machines playing precomputed movies. The obvious application for this is playing a precomputed movie in stereo. In this case there is only one wall but two cameras.

  • The exploratory exercise above only dealt with very simple camera flying controls (position and orientation). Other camera attributes can be trivially controlled by adding further message types.

  • Control over the geometry is also straightforward: as the user modifies objects on the server, messages need to be sent to the clients so they can reflect those changes. This also applies to time varying geometry, e.g. traffic lights, planetary motion, moving characters, etc. There are some other operations that may not be trivial, for example creating new geometry, where the data transfer volumes may become more significant.