Simulating Depth of Field Blurring

Written by Paul Bourke
January 1995


Introduction

Most raytracers cannot create the blurring effect that occurs when objects lie at very different distances from the camera than the object in focus. The reason most rendering packages cannot reproduce this is that they act as pinhole cameras, that is, they have no lens system and therefore no associated focal point or focal depth.

This note describes a crude method of simulating the blurring effect produced by a real camera.

Overview of the technique

The approach involves taking a number of renderings with the virtual camera positioned at random points about the desired camera position. The focal point of the camera is not changed. The resulting renderings are then averaged (pixel by pixel) to form the final image.

The displaced camera positions

The displaced camera positions are randomly distributed in the plane perpendicular to the camera view direction. They are calculated so that the angles about the original position are uniformly distributed and the distances from it are Gaussian distributed.

Given the initial camera position C, the focal point F, and an up vector U, the displacement vectors are calculated as follows.

1) calculate the unit view direction vector

D = (F - C) / |F - C|

2) calculate the unit vector to the right of the view direction; this is the cross product of the view direction vector and the up vector

R = (D x U) / |D x U|

3) calculate the unit vector perpendicular to the direction vector and right vector, from the cross product of the right vector and the direction vector

P = (R x D) / |R x D|

The vectors R and P are unit vectors, perpendicular to each other, lying in the plane perpendicular to the view direction D. Taking Gaussian weighted combinations of these two vectors gives the desired distribution of displaced camera positions: the angles about the original camera position are uniformly distributed and the distances from it are Gaussian distributed.
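
The following is a minimal sketch of this calculation in C. The XYZ vector type, the helper functions (including a Box-Muller Gaussian random number generator), the function name DisplacedCamera and the parameter width (the standard deviation of the Gaussian) are illustrative assumptions, not part of any particular rendering package.

   #include <stdlib.h>
   #include <math.h>

   typedef struct {
      double x, y, z;
   } XYZ;

   XYZ Sub(XYZ a, XYZ b)
   {
      XYZ r = { a.x - b.x, a.y - b.y, a.z - b.z };
      return r;
   }

   XYZ Cross(XYZ a, XYZ b)
   {
      XYZ r = { a.y * b.z - a.z * b.y,
                a.z * b.x - a.x * b.z,
                a.x * b.y - a.y * b.x };
      return r;
   }

   XYZ Normalise(XYZ a)
   {
      double len = sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
      XYZ r = { a.x / len, a.y / len, a.z / len };
      return r;
   }

   /* Unit Gaussian random number via the Box-Muller transform */
   double GaussianRandom(void)
   {
      double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
      double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
      return sqrt(-2.0 * log(u1)) * cos(2.0 * 3.14159265358979 * u2);
   }

   /* Return one displaced camera position about C, given the focal
      point F, the up vector U, and the width (standard deviation)
      of the Gaussian distribution. */
   XYZ DisplacedCamera(XYZ C, XYZ F, XYZ U, double width)
   {
      XYZ D = Normalise(Sub(F, C));    /* 1) unit view direction vector           */
      XYZ R = Normalise(Cross(D, U));  /* 2) unit vector to the right of the view */
      XYZ P = Normalise(Cross(R, D));  /* 3) unit vector perpendicular to D and R */

      /* Gaussian weighted combination of R and P, lying in the
         plane through C perpendicular to the view direction D */
      double a = width * GaussianRandom();
      double b = width * GaussianRandom();

      XYZ Cnew = { C.x + a * R.x + b * P.x,
                   C.y + a * R.y + b * P.y,
                   C.z + a * R.z + b * P.z };
      return Cnew;
   }

The width parameter here is the Gaussian width discussed below; each call to DisplacedCamera() would supply the camera position for one rendering.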

The width of the Gaussian distribution effectively varies the depth of field. A narrow Gaussian results in a broad depth of field, a wide Gaussian a narrow depth of field.

The number of displaced camera positions

For each camera position (but the same focal point) the scene is rendered, giving one image per displaced camera position. If there are N displaced camera positions then the value at any pixel of the final image is the average of the corresponding pixel from all N images. If only two renderings are performed the result simply looks like the two renderings overlapped. As N approaches about 10 the individual renderings can no longer be distinguished and objects away from the focal point appear blurred. Of course, the desired effect needs to be achieved with the minimum number of renderings, since a single rendering is often a time consuming exercise on its own.
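
As a sketch of the averaging stage, the following C fragment assumes the N renderings have already been read into memory as arrays of 8 bit RGB values. The array layout, the image dimensions, and the names image, final, and AverageImages are illustrative assumptions rather than part of any particular renderer.

   #define N      10      /* number of displaced camera renderings     */
   #define WIDTH  400     /* image dimensions, illustrative values only */
   #define HEIGHT 300

   unsigned char image[N][HEIGHT][WIDTH][3];   /* the N input renderings, RGB */
   unsigned char final[HEIGHT][WIDTH][3];      /* the averaged final image    */

   /* Average the N renderings pixel by pixel (and channel by channel) */
   void AverageImages(void)
   {
      int i, j, k, c;
      long sum;

      for (j = 0; j < HEIGHT; j++) {
         for (i = 0; i < WIDTH; i++) {
            for (c = 0; c < 3; c++) {
               sum = 0;
               for (k = 0; k < N; k++)
                  sum += image[k][j][i][c];
               final[j][i][c] = (unsigned char)(sum / N);
            }
         }
      }
   }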

The following shows a test scene with images resulting from an increasing number of displaced camera positions.

The following shows the same test scene with a range of Gaussian radii.