Simulating Depth of Field Blurring
Written by Paul Bourke
Most raytracers do not and cannot create the blurring that occurs when objects lie at very different distances from the camera than the object in focus. The reason most rendering packages cannot reproduce this effect is that they act as a pinhole camera: they have no lens system and therefore no associated focal point and focal depth.
This note describes a crude method of simulating the blurring effect that occurs with a real camera.
Overview of the technique
The approach involves rendering the scene a number of times with the virtual camera positioned at random points about the desired camera position. The focal point of the camera is not changed. The resulting renderings are then averaged, pixel by pixel, to form the final image.
The displaced camera positions
The random distribution of displaced camera positions is taken in the plane
perpendicular to the camera view direction. The positions are calculated such
that they have a uniform angle distribution and a Gaussian length distribution.
The displacement vectors from the initial camera position C, the focal point F, and an up vector U are calculated as follows.
1) calculate the unit view direction vector
D = (F - C) / |F - C|
2) calculate the unit vector to the right of the view direction, this is the cross product of view direction vector and the up vector
R = (D x U) / |D x U|
3) calculate the unit vector perpendicular to the direction vector and right vector, from the cross product of the right vector and the direction vector
P = (R x D) / |R x D|
R and P are unit vectors, perpendicular to each other, lying in the plane perpendicular to the view direction D. By taking Gaussian weighted combinations of these two vectors we obtain the desired distribution of displaced camera positions: the angles about the original camera position are uniformly distributed, and the distance from the original camera position is Gaussian distributed.
The width of the Gaussian distribution effectively controls the depth of field: a narrow Gaussian gives a broad depth of field, a wide Gaussian a narrow depth of field.
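The steps above can be sketched as follows. NumPy is assumed, and the function name and arguments are illustrative rather than taken from the text:

```python
import numpy as np

def displaced_cameras(C, F, U, n, sigma, seed=0):
    """Sketch: n camera positions jittered in the plane perpendicular
    to the view direction, uniform in angle and Gaussian in length."""
    C, F, U = (np.asarray(v, dtype=float) for v in (C, F, U))
    D = F - C
    D /= np.linalg.norm(D)                      # unit view direction
    R = np.cross(D, U)
    R /= np.linalg.norm(R)                      # unit right vector
    P = np.cross(R, D)                          # unit vector perpendicular to D and R
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)    # uniform angle distribution
    r = np.abs(rng.normal(0.0, sigma, n))       # Gaussian distributed length
    return C + r[:, None] * (np.cos(theta)[:, None] * R +
                             np.sin(theta)[:, None] * P)
```

The parameter `sigma` is the Gaussian width discussed above; increasing it narrows the apparent depth of field.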
The number of displaced camera positions
For each camera position (but the same focal point) the scene is rendered, resulting in one image per displaced camera position. If there are N displaced camera positions then the resulting value at any pixel is the average of the corresponding pixel from all N images. It is easy to see that if only two renderings are performed the result will simply look like the two renderings overlapped. As N approaches about 10 the ability to distinguish individual renderings starts to disappear, and objects distant from the focal point appear blurred. Of course the desired effect should be achieved with the minimum number of renderings, since a single rendering can often be a sufficiently time consuming exercise.
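The pixel-by-pixel average can be sketched as below; NumPy is assumed and `average_renderings` is an illustrative name:

```python
import numpy as np

def average_renderings(images):
    """Average N equally sized renderings pixel by pixel.
    `images` is a list of float arrays, one per displaced camera."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    return stack.mean(axis=0)
```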
The following shows a test scene with images resulting from an increasing
number of displaced camera positions.
The following shows the same test scene with a range of Gaussian radii.
Aliasing and Antialiasing
Written by Paul Bourke
The scene descriptions in both (a) and (b) lead to the same rendered image, (c). The scenes in (a) and (b) consist of vertical bands of different widths; the dotted lines represent the pixel boundaries.
This is the most prevalent form of aliasing and occurs in regions of abrupt change in intensity. Examples of such regions are object or shadow boundaries, or the boundaries of sharp highlights. The next figure illustrates the staircasing effect: (a) shows an example object as it should appear and (b) shows the rendered result. The dashed lines are the pixel boundaries. Most of the pixels spanning the object's boundary end up being shaded incorrectly, as either wholly within the object or wholly outside it. Note also that there are an infinite number of polygons that would generate the image in (b).
Another important aliasing problem occurs across whole regions of an image in which constituent objects alternate at a high frequency. The samples taken by a ray tracer can sometimes generate images containing what are called Moiré interference patterns. The following shows an example rendered image that contains Moiré interference patterns.
Antialiasing is the name given to the process that attempts to eliminate or reduce the impact of aliasing artefacts in rendered images. Antialiasing algorithms can be classified as either analytic or discrete.
Analytic algorithms perform a thorough analysis of the elements of a scene and filter out any high frequencies prior to sampling. Although very effective, these algorithms tend to be complicated and time consuming, requiring expensive analysis of the geometrical properties of the objects involved.
Discrete methods such as ray tracing treat a scene as a set of unrelated intensity values. To avoid the complications of incorporating an analytic filter, methods have been devised to improve the nature of the actual sampling performed to help reduce aliasing effects.
Supersampling
The simplest discrete method available is known as supersampling. It involves casting more than one regularly spaced sample per pixel and using the average of the results for the pixel intensity. For example, a pixel could be subdivided into the grid illustrated below. To render the pixel, primary rays are cast through each of the indicated sample points. The intensity value of the pixel is the average of all the samples.
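A minimal sketch of regular supersampling follows, with `shade(x, y)` standing in (as a hypothetical callback) for casting a primary ray through image-plane point (x, y):

```python
def supersample_pixel(shade, px, py, n=4):
    """Average n*n regularly spaced samples inside pixel (px, py).
    shade(x, y) returns the intensity of a primary ray through (x, y)."""
    offsets = [(i + 0.5) / n for i in range(n)]   # subgrid cell centres
    total = 0.0
    for dy in offsets:
        for dx in offsets:
            total += shade(px + dx, py + dy)
    return total / (n * n)
```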
Adaptive sampling is similar in nature to supersampling. Each pixel to be traced has a primary ray cast through each of its corners. If the intensity of any of the four rays varies significantly from the other three, the pixel is split into four rectangular portions. Each of the new quadrants has primary rays cast through its corners. Any quadrant for which the new rays display a significant difference is further subdivided, and the process is repeated for that quadrant. The subdivision can be repeated to an arbitrary level. Note that quadrants at a new level have rays in common with the previous level and with each other, thus reducing the amount of actual tracing that needs to be performed.
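The subdivision can be sketched as below. Note this sketch re-shades shared corners for simplicity, whereas the text points out that rays common to adjacent quadrants can be reused; `shade(x, y)` is again a hypothetical primary-ray intensity function:

```python
def adaptive_sample(shade, x0, y0, x1, y1, depth=0, max_depth=3, tol=0.1):
    """Sample a region by its corners; subdivide into quadrants while
    the corner intensities differ by more than tol."""
    corners = [shade(x0, y0), shade(x1, y0), shade(x0, y1), shade(x1, y1)]
    if depth >= max_depth or max(corners) - min(corners) <= tol:
        return sum(corners) / 4.0
    xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    return (adaptive_sample(shade, x0, y0, xm, ym, depth + 1, max_depth, tol) +
            adaptive_sample(shade, xm, y0, x1, ym, depth + 1, max_depth, tol) +
            adaptive_sample(shade, x0, ym, xm, y1, depth + 1, max_depth, tol) +
            adaptive_sample(shade, xm, ym, x1, y1, depth + 1, max_depth, tol)) / 4.0
```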
Adaptive sampling works very well on boundary edges. However, certain scenes will still cause problems since they can cause the sampler's subdivision phase to stop prematurely, e.g. failing to notice gaps between similarly coloured objects. In these circumstances aliased artefacts can occur in the final image.
Stochastic Sampling
Stochastic sampling is another discrete antialiasing method. The method antialiases scenes through the use of a special sample distribution which replaces most aliased image artefacts with noise of the correct average intensity.
In order to determine the colour of a particular pixel, the pixel is subdivided into a grid of a preset number of cells. A primary ray is then traced through some randomly determined location in each of the cells, as shown below. This type of sample distribution closely resembles the 'Poisson disk' distribution of photoreceptors found in the extrafoveal region of the human retina.
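Generating one random sample per cell of an n x n subgrid over a unit pixel (often called jittered sampling) can be sketched as follows; the function name is illustrative:

```python
import random

def jittered_samples(n, seed=0):
    """One random sample point in each cell of an n x n grid over the
    unit pixel; an easy-to-generate approximation of a Poisson disk
    distribution."""
    rng = random.Random(seed)
    return [((i + rng.random()) / n, (j + rng.random()) / n)
            for j in range(n) for i in range(n)]
```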
The samples are combined into one value for the pixel. The weighting given to each sample should depend in some way on its distance from the centre of the pixel, for example a Gaussian function. It is usually acceptable however to use a 'box' filter, i.e. apply an equal weighting to all the samples.
Since the samples are not regularly spaced, aliasing artefacts that would otherwise have appeared in the image are for the most part converted to noise.
In conclusion it should be noted that aliasing is an inherent part of any discrete process; its effect can be minimised but not eliminated.
References
Robert L. Cook, Thomas Porter, Loren Carpenter. Distributed Ray Tracing. Computer Graphics, 18(3), July 1984 (SIGGRAPH '84 Proceedings), pp 137-145.
Robert L. Cook. Stochastic Sampling in Computer Graphics. ACM Transactions on Graphics, 5(1), 1986, pp 51-72.
Aliasing is a general term for the artefacts that occur when a continuous signal or function is discretely sampled. This happens all the time when computers are used to acquire information from the real world for processing; a few examples:
Another artefact that occurs when objects are drawn onto a digital image (e.g. the screen) is the so-called "staircase" effect. Consider drawing a black line onto a white background at an angle other than horizontal or vertical. The line can only be drawn using pixels of finite size, which causes steps in the representation of the line. See the magnified version of the line in the following example and the curved parts of the letter "d".
As with the first type of aliasing, this staircase effect can be reduced (although not removed) by sampling the image at a higher resolution and then subsampling, with averaging, to the desired resolution. In the image below the same elements as above are known geometrically, essentially at infinite resolution. There are techniques that can render these geometric objects antialiased.
The net effect is like a blurring of the object, note however that
antialiasing cannot be performed by simply blurring the original image.
Typically antialiasing of objects that aren't known to an infinite resolution
is performed using original images 2, 3, or 4
times the resolution of the final subsampled/averaged image.
The relative gain diminishes as the image resolution is increased
and obviously the process becomes more computationally unattractive
as the resolution increases.
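The supersample-then-average step can be sketched with NumPy as follows, assuming (for simplicity) that the high-resolution image sides are exact multiples of the scale factor k:

```python
import numpy as np

def downsample(img, k):
    """Reduce a rendering by an integer factor k by averaging each
    k x k block of pixels (img: 2-D float array, sides multiples of k)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
```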
The following shows a line subsampled from high resolution renderings; the scale factors are shown below the line.