Polyray v1.7

Copyright (c) 1991-1994 by Alexander R. Enzmann


The data files are ASCII text, the output image file format supported is Targa. Input images (for texturing, height fields, etc.) may be: Targa (all variants defined in the Truevision spec), GIF (both 87a and 89a should work), or JPEG (JFIF format). Polyray is case sensitive, so the following ways of writing foo are all considered different: Foo, FOO, and foo. For an abbreviated list of Polyray's syntax, see the file quickref.txt.

Initialization File

The first operation carried out by Polyray is to read the initialization file polyray.ini. This file can be used to tune a number of the default variables used during rendering. If present, it must appear in the current directory. The file doesn't have to exist; it is simply a convenience that eliminates retyping command line parameters.

Each entry in the initialization file must appear on a separate line, and have the form:

 default_name   default_value

The names are text. The values are numeric for some names, and text for others. The allowed names and values are:

Valid polyray.ini options

   Option             Description
   abort_test         true/false/on/off
   alias_threshold    threshold to start adaptive antialiasing
   antialias          none/filter/adaptive1/adaptive2
   display            none/vga1-vga5/hicolor1-hicolor5/truecolor1-truecolor5
   max_level          max depth of recursion
   max_samples        # samples for focal blur
   optimizer          none/slabs
   pixel_size         8/16/24/32
   pixel_encoding     none/rle
   renderer           ray_trace/scan_convert/wire_frame/raw_triangles/uv_triangles
   shade_flags        default/bit mask of flags
   shadow_tolerance   minimum distance for blocking objects
   status             none/totals/line/pixel
   warnings           on/off

A typical example of polyray.ini would be:

    abort_test		on
    alias_threshold	0.05
    antialias		adaptive
    display		vga
    max_samples		8
    pixel_size		24
    status		line

If no initialization file exists, then Polyray will use the following default values:

Default polyray.ini settings

   Name               Default
   abort_test         on
   alias_threshold    0.2
   antialias          none
   display            none
   max_level          5
   max_samples        4
   optimizer          slabs
   pixel_size         16
   pixel_encoding     rle
   renderer           ray_trace
   shade_flags        default
   shadow_tolerance   0.001
   status             none
   warnings           on

Raw Triangles

A somewhat odd addition to the image output formats for Polyray is the generation of raw triangle information. What happens is very similar to the scan conversion process, but rather than draw polygons, Polyray will write a text description of the polygons (after splitting them into triangles). The final output is a (usually long) list of lines, each line describing a single smooth triangle. The format of the output is one of the following:

   x1 y1 z1 x2 y2 z2 x3 y3 z3
or
   x1 y1 z1 x2 y2 z2 x3 y3 z3 nx1 ny1 nz1 nx2 ny2 nz2 nx3 ny3 nz3
   u1 v1 u2 v2 u3 v3

If the output is raw triangles then only the three vertices are printed. If uv_triangles are being created, then the normal information for each of the vertices follows the vertices and the u/v values follow them. The actual u/v values are specific to the type of object being generated.
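
For example, a unit right triangle in the x-y plane (vertices only, no normals or u/v values) might appear in the output as:

   0.0 0.0 0.0  1.0 0.0 0.0  0.0 1.0 0.0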

Currently I don't have any applications for this output, but the first form is identical to the input format of the program RAW2POV. The intent of this feature is to provide a way to build models in polygon form for conversion to the input format of another renderer.

Detailed description of the Polyray input format:

An input file describes the basic components of an image:

  • A viewpoint that characterizes where the eye is, where it is looking and what its orientation is.
  • Objects, their shape, placement, and orientation.
  • Light sources, their placement and color.

Beyond the fundamentals, there are many components that exist as a convenience such as definable expressions and textures. This section of the document describes in detail the syntax of all of the components of an input file.

Expressions

There are six basic types of expressions that are used in Polyray:

float
Floating point expression (e.g., 0.5, 2 * sin(1.33)). These are used at any point a floating point value is needed, such as the radius of a sphere or the amount of contribution of a part of the lighting model.
vector
Vector valued expression (e.g., <0, 1, 0>, red, 12 * <2, sin(x), 17> + P). Used for color expressions, describing positions, describing orientations, etc.
arrays
Lists of expressions (e.g., [0, 1, 17, 42], [<0,1,0>, <2*sin(theta), 42, -4>, 2*<3, 7, 2>])
cexper
Conditional expression (e.g., x < 42).
string
Strings used for file names or system calls
images
A Targa, GIF, or JPEG image.

The following sections describe the syntax for each of these types of expressions, as well as how to define variables in terms of expressions. See also the description of color maps, image maps (from which you can retrieve color or vector values), indexed maps, and height maps.

Numeric expressions

In most places where a number can be used (e.g., scale values, angles, RGB components), a simple floating point expression (float) may be used. These expressions can contain any of the following terms:

Numeric expressions supported by Polyray
Expression                Description
-0.1, 5e-3, ab, ... A floating point number or defined value
'(' float ')' Parenthesised expression
float ^ float Exponentiation, same as pow(x, y)
float * float Multiplication
float / float Division
float + float Addition
float - float Subtraction
-float Unary minus
acos(float) Arccosine, (radians for all trig functions)
asin(float) Arcsin
atan(float) Arctangent
atan2(float,float) Angle from x-axis to the point (x, y)
ceil(float) Ceiling function
cos(float) Cosine
cosh(float) Hyperbolic cosine
degrees(float) Converts radians to degrees
exp(float) e^x, standard exponential function
fabs(float) Absolute value
floor(float) Floor function
fmod(float,float) Modulus function for floating point values
heightmap(image,vector) Height of a pixel in an image
indexed(image,vector) Index of a pixel in an image
legendre(l,m,n) Legendre function
ln(float) Natural logarithm
log(float) Logarithm base 10
min(float,float) Minimum of the two arguments
max(float,float) Maximum of the two arguments
noise(vector)
noise(vector,float)
Solid texturing (noise) function. If the second argument is given, it is used as the number of octaves (repetitions) of the 3D noise function.
noise(vector,vector) Second arg provides more flexible operation using: <pos scale, noise scale, octaves>
pow(float,float) Exponentiation (x^y)
radians(float) Converts degrees to radians
sawtooth(float) Sawtooth function (range is 0 - 1)
sin(float) Sine
sinh(float) Hyperbolic sine
sqrt(float) Square root
tan(float) Tangent
tanh(float) Hyperbolic tangent
visible(vector,vector) Returns 1 if second point visible from first.
vector[i] Extract component i from a vector (0<=i<=3)
vector . vector Dot product of two vectors
|float| Absolute value (same as fabs)
|vector| Length of a vector
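
As an illustration, here are a few hypothetical definitions (the names are arbitrary) built from the functions above:

   define ang  degrees(atan2(1, 1))        // 45 degrees
   define len  |<3, 4, 0>|                 // 5, the length of the vector
   define wig  0.5 + 0.5 * sin(fmod(2, pi))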

Vector Expressions

In most places where a vector can be used (e.g., color values, rotation angles, locations, ...), a vector expression is allowed. The expressions can contain any of the following terms:

Vector expressions supported by Polyray
Expression                Description
vector + vector Addition
vector - vector Subtraction
vector * vector Cross product
vector * float Scaling of a vector by a scalar
float * vector Scaling of a vector by a scalar
vector / float Inverse scaling of a vector by a scalar
brownian(vector,vector) Makes a random displacement of the first point by an amount proportional to the components of the second point
brownian(vector) Random displacement of up to 0.1
color_wheel(x, y, z) RGB color wheel using x and z (y ignored), the color returned is based on <x, z> using the chart below:

	Z-Axis
	   ^
	   |
	   |
     Green   Yellow
	 \   /
	  \ /
 Cyan ---- * ---- Red  -----> X-Axis
	  / \
	 /   \
      Blue   Magenta
Intermediate colors are generated by interpolation.
dnoise(vector)
dnoise(vector,float)
Returns a vector (gradient) based on the location given in the first argument. If the second argument is given, it is used as the number of octaves (repetitions) of the 3D noise function
dnoise(vector,vector) Second arg provides more flexible operation using: <pos scale, noise scale, octaves>
planar_imagemap(image, vector [, rflag]) Image map lookup functions. If the third argument is given, then the image will be tiled, otherwise black is used outside the bounds of the image. Note: for planar image maps only the x and z coordinates of the second argument are used.
cylindrical_imagemap(image, vector [, rflag])
spherical_imagemap(image, vector [, rflag])
environment_map(vector, environment)
rotate(vector,vector) Rotate the point specified in the first argument by the angles specified in the second argument (angles in degrees).
rotate(vector,vector,float) Rotate the point specified in the first argument about the axis specified in the second argument by the angle given in the third argument
reflect(vector,vector) Reflect the first vector about the second vector (particularly useful in environment maps)
trace(vector,vector) Color resulting from tracing a ray from the point given as the first argument in the direction given by the second argument.
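
In the same spirit, a few hypothetical vector definitions (again, the names are arbitrary):

   define axis   <0, 1, 0>
   define spun   rotate(<1, 0, 0>, axis, 90)   // spin about the y-axis
   define across <1, 0, 0> * <0, 1, 0>         // cross product, <0, 0, 1>
   define blend  0.5 * red + 0.5 * <0, 0, 1>   // halfway between red and blue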

Arrays

Arrays are a way to represent data in a convenient list form. A good use for arrays is to hold a number of locations for polygon vertices or as locations for objects in successive frames of an animation.

As an example, a way to define a tetrahedron (4 sided solid) is to define its vertices, and which vertices make up its faces. By using this information in an object declaration, we can make a tetrahedron out of polygons very easily.

   define tetrahedron_faces
      [<0, 1, 2>, <0, 2, 3>, <0, 3, 1>, <1, 3, 2>]

   define tetrahedron_vertices
      [<0, 0, sqrt(3)>,
       <0, (2*sqrt(2)*sqrt(3))/3, -sqrt(3)/3>,
       <-sqrt(2), -(sqrt(2)*sqrt(3))/3, -sqrt(3)/3>,
       <sqrt(2), -(sqrt(2)*sqrt(3))/3, -sqrt(3)/3>]

   define tcf tetrahedron_faces
   define tcv tetrahedron_vertices
   define tetrahedron
   object {
     object { polygon 3,tcv[tcf[0][0]],tcv[tcf[0][1]],tcv[tcf[0][2]]} +
     object { polygon 3,tcv[tcf[1][0]],tcv[tcf[1][1]],tcv[tcf[1][2]]} +
     object { polygon 3,tcv[tcf[2][0]],tcv[tcf[2][1]],tcv[tcf[2][2]]} +
     object { polygon 3,tcv[tcf[3][0]],tcv[tcf[3][1]],tcv[tcf[3][2]]}
   }

What happened in the object declaration is that each polygon grabbed a triple of vertex indices from the array tetrahedron_faces, then used each index to look up the actual location in space of that vertex.

Another example is to use an array to store a series of view directions so that we can use animation to generate a series of very distinct renders of the same scene (the following example is how the views for an environment map are generated):

   define location <0, 0, 0>
   define at_vecs [<1, 0, 0>, <-1, 0, 0>, < 0, 1, 0>, < 0,-1, 0>,
                   < 0, 0,-1>, < 0, 0, 1>]
   define up_vecs [< 0, 1, 0>, < 0, 1, 0>, < 0, 0, 1>, < 0, 0,-1>,
                   < 0, 1, 0>, < 0, 1, 0>]

   // Generate six frames
   start_frame 0
   end_frame 5

   // Each frame generates the view in a specific direction.  The
   // vectors stored in the arrays at_vecs, and up_vecs turn the
   // camera in such a way as to generate image maps correct for using
   // in an environment map.
   viewpoint {
      from location
      at location + at_vecs[frame]
      up up_vecs[frame]
      ...
    }

Conditional Expressions

Conditional expressions are used in one of two places: conditional processing of declarations (see section 2.7) or conditional value functions.

cexper has one of the following forms:

!cexper
cexper && cexper
cexper || cexper
float < float
float <= float
float > float
float >= float
float == float
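
For example, the conditional processing of declarations (section 2.7) might use a compound test such as:

   if (frame > 10 && frame <= 20)
      define wobble 0.1 * sin(frame)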

A use of conditional expressions is to define a texture based on other expressions; the format of this expression is:

(cexper ? true_value : false_value)

Where true_value/false_value can be either floating point or vector values. This type of expression is taken directly from the equivalent in the C language. An example of how this is used (from the file spot1.pi) is:

   special surface {
      color white
      ambient (visible(W, throw_offset) == 0
               ? 0
               : (P[0] < 1 ? 1
                 : (P[0] > throw_length ? 0
                    : (throw_length - P[0]) / throw_length)))
      transmission (visible(W, throw_offset) == 1
                    ? (P[0] < 1 ? 0
                       : (P[0] > throw_length ? 1
                          : P[0] / throw_length))
                    : 1), 1
      }

In this case conditional statements are used to determine the surface characteristics of a cone defining the boundary of a spotlight. The amount of ambient light is modified with distance from the apex of the cone; the visibility of the cone is modified based both on distance and on whether the cone is behind an object with respect to the source of the light.

Run-Time expressions

There are a few expressions that only have meaning during the rendering process:

   I       Direction of the ray that struck the object
   P       Point of intersection in object coordinates
   N       Normal to the point of intersection in world coordinates
   W       Point of intersection in world coordinates
   U       The u/v/w coordinates of the intersection point
   x,y,z   Components of the point in object coordinates
   u,v,w   Components of the uv-coordinate of the point

These expressions describe the interaction of a ray and an object. To use them effectively, you need to understand the distinction between world coordinates, object coordinates, and u/v coordinates. Object coordinates describe a point or a direction with respect to an object as it was originally defined. World coordinates describe the same point after it has been rotated/scaled/translated. u/v coordinates describe the point in a way natural to a particular object type (e.g., latitude and longitude for a sphere). Typically texturing is done in either object coordinates or u/v coordinates so that as the object is moved around the texture will move with it. On the other hand shading is done in world coordinates.

The variables u, v, and w are specific to each surface type and in general are the natural mapping for the surface (e.g., latitude and longitude on a sphere). In general u varies from 0 to 1 as you go around an object and v varies from 0 to 1 as you go from the bottom to the top of an object. These variables can be used in a couple of ways: to tell Polyray to only render portions of a surface within certain uv bounds, or as arguments to expressions in textures or displacement functions.

Not all primitives set meaningful values for u and v; those that do are:

bezier, cone, cylinder, disc, height fields, NURB, parabola, parametric, sphere, torus, patch

Other surface types will simply have u=x, v=y, w=z. An intersection with a gridded object will use the u/v coordinates of the object within the grid.

See the file uvtst.pi in the data archives for an example of using uv bounds on objects. The file spikes.pi demonstrates using uv as variables in a displacement surface. The file bezier1.pi demonstrates using uv as variables to stretch an image over the surface of a bezier patch.

The meanings of some of these variables are slightly different when creating particle systems, or when coloring the background of an image.

Named Expressions

A major convenience for creating input files is the ability to create named definitions of surface models, object definitions, vectors, etc. The way a value is defined takes one of the following forms:

define token expression            float, vector, array, cexper
define token "str..."              string expression
define token object { ... }
define token surface { ... }
define token texture { ... }
define token transform { ... }     each entry may be one of scale/translate/rotate/shear
define token particle { ... }

Objects, surfaces, and textures can either be instantiated as-is by using the token name alone, or instantiated and modified:

   token,
or
   token { ... modifiers ... },

Polyray keeps track of what type of entity the token represents and will parse the expressions accordingly.
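
A hypothetical example - define an object once, then instantiate it twice, the second time with a modifier:

   define ball object { sphere <0, 0, 0>, 1 shiny_red }

   ball
   ball { translate <3, 0, 0> }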

Note: It is not possible to have two types of entities referred to by the same name. If a name is reused, then a warning will be printed, and all references to that name will use the new definition from that point on.

Static variables

Static variables are a way to retain variable values (expressions, objects, textures, ...) from frame to frame of an animation. Instead of the normal declaration of a variable:

   define xyz 32 * frame
you would do something like this:
   if (frame == start_frame)
      static define xyz 42
   else
      static define xyz (xyz + 0.1)

The big differences between a static define and a define are that the static variable will be retained from frame to frame, and the static variable actually replaces any previous definitions rather than simply overloading them.

The static variables have an additional use beyond simple arithmetic on variables. By defining something that takes a lot of processing at parse time (like height fields and large image maps), you can make them static in the first frame and simply instantiate them every frame after that.

One example of this would be spinning a complex height field: if you have to create it every frame, there is a long wait while Polyray generates the field. The following declarations would be a better way:

   if (frame == start_frame)
      static define sinsf
         object {
            smooth_height_fn 128, 128, -2, 2, -2, 2,
                             0.25 * sin(18.85 * x * z + theta_offset)
            shiny_red
            }
   ...
   sinsf { rotate <0, 6*frame, 0> }
   ...

Several examples of how static variables can be used are found in the animation directory in the data file archive. Two that make extensive use of static variables are movsph.pi, which bounces several spherical blob components around inside a box, and cannon.pi which points a cannon in several directions, firing balls once it is pointed.

Warning: A texture inside a static object should ONLY be static itself. The reason is that between frames, every non-static thing is deallocated. If you have things inside a static object that point to a deallocated pointer, you will most certainly crash the program. Sorry, but detecting these things would be too hard and reallocating all the memory would take up too much space.

Lazy Evaluation

Normally, Polyray tries to reduce the amount of time and space necessary for evaluating and storing variables and expressions. For example, if you had the following definition of x:

   define x 2 * 3

Polyray would store 6 rather than the entire expression. This can cause problems in certain circumstances if Polyray decides to reduce the expression too soon. The problem is most notable in particle systems since the expression that is used in its declaration is used for the life of the particle system. An example of the problem would be a declaration like:

   define t (frame - start_frame) / (end_frame - start_frame)

When Polyray encounters this it will store a value for t that is between 0 (for the first frame) and 1 (for the last frame). This is great, up until you declare a particle system with something like:

   if (frame == start_frame)
      define partx
      particle {
         ...
         death (t > 0.5 ? 1 : 0)
         }

Clearly the intent here is that the particles created by this system will die halfway through the animation (t > 0.5). This won't happen, since Polyray will evaluate the value of t at the time that partx is declared, and store the expression (0 > 0.5 ? 1 : 0), which then reduces to 0. This means the particle will never die.

To avoid this difficulty, the keyword noeval can be added to a definition to force Polyray to postpone evaluation. The declaration of t would now be:

   define noeval t (frame - start_frame) / (end_frame - start_frame)

When partx is declared, the entire expression is now used and the particles will die at the appropriate time. Note that noeval is always the last keyword in a definition, and it works correctly with static definitions as well as normal ones.

Definition of the viewpoint

The viewpoint and its associated components define the position and orientation the view will be generated from.

The format of the declaration is:

    viewpoint {
       from vector
       at vector
       up vector
       angle float
       hither float
       resolution float, float
       aspect float
       yon float
       max_trace_depth float
       aperture float
       max_samples float
       focal_distance float
       image_format float
       pixel_encoding float
       pixelsize float
       antialias float
       antialias_threshold float
     }

All of the entries in the viewpoint declaration are optional and have reasonable default values (see below). The order of the entries defining the viewpoint is not important, unless you redefine some field. (In which case the last is used.)

The parameters are:

Parameters of the viewpoint Definition

   Parameter            Description
   aspect               The ratio of width to height. (Default: 1.0)
   at                   The center of the image, in world coordinates. (Default: <0, 0, 0>)
   angle                The field of view (in degrees), from the center of the top row to the center of the bottom row. (Default: 45)
   from                 The location of the eye. (Default: <0, 0, -1>)
   hither               Distance to front of view pyramid. Any intersection closer than this value will be ignored. (Default: 1.0e-3)
   resolution           Number of columns and rows in the image. (Default: 256x256)
   up                   Which direction is up. (Default: <0, 1, 0>)
   yon                  Distance to back of view pyramid. Any intersection beyond this distance will be ignored. (Default: 1.0e5)
   max_trace_depth      This allows you to tailor the amount of recursion allowed for scenes with reflection and/or transparency. (Default: 5)
   aperture             If larger than 0, then extra rays are shot (controlled by max_samples) to produce a blurred image. Good values are between 0.1 and 0.5. (Default: 0)
   max_samples          Number of rays/pixel when performing focal blur. (Default: 4)
   focal_distance       Distance from the eye to the point where things are in focus. (Default: the distance between from and at)
   image_format         If 0 then normal image, if 1 then depth image
   pixel_encoding       If 0 then uncompressed, if 1 then image is RLE
   pixelsize            Number of bits/pixel in image. (Default: 16)
   antialias            Level of antialiasing to use. (Default: 0)
   antialias_threshold  Threshold to start antialiasing. (Default: 0.01)

The view vectors will be coerced so that they are perpendicular to the vector from the eye (from) to the point of interest (at).

A typical declaration is:

   viewpoint {
      from <0, 5, -5>
      at   <0, 0,  0>
      up   <0, 1,  0>
      angle 30
      resolution 320, 160
      aspect 2
    }

In this declaration the eye is five units behind the origin, five units above the x-z plane and is looking at the origin. The up direction is aligned with the y-axis, the field of view is 30 degrees and the output file will default to 320x160 pixels.

In this example it is assumed that pixels are square, and hence the aspect ratio is set to width/height. If you were generating an image for a screen that has pixels half as wide as they are high then the aspect ratio would be set to one.

Note that you can change from left handed coordinates (the default for Polyray) to right handed by using a negative aspect ratio (e.g., aspect -4/3).

Objects/Surfaces

In order to make pictures, the light has to hit something. Polyray supports several primitive objects. The following sections give the syntax for describing the primitives, as well as how more complex primitives can be built from simple ones.

An object declaration is how Polyray associates a surface with its lighting characteristics and its orientation. This declaration includes one of the primitive shapes (sphere, polygon, ...), and optionally: a texture declaration (set to a matte white if none is defined), orientation declarations, or a bounding box declaration.

The format of the declaration is:

   object {
      shape_declaration
      [texture_declaration]
      [translate/rotate/scale declarations]
      [subdivision declarations]
      [displacement declaration]
      [shading flag declaration]
      [bounding box declaration]
      [u/v bounds declaration]
    }

The following sub-sections describe the format of the individual parts of an object declaration. (Note: The shape declaration MUST be first in the declaration, as any operations that follow have to have data to work on.)

Object Modifiers

Object modifiers are statements within an object declaration that are used to move and/or change the shape of the object.

The declarations are processed in the order they appear in the declaration, so if you want to resize an object before moving it, you need to put the scale statement before the translation statement. The exceptions to this are displacement and u/v bounds. These act on the underlying shape and are applied prior to any other object modifiers, regardless of where they appear in the object declaration.
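
For example, the following sketch stretches a sphere and then moves it; reversing the two statements would scale the translated position as well:

   object {
      sphere <0, 0, 0>, 1
      scale <2, 1, 1>         // stretch along x first
      translate <0, 3, 0>     // then move up
   }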

Position and Orientation Modifiers

The position, orientation, and size of an object can be modified through one of four linear transformations: translation, rotation, scaling, and shear. Other modifications that can be made to the shape are displacement and u/v bounding.

Translation

Translation moves the position of an object by the number of units specified in the associated vector. The format of the declaration is:

   translate <xt, yt, zt>

Rotation

Rotation revolves the position of an object about the x, y, and z axes (in that order). The amount of rotation is specified in degrees for each of the axes. The direction of rotations follows a left-handed convention: if the thumb of your left hand points along the positive direction of an axis, then the direction your fingers curl is the positive direction of rotation. (A negative aspect ratio in the viewpoint flips everything around to a right handed system.)

The format of the declaration is:

   rotate <xr, yr, zr>

For example the declaration:

   rotate <30, 0, 20>

will rotate the object by 30 degrees about the x axis, followed by 20 degrees about the z axis.

Remember: Left Handed Rotations.

Scaling

Scaling alters the size of an object by a given amount with respect to each of the coordinate axes. The format of the declaration is:

   scale <xs, ys, zs>

Note that using 0 for any of the components of a scale is a bad idea. It may result in a division by zero in Polyray, causing a program crash. Use a small number like 1.0e-5 to flatten things. Usually you will just want to use 1 for components that don't need scaling.
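
For example, a thin ground slab made by flattening a sphere (a sketch of the small-value trick):

   object {
      sphere <0, 0, 0>, 10
      scale <1, 1.0e-5, 1>    // flatten without dividing by zero
      matte_white
   }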

Shear

A less frequently used, but occasionally useful transformation is linear shear. Shear scales along one axis by an amount that is proportional to the location on another axis. The format of the declaration is:

   shear yx, zx, xy, zy, xz, yz

Typically only one or two of the components will be non-zero, for example the declaration:

   shear 0, 0, 1, 0, 0, 0

will shear an object more and more to the right as y gets larger and larger. The order of the letters in the declaration is descriptive: shear ... ab, ... means shear along direction a by the amount ab times the position b.

This declaration should probably be split into three: one that associates shear in x with the y and z values, one that associates shear in y with x and z values, and one that associates shear in z with x and y values.

You might want to look at the file xander.pi - this uses shear on boxes to make diagonally slanted parts of letters.

Displace

The displacement operation causes a modification of the shape of an object as it is being rendered. The amount and direction of the displacement are specified by the statement:

   displace vector
or
   displace float

If a vector expression is given, then Polyray will displace each vertex in the surface by the vector. If a floating point expression is given, then Polyray will displace the surface along the direction of the normal, by the amount given in the expression.

For effective results, the value of u_steps and v_steps should be set fairly high. Polyray subdivides the surface, then performs the displacement. If there are too few subdivisions of the surface then the result will be a very coarse looking object.

An example of an object that has a displacement is shown below. The displacement in the outer object is applied to each of the two spheres within.

  object {
    object {
      sphere <0,-1.5, 0>, 2
      u_steps 32
      v_steps 64
      shiny_red
    } + object {
      sphere <0, 1.5, 0>, 2
      u_steps 32
      v_steps 64
      shiny_blue
      translate <0, -1.5, 0>
      rotate <-20, 0, 30>
      translate <0, 2.25, 0>
    }
    displace 0.5 * sin(5*y)
  }

UV Bounds

By adjusting the value of the u/v bounds, it is possible to only render a selected portion of a surface. The format of the declaration is:

   uv_bounds low_u, high_u, low_v, high_v

For example, the following declaration will result in a wedge shaped portion of a sphere:

   object {
      sphere <-2, 2, 0>, 1
      uv_bounds 0.3, 1.0, 0.0, 0.8
    }

The same effect could be achieved through the use of CSG with one or more clipping planes. For most purposes (e.g., creating a hemisphere by setting low_u to 0.5) the uv_bounds will be easier to create and faster to render.

Bounding box

In order to speed up the process of determining if a ray intersects an object and in order to define good bounds for surfaces such as polynomials and implicit surfaces, a bounding box can be specified. A short example of how it is used to define bounds of a polynomial:

   define r0 3
   define r1 1
   define torus_expression (x^2 + y^2 + z^2 - (r0^2 + r1^2))^2 -
                           4 * r0^2 * (r1^2 - z^2)
   object {
      polynomial torus_expression
      shiny_red
      bounding_box <-(r0+r1), -(r0+r1), -r1>, < (r0+r1),  (r0+r1),  r1>
    }

The test for intersecting a ray against a box is much faster than performing the test for the polynomial equation. In addition the box helps the scan conversion process determine where to look for the surface of the torus.

Subdivision of Primitives

The amount of subdivision of a primitive that is performed before it is displayed as polygons is tunable. These declarations are used for scan conversion of objects, when creating displacement surfaces, and to determine the quality of an implicit function. The declarations are:

   u_steps n
   v_steps m
   w_steps l
   uv_steps n, m
   uv_steps n, m, l

Where u generally refers to the number of steps around the primitive (the number of steps around the equator of a sphere for example). The parameter v refers to the number of steps along the primitive (latitudes on a sphere). Cone and cylinder primitives only require 1 step along v, but for smoothness may require many steps in u.
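
For example, a hypothetical cylinder that is kept smooth around its axis while using a single step along it:

   object {
      cylinder <0, 0, 0>, <0, 2, 0>, 1
      uv_steps 32, 1          // 32 steps around, 1 step along
      shiny_red
   }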

For blobs, polynomials, and implicit surfaces, the u_steps component defines how many subdivisions along the x-axis, the v_steps component defines how many subdivisions along the y-axis, and the w_steps component defines how many subdivisions along the z-axis.

Shading Flags

It is possible to tune the shading that will be performed for each object. The value of each bit in the flag has the same meaning as that given for global shading previously:

Meaning of the bits in the shading flags

   Number  ID              Description
   1       Shadow_Check    Shadows will be generated on the object
   2       Reflect_Check   Reflectivity will be tested
   4       Transmit_Check  Check for refraction
   8       Two_Sides       If on, highlighting will be performed on both sides of a surface
   16      UV_Check        If on then u/v coordinates are calculated
   32      Cast_Shadow     If off then the object doesn't cast a shadow

By default, all objects have the following flags set: Shadow_Check, Reflect_Check, Transmit_Check, UV_Check, and Cast_Shadow. The two sides check is normally off.

The declaration has the form:

   shading_flags xx

For example, if the value 50 (32 + 16 + 2) is used for xx above, then this object can be reflective and will cast shadows, however there will be no tests for transparency, there will be no shading of the back sides of surfaces, and there will be no shadows on the surface.

Note: the shading flag only affects the object in which the declaration is made. This means that if you want the shading values affected for all parts of a CSG object, then you will need a declaration in every component.
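
As a sketch, here is a ground disc that receives and casts shadows (1 + 32 = 33) but skips the reflection, transparency, two-sided, and u/v tests:

   object {
      disc <0, 0, 0>, <0, 1, 0>, 100
      matte_white
      shading_flags 33
   }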

Primitives

Primitives are the lowest level of shape description. Typically a scene will contain many primitives, appearing either individually or as aggregates using either Constructive Solid Geometry (CSG) operations or gridded objects.

Descriptions of each of the primitive shapes supported by Polyray, as well as references to data files that demonstrate them are given in the following subsections. Following the description of the primitives are descriptions of how CSG and grids can be built.

Bezier patches

A Bezier patch is a form of bicubic patch that interpolates its control vertices. The patch is defined in terms of a 4x4 array of control vertices, as well as several tuning values.

The format of the declaration is:

   bezier subdivision_type, flatness_value,
          u_subdivisions, v_subdivision,
          [ 16 comma-separated vertices, i.e.
             <x0, y0, z0>, <x1, y1, z1>, ..., <x15, y15, z15> ]

The subdivision_type and flatness_value are no longer used by Polyray. They are retained in the declaration for backwards compatibility.

The number of levels of subdivision of the patch, in each direction, is controlled by either the u_subdivisions and v_subdivisions given in the bezier shape declaration or by the value of uv_steps if given later in the object declaration. The more subdivisions allowed, the smoother the approximation to the patch, however storage and processing time go up.

An example of a bezier patch is:

   object {
      bezier 2, 0.05, 3, 3,
         <0, 0, 2>, <1, 0, 0>, <2, 0, 0>, <3, 0,-2>,
         <0, 1, 0>, <1, 1, 0>, <2, 1, 0>, <3, 1, 0>,
         <0, 2, 0>, <1, 2, 0>, <2, 2, 0>, <3, 2, 0>,
         <0, 3, 2>, <1, 3, 0>, <2, 3, 0>, <3, 3,-2>
      uv_steps 8, 8
      rotate <30, -70, 0>
      shiny_red
   }

Blob

A blob describes a smooth potential field around one or more spherical, cylindrical, or planar components.

The format of the declaration is:

   blob threshold:
      blob_component1
      [, blob_component2 ]
      [, etc. for each component ]

The threshold is the minimum potential value that will be considered when examining the interaction of the various components of the blob. Each blob component has one of the following forms:

   sphere <x, y, z>, strength, radius
   cylinder <x0, y0, z0>, <x1, y1, z1>, strength, radius
   plane <nx, ny, nz>, d, strength, dist
   torus <x, y, z>, <dx, dy, dz>, major, strength, minor

The strength component describes how strong the potential field is around the center of the component, the radius component describes the maximum distance at which the component will interact with other components. For a spherical blob component the vector <x,y,z> gives the center of the potential field around the component. For a cylindrical blob component the vector <x0, y0, z0> defines one end of the axis of a cylinder, the vector <x1, y1, z1> defines the other end of the axis of a cylinder. A planar blob component is defined by the standard plane equation with <nx, ny, nz> defining the normal and 'd' defining the distance of the plane from the origin along the normal.

Note: The ends of a cylindrical blob component are given hemispherical caps.

Note: toroidal blob components won't render correctly in raytracing. The numerical precision of a PC is insufficient. It will work correctly in scan conversion or raw triangle output.

Note: The colon and the commas in the declaration really are important.

An example of a blob is:

  object {
    blob 0.5:
      cylinder <0,  0, 0>, <5, 0, 0>, 1, 0.7,
      cylinder <1, -3, 0>, <3, 2, 0>, 1, 1.4,
      sphere <3, -0.8, 0>, 1, 1,
      sphere <4,  0.5, 0>, 1, 1,
      sphere <1,  1,   0>, 1, 1,
      sphere <1,  1.5, 0>, 1, 1,
      sphere <1,  2.7, 0>, 1, 1

    shiny_red
  }

Note: since a blob is essentially a collection of 4th order polynomials, it is possible to specify which quartic root solver to use. See section 2.3.2.15 for a description of the root_solver statement.
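
For example, a hypothetical two-component blob that selects the slow but high quality Sturm solver:

   object {
     blob 0.5:
       sphere <0, 0, 0>, 1, 1.5,
       sphere <1, 0, 0>, 1, 1.5
     root_solver Sturm
     shiny_red
   }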

Box

A box is a rectangular solid that has its edges aligned with the x, y, and z axes. It is defined in terms of two diagonally opposite corners. The alignment can be changed by rotations after the shape declaration.

The format of the declaration is:

   box <x0, y0, z0>, <x1, y1, z1>

Usually the convention is that the first point is the front-lower-left point and the second is the back-upper-right point. The following declaration is four boxes stacked on top of each other:

  define pyramid
    object {
        object { box <-1, 3, -1>, <1, 4, 1> }
      + object { box <-2, 2, -2>, <2, 3, 2> }
      + object { box <-3, 1, -3>, <3, 2, 3> }
      + object { box <-4, 0, -4>, <4, 1, 4> }
      matte_blue
    }

Cone

A cone is defined in terms of a base point, an apex point, and the radii at those two points. Note that cones are not closed (you must use discs to cap them).

The format of the declaration is:

   cone <x0, y0, z0>, r0, <x1, y1, z1>, r1

An example declaration of a cone is:

   object {
     cone <0, 0, 0>, 4, <4, 0, 0>, 0
     shiny_red
   }

Cylinder

A cylinder is defined in terms of a bottom point, a top point, and its radius. Note that cylinders are not closed.

The format of the declaration is:

   cylinder <x0, y0, z0>, <x1, y1, z1>, r

An example of a cylinder is:

   object {
     cylinder <-3, -2, 0>, <0, 1, 3>, 0.5
     shiny_red
   }

Disc

A disc is defined in terms of a center, a normal, and either a single radius or an inner radius and an outer radius. If only one radius is given, then the disc has the appearance of a (very) flat coin. If two radii are given, then the disc takes the shape of an annulus (washer) where the disc extends from the first radius to the second radius. Typical uses for discs are as caps for cones and cylinders, or as ground planes (using a really big radius).

The format of the declaration is:

   disc <cx, cy, cz>, <nx, ny, nz>, r
or
   disc <cx, cy, cz>, <nx, ny, nz>, ir, or

The center vector <cx,cy,cz> defines where the center of the disc is located; the normal vector <nx,ny,nz> defines the direction that is perpendicular to the disc. For example, a disc having the center <0,0,0> and the normal <0,1,0> would put the disc in the x-z plane with the y-axis coming straight out of the center.

An example of a disc is:

   object {
     disc <0, 2, 0>, <0, 1, 0>, 3
     rotate <-30, 20, 0>
     shiny_red
   }
Note: a disc is infinitely thin. If you look at it edge-on it will disappear.

Glyphs (TrueType fonts)

The glyph primitive creates shapes similar to the sweep primitive. The big difference is that straight line and curved line segments may appear in the same contour. Additionally, the glyph can be made from several contours with exterior ones defining the outsides of the shape and interior ones defining holes within the glyph.

Typically the glyph primitive will be used in conjunction with the utility TTFX (described below) to translate TrueType font information into Polyray glyph information.

The following declaration creates a box with a square hole cut out of it.

   object {
     glyph 2
       contour 4, <0, 0>, <4, 0>, <4, 4>, <0, 4>
       contour 4, <1, 1>, <1, 3>, <3, 3>, <3, 1>
     texture { shiny { color red reflection 0.2 } }
     translate <-2, 0, 0>
   }

The default placement of glyph coordinates is in the x-y plane and the glyph has a depth of one in z (starting at z=0 and going to z=1). To change the depth, just use something like: scale <1, 1, 0.2>.

Each entry in a contour is a 2D point. If a non-zero z component is added then the point is assumed to be an off-curve point and will create a curved segment within the contour. For example, the following declaration makes a somewhat star shaped contour with sharp corners for the inner parts of the star and rounded curves for the outer parts of the star:

  object {
    glyph 1 contour 14,
      <r0*cos( 1*dt),r0*sin( 1*dt)>,<r1*cos( 2*dt),r1*sin( 2*dt),1>,
      <r0*cos( 3*dt),r0*sin( 3*dt)>,<r1*cos( 4*dt),r1*sin( 4*dt),1>,
      <r0*cos( 5*dt),r0*sin( 5*dt)>,<r1*cos( 6*dt),r1*sin( 6*dt),1>,
      <r0*cos( 7*dt),r0*sin( 7*dt)>,<r1*cos( 8*dt),r1*sin( 8*dt),1>,
      <r0*cos( 9*dt),r0*sin( 9*dt)>,<r1*cos(10*dt),r1*sin(10*dt),1>,
      <r0*cos(11*dt),r0*sin(11*dt)>,<r1*cos(12*dt),r1*sin(12*dt),1>,
      <r0*cos(13*dt),r0*sin(13*dt)>,<r1*cos(14*dt),r1*sin(14*dt),1>
  }

The program TTFX.EXE has been included to help with the conversion of TrueType fonts from their .TTF format (typically found in the /WINDOWS/SYSTEM directory) into the sort of declaration that Polyray can understand. For example,

   ttfx \windows\system\times.ttf Foo > temp.pi
or,
   ttfx \windows\system\times.ttf Foo 0.2 > temp.pi

By default, the characters in the string being converted (the word Foo above) are packed right next to each other. If a number follows the text string, then the characters are separated by that amount. A spacing of 0.2 has worked pretty well for the fonts I've tried.

If you then add the following line to the top of temp.pi (leaving everything else like viewpoint to their defaults)

   light <0, 100, -100>
and render it with
   polyray temp.pi -r 1
you will get a nice rendered image of the word Foo.

Note that the combination of TTFX, Polyray raw triangle output, and RAW2POV is a way you can create TrueType characters for use in a number of renderers.

Implicit Surface

The format of the declaration is:

   function f(x,y,z)

The function f(x,y,z) may be any expression composed of the variables: x, y, z, a numerical value (e.g., 0.5), the operators: +, -, *, /, ^, and any Polyray supported function. The code is not particularly fast, nor is it totally accurate; however, the capability to ray-trace such a wide class of functions in a SW program is (I believe) unique to Polyray.

The following object is taken from sombrero.pi and is a surface that looks very much like diminishing ripples on the surface of water.

   define a_const 1.0
   define b_const 2.0
   define c_const 3.0
   define two_pi_a 2.0 * 3.14159265358 * a_const

   // Define a diminishing cosine surface (sombrero)
   object {
     function y - c_const * cos(two_pi_a * sqrt(x^2 + z^2)) *
                            exp(-b_const * sqrt(x^2 + z^2))
     matte_red
     bounding_box <-4, -4, -4>, <4, 4, 4>
   }

Rendering the following object will show all the places within a sphere where the solid noise function takes on the value 0.5.

   object {
     object {
       function noise(3*P) - 0.5
       u_steps 64
       v_steps 64
       w_steps 64
       Lapis_Lazuli
       bounding_box <-2, -2, -2>, <2, 2, 2>
     }
     & object { sphere <0, 0, 0>, 2 }
     rotate <-10, 20, 0>
   }

It is quite important to have the bounding box declaration within the object where the function is declared. If that isn't there, Polyray will be unable to determine where to look for intersections with the surface. (The bounding box of an implicit function defaults to <-1, -1, -1>, <1, 1, 1>.)

Height Field

There are two ways that height fields can be specified, either by using data stored in a Targa file, or using an implicit function of the form y = f(x, z).

The default orientation of a height field is that the entire field lies in the square 0 <= x <= 1, 0 <= z <= 1. File based height fields are always in this orientation, implicit height fields can optionally be defined over a different area of the x-z plane. The height value is used for y.

File Based Height Fields

Height field data can be read from any Targa, GIF, or JPEG format file. A GIF image will be treated as an 8 bit format image as described below. Any color information in the GIF file is ignored.

Note that if you use JPEG images, a grayscale JPEG will be treated as an 8 bit format and a color JPEG will be treated as a 24 bit format height field. Due to the lossy nature of JPEG, it is extremely unlikely that a color JPEG will be useful as a height field. It is possible that reasonable results can be obtained using grayscale JPEG images as height fields (no guarantees).

By using smooth_ in front of the declaration, an extra step is performed that calculates normals to the height field at every point within the field. The result of this is a greatly smoothed appearance, at the expense of around three times as much memory being used.

The format of the declaration is:

   height_field "filename"
   smooth_height_field "filename"

8 Bit Format

Each pixel in the file is represented by a single byte. The value of the byte is used as an integer height between -128 and 127.

16 Bit Format

Each pixel in the file is represented by two bytes, low then high. The high component defines the integer component of the height, the low component holds the fractional part scaled by 255. The entire value is offset by 128 to compensate for the unsigned nature of the storage bytes. As an example the values high = 140, low = 37 would be translated to the height:

   (140 + 37 / 256) - 128 = 12.144

Similarly, if you are generating a Targa file to use in Polyray: given a height, add 128 to the height, extract the integer component, then extract the scaled fractional component. The following code fragment shows a way of generating the low and high bytes from a floating point number.

         unsigned char low, high;
         float height;
         FILE *height_file;

         ...

         height += 128.0;
         high = (unsigned char)height;
         height -= (float)high;
         low = (unsigned char)(256.0 * height);
         fputc(low, height_file);
         fputc(high, height_file);

24 Bit Format

The red component defines the integer component of the height, the green component holds the fractional part scaled by 255, the blue component is ignored. The entire value is offset by 128 to compensate for the unsigned nature of the RGB values. As an example the values r = 140, g = 37, and b = 0 would be translated to the height:

   (140 + 37 / 256) - 128 = 12.144

Similarly, if you are generating a Targa file to use in Polyray: given a height, add 128 to the height, extract the integer component, then extract the scaled fractional component. The following code fragment shows a way of generating the RGB components from a floating point number.

         unsigned char r, g, b;
         float height;
         FILE *height_file;

         ...

         height += 128.0;
         r = (unsigned char)height;
         height -= (float)r;
         g = (unsigned char)(256.0 * height);
         b = 0;
         fputc(b, height_file);
         fputc(g, height_file);
         fputc(r, height_file);

32 Bit Format

The four bytes of the 32 bit Targa image are used to hold the four bytes of a floating point number. The format of the floating point number is machine specific, and the order of the four bytes in the image correspond exactly to the order of the four bytes of the floating number when it is in memory.

For example, the following code shows how to take a floating point number and store it to a file in the format that Polyray will expect. You will also need to write the Targa header, etc.

   unsigned char *byteptr;
   float depth;
   FILE *ofile;

   ... calculations for depth ...

   /* Store a floating point number as a 32 bit color- this is
      obviously a machine specific result for the floating point
      number that is stored.  There is also an assumption here
      that a float type is exactly 4 bytes and that the
      size of an unsigned char is exactly 1 byte. */
   byteptr = (unsigned char *)&depth;
   fputc(byteptr[0], ofile);
   fputc(byteptr[1], ofile);
   fputc(byteptr[2], ofile);
   fputc(byteptr[3], ofile);

   ...

Implicit Height Fields

Another way to define height fields is by evaluating a mathematical function over a grid. Given a function y = f(x, z), Polyray will evaluate the function over a specified area and generate a height field based on the function. This method can be used to generate images of many sorts of functions that are not easily represented by collections of simpler primitives.

The valid formats of the declaration are:

   height_fn xsize, zsize, minx, maxx, minz, maxz, expression
   height_fn xsize, zsize, expression
   smooth_height_fn xsize, zsize, minx, maxx, minz, maxz, expression
   smooth_height_fn xsize, zsize, expression

If the four values minx, maxx, minz, and maxz are not defined then the default square 0 <= x <= 1, 0 <= z <= 1 will be used.

For example,

   // Define constants for the sombrero function
   define a_const 1.0
   define b_const 2.0
   define c_const 3.0
   define two_pi_a 2.0 * 3.14159265358 * a_const

   // Define a diminishing cosine surface (sombrero)
   object {
     height_fn 80, 80, -4, 4, -4, 4,
       c_const * cos(two_pi_a * sqrt(x^2 + z^2)) *
                 exp(-b_const * sqrt(x^2 + z^2))
     shiny_red
   }

will build a height field 80x80, covering the area from -4 <= x <= 4, and -4 <= z <= 4.

Compare the run-time performance and visual quality of the sombrero function as defined in sombfn.pi with the sombrero function as defined in sombrero.pi. The former uses a height field representation and renders quite fast. The latter uses a very general function representation and gives smoother but very slow results.

Lathe surfaces

A lathe surface is a polygon that has been revolved about the y-axis. This surface allows you to build objects that are symmetric about an axis, simply by defining 2D points.

The format of the declaration is:

    lathe type, direction, total_vertices,
       <vert1.x,vert1.y,vert1.z>
       [, <vert2.x, vert2.y, vert2.z>]
       [, etc. for total_vertices vertices]

The value of type is either 1, or 2. If the value is 1, then the surface will simply revolve the line segments. If the value is 2, then the surface will be represented by a spline that approximates the line segments that were given. A lathe surface of type 2 is a very good way to smooth off corners in a set of line segments.

The value of the vector direction is used to change the orientation of the lathe. For a lathe surface that goes straight up and down the y-axis, use <0, 1, 0> for direction. For a lathe surface that lies on the x-axis, you would use <1, 0, 0> for the direction.

Note that CSG will really only work correctly if you close the lathe - that is, either make the end point of the lathe the same as the start point, or make the x-value of the start and end points equal zero. Lathes, unlike polygons, are not automatically closed by Polyray.

Note: since a splined lathe surface (type = 2) is a 4th order polynomial, it is possible to specify which quartic root solver to use. See section 2.3.2.15 for a description of the root_solver statement.
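
A hypothetical example of a splined lathe - a goblet-like profile that is closed by starting and ending at x = 0 (2D points are used here, as in the sweep example later in this section):

   object {
      lathe 2, <0, 1, 0>, 7,
         <0, 0>, <2, 0>, <2, 0.5>, <0.5, 1>,
         <0.5, 3>, <2, 3.5>, <0, 4>
      shiny_red
   }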

NURBS

Polyray supports the general patch type, Non-Uniform Rational B-Splines (NURBS). All that is described here is how they are declared and used in Polyray. For further background and details on NURBS, refer to the literature.

They are declared with the following syntax:

   nurb u_order, u_points, v_order, v_points,
        u_knots, v_knots, uv_mesh
or
   nurb u_order, u_points, v_order, v_points, uv_mesh

Where each part of the declaration has the following format and definition,

   Name       Format    Definition
   u_order    integer   One more than the power of the spline in the u direction. (If u_order = 4, then it will be a cubic patch.)
   u_points   integer   The number of vertices in each row of the patch mesh
   v_order    integer   One more than the power of the spline in the v direction.
   v_points   integer   The number of rows in the patch mesh
   u_knots    [...]     Knot values in the u direction
   v_knots    [...]     Knot values in the v direction
   uv_mesh    [[...] ... [...]]   An array of arrays of vertices. Each vertex may have either three or four components. If the fourth component is set then the spline will be rational. If the vertex has only three components then the homogeneous (fourth) component is assumed to be one. The homogeneous component must be greater than 0.

For example, the following is a complete declaration of a NURB patch

   object {
     nurb 4, 6, 4, 5,
         [0, 0, 0, 0, 1.5, 1.5, 3, 3, 3, 3], // Non-uniform knots
         [0, 0, 0, 0, 1, 2, 2, 2, 2],        // Uniform open knots
         [[<0,0,0>, <1,0, 3>, <2,0,-3>,     <3,0, 3>, <4,0,0>],
          [<0,1,0>, <1,1, 0>, <2,1, 0>,     <3,1, 0>, <4,1,0>],
          [<0,2,0>, <1,2, 0>, <2,2, 5,2>,   <3,2, 0>, <4,2,0>],
          [<0,3,0>, <1,3, 0>, <2,3, 5,0.5>, <3,3, 0>, <4,3,0>],
          [<0,4,0>, <1,4, 0>, <2,4, 0>,     <3,4, 0>, <4,4,0>],
          [<0,5,0>, <1,5,-3>, <2,5, 3>,     <3,5,-3>, <4,5,0>]]
     translate <-2, -2.5, 0>
     rotate <-90, -30, 0>
     uv_steps 32, 32
     shiny_red
   }

The preceding patch was both non-uniform and rational. If you don't want to bother declaring the knot vector you can simply omit it. This will result in an open uniform B-Spline patch. Most of the time the non-uniform knot vectors are unnecessary and can be safely omitted. The preceding declaration with uniform knot vectors and non-rational vertices could then be declared as:

   object {
     nurb 4, 6, 4, 5,
      [[< 0, 0, 0>, < 1, 0, 3>, < 2, 0,-3>, < 3, 0, 3>, < 4, 0, 0>],
       [< 0, 1, 0>, < 1, 1, 0>, < 2, 1, 0>, < 3, 1, 0>, < 4, 1, 0>],
       [< 0, 2, 0>, < 1, 2, 0>, < 2, 2, 5>, < 3, 2, 0>, < 4, 2, 0>],
       [< 0, 3, 0>, < 1, 3, 0>, < 2, 3, 5>, < 3, 3, 0>, < 4, 3, 0>],
       [< 0, 4, 0>, < 1, 4, 0>, < 2, 4, 0>, < 3, 4, 0>, < 4, 4, 0>],
       [< 0, 5, 0>, < 1, 5,-3>, < 2, 5, 3>, < 3, 5,-3>, < 4, 5, 0>]]
     translate <-2, -2.5, 0>
     rotate <-90, -30, 0>
     uv_steps 32, 32
     shiny_red
   }

Note that internally NURBS are stored as triangles. This can result in a high memory usage for a finely diced NURB (uv_steps large).

Parabola

A parabola is defined in terms of an apex point (the top), a base point (the bottom), and its radius at the base.

The format of the declaration is:

   parabola <x0, y0, z0>, <x1, y1, z1>, r

The vector <x0,y0,z0> defines the top of the parabola - the part that comes to a point. The vector <x1,y1,z1> defines the bottom of the parabola, the width of the parabola at this point is r.

An example of a parabola declaration is:

   object {
     parabola <0, 6, 0>, <0, 0, 0>, 3
     translate <16, 0, 16>
     steel_blue
   }

This is sort of like a salt shaker shape with a rounded top and the base on the x-z plane.

Parametric surface

A parametric surface allows the creation of surfaces as a mesh of triangles. By defining the vertices of the mesh in terms of functions of u and v, Polyray will automatically create the entire surface. The smoothness of the surface is determined by the number of steps allowed for u and v.

The mesh defaults to 0 <= u <= 1, and 0 <= v <= 1. By explicitly defining the uv_bounds for the surface it is possible to create only the desired parts of the surface.

The format of the declaration is:

   parametric <fx(u,v), fy(u,v), fz(u,v)>

For example, the following declarations could be used to create a torus:

   define r0 1.25
   define r1 0.5

   define torux (r0 + r1 * cos(v)) * cos(u)
   define toruy (r0 + r1 * cos(v)) * sin(u)
   define toruz r1 * sin(v)

   object {
     parametric <torux,toruy,toruz>
     rotate <-20, 0, 0>
     shiny_red
     uv_bounds 0, 2*pi, 0, 2*pi
     uv_steps 16, 8
   }

Polygon

Although polygons are not very interesting mathematically, there are many sorts of objects that are much easier to represent with polygons. Polyray assumes that all polygons are closed and automatically adds a side from the last vertex to the first vertex.

The format of the declaration is:

    polygon total_vertices,
       <vert1.x,vert1.y,vert1.z>
       [, <vert2.x, vert2.y, vert2.z>]
       [, etc. for total_vertices vertices]

As with the sphere, note the comma separating each vertex of the polygon.

I use polygons as a floor in a lot of images. They are a little slower than the corresponding plane, but for scan conversion they are a lot easier to handle. An example of a checkered floor made from a polygon is:

   object {
      polygon 4, <-20,0,-20>, <-20,0,20>, <20,0,20>, <20,0,-20>
      texture {
         checker matte_white, matte_black
         translate <0, -0.1, 0>
         scale <2, 1, 2>
       }
    }

Polynomial surface

The format of the declaration is:

   polynomial f(x,y,z)

The function f(x,y,z) must be a simple polynomial, e.g., x^2+y^2+z^2-1.0 is the definition of a sphere of radius 1 centered at (0,0,0).

For quartic (4th order) equations, there are three methods that Polyray can use to solve for roots. By specifying which one is desired, it is possible to tune for quality or speed. The method of Ferrari is the fastest, but also the most numerically unstable. By default the method of Vieta is used. Sturm sequences (which are the slowest) should be used where the highest quality is desired.

The declaration of which root solver to use takes one of the forms:

   root_solver Ferrari
   root_solver Vieta
   root_solver Sturm

(Capitalization is important - these are proper nouns after all.)

Note: due to unavoidable numerical inaccuracies, not all polynomial surfaces will render correctly from all directions.

The following example, taken from devil.pi defines a quartic polynomial. The use of the CSG clipping object is to trim uninteresting parts of the surface. The bounding box declaration helps the scan conversion routines figure out where to look for the surface.

   // Variant of a devil's curve in 3-space.  This figure has a top
   // and bottom part that are very similar to a hyperboloid of one
   // sheet, however the central region is pinched in the middle
   // leaving two teardrop shaped holes.
   object {
     object { polynomial x^4 + 2*x^2*z^2 - 0.36*x^2 - y^4 +
                         0.25*y^2 + z^4
              root_solver Ferrari }
     & object { box <-2, -2, -0.5>, <2, 2, 0.5> }
     bounding_box <-2, -2, -0.5>, <2, 2, 0.5>
     rotate <10, 20, 0>
     translate <0, 3, -10>
     shiny_red
   }

Note: as the order of the polynomial goes up, the numerical accuracy required to render the surface correctly also goes up. One problem that starts to rear its ugly head at around 3rd to 4th order equations is determining shadows correctly; the symptom is black spots on the surface. You can ease this problem to a certain extent by making the value of shadow_tolerance larger. For 4th and higher order equations, you will want to use a value of at least 0.05, rather than the default 0.001.

Spheres

Spheres are the simplest 3D object to render and a sphere primitive enjoys a speed advantage over most other primitives.

The format of the declaration is:

    sphere <center.x, center.y, center.z>, radius

Note the comma after the center vector; it really is necessary.

My basic benchmark file is a single sphere, illuminated by a single light. The definition of the sphere is:

   object {
     sphere <0, 0, 0>, 2
     shiny_red
   }

Sweep surface

A sweep surface, also referred to as an extruded surface, is a polygon that has been swept along a given direction. It can be used to make multi-sided beams, or to create ribbon-like objects.

The format of the declaration is:

    sweep type, direction, total_vertices,
       <vert1.x,vert1.y,vert1.z>
       [, <vert2.x, vert2.y, vert2.z>]
       [, etc. for total_vertices vertices]

The value of type is either 1 or 2. If the value is 1, then the surface will be a set of connected squares. If the value is 2, then the surface will be represented by a spline that approximates the line segments that were given.

The value of the vector direction is used to change the orientation of the sweep. For a sweep surface that is extruded straight up and down the y-axis, use <0, 1, 0> for direction. The size of the vector direction will also affect the amount of extrusion (e.g., if |direction| = 2, then the extrusion will be two units in that direction).

An example of a sweep surface is:

   // Sweep made from connected quadratic splines.
   object {
     sweep 2, <0, 2, 0>, 16,
        <0, 0>, <0, 1>, <-1, 1>, <-1, -1>, <2, -1>, <2, 3>,
        <-4, 3>, <-4, -4>, <4, -4>, <4, -11>, <-2, -11>,
        <-2, -7>, <2, -7>, <2, -9>, <0, -9>, <0, -8>
     translate <0, 0, -4>
     scale <1, 0.5, 1>
     rotate <0,-45, 0>
     translate <10, 0, -18>
     shiny_yellow
   }

Note: CSG will really only work correctly if you close the sweep - that is, make the end point of the sweep the same as the start point. Sweeps, unlike polygons, are not automatically closed by Polyray.

See the description of glyphs for a more general swept surface.

Torus

The torus primitive is a doughnut shaped surface that is defined by a center point, the distance from the center point to the middle of the ring of the doughnut, the radius of the ring of the doughnut, and the orientation of the surface.

The format of the declaration is:

    torus r0, r1, <center.x, center.y, center.z>,
                  <dir.x, dir.y, dir.z>

As an example, a torus that has major radius 1, minor radius 0.4, and is oriented so that the ring lies in the x-z plane would be declared as:

   object {
     torus 1, 0.4, <0, 0, 0>, <0, 1, 0>
     shiny_red
   }

Note: since a torus is a 4th order polynomial, it is possible to specify which quartic root solver to use.
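
For example, a sketch only (this assumes the root_solver statement is accepted inside a torus object in the same place it appears in the polynomial examples above, and that shiny_red has been defined as before):

   object {
     torus 1, 0.4, <0, 0, 0>, <0, 1, 0>
     root_solver Sturm
     shiny_red
   }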

Triangular patches

A triangular patch is defined by a set of vertices and their normals. When calculating shading information for a triangular patch, the normal information is used to interpolate the final normal from the intersection point to produce a smooth shaded triangle.

The format of the declaration is:

    patch <x0,y0,z0>, <nx0,ny0,nz0>,  [UV u0, v0,]
          <x1,y1,z1>, <nx1,ny1,nz1>,  [UV u1, v1,]
          <x2,y2,z2>, <nx2,ny2,nz2> [, UV u2, v2]

The vertices and normals are required for each of the three corners of the patch. The u/v coordinates are optional. If they are omitted, then they will be set to the following values:

   u0 = 0, v0 = 0
   u1 = 1, v1 = 0
   u2 = 0, v2 = 1

Smooth patch data is usually generated as the output of another program.
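
As an illustrative sketch (the vertex, normal, and u/v values here are made up, and shiny_red is assumed to be a previously defined texture), a single upward-facing patch could be written as:

   object {
     patch <0, 0, 0>, <0, 1, 0>, UV 0, 0,
           <1, 0, 0>, <0, 1, 0>, UV 1, 0,
           <0, 0, 1>, <0, 1, 0>, UV 0, 1
     shiny_red
   }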

Constructive Solid Geometry (CSG)

Objects can be defined in terms of the union, intersection, difference, and inverse of other objects. The operations and the symbols used are:

   csgexper + csgexper  - Union
   csgexper * csgexper  - Intersection
   csgexper - csgexper  - Difference
   csgexper & csgexper  - Clip the first object by the second
   ~csgexper            - Inverse
   (csgexper)           - Parenthesised expression

Note that intersection and difference require a clear inside and outside. Not all primitives have well defined sides. Those that do are:

Spheres, Boxes, Glyphs, Polynomials, Blobs, Tori, and Functions.

Other surfaces that do not always have a clear inside/outside, but work reasonably well in CSG intersections are:

Cylinders, Cones, Discs, Height Fields, Lathes, Parabola, Polygons, and Sweeps.

Using Cylinders, Cones, and Parabolas works correctly, but the open ends of these surfaces will also be open in the resulting CSG. To close them off you can use a disc shape.

Using Discs and Polygons in a CSG is really the same as doing a CSG with the plane that they lie in. In fact, a large disc is an excellent choice for clipping or intersecting an object, as the inside/outside test is very fast.

Lathes and Sweeps use Jordan's rule to determine if a point is inside: given a test point, if a line from that point to infinity crosses the surface an odd number of times, then the point is inside. The net result is that if the lathe (or sweep) is not closed, then you may get odd results in a CSG intersection (or difference).

CSG involving height fields only works within the bounds of the field.

As an example, the following object is a sphere of radius 1 with a hole of radius 0.5 through the middle:

   define cylinder_z object { cylinder <0,0,-1.1>, <0,0,1.1>, 0.5 }
   define unit_sphere object { sphere <0, 0, 0>, 1 }

   // Define a CSG shape by deleting a cylinder from a sphere
   object {
     unit_sphere - cylinder_z
     shiny_red
   }

Gridded objects

A gridded object is a way to compactly represent a rectangular arrangement of objects by using an image map. Each object is placed within a 1x1 cube that has its lower left corner at the location <i, 0, j> and its upper right corner at <i+1, 1, j+1>. The color index of each pixel in the image map is used to determine which of a set of objects will be placed at the corresponding position in space.

The gridded object is much faster to render than the corresponding layout of objects. The major drawback is that every object must be scaled and translated to completely fit into a 1x1x1 cube that has corners at <0,0,0> and <1,1,1>.

The size of the entire grid is determined by the number of pixels in the image. A 16x32 image would go from 0 to 16 along the x-axis, and the last row would range from 0 to 16 in x at 31 units out in z from the x-axis.

The format of the declaration is:

   gridded "image.tga",
      object1
      object2
      object3
      ...

An example of how a gridded object is declared is:

   define tiny_sphere object { sphere <0.5, 0.4, 0.5>, 0.4 }
   define pointy_cone object { cone <0.5, 0.5, 0.5>, 0.4,
                                    <0.5, 1, 0.5>, 0 }

   object {
      gridded "grdimg0.tga",
         tiny_sphere { shiny_coral }
         tiny_sphere { shiny_red }
         pointy_cone { shiny_green }
      translate <-10, 0, -10>
      rotate <0, 210, 0>
   }

In the image grdimg0.tga, there are a total of 3 colors used: every pixel that uses color index 0 will generate a shiny coral coloured sphere, every pixel that uses index 1 will generate a red sphere, every pixel that uses index 2 will generate a green cone, and every other color index used in the image will leave the corresponding space empty.

The normal image format for a gridded object is either grayscale or color mapped. To determine which object will be used, the pixel value itself is used in a grayscale image, and the color index is used in a color mapped image. If a 16 bit Targa is used, then the second byte of the color is used. If a 24 bit Targa is used, then the value of the red component is used. This limits the number of objects for all images to 256.

A color JPEG is treated the same as a 24 bit Targa (unlikely to be useful, due to the lossy nature of JPEG).

Particle Systems

There are two distinct pieces of information that are used by Polyray to do particles. The first is the particle declaration, which is the particle generator. The second is the particle itself. It is important to retain the distinction: you generally only want to have one particle generator, but you may want that generator to produce many particles.

The form of the declaration for a particle generator is:

   particle {
     object "name"
     position vector
     velocity vector
     acceleration vector
     birth float
     death float
     count float
     avoid float
   }

The object named in the particle declaration must be an object appearing in a define statement. If there isn't an object with that name, Polyray will write an error message and abort. Everything else is optional, but if you don't set either the velocity or the acceleration, then the object will just sit at the origin.

The default values for each component are:

Defaults for particle declaration

   Component      Default
   ------------   ----------------------
   position       <0, 0, 0>
   velocity       <0, 0, 0>
   acceleration   <0, 0, 0>
   birth          frame == start_frame
   death          false (never dies)
   count          1
   avoid          false (doesn't bounce)

As an example, the following declaration makes a starburst of 50 spheres. Note the conditional before the particle declaration. You almost always want to do this; otherwise you will create a new particle generator at every frame of the animation. Since the default birth condition is to generate particles only on the first frame, the only side effect here would be to suck up more memory every frame to retain the particle generator definition.

   frame_time 0.05

   define gravity -1

   // Star burst
   if (frame == start_frame)
   particle {
     position <0, 5, 0>
     velocity brownian(<0, 0, 0>, <1, 1, 1>)
     acceleration gravity
     object "bsphere"
     count 50
   }

The value in the velocity declaration generates a random vector that ranges from <-1, -1, -1> to <1, 1, 1>. Each particle of the 50 is given a different initial velocity, which is what gives the bursting effect.

An additional declaration, frame_time xx, has been added specifically for tuning particle animations. This declaration determines how much time passes for every frame. Each particle starts with an age of 0; after each frame its age is incremented by the value xx in the frame_time declaration. Additionally, the position and velocity of the particle are updated after every frame according to the formula:

   V = V + frame_time * A
   P = P + frame_time * V

The status of a particle is set by making use of some of the standard Polyray variables. The names and meanings are:

   Variable   Meaning
   --------   ------------------------------------------------------------
   P          Current location of the particle as a vector
   x          X location of the particle (or P[0])
   y          Y location of the particle (or P[1])
   z          Z location of the particle (or P[2])
   I          Current velocity of the particle as a vector
   u          Age of the particle (frame_time * elapsed frames since birth)

These values are used in two situations, when checking the death condition of a particle and when calculating the acceleration of the particle.

If an avoid statement is given then before every frame the position of the particle is checked to see if it hits any objects in the scene (non-particle objects). If so, then the particle will bounce off the object.
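
As an untested sketch of these features (the death and avoid expressions here are illustrative assumptions, and "bsphere" is assumed to be a previously defined object), a generator whose particles bounce off scene objects and die after two time units of age might look like:

   if (frame == start_frame)
   particle {
     object "bsphere"
     position <0, 5, 0>
     velocity brownian(<0, 0, 0>, <1, 1, 1>)
     acceleration <0, -1, 0>
     death (u > 2)    // u is the particle's age
     avoid 1          // bounce off non-particle objects
     count 20
   }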

Color and lighting

The color space used in Polyray is RGB, with each component specified as a value from 0 -> 1. The way the color and shading of surfaces is specified is described in the following sections.

RGB colors are defined as either a three component vector, such as <0.6, 0.196078, 0.8>, or as one of the X11R3 named colors (which for the value given is DarkOrchid). One of these days when I feel like typing and not thinking (or if I find them on line), I'll put in the X11R4 colors.

The colouring of objects is determined by the interaction of lights, the shape of the surface being struck, and the characteristics of the surface itself.

Light sources

Light sources are one of: simple positional lights, spot lights, directional lights, textured lights, or depth mapped lights. With the exception of area lights (textured lights given a spherical size), these lights have no physical size. The lights themselves do not appear in the scene, only their effects.

Positional Lights

A positional light is defined by its RGB color and its XYZ position.

The formats of the declaration are:

    light color, location
    light location

The second declaration will use white as the color.
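
For example (positions and colors chosen arbitrarily):

   // A white light above and behind the viewer
   light <10, 10, -10>

   // A dim red light to the left of the origin
   light <0.5, 0.1, 0.1>, <-2, 4, 0>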

Spot Lights

The formats of the declaration are:

    spot_light color, location, pointed_at, Tightness, Angle, Falloff
    spot_light location, pointed_at

The vector location defines the position of the spot light, the vector pointed_at defines the point at which the spot light is directed. The optional components are:

Optional components for spot_light declaration

   Name        Description
   ---------   -------------------------------------------------------------
   color       The color of the spotlight
   Tightness   The power function used to determine the shape of the hot spot
   Angle       The angle (in degrees) of the full effect of the spot light
   Falloff     A larger angle at which the amount of light falls to nothing

A sample declaration is:

   spot_light white, <10,10,0>, <3,0,0>, 3, 5, 20

Directional lights

The directional light means just that - light coming from some direction.

   directional_light color, direction
   directional_light direction

An example would be:

  directional_light <2, 3, -4>

giving a white light coming from the right, above, and behind the origin.

Textured lights

Textured lights are an enhancement of point lights that add a function (possibly using image maps) to describe the intensity and color of the light in each direction, as well as transformations and size. The format of the declaration is:

   textured_light {
     color expression
     [sphere center, radius]
     [translate/rotate/scale]
   }

Any color expression is allowed for the textured light, and is evaluated at run time.

A rotating slide projector light from the data file ilight.pi is shown below:

   define block_environ
   environment("one.tga", "two.tga", "three.tga",
               "four.tga", "five.tga", "six.tga")
   textured_light {
     color environment_map(P, block_environ)
     rotate <frame*6, frame*3, 0>
     translate <0, 2, 0>
   }

Area Lights

By adding a sphere declaration to a textured_light, it is turned into an area light. The current implementation is a bit rough for long and narrow shadows, but in general gives very good results. A typical declaration of an area light is:

   textured_light {
      color white
      sphere <8, 10, 0>, 1
   }

Depth Mapped Lights

Depth mapped lights are very similar to spotlights, in the sense that they point from one location and at another location. The primary use for this light type is for doing shadowing in scan converted scenes. The format of their declaration is:

   depthmapped_light {
      [ angle float ]
      [ aspect float ]
      [ at vector ]
      [ color expression ]
      [ depth "depthfile.tga" ]
      [ from vector ]
      [ hither float ]
      [ up vector ]
   }

You may notice that the format of the declaration is very similar to the viewpoint declaration. This is intentional, as you will usually generate the depth information for depthfile.tga as the output of a run of Polyray. To support output of depth information, a new statement was added to the viewpoint declaration: image_format.

A viewpoint declaration that will output a depth file would have the form:

   viewpoint {
     from [ location of depth mapped light ]
     at   [ location the light is pointed at ]

     image_format 1
   }

Where the final statement tells Polyray to output depth information instead of color information. Note that if the value in the image_format statement is 0, then normal rendering will occur.

If a hither declaration is used, then the value given is used as a bias to help prevent self shadowing. The default value for this bias is the value of shadow_tolerance in polyray.ini.
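
Putting the two passes together, a sketch of a depth mapped light that uses a previously generated depth file (the file name and vectors here are hypothetical, and should match the from/at of the viewpoint used in the depth pass):

   depthmapped_light {
     color white
     depth "shadow.tga"
     from <10, 10, -10>
     at <0, 0, 0>
     up <0, 1, 0>
     angle 45
   }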

Background color

The background color is the one used if the current ray does not strike any objects. The color can be any vector expression, although it is usually a simple RGB color value. The format of the declaration is:

    background <R,G,B>
or
    background color

If no background color is set, black will be used.

An interesting trick that can be performed with the background is to use an image map as the background color (it is also a way to translate from one Targa format to another). The way this can be done is:

  background planar_imagemap(image("test1.tga"), P)

The background also affects the opacity channel in 16 and 32 bit Targa output images. If any background contributes directly to the pixel (not as a result of reflection or refraction), then the attribute bit is cleared in a 16 bit color and the attribute byte is set to the percentage of the pixel that was covered by the background in a 32 bit color.

In order to extend the flexibility of the background, various runtime variables are given special meanings while the background color is evaluated:

Global variables

   Variable   Description
   --------   --------------------------------------------------------------
   u, x       How far across the output image (from 0 at left to 1 at right)
   v, z       How far down the output image (from 0 at bottom to 1 at top)
   W          <0,0,0>
   P          Same as <x, 0, z>
   N          Direction of the current ray
   I          <0,0,0>
   U          Same as <u, v, w>
   w          Level of recursion (0 for eye rays, higher for reflected and
              refracted rays)

As an example, suppose you wanted to have an image appear in the background, but you didn't want to have the image appear in any reflections. Then you could define the background with the following expression:

   background (w == 0 ? planar_imagemap(img1, P) : black)

If you wanted to have one image in the background, and another that appears in reflections (or refracted rays), then you could use the following expression:

   background (w == 0 ? planar_imagemap(img1, P)
                      : spherical_imagemap(img2, N))

The previous background expression fits img1 exactly into the output image and maps img2 completely around the entire scene. This might be useful if you have a map of stars that you want to appear in reflections, but still want to have an image as the background.

Global Haze (fog)

The global haze is a color that is added based on how far the ray travelled before hitting the surface. The format of the expression is:

   haze coeff, starting_distance, color

The color you use should almost always be the same as the background color. The only time it would be different is if you are trying to put haze into a valley, with a clear sky above (this is a tough trick, but looks nice). An example would be:

   haze 0.8, 3, midnight_blue

The value of the coeff ranges from 0 to 1, with values closer to 0 causing the haze to thicken, and values closer to 1 causing the haze to thin out. I know it seems backwards, but it is working and I don't want to break anything.

Textures

Polyray supports a few simple procedural textures: a standard shading model, a checker texture, a hexagon texture, and a general purpose (3D noise based) texture. In addition, a very flexible (although slower) functional texture is supported. Individual textures can be combined in various ways to create new ones. Texture types that help to combine other textures include: layered textures, indexed textures, and summed textures.

The general syntax of a texture is:

   texture { [texture declaration] }
or
   texture_sym

Where texture_sym is a previously defined texture declaration.

Procedural Textures

Procedural textures (e.g., checker, matte_white, shiny_red, ...) are ones that are completely defined at the time the data file is read.

Standard Shading Model

Unlike many other ray-tracers, surfaces in Polyray do not have a single color that is used for all of the components of the shading model. Instead, a number of characteristics of the surface must be defined (with a matte white being the default).

A surface declaration has the form:

     surface {
        [ surface definitions ]
     }

For example, the following declaration is a red surface with a white highlight, corresponding to the often seen plastic texture:

   define shiny_red
   texture {
     surface {
       ambient red, 0.2
       diffuse red, 0.6
       specular white, 0.8
       microfacet Reitz 10
     }
   }

The allowed surface characteristics that can be defined are:

   Parameter      Description
   ------------   -----------------------------------------
   color          Color if not given in another component
   ambient        Light given off by the surface
   diffuse        Light reflected in all directions
   specular       Amount and color of specular highlights
   reflection     Reflectivity of the surface
   transmission   Amount and color of refracted light
   microfacet     Specular lighting model (see below)

The lighting equation used is (in somewhat simplified terms):

   L = ambient + diffuse + specular + reflected + transmitted
or
   L = Ka + Kd * (l1 + l2 + ...) + Ks * (l1 + l2 + ...) + Kr + Kt

Where l1, l2, ... are the lights, Ka is the ambient term, Kd is the diffuse term, Ks is the specular term, Kr is the reflective term, and Kt is the transmitted (refractive) term. Each of these terms has a scale value and a filter value (the filter defaults to white/clear if unspecified).

See the file colors.inc for a number of declarations of surface characteristics, including: mirror, glass, shiny, and matte.

For lots of detail on lighting models, and the theory behind how color is used in computer generated images, run (don't walk) down to your local computer book store and get:

Illumination and Color in Computer Generated Imagery, Roy Hall, Springer-Verlag, 1989.

Source code in the back of that book was the inspiration for the microfacet distribution models implemented for Polyray.

Note that you don't really have to specify all of the color components if you don't want to. If the color of a particular part of the surface declaration is not defined, then the value of the color component will be examined to see if it was declared. If so, then that color will be used as the filter. As an example, the declaration above could also be written as:

   define shiny_red
   texture {
     surface {
       color red
       ambient 0.2
       diffuse 0.6
       specular white, 0.8
       microfacet Reitz 10
     }
   }

Ambient light

Ambient lighting is the light given off by the surface itself. This will be a constant amount, independent of any lights that may be in the scene.

The format of the declaration is:

    ambient color, scale
    ambient scale

As always, color indicates either an RGB triple like <1.0,0.7,0.9> or a named color. scale gives the amount of contribution that ambient makes to the overall amount of light coming from the pixel. The scale values should lie in the range 0.0 -> 1.0.

Diffuse light

Diffuse lighting is the light given off by the surface under stimulation by a light source. The intensity of the diffuse light is directly proportional to the angle of the surface with respect to the light.

The format of the declaration is:

    diffuse color, scale
    diffuse scale

The only information used for diffuse calculations is the angle of incidence of the light on the surface.

Specular highlights

The format of the declaration is:

   specular color, scale
   specular scale

The means of calculating specular highlights is, by default, the Phong model. Other models are selected through the Microfacet distribution declaration.

Reflected light

Reflected light is the color of whatever lies in the reflected direction as calculated by the relationship of the view angle and the normal to the surface.

The format of the declaration is:

   reflection scale
   reflection color, scale

Typically, only the scale factor is included in the reflection declaration; this corresponds to all colors being reflected with intensity proportional to the scale. A color filter is allowed in the reflection definition, and this allows the modification of the color being reflected (I'm not sure if this is useful, but I included it anyway).

Transmitted light

Transmitted light is the color of whatever lies in the refracted direction as calculated by the relationship of the view angle, the normal to the surface, and the index of refraction of the material.

The format of the declaration is:

   transmit scale, ior
   transmit color, scale, ior

Typically, only the scale factor is included in the transmit declaration; this corresponds to all colors being transmitted with intensity proportional to the scale. A color filter is allowed in the transmit definition, and this allows the color being transmitted to be modified by making the transmission filter different from the color of the surface itself.
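
As a sketch of a glass-like surface (the values here are representative choices, not taken from colors.inc):

   define simple_glass
   texture {
     surface {
       ambient 0.1
       diffuse 0.2
       specular white, 0.7
       reflection white, 0.1
       transmit white, 0.8, 1.5    // ior of 1.5, roughly that of glass
     }
   }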

Transmission makes it possible for colors very different from that of the surface closest to the eye to become apparent. (See gsphere.pi for an example: a red sphere is in the foreground, with a green sphere and a blue sphere behind. The specular highlights of the red sphere go to yellow, and blue light is transmitted through the red sphere.)

A more complex file is lens.pi in which two convex lenses are lined up in front of the viewer. The magnified view of part of a grid of coloured balls is very apparent in the front lens.

Microfacet distribution

The microfacet distribution is a function that determines how the specular highlighting is calculated for an object.

The format of the declaration is:

   microfacet Distribution_name falloff_angle
   microfacet falloff_angle

The distribution name is one of: Blinn, Cook, Gaussian, Phong, Reitz. The falloff angle is the angle at which the specular highlight falls to 50% of its maximum intensity. (The smaller the falloff angle, the sharper the highlight.) If a microfacet name is not given, then the Phong model is used.

The falloff angle must be specified in degrees, with values in the range 0 to 45. The falloff angle corresponds to the roughness of the surface: the smaller the angle, the smoother the surface.

Note: as stated before, look at the book by Hall. I have found falloff values of 5-10 degrees to give nice tight highlights. Using falloff angle may seem a bit backwards from other raytracers, which typically use the power of a cosine function to define highlight size. When using a power value, the higher the power, the smaller the highlight. Using angles seems a little tidier since the smaller the angle, the smaller the highlight.

Checker

The checker texture has the form:

   texture {
     checker texture1, texture2
   }

where texture1 and texture2 are texture declarations (or texture constants).

A standard sort of checkered plane can be defined with the following:

   // Define a matte red surface
   define matte_red
   texture {
     surface {
       ambient red, 0.1
       diffuse red, 0.5
     }
   }

   // Define a matte blue surface
   define matte_blue
   texture {
     surface {
       ambient blue, 0.2
       diffuse blue, 0.8
     }
   }

   // Define a plane that has red and blue checkers
   object {
     disc <0, 0.01, 0>, <0, 1, 0>, 5000
     texture {
       checker matte_red, matte_blue
     }
   }

Hexagon

The hexagon texture is oriented in the x-z plane and has the form:

   texture {
     hexagon texture1, texture2, texture3
   }

This texture produces a honeycomb tiling of the three textures in the declaration. Remember that this tiling is with respect to the x-z plane; if you want it on a vertical wall, you will need to rotate the texture.
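
For example, a honeycomb floor could be sketched as follows (matte_red and matte_blue are the textures defined in the checker example above; matte_white is assumed to be defined similarly):

   object {
     disc <0, 0.01, 0>, <0, 1, 0>, 100
     texture {
       hexagon matte_red, matte_blue, matte_white
     }
   }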

Noise surfaces

The complexity and speed of rendering of the noise surface type lies between the standard shading model and the special surfaces described below. It is an attempt to capture a large number of the standard 3D texturing operations in a single declaration.

A noise surface declaration has the form:

   texture {
     noise surface {
       [ noise surface definition ]
     }
   }

The allowed surface characteristics that can be defined are:

Surface characteristics

   Parameter                 Description
   -----------------------   ------------------------------------------------
   color <r, g, b>           Basic surface color (used if the noise function
                             generates a value not contained in the color map)
   ambient scale             Amount of ambient contribution
   diffuse scale             Diffuse contribution
   specular color, scale     Amount and color of specular highlights; if the
                             color is not given, the body color will be used
   reflection scale          Reflectivity of the surface
   transmission scale, ior   Amount of refracted light
   microfacet kind angle     Specular lighting model (see the description of
                             a standard surface)
   color_map(map_entries)    Define the color map (see the following section
                             on color map definitions for further details)
   bump_scale float          How much the bumps affect the normal
   frequency float           Affects the wavelength of ripples and waves
   phase float               Affects the phase of ripples and waves
   lookup_fn index           Selects a predefined lookup function
   normal_fn index           Selects a predefined normal modifier
   octaves float             Number of octaves of noise to use
   position_fn index         How the intersection point is used in the
                             generation of a noise texture
   position_scale float      Amount of contribution of the position value to
                             the overall texture
   turbulence float          Amount of contribution of the noise to the
                             overall texture

The way the final color of the texture is decided is by calculating a floating point value using the following general formula:

   index = lookup_fn(position_scale * position_fn +
                     turbulence * noise3d(P, octaves))

The index value that is calculated is then used to look up a color from the color map. This final color is used for the ambient, diffuse, reflection and transmission filters. The functions that are currently available, with their corresponding indices, are:

Positional and lookup functions

   Positional functions:

      Index     Effect
      -------   -----------------------------------------------------
      1         x value in the object coordinate system
      2         x value in the world coordinate system
      3         Distance from the z axis
      4         Distance from the origin
      5         Angle from the x-axis (in the x-z plane, from 0 to 1)
      default   0.0

   Lookup functions:

      Index     Effect
      -------   -----------------------------------------------------
      1         sawtooth function, result from 0 -> 1
      2         sin function, result from 0 -> 1
      3         ramp function, result from 0 -> 1
      default   no modification made

Definitions of these function numbers that make sense are:

  define position_plain       0
  define position_objectx     1
  define position_worldx      2
  define position_cylindrical 3
  define position_spherical   4
  define position_radial      5

  define lookup_plain    0
  define lookup_sawtooth 1
  define lookup_sin      2
  define lookup_ramp     3

An example of a texture defined this way is a simple white marble:

   define white_marble_texture
   texture {
     noise surface {
       color white
       position_fn position_objectx
       lookup_fn lookup_sawtooth
       octaves 3
       turbulence 3
       ambient 0.2
       diffuse 0.8
       specular 0.3
       microfacet Reitz 5
       color_map(
         [0.0, 0.8, <1, 1, 1>, <0.6, 0.6, 0.6>]
         [0.8, 1.0, <0.6, 0.6, 0.6>, <0.1, 0.1, 0.1>])
     }
   }

In addition to colouration, the bumpiness of the surface can be affected by selecting a function to modify the normal. The currently supported normal modifier functions are:

   Index     Effect
   -------   -------------------------------------
   1         Make random bumps in the surface
   2         Add ripples to the surface
   3         Give the surface a dented appearance
   default   no change

Definitions that make sense are:

  define default_normal 0
  define bump_normal    1
  define ripple_normal  2
  define dented_normal  3

See also the file texture.txt for a little more explanation.

Functional Textures

The most general and flexible texture type is the functional texture. These textures are evaluated at run-time based on the expressions given for the components of the lighting model. The general syntax for a surface using a functional texture is:

   special surface {
     [ surface declaration ]
   }

In addition to the components usually defined in a surface declaration, it is possible to define a function that deflects the normal, and a function that modifies the intersection point prior to texturing. The format of the two declarations are:

   position vector
   normal vector

An example of how a functional texture can be defined is:

   define sin_color_offset (sin(3.14 * fmod(x*y*z,1) + otheta)+1)/2
   define sin_color <sin_color_offset, 0, 1 - sin_color_offset>

   define xyz_sin_texture
   texture {
     special surface {
       color sin_color
       ambient 0.2
       diffuse 0.7
       specular white, 0.2
       microfacet Reitz 10
     }
   }

In this example, the color of the surface is defined based on the location of the intersection point using the vector defined as sin_color. Note that sin_color uses yet another definition.

The position declaration is useful to achieve the effect of turbulence. By adding a solid noise to the position, it is possible to add swirl to a basic texture. For example:

   define white_marble
   texture {
     special shiny {
       color white_marble_map[sawtooth(x)]
       position P + dnoise(P, 3)
     }
   }

This will create a basic white marble (which would have very straight, even bands of color), and by adding dnoise, swirls the colors around.

The normal declaration is used to add some bumpiness to the surface. The bumps can be created with a statement as simple as:

   normal N + (dnoise(P) - <0.5, 0.5, 0.5>)

The value <0.5,0.5,0.5> is subtracted from the dnoise function since dnoise only returns positive values in each component.

Note: if the color component has any alpha in it (as the result of a lookup from a color map or from an image) then that alpha value will be used as the scale for the transmit statement.

Color maps

Color maps are generally used in noise textures and functional textures. They are a way of representing a spectrum of colors that blend from one into another. Each color is represented as RGB, with an optional alpha (transparency) value. The format of the declaration is:

   color_map([low0, high0, <r0, g0, b0>, a0, <r1, g1, b1>, a1]
             [low1, high1, <r2, g2, b2>, a2, <r3, g3, b3>, a3]
             ...
             [lowx, highx, <rx, gx, bx>, ax, <ry, gy, by>, ay])

Note that there are no commas between entries in the color map even though commas are required within each entry. (This is a holdover to retain compatibility with earlier versions of Polyray.) If you don't need any transparency in the color map, then the following declaration could be used:

   color_map([low0, high0, <r0, g0, b0>, <r1, g1, b1>]
             [low1, high1, <r2, g2, b2>, <r3, g3, b3>]
             ...
             [lowx, highx, <rx, gx, bx>, <ry, gy, by>])

In this case, the alpha is set to 0 for all colors created by the color map. Note that it is possible to mix entries with and without alpha values.

Note: If there is an alpha value in the color map, then the amount of alpha is used as the scale for the transmit component of the surface. To turn off this automatic transparency, use transmit 0.
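
For example, a map that fades from solid red to mostly transparent blue might be written as (the colors and alpha values are illustrative; red and blue are named colors from colors.inc):

   define red_blue_fade
   color_map([0.0, 0.5, red, 0.0, blue, 0.3]
             [0.5, 1.0, blue, 0.3, blue, 0.8])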

Using CMAPPER

A good way to build color maps for layered textures is with ColorMapper (CMAPPER), written by SoftTronics (Lutz + Kretzschmar). It is available as CMAP.ZIP in the Graphdev forum on CompuServe. This program allows you to build color maps with varying colors and transparency values. The output of this program does have to be massaged a little bit to make it into a color map as Polyray understands it. To help with this process, an IBM executable, makemap.exe, has been included. To use this little program, follow these steps:

  1. Run CMAPPER to create a color map in the standard output format (not the POV-Ray output format).
  2. Run makemap on that file, giving a name for the new Polyray color map definition.
  3. Add this definition to your Polyray data file.

If you saved your map as foo.map, and you wanted to add this color map to the Polyray data file foo.inc, with the name of foox_map, you would then run makemap the following way:

   makemap foo.map foox_map >> foo.inc

This makes the translation from CMAPPER format to Polyray format, and appends the output as, define foox_map color_map(...), to the file foo.inc.

Image maps

Projecting an image onto a surface is one of the most common texturing techniques. There are four types of projection supported: planar, cylindrical, spherical, and environment. Input files for use as image maps may be any valid Targa, GIF, or JPEG image.

The declaration of an image map is:

   image("imfile.tga")

Typically, an image will be associated with a variable through a definition such as:

   define myimage image("imfile.tga")

The image is projected onto a shape by means of a projection. The four types of projection are declared by:

   planar_imagemap(image, coordinate [, repeat]),
   cylindrical_imagemap(image, coordinate [, repeat]),
   spherical_imagemap(image, coordinate)
   environment_map(coordinate,
                   environment("img1.tga", "img2.tga", "img3.tga",
                               "img4.tga", "img5.tga", "img6.tga"))

The planar projection maps the entire raster image into the coordinates 0 <= x <= 1, 0 <= z <= 1. The vector value given as coordinate is used to select a color by multiplying the x value by the number of columns, and the z value by the number of rows. The color appearing at that position in the raster will then be returned as the result. If a repeat value is given, then the image is tiled across the entire surface, repeating between every integer value of x and/or z.

The planar image map is also used for wrapping images based on u/v coordinates. By using the vector <u, 0, v>, the planar image map will automatically wrap the image around surfaces that have natural u/v coordinates (such as sphere, cylinder, torus, etc.).

The cylindrical projection wraps the image about a cylinder that has one end at the origin and the other at <0, 1, 0>. If a repeat value is given, then the image will be repeated along the y-axis, if none is given, then any part of the object that is not covered by the image will be given the color of pixel (0, 0).

The spherical projection wraps the image about an origin centered sphere. The top and bottom seam are folded into the north and south poles respectively. The left and right edges are brought together on the positive x axis.

The environment map wraps six images around a point. This method is a standard way to fake reflections by wrapping the images that would be seen from a point inside an object around the object.

Following are a couple of examples of objects and textures that make use of image maps:

   define hivolt_image image("hivolt.tga")
   define hivolt_tex
   texture {
     special surface {
       color cylindrical_imagemap(hivolt_image, P, 1)
       ambient 0.9
       diffuse 0.1
     }
     scale <1, 2, 1>
     translate <0, -1, 0>
   }
   object { cylinder <0, -1, 0>, <0, 1, 0>, 3 hivolt_tex }
and
   define disc_image image("test.tga")
   define disc_tex
   texture {
     special surface {
       color planar_imagemap(disc_image, P)
       ambient 0.9
       diffuse 0.1
     }
     translate <-0.5, 0, -0.5>
     scale <7*4/3, 1, 7>
     rotate <90, 0, 0>
   }
   object {
     disc <0, 0, 0>, <0, 0, 1>, 6
     u_steps 10
     disc_tex
   }

Note: If there is an alpha/opacity value in the image map, then the amount of opacity is used as the scale for the transmit component of the surface. To turn off this automatic transparency, use transmit 0.

Bumpmaps

Bumpmaps are declared using the same sort of projections as image maps (excepting environment maps). The following are the valid declarations of bump maps:

   planar_bumpmap(image, coordinate [, bump size]),
   cylindrical_bumpmap(image, coordinate [, bump size]),
   spherical_bumpmap(image, coordinate [, bump size])

Instead of an optional repeat argument, bumpmaps have an optional bump size argument. If this argument is left out, then the bump size is set to one. Note that negative bump values are allowed and cause the bumps to appear to project the other way.

Any Targa image can be used, but greyscale or color mapped images give the best results. The following declarations show how a bump map can be applied to objects:

   include "colors.inc"

   define tile_bumps image("tile1.tga")

   define bumpmap_red1
   texture {
     special shiny {
       color red
       normal planar_bumpmap(tile_bumps, <8*u, 0, 6*v>, 1)
     }
   }
   object {
       object { torus 2, 0.75, <0, -1.25, 0>, <0, 1, 0> }
     + object { cone <0, -2, 0>, 1, <0, 3, 0>, 0 }
     + object { sphere <2, 0, 4>, 2 }
     bumpmap_red1
   }

The bumpmap is declared using u/v coordinates so that it will follow the natural coordinates of the object. This ensures that it wraps properly around the torus, cone, and sphere in the example above. There is an automatic wrapping of bump maps, so there will be 8 tiles in the u direction and 6 tiles in the v direction of each object.

Indexed Textures and Texture Maps

A texture map is declared in a manner similar to color maps. There is a list of value pairs and texture pairs, for example:

   define index_tex_map
      texture_map([-2, 0, red_blue_check, bumpy_green],
                  [0, 2, bumpy_green, reflective_blue])

Note that for texture maps there is a required comma separating each of the entries.

These texture maps are complementary to the indexed texture (see below). Two typical uses of indexed textures are to use solid texturing functions to select (and optionally blend) between complete textures rather than just colors, and to use image maps as a way to map textures to a surface.

For example, using the texture map above on a sphere can be accomplished with the following:

  object {
    sphere <0, 0, 0>, 2
    texture { indexed x, index_tex_map }
  }

The indexed texture uses a lookup function (in the example above, a simple gradient along the x axis) to select from the texture map that follows. See the data file indexed1.pi for the complete example.

As an example of using an image map to place textures on a surface, the following example uses several textures, selected by the color values in an image map. The function indexed_map returns the color index value from a color mapped image (or uses the red channel in a raw image). The example below is equivalent to creating a material map in the POV-Ray raytracer.

  object {
    sphere <0, 0, 0>, 1
    texture {
      indexed indexed_map(image("txmap.tga"), <x, 0, y>, 1),
              texture_map([1, 1, mirror, mirror],
                          [2, 2, bright_pink, bright_pink],
                          [3, 3, Jade, Jade])
      translate <-0.5, -0.5, 0> // center image
    }
  }

In this example, the image is oriented in the x-y plane and centered on the origin. The only difference between an indexed_map and a planar_imagemap is that the first (indexed_map) returns the index of the color in the image and the second returns the color itself. Note that the texture map shown above has holes in it (between the integer values); however, this isn't a problem, as the indexed_map function will only produce integers.

Layered Textures

Layered textures allow you to stack multiple textures on top of each other. If a part of the texture is not completely opaque (non-zero alpha), then the layers below will show through. For example, the following texture creates a texture with a marble outer layer and a mirrored bottom layer and applies it to a sphere:

   include "colors.inc"

   define marble_alpha_map
      color_map([0.0, 0.2, white, 0,   white, 0]
                [0.2, 0.5, white, 0,   black, 0.2]
                [0.6, 1.0, black, 0.2, black, 1])

   define mirror_veined_marble
   texture {
     layered
       texture {
          special shiny { color marble_alpha_map[marble_fn] }
       },
     mirror
   }

   object {
     sphere <0, 0, 0>, 2
     mirror_veined_marble
   }

Summed Textures

Summed textures simply add weighted amounts of a number of textures together to make the final color. The syntax is:

   texture {
     summed f1, tex1, f2, tex2, ...
   }

The expressions f1, f2, ... are numeric expressions. The expressions tex1, ... are textures.
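
For example, a sketch that blends the marble texture defined earlier with the mirror texture from colors.inc (the weights here are arbitrary choices):

   define marbled_mirror
   texture {
     summed 0.7, white_marble_texture, 0.3, mirror
   }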

Comments

Comments follow the standard C/C++ formats. Multiple line comments are enclosed by /* ... */ and single line comments are preceded by //.

Single line comments are allowed and have the following format:

   // [ any text to end of the line ]

As soon as the two characters // are detected, the rest of the line is considered a comment.

Animation support

An animation is generated by rendering a series of frames, numbered from 0 to some total value. The declarations in Polyray that support the generation of multiple Targa images are:

Declarations for animation support

   Declaration        Description
   ----------------   ----------------------------------------------
   total_frames val   The total number of frames in the animation
   start_frame val    The value of the first frame to be rendered
   end_frame val      The last frame to be rendered
   frame_time val     Duration of each frame (for particles)
   outfile "name"     Polyray appends the frame number to 'name' in
   (or outfile name)  order to generate distinct Targa files

The values of total_frames, start_frame, and end_frame, as well as the value of the current frame, frame, are usable in arithmetic expressions in the input file. Note that these declarations should appear before any use of total_frames, start_frame, end_frame, or frame in an expression. Typically I put the declarations right at the top of the file.
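
As a sketch, a 30-frame turntable animation of a torus might be set up as follows (the object and texture are illustrative):

   total_frames 30
   start_frame 0
   end_frame 29
   outfile "spin"

   object {
     torus 2, 0.5, <0, 0, 0>, <0, 1, 0>
     rotate <0, 360 * frame / total_frames, 0>
     shiny_red
   }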

WARNING: if the string given for outfile is longer than 5 characters, the three digit frame number that is appended will be truncated by DOS. Make sure this string is short enough or you will end up overwriting image files.

Conditional processing

In support of animation generation (and also because I sometimes like to toggle attributes of objects), polyray supports limited conditional processing. The syntax for this is:

   if (cexper) {
     [object/light/... declarations]
   } else {
     [other object/light/... declarations]
   }

The use of conditional statements is limited to the top level of the data file. You cannot put a conditional within an object or texture declaration. For example:

   object {
     if (foo == 42) {
       sphere <0, 0, 0>, 4
     } else {
       disc <0, 0, 0>, <0, 1, 0>, 4
     }
   }

is not a valid use of an if statement, whereas:

   if (foo == 42) {
     object {
       sphere <0, 0, 0>, 4
     }
   } else {
     object {
       disc <0, 0, 0>, <0, 1, 0>, 4
     }
   }
or
   if (foo == 42)
     object { sphere <0, 0, 0>, 4 }
   else if (foo == 12) {
     object { torus 3.0, 1.0, <0, 0, 0>, <0, 1, 0> }
     object { cylinder <0, -1, 0>, <0, 1, 0>, 0.5 }
   } else
      object { disc <0, 0, 0>, <0, 1, 0>, 4 }
are valid.

Note: the curly brackets { } are required if there are multiple statements within the conditional, and not required if there is a single statement.

Include files

In order to allow breaking an input file into several files (as well as supporting the use of library files), it is possible to direct polyray to process another file. The syntax is:

   include "filename"

Beware that trying to use #include ... will fail.

File flush

Due to unforeseen occurrences, like power outages or roommates that hit the reset button so they can do word processing, it is possible to specify how often the output file will be flushed to disk. The default is to wait until the entire file has been written before a flush (which is a little quicker, but you lose everything if a crash occurs).

The format of the declaration is:

   file_flush xxx

The number xxx indicates the maximum number of pixels that will be written before a file flush will occur. This value can be as low as 1 - this will force a flush after every pixel (only for the very paranoid).