# Environment Mapping

## Environment Mapping Algorithms

Developed by Yoshihiro Mizutani and Kurt Reindel

### Environment Mapping Overview

Environment mapping is a technique that simulates the results of ray-tracing. Because environment mapping is performed using texture mapping hardware, it can obtain global reflection and lighting results in real time.

Environment mapping is essentially the process of pre-computing a texture map and then sampling texels from this texture during the rendering of a model. The texture map is a projection of 3D space to 2D space. There is an infinite number of ways to project a 3D surface to a 2D surface, but we shall limit our discussion to the following three methods.

• OpenGL Spherical Mapping
• Blinn/Newell Latitude Mapping
• Cube Mapping

The intended use of environment mapping is to simulate reflections or lighting upon objects without going through expensive ray-tracing or lighting calculations. We accomplish this objective by generating scenes using a two-pass approach.

1. The "environment" or synthetic world in which our reflective model is to be placed must first be rendered (or captured and digitized) as viewed from the desired position of our reflective model. This is done by rendering six images in the directions of Sky, Floor, North, South, East, and West. Of course the images don't really have to line up with the real compass directions, but they do have to be mutually orthogonal (or mirrored), and they must align with one another (or at least come close). The "realism" of an image becomes significantly compromised as environment mapped models are displaced from the location at which the environment maps were generated. Everything in the environment (except the reflective model) is rendered.

2. The model is then placed in the environment and the viewing position set at the desired location. For every vertex in the model, the vector from the viewing position to the vertex is "reflected" about the normal at the vertex, giving a direction that is used as input to the 3D to 2D projection. Because texture coordinates are computed only at the vertices, the samples taken from the environment maps are effectively interpolated linearly across each polygon.
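As a concrete sketch of step 1, the six camera orientations can be written down as forward/up vector pairs. The particular axis convention below is an assumption for illustration; any mutually orthogonal set works.

```python
# The six mutually orthogonal capture directions from step 1, written as
# (name, forward, up) triples. The axis convention here (y up, z toward
# the viewer) is an assumption; only mutual orthogonality matters.
CAPTURE_DIRECTIONS = [
    ("Sky",   ( 0.0,  1.0,  0.0), ( 0.0,  0.0, -1.0)),
    ("Floor", ( 0.0, -1.0,  0.0), ( 0.0,  0.0,  1.0)),
    ("North", ( 0.0,  0.0, -1.0), ( 0.0,  1.0,  0.0)),
    ("South", ( 0.0,  0.0,  1.0), ( 0.0,  1.0,  0.0)),
    ("East",  ( 1.0,  0.0,  0.0), ( 0.0,  1.0,  0.0)),
    ("West",  (-1.0,  0.0,  0.0), ( 0.0,  1.0,  0.0)),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Each camera's up vector must be perpendicular to its forward vector.
for _, forward, up in CAPTURE_DIRECTIONS:
    assert dot(forward, up) == 0.0
```

Opposite faces (Sky/Floor, North/South, East/West) look in exactly opposite directions, which is what lets the six images tile the full sphere of directions.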

After the six images are loaded into texture memory, they can either be sampled to generate an environment map or sampled directly to texture map a model. The process of creating an environment map can be imagined as projecting the six sides of a cube onto a sphere, and then flattening the sphere into a 2D map. See Figure 1.

[Fig. 1: Mapping a cube onto a sphere]

When applying an environment map to a model, texture coordinates are needed at each vertex. The UV coordinates must be calculated using the same 3D to 2D mapping used to generate the environment map. The geometric position of each model vertex and the normal direction at that vertex are used to compute a reflection vector. The view vector usually runs from the origin of the eye coordinate system to a vertex of the model. If V is the view vector and N the normal at the vertex in the eye coordinate system, then the reflection vector R at the vertex is:

```
R = V − 2 (N · V) N        (Eq. 1)
```

[Fig. 2]
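The reflection computation can be sketched in a few lines of Python. The function name `reflect` is ours; V and N are assumed to be given in eye coordinates, with N unit length.

```python
def reflect(view, normal):
    """Reflect view vector V about unit normal N: R = V - 2(N.V)N."""
    ndotv = sum(n * v for n, v in zip(normal, view))
    return tuple(v - 2.0 * ndotv * n for v, n in zip(view, normal))
```

For example, `reflect((0, 0, -1), (0, 0, 1))` returns `(0.0, 0.0, 1.0)`: a view ray hitting a surface head-on reflects straight back toward the eye.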

All environment mapping techniques have their strengths and weaknesses. Due to differences in the way the maps are generated, the quality of the images generated using the maps varies significantly. The following is a brief comparison:

### OpenGL Spherical Mapping

The spherical surface is mapped to 2D by the following formula (Eq. 2, the OpenGL Spherical 3D to 2D projection):

```
u = Rx / m + 1/2
v = Ry / m + 1/2
m = 2 · sqrt(Rx² + Ry² + (Rz + 1)²)
```

where R is the reflection vector in eye coordinates. This formula means that the entire surface of the sphere is mapped within the circle inscribed in a 1.0×1.0 square. Figure 3 shows the cross section of a sphere onto which the environment scene is mapped. In this example, the eye point is placed at the right-hand side of the figure. The white points on the perimeter of the circle are mapped to the green points in the 2D environment map. Each point on the sphere's cross section is connected to a point on the plane by a white line. Figure 4 shows an orthogonal view of the texture plane referred to in Figure 3. The inside circle represents the front half of the sphere, which is the right half of Figure 3. The outside circle represents the back half of the sphere, which is the left half of Figure 3.

[Fig. 3] [Fig. 4]
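The sphere-map projection above can be sketched directly in Python (a minimal version; the reflection vector is assumed normalized):

```python
import math

def sphere_map_uv(r):
    """OpenGL sphere-map projection of a reflection vector r = (Rx, Ry, Rz)."""
    rx, ry, rz = r
    # m is twice the distance from r to the singularity direction (0, 0, -1).
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return (rx / m + 0.5, ry / m + 0.5)
```

A reflection pointing straight back toward the eye, R = (0, 0, 1), lands at the center of the map, (0.5, 0.5). As R approaches the singularity (0, 0, −1), m goes to zero and the UVs spread out to the perimeter of the inscribed circle, which is exactly where the distortion discussed below comes from.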
Pros of GL Spherical Mapping

• Sampling from a single mipmap requires only one pass over each primitive with no need to sub-divide facets (except at the one singularity).
• The spherical map contains only one point of singularity (0,0,-1).
• Low resolution models mapped with a sphere map look good due to the nonlinear mapping used to generate the map.

Cons of GL Spherical Mapping

• The OpenGL map allows the viewing position only a single degree of freedom. This is due to the 3D to 2D mapping, which resembles popping a beach ball (no tearing allowed) and stretching it flat. The singularity becomes mapped to the perimeter of the sphere map.
• The image represented by the pixels near the singularity becomes extremely distorted, because those pixels are mapped to the perimeter of the sphere map.
• A large portion of the mipmap is unused, but still requires texture memory.

[Figure: An example reflective GL spherical map; GL specular and reflective textured teapot]
[Figure: An example diffusely lit teapot; diffusely lit teapot textured with specular and reflective GL maps]

### Blinn/Newell Latitude Mapping

Here the sphere is mapped to a single latitude-longitude texture map. The map's U coordinate represents longitude (from 0 to 360 degrees) and V coordinate represents latitude (from -90 to 90 degrees). The surface of the sphere is mapped from 3D to 2D with the following formula:

```
u = (atan2(Rx, Rz) + π) / 2π        (longitude)
v = (asin(Ry) + π/2) / π            (latitude)         (Eq. 3)
```

[Fig. 5]
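A minimal sketch of the latitude-longitude projection. The reflection vector is assumed unit length, and y is taken as up in eye coordinates; conventions for which axis carries longitude zero vary, so treat the exact orientation as an assumption.

```python
import math

def latitude_map_uv(r):
    """Blinn/Newell latitude-longitude projection of unit reflection vector r.

    Assumes y is up in eye coordinates; u covers longitude 0..360 degrees
    and v covers latitude -90..+90 degrees.
    """
    rx, ry, rz = r
    u = (math.atan2(rx, rz) + math.pi) / (2.0 * math.pi)  # longitude
    v = (math.asin(ry) + math.pi / 2.0) / math.pi         # latitude
    return (u, v)
```

Note the wrap: longitudes just below 0 and just below 360 degrees are adjacent on the sphere but land at opposite edges of the map, which is the seam listed in the cons below.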

Pros of Latitude Mapping

• Less distortion on the Floor face (except for the seam)

Cons of Latitude Mapping

• The North-to-South seam requires a check on UV coordinates to prevent wrapping off the sides of the South face
• The North, South, East, and West textures are compressed in the horizontal direction, which causes a significant loss in quality
• Low resolution models easily show linear interpolation distortions

[Figure: An example reflective Latitude map; teapot textured with reflective and specular Latitude maps]
[Figure: An example diffusely lit teapot; Latitude specular map; diffusely lit teapot textured with reflective and specular Latitude maps]

### Cube Mapping

Cube Mapping is the technique of rendering a model from samples taken directly from the six source textures (no intermediate environment map is created or sampled). An imaginary cube envelops the model, each face textured with one of the source images. Each face of the imaginary cube is represented by an infinite plane, with the plane normal passing through the face at UV coordinates (0.5, 0.5). UV coordinates are computed from the dot products of the reflection vector with the vectors normal to each face, scaled by a constant. Reflection vectors collinear with a face normal map to UV coordinates (0.5, 0.5).
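The face selection and UV computation can be sketched as follows. The per-face sign conventions are assumptions for illustration, chosen so that a reflection vector collinear with a face normal lands at the face center (0.5, 0.5), as the text requires.

```python
def cube_map_face_uv(r):
    """Select the cube face hit by reflection vector r and project to UV.

    The per-face sign conventions here are illustrative assumptions.
    """
    rx, ry, rz = r
    ax, ay, az = abs(rx), abs(ry), abs(rz)
    if ax >= ay and ax >= az:        # dominant x: East (+x) or West (-x)
        face = "East" if rx > 0 else "West"
        u, v = -rz / rx, ry / ax
    elif ay >= az:                   # dominant y: Sky (+y) or Floor (-y)
        face = "Sky" if ry > 0 else "Floor"
        u, v = rx / ay, -rz / ry
    else:                            # dominant z: South (+z) or North (-z)
        face = "South" if rz > 0 else "North"
        u, v = rx / rz, ry / az
    # Scale from [-1, 1] onto [0, 1]; a vector along the face normal
    # maps to the face center (0.5, 0.5).
    return face, (0.5 + 0.5 * u, 0.5 + 0.5 * v)
```

A triangle whose three reflection vectors select different faces straddles a cube edge, which is why the multi-pass clamping-and-blending scheme described next is needed.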

This can be implemented using OpenGL with the wrapping mode set to GL_CLAMP. UV coordinates that fall outside the (0,0) to (1,1) range cause the triangle to pick up the GL_TEXTURE_BORDER_COLOR (0,0,0,0). The triangles are textured using up to three face textures and blended into the frame buffer with the blend function source set to GL_SRC_ALPHA and destination set to GL_ONE.

[Fig. 6]

Pros of Cube Mapping

• Source images are sampled directly, so no distortion is introduced by resampling into an intermediate environment map.

Cons of Cube Mapping

• Requires the facets to be rendered up to three times to ensure all faces hit by the reflection rays are displayed.

[Figure: Six textures displayed as an unfolded cube; cube mapped teapot]