Projected Shadow Mapping with Cg and OpenGL

Projective Shadow Mapping

In this article, I will show how to implement projective shadow mapping in OpenGL using Cg shaders.
The basis of this post comes from the article titled [Transformation and Lighting in Cg]. I will assume the reader has a basic understanding of OpenGL and already knows how to set up an application that uses OpenGL. If you need a refresher on setting up an application using OpenGL, you can refer to my previous article titled [Introduction to OpenGL for Game Programmers].

I will take advantage of a few OpenGL extensions such as GL_ARB_framebuffer_object to create an offscreen framebuffer to render to, and GL_ARB_texture_border_clamp for clamping to the border color of the projective textures.

Introduction

As I am sure you are aware, when using hardware graphics APIs like OpenGL and DirectX to render images, you don’t automatically get shadows drawn in your scene. Shadows have to be simulated using hardware rasterization techniques. There are two primary methods for simulating shadows:

  1. Stencil Shadows: Stencil shadows make use of the stencil buffer to simulate shadows. The basic idea is that the scene is first rendered normally to fill the depth buffer. Then, in a second pass, shadow volumes are extruded from the shadow casters (often using an infinite perspective projection) and rendered into the stencil buffer. If you render a darker pixel wherever the shadow volumes intersect the previously rendered scene, you will get the appearance of shadows.
  2. Shadow Maps: Shadow mapping is the other popular technique for creating shadows and is the technique shown in this article. Shadow mapping requires that the shadow casters in the scene be rendered from the perspective of the light. The resulting depth buffer is then used as a projective texture when rendering the scene in the second pass, this time from the perspective of the camera.

In order to better understand how projective shadow mapping works, I will first show you how to implement projective textures. With this knowledge, the transition to projective shadow mapping is trivial.

Dependencies

The demo shown in this article uses several third-party libraries to simplify the development process.

  • The Cg Toolkit (Version 3): The Cg Toolkit provides the tools and API needed to integrate the Cg shader programs in your application.
  • Boost (1.46.1): Boost has some very useful libraries that I use throughout my demo applications. In this demo, I use the Signals, Filesystem, Function, and Bind Boost libraries to provide generic, platform-independent functionality that simplifies some of the features used in this demo.
  • OpenGL Extension Wrangler (GLEW): The OpenGL extension wrangler is the API I chose to check for the presence of the required extensions and to use the extensions in the application program.
  • Simple DirectMedia Layer (1.2.14): Simple DirectMedia Layer (SDL) is a cross-platform multimedia library that I use to create the main application window, initialize OpenGL, and handle keyboard, mouse and joystick input.
  • OpenGL Mathematics (GLM): An OpenGL-centric mathematics library for 3D graphics applications.
  • Simple OpenGL Image Library (SOIL): SOIL is a tiny C library used primarily for uploading textures into OpenGL.

All of the dependencies described here are included in the source code example included at the end of this article.

Projective Textures

You can think of projective textures in the same way as a slide projector works in real-life. You place a photo slide in the projector and anywhere the light from the projector hits a surface, the image of the photo will be visible.

A graphics rasterizer works in a similar way. We just need to figure out how to map the projected texture onto the geometry.

The concept is simple: imagine that the rendered geometry in a scene generates a texture (the color of the back buffer). We can map the resulting vertex positions of the rendered geometry onto a standard texture by shifting the clip-space positions into texture space (from the range [-1…1] to [0…1]).

As you may already know, we can map object-space vertices into clip space by transforming the object-space vertex position by the combined model, view, and projection matrices. This will result in the transformed X, Y, and Z values being in the range [-1…1]. In order to map the clip-space positions to a texture, we need to scale and translate the clip-space values into texture space by applying an additional matrix called the texture bias matrix. The texture bias matrix has the form:
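
            | 0.5  0    0    0.5 |
    Tbias = | 0    0.5  0    0.5 |
            | 0    0    0.5  0.5 |
            | 0    0    0    1   |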

This matrix will essentially scale the clip-space positions by ½ and translate them by ½ to transform them into texture space.

If we set the view and projection matrices to match the view and perspective projection of the projector, then we can combine the object’s model matrix with the projector’s view and projection matrices, as well as the texture bias matrix, to create a single matrix that transforms object-space positions directly into texture space. We will call this matrix the texture matrix.
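
As a minimal sketch (using GLM, with illustrative variable names that are not taken from the demo source):

    // Compose the texture matrix. The demo builds the bias * projection * view
    // part once per frame and multiplies in the object's world matrix per object.
    glm::mat4 textureMatrix = textureBiasMatrix
                            * projectorProjectionMatrix
                            * projectorViewMatrix
                            * objectWorldMatrix;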

The resulting texture space position can then be used to sample a projective texture in the fragment program using the tex2Dproj method. The sampled color from the texture projection can then be blended with the base color of the object to produce the projector effect.

The Projective Texture Demo

The projective texture demo is used to demonstrate the projective texture effect. We will load an ordinary 2D texture to use as the projected texture. For this demo, I am using the demon texture that is provided with the sample programs that come with the installation of the Cg toolkit. Of course you are welcome to use the pictures from your last family vacation because who doesn’t love to see a slide show from their last family vacation?

The demo for the projective texture effect is based on the previous article titled [Transformation and Lighting in Cg]. In that demo, a simple scene is rendered with light rotating around a cone and a torus primitive which was used to demonstrate the basic Blinn-Phong lighting model. In this scene, I replaced the cone primitive with a sphere, but basically it is the same scene.

Global Variables and Forward Declarations

We always start a new demo by defining any global variables that are used in the application and forward-declaring any functions that will be defined later in the source.

As mentioned previously, I am using the demon texture that comes with the Cg Toolkit samples. This texture is defined as a byte array in a header file, so no texture needs to be loaded from disk.

static const GLubyte
g_DemonTexture[3*(128*128)] = {
    /* RGB8 image data for a mipmapped 128x128 demon texture */
#include "demon_image.h"
};
GLuint g_DemonTextureID = 0;

This is the texture that will be projected in the scene.

This is only the definition of the texture data. Later we will load the texture into the GPU by creating an OpenGL texture object and filling that texture object with this texture data. The g_DemonTextureID variable will be used to identify the demon texture in the OpenGL context.

We also need to define a few variables to store the application and camera classes.

Application g_App( "Shadowing Demo", 512, 512);
PivotCamera g_Camera;

glm::vec3   g_InitialCameraRotation( 0, 0, 0 );
glm::vec3   g_InitialCameraPiviot( 0, 0, 0 );
glm::vec3   g_InitialCameraPosition( 0, 0, -10 );

The variable of type Application is used to initialize the render window, receive messages from the operating system, and call the appropriate methods in the demo.

The PivotCamera is used to manipulate the camera’s view. It is used to pan and rotate the view so we can view the scene from different angles.

We also define a few initial properties for the camera. In this case, the camera is placed 10 units behind the scene looking towards the origin of the world.

Next, we’ll define a few materials and some light properties that are used for calculating the lighting contributions on the scene objects.

// Some default materials
Material g_BrassMaterial;
Material g_RedPlasticMaterial;
Material g_GreenEmeraldMaterial;

glm::vec4 g_GlobalAmbient = glm::vec4( 0.1f, 0.1f, 0.1f, 1.0f );
glm::vec3 g_LightPosition;
glm::vec4 g_LightColor = glm::vec4( 0.95f, 0.95f, 0.95f, 1.0f );

To sample the projective texture, we need to define the projector’s view and projection matrices.

// The distance from the light's position to the near clipping plane.
float g_fLightNearPlane = 0.1f;
// The distance from the light's position to the far clipping plane.
float g_fLightFarPlane = 50.0f;
// The light's field of view in degrees
float g_fLightFieldOfView = 90.0f;

glm::mat4 g_LightViewMatrix;
glm::mat4 g_LightProjectionMatrix;

In order to build the projection matrix for the projector (in this case, we will use the light as a projector), we need to know the near and far clipping planes as well as the field of view of the projection. We also need to know the aspect ratio of the viewport, but we don’t define a variable for it because it will be computed from the dimensions of the viewport. I will discuss the aspect ratio of the projection matrix in a little more detail in the section that describes the shadow projection technique.

In addition to the view and projection matrices, we need to define the texture bias matrix that is used to transform the clip-space vertices into texture space, as previously mentioned.

glm::mat4 g_TextureBiasMatrix;
glm::mat4 g_TextureMatrix;

The g_TextureBiasMatrix variable will store the bias matrix described earlier and the g_TextureMatrix variable will store the combined view, projection, and bias matrices.

The OnInitialize Method

The OnInitialize method will be invoked right after the application starts. This method is used to load the shaders and set up any materials and matrices that aren’t modified anywhere else in the application.

void OnInitialize( EventArgs& e )
{
    InitGlew();

    glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
    glClearDepth( 1.0f );

    // Create an effect manager
    EffectManager::Create().Initialize();
    EffectManager& effectMgr = EffectManager::Get();

    LoadResources();

    effectMgr.EffectLoaded += OnEffectLoaded;
    effectMgr.RuntimeError += OnRuntimeError;

    // Load the effects
    effectMgr.CreateEffectFromFile( "Shaders/C9E5_projectiveTexture.cgfx", "C9E5_projectiveTexture" );
    effectMgr.CreateEffectFromFile( "Shaders/C9E5_shadowMapping.cgfx", "C9E5_shadowMapping" );

    // Set shared parameters that will probably never change.
    effectMgr.SetGlobalAmbient( g_GlobalAmbient );

    // Setup brass material
    g_BrassMaterial.Emissive = glm::vec4( 0.0f, 0.0f, 0.0f, 1.0f );
    g_BrassMaterial.Ambient = glm::vec4( 0.33f, 0.22f, 0.03f, 1.0f );
    g_BrassMaterial.Diffuse = glm::vec4( 0.78f, 0.57f, 0.11f, 1.0f );
    g_BrassMaterial.Specular = glm::vec4( 0.99f, 0.91f, 0.81f, 1.0f );
    g_BrassMaterial.SpecularPower = 27.8f;

    // Setup red plastic material
    g_RedPlasticMaterial.Emissive = glm::vec4( 0.0f, 0.0f, 0.0f, 1.0f );
    g_RedPlasticMaterial.Ambient = glm::vec4( 0.0f, 0.0f, 0.0f, 1.0f );
    g_RedPlasticMaterial.Diffuse = glm::vec4( 0.5f, 0.0f, 0.0f, 1.0f );
    g_RedPlasticMaterial.Specular = glm::vec4( 0.7f, 0.6f, 0.6f, 1.0f );
    g_RedPlasticMaterial.SpecularPower = 32.0f;

    g_GreenEmeraldMaterial.Emissive = glm::vec4( 0.0f, 0.0f, 0.0f, 1.0f );
    g_GreenEmeraldMaterial.Ambient = glm::vec4( 0.0215f, 0.1745f, 0.0215f, 1.0f );
    g_GreenEmeraldMaterial.Diffuse = glm::vec4( 0.07568f, 0.61424f, 0.07568f, 1.0f );
    g_GreenEmeraldMaterial.Specular = glm::vec4( 0.633f, 0.727811f, 0.633f, 1.0f );
    g_GreenEmeraldMaterial.SpecularPower = 76.8f;

    // Setup the light's projection matrix
    // NOTE: The far-near planes should only be as large as necessary to render the 
    // shadow casters in the scene.  Keeping the difference small will produce more accurate
    // floating point values in the depth buffer.
    g_LightProjectionMatrix = glm::perspective( g_fLightFieldOfView, (float)g_ShadowBufferWidth / (float)g_ShadowBufferHeight, g_fLightNearPlane, g_fLightFarPlane );
    // Setup the texture bias matrix to convert from clip space => texture space
    g_TextureBiasMatrix = glm::scaleBias( 0.5f, 0.5f );
}

The InitGlew method will initialize the OpenGL Extension Wrangler and check that the extensions used in this demo are supported. I check for the existence of the GL_ARB_texture_border_clamp and the GL_ARB_framebuffer_object extensions.
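
The InitGlew helper itself is not listed in this article. A minimal sketch of what it might look like, using the boolean extension flags GLEW exposes after glewInit (the error handling here is illustrative):

#include <GL/glew.h>

#include <cstdlib>
#include <iostream>

void InitGlew()
{
    glewInit();
    // GLEW exposes each extension as a boolean flag after glewInit().
    if ( !GLEW_ARB_framebuffer_object || !GLEW_ARB_texture_border_clamp )
    {
        std::cerr << "A required OpenGL extension is not supported." << std::endl;
        exit( EXIT_FAILURE );
    }
}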

On lines 210 and 211, some basic OpenGL states are initialized, and thereafter the effect manager class is created and initialized. The functionality of the EffectManager class is described in more detail in the previous article titled [Introduction to Cg Runtime with OpenGL].

After the effect manager is created and initialized, but before the effects are actually loaded, any resources that are used by the effects should be loaded. This is done in the LoadResources method which will be shown next.

The EffectManager::EffectLoaded event is registered with the OnEffectLoaded method, which will be invoked whenever an effect is either loaded for the first time or reloaded because the file has changed on disk.

On lines 223 and 224, the two effects demonstrated in this demo are loaded. The first effect demonstrates the projective texture effect, and the second demonstrates the shadow mapping effect.

On lines 230-247, a few materials are initialized that are applied to the different geometric objects in the scene.

On line 253, the light’s perspective projection matrix is computed from the field of view (FOV), aspect ratio, and near and far planes. The aspect ratio of the light’s projection matrix is determined by the width and height of the shadow frame buffer object (FBO) that is used to generate the shadow map. I will come back to this when I talk about projective shadow maps. I first want to make sure you understand how projective textures work.

For the projective texture map, we also need the texture bias matrix that was described earlier. This bias matrix is defined by the g_TextureBiasMatrix variable.

The LoadResources Method

The LoadResources method is where the texture data will be loaded into GPU memory.

void LoadResources() 
{
    // Load the demon texture that will be used as a projection map
    glGenTextures( 1, &g_DemonTextureID );
    glBindTexture(GL_TEXTURE_2D, g_DemonTextureID);
    /* Load demon decal texture with mipmaps. */
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB8, 128, 128, GL_RGB, GL_UNSIGNED_BYTE, g_DemonTexture );

    static const glm::vec3 white(1,1,1);

    glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, glm::value_ptr(white) );
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR );

    glBindTexture( GL_TEXTURE_2D, 0 );

I will only show the first half of this method now; I will return to the rest when I talk about projective shadow maps.

The first part of this method simply loads the demon texture from the texture data defined in the global scope.

The functionality provided by the GL_ARB_texture_border_clamp extension is used to ensure that if the texture is sampled with texture coordinates outside the [0..1] range, the white border color will be returned instead. This prevents the texture from stretching across the entire scene or being repeated everywhere.

We also want to make sure the texture is smoothly blended, and since we have mipmaps for the demon texture, we can use the GL_LINEAR_MIPMAP_LINEAR minification filter, which produces a smooth blend between mipmap levels.

The OnEffectLoaded Method

Whenever an effect is loaded (or reloaded) this method will be invoked. We can use this method to assign the constant variables to the shader effects.

void OnEffectLoaded( EffectLoadedEventArgs& e )
{
    Effect& effect = e.Effect;
    if ( effect.GetEffectName() == "C9E5_projectiveTexture" )
    {
        EffectParameter& projectiveTextureParameter = effect.GetParameterByName( "projectiveSampler" );
        if ( projectiveTextureParameter.IsValid() )
        {
            projectiveTextureParameter.Set( g_DemonTextureID );
        }
    }

In this case, when the “C9E5_projectiveTexture” effect is loaded, the “projectiveSampler” effect parameter is set to the demon texture we just loaded. Inside the shader, this parameter is nothing more than a sampler2D texture sampler.

The OnUpdate Method

The OnUpdate method is invoked every frame by the application class. Since we want to animate the position of the light, we will use this method to update the light’s position.

void OnUpdate( UpdateEventArgs& e )
{
    static float fAnimTimer = 0.0f; 
    static float fReloadTimer = 0.0f;

    // Every two seconds, check whether the effects need to be reloaded.
    fReloadTimer += e.ElapsedTime;
    if ( fReloadTimer > 2.0f )
    {
        EffectManager::Get().ReloadEffects();
        fReloadTimer = 0.0f;
    }

    if ( g_bAnimate )
    {
        fAnimTimer += e.ElapsedTime;
        // Move the light position in a circle
        g_LightPosition.x = 10.0f * sinf( fAnimTimer );
        g_LightPosition.y = 3.0f;
        g_LightPosition.z = 10.0f * cosf( fAnimTimer );
    }
}

If any effect is modified while the application is running, the EffectManager::ReloadEffects method on line 441 will reload any out-of-date effects.

On lines 448-451, the position of the light is updated based on the current time.

The OnPreRender Method

The OnPreRender method is invoked after the OnUpdate method but before the OnRender method. We can use this method to update any shared parameters known by the effect manager as well as the view, projection, and texture matrices.

void OnPreRender( RenderEventArgs& e )
{
    // Update the shared parameters owned by the effect manager
    EffectManager& mgr = EffectManager::Get();

    mgr.SetElapsedTime( e.ElapsedTime );
    mgr.SetApplicationTime( e.TotalTime );
    mgr.SetMousePosition( g_CurrentMousePos );
    mgr.SetMouseButtonState( g_bLeftMouseDown, g_bRightMouseDown );

    g_Camera.ApplyViewTransform();

    // Create the Light's view matrix
    g_LightViewMatrix = glm::lookAt( g_LightPosition, glm::vec3(0), glm::vec3(0,-1,0) );
    // Build the texture matrix for the projected shadow map
    g_TextureMatrix = g_TextureBiasMatrix * g_LightProjectionMatrix * g_LightViewMatrix;
}

On lines 460-463, some shared parameters are updated in the EffectManager class.

On line 465, the camera’s view matrix is applied to the OpenGL’s combined model-view matrix to ensure any objects rendered using the fixed-function pipeline are updated correctly.

On line 468, the view matrix of the projector is created based on the position of the light. You may notice that the Y-coordinate of the “up” parameter to glm::lookAt is negative. This flips the projector’s view vertically so that the Y-coordinate of the resulting projected texture coordinate is reversed; without the flip, the projected image would appear upside down.

The final texture matrix is computed by multiplying the bias matrix, the projection matrix, and the view matrix. We also need to apply the object’s world matrix to compute the final texture matrix but that will be done in the render method on a per-object basis.

The demo renders two scenes. The first scene (RenderScene0) will demonstrate the projective texture effect and the second scene (RenderScene1) will demonstrate the shadow mapping effect.

Let’s first examine the projective texture effect.

The RenderScene0 Method

The first scene uses the projective texture effect to render the scene.

// Render the scene with a projective texture
// using the projective texture effect.
void RenderScene0()
{
    EffectManager& mgr = EffectManager::Get();

    Effect& effect = mgr.GetEffect("C9E5_projectiveTexture");
    Technique& technique = effect.GetTechniqueByName("main");

    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

    mgr.SetViewMatrix( g_Camera.GetViewMatrix() );
    mgr.SetProjectionMatrix( g_Camera.GetProjectionMatrix() );

    RenderScene( effect, technique );

    // Draw a cone that represents the spotlight
    glm::mat4 spotlightMatrix = glm::inverse( g_LightViewMatrix );
    glPushMatrix();
    {
        glMultMatrixf( glm::value_ptr(spotlightMatrix) );

        glColor3f( g_LightColor.r, g_LightColor.g, g_LightColor.b );
        glutSolidCone( 0.75, 1.5, 16, 8 );
    }
    glPopMatrix();   
}

First we get a reference to the “C9E5_projectiveTexture” effect and a reference to the “main” technique defined in that effect.

On line 506, the scene is rendered from the camera’s perspective using the projective texture technique.

Since we want to visualize the position and orientation of the light, we will also render a cone that represents the light (or projector).

The RenderScene Method

The RenderScene method is used to draw the scene geometry using the passed-in shader effect and technique.

The scene consists of two primitives (a sphere, and a torus) as well as the six quads that render the walls of the room. One light is circling the scene, always pointing back at the origin.

You will notice that this scene is virtually identical to the scene used for the previous article titled [Transformation and Lighting in Cg]. Therefore, this should just be a review.

First, we’ll render the torus.

void RenderScene( Effect& effect, Technique& technique )
{
    EffectManager& effectMgr = EffectManager::Get();
    
    glm::mat4x4 worldMatrix = glm::translate( 4.0f, -1.25f, 0.0f );
    worldMatrix = glm::rotate( worldMatrix, -90.0f, glm::vec3( 1.0f, 0.0f, 0.0f ) );
    glm::mat4x4 invWorldMatrix = glm::inverse( worldMatrix );

    EffectParameter& lightParameter = effect.GetParameterByName( "gLight" );
    // Light position and direction is in object space.
    lightParameter["position"].Set( glm::vec3( invWorldMatrix * glm::vec4(g_LightPosition, 1.0f) ) );
    lightParameter["direction"].Set( glm::normalize( glm::vec3( invWorldMatrix * glm::vec4( -g_LightPosition, 0.0f ) ) ) );
    
    EffectParameter& eyePosParameter = effect.GetParameterByName( "gEyePosition" );
    glm::mat4 viewMatrix = glm::inverse( g_Camera.GetViewMatrix() );
    glm::vec3 eyePos = glm::vec3( viewMatrix[3] );

    // Eye position parameter in object space
    eyePosParameter.Set( glm::vec3( invWorldMatrix * glm::vec4( eyePos, 1.0f ) ) );

    // Texture matrix for current object
    EffectParameter& textureMatrixParamter = effect.GetParameterByName( "gTextureMatrix" );
    textureMatrixParamter.Set( g_TextureMatrix * worldMatrix );

    effectMgr.SetWorldMatrix( worldMatrix );
    effectMgr.SetMaterial( g_BrassMaterial );
    effectMgr.UpdateSharedParameters();
    effect.UpdateParameters();

    foreach( Pass* pass, technique.GetPasses() )
    {        
        if ( pass->BeginPass() )
        {
            glutSolidTorus( 0.75, 2.0, 32, 32 );
            pass->EndPass();
        }
    }

Every object in the scene needs a world transform that defines its position and orientation in the scene (even if it’s the identity matrix). On line 605, a world transform is created that places the torus 4 units to the right of the origin and 1.25 units below it. The torus is also rotated so it appears to lie flat on the floor.

Since we are doing the lighting calculations in object-space, we also need to compute the inverse of the object’s world matrix.

On line 609, a reference to the light parameter is retrieved, and its position and direction values are set in object space.

On lines 614-619, the camera’s eye position parameter is also set in object space.

We also need to combine the texture matrix with the object’s world matrix. On line 622, the “gTextureMatrix” parameter is set by multiplying the pre-computed texture matrix by the object’s world matrix.

On lines 625-627, the dynamic shared parameters are updated in the EffectManager class.

Then, the torus is rendered using the technique that was passed to this function.

The rest of the primitives in the scene are rendered in an identical manner: the world matrix is computed for the object, the object-space light and eye position parameters are updated, and the texture matrix combined with the object’s world matrix is updated, as shown in the sketch below.
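
For example, the sphere might be drawn by repeating the same steps (a sketch following the pattern above; the transform, material, and sphere dimensions here are illustrative and not copied from the demo source):

    // Position the sphere on the opposite side of the origin from the torus.
    worldMatrix = glm::translate( -4.0f, -1.0f, 0.0f );
    invWorldMatrix = glm::inverse( worldMatrix );

    // Re-express the light and eye positions in this object's space.
    lightParameter["position"].Set( glm::vec3( invWorldMatrix * glm::vec4( g_LightPosition, 1.0f ) ) );
    lightParameter["direction"].Set( glm::normalize( glm::vec3( invWorldMatrix * glm::vec4( -g_LightPosition, 0.0f ) ) ) );
    eyePosParameter.Set( glm::vec3( invWorldMatrix * glm::vec4( eyePos, 1.0f ) ) );

    // Combine the shared texture matrix with this object's world matrix.
    textureMatrixParamter.Set( g_TextureMatrix * worldMatrix );

    effectMgr.SetWorldMatrix( worldMatrix );
    effectMgr.SetMaterial( g_RedPlasticMaterial );
    effectMgr.UpdateSharedParameters();
    effect.UpdateParameters();

    foreach( Pass* pass, technique.GetPasses() )
    {
        if ( pass->BeginPass() )
        {
            glutSolidSphere( 2.0, 32, 32 );
            pass->EndPass();
        }
    }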

That’s all that needs to be done to implement the projective texture effect on the application side. The only thing you really need to verify is that the texture matrix is correct. If you don’t see the projected texture in your scene, or the texture appears to be randomly placed, make sure you have combined the projector’s view and projection matrices, the texture bias matrix, and the object’s world matrix in the correct order before assigning the result to the texture matrix in the shader program.

The Projective Texture Effect

The projective texture effect is almost identical to the effect shown in the previous article titled [Transformation and Lighting in Cg] so I will only show the unique parts here.

The Vertex Program

Since we are doing the lighting calculations in object-space, the vertex program will simply pass-through the object-space position and normal to the fragment program. The projected texture coordinate will also be computed in the vertex program and this is simply the object-space position multiplied by the texture matrix we’ve computed earlier.

// This is based on the C9E5v_projTex from "The Cg Tutorial" (Addison-Wesley, ISBN
// 0321194969) by Randima Fernando and Mark J. Kilgard.  See page 254.
void C9E5v_projTex(float4 position      : POSITION,
                   float3 normal        : NORMAL,

               out float4 oPosition     : POSITION,
               out float3 objectPos     : TEXCOORD0,
               out float3 oNormal       : TEXCOORD1,
               out float4 oTexCoordProj : TEXCOORD2,

           uniform float4x4 modelViewProj, 
           uniform float4x4 textureMatrix )
{
    oPosition = mul(modelViewProj, position);
    objectPos = position.xyz;
    oNormal = normal;

    // Compute the texture coordinate for querying
    // the projective texture.
    oTexCoordProj = mul(textureMatrix, position );
}

On line 55, the projected texture coordinate is computed by multiplying the object-space position by the texture matrix. The projected texture coordinate is then sent to the fragment program.

The Fragment Program

We need to define the texture sampler that will be used to sample the demon texture.

texture projectiveTexture;
sampler2D projectiveSampler = sampler_state
{
    Texture = <projectiveTexture>
};

As you can see, we are using a standard 2D texture sampler. There is no special sampler type for projective textures.

Just as was shown in the [Transformation and Lighting in Cg] article, I am using the oneLight fragment program. This fragment program was explained in full detail in that article, so I will assume that you know the basic lighting model calculations and I will only show how the projected texture is sampled.

// Based on C5E4v_twoLights (page 131) but for just one per-fragment light
void oneLight(float4 position     : TEXCOORD0,
              float3 normal       : TEXCOORD1,
              float4 texCoordProj : TEXCOORD2,

         out float4 color         : COLOR,

     uniform float3   eyePosition,
     uniform float4   globalAmbient,
     uniform Light    light,
     uniform Material material,
     uniform sampler2D projectiveMap )
{
    // Calculate emissive and ambient terms
    float4 emissive = material.Ke;
    float4 ambient = material.Ka * globalAmbient;

    // Loop over diffuse and specular contributions for each light
    float4 diffuseLight;
    float4 specularLight;

    C5E10_spotAttenLighting(light, position.xyz, normal, 
        eyePosition, material.shininess,
        diffuseLight, specularLight);

    // Now modulate diffuse and specular by material color
    float4 diffuse = material.Kd * diffuseLight;
    float4 specular = material.Ks * specularLight;

    // Sample the projective texture
    float4 projTexColor = float4( 1, 1, 1, 1 );
    if ( texCoordProj.w > 0 )
    {
        projTexColor = tex2Dproj( projectiveMap, texCoordProj );
    }

    color = projTexColor * ( emissive + ambient + diffuse + specular ) ;
    color.w = 1;
}

Everything before line 142 is the same lighting calculation shown in the transformation and lighting article and needs no further explanation.

On line 143, the projected texture color is initialized to white, and on line 144, the w-component of the projected texture coordinate is tested for a positive value. The reason for this is that if a fragment is behind the projector, the w-component of its projected texture coordinate will be negative. Since we don’t want the projected texture to appear behind the projector, we test the sign of the w-component and simply leave the projected color white for fragments behind the light.

On line 146, the texture is sampled using the tex2Dproj method that is provided with the Cg shader language.

On line 149, the projected texture color is blended with the lit fragment color to produce the final color.

The result should be something similar to the image shown below.

Projective Texture Demo

The Projected Shadow Demo

Now that you understand how projected textures work, it should be relatively simple to understand how projected shadow mapping works. The basic difference between the projected texture demo and the projected shadow mapping demo is the texture that is used.

We will use the following algorithm to implement projected shadow mapping.

Render the shadow casting objects into a depth buffer from the perspective of the light.
Render the scene again from the perspective of the camera.
    If the depth of the rendered fragment is greater than the depth in the shadow buffer
    then the fragment is in shadow
    otherwise the fragment is illuminated by the light

This is a very basic, high-level algorithm for performing shadow mapping. Let’s see how it can be implemented at a lower level.

Global Variables

In addition to the global variables declared for the projected texture mapping demo, we also need to declare a variable to hold a frame buffer object (FBO) that will be used to render the shadow map. Keep in mind that you should check for the existence of the GL_ARB_framebuffer_object extension before trying to create a frame buffer object.

// Frame buffer objects
GLuint g_ShadowFBO = 0;
// The dimensions of the shadow map
GLuint g_ShadowBufferWidth = 2048;
GLuint g_ShadowBufferHeight = 2048;
// A depth (texture) buffer that will be attached to the FBO
GLuint g_DepthBufferID = 0;
// A color (texture) buffer that will be attached to the FBO
GLuint g_ColorBufferID = 0;

A frame buffer object can be thought of as a second screen to render to. You can set the current render target to the off-screen frame buffer object, do any rendering you need to do to that frame buffer, then restore the render target to the standard back-buffer surface.
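
In terms of API calls, the pattern is simply this (a minimal sketch):

    // Redirect rendering to the off-screen frame buffer object...
    glBindFramebuffer( GL_FRAMEBUFFER, g_ShadowFBO );
    // ... issue draw calls here ...
    // ...then restore the default back buffer.
    glBindFramebuffer( GL_FRAMEBUFFER, 0 );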

A frame buffer object consists of multiple separate surfaces. It can have a stencil buffer, a depth buffer, and one or more color buffers. The maximum number of color buffers you can have is determined by the glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS, …) query.
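
For example (a small sketch, assuming a current OpenGL context):

    GLint maxColorAttachments = 0;
    glGetIntegerv( GL_MAX_COLOR_ATTACHMENTS, &maxColorAttachments );
    std::cout << "Max color attachments: " << maxColorAttachments << std::endl;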

For our purposes, we need at least a depth buffer, but for the frame buffer to be considered complete, we may also need to attach a color buffer. It may also be necessary to ensure that all buffers attached to a frame buffer object are the same size.

The width of our frame buffer object is specified by the g_ShadowBufferWidth variable and the height by the g_ShadowBufferHeight variable. The higher the frame buffer resolution, the better the quality of the shadows. Lower resolutions will cause jagged edges to appear at the shadow borders.

The depth buffer is stored in the g_DepthBufferID variable. This buffer will be created as a texture, and that texture will be used as the projected texture just as the demon texture was used in the previous demo.

The color buffer attached to the frame buffer will be stored in the g_ColorBufferID variable. We don’t use the color buffer to generate the shadow map, but in some cases it may be necessary to have at least one color buffer attached to the frame buffer object for it to be considered “complete”.

The LoadResources Method Continued

I have already shown the part of the LoadResources method that loads the demon texture used for the projective texture demo; now I will outline how to create the frame buffer and the render targets that are attached to it.

    // Create the FBO and attach the depth buffer to it.
    glGenFramebuffers( 1, &g_ShadowFBO );
    glBindFramebuffer( GL_FRAMEBUFFER, g_ShadowFBO );

We first create the frame buffer object and bind it to the GL_FRAMEBUFFER target. Next, we will create textures to attach to the framebuffer object.

    // Create a depth texture for the FBO
    glGenTextures( 1, &g_DepthBufferID );
    glBindTexture( GL_TEXTURE_2D, g_DepthBufferID );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, g_ShadowBufferWidth, g_ShadowBufferHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );

    glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, glm::value_ptr(white) );
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE );

    // Attach the depth texture to the depth target for the frame buffer.
    glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, g_DepthBufferID, 0 );

We first generate a texture object to store the depth information. The texture memory is created using the GL_DEPTH_COMPONENT enumeration for both the internal format and the format parameters of the glTexImage2D method.

We also specify the same texture parameters for the depth texture as we did for the projective texture. We want to make sure that if any texture coordinates are out-of-range of the texture, that we just return the largest depth value (1.0) so any pixel outside of the field of view of the light will not get shaded.

The only additional texture parameter specified for the depth texture is the GL_TEXTURE_COMPARE_MODE parameter, set to GL_COMPARE_R_TO_TEXTURE on line 160. This ensures that the texture sampler performs a depth comparison against the red component of the texture instead of directly returning the depth value from the texture. If the fragment is in shadow, the sampler will return 0.0, and if it is not in shadow, it will return 1.0. I will revisit this when I discuss the fragment program.

Setting the texture’s compare mode to GL_COMPARE_R_TO_TEXTURE will allow you to use the tex2Dproj shader method to test if the fragment is in shadow, but it will also prevent the standard tex2D method from working if you wanted to use it to visualize the value of the depth buffer. In order to use the depth buffer as a standard texture so that you can query the depth value using the tex2D method, you have to set the GL_TEXTURE_COMPARE_MODE to the default value of GL_NONE. I solve this issue by setting sampler state parameters in the shader program which I will show later.
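
If you wanted to toggle the compare mode from the application side instead, it would look something like this sketch (the demo itself switches modes through sampler states in the effect file, as shown later):

    // Disable the depth comparison so tex2D returns the raw depth value again.
    glBindTexture( GL_TEXTURE_2D, g_DepthBufferID );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE );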

On line 163, the depth buffer we just created is attached to the depth attachment point of the frame buffer object as a 2D texture object.

    // Create a color buffer for the color attachment point of the FBO
    // We are not using the color buffer directly, but it may be necessary 
    // to attach a color buffer to complete the frame buffer.
    glGenTextures( 1, &g_ColorBufferID );
    glBindTexture( GL_TEXTURE_2D, g_ColorBufferID );
    glTexImage2D( GL_TEXTURE_2D, 0, 4, g_ShadowBufferWidth, g_ShadowBufferHeight, 0, GL_RGBA, GL_FLOAT, NULL );

    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );

    glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, g_ColorBufferID, 0 );

We create a second texture, the same size as the depth texture, to attach to the frame buffer’s first color attachment point. We are not actually using this color buffer for anything in this demo, but to ensure the frame buffer object is complete, we should attach a color buffer to at least the first color attachment point of the frame buffer.

In my case, I could actually remove the entire block of code shown above from my own demo and the frame buffer was still considered complete and the demo worked fine. But some implementations of the GL_ARB_framebuffer_object extension may not consider the frame buffer object complete if it does not have at least one color buffer attached. You can try removing it in your own implementation and leave a comment at the end of the article letting me know what platform you are using and whether or not the frame buffer object was complete without the color buffer attached.

To ensure the frame buffer object can actually be used as a render target, we check the frame buffer status using the glCheckFramebufferStatus method.

    // Check to see if the frame buffer is valid
    GLenum fboStatus = glCheckFramebufferStatus( GL_FRAMEBUFFER );
    if ( fboStatus != GL_FRAMEBUFFER_COMPLETE )
    {
        std::cout << "Incomplete Framebuffer status." << std::endl;
    }

    // Unbind the buffers so we can render normally
    glBindFramebuffer( GL_FRAMEBUFFER, 0 );
    glBindTexture( GL_TEXTURE_2D, 0 );

If the glCheckFramebufferStatus method does not return GL_FRAMEBUFFER_COMPLETE, then we need to check that the depth buffer and the color buffer are correctly attached and that each buffer has a valid size and format.
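
If you want more detail than a generic failure message, the returned status code identifies the problem. A sketch of a more descriptive check:

    switch ( fboStatus )
    {
    case GL_FRAMEBUFFER_COMPLETE:
        break;
    case GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
        std::cout << "FBO: an attachment is incomplete." << std::endl;
        break;
    case GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT:
        std::cout << "FBO: no image is attached." << std::endl;
        break;
    case GL_FRAMEBUFFER_UNSUPPORTED:
        std::cout << "FBO: the combination of attachment formats is unsupported." << std::endl;
        break;
    default:
        std::cout << "FBO: unknown framebuffer error." << std::endl;
        break;
    }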

On line 186, the buffers are detached so that we can render normally to the back buffer.

The OnEffectLoaded Method Continued

Previously, I showed where the projective texture is specified as the sampler parameter for the projective texture effect. Now we need to specify that the depth buffer texture we created in the LoadResources method is used as the projected texture for the shadow mapping effect.

    if ( effect.GetEffectName() == "C9E5_shadowMapping" )
    {
        EffectParameter& projectiveTextureParameter = effect.GetParameterByName( "projectiveSampler" );
        if ( projectiveTextureParameter.IsValid() )
        {
            projectiveTextureParameter.Set( g_DepthBufferID );
        }

    }

If the "C9E5_shadowMapping" is loaded (or reloaded), we'll get a reference to the "projectiveSampler" sampler parameter and specify that the depth buffer of the frame buffer object is to be used as the projective texture.

The RenderScene1 Method

The projective shadow mapping effect is rendered using the RenderScene1 method.

First, we'll get a reference to the effect and the two techniques that will be used to render this effect.

// Render the scene with a projective shadow map
// using the shadow mapping effect.
void RenderScene1()
{
    EffectManager& mgr = EffectManager::Get();

    Effect& effect = mgr.GetEffect("C9E5_shadowMapping");
    Technique& shadowTechnique = effect.GetTechniqueByName("shadow");
    Technique& mainTechnique = effect.GetTechniqueByName( "main" );

To render the shadow mapping effect, we will render the scene using two passes. The first pass will use the "shadow" shader technique and the second pass will just render the scene the same way it was rendered with the projective texture effect using the "main" technique.

Next, we'll render the first pass into the frame buffer from the perspective of the light.

    glBindFramebuffer( GL_FRAMEBUFFER, g_ShadowFBO );
    
    glPushAttrib( GL_VIEWPORT_BIT );
    
    glViewport( 0, 0, g_ShadowBufferWidth, g_ShadowBufferHeight );

    glClear( GL_DEPTH_BUFFER_BIT );

    // Render the scene to the FBO from the perspective of the light
    mgr.SetViewMatrix( g_LightViewMatrix );
    mgr.SetProjectionMatrix( g_LightProjectionMatrix );

    RenderScene( effect, shadowTechnique );

    glPopAttrib();

    // Unbind the frame buffer so we render to the back buffer again.
    glBindFramebuffer( GL_FRAMEBUFFER, 0 );

On line 531, the frame buffer object we created in the LoadResources method is bound as the current render target.

We also need to specify the viewport size to be the same size as the current render target. On line 535 we use the glViewport method to specify the viewport to be the width and height of the frame buffer object.

Then we clear the depth buffer. We don't need to clear the color buffer because we aren't using it anyway.

Next, we specify the view and projection matrices to that of the light and render the scene using the shadow technique.

In order to render to the standard back buffer again, we need to unbind the frame buffer object.

In the next pass, we'll render the scene again but this time from the perspective of the camera. We've already assigned the depth buffer from the frame buffer to the "projectiveSampler" parameter of the "C9E5_shadowMapping" effect in the OnEffectLoaded method so that will be used as the projective texture when we render the scene. All the other parameters such as the texture matrix, will be the same as they were for the projective texture demo.

        // Now render the scene again, this time from the perspective of the camera.
        glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

        // Now render the scene from the perspective of the camera.
        mgr.SetViewMatrix( g_Camera.GetViewMatrix() );
        mgr.SetProjectionMatrix( g_Camera.GetProjectionMatrix() );

        RenderScene( effect, mainTechnique );

For the second pass, we want to clear both the color buffer and the depth buffer.

Then we set the view and projection matrices to that of the camera and render the scene using the "main" technique.

Since we also want to visualize the position and orientation of the light, we draw a cone in the scene that represents the spotlight.

        // Draw a cone that represents the spotlight
        glm::mat4 spotlightMatrix = glm::inverse( g_LightViewMatrix );

        glPushMatrix();
        glMultMatrixf( glm::value_ptr(spotlightMatrix) );

        glColor3f( g_LightColor.r, g_LightColor.g, g_LightColor.b );
        glutSolidCone( 0.75, 1.5, 16, 8 );

        glPopMatrix();

It may be useful to visualize the depth buffer to see if the depth values are correct. In the next section I will show a simple method for rendering the depth buffer as a full-screen quad. In the demo you can toggle the display of the depth buffer by pressing the 'D' key on your keyboard while viewing the second scene.

Visualizing the Depth Buffer

If you want to visualize the depth buffer of the frame buffer object, you can simply bind the depth buffer texture to a texture sampler and render a screen-size quad using that texture.

    if ( g_bDebugDepthBuffer )
    {
        Technique& debugTechnique = effect.GetTechniqueByName( "debugShadow" );

        // Updating the sampler parameter will ensure any sampler states 
        // specified in the shader are set.
        effect.GetParameterByName("depthSampler").UpdateParameter();
        effect.GetParameterByName( "zNear" ).Set( g_fLightNearPlane );
        effect.GetParameterByName( "zFar" ).Set( g_fLightFarPlane );

        effect.UpdateParameters();

        foreach( Pass* pass, debugTechnique.GetPasses() )
        {
            if ( pass->BeginPass() )
            {
                glBegin( GL_QUADS );
                {
                    glTexCoord2f( 1.0f, 1.0f ); glVertex2f(  1.0f,  1.0f );
                    glTexCoord2f( 1.0f, 0.0f ); glVertex2f(  1.0f, -1.0f );
                    glTexCoord2f( 0.0f, 0.0f ); glVertex2f( -1.0f, -1.0f );
                    glTexCoord2f( 0.0f, 1.0f ); glVertex2f( -1.0f,  1.0f );
                }
                glEnd();

                pass->EndPass();
            }
        }
    }

I've created a special shader technique that renders a screen-size quad: the application supplies the quad's vertex positions directly in clip space, and the fragment program simply plots the pixels of the texture to the screen. If the render window is not square but the depth texture is square (equal width and height), the texture will appear stretched. Just change the render window to be square where the application object is created (line 31 in main.cpp) and the depth texture will no longer appear stretched.

The Projective Shadow Effect

The projective shadow effect is almost identical to the projective texture effect shown earlier. The primary difference is the additional shadow technique that is used to generate the depth information for the depth buffer of the frame buffer object. This depth buffer is then used as the projected texture sampled in the main technique, which renders the scene from the camera's point of view.

The Shadow Technique

The "shadow" technique is the technique that is used to render the first pass that generates the depth information. This technique consists of only a single vertex program and no fragment program (since we are not concerned with the fragment color, only the depth value).

The Vertex Program

The vertex program for the shadow technique is extremely simple. Since we only need the clip-space depth value, that's all we are going to produce. Other information like surface normals or texture coordinates is unnecessary if all we need is the depth value of the current vertex. We don't even need a fragment program, because the rasterization stage of the rendering pipeline will automatically generate the interpolated depth values and write them into the depth buffer.

void C9E5v_shadow( float4 position         : POSITION, 

               out float4 oPosition        : POSITION, 
            
           uniform float4x4 modelViewProj ) 
{
    oPosition = mul( modelViewProj, position );
}

The object-space vertex position is passed to the vertex program, the clip-space position is computed from the current model, view, projection matrix and the resulting value is returned by the function as an out parameter. The next stage of the rendering pipeline will automatically generate the interpolated depth values and store them in the depth buffer for us.

Techniques and Passes

There is one special consideration you must make when rendering the shadow pass: we only want to render the back-facing polygons (as seen from the light). If we render both front-facing and back-facing polygons into the depth buffer, the front faces will produce z-fighting (self-shadowing) artifacts in the final render. Back-facing polygons suffer from the same artifacts, but since those fragments face away from the light, the lighting calculation renders them dark anyway and the artifacts are masked.

technique shadow
{
    pass p0
    {
        VertexProgram = compile arbvp1 C9E5v_shadow( gModelViewProj );
        DepthTestEnable = true;
        CullFaceEnable = true;
        CullFace = Front;
    }
}

The "shadow" technique only defines a single pass and only the vertex program is specified. To ensure only back-facing polygons are rendered to the depth buffer, we set the CullFaceEnable state parameter to true and the CullFace state parameter to Front. This will ensure that all front-facing polygons are not rendered.

The Main Technique

The main technique for the shadow effect is almost identical to that of the projective texture effect. The only major difference comes from how the return value from the projected texture is used in the fragment program.

Let's first take a look at the vertex program for this technique.

The Vertex Program

The vertex program for the main technique of the shadow effect is identical to that of the projected texture example. I will list it here for clarity.

// This is based on the C9E5v_projTex from "The Cg Tutorial" (Addison-Wesley, ISBN
// 0321194969) by Randima Fernando and Mark J. Kilgard.  See page 254.
void C9E5v_projTex(float4 position      : POSITION,
                   float3 normal        : NORMAL,

               out float4 oPosition     : POSITION,
               out float3 objectPos     : TEXCOORD0,
               out float3 oNormal       : TEXCOORD1,
               out float4 oTexCoordProj : TEXCOORD2,

           uniform float4x4 modelViewProj, 
           uniform float4x4 textureMatrix )
{
    oPosition = mul(modelViewProj, position);

    objectPos = position.xyz;
    oNormal = normal;

    // Compute the texture coordinate for querying
    // the projective texture.
    oTexCoordProj = mul(textureMatrix, position );
}

The vertex program accepts the object-space vertex position and normal and the clip-space position is computed from the current model, view, projection matrix as always.

Since we are doing the lighting calculations in the fragment program in object space, we also need to pass the object-space vertex position and normals to the fragment program.

In order to query the shadow map in the fragment program, we need to compute the projected texture coordinate and pass it to the fragment program. This is done on line 100 by multiplying the object-space vertex position by the texture matrix that was computed in the application.

The Fragment Program

Again, the fragment program is almost identical to that of the projected texture fragment program. The only major difference is the way the projected texture is sampled.

// Based on C5E4v_twoLights (page 131) but for just one per-fragment light
void oneLight(float4    position     : TEXCOORD0,
              float3    normal       : TEXCOORD1,
              float4    texCoordProj : TEXCOORD2,

          out float4    color        : COLOR,

      uniform float3    eyePosition,
      uniform float4    globalAmbient,
      uniform Light     light,
      uniform Material  material,
      uniform sampler2D projectiveMap )
{
    // Calculate emissive and ambient terms
    float4 emissive = material.Ke;
    float4 ambient = material.Ka * globalAmbient;

    // Loop over diffuse and specular contributions for each light
    float4 diffuseLight;
    float4 specularLight;

    C5E10_spotAttenLighting(light, position.xyz, normal, 
        eyePosition, material.shininess,
        diffuseLight, specularLight);

    // Now modulate diffuse and specular by material color
    float4 diffuse = material.Kd * diffuseLight;
    float4 specular = material.Ks * specularLight;

    // Use PCF to improve shadow quality at the shadow edges.
    float shadowCoeff = 0.0f; 
    for ( int i = 0; i < 9; ++i )
    {
        shadowCoeff += tex2Dproj( projectiveMap, texCoordProj + fTaps_PCF[i] * 0.01f ) / 9.0f;
    }

    color = emissive + ambient + ( shadowCoeff * ( diffuse + specular ) );
    color.w = 1;
}

The fragment program receives the object-space position and normals from the vertex program and computes the lighting in the normal way (for a detailed explanation of the lighting equations used here, you can refer to my previous article titled [Transformation and Lighting in Cg]). The changed code is the computation of the shadowCoeff value that is used to modulate the fragment's diffuse and specular components. The shadow factor (shadowCoeff) is sampled from the texture using the tex2Dproj method. Since we specified the GL_COMPARE_R_TO_TEXTURE texture compare mode for the depth texture, the tex2Dproj method will return 0.0 if the fragment is in shadow and 1.0 if it is not.

I am also using a technique called Percentage Closer Filtering (PCF) to reduce the jagged edges at the shadow boundary. The idea behind PCF is that instead of sampling the shadow map just once, we sample it 9 times and average the results. The fTaps_PCF array contains offset values that are used to shift the texture lookup to the 8 neighboring pixels as well as the current pixel.

// The offset vectors to perform Percentage Closer Filtering 
// to improve shadow quality around the shadow edges.
static const float4 fTaps_PCF[9] = {
	{-1.0,-1.0, 0.0, 0.0},
	{-1.0, 0.0, 0.0, 0.0},
	{-1.0, 1.0, 0.0, 0.0},

	{ 0.0,-1.0, 0.0, 0.0},
	{ 0.0, 0.0, 0.0, 0.0},
	{ 0.0, 1.0, 0.0, 0.0},

	{ 1.0,-1.0, 0.0, 0.0},
	{ 1.0, 0.0, 0.0, 0.0},
	{ 1.0, 1.0, 0.0, 0.0}
};

The fTaps_PCF array defines the offsets around the current pixel being sampled. The offset factor (in this case 0.01f) determines the distance between the sampled pixels. Offset factors that are too large will produce a noticeable layering effect, and offset factors that are too small will eliminate the effect altogether. You can experiment with different values for the offset matrix and the offset factor to achieve better results. Using this demo, you can modify the shader program directly and instantly see the results updated in the render window.

Techniques and Passes

The main technique for the shadow effect defines only a single pass and defines both a vertex program and a fragment program.

technique main
{
    pass p0
    {
        VertexProgram = compile arbvp1 C9E5v_projTex( gModelViewProj, gTextureMatrix );
        FragmentProgram = compile arbfp1 oneLight( gEyePosition, gGlobalAmbient, gLight, gMaterial, projectiveSampler );

        DepthTestEnable = true;
        CullFaceEnable = true;
        CullFace = Back;
    }
}

In this case, we bind both the vertex program and the fragment program to the functions just defined. We also enable face culling, but this time we cull back-facing polygons. This is not strictly necessary for the shadow effect to work; it is just a standard optimization performed by almost all shader effects.

If everything is working correctly, we should see something similar to the image shown below when we view the second scene (press '2' on the keyboard to switch to it).

Shadow Demo

The Debug Shadow Technique

In order to visualize the depth buffer, I added an additional technique specifically designed to render a depth buffer to a screen-aligned quad.

The Vertex Program

The requirement of the debugShadow technique is that the vertex positions must be supplied in clip space. This removes the need to supply a view or projection matrix, and the vertices passed by the application can simply be passed through to the fragment program.

void DebugShadowV( float4 position : POSITION, 
                   float2 texCoord : TEXCOORD0,

                   out float4 oPosition : POSITION, 
                   out float2 oTexCoord : TEXCOORD0 )
{
    oPosition = position;
    oTexCoord = texCoord;
}

The vertex program for the debug shadow technique simply passes through the clip-space vertex position and the texture coordinates supplied by the application.

The Fragment Program

In order to visualize the depth buffer, we have to define a different sampler than the one used as the shadow sampler in the main technique. The reason is that when the GL_COMPARE_R_TO_TEXTURE texture parameter is set on the depth buffer, the texture can be sampled as a shadow map using the tex2Dproj method in the shader program; with that parameter set, however, we cannot sample the depth texture with tex2D and get a sensible value. To solve this, we specify two texture samplers: one that is used to sample the texture in the shadow mapping pass and specifies the GL_COMPARE_R_TO_TEXTURE compare mode, and another that specifies GL_NONE as the texture compare mode and is used to sample the texture in the depth debug pass. Unfortunately, because both samplers operate on the same texture object, only one sampler can be valid at a time.

texture depthTexture;
sampler2D depthSampler = sampler_state
{
    Texture = <depthTexture>;
    CompareMode=None;
};

texture projectiveTexture;
sampler2D projectiveSampler = sampler_state
{
    Texture = <projectiveTexture>;
    CompareMode=CompareRToTexture;
};

The first sampler defined here allows the depth texture to be sampled with the tex2D method, returning the actual depth value; the second sampler specifies a compare mode of CompareRToTexture and can be sampled with the tex2Dproj method to get the shadow factor as a result. If you want to switch between the two, you must tell the Cg runtime to set up the sampler with the correct sampler states by calling the void cgGLSetupSampler( CGparameter param, GLuint texobj ) method (which my application does by calling EffectParameter::UpdateParameter).
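
As a sketch, the call might look like this (depthSamplerParam is a hypothetical CGparameter handle obtained from the loaded effect; in the demo this call is wrapped inside EffectParameter::UpdateParameter):

    // Re-apply the sampler states declared on "depthSampler" in the effect file
    // to the depth texture object, so tex2D returns raw depth values again.
    // 'depthSamplerParam' is a hypothetical handle, not a name from the demo source.
    cgGLSetupSampler( depthSamplerParam, g_DepthBufferID );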

Since the shader can't know the near and far clipping planes that were in effect when the depth buffer was generated, we have to pass them to the shader as parameters.

// Near and far clipping planes of the projection matrix when the 
// depth buffer was generated. Set by the application.
float zNear;
float zFar;

float DepthLinear( float fDepth ) 
{
    return ( zNear * 2.0f ) / ( zFar + zNear - fDepth * ( zFar - zNear ) );
}

void DebugShadowF( float2 texCoord : TEXCOORD0, 

                   out float4 oColor : COLOR,

                   uniform sampler2D sampler )
{
    // Sample the depth map
    float4 depth = tex2D( sampler, texCoord );
    oColor = DepthLinear( depth.z );
    oColor.w = 1.0f;
}

The fragment program accepts the texture coordinate from the vertex program and outputs a single color that is rendered to the screen. The sampler2D parameter is the depth buffer texture that should contain the depth values that are used to perform the shadow mapping.

The DepthLinear method linearizes the depth values from the texture according to the near and far clipping planes of the projection matrix that was used to generate them. Raw perspective depth values are packed very close to 1.0 for most of the visible range, so remapping them to a roughly linear range (near 0.0 at the near plane up to 1.0 at the far plane) makes the visualization much easier to interpret.

Techniques and Passes

There are no special render states to consider when rendering a screen-space quad, so the technique for the shadow debug effect is very simple.

technique debugShadow
{
    pass p0
    {
        VertexProgram = compile arbvp1 DebugShadowV();
        FragmentProgram = compile arbfp1 DebugShadowF( depthSampler );
    }
}

The pass doesn't define any special states like the others did. We pass the depth sampler to the fragment program which will allow us to sample the depth map as a standard texture instead of a shadow map.

If we now run the demo, switch to the second scene by pressing the '2' key, and toggle debugging of the depth buffer by pressing the 'D' key, we should see something similar to the image shown below.

Debug Shadow Buffer

The Video

If everything works correctly, the demo should look similar to the video shown below.

References


The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics (2003). Randima Fernando and Mark J. Kilgard. Addison Wesley.

Special thanks to Quinten Lansu for helping me fix some tricky issues that I was having trouble with while creating this demo.
Special thanks to Sen (see comments) for pointing out the use of the GL_COMPARE_R_TO_TEXTURE texture compare mode parameter.

Download the Source

The source code for this demo can be downloaded from the link below.

[Shadows.zip]
