Normal Mapping with Cg and OpenGL

In this article, I will discuss a technique called normal mapping. Normal mapping is a shader technique that encodes pre-computed surface normals in a texture that can be used to add extra detail to a surface without the requirement of adding extra geometry. Before reading this article, you should have a basic understanding of OpenGL and you should know how to set up a Cg shader. For a review of OpenGL, you can refer to my previous article titled [Introduction to OpenGL for Game Programmers], and to learn how to incorporate Cg shaders in your own applications, you can refer to my article titled [Introduction to Cg Runtime with OpenGL].

Introduction

Normal mapping is a shader technique that is used to add detail to the faces of a polygon mesh without requiring further tessellation of the mesh.

In some cases it is impractical to tessellate a model to the degree required to capture all of the detail the artist wishes to express. Shader programmers needed a solution that would allow a model to be expressed with a very low polygon count yet still retain the detail that the artists work so hard to achieve. The solution is normal mapping.

It works like this: the artist creates a highly detailed model in a 3D modeling package, and from this high-polygon model a texture is generated that encodes the surface normal of the mesh at every point. The artist then creates a simplified, low-polygon version of the same model. The low-polygon version resembles the general shape of the high-polygon version, but it does not retain the detail.

Using the normal map that was generated from the high-polygon model, the lost details can be simulated on the low-polygon model.

Dependencies

The demo shown in this article uses several third-party libraries to simplify the development process.

  • The Cg Toolkit (Version 3): The Cg Toolkit provides the tools and API needed to integrate the Cg shader programs in your application.
  • Boost (1.46.1): Boost has some very useful libraries that I use throughout my demo applications. In this demo, I use the Signals, Filesystem, Function, and Bind Boost libraries to provide generic, platform-independent functionality that simplifies some of the features used in this demo.
  • OpenGL Extension Wrangler (GLEW): The OpenGL extension wrangler is the API I chose to check for the presence of the required extensions and to use the extensions in the application program.
  • Simple DirectMedia Layer (1.2.14): Simple DirectMedia Layer (SDL) is a cross-platform multimedia library that I use to create the main application window, initialize OpenGL, and handle keyboard, mouse and joystick input.
  • OpenGL Mathematics (GLM): An OpenGL-centric mathematics library for 3D graphics applications.
  • Simple OpenGL Image Library (SOIL): SOIL is a tiny C library used primarily for uploading textures into OpenGL.

All of the dependencies described here are included in the source code example included at the end of this article.

The BumpMapping Demo

For the BumpMapping demo, I will set up the application and the shaders that will be used to render the effect. I will be using the EffectFramework library that I used in the previous article titled [Environment Mapping with Cg and OpenGL].

Let’s start by taking a look at the application code.

Headers and Globals

The first thing we do in any application source file is include all the headers, define any global variables, and declare any methods that are defined later in the source file.

#include "BumpMappingPCH.h"
#include "Application.h"
#include "PivotCamera.h"
#include "ElapsedTime.h"
#include "EffectManager.h"
#include "Material.h"
#include "Effect.h"
#include "EffectParameter.h"
#include "Technique.h"
#include "Pass.h"

#include "Events.h"

These headers, which come after the required pre-compiled header, include the classes from the EffectFramework library. Details of the EffectFramework library were demonstrated in the article titled [Introduction to Cg Runtime with OpenGL], so I will not go into further detail about them here.

Next we’ll define a few global variables.

Application g_App( "Bump Mapping Demo", 1280, 720 );
PivotCamera g_Camera;

glm::vec3   g_InitialCameraRotation( 0, 0, 0 );
glm::vec3   g_InitialCameraPivot( 0, 0, 0 );
glm::vec3   g_InitialCameraPosition( 0, 0, -10 );

GLuint  g_TorusDisplayList = 0;

GLuint  g_EnvCubeMap = 0;
GLuint  g_NormalMap = 0;
GLuint  g_DiffuseMap = 0;

Material g_ReflectiveMaterial;

glm::vec3 g_LightPosition;
glm::vec4 g_GlobalAmbient( 0.3f, 0.3f, 0.3f, 1.0f );
glm::vec4 g_LightColor ( 0.95f, 0.95f, 0.95f, 1.0f );

bool g_bAnimate = true;
float g_fRotatePrimitive = 0.0f;

glm::ivec2 g_CurrentMousePos(0);
bool g_bLeftMouseDown = false;
bool g_bRightMouseDown = false;

The Application variable will be used to create the main application window, initialize OpenGL, and process and translate events.

A pivot camera is also defined that will allow us to pan and pivot the view.

We will be rendering a torus to demonstrate the effect. We will use an OpenGL display list to generate a procedural torus and to render the torus every frame. The g_TorusDisplayList variable is used to store the ID of the display list that defines the procedural torus.

Next, the OpenGL texture IDs are defined to reference the different textures that are used in this demo. The g_EnvCubeMap variable will be used to refer to the cube-map texture for the skybox that is rendered as the background for our scene. This cube map will also be used as the environment map that is applied to the model to simulate reflection on the torus. For a full explanation of environment maps, you can refer to my previous article titled [Environment Mapping with Cg and OpenGL]. The g_NormalMap variable is used to reference the normal map that is applied to the torus to generate the detail of the stone pattern, and g_DiffuseMap is used to apply a color texture to the torus. In this application the same texture coordinates are used for both the diffuse map and the normal map, so these two textures should match up with each other.

The g_ReflectiveMaterial variable is used to define the diffuse, specular and reflection coefficient values that will be used to render the torus.

A few variables are then defined that control the lighting for the scene.

The g_bAnimate variable is used to toggle the animation of the torus and the light in the demo. The [space] bar can be used to toggle the animation on and off, and g_fRotatePrimitive determines how much rotation to apply to the light and the torus primitive.

Finally, a few variables are defined to store mouse information that is used to control the pivot camera.

We also need to forward-declare the event callbacks that will be registered with the application class.

void OnKeyPressed( KeyEventArgs& e );
void OnMouseButtonPressed( MouseButtonEventArgs& e );
void OnMouseButtonReleased( MouseButtonEventArgs& e );
void OnMouseMoved( MouseMotionEventArgs& e );
void OnResized( ResizeEventArgs& e );
void OnUpdate( UpdateEventArgs& e );
void OnRender( RenderEventArgs& e );
void OnPreRender( RenderEventArgs& e );

void OnInitialize( EventArgs& e );
void OnEffectLoaded( EffectLoadedEventArgs& e );
void OnRuntimeError( RuntimeErrorEventArgs& e );
void OnTerminate( EventArgs& e );

As well as a few additional function declarations.

void InitGL();
void InitGlew();
void LoadResources();

void DrawCubeMap( GLuint texID );

The InitGL method is used to initialize a few OpenGL state parameters before we start rendering and the InitGlew method will initialize the OpenGL extensions that are used in this demo.

Any texture or model resources that are used in this demo will be loaded in the LoadResources method.

The DrawCubeMap method will render the sky-box model that is used as the background for the scene.
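
The body of DrawCubeMap is carried over from the environment mapping demo and is not listed in this article, but the idea is simple: draw a cube centered on the camera and use each vertex position as the 3D lookup direction into the cube map. Here is a minimal sketch, assuming fixed-function OpenGL and that the caller has already translated the modelview matrix to the camera position:

void DrawCubeMap( GLuint texID )
{
    // Corner positions of a unit cube centered at the origin.
    static const float v[8][3] = {
        {-1,-1,-1}, { 1,-1,-1}, { 1, 1,-1}, {-1, 1,-1},
        {-1,-1, 1}, { 1,-1, 1}, { 1, 1, 1}, {-1, 1, 1}
    };
    // Indices of the four corners of each of the six faces.
    static const int face[6][4] = {
        {0,1,2,3}, {4,7,6,5}, {0,4,5,1},
        {3,2,6,7}, {0,3,7,4}, {1,5,6,2}
    };

    glPushAttrib( GL_ENABLE_BIT | GL_DEPTH_BUFFER_BIT );
    glDisable( GL_CULL_FACE );      // We are viewing the cube from the inside.
    glDepthMask( GL_FALSE );        // The sky box is always behind everything else.
    glEnable( GL_TEXTURE_CUBE_MAP );
    glBindTexture( GL_TEXTURE_CUBE_MAP, texID );

    glBegin( GL_QUADS );
    for ( int f = 0; f < 6; ++f )
    {
        for ( int i = 0; i < 4; ++i )
        {
            const float* p = v[ face[f][i] ];
            glTexCoord3fv( p );     // The vertex position doubles as the cube-map direction.
            glVertex3fv( p );
        }
    }
    glEnd();

    glBindTexture( GL_TEXTURE_CUBE_MAP, 0 );
    glPopAttrib();
}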

The Main Method

The main method will simply initialize a few parameters and register the callbacks for the application class.

int main( int argc, char* argv[] )
{
    g_Camera.SetRotate( g_InitialCameraRotation );
    g_Camera.SetPivot( g_InitialCameraPivot );
    g_Camera.SetTranslate( g_InitialCameraPosition );

    // Register event callbacks
    g_App.KeyPressed += OnKeyPressed;
    g_App.MouseButtonPressed += OnMouseButtonPressed;
    g_App.MouseButtonReleased += OnMouseButtonReleased;
    g_App.MouseMoved += OnMouseMoved;
    g_App.Resize += OnResized;

    g_App.PreRender += OnPreRender;
    g_App.Render += OnRender;

    g_App.Update += OnUpdate;
    g_App.Initialized += OnInitialize;
    g_App.Terminated += OnTerminate;

    return g_App.Run();
}

The camera is initialized using the parameters that were defined in the global scope and the callback functions are registered with the application class.

Finally, the main application window is created and the application loop is started by calling the Application::Run method.

The OnInitialize Method

The OnInitialize method is invoked after the application has created the OpenGL context and the main application window. We can use this method to load graphics resources and initialize the resources that are used for the demo. We will also load the effect shaders that are used to render the torus.

void OnInitialize( EventArgs& e )
{
    // Initialize OpenGL stuff.
    InitGL();
    // Initialize the extensions and check for support.
    InitGlew();

    // Create an effect manager
    EffectManager::Create().Initialize();
    EffectManager& effectMgr = EffectManager::Get();

    g_TorusDisplayList = CreateTorusDisplayList( 0.75, 2.0, 64, 64 );

    LoadResources();

    effectMgr.EffectLoaded += OnEffectLoaded;
    effectMgr.RuntimeError += OnRuntimeError;

    // Load the effects
    effectMgr.CreateEffectFromFile( "Resources/Shaders/C8E6_bump_mapping.cgfx", "C8E6_bump_mapping" );

    // Set shared parameters that will probably never change.
    effectMgr.SetGlobalAmbient( g_GlobalAmbient );

    // Setup reflective material
    g_ReflectiveMaterial.Reflection = 0.33f;
    g_ReflectiveMaterial.Diffuse = glm::vec4( 0.9f, 0.6f, 0.3f, 1.0f );
    g_ReflectiveMaterial.Specular = glm::vec4( 1.0f, 1.0f, 1.0f, 1.0f );
    g_ReflectiveMaterial.SpecularPower = 50.0f;
}

Before we can use any OpenGL extension features, we have to initialize the OpenGL states and GLEW extension support.

First, the EffectManager instance is created and initialized, and a reference to that instance is stored in the local variable called effectMgr.

Next, the display list that defines the torus primitive used to demonstrate the bump mapping effect is generated by calling CreateTorusDisplayList.

Then the texture resources are loaded in the LoadResources method.

We also register event callbacks that are invoked when an effect is loaded and when a Cg runtime error occurs.

The shader effect that performs the bump mapping (normal mapping) is then loaded by the effect manager.

The effect manager class defines a few shared parameters that can be applied to all shaders that define effect parameters with a particular semantic (see EffectManager::CreateSharedParameters to see which shared parameters are created by the effect manager class). One of these, the global ambient shared parameter, is set here.

Finally, a material that attempts to mimic the surface properties of a gold brick is defined in the g_ReflectiveMaterial variable.

Initialize the OpenGL context

Before we do any rendering, we should make sure that the OpenGL context is set up correctly. We will also be relying on the existence of some of the OpenGL 2.0 functions, so we will use GLEW to set up the extensions and check for OpenGL 2.0 support.

void InitGL()
{
    glClearDepth( 1.0f );
    glEnable( GL_DEPTH_TEST );
}

void InitGlew()
{
    if ( glewInit() != GLEW_OK )
    {
        std::cout << "Failed to initilalize GLEW!" << std::endl;
        exit(-1);
    }

    // Check for the supported extensions
    if ( !glewIsSupported( "GL_VERSION_2_0" ) )
    {
        std::cout << "Required OpenGL version support is missing." << std::endl;
        exit(-1);
    }
}

The only initialization that is strictly needed here is setting the value the depth buffer is cleared to when glClear is called with the GL_DEPTH_BUFFER_BIT flag, and enabling depth testing.

Since I am using functionality that was introduced in OpenGL 2.0, I need to initialize GLEW and check for version support. If GLEW fails to initialize or the required OpenGL version support is missing, an error message is displayed and the program exits.

Creating the Procedural Torus

The torus object is created procedurally in the CreateTorusDisplayList method. This method is identical to the one shown in the previous article titled [Environment Mapping with Cg and OpenGL]. The only difference in this demo is that in addition to the vertex normal, we also need to generate the tangent vector and optionally the binormal vector for every vertex of the torus object. Together with the vertex normal, the tangent and binormal vectors are needed to generate the tangent-space basis vectors that are used to transform the surface normals that are encoded in the normal map into object space. The creation of the tangent-space basis vectors will be discussed in more detail later in this article.
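
The method itself is not repeated in this article; the following sketch shows its general shape, assuming the parameter order used later in OnInitialize (inner radius, outer radius, sides, rings) and the TorusVertex helper shown below:

GLuint CreateTorusDisplayList( float innerRadius, float outerRadius, int sides, int rings )
{
    GLuint torusList = glGenLists( 1 );
    glNewList( torusList, GL_COMPILE );

    // Walk the (u,v) parameter grid, emitting one triangle strip per ring.
    for ( int i = 0; i < rings; ++i )
    {
        float u0 = (float)i / rings;
        float u1 = (float)( i + 1 ) / rings;

        glBegin( GL_TRIANGLE_STRIP );
        for ( int j = 0; j <= sides; ++j )
        {
            float v = (float)j / sides;
            TorusVertex( u1, v, innerRadius, outerRadius );
            TorusVertex( u0, v, innerRadius, outerRadius );
        }
        glEnd();
    }

    glEndList();
    return torusList;
}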

The TorusVertex method is used to compute the position, normal, tangent, and binormal vectors for each vertex of the torus.

// Plot a single vertex of the torus
inline void TorusVertex( float u, float v, float innerRadius, float outerRadius )
{
    static const float _2_PI = 6.283185307179586476925286766559f;

    static const float sTexCoord[3] = { 2.0, 0, 0 };
    static const float tTexCoord[3] = { 0, 1.0, 0 };

    static const GLuint POSITION = 0;   // Generic Attribute 0 is bound to the POSITION  semantic.
    static const GLuint NORMAL = 2;     // Generic Attribute 2 is bound to the NORMAL semantic.
    static const GLuint TEXCOORD0 = 8;  // Generic Attribute 8 is bound to the TEXCOORD0 semantic.
    static const GLuint TANGENT = 14;   // Generic Attribute 14 is bound to the TANGENT semantic.
    static const GLuint BINORMAL = 15;  // Generic Attribute 15 is bound to the BINORMAL semantic.

    float x, y, z;      // POSITION
    float nx, ny, nz;   // NORMAL
    float tx, ty, tz;   // TANGENT
    float bx, by, bz;   // BINORMAL
    float s, t;         // TEXCOORD

    float cu, su, cv, sv;

    cu = cosf( u * _2_PI );
    su = sinf( u * _2_PI );
    cv = cosf( v * _2_PI );
    sv = sinf( v * _2_PI );

    // Position
    x = ( outerRadius + innerRadius * cv ) * cu;
    y = ( outerRadius + innerRadius * cv ) * su;
    z = innerRadius * sv;

    // Normal
    nx = cu * cv;
    ny = su * cv;
    nz = sv;

    // Tangent
    tx = ( outerRadius + innerRadius * cv ) * -su;
    ty = ( outerRadius + innerRadius * cv ) * cu;
    tz = 0.0f;

    // Binormal
    bx = -cu * sv;
    by = -su * sv;
    bz = cv;

    // U, V texture mapping
    s = ( u * sTexCoord[0] ) + ( v * sTexCoord[1] );
    t = ( u * tTexCoord[0] ) + ( v * tTexCoord[1] );

    glVertexAttrib3f( NORMAL, nx, ny, nz );
    glVertexAttrib3f( TANGENT, tx, ty, tz );
    glVertexAttrib3f( BINORMAL, bx, by, bz );
    glVertexAttrib2f( TEXCOORD0, s, t );
    glVertexAttrib3f( POSITION, x, y, z );
}

The u and v parameters, passed from the CreateTorusDisplayList method, are in the range [0..1].

The sTexCoord and tTexCoord static variables allow the texture coordinates to be stretched across the surface of the primitive in order to achieve a more desirable texture mapping effect. I chose values here that provide a nice effect for the base texture and normal map I am using.

The POSITION, NORMAL, TEXCOORD0, TANGENT, and BINORMAL constants define the generic attribute IDs for those corresponding types. In the shader program, we can bind the input streams for the tangent and binormal vectors to the TANGENT and BINORMAL semantics in order to receive this data from the application. This will be shown in the section about the shaders.

The X, Y, and Z vertex positions are computed based on the parametric equation of a torus:

x(u, v) = ( R + r·cos(2πv) )·cos(2πu)
y(u, v) = ( R + r·cos(2πv) )·sin(2πu)
z(u, v) = r·sin(2πv)

Where

  • u and v are in the interval [0, 1].
  • R (outerRadius) is the outer radius of the torus.
  • r (innerRadius) is the inner radius of the torus.

The normal points outward from the circular core of the torus; it is the normalized cross product of the two partial derivatives of the parametric equation.

The tangent is computed by taking the partial derivative of the parametric equation with respect to u, and the binormal (also known as the bitangent) is computed by taking the partial derivative of the parametric equation with respect to v.
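
Concretely, dropping the constant factors that disappear during normalization, the two partial derivatives are:

∂P/∂u ∝ ( -(R + r·cos(2πv))·sin(2πu), (R + r·cos(2πv))·cos(2πu), 0 )
∂P/∂v ∝ ( -sin(2πv)·cos(2πu), -sin(2πv)·sin(2πu), cos(2πv) )

These correspond exactly to the Tangent and Binormal blocks in the code above.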

The texture coordinates for the torus are simply the passed-in u and v parameters scaled by the values of the sTexCoord and tTexCoord parameters.

In most cases, you won’t have to be concerned with the generation of the tangent and binormal vectors but since we are procedurally generating the torus geometry, we need to do this ourselves. Generally, the 3D modeling package can generate these vectors for you and export them together with the rest of the geometry information.

If you are interested in learning how to compute the tangent-space basis vectors for an arbitrary mesh, you can refer to the article titled [Computing Tangent Space Basis Vectors for an Arbitrary Mesh] located here: http://www.terathon.com/code/tangent.html
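
For reference, here is a minimal sketch of the per-triangle step described in that article, written with the GLM types used elsewhere in this demo (the full algorithm also averages the per-triangle results at each shared vertex and orthogonalizes the tangent against the vertex normal):

#include <glm/glm.hpp>

// Compute the tangent of a single triangle from its positions and UVs.
glm::vec3 ComputeTriangleTangent( const glm::vec3& p0, const glm::vec3& p1, const glm::vec3& p2,
                                  const glm::vec2& uv0, const glm::vec2& uv1, const glm::vec2& uv2 )
{
    glm::vec3 e1 = p1 - p0;         // Edge vectors of the triangle...
    glm::vec3 e2 = p2 - p0;
    glm::vec2 d1 = uv1 - uv0;       // ...and the matching texture-space deltas.
    glm::vec2 d2 = uv2 - uv0;

    // Solve for the direction in which the texture s-coordinate increases.
    float r = 1.0f / ( d1.x * d2.y - d2.x * d1.y );  // Assumes non-degenerate UVs.
    return glm::normalize( ( e1 * d2.y - e2 * d1.y ) * r );
}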

Finally, the vertex attributes are written to the display list. Using the glVertexAttrib[N]f methods is equivalent to using the corresponding glVertex3f, glNormal3f, and glTexCoord2f methods to commit vertex data to the GPU, but there is no (standard) method in the OpenGL SDK for sending tangent and binormal vertex data to the GPU. So, to be consistent, I use the glVertexAttrib[N]f methods to send all vertex data to the GPU. Note that the POSITION attribute is committed last: just like glVertex, setting generic attribute 0 is what actually emits the vertex.

The best resource explaining which vertex attributes are associated with which IDs is the NVIDIA developer documentation for the GPU profiles supported by Cg (http://http.developer.nvidia.com/Cg/vp40.html – see the section titled "Varying Input Semantics"). Although this documentation is horribly incomplete, it is almost the only source that documents which shader semantics are associated with which OpenGL attribute IDs. If you can provide a better source that explains how to use the vertex attributes to send vertex data to the GPU depending on the profile, then please leave a comment and let me know.

The LoadResources Method

The LoadResources method is used to load the cube map texture, the diffuse texture, and the normal map texture that will be used to render our scene.

void LoadResources()
{
    // This texture was downloaded from http://www.hazelwhorley.com/textures.html
    // These textures are protected by the Creative Commons license for use with non-commercial applications.
    // Special thanks to Hazel Whorley for creating these great textures.
    g_EnvCubeMap = SOIL_load_OGL_cubemap( "Resources/Textures/Mountain/mountain_west.bmp",
                                          "Resources/Textures/Mountain/mountain_east.bmp", 
                                          "Resources/Textures/Mountain/mountain_up.bmp",
                                          "Resources/Textures/Mountain/mountain_down.bmp",
                                          "Resources/Textures/Mountain/mountain_south.bmp",
                                          "Resources/Textures/Mountain/mountain_north.bmp",
                                          SOIL_LOAD_AUTO,
                                          SOIL_CREATE_NEW_ID,
                                          SOIL_FLAG_MIPMAPS );

    // To prevent artifacts at the edges, use clamping at texture bounds.
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Make sure we unbind and disable the cube map texture.
    glBindTexture( GL_TEXTURE_CUBE_MAP, 0 );
    glDisable( GL_TEXTURE_CUBE_MAP );

    // Load a normal map for the brick texture
    g_NormalMap = SOIL_load_OGL_texture( "Resources/Textures/RedBrick-NormalMap.png", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS );

    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );

    // Load a diffuse map for the brick texture
    g_DiffuseMap = SOIL_load_OGL_texture( "Resources/Textures/GoldBricks-ColorMap.png", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS );

    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );

    glBindTexture( GL_TEXTURE_2D, 0 );
    glDisable( GL_TEXTURE_2D );
}

This method is almost identical to the same method in the previous article titled [Environment Mapping with Cg and OpenGL] so I won’t go into any detail here. Basically we load the three textures used by the demo. First, the cube map texture is loaded that will be used to render a scenic background for our scene as well as used as the environment map to apply a reflective surface effect on the bumped torus.

After the cube map, the normal map texture is loaded, followed by the diffuse map texture; both use GL_REPEAT wrapping so they can tile across the surface of the torus.

The OnEffectLoaded Method

Whenever an effect is loaded by the effect manager, the OnEffectLoaded method will be invoked passing as a parameter a reference to the effect that was loaded.

We can use this method to set some of the constant parameters that are used by the effect (such as the textures).

void OnEffectLoaded( EffectLoadedEventArgs& e )
{
    // Set the properties for the effects
    Effect& effect = e.Effect;

    // Assign the texture parameters
    EffectParameter& cubeMapParameter = effect.GetParameterByName( "envSampler" );
    cubeMapParameter.Set( g_EnvCubeMap );

    EffectParameter& normalMapParameter = effect.GetParameterByName( "normalSampler" );
    normalMapParameter.Set( g_NormalMap );

    EffectParameter& baseTextureParam = effect.GetParameterByName( "diffuseSampler" );
    baseTextureParam.Set( g_DiffuseMap );
}

The three texture parameters are set in this function. First the cube map is assigned to the "envSampler" parameter. Then the normal map is assigned to the "normalSampler" effect parameter, and finally the diffuse map is assigned to the "diffuseSampler" shader parameter.

The OnUpdate Method

For every frame, the application class will invoke the OnUpdate method.

void OnUpdate( UpdateEventArgs& e )
{
    static float fAnimTimer = 0.0f; 
    static float fReloadTimer = 0.0f;
    static float fRotationRate = 45.0f;

    // Every two seconds, we'll check to see if the effects need to be reloaded.
    fReloadTimer += e.ElapsedTime;
    if ( fReloadTimer > 2.0f )
    {
        EffectManager::Get().ReloadEffects();
        fReloadTimer = 0.0f;
    }

    if ( g_bAnimate )
    {
        fAnimTimer += e.ElapsedTime;

        g_fRotatePrimitive = fmodf(fAnimTimer * fRotationRate, 360.0f );

        // Move the light position in a circle
        g_LightPosition.x = 5.0f * sinf( fAnimTimer );
        g_LightPosition.y = 2.0f;
        g_LightPosition.z = 5.0f * cosf( fAnimTimer );
    }
}

Every two seconds, we ask the effect manager to check whether any of the loaded shader effects have been modified on disk and, if so, to reload them automatically by calling the EffectManager::ReloadEffects method.

When animation is enabled, the rotation parameter is updated and the light position is moved around the scene in a circle.

The OnPreRender Method

Before we can render the scene, we need to make sure that the effect manager’s shared parameters are set correctly.

void OnPreRender( RenderEventArgs& e )
{
    // Update the shared parameters owned by the effect manager
    EffectManager& mgr = EffectManager::Get();

    mgr.SetViewMatrix( g_Camera.GetViewMatrix() );
    mgr.SetProjectionMatrix( g_Camera.GetProjectionMatrix() );

    mgr.SetElapsedTime( e.ElapsedTime );
    mgr.SetApplicationTime( e.TotalTime );
    mgr.SetMousePosition( g_CurrentMousePos );
    mgr.SetMouseButtonState( g_bLeftMouseDown, g_bRightMouseDown );

    g_Camera.ApplyViewTransform();
}

We first get a reference to the EffectManager singleton. The EffectManager defines a few shared parameters that shader parameters can be connected to, so that any effect that declares a parameter with a particular semantic automatically has that parameter updated whenever the effect manager's shared parameters are updated. You can refer to the EffectManager::CreateSharedParameters and EffectManager::UpdateSharedParameters methods in the sample code to find out which shared parameters are supported by the effect manager.

Finally, the view transform is applied so that objects rendered using the fixed-function pipeline are rendered correctly.

The OnRender Method

The OnRender method will render a single torus in the center of the world with a rotation about the Y-axis. It will also render the skybox that is used as the background for our scene. A sphere that matches the color of the light is also rendered to represent the single light source that is used to illuminate the torus.

void OnRender( RenderEventArgs& e )
{
    EffectManager& mgr = EffectManager::Get();

    glm::mat4 viewMatrix = glm::inverse( g_Camera.GetViewMatrix() );
    glm::vec3 eyePos = glm::vec3( viewMatrix[3] );

    // (Optimization) We only need to clear the depth buffer, because
    // the cube map will overdraw the entire color buffer anyway.
    glClear( GL_DEPTH_BUFFER_BIT );

    // Draw the unit cube map around the camera.
    DrawCubeMap( g_EnvCubeMap ); 

    DrawAxis( 2.0f, g_Camera.GetPivot() );

    glm::mat4x4 worldMatrix(1.0f);

We start by getting a reference to the EffectManager instance and setting up eyePos, the position of the viewer in world space, which is extracted from the inverted view matrix.

The depth buffer is cleared using the glClear method. Notice that we don't need to clear the color buffer, because the skybox that we render every frame always overdraws the entire color buffer anyway.

The skybox is rendered at the position of the camera by calling the DrawCubeMap method and an axis that represents the focal point for the pivot camera is rendered using the DrawAxis method.

The position and orientation of the torus primitive are determined by the world transform that is associated with the object. This transformation is stored in the worldMatrix variable defined above.

Next we’ll draw the torus using the “C8E6_bump_mapping” shader effect that was loaded in the OnInitialize method.

    // Draw a reflective, bump-mapped torus
    {        
        Effect& effect = mgr.GetEffect("C8E6_bump_mapping");

        EffectParameter& eyeParameter = effect.GetParameterByName( "gEyePosWorld" );
        eyeParameter.Set( eyePos );

        EffectParameter& lightParameter = effect.GetParameterByName( "gLight" );
        lightParameter["position"].Set( g_LightPosition );
        lightParameter["color"].Set( g_LightColor );

        worldMatrix = glm::translate( glm::vec3( 0.0f, 0.0f, 0.0f ) );
        worldMatrix = glm::rotate( worldMatrix, -g_fRotatePrimitive, glm::vec3( 0, 1, 0 ) );

        mgr.SetWorldMatrix( worldMatrix );
        mgr.SetMaterial( g_ReflectiveMaterial );

        mgr.UpdateSharedParameters();
        effect.UpdateParameters();

        Technique& technique = effect.GetFirstValidTechnique();
        foreach( Pass* pass, technique.GetPasses() )
        {
            pass->BeginPass();
            glCallList( g_TorusDisplayList );
            pass->EndPass();
        }
    }

Inside this block, a reference to the shader effect is first retrieved from the effect manager.

Next, the shader parameter that represents the world position of the viewer is queried and set to the eyePos value that was computed earlier.

The gLight shader parameter is a struct parameter with members position and color. These parameters are set from the global variables for the light's position and color, respectively.

Then we define the world transform of the torus by building a rotation matrix that rotates the torus about the Y-axis.

The remaining uniform shader parameters (the world matrix and the material) are then set on the effect manager, and all parameters are updated and committed to the GPU.

Finally, the only technique defined in the shader is queried, and for each pass in the technique the torus geometry is rendered using the display list that was created by the CreateTorusDisplayList method earlier.

Then we also want to draw a sphere that has the color and position of the light that is used to illuminate the torus.

    // Draw a sphere that represents the light
    glPushMatrix();
    {
        glTranslatef( g_LightPosition.x, g_LightPosition.y, g_LightPosition.z );

        glColor3f( g_LightColor.r, g_LightColor.g, g_LightColor.b );
        glutSolidSphere( 0.5f, 8, 8 );
    }
    glPopMatrix();

    g_App.Present();
}

Here, a sphere is rendered with the same color and position as the light that is used to illuminate the torus.

And finally, the back buffer is presented on the screen by calling the Application::Present method.

The Shader Effect

We are only using the “C8E6_bump_mapping.cgfx” shader that was loaded in the OnInitialize method shown earlier.

Structs and Globals

First we will define a few structs and some global variables that are used in the vertex program and the fragment program.

struct Material {
    float4 Ke           : MAT_EMISSIVE;
    float4 Ka           : MAT_AMBIENT;
    float4 Kd           : MAT_DIFFUSE;
    float4 Ks           : MAT_SPECULAR;
    float shininess     : MAT_SPECULARPOWER;
    float reflection    : MAT_REFLECTION;
};

// From page 138
struct Light {
    float3 position;
    float4 color;
};

// Normal map
texture normalTexture;
sampler2D normalSampler = sampler_state
{
    Texture = <normalTexture>;
    MinFilter = Linear;
    MagFilter = Linear;
};

// Diffuse map
texture diffuseTexture;
sampler2D diffuseSampler = sampler_state
{
    Texture = <diffuseTexture>;
    MinFilter = Linear;
    MagFilter = Linear;
};

// Environment map
texture cubeTexture;
samplerCUBE envSampler = sampler_state
{
    Texture = <cubeTexture>;
    MinFilter = Linear;
    MagFilter = Linear;
};

float4x4 gModelViewProj : WORLDVIEWPROJECTION;
float4x4 gModelToWorld  : WORLD;
float4x4 gModelToWorldIT: WORLDINVERSETRANSPOSE;

// Eye position in world space
float3   gEyePosWorld;
Material gMaterial;
Light    gLight;
float4   gGlobalAmbient : GLOBALAMBIENT;

The Material struct stores the material properties that are required by the fragment program. The Ke defines the emissive color component and the Ka parameter defines the ambient color component. Diffuse and specular values are defined in the Kd and Ks properties respectively and the shininess parameter defines the specular power of the material which controls how shiny the material appears. In order to implement the environment effects on our primitives we also need to know how reflective the material is. For that we define the reflection property which should be assigned a value between 0 and 1 where 0 indicates the material is not reflective and 1 indicates the material is completely reflective.

The Light struct defines the properties of the light that are used in the fragment shader. For this demo we are only interested in the position and general color of the light. You might see some lighting models split the light color into ambient, diffuse, and specular colors, but for simplicity I don't do that.

For a complete discussion on implementing lighting in Cg, you can refer to my previous article titled [Transformation and Lighting in Cg].
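
For reference, the lighting model that the fragment program at the end of this article implements is the familiar Blinn-Phong model:

C = Ke + Ka·globalAmbient + Kd·lightColor·max(N·L, 0) + Ks·lightColor·max(N·H, 0)^shininess

where N is the surface normal, L is the direction from the surface point to the light, and H is the half-angle vector between L and the view direction V. The result is modulated by the base texture color and blended with the environment color by the material's reflection factor.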

Then we define three samplers. The diffuseSampler is used to sample the base texture of the primitive and the normalSampler is used to sample the normal map. The envSampler is the environment cube map texture that will be used as the reflection map that is blended with the object's base color. The same environment map is also used to render the background of our scene.

Then we also define three matrices which are used to transform the object-space vertices and normals into clip space and world space. The gModelToWorldIT matrix is used to transform the vertex normal and the tangent vectors into world space correctly, even if the object has non-uniform scaling applied to its world transform.

Multiplying the vertex normal by the inverse transpose of the world matrix ensures that the normal vector stays normal; that is, its orientation is preserved while the scaling is removed. If we simply multiplied the normal by the world matrix and the world matrix contained a non-uniform scale (different scale factors on the three axes), the normal would become skewed. At least that's the short answer. For a longer answer, you can refer to "Subject 5.27: How do I transform normals?" of the comp.graphics.algorithms frequently asked questions.
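
The one-line justification: if a tangent vector transforms as T' = M·T, the transformed normal N' must still satisfy N'·T' = 0. Choosing N' = (M⁻¹)ᵀ·N gives

N'ᵀ·T' = ( (M⁻¹)ᵀ·N )ᵀ·( M·T ) = Nᵀ·M⁻¹·M·T = Nᵀ·T = 0

so perpendicularity is preserved no matter what non-uniform scale M contains.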

Finally, additional global variables are defined for the position of the viewer in world space, the material, the light, and the global ambient term, all of which are used to calculate the lighting in the fragment shader.

The Vertex Program

The vertex program will be used to transform the object-space vertex positions into clip space. Also, the vertex normals and tangents will be transformed to world-space and passed to the fragment program.

void C8E5v_bump(  float4 position : POSITION,
                  float2 texCoord : TEXCOORD0,
                  float3 normal   : NORMAL,
                  float3 tangent  : TANGENT,

              out float4 oPosition  : POSITION,     // Vertex position in clip-space
              out float2 oTexCoord  : TEXCOORD0,
              out float3 oPositionW : TEXCOORD1,    // Vertex position in world space
              out float3 oNormalW   : TEXCOORD2,    // Normal vector in world space
              out float3 oTangentW  : TEXCOORD3,    // Tangent vector in world space
              out float3 oBinormalW : TEXCOORD4,    // Binormal vector in world space

          uniform float4x4 modelViewProj, 
          uniform float4x4 modelToWorld,
          uniform float4x4 modelToWorldIT )
{
    oPosition = mul(modelViewProj, position);
    oTexCoord = texCoord;

    oPositionW = mul( modelToWorld, position );
    oNormalW = mul( modelToWorldIT, float4( normal, 0 ) );

    // Compute the tangent space vectors
    oTangentW = normalize( mul( modelToWorldIT, float4( tangent, 0 ) ) );
    oBinormalW = cross( oNormalW, oTangentW );
}

The first few parameters to the function (the position, texCoord, normal, and tangent values expressed in object space) are passed from the application program. The next set of out parameters are computed in the vertex program and passed as input parameters to the fragment program. The uniform parameters do not change within a single pass; these are the parameters that are assigned to the effect parameters in the application before the geometry is rendered.

First, the clip-space position of the vertex is computed and assigned to the output parameter with the POSITION semantic. This is the only out parameter that is not an in parameter of the fragment program.

The object-space position and normal vectors are transformed into world space because I chose to perform all of the lighting calculations in world space instead of object space (as was done in the previous article titled [Transformation and Lighting in Cg]). My reason for this is that I wanted to combine the environment mapping technique demonstrated in [Environment Mapping with Cg and OpenGL] with normal mapping. In order to sample the correct color from the environment map, the reflected ray must be computed in world space, because the environment cube map also exists in world space. I could either perform all of the basic lighting equations in tangent space (the accepted way of doing lighting when applying normal maps), or I could do the lighting calculations in object space or world space. If I did the lighting in tangent space, I would still need to convert the reflected ray from tangent space into world space in the fragment program in order to do the cube map lookup. And performing the lighting in tangent space would require transforming the light vectors from world space into tangent space in the vertex program and passing the tangent-space light vectors as parameters to the fragment program, which would limit the program to one or two lights per pass.

So I decided to only compute the tangent space basis vectors in the vertex program and do all of the lighting calculations in the fragment program in world space. This only required that the surface normal from the normal map needed to be transformed from tangent-space into world-space. This could be done simply by multiplying the tangent-space normal by the transpose of the matrix formed from the tangent basis vectors in world-space. This will be shown in the fragment program.

Before we can perform correct lighting based on a surface normal that is stored in a texture, we need to be able to transform the normal from the normal map into world space (the space in which the light position and vectors are expressed). To do that, we need a set of basis vectors that describe the tangent space at each vertex of the mesh. We have already seen one of the required vectors in other vertex programs: the vertex normal. To build an orthonormal rotation matrix we need at least one more vector, the tangent vector, which is supplied by the application when rendering the model. The third vector, the binormal (or bitangent), can be computed simply by taking the cross product of the normal and tangent vectors, as shown on the last line of the vertex program above.

The Fragment Program

The fragment program will do the final lighting calculations and blend the base color of the torus together with a reflected color from the environment map.

// This is C8E4f_specSurf from "The Cg Tutorial" (Addison-Wesley, ISBN
// 0321194969) by Randima Fernando and Mark J. Kilgard.  See page 209.
float3 expand( float3 v ) { return (v-0.5)*2; }

void C8E4f_specSurf(float2 texCoord          : TEXCOORD0,
                    float3 positionW         : TEXCOORD1,   // Vertex position in world space
                    float3 normalW           : TEXCOORD2,   // Normal vector in world space
                    float3 tangentW          : TEXCOORD3,   // Tangent vector in world space
                    float3 binormalW         : TEXCOORD4,   // Binormal vector in world space

                out float4 color             : COLOR,

            uniform float3      eyePosW,                     // Eye Position in world space
            uniform Material    material,
            uniform float4      globalAmbient,
            uniform Light       light,                      // Light properties in world space
            uniform sampler2D   normalMap,
            uniform sampler2D   diffuseMap,
            uniform samplerCUBE environmentMap )
{
    // Tangent basis matrix
    float3x3 tangentMatrix = float3x3( normalize( tangentW ), 
                                       normalize( binormalW ),
                                       normalize( normalW ) );

    // Fetch and expand range-compressed normal
    float3 normalTex = tex2D(normalMap, texCoord).xyz;
    // Surface normal in texture space
    float3 N = normalize( expand(normalTex) );

    // Transform from texture space to world space
    N = mul( N, tangentMatrix ); 

    // Light vector
    float3 L = normalize( light.position - positionW );
    // View vector
    float3 V = normalize( eyePosW - positionW );
    // Half-angle vector
    float3 H = normalize ( L + V );

    // Compute diffuse and specular lighting dot products
    float specular = 0;
    float diffuse = saturate(dot(N, L));
    if ( diffuse > 0 )
    {
        specular = pow(saturate(dot(N, H)), material.shininess);
    }
    
    // Compute the reflection vector in world space
    float3 I = positionW - eyePosW;
    float3 R = reflect( I, N ); // both I and N need to be in world space!
    float4 reflectedColor = texCUBE( environmentMap, R );
    float4 baseColor = tex2D( diffuseMap, texCoord );
    float4 diffuseColor = material.Kd * light.color * diffuse;
    float4 specularColor = material.Ks * light.color * specular;
    
    color = baseColor * ( material.Ke + material.Ka * globalAmbient + diffuseColor + specularColor );
    color = lerp( color, reflectedColor, material.reflection );
}

Since the surface normal stored in the normal map is stored as an RGB value where each component is in the range [0..1], we need to convert the value stored in the texture into a normalized surface normal with each component in the range [-1..1]. We use the expand method defined just above the fragment program to convert the range-compressed surface normal into a 3D normal we can use.
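
In equation form, the encoding applied when the normal map is baked is c = (n + 1) / 2, and expand performs the inverse, n = 2·(c - 0.5). For example, the unperturbed tangent-space normal (0, 0, 1) is stored as the color (0.5, 0.5, 1.0), which is why tangent-space normal maps appear predominantly light blue.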

The first thing we do in the fragment program is build a rotation matrix from the tangent basis vectors that were computed in the vertex program. Since the vectors are passed as texture coordinates, we need to re-normalize them to remove any scaling that may have been introduced by interpolation. The tangent-space matrix is computed by setting the tangent vector as the X-vector, the binormal as the Y-vector, and the normal vector as the Z-vector of the transform matrix.

It’s important to understand that this Tangent, Binormal, Normal (TBN) matrix will transform a vector from world-space to tangent-space. So in order to transform the tangent-space normal to world-space, we must multiply by the inverse of the tangent-space matrix.

The tangent basis matrix can be used to convert the view (eye) position, the light position, and the light direction vectors from world space into tangent space, but it can also be used to convert the normal from the normal map from tangent space into world space (it makes more sense to transform one vector into world space than to transform three vectors into tangent space). So in the fragment program we transform the tangent-space surface normal into world space by multiplying the normal vector (N) by the tangent matrix on the right. This is equivalent to multiplying the normal vector by the transpose of the tangent matrix on the left.

Since we know the tangent matrix is orthonormal, we can compute its inverse simply by taking its transpose, which saves us the trouble of inverting the tangent matrix. For a review of matrix transposes and inverses, you can refer to my previous article titled [3D Math Primer for Game Programmers (Matrices)].
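
In matrix form, with the TBN matrix built from the tangent, binormal, and normal rows:

N_world = TBN⁻¹·N_tangent = TBNᵀ·N_tangent

and Cg's mul( N, tangentMatrix ), which treats N as a row vector, computes exactly this product.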

Now that we have the surface normal in world-space, we can use it to calculate the regular lighting contributions as shown in the previous article titled [Transformation and Lighting in Cg], and to compute the reflected color from the environment map as shown in the previous article titled [Environment Mapping with Cg and OpenGL].
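
For reference, the reflection vector that Cg's reflect( I, N ) computes is

R = I - 2·(N·I)·N

where I is the incident vector from the eye to the surface point; the resulting R is used directly as the lookup direction into the cube map.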

The final fragment color is simply a blend between the lit texture color of the fragment with the reflected color of the environment based on the material’s reflection factor.

Techniques and Passes

In this effect, we only define a single technique and a single pass.

technique main
{
    pass p0
    {
        VertexProgram = compile latest C8E5v_bump( gModelViewProj, gModelToWorld, gModelToWorldIT );
        FragmentProgram = compile latest C8E4f_specSurf( gEyePosWorld, gMaterial, gGlobalAmbient, gLight, normalSampler, diffuseSampler, envSampler );
    }
}

Both the vertex program and the fragment program specify the keyword "latest" as the profile to compile the program for. This causes the Cg runtime to compile the vertex and fragment programs using the latest profile supported on the platform the application is running on.
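
If you were compiling the programs manually through the Cg runtime rather than relying on the "latest" keyword in the effect file, the equivalent application-side calls would look something like this (a sketch, assuming a valid Cg/OpenGL context has already been created):

#include <Cg/cg.h>
#include <Cg/cgGL.h>

// Query the best profiles the current GPU and driver support...
CGprofile vertexProfile   = cgGLGetLatestProfile( CG_GL_VERTEX );
CGprofile fragmentProfile = cgGLGetLatestProfile( CG_GL_FRAGMENT );

// ...and let the runtime pick the optimal compiler flags for them.
cgGLSetOptimalOptions( vertexProfile );
cgGLSetOptimalOptions( fragmentProfile );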

If everything goes well, you should see something similar to what is shown below.

References

The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics (2003). Randima Fernando and Mark J. Kilgard. Addison Wesley.

The cube map images used for this demo were downloaded from http://www.hazelwhorley.com/textures.html. Special thanks to Hazel Whorley for creating these great cube maps.
The brick texture and normal map were downloaded from http://www.tutorialsforblender3d.com/Textures/Bricks-NormalMap/Bricks_Normal_3.html.

Download the Source

The source code including dependencies used to create this demo can be downloaded from the link below.

[BumpMapping.zip].

9 thoughts on “Normal Mapping with Cg and OpenGL”

    • This is a file that is required by SDL to provide a sound implementation on the Windows operating system. It is included in the DirectX SDK, and if your search paths are configured correctly, the compiler should automatically find this file.

      Make sure you configure your Win32 search paths in Visual Studio 2008 & 2010 (not just the x64 search paths!)

  1. The texture coordinates of the torus are wrong/not set.

    You can see this if you only apply a texture to the torus. If you replace glCallList( g_TorusDisplayList ) with glutSolidTeapot(1.0f) you can see the shader is working though :)

    • Not sure why the textures aren’t working for you. When you run the demo on your PC, do you not see the same torus that is shown in the YouTube video embedded at the end of this article? If you don’t see the textured torus when you run this demo, then I would have to guess there is something unique about your hardware. Perhaps the attribute ID I am using for the texture coordinates (first stage) is something other than '8'?

      • No, I don’t see the torus shown in the vid, I only see the reflection of the skybox.

        I fixed it by replacing TEXCOORD0, on line 54 in C8E6_bump_mapping.cgfx, with ATTR8. Don’t know if it still works on other machines though.

        PS: I tested this on an ATI Mobility Radeon HD 4570.

          • This only happens on ATI cards, and it’s because the glVertexAttrib3f() function is used to set the attribute ID. On ATI you have to use the ATTR0-ATTR15 semantics when setting the vertex attributes manually like this. NVIDIA works fine with the ATTR semantics as well, so to be ATI-safe, use those.

  2. Great tut, just keep it up! I really like your tutorials’ style: it doesn’t just give us the solution, it also gives us the knowledge we need to create our own engine!

    I also found a bug :) In the cgfx file this line makes no sense; I mean, it’s pointless to multiply a vector with a matrix in this order (it also causes invalid normals).
    N = mul( N, tangentMatrix );

    Solution:
    N = mul( tangentMatrix, N );

    PS: sorry for the English

    • Vader:

      In the article it is stated:

      It’s important to understand that this Tangent, Binormal, Normal (TBN) matrix will transform a vector from world-space to tangent-space. So in order to transform the tangent-space normal to world-space, we must multiply by the inverse of the tangent-space matrix.

      we transform the tangent-space surface normal into world-space by multiplying the normal vector (N) by the tangent matrix on the right. This is equivalent to multiplying the normal vector by the transpose of the tangent matrix on the left.

      And since we know that the tangent matrix is orthonormalized, multiplying the normal vector (N) by the transpose of the tangent matrix is equivalent to multiplying by its inverse, only much cheaper!

      So no, this is not a bug – it is entirely intentional and the reasoning for it is explained in the article.
