Transformation and Lighting in Cg

Spotlight Shader Effect

In this article I will demonstrate how to implement a basic lighting model using the Cg shader language. I assume the reader is familiar with the OpenGL graphics API and how to set up an application that uses OpenGL. If you want to see how to set up an application that can be used for OpenGL graphics rendering, you can refer to my previous article titled [Introduction to OpenGL for Game Programmers].

Introduction

Lighting is one of the most important features in games because, if used correctly, lighting can cause the participant to experience the desired emotions at the intended moments. This is an important factor when one is trying to create a game that completely envelops the participant. For me, one of the most enveloping games I’ve played was Doom 3. Its masterful use of light and shadows sometimes caused me to scream like a little girl when a baddie suddenly jumped out of the darkness.

In this article, I will demonstrate how the basic lighting model can be implemented using shaders. If you’ve only used the fixed-function pipeline of Direct3D or OpenGL, you have probably taken advantage of the built-in lighting system that is implemented in the rendering pipeline. If you want to write your own programmable vertex or fragment shaders, then you need to bypass the built-in transformation and lighting calculations and implement them yourself. In this article, I will show you how to do that.

Dependencies

The demo shown in this article uses several third-party libraries to simplify the development process.

  • The Cg Toolkit (Version 3): The Cg Toolkit provides the tools and API needed to integrate the Cg shader programs in your application.
  • Boost (1.46.1): Boost has some very useful libraries that I use throughout my demo applications. In this demo, I use the Signals, Filesystem, Function, and Bind boost libraries to provide generic, platform-independent functionality that simplifies some of the features used in this demo.
  • Simple DirectMedia Layer (1.2.14): Simple DirectMedia Layer (SDL) is a cross-platform multimedia library that I use to create the main application window, initialize OpenGL, and handle keyboard, mouse and joystick input.
  • OpenGL Mathematics (GLM): An OpenGL-centric mathematics library for 3D graphics applications.

Any dependencies used by the demo are included in the source distribution available at the bottom of the article so you can hopefully just unzip, compile and run the included samples.

The Basic Lighting Model

In computer graphics, there are several lighting models that can be used to simulate the reflection of light off materials with specific properties. The Lambertian reflectance model simulates light that is diffused equally in all directions. The Gouraud shading model calculates the diffuse and specular contributions at every vertex of a model, and the color of each pixel is interpolated between the vertices. This is also known as per-vertex lighting. Phong shading, often referred to as per-pixel lighting, is a lighting method that interpolates the vertex normal across the face of a triangle in order to calculate per-pixel (or per-fragment) lighting. Using the Phong lighting model, the final color of the pixel is the sum of the ambient (and emissive), diffuse, and specular lighting contributions. A variation of the Phong shading model is the Blinn-Phong lighting model, which calculates the specular contribution slightly differently than the standard Phong lighting model. In this article, I will use the Blinn-Phong lighting model to compute the lighting contribution of the object’s materials.

The general formula for the lighting model is:

$$C_{final} = C_{emissive} + C_{ambient} + C_{diffuse} + C_{specular}$$

Where $C_{final}$ is the final output color of the pixel, $C_{ambient}$ is the ambient term, $C_{emissive}$ is the emissive term (the color the material emits itself without any lights), $C_{diffuse}$ is the light reflected evenly in all directions based on the angle to the light source, and $C_{specular}$ is the concentration of light that is reflected based on both the angle to the light and the angle to the viewer.

Ambient and Emissive Terms

The ambient term can be thought of as the global contribution of light to the entire scene. Even if there is no light in the scene, the global ambient contribution can be used to create some global illumination. If there are lights in the scene, each light can contribute to the global ambient if it is in the line of sight of the object it is illuminating. The global illumination is then the sum of all of the individual lights’ ambient contributions.

The emissive term is the amount of light the material emits without any lights or ambient contribution. This value can be used to simulate materials that glow in the dark. It does not automatically create a light source in the scene: if you want to simulate an object that actually emits light, you have to create a light source at the object’s location, and everything in the scene must take that light source into consideration when rendering (except the object itself).

If we make the following definitions:

  • $G_{ambient}$ is the global ambient contribution.
  • $K_{a}$ is the ambient reflection constant of the material.
  • $K_{e}$ is the emissive reflection constant of the material.

Then the final ambient contribution of the fragment will be:

$$C_{ambient} = K_{a} \cdot G_{ambient}$$

And the emissive contribution will be:

$$C_{emissive} = K_{e}$$

Lambert Reflectance

Lambert reflectance is the diffusion of light that occurs equally in all directions, more commonly called the diffuse term. The only thing we need to consider when calculating the Lambert reflectance term is the angle of the light relative to the surface normal.

If we consider the following definitions:

  • $P$ is the position of the surface point we want to shade.
  • $N$ is the surface normal at the location we want to shade.
  • $L_{p}$ is the position of the light source.
  • $L_{d}$ is the diffuse contribution of the light source.
  • $L = \text{normalize}(L_{p} - P)$ is the normalized direction vector pointing from the point we want to shade to the light source.
  • $K_{d}$ is the diffuse reflectance of the material.

Then the diffuse term of the fragment is:

$$C_{diffuse} = K_{d} \cdot L_{d} \cdot \max(N \cdot L, 0)$$

The $\max$ function ensures that for surfaces that are pointing away from the light (when $N \cdot L < 0$), the fragment will not have any diffuse lighting contribution.
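To make the formula concrete, here is a minimal CPU-side sketch of the diffuse term using GLM (the function and parameter names are mine; the shader-side version appears later in the article):

```cpp
#include <algorithm>    // std::max
#include <glm/glm.hpp>  // glm::vec3, glm::vec4, glm::dot, glm::normalize

// A minimal sketch of the Lambert diffuse term. P, N, and lightPosition
// must all be expressed in the same space.
glm::vec4 DiffuseTerm( const glm::vec3& P, const glm::vec3& N,
                       const glm::vec3& lightPosition,
                       const glm::vec4& lightColor, const glm::vec4& Kd )
{
    // Normalized direction from the shaded point to the light source.
    glm::vec3 L = glm::normalize( lightPosition - P );
    // Clamp the cosine so surfaces facing away from the light get no diffuse.
    float diffuseLight = std::max( glm::dot( N, L ), 0.0f );
    return Kd * lightColor * diffuseLight;
}
```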

The Phong and Blinn-Phong Lighting Models

The final contribution to calculating the lighting is the specular term. The specular term is dependent on the angle between the normalized vector from the point we want to shade to the eye position (V) and the normalized direction vector pointing from the point we want to shade to the light (L).

Blinn-Phong vectors

If we consider the following definitions:

  • $P$ is the position of the point we want to shade.
  • $N$ is the surface normal at the location we want to shade.
  • $L_{p}$ is the position of the light source.
  • $E_{p}$ is the position of the eye (or camera’s position).
  • $V = \text{normalize}(E_{p} - P)$ is the normalized direction vector from the point we want to shade to the eye.
  • $L = \text{normalize}(L_{p} - P)$ is the normalized direction vector pointing from the point we want to shade to the light source.
  • $R = 2(N \cdot L)N - L$ is the reflection vector of the light source about the surface normal.
  • $L_{s}$ is the specular contribution of the light source.
  • $K_{s}$ is the specular reflection constant for the material that is used to shade the object.
  • $\alpha$ is the “shininess” constant for the material. The higher the shininess of the material, the smaller the highlight on the material.

Then, using the Phong lighting model, the specular term is calculated as follows:

$$C_{specular} = K_{s} \cdot L_{s} \cdot \max(V \cdot R, 0)^{\alpha}$$
The Blinn-Phong lighting model calculates the specular term slightly differently. Instead of calculating the angle between the view vector (V) and the reflection vector (R), the Blinn-Phong lighting model calculates the half-way vector between the view vector (V) and the light vector (L).

In addition to the previous variables, if we also define:

  • $H = \text{normalize}(L + V)$ is the half-angle vector between the view vector and the light vector.

Then the new formula for calculating the specular term is:

$$C_{specular} = K_{s} \cdot L_{s} \cdot \max(N \cdot H, 0)^{\alpha}$$

As you can see, the Blinn-Phong lighting model requires an additional normalization for the half-angle vector, which uses a square-root function that can be expensive on some profiles. The advantage of the Blinn-Phong model becomes apparent if we consider both the light vector (L) and the view vector (V) to be at infinity. For directional lights, the light vector (L) is simply the negated direction vector of the light. If this condition is true, the half-angle vector (H) can be pre-computed for each directional light source and simply passed to the shader program as an argument, saving the cost of computing the half-angle vector for each fragment.
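As a concrete illustration, the half-angle vector for a directional light could be pre-computed on the CPU with GLM along these lines (a minimal sketch; the function and parameter names are hypothetical):

```cpp
#include <glm/glm.hpp>

// Sketch: pre-computing the half-angle vector for a directional light,
// assuming both L and V are treated as constant ("at infinity").
glm::vec3 ComputeHalfAngle( const glm::vec3& lightDirection,  // direction the light points
                            const glm::vec3& viewDirection )  // direction the camera points
{
    // For a directional light, L is simply the negated light direction.
    glm::vec3 L = glm::normalize( -lightDirection );
    // Likewise, V is the negated view direction.
    glm::vec3 V = glm::normalize( -viewDirection );
    // H can now be passed to the shader once per frame instead of being
    // normalized for every fragment.
    return glm::normalize( L + V );
}
```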

Attenuation

Attenuation is the gradual fall-off of the intensity of a light as it gets farther away from the object it illuminates.

The attenuation factor of a light is determined by the distance from the point being shaded to the light, and three constant parameters that define the constant, linear and quadratic attenuation of the light source.

If we define the following variables:

  • $d$ is the scalar distance between the point being shaded and the light source.
  • $k_{C}$ is the constant attenuation factor of the light source.
  • $k_{L}$ is the linear attenuation factor of the light source.
  • $k_{Q}$ is the quadratic attenuation factor of the light source.

Then the formula for the attenuation factor is:

$$\text{attenuationFactor} = \frac{1}{k_{C} + k_{L} \cdot d + k_{Q} \cdot d^{2}}$$

For lights that are infinitely far away from the object being rendered, such as directional lights, we don’t want any attenuation to occur (if we are simulating the effects of the sun, for example). In this case, we set the constant attenuation factor to 1.0 and the linear and quadratic attenuation factors to 0.0. This will always result in an attenuation factor of 1.0, independent of the distance to the light source.

The result of the attenuation factor is a scalar that is applied to both the diffuse and specular terms.
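As a worked example, with the constants used for the spotlight later in this article ($k_{C} = 1.0$, $k_{L} = 0.0$, $k_{Q} = 0.0001$), a point 100 units from the light gives $d^{2} = 10000$, so the attenuation factor is $\frac{1}{1.0 + 0.0 + 0.0001 \cdot 10000} = \frac{1}{2} = 0.5$: the diffuse and specular terms are halved at that distance.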

Spotlights

Spotlights have the additional property that they only emit light within a cone directed along the light’s direction vector. The cone of the light can be defined by two cosine angles, the inner cone angle and the outer cone angle. The intensity of the resulting light is computed by taking the dot product between the normalized vector from the light position to the point being shaded and the light’s direction vector.

Inner and Outer Cones for Spotlights (from "The Cg Tutorial")

If we define the following variables:

  • $P$ is the position of the point we want to shade.
  • $L_{p}$ is the position of the light source.
  • $L_{dir}$ is the normalized direction vector of the light source.
  • $D = \text{normalize}(P - L_{p})$ is the normalized direction vector from the light source to the point we want to shade.
  • $\cos\theta = D \cdot L_{dir}$ is the cosine of the angle between the direction vector from the light source to the point we want to shade and the direction vector of the light source.
  • $\cos\theta_{inner}$ is the cosine angle of the inner cone of the spotlight.
  • $\cos\theta_{outer}$ is the cosine angle of the outer cone of the spotlight.

Then the formula for calculating the intensity of the spotlight is:

$$\text{spotEffect} = \text{smoothstep}(\cos\theta_{outer}, \cos\theta_{inner}, \cos\theta)$$

The smoothstep function will smoothly interpolate between two values. This function produces better results than simply doing a linear interpolation between the two values.
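For reference, a C++ equivalent of Cg’s smoothstep looks like the sketch below (this is the standard Hermite easing; the function name is mine):

```cpp
#include <algorithm>  // std::max, std::min

// A C++ equivalent of Cg's smoothstep: clamp x to the [minVal, maxVal] range,
// then ease with the Hermite polynomial 3t^2 - 2t^3, which has zero slope at
// both ends so the edge of the spotlight cone fades without a visible seam.
float SmoothStep( float minVal, float maxVal, float x )
{
    float t = ( x - minVal ) / ( maxVal - minVal );
    t = std::max( 0.0f, std::min( 1.0f, t ) );
    return t * t * ( 3.0f - 2.0f * t );
}
```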

It is interesting to note that the cosine angle of the inner cone will be larger than the cosine angle of the outer cone. This is because we are measuring the dot product of a vector relative to the direction vector of the light. If the vectors are parallel (in the same direction), the dot product is 1.0, and if they are perpendicular, the dot product is 0.0. This means that if you want a wider spotlight cone, you have to decrease the cosine angle of the cones (closer to 0.0).

Transformations and the Importance of Spaces

Now that we’ve seen the general equations for implementing the basic lighting equations, we need to apply it. Before we can apply these formulas we need to know something about the spaces in which all of our objects live. In a previous article titled [3D Math Primer for Game Programmers (Coordinate Systems)] I discussed some of the common spaces that exist in 3D graphics programs. Please refer to that article if you require an explanation of object space, world space, and view space.

The reason I am bringing up spaces in this article is that it is very important that before we can sensibly calculate the lighting that is applied to an object, we must ensure that all positions and directions of objects, lights, and the eye position are in the same space.

In general, we will transform positions and directions from one space to another by multiplying by the corresponding matrix that represents that space. For example, to get from object space to world space, we transform the vertex position and vertex normals by the world-space matrix. However, to get positions and normals from object space to view space (the space that has the camera position at the origin), we can’t simply multiply the object-space positions and normals by the view matrix, because the view matrix will only transform positions and normals from world space into view space.

Let’s consider this simple notation:

[Object Space] * [World Transform]      => [World Space]
[World Space]  * [View Transform]       => [View Space]
[View Space]   * [Projection Transform] => [Clip Space]

If we have positions and directions already expressed in world space and we need to express those positions and directions in object space, we can just multiply by the inverse of the world transform to get the coordinates in object space.

[Clip Space]  * [Inverse Projection Transform] => [View Space]
[View Space]  * [Inverse View Transform]       => [World Space]
[World Space] * [Inverse World Transform]      => [Object Space]
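As a minimal GLM sketch of that last step (the variable names are hypothetical), a world-space light position can be brought into an object’s local space like this:

```cpp
#include <glm/glm.hpp>

// Sketch: bringing a world-space light position into an object's local space.
// worldMatrix is assumed to be the object's world transform.
glm::vec3 LightPositionInObjectSpace( const glm::mat4& worldMatrix,
                                      const glm::vec3& lightPositionWS )
{
    glm::mat4 invWorld = glm::inverse( worldMatrix );
    // w = 1 so the translation part of the matrix is applied (see the
    // discussion of homogeneous coordinates later in this article).
    return glm::vec3( invWorld * glm::vec4( lightPositionWS, 1.0f ) );
}
```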

This is a very important point to remember when creating your lighting shaders! If you only remember one thing after reading this article, it should be this: Everything must be in the same space!

We can also combine matrices in order to transform coordinates from object space directly into clip space.

[Object Space] * ( [World Transform] * [View Transform] * [Projection Transform] ) => [Clip Space]

Depending on the API the order of the multiplication may be left-to-right (as shown above and used for DirectX) or right-to-left (shown below and used for OpenGL):

( [Projection Transform] * [View Transform] * [World Transform] ) * [Object Space] => [Clip Space]

It is usually advisable to pre-compute this combined matrix in advance and pass it as a parameter to the shader program so you avoid doing two matrix multiplies for every vertex of the mesh.

In some cases it may appear that the GPU is better at doing the matrix multiplication than pre-computing it on the CPU. For small meshes with very few vertices this may be true, but as the mesh’s vertex count increases, it becomes clear that pre-computing the combined matrix on the CPU is more efficient.
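With GLM, pre-computing the combined matrix is a one-liner; a minimal sketch (the function name is mine):

```cpp
#include <glm/glm.hpp>

// Sketch: pre-computing the combined model-view-projection matrix on the CPU.
// With GLM (and OpenGL conventions) the multiplication reads right-to-left,
// so the world transform is applied to a vertex first and projection last.
glm::mat4 ComputeModelViewProjection( const glm::mat4& projection,
                                      const glm::mat4& view,
                                      const glm::mat4& world )
{
    return projection * view * world;  // pass this to the shader once per object
}
```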

The Gouraud Shader Program

Gouraud shading is a per-vertex lighting shader that computes the basic lighting equation for each vertex of the mesh. The final fragment color is simply a linear interpolation between the vertices of the primitive. It may be cheaper to compute the lighting at the per-vertex level than the per-fragment level, but the result is a poor reproduction of the specular term across the face of a triangle. When viewed from close-up, this artifact is very obvious (especially with very large triangles), but when viewed from far away, the effect is not that obvious. As an optimization, you could implement Gouraud shading on objects that are far away from the camera and switch to a Phong or Blinn-Phong lighting model when an object comes closer to the camera.

Let’s take a look at the shader program that computes per-vertex lighting.

First, we need to define the global variables that will be passed to the vertex and fragment program entry points.

float4x4 gModelViewProj : WORLDVIEWPROJECTION;
float4   gGlobalAmbient : GLOBALAMBIENT;
float4   gLightColor;
float3   gLightPosition;
float3   gEyePosition;
float4   gEmissive      : EMISSIVE;
float4   gAmbient       : AMBIENT;
float4   gDiffuse       : DIFFUSE;
float4   gSpecular      : SPECULAR;
float    gSpecularPower : SPECULARPOWER;

The part that comes after the ‘:’ character is called the semantic. The semantic is used to inform the Cg runtime how that parameter should be bound to incoming geometry data. The Cg runtime allows you to query the semantic of global variables so that you can determine how that variable should be calculated. For example, I can query the effect for any parameter that uses the WORLDVIEWPROJECTION semantic and set that variable to the combined model, view, projection matrix regardless of the name of the parameter.
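Under the hood, this kind of semantic matching can be done with the raw Cg runtime. The sketch below is an illustration, not the demo’s actual EffectManager code; the function name is hypothetical, but the Cg runtime calls are real:

```cpp
#include <cstring>               // strcmp
#include <Cg/cg.h>               // core Cg runtime
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>  // glm::value_ptr

// Sketch: walk an effect's parameters and bind any parameter whose semantic
// is WORLDVIEWPROJECTION, regardless of the parameter's name.
void SetModelViewProjBySemantic( CGeffect effect, const glm::mat4& mvp )
{
    CGparameter param = cgGetFirstEffectParameter( effect );
    while ( param )
    {
        const char* semantic = cgGetParameterSemantic( param );
        if ( semantic && strcmp( semantic, "WORLDVIEWPROJECTION" ) == 0 )
        {
            // GLM stores matrices column-major, hence the 'fc' variant.
            cgSetMatrixParameterfc( param, glm::value_ptr( mvp ) );
        }
        param = cgGetNextParameter( param );
    }
}
```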

Vertex Program

The vertex program for the Gouraud shader program will calculate the lighting for each vertex of the mesh. The lighting model used for this implementation is the Blinn-Phong lighting model described earlier.

void C5E1v_basicLight(float4 position  : POSITION,
                      float3 normal    : NORMAL,

                  out float4 oPosition : POSITION,
                  out float4 color     : COLOR,

              uniform float4x4 modelViewProj,
              uniform float4 globalAmbient,
              uniform float4 lightColor,
              uniform float3 lightPosition,
              uniform float3 eyePosition,
              uniform float4 Ke,
              uniform float4 Ka,
              uniform float4 Kd,
              uniform float4 Ks,
              uniform float  shininess)
{
  oPosition = mul(modelViewProj, position);

  float3 P = position.xyz;
  float3 N = normal;

  // Compute emissive term
  float4 emissive = Ke;

  // Compute ambient term
  float4 ambient = Ka * globalAmbient;

  // Compute the diffuse term
  float3 L = normalize(lightPosition - P);
  float diffuseLight = max(dot(N, L), 0);
  float4 diffuse = Kd * lightColor * diffuseLight;

  // Compute the specular term
  float3 V = normalize(eyePosition - P);
  float3 H = normalize(L + V);
  float specularLight = pow(max(dot(N, H), 0), shininess);
  if (diffuseLight <= 0) specularLight = 0;
  float4 specular = Ks * lightColor * specularLight;

  color = emissive + ambient + diffuse + specular;
  color.w = 1.0f;
}

The uniform keyword implies that this parameter is the same for every vertex the program will compute. Uniform variables cannot change their value between different vertices during the same render pass.

The out keyword implies that the value of this parameter will be computed in the function and returned when the function returns. The oPosition and color parameters are bound to the POSITION and COLOR semantics. These semantics have a special meaning to the Cg compiler. If a vertex program does only one thing, it must at least compute the clip-space position of the incoming vertex. The incoming vertex position in object space is passed in using the POSITION semantic, and the vertex program must compute the clip-space position and save it to the out parameter that is also bound to the POSITION semantic. The out parameter bound to the POSITION semantic is also the only parameter that isn’t passed as an input parameter to the fragment program. The other out parameter, color, will be passed to the fragment program and stored in the in parameter that is bound to the same semantic, in this case COLOR.

In the first line of the function body, the clip-space position is computed by multiplying the incoming object-space vertex position by the combined model, view, projection matrix that has been pre-computed on the CPU and passed in the modelViewProj parameter.

Next, the object-space position and object-space normal are stored. It is important to note the space being used to compute the lighting terms. If the vertex position and vertex normal are in object space, then the position of the light, direction of the light, camera position (eyePosition), and view vector (V) must also be expressed in object space. For this vertex program, we assume the light position and camera position have been computed correctly in object space on the CPU so we can avoid the matrix multiplication in the vertex program (since these values are constant for the entire object being rendered).

The emissive term is computed simply by storing the emissive contribution of the material.

The ambient term is computed by multiplying the global ambient factor by the ambient contribution of the material.

The diffuse term is computed next. The L vector is the normalized vector from the point being shaded to the light source. Again, I want to emphasize that both the point P and the light position parameter must be in the same space (object space in this case), otherwise the result will be an incorrect vector!

The diffuse light contribution is the cosine angle of the L vector and the object-space vertex normal. If the L vector and the surface normal (N) are directly facing each other, the vertex will be fully lit. As the angle between L and the surface normal N increases, the surface will gradually darken until the L vector and the surface normal are perpendicular. If the vectors are pointing in opposite directions, the diffuse factor will be clamped to 0.

The specular term is calculated from the cosine angle of the half-angle vector (H) and the surface normal (N). We must also take the angle to the light into consideration when calculating the specular term. If the polygon face is back-facing, we also don’t want to apply any specular contribution to the final color. We do this by checking if the diffuse contribution is less than 0. If it is, we also set the specular contribution to 0.

The final color is computed by summing the lighting terms: emissive, ambient, diffuse, and specular. We also set the color’s w component to 1.0 to ensure the object appears opaque.

Fragment Program

Let’s take a look at the fragment program for this effect.

float4 C5E1f_basicLight(float4 c : COLOR) : COLOR
{ 
    return c; 
}

Not much to it. Since we only want to compute the lighting per-vertex, we simply pass through the interpolated vertex color of the fragment. Here the input parameter c is bound to the input semantic COLOR. Just like the vertex program, the simplest fragment program must output at least a value that is bound to the output semantic COLOR. This will be the value that is written to the color buffer of the current render target.

Techniques and Passes

For this effect we also need to specify a technique that can be used to shade the geometry. The technique for this effect consists of a single pass that uses the vertex program and the fragment program just shown.

technique t0
{
    pass p0
    {
        VertexProgram = compile arbvp1 C5E1v_basicLight(gModelViewProj, gGlobalAmbient, gLightColor, gLightPosition, gEyePosition, gEmissive, gAmbient, gDiffuse, gSpecular, gSpecularPower);
        FragmentProgram = compile arbfp1 C5E1f_basicLight();

        DepthTestEnable = true;
        DepthMask = true;
    }
}

The pass specifies the VertexProgram that is to be used for that pass which defines the profile (in this case arbvp1 which is the very basic vertex program profile for OpenGL) and the entry point for the program (in this case C5E1v_basicLight).

The FragmentProgram is also specified in the pass which is compiled using the arbfp1 fragment program profile for OpenGL.

The additional parameters in the pass, DepthTestEnable and DepthMask, are OpenGL state variables that can be specified in the effect file; the Cg runtime will ensure that the correct state variables are set in the OpenGL API when the pass is used. For a full listing of all of the state variables that can be specified, please refer to the section called “CgFX States” in the Cg reference manual. The reference manual is included in the Cg Toolkit.

The Cg Runtime Implementation

Now that we’ve defined our shader program let’s see what the application looks like that is used to load the shader and use the effect we just created to render some geometry. I am using the effect framework that I described in the previous article titled [Introduction to Cg Runtime with OpenGL]. The effect framework used in this demo is provided together with the source code that you can download at the end of the article.

Globals Variables

First, I’ll define a few global variables that are used throughout the demo application.

Application g_App( "Effect Template", 512, 512 );
PivotCamera g_Camera;

// Some default materials
Material g_BrassMaterial;
Material g_RedPlasticMaterial;
Material g_GreenEmeraldMaterial;

glm::vec3 g_LightPosition;
glm::vec4 g_LightColor;
bool g_bAnimate = true;

static int gs_iCurrentScene = 0;

The Application class is part of the Effect Framework. It is responsible for creating the OpenGL render window and invoking the callbacks when specific events occur (keyboard input, mouse input, rendering and updating application logic).

The PivotCamera class is also included in the Effect Framework. This class will be used to store both the projection matrix and the view matrix. It uses a simple arc-ball camera implementation to allow you to rotate and translate the view.

The Material class simply stores variables that define the material’s ambient, emissive, diffuse, specular, and specular power (or shininess) components for the object being rendered.

We also need to define a light for our scene. For this simple demo, we only specify the position of the light and the color of the light.

The g_bAnimate flag allows us to toggle animation of the light in the scene on and off.

We also want to keep track of what scene we want to render. In this demo, there are 4 different scenes. The gs_iCurrentScene variable is used to keep track of which scene we want to render. We can switch between the scenes using the 1-4 keys on the keyboard (more on this later).

The Main Function

int main( int argc, char* argv[] )
{
    g_Camera.SetTranslate( glm::vec3(0,0,-10) );

    // Register event callbacks
    g_App.KeyPressed += OnKeyPressed;
    g_App.MouseButtonPressed += OnMouseButtonPressed;
    g_App.MouseButtonReleased += OnMouseButtonReleased;
    g_App.MouseMoved += OnMouseMoved;
    g_App.Resize += OnResized;

    g_App.PreRender += OnPreRender;
    g_App.Render += OnRender;

    g_App.Update += OnUpdate;
    g_App.Initialized += OnInitialize;
    g_App.Terminated += OnTerminate;

    return g_App.Run();
}

At the start of main, the position of the camera is initialized to 10 units in front of the origin of the world. This is sufficient for what we want to render since we will mostly be rendering only a single geometric object.

The application class will notify the user of certain events through function callbacks. The application callbacks are registered before the application starts so that the user can handle the Initialized event which occurs as soon as the application is run.

Then we run the application by calling the Run method of the application class. This function will not return until the user quits the application.

The Initialize Function

Once the application is initialized and the OpenGL context is created, we can create and initialize the Cg runtime via the EffectManager class.

void OnInitialize( EventArgs& e )
{
    // Create an effect manager
    EffectManager::Create().Initialize();
    EffectManager& effectMgr = EffectManager::Get();

    effectMgr.EffectLoaded += OnEffectLoaded;
    effectMgr.RuntimeError += OnRuntimeError;

    // Load the effects
    effectMgr.CreateEffectFromFile( "Shaders/C4E1_transform.cgfx", "C4E1_transform" );
    effectMgr.CreateEffectFromFile( "Shaders/C5E1_basicLight.cgfx", "C5E1_basicLight" );
    effectMgr.CreateEffectFromFile( "Shaders/C5E3_fragmentLight.cgfx", "C5E3_fragmentLight" );
    effectMgr.CreateEffectFromFile( "Shaders/C5E10_spotAttenLighting.cgfx", "C5E10_spotAttenLighting" );

    // Set shared parameters that will probably never change.
    effectMgr.SetGlobalAmbient( glm::vec4( 0.1f, 0.1f, 0.1f, 1.0f ) );

    // Setup brass material
    g_BrassMaterial.Emissive = glm::vec4( 0.0f, 0.0f, 0.0f, 1.0f );
    g_BrassMaterial.Ambient = glm::vec4( 0.33f, 0.22f, 0.03f, 1.0f );
    g_BrassMaterial.Diffuse = glm::vec4( 0.78f, 0.57f, 0.11f, 1.0f );
    g_BrassMaterial.Specular = glm::vec4( 0.99f, 0.91f, 0.81f, 1.0f );
    g_BrassMaterial.SpecularPower = 27.8f;

    // Setup red plastic material
    g_RedPlasticMaterial.Emissive = glm::vec4( 0.0f, 0.0f, 0.0f, 1.0f );
    g_RedPlasticMaterial.Ambient = glm::vec4( 0.0f, 0.0f, 0.0f, 1.0f );
    g_RedPlasticMaterial.Diffuse = glm::vec4( 0.5f, 0.0f, 0.0f, 1.0f );
    g_RedPlasticMaterial.Specular = glm::vec4( 0.7f, 0.6f, 0.6f, 1.0f );
    g_RedPlasticMaterial.SpecularPower = 32.0f;

    g_GreenEmeraldMaterial.Emissive = glm::vec4( 0.0f, 0.0f, 0.0f, 1.0f );
    g_GreenEmeraldMaterial.Ambient = glm::vec4( 0.0215f, 0.1745f, 0.0215f, 1.0f );
    g_GreenEmeraldMaterial.Diffuse = glm::vec4( 0.07568f, 0.61424f, 0.07568f, 1.0f );
    g_GreenEmeraldMaterial.Specular = glm::vec4( 0.633f, 0.727811f, 0.633f, 1.0f );
    g_GreenEmeraldMaterial.SpecularPower = 76.8f;

    g_LightColor = glm::vec4( 0.95f, 0.95f, 0.95f, 1.0f );
}

The EffectManager class implements the features of the Cg runtime. In my previous article titled [Introduction to Cg Runtime with OpenGL] I describe the inner-workings of the Effect Framework, so I won’t go over the details again here.

After the EffectManager is created and initialized, the effects that are used for the 4 scenes are loaded. You have seen the source code for the “C5E1_basicLight” shader. I will also show the implementation of the other shaders later in the article.

Next, a shared parameter is set for the global ambient term. Shared parameters allow you to define a single parameter once and have its value applied to every effect that is connected to that shared parameter. In this case, the shared parameter is connected to any effect parameter that has the GLOBALAMBIENT semantic.

Then a few materials that are used for the geometric objects being rendered are set up.

Finally, a slightly off-white light color is specified.

The OnUpdate Method

In the OnUpdate method, we will animate the position of the light. The user can press the spacebar to toggle the animation of the light.

void OnUpdate( UpdateEventArgs& e )
{
    static float fAnimTimer = 0.0f; 
    static float fReloadTimer = 0.0f;

    // Every 2 seconds, we'll check to see if the effects need to be reloaded.
    fReloadTimer += e.ElapsedTime;
    if ( fReloadTimer > 2.0f )
    {
        EffectManager::Get().ReloadEffects();
        fReloadTimer = 0.0f;
    }

    if ( g_bAnimate )
    {
        fAnimTimer += e.ElapsedTime;
        // Move the light position in a circle
        g_LightPosition.x = 10.0f * sinf( fAnimTimer );
        g_LightPosition.y = 3.0f;
        g_LightPosition.z = 10.0f * cosf( fAnimTimer );
    }
}

Every 2 seconds, the EffectManager::ReloadEffects() method will be invoked. This allows us to modify the shader program at runtime. The EffectManager will automatically reload any out-of-date shader files.

If the animation is enabled, we simply move the light in a big circle in the x, z plane.

The OnPreRender Method

The OnPreRender function will make sure that all of the shared parameters in the EffectManager are updated.

void OnPreRender( RenderEventArgs& e )
{
    // Update the shared parameters owned by the effect manager
    EffectManager& mgr = EffectManager::Get();

    mgr.SetViewMatrix( g_Camera.GetViewMatrix() );
    mgr.SetProjectionMatrix( g_Camera.GetProjectionMatrix() );

    mgr.SetElapsedTime( e.ElapsedTime );
    mgr.SetApplicationTime( e.TotalTime );
    mgr.SetMousePosition( g_CurrentMousePos );
    mgr.SetMouseButtonState( g_bLeftMouseDown, g_bRightMouseDown );

    mgr.UpdateSharedParameters();

    g_Camera.ApplyViewTransform();
}

The effect manager defines a few default shared parameters that can be automatically updated in your shader program simply by using the semantic defined in the EffectManager class (see the EffectManager::CreateSharedParameters method in the Effect Framework). You can also create your own shared parameters by using the EffectManager::CreateSharedParameter method and specifying the type and semantic of the shared parameter. To get a reference to the shared parameter, use the EffectManager::GetParameterBySemantic method and set the parameter before calling the EffectManager::UpdateSharedParameters method.

After any shared parameters have been changed, the EffectManager::UpdateSharedParameters method must be called to commit the parameter’s value to the shader program. Any effects that use any of the shared parameter’s semantics will also be updated.
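Putting those steps together, a custom shared parameter might be used roughly as follows (a sketch only; the exact Effect Framework signatures may differ, and the FOGCOLOR semantic is an invented example):

```cpp
// Sketch of a custom shared parameter, assuming signatures similar to the
// methods named above (the real Effect Framework API may differ).
EffectManager& mgr = EffectManager::Get();

// Create a float4 shared parameter bound to an invented FOGCOLOR semantic.
mgr.CreateSharedParameter( "FOGCOLOR", CG_FLOAT4 );

// Look the parameter up by semantic, set its value, then commit it so every
// effect that declares a parameter with the FOGCOLOR semantic picks it up.
EffectParameter& fogColor = mgr.GetParameterBySemantic( "FOGCOLOR" );
fogColor.Set( glm::vec4( 0.5f, 0.5f, 0.5f, 1.0f ) );
mgr.UpdateSharedParameters();
```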

The RenderScene1 Method

The second scene is rendered with the RenderScene1 method (scene numbering is 0-based). The “C5E1_basicLight” effect is loaded, its parameters are set, and a solid torus is rendered using the only pass defined in the effect.

void RenderScene1()
{
    EffectManager::Get().SetMaterial( g_BrassMaterial );

    Effect& effect = EffectManager::Get().GetEffect( "C5E1_basicLight" );

    EffectParameter& lightPosParameter = effect.GetParameterByName( "gLightPosition" );
    lightPosParameter.Set( g_LightPosition );

    EffectParameter& lightColorParameter = effect.GetParameterByName("gLightColor");
    lightColorParameter.Set( g_LightColor );

    EffectParameter& eyePosParameter = effect.GetParameterByName( "gEyePosition" );
    glm::mat4 viewMatrix = glm::inverse( g_Camera.GetViewMatrix() );
    glm::vec3 eyePos = glm::vec3( viewMatrix[3] );

    eyePosParameter.Set( eyePos );

    effect.UpdateParameters();

    EffectManager::Get().UpdateSharedParameters();

    Technique& technique = effect.GetFirstValidTechnique();

    technique.BeginTechnique();
    foreach( Pass* pass, technique.GetPasses() )
    {        
        if ( pass->BeginPass() )
        {
            glutSolidTorus( 0.75, 2.0, 16, 16 );
            pass->EndPass();
        }
    }

    // Draw a sphere that represents the light
    glEnable( GL_DEPTH_TEST );

    glPushMatrix();
    glTranslatef( g_LightPosition.x, g_LightPosition.y, g_LightPosition.z );

    glColor3f( g_LightColor.r, g_LightColor.g, g_LightColor.b );
    glutSolidSphere( 0.5f, 8, 8 );

    glPopMatrix();
}

The effect manager class also defines a shared parameter for the material. The material’s shared parameters are set using the EffectManager::SetMaterial method.

After setting the material, we retrieve a reference to the effect we want to use to render the geometry and set some of the properties used by the effect. I want to emphasize here that the light position and the eye position (camera position) must be specified in object space for this effect to produce correct results. Since the object is placed at the origin of the world, its world matrix is the identity matrix. In this case, the inverse of the world matrix is also the identity matrix, so there is no point in transforming the light position and the eye position by the identity matrix because it will result in the same points! In a later example, we will render multiple objects which are placed at different positions in the scene. In that case, we must multiply the light position and eye position by the inverse of the object’s world matrix to transform these points into object space.

Next, we update the effect parameters and the shared parameters. This ensures that any modifications to these parameters are committed to the shader program before we use them.

To use the shader effect we defined to render our geometry, we generally get a reference to the technique we want to use and loop through all of the passes of the technique and render the geometry for each pass. We will follow this practice here to render a torus.

For each pass of the technique we bind the vertex program and the fragment program by calling the Pass::BeginPass method. When we are done rendering everything we need to render with the same effect, we simply call Pass::EndPass to disable the effect and any programs that are associated with the effect.

We also want to visualize the position of the light in the scene. We do this by rendering a small sphere that is the same color and at the same position as the light.

If everything goes right, we should see something similar to the image shown below.

Per-Vertex Lighting Effect

The Blinn-Phong Shader Program

The Blinn-Phong shader program differs from the Gouraud shader program in that it computes the basic lighting equation for every fragment of the object as opposed to only at the vertex positions. The underlying equations for the per-fragment shader program are identical to the per-vertex shader program, but in this effect the vertex program is a pass-through vertex program while the fragment program does all the lighting calculations. Since these two effects are almost identical, I will omit the explanation of the lighting functions and only identify the differences between the two implementations.

Again, we begin by defining the global parameters that can be set by the application.

float4x4 gModelViewProj : WORLDVIEWPROJECTION;
float4   gGlobalAmbient : GLOBALAMBIENT;
float4   gLightColor;
float3   gLightPosition;
float3   gEyePosition;
float4   gEmissive      : EMISSIVE;
float4   gAmbient       : AMBIENT;
float4   gDiffuse       : DIFFUSE;
float4   gSpecular      : SPECULAR;
float    gSpecularPower : SPECULARPOWER;

These properties are identical to the ones used in the per-vertex effect. The real difference is in the implementation of the vertex program and the fragment program.

The Vertex Program

The implementation of the vertex program is simply a pass-through vertex program whose only real task is to transform the incoming object-space vertex position into clip-space. The object-space position of the vertex and the object-space normal of the vertex are simply passed-through to the output parameters. These two additional out parameters will be the automatic in parameters to the fragment program.

void C5E2v_fragmentLighting(float4 position : POSITION,
                            float3 normal   : NORMAL,

                        out float4 oPosition : POSITION,
                        out float3 objectPos : TEXCOORD0,
                        out float3 oNormal   : TEXCOORD1,

                    uniform float4x4 modelViewProj)
{
  oPosition = mul(modelViewProj, position);
  objectPos = position.xyz;
  oNormal = normal;
}

If you look at the out parameters for this method, you will notice that the vector associated with the POSITION semantic must be the clip-space position of the vertex. The other two out parameters use the TEXCOORD semantics to pass the interpolated values to the fragment program.

The Fragment Program

In this effect, the fragment program is where all the fancy lighting calculations are done. The lighting equations in this fragment program are identical to those of the vertex program in the per-vertex effect.

void C5E3f_basicLight(float4 position  : TEXCOORD0,                        
                      float3 normal    : TEXCOORD1,

                  out float4 color     : COLOR,

              uniform float4 globalAmbient,
              uniform float4 lightColor,
              uniform float3 lightPosition,
              uniform float3 eyePosition,
              uniform float4 Ke,
              uniform float4 Ka,
              uniform float4 Kd,
              uniform float4 Ks,
              uniform float  shininess)
{
  float3 P = position.xyz;
  float3 N = normalize(normal);

  // Compute emissive term
  float4 emissive = Ke;

  // Compute ambient term
  float4 ambient = Ka * globalAmbient;

  // Compute the diffuse term
  float3 L = normalize(lightPosition - P);
  float diffuseLight = max(dot(L, N), 0);
  float4 diffuse = Kd * lightColor * diffuseLight;

  // Compute the specular term
  float3 V = normalize(eyePosition - P);
  float3 H = normalize(L + V);
  float specularLight = pow(max(dot(H, N), 0), shininess);
  if (diffuseLight <= 0) specularLight = 0;
  float4 specular = Ks * lightColor * specularLight;

  color = emissive + ambient + diffuse + specular;
  color.w = 1;
}

The only notable difference here is that because we are using the TEXCOORD semantic to pass the object-space normal to the fragment program, the interpolation engine can possibly introduce some undesirable scale in the normal vector. To eliminate this undesirable scaling, we simply re-normalize the incoming normal vector before using it. The rest of the function is identical to the per-vertex implementation.

Techniques and Passes

Only a single technique with a single pass is used to define this effect. It is very similar to the per-vertex implementation, except all of the lighting properties are being passed to the fragment program instead of the vertex program.

technique t0
{
    pass p0
    {
        VertexProgram = compile arbvp1 C5E2v_fragmentLighting( gModelViewProj );
        FragmentProgram = compile arbfp1 C5E3f_basicLight( gGlobalAmbient, gLightColor, gLightPosition, gEyePosition, gEmissive, gAmbient, gDiffuse, gSpecular, gSpecularPower );

        DepthTestEnable = true;
        DepthMask = true;
    }
}

As you can see, this is pretty much the same pass definition as previously shown except now all the lighting parameters are being passed to the fragment program.

The RenderScene2 Method

Back on the C++ side, we can render the geometry using the per-fragment lighting effect in the exact same way as we did the per-vertex lighting effect. The only difference between these two render methods is the name of the effect being used.

void RenderScene2()
{
    EffectManager::Get().SetMaterial( g_BrassMaterial );

    Effect& effect = EffectManager::Get().GetEffect( "C5E3_fragmentLight" );

    EffectParameter& lightPosParameter = effect.GetParameterByName( "gLightPosition" );
    lightPosParameter.Set( g_LightPosition );

    EffectParameter& lightColorParameter = effect.GetParameterByName("gLightColor");
    lightColorParameter.Set( g_LightColor );

    EffectParameter& eyePosParameter = effect.GetParameterByName( "gEyePosition" );
    glm::mat4 viewMatrix = glm::inverse( g_Camera.GetViewMatrix() );
    glm::vec3 eyePos = glm::vec3( viewMatrix[3] );

    eyePosParameter.Set( eyePos );

    effect.UpdateParameters();
    EffectManager::Get().UpdateSharedParameters();

    Technique& technique = effect.GetFirstValidTechnique();

    technique.BeginTechnique();
    foreach( Pass* pass, technique.GetPasses() )
    {        
        if ( pass->BeginPass() )
        {
            glutSolidTorus( 0.75, 2.0, 16, 16 );
            pass->EndPass();
        }
    }

    // Draw a sphere that represents the light
    glEnable( GL_DEPTH_TEST );

    glPushMatrix();
    glTranslatef( g_LightPosition.x, g_LightPosition.y, g_LightPosition.z );
    glColor3f( g_LightColor.r, g_LightColor.g, g_LightColor.b );

    glutSolidSphere( 0.5f, 8, 8 );

    glPopMatrix();
}

The only difference between the RenderScene1 method and the RenderScene2 method is the effect that is being used to render the geometry. Here, the “C5E3_fragmentLight” effect is used instead of the “C5E1_basicLight” effect. Otherwise, everything else is identical!

If everything is correct, we should see something like the image shown below.

Per-Fragment Lighting Effect

Although the shape still looks faceted due to the tessellation of the mesh, if you compare this image with the per-vertex image, you will notice the specular highlights are much more detailed and pronounced than in the per-vertex image.

Spotlight Effects

Spotlight effects are rendered the same way as the point lights described in the previous sections. The only difference is that the spotlight defines a cone angle in the direction of the light that affects the amount of diffuse and specular light that reaches the point we are shading. For the spotlight effect, we will also introduce the attenuation factor. The attenuation factor is not limited to spotlights, but generally it is only used for positional lights (as shown in the previous example) and spotlights. Directional lights usually simulate a light source an infinite distance away and therefore don’t have any attenuation.

We will also define struct types in the CgFX shader file. We will define one struct to store material properties, and one struct to store light properties.

Let’s take a look at the variable declarations for the spotlight shader effect.

// From page 128
struct Material {
    float4 Ke       : EMISSIVE;
    float4 Ka       : AMBIENT;
    float4 Kd       : DIFFUSE;
    float4 Ks       : SPECULAR;
    float shininess : SPECULARPOWER;
};

// From page 138
struct Light {
    float3 position;
    float4 color;
    float kC;
    float kL;
    float kQ;
    float3 direction;
    float cosInnerCone;
    float cosOuterCone;
};

Material gMaterial;
Light    gLight;
float4x4 gModelViewProj : WORLDVIEWPROJECTION;
float4   gGlobalAmbient : GLOBALAMBIENT;
float3   gEyePosition;

We first declare a struct type to store material properties followed by a struct declaration for light properties.

The semantics declared for the material properties ensures that the effect manager can match-up the shared parameters to the effect’s parameters.

The light struct declares the following properties:

  • float3 position: The position of the light in object-space coordinates.
  • float4 color: The color of the light.
  • float kC: The constant attenuation factor of the light.
  • float kL: The linear attenuation factor of the light.
  • float kQ: The quadratic attenuation factor of the light.
  • float3 direction: The object-space direction of the light.
  • float cosInnerCone: The cosine angle of the spotlight’s inner cone.
  • float cosOuterCone: The cosine angle of the spotlight’s outer cone.

After the struct declarations, we’ll define a few variables that use them. The global properties declared after the structs will be accessed and set by the application before the effect is used to render objects.

The Vertex Program

For the vertex program for this effect, we will use the same vertex program that we used for the per-fragment lighting shader. This vertex program is a pass-through vertex program that just passes the object-space input position and normal of the vertex directly through to the fragment program.

void C5E2v_fragmentLighting(float4 position : POSITION,
                            float3 normal   : NORMAL,

                            out float4 oPosition : POSITION,
                            out float3 objectPos : TEXCOORD0,
                            out float3 oNormal   : TEXCOORD1,

                            uniform float4x4 modelViewProj)
{
    oPosition = mul(modelViewProj, position);
    objectPos = position.xyz;
    oNormal = normal;
}

There is nothing new here since this is identical to the vertex program for the per-fragment lighting effect described in the previous section.

The Attenuation Function

We can also declare functions in the Cg shader language. Functions are declared in much the same way as they are in the C programming language.

The Attenuation function will calculate the attenuation factor of the light based on the distance to the point being shaded.

float C5E6_attenuation(float3 P, 
                       Light light) 
{
    float d = distance(P, light.position);
    return 1 / (light.kC + light.kL * d + light.kQ * d * d); 
}

The attenuation function accepts the point we are shading and a Light parameter that defines the light we are computing the attenuation factor for. Again, I want to emphasize that both the point P and the light’s position must be in the same space. It doesn’t matter which space you choose (world space is a common choice), just make sure they are both in the same space.

The formula to calculate the attenuation was shown in the section titled “Attenuation” so I won’t discuss it here. The important thing to note is that as the distance between the position of the light and the point we are shading increases, the value returned by this function approaches zero.

The Dual-Cone Spotlight Function

We will also define a function to calculate the contribution of the light within the two cone angles of the spotlight. This function takes the same arguments as the attenuation function and computes the intensity of the light within the cone of the spotlight.

float C5E9_dualConeSpotlight(float3 P,
                             Light  light)
{
    float3 V = normalize(P - light.position);
    float cosOuterCone = light.cosOuterCone;
    float cosInnerCone = light.cosInnerCone;
    float cosDirection = dot(V, light.direction);

    return smoothstep(cosOuterCone, cosInnerCone, cosDirection);
}

Notice that V is the normalized direction vector from the light source to the point being shaded. This is the opposite of the L vector that is used to compute the diffuse and specular components of the point being shaded. If your spotlight effect doesn’t seem to be working very well and produces weird results, it may be that you have inverted this direction vector.

The smoothstep function will return a smooth interpolation value between cosOuterCone and cosInnerCone. It is important to note that the cosine of the outer cone angle is actually smaller than the cosine of the inner cone angle. That’s because as the cosine angles get closer to 1.0, the angle of the cone decreases towards 0 degrees, and if the cosine angles are decreased to 0.0, the angle of the cone increases towards 180 degrees. This is due to the fact that the dot product of two parallel vectors is 1.0, while the dot product of two perpendicular vectors is 0.0. This explains why the outer cone angle is the minimum value to the smoothstep function and the inner cone angle is the maximum value.
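As a worked example, the demo later in this article uses a cosInnerCone of 0.95 and a cosOuterCone of 0.85. Those cosines correspond to half-angles of about $\cos^{-1}(0.95) \approx 18^\circ$ for the inner cone and $\cos^{-1}(0.85) \approx 32^\circ$ for the outer cone, so the larger cosine really is the narrower cone.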

The Spotlight Attenuation Function

We will also create a function that calculates the diffuse factor and the specular factor of the light at the point we are shading. This function will take both the attenuation and the spotlight factors into account and return the diffuse and specular contributions of the light.

void C5E10_spotAttenLighting(Light  light, 
                             float3 P,
                             float3 N,
                             float3 eyePosition,
                             float  shininess,

                             out float4 diffuseResult,
                             out float4 specularResult) 
{
    // Compute attenuation
    float attenuationFactor = C5E6_attenuation(P, light);

    // Compute spotlight effect
    float spotEffect = C5E9_dualConeSpotlight(P, light);

    // Compute the diffuse lighting
    float3 L = normalize(light.position - P);
    float diffuseLight = max(dot(N, L), 0);
    diffuseResult = attenuationFactor * spotEffect * 
        light.color * diffuseLight;

    // Compute the specular lighting
    float3 V = normalize(eyePosition - P);
    float3 H = normalize(L + V);
    float specularLight = pow(max(dot(N, H), 0), 
        shininess);
    if (diffuseLight <= 0) specularLight = 0;
    specularResult = attenuationFactor * spotEffect * 
        light.color * specularLight;
}

The spotAttenLighting function accepts the position being shaded (P), the surface normal (N) at point P, the position of the eye (camera position) and the shininess factor of the material being shaded. I know I can’t say this enough, but we must make sure that P, N, and eyePosition are all in the same space!

The method will return the diffuse and specular contributions of the light in the diffuseResult and specularResult out parameters.

First, the attenuation and spotlight factors are computed using the functions described above. Then the diffuse and specular contributions are computed in the same way as for the point lights shown earlier, except for the additional multiplication by the computed attenuation and spotlight factors.

The Fragment Program

The fragment program is responsible for computing the final color of the pixel. It relies on the C5E10_spotAttenLighting function described earlier to do most of the work. Its main task is to combine the lighting terms to acquire the final color.

void oneLight(float4 position : TEXCOORD0,
              float3 normal   : TEXCOORD1,

              out float4 color     : COLOR,

              uniform float3   eyePosition,
              uniform float4   globalAmbient,
              uniform Light    light,
              uniform Material material)
{
    // Calculate emissive and ambient terms
    float4 emissive = material.Ke;
    float4 ambient = material.Ka * globalAmbient;

    // Compute the diffuse and specular contributions of the light
    float4 diffuseLight;
    float4 specularLight;

    C5E10_spotAttenLighting(light, position.xyz, normal, 
        eyePosition, material.shininess,
        diffuseLight, specularLight);

    // Now modulate diffuse and specular by material color
    float4 diffuse = material.Kd * diffuseLight;
    float4 specular = material.Ks * specularLight;

    color = emissive + ambient + diffuse + specular;
    color.w = 1;
}

We first compute the emissive and ambient terms. The diffuse and specular factors are computed in the C5E10_spotAttenLighting function and used to compute the diffuse and specular terms based on the diffuse and specular components of the material.

The final color is computed by summing the final terms. The w component of the final color is set to 1.0 to ensure the final fragment appears opaque.

Techniques and Passes

Again, we only define a single technique with a single pass for this effect.

technique t0
{
    pass p0
    {
        VertexProgram = compile arbvp1 C5E2v_fragmentLighting( gModelViewProj );
        FragmentProgram = compile arbfp1 oneLight( gEyePosition, gGlobalAmbient, gLight, gMaterial );

        DepthTestEnable = true;
        CullFaceEnable = true;
    }
}

In the application, we will use this pass to render the object.

The RenderScene3 Method

This scene is slightly more complex than the previous two. In this scene, two objects will be rendered at different positions in the world. The two objects are enclosed in a large box and the scene is lit by a single spotlight.

First let’s set the effect parameters that are not dependent on the world space matrix of the objects being rendered.

void RenderScene3()
{
    EffectManager& effectMgr = EffectManager::Get();

    Effect& effect = effectMgr.GetEffect( "C5E10_spotAttenLighting" );

    EffectParameter& lightParameter = effect.GetParameterByName( "gLight" );
    lightParameter["color"].Set( g_LightColor );
    lightParameter["kC"].Set( 1.0f );
    lightParameter["kL"].Set( 0.0f );
    lightParameter["kQ"].Set( 0.0001f );
    lightParameter["cosInnerCone"].Set( 0.95f );
    lightParameter["cosOuterCone"].Set( 0.85f );

First, we get a reference to the effect that will be used to render the objects in the scene.

Next, we get a reference to the “gLight” struct parameter and set its internal values based on some pre-defined constants. Since these parameters are not changing, we could set them only once after the effect is loaded and update only the changing parameters in this function. This is an optimization I will leave up to the reader to implement.

Next we’ll define the world matrix for the first object we will render.

    glm::mat4x4 worldMatrix = glm::translate( 4.0f, -1.25f, 0.0f );
    worldMatrix = glm::rotate( worldMatrix, -90.0f, glm::vec3( 1.0f, 0.0f, 0.0f ) );
    glm::mat4x4 invWorldMatrix = glm::inverse( worldMatrix );

The object will be rotated -90 degrees about the X-axis, translated 4 units to the right of the origin, and 1.25 units below the origin.

We also need to compute the inverse of the world matrix so that we can transform the camera’s position (eye position) and the light’s position and direction vectors into object space.

Next we’ll set the light’s position and direction properties in object-space.

    // Light position and direction is in object space.
    lightParameter["position"].Set( glm::vec3( invWorldMatrix * glm::vec4(g_LightPosition, 1.0f) ) );
    lightParameter["direction"].Set( glm::normalize( glm::vec3( invWorldMatrix * glm::vec4( -g_LightPosition, 0.0f ) ) ) );

These two lines of code may be a bit daunting to read so let’s break it up into the component parts.

Since we need both the position and direction vectors to be in homogeneous coordinates to be applied to the 4×4 inverse world matrix, we have to convert the 3-component position vector into a 4-component position vector.

glm::vec4(g_LightPosition, 1.0f)

Notice that we set the final component (the w-component) of the position vector to 1.0. When multiplying a vector by a matrix, the w-component determines whether the translation part of the matrix is applied to the resulting vector. Since this is a position vector, we need to take translation into account, so the w-component must be 1.0.
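
To see the difference the w-component makes, consider a pure translation matrix (a small illustration, not code from the demo):

glm::mat4 T = glm::translate( glm::mat4( 1.0f ), glm::vec3( 5.0f, 0.0f, 0.0f ) );
glm::vec4 p = T * glm::vec4( 1.0f, 2.0f, 3.0f, 1.0f ); // (6, 2, 3, 1): the position is translated.
glm::vec4 d = T * glm::vec4( 1.0f, 2.0f, 3.0f, 0.0f ); // (1, 2, 3, 0): the direction is unchanged.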

Then, to get the 4-component position vector into object space we multiply the position vector by the inverse of the object’s world matrix.

invWorldMatrix * glm::vec4(g_LightPosition, 1.0f)

Then we convert back to a 3-component vector because the position property of the light expects a 3-component vector, not a 4-component vector.

glm::vec3( invWorldMatrix * glm::vec4(g_LightPosition, 1.0f) )

And we set the value of the “position” property of the light parameter.

lightParameter["position"].Set( glm::vec3( invWorldMatrix * glm::vec4(g_LightPosition, 1.0f) ) );

We do the same for the light’s direction vector, except that because this vector represents a direction rather than a position, we don’t want the translation to be applied (only the rotation), so the w-component of the homogeneous vector must be 0.0.

glm::vec4( -g_LightPosition, 0.0f )

Negating the light’s position (-g_LightPosition) produces a vector that points from the light back toward the origin of the world, so the spotlight always aims at the center of the scene.

And again, we transform the direction vector to object-space by multiplying by the inverse of the object’s world matrix.

invWorldMatrix * glm::vec4( -g_LightPosition, 0.0f )

And we need to convert the value back to a 3-component vector because that’s what the parameter expects.

glm::vec3( invWorldMatrix * glm::vec4( -g_LightPosition, 0.0f ) )

Because we are dealing with a direction vector, it is also very important to normalize the vector so that the lighting calculations will be correct.

glm::normalize( glm::vec3( invWorldMatrix * glm::vec4( -g_LightPosition, 0.0f ) ) )

And finally, the parameter is set.

lightParameter["direction"].Set( glm::normalize( glm::vec3( invWorldMatrix * glm::vec4( -g_LightPosition, 0.0f ) ) ) );

I know, this is an ugly, massive line of code!
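
If you find these one-liners hard to read, a pair of small helper functions (hypothetical, not part of the demo source) can tidy them up:

// Hypothetical helpers: transform a point (w = 1) or a direction (w = 0)
// into object space using the inverse of the object's world matrix.
glm::vec3 TransformPoint( const glm::mat4& invWorld, const glm::vec3& p )
{
    return glm::vec3( invWorld * glm::vec4( p, 1.0f ) );
}

glm::vec3 TransformDirection( const glm::mat4& invWorld, const glm::vec3& d )
{
    return glm::normalize( glm::vec3( invWorld * glm::vec4( d, 0.0f ) ) );
}

With these helpers, the two assignments become lightParameter["position"].Set( TransformPoint( invWorldMatrix, g_LightPosition ) ) and lightParameter["direction"].Set( TransformDirection( invWorldMatrix, -g_LightPosition ) ).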

Then we get the position of the camera (eye position) in world space and transform it to object space to apply it to the gEyePosition effect parameter.

    EffectParameter& eyePosParameter = effect.GetParameterByName( "gEyePosition" );
    // The inverse of the view matrix is the camera's world transform; its
    // fourth column holds the camera's position in world space.
    glm::mat4 cameraWorldMatrix = glm::inverse( g_Camera.GetViewMatrix() );
    glm::vec3 eyePos = glm::vec3( cameraWorldMatrix[3] );

    // Eye position parameter in object space
    eyePosParameter.Set( glm::vec3( invWorldMatrix * glm::vec4( eyePos, 1.0f ) ) );

Again, we have to transform the eye position of the camera into object space by multiplying the position vector by the inverse of the object’s world matrix.

Then we set some shared parameters and update them, which ensures the values are committed to the shader program on the GPU.

    effectMgr.SetWorldMatrix( worldMatrix );
    effectMgr.SetMaterial( g_BrassMaterial );
    effectMgr.UpdateSharedParameters();
    effect.UpdateParameters();

The world matrix of the object is applied to the effect manager first. When the shared parameters are updated, the parameter bound to the WORLDVIEWPROJECTION semantic is also recomputed and uploaded to the effect.

The effect parameters that are not shared must also be updated, so Effect::UpdateParameters is invoked as well.
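
Conceptually, the shared-parameter update boils down to something like the following (a sketch only; the actual EffectManager implementation may differ, and viewMatrix and projectionMatrix are assumed to be tracked internally by the manager):

// Recompute the composite matrix bound to the WORLDVIEWPROJECTION semantic
// and commit it to the gModelViewProj parameter of the effect.
glm::mat4 worldViewProjection = projectionMatrix * viewMatrix * worldMatrix;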

Then, to render the object using the effect, we loop through the passes of the technique defined in the effect and render the geometry (in this case a torus).

    Technique& technique = effect.GetFirstValidTechnique();
    technique.BeginTechnique();
    foreach( Pass* pass, technique.GetPasses() )
    {        
        if ( pass->BeginPass() )
        {
            glutSolidTorus( 0.75, 2.0, 32, 32 );
            pass->EndPass();
        }
    }

This block of code is identical to the previous examples so there isn’t much need for explanation.

Since the second object (a red plastic cone) has a different world matrix than the first object, we need to compute a new world matrix and inverse world matrix, and the light’s position, light’s direction, and eye position parameters all need to be updated to reflect the new matrix.

    // Draw a red plastic cone to the left of the torus
    worldMatrix = glm::translate( -4.0f, -2.0f, 0.0f );
    worldMatrix = glm::rotate( worldMatrix, -90.0f, glm::vec3( 1.0f, 0.0f, 0.0f ) );
    invWorldMatrix = glm::inverse( worldMatrix );

    // Light position and direction is in object space.
    lightParameter["position"].Set( glm::vec3( invWorldMatrix * glm::vec4( g_LightPosition, 1.0f ) ) );
    lightParameter["direction"].Set( glm::normalize( glm::vec3( invWorldMatrix * glm::vec4( -g_LightPosition, 0.0f ) ) ) );

    // Eye position parameter in object space
    eyePosParameter.Set( glm::vec3( invWorldMatrix * glm::vec4( eyePos, 1.0f ) ) );

    effectMgr.SetMaterial( g_RedPlasticMaterial );
    effectMgr.SetWorldMatrix( worldMatrix );
    effectMgr.UpdateSharedParameters();
    effect.UpdateParameters();

    foreach( Pass* pass, technique.GetPasses() )
    {        
        if ( pass->BeginPass() )
        {
            glutSolidCone( 2.0, 3.5, 16, 16 );
            pass->EndPass();
        }
    }

This is pretty much the same as for the previous object, except that the object is translated 4 units to the left of the origin and rendered with the red plastic material instead of the brass material.

We will also render a green emerald box around the scene using 6 quads. Since the quads of the box are described relative to the origin of the world, its world matrix is the identity matrix, and thus the inverse of the world matrix is also the identity matrix. Note that the normals of the quads point toward the inside of the box, since the scene is viewed from within it. Since we are using the same effect as for the other two objects in the scene, we have to reset the light’s position and direction vectors, as well as the camera’s eye position, to their world-space values.

    // Object is at the origin so both the world-matrix and the inverse world matrix are identity.
    lightParameter["position"].Set( g_LightPosition );
    lightParameter["direction"].Set( glm::normalize ( -g_LightPosition ) );

    eyePosParameter.Set( eyePos );

    effectMgr.SetMaterial( g_GreenEmeraldMaterial);
    effectMgr.SetWorldMatrix( glm::mat4(1.0f) );    // Reset the world-matrix to identity.
    effectMgr.UpdateSharedParameters();
    effect.UpdateParameters();

    // Draw a green box
    foreach( Pass* pass, technique.GetPasses() )
    {        
        if ( pass->BeginPass() )
        {
            glBegin(GL_QUADS);
            {
                // Top plane
                glNormal3f(0, -1, 0);
                glVertex3f(-12, 10, -12);
                glVertex3f( 12, 10, -12);
                glVertex3f( 12, 10,  12);
                glVertex3f(-12, 10,  12);

                // Bottom plane
                glNormal3f(0, 1, 0);
                glVertex3f( 12, -2, -12);
                glVertex3f(-12, -2, -12);
                glVertex3f(-12, -2,  12);
                glVertex3f( 12, -2,  12);

                // Left plane
                glNormal3f(1, 0, 0);
                glVertex3f(-12, -2,  12);
                glVertex3f(-12, -2, -12);
                glVertex3f(-12, 10, -12);
                glVertex3f(-12, 10,  12);

                // Right plane
                glNormal3f(-1, 0, 0);
                glVertex3f(12, -2, -12);
                glVertex3f(12, -2,  12);
                glVertex3f(12, 10,  12);
                glVertex3f(12, 10, -12);

                // Front plane
                glNormal3f(0, 0, -1);
                glVertex3f(-12, 10, 12);
                glVertex3f( 12, 10, 12);
                glVertex3f( 12, -2, 12);
                glVertex3f(-12, -2, 12);

                // Back plane
                glNormal3f(0, 0, 1);
                glVertex3f(-12, -2, -12);
                glVertex3f( 12, -2, -12);
                glVertex3f( 12, 10, -12);
                glVertex3f(-12, 10, -12);
            }
            glEnd();
            pass->EndPass();
        }
    }

Note that before the box is drawn, the world matrix passed to the effect manager is reset to identity.

Since we want to visualize the position and orientation of the spotlight, we draw a cone at the spotlight’s position, oriented toward the origin.

    // Draw a cone that represents the spotlight
    glEnable( GL_DEPTH_TEST );

    glm::mat4 spotlightMatrix = glm::inverse( glm::lookAt( g_LightPosition, glm::vec3(0), glm::vec3(0,1,0) ) );

    glPushMatrix();
    glMultMatrixf( glm::value_ptr(spotlightMatrix) );

    glColor3f( g_LightColor.r, g_LightColor.g, g_LightColor.b );
    glutSolidCone( 0.75, 1.5, 16, 8 );

    glPopMatrix();
}

The glm::lookAt function creates a view matrix from the light’s position, oriented toward the origin. A view matrix transforms world space into view space, but here we need the opposite: a world (model) matrix that places the cone at the light’s position, facing the origin. The inverse of the look-at matrix is exactly that transform, so we multiply it onto the current matrix stack.
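
As a quick sanity check (a hypothetical snippet, not part of the demo), the inverse look-at matrix should map the cone's local origin back to the light's position in world space:

// The inverse of a look-at matrix transforms the local origin (0,0,0)
// to the eye position the look-at matrix was built from.
glm::vec4 coneOrigin = spotlightMatrix * glm::vec4( 0.0f, 0.0f, 0.0f, 1.0f );
// coneOrigin.xyz should now equal g_LightPosition.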

If everything is correct, the result should be something similar to the video shown below.

References

The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics (2003). Randima Fernando and Mark J. Kilgard. Addison-Wesley.

Download the Source

You can download the source code for this project including Visual Studio 2010 solution and project files here:

[Cg_Lighting.zip]

Comments

  1. I noticed that the cg.dll and cgGL.dll files were missing from the demo source files. I’ve included them in a new version of the zip file, which you will get if you download it again. If you have the Cg runtime installed on your computer you wouldn’t even notice, but if not, the demo wouldn’t have run.

    Please let me know if you find any more problems with the demo or if you have any suggestions on how I could improve this article.

  2. I also noticed that this demo will work fine if you have an NVIDIA GPU, but on ATI graphics adapters you may get an error from the use of the DIFFUSE and SPECULAR semantics. These semantics are reserved for certain varying input semantics in the vp40 profile (see http://http.developer.nvidia.com/Cg/vp40.html).

    To resolve this issue, replace every occurrence of “DIFFUSE” and “SPECULAR” with “MAT_DIFFUSE” and “MAT_SPECULAR” in every cpp file and cgfx file in the solution. This should resolve the naming conflicts and allow the program to work on both NVIDIA and ATI graphics.

    I will ensure that future demos have these changes applied so people with ATI graphics adapters don’t get this error.
