Transformation and Lighting in Cg 3.1

In this article I will demonstrate how to implement a basic lighting model using the Cg shader language. If you are unfamiliar with using Cg in your own applications, then please refer to my previous article titled Introduction to Shader Programming with Cg 3.1.

This article is an updated version of the previous article titled Transformation and Lighting in Cg. In this article, I will not use any deprecated features of OpenGL. I will only use the core OpenGL 3.1 API.

Introduction

Lighting is one of the most important features in games. If used correctly, lighting in games can cause the participant to experience the desired emotions at the intended moments. This is an important factor when one is trying to create a game that completely envelops the participant. For me, one of the most enveloping games I’ve played was Doom 3. Its masterful use of light and shadows sometimes caused me to scream like a little girl when baddies suddenly jumped out of the darkness.

In this article, I will demonstrate how the basic lighting model can be implemented using shaders. If you’ve only used the fixed-function pipeline of Direct3D or OpenGL, you have probably taken advantage of the built-in lighting system that is implemented in the rendering pipeline. If you want to write your own programmable vertex or fragment shaders, then you need to bypass the built-in transformation and lighting calculations and implement them yourself. In this article, I will show you how to do that.

Dependencies

The demo shown in this article uses several 3rd party libraries to simplify the development process.

Any dependencies used by the demo are included in the source distribution (source code is available upon request).

The Basic Lighting Model

In computer graphics, there are several lighting models that can be used to simulate the reflection of light with specific material properties. The Lambertian reflectance model simulates light that is diffused equally in all directions. The Gouraud shading model calculates the diffuse and specular contributions at every vertex of a model, and the color of each pixel is interpolated between vertices. This is also known as per-vertex lighting. Phong shading, often referred to as per-pixel lighting, is a lighting method that interpolates the vertex normal across the face of a triangle in order to calculate per-pixel (or per-fragment) lighting. Using the Phong lighting model, the final color of the pixel is the sum of the ambient (and emissive), diffuse, and specular lighting contributions. A variation of the Phong shading model is the Blinn-Phong lighting model. The Blinn-Phong lighting model calculates the specular contribution slightly differently than the standard Phong lighting model. In this article, I will use the Blinn-Phong lighting model to compute the lighting contribution of the object’s materials.

The general formula for the lighting model is:

\[ \mathbf{C}_{final} = \mathbf{C}_{emissive} + \mathbf{C}_{ambient} + \mathbf{C}_{diffuse} + \mathbf{C}_{specular} \]

Where \(\mathbf{C}_{final}\) is the final output color of the pixel, \(\mathbf{C}_{ambient}\) is the ambient term, \(\mathbf{C}_{emissive}\) is the emissive term (the color the material emits itself without any lights), \(\mathbf{C}_{diffuse}\) is the light reflected evenly in all directions based on the angle to the light source, and \(\mathbf{C}_{specular}\) is the concentration of light that is reflected based on both the angle to the light and the angle to the viewer.

Ambient and Emissive Terms

The Ambient term can be thought of as the global contribution of light in the entire scene. Even if there is no light in the scene, the global ambient contribution can be used to create some global illumination. If there are lights in the scene, each light can contribute to the global ambient term if it is in the line-of-sight of the object it is illuminating. The global ambient contribution is then the sum of each individual light’s ambient contribution.

The emissive term is the amount of light the material emits without any lights or ambient contribution. This value can be used to simulate materials that glow in the dark. It does not automatically create a light source in the scene. If you want to simulate an object that actually emits light, you have to create a light source at the object’s location, and everything in the scene must take that light source into consideration when rendering (except the object itself).

If we make the following definitions:

  • \(\mathbf{G}_{a}\) is the global ambient contribution.
  • \(\mathbf{K}_{a}\) is the ambient reflection constant of the material.
  • \(\mathbf{K}_{e}\) is the emissive reflection constant of the material.

Then the final ambient contribution of the fragment will be:

\[ \mathbf{C}_{ambient} = \mathbf{K}_{a} \mathbf{G}_{a} \]

And the emissive contribution will be:

\[ \mathbf{C}_{emissive} = \mathbf{K}_{e} \]

Lambert Reflectance

Lambert reflectance is the diffusion of light that occurs equally in all directions, more commonly called the diffuse term. The only thing that we need to consider to calculate the Lambert reflectance term is the angle of the light relative to the surface normal.

If we consider the following definitions:

  • \(\mathbf{P}\) is the position of the surface point we want to shade.
  • \(\mathbf{N}\) is the surface normal at the location we want to shade.
  • \(\mathbf{L}_{p}\) is the position of the light source.
  • \(\mathbf{L}_{d}\) is the diffuse contribution of the light source.
  • \(\mathbf{L} = \operatorname{normalize}(\mathbf{L}_{p} - \mathbf{P})\) is the normalized direction vector pointing from the point we want to shade to the light source.
  • \(\mathbf{K}_{d}\) is the diffuse reflectance of the material.

Then the diffuse term of the fragment is:

\[ \mathbf{C}_{diffuse} = \mathbf{K}_{d} \mathbf{L}_{d} \max(\mathbf{N} \cdot \mathbf{L}, 0) \]

The \(\max\) function ensures that for surfaces that are pointing away from the light (when \(\mathbf{N} \cdot \mathbf{L} < 0\)), the fragment will not have any diffuse lighting contribution.

The Phong and Blinn-Phong Lighting Models

The final contribution to calculating the lighting is the specular term. The specular term is dependent on the angle between the normalized vector from the point we want to shade to the eye position (V) and the normalized direction vector pointing from the point we want to shade to the light (L).

Figure: Blinn-Phong vectors

If we consider the following definitions:

  • \(\mathbf{P}\) is the position of the point we want to shade.
  • \(\mathbf{N}\) is the surface normal at the location we want to shade.
  • \(\mathbf{L}_{p}\) is the position of the light source.
  • \(\mathbf{E}_{p}\) is the position of the eye (or camera’s position).
  • \(\mathbf{V} = \operatorname{normalize}(\mathbf{E}_{p} - \mathbf{P})\) is the normalized direction vector from the point we want to shade to the eye.
  • \(\mathbf{L} = \operatorname{normalize}(\mathbf{L}_{p} - \mathbf{P})\) is the normalized direction vector pointing from the point we want to shade to the light source.
  • \(\mathbf{R} = 2(\mathbf{N} \cdot \mathbf{L})\mathbf{N} - \mathbf{L}\) is the reflection vector of the light direction about the surface normal.
  • \(\mathbf{L}_{s}\) is the specular contribution of the light source.
  • \(\mathbf{K}_{s}\) is the specular reflection constant for the material that is used to shade the object.
  • \(\alpha\) is the “shininess” constant for the material. The higher the shininess of the material, the smaller the highlight on the material.

Then, using the Phong lighting model, the specular term is calculated as follows:

\[ \mathbf{C}_{specular} = \mathbf{K}_{s} \mathbf{L}_{s} \max(\mathbf{R} \cdot \mathbf{V}, 0)^{\alpha} \]
The Blinn-Phong lighting model calculates the specular term slightly differently. Instead of calculating the angle between the view vector (V) and the reflection vector (R), the Blinn-Phong lighting model calculates the half-way vector between the view vector (V) and the light vector (L).

In addition to the previous variables, if we also define:

  • \(\mathbf{H} = \operatorname{normalize}(\mathbf{L} + \mathbf{V})\) is the half-angle vector between the view vector and the light vector.

Then the new formula for calculating the specular term is:

\[ \mathbf{C}_{specular} = \mathbf{K}_{s} \mathbf{L}_{s} \max(\mathbf{N} \cdot \mathbf{H}, 0)^{\alpha} \]
As you can see, the Blinn-Phong lighting model requires an additional normalization of the half-angle vector, which requires the use of the square-root function and can be expensive on some profiles. The advantage of the Blinn-Phong model becomes apparent if we consider both the light vector (L) and the view vector (V) to be at infinity. For directional lights, the light vector (L) is simply the negated direction vector of the light. If this condition is true, the half-angle vector (H) can be pre-computed for each directional light source and simply passed to the shader program as an argument, saving the cost of computing the half-angle vector for each fragment.
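
To illustrate that last point, here is a minimal sketch (not part of the demo source) of how the half-angle vector could be pre-computed on the CPU for a directional light using GLM. The variable names lightDirection and viewDirection are hypothetical.

#include <glm/glm.hpp>

// Pre-compute the half-angle vector for a directional light (sketch, assuming GLM).
// lightDirection: the direction the light is shining in (world space).
// viewDirection:  the direction the camera is looking in (world space).
glm::vec3 ComputeHalfAngle( const glm::vec3& lightDirection, const glm::vec3& viewDirection )
{
    // For a directional light, L is simply the negated (normalized) light direction.
    glm::vec3 L = glm::normalize( -lightDirection );
    // If the viewer is also treated as being at infinity, V is a constant direction as well.
    glm::vec3 V = glm::normalize( -viewDirection );
    // H can then be computed once per frame and passed to the shader as a uniform.
    return glm::normalize( L + V );
}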

The graph below shows the specular intensity for different values of the specular power \(\alpha\).

Figure: Specular Intensity

By default (with a specular power of 0), the intensity of the specular highlight is always 1.0 regardless of the dot product of the half-angle vector and the surface normal (\(\mathbf{N} \cdot \mathbf{H}\)). This is shown by the blue line in the image above.

The red line shows a specular power of 1. In this case, the intensity increases linearly as \(\mathbf{N} \cdot \mathbf{H}\) approaches 1. This is equivalent to the diffuse contribution.

As the specular power value increases, the specular highlight becomes more focused. In general, very shiny objects (like glass and polished metal) have a high specular power (between 50 and 128) and dull objects (like plastics and matte materials) have a low specular power (between 2 and 10).
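
As a quick numeric illustration of this focusing effect (this snippet is not part of the demo), we can evaluate the specular falloff for a fixed value of \(\mathbf{N} \cdot \mathbf{H}\) at different specular powers:

#include <cmath>
#include <cstdio>

int main()
{
    const float NdotH = 0.9f;
    const float powers[] = { 1.0f, 10.0f, 50.0f, 128.0f };
    for ( float p : powers )
    {
        // Higher exponents drive the intensity toward 0 away from the center of the highlight.
        std::printf( "shininess = %6.1f -> intensity = %f\n", p, std::pow( NdotH, p ) );
    }
    return 0;
}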

Attenuation

Attenuation is the gradual fall-off of the intensity of a light as it gets farther away from the object it illuminates.

The attenuation factor of a light is determined by the distance from the point being shaded to the light, and three constant parameters that define the constant, linear and quadratic attenuation of the light source.

If we define the following variables:

  • \(d\) is the scalar distance between the point being shaded and the light source.
  • \(k_{c}\) is the constant attenuation factor of the light source.
  • \(k_{l}\) is the linear attenuation factor of the light source.
  • \(k_{q}\) is the quadratic attenuation factor of the light source.

Then the formula for the attenuation factor is:

\[ attenuation = \frac{1}{k_{c} + k_{l} d + k_{q} d^{2}} \]

For lights that are infinitely far away from the object being rendered, such as directional lights, we don’t want any attenuation to occur (if we are simulating the effects of the sun, for example). In this case, we set the constant attenuation factor to 1.0 and the linear and quadratic attenuation factors to 0.0. This will always result in an attenuation factor of 1.0, independent of the distance to the light source.

The result of the attenuation factor is a scalar that is applied to both the diffuse and specular terms.

The image below shows how the intensity of the light is affected as the light source moves further away from the point being lit.

Figure: Light Attenuation Graphs

The blue line in the image above represents the falloff of the light with constant attenuation set to 1.0 and both linear and quadratic attenuation set to 0. In this case, the attenuation stays constant regardless of how far away the light source is.

The green line represents the intensity of the light with the constant and linear attenuation set to 1.0 and quadratic attenuation set to 0. In this case, the light’s intensity is 50% when the light is only 1 unit away from the point being lit.

The red line represents the intensity of the light with the constant and quadratic attenuation set to 1.0 and the linear attenuation set to 0. In this case, the light’s intensity approaches 0 much faster. Quadratic attenuation should be applied to your light sources with caution.
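
The three curves can be reproduced with a small stand-alone snippet (not part of the demo) that simply evaluates the attenuation formula for the three parameter sets described above:

#include <cstdio>

float Attenuation( float kc, float kl, float kq, float d )
{
    return 1.0f / ( kc + kl * d + kq * d * d );
}

int main()
{
    for ( float d = 0.0f; d <= 4.0f; d += 1.0f )
    {
        std::printf( "d=%.0f  constant=%.3f  linear=%.3f  quadratic=%.3f\n",
                     d,
                     Attenuation( 1.0f, 0.0f, 0.0f, d ),    // blue line
                     Attenuation( 1.0f, 1.0f, 0.0f, d ),    // green line
                     Attenuation( 1.0f, 0.0f, 1.0f, d ) );  // red line
    }
    return 0;
}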

Spotlights

Spotlights have the additional property that they only emit light within a directed cone in the direction of the light. The cone of the light can be defined by two cosine angles, the inner cone angle and the outer cone angle. The intensity of the resulting light is computed by taking the dot product between the normalized vector from the light position to the point being shaded and the light’s direction vector.

Figure: Inner and Outer Cones for Spotlights (from "The Cg Tutorial")

If we define the following variables:

  • \(\mathbf{P}\) is the position of the point we want to shade.
  • \(\mathbf{L}_{p}\) is the position of the light source.
  • \(\mathbf{L}_{dir}\) is the normalized direction of the light source.
  • \(\mathbf{L} = \operatorname{normalize}(\mathbf{P} - \mathbf{L}_{p})\) is the normalized direction vector from the light source to the point we want to shade.
  • \(\cos\phi = \mathbf{L}_{dir} \cdot \mathbf{L}\) is the cosine of the angle between the normalized direction vector from the light source to the point we want to shade and the normalized direction vector of the light source.
  • \(\cos\theta_{inner}\) is the cosine angle of the inner cone of the spotlight.
  • \(\cos\theta_{outer}\) is the cosine angle of the outer cone of the spotlight.

Then the formula for calculating the intensity of the spotlight is:

\[ spotIntensity = \operatorname{smoothstep}(\cos\theta_{outer}, \cos\theta_{inner}, \cos\phi) \]
The smoothstep function will smoothly interpolate between two values. This function produces better results than simply doing a linear interpolation between the two values.
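
For reference, smoothstep is commonly defined as the clamped Hermite interpolation shown in the sketch below. Cg provides smoothstep as a standard library function, so the shader does not need to implement it; this C++ version is only meant to show what the function computes.

#include <algorithm>

float SmoothStep( float edge0, float edge1, float x )
{
    // Normalize x to the [edge0, edge1] range and clamp it to [0, 1].
    float t = std::min( std::max( ( x - edge0 ) / ( edge1 - edge0 ), 0.0f ), 1.0f );
    // Apply the Hermite interpolation 3t^2 - 2t^3.
    return t * t * ( 3.0f - 2.0f * t );
}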

It is interesting to note that the cosine angle of the inner cone will be larger than the cosine angle of the outer cone. This is because the intensity is measured by the dot product of a vector relative to the direction vector of the light. If the vectors are parallel (pointing in the same direction), the dot product is 1.0, and if they are perpendicular, the dot product is 0.0. This means that if you want a wider spotlight cone, you have to decrease the cosine angles of the cones (move them closer to 0.0), as the example below illustrates.
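
A hypothetical helper (not part of the demo) that converts a cone half-angle in degrees to the cosine value used by the spotlight test might look like this:

#include <cmath>

float ConeAngleToCos( float halfAngleDegrees )
{
    const float radians = halfAngleDegrees * 3.14159265f / 180.0f;
    // A wider cone (a larger half-angle) produces a smaller cosine value.
    return std::cos( radians );
}

// Example: ConeAngleToCos( 15.0f ) ~= 0.966 (inner cone),
//          ConeAngleToCos( 30.0f ) ~= 0.866 (outer cone).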

Transformations and the Importance of Spaces

Now that we’ve seen the general equations for implementing the basic lighting model, we need to apply them. Before we can apply these formulas, we need to know something about the spaces in which all of our objects live. In a previous article titled 3D Math Primer for Game Programmers (Coordinate Systems), I discussed some of the common spaces that exist in 3D graphics programs. Please refer to that article if you require an explanation of object space, world space, and view space.

The reason I am bringing up spaces in this article is that it is very important that before we can sensibly calculate the lighting that is applied to an object, we must ensure that all positions and directions of objects, lights, and the eye position are in the same space.

In general, we will transform positions and directions from one space to another by multiplying by the corresponding matrix that represents that space. For example, to get from object space to world space, we need to transform the vertex position and vertex normals by the world-space matrix to get those positions and normals into world space. However, to get positions and normals from object space to view space (the space that has the camera position at the origin), we can’t simply multiply the object-space positions and normals by the view matrix, because the view matrix only transforms positions and normals from world space into view space.

Let’s consider this simple notation:

[Object Space] * [World Transform]      => [World Space]
[World Space]  * [View Transform]       => [View Space]
[View Space]   * [Projection Transform] => [Clip Space]

If we have positions and directions already expressed in world space and we need to express those positions and directions in object space, we can just multiply by the inverse of the world transform to get the coordinates in object space.

[Clip Space]  * [Inverse Projection Transform] => [View Space]
[View Space]  * [Inverse View Transform]       => [World Space]
[World Space] * [Inverse World Transform]      => [Object Space]

This is a very important point to remember when creating your lighting shaders! If you only remember one thing after reading this article, it should be this: Everything must be in the same space!

We can also combine matrices in order to transform coordinates from object space directly into clip space.

[Object Space] * ( [World Transform] * [View Transform] * [Projection Transform] ) => [Clip Space]

Depending on the API the order of the multiplication may be left-to-right (as shown above and used for DirectX) or right-to-left (shown below and used for OpenGL):

( [Projection Transform] * [View Transform] * [World Transform] ) * [Object Space] => [Clip Space]

It is usually advisable to pre-compute this combined matrix on the CPU and pass it as a parameter to the shader program so you avoid doing two extra matrix multiplies for every vertex of the mesh.
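
Using GLM (which stores matrices in column-major order, so the multiplication reads right-to-left as in OpenGL), pre-computing the combined matrix might look like the following sketch. The matrix names are placeholders:

#include <glm/glm.hpp>

glm::mat4 ComputeModelViewProjection( const glm::mat4& projection,
                                      const glm::mat4& view,
                                      const glm::mat4& model )
{
    // One matrix product per object instead of several matrix multiplies per vertex.
    return projection * view * model;
}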

In some cases it may appear that the GPU is better at doing the matrix multiplication than pre-computing it on the CPU. For small meshes with very few vertices this may be true, but as the mesh’s vertex count increases, it becomes clear that pre-computing the combined matrix on the CPU is more efficient.

The Simple Lighting Effect

In this article, I will show a simple Cg shader effect that performs the basic lighting equation described above. This effect performs per-fragment lighting for point lights. This effect could be easily extended to perform lighting for spot-lights and directional lights as well but this is not shown here. You can refer to my previous article titled Transformation and Lighting in Cg to see how to implement a spotlight effect.

Although Cg does allow you to define the vertex program and the fragment program in separate files, I’ve decided to use the CgFX file format which allows you to specify both the vertex and fragment programs in the same file. I will first show how to implement the vertex program and then I will show the fragment program.

Data Structures

The Cg language is very similar to the C programming languages. Just as in C, we can declare structures in Cg. These structures will be used to group the stream data coming from the application as well as the data that is passed from the vertex program to the fragment program.

Application Data

First, I’ll define a structure to group all of the vertex attributes that are sent from the application to the shader.

struct AppData
{
    in float3 Position : POSITION;
    in float3 Normal   : NORMAL;
};

For this demo, I only require two stream attributes to be sent from the application.

  • Position: The 3D position of the vertex being lit.
  • Normal: The 3D normal vector of the vertex. The surface normal of the vertex being lit is required to perform lighting.

Each one of these parameters is also defined with a special notation called the semantic. The semantic tells the shader compiler how to bind these attributes from the application. In this case, I am using the predefined semantic “POSITION” to bind the Position variable to the vertex position attribute sent from the application. The variable named Normal is bound to the normal vertex attribute using the predefined semantic “NORMAL“. The name of the variable has no impact on which vertex attribute it is assigned to. Only the semantic is used to connect the shader variable to the vertex attributes coming from the application.

The semantic notation will make more sense when I show the implementation of the demo in C++.

Vertex to Fragment Data

I will also create a struct that is used for both the output from the vertex program and the input to the fragment program.

struct v2f
{
    float4 PosH     : HPOS;      // Output position in homogenous clip-space.
    float4 Normal   : TEXCOORD0; // Transformed normal in World space.
    float4 PosW     : TEXCOORD1; // Transformed position in World space.
};

There are three parameters defined in this struct. The first parameter PosH will be used to output the vertex position transformed to clip-space. This is the only one of the three parameters output from the vertex program that will not be used as input to the fragment program. Every vertex program must output at least one parameter that is bound to the HPOS (or POSITION) semantic, and this must always be the vertex position in homogeneous clip-space.

The other two parameters Normal and PosW are the vertex normal and vertex position transformed to world-space. These parameters are mapped to the TEXCOORD0 and TEXCOORD1 semantics. The choice of semantic used for these two parameters is arbitrary but the TEXCOORDn semantics are not being used by any other attribute and they support interpolation over the polygon face.

Material Definition

I also want to send material information from the application to the vertex shader. Material information is specified for an entire mesh and not per-vertex. A variable that doesn’t change its value based on a vertex attribute is called a uniform variable. A uniform variable is set once before the mesh is rendered by the application and remains constant until the shader is finished processing all of the mesh’s vertices.

I’ll define a structure which will be used to group all of the material properties together.

struct _Material
{
    float4 Emissive;
    float4 Ambient;
    float4 Diffuse;
    float4 Specular;
    float  Shininess;  // The specular exponent
};

_Material Material;

The material’s different components were described earlier in the document, so I won’t describe them here again. You’ll notice that none of these parameters have semantics associated with them. In the case of uniform properties, the semantics are optional. You can use semantics if it helps you to identify the variable in the application. In this article, I will not use custom semantics, but if you are curious, you can read my article titled Transformation and Lighting in Cg. In that article, I define custom material semantics and use them to automatically connect application parameters to shader parameters.

Light Definition

We also need to define a struct to group light properties.

// Light types (there is no enum type in Cg)
const int PointLight = 0;
const int SpotLight = 1;
const int DirectionalLight = 2;

struct _Light
{
    int LightType;

    float4 Color;

    // The position of the light in world space.
    float4 Position; 

    // Used by Directional and Spot lights.
    float4 Direction;

    // Attenuation factors used by Point and Spot lights
    float ConstantAttenuation;
    float LinearAttenuation;
    float QuadraticAttenuation;
    
    // Spotlight properties that determine the spotlight cone.
    // Angles closer to 1 produce tighter cones
    // and angles closer to 0 produce wider cones.
    // Angles of -1 will emulate point lights.
    float InnerCosAngle; 
    float OuterCosAngle;
};

_Light Light;

At the top of the listing, I need to define a few constant values for the different light types since Cg doesn’t support the enum type. I will only show how to implement point lights in this tutorial, but implementing the other light types should be easy after reading this tutorial.

The Light structure defines only a single color parameter that is used for the light’s ambient, diffuse, and specular values. Some samples or tutorials may separate the light’s different color values but I prefer to use a single color component for the ambient, diffuse, and specular lighting values.

The position of the light is defined in world-space. I’ve chosen to implement the lighting equations in world-space because it is easy to understand not because it is the most optimized method. The important thing to remember is that everything must be in the correct space for the lighting equations to be correct, either object space, world space, eye space, or even in light space.

The direction component is only applicable to spot-lights and directional lights so it won’t be used here.

The attenuation factors control the intensity of the light as it moves away from the point being lit. If you have many lights in your scene, it makes sense to define your attenuation factors for each light to achieve the right lighting effects, but with only a single light, the default attenuation factors should be sufficient. You should keep in mind that it does not make sense to compute the attenuation of directional lights since they don’t have a position. Only point lights and spot lights have a position.

The spotlight cosine angles define the inner and outer cone angles for spot-lights. These values are used to determine the intensity of the light based on the direction of the spot-light and the direction from the spot-light to the point being lit. If the spot light is pointing directly at the point being lit, then the dot product of the two vectors will be 1.0 and the point will receive full intensity. If the point is at an angle to the spot light, it will receive a fraction of the intensity based on the angle between the spotlight’s direction and the point being lit and these cosine angles.

Global Variables

Besides the light and material properties, we also need to define a few global variables that are used by the vertex and fragment shaders.

float4 GlobalAmbient = float4( 0.2, 0.2, 0.2, 1.0 );

float4x4 ModelViewProjection;
float3x3 ModelMatrixIT; // Inverse-transpose of model matrix (needed for transforming normals)
float4x4 ModelMatrix; // Transform vertex position to world space.
float4   EyePosition; // The position of the camera in world space.

The GlobalAmbient variable is used to modulate the material’s ambient contribution as described in the Ambient and Emissive Terms section above.

The ModelViewProjection variable is used to hold the 4×4 transformation matrix that will transform the vertex position from local space to clip-space. This matrix will be computed by the application by multiplying the model, view, and projection matrices into a single matrix.

The ModelMatrixIT is a 3×3 matrix that is used to transform the vertex normal to world-space. This matrix is the inverse-transpose of the model matrix. We only need to use the inverse-transpose matrix to transform the normal to world space if the model matrix contains a non-uniform scale. If we know the model matrix does not (or cannot) contain a non-uniform scale, then we can simply use the upper-left 3×3 of the object’s standard world matrix instead. Note that this is a 3×3 matrix. This is required because we do not want to consider the translation vector that is in a 4×4 matrix when transforming the normals. This matrix is equivalent to the gl_NormalMatrix in GLSL. The best explanation for using this matrix to transform normals I could find is here: http://www.lighthouse3d.com/tutorials/glsl-tutorial/the-normal-matrix/.
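
With GLM, the inverse-transpose of the upper-left 3×3 of the model matrix can be computed as shown in the sketch below (this is the same computation performed later in the RenderNode function of the demo):

#include <glm/glm.hpp>

glm::mat3 ComputeNormalMatrix( const glm::mat4& modelMatrix )
{
    // Drop the translation by taking the upper-left 3x3, then invert and transpose.
    return glm::transpose( glm::inverse( glm::mat3( modelMatrix ) ) );
}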

The ModelMatrix is a 4×4 matrix that transforms the vertex position into homogeneous world-space. Unlike the inverse-transpose version of this matrix, we must consider the translation component of the matrix because we are transforming points instead of vectors.

The EyePosition represents the position of the camera in world-space. The eye position can also be obtained by taking the inverse of the view matrix and extracting the 4th row (for row-major matrices) or the 4th column (for column major matrices).
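
With GLM (column-major matrices), extracting the eye position could look like the sketch below; the same technique is used later in the demo’s OnDisplay method:

#include <glm/glm.hpp>

glm::vec4 ExtractEyePosition( const glm::mat4& viewMatrix )
{
    glm::mat4 cameraMatrix = glm::inverse( viewMatrix );
    // The 4th column of the camera (inverse view) matrix is the camera's position in world space.
    return cameraMatrix[3];
}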

The Vertex Program

We will first take a look at the vertex program. This simple program will take the vertex attribute data sent from the application and transform that data into the correct spaces. The transformed vertex attributes will then be sent to the fragment program.

v2f mainVP( AppData IN )
{
    v2f OUT;
    
    OUT.PosH = mul( ModelViewProjection, float4(IN.Position, 1 ) );
    OUT.PosW = mul( ModelMatrix, float4(IN.Position, 1 ) );
    // In case of a non-uniform scale, we must transform the normal by 
    // the inverse transpose of the model matrix to preserve orthogonality.
    OUT.Normal = float4( mul( ModelMatrixIT, IN.Normal ), 0 );

    return OUT;
}

The vertex program’s main entry point takes a parameter of type AppData and returns a variable of type v2f. We will call these variables IN and OUT respectively.

As I mentioned earlier, at a minimum the vertex program must transform the vertex position from object space to clip-space. This is done by multiplying the vertex position by the model-view-projection matrix.

To perform the lighting equations in the fragment program, we also need to convert the object-space vertex position and normal to world-space. This is done by multiplying the vertex position by the model matrix and the vertex normal is multiplied by the inverse-transpose of the model matrix.

The OUT parameter is returned by this function and used as the IN parameter to the fragment program.

The Fragment Program

The fragment program is responsible for computing the per-fragment lighting for our objects.

float4 mainFP( v2f IN ) : COLOR
{
    float4 normal = normalize( IN.Normal );
    float4 emissive = Material.Emissive;
    float4 ambient = GlobalAmbient * Material.Ambient;
        
    float attenuation = Attenuation( IN.PosW, Light );
    float4 diffuse = float4( 0, 0, 0, 1 );
    float4 specular = float4( 0, 0, 0, 1 );

    float4 L = normalize( Light.Position - IN.PosW );
    float NdotL = max( dot( normal, L ), 0 );

    if ( NdotL > 0 )
    {
        diffuse = NdotL * Light.Color * Material.Diffuse;

        float4 V = normalize( EyePosition - IN.PosW );
        float4 H = normalize( L + V );
        float NdotH = max( dot( normal, H ), 0 );

        specular = pow( NdotH, Material.Shininess ) * Light.Color * Material.Specular;
    }

    return emissive + ambient + ( diffuse + specular ) * attenuation;
}

This function shows an example of using a binding semantic to map the return value of the function to an output register of the fragment shader. At a minimum the fragment program must return a color value that is bound to the COLOR output semantic. In more complex shaders, the fragment program can return several color values and optionally a depth value. This shader only outputs a primary color that is mapped to the currently bound color buffer.

At the top of the fragment program, the incoming vertex normal must be re-normalized because interpolating the surface normal as a texture coordinate can cause unwanted scaling in the vector.

Next, the emissive and ambient terms are computed according to the equations discussed earlier.

The Attenuation function computes the light’s intensity based on the distance the light is away from the point being shaded according to the function shown below.

// The intensity of the light is inversely proportional to the distance it is away from the point being lit.
// In other words, the light gets dimmer as it moves further away from the point being lit.
// This function computes the light's intensity based on the distance to the point (p) and the light's attenuation factors.
float Attenuation( float4 p, _Light light )
{
    float d = distance( p, light.Position );
    return 1 / ( light.ConstantAttenuation + light.LinearAttenuation * d + light.QuadraticAttenuation * d * d );
}

The diffuse and specular terms are initialized to 0 in case the point being shaded is facing away from the light.

Then the \(\mathbf{N} \cdot \mathbf{L}\) value is computed. If this factor is greater than 0, then the point being shaded is facing the light; otherwise, it’s facing away from the light and we don’t need to compute the diffuse and specular components.

If the point is facing the light, the diffuse term is computed based on the color of the light and the material’s diffuse component, modulated by the value of \(\mathbf{N} \cdot \mathbf{L}\).

Similarly, the specular component is computed based on \(\mathbf{N} \cdot \mathbf{H}\), the specular shininess, the color of the light, and the material’s specular component according to the formulas discussed earlier.

The final color value is the sum of all of the lighting contributions. The diffuse and specular components are the only components that should be modulated by the attenuation factor of the light.

Techniques and Passes

The final thing that must appear in the CgFX source file is at least one technique that defines all of the passes that are required to implement the effect. For this simple effect, I will only define a single technique which defines only a single pass.

technique t0
{
    pass p0
    {
        VertexProgram = compile gp4vp mainVP();
        FragmentProgram = compile gp4fp mainFP();
    }
}

The only technique in this effect defines a single pass. The pass can contain any number of state assignments (for a complete list of state assignments that can appear in a pass block, refer to the CgFX State Documentation). For this simple shader, I will only assign the VertexProgram and FragmentProgram state assignments.

The profiles that are used to compile the vertex program and the fragment program are the gp4vp and gp4fp profiles. To see a list of the different profiles that you can use to compile your shaders, please refer to the profile documentation on the NVIDIA website (http://http.developer.nvidia.com/Cg/index_profiles.html).

The Cg Lighting Demo

Now that we’ve seen how to create the CgFX shader source file, let’s see how we can use this shader in a C++ application.

In this article, I will not give an extensive description of the source code. The code in this sample is based on the Cg template described in the Introduction to Shader Programming article. Since most of the code is explained in full detail there, I will only show the additional code that was added to this demo.

Data Structures

Similar to the CgFX source file, we need to define a few structures to group material and light properties. I will also add an additional structure that is used to store mesh information.

The Material Structure

The Material structure is used to group properties that belong to a material definition and initialize the materials with some default values.

// A struct to hold the material properties
struct Material
{
    std::string m_Name; // The material name (could be used for look-up tables?)

    glm::vec4 m_Emissive;
    glm::vec4 m_Ambient;
    glm::vec4 m_Diffuse;
    glm::vec4 m_Specular;
    float     m_Shininess;  // The specular exponent

    Material()
        : m_Emissive( 0.0f, 0.0f, 0.0f, 1.0f )
        , m_Ambient( 0.2f, 0.2f, 0.2f, 1.0f )
        , m_Diffuse( 0.8f, 0.8f, 0.8f, 1.0f )
        , m_Specular( 0.0f, 0.0f, 0.0f, 1.0f )
        , m_Shininess( 0.0f )
    {}
};

typedef std::vector<Material> MaterialArray;
// The list of materials used in the scene.
MaterialArray g_Materials;

This structure should be self-explanatory. Since we may want to define several materials, we declare a vector of materials that will be populated when the scene data is loaded.

I’m not using textures in the demo, but if I was, I would probably also want to define a list of texture object IDs in the material struct (one texture object ID for each texture stage).

The Light Structure

The Light structure is used to group properties that define a point-, spot-, or directional-light.

// Supported light types
enum LightType
{
    PointLight,
    SpotLight,
    DirectionalLight,    
};

// A struct to hold light properties.
struct Light
{
    // What kind of light this is.
    LightType m_Type;

    glm::vec4 m_Color;

    // Position is used by Point and Spot lights.
    glm::vec4 m_Position; 

    // Used by Directional and Spot lights.
    glm::vec4 m_Direction;

    // Attenuation factors used by Point and Spot lights
    float m_ConstantAttenuation;
    float m_LinearAttenuation;
    float m_QuadraticAttenuation;
    
    // Spotlight properties that determine the spotlight cone.
    // Angles closer to 1 produce tighter cones
    // and angles closer to 0 produce wider cones.
    // Angles of -1 will emulate point lights.
    float m_InnerCosAngle; 
    float m_OuterCosAngle;

    Light()
        : m_Type(PointLight)
        , m_Color(1, 1, 1, 1 )
        , m_Position( 0, 0, 0, 1 )
        , m_Direction( 0, 0, 1, 0 )
        , m_ConstantAttenuation( 1.0f )
        , m_LinearAttenuation( 0.0f )
        , m_QuadraticAttenuation( 0.0f )
        , m_InnerCosAngle( -1.0f )
        , m_OuterCosAngle( -1.0f )
    {}

};

Light g_Light;

The Light structure defines the type, color, position, direction, attenuation factors, and spot-light cone angles. This structure matches the one defined in the CgFX file.

In this demo, I will only simulate a single light. I’ll use the g_Light global variable to store the properties of that light.

The Mesh Structure

The Mesh structure is used to group properties that are required to draw a mesh in the scene.

// A vertex definition that stores position and normals.
struct VertexXYZNorm
{
    glm::vec3 m_Pos;    // X,  Y,  Z
    glm::vec3 m_Norm;   // Nx, Ny, Nz
};

// A mesh is a renderable entity that contains vertex data and
// a material definition.
struct Mesh
{
    // Vertices stored in system memory.
    std::vector<VertexXYZNorm> m_Vertices;
    // Index buffer stored in system memory.
    std::vector<GLuint> m_Indices;

    // Vertices stored in Graphics memory.
    GLuint  m_vboVertices;
    // Indices stored in Graphics memory.
    GLuint  m_vboIndicies;

    // The VAO to quickly bind attribute streams.
    GLuint m_VAO;

    // The material that is associated with this mesh.
    Material m_Material;

    // Default constructor
    Mesh()
        : m_vboVertices(0)
        , m_vboIndicies(0)
        , m_VAO(0)
    {}

    ~Mesh()
    {
        if ( m_VAO != 0 )
        {
            glDeleteVertexArrays( 1, &m_VAO );
        }
        if ( m_vboIndicies != 0 )
        {
            glDeleteBuffers( 1, &m_vboIndicies );
        }
        if ( m_vboVertices != 0 )
        {
            glDeleteBuffers( 1, &m_vboVertices );
        }
    }
};

The VertexXYZNorm structure defines the vertex attributes that are streamed from the application to the vertex shader. At a minimum, the vertices of our mesh must define a position and a vertex normal in object-space.

The Mesh structure defines the m_Vertices and m_Indices variables which are used to store the mesh information in system memory.

The m_vboVertices and the m_vboIndicies variables are used to identify the vertex buffer objects that store the vertex data in graphics memory. If you are unsure how to use VBOs, I recommend you refer to my previous article titled Using OpenGL Vertex Buffer Objects.

The m_VAO member variable is used to refer to the Vertex Array Object that is used to group all of the vertex attribute streams that are required to render this mesh with a single binding call.

The m_Material member variable defines the mesh’s material properties.

Cg Parameters

Just like OpenGL, we must first define a context before we can use Cg in our application. The reference to the Cg context is stored in a CGcontext variable. The Cg context is required to load shader programs and effect files.

We will also define variables to hold the CgFX effect and the technique that is defined in the effect file.

// The context
CGcontext g_cgContext = NULL;
// Cg effect and technique variables.
CGeffect g_cgEffect = NULL;
CGtechnique g_cgTechnique = NULL;

In order to set the values of the uniform properties defined in the shader, we must define a reference to those properties in the shader effect file. We do that with the CGparameter type.

// Cg effect parameters
CGparameter g_cgModelViewProjection = NULL;
CGparameter g_cgModelMatrix = NULL;
CGparameter g_cgModelMatrixIT = NULL;
CGparameter g_cgGlobalAmbient = NULL;
CGparameter g_cgEyePosition = NULL;

We also need to define a structure to hold a reference to the material properties defined in the effect file.

// Cg Material parameters
struct CGMaterial
{
    CGparameter m_cgEmissive;
    CGparameter m_cgAmbient;
    CGparameter m_cgDiffuse;
    CGparameter m_cgSpecular;
    CGparameter m_cgShininess;
};
CGMaterial g_ShaderMaterial;

As well as a reference to the light properties defined in the effect file.

// Cg light parameters
struct CGLight
{
    CGparameter m_cgLightType;

    CGparameter m_cgColor;
    CGparameter m_cgPosition;
    CGparameter m_cgDirection;
    
    CGparameter m_cgConstantAttenuation;
    CGparameter m_cgLinearAttenuation;
    CGparameter m_cgQuadraticAttenuation;

    CGparameter m_cgInnerCosAngle;
    CGparameter m_cgOuterCosAngle;
};
CGLight g_ShaderLight;

Initialize Cg

The initialization routine is similar to the method used in the Introduction to Cg article previously posted. So I will not go into much detail about it here. In summary, we must create a Cg context, register OpenGL state assignments, load the CgFX effect, get a valid technique defined in the effect file, and get the references to the uniform parameters defined in the effect file.

The first thing we’ll do is register the error handler and create the Cg context.

void InitCG()
{
    // Register the error handler
    cgSetErrorHandler( &CgErrorHandler, NULL );

    // Create the Cg context.
    g_cgContext = cgCreateContext();

We also need to register the OpenGL state assignments. This is required for the effect to work because the VertexProgram and FragmentProgram keywords used in the pass block in our effect file are OpenGL state assignments.

    // Register the default state assignments for OpenGL.
    // See the section titled "OpenGL State" in the CgUserManual.
    cgGLRegisterStates( g_cgContext );
    // This will allow the Cg runtime to manage texture binding.
    cgGLSetManageTextureParameters( g_cgContext, CG_TRUE );

The cgGLSetManageTextureParameters call is not strictly necessary in this demo because I’m not using textures. Setting this parameter to CG_TRUE ensures that the texture objects that are assigned to texture samplers in an effect are automatically bound and enabled when an effect that uses them is bound.

The next step is to load the effect file.

    // Load the CGfx file
    g_cgEffect = cgCreateEffectFromFile( g_cgContext, g_ShaderProgramName, NULL );
    if ( g_cgEffect == NULL ) exit(-1);

The effect file is loaded using the cgCreateEffectFromFile method which will load, compile, and link the shader programs defined in the effect file. If the effect fails to compile, then the error handler that was previously registered will be invoked and the compiler error will be displayed. If that happens, this function returns an invalid handle and the program exits.
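
As a side note, a minimal error handler might look something like the sketch below (an assumption; the handler used in the demo may differ). The signature is the one expected by cgSetErrorHandler, and cgGetLastListing returns the compiler output when an effect fails to compile.

void CgErrorHandler( CGcontext context, CGerror error, void* appData )
{
    if ( error != CG_NO_ERROR )
    {
        std::cerr << "Cg ERROR: " << cgGetErrorString( error ) << std::endl;
        // For compiler errors, the listing contains the detailed error messages.
        const char* listing = cgGetLastListing( context );
        if ( listing != NULL )
        {
            std::cerr << listing << std::endl;
        }
    }
}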

If the CgFX file loaded, then we can query for a valid technique defined in the file. We cannot use the effect unless there is at least one valid technique for the current platform defined in the effect file.

    // Validate the technique
    g_cgTechnique = cgGetFirstTechnique( g_cgEffect );
    while ( g_cgTechnique != NULL && cgValidateTechnique(g_cgTechnique) == CG_FALSE )
    {
        std::cerr << "Cg ERROR: Technique with name \"" << cgGetTechniqueName(g_cgTechnique) << "\" did not pass validation." << std::endl;
        g_cgTechnique = cgGetNextTechnique(g_cgTechnique);
    }

    // Make sure we found a valid technique
    if ( g_cgTechnique == NULL || cgIsTechniqueValidated(g_cgTechnique) == CG_FALSE )
    {
        std::cerr << "Cg ERROR: Could not find any valid techniques." << std::endl;
        exit(-1);
    }

So far this is no different than the InitCG method described in the Introduction to Cg article.

Next, we need to query the uniform effect parameters.

    // Get the effect parameters
    g_cgModelViewProjection = cgGetNamedEffectParameter(g_cgEffect, "ModelViewProjection" );
    g_cgModelMatrix = cgGetNamedEffectParameter( g_cgEffect, "ModelMatrix" );
    g_cgModelMatrixIT = cgGetNamedEffectParameter( g_cgEffect, "ModelMatrixIT" );
    g_cgGlobalAmbient = cgGetNamedEffectParameter( g_cgEffect, "GlobalAmbient" );
    g_cgEyePosition = cgGetNamedEffectParameter( g_cgEffect, "EyePosition" );

    g_ShaderMaterial.m_cgEmissive = cgGetNamedEffectParameter( g_cgEffect, "Material.Emissive" );
    g_ShaderMaterial.m_cgAmbient = cgGetNamedEffectParameter( g_cgEffect, "Material.Ambient" );
    g_ShaderMaterial.m_cgDiffuse = cgGetNamedEffectParameter( g_cgEffect, "Material.Diffuse" );
    g_ShaderMaterial.m_cgSpecular = cgGetNamedEffectParameter( g_cgEffect, "Material.Specular" );
    g_ShaderMaterial.m_cgShininess = cgGetNamedEffectParameter( g_cgEffect, "Material.Shininess" );

    g_ShaderLight.m_cgLightType = cgGetNamedEffectParameter( g_cgEffect, "Light.LightType" );
    g_ShaderLight.m_cgColor = cgGetNamedEffectParameter( g_cgEffect, "Light.Color" );
    g_ShaderLight.m_cgPosition = cgGetNamedEffectParameter( g_cgEffect, "Light.Position" );
    g_ShaderLight.m_cgDirection = cgGetNamedEffectParameter( g_cgEffect, "Light.Direction" );

    g_ShaderLight.m_cgConstantAttenuation = cgGetNamedEffectParameter( g_cgEffect, "Light.ConstantAttenuation" );
    g_ShaderLight.m_cgLinearAttenuation = cgGetNamedEffectParameter( g_cgEffect, "Light.LinearAttenuation" );
    g_ShaderLight.m_cgQuadraticAttenuation = cgGetNamedEffectParameter( g_cgEffect, "Light.QuadraticAttenuation" );

    g_ShaderLight.m_cgInnerCosAngle = cgGetNamedEffectParameter( g_cgEffect, "Light.InnerCosAngle" );
    g_ShaderLight.m_cgOuterCosAngle = cgGetNamedEffectParameter( g_cgEffect, "Light.OuterCosAngle" );
}

You may notice that we can use the dot-notation to refer to member variables of the struct properties defined in the effect file. Similarly if we had array parameters, we could refer to them using array index operators: “Lights[0].Color” or “Lights[8].Position”.

If everything went okay, then we can start rendering our scene objects using this effect.

Loading a Mesh

For this demo, I am using the Open Asset Import Library to read a scene file. Since this article is not about the Open Asset Import Library, I will not go into much detail about it here. What I want to show here is how I create the vertex buffer objects (VBO) and vertex array objects (VAO) so that I can efficiently render the mesh on the GPU.

To convert a mesh from the Open Asset Import Library format to a format that can be used by my application, I will use the ConvertMesh function. This function takes a pointer to an aiMesh and returns a pointer to a newly created Mesh object. The Mesh structure has been described earlier.

Mesh* ConvertMesh( const aiMesh* aiMesh )
{
    if ( aiMesh == NULL ) return NULL;
    assert( aiMesh->HasPositions() && aiMesh->HasNormals() );

We first check the preconditions in this function. The aiMesh must at least have position and normal attributes. If we’ve confirmed that the incoming aiMesh parameter contains these vertex attributes, we can create a new Mesh object and extract these attributes.

    Mesh* mesh = new Mesh();
    mesh->m_Material = g_Materials[aiMesh->mMaterialIndex];

    // Extract the vertex attributes.
    for( unsigned int i = 0; i < aiMesh->mNumVertices; ++i )
    {
        aiVector3D pos = aiMesh->mVertices[i];
        aiVector3D norm = aiMesh->mNormals[i];

        VertexXYZNorm vert;
        vert.m_Pos = glm::vec3( pos.x, pos.y, pos.z );
        vert.m_Norm = glm::vec3( norm.x, norm.y, norm.z );

        mesh->m_Vertices.push_back(vert);
    }

Right after the new Mesh is created, the material struct is copied to it. The material is copied by value so we can modify this mesh’s material after import without affecting any other mesh’s material.

The vertex positions and normals are copied from the aiMesh‘s mVertices and mNormals member variables to the Mesh‘s m_Vertices vector.

We also need to copy the index data from the aiMesh to our Mesh‘s index buffer.

    // Extract the indices
    for ( unsigned int i = 0; i < aiMesh->mNumFaces; ++i )
    {
        aiFace face = aiMesh->mFaces[i];
        // Ignore non-triangles
        if ( face.mNumIndices == 3 )
        {
            mesh->m_Indices.push_back(face.mIndices[0]);
            mesh->m_Indices.push_back(face.mIndices[1]);
            mesh->m_Indices.push_back(face.mIndices[2]);
        }
    }

The aiMesh has an array of aiFace values. Each face structure defines a single polygon face of the mesh. If the mesh has been triangulated, then all faces in the mesh should contain only 3 indices. Since I don’t want to support non-triangulated mesh data in this demo, I will only copy the faces that contain exactly 3 indices.

Now that we have the vertex data and index data in system memory, we can use it to create our buffer data in graphics memory. First, we’ll create the Vertex Array Object (VAO) and the Vertex Buffer Object (VBO).

    // Create and bind the Vertex Array Object
    glGenVertexArrays(1, &mesh->m_VAO ); checkGL();
    glBindVertexArray( mesh->m_VAO ); checkGL();

    // Now build the VBOs
    glGenBuffers(1, &mesh->m_vboVertices); checkGL();
    glGenBuffers(1, &mesh->m_vboIndicies); checkGL();

First we generate unique ID’s for the VAO and VBO that is associated with this mesh. Then we load the vertex attributes into the vertices VBO.

    // Bind the VBO for the vertex attributes.
    glBindBuffer( GL_ARRAY_BUFFER, mesh->m_vboVertices ); checkGL();
    // Copy the vertex data from system memory to graphics memory.
    glBufferData( GL_ARRAY_BUFFER, sizeof(VertexXYZNorm) * mesh->m_Vertices.size(), &(mesh->m_Vertices[0]), GL_STATIC_DRAW ); checkGL();

The VBO ID is bound to the GL_ARRAY_BUFFER target. This tells OpenGL that we want to use this VBO for operations that use this target.

The VBO data is filled with the glBufferData (or glBufferSubData) function. If the use of VBO’s is still unclear, then please refer to my previous article titled Using OpenGL Vertex Buffer Objects.

We also need to bind and enable the vertex attribute streams that will be used to render the mesh. The vertex attribute stream data will be associated with the currently bound VAO if there is one. If you are not using VAO’s then you will have to bind and enable the attribute arrays before you can render the mesh using the glDrawArrays or glDrawElements in the render function. In this demo, I will use VAO’s.

    // Bind the vertex data to the attribute streams.
    glVertexAttribPointer( POSITION_ATTRIBUTE, 3, GL_FLOAT, GL_FALSE, sizeof(VertexXYZNorm), MEMBER_OFFSET(VertexXYZNorm,m_Pos) ); checkGL();
    glEnableVertexAttribArray( POSITION_ATTRIBUTE ); checkGL();

    glVertexAttribPointer( NORMAL_ATTRIBUTE, 3, GL_FLOAT, GL_FALSE, sizeof(VertexXYZNorm), MEMBER_OFFSET(VertexXYZNorm,m_Norm) ); checkGL();
    glEnableVertexAttribArray( NORMAL_ATTRIBUTE ); checkGL();
    
    glBindVertexArray( 0 ); checkGL();
    glBindBuffer( GL_ARRAY_BUFFER, 0 ); checkGL();

The glVertexAttribPointer will allow us to bind the vertex attribute arrays to the generic vertex attributes. CgFX uses the binding semantics associated with the vertex data in the CgFX shader file to determine which attribute stream is bound to which parameter in the shader.

If you recall, the AppData in the CgFX file looks like this:

struct AppData
{
    in float3 Position : POSITION;
    in float3 Normal   : NORMAL;
};

The parameter that is bound with the POSITION semantic is always associated with the generic attribute with ID 0, and the parameter that is bound with the NORMAL semantic is always associated with the generic attribute with ID 2. For a list of the standard semantics and the associated generic attribute IDs used by Cg, you can refer to the Cg Semantics table in my previous article titled Introduction to Shader Programming with Cg 3.1.

In the application code, the POSITION_ATTRIBUTE macro resolves to 0 and the NORMAL_ATTRIBUTE macro resolves to 2.
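
For reference, here is a sketch of how these helper macros, together with the MEMBER_OFFSET and BUFFER_OFFSET macros used in the listings above, could be defined. These definitions are assumptions based on common practice and may differ slightly from the demo’s source:

#include <cstddef>  // for offsetof

// Generic vertex attribute IDs that match Cg's POSITION and NORMAL semantics.
#define POSITION_ATTRIBUTE 0
#define NORMAL_ATTRIBUTE   2

// Byte offset of a member within a vertex structure, cast to the pointer type
// expected by glVertexAttribPointer.
#define MEMBER_OFFSET( s, m ) ( (const GLvoid*)offsetof( s, m ) )
// Byte offset into the currently bound buffer object, used with glDrawElements.
#define BUFFER_OFFSET( i ) ( (const GLvoid*)( i ) )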

We also need to enable the attribute arrays with glEnableVertexAttribArray otherwise the data will not be sent to the rendering pipeline when our mesh elements are drawn.

And finally, when we are done defining the attribute arrays for the VAO, we can unbind the VAO and VBO objects.

We also need to populate the VBO for our indices.

    // Load the index data
    glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, mesh->m_vboIndicies ); checkGL();
    glBufferData( GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint) * mesh->m_Indices.size(), &(mesh->m_Indices[0]), GL_STATIC_DRAW ); checkGL();
    glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 ); checkGL();

    return mesh;
}

The index buffer must be defined separately from the vertex attribute arrays because the index data defines the order in which vertices are sent to the rendering pipeline but it does not define vertex attributes.

If everything is correct then we return the pointer to the new mesh.

The OnDisplay Method

The OnDisplay method is invoked whenever the screen needs to be redrawn. This function will populate the Cg parameters that are constant for that frame, such as camera and light properties.

void OnDisplay()
{
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ); checkGL();

    // WARNING: OpenGL deprecated features.
    glMatrixMode( GL_MODELVIEW ); checkGL();
    glLoadMatrixf( glm::value_ptr(g_Camera.GetViewMatrix()) );
    DrawAxis( 2.0f, g_Camera.GetPivot() );

    glm::mat4 cameraMatrix = glm::inverse( g_Camera.GetViewMatrix() );
    cgSetParameter4fv( g_cgEyePosition, glm::value_ptr( cameraMatrix[3] ) );

    g_Light.m_Position = glm::vec4( g_Camera.GetPivot(), 1.0f );

    cgSetParameter1i( g_ShaderLight.m_cgLightType, (int)g_Light.m_Type );
    cgSetParameter4fv( g_ShaderLight.m_cgColor, glm::value_ptr(g_Light.m_Color) );
    cgSetParameter4fv( g_ShaderLight.m_cgPosition, glm::value_ptr(g_Light.m_Position) );
    cgSetParameter4fv( g_ShaderLight.m_cgDirection, glm::value_ptr(g_Light.m_Direction) );
    cgSetParameter1f( g_ShaderLight.m_cgConstantAttenuation, g_Light.m_ConstantAttenuation );
    cgSetParameter1f( g_ShaderLight.m_cgLinearAttenuation, g_Light.m_LinearAttenuation );
    cgSetParameter1f( g_ShaderLight.m_cgQuadraticAttenuation, g_Light.m_QuadraticAttenuation );
    cgSetParameter1f( g_ShaderLight.m_cgInnerCosAngle, g_Light.m_InnerCosAngle );
    cgSetParameter1f( g_ShaderLight.m_cgOuterCosAngle, g_Light.m_OuterCosAngle );

    RenderNode( g_RootNode );

    glutSwapBuffers();
}

Every frame starts by clearing the color and depth buffers.

The next few lines just draw an “axis” widget at the camera’s pivot point. These functions use the deprecated fixed-function pipeline of OpenGL, so I encourage you to ignore them ;)

We need to extract the camera’s position in world space. To do that, we must invert the view matrix and take the 4th column (or the 4th row for row-major matrices). This gives the “eye” position in world space. For an explanation of the camera transform and the view matrix, you can refer to my article titled Understanding the View Matrix. The EyePosition parameter in the CgFX file is set using the cgSetParameter4fv function.

The member variables of the Light parameter in the CgFX file are set in the same way, using the setter function that matches each parameter’s type.

With all of the per-frame parameters set, we can render our scene. The RenderNode function will recursively render the nodes of a very simple scene graph implementation.

The RenderNode Function

The RenderNode function will render all of the meshes associated with a particular node in the scene graph. A scene node consists of a transformation matrix that is used to position and orient a mesh in the scene. Each scene node can contain zero or more meshes, and a scene node can also contain child nodes that are positioned and oriented relative to their parent node. A scene node without a parent node is called the Root node, and a scene node that does not have any child nodes is called a Leaf node.

To render a node, we first set the per-node shader parameters. These are the node’s matrix parameters that determine the position and orientation of the rendered mesh.

void RenderNode( Node* pNode )
{
    if ( pNode == NULL ) return;

    // Update the matrix parameters
    glm::mat4 modelViewProjectionMatrix = g_Camera.GetProjectionMatrix() * g_Camera.GetViewMatrix() * pNode->m_WorldTransform;
    glm::mat3 modelMatrixIT = glm::transpose( glm::inverse( glm::mat3(pNode->m_WorldTransform) ) );

    cgSetParameterValuefc( g_cgModelViewProjection, 16, glm::value_ptr(modelViewProjectionMatrix) );
    cgSetParameterValuefc( g_cgModelMatrix, 16, glm::value_ptr(pNode->m_WorldTransform) );
    cgSetParameterValuefc( g_cgModelMatrixIT, 9, glm::value_ptr(modelMatrixIT) );

Each time a node is rendered, there are 3 parameters that need to be set:

  • g_cgModelViewProjection: A 4×4 matrix that is used to transform the vertex position from object-space to clip-space.
  • g_cgModelMatrix: A 4×4 matrix that is used to transform the vertex position from object-space to world-space.
  • g_cgModelMatrixIT: A 3×3 matrix that is used to transform the vertex normal from object-space to world-space.

Then we need to loop through the meshes that are associated with the node and render each mesh.

    // Render all of the meshes associated with this node.
    MeshArray::iterator meshIter = pNode->m_Meshes.begin();
    while ( meshIter != pNode->m_Meshes.end() )
    {
        Mesh* pMesh = (*meshIter);
        Material& material = pMesh->m_Material;

        // Set the material parameters
        cgSetParameter4fv( g_ShaderMaterial.m_cgEmissive, glm::value_ptr(material.m_Emissive) );
        cgSetParameter4fv( g_ShaderMaterial.m_cgAmbient, glm::value_ptr(material.m_Ambient) );
        cgSetParameter4fv( g_ShaderMaterial.m_cgDiffuse, glm::value_ptr(material.m_Diffuse) );
        cgSetParameter4fv( g_ShaderMaterial.m_cgSpecular, glm::value_ptr(material.m_Specular) );
        cgSetParameter1f( g_ShaderMaterial.m_cgShininess, material.m_Shininess );
        
        glBindVertexArray(pMesh->m_VAO); checkGL();
        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, pMesh->m_vboIndicies ); checkGL();

        CGpass pass = cgGetFirstPass(g_cgTechnique);
        while ( pass )
        {
            cgSetPassState(pass);
            glDrawElements( GL_TRIANGLES, pMesh->m_Indices.size(), GL_UNSIGNED_INT, BUFFER_OFFSET(0) ); checkGL(); 
            cgResetPassState(pass);

            pass = cgGetNextPass(pass);
        }

        ++meshIter;
    }

    // Unbind
    glBindVertexArray( 0 );
    glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 );

For each mesh, we first set the Cg parameters that define the mesh’s material properties.

Next, the mesh’s VAO and index buffer VBO are bound for rendering.

Then, for each pass in the technique the mesh is rendered with glDrawElements.

The cgSetPassState function will set all of the state assignments defined in the pass. For our shader, only the VertexProgram and FragmentProgram state assignments are defined in the pass.

The cgResetPassState function will reset only the state assignments that are defined in the pass. It is important to note that resetting the pass’s state assignments will not set the values of the state assignments back to the values they had before cgSetPassState was called; instead, it will reset the state assignments to whatever the default value is for each state assignment. For example, resetting the pass state in this example will set the currently bound vertex program and fragment program to NULL (even if the currently bound vertex program and fragment program were not NULL when cgSetPassState was first called). You should always check the CgFX state assignments documentation to verify what value the state will be set to when the pass is reset.

We shouldn’t forget to unbind the VAO and VBO objects when we are done rendering the meshes of the node.

After this node has been rendered, we need to recursively render the children of this node.

    // Render the children
    NodeArray::iterator nodeIter = pNode->m_Childeren.begin();
    while ( nodeIter != pNode->m_Childeren.end() )
    {
        RenderNode( (*nodeIter) );
        ++nodeIter;
    }
}

If everything works well, we should see something similar to what is shown below.

References


The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics (2003). Randima Fernando and Mark J. Kilgard. Addison Wesley.

Download the Source

The source code for this article is available upon request.
