An Introduction to Unity Shaders

Surface Shader

In this article, I will give a basic introduction to what shaders are, discuss the basics of the Unity rendering pipeline, and show the reader how to use shaders within the Unity game engine.

Introduction

Shaders are used a lot in video games to produce a wide variety of effects, for example lighting models, distortion, motion blur, bloom, and edge detection. A shading language is used to program the graphics processing unit (GPU) and allows the user to set up the graphics hardware for rendering instead of relying on the fixed function pipeline. There are three commonly used shader types: vertex shaders, fragment shaders, and geometry shaders.

In Unity the user is able to write custom shaders; however, full screen effects require render textures, which are only available in Unity Pro and won't be covered in this article. These full screen image processing effects are a pretty advanced topic on their own. Because there is quite a lot of material to cover, I won't be delving too deep into writing custom shader effects, but will instead emphasize what shaders are and how they are used in the Unity game engine.

Borderlands 2 Cell Shading

Shaders in Unity

In Unity all rendering is done through the use of shaders. Shaders in Unity are small scripts that allow the user to configure how the graphics hardware is set up for rendering. Unity ships with over 60 built-in shaders that are used through materials. There is a close relationship in Unity between materials and shaders. The shader contains the code that defines what kinds of properties and assets to use, while the material allows the user to adjust those properties and assign the assets. For example, if we create a new GameObject and assign a material to it, we can choose a shader from the Inspector that specifies how our material should look when we are in-game. The properties of the material vary depending on the shader that is used to render the GameObject.

There are three ways shaders can be written in Unity: Fixed Function Shaders, Vertex and Fragment Shaders, and Surface Shaders. The code for these shaders is encapsulated using Unity's shading and material language called 'ShaderLab'. The shaders themselves are typically written in the Cg or HLSL shading language.

Surface Shaders

Surface shaders are commonly used when your shader needs to be affected by both lights and shadows. Since writing out the complete shader code that handles a default Blinn-Phong lighting model every time would be tedious, surface shaders allow the user to write more complex shaders in a more compact way. If the shader doesn't need to interact with lighting, it is best to use a vertex and fragment shader instead; otherwise Unity would still be making lighting calculations we don't need.

Vertex and Fragment Shaders

Vertex and fragment shaders can be used when the shader doesn't need to interact with lighting, or when you need an effect that a surface shader can't handle. These are the most flexible shaders for creating effects; however, they make it harder to interact with lighting.

Fixed Function Shaders

Fixed function shaders are used for older hardware that doesn't support programmable shaders. They are usually used as a last fallback to make sure a game still renders something meaningful when a certain shader effect is not supported by the graphics processing unit. Fixed function shaders are written entirely in Unity's 'ShaderLab'. We will look at this in more detail later.

Unity’s Built-In Shaders

Unity’s rendering pipeline

So with shaders we can define how our object will appear in the game world and how it will react to lighting. How lights affect objects depends on the passes of the shader and which rendering path is used. The rendering path can be changed through Unity's Player Settings, or it can be overridden in the camera's 'Rendering Path' setting in the Inspector. In Unity there are three rendering paths: Vertex Lit, Forward Rendering and Deferred Rendering. If the graphics card can't handle the currently selected rendering path, Unity will fall back and use another one. For example, if deferred rendering isn't supported by the graphics card, Unity will automatically use forward rendering, and if forward rendering is not supported it will change to Vertex Lit. Since all shaders are influenced by the rendering path that is set, I will briefly describe what each rendering path does.

Vertex Lit

Vertex Lit is the simplest rendering path available. It has no support for real-time shadows and is commonly used on old computers with limited hardware. Internally it calculates lighting from all lights at the object's vertices in one pass. Since lighting is done at a per-vertex level, per-pixel effects are not supported.

Forward Rendering

Forward rendering renders each object in one or more passes, depending on the lights that affect the object. Lights are treated differently depending on their settings and intensity. When forward rendering is used, a number of lights affecting the object, set by the Pixel Light Count in the Quality settings, will be rendered using full per-pixel lighting. Additionally, up to four point lights are calculated per-vertex, and all other lights are computed as Spherical Harmonics, which is an approximation. Whether a light is rendered per-pixel depends on several conditions: lights with their Render Mode set to Not Important are always per-vertex or spherical harmonics, while lights with their Render Mode set to Important, as well as the brightest lights, are always calculated per-pixel. Forward rendering is the default rendering path in Unity.

Deferred Rendering

In deferred rendering there is no limit on the number of lights that affect an object, and all lights are calculated on a per-pixel basis. This means that all lights interact with normal maps and so on. Lights can also have cookies and shadows. Since all lights are calculated per-pixel, it also works correctly on large polygons. Deferred rendering is only available in Unity Pro.

Selecting a rendering path

ShaderLab

All shaders in Unity are wrapped in the material and shading language called 'ShaderLab', which organizes the shader's structure. Here is an example from the Unity documentation:

Shader "MyShader" {
    Properties {
        _MyTexture ("My Texture", 2D) = "white" { }
        // other properties like colors or vectors 
        // go here as well
    }
    SubShader {
        // here goes the 'meat' of your
        //  - surface shader or
        //  - vertex and fragment shader or
        //  - fixed function shader
    }
    SubShader {
        // here goes a simpler version of the SubShader above that
        // can run on older graphics cards
    }
}

In ShaderLab we start by defining a name for the shader, in this case "MyShader"; it will be listed in the Inspector under that name. Next, properties are defined, and these will be shown in the Material Inspector in Unity. Sub shaders are used to create a list of shaders that can be used on different types of hardware. If the GPU doesn't support a certain feature, Unity is able to fall back to a more simplified version of the shader: it will go through the list of sub shaders and pick the first one that is supported by the hardware. Note: the user must always specify at least one sub shader!

Shader "example" {
    // properties and subshaders here...
    Fallback "otherexample"
}

Optionally the user can also define a fallback, which tries to run a sub shader from another shader. The fallback gets used when none of the sub shaders you defined are able to run on the hardware. This is nice since we don't need to rewrite all the sub shader code from another shader.

Shaders are automatically compiled by the Unity editor once you have edited them. Because shaders are compiled by the editor you cannot create shaders from scripts at run-time.

Fixed function shaders

To start, I will show you how to create a new fixed function shader that binds a color property for a material, and after that we will walk through Unity's vertex lit shader. We can create a new shader from the Project panel in Unity by pressing Create – Shader, or from the top menu under Assets – Create – Shader. If we want to edit a shader we can simply double-click it in the Project view, similar to opening a script file; Unity will open MonoDevelop to start editing the shader file. After the shader is opened we first want to delete everything that Unity created by default, since Unity generates a basic template for a surface shader. We can recognize a fixed function shader by selecting it in the Project view and looking at the Inspector under Vertex and Pixel program.

Fixed Function Shader

Shader

First we need to start by giving our shader a name. Look at the ShaderLab body above to see the basic layout of the ShaderLab structure. We can use forward slashes in the shader name to put the shader in a custom folder in the Inspector. It should look like this:

Shader "Custom/MyCustomShader" {
}

Properties

Next we need to define some properties that we want to tweak in the Material Inspector. We start with the keyword 'Properties' followed by an opening bracket, so we can list the properties we want. Properties can be used by the shader; for example, we could set a color and a texture and combine the two into the final color of our material.

Let’s take a look at an example of a property.

Properties {
    _Color ("Main Color", Color) = (1.0, 1.0, 1.0, 1.0)
}

First the name of the property is set, in this case _Color. Next we specify a display name; this is how the property will be listed in the Material Inspector. Then we specify the type of the property and assign a default value to it. In our case it is a Color, which takes four floats (RGBA). Instead of the Color type we can also use other types, for example: Range, 2D, Rect, Cube, Float, and Vector. Range is presented as a slider in the Material Inspector and goes from min to max. 2D defines a 2D texture. Rect defines a rectangle (non-power-of-two) texture. Cube defines a cubemap texture, Float defines a floating point property, and Vector defines a four component vector. Texture properties can also be given property options which can, for example, set the texture coordinate generation of the texture.
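
As an illustration, a Properties block using several of these types might look like this (the property names below are made up for the example):

Properties {
    _Shininess ("Shininess", Range (0.01, 1)) = 0.7
    _MainTex ("Base Texture", 2D) = "white" { }
    _CubeMap ("Reflection Map", Cube) = "" { }
    _Strength ("Strength", Float) = 1.0
    _Offsets ("Offsets", Vector) = (0, 0, 0, 0)
}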

SubShader

Now we need to define at least one sub shader so our object can be displayed. As stated above, Unity will pick the first sub shader that runs on the graphics card. Each sub shader defines a list of rendering passes, and each pass causes the geometry to be rendered once. Generally speaking you want to use the minimum number of passes possible, since with every added pass performance goes down because the object is rendered again. A pass can be defined in three ways: a regular pass, a use pass, or a grab pass. The 'UsePass' command is used when we want to use a pass from another shader, which helps reduce code duplication. 'GrabPass' is a special pass: it grabs the contents of the screen where the object is about to be drawn into a texture, which can then be used for more advanced image based effects. A regular pass sets various states for the graphics hardware; for example, we could turn vertex lighting on or off, set the blending mode, or set fog parameters.

SubShader {
    Pass {
        Material {
            Diffuse [_Color]
        }
        Lighting On
    }
}

The material block binds the property values we set for _Color to the fixed function lighting material settings. The command Lighting On turns on the standard per-vertex lighting.
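
For comparison, here is a sketch of the other two pass types. The 'Specular/FORWARD' pass name follows the example in the Unity documentation, and _GrabTexture is the default texture name GrabPass uses when no name is given:

SubShader {
    // reuse the forward pass from the built-in Specular shader
    UsePass "Specular/FORWARD"

    // grab the screen behind the object into _GrabTexture
    GrabPass { }

    // a regular pass could now sample _GrabTexture
}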

Tags

Additionally, tags can be set up, which tell the rendering engine how and when sub shaders should be rendered. Tags must be specified inside the sub shader block, not inside the pass block.
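
A minimal sketch, using the built-in 'Queue' tag to render the sub shader after all opaque geometry:

SubShader {
    Tags { "Queue" = "Transparent" }
    // passes go here
}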

Fallback

A fallback will automatically be used when none of the previously described sub shaders are able to run on the hardware; it will then try to use a sub shader from another shader.

Category

Categories can be used to group multiple sub shaders that inherit the same rendering state. For example, the user can group several sub shaders with all of them having fog turned off.
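
A short sketch of such a grouping:

Shader "example" {
    Category {
        Fog { Mode Off }
        SubShader {
            // ...
        }
        SubShader {
            // ...
        }
    }
}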

A fixed function shader assigning a Color type

Vertex and fragment shaders

When using vertex and fragment shaders, the so-called 'fixed function pipeline' is turned off; for example, the 3D transformations that normally take place are disabled. This means that when we want to write a vertex/fragment program we need to rewrite the fixed function functionality ourselves that is normally built into APIs like OpenGL. Note that vertex and fragment shaders are typically used when the shader doesn't need to interact with lighting; if it does, a surface shader is suggested.

In Unity, shader programs in ShaderLab are written using the Cg programming language. 'Cg snippets' are compiled into low-level shader assembly by the Unity editor, which means that once you distribute your game files, only the low-level assembly code is included. This is how a general 'Cg snippet' looks in ShaderLab:

Pass {
    // ... the usual pass state setup ...

    CGPROGRAM
    // compilation directives for this snippet, e.g.:
    #pragma vertex vert
    #pragma fragment frag

    // the Cg code itself

    ENDCG
    // ... the rest of pass setup ...
}

The whole Cg code is placed inside the Pass block, between the CGPROGRAM and ENDCG keywords. The pragma directives indicate which function should be executed as the vertex program and which function should be executed as the fragment program. In this case 'vert' is the vertex program and 'frag' is the fragment program. The vertex program is executed at a per-vertex level, while the fragment program is executed at a per-pixel level. Unity also ships several files that bring in predefined variables and helper functions; these are pulled in with the standard include directive, like #include "UnityCG.cginc".

Similarly to fixed function shaders, we can specify properties that should be revealed in the Material Inspector. To use these properties in your 'Cg snippets' you need to define a variable with a matching name and a matching type; Unity will then automatically set the variable for us. This means that if you defined a property called _Color ("Main Color", Color) in the Properties block, you need a matching variable named _Color. In this case a float4 would be the correct type to use, since a Color consists of four floating point numbers.
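
Putting this together, here is a minimal sketch of a complete vertex/fragment shader that outputs a solid color. The shader name is made up for this example, and it reuses the _Color property from earlier:

Shader "Custom/SolidColor" {
    Properties {
        _Color ("Main Color", Color) = (1.0, 1.0, 1.0, 1.0)
    }
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            // matches the _Color property by name and type
            float4 _Color;

            struct v2f {
                float4 pos : SV_POSITION;
            };

            // runs per-vertex: transform from object space to clip space,
            // work the fixed function pipeline would otherwise do for us
            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
                return o;
            }

            // runs per-pixel: output the material color
            float4 frag (v2f i) : COLOR {
                return _Color;
            }
            ENDCG
        }
    }
}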

Note: the syntax might not look familiar to you. In case you want to learn more about the Cg programming language, I suggest heading over to the NVIDIA website. I added a few resource pages at the end of this article.

Surface shaders

Surface shaders in Unity are an easy way to write shaders that make use of lighting, instead of using low-level vertex and fragment shaders. Surface shaders are still written in Cg/HLSL. The surface shader compiler turns the surface shader into actual vertex and fragment shaders, as well as the rendering passes needed to handle forward and deferred rendering. Using surface shaders saves the user from typing repetitive code that calculates lighting.

The user needs to define a surface function that fills in the standard output of the surface shader, which describes properties of the surface like albedo color, normal, emission value, specularity and so on. This is what the output structure looks like:

struct SurfaceOutput {
    half3 Albedo;
    half3 Normal;
    half3 Emission;
    half Specular;
    half Gloss;
    half Alpha;
};

A surface shader is placed between the CGPROGRAM and ENDCG keywords, and it must be placed inside the SubShader block rather than inside a pass. To define a surface shader the user must use the #pragma surface directive. The complete directive looks like this:

#pragma surface surfaceFunction lightModel [optional parameters]

The surfaceFunction part indicates which Cg function contains the surface shader code. The function should have the following form: void surfaceFunction (Input IN, inout SurfaceOutput o)

Input (or IN) is a structure we should have defined ourselves. Input should contain any texture coordinates and extra automatic variables needed by surfaceFunction. The lightModel specifies which lighting model we want to use; there are several built-in lighting models available, like Lambert (diffuse) and BlinnPhong (specular). For a list of all the optional parameters, I suggest you read the Unity documentation on surface shaders, which is listed at the end of this article. The user is also able to write a custom lighting model that can be used in the surface shader; however, that won't be described in this article.

The Input structure we need to specify usually holds the texture coordinates needed by the shader. Texture coordinates must be named 'uv' followed by the texture name, or 'uv2' to indicate we are dealing with the second texture coordinate set. There are several other values that can be put in the Input structure, like the view direction, screen/world position, world normal, and so on.
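
For example, assuming the shader uses a texture property named _MainTex, an Input structure could look like this (viewDir and worldPos are two of the built-in values Unity can fill in):

struct Input {
    float2 uv_MainTex;  // texture coordinates for _MainTex
    float3 viewDir;     // built-in: view direction
    float3 worldPos;    // built-in: world space position
};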

So let’s take a look at a simple diffuse surface shader from the Unity manual.

Shader "Example/Diffuse Simple" {
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert
        struct Input {
            float4 color : COLOR;
        };
        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = 1;
        }
        ENDCG
    }
    Fallback "Diffuse"
}

You probably immediately notice that this surface shader is quite similar to the previously described fixed function shader, except that it has the additional lines we just described, like the #pragma directive indicating we are dealing with a surface shader. So what is new here?

The most important change is that inside the SubShader block we start with CGPROGRAM and end with ENDCG; all our Cg code is placed within this block. After this we have the pragma directive to indicate this is a surface shader. In this case we specify that surf is our surface function and that the lighting model to use is Lambert. Next there is an Input structure defined; this shader only specifies a color as input value.

After the Input structure, the surface function is defined, which takes the Input and a SurfaceOutput as parameters. The output is the actual output of the shader code. In this case we can see that the albedo of the output is set to 1, so the result of this shader is a white surface, lit with the Lambert (diffuse) lighting model we specified in the pragma directive.
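
As a next step, here is a sketch of the same shader extended to sample a texture; the _MainTex property follows the usual Unity naming convention:

Shader "Example/Diffuse Texture" {
    Properties {
        _MainTex ("Texture", 2D) = "white" { }
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert
        sampler2D _MainTex;  // matches the _MainTex property
        struct Input {
            float2 uv_MainTex;  // texture coordinates for _MainTex
        };
        void surf (Input IN, inout SurfaceOutput o) {
            // sample the texture and use its color as the surface albedo
            o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    Fallback "Diffuse"
}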

A surface shader creating a checker pattern

Conclusion

Shaders are quite an advanced and complex subject, and writing them isn't something you learn in one day. Hopefully I was able to give the reader a better understanding of the underlying rendering pipeline, how lighting is treated, and how shaders can be used to manipulate the resulting output. You have probably used materials all the time already, but hopefully you now have a better understanding of how they work internally. There is a lot of material to cover, so I didn't go into great detail on every render setting of a shader; I suggest you refer to the Unity Reference Manual for a more detailed look at shaders.

We learned:

  • That shaders in ShaderLab are written using the Cg programming language.
  • That Unity utilizes different rendering pipelines.
  • How materials and shaders are directly connected.
  • The differences between Fixed function, Vertex and Fragment, and Surface shaders.
  • That Surface Shaders are compiled into vertex and fragment shaders.
  • How all shader code is encapsulated in Unity’s ShaderLab.
  • That vertex/fragment shaders have two functions that describe both the vertex and fragment program.

Furthermore we took a look at:

  • How to write a basic ‘Fixed Function’ shader.
  • How surface shaders are defined and what they can do.
  • How we can define a structure.

References

The following references were used to gather the information required to write this article.

http://docs.unity3d.com/Documentation/Manual/Materials.html
http://docs.unity3d.com/Documentation/Components/Built-inShaderGuide.html
http://docs.unity3d.com/Documentation/Manual/Shaders.html
http://docs.unity3d.com/Documentation/Components/SL-Shader.html
http://docs.unity3d.com/Documentation/Components/SL-Reference.html
http://docs.unity3d.com/Documentation/Components/SL-SurfaceShaders.html
http://docs.unity3d.com/Documentation/Components/SL-ShaderPrograms.html
http://docs.unity3d.com/Documentation/Components/SL-RenderPipeline.html
http://docs.unity3d.com/Documentation/Components/Rendering-Tech.html
http://unity3d.com/unity/download/archive

Images Used

http://docs.unity3d.com/Documentation/Images/manual/Materials-1.jpg

Suggested reading:

http://blogs.unity3d.com/2010/07/17/unity-3-technology-surface-shaders/

http://http.developer.nvidia.com/CgTutorial/cg_tutorial_chapter01.html

Download Unity Project

[UnityShaders.zip]
