## Motivation

You should read this article if:

- You want to learn about fragment shaders
- You want to think about building complex multipass shaders but are new to the subject
- You want to follow on to build an outlined toon shader using this approach and need to understand the basics

## Introduction

In part #4 we produced a decent toon shader with an outline using rim detection – but that approach only really works on smooth, curved models. It relies on the normal gradually turning away from the camera as we approach the edge; on flat faces, where the normal is constant, that transition happens all at once or hardly at all – giving us an inappropriate edge effect.

There is a better way to produce an outline for a model – render the back faces, expanded slightly, in black, then render the front faces normally. That requires two passes over the model – and as you’ll probably remember, we can’t define a surface shader that does that.

So if we want this kind of outline we are going to have to write a fragment shader – no bad thing really, because it will teach us a lot about how shaders really work and about lighting, and it will allow us to create effects that are otherwise not possible in a surface shader.

The problem with writing a fragment shader is that we don’t have all of the surface shader magic to help us with the lighting process and we have to understand a lot more about the underlying techniques to create a decent render of our model.

This part of the tutorial teaches how to create a *bumped diffuse* shader in preparation for our more complex outline toon shader.

## Vertex & Fragment Shaders

So we are going to write a traditional vertex and fragment shader rather than a surface shader – this means defining two functions: one that takes each vertex of the model and converts it into a screen position, and one that provides the colour for a pixel.

So firstly “woo hoo” only two things to worry about – secondly “uh oh” looks like we might be responsible for a bit more processing.

Our vertex program is going to take “model space” data and convert it into *at least* a position for the vertex on the screen. It will also potentially output UV coordinates and other values that would be needed in our *fragment* program.

The system is going to *interpolate* the values from our vertex program across the pixels that will be rendered to the screen and call our *fragment* program with the values it comes up with.

Our *fragment* program is going to take the interpolated values and convert them into the final color for a pixel on the screen.

We’ll start with a single pass of the data, and build a simple diffuse and diffuse bumped shader – before moving on to multiple passes. When we use multiple passes each one will have a different vertex and fragment program, and will pass different information between them.

## A Diffuse, Vertex Lit Fragment Shader

Ok so let’s build our first fragment shader. We’ll start by defining a Pass using Pass { //our code } inside our SubShader.

The Pass can set a number of render states for that stage – for example we could Cull either back or front faces, write to the Z buffer or whatever. Our diffuse shader is just going to cull back faces (those facing away from the camera).

Remember that each pixel written *can* also have a value written to the Z buffer, which is the distance away from the camera of that pixel – this is so other pixels can choose whether to be drawn or not depending on whether they are closer or further away from the camera.

We turn Z buffer writing on or off with the command ZWrite On|Off. Normally a pixel will not be written if a closer pixel has already been drawn because some shader rendered a model before our shader got a go – we can control this with the ZTest command: ZTest Less | Greater | LEqual | GEqual | Equal | NotEqual | Always, which changes the test used when pixels in our shader are written.
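The decision each ZTest mode makes can be sketched as a simple comparison; here is an illustrative Python snippet (not Unity code – the function name is made up for the example):

```python
# Hypothetical sketch of a depth test: how each ZTest mode decides whether
# a newly rendered pixel replaces the depth already stored in the Z buffer.
# (Smaller depth = closer to the camera.)

def depth_test(new_depth, stored_depth, mode="LEqual"):
    """Return True if the new pixel should be written."""
    tests = {
        "Less":     new_depth <  stored_depth,
        "LEqual":   new_depth <= stored_depth,
        "Greater":  new_depth >  stored_depth,
        "GEqual":   new_depth >= stored_depth,
        "Equal":    new_depth == stored_depth,
        "NotEqual": new_depth != stored_depth,
        "Always":   True,
    }
    return tests[mode]

# A closer pixel passes the usual LEqual test...
assert depth_test(0.3, 0.7) is True
# ...while a pixel behind the one already drawn is rejected.
assert depth_test(0.9, 0.7) is False
```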

Here’s the start of our Pass definition:

```
Pass {
    Cull Back
    Lighting On
    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"
    //More code here
}
```

We define the pass, tell it to turn on lighting and cull back faces. We then define the start of our CG program and specify the names of the *vertex* and *fragment* programs. Finally we include a file of useful Unity definitions that we will need in our CG program.

### Vertex Lit?

So we have a couple of choices when we write a shader – we can define multiple passes so that each light gets a go at our program, or we can take into account all of the lights at the vertices and interpolate the result. The latter is much faster and only requires a single pass, while the former is more accurate.

If we write a vertex lit shader then we have to take into consideration all of the lights and their impact on the vertices. If we write a multipass shader then it gets called multiple times, once for each light that is affecting our model.

Unity has a specially written function that helps us write vertex lit shaders that we will see in the next section.

## The Vertex Program

Ok so let’s define our *vertex* function – firstly it needs to get some information about the model, so we define a structure for that:

```
struct a2v {
    float4 vertex : POSITION;
    float3 normal : NORMAL;
    float4 texcoord : TEXCOORD0;
};
```

This structure *relies* on *semantics* – those : XXXX values. It doesn’t matter what we call our variables; they will be filled with values based on their : XXXX semantics. Here we are getting the position of the vertex in *model space* and the direction of the normal, and we are also getting the texture coordinate from uv1. We’ll look at the full set of values we can get in a moment – for now this is enough for our shader.

*Spaces* are very important in fragment shaders. The concept of a *space* is basically what the coordinates are relative to.

- In *model space* the coordinates are relative to the base mesh’s 0,0,0. Our *vertex* function needs to convert these coordinates into *projection* space, where we will actually be rendering them relative to the camera.
- In *tangent space* the coordinates are relative to the *face* of the model that we are rendering – we use this for bump maps and will cover it in some detail later.
- In *world space* the coordinates are relative to the world 0,0,0 coordinate.
- In *projection space* the coordinates are relative to the camera (so in this space the camera is at 0,0,0).

You’ll often hear shader programmers talk about which *space* someone decides to *light* their model in. This can be confusing to start with – it’s basically which *coordinate system* you convert your light directions and positions into in order to run the calculation that works out the final colour of a pixel. Hopefully by the end of this tutorial you will also be able to have such conversations!

So having defined the input to our *vertex* program as the position, normal and uv from the underlying mesh – we also need to know what to output from it. Remember that what we output from the *vertex* function will be *interpolated* across the pixels being rendered, and this interpolated value becomes the **input** to our *fragment* function.

```
struct v2f {
    float4 pos : POSITION;
    float2 uv;
    float3 color;
};
```

So these are our return values. *Semantics* are far less important here – save that we **must** return one value tagged as *POSITION* which is the position of the vertex converted into *projection* *space*. All of the values we output (that don’t have the qualifier *uniform*) are going to be interpolated for our *fragment* function. We actually keep unvarying values in variables in our shader – just like we did in the *surface* shaders in the previous examples.

It’s a good idea to do as much work as possible in the *vertex* function, because it is called only once per vertex in our model, while the *fragment* function is called once per rendered pixel.

Ok, enough already, here’s the actual *vertex* function – converting a2v (our input) to v2f (the input for our *fragment* function).

```
v2f vert (a2v v) {
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
    o.color = ShadeVertexLights(v.vertex, v.normal);
    return o;
}
```

So first we define an instance of our output. Then we transform *model* space to *projection* space by multiplying the vertex’s position by a predefined matrix we get from Unity (by including UnityCG.cginc). UNITY_MATRIX_MVP is a matrix that converts a model’s vertex position to *projection* space coordinates. We use the matrix multiplication function *mul* to perform the transformation.
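What *mul* does here is ordinary matrix arithmetic – a 4x4 matrix times a homogeneous (x, y, z, 1) position. A minimal Python sketch (the matrix below is a made-up translation, not a real MVP matrix):

```python
# Illustrative sketch of mul(matrix, float4): multiply a 4x4 matrix
# (list of rows) by a 4-component column vector.

def mul(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A hypothetical matrix that just translates the model +2 on x:
translate_x2 = [
    [1, 0, 0, 2],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

vertex = [1.0, 5.0, 0.0, 1.0]   # model-space position, w = 1
assert mul(translate_x2, vertex) == [3.0, 5.0, 0.0, 1.0]
```

A real MVP matrix combines the model, view and projection transforms, but the mechanics are exactly this.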

The next line works out a *uv* for a given texture – in other words the actual texture position that relates to the uv from the underlying mesh. We use a built-in *macro* from UnityCG.cginc to do that.

For the TRANSFORM_TEX macro to work we must define *float4* variables called _YourTextureName_ST (your texture’s name with _ST appended).
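The transformation the macro performs amounts to a scale and an offset on the uv – the _ST float4 holds the texture’s tiling in xy and its offset in zw. A hedged Python sketch of the idea (the function name is invented for illustration):

```python
# Sketch of what TRANSFORM_TEX expands to: uv * tiling + offset,
# where the _ST float4 is (tiling.x, tiling.y, offset.x, offset.y).

def transform_tex(uv, st):
    tiling_x, tiling_y, offset_x, offset_y = st
    return (uv[0] * tiling_x + offset_x, uv[1] * tiling_y + offset_y)

# Tile the texture twice in each direction, shifted by 0.5 in u:
assert transform_tex((0.25, 0.5), (2.0, 2.0, 0.5, 0.0)) == (1.0, 1.0)
```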

Finally we work out a base color for this vertex – this is the color of the *light falling on it* – and for this first shader we call *ShadeVertexLights*, passing the model vertex and normal. This built-in function takes the 4 nearest lights into consideration in addition to ambient light.

Then we return our output structure, ready for processing.

Remember that when our *fragment* function is called, the system will be interpolating between the vertices of a triangle – i.e. between the values returned by 3 separate calls to the *vert* function.

## The Fragment Program

Ok so now we want to work out the colour of a particular pixel given our interpolated input structure.

```
float4 frag (v2f i) : COLOR {
    float4 c = tex2D(_MainTex, i.uv);
    c.rgb = c.rgb * i.color * 2;
    return c;
}
```

Our fragment program uses the familiar texture lookup to get the colour of the texture pixel for this screen pixel, multiplies that by the interpolated light colour coming from the *vertex* function, and multiplies the whole thing by 2 (multiplying by 2 is what all these shaders seem to do – without it the result is too dark).

Then we return the colour of the pixel.

## Source Code

Ok, so together with our variable definitions, our whole diffuse vertex lit shader looks like this:

```
Shader "Custom/OutlineToonShader" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200

        Pass {
            Cull Back
            Lighting On
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float4 _MainTex_ST;

            struct a2v {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
                float4 texcoord : TEXCOORD0;
            };

            struct v2f {
                float4 pos : POSITION;
                float2 uv;
                float3 color;
            };

            v2f vert (a2v v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
                o.color = ShadeVertexLights(v.vertex, v.normal);
                return o;
            }

            float4 frag (v2f i) : COLOR {
                float4 c = tex2D(_MainTex, i.uv);
                c.rgb = c.rgb * i.color * 2;
                return c;
            }
            ENDCG
        }
    }
    FallBack "Diffuse"
}
```

At the end of all that we have a diffuse shader we could have written in a couple of lines of *surface* shader code! As Spiderman will tell you – with great power comes great responsibility.

To recap, with *vertex* and *fragment* programs we are responsible for:

- Converting the vertex from model space to projection space
- Creating texture coordinates from model uvs if needed
- Calculating all of the lighting
- Applying the lighting colour to the texture colour to get a final pixel colour

## Normal Maps

So let’s think about how a normal map is stored – you’ll know that you have to set a special option on a texture that will be used as a normal map in Unity? There’s a good reason for this…

A normal map is:

- the normal vector on the surface to which it relates – in other words, the normal in *tangent* space.
- stored pointing outwards only; it needs to be multiplied by 2 and have 1 subtracted from it to make it a fully 3D normal.
- stored in an image (texture) format.
- compressed when it is stored. In fact, because the normals in a normal map are normalised (they have a length of 1 – in other words they are *unit* vectors), the map only needs to store 2 components of the vector, because the third can always be derived from the constraint that the whole vector is unit length.
- stored with 16 bits of precision for each component (2 channels of rgba per component).
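The decode and reconstruction steps described above can be sketched numerically; this is an illustrative Python version, not Unity’s actual UnpackNormal:

```python
import math

# Illustrative decode of a stored normal map value: a component stored in
# 0..1 becomes a -1..1 component via n * 2 - 1, and a dropped third
# component can be rebuilt because the whole vector is unit length.

def decode_component(stored):
    return stored * 2.0 - 1.0

def rebuild_z(x, y):
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

# A "flat" normal map texel stores (0.5, 0.5), which decodes to (0, 0)...
x, y = decode_component(0.5), decode_component(0.5)
assert (x, y) == (0.0, 0.0)
# ...and the rebuilt z gives the unit normal (0, 0, 1), pointing straight out.
assert rebuild_z(x, y) == 1.0
```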

When we apply a normal map we need to combine the normal of the triangle the texture applies to with the normal of the particular pixel. This turns out to be not so easy as might first be thought.

As all normals are *normalized* we can’t simply add the two vectors together (the normal for the face and the normal for the pixel) to get the final world normal for the pixel, because that has the effect of averaging them.

An averaged normal would flatten out the detail – which is **not** what a normal map should be doing.

So it turns out that if we want a bump map then we should *light our model in tangent space*. In other words, we need to convert everything we need for the calculation of how much light is falling on a pixel into tangent space and work it out there. We also need to calculate the light per pixel, not per vertex as we did in our first example.

Lighting the model in *world* space would mean converting every normal from the normal map to world space – we’d have to do it on a per-pixel basis, which is a lot more calculation.

Fortunately converting to *tangent* space is fairly easy – but I’m afraid we are going to have to dispense with the very easy *ShadeVertexLights* and look at writing our own lighting calculation.

## Lighting Our Model

Ok, we’ve been skirting around this manual lighting thing – writing surface shaders and calling *ShadeVertexLights* – but really it’s not that hard. We saw it in the lighting model in the toon surface shader. We will use Lambert lighting, and it’s just the normal of the point **dot** the light direction, times the attenuation, times 2.

So long as we can convert everything into the same *space* it’s very simple.
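The Lambert calculation itself is tiny; here is an illustrative Python sketch of it (assuming both vectors are already unit length):

```python
# Illustrative Lambert term: N dot L, clamped to zero for surfaces facing
# away from the light, times attenuation, times 2. (Plain Python sketch -
# the real thing runs in the fragment program.)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, light_dir, atten):
    return max(0.0, dot(normal, light_dir)) * atten * 2.0

# Light shining straight down the normal, with no attenuation loss:
assert lambert((0, 0, 1), (0, 0, 1), 1.0) == 2.0
# Light coming from behind the surface contributes nothing:
assert lambert((0, 0, 1), (0, 0, -1), 1.0) == 0.0
```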

### But Where Are The Lights?

Unity conveniently provides us with the position, colour and attenuation of the lights affecting our model – in order of importance (closeness etc).

Vertex lights are defined as three arrays: *unity_LightPosition*, *unity_LightAtten* and *unity_LightColor*. The [0] entry is the most important light etc.

When we write a multi-pass lighting model (as we will next) we only deal with a single light at a time – in that case Unity also defines a *_WorldSpaceLightPos0* value that we can use to work out where the light is, and a very helpful *ObjSpaceLightDir* function that will work out the direction to it. To get the color of the light we have to either declare an extra variable in our CG program or include “Lighting.cginc”:

```
uniform float4 _LightColor0; //Define the current light's colour variable
```

## Diffuse Normal Map Shader – Forward Lighting (Not Vertex Lit)

We are going to move away from vertex lights in this shader and eventually define multiple passes for each light.

Ok – let’s do it. We add a property and **two** variables for our normal map (remember we need a sampler2D called _XXXX and a float4 _XXXX_ST variable).

Now we need to build our data structures for the *vertex* and *fragment* programs:

```
struct a2v {
    float4 vertex : POSITION;
    float3 normal : NORMAL;
    float4 texcoord : TEXCOORD0;
    float4 tangent : TANGENT;
};
```

We’ve added a : TANGENT to our input structure. We are going to use this to convert light directions to *tangent* space.

### Tangent Space Conversion

To convert a vector in *object* space to *tangent* space we need an extra two vectors defined for the vertex. Normally for a vertex we have its position and its normal – the *tangent* is orthogonal (at right angles) to the normal, and together with the *binormal*, which is the *cross product* of the *normal* and the *tangent*, we can build a matrix that performs the conversion.

Helpfully for us, UnityCG.cginc defines a macro called TANGENT_SPACE_ROTATION which does this calculation for us and provides a matrix called *rotation* that will convert *object* space coordinates to *tangent* space.
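What the macro builds can be sketched numerically – the binormal is the cross product of normal and tangent, and the three vectors form the rows of the rotation matrix. (An illustrative Python version; Unity’s real macro also multiplies the binormal by tangent.w to handle mirrored UVs.)

```python
# Illustrative sketch of the tangent-space rotation matrix: rows are
# (tangent, binormal, normal), where binormal = cross(normal, tangent).

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def tangent_space_rotation(normal, tangent):
    binormal = cross(normal, tangent)
    return (tangent, binormal, normal)   # rows of the rotation matrix

def rotate(m, v):
    return tuple(dot(row, v) for row in m)

# A face whose normal is +z and tangent is +x:
rotation = tangent_space_rotation((0, 0, 1), (1, 0, 0))
# The object-space normal itself lands on tangent-space +z, as expected:
assert rotate(rotation, (0, 0, 1)) == (0, 0, 1)
```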

### Output from Vertex to Fragment Programs

To light our model we want to calculate the *tangent* space direction of our light in the *vertex* function and interpolate it across the surface of our triangles to improve performance in our *fragment* code. Therefore we are going to have to output the vectors to our light.

```
struct v2f {
    float4 pos : POSITION;
    float2 uv, uv2;
    float3 lightDirection;
};
```

lightDirection will be the interpolated light direction vector. uv2 will be the texture coordinate of the bump map.

Note that this shader handles *directional* and *point* lights only. We are not considering spotlight angle in this example.

## The Vertex Program

```
v2f vert (a2v v) {
    v2f o;
    TANGENT_SPACE_ROTATION;
    o.lightDirection = mul(rotation, ObjSpaceLightDir(v.vertex));
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
    o.uv2 = TRANSFORM_TEX(v.texcoord, _Bump);
    return o;
}
```

In the vertex program we use the TANGENT_SPACE_ROTATION macro to create our *rotation* matrix that converts **object** space to *tangent* space.

For the macro to work, the input structure must be called *v* and it must contain a normal called *normal* and a tangent called *tangent*.

We then calculate the direction in object space to the light we are dealing with (at the moment the most important light) using the built-in function *ObjSpaceLightDir(v.vertex)*. We want the light direction in *object* space because we have a transformation from that to *tangent* space – which we immediately apply by multiplying our new *rotation* matrix by the direction.

Finally we work out the *projection* space position of the vertex (remember we are **required** to do this and store it in a : POSITION output variable) and the uv coordinate of our texture.

### Directional and Point Lights

Unity stores the light’s position in a float4 – there’s a 4th element, in other words. The way this works is that if the light is directional then *xyz* will be the direction of the light and *w* (the last entry) will be 0. If it’s a point light then *xyz* will be the position of the light and *w* will be 1. You are probably thinking – **so?**

What ObjSpaceLightDir does is subtract the position of the vertex from the position of the light – but first it multiplies the position of the vertex by the *w* component of the light’s position. In other words, if this is a directional light we subtract 0 from the position (because it was actually a direction all along), and if it’s a point light we get a vector whose magnitude is the distance to the light.
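That w trick reduces to one line of arithmetic; an illustrative Python sketch (not the real UnityCG.cginc function, which also transforms the light position into object space first):

```python
# Illustrative version of the w-component trick:
# light direction = lightPos.xyz - vertex.xyz * lightPos.w

def obj_space_light_dir(vertex, light_pos):
    x, y, z, w = light_pos
    return (x - vertex[0] * w, y - vertex[1] * w, z - vertex[2] * w)

vertex = (1.0, 2.0, 3.0)

# Directional light: w = 0, so the vertex position drops out and we just
# get the stored direction back, whatever the vertex is.
assert obj_space_light_dir(vertex, (0.0, 1.0, 0.0, 0.0)) == (0.0, 1.0, 0.0)

# Point light: w = 1, so we get the vector from the vertex to the light,
# whose magnitude is the distance to the light.
assert obj_space_light_dir(vertex, (1.0, 2.0, 7.0, 1.0)) == (0.0, 0.0, 4.0)
```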

## The Fragment Function

In the fragment function we are going to unpack the normal from its encoded format in the texture map and use that, on its own, as the normal for our Lambert function. That’s because all that tangent space rotation of the light direction has already taken account of the *normal* of the face of the model we are rendering.

```
float4 frag (v2f i) : COLOR {
    float4 c = tex2D(_MainTex, i.uv);
    float3 n = UnpackNormal(tex2D(_Bump, i.uv2));

    float3 lightColor = UNITY_LIGHTMODEL_AMBIENT.xyz;
    float lengthSq = dot(i.lightDirection, i.lightDirection);
    float atten = 1.0 / (1.0 + lengthSq * unity_LightAtten[0].z);
    //Angle to the light
    float diff = saturate(dot(n, normalize(i.lightDirection)));
    lightColor += _LightColor0.rgb * (diff * atten);
    c.rgb = lightColor * c.rgb * 2;
    return c;
}
```

We first start with the base colour of the ambient light.

Next we work out how far away our real light is. If this is a directional light then the vector is already normalized, so the squared distance will be 1 (no effect). Then we work out the final attenuation of the light by multiplying the squared distance from the light (*dot*(v,v) is, helpfully, the square of v’s magnitude) by the falloff factor of the light (represented in unity_LightAtten).

For a directional light we are multiplying by 1/(1 + attenuation factor) – in other words we are dividing the underlying colour (and hence brightness) by the attenuation factor + 1. **This is why we multiply the final colour by 2**.

For a point light we are also making its brightness fall off in relation to the square of the distance to the light.
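Both cases drop out of the same formula; a quick Python check of the arithmetic:

```python
# Illustrative attenuation check: atten = 1 / (1 + lengthSq * falloff).
# For a directional light the direction is already normalised, so
# lengthSq is 1; for a point light lengthSq grows with distance squared.

def attenuation(light_dir, falloff):
    length_sq = sum(c * c for c in light_dir)
    return 1.0 / (1.0 + length_sq * falloff)

# Directional light (unit vector): with a falloff of 1 we get 1/2 - which
# the final "* 2" in the fragment program cancels back out.
assert attenuation((0.0, 0.0, 1.0), 1.0) == 0.5

# A point light further away is attenuated by the square of the distance:
near = attenuation((0.0, 0.0, 2.0), 1.0)   # lengthSq = 4
far = attenuation((0.0, 0.0, 4.0), 1.0)    # lengthSq = 16
assert near == 1.0 / 5.0 and far == 1.0 / 17.0
```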

Then we *dot* the normalised light direction with the normal we got from the bump map and now we can apply our light’s color to that in combination with the attenuation. (Remember we’ve already rotated the light’s direction to take into account the normal of the face of the model we are rendering).

To get the color of the light we are using _LightColor0 – this needs to be declared in our shader program (or we have to include “Lighting.cginc”). For now we’ll just define it after the include of UnityCG.cginc.

```
uniform float4 _LightColor0;
```

Job done.

Complete shader:

```
Shader "Custom/OutlineToonShader" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _Bump ("Bump", 2D) = "bump" {}
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200

        Pass {
            Cull Back
            Lighting On
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            uniform float4 _LightColor0;

            sampler2D _MainTex;
            sampler2D _Bump;
            float4 _MainTex_ST;
            float4 _Bump_ST;

            struct a2v {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
                float4 texcoord : TEXCOORD0;
                float4 tangent : TANGENT;
            };

            struct v2f {
                float4 pos : POSITION;
                float2 uv : TEXCOORD0;
                float2 uv2 : TEXCOORD1;
                float3 lightDirection;
            };

            v2f vert (a2v v) {
                v2f o;
                TANGENT_SPACE_ROTATION;
                o.lightDirection = mul(rotation, ObjSpaceLightDir(v.vertex));
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
                o.uv2 = TRANSFORM_TEX(v.texcoord, _Bump);
                return o;
            }

            float4 frag (v2f i) : COLOR {
                float4 c = tex2D(_MainTex, i.uv);
                float3 n = UnpackNormal(tex2D(_Bump, i.uv2));

                float3 lightColor = UNITY_LIGHTMODEL_AMBIENT.xyz;
                float lengthSq = dot(i.lightDirection, i.lightDirection);
                float atten = 1.0 / (1.0 + lengthSq);
                //Angle to the light
                float diff = saturate(dot(n, normalize(i.lightDirection)));
                lightColor += _LightColor0.rgb * (diff * atten);
                c.rgb = lightColor * c.rgb * 2;
                return c;
            }
            ENDCG
        }
    }
    FallBack "Diffuse"
}
```

## Handling Multiple Lights in Forward Mode

Ok so we’ve managed to shade this for one light – but only one. To handle more lights we are going to have to write another pass and start adding some tags to tell the system what we want done for each light.

We really only need 2 if you think about it:

- A pass to render the first light exactly as we have it now
- A pass to **add on** the information from each subsequent light

We mark our existing pass with a tag:

```
Tags { "LightMode" = "ForwardBase" }
```

Telling the system this is what should happen for the most important light.

Then we will copy and paste our pass to make a second one, but change the tag to:

```
Tags { "LightMode" = "ForwardAdd" }
```

And add a command telling it how to blend the colours.

```
Blend One One
```

In other words, add 1 * the current value to 1 * the new value. Then we remove the reference to UNITY_LIGHTMODEL_AMBIENT in our second pass copy of the fragment shader, because we’ve already handled ambient in the first pass. Our final shader with both passes looks like this:

```
Shader "Custom/OutlineToonShader" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _Bump ("Bump", 2D) = "bump" {}
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200

        Pass {
            Tags { "LightMode"="ForwardBase" }
            Cull Back
            Lighting On
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            uniform float4 _LightColor0;

            sampler2D _MainTex;
            sampler2D _Bump;
            float4 _MainTex_ST;
            float4 _Bump_ST;

            struct a2v {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
                float4 texcoord : TEXCOORD0;
                float4 tangent : TANGENT;
            };

            struct v2f {
                float4 pos : POSITION;
                float2 uv : TEXCOORD0;
                float2 uv2 : TEXCOORD1;
                float3 lightDirection;
            };

            v2f vert (a2v v) {
                v2f o;
                TANGENT_SPACE_ROTATION;
                o.lightDirection = mul(rotation, ObjSpaceLightDir(v.vertex));
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
                o.uv2 = TRANSFORM_TEX(v.texcoord, _Bump);
                return o;
            }

            float4 frag (v2f i) : COLOR {
                float4 c = tex2D(_MainTex, i.uv);
                float3 n = UnpackNormal(tex2D(_Bump, i.uv2));

                float3 lightColor = UNITY_LIGHTMODEL_AMBIENT.xyz;
                float lengthSq = dot(i.lightDirection, i.lightDirection);
                float atten = 1.0 / (1.0 + lengthSq);
                //Angle to the light
                float diff = saturate(dot(n, normalize(i.lightDirection)));
                lightColor += _LightColor0.rgb * (diff * atten);
                c.rgb = lightColor * c.rgb * 2;
                return c;
            }
            ENDCG
        }

        Pass {
            Tags { "LightMode"="ForwardAdd" }
            Cull Back
            Lighting On
            Blend One One
            CGPROGRAM
            #pragma exclude_renderers xbox360 flash
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            uniform float4 _LightColor0;

            sampler2D _MainTex;
            sampler2D _Bump;
            float4 _MainTex_ST;
            float4 _Bump_ST;

            struct a2v {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
                float4 texcoord : TEXCOORD0;
                float4 tangent : TANGENT;
            };

            struct v2f {
                float4 pos : POSITION;
                float2 uv : TEXCOORD0;
                float2 uv2 : TEXCOORD1;
                float3 lightDirection;
            };

            v2f vert (a2v v) {
                v2f o;
                TANGENT_SPACE_ROTATION;
                o.lightDirection = mul(rotation, ObjSpaceLightDir(v.vertex));
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
                o.uv2 = TRANSFORM_TEX(v.texcoord, _Bump);
                return o;
            }

            float4 frag (v2f i) : COLOR {
                float4 c = tex2D(_MainTex, i.uv);
                float3 n = UnpackNormal(tex2D(_Bump, i.uv2));

                //No ambient term here - the base pass has already added it
                float3 lightColor = float3(0, 0, 0);
                float lengthSq = dot(i.lightDirection, i.lightDirection);
                float atten = 1.0 / (1.0 + lengthSq * unity_LightAtten[0].z);
                //Angle to the light
                float diff = saturate(dot(n, normalize(i.lightDirection)));
                lightColor += _LightColor0.rgb * (diff * atten);
                c.rgb = lightColor * c.rgb * 2;
                return c;
            }
            ENDCG
        }
    }
    FallBack "Diffuse"
}
```
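The additive blending that the second pass relies on is easy to verify numerically; here is an illustrative Python sketch of what Blend One One asks the hardware to do (including the clamp to 1 of an LDR frame buffer):

```python
# Illustrative Blend One One: the frame buffer ends up holding
# 1 * existing colour + 1 * incoming colour, clamped to 1.

def blend_one_one(dst, src):
    return tuple(min(1.0, d + s) for d, s in zip(dst, src))

base_pass = (0.25, 0.5, 0.0)     # colour written by the ForwardBase pass
extra_light = (0.25, 0.25, 0.75) # contribution from one ForwardAdd pass

assert blend_one_one(base_pass, extra_light) == (0.5, 0.75, 0.75)
```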

## Conclusion

This has been an introduction to vertex and fragment shaders. In the next section we will build on this base to make a much better toon shader.