## Motivation

You should read this article if you want to:

- Understand how to write a multipass toon shader
- Learn more about the different *spaces* we can shade in and how that can be useful
- Get to grips with a practical fragment shader
- Learn about matrix multiplication and the built-in matrices and how to use them

This tutorial is much more practical than the technical introduction to vertex and fragment shaders in #5.

## Planning The Shader

Ok so we want to produce an outlined toon shader – simplified lighting and colors. What we need to do is:

- Draw an outline for the model
- Apply the toon shader principles from #4 to our vertex and fragment programs

## Drawing An Outline

There are many ways to skin a cat, it's said, and in part #4 we tried some rim lighting/edge detection to give our character an outline. Now we are going to use another *pass* to make a better job of it.

One of the building blocks of toon shading is that to draw an outline you can actually just render the part of the model you can’t see (the back faces) scaled up and in black. The idea goes that these will then be a good outline that isn’t destroying the fidelity of the front faces of your model – which is the effect of black lining the edges of the model as we did in #4.

So our first attempt at that will be to:

- Write a *pass* that draws back faces only
- Move all of the vertices so that they are bigger

Ok so a pass that only draws back faces:

```c
Pass {
    Cull Front
    Lighting Off
}
```

Now let's consider the easy part – let's get this pass to always draw pixels in black!

```c
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

//The rest of our code needs to
//go here

float4 frag (v2f IN) : COLOR {
    return float4(0,0,0,1);
}
ENDCG
```

A fragment program that returns float4(0,0,0,1) – totally opaque black.

So we now need the input structures for the vertex and fragment parts of our shader. We're going to expand the faces along the *normal* of each vertex – that points outwards from the surface, so it should give us a useful direction. All we really need in this pass, then, is the *position* and the *normal* for each vertex:

```c
struct a2v {
    float4 vertex : POSITION;
    float3 normal : NORMAL;
};

struct v2f {
    float4 pos : POSITION;
};
```

Next we define an *_Outline* property as a small range in our properties (the completed shader at the end of this article uses 0..0.15) and add a matching *_Outline* variable to our code.
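In ShaderLab that's a one-line entry in the *Properties* block – these are the values used in the completed shader at the end of this article:

```c
Properties {
    _Outline ("Outline", Range(0, 0.15)) = 0.08
}
```

The matching `float _Outline;` declaration then goes inside the CGPROGRAM block so the programs can read the value.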

And finally we can write our vertex program to expand the vertices along their normals:

```c
float _Outline;

v2f vert (a2v v) {
    v2f o;
    o.pos = mul( UNITY_MATRIX_MVP, v.vertex + (float4(v.normal,0) * _Outline));
    return o;
}
```

So what this does is take the vertex, add on a proportion of the normal as specified by our new property, and then convert the result to *projection* space using a handy Unity-supplied matrix that does just that.

Matrices are used to convert many things in shaders. The principle is that if you have a square matrix, suitably populated with values, and you multiply it by a single-column matrix, the result is another single-column matrix containing the transformed values. A position is just 3 or 4 values, which can be treated as that single-column matrix, so given a suitable transformation matrix we can easily produce the output we want.

Unity has a bunch of predefined matrices that are constructed for the current view and can convert things from a series of different *spaces* to other ones.
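If matrix multiplication is new to you, here is a minimal illustration in plain Python – nothing Unity-specific, and the matrix values are made up for the example:

```python
def mat_vec(m, v):
    # Multiply a 4x4 matrix (list of rows) by a 4-element column vector:
    # each output element is the dot product of a matrix row with the vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A transform that translates by (2, 3, 4) - the kind of job a
# model-to-world matrix does when positioning an object.
translate = [
    [1, 0, 0, 2],
    [0, 1, 0, 3],
    [0, 0, 1, 4],
    [0, 0, 0, 1],
]

position = [1, 1, 1, 1]   # w = 1 marks this as a position, not a direction
print(mat_vec(translate, position))  # -> [3, 4, 5, 1]
```

Note that a direction (w = 0) passed through the same matrix would be unaffected by the translation part – which is exactly why we set the w component to 0 when we build a float4 from a normal.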

So currently our *pass* looks like this:

```c
Pass {
    Cull Front
    Lighting Off

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    struct a2v {
        float4 vertex : POSITION;
        float3 normal : NORMAL;
    };

    struct v2f {
        float4 pos : POSITION;
    };

    float _Outline;

    v2f vert (a2v v) {
        v2f o;
        o.pos = mul( UNITY_MATRIX_MVP, v.vertex + (float4(v.normal,0) * _Outline));
        return o;
    }

    float4 frag (v2f IN) : COLOR {
        return float4(0,0,0,1);
    }
    ENDCG
}
```

The result of this looks like this:

So it kinda works, but due to some of the faces in the model we're getting a strange effect around the mouth and the eyebrows. The problem is that our outline is writing to the Z buffer, so in a few circumstances the front faces aren't getting drawn, which makes it look like a poor attempt.

We could try fixing that by turning off Z buffer writing for this pass – that's easy, we just need to modify the top of the pass.

```c
Pass {
    Cull Front
    Lighting Off
    ZWrite Off
}
```

Now as you will see, the strange effects on the face are gone – but you should be able to spot why I’ve put two models next to each other! Look at the overlap between them.

The back model's front faces are overwriting the front model's outline – that won't do at all; the outline would keep vanishing whenever models overlap.

So we actually need to turn Z writing back on.

Ok, so the problem with the first attempt is that some of the vertices are obviously pointing in very different directions, and to get the outline we want we have to scale some of them too much.

What we really need to do is make the vertices more like a silhouette rather than a real model. If we pancake the Z component of the back vertices as we look at them (almost remove it) then the dominant factor will be the x and y components that make our outline!

So it’s back to our vertex program and time for some more matrix action.

### Pancake The Back Faces

Our first challenge is that the vertex and normal we are getting are in model space – but we actually need them to be in *view* space in order for z to really be the pancaking direction…

So we’re going to take on board a couple more Unity matrices.

First, rather than converting the vertex straight to *projection* space, we will convert it to *view* space – that's easy, just a different matrix.

Then we need to convert the *normal* to the same space – this is a bit trickier, because normals don't transform the same way as positions: they have to stay orthogonal to the surface. The practical upshot is that we have to convert them using the inverse transpose of the ModelView matrix – luckily Unity provides that one too. Once we've converted the normal, its z is the thing we want to minimise.
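If you want to see why the inverse transpose is needed, here is a small Python sketch – plain lists, purely illustrative, with a made-up non-uniform scale:

```python
def mat_vec3(m, v):
    # multiply a 3x3 matrix (list of rows) by a 3-element vector
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# A non-uniform scale: x stretched by 2.
M = [[2, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]

# Its inverse transpose (M is diagonal, so just the reciprocals).
M_it = [[0.5, 0, 0],
        [0,   1, 0],
        [0,   0, 1]]

# A surface tangent and its normal, perpendicular before transforming.
tangent = [1, 1, 0]
normal = [1, -1, 0]
assert dot(tangent, normal) == 0

t2 = mat_vec3(M, tangent)       # tangents transform like positions
wrong = mat_vec3(M, normal)     # naive: same matrix as positions
right = mat_vec3(M_it, normal)  # correct: inverse transpose

print(dot(t2, wrong))  # non-zero: the naive normal is no longer perpendicular
print(dot(t2, right))  # zero: the inverse-transpose normal still is
```

Under a uniform scale or pure rotation the two approaches agree, which is why the shortcut of reusing the ModelView matrix sometimes appears to work – until the model is scaled unevenly.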

So:

- Convert the vertex to view space
- Convert the normal to view space
- Fix the z element of the normal to some minimal value
- Re-normalise the normal (we broke it in the previous step)
- Scale the normal and add it on to the vertex position
- Convert the vertex position into *projection* space

```c
v2f vert (a2v v) {
    v2f o;
    float4 pos = mul( UNITY_MATRIX_MV, v.vertex);
    float3 normal = mul( (float3x3)UNITY_MATRIX_IT_MV, v.normal);
    normal.z = -0.4;
    pos = pos + float4(normalize(normal),0) * _Outline;
    o.pos = mul(UNITY_MATRIX_P, pos);
    return o;
}
```

Note that the matrices in Unity are 4×4, but our v.normal is just a float3 – we therefore have to cast the matrix to a 3×3, or there will be a lot of errors in the console about ambiguous function calls.

If we use this shader with ZWrite on – it looks like this:

Which is good enough for me.

## Make it toony!

Ok so all that remains is to convert our surface shader’s toony features to the vertex and fragment program.

First we add the ramp texture from part #4 as a *_Ramp* property and a *_Ramp* sampler2D.

This will make our toon-lit areas – then we add a *_ColorMerge* property and floating point variable to our shader so we can reduce the variety of colours.

Now the only thing that changes is the fragment program from our shader in #5 – it now looks like this:

```c
float4 frag(v2f i) : COLOR {
    //Get the color of the pixel from the texture
    float4 c = tex2D (_MainTex, i.uv);
    //Merge the colours
    c.rgb = (floor(c.rgb*_ColorMerge)/_ColorMerge);
    //Get the normal from the bump map
    float3 n = UnpackNormal(tex2D (_Bump, i.uv2));
    //Start with the ambient light
    float3 lightColor = UNITY_LIGHTMODEL_AMBIENT.xyz;
    //Work out the square of the distance to the light
    float lengthSq = dot(i.lightDirection, i.lightDirection);
    //Work out the attenuation based on the distance
    float atten = 1.0 / (1.0 + lengthSq);
    //Angle to the light
    float diff = saturate (dot (n, normalize(i.lightDirection)));
    //Perform our toon light mapping
    diff = tex2D(_Ramp, float2(diff, 0.5)).r;
    //Update the colour
    lightColor += _LightColor0.rgb * (diff * atten);
    //Produce the final color
    c.rgb = lightColor * c.rgb * 2;
    return c;
}
```

So all it’s doing is merging the colours just after sampling them and then applying our toon ramp texture as a lookup for the strength of lighting, just before we apply it.
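The colour merge step is simple quantisation – a quick Python sketch of the same floor(c * merge) / merge formula, with illustrative values:

```python
import math

def merge_channel(c, merge):
    # floor(c * merge) / merge snaps a 0..1 channel value onto 'merge'
    # discrete steps - the same formula the fragment program applies
    # to each of r, g and b with _ColorMerge.
    return math.floor(c * merge) / merge

# With a merge value of 8 every channel collapses onto eighths,
# so nearby shades become one flat cartoon colour.
print(merge_channel(0.30, 8))  # -> 0.25
print(merge_channel(0.33, 8))  # -> 0.25
print(merge_channel(0.40, 8))  # -> 0.375
```

A higher merge value keeps more distinct colours; a lower one flattens the texture further towards a hand-painted look.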

This now gives us our final result:

The complete shader (one light) looks like this:

```c
Shader "Custom/OutlineToonShader" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _Bump ("Bump", 2D) = "bump" {}
        _ColorMerge ("Color Merge", Range(0.1,20000)) = 8
        _Ramp ("Ramp Texture", 2D) = "white" {}
        _Outline ("Outline", Range(0, 0.15)) = 0.08
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200

        Pass {
            Cull Front
            Lighting Off
            ZWrite On
            Tags { "LightMode"="ForwardBase" }

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct a2v {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
                float3 tangent : TANGENT;
            };

            struct v2f {
                float4 pos : POSITION;
            };

            float _Outline;

            v2f vert (a2v v) {
                v2f o;
                float4 pos = mul( UNITY_MATRIX_MV, v.vertex);
                float3 normal = mul( (float3x3)UNITY_MATRIX_IT_MV, v.normal);
                normal.z = -0.4;
                pos = pos + float4(normalize(normal),0) * _Outline;
                o.pos = mul(UNITY_MATRIX_P, pos);
                return o;
            }

            float4 frag (v2f IN) : COLOR {
                return float4(0,0,0,1);
            }
            ENDCG
        }

        Pass {
            Cull Back
            Lighting On
            Tags { "LightMode"="ForwardBase" }

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            uniform float4 _LightColor0;
            sampler2D _MainTex;
            sampler2D _Bump;
            sampler2D _Ramp;
            float4 _MainTex_ST;
            float4 _Bump_ST;
            float _Tooniness;
            float _ColorMerge;

            struct a2v {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
                float4 texcoord : TEXCOORD0;
                float4 tangent : TANGENT;
            };

            struct v2f {
                float4 pos : POSITION;
                float2 uv : TEXCOORD0;
                float2 uv2 : TEXCOORD1;
                float3 lightDirection : TEXCOORD2;
            };

            v2f vert (a2v v) {
                v2f o;
                //Create a rotation matrix for tangent space
                TANGENT_SPACE_ROTATION;
                //Store the light's direction in tangent space
                o.lightDirection = mul(rotation, ObjSpaceLightDir(v.vertex));
                //Transform the vertex to projection space
                o.pos = mul( UNITY_MATRIX_MVP, v.vertex);
                //Get the UV coordinates
                o.uv = TRANSFORM_TEX (v.texcoord, _MainTex);
                o.uv2 = TRANSFORM_TEX (v.texcoord, _Bump);
                return o;
            }

            float4 frag(v2f i) : COLOR {
                //Get the color of the pixel from the texture
                float4 c = tex2D (_MainTex, i.uv);
                //Merge the colours
                c.rgb = (floor(c.rgb*_ColorMerge)/_ColorMerge);
                //Get the normal from the bump map
                float3 n = UnpackNormal(tex2D (_Bump, i.uv2));
                //Start with the ambient light
                float3 lightColor = UNITY_LIGHTMODEL_AMBIENT.xyz;
                //Work out the square of the distance to the light
                float lengthSq = dot(i.lightDirection, i.lightDirection);
                //Work out the attenuation based on the distance
                float atten = 1.0 / (1.0 + lengthSq);
                //Angle to the light
                float diff = saturate (dot (n, normalize(i.lightDirection)));
                //Perform our toon light mapping
                diff = tex2D(_Ramp, float2(diff, 0.5)).r;
                //Update the colour
                lightColor += _LightColor0.rgb * (diff * atten);
                //Produce the final color
                c.rgb = lightColor * c.rgb * 2;
                return c;
            }
            ENDCG
        }
    }
    FallBack "Diffuse"
}
```

Note the `: TEXCOORD2` semantic on lightDirection – interpolated v2f fields need a semantic or some platforms (notably d3d11) will refuse to compile the shader.

I'll leave it to you to write a ForwardAdd pass for extra lights.
