Shader Showcase 2 ~ WGD Lightning Talk 2

Warwick Game Design seems to be making it a tradition to hold Lightning Talks not just once a year, but once a term. This time round, we had great talks from one speaker about the mistakes she made during the development of her game, and another from the rarest of beasts in the society – a musician. Well, he wears many hats and one of them is music, anyway. Both were very informative, varied and interesting. Then there was me, caterwauling about shaders once more.

The Palette Swap shader

I shamelessly took inspiration from a talk at the previous Lightning Talks session, and my first shader was my bespoke implementation of a palette swap. The general technique maps one domain of colour values onto another.

[Image: PaletteMap]

Treat this map in the programming sense: a set of keys and the values each key maps to – they're called dictionaries or associative arrays in some languages. Alright, this is all a bit abstract so far. My talk was about a specific problem I was having: I want a fully customisable colour scheme for my player sprite, which features eight different shades of grey.

[Image: PlayerSprites]
He’s going grey after doing this job so long.

There’s at least a couple of ways of providing this functionality. One: create a different set of sprites for each colour combination. Now, I plan on having presets for skin colour, hair colour, shirt colour and trouser colour, each with ten different options. That’s 10 × 10 × 10 × 10 = 10,000 combinations. Dunno about you, but I think I’d get repetitive strain injury from drawing that spritesheet 10,000 times. Not to mention the havoc it would wreak on the build size of the game and the memory needed to hold them all!

The second way (pause this article now to have a guess. Got it? Good) is to implement the colour changes with a palette swap shader. I know, it was so hard to guess, I’m unpredictable like that. To understand what we’ll be doing, I shall introduce a few of the concepts of shading languages.

First off, I’m using a shading language called Nvidia Cg. Unity is a bit unusual in that it uses a wrapper language, ShaderLab, to provide Unity-friendly semantics around the Cg programs embedded within it. It’s pretty flexible and simplifies the whole pipeline, minimising boilerplate code. The Cg is then cross-compiled to either GLSL (OpenGL Shading Language) or HLSL (High-Level Shading Language) for OpenGL and DirectX platforms respectively. Cg is useful, in that it only supports features found in both those languages, but it’s limited because… it only supports features common to both those languages. Not to mention, the entire language has been deprecated since 2012. Apparently, it’s still helpful, though.
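
To make that concrete, here's a rough skeleton of what one of these files looks like. This is only a sketch of the ShaderLab-wrapping-Cg structure, not the palette swap shader itself, and the shader name is a placeholder:

// A minimal sketch: ShaderLab on the outside, Cg between CGPROGRAM and ENDCG.
Shader "Custom/MinimalSpriteSketch"
{
    Properties
    {
        _MainTex ("Sprite Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            v2f vert(appdata_img v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.texcoord;
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                // Just pass the texture's colour straight through.
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}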

In Cg, and also in GLSL, a colour is a 4-element vector of floating-point (decimal) numbers between 0 and 1. Those represent the red, green and blue colour channels, plus the alpha channel – transparency. Colours are like a successful relationship: you have to have good transparency.
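
For instance (just an illustration; the variable names are mine):

fixed4 opaqueRed  = fixed4(1.0, 0.0, 0.0, 1.0); // full red, fully opaque
fixed4 mistyWhite = fixed4(1.0, 1.0, 1.0, 0.5); // white at half transparency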

[Image: GreyscaleRange2]
Greyscale range. Not to be confused with scaling a range; that’s mountain-climbing.
[Image: FloatingPoint]
Floating-point numbers keep your shaders ship-shape.

In my implementation of palette swap, I don’t need much information for the keys of my map. In fact, I’m going to do something a bit hacky and use my keys as a sort-of array index (you’ll see shortly what I mean), so I only need to use one of the colour channels of the input pixels; since the input is greyscale, I arbitrarily pick the red channel.

It’s easier to appreciate this visually. Imagine the range of greyscale colours split into eight equally-sized sections. Oh, look, there’s an image above for that! Now, if we were computing things on the CPU side, we’d just go “ahh yeah, stick a bunch of if-statements in there and just set the new colour to, I dunno, pink if it’s between 0 and 32, blue between 32 and 64 and so on”. But STOP! We’re writing a shader! Shaders don’t like if-statements. I asked why, and they said if-statements once called shaders smelly.

Shaders work best if we can re-write our problem as a vector or matrix operation, since GPUs are vector processors – this is their bread and butter. Control flow in a shader is slower than a snail riding a tortoise because of the way GPUs implement branching. Luckily, we can send our choice of 8 target colours over to our shader as an 8×4 matrix. Except, no. Cg only supports matrices up to size 4×4, so we need to do something a bit different and send over a two-element array of 4×4 matrices.
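
On the Cg side, that ends up as a uniform declaration along these lines (a sketch; _ColorMatrix is the name the snippet below indexes into):

// Two 4x4 matrices, giving eight rows in total: one target RGBA colour per row.
float4x4 _ColorMatrix[2];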

// Sample the greyscale sprite at this pixel's UV coordinate.
fixed4 x = tex2D(_MainTex, i.uv);
// Scale the red channel from [0, 1] up to [0, 2]: the whole-number part picks
// one of the two matrices, the fractional part picks a row inside it.
float y = x.r * 2;
fixed4 result = fixed4(_ColorMatrix[(int)y][(int)((y % 1) * 4)].rgb, x.a) * i.color;
[Image: GreyscaleMap]
Shaders: bringing excitement to your life, one pixel at a time.

We can then, as mentioned, use the red channel’s value as an array index. We multiply it by two so that it covers the range [0, 2]; the whole-number part picks the correct matrix (_ColorMatrix[y]). Inside the matrix we want, we multiply the post-decimal-point part by four to act as a second index; this gets the row of the matrix we’re looking for. All you mathematicians out there are probably screaming at (y modulo 1), but in Cg this works on floating-point numbers, preserving only the part of the number after the decimal point. It’s a glorious exploit.
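
As a worked example: a grey of 0.7 falls in the sixth of the eight shades. Doubling it gives y = 1.4, so the whole-number part, 1, picks the second matrix, and the fractional part times four is 0.4 × 4 = 1.6, which truncates to row 1 – the sixth target colour overall, exactly the one we want.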

Now, we’ve completed the effect. When this is run in-game, you can switch out the greyscale sections of the sprite with some fancy new colours on the fly. I used this for a neat character customisation screen!


The Mesh Explosion shader

So far, all of the shaders I’ve shown off at the society have operated in the fragment shader. That is, they’ve been messing with the colour values of pixels. But this isn’t the only fancy thing one can achieve with shaders, oh no no no. You can also mess with the actual geometry of a mesh beforehand, in other shaders such as the vertex shader. I’m aiming for an effect that takes a mesh and pushes its triangles outwards along their normals – now, you actually can’t use a vertex shader for this; to calculate a triangle’s normal vector you need access to all three of its vertices, and a vertex shader only ever sees one vertex at a time. We need a geometry shader.

[Image: MeshExplosion]
Directed by Michael Bay.

A geometry shader can operate on more than one vertex during each iteration; we’ll consume three at once to blast through triangles one at a time. For each triangle, we calculate the normal vector – this is a vector perpendicular to the face of the triangle, and it faces away from the front face of the triangle. The convention in computer graphics is that if, from your viewpoint, you can label the vertices of a face anticlockwise, then you are looking at the front face.
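
In Cg/HLSL terms, the shape of such a geometry shader is roughly this (just a sketch; v2g and g2f are placeholder struct names):

// One whole triangle comes in; at most three vertices go back out.
[maxvertexcount(3)]
void geom(triangle v2g input[3], inout TriangleStream<g2f> triStream)
{
    // input[0], input[1] and input[2] are the triangle's three vertices.
}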


Finding the normal vector is a vector cross product away. Once we have that, we can add a multiple of that vector to each vertex of the triangle, so that the triangle packs its bags and departs the rest of the mesh. Furthermore, we can pass in a shader uniform variable to control how far the triangles move – I bound it to the mouse wheel on the CPU side. Here’s the pseudocode for calculating the normal vector:

// Calculate the normal vector in the normal way: cross the two edge vectors.
float3 e0 = v0 - v1;
float3 e1 = v2 - v1;
float3 n = normalize(cross(e1, e0));
[Image: MeshNormal]
Just a normal day for a triangle.

The movement code is exactly as you’d imagine – add a fraction of n to each vertex. The rest is some icky convoluted stuff to do with how geometry shaders need to output vertices (after all, a geometry shader can also be used to add geometry at runtime), and I don’t want to blow your minds any further or your screens will get covered in grey matter, so let’s not go into too much detail about that.
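
Still, for the curious, here is a rough sketch of how the whole geometry shader can hang together. Only _ExplosionAmount, the uniform mentioned above, is meaningful; the struct layout and the assumption that positions arrive in object space are mine, and it needs "#pragma geometry geom" and "#pragma target 4.0" in the CGPROGRAM block:

// Placeholder structs: the vertex shader passes object-space positions straight through.
struct v2g { float4 vertex : POSITION;    float2 uv : TEXCOORD0; };
struct g2f { float4 vertex : SV_POSITION; float2 uv : TEXCOORD0; };

float _ExplosionAmount; // how far each triangle travels; bound to the mouse wheel on the CPU side

[maxvertexcount(3)]
void geom(triangle v2g input[3], inout TriangleStream<g2f> triStream)
{
    // Face normal from the two edge vectors, exactly as in the snippet above.
    float3 e0 = input[0].vertex.xyz - input[1].vertex.xyz;
    float3 e1 = input[2].vertex.xyz - input[1].vertex.xyz;
    float3 n = normalize(cross(e1, e0));

    // Slide each vertex outwards along the normal, then emit the displaced triangle.
    for (int j = 0; j < 3; j++)
    {
        g2f o;
        o.vertex = UnityObjectToClipPos(input[j].vertex + float4(n * _ExplosionAmount, 0));
        o.uv = input[j].uv;
        triStream.Append(o);
    }
    triStream.RestartStrip();
}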

I don’t have the source code to hand for this project yet; I thought it would be more productive to get this post written up and release the source code later, when it’s been cleaned up. I’ll update this post when the time comes, but in the meantime I hope you gleaned some succulent shader suggestions. Happy shading everyone!
