I’m President-Elect of Warwick Game Design Society!

I’ve been a member of the society since the first week of university. I remember flicking through all the welcome leaflets sent by the university in the weeks and months before moving here and being elated and excited when I noticed they had a society dedicated to making games. It’s been my passion since this blog started up!

So I’m humbled and honoured that I’ve been elected to lead the society next year as President. I hope to carry on the fantastic work the previous president, Alex Dixon, has done and I have full confidence the new exec can bring out the best in the society and its members.

Here’s to a great year for the society and a great year for making games!


Shader Showcase 2 ~ WGD Lightning Talk 2

Warwick Game Design seems to be making it a tradition to hold Lightning Talks not just once a year, but once a term. This time round, we had great talks from one speaker about the mistakes she made during the development of her game, and one from the rarest of beasts in the society – a musician. Well, he wears many hats and one is music, anyway. Both were very informative, varied and interesting. Then there was me, caterwauling about shaders once more.

The Palette Swap shader

I shamelessly took inspiration from a talk at the previous Lightning Talks session, and my first shader was my bespoke implementation of a palette swap. The general technique maps one domain of colour values onto another.

PaletteMap

Treat this map in the programming sense: a set of keys and the values each maps to – these structures are called dictionaries or associative arrays in some languages. Alright, this is too abstract so far. My talk was about a specific problem I was having: I want a fully customisable colour scheme for my player sprite, which features eight different shades of grey.

PlayerSprites
He’s going grey after doing this job so long.

There’s at least a couple of ways of providing this functionality. One: create a different set of sprites for each colour combination. Now, I plan on having presets for skin colour, hair colour, shirt colour and trouser colour, each with ten different combinations. Dunno about you, but I think I’d get repetitive strain injury from drawing this spritesheet 10,000 times. Not to mention the havoc it would wreak on the build size of the game and the memory needed to hold them all!

The second way (pause this article now to have a guess. Got it? Good) is to implement the colour changes with a palette swap shader. I know, it was so hard to guess, I’m unpredictable like that. To understand what we’ll be doing, I shall introduce a few of the concepts of shading languages.

First off, I’m using a shading language called Nvidia Cg. Unity is really weird in that it wraps Cg programs in its own language, ShaderLab, which provides Unity-friendly semantics around the embedded Cg code. It’s pretty flexible and simplifies the whole pipeline to minimise boilerplate code. The Cg is then cross-compiled to either GLSL (OpenGL Shading Language) or HLSL (High-Level Shading Language) for OpenGL and DirectX platforms respectively. Cg is useful, in that it only supports features found in both those languages, but it’s limited because… it only supports features common to both those languages. Not to mention, the entire language has been deprecated since 2012. Apparently, it’s still helpful, though.

In Cg, and also in GLSL, a colour is a 4-element vector of floating-point (decimal) numbers between 0 and 1. Those represent the red, green and blue colour channels, plus the alpha channel – transparency. Colours are like a successful relationship: you have to have good transparency.

GreyscaleRange2
Greyscale range. Not to be confused with scaling a range; that’s mountain-climbing.
FloatingPoint
Floating-point numbers keep your shaders ship-shape.

In my implementation of palette swap, I don’t need much information for the keys of my map. In fact, I’m going to do something a bit hacky and use my keys as a sort-of array index (you’ll see shortly what I mean), so I only need to use one of the colour channels of the input pixels; since the input is greyscale, I arbitrarily pick the red channel.

It’s easier to appreciate this visually. Imagine the range of greyscale colours split into eight equally-sized sections. Oh, look, there’s an image above for that! Now, if we were computing things on the CPU side, we’d just go “ahh yeah, stick a bunch of if-statements in there and just set the new colour to, I dunno, pink if it’s between 0 and 32, blue between 32 and 64 and so on”. But STOP! We’re writing a shader! Shaders don’t like if-statements. I asked why, and they said if-statements once called shaders smelly.
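
For the sake of illustration, here’s roughly what that CPU-side if-chain would look like, sketched in Python. The function name and the palette beyond the first two colours are made up for this example; pink and blue match the joke bands above.

```python
# Naive CPU-side palette swap: bucket an 8-bit grey value (0-255) into one
# of eight equal bands using an if-chain. Colour names are placeholders.
def naive_palette_swap(grey):
    """Map a grey value to a replacement colour, one band at a time."""
    palette = ["pink", "blue", "green", "yellow",
               "orange", "purple", "red", "white"]
    for band, colour in enumerate(palette):
        # Band 0 covers 0-31, band 1 covers 32-63, and so on up to 224-255.
        if band * 32 <= grey < (band + 1) * 32:
            return colour
```

It works, but every pixel walks an if-chain – exactly what we want to avoid on the GPU.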

Shaders work best if we can re-write our problem as a vector or matrix operation, since GPUs are vector processors – this is their bread and butter. Control flow in a shader is slower than a snail riding a tortoise because of the way GPUs implement branching. Luckily, we can send our choice of 8 target colours over to the shader as an 8×4 matrix. Except, no. Cg only supports matrices up to size 4×4, so we need to do something a bit different and send over a two-element array of 4×4 matrices.

// Sample the sprite texture at this fragment's UV coordinates.
fixed4 x = tex2D(_MainTex, i.uv);
// Scale the red channel from [0, 1] to [0, 2]; the whole part selects the matrix.
float y = x.r * 2;
// The fractional part, scaled to [0, 4), selects the row within that matrix.
fixed4 result = fixed4((_ColorMatrix[y])[(y % 1) * 4].rgb, x.a) * i.color;
GreyscaleMap
Shaders: bringing excitement to your life, one pixel at a time.

We can then, as mentioned, use the red channel’s value as an array index. We multiply it by two to stretch it over the range [0, 2], so the whole-number part accesses the correct matrix (_ColorMatrix[y]); inside the matrix we want, we multiply the post-decimal-point part by four to act as a second index, which fetches the row of the matrix we’re looking for. All you mathematicians out there are probably screaming at (y modulo 1), but in Cg this works on a floating-point number by preserving only the part after the decimal point. It’s a glorious exploit.
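
As a sanity check, here’s the same indexing arithmetic reproduced on the CPU in Python, assuming the red value sits anywhere inside the i-th of the eight greyscale sections (e.g. at i/8):

```python
# CPU reproduction of the shader's indexing trick: split a normalised red
# value into a matrix index (0 or 1) and a row index (0 to 3).
def palette_indices(r):
    """Return (matrix, row) for a red value r in [0, 1)."""
    y = r * 2                # [0, 1) -> [0, 2)
    matrix = int(y)          # whole part: which 4x4 matrix
    row = int((y % 1) * 4)   # fractional part * 4: which row
    return matrix, row
```

Eight sections, two matrices of four rows each – every shade lands on exactly one row.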

Now, we’ve completed the effect. When this is run in-game, you can switch out the greyscale sections of the sprite with some fancy new colours on the fly. I used this for a neat character customisation screen!


The Mesh Explosion shader

So far, all of the shaders I’ve shown off at the society have operated in the fragment shader. That is, they’ve been messing with the colour values of pixels. But this isn’t the only fancy thing one can achieve with shaders, oh no no no. You can mess with the actual geometry of a mesh beforehand in other shaders, such as the vertex shader. I’m aiming for an effect that takes a mesh and pushes its triangles outwards along their normals – and you actually can’t use a vertex shader for this; to calculate a triangle’s normal vector, you need access to all three of the vertices of the triangle at once. We need a geometry shader.

MeshExplosion
Directed by Michael Bay.

A geometry shader can operate on more than one vertex during each iteration; we’ll consume three at once to blast through triangles one at a time. For each triangle, we calculate the normal vector – this is a vector perpendicular to the face of the triangle, and it faces away from the front face of the triangle. The convention in computer graphics is that if, from your viewpoint, you can label the vertices of a face anticlockwise, then you are looking at the front face.


Finding the normal vector is a vector cross product away. Once we have that, we can add a multiple of that vector to each vertex of the triangle, so that the triangle packs its bags and departs the rest of the mesh. Furthermore, we can pass in a shader uniform variable to control how far the triangles move – I bound it to the mouse wheel on the CPU side. Here’s the pseudocode for calculating the normal vector:

// Calculate the normal vector in the normal way.
e0 = v0 - v1;
e1 = v2 - v1;
n = normalise(cross(e1, e0));
MeshNormal
Just a normal day for a triangle.

The movement code is exactly as you’d imagine – add a fraction of n to each vertex. The rest is some icky convoluted stuff to do with how geometry shaders need to output vertices (after all, a geometry shader can also be used to add geometry at runtime), and I don’t want to blow your minds any further or your screens will get covered in grey matter, so let’s not go into too much detail about that.
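
The geometry shader plumbing aside, the per-triangle maths is easy to reproduce on the CPU. Here’s a rough Python sketch of the same normal-and-push step – no engine assumed, vertices as plain tuples, function names my own:

```python
# Per-triangle 'explosion' step: compute the face normal from two edges,
# then push each vertex along that normal by some amount.
def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalise(v):
    """Scale a vector to unit length."""
    length = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
    return (v[0]/length, v[1]/length, v[2]/length)

def explode_triangle(v0, v1, v2, amount):
    """Move a triangle's vertices along its face normal; returns the
    moved vertices and the normal itself."""
    e0 = tuple(a - b for a, b in zip(v0, v1))  # edge v1 -> v0
    e1 = tuple(a - b for a, b in zip(v2, v1))  # edge v1 -> v2
    n = normalise(cross(e1, e0))
    move = lambda v: tuple(c + amount * nc for c, nc in zip(v, n))
    return move(v0), move(v1), move(v2), n
```

For an anticlockwise-wound triangle in the xy-plane, the normal comes out along +z, matching the front-face convention above.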

I don’t have the source code to hand for this project yet; I thought it would be more productive to get this post written up and release the source code later, when it’s been cleaned up. I’ll update this post when the time comes, but in the meantime I hope you gleaned some succulent shader suggestions. Happy shading everyone!

Bomb Blast ~ WGD ‘Spooky 2017’ Jam

For the first two game jams of the year, Warwick Game Design Society usually opts for simple themes to help ease newcomers into the society and get existing members warmed up. The past few years, those themes have been ‘Retro’ and ‘Spooky’, for Halloween, and this year was no different. However, this year not a single game shown off for ‘Spooky’ had a skeleton in sight! Maybe that’s the spookiest thing about it.

Again: ignore the fact I made this game in Term 1. Just ignore it. I post on this blog promptly and suggesting anything contradictory to that is slander.

I didn’t have a lot of time free to make something, but I gave it my best shot and built a competitive 4-player game in about 6 hours on barely any sleep. Ignoring the infinite-jump bug (the game does not check if you are grounded before letting you jump), it worked surprisingly well! To make it a bit more interesting, I hooked up the Joy-cons and Pro Controller from my Nintendo Switch to act as players 1-3, while player 4 is a keyboard/mouse user.

What I found most interesting is that each individual Joy-con is classed as an entirely separate controller, even when both ‘halves’ are connected at the same time. The axis and button numbers somewhat follow those of a conventional Xbox controller, but with the obvious additions of the SL and SR buttons. Unfortunately, Unity has no built-in way to retrieve gyroscope or NFC data and can’t activate HD rumble, and I had no time to perform any hackery to get all that working, else I would’ve at least added some cool rumble effects.

bomb_blast_02

You’re a kid now, you’re a squid n– oh, wait, wrong game…

The game itself is somewhat based on the Crash Bash minigame Space Bash, with four players on a destructible grid trying to blow each other up. However, I only had time to implement players dropping bombs onto their own position and pushing them at others. Bombs will destroy any floor around them and decimate any players in their blast radius. I also twisted things a little so the aim of the game is to paint as much of the floor in your own colour by walking over it, earning a couple of comparisons with Splatoon 2.

The game could do with a bit of a graphical overhaul, as I only had time for placeholder characters and the most basic 16×16 sprites imaginable. As it stands, the bright green character especially is hard on the eyes. The stage might also be more interesting if it were larger, with a dynamic camera that zooms in and out, and perhaps more verticality, but it also needs to be easier to judge depth when jumping to the higher level.

In all, I think the experiments with using the Joy-cons as controllers were successful, although the game clearly needs more work in the fun department. Some powerups, some sound and a better control scheme would all be welcomed. But most importantly, I did manage to sneak in my guilty pleasure: screenshake.

Verdict: Bombed.

You can find the source code for this game on my GitHub.

Shader Showcase ~ WGD Lightning Talk

Every now and then, and definitely not when we could find any other speakers to come in, WGD holds Lightning Talks. Any member is free to come along and opine for a few minutes about whatever game-related topic they please. In the past, we’ve had talks on using OpenGL in fancy ways, using fancy DAWs (Digital Audio Workstations) to create funky tunes and “games for people”, for those fancy-pants people who love games involving no equipment except the bodies of you and your friends.

This time round, there were three talks – one was on palette-swapping and different techniques used in the past, and another was about making your own art (it was sorta like the Shia LaBeouf Just Do It talk, but with less shouting and more creepy drawings). Finally (or rather, in the middle of those two), was a talk from someone not at all built for public speaking – me. I did an analysis of some of the cool shaders used in Super Mario Odyssey‘s Snapshot Mode, and I’m gonna do it again, right here. It was somewhat refreshing having a whole session of talks about arty subjects, even if they often swayed into technical artistry, given that the society is somewhat programmer-heavy. Never mind that the Lightning Talks took place last term – let’s pretend I’m prompt at blogging.

Anyway, it’s time to jump up in the air, jump up don’t be scared, jump up and make shaders every day!

00_demo_none.png

I’ve set up this demo scene with a few coloured cuboids, one directional light and a boring standard Unity skybox. It’ll do the job for demonstration purposes. In the executable version, you can freely move the camera around.

01_ex_greyscale

I took the most saturated environment I could find, just to desaturate it.

First up is a simple colour transformation. To convert an image to greyscale, all we do is take the luminance of each pixel in the scene and use that value for the red, green and blue values of that pixel’s colour.

float lum = tex.r * 0.3 + tex.g * 0.59 + tex.b * 0.11;

The luminance value takes into account the sensitivity of the human eye to each component of colour; your eyes are more sensitive to green, so it is heavily weighted in this calculation. That’s basically all there is to the greyscale shader.
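
For clarity, here’s the same calculation as plain Python, using the exact weights from the snippet above:

```python
# Greyscale conversion on the CPU, with the same luma weights as the shader.
def luminance(r, g, b):
    """Perceptual luminance of an RGB colour with channels in [0, 1];
    green is weighted most heavily, matching the eye's sensitivity."""
    return r * 0.3 + g * 0.59 + b * 0.11

def to_greyscale(r, g, b):
    """Use the luminance for all three output channels."""
    lum = luminance(r, g, b)
    return (lum, lum, lum)
```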

02_demo_greyscale.png

This is pretty much what you’d expect; the effect is very similar to Odyssey’s.

03_ex_sepia.png

The city environment felt the most appropriate for a sepia-tone shot.

The sepia-tone shader is very similar to the greyscale one, although the transformation is a little more involved. Before, we used the same value for each of the resulting red, green and blue channels to get greyscale values, but this time, each of the resulting red, green and blue channels is a separate function of the original RGB values.

half3x3 magicNumbers = half3x3
(
    0.393 * 1.25, 0.349 * 1.25, 0.272, // Red input's contributions.
    0.769 * 1.25, 0.686 * 1.25, 0.534, // Green input's contributions.
    0.189 * 1.25, 0.168 * 1.25, 0.131  // Blue input's contributions.
);

half3 sepia = mul(tex.rgb, magicNumbers);

We have to resort to some matrix multiplication to achieve this. The 3×3 matrix represents a bunch of numbers magically pulled out of Microsoft’s rear end – the mapping from the original colours to the new ones. Since mul treats tex.rgb as a row vector, each output channel is the dot product of the input colour with a column of the matrix as written, so the last line here essentially represents a system of three equations, where, for example:

sepia.r = tex.r * 0.393 * 1.25 + tex.g * 0.769 * 1.25 + tex.b * 0.189 * 1.25

If you’ve never done linear algebra or matrix multiplication before, this might look a bit exotic, but trust me – it works!
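
If you want to convince yourself, here’s a little Python sketch reproducing Cg’s mul(rowVector, matrix) behaviour, with the matrix exactly as laid out in the shader source:

```python
# Cg's mul(vec, M) treats vec as a row vector: result[j] = sum_i vec[i] * M[i][j],
# i.e. a dot product with each *column* of the matrix as written.
MAGIC = [
    [0.393 * 1.25, 0.349 * 1.25, 0.272],  # red input's contributions
    [0.769 * 1.25, 0.686 * 1.25, 0.534],  # green input's contributions
    [0.189 * 1.25, 0.168 * 1.25, 0.131],  # blue input's contributions
]

def sepia(rgb):
    """Row-vector times matrix, matching mul(tex.rgb, magicNumbers)."""
    return tuple(sum(rgb[i] * MAGIC[i][j] for i in range(3))
                 for j in range(3))
```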

04_demo_sepia.png

We end up with another unsurprising result. You might find this a bit too yellow, in which case just remove all the multiplications by 1.25 I added.

05_ex_depth.png

This one is where things get a bit more interesting. Upon seeing this effect, I immediately thought to myself, “Aha! They’re just using the depth buffer values! I’m a goddamn genius!” So, what is the depth buffer, I hear you ask?

In computer graphics, you need to render things in the correct order. If you have a crate fully obscured by another crate, you wouldn’t want to render the obscured crate over the other one – it’d look really strange! Whenever a pixel is drawn, its depth – the distance from the camera plane to the object being drawn into this pixel in the z-direction – is recorded in the depth buffer (z-buffer); this is just a big ol’ 2D array the same size as the program’s on-screen resolution. When you try to draw something, you first check the new object’s depth against whatever is already recorded at that position, and if it’s a higher value (the distance from the camera is further), that means it’s occluded and we discard that pixel.
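
A minimal Python sketch of that depth test, with made-up function names, might look like this:

```python
# A z-buffer the size of the screen, initialised to the far plane; a pixel
# is only drawn (and its depth recorded) if it is closer than what is there.
def make_z_buffer(width, height, far=1.0):
    """One depth value per on-screen pixel, starting at the far plane."""
    return [[far] * width for _ in range(height)]

def try_draw_pixel(z_buffer, x, y, depth):
    """Return True and record the depth if the pixel passes the test."""
    if depth < z_buffer[y][x]:
        z_buffer[y][x] = depth
        return True
    return False  # occluded by something closer; discard
```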

06_demo_depth

This is the same scene from above, but from a different angle. Each pixel is just showing the greyscale z-buffer value of the final rendered image, which means it’s representing how far away the closest thing at that pixel position is. But why does this not look as good as the Odyssey example? Well, first of all, I’m not Nintendo EPD and they have a few more resources than I do. Second, I’ve not coloured the image myself, although as we saw earlier, this would be a fairly trivial change. The main reason is actually because the scene isn’t very large and we need to understand how the z-buffer does things to understand why.

There needs to be a bound on how small or large the z-buffer values are, else the values would be pretty meaningless. A camera has a near clip distance and a far clip distance, which are the minimum and maximum distances from the camera plane an object can be to be rendered. If you’ve ever clipped the camera through part of a wall in a game, that’s because the near clip plane is further away than the wall is, so that section of wall is culled (not rendered). Values in the z-buffer are typically floating point values between 0 and 1, so in a small scene (i.e. one where the clip planes are fairly close together), two objects end up with z-buffer values that are further apart than the same two objects would have in a large scene.
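
To see the effect of the clip planes on value spread, here’s a deliberately simplified Python sketch using a linear depth mapping (real perspective z-buffers are non-linear, but the point about spread is the same):

```python
# Simplified *linear* depth: map a camera-space distance onto [0, 1]
# between the near and far clip planes.
def linear_depth(distance, near, far):
    """Normalised depth of an object between the clip planes."""
    return (distance - near) / (far - near)
```

With a far plane at 10, objects at distances 2 and 5 get noticeably different depths; push the far plane out to 1000 and both collapse towards 0 – the all-black scene described above.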

In the scene above, the near plane is very close to the camera and the far plane is just far enough to contain the whole scene. By default in Unity, the far clip plane is 1000 units away, which would result in the entire scene here being black – all z-buffer values would be close to 0. But having a small scene means you don’t really get much variation in colours, because things are really close and there’s nothing in the background – in the Odyssey screenshots above, there are things clearly in the foreground (Mario and the rocks he’s stood on) and the background (the section of island behind the waterfall). One detail that’s kind of hard to make out is that the furthest island actually shows up in very faint green in the snapshot, so it’s real geometry rather than part of a skybox. Also, the water particles don’t show up in this mode, so they’re likely reading from the depth buffer but not writing to it, and are rendered after all the opaque level geometry.

07_ex_blur

Seaside Kingdom Doggo raised this game’s review average by 19 points.

Next up is a blur effect. I won’t be talking about the radial blur starting from the edge that Super Mario Odyssey uses though; I’ll talk about a simpler blur that I had time to write – a Gaussian blur that operates on the whole image.

This type of blur works by throwing a ‘kernel’ over each pixel in the original image and looking at the neighbouring pixels. The larger the kernel, the more pixels are considered. Then, a weighted average of the pixel values, with the central pixel having the most weight, is calculated for the new image. It’s a pretty simple concept that probably works how you’d expect, but it is one step more complicated than a box blur, which doesn’t bother weighting the pixels and just takes an unweighted average. Commonly, you would do two or three passes with different-sized kernels, so the algorithm is run more than once and the resulting blur has more fidelity.
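
As a rough illustration, here’s a single horizontal Gaussian pass over one row of greyscale pixels in Python; the kernel size and the edge-clamping behaviour are my own choices for the sketch:

```python
import math

# Weights from the Gaussian function, normalised so they sum to 1.
def gaussian_kernel(radius, sigma):
    weights = [math.exp(-(i * i) / (2 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# One horizontal blur pass; a vertical pass over the columns would follow.
def blur_row(row, radius=1, sigma=1.0):
    kernel = gaussian_kernel(radius, sigma)
    out = []
    for x in range(len(row)):
        acc = 0.0
        for k, w in enumerate(kernel):
            xi = min(max(x + k - radius, 0), len(row) - 1)  # clamp at edges
            acc += row[xi] * w
        out.append(acc)
    return out
```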

08_demo_blur.png

This image is doing three blurring passes, each with a larger and larger kernel. Each blurring pass actually requires two shader passes, one in the x-direction and another in the y-direction. You could reduce the amount of blurring by making the kernels smaller or reducing the number of passes.

09_ex_edge

TFW you turn up to a formal occasion in casual clothing.

The final algorithm I implemented is an image-space Sobel filter to attempt to detect edges in the image. This also acts in separate x- and y-direction passes; in each direction, a gradient is calculated, where a higher gradient means a more ‘different’ colour. By definition, you can’t compute a gradient in both directions simultaneously, hence the need for two passes. This effect wouldn’t work well in a game where you wanted objects to stand out against objects of similar colour, but it’s pretty simple to understand conceptually.
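
Here’s a small Python sketch of the Sobel operator at a single pixel, using the standard 3×3 kernels – this is the textbook filter, not my shader code verbatim:

```python
import math

# The standard Sobel kernels for the x- and y-direction gradients.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, x, y):
    """Gradient magnitude at an interior pixel of a greyscale image
    (a list of rows); higher means a sharper edge."""
    gx = gy = 0.0
    for j in range(3):
        for i in range(3):
            p = img[y + j - 1][x + i - 1]
            gx += GX[j][i] * p
            gy += GY[j][i] * p
    return math.hypot(gx, gy)
```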

10_demo_edge.png

Running the algorithm on our sample scene results in pillars that are pretty cleanly defined, but this is mostly due to their block-colouring. You can also see a couple of spots where pillars seem to ‘blend’ into one another, and all shadows are also edge-detected because they are dark areas right next to light flooring.

For a bit of fun, I ‘converted’ this effect into a cel-shading effect. Again, it’s not exactly Borderlands quality, but it was a fun exercise for me. This effect is also not present in Odyssey, so there is no reference image comparison.

11_demo_cel.png

The Sobel edge detection filter from above is inverted and multiplied by the source image, such that ‘edge’ areas become black and all other areas retain their original colouring. The areas where colours are similar clearly lack cel-shading here, such as the red pillars right-of-centre. You can also see on the yellow pillars where the skybox turns lighter that the black lines ‘break’ a little. Of course, because it’s based on the Sobel edge detection filter, you also get cel shading on the shadows, but the effect is less pronounced here because the shadows are dark.
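
Per pixel, the combination step is tiny; here’s a hedged Python sketch of it:

```python
# Cel-shading combination: invert the edge value and multiply it into the
# source colour, so strong edges go black and flat areas keep their colour.
def cel_shade(colour, edge_strength):
    """edge_strength in [0, 1]: 1 = strong edge (black), 0 = keep colour."""
    keep = 1.0 - edge_strength
    return tuple(c * keep for c in colour)
```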

All of the source code and a built executable for Windows can be found over on my GitHub. It’s really fun to pick apart effects you’ve seen in video games and I wholeheartedly recommend you try the same! If I have time over the holidays, I may try to pick apart some of the other effects, such as the NES/SNES filters which are probably just pixelation filters with a palette swap applied on top.

Tappy Dev ~ WGD ‘Fuck This’ 48-Hour Jam

Well, this post is certainly overdue. This jam ran from the 26th to the 28th of February, so my apologies for taking quite so long writing this one up; the end of term was, as always, filled to the brim with horrible, terrible coursework.

The idea behind the ‘Fuck This’ jam is that participants make a game in a tool, language or engine that they’ve never used or hate using, or that they make a game in a genre they don’t like. That way, you can learn a new skill or add your own spin to a genre that’s never been tried before. Since learning a new tool would’ve taken too long, I opted for a bad genre. That genre was idle games. Since I’ve never really focused much on mobile development, I made Android my sole target platform. Sorry iPhone people and Windows Phone fans (both of you), but I didn’t have either of those at hand for testing.

My idea for a terrible idle game was one in which you tap the screen in order to make games. Every time you tap the screen, you get a point, and every 1000 points will ‘make a game’. Those games are all jokes about, or tropes of, the games industry, as seen in the example below.

Screenshot_20160324-191950

The counter gets incremented once every time you tap the screen, and multiple simultaneous taps all count, so the pro strat for getting as many points as possible is to just mash the screen with all your fingertips. Not sure if that counts as mashing, but I don’t care, just do it. Gotta get 1000 points to see my hilarious references, after all.

The art style was literally just me drawing badly on some paper using some coloured pencils. I made sure I coloured out of the lines, for good measure. I did this mostly because it was something different from pixel art, but it was also something I knew I could quickly and easily draw. Personally, I think it worked out remarkably well, considering the fact I didn’t spend all that long on it.

As you tap (or, well, ‘type’), the computer screen within the game will fill up with text. It’s nothing special, just a measure of progress. And also an insult!

Screenshot_20160325-134436

If you want to try it out, you’ll need to find some way of installing non-Play Store apps on your phone. I’ve also built the game for Android 5.0+, so it (probably) won’t work on versions lower than that. I hope it works on your phone! Just click the banner below to download the game.

tappy_dev_header.png

Radicool Trip ~ WGD ‘Retro’ Two-Week Competition

It’s been a couple of weeks since the start of term, and with that came the return of Warwick Game Design’s Two-Week Competitions. I managed to cram a game into the past two weeks, and the result is Radicool Trip, a game where you’re an edgy 90’s kid with a slight addiction to a popular branded cola, Popsee.

radicool_01
It’s exactly as dumb as it sounds.

It’s a short game, involving a trip from your bedroom to the shops. It was originally to be a sort-of puzzle platformer, but I spent so goddamn long on the text engine, it turned into a goofy text option-based… thing? I’m not even sure. It’s certainly not the most polished thing I’ve ever made, but it was enjoyable – the stupid 90’s references and jokes were fun to write. I like how the text engine worked out too; there’s room for improvement for sure, but I’ll most likely use it in future projects. Currently, the text system supports 6 rows of text with 14 mono-spaced characters each, but I’d like to add the ability to tweak the number of characters per row and switch from a purely mono-space font to variable-width fonts. I also hope to make the character controller feel better to control, as it’s currently a bit of a potato when it comes to jumping.

radicool_02
Welcome to WGD.

The text engine also supported player choices in the form of small replies. The biggest pitfall of this is that each text box only supported one line of text – 14 characters – so replies were short. Plus, they obscured the actual text behind them. However, the concept of having different dialogue options and witty replies is something I’d like to build on, perhaps as part of an RPG.

radicool_03

Everyone loves options. Well, everyone has the option of liking options or not.

It’s an extremely short game (well, it’s basically just a tech demo of a text box), but I hope to take ideas from this into the next project. Speaking of which, the next WGD two-week competition is ‘Broken’ (with a side theme ‘Spooky’, for all the Halloween spookiness). I have plenty of ideas for it – I’ll most likely be going with a puzzle game I’ve wanted to make for a while, in which you have a world that can only be viewed from one side at a time using an orthographic camera. You’ll be able to view the world from different sides and rotate the world, but gravity will always act downwards so you’ll have to make sure the player doesn’t fall out of the world.

If you want to play Radicool Trip, you can try it out by clicking the flashy-looking button below. It’s short and there’s not much to do in it, but hey, it’s free!

 download_banner