Shader Showcase 2 ~ WGD Lightning Talk 2

Warwick Game Design seems to be making it tradition to hold Lightning Talks not just once a year, but once a term. This time round, we had great talks from one speaker about the mistakes she made during the development of her game, and one from the rarest of beasts in the society – a musician. Well, he wears many hats and one is music, anyway. Both were very informative, varied and interesting. Then there was me, caterwauling about shaders once more.

The Palette Swap shader

I shamelessly took inspiration from a talk at the previous Lightning Talks session: my first shader was my bespoke implementation of a palette swap. The general technique maps one domain of colour values onto another.


Treat this map in a programming sense; a set of keys and the values each maps to – they’re called dictionaries or associative arrays in some languages. Alright, this is too abstract so far. My talk was about a specific problem I was having: I want a fully customisable colour scheme for my player sprite, which features eight different shades of grey.

He’s going grey after doing this job so long.

There are at least a couple of ways of providing this functionality. One: create a different set of sprites for each colour combination. Now, I plan on having presets for skin colour, hair colour, shirt colour and trouser colour, each with ten different options. Dunno about you, but I think I’d get repetitive strain injury from drawing this spritesheet 10,000 times. Not to mention the havoc it would wreak on the build size of the game and the memory needed to hold them all!

The second way (pause this article now to have a guess. Got it? Good) is to implement the colour changes with a palette swap shader. I know, it was so hard to guess, I’m unpredictable like that. To understand what we’ll be doing, I shall introduce a few of the concepts of shading languages.

First off, I’m using a shading language called Nvidia Cg. Unity is a little unusual in that it wraps the Cg programs in its own language, ShaderLab, which gives them Unity-friendly semantics. It’s pretty flexible and simplifies the whole pipeline to minimise boilerplate code. The Cg is then cross-compiled to either GLSL (OpenGL Shading Language) or HLSL (High-Level Shading Language) for OpenGL and DirectX platforms respectively. Cg is useful, in that it only supports features found in both those languages, but it’s limited because… it only supports features common to both those languages. Not to mention, the entire language has been deprecated since 2012. Apparently, it’s still helpful, though.

In Cg, and also in GLSL, a colour is a 4-element vector of floating-point (decimal) numbers between 0 and 1. Those represent the red, green and blue colour channels, plus the alpha channel – transparency. Colours are like a successful relationship: you have to have good transparency.

Greyscale range. Not to be confused with scaling a range; that’s mountain-climbing.
Floating-point numbers keep your shaders ship-shape.

In my implementation of palette swap, I don’t need much information for the keys of my map. In fact, I’m going to do something a bit hacky and use my keys as a sort-of array index (you’ll see shortly what I mean), so I only need to use one of the colour channels of the input pixels; since the input is greyscale, I arbitrarily pick the red channel.

It’s easier to appreciate this visually. Imagine the range of greyscale colours split into eight equally-sized sections. Oh, look, there’s an image above for that! Now, if we were computing things on the CPU side, we’d just go “ahh yeah, stick a bunch of if-statements in there and just set the new colour to, I dunno, pink if it’s between 0 and 32, blue between 32 and 64 and so on”. But STOP! We’re writing a shader! Shaders don’t like if-statements. I asked why, and they said if-statements once called shaders smelly.

Shaders work best if we can re-write our problem as a vector or matrix operation, since they’re vector processors – this is their bread and butter. Control flow in a shader is slower than a snail riding a tortoise because of the way GPUs implement branching. Luckily, we can send our choice of 8 target colours over to the shader as an 8×4 matrix. Except, no. Cg only supports matrices up to 4×4 in size, so we need to do something a bit different and send over a two-element array of 4×4 matrices.
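In Cg, that array can be declared directly; here’s a rough sketch (the name _ColorMatrix matches the snippet below, and on the C# side it would be filled in with something like Unity’s Material.SetMatrixArray):

// Eight target colours, packed as two 4x4 matrices with one colour per row.
float4x4 _ColorMatrix[2];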

// Sample the sprite and use its red channel as an index into _ColorMatrix.
fixed4 x = tex2D(_MainTex, i.uv);
float y = x.r * 2;
// The integer part of y picks the matrix; the fractional part (times 4) picks the row.
// The swapped colour keeps the sprite's original alpha and vertex tint.
fixed4 result = fixed4((_ColorMatrix[y])[(y % 1) * 4].rgb, x.a) * i.color;
Shaders: bringing excitement to your life, one pixel at a time.

We can then, as mentioned, use the red channel’s value as an array index. We multiply it by two to make sure we cover the range [0, 2] to access the correct matrix (_ColorMatrix[y]), and inside the matrix we want, we multiply the post-decimal-point part by four to act as a second index; this gets the row of the matrix we’re looking for. All you mathematicians out there are probably screaming at (y modulo 1), but in Cg, this’ll work on a floating-point by preserving only the part of the number after the decimal point. It’s a glorious exploit.

Now, we’ve completed the effect. When this is run in-game, you can switch out the greyscale sections of the sprite with some fancy new colours on the fly. I used this for a neat character customisation screen!




The Mesh Explosion shader

So far, all of the shaders I’ve shown off at the society have operated in the fragment shader. That is, they’ve been messing with the colour values of pixels. But this isn’t the only fancy thing one can achieve with shaders, oh no no no. You can mess with the actual geometry of a mesh in other shader stages, such as the vertex shader, beforehand. I’m aiming for an effect that takes a mesh and pushes its triangles outwards along their normals. Now, you actually can’t use a vertex shader for this: to calculate a triangle’s normal vector, you need access to all three of its vertices at once. We need a geometry shader.

Directed by Michael Bay.

A geometry shader can operate on more than one vertex during each iteration; we’ll consume three at once to blast through triangles one at a time. For each triangle, we calculate the normal vector – this is a vector perpendicular to the face of the triangle, and it faces away from the front face of the triangle. The convention in computer graphics is that if, from your viewpoint, you can label the vertices of a face anticlockwise, then you are looking at the front face.



Finding the normal vector is a vector cross product away. Once we have that, we can add a multiple of that vector to each vertex of the triangle, so that the triangle packs its bags and departs the rest of the mesh. Furthermore, we can pass in a shader uniform variable to control how far the triangles move – I bound it to the mouse wheel on the CPU side. Here’s the pseudocode for calculating the normal vector:

// Calculate the normal vector in the normal way.
float3 e0 = v0 - v1;
float3 e1 = v2 - v1;
float3 n = normalize(cross(e1, e0));
Just a normal day for a triangle.

The movement code is exactly as you’d imagine – add a fraction of n to each vertex. The rest is some icky convoluted stuff to do with how geometry shaders need to output vertices (after all, a geometry shader can also be used to add geometry at runtime), and I don’t want to blow your minds any further or your screens will get covered in grey matter, so I’ll stop at the rough sketch below.
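For the curious, here’s roughly how the whole thing might hang together in Unity’s Cg (the v2g/g2f structs and the _Explode uniform are placeholder names of my own, not the final source):

[maxvertexcount(3)]
void geom(triangle v2g input[3], inout TriangleStream<g2f> stream)
{
    // Edge vectors and face normal, exactly as in the pseudocode above.
    // input[j].vertex is assumed to be the object-space vertex position.
    float3 e0 = input[0].vertex.xyz - input[1].vertex.xyz;
    float3 e1 = input[2].vertex.xyz - input[1].vertex.xyz;
    float3 n = normalize(cross(e1, e0));

    // Push each vertex of the triangle outwards along the normal.
    // _Explode is a uniform set from the CPU side (bound to the mouse wheel).
    for (int j = 0; j < 3; j++)
    {
        g2f o;
        o.pos = UnityObjectToClipPos(input[j].vertex + float4(n, 0) * _Explode);
        o.uv = input[j].uv;
        stream.Append(o);
    }
    stream.RestartStrip();
}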

I don’t have the source code to hand for this project yet; I thought it would be more productive to get this post written up and release the source code later, when it’s been cleaned up. I’ll update this post when the time comes, but in the meantime I hope you gleaned some succulent shader suggestions. Happy shading everyone!


Patchworks ~ The Fourth-Year Group Project

For the fourth and final year of the undergraduate Computer Science Masters course at The University of Warwick, students team up in groups of around five and work on a thing for the year. That thing can be research-oriented, or it can be a software development project; the latter is the more popular option. It has to be of sufficient technical challenge for a team of that size for the whole year. Our team of five is developing a game about creating and sharing levels in a similar mould to Super Mario Maker, but with our own spin on the concept. Due to the heavy focus on collaboration, we’ve called it Patchworks.


Mobile-Desktop Integration

The first major feature we’re implementing is the ability for Android devices to join in while someone is playing the game. The core of the game runs on a PC, with one ‘desktop player’ controlling the most important aspects of the game, but 1-4 ‘mobile players’ may join in over LAN (Local Area Network). We wanted to capitalise on the appeal of couch co-op games, and having people use the mobile devices they already own in a novel way seemed like the obvious solution.

Early on in development, we thought about what we would want to display on the mobile device. Do we want it to mirror the visuals of the desktop game? This would cause problems in synchronising the state of the game between devices, as every game event would need communicating over the network, which could strain the LAN connection. Not only that, but the interface would be crowded on such a small screen if the game’s visuals also had to take up space. We decided that it would be best to use the mobile devices purely as controllers instead, and our attention turned to what type of controller would work best.


Mobile games often have a controller ‘overlay’, where virtual buttons are superimposed on the game. None of us is a fan of this style, as the buttons are non-tactile and do not feel good to use, especially compared to a conventional gamepad. It’s also unnecessary for this type of game: we don’t have any ‘action’ to display on the mobile screen, so the controller interface is free to take up the entire screen rather than being tucked into a corner. We thought about what kinds of control do work well on mobile, and came to the conclusion that a ‘scroll area’, similar to a mouse, would feel right at home on mobile. Housing this on one side of the screen leaves the other half free to display a range of buttons for different purposes; the buttons act just like any other button would on a mobile device, and while they still lack the satisfying tactile feel of a physical button press, this is the best we can achieve on mobile. Perhaps in the future we’ll add haptic feedback (vibration) on a button press.


Asymmetric Gameplay

The gameplay experience for desktop players is inherently different to that of mobile players due to the limitations and design choices detailed above. The mobile players won’t be controlling a conventional character like the desktop player will; they’ll be moving around a ‘cursor’, similar to how the character Murphy works in Rayman Legends on Wii U. Elements in each level will be interactive for mobile players only – enemies and obstacles can be manipulated to help or hinder the desktop player. Our initial prototype had a UFO being flown round the stage, shooting bullets downwards. We plan to add a range of enemies, some with unique controls, for the mobile players to control, plus we want to use the accelerometer and/or gyroscope for some gameplay elements, also similar to Rayman Legends. Ropes, for example, can be cut by the mobile players using some kind of ‘scissor’ tool and swiping through them.

Collaborative Creation

The focus of the project is on collaboration, and it is the Level Editor where this shines. A grid-based editor, similar to that of Mario Maker, allows players to place tiles wherever they want in a level. At this early stage in the project, we have implemented simple ‘painting’ of tiles, as well as undo, redo, clear and erase tools; in the future you will be able to grab/select objects you’ve already placed and customise their behaviour. For example, you’ll be able to set the endpoints of a moving platform, or set the length of a piece of rope (solving once and for all how long a piece of string actually is).


Mobile players can join in with editing levels, with most of the abilities of the desktop player, although only the latter can set the level name, save the level, go into Play Mode or move the camera. We turned to games like Ultimate Chicken Horse for inspiration – in that game, players take it in turns adding elements to the level with the goal of completing the level themselves but preventing their friends from doing so. Our goal is collaboration rather than competition, so we opted for complete realtime parallel creation, where anyone can add or remove elements at any point.

Online Interaction

Super Mario Maker worked so well because it encouraged a community of players to upload and share levels with each other. There were YouTube Let’s Plays of levels, there were notoriously difficult levels like the Panga levels, and there were inventive music-oriented levels in which players input nothing and listened to a tune crafted by nothing but enemies falling onto music blocks. All of those levels were only possible to create because of Mario Maker’s tight mechanics and emergent interactions between different level elements. We want to aim for the same sort of thing, with players able to upload their favourite creations once they’ve beaten them.


Using the mobile app, people can browse levels on the go and bookmark levels for later. This is somewhat similar to a system implemented in Mario Maker after it launched that allowed you to bookmark levels from any internet browser. Of course, this is all going to require some kind of unified account system – possibilities include using people’s Google Play accounts, although some players may not have one. Uploaded levels and bookmarks will be tied to a user account, and you’ll likely use the same account on desktop as you will on mobile. We also thought about a notifications system that lets players know when someone liked their level, or when they’ve reached a milestone of plays or likes. This could be baked into the mobile app and use Android notifications, and it can also feature on the desktop game homepage (a prototype for this functionality was already created).


This Start Menu is extremely rough.

So far, the project is on track to complete its objectives by the due date, which is around the end of Term 2 for feature-completeness, and the start of Term 3 for the final presentation. It’ll be interesting to see which features we get time to implement, as there are a number of ‘stretch’ features we might want to include.

Next time I make a post, it’ll be at or after the end of term 2 most likely, so I’ll have some amazing screenshots to show off. Watch this space!

Bomb Blast ~ WGD ‘Spooky 2017’ Jam

For the first two game jams of the year, Warwick Game Design Society usually opts for simple themes to help ease newcomers into the society and get existing members warmed up. The past few years, those themes have been ‘Retro’ and ‘Spooky’, for Halloween, and this year was no different. However, this year not a single game shown off for ‘Spooky’ had a skeleton in sight! Maybe that’s the spookiest thing about it.

Again: ignore the fact I made this game in Term 1. Just ignore it. I post on this blog promptly and suggesting anything contradictory to that is slander.

I didn’t have a lot of time free to make something, but I gave it my best shot and built a competitive 4-player game in about 6 hours on barely any sleep. Ignoring the infinite-jump bug (the game does not check if you are grounded before letting you jump), it worked surprisingly well! To make it a bit more interesting, I hooked up the Joy-cons and Pro Controller from my Nintendo Switch to act as players 1-3, while player 4 is a keyboard/mouse user.

What I found most interesting is that each individual Joy-con is classed as an entirely separate controller, even when both ‘halves’ are connected at the same time. The axis and button numbers somewhat follow those of a conventional Xbox controller, but with the obvious additions of the SL and SR buttons. Unfortunately, Unity has no built-in way to retrieve gyroscope or NFC data and can’t activate HD rumble, and I had no time to perform any hackery to get all that working, else I would’ve at least added some cool rumble effects.


You’re a kid now, you’re a squid n– oh, wait, wrong game…

The game itself is somewhat based on the Crash Bash minigame Space Bash, with four players on a destructible grid trying to blow each other up. However, I only had time to implement players dropping bombs onto their own position and pushing them at others. Bombs will destroy any floor around them and decimate any players in their blast radius. I also twisted things a little so the aim of the game is to paint as much of the floor as possible in your own colour by walking over it, earning a couple of comparisons with Splatoon 2.

The game could do with a bit of a graphical overhaul, as I only had time for placeholder characters and the most basic 16×16 sprites imaginable. As it stands, the bright green character especially is hard on the eyes. The stage might also be more interesting if it were larger, with a dynamic camera that zooms in and out and perhaps more verticality, but it also needs to be easier to judge depth when jumping to the higher level.

In all, I think the experiments with using the Joy-cons as controllers were successful, although the game clearly needs more work in the fun department. Some powerups, some sound and a better control scheme would all be welcomed. But most importantly, I did manage to sneak in my guilty pleasure: screenshake.

Verdict: Bombed.

You can find the source code for this game on my Github.

Shader Showcase ~ WGD Lightning Talk

Every now and then, and definitely not when we could find any other speakers to come in, WGD holds Lightning Talks. Any member is free to come along and opine for a few minutes about whatever game-related topic they please. In the past, we’ve had talks on using OpenGL in fancy ways, using fancy DAWs (Digital Audio Workstations) to create funky tunes and “games for people”, for those fancy-pants people who love games involving no equipment except the bodies of you and your friends.

This time round, there were three talks – one was on palette-swapping and different techniques used in the past, and another was about making your own art (it was sorta like the Shia LaBeouf Just Do It talk, but with less shouting and more creepy drawings). Finally (or rather, in the middle of those two), there was a talk from someone not at all built for public speaking – me. I did an analysis of some of the cool shaders used in Super Mario Odyssey’s Snapshot Mode, and I’m gonna do it again, right here. It was somewhat refreshing having a whole session of talks about arty subjects, even if they often strayed into technical artistry, given that the society is somewhat programmer-heavy. Never mind that the Lightning Talks took place last term – let’s pretend I’m prompt at blogging.

Anyway, it’s time to jump up in the air, jump up don’t be scared, jump up and make shaders every day!


I’ve set up this demo scene with a few coloured cuboids, one directional light and a boring standard Unity skybox. It’ll do the job for demonstration purposes. In the executable version, you can freely move the camera around.


I took the most saturated environment I could find, just to desaturate it.

First up is a simple colour transformation. To convert an image to greyscale, all we do is take the luminance of each pixel in the scene and use that value for the red, green and blue values of that pixel’s colour.

float lum = tex.r * 0.3 + tex.g * 0.59 + tex.b * 0.11;

The luminance value takes into account the sensitivity of the human eye to each component of colour; your eyes are more sensitive to green, so it is heavily weighted in this calculation. That’s basically all there is to the greyscale shader.
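The whole fragment shader is barely more than that single line; here’s a minimal sketch, assuming Unity’s usual image effect setup where _MainTex holds the rendered screen image:

fixed4 frag(v2f i) : SV_Target
{
    fixed4 tex = tex2D(_MainTex, i.uv);
    // Weighted sum of the colour channels, then reused for all three.
    float lum = tex.r * 0.3 + tex.g * 0.59 + tex.b * 0.11;
    return fixed4(lum, lum, lum, tex.a);
}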


This is pretty much what you’d expect; the effect is very similar to Odyssey’s.


The city environment felt the most appropriate for a sepia-tone shot.

The sepia-tone shader is very similar to the greyscale one, although the transformation is a little more involved. Before, we used the same value for each of the resulting red, green and blue channels to get greyscale values, but this time, each of the resulting red, green and blue channels is a separate function of the original RGB values.

half3x3 magicNumbers = half3x3(
    0.393 * 1.25, 0.349 * 1.25, 0.272, // Weights for the red input channel.
    0.769 * 1.25, 0.686 * 1.25, 0.534, // Weights for the green input channel.
    0.189 * 1.25, 0.168 * 1.25, 0.131  // Weights for the blue input channel.
);

half3 sepia = mul(tex.rgb, magicNumbers);

We have to resort to some matrix multiplication to achieve this. The 3×3 matrix represents a bunch of numbers magically pulled out of Microsoft’s rear end – the mapping from the original colours to the new ones. The last line here essentially represents a system of three equations, where, for example:

sepia.r = tex.r * 0.393 * 1.25 + tex.g * 0.769 * 1.25 + tex.b * 0.189 * 1.25

If you’ve never done linear algebra or matrix multiplication before, this might look a bit exotic, but trust me – it works!


We end up with another unsurprising result. You might find this a bit too yellow, in which case just remove all the multiplications by 1.25 I added.


This one is where things get a bit more interesting. Upon seeing this effect, I immediately thought to myself, “Aha! They’re just using the depth buffer values! I’m a goddamn genius!” So, what is the depth buffer, I hear you ask?

In computer graphics, you need to render things in the correct order. If you have a crate fully obscured by another crate, you wouldn’t want to render the obscured crate over the other one – it’d look really strange! Whenever a pixel is drawn, its depth – the distance from the camera plane to the object being drawn into this pixel in the z-direction – is recorded in the depth buffer (z-buffer); this is just a big ol’ 2D array the same size as the program’s on-screen resolution. When you try to draw something, you first check the new object’s depth against whatever is already recorded at that position, and if it’s a higher value (it’s further from the camera), that means it’s occluded and we discard that pixel.
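In Unity, you can read this buffer yourself through the built-in _CameraDepthTexture. A minimal sketch, assuming UnityCG.cginc is included and the camera has been asked to generate a depth texture on the C# side:

sampler2D_float _CameraDepthTexture;

fixed4 frag(v2f i) : SV_Target
{
    // Raw depth is non-linear; Linear01Depth remaps it to a 0-1 value
    // between the near and far clip planes.
    float depth = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv));
    return fixed4(depth, depth, depth, 1);
}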


This is the same scene from above, but from a different angle. Each pixel is just showing the greyscale z-buffer value of the final rendered image, which means it’s representing how far away the closest thing at that pixel position is. But why does this not look as good as the Odyssey example? Well, first of all, I’m not Nintendo EPD and they have a few more resources than I do. Second, I’ve not coloured the image myself, although as we saw earlier, this would be a fairly trivial change. The main reason, though, is that the scene isn’t very large – and to see why that matters, we need to understand how the z-buffer stores its values.

There needs to be a bound on how small or large the z-buffer values are, else the values would be pretty meaningless. A camera has a near clip distance and a far clip distance, which are the minimum and maximum distances from the camera plane an object can be at to be rendered. If you’ve ever clipped the camera through part of a wall in a game, that’s because the near clip plane is further away than the wall is, so that section of wall is culled (not rendered). Values in the z-buffer are typically floating-point values between 0 and 1, so in a small scene (i.e. one where the clip planes are fairly close together) two objects end up with more widely separated z-buffer values than the same objects would in a large scene.

In the scene above, the near plane is very close to the camera and the far plane is just far enough to contain the whole scene. By default in Unity, the far clip plane is 1000 units away, which would result in the entire scene here being black – all z-buffer values would be close to 0. But having a small scene means you don’t really get much variation in colours, because things are really close and there’s nothing in the background – in the Odyssey screenshots above, there are things clearly in the foreground (Mario and the rocks he’s stood on) and the background (the section of island behind the waterfall). One detail that’s kind of hard to make out is that the furthest island actually shows up in very faint green in the snapshot, so it’s real geometry rather than part of a skybox. Also, the water particles don’t show up in this mode, so they’re likely reading from the depth buffer but not writing to it, and are rendered after all the opaque level geometry.


Seaside Kingdom Doggo raised this game’s review average by 19 points.

Next up is a blur effect. I won’t be talking about the radial blur starting from the edge that Super Mario Odyssey uses though; I’ll talk about a simpler blur that I had time to write – a Gaussian blur that operates on the whole image.

This type of blur works by throwing a ‘kernel’ over each pixel in the original image and looking at the neighbouring pixels. The larger the kernel, the more pixels are considered. Then, a weighted average of the pixel values, with the central pixel having the most weight, is calculated for the new image. It’s a pretty simple concept, and probably works how you’d expect; it’s one step more complicated than a box blur, which doesn’t bother weighting the pixels and just takes an unweighted average. Commonly, you would do two or three passes with different-sized kernels, so the algorithm is run more than once and the resulting blur has more fidelity.
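As a rough sketch, a single horizontal pass of a small Gaussian blur might look like this (the weights come from a binomial approximation to a Gaussian, and _MainTex_TexelSize is Unity’s built-in variable holding the size of one texel). The vertical pass would do the same thing with the offsets along y.

static const float weights[3] = { 0.375, 0.25, 0.0625 }; // 6/16, 4/16, 1/16.

fixed4 frag(v2f i) : SV_Target
{
    // Weighted average of this pixel and its horizontal neighbours.
    fixed3 sum = tex2D(_MainTex, i.uv).rgb * weights[0];
    for (int t = 1; t < 3; t++)
    {
        float2 offset = float2(_MainTex_TexelSize.x * t, 0);
        sum += tex2D(_MainTex, i.uv + offset).rgb * weights[t];
        sum += tex2D(_MainTex, i.uv - offset).rgb * weights[t];
    }
    return fixed4(sum, 1);
}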


This image is doing three blurring passes, each with a larger and larger kernel. Each blurring pass actually requires two shader passes, one in the x-direction and another in the y-direction. You could reduce the amount of blurring by making the kernels smaller or reducing the number of passes.


TFW you turn up to a formal occasion in casual clothing.

The final algorithm I implemented is an image-space Sobel filter to attempt to detect edges in the image. This also acts in separate x- and y-direction passes; in each direction, a gradient is calculated, where a higher gradient means a more ‘different’ colour. A single filter kernel only measures the gradient in one direction, hence the need for two passes. This effect wouldn’t work well in a game if you wanted objects to stand out against objects of similar colour, but it’s pretty simple to understand conceptually.
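For reference, here’s a sketch of the same idea folded into a single pass that computes both gradients at once (my actual implementation splits this across two passes as described above; _MainTex_TexelSize is Unity’s built-in texel size variable):

// Luminance of the image at a given UV coordinate.
float lum(float2 uv)
{
    return dot(tex2D(_MainTex, uv).rgb, float3(0.3, 0.59, 0.11));
}

fixed4 frag(v2f i) : SV_Target
{
    float2 t = _MainTex_TexelSize.xy;
    // Luminance of the 3x3 neighbourhood around this pixel.
    float tl = lum(i.uv + float2(-t.x,  t.y));
    float tc = lum(i.uv + float2(   0,  t.y));
    float tr = lum(i.uv + float2( t.x,  t.y));
    float ml = lum(i.uv + float2(-t.x,    0));
    float mr = lum(i.uv + float2( t.x,    0));
    float bl = lum(i.uv + float2(-t.x, -t.y));
    float bc = lum(i.uv + float2(   0, -t.y));
    float br = lum(i.uv + float2( t.x, -t.y));
    // Sobel gradients in the x and y directions.
    float gx = (tl + 2 * ml + bl) - (tr + 2 * mr + br);
    float gy = (tl + 2 * tc + tr) - (bl + 2 * bc + br);
    float edge = saturate(length(float2(gx, gy)));
    return fixed4(edge, edge, edge, 1);
}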


Running the algorithm on our sample scene results in pillars that are pretty cleanly defined, but this is mostly due to their block-colouring. You can also see a couple of spots where pillars seem to ‘blend’ into one another, and all shadows are also edge-detected because they are dark areas right next to light flooring.

For a bit of fun, I ‘converted’ this effect into a cel-shading effect. Again, it’s not exactly Borderlands quality, but it was a fun exercise for me. This effect is also not present in Odyssey, so there is no reference image comparison.


The Sobel edge detection filter from above is inverted and multiplied by the source image, such that ‘edge’ areas become black and all other areas retain their original colouring. The areas where colours are similar clearly lack cel-shading here, such as the red pillars right-of-centre. You can also see on the yellow pillars where the skybox turns lighter that the black lines ‘break’ a little. Of course, because it’s based on the Sobel edge detection filter, you also get cel shading on the shadows, but the effect is less pronounced here because the shadows are dark.
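In shader terms, that combination step is roughly this (a sketch, where edge is the Sobel result from above and tex is the original screen colour):

// Keep the original colour everywhere except detected edges, which go black.
fixed3 cel = tex.rgb * (1 - edge);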

All of the source code and a built executable for Windows can be found over on my Github. It’s really fun to pick apart effects you’ve seen in video games and I wholeheartedly recommend you try the same! If I have time over the holidays, I may try to pick apart some of the other effects, such as the NES/SNES filters which are probably just pixelation filters with a palette swap applied on top.

Power Surge ~ Ludum Dare 39 (Running out of Power)

Game: Power Surge
Event: Ludum Dare 39
Platforms: Windows (planned to be expanded soon)
Source Available

Last weekend was everyone’s favourite orgy of sleep-deprived video game development. That’s right, Ludum Dare rolled up in town once more and I couldn’t resist taking part. This time round it kinda crept up on me as I wasn’t aware it was happening until a couple of days prior, but luckily I was free to take part.

The theme for this one was “Running out of Power”, which, of course, spawned several games about electrical power. I’m one of the unoriginal people who did the same thing. Introducing, Power Surge!


The premise for this game is rather simple – the main generator’s output is slowly falling, and it’s your job to operate three different power generation stations to keep those sweet, sweet kilowatts flowing. Those three power generators each constitute a minigame requiring different kinds of mouse control.

Wind Turbine

The first game involves spinning an eco-friendly wind turbine round by rapidly spinning your mouse around it. So far, the feedback on the Ludum Dare website has been praising the graphical style, especially the background scenery. The depth of field effect emphasises the focal element of the scene, i.e. the turbine at the front – @maybelaterx says it’s “low poly done very well, and the depth-focus made it all the better”.

However, he also says “I would have liked more feedback on wind power generation”, due to the indirect nature of how the turbine reacts to your input. Rather than spin 1:1 with your mouse movement, a torque is added proportional to the amount of spinning you do. That means it’s sluggish to start up and resists slowing down. During the competition, I aimed for the latter effect but did not want the former; I was unable to get the mechanic working satisfactorily in time and decided to move on rather than spend too much time on it.


Coal Power

The next game gets you to tear down walls of coal by clicking them with your imaginary pickaxe. Gameplay-wise, this has been the most popular game so far – @thesand says “I really enjoyed the coal mining, there was something [satisfying] about it” and @loktor remarks “I liked the mining part the most :)”. I’m inclined to agree, since I spent far too long mindlessly clicking through the mine while developing and testing the game. It’s the game I spent the most time on, and I believe it shows in the final product – it’s the mode with the fewest gameplay issues.

I think its enjoyability stems from the same vein as games like Cookie Clicker and almost every RPG ever; there’s something psychologically pleasing about watching a number increase because of your actions. It’s a form of operant conditioning – a Skinner box – a psychological phenomenon widely exploited to make games more addictive.

This game also requires refinement, however. @thesand “clicked every single coal until [they were] 100 meters down”. Clearly, I need to mention that you can just click and drag over coal to mine it! Even better, I may make it so you merely have to mouse over a coal piece to mine it to avoid causing players muscle strain, or at least include that as the default option.


Nuclear Power Station

Personally, I think this is the weakest of the three games. It’s the final one I developed and I think you can tell it had the least time put into it. All you have to do is click the inactive uranium sticks to make them glow again. On the left-hand side of the UI, there are eight bars that show how depleted each stick is. Unfortunately, I also forgot to modify the “tutorial” for this minigame; while the other two games have messages that appear when you are inactive for 5 seconds to nudge you into the correct action, this one has the message for the Wind Turbine minigame copied over by mistake, as you can see in the above screenshot.

This caused some confusion. @thesand “didn’t really understand the nuke power”, mostly due to the incorrect tutorial information, while @maybelaterx deciphered how to play the game but thought it was “by far the easiest”. This minigame, more than the other two, would benefit from a more difficult mechanic. Given it is a nuclear power station, a higher level of difficulty also makes thematic sense. @maybelaterx suggested that I “could make it more challenging by focusing on the precision of the task, maybe disposing and replacing the rod without touching any of the other rods”. I think this is the kind of gameplay I’ll aim for in my post-competition version.


General feedback

Each of the minigames is a source of shiny gems, which currently act as a points system. Problem is, they don’t do anything apart from sit there on the UI. While the effect is nice – they rotate, with a black border drawing attention to them – they don’t have a practical use, and a few players picked up on this. @wevel “wasn’t quite sure what the gems where for, other than a score system” and @loktor “didn’t really get what the gems were for” either. What I’d like to do is implement a sort of shop for them, so you can buy upgrades for your power generators. I’m not sure what form the upgrades will take yet, but they will likely involve faster energy generation or increased energy caps (each game produces a maximum of 360kW right now).

@milano23 hit the nail on the head though – he “liked the different games” but thought they “became tedious after a while”. I agree. They don’t have much depth (apart from the Coal Mine, it literally has a depth counter) so it’s difficult to invest in each of the minigames. What they really need is a hook – a reason to keep coming back.

This game might work better as a mobile game that you only need to visit for 5 minutes every few hours. It mirrors pretty well how a game like Magikarp Jump works; in that game, I spent a couple of minutes every now and then just hoovering up berries that had spawned and doing my three rounds of training. I’d just need a reason for the player to want to keep their power flowing.

What I’ve learned

Low-poly 3D models have rapidly become my aesthetic of choice, with visually-pleasing results. It’s a style I adopted for my Ludum Dare 37 game, Chemical Chaos, and continued for my last game, Aerochrome. While it was mostly a necessity for LD37 since I wanted to try 3D and didn’t have the time to make high-poly assets, it was a conscious choice for this game jam. I think it’s a style I’ll try to develop in my next few games, too.


I’ve also learned that focusing on a small number of simple games tends to lead to better results in game jams than focusing on a single large idea or more complex minigames. Chemical Chaos also had three different minigames, but they tended to be too complex, especially Flame Test. If I’m to do a similar compilation of smaller games, they each need to have simple and obvious control schemes and rules with immediate and tangible feedback – that’s why Wind Turbine and Nuclear Power Station were harder to understand than Coal Mine.

I think this game has a solid foundation and most of the work to be done is refining the gameplay and expanding the feature set – adding a shop, for example. Apart from that, most of my effort for the post-competition version will be adding detail to the environments.

If you took part in Ludum Dare, do give my game a go and leave some feedback when voting – I’ll try to get around to playing as many games as I can too. If you didn’t take part in Ludum Dare this time around, feedback would be appreciated anyway, even if you can’t vote!

Aerochrome ~ #1: WGD ‘Rainbow’ 48-hour Jam

It’s been a long while since I’ve posted anything, but let’s just gloss over that and pretend I’ve been active. More importantly, Warwick Game Design Society held another 48-hour game jam to celebrate the end of term a couple of weeks ago. So let’s delve deep into the game I made for that and assume January to June of this year never happened.

The theme was initially “Unexpected”, but to spite one of our members who perpetually wants the theme to be “Pride”, we made the secondary theme “Rainbow”. Close enough, right? My game opted most obviously for the “Rainbow” side of the theme, as you might see in the following screenshot:


It’s a strange fever dream of colour.

The basic idea of the game is that you control a spaceship and your goal is to fly into all of the Colour Bits distributed randomly in space to complete the (initially greyed-out) rainbow. At first I went for one of those dodgy control schemes in which the spaceship flies in every direction without any resistance whatsoever, but that ended up being frustrating to play, so instead I made the ship easier to control. There are 100 Colour Bits of each colour, so that’s an enormous 700 collectables up for grabs. At least they’re not Koroks.

It’s not the fanciest game I’ve ever made by any stretch of the imagination, but it is arguably complete. There’s even a win condition! I suppose if you fly off into the infinite void far enough and overflow the position vector of the ship so badly that the game crashes, that counts as a lose condition?



The controls are basic – you can accelerate the ship, use its boosters and rotate in all directions. That’s pretty much it! Since I’m writing this days after making it, I can’t for the life of me remember what the controls are, but you’re smart – you’ll figure it out. It has controller support too if you dislike keyboards and mice.

I plan to touch up a couple of things in the game in the coming days; adding a title screen and a few more objects to float around in space are my priorities. I’ll continue to follow and evolve the low-poly style I’ve adopted. After I’ve done that, I’ll put up a download. Hope you look forward to it!

Chemical Chaos ~ Ludum Dare 37 (One Room)

It’s been over a week since the last Ludum Dare competition ended. This time round, the theme was ‘One Room’, so naturally I made a game about a bunch of chemistry experiments that you have to run around and keep going. This decision was somewhat motivated by the fact that everyone I’ve ever talked to loved chemistry at school and would do anything to do a titration again. After all, knowing your market is half of the battle in game development.

The idea I was going for was a game where the player would be constantly running around, getting one experiment in working order, only to find that another couple are starting to fail. It’d ideally be a very hectic game made up of several really easy minigames. For maximum effect, I’d need to create quite a few different experiments; however, I only had enough time to make three to a decent level of completion. I’d also have liked more time to polish some of the game’s aspects, mainly some smaller details like particle effects and other signposting to make it easier to see the status of experiments from afar.


Don’t ask why it turns from blue to red.

The first experiment was a distillation – in real life, you do this experiment to separate some liquid from a solution (you probably used it to purify water in school). The controls are simple – just mash the mouse buttons or controller shoulder buttons to turn down the temperature, as shown by the thermometer. When this experiment fails – when the thermometer is full – there’s a large explosion for no apparent reason.


The most important part of development – particle effects.

There are a couple of things I’d like to improve about this minigame. First of all, the apparatus should probably start to smoke and rattle around a bit as it’s getting close to exploding. And on the subject of explosions, I think the explosion needs to have more impact – sure, all the equipment goes flying and there’s a bit of fire, but I might add more flames that stay alight for a while on the table, and possibly violent screenshake when the explosion actually happens. On the plus side, I was very happy with the ‘reaction cam’ that pops up in the corner when an experiment fails.


John is the best video game character this year.

The second experiment was the good old sodium-in-water one, which, as we all know, causes fire. The gameplay for this one involves pressing left click/LB to decrease the amount of sodium and potassium that materialises above John, the water tank, and right click/RB to spawn more. If he gets too little metal, he’ll get bored and fall asleep; if he gets too much, he’ll get scared and eventually die. Please try not to kill John.

There are a couple of problems in this screenshot – first of all, the sodium blocks sort of clip through the bottom of John instead of bursting into flames on the surface of the water, something I didn’t catch happening before submission. I’m also confused about what may have caused it, as I touched none of the relevant code between getting the feature working and submission. Secondly, the instructions for this experiment are incorrect, as I seem to have put up the instructions for experiment 1 instead; this was probably just an oversight. They’re both easy-to-fix problems, but they could have easily been avoided.


I’ve left my mixtape in all three of these Bunsen burners.

The third and final experiment is the Flame Test, which tasks you with carrying different elements into three flames. Each flame expects the element that makes it burn in a particular colour, as dictated by the chart next to the Bunsen burners and the base of each burner. All other elements go in the bin; if you’re too slow trashing elements, you’ll fail the game, and if you put the incorrect element in a flame, that flame will go out – losing all the flames results in failure too.

In hindsight, I should have made the Bunsen burners explode when you get things wrong. It’s basically the only thing wrong with this minigame.


For the flame test minigame, I invented new elements so I could make more interesting flame colours. There’s a line of posters along the wall with short descriptions of them all. Given how long it took me to think up the new element types and make posters and models for each of them, I would have liked to make more games that utilise them all, but unfortunately I didn’t have enough time. One game idea I had was a solution-mixing table, which had a bunch of coloured solutions and you had to make the requested colour. However, it’s not really an experiment that’s easy to make hectic and if you’ve ever dealt with transition metal ions yourself, you’d know that this type of colour change chemistry can be a little complex. For the two people that got that tortured chemistry pun, I’ll see myself out.

I was, however, pleased with how the game came out graphically. The only issue I had was trying to make a roof and windows; doing that messed up the lighting, and any amount of time trying to work out Unity’s area lights and baked global illumination seemed to be wasted in the end.

You can give the game a go by visiting the Ludum Dare page, where you can vote if you took part in the competition, or directly on my Google Drive.