Honeycomb Engine: Dev Blog 3 ~ Quaternions for Rotations

Quaternions. The mere mention of these mathematical constructs would’ve sent shivers down my spine a couple of years ago. They’re really weird and complex. However, when you’re making a game engine and you have to model the rotations of objects, you’d be crazy not to use them for several reasons.

The easiest way to think about an arbitrary rotation in 3D space is as three separate rotations around the x-, y- and z-axes respectively. This way of representing a rotation is commonly known as Euler angles. The advantage is that Euler angles are easy to reason about – I just rotate around x, then y, then z, and I'm done. However, they bring a couple of problems with them too; namely, it's inefficient to apply three separate rotations for every single rotation you want, and Euler angles can run into a problem called gimbal lock.

In a system where three gimbals each provide an axis of movement around a central point, it is possible to lose one degree of freedom by rotating one of the gimbals 90° such that its axis aligns with one of the other gimbals' axes. When this happens, those two gimbals rotate around the same axis. The word ‘lock’ is a tad misleading, because you can fix the problem simply by rotating the first gimbal back, but it still leads to weird behaviour sometimes. It’s best explained through an animated diagram.

Quaternions fix both of these issues. Firstly, Euler’s rotation theorem (Yes, Euler got around a lot) tells us that we can represent any rotation or sequence of rotations around a fixed point as one single composite rotation around some axis through that point. As it turns out, we can model rotations around an arbitrary axis as a quaternion!

img_20161109_003123

Rotating around a single axis rather than performing three rotations around the x-, y- and z-axes.

To understand how this works, we first need to know what a quaternion is. I’m not going to go into too much detail, because I want to keep this post focused on how we can use them for rotations. At its most basic level, a quaternion, q, is made up of four real numbers, x, y, z and w, where:

q = w + xi + yj + zk

Here, i, j and k represent versors, or unit quaternions. Don’t worry too much about what they mean, because your head might go nuclear trying to wrap itself around them. What we want to do is represent x, y, z and w in a way that’s useful to us. Luckily, it’s really simple: take our axis to be a three-dimensional unit vector (a, b, c) and our angle to be θ. Then our quaternion’s components are:

x = a * sin(θ/2)
y = b * sin(θ/2)
z = c * sin(θ/2)
w = cos(θ/2)
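
As a quick sketch of that construction in code – a minimal, hypothetical C++ quaternion of my own invention, not anything Honeycomb actually ships with:

#include <cmath>

// Minimal quaternion type for illustration; the names are made up.
struct Quaternion {
    float w, x, y, z;
};

// Build a rotation quaternion from a *unit* axis (a, b, c) and an angle in radians.
Quaternion fromAxisAngle(float a, float b, float c, float theta) {
    const float s = std::sin(theta / 2.0f);
    return { std::cos(theta / 2.0f), a * s, b * s, c * s };
}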

Now we have a quaternion that represents an angle-axis rotation, immune from the problem of gimbal lock. To apply the rotation to a point, we treat the point as a quaternion with a zero w-component and ‘sandwich’ it between the rotation quaternion and its inverse: p′ = q p q⁻¹ (for a unit quaternion, the inverse is just the conjugate, i.e. the same quaternion with x, y and z negated). To figure out what the quaternion is doing, we can take x, y, z and w and recover the angle and axis from them – first of all we can find the angle:

θ = 2 * cos⁻¹(w)

Then, we can use this angle to figure out what the rotational axis – a 3D vector – is (provided θ isn’t zero, in which case there’s no axis to recover):

a = x / sin(θ/2)
b = y / sin(θ/2)
c = z / sin(θ/2)

With this in mind, it’s a little easier to reason about what quaternions are doing. Keep in mind that all these rotations are about the origin, so if you want to rotate around an arbitrary point, you’ll need to translate your object by the inverse position vector of the point, do the rotation, then translate by the position vector of the point to undo the first translation.

rotation-by-translation

This image is about 2D translations, but the general idea still applies.
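
To make the ‘sandwich’ product and the translate-rotate-translate trick concrete, here’s a hedged C++ sketch building on the Quaternion struct from earlier – the vector maths is standard, but all the names are mine:

struct Vec3 { float x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Rotate v by a unit quaternion q: equivalent to the sandwich product q * (0, v) * q⁻¹,
// expanded to v' = v + 2w(u × v) + 2u × (u × v), where u = (x, y, z).
Vec3 rotate(const Quaternion& q, const Vec3& v) {
    const Vec3 u{ q.x, q.y, q.z };
    const Vec3 uv  = cross(u, v);
    const Vec3 uuv = cross(u, uv);
    return { v.x + 2.0f * (q.w * uv.x + uuv.x),
             v.y + 2.0f * (q.w * uv.y + uuv.y),
             v.z + 2.0f * (q.w * uv.z + uuv.z) };
}

// Rotate p around an arbitrary pivot: translate to the origin, rotate, translate back.
Vec3 rotateAround(const Quaternion& q, const Vec3& pivot, const Vec3& p) {
    const Vec3 local{ p.x - pivot.x, p.y - pivot.y, p.z - pivot.z };
    const Vec3 r = rotate(q, local);
    return { r.x + pivot.x, r.y + pivot.y, r.z + pivot.z };
}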

To compose two different rotations into one, we can just multiply their respective quaternions together, paying attention to the order:

new_rotation = second_rotation * first_rotation
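
In code, that composition is just the Hamilton product – again a hedged sketch reusing the hypothetical Quaternion struct from above:

// Hamilton product: note that quaternion multiplication is not commutative,
// so applying 'first' then 'second' means computing second * first.
Quaternion multiply(const Quaternion& a, const Quaternion& b) {
    return {
        a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
        a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
    };
}

// Quaternion new_rotation = multiply(second_rotation, first_rotation);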

The other great thing about quaternions is that we can blend between two rotations really easily, with more believable results than using Euler angles. Whereas we’d use a regular lerp operation (linear interpolation) to move from one rotation vector to another if we were using Euler angles, we can use the more sophisticated slerp (spherical linear interpolation) technique for quaternion-based rotations.

To imagine the difference between these two techniques, imagine you were stood on a completely flat world, whether in 2D or 3D, and I asked you to walk from point A to point B in a straight line at a constant speed – that’s a lerp. Now imagine doing the same on the surface of a perfectly spherical meteorite. Walking in a straight line would put you inside the meteorite, which is impossible; instead, you walk across the surface directly towards your destination, and that’s what a slerp is doing. Super simple stuff!

on-top-of-the-world

The flag looks like the USB logo, but it’s supposed to be an angle-axis diagram.
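
Slerp is also simple to write down – here’s a hedged sketch reusing the Quaternion struct from earlier; production code would need more numerical care than this:

#include <cmath>

// Spherical linear interpolation between unit quaternions a and b, with t in [0, 1].
Quaternion slerp(Quaternion a, const Quaternion& b, float t) {
    float d = a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z;
    if (d < 0.0f) { // the rotations sit on opposite hemispheres; flip to take the shorter arc
        a = { -a.w, -a.x, -a.y, -a.z };
        d = -d;
    }
    if (d > 0.9995f) { // nearly identical rotations: plain lerp avoids dividing by ~0
        Quaternion r{ a.w + t * (b.w - a.w), a.x + t * (b.x - a.x),
                      a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
        const float len = std::sqrt(r.w * r.w + r.x * r.x + r.y * r.y + r.z * r.z);
        return { r.w / len, r.x / len, r.y / len, r.z / len };
    }
    const float theta = std::acos(d);   // angle between the two quaternions
    const float sinTheta = std::sin(theta);
    const float wa = std::sin((1.0f - t) * theta) / sinTheta;
    const float wb = std::sin(t * theta) / sinTheta;
    return { wa * a.w + wb * b.w, wa * a.x + wb * b.x,
             wa * a.y + wb * b.y, wa * a.z + wb * b.z };
}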

That rounds off pretty much all I know about quaternions and how much I could be bothered to look up online. If I got anything wrong, please do tell me because it’s definitely possible. Remember, if quaternions start to hurt your head, which happened plenty of times to me, keep in mind the most important thing: Don’t Worry About It™. You don’t need to know them inside out to be able to use them.

Ghost Party ~ WGD ‘Spooky (2016)’ Three-Week Competition

It’s been a while since I’ve properly sat down and made a game, apart from the Zero Hour Game Jam, which I entered with an entirely broken game about shooting ghosts with a flashlight. Today, after having three weeks to work on a game, I decided to start one the day it was due to be shown.

3dspooky5me

My Zero Hour Game Jam entry, the lesser of my two games about flashing ghosts.

I decided that my Zero Hour Game Jam entry had a few flaws, the main one being that you magically die after about 30 seconds every time without being visibly hit by anything. You hit ghosts by aiming straight at them; unfortunately, since the game uses a raycast, that means aiming the exact centre of the screen at them, and with no crosshair it’s almost impossible to tell whether you’re on target, except that the ghosts turn red.

I did like the idea of hitting ghosts with a flashlight though, so I kept it and made a brand new game today with a different art style: crayons. You might think I’m crazy, but when you’re time-constrained, it always helps to think of quick hacks to make things easy. I could just as easily have drawn everything with ordinary coloured pencils, but a bad artist (or in this case a rushed one, although I’m not exactly Picasso to start with) will always make coloured pencil look crap. By contrast, when someone sees wax crayons, they think of a 5-year-old’s drawing tacked onto the fridge. Essentially, wax crayons have a very low skill ceiling and I exploited the heck out of that. It made for a very consistent style and I actually really like it! It’s similar to the style I used for Tappy Dev, but I think it’s possibly better.

screenshot_01

Yellow ghost is far too happy.

The general idea is that ghosts spawn from left and right, and it’s your job to shoot them down with your flashlight by clicking on them. It’s better than 3DSpooky5Me (my 0h game jam game), in that you can see the mouse cursor to indicate where you’re aiming. The ghosts are also much cuter.

Given more time, I’d go back and make sure the proportions of all the objects in the game were consistent. As you can see, the ground’s outline is very thick compared to that of the ghosts, despite being drawn using the same crayons. I’d also change the design of the gravestones and the large tombstones, since my friend said they look like printers. While the thought of a haunted Konica Minolta fills me with glee, I might save that idea for when I’m in a real jam.

I also need to change the mechanics a bit. As in the 0h Game Jam version, I didn’t make the flashlight affect an area, but stuck with a raycast; having the flashlight hit all ghosts in its range when flashed would open up the possibility of combos and make the mechanic feel more satisfying when you line up 5 ghosts in the same flash. Currently, the score increments by a strange formula I devised based on the speed, sine-waviness and z-distance of the ghosts, but this should be better communicated to the player, as it’s difficult to even notice how many points you got for hitting one. Furthermore, the game entirely lacks a lose condition, which I was minutes away from getting into the game before I had to leave for the WGD event at which I showed it.

But the biggest and funniest problem in this build is the unintentional and bewilderingly fast escalation in difficulty. The screenshot above is from about 30 seconds in; the following one is from a couple of minutes in:

screenshot_02

Points for spotting the cameo appearance.

At the start, the difficulty increases in a linear fashion. Every 5 seconds (although I’d planned to change it to every 20 or 30 seconds and forgot), the time interval between ghost spawns goes down by a fixed amount. It does this 10 times, then it starts going down in a geometric progression. You can see where this is going. Every 5 seconds, the time between spawns is divided by 1.2. Then you end up with a ghost party. It’s a classic example of “oh I’m sure this arbitrarily-picked number will be fine”.
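
The pacing logic boils down to something like the sketch below. This is a hypothetical reconstruction: the 5-second tick, the 10 linear steps and the divide-by-1.2 are from the actual game, but the starting interval and the size of the fixed decrement are made up for illustration.

// Called every 5 seconds by a timer (assumed hook).
float spawnInterval = 2.0f; // seconds between ghost spawns (assumed starting value)
int difficultySteps = 0;

void onDifficultyTick() {
    if (difficultySteps < 10) {
        spawnInterval -= 0.1f;  // linear phase: fixed decrement (assumed amount)
    } else {
        spawnInterval /= 1.2f;  // geometric phase: hello, ghost party
    }
    ++difficultySteps;
}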

At the early stages of development (read: while zoning out in a lecture this morning, thinking about spooky ghosts), I considered adding some element of a rhythm game into this, but as you’ll find out there is no sound whatsoever. It’s somewhat inspired by the Sneaky Spirits game from Rhythm Paradise (or Rhythm Heaven, for you Americans). I wanted each ghost type to have their own behaviour, but this didn’t come to fruition and they instead all follow sine waves of varying frequency and magnitude.

I wanted the spiky green ones to zig-zag, the puffy white ones to flutter upwards and fall back down a bit constantly, and the blue ones were going to have Pacman-style grid movement. The pale ones were going to have an animated tail that wiggled around and the rare Drifloon (please don’t sue me, Nintendo) would’ve occasionally grabbed a headstone with its tassels and flown off with it. Maybe next build!

All things considered, the most important lesson I’ve learned today is that I should draw with crayons more often. It’s really easy and looks pretty nice, especially with the Paper Mario-style aesthetic. It’s also not really apt to keep referencing ‘today’, but I’m not a clock so it’s fine. You can download the game here.

Honeycomb Engine: Dev Blog 2 ~ More Vulkan and Honeycode

Since last week, I’d planned to shift my focus mainly onto Honeycode, but I’ve mostly been working on the renderer some more. Yesterday I did a bit of work on the Honeycode parser, but it’s nowhere near complete yet. However, I have thought a little more about how the language will work conceptually, such as the syntax it will use and the features it will support. The idea is to make a programming language for non-programmers or beginners, so I want it to be super-intuitive. I’ll detail a few of my ideas below, although it’s currently just me throwing everything at the wall and seeing what sticks.

Vulkan

On the plus side, I’ve now finished the set of Vulkan tutorials I was using, resulting in a really cool rendering of a model of a chalet found on Sketchfab. Now that I have a better understanding of how Vulkan operates, the next step is to get multiple different models loaded. Once I achieve that, I’ll think up a level data format that lets me save and load level data from disk. And once I’ve done that, I’ll need to set up a proper component-based architecture for objects. Using the Component programming pattern will allow users of Honeycomb to write their own behaviour scripts and attach them freely to game objects while keeping those sets of behaviour nicely encapsulated and decoupled from each other.

vulkan_model

This hut is almost worth the couple of weeks spent trawling through tutorials.
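
As a rough sketch of what that component-based architecture might look like – purely illustrative, with names of my own choosing rather than anything Honeycomb actually has yet:

#include <memory>
#include <vector>

// Base class for all behaviours that can be attached to a game object.
class Component {
public:
    virtual ~Component() = default;
    virtual void update(float deltaTime) {} // overridden by concrete behaviours
};

// A game object is little more than a bag of components.
class GameObject {
public:
    void addComponent(std::unique_ptr<Component> component) {
        components.push_back(std::move(component));
    }
    void update(float deltaTime) {
        for (auto& component : components) {
            component->update(deltaTime);
        }
    }
private:
    std::vector<std::unique_ptr<Component>> components;
};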

I’ve been reading through Bob Nystrom’s brilliant book, Game Programming Patterns, which details many different ways of organising code for all sorts of situations; I’ll be using the patterns from this book in spades. For example, I currently have an input class that uses the Singleton pattern – a class exposes a single, publicly accessible static instance of itself – purely because it’s the quickest way of making something globally visible (well, that and making all of a class’s member functions and variables static). I wanted to quickly add functionality to rotate the model and move the viewport, so please don’t judge me.

As pretty much every source on programming patterns will tell you, Singleton is awful in almost every case, so I will replace the input system with one that uses the Command pattern. Instead of hard-coding a function call for each input, you bind each input to an encapsulated behaviour – a command – and run whichever one is triggered through a common execute() function.
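
Here’s a hedged sketch of the idea in C++ – the key-to-command binding is the important part, and all the naming is mine, not Honeycomb’s actual input system:

#include <memory>
#include <unordered_map>

// A command encapsulates one piece of input behaviour behind a common interface.
class Command {
public:
    virtual ~Command() = default;
    virtual void execute() = 0;
};

class RotateModelCommand : public Command {
public:
    void execute() override { /* rotate the model */ }
};

// Inputs map to commands, so bindings can be swapped out at runtime.
class InputHandler {
public:
    void bind(int key, std::unique_ptr<Command> command) {
        bindings[key] = std::move(command);
    }
    void handleKeyPress(int key) {
        auto it = bindings.find(key);
        if (it != bindings.end()) {
            it->second->execute();
        }
    }
private:
    std::unordered_map<int, std::unique_ptr<Command>> bindings;
};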

Honeycode

Since the idea behind Honeycode is to be beginner-friendly, I want to keep the number of scary features in the language to a minimum. You know the sort of stuff I mean, like requiring the user to write a million lines of boilerplate code before anything actually happens. At the same time, I want it to be expressive, and I need it to be easily translatable to equivalent C++ code so I can compile it into an executable together with the engine.

There are a few broad ideas I currently have – some may be useful, some may be scrapped later on:

  • When a user creates a new script, the code in that script is automatically inside a class that inherits Component (the base class for the Component programming pattern), but this information is hidden from the user. They just write functions and variables in the script and attach it to game objects in the editor and let the engine worry about types and stuff.
  • There are some event functions that fire automatically and are ignored by scripts that don’t bother to implement them. These include a start() function, called once everything is set up and before any other user code runs; an update() function, called every frame; an exit() function, called when the level switches; and a quit() function, called when the game is closed. All the user needs to know is that they exist and roughly when they’re called (there’s a sketch of how this might translate to C++ after this list).
  • There should be simple one-function APIs for common tasks, with minimal #includes or imports (or none at all, handled behind the scenes); those simple functions will then get translated to the appropriate C++ code during compilation. For example, if a user wants to save data (as a string) to a file, they should be able to write something like:
write(myString, myFileName.dat);
  • I might only have one floating point data type and one integer data type. For new users, I think it might be too confusing to immediately have to distinguish between all sorts of number types, like float, int, long, signed int, unsigned short int, char, wchar_t and who the hell knows what else is lurking in C++. At that rate, I’m surprised they didn’t throw in short long int to make everyone blink twice. I think including only float and int should suffice. In some instances during compilation, I can replace a type with a more sensible one anyway – array indexing and iteration would always use unsigned integers, for example. I’ll also keep char hanging around, because at least it’s clear what char is doing to a new programmer.
  • Memory management should be as simple and automatic as possible because C++ is confusing enough as it is without having to worry about destructors and all that jazz. I should be able to do all that behind the scenes.
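
To make the first two ideas concrete, here’s a guess at the kind of C++ the Honeycode compiler might emit for a trivial script, as promised above. The event functions are the ones described in the list, but everything else – the header name, the class name, even the imagined Honeycode in the comment – is speculative:

// Hypothetical compiler output for a user script that counts down a health value.
// The user might only ever write something like:
//     health = 10
//     update: health = health - 1
#include "Component.h" // engine-provided base class (assumed header name)

class UserScript : public Component {
public:
    // Event functions default to empty in the base class, so a script only
    // overrides the ones it actually implements.
    void start() override { health = 10; }
    void update() override { health -= 1; }
private:
    int health = 0;
};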

To make things clear, my language doesn’t exist to radically change the syntax of C++, nor is it intending to provide any fancy optimisations. It’s essentially just a metric ton of syntactic sugar to make C++ more palatable to newbies. Since I’m not aiming to squeeze the absolute most out of the hardware, I can be a bit more lax on the performance side of things as long as games are efficient enough. I have a few more ideas, but most of them are just things I’d remove from C++ rather than add.

As for the progress I’ve made regarding the lexer/parser, I’m currently at the stage where I can recognise some keywords from an input file and convert them into tokens, although it needs a lot more work before it’s finished. The next step after that is to add all those tokens into a parse tree, then I’ll need to look through that tree to make sure the syntax of the input file is valid. I’ve also settled on the file extension .hny for Honeycode files (I’ve clearly got my priorities straight when it comes to fulfilling features).

So far on my schedule, I’m supposed to have written my specification (which is complete), learned the tools I’m using (including Vulkan, although I’ve left myself an additional week for that) and started working on Honeycode (which is why I’ve been brainstorming ideas). This week, I’m supposed to learn all the Vulkan I need to know and start on the end-of-term progress report; as I’ve finished the Vulkan tutorial ahead of time, I’ll wait until early December, when my copy of the Vulkan Programming Guide is due to arrive, before doing much more Vulkan, and probably work mainly on Honeycode instead.

Honeycomb Engine: Dev Blog 1 ~ Working with Vulkan

Vulkan is a modern graphics API developed by the Khronos Group, the same group that maintains OpenGL, designed to bring high-efficiency graphics and computation to current generations of graphics cards. As mentioned in my previous post, I’m planning to use Vulkan for Honeycomb’s renderer, and as such I’ve been spending the best part of the last week learning the ins and outs of how it works. Believe me when I say it’s verbose; imagine if, to do your shopping, you had to go to the shop and buy each item in turn, go home, put the item away and then go back to the shop to get the next item – that’s what developing a Vulkan application often feels like. The upside is that the programmer has much more time to consider what the best items to buy are.

I won’t go into much detail about the inner workings of Vulkan, but as a broad explanation it’d suffice to say it’s like OpenGL without hand-holding. Not only do you have to create shaders and tell the GPU which vertices to render, which you’d of course have to do in OpenGL, but you have to deal with all the memory management – creation and safe deletion of buffers and queues, all of the device checks to determine if your hardware is capable of what you need and all of the semaphores and locks related to the asynchronous nature of how Vulkan processes things.
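
To give a flavour of that verbosity, here’s roughly what the very first step – creating the Vulkan instance – looks like. This is a trimmed-down sketch: a real renderer also has to supply extension and validation-layer lists, which I’ve omitted.

#include <vulkan/vulkan.h>

// Creating the very first Vulkan object, the instance.
bool createVulkanInstance(VkInstance& instance) {
    VkApplicationInfo appInfo{};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName = "Honeycomb";
    appInfo.applicationVersion = VK_MAKE_VERSION(0, 1, 0);
    appInfo.pEngineName = "Honeycomb";
    appInfo.engineVersion = VK_MAKE_VERSION(0, 1, 0);
    appInfo.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo createInfo{};
    createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;
    // enabledExtensionCount / ppEnabledExtensionNames are left as zero/null here;
    // windowing through GLFW requires passing its extension list in for real.

    // Every Vulkan call returns a VkResult that you're expected to check.
    return vkCreateInstance(&createInfo, nullptr, &instance) == VK_SUCCESS;
}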

On top of that, whereas OpenGL gives you a graphics pipeline with predefined operations, some of which are programmable – a vertex array feeds into a programmable vertex shader, followed by rasterisation, and so on – Vulkan requires the user to create the pipeline from scratch. This removes a ton of overhead by cutting out unused stages of the pipeline, but requires much more elbow grease on the programmer’s part. I really like the customisation and can clearly see how a well-behaved Vulkan program would end up more efficient than a similar OpenGL 4.5 or DirectX 11 program.

After about a week of following this amazing tutorial on and off, and after 1000-or-so lines of code, I finally reached a crescendo of joy when this popped up on my screen:

vulkan_triangle

It’s the most beautiful and efficient triangle I’ve ever seen.

Now that I have something rendered to screen, it’s much less of a jump to get more stuff on the screen than the journey to obtaining this triangle was. I’m using GLFW which handles all of the windowing for me, but all that does currently is remove the hassle of integrating my program with each platform’s native windowing system. If you’re planning to use Vulkan in your own project, be prepared to put in a lot of effort, but be excited when your application runs like silky-smooth butter.

One last thing I noticed – whereas OpenGL tends to fluctuate around 60fps when using Vsync (on my monitor at least), Vulkan tends to stick at exactly 60fps when left alone. It must just be very comfortable sat where it is. On the flipside, dragging the window around or switching programs tends to lag like hell, to the point where even my music stutters, although the actual program still reports 60fps, so it could just be X11. I’ll report back if I ever find the solution to that issue.

d1-vulkan-60fps

Dat gradient, yo.

This week I’m scheduled to continue work on the Vulkan stuff, but I’m also supposed to be working on Honeycode. I’ll likely talk about Honeycode a bit more in my next post as I’ve only brainstormed ideas for the syntax and engine integration so far.

PS. I originally planned to write about 1,000 words for this post to match the verbosity of Vulkan, but I decided that might be annoying to read. Have 625 instead.

I’m Making a Game Engine

It’s sure been a while since I last posted, well, anything. Part of that was me getting reacquainted with uni life, part of it was me adjusting to a real-person sleep cycle and part of it was me thinking about the game engine this post will be about. But most of it was just me being lazy because my last post was in early July, about three months ago. That’s my longest disappearance yet. Oops.

However, since I was last here, a lot has happened! As part of my third year at uni (I’m already halfway through – where does the time go?!), I have to do a big project on a computer science topic; I’ve decided to make a game engine, which has led some people to think I’m a tad insane. I’m inclined to disagree – doing it will make me insane. Without further ado, introducing my concept: the Honeycomb Game Engine!

honeycomb_logo

At some point I will vectorise this image.

The basic idea is that Honeycomb will be geared towards people who don’t have all that much experience in game design or programming (or both!). The way I will achieve that is by turning the idea of a game engine on its head. Some game engines try to give the developer tons of tools for any imaginable situation, but I think this is overwhelming for a new user – that’s how I felt way back when I started using Unity, anyway. Instead of following in those footsteps, I’m going to try and condense my game engine down to easy-to-use key features and avoid adding anything that’s unnecessary. Essentially it will lie somewhere between Unity and Game Maker.

In my opinion, a beginner-friendly game engine should have most of the following features:

  • A clean interface with easy-to-understand buttons and menus for common tasks;
  • A way to add behaviour to game objects that is intuitive and doesn’t require writing tons of boilerplate code (a simple scripting language, for example);
  • A room-building tool that lets you add game objects to a room and snap them to a grid easily;
  • Support for common 3D modelling packages, such as Blender or Maya. I think supporting .fbx files is sufficient for this;
  • A big, red button in the corner of the editor that donates an energy drink to an exhausted game engine developer somewhere in the world;
  • One-button exports to the platforms the engine supports. Users shouldn’t have to sift through mile-long lists of options for their exports;
  • A UI designer toolkit that lets users place buttons, text elements and sliders on a virtual ‘screen’ that then gets overlaid onto the running game;
  • A ‘play’ button that lets users run their game directly in the editor, or at least build the game quickly and provide debugging feedback to the user. This is one of the most difficult things I will try to implement (it’s a stretch goal);
  • A material editor that lets users plug in values and textures for various parameters. Then users can drag those materials onto models in their game;
  • An animation toolkit that lets users control how models are animated and how different animations work together;
  • Coroutines. Holy hell, they are so useful in Unity while writing behaviour.
  • Many many more things that I’ve inevitably missed out!

Some of these features obviously require more work than others. For example, I plan to create a scripting language (called Honeycode, because I’m an awful person) that will get compiled to C++. The advantage of creating my own scripting language is that I can abstract some of the less beginner-friendly features of C++ and ignore some less useful ones entirely, while making assumptions about what Honeycomb provides. This way, I can bake engine integration directly into the language, similar to how the Unity API augments C# to provide Unity’s features.

Essentially, my motivation for the engine is to create something that new users can just pick up and start making things with, without having to spend hours with setup and tutorials. A user should be able to drag in a default character controller and immediately have working character input and movement, the whole shebang. They should then be able to easily add behaviour to objects in the scene in a way that feels natural – telling a sheep to jump 17 feet in the air when the player approaches should be a simple line or two of code. Obviously sheep don’t jump that high though, it’d be terrifying if they did. Or baaaaad.

So that’s my approach. I’m going to add in the absolute basic features that a new user will want or need and let them build everything else on top of that framework. To aid that, I’ll also try and write a bunch of tutorials and sample code to guide users into writing the more complex and weird stuff that I don’t think needs to be baked into the engine itself. That way, it’s not as if I completely neglect the existence of the more useful features, but I don’t just provide them for the sake of making my engine do your tax returns and make toast for you while you’re working.

One of the more exciting things about my project is that I’m going to try and use Vulkan, the shiny new graphics API by the Khronos Group. It’s sorta like OpenGL++, but not really. Khronos basically took the remnants of AMD’s Mantle API and created a cross-platform graphics API for the modern age, which allowed them to make different design decisions to the ones made during OpenGL’s development. As a result, the API provides much lower-level access to hardware, but this makes everything so much more verbose; it’ll be a challenge to implement Vulkan support, but I’m confident I can get it working with a little elbow grease. By that, I mean a lot of coding all-nighters. And hey, if it doesn’t work, I can always fall back to OpenGL.

I’ll be returning to this blog every now and then to document my progress and, when the time comes, upload working downloads for people to try out the engine. That’s a long way off currently, though! I hope this whirlwind tour of my plans makes sense, although I’ve probably missed some stuff out, so let me know if something doesn’t add up. It’d actually be hugely useful for me to know people’s most requested features or major gripes about their choice of game engine or development tool, so feel free to bellyache and rant in the comments section about something your game engine does that you think could be improved!

ShaderLab Programming #1 – Sepia Blend Shader

Now that exams are over, I’ve had time to actually breathe and, more importantly, do game dev stuff. I’ve rarely touched computer graphics in the past, but it’s an area I’ve always wanted to explore in detail. This summer, I plan to mostly spend my time learning the intricacies of computer graphics, and that’s partially where Unity’s ShaderLab steps in.

ShaderLab is a graphics language in its own right, but it also encompasses Cg, a high-level shader language by Nvidia, and HLSL, a Microsoft shader language for use with DirectX, both of which were developed together. We don’t need to know too much about the structure of ShaderLab programs for what we’re doing, so I’m largely going to ignore it for now. I needed to create a post-processing effect similar to Unity’s built-in Sepia Tone, which is an on/off effect, with a tweak that allows it to fade in and out instead. It’s for a game I’m working on in which you can turn back time – more on that in the coming days when I’ve put together something a bit more concrete hopefully! Let’s deconstruct this super-simple program and explore what a sepia effect actually is. Unfortunately, Unity’s Sepia Tone shader seems to use some sort of maths wizardry for its fragment shader that I don’t understand, so I’ll also be writing mine mostly from scratch. Here’s the number bullshit right here:

fixed4 frag (v2f_img i) : SV_Target
{ 
    fixed4 original = tex2D(_MainTex, i.uv);
 
    // get intensity value (Y part of YIQ color space)
    fixed Y = dot (fixed3(0.299, 0.587, 0.114), original.rgb);

    // Convert to Sepia Tone by adding constant
    fixed4 sepiaConvert = float4 (0.191, -0.054, -0.221, 0.0);
    fixed4 output = sepiaConvert + Y;
    output.a = original.a;
 
    return output;
}

The infinite wisdom of the Internet tells us that to convert a source image to a sepia-tone version, we just define a function of the input colours as follows:

new red = min(in red * 0.393 + in green * 0.769 + in blue * 0.189, 255)
new green = min(in red * 0.349 + in green * 0.686 + in blue * 0.168, 255)
new blue = min(in red * 0.272 + in green * 0.534 + in blue * 0.131, 255)

This formula represents 100% sepia; that is, the source image completely turned into a sepia tone. But we don’t always want a completely sepia image – we need to incorporate some sort of progress measure. Using this formula, we can just interpolate between the input colour and the full sepia tone and use a separate script to control the ‘progress’ of the interpolation. I’ve talked about interpolation before, so if this really big word is confusing, it just means we’re picking a colour on an imaginary line between ‘original colour’ and ‘sepia colour’, with a parameter between 0 and 1. We can now create another function to get our final ‘blended’ sepia colour:

out red = parameter * new red + (1 - parameter) * in red
out green = parameter * new green + (1 - parameter) * in green
out blue = parameter * new blue + (1 - parameter) * in blue

With the theory out of the way, we can now create the shader and the script that will control it. We’ll be utilising the post-processing effects provided by Unity, so make sure you import them from the Standard Assets using Assets->Import Package->Effects from the menu bar. We’ll start with the controller script.

There are two main ways to keep track of the progress, so I’ll cover both. The first way is to define a method such that another script can just pass in a float to act as the parameter.

using UnityEngine;

namespace UnityStandardAssets.ImageEffects
{
    [ExecuteInEditMode]
    [AddComponentMenu("Image Effects/Color Adjustments/Sepia Blend")]
    public class SepiaBlend : ImageEffectBase
    {
        private float progress = 0f;
        private bool active = false;

        private void SetProgress(float progress)
        {
            this.progress = progress;
            active = (progress != 0f); // Only apply the effect when there's something to blend.
        }

        // Called by camera to apply image effect.
        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            if(active)
            {
                material.SetFloat("_Progress", progress);
                Graphics.Blit(source, destination, material);
            }
            else
            {
                // Pass the image through untouched so the screen isn't left blank.
                Graphics.Blit(source, destination);
            }
        }
    }
}

OnRenderImage() is the method called when we want to render a post-processing image. We set the ‘_Progress’ parameter of the material attached to this effect – we’ll see how this works with that material’s shader shortly – and use Graphics.Blit() to apply the texture created by the shader to the screen. The second way to control the progress parameter is to allow external scripts to set the ‘active’ boolean and use a time-based interpolation to decide what the progress should be. The guts of the program will then look more like this:

private float progress = 0f;
private bool active = false;

private void SetActive(bool active)
{
    this.active = active;
}

private void Update()
{
    // Ease progress towards its target; the 5f controls how quickly the effect fades.
    progress = Mathf.Lerp(progress, active ? 1f : 0f, Time.deltaTime * 5f);
}

// Called by camera to apply image effect.
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    material.SetFloat("_Progress", progress);
    Graphics.Blit(source, destination, material);
}

Now for the actual shader! I’ll explain very quickly what the fragment shader is doing and ignore the vertex shader since it’s doing nothing in this example; I may do another post in the future explaining the details of how the shader is constructed, but I’m glossing over a lot for now. Don’t even worry about it yet.

Shader "Hidden/SepiaBlendEffect"
{
    Properties
    {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _Progress ("Progress", Float) = 0
    }
    SubShader
    {
        Pass
        {
            ZTest Always Cull Off ZWrite Off

            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
 
            #include "UnityCG.cginc"

            uniform sampler2D _MainTex;
            uniform float _Progress;

            fixed4 frag(v2f_img i) : SV_Target
            {
                fixed4 original = tex2D(_MainTex, i.uv);
 
                half3x3 vals = half3x3
                (
                    0.393, 0.349, 0.272, // Values for input red.
                    0.769, 0.686, 0.534, // Values for input green.
                    0.189, 0.168, 0.131  // Values for input blue.
                );

                half3 input = half3(original.rgb);

                half progress = half(_Progress);

                half3 intermed = (progress * mul(input, vals)) + 
                    ((1 - progress) * input);

                fixed4 output = half4(intermed, original.a);

                return output;
            }
            ENDCG
        }
    }
    Fallback Off
}

We’re interested in the stuff between ‘CGPROGRAM’ and ‘ENDCG’ – this is written in Nvidia’s Cg. All we need to know about the structure of shaders for now is that the fragment shader is run on every pixel of a processed image. I’ll also ignore all the variable types for now and cover those in another post at a later date. Jumping straight to the frag() method, which constitutes the fragment shader, we see a 3×3 table of numbers, vals, that looks strikingly like the one defined for our first function; the table is the transpose of a matrix formed by those functions’ numbers. Our shader performs a matrix multiplication on a vector made of the original rgb values, input, and our magical sepia number matrix vals (which is why it had to be transposed), then we do our interpolation to get intermed. Afterwards, we put together the final output by adding the original colour’s alpha onto intermed, and boom! We have our Sepia Blend shader. Now all that remains is to attach the SepiaBlend script to the main camera and assign the SepiaBlendEffect shader to the script, and we’re good to go.

I hope this has been a helpful introduction to the world of computer graphics. I’ve uploaded the three scripts to save you having to copy everything out (both variants of SepiaBlend.cs are included; pick your preferred version). Let me know if everything turns into a bugfest when you attempt to run any of the examples and I’ll try to diagnose the problem.

Oh Hell It’s Exam Season Again

This is only going to be a quick post about how often I will post (or not post) in the coming weeks. Exams are looming over the horizon, looking menacingly at me like the moon from Majora’s Mask, so I’m likely not going to have enough time to make games or post on here about, well, anything. Revision has settled in firmly, so I should probably keep my sanity and not split my time between revising and making games (even though one of those is far more fun than the other).

Since the last update for Shifting Dungeons, the only game design I’ve actually done is making bars look nicer. Well, it’s a work-in-progress, so it’ll look better by the next update. The main thing I need to add is some icons so you know which bar is for which stat. Hopefully next time I make a post, which will likely be after June 17th at the very least, I’ll have much more to show.

nice_bars_m8