Honeycomb Engine: Devblog #5 ~ A Component-Based Architecture

The backbone of a game engine is its architecture. When and how you update the game state can have massive performance implications; as such, a large chunk of the time taken while writing your main loop will be spent making trade-offs that make the engine more complicated but ensure maximum performance. The main loop is something you absolutely need to get right if your game engine stands any chance of running smoothly. Prepare yourself for a particularly wordy post all about Honeycomb’s inner workings!

Many game engines opt for an architecture in which every conceptual object that can go into a game – the player, a chair, an explosion – is modelled by a GameObject. By itself, a GameObject doesn’t do a great deal except hold a list of all the useful snippets of behaviour that actually define what the object does – those are usually called Components. It’s probably the most straightforward way of reasoning about the objects in your game, and that’s part of the reason this programming pattern, aptly called the Component design pattern, is used – GameObjects and Components are intuitive.
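In code, the pattern boils down to surprisingly little. Here's a minimal sketch of the idea – the class names and the update(float) signature are illustrative for the example, not Honeycomb's actual API:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

// Minimal sketch of the Component pattern. Names are illustrative.
class Component {
public:
    virtual ~Component() = default;
    virtual void update(float dt) = 0;  // called once per frame
};

class MovementComponent : public Component {
public:
    void update(float) override { /* move the object here */ }
};

// A GameObject does little by itself: it owns components and delegates.
class GameObject {
public:
    void addComponent(std::unique_ptr<Component> c) {
        components.push_back(std::move(c));
    }
    void update(float dt) {
        for (auto& c : components) c->update(dt);
    }
    std::size_t componentCount() const { return components.size(); }
private:
    std::vector<std::unique_ptr<Component>> components;
};
```

The GameObject never knows (or cares) what its components actually do, which is exactly what makes the pattern so flexible.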

If you think of a person in real life, you can easily think up a whole host of behaviours they have and how those behaviours interact with each other – they use their legs to move, their eyes to see and their brain to work out a path from point A to point B. In a virtual environment in which the person were an NPC, the legs would be a movement component and the brain and eyes would make up the AI component. But the person also has to interact with a physics component to make sure it doesn’t clip through walls, which would require a collider component to define the physical shape of the person. The movement component modifies the transform component (I’ll talk about those later), then the person’s position and orientation get fed to the renderer component so that the game knows how to display them. The person also probably has to deal with an animation component that details exactly how it walks and maybe a sound component if it needs to speak. It’s easy to see how the number of components on a GameObject can snowball!

Usually, the only component that is actually required is called the Transform component. The Transform defines the position, rotation and scaling of an object, which is all data passed to the renderer each frame to calculate part of what’s known as the model-view-projection matrix. Without going into much detail, that matrix is responsible for taking a model as defined relative to itself (how you’d see a model in your modelling package) and transforming its vertices onto your screen. You can see why it’s important – everything has a position and orientation! You could theoretically merge the Transform component into the base GameObject class, although later on in this post I’ll go into a reason you might not want to.
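As a rough sketch of the 'model' step the Transform feeds into, here's a hand-rolled version. A real engine would use a maths library such as GLM, and I've restricted rotation to a single axis to keep the example short – so treat this as an illustration of the ordering (scale, then rotate, then translate, matching model = T * R * S), not production code:

```cpp
#include <cassert>
#include <cmath>

// Illustrative Transform component and the "model" part of the
// model-view-projection matrix, with rotation limited to the z-axis.
struct Vec3 { float x, y, z; };

struct Transform {
    Vec3 position{0, 0, 0};
    float rotationZ = 0.0f;  // simplification: one rotation axis only
    Vec3 scale{1, 1, 1};
};

// Apply scale, then rotation, then translation - the same order as
// multiplying a vertex by the composed matrix model = T * R * S.
Vec3 modelToWorld(const Transform& t, Vec3 v) {
    v = {v.x * t.scale.x, v.y * t.scale.y, v.z * t.scale.z};
    float c = std::cos(t.rotationZ), s = std::sin(t.rotationZ);
    v = {c * v.x - s * v.y, s * v.x + c * v.y, v.z};
    return {v.x + t.position.x, v.y + t.position.y, v.z + t.position.z};
}
```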

Imagine a world without components. Instead, we just throw all of the ideas I just mentioned into one monolithic class called Player. You can probably already see a couple of problems with that, such as:

  • You end up with a lot of code to trundle through if you want to modify any of the person’s behaviour, since all of the person’s behaviour is clumped together.
  • Every single piece of behaviour is coupled to every other piece of behaviour. Physics engine code relies on audio code, which relies on the renderer code, and so on. If you want to change how the renderer works, you might need to think about how the change affects the audio engine, which is obviously counter-intuitive.
  • If you want two different types of person that differ only in one small aspect, for example they use different AI components but the rest of their behaviour is the same, you’d have to copy and paste almost all of the code and change the tiny bit that differs. You’re not exploiting the features of object-oriented programming very well.
  • It’s really difficult for a different object to hold references to some part of Player. For example, if we have an Enemy class too, the enemy probably needs to know where the player is. With components, we’d just hold a reference to the Transform component, which holds all data related to the position and orientation of the Player. But without components, all we can do is hold a reference to the entire Player and hope there’s a sensible way hidden in the tangled mess to find just the position data.

By using components, we eliminate these problems. We can use the object-oriented principle of inheritance to make two components that are mostly similar and differ only slightly, then we can use polymorphism to interact with the common portions of instances of those objects as if they were one and the same. We can hold a reference to just the AI component of a Player GameObject and be reasonably sure it has the relevant functions you’d hope to find in an AI. And best of all, you don’t need to think about the audio engine while you’re writing rendering code, since they’re nicely encapsulated in separate components.
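A tiny illustration of that inheritance-plus-polymorphism point – the class names here are made up for the example:

```cpp
#include <cassert>
#include <string>

// Two AI components share a base class, so the rest of the engine can
// treat them interchangeably. Illustrative names, not Honeycomb's classes.
class AIComponent {
public:
    virtual ~AIComponent() = default;
    virtual std::string chooseAction() = 0;
};

class PatrolAI : public AIComponent {
public:
    std::string chooseAction() override { return "patrol"; }
};

class ChaseAI : public AIComponent {
public:
    std::string chooseAction() override { return "chase"; }
};

// Code like this never needs to know which concrete AI it was handed.
std::string tick(AIComponent& ai) { return ai.chooseAction(); }
```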

It’s worth noting that organising your game this way may incur a performance hit; however, it’s a reasonable trade-off to ensure that your game is easy to program and extensible later on. This is just one of the many instances of needing to make compromises in programming. And like any programming, there are ways to minimise the performance cost or even leverage features of your hardware to your advantage.

One way in which the hardware can actively make your program more efficient when using a component-based system is by utilising the processor cache. I won’t go into a great amount of detail here, but whenever you access some address in memory, the processor will grab not only the memory it needs, but a bunch of memory adjacent to it too, and it gets put in the cache. The cache is essentially ultra-fast memory – it is built directly onto the processor chip – so as you can imagine, anything in the cache gets processed blindingly quickly.

If your components are organised in memory contiguously, when you access one component’s memory, a bunch of other components stored next to it will get pulled into the cache too; hence, by storing all of one type of component together, we can process them all super-fast when we iterate over them serially. That’s much better than having all of our behaviour lumped into monolithic classes, because in that case we’d have to pull a lot of redundant data into the cache every time we call update() on a component and we’d get a cache miss when we try to access the next component. This is also a reason you’d want to separate out the Transform component from the base GameObject class, because you might want to do something with all of the GameObjects that doesn’t involve their Transform data. By ignoring the Transform component, you can exploit the data locality of the GameObjects fully.
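As a sketch of what "storing all of one type of component together" might look like, here's a per-type pool that hands out indices as handles. This is illustrative rather than Honeycomb's actual storage scheme:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One flat, contiguous array per component type. Iterating it walks
// memory linearly, so each cache line fetched carries the next few
// components along "for free".
struct TransformData {
    float x = 0, y = 0, z = 0;
};

class TransformPool {
public:
    std::size_t create() {
        transforms.emplace_back();
        return transforms.size() - 1;  // index doubles as a handle
    }
    TransformData& get(std::size_t handle) { return transforms[handle]; }
    // The hot loop: contiguous, predictable, cache-friendly access.
    void translateAll(float dx) {
        for (auto& t : transforms) t.x += dx;
    }
private:
    std::vector<TransformData> transforms;
};
```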

This is just one of the things I’ve discovered while developing Honeycomb. Modern software development contains so many more considerations to make than I would have ever imagined! Who knew that the order in which you create objects could have such a profound impact on the overall performance of the software, or that random-access memory isn’t really random-access any more? Stuff like this makes software development complicated and ridiculous, but that’s also what makes it fun for me. This post is rather lacking in colour and is just a 1,300-word block of text – apologies for that, hopefully the next devblog has some nice screenshots and diagrams in it!


Honeycomb Engine: Dev Blog 4 ~ Plans for the Holidays

Term 1 is now over. While I still have some coursework due in at the start of next term, for the most part, I’m done with work. That means I’ll have tons of time free to get a good chunk of Honeycomb and Honeycode out of the way! I’ll just be outlining the most important plans in this post, so it’ll be a fairly short one; I wanted to get something out since I’ve been quiet for the past few weeks.


First off the bat is Honeycode, the scripting language that will accompany the engine. I’ve made a little progress with Honeycode, but perhaps not as much as I’d have liked, so the first task for the holidays will be to finish a working version of Honeycode and its compiler. I think the fun part of creating the language will be testing it out to see what actually works and what doesn’t – I have so many potential concepts I want to try out and I’m not sure where to even start. The core feature, as always, will be ease of use.

Saving and loading mechanisms

This bit might not be quite as interesting to most people, but a weird programmer type like me should find this fun – organising save data on disk. I’ll likely use JSON to save files, both in-game and in-editor. In tools like Unity, it’s often the case that the built-in saving mechanisms aren’t as robust as you’d like (for example, Unity PlayerPrefs aren’t designed for saving all game data, so you need to build a custom solution), so for Honeycomb I want to make sure a good saving system for game data ships with the engine. The in-editor saving system will only need to make sure that scene files and assets are saved properly, so I can use roughly the same sort of system for both use cases.

The Editor

This is the largest task left. The editor will be the main window in which users create games, so it’s arguably the most important remaining feature too. This task encompasses countless other small tasks, so it’s going to take a while. It’s the task I planned mostly for term 2, so any progress I make on it during the holidays is sort of a bonus; however, I think it will be crucial to get as far through the development of the editor as possible to give myself some slack during term 2 in case any major setback occurs. At the very least, I’d like to make the skeleton of the editor by the end of the holidays.

The renderer

While the renderer is somewhat functional, I want to improve it in a few ways. First of all, it can only render one object at a time currently; this is due to my relative inexperience with Vulkan, the graphics API I am using. When I figure out how to order my requests to the GPU properly, and hence get more than one model on screen, the next task is to work on the lighting engine since Honeycomb currently only supports a very broken diffuse lighting shader. The sooner I can work on fancy shaders, the better!

Honeycomb Engine: Dev Blog 3 ~ Quaternions for Rotations

Quaternions. The mere mention of these mathematical constructs would’ve sent shivers down my spine a couple of years ago. They’re really weird and complex. However, when you’re making a game engine and you have to model the rotations of objects, you’d be crazy not to use them for several reasons.

The easiest way for people to think about an arbitrary rotation in 3D space is to think of three separate rotations around the x-, y- and z-axes respectively. This way of representing a rotation is commonly known as Euler angles. The advantage is that it’s easy for someone to reason about a rotation represented in Euler angles because they’re so simple – I just rotate around x, then y, then z, it’s that easy. However, this brings a couple of problems with it too; namely, it’s inefficient to do three separate rotations for every single rotation you apply, and Euler angles can run into a problem called gimbal lock.

In a system where three gimbals each provide an axis of movement around a central point, it is possible to lose one degree of freedom by rotating one of the gimbals 90° such that its axis converges on one of the other gimbals. When this happens, the other two gimbals will rotate along the same axis. The word ‘lock’ is a tad misleading, because you can fix the problem simply by rotating the first gimbal back, but it still leads to weird behaviour sometimes. It’s best explained through an animated diagram.

Quaternions fix both of these issues. Firstly, Euler’s rotation theorem (Yes, Euler got around a lot) tells us that we can represent any rotation or sequence of rotations around a fixed point as one single composite rotation around some axis through that point. As it turns out, we can model rotations around an arbitrary axis as a quaternion!


Rotating around an axis rather than three rotations around x, y and z axes.

To understand how this works, we first need to know what a quaternion is. I’m not going to go into too much detail, because I want to keep this post relevant to how we can use them for rotations. At its most basic level, a quaternion, q, is made up of four real numbers, x, y, z and w, where:

q = w + xi + yj + zk

Here, i, j and k are the fundamental quaternion units – each is a versor, or unit quaternion. Don’t worry about what they mean too much, because your head might go nuclear trying to wrap your brain around them. What we want to do is represent x, y, z and w in a way that’s useful to us. Luckily, it’s really simple: take our axis to be a three-dimensional unit vector (a, b, c) and our angle to be θ. Then our quaternion’s real number parts are:

x = a * sin(θ/2)
y = b * sin(θ/2)
z = c * sin(θ/2)
w = cos(θ/2)
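Translated directly into code, constructing the quaternion looks like this. It's a hand-rolled sketch rather than a maths library's version; theta is in radians and the axis (a, b, c) is assumed to already be a unit vector:

```cpp
#include <cassert>
#include <cmath>

struct Quat {
    float w, x, y, z;
};

// Build a quaternion from a unit axis (a, b, c) and an angle in radians,
// matching the formulas above: (w, x, y, z) = (cos(t/2), a·sin(t/2), ...).
Quat fromAxisAngle(float a, float b, float c, float theta) {
    float s = std::sin(theta / 2.0f);
    return {std::cos(theta / 2.0f), a * s, b * s, c * s};
}

// A rotation quaternion should always have unit length.
float norm(const Quat& q) {
    return std::sqrt(q.w * q.w + q.x * q.x + q.y * q.y + q.z * q.z);
}
```

Note that the quaternion only comes out with unit length if the axis you feed in is itself a unit vector, which is why library implementations often normalise the axis first.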

Now we have a quaternion that represents an angle-axis rotation, immune from the problem of gimbal lock. To apply the rotation to a point, we treat the point as a quaternion with a zero w part and ‘sandwich’ it between the rotation quaternion and its conjugate: the rotated point is q * p * q⁻¹ (for a unit quaternion, the conjugate and the inverse are the same thing). To figure out what the quaternion is doing, we can take x, y, z and w and calculate the angle and axis from them – first of all we can find the angle:

θ = 2 * cos⁻¹(w)

Then, we can use this angle to figure out what the rotational axis – a 3D vector – is:

a = x / sin(θ/2)
b = y / sin(θ/2)
c = z / sin(θ/2)

With this in mind, it’s a little easier to reason about what quaternions are doing. Keep in mind that all these rotations are about the origin, so if you want to rotate around an arbitrary point, you’ll need to translate your object by the inverse position vector of the point, do the rotation, then translate by the position vector of the point to undo the first translation.
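In code, applying a rotation works out as the sandwich product q * p * q* mentioned above (the conjugate cancels out the unwanted imaginary parts). Here's a minimal hand-rolled version – any quaternion library would provide this for you:

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };

// Hamilton product of two quaternions.
Quat mul(const Quat& q, const Quat& r) {
    return {
        q.w * r.w - q.x * r.x - q.y * r.y - q.z * r.z,
        q.w * r.x + q.x * r.w + q.y * r.z - q.z * r.y,
        q.w * r.y - q.x * r.z + q.y * r.w + q.z * r.x,
        q.w * r.z + q.x * r.y - q.y * r.x + q.z * r.w,
    };
}

// For a unit quaternion the conjugate is also the inverse.
Quat conjugate(const Quat& q) { return {q.w, -q.x, -q.y, -q.z}; }

// Rotate the point (px, py, pz) about the origin: embed it as a "pure"
// quaternion (w = 0) and compute q * p * conjugate(q).
void rotate(const Quat& q, float& px, float& py, float& pz) {
    Quat p{0.0f, px, py, pz};
    Quat result = mul(mul(q, p), conjugate(q));
    px = result.x;
    py = result.y;
    pz = result.z;
}
```

As a sanity check, rotating (1, 0, 0) by 90° about the z-axis should land on (0, 1, 0).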


This image is about 2D translations, but the general idea still applies.

To make a rotation by compositing two different rotations, we can just multiply their respective quaternion rotations together, paying attention to the order:

new_rotation = second_rotation * first_rotation

The other great thing about quaternions is that we can blend between two rotations really easily, with more believable results than using Euler angles. Whereas we’d use a regular lerp operation (linear interpolation) to move from one rotation vector to another if we were using Euler angles, we can use the more sophisticated slerp (spherical linear interpolation) technique for quaternion-based rotations.

To imagine the difference between these two techniques, imagine you were stood on a completely flat world, whether it be in 2D or 3D, and I asked you to walk from point A to point B on that surface in a straight line at a constant speed – that’s you using a lerp. Now imagine doing the same, but on the surface of a perfectly spherical meteorite instead – of course, walking in a straight line would end up with you inside the meteorite, which is impossible, but if you walk across the surface directly to your destination, that’s what a slerp is doing. Super simple stuff!
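A minimal slerp implementation makes the idea concrete. This is the standard textbook formulation rather than Honeycomb's code, with a lerp fallback when the two rotations are nearly identical (where the maths would otherwise divide by a sine close to zero):

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };

// Spherical linear interpolation between two unit quaternions a and b,
// for t in [0, 1]. t = 0 gives a, t = 1 gives b.
Quat slerp(Quat a, Quat b, float t) {
    float dot = a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z;
    if (dot < 0.0f) {  // negate one end so we take the shorter arc
        b = {-b.w, -b.x, -b.y, -b.z};
        dot = -dot;
    }
    if (dot > 0.9995f) {  // nearly parallel: plain lerp is fine
        return {a.w + t * (b.w - a.w), a.x + t * (b.x - a.x),
                a.y + t * (b.y - a.y), a.z + t * (b.z - a.z)};
    }
    float theta = std::acos(dot);  // angle between the two rotations
    float s = std::sin(theta);
    float wa = std::sin((1.0f - t) * theta) / s;
    float wb = std::sin(t * theta) / s;
    return {wa * a.w + wb * b.w, wa * a.x + wb * b.x,
            wa * a.y + wb * b.y, wa * a.z + wb * b.z};
}
```

Halfway between no rotation and a 90° rotation about z comes out as exactly a 45° rotation about z, which is the constant-speed "walk across the surface" behaviour described above.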


The flag looks like the USB logo, but it’s supposed to be an angle-axis diagram.

That rounds off pretty much all I know about quaternions and how much I could be bothered to look up online. If I got anything wrong, please do tell me because it’s definitely possible. Remember, if quaternions start to hurt your head, which happened plenty of times to me, keep in mind the most important thing: Don’t Worry About It™. You don’t need to know them inside out to be able to use them.

Honeycomb Engine: Dev Blog 2 ~ More Vulkan and Honeycode

Since last week, I had planned to shift my focus mainly onto Honeycode, but I’ve mostly been working on the renderer some more. Yesterday I did a bit of work on the Honeycode parser, but it’s nowhere near complete yet. However, I have thought a little more about how the language will work conceptually, such as the syntax it will use and the features it will support. The idea is to make a programming language for non-programmers or beginners to programming, so I want it to be super-intuitive. I’ll detail a few of my ideas below, although it’s currently just me throwing everything at the wall and seeing what sticks.


On the plus side, I’ve now finished the set of Vulkan tutorials I was using, resulting in a really cool rendering of a model of a chalet found on Sketchfab. Now that I have a better understanding of how Vulkan operates, the next step is to get multiple different models loaded. Once I achieve that, the next step is to think up a level data format that allows me to save and load level data from disk. And once I’ve done that, I’ll need to set up a proper component-based architecture for objects. By using the Component programming pattern, it’ll allow users of Honeycomb to write their own behaviour scripts and attach them freely to game objects while keeping those sets of behaviour nicely encapsulated and decoupled from each other.


This hut is almost worth the couple of weeks spent trawling through tutorials.

I’ve been reading through Bob Nystrom’s brilliant book, Game Programming Patterns, which details many different ways of organising code for all sorts of situations; I’ll be using the patterns from this book in spades. For example, I currently have an input class that uses the Singleton pattern – a class that exposes a single, publicly accessible static instance of itself – purely because it’s the quickest way of making something globally visible (well, that and making all of the member functions and variables of a class static). I wanted to quickly add functionality to rotate the model and move the viewport, so please don’t judge me.

As pretty much every source on programming patterns will tell you, Singleton is awful for almost every case, so I will replace the input system with one that uses the Command pattern. Instead of hard-coding a function call for each input, you assign each input a command – a small object encapsulating some behaviour – and trigger it through its execute() function, which makes rebinding inputs trivial.
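For the curious, the Command-pattern version of an input handler might look something like this – the key names and the jump action are placeholders for the example, not my actual input code:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Sketch of the Command pattern for input: each key maps to a command
// object instead of a hard-coded function call, so bindings can be
// swapped at runtime.
class Command {
public:
    virtual ~Command() = default;
    virtual void execute() = 0;
};

class JumpCommand : public Command {
public:
    explicit JumpCommand(int& counter) : jumps(counter) {}
    void execute() override { ++jumps; }  // stand-in for real jump logic
private:
    int& jumps;
};

class InputHandler {
public:
    void bind(const std::string& key, Command* command) {
        bindings[key] = command;
    }
    void handle(const std::string& key) {
        auto it = bindings.find(key);
        if (it != bindings.end()) it->second->execute();
    }
private:
    std::unordered_map<std::string, Command*> bindings;  // non-owning
};
```

Rebinding a control is then just another call to bind(), with no input-handling code touched at all.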


Since the idea behind Honeycode is to be beginner-friendly, I want to minimise the number of scary features in the language. You know the sort of stuff I mean, like requiring the user to write a million lines of boilerplate code before anything actually happens. At the same time, I want it to be expressive and I need it to be easily translatable to equivalent C++ code so I can easily compile it into an executable together with the engine.

There are a few broad ideas I currently have – some may be useful, some may be scrapped later on:

  • When a user creates a new script, the code in that script is automatically inside a class that inherits Component (the base class for the Component programming pattern), but this information is hidden from the user. They just write functions and variables in the script and attach it to game objects in the editor and let the engine worry about types and stuff.
  • There are some event functions that automatically fire and are ignored by scripts that don’t bother to implement them. Those events include a start() function called once everything is set up (it’s the first user-created code to run), an update() function called every frame, an exit() function called when the level switches and a quit() function called when the game is closed. All the user needs to know is that they exist and roughly when they are called.
  • There should be simple one-function APIs for doing simple tasks and there should be minimal #includes or imports (or, get rid of them entirely and do it behind the scenes), then those simple functions will get translated to the appropriate C++ code during compilation. For example, if a user wants to save data (as a string) to a file, they should be able to write something like:
write(myString, myFileName.dat);
  • I might only have one floating point data type and one integer data type. For new users, I think it might be too confusing to immediately have to distinguish between all sorts of number types, like float, int, long, signed int, unsigned short int, char, wchar_t and who the hell knows what else is lurking in C++. At that rate, I’m surprised they didn’t throw in short long int to make everyone blink twice. I think including only float and int should suffice. In some instances during compilation, I can replace a type with a more sensible one anyway – array indexing and iteration would always use unsigned integers, for example. I’ll also keep char hanging around, because at least it’s clear what char is doing to a new programmer.
  • Memory management should be as simple and automatic as possible because C++ is confusing enough as it is without having to worry about destructors and all that jazz. I should be able to do all that behind the scenes.
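To make the "translate to C++" idea concrete, here's roughly the kind of code a one-function call like the write() example above could expand to during compilation. The function names and behaviour here are provisional sketches of my own, not a final design:

```cpp
#include <cassert>
#include <fstream>
#include <iterator>
#include <string>

// Hypothetical expansion of a Honeycode call like
//     write(myString, "save.dat");
// into plain C++ file I/O. Returns false if the file couldn't be opened.
bool write(const std::string& contents, const std::string& filename) {
    std::ofstream out(filename);
    if (!out) return false;
    out << contents;
    return true;  // stream is flushed and closed by the destructor
}

// The matching hypothetical one-function load.
std::string read(const std::string& filename) {
    std::ifstream in(filename);
    return {std::istreambuf_iterator<char>(in),
            std::istreambuf_iterator<char>()};
}
```

The user would never see the streams, iterators or destructors – just the two friendly function names.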

To make things clear, my language doesn’t exist to radically change the syntax of C++, nor is it intending to provide any fancy optimisations. It’s essentially just a metric ton of syntactic sugar to make C++ more palatable to newbies. Since I’m not aiming to squeeze the absolute most out of the hardware, I can be a bit more lax on the performance side of things as long as games are efficient enough. I have a few more ideas, but most of them are just things I’d remove from C++ rather than add.

As for the progress I’ve made regarding the lexer/parser, I’m currently at the stage where I can recognise some keywords from an input file and convert them into tokens, although it needs a lot more work before it’s finished. The next step after that is to add all those tokens into a parse tree, then I’ll need to look through that tree to make sure the syntax of the input file is valid. I’ve also settled on the file extension .hny for Honeycode files (I’ve clearly got my priorities straight when it comes to fulfilling features).
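As a flavour of that lexing stage, here's a toy tokeniser that only splits on whitespace and recognises a hard-coded keyword list. The real Honeycode lexer handles far more token types (literals, operators, punctuation), so the keywords here are just illustrative:

```cpp
#include <cassert>
#include <cctype>
#include <cstddef>
#include <string>
#include <vector>

// A toy lexer: scan the input and classify each whitespace-separated
// word as a keyword or an identifier. Keyword list is illustrative.
struct Token {
    enum class Type { Keyword, Identifier } type;
    std::string text;
};

std::vector<Token> lex(const std::string& source) {
    static const std::vector<std::string> keywords = {"if", "else", "while"};
    std::vector<Token> tokens;
    std::size_t i = 0;
    while (i < source.size()) {
        if (std::isspace(static_cast<unsigned char>(source[i]))) {
            ++i;
            continue;
        }
        std::size_t start = i;
        while (i < source.size() &&
               !std::isspace(static_cast<unsigned char>(source[i]))) {
            ++i;
        }
        std::string word = source.substr(start, i - start);
        bool isKeyword = false;
        for (const auto& k : keywords)
            if (word == k) isKeyword = true;
        tokens.push_back({isKeyword ? Token::Type::Keyword
                                    : Token::Type::Identifier,
                          word});
    }
    return tokens;
}
```

The parser's job is then to take a flat token stream like this and hang it off a tree according to the grammar rules.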

So far on my schedule, I’m supposed to have written my specification (which is complete), learned the tools I’m using (including Vulkan, although I’ve left myself an additional week for that) and started working on Honeycode (which is why I’ve been brainstorming ideas). This week, I’m supposed to have learned all the Vulkan I need to know and started on the end-of-term progress report; as I’ve finished the Vulkan tutorial ahead of time, I will wait until early December when my copy of the Vulkan Programming Guide is scheduled to get to me before I do much more Vulkan and probably work mainly on Honeycode instead.

Honeycomb Engine: Dev Blog 1 ~ Working with Vulkan

Vulkan is a modern graphics API developed by the Khronos Group, the same group that maintains OpenGL, designed to bring high-efficiency graphics and computation to current generations of graphics cards. As mentioned in my previous post, I’m planning to use Vulkan for Honeycomb’s renderer, and as such I’ve been spending the best part of the last week learning the ins and outs of how it works. Believe me when I say it’s verbose; imagine if, to do your shopping, you had to go to the shop and buy each item in turn, go home, put the item away and then go back to the shop to get the next item – that’s what developing a Vulkan application often feels like. The upside is that the programmer has much more time to consider what the best items to buy are.

I won’t go into much detail about the inner workings of Vulkan, but as a broad explanation it’d suffice to say it’s like OpenGL without hand-holding. Not only do you have to create shaders and tell the GPU which vertices to render, which you’d of course have to do in OpenGL, but you have to deal with all the memory management – creation and safe deletion of buffers and queues, all of the device checks to determine if your hardware is capable of what you need and all of the semaphores and locks related to the asynchronous nature of how Vulkan processes things.

On top of that, whereas OpenGL gives you a graphics pipeline with predefined operations, some of which are programmable (a vertex array feeds into a programmable vertex shader, followed by rasterization and so on), Vulkan requires the user to create the pipeline from scratch. This cuts a ton of overhead by removing unused stages of the pipeline, but requires much more elbow grease on the programmer’s part. I really like the customisation and can clearly see how a well-behaved Vulkan program would end up more efficient than a similar OpenGL 4.5 or DirectX 11 program.

After about a week of following this amazing tutorial on and off, and after 1000-or-so lines of code, I finally reached a crescendo of joy when this popped up on my screen:


It’s the most beautiful and efficient triangle I’ve ever seen.

Now that I have something rendered to screen, it’s much less of a jump to get more stuff on the screen than the journey to obtaining this triangle was. I’m using GLFW which handles all of the windowing for me, but all that does currently is remove the hassle of integrating my program with each platform’s native windowing system. If you’re planning to use Vulkan in your own project, be prepared to put in a lot of effort, but be excited when your application runs like silky-smooth butter.

One last thing I noticed – whereas OpenGL tends to fluctuate around 60fps when using Vsync (on my monitor at least), Vulkan tends to stick at exactly 60fps when left alone. It must just be very comfortable sat where it is. On the flipside, dragging the window around or switching programs tends to lag like hell to the point even my music stutters, although the actual program still reports 60fps so it could just be X11. I’ll report back if I ever find the solution to that issue.


Dat gradient, yo.

This week I’m scheduled to continue work on the Vulkan stuff, but I’m also supposed to be working on Honeycode. I’ll likely talk about Honeycode a bit more in my next post as I’ve only brainstormed ideas for the syntax and engine integration so far.

PS. I originally planned to write about 1,000 words for this post to match the verbosity of Vulkan, but I decided that might be annoying to read. Have 625 instead.

I’m Making a Game Engine

It’s sure been a while since I last posted, well, anything. Part of that was me getting reacquainted with uni life, part of it was me adjusting to a real-person sleep cycle and part of it was me thinking about the game engine this post will be about. But most of it was just me being lazy because my last post was in early July, about three months ago. That’s my longest disappearance yet. Oops.

However, since I was last here, a lot has happened! As part of my third year at uni (I’m already halfway through, where does the time go?!), I have to do a big project about something to do with computer science; I’ve decided to make a game engine, which has resulted in some people thinking I’m a tad insane. I’m inclined to disagree – doing it will make me insane. Without further ado, introducing my concept: the Honeycomb Game Engine!


At some point I will vectorise this image.

The basic idea is that Honeycomb will be geared towards people who don’t have all that much experience in game design or programming (or both!). The way I will achieve that is by turning the idea of a game engine on its head. Some game engines try to give the developer tons of tools for any imaginable situation, but I think this is overwhelming for a new user – that’s how I felt way back when I started using Unity, anyway. Instead of following in those footsteps, I’m going to try and condense my game engine down to easy-to-use key features and avoid adding anything that’s unnecessary. Essentially it will lie somewhere between Unity and Game Maker.

In my opinion, a beginner-friendly game engine should have most of the following features:

  • A clean interface with easy-to-understand buttons and menus for common tasks;
  • A way to add behaviour to game objects that is intuitive and doesn’t require writing tons of boilerplate code (a simple scripting language, for example);
  • A room-building tool that lets you add game objects to a room and snap them to a grid easily;
  • Support for common 3D modelling packages, such as Blender or Maya. I think supporting .fbx files is sufficient for this;
  • A big, red button in the corner of the editor that donates an energy drink to an exhausted game engine developer somewhere in the world;
  • One-button exports to the platforms the engine supports. Users shouldn’t have to sift through mile-long lists of options for their exports;
  • A UI designer toolkit that lets users place buttons, text elements and sliders on a virtual ‘screen’ that then gets overlaid onto the running game;
  • A ‘play’ button that lets users run their game directly in the editor, or at least build the game quickly and provide debugging feedback to the user. This is one of the most difficult things I will try to implement (it’s a stretch goal);
  • A material editor that lets users plug in values and textures for various parameters. Then users can drag those materials onto models in their game;
  • An animation toolkit that lets users control how models are animated and how different animations work together;
  • Coroutines. Holy hell, they are so useful in Unity while writing behaviour.
  • Many many more things that I’ve inevitably missed out!

Some of these features obviously require more work than others. For example, I plan to create a scripting language (called Honeycode, because I’m an awful person) that will get compiled to C++. The advantage of creating my own scripting language is that I can abstract some of the less beginner-friendly features of C++ and ignore some less useful ones entirely, while making assumptions about what Honeycomb provides. This way, I can bake engine integration directly into the language, similar to how the Unity API augments C# to provide Unity’s features.

Essentially, my motivation for the engine is to create something that new users can just pick up and start making things with, without having to spend hours with setup and tutorials. A user should be able to drag in a default character controller and immediately have working character input and movement, the whole shebang. They should then be able to easily add behaviour to objects in the scene in a way that feels natural – telling a sheep to jump 17 feet in the air when the player approaches should be a simple line or two of code. Obviously sheep don’t jump that high though, it’d be terrifying if they did. Or baaaaad.

So that’s my approach. I’m going to add in the absolute basic features that a new user will want or need and let them build everything else on top of that framework. To aid that, I’ll also try and write a bunch of tutorials and sample code to guide users into writing the more complex and weird stuff that I don’t think needs to be baked into the engine itself. That way, it’s not as if I completely neglect the existence of the more useful features, but I don’t just provide them for the sake of making my engine do your tax returns and make toast for you while you’re working.

One of the more exciting things about my project is that I’m going to try and use Vulkan, the shiny new graphics API by the Khronos Group. It’s sorta like OpenGL++, but not really. Khronos basically took the remnants of AMD’s Mantle API and created a cross-platform graphics API for the modern age, which allowed them to make different design decisions to the ones made during OpenGL’s development. As a result, the API provides much lower-level access to hardware, but this makes everything so much more verbose; it’ll be a challenge to implement Vulkan support, but I’m confident I can get it working with a little elbow grease. By that, I mean a lot of coding all-nighters. And hey, if it doesn’t work, I can always fall back to OpenGL.

I’ll be returning to this blog every now and then to document my progress, and when the time comes, upload working downloads for people to try out the engine. That’s a very far way off currently though! I hope this whirlwind tour of my plans makes sense, although I’ve probably missed some stuff out, so let me know if anything is unclear. It’d actually be hugely useful for me to know what people’s most requested features or major gripes about their choice of game engine or development tool are, so feel free to bellyache and rant in the comments section about something your game engine does that you think could be improved!