Obviously games aren’t magic. But they can look like it when you don’t know what’s going on behind the scenes. That was highlighted recently when this GIF of Horizon Zero Dawn appeared, showing how the game engine only renders (makes, effectively) what you’re looking at.
For a lot of people, this dynamic world creation was mind-blowing to see in action. Oddly, however, rather than being happy anyone was taking an interest, quite a lot of developers seemed sniffy that your average teenage gamer was ignorant of ‘frustum culling’. So let’s fix that and take a look at a few interesting technical things that most games do to make themselves look pretty. Or even just work at all, starting with Horizon’s magic camera.
Culling is all about cutting things out, and in games it means reducing the amount of work an engine has to do by only rendering (or making) what the player can see. It makes a lot of sense: there isn’t a console or PC alive that could process the entire world of Horizon at once, so this cuts down the workload by ignoring anything that doesn’t fall into the camera’s view.
There are two main types – frustum culling, which only renders what falls inside the camera’s field of view (its viewing ‘frustum’), and occlusion culling, which goes a step further and only renders what the camera has a direct line of sight to, skipping anything hidden behind something else.
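If you’re curious what that check actually looks like, here’s a toy Python sketch of a 2D frustum test – the camera position, facing angle, field of view, far distance and object positions are all made up for illustration, and a real engine does this in 3D with proper plane tests:

```python
import math

def in_frustum(cam_pos, cam_dir_deg, fov_deg, far, obj_pos):
    """Return True if obj_pos falls inside the camera's 2D view cone."""
    dx = obj_pos[0] - cam_pos[0]
    dy = obj_pos[1] - cam_pos[1]
    if math.hypot(dx, dy) > far:   # beyond the far plane: cull it
        return False
    angle_to_obj = math.degrees(math.atan2(dy, dx))
    # Smallest signed difference between the object's angle and where we're facing
    diff = (angle_to_obj - cam_dir_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

# Camera at the origin, looking along +x, with a 90-degree field of view
objects = {"tree": (10, 2), "rock": (0, 10), "hut": (500, 0)}
visible = [name for name, pos in objects.items()
           if in_frustum((0, 0), 0, 90, 100, pos)]
print(visible)  # only the tree is in front of the camera and close enough
```

Occlusion culling would then take the survivors of this test and throw out anything hidden behind something else.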
Here’s a GIF that shows the two in action. As you can see, the player view in the bottom right shows the same view each time.
Level of detail
Another thing a game can do to save precious resources is to only draw the details on things close up. If you ever see anything ‘pop’ in while playing you’re seeing a transition from one level of detail to the next. Take a look at these bunnies:
Obviously the first one looks the best, while the last looks like some kind of horrible origami accident. However, now let’s look at the same models over a distance:
That last bunny doesn’t look too bad when it’s that far away, and as a considerably simpler shape isn’t as hard to make. Whatever you’re playing on only has so much power to draw the world, animate stuff, make mist look pretty and so on – so steps like this help conserve that power for where it’s best used. Even so, large worlds or poorly optimised games don’t always blend between different levels of detail, which is when things ‘pop’.
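In code, picking a level of detail can be as simple as checking distance bands. Here’s a toy Python sketch – the bunny meshes and the distances are invented for illustration:

```python
# Hypothetical distance bands: each entry is (max distance, mesh to draw)
BUNNY_LODS = [
    (20.0, "bunny_high"),         # full detail up close
    (60.0, "bunny_medium"),       # simplified mid-range mesh
    (float("inf"), "bunny_low"),  # the 'origami accident' for the horizon
]

def pick_lod(distance, lods=BUNNY_LODS):
    """Return the first mesh whose distance band covers the object."""
    for max_dist, mesh in lods:
        if distance <= max_dist:
            return mesh

for d in (5, 40, 300):
    print(d, pick_lod(d))  # high up close, low in the distance
```

Blending between bands, rather than switching instantly as this sketch does, is what stops the ‘pop’.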
Rays, or ‘ray casting’, are a fundamental way a game finds out where things are in its world. If you’ve ever heard mention of The Last of Us’ ‘punch lasers’, those were rays. The name is almost an explanation in and of itself – an object casts out rays (draws a number of lines in certain directions) and, if they hit anything, they report back.
Here’s Jacob Minkoff, the then lead designer on The Last of Us, explaining how those ‘punch laser’ rays work practically when Joel is fighting: “When you push the punch button, it fires a bunch of rays around the game world, and says ‘What’s nearby?’. Oh, there’s a wall nearby. Or there’s a desk nearby. So if you press the punch button and make contact, it will smash this guy against the wall.”
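Here’s a toy Python version of that ‘What’s nearby?’ question – the obstacles, the reach and the ray-marching step size are all made up, and a real engine uses proper intersection maths rather than stepping along each ray:

```python
import math

# Made-up obstacles: name -> (x, y, radius)
OBSTACLES = {"wall": (3.0, 0.0, 1.0), "desk": (0.0, -4.0, 1.5)}

def cast_ray(origin, angle_deg, max_dist, step=0.1):
    """March along the ray; return (name, distance) of the first hit, or None."""
    ox, oy = origin
    dx = math.cos(math.radians(angle_deg))
    dy = math.sin(math.radians(angle_deg))
    for i in range(int(max_dist / step) + 1):
        d = i * step
        x, y = ox + dx * d, oy + dy * d
        for name, (cx, cy, r) in OBSTACLES.items():
            if math.hypot(x - cx, y - cy) <= r:
                return name, round(d, 1)
    return None

# 'What's nearby?' – fire rays in eight directions around the player
hits = {angle: cast_ray((0, 0), angle, 5.0) for angle in range(0, 360, 45)}
print({angle: hit for angle, hit in hits.items() if hit})
# reports the wall off to the right and the desk below
```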
Another good example of rays from The Last of Us is how the game decides where Ellie should stand in relation to Joel.
This image shows lines fired out from Joel to find potential places for Ellie to stand. Other lines then answer various questions: is a spot blocked by an object? Can Ellie see ahead, or look at Joel? And so on. Locations that tick all the boxes show up as green lines, with the red lines showing places Ellie can’t go. When characters end up inside walls, this is the bit that has gone wrong.
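A toy Python sketch of that spot-picking logic might look like this – the blockers and candidate spots are invented, and the ‘can see’ check just walks the straight line between two points:

```python
import math

# Made-up blockers: circles of (x, y, radius)
BLOCKERS = [(2.0, 0.0, 1.0)]

def blocked(p):
    """Is the spot itself inside an obstacle?"""
    return any(math.hypot(p[0] - cx, p[1] - cy) <= r for cx, cy, r in BLOCKERS)

def can_see(a, b, step=0.05):
    """Walk the straight line from a to b; fail if it clips an obstacle."""
    for i in range(1, int(1 / step)):
        t = i * step
        p = (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)
        if blocked(p):
            return False
    return True

joel = (0.0, 0.0)
candidates = [(3.0, 0.0), (0.0, 2.0), (-2.0, 0.0), (0.0, -2.0)]
# 'Green' spots pass every check; everything else is a red line
green = [c for c in candidates if not blocked(c) and can_see(joel, c)]
print(green)  # the spot inside the blocker is rejected
```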
I’ve not gone into huge amounts of depth on anything so far, but I’m definitely going to skim this part, because there are so many variations and options. The key thing is that what you see on screen doesn’t appear in one go. A game isn’t filming a virtual world and relaying it directly to your eyes; it’s creating a flat representation of it in layers, with various passes for different characteristics and details.
Here’s a GIF showing how Killzone 2 renders a scene in layers, each one calculating or handling a different element. A good analogy would be to think of each pass working out light, shadow, colour, texture, and so on, creating a flat 2D image of what the game’s simulating.
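Here’s a toy Python sketch of that final combining step – three invented 2×2 ‘passes’ multiplied together, pixel by pixel, into one flat image. Real engines combine far more passes, in full colour, on the GPU:

```python
# Toy 2x2 'passes': each grid holds one value per pixel
albedo   = [[0.8, 0.2], [0.5, 0.9]]   # base colour (greyscale for simplicity)
lighting = [[1.0, 0.5], [0.2, 1.0]]   # how much light reaches each pixel
shadow   = [[1.0, 1.0], [0.0, 1.0]]   # 1 = lit, 0 = in shadow

def compose(w, h):
    """Combine the passes into the final flat image, one pixel at a time."""
    return [[round(albedo[y][x] * lighting[y][x] * shadow[y][x], 2)
             for x in range(w)] for y in range(h)]

print(compose(2, 2))  # [[0.8, 0.1], [0.0, 0.9]]
```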
This is also where all the post-processing stuff comes in. As the name suggests, that’s stuff that comes after all the rendering grunt work of basic shapes of bodies and buildings – things like motion blur and particle effects. Many of these passes are where various shaders and lighting effects happen, which brings us neatly on to…
There are so many types of shader that we’re going to focus on just vertex and pixel shaders to keep things simple, and show the two main effects these things can manage – changing the lighting of pixels in a game, or their apparent position.
Pixel shaders usually affect colours and lighting, but can also affect the apparent position of pixels depending on that light. Take a look at this:
It shows a plain ball, a bump map (which is basically a black and white texture), and then the two combined, showing how that black and white speckle has been turned into an orange-skin-like finish.
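Here’s a toy Python sketch of the idea – a made-up one-dimensional strip of bumps, where each little slope tilts the surface normal and so catches more or less of the light:

```python
import math

# Made-up 1D 'bump map': heights along a strip of surface
heights = [0.0, 0.2, 0.0, 0.2, 0.0]

def shade(heights, light_dir):
    """Lambert shading with normals perturbed by the bump map's slope."""
    lx, ly = light_dir
    norm = math.hypot(lx, ly)
    lx, ly = lx / norm, ly / norm
    out = []
    for i in range(len(heights) - 1):
        slope = heights[i + 1] - heights[i]
        # The surface normal tilts against the slope of the bump
        nx, ny = -slope, 1.0
        n = math.hypot(nx, ny)
        brightness = max(0.0, (nx * lx + ny * ly) / n)
        out.append(round(brightness, 2))
    return out

# Light shining down and slightly from the left: the bumps
# alternate bright and dark, giving that speckled finish
print(shade(heights, (0.5, 1.0)))
```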
That’s the simple version. Here’s what it looks like in a game like The Order: 1886:
Each one of those little balls shows a texture map that adds to the finish of the water pump. It’s a way of adding fine detail to a simple model without using loads of resources. The pump is a complex geometric shape where every point has to be calculated; the shaders are just a skin stretched over it.
Here’s another example showing normal mapping.
The fine detail added by the normal map on the right doesn’t use any more polygons – it’s still the same model on the left – it’s just a visual effect, creating something that’s far less resource intensive but still looks complex. For the most part, if you ever see anything in a game that looks flat, muddy and like it hasn’t loaded, it’s because the shaders haven’t worked.
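Normal maps store those surface directions as colours. Here’s a toy Python sketch of the common convention for decoding them – each 0–255 RGB channel maps back to an axis in the -1 to 1 range (conventions vary between engines, so treat this as illustrative):

```python
def decode_normal(r, g, b):
    """Map 0-255 channel values back to a direction in the -1..1 range."""
    return tuple(round(c / 255 * 2 - 1, 2) for c in (r, g, b))

# The flat lilac-blue that dominates most normal maps, (128, 128, 255),
# decodes to a normal pointing straight out of the surface
print(decode_normal(128, 128, 255))  # (0.0, 0.0, 1.0)
```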
Another important type of shader is a vertex shader. These are more focused on physically moving vertices – the points that define a model’s shape – and if you’ve ever looked at water in a game you’ve probably seen one in action. It’s far easier to render water as a flat surface and apply a shader afterwards to move it than it would be to actually animate it. But it’s not just water; here’s a fantastic example of how Naughty Dog used a shader to animate dense vegetation in Uncharted 4:
While it was technically possible to individually animate each plant (and an idea that was considered), it would have required a huge amount of processing to detect collisions, calculate reactions, directions of movement and so on. Instead a shader applies deformation to the plants’ vertices, creating the same effect without all the effort.
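Here’s a toy Python sketch of that kind of vertex deformation – a made-up ‘blade of grass’ whose points are pushed sideways by a sine wave, with points higher off the ground swaying further. This is nothing like Naughty Dog’s actual shader, just the principle:

```python
import math

def sway(vertices, time, strength=0.3):
    """Displace each vertex sideways with a sine wave; taller points sway more."""
    out = []
    for x, y in vertices:
        offset = math.sin(time + x) * strength * y  # y is height off the ground
        out.append((round(x + offset, 2), y))
    return out

# A hypothetical blade of grass: the root stays planted, the tip moves the most
blade = [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0)]
print(sway(blade, time=1.0))
```

Run every frame with an increasing `time` value, this makes the whole field ripple without any physics being simulated at all.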
And that’s it for now. This is obviously only a tiny fraction of what happens inside a game like Prey or Overwatch, and a gentle introduction at that, but hopefully it should help you get a grasp on what’s going on behind the scenes. A single beam of light reflecting off an object, from a source and into your eyes, requires more maths than you’d ever likely see in an exam. And games have to do that for every object, every light beam, and do physics, culling, rays, LODs and so much more – all while you’re yanking the controls left, right and centre without warning. When framerates drop, or people glitch through walls inside out without a face, it’s likely because, with so much happening, something has occasionally misplaced a decimal. And, when you start to look at how all this pulls together, you have to marvel that any of it works at all.
Any gaming magic you’d like to know more about? Let me know in the comments and I’ll do my best to explain it (or ask someone who can).