Thursday 15 October 2015

The invisible world of video game AI

It’s commonly believed that video game AI has barely improved since the 1990s, but is this true? Rick Lane investigates

The years between 1997 and 2001 are often considered a golden age of video game AI, encompassing a spate of fascinating games that appeared to make giant leaps forwards in how players could interact with non-player agents and vice versa.


The touchstones are all fondly remembered. Stealth games such as Thief introduced behaviour states that went beyond the straightforward, binary choice of passive or aggressive, which was previously the norm. Social simulations such as Dungeon Keeper and The Sims brought us NPCs with simulated emotions who interacted with each other and affected one another’s behaviour. Most famously of all, Lionhead’s Black & White AI could apparently learn new skills and employ them in ways that reflected a particular personality.

This cluster of games so rapidly expanded our understanding of the capabilities of game AI that it seemed as though future games would be AI-driven, but that wasn’t the case. Aside from a handful of exceptions, such as Monolith’s F.E.A.R. and Paradox’s Crusader Kings series, game AI still appears to rely on the basic principles established in the late 1990s and early 2000s. Is this really what’s happened?

‘No,’ says Hugo Desmeules, lead AI designer on Ubisoft’s Far Cry series. ‘I remember the games we were making in 1999, when we were programming enemies with simple brains and patterns. The technology at that time didn’t allow us the freedom we have now, which translates into sampling the environment with heavy physics simulation, tonnes of math processing and a great deal of objects in memory.

AI IN AN OPEN WORLD


‘Today you can have a gigantic navigable world in a game such as Far Cry 3, with a full day/night cycle, a whole bunch of enemies and vehicles at the same time, fire simulation, real physics and a lot of persistency. Those ingredients, when gathered together, have pushed the process of building AI brains to a whole new level.’

The Far Cry series provides an interesting case study in terms of how computer game AI has and hasn’t evolved. From Far Cry 2 onwards, the player explores vast open worlds pockmarked by enemy encampments. A major part of the game involves liberating those encampments from the AI’s grasp through a combination of stealth and gunplay.

On a fundamental level, the AI techniques that Ubisoft employs are little different from those seen in Thief. ‘The AI brain has three main states – idle, alert and combat,’ says Desmeules. These behaviour states, organised into what AI developers call a finite state machine, are essentially identical to those first explored in Thief. Enemies stick to set patrols while idle, switch to searching for the player when alerted and attack the player when in combat.
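
As a rough illustration – not Ubisoft’s or Looking Glass’s actual code – a three-state brain of this kind can be sketched in a few lines. The state names come from Desmeules’ description; the stimuli and the Python structure are invented for clarity.

```python
# Minimal sketch of a three-state guard brain (idle / alert / combat).
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    ALERT = auto()
    COMBAT = auto()

class GuardBrain:
    def __init__(self):
        self.state = State.IDLE

    def update(self, sees_player, heard_noise):
        # A finite state machine: the next state depends only on the
        # current state and a handful of simple stimuli.
        if self.state == State.IDLE:
            if sees_player:
                self.state = State.COMBAT
            elif heard_noise:
                self.state = State.ALERT
        elif self.state == State.ALERT:
            if sees_player:
                self.state = State.COMBAT
            elif not heard_noise:
                self.state = State.IDLE
        elif self.state == State.COMBAT and not sees_player:
            self.state = State.ALERT

    def act(self):
        # Each state maps to one behaviour: patrol, search or attack.
        return {State.IDLE: "patrol",
                State.ALERT: "search",
                State.COMBAT: "attack"}[self.state]
```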

At the same time, the number of eventualities the AI needs to accommodate has expanded enormously. To start, the game world is now open, with both players and AI characters able to wander in almost any direction. While Thief’s AI characters had to navigate through fairly restricted environments of rooms and corridors, Far Cry’s AI pathfinding must account for hills, rocks, trees, rivers, vehicles, buildings, interiors – the list goes on.
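
In the abstract, that kind of terrain-aware navigation can be pictured as a cost-based search, as in the toy sketch below. Real engines typically use navigation meshes and far richer world data; the grid, terrain costs and uniform-cost search here are invented purely to illustrate why varied terrain multiplies the work the AI has to do.

```python
# Toy pathfinding over open terrain: a uniform-cost search on a grid
# where different terrain types carry different movement costs.
import heapq

COST = {"grass": 1.0, "hill": 2.5, "river": 6.0, "rock": float("inf")}

def find_path(grid, start, goal):
    frontier = [(0.0, start, [start])]      # (cost so far, position, path)
    seen = set()
    while frontier:
        cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        x, y = pos
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]):
                step = COST[grid[ny][nx]]
                if step != float("inf"):    # rock is impassable
                    heapq.heappush(frontier,
                                   (cost + step, (nx, ny), path + [(nx, ny)]))
    return None

grid = [["grass", "hill",  "grass"],
        ["grass", "river", "grass"],
        ["grass", "grass", "grass"]]
print(find_path(grid, (0, 0), (2, 2)))      # routes around expensive terrain
```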

And it isn’t simply navigation that’s more complicated. ‘There are so many emergent situations that the NPC brain needs to be a lot more complex,’ Desmeules says. ‘Simply take the example of an AI [character] completely surrounded by fire near a river. A simple evolution towards smarter AI would be to teach them how to swim. In a more linear game, you would probably be able to prevent this situation by not allowing the player to have a flamethrower, but in an open-world game such as Far Cry 3, the player can do pretty much anything they want.’

Indeed, the demands of both the player and the environment on the AI are so great that it’s tremendously difficult to take all the different potential scenarios into account. ‘The way we structure the AI code is crucial and can’t be a sum of special cases everywhere, so the hardest part is to keep a simple code/data model where it’s easy to scale the AI brain,’ says Desmeules. ‘To know we’ve done it right, we can test it by adding a feature such as swimming with the desired complexity, then creating a new animation, recording new dialogues, new particle effects and so on.

‘The brain part [of the AI] should already be generic enough to support navigation from A to B, and react to bullet hits and state transitions. Only the data used during this action should be different. Maintaining this philosophy isn’t easy and it’s a daily battle.’ Yet although Far Cry 3’s AI characters can swim, drive vehicles and call in support from other bases, among many other abilities, from the player’s perspective they don’t seem much more intelligent than the guards in Thief.
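
The philosophy Desmeules describes – a generic brain driven by interchangeable data – can be pictured with a deliberately simplified sketch. The ability table and NPC interface below are hypothetical, not Ubisoft’s code; the point is that adding ‘swim’ means adding data, not new brain logic.

```python
# Hypothetical data-driven abilities: the brain's movement code never
# changes; each ability only supplies different data.
ABILITIES = {
    "run":  {"animation": "run_cycle",  "speed": 5.0, "effect": "dust"},
    "swim": {"animation": "swim_cycle", "speed": 2.0, "effect": "splash"},
}

class NPC:
    def move_to(self, destination, ability):
        data = ABILITIES[ability]          # look up data, not new code
        print(f"playing {data['animation']}, spawning {data['effect']}")
        print(f"pathing to {destination} at speed {data['speed']}")

NPC().move_to((120, 40), "swim")           # supporting swimming is a data change
```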

The reason is that AI development in video games is a knotty subject that depends upon a huge variety of factors. To start, the significance of AI in a game depends on the type of game you want to make and whether there are established AI techniques that cater to that type of game. There are many well-established game genres now, and most have tried-and-tested ways of implementing AI that suit them. For example, when Thief launched, the ‘alert’ state crucial to making a stealth game work didn’t exist.

This innovation was part of what led to Thief being viewed as such a groundbreaking game. Far Cry requires this state to function too, but it doesn’t need to invent it. And although it adds dozens of smaller details on top of this framework, we don’t notice the improvements because they’re subtle iterations on an existing idea.

For players, perceiving these subtle improvements is far harder than in other areas of game development, such as graphics or animation, where every change is directly visible on the screen. As such, we only pay attention to a game’s AI when the game deliberately draws our attention to it.

INSIDE THE XENOMORPH BRAIN


A recent example is the Creative Assembly’s sci-fi horror game Alien: Isolation, which places great emphasis on the intelligence of the game’s eponymous Xenomorph. The technology that lends the alien its apparent intelligence is certainly impressive. The Alien appears in the game dynamically, and its behaviour is dependent on various parameters that tell it which areas of the environment to search, for how long and whether or not it ‘knows’ the player is in a particular area.

‘If you follow someone into a small room, you have a good idea where they are and that you should search the room thoroughly,’ says Clive Gratton, technical director on Alien: Isolation. ‘On top of this AI, we also have a meta-layer that makes more strategic decisions – when and where to appear. This layer also makes decisions about how to handle the player if they have a flamethrower and so on.’
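
A speculative sketch of that two-layer structure might look like the following. The menace value, thresholds and behaviour here are invented for illustration – this is not Creative Assembly’s implementation, only the general shape of a creature layer steered by a strategic meta-layer.

```python
import random

class Creature:
    # The creature layer: searches an area according to simple parameters.
    def search(self, area, thoroughness, knows_player_here):
        time_budget = 30 * thoroughness * (2 if knows_player_here else 1)
        print(f"searching {area} for roughly {time_budget:.0f} seconds")

class MetaLayer:
    # The meta-layer: decides when and where the creature should appear.
    def __init__(self, creature):
        self.creature = creature
        self.menace = 0.0                   # rises while the player evades

    def update(self, player_area, player_has_flamethrower):
        self.menace += 0.1
        if player_has_flamethrower:
            self.menace *= 0.5              # back off and stalk instead
        if self.menace > 1.0:               # time to reappear near the player
            self.creature.search(player_area, thoroughness=0.8,
                                 knows_player_here=random.random() < 0.5)
            self.menace = 0.0
```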

In other words, the Alien receives a huge amount of support from the game itself in making it appear intelligent. The game’s excellent modelling, animation and sound design lend the Alien a terrifying presence when it’s in the same room as you, and the levels are constructed from small rooms and corridors to minimise glitches or pathfinding problems. Finally, Sevastopol Station’s ventilation system enables the Alien to seamlessly traverse environments, popping in and out of the game world without breaking the immersion.

Whereas Far Cry uses its AI to facilitate a specific idea, Alien: Isolation builds its entire game around its AI, and that’s what makes its Xenomorph such a powerful portrayal of predatory cunning. The Alien appears intelligent because the developers work hard to convince you of this fact through every aspect of the game, while minimising the number of factors that the Alien itself needs to think about.

This is why it’s difficult for gamers to gauge the extent of AI’s progression. From the developer’s perspective, there’s no point reinventing the wheel, and what’s important is that the AI feels convincing and makes for entertaining play, rather than its actual ‘intelligence’. From the player’s perspective, the desire for more ‘intelligent’ AI is conflated with the desire for innovative AI – different behaviour that we haven’t experienced before.

SIMULATING SOCIAL AWKWARDNESS


Although it’s difficult to state clearly that game AI is becoming more ‘intelligent’, it’s certainly becoming more detailed. A fine example is Simon Roth’s Maia, a space colony simulator in which you oversee a team of researchers on a verdant but hostile alien planet. Maia is inspired by previous social simulations such as The Sims and Dungeon Keeper, with the aim of making its colonists and their interactions as detailed and granular as possible.

‘To start, every colonist has over 50 base desires, ranging from those attached to bodily functions, hunger, thirst, fatigue and so on, to higher-level emotional wants such as social contact and a wish to express themselves,’ says Roth. ‘They also build their own model of the base’s requirements, safety, security, food production and so on, so they can plan workloads in an efficient manner.’

According to Roth, objects in the game world then ‘advertise’ their ability to satisfy specific needs. Any colonist or creature that interacts with an object will retain some knowledge of its effectiveness, which influences their likelihood of using it in the future. ‘This system also lets objects misleadingly advertise their usefulness, allowing for items such as animal traps, which offer food but don’t provide it,’ he adds.
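
A heavily simplified sketch of this advertisement pattern is below. The numbers, object names and utility formula are invented; the idea is simply that objects advertise which needs they satisfy, and each colonist weighs those adverts against its own needs and its remembered experience of the object.

```python
class WorldObject:
    def __init__(self, name, adverts):
        self.name, self.adverts = name, adverts    # e.g. {"hunger": 0.9}

class Colonist:
    def __init__(self, needs):
        self.needs = needs            # 0 = satisfied, 1 = desperate
        self.memory = {}              # object name -> learned trust factor

    def choose(self, objects):
        def utility(obj):
            trust = self.memory.get(obj.name, 1.0)   # gullible by default
            return trust * sum(self.needs.get(need, 0) * strength
                               for need, strength in obj.adverts.items())
        return max(objects, key=utility)

trap = WorldObject("animal_trap", {"hunger": 0.9})   # advertises food it won't give
stove = WorldObject("stove", {"hunger": 0.7})
colonist = Colonist({"hunger": 0.8, "fatigue": 0.3})
print(colonist.choose([trap, stove]).name)   # falls for the trap until memory says otherwise
```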

Alongside needs, colonists also have emotions and moods that affect (and are affected by) the satisfaction of needs and interactions with other colonists. ‘Currently, the colonists have basic social interactions where they can wave to, talk to, hug, threaten or assault each other. This fulfils their desire for social interaction (attention), but also feeds back directly into their emotional simulation. Their moods rub off on each other, so an irritated colonist may end up making your whole base annoyed or upset as the interchanges go sour.’
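
That feedback loop can be imagined as something like the toy example below, where a fraction of each colonist’s mood rubs off on the other during an interaction. The blend factor and mood scale are invented, not Maia’s actual values.

```python
def interact(mood_a, mood_b, blend=0.2):
    """Moods run from -1 (furious) to +1 (content); each interaction
    pulls the two participants' moods towards one another."""
    new_a = mood_a + blend * (mood_b - mood_a)
    new_b = mood_b + blend * (mood_a - mood_b)
    return new_a, new_b

irritated, cheerful = -0.8, 0.4
for _ in range(3):                       # a few exchanges later...
    irritated, cheerful = interact(irritated, cheerful)
print(round(irritated, 2), round(cheerful, 2))   # both moods drift towards each other
```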

One of the most important aspects of Maia is that every AI agent is individually simulated. There is no ‘higher-level’ AI that connects or oversees them. As Roth puts it, the colonists can’t read each other’s minds. Roth gives the example of two colonists who both need to use the same item, each unaware of the other’s intention. ‘If they arrive at an item already in use, they can choose to wait in the room if they can withstand the social awkwardness, but they may instead decide to use a different object to save themselves the embarrassment.’

This level of detail in individual, autonomous agents simply wouldn’t have been possible 15 years ago – the demand on the CPU would have been intolerable. However, Roth says the bigger problem is conveying the thoughts and emotions of colonists effectively to the player. ‘In recent builds, we’ve been adding a lot of hammed-up animations, expressing a wide range of mixed emotions, but often the cause of a mood isn’t as clear as it needs to be,’ he says. Again, our understanding of what makes a good game AI system is as much about our perception of it as players as it is about the algorithms developers write in their code.

MACHINE LEARNING


The notion that AI ceased to progress around the year 2000 is a myth. That said, there’s one area of AI where games have stagnated – machine learning, where AI is capable of learning new skills and then applying them autonomously in a given environment. In the entirety of gaming history, only a handful of games have dipped their toes into machine learning – the most famous being Black & White. Not surprisingly, Black & White is frequently touted as the best AI-driven game of all time.

Since Black & White, the game industry has barely touched machine learning, and most of its applications are consigned to academic circles. The most notable recent crossover came in 2012, when a team from the University of Texas won that year’s Unreal Tournament ‘BotPrize’ with a bot so lifelike that testers were unable to tell whether or not it was human.

The team was put together by AI researcher Risto Miikkulainen. It tracked data on how human players navigated UT’s maps, and the bots were taught to apply those navigation techniques independently via a technique called ‘evolutionary computation’.

‘The idea was that first we would try to evolve the best possible combat behaviour,’ says Miikkulainen. ‘We’ll have a bunch of bots in the level, play with and against them, and the best versions of the bots, the best neural networks, will survive and be passed on.’ Unfortunately, this method resulted in bots with superhuman skills, so the team had to find a way to rein them in. ‘We added a bunch of constraints that we thought were human-like constraints, and then optimisation under those constraints resulted in behaviour that actually looked very human-like,’ Miikkulainen explains.
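
The loop Miikkulainen describes is, in outline, a standard evolutionary algorithm with extra constraints bolted on. The sketch below is a generic stand-in – the real project evolved neural network controllers inside Unreal Tournament, and its fitness came from actual combat – but it shows the shape: evaluate candidates, keep the fittest, mutate them, and cap anything that would make the result superhuman.

```python
import random

def make_bot():
    # A stand-in 'brain': the real project evolved neural networks.
    return {"aggression": random.random(), "accuracy": random.random()}

def constrain(bot):
    bot["accuracy"] = min(bot["accuracy"], 0.7)   # human-like aiming limit
    return bot

def fitness(bot):
    # Placeholder for a combat score measured by playing the game.
    return 0.4 * bot["aggression"] + 0.6 * bot["accuracy"]

population = [constrain(make_bot()) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                    # the best brains survive
    children = [constrain({k: v + random.gauss(0, 0.05)
                           for k, v in random.choice(survivors).items()})
                for _ in range(15)]
    population = survivors + children             # and are passed on, mutated
print(max(population, key=fitness))
```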

What’s interesting about this project isn’t so much the result of the team’s work – bots which passed the Turing test within the context of the game – but the process through which they got there. It hints at how machine learning could be adopted into mainstream gaming. Imagine a Pokémon game where you trained the creatures yourself, or a football management game where you coached AI players in tactics.

Miikkulainen even explored this idea in 2005 by developing a game called NERO, wherein players train special-ops soldiers for combat deployments.

‘The behaviour that comes out of these techniques is much more complex, and it can be more interesting – just the fact that you would have agents that adapt and change their behaviour is hugely interesting, and it can create entirely new games.’

Ultimately, the game industry is driven by ideas, rather than any specific technology or presiding artistic theory, and within gaming at least, AI is a technology – a tool rather than a goal. But that doesn’t mean the industry has given up on AI at all. Indeed, as hardware becomes increasingly powerful, it opens up new opportunities that AI programmers are already keen to explore.

‘We would love to see hardware that allows database storing of facts to help the AI make more complex decisions,’ says Desmeules. ‘This would require more memory space and processing power, so yes, hardware could definitely improve AI complexity.’