The primary problem for AIs attempting spatial reasoning, pathfinding, and strategy determination is that the raw geometry of the level itself is incredibly difficult to parse. Reasoning over the raw geometry at runtime would be prohibitively expensive, because the geometry contains so many extraneous and irrelevant details. A section of Constructive Solid Geometry (CSG), for example, might be composed of numerous aggregate polygons, but the AI is concerned only with its collision characteristics as a barrier to travel.
As with the global pathfinding problem, the most commonly employed solution is to build an optimized structural database. Developers can construct a very simple, streamlined database of spatial tactical data that contains only the key information the combat AI requires to understand the tactical significance of the level's various spatial configurations.
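As a sketch of what such a streamlined database might look like, the structure below stores one annotated node per walkable position, keeping only the tactically relevant data rather than the raw level geometry. All of the field names here (`cover_value`, `is_ambush_point`, and so on) are illustrative assumptions, not taken from any particular engine.

```python
from dataclasses import dataclass, field

@dataclass
class TacticalNode:
    """One entry in the precomputed tactical database: a walkable
    position annotated with only the data the combat AI needs."""
    position: tuple            # (x, y, z) in world space
    cover_value: float = 0.0   # 0 = fully exposed, 1 = full cover
    is_ambush_point: bool = False
    is_defense_point: bool = False
    visible_nodes: set = field(default_factory=set)  # precomputed line-of-sight

# A tiny two-node database: a doorway overlooked by a balcony.
doorway = TacticalNode(position=(0, 0, 0), cover_value=0.2)
balcony = TacticalNode(position=(5, 3, 4), cover_value=0.8,
                       is_ambush_point=True)
balcony.visible_nodes.add(id(doorway))  # the balcony can see the doorway
```

Because visibility and cover are computed offline, a combat AI can answer questions like "which nearby node offers cover and sight of my target?" with simple lookups instead of runtime geometry queries.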
Customized tools can automatically analyze a given level’s static geometry, determine the tactical significance of different areas, and generate a detailed tactical database to direct game agent movement. Level designers can also embed specific indicator objects into the navigation system, designating certain areas as ambush points, defensive positions, patrol routes, or sites of event-driven activity. (See also: NavigationPoint)
The only major drawback to a precomputed database is that it can sometimes work poorly in environments with a large number of dynamic obstacles.
Once the CSG tactical database has been generated, we need a way for the game AIs to access it. A combat AI component will typically draw from a structured library of tactics, in which each tactic is responsible for executing a specific combat behavior. Each tactic must communicate with the movement and animation subsystems to ensure that the agent exhibits the appropriate behaviors.
Another critical problem is the tactic-selection problem. Given an arbitrary combat situation, we need to pick the best tactic with which to attack our opponents so that the game remains continually challenging. This decision depends on three major factors: the nature of the tactic under consideration, the relative tactical significance of all the combatants’ various locations (as determined by the tactical database), and the current tactical situation (the AI agent’s health, weapon, ammo, and location, plus the values of those characteristics for all of its allies and opponents).
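One simple way to combine those three factors is a scoring function that sums a contribution from each, then picks the highest-scoring tactic. The sketch below assumes dictionary-based tactics and agents with hypothetical fields (`base_priority`, `aggressiveness`, `ammo_fraction`); the weights and the shape of the tactical-database lookup are placeholders a real game would tune.

```python
def score_tactic(tactic, agent, opponents, tactical_db):
    """Score one candidate tactic by the three factors named above."""
    # 1. Nature of the tactic: a designer-tuned base priority.
    score = tactic["base_priority"]
    # 2. Positional value of the agent's location for this tactic,
    #    looked up in the precomputed tactical database.
    score += tactical_db.get((tactic["name"], agent["node"]), 0.0)
    # 3. Current situation: favor aggressive tactics when healthy,
    #    well-armed, and not badly outnumbered; defensive ones otherwise.
    strength = agent["health"] * agent["ammo_fraction"] / max(1, len(opponents))
    score += tactic["aggressiveness"] * (strength - 0.5)
    return score

def pick_tactic(tactics, agent, opponents, tactical_db):
    return max(tactics, key=lambda t: score_tactic(t, agent, opponents, tactical_db))
```

With this structure, a wounded, low-ammo agent naturally drifts toward tactics with negative aggressiveness (holding a defense point) while a healthy one prefers assaults, without any per-tactic special casing.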
A related problem is the opponent-selection problem. Given a number of potential opponents, the combat AI needs to select one as its current “target”. Although the AI does not necessarily ignore all of its other adversaries, designating a target represents the fact that it will engage a single enemy at a time.
In most game situations, it is easy to find a good target-picking heuristic by considering the AI’s relative tactical situation against every potential opponent. An AI is primarily concerned with defending itself, so it first attempts to identify whether any particular opponent is immediately threatening. If not, it can select the most vulnerable target nearest to itself. A simple ranking function can make this determination and invoke the appropriate combat tactic.
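That heuristic can be captured as a two-tier ranking: immediate threats always outrank non-threats, and within each tier the agent prefers nearer and more vulnerable opponents. The field names and the distance-versus-vulnerability weight below are illustrative assumptions.

```python
import math

def rank_target(agent, opponent):
    """Rank one potential target; lower values are more attractive.
    Immediate threats come first, then the most vulnerable nearby enemy."""
    dist = math.dist(agent["pos"], opponent["pos"])
    if opponent.get("attacking_me"):          # immediate threat: defend first
        return (0, dist)                      # tier 0 always sorts ahead
    vulnerability = 1.0 - opponent["health"]  # prefer weakened targets
    return (1, dist - 10.0 * vulnerability)   # weigh wounds against distance

def pick_target(agent, opponents):
    return min(opponents, key=lambda o: rank_target(agent, o))
```

Returning tuples lets Python's ordinary tuple comparison implement the tiering for free: the threat/non-threat flag is compared first, and the distance-based score only breaks ties within a tier.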
After an AI has selected a target and initiated combat, it should reconsider its choice whenever its tactical situation changes significantly. Obviously, if the target dies, that’s a good time to pick a new one.
Finally, there’s the weapon-firing problem. Most FPS weapons are either instant-hit (“hit-scan”) or high-velocity ranged weapons, so the key problem is determining where, and along which line of sight, to fire.
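For a hit-scan weapon, one common approach is to aim directly at the target and then perturb the firing angle within the weapon's spread cone, which makes the AI's accuracy a tunable parameter rather than a side effect. The sketch below is 2-D for brevity, and `spread_radians` is an assumed per-weapon tuning value.

```python
import math, random

def aim_direction(shooter_pos, target_pos, spread_radians, rng=random):
    """Pick a unit firing direction for a hit-scan weapon: aim straight
    at the target, then add a random angular error inside the spread cone."""
    dx = target_pos[0] - shooter_pos[0]
    dy = target_pos[1] - shooter_pos[1]
    angle = math.atan2(dy, dx)                             # perfect aim
    angle += rng.uniform(-spread_radians, spread_radians)  # accuracy error
    return (math.cos(angle), math.sin(angle))
```

The engine then ray-casts along the returned direction to find what, if anything, the shot hits; widening `spread_radians` for low-skill enemies is an easy difficulty knob.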
At the top of the AI system hierarchy is the dominant control component called the behavior controller. This controller is responsible for determining the AI agent’s current state and a range of high-level goals. It determines the AI’s overall behavior: how it animates, what audio files it plays, where it moves, and when and how it enters combat.
There are any number of ways to model a behavior controller, depending on your game’s design requirements. Most FPS games use a finite-state machine (FSM) for this part of the AI.
The list below provides some typical states in an FPS AI’s FSM:
- Idle; The AI is standing guard, not engaged in combat or movement.
- Patrolling; The AI is following a designer-specified patrol path.
- Combat; The AI is engaged in combat and has passed most of the responsibility for character control to the combat controller.
- Wandering; The AI is roaming the level with no particular destination.
- Fleeing; The AI is attempting to flee from its opponent or any perceived threat.
- Searching; The AI is looking for an opponent to fight, or searching for an opponent who fled during combat.
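A minimal version of this state machine can be sketched as follows. The state names come from the list above, but the transition rules (the health threshold for fleeing, the combat-to-searching transition) are illustrative assumptions, not a specific game's logic.

```python
class BehaviorController:
    """Minimal finite-state machine for the behavior controller."""
    STATES = {"idle", "patrolling", "combat", "wandering",
              "fleeing", "searching"}

    def __init__(self):
        self.state = "idle"

    def update(self, agent):
        if agent["sees_enemy"]:
            # Badly wounded agents flee rather than fight.
            self.state = "fleeing" if agent["health"] < 0.2 else "combat"
        elif self.state == "combat":
            # Lost sight of the enemy mid-fight: go look for them.
            self.state = "searching"
        elif agent.get("patrol_path"):
            self.state = "patrolling"
        else:
            self.state = "idle"
        return self.state
```

A real controller would dispatch to per-state behavior objects on each update; the FSM itself only decides which behavior object is in charge.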
These behaviors are each represented by an object responsible for communicating with the movement, animation, and combat subsystems in order to carry out the behavior appropriately. Developing these behaviors is typically straightforward, since the movement, animation, and combat subsystems already do most of the work and provide a rich palette of basic behaviors for the behavioral component to draw upon.
Level designers will inevitably need some way to specify their design intentions in certain ambiguous or confusing sectors of the game. They need a way to take some degree of control over the AI for triggered gameplay sequences, cut scenes, or other “triggered” or “scripted” events that should happen during the game under certain circumstances. In order to make this happen, it’s necessary to design an interface for communication between the trigger/scripting system and the agent AI.
This communication will typically take two forms. First, the triggered events can set AI parameters. For example, an event might enable or disable certain states of the behavior controller’s finite-state machine, or modify various aggressiveness parameters to change the priority of various potential combat tactics.
The more common form of communication consists of sending commands to any of the AI subsystems from a triggered event. For example, the trigger system can tell an AI to move to a given point or to flee from its current target by issuing a command to its behavior controller. This command changes the current state of the FSM to one that will execute the appropriate behaviors.
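Both forms of communication can share one small interface object between the trigger system and an agent's AI. Everything below (the method names, the `aggressiveness` parameter, the specific commands) is a hypothetical sketch of the idea, not an existing engine API.

```python
class TriggerInterface:
    """Bridge between the trigger/scripting system and one agent's AI:
    triggered events either set AI parameters or issue direct commands."""

    def __init__(self, controller):
        self.controller = controller           # the agent's behavior controller
        self.params = {"aggressiveness": 0.5}  # tunable combat parameters
        self.disabled_states = set()           # FSM states an event has gated off

    def set_parameter(self, name, value):      # form 1: tune parameters
        self.params[name] = value

    def disable_state(self, state):            # form 1: gate FSM states
        self.disabled_states.add(state)

    def command(self, name, **kwargs):         # form 2: direct command
        if name == "flee" and "fleeing" not in self.disabled_states:
            self.controller.state = "fleeing"
        elif name == "move_to":
            self.controller.goal = kwargs["point"]
```

A scripted event then reads naturally, e.g. `interface.command("move_to", point=door_pos)` when the player trips an alarm, while a cutscene can call `disable_state("combat")` so the agent cannot interrupt the scene by attacking.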
An AI agent's perception (data input) is broken down into different subsystems. Different types of perception work differently, so we model the character's visual, auditory, and tactile subsystems separately.
The visual subsystem should take into account such factors as the distance to a given visual stimulus, the angle of the stimulus relative to the AI’s field of view, and the current visibility level at the location of the stimulus (lighting, fog, and obstructions).
In order to ensure that the AI can actually see the object, it’s also essential to perform a ray-casting test. The AI should query the underlying game engine to verify that there’s a clear line of sight between the AI’s eyes and the object it’s attempting to see.
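Putting those checks together, a visibility test can fail fast on the cheap conditions (distance, then angle) before paying for the engine ray cast. This is a 2-D sketch; `engine.ray_clear` stands in for whatever line-of-sight query the real engine exposes, and the way `visibility` scales sight range is an illustrative assumption.

```python
import math

def can_see(agent, stimulus, engine):
    """Visibility test: distance, field-of-view angle, local visibility,
    and a final line-of-sight ray cast against the game engine."""
    dx = stimulus["pos"][0] - agent["pos"][0]
    dy = stimulus["pos"][1] - agent["pos"][1]
    dist = math.hypot(dx, dy)
    # Darkness or fog at the stimulus shrinks the effective sight range.
    if dist > agent["sight_range"] * stimulus["visibility"]:
        return False
    # Reject stimuli outside the field of view (shortest angular difference).
    angle = math.atan2(dy, dx)
    delta = abs((angle - agent["facing"] + math.pi) % (2 * math.pi) - math.pi)
    if delta > agent["fov"] / 2:
        return False
    # Only now pay for the expensive engine ray cast for obstructions.
    return engine.ray_clear(agent["pos"], stimulus["pos"])
```

Ordering the checks from cheapest to most expensive matters in practice, since dozens of agents may each test many stimuli per frame.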
The final sensory subsystem is the tactile subsystem. This system is responsible for anything the AI feels. This includes damage notifications (whenever the AI is wounded) as well as collision notifications (whenever the AI bumps into something, or some other object bumps into it).