Embodied Agents for Computer Games
GDC 2006 coverage, posted March 21 9:00 AM by Graham Wihlidal
As the demands of gamers increase, so does the technology powering games, driven by the pressure for more realism and immersion. A large part of that realism comes from the artificial intelligence behind simulated actors. In older games, the AI could be loosely coupled from the implementation details of the physics and animation systems. Those games were far enough from believable realism that players let many things slide: popping animation, jerky or missing animation transitions, or reduced agent perception and response to make the game easier to produce. Today's games have approached a level of realism where the slightest pop or unbelievable action can break the player's entire sense of immersion.

The full-day tutorial titled “Embodied Agents for Computer Games” was delivered by Bryan Stout and John O’Brien, and covered both the theory and the conceptual architecture of designing artificial intelligence in games, specifically embodied agents. Bryan contributed academic concepts and theory, and John balanced the theory with implementation details from Red Storm’s latest game, Rainbow Six: Lockdown. The tutorial started with Bryan Stout defining what autonomous and non-autonomous actors are, then progressed into the attributes and behaviors that can be modeled from a human player.

After Bryan, John moved into ways to plan out an AI system, such as first determining what actors can do (walk, run, shoot, jump, broadcast radio messages, etc.), and then describing how those behaviors can be modeled and implemented from a high-level view. Even though AI systems have become increasingly coupled with the animation and physics systems, it is still important to build modular code that is as decoupled from other systems as possible. Do not assume you will never add a new unit or player type; such assumptions will come back to haunt you. A great technique for decoupling AI logic from the rest of the game is a message-based communication system.
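A message-based system like the one John advocated can be sketched with a small publish/subscribe bus. This is a minimal illustration, not code from the tutorial; all names (MessageBus, the "grenade_thrown" topic) are assumptions for the example.

```python
# Minimal sketch of a message bus that decouples the AI from other systems.
# Names and topics here are illustrative, not from the tutorial.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # AI, animation, and audio systems register interest by topic name.
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Senders never reference receivers directly, so adding a new unit
        # or player type does not require touching every other system.
        for handler in self._subscribers[topic]:
            handler(payload)

# Usage: the AI reacts to a gameplay event without linking to the physics code.
bus = MessageBus()
heard = []
bus.subscribe("grenade_thrown", lambda p: heard.append(p["position"]))
bus.publish("grenade_thrown", {"position": (4, 2)})
```

The payoff is that each system only depends on the bus and the message vocabulary, not on the concrete types of the other systems.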

John presented a number of real-world issues that he and his team at Red Storm encountered on their latest game, which helped put the issues and their resolutions into perspective. One issue concerned actors throwing grenades through a door on a “Frag and Clear” order. To produce a believable action, the grenade’s trajectory was not scripted to always travel through the open door frame. Instead, a frame reference point on the model’s hand determined the grenade’s trajectory at the moment of release. The animation system was then tweaked so that the actor would be in the correct position to throw the grenade, and this technique worked great… up until an animator reworked the grenade-throwing animation so it didn’t look so wimpy, and John and his team spent three days tinkering with the AI and timing systems to account for the “subtle” change. Before the fix, actors would line up at the door, but the grenade would clip the door frame on the throw and blow up the entire Rainbow team.
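The hand-relative trajectory John described can be illustrated with basic projectile motion: given the hand reference point at release and a target beyond the door, solve for the launch velocity. This is a hedged 2D sketch under constant gravity, not Red Storm's implementation; the function name and time-to-target parameter are assumptions. It also shows why an animation rework hurts: a new animation shifts the release point and timing, which changes the solved velocity.

```python
# Sketch: derive a grenade launch velocity from the hand reference point
# at release. 2D (x = horizontal, y = up), constant gravity; all names
# and numbers are illustrative assumptions.
def throw_velocity(hand, target, flight_time, gravity=9.81):
    dx = target[0] - hand[0]
    dy = target[1] - hand[1]
    vx = dx / flight_time
    # Solve dy = vy * t - 0.5 * g * t^2 for the vertical component vy.
    vy = dy / flight_time + 0.5 * gravity * flight_time
    return vx, vy

# Usage: hand at 1.5 m up, target 5 m away at 1.0 m, one second of flight.
vx, vy = throw_velocity(hand=(0.0, 1.5), target=(5.0, 1.0), flight_time=1.0)
```

If the reworked animation releases the grenade earlier or from a different hand position, both inputs change, which is exactly the kind of coupling that cost the team three days.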

Another excellent tip is the golden rule: “If the player didn’t see something, it didn’t happen.” A great example involved the flanking system. John and his team built an excellent flanking AI for enemies, but much of the time the player never saw the maneuver play out. One post on the Red Storm forums described how a player noticed some sort of great flanking AI, while another player jumped to the conclusion that Red Storm had simply spawned characters behind him. The point: why waste CPU cycles computing realistic flanking behavior the player will never actually see?
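One way to act on that rule is to gate the expensive behavior behind a cheap visibility test, such as a view-cone check. This is a minimal sketch of the idea, not Red Storm's code; the function, the "full"/"cheap" labels, and the field-of-view value are assumptions.

```python
# Sketch: run the expensive flanking planner only when the player could
# plausibly see the agent. 2D view-cone test; all names are illustrative.
import math

def flanking_detail(agent_pos, player_pos, player_facing, fov_deg=90.0):
    to_agent = (agent_pos[0] - player_pos[0], agent_pos[1] - player_pos[1])
    dist = math.hypot(to_agent[0], to_agent[1])
    if dist == 0.0:
        return "full"
    # Cosine of the angle between the player's facing and the agent direction.
    dot = (to_agent[0] * player_facing[0] + to_agent[1] * player_facing[1]) / dist
    visible = dot >= math.cos(math.radians(fov_deg / 2.0))
    # Visible agents get the full planner; unseen ones get a cheap stand-in.
    return "full" if visible else "cheap"
```

An agent directly ahead of the player gets the full behavior, while one behind the player can be driven by something much cheaper, since the player cannot tell the difference.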

Later on, Bryan addressed planning and path-finding concepts, including demos that visually displayed the stages of each algorithm.
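The tutorial did not specify which algorithms were demoed, but A* on a grid is the standard game path-finding baseline, so here is a compact sketch of it for reference; the grid representation and Manhattan heuristic are assumptions for the example.

```python
# Sketch of A* path-finding on a grid: 0 = walkable, 1 = blocked.
# Manhattan-distance heuristic; 4-way movement with unit step cost.
import heapq

def astar(grid, start, goal):
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Priority queue entries: (estimated total cost, cost so far, node, path).
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                step = (cost + 1 + h((nx, ny)), cost + 1, (nx, ny), path + [(nx, ny)])
                heapq.heappush(open_set, step)
    return None  # no path exists

# Usage: route around a wall in a small 3x3 grid.
path = astar([[0, 0, 0],
              [1, 1, 0],
              [0, 0, 0]], (0, 0), (2, 0))
```

Stepping through `open_set` and `seen` by hand is essentially what the visual demos of algorithm stages show.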

[Photo of the speakers]
Overall, the tutorial was very well done, with a nice balance between Bryan’s academic concepts and John’s engineering practices. At times there was too much PowerPoint and not enough interaction, but I walked away with much more knowledge about embodied agents than when I went in.

~Graham