For various reasons, including coordinating a larger release in pursuit of an Epic MegaGrant, the source code for these projects is currently private. If you want more information, or if you have a project you would like to use these in, please reach out and I'd be happy to discuss further.
For 2023 I'm planning to both expand this section of the site and make a series of YouTube tutorials that demonstrate these ideas and provide some tips on how to develop your own system.
Agent intelligence is the subset of artificial intelligence concerned with controlling entities that interact with their environment by taking actions. Goal-based agents aim not to react purely to stimuli, but to proactively achieve a certain goal state. In turn, planning allows those agents to construct sequences of actions that will achieve a goal when no individual action would. This is sometimes known as Goal-Oriented Action Planning (GOAP).
This project implemented a planning mechanism in the Unreal game engine by running pathfinding algorithms over a graph of “environment state”, rather than over the graph of physical locations in the environment, and constructing and selecting sequences of actions (a “plan”) by their cost and consequences. By applying this approach hierarchically (known as Hierarchical Task Network planning, or HTN) and then parametrizing the cost or cost function associated with each action and with characteristics of candidate plans, I was able to create rough “personality traits” for the AI in a way that is numerically tunable and combinable.
For example, a “scheming” personality implies a preference for longer plans, while a “straightforward” one prefers the opposite; both are expressed by a cost coefficient on the plan-length cost term. Other traits, such as being “cautious”, are represented in ways that depend on the rules of the simulation (is hiding behind cover in combat more, or less, cautious than simply running away?) but can be freely combined with the above.
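As a minimal sketch of that idea (plain C++ rather than the project's actual classes; all names here are illustrative), personality traits can be stored as coefficients that weight plan-level cost terms during plan selection:

```cpp
#include <numeric>
#include <vector>

struct PlanCandidate {
    std::vector<float> actionCosts; // base cost of each action in the sequence
    float risk = 0.0f;              // aggregate "danger" estimate for the plan
};

struct PersonalityTraits {
    // Coefficients applied to plan-level cost terms; combining traits is a
    // matter of combining their coefficients.
    float lengthCoefficient = 1.0f;
    float riskCoefficient   = 1.0f;
};

// Combine base action costs with trait-weighted plan-level terms.
// Lower scores are preferred during plan selection.
float ScorePlan(const PlanCandidate& plan, const PersonalityTraits& traits) {
    const float baseCost =
        std::accumulate(plan.actionCosts.begin(), plan.actionCosts.end(), 0.0f);
    const float lengthTerm = traits.lengthCoefficient * plan.actionCosts.size();
    const float riskTerm   = traits.riskCoefficient * plan.risk;
    return baseCost + lengthTerm + riskTerm;
}
```

A “scheming” agent could then use a small or even negative length coefficient so that longer plans score better, while a “cautious” one would raise the risk coefficient.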
One notable challenge is that some behaviors are context-dependent and therefore require a mapping onto other aspects of the game world. An agent that (for whatever reason) fears a certain object should assign a much higher cost to any plan that takes it near that object, which requires evaluating multiple tiers of navigation, a sense/perception system that keeps track of the object's location for the agent, and an arbitrary-length mapping between "fears" and "object references". This only becomes more complex if the fear applies to any object meeting some type criteria, rather than to a specific instance of an object.
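To make that concrete, here is a hedged sketch (again plain C++ with hypothetical names) of such a context-dependent cost modifier: the agent's "fears" map object criteria, rather than specific instances, to a penalty applied to any plan step whose location falls near a matching perceived object:

```cpp
#include <cmath>
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };

struct WorldObject {
    int typeId;    // category of object (e.g. fire, rival faction member)
    Vec3 location; // last-known position from the agent's perception system
};

struct FearEntry {
    std::function<bool(const WorldObject&)> matches; // type criteria, not a single instance
    float penaltyPerMetre;                           // extra cost per metre inside the radius
    float radius;
};

float Distance(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Extra cost for a single plan step that moves the agent to 'stepLocation',
// given the objects the agent currently perceives and its list of fears.
float FearCostForStep(const Vec3& stepLocation,
                      const std::vector<WorldObject>& perceivedObjects,
                      const std::vector<FearEntry>& fears) {
    float extra = 0.0f;
    for (const FearEntry& fear : fears) {
        for (const WorldObject& obj : perceivedObjects) {
            if (!fear.matches(obj)) continue;
            const float d = Distance(stepLocation, obj.location);
            if (d < fear.radius)
                extra += fear.penaltyPerMetre * (fear.radius - d);
        }
    }
    return extra;
}
```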
The general challenge with this approach is managing performance and complexity when multiple agents and dynamic environments are involved in real time. I'm investigating these ideas further in the context of integrating game AI with richer, extensible sensory systems. Because personality traits in this framework have a vector/matrix representation, one avenue is to apply machine learning methods (for example, using plan success or failure to perform supervised learning), although I have not done this yet. Even more advanced would be to apply a machine learning technique that can work with operators/expressions to generate cost functions, rather than just computing cost coefficients and constants.
An extension point I am investigating is integrating this system with Unreal Engine's "Smart Objects", which represent objects that both player and non-player characters can interact with to meet their goals. The aim is to tie these more tightly to the AI in the game world without necessarily writing custom AI logic for every kind of smart object; instead, use of each individual smart object is simply exposed as another set of HTN plan elements. This abstraction keeps the AI code as general as possible while still being driven by the data made available to it.
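A rough sketch of how that data-driven exposure could look, assuming each smart object publishes generic operators with preconditions, effects, and a cost (illustrative types only, not Unreal's actual SmartObject API):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

using WorldState = std::unordered_map<std::string, bool>;

struct PlanOperator {
    std::string name;
    WorldState preconditions; // facts that must hold before the interaction
    WorldState effects;       // facts the interaction makes true or false
    float baseCost = 1.0f;
};

// A smart object only publishes data; the planner treats its operators
// exactly like any other HTN primitive task, so no per-object AI code is needed.
struct SmartObjectDefinition {
    std::string objectId;
    std::vector<PlanOperator> interactions;
};

// Collect candidate operators from all smart objects an agent knows about.
std::vector<PlanOperator> GatherOperators(
        const std::vector<SmartObjectDefinition>& knownObjects) {
    std::vector<PlanOperator> ops;
    for (const auto& obj : knownObjects)
        for (const auto& op : obj.interactions)
            ops.push_back(op);
    return ops;
}
```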
The annual Fighting Game AI Competition, hosted by Ritsumeikan University’s Intelligent Computer Entertainment Lab, requires entrants to submit AI controllers for a sample fighting game. The competition tightly controls what information is available, and when, both to the controller (about the current state of the match) and to entrants (about the game’s characters, which have unique traits and actions), in order to enforce a roughly human-equivalent level of perception and knowledge.
I considered the possibility of using known traits of game rules and characters, extracted from multiple similar games and encoded in a common model, as input to machine learning methods. In this way, the controller could learn patterns among characters with similar actions between different games in the genre. A close friend of mine has competed nationally in this type of game, so I felt his match data would be a good starting point.
Taking the commercial Guilty Gear series as an initial case, I used reverse-engineering techniques both to decode the proprietary binary format used to store match replay data, and to modify the game to inject my custom AI controller, which was based on a modified Markov chain model. However, this approach requires extensive knowledge of each game, and it is extremely difficult to model more complicated differences between games.
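For illustration, the core of such a controller can be as simple as a first-order Markov chain over observed opponent moves. The sketch below (simplified, not the actual injected controller) counts transitions extracted from replay data and predicts the most probable follow-up move:

```cpp
#include <string>
#include <unordered_map>

class MoveMarkovChain {
public:
    // Record that 'next' followed 'previous' in a replay.
    void Observe(const std::string& previous, const std::string& next) {
        transitions_[previous][next] += 1;
    }

    // Return the most frequently observed follow-up to 'previous',
    // or an empty string if it has never been seen.
    std::string PredictNext(const std::string& previous) const {
        auto it = transitions_.find(previous);
        if (it == transitions_.end()) return {};
        std::string best;
        int bestCount = 0;
        for (const auto& [move, count] : it->second) {
            if (count > bestCount) { bestCount = count; best = move; }
        }
        return best;
    }

private:
    std::unordered_map<std::string, std::unordered_map<std::string, int>> transitions_;
};
```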
I intend to enter next year’s competition, which now focuses on limiting information to the controller to simulate disabilities such as visual and auditory impairment.
In interactive media, there is an innate tension between the intentional, scripted narrative and the freedom of the viewer to create their own experience. A fully scripted work could hardly be called interactive, but a fully interactive sandbox without intentionality offers only novelty. Somewhere in the middle is the “Goldilocks zone”, where self-directed and externally directed play are balanced, and the viewer receives both the benefit of a bespoke story and of self-actualization.
This ongoing project attempts to provide middleware toward that end using a “director-based” system, à la Left 4 Dead and The Hit, while providing an accessible interface for writers to populate the system via a derivative of the Ink interactive fiction scripting language. In traditional game narratives, the story is delivered through scenes which are either unavoidable or which trigger under specific conditions. By borrowing concepts from agent planning and defining meaningful story beats in terms of their narrative preconditions and effects, a software “director” can compute a path from an arbitrary player state and adjust aspects of the game world in the background to “nudge” the player toward events that might capture their interest and drive the story, without causing the feeling that they are “stuck on a track”.
For example, the spontaneous appearance of a new type of challenge combined with a simultaneous difficulty spike may nudge the player toward returning to safety, priming them for the narrative beat of meeting a character (navigated by the director in the meantime) who has some helpful equipment for trade, in exchange for a favor. Using the director instead of hard-coding event triggers like this has the potential to ensure a unique experience guided by the player's own style of play. This director can also actively learn to some degree based on metrics of player engagement, such as time spent in dialogue. Ultimately this still requires a great deal of work to define events and keep track of variables in the narrative, but this does help make the problem more tractable and programmatic.
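The sketch below illustrates how beats might be declared for such a director, with narrative facts as preconditions and effects. It is a simplified, hypothetical illustration: the real system is authored through the Ink derivative, and the director plans toward beats and nudges the world to satisfy missing preconditions rather than triggering beats directly.

```cpp
#include <string>
#include <unordered_set>
#include <vector>

using NarrativeState = std::unordered_set<std::string>;

struct StoryBeat {
    std::string id;
    NarrativeState preconditions; // facts that must already hold
    NarrativeState effects;       // facts this beat establishes once it plays
};

bool CanTrigger(const StoryBeat& beat, const NarrativeState& state) {
    for (const auto& fact : beat.preconditions)
        if (!state.count(fact)) return false;
    return true;
}

// Greedy illustration: play any currently-triggerable beat, apply its effects,
// and repeat until nothing more can fire. Each beat plays at most once.
std::vector<std::string> SketchBeatChain(NarrativeState state,
                                         std::vector<StoryBeat> beats) {
    std::vector<std::string> chain;
    bool progressed = true;
    while (progressed) {
        progressed = false;
        for (std::size_t i = 0; i < beats.size(); ++i) {
            if (CanTrigger(beats[i], state)) {
                chain.push_back(beats[i].id);
                state.insert(beats[i].effects.begin(), beats[i].effects.end());
                beats.erase(beats.begin() + i);
                progressed = true;
                break;
            }
        }
    }
    return chain;
}
```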
The most common feedback users get from software is either visual or auditory, but those senses don’t capture the full range of experience. In fact, in virtual worlds, users can take on the persona of entities with sensory capabilities very different from their own. For example, a magical or psychic character may have awareness of supernatural entities that are totally imperceptible to other characters, or a character who is a medical doctor might be able to “see” the health status of others.
Inspired by the open-source online game Space Station 13, in which a player’s in-character senses and knowledge can change dynamically and are relevant to the game mechanics, I sought to provide an easily extensible way to model characters with complex and dynamic senses in the context of an adversarial game. Using object-oriented design, I created a plugin/framework for the Unreal game engine which disentangles senses into three logical components: the sensor itself, the class of stimuli it can detect, and its interface representation.
The representation defines the policy for how the user is notified (e.g., by creating or updating a widget on a heads-up display, by a specific force feedback pattern, or by enabling visibility of new objects in the game world) and is fully configurable at runtime, rendering only information published to it by the sensor, which is controlled in a server-authoritative fashion. Because the sensor is an “object” in the game world like any other, it can also be manipulated like other objects, potentially compromising or destroying it. This also means that players can possess different controllable entities and their game interface will automatically adjust to the senses of the currently controlled character.
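As a simplified sketch of that three-way split (plain C++ rather than the actual Unreal plugin classes; all names are illustrative), the sensor filters stimuli by the class it can detect and forwards detections to whatever representation is currently attached:

```cpp
#include <memory>
#include <string>
#include <vector>

struct Stimulus {
    virtual ~Stimulus() = default;
    virtual std::string Kind() const = 0; // e.g. "Sound", "HealthAura"
};

// Presentation policy: swapped freely at runtime (HUD widget, haptic pattern,
// revealing hidden actors, ...) and only ever sees what the sensor publishes.
struct SenseRepresentation {
    virtual ~SenseRepresentation() = default;
    virtual void Present(const Stimulus& stimulus) = 0;
};

// The sensor lives in the game world like any other object: it can be damaged,
// disabled, or reassigned when the player possesses a different character.
class Sensor {
public:
    Sensor(std::string detectedKind, std::shared_ptr<SenseRepresentation> repr)
        : detectedKind_(std::move(detectedKind)), representation_(std::move(repr)) {}

    // Intended to run with server authority; forwards only stimuli this sensor detects.
    void Sense(const std::vector<std::shared_ptr<Stimulus>>& stimuli) {
        if (!enabled_ || !representation_) return;
        for (const auto& s : stimuli)
            if (s->Kind() == detectedKind_)
                representation_->Present(*s);
    }

    void SetEnabled(bool enabled) { enabled_ = enabled; }
    void SetRepresentation(std::shared_ptr<SenseRepresentation> repr) {
        representation_ = std::move(repr); // interface adjusts on possession change
    }

private:
    std::string detectedKind_;
    std::shared_ptr<SenseRepresentation> representation_;
    bool enabled_ = true;
};
```

In this framing, switching to a new controllable entity amounts to swapping the attached representations, and compromising or destroying the sensor object silences the corresponding part of the interface.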
This provides a framework for supporting specialized interfaces for users with disabilities in these games, and for creating games/simulations where individual agents can directly mislead one another. As a side benefit, it helps modularize this functionality cleanly, easing the actual development of these systems.