
Overcoming Limitations of Game-Theoretic Distributed Control

Jason R. Marden, University of Colorado

Abstract:

Game theory has recently received significant research attention as a tool for cooperative control of multi-agent systems. Utilizing game theory for cooperative control requires the following: (1) the interactions of a multi-agent distributed system are modeled as a non-cooperative game in which the agents are designed as self-interested entities, and (2) the agents are controlled using distributed learning algorithms that guarantee convergence to a stable operating point, e.g., a Nash equilibrium. While there exists a large body of literature on distributed learning algorithms, guidelines for designing a "desirable" game unfortunately remain relatively unexplored. In this talk, we focus on the question of how to design agent objective functions. We demonstrate that the standard framework of non-cooperative games has inherent limitations with regard to designing agent objective functions. In particular, we prove that there does not exist a systematic method for constructing agent objective functions that are local and budget balanced and that guarantee the optimal control is a pure Nash equilibrium. However, we demonstrate that these limitations can be overcome by moving beyond the class of non-cooperative games and conditioning each player's objective function on additional information, i.e., a state.
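
For concreteness, the conditions referenced above can be written in the following standard form; this is a sketch in our own notation (agents N, joint action a, agent objectives U_i, global objective W), not taken from the talk itself:

\begin{align*}
  \text{(budget balance)} &\quad \sum_{i \in N} U_i(a) = W(a) \quad \text{for every joint action } a,\\
  \text{(locality)}       &\quad U_i(a) \text{ depends only on the actions of agent } i \text{ and its neighbors},\\
  \text{(efficiency)}     &\quad a^{\mathrm{opt}} \in \arg\max_{a} W(a) \text{ is a pure Nash equilibrium, i.e.,}\\
                          &\quad U_i(a^{\mathrm{opt}}) \ge U_i(a_i, a^{\mathrm{opt}}_{-i}) \quad \text{for all } a_i \text{ and all } i \in N.
\end{align*}

The impossibility result stated in the abstract says that no systematic design of the U_i can satisfy all three conditions simultaneously, whereas state-based objective functions U_i(x, a), conditioned on an auxiliary state x, can.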