Corelab Seminar

Fivos Kalogiannis
Min-Max Optimization in Two-Team Zero-Sum Games

Motivated by recent advances in both theoretical and applied aspects of multiplayer games, spanning from e-sports to multi-agent generative adversarial networks, we focus on min-max optimization in two-team zero-sum games. In this class of games, players are split into two teams; payoffs are equal within the same team and of opposite sign across the two teams. Unlike textbook two-player zero-sum games, computing a Nash equilibrium in this class can be shown to be CLS-hard, i.e., a polynomial-time algorithm for computing Nash equilibria is unlikely to exist. Moreover, in this generalized framework, we establish that even asymptotic last-iterate or time-average convergence to a Nash equilibrium is not possible using Gradient Descent Ascent (GDA), its optimistic variant, the extra-gradient method, or the optimistic multiplicative weights update method. Specifically, we present a family of team games whose induced utility is non-multilinear and whose mixed Nash equilibria are strict saddle points of the underlying optimization landscape, and hence not attracting. Leveraging techniques from control theory, we complement these negative results by designing a modified GDA that converges locally to Nash equilibria. Additionally, we provide a way to select the hyper-parameters of our proposed method such that local convergence is guaranteed when a sufficient condition holds. Finally, we discuss connections of our framework with AI architectures with team competition structures, such as multi-agent generative adversarial networks.
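
The failure of GDA mentioned above can be illustrated even in the simplest setting. The sketch below (not the construction from the talk, just a classical textbook example) runs simultaneous GDA on the bilinear two-player game f(x, y) = x·y, whose unique Nash equilibrium is (0, 0); the iterates spiral away from the equilibrium rather than converging to it.

```python
# Illustrative sketch only: simultaneous Gradient Descent Ascent (GDA) on the
# bilinear game f(x, y) = x * y. The min player updates x by descending on f,
# the max player updates y by ascending on f, both using the current iterate.

def gda(x, y, lr=0.1, steps=200):
    """Run `steps` iterations of simultaneous GDA on f(x, y) = x * y."""
    for _ in range(steps):
        gx, gy = y, x                       # grad_x f = y, grad_y f = x
        x, y = x - lr * gx, y + lr * gy     # simultaneous update
    return x, y

x0, y0 = 1.0, 1.0
xT, yT = gda(x0, y0)
# Each step multiplies the squared distance to (0, 0) by (1 + lr^2),
# so the iterates diverge from the equilibrium instead of approaching it.
print(xT**2 + yT**2 > x0**2 + y0**2)  # True
```

This cycling/divergence behavior in the two-player case is well known; the talk's results show that in two-team games the situation is worse, in that even the optimistic and extra-gradient variants that fix this bilinear example fail to converge.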