In multi-agent interactions, each agent often faces uncertainty about the incentives and behavior of the other agents. The traditional approach assumes that each agent maximizes its expected utility with respect to some common prior distribution. However, in most real-world scenarios, agents have no way to know this distribution accurately, or even approximately. Moreover, numerous psychological experiments have demonstrated that human decision makers fail even at fairly simple tasks involving probabilistic reasoning, and are prone to cognitive biases such as risk aversion and loss aversion.
I will describe an alternative, non-probabilistic, model for representing players' uncertainty in games, inspired by artificial intelligence and bounded rationality approaches.
While the model is quite general, I will demonstrate how it applies to preference aggregation mechanisms (voting), overcoming many shortcomings of previous theories. My main result is that the behavior of boundedly rational agents boils down to simple and natural dynamics, which are guaranteed to converge to an equilibrium. Extensive simulations show that the resulting equilibria replicate known phenomena from real-world voting.
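To make the flavor of such voting dynamics concrete, here is a minimal sketch of a related, standard setup: best-response dynamics in plurality voting, where voters start truthful and repeatedly switch their vote when doing so yields an outcome they prefer. This is my own illustration, not the specific bounded-rationality model from the talk; the function names and the deterministic tie-breaking rule are assumptions for the example.

```python
def plurality_winner(votes):
    """Plurality winner; ties broken deterministically toward the lower candidate index."""
    tally = {}
    for v in votes:
        tally[v] = tally.get(v, 0) + 1
    return max(tally, key=lambda c: (tally[c], -c))

def best_response(prefs, votes, i):
    """Voter i's best vote given the others' votes; prefs[i] lists candidates best-first."""
    rank = {c: r for r, c in enumerate(prefs[i])}
    best_vote, best_outcome = votes[i], plurality_winner(votes)
    for c in prefs[i]:
        trial = votes[:i] + [c] + votes[i + 1:]
        w = plurality_winner(trial)
        if rank[w] < rank[best_outcome]:
            best_vote, best_outcome = c, w
    return best_vote

def iterative_voting(prefs, max_rounds=100):
    """Run best-response dynamics from the truthful profile; cap rounds as a safeguard."""
    votes = [p[0] for p in prefs]  # everyone starts by voting truthfully
    for _ in range(max_rounds):
        changed = False
        for i in range(len(votes)):
            bv = best_response(prefs, votes, i)
            if bv != votes[i]:
                votes[i], changed = bv, True
        if not changed:  # no voter wants to deviate: an equilibrium
            break
    return plurality_winner(votes), votes
```

For example, with preferences `[[0,1,2], [0,1,2], [1,2,0], [2,1,0], [2,1,0]]`, the truthful profile elects candidate 0, but the supporter of candidate 1 compromises and shifts to candidate 2, who then wins; the process settles in an equilibrium. Convergence guarantees for such dynamics depend on the voting rule, the tie-breaking scheme, and the allowed moves, which is exactly the kind of question the talk's model addresses.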
Finally, I will show how key components of this approach can be extracted and applied to very different settings, including online scheduling on Doodle and routing in networks with uncertain congestion.
The talk is based on published and unpublished work with Omer Lev, David Parkes, Jeffrey S. Rosenschein, and James Zou.