ML cube co-founders won the Best Paper Award at NeurIPS 2020


We are happy to announce that our co-founders Alberto Marchesi and Nicola Gatti (together with Andrea Celli and Gabriele Farina) won the prestigious Best Paper Award at NeurIPS 2020 with a paper on no-regret learning dynamics in sequential decision-making.

The Neural Information Processing Systems (NeurIPS) conference is the most important AI and Machine Learning meeting, gathering thousands of researchers and practitioners from academia and industry around the world every year. In the record-breaking year of 2020, the conference had more than 18,000 participants and 9,467 submitted papers.

After a highly selective reviewing process, only 1,898 papers were accepted for publication, and three of them also received the prestigious Best Paper Award for their groundbreaking contributions. Alberto and Nicola shared the award with OpenAI, for their GPT-3 language model paper, and UC Berkeley, for a work on data summarization.

The winning paper “No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium”, co-authored by Alberto and Nicola, solves a long-standing open problem at the interface of game theory, computer science, and economics. In particular, the paper shows that equilibria involving a mediator can be reached in a decentralized way by agents playing selfishly, even in complex settings with sequential decision-making and partial information.

This work may find substantial applications in efficient traffic routing via navigation apps, user coordination in ride-sharing and food-delivery apps, and planning the use of shared resources. The groundbreaking contribution of the paper is that agents can achieve socially optimal behavior without resorting to a central mediator, acting instead in a decentralized manner according to the algorithm developed in the work.

Here are the technical abstract and a link to the full paper:

“The existence of simple, uncoupled no-regret dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems. Specifically, it has been known for more than 20 years that when all players seek to minimize their internal regret in a repeated normal-form game, the empirical frequency of play converges to a normal-form correlated equilibrium. Extensive-form (that is, tree-form) games generalize normal-form games by modeling both sequential and simultaneous moves, as well as private information. Because of the sequential nature and presence of partial information in the game, extensive-form correlation has significantly different properties than the normal-form counterpart, many of which are still open research directions. Extensive-form correlated equilibrium (EFCE) has been proposed as the natural extensive-form counterpart to normal-form correlated equilibrium. However, it was currently unknown whether EFCE emerges as the result of uncoupled agent dynamics. In this paper, we give the first uncoupled no-regret dynamics that converge to the set of EFCEs in n-player general-sum extensive-form games with perfect recall. First, we introduce a notion of trigger regret in extensive-form games, which extends that of internal regret in normal-form games. When each player has low trigger regret, the empirical frequency of play is close to an EFCE. Then, we give an efficient no-trigger-regret algorithm. Our algorithm decomposes trigger regret into local subproblems at each decision point for the player, and constructs a global strategy of the player from the local solutions at each decision point.”
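The classic normal-form result the abstract starts from (internal-regret minimization leads to correlated equilibrium) can be sketched with Hart and Mas-Colell's regret-matching procedure. The sketch below is only an illustration of that earlier, simpler result, not of the extensive-form (EFCE) algorithm in the awarded paper; the choice of the game of Chicken and the parameter `mu` are illustrative assumptions.

```python
import random

# Regret matching (Hart & Mas-Colell, 2000) in a 2x2 "Chicken" game.
# This illustrates the normal-form result cited in the abstract, NOT the
# extensive-form (EFCE) algorithm of the awarded paper.
# Actions: 0 = Dare, 1 = Chicken. PAYOFF[mine][theirs], symmetric game.
PAYOFF = [[0, 7], [2, 6]]

def run_regret_matching(rounds=20000, mu=20.0, seed=0):
    rng = random.Random(seed)
    n = 2  # actions per player
    # Cumulative internal regrets R[p][i][j]: how much player p would have
    # gained by playing j every time it actually played i.
    R = [[[0.0] * n for _ in range(n)] for _ in range(2)]
    last = [rng.randrange(n) for _ in range(2)]
    joint = {}  # empirical counts of joint action profiles
    for t in range(rounds):
        actions = []
        for p in range(2):
            i = last[p]
            denom = max(t, 1)
            # Switch to j with probability proportional to positive average
            # regret; stay at the last action with the leftover probability.
            probs = [max(R[p][i][j], 0.0) / denom / mu for j in range(n)]
            probs[i] = 0.0
            probs[i] = 1.0 - sum(probs)
            actions.append(rng.choices(range(n), weights=probs)[0])
        for p in range(2):
            i, opp = actions[p], actions[1 - p]
            for j in range(n):
                R[p][i][j] += PAYOFF[j][opp] - PAYOFF[i][opp]
        last = actions
        joint[tuple(actions)] = joint.get(tuple(actions), 0) + 1
    dist = {k: v / rounds for k, v in joint.items()}
    max_regret = max(R[p][i][j] / rounds
                     for p in range(2) for i in range(n) for j in range(n))
    return dist, max_regret

dist, max_reg = run_regret_matching()
print(sorted(dist.items()))  # empirical joint frequencies approximate a CE
```

As the number of rounds grows, the maximum average internal regret vanishes, and the empirical frequency of joint play approaches the set of correlated equilibria. The paper's contribution is the analogue of this guarantee for extensive-form games, where the sequential structure and partial information make the problem substantially harder.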

Link to the paper “No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium”
