Team: Graphes, Algorithmes et Combinatoire
Conference paper, Year: 2017

Hedging under uncertainty: regret minimization meets exponentially fast convergence

Abstract

This paper examines the problem of multi-agent learning in N-person non-cooperative games. For concreteness, we focus on the so-called “hedge” variant of the exponential weights (EW) algorithm, one of the most widely studied algorithmic schemes for regret minimization in online learning. In this multi-agent context, we show that a) dominated strategies become extinct (a.s.); and b) in generic games, pure Nash equilibria are attracting with high probability, even in the presence of uncertainty and noise of arbitrarily high variance. Moreover, if the algorithm’s step-size does not decay too fast, we show that these properties occur at a quasi-exponential rate – that is, much faster than the algorithm’s O(1/√T) worst-case regret guarantee would suggest.
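For illustration, here is a minimal sketch of the Hedge (exponential weights) update described above, run by two players with noisy payoff feedback and a slowly decaying step size. The Prisoner's-Dilemma payoff matrices, the Gaussian noise model and the step-size schedule gamma_t = t^(-0.6) are assumptions made for this example, not taken from the paper.

import numpy as np

# Minimal sketch (not the authors' code): multi-agent Hedge / exponential weights
# with noisy payoff observations and a step size that does not decay too fast.
rng = np.random.default_rng(0)

A = np.array([[3.0, 0.0], [5.0, 1.0]])  # row player's payoffs: action 1 strictly dominates
B = A.T                                 # symmetric game: column player's payoffs

def hedge_step(scores, payoffs, gamma, noise_std=1.0):
    """One Hedge update: accumulate noisy payoffs, then re-exponentiate."""
    scores = scores + gamma * (payoffs + rng.normal(0.0, noise_std, payoffs.shape))
    w = np.exp(scores - scores.max())   # subtract the max for numerical stability
    return scores, w / w.sum()

y1, y2 = np.zeros(2), np.zeros(2)           # cumulative scores
x1, x2 = np.full(2, 0.5), np.full(2, 0.5)   # mixed strategies, start uniform

for t in range(1, 5001):
    gamma = t ** -0.6                        # step size decaying slower than 1/t
    y1, x1 = hedge_step(y1, A @ x2, gamma)   # row's expected payoff of each pure action
    y2, x2 = hedge_step(y2, B.T @ x1, gamma) # column's expected payoff of each pure action

print(x1, x2)  # both players put almost all mass on the dominant action

In this run the strictly dominated action becomes extinct and play concentrates on the game's unique pure Nash equilibrium after relatively few iterations, in line with claims (a) and (b) of the abstract, despite the added payoff noise.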

Dates and versions

hal-01382290, version 1 (16-10-2016)

Identifiers

Cite

Johanne Cohen, Amélie Héliou, Panayotis Mertikopoulos. Hedging under uncertainty: regret minimization meets exponentially fast convergence. Symposium on Algorithmic Game Theory (SAGT) 2017, Sep 2017, L'Aquila, Italy. ⟨10.1007/978-3-319-66700-3_20⟩. ⟨hal-01382290⟩