Salomon A., Forges F. (2015), Bayesian repeated games and reputation, Journal of Economic Theory, 159, p. 70-104
We consider two-person undiscounted and discounted infinitely repeated games in which every player privately knows his own payoffs (private values). Under a further assumption (existence of uniform punishment strategies), the Nash equilibria of the Bayesian infinitely repeated game without discounting are payoff-equivalent to tractable, completely revealing, equilibria. This characterization does not apply to discounted games with sufficiently patient players. We show that in a class of public good games, the set of Nash equilibrium payoffs of the undiscounted game can be empty, while limit (perfect Bayesian) Nash equilibrium payoffs of the discounted game, as players become increasingly patient, do exist. These equilibria share some features with those of two-sided reputation models.
Salomon A., Audibert J-Y. (2014), Robustness of stochastic bandit policies, Theoretical Computer Science, 519, p. 46-67
This paper studies the deviations of the regret in a stochastic multi-armed bandit problem. When the total number of plays n is known beforehand by the agent, Audibert et al. exhibit a policy such that with probability at least 1-1/n, the regret of the policy is of order log n. They have also shown that such a property is not shared by the popular ucb1 policy of Auer et al. This work first answers an open question: it extends this negative result to any anytime policy (i.e. any policy that does not take the number of plays n into account). Another contribution of this paper is to design robust anytime policies for specific multi-armed bandit problems in which some restrictions are put on the set of possible distributions of the different arms. We also show that, for any policy (i.e. even when the number of plays n is known), the regret is of order log n with probability at least 1-1/n, so that the policy of Audibert et al. has the best possible deviation properties.
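To fix ideas on the objects discussed in this abstract, here is a minimal textbook sketch of the ucb1 policy of Auer et al. on a Bernoulli bandit, together with its (pseudo-)regret. This is an illustrative implementation, not the code of the paper; the function name, the seed, and the choice of Bernoulli arms are ours.

```python
import math
import random

def ucb1(means, n, seed=0):
    """Run the UCB1 policy on a Bernoulli bandit with the given arm means
    for n plays; return the pseudo-regret (a textbook sketch, not the
    paper's code)."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k        # number of pulls of each arm
    sums = [0.0] * k        # cumulative reward of each arm
    pulls = []
    for t in range(n):
        if t < k:
            arm = t         # play each arm once to initialise
        else:
            # UCB index = empirical mean + exploration bonus
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t + 1) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        pulls.append(arm)
    best = max(means)
    # expected shortfall relative to always playing the best arm
    return sum(best - means[a] for a in pulls)
```

On a typical run the pseudo-regret grows like log n; the point of the paper is that the *deviations* of the regret around this order can be large for ucb1 and, more generally, for any anytime policy.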
Salomon A., Vieille N., Rosenberg D. (2013), On games of strategic experimentation, Games and Economic Behavior, 82, p. 31-51
We study a class of symmetric strategic experimentation games. Each of two players faces an (exponential) two-armed bandit problem, and must decide when to stop experimenting with the risky arm. The equilibrium amount of experimentation depends on the degree to which experimentation outcomes are observed, and on the correlation between the two individual bandit problems. When experimentation outcomes are public, the game is basically one of strategic complementarities. When experimentation decisions are public, but outcomes are private, the strategic interaction is more complex. We fully characterize the equilibrium behavior in both informational setups, leading to a clear comparison between the two. In particular, equilibrium payoffs are higher when experimentation outcomes are public.
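In the exponential-bandit model underlying this class of games, a risky arm is either good (it then yields a success at an exponentially distributed time with known intensity) or bad (it never yields anything), and a player who has seen no success by time t revises her belief downward by Bayes' rule. A minimal sketch of that update, assuming this standard specification (the function name is ours):

```python
import math

def belief_no_success(p0, lam, t):
    """Posterior probability that the risky arm is good, given prior p0,
    success intensity lam, and no success observed up to time t.
    Bayes: P(good | no success) = p0*e^{-lam*t} / (p0*e^{-lam*t} + 1 - p0)."""
    good = p0 * math.exp(-lam * t)   # prior weight on "good", discounted by silence
    return good / (good + 1.0 - p0)
```

The belief decreases monotonically in t toward zero, which is what makes "when to stop experimenting" the relevant decision.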
El Alaoui I., Audibert J-Y., Salomon A. (2013), Lower Bounds and Selectivity of Weak-Consistent Policies in Stochastic Multi-Armed Bandit Problem, Journal of Machine Learning Research, 14, 1, p. 187-207
This paper is devoted to regret lower bounds in the classical model of stochastic multi-armed bandit. A well-known result of Lai and Robbins, later extended by Burnetas and Katehakis, established a logarithmic lower bound for all consistent policies. We relax the notion of consistency, and exhibit a generalisation of the bound. We also study the existence of logarithmic bounds in general and in the case of Hannan consistency. Moreover, we prove that it is impossible to design an adaptive policy that would select the best of two algorithms by taking advantage of the properties of the environment. To get these results, we study variants of popular Upper Confidence Bounds (UCB) policies.
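As a concrete instance of the classical bound this paper generalises: for Bernoulli arms, Lai and Robbins show that any consistent policy must pull a suboptimal arm of mean mu at least (1+o(1))·log(n)/KL(mu, mu*) times, where KL is the binary relative entropy and mu* the best mean. A sketch computing that constant, for illustration only (this is the classical Bernoulli case, not the paper's relaxed-consistency bound):

```python
import math

def kl_bernoulli(p, q):
    """Binary relative entropy KL(Ber(p) || Ber(q)), the constant that
    governs the Lai-Robbins logarithmic lower bound."""
    eps = 1e-12                      # clamp away from 0 and 1 for numerical safety
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def lai_robbins_pulls(mu, mu_star, n):
    """Asymptotic lower bound on the expected number of pulls of a
    suboptimal arm of mean mu when the best arm has mean mu_star."""
    return math.log(n) / kl_bernoulli(mu, mu_star)
```

The closer mu is to mu*, the smaller the KL term and the more forced exploration the bound requires, which is why logarithmic regret is unavoidable for consistent policies.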
Salomon A., Forges F. (2013), Bayesian Repeated Games, Paris, Université Paris-Dauphine, 38
We consider Bayesian games, with independent private values, in which uniform punishment strategies are available. We establish that the Nash equilibria of the Bayesian infinitely repeated game without discounting are payoff equivalent to tractable separating (i.e., completely revealing) equilibria and can be achieved as interim cooperative solutions of the initial Bayesian game. We also show, on a public good example, that the set of Nash equilibrium payoffs of the undiscounted game can be empty, while limit Nash equilibrium payoffs of the discounted game, as players become infinitely patient, do exist.