Thesis. Year: 2022

Close-to-optimal policies for Markovian bandits

Politiques quasi-optimales de bandits Markoviens

Abstract

Multi-armed bandits are classical models of sequential decision-making problems in which a controller (or learner) must decide at each time step how to allocate its resources among a finite set of alternatives (called arms, or agents, in the following). They are widely used in online learning today, as they provide theoretical tools for solving practical problems (e.g., ad placement, routing, or demand response). When some information is available about the different arms, the problem falls into the class of Markovian bandits, for which computing optimal allocation policies becomes notoriously difficult as the number of arms grows.

The main objective of this thesis is to provide an innovative framework for the optimal control of distributed stochastic agents. Restless bandit allocation is one particular example, in which the control that can be sent to each arm is restricted to an on/off signal; a toy instance of this setting is sketched in the first code example below. The originality of this framework is its use of a novel method, called the refined mean field approximation, developed in the performance evaluation context [3]. This framework allows the development of control heuristics that are asymptotically optimal as the number of arms goes to infinity and that also outperform existing heuristics [4] for a moderate number of arms. To demonstrate the effectiveness of our approach, we propose to apply this framework to smart grids, to develop control policies for distributed electric appliances [1].

Mean field approximation, which originates from statistical mechanics, has become a common tool for studying systems of interacting agents. The rationale behind mean field approximation is that when the number of agents n is large, each individual has only a minor influence on the mass. The approximation is known to be exact as n goes to infinity, but it can be very poor for finite systems (n = 10 to n = 100). In a series of recent papers [2, 3], we have developed a new approximation, called the refined mean field approximation, which is used as a descriptive method to evaluate the performance of given heuristics. It is much more accurate than the classical approximation while remaining computationally inexpensive: on some problems, the mean field approximation provides estimates that are 50% larger than the exact values for small systems (n = 10), while the refined approximation has less than 1% error. The second code example below illustrates this kind of computation on a toy model.

The main objective of the thesis will be to formalize a theoretical framework in which the refined mean field approximation can be used to solve optimal control problems. This thesis will contribute to making a connection between bandit optimal control and mean field methods. Beyond this, we also plan to define a notion of refined mean field games that will help to design efficient allocation schemes, for example in the context of wireless networks, where mean field games are already used.
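As an illustration of the restless-bandit setting above, here is a minimal simulation sketch. Everything in it is assumed for the example: the two-state arms, the transition matrices P_active and P_passive, the rewards, and the activation budget are made up rather than taken from the thesis, and the fixed-priority rule is only a simple stand-in for the index-based priority policies studied in [4].

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state arms: P_active / P_passive are the transition
# matrices of an arm depending on whether it receives the "on" signal.
P_active = np.array([[0.3, 0.7],
                     [0.6, 0.4]])
P_passive = np.array([[0.9, 0.1],
                      [0.2, 0.8]])
reward = np.array([0.0, 1.0])  # reward collected by an active arm, per state

def simulate(n_arms=100, budget_frac=0.4, horizon=10_000):
    """Average per-arm reward of a fixed-priority policy that activates
    arms in state 1 first, up to a budget of budget_frac * n_arms."""
    states = rng.integers(0, 2, size=n_arms)
    budget = int(budget_frac * n_arms)  # at most `budget` arms can be "on"
    total = 0.0
    for _ in range(horizon):
        order = np.argsort(-states)            # state-1 arms first
        active = np.zeros(n_arms, dtype=bool)
        active[order[:budget]] = True
        total += reward[states[active]].sum()
        # Each arm evolves under its active or passive kernel; with two
        # states, only the probability of moving to state 1 is needed.
        p_one = np.where(active, P_active[states, 1], P_passive[states, 1])
        states = (rng.random(n_arms) < p_one).astype(int)
    return total / (horizon * n_arms)

print(simulate(n_arms=10), simulate(n_arms=1000))
```

The asymptotic-optimality results mentioned in the abstract concern exactly this regime: as n_arms grows, the gap between such priority heuristics and the (intractable) optimal policy vanishes.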
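The refined mean field approximation can likewise be made concrete on a one-dimensional toy model. The sketch below is an illustration under stated assumptions, not code from the thesis: it uses an SIS-style epidemic with made-up rates lam, mu, eps, compares the classical fixed point x* with the refined estimate x* + V/n, and checks both against the exact stationary mean of the corresponding finite birth-death chain. The scalar Lyapunov equation and bias-correction term follow the steady-state expansions of [2, 3].

```python
import numpy as np

# Toy density-dependent model (SIS-style epidemic, made-up rates):
# n agents, x = fraction infected,
#   infection: x -> x + 1/n at rate n * (lam*x*(1-x) + eps*(1-x))
#   recovery:  x -> x - 1/n at rate n * mu*x
lam, mu, eps = 1.0, 0.5, 0.05

f   = lambda x: lam*x*(1-x) + eps*(1-x) - mu*x   # drift of the mean field ODE
fp  = lambda x: lam*(1 - 2*x) - eps - mu         # f'(x)
fpp = -2.0*lam                                   # f''(x), constant here
Q   = lambda x: lam*x*(1-x) + eps*(1-x) + mu*x   # total jump rate (noise term)

# Classical mean field approximation: the stable fixed point of f in (0, 1).
xstar = next(r.real for r in np.roots([-lam, lam - eps - mu, eps])
             if 0 < r.real < 1)

# Refined approximation: E[X^n] ~ xstar + V/n, where the (here scalar)
# Lyapunov equation 2*f'(x*)*W + Q(x*) = 0 gives the limiting variance W
# and the bias correction is V = -f''(x*)*W / (2*f'(x*)).
W = -Q(xstar) / (2 * fp(xstar))
V = -fpp * W / (2 * fp(xstar))

def exact_mean(n):
    """Exact stationary mean of the finite birth-death chain on {0,...,n}."""
    k = np.arange(n)
    birth = n * (lam*(k/n)*(1 - k/n) + eps*(1 - k/n))  # rate k -> k+1
    death = mu * (k + 1)                               # rate k+1 -> k
    w = np.concatenate(([1.0], np.cumprod(birth / death)))
    pi = w / w.sum()
    return (pi * np.arange(n + 1) / n).sum()

for n in (10, 100):
    print(f"n={n}: exact={exact_mean(n):.4f}  "
          f"mean field={xstar:.4f}  refined={xstar + V/n:.4f}")
```

On this toy model the classical approximation returns the same value xstar for every n, while the refined estimate captures most of the O(1/n) bias; this mirrors, on a much simpler example, the accuracy gap quoted in the abstract.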
References

[1] Y. Chen, A. Bušić, and S. P. Meyn. “State estimation for the individual and the population in mean field control with application to demand dispatch”. IEEE Transactions on Automatic Control 62.3 (2017), pp. 1138–1149.
[2] N. Gast, L. Bortolussi, and M. Tribastone. “Size Expansions of Mean Field Approximation: Transient and Steady-State Analysis”. Performance Evaluation (2018).
[3] N. Gast and B. Van Houdt. “A Refined Mean Field Approximation”. Proceedings of the ACM on Measurement and Analysis of Computing Systems (SIGMETRICS) 1.2 (Dec. 2017), 33:1–33:28. doi: 10.1145/3154491.
[4] I. M. Verloop. “Asymptotically optimal priority policies for indexable and nonindexable restless bandits”. Annals of Applied Probability 26.4 (Aug. 2016), pp. 1947–1995. doi: 10.1214/15-AAP1137.
Main file

YAN_2022_archivage.pdf (7.33 MB)
Origin: Version validated by the jury (STAR)

Dates and versions

tel-04068056, version 1 (09-01-2023)
tel-04068056, version 2 (13-04-2023)

Identifiers

  • HAL Id: tel-04068056, version 2

Cite

Yan Chen. Close-to-optimal policies for Markovian bandits. Computer Arithmetic. Université Grenoble Alpes [2020-..], 2022. English. ⟨NNT : 2022GRALM046⟩. ⟨tel-04068056v2⟩
165 Views
44 Downloads
