Research Report, 2018

On the Study of Cooperative Multi-Agent Policy Gradient

Abstract

Reinforcement Learning (RL) for decentralized partially observable Markov decision processes (Dec-POMDPs) lags behind the spectacular breakthroughs of single-agent RL, because assumptions that hold in single-agent settings often no longer hold in decentralized multi-agent systems. To tackle this issue, we investigate the foundations of policy gradient methods within the centralized training for decentralized control (CTDC) paradigm. In this paradigm, learning can be accomplished in a centralized manner while execution remains independent. Using this insight, we establish the policy gradient theorem and compatible function approximations for decentralized multi-agent systems. The resulting actor-critic methods preserve decentralized control at the execution phase, yet estimate the policy gradient from collective experiences, guided by a centralized critic, at the training phase. Experiments demonstrate that our policy gradient methods compare favorably against standard RL techniques on benchmarks from the literature.
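
To make the CTDC actor-critic setup described above concrete, here is a minimal sketch in PyTorch, with hypothetical names such as DecentralizedActor, CentralizedCritic and actor_critic_update, assuming discrete actions, one-hot action encoding and a one-step temporal-difference target. It illustrates the paradigm only; it is not the report's exact algorithm, which works with observation histories and compatible function approximators. Each agent's policy conditions solely on its own local observation, while a single centralized critic sees the joint observation-action pair and is used during training only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DecentralizedActor(nn.Module):
    # Per-agent policy pi_i(a_i | o_i): conditions only on agent i's local observation.
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))

class CentralizedCritic(nn.Module):
    # Q(o_1..o_n, a_1..a_n): sees the joint observation-action pair, training time only.
    def __init__(self, joint_obs_dim, n_agents, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(joint_obs_dim + n_agents * n_actions, hidden),
                                 nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, joint_obs, joint_act_onehot):
        return self.net(torch.cat([joint_obs, joint_act_onehot], dim=-1)).squeeze(-1)

def actor_critic_update(actors, critic, batch, actor_opt, critic_opt,
                        n_actions, gamma=0.99):
    # batch: lists of per-agent tensors plus a shared team reward (hypothetical layout).
    obs, actions, rewards, next_obs, next_actions = batch
    joint_obs = torch.cat(obs, dim=-1)
    joint_next_obs = torch.cat(next_obs, dim=-1)
    onehot = lambda a: F.one_hot(a, n_actions).float()
    joint_a = torch.cat([onehot(a) for a in actions], dim=-1)
    joint_next_a = torch.cat([onehot(a) for a in next_actions], dim=-1)

    # Centralized critic: one-step TD error on the joint action value.
    q = critic(joint_obs, joint_a)
    with torch.no_grad():
        target = rewards + gamma * critic(joint_next_obs, joint_next_a)
    critic_loss = ((q - target) ** 2).mean()
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Decentralized actors: each agent follows the gradient of its own log-policy,
    # weighted by the centralized critic's value estimate (actor_opt is assumed to
    # cover the parameters of all actors).
    weight = critic(joint_obs, joint_a).detach()
    actor_loss = -sum(actors[i](obs[i]).log_prob(actions[i]) * weight
                      for i in range(len(actors))).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

At execution time only the actors are deployed, each acting on its own observation stream; the centralized critic, and with it the need for joint information, is no longer required once training is over.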
Main file: RR-9188.pdf (2.07 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01821677, version 1 (22-06-2018)
hal-01821677, version 2 (17-07-2018)

Identifiers

  • HAL Id: hal-01821677, version 1

Cite

Guillaume Bono, Jilles Steeve Dibangoye, Laëtitia Matignon, Florian Pereyron, Olivier Simonin. On the Study of Cooperative Multi-Agent Policy Gradient. [Research Report] RR-9188, INSA Lyon; INRIA. 2018. ⟨hal-01821677v1⟩
