Finite-time analysis of the multiarmed bandit problem, Machine Learning, vol. 47, pp. 235–256, 2002.
Reinforcement learning with high-dimensional continuous actions, 1993.
Learning to act using real-time dynamic programming, Artificial Intelligence, vol. 72, issue 1–2, pp. 81–138, 1995.
DOI: 10.1016/0004-3702(94)00011-O
Dynamic Programming, 1957.
Arpad Rimmel, Fabien Teytaud, Olivier Teytaud, Paul Vayssière, and Ziqin Yu. Scalability and parallelization of Monte-Carlo tree search, Computers and Games, pp. 48–58, 2010.
Decision-theoretic planning: Structural assumptions and computational leverage, Journal of Artificial Intelligence Research, vol. 11, pp. 1–94, 1999.
Reinforcement learning methods for continuous-time Markov decision problems, Advances in Neural Information Processing Systems, pp. 393–400, 1994.
R-max: a general polynomial time algorithm for near-optimal reinforcement learning, Journal of Machine Learning Research, vol. 3, pp. 213–231, 2001.
Open loop optimistic planning, Conference on Learning Theory, 2010.
URL: https://hal.archives-ouvertes.fr/hal-00943119
Online optimization in X-armed bandits, NIPS, pp. 201–208, 2008.
URL: https://hal.archives-ouvertes.fr/inria-00329797
Optimistic planning for sparsely stochastic systems, Proceedings of the 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, pp. 48–55, 2011.
On the parallelization of UCT, CGW 2007, pp. 93–101, 2007.
An Adaptive Sampling Algorithm for Solving Markov Decision Processes, Operations Research, vol. 53, issue 1, pp. 126–139, 2005.
DOI: 10.1287/opre.1040.0145
Jaap van den Herik. Parallel Monte-Carlo tree search, Computers and Games, pp. 60–71, 2008.
Bandit algorithms for tree search, 2007.
URL: https://hal.archives-ouvertes.fr/inria-00150207
Reinforcement Learning Using Neural Networks with Applications to Motor Control, 2002.
URL: https://hal.archives-ouvertes.fr/tel-00003985
Applying online search techniques to continuous-state reinforcement learning, Proceedings of the Fifteenth National Conference on Artificial Intelligence, pp. 753–760, 1998.
Tree-based batch mode reinforcement learning, Journal of Machine Learning Research, vol. 6, pp. 503–556, 2005.
Q-Learning in Continuous State and Action Spaces, Australian Joint Conference on Artificial Intelligence, pp. 417–428, 1999.
DOI: 10.1007/3-540-46695-9_35
Exploration exploitation in Go: UCT for Monte-Carlo Go, Twentieth Annual Conference on Neural Information Processing Systems (NIPS), 2006.
URL: https://hal.archives-ouvertes.fr/hal-00115330
The parallelization of Monte-Carlo planning, ICINCO, 2008.
URL: https://hal.archives-ouvertes.fr/inria-00287867
Continuous-time hierarchical reinforcement learning, Proceedings of the Eighteenth International Conference on Machine Learning, pp. 186–193, 2001.
Reinforcement learning in feedback control: challenges and benchmarks from technical process control, Machine Learning, pp. 137–169, 2011.
A formal basis for the heuristic determination of minimum cost paths, Systems Science and Cybernetics, pp. 100–107, 1968.
Dynamic Programming and Markov Processes, 1960.
Optimistic Planning of Deterministic Systems, pp. 151–164, 2008.
DOI: 10.1007/978-3-540-89722-4_12
URL: https://hal.archives-ouvertes.fr/hal-00830182
Near-optimal regret bounds for reinforcement learning, Journal of Machine Learning Research, vol. 11, pp. 1563–1600, 2010.
Lipschitzian optimization without the Lipschitz constant, Journal of Optimization Theory and Applications, vol. 79, issue 1, pp. 157–181, 1993.
DOI: 10.1007/BF00941892
A sparse sampling algorithm for near-optimal planning in large Markov decision processes.
Bandit Based Monte-Carlo Planning, ECML, pp. 282–293, 2006.
DOI: 10.1007/11871842_29
URL: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.102.1296
Real-time adaptive A*, Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS '06, pp. 281–288, 2006.
DOI: 10.1145/1160633.1160682
Incremental heuristic search in AI, AI Magazine, vol. 25, issue 2, pp. 99–112, 2004.
Depth-first iterative-deepening: An optimal admissible tree search, Artificial Intelligence, vol. 27, issue 1, pp. 97–109, 1985.
Asymptotically efficient adaptive allocation rules, Advances in Applied Mathematics, vol. 6, pp. 4–22, 1985.
Reinforcement learning in continuous action spaces through sequential Monte Carlo methods, NIPS, 2007.
ARA*: Anytime A* with provable bounds on sub-optimality, Advances in Neural Information Processing Systems, 2004.
Optimized Look-ahead Tree Search Policies, European Workshop on Reinforcement Learning (EWRL'9), 2011.
DOI: 10.1007/978-3-642-29946-9_20
URL: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.231.3017
Sample-based planning for continuous action Markov decision processes, ICAPS, 2011.
Exploration of multi-state environments: Local measures and back-propagation of uncertainty, Machine Learning, pp. 117–154, 1999.
Continuous-action Q-learning, Machine Learning, vol. 49, issue 2–3, pp. 247–265, 2002.
Optimistic optimization of a deterministic function without the knowledge of its smoothness, Advances in Neural Information Processing Systems, 2011.
URL: https://hal.archives-ouvertes.fr/hal-00830143
Efficient continuous-time reinforcement learning with adaptive state graphs, Proceedings of the 18th European Conference on Machine Learning, ECML '07, pp. 250–261, 2007.
Binary action search for learning continuous-action control policies, Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, p. 100, 2009.
DOI: 10.1145/1553374.1553476
Learning continuous-action control policies, 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, pp. 169–176, 2009.
DOI: 10.1109/ADPRL.2009.4927541
On-line search for solving Markov decision processes via heuristic sampling, ECAI, pp. 530–534, 2004.
Markov Decision Processes: Discrete Stochastic Dynamic Programming, 1994.
Reinforcement learning with factored states and actions, Journal of Machine Learning Research, vol. 5, pp. 1063–1088, 2004.
Optimal and efficient path planning for partially-known environments, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '94), pp. 3310–3317, 1994.
The focussed D* algorithm for real-time replanning, Proceedings of the International Joint Conference on Artificial Intelligence, pp. 1652–1659, 1995.
Dyna, an integrated architecture for learning, planning, and reacting, ACM SIGART Bulletin, vol. 2, issue 4, pp. 160–163, 1991.
DOI: 10.1145/122344.122377
The many faces of optimism, Proceedings of the 25th International Conference on Machine Learning, ICML '08, pp. 1048–1055, 2008.
DOI: 10.1145/1390156.1390288
On-line policy improvement using Monte-Carlo search, Neural Information Processing Systems, pp. 1068–1074, 1996.
Learning from Delayed Rewards, 1989.