Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient
Standard
Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient. / Tosatto, Samuele; Carvalho, Joao; Peters, Jan.
In: IEEE T PATTERN ANAL, Vol. 44, No. 10, 10.2022, p. 5996-6010.
Research output: Contribution to journal › Journal article › Research › peer-review
Bibtex
@article{Tosatto2022Batch,
  title   = {Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient},
  author  = {Tosatto, Samuele and Carvalho, Joao and Peters, Jan},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year    = {2022},
  month   = oct,
  volume  = {44},
  number  = {10},
  pages   = {5996--6010},
  doi     = {10.1109/TPAMI.2021.3088063},
}
RIS
TY - JOUR
T1 - Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient
AU - Tosatto, Samuele
AU - Carvalho, Joao
AU - Peters, Jan
PY - 2022/10
Y1 - 2022/10
N2 - Off-policy reinforcement learning (RL) holds the promise of better data efficiency, as it allows sample reuse and potentially enables safe interaction with the environment. Current off-policy policy gradient methods suffer from either high bias or high variance, often delivering unreliable estimates. The price of this inefficiency becomes evident in real-world scenarios such as interaction-driven robot learning, where the success of RL has been rather limited and a very high sample cost hinders straightforward application. In this paper, we propose a nonparametric Bellman equation, which can be solved in closed form. The solution is differentiable w.r.t. the policy parameters and gives access to an estimate of the policy gradient. In this way, we avoid the high variance of importance-sampling approaches and the high bias of semi-gradient methods. We empirically analyze the quality of our gradient estimate against state-of-the-art methods and show that it outperforms the baselines in terms of sample efficiency on classical control tasks.
AB - Off-policy reinforcement learning (RL) holds the promise of better data efficiency, as it allows sample reuse and potentially enables safe interaction with the environment. Current off-policy policy gradient methods suffer from either high bias or high variance, often delivering unreliable estimates. The price of this inefficiency becomes evident in real-world scenarios such as interaction-driven robot learning, where the success of RL has been rather limited and a very high sample cost hinders straightforward application. In this paper, we propose a nonparametric Bellman equation, which can be solved in closed form. The solution is differentiable w.r.t. the policy parameters and gives access to an estimate of the policy gradient. In this way, we avoid the high variance of importance-sampling approaches and the high bias of semi-gradient methods. We empirically analyze the quality of our gradient estimate against state-of-the-art methods and show that it outperforms the baselines in terms of sample efficiency on classical control tasks.
U2 - 10.1109/TPAMI.2021.3088063
DO - 10.1109/TPAMI.2021.3088063
M3 - Journal article
C2 - 34106848
VL - 44
SP - 5996
EP - 6010
JO - IEEE T PATTERN ANAL
JF - IEEE T PATTERN ANAL
SN - 0162-8828
IS - 10
ER -
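
The abstract's central object, the closed-form solution of a nonparametric Bellman equation, can be sketched compactly. Under a kernel model built from the n transitions in the batch, the Bellman equation reduces to a finite linear system (the notation here is an illustrative reading of the abstract, not taken verbatim from the paper):

\[ \mathbf{q}_\pi = \mathbf{r} + \gamma P_\pi \mathbf{q}_\pi \;\Longrightarrow\; \mathbf{q}_\pi = (I - \gamma P_\pi)^{-1} \mathbf{r}, \qquad \nabla_\theta \mathbf{q}_\pi = \gamma (I - \gamma P_\pi)^{-1} (\nabla_\theta P_\pi)\, \mathbf{q}_\pi, \]

where \(\mathbf{r}\) stacks the observed rewards and \(P_\pi\) is an n-by-n matrix of kernel-smoothed transition weights that depends smoothly on the policy parameters \(\theta\). Because the fixed point is available in closed form, its gradient needs neither importance-sampling ratios nor the truncation that biases semi-gradient methods.

A minimal sketch of this pipeline in JAX follows. All names (rbf, policy_mean, q_closed_form), the Gaussian RBF kernel, the toy linear policy, and the mean-over-batch objective are assumptions made for illustration; this is not the authors' implementation.

import jax
import jax.numpy as jnp

def rbf(x, y, bandwidth=0.5):
    # Gaussian kernel between two batches of state-action vectors.
    d2 = jnp.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-0.5 * d2 / bandwidth**2)

def policy_mean(theta, s):
    # Toy deterministic linear policy: a = s @ theta (illustrative assumption).
    return s @ theta

def q_closed_form(theta, S, A, R, S_next, gamma=0.95):
    # Actions the current policy would take in the successor states.
    A_next = policy_mean(theta, S_next)
    X = jnp.concatenate([S, A], axis=-1)                 # observed state-actions
    X_next = jnp.concatenate([S_next, A_next], axis=-1)  # successor state-actions
    K = rbf(X_next, X)                                   # kernel similarities
    P = K / jnp.sum(K, axis=1, keepdims=True)            # row-stochastic transition model
    n = R.shape[0]
    # Closed-form fixed point of the kernelized Bellman equation:
    # q = r + gamma * P q  =>  q = (I - gamma * P)^{-1} r.
    q = jnp.linalg.solve(jnp.eye(n) - gamma * P, R)
    # Scalar surrogate objective; the paper weights q by a start-state
    # distribution instead of taking a plain mean.
    return jnp.mean(q)

# Differentiate *through* the closed-form solve: no importance sampling,
# no semi-gradient truncation.
grad_fn = jax.grad(q_closed_form)

key = jax.random.PRNGKey(0)
n, ds, da = 32, 3, 2
ks = jax.random.split(key, 5)
S = jax.random.normal(ks[0], (n, ds))
A = jax.random.normal(ks[1], (n, da))
R = jax.random.normal(ks[2], (n,))
S_next = S + 0.1 * jax.random.normal(ks[3], (n, ds))
theta = 0.1 * jax.random.normal(ks[4], (ds, da))

print(grad_fn(theta, S, A, R, S_next))  # gradient of the batch objective, shape (ds, da)

Running grad_fn returns the gradient of the batch objective with respect to theta: autodiff propagates through both the kernel weights and the linear solve, so the full effect of the policy on the fixed point is captured. This is the property the abstract contrasts with high-variance importance sampling and high-bias semi-gradient estimators.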