Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient

Standard

Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient. / Tosatto, Samuele; Carvalho, Joao; Peters, Jan.

In: IEEE T PATTERN ANAL, Vol. 44, No. 10, 10.2022, p. 5996-6010.

Research output: SCORING: Contribution to journal › SCORING: Journal article › Research › peer-review


Bibtex

@article{05def98d58e54379ac2e33a545174dc4,
title = "Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient",
abstract = "Off-policy reinforcement learning (RL) holds the promise of better data efficiency as it allows sample reuse and potentially enables safe interaction with the environment. Current off-policy policy gradient methods either suffer from high bias or high variance, delivering often unreliable estimates. The price of inefficiency becomes evident in real-world scenarios such as interaction-driven robot learning, where the success of RL has been rather limited, and a very high sample cost hinders straightforward application. In this paper, we propose a nonparametric Bellman equation, which can be solved in closed form. The solution is differentiable w.r.t the policy parameters and gives access to an estimation of the policy gradient. In this way, we avoid the high variance of importance sampling approaches, and the high bias of semi-gradient methods. We empirically analyze the quality of our gradient estimate against state-of-the-art methods, and show that it outperforms the baselines in terms of sample efficiency on classical control tasks.",
author = "Samuele Tosatto and Joao Carvalho and Jan Peters",
year = "2022",
month = oct,
doi = "10.1109/TPAMI.2021.3088063",
language = "English",
volume = "44",
pages = "5996--6010",
journal = "IEEE T SOFTWARE ENG",
issn = "0098-5589",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "10",

}

RIS

TY - JOUR

T1 - Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient

AU - Tosatto, Samuele

AU - Carvalho, Joao

AU - Peters, Jan

PY - 2022/10

Y1 - 2022/10

N2 - Off-policy reinforcement learning (RL) holds the promise of better data efficiency as it allows sample reuse and potentially enables safe interaction with the environment. Current off-policy policy gradient methods either suffer from high bias or high variance, delivering often unreliable estimates. The price of inefficiency becomes evident in real-world scenarios such as interaction-driven robot learning, where the success of RL has been rather limited, and a very high sample cost hinders straightforward application. In this paper, we propose a nonparametric Bellman equation, which can be solved in closed form. The solution is differentiable w.r.t the policy parameters and gives access to an estimation of the policy gradient. In this way, we avoid the high variance of importance sampling approaches, and the high bias of semi-gradient methods. We empirically analyze the quality of our gradient estimate against state-of-the-art methods, and show that it outperforms the baselines in terms of sample efficiency on classical control tasks.

AB - Off-policy reinforcement learning (RL) holds the promise of better data efficiency as it allows sample reuse and potentially enables safe interaction with the environment. Current off-policy policy gradient methods either suffer from high bias or high variance, delivering often unreliable estimates. The price of inefficiency becomes evident in real-world scenarios such as interaction-driven robot learning, where the success of RL has been rather limited, and a very high sample cost hinders straightforward application. In this paper, we propose a nonparametric Bellman equation, which can be solved in closed form. The solution is differentiable w.r.t the policy parameters and gives access to an estimation of the policy gradient. In this way, we avoid the high variance of importance sampling approaches, and the high bias of semi-gradient methods. We empirically analyze the quality of our gradient estimate against state-of-the-art methods, and show that it outperforms the baselines in terms of sample efficiency on classical control tasks.

U2 - 10.1109/TPAMI.2021.3088063

DO - 10.1109/TPAMI.2021.3088063

M3 - SCORING: Journal article

C2 - 34106848

VL - 44

SP - 5996

EP - 6010

JO - IEEE T PATTERN ANAL

JF - IEEE T PATTERN ANAL

SN - 0162-8828

IS - 10

ER -
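
For orientation, the construction summarized in the abstract can be sketched in equations. The following is a minimal illustration assuming a kernel-based representation; the symbols used here (the kernels \kappa_i, the matrix P_\pi, and the weights q) are chosen for exposition and are not necessarily the paper's own notation.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Hedged sketch of a nonparametric (kernel-based) Bellman equation of the kind
% the abstract describes; the notation is assumed here, not quoted from the paper.
Given a batch of transitions $\{(s_i, a_i, r_i, s'_i)\}_{i=1}^{n}$ and
normalized kernels $\kappa_i(s, a)$ centred on the sampled state--action
pairs, represent the action--value function as
\[
  \hat{Q}_\pi(s, a) = \sum_{i=1}^{n} \kappa_i(s, a)\, q_i .
\]
Enforcing the Bellman equation on the samples yields a finite linear system
\[
  q = r + \gamma P_\pi q,
  \qquad
  [P_\pi]_{ij} \approx \int \pi_\theta(a' \mid s'_i)\, \kappa_j(s'_i, a')\, \mathrm{d}a',
\]
with the closed-form solution $q = (I - \gamma P_\pi)^{-1} r$. Since $P_\pi$
depends smoothly on the policy parameters $\theta$, the solution can be
differentiated directly,
\[
  \nabla_\theta q = \gamma\, (I - \gamma P_\pi)^{-1} (\nabla_\theta P_\pi)\, q ,
\]
giving an off-policy estimate of the policy gradient without importance
sampling.
\end{document}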