Publications

Lab main publications

Enhanced Off-Policy Reinforcement Learning with Focused Experience Replay
Year 2021
Month May
Journal IEEE Access [SCIE, IF 3.476, Top 48.17%], vol. 9, pp. 93152-93164
Author Seung-Hyun Kong, I Made Aswin Nahrendra, Dong-Hee Paek
File Enhanced_Off-Policy_Reinforcement_Learning_With_Focused_Experience_Replay.pdf (2.7M)
Link https://ieeexplore.ieee.org/document/9444458

Utilizing the collected experience tuples in the replay buffer (RB) is the primary way of exploiting experiences in off-policy reinforcement learning (RL) algorithms, and the sampling scheme for the experience tuples in the RB can therefore be critical for experience utilization. In this paper, we find that a widely used sampling scheme in off-policy RL suffers from inefficiency due to uneven sampling of experience tuples from the RB. In fact, the conventional uniform sampling of experience tuples in the RB causes severely unbalanced experience utilization, since experiences stored earlier in the RB are sampled with much higher frequency, especially in the early stage of learning. We mitigate this fundamental problem by employing a half-normal sampling probability window that allocates a higher sampling probability to newer experiences in the RB. In addition, we propose general and local size adjustment schemes that determine the standard deviation of the half-normal sampling window, to enhance learning speed and performance and to mitigate temporary performance degradation during training, respectively. For performance demonstration, we apply the proposed sampling technique to state-of-the-art off-policy RL algorithms and test it on various RL benchmark tasks such as the MuJoCo Gym and the CARLA simulator. As a result, the proposed technique shows considerable improvement in learning speed and final performance, especially on tasks with large state and action spaces. Furthermore, the proposed sampling technique increases the stability of the considered RL algorithms, as verified by lower variance of the performance results across different random seeds for network initialization.
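The recency-focused sampling idea described in the abstract can be illustrated with a minimal sketch: a replay buffer whose sampling probability follows a half-normal window over the "age" of each stored transition, so newer experiences are drawn more often. The class name, the sigma_ratio parameter, and the buffer layout below are illustrative assumptions, not the authors' reference implementation.

import numpy as np

class FocusedReplayBuffer:
    # Minimal sketch of recency-focused experience replay:
    # newer transitions receive higher sampling probability via a
    # half-normal window over transition age (age 0 = newest).
    # Names and parameters here are assumptions for illustration only.

    def __init__(self, capacity, sigma_ratio=0.3):
        self.capacity = capacity
        self.sigma_ratio = sigma_ratio  # window std dev as a fraction of buffer size
        self.storage = []
        self.insert_idx = 0  # position where the next transition is written

    def add(self, transition):
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
        else:
            self.storage[self.insert_idx] = transition
        self.insert_idx = (self.insert_idx + 1) % self.capacity

    def sample(self, batch_size):
        n = len(self.storage)
        assert n > 0, "cannot sample from an empty buffer"
        ages = np.arange(n)                           # 0 = newest, n-1 = oldest
        sigma = self.sigma_ratio * n                  # could be adjusted during training
        weights = np.exp(-0.5 * (ages / sigma) ** 2)  # half-normal window over age
        probs = weights / weights.sum()
        positions = (self.insert_idx - 1 - ages) % n  # map age to physical buffer index
        idx = np.random.choice(positions, size=batch_size, p=probs)
        return [self.storage[i] for i in idx]

In this sketch, changing sigma_ratio over the course of training would play the role of the window-size adjustment described in the abstract; the paper's actual general and local adjustment rules are defined in the full text.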