Privacy-preserving Q-learning with functional noise in continuous spaces

B Wang, N Hegde - Advances in Neural Information …, 2019 - proceedings.neurips.cc
Abstract
We consider differentially private algorithms for reinforcement learning in continuous spaces, such that neighboring reward functions are indistinguishable. This protects the reward information from being exploited by methods such as inverse reinforcement learning. Existing studies that guarantee differential privacy are not extendable to infinite state spaces, as the noise level required to ensure privacy grows without bound as the number of states grows. Our aim is to protect the value function approximator, regardless of the number of states queried to the function. This is achieved by adding functional noise to the value function iteratively during training. We show rigorous privacy guarantees by a series of analyses on the kernel of the noise space, the probabilistic bound of such noise samples, and the composition over the iterations. We gain insight into the utility analysis by proving the algorithm's approximate optimality when the state space is discrete. Experiments corroborate our theoretical findings and show improvement over existing approaches.
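The central mechanism described above, adding noise drawn from a function space (rather than per-state noise) to the value function so that every queried state sees a consistent perturbation, can be sketched as lazy sampling from a Gaussian process: each new query is drawn conditioned on all previous draws. The sketch below is illustrative only; the class name, RBF kernel, and noise scale `sigma` are placeholder assumptions, not the paper's calibrated noise level or kernel analysis.

```python
import numpy as np

class FunctionalNoise:
    """Lazily sample one random function g ~ GP(0, sigma^2 * k).

    Each new query point is drawn conditioned on all earlier draws,
    so repeated or nearby state queries receive consistent noise --
    the noise level does not grow with the number of queried states.
    (Hypothetical sketch; sigma and the kernel are placeholders.)
    """

    def __init__(self, sigma=0.1, length_scale=0.5, seed=0):
        self.sigma = sigma
        self.ls = length_scale
        self.xs = []   # previously queried (scalar) states
        self.gs = []   # noise values sampled at those states
        self.rng = np.random.default_rng(seed)

    def _k(self, a, b):
        # RBF kernel on scalar states
        return np.exp(-((a - b) ** 2) / (2 * self.ls ** 2))

    def __call__(self, x):
        if not self.xs:
            g = self.rng.normal(0.0, self.sigma)
        else:
            X = np.array(self.xs)
            # Gram matrix of past queries (jitter for stability)
            K = self._k(X[:, None], X[None, :]) + 1e-8 * np.eye(len(X))
            k_star = self._k(X, x)
            w = np.linalg.solve(K, k_star)
            # GP posterior mean/variance given earlier draws
            mean = w @ np.array(self.gs)
            var = max(self.sigma ** 2 * (1.0 - w @ k_star), 0.0)
            g = self.rng.normal(mean, np.sqrt(var))
        self.xs.append(float(x))
        self.gs.append(float(g))
        return g

# Usage: perturb every value-function query with the same sampled function,
# e.g. noisy_q = q_approx(s) + noise(s) inside each training iteration.
noise = FunctionalNoise(seed=1)
g1 = noise(0.3)
g2 = noise(0.3)   # same state: conditioning makes the draw consistent
```

Because the second query at `0.3` conditions on the first, its posterior variance collapses to (almost) zero, so the perturbation behaves like a single fixed random function rather than fresh per-query noise.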