Abstract
Reinforcement learning (RL) concerns algorithms tasked with learning optimal control policies by interacting with or observing a system. In computer science and other fields in which RL originated, large sample sizes are the norm, because data can be generated at will from a generative model. Recently, RL methods have been adapted for use in clinical trials, where sample sizes are much smaller. Nonparametric methods are common in RL, but they are likely to over-generalize when only limited data are available. This paper proposes a novel methodology for learning optimal policies by incorporating the researcher's partial knowledge of the probability transition structure into an approximate generative model from which synthetic data can be produced. Our method is applied to a scenario in which the researcher must create a medical prescription policy for managing a disease with sporadically appearing symptoms.
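
To illustrate the general idea, the sketch below is a minimal, hypothetical example (not the paper's implementation or its model): a researcher-specified approximate generative model for a two-state disease-management MDP is used to draw synthetic transitions, and tabular fitted Q-iteration is run on the synthetic data set. All states, actions, transition probabilities, and reward values are assumptions chosen purely for illustration.

```python
# Minimal sketch, assuming a hypothetical 2-state, 2-action disease MDP.
# state 0 = symptom-free, state 1 = symptomatic; action 0 = no drug, action 1 = prescribe.
import numpy as np

rng = np.random.default_rng(0)

# P[s, a, s'] encodes the researcher's assumed partial knowledge of the transitions
# (illustrative numbers, not taken from the paper).
P = np.array([[[0.90, 0.10],   # symptom-free, no drug
               [0.95, 0.05]],  # symptom-free, prescribe
              [[0.40, 0.60],   # symptomatic, no drug
               [0.80, 0.20]]]) # symptomatic, prescribe

def reward(s, a):
    # Illustrative cost structure: -1 per symptomatic step, -0.1 per prescription.
    return -1.0 * (s == 1) - 0.1 * (a == 1)

def sample_transitions(n):
    # Approximate generative model: synthetic (s, a, r, s') tuples can be drawn at will.
    s = rng.integers(0, 2, size=n)
    a = rng.integers(0, 2, size=n)
    s_next = np.array([rng.choice(2, p=P[si, ai]) for si, ai in zip(s, a)])
    return s, a, reward(s, a), s_next

def fitted_q_iteration(s, a, r, s_next, gamma=0.95, n_iters=50):
    # Tabular fitted Q-iteration: the "regression" step reduces to a per-(s, a) cell mean.
    Q = np.zeros((2, 2))
    for _ in range(n_iters):
        targets = r + gamma * Q[s_next].max(axis=1)  # Bellman backup targets
        Q_new = np.zeros_like(Q)
        for si in range(2):
            for ai in range(2):
                mask = (s == si) & (a == ai)
                if mask.any():
                    Q_new[si, ai] = targets[mask].mean()
        Q = Q_new
    return Q

s, a, r, s_next = sample_transitions(5000)
Q = fitted_q_iteration(s, a, r, s_next)
print("Q-values:\n", Q)
print("Greedy prescription policy per state:", Q.argmax(axis=1))
```

In this toy setting the greedy policy recovered from the synthetic data simply prescribes when symptoms are present; the point is only to show how an approximate generative model lets a batch RL method such as fitted Q-iteration be applied when real samples are scarce.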
| Original language | American English |
|---|---|
| Journal | Intelligent Decision Technologies |
| Volume | 11 |
| DOIs | |
| State | Published - Jun 22, 2017 |
Keywords
- Decision theory
- Fitted Q-iteration
- marginalized transition models
- nonparametric
- reinforcement learning
- sample size
DC Disciplines
- Physical Sciences and Mathematics
- Mathematics