Reinforcement Learning for Categorical Data and Marginalized Transition Models

Research output: Contribution to conference › Presentation

Abstract

Reinforcement Learning concerns algorithms tasked with learning optimal control policies by interacting with or observing a system. Fitted Q-iteration is a framework in which a regression method is applied iteratively to approximate the value of states and actions. Because the state-action value function rarely has a predictable shape, non-parametric supervised learning methods are typical. This greater modeling flexibility comes at the cost of large data requirements: if only a small amount of data is available, the supervised learning method is likely to over-generalize and approximate the value function poorly. In this paper, we propose using Marginalized Transition Models to estimate the process that produces the observations, and then generating additional observations from this estimated process. Our contention is that using these additional observations reduces the bias produced by the regression method's over-smoothing and can yield better policies than using the original data alone. As a proof-of-concept example, we apply this approach to a scenario mimicking medical prescription policies for a disease with sporadically appearing symptoms.
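The abstract gives no implementation details, but the following Python sketch illustrates the two ingredients it describes: a fitted Q-iteration loop built on a non-parametric regressor, and a data-augmentation step that draws synthetic transitions from an estimated transition model. Everything here is an assumption made for illustration, not the authors' implementation: the extra-trees regressor, the scalar-state and integer-action encoding, the parameter values, and the hypothetical model.sample(state, action) interface standing in for a fitted marginalized transition model.

    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    def fitted_q_iteration(transitions, n_actions, gamma=0.9, n_iters=50):
        # transitions: list of (state, action, reward, next_state) tuples,
        # with scalar states and integer-coded actions for simplicity.
        s, a, r, s2 = (np.asarray(col, dtype=float) for col in zip(*transitions))
        X = np.column_stack([s, a])
        # First iterate: Q_1(s, a) approximates the immediate reward.
        q = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(X, r)
        for _ in range(n_iters - 1):
            # Bellman target: r + gamma * max over a' of Q(s', a').
            next_q = np.column_stack([
                q.predict(np.column_stack([s2, np.full_like(s2, a2)]))
                for a2 in range(n_actions)
            ])
            y = r + gamma * next_q.max(axis=1)
            q = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(X, y)
        return q  # act greedily: argmax over a of q.predict([[state, a]])

    def augment(transitions, model, n_extra, seed=0):
        # Draw synthetic transitions from an estimated transition model;
        # model.sample(state, action) -> (next_state, reward) is assumed.
        rng = np.random.default_rng(seed)
        synthetic = []
        for _ in range(n_extra):
            s, a, _, _ = transitions[rng.integers(len(transitions))]
            s2, r = model.sample(s, a)
            synthetic.append((s, a, r, s2))
        return list(transitions) + synthetic

Under these assumptions, the proposed approach would amount to running fitted_q_iteration on augment(original_data, fitted_model, n_extra) rather than on the original transitions alone, so that the regressor's smoothing is averaged over many model-generated transitions instead of a few observed ones.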

Original language: American English
State: Published - Aug 12 2015
Event: Joint Statistical Meetings (JSM)
Duration: Aug 12 2015 → …

Conference

Conference: Joint Statistical Meetings (JSM)
Period: 08/12/15 → …

Keywords

  • Machine learning
  • Marginalized transition models
  • Markov decision process
  • Reinforcement learning

DC Disciplines

  • Mathematics
  • Physical Sciences and Mathematics
