The University of Edinburgh - Division of Informatics
Forrest Hill & 80 South Bridge


PhD Thesis #9820

Title: Exploration and Inference in Learning from Reinforcement
Authors: Wyatt, J.
Date: 1998
Presented:
Keywords:
Abstract: Recently there has been a good deal of interest in using techniques developed for learning from reinforcement to guide learning in robots. Motivated by the desire to find better robot learning methods, this thesis presents a number of novel extensions to existing techniques for controlling exploration and inference in reinforcement learning. First I distinguish between the well-known exploration-exploitation trade-off and what I term exploration for future exploitation. It is argued that there are many tasks where it is more appropriate to maximise this latter measure. In particular it is appropriate when we want to employ learning algorithms as part of the process of designing a controller. Informed by this insight I develop a number of novel measures of the probability of a particular course of action being the optimal course of action. Estimators are developed for this measure for boolean and non-boolean processes. These are used in turn to develop probability matching techniques for guiding the exploration-exploitation trade-off. A proof is presented that one such method will converge in the limit to the optimal policy. Following this I develop an entropic measure of task-knowledge, based on the previous measure.
Download: POSTSCRIPT COPY
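
The abstract above names its key techniques (probability matching on the probability that an action is optimal, and an entropic measure of task-knowledge) without giving details. As a rough illustration only, the sketch below implements probability matching for a Bernoulli ("boolean") multi-armed bandit with Beta posteriors, plus an entropy over the estimated "which action is optimal" distribution as one plausible reading of the knowledge measure. Every identifier here is hypothetical and not drawn from the thesis itself.

import math
import random

class BernoulliProbabilityMatcher:
    """Illustrative sketch (not the thesis's method): selects each action
    with an estimate of the probability that it is the optimal action,
    via posterior sampling over per-action Beta posteriors."""

    def __init__(self, n_actions):
        # One Beta(alpha, beta) posterior per action; Beta(1, 1) is uniform.
        self.alpha = [1.0] * n_actions
        self.beta = [1.0] * n_actions

    def select_action(self):
        # Drawing one sample from each posterior and taking the argmax
        # selects each action with exactly its posterior probability of
        # being optimal -- the defining property of probability matching.
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, action, reward):
        # Boolean reward: 1 = success, 0 = failure.
        if reward:
            self.alpha[action] += 1.0
        else:
            self.beta[action] += 1.0

    def prob_optimal(self, n_samples=10000):
        # Monte Carlo estimate of P(action a is optimal) for every a.
        wins = [0] * len(self.alpha)
        for _ in range(n_samples):
            samples = [random.betavariate(a, b)
                       for a, b in zip(self.alpha, self.beta)]
            wins[max(range(len(samples)), key=samples.__getitem__)] += 1
        return [w / n_samples for w in wins]

def knowledge_entropy(p_optimal):
    # Entropy (in nats) of the "which action is optimal" distribution;
    # it falls toward zero as the learner becomes certain of the best
    # action, so it can serve as a crude task-knowledge measure.
    return -sum(p * math.log(p) for p in p_optimal if p > 0.0)

Under these assumptions, running the matcher on a bandit and tracking knowledge_entropy(matcher.prob_optimal()) over time gives one concrete picture of how exploration concentrates on the apparently optimal action as certainty about it grows.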

