Abstract
This paper develops a rational computational analysis of the problem
of learning a concept from a small number of positive examples.
Despite its fundamental importance, this learning situation has been
largely neglected by formal modelers in both cognitive psychology and
machine learning. The Bayesian learning framework presented here
provides a principled approach to fundamental questions of inductive
inference that have puzzled many philosophers but few children, such
as how far and in what ways to generalize a concept beyond the
examples encountered. This theory may lead to a better
understanding of human category learning, as well as to machine learning
algorithms that can learn from a human user's examples the way other
humans can.
Joshua B. Tenenbaum
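
As a concrete illustration of the kind of Bayesian generalization from a few positive examples that the abstract describes, here is a minimal sketch in Python. It assumes a toy hypothesis space of integer intervals, a uniform prior, and a "size principle" likelihood for positive-only examples; the hypothesis space, prior, and example values are illustrative assumptions, not the paper's actual stimuli or model.

```python
# Minimal sketch of Bayesian concept generalization from positive examples.
# Assumptions (not from the paper): hypotheses are integer intervals [a, b],
# the prior is uniform, and examples are drawn by "strong sampling" from the
# true concept, giving the size-principle likelihood p(X | h) = 1 / |h|^n.
import numpy as np

def interval_hypotheses(lo=1, hi=100):
    """All intervals [a, b] over the integers lo..hi (toy hypothesis space)."""
    return [(a, b) for a in range(lo, hi + 1) for b in range(a, hi + 1)]

def posterior(examples, hypotheses):
    """Posterior over hypotheses given positive examples only.

    Likelihood: p(X | h) = 1 / |h|^n if every example falls in h, else 0.
    A uniform prior is assumed for simplicity.
    """
    n = len(examples)
    weights = []
    for (a, b) in hypotheses:
        if all(a <= x <= b for x in examples):
            size = b - a + 1
            weights.append(size ** (-n))
        else:
            weights.append(0.0)
    weights = np.array(weights)
    return weights / weights.sum()

def p_generalize(y, examples, hypotheses):
    """Probability that a new item y belongs to the concept, obtained by
    averaging membership over hypotheses weighted by their posterior."""
    post = posterior(examples, hypotheses)
    members = np.array([a <= y <= b for (a, b) in hypotheses], dtype=float)
    return float(members @ post)

if __name__ == "__main__":
    H = interval_hypotheses(1, 100)
    X = [16, 23, 19, 20]              # a few positive examples (illustrative)
    for y in [18, 25, 40, 80]:
        print(y, round(p_generalize(y, X, H), 3))
```

With only these four examples, generalization is sharp near the observed range (y = 18) and falls off quickly for items far outside it (y = 80), which is the behavior the size-principle likelihood is meant to capture: smaller, tighter hypotheses consistent with the data receive exponentially more posterior weight as examples accumulate.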