
Mechanical learners pay a price for Bayesianism

Published online by Cambridge University Press:  12 March 2014

Daniel N. Osherson
Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139

Michael Stob
Department of Mathematics, Calvin College, Grand Rapids, Michigan 49506

Scott Weinstein
Department of Philosophy, University of Pennsylvania, Philadelphia, Pennsylvania 19104

Extract

The price is failure on a class of inductive inference problems that are easily solved, in contrast, by nonBayesian mechanical learners. By “mechanical” is meant “simulable by Turing machine”.

One of the central tenets of Bayesianism, which is common to the heterogeneous collection of views which fall under this rubric, is that hypothesis change proceeds via conditionalization on accumulated evidence, the posterior probability of a given hypothesis on the evidence being computed using Bayes's theorem. We show that this strategy for hypothesis change precludes the solution of certain problems of inductive inference by mechanical means—problems which are solvable by mechanical means when the restriction to this Bayesian strategy is lifted. Our discussion proceeds as follows. After some technical preliminaries, the concept of (formal) learner is introduced along with a criterion of inferential success. Next we specify a class of inductive inference problems, and then define the notion of “Bayesian behavior” on those problems. Finally, we exhibit an inductive inference problem from the specified class such that (a) some nonmechanical Bayesian learner solves the problem, (b) some nonBayesian mechanical learner solves the problem, (c) some mechanical learner manifests Bayesian behavior on the problem, but (d) no mechanical Bayesian learner solves the problem.
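In standard notation (not necessarily that of the paper), the conditionalization strategy at issue updates a prior probability P on accumulated evidence e by assigning each hypothesis h its posterior probability via Bayes's theorem:

\[
P(h \mid e) \;=\; \frac{P(e \mid h)\,P(h)}{P(e)}, \qquad P(e) \;=\; \sum_{h'} P(e \mid h')\,P(h'),
\]

where h' ranges over the competing hypotheses; the countable summation form of the denominator is assumed here only for illustration, and the paper's formal setting may differ.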

Insofar as possible, terminology and notation are drawn from Osherson, Stob, and Weinstein [1986].

Type: Research Article

Copyright © Association for Symbolic Logic 1988


References

Adams, M. and Guillemin, V. [1986], Measure theory and probability, Wadsworth, Belmont, California.
Gold, E. M. [1967], Language identification in the limit, Information and Control, vol. 10, pp. 447–474.
Levy, A. [1979], Basic set theory, Springer-Verlag, Berlin.
Osherson, D., Stob, M., and Weinstein, S. [1986], Systems that learn, MIT Press, Cambridge, Massachusetts.
Putnam, H. [1975], Probability and confirmation, reprinted in his Philosophical papers, vol. 1: Mathematics, matter and method, Cambridge University Press, Cambridge, 1975, pp. 293–304.
Rogers, H. [1967], Theory of recursive functions and effective computability, McGraw-Hill, New York.