Description |
x, 108 pages : illustrations ; 21 cm |
Series |
The Jean Nicod lectures ; 2007 |
Contents |
The problem of induction -- Induction and VC dimension -- Induction and "simplicity" -- Neural networks, support vector machines, and transduction |
Summary |
"In Reliable Reasoning, Gilbert Harman and Sanjeev Kulkarni argue that philosophy and cognitive science can benefit from statistical learning theory (SLT), the theory that lies behind recent advances in machine learning. The philosophical problem of induction, for example, is in part about the reliability of inductive reasoning, where the reliability of a method is measured by its statistically expected percentage of errors - a central topic of SLT." |
|
"After discussing philosophical attempts to evade the problem of induction, Harman and Kulkarni provide an account of the basic framework of SLT and its implications for inductive reasoning. They explain the Vapnik-Chervonenkis (VC) dimension of a set of hypotheses and distinguish two kinds of inductive reasoning, describing fundamental results about the power and limits of those methods in terms of the VC dimension of the hypotheses being considered. The authors discuss various topics in machine learning, including nearest-neighbor methods, neural networks, and support vector machines. Finally, they describe transductive reasoning and offer possible new models of human reasoning suggested by developments in SLT."--BOOK JACKET |
Notes |
"A Bradford book." |
Bibliography |
Includes bibliographical references (pages [99]-104) and index |
Subject |
Reasoning. |
Reliability. |
Induction (Logic) |
Computational learning theory. |
Author |
Harman, Gilbert. |
Kulkarni, Sanjeev. |
LC no. |
2006033527 |
ISBN |
9780262083607 (hardcover : alkaline paper) |