The evolution of computer science from mathematical logic culminated in the 1930s with two landmark papers: Claude Shannon's "A Symbolic Analysis of Switching and Relay Circuits" and Alan Turing's "On Computable Numbers, With an Application to the Entscheidungsproblem." In the history of computer science, Shannon and Turing are towering figures, but the importance of the philosophers and logicians who preceded them is frequently overlooked. - "How Aristotle Created the Computer" by Chris Dixon
Mechanisation of Argumentation:
In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[16] Hobbes famously wrote in Leviathan: "reason is nothing but reckoning".[17] Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, to sit down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate."
Possibilities and Limits of Mathematical Reasoning:
The answer that logicians gave was surprising in two ways. First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI), their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine, a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.[12][21]
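To make the construct concrete, here is a minimal sketch of a Turing machine simulator (a hypothetical illustration, not drawn from the source texts): the machine repeatedly reads the symbol under its head, consults a transition table, writes a symbol, moves left or right, and changes state.

```python
# Minimal Turing machine simulator (illustrative sketch, not from the source texts).
# The transition table maps (state, symbol) -> (new_state, symbol_to_write, head_move).

def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break                  # halt when no rule applies
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Example machine: flip every 0 to 1 and every 1 to 0, moving right until a blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_turing_machine("010011", flip))  # prints 101100
```

However simple, this is the whole idea: a fixed table of rules shuffling 0s and 1s is enough to carry out a mechanical deduction.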
Search:
When access to digital computers became possible in the mid-1950s, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols, and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.[33]

Reasoning as search:
Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called "reasoning as search".[46] The principal difficulty was that, for many problems, the number of possible paths through the "maze" was simply astronomical (a situation known as a "combinatorial explosion"). Researchers would reduce the search space by using heuristics or "rules of thumb" that would eliminate those paths that were unlikely to lead to a solution.[47]

In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the "Logic Theorist" (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead's Principia Mathematica, and find new and more elegant proofs for some.[34] Simon said that they had "solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind."[35] (This was an early statement of the philosophical position John Searle would later call "Strong AI": that machines can contain minds just as human bodies do.)[36]
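The search-with-backtracking pattern described above can be sketched in a few lines. This is a hypothetical illustration (not code from the Logic Theorist or any program cited here), using a toy grid "maze" and a Manhattan-distance heuristic to decide which moves to try first:

```python
# Illustrative sketch of "reasoning as search": depth-first search with backtracking,
# using a heuristic to decide which branches of the "maze" to explore first.
# (Hypothetical example; not code from any of the programs cited above.)

def search(state, goal, successors, heuristic, path=None):
    path = path or [state]
    if state == goal:
        return path
    # Try the most promising successors first, as judged by the heuristic.
    for nxt in sorted(successors(state), key=lambda s: heuristic(s, goal)):
        if nxt in path:            # avoid revisiting states already on this path
            continue
        result = search(nxt, goal, successors, heuristic, path + [nxt])
        if result:                 # a successor led to the goal
            return result
    return None                    # dead end: backtrack

# Tiny 4x4 grid "maze": states are (x, y) cells, moves are the four neighbours.
def neighbours(cell):
    x, y = cell
    return [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 4 and 0 <= y + dy < 4]

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])

print(search((0, 0), (3, 3), neighbours, manhattan))
```

A stricter heuristic would discard unpromising branches outright rather than merely trying them last; that kind of pruning is what made combinatorially explosive search spaces manageable in practice.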
Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[99] In 1963, J. Alan Robinson discovered a simple method to implement deduction on computers: the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[100] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and this soon led to a collaboration with the French researchers Alain Colmerauer and Philippe Roussel, who created the successful logic programming language Prolog.[101] Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permits tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[102]
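As a rough illustration of why rule-like Horn clauses permit tractable computation, here is a minimal forward-chaining sketch (a hypothetical, purely propositional example with no variables or unification, so it is far simpler than Prolog's actual resolution strategy): each rule has a conjunction of premises and a single conclusion, and new facts are derived by repeatedly firing rules whose premises are already known.

```python
# Illustrative sketch of rule-based deduction over Horn-clause-style rules
# (hypothetical example; not Prolog or the resolution/unification algorithm itself).
# Each rule is (premises, conclusion): if all premises are known facts, add the conclusion.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                      # keep firing rules until nothing new is derived
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["human(socrates)"], "mortal(socrates)"),
    (["mortal(socrates)", "philosopher(socrates)"], "famous(socrates)"),
]

print(forward_chain(["human(socrates)", "philosopher(socrates)"], rules))
# derives mortal(socrates) and famous(socrates) in addition to the given facts
```

Because every rule has exactly one conclusion, each pass either adds a fact or stops, so the procedure terminates quickly; unrestricted logical inference offers no such guarantee.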
Critics of the logical approach noted, as Dreyfus had, that human beings rarely use logic when they solve problems. Experiments by psychologists such as Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided evidence of this.[103] McCarthy responded that what people do is irrelevant: what is really needed, he argued, are machines that can solve problems, not machines that think as people do.
Popper argues that science should adopt a methodology based on falsifiability, because no number of experiments can ever prove a theory, but a reproducible experiment or observation can refute one. According to Popper: "non-reproducible single occurrences are of no significance to science. Thus a few stray basic statements contradicting a theory will hardly induce us to reject it as falsified. We shall take it as falsified only if we discover a reproducible effect which refutes the theory".[2]:66 This methodology rests on "an asymmetry between verifiability and falsifiability; an asymmetry which results from the logical form of universal statements. For these are never derivable from singular statements, but can be contradicted by singular statements".[3]
The Logic of Scientific Discovery is famous.[4] The psychologist Harry Guntrip wrote that its publication "greatly stimulated the discussion of the nature of scientific knowledge", including by philosophers who did not completely agree with Popper, such as Thomas Kuhn and Horace Romano Harré.[5] Carl Jung, founder of analytical psychology, valued the work. The biographer Vincent Brome recalls Jung remarking in 1938 that it exposed "some of the shortcomings of science".[6] The historian Peter Gay described Popper's work as "an important treatise in epistemology".[7] The philosopher Bryan Magee wrote that Popper's criticisms of logical positivism were "devastating". In his view, Popper's most important argument against logical positivism is that, while it claimed to be a scientific theory of the world, its central tenet, the verification principle, effectively destroyed all of science.[8] The physicists Alan Sokal and Jean Bricmont argued that critiques of Popper's work have provoked an "irrationalist drift", and that a significant part of the problems that currently affect the philosophy of science "can be traced to ambiguities or inadequacies" in Popper's book.[9]
- https://en.wikipedia.org/wiki/The_Logic_of_Scientific_Discovery
The brain does not learn by implementing a single, global optimization principle within a uniform and undifferentiated neural network (Marblestone et al., 2016). Rather, biological brains are modular, with distinct but interacting subsystems underpinning key functions such as memory, language, and cognitive control (Anderson et al., 2004; Shallice, 1988). This insight from neuroscience has been imported, often tacitly, into many areas of current AI. One illustrative example is recent AI work on attention. Until quite recently, most CNN models worked directly on entire images or video frames, with equal priority given to all image pixels at the earliest stage of processing. The primate visual system works differently. Rather than processing all input in parallel, visual attention shifts strategically among locations and objects, centering processing resources and representational coordinates on a series of regions in turn (Koch and Ullman, 1985; Moore and Zirnsak, 2017; Posner and Petersen, 1990). Detailed neurocomputational models have shown how this piecemeal approach benefits behavior by prioritizing and isolating the information that is relevant at any given moment (Olshausen et al., 1993; Salinas and Abbott, 1997). As such, attentional mechanisms have been a source of inspiration for AI architectures that take "glimpses" of the input image at each step, update internal state representations, and then select the next location to sample (Larochelle and Hinton, 2010; Mnih et al., 2014) (Figure 1A). One such network was able to use this selective attentional mechanism to ignore irrelevant objects in a cluttered scene (Mnih et al., 2014).
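The "glimpse" loop described above can be sketched as follows. This is a heavily simplified, hypothetical illustration rather than the actual architecture of Mnih et al. (2014): the weights here are random and untrained, whereas the real model learns where to look, for example via reinforcement learning.

```python
import numpy as np

# Heavily simplified sketch of a recurrent "glimpse" loop (hypothetical example,
# not the architecture of Mnih et al., 2014). At each step: crop a patch around
# the current location, update a recurrent state, then pick the next location.

rng = np.random.default_rng(0)
image = rng.random((28, 28))                     # stand-in input image
W_in = rng.standard_normal((64, 8 * 8)) * 0.1    # glimpse encoder weights (untrained)
W_rec = rng.standard_normal((64, 64)) * 0.1      # recurrent weights (untrained)
W_loc = rng.standard_normal((2, 64)) * 0.1       # maps state -> next (row, col) shift

def glimpse(img, loc, size=8):
    """Crop a size x size patch near loc, clipped to stay inside the image."""
    r = int(np.clip(loc[0], 0, img.shape[0] - size))
    c = int(np.clip(loc[1], 0, img.shape[1] - size))
    return img[r:r + size, c:c + size].ravel()

state = np.zeros(64)
loc = np.array([10.0, 10.0])
for step in range(6):                            # a fixed number of glimpses
    g = glimpse(image, loc)
    state = np.tanh(W_in @ g + W_rec @ state)    # update internal state
    loc = loc + 4.0 * np.tanh(W_loc @ state)     # choose where to look next
    print(f"step {step}: next glimpse near ({loc[0]:.1f}, {loc[1]:.1f})")
```

The point of the sketch is the control loop itself: the model never sees the whole image at once, only a sequence of small patches whose locations depend on what it has seen so far.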
A. Historical Ideas