Have you ever wondered why AI became so popular only now? Why at this exact moment? There were many reasons for this emergence, but their foundation was laid back in the second half of the 20th century…

(Image: header of the paper)

Here’s the paper: “Making a Mind Versus Modeling the Brain”. Its authors are Hubert L. Dreyfus, professor of philosophy at the University of California at Berkeley, and Stuart E. Dreyfus, professor of industrial engineering and operations research at the same university. The paper describes two approaches to AI:

  1. Symbolic Manipulation (Making a Mind): This approach views computers as systems for manipulating mental symbols, focusing on problem-solving through formal rules. It is rooted in the rationalist tradition and emphasizes the use of logic to achieve intelligent behavior.

  2. Neural Networks (Modeling the Brain): Inspired by neuroscience, this approach seeks to simulate the interactions of neurons to enable learning and pattern recognition, emphasizing a holistic view of intelligence.

Both approaches emerged in the early 1950s and initially showed promise. However, by the 1970s, symbolic manipulation dominated AI research, while neural networks were largely marginalized.

Allen Newell and Herbert Simon proposed that minds and computers are physical symbol systems capable of intelligent behavior through formal rules. This hypothesis became a cornerstone of symbolic AI.
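To make the hypothesis concrete, here is a minimal sketch of symbol manipulation through formal rules — my own illustration, not Newell and Simon’s actual Logic Theorist. Facts are symbols, rules are premise–conclusion pairs, and “intelligent behavior” is exhaustive rule application:

```python
# A tiny forward-chaining inference sketch of the "physical symbol system"
# idea: cognition as formal rules rewriting symbols.
# Facts are strings; rules are (premises, conclusion) pairs.

def forward_chain(facts, rules):
    """Apply rules until no new symbol can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)  # fire the rule: derive a new symbol
                changed = True
    return facts

rules = [
    (["P"], "Q"),       # P -> Q
    (["Q", "R"], "S"),  # Q and R -> S
]
print(sorted(forward_chain(["P", "R"], rules)))  # ['P', 'Q', 'R', 'S']
```

Everything the system “knows” is an explicit symbol, and every inference step is a formal rule — exactly the picture the neural network camp rejected.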

Marvin Minsky and Seymour Papert’s critique of perceptrons (a type of neural network) in their 1969 book Perceptrons contributed significantly to the decline of neural network research. Their analysis highlighted limitations of single-layer perceptrons but was interpreted more broadly, affecting the trajectory of AI research.
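The limitation they analyzed can be shown in a few lines. Below is a sketch (my illustration, not from the paper) of the classic perceptron learning rule: it converges on linearly separable functions like AND, but XOR is not linearly separable, so no single-layer perceptron can fit it:

```python
# Single-layer perceptron with the classic learning rule:
# w += lr * error * x, b += lr * error.

def train_perceptron(samples, epochs=100, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    hits = sum(
        (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == t
        for x, t in samples
    )
    return hits / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(AND)
print("AND accuracy:", accuracy(AND, w, b))  # converges to 1.0

w, b = train_perceptron(XOR)
print("XOR accuracy:", accuracy(XOR, w, b))  # never reaches 1.0
```

A network with a hidden layer solves XOR easily, but effective training methods for multi-layer networks (backpropagation) only became widespread much later, which is part of why the critique had such a lasting effect.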

The symbolic approach aligns with rationalist philosophy, while the neural network approach reflects a more holistic, idealized neuroscience perspective. The former emphasizes formal structures and problem-solving, while the latter focuses on learning and pattern recognition through statistical methods.

Best quotes from the paper:

In the early 1950s, as calculating machines were coming into their own, a few pioneer thinkers began to realize that digital computers could be more than number crunchers. At that point two opposed visions of what computers could be, each with its correlated research program, emerged and struggled for recognition. One faction saw computers as a system for manipulating mental symbols; the other, as a medium for modeling the brain. One sought to use computers to instantiate a formal representation of the world; the other, to simulate the interactions of neurons. One took problem solving as its paradigm of intelligence; the other, learning. One utilized logic; the other, statistics. One school was the heir to the rationalist, reductionist tradition in philosophy; the other viewed itself as idealized, holistic neuroscience.

Both approaches met with immediate and startling success. By 1956 Newell and Simon had succeeded in programming a computer using symbolic representations to solve simple puzzles and prove theorems in the propositional calculus. On the basis of these early impressive results it looked as if the physical symbol system hypothesis was about to be confirmed, and Newell and Simon were understandably euphoric. Simon announced: It is not my aim to surprise or shock you…. But the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until in a visible future the range of problems they can handle will be coextensive with the range to which the human mind has been applied. He and Newell explained: We now have the elements of a theory of heuristic (as contrasted with algorithmic) problem solving; and we can use this theory both to understand human heuristic processes and to simulate such processes with digital computers. Intuition, insight, and learning are no longer exclusive possessions of humans: any large high-speed computer can be programmed to exhibit them also.

Rosenblatt put his ideas to work in a type of device that he called a perceptron. By 1956 Rosenblatt was able to train a perceptron to classify certain types of patterns as similar and to separate these from other patterns that were dissimilar. By 1959 he too was jubilant and felt his approach had been vindicated: It seems clear that the … perceptron introduces a new kind of information processing automaton: For the first time, we have a machine which is capable of having original ideas. As an analogue of the biological brain, the perceptron, more precisely, the theory of statistical separability, seems to come closer to meeting the requirements of a functional explanation of the nervous system than any system previously proposed…. As concept, it would seem that the perceptron has established, beyond doubt, the feasibility and principle of non-human systems which may embody human cognitive functions…. The future of information processing devices which operate on statistical, rather than logical, principles seems to be clearly indicated.

In Minsky’s model of a frame, the “top level” is a developed version of what, in Husserl’s terminology, remains “inviolably the same” in the representation, and Husserl’s predelineations have become “default assignments”: additional features that can normally be expected. The result is a step forward in AI techniques from a passive model of information processing to one that tries to take account of the interactions between a knower and the world. The task of AI thus converges with the task of transcendental phenomenology. Both must try in everyday domains to find frames constructed from a set of primitive predicates and their formal relations.

If Heidegger and Wittgenstein are right, human beings are much more holistic than neural nets. Intelligence has to be motivated by purposes in the organism and goals picked up by the organism from an ongoing culture. If the minimum unit of analysis is that of a whole organism geared into a whole cultural world, neural nets as well as symbolically programmed computers still have a very long way to go.

Some of these quotes even resonate with today’s headlines…