Networks of interconnected simple units can receive input and produce output {connectionism, mind} {connectionism theory} {parallel distributed processing} {neural net}. Connectionist systems have no explicit symbols, concepts, or representations [Anderson, 1964] [Arbib, 1972] [Arbib, 1995] [Bechtel and Abrahamsen, 1991] [Clark, 1989] [Clark, 1993] [Fahlman, 1979] [Feldman and Waltz, 1988] [Hillis, 1985] [Hinton and Anderson, 1981] [Hinton, 1992] [Hopfield and Tank, 1986] [Kableshkov, 1983] [McCulloch and Pitts, 1943] [McCulloch, 1947] [Pao and Ernst, 1982] [Pattee, 1973] [Pattee, 1995] [Pitts and McCulloch, 1947] [Rumelhart and McClelland, 1986].
input
Input can come from single nodes or node sets, each with different connection weights.
process
Connectionist processing can dynamically use constraint satisfaction, energy minimization, or pattern recognition. Intermediate nodes process representations in parallel. Network nodes can have multiple functions and contribute to many representations or processes. Information can reside in connections and/or node-activation patterns. Representations are vectors in a state space. Distributed information allows parallel processing, incremental learning, and continuous variables. Connectionist networks typically have little recursion, much inhibition, artificial learning algorithms, and simple transfer functions.
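Energy minimization, mentioned above, can be sketched with a small Hopfield-style network: symmetric weights, asynchronous unit updates, and an energy function that never increases as the net settles. This is an illustrative sketch, not an implementation from the source; the pattern values and function names are assumptions.

```python
import numpy as np

def energy(W, s):
    """Network energy E = -1/2 * s^T W s; settling cannot increase it."""
    return -0.5 * s @ W @ s

def settle(W, s, max_sweeps=100):
    """Asynchronously update units (+1/-1) until the state stops changing."""
    s = s.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(s)):
            new = 1 if W[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

# Store one pattern via the Hebbian outer product (illustrative pattern),
# then recover it from a corrupted version by minimizing energy.
pattern = np.array([1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)          # no self-connections

noisy = pattern.copy()
noisy[0] = -noisy[0]            # flip one unit
recovered = settle(W, noisy)    # settles back to the stored pattern
```

Settling moves the state downhill in energy, so the corrupted input falls into the basin of the stored pattern.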
process: layers
Software models use three layers of neuron-like units for pattern matching. First layer receives input pattern. Units in second and third layers typically receive input from all units in previous layer. Third layer outputs to display or file. Units can be On or Off: if total input to unit is above threshold, unit is On. Inputs can have adjustable weights. Experimenters set weights, or programs adjust weights based on the match between "training" input patterns and their output patterns.
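The layered threshold units above can be sketched in a few lines. The weights here are experimenter-set illustrative values (an assumption, not trained or from the source); each unit turns On when its total weighted input exceeds its threshold.

```python
import numpy as np

def layer(inputs, weights, threshold=0.5):
    """Each unit receives input from all units in the previous layer;
    a unit is On (1) if its total input exceeds the threshold."""
    totals = weights @ inputs
    return (totals > threshold).astype(int)

# Illustrative experimenter-set weights: 3 input units -> 2 hidden -> 1 output.
W_hidden = np.array([[ 1.0,  1.0, -1.0],
                     [-1.0,  1.0,  1.0]])
W_output = np.array([[ 1.0,  1.0]])

pattern = np.array([1, 1, 0])        # first layer receives the input pattern
hidden  = layer(pattern, W_hidden)   # second layer
output  = layer(hidden, W_output)    # third layer outputs the result
```

In a trained model, a program would adjust `W_hidden` and `W_output` until training input patterns produce their target output patterns.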
Neural nets do not have programs or operations. Neural-net architecture itself provides the information. Controllers proceed layer by layer, processing all units in a layer simultaneously, by parallel processing. Distributed information tolerates degradation: neural nets can still detect patterns if some units fail, and so are more robust than conventional algorithms.
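Graceful degradation can be illustrated with a detector whose evidence is spread across many units, so losing a few units still leaves it above threshold. The pattern, weights, and threshold are illustrative assumptions.

```python
import numpy as np

# A random binary pattern; each of 20 units contributes a little
# evidence toward detecting it (distributed information).
rng = np.random.default_rng(0)
pattern = rng.integers(0, 2, size=20)
weights = np.where(pattern == 1, 1.0, -1.0)

def detect(x, w, threshold=5.0):
    """Detect the pattern if total weighted evidence exceeds threshold."""
    return (w @ (2 * x - 1)) > threshold

intact = detect(pattern, weights)       # detected with all units working

# Simulate failure of three units by zeroing their weights;
# the remaining evidence still clears the threshold.
damaged = weights.copy()
damaged[:3] = 0.0
still_detected = detect(pattern, damaged)
```

Because no single unit carries the pattern, the net degrades gradually rather than failing outright.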
output
Outputs are vectors, possibly with many dimensions. Outputs statistically derive from inputs. All outputs have equal weight. Similar outputs have similar coordinates. Output regions define category examples. Average or optimum examples define categories. Region boundaries change with new examples.
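The idea that average examples define categories, and that region boundaries shift with new examples, can be sketched as a nearest-prototype classifier over output vectors. The class name and example vectors are illustrative assumptions.

```python
import numpy as np

class PrototypeCategories:
    """Categories defined by the average (prototype) of their examples."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def add(self, label, vec):
        """A new example shifts its category's prototype and boundary."""
        self.sums[label] = self.sums.get(label, np.zeros_like(vec, dtype=float)) + vec
        self.counts[label] = self.counts.get(label, 0) + 1

    def prototype(self, label):
        return self.sums[label] / self.counts[label]

    def classify(self, vec):
        """Assign vec to the category with the nearest prototype."""
        return min(self.sums, key=lambda l: np.linalg.norm(vec - self.prototype(l)))

cats = PrototypeCategories()
cats.add("A", np.array([0.0, 0.0]))
cats.add("B", np.array([4.0, 4.0]))
label = cats.classify(np.array([1.0, 1.0]))   # nearer to prototype A
```

Adding further examples to either category moves its prototype, and with it the boundary between the output regions.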
Neural nets can distinguish more than one pattern, using the same weights. Units can code for several representations, and many units code each representation {distributed representation}. Neural nets can recognize similar patterns and in this way appear to generalize.
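Distributed representation and generalization can be shown with two patterns stored over the same eight units: every unit participates in both representations, and a near-match to either pattern is still recognized. Pattern names and values are illustrative assumptions.

```python
import numpy as np

# Two patterns coded over the SAME eight units (distributed representation).
patterns = {
    "stripes": np.array([1, -1, 1, -1, 1, -1, 1, -1]),
    "halves":  np.array([1,  1, 1,  1, -1, -1, -1, -1]),
}
# One weight vector per pattern, sharing the same units.
weights = {name: p / len(p) for name, p in patterns.items()}

def recognize(x):
    """Return the stored pattern whose weights respond most strongly."""
    return max(weights, key=lambda name: weights[name] @ x)

probe = patterns["stripes"].copy()
probe[0] = -probe[0]              # a similar, not identical, pattern
result = recognize(probe)         # still recognized as "stripes"
```

The same weights distinguish both patterns, and the graded response to the corrupted probe is what makes the net appear to generalize.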
Social Sciences>Philosophy>Mind>Theories>Connectionism
Date Modified: 2022.0224