6-Philosophy-Mind-Theories-Connectionism

connectionism

Networks of simple interconnected units can receive input and produce output {connectionism, mind} {connectionism theory} {parallel distributed processing} {neural net}. Connectionist systems have no explicit symbols, concepts, or discrete representations [Anderson, 1964] [Arbib, 1972] [Arbib, 1995] [Bechtel and Abrahamsen, 1991] [Clark, 1989] [Clark, 1993] [Fahlman, 1979] [Feldman and Waltz, 1988] [Hillis, 1985] [Hinton and Anderson, 1981] [Hinton, 1992] [Hopfield and Tank, 1986] [Kableshkov, 1983] [McCulloch and Pitts, 1943] [McCulloch, 1947] [Pao and Ernst, 1982] [Pattee, 1973] [Pattee, 1995] [Pitts and McCulloch, 1947] [Rumelhart and McClelland, 1986].

input

Input can be nodes or node sets, with different weights.

process

Connectionism can dynamically use constraint satisfaction, energy minimization, or pattern recognition. Intermediate nodes process representations in parallel. Network nodes can have multiple functions and contribute to many representations or processes. Connections and/or node-activation patterns can contain information. Representations are vectors in a state space. Distributed information allows parallel processing, gradual learning, and continuous variables. Connectionist networks have little recursion, much inhibition, artificial learning algorithms, and simple transfer functions.
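The weighted-sum-plus-transfer-function behavior described above can be sketched in code. This is an illustrative toy, not from the source: one unit sums its weighted inputs and applies a simple step transfer function, and a layer of such units maps an input vector to an output vector (a distributed representation).

```python
# Illustrative sketch: a connectionist unit computes a weighted sum of
# its inputs and applies a simple step transfer function; a layer of
# such units turns an input vector into an output vector.

def unit_output(inputs, weights, threshold=0.0):
    """Weighted sum passed through a simple step transfer function."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

def layer_output(inputs, weight_matrix, threshold=0.0):
    """Each row of weight_matrix holds one unit's incoming weights."""
    return [unit_output(inputs, w, threshold) for w in weight_matrix]

pattern = [1, 0, 1]
weights = [[0.6, -0.2, 0.7],   # unit 1
           [-0.5, 0.9, 0.1]]   # unit 2
print(layer_output(pattern, weights))  # → [1, 0]
```

Note that each input node contributes to every unit's total, so information about the pattern is spread across the whole output vector.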

process: layers

Software models use three layers of neuron-like units for pattern matching. The first layer receives the input pattern. Units in the second and third layers typically receive input from all units in the previous layer. The third layer sends output to a display or file. Units can be On or Off: if total input to a unit is above its threshold, the unit is On. Inputs can have adjustable weights. Experimenters set weights, or programs adjust weights based on matching between "training" input patterns and their output patterns.
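The three-layer arrangement can be sketched as follows. This is a hedged toy with hand-set weights (the network sizes and weight values are my own, not from the source): the first layer holds the input pattern, each later layer receives input from every unit in the previous layer, and a unit is On (1) when its total input exceeds a threshold.

```python
# Toy three-layer pattern matcher: input layer -> hidden layer ->
# output layer, with every unit fully connected to the previous layer.

def threshold_unit(inputs, weights, threshold):
    """Unit is On (1) if total input exceeds the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

def forward(pattern, layers, threshold=0.5):
    """layers: list of weight matrices, one per non-input layer."""
    activation = pattern
    for weight_matrix in layers:
        activation = [threshold_unit(activation, w, threshold)
                      for w in weight_matrix]
    return activation

# Hand-set weights: 3 input units -> 2 hidden units -> 1 output unit.
hidden_weights = [[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]]
output_weights = [[1.0, 1.0]]
print(forward([1, 1, 0], [hidden_weights, output_weights]))  # → [1]
```

Here the experimenter sets the weights by hand; the learning procedures described later adjust them automatically.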

Neural nets do not have programs or operations; the network architecture itself carries the information. Controllers go from layer to layer, updating all units in a layer simultaneously by parallel processing. Distributed information tolerates degradation: neural nets can still detect patterns if some units fail, and so are more robust than serial algorithms.

output

Outputs are vectors, possibly with many dimensions, and derive statistically from inputs. All outputs have equal weight. Similar outputs have similar coordinates. Regions of output space define categories of examples; average or prototypical examples define categories, and region boundaries change with new examples.
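The prototype-based categorization just described can be sketched concretely. The function names and example vectors here are my own illustration, not from the source: categories are defined by an average (prototype) point in output space, an output is assigned to the nearest prototype, and the prototype (hence the region boundary) shifts as new examples are averaged in.

```python
# Toy prototype categorization: outputs are points in a vector space,
# each category is summarized by its average example, and boundaries
# move as new examples update the average.

def distance(a, b):
    """Euclidean distance between two output vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def categorize(output, prototypes):
    """Assign an output vector to the nearest category prototype."""
    return min(prototypes, key=lambda name: distance(output, prototypes[name]))

def update_prototype(prototype, count, new_example):
    """Running average: the category prototype moves toward new examples."""
    return [(p * count + x) / (count + 1) for p, x in zip(prototype, new_example)]

prototypes = {"A": [0.0, 0.0], "B": [1.0, 1.0]}
print(categorize([0.2, 0.1], prototypes))           # → 'A'
print(update_prototype([0.0, 0.0], 4, [1.0, 0.0]))  # → [0.2, 0.0]
```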

Neural nets can distinguish more than one pattern, using the same weights. Units can code for several representations, and many units code each representation {distributed representation}. Neural nets can recognize similar patterns and in this way appear to generalize.

activation function

A unit's output is a function of its total input {activation function}.
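Two common activation functions can serve as a sketch (my own examples, not from the source): a step function for On/Off units like those above, and a sigmoid for continuous-valued units.

```python
# Two standard activation functions: a step (threshold) function for
# binary On/Off units, and a smooth sigmoid for continuous units.
import math

def step(total_input, threshold=0.0):
    """On (1) if total input exceeds the threshold, else Off (0)."""
    return 1 if total_input > threshold else 0

def sigmoid(total_input):
    """Smoothly maps any total input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-total_input))

print(step(0.7))                # → 1
print(round(sigmoid(0.0), 2))   # → 0.5
```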

backpropagation

Systems can start with random weights, input a training pattern, compare output to the target pattern, slightly reduce weights that make output too high, slightly increase weights that make output too low, and repeat {backpropagation, connectionism} {backward error propagation}. For example, after a neural network has processed input and sent output, teacher circuits signal each node's difference from its expected value and correct the weighting. The system performs the process again; as the process repeats, total error decreases.
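The error-correction loop above can be sketched in simplified form. This toy uses the delta rule on a single linear unit (full backpropagation extends the same error signal back through hidden layers): starting from random weights, each pass compares output to the training target and nudges weights up or down, so repeated passes shrink the total error.

```python
# Simplified delta-rule sketch of the weight-correction loop:
# error = target - output acts as the teacher signal, and each weight
# moves a small step in the direction that reduces that error.
import random

random.seed(0)

def train(examples, passes=200, rate=0.1):
    """Repeatedly present (inputs, target) pairs and correct weights."""
    weights = [random.uniform(-0.5, 0.5) for _ in range(len(examples[0][0]))]
    for _ in range(passes):
        for inputs, target in examples:
            output = sum(x * w for x, w in zip(inputs, weights))
            error = target - output           # teacher signal
            weights = [w + rate * error * x   # raise low, lower high
                       for w, x in zip(weights, inputs)]
    return weights

# Learn a simple pattern: the target equals the first input.
examples = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1)]
weights = train(examples)
print([round(w, 2) for w in weights])  # close to [1.0, 0.0]
```

With each repetition the error signal shrinks, which is the behavior the text describes: total error decreases as the process repeats.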

wake-sleep algorithm

In unsupervised neural networks {Helmholtz machine} {wake-sleep algorithm} with recurrent connections, information first flows from inputs to outputs and adjusts the recurrent (generative) connection strengths. Information then flows from outputs back to inputs {output generation} and adjusts the original (recognition) strengths.
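The two-phase structure can be shown in a very reduced sketch. A full Helmholtz machine uses layers of stochastic binary units; this toy uses single linear "layers" with one recognition strength and one generative strength, purely to illustrate the alternation (all names and values are my own assumptions). Wake phase: a real input drives a hidden value through the recognition strength, and the generative strength is adjusted to reconstruct the input. Sleep phase: a fantasy hidden value generates an input, and the recognition strength is adjusted to recover that hidden value.

```python
# Reduced wake-sleep sketch: r is the recognition (input -> hidden)
# strength, g the generative (hidden -> input) strength. Each phase
# trains the strength used in the opposite direction.
import random

random.seed(1)
r, g = 0.1, 0.1
rate = 0.05

for _ in range(500):
    # Wake phase: real input -> hidden value; train generative strength.
    x = random.choice([1.0, -1.0])
    h = r * x
    g += rate * (x - g * h) * h
    # Sleep phase: fantasy hidden value -> input; train recognition strength.
    h = random.choice([1.0, -1.0])
    x = g * h
    r += rate * (h - r * x) * x

print(round(r * g, 2))  # recognition and generation approximately invert
```

After training, the product r*g approaches 1: generating from a recognized value roughly reproduces the original input.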

6-Philosophy-Mind-Theories-Connectionism-Output

distributed output

Outputs can distribute among nodes {distributed output}.

localist output

Outputs can be nodes {localist output}.
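The two output schemes can be contrasted in a toy example (the category names and codes are my own illustration): a localist output dedicates one node per category, while a distributed output spreads each category over several nodes, so each node participates in more than one category.

```python
# Toy contrast of localist vs distributed output codes over 3 nodes.
localist = {"cat": [1, 0, 0], "dog": [0, 1, 0], "bird": [0, 0, 1]}
distributed = {"cat": [1, 1, 0], "dog": [0, 1, 1], "bird": [1, 0, 1]}

# Count how many categories each node helps encode.
localist_uses = [sum(v[i] for v in localist.values()) for i in range(3)]
distributed_uses = [sum(v[i] for v in distributed.values()) for i in range(3)]
print(localist_uses)     # → [1, 1, 1]  (one node per category)
print(distributed_uses)  # → [2, 2, 2]  (each node shared by two categories)
```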


Date Modified: 2022.0225