neural network

Interconnected units {neural network} can adjust connection strengths to model processes or representations.

Each input-layer unit sends its signal intensity to all middle-layer units, which weight each input.

Each middle-layer unit sends its signal intensity to all output-layer units, which weight each input.
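The two weighted stages above can be sketched in code. This is a minimal sketch; the layer sizes, random weights, and sample input intensities are illustrative assumptions, not from the source.

```python
# Minimal feed-forward sketch: input layer -> middle layer -> output layer.
# Sizes and values are illustrative assumptions.
import random

random.seed(0)

n_in, n_mid, n_out = 3, 2, 1

# Each middle-layer unit weights every input-layer signal.
w_in_mid = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_mid)]
# Each output-layer unit weights every middle-layer signal.
w_mid_out = [[random.uniform(-1, 1) for _ in range(n_mid)] for _ in range(n_out)]

def layer(signals, weights):
    """Each receiving unit sums its weighted incoming signals."""
    return [sum(w * s for w, s in zip(row, signals)) for row in weights]

inputs = [1.0, 0.5, -0.2]          # input-layer signal intensities
middle = layer(inputs, w_in_mid)   # middle-layer activity
output = layer(middle, w_mid_out)  # output-layer activity
print(len(middle), len(output))
```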

The system can use feedback [Hinton, 1992], feed-forward, and/or human intervention to adjust weights (connection strengths).

To calculate adjusted weight W' using feedback, subtract from the original weight W a constant C times the partial derivative D of error function e with respect to that weight: W' = W - C * D(e). The program or programmer specifies constant C and error function e.
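A numeric sketch of this update, assuming one weight W, a linear unit with output W * x, and squared error e = (W*x - t)^2, so that D(e) = 2*x*(W*x - t); the values of x, t, and C are illustrative assumptions.

```python
# Sketch of W' = W - C * D(e) for a single weight.
# e = (W*x - t)^2, so D(e) = 2*x*(W*x - t). Values are illustrative.
W, x, t, C = 0.5, 1.0, 1.0, 0.1

for _ in range(50):
    grad = 2 * x * (W * x - t)   # partial derivative of e with respect to W
    W = W - C * grad             # feedback update

print(round(W, 3))               # W approaches the value that zeros the error
```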

Alternatively, to calculate adjusted weight W' using feedback, add to the original weight W a constant C times the difference between the estimated true amount t and the current amount c: W' = W + C * (t - c). The program or programmer specifies constant C and estimates t.

Widrow-Hoff procedure uses f(s) = s: W' = W + c * (d - f) * X, where d is the desired (target) value, f = f(s) is the unit's output, and X is the input.

Generalized delta procedure uses the sigmoid f(s) = 1 / (1 + e^-s), whose derivative is f * (1 - f): W' = W + c * (d - f) * f * (1 - f) * X.
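Both update rules can be sketched as functions of a weight vector W, input vector X, desired value d, and constant c. The training loop and numeric values are illustrative assumptions.

```python
# Sketches of the Widrow-Hoff and generalized delta updates.
import math

def widrow_hoff(W, X, d, c):
    """Widrow-Hoff: f(s) = s, so W' = W + c * (d - f) * X, per weight."""
    f = sum(w * x for w, x in zip(W, X))        # linear output
    return [w + c * (d - f) * x for w, x in zip(W, X)]

def generalized_delta(W, X, d, c):
    """Generalized delta: f(s) = 1/(1+e^-s), so the update gains f*(1-f)."""
    s = sum(w * x for w, x in zip(W, X))
    f = 1.0 / (1.0 + math.exp(-s))              # sigmoid output
    return [w + c * (d - f) * f * (1 - f) * x for w, x in zip(W, X)]

W = [0.0, 0.0]
X = [1.0, 1.0]
for _ in range(100):
    W = widrow_hoff(W, X, d=1.0, c=0.1)
print(round(sum(w * x for w, x in zip(W, X)), 2))  # output approaches d
```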

Input patterns and output patterns are vectors, so neural networks transform vectors (and so are like tensors). Computation can be serial or parallel (parallel processing).

Note: Units within a layer typically have no connections [Arbib, 2003].

Output units can represent one of the possible input patterns. For example, if the system has 26 output units to detect the 26 alphabet letters, for input pattern A, its output unit is on, and the other 25 output units are off.

Output unit values can represent one of the possible input patterns. For example, if the system has 1 output unit to detect the 26 alphabet letters, for input pattern A, output unit value is 1, and for input pattern Z, output unit value is 26.

The output pattern of the output layer can represent one of the possible input patterns. For example, if the system has 5 output units to detect the 26 alphabet letters, for input pattern A, the output pattern is binary number 00001 = decimal number 1, where 0 is off, 1 is on, and the code for A is 1, code for B is 2, and so on. For input pattern Z, the output pattern is binary number 11010 = decimal number 26.

Output-pattern values can represent one of the possible input patterns. For example, if the system has 2 output units to detect the 26 alphabet letters, with each output unit holding one decimal digit of the letter code, for input pattern A, output-pattern value is 01, and for input pattern Z, output-pattern value is 26.

For an analog system, the output pattern of the output layer can resemble an input pattern. For example, to detect the 26 alphabet letters, the system can use 25 input units and 25 output units. For input pattern A, the output pattern resembles A. Units can have continuous values for different intensities.
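The digital encoding schemes above can be sketched as follows, assuming letter codes A = 1 through Z = 26.

```python
# Sketches of the output encodings above, assuming codes A = 1 .. Z = 26.
letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def one_hot(letter):            # 26 output units: one on, the other 25 off
    i = letters.index(letter)
    return [1 if j == i else 0 for j in range(26)]

def value(letter):              # 1 output unit: its value is the code 1..26
    return letters.index(letter) + 1

def binary(letter):             # 5 output units: the code as a binary number
    code = letters.index(letter) + 1
    return format(code, "05b")

def two_digit(letter):          # 2 output units: decimal digits of the code
    code = letters.index(letter) + 1
    return f"{code:02d}"

print(binary("A"), binary("Z"), two_digit("A"), two_digit("Z"))
```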

uses

Neural networks can model processes or representations.

Large neural networks can recognize more than one pattern and distinguish between them.

They can detect pattern unions and intersections. For example, they can recognize words.

Neural networks can recognize patterns similar to the target pattern, so neural networks can generalize to a category. For example, neural networks can recognize the letter T in various fonts.

Because neural networks have many units, if some units fail, pattern recognition can still work.

Neural networks can use many different functions, so neural networks can model most processes and representations. For example, Gabor functions can represent different neuron types, so neural networks can model brain processes and representations.

Neural networks can use two middle layers, in which recurrent pathways between first and second middle layer further refine processing.

vectors

Input patterns and output patterns are vectors (a, b, c, ...), so neural networks transform vectors and so are like tensors.

feedforward

Neural networks use feed-forward parallel processing.

types: non-adaptive

Hopfield nets do not learn and are non-adaptive neural nets, which cannot model statistics.

types: adaptive

Adaptive neural nets can learn and can model statistical inference and data analysis. Hebbian learning can model principal-component analysis. Probabilistic neural nets can model kernel-discriminant analysis. Hamming net uses minimum distance.
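The Hamming net's minimum-distance idea can be sketched as nearest-exemplar classification by Hamming distance. This is a sketch of the distance rule only, not the full two-layer Hamming net, and the 3x3 exemplar patterns are illustrative assumptions.

```python
# Minimum-distance classification sketch: pick the stored exemplar with
# the smallest Hamming distance to the input bit pattern.
exemplars = {
    "T": [1, 1, 1, 0, 1, 0, 0, 1, 0],  # crude 3x3 "T" (assumed pattern)
    "L": [1, 0, 0, 1, 0, 0, 1, 1, 1],  # crude 3x3 "L" (assumed pattern)
}

def hamming(a, b):
    """Count positions where the two bit patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def classify(pattern):
    """Return the exemplar label at minimum Hamming distance."""
    return min(exemplars, key=lambda k: hamming(exemplars[k], pattern))

noisy_t = [1, 1, 1, 0, 1, 0, 0, 0, 0]  # "T" with one bit flipped
print(classify(noisy_t))
```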

types: adaptive with unsupervised learning

Unsupervised learning uses only internal learning, with no corrections from human modelers. Adaptive Resonance Theory requires no noise to learn and cannot model statistics. Linear-function models, such as Learning Matrix, Sparse Distributed Associative Memory, Fuzzy Associative Memory, and Counterpropagation, are feedforward nets with no hidden layer. Bidirectional Associative Memory uses feedback. Kohonen self-organizing maps and reinforcement learning can model Markov decision processes.

types: adaptive with supervised learning

Supervised learning uses internal learning and corrections from human modelers. Adaline, Madaline, Artmap, Backpropagation, Backpropagation through time, Boltzmann Machine, Brain-State-in-a-Box, Fuzzy Cognitive Map, General Regression Neural Network, Learning Vector Quantization, and Probabilistic Neural Network use feedforward. Perceptrons require no noise to learn and cannot model statistics. Kohonen nets for adaptive vector quantization can model K-means cluster analysis.

brains compared to neural networks

Brains and neural networks use parallel processing, can use recurrent processing, have many units (and so still work if units fail), have input and output vectors, use tensor processing, can generalize, can distinguish, and can use set union and intersection.

Brains use many same-layer neuron cross-connections, but neural networks do not need them because they add no processing power.

The neural-network input layer consists of cortical neuron-array registers that receive from retina and thalamus. Weighting of inputs to the middle layer depends on visual-system knowledge of information about the reference beam. The middle layer is neuron-array registers that store perceptual patterns and make coherent waves. The output layer is perceptions in mental space.

Neurons are not the input-layer, middle-layer, or output-layer units. Units are abstract registers that combine and integrate neurons to represent (complex) numbers. Input layer, middle layer, and output layer are not physical arrays but programmed arrays (in visual and association cortex).

Neural-network processing is not neural processing. Processing uses algorithms that calculate with the numbers in registers. Layers, units, and processing are abstract, not directly physical.


Technical Information

Date Modified: 2022.0224