1-Consciousness-Speculations-Space

cross-sections and three-dimensional space

Flow cross-sections have two dimensions and can represent surfaces. Flow cross-sections can also represent three dimensions {cross-sections and three-dimensional space}. To represent a squat cylinder, cross-section left region can represent cylinder top layer, middle region can represent cylinder middle layer, and right region can represent cylinder bottom layer. Alternatively, the three regions can interleave throughout cross-sections, with cross-section points having top-, middle-, and bottom-layer points. Because cross-sections can represent three dimensions, circuit flows can represent three-dimensional space over time.

layers and three-dimensional space

Because layers can represent two-dimensional images, multiple layers can represent a three-dimensional image {layers and three-dimensional space}. See Figure 1.

One layer can represent a three-dimensional image by skewing. See Figure 2. Left region represents top layer. Middle region represents middle layer. Right region represents bottom layer.

One layer can represent a three-dimensional image by interleaving. See Figure 3. Evenly distributed neuron sets represent top layer, middle layer, and bottom layer.

One topographic-map neuron layer can represent three-dimensional space, and layer series can represent three-dimensional space over time.
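
As a minimal illustration of these layer encodings (a numpy sketch with arbitrary layer values, not a claim about neural implementation), the same three-layer volume can be stored as a stack, as side-by-side regions (skewing), or as interleaved columns:

```python
# Minimal sketch: a stack of 2-D layers as a 3-D volume, plus two single-layer
# encodings of the same volume (skewed and interleaved). Values are illustrative.
import numpy as np

top, middle, bottom = (np.full((4, 4), v) for v in (1, 2, 3))  # three 2-D layers
volume = np.stack([top, middle, bottom])           # shape (3, 4, 4): 3-D from layers

skewed = np.hstack([top, middle, bottom])          # left/middle/right regions, shape (4, 12)
interleaved = np.empty((4, 12))
interleaved[:, 0::3], interleaved[:, 1::3], interleaved[:, 2::3] = top, middle, bottom

# All three arrays carry the same voxel information in different layouts.
assert volume.size == skewed.size == interleaved.size
```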

network to space

A network of nodes and links among nodes {network to space} can represent space. Sense processing uses neuron assemblies to represent nodes and links.

semi-space

Two-dimensional surfaces {pre-space} {semi-space} can add relative distance information to represent three-dimensional spaces. Semi-spaces are like two-and-a-half-dimension sketches [Marr, 1982].

Sense, and computer, processing uses intensity variations to find symbolic primitives, such as zero crossings, edges, contours, and blobs; detect boundaries and brightnesses; and represent two dimensions. From the primitives, sense and computer processing finds relative surface distances, depths, contours, and orientations and uses surface shading, orientation, scaling, and texture to find object and observer spatial relations, to simulate three dimensions. Later, sense and computer processing uses memory and global information to integrate the two-dimensional and depths-and-distances descriptions to build a three-dimensional model for object representation, manipulation, and recognition.

stimuli as media

Stimuli can serve as substrates/media on which to display sensations {stimuli as media}. Sense, and perhaps computer, processing can simulate stimulus input streams from physical space.

surface elements and mental space

Mental-space points are surface elements (differential surfaces), which have direction, distance, and orientation {surface elements and mental space}. Surface elements link to make space. Sense, and perhaps computer, processing can make surface elements in space.

1-Consciousness-Speculations-Space-Biology

adjacency and mental space

Skin touches objects, and touch receptors receive information about objects adjacent to body {adjacency and mental space}. As body moves around in space, mental space expands by adding adjacency information.

angle-comparison computations calculate distances

Eye-accommodation-muscle feedback to vision depth-calculation processes can calculate distances up to about two meters. Metric depth cues can provide distances at all ranges. Observing objects requires at least two eye fixations, which allow vision processing to calculate two different perceived angles, for two different eye, head, and body positions. Vision and body angle-comparison computations can calculate line, surface, feature, and object distances {angle-comparison computations, distances} {distances, angle-comparison computations}.

two sight-line to surface angles

At first eye fixation on a line or surface point, vision calculates a sight-line to point angle. At second eye fixation on a collinear or co-surface point, vision calculates a different sight-line to point angle, because eye, head, and/or body have rotated. At nearest possible line or surface point, sight-line to point angle is 90 degrees. At farthest possible line or surface point, sight-line to point angle approaches 0 degrees. Angle decreases with distance. If angle of sight-line to line or surface is more perpendicular, line or surface point is nearer. If angle of sight-line to line or surface is less perpendicular, line or surface point is farther.

Comparing sight-line angles to two collinear or co-surface points can calculate distance. Angle difference varies inversely with distance. Larger angle change means object is nearer. Smaller angle change means object is farther.
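
A minimal sketch of this inverse relation, assuming small-angle parallax geometry (a far point roughly perpendicular to the baseline between the two fixations; the 0.1-meter baseline is illustrative):

```python
# Minimal small-angle parallax sketch (not a model of cortical processing):
# for a distant point roughly perpendicular to the baseline between two fixations,
# the sight-line angle change is about baseline / distance, so distance ~ baseline / change.
import math

def distance_from_angle_change(baseline_m, angle_change_deg):
    return baseline_m / math.radians(angle_change_deg)

print(distance_from_angle_change(0.1, 1.0))   # larger angle change -> nearer (~5.7 m)
print(distance_from_angle_change(0.1, 0.1))   # smaller angle change -> farther (~57 m)
```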

two visual angles

At first eye fixation on an object edge or contour, object has a retinal visual angle, from which vision calculates object relative size. At second eye fixation on a different object edge or contour, object has a different retinal visual angle, because eye, head, and/or body have rotated. If sight-line to object edge or contour angle is 90 degrees, visual angle is maximum. At other angles, visual angle is less. Visual angle decreases with distance. If sight-line to object edge or contour is more perpendicular, visual angle is larger. If sight-line to object edge or contour is less perpendicular, visual angle is smaller.

Comparing first and second visual angles can calculate object distance. Angle difference varies inversely with distance. Larger angle change means object is nearer. Smaller angle change means object is farther.

two sight-line to point angles

At first eye fixation on an object point, sight-line to point has an angle. At second eye fixation on the same object point, sight-line to point has a different angle, because eye, head, and/or body have rotated. At nearest possible object point, sight-line to point angle is 90 degrees. At other object points, sight-line to point angle is less. Angle decreases with distance. If sight-line to object point is more perpendicular, object is nearer. If sight-line to object point is less perpendicular, object is farther.

Comparing first and second angles can calculate object distance. Angle difference varies inversely with distance. Larger angle change means object is nearer. Smaller angle change means object is farther.

two concave or convex corner angles

The first eye fixation on a concave or convex corner determines its angle. The second eye fixation determines a different angle, because eye, head, and/or body have rotated. Smaller-angle concave corners are farther, and larger-angle concave corners are nearer. Smaller-angle convex corners are nearer, and larger-angle convex corners are farther.

Comparing first and second corner angles can calculate distance. Angle difference varies inversely with distance. Larger angle change means object is nearer. Smaller angle change means object is farther. Angles and vertices use the same reasoning as corners.

body angle comparisons

First eye fixation and second eye fixation have two different eye, head, and/or body positions. The kinesthetic system determines their angle sets and sends kinesthetic angle-difference information to association cortex for comparison with the corresponding vision angle-difference information.

integration

Comparing the two sets of angle differences calculates absolute metric distances. Accumulating distance information allows building three-dimensional-space information.
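
A minimal triangulation sketch of this integration, assuming the baseline between the two fixation positions is known from kinesthetic information and the two sight-line-to-baseline angles are known from vision (the law of sines stands in for whatever computation the brain actually performs):

```python
# Minimal sketch: the two fixation positions and the target point form a triangle.
# Given the baseline and the two sight-line-to-baseline angles, the law of sines
# yields the metric distance from the first position to the target.
import math

def distance_from_two_fixations(baseline_m, angle1_deg, angle2_deg):
    a1, a2 = math.radians(angle1_deg), math.radians(angle2_deg)
    return baseline_m * math.sin(a2) / math.sin(a1 + a2)  # distance from first position

print(distance_from_two_fixations(0.3, 80.0, 85.0))  # roughly 1.16 m for a near target
```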

body surface and mental space

Sensations impinge on body surface in repeated patterns at touch receptors. Nervous system occupies three dimensions and has information about receptor locations. From receptor activity patterns, nervous system builds a three-dimensional sensory surface {body surface and mental space}.

carrier waves and mental space

Senses make a global carrier-wave function, and whole brain-and-body has a carrier-wave function {carrier waves and mental space}. Global functions are regular and form coordinate grids, establishing egocentric space. Local disturbances affect global function to indicate location.

convexity and concavity and mental space

Frontal-lobe region derives three-dimensional images from two-dimensional topographic maps by assigning convexity, concavity, and boundary edges [Horn, 1986] to lines and vertices and making convexities and concavities consistent {convexity and concavity and mental space}.

cortical processing and mental space

Primary-visual-cortex topographic map represents scene intensities. After primary visual cortex, cortical topographic-map neurons {cortical processing and mental space} respond to orientations, locations, and distances [Burkhalter and Van Essen, 1986] [DeValois and DeValois, 1975] [Newsome et al., 1989] [Tootell et al., 1997] [Zeki, 1985]. Topographic maps use thresholds to make boundaries and regions. Vision system sends information to motor and other sense systems [Bridgeman et al., 1997] [Owens, 1987]. Topographic maps use movements, angles, and perspective to add distance and depth by interpolation and extrapolation and represent egocentric space. Brain integrates and synthesizes spatial information [Andersen et al., 1997] [Gross and Graziano, 1995] [Olson et al., 1999].

frames and mental space

Nose, cheeks, and eyebrow ridges frame vision scenes. Silent regions frame sounds. Untouched surrounding areas frame pressures. Neutral-temperature regions frame warm or cool areas. Nose touch sensations frame odors. Mouth touch sensations frame tastes. Silent sensors frame active sensors. Sensations have frames that provide context for near and far locations {frames and mental space}.

memory and mental space

Long-term memory recall makes space {memory and mental space}. Short-term memory builds space modifications. Awaking activates memory, which activates space. Perception and recall occur on space background. Memory is stronger than perception, because people can remember images and override perceptions.

motions and mental space

Retinal regions can receive repeated light-pattern series that correlate with motion {motions and mental space}. For example, when moving toward light source, as visual horizon lowers, source appears lower in visual field. When moving away, source appears higher in visual field. When turning, rotations are around sense organ.

When people move, other objects do not move. Correlated movements belong to body region, and correlated non-movements belong to other region. Moving establishes a boundary between adjacent moving and non-moving regions. Moving is inside region, and non-moving is outside region. In and out make a space axis. When finger slides across surface, or feet walk across ground, touch correlates with vision moving/non-moving boundary.

motor feedback and space

Brain senses, moves, senses, moves, and so on, to have feedback, so brain processes are multisensory and sensorimotor. Visual-motor and touch-motor feedback loops interact to locate surfaces {motor feedback and space}, also using kinesthetic and vestibular systems. Vertical gaze center near midbrain oculomotor nucleus detects up and down motions [Pelphrey et al., 2003] [Tomasello et al., 1999]. Horizontal gaze center near pons abducens nucleus detects right-to-left and left-to-right motions [Löwel and Singer, 1992].

multimodal neurons and mental space

Midbrain tectum and cuneiform nucleus have multimodal neurons, whose axons envelop reticular thalamic nucleus and other thalamic nuclei to map three-dimensional space {multimodal neurons and mental space}.

multiple neurons for multiple space points

To experience multiple space points simultaneously, neuron assemblies have 200-millisecond intervals in which events are simultaneous {multiple neurons for multiple space points}.

topographic map continuum

Topographic-map neurons, dendrites, axons, and synapses are so numerous that overlapping forms a continuum {topographic map continuum}. Perhaps, the continuum carries analog signals and geometric figures, like TV screens, and models continuous space.

1-Consciousness-Speculations-Space-Biology-Boundaries

analog to digital conversion and mental space

Neuron thresholds reduce instantaneous below-threshold input to 0 and set instantaneous above-threshold input to 1. Thresholds differentiate regions by establishing boundaries {analog to digital conversion and mental space}.

boundary and mental space

Brain can compare outgoing (inner) and incoming (outer) signals, which differ. Inner signals have loops and loop patterns and include memories and imaginings. Outer signals have non-looping patterns and include stimuli. Nervous system builds a boundary {boundary and mental space} between inner (self) and outer (other). Boundary is at nervous-system edges. Waking and dreaming rebuild the boundary.

inequalities and boundaries

To trigger a neuron impulse, membrane potential, caused by input neuron impulses, must be greater than neuron threshold potential. Neuron threshold potentials establish inequalities. Lower potential has no effect. Higher potentials cause one impulse. (Higher potentials over time cause higher impulse rate.) Inequalities establish boundaries {inequalities and boundaries}. At space boundaries, one region has response above threshold, and adjacent region has response below threshold. (Neuron thresholds can change.)

lateral inhibition and spatial regions

Adjacent neurons can inhibit central neuron. Such lateral inhibition reduces central-neuron activity. Lateral inhibition can contract regions {lateral inhibition and spatial regions}. Lateral inhibition can move boundaries inwards. Lateral inhibition can suppress and eliminate boundaries. Spreading activation and lateral inhibition can join or separate regions.

spreading activation and spatial regions

Central neuron can excite adjacent neurons. Such spreading activation increases adjacent-neuron activity. Spreading activation can expand regions {spreading activation and spatial regions} {spreading excitation and spatial regions}. Spreading activation can move boundaries outwards. Spreading activation can establish and emphasize boundaries. Spreading activation and lateral inhibition can join or separate regions.
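
A minimal one-dimensional sketch of contraction and expansion, using erosion and dilation as simplified stand-ins for strongly thresholded lateral inhibition and spreading activation:

```python
# Minimal 1-D sketch: with strong lateral inhibition a neuron keeps firing only with
# support from both neighbors (region contracts); with spreading activation one active
# neighbor recruits a quiet neuron (region expands).
import numpy as np

activity = np.array([0, 0, 1, 1, 1, 1, 0, 0])
left, right = np.roll(activity, 1), np.roll(activity, -1)

contracted = activity & left & right      # boundary neurons fall below threshold
expanded = activity | left | right        # neighbors of the region are recruited

print(contracted)   # [0 0 0 1 1 0 0 0]  boundary moves inward, region shrinks
print(expanded)     # [0 1 1 1 1 1 1 0]  boundary moves outward, region grows
```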

1-Consciousness-Speculations-Space-Biology-Coordinates

coordinate transformation and allocentric space

People see objects in space as external and stationary (allocentric) [Rizzolatti et al., 1997] [Velmans, 1993]. Cerebellum and forebrain anticipate, coordinate, and compensate for movements.

Frontal-lobe topographic maps can represent egocentric space [Olson et al., 1999], with vertical, right-left, and front-back directions. Coordinate-origin egocenter is in head center, on a line passing through nosebridge. Space points have directions and distances from egocenter. All points make vector space.

As body, head, or eyes move, egocentric space moves, spatial axes move, and point coordinates and geometric figures transform linearly to new coordinate values [Shepard and Metzler, 1971]. Transformations are translation, rotation, reflection, inversion, and scaling (zooming). Motor processing uses tensor transform functions to describe changes from former to current output-vector field [Pellionisz and Llinás, 1982]. To maintain stationary allocentric space, so point coordinates do not change when body moves, visual processing must cancel egocentric spatial-axis coordinate transformations {coordinate transformation and allocentric space}. Visual processing inverts motor-system tensors to transform egocentric coordinate systems in opposite directions from body movements [Pouget and Sejnowski, 1997]. Topographic maps can describe tensors that transform from egocentric to allocentric space. Topographic maps can represent allocentric space.

example

Translating and rotating make spatial axes change direction. After movement, new axes relate to old axes by coordinate transformations. For example, two-dimensional vector (0,1) can translate on y-axis to make vector (0,0), rotate both axes to make vector (1,0), or reflect y-axis to make vector (0,-1). Coordinate transformations do not change dimension number.
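
A minimal numpy sketch of the example, applying translation, rotation, and reflection to the vector (0, 1):

```python
# Minimal sketch of the coordinate transformations named above (illustrative only).
import numpy as np

v = np.array([0.0, 1.0])

translated = v + np.array([0.0, -1.0])                   # translate along y: (0, 0)

theta = -np.pi / 2                                       # rotating the axes by +90 degrees
rotation = np.array([[np.cos(theta), -np.sin(theta)],    # equals rotating the vector by -90
                     [np.sin(theta),  np.cos(theta)]])
rotated = rotation @ v                                   # (1, 0)

reflected = np.array([[1.0, 0.0], [0.0, -1.0]]) @ v      # flip the y-coordinate: (0, -1)

print(translated, np.round(rotated), reflected)          # dimension number never changes
```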

stationary space

Perception typically maintains an absolute spatial reference frame. Stationary space allows optimum feature tracking during object and/or body motions. Moving reference frames make all motions three-dimensional, but stationary space makes many movements one-dimensional or two-dimensional.

gravity and vertical direction

Gravity exerts vertical force on feet and body. Nervous system analyzes this distributed information and defines vertical axis in space {gravity and vertical direction}.

ground and mental space

Foot motions stop at ground. Touch and kinesthetic receptors repeatedly record this information. Nervous system analyzes this distributed information and defines a horizontal plane in space {ground and mental space}. Ground nearest to eye has sight-line perpendicular to ground. Farther-away ground points have sight-lines at smaller angles. All objects are on or vertically above ground.

invariants and coordinate axes

Vision observes moving and stationary points in space with varying brightnesses and colors. Nervous system analyzes this information to detect perceptual invariants. For space, invariant points are stationary reference points. Invariant lines are stationary coordinate axes {invariants and coordinate axes}: vertical, horizontal right-left, and horizontal near-far. Because invariants stay constant over many situations, invariants can be grounds for meaning.

motions and touches

Nervous system correlates body motions and touch and kinesthetic receptors to extract reference points and three-dimensional space {motions and touches}. Repeated body movements define perceptual metrics. Ratios among repeated movements build standard length, angle, time, and mass units that model physical-space lengths, angles, times, and masses. As body, head, and eyes move, they trace geometric structures and motions.

tracking

During body movements, neuron activations follow trajectories across topographic maps. Brain can track moving stimuli. Brain can study before and after effects by tracking stimuli.

stimuli and motions

Stimuli can trigger attention and orientation, and so body moves or turns toward or away. Different stimulus intensities cause different moving or turning rates.

distance

Because distance equals rate times time, motion provides information about distances. Brain can track locations over time. Brain can use interpolation and extrapolation.

horizontal directions and motions

Moving toward or away from stimuli maximizes visual flow and light-intensity gradient, and establishes forward-backward direction. Moving perpendicular to sight-line to stimuli minimizes visual flow and light-intensity gradient, and establishes left-right direction.

vertical direction and motion

Body raising and lowering can indicate vertical direction.

orientation columns and direction

Vision topographic maps have orientation macrocolumns, which align and link orientations to detect line directions and establish all spatial directions {orientation columns and direction} [Blasdel, 1992].

pole and dimension

As body moves in a straight line, visual flow and light-intensity gradient establish one forward point (pole). Eye to forward point defines the forward-backward spatial dimension {pole and dimension}.

rotation centers and mental space

Body and body parts rotate around balance or equilibrium points {rotation centers and mental space}. Kinesthetic receptors send information to brain, which defines those reference points and builds three-dimensional space.

tensors and mental space

Topographic-map series can store matrices and so represent tensors {tensors and mental space}. Motor processing uses tensor transform functions to describe changes from former to current output-vector field [Pellionisz and Llinás, 1982]. Tensors can linearly transform coordinates from one coordinate system to another. Output vectors are linear input-vector and spatial-axis-vector functions. Motor-system topographic maps send vector-field output-vector spatial pattern to motor neurons. Muscles move body, head, and eye to specific space locations, or for specific distances or times. Current output-vector field differs from preceding output-vector field by a coordinate transformation.

topographic maps and coordinate axes

Topographic-map-neuron types have regular horizontal, vertical, and diagonal spacings, at different small, medium, and large distances. Neuron grids make a spatial network of nodes and links. Neuron grids allow measuring distances and angles and using coordinates. Topographic-map neuron grids have up/down, left/right, and near/far axes {topographic maps and coordinate axes}. Topographic-map spatial axes intersect to establish a coordinate origin and make a coordinate system, so points, lines, and regions have spatial coordinates.

Sensory topographic maps can have lattices of superficial pyramidal cells, whose non-myelinated non-branched axons travel horizontally 0.4 to 0.9 millimeters to synapse in clusters on next superficial pyramidal cells. The skipping pattern aids macrocolumn neuron-excitation synchronization [Calvin, 1995].

topographic maps and distances

Topographic maps have neurons specific for space locations {topographic maps and distances}. Locations involve space direction and distance. If 100 neurons are for radial distance one unit, to have same visual acuity 400 neurons must be for radial distance two units. To have less acuity, 100 neurons can be for radial distance two units.

vestibular system and direction

Vestibular-system saccule, utricle, and semicircular canals detect gravity, body accelerations, and head rotations. From that information, nervous system establishes vertical direction and two horizontal directions {vestibular system and direction}.

vision and direction

Animal eyes are right and left, not above and below, and establish a horizontal plane that visual brain regions maintain {vision and direction}. Vision processing can detect vertical lines and determine height and angle above horizontal plane. Body has right and left as well as front and back, and visual brain regions maintain right, left, front, and back in the horizontal plane.

1-Consciousness-Speculations-Space-Computer Science

models for three dimensions from two dimensions

Models can build three dimensions from two-dimensional images {models for three dimensions from two dimensions}. Stacks of two-dimensional layers can model three-dimensional space. Rotation of one two-dimensional layer can sweep out three-dimensional space.

reading and writing and mental space

Mental space has no reading or writing {reading and writing and mental space}, because output becomes input and input becomes output simultaneously and in parallel.

1-Consciousness-Speculations-Space-Computer Science-Algorithm

segmentation and mental space

Region boundaries have high contrast. Surfaces have coarser or finer and other texture types. Textures depend on surface slant, surface tilt, object size, object motion, shape constancy, surface smoothness, and reflectance. Segmentation algorithms {segmentation and mental space} separate observed regions by contrast and surface texture. Contrast and steep texture gradients define large domains. Subdomains have different surface textures.

self-calibration and mental space

Camera self-calibration algorithms can use the epipolar transform and the image of the absolute conic in the Kruppa equations to find standard metric and relative distances and positions {self-calibration and mental space}.

shape from shading and mental space

Vision processing can find convexities, concavities, and boundary edges. Later vision processing makes these consistent to build three-dimensional space {shape from shading and mental space}.

structure from motion and mental space

Motions cause disparities and disparity rates that can reveal structure {structure from motion and mental space}. Bundle-adjustment algorithms can find three-dimensional scene structure and eye trajectories. First, projective reconstruction can construct the projected structure, and then Euclidean upgrading can find actual shape. Affine reconstruction can use Tomasi-Kanade factorization.

synthesis algorithms

Synthesis algorithms {synthesis algorithms} compare vectors and coordinates to build images and space.

vision algorithms and space

Vision algorithms can use fiducials as reference points for calibration to make space coordinates {vision algorithms and space}.

1-Consciousness-Speculations-Space-Mathematics

continuity and mental space

Continuous surfaces have no gaps and no overlaps. Phenomenal space seems continuous {continuity and mental space}.

cross products and mental space

Two vectors define one plane or surface. Two vectors can multiply to make a vector perpendicular to both vectors. Perhaps, mental space gets the distance dimension from cross products {cross products and mental space}.

derivatives and mental space

Derivatives indicate changes, gradients, and directions at space and time points. Second derivatives indicate gradient and direction changes and so apply to curves. Perhaps, brain calculates derivatives to find directions and surfaces, and their curvatures, and so build mental space {derivatives and mental space}.

generators and mental space

Brain not only represents space, but also generates/constructs space {generators and mental space}. From an origin, each space direction has a function that indicates distance and color. Functions extend from origin, in brain, into space, outside body, so there is no action at a distance. Space is nonphysical abstract vector space.

mathematics and mental space

Mathematical ideas can relate to mental space {mathematics and mental space}. Neuron assemblies can represent mathematical objects and mathematical operations.

number

Over a one-millisecond interval, neurons have (1) or do not have (0) an impulse, so neuron series can represent binary numbers. Over a one-second interval, one neuron's series of 0s and 1s can represent a binary number with 1000 digits.

Over a one-second interval, single-neuron axon-impulse number or released-neurotransmitter-packet number can represent a whole number. Neurons have impulse frequencies up to 800 Hz, so one neuron can represent numbers from, say, 1 to 800.

Neuron series can use positional notation to represent larger numbers. For example, one neuron can represent units from 0 to 99, and another can represent hundreds from 0 to 9900 (in steps of 100), so neuron pairs can represent numbers from 0 to 9999.
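
A minimal sketch of this positional scheme, with two hypothetical "neurons" each holding a count from 0 to 99:

```python
# Minimal sketch (hypothetical encoding): a hundreds neuron and a units neuron
# jointly represent 0 to 9999 by positional notation.
def encode(n):
    return divmod(n, 100)            # (hundreds neuron, units neuron)

def decode(hundreds, units):
    return 100 * hundreds + units

assert decode(*encode(2718)) == 2718     # (27, 18) -> 2718
```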

number: integer

In neuron series, one neuron can represent sign, so neuron series can represent integers.

number: rational

In neuron series, one neuron can represent decimal point, so neuron series can represent rational numbers.

number: real

Real numbers have rational-number approximations, so neuron series can represent real numbers.

number: imaginary

In neuron series, one neuron can represent square root of -1, so neuron series can represent imaginary numbers.

number: complex

Complex numbers add real number and imaginary number, so two neuron series can represent complex numbers.

ratio

Neurons can compare receptive-field center input to surround input to measure stimulus-intensity ratio. Opponent processes compare inputs from two neurons to find ratio. Ratios are dimensionless, because dividing cancels units.

ratio: metrics

Comparing current and memorized ratios builds standard relative lengths, angles, and other measurement units (standardized metrics).

addition

To add two numbers, neuron series can receive input from two neuron series that represent numbers. To subtract, one input is negative.

Single neurons can accumulate membrane potential or neurotransmitter over time to represent simple summation.

addition: tables

If tables are available, arithmetic operations can use table lookup. First number is in first column, second is in second column, and answer is in third column. Neuron arrays can store number tables. Using indexes allows table lookup.

multiplication

To multiply two numbers, neuron series can receive input from two neuron series that represent numbers.

multiplication: amplification

Single neurons can amplify input. Cell body priming can cause inputs to dendrites to make more membrane voltage. Axon gating near synapse can cause synapse to release more neurotransmitter. Amplification is like multiplication.

multiplication: logarithm

Neuron series can store bases and exponents, so three neuron series can represent exponentials and logarithms. Neuron-series sets can add logarithms to perform multiplications. For numbers greater than one, logarithms are smaller than the original numbers. For example, if number is 100, logarithm is 2: 100 = 10^2.
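
A minimal sketch of multiplication by adding logarithms, with ordinary floating-point values standing in for neuron-series contents:

```python
# Minimal sketch: multiply two numbers by adding their logarithms and exponentiating.
import math

a, b = 100.0, 800.0
product = 10 ** (math.log10(a) + math.log10(b))   # add exponents instead of multiplying
print(round(product))                             # 80000
```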

multiplication: power and root

Powers are multiplication series: a^3 = a*a*a. Roots are powers with reciprocal exponents: a^0.5 is the square root of a. Neuron-series sets can repeat multiplications and divisions to find powers and approximate roots.

symbol

Alphabet letters and punctuation symbols can have number representations. Neuron series can represent numbers and so letters, symbols, and variables.

mathematical term

Mathematical terms are constants times variables raised to powers: a*x^b. Neuron series can represent symbols and can use powers and multiply, so five neuron series can represent mathematical terms.

polynomial

Polynomials are mathematical-term sums. Neuron-series arrays can represent mathematical terms, so neuron-series-array series can represent mathematical-term sums. For infinite polynomials, higher terms have negligibly small values, so finite polynomials can approximate infinite polynomials.

polynomial: functions

Over space, time, or numeric intervals, polynomials can represent functions, so neuron-series-array series can represent functions. Polynomials can represent periodic, trigonometric, and wave functions: sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ..., and cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ... Polynomials can represent exponential functions: e^a = 1 + a + a^2/2! + a^3/3! + ..., and e^(i*a) = cos(a) + i*sin(a).
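
A minimal sketch of a finite polynomial (the truncated series quoted above) approximating sin(x):

```python
# Minimal sketch: a truncated Taylor series approximates sin(x), so a finite
# polynomial can stand in for the periodic function.
import math

def sin_poly(x, terms=5):
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

print(sin_poly(1.0), math.sin(1.0))   # 0.8414710... versus 0.8414709...
```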

polynomial: factoring

Polynomials can have smaller polynomials that divide evenly into the polynomial. For example, a^2 + 2*a*b + b^2 = (a + b)^2, so (a^2 + 2*a*b + b^2)/(a + b) = (a + b). Neuron-series-array arrays can factor.

equation

Equations set two functions equal to each other: 3*x + 2 = 2*x + 3. Neuron assemblies can represent functions and the equals operation, so neuron assemblies can represent equations. Because they can subtract, factor, and divide, neuron assemblies can solve linear equations. Linear equations can approximate other equations.

equation: inequality and relation

Neuron assemblies can represent equations, so neuron assemblies can represent inequalities. Inequalities can indicate relations: more, same, and less, or before and after.

equation: system

Two or more equations with same variables are equation systems. For example, 3*x + 2*y = 6 and 2*x + 3*y = -6. Large neuron assemblies can represent an equation system. Because they can subtract, multiply, and divide, and so substitute, neuron assemblies can solve linear-equation systems. Linear-equation systems can approximate other-equation systems.
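
A minimal sketch solving the example system above by elimination, here delegated to numpy's linear solver:

```python
# Minimal sketch: solve 3x + 2y = 6 and 2x + 3y = -6.
import numpy as np

A = np.array([[3.0, 2.0], [2.0, 3.0]])
b = np.array([6.0, -6.0])
print(np.linalg.solve(A, b))   # [ 6. -6.]  so x = 6, y = -6
```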

algebra

Algebras have elements, such as integers. Algebras have operations on elements, such as addition and multiplication. Operations on elements result in existing elements. Neuron series can represent numbers and perform arithmetic operations, so neuron assemblies can represent algebra.

calculus

All differentiations and integrations use only exponentials, multiplications, and powers. Neuron series can represent logarithms, multiplication, and powers, so neuron assemblies can differentiate and integrate.

mathematical group

Mathematical groups have elements, such as triangles. Mathematical groups have one operation, such as addition or rotation. Operations map every element to the same or another group element. For example, if element is equilateral triangle, 120-degree rotations result in same element. Tables show group-operation results for all element pairs. Neuron assemblies can represent number tables and table lookup and so represent mathematical groups.

logic

Neuron series can represent letters and symbols, so neuron-series arrays can represent words and statements. Statements can use nested variable relations. Neuron assemblies can represent and understand grammar.

logic: truth value

Neurons can represent TRUE or FALSE by potential above threshold or below threshold.

logic: operations

Two or three neuron series can represent NOT, AND, and OR operations. NOT operations can change input into no output, or vice versa, using excitation or inhibition. AND operations add two inputs to pass high threshold, which neither one alone can pass. OR operations add two inputs to pass low threshold, which either input alone can pass.
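
A minimal sketch of this threshold scheme, with illustrative weights and thresholds; it also builds the conditional described below from NOT and AND:

```python
# Minimal sketch: threshold units implement NOT, AND, and OR. AND needs both inputs
# to pass a high threshold, OR passes a low threshold with either input, and NOT
# inverts by inhibition onto a tonically active unit. Weights are illustrative.
def neuron(inputs, weights, threshold):
    return int(sum(i * w for i, w in zip(inputs, weights)) > threshold)

AND = lambda p, q: neuron([p, q], [1, 1], 1.5)      # only 1 + 1 exceeds 1.5
OR  = lambda p, q: neuron([p, q], [1, 1], 0.5)      # either input exceeds 0.5
NOT = lambda p:    neuron([1, p], [1, -1], 0.5)     # tonic drive minus inhibition

IMPLIES = lambda p, q: NOT(AND(p, NOT(q)))          # p -> q  =  ~(p & ~q)
print([IMPLIES(p, q) for p in (0, 1) for q in (0, 1)])   # [1, 1, 0, 1]
```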

logic: tables

Logic operations can use table lookup. First variable is in first column, second variable is in second column, and truth-values are in third column. Neuron assemblies can store tables and perform table lookup.

logic: conditionals

Conditional statements combine NOT and AND operators: p -> q = ~(p & ~q). Neuron assemblies can represent NOT and AND operations and so represent conditionals.

logic: reasoning

Reasoning uses statement series. Neuron-assembly series can represent statement series and so reasoning.

computation

Neuron assemblies can represent numbers and statements and perform logic operations, so complex neuron assemblies can use programming languages and compute. Neuron-assembly activity patterns can represent cellular automata, which can simulate universal Turing machines and so compute any algorithm.

geometry

Visual processing can represent geometric objects, relations, and operations [Burgess and O'Keefe, 2003] [Moscovitch et al., 1995]. Representations have same relative lengths, angles, and orientations as physical geometric objects in space.

Geometric objects are points, lines, angles, and surfaces. Geometric objects have location, extension, and shape. Geometric objects have brightness, hue, and saturation. Geometric-object relations are up, down, above, below, right, left, in, out, near, and far. Geometric operations are constructions, transformations, vector operations, topological operations, region marking, and boundary making and removing.

geometry: point

Dendritic-tree center-region input excites ON-center neurons. Surrounding-annulus input inhibits ON-center neurons. ON-center neurons can represent points [Hubel and Wiesel, 1959] [Kuffler, 1953].

geometry: line

Lines are point series, so ON-center-neuron series can represent straight and curved lines [Livingstone, 1998] [Wilson et al., 1990]. Neuron-series length can represent line length.

Lines are boundaries of regions. Distance and intensity change rates are greatest at boundaries.

geometry: surface

Surfaces are line series, so ON-center-neuron arrays can represent flat and curved surfaces. Distance and intensity change rates are small in surfaces. Neuron-array area can represent surface area. Line boundaries are surface edges and separate surfaces.

geometry: orientation

Lines and surfaces have orientation/direction. Topographic-map orientation columns, perpendicular to cortical neuron layers, detect orientation. Orientation columns are for specific space locations. Orientation columns are for specific line lengths and sizes. Therefore, orientation columns represent one space location, one orientation, and one line length [Blasdel, 1992] [Das and Gilbert, 1997] [Dow, 2002] [Hübener et al., 1997] [LeVay and Nelson, 1991].

geometry: angle

For same space location and line length, adjacent orientation columns detect orientations. Neuron assemblies calculate plane angles between two line orientations or solid angles between three line orientations. Object and body rotation movements have angle changes.

geometry: geometric figures

Neuron assemblies can represent points, lines, orientations, angles, and surfaces, so neuron assemblies can represent geometric figures, such as spheres, cylinders, and ellipsoids.

geometry: distance

Neuron-series length can represent distance between two points. Neuron series can have all orientations, so neuron series can detect distance in any direction.

Topographic-map orientation columns calculate line and surface orientations. At farther distances, concave angles appear smaller, and convex angles appear larger.

Closer regions are brighter, and farther regions are darker, so neuron excitation can estimate distance.

Closer surfaces have larger average surface-texture size and larger spatial-frequency-change gradient. Neuron assemblies can detect surface texture and spatial-frequency-change gradients and estimate distance.

Object movements and body movements occur over distances, and neuron assemblies can track trajectories.

geometry: triangulation

To find triangle lengths and angles, neuron assemblies can use trigonometry cosine rule or sine rule.
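
A minimal sketch of the cosine rule, finding a triangle's third side from two sides and the included angle:

```python
# Minimal sketch: the law of cosines, c^2 = a^2 + b^2 - 2ab*cos(C).
import math

def third_side(a, b, included_angle_deg):
    c = math.radians(included_angle_deg)
    return math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(c))

print(third_side(3.0, 4.0, 90.0))   # 5.0, the familiar right-triangle case
```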

geometry: trilateration

Trilateration finds point coordinates, using three reference points. The four points form a tetrahedron, with four triangles. Distance from first reference point defines a sphere. Distance from second reference point defines a circle on the sphere. Distance from third reference point defines two points on the circle. Neuron assemblies can measure distances between points and angles, and can use the cosine rule or sine rule to find all triangle angles and sides.
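
A minimal trilateration sketch using the standard geometric construction (reference points and distances are illustrative):

```python
# Minimal sketch: recover a point from its distances to three known reference points.
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)            # first axis along p1 -> p2
    i = ex @ (p3 - p1)
    ey = (p3 - p1 - i * ex) / np.linalg.norm(p3 - p1 - i * ex)
    ez = np.cross(ex, ey)                               # completes the local frame
    d, j = np.linalg.norm(p2 - p1), ey @ (p3 - p1)
    x = (r1**2 - r2**2 + d**2) / (2 * d)                # sphere-sphere intersection
    y = (r1**2 - r3**2 + i**2 + j**2) / (2 * j) - (i / j) * x
    z = np.sqrt(max(r1**2 - x**2 - y**2, 0.0))          # two mirror solutions: +z and -z
    return p1 + x * ex + y * ey + z * ez

target = np.array([1.0, 2.0, 3.0])
refs = [np.zeros(3), np.array([5.0, 0.0, 0.0]), np.array([0.0, 5.0, 0.0])]
dists = [np.linalg.norm(target - p) for p in refs]
print(trilaterate(*refs, *dists))                       # ~[1. 2. 3.]
```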

Animals continually track distances and directions to distinctive landmarks. Animals navigate environments using maps with centroid reference points and gradient slopes [O'Keefe, 1991].

geometry: space

Brain can represent perceptual space in topographic maps [Andersen et al., 1997] [Bridgeman et al., 1997] [Gross and Graziano, 1995] [Owens, 1987] [Rizzolatti et al., 1997].

Midbrain tectum and cuneiform nucleus have multimodal neurons, whose axons envelop reticular thalamic nucleus and other thalamic nuclei to map three-dimensional space.

Vision processing derives three-dimensional images from two-dimensional ones by assigning convexity and concavity to lines and vertices and making convexities and concavities consistent.

geometry: spatial axes

Vestibular-system saccule, utricle, and semicircular canals establish vertical axis by determining gravity direction and horizontal directions by detecting body accelerations and head rotations. Three planes, one horizontal and two vertical, define vertical axis and two horizontal axes.

Animal eyes are right and left, not above and below, and establish horizontal plane that visual brain regions maintain.

Vision processing can detect vertical lines and determine height and angle above horizontal plane. Vertical gaze center near midbrain oculomotor nucleus detects up and down motions [Pelphrey et al., 2003] [Tomasello et al., 1999].

Body has right-left and front-back, and visual brain regions maintain right-left and front-back in horizontal plane. Horizontal gaze center near pons abducens nucleus detects right-to-left motion and left-to-right motion [Löwel and Singer, 1992].

Topographic-map orientation columns with same orientation align and link to establish coordinate axes, in all directions.

Sense and motor topographic maps have regularly spaced lattices of special pyramidal cells. Non-myelinated and non-branched superficial-pyramidal-cell axons travel horizontally 0.4 to 0.9 millimeters and synapse in clusters on next superficial pyramidal cells. The skipping pattern aids macrocolumn neuron-excitation synchronization [Calvin, 1995]. The regularly spaced pyramidal-cell lattice can represent topographic-map reference points and make vertical, horizontal, and other-orientation axes. Lattice helps determine spatial frequencies, distances, and lengths.

Medial entorhinal cortex has some grid cells that fire when body is at many spatial locations, which form a triangular grid [Sargolini et al., 2006].

geometry: coordinate system

Vision processing relates spatial axes to make a coordinate system. Spatial axes intersect at a coordinate origin. In spherical coordinates, space points have distance to origin, azimuthal angle in the horizontal plane, and polar angle from the vertical axis. In Cartesian coordinates, points have distances along the vertical, right-left-horizontal, and front-back-horizontal axes. Brain and external three-dimensional space use the same spatial axes and coordinate system. Coordinate origin establishes an egocenter, for egocentric space.
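
A minimal sketch converting a spherical-coordinate point to Cartesian coordinates, using the mathematical convention of azimuth in the horizontal plane and polar angle from the vertical axis:

```python
# Minimal sketch: the same egocentric point in spherical and Cartesian coordinates.
import math

def spherical_to_cartesian(r, azimuth, polar):
    x = r * math.sin(polar) * math.cos(azimuth)   # front-back horizontal axis
    y = r * math.sin(polar) * math.sin(azimuth)   # right-left horizontal axis
    z = r * math.cos(polar)                       # vertical axis
    return x, y, z

print(spherical_to_cartesian(2.0, math.radians(45), math.radians(90)))
# about (1.414, 1.414, 0.0): two meters away, 45 degrees azimuth, at eye level
```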

tensor

Neuron series can represent number magnitudes and space directions, so two neuron series can represent mathematical vectors. Neuron arrays can represent vectors and motions, so they can represent spinors as rotating vectors.

Neuron arrays can represent vectors, so they can represent matrices, which can represent surfaces. Matrices can be two-dimensional tensors, which have all vector-component products as elements. For example, |x1*x2, y1*x2 / x1*y2, y1*y2|, for vectors (x1, y1) and (x2, y2), has four elements. |x1*x2, y1*x2, z1*x2 / x1*y2, y1*y2, z1*y2 / x1*z2, y1*z2, z1*z2|, for vectors (x1, y1, z1) and (x2, y2, z2), has nine elements.

Three-dimensional tensors have all vector-component products. Neuron arrays can represent matrices, so neuron assemblies can represent three-dimensional tensors. During eye, head, and body movements, tensors can transform egocentric-space coordinates to maintain stationary allocentric-space coordinates.
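
A minimal numpy sketch of an outer product as an all-products matrix, and of canceling an egocentric rotation by applying its inverse (transpose):

```python
# Minimal sketch: outer products as the all-products matrices above, and an inverse
# rotation keeping allocentric coordinates fixed while the head turns.
import numpy as np

u, v = np.array([1, 2]), np.array([3, 4])
print(np.outer(u, v))            # [[3 4], [6 8]]: all component products of u and v

theta = np.radians(30)           # head turns by 30 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
point = np.array([1.0, 0.0])
print(R.T @ (R @ point))         # applying the inverse transform restores [1. 0.]
```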

self-reference and mental space

Gödel numbers can contain compressed-information descriptions of themselves. Nesting allows self-reference. Topographic maps can contain descriptions of themselves. Topographic-map space information can contain space-information descriptions. Mental space can contain space descriptions. Complete brain-based mental-space descriptions can contain mental space {self-reference and mental space}. By nesting, mental space can be internal, in observer, and observer can be in mental space.

space by relaxation

Color processing finds surfaces and distances by mathematical relaxation techniques that locate complete and consistent positions {space by relaxation}.

state space and mental space

Color phase space can have three spatial dimensions, one time dimension, surface orientation dimension, black-white dimension, red dimension, blue dimension, and green dimension {state space and mental space}.

1-Consciousness-Speculations-Space-Mathematics-Vectors

spinors and mental space

Spinors are rotating three-dimensional vectors or quaternions. Perhaps, spins can define space axes, and three real-number orthogonal independent spinor components make three-dimensional space {spinors and mental space}.

tensors and space

Tensors are scalars, vectors, matrices, three-dimensional arrays, and so on, that represent linear operations. Tensors can model flows and fields. Integrating a tensor over one index decreases tensor order by one. Differentiating a tensor field adds one index and increases tensor order by one. Three tensor differentiations can build three dimensions from one scalar {tensors and space}.

1-Consciousness-Speculations-Space-Physics

ether and mental space

The ether can fill space or define space {ether and mental space}. The ether can provide a substrate for sensations and observer.

holography and mental space

Holography can make three-dimensional images in space from two-dimensional interference patterns illuminated by a coherent-light beam {holography and mental space}. Perhaps, association cortex stores two-dimensional interference patterns and makes beams. However, association cortex has no coherent beam to make interference patterns, and interference patterns and intensities, features, and objects have no relation. Brain does not send out signals.

projection and mental space

Projectors illuminate film or otherwise decode stored representations to create two-dimensional or three-dimensional displays in media, such as screens or monitors. Perhaps, mind is projection, and brain is projector {projection and mental space}. However, mind must know sensations, not just display them. Projection starts with a geometric figure. Projection needs something on which to project. Projection has no opposites.

quantum mechanics entanglement and non-locality

Physical interactions are local. Forces are particle exchanges. For example, masses exchange gravitons to affect each other. Force fields change space and so affect particle motions. Physical interactions do not allow action at a distance, except for quantum-mechanical entanglement. Two particles that have interacted have a joint wavefunction, made of a superposition of the two particle wavefunctions. Because particle wavefunctions extend over all space, the joint wavefunction extends over all space. The two particles have quantum-mechanical entanglement over all space. Consciousness has experiences at distant places, with no intervening events. Perhaps, consciousness entangles everything over all space {quantum mechanics entanglement and non-locality}.

non-locality

In quantum mechanics, observation at one location can appear to immediately affect another observation at a distant location. Though physical waves send information at finite speed, quantum-mechanical waves collapse everywhere at once. Perhaps, mind involves non-locality. Consciousness links separate space points, and sense system and sensation, and so is non-local.

However, brain processing does not use waves, entanglement does not include knowing, and any entanglement in brain collapses in less than a microsecond.

tunneling and mental space

Perhaps, brain has potential barriers to outside world, but mind can tunnel through barriers to experience outside world {tunneling and mental space}.

1-Consciousness-Speculations-Space-Psychology

sense qualities and mental space

People seem to experience a sensory field outside themselves [Velmans, 1993]. Sense experiences are at locations in three-dimensional space. Sense qualities are the type of thing that allows consciousness of space {sense qualities and mental space}. Experiencing mental space requires sense qualities.

surface distances and mental space

The farthest surfaces, like the sky or distant mountains, seem to be a few kilometers away. The closest surfaces, like a book, appear smaller than their retinal visual angle indicates. Perhaps, rather than varying directly with distance, perceived sizes are logarithms of distances {surface distances and mental space}.

surface texture and depth

Gradient location-orientation histograms define surface textures. People assign depth using corresponding points in stereo or successive images and other monocular techniques {surface texture and depth}. Near objects show more texture detail, and far objects show less texture detail.

surface transparency and perspective

Windowpanes and perspective paintings represent depth and three-dimensional scenes in two dimensions, and their two-dimensional surfaces are apparent. If such surfaces have no reflection or any other property and so are invisible, they represent three-dimensional space perfectly {surface transparency and perspective}.

zooming and mental space

Scaling (zooming) maintains relative distances and angles {zooming and mental space} {scaling and mental space}. Zooming in can make a finite region equivalent to an infinite region, because the boundary becomes far away. Zooming out can make large regions smaller. Attention is like zooming.
