1-Consciousness-Speculations-Space-Biology

adjacency and mental space

Skin touches objects, and touch receptors receive information about objects adjacent to body {adjacency and mental space}. As body moves around in space, mental space expands by adding adjacency information.

angle-comparison computations calculate distances

Eye-accommodation-muscle feedback to vision depth-calculation processes can calculate distances up to two meters. Metric depth cues can calculate all distances. Observing objects requires at least two eye fixations, which allow vision processing to calculate two different perceived angles, for two different eye, head, and body positions. Vision and body angle-comparison computations can calculate line, surface, feature, and object distances {angle-comparison computations, distances} {distances, angle-comparison computations}.

two sight-line to surface angles

At first eye fixation on a line or surface point, vision calculates a sight-line to point angle. At second eye fixation on a collinear or co-surface point, vision calculates a different sight-line to point angle, because eye, head, and/or body have rotated. At nearest possible line or surface point, sight-line to point angle is 90 degrees. At farthest possible line or surface point, sight-line to point angle is 0 degrees. Angle decreases as distance increases. If angle of sight-line to line or surface is more perpendicular, line or surface point is nearer. If angle of sight-line to line or surface is less perpendicular, line or surface point is farther.

Comparing sight-line angles to two collinear or co-surface points can calculate distance. Angle difference varies inversely with distance. Larger angle change means object is nearer. Smaller angle change means object is farther.
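
To illustrate, here is a minimal Python sketch that triangulates distance from the two sight-line angles and the kinesthetic baseline (head or body translation) between fixations. It is an illustrative planar sketch, not the brain's actual algorithm; the function name, angle convention, and sample numbers are assumptions.

    import math

    def distance_from_two_fixations(baseline_m, angle1_deg, angle2_deg):
        """Planar triangulation sketch (illustrative assumption).

        baseline_m -- head/body translation between the two fixations (kinesthetic)
        angle1_deg -- sight-line angle to the point at the first fixation,
                      measured from the direction of movement
        angle2_deg -- the same angle at the second fixation (larger when nearer)
        Returns the distance from the second viewpoint to the point.
        """
        a1 = math.radians(angle1_deg)
        a2 = math.radians(angle2_deg)
        parallax = a2 - a1                  # angle difference; larger means nearer
        if parallax <= 0:
            raise ValueError("second angle must exceed the first")
        # Law of sines in the triangle (first viewpoint, second viewpoint, point).
        return baseline_m * math.sin(a1) / math.sin(parallax)

    # Example: a 0.3-meter head movement, sight-line angles 60 then 75 degrees.
    print(round(distance_from_two_fixations(0.3, 60.0, 75.0), 2))   # ~1.0 meter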

two visual angles

At first eye fixation on an object edge or contour, object has a retinal visual angle, which indicates object relative size. At second eye fixation on a different object edge or contour, object has a different retinal visual angle, because eye, head, and/or body have rotated. If sight-line to object edge or contour angle is 90 degrees, visual angle is maximum. At other angles, visual angle is less. Visual angle varies inversely with distance. If sight-line to object edge or contour is more perpendicular, visual angle is larger. If sight-line to object edge or contour is less perpendicular, visual angle is smaller.

Comparing first and second visual angles can calculate object distance. Angle difference varies inversely with distance. Larger angle change means object is nearer. Smaller angle change means object is farther.
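
A minimal Python sketch of the visual-angle comparison, assuming small visual angles (angle roughly proportional to object size divided by distance) and a known kinesthetic step taken straight toward the object; the function name and the sample numbers are illustrative.

    def distance_from_visual_angle_change(step_m, angle1_deg, angle2_deg):
        """Small-angle sketch: visual angle is roughly size / distance.

        step_m     -- distance moved straight toward the object (kinesthetic)
        angle1_deg -- visual angle at the first fixation
        angle2_deg -- visual angle at the second, nearer fixation (larger)
        Returns the object distance at the second fixation.
        """
        if angle2_deg <= angle1_deg:
            raise ValueError("approaching the object must enlarge its visual angle")
        # size = angle1 * d1 = angle2 * d2, and d1 = d2 + step_m (small angles).
        return angle1_deg * step_m / (angle2_deg - angle1_deg)

    # Example: after stepping 0.5 meter closer, 2.0 degrees grows to 2.5 degrees,
    # so the object is now 2.0 meters away.
    print(distance_from_visual_angle_change(0.5, 2.0, 2.5))   # 2.0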

two sight-line to point angles

At first eye fixation on an object point, sight-line to point has an angle. At second eye fixation on the same object point, sight-line to point has a different angle, because eye, head, and/or body have rotated. At nearest possible object point, sight-line to point angle is 90 degrees. At other object points, sight-line to point angle is less. Angle decreases as distance increases. If sight-line to object point is more perpendicular, object is nearer. If sight-line to object point is less perpendicular, object is farther.

Comparing first and second angles can calculate object distance. Angle difference varies inversely with distance. Larger angle change means object is nearer. Smaller angle change means object is farther.

two concave or convex corner angles

The first eye fixation on a concave or convex corner determines its angle. The second eye fixation determines a different angle, because eye, head, and/or body have rotated. Smaller-angle concave corners are farther, and larger-angle concave corners are nearer. Smaller-angle convex corners are nearer, and larger-angle convex corners are farther.

Comparing first and second corner angles can calculate distance. Angle difference varies inversely with distance. Larger angle change means object is nearer. Smaller angle change means object is farther. Angles and vertices use the same reasoning as corners.

body angle comparisons

First eye fixation and second eye fixation have two different eye, head, and/or body positions. The kinesthetic system determines their angle sets and sends kinesthetic angle-difference information to association cortex for comparison with the corresponding vision angle-difference information.

integration

Comparing the two sets of angle differences calculates absolute metric distances. Accumulating distance information allows building three-dimensional-space information.

body surface and mental space

Sensations impinge on body surface in repeated patterns at touch receptors. Nervous system occupies three dimensions and has information about receptor locations. From receptor activity patterns, nervous system builds a three-dimensional sensory surface {body surface and mental space}.

carrier waves and mental space

Senses make a global carrier-wave function, and whole brain-and-body has a carrier-wave function {carrier waves and mental space}. Global functions are regular and form coordinate grids, establishing egocentric space. Local disturbances affect global function to indicate location.

convexity and concavity and mental space

Frontal-lobe region derives three-dimensional images from two-dimensional topographic maps by assigning convexity, concavity, and boundary edges [Horn, 1986] to lines and vertices and making convexities and concavities consistent {convexity and concavity and mental space}.

cortical processing and mental space

Primary-visual-cortex topographic map represents scene intensities. After primary visual cortex, cortical topographic-map neurons {cortical processing and mental space} respond to orientations, locations, and distances [Burkhalter and Van Essen, 1986] [DeValois and DeValois, 1975] [Newsome et al., 1989] [Tootell et al., 1997] [Zeki, 1985]. Topographic maps use thresholds to make boundaries and regions. Vision system sends information to motor and other sense systems [Bridgeman et al., 1997] [Owens, 1987]. Topographic maps use movements, angles, and perspective to add distance and depth by interpolation and extrapolation and represent egocentric space. Brain integrates and synthesizes spatial information [Andersen et al., 1997] [Gross and Graziano, 1995] [Olson et al., 1999].

frames and mental space

Nose, cheeks, and eyebrow ridges frame vision scenes. Silent regions frame sounds. Untouched surrounding areas frame pressures. Neutral-temperature regions frame warm or cool areas. Nose touch sensations frame odors. Mouth touch sensations frame tastes. Silent sensors frame active sensors. Sensations have frames that provide context for near and far locations {frames and mental space}.

memory and mental space

Long-term memory recall makes space {memory and mental space}. Short-term memory builds space modifications. Awakening activates memory, which activates space. Perception and recall occur on space background. Memory is stronger than perception, because people can remember images and override perceptions.

motions and mental space

Retinal regions can receive repeated light-pattern series that correlate with motion {motions and mental space}. For example, when moving toward light source, as visual horizon lowers, source appears lower in visual field. When moving away, source appears higher in visual field. When turning, rotations are around sense organ.

When people move, other objects do not move. Correlated movements belong to body region, and correlated non-movements belong to other region. Moving establishes a boundary between adjacent moving and non-moving regions. Moving is inside region, and non-moving is outside region. In and out make a space axis. When finger slides across surface, or feet walk across ground, touch correlates with vision moving/non-moving boundary.
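
The following Python sketch illustrates the correlation idea on hypothetical data: retinal regions whose motion traces correlate with the self-movement trace are labeled as the moving (body-correlated) region, the rest as the non-moving (other) region, and boundaries fall between adjacent regions with different labels. The traces, correlation threshold, and labels are illustrative assumptions.

    import numpy as np

    # Hypothetical data: a self-movement trace and motion traces for six adjacent
    # retinal regions over eight time steps (1 = motion detected, 0 = none).
    self_motion = np.array([1, 0, 1, 1, 0, 1, 0, 1])
    regions = np.array([
        [1, 0, 1, 1, 0, 1, 0, 1],   # follows self-motion
        [1, 0, 1, 0, 0, 1, 0, 1],   # mostly follows self-motion
        [0, 1, 0, 0, 1, 0, 0, 0],   # unrelated
        [0, 0, 0, 1, 0, 0, 1, 0],   # unrelated
        [1, 0, 1, 1, 0, 1, 1, 1],   # mostly follows self-motion
        [0, 0, 0, 0, 1, 0, 1, 0],   # unrelated
    ])

    # Label each region by its correlation with self-motion; a boundary lies
    # between adjacent regions whose labels differ.
    labels = []
    for trace in regions:
        r = np.corrcoef(self_motion, trace)[0, 1]
        labels.append("moving/self" if r > 0.5 else "non-moving/other")
    boundaries = [i for i in range(len(labels) - 1) if labels[i] != labels[i + 1]]
    print(labels)                                     # self, self, other, other, self, other
    print("boundaries after regions:", boundaries)    # [1, 3, 4]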

motor feedback and space

Brain senses, moves, senses, moves, and so on, to have feedback, so brain processes are multisensory and sensorimotor. Visual-motor and touch-motor feedback loops interact to locate surfaces {motor feedback and space}, also using kinesthetic and vestibular systems. Vertical gaze center near midbrain oculomotor nucleus detects up and down motions [Pelphrey et al., 2003] [Tomasello et al., 1999]. Horizontal gaze center near pons abducens nucleus detects right-to-left and left-to-right motions [Löwel and Singer, 1992].

multimodal neurons and mental space

Midbrain tectum and cuneiform nucleus have multimodal neurons, whose axons envelop reticular thalamic nucleus and other thalamic nuclei to map three-dimensional space {multimodal neurons and mental space}.

multiple neurons for multiple space points

To experience multiple space points simultaneously, neuron assemblies have 200-millisecond intervals in which events are simultaneous {multiple neurons for multiple space points}.

topographic map continuum

Topographic-map neurons, dendrites, axons, and synapses are so numerous that overlapping forms a continuum {topographic map continuum}. Perhaps, the continuum carries analog signals and geometric figures, like TV screens, and models continuous space.

1-Consciousness-Speculations-Space-Biology-Boundaries

analog to digital conversion and mental space

Neuron thresholds reduce instantaneous below-threshold input to 0 and set instantaneous above-threshold input to 1. Thresholds differentiate regions by establishing boundaries {analog to digital conversion and mental space}.
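
A minimal Python sketch of thresholding as analog-to-digital conversion: sample analog inputs become 0 or 1, and boundaries appear wherever the digital value changes between neighbors. The input values and threshold are illustrative.

    # Analog receptor input (arbitrary units) along a strip of skin or retina.
    analog_input = [0.2, 0.4, 0.9, 1.3, 1.1, 0.8, 0.3, 0.1]
    threshold = 0.7

    # Threshold each value to 0 or 1; runs of 1s form a region, and boundaries
    # lie wherever the digital value changes between neighbors.
    digital = [1 if x > threshold else 0 for x in analog_input]
    boundaries = [i for i in range(1, len(digital)) if digital[i] != digital[i - 1]]
    print(digital)      # [0, 0, 1, 1, 1, 1, 0, 0]
    print(boundaries)   # [2, 6]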

boundary and mental space

Brain can compare outgoing (inner) and incoming (outer) signals, which differ. Inner signals have loops and loop patterns and include memories and imaginings. Outer signals have non-looping patterns and include stimuli. Nervous system builds a boundary {boundary and mental space} between inner (self) and outer (other). Boundary is at nervous-system edges. Waking and dreaming rebuild the boundary.

inequalities and boundaries

To trigger a neuron impulse, membrane potential, caused by input neuron impulses, must be greater than neuron threshold potential. Neuron threshold potentials establish inequalities. Potentials below threshold have no effect. Potentials above threshold cause one impulse. (Higher potentials over time cause higher impulse rates.) Inequalities establish boundaries {inequalities and boundaries}. At space boundaries, one region has response above threshold, and adjacent region has response below threshold. (Neuron thresholds can change.)

lateral inhibition and spatial regions

Adjacent neurons can inhibit central neuron. Such lateral inhibition reduces central-neuron activity. Lateral inhibition can contract regions {lateral inhibition and spatial regions}. Lateral inhibition can move boundaries inwards. Lateral inhibition can suppress and eliminate boundaries. Spreading activation and lateral inhibition can join or separate regions.

spreading activation and spatial regions

Central neuron can excite adjacent neurons. Such spreading activation increases adjacent-neuron activity. Spreading activation can expand regions {spreading activation and spatial regions} {spreading excitation and spatial regions}. Spreading activation can move boundaries outwards. Spreading activation can establish and emphasize boundaries. Spreading activation and lateral inhibition can join or separate regions.
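
The following Python sketch illustrates both this entry's spreading activation and the preceding entry's lateral inhibition on one hypothetical topographic-map row: excitation spreading from neighbors expands the above-threshold region, weak inhibition from neighbors contracts it, and strong inhibition suppresses it entirely. The activity values, gains, and threshold are illustrative assumptions.

    def above_threshold(acts, thr=1.0):
        """Indices of neurons whose activity exceeds threshold (the region)."""
        return [i for i, a in enumerate(acts) if a > thr]

    def spread(acts, gain):
        """Spreading activation: each neuron gains a fraction of its neighbors."""
        n = len(acts)
        return [acts[i]
                + gain * (acts[i - 1] if i > 0 else 0)
                + gain * (acts[i + 1] if i < n - 1 else 0)
                for i in range(n)]

    def inhibit(acts, gain):
        """Lateral inhibition: each neuron loses a fraction of its neighbors."""
        n = len(acts)
        return [acts[i]
                - gain * (acts[i - 1] if i > 0 else 0)
                - gain * (acts[i + 1] if i < n - 1 else 0)
                for i in range(n)]

    row = [0.0, 0.2, 1.2, 2.0, 1.2, 0.2, 0.0]            # one topographic-map row
    print(above_threshold(row))                          # [2, 3, 4]
    print(above_threshold(spread(row, gain=0.7)))        # [1, 2, 3, 4, 5]  expanded
    print(above_threshold(inhibit(row, gain=0.3)))       # [3]              contracted
    print(above_threshold(inhibit(row, gain=0.7)))       # []               suppressed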

1-Consciousness-Speculations-Space-Biology-Coordinates

coordinate transformation and allocentric space

People see objects in space as external and stationary (allocentric) [Rizzolatti et al., 1997] [Velmans, 1993]. Cerebellum and forebrain anticipate, coordinate, and compensate for movements.

Frontal-lobe topographic maps can represent egocentric space [Olson et al., 1999], with vertical, right-left, and front-back directions. Coordinate-origin egocenter is in head center, on a line passing through nosebridge. Space points have directions and distances from egocenter. All points make vector space.

As body, head, or eyes move, egocentric space moves, spatial axes move, and point coordinates and geometric figures transform linearly to new coordinate values [Shepard and Metzler, 1971]. Transformations are translation, rotation, reflection, inversion, and scaling (zooming). Motor processing uses tensor transform functions to describe changes from former to current output-vector field [Pellionisz and Llinás, 1982]. To maintain stationary allocentric space, so point coordinates do not change when body moves, visual processing must cancel egocentric spatial-axis coordinate transformations {coordinate transformation and allocentric space}. Visual processing inverts motor-system tensors to transform egocentric coordinate systems in opposite directions from body movements [Pouget and Sejnowski, 1997]. Topographic maps can describe tensors that transform from egocentric to allocentric space. Topographic maps can represent allocentric space.
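
A minimal two-dimensional Python sketch of the cancellation idea, assuming the motor system reports the body's heading and egocenter translation: the egocentric coordinates of a fixed world point change with every movement, but applying the inverse body transform recovers unchanging allocentric coordinates. The poses and the point are illustrative.

    import numpy as np

    def rotation(theta_deg):
        t = np.radians(theta_deg)
        return np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])

    # A fixed point in allocentric (world) coordinates.
    world_point = np.array([2.0, 1.0])

    # Body pose (heading, egocenter position) before and after a movement.
    poses = [(0.0,  np.array([0.0, 0.0])),    # before: facing 0 degrees at origin
             (30.0, np.array([0.5, 0.2]))]    # after: turned 30 degrees, stepped

    for heading_deg, egocenter in poses:
        body = rotation(heading_deg)
        # Egocentric coordinates change whenever the body moves...
        egocentric = body.T @ (world_point - egocenter)
        # ...but inverting the body transform (rotate back, translate back)
        # recovers the same allocentric coordinates, so the point stays put.
        allocentric = body @ egocentric + egocenter
        print(np.round(egocentric, 3), np.round(allocentric, 3))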

example

Translating and rotating make spatial axes change direction. After movement, new axes relate to old axes by coordinate transformations. For example, two-dimensional vector (0,1) can translate on y-axis to make vector (0,0), rotate both axes to make vector (1,0), or reflect y-axis to make vector (0,-1). Coordinate transformations do not change dimension number.
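
The three transformations in this example, written out as a small Python sketch; the matrix assumes the coordinate axes rotate by 90 degrees, which is one reading of the example.

    import numpy as np

    v = np.array([0.0, 1.0])

    translated = v + np.array([0.0, -1.0])        # translate along y-axis
    axes_rotated_90 = np.array([[0.0, 1.0],
                                [-1.0, 0.0]])     # coordinates after axes rotate 90 degrees
    rotated = axes_rotated_90 @ v
    reflected = v * np.array([1.0, -1.0])         # reflect the y-axis

    print(translated, rotated, reflected)         # [0. 0.] [1. 0.] [ 0. -1.]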

stationary space

Perception typically maintains an absolute spatial reference frame. Stationary space allows optimum feature tracking during object and/or body motions. Moving reference frames make all motions three-dimensional, but stationary space makes many movements one-dimensional or two-dimensional.

gravity and vertical direction

Gravity exerts vertical force on feet and body. Nervous system analyzes this distributed information and defines vertical axis in space {gravity and vertical direction}.

ground and mental space

Foot motions stop at ground. Touch and kinesthetic receptors repeatedly record this information. Nervous system analyzes this distributed information and defines a horizontal plane in space {ground and mental space}. Ground nearest to eye has sight-line perpendicular to ground. Farther-away ground points have sight-lines at smaller angles. All objects are on or vertically above ground.
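
A minimal Python sketch of the ground geometry, assuming an eye height of 1.6 meters: the sight-line-to-ground angle is 90 degrees at the feet and shrinks toward 0 degrees for distant ground points.

    import math

    eye_height_m = 1.6   # assumed eye height above the ground

    # Sight-line-to-ground angle for ground points at increasing distances.
    for distance_m in [0.0, 0.5, 1.6, 5.0, 50.0]:
        angle_deg = math.degrees(math.atan2(eye_height_m, distance_m))
        print(f"{distance_m:5.1f} m -> {angle_deg:5.1f} degrees")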

invariants and coordinate axes

Vision observes moving and stationary points in space with varying brightnesses and colors. Nervous system analyzes this information to detect perceptual invariants. For space, invariant points are stationary reference points. Invariant lines are stationary coordinate axes {invariants and coordinate axes}: vertical, horizontal right-left, and horizontal near-far. Because invariants stay constant over many situations, invariants can be grounds for meaning.

motions and touches

Nervous system correlates body motions and touch and kinesthetic receptors to extract reference points and three-dimensional space {motions and touches}. Repeated body movements define perception metrics. Ratios among repeated movements build standard length, angle, time, and mass units that model physical-space lengths, angles, times, and masses. As body, head, and eyes move, they trace geometric structures and motions.

tracking

During body movements, neuron activations follow trajectories across topographic maps. Brain can track moving stimuli. Brain can study before and after effects by tracking stimuli.

stimuli and motions

Stimuli can trigger attention and orientation, and so body moves or turns toward or away. Different stimulus intensities cause different moving or turning rates.

distance

Because distance equals rate times time, motion provides information about distances. Brain can track locations over time. Brain can use interpolation and extrapolation.

horizontal directions and motions

Moving toward or away from stimuli maximizes visual flow and light-intensity gradient, and establishes forward-backward direction. Moving perpendicular to sight-line to stimuli minimizes visual flow and light-intensity gradient, and establishes left-right direction.

vertical direction and motion

Body raising and lowering can indicate vertical direction.

orientation columns and direction

Vision topographic maps have orientation macrocolumns, which align and link orientations to detect line directions and establish all spatial directions {orientation columns and direction} [Blasdel, 1992].

pole and dimension

As body moves in a straight line, visual flow and light-intensity gradient establish one forward point (pole). Eye to forward point defines the forward-backward spatial dimension {pole and dimension}.

rotation centers and mental space

Body and body parts rotate around balance or equilibrium points {rotation centers and mental space}. Kinesthetic receptors send information to brain, which defines those reference points and builds three-dimensional space.

tensors and mental space

Topographic-map series can store matrices and so represent tensors {tensors and mental space}. Motor processing uses tensor transform functions to describe changes from former to current output-vector field [Pellionisz and Llinás, 1982]. Tensors can linearly transform coordinates from one coordinate system to another. Output vectors are linear input-vector and spatial-axis-vector functions. Motor-system topographic maps send vector-field output-vector spatial pattern to motor neurons. Muscles move body, head, and eye to specific space locations, or for specific distances or times. Current output-vector field differs from preceding output-vector field by a coordinate transformation.
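
A minimal Python sketch, assuming each map in a topographic-map series stores one row of a transform matrix: stacking the rows yields a rank-2 tensor, and applying it to every vector of the previous output field gives the current output field. The stored rows and output vectors are illustrative.

    import numpy as np

    # Each map layer holds one row of the matrix; stacked, the layers form a
    # rank-2 tensor (here, a 90-degree rotation of coordinates).
    map_layers = [np.array([0.0, -1.0]),
                  np.array([1.0,  0.0])]
    transform = np.stack(map_layers)

    # Output vectors sent to motor neurons on the previous movement (one per row).
    previous_output_field = np.array([[1.0, 0.0],
                                      [0.5, 0.5],
                                      [0.0, 2.0]])

    # The current output field differs from the previous one by the coordinate
    # transformation: apply the tensor to every previous output vector.
    current_output_field = previous_output_field @ transform.T
    print(current_output_field)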

topographic maps and coordinate axes

Topographic-map-neuron types have regular horizontal, vertical, and diagonal spacings, at different small, medium, and large distances. Neuron grids make a spatial network of nodes and links. Neuron grids allow measuring distances and angles and using coordinates. Topographic-map neuron grids have up/down, left/right, and near/far axes {topographic maps and coordinate axes}. Topographic-map spatial axes intersect to establish a coordinate origin and make a coordinate system, so points, lines, and regions have spatial coordinates.

Sensory topographic maps can have lattices of superficial pyramidal cells, whose non-myelinated non-branched axons travel horizontally 0.4 to 0.9 millimeters to synapse in clusters on next superficial pyramidal cells. The skipping pattern aids macrocolumn neuron-excitation synchronization [Calvin, 1995].
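
A minimal Python sketch of the neuron-grid idea above, assuming a hypothetical 0.5-millimeter node spacing: assigning map coordinates to grid nodes allows computing the distance and angle between two activated nodes.

    import math

    spacing_mm = 0.5   # assumed distance between neighboring grid nodes

    def map_coordinates(column, row):
        """Map coordinates (in millimeters) of the grid node at (column, row)."""
        return (column * spacing_mm, row * spacing_mm)

    # Distance and angle between two activated nodes, measured on the grid.
    x1, y1 = map_coordinates(2, 3)
    x2, y2 = map_coordinates(6, 6)
    distance_mm = math.hypot(x2 - x1, y2 - y1)
    angle_deg = math.degrees(math.atan2(y2 - y1, x2 - x1))
    print(distance_mm, round(angle_deg, 1))   # 2.5 36.9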

topographic maps and distances

Topographic maps have neurons specific for space locations {topographic maps and distances}. Locations involve space direction and distance. If 100 neurons are for radial distance one unit, to have same visual acuity 400 neurons must be for radial distance two units, because the area to cover grows with the square of the distance. To have less acuity, 100 neurons can be for radial distance two units.

vestibular system and direction

Vestibular-system saccule, utricle, and semicircular canals detect gravity, body accelerations, and head rotations. From that information, nervous system establishes vertical direction and two horizontal directions {vestibular system and direction}.

vision and direction

Animal eyes are right and left, not above and below, and establish a horizontal plane that visual brain regions maintain {vision and direction}. Vision processing can detect vertical lines and determine height and angle above horizontal plane. Body has right and left as well as front and back, and visual brain regions maintain right, left, front, and back in the horizontal plane.

