People can attend to verbal or other stimuli with intention to remember facts or events {learning}|.
types: knowledge
People can learn facts and concepts about world, declarative knowledge. People can learn how to perform tasks in world, procedural knowledge [Campbell, 1994].
types: sense qualities
Sense qualities change with learning or training. Learning affects verbal and spatial abilities. Sights, smells, feelings, and sounds change as relations to other things change. Long training typically makes sense qualities less salient or makes them vanish, as people do more things automatically and/or become habituated to stimuli. Varying information flow changes seeing [Underwood and Stevens, 1979].
properties: relearning
Relearning same verbal-item sequence requires fewer repetitions than the first learning.
requirements
Learning requires sensation and perception.
processes
Systems can learn if they or outside forces can alter system relations. Learning uses input information to direct mechanism that can change system relations and/or rules. Learning leads to new states or new state trajectories.
processes: behavior
Learning requires both old and new behaviors to exist simultaneously for testing and comparison until one proves better. Old-behavior structures still remain and are available for other uses.
processes: cognitive map
Animals can orient themselves in space and make cognitive maps of environment to guide behavior [1950].
processes: cue
New learning requires distinctive stimuli {cue, learning} to elicit new responses. Cueing sends signals to lower elements to get responses, along paths to elements or around circuits. Cueing can search, question, request address, activate behavior, or change state.
processes: description
Learning combines several descriptions into one description, groups incompatible descriptions, modifies description, or integrates structures, functions, or actions.
processes: drive reduction
Learning can involve drive reduction. However, learning can happen without drive reduction.
processes: expectation
Association cortex compares expected to actual, to maximize new information. People know expected value because they encounter same situations many times. Perceptual learning requires ability to detect differences.
processes: experience
Perhaps, learning phenomenal concepts requires phenomenal qualities. Perhaps, learning phenomenal concepts is purely physical.
processes: goal
Learning sets goals. However, learning can happen without goal seeking.
processes: imitation
Animals can change behavior by imitation.
processes: information
Minds learn patterns that have least amount of new information, because they happen most often and so are most redundant.
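As an illustration only (an information-theoretic reading not stated in the source), a pattern observed with frequency p carries surprisal -log2(p) bits, so the most frequent, most redundant patterns carry the least new information. The function and example data below are hypothetical.

```python
# Illustrative sketch only: frequent patterns have the lowest surprisal (-log2 p),
# so they carry the least new information. Example data is hypothetical.
import math
from collections import Counter

def surprisal_per_pattern(observations):
    """Return each pattern's surprisal -log2(p), where p is its observed frequency."""
    counts = Counter(observations)
    total = sum(counts.values())
    return {pattern: -math.log2(count / total) for pattern, count in counts.items()}

if __name__ == "__main__":
    # 'aba' occurs most often, so it is most redundant and carries least new information.
    patterns = ["aba", "aba", "aba", "aba", "abc", "bca"]
    for pattern, bits in sorted(surprisal_per_pattern(patterns).items(), key=lambda kv: kv[1]):
        print(f"{pattern}: {bits:.2f} bits")
```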
processes: memory
Learning stores information in mind. Learning {verbal learning} and memory can be about words, sentences, and stories. People can learn word sounds and visual appearances.
processes: parameters
Task uses muscles. Signals to muscles are parameter or variable values, which have limited range. Success requires combining parameter values. To learn to perform task requires methods to set and remember variable values and record success or failure. Upon failure, system suppresses parameter settings. Upon success, system enhances settings. Systems cannot change variables themselves, unless outside forces alter them or system level changes.
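A minimal sketch of the process described above, under the assumption that a single parameter is adjusted by keeping changes that succeed and discarding changes that fail; the function names and the success test are hypothetical.

```python
# Sketch: try parameter settings within a limited range, record success or failure,
# suppress settings after failure, enhance them after success. Names are hypothetical.
import random

def learn_parameter(evaluate, low=0.0, high=1.0, trials=50, step=0.1):
    """Adjust one parameter value toward settings that succeed more often."""
    value = random.uniform(low, high)
    for _ in range(trials):
        candidate = min(high, max(low, value + random.uniform(-step, step)))
        if evaluate(candidate) > evaluate(value):
            value = candidate   # success: enhance (keep) the new setting
        # failure: suppress the candidate by discarding it and keeping the old value
    return value

if __name__ == "__main__":
    target = 0.7   # hypothetical optimal muscle signal
    best = learn_parameter(lambda v: -abs(v - target))
    print(f"learned setting is about {best:.2f}")
```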
processes: reasoning
Animals can change behavior by reasoning.
processes: result knowledge
Direct and precise knowledge of action results is best for learning.
processes: repetition
For verbal items, more rehearsal improves learning and recall. More repetition also results in fewer errors. Learning longer sequences requires more repetition to achieve same success percentage. For example, learning twice as many items requires more than twice as much repetition.
processes: reward
Learning is painful because it is hard, slow, and takes time from other activities, so later rewards must overcome current pain. Reward affects practice amount, not learning itself. Rewards assist learning if people have physical or psychological overt response. Cognitive or mediational response, like idea, logical deduction, perception, or definition, does not help learning.
Rewards strengthen successful subsystem processes or combinations.
Reward for proper behavior is pleasure and satisfaction.
Stimulation variety is itself rewarding. The penalty for not seeking and not finding stimulation is boredom.
People do not need to know rewards for them to be good rewards.
Reward should immediately follow success, but not every time, so people do not expect it.
If organism has no unmet essential needs, it becomes active.
The punishment for biologically wrong behavior is unhappiness.
Rewards and punishments determine attention to features and objects, so learning affects attention.
Something that animal chooses over something else is rewarding. Rewards are relative.
processes: visualization
If people can visualize referents, learning word sets is easier.
causes
Responses to successes or failures in performing functions can cause learning. Evaluation function sends input information.
effects
Learning does not change fundamental behaviors, such as postures, calls, and scratching behaviors.
effects: action coordination
Learning coordinates actions.
effects: emotion
Emotions are automatic but learning and consciousness can affect them.
factors: amnesia
Amnesia still allows short-term memory, procedural learning, and conditioning. People can improve performance even if they cannot remember previous practice [Farthing, 1992] [Young, 1996].
factors: activity level
Person's activity level affects learning rate and retention.
factors: body geometry
Learning new behavior depends on body spatial geometry.
factors: exploration
Active exploration aids learning. Scanning and exploration precede understanding and decision.
factors: motivation
Interest and concern aid learning.
factors: metaknowledge in learning
Knowledge about knowledge aids memory and learning. Metaknowledge includes perception, thinking, purpose, situation, mental process, function, and set patterns. It relates to previous knowledge. It relates learning to larger units. It finds learning patterns. It applies knowledge to new situations.
factors: stress level
Person's stress level affects learning rate and retention.
factors: temperature
Person's temperature affects learning rate and retention.
biology
All mammals learn.
biology: ape
Apes recognize objects using fast multisensory processes and slow single-sense processes. Apes do not transfer learning from one sense to another.
The bonobo Kanzi learned to use and understand 150 words, typically to express desires or refer to present objects, using instrumental association. The words probably did not refer to things, as humans mean them to do. Kanzi did not learn grammar [Savage-Rumbaugh, 1986].
biology: invertebrate learning
Bees can learn [Menzel and Erber, 1978].
Fruitflies can learn by trace conditioning or delay conditioning [Tully and Quinn, 1985].
Snails can learn [Alkon, 1983] [Alkon, 1987].
biology: drug
Depressants and stimulants affect learning.
biology: cerebellum
All timed perceptions and responses, such as eye-blink conditioning, involve cerebellum.
biology: frontal lobe
Frontal lobe damage causes impaired associational learning.
biology: immediate early gene
Learning activates immediate early genes, which use cAMP signal path.
biology: ventral premotor area
New brain area in humans aids visually guided hand movements and learning by watching.
biology: zinc
Low zinc can cause slow learning.
People can learn behavior, perception, or statement {acquisition, learning}.
Remembering and analyzing stimuli and responses can form associations and generalizations from particular examples to class or set {chaining}| {verbal association}.
People can derive knowledge from experiences, perform other actions related to learned skills, predict future situations from past experiences, and make analogies {generalization, learning}|.
Learning {overlearning} can become automatic. Overlearning comes from frequently repeated experience.
Learning has stages {learning stages}.
stage 1
In learning stage one, students follow rules in proper order. Rules are "if A then B" statements (see the sketch after this stage). Rules can send student backward or forward to another rule or switch student to another rule set.
The first learning stage is to become familiar with situation, possible actions based on abilities, and possible goals, inputs, and outputs. Playing, reading, and wide experience all contribute to first stage. Students need time to gain experience in particular area and to organize data. Students do not take such time unless personal need or goal makes student want to take time to learn.
The first rules are general hypotheses and are often abstract and verbal, coming from parents or teachers. They can also be personal rules coming from previous situations.
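A minimal sketch, assuming stage-one rules are stored as "if A then B" condition-action pairs (jumps between rules and rule-set switches are omitted); the rules and facts are hypothetical.

```python
# Sketch: rules as condition-action pairs; fire the first rule whose condition holds.
def apply_rules(rules, facts):
    """Return the action of the first rule whose "if A" condition matches the facts."""
    for condition, action in rules:
        if condition(facts):
            return action
    return None

if __name__ == "__main__":
    # Hypothetical beginner's rule set.
    rules = [
        (lambda f: f.get("light") == "red", "stop"),
        (lambda f: f.get("light") == "green", "go"),
    ]
    print(apply_rules(rules, {"light": "red"}))   # -> stop
```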
stage 2
In learning stage two, students find rule incompleteness or inconsistency, or find cases not covered by rules, and modify rules to make them more complete, consistent, or specific to situation, based on environment facts. New statements have "if A" clauses including environment facts and "then B" clauses telling what to do if situation has that fact. Second stage is to pair input to output relative to task.
stage 3
In learning stage three, students reorganize rules to make hierarchy and group facts and rules. Rules and facts have importance. Learner makes overall action plan.
stage 4
In learning stage four, students see many similar situations and organize the whole scene or situation into a hierarchy. Students can have 10,000 to 100,000 possible situations. Rules can recognize situations.
stage 5
In learning stage five, students integrate goal, situation, and action into unconscious process and gain confidence and competence.
Affection and approval from friends and parents are strong rewards {social reinforcement}.
Learning skills or facts can affect performance on other tasks, as perceptual abilities and skills transfer from one body part to others {transfer of learning}| {learning transfer}. Learning transfer happens only in similar situations. Learning transfer can generalize stimuli to make a stimulus class. Perceptual-skill transfer goes from one sense to another sense. Motor skill transfer goes from one muscle to another muscle. Learning transfer can go from one body side to the other {bilateral transfer of learning}.
Learning can be physically disrupted {learning disorder}.
Neuron diseases {neurofibromatosis} can disrupt learning.
Organizing information using standard and general memory techniques {mnemonics}| aids learning and remembering. Mnemonics always uses mental imagery. For example, method of loci associates a sequence of familiar places with images about information, by attaching symbols to sequence of place objects.
Learning techniques {method of loci}| {loci method} can associate a sequence of familiar places with images {imagines} about information, by attaching symbols to sequences of place objects.
Learning skill has development stage, in which inadequate movements are only secondarily corrected. Learning skill then has skilled stage, in which secondary corrections become primary corrections, mind has developed movement pattern, and mistakes do not require secondary corrections.
Learning movements uses self-regulatory system. Movements start with goals, which provide models of expected future results.
All nervous-system levels integrate, from reflex or spinal level, to coordination or thalamo-striatum level, to spatial/symbolic or cortical level {afferent field, learning} [Bernstein, 1947] [Bernstein, 1967].
Learning involves dissociation and association equally {association theory}.
Learning finds optimum and maintains it {Baldwin effect}. Learning optimizes whole-system input-output function, by altering structures and relations, and requires method to inform system about optimum output.
Mind as whole has processing capacity, and brain modules have processing capacities {capacity model of learning}. For example, while learning word lists, seeing or hearing second list earlier or later uses mental capacity and interferes with learning list.
Simultaneity can be sufficient for learning {contiguity theory}, with no reinforcement. Mind automatically joins objects or events perceived or performed simultaneously.
Learning has eight types {cumulative learning theory} {cumulative learning model} [Gagné, 1977].
Several code types can operate in cognitive tasks {dual-coding hypothesis}. Learning can be passive increase in association strength during repetition, or it can be an active cognitive process using conscious strategies.
Recognizing image, situation, or problem type {learning set} can solve problem. Monkeys repeatedly trained to select one of two food objects improved learning speed. Perhaps, they learned rule: Correct means repeat, and incorrect means change to the other. All vertebrates show learning set formation, at similar rates [Harlow and Harlow, 1949].
Learning new behavior depends on learning thousands of simpler behaviors {learning unit}.
Memorizing uses attention and cognitive strategies, just like other cognitive processes {levels-of-processing model}. Memory strength depends on processing amount, which moves information to different coding levels in system: physical properties, phonemes, and semantic meanings. Recall is worse for incidental learning than for deliberate learning. However, studying difficult sentences longer does not increase memory ability. Coding phonemically does not necessarily code semantically.
Perhaps, learning requires rewards and reinforcement for motivation and attention {reinforcement theory}.
Animals form hypotheses and expectations. They can recognize problem types in environment. Signs or cues indicate problem type, especially goal type {sign-gestalt theory} [Tolman, 1932] [Tolman and Brunswik, 1935].
Learning first builds new stimulus-response associations {stimulus-response bond} and then organizes them into systems. Situations have specific responses and no general rules.
The more complex a skill is, the lower the optimal motivation for learning it {Yerkes-Dodson law}. Important goals aid simple learning but hinder complex learning.
Learning releases more vesicles from presynaptic terminals than sensitization does {activity-dependent enhancement}. Calcium ion binds to calmodulin, and complex binds to and activates adenyl cyclase. Increased transmitted glutamate binds to ionotropic alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic-acid receptor (AMPA receptor), which lets sodium ion in and potassium ion out. If action potentials increase, postsynaptic-membrane depolarization increases, and magnesium ions leave N-methyl-D-aspartate receptor (NMDA receptor) channels and go into intercellular space. NMDA receptors are glutamate-gated channels that can open with the artificial substance NMDA, which does not affect other glutamate-gated channels. The empty channel allows more sodium ion to enter, potassium ion to leave, and calcium ion to enter, and changes metabolism to make and send transmitter back to presynaptic terminal to make more action potentials.
conditioning
Fruitflies with mutations to proteins in this pathway cannot learn many classical conditioning tasks involving harmful stimuli, pleasant stimuli, and different responses. Mutants {dunce gene mutant} do not break down cAMP. Mutants {rutabaga mutant} can have little adenyl cyclase. Mutants {amnesiac mutant} do not make peptide transmitter that activates adenyl cyclase. Mutants {DCO mutant} can have altered cAMP-dependent protein-kinase-A catalytic subunits. Other mutants {cabbage mutant} {turnip mutant} also exist.
If stimulus precedes stimulus that causes behavior, first stimulus then causes the behavior {autoshaping}. Autoshaping is stimulus-stimulus response, as in classical conditioning.
If conditional stimulus pairs with reinforcer, and then second stimulus pairs with first stimulus and reinforcer, animals do not later respond to only second stimulus {blocking effect}. Low attention, little surprise, or looking for likely cause can cause blocking effect. However, cognition can prevent conjunctions from causing associations.
Learned associations can happen only at specific times {critical period}| {sensitive period} during development. Psychological processes can develop quickly over short times. For example, in the first year, children learn to trust other people. In preadolescence, delinquent behavior can begin.
Cerebellar cortex and interpositus nucleus store eye-blink conditioning {eye-blink conditioning}. Mossy-fiber input comes from pons and goes to granule cells, which send parallel fibers to Purkinje cells. Climbing-fiber input comes from dorsal accessory olivary nucleus and goes to Purkinje cells. Purkinje cells send to interpositus nucleus, which sends to superior cerebellar peduncle and then to red nucleus to perform conditioned response.
Learning tasks can use verbal-item lists. Recall can be in the same order {order recall} or any order {free recall}.
Reward or reinforcement can be greater or smaller to change behavior {response-stimulus conditioning} {habit formation} [Watson, 1913] [Watson, 1924].
In response to signal, people can unconsciously repeat mental tasks {habit learning}. Habit learning improves with practice. Habit learning involves activating neostriatum, caudate nucleus, putamen, and substantia nigra after learning. Neostriatum receives from sense and motor cortex and associates them. Substantia nigra and caudate nucleus have dopamine neurons. Perhaps, they are feedback channels for rewards.
If two inputs to one neuron are almost simultaneous {pairing}, either input later has larger effect on neuron than it did before {Hebbian learning}.
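A minimal sketch of a Hebbian weight update, assuming the standard form delta_w = eta * pre * post; the learning rate and activity values are hypothetical.

```python
# Sketch: near-simultaneous pre- and postsynaptic activity strengthens the connection,
# so either input later drives the neuron more strongly. Values are hypothetical.
def hebbian_update(weight, pre, post, eta=0.1):
    """Increase a connection weight in proportion to joint pre- and postsynaptic activity."""
    return weight + eta * pre * post

if __name__ == "__main__":
    w = 0.2
    for _ in range(10):                      # repeated pairing of the two inputs
        w = hebbian_update(w, pre=1.0, post=1.0)
    print(f"weight after pairing: {w:.2f}")  # larger than the initial 0.2
```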
Stimulus association can happen even with no reward {latent learning}. If animals can explore region before learning path to goal, learning is faster.
Mind can discriminate objects and events {multiple discrimination learning}, to understand scenes or situations.
Learning {observational learning} {imitation learning} can use watching and copying. More imitation results if imitated person's prestige is high, if imitated person is similar to imitator, if rewards are more, and if responses are specific.
Omitting expected reward {omission training} changes behavior.
Reading is a perceptual skill. Repeating perceptual discriminations in context {perceptual learning} unconsciously improves discriminations up to weeks later. Coordinating perception with action and adapting to new perceptions involve different learning than for concepts or conditioning.
factors
Discrimination depends on features such as texture, motion direction, and line orientation, with no reward or feedback. Seemingly, people learn underlying rules.
transfer
Perceptual learning does not transfer to other locations, other brain parts, or similar objects.
comparisons
Besides perceptual learning, there is also language learning and social learning, such as imitation, modeling, and teaching.
People can observe, manipulate, and analyze multiple examples, scenes, or situations to generalize and discriminate and to combine concepts to form principles or laws {principle learning}.
Subliminal learning is not effective {subliminal learning}| [Merikle, 2000].
Articulating repeated simple linguistic units while hearing target items decreases memory {articulatory suppression}. Articulatory suppression causes no difference in memory with different vowel sound lengths.
While learning two lists, people assign items to List1 or List2 and build concepts of List1 and List2 {list distinctiveness} {list differentiation} {discriminability}. More list repetitions make more discriminability.
People can see target sequence, then see distractor sequence {distractor, learning}, and then take test. People remember the first trial perfectly, by semantic coding.
Simultaneously presented unrelated linguistic items {irrelevant speech} decreases memory.
Children over five can use word that symbolizes category {mediated generalization} {learned generalization}. First, word overgeneralizes, and then word further discriminates.
Artificial syllables {nonsense syllable} have beginning and ending consonants and middle vowel. Consonant-vowel-consonant nonsense syllables can standardize material to learn. They can minimize effects of meaning, emotion, attention, imagery, and background knowledge. Nonsense syllables can prevent previous associations from affecting learning or memory [Ebbinghaus, 1913].
However, nonsense syllables are not equal in learning ease, because learners still try to match sound or symbol sequences to real words. People no longer use nonsense-syllable learning.
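A minimal sketch of generating consonant-vowel-consonant items as described above; the real-word exclusion list is a hypothetical placeholder.

```python
# Sketch: build CVC syllables and skip any that match a (hypothetical) real-word list.
import random

CONSONANTS = "bcdfghjklmnpqrstvwxz"
VOWELS = "aeiou"

def nonsense_syllable(avoid=("cat", "dog", "bed")):
    """Return a consonant-vowel-consonant syllable, retrying if it is a known real word."""
    while True:
        syllable = random.choice(CONSONANTS) + random.choice(VOWELS) + random.choice(CONSONANTS)
        if syllable not in avoid:
            return syllable

if __name__ == "__main__":
    print([nonsense_syllable() for _ in range(5)])   # e.g. ['zur', 'kib', ...]
```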
People can learn verbal-item-pair lists {paired-associate learning}. Later, learners hear or see the first item of pair, then recall second.
Given a sentence sequence, subjects recall sentence meaning and last sentence word {reading span task}. Number of sentences recalled correctly is reading span, which correlates strongly with prose comprehension and short-term-memory information content, better than with word span or digit span.
Adding an independent verbal item with a new vowel sound, in any language or with no meaning, to series ends can decrease memory {suffix effect}. Other items do not affect memory.
People can unconsciously learn repeated motor procedure {skill, learning}| {motor skill} in response to instruction or will. Skill improves with practice.
brain
At skill-learning beginning, prefrontal cortex stores temporary information, parietal cortex is for attention, and cerebellum coordinates movements. Skill learning enlarges sensorimotor cortex. After learning, neostriatum (caudate nucleus and putamen) activity increases and prefrontal-cortex, parietal-cortex, and cerebellum activity decreases.
practice
Training and experience make behaviors more coordinated, nuanced, and unconscious. Practice develops efficient strategies. Improvement with practice is rapid at first and then is gradual but always continuing.
properties
Skill holds over many years. Interference from other learning, not decay over time, causes people to forget discrete motor skills.
Skill involves learning efficient chained action programs, from initiation to result {performance strategy}. Performance strategy involves perceptions, analyses, and responses. Practice develops efficient strategies.
People can unconsciously learn to perform movement sequence {sequence learning}. Such motor skill learning improves with practice.
Mind can form object or event idea {conceptual learning}, by deriving abstract ideas and rules from perception.
comparison
Conceptual learning differs from action learning, conditioning, language learning, and social learning, such as imitation, modeling, and teaching.
process
To form concept, mind uses example object or event and then generalizes. Mind does not use abstract statements.
Mind compares later perceptions to generalized example, using both denotations and connotations for identification, categorization, and discrimination.
process: combination
New concepts can combine existing-concept parts. Methods of combining ideas are type, token, argument, function, predication, and quantification.
referents
Concept categories are actions, amounts, events, objects, places, paths, properties, and states. Concept categories include subjects, verbs, adjectives, and other syntactic categories.
Concrete concepts are easiest to learn. Spatial concepts are next easiest to learn. Number concepts are hardest to learn [Dehaene, 1997].
relations
Concepts depend on shared place or time {locational concept}, stimulus part {analytic concept}, idea or attribute {categorical concept} {superordinate concept}, or relation {relational concept}.
Older children use fewer relational concepts and more categorical and analytic concepts.
Inferences can be associations.
truth
Truth is judgment about concepts in conceptual structure.
status
Concepts can have good or poor articulation.
validity
Person's concepts can match other people's concepts.
biology
All mammals can form concepts.
Concepts can be communicable and so useful for others {accessibility, concept}. Models or interpretations can allow people to know possible worlds.
Object meaning depends on object actions, uses, movements, and interactions with other things {activity theory}. People develop meaning as they learn about motion types. Children learn how to move things and then build concepts of how things can move. Activities involve person's own movements and reactions and so are not merely symbolic.
Cognition involves different levels {cognitive unit}. Image is first-level unit. Object or image symbol is second-level unit. Concept or class of symbols, objects, or images is third-level unit. Rule about concept relations is fourth-level unit.
Concepts have forms, connect to other concepts using rules {conceptual well-formedness rule}, and belong to categories. These properties allow concept learning.
People have a meaningful visual-scene overview {gist}| [Biederman, 1972] [Hochstein and Ahissar, 2002] [Kreiman et al., 2000] [Mack and Rock, 1998] [Potter and Levy, 1969] [Wolfe and Bennett, 1997] [Wolfe, 1998] [Wolfe, 1999]. Perhaps, gist involves weak associations {proto-object} [Rensink, 2000]. Perhaps, gist involves weak associations {fringe consciousness} [Galin, 1997] [James, 1962].
People have thought formation process {ideation}|. New ideas combine existing-idea parts.
Animals seem to assume cognitive principle that effect requires cause {minimum sufficient causation}.
Mind can build object or event classes {categorization} {conceptualizing} {categorizing, learning} {category learning} and can apply verbal labels to objects or events. Categories have an overall concept.
categories
People typically use categories whose members have approximately same values for several independent attributes. People typically do not use categories based on relations between attributes. People typically do not use categories that have two member types, two relation types, or two attribute values.
Category members typically do not share necessary and sufficient attributes. Category members have many independent attributes, and members have different sets of values, with some values outside normal range. Different member pairs typically share different attribute values.
processes
Categorization can generalize several examples, combine existing categories, divide existing categories, or make analogies from existing categories to other objects or events. Learning generalizes unconsciously and consciously from specific objects, scenes, and situations to what they have in common, what is invariant, or what is similar. Perhaps, sensory cortex averages over examples.
processes: definition
To form category, propose category member, choose attribute, and use attribute value. For example, for bird, choose wing color, and use the color blue.
People typically do not define categories using non-member or opposite attribute value.
requirements
Categorization requires perceiving whole objects and their attributes or actions, understanding truth and falsehood, using reference and association, using words as symbols for things, knowing to which attributes people pay attention, and knowing what people already know.
development
Children first make semantic categories and then build grammatical categories.
Category items can be of same class {equivalence category} or be the same {identity category}. Items in equivalence category can have same attribute value or same attribute relations.
Find situation that makes one hypothesis true, find second situation that differs from first in only one way, and test hypothesis on second situation {conservative focusing}.
Find situation that matches one hypothesis, find any other situation, and test hypothesis on other situation {focus gambling}.
For situations, test all hypotheses {simultaneous scanning}.
For hypotheses, test all situations {successive scanning}.
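A minimal sketch of simultaneous scanning: every hypothesis is kept and tested against each situation, and contradicted hypotheses are dropped. The hypotheses and situations are hypothetical attribute tests.

```python
# Sketch: keep all hypotheses, eliminate those contradicted by each labeled situation.
def simultaneous_scanning(hypotheses, situations):
    """Return the hypotheses consistent with every labeled situation."""
    remaining = list(hypotheses)
    for attributes, is_member in situations:
        remaining = [h for h in remaining if h[1](attributes) == is_member]
    return [name for name, _ in remaining]

if __name__ == "__main__":
    hypotheses = [
        ("blue things", lambda a: a["color"] == "blue"),
        ("large things", lambda a: a["size"] == "large"),
    ]
    situations = [
        ({"color": "blue", "size": "small"}, True),    # a small blue member
        ({"color": "red", "size": "large"}, False),    # a large red non-member
    ]
    print(simultaneous_scanning(hypotheses, situations))   # -> ['blue things']
```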
Behavior that satisfies need reduces drive stimuli {drive reduction} and so causes reinforcement [Hull, 1940] [Hull, 1943].
Deviation from equilibrium {need, learning} causes drive stimuli. Needs are physiological {primary need} or psychological {secondary need} [Hull, 1940] [Hull, 1943].
People can try to memorize {deliberate learning}. Recall is worse for incidental learning than for deliberate learning.
Doing cognitive tasks strengthens cognitive processes and results in memory {incidental learning}. Recall is worse for incidental learning than for deliberate learning. People can learn just by observation, consciously but with no instructions how to learn or to what to attend and no reason to learn.
Learning {conditioning, learning}| can be association between stimulus and response or response and reward.
theories
Main theories about conditioning are stimulus-stimulus (S-S), stimulus-response (S-R), and expectancy [Watson, 1913] [Watson, 1924].
factors
Animal drives, habits, and sensitivities affect conditioning.
factors: reward
Punishment intensity or reward intensity affects conditioning speed and effectiveness.
Conditioning can depend on reinforcement unexpectedness. Surprise is a cognitive act.
factors: stimulus
The stronger the conditioned stimulus, the greater the reflex {stimulus strength, conditioning}.
effects
Only conditioning can alter autonomic nervous system, which controls heart rate and blood pressure. Conditioning can alter voluntary nervous system.
timing
Maximum interval for conditioning is 30 minutes, but 0.5 sec is best.
biology
Conditioning is in brains, not peripheral organs.
biology: animals
Classical and instrumental conditioning are similar in many species [Hull, 1940] [Hull, 1943].
awareness
Instrumental conditioning can reflect learning about relationship between action and reinforcement, rather than just unconsciously increasing reflex or habit frequency.
If unconditioned stimulus elicits response and stimulus pairs in space and time repeatedly with another stimulus, second conditioned stimulus elicits conditioned response {classical conditioning}| {signal learning} {Pavlovian conditioning}.
properties: rules
Conditioned stimulus must predict conditioned response {contingency, conditioning}. Conditioned stimulus must be close in time to unconditioned stimulus {temporal contiguity, conditioning}.
passivity
Conditioning does not depend on human or animal actions. Pavlovian conditioning is unconscious for reflexes, autonomic nervous system, and emotions.
extinction
If pairing ceases, conditioning decreases by extinction.
comparison: sensitization
Classical conditioning is stronger and longer than sensitization.
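A minimal sketch of acquisition and extinction, assuming a simple prediction-error update (an interpretation, not the source's stated mechanism): associative strength rises while conditioned and unconditioned stimuli are paired and decays when pairing ceases. The learning rate is hypothetical.

```python
# Sketch: associative strength moves toward the US magnitude during pairing and
# toward zero during extinction. Parameters are hypothetical.
def update_strength(strength, us_present, rate=0.2, us_magnitude=1.0):
    """Move associative strength toward the US magnitude (or toward 0 if the US is absent)."""
    target = us_magnitude if us_present else 0.0
    return strength + rate * (target - strength)

if __name__ == "__main__":
    strength = 0.0
    for _ in range(20):                      # acquisition: CS paired with US
        strength = update_strength(strength, us_present=True)
    print(f"after pairing:    {strength:.2f}")
    for _ in range(20):                      # extinction: CS presented alone
        strength = update_strength(strength, us_present=False)
    print(f"after extinction: {strength:.2f}")
```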
Classical conditioning can teach people to avoid taste {conditioned taste aversion}.
For reflexes, classical conditioning can apply conditioned stimulus and then unconditioned stimulus, to cause conditioned response {delay conditioning} [Carrillo et al., 2000] [Carter et al., 2003] [Clark and Squire, 1998] [Clark and Squire, 1999] [Han et al., 2003] [Knuttinen et al., 2001] [Lovibond and Shanks, 2002] [Öhman and Soares, 1998] [Quinn et al., 2002].
Shock, noise, or scary image {fear conditioning, learning} {conditioned fear} changes skin conductance or makes animal stand still. Putting animal in same location used for fear conditioning causes fear {context fear conditioning} [Quinn et al., 2002].
In senses, when second stimulus follows first stimulus, second stimulus can pair with first stimulus {sensory preconditioning}. Second stimulus can cause the behavior that first stimulus causes. Sensory preconditioning is stimulus-stimulus classical conditioning.
Classical conditioning can use conscious conditioned stimuli {trace conditioning}, which involve declarative memory [Carrillo et al., 2000] [Carter et al., 2003] [Clark and Squire, 1998] [Clark and Squire, 1999] [Han et al., 2003] [Knuttinen et al., 2001] [Lovibond and Shanks, 2002] [Öhman and Soares, 1998] [Quinn et al., 2002].
After conditioning, conditioned stimuli elicit the same response {conditioned response} (CR) that unconditioned stimuli elicit.
After conditioning, stimuli {conditioned stimulus} (CS), such as musical notes, that were neutral before conditioning elicit conditioned responses.
Stimuli {unconditioned stimulus} (US) can naturally elicit behavioral responses and can pair in space and time with conditioned stimuli.
Conditioned stimuli have an optimum interval, starting 0.2 to 1 second before unconditioned stimulus and ending when both stimuli stop simultaneously {activity dependence}.
Conditioned stimuli must predict conditioned responses {contingency}.
Conditioned stimuli must be close in time to unconditioned stimuli {temporal contiguity, learning}.
If a stimulus elicits a response, and then organism gets a reward, response frequency to stimulus increases {instrumental learning}| {instrumental conditioning} {stimulus-response learning} {trial and error learning}.
process
Learning can be by trial and error, using instinctive movements. Accidental successes have satisfying effects. Failures have annoying effects. Behavior changes gradually, rather than by sudden insights. Over time, only correct movements survive.
Training on one task can transfer to training on different tasks, but does not necessarily transfer [Thorndike, 1903] [Thorndike, 1911].
emotion
People learn reactions, such as aggression, withdrawal, and persistence, to emotions through instrumental conditioning.
If organism performs behavior and receives reward, response frequency increases {operant conditioning, learning}| {response conditioning}. Higher animals can perform new behaviors, and rewarded operants reappear more frequently. Response conditioning does not associate stimulus and response. Operant conditioning does not need goals, only rewards. Operant conditioning is stronger if rewards are unpredictable [Watson, 1913] [Watson, 1924].
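A minimal sketch, assuming each spontaneous behavior has an emission weight that grows when the behavior is rewarded, so rewarded operants reappear more often; the behaviors, weights, and reward rule are hypothetical.

```python
# Sketch: behaviors are emitted in proportion to their weights; rewarded behaviors
# gain weight, so they recur more often. Behaviors and values are hypothetical.
import random

def choose_behavior(weights):
    """Pick a behavior with probability proportional to its weight."""
    return random.choices(list(weights), weights=list(weights.values()))[0]

if __name__ == "__main__":
    weights = {"press lever": 1.0, "groom": 1.0, "turn": 1.0}
    for _ in range(200):
        behavior = choose_behavior(weights)
        if behavior == "press lever":        # only lever pressing is rewarded
            weights[behavior] += 0.5
    print(weights)                           # lever pressing comes to dominate
```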
Instrumental learning experiments can use maze or box {puzzle-box}, from which animal escapes [Thorndike, 1903] [Thorndike, 1911].
Operant conditioning can happen in spontaneous, not learned, motor activities. Reinforced actions increase in frequency. Reward kinds and timing {token economy} affect instrumental conditioning [Bekhterev, 1913].
People repeat behaviors useful for drive and need reduction {continuity theory of learning}. As they develop, children internalize repeated actions [Hull, 1940] [Hull, 1943].
As children develop, they internalize repeated actions {behavior segment}. Practice leads to memory. Young children cannot combine behavior segments, but older children can combine two behavior segments to reach goal [Hull, 1940] [Hull, 1943].
Shock, noise, or scary image {fear conditioning, freezing} can make animal stand still {freeze, animal}|.
Fear conditioning changes skin conductance {galvanic skin conductance}.
External stimulus can cause covert internal response {mediation, stimulus}, which causes internal stimulus, which causes body response.
Over time, without stimulus repetition, conditioned responses to conditioned stimuli decrease {forgetting}|. Over time, without reinforcement, instrumental responses to conditioned stimuli decrease. All stimulus-response associative links or conditioned reflexes gradually disappear without reinforcement.
cause
Forgetting happens because events repeat without reward, not because time passed or people did not use response.
level
Forgetting can be complete, with no response or memory.
purpose
Forgetting allows retaining most-useful information.
forgetting rate
Maximum forgetting rate is immediately after learning. Forgetting rate decreases over one day and then levels off.
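A minimal sketch, assuming an exponential retention curve, which has its steepest drop immediately after learning and then levels off; the time constant is a hypothetical illustration, not a measured value.

```python
# Sketch: exponential forgetting curve; the decay is fastest right after learning.
import math

def retention(hours, time_constant=24.0):
    """Fraction of material retained after the given number of hours."""
    return math.exp(-hours / time_constant)

if __name__ == "__main__":
    for hours in (0, 1, 6, 24, 72):
        print(f"{hours:3d} h: {retention(hours):.2f} retained")
```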
If stimulus pairing ceases, conditioned response fades {extinction, learning}|. Extinction has same stages and processes as conditioning. Near extinction time, activity level, response variation, and response force increase [Watson, 1913] [Watson, 1924].
Other learning causes forgetting {suppression, learning}|. Newer memories can modify older ones. More suppression results when more activities intervene between learning and recall. Blocking new learning prevents suppression.
Pleasurable or painful experience, above minimum level but not beyond maximum intensity, strengthens the bond between stimulus and response {law of effect}. People learn, remember, and repeat actions that immediately lead to pleasure, and these become habits. People do not remember actions leading to pain, to avoid painful behavior later [Thorndike, 1903] [Thorndike, 1911].
Repeating response under good conditions strengthens stimulus-response association, and reinforcement increases practice {law of exercise} {law of use} [Thorndike, 1903] [Thorndike, 1911].
Learning can happen if learner can respond, has interest, has background knowledge, is mature enough, and has motivation {law of readiness} [Thorndike, 1903] [Thorndike, 1911].
Behaviors can be similar to previous behaviors {response-response law} (R-R law).
Behaviors can always happen, given environment states or events {stimulus-response law} (S-R law).
Training can concentrate in short time, without rest intervals {massed training}.
Training can spread over long time, with rest intervals {spaced training}.