CogSci Exam #4
Term | Definition |
---|---|
Types of LTM | 1. declarative memory 2. episodic memory 3. procedural memory |
Declarative memory | classified as "I know that" - EX: a dog is an animal |
semantic memory | general knowledge/ information that one knows → feels like “knowing” (data, facts, language) - EX: a dog |
Episodic memory | autobiographical/ personal → feels like “remembering” (it is dated- time and place) - EX: my dog |
Episodic memory personal due to: | - self-referencing effect - Personal experiences cause emotions → activating amygdala (emotion center of the brain) → activates hippocampus (forming new memories/ long term memories) |
Self-referencing effect | info relevant to YOU is remembered better |
Types of Episodic memory | - flash bulb memories |
Flash bulb memories | very vivid/ long-lasting by triggering emotions/ often come abt because they are unexpected EX: 9/11 → unexpected/ very emotional bc it felt so surreal |
Procedural Memory | knowing “how” - EX: riding a bike - Motor skills → catching a ball - Cognitive skills → knowing grammar |
Working Memory Model- Baddeley & Hitch (1974) | - expansion of STM - Analogy of a mental workbench → storage/ active manipulation of info - Info is actively worked on - Information being taken in + working w/ the info |
Several components of WM | 1. central executive 2. visuo-spatial sketchpad 3. phonological loop 4. episodic buffer |
Central executive | Planning, making decisions, executes actions - Allocates attention/ governs responses → kinda like the allocation policy - Uses lower-level “assistants” → other sub components |
Visuo-spatial sketchpad | Processes & stores visual/ spatial info - Making an image of what info is being worked with |
Visual | what we see |
Spatial | relationship amongst things we see (calculate distance, what is above or below) |
Phonological loop | Processes & stores auditory info - phonological store + articulatory rehearsal loop |
Phonological store | inner ear that HEARS/ stores incoming verbal info |
Articulatory rehearsal loop | inner voice that REHEARSES/ REPEATS the verbal info until done w/ it |
Episodic buffer | Integration & storage of info from diff parts of WM into a single representation (an “episode”) - EX: taking notes |
Architecture in Neural networks | the structure of the network shapes how info is stored + retrieved |
Mind as a Network | mind is seen as a collection of interconnected units - units are linked → the linking creates a network (a net- like a fisherman’s net) |
Brain as a Network | interconnected neurons that activate each other (action potential) → neuronal activity that underlies all COGNITION - Individual units= neurons → the linkages= axons + dendrites |
Issues Under Consideration when Knowledge is a Net | 1. Knowledge representation 2. Functional architecture |
Knowledge representation | how information is represented - Symbols in a network vs. patterns of activation of neurons |
Functional architecture | how information is processed - Processing of info in stages (serial) vs. in parallel (simultaneously/ along side) |
Semantic networks | Knowledge + memory are stored as interconnected concepts + propositions (relationships) - Provides a way to talk abt representation, organization, storage, retrieval of knowledge (information) (LTM) |
Assumptions (semantic networks) | Concept → a fundamental unit of symbolic knowledge - Within a network, concepts are represented by NODES (basically a circle or dot in that network) - connected by links - concepts get activated - spreading activation |
Concepts get ACTIVATED | trigger part of your neural network/ gets woken up |
Spreading activation | mechanism for accessing/ retrieving information - Spread of activation is faster for CLOSELY RELATED concepts (the more related, the faster) → see the code sketch after the SAM cards below |
Many semantic network models w/ similar assumptions: | models differ in their exact form/ structure (how they look) |
Proposals | 1. Spreading Activation Model (SAM) 2. Propositional Semantic Network |
Spreading Activation Model (SAM) → Collins & Loftus (1975) | All assumptions of network - Nodes (concepts) → w/ features, links, spreading activation |
Includes degree of semantic relatedness | → more related concepts → closer/ shorter connections → to handle the typicality effect |
Typicality effect | more typical members are verified faster (EX: faster to verify a close concept like “a robin is a bird” than a farther one like “an ostrich is a bird”) |
Includes strength of connections | → higher frequency, stronger (more weight) connections → to handle frequency effect |
SAM model | model is good for simple facts abt objects → ONLY objects/ features related to those objects |
Frequency effect | how often you hear something → higher-frequency concepts are verified faster |
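A minimal Python sketch of spreading activation over a toy semantic network. The concepts, link weights, decay, and threshold are all invented for illustration; closer/stronger links pass on more activation, which is how the model handles the typicality effect:

```python
# Toy semantic network: each concept links to related concepts with a weight.
network = {
    "bird": {"canary": 0.9, "ostrich": 0.4, "animal": 0.7},
    "canary": {"yellow": 0.8, "sings": 0.8},
    "ostrich": {"tall": 0.6},
    "animal": {"dog": 0.7},
}

def spread(start, decay=0.5, threshold=0.1):
    """Fully activate `start`, then pass weakened activation along links."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in network.get(node, {}).items():
            new_act = activation[node] * weight * decay
            if new_act > activation.get(neighbor, 0) and new_act > threshold:
                activation[neighbor] = new_act  # stronger link -> more activation
                frontier.append(neighbor)
    return activation

print(spread("bird"))  # 'canary' ends up more active than 'ostrich' (typicality)
```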
Propositional Semantic Network → Anderson & Lebiere (1998) | Representations → are abstract propositions → “mentalese”= language of propositions (thoughts) - EX: the cat is under the table |
Proposition | composed of a relation + its arguments (concepts) |
relations | verbs, adjectives, other relational terms |
arguments (concepts) | nouns (time, places, people, objects) |
Abstract / underlying meaning abt concepts/ their relationships | → NOT specific image, word, or statement → UNDER (cat, table) → UNDER= relationship + cat/ table= arguments (concepts) |
Represented by NODE w/ links radiating away | Links point to concepts Links stand for diff parts of proposition → agent link → object link → relation link |
agent link | subject; performing action |
object link | object; action is directed to |
relation link | specifies the relationship |
Visual info/ verbal info --> | encoded / stored as propositions |
At retrieval --> | a proposition is activated (retrieved)/ translated back to the verbal code or visual code |
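A hypothetical sketch of how a proposition like UNDER(cat, table) could be stored as a node with relation/agent/object links and translated back to a verbal code at retrieval. The class and field names are illustrative, not from the model itself:

```python
from dataclasses import dataclass

@dataclass
class Proposition:
    relation: str  # relation link -> verb/adjective/relational term
    agent: str     # agent link -> the concept performing/located
    obj: str       # object link -> the concept the relation points at

    def to_verbal(self):
        # At retrieval, the abstract proposition is translated to a verbal code.
        return f"the {self.agent} is {self.relation} the {self.obj}"

p = Proposition(relation="under", agent="cat", obj="table")
print(p.to_verbal())  # "the cat is under the table"
```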
Connectionist cognitive science | newer proposal/ started popping up in AI → completely inspired by brain + neurons (biologically plausible) → tied to biology/ brain |
Alternative to classical cognitive science | → Information processing approach= where the mind is an information processor → mental representations= symbols + mental operations |
Approach is successful w/ WELL-DEFINED PROBLEMS | EX: diff types of games (chess), math → the stuff of experts, but machines can do it |
LESS successful w/ ILL-DEFINED PROBLEMS | EX: language production/ speech recognition, seeing/ making sense of what you see → machines are horrible at it |
Connectionism | The mind is NOT a serial, centralized computer NO symbols/ NO rules operating on symbols - The nodes= units of neurons |
Representations | distributed across nodes + their connections → patterns of activation over nodes in a network |
Connectionist Networks | Aka artificial neural networks (ANNs) Computer simulations of groups of neurons performing tasks |
Traditional computer (classical cog science) | - serial processors - EX: A --> B --> C - boxes are processing units - arrows represent flow of INFO |
serial processors | ONE computation at a time |
Knowledge-based problem solving | using algorithms/ procedures (step by step instructions) - Uses symbols + operations → info processor= mental representations= symbols - Planned steps → coding is planned |
ANNs (connectionist networks) | - parallel distributed processing - behavior-based problem solving |
Parallel distributed processing (PDP) | Large number of computing units calculating SIMULTANEOUSLY (occurring at the same time) → like the brain EX: A ←> B ←> C |
Behavior-based problem solving | NO need for symbols/ rules operating on symbols - Network does computing without PRIOR PLANNING - Focus on the behavior of the network |
Representation of Knowledge | Traditional computer (classical) & semantic networks → information in the form of symbols - LOCAL representation |
LOCAL representation | Stored in a single node EX: single node of an apple is all the characteristics of an apple |
ANNs (representation of knowledge) | - DISTRIBUTED representation - More focused on the PATTERNS of neurons firing, NOT so much on the labels bc no need for symbols or storage → brain gets activated by stimuli coming in |
DISTRIBUTED representation | Information stored as patterns of activation across nodes EX: representation of fruits EX: representation of family members |
EX: representation of fruits | → symbol= “apple” can be explained by words → image= draw an image of an apple → units= draw 3 circles w/ links/ shade the two circles that represent the fruit → shade the “edible” circle + the “red” circle= activation of edible + red= apple |
EX: representation of family members | → Local= Node A (Dad), Node B (Mom), Node C (Son) → PDP= fully shade one circle/ half shade the other two (Dad pattern), fully shade two circles/ half shade the other (Mom pattern), shade two circles/ half shade one (Son pattern) (pattern of son will have both mom and dad factors) |
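A toy contrast between the two schemes from the cards above, sketched in Python. The feature set and vectors are made up:

```python
# LOCAL: one node per concept (a one-hot vector) -- "apple" lives in one unit.
local_apple = [1, 0, 0]   # units: [apple, banana, cherry]

# DISTRIBUTED: each concept is a pattern of activation over shared feature units.
features = ["edible", "red", "round", "long"]
distributed = {
    "apple":  [1, 1, 1, 0],  # edible + red + round
    "banana": [1, 0, 0, 1],  # edible + long
    "cherry": [1, 1, 1, 0],  # overlaps with apple -> similar things, similar patterns
}
```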
ANNs based on Real Neural Networks (Donald Hebb) | → psychologically, learning recruits groups of neurons; biologically, repeated use keeps those neurons firing together - Neurons repeatedly activate each other → increases the strength of their connections (see the sketch after the cell-grouping cards below) |
Assembly | Learning recruits a group of neurons - Assembly undergoes PERMANENT changes - Neural basis for learning / memory EX: memory for phone #= assembly of neurons |
Two types of Cell Groupings | 1. Cell assembly 2. Phase sequence |
Cell assembly | small group of neurons repeatedly stimulate each other |
Phase sequence | set of cell assemblies activating each other → when all cell assemblies work together and activate each other= phase sequence for “apple” EX: cell assembly #1= red, cell assembly #2= round, cell assembly #3= sweet |
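A minimal sketch of Hebb's idea that repeated co-activation strengthens a connection. The learning rate, trial count, and unit activities are made-up values, and the update rule shown (delta_w = eta × pre × post) is the textbook simplification:

```python
weight = 0.0
learning_rate = 0.1

for trial in range(10):
    pre, post = 1.0, 1.0                   # both neurons fire on every trial
    weight += learning_rate * pre * post   # delta_w = eta * pre * post

print(round(weight, 2))  # 1.0 -- repeated co-activation built a strong connection
```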
Characteristics of ANNs | 1. Based on real neural networks 2. Node= neuron 3. Links have weights --> strength of a link |
Based on REAL neural networks | Neurons/ their connections (axons-to-dendrites) - Links from one node to another |
Node= neuron (basic computing unit) | Gets STIMULATED (activated) - Has an activation threshold - Input exceeds threshold → fires (when a neuron reaches threshold + fires= action potential) - Linked to other nodes → spreading activation |
Links have weights → strength of a link | range from negative (-1) through zero (0) to positive (+1) - the output of a node serves as input to the next node= ADD up the outputs |
Output of node: | activation value X link weight - EX: link weights (.6 & -.3), activation values in the bottom circles (2 & 1) → 2 X .6= 1.2 & 1 X -.3= -.3 → 1.2 + (-.3)= 0.9 (input to the next node) |
Each node: | has its own activation THRESHOLD/ if the input reaches the threshold it will FIRE, but if it doesn’t= it WON’T FIRE |
Positive stimulation | excites next node to fire |
Negative stimulation | inhibit next node from firing |
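A small Python version of the node computation above, reusing the worked numbers from the "Output of node" card. The threshold of 0.5 is an assumed value:

```python
def node_output(activations, weights, threshold=0.5):
    total = sum(a * w for a, w in zip(activations, weights))  # 2*0.6 + 1*(-0.3) = 0.9
    return total, total >= threshold  # fires only if input reaches the threshold

total, fired = node_output([2, 1], [0.6, -0.3])
print(total, fired)  # 0.9 True -- positive weights excite, negative weights inhibit
```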
What are neural network layers? | Each layer is like an “assembly of neurons” - There will always be ONE INPUT layer/ ONE OUTPUT layer - Number of hidden layers varies - The learning process of a neural network is performed within the layers |
Input Layer | receives inputs from an external source - One per neural network - Takes in inputs, performs calculations, outputs to the next layer |
Hidden Layer(s) | in-between input/ output layers; thus “hidden” - Zero, one, or more layers - More hidden layers, more complex problems solved |
Output layer | produces final results - One per neural network - Takes results from the previous layer, performs calculations, outputs the results |
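A sketch of one forward pass through a minimal 3-layer network (input → hidden → output). The weights, input pattern, and simple threshold activation are all invented for illustration:

```python
def layer(inputs, weights, threshold=0.5):
    """Each node sums its weighted inputs and fires (1) if it reaches threshold."""
    return [1 if sum(i * w for i, w in zip(inputs, ws)) >= threshold else 0
            for ws in weights]

stimulus = [1, 0, 1]                        # pattern presented to the input layer
hidden = layer(stimulus, [[0.4, 0.1, 0.3],  # hidden layer: 2 nodes
                          [0.2, 0.9, 0.1]])
output = layer(hidden, [[0.6, 0.7]])        # output layer: 1 node
print(hidden, output)                       # [1, 0] [1]
```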
Modern ANNs with back-propagation learning | At least 3-layer network → input, hidden, output layers - processing - initial response (actual) compared to desired output - capable of learning |
Processing | Stimulus presented to the input layer - Activates hidden layer(s) - Activates output layer - Generates an initial response → the initial response is compared to the desired output |
Initial response (actual) compared to desired output | Any difference= error signal Error signal feeds back into output layer Connection weights are modified Cycle repeats until correct response |
error signal | any difference between the actual + desired output (made a mistake= send it back through the output layer) |
Capable of Learning | Learning based on error feedback → back-propagation (try again) - Capable of learning on its own |
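A deliberately simplified sketch of the error-feedback cycle these cards describe, using a one-weight delta rule rather than full multi-layer back-propagation; the input, target, and learning rate are invented values:

```python
weight = 0.2                    # initial connection weight
stimulus, desired = 1.0, 0.8
learning_rate = 0.5

for cycle in range(20):
    actual = stimulus * weight  # stimulus flows through to an actual response
    error = desired - actual    # error signal: actual vs. desired output
    if abs(error) < 1e-6:
        break                   # correct response reached, stop cycling
    weight += learning_rate * error * stimulus  # modify the weight, repeat

print(round(weight, 3))  # ~0.8 -- the network learned from its own errors
```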
Symbol Grounding Problem | The problem of how words get their meaning → related to the problem of meaning (semanticity, etc.) - Related to the problem of what are mental states + consciousness - Physical systems ←> subjective experiences (CONNECT) |
Referent | the thing that a word or phrase denotes or stands for |
Meaning as Referential Process | Meaning of a word is “picking out” its referent (what it is referring to) |
if a word’s (symbol) meaning= | process of picking out its REFERENT → that says that the WORD EXISTS INSIDE SOME ENTITY → word is inside that entity/ that entity uses that word to pick out its referent |
MEANING REQUIREMENTS= | combination of the entity (brain) + the word (inside that brain) + the object (outside- referent) + the process to pick out the referent (successful connection) |
Grounding Process | If a word’s meaning is picking out its referent, then meaning is in the brain/ mind - NO CONNECTION of symbols and their intended referents without the mind to mediate those intentions |
Meaning of a word on a page= | UNGROUNDED (meaningless/ no meaning) |
meanings of words/ symbols (that are understood by the individual)= | GROUNDED in one’s brain/ mind |
2 Requirements for Symbol Grounding | 1. The capacity to pick referents 2. Consciousness |
The capacity to pick referents | EX: a piece of paper or book/ any symbol system alone CANNOT pick out referents (lacks the capacity) |
Consciousness | Groundedness is NECESSARY (tying a symbol to its referent), but not SUFFICIENT (not enough) - grounding might be done by robots |
Computationalism | Type of functionalism - The brain is like a computer/ the brain’s picking out of referents is computational - an algorithm can be run by brains or computers → the physical system is irrelevant - Meanings= not ONLY IN THE BRAIN, but also in a COMPUTER |
Chinese Room Argument Against Computationalism (John Searle) | Searle says a computer takes in Chinese input/ produces Chinese output like a native speaker of Chinese (does the computer understand what it’s doing?) - NO bc words on a page/ in computer programs are MEANINGLESS |
Artificial Intelligence (AI) | Build machines to have human intelligence & mind - problem solving, learning, face recognition, language - Solve real-world problems - Ultimate goal → total integration of all human cognitive abilities into a machine |
Categories of AI | 1. Artificial Narrow Intelligence (ANI) 2. Artificial General Intelligence (AGI) 3. Artificial Super Intelligence (ASI) |
Artificial Narrow Intelligence (ANI) | weak AI - Completes specific tasks (such as gaming systems/ ChatGPT) - the easy stuff → TODAY, AI can still only do narrow things |
Artificial General Intelligence (AGI) | strong AI - Comparable to humans → voice, tone, emotion - Designed to mimic humans w/ real characteristics - Getting close → almost there w/ the general knowledge |
Artificial Super Intelligence (ASI) | strong AI - Surpasses humans - NOT close/ would make human intelligence look like nothing |
Leibniz’s Universal Characteristic (1666) | a language of logical symbols; simple but represents all complexities EX: binary notation → 1= true, 0= false |
Turing’s Universal Computing Machine (1936) | an AI machine can be built using logical symbols |
Futurist events? → Ray Kurzweil | Computer program conquers chess (achieved 1997) - 2029: Turing test will be passed - 2030: nanobots → health/ connectivity - 2045: singularity achieved (humans will be one w/ the machines/ the only way to live forever) |
finite-state machine | Machine can solve any problem if it is mathematically solvable - Can transition from one state to another |
Finite-state loops | can be implemented as a series of executable instructions - EX: parking garage EXIT gate or elevator |
Human behavior/ thinking | series of activities - Machines can be built to replicate human behavior - EX: vacuum |
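A hypothetical finite-state machine for the parking garage EXIT gate example; the states, events, and transitions are assumptions for illustration:

```python
# Each (state, event) pair maps to the next state.
transitions = {
    ("closed", "car_detected"): "open",
    ("open",   "car_passed"):   "closed",
}

state = "closed"
for event in ["car_detected", "car_passed", "car_detected"]:
    state = transitions.get((state, event), state)  # transition between states
    print(event, "->", state)
```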
Turing Test (TT) | Ultimate test of AI - Machine’s intelligent behavior indistinguishable from human’s |
Standard scenario of TT | a human judge interacts w/ a person responder + a machine - only using keyboard/ screen - judge poses questions to both → if the judge can’t tell which one is the machine, then it passes the TT |
Loebner Prize | Annual TT competition since 1991 - bronze (text only- most human-like) → STILL HERE TODAY - silver (text only- passing test) - gold (text, audio, visual- passing test) |
Eugene Goostman (13 yr old Ukrainian) → chat bot | 10 out of 30 interrogators were convinced → they couldn’t tell the difference between the chat bot Eugene/ a real person |
Google’s LaMDA & Blake Lemoine | did NOT pass the TT - Blake believed LaMDA was self-aware, but others came in and said it was really sophisticated but NOT SELF-AWARE |
ChatGPT | has NOT passed the TT |
Silver/ gold Loebner Prizes have NOT been awarded | machines CANNOT yet PASS the TT - a thinking machine remains the holy grail of AI |
Claude 3 (Alex Albert) | claimed to be self-aware bc the chat bot recognized that it was being tested when asked a question abt pizza toppings |
Rodney Brooks (roboticist) | intelligence without mental representations - Said that intelligence can happen without representations and more behavior based, not abt knowledge |
Computation architecture → OLD WAY | created based on knowledge (knowledge-based) uses mental representations (symbols/ images) |
Built mental models of the world → | the mental models guided behaviors + cognition → more top-down processing + programming |
Subsumption Architecture (Brooks) | should be BEHAVIOR-BASED → more bottom-up processing + autonomous (automatic- does its own thing) - The WORLD IS THE MODEL |
SITUATED COGNITION | Cognition/ knowledge is situated in activity + interactions w/ the world - learning occurs through interactions w/ the world (occurs in the world where a person is situated) |
EMBODIED COGNITION | interactions with environment through body parts - rejects computationalism - cognition tied to body/ interactions w/ enviro - brain NOT a computer |
4 Key points | 1. Intelligence 2. Emergence 3. Situatedness 4. Embodiment |
1. Intelligence | Perceptual (vision)/ mobility skills= were developed for survival - Complex behavior → decomposed (broken) into sub-behaviors → each sub-behavior is like a finite-state machine |
complex behavior is a hierarchy of layers | each layer being a sub-behavior/ each sub-behavior is a finite-state machine (hence the name “subsumption”) |
Higher layer SUBSUMES (encompasses) the lower layers | integration of the layers → EMERGENCE of intelligence |
2. Emergence | integration of layers of sub-behaviors - Each layer is tested/ debugged (identify/ remove errors) - A new layer is over-laid + the combined layers are tested/ debugged - EX: an ANT (moves randomly) → see the code sketch below |
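A toy subsumption-style controller, sketched under the simplifying assumption that the highest active layer wins (real subsumption systems run all layers concurrently, with higher layers suppressing lower ones). The sensor fields and sub-behaviors are invented:

```python
def avoid_obstacle(sensors):  # lowest layer: a survival reflex
    return "turn" if sensors["obstacle"] else None

def wander(sensors):          # middle layer: move randomly, like the ant
    return "forward"

def seek_goal(sensors):       # highest layer: over-laid last, tested last
    return "approach" if sensors["goal_visible"] else None

layers = [seek_goal, wander, avoid_obstacle]  # highest layer first

def act(sensors):
    for layer in layers:
        action = layer(sensors)
        if action is not None:  # a higher layer subsumes the ones below it
            return action

print(act({"obstacle": False, "goal_visible": True}))   # approach
print(act({"obstacle": False, "goal_visible": False}))  # forward
```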
3. Situatedness | AI is situated in/ reacts to its environment - direct INTERACTION w/ the world through perception-action sequences → feedback loops help build an understanding of how the world works |
Through its sensors it will perceive things (situatedness) | with that sensory info it will react to it - NO need to build representations of the world using symbols - NO need for mental models |
4. Embodiment | AI is given a body to interact w/ the world + interactions w/ the environment through the body/ body parts (the situation that you place it in) |
Direct TESTING of AI (embodiment) | NOT a theoretical model of a “situation” - Possibly solves the symbol grounding problem |
Traditional (classic) Cognitive Science | Computationalism - Mental processes are computational → symbols, representations, rules - Brain IS a computer - Brain IS the seat of cognition |
Mind arises from | brain, body, bodily experiences within the world |
Embodied Cognition Inspired by 3 Fields | 1. Ecological psych 2. connectionism 3. phenomenology |
1. Ecological Psychology | AGAINST the idea of impoverished stimuli (the claim that the world does not provide full detail of stimuli to the brain) → the environment provides all info necessary - “visual info is impoverished”= the inverse optics problem |
inverse optics problem | the pattern of light on the retina could have come from infinitely many arrangements in the world |
Drawing inferences is required (traditional view): | perception of an object’s shape is inferred from computations → retinal image + knowledge of the object’s orientation → compute from what’s on the retina/ the object’s angle/ stored knowledge |
Visual perception is abt the whole body moving in the enviro (ecological view) | visual perception is DIRECT (NOT an inferred/ computational process) - continuous motion creates CONTINUOUSLY ever-changing INFINITE patterns of stimulation |
2. Connectionism | Computation over units → NO symbols, representations, rules Input info transformed into output info Non-symbolic cognition is possible |
3. Phenomenology | Emphasizes conscious, lived, subjective experiences Consciousness as grounded in rich/ varied experiences while moving around the world |
Bodily parts constitute our thoughts, experiences, consciousness | EX: extra eyes will change our consciousness |
Embodied Cognition 3 Themes | 1. Constitution 2. Conceptualization 3. Replacement |
1. Constitution | body plays a constituent role in cognition |
Cognitive system= | nervous system + sensory organs (eyes/ ears) + BODY/ body parts |
2. Conceptualization | concepts are EMBODIED |
Concepts contain contents abt: | sensorimotor information (senses/ movements- interact) |
Concepts formed are limited by the bodily parts | the body limits/ constrains what concepts can be LEARNED |
Different embodied animals | different CONCEPTS of environment → different understanding (those animals will have a different understanding on their environment) |
3. Replacement | abandon symbols, representations, rules, inferences, mental models, computationalism, traditional cognitive science |
Cognition w/ the 4E's | 1. Embodied 2. Embedded 3. Extended 4. Enactive |
1. Embodied | interactions w/ environment through body/ body parts |
2. Embedded | organized environments can help specific cognitive tasks |
Knowledge can be helped if environment is organized in such a way to enhance your knowledge | EX: calculate ¼ of 8 easier when interacting w/ pie pieces vs. viewing only → more hands on experience= easier to learn |
3. Extended | cognitive capacity (how much can be gained) is enhanced by environmental resources - Resources extend the cognitive system |
Cognition can occur outside the nervous system | EX: a phone becomes part of cognition bc it has become part of how a lot of people think/ remember |
4. Enactive | cognition emerges from or is constituted by sensorimotor activity |
Perception is achieved by: | actively / directly exploring the ENVIRONMENT - Emerges from senses/ movements w/ environment |
Cognition (4E) | is physically interactive (ENACTIVE), embedded in dynamically changing environments (EMBEDDED) w/ available resources (EXTENDED), and manifested in physical bodies (EMBODIED) |
Realization of an Intelligent Agent (IA) | Humans= intelligent agents (IA) - Machines= artificial intelligence (AI) - Interaction w/ the environment through a physical body |
IA → | an entity that perceives (using sensors)/ interacts w/ the environment using actuators (mechanisms/ motors) → AI is the “things” that an IA can DO |
Challenges for Designing IA | 1. Embodiment that demonstrates situational validity 2. Role of physical environment → mind exists in real world (NOT lab) |
1. Embodiment that demonstrates situational validity (accuracy or truth) | robot works in REAL WORLD situations → get it out of the lab - functions autonomously (automatic without human operator) - adapts to environment |
2. Role of physical environment → mind exists in real world (NOT lab) | robots need to have perception via direct perception-action links → environmental stimuli provide ALL INFO needed for perception - NO representations, computations, stages, inferences |
Requires mapping of a perceptual sequence to a series of actions | - percept - perceptual sequence |
percept | an input from the environment at any moment |
perceptual sequence | chain of inputs (percepts) an IA gets from the environment |
Mapping requires: | sensors= to sense the perceptual sequence - actuators= to perform the actions - the level of performance desired |
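A minimal sketch of the percept-to-action mapping these last cards describe; the percepts, actions, and mapping table are invented examples:

```python
# Direct perception-action links: no mental model, no representations.
percept_to_action = {
    "wall_ahead": "turn_left",
    "clear_path": "move_forward",
    "dirt":       "vacuum",
}

perceptual_sequence = ["clear_path", "dirt", "wall_ahead"]  # chain of percepts

for percept in perceptual_sequence:  # one percept per moment
    action = percept_to_action.get(percept, "wait")
    print(percept, "->", action)     # the actuators perform the action
```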