This chapter provides an overview of cognitive architecture, emphasizing the use of flow charts and box/arrow diagrams to express procedures and cognitive processes. It reviews Turing's contributions, discussing how mental operations can be given non-mental explanations and how compositionality and productivity bear on the understanding of meanings. It then turns to modularity, examining features such as domain specificity and information encapsulation. Finally, it takes up connectionism, innate dispositions, and the dynamics of belief updating in human cognition, drawing connections between theoretical frameworks and their practical implications.
Chapter Three of Green: Intro to CogSci, Spring 2005
Review: Boxes and Flows
• Needed with Crane
• Flow charts: used to express procedures and algorithms; boxes represent operations or decisions, and arrows represent flow of control. "How to do it."
• Box/arrow diagrams: boxes represent cognitive processes, and arrows represent flow of information. "How it happens."
Flow charts
• "How to do it"
• Example: a recipe as a flow chart. Decision box: "Is oven on to 350?" If no, "Turn on to 350"; once it is, "Open package." (Rendered as code below.)
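The same flowchart can be written as explicit flow of control. A minimal Python sketch; the oven state and the action names are invented stand-ins for the boxes in the diagram:

# Minimal sketch of the recipe flowchart as flow of control.
# The oven state and action names are invented stand-ins for
# the boxes in the diagram.

oven_temp = 0  # the oven starts off

def set_oven(degrees):
    global oven_temp
    oven_temp = degrees
    print(f"Turn oven on to {degrees}")

def open_package():
    print("Open package")

# Decision box: "Is oven on to 350?"
if oven_temp != 350:   # "No" branch
    set_oven(350)
open_package()         # "Yes" branch rejoins here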
Box/arrow diagrams
• "How it is done," from input to output
• Example: proximal stimuli → perception of distal stimuli
Review: Attractions of Turing
• Non-mental explanation of the mental: a Turing machine does not have to understand meanings in order to perform its basic operations.
• Retains compositionality and systematicity, and so productivity.
• Compositionality of X: the meaning of X is determined by its parts and the rules of composition.
• Examples: "Grass is green." "Blood is red."
• Compositionality seems to give us systematicity: with the same rules, the same elements can be understood combined differently.
• Example: "Grass is red." "Blood is green."
• Productivity: a potentially infinite number of X's can be understood. (Toy sketch below.)
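A toy illustration of all three properties: sentence meanings are computed from a small lexicon plus one rule of composition, so recombined parts ("Grass is red") come out interpretable for free. The lexicon entries and the predicate notation are invented for illustration:

# Toy compositional semantics: the meaning of a sentence is
# determined by its parts (a lexicon) plus a rule of composition.
# Lexicon entries and the predicate notation are invented.

lexicon = {
    "grass": "GRASS", "blood": "BLOOD",   # subjects
    "green": "GREEN", "red": "RED",       # predicates
}

def meaning(sentence):
    # Composition rule for "<subject> is <predicate>":
    # apply the predicate's meaning to the subject's meaning.
    subject, _is, predicate = sentence.rstrip(".").lower().split()
    return f"{lexicon[predicate]}({lexicon[subject]})"

print(meaning("Grass is green."))  # GREEN(GRASS)
print(meaning("Blood is red."))    # RED(BLOOD)
# Systematicity: the same elements recombine and stay interpretable.
print(meaning("Grass is red."))    # RED(GRASS)
# Productivity: every sentence the rule can build from the lexicon
# gets a meaning, so the interpretable set is open-ended.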
Architecture and modularity
• What is cognitive architecture, and how does it differ from the brain's architecture?
Features of a Module
• Domain specificity
• Information encapsulation
• Mandatory operation
• Speedy (because of the first three)
• Shallow output representations
• Same ontogeny across the species
• Characteristic and isolatable breakdowns
• Associated with a fixed and sometimes localized neural architecture
Note: features 6-8 & innately prespecified.
Modularity in practice
• SAQ 3.1
• SAQ 3.2
• SAQ 3.3
• SAQ 3.4
Other issues regarding the modularity of the language system
• Domain specificity
   • The McGurk effect (p. 66)
• Encapsulation
   • Parsing
   • Word recognition
Parsing
• "When you are happy, visiting relatives…" ["visiting relatives" can denote people or an activity]
• When you are happy, visiting relatives will enjoy your home. [people]
• When you are happy, visiting relatives can be a good idea. [activity]
• Two views compatible with Fodorean modularity:
   • All interpretations are computed and then one is selected.
   • Interpretations are computed in a fixed order, with no contextual influence on that order.
• Why is contextual influence important? If context could steer the parser, parsing would not be informationally encapsulated.
• According to Green, the evidence favors encapsulation.
Word Recognition
• A possible problem for Fodorean modularity.
• Example: "The player went to the coach." Having just processed "player," subjects respond more quickly to the related word "coach."
• Responding more quickly = primed.
• Priming: a process runs faster/easier because of an earlier process.
• Fodor: this is dumb association, not informationally informed processing. (Toy model below.)
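Fodor's "dumb association" reply can be pictured as spreading activation: a heard word pre-activates its stored associates, which then reach recognition threshold sooner, with no appeal to context or background belief. A minimal sketch; the words, association weights, and timing numbers are all invented:

# Toy spreading-activation model of priming as "dumb association".
# Words, association weights, and timing numbers are invented.

associations = {
    "player": {"coach": 0.5, "team": 0.6},
    "doctor": {"nurse": 0.7},
}

activation = {w: 0.0 for assoc in associations.values() for w in assoc}

def hear(word):
    # Activation spreads along stored associative links only;
    # no context or background knowledge is consulted.
    for associate, weight in associations.get(word, {}).items():
        activation[associate] += weight

def recognition_time(word, base=100):
    # Pre-activated words reach threshold sooner: they are "primed".
    return base - 50 * activation.get(word, 0.0)

hear("player")
print(recognition_time("coach"))   # 75.0: primed by "player"
print(recognition_time("nurse"))   # 100.0: not primed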
The Frame Problem
• What is it?
• Why does it concern "central systems"?
• Humans nevertheless do update their beliefs reasonably successfully.
• See Crane on relevance, and Dreyfus.
How modular should the mind be?
• Marr and the principle of modular design.
• Fodorean arguments for modularity: we need some systems to be fast, automatic, etc.
• Fodor's teleological argument for non-modularity: is it evolutionarily sensible?
Piaget
• Epigenetic constructivism.
• A self-organizing system structured and shaped by its environment.
• Three basic operations, plus interactions with the environment, explain adult cognition.
Karmiloff-Smith
• Innate dispositions to attend to particular stimuli, and some innate skeletal knowledge structures.
• Holds that information encapsulation is acquired, not inborn.
• Questions the poverty-of-the-stimulus argument: environments are more structured than we thought.
• The infant mind is very plastic.
Connectionism: Advantages??
• Neurally more realistic?
• Learns in a way that allows generalizing, e.g., pattern learning/voice recognition.
• Graceful degradation, unlike Turing machines. (Toy sketch below.)
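Graceful degradation can be pictured with a toy distributed representation: knock out a random fraction of the weights and the output drifts rather than crashing, whereas deleting one instruction from a conventional program typically halts it. The network size, weight values, and lesion fraction below are invented:

import random

# Toy illustration of graceful degradation: the output is spread
# across many weights, so destroying some of them degrades the
# answer gradually instead of halting the system. Sizes, weights,
# and the lesion fraction are invented.

random.seed(0)
n = 100
pattern = [1.0] * n            # a stored input pattern
weights = [0.01] * n           # trained so that output ~ 1.0

def output(weights, pattern):
    return sum(w * p for w, p in zip(weights, pattern))

print(output(weights, pattern))        # intact: ~1.0

# Lesion 20% of the connections at random.
for i in random.sample(range(n), 20):
    weights[i] = 0.0

print(output(weights, pattern))        # degraded but close: ~0.8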
Pattern associators
• Learning rule.
• Activation function.
• Which is the Hebb rule? (Sketch below.)
• Instead of "a little learning is a dangerous thing," we can have "a lot of learning is a dangerous thing."
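For reference, the Hebb rule changes a weight in proportion to the co-activation of the units it connects: Δw = ε · a_in · a_out. Below is a minimal pattern associator trained this way, with invented patterns and learning rate; the last lines show the sense in which "a lot of learning is a dangerous thing": pure Hebbian updating never stops, so the weights grow without bound.

# Minimal pattern associator with Hebbian learning:
#   delta_w[i][j] = rate * input[i] * target[j]
# Patterns and learning rate are invented for illustration.

rate = 0.25
inp    = [1, 0, 1, 0]    # input pattern (e.g., a "sight" code)
target = [0, 1]          # desired output (e.g., a "name" code)

weights = [[0.0] * len(target) for _ in inp]

def hebb_step(weights, inp, target):
    # Hebb rule: strengthen a connection when both units are active.
    for i, a_in in enumerate(inp):
        for j, a_out in enumerate(target):
            weights[i][j] += rate * a_in * a_out

def activate(weights, inp):
    # Activation function: weighted sum into each output unit.
    return [sum(inp[i] * weights[i][j] for i in range(len(inp)))
            for j in range(len(weights[0]))]

hebb_step(weights, inp, target)
print(activate(weights, inp))    # [0.0, 0.5]

# "A lot of learning is a dangerous thing": further training only
# makes the weights larger; nothing ever tells them to stop.
for _ in range(100):
    hebb_step(weights, inp, target)
print(activate(weights, inp))    # [0.0, 50.5]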
Delta Rule
• Two advantages over the Hebb rule. What are they? (Sketch below.)
   • How they operate.
   • What they operate on.
• Why the Perceptron?
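The delta rule operates on the error rather than on raw co-activation: Δw = ε · (target - actual output) · input. Because the update shrinks as the error shrinks, learning is self-correcting and stops once the output is right; the sketch below (same invented patterns as above) shows the weights converging instead of growing without bound. A perceptron is, roughly, one such error-trained unit with a thresholded activation function.

# Sketch of the delta rule: weights change in proportion to the
# ERROR (target - actual output), not to raw co-activation.
# Patterns and learning rate are invented for illustration.

rate = 0.25
inp    = [1, 0, 1, 0]
target = [0, 1]

weights = [[0.0] * len(target) for _ in inp]

def activate(weights, inp):
    return [sum(inp[i] * weights[i][j] for i in range(len(inp)))
            for j in range(len(weights[0]))]

def delta_step(weights, inp, target):
    out = activate(weights, inp)
    for i, a_in in enumerate(inp):
        for j in range(len(target)):
            # Error-driven update: shrinks toward zero as the
            # output approaches the target (unlike the Hebb rule).
            weights[i][j] += rate * (target[j] - out[j]) * a_in

for _ in range(10):
    delta_step(weights, inp, target)
print(activate(weights, inp))    # close to [0.0, 1.0], and it stays there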