
How Cognition Could Be Computing: Semiotic Systems, Computers, & the Mind


Presentation Transcript


  1. How Cognition Could Be Computing: Semiotic Systems, Computers, & the Mind. William J. Rapaport, Department of Computer Science & Engineering, Department of Philosophy, Department of Linguistics, and Center for Cognitive Science. rapaport@buffalo.edu, http://www.cse.buffalo.edu/~rapaport

  2. Summary • Computationalism = cognition is computable. • Mental processes can be the result of algorithmic procedures… • …that can be affected by emotions/attitudes/individual histories. • Computers that implement these (cognitive) procedures really exhibit those mental processes. • They are “semiotic” (= sign-using) systems. • They really think. • Computers can possess minds. • “Syntactic semantics” explains how all this is possible.

  3. I. What Is “Computationalism”? • What is AI? • Not “artificial”: It’s a computational theory • Not about “intelligence”: It’s about cognition • Better name: • Computational cognition • cf. “computational linguistics” “computational statistics” “computational geometry”, etc.

  4. What Is “Computationalism”? • “Computationalism” =? cognition is computation • Hobbes, McCulloch/Pitts, Putnam, Fodor, Pylyshyn, … • interesting, worth exploring, possibly true • BUT: • Not what “computational” usually means! • What should “computationalism” be? • Must preserve crucial insight: • cognition is explainable via mathematical theory of computation • First, some definitions…

  5. Preliminary Definitions • “Cognition” ≈ whatever cognitive scientists study, including: • believing • consciousness • emotion • language • learning • memory • perception • planning • problem solving • reasoning • representation • including categorization, concepts, mental imagery, etc. • sensation • thought, etc.

  6. Preliminary Definitions • “Algorithm” (informal notion) • A is an algorithm for executor E to accomplish goal G ≈ (informally): • A is a procedure (finite set/seq of statements/rules/instructions) such that: • Each statement/rule/instruction S is such that: • S is composed of finite # of symbols/marks from finite alphabet • S is unambiguous for E—i.e.: • E knows how to execute S • E can execute S • S can be executed in finite time • after executing S, E knows what to do next • A halts (≈ takes finite time) • A halts with G accomplished
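
To make the checklist concrete, here is a minimal Python sketch (the function name and test values are illustrative, not from the slides): Euclid's greatest-common-divisor procedure satisfies every clause of the informal definition.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: the informal definition's checklist in miniature.

    - A finite sequence of instructions over a finite alphabet of marks.
    - Each step is unambiguous for the executor (here, the Python runtime):
      it knows how to execute it, can execute it, and does so in finite time.
    - After each step the executor knows what to do next.
    - The procedure halts, and halts with goal G accomplished:
      the returned value is the greatest common divisor of a and b.
    """
    while b != 0:          # terminates: b strictly decreases toward 0
        a, b = b, a % b    # one unambiguous, finitely executable step
    return a               # halt with G accomplished

assert gcd(12, 18) == 6
```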

  7. Preliminary Definitions • “Effective” • Church: left undefined • Rosser: • each step is precisely determined • “method” produces an answer… • …in finite # of steps • Markov: • “process” produces an answer • Kleene: • effective procedure = algorithm • Knuth: • all operations can be done exactly, in finite time

  8. Preliminary Definitions • “Algorithm” (formal notion) • A is an algorithm = (formally) A is (logically equivalent to) a Turing machine • Church-Turing Thesis: • algorithm (informal) = algorithm (formal)
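
For the formal notion, a toy illustration (the transition-table encoding is my own choice, not anything prescribed by the definition): a miniature Turing-machine interpreter in Python that appends one stroke to a unary numeral, i.e., computes n + 1.

```python
# A toy Turing machine: state transitions over a tape of marks.
# delta maps (state, symbol) -> (new_state, symbol_to_write, head_move).
def run_turing_machine(delta, tape, state="q0", accept="halt"):
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells are "_"
    head = 0
    while state != accept:
        symbol = tape.get(head, "_")
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    return [tape[i] for i in sorted(tape)]

# Append one "1" to a unary numeral (i.e., compute n + 1).
delta = {
    ("q0", "1"): ("q0", "1", "R"),    # scan right past the numeral
    ("q0", "_"): ("halt", "1", "R"),  # write a final stroke and halt
}
assert run_turing_machine(delta, list("111")) == list("1111")
```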

  9. Preliminary Definitions • “Computable” • task/goal/field of study G is computable iff ∃ algorithm(s) (in the formal sense) for G

  10. The Proper Treatment of Computationalism • Computationalism ≠ Cognition is computation

  11. The Proper Treatment of Computationalism • Computationalism = Cognition is computable • i.e., ∃ algorithm(s) that compute cognitive functions • Basic research question of computational cognitive science: • How much of cognition is computable? • Working assumption of computational cognitive science: • All cognition is computable • …

  12. Proper Treatment of Computationalism • Implementational implication (multiple realization): • If cognition is computable, then: • anything that implements cognitive computations would be cognitive (would really think) • even if humans don’t do it that way! • Piccinini: neural spike trains are not representable as digit strings; ∴ not computational • BUT: it does not follow that the functions whose O/P they produce are not computable • ◊ (possibly): human cognition is computable but not computed

  13. Proper Treatment of Computationalism • 2 Views of Cognitive Science: • How do humans cognize? (part of CogSci) • Might not be computationally. • BUT: Can abstract away from the specifically human to a more general issue: • How is cognition possible? (also part of CogSci) • Might be computable.

  14. The Proper Treatment of Computationalism • 2 Views of “Computationalism” • Cognition is computation • “strong” / “narrow” / “nearsighted” view • the mind or brain is a computer • how the M/B does what it does is by computing vs. the proper treatment: • Cognition is computable • “weak” / “wide” / “farsighted” view • what the mind or brain does can be described in computational terms • how it does it is a matter for neuroscience to determine

  15. Proper Treatment of Computationalism • Turing on his test: • “the use of words and general educated opinion will alter so much that one will be able to speak of machines thinking without expecting to be contradicted.” • “general educated opinion” • changes when we abstract & generalize • “the use of words” • changes when reference shifts from word’s initial / narrow application to more abstract / general phenomenon • cf. “fly”, “compute”, “algorithm” • ditto for “cognition” / “think”

  16. II. Syntactic Semantics as a theory underlying computationalism • Cognition is internal • Cognitive agents have direct access only to internal representatives of external objects • Semantics is syntactic • ∴ Words, meanings, & semantic relations between them are all syntactic items • Understanding is recursive • Recursive Case: • We understand one thing in terms of another that must already be understood; • Base Case: • We understand something in terms of itself (syntactic understanding)

  17. Syntactic Semantics • Internalism: Cognitive agents have direct access only to internal representatives of external objects • A cognitive agent understands the world by “pushing the world into the mind” (Jackendoff 2002) • ∴ Both words & their meanings (including external objects) are represented internally in a single LOT • Humans: biological neural network • Adrian 1926, 1928: nervous systems transduce different physical stimuli into a common internal medium (cf. Piccinini) • Computers: • artificial neural network • symbolic knowledge-representation & reasoning system

  18. Syntactic Semantics: Internalism • Hume: argument from double vision • Kant: “noumena” vs. “phenomena” • Ayer: argument from illusion • Fodor anti-Putnam: “methodological solipsism” & G. Segal anti-Burge: “individualism” • Pylyshyn: • “output of sensory transducers is the only contact the cognitive system ever has with the environment” • Changizi: argument from time delay in perception

  19. Syntactic Semantics • (Internalism ⇒) Syntacticism: Words, meanings, & semantic relations between them are all syntactic • syntax = study of relations among members of a single set • set of signs / marks / neurons / … • semantics = study of relations between members of two sets • set of signs / marks / neurons / … • & set of (external) meanings / … (with its own syntax!) • “Pushing” meanings into same set as symbols for them allows semantics to be done syntactically • turns semantic relations between 2 sets (internal signs, external meanings) into syntactic relations among the marks of a single (internal) LOT • e.g.: truth tables & formal semantics are both syntactic • e.g.: neurons representing both signs & external meanings • Symbol-manipulating computers can do semantics by doing syntax

  20. Syntactic Semantics: Syntacticism • “Syntactic semantics” underlies the Semantic Web: • “syntactic” info on web pages gains meaning from… • “syntactic” info (metadata encoded in RDF) in HTML source files • metadata annotates webpage data • metadata provides “semantic interpretation” of webpage data
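
A hedged sketch of the Semantic Web point using Python's rdflib library (the URIs and the EX vocabulary are hypothetical, chosen only for illustration): RDF triples "semantically interpret" page data, yet each triple is itself just another syntactic relation among marks.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC

EX = Namespace("http://example.org/")          # hypothetical vocabulary
page = URIRef("http://example.org/page.html")  # hypothetical webpage

# Metadata "semantically interprets" the page's data, yet every triple
# is one more syntactic relation among internal marks (URIs, literals).
g = Graph()
g.add((page, DC.title, Literal("How Cognition Could Be Computing")))
g.add((page, DC.creator, Literal("William J. Rapaport")))
g.add((page, EX.topic, EX.SyntacticSemantics))

print(g.serialize(format="turtle"))
```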

  21. Syntactic Semantics • Understanding is recursive: • Recursive cases: • We understand a syntactic domain (SYN-1) indirectly by interpreting it in terms of a semantic domain (SEM-1) • but SEM-1 must be antecedently understood • SEM-1 can be understood by considering it as a syntactic domain SYN-2 interpreted in terms of yet another semantic domain • which also must be antecedently understood, etc. • Base case: • A domain that is understood directly (i.e., not “antecedently”) • in terms of itself • i.e., syntactically • & perhaps holistically
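
The recursion can be rendered as a toy program (the domain names SYN-1, SEM-1 follow the slide; SEM-2 and everything else are illustrative assumptions): interpretation chains bottom out in a domain understood in terms of itself.

```python
# Toy model of recursive understanding: each syntactic domain is either
# interpreted in a further semantic domain (recursive case), or it is
# understood directly, via the relations among its own marks (base case).
interpretation = {      # hypothetical chain: SYN-1 -> SEM-1 -> SEM-2
    "SYN-1": "SEM-1",
    "SEM-1": "SEM-2",   # SEM-1 reconsidered as syntactic domain SYN-2
}

def understand(domain: str) -> str:
    if domain in interpretation:                   # recursive case
        return understand(interpretation[domain])
    return f"{domain} understood syntactically (in terms of itself)"

print(understand("SYN-1"))  # bottoms out at SEM-2, the base case
```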

  22. Syntactic Semantics: Recursiveness • Syntactic understanding: • “the meaning of an internal state (which may or may not be linked to an external state of affairs) for the system itself is most naturally defined in terms of that state’s relations to its other states.” • Edelman, Shimon (2008), “On the Nature of Minds, or: Truth and Consequences”, JETAI 20: 181–196; quote on pp. 188–189. • I.e., syntactically

  23. III. Rapaport’s Thesis • Syntax suffices for semantic cognition • ∴ cognition is computable & ∴ computers are capable of thinking • James H. Fetzer’s Thesis: • It doesn’t, • it isn’t, • & they aren’t

  24. Fetzer’s Thesis • Computers differ from cognitive agents in 3 ways: • statically (symbolically) • dynamically (algorithmically) • affectively (emotionally) • Simulation is not the real thing

  25. Fetzer’s Static Difference • ARGUMENT 1: Computers are mark-manipulating systems, minds are not. • Premise 1: Computers manipulate marks on the basis of their sizes, shapes, and relative locations. • Premise 2: (a) These shapes, sizes, and relative locations exert causal influence upon computers but (b) do not stand for anything for those systems. • Premise 3: Minds operate by utilizing signs that stand for other things in some respect or other for them as sign-using (or “semiotic”) systems. • Conclusion 1: Computers are not semiotic (or sign-using) systems. • Conclusion 2: Computers are not the possessors of minds. [Fetzer’s Figure 9: The Static Difference]

  26. The Static Difference • Static Premise 1: • Is computer manipulation of symbols independent of meaning? • depends on what ‘meaning’ means: • Computational symbol-manipulation is independent of external, 3rd-person meaning imposed on the symbols • But not independent of internal, 1st-person meaning • arises from syntactic relations among internal symbols • “intrinsic” meaning

  27. The Static Difference • Static Premise 2b: • The symbols that computers manipulate “do not” stand for anything for those computers. • But: • Fetzer’s locution allows for the possibility that symbols could stand for something for the computer • Insofar as they could, such machines might be capable of thinking • He should have said “could not stand for anything” • But then he’d be wrong :-)

  28. Fetzer |- Computers Are Not Semiotic Systems • In a “semiotic system” (e.g., a mind): • something (S) is a sign of something (x) for somebody (z) • x “grounds” sign S • x “is an interpretant w.r.t. a context” to sign-user z • S is in a “causation” relation with z

  29. Fetzer |- Computers Are Not Semiotic Systems • In a computer (I/O) system: • input i (playing role of sign S)is in a “causation” relation with computer c (playing role of sign-user z) • output o (playing role of thing x)is in an “interpretant” relation with computer c • BUT: No “grounding” relation between i & o

  30. Fetzer |- Computers Are Not Semiotic Systems • ∴ Computers only have causal relationships, no mediation between I/P & O/P (?!) • But semiotic systems require such mediation • Peirce: interpretant is “mediately determined by” the sign • [ “interpretant” is really the sign-user’s mental concept of the thing x (!!) ] • ∴ Computers are not semiotic systems • But minds are. • ∴ Minds are not computers & computers can’t be minds.

  31. Incardona |- Computers Are Semiotic Systems! • Something is a semiotic system iff it carries out a process that mediates between a sign & its interpretant • Semiotic systems interpret signs • Algorithms describe processes that mediate between I/Ps & O/Ps • An algorithm’s O/P is an interpretation of its I/P • Algorithms ground the I/O relation • Computers are algorithm machines. • ∴ Computers are semiotic systems

  32. The Static Difference • Argument that computers are semiotic systems from embedding in the world: • Fetzer’s (counter?)example: • “A red light at an intersection stands for applying the brakes and coming to a complete halt, only proceeding when the light turns green, for those who know ‘the rules of the road’.” • Can such a red light stand for applying the brakes, etc., for a computer? • It could, if the computer “knows the rules of the road” • But a computer can “know” those rules… • if it has those rules stored in a knowledge base • and if it uses those rules to drive a vehicle • cf. Stanley the VW (2005 DARPA Grand Challenge) * Parisien & Thagard 2008, “Robosemantics: How Stanley Represents the World”, Minds & Machines
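
A minimal sketch of that reply (the rule names and the Vehicle class are hypothetical, and this is nothing like Stanley's actual architecture): a red light can "stand for" braking for a system that both stores the rule in its knowledge base and uses it to act.

```python
# Hypothetical knowledge base of rules of the road: for a system that
# stores AND uses these rules, the red light stands for braking *for it*.
RULES_OF_THE_ROAD = {
    "red": "apply_brakes_and_halt",
    "green": "proceed",
}

class Vehicle:
    def __init__(self, kb):
        self.kb = kb  # internalized rules mediate between sign and action

    def perceive(self, light: str) -> str:
        return self.kb[light]  # the sign's meaning, for this system

car = Vehicle(RULES_OF_THE_ROAD)
assert car.perceive("red") == "apply_brakes_and_halt"
```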

  33. The Static Difference • Does a calculator that computes GCDs understand what it’s doing? • Fetzer & Rapaport: No • Could a computer that computes GCDs understand what it’s doing? • Fetzer: No • Rapaport & Goldfain: Yes, it could… • as long as it had enough background / contextual / supporting information • a computer with a full-blown theory of math at the level of an algebra student learning GCDs could understand GCDs as well as the student

  34. The Static Difference • Goldfain |- Computers could be semiotic systems: • G1: The natural #s that a cognitive agent refers to are denoted by a sequence of unique marks exemplifying a finite initial segment of the natural-# structure. • G2: Such a finite initial segment can be generated by a computational cognitive agent (a computer) via perception & action in the world during an act of counting (e.g., using Lisp’s “gensym”) • they have a history of how they became marks that signify something for the agent (the computer). • G3: These marks (e.g., b4532, b182, b9000…) have no meaning for a human user who lacks access to their ordering. • G4: Such private marks (“numerons”) are associable with publicly meaningful marks (“numerlogs”) • e.g., b4532 denotes the same number as “1”, b182 denotes the same number as “2”, etc. • G5: A computational cognitive agent (a computer) can do math solely on the basis of its numerons. • C1: ∴ These marks stand for something for the computer (the agent). • C2: & we can check the math because of G4.
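
A hedged Python analogue of Goldfain's gensym construction (sequential mark names like "b0" stand in for the slide's "b4532"-style marks; the helper functions are illustrative):

```python
import itertools

_counter = itertools.count()

def gensym() -> str:
    """Python stand-in for Lisp's gensym: a fresh, opaque mark."""
    return f"b{next(_counter)}"  # ordering is private to the agent

# G2: counting generates a finite initial segment of the natural-# structure.
numerons = [gensym() for _ in range(5)]

# G4: associate private numerons with publicly meaningful numerlogs.
numerlog = {mark: i + 1 for i, mark in enumerate(numerons)}

# G5: the agent can do math solely on its numerons (successor = next mark).
def successor(mark: str) -> str:
    return numerons[numerons.index(mark) + 1]

# C2: we can check the math via the numeron-numerlog association.
assert numerlog[successor(numerons[0])] == 2
```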

  35. Summary: No “Static Differences” • Both computers & minds manipulate marks • The marks can “stand for something” for both computers & minds • Computers (and minds) are “semiotic systems” • Computers can possess minds

  36. Fetzer’s Dynamic Difference • ARGUMENT 2: Computers are governed by algorithms, but minds are not. • Premise 1: Computers are governed by programs, which are causal models of algorithms. • Premise 2: Algorithms are effective decision procedures for arriving at definite solutions to problems in a finite number of steps. • Premise 3: Most human thought processes, including dreams, daydreams, and ordinary thinking, are not procedures for arriving at solutions to problems in a finite number of steps. • Conclusion 1: Most human thought processes are not governed by programs as causal models of algorithms. • Conclusion 2: Minds are not computers. [Fetzer’s Figure 10: The Dynamic Difference]

  37. The Dynamic Difference • Premises 1 & 2: • Def of ‘algorithm’ is OK • But algorithms may be the wrong entity • may need a more general notion of “procedure” (Shapiro) • like an algorithm, but: • need not halt • need not yield “correct” output

  38. The Dynamic Difference • Premise 3: Most human thinking is not algorithmic • Dreams are not algorithms • Ordinary stream-of-consciousness thinking is not “algorithmic” • BUT: • Some human thought processes may indeed not be algorithms • But that’s not the real issue, which is: • Could there be algorithms/procedures that produce these (or other mental states or processes)? • If dreams are our interpretations of random neuron firings during sleep, as if they were due to external causes… • …then: if non-dream neuron-firings are computable (& there’s every reason to think they are) then so are dreams • Stream of consciousness might be computable • e.g., via spreading activation in a semantic network
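
A toy sketch of spreading activation over a semantic network (the network, weights, decay, and threshold are all invented for illustration), the slide's candidate mechanism for a computable stream of consciousness:

```python
# Toy spreading activation: activation flows from a source concept
# along weighted links, decaying with each hop, until it falls below
# a threshold; the activated nodes form an association "stream".
network = {                       # hypothetical semantic network
    "dog": [("bone", 0.8), ("bark", 0.6)],
    "bone": [("dinosaur", 0.5)],
    "bark": [("tree", 0.4)],
}

def spread(source: str, activation=1.0, decay=0.5, threshold=0.05):
    levels = {source: activation}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in network.get(node, []):
            a = levels[node] * weight * decay
            if a > threshold and a > levels.get(neighbor, 0.0):
                levels[neighbor] = a
                frontier.append(neighbor)
    return levels

print(spread("dog"))  # dog -> bone, bark, then dinosaur, tree...
```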

  39. The Dynamic Difference • Whether a mental state/process is computable is at least an empirical question • Must avoid the Hubert Dreyfus fallacy: • one philosopher’s idea of a non-computable process is another computer scientist’s research project • what no one has yet written a program for is not thereby necessarily non-computable • In fact: Mueller, Erik T. (1990), Daydreaming in Humans & Machines: A Computer Model of the Stream of Thought (Ablex) • Cf. Edelman, Shimon (2008), Computing the Mind (Oxford) • ∴ burden of proof is on Fetzer!

  40. The Dynamic Difference • Dynamic Conclusion 2: • Are minds computers? • Maybe, maybe not • I prefer to say (with Shimon Edelman, et al.): • The (human) mind is a virtual machine, computationally implemented (in the nervous system)

  41. Summary: No “Dynamic Difference” • All (human) thought processes are/might be describable by algorithms/procedures = computationalism properly treated

  42. Fetzer’s Affective Difference • ARGUMENT 3: Mental thought transitions are affected by emotions, attitudes, and histories, but computers are not. • Premise 1: Computers are governed by programs, which are causal models of algorithms. • Premise 2: Algorithms are effective decision procedures, which are not affected by emotions, attitudes, or histories. • Premise 3: Mental thought transitions are affected by values of variables that do not affect computers. • Conclusion 1: The processes controlling mental thought transitions are fundamentally different from those that control computer procedures. • Conclusion 2: Minds are not computers. [Fetzer’s Figure 11: The Affective Difference]

  43. Contra Affective Premises 2 & 3: • Programs can be based on (idiosyncratic)emotions, attitudes, & histories • Rapaport-Ehrlich contextual vocabulary acquisition program • Learns a meaning for an unfamiliar word from: • the word’s textual context • integrated with the reader’s idiosyncratic … • “denotations”, “connotations”, • emotions, attitudes, histories, • & prior beliefs • Sloman, Picard, Thagard • Developing computational theories of affect, emotion, etc. • Emotions, attitudes, & histories can affect computers that model them.
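
A deliberately tiny sketch of that integration (the data structures are invented; the actual Rapaport-Ehrlich system is a far richer knowledge-representation program): the same textual context combined with different idiosyncratic prior beliefs yields different acquired meanings.

```python
# Toy contextual vocabulary acquisition: a meaning hypothesis merges
# clues from the word's textual context with a reader's prior beliefs,
# so two readers can acquire different meanings from the same passage.
def hypothesize_meaning(context_clues: set, prior_beliefs: set) -> set:
    return context_clues | prior_beliefs

passage_clues = {"is an animal", "is hunted", "has antlers"}
reader_a = {"antlered animals are deer-like"}   # idiosyncratic history
reader_b = {"hunted animals are dangerous"}     # idiosyncratic history

m_a = hypothesize_meaning(passage_clues, reader_a)
m_b = hypothesize_meaning(passage_clues, reader_b)
assert m_a != m_b  # same text, different readers, different meanings
```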

  44. Summary: No “Affective Differences” • Processes controlling mental thought transitions are not fundamentally different from those controlling algorithms/procedures. • Algorithms can take emotions/attitudes/histories into account. • Both computers & minds can be affected by emotions/attitudes/histories

  45. The Matter of Simulation • ARGUMENT 4: Digital machines can nevertheless simulate thought processes and other forms of human behavior. • Premise 1: Computer programmers and those who design the systems that they control can increase their performance capabilities, making them better and better simulations. • Premise 2: Their performance capabilities may be closer and closer approximations to the performance capabilities of human beings without turning them into thinking things. • Premise 3: Indeed, the static, dynamic, and affective differences that distinguish computer performance from human performance preclude them from being thinking things. • Conclusion: Although the performance capabilities of digital machines can become better and better approximations of human behavior, they are still not thinking things. [Fetzer’s Figure 15: The Matter of Simulation]

  46. Argument from Simulation • Agreed: A computer that “simulates” some process P is not necessarily “really” doing P • But what is “really doing P” vs. “simulating P”? • What is the “scope” of a simulation? • Computer simulations of hurricanes don’t get real people really wet • Real people are outside the scope of the simulation • BUT: a computer simulation of a hurricane could get simulated people simulatedly wet • Computer simulation of the daily operations of a bank is not thereby the daily operations of a (real) bank • BUT: I can do my banking online • Simulations can be used as if they were real

  47. Argument from Simulation • Some simulations of Xs are real Xs: • scale model of a scale model of X is a scale model of X • Xeroxed/faxed/PDF copies of documents are those documents • A computer that simulates an “informational process” is thereby actually doing that informational process • Because a computer simulation of information is information…

  48. Argument from Simulation • Computer simulation of a picture is a picture • digital photography • Computer simulation of language is language • computers really do parse sentences (Woods) • IBM’s Watson really answers questions • Computer simulation of math is math • “A simulation of a computation and the computation itself are equivalent: try to simulate the addition of 2 and 3, and the result will be just as good as if you ‘actually’ carried out the addition—that is the nature of numbers” (Edelman) • Computer simulation of reasoning is reasoning • automated theorem proving, computational logic,…

  49. Argument from Simulation • Computer simulation of cognition is cognition • “if the mind is a computational entity, a simulation of the relevant computations would constitute its fully functional replica” (Edelman) • cf. “implementational implication”

  50. Summary: Simulation Can Be(come) the Real Thing • Close approximation to human thought processes can turn computers into thinking things • only asymptotically? • actually? • cf. Turing on “general educated opinion” & “the use of words”
