EECS 690



  1. EECS 690 April 23

  2. Affective Computing and “stupid” machines
  • Computer scientists and roboticists are beginning to conclude that the largest factor in making a machine “stupid” is its lack of capacity to deal with affective states.

  3. Addressing this stupidity
  • Requires three capabilities (a code sketch follows this slide):
    • Detecting the emotional state of the user
    • Putting that state in context
    • Responding or adapting to the state appropriately
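The three capabilities line up as a detect → contextualize → respond loop. The sketch below is a minimal, hypothetical illustration of that pipeline: the keyword-based detector, the three-reading history window, and the canned responses are all invented placeholders, not any real system’s API.

```python
from dataclasses import dataclass

@dataclass
class AffectReading:
    emotion: str       # e.g. "frustration", "neutral"
    confidence: float  # 0.0 .. 1.0

def detect_affect(utterance: str) -> AffectReading:
    """Capability 1: detect the user's emotional state.
    A crude stand-in for a real classifier over voice, face, or text."""
    if "!" in utterance or utterance.isupper():
        return AffectReading("frustration", 0.7)
    return AffectReading("neutral", 0.5)

def contextualize(reading: AffectReading, history: list) -> str:
    """Capability 2: put the state in context.
    One frustrated utterance is noise; several in a row are a trend."""
    history.append(reading)
    recent = [r for r in history[-3:] if r.emotion == "frustration"]
    return "escalating" if len(recent) >= 2 else "isolated"

def respond(reading: AffectReading, context: str) -> str:
    """Capability 3: respond or adapt appropriately."""
    if reading.emotion == "frustration" and context == "escalating":
        return "Simplify instructions and offer a human handoff."
    return "Continue normally."

history = []
for utterance in ["how do I start?", "THIS STILL FAILS!", "WHY?!"]:
    r = detect_affect(utterance)
    print(respond(r, contextualize(r, history)))
```

The structure makes the dependency between the capabilities visible: a detector without the context stage treats every reading identically, so the second capability is what keeps the first from being useless.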

  4. Do emotions make you smarter?
  • Stupid machines might not only be machines that cannot interface with humans in the way humans have come to interface with each other.
  • Research of various kinds, as described by Wallach and Allen, suggests that machines equipped with certain kinds of affective-state machinery outperform machines that lack it.
  • For example, Gadanho’s machines with only cognitive machinery outperform those with only affective machinery, but those with both perform best of all (a toy sketch of this idea follows this slide).
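Gadanho’s actual work used emotion-based reinforcement learning in simulated robots; the toy sketch below only illustrates the general shape of the claim, and every name and number in it is an invented placeholder. An affect-like signal (here, “anxiety” about a low battery) modulates a cognitive, goal-directed score, and the combined agent handles cases that neither component handles alone.

```python
def cognitive_value(action: str, goal: str) -> float:
    """Deliberative scoring: how well does the action serve the current goal?"""
    return 1.0 if action == goal else 0.2

def affective_bias(action: str, battery: float) -> float:
    """Affect-like signal: a low battery raises 'anxiety', which biases
    the agent toward recharging regardless of the current goal."""
    return (1.0 - battery) * (2.0 if action == "recharge" else 0.0)

def choose(actions, goal, battery, cognition=True, affect=True):
    """Pick the best action using whichever machinery is enabled."""
    def score(action):
        total = 0.0
        if cognition:
            total += cognitive_value(action, goal)
        if affect:
            total += affective_bias(action, battery)
        return total
    return max(actions, key=score)

actions = ["explore", "recharge"]
# Both systems: pursue the goal when safe, recharge when 'anxious'.
print(choose(actions, goal="explore", battery=0.9))                   # explore
print(choose(actions, goal="explore", battery=0.1))                   # recharge
# Cognition alone ignores the battery; affect alone ignores the goal.
print(choose(actions, goal="explore", battery=0.1, affect=False))     # explore
print(choose(actions, goal="explore", battery=0.9, cognition=False))  # recharge
```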

  5. Representing persons
  • The researchers who brought us Cog and Kismet have new projects designed to develop capacities for learning by social interaction. Some rudimentary successes have been achieved.
  • The goal of the Leonardo project is to achieve a machine that can pass the false belief test.

  6. The False Belief Test and its Importance
  • In the classic version (the Sally-Anne task), a child watches an object being moved while another agent is out of the room, and must predict where that agent will look for it (a minimal model in code follows this slide).
  • The false belief test is routinely failed by children younger than about four and routinely passed by older children.
  • Several distinct abilities can be inferred from an ability to pass a sufficiently robust false belief test:
    • the capacity to have beliefs
    • the ability to distinguish one’s own beliefs from those of others
    • the capacity to distinguish between those things that have beliefs and those that do not
  • These abilities contribute to a capability central to moral agents: the capability to recognize moral subjects.
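Here is a minimal model of the Sally-Anne task in code: a sketch of what passing requires representationally, not the Leonardo project’s implementation. The key move is keeping each agent’s belief store separate from the world state and updating it only while that agent is present.

```python
world = {"marble": "basket"}
beliefs = {"Sally": {"marble": "basket"}, "Anne": {"marble": "basket"}}
present = {"Sally", "Anne"}

def move(obj: str, place: str) -> None:
    """Change the world; only agents in the room see it happen."""
    world[obj] = place
    for agent in present:
        beliefs[agent][obj] = place

present.discard("Sally")   # Sally leaves the room
move("marble", "box")      # Anne moves the marble while Sally is away

def predict_naive(agent: str) -> str:
    """Fails the test: answers from the world state, ignoring the agent."""
    return world["marble"]

def predict_tom(agent: str) -> str:
    """Passes the test: answers from the agent's belief state, which
    requires representing beliefs as distinct from the world and from
    one's own beliefs -- the abilities listed on the slide above."""
    return beliefs[agent]["marble"]

print(predict_naive("Sally"))  # box    -- wrong: Sally never saw the move
print(predict_tom("Sally"))    # basket -- right: her belief is now false
```

An agent that can only run predict_naive has no representation in which Sally’s belief and the world can come apart; that separation is exactly what the test probes.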

  7. Beyond the False Belief Test
  • Passing the FBT is only an incremental step toward a fuller representation of what philosophers call a “Theory of Mind” (ToM).
  • ‘Theory’ in this sense is not intended to connote the rigor the word carries in science. Rather, it refers to how people practically approach the problem of other minds.

  8. The Problem of Other Minds
  • The statement of this problem is owed to Descartes. Put simply, he asks what makes us so sure anybody else has a mind, given that we have no access to other minds in the way we have access to our own.
  • As a matter of course, people don’t lose sleep over this kind of skepticism. In any case, the question is more germanely stated as “What is our ordinary theory of mind?”
  • An ordinary theory of mind seems to require at least three abilities:
    • having a mind
    • distinguishing our minds from those of others
    • distinguishing between those things with and without minds

  9. So, must a machine have emotions the way we have them to have a ToM?
  • This question marks a legitimate divide between two perspectives.
  • On one side, some researchers suggest that empathy is necessary for a sufficiently humanlike ToM, and that this requires having affective states akin to human affective states.
  • On the other side is the idea that appropriately recognizing emotional states, and appropriately communicating information that people will interpret as emotional states, is sufficient, without committing to the assumption that there are any universally human affective states.

  10. Beetles in Boxes
  • The second approach is influenced by Wittgenstein, who approaches the conscious experience of affective states with (among other things) an analogy:
  • “Suppose everyone had a box with something in it: we call it a ‘beetle’. No one can look into anyone else’s box, and everyone says he knows what a beetle is only by looking at his beetle.—Here it would be quite possible for everyone to have something different in his box. One might even imagine such a thing constantly changing.—But suppose the word ‘beetle’ had a use in these people’s language?—If so it would not be used as the name of a thing. The thing in the box has no place in the language-game at all; not even as a something: for the box might even be empty. … That is to say: if we construe the grammar of the expression of sensation on the model of ‘object and designation’ the object drops out of consideration as irrelevant.” (Wittgenstein, Philosophical Investigations, §293)
