
EECS 690

  1. EECS 690 April 30

  2. Moral Agency and Intention
  • One aspect of moral agency that is often considered is the actual effect of a person’s actions. However, we sometimes revise our judgment of a person when their motives are revealed.
  • This distinction is what separates intentionally doing harm from making an honest mistake.
  • The difficulty in applying this idea to AMAs is that people tend to assume machines don’t have intentions or motives.
  • Dennett’s approach, like Floridi and Sanders’ approach, raises interesting questions.

  3. Dennett’s proposal
  • Taking the intentional stance toward some system means that at some point you have to believe it when it says what it is doing and why.
  • This is the point at which even the design stance becomes too huge and unwieldy to do any real predictive work. (See “Brainchildren: Essays on Designing Minds”.)

  4. Experimental Philosophy
  • This is essentially “philosophy by survey”. As you may be able to tell, it has its critics.
  • In any case, one finding experimental philosophers have made is that people react differently to trolley cases that would require them to physically touch the person to be killed. This suggests that physical contact is a morally important consideration. If so, perhaps people who physically interact with sufficiently complex machines will thereby come to recognize them as moral agents.

  5. Anthropocentrism and Ethics
  • The text remarks that all theories of ethics are anthropocentric (because all moral agents are human beings).
  • The text singles out Kantians as the most resistant to machines being genuine moral agents. I disagree. Kant himself deliberately used the phrase “rational beings” rather than “human beings” because he thought the categorical imperative applied to anything with reason, including angels and other non-human rational beings. Rational robots fit right into this schema.
  • That said, anthropocentrism in ethics may be an idea worth persuading people out of before long. Environmental ethicists have been working at this for several decades now.

  6. Responsibility versus accountability
  • To ask whether a machine can make decisions on its own whose ethical impacts the machine itself considers is to ask whether the machine can be responsible. It seems this question can be answered simply by examining what kind of machine it is.
  • To ask whether a machine can be rewarded or punished solely on the basis of its actions is to ask whether it can be accountable. This is where legal and social structures come in and make the question extremely difficult.

  7. Corporate law doing us a favor
  • The suggestion has been made that corporate law already provides a basis for treating a non-human entity as a person under the law.
  • However, recall from early in the semester that any duty implies a corresponding right. If we ask machines to perform their legal duties, must we grant them legal rights? Would they want or require the same or similar rights to the ones we have?

  8. The Phenomenal Self Model
  • It is best not to overcomplicate this point.
  • What Metzinger refers to here is simply metacognition: thinking about one’s own thinking.
  • While this may or may not be the “key to personhood”, when should we believe that a machine is doing this?
  • This brings us back around to Dennett’s proposal.

  9. The moral Turing Test
  • Dennett would likely say that such a test is too hard, and might be unfair.
  • Does the point at which we are willing to hold someone responsible come before the point at which we are willing to say that they are morally good? If the answer to this question is ‘yes’, then the moral Turing Test has a problem.
