## Lecture 9


### DADSS Decision Analysis: Subjective Probability and Imperfect Information

### Imperfect Information

- Last class:
  - Intro to Bayes’ Theorem for decision problems under imperfect information
  - We also briefly introduced the calculation for EVII
- Today:
  - Calculating EVII
  - Multiple experts
  - Multiple (non-dichotomous) forecasts
  - Multiple uncertainties
  - Additivity

### Bayes Table

- Not needed, but often a shortcut
- Basic properties of probability:
  - P(A) = P(A & B) + P(A & ~B) = Σ_n P(A & B_n)
  - P(A & B) = P(A|B) × P(B)
  - P(A|B) + P(~A|B) = 1

### EVII

- Like EVPI: EV of II = EV with II − EV base
- Now, however, EVII depends on the forecast’s accuracy (recall that EVPI always assumes 100% accuracy)
- Back to the party problem:
  - P(Sun) = 0.60
  - Accuracy of the forecast is 75%

### The Basic Party Problem

| Alternative | Sun (0.6) | Rain (0.4) | EV  |
|-------------|-----------|------------|-----|
| Outdoors    | 200       | 0          | 120 |
| Indoors     | 120       | 100        | 112 |

EV base = 120 (choose Outdoors).

### Conditional Probabilities

- Assuming P(Sun) = 0.60 and forecast accuracy of 75%, you want the following: P(“Sun”), P(S | “S”), P(R | “R”), P(S | “R”), P(R | “S”)
- Bayes table (joint probabilities):

|       | “Sun” | “Rain” | Total |
|-------|-------|--------|-------|
| Sun   | 0.45  | 0.15   | 0.60  |
| Rain  | 0.10  | 0.30   | 0.40  |
| Total | 0.55  | 0.45   | 1.00  |

- Posteriors:
  - P(S | “S”) = P(S & “S”) / P(“S”) = 0.45 / 0.55 = 0.82
  - P(R | “R”) = 0.30 / 0.45 = 0.67
  - P(S | “R”) = 0.15 / 0.45 = 0.33
  - P(R | “S”) = 0.10 / 0.55 = 0.18

### The EV of II Tree

- Forecast says “Sun” (P = 0.55):
  - Outdoors: 0.82 × 200 + 0.18 × 0 = 164 ← best
  - Indoors: 0.82 × 120 + 0.18 × 100 = 116.4
- Forecast says “Rain” (P = 0.45):
  - Outdoors: 0.33 × 200 + 0.67 × 0 = 66
  - Indoors: 0.33 × 120 + 0.67 × 100 = 106.6 ← best
- EV with II = 0.55 × 164 + 0.45 × 106.6 = 138.2
- EV of II = EV with II − EV base = 138.2 − 120.0 = 18.2

### EVII and Accuracy

[Graph: EVII as a function of forecast accuracy, e.g. EVII = 18.2 at 75% accuracy and 26.8 at 85%.]

- Why is the previous graphic V-shaped?
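The Bayes table, the EVII calculation, and the accuracy sweep behind that graphic can be sketched in Python. This is a minimal sketch using the lecture's numbers; the helper names (`posteriors`, `best_ev`, `evii`) are illustrative, not from any library.

```python
def posteriors(prior_sun, accuracy):
    """Bayes-table shortcut for a dichotomous weather forecast."""
    p_s_and_says_s = prior_sun * accuracy                  # P(S & "S")
    p_r_and_says_s = (1 - prior_sun) * (1 - accuracy)      # P(R & "S")
    p_says_s = p_s_and_says_s + p_r_and_says_s             # P("S"), column total
    p_s_given_says_s = p_s_and_says_s / p_says_s           # P(S | "S")
    p_s_given_says_r = prior_sun * (1 - accuracy) / (1 - p_says_s)  # P(S | "R")
    return p_says_s, p_s_given_says_s, p_s_given_says_r

def ev(p_sun, sun_payoff, rain_payoff):
    return p_sun * sun_payoff + (1 - p_sun) * rain_payoff

def best_ev(p_sun):
    # Pick the better of Outdoors (200 / 0) and Indoors (120 / 100)
    return max(ev(p_sun, 200, 0), ev(p_sun, 120, 100))

def evii(prior_sun=0.60, accuracy=0.75):
    p_says_s, p_s_ss, p_s_sr = posteriors(prior_sun, accuracy)
    ev_with_ii = p_says_s * best_ev(p_s_ss) + (1 - p_says_s) * best_ev(p_s_sr)
    return ev_with_ii - best_ev(prior_sun)

print(round(evii(0.60, 0.75), 1))  # 18.0 (the slide's 18.2 reflects posteriors rounded to 0.82)
print(round(evii(0.60, 0.85), 1))  # 26.8
```

Sweeping `accuracy` from 0 to 1 reproduces the V shape: `evii(0.60, 0.25)` equals `evii(0.60, 0.75)`, because a forecaster who is reliably wrong is exactly as informative as one who is reliably right (you simply act on the opposite of the forecast). The minimum, EVII = 0, sits near 50% accuracy, where the forecast cannot change the decision.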
- Now we can answer questions such as: how much would it be worth to you to find a forecaster who was slightly more accurate, say 85% instead of 75%?
- Roughly, every 1% change in accuracy is worth about $0.90 in this problem:
  - 75% accuracy, EVII = $18.0
  - 76% accuracy, EVII = $18.9
  - 85% accuracy, EVII = $26.8

### Multiple Uncertainties

- Suppose we return to a problem with uncertainty not only over the weather, but also over the “quality of the group”
- Under imperfect information, what are we to assume about the forecaster’s accuracy for both uncertainties?
- It may not make sense at all to assume that the imperfect forecaster is equally accurate for all uncertainties

### The Disgruntled Stock Broker

| Alternative | Strong (0.3) | Weak (0.7) | EV      |
|-------------|--------------|------------|---------|
| Stocks      | $300         | −$150      | −$15.00 |
| Bonds       | $30          | −$9        | $2.70   |
| Nothing     | $0           | $0         | $0      |

EV base = $2.70 (choose Bonds).

### Perfect Information

- “Says Strong” (P = 0.3): choose Stocks, payoff $300
- “Says Weak” (P = 0.7): choose Nothing, payoff $0
- EV with PI = 0.3 × $300 = $90.00
- EVPI = $90.00 − $2.70 = $87.30

### Bayes

If an investment advisor is accurate 90% of the time, what’s the maximum amount that one should pay for the prediction?

### EVII

- “Says Strong” (P = 0.34); P(Strong | “Strong”) = 0.794:
  - Stocks: 0.794 × 300 + 0.206 × (−150) = 207.30 ← best
  - Bonds: 0.794 × 30 + 0.206 × (−9) = 21.97
  - Nothing: $0
- “Says Weak” (P = 0.66); P(Strong | “Weak”) = 0.045:
  - Stocks: 0.045 × 300 + 0.955 × (−150) = −129.75
  - Bonds: 0.045 × 30 + 0.955 × (−9) = −7.25
  - Nothing: $0 ← best
- EV with II = 0.34 × 207.30 + 0.66 × 0 = $70.48
- EVII = $70.48 − $2.70 = $67.78

### Multiple Uncertainties, Multiple Experts

- In this case (different accuracies), it’s as if we actually have more than one forecaster
- So… let’s talk about multiple experts (realizing that the “multiple uncertainties” case is procedurally similar)
- Suppose in our basic party problem we now have two independent experts with identical accuracies (75%)
- If their accuracy is truly identical, why hire both?
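The advisor numbers above follow the same Bayes-table recipe as the party problem. A minimal Python sketch (the names `advisor_evii` and `best_ev` are illustrative):

```python
# Minimal sketch of the stock-broker problem with a 90%-accurate advisor.
# Payoffs and the prior P(Strong) = 0.3 are taken from the example above.

ALTERNATIVES = {"Stocks": (300, -150), "Bonds": (30, -9), "Nothing": (0, 0)}

def best_ev(p_strong):
    """EV of the best alternative given a probability of a strong market."""
    return max(p_strong * strong + (1 - p_strong) * weak
               for strong, weak in ALTERNATIVES.values())

def advisor_evii(prior=0.3, accuracy=0.9):
    p_says_strong = prior * accuracy + (1 - prior) * (1 - accuracy)     # 0.34
    p_strong_given_strong = prior * accuracy / p_says_strong            # ~0.794
    p_strong_given_weak = prior * (1 - accuracy) / (1 - p_says_strong)  # ~0.045
    ev_with_ii = (p_says_strong * best_ev(p_strong_given_strong)
                  + (1 - p_says_strong) * best_ev(p_strong_given_weak))
    return ev_with_ii - best_ev(prior)

print(round(advisor_evii(), 2))  # 67.8 (the slide's $67.78 reflects rounded posteriors)
```

The EVII is the most one should pay for the prediction; with exact rather than two-decimal posteriors the answer comes out to $67.80.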
[Figure: The Decision Tree with 2 Experts: the party-problem tree repeated for each combination of Expert #1’s and Expert #2’s forecasts.]

### Lots of Probabilities

- P(Sun) = 0.60; accuracy of each expert is 75%
- Before, we sought P(S | “S”)
- Now we have to consider what each forecaster is forecasting
- We now have:
  - P(S | “S1”, “S2”)
  - P(S | “S1”, “R2”)
  - P(S | “R1”, “S2”)
  - P(S | “R1”, “R2”)

### The First Expert’s Probabilities

This table should be familiar by now!

|       | “S1” | “R1” | Total |
|-------|------|------|-------|
| Sun   | 0.45 | 0.15 | 0.60  |
| Rain  | 0.10 | 0.30 | 0.40  |
| Total | 0.55 | 0.45 | 1.00  |

- P(S | “S1”) = 0.45/0.55 = 0.82
- P(R | “R1”) = 0.30/0.45 = 0.67
- P(S | “R1”) = 0.15/0.45 = 0.33
- P(R | “S1”) = 0.10/0.55 = 0.18

Where do these probabilities go? They become the priors for the second expert’s Bayes table.

### The Second Expert’s Probabilities

Updated probabilities after the first expert’s forecast:

- If Expert #1 said “Sun”: P(S | “S1”, “S2”) = 0.6136/0.6591 = 0.9310
- If Expert #1 said “Rain”: P(S | “R1”, “S2”) = 0.2500/0.4167 = 0.6000

### The Complete Tree

[Figure: the two-expert tree with all branch probabilities. Expert #1: P(“S1”) = 0.55, P(“R1”) = 0.45. Expert #2 after “S1”: P(“S2”) = 0.66, P(“R2”) = 0.34; after “R1”: P(“S2”) = 0.42, P(“R2”) = 0.58. Posterior P(Sun) = 0.93 after two “Sun” forecasts, 0.60 when the experts split, and 0.14 after two “Rain” forecasts.]

- EV of II = EV with II − EV base
- EVII = 148.7 − 120.0
- EVII(two experts) = 28.7
- EVII(one expert) = 18.2 (previous result)

### Some Observations

- Note that the second expert’s marginal value is less than the first expert’s
  - EVII(two experts) = 28.7; EVII(one expert) = 18.2
  - Marginal contribution of the second expert is 10.5 (28.7 − 18.2)
- Note that the prior P(Sun) was 0.60
  - If one expert said “Sun,” P(Sun | “Sun”) was 0.82
  - If both experts said “Sun,” P(Sun |
    “Sun”, “Sun”) = 0.93
  - But if the experts disagreed, P(Sun | “Sun”, “Rain”) = 0.60
- Why is this? Does it make sense to return to the prior in this case?
- Consensus made the revised probability stronger; disagreement returned the decision-maker to his or her prior

### Going Further (on your own)

- What if the experts had different accuracy levels?
- What if the two experts’ judgments weren’t independent?
- What about retaining the experts sequentially?
  - After getting Expert #1’s forecast, should you get Expert #2’s forecast?
  - If you did, what would it be worth?
  - Should you pursue the more accurate expert first?*
- What is the value of each imperfect expert individually?
- What about additivity?
- FOR THE EXAMS: think about possible variations of these types of problems
  - Given EVII, could you back out accuracy?
  - Given accuracy and EVII, could you determine what the decision-maker’s prior beliefs must have been?

\* This is not a trick question! Without costs, how can you tell?

### Allowing Expert Ignorance

- When our uncertainties were Sun and Rain, the only responses our experts could give were “Sun” and “Rain”
- What would happen if they could also say “I don’t know”?
- P(Sun) = 0.60
- Suppose that when it was sunny in the past, the expert said “Sun” 75% of the time, “Rain” 15% of the time, and “???” 10% of the time
- Suppose that when it was rainy in the past, the expert said “Rain” 75% of the time, “Sun” 15% of the time, and “???” 10% of the time
- What are the posterior probabilities now? What is P(Sun | “Sun”)? What is P(Sun | “???”)?

Bayes table (joint probabilities):

|       | “Sun” | “Rain” | “???” | Total |
|-------|-------|--------|-------|-------|
| Sun   | 0.45  | 0.09   | 0.06  | 0.60  |
| Rain  | 0.06  | 0.30   | 0.04  | 0.40  |
| Total | 0.51  | 0.39   | 0.10  | 1.00  |

Posteriors:

- P(S | “S”) = 0.45/0.51 = 0.88 and P(R | “S”) = 0.06/0.51 = 0.12
- P(S | “R”) = 0.09/0.39 = 0.23 and P(R | “R”) = 0.30/0.39 = 0.77
- P(S | “???”) = 0.06/0.10 = 0.60 and P(R | “???”) = 0.04/0.10 = 0.40

The posteriors after “???” equal the priors only because the expert is equally likely to say “???” whether it is sunny or rainy.
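The three-column Bayes table generalizes the earlier dichotomous case directly. A minimal Python sketch using the track record above (the `posterior` helper is illustrative):

```python
# Minimal sketch of posterior updating when the expert can also say "???".
# Likelihoods P(message | state) come from the track record described above.

LIKELIHOOD = {
    "Sun":  {"Sun": 0.75, "Rain": 0.15, "???": 0.10},
    "Rain": {"Sun": 0.15, "Rain": 0.75, "???": 0.10},
}
PRIOR = {"Sun": 0.60, "Rain": 0.40}

def posterior(message):
    """P(state | message) via the Bayes-table shortcut."""
    joint = {state: PRIOR[state] * LIKELIHOOD[state][message] for state in PRIOR}
    total = sum(joint.values())  # P(message), the column total
    return {state: p / total for state, p in joint.items()}

print(posterior("Sun"))   # P(Sun | "Sun") = 0.45/0.51, about 0.88
print(posterior("???"))   # equals the prior: P(Sun) = 0.60, P(Rain) = 0.40
```

Because the "???" likelihood is the same (0.10) in both rows, it cancels out of the division and `posterior("???")` returns the prior, which is exactly the observation the slide closes with.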