
CHAPTER 19



Presentation Transcript


  1. McGraw-Hill, ENGINEERING ECONOMY, Fifth Edition, Blank and Tarquin. CHAPTER 19: MORE ON VARIATION AND DECISION MAKING UNDER RISK

  2. 19. Learning Objectives • Understand certainty and risk. • Examine variables and distributions. • Relate to the context of random variables. • Estimate expected value and standard deviation from sampling. • Understand and apply Monte Carlo techniques and simulation to engineering economy problems.

  3. 19.1 Certainty, Risk, and Uncertainty • It is said, “There is nothing certain in this world other than death and taxes.” • Situations and the passage of time create change, variation, and instability. • Engineering economy deals with aspects of a very uncertain future.

  4. 19.1 A Parable worth remembering • Yesterday is history… • today is “now”… • and tomorrow is a mystery. • Putting this into perspective: • Accountants deal with yesterday. • Engineers deal with tomorrow… • when it comes to engineering economy. • So, how can we deal with the uncertainties of future estimation?

  5. 19.1 Certainty • Most of the chapters in this text have presented problems where values were given: • Assume certainty of occurrence; • Sorry, the real world is not like that! • About the only “certain” or near-certain parameter in a problem might be the purchase price of an asset; the rest of the parameters will vary with time.

  6. 19.1 Decision Making under Risk • Risk is associated with knowing the following about a parameter: 1. The number of observable values, and 2. The probability of each value occurring. • In other words, we know the “state of nature” of the process at hand; this situation is called decision making under risk.

  7. 19.1 Decision Making under Uncertainty • We will have two or more observable values; • However, we find it most difficult to assign the probability of occurrence of the possible outcomes; • At times, no one is even willing to try to assign probabilities to the possible outcomes.

  8. 19.1 Discrete vs. Continuous Outcomes • If a parameter is “discrete,” then it can take on only a finite number of values, and we attempt to assign a probability to each outcome. • If a parameter is continuous in nature, then it can take on an infinite number of values between two set limits, and we must deal with continuous functions.

  9. 19.1 Example 19.1 • Two individuals, Charles and Sue, are assessing wedding costs. • Charles’ (subjective) estimates are:

     Estimated Cost   P(Cost)
     $3,000           0.65
     $5,000           0.25
     $10,000          0.10

  10. 19.1 Ex. 19.1 Histogram Plot: Charles Output produced by Palisade’s RiskView Excel add-in software. Note the mean value for Charles: $4,200 for the wedding costs.

  11. 19.1 Example 19.1 • Sue, on the other hand, estimates the following costs for the wedding:

     Estimated Cost   P(Cost)
     $8,000           0.333
     $10,000          0.333
     $15,000          0.333

  You should feel sorry for Sue’s father: he will have to pay for the wedding!

  12. 19.1 Sue’s Probability Distribution Sue’s mean value for the wedding is $11,000.
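
  For reference, both stated means follow directly from the expected value of a discrete distribution, E[X] = Σ Xi P(Xi), applied to the estimates above:

     Charles: E[X] = 3,000(0.65) + 5,000(0.25) + 10,000(0.10) = 1,950 + 1,250 + 1,000 = $4,200
     Sue:     E[X] = (1/3)(8,000 + 10,000 + 15,000) = $11,000   (each of her three outcomes is equally likely)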

  13. 19.1 The Merged PDF Plots for Charles and Sue After discussion, they agree that the wedding should cost at least $7,500 and no more than $10,000. The agreed-to distribution is uniform from $7,500 to $10,000, so all values in that range are equally likely.

  14. 19.1 Before a Study Is Started: • Must decide the following: • Analysis under certainty (point estimates); • Analysis under risk: • Assign probability values or distributions to the specified parameters; • Account for variances; • Which of the parameters are to be probabilistic and which are to be treated as “certain” to occur?

  15. 19.1 Two Ways to Account for Risk • Expected Value Analysis 1. Discrete or continuous? 2. Must assign or assume probabilities/probability distributions. • Simulation Analysis 1. Assign relevant probability distributions: 2. Generate simulated data by applying sampling techniques from the assumed distributions.
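
  The short Python sketch below is illustrative only (it is not from the text; Python and its standard random module are assumed purely for demonstration). It applies both approaches to Charles’s wedding-cost estimate from Example 19.1 and shows that they agree on a mean of about $4,200.

```python
# Minimal sketch (illustrative, not from Blank & Tarquin): the two ways of
# accounting for risk, applied to Charles's cost estimate in Example 19.1.
import random

costs = [3_000, 5_000, 10_000]      # possible wedding costs
probs = [0.65, 0.25, 0.10]          # Charles's subjective probabilities

# 1. Expected value analysis: E[X] = sum of Xi * P(Xi)
expected = sum(x * p for x, p in zip(costs, probs))

# 2. Simulation analysis: sample repeatedly from the assumed distribution
n = 100_000
samples = random.choices(costs, weights=probs, k=n)
simulated = sum(samples) / n

print(f"Expected value analysis:  ${expected:,.0f}")     # $4,200
print(f"Simulation estimate:      ${simulated:,.0f}")    # close to $4,200
```

  The same pattern scales up to full cash-flow problems: replace the single cost variable with a present-worth calculation whose inputs are sampled from their assumed distributions.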

  16. 19.1 Analysis under Uncertainty • This is the “worst” situation to be in. • Here, the states of nature may not be known, or • The states of nature may be defined, but assignment of probability distributions is at best “a shot in the dark.” • What do you do? • Try to move from a degree of uncertainty to an improved level of acceptable risk. (Hereon, we assume decision making under risk.)

  17. McGraw-Hill, ENGINEERING ECONOMY, Fifth Edition, Blank and Tarquin. CHAPTER 19, SECTION 19.2: ELEMENTS IMPORTANT TO DECISION MAKING UNDER RISK

  18. 19.2 Basic Probability and Statistics • RANDOM VARIABLE • A rule that assigns a numerical value to each outcome in a sample space. • Describes a parameter that can assume any one of several values over some range. • Random variables (RVs) can be: • Discrete, or • Continuous.

  19. 19.2 Two Types Discrete – assumes only a finite number of values. Continuous – can assume an infinite number of values over a defined range.

  20. 19.2 Probability • A number between 0 and 1. • Represents a “chance” of some event occurring. • Notations: • P(Xi), • P(X = Xi): read as: “The probability that the random variable, X, assumes a value of, say, Xi”

  21. 19.2 Probability • For a given event and all of that event’s possible outcomes: • The probabilities of all possible outcomes must sum to 1.00. • A probability assignment of “0” means that the event cannot occur.

  22. 19.2 Probability Distributions • A function that defines how probability is distributed over the different values a random variable can assume.

  23. 19.2 Probability Distributions Individual probability values are stated as: • P(Xi) = probability that X = Xi [19.1] • “X” represents the random variable or rule (a math function, for example); • Xi represents a specific value generated from the random variable, X. • Remember, the random variable X is a rule or function that generates values; the probability distribution assigns a probability to each of those values.

  24. 19.2 Probability Distributions • Probabilities are developed in two ways: 1. By listing each outcome and its associated probability, or 2. From a mathematical function that is a proper probability function.

  25. 19.2 Cumulative Probability Distribution • Cumulative Probability Distribution, or • CDF – Cumulative distribution function: • Represents the accumulation of probability over all values of the variable up to and including Xi. • Notation: F(Xi)

  26. 19.2 The CDF General Form • F(Xi) = P(X ≤ Xi) for all i in the domain of X.
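
  Because the Example 19.2 data are not reproduced in this transcript, the sketch below uses hypothetical outcome/probability pairs (placeholders only) to show how a discrete CDF is computed as a running sum of the individual probabilities:

```python
# Minimal sketch: F(Xi) = P(X <= Xi) computed as a running sum of P(Xi).
# The seven outcome/probability pairs are hypothetical placeholders, not the
# actual Example 19.2 data.
outcomes = [1, 2, 3, 4, 5, 6, 7]
pmf      = [0.05, 0.10, 0.20, 0.30, 0.20, 0.10, 0.05]   # sums to 1.00

running, cdf = 0.0, []
for p in pmf:
    running += p
    cdf.append(round(running, 2))

for x, f in zip(outcomes, cdf):
    print(f"F({x}) = P(X <= {x}) = {f:.2f}")
# The final value is F(7) = 1.00, as required of any CDF.
```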

  27. 19.2 Example 19.2 Data Discrete data with seven possible outcomes. The probabilities were most likely developed from experimental observations.

  28. 19.2 Example 19.2 CDF Computed

  29. 19.2 Discrete Probability Distributions • PDF and CDF from Example 19.2

  30. 19.2 Continuous Distribution: Uniform • If the distribution in question represents outcomes that can assume a continuous range of values, then: • The model is described by a fitted continuous distribution. • One common type of distribution is the: UNIFORM DISTRIBUTION

  31. 19.2 Uniform Distribution • Discrete or continuous: • For the continuous case, new notation: • Let f(X) denote the PDF of the random variable; • F(X) denotes the cumulative distribution function (CDF) of the random variable.

  32. 19.2 Equal Probability • For the uniform distribution: • All of the values from A to B are considered equally likely to occur. • Thus, all values from A to B have the same probability of occurring. • See example 19.3 for cash-flow modeling.

  33. 19.2 Example 19.3 – Cash-Flow Modeling • Client 1 • Estimated low cash flow: $10,000 • Estimated high cash flow: $15,000 • Distributed uniformly between $10,000 and $15,000. • The cash flow is assumed to be best described as a uniformly distributed random variable between $10,000 and $15,000.

  34. 19.2 Example 19.3: Client 1 Distribution (Figures: the uniform PDF on $10,000–$15,000 and the CDF derived from it.)

  35. 19.2 Parameters for the Uniform • Density: f(X) = 1/(B - A) for A ≤ X ≤ B, where A and B are the low and high limits; • Cumulative distribution: F(X) = (X - A)/(B - A) for A ≤ X ≤ B.

  36. 19.2 Uniform Distribution • Mean: μ = (A + B)/2 • Variance: σ² = (B - A)²/12

  37. 19.2 Client 1: Question • What is the probability that the monthly cash flow will be no more than $12,000? • P(X ≤ 12,000) = F(12,000); • A = 10,000; B = 15,000 • f(X) = 1/(15 - 10) = 1/5 = 0.200 (working in $1000 units) • F(12) = cumulative probability at 12 • = (12 - 10)/5 = 2/5 = 0.40: • There is a 40% chance that the cash flow will be $12,000 or less!
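
  The sketch below double-checks this result two ways: with the closed-form uniform CDF and with a simple Monte Carlo sample (Python’s random.uniform is assumed here only as a convenient sampler; it is not the text’s own simulation procedure):

```python
# Minimal sketch: P(X <= 12,000) for a cash flow uniform on [10,000, 15,000],
# by the closed-form CDF and by a simple Monte Carlo sample.
import random

A, B = 10_000, 15_000          # uniform limits for Client 1
x = 12_000

analytic = (x - A) / (B - A)   # F(12,000) = 2/5 = 0.40

n = 100_000
hits = sum(random.uniform(A, B) <= x for _ in range(n))
print(f"Closed-form F(12,000):  {analytic:.2f}")
print(f"Monte Carlo estimate:   {hits / n:.2f}")
```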

  38. 19.2 Example 19.3: Client 2 • Parameters for Client 2: • Assumed Distribution – Triangular • Parameters: • Low = 20 ($ x 1000) • Most Likely: 28 ($ x 1000) • High: 30 ($ x 1000). • The triangular distribution is used to model Client 2’s cash flow.

  39. 19.2 Triangular Distribution • Typical triangular PDF and CDF (figures shown for Client 2).

  40. 19.2 Parameters for a Triangular • L = low value; • M = most likely value; • H = high value. • f(X) is defined in two parts: • f(X) = 2(X - L)/[(H - L)(M - L)] for L ≤ X ≤ M; • f(X) = 2(H - X)/[(H - L)(H - M)] for M ≤ X ≤ H.

  41. 19.2 Example 19.3: Client 2, P(X ≤ 25,000) • “M” is the mode of the distribution; • The mode is the most frequently occurring value. • For the triangular: • f(mode) = f(M) = 2/(H - L); (19.5) • Cumulative F(M) = (M - L)/(H - L) (19.6) • f(28) = 2/(30 - 20) = 0.2 • The breakpoint is at the mode, M = 28; • F(28) = P(X ≤ 28) = (28 - 20)/(30 - 20) = 0.8

  42. 19.2 Example 19.3: Client 2 Analysis Given the CDF F(X), one locates 25 on the x-axis, projects up to the curve, and reads across to obtain F(25) = 0.3125.
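
  The graphical reading can also be confirmed numerically. The sketch below evaluates the left branch of the triangular CDF, F(x) = (x - L)²/[(H - L)(M - L)] for L ≤ x ≤ M, and compares it with a simulated estimate (Python’s random.triangular is assumed only as a convenient sampler):

```python
# Minimal sketch: P(X <= 25) for Client 2's triangular cash-flow distribution
# with L = 20, M = 28, H = 30 (in $1000 units).
import random

L, M, H = 20, 28, 30
x = 25

# Left branch of the triangular CDF (valid for L <= x <= M):
analytic = (x - L) ** 2 / ((H - L) * (M - L))     # = 25/80 = 0.3125

n = 100_000
hits = sum(random.triangular(L, H, M) <= x for _ in range(n))
print(f"Closed-form F(25):     {analytic:.4f}")
print(f"Monte Carlo estimate:  {hits / n:.4f}")
```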

  43. 19.3 Random Samples • Assume an economic parameter can be described by a random variable, X. • We assume that the random variable X possesses a true mean and variance, denoted by: • μ – the parameter’s true (but possibly unknown) mean value, and • σ² – the parameter’s true (but possibly unknown) variance.

  44. 19.3 Random Samples – Population • A population is defined as a collection of objects, elements, or all of the possible outcomes a variable can assume. • A population may be: • Infinite in number, or • Finite. • A population is characterized numerically by the population parameters.

  45. 19.3 Random Samples: Population Parameters • Just like a random variable, a population possesses (numerically) a: • Mean, μ • Variance, σ² • In practice, we can define the population, but probably do not know its true mean and variance.

  46. 19.3 Random Samples: Inference • In the area of applied statistics, one usually samples from the defined population in order to make inferences concerning the population. • There is always uncertainty present when one samples from a parent population. • This is why the area of probability is usually studied before one studies statistics.

  47. 19.3 Random Samples: Key Relationships • The slide’s diagram presents an overview of the relations between a population, a sample, probability, and statistics. (Diagram labels: Population, Sample, Probability, Statistics.) Ref: Probability and Statistics for Engineering and the Sciences, 4th Edition, Jay Devore (Duxbury), p. 3.

  48. 19.3 Random Sample: Point Estimate • In many cases we assume certainty. • We supply a point estimate of the parameter in question. • A point estimate is a sample of size 1 taken from the specified population. • An analysis under “certainty” is essentially applying a point estimate, which is a sample of size 1.

  49. 19.3 Random Sample • If we research the parameter of interest and make another estimate, then we have: • The original estimate and, • A second estimate: • So that we now have a sample of size 2. • A point estimate is then: • The most likely value we perceive at the time of the estimate, or a mean value estimate.

  50. 19.3 Random Sample • A population is composed of two or more outcomes (values). • The population mean, μ, for a finite population of N values is: • μ = (X1 + X2 + … + XN)/N = ΣXi/N • Often, we attempt to estimate μ from a sample drawn from the population.
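
  A brief sketch of the sampling idea follows; the population below is synthetic (generated only so the true mean is known for comparison), since in practice μ and σ are unknown and must be estimated from sample statistics:

```python
# Minimal sketch: estimating an unknown population mean (mu) and standard
# deviation (sigma) from a random sample. The population here is synthetic;
# in practice only the sample statistics would be available.
import random
import statistics

population = [random.uniform(10_000, 15_000) for _ in range(50_000)]
mu = statistics.mean(population)            # "true" mean, normally unknown

sample = random.sample(population, k=30)    # random sample of size n = 30
x_bar = statistics.mean(sample)             # sample mean, estimates mu
s = statistics.stdev(sample)                # sample standard deviation

print(f"Population mean mu:        {mu:,.0f}")
print(f"Sample mean x-bar (n=30):  {x_bar:,.0f}")
print(f"Sample std deviation s:    {s:,.0f}")
```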
