MATH408: Probability & Statistics Summer 1999 WEEK 5

  1. MATH408: Probability & Statistics, Summer 1999, Week 5. Dr. Srinivas R. Chakravarthy, Professor of Mathematics and Statistics, Kettering University (GMI Engineering & Management Institute), Flint, MI 48504-4898. Phone: 810.762.7906. Email: schakrav@kettering.edu Homepage: www.kettering.edu/~schakrav

  2. Joint PDF • So far we have seen one random variable at a time. In practice, however, we often encounter situations where more than one variable must be studied at a time. • For example, the tensile strength (X) and diameter (Y) of a beam may be of interest. • Or the diameter (X) and thickness (Y) of an injection-molded disk.

  3. Joint PDF (Cont’d): X and Y continuous • f(x, y) dx dy = P(x < X < x+dx, y < Y < y+dy) is the probability that the random variable X takes a value in (x, x+dx) and Y takes a value in (y, y+dy). • f(x, y) ≥ 0 for all x and y, and the double integral of f(x, y) over the entire plane equals 1.
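
A quick numerical illustration of these two defining properties (a sketch only; the density f(x, y) = 4xy on the unit square is an assumed example, not one from the slides):

```python
# Numerical check of the two defining properties of a joint PDF,
# using the assumed example f(x, y) = 4xy on the unit square.
from scipy import integrate

def f(y, x):              # dblquad expects the inner variable (y) first
    return 4 * x * y

# Property 1: f integrates to 1 over its support.
total, _ = integrate.dblquad(f, 0, 1, lambda x: 0, lambda x: 1)
print(total)              # ~1.0

# A joint probability: P(X < 0.5, Y < 0.5) = 0.0625 for this density.
p, _ = integrate.dblquad(f, 0, 0.5, lambda x: 0, lambda x: 0.5)
print(p)                  # ~0.0625
```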

  4. Measures of Joint PDF
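
The body of this slide is not in the transcript. Assuming it covers the usual summary measures of a joint PDF, the central ones are the covariance and the correlation coefficient:

```latex
% Assumed content: covariance and correlation of a pair (X, Y).
\[
  \operatorname{Cov}(X, Y) = E\!\left[(X - \mu_X)(Y - \mu_Y)\right]
                           = E(XY) - \mu_X \mu_Y,
  \qquad
  \rho_{XY} = \frac{\operatorname{Cov}(X, Y)}{\sigma_X \, \sigma_Y}.
\]
```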

  5. Independence We say that two random variables X and Y are independent if and only if P(X ∈ A, Y ∈ B) = P(X ∈ A) P(Y ∈ B) for all A and B.
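
For a discrete joint distribution, this factorization can be checked cell by cell. A minimal sketch (the 2x2 probability table below is a made-up example):

```python
import numpy as np

# Made-up joint pmf of (X, Y); rows index values of X, columns values of Y.
joint = np.array([[0.12, 0.28],
                  [0.18, 0.42]])

px = joint.sum(axis=1)    # marginal pmf of X
py = joint.sum(axis=0)    # marginal pmf of Y

# Independence <=> joint[i, j] == px[i] * py[j] for every cell.
print(np.allclose(joint, np.outer(px, py)))   # True for this table
```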

  6. EXAMPLES

  7. Groundwork for Inferential Statistics • Recall that our primary concern is to make inferences about the population under study. • Since we cannot study the entire population, we rely on a subset of it, called a sample, to make inferences. • We saw how to take samples. • Having taken the sample, how do we make inferences about the population?

  8. Basic Concepts

  9. Figure 3-36(a) Probability density function of a pull-off force measurement in Example 3-33.

  10. Figure 3-36 (b) Probability density function of the average of 8 pull-off force measurements in Example 3-33.

  11. Figure 3-36 (c) Probability density function of the sample variance of 8 pull-off force measurements in Example 3-33.

  12. An important result
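
The content of this slide was an image and is not in the transcript. Assuming (from the surrounding slides on sampling distributions) that it states the standard result on the sample mean, that result is:

```latex
% Assumed content: sampling distribution of the sample mean.
% For a random sample X_1, ..., X_n from a population with mean \mu
% and variance \sigma^2, the sample mean \bar{X} satisfies
\[
  E(\bar{X}) = \mu, \qquad \operatorname{Var}(\bar{X}) = \frac{\sigma^2}{n},
\]
% and if the population itself is normal, then
% \bar{X} \sim N(\mu, \sigma^2/n) exactly.
```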

  13. Examples

  14. Central Limit Theorem • One of the most celebrated results in Probability and Statistics. • The history of the CLT is fascinating; read “The Life and Times of the Central Limit Theorem” by William J. Adams. • It has found applications in many areas of science and engineering.

  15. CLT (cont’d) • A great many random phenomena that arise in physical situations result from the combined action of many individual ones. • Examples: shot noise from electrons and holes in a vacuum tube or transistor; atmospheric noise; turbulence in a medium; thermal agitation of electrons in a conductor; ocean waves; fluctuations in the stock market; etc.

  16. CLT (cont’d) • Historically, the CLT was born out of investigations of the theory of errors involved in measurements, mainly in astronomy. • Abraham de Moivre (1667-1754) obtained the first version. • Gauss, in the context of curve fitting, developed the method of Least Squares, which led to the normal distribution.
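
The theorem itself says that the standardized sample mean of n independent, identically distributed observations is approximately N(0, 1) for large n, whatever the population. A minimal simulation sketch (the Exponential(1) population is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 30, 100_000
mu, sigma = 1.0, 1.0      # mean and std dev of the Exponential(1) population

# Draw `reps` samples of size n and standardize each sample mean.
means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
z = (means - mu) / (sigma / np.sqrt(n))

# By the CLT, z should look like N(0, 1): mean ~ 0, std ~ 1,
# and about 95% of values inside +/- 1.96.
print(z.mean(), z.std(), np.mean(np.abs(z) < 1.96))
```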

  17. Examples

  18. HOMEWORK PROBLEMS Sections 3.11 through 3.12: 109, 111, 114-116, 119, 121-123, 129-130

  19. Examples

  20. Tests of Hypotheses • Two types of hypotheses: null (H0) and alternative (H1)

  21. Basic Ideas in Tests of Hypotheses • Set up H0 and H1. For a one-sided case, make sure these are set up correctly; usually they are chosen so that the type 1 error is the “costly” error. • Choose an appropriate test statistic. This is usually based on the UMV estimator of the parameter under study. • Set up the decision rule if α = P(type 1 error) is specified. If not, report a p-value. • Choose a random sample and make the decision.

  22. Setting up H0 and H1 • Suppose that a manufacturer of airbags for automobiles claims that the mean time to inflate an airbag is no more than 0.1 second. • Suppose that the “costly error” is to conclude erroneously that the mean time is < 0.1. • How do we set up the hypotheses?

  23. ILLUSTRATIVE EXAMPLE

  24. Test on µ using normal • Sample size is large, or • Sample size is small and the population is approximately normal with known σ.
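
A minimal sketch of the resulting z-test (all numbers below are made up for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical data: test H0: mu = mu0 against H1: mu != mu0
# with known population standard deviation sigma.
mu0, sigma, n, xbar = 50.0, 2.5, 40, 50.8

z = (xbar - mu0) / (sigma / np.sqrt(n))    # test statistic
p_value = 2 * stats.norm.sf(abs(z))        # two-sided p-value

print(z, p_value)   # reject H0 at level alpha when p_value < alpha
```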

  25. Figure: do-not-reject (DNR) region, centered at µ and bounded by the critical points CP_1 and CP_2.

  26. Computation of P(type 2 error)

  27. Example (page 142) • µ = mean propellant burning rate (in cm/s). • H0: µ = 50 vs. H1: µ ≠ 50. • Two-sided hypotheses. • A sample of n = 10 observations is used to test the hypotheses. • Suppose that we are given the decision rule. • Question 1: Compute P(type 1 error). • Question 2: Compute P(type 2 error) when µ = 52.

  28. DECISION RULE

  29. Calculation of P(type 1 error)
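
A sketch of both calculations for the propellant example. The decision rule itself is not reproduced in the transcript, so the numbers below (σ = 2.5 cm/s and the rule “do not reject H0 when 48.5 ≤ x̄ ≤ 51.5”) are assumptions made for illustration:

```python
import numpy as np
from scipy import stats

sigma, n = 2.5, 10          # ASSUMED population std dev; given sample size
lo, hi = 48.5, 51.5         # ASSUMED do-not-reject region for xbar
se = sigma / np.sqrt(n)     # std dev of the sample mean

# P(type 1 error): reject H0 (xbar outside [lo, hi]) although mu = 50.
alpha = (stats.norm.cdf(lo, loc=50, scale=se)
         + stats.norm.sf(hi, loc=50, scale=se))

# P(type 2 error) at mu = 52: fail to reject although H0 is false.
beta = (stats.norm.cdf(hi, loc=52, scale=se)
        - stats.norm.cdf(lo, loc=52, scale=se))

print(alpha, beta)   # ~0.058 and ~0.264 under these assumptions
```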

  30. Example

  31. Confidence Interval • Recall the point estimate for the parameter under study. • For example, suppose that µ = mean tensile strength of a piece of wire. • Suppose a random sample of size 36 yields a mean of 242.4 psi. • Can we attach any confidence to this single value? • Answer: No! What do we do?

  32. Confidence Interval (cont’d) • Given a parameter, say θ, let θ̂ denote its UMV estimator. • Given α, a 100(1-α)% CI for θ is constructed using the sampling (probability) distribution of θ̂ as follows. • Find L and U such that P(L < θ < U) = 1-α. • Note that L and U are functions of θ̂.
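
For the tensile-strength example above, a sketch of the usual large-sample CI for µ (the population standard deviation σ = 18 psi is an assumed value, since none is given in the transcript):

```python
import numpy as np
from scipy import stats

xbar, n = 242.4, 36     # sample mean (psi) and sample size from the example
sigma = 18.0            # ASSUMED population std dev, for illustration only
alpha = 0.05            # for a 95% confidence interval

z = stats.norm.ppf(1 - alpha / 2)        # ~1.96
half_width = z * sigma / np.sqrt(n)

print(xbar - half_width, xbar + half_width)   # ~(236.52, 248.28)
```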
