Probability & Statistics for Engineers & Scientists, by Walpole, Myers, Myers & Ye ~ Chapter 9 Notes. Class notes for ISE 201, San Jose State University, Industrial & Systems Engineering Dept. Steve Kennedy.


Presentation Transcript


  1. Probability & Statistics for Engineers & Scientists, by Walpole, Myers, Myers & Ye ~ Chapter 9 Notes • Class notes for ISE 201, San Jose State University, Industrial & Systems Engineering Dept. • Steve Kennedy

  2. Unbiased Estimators • A statistic $\hat{\Theta}$ is an unbiased estimator of the parameter $\theta$ if $E(\hat{\Theta}) = \theta$. • Note that in calculating $S^2$, the reason we divide by $n-1$ rather than $n$ is so that $S^2$ will be an unbiased estimator of $\sigma^2$. • Of all unbiased estimators of a parameter $\theta$, the one with the smallest variance is called the most efficient estimator of $\theta$. • $\bar{X}$ is the most efficient estimator of $\mu$, and $\sigma_{\bar{X}} = \sigma/\sqrt{n}$ is called the standard error of the estimator $\bar{X}$.
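A quick way to see the $n-1$ point is by simulation. The following is a minimal Python sketch (not from the original notes; the sample size, variance, and seed are arbitrary assumptions): dividing by $n-1$ recovers $\sigma^2$ on average, while dividing by $n$ comes in low.

```python
# Monte Carlo check that S^2 with divisor n-1 is unbiased for sigma^2.
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, reps = 10, 4.0, 100_000

samples = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(reps, n))
s2_unbiased = samples.var(axis=1, ddof=1)   # divide by n-1
s2_biased = samples.var(axis=1, ddof=0)     # divide by n

print(f"E[S^2] with n-1: {s2_unbiased.mean():.3f}  (target {sigma2})")
print(f"E[S^2] with n:   {s2_biased.mean():.3f}  (biased low by factor (n-1)/n)")
```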

  3. Confidence Intervals • When we use $\bar{x}$ to estimate $\mu$, we don't expect the estimate to be exact. A confidence interval is a statement that we are $100(1-\alpha)\%$ confident that $\mu$ lies between two specified limits. • If $\bar{x}$ is the mean of a random sample of size $n$ from a population with known variance $\sigma^2$, then $\bar{x} - z_{\alpha/2}\,\sigma/\sqrt{n} < \mu < \bar{x} + z_{\alpha/2}\,\sigma/\sqrt{n}$ is a $100(1-\alpha)\%$ confidence interval for $\mu$. • Here $z_{\alpha/2}$ is the $z$ value with area $\alpha/2$ to the right. • For example, for a 95% confidence interval, $\alpha = .05$ and $z_{.025} = 1.96$. • If the population is not normal, this is still okay if $n \ge 30$.
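A minimal Python sketch of this known-$\sigma$ $z$-interval (the data values and $\sigma$ below are made-up illustrations, not from the notes):

```python
# z-interval for mu when the population standard deviation is known.
import numpy as np
from scipy.stats import norm

x = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 9.7])
sigma = 0.3            # assumed known population standard deviation
alpha = 0.05

z = norm.ppf(1 - alpha / 2)          # z_{alpha/2} = 1.96 for alpha = .05
half_width = z * sigma / np.sqrt(len(x))
xbar = x.mean()
print(f"95% CI for mu: ({xbar - half_width:.3f}, {xbar + half_width:.3f})")
```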

  4. Error of Estimate and Sample Size • If $\bar{x}$ is used as an estimate of $\mu$, we can be $100(1-\alpha)\%$ confident that the error of the estimate $e$ will not exceed $z_{\alpha/2}\,\sigma/\sqrt{n}$. • It is possible to calculate the value of $n$ necessary to achieve an error of size $e$. We can be $100(1-\alpha)\%$ confident that the error will not exceed $e$ when $n = \left(z_{\alpha/2}\,\sigma/e\right)^2$, rounded up to the next integer.
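Solving the bound for $n$ is one line of code. A sketch with illustrative values of $\sigma$ and $e$ (both assumed, not from the notes):

```python
# Sample size needed so the error of estimate does not exceed e.
import math
from scipy.stats import norm

sigma, e, alpha = 0.3, 0.05, 0.05
z = norm.ppf(1 - alpha / 2)
n = math.ceil((z * sigma / e) ** 2)   # round up: n must be an integer
print(f"Need n >= {n} for error <= {e} with {100*(1-alpha):.0f}% confidence")
```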

  5. One-Sided Confidence Bounds • Sometimes, instead of a confidence interval, we're only interested in a bound in a single direction. • In this case, a $(1-\alpha)100\%$ confidence bound uses $z_\alpha$ in the appropriate direction rather than $z_{\alpha/2}$ in either direction. • So the $(1-\alpha)100\%$ confidence bound would be either $\mu < \bar{x} + z_\alpha\,\sigma/\sqrt{n}$ or $\mu > \bar{x} - z_\alpha\,\sigma/\sqrt{n}$, depending upon the direction of interest.
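A sketch of the upper-bound case (all numbers here are illustrative assumptions); the only change from the two-sided interval is using $z_\alpha$ instead of $z_{\alpha/2}$:

```python
# One-sided (upper) confidence bound: mu < xbar + z_alpha * sigma / sqrt(n).
import numpy as np
from scipy.stats import norm

xbar, sigma, n, alpha = 10.07, 0.3, 8, 0.05
z = norm.ppf(1 - alpha)              # z_alpha, not z_{alpha/2}
upper = xbar + z * sigma / np.sqrt(n)
print(f"95% upper confidence bound: mu < {upper:.3f}")
```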

  6. Confidence Interval if $\sigma$ is Unknown • If $\sigma$ is unknown, the calculations are the same, using $t_{\alpha/2}$ with $\nu = n-1$ degrees of freedom instead of $z_{\alpha/2}$, and using $s$ calculated from the sample rather than $\sigma$. • As before, use of the t-distribution requires that the original population be normally distributed. • The standard error of the estimate (i.e., the standard deviation of the estimator) in this case is $s/\sqrt{n}$. • Note that if $\sigma$ is unknown but $n \ge 30$, $s$ is still used instead of $\sigma$, but the normal distribution is used instead of the t-distribution. • This is called a large-sample confidence interval.
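The same sketch as before, adapted to unknown $\sigma$: swap in the sample standard deviation and the $t$ critical value (data values below are made up for illustration).

```python
# t-interval for mu when sigma is unknown (small n, normal population).
import numpy as np
from scipy.stats import t

x = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 9.7])
alpha, n = 0.05, len(x)

t_crit = t.ppf(1 - alpha / 2, df=n - 1)       # t_{alpha/2} with nu = n-1
half_width = t_crit * x.std(ddof=1) / np.sqrt(n)   # s, not sigma
print(f"95% t-interval for mu: ({x.mean() - half_width:.3f}, "
      f"{x.mean() + half_width:.3f})")
```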

  7. Difference Between Two Means • If $\bar{x}_1$ and $\bar{x}_2$ are the means of independent random samples of size $n_1$ and $n_2$, drawn from two populations with variances $\sigma_1^2$ and $\sigma_2^2$, then, if $z_{\alpha/2}$ is the $z$-value with area $\alpha/2$ to the right of it, a $100(1-\alpha)\%$ confidence interval for $\mu_1 - \mu_2$ is given by $(\bar{x}_1 - \bar{x}_2) \pm z_{\alpha/2}\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}$. • Requires a reasonable sample size or a normal-like population for the central limit theorem to apply. • It is important that the two samples be randomly selected (and independent of each other). • Can be used if $\sigma$ is unknown as long as the sample sizes are large.
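A sketch of the two-sample interval (all summary statistics below are illustrative assumptions, not data from the notes):

```python
# Two-sample z-interval for mu1 - mu2 with known (or large-sample) variances.
import math
from scipy.stats import norm

xbar1, xbar2 = 72.1, 69.4
var1, var2 = 9.0, 16.0          # sigma_1^2 and sigma_2^2
n1, n2, alpha = 50, 60, 0.05

z = norm.ppf(1 - alpha / 2)
half_width = z * math.sqrt(var1 / n1 + var2 / n2)
diff = xbar1 - xbar2
print(f"95% CI for mu1 - mu2: ({diff - half_width:.3f}, {diff + half_width:.3f})")
```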

  8. Estimating a Proportion • An estimator of $p$ in a binomial experiment is $\hat{P} = X/n$, where $X$ is a binomial random variable indicating the number of successes in $n$ trials. The sample proportion $\hat{p} = x/n$ is a point estimate of $p$. • What are the mean and variance of a binomial random variable $X$? $E(X) = np$ and $\text{Var}(X) = npq$, where $q = 1-p$. • To find a confidence interval for $p$, first find the mean and variance of $\hat{P}$: $\mu_{\hat{P}} = p$ and $\sigma^2_{\hat{P}} = pq/n$.
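These two facts can be checked by simulation. A minimal sketch (the values of $p$, $n$, and the seed are arbitrary assumptions):

```python
# Simulate Phat = X/n to check mu_Phat = p and var_Phat = pq/n.
import numpy as np

rng = np.random.default_rng(1)
p, n, reps = 0.3, 100, 100_000

phat = rng.binomial(n, p, size=reps) / n
print(f"mean of Phat: {phat.mean():.4f}  (target p = {p})")
print(f"var of Phat:  {phat.var():.6f}  (target pq/n = {p*(1-p)/n:.6f})")
```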

  9. Confidence Interval for a Proportion • If $\hat{p}$ is the proportion of successes in a random sample of size $n$, and $\hat{q} = 1 - \hat{p}$, then a $(1-\alpha)100\%$ confidence interval for the binomial parameter $p$ is given by $\hat{p} \pm z_{\alpha/2}\sqrt{\hat{p}\hat{q}/n}$. • Note that $n$ must be reasonably large and $p$ not too close to 0 or 1. • Rule of thumb: both $n\hat{p}$ and $n\hat{q}$ must be $\ge 5$. • This also works if the binomial is used to approximate the hypergeometric distribution (when $n$ is small relative to $N$).
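A sketch of this interval with the rule of thumb checked up front (the counts $x$ and $n$ are illustrative, not from the notes):

```python
# Large-sample confidence interval for a binomial proportion p.
import math
from scipy.stats import norm

x, n, alpha = 340, 500, 0.05
phat = x / n
qhat = 1 - phat
assert n * phat >= 5 and n * qhat >= 5   # rule of thumb from the slide

z = norm.ppf(1 - alpha / 2)
half_width = z * math.sqrt(phat * qhat / n)
print(f"95% CI for p: ({phat - half_width:.4f}, {phat + half_width:.4f})")
```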

  10. Error of Estimate for a Proportion • If $\hat{p}$ is used to estimate $p$, we can be $(1-\alpha)100\%$ confident that the error of estimate will not exceed $z_{\alpha/2}\sqrt{\hat{p}\hat{q}/n}$. • Then, to achieve an error of $e$, the sample size must be at least $n = z_{\alpha/2}^2\,\hat{p}\hat{q}/e^2$. • If $\hat{p}$ is unknown, we can be at least $100(1-\alpha)\%$ confident using the upper limit on the sample size $n = z_{\alpha/2}^2/(4e^2)$, since $\hat{p}\hat{q}$ is at most $1/4$.
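A sketch computing both sample sizes (the target error $e$ and the pilot estimate $\hat{p}$ below are assumptions for illustration):

```python
# Sample size for estimating a proportion to within error e.
import math
from scipy.stats import norm

e, alpha = 0.03, 0.05
z = norm.ppf(1 - alpha / 2)

phat = 0.68                                   # assumed prior/pilot estimate
n_with_phat = math.ceil(z**2 * phat * (1 - phat) / e**2)
n_worst_case = math.ceil(z**2 / (4 * e**2))   # uses phat*qhat <= 1/4

print(f"n using phat = {phat}: {n_with_phat}")
print(f"n with no estimate of p (worst case): {n_worst_case}")
```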

  11. The Difference of Two Proportions • If $\hat{p}_1$ and $\hat{p}_2$ are the proportions of successes in random samples of size $n_1$ and $n_2$, an approximate $(1-\alpha)100\%$ confidence interval for the difference of two binomial parameters is $(\hat{p}_1 - \hat{p}_2) \pm z_{\alpha/2}\sqrt{\hat{p}_1\hat{q}_1/n_1 + \hat{p}_2\hat{q}_2/n_2}$.
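A sketch of this last interval (the success counts and sample sizes are made-up illustrations):

```python
# Approximate confidence interval for p1 - p2.
import math
from scipy.stats import norm

x1, n1 = 120, 200
x2, n2 = 98, 180
alpha = 0.05

p1, p2 = x1 / n1, x2 / n2
z = norm.ppf(1 - alpha / 2)
half_width = z * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
diff = p1 - p2
print(f"95% CI for p1 - p2: ({diff - half_width:.4f}, {diff + half_width:.4f})")
```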
