
Chapter 3: Estimation








  1. Chapter 3: Estimation • Methods of estimation (MOM, least squares, MLE) • Sample properties of estimators • Confidence intervals

  2. A. METHODS OF ESTIMATION • 1. METHOD OF MOMENTS Suppose x1, x2, ... are iid. Fix k a positive integer. Then x1^k, x2^k, ... are iid, and by the weak law of large numbers (1/n) Σ xi^k → E[x^k] in probability. These limits are the kth moments about the origin. Define mk = E[x^k] as the kth moment of x, so the method of moments estimates mk by the corresponding sample moment (1/n) Σ xi^k and solves the resulting equations for the unknown parameters. • 2. LEAST SQUARES We have an unknown mean μ. Our estimator will be μ̂. Take each xi, subtract its predicted value, i.e. its mean, square the difference, and add up the squared differences: S(μ̂) = Σ (xi − μ̂)². Minimizing S(μ̂) gives μ̂ = x̄, the sample mean, which we know to be the least squares estimator of μ for any distribution. • 3. MAXIMUM LIKELIHOOD ESTIMATORS The MLE is the parameter value that maximizes the likelihood, i.e. the joint density of the observed sample viewed as a function of the parameter.
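The two estimation methods above can be sketched in a short simulation. As a hypothetical example (not from the slides), take an Exponential distribution with rate lam = 2, whose first moment about the origin is E[x] = 1/lam; matching the first sample moment to this gives the MOM estimate, while minimizing Σ(xi − μ̂)² gives the sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: iid draws from Exponential(rate=2),
# so the first moment about the origin is E[x] = 1/lam = 0.5.
lam = 2.0
x = rng.exponential(scale=1.0 / lam, size=100_000)

# Method of moments: set the first sample moment (1/n) * sum(x_i)
# equal to E[x] = 1/lam and solve for lam.
m1 = x.mean()
lam_mom = 1.0 / m1

# Least squares: minimizing sum((x_i - mu_hat)^2) over mu_hat
# yields the sample mean, for any distribution, as the slide notes.
mu_ls = x.mean()

print(lam_mom)  # close to 2
print(mu_ls)    # close to 0.5
```

With 100,000 draws the law of large numbers makes both estimates land very close to their targets, which is exactly the WLLN argument the slide invokes.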

  3. B. PROPERTIES OF ESTIMATORS: SMALL SAMPLE PROPERTIES • UNBIASEDNESS An estimator is unbiased if in the long run it takes on the value of the population parameter: if you were to draw a sample, compute the statistic, and repeat this many, many times, then the average over all of the sample statistics would equal the population parameter, i.e. E[θ̂] = θ. • EFFICIENCY An estimator is efficient if, in the class of unbiased estimators, it has minimum variance. • SUFFICIENCY An estimator is sufficient if it uses all the sample information. The median, because it considers only rank, is not sufficient. The sample mean considers each member of the sample as well as its size, so it is a sufficient statistic.
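Unbiasedness and efficiency can both be seen in a repeated-sampling experiment, mirroring the "repeat this many, many times" description above. As an illustrative setup (the parameter values are assumptions, not from the slides), compare the sample mean and the sample median as estimators of the mean of a normal population: both average out to μ, but the mean has the smaller sampling variance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed setup: many samples of size 25 from N(mu=10, sigma=3).
mu, sigma, n, reps = 10.0, 3.0, 25, 20_000
samples = rng.normal(mu, sigma, size=(reps, n))

means = samples.mean(axis=1)        # sample mean of each sample
medians = np.median(samples, axis=1)  # sample median of each sample

# Unbiasedness: the long-run average of each estimator sits near mu = 10
# (the median is unbiased here because the normal is symmetric).
print(means.mean(), medians.mean())

# Efficiency: the mean's sampling variance is smaller; for normal data
# the variance ratio approaches pi/2 (about 1.57) as n grows.
print(medians.var() / means.var())
```

The ratio printed last is the relative inefficiency of the median; the sample mean wins, consistent with it being the efficient (and sufficient) estimator for normal data.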

  4. LARGE SAMPLE PROPERTIES • ASYMPTOTIC UNBIASEDNESS An estimator θ̂n is asymptotically unbiased if lim (n→∞) E[θ̂n] = θ. • ASYMPTOTIC EFFICIENCY Define the asymptotic variance as lim (n→∞) n·Var(θ̂n), the variance of the estimator's limiting distribution under √n scaling. An asymptotically efficient estimator is an unbiased estimator with smallest asymptotic variance. • CONSISTENCY A sequence of estimators θ̂n is consistent if it converges in probability to the true value of the parameter: P(|θ̂n − θ| > ε) → 0 as n → ∞, for every ε > 0.
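Consistency is easy to demonstrate numerically: as the sample size grows, the sample mean concentrates around the population mean. A minimal sketch, using an assumed Uniform(0, 1) population (true mean 0.5) as the example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Consistency of the sample mean: the estimation error shrinks toward 0
# as n grows, for a Uniform(0, 1) population with true mean 0.5.
true_mean = 0.5
for n in (10, 1_000, 100_000):
    estimate = rng.uniform(0.0, 1.0, size=n).mean()
    print(n, abs(estimate - true_mean))
```

Any single run can fluctuate, but the error at n = 100,000 is reliably tiny (its standard deviation is about 0.289/√n ≈ 0.0009), which is convergence in probability in action.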

  5. C. CONFIDENCE INTERVALS Consider the first sample mean problems we dealt with: P(−1.96 ≤ (x̄ − μ)/(σ/√n) ≤ 1.96) = 0.95, where σ/√n is the standard error of x̄. This can be rewritten as P(x̄ − 1.96·σ/√n ≤ μ ≤ x̄ + 1.96·σ/√n) = 0.95. This is called a confidence interval. INTERPRETATION: Of all confidence intervals calculated in a similar fashion {95%, n = 81}, we would expect that 95% of them would cover μ. μ does not change, only the position of the interval. Think of a big barrel containing 1000 different confidence intervals, different because they each use a different value of the random variable. The probability of us reaching in and grabbing a "correct" interval is 95%. But as soon as we break open the capsule and read the numbers, the mean either is there or it isn't.
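The "barrel of intervals" interpretation can be checked directly by simulation: build many 95% intervals of the form x̄ ± 1.96·σ/√n and count how often they cover the fixed true mean. The slide's n = 81 is kept; μ and σ below are assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed population: mu = 50, sigma = 9 (known), sample size n = 81,
# matching the slide's {95%, n = 81} setup.
mu, sigma, n, reps = 50.0, 9.0, 81, 10_000
half_width = 1.96 * sigma / np.sqrt(n)  # the 1.96 * sigma / sqrt(n) margin

# One sample mean per interval; the interval is xbar +/- half_width.
xbars = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
covered = (xbars - half_width <= mu) & (mu <= xbars + half_width)

print(covered.mean())  # close to 0.95
```

Note what varies across the 10,000 intervals: only x̄, and hence the interval's position; μ never moves. Each individual interval either covers μ or it does not, exactly as the transcript's capsule metaphor says.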
