Statistical Communication 黃偉傑
Table of Contents • Estimation Theory • Assessing Estimator Performance • Minimum Variance Unbiased (MVU) Estimation • Cramer-Rao Lower Bound
Assessing Estimator Performance • Consider the data set shown in Figure 1. It appears that x[n] consists of a DC level A in noise. We could model the data as x[n] = A + w[n], n = 0, 1, …, N−1, where w[n] denotes some zero-mean noise process. Figure 1
Assessing Estimator Performance • Based on the data set {x[0], x[1], …, x[N−1]}, we would like to estimate A. • It would be reasonable to estimate A as Â = (1/N) Σ x[n], i.e., by the sample mean of the data. • Several questions come to mind: • How close will Â be to A? • Are there better estimators than the sample mean?
Assessing Estimator Performance • For the data set in Figure 1, it turns out that Â = 0.9, which is close to the true value of A = 1. • Another estimator, call it Ă, might also be considered. • For the data set in Figure 1, Ă = 0.95, which is closer to the true value of A than the sample mean estimate. • Can we conclude that Ă is a better estimator than Â? • Because an estimator is a function of the data, which are random variables, it too is a random variable, subject to many possible outcomes.
Assessing Estimator Performance • Suppose we repeat the experiment by fixing A = 1 and adding different noise. We determine the values of the two estimators for each data set. For 100 realizations the histograms are shown in Figures 2 and 3. Figure 2
Assessing Estimator Performance • It should be evident that Â is a better estimator than Ă because the values obtained are more concentrated about the true value of A = 1. Â will usually produce a value closer to the true one than Ă. Figure 3
Assessing Estimator Performance • To prove that Â is better we could establish that its variance is less. • The modeling assumptions that we must employ are that the w[n]'s, in addition to being zero mean, are uncorrelated and have equal variance σ². • We first show that the mean of each estimator is the true value, i.e., E(Â) = E(Ă) = A.
Assessing Estimator Performance • Second, the variances: since the w[n]'s are uncorrelated, var(Â) = var((1/N) Σ x[n]) = (1/N²) Σ var(w[n]) = σ²/N, and thus var(Â) < var(Ă), confirming that Â is the better estimator.
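A minimal Monte Carlo sketch of the experiment behind Figures 2 and 3, assuming WGN with σ = 1, N = 100 samples per realization, and 100 realizations. Since the definition of Ă is not reproduced above, a single-sample estimator x[0] stands in as a purely hypothetical alternative; the point is that the sample mean's empirical variance matches σ²/N.

```python
import numpy as np

rng = np.random.default_rng(0)

A = 1.0        # true DC level
sigma = 1.0    # noise standard deviation
N = 100        # samples per realization
trials = 100   # number of independent realizations, as in Figures 2 and 3

# Each row is one realization of x[n] = A + w[n] with zero-mean white noise.
x = A + sigma * rng.standard_normal((trials, N))

A_hat = x.mean(axis=1)   # sample-mean estimator, one value per realization
A_alt = x[:, 0]          # hypothetical alternative estimator: just the first sample

print(f"sample mean:   mean = {A_hat.mean():.3f}, var = {A_hat.var(ddof=1):.4f} "
      f"(theory sigma^2/N = {sigma**2 / N:.4f})")
print(f"single sample: mean = {A_alt.mean():.3f}, var = {A_alt.var(ddof=1):.4f} "
      f"(theory sigma^2   = {sigma**2:.4f})")
```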
Table of Contents • Minimum Variance Unbiased (MVU) Estimation • Unbiased Estimators • Minimum Variance Criterion
Unbiased Estimators • For an estimator to be unbiased we mean that on the average the estimator will yield the true value of the unknown parameter. • Mathematically, an estimator is unbiased if E(θ̂) = θ for a < θ < b, where (a, b) denotes the range of possible values of θ. • Unbiased estimators tend to have symmetric PDFs centered about the true value of θ. • For Example 1 the PDF is shown in Figure 4, and Â is easily shown to be distributed as N(A, σ²/N).
Unbiased Estimators • The restriction that E(θ̂) = θ for all θ is an important one. Letting θ̂ = g(x), where x = [x[0], x[1], …, x[N−1]]ᵀ, it asserts that E(θ̂) = ∫ g(x) p(x; θ) dx = θ for all θ. Figure 4 Probability density function for the sample mean estimator
Unbiased Estimators • Example 1 – Unbiased Estimator for DC Level in WGN • Consider the observations x[n] = A + w[n], n = 0, 1, …, N−1, where −∞ < A < ∞. Then, a reasonable estimator for the average value of x[n] is the sample mean Â = (1/N) Σ x[n]. • Due to the linearity of the expectation operator, E(Â) = (1/N) Σ E(x[n]) = A for all A.
Unbiased Estimators • Example 2 – Biased Estimator for DC Level in White Noise • Consider again Example 1 but with a modified sample mean estimator Ă. • Then E(Ă) can be shown to differ from A in general, so the estimator is biased. • That an estimator is unbiased does not necessarily mean that it is a good estimator. It only guarantees that on the average it will attain the true value. • A persistent bias will always result in a poor estimator.
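The modified estimator of Example 2 is not reproduced above. As a hypothetical stand-in, the sketch below scales the sample mean by 1/2, which illustrates the same point: a persistent bias does not go away as N grows.

```python
import numpy as np

rng = np.random.default_rng(1)
A, sigma = 1.0, 1.0

for N in (10, 100, 1000):
    x = A + sigma * rng.standard_normal((2000, N))  # 2000 realizations of length N
    A_check = 0.5 * x.mean(axis=1)                  # hypothetical biased estimator
    print(f"N = {N:5d}: average estimate = {A_check.mean():.3f} (true A = {A})")
# The average estimate stays near A/2 = 0.5; increasing N does not remove the bias.
```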
Unbiased Estimators • Combining estimators problem • It sometimes occurs that multiple estimates of the same parameter are available, i.e., {θ̂₁, θ̂₂, …, θ̂ₙ}. • A reasonable procedure is to combine these estimates into a better one by averaging them to form θ̂ = (1/n) Σ θ̂ᵢ. • Assuming the estimators are unbiased, with the same variance, and uncorrelated with each other, E(θ̂) = θ and var(θ̂) = var(θ̂₁)/n.
Unbiased Estimators • Combining estimators problem (cont.) • So that as more estimates are averaged, the variance will decrease. • However, if the estimators are biased, i.e., E(θ̂ᵢ) = θ + b(θ), then E(θ̂) = (1/n) Σ E(θ̂ᵢ) = θ + b(θ), and no matter how many estimators are averaged, θ̂ will not converge to the true value. b(θ) is defined as the bias of the estimator.
Unbiased Estimators Combining estimators problem (cont.)
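A short simulation of the combining argument, assuming n unbiased, uncorrelated estimates with a common variance (values illustrative): the variance of the average falls as 1/n, while a common bias b(θ) survives the averaging.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 1.0      # true parameter
var1 = 0.25      # variance of each individual estimate (illustrative)
bias = 0.3       # bias b(theta) applied in the biased case (illustrative)
trials = 5000

for n in (1, 4, 16, 64):
    est = theta + np.sqrt(var1) * rng.standard_normal((trials, n))  # n unbiased estimates
    avg_unbiased = est.mean(axis=1)
    avg_biased = (est + bias).mean(axis=1)
    print(f"n = {n:3d}: var(average) = {avg_unbiased.var(ddof=1):.4f} "
          f"(theory var1/n = {var1 / n:.4f}), "
          f"mean of biased average = {avg_biased.mean():.3f} "
          f"(stays near theta + b = {theta + bias})")
```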
Minimum Variance Criterion • Mean square error (MSE): mse(θ̂) = E[(θ̂ − θ)²]. • Unfortunately, adoption of this natural criterion leads to unrealizable estimators, ones that cannot be written solely as a function of the data. To understand the problem, we first rewrite the MSE as mse(θ̂) = E{[(θ̂ − E(θ̂)) + (E(θ̂) − θ)]²} = var(θ̂) + b²(θ).
Minimum Variance Criterion • The equation shows that the MSE is composed of errors due to the variance of the estimator as well as the bias. • As an example, from the problem in Example 1 consider the modified estimator Ă = a·(1/N) Σ x[n] for some constant a. We will attempt to find the a which results in the minimum MSE. Since E(Ă) = aA and var(Ă) = a²σ²/N, we have mse(Ă) = a²σ²/N + (a − 1)²A².
Minimum Variance Criterion • Differentiating the MSE with respect to a yields d mse(Ă)/da = 2aσ²/N + 2(a − 1)A², which upon setting to zero and solving yields the optimum value a_opt = A²/(A² + σ²/N). • It is seen that the optimal value of a depends upon the unknown parameter A. The estimator is therefore not realizable.
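A small numerical sketch of this minimization, using an illustrative value of σ²/N: the analytic minimizer a = A²/(A² + σ²/N) agrees with a brute-force search, and it clearly shifts with the unknown A.

```python
import numpy as np

sigma2_over_N = 0.1   # sigma^2 / N, illustrative value

def mse(a, A):
    """MSE of the scaled sample mean a*A_hat: variance term plus squared bias."""
    return a**2 * sigma2_over_N + (a - 1.0)**2 * A**2

for A in (0.5, 1.0, 2.0):
    a_opt = A**2 / (A**2 + sigma2_over_N)        # analytic minimizer
    a_grid = np.linspace(0.0, 1.5, 3001)
    a_num = a_grid[np.argmin(mse(a_grid, A))]    # brute-force numerical check
    print(f"A = {A}: analytic a_opt = {a_opt:.3f}, numerical minimizer = {a_num:.3f}")
# The optimal a changes with the unknown A, so the minimum-MSE estimator is unrealizable.
```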
Minimum Variance Criterion • It would seem that any criterion which depends on the bias will lead to an unrealizable estimator. • Although this is generally true, on occasion a realizable minimum MSE estimator can be found. From a practical viewpoint, however, the minimum MSE estimator needs to be abandoned. • An alternative approach is: • Constrain the bias to be zero. • Find the estimator which minimizes the variance. • Such an estimator is termed the minimum variance unbiased (MVU) estimator.
Minimum Variance Criterion • Possible dependence of estimator variance on θ: if one unbiased estimator has variance no larger than that of every other unbiased estimator for all θ, it is sometimes referred to as the uniformly minimum variance unbiased estimator. In general, the MVU estimator does not always exist.
Minimum Variance Criterion • Example 3 – Counterexample to Existence of MVU Estimator • If the form of the PDF changes with θ, then it would be expected that the best estimator would also change with θ. • Assume that we have two independent observations x[0] and x[1].
Minimum Variance Criterion • Example 3 (cont.) • The two estimators can easily be shown to be unbiased. To compute the variances we use the independence of the observations.
Minimum Variance Criterion • Example 3 (cont.) • It turns out that each estimator has the smaller variance over a different range of θ. Clearly, between these two estimators no MVU estimator exists.
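The specific distributions and estimators of Example 3 are not reproduced above. The sketch below assumes the classic two-observation setup of this counterexample, in which the variance of x[1] depends on the sign of θ; the particular numbers are an assumption, but they show how each of two unbiased estimators can win on a different part of the parameter range.

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 200_000

def estimator_variances(theta):
    # Assumed setup: x[0] ~ N(theta, 1), while the variance of x[1] is 1 for
    # theta >= 0 and 2 for theta < 0.
    var_x1 = 1.0 if theta >= 0 else 2.0
    x0 = theta + rng.standard_normal(trials)
    x1 = theta + np.sqrt(var_x1) * rng.standard_normal(trials)
    t1 = 0.5 * (x0 + x1)          # unbiased estimator 1: plain average
    t2 = (2.0 * x0 + x1) / 3.0    # unbiased estimator 2: weights x[0] more heavily
    return t1.var(ddof=1), t2.var(ddof=1)

for theta in (1.0, -1.0):
    v1, v2 = estimator_variances(theta)
    print(f"theta = {theta:+.0f}: var(t1) = {v1:.3f}, var(t2) = {v2:.3f}")
# t1 has the smaller variance for theta >= 0, t2 for theta < 0,
# so neither estimator is uniformly minimum variance.
```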
Table of Contents • Cramer-Rao Lower Bound • Estimator Accuracy Considerations • Cramer-Rao Lower Bound • Transformation of Parameters
Estimator Accuracy Considerations • If a single sample is observed as x[0] = A + w[0], where w[0] ~ N(0, σ²), and it is desired to estimate A, then we expect a better estimate if σ² is small. • A good unbiased estimator is Â = x[0]. Its variance is σ², so the estimator accuracy improves as σ² decreases. • The PDFs for two different variances are shown in Figure 5. • They are pᵢ(x[0]; A) = (1/√(2πσᵢ²)) exp[−(x[0] − A)²/(2σᵢ²)] for i = 1, 2.
Estimator Accuracy Considerations • The PDF has been plotted versus the unknown parameter A for a given value of x[0]. • If σ₁² < σ₂², then we should be able to estimate A more accurately based on p₁(x[0]; A). Figure 5 PDF dependence on unknown parameter
Estimator Accuracy Considerations • When the PDF is viewed as a function of the unknown parameter (with x fixed), it is termed the likelihood function. • If we consider the natural logarithm of the PDF, ln p(x[0]; A) = −ln √(2πσ²) − (x[0] − A)²/(2σ²), • then the first derivative is ∂ ln p(x[0]; A)/∂A = (x[0] − A)/σ², • and the negative of the second derivative becomes −∂² ln p(x[0]; A)/∂A² = 1/σ².
Estimator Accuracy Considerations • The curvature increases as σ² decreases. Since we already know that the estimator Â = x[0] has variance σ², for this example var(Â) = 1/(−∂² ln p(x[0]; A)/∂A²), and the variance decreases as the curvature increases. • A more appropriate measure of curvature is −E[∂² ln p(x[0]; A)/∂A²], which measures the average curvature of the log-likelihood function.
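A numeric sketch of the curvature argument for the single-observation Gaussian model: the negative second derivative of the log-likelihood evaluates to 1/σ², so a smaller σ² means a sharper likelihood. The observed value x0 = 1.2 is illustrative.

```python
import numpy as np

def curvature(x0, A, sigma2, h=1e-4):
    """Numerical -d^2/dA^2 of ln p(x0; A) for a single N(A, sigma2) observation."""
    def loglik(a):
        return -0.5 * np.log(2.0 * np.pi * sigma2) - (x0 - a) ** 2 / (2.0 * sigma2)
    second = (loglik(A + h) - 2.0 * loglik(A) + loglik(A - h)) / h**2
    return -second

x0 = 1.2   # observed sample (illustrative)
for sigma2 in (1.0, 0.1):
    print(f"sigma^2 = {sigma2}: curvature = {curvature(x0, 1.0, sigma2):.2f}, "
          f"1/sigma^2 = {1.0 / sigma2:.2f}")
```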
Cramer-Rao Lower Bound • Theorem (Cramer-Rao Lower Bound – Scalar Parameter) • It is assumed that the PDF p(x; θ) satisfies the "regularity" condition E[∂ ln p(x; θ)/∂θ] = 0 for all θ. • Then, the variance of any unbiased estimator θ̂ must satisfy var(θ̂) ≥ 1 / (−E[∂² ln p(x; θ)/∂θ²]), where the derivative is evaluated at the true value of θ and the expectation is taken with respect to p(x; θ).
Cramer-Rao Lower Bound • An unbiased estimator may be found that attains the bound for all θ if and only if ∂ ln p(x; θ)/∂θ = I(θ)(g(x) − θ) for some functions g and I. • That estimator, θ̂ = g(x), is the MVU estimator, and the minimum variance is 1/I(θ).
Cramer-Rao Lower Bound • Example 4 – DC Level in White Gaussian Noise • Consider x[n] = A + w[n], n = 0, 1, …, N−1, where w[n] is WGN with variance σ². • To determine the CRLB for A, note that ln p(x; A) = −(N/2) ln(2πσ²) − (1/(2σ²)) Σ (x[n] − A)².
Cramer-Rao Lower Bound • Example 4 (cont.) • Taking the first derivative: ∂ ln p(x; A)/∂A = (1/σ²) Σ (x[n] − A) = (N/σ²)(x̄ − A), where x̄ is the sample mean. • Differentiating again: ∂² ln p(x; A)/∂A² = −N/σ².
Cramer-Rao Lower Bound • Example 4 (cont.) • Noting that the second derivative is a constant, we have from the theorem var(Â) ≥ σ²/N as the CRLB. • By comparing, we see that the sample mean estimator attains the bound and must therefore be the MVU estimator.
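A quick Monte Carlo check of Example 4, with illustrative values of A, σ², and N: the empirical variance of the sample mean essentially sits on the CRLB σ²/N.

```python
import numpy as np

rng = np.random.default_rng(4)
A, sigma2, N, trials = 1.0, 2.0, 50, 100_000

x = A + np.sqrt(sigma2) * rng.standard_normal((trials, N))  # x[n] = A + w[n]
A_hat = x.mean(axis=1)                                      # sample-mean estimator

print(f"empirical var(A_hat) = {A_hat.var(ddof=1):.5f}")
print(f"CRLB sigma^2/N       = {sigma2 / N:.5f}")
```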
Cramer-Rao Lower Bound • We now prove that when the CRLB is attained, var(θ̂) = 1/I(θ), where I(θ) = −E[∂² ln p(x; θ)/∂θ²]. • From the Cramer-Rao Lower Bound theorem, var(θ̂) ≥ 1/I(θ), and attainment requires ∂ ln p(x; θ)/∂θ = I(θ)(θ̂ − θ).
Cramer-Rao Lower Bound • Differentiating the latter produces ∂² ln p(x; θ)/∂θ² = (∂I(θ)/∂θ)(θ̂ − θ) − I(θ), • and taking the negative expected value yields −E[∂² ln p(x; θ)/∂θ²] = I(θ), since E(θ̂) = θ, • and therefore var(θ̂) = 1/I(θ). • In the next example we will see that the CRLB is not always attained.
Cramer-Rao Lower Bound • Example 5 – Phase Estimator • Assume that we wish to estimate the phase φ of a sinusoid embedded in WGN: x[n] = A cos(2π f₀ n + φ) + w[n], n = 0, 1, …, N−1. • The amplitude A and frequency f₀ are assumed known. • The PDF is p(x; φ) = (2πσ²)^(−N/2) exp[−(1/(2σ²)) Σ (x[n] − A cos(2π f₀ n + φ))²].
Cramer-Rao Lower Bound • Example 5 (cont.) • Differentiating the log-likelihood function produces ∂ ln p(x; φ)/∂φ = −(A/σ²) Σ [x[n] sin(2π f₀ n + φ) − (A/2) sin(4π f₀ n + 2φ)] • and ∂² ln p(x; φ)/∂φ² = −(A/σ²) Σ [x[n] cos(2π f₀ n + φ) − A cos(4π f₀ n + 2φ)].
Cramer-Rao Lower Bound • Example 5 (cont.) • Upon taking the negative expected value, using E(x[n]) = A cos(2π f₀ n + φ), we have −E[∂² ln p(x; φ)/∂φ²] = (A²/σ²) Σ [1/2 − (1/2) cos(4π f₀ n + 2φ)].
Cramer-Rao Lower Bound • Example 5 (cont.) • Since (1/N) Σ cos(4π f₀ n + 2φ) ≈ 0 for f₀ not near 0 or 1/2, the information is approximately NA²/(2σ²), and therefore var(φ̂) ≥ 2σ²/(NA²).
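A numeric sketch of the phase bound under the assumed model x[n] = A cos(2π f₀ n + φ) + w[n]: the exact information sum is compared with the approximation NA²/(2σ²). The parameter values are illustrative.

```python
import numpy as np

A, sigma2, phi, N = 1.0, 0.5, 0.3, 100
n = np.arange(N)

for f0 in (0.25, 0.002):
    fisher_exact = (A**2 / sigma2) * np.sum(0.5 - 0.5 * np.cos(4 * np.pi * f0 * n + 2 * phi))
    fisher_approx = N * A**2 / (2.0 * sigma2)
    print(f"f0 = {f0}: exact bound = {1.0 / fisher_exact:.5f}, "
          f"approximate 2*sigma^2/(N*A^2) = {1.0 / fisher_approx:.5f}")
# For f0 well inside (0, 1/2) the double-frequency term averages out and the two agree;
# near f0 = 0 the approximation is no longer accurate.
```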
Cramer-Rao Lower Bound • In this example the condition for the bound to be attained is not satisfied. Hence a phase estimator that is unbiased and attains the CRLB does not exist. • An estimator which is unbiased and attains the CRLB, as the sample mean estimator in Example 4 does, is said to be efficient, in that it efficiently uses the data.
Transformation of Parameters • In Example 4 we may not be interested in the sign of A but instead may wish to estimate A², or the power of the signal. • Knowing the CRLB for A, we can easily obtain it for A². • If it is desired to estimate α = g(θ), then the CRLB is var(α̂) ≥ (∂g/∂θ)² / (−E[∂² ln p(x; θ)/∂θ²]). • For the present example this becomes α = g(A) = A², and var(Â²) ≥ (2A)² / (N/σ²) = 4A²σ²/N.
Transformation of Parameters • We saw in Example 4 that the sample mean estimator was efficient for A. • It might be supposed that Â² is efficient for A². To quickly dispel this notion we first show that Â² is not even an unbiased estimator. • Since Â ~ N(A, σ²/N), E(Â²) = E²(Â) + var(Â) = A² + σ²/N ≠ A². • Hence, we immediately conclude that the efficiency of an estimator is destroyed by a nonlinear transformation.
Transformation of Parameters • That it is maintained for linear transformations is easily verified. • Assume that an efficient estimator for θ exists and is given by θ̂. It is desired to estimate g(θ) = aθ + b. We choose ĝ = aθ̂ + b, which is unbiased since E(aθ̂ + b) = aθ + b. • The CRLB for g(θ) is var(ĝ) ≥ (∂g/∂θ)²/I(θ) = a²/I(θ). • But var(aθ̂ + b) = a² var(θ̂) = a²/I(θ), so that the CRLB is achieved.
Transformation of Parameters • Although efficiency is preserved only over linear transformations, it is approximately maintained over nonlinear transformations if the data record is large enough. • To see why this property holds, we return to the previous example of estimating A² by Â². • Although Â² is biased, we note from E(Â²) = A² + σ²/N that Â² is asymptotically unbiased, i.e., unbiased as N → ∞.
Transformation of Parameters • Since Â ~ N(A, σ²/N), we can evaluate the variance of Â² by using the result that if ξ ~ N(μ, σ²), then E(ξ²) = μ² + σ² and E(ξ⁴) = μ⁴ + 6μ²σ² + 3σ⁴. • Therefore var(ξ²) = E(ξ⁴) − E²(ξ²) = 4μ²σ² + 2σ⁴.
Transformation of Parameters • For our problem we then have var(Â²) = 4A²(σ²/N) + 2(σ²/N)² = 4A²σ²/N + 2σ⁴/N². • As N → ∞, the variance approaches the CRLB 4A²σ²/N, the last term converging to zero faster than the first. • Our assertion that Â² is an asymptotically efficient estimator of A² is verified. • This situation occurs due to the statistical linearity of the transformation, as illustrated in Figure 6.
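A Monte Carlo sketch of the asymptotic argument, with illustrative A and σ²: the bias of Â² shrinks as σ²/N and its variance approaches the CRLB 4A²σ²/N as N grows.

```python
import numpy as np

rng = np.random.default_rng(5)
A, sigma2, trials = 1.0, 1.0, 20_000

for N in (10, 100, 1000):
    x = A + np.sqrt(sigma2) * rng.standard_normal((trials, N))
    A2_hat = x.mean(axis=1) ** 2                   # estimate A^2 by squaring the sample mean
    exact_var = 4 * A**2 * sigma2 / N + 2 * sigma2**2 / N**2
    crlb = 4 * A**2 * sigma2 / N
    print(f"N = {N:4d}: mean = {A2_hat.mean():.4f} "
          f"(A^2 + sigma^2/N = {A**2 + sigma2 / N:.4f}), "
          f"var = {A2_hat.var(ddof=1):.5f}, exact = {exact_var:.5f}, CRLB = {crlb:.5f}")
```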
Transformation of Parameters • As N increases, the PDF of Â becomes more concentrated about the mean A. Therefore, the values of Â that are observed lie in a small interval about A. • Over this small interval the nonlinear transformation is approximately linear. Figure 6 Statistical linearity of nonlinear transformations
Minimum Variance Unbiased Estimator for the Linear Model • If the data observed can be modeled as x = Hθ + w, where x is an N×1 vector of observations, H is a known N×p observation matrix, θ is a p×1 vector of parameters to be estimated, and w is an N×1 noise vector with PDF N(0, σ²I), • then the MVU estimator is θ̂ = (HᵀH)⁻¹Hᵀx and the covariance matrix of θ̂ is C_θ̂ = σ²(HᵀH)⁻¹.
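A minimal sketch of the linear-model MVU estimator for a hypothetical line-fit example, where H holds a constant column and a ramp column; θ̂ = (HᵀH)⁻¹Hᵀx and its covariance σ²(HᵀH)⁻¹ are computed directly.

```python
import numpy as np

rng = np.random.default_rng(6)

N, sigma2 = 100, 0.5
theta_true = np.array([1.0, 0.2])             # [intercept, slope], illustrative values

n = np.arange(N)
H = np.column_stack([np.ones(N), n])          # known N x p observation matrix (line fit)
w = np.sqrt(sigma2) * rng.standard_normal(N)  # noise ~ N(0, sigma2 * I)
x = H @ theta_true + w                        # x = H*theta + w

theta_hat = np.linalg.solve(H.T @ H, H.T @ x)   # MVU estimator (H^T H)^{-1} H^T x
cov_theta = sigma2 * np.linalg.inv(H.T @ H)     # covariance sigma^2 (H^T H)^{-1}

print("theta_hat           =", theta_hat)
print("standard deviations =", np.sqrt(np.diag(cov_theta)))
```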