
Constrained Minimax Estimation



  1. Constrained Minimax Estimation Dmitry Rudoy

  2. Agenda • Linear Minimax MSE estimator with weighted norm constraint • Linear Robust Minimax MSE estimator • Linear Minimax regret estimator with weighted norm constraint • Linearly Biased Estimation

  3. Problem Setup • Model: $y = Hx + w$ • Where • $H$ is a known $n \times m$ matrix with rank $m$ • $w$ is a zero-mean random vector with covariance $C_w$ • $x$ is an unknown deterministic parameter that satisfies the weighted norm constraint $\|x\|_T^2 = x^* T x \le L^2$, with $T \succ 0$
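
For concreteness, here is a minimal Python sketch of this setup; all sizes and numerical values below are arbitrary illustrative choices, not values from the talk.

```python
import numpy as np

# Minimal sketch of the model y = H x + w.
rng = np.random.default_rng(0)
n, m = 20, 5                        # number of measurements, parameters
H = rng.standard_normal((n, m))     # known n-by-m model matrix (rank m a.s.)
Cw = 0.5 * np.eye(n)                # noise covariance C_w
T = np.eye(m)                       # weighting matrix of the constraint
L = 2.0                             # norm bound

x = rng.standard_normal(m)
x *= 0.9 * L / np.sqrt(x @ T @ x)   # enforce x^T T x <= L^2
w = rng.multivariate_normal(np.zeros(n), Cw)
y = H @ x + w                       # observed data
```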

  4. Motivation • The common approach is to use the unbiased LS estimator. There are also biased alternatives, like Tikhonov regularization, the shrunken estimator, etc. • Allowing bias may improve the performance, but then we generally can't minimize the MSE (which depends on x). • The constraint on x will be used to minimize the maximal possible MSE and obtain the minimax MSE estimator.

  5. The Constraint Set • When L isn't given, one can use a technique called Blind Minimax Estimation. • We take the norm of the LS estimate of x as L. • The result can be extended to any weighting matrix $T$. • In other words, we can always use the minimax technique, and if constraints aren't given we estimate them.
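
Continuing the sketch above, the blind-minimax idea of estimating the bound from the data might look like this (the names `x_ls` and `L_blind` are mine):

```python
# Blind minimax: when L is not given, take the (weighted) norm of the LS
# estimate itself as the bound. Reuses H, Cw, T, y from the sketch above.
Cw_inv = np.linalg.inv(Cw)
x_ls = np.linalg.solve(H.T @ Cw_inv @ H, H.T @ Cw_inv @ y)  # weighted LS
L_blind = np.sqrt(x_ls @ T @ x_ls)                          # estimated bound
```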

  6. Linear Minimax MSE Estimator • We use a linear estimator of $x$: $\hat{x} = Gy$, where $G$ is some $m \times n$ matrix. • Its MSE is $E\|\hat{x} - x\|^2 = x^*(I - GH)^*(I - GH)x + \mathrm{Tr}(G C_w G^*)$. • Generally it cannot be minimized because of the dependence on $x$. • We minimize the worst-case MSE: $\min_G \max_{x^* T x \le L^2} E\|\hat{x} - x\|^2$.
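
The worst case over the ellipsoid has a closed expression: the bias term is maximized on the boundary, at an eigenvector of $T^{-1/2}(I - GH)^*(I - GH)T^{-1/2}$. A small helper (my sketch, reusing numpy from the first snippet):

```python
from scipy.linalg import sqrtm

def worst_case_mse(G, H, Cw, T, L):
    """Worst-case MSE of x_hat = G y over the set {x : x^T T x <= L^2}.

    MSE(G, x) = x^T (I - GH)^T (I - GH) x + tr(G Cw G^T); the bias term is
    maximized on the boundary of the ellipsoid, giving L^2 times the largest
    eigenvalue of T^{-1/2} (I - GH)^T (I - GH) T^{-1/2}.
    """
    m = H.shape[1]
    B = (np.eye(m) - G @ H) @ np.real(np.linalg.inv(sqrtm(T)))
    worst_bias = L**2 * np.linalg.eigvalsh(B.T @ B).max()
    variance = np.trace(G @ Cw @ G.T)
    return worst_bias + variance
```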

  7. SDP Formulation The problem above can be formulated as a semidefinite programming (SDP) problem: minimize $\tau + \mathrm{Tr}(G C_w G^*)$ over $(G, \tau)$ subject to $L^2 B^* B \preceq \tau I$, where $B = (I - GH)\,T^{-1/2}$; by a Schur complement the constraint is the linear matrix inequality $\begin{bmatrix} \tau I & L B^* \\ L B & I \end{bmatrix} \succeq 0$.
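
As a hedged illustration (my reconstruction of the criterion, not necessarily the slides' exact LMIs), the same problem can be posed in CVXPY, which converts the spectral-norm term into an SDP internally; names are reused from the earlier snippets.

```python
import cvxpy as cp

# Minimax MSE: min over G of L^2 * sigma_max(B)^2 + tr(G Cw G^T),
# where B = (I - GH) T^{-1/2} is affine in G.
Tinv_half = np.real(np.linalg.inv(sqrtm(T)))
Cw_half = np.real(sqrtm(Cw))

G = cp.Variable((m, n))
B = (np.eye(m) - G @ H) @ Tinv_half              # affine in G
worst_bias = L**2 * cp.square(cp.sigma_max(B))   # worst case over x^T T x <= L^2
variance = cp.sum_squares(G @ Cw_half)           # tr(G Cw G^T)
prob = cp.Problem(cp.Minimize(worst_bias + variance))
prob.solve()
x_minimax = G.value @ y
```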

  8. Closed Form Solution • If $H^* C_w^{-1} H$ and $T$ are jointly diagonalizable, i.e. they share a common eigenvector matrix, there is a closed-form solution for the SDP problem. • If $T = I$, i.e. x has a bounded Euclidean norm, there is an even simpler closed form for the estimator.

  9. Case of T = I • If $T = I$ the estimator is simply a scaled version of LS with an optimal choice of shrinkage factor: $\hat{x} = \frac{L^2}{L^2 + \epsilon_0^2}\,\hat{x}_{LS}$, where $\epsilon_0^2 = \mathrm{Tr}\big((H^* C_w^{-1} H)^{-1}\big)$ is the variance of the LS estimator. • It was also proposed by Mayer and Willke. • The intuition is simple: the larger the LS variance is relative to the signal bound, the harder the LS estimate is shrunk toward zero.
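
Reusing `x_ls`, `H`, `Cw_inv`, and `L` from the sketches above, this closed form is two lines:

```python
# Closed form for T = I: shrink the LS estimate by L^2 / (L^2 + eps0^2),
# where eps0^2 is the variance of the LS estimator.
eps0_sq = np.trace(np.linalg.inv(H.T @ Cw_inv @ H))   # variance of LS
x_shrunk = (L**2 / (L**2 + eps0_sq)) * x_ls           # minimax MSE estimate (T = I)
```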

  10. Discussion • The constrained linear minimax MSE estimator can be formulated as an SDP problem, which can be solved very efficiently. • It significantly improves on the LS estimator. • When L approaches infinity the estimator reduces to LS. • Won't minimaxity lead to a "too conservative" solution? • Do we have to be linear?

  11. Agenda • Linear Minimax MSE estimator with weighted norm constraint • Linear Robust Minimax MSE estimator • Linear Minimax regret estimator with weighted norm constraint • Linearly Biased Estimation

  12. Motivation • In many applications we can't be sure that the model matrix H is known exactly. • In this case the previous estimator may perform poorly. • One can try to develop an estimator that takes these "perturbations" in H into account.

  13. New "Unknown" Model • The original model is changed slightly: $y = (H + \delta H)x + w$ • Where • $H$ is a known $n \times m$ matrix with rank $m$ • $\delta H$ is an unknown perturbation matrix satisfying $\|\delta H\| \le \rho$, where $\|\cdot\|$ denotes the spectral norm of the matrix, i.e. the largest singular value. • $w$ and $x$ are as in the previous model.
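
A sketch of drawing such a perturbation, continuing the earlier snippets (the value of `rho` is an arbitrary illustrative choice):

```python
# Draw a perturbation dH with spectral norm exactly rho and form the
# perturbed measurement of the "unknown" model.
rho = 0.1
dH = rng.standard_normal((n, m))
dH *= rho / np.linalg.norm(dH, 2)   # ||dH||_2 = largest singular value = rho
y_pert = (H + dH) @ x + w
```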

  14. Linear Robust Estimator • As before, we want to minimize the maximum MSE of the linear estimator $\hat{x} = Gy$. • But now the maximization is also over the model "perturbation": $\min_G \max_{x^* T x \le L^2,\ \|\delta H\| \le \rho} E\|Gy - x\|^2$.

  15. SDP Formulation Again, the problem is equivalent to an SDP: the worst case over $\|\delta H\| \le \rho$ is handled by additional linear matrix inequalities in $G$ and auxiliary scalar variables.

  16. Jointly Diagonalizable Matrices • If $H^* H$, $T$ and $C_w$ are jointly diagonalizable, the problem reduces to a simple convex optimization problem in two unknowns. • In case $H$, $T$ and $C_w$ are circulant, the matrices above are jointly diagonalizable (by the DFT). • This is approximately the case when H and T represent convolution with some filter and w is stationary.
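
The circulant/DFT fact behind the last two bullets can be checked numerically; `scipy.linalg.circulant` builds the circular-convolution matrix, and the DFT diagonalizes it (the filter is an arbitrary example):

```python
import numpy as np
from scipy.linalg import circulant

# Every circular-convolution matrix is diagonalized by the DFT, which is why
# H and T built from filters are (approximately) jointly diagonalizable.
h = np.array([0.5, 0.3, 0.2] + [0.0] * 7)      # an arbitrary length-10 filter
Hc = circulant(h)                               # circular convolution with h
F = np.fft.fft(np.eye(10))                      # DFT matrix
D = F @ Hc @ np.linalg.inv(F)                   # should be diagonal
print(np.allclose(D, np.diag(np.diag(D))))      # True
print(np.allclose(np.diag(D), np.fft.fft(h)))   # eigenvalues = fft(h): True
```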

  17. Example • The estimators are compared on an image example (the images themselves are not reproduced in this transcript); the resulting LS image is not shown since its MSE is too large (9.07).

  18. Discussion • A robust minimax MSE estimator can be developed and formulated as an SDP problem. • It coincides with the non-robust version when the model is exactly known. • It improves on the non-robust minimax estimator even more, specifically where the latter performs poorly.

  19. Agenda • Linear Minimax MSE estimator with weighted norm constraint • Linear Robust Minimax MSE estimator • Linear Minimax regret estimator with weighted norm constraint • Linearly Biased Estimation

  20. Motivation • Try to improve the minimax linear estimator by "removing the pessimism". • Instead of minimizing the maximum possible MSE, another criterion can be minimized. • The choice is to minimize the regret, which measures how close the estimator is to the best possible one.

  21. Regret • The regret is the difference between the MSE of a linear estimator that doesn't know the parameter x and the MSE of the linear estimator that knows x. • In the latter case G may be a function of x. • Since we're restricted to linear estimators, the latter MSE isn't zero. • Differentiating the MSE of the linear estimator with respect to G gives the best "informed" MSE, $MSE^o(x) = \|x\|^2 / (1 + x^* H^* C_w^{-1} H x)$, so the regret is $R(G, x) = E\|Gy - x\|^2 - MSE^o(x)$.
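
A small helper for the regret, using the $MSE^o(x)$ expression above (my reconstruction of the dropped formula, obtained by setting the derivative of the MSE with respect to G to zero); it reuses numpy from the earlier snippets:

```python
def regret(G, x, H, Cw):
    """Regret of x_hat = G y relative to the best linear estimator that knows x.

    Setting the derivative of the MSE with respect to G to zero gives
    G(x) = x x^T H^T (H x x^T H^T + Cw)^{-1}, and by the matrix inversion
    lemma the resulting optimal MSE is ||x||^2 / (1 + x^T H^T Cw^{-1} H x).
    """
    m = H.shape[1]
    E = np.eye(m) - G @ H
    mse = x @ E.T @ E @ x + np.trace(G @ Cw @ G.T)
    mse_opt = (x @ x) / (1.0 + x @ H.T @ np.linalg.inv(Cw) @ H @ x)
    return mse - mse_opt
```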

  22. Minimax regret estimator (1) The minimax regret estimator is the solution of the following problem: $\min_G \max_{x^* T x \le L^2} R(G, x)$, where $R(G, x)$ is the regret defined above.

  23. Minimax regret estimator (2) It can be shown that if $H^* C_w^{-1} H$ and $T$ are jointly diagonalizable, the minimax regret estimator is a diagonal scaling of LS in the common eigenbasis $V$: $\hat{x} = V \,\mathrm{diag}(d_1, \dots, d_m)\, V^* \hat{x}_{LS}$, where the $d_i$ are the solution of some convex optimization problem.

  24. Minimax regret estimator (3) The $d_i$ are found by solving a convex optimization problem, which can be simplified for certain choices of T.

  25. Special Cases • For some choices of $T$ the optimization problem above can be solved in closed form. • If the Euclidean norm is bounded, i.e. $T = I$, we have to solve m simple convex optimization problems.

  26. Discussion • It can be shown by simulations that both the minimax MSE and the minimax regret estimators outperform LS. • In many cases the regret estimator performs better than the minimax MSE estimator. • Both estimators are formulated as optimization problems. • It may be interesting to develop a robust version of the regret estimator.

  27. Agenda • Linear Minimax MSE estimator with weighted norm constraint • Linear Robust Minimax MSE estimator • Linear Minimax regret estimator with weighted norm constraint • Linearly Biased Estimation

  28. Scalar Problem • We want to estimate a scalar deterministic parameter $\theta$ based on its measurements. • Assume we have an MVU (minimum variance unbiased) estimator $\hat{\theta}$ with variance $\mathrm{var}(\hat{\theta})$. • We want to reduce the MSE by allowing a linear bias, $\check{\theta} = m\hat{\theta}$ (so the bias is $(m-1)\theta$), and minimizing the MSE: $\mathrm{MSE}(m) = m^2\,\mathrm{var}(\hat{\theta}) + (m-1)^2\theta^2$.

  29. General Solution • It's clear that the MSE is a convex quadratic in m. • The optimal m is given by $m_{\mathrm{opt}} = \theta^2 / (\theta^2 + \mathrm{var}(\hat{\theta}))$, which depends on the unknown parameter.
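
A quick numerical check of the optimal shrinkage factor ($\theta$ and V are illustrative values):

```python
import numpy as np

# Check that MSE(m) = m^2 V + (m - 1)^2 theta^2 is minimized at
# m_opt = theta^2 / (theta^2 + V).
theta, V = 2.0, 1.0
m_grid = np.linspace(0.0, 1.2, 121)
mse = m_grid**2 * V + (m_grid - 1) ** 2 * theta**2
print(theta**2 / (theta**2 + V))     # ~0.8
print(m_grid[np.argmin(mse)])        # ~0.8
```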

  30. Constant MUSNR • The MUSNR (maximum unbiased signal-to-noise ratio) is defined as $\gamma(\theta) = \theta^2 / \mathrm{var}(\hat{\theta})$. • If it's independent of $\theta$, the MMSE estimator with linear bias can be implemented as $\check{\theta} = \frac{\gamma}{1 + \gamma}\,\hat{\theta}$.

  31. Example: Exponential PDF • We have N IID observations of a random variable with exponential PDF $p(x; \theta) = \frac{1}{\theta} e^{-x/\theta}$, $x \ge 0$. • The MVU is the sample mean $\hat{\theta} = \bar{x}$, with variance $\theta^2 / N$, so the MUSNR is $\gamma = N$. • And the MMSE estimator is $\check{\theta} = \frac{N}{N+1}\,\bar{x}$.
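
A Monte Carlo check of this example (parameter values are arbitrary):

```python
import numpy as np

# The sample mean is the MVU (variance theta^2 / N, so MUSNR = N), and
# shrinking it by N / (N + 1) lowers the MSE for every theta.
rng = np.random.default_rng(1)
theta, N, trials = 3.0, 10, 100_000
samples = rng.exponential(theta, size=(trials, N))
mvu = samples.mean(axis=1)
shrunk = (N / (N + 1)) * mvu
print(((mvu - theta) ** 2).mean())      # ~ theta^2 / N       = 0.90
print(((shrunk - theta) ** 2).mean())   # ~ theta^2 / (N + 1) = 0.82
```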

  32. MUSNR depends on θ • If the MUSNR depends on $\theta$, the MMSE estimator cannot be implemented. • Thus a minimax strategy is employed: maximize the smallest difference between the MSE of the MVU estimator and the MSE of the biased one. • This gives us: • Domination over the MVU, since we perform better even in the worst case. • Admissibility among linearly biased estimators, since we perform as well as possible in the worst case.

  33. Constant Minimum Variance • If the variance of the MVU is constant (denoted by V) and the parameter is bounded, $|\theta| \le U$, we can simplify the problem. • The resulting estimator is $\check{\theta} = \frac{U^2}{U^2 + V}\,\hat{\theta}$. • And its MSE is $\mathrm{MSE}(\theta) = \big(\frac{U^2}{U^2+V}\big)^2 V + \big(\frac{V}{U^2+V}\big)^2 \theta^2 \le \frac{U^2 V}{U^2 + V}$, which is smaller than V for every $|\theta| \le U$.
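
A numerical check of this factor (my reconstruction of the dropped formulas; U and V are illustrative):

```python
import numpy as np

# Minimax shrinkage under constant MVU variance V and a bound |theta| <= U:
# m = U^2 / (U^2 + V), with worst-case MSE U^2 V / (U^2 + V) < V.
U, V = 2.0, 1.0
m_mx = U**2 / (U**2 + V)
worst = m_mx**2 * V + (m_mx - 1) ** 2 * U**2   # MSE at the worst case |theta| = U
print(m_mx, worst)                             # 0.8, 0.8 -- worst MSE below V = 1.0
```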

  34. Example: Gaussian Location • Suppose we have N IID observations of a Gaussian random variable $x \sim \mathcal{N}(\theta, \sigma^2)$. • In this case the MVU is the sample mean and its variance is $\sigma^2 / N$, which does not depend on $\theta$. • The biased estimator, for $|\theta| \le U$, is $\check{\theta} = \frac{U^2}{U^2 + \sigma^2/N}\,\bar{x}$, which has smaller MSE in the given range.
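
And a Monte Carlo check for the Gaussian case, reusing the minimax factor above (all values illustrative):

```python
import numpy as np

# The sample mean has constant variance sigma^2 / N, so the minimax factor
# applies whenever a bound |theta| <= U is known.
rng = np.random.default_rng(2)
theta, sigma, N, U, trials = 1.5, 2.0, 8, 2.0, 100_000
V = sigma**2 / N                                   # 0.5
samples = rng.normal(theta, sigma, size=(trials, N))
mean = samples.mean(axis=1)
biased = (U**2 / (U**2 + V)) * mean
print(((mean - theta) ** 2).mean())    # ~ V = 0.5
print(((biased - theta) ** 2).mean())  # ~ 0.42, smaller since |theta| <= U
```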

  35. The Vector Case • All of these results can be extended to the vector case.

  36. Discussion • Allowing bias can significantly improve the MSE of an estimator. • The restriction is that only linear bias is analysed. • This approach is based on solving optimization problems.

  37. References • Y. C. Eldar, A. Ben-Tal, and A. Nemirovski, "Robust Mean-Squared Error Estimation in the Presence of Model Uncertainties," IEEE Trans. Signal Process., vol. 53, no. 1, pp. 168–181, Jan. 2005. • Y. C. Eldar, A. Ben-Tal, and A. Nemirovski, "Linear Minimax Regret Estimation of Deterministic Parameters with Bounded Data Uncertainties," IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2177–2188, Aug. 2004. • S. Kay and Y. C. Eldar, "Rethinking Biased Estimation: Improving Maximum Likelihood and the Cramér–Rao Bound," IEEE Signal Process. Mag., vol. 25, no. 3, pp. 133–136, May 2008. • Z. Ben-Haim and Y. C. Eldar, "Blind Minimax Estimation," IEEE Trans. Inf. Theory, vol. 53, no. 9, pp. 3145–3157, Sep. 2007.
