Lecture 9 Inexact Theories


Presentation Transcript


  1. Lecture 9 Inexact Theories

  2. Syllabus
  Lecture 01 Describing Inverse Problems
  Lecture 02 Probability and Measurement Error, Part 1
  Lecture 03 Probability and Measurement Error, Part 2
  Lecture 04 The L2 Norm and Simple Least Squares
  Lecture 05 A Priori Information and Weighted Least Squares
  Lecture 06 Resolution and Generalized Inverses
  Lecture 07 Backus-Gilbert Inverse and the Trade-Off of Resolution and Variance
  Lecture 08 The Principle of Maximum Likelihood
  Lecture 09 Inexact Theories
  Lecture 10 Nonuniqueness and Localized Averages
  Lecture 11 Vector Spaces and Singular Value Decomposition
  Lecture 12 Equality and Inequality Constraints
  Lecture 13 L1, L∞ Norm Problems and Linear Programming
  Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
  Lecture 15 Nonlinear Problems: Newton’s Method
  Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
  Lecture 17 Factor Analysis
  Lecture 18 Varimax Factors, Empirical Orthogonal Functions
  Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon’s Problem
  Lecture 20 Linear Operators and Their Adjoints
  Lecture 21 Fréchet Derivatives
  Lecture 22 Exemplary Inverse Problems, incl. Filter Design
  Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
  Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems

  3. Purpose of the Lecture • Discuss how an inexact theory can be represented • Solve the inexact, linear Gaussian inverse problem • Use maximization of relative entropy as a guiding principle for solving inverse problems • Introduce the F-test as a way to determine whether one solution is “better” than another

  4. Part 1: How Inexact Theories can be Represented

  5. How do we generalize the case of an exact theory to one that is inexact?

  6. Exact theory case. [figure: the (model m, datum d) plane with the theory curve d = g(m); the observed datum dobs maps through the theory to the predicted datum dpre and the estimate mest]

  7. To make the theory inexact ... must make the theory probabilistic or fuzzy. [figure: the same construction in the (model m, datum d) plane, with the theory d = g(m) now represented probabilistically]

  8. [figure: three panels in the (model m, datum d) plane showing the a priori p.d.f., the theory, and their combination; the peak of the combined p.d.f. maps to mest]

  9. How do you combine two probability density functions?

  10. How do you combine two probability density functions, so that the information in them is combined ...?

  11. Desirable properties: the order shouldn’t matter; combining something with the null distribution should leave it unchanged; the combination should be invariant under a change of variables.

  12. Answer
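A sketch of the combination rule this slide presents, assuming the Tarantola-Valette style form in which the two p.d.f.s are multiplied and normalized by the null distribution pN:

p_T(x) \;=\; \frac{p_A(x)\, p_g(x)}{p_N(x)}

Multiplication makes the order irrelevant, combining with pN itself leaves a distribution unchanged, and dividing by pN keeps the result invariant under a change of variables, matching the three desirable properties above.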

  13. [figure: two rows of three panels in the (model m, datum d) plane showing the a priori p.d.f. pA, the theory pg, and the total pT; the second row is labeled (D), (E), (F); the maximum likelihood point of pT gives mest and dpre]

  14. “Solution to the inverse problem”: the maximum likelihood point of pT(m,d) (with pN ∝ constant) simultaneously gives mest and dpre.

  15. pT(m,d): the probability that the estimated model parameters are near m and the predicted data are near d. p(m): the probability that the estimated model parameters are near m, irrespective of the value of the predicted data.
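A sketch of the relationship, assuming p(m) is simply the marginal of the total p.d.f. over the predicted data:

p(m) \;=\; \int p_T(m,d)\; dd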

  16. Conceptual problem: p(m) and pT(m,d) do not necessarily have maximum likelihood points at the same value of m.

  17. [figure: the p.d.f. in the (model m, datum d) plane has its maximum likelihood point at mest and dpre, while the marginal p(m) peaks at a different value mest′]

  18. Illustrates the problem in defining a definitive solution to an inverse problem.

  19. Illustrates the problem in defining a definitive solution to an inverse problem. Fortunately, if all distributions are Gaussian, the two points are the same.

  20. Part 2: Solution of the inexact linear Gaussian inverse problem

  21. Gaussian a priori information

  22. Gaussian a priori information: the a priori values of the model parameters and their uncertainty.
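A sketch of the form being annotated, assuming the usual multivariate Gaussian with a priori mean ⟨m⟩ and covariance [cov m]A:

p_A(m) \;\propto\; \exp\!\left[ -\tfrac{1}{2}\,(m - \langle m\rangle)^T\,[\mathrm{cov}\,m]_A^{-1}\,(m - \langle m\rangle) \right]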

  23. Gaussian observations

  24. Gaussian observations: the observed data and their measurement error.
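Similarly, a sketch assuming the observed data dobs carry Gaussian measurement error with covariance [cov d]:

p_A(d) \;\propto\; \exp\!\left[ -\tfrac{1}{2}\,(d - d^{obs})^T\,[\mathrm{cov}\,d]^{-1}\,(d - d^{obs}) \right]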

  25. Gaussian theory

  26. Gaussian theory: a linear theory, with uncertainty in the theory.
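A sketch of the inexact linear theory, assuming the theory error d − Gm is Gaussian with covariance [cov g]:

p_g(m,d) \;\propto\; \exp\!\left[ -\tfrac{1}{2}\,(d - Gm)^T\,[\mathrm{cov}\,g]^{-1}\,(d - Gm) \right]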

  27. Mathematical statement of the problem: find (m,d) that maximizes pT(m,d) = pA(m) pA(d) pg(m,d) and, along the way, work out the form of pT(m,d).

  28. Notational simplification: group m and d into a single vector x = [dT, mT]T; group [cov m]A and [cov d]A into a single matrix; write d − Gm = 0 as Fx = 0 with F = [I, −G].
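Written out under those groupings, a sketch of the notation (the a priori value of x is taken to collect dobs and ⟨m⟩, an assumption consistent with the earlier slides):

x = \begin{bmatrix} d \\ m \end{bmatrix},\quad
\langle x\rangle = \begin{bmatrix} d^{obs} \\ \langle m\rangle \end{bmatrix},\quad
[\mathrm{cov}\,x] = \begin{bmatrix} [\mathrm{cov}\,d]_A & 0 \\ 0 & [\mathrm{cov}\,m]_A \end{bmatrix},\quad
F = [\,I \;\; -G\,],\;\; Fx = d - Gm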

  29. After much algebra, we find pT(x) is a Gaussian distribution with mean and variance

  30. After much algebra, we find pT(x) is a Gaussian distribution with mean and variance: the solution to the inverse problem.
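A sketch of that mean and covariance, assuming the standard result for a Gaussian in x combined with the soft constraint Fx = 0 carrying error covariance [cov g]:

x^{*} = \langle x\rangle - [\mathrm{cov}\,x]\,F^T \left\{ F\,[\mathrm{cov}\,x]\,F^T + [\mathrm{cov}\,g] \right\}^{-1} F\,\langle x\rangle

[\mathrm{cov}\,x^{*}] = [\mathrm{cov}\,x] - [\mathrm{cov}\,x]\,F^T \left\{ F\,[\mathrm{cov}\,x]\,F^T + [\mathrm{cov}\,g] \right\}^{-1} F\,[\mathrm{cov}\,x]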

  31. after pulling mest out of x*
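A sketch of the model-parameter part of x*, under the Gaussian assumptions above; the next three slides comment on this formula:

m^{est} = \langle m\rangle + [\mathrm{cov}\,m]_A\,G^T \left\{ G\,[\mathrm{cov}\,m]_A\,G^T + [\mathrm{cov}\,d]_A + [\mathrm{cov}\,g] \right\}^{-1} \left( d^{obs} - G\,\langle m\rangle \right)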

  32. After pulling mest out of x*: reminiscent of the GT(GGT)-1 minimum length solution.

  33. After pulling mest out of x*: the error in the theory adds to the error in the data.

  34. After pulling mest out of x*: the solution depends on the values of the prior information only to the extent that the model resolution matrix is different from an identity matrix.

  35. And after algebraic manipulation, which also equals a form reminiscent of the (GTG)-1GT least squares solution.
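A sketch of that equivalent form, assuming the usual matrix identity that converts the minimum-length expression into a least-squares one:

m^{est} = \langle m\rangle + \left\{ G^T \left( [\mathrm{cov}\,d]_A + [\mathrm{cov}\,g] \right)^{-1} G + [\mathrm{cov}\,m]_A^{-1} \right\}^{-1} G^T \left( [\mathrm{cov}\,d]_A + [\mathrm{cov}\,g] \right)^{-1} \left( d^{obs} - G\,\langle m\rangle \right)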

  36. Interesting aside: the weighted least squares solution is equal to the weighted minimum length solution.

  37. What did we learn? For the linear Gaussian inverse problem, the inexactness of the theory just adds to the inexactness of the data.

  38. Part 3: Use maximization of relative entropy as a guiding principle for solving inverse problems

  39. from last lecture

  40. Assessing the information content in pA(m): do we know a little about m or a lot about m?

  41. Information Gain, S • −S is called the Relative Entropy
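A sketch of the definition, assuming S is the Kullback-Leibler form measuring pA(m) against the null distribution pN(m):

S = \int p_A(m)\,\ln\!\frac{p_A(m)}{p_N(m)}\;dm, \qquad \text{relative entropy} = -S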

  42. [figure: (A) the a priori p.d.f. pA(m) and the null p.d.f. pN(m) as functions of m; (B) the information gain S as a function of the width σA of pA]

  43. Principle of Maximum Relative Entropy, or if you prefer, Principle of Minimum Information Gain

  44. Find the solution p.d.f. pT(m) that has the largest relative entropy as compared to the a priori p.d.f. pA(m) • or if you prefer • find the solution p.d.f. pT(m) that has the smallest possible new information as compared to the a priori p.d.f. pA(m)

  45. • properly normalized p.d.f. • data is satisfied in the mean, or, the expected value of the error is zero
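Written as constraints on the solution p.d.f. pT(m), a sketch of these two conditions (the second states that the theory Gm reproduces dobs in expectation):

\int p_T(m)\;dm = 1, \qquad \int G m\; p_T(m)\;dm = d^{obs}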

  46. After minimization using the Lagrange multiplier process, pT(m) is Gaussian with maximum likelihood point mest satisfying

  47. After minimization using the Lagrange multiplier process, pT(m) is Gaussian with maximum likelihood point mest satisfying just the weighted minimum length solution.
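A sketch of that weighted minimum length solution, assuming the weight matrix is the inverse a priori covariance [cov m]A−1, as in the earlier weighted minimum length treatment:

m^{est} = \langle m\rangle + [\mathrm{cov}\,m]_A\,G^T \left\{ G\,[\mathrm{cov}\,m]_A\,G^T \right\}^{-1} \left( d^{obs} - G\,\langle m\rangle \right)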

  48. What did we learn? Only that the Principle of Maximum Entropy is yet another way of deriving the inverse problem solutions we are already familiar with

  49. Part 4: F-test as a way to determine whether one solution is “better” than another
