
Environmental Data Analysis with MatLab


Presentation Transcript


  1. Environmental Data Analysis with MatLab Lecture 6: The Principle of Least Squares

  2. SYLLABUS
  Lecture 01 Using MatLab
  Lecture 02 Looking At Data
  Lecture 03 Probability and Measurement Error
  Lecture 04 Multivariate Distributions
  Lecture 05 Linear Models
  Lecture 06 The Principle of Least Squares
  Lecture 07 Prior Information
  Lecture 08 Solving Generalized Least Squares Problems
  Lecture 09 Fourier Series
  Lecture 10 Complex Fourier Series
  Lecture 11 Lessons Learned from the Fourier Transform
  Lecture 12 Power Spectra
  Lecture 13 Filter Theory
  Lecture 14 Applications of Filters
  Lecture 15 Factor Analysis
  Lecture 16 Orthogonal functions
  Lecture 17 Covariance and Autocorrelation
  Lecture 18 Cross-correlation
  Lecture 19 Smoothing, Correlation and Spectra
  Lecture 20 Coherence; Tapering and Spectral Analysis
  Lecture 21 Interpolation
  Lecture 22 Hypothesis testing
  Lecture 23 Hypothesis Testing continued; F-Tests
  Lecture 24 Confidence Limits of Spectra, Bootstraps

  3. purpose of the lecture: estimate model parameters using the principle of least-squares

  4. part 1 the least squares estimation of model parameters and their covariance

  5. the prediction error, e_i = d_i^obs − d_i^pre, motivates us to define an error vector, e = d^obs − d^pre = d^obs − Gm

  6. prediction error in the straight-line case. [Figure: for each observation, d_i^obs is the observed value, d_i^pre the value predicted by the line, and e_i the difference between them; axes: data d versus auxiliary variable x.]

  7. total error: a single number summarizing the error, the sum of squares of the individual errors: E = Σ_i e_i^2 = e^T e

  8. the principle of least-squares: choose the estimate m^est that minimizes the total error E(m) = [d^obs − Gm]^T [d^obs − Gm]

  9. least-squares and probability: suppose that each observation d_i has a Normal p.d.f. with mean d̄_i and variance σ_d^2: p(d_i) ∝ exp{ −(d_i − d̄_i)^2 / (2σ_d^2) }

  10. for uncorrelated data, the joint p.d.f. is just the product of the individual p.d.f.'s: p(d) ∝ exp{ −Σ_i (d_i − d̄_i)^2 / (2σ_d^2) }. The sum in the exponent has the same form as the least-squares formula for E, which suggests a link between probability and least-squares

  11. now assume that Gm predicts the mean of d; with Gm substituted for d̄, the joint p.d.f. becomes p(d) ∝ exp{ −[d − Gm]^T [d − Gm] / (2σ_d^2) } = exp{ −E(m) / (2σ_d^2) }, so minimizing E(m) is equivalent to maximizing p(d)

  12. the principle of least-squares determines the m that makes the observations “most probable”, in the sense of maximizing p(d^obs)

  13. the principle of least-squares determines the model parameters that make the observations “most probable” (provided that the data are Normal); this is the principle of maximum likelihood

  14. a formula for m^est: at the point of minimum error E, ∂E/∂m_i = 0 for every i, so solve this equation for m^est

  15. Result: m^est = [G^T G]^(-1) G^T d

  16. where the result comes from: write the error as E = Σ_i [d_i − Σ_j G_ij m_j]^2, so, by the chain rule, ∂E/∂m_k = −2 Σ_i [d_i − Σ_j G_ij m_j] Σ_j G_ij ∂m_j/∂m_k

  17. ∂m_j/∂m_k is unity when k = j and zero when k ≠ j, since the m's are independent; so in the chain-rule expression, just delete the sum over j and replace j with k: ∂E/∂m_k = −2 Σ_i G_ik [d_i − Σ_j G_ij m_j]

  18. which gives, on setting ∂E/∂m_k = 0 for every k: Σ_i Σ_j G_ik G_ij m_j = Σ_i G_ik d_i, that is, [G^T G] m = G^T d, so m^est = [G^T G]^(-1) G^T d
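
  This result is easy to check numerically. A minimal sketch, with made-up synthetic data (the variable names and noise level are illustrative, not from the lecture):

    % check mest = [G^T G]^(-1) G^T d on synthetic straight-line data
    N = 50;
    x = (1:N)';                         % auxiliary variable
    G = [ones(N,1), x];                 % data kernel: intercept and slope
    mtrue = [1; 0.5];                   % true model parameters
    dobs = G*mtrue + 0.1*randn(N,1);    % observations with Normal noise
    mest = inv(G'*G)*(G'*dobs);         % the least-squares formula, literally
    mest2 = (G'*G)\(G'*dobs);           % same answer via backslash

  The backslash form solves the linear system [G^T G] m = G^T d directly, without forming the explicit inverse, and is the form used in the scripts below.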

  19. covariance of m^est: m^est is a linear function of d of the form m^est = M d, so C_m = M C_d M^T, with M = [G^T G]^(-1) G^T. Assume the data are uncorrelated with uniform variance σ_d^2, so that C_d = σ_d^2 I; then C_m = σ_d^2 [G^T G]^(-1)
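
  The covariance formula can be checked by Monte Carlo simulation: repeat the experiment many times with fresh noise and compare the sample covariance of the estimates against σ_d^2 [G^T G]^(-1). A sketch, reusing G, mtrue, and N from the example above:

    % Monte Carlo check of Cm = σ_d^2 [G^T G]^(-1)
    sd = 0.1;                           % standard deviation of the data
    K = 10000;                          % number of repeated experiments
    mests = zeros(K,2);
    for k = 1:K
        dk = G*mtrue + sd*randn(N,1);   % fresh realization of the noise
        mests(k,:) = ((G'*G)\(G'*dk))'; % least-squares estimate
    end
    Cm_theory = sd^2*inv(G'*G);
    Cm_sample = cov(mests);             % closely matches Cm_theory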

  20. two methods of estimating the variance of the data. Prior estimate: use knowledge of the measurement technique; e.g., the ruler has 1 mm tick marks, so σ_d ≈ ½ mm. Posterior estimate: use the prediction error of the fit, σ_d^2 ≈ (1/N) Σ_i e_i^2

  21. posterior estimates are overestimates when the model is poor. In the formula, reduce N by M, since an M-parameter model can exactly fit N data: σ_d^2 = E / (N − M)

  22. confidence intervals for the estimated model parameters (assuming uncorrelated data of equal variance): σ_mi = √[C_m]_ii, and m_i = m_i^est ± 2σ_mi (95% confidence)

  23. MatLab script for the least squares solution:

    mest = (G'*G)\(G'*d);    % least-squares estimate of the model parameters
    Cm = sd2 * inv(G'*G);    % their covariance; sd2 holds the data variance σ_d^2
    sm = sqrt(diag(Cm));     % standard deviation of each model parameter

  24. part 2 exemplary least squares problems

  25. Example 1: the mean of the data. The model is that every observation equals the same constant, d_i = m_1, so G is a column vector of N ones; the constant will turn out to be the mean

  26. since G^T G = N and G^T d = Σ_i d_i, the estimate is m_1^est = (1/N) Σ_i d_i, the usual formula for the mean; its variance, σ_m1^2 = σ_d^2 / N, decreases with the number of data

  27. combining the formula for the mean with the formula for its covariance gives confidence limits for the mean: m_1 = d̄ ± 2σ_d/√N (95% confidence)
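
  The general least-squares machinery reproduces this familiar result. A sketch with made-up numbers:

    % Example 1 as a least-squares problem: G is a column of ones
    d = [10.1; 9.8; 10.3; 10.0; 9.9];   % some made-up measurements
    N = length(d);
    G = ones(N,1);                      % data kernel for d_i = m_1
    mest = (G'*G)\(G'*d);               % identical to mean(d)
    sd = 0.2;                           % assumed prior standard deviation
    sm = sd/sqrt(N);                    % σ of the estimated mean
    m95 = [mest-2*sm, mest+2*sm];       % 95% confidence interval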

  28. Example 2: fitting a straight line, d_i = m_1 + m_2 x_i, where m_1 is the intercept and m_2 is the slope

  29. for the straight line, G^T G = [ N, Σ_i x_i ; Σ_i x_i, Σ_i x_i^2 ], so [G^T G]^(-1) = 1/(N Σ_i x_i^2 − (Σ_i x_i)^2) [ Σ_i x_i^2, −Σ_i x_i ; −Σ_i x_i, N ] (uses the rule for inverting a 2×2 matrix)

  30. since the off-diagonal elements of [G^T G]^(-1) are proportional to −Σ_i x_i, the intercept and slope are uncorrelated when the mean of x is zero
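
  This property is easy to demonstrate numerically: center the auxiliary variable and the off-diagonal elements of the covariance vanish. A sketch (variable names illustrative):

    % centering x makes intercept and slope uncorrelated
    N = 50;
    x = (1:N)';
    xc = x - mean(x);                   % centered auxiliary variable, mean zero
    G = [ones(N,1), xc];
    Cm = inv(G'*G);                     % covariance, up to the factor σ_d^2
    % Cm(1,2) and Cm(2,1) are zero (to rounding error), so the estimated
    % intercept and slope are uncorrelated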

  31. keep in mind that none of this algebraic manipulation is needed if we just compute using MatLab

  32. Generic MatLab script for least-squares problems:

    mest = (G'*G)\(G'*dobs);        % least-squares estimate
    dpre = G*mest;                  % predicted data
    e = dobs-dpre;                  % prediction error
    E = e'*e;                       % total error
    sigmad2 = E/(N-M);              % posterior estimate of the data variance
    covm = sigmad2 * inv(G'*G);     % covariance of the model parameters
    sigmam = sqrt(diag(covm));      % their standard deviations
    mlow95 = mest - 2*sigmam;       % lower 95% confidence bound
    mhigh95 = mest + 2*sigmam;      % upper 95% confidence bound

  33. Example 3: modeling the long-term trend and annual cycle in the Black Rock Forest temperature data. [Figure: three panels against time t in days: the observed data d(t)^obs, the predicted data d(t)^pre, and the error e(t).]

  34. the model: d(t) = m_1 + m_2 t + m_3 cos(2πt/T_y) + m_4 sin(2πt/T_y), where the first two terms are the long-term trend and the last two are the annual cycle (T_y is the length of a year, in days)

  35. MatLab script to create the data kernel:

    Ty = 365.25;                 % period of the annual cycle, days
    G = zeros(N,4);
    G(:,1) = 1;                  % constant (intercept of the trend)
    G(:,2) = t;                  % linear long-term trend
    G(:,3) = cos(2*pi*t/Ty);     % annual cycle
    G(:,4) = sin(2*pi*t/Ty);
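
  With the kernel built, the generic script of slide 32 completes the fit. A sketch, assuming t and dobs hold the N observation times and temperatures (so M = 4 here):

    % fit the trend-plus-annual-cycle model and estimate uncertainties
    M = 4;
    mest = (G'*G)\(G'*dobs);        % trend and annual-cycle coefficients
    e = dobs - G*mest;              % prediction error
    sigmad2 = (e'*e)/(N-M);         % posterior variance of the data
    covm = sigmad2*inv(G'*G);
    sigmam = sqrt(diag(covm));      % standard error of each coefficient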

  36. prior variance of the data, based on the accuracy of the thermometer: σ_d = 0.01 deg C. Posterior variance of the data, based on the error of the fit: σ_d = 5.60 deg C. Huge difference, since the model includes neither the diurnal cycle nor weather patterns

  37. long-term slope. 95% confidence limits based on the prior variance: m_2 = −0.03 ± 0.00002 deg C / yr. 95% confidence limits based on the posterior variance: m_2 = −0.03 ± 0.00460 deg C / yr. In both cases, the cooling trend is significant, in the sense that the confidence intervals do not include zero or positive slopes.

  38. However, the fit to the data is poor, so the results should be used with caution. More effort needs to be put into developing a better model.

  39. part 3 covariance and the shape of the error surface

  40. solutions within the region of low error are almost as good as m^est. [Figure: contour map of the error E(m) in the (m_1, m_2) plane, with the minimum at m^est; the low-error region spans a large range of m_1 but a small range of m_2. Side panel: E(m) versus a single parameter m_i near m_i^est.] Near the minimum, the error is shaped like a parabola; the curvature of the parabola controls the width of the region of low error
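
  The error surface can be computed by brute force, which makes the low-error region easy to visualize. A sketch, reusing the straight-line G and dobs from the earlier examples:

    % evaluate the error surface E(m1,m2) on a grid around the minimum
    m1v = linspace(0,2,101);            % trial intercepts (illustrative range)
    m2v = linspace(0,1,101);            % trial slopes
    Emap = zeros(101,101);
    for i = 1:101
        for j = 1:101
            e = dobs - G*[m1v(i); m2v(j)];
            Emap(i,j) = e'*e;           % total error of this trial model
        end
    end
    contour(m2v, m1v, Emap);            % contours of E around mest
    xlabel('m_2 (slope)'); ylabel('m_1 (intercept)');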

  41. near the minimum, the Taylor series for the error is E(m) ≈ E(m^est) + ½ Δm^T B Δm, where Δm = m − m^est and B, with elements B_ij = ∂^2E/∂m_i∂m_j evaluated at m^est, is the curvature of the error surface (the first-derivative term vanishes at the minimum)

  42. starting with the formula for the error, E = [d − Gm]^T [d − Gm], we compute its 2nd derivative: ∂^2E/∂m_i∂m_j = 2[G^T G]_ij

  43. but C_m = σ_d^2 [G^T G]^(-1), so C_m = 2σ_d^2 [∂^2E/∂m^2]^(-1): the covariance of the model parameters is inversely proportional to the curvature of the error surface
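
  This relation can be verified with a finite difference. A sketch, perturbing one parameter of the straight-line example and comparing the numerical curvature with the analytic value 2[G^T G]_kk:

    % finite-difference curvature of E at the minimum
    GtG = G'*G;
    mest = GtG\(G'*dobs);
    k = 2;                              % examine the slope parameter
    h = 1e-4;                           % small perturbation
    dm = zeros(2,1); dm(k) = h;
    E0 = norm(dobs - G*mest)^2;         % error at the minimum
    Ep = norm(dobs - G*(mest+dm))^2;    % error at perturbed models
    Em = norm(dobs - G*(mest-dm))^2;
    curv = (Ep - 2*E0 + Em)/h^2;        % numerical ∂^2E/∂m_k^2
    % curv agrees with 2*GtG(k,k), the curvature of the error surface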

  44. the covariance of the least squares solution is expressed in the shape of the error surface. [Figure: two plots of E(m) versus m_i around m_i^est: a broad parabola (large variance) and a narrow one (small variance).]
