
Seismology Part X: Interpretation of Seismograms


Presentation Transcript


  1. Seismology Part X: Interpretation of Seismograms

  2. Seismogram Interpretation How do we interpret the information we get from a recording of ground motion? What kinds of information can we extract? How do we figure out what phases we are looking at? These guys look pretty random!

  3. Until we have a model, we have to look at many seismograms and look for coherent energy.

  4. Here’s another example from an IRIS poster. This is essentially how the first models of the Earth’s interior were generated by Harold Jeffreys (1891-1989) and Keith Bullen.

  5. Once we identify phases, we can plot them on a travel time vs. distance (T-Δ) graph and see if we can match their arrival times with a simple model. It turns out we can explain most of what we see with a very simple crust-mantle-core 1D structure!

  6. The simple 1D model is shown here, along with the basics of seismic phase nomenclature.

  7. Local Body Wave Phases
  • Direct waves: P & S
  • Short distance: Pg (granite)
  • Critically refracted (head) waves: Pn (Moho), P* (Conrad; sometimes Pb, for basaltic)
  • Reflected from the Moho: PmP (and PmS, etc.)

  8. Basics of Nomenclature
  • Reflected close to the source (depth phase): pP, pS, sP, sS
  • Reflected at a distance: PP & SS (and additional multiples)
  • Reflected from the CMB: c, as in PcP or ScS
  • Outer core P wave: K (Kernwellen, German for "core waves")
  • Inner core P wave: I
  • Inner core S wave: J
  • Reflection at the outer-inner core boundary: i (as in PKiKP)
  • Ocean wave in the SOFAR channel: T

  9. Example of T phase generation in the SOFAR channel

  10. Paths of P waves in the Earth. Notice the refractions at the CMB that cause a P shadow zone.

  11. Penetration of the Shadow zone by Refractions from the Inner Core

  12. Details of the current 1D model of the Earth. Note the phase transitions in the upper mantle.

  13. Interpretation of travel time curves (T vs. Δ)
  • Locating earthquakes:
  1. Triangulation
  2. General inverse problem (below)
  • Grid search (a minimal sketch follows below)
  • Tomography: explaining the remainder with wavespeed variations
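
To make the grid-search idea concrete, here is a minimal sketch, not from the slides: it scans trial epicenters on a grid and keeps the one that minimizes the travel time residual. The station coordinates, the uniform wavespeed, and the picked arrival times are hypothetical values chosen only for illustration.

```python
# Grid-search epicenter location (illustrative sketch; all numbers hypothetical).
import numpy as np

v = 6.0                                    # assumed uniform P wavespeed, km/s
stations = np.array([[0.0, 0.0],           # station (x, y) positions in km
                     [50.0, 10.0],
                     [20.0, 60.0],
                     [-30.0, 40.0]])
true_src, true_t0 = np.array([12.0, 25.0]), 4.0
t_obs = true_t0 + np.linalg.norm(stations - true_src, axis=1) / v   # synthetic picks

best, best_misfit = None, np.inf
for x in np.arange(-50.0, 51.0, 1.0):      # scan every candidate epicenter
    for y in np.arange(-50.0, 51.0, 1.0):
        t_pred = np.linalg.norm(stations - np.array([x, y]), axis=1) / v
        t0 = np.mean(t_obs - t_pred)       # best origin time for this trial point
        misfit = np.sqrt(np.mean((t_obs - t0 - t_pred) ** 2))
        if misfit < best_misfit:
            best, best_misfit = (x, y, t0), misfit

print("grid-search epicenter (x, y) and origin time:", best)
```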

  14. Waveforms: Source processes and details of structure.

  15. How to model just about anything. In general, we can formulate a relationship between the model m and the observables d as: g(m) = d

  16. We can solve the above by taking a guess m0 of m and expanding g(m) about this guess: g(m) ≈ g(m0) + (∂g/∂m)|m0 Δm = d. Then G Δm = d − g(m0), or G Δm = R, where G is a matrix containing the partial derivatives ∂gi/∂mj evaluated at m0. G is an M x N matrix, where M is the number of observations (rows) and N is the number of model variables (columns), and R is an M x 1 vector of residuals. The idea is to solve the above for the perturbation Δm, add it to m0, and then repeat the operation as often as is deemed (statistically) useful.
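
As a concrete illustration of this iterative linearized scheme, here is a minimal sketch, not from the slides: a toy forward function g, a finite-difference matrix of partial derivatives G, and repeated solves of G Δm = R. The forward model, starting guess, and synthetic data are all made up for demonstration.

```python
# Iterative linearized inversion (Gauss-Newton style) on a toy problem.
import numpy as np

def g(m):
    """Hypothetical forward problem: predict data from model m."""
    x = np.linspace(0.0, 1.0, 20)
    return m[0] * np.exp(-m[1] * x)

def jacobian(m, eps=1e-6):
    """Finite-difference partial derivatives dg_i/dm_j (the matrix G)."""
    G = np.zeros((g(m).size, m.size))
    for j in range(m.size):
        dm = np.zeros_like(m); dm[j] = eps
        G[:, j] = (g(m + dm) - g(m - dm)) / (2.0 * eps)
    return G

d = g(np.array([2.0, 3.0]))          # synthetic "observed" data
m = np.array([1.0, 1.0])             # starting guess m0
for _ in range(10):
    R = d - g(m)                     # residual vector
    G = jacobian(m)
    dm, *_ = np.linalg.lstsq(G, R, rcond=None)   # solve G dm = R in the least squares sense
    m = m + dm                       # update the model and repeat
print("recovered model:", m)         # should approach [2, 3]
```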

  17. How to do this? Let's form the matrix S from G and its complex conjugate transpose as:
  S = | 0   G^T |
      | G   0   |
  Note that S is a square (N + M) x (N + M) matrix and that S' = S, which means that there exists an orthogonal set of eigenvectors w and real eigenvalues λ such that S w = λ w. We solve this eigenvalue problem by noting that non-trivial solutions to (S − λI) w = 0

  18. will occur only if the determinant of the matrix in the parentheses vanishes: det(S − λI) = 0. In general there will be (N + M) eigenvalues. Now, each eigenvector w will have N + M components, and it will be useful to consider w as composed of an N-dimensional vector v and an M-dimensional vector u: w = (v, u). Thus, we arrive at the coupled equations: G v = λ u and G^T u = λ v.
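
A quick numerical check of this construction, not from the slides, using a small random G: the eigenvalues of S come in ± pairs whose magnitudes are the singular values of G, and the remaining eigenvalues are zero.

```python
# Eigenvalues of the block matrix S = [[0, G^T], [G, 0]] vs. singular values of G.
import numpy as np

rng = np.random.default_rng(0)
M, N = 5, 3                                   # M observations, N model parameters
G = rng.standard_normal((M, N))

S = np.block([[np.zeros((N, N)), G.T],        # symmetric (N+M) x (N+M) matrix
              [G, np.zeros((M, M))]])
eig = np.linalg.eigvalsh(S)

print("eigenvalues of S:     ", np.round(eig, 4))
print("singular values of G: ", np.round(np.linalg.svd(G, compute_uv=False), 4))
# The p = 3 nonzero eigenvalues appear as +/- pairs; the other M - N = 2 are zero.
```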

  19. Note that we can change the sign of λ and have (vi, −ui) be a solution as well. Thus, the nonzero eigenvalues come in p pairs ±λi. For the remaining zero eigenvalues we have G v = 0 and G^T u = 0; in other words, u and v are independent. Now, note that in the non-zero case the coupled equations give G G^T ui = λi^2 ui and G^T G vi = λi^2 vi. G G^T and G^T G are both Hermitian, so each of U and V forms an orthogonal set of eigenvectors with real eigenvalues. After normalization, we can then write:

  20. V^T V = V V^T = I and U^T U = U U^T = I, where V is an N x N matrix of the v eigenvectors and U is an M x M matrix of the u eigenvectors. We say that U spans the data space, and V spans the model space. In the case of zero eigenvalues, we can divide the U and V matrices up into "p" space and "o" space: U = (Up, Uo), V = (Vp, Vo). Because of orthogonality, Up^T Up = I and Vp^T Vp = I,

  21. BUT Up Up^T ≠ I and Vp Vp^T ≠ I. In this case, we write: G V = U Λ, where Λ is the diagonal matrix of the eigenvalues λi, or, in sum, G vi = λi ui for each eigenvector pair.

  22. Since V V^T = I, we have G = U Λ V^T = Up Λp Vp^T. This is called the Singular Value Decomposition of G, and it shows that we can reconstruct G using just the "p" part. Uo and Vo are "blind spots" not illuminated by G. NB: for overdetermined problems, we can further subdivide U into a U1 and U2 space, where U2 has extra information not required by the problem at hand. This can be very useful as a null operator for some problems.
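
The following small numpy sketch, not from the slides, verifies this with a random rank-deficient G: the matrix is rebuilt exactly from Up, Λp, and Vp alone, while the leftover columns of U and V span the Uo and Vo blind spots.

```python
# Reconstructing G from the "p" part of its singular value decomposition.
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))   # 6x4 matrix of rank 2

U, lam, VT = np.linalg.svd(G)            # full SVD: U is 6x6, VT is 4x4
p = np.sum(lam > 1e-10 * lam[0])         # number of nonzero singular values
Up, lam_p, Vp = U[:, :p], lam[:p], VT[:p, :].T

G_p = Up @ np.diag(lam_p) @ Vp.T         # rebuild G from the p part only
print("p =", p, " max reconstruction error:", np.abs(G - G_p).max())
# U[:, p:] spans Uo and VT[p:, :].T spans Vo -- the parts G never touches.
```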

  23. Since Uo^T G m = Uo^T Up Λp Vp^T m = 0, the prediction (G m = d) will have no component in Uo space, only in Up space. If the data have some component in Uo space, then there is no way it can be explained by G! It is thus a source of discrepancy between d and G m.

  24. The Generalized Inverse. Note that if there are no zero singular values, then we can write down the inverse of G immediately as: G^-1 = V Λ^-1 U^T, since G^-1 G = V Λ^-1 U^T U Λ V^T = V Λ^-1 Λ V^T = V V^T = I. In the case of zero singular values, we can try using the p space only, in which case we have the generalized inverse operator: Gg^-1 = Vp Λp^-1 Up^T. Let's see what happens when we apply this.
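
Here is a minimal sketch, not from the slides, of building this operator from a truncated SVD; the random G is illustrative, and the result is compared against numpy's built-in pseudoinverse, which implements the same idea.

```python
# Generalized inverse Gg^-1 = Vp @ diag(1/lambda_p) @ Up^T from a truncated SVD.
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 4))   # 6x4 matrix of rank 3

U, lam, VT = np.linalg.svd(G)
p = np.sum(lam > 1e-10 * lam[0])             # treat tiny singular values as zero
Up, lam_p, Vp = U[:, :p], lam[:p], VT[:p, :].T

Gg_inv = Vp @ np.diag(1.0 / lam_p) @ Up.T    # the generalized inverse operator
print("matches np.linalg.pinv:", np.allclose(Gg_inv, np.linalg.pinv(G, rcond=1e-10)))
```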

  25. Suppose that there is a Uo space but no Vo space. In this case
  G^T G = Vp Λp Up^T Up Λp Vp^T = Vp Λp^2 Vp^T
  has an exact inverse:
  (G^T G)^-1 = Vp Λp^-2 Vp^T
  Then we can write G^T G m = G^T d, and
  m = (G^T G)^-1 G^T d = Vp Λp^-2 Vp^T Vp Λp Up^T d = Vp Λp^-1 Up^T d = Gg^-1 d

  26. Note that the generalized inverse gives the least squares solution, defined as the minimum of the sum of squared residuals: min |d − G m|^2. Let mg = Gg^-1 d. Then
  d − G mg = d − Up Λp Vp^T Vp Λp^-1 Up^T d = d − Up Up^T d
  Thus
  Up^T (d − G mg) = Up^T (d − Up Up^T d) = Up^T d − Up^T d = 0,
  which means that there is no component of the residual vector in Up space.
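
A quick numerical check, not from the slides, that the generalized inverse does give the least squares solution and leaves a residual with no Up component; the overdetermined G and the data d are random illustrative values.

```python
# Generalized inverse solution vs. least squares, and the residual's Up component.
import numpy as np

rng = np.random.default_rng(3)
G = rng.standard_normal((8, 3))                 # 8 observations, 3 parameters (full rank)
d = rng.standard_normal(8)

m_g = np.linalg.pinv(G) @ d                     # mg = Gg^-1 d
m_ls, *_ = np.linalg.lstsq(G, d, rcond=None)    # direct least squares solution
print("same solution:", np.allclose(m_g, m_ls))

U, lam, VT = np.linalg.svd(G, full_matrices=True)
Up = U[:, :3]
print("Up^T (d - G mg) ~ 0:", np.allclose(Up.T @ (d - G @ m_g), 0.0))
```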

  27. Also, Uo^T G mg = 0, and so there is no component of the predicted data in Uo space. Imagine the total data space spanned by Uo and Up. The generalized inverse explains all the data that project onto Up, and the residual (d − G mg) projects onto Uo. What if there is no Uo space but a Vo space exists? Let's try our mg solution again and see what happens: mg = Gg^-1 d, so G mg = G Gg^-1 d = Up Λp Vp^T Vp Λp^-1 Up^T d = Up Up^T d = d

  28. So, the solution mg will satisfy d with a model restricted to Vp space. We are free to add on any components of Vo space we like to mg to generate a model m - these will have no effect on the fit to the data. Note however that any such addition will mean a larger net change in m. Thus mg is our minimum step solution out of all possible solutions (cf. Occam's razor!). Finally, when there are both Vo and Uo spaces, the generalized inverse operator will both minimize the model step and the residual. Pretty good, eh?

  29. Resolution and Error. Let's compare the "real" solution m to our generalized inverse estimate of it, mg. Recall that mg = Gg^-1 d and G m = d, so
  mg = Gg^-1 G m = Vp Λp^-1 Up^T Up Λp Vp^T m = Vp Vp^T m
  Thus, if there is no Vo space, Vp Vp^T = I and mg and m are the same. In the event of a Vo space, mg will be a smoothed version of m. We call the product Vp Vp^T the Model Resolution Matrix.
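
A minimal sketch, not from the slides, of the model resolution matrix for a rank-deficient G (all numbers here are hypothetical): with a Vo space present, the noise-free estimate mg is exactly the true model multiplied by Vp Vp^T.

```python
# Model resolution matrix R = Vp Vp^T for a rank-deficient G.
import numpy as np

rng = np.random.default_rng(4)
G = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))   # 4 parameters, rank 2

U, lam, VT = np.linalg.svd(G)
p = np.sum(lam > 1e-10 * lam[0])
Vp = VT[:p, :].T

R = Vp @ Vp.T                                      # model resolution matrix
m_true = np.array([1.0, -2.0, 0.5, 3.0])           # hypothetical "true" model
m_g = np.linalg.pinv(G, rcond=1e-10) @ (G @ m_true)  # noise-free generalized-inverse estimate
print("mg equals R @ m_true:", np.allclose(m_g, R @ m_true))
print("R is not the identity:\n", np.round(R, 3))
```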

  30. How about in data space? In this case, we compare the actual data d with our predicted data dg: mg = Gg^-1 d, so
  dg = G mg = G Gg^-1 d = Up Λp Vp^T Vp Λp^-1 Up^T d = Up Up^T d
  Thus, in the absence of Uo space, the prediction matches the observed data. If Uo exists, then dg is a weighted average of d, and a discrepancy exists. In this case, we say that parts of d cannot be fit with the model; only certain combinations of d can be fit.

  31. Finally, we can estimate how data uncertainties translate into model uncertainties. Since mg = Gg^-1 d, we write the covariance of these terms as:
  <mg mg^T> = Gg^-1 <d d^T> (Gg^-1)^T
  If all the uncertainties in d are statistically independent and equal to a constant variance σd^2, then
  <mg mg^T> = σd^2 Gg^-1 (Gg^-1)^T = σd^2 Vp Λp^-1 Up^T Up Λp^-1 Vp^T = σd^2 Vp Λp^-2 Vp^T

  32. Note from above that (G^T G)^-1 = Vp Λp^-2 Vp^T, and so <mg mg^T> = σd^2 (G^T G)^-1. Note that the covariance goes up as the singular values get small, so including the associated eigenvectors could be very unstable. There are a couple of ways around this. One is just not to use them (i.e., enforce a cutoff below which a singular value is defined to be zero). The other is damping. To talk about damping quantitatively we need to do some more statistics.
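
To illustrate the instability and the cutoff remedy, here is a minimal sketch, not from the slides: a G with two nearly dependent columns has one tiny singular value, and the model variance σd^2 Vp Λp^-2 Vp^T explodes unless that singular value is zeroed out. The value of σd, the matrix, and the cutoff threshold are all made-up illustrative values.

```python
# Model covariance sigma_d^2 Vp Lambda_p^-2 Vp^T, with and without a singular value cutoff.
import numpy as np

rng = np.random.default_rng(6)
G = rng.standard_normal((10, 4))
G[:, 3] = G[:, 2] + 1e-4 * rng.standard_normal(10)   # nearly dependent columns -> tiny singular value

sigma_d = 0.1
U, lam, VT = np.linalg.svd(G, full_matrices=False)

def model_covariance(cutoff):
    keep = lam > cutoff * lam[0]                      # singular values below the cutoff are treated as zero
    Vp, lam_p = VT[keep, :].T, lam[keep]
    return sigma_d ** 2 * Vp @ np.diag(lam_p ** -2) @ Vp.T

print("largest model variance, no cutoff:   %.3e" % model_covariance(0.0).diagonal().max())
print("largest model variance, 1e-3 cutoff: %.3e" % model_covariance(1e-3).diagonal().max())
```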
