
Propagation of Errors (Chapter 3, Taylor)



Presentation Transcript


  1. Propagation of Errors (Chapter 3, Taylor)

Introduction
Example: Suppose we measure the current (I) and resistance (R) of a resistor. Ohm's law relates V and I: V = IR. If we know the uncertainties (e.g. standard deviations) in I and R, what is the uncertainty in V?
More formally, given a functional relationship between several measured variables (x, y, z), Q = f(x, y, z), what is the uncertainty in Q if the uncertainties in x, y, and z are known? To answer this question we use a technique called Propagation of Errors.
Usually when we talk about uncertainties in a measured variable such as x, we write x ± σ. In most cases we assume that the uncertainty is "Gaussian" in the sense that 68% of the time we expect the "true" (but unknown) value of x to lie in the interval [x − σ, x + σ]. BUT not all measurements can be represented by Gaussian distributions (more on that later)!

Propagation of Errors Formula
To calculate the variance in Q as a function of the variances in x and y we use the following:
  σ_Q² = (∂Q/∂x)² σ_x² + (∂Q/∂y)² σ_y² + 2 (∂Q/∂x)(∂Q/∂y) σ_xy,
with the derivatives evaluated at the average values. If the variables x and y are uncorrelated, then σ_xy = 0 and the last term in the above equation is zero.
We can derive the above formula as follows. Assume we have several measurements of the quantities x (x₁, x₂, …, x_N) and y (y₁, y₂, …, y_N). The averages of x and y are:
  μ_x = (1/N) Σᵢ xᵢ  and  μ_y = (1/N) Σᵢ yᵢ.
P416 Lecture 4
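A quick numerical check of the uncorrelated-errors formula applied to the Ohm's law example; the sketch below uses illustrative values for I, R and their uncertainties (my assumptions, not numbers from the lecture):

```python
import numpy as np

# Propagation of errors for V = I*R with uncorrelated I and R:
#   sigma_V^2 = (dV/dI)^2 * sigma_I^2 + (dV/dR)^2 * sigma_R^2
#             = R^2 * sigma_I^2 + I^2 * sigma_R^2
# Illustrative (assumed) measurement values:
I, sigma_I = 1.0, 0.1      # amps
R, sigma_R = 10.0, 1.0     # ohms

V = I * R
sigma_V = np.sqrt(R**2 * sigma_I**2 + I**2 * sigma_R**2)
print(f"V = {V:.1f} +/- {sigma_V:.1f} volts")   # V = 10.0 +/- 1.4 volts
```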

  2. Derivation for uncorrelated errors
• Define Q̄ = f(μ_x, μ_y), i.e. Q evaluated at the average values.
• Expand Qᵢ = f(xᵢ, yᵢ) about the average values:
  Qᵢ ≈ Q̄ + (∂Q/∂x)(xᵢ − μ_x) + (∂Q/∂y)(yᵢ − μ_y) + higher-order terms.
• Assume the measured values are close to the average values, and neglect the higher-order terms. The variance of Q is then
  σ_Q² = (1/N) Σᵢ (Qᵢ − Q̄)² ≈ (∂Q/∂x)² σ_x² + (∂Q/∂y)² σ_y² + 2 (∂Q/∂x)(∂Q/∂y) · (1/N) Σᵢ (xᵢ − μ_x)(yᵢ − μ_y).
  Since the derivatives are evaluated at the average values (μ_x, μ_y) we can pull them out of the summation.
• If the measurements are uncorrelated, the summation in the last term of the above equation is zero, leaving
  σ_Q² ≈ (∂Q/∂x)² σ_x² + (∂Q/∂y)² σ_y².
P416 Lecture 4
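The first-order expansion above can be sanity-checked with a small Monte Carlo: draw many uncorrelated (x, y) pairs, compute Q for each, and compare the sample variance of Q with the propagation formula. A minimal sketch, with Q = x·y and Gaussian inputs chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_x, sig_x = 10.0, 0.5     # illustrative values
mu_y, sig_y = 5.0, 0.2
N = 1_000_000

# Uncorrelated Gaussian "measurements" of x and y.
x = rng.normal(mu_x, sig_x, N)
y = rng.normal(mu_y, sig_y, N)

Q = x * y                   # Q = f(x, y) = x*y
sample_var = Q.var()        # variance of the simulated Q values

# First-order propagation: (dQ/dx)^2 sig_x^2 + (dQ/dy)^2 sig_y^2,
# with the derivatives evaluated at the average values (mu_x, mu_y).
prop_var = (mu_y * sig_x)**2 + (mu_x * sig_y)**2

print(sample_var, prop_var) # both close to 10.25
```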

  3. Correlated errors
If x and y are correlated, define σ_xy as:
  σ_xy = (1/N) Σᵢ (xᵢ − μ_x)(yᵢ − μ_y).
Example: Power in an electric circuit, P = I²R.
Let I = 1.0 ± 0.1 A and R = 10. ± 1.0 Ω, so P = 10 W. Calculate the variance in the power using propagation of errors, assuming I and R are uncorrelated:
  σ_P² = (2IR)² σ_I² + (I²)² σ_R² = (20 × 0.1)² + (1 × 1.0)² = 5 W², so P = 10 ± 2 W.
If the true value of the power was 10 W and we measured it many times with an uncertainty (σ) of ± 2 W, and Gaussian statistics apply, then 68% of the measurements would lie in the range [8, 12] W.
Sometimes it is convenient to put the above calculation in terms of relative errors:
  (σ_P/P)² = 4 (σ_I/I)² + (σ_R/R)² = 0.04 + 0.01.
In this example the uncertainty in the current dominates the uncertainty in the power! Thus the current must be measured more precisely if we want to reduce the uncertainty in the power.
P416 Lecture 4
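The power calculation can be reproduced in a few lines; the relative-error form makes it obvious that the current term dominates. A sketch using the numbers from this slide:

```python
import numpy as np

I, sigma_I = 1.0, 0.1      # amps
R, sigma_R = 10.0, 1.0     # ohms

P = I**2 * R               # 10 watts

# Propagation of errors for P = I^2 * R, assuming I and R are uncorrelated:
#   sigma_P^2 = (2IR)^2 sigma_I^2 + (I^2)^2 sigma_R^2
sigma_P = np.sqrt((2*I*R * sigma_I)**2 + (I**2 * sigma_R)**2)

# Relative-error form: (sigma_P/P)^2 = 4 (sigma_I/I)^2 + (sigma_R/R)^2
term_I = 4 * (sigma_I / I)**2      # 0.04 -> the current term dominates
term_R = (sigma_R / R)**2          # 0.01

print(f"P = {P:.0f} +/- {sigma_P:.1f} W")   # P = 10 +/- 2.2 W
print(term_I, term_R)
```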

  4. The error in the mean
It can be shown that if a function is of the form f(x, y, z) = xᵃyᵇzᶜ, then the relative variance of f(x, y, z) is:
  (σ_f/f)² = a² (σ_x/x)² + b² (σ_y/y)² + c² (σ_z/z)²   (a generalization of Eq. 3.18).
Example: The error in the average ("error in the mean"). The average of several measurements, each with the same uncertainty σ, is given by:
  μ = (1/N)(x₁ + x₂ + … + x_N), so σ_μ = σ/√N.
We can determine the mean better by combining measurements, but the precision only increases as the square root of the number of measurements.
• Do not confuse σ_μ with σ!
• σ is related to the width of the pdf (e.g. Gaussian) that the measurements come from.
• σ does not get smaller as we combine measurements.
The problem with the Propagation of Errors technique: in calculating the variance using propagation of errors we usually assume the error in the measured variable (e.g. x) is Gaussian. BUT even if x is described by a Gaussian distribution, f(x) may not be described by a Gaussian distribution!
P416 Lecture 4
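The 1/√N behaviour of the error in the mean is easy to see by simulation: the spread of the individual measurements stays at σ, while the spread of the averages shrinks as σ/√N. A minimal sketch with assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 10.0, 2.0        # assumed true value and per-measurement error
N = 25                       # measurements combined into each average
trials = 100_000             # number of repeated "experiments"

# Each row is one experiment of N measurements; average each row.
data = rng.normal(mu, sigma, size=(trials, N))
means = data.mean(axis=1)

print(data.std())            # ~2.0 : sigma does not shrink
print(means.std())           # ~0.4 : sigma_mu = sigma / sqrt(N) = 2/5
```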

  5. What does the standard deviation that we calculate from propagation of errors mean?
Example: The new distribution is Gaussian.
• Let y = Ax, with A a constant and x a Gaussian variable. Then μ_y = A μ_x and σ_y = A σ_x.
• Let the probability distribution for x be Gaussian. Then the new probability distribution for y, p(y, μ_y, σ_y), is also described by a Gaussian.
[Figure: dN/dy vs y for y = 2x with x = 10 ± 2, so σ_y = 2σ_x = 4. Start with a Gaussian with μ = 10, σ = 2; get another Gaussian with μ = 20, σ = 4.]
P416 Lecture 4
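A quick simulation of the y = 2x case in the figure confirms that a linear transformation of a Gaussian variable is still Gaussian, with mean and width scaled by the same constant (a sketch, using the figure's numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10.0, 2.0, 1_000_000)   # Gaussian with mu = 10, sigma = 2

y = 2.0 * x                            # linear transform y = Ax with A = 2
print(y.mean(), y.std())               # ~20, ~4: mu_y = A*mu_x, sigma_y = A*sigma_x

# A Gaussian has zero skewness; y's sample skewness stays ~0, consistent
# with the transformed distribution still being Gaussian.
skew = ((y - y.mean())**3).mean() / y.std()**3
print(skew)                            # ~0
```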

  6. Example: When the new distribution is non-Gaussian: y = 2/x.
• The transformed probability distribution function for y does not have the form of a Gaussian.
Unphysical situations can arise if we use the propagation of errors results blindly!
Example: Suppose we measure the volume of a cylinder: V = πR²L.
• Let R = 1 cm exactly, and L = 1.0 ± 0.5 cm.
• Using propagation of errors: σ_V = πR²σ_L = π/2 cm³, so V = π ± π/2 cm³.
• If the error on V (σ_V) is to be interpreted in the Gaussian sense, then there is a finite probability (≈ 3%) that the volume V is < 0, since V is only 2σ away from 0! Clearly this is unphysical! Care must be taken in interpreting the meaning of σ_V.
[Figure: dN/dy vs y for y = 2/x with x = 10 ± 2, so σ_y = 2σ_x/x² = 0.04. Start with a Gaussian with μ = 10, σ = 2; DO NOT get another Gaussian! Get a pdf with μ = 0.2, σ = 0.04. This new pdf has longer tails than a Gaussian pdf: Prob(y > μ_y + 5σ_y) ≈ 5×10⁻³, versus ≈ 3×10⁻⁷ for a Gaussian.]
P416 Lecture 4
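The heavy tails of the transformed distribution can be seen directly by simulating y = 2/x, following the figure on this slide (a sketch; the exact tail fraction depends on the sample):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10.0, 2.0, 1_000_000)   # Gaussian with mu = 10, sigma = 2

y = 2.0 / x                            # non-linear transform: y is NOT Gaussian
mu_y, sigma_y = 0.2, 0.04              # propagation-of-errors estimates: 2/mu_x, 2*sigma_x/mu_x^2

# Fraction of the sample beyond mu_y + 5*sigma_y: ~6e-3 here, the same order
# as the slide's 5e-3, versus ~3e-7 for a true Gaussian.
tail = (y > mu_y + 5 * sigma_y).mean()
print(tail)
```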
