
Combining averages and single measurements in a lognormal model



Presentation Transcript


  1. Combining averages and single measurements in a lognormal model Dr. Nagaraj K. Neerchal and Justin Newcomer Department of Mathematics and Statistics University of Maryland, Baltimore County 1000 Hilltop Circle, Baltimore, MD 21250

  2. Motivating Example
  • Goal: to develop a protocol (methodology) for obtaining confidence bounds on the “mean emissions” for each welding process and rod type combination, incorporating all of the data
  • Three welding processes
  • Three rod types
  • Multiple sources of data
    • Some report individual measurements
    • Some report only averages, without the original observations

  3. The Data

  4. Traditional Approaches
  • Assume normality?
    • Sample sizes are very small for certain combinations
    • Here the bounds obtained assuming normality give meaningless results (e.g., negative lower bounds for emissions, which cannot be negative)
  • Transform the data to normality?
    • In environmental studies, particularly with concentration measurements, the data most often tend to be skewed, so there is a temptation to use the lognormal model
    • It is hard to transform the confidence bounds back to the original scale (the mean of the log is not the same as the log of the mean!)

  5. Traditional Approaches
  • Weighted regression?
    • The estimates have good properties in general (e.g., they are best linear unbiased estimates)
    • But the confidence bounds are sensitive to the normality assumption, especially when the sample sizes are small, as in our case
  • Nonparametric approaches?
    • Nonparametric approaches usually use ranks; when only averages are reported, the rank information is lost entirely, so the reported means cannot be incorporated into nonparametric approaches

  6. The Data – In General
  • Individual observations are available for some groups; for the remaining groups the raw observations are not available and only the averages are reported

  7. The Setup
  • Our goal is to estimate the mean and variance of a lognormal population under the following setup
  • Consider several groups of independent lognormal observations, where the number of observations may differ across groups
  • The observations for the first group are available, but for the remaining k groups only the average of the observations (i.e., the group mean) is available
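A minimal formalization of this setup (the notation X_{ij}, n_j is chosen here for concreteness; the original slide displayed the equations as images):

```latex
% Independent lognormal observations in k+1 groups
X_{ij} \sim \mathrm{LN}(\mu, \sigma^2), \qquad i = 1, \dots, n_j, \quad j = 1, \dots, k+1 .

% Available data: the individual observations of group 1,
% and only the averages of the remaining k groups
\text{observed: } \; X_{11}, \dots, X_{1 n_1}
\quad \text{and} \quad
\bar{X}_j = \frac{1}{n_j} \sum_{i=1}^{n_j} X_{ij}, \quad j = 2, \dots, k+1 .
```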

  8. Normality Approach – Large Sample
  • In practice it is common to assume normality when the sample sizes are large
  • In this case the sample means and sample variances are sufficient statistics, and therefore the individual observations are not needed

  9. Normality Approach – Large Sample
  • Assume the nj’s are large
  • Then, by the central limit theorem, each group mean is approximately normally distributed
  • The likelihood then reduces to a product of normal densities of the group means
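The large-sample approximation can be written out explicitly. The lognormal mean and variance below are standard, and the normal approximation of the group means follows from the central limit theorem:

```latex
% Mean and variance of a LN(mu, sigma^2) random variable
\theta = E[X_{ij}] = e^{\mu + \sigma^2/2}, \qquad
\eta^2 = \mathrm{Var}(X_{ij}) = \left(e^{\sigma^2} - 1\right) e^{2\mu + \sigma^2} .

% CLT approximation for a group mean based on n_j observations
\bar{X}_j \;\dot\sim\; N\!\left(\theta, \; \eta^2 / n_j\right),
\qquad \text{so} \qquad
L(\theta, \eta^2) \approx \prod_{j} \sqrt{\frac{n_j}{2\pi \eta^2}}
\exp\!\left\{ -\frac{n_j \left(\bar{X}_j - \theta\right)^2}{2 \eta^2} \right\} .
```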

  10. Normality Approach – Large Sample
  • Setting the derivatives of the log-likelihood to zero gives the normal equations
  • Solving these gives the MLEs of the mean and variance
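Under the approximate normal likelihood, the normal equations and their solutions take the standard weighted-mean form. The expressions below are a reconstruction under that approximation (writing θ and η² for the population mean and variance, and G for the number of group means; this notation is chosen here):

```latex
% Normal equations for the approximate normal likelihood
\frac{\partial \ell}{\partial \theta}
  = \sum_j \frac{n_j \left(\bar{X}_j - \theta\right)}{\eta^2} = 0,
\qquad
\frac{\partial \ell}{\partial \eta^2}
  = -\frac{G}{2\eta^2}
    + \sum_j \frac{n_j \left(\bar{X}_j - \theta\right)^2}{2\eta^4} = 0 .

% Resulting MLEs
\hat{\theta} = \frac{\sum_j n_j \bar{X}_j}{\sum_j n_j},
\qquad
\hat{\eta}^2 = \frac{1}{G} \sum_j n_j \left(\bar{X}_j - \hat{\theta}\right)^2 .
```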

  11. Normality Approach – Large Sample
  • Remarks
    • Although this method works well for large samples, in practice it is common for sample means to be based on a small number of observations, such as n = 2, 3, 4
    • In this case, when the original data follow a lognormal distribution, the sample mean does not follow a normal distribution
    • Our goal then becomes finding the distribution of the sample mean of a random sample of lognormal random variables

  12. Assume Lognormal – Naïve
  • In practice a common naïve approach is to assume that the sample means are themselves lognormal random variables
  • This would imply that the log of each sample mean is normally distributed
  • However this does not hold… Why? The lognormal family is closed under multiplication but not under addition, so a sum (and hence an average) of lognormal random variables is not lognormal

  13. Direct Approach
  • The exact approach to this problem is to derive the distribution of the sample mean by convoluting the lognormal densities of the individual observations
  • Hence, we can write the likelihood function as a product over the groups, where each factor is the probability density of the corresponding sample mean
  • The problem is that the distribution of the sum of lognormal random variables does not have a closed form, and therefore the likelihood does not have a closed form
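Written out (with g_{n_j} denoting the density of a sum of n_j i.i.d. LN(μ, σ²) variables, a notational choice made here), the direct likelihood takes the form:

```latex
% Density of the sum S_j = n_j \bar{X}_j via (n_j - 1)-fold convolution
g_{n_j}(s) = \left(f * f * \cdots * f\right)(s),
\qquad
f(x) = \frac{1}{x \sigma \sqrt{2\pi}}
\exp\!\left\{ -\frac{(\log x - \mu)^2}{2\sigma^2} \right\} ,

% Likelihood combining individual observations and group means
% (the factor n_j comes from the change of variable \bar{X}_j = S_j / n_j)
L(\mu, \sigma^2) = \prod_{i=1}^{n_1} f(x_{i1})
\; \prod_{j=2}^{k+1} n_j \, g_{n_j}\!\left(n_j \bar{x}_j\right) .
```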

  14. Numerical Approximation
  • We can approximate the convolution numerically by replacing the integral with a finite sum over a grid
  • For small samples, n = 2, 3, 4, the plot of the resulting density appears to be approximated better by a lognormal distribution with an adjusted mean and variance than by an approximately normal random variable
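A minimal sketch of this numerical convolution for n = 2, assuming a LN(0, 0.5²) population; the grid size and truncation point are arbitrary choices made here, not values from the paper:

```python
import numpy as np
from scipy.stats import lognorm

mu, sigma, n = 0.0, 0.5, 2          # illustrative population parameters and group size
x = np.linspace(0.0, 20.0, 8001)    # grid truncating the positive half-line
dx = x[1] - x[0]

# Lognormal density on the grid (scipy parameterization: s = sigma, scale = exp(mu))
f = lognorm.pdf(x, s=sigma, scale=np.exp(mu))

# Density of the sum S = X1 + X2: discrete convolution approximating the integral
f_sum = np.convolve(f, f) * dx
s = np.linspace(0.0, 2 * x[-1], f_sum.size)

# Density of the mean: if M = S/n, then f_M(m) = n * f_S(n*m)
m = s / n
f_mean = n * f_sum

# Sanity checks: the density integrates to ~1 and reproduces E[X] = exp(mu + sigma^2/2)
dm = m[1] - m[0]
total = f_mean.sum() * dm
mean_est = (m * f_mean).sum() * dm
print(total, mean_est)              # ~1.0 and ~exp(0.125)
```

The same grid can then be reused to compare this density against normal and adjusted-lognormal candidates, as described on the following slides.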

  15. Numerical Approximation

  16. Numerical Approximation
  • Remarks
    • A separate approximation must be performed for each sample mean
    • Therefore this approach can become computationally intensive, since the numerical approximations must be recomputed at each iteration of the likelihood maximization
    • The simulations show that a lognormal model with an adjusted mean and variance is a good fit when the sample sizes are small

  17. Adjusted Lognormal Distribution
  • Here we assume that each sample mean approximately follows a lognormal distribution with its own adjusted parameters
  • We then have the standard lognormal expressions for its expected value and variance in terms of these adjusted parameters

  18. Adjusted Lognormal Distribution
  • Also, since the original sample comes from a lognormal distribution, we know the exact mean and variance of the sample mean: the population mean, and the population variance divided by the number of observations
  • Equating the expected values and variances gives us the adjusted parameters

  19. Adjusted Lognormal Distribution
  • Therefore we have explicit expressions for the adjusted parameters in terms of the population parameters and the group sample sizes
  • This gives us the following likelihood function: a product of lognormal densities for the individual observations and adjusted-lognormal densities for the group means
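The moment matching of slides 17–19 can be carried out explicitly. Writing X̄_j ≈ LN(μ_j*, σ_j*²) (notation chosen here), equating means and variances gives:

```latex
% Moments of the adjusted lognormal must match those of the sample mean
e^{\mu_j^* + \sigma_j^{*2}/2} = e^{\mu + \sigma^2/2},
\qquad
\left(e^{\sigma_j^{*2}} - 1\right) e^{2\mu_j^* + \sigma_j^{*2}}
  = \frac{\left(e^{\sigma^2} - 1\right) e^{2\mu + \sigma^2}}{n_j} .

% Solving for the adjusted parameters
\sigma_j^{*2} = \log\!\left(1 + \frac{e^{\sigma^2} - 1}{n_j}\right),
\qquad
\mu_j^* = \mu + \frac{\sigma^2}{2} - \frac{\sigma_j^{*2}}{2} ,

% giving the approximate likelihood
L(\mu, \sigma^2) \approx \prod_{i=1}^{n_1} f_{\mathrm{LN}}\!\left(x_{i1}; \mu, \sigma^2\right)
\; \prod_{j=2}^{k+1} f_{\mathrm{LN}}\!\left(\bar{x}_j; \mu_j^*, \sigma_j^{*2}\right) .
```

Substituting the solved parameters back in confirms the match: the implied mean is exp(μ + σ²/2) and the implied variance is Var(X)/n_j.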

  20. Adjusted Lognormal Distribution
  • Setting the derivatives of this log-likelihood to zero gives the normal equations
  • The numerical solutions of these equations give the MLEs of the log-scale parameters, and hence the MLEs of the population mean and variance (by the invariance property)

  21. Adjusted Lognormal Distribution
  • Remarks
    • This method works well when dealing with small sample sizes n = 2, 3, 4
    • The likelihood is quite complicated, so numerical methods must be employed to obtain the MLEs of the parameters
    • There is an advantage over the convolution approach, since the approximations do not need to be recomputed at each iteration
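A sketch of this estimation scheme, assuming simulated data in place of the welding measurements; all sample sizes, parameter values, and variable names here are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

rng = np.random.default_rng(42)
mu_true, sig2_true = 0.5, 0.36

# Simulated data: one group of individual observations, plus 100 groups
# reporting only the average of n = 3 observations each
x_ind = rng.lognormal(mu_true, np.sqrt(sig2_true), size=200)
xbar = rng.lognormal(mu_true, np.sqrt(sig2_true), size=(100, 3)).mean(axis=1)
n_j = 3

def neg_loglik(params):
    mu, log_sig2 = params
    sig2 = np.exp(log_sig2)                       # keep sigma^2 positive
    # Adjusted lognormal parameters for a mean of n_j observations
    s2_adj = np.log1p((np.exp(sig2) - 1.0) / n_j)
    mu_adj = mu + sig2 / 2.0 - s2_adj / 2.0
    # Lognormal density for the individual observations,
    # adjusted lognormal density for the reported means
    ll = lognorm.logpdf(x_ind, s=np.sqrt(sig2), scale=np.exp(mu)).sum()
    ll += lognorm.logpdf(xbar, s=np.sqrt(s2_adj), scale=np.exp(mu_adj)).sum()
    return -ll

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sig2_hat = res.x[0], np.exp(res.x[1])
theta_hat = np.exp(mu_hat + sig2_hat / 2.0)       # MLE of the population mean (invariance)
print(mu_hat, sig2_hat, theta_hat)
```

Optimizing over log σ² rather than σ² is a convenience choice that keeps the variance positive without constrained optimization.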

  22. Conclusions
  • The distribution of the mean of lognormal observations does not yield a useful closed-form expression
  • Approximations, by a normal when the sample size is large or by a lognormal (with appropriately chosen parameters) when the sample size is small, can be used for obtaining estimates of the population parameters

  23. Future Work
  • Implementing these methods within standard software packages, such as PROC NLIN in SAS
  • Performing simulation studies, such as Monte Carlo experiments, to explore the efficiency of these methods
  • Exploring other numerical methods, such as the EM algorithm, for obtaining the MLEs
  • Generalizing these methods to other standard power transformations

  24. Bootstrapping
  • What is bootstrapping?
    • Resampling the observed data
    • It is a simulation-type method in which the observed data (not a mathematical model) are repeatedly sampled to generate representative data sets
    • The only indispensable assumption is that the observations are a random sample from a single population
    • There are some fixes available for when the single-population assumption is violated, as in our case
  • Can be implemented in quite a few software packages, e.g. S-PLUS and SAS
    • Millard and Neerchal (2000) give S-PLUS code

  25. Bootstrapping – The Details
  • Bootstrap inference is based on the distribution of the replicated values of the statistic T: T*1, T*2, …, T*B
  • For example, a bootstrap 95% upper confidence bound based on T is given by the 95th percentile of the distribution of the T*’s

  26. Bootstrapping the Combined Data
  • Group the data points according to the number of tests used in reporting the average, within each welding process and rod type combination; then bootstrap within each such group
  • e.g., for GMAW and E316 (the slide shows the grouped measurements; each color represents a separate group)
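A sketch of this grouped bootstrap for an upper confidence bound, with made-up data standing in for the slide's table; the group values, group sizes, and the number of replicates B are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
B = 2000  # number of bootstrap replicates

# Hypothetical data for one welding process / rod type combination:
# reported values keyed by the number of tests each value is based on
# (1 = individual measurements; 2, 3 = averages of that many tests)
groups = {
    1: np.array([0.31, 0.27, 0.42, 0.25, 0.36]),
    2: np.array([0.29, 0.33, 0.40]),
    3: np.array([0.30, 0.35]),
}

def combined_mean(samples):
    """Overall mean, weighting each reported average by its number of tests."""
    total = sum(n * s.sum() for n, s in samples.items())
    count = sum(n * s.size for n, s in samples.items())
    return total / count

t_star = np.empty(B)
for b in range(B):
    # Resample with replacement *within* each group, preserving group sizes
    resampled = {n: rng.choice(s, size=s.size, replace=True)
                 for n, s in groups.items()}
    t_star[b] = combined_mean(resampled)

t_hat = combined_mean(groups)
upper_95 = np.quantile(t_star, 0.95)  # bootstrap 95% upper confidence bound
print(t_hat, upper_95)
```

Resampling within groups is what keeps the single-population assumption plausible here: each group is treated as its own sampling unit, so averages based on different numbers of tests are never mixed in one resample.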
