
Summary of Experimental Uncertainty Assessment Methodology




Presentation Transcript


  1. Summary of Experimental Uncertainty Assessment Methodology F. Stern, M. Muste, M-L. Beninati, W.E. Eichinger

  2. Table of Contents • Introduction • Test Design Philosophy • Definitions • Measurement Systems, Data-Reduction Equations, and Error Sources • Uncertainty Propagation Equation • Uncertainty Equations for Single and Multiple Tests • Implementation & Recommendations

  3. Introduction • Experiments are an essential and integral tool for engineering and science • Experimentation: procedure for testing or determination of a truth, principle, or effect • True values are seldom known, and experiments have errors due to instruments, data acquisition, data reduction, and environmental effects • Therefore, determination of truth requires estimates for experimental errors, i.e., uncertainties • Uncertainty estimates are imperative for risk assessments in design, both when using data directly and when calibrating and/or validating simulation methods

  4. Introduction • Uncertainty analysis (UA): rigorous methodology for uncertainty assessment using statistical and engineering concepts • ASME (1998) and AIAA (1999) standards are the most recent updates of UA methodologies, which are internationally recognized • Presentation purpose: to provide summary of EFD UA methodology accessible and suitable for student and faculty use both in classroom and research laboratories

  5. Test design philosophy • Purposes for experiments: • Science & technology • Research & development • Design, test, and product liability and acceptance • Instruction • Types of tests: • Small-scale laboratory • Large-scale towing tank (TT), wind tunnel (WT) • In-situ experiments • Examples of fluids engineering tests: • Theoretical model formulation • Benchmark data for standardized testing and evaluation of facility biases • Simulation validation • Instrumentation calibration • Design optimization and analysis • Product liability and acceptance

  6. Test design philosophy • Decisions on conducting experiments: governed by the ability of the expected test outcome to achieve the test objectives within allowable uncertainties • Integration of UA into all test phases should be a key part of entire experimental program • Test description • Determination of error sources • Estimation of uncertainty • Documentation of the results

  7. Test design philosophy

  8. Definitions • Accuracy: closeness of agreement between measured and true value • Error: difference between measured and true value • Uncertainties (U): estimates of errors in measurements of individual variables Xi (UXi) or results (Ur) obtained by combining UXi • Estimates of U made at 95% confidence level

  9. Definitions • Bias error β: fixed, systematic • Bias limit B: estimate of β • Precision error ε: random • Precision limit P: estimate of ε • Total error: δ = β + ε

  10. Measurement systems, data reduction equations, & error sources • Measurement systems for individual variables Xi: instrumentation, data acquisition and reduction procedures, and operational environment (laboratory, large-scale facility, in situ) often including scale models • Results expressed through data-reduction equations r = r(X1, X2, X3,…, Xj) • Estimates of errors are meaningful only when considered in the context of the process leading to the value of the quantity under consideration • Identification and quantification of error sources require considerations of: • Steps used in the process to obtain the measurement of the quantity • The environment in which the steps were accomplished

  11. Measurement systems and data reduction equations • Block diagram showing elemental error sources, individual measurement systems, measurement of individual variables, data reduction equations, and experimental results

  12. Error sources • Estimation assumptions: 95% confidence level, large sample sizes, statistical parent distribution

  13. Uncertainty propagation equation • Bias and precision errors in the measurement of Xi propagate through the data reduction equation r = r(X1, X2, X3,…, Xj), resulting in bias and precision errors in the experimental result r • A small error δXi in a measured variable leads to a small error δr in the result that can be approximated using a Taylor series expansion of r(Xi) about rtrue(Xi) as δr ≈ (∂r/∂Xi) δXi • The derivative ∂r/∂Xi is referred to as the sensitivity coefficient. The larger the derivative/slope, the more sensitive the value of the result is to a small error in a measured variable
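As a minimal illustration of the sensitivity-coefficient idea (not part of the original slides), the sketch below estimates ∂r/∂Xi by central differences for a hypothetical data-reduction equation; the Reynolds-number form, the helper names, the step h, and all numerical values are assumptions chosen only to make the example runnable.

```python
def reynolds(V, d, nu=1.0e-6):
    """Hypothetical data-reduction equation: Re = V*d/nu (nu = kinematic viscosity, m^2/s)."""
    return V * d / nu

def sensitivity(func, args, i, h=1e-6):
    """Central-difference estimate of the sensitivity coefficient
    theta_i = d(func)/d(Xi) at the measured operating point."""
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (func(*hi) - func(*lo)) / (2.0 * h)

V, d = 2.0, 0.05              # measured velocity (m/s) and pipe diameter (m)
theta_V = sensitivity(reynolds, (V, d), 0)
theta_d = sensitivity(reynolds, (V, d), 1)
print(theta_V, theta_d)       # the larger slope marks the more sensitive variable
```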

  14. Uncertainty propagation equation • Overview given for the derivation of the equation describing error propagation, with attention to the assumptions and approximations used to obtain the final uncertainty equation applicable for single and multiple tests • Two variables, kth set of measurements (xk, yk). The total error in the kth determination of r: δrk = θx δxk + θy δyk = θx (βxk + εxk) + θy (βyk + εyk)   (1), where θx = ∂r/∂x and θy = ∂r/∂y are the sensitivity coefficients

  15. Uncertainty propagation equation • We would like to know the distribution of δr (called the parent distribution) for a large number of determinations of the result r • A measure of the parent distribution is its variance, defined as σδr² = lim(N→∞) (1/N) Σ(k=1..N) (δrk − μδr)²   (2) • Substituting (1) into (2), taking the limit for N approaching infinity, using definitions of variances similar to equation (2) for the β's and ε's and their correlations, and assuming no correlated bias/precision errors, gives σδr² = θx² σβx² + θy² σβy² + 2 θx θy σβxβy + θx² σεx² + θy² σεy² + 2 θx θy σεxεy   (3) • The σ's in equation (3) are not known; estimates for them must be made

  16. Uncertainty propagation equation • Defining uc² as the estimate for σδr², bx², by², bxy as the estimates for the variances and covariances (correlated bias errors) of the bias error distributions, and Sx², Sy², Sxy as the estimates for the variances and covariances (correlated precision errors) of the precision error distributions, equation (3) can be written as uc² = θx² bx² + θy² by² + 2 θx θy bxy + θx² Sx² + θy² Sy² + 2 θx θy Sxy • Valid for any type of error distribution • To obtain the uncertainty Ur at a specified confidence level (C%), a coverage factor (K) must be used for uc: Ur = K uc • For a normal distribution, K is the t value from the Student t distribution. For N ≥ 10, t = 2 for a 95% confidence level
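To make the combination concrete, the minimal sketch below evaluates uc and Ur = K uc for two variables, assuming no correlated bias or precision errors; the sensitivity coefficients and variance estimates are illustrative placeholders, not values from the slides.

```python
from math import sqrt

theta_x, theta_y = 3.0, -1.5   # sensitivity coefficients dr/dx, dr/dy (assumed)
b_x, b_y = 0.02, 0.01          # estimates of the bias-error standard deviations
S_x, S_y = 0.015, 0.008        # estimates of the precision-error standard deviations

# u_c^2 with b_xy = S_xy = 0 (no correlated bias or precision errors assumed)
u_c = sqrt(theta_x**2 * b_x**2 + theta_y**2 * b_y**2 +
           theta_x**2 * S_x**2 + theta_y**2 * S_y**2)

K = 2.0                        # coverage factor (t value), 95% level, N >= 10
U_r = K * u_c
print(f"u_c = {u_c:.4f}, U_r (95%) = {U_r:.4f}")
```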

  17. Uncertainty propagation equation • Generalization for J variables in a result r = r(X1, X2, X3,…, Xj): Ur² = Σ(i=1..J) θi² UXi², where θi = ∂r/∂Xi are the sensitivity coefficients • Example:
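As a hypothetical worked example of the J-variable propagation (the drag-coefficient data-reduction equation, the measured values, and the uncertainties UXi below are all assumed for illustration), the sensitivity coefficients are evaluated numerically and combined by RSS:

```python
from math import sqrt

def C_D(F, rho, V, A):
    """Hypothetical data-reduction equation: drag coefficient C_D = 2F/(rho V^2 A)."""
    return 2.0 * F / (rho * V**2 * A)

X = {"F": 1.2, "rho": 998.0, "V": 2.0, "A": 0.01}    # measured values (SI units)
U = {"F": 0.02, "rho": 1.0, "V": 0.03, "A": 1e-4}    # assumed 95% uncertainties U_Xi

def theta(name, h=1e-6):
    """Central-difference sensitivity coefficient dC_D/dXi for variable `name`."""
    hi, lo = dict(X), dict(X)
    hi[name] += h
    lo[name] -= h
    return (C_D(**hi) - C_D(**lo)) / (2.0 * h)

# U_r^2 = sum over i of (theta_i * U_Xi)^2, assuming uncorrelated errors
U_r = sqrt(sum((theta(n) * U[n])**2 for n in X))
print(f"C_D = {C_D(**X):.4f} +/- {U_r:.4f} (95% confidence)")
```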

  18. Uncertainty equations for single and multiple tests Measurements can be made in several ways: • Single test (for complex or expensive experiments): one set of measurements (X1, X2, …, Xj) for r • According to the present methodology, a test is considered a single test if the entire test is performed only once, even if the measurements of one or more variables are made from many samples (e.g., LDV velocity measurements) • Multiple tests (ideal situations): many sets of measurements (X1, X2, …, Xj) for r at a fixed test condition with the same measurement system

  19. Uncertainty equations for single and multiple tests • The total uncertainty of the result: Ur² = Br² + Pr²   (4) • Br : same estimation procedure for single and multiple tests • Pr : determined differently for single and multiple tests

  20. Uncertainty equations for single and multiple tests: bias limits • Br : Br² = Σ(i=1..J) θi² Bi² + 2 Σ(i=1..J−1) Σ(k=i+1..J) θi θk Bik • Sensitivity coefficients: θi = ∂r/∂Xi • Bi: estimate of calibration, data acquisition, data reduction, and conceptual bias errors for Xi. Within each category, there may be several elemental sources of bias. If for variable Xi there are J significant elemental bias errors [estimated as (Bi)1, (Bi)2, …, (Bi)J], the bias limit for Xi is calculated as Bi² = Σ(k=1..J) (Bi)k² • Bik: estimate of correlated bias limits for Xi and Xk
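A minimal sketch of this roll-up, assuming two hypothetical variables with a handful of elemental bias limits, assumed sensitivity coefficients, and no correlated bias limits (all Bik = 0); every number is an illustrative placeholder:

```python
from math import sqrt

# elemental bias limits per variable (e.g., calibration, data acquisition, data reduction)
elemental = {
    "X1": [0.010, 0.004, 0.002],
    "X2": [0.020, 0.005],
}
theta = {"X1": 1.5, "X2": -0.8}   # sensitivity coefficients dr/dXi (assumed)

# B_i: RSS of the elemental bias limits for each variable
B = {name: sqrt(sum(b**2 for b in limits)) for name, limits in elemental.items()}

# B_r: propagated bias limit of the result, with all correlated terms B_ik = 0
B_r = sqrt(sum((theta[name] * B[name])**2 for name in B))
print(B, B_r)
```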

  21. Uncertainty equations for single test: precision limits • Precision limit of the result (end to end): Pr = t Sr, where t is the coverage factor (t = 2 for N > 10) and Sr is the standard deviation of the N readings of the result. Sr must be determined from N readings over an appropriate/sufficient time interval • Precision limit of the result (individual variables): Pr² = Σ(i=1..J) θi² Pi², where Pi are the precision limits for Xi • It is often the case that the time interval is inappropriate/insufficient, and the Pi's or Pr's must be estimated based on previous readings or the best available information
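The end-to-end form can be sketched as follows; the N repeated readings of the result are synthetic placeholders, assumed only so the example runs:

```python
from statistics import stdev

# N repeated readings of the result over the test interval (synthetic placeholders)
readings = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.04, 10.00, 9.96]
N = len(readings)

t = 2.0                    # coverage factor, taken as 2 for this sample size (95% confidence)
S_r = stdev(readings)      # sample standard deviation of the N results
P_r = t * S_r              # end-to-end precision limit of the result
print(f"N = {N}, S_r = {S_r:.4f}, P_r = {P_r:.4f}")
```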

  22. Uncertainty equations for multiple tests: precision limits • The average result: r̄ = (1/M) Σ(k=1..M) rk • Precision limit of the average result (end to end): Pr̄ = t Sr / √M, where t is the coverage factor (t = 2 for M > 10) and Sr = [ (1/(M−1)) Σ(k=1..M) (rk − r̄)² ]^(1/2) is the standard deviation of the M readings of the result • The total uncertainty for the average result: Ur̄² = Br² + Pr̄² • Alternatively, Pr̄ can be determined by RSS of the precision limits of the individual variables
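A corresponding sketch for M repeat tests, with placeholder results rk and an assumed bias limit Br:

```python
from math import sqrt
from statistics import mean, stdev

# results r_k from M repeat tests at the same condition (illustrative placeholders)
r = [0.512, 0.508, 0.515, 0.510, 0.507, 0.513, 0.509, 0.511, 0.514, 0.506]
M = len(r)

r_bar = mean(r)                     # average result
S_r = stdev(r)                      # standard deviation of the M results
t = 2.0                             # coverage factor, 95% confidence
P_rbar = t * S_r / sqrt(M)          # precision limit of the average result

B_r = 0.004                         # bias limit of the result, assumed known
U_rbar = sqrt(B_r**2 + P_rbar**2)   # total uncertainty of the average result
print(f"r_bar = {r_bar:.4f} +/- {U_rbar:.4f} (95% confidence)")
```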

  23. Implementation • Define the purpose of the test • Determine the data reduction equation: r = r(X1, X2, …, Xj) • Construct the block diagram • Construct data-stream diagrams from sensor to result • Identify, prioritize, and estimate bias limits at the individual-variable level • Uncertainty sources smaller than 1/4 or 1/5 of the largest sources are neglected • Estimate precision limits (end-to-end procedure recommended) • Computed precision limits are only applicable for the random error sources that were “active” during the repeated measurements • Ideally M ≥ 10; however, often this is not the case, and for M < 10 a coverage factor t = 2 is still permissible if the bias and precision limits have similar magnitude • If unacceptably large P's are involved, the elemental error source contributions must be examined to see which need to be (or can be) improved • Calculate the total uncertainty using equation (4), as in the sketch after this list • For each r, report the total uncertainty and the bias and precision limits
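The sketch below illustrates two of the bookkeeping steps above: dropping uncertainty sources smaller than about 1/4 of the largest one, and combining the surviving bias limit with a precision limit via equation (4). The limit values are illustrative assumptions.

```python
from math import sqrt

# contributions of individual error sources to the bias limit of the result
# (illustrative values, already multiplied by their sensitivity coefficients)
bias_sources = {"calibration": 0.020, "acquisition": 0.004, "reduction": 0.001}

# neglect sources smaller than 1/4 of the largest source
largest = max(bias_sources.values())
kept = {name: b for name, b in bias_sources.items() if b >= largest / 4.0}

B_r = sqrt(sum(b**2 for b in kept.values()))   # RSS of the surviving contributions
P_r = 0.012                                    # precision limit, assumed known here
U_r = sqrt(B_r**2 + P_r**2)                    # equation (4): total uncertainty
print(f"kept sources: {sorted(kept)}, B_r = {B_r:.4f}, P_r = {P_r:.4f}, U_r = {U_r:.4f}")
```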

  24. Recommendations • Recognize that uncertainty depends on the entire testing process and that any changes in the process can significantly affect the uncertainty of the test results • Integrate uncertainty assessment methodology into all phases of the testing process (design, planning, calibration, execution, and post-test analyses) • Simplify analyses by using prior knowledge (e.g., a database), concentrate on dominant error sources, and use end-to-end calibrations and/or bias and precision limit estimation • Document: • test design, measurement systems, and data streams in block diagrams • equipment and procedures used • error sources considered • all estimates for bias and precision limits and the methods used in their estimation (e.g., manufacturers' specifications, comparisons against standards, experience, etc.) • detailed uncertainty assessment methodology and actual data uncertainty estimates

  25. References • AIAA, 1999, “Assessment of Wind Tunnel Data Uncertainty,” AIAA S-071A-1999. • ASME, 1998, “Test Uncertainty,” ASME PTC 19.1-1998. • ANSI/ASME, 1985, “Measurement Uncertainty: Part 1, Instrument and Apparatus,” ANSI/ASME PTC 19.1-1985. • Coleman, H.W. and Steele, W.G., 1999, Experimentation and Uncertainty Analysis for Engineers, 2nd Edition, John Wiley & Sons, Inc., New York, NY. • Coleman, H.W. and Steele, W.G., 1995, “Engineering Application of Experimental Uncertainty Analysis,” AIAA Journal, Vol. 33, No. 10, pp. 1888–1896. • ISO, 1993, “Guide to the Expression of Uncertainty in Measurement,” 1st Edition, ISBN 92-67-10188-9. • ITTC, 1999, Proceedings of the 22nd International Towing Tank Conference, “Resistance Committee Report,” Seoul, Korea and Shanghai, China.
