
Engineering Analysis ENG 3420 Fall 2009



  1. Engineering Analysis ENG 3420 Fall 2009 Dan C. Marinescu Office: HEC 439 B Office hours: Tu-Th 11:00-12:00

  2. Lecture 23 Attention: the last homework (HW5) and the last project are due on Tuesday, November 24! Last time: linear regression versus the sample mean; coefficient of determination; polynomial least-squares fit; multiple linear regression; general linear least squares; more on nonlinear models; interpolation (Chapter 15). Today: Lagrange interpolating polynomials; splines; cubic splines; searching and sorting. Next time: more on splines; numerical integration (Chapter 17).

  3. Newton interpolating polynomial of degree n-1 • In general, an (n-1)th Newton interpolating polynomial has all the terms of the (n-2)th polynomial plus one extra. • The general formula is: • where • and the f[…] represent divided differences.
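The formulas on this slide were images and did not survive the transcript; in the standard textbook notation (as in Chapra & Canale), the (n-1)th-order Newton interpolating polynomial and its coefficients are:

```latex
f_{n-1}(x) = b_1 + b_2 (x - x_1) + \cdots + b_n (x - x_1)(x - x_2)\cdots(x - x_{n-1}),
\qquad b_i = f[x_i, x_{i-1}, \ldots, x_1]
```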

  4. Divided differences • Divided differences are calculated as follows: • Higher-order divided differences are computed recursively from divided differences with fewer terms:
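The defining equations were lost with the slide images; the standard first and second divided differences, and the general recursion, are:

```latex
f[x_i, x_j] = \frac{f(x_i) - f(x_j)}{x_i - x_j}, \qquad
f[x_i, x_j, x_k] = \frac{f[x_i, x_j] - f[x_j, x_k]}{x_i - x_k},
\qquad
f[x_n, \ldots, x_1] = \frac{f[x_n, \ldots, x_2] - f[x_{n-1}, \ldots, x_1]}{x_n - x_1}
```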

  5. Lagrange interpolating polynomials • Another method that uses shifted values to express an interpolating polynomial is the Lagrange interpolating polynomial. • The difference between a simple polynomial and the Lagrange interpolating polynomial, for first- and second-order polynomials, is: where the Li are weighting coefficients that are functions of x.

  6. First-order Lagrange interpolating polynomial • The first-order Lagrange interpolating polynomial may be obtained from a weighted combination of two linear interpolations, as shown. • The resulting formula based on known points x1 and x2 and the values of the dependent function at those points is:
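The formula itself was an image on the original slide; the standard first-order Lagrange form, based on known points x1 and x2, is:

```latex
f_1(x) = \frac{x - x_2}{x_1 - x_2}\, f(x_1) + \frac{x - x_1}{x_2 - x_1}\, f(x_2)
       = L_1(x)\, f(x_1) + L_2(x)\, f(x_2)
```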

  7. Lagrange interpolating polynomial for n points • In general, the Lagrange polynomial interpolation for n points is: • where Li is given by:
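The general form is f_{n-1}(x) = sum over i of L_i(x) f(x_i), with L_i(x) the product over j ≠ i of (x - x_j)/(x_i - x_j). The course uses MATLAB, but as a minimal illustrative sketch this is easy to state in Python:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        # L_i(x): product of (x - x_j)/(x_i - x_j) over all j != i
        L = 1.0
        for j in range(n):
            if j != i:
                L *= (x - xs[j]) / (xs[i] - xs[j])
        total += L * ys[i]
    return total
```

Because the interpolant through three points of y = x^2 is x^2 itself, evaluating it at x = 1.5 returns 2.25 exactly (up to round-off).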

  8. Inverse interpolation • Interpolation: find the value f(x) for some x between given independent data points. • Inverse interpolation: find the argument x for which f(x) has a certain value. • Rather than finding an interpolation of x as a function of f(x), it may be useful to find an equation for f(x) as a function of x using interpolation and then solve the corresponding roots problem: f(x) - fdesired = 0 for x.
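The roots-problem approach above can be sketched as follows (a Python illustration, not the course's MATLAB; the bracketing interval and bisection are my choices, since the slide does not name a root finder):

```python
def interp(xs, ys, x):
    # Lagrange form of the interpolating polynomial (as on the previous slides)
    total = 0.0
    for i in range(len(xs)):
        L = 1.0
        for j in range(len(xs)):
            if j != i:
                L *= (x - xs[j]) / (xs[i] - xs[j])
        total += L * ys[i]
    return total

def inverse_interp(xs, ys, f_desired, lo, hi, tol=1e-10):
    """Solve interp(x) - f_desired = 0 for x by bisection on [lo, hi]."""
    g = lambda x: interp(xs, ys, x) - f_desired
    assert g(lo) * g(hi) <= 0, "the root must be bracketed by [lo, hi]"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For data taken from y = x^2, asking where the interpolant equals 2 recovers sqrt(2).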

  9. Extrapolation • Extrapolation: estimating a value of f(x) that lies outside the range of the known base points x1, x2, …, xn. • Extreme care should be exercised when extrapolating!

  10. Extrapolation Hazards • The following shows the results of extrapolating a population data set with a seventh-order polynomial:

  11. Oscillations • Higher-order polynomials can not only lead to round-off errors due to ill-conditioning, but can also introduce oscillations to an interpolation or fit where they should not be. • The dashed line represents a function, the circles represent samples of the function, and the solid line represents the results of a polynomial interpolation:

  12. Splines – piecewise interpolation • Splines: an alternative approach to using a single (n-1)th order polynomial to interpolate between n points • Apply lower-order polynomials in a piecewise fashion to subsets of the data points. The connecting polynomials are called spline functions. • Splines eliminate oscillations by using small subsets of points for each interval rather than every point. They are especially useful when there are jumps in the data, as in the figure at the right, where we use: • 3rd order polynomial • 5th order polynomial • 7th order polynomial • Linear spline • seven 1st order polynomials generated by using pairs of points at a time

  13. Splines (cont’d) • Spline function si(x) coefficients are calculated for each interval of a data set. • The number of data points (fi) used for each spline function depends on the order of the spline function. The conditions: • First-order splines find straight-line equations between each pair of points that • go through the points • Second-order splines find quadratic equations between each pair of points that • go through the points • match first derivatives at the interior points • Third-order splines find cubic equations between each pair of points that • go through the points • match first and second derivatives at the interior points • Note that the results of cubic spline interpolation are different from the results of an interpolating cubic.
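A first-order spline is just a straight line between each pair of neighboring points; a minimal Python sketch (the function name and clamping behavior are my choices for illustration):

```python
from bisect import bisect_right

def linear_spline(xs, ys, x):
    """First-order spline: straight-line interpolation between consecutive
    data points. xs must be sorted and x must lie within [xs[0], xs[-1]]."""
    i = bisect_right(xs, x) - 1        # index of the interval containing x
    i = max(0, min(i, len(xs) - 2))    # clamp so x = xs[-1] uses the last interval
    slope = (ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
    return ys[i] + slope * (x - xs[i])
```

Halfway between two data points, the linear spline returns the average of their y-values.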

  14. Cubic splines • Cubic splines: the simplest representation with the appearance of smoothness and without the problems of higher-order polynomials. • Linear splines have discontinuous first derivatives. • Quadratic splines have discontinuous second derivatives and require setting the second derivative at some point to a pre-determined value. • Quartic or higher-order splines tend to exhibit ill-conditioning or oscillations. • The cubic spline function for the ith interval can be written as: • For n data points, there are (n-1) intervals and thus 4(n-1) unknowns to evaluate to solve for all the spline function coefficients.
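The per-interval formula was an image on the original slide; in the usual textbook form it is:

```latex
s_i(x) = a_i + b_i (x - x_i) + c_i (x - x_i)^2 + d_i (x - x_i)^3,
\qquad i = 1, 2, \ldots, n-1
```

with four unknown coefficients (a_i, b_i, c_i, d_i) per interval, giving the 4(n-1) unknowns mentioned above.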

  15. Conditions to determine the spline coefficients • The first condition: each spline function must go through the points at both ends of its interval; this leads to 2(n-1) equations. • The second condition: the first derivative should be continuous at each interior point; this leads to (n-2) equations. • The third condition: the second derivative should be continuous at each interior point; this leads to (n-2) equations. • So far we have (4n-6) equations; we need (4n-4) equations!
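Written out in the notation of the cubic spline s_i(x) for the ith interval, the three conditions (lost as images in the transcript) are:

```latex
s_i(x_i) = f_i, \quad s_i(x_{i+1}) = f_{i+1}
  \qquad \text{(2(n-1) equations)}
```
```latex
s_i'(x_{i+1}) = s_{i+1}'(x_{i+1}), \qquad
s_i''(x_{i+1}) = s_{i+1}''(x_{i+1}),
\qquad i = 1, \ldots, n-2
  \qquad \text{((n-2) equations each)}
```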

  16. Two additional equations • There are several options for the final two equations: • Natural end conditions: the second derivatives at the end knots are zero. • Clamped end conditions: the first derivatives at the first and last knots are known. • “Not-a-knot” end conditions: force continuity of the third derivative at the second and penultimate points (this results in the first two intervals having the same spline function, and likewise the last two intervals).

  17. Built-in functions for piecewise interpolation • MATLAB has several built-in functions to implement piecewise interpolation. • spline: yy = spline(x, y, xx) • Performs cubic spline interpolation, generally using not-a-knot conditions. • If y contains two more values than x has entries, then the first and last values in y are used as the derivatives at the end points (i.e. clamped). • Example: • Generate data: x = linspace(-1, 1, 9); y = 1./(1+25*x.^2); • Calculate 100 model points and determine the not-a-knot interpolation: xx = linspace(-1, 1); yy = spline(x, y, xx); • Calculate the actual function values at the model points, then plot the data points, the 9-point not-a-knot interpolation (solid), and the actual function (dashed): yr = 1./(1+25*xx.^2); plot(x, y, 'o', xx, yy, '-', xx, yr, '--')

  18. Clamped example • Generate data with first-derivative information: x = linspace(-1, 1, 9); y = 1./(1+25*x.^2); yc = [1 y -4]; • Calculate 100 model points and determine the clamped interpolation: xx = linspace(-1, 1); yyc = spline(x, yc, xx); • Calculate the actual function values at the model points, then plot the data points, the 9-point clamped interpolation (solid), and the actual function (dashed): yr = 1./(1+25*xx.^2); plot(x, y, 'o', xx, yyc, '-', xx, yr, '--')

  19. interp1 built-in function • The interp1 function performs several different kinds of interpolation: yi = interp1(x, y, xi, 'method') • x & y contain the original data • xi contains the points at which to interpolate • 'method' is a string containing the desired method: • 'nearest' - nearest-neighbor interpolation • 'linear' - connects the points with straight lines • 'spline' - not-a-knot cubic spline interpolation • 'pchip' or 'cubic' - piecewise cubic Hermite interpolation

  20. Piecewise Polynomial Comparisons

  21. Multidimensional Interpolation • The interpolation methods for one-dimensional problems can be extended to multidimensional interpolation. • Example - bilinear interpolation using Lagrange-form equations:

  22. Built-in functions for two- and three-dimensional piecewise interpolation • 2-D interpolation: the inputs are vectors or same-size matrices. zi = interp2(x, y, z, xi, yi, 'method') • 3-D interpolation: the inputs are vectors or same-size 3-D arrays. vi = interp3(x, y, z, v, xi, yi, zi, 'method') • 'method' is a string containing the desired method: 'nearest', 'linear', 'spline', 'pchip', 'cubic'

  23. Search algorithms • Find an element of a set based upon some search criterion. • Linear search: • Compare each element of the set with the “target”. • Requires O(n) operations if the set of n elements is not sorted. • Binary search: • Can be done only when the list is sorted. • Requires O(log(n)) comparisons. • Algorithm: • Check the middle element. • If the middle element is equal to the sought value, then the position has been found; • Otherwise, search the upper half or the lower half, depending on whether the target is greater than or less than the middle element.
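The binary search steps above can be sketched directly (a Python illustration; the course itself uses MATLAB):

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if it is absent.
    Requires O(log n) comparisons on a sorted list."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # check the middle element
        if sorted_list[mid] == target:
            return mid                # position found
        elif sorted_list[mid] < target:
            lo = mid + 1              # target is in the upper half
        else:
            hi = mid - 1              # target is in the lower half
    return -1
```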

  24. Sorting algorithms • Algorithms that put the elements of a list in a certain order, e.g., numerical order or lexicographical order. • Input: a list of n unsorted elements. • Output: the list sorted in increasing order. • Bubble sort complexity: average O(n^2); worst case O(n^2). • Compare each pair of adjacent elements; swap them if they are in the wrong order. • Go through the list again until no swaps are necessary. • Quick sort complexity: average O(n log(n)); worst case O(n^2). • Pick an element, called a pivot, from the list. • Reorder the list so that • all elements which are less than the pivot come before the pivot and • all elements greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. • Recursively sort the sub-list of lesser elements and the sub-list of greater elements.
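Both algorithms on this slide are short enough to sketch in full (Python illustrations; the middle-element pivot choice for quick sort is mine, since the slide leaves it open):

```python
def bubble_sort(a):
    """Bubble sort: repeatedly swap adjacent out-of-order pairs
    until a full pass makes no swaps. Average and worst case O(n^2)."""
    a = list(a)
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
    return a

def quicksort(a):
    """Quick sort: pick a pivot, partition into lesser / equal / greater,
    then recurse. Average O(n log n), worst case O(n^2)."""
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```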

  25. Sorting algorithms (cont’d) • Merge sort – invented by John von Neumann: • Complexity: average O(n log(n)); worst case O(n log(n)). • If the list is of length 0 or 1, then it is already sorted. Otherwise: • Divide the unsorted list into two sublists of about half the size. • Sort each sublist recursively by re-applying merge sort. • Merge the two sublists back into one sorted list. • Tournament sort: • Complexity: average O(n log(n)); worst case O(n log(n)). • It imitates a tournament in which players compete in pairs. • Compare numbers in pairs, then form a temporary array with the winning elements. • Repeat this process until you get the greatest or smallest element, depending on your choice.
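The merge sort steps above translate almost line for line into code (again a Python sketch rather than MATLAB):

```python
def merge_sort(a):
    """Merge sort: split the list in half, sort each half recursively,
    then merge the two sorted halves. O(n log n) in the worst case."""
    if len(a) <= 1:
        return list(a)            # a list of length 0 or 1 is already sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # sort each half recursively
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # take the smaller front element
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]   # append whichever half remains
```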
