Presentation Transcript


  1. EMGT 501 HW Solutions: Chapter 14 - SELF TEST 3 and Chapter 14 - SELF TEST 14

  2. 14-3 a. Let x1 = number of units of product 1 produced and x2 = number of units of product 2 produced, with all decision and deviation variables ≥ 0.

  3. b. In the graphical solution, point A provides the optimal solution. Note that with x1 = 250 and x2 = 100, this solution achieves goals 1 and 2, but underachieves goal 3 (profit) by $100 since 4(250) + 2(100) = $1200.

  4. c. The graphical solution indicates that there are four extreme points. Comparing the profit at each extreme point shows the largest value occurs at x1 = 350, x2 = 0. Thus, the optimal product mix is x1 = 350 and x2 = 0 with a profit of 4(350) + 2(0) = $1400.

  5. d. The solution to part (b) achieves both labor goals, whereas the solution to part (c) results in using only 2(350) + 5(0) = 700 hours of labor in department B. Although (c) results in a $100 increase in profit, the problems associated with underachieving the original department B labor goal by 300 hours may be more significant in terms of long-term considerations. e. Refer to the graphical solution in part (b). The solution to the revised problem is point B, with x1 = 281.25 and x2 = 87.5. Although this solution achieves the original department B labor goal and the profit goal, this solution uses 1(281.25) + 1(87.5) = 368.75 hours of labor in department A, which is 18.75 hours more than the original goal.

  6. 14-14 a. b.

  7. Homework: 15-9 and 15-35. Due date: Dec 2, 2007. No class on Nov 25, 2007.

  8. Chapter 15 Forecasting • Quantitative Approaches to Forecasting • The Components of a Time Series • Measures of Forecast Accuracy • Using Smoothing Methods in Forecasting • Using Trend Projection in Forecasting • Using Trend and Seasonal Components in Forecasting • Using Regression Analysis in Forecasting • Qualitative Approaches to Forecasting

  9. Quantitative Approaches to Forecasting • Quantitative methods are based on an analysis of historical data concerning one or more time series. • A time series is a set of observations measured at successive points in time or over successive periods of time. • If the historical data used are restricted to past values of the series that we are trying to forecast, the procedure is called a time series method. • If the historical data used involve other time series that are believed to be related to the time series that we are trying to forecast, the procedure is called a causal method.

  10. Time Series Methods • Three time series methods are: • smoothing • trend projection • trend projection adjusted for seasonal influence

  11. Components of a Time Series • The trend component accounts for the gradual shifting of the time series over a long period of time. • Any regular pattern of sequences of values above and below the trend line is attributable to the cyclical component of the series.

  12. Components of a Time Series • The seasonal component of the series accounts for regular patterns of variability within certain time periods, such as over a year. • The irregular component of the series is caused by short-term, unanticipated and non-recurring factors that affect the values of the time series. One cannot attempt to predict its impact on the time series in advance.

  13. Measures of Forecast Accuracy • Mean Squared Error The average of the squared forecast errors for the historical data is calculated. The forecasting method or parameter(s) that minimize this mean squared error are then selected. • Mean Absolute Deviation The mean of the absolute values of all forecast errors is calculated, and the forecasting method or parameter(s) that minimize this measure are selected. The mean absolute deviation measure is less sensitive to individual large forecast errors than the mean squared error measure.

  14. Smoothing Methods • In cases in which the time series is fairly stable and has no significant trend, seasonal, or cyclical effects, one can use smoothing methods to average out the irregular components of the time series. • Four common smoothing methods are: • Moving averages • Centered moving averages • Weighted moving averages • Exponential smoothing

  15. Smoothing Methods • Moving Average Method The moving average method consists of computing an average of the most recent n data values for the series and using this average for forecasting the value of the time series for the next period.
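
A minimal Python sketch of this method: the function below averages the n most recent observations to produce the next-period forecast. The demand figures and the choice n = 3 are assumed purely for illustration.

    # Moving-average forecast: average of the n most recent observations (hypothetical data).
    def moving_average_forecast(series, n):
        if len(series) < n:
            raise ValueError("need at least n observations")
        return sum(series[-n:]) / n

    demand = [120, 115, 125, 130, 128, 135]            # hypothetical weekly demand
    print(moving_average_forecast(demand, n=3))        # forecast for the next week: 131.0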

  16. Time Series A time series is a series of observations over time of some quantity of interest (a random variable). Thus, if X_i is the random variable of interest at time i, and if observations are taken at times i = 1, 2, …, t, then the observed values x_1, x_2, …, x_t are a time series.

  17. Several typical time series patterns: • Constant level • Seasonal effect • Linear trend

  18. Example: the constant-level model is X_i = A + e_i, where X_i is the random variable observed at time i, A is the constant level of the model, and e_i is the random error occurring at time i. Let F_{t+1} denote the forecast of the value of the time series at time t + 1, given the observed values x_1, x_2, …, x_t.
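
To make the constant-level model concrete, the short Python sketch below simulates a series X_i = A + e_i; the level A = 50 and the error spread are values chosen only for illustration.

    import random

    # Simulate a constant-level time series X_i = A + e_i (illustrative numbers).
    random.seed(1)
    A = 50.0                                                 # assumed constant level
    series = [A + random.gauss(0, 2.0) for _ in range(12)]   # 12 noisy observations
    print([round(x, 1) for x in series])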

  19. Forecasting Methods for a Constant-Level Model (1) Last-Value Forecasting Method (2) Averaging Forecasting Method (3) Moving-Average Forecasting Method (4) Exponential Smoothing Forecasting Method

  20. (1) Last-Value Forecasting Method By interpreting t as the current time, the last-value forecasting procedure uses the value of the time series observed at time t, x_t, as the forecast at time t + 1: F_{t+1} = x_t. The last-value forecasting method sometimes is called the naive method, because statisticians consider it naive to use just a sample size of one when additional relevant data are available.

  21. (2) Averaging Forecasting Method This method uses all the data points in the time series and simply averages these points: F_{t+1} = (x_1 + x_2 + … + x_t) / t. This estimate is an excellent one if the process is entirely stable.

  22. (3) Moving-Average Forecasting Method This method averages the data for only the last n periods as the forecast for the next period: F_{t+1} = (x_{t-n+1} + … + x_t) / n. The moving-average estimator combines the advantages of the last-value and averaging estimators. A disadvantage of this method is that it places as much weight on x_{t-n+1} as on x_t.
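
For comparison, a small Python sketch (hypothetical data) computes the last-value, averaging, and moving-average forecasts F_{t+1} from the same observed series; the window n = 3 is an arbitrary choice.

    # Three constant-level forecasts computed from one hypothetical series.
    x = [48.0, 52.0, 51.0, 49.0, 53.0, 50.0]           # observed values x_1 ... x_t

    last_value = x[-1]                                 # F_{t+1} = x_t
    averaging = sum(x) / len(x)                        # F_{t+1} = average of all observations
    n = 3
    moving_avg = sum(x[-n:]) / n                       # F_{t+1} = average of the last n observations

    print(last_value, round(averaging, 2), round(moving_avg, 2))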

  23. (4) Exponential Smoothing Forecasting Method The forecast is F_{t+1} = α x_t + (1 - α) F_t, where α (between 0 and 1) is called the smoothing constant. Thus, the forecast is just a weighted sum of the last observation x_t and the preceding forecast F_t for the period just ended.

  24. Because of this recursive relationship between F_{t+1} and F_t, F_{t+1} alternatively can be expressed as F_{t+1} = α x_t + α(1 - α) x_{t-1} + α(1 - α)^2 x_{t-2} + … . Another alternative form for the exponential smoothing technique is given by F_{t+1} = F_t + α(x_t - F_t).
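
A short Python sketch of this recursion, using the error-correction form F_{t+1} = F_t + α(x_t - F_t); the data, the initial forecast, and α = 0.3 are all assumed for illustration.

    # Exponential smoothing: update the forecast after each new observation.
    def exponential_smoothing(series, alpha, initial_forecast):
        forecast = initial_forecast
        forecasts = [forecast]                              # the starting forecast
        for x in series:
            forecast = forecast + alpha * (x - forecast)    # F_{t+1} = F_t + alpha*(x_t - F_t)
            forecasts.append(forecast)
        return forecasts

    x = [48.0, 52.0, 51.0, 49.0, 53.0, 50.0]                # hypothetical observations
    print([round(f, 2) for f in exponential_smoothing(x, alpha=0.3, initial_forecast=50.0)])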

  25. Seasonal Factor It is fairly common for a time series to have a seasonal pattern with higher values at certain times of the year than others.

  26. Example: a table giving, for each Quarter, the Three-Year Average of the observed values and the resulting Seasonal Factor.

  27. The table is extended to give, for each Year and Quarter, the Actual Volume, the Seasonal Factor, and the Seasonally Adjusted Volume (the actual volume divided by the quarter's seasonal factor).
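
A hedged Python sketch of this seasonal-factor calculation: each quarter's factor is taken as its three-year average divided by the overall average, and the seasonally adjusted value is the actual value divided by that factor. The quarterly volumes below are invented for illustration.

    # Seasonal factors and seasonally adjusted volumes (hypothetical quarterly data).
    volumes = [                                        # rows = years, columns = quarters 1..4
        [45, 335, 520, 100],
        [50, 360, 590, 110],
        [55, 400, 650, 120],
    ]

    quarter_avg = [sum(year[q] for year in volumes) / len(volumes) for q in range(4)]
    overall_avg = sum(quarter_avg) / 4
    seasonal_factor = [qa / overall_avg for qa in quarter_avg]

    # Seasonally adjusted volume = actual volume / seasonal factor for its quarter.
    adjusted = [[year[q] / seasonal_factor[q] for q in range(4)] for year in volumes]

    print([round(s, 2) for s in seasonal_factor])
    print([[round(v, 1) for v in year] for year in adjusted])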

  28. An Exponential Smoothing Method for a Linear Trend Model Suppose that the generating process of the observed time series can be represented by a linear trend superimposed with random fluctuations.

  29. The model is represented by X_i = A + B·i + e_i, where X_i is the random variable that is observed at time i, A is a constant, B is the trend factor, and e_i is the random error occurring at time i.

  30. Adapting Exponential Smoothing to this Model Let T_{t+1} = the exponential smoothing estimate of the trend factor B at time t + 1, given the observed values x_1, x_2, …, x_t. Given T_{t+1}, the forecast of the value of the time series at time t + 1 (F_{t+1}) is obtained simply by adding T_{t+1} to the formula for F_{t+1} from the constant-level model: F_{t+1} = α x_t + (1 - α) F_t + T_{t+1}.

  31. The most recent observations are the most reliable ones for estimating the current parameters. Let L_{t+1} = the latest trend at time t + 1 based on the last two values (x_t and x_{t-1}) and the last two forecasts (F_t and F_{t-1}). The exponential smoothing formula used for L_{t+1} is L_{t+1} = [α x_t + (1 - α) F_t] - [α x_{t-1} + (1 - α) F_{t-1}].

  32. Then T_{t+1} is calculated as T_{t+1} = β L_{t+1} + (1 - β) T_t, where β is the trend smoothing constant, which must be between 0 and 1.

  33. Getting started with this forecasting method requires making two initial estimates: an initial estimate of the expected value of the time series and an initial estimate of the trend of the time series.

  34. The resulting forecasts for the first two periods are then obtained directly from these two initial estimates.
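
A rough Python sketch of the trend-adjusted procedure as read from the slides above: the constant-level smoothing is computed first, the latest trend is the change in that smoothed value, the trend estimate is smoothed with β, and the forecast adds the trend to the smoothed level. The smoothing constants (α = β = 0.3), the two initial estimates, the data, and the handling of the first period are all assumptions made for illustration, not the course's exact initialization.

    # Exponential smoothing with a linear trend (illustrative initialization).
    def trend_adjusted_smoothing(series, alpha, beta, initial_level, initial_trend):
        trend = initial_trend
        forecast = initial_level + initial_trend         # F_1 from the two initial estimates
        prev_smooth = None
        forecasts = [forecast]
        for x in series:
            smooth = alpha * x + (1 - alpha) * forecast  # constant-level part of the forecast
            if prev_smooth is not None:
                latest_trend = smooth - prev_smooth      # L_{t+1}: change in the smoothed value
                trend = beta * latest_trend + (1 - beta) * trend   # T_{t+1}
            prev_smooth = smooth
            forecast = smooth + trend                    # F_{t+1} = smoothed value + trend estimate
            forecasts.append(forecast)
        return forecasts

    x = [10.0, 12.0, 13.5, 15.0, 17.0]                   # hypothetical upward-trending series
    print([round(f, 2) for f in trend_adjusted_smoothing(x, 0.3, 0.3, 10.0, 1.5)])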

  35. Forecasting Errors The goal of several forecasting methods is to generate forecasts that are as accurate as possible, so it is natural to base a measure of performance on the forecasting errors.

  36. The forecasting error for any period t is the absolute value of the deviation of the forecast for period t (F_t) from what then turns out to be the observed value of the time series for period t (x_t). Thus, letting E_t denote this error, E_t = |x_t - F_t|.

  37. Given the forecasting errors for n time periods (t = 1, 2, …, n), two popular measures of performance are available: the Mean Absolute Deviation, MAD = (E_1 + E_2 + … + E_n) / n, and the Mean Square Error, MSE = (E_1^2 + E_2^2 + … + E_n^2) / n.
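
Both measures can be computed directly from paired forecasts and actual values; a minimal Python sketch with made-up numbers:

    # MAD and MSE from forecast errors E_t = |x_t - F_t| (hypothetical numbers).
    actuals = [100, 110, 105, 120, 115]
    forecasts = [98, 107, 109, 118, 121]

    errors = [abs(x - f) for x, f in zip(actuals, forecasts)]
    mad = sum(errors) / len(errors)                     # mean absolute deviation
    mse = sum(e ** 2 for e in errors) / len(errors)     # mean square error

    print(errors, round(mad, 2), round(mse, 2))         # [2, 3, 4, 2, 6] 3.4 13.8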

  38. The advantages of MAD are (a) its ease of calculation and (b) its straightforward interpretation. The advantage of MSE is that it imposes a relatively large penalty for a large forecasting error while almost ignoring inconsequentially small forecasting errors.

  39. Causal Forecasting with Linear Regression In the preceding sections, we have focused on time series forecasting methods. We now turn to another type of approach to forecasting. Causal forecasting obtains a forecast of the quantity of interest by relating it directly to one or more other quantities that drive the quantity of interest.

  40. Linear Regression We will focus on the type of causal forecasting where the mathematical relationship between the dependent variable and the independent variable(s) is assumed to be a linear one. The analysis in this case is referred to as linear regression.

  41. If the value of variable A is denoted by X and the value of variable B is denoted by Y, then the random variables X and Y exhibit a degree of association: for any given value of variable A, there is a range of possible values of variable B, and vice versa. This relationship between X and Y is referred to as a degree-of-association model.

  42. In some cases, there exists an exact functional relationship between two variables that links them linearly. Both the degree-of-association model and the exact functional relationship model lead to the same linear relationship between X and Y.

  43. With t taking on integer values starting with 1, the time series case leads to certain simplified expressions. In the standard notation of regression analysis, X represents the independent variable and Y represents the dependent variable of interest. Consequently, the notational expression for this special time series model becomes Y = a + bX, with X = t.

  44. Method of Least Squares The usual method for identifying the "best" fitted line, the regression line, is the method of least squares.

  45. Suppose that an arbitrary line, given by the expression y = a + bx, is drawn through the data. A measure of how well this line fits the data can be obtained by computing the sum of squares Q of the vertical deviations of the actual points from the fitted line: Q = Σ [y_i - (a + b x_i)]^2.

  46. This method chooses that line a + bx that makes Q a minimum.

  47. The minimizing values are b = (Σ x_i y_i - n·x̄·ȳ) / (Σ x_i^2 - n·x̄^2) and a = ȳ - b·x̄, where x̄ = (Σ x_i)/n and ȳ = (Σ y_i)/n.
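
These formulas can be applied directly; the Python sketch below fits a and b to a small invented data set and then uses the fitted line for a next-period forecast.

    # Least squares fit of y = a + b*x (hypothetical data points).
    xs = [1, 2, 3, 4, 5, 6]
    ys = [12.0, 14.5, 16.0, 18.5, 19.5, 22.0]

    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n

    b = (sum(x * y for x, y in zip(xs, ys)) - n * x_bar * y_bar) / \
        (sum(x * x for x in xs) - n * x_bar ** 2)
    a = y_bar - b * x_bar

    print(round(a, 3), round(b, 3))                     # fitted intercept and slope
    print(round(a + b * 7, 2))                          # forecast for the next period, x = 7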
