
Bangladesh Short term Discharge Forecasting




Presentation Transcript


  1. Bangladesh Short term Discharge Forecasting: time series forecasting. Tom Hopson. A project supported by USAID.

  2. Forecasting Probabilities. Probabilistic rainfall and discharge forecasts: rainfall [mm] and discharge [m^3/s]. Probability of exceeding the danger level: 36%. Is this greater than the climatological seasonal risk?
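An exceedance probability like the one above can be read straight off an ensemble forecast: it is the fraction of ensemble members above the danger level. A minimal sketch; the ensemble mean, spread, and danger level below are illustrative numbers, not values from the project:

```python
import numpy as np

# Hypothetical ensemble of discharge forecasts [m^3/s] for one day
# (assumed Gaussian spread for illustration only)
rng = np.random.default_rng(42)
ensemble = rng.normal(loc=48000.0, scale=6000.0, size=1000)
danger_level = 50000.0  # assumed threshold, not the real danger level

# Forecast probability = fraction of ensemble members above the threshold
p_exceed = np.mean(ensemble > danger_level)
print(f"Above danger level probability: {p_exceed:.0%}")
```

The same fraction computed from a climatological ensemble would give the seasonal risk the slide asks to compare against.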

  3. Data-Based Modeling: Linear Transfer Function Approach (used for the lumped model). Mass balance dS/dt = u - Q, combined with the linear store relation Q = S/T, gives T dQ/dt = u - Q. For a catchment composed of linear stores in series and in parallel (using finite differences): Q(t) = a1*u(t-1) + a2*u(t-2) + ... + am*u(t-m) + b1*Q(t-1) + b2*Q(t-2) + ... + bn*Q(t-n), where u is the effective catchment-averaged rainfall, derived from the non-linear rainfall filter u(t) = (Q(t))^c * R(t). Reference: Beven, 2000.
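Once coefficients are known, the difference equation above can be simulated directly. A minimal sketch with made-up (uncalibrated) coefficients and synthetic effective rainfall:

```python
import numpy as np

def simulate_ltf(u, a, b, q_init):
    """Simulate Q(t) = sum_i a_i*u(t-i) + sum_j b_j*Q(t-j).

    u: effective rainfall series; a: (a1..am); b: (b1..bn);
    q_init: discharge values for the first max(m, n) time steps.
    """
    m, n = len(a), len(b)
    start = max(m, n)
    assert len(q_init) == start
    q = list(q_init)
    for t in range(start, len(u)):
        q_t = sum(a[i] * u[t - 1 - i] for i in range(m))
        q_t += sum(b[j] * q[t - 1 - j] for j in range(n))
        q.append(q_t)
    return np.array(q)

# Illustrative run: coefficients and rainfall are made up, not calibrated
rng = np.random.default_rng(0)
rain = rng.gamma(shape=2.0, scale=5.0, size=100)  # effective rainfall [mm]
q = simulate_ltf(rain, a=[0.3, 0.1], b=[0.6], q_init=[10.0, 10.0])
```

With b1 < 1 the simulated store is stable: discharge decays between rainfall pulses, as in a linear reservoir.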

  4. Linear Transfer Function Approach (cont.). For a 3-day forecast, say: Q(t+3) = a1*u(t+2) + a2*u(t+1) + ... + am*u(t-m) + b1*Q(t-1) + b2*Q(t-2) + ... + bn*Q(t-n). Our approach: for each day and forecast lead time, use the AIC (Akaike information criterion) to optimize the a's, m, the b's, n, c, and the precipitation smoothing. Residuals (model biases) are then corrected using an ARMA (auto-regressive moving average) model, for which routines are available in R.
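The residual correction above is an ARMA model; as a hedged illustration, the sketch below uses the simplest special case, an AR(1) fit to past forecast errors, to adjust a lead-3 forecast. The function name and all numbers are made up for the example:

```python
import numpy as np

def ar1_correct(residuals, forecast, lead=3):
    """Nudge a forecast using an AR(1) model of its past residuals.

    A minimal stand-in for the ARMA correction described in the text.
    """
    r = np.asarray(residuals, dtype=float)
    # Least-squares estimate of the lag-1 autoregressive coefficient phi
    phi = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])
    # Under AR(1), the expected residual `lead` steps ahead is phi**lead
    # times the most recent residual
    return forecast + (phi ** lead) * r[-1]

past_resid = [2.0, 1.5, 1.2, 0.9, 0.8]  # recent model biases (made up)
corrected = ar1_correct(past_resid, forecast=100.0, lead=3)
```

Because the recent residuals are persistently positive, the correction raises the raw forecast slightly; a full ARMA(p, q) fit adds moving-average terms to the same idea.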

  5. Autoregressive integrated moving average (ARIMA). In time series analysis, an autoregressive integrated moving average (ARIMA) model is a generalisation of an autoregressive moving average (ARMA) model. These models are fitted to time series data either to better understand the data or to predict future points in the series. They are applied in cases where the data show evidence of non-stationarity, and an initial differencing step (corresponding to the "integrated" part of the model) can be applied to remove that non-stationarity. The model is generally referred to as an ARIMA(p, d, q) model, where p, d, and q are non-negative integers giving the order of the autoregressive, integrated, and moving-average parts of the model, respectively. ARIMA models form an important part of the Box-Jenkins approach to time-series modelling. Reference: Chatfield, C. (1996). The Analysis of Time Series: An Introduction. Texts in Statistical Science.
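The "integrated" (d) part of ARIMA is just repeated differencing. A small sketch showing that one differencing pass turns a linear trend plus noise (non-stationary in the mean) into a roughly stationary series an ARMA model can handle:

```python
import numpy as np

# Synthetic non-stationary series: linear trend plus white noise
rng = np.random.default_rng(1)
t = np.arange(200)
series = 0.5 * t + rng.normal(0.0, 1.0, size=200)

# d = 1 differencing: y'(t) = y(t) - y(t-1)
diff1 = np.diff(series)

# The differenced series fluctuates around the constant trend slope (0.5)
# instead of growing without bound
```

For d = 2 one would apply `np.diff` twice; fitting the remaining ARMA(p, q) structure is then done on the differenced series.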

  6. Semi-distributed Model: a 2-layer model for the soil moisture states S1 and S2. Parameters are to be estimated from the FAO soil map of the world. The equations are solved with a 6-hour time step (for daily 0Z discharge) using a 4th-order Runge-Kutta semi-implicit scheme. Here t_s1, t_p, and t_s2 are time constants; r_s1 and r_s2 are reservoir depths.
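As an illustration of this kind of scheme, the sketch below integrates two linear stores in series with a classical (explicit) 4th-order Runge-Kutta step and a 6-hour time step. The governing equations, time constants, and rainfall rate are assumed for the example; the semi-implicit variant and the full 2-layer parameterisation of the slide are not reproduced:

```python
import numpy as np

def rk4_step(f, s, dt):
    """One classical 4th-order Runge-Kutta step for ds/dt = f(s)."""
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Assumed toy system: dS1/dt = P - S1/t_s1, dS2/dt = S1/t_s1 - S2/t_s2,
# with discharge Q = S2/t_s2.  Time constants in days, rainfall in mm/day.
t_s1, t_s2, P = 2.0, 5.0, 1.0
f = lambda s: np.array([P - s[0] / t_s1, s[0] / t_s1 - s[1] / t_s2])

s = np.array([0.0, 0.0])  # start with both stores empty
dt = 0.25                 # 6-hour time step expressed in days
for _ in range(400):      # integrate 100 days, long enough to equilibrate
    s = rk4_step(f, s, dt)
q = s[1] / t_s2           # at steady state, discharge balances rainfall P
```

At equilibrium the stores hold S1 = P*t_s1 and S2 = P*t_s2, so the outflow converges to the input rate P, which is a quick sanity check on the integrator.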

  7. Model selection -- Akaike information criterion. Akaike's information criterion, developed by Hirotugu Akaike under the name "an information criterion" (AIC) in 1971 and proposed in Akaike (1974), is a measure of the goodness of fit of an estimated statistical model. It is grounded in the concept of entropy, in effect offering a relative measure of the information lost when a given model is used to describe reality; it can be said to describe the trade-off between bias and variance in model construction, or, loosely speaking, between the precision and the complexity of the model. The AIC is not a test on the model in the sense of hypothesis testing; rather, it is a tool for model selection. Given a data set, several competing models may be ranked according to their AIC, with the one having the lowest AIC being the best. From the AIC values one may infer, for example, that the top three models are nearly tied and the rest are far worse, but one should not assign a threshold above which a given model is "rejected".

  8. Model selection -- Akaike information criterion. AIC = 2k - 2 ln(L), where k is the number of model parameters and L is the maximized value of the likelihood (for least-squares fitting, this is determined by the sum of squared errors). Bayesian information criterion: BIC = ln(n) k - 2 ln(L), where n is the number of data points. The BIC penalty term is more demanding than that of the AIC.
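For Gaussian least-squares errors, the formulas above reduce (up to an additive constant) to expressions in the residual sum of squares: AIC = n ln(RSS/n) + 2k and BIC = n ln(RSS/n) + ln(n) k. A sketch that uses both to choose a polynomial degree for synthetic, truly linear data:

```python
import numpy as np

def aic_bic(rss, n, k):
    """Gaussian-error AIC and BIC from the residual sum of squares."""
    fit_term = n * np.log(rss / n)
    return fit_term + 2 * k, fit_term + np.log(n) * k

# Synthetic data with a genuinely linear signal
rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 100)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, size=100)

scores = {}
for degree in range(0, 5):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    scores[degree] = aic_bic(rss, len(x), k=degree + 1)

best_aic = min(scores, key=lambda d: scores[d][0])
best_bic = min(scores, key=lambda d: scores[d][1])
```

Because ln(100) > 2, the BIC penalty grows faster with k, so BIC never selects a higher-order polynomial than AIC does, illustrating the "more demanding" penalty noted above.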

  9. Model selection -- Cross-validation. The most robust approach, but also the most computationally demanding. Set aside part of the data for testing and "train" on the remainder; it is best to cycle through so that all of the data are used for testing. For example, if the data are divided into halves (the minimum), twice the computation is required.
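The half-and-half scheme described above can be sketched in a few lines; the linear model and synthetic data are illustrative:

```python
import numpy as np

# Synthetic data for a simple linear fit
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 80)
y = 3.0 * x + rng.normal(0.0, 0.2, size=80)

# Randomly split the indices into two folds of equal size
idx = rng.permutation(len(x))
folds = [idx[:40], idx[40:]]

# Train on one half, test on the other, then swap (2x the computation)
errors = []
for test_fold, train_fold in [(folds[0], folds[1]), (folds[1], folds[0])]:
    coeffs = np.polyfit(x[train_fold], y[train_fold], 1)
    pred = np.polyval(coeffs, x[test_fold])
    errors.append(np.mean((y[test_fold] - pred) ** 2))

cv_mse = np.mean(errors)  # every point is used for testing exactly once
```

More folds (e.g. 10-fold) use the data more efficiently per fit but multiply the computation accordingly, which is the trade-off the slide points at.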
