My (rough) notes from the first part of the "Theory and Algorithms for Forecasting Non-Stationary Time Series" NIPS tutorial on 12/15/2016.

• Time series prediction appears in many real-world applications
    • stocks
    • economic variables
    • weather
    • sensors
    • earthquakes
    • energy demand
    • signal processing
    • sales forecasting
• Time series are challenging - different from what we typically see in machine learning.
• Classic framework (a minimal code sketch follows this list):
    • postulate a generative model
    • use the given sample to estimate unknown parameters
    • use the estimated model to make predictions
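A minimal sketch of this three-step workflow in Python, assuming the statsmodels ARIMA API is available; the toy series and the (2, 1, 1) order are illustrative choices, not from the tutorial:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))  # toy series (a random walk)

model = ARIMA(y, order=(2, 1, 1))    # 1. postulate a generative model
result = model.fit()                 # 2. estimate unknown parameters from the sample
print(result.forecast(steps=5))      # 3. use the estimated model to make predictions
```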
• Autoregressive (AR) models: the next observation is a weighted linear combination of past values.
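As an equation (the standard AR(p) form, with $$\epsilon_t$$ the noise term at time $$t$$):

$$Y_t = \sum_{i=1}^{p} a_i Y_{t-i} + \epsilon_t$$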
• Moving average (MA) models: the observation is a weighted linear combination of the noise (error) terms at previous times
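In the same notation, the standard MA(q) form:

$$Y_t = \sum_{j=1}^{q} b_j \epsilon_{t-j} + \epsilon_t$$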
• ARMA model: combines autoregressive and moving average models.
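Combining the two gives the ARMA(p, q) form:

$$Y_t = \sum_{i=1}^{p} a_i Y_{t-i} + \sum_{j=1}^{q} b_j \epsilon_{t-j} + \epsilon_t$$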
• Stationarity: a sequence of random variables is stationary if the joint distribution of any finite block of them is invariant wrt time shifts
• Weak stationarity: only the first two moments must be invariant wrt time - the mean is constant and the autocovariance (hence the variance) depends only on the lag.
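In symbols (standard definitions; the notation is mine): stationarity requires, for every $$k$$, shift $$h$$, and times $$t_1, \dots, t_k$$,

$$(Y_{t_1}, \dots, Y_{t_k}) \overset{d}{=} (Y_{t_1+h}, \dots, Y_{t_k+h})$$

while weak stationarity only requires

$$\mathbb{E}[Y_t] = \mu \quad \text{and} \quad \mathrm{Cov}(Y_t, Y_{t+h}) = \gamma(h) \text{ for all } t, h$$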
• Lag operator: $$L(Y_t) = Y_{t-1}$$
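Applying it repeatedly gives $$L^k(Y_t) = Y_{t-k}$$, so the differencing operator in the next bullet acts as $$(1-L)(Y_t) = Y_t - Y_{t-1}$$.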
• ARIMA: an ARMA model of the process obtained by applying $$(1-L)^D$$ to $$Y_t$$, i.e., differencing the series $$D$$ times.
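That is, the differenced series $$Z_t = (1-L)^D Y_t$$ is modeled by the ARMA(p, q) equation above:

$$Z_t = \sum_{i=1}^{p} a_i Z_{t-i} + \sum_{j=1}^{q} b_j \epsilon_{t-j} + \epsilon_t$$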
• Different methods for estimating model parameters (a Yule-Walker sketch follows this list):
    • Maximum likelihood
    • Method of moments
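A minimal numpy sketch of the method-of-moments route for an AR(p) model, via the Yule-Walker equations; the function name and interface here are mine, not from the tutorial:

```python
import numpy as np

def yule_walker_ar(y, p):
    """Method-of-moments estimate of AR(p) coefficients a_1..a_p.

    Matches sample autocovariances to the model's: solves R a = r,
    where R[i, j] = gamma(|i - j|) and r[i] = gamma(i + 1).
    """
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    n = len(y)
    # Sample autocovariances gamma(0), ..., gamma(p)
    gamma = np.array([y[: n - k] @ y[k:] / n for k in range(p + 1)])
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, gamma[1:])

# Sanity check on a simulated AR(2): estimates should be close to (0.6, -0.3).
rng = np.random.default_rng(0)
y = np.zeros(5000)
for t in range(2, len(y)):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
print(yule_walker_ar(y, p=2))
```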