A test to determine whether a time series is stationary or, specifically, whether the null hypothesis of a unit root can be rejected. A time series can be nonstationary because of a deterministic trend (a trend-stationary or TS series), a stochastic trend (a difference-stationary or DS series), or both. Unit root tests are intended to detect a stochastic trend, although they have low power for doing so, and they can give misleading inferences if a deterministic trend is present but is not allowed for. The augmented Dickey-Fuller test, which adds lagged dependent variables to the test equation, is often used. Adding the lagged variables (usually at the rate corresponding to n^(1/3), the cube root of the sample size n) removes distortions to the level of statistical significance but lowers the power of the test to detect a unit root when one is present.

Forecasting with trend-stationary (TS) and difference-stationary (DS) models differs, though there is probably little difference in point forecasts and intervals at short horizons (h = 1 or 2). The point forecasts of a TS series change by a constant amount (other things being equal) as the forecast horizon is incremented, and their prediction intervals are almost constant in width. The point forecasts of a DS series are constant as the horizon is increased (like naive no-change forecasts), other things being equal, while the prediction intervals widen rapidly.

There is a vast literature on unit roots. The expression "unit root test$" ($ indicates a wildcard) generated 281 hits in the EconLit database of Ovid (as of mid-December 1999), although when it was combined with "forecast$," the number fell to 12. Despite this literature, we can say little about the usefulness of a unit-root test, such as the Dickey-Fuller test, as part of a testing strategy to improve forecasting accuracy.

Meese and Geweke (1984) examined 150 quarterly and monthly macroeconomic series and found that forecasts from detrended data (i.e., assuming TS) were more accurate than forecasts from differenced data. Campbell and Perron (1991) conducted a Monte Carlo simulation with an ARMA(1,1) data-generating process and samples of 100. When there was an autoregressive unit root or a near unit root (0.95 or higher), an autoregressive model in differences forecast better at the h = 1 and h = 20 horizons. When there was an autoregressive unit root and the moving-average parameter was 0.9 or less, the model in differences was also better. Otherwise, the AR model in levels with a trend variable was better. Since most economic series appear to contain a unit root, the Campbell and Perron study seems to call for using a DS model, exactly the opposite of the strategy indicated by Meese and Geweke.

But what if the parameter values are unknown? Campbell and Perron also considered a mixed strategy: use a model in levels if both the augmented Dickey-Fuller test and the Phillips-Perron test rejected the unit-root null at the five percent level of significance; otherwise use a model in differences. Such a strategy gave almost as good results as using the better model given knowledge of the parameter values. This slender evidence provides some support for using a unit-root test to select a forecasting model. Maddala and Kim (1998) provide a helpful summary.
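
For concreteness, here is a minimal sketch of the augmented Dickey-Fuller test in Python. The statsmodels package and the two simulated series (a random walk as the DS example, a linear trend plus noise as the TS example) are assumptions of this illustration, not part of the entry. The regression="ct" option allows for a deterministic trend in the test equation, guarding against the misleading inference noted above, and maxlag follows the n^(1/3) rule of thumb.

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(0)
    n = 200
    ds = np.cumsum(rng.normal(size=n))             # DS: random walk, y_t = y_{t-1} + e_t
    ts = 0.05 * np.arange(n) + rng.normal(size=n)  # TS: deterministic trend + stationary noise

    for name, y in [("DS (random walk)", ds), ("TS (trend + noise)", ts)]:
        # "ct" = constant and trend in the test regression; maxlag ~ n^(1/3).
        result = adfuller(y, maxlag=int(round(n ** (1 / 3))),
                          regression="ct", autolag="AIC")
        stat, pvalue = result[0], result[1]
        # A small p-value rejects the unit-root null (series looks TS);
        # a large one means a unit root cannot be rejected (treat as DS).
        print(f"{name}: ADF stat = {stat:.2f}, p-value = {pvalue:.3f}")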
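
The contrast between TS and DS prediction intervals can be made concrete with a little arithmetic. Assuming a known innovation standard deviation sigma (an assumption of this sketch), a TS model's h-step-ahead forecast-error standard deviation is roughly sigma at every horizon, whereas for a driftless random walk it is sigma * sqrt(h), so 95% intervals widen with the square root of the horizon. The numbers below are purely illustrative:

    import numpy as np

    sigma = 1.0         # innovation standard deviation (assumed known)
    last_level = 100.0  # last observed value of the series

    for h in [1, 2, 10, 20]:
        ts_halfwidth = 1.96 * sigma                # TS: interval width ~ constant in h
        ds_point = last_level                      # DS: naive no-change point forecast
        ds_halfwidth = 1.96 * sigma * np.sqrt(h)   # DS: half-width grows as sqrt(h)
        print(f"h={h:2d}: TS 95% half-width ~ {ts_halfwidth:.2f}, "
              f"DS point forecast = {ds_point:.0f}, "
              f"DS 95% half-width ~ {ds_halfwidth:.2f}")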
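
Finally, the Campbell and Perron mixed strategy can be sketched as a small decision rule. The arch package's ADF and PhillipsPerron classes are an assumption of this example (any implementation of the two tests would serve); the five percent threshold follows the entry.

    import numpy as np
    from arch.unitroot import ADF, PhillipsPerron

    def choose_model(y, alpha=0.05):
        """Campbell-Perron rule: model in levels with a trend if both tests
        reject the unit-root null at level alpha, otherwise differences."""
        adf = ADF(y, trend="ct")            # test regressions allow a trend
        pp = PhillipsPerron(y, trend="ct")
        if adf.pvalue < alpha and pp.pvalue < alpha:
            return "AR in levels with a trend variable (TS)"
        return "AR in differences (DS)"

    rng = np.random.default_rng(1)
    print(choose_model(np.cumsum(rng.normal(size=200))))               # random walk -> differences
    print(choose_model(0.05 * np.arange(200) + rng.normal(size=200)))  # trend + noise -> levels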