
SAS/ETS 9.2 User’s Guide

The correct bibliographic citation for this manual is as follows: SAS Institute Inc. 2008. SAS/ETS ® 9.2 User’s Guide. Cary, NC: SAS Institute Inc.

SAS/ETS® 9.2 User’s Guide

Copyright © 2008, SAS Institute Inc., Cary, NC, USA

ISBN 978-1-59047-949-0

All rights reserved. Produced in the United States of America.

For a hard-copy book: No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, or otherwise, without the prior written permission of the publisher, SAS Institute Inc.

For a Web download or e-book: Your use of this publication shall be governed by the terms established by the vendor at the time you acquire this publication.

U.S. Government Restricted Rights Notice: Use, duplication, or disclosure of this software and related documentation by the U.S. government is subject to the Agreement with SAS Institute and the restrictions set forth in FAR 52.227-19, Commercial Computer Software-Restricted Rights (June 1987).

SAS Institute Inc., SAS Campus Drive, Cary, North Carolina 27513.

1st electronic book, March 2008
2nd electronic book, February 2009
1st printing, March 2009

SAS® Publishing provides a complete selection of books and electronic products to help customers use SAS software to its fullest potential. For more information about our e-books, e-learning products, CDs, and hard-copy books, visit the SAS Publishing Web site at support.sas.com/publishing or call 1-800-727-3228.

SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are registered trademarks or trademarks of their respective companies.

Contents

Part I: General Information
    Chapter 1. What’s New in SAS/ETS
    Chapter 2. Introduction
    Chapter 3. Working with Time Series Data
    Chapter 4. Date Intervals, Formats, and Functions
    Chapter 5. SAS Macros and Functions
    Chapter 6. Nonlinear Optimization Methods

Part II: Procedure Reference
    Chapter 7. The ARIMA Procedure
    Chapter 8. The AUTOREG Procedure
    Chapter 9. The COMPUTAB Procedure
    Chapter 10. The COUNTREG Procedure
    Chapter 11. The DATASOURCE Procedure
    Chapter 12. The ENTROPY Procedure (Experimental)
    Chapter 13. The ESM Procedure
    Chapter 14. The EXPAND Procedure
    Chapter 15. The FORECAST Procedure
    Chapter 16. The LOAN Procedure
    Chapter 17. The MDC Procedure
    Chapter 18. The MODEL Procedure
    Chapter 19. The PANEL Procedure
    Chapter 20. The PDLREG Procedure
    Chapter 21. The QLIM Procedure
    Chapter 22. The SIMILARITY Procedure (Experimental)
    Chapter 23. The SIMLIN Procedure
    Chapter 24. The SPECTRA Procedure
    Chapter 25. The STATESPACE Procedure
    Chapter 26. The SYSLIN Procedure
    Chapter 27. The TIMESERIES Procedure
    Chapter 28. The TSCSREG Procedure
    Chapter 29. The UCM Procedure
    Chapter 30. The VARMAX Procedure
    Chapter 31. The X11 Procedure
    Chapter 32. The X12 Procedure

Part III: Data Access Engines
    Chapter 33. The SASECRSP Interface Engine
    Chapter 34. The SASEFAME Interface Engine
    Chapter 35. The SASEHAVR Interface Engine

Part IV: Time Series Forecasting System
    Chapter 36. Overview of the Time Series Forecasting System
    Chapter 37. Getting Started with Time Series Forecasting
    Chapter 38. Creating Time ID Variables
    Chapter 39. Specifying Forecasting Models
    Chapter 40. Choosing the Best Forecasting Model
    Chapter 41. Using Predictor Variables
    Chapter 42. Command Reference
    Chapter 43. Window Reference
    Chapter 44. Forecasting Process Details

Part V: Investment Analysis
    Chapter 45. Overview
    Chapter 46. Portfolios
    Chapter 47. Investments
    Chapter 48. Computations
    Chapter 49. Analyses
    Chapter 50. Details

Subject Index
Syntax Index

Credits and Acknowledgments

Credits

Documentation Editing: Ed Huddleston, Anne Jones

Technical Review: Jackie Allen, Evan L. Anderson, Ming-Chun Chang, Jan Chvosta, Brent Cohen, Allison Crutchfield, Paige Daniels, Gül Ege, Bruce Elsheimer, Donald J. Erdman, Kelly Fellingham, Laura Jackson, Wilma S. Jackson, Wen Ji, Kurt Jones, Kathleen Kiernan, Jennifer Sloan, Michael J. Leonard, Mark R. Little, Kevin Meyer, Gina Marie Mondello, Steve Morrison, Youngjin Park, David Schlotzhauer, Jim Seabolt, Rajesh Selukar, Arthur Sinko, Michele A. Trovero, Charles Sun, Donna E. Woodward

Documentation Production: Michele A. Trovero, Tim Arnold

Software

The procedures in SAS/ETS software were implemented by members of the Advanced Analytics department. Program development includes design, programming, debugging, support, documentation, and technical review. In the following list, the names of the developers who currently support the procedure are listed first.

ARIMA: Rajesh Selukar, Michael J. Leonard, Terry Woodfield
AUTOREG: Xilong Chen, Arthur Sinko, Jan Chvosta, John P. Sall
COMPUTAB: Michael J. Leonard, Alan R. Eaton, David F. Ross
COUNTREG: Jan Chvosta, Laura Jackson
DATASOURCE: Kelly Fellingham, Meltem Narter
ENTROPY: Xilong Chen, Arthur Sinko, Greg Sterijevski, Donald J. Erdman
ESM: Michael J. Leonard
EXPAND: Marc Kessler, Michael J. Leonard, Mark R. Little
FORECAST: Michael J. Leonard, Mark R. Little, John P. Sall
LOAN: Gül Ege
MDC: Jan Chvosta
MODEL: Donald J. Erdman, Mark R. Little, John P. Sall
PANEL: Jan Chvosta, Greg Sterijevski
PDLREG: Xilong Chen, Arthur Sinko, Jan Chvosta, Leigh A. Ihnen
QLIM: Jan Chvosta
SIMILARITY: Michael J. Leonard
SIMLIN: Mark R. Little, John P. Sall
SPECTRA: Rajesh Selukar, Donald J. Erdman, John P. Sall
STATESPACE: Donald J. Erdman, Michael J. Leonard
SYSLIN: Donald J. Erdman, Leigh A. Ihnen, John P. Sall
TIMESERIES: Marc Kessler, Michael J. Leonard
TSCSREG: Jan Chvosta, Meltem Narter
UCM: Rajesh Selukar
VARMAX: Youngjin Park
X11: Wilma S. Jackson, R. Bart Killam, Leigh A. Ihnen, Richard D. Langston
X12: Wilma S. Jackson
Time Series Forecasting System: Evan L. Anderson, Michael J. Leonard, Meltem Narter, Gül Ege
Investment Analysis System: Gül Ege, Scott Gray, Michael J. Leonard
Compiler and Symbolic Differentiation: Stacey Christian
SASEHAVR: Kelly Fellingham
SASECRSP: Kelly Fellingham, Peng Zang
SASEFAME: Kelly Fellingham

Testing: Jackie Allen, Ming-Chun Chang, Bruce Elsheimer, Kelly Fellingham, Jennifer Sloan, Charles Sun, Linda Timberlake, Mark Traccarella, Peng Zang

Technical Support Members: Paige Daniels, Wen Ji, Kurt Jones, Kathleen Kiernan, Kevin Meyer, Gina Marie Mondello, David Schlotzhauer, Donna E. Woodward

Acknowledgments

Hundreds of people have helped the SAS System in many ways since its inception. The following individuals have been especially helpful in the development of the procedures in SAS/ETS software. Acknowledgments for the SAS System generally appear in Base SAS® software documentation and SAS/ETS software documentation.


David Amick, Idaho Office of Highway Safety
David M. DeLong, Duke University
David Dickey, North Carolina State University
Douglas J. Drummond, Center for Survey Statistics
William Fortney, Boeing Computer Services
Wayne Fuller, Iowa State University
A. Ronald Gallant, The University of North Carolina at Chapel Hill
Phil Hanser, Sacramento Municipal Utilities District
Marvin Jochimsen, Mississippi R&O Center
Jeff Kaplan, Sun Guard
Ken Kraus, Center for Research in Security Prices
George McCollister, San Diego Gas & Electric
Douglas Miller, Purdue University
Brian Monsell, U.S. Census Bureau
Robert Parks, Washington University
Gregory Sali, Idaho Office of Highway Safety
Bob Spatz, Center for Research in Security Prices
Mary Young, Salt River Project

The final responsibility for the SAS System lies with SAS Institute alone. We hope that you will always let us know your opinions about the SAS System and its documentation. It is through your participation that SAS software is continuously improved.


Part I

General Information


Chapter 1

What’s New in SAS/ETS

Contents
    What’s New in SAS/ETS for SAS 9.2
        Overview
        AUTOREG Procedure
        COUNTREG Procedure
        DATASOURCE Procedure
        New ESM Procedure
        MODEL Procedure
        PANEL Procedure
        QLIM Procedure
        SASECRSP Engine
        SASEFAME Engine
        SASEHAVR Engine
        New SIMILARITY Procedure (Experimental)
        UCM Procedure
        VARMAX Procedure
        X12 Procedure
    References

This chapter summarizes the new features available in SAS/ETS software with SAS 9.2. It also describes other new features that were added with SAS 9.1.

What’s New in SAS/ETS for SAS 9.2

Overview

Many SAS/ETS procedures now produce graphical output using the SAS Output Delivery System. This output is produced when you turn on ODS graphics with the following ODS statement:

   ods graphics on;
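For example, a minimal sketch of this pattern (the data set and variable names here are hypothetical) wraps a procedure step in ODS graphics statements:

   ods graphics on;
   proc autoreg data=mydata;
      model y = x;
   run;
   ods graphics off;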


Several procedures now support the PLOTS= option to control the graphical output produced. (See the chapters for individual SAS/ETS procedures for details on the plots supported.)

With SAS 9.2, SAS/ETS offers three new modules:

• The new ESM procedure provides forecasting using exponential smoothing models with optimized smoothing weights.

• The SASEHAVR interface engine is now production and available to Windows users for accessing economic and financial data residing in a HAVER ANALYTICS Data Link Express (DLX) database.

• The new SIMILARITY (experimental) procedure provides similarity analysis of time series data.

New features have been added to the following SAS/ETS components:

• PROC AUTOREG
• PROC COUNTREG
• PROC DATASOURCE
• PROC MODEL
• PROC PANEL
• PROC QLIM
• SASECRSP Interface Engine
• SASEFAME Interface Engine
• SASEHAVR Interface Engine
• PROC UCM
• PROC VARMAX
• PROC X12

AUTOREG Procedure

Two new features have been added to the AUTOREG procedure.

• An alternative test for stationarity, proposed by Kwiatkowski, Phillips, Schmidt, and Shin (KPSS), is implemented. The null hypothesis for this test is a stationary time series, which is a natural choice for many applications. Bartlett and quadratic spectral kernels for estimating long-run variance can be used. Automatic bandwidth selection is an option.


• Corrected Akaike information criterion (AICC) is implemented. This modification of AIC corrects for small-sample bias. Along with the corrected Akaike information criterion, the mean absolute error (MAE) and mean absolute percentage error (MAPE) are now included in the summary statistics.
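The following sketch shows one way to request the new test; it assumes the KPSS test is invoked through the STATIONARITY= option of the MODEL statement (the data set and variable names are hypothetical; see Chapter 8 for the exact suboptions):

   proc autoreg data=mydata;
      model y = time / stationarity=(kpss);
   run;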

COUNTREG Procedure

Often the data being analyzed take the form of nonnegative integer (count) values. The new COUNTREG procedure implements count data models that take this discrete nature of the data into consideration. The dependent variable in these models is a count that represents various discrete events (such as number of accidents, number of doctor visits, or number of children). The conditional mean of the dependent variable is a function of various covariates. Typically, you are interested in estimating the probability of the number of event occurrences using maximum likelihood estimation. The COUNTREG procedure supports the following types of models:

• Poisson regression
• negative binomial regression with linear (NEGBIN1) and quadratic (NEGBIN2) variance functions (Cameron and Trivedi 1986)
• zero-inflated Poisson (ZIP) model (Lambert 1992)
• zero-inflated negative binomial (ZINB) model
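As a minimal sketch of fitting one of these models (the data set and regressors are hypothetical), the DIST= option of the MODEL statement selects the conditional distribution:

   proc countreg data=docvisits;
      model visits = age income / dist=negbin(p=2);
   run;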

DATASOURCE Procedure

PROC DATASOURCE now supports the newest Compustat Industrial Universal Character Annual and Quarterly data by providing the new filetypes CSAUCY3 for annual data and CSQUCY3 for quarterly data.

New ESM Procedure

The ESM (Exponential Smoothing Models) procedure provides a quick way to generate forecasts for many time series or transactional data in one step. All parameters associated with the forecast model are optimized based on the data.
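For example, a minimal PROC ESM sketch (the data set and variable names are hypothetical) that forecasts twelve months ahead:

   proc esm data=sales out=forecasts lead=12;
      id date interval=month;
      forecast units;
   run;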

MODEL Procedure

The t copula and the normal mixture copula have been added to the MODEL procedure. Both copulas support asymmetric parameters. The copula is used to modify the correlation structure of


the model residuals for simulation.

Starting with SAS 9.2, the MODEL procedure stores MODEL files in SAS data sets using an XML-like format instead of in SAS catalogs. This makes MODEL files more readily extendable in the future and enables Java-based applications to read the MODEL files directly. More information is stored in the new format MODEL files; this enables some features that are not available when the catalog format is used.

The MODEL procedure continues to read and write old-style catalog MODEL files, and model files created by previous releases of SAS/ETS continue to work, so you should experience no direct impact from this change.

The CMPMODEL= option can be used in an OPTIONS statement to modify the behavior of the MODEL procedure when reading and writing MODEL files. The values allowed are CMPMODEL=BOTH | XML | CATALOG. For example, the following statement restores the previous behavior:

   options cmpmodel=catalog;

The CMPMODEL= option defaults to BOTH in SAS 9.2; this option is intended for transitional use while customers become accustomed to the new file format. If CMPMODEL=BOTH, the MODEL procedure writes both formats; when loading model files, PROC MODEL attempts to load the XML version first and the CATALOG version second (if the XML version is not found). If CMPMODEL=XML, the MODEL procedure reads and writes only the XML format. If CMPMODEL=CATALOG, only the catalog format is used.
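Conversely, to work exclusively with the new XML format, set the option to the XML value from the list above:

   options cmpmodel=xml;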

PANEL Procedure

The PANEL procedure expands the estimation capability of the TSCSREG procedure in the time series cross-sectional framework. The new methods include between estimators, pooled estimators, and dynamic panel estimators that use the GMM method. Creating lags of variables in a panel setting is simplified by the LAG statement. Because the presence of heteroscedasticity can result in inefficient and biased estimates of the variance-covariance matrix in the OLS framework, several methods that produce heteroscedasticity-corrected covariance matrices (HCCME) are added. The new RESTRICT statement specifies linear restrictions on the parameters. New ODS Graphics plots simplify model development by providing visual analytical tools.
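A minimal PROC PANEL sketch (the data set, ID variables, and regressors are hypothetical; FIXONE requests a one-way fixed-effects model, and HCCME= requests a heteroscedasticity-corrected covariance matrix):

   proc panel data=mydata;
      id firm year;
      model y = x1 x2 / fixone hccme=2;
   run;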

QLIM Procedure

Stochastic frontier models are now available in the QLIM procedure. Specification of these models allows for random shocks of production or cost along with technological or cost inefficiencies. The nonnegative error-term component that represents technological or cost inefficiencies has half-normal, exponential, or truncated normal distributions.
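One plausible form of such a specification is sketched below; the data set and variables are hypothetical, and the exact spelling of the frontier options should be checked against the QLIM chapter:

   proc qlim data=mydata;
      model lnoutput = lnlabor lncapital;
      endogenous lnoutput ~ frontier(type=half production);
   run;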

SASECRSP Engine

SASECRSP Engine The SASECRSP interface now supports reading of CRSP stock, indices, and combined stock/indices databases by using a variety of keys, not just CRSP’s primary key PERMNO. In addition, SASECRSP can now read the CRSP/Compustat Merged (CCM) database and fully supports cross-database access, enabling you to access the CCM database by CRSP’s main identifiers PERMNO and PERMCO, as well as to access the CRSP Stock databases by Compustat’s GVKEY identifier. A list of other new features follows:  SASECRSP now fully supports access of fiscal CCM data members by both fiscal and calendar date range restrictions. Fiscal to calendar date shifting has been added as well.  New date fields have been added for CCM fiscal members. Now fiscal members have three different dates: a CRSP date, a fiscal integer date, and a calendar integer date.  An additional date function has been added which enables you to convert from fiscal to calendar dates.  Date range restriction for segment members has also been added.

SASEFAME Engine

The SASEFAME interface enables you to access and process financial and economic time series data that resides in a FAME database. SASEFAME for SAS 9.2 supports Windows, Solaris, AIX, Linux, Linux Opteron, and HP-UX hosts. You can now use the SAS windowing environment to view FAME data and use the SAS viewtable commands to navigate your FAME database. You can select the time span of data by specifying a range of dates in the RANGE= option. You can use an input SAS data set with a WHERE clause to specify selection of variables based on BY variables, such as tickers or issues stored in a FAME string case series. You can use a FAME crosslist to perform selection based on the crossproduct of two FAME namelists. The new FAMEOUT= option now supports the following classes and types of data series objects: FORMULA, TIME, BOOLEAN, CASE, DATE, and STRING.

It is easy to use a SAS input data set with the INSET= option to create a specific view of your FAME data. Multiple views can be created by using multiple LIBNAME statements with customized options tailored to the unique view that you want to create. See “Example 34.10: Selecting Time Series Using CROSSLIST= Option with INSET= and WHERE=TICK” on page 2325 in Chapter 34, “The SASEFAME Interface Engine.” The INSET variables define the BY variables that enable you to view cross sections or slices of your data. When used in conjunction with the WHERE clause and the CROSSLIST= option, SASEFAME can show any or all of your BY groups in the same view or in multiple views. The INSET= option is invalid without a WHERE clause that specifies the BY variables you want to use in your view, and it must be used with the CROSSLIST= option.


The CROSSLIST= option provides a more efficient means of selecting cross sections of financial time series data. This option can be used without using the INSET= option. There are two methods for performing the crosslist selection function. The first method uses two FAME namelists, and the second method uses one namelist and one BY group specified in the WHERE= clause of the INSET= option. See “Example 34.9: Selecting Time Series Using CROSSLIST= Option with a FAME Namelist of Tickers” on page 2322 in Chapter 34, “The SASEFAME Interface Engine.”

The FAMEOUT= option provides efficient selection of the class and type of the FAME data series objects you want in your SAS output data set. The possible values for fame_data_object_class_type are FORMULA, TIME, BOOLEAN, CASE, DATE, and STRING. If the FAMEOUT= option is not specified, numeric time series are output to the SAS data set. FAMEOUT=CASE defaults to case series of numeric type, so if you want another type of case series in your output, then you must specify it. Scalar data objects are not supported. See “Example 34.6: Reading Other FAME Data Objects with the FAMEOUT= Option” on page 2316 in Chapter 34, “The SASEFAME Interface Engine.”
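A minimal LIBNAME sketch (the physical path is hypothetical) that limits the time span of the FAME data that is read:

   libname myfame sasefame 'physical-path-to-fame-database'
           range='01jan1990'd - '31dec2005'd;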

SASEHAVR Engine

The SASEHAVR interface engine is now production, giving Windows users random access to economic and financial data that resides in a Haver Analytics Data Link Express (DLX) database. You can now use the SAS windowing environment to view HAVERDLX data and use the SAS viewtable commands to navigate your Haver database. You can use the SQL procedure to create a view of your resulting SAS data set. You can limit the range of data that is read from the time series and specify a desired conversion frequency. Start dates are recommended in the LIBNAME statement to help you save resources when processing large databases or when processing a large number of observations. You can further subset your data by using the WHERE, KEEP, or DROP statements in your DATA step. New options are provided for more efficient subsetting by time series variables, groups, or sources. You can force the aggregation of all variables selected to be of the frequency specified by the FREQ= option if you also specify the FORCE=FREQ option. Aggregation is supported only from a more frequent time interval to a less frequent time interval, such as from weekly to monthly.

A list of other new features follows:

• You can see the available data sets in the SAS LIBNAME window of the SAS windowing environment by selecting the SASEHAVR libref in the LIBNAME window that you have previously used in your LIBNAME statement. You can view your SAS output observations by double clicking on the desired output data set libref in the libname window of the SAS windowing environment. You can type Viewtable on the SAS command line to view any of your SASEHAVR tables, views, or librefs, both for input and output data sets.

• By default, the SASEHAVR engine reads all time series in the Haver database that you reference by using your SASEHAVR libref. The START= option is specified in the form YYYYMMDD, as is the END= option. The start and end dates are used to limit the time span of data; they can help you save resources when processing large databases or when processing a large number of observations.


• It is also possible to select specific variables to be included or excluded from the SAS data set by using the KEEP= or the DROP= option. When the KEEP= or the DROP= option is used, the resulting SAS data set keeps or drops the variables that you select in that option. There are three wildcards currently available: ‘*’, ‘?’, and ‘#’. The ‘*’ wildcard corresponds to any character string and will include any string pattern that corresponds to that position in the matching variable name. The ‘?’ means that any single alphanumeric character is valid. The ‘#’ wildcard corresponds to a single numeric character.

• You can also select time series in your data by using the GROUP= or the SOURCE= option to select on group name or on source name. Alternatively, you can deselect time series by using the DROPGROUP= or the DROPSOURCE= option. These options also support the wildcards ‘*’, ‘?’, and ‘#’.

• By default, SASEHAVR selects only the variables that are of the specified frequency in the FREQ= option. If this option is not specified, SASEHAVR selects the variables that match the frequency of the first selected variable. If no other selection criteria are specified, the first selected variable is the first physical DLXRecord read from the Haver database. The FORCE=FREQ option can be specified to force the aggregation of all variables selected to be of the frequency specified by the FREQ= option. Aggregation is supported only from a more frequent time interval to a less frequent time interval, such as from weekly to monthly. The FORCE= option is ignored if the FREQ= option is not specified.
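A minimal LIBNAME sketch combining these documented options (the physical path and variable pattern are hypothetical):

   libname myhvr sasehavr 'physical-path-to-haver-data'
           freq=quarterly start=19900101 end=20051231 keep="gdp*";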

New SIMILARITY Procedure (Experimental)

The new SIMILARITY procedure provides similarity analysis between two time series and other sequentially ordered numeric data. The SIMILARITY procedure computes similarity measures between an input sequence and target sequence, as well as similarity measures that “slide” the target sequence with respect to the input sequence. The “slides” can be by observation index (sliding-sequence similarity measures) or by seasonal index (seasonal-sliding-sequence similarity measures).

UCM Procedure

The following features are new to the UCM procedure:

• The new RANDOMREG statement enables specification of regressors with time-varying regression coefficients. The coefficients are assumed to follow independent random walks. Multiple RANDOMREG statements can be specified, and each statement can specify multiple regressors. The regression coefficient random walks for regressors specified in the same RANDOMREG statement are assumed to have the same disturbance variance parameter. This arrangement enables a very flexible specification of regressors with time-varying coefficients.


• The new SPLINEREG statement enables specification of a spline regressor that can optionally have time-varying coefficients. The spline specification is useful when the series being forecast depends on a regressor in a nonlinear fashion.

• The new SPLINESEASON statement enables parsimonious modeling of long and complex seasonal patterns using the spline approximation.

• The SEASON statement now has options that enable complete control over the constituent harmonics that make up the trigonometric seasonal model.

• It is now easy to obtain diagnostic test statistics useful for detecting structural breaks such as additive outliers and level shifts.

• As an experimental feature, you can now model the irregular component as an autoregressive moving-average (ARMA) process.

• The memory management and numerical efficiency of the underlying algorithms have been improved.
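A minimal sketch of a UCM model that uses the new RANDOMREG statement (the data set and variables are hypothetical):

   proc ucm data=mydata;
      id date interval=month;
      model y = x1;
      irregular;
      level;
      season length=12 type=trig;
      randomreg x2;
      estimate;
      forecast lead=12;
   run;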

VARMAX Procedure

The VARMAX procedure now enables independent (exogenous) variables with their distributed lags to influence dependent (endogenous) variables in various models, such as VARMAX, BVARX, VECMX, BVECMX, and GARCH-type multivariate conditional heteroscedasticity models.

Multivariate GARCH Models—New GARCH Statement

Multivariate GARCH modeling is now a production feature of VARMAX. To enable greater flexibility in specifying multivariate GARCH models, the new GARCH statement has been added to the VARMAX procedure. With the addition of the GARCH statement, the GARCH= option is no longer supported on the MODEL statement. The OUTHT= option can be specified in the GARCH statement to write the estimated conditional covariance matrix to an output data set. See “GARCH Statement” on page 1907 in Chapter 30, “The VARMAX Procedure,” for details.

The VARMAX Model

The VARMAX procedure provides modeling of a VARMAX(p, q, s) process, which is written as

$$ y_t = \delta + \sum_{i=1}^{p} \Phi_i y_{t-i} + \sum_{i=0}^{s} \Theta_i^{*} x_{t-i} + \epsilon_t - \sum_{i=1}^{q} \Theta_i \epsilon_{t-i} $$

where $\Phi(B) = I_k - \sum_{i=1}^{p} \Phi_i B^i$, $\Theta^{*}(B) = \Theta_0^{*} + \Theta_1^{*} B + \cdots + \Theta_s^{*} B^s$, and $\Theta(B) = I_k - \sum_{i=1}^{q} \Theta_i B^i$.


If the Kalman filtering method is used for the parameter estimation of the VARMAX(p,q,s) model, then the dimension of the state-space vector is large, which takes time and memory for computing. For convenience, the parameter estimation of the VARMAX(p,q,s) model uses the two-stage estimation method, which computes the estimation of deterministic terms and exogenous parameters and then maximizes the log-likelihood function of the VARMA(p,q) model.

Some examples of VARMAX modeling are:

   model y1 y2 = x1 / q=1;
   nloptions tech=qn;

   model y1 y2 = x1 / p=1 q=1 xlag=1 nocurrentx;
   nloptions tech=qn;

The BVARX Model

Bayesian modeling allows independent (exogenous) variables with their distributed lags. For example:

   model y1 y2 = x1 / p=2 prior=(theta=0.2 lambda=5);

The VECMX Model

Vector error correction modeling now allows independent (exogenous) variables with their distributed lags. For example:

   model y1 y2 = x1 / p=2 ecm=(rank=1);

The BVECMX Model

Bayesian vector error correction modeling allows independent (exogenous) variables with their distributed lags. For example:

   model y1 y2 = x1 / p=2 prior=(theta=0.2 lambda=5) ecm=(rank=1);

The VARMAX-GARCH Model

VARMAX modeling now supports an error term that has a GARCH-type multivariate conditional heteroscedasticity model. For example:

   model y1 y2 = x1 / p=1 q=1;
   garch q=1;


New Printing Control Options

The PRINT= option can be used in the MODEL statement to control the results printed. See the description of the PRINT= option in Chapter 30, “The VARMAX Procedure,” for details.

X12 Procedure

The X12 procedure has many new statements and options. Many of the new features are related to the regARIMA modeling, which is used to extend the series to be seasonally adjusted. A new experimental input and output data set has been added which describes the time series model fit to the series.

The following miscellaneous statements and options are new:

• The NOINT option on the AUTOMDL statement suppresses the fitting of a constant term in automatically identified models.

• The following tables are now available through the OUTPUT statement: A7, A9, A10, C20, D1, and D7.

• The TABLES statement enables you to display some tables that represent intermediate calculations in the X11 method and that are not displayed by default.

The following statements and options related to the regression component of regARIMA modeling are new:

• The SPAN= option on the OUTLIER statement can be used to limit automatic outlier detection to a subset of the time series observations.

• The following predefined variables have been added to the PREDEFINED option on the REGRESSION statement: EASTER(value), SCEASTER(value), LABOR(value), THANK(value), TDSTOCK(value), SINCOS(value . . . ).

• User-defined regression variables can be included in the regression model by specifying them in the USERVAR=(variables) option in the REGRESSION statement or the INPUT statement.

• Events can be included as user-defined regression variables in the regression model by specifying them in the EVENT statement. SAS predefined events do not require an INEVENT= data set, but an INEVENT= data set can be specified to define other events.

• You can now supply initial or fixed parameter values for regression variables by using the B=(value < F > . . . ) option in the EVENT statement, the B=(value < F > . . . ) option in the INPUT statement, the B=(value < F > . . . ) option in the REGRESSION statement, or by using the MDLINFOIN= data set in the PROC X12 statement. Some regression variable parameters can be fixed while others are estimated.

• You can now assign user-defined regression variables to a group by the USERTYPE= option in the EVENT statement, the USERTYPE= option in the INPUT statement, the USERTYPE= option in the REGRESSION statement, or by using the MDLINFOIN= data set in the PROC X12 statement. Census Bureau predefined variables are automatically assigned to a regression group, and this cannot be modified. But assigning user-defined regression variables to a regression group allows them to be processed similarly to the predefined variables.

• You can now supply initial or fixed parameters for ARMA coefficients by using the MDLINFOIN= data set in the PROC X12 statement. Some ARMA coefficients can be fixed while others are estimated.

• The INEVENT= option on the PROC X12 statement enables you to supply an EVENT definition data set so that the events defined in the EVENT definition data set can be used as user-defined regressors in the regARIMA model.

• User-defined regression variables in the input data set can be identified by specifying them in the USERDEFINED statement. User-defined regression variables specified in the USERVAR=(variables) option of the REGRESSION statement or the INPUT statement do not need to be specified in the USERDEFINED statement, but user-defined variables specified only in the MDLINFOIN= data set need to be identified in the USERDEFINED statement.

The following new experimental options specify input and output data sets that describe the time series model:

• The MDLINFOIN= and MDLINFOOUT= data sets specified in the PROC X12 statement enable you to store the results of model identification and use the stored information as input when executing the X12 procedure.
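A minimal sketch that combines several of these statements (the data set and variables are hypothetical; the NOINT option and the output tables are taken from the lists above):

   proc x12 data=sales date=date;
      var units;
      regression predefined=td;
      automdl noint;
      x11;
      output out=adjusted d1 d7;
   run;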

References

Center for Research in Security Prices (2003), CRSP/Compustat Merged Database Guide, Chicago: The University of Chicago Graduate School of Business, http://www.crsp.uchicago.edu/support/documentation/pdfs/ccm_database_guide.pdf.

Center for Research in Security Prices (2003), CRSP Data Description Guide, Chicago: The University of Chicago Graduate School of Business, http://www.crsp.uchicago.edu/support/documentation/pdfs/stock_indices_data_descriptions.pdf.

Center for Research in Security Prices (2002), CRSP Programmer’s Guide, Chicago: The University of Chicago Graduate School of Business, http://www.crsp.uchicago.edu/support/documentation/pdfs/stock_indices_programming.pdf.

Center for Research in Security Prices (2003), CRSPAccess Database Format Release Notes, Chicago: The University of Chicago Graduate School of Business, http://www.crsp.uchicago.edu/support/documentation/release_notes.html.

Center for Research in Security Prices (2003), CRSP Utilities Guide, Chicago: The University of Chicago Graduate School of Business, http://www.crsp.uchicago.edu/support/documentation/pdfs/stock_indices_utilities.pdf.

Center for Research in Security Prices (2002), CRSP SFA Guide, Chicago: The University of Chicago Graduate School of Business.

Gomez, V. and Maravall, A. (1997a), “Program TRAMO and SEATS: Instructions for the User, Beta Version,” Banco de Espana.

Gomez, V. and Maravall, A. (1997b), “Guide for Using the Programs TRAMO and SEATS, Beta Version,” Banco de Espana.

Haver Analytics (2001), DLX API Programmer’s Reference, New York, NY.

Stoffer, D. and Toloi, C. (1992), “A Note on the Ljung-Box-Pierce Portmanteau Statistic with Missing Data,” Statistics & Probability Letters, 13, 391–396.

SunGard Data Management Solutions (1998), Guide to FAME Database Servers, 888 Seventh Avenue, 12th Floor, New York, NY 10106 USA, http://www.fame.sungard.com/support.html, http://www.data.sungard.com.

SunGard Data Management Solutions (1995), User’s Guide to FAME, Ann Arbor, MI, http://www.fame.sungard.com/support.html.

SunGard Data Management Solutions (1995), Reference Guide to Seamless C HLI, Ann Arbor, MI, http://www.fame.sungard.com/support.html.

SunGard Data Management Solutions (1995), Command Reference for Release 7.6, Vols. 1 and 2, Ann Arbor, MI, http://www.fame.sungard.com/support.html.

Chapter 2

Introduction

Contents
    Overview of SAS/ETS Software
        Uses of SAS/ETS Software
        Contents of SAS/ETS Software
    Experimental Software
    About This Book
        Chapter Organization
        Typographical Conventions
    Where to Turn for More Information
        Accessing the SAS/ETS Sample Library
        Online Help System
        SAS Short Courses
        SAS Technical Support Services
    Major Features of SAS/ETS Software
        Discrete Choice and Qualitative and Limited Dependent Variable Analysis
        Regression with Autocorrelated and Heteroscedastic Errors
        Simultaneous Systems Linear Regression
        Linear Systems Simulation
        Polynomial Distributed Lag Regression
        Nonlinear Systems Regression and Simulation
        ARIMA (Box-Jenkins) and ARIMAX (Box-Tiao) Modeling and Forecasting
        Vector Time Series Analysis
        State Space Modeling and Forecasting
        Spectral Analysis
        Seasonal Adjustment
        Structural Time Series Modeling and Forecasting
        Time Series Cross-Sectional Regression Analysis
        Automatic Time Series Forecasting
        Time Series Interpolation and Frequency Conversion
        Trend and Seasonal Analysis on Transaction Databases
        Access to Financial and Economic Databases
        Spreadsheet Calculations and Financial Report Generation
        Loan Analysis, Comparison, and Amortization
        Time Series Forecasting System
        Investment Analysis System
        ODS Graphics
    Related SAS Software
        Base SAS Software
        SAS Forecast Studio
        SAS High-Performance Forecasting
        SAS/GRAPH Software
        SAS/STAT Software
        SAS/IML Software
        SAS Stat Studio
        SAS/OR Software
        SAS/QC Software
        MLE for User-Defined Likelihood Functions
        JMP Software
        SAS Enterprise Guide
        SAS Add-In for Microsoft Office
        Enterprise Miner—Time Series nodes
        SAS Risk Products
    References

Overview of SAS/ETS Software

SAS/ETS software, a component of the SAS System, provides SAS procedures for:

• econometric analysis
• time series analysis
• time series forecasting
• systems modeling and simulation
• discrete choice analysis
• analysis of qualitative and limited dependent variable models
• seasonal adjustment of time series data
• financial analysis and reporting
• access to economic and financial databases
• time series data management

In addition to SAS procedures, SAS/ETS software also includes seamless access to economic and financial databases and interactive environments for time series forecasting and investment analysis.


Uses of SAS/ETS Software

SAS/ETS software provides tools for a wide variety of applications in business, government, and academia. Major uses of SAS/ETS procedures are economic analysis, forecasting, economic and financial modeling, time series analysis, financial reporting, and manipulation of time series data.

The common theme relating the many applications of the software is time series data: SAS/ETS software is useful whenever it is necessary to analyze or predict processes that take place over time or to analyze models that involve simultaneous relationships.

Although SAS/ETS software is most closely associated with business, finance and economics, time series data also arise in many other fields. SAS/ETS software is useful whenever time dependencies, simultaneous relationships, or dynamic processes complicate data analysis. For example, an environmental quality study might use SAS/ETS software’s time series analysis tools to analyze pollution emissions data. A pharmacokinetic study might use SAS/ETS software’s features for nonlinear systems to model the dynamics of drug metabolism in different tissues.

The diversity of problems for which econometrics and time series analysis tools are needed is reflected in the applications reported by SAS users. The following listed items are some applications of SAS/ETS software presented by SAS users at past annual conferences of the SAS Users Group International (SUGI).

• forecasting college enrollment (Calise and Earley 1997)
• fitting a pharmacokinetic model (Morelock et al. 1995)
• testing interaction effect in reducing sudden infant death syndrome (Fleming, Gibson, and Fleming 1996)
• forecasting operational indices to measure productivity changes (McCarty 1994)
• spectral decomposition and reconstruction of nuclear plant signals (Hoyer and Gross 1993)
• estimating parameters for the constant-elasticity-of-substitution translog model (Hisnanick 1993)
• applying econometric analysis for mass appraisal of real property (Amal and Weselowski 1993)
• forecasting telephone usage data (Fishetti, Heathcote, and Perry 1993)
• forecasting demand and utilization of inpatient hospital services (Hisnanick 1992)
• using conditional demand estimation to determine electricity demand (Keshani and Taylor 1992)
• estimating tree biomass for measurement of forestry yields (Parresol and Thomas 1991)
• evaluating the theory of input separability in the production function of U.S. manufacturing (Hisnanick 1991)
• forecasting dairy milk yields and composition (Benseman 1990)
• predicting the gloss of coated aluminum products subject to weathering (Khan 1990)
• learning curve analysis for predicting manufacturing costs of aircraft (Le Bouton 1989)
• analyzing Dow Jones stock index trends (Early, Sweeney, and Zekavat 1989)
• analyzing the usefulness of the composite index of leading economic indicators for forecasting the economy (Lin and Myers 1988)

Contents of SAS/ETS Software

Procedures

SAS/ETS software includes the following SAS procedures:

ARIMA: ARIMA (Box-Jenkins) and ARIMAX (Box-Tiao) modeling and forecasting
AUTOREG: regression analysis with autocorrelated or heteroscedastic errors and ARCH and GARCH modeling
COMPUTAB: spreadsheet calculations and financial report generation
COUNTREG: regression modeling for dependent variables that represent counts
DATASOURCE: access to financial and economic databases
ENTROPY: maximum entropy-based regression
ESM: forecasting by using exponential smoothing models with optimized smoothing weights
EXPAND: time series interpolation, frequency conversion, and transformation of time series
FORECAST: automatic forecasting
LOAN: loan analysis and comparison
MDC: multinomial discrete choice analysis
MODEL: nonlinear simultaneous equations regression and nonlinear systems modeling and simulation
PANEL: panel data models
PDLREG: polynomial distributed lag regression
QLIM: qualitative and limited dependent variable analysis
SIMILARITY: similarity analysis of time series data for time series data mining
SIMLIN: linear systems simulation
SPECTRA: spectral and cross-spectral analysis
STATESPACE: state space modeling and automated forecasting of multivariate time series
SYSLIN: linear simultaneous equations models
TIMESERIES: analysis of time-stamped transactional data
TSCSREG: time series cross-sectional regression analysis
UCM: unobserved components analysis of time series
VARMAX: vector autoregressive and moving-average modeling and forecasting
X11: seasonal adjustment (Census X-11 and X-11 ARIMA)
X12: seasonal adjustment (Census X-12 ARIMA)

Macros

SAS/ETS software includes the following SAS macros:

%AR: generates statements to define autoregressive error models for the MODEL procedure
%BOXCOXAR: investigates Box-Cox transformations useful for modeling and forecasting a time series
%DFPVALUE: computes probabilities for Dickey-Fuller test statistics
%DFTEST: performs Dickey-Fuller tests for unit roots in a time series process
%LOGTEST: tests to determine whether a log transformation is appropriate for modeling and forecasting a time series
%MA: generates statements to define moving-average error models for the MODEL procedure
%PDL: generates statements to define polynomial distributed lag models for the MODEL procedure

These macros are part of the SAS AUTOCALL facility and are automatically available for use in your SAS program. Refer to SAS Macro Language: Reference for information about the SAS macro facility.
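As a hedged sketch of how one of these AUTOCALL macros is typically invoked (the data set and variable names are hypothetical), the %DFTEST macro stores the p-value of the test in the macro variable &DFTEST:

   %dftest(mydata, y);
   %put Dickey-Fuller test p-value: &dftest;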

Access Interfaces to Economic and Financial Databases

In addition to PROC DATASOURCE, these SAS/ETS access interfaces provide seamless access to financial and economic databases:

SASECRSP: LIBNAME engine for accessing time series and event data residing in a CRSPAccess database.
SASEFAME: LIBNAME engine for accessing time or case series data residing in a FAME database.
SASEHAVR: LIBNAME engine for accessing time series residing in a HAVER ANALYTICS Data Link Express (DLX) database.


The Time Series Forecasting System

SAS/ETS software includes an interactive forecasting system, described in Part IV. This graphical user interface to SAS/ETS forecasting features was developed with SAS/AF software and uses PROC ARIMA and other internal routines to perform time series forecasting. The Time Series Forecasting System makes it easy to forecast time series and provides many features for graphical data exploration and graphical comparisons of forecasting models and forecasts. (You must have SAS/GRAPH® installed to use the graphical features of the system.)

The Investment Analysis System

The Investment Analysis System, described in Part V, is an interactive environment for analyzing the time-value of money in a variety of investments. Various analyses are provided to help analyze the value of investment alternatives: time value, periodic equivalent, internal rate of return, benefit-cost ratio, and break-even analysis.

Experimental Software

Experimental software is sometimes included as part of a production-release product. It is provided to (sometimes targeted) customers in order to obtain feedback. All experimental uses are marked Experimental in this document. Whenever an experimental procedure, statement, or option is used, a message is printed to the SAS log to indicate that it is experimental. The design and syntax of experimental software might change before any production release. Experimental software has been tested prior to release, but it has not necessarily been tested to production-quality standards, and so should be used with care.

About This Book

This book is a user’s guide to SAS/ETS software. Since SAS/ETS software is a part of the SAS System, this book assumes that you are familiar with Base SAS software and have the books SAS Language Reference: Dictionary and Base SAS Procedures Guide available for reference. It also assumes that you are familiar with SAS data sets, the SAS DATA step, and with basic SAS procedures such as PROC PRINT and PROC SORT. Chapter 3, “Working with Time Series Data,” in this book summarizes the aspects of Base SAS software that are most relevant to the use of SAS/ETS software.


Chapter Organization

Following a brief What’s New, this book is divided into five major parts. Part I contains general information to aid you in working with SAS/ETS software. Part II explains the SAS procedures of SAS/ETS software. Part III describes the available data access interfaces for economic and financial databases. Part IV is the reference for the Time Series Forecasting System, an interactive forecasting menu system that uses PROC ARIMA and other routines to perform time series forecasting. Finally, Part V is the reference for the Investment Analysis System.

The new features added to SAS/ETS software since the publication of SAS/ETS Software: Changes and Enhancements for Release 8.2 are summarized in Chapter 1, “What’s New in SAS/ETS.” If you have used SAS/ETS software in the past, you may want to skim this chapter to see what’s new.

Part I contains the following chapters. Chapter 2, the current chapter, provides an overview of SAS/ETS software and summarizes related SAS publications, products, and services. Chapter 3, “Working with Time Series Data,” discusses the use of SAS data management and programming features for time series data. Chapter 4, “Date Intervals, Formats, and Functions,” summarizes the time intervals, date and datetime informats, date and datetime formats, and date and datetime functions available in the SAS System. Chapter 5, “SAS Macros and Functions,” documents SAS macros and DATA step financial functions provided with SAS/ETS software. The macros use SAS/ETS procedures to perform Dickey-Fuller tests, test for the need for log transformations, or select optimal Box-Cox transformation parameters for time series data. Chapter 6, “Nonlinear Optimization Methods,” documents the NonLinear Optimization subsystem used by some ETS procedures to perform nonlinear optimization tasks.

Part II contains chapters that explain the SAS procedures that make up SAS/ETS software. These chapters appear in alphabetical order by procedure name. Part III contains chapters that document the ETS access interfaces to economic and financial databases.

Each of the chapters that document the SAS/ETS procedures (Part II) and the SAS/ETS access interfaces (Part III) is organized as follows:

1. The “Overview” section gives a brief description of the procedure.
2. The “Getting Started” section provides a tutorial introduction on how to use the procedure.
3. The “Syntax” section is a reference to the SAS statements and options that control the procedure.
4. The “Details” section discusses various technical details.
5. The “Examples” section contains examples of the use of the procedure.
6. The “References” section contains technical references on methodology.

Part IV contains the chapters that document the features of the Time Series Forecasting System. Part V contains chapters that document the features of the Investment Analysis System.

Typographical Conventions

This book uses several type styles for presenting information. The following list explains the meaning of the typographical conventions used in this book:

roman: is the standard type style used for most text.

UPPERCASE ROMAN: is used for SAS statements, options, and other SAS language elements when they appear in the text. However, you can enter these elements in your own SAS programs in lowercase, uppercase, or a mixture of the two.

UPPERCASE BOLD: is used in the “Syntax” sections’ initial lists of SAS statements and options.

oblique: is used for user-supplied values for options in the syntax definitions. In the text, these values are written in italic.

helvetica: is used for the names of variables and data sets when they appear in the text.

bold: is used to refer to matrices and vectors and to refer to commands.

italic: is used for terms that are defined in the text, for emphasis, and for references to publications.

bold monospace: is used for example code. In most cases, this book uses lowercase type for SAS statements.

Where to Turn for More Information

This section describes other sources of information about SAS/ETS software.

Accessing the SAS/ETS Sample Library

The SAS/ETS Sample Library includes many examples that illustrate the use of SAS/ETS software, including the examples used in this documentation. To access these sample programs, select Help from the menu and then select SAS Help and Documentation. From the Contents list, select the section Sample SAS Programs under Learning to Use SAS.


Online Help System

You can access online help information about SAS/ETS software in two ways, depending on whether you are using the SAS windowing environment in the command line mode or the pull-down menu mode.

If you are using a command line, you can access the SAS/ETS help menus by typing help on the SAS windowing environment command line. Or you can issue the command help ARIMA (or another procedure name) to display the help for that particular procedure.

If you are using the SAS windowing environment pull-down menus, you can pull down the Help menu and make the following selections:

• SAS Help and Documentation
• Learning to Use SAS in the Contents list
• SAS Products
• SAS/ETS

The content of the Online Help System follows closely that of this book.

SAS Short Courses

The SAS Education Division offers a number of training courses that might be of interest to SAS/ETS users. Please check the SAS web site for the current list of available training courses.

SAS Technical Support Services

As with all SAS products, the SAS Technical Support staff is available to respond to problems and answer technical questions regarding the use of SAS/ETS software.

Major Features of SAS/ETS Software

The following sections briefly summarize major features of SAS/ETS software. See the chapters on individual procedures for more detailed information.


Discrete Choice and Qualitative and Limited Dependent Variable Analysis The MDC procedure provides maximum likelihood (ML) or simulated maximum likelihood estimates of multinomial discrete choice models in which the choice set consists of unordered multiple alternatives. The MDC procedure supports the following models and features:  conditional logit  nested logit  heteroscedastic extreme value  multinomial probit  mixed logit  pseudo-random or quasi-random numbers for simulated maximum likelihood estimation  bounds imposed on the parameter estimates  linear restrictions imposed on the parameter estimates  SAS data set containing predicted probabilities and linear predictor (x0 ˇ) values  decision tree and nested logit  model fit and goodness-of-fit measures including – likelihood ratio – Aldrich-Nelson – Cragg-Uhler 1 – Cragg-Uhler 2 – Estrella – Adjusted Estrella – McFadden’s LRI – Veall-Zimmermann – Akaike Information Criterion (AIC) – Schwarz Criterion or Bayesian Information Criterion (BIC) The QLIM procedure analyzes univariate and multivariate limited dependent variable models where dependent variables take discrete values or dependent variables are observed only in a limited range of values. This procedure includes logit, probit, Tobit, and general simultaneous equations models. The QLIM procedure supports the following models:

The QLIM procedure analyzes univariate and multivariate limited dependent variable models in which dependent variables take discrete values or are observed only in a limited range of values. This procedure includes logit, probit, Tobit, and general simultaneous equations models. The QLIM procedure supports the following models:

• linear regression model with heteroscedasticity
• probit with heteroscedasticity
• logit with heteroscedasticity
• Tobit (censored and truncated) with heteroscedasticity
• Box-Cox regression with heteroscedasticity
• bivariate probit
• bivariate Tobit
• sample selection models
• multivariate limited dependent models

The COUNTREG procedure provides regression models in which the dependent variable takes nonnegative integer count values. The COUNTREG procedure supports the following models:

• Poisson regression
• negative binomial regression with quadratic and linear variance functions
• zero-inflated Poisson (ZIP) model
• zero-inflated negative binomial (ZINB) model
• fixed and random effects Poisson panel data models
• fixed and random effects NB (negative binomial) panel data models

The PANEL procedure deals with panel data sets that consist of time series observations on each of several cross-sectional units. The models and methods that the PANEL procedure supports are as follows:

• one-way and two-way models
• fixed and random effects
• autoregressive models
  – the Parks method
  – dynamic panel estimator
  – the Da Silva method for moving-average disturbances
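As a sketch of typical count-model usage, the following hypothetical PROC COUNTREG step fits a Poisson regression; the data set and variable names are illustrative only.

   /* Poisson regression for a count outcome.  Negative binomial and  */
   /* zero-inflated variants are requested through other options.     */
   proc countreg data=docvisits;
      model visits = age income / dist=poisson;
   run;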


Regression with Autocorrelated and Heteroscedastic Errors

The AUTOREG procedure provides regression analysis and forecasting of linear models with autocorrelated or heteroscedastic errors. The AUTOREG procedure includes the following features:

• estimation and prediction of linear regression models with autoregressive errors
• any order autoregressive or subset autoregressive process
• optional stepwise selection of autoregressive parameters
• choice of the following estimation methods:
  – exact maximum likelihood
  – exact nonlinear least squares
  – Yule-Walker
  – iterated Yule-Walker
• tests for any linear hypothesis that involves the structural coefficients
• restrictions for any linear combination of the structural coefficients
• forecasts with confidence limits
• estimation and forecasting of ARCH (autoregressive conditional heteroscedasticity), GARCH (generalized autoregressive conditional heteroscedasticity), I-GARCH (integrated GARCH), E-GARCH (exponential GARCH), and GARCH-M (GARCH in mean) models
• combination of ARCH and GARCH models with autoregressive models, with or without regressors
• estimation and testing of general heteroscedasticity models
• variety of model diagnostic information including the following:
  – autocorrelation plots
  – partial autocorrelation plots
  – Durbin-Watson test statistic and generalized Durbin-Watson tests to any order
  – Durbin h and Durbin t statistics
  – Akaike information criterion
  – Schwarz information criterion
  – tests for ARCH errors
  – Ramsey's RESET test
  – Chow and PChow tests
  – Phillips-Perron stationarity test
  – CUSUM and CUSUMSQ statistics
• exact significance levels (p-values) for the Durbin-Watson statistic
• embedded missing values
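The following sketch shows how a few of these options come together; the data set SERIES and the regressors X1 and X2 are hypothetical.

   /* Regression with second-order autoregressive errors, estimated   */
   /* by exact maximum likelihood; DWPROB requests exact              */
   /* Durbin-Watson p-values.                                         */
   proc autoreg data=series;
      model y = x1 x2 / nlag=2 method=ml dwprob;
   run;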


Simultaneous Systems Linear Regression

The SYSLIN and ENTROPY procedures provide regression analysis of a simultaneous system of linear equations. The SYSLIN procedure includes the following features:

• estimation of parameters in simultaneous systems of linear equations
• full range of estimation methods including the following:
  – ordinary least squares (OLS)
  – two-stage least squares (2SLS)
  – three-stage least squares (3SLS)
  – iterated 3SLS (IT3SLS)
  – seemingly unrelated regression (SUR)
  – iterated SUR (ITSUR)
  – limited-information maximum likelihood (LIML)
  – full-information maximum likelihood (FIML)
  – minimum expected loss (MELO)
  – general K-class estimators
• weighted regression
• any number of restrictions for any linear combination of coefficients, within a single model or across equations
• tests for any linear hypothesis, for the parameters of a single model or across equations
• wide range of model diagnostics and statistics including the following:
  – usual ANOVA tables and R-square statistics
  – Durbin-Watson statistics
  – standardized coefficients
  – test for overidentifying restrictions
  – residual plots
  – standard errors and t tests
  – covariance and correlation matrices of parameter estimates and equation errors
• predicted values, residuals, parameter estimates, and variance-covariance matrices saved in output SAS data sets
• other features of the SYSLIN procedure that enable you to do the following:
  – impose linear restrictions on the parameter estimates
  – test linear hypotheses about the parameters
  – write predicted and residual values to an output SAS data set
  – write parameter estimates to an output SAS data set
  – write the crossproducts matrix (SSCP) to an output SAS data set
  – use raw data, correlations, covariances, or cross products as input

The ENTROPY procedure supports the following models and features:

• generalized maximum entropy (GME) estimation
• generalized cross entropy (GCE) estimation
• normed moment generalized maximum entropy
• maximum entropy-based seemingly unrelated regression (MESUR) estimation
• pure inverse estimation
• estimation of parameters in simultaneous systems of linear equations
• Markov models
• unordered multinomial choice problems
• weighted regression
• any number of restrictions for any linear combination of coefficients, within a single model or across equations
• tests for any linear hypothesis, for the parameters of a single model or across equations
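A minimal sketch of two-stage least squares with PROC SYSLIN follows; the data set SYSDATA and all variable names are hypothetical placeholders.

   /* Two-stage least squares for a two-equation system.              */
   proc syslin data=sysdata 2sls;
      endogenous y1 y2;              /* jointly determined variables */
      instruments x1 x2 x3;          /* exogenous instruments        */
      eq1: model y1 = y2 x1;
      eq2: model y2 = y1 x2 x3;
   run;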

Linear Systems Simulation

The SIMLIN procedure performs simulation and multiplier analysis for simultaneous systems of linear regression models. The SIMLIN procedure includes the following features:

• reduced form coefficients
• interim multipliers
• total multipliers
• dynamic multipliers
• multipliers for higher order lags
• dynamic forecasts and simulations
• goodness-of-fit statistics
• acceptance of the equation system coefficients estimated by the SYSLIN procedure as input
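Continuing the hypothetical SYSLIN example above, a rough sketch of passing estimated coefficients to PROC SIMLIN through an OUTEST= data set:

   /* Estimate the system, then compute multipliers and simulations.  */
   proc syslin data=sysdata 2sls outest=est;
      endogenous y1 y2;
      instruments x1 x2 x3;
      eq1: model y1 = y2 x1;
      eq2: model y2 = y1 x2 x3;
   run;

   proc simlin data=sysdata est=est type=2sls total interim=2;
      endogenous y1 y2;
      exogenous x1 x2 x3;
   run;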


Polynomial Distributed Lag Regression

The PDLREG procedure provides regression analysis for linear models with polynomial distributed (Almon) lags. The PDLREG procedure includes the following features:

• entry of any number of regressors as a polynomial lag distribution and the use of any number of covariates
• use of any order lag length and degree polynomial for lag distribution
• optional upper and lower endpoint restrictions
• specification of any number of linear restrictions on covariates
• option to repeat analysis over a range of degrees for the lag distribution polynomials
• support for autoregressive errors to any lag
• forecasts with confidence limits
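As a sketch (with hypothetical data set and variable names):

   /* Almon lag of length 8 with a degree-3 polynomial, plus          */
   /* first-order autoregressive errors.                              */
   proc pdlreg data=expdata;
      model y = x(8,3) / nlag=1;
   run;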

Nonlinear Systems Regression and Simulation

The MODEL procedure provides parameter estimation, simulation, and forecasting of dynamic nonlinear simultaneous equation models. The MODEL procedure includes the following features:

• nonlinear regression analysis for systems of simultaneous equations, including weighted nonlinear regression
• full range of parameter estimation methods including the following:
  – nonlinear ordinary least squares (OLS)
  – nonlinear seemingly unrelated regression (SUR)
  – nonlinear two-stage least squares (2SLS)
  – nonlinear three-stage least squares (3SLS)
  – iterated SUR
  – iterated 3SLS
  – generalized method of moments (GMM)
  – nonlinear full-information maximum likelihood (FIML)
  – simulated method of moments (SMM)
• support for dynamic multi-equation nonlinear models of any size or complexity
• use of the full power of the SAS programming language for model definition, including left-hand-side expressions
• hypothesis tests of nonlinear functions of the parameter estimates
• linear and nonlinear restrictions of the parameter estimates
• bounds imposed on the parameter estimates
• computation of estimates and standard errors of nonlinear functions of the parameter estimates
• estimation and simulation of ordinary differential equations (ODEs)
• vector autoregressive error processes and polynomial lag distributions easily specified for the nonlinear equations
• variance modeling (ARCH, GARCH, and others)
• computation of goal-seeking solutions of nonlinear systems to find input values needed to produce target outputs
• dynamic, static, or n-period-ahead-forecast simulation modes
• simultaneous solution or single equation solution modes
• Monte Carlo simulation using parameter estimate covariance and across-equation residuals covariance matrices or user-specified random functions
• a variety of diagnostic statistics including the following:
  – model R-square statistics
  – general Durbin-Watson statistics and exact p-values
  – asymptotic standard errors and t tests
  – first-stage R-square statistics
  – covariance estimates
  – collinearity diagnostics
  – simulation goodness-of-fit statistics
  – Theil inequality coefficient decompositions
  – Theil relative change forecast error measures
  – heteroscedasticity tests
  – Godfrey test for serial correlation
  – Hausman specification test
  – Chow tests
• block structure and dependency structure analysis for the nonlinear system
• listing and cross-reference of fitted model
• automatic calculation of needed derivatives by using exact analytic formulas
• efficient sparse matrix methods used for model solution; choice of other solution methods

Model definition, parameter estimation, simulation, and forecasting can be performed interactively in a single SAS session, or models can be stored in files and reused and combined in later runs.
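A minimal sketch of nonlinear least squares with PROC MODEL; the data set GROWTH, the variables Y and T, and the starting values are hypothetical.

   /* Fit a logistic growth curve by nonlinear OLS.                   */
   proc model data=growth;
      parms a 1 b 1 c 0.1;           /* parameters with starting values */
      y = a / (1 + b*exp(-c*t));     /* nonlinear mean function         */
      fit y / ols;
   run;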


ARIMA (Box-Jenkins) and ARIMAX (Box-Tiao) Modeling and Forecasting

The ARIMA procedure provides the identification, parameter estimation, and forecasting of autoregressive integrated moving-average (Box-Jenkins) models, seasonal ARIMA models, transfer function models, and intervention models. The ARIMA procedure includes the following features:

• complete ARIMA (Box-Jenkins) modeling with no limits on the order of autoregressive or moving-average processes
• model identification diagnostics including the following:
  – autocorrelation function
  – partial autocorrelation function
  – inverse autocorrelation function
  – cross-correlation function
  – extended sample autocorrelation function
  – minimum information criterion for model identification
  – squared canonical correlations
• stationarity tests
• outlier detection
• intervention analysis
• regression with ARMA errors
• transfer function modeling with fully general rational transfer functions
• seasonal ARIMA models
• ARIMA model-based interpolation of missing values
• several parameter estimation methods including the following:
  – exact maximum likelihood
  – conditional least squares
  – exact nonlinear unconditional least squares (ELS or ULS)
• prewhitening transformations
• forecasts and confidence limits for all models
• forecasting tied to parameter estimation methods: finite memory forecasts for models estimated by maximum likelihood or exact nonlinear least squares methods and infinite memory forecasts for models estimated by conditional least squares
• diagnostic statistics to help judge the adequacy of the model including the following:
  – Akaike's information criterion (AIC)
  – Schwarz's Bayesian criterion (SBC or BIC)
  – Box-Ljung chi-square test statistics for white-noise residuals
  – autocorrelation function of residuals
  – partial autocorrelation function of residuals
  – inverse autocorrelation function of residuals
  – automatic outlier detection
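A sketch of the usual identify-estimate-forecast workflow for a monthly series; the data set SALES and the variables UNITS and DATE are hypothetical.

   /* Seasonal ARIMA modeling and forecasting.                        */
   proc arima data=sales;
      identify var=units(1,12) nlag=24;    /* regular + seasonal differencing */
      estimate p=1 q=(1)(12) method=ml;    /* factored MA terms, exact ML     */
      forecast lead=12 interval=month id=date out=fcst;
   run;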

Vector Time Series Analysis

The VARMAX procedure enables you to model the dynamic relationships both among the dependent variables and between the dependent and independent variables. The VARMAX procedure includes the following features:

• several modeling features:
  – vector autoregressive model
  – vector autoregressive model with exogenous variables
  – vector autoregressive and moving-average model
  – Bayesian vector autoregressive model
  – vector error correction model
  – Bayesian vector error correction model
  – GARCH-type multivariate conditional heteroscedasticity models
• criteria for automatically determining AR and MA orders:
  – Akaike information criterion (AIC)
  – corrected AIC (AICC)
  – Hannan-Quinn (HQ) criterion
  – final prediction error (FPE)
  – Schwarz Bayesian criterion (SBC), also known as Bayesian information criterion (BIC)
• AR order identification aids:
  – partial cross-correlations
  – Yule-Walker estimates
  – partial autoregressive coefficients
  – partial canonical correlations
• testing the presence of unit roots and cointegration:
  – Dickey-Fuller tests
  – Johansen cointegration test for nonstationary vector processes of integrated order one
  – Stock-Watson common trends test for the possibility of cointegration among nonstationary vector processes of integrated order one
  – Johansen cointegration test for nonstationary vector processes of integrated order two
• model parameter estimation methods:
  – least squares (LS)
  – maximum likelihood (ML)
• model checks and residual analysis using the following tests:
  – Durbin-Watson (DW) statistics
  – F test for autoregressive conditional heteroscedastic (ARCH) disturbance
  – F test for AR disturbance
  – Jarque-Bera normality test
  – Portmanteau test
• seasonal deterministic terms
• subset models
• multiple regression with distributed lags
• dead-start model that does not have present values of the exogenous variables
• Granger-causal relationships between two distinct groups of variables
• infinite order AR representation
• impulse response function (or infinite order MA representation)
• decomposition of the predicted error covariances
• roots of the characteristic functions for both the AR and MA parts to evaluate the proximity of the roots to the unit circle
• contemporaneous relationships among the components of the vector time series
• forecasts
• conditional covariances for GARCH models
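A minimal sketch of a bivariate VAR fit and forecast; the data set MACRO and the variables Y1 and Y2 are hypothetical.

   /* Bivariate VAR(2) with eight-step-ahead forecasts.               */
   proc varmax data=macro;
      model y1 y2 / p=2 print=(estimates diagnose);
      output lead=8 out=fcst;
   run;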

State Space Modeling and Forecasting

The STATESPACE procedure provides automatic model selection, parameter estimation, and forecasting of state space models. (State space models encompass an alternative general formulation of multivariate ARIMA models.) The STATESPACE procedure includes the following features:

• multivariate ARIMA modeling by using the general state space representation of the stochastic process
• automatic model selection using Akaike's information criterion (AIC)
• user-specified state space models including restrictions
• transfer function models with random inputs
• any combination of simple and seasonal differencing; input series can be differenced to any order for any lag lengths
• forecasts with confidence limits
• ability to save the selected and fitted model in a data set and reuse it for forecasting
• wide range of output options including the ability to print any statistics concerning the data and their covariance structure, the model selection process, and the final model fit
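As a sketch of automatic state space modeling (data set and variable names hypothetical):

   /* Automatic model selection and forecasting for two differenced   */
   /* series.                                                         */
   proc statespace data=series lead=6;
      var x(1) y(1);     /* first differences of x and y */
      id date;
   run;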

Spectral Analysis

The SPECTRA procedure provides spectral analysis and cross-spectral analysis of time series. The SPECTRA procedure includes the following features:

• efficient calculation of periodogram and smoothed periodogram using fast finite Fourier transform and Chirp-Z algorithms
• multiple spectral analysis, including raw and smoothed spectral and cross-spectral function estimates, with user-specified window weights
• choice of kernel for smoothing
• output of the following spectral estimates to a SAS data set:
  – Fourier sine and cosine coefficients
  – periodogram
  – smoothed periodogram
  – cospectrum
  – quadrature spectrum
  – amplitude
  – phase spectrum
  – squared coherency
• Fisher's Kappa and Bartlett's Kolmogorov-Smirnov test statistics for testing a null hypothesis of white noise
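A sketch of a periodogram and smoothed spectral density estimate with a white noise test; the data set SERIES and the variable X are hypothetical.

   /* P = periodogram, S = smoothed spectral density.                 */
   proc spectra data=series out=spec p s adjmean whitetest;
      var x;
      weights 1 2 3 4 3 2 1;    /* triangular smoothing weights */
   run;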


Seasonal Adjustment

The X11 procedure provides seasonal adjustment of time series by using the Census X-11 or X-11 ARIMA method. The X11 procedure is based on the U.S. Bureau of the Census X-11 seasonal adjustment program and also supports the X-11 ARIMA method developed by Statistics Canada. The X11 procedure includes the following features:

• decomposition of monthly or quarterly series into seasonal, trend, trading day, and irregular components
• both multiplicative and additive forms of the decomposition
• all the features of the Census Bureau program
• support of the X-11 ARIMA method
• support of sliding spans analysis
• processing of any number of variables at once with no maximum length for a series
• computation of tests for stable, moving, and combined seasonality
• optional printing or storing in SAS data sets of the individual X11 tables that show the various components at different stages of the computation; full control over what is printed or output
• ability to project the seasonal component one year ahead, which enables reintroduction of seasonal factors for an extrapolated series

The X12 procedure provides seasonal adjustment of time series by using the X-12 ARIMA method. The X12 procedure is based on the U.S. Bureau of the Census X-12 ARIMA seasonal adjustment program (version 0.3). It also supports the X-11 ARIMA method developed by Statistics Canada and the previous X-11 method of the U.S. Census Bureau. The X12 procedure includes the following features:

• decomposition of monthly or quarterly series into seasonal, trend, trading day, and irregular components
• support of multiplicative, additive, pseudo-additive, and log additive forms of decomposition
• support of the X-12 ARIMA method
• support of regARIMA modeling
• automatic identification of outliers
• support of TRAMO-based automatic model selection
• use of regressors to process missing values within the span of the series
• processing of any number of variables at once with no maximum length for a series
• computation of tests for stable, moving, and combined seasonality
• spectral analysis of original, seasonally adjusted, and irregular series
• optional printing or storing in a SAS data set of the individual X11 tables that show the various components at different stages of the decomposition; full control over what is printed or output
• optional projection of the seasonal component one year ahead, which enables reintroduction of seasonal factors for an extrapolated series
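A sketch of seasonally adjusting a monthly series with TRAMO-based automatic model selection; the data set SALES and the variables DATE and UNITS are hypothetical.

   /* X-12 ARIMA seasonal adjustment with automatic model selection.  */
   proc x12 data=sales date=date;
      var units;
      transform function=log;   /* log transformation        */
      automdl;                  /* automatic model selection */
      x11;                      /* X-11 style decomposition  */
   run;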

Structural Time Series Modeling and Forecasting

The UCM procedure provides a flexible environment for analyzing time series data by using structural time series models, also called unobserved components models (UCMs). These models represent the observed series as a sum of suitably chosen components such as trend, seasonal, cyclical, and regression effects. You can use the UCM procedure to formulate comprehensive models that bring out all the salient features of the series under consideration. Structural models are applicable in the same situations where Box-Jenkins ARIMA models are applicable; however, the structural models tend to be more informative about the underlying stochastic structure of the series. The UCM procedure includes the following features:

• general unobserved components modeling where the models can include trend, multiple seasons and cycles, and regression effects
• maximum likelihood estimation of the model parameters
• model diagnostics that include a variety of goodness-of-fit statistics and extensive graphical diagnosis of the model residuals
• forecasts and confidence limits for the series and all the model components
• model-based seasonal decomposition
• extensive plotting capability that includes the following:
  – forecast and confidence interval plots for the series and model components such as trend, cycles, and seasons
  – diagnostic plots such as residual plots, residual autocorrelation plots, and so on
  – seasonal decomposition plots such as trend, trend plus cycles, trend plus cycles plus seasons, and so on
• model-based interpolation of series missing values
• full sample (also called smoothed) estimates of the model components
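A sketch of a basic structural model with level, slope, seasonal, and irregular components; the data set SERIES and the variables DATE and Y are hypothetical.

   /* Basic structural model for a monthly series.                    */
   proc ucm data=series;
      id date interval=month;
      model y;
      irregular;
      level;
      slope;
      season length=12 type=trig;
      estimate;
      forecast lead=12;
   run;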


Time Series Cross-Sectional Regression Analysis

The TSCSREG procedure provides combined time series cross-sectional regression analysis. The TSCSREG procedure includes the following features:

• estimation of the regression parameters under several common error structures:
  – Fuller and Battese method (variance component model)
  – Wansbeek-Kapteyn method
  – Parks method (autoregressive model)
  – Da Silva method (mixed variance component moving-average model)
  – one-way fixed effects
  – two-way fixed effects
  – one-way random effects
  – two-way random effects
• any number of model specifications
• unbalanced panel data for the fixed-effects or random-effects models
• variety of estimates and statistics including the following:
  – underlying error components estimates
  – regression parameter estimates
  – standard errors of estimates
  – t tests
  – R-square statistic
  – correlation matrix of estimates
  – covariance matrix of estimates
  – autoregressive parameter estimate
  – cross-sectional components estimates
  – autocovariance estimates
  – F tests of linear hypotheses about the regression parameters
  – specification tests
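A sketch of Fuller-Battese variance components estimation on panel data; the data set PANEL and the variables STATE, YEAR, Y, X1, and X2 are hypothetical. The ID statement names the cross-section and time identifiers.

   /* Variance components (random effects) estimation.                */
   proc tscsreg data=panel;
      id state year;
      model y = x1 x2 / fuller;
   run;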

Automatic Time Series Forecasting

The ESM procedure provides a quick way to generate forecasts for many time series or for transactional data in one step by using exponential smoothing methods. All parameters associated with the forecasting model are optimized based on the data. You can use the following smoothing models:

• simple
• double
• linear
• damped trend
• seasonal
• Winters method (additive and multiplicative)

Additionally, PROC ESM can transform the data before applying the smoothing methods by using any of these transformations:

• log
• square root
• logistic
• Box-Cox

In addition to forecasting, the ESM procedure can also produce graphical output.

The ESM procedure can forecast both time series data, whose observations are equally spaced at a specific time interval (for example, monthly or weekly), and transactional data, whose observations are not spaced with respect to any particular time interval. (Internet, inventory, sales, and similar data are typical examples of transactional data. For transactional data, the data are accumulated based on a specified time interval to form a time series.) A sketch of typical PROC ESM usage appears after the FORECAST feature list below.

The ESM procedure is a replacement for the older FORECAST procedure. ESM is often more convenient to use than PROC FORECAST, but it supports only exponential smoothing models.

The FORECAST procedure provides forecasting of univariate time series using automatic trend extrapolation. PROC FORECAST is an easy-to-use procedure for automatic forecasting that uses simple popular methods that do not require statistical modeling of the time series, such as exponential smoothing, time trend with autoregressive errors, and the Holt-Winters method.

The FORECAST procedure supplements the powerful forecasting capabilities of the econometric and time series analysis procedures described previously. You can use PROC FORECAST when you have many series to forecast and you want to extrapolate trends without developing a model for each series. The FORECAST procedure includes the following features:

• choice of the following forecasting methods:
  – EXPO method—exponential smoothing: single, double, triple, or Holt two-parameter smoothing
  – exponential smoothing as an ARIMA model
  – WINTERS method—using updating equations similar to exponential smoothing to fit model parameters
  – ADDWINTERS method—like the WINTERS method except that the seasonal parameters are added to the trend instead of multiplied with the trend
  – STEPAR method—stepwise autoregressive models with constant, linear, or quadratic trend and autoregressive errors to any order
  – Holt-Winters forecasting method with constant, linear, or quadratic trend
  – additive variant of the Holt-Winters method
• support for up to three levels of seasonality for Holt-Winters method: time-of-year, day-of-week, or time-of-day
• ability to forecast any number of variables at once
• forecast confidence limits for all methods
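The promised PROC ESM sketch; the data set SALES and the variables DATE and UNITS are hypothetical.

   /* Winters multiplicative smoothing forecasts for a monthly        */
   /* series; smoothing weights are optimized from the data.          */
   proc esm data=sales out=fcst lead=12;
      id date interval=month;
      forecast units / model=winters;
   run;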

Time Series Interpolation and Frequency Conversion

The EXPAND procedure provides time interval conversion and missing value interpolation for time series. The EXPAND procedure includes the following features:

• conversion of time series frequency; for example, constructing quarterly estimates from annual series or aggregating quarterly values to annual values
• conversion of irregular observations to periodic observations
• interpolation of missing values in time series
• conversion of observation types; for example, estimating stocks from flows and vice versa. All possible conversions are supported between any of the following:
  – beginning of period
  – end of period
  – period midpoint
  – period total
  – period average
• conversion of time series phase shift; for example, conversion between fiscal years and calendar years
• identifying observations including the following:
  – identification of the time interval of the input values
  – validation of the input data set observations
  – computation of the ID values for the observations in the output data set
• choice of four interpolation methods:
  – cubic splines
  – linear splines
  – step functions
  – simple aggregation
• ability to perform extrapolation by a linear projection of the trend of the cubic spline curve fit to the input data
• ability to transform series before and after interpolation (or without interpolation) by using any of the following:
  – constant shift or scale
  – sign change or absolute value
  – logarithm, exponential, square root, square, logistic, inverse logistic
  – lags, leads, differences
  – classical decomposition
  – bounds, trims, reverse series
  – centered moving, cumulative, or backward moving average
  – centered moving, cumulative, or backward moving range
  – centered moving, cumulative, or backward moving geometric mean
  – centered moving, cumulative, or backward moving maximum
  – centered moving, cumulative, or backward moving median
  – centered moving, cumulative, or backward moving minimum
  – centered moving, cumulative, or backward moving product
  – centered moving, cumulative, or backward moving corrected sum of squares
  – centered moving, cumulative, or backward moving uncorrected sum of squares
  – centered moving, cumulative, or backward moving rank
  – centered moving, cumulative, or backward moving standard deviation
  – centered moving, cumulative, or backward moving sum
  – centered moving, cumulative, or backward moving t-value
  – centered moving, cumulative, or backward moving variance
• support for a wide range of time series frequencies:
  – YEAR
  – SEMIYEAR
  – QUARTER
  – MONTH
  – SEMIMONTH
  – TENDAY
  – WEEK
  – WEEKDAY
  – DAY
  – HOUR
  – MINUTE
  – SECOND
• support for repeating or shifting the basic interval types to define a great variety of different frequencies, such as fiscal years, biennial periods, work shifts, and so forth

Refer to Chapter 3, "Working with Time Series Data," and Chapter 4, "Date Intervals, Formats, and Functions," for more information about time series data transformations.
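A sketch of frequency conversion with PROC EXPAND; the data set ANNUAL and the variables DATE and GDP are hypothetical.

   /* Convert an annual flow series to quarterly estimates by cubic   */
   /* spline interpolation; quarterly values sum to the annual total. */
   proc expand data=annual out=quarterly from=year to=qtr;
      id date;
      convert gdp / observed=total method=spline;
   run;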

Trend and Seasonal Analysis on Transaction Databases

The TIMESERIES procedure can accumulate transactional data to time series and perform trend and seasonal analysis on the accumulated time series. Time series analyses performed by the TIMESERIES procedure include the following:

• descriptive statistics relevant for time series data
• seasonal decomposition and seasonal adjustment analysis
• correlation analysis
• cross-correlation analysis

The TIMESERIES procedure includes the following features:

• ability to process large amounts of time-stamped transactional data
• statistical methods useful for large-scale time series analysis or (temporal) data mining
• output data sets stored in either a time series format (default) or a coordinate format (transposed)

The TIMESERIES procedure is normally used to prepare data for subsequent analysis that uses other SAS/ETS procedures or other parts of the SAS System. The time series format is most useful when the data are to be analyzed with SAS/ETS procedures. The coordinate format is most useful when the data are to be analyzed with SAS/STAT® procedures or SAS Enterprise Miner™. (For example, clustering time-stamped transactional data can be achieved by using the results of the TIMESERIES procedure with the clustering procedures of SAS/STAT and the nodes of SAS Enterprise Miner.)
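A sketch of accumulating time-stamped transactions into a regular series; the data set TRANS and the variables TSTAMP and AMOUNT are hypothetical.

   /* Accumulate transactions into a daily time series.               */
   proc timeseries data=trans out=daily;
      id tstamp interval=day accumulate=total;
      var amount;
   run;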


Access to Financial and Economic Databases

The DATASOURCE procedure and the SAS/ETS data access interface LIBNAME engines (SASECRSP, SASEFAME, and SASEHAVR) provide seamless, efficient access to time series data from data files supplied by a variety of commercial and governmental data vendors. The DATASOURCE procedure includes the following features:

• support for data files distributed by the following data vendors:
  – DRI/McGraw-Hill
  – FAME Information Services
  – HAVER ANALYTICS
  – Standard & Poor's Compustat Service
  – Center for Research in Security Prices (CRSP)
  – International Monetary Fund
  – U.S. Bureau of Labor Statistics
  – U.S. Bureau of Economic Analysis
  – Organization for Economic Cooperation and Development (OECD)
• ability to select the series, frequency, time range, and cross sections of extracted data
• ability to create an output data set containing descriptive information on the series available in the data file
• ability to read EBCDIC data on ASCII systems and vice versa

The SASECRSP interface LIBNAME engine includes the following features:

• enables random access to time series data residing in CRSPAccess databases
• provides a seamless interface between CRSP and SAS data processing
• uses the LIBNAME statement to enable you to specify which time series you would like to read from the CRSPAccess database and how you would like to perform selection
• enables access to CRSP Stock, CRSP/COMPUSTAT Merged (CCM), or CRSP Indices data
• provides convenient formats, informats, and functions for CRSP and SAS datetime conversions

The SASEFAME interface LIBNAME engine includes the following features:

• gives SAS and FAME users flexibility in accessing and processing time series data, case series, and formulas that reside in either a FAME database or a SAS data set
• provides a seamless interface between FAME and SAS data processing
• uses the LIBNAME statement to enable you to specify which time series you would like to read from the FAME database
• enables you to convert the selected time series to the same time scale
• works with the SAS DATA step to perform further subsetting and to store the resulting time series in a SAS data set
• performs further analysis if desired, either in the same SAS session or in another session at a later time
• supports the FAME CROSSLIST function for subsetting via BY groups using the CROSSLIST= option
  – you can use a FAME namelist that contains your BY variables for selection in the CROSSLIST
  – you can use a SAS input data set, INSET, that contains the BY selection variables along with the WHERE= option in your SASEFAME libref
• supports the use of FAME in a client/server environment that uses the FAME CHLI capability on your FAME server
• enables access to your FAME remote data when you specify the port number of the TCP/IP service that is defined for your FAME server and the node name of your FAME master server in your SASEFAME libref's physical path

The SASEHAVR interface LIBNAME engine includes the following features:

• enables Windows users random access to economic and financial data residing in a HAVER ANALYTICS Data Link Express (DLX) database
• makes the following types of HAVER data sets available:
  – United States Economic Indicators
  – Specialized Databases
  – Financial Indicators
  – Industry
  – Industrial Countries
  – Emerging Markets
  – International Organizations
  – Forecasts and As Reported Data
  – United States Regional
• enables you to limit the range of data that is read from the time series
• enables you to specify a desired conversion frequency; start dates are recommended on the LIBNAME statement to help you save resources when processing large databases or when processing a large number of observations
• enables you to use the WHERE, KEEP, or DROP statements in your DATA step to further subset your data
• supports use of the SQL procedure to create a view of your resulting SAS data set
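As a rough sketch of the LIBNAME engine style of access, the following assigns a SASEHAVR libref and reads from it in a DATA step. The physical path, the database member name UTIL, and the option values shown are hypothetical placeholders; see the SASEHAVR chapter for the supported options.

   /* Read quarterly series from a hypothetical Haver DLX database.   */
   libname lib1 sasehavr 'C:\haver\' freq=quarterly start=19900101;

   data work.usdata;
      set lib1.util;     /* UTIL is a hypothetical DLX database name */
   run;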

Spreadsheet Calculations and Financial Report Generation

The COMPUTAB procedure generates tabular reports using a programmable data table. The COMPUTAB procedure is especially useful when you need both the power of a programmable spreadsheet and a report-generation system and you want to set up a program to run in batch mode and generate routine reports. The COMPUTAB procedure includes the following features:

• report generation facility for creating tabular reports such as income statements, balance sheets, and other row and column reports for analyzing business or time series data
• ability to tailor report format to almost any desired specification
• use of the SAS programming language to provide complete control of the calculation and format of each item of the report
• report definition in terms of a data table on which programming statements operate
• ability for a single reference to a row or column to bring the entire row or column into a calculation
• ability to create new rows and columns (such as totals, subtotals, and ratios) with a single programming statement
• access to individual table values when needed
• built-in features to provide consolidation reports over summarization variables

Loan Analysis, Comparison, and Amortization

The LOAN procedure provides analysis and comparison of mortgages and other installment loans; it includes the following features:

• ability to specify contract terms for any number of different loans and ability to analyze and compare various financing alternatives
• analysis of four different types of loan contracts including the following:
  – fixed rate
  – adjustable rate
  – buy-down rate
  – balloon payment
• full control over adjustment terms for adjustable rate loans: life caps, adjustment frequency, and maximum and minimum rates
• support for a wide variety of payment and compounding intervals
• ability to incorporate initialization costs, discount points, down payments, and prepayments (uniform or lump-sum) in loan calculations
• analysis of different rate adjustment scenarios for variable rate loans including the following:
  – worst case
  – best case
  – fixed rate case
  – estimated case
• ability to make loan comparisons at different points in time
• ability to make loan comparisons at each analysis date on the basis of five different economic criteria:
  – present worth of cost (net present value of all payments to date)
  – true interest rate (internal rate of return to date)
  – current periodic payment
  – total interest paid to date
  – outstanding balance
• ability to base loan comparisons on either after-tax or before-tax analysis
• report of the best alternative when loans of equal amount are compared
• amortization schedules for each loan contract
• output that shows payment dates, rather than just payment sequence numbers, when the starting date is specified
• optional printing or output of the amortization schedules, loan summaries, and loan comparison information to SAS data sets
• ability to specify rounding of payments to any number of decimal places
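A sketch of comparing two fixed-rate alternatives at two analysis dates; all amounts, rates, and dates are hypothetical.

   /* Compare a 30-year and a 15-year fixed-rate loan after 5 and     */
   /* 10 years of payments.                                           */
   proc loan start=2009:1;
      fixed amount=150000 rate=7.5 life=360 label='30-year fixed';
      fixed amount=150000 rate=7.0 life=180 label='15-year fixed';
      compare at=(60 120);
   run;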

Time Series Forecasting System

SAS/ETS software includes the Time Series Forecasting System, a point-and-click application for exploring and analyzing univariate time series data. You can use the automatic model selection facility to select the best-fitting model for each time series, or you can use the system's diagnostic features and time series modeling tools interactively to develop forecasting models customized to best predict your time series. The system provides both graphical and statistical features to help you choose the best forecasting method for each series.

The system can be invoked by selecting Analysis ► Solutions, by the FORECAST command, and by clicking the Forecasting icon in the Data Analysis folder of the SAS Desktop.

The following is a brief summary of the features of the Time Series Forecasting system. With the system you can:

• use a wide variety of forecasting methods, including several kinds of exponential smoothing models, Winters method, and ARIMA (Box-Jenkins) models. You can also produce forecasts by combining the forecasts from several models.
• use predictor variables in forecasting models. Forecasting models can include time trend curves, regressors, intervention effects (dummy variables), adjustments you specify, and dynamic regression (transfer function) models.
• view plots of the data, predicted versus actual values, prediction errors, and forecasts with confidence limits. You can plot changes or transformations of series, zoom in on parts of the graphs, or plot autocorrelations.
• use hold-out samples to select the best forecasting method
• compare goodness-of-fit measures for any two forecasting models side-by-side or list all models sorted by a particular fit statistic
• view the predictions and errors for each model in a spreadsheet or view and compare the forecasts from any two models in a spreadsheet
• examine the fitted parameters of each forecasting model and their statistical significance
• control the automatic model selection process: the set of forecasting models considered, the goodness-of-fit measure used to select the best model, and the time period used to fit and evaluate models
• customize the system by adding forecasting models for the automatic model selection process and for point-and-click manual selection
• save your work in a project catalog
• print an audit trail of the forecasting process
• save and print system output including spreadsheets and graphs

Investment Analysis System

The Investment Analysis System is an interactive environment for analyzing the time value of money for a variety of investments:

• loans
• savings
• depreciations
• bonds
• generic cash flows

Various tools are provided to help analyze the value of investment alternatives: time value, periodic equivalent, internal rate of return, benefit-cost ratio, and breakeven analysis.

These analyses can help answer a number of questions you might have about your investments:

• Which option is more profitable or less costly?
• Is it better to buy or rent?
• Are the extra fees for refinancing at a lower interest rate justified?
• What is the balance of this account after saving this amount periodically for so many years?
• How much is legally tax-deductible?
• Is this a reasonable price?

Investment Analysis can be beneficial to users in many industries for a variety of decisions:

• manufacturing: cost justification of automation or any capital investment, replacement analysis of major equipment, or economic comparison of alternative designs
• government: setting funds for services
• finance: investment analysis and portfolio management for fixed-income securities

ODS Graphics

Many SAS/ETS procedures produce graphical output using the SAS Output Delivery System (ODS). The ODS Graphics system provides several advantages:

• Plots and graphs are output objects in the Output Delivery System (ODS) and can be manipulated with ODS commands.
• There is no need to write SAS/GRAPH statements or use special plotting macros.
• There are multiple output formats to choose from: HTML, GIF, and RTF.
• Templates control the appearance of plots.
• Styles control the color scheme.
• You can edit or create templates and styles for all graphs.

To enable graphical output from SAS/ETS procedures, you must use the following statement in your SAS program:

   ods graphics on;

The graphical output produced by many SAS/ETS procedures can be controlled by using the PLOTS= option in the PROC statement.

For more information about the features of the ODS Graphics system, including the many ways that you can control or customize the plots produced by SAS procedures, refer to Chapter 21, "Statistical Graphics Using ODS" (SAS/STAT User's Guide). For more information about the SAS Output Delivery System, refer to the SAS Output Delivery System: User's Guide.
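Putting the two together, a sketch of enabling ODS Graphics around a procedure and requesting all of its plots via PLOTS= (the data set and model are hypothetical):

   ods graphics on;

   proc arima data=sales plots=all;
      identify var=units;
      estimate p=1;
   run;

   ods graphics off;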

Related SAS Software

Many features not found in SAS/ETS software are available in other parts of the SAS System, such as Base SAS®, SAS® Forecast Server, SAS/STAT® software, SAS/OR® software, SAS/QC® software, SAS® Stat Studio, and SAS/IML® software.

If you do not find something you need in SAS/ETS software, you might be able to find it in SAS/STAT software or in Base SAS software. If you still do not find it, look in other SAS software products or contact SAS Technical Support staff.

The following subsections summarize the features of other SAS products that might be of interest to users of SAS/ETS software.

Base SAS Software

The features provided by SAS/ETS software are extensions to the features provided by Base SAS software. Many data management and reporting capabilities you need are part of Base SAS software. Refer to SAS Language Reference: Dictionary and Base SAS Procedures Guide for documentation of Base SAS software. In particular, refer to Base SAS Procedures Guide: Statistical Procedures for information about statistical analysis features included with Base SAS.

The following sections summarize Base SAS software features of interest to users of SAS/ETS software. See Chapter 3, "Working with Time Series Data," for further discussion of some of these topics as they relate to time series data and SAS/ETS software.


SAS DATA Step

The DATA step is your primary tool for reading and processing data in the SAS System. The DATA step provides a powerful general purpose programming language that enables you to perform all kinds of data processing tasks. The DATA step is documented in SAS Language Reference: Dictionary.
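A small DATA step sketch, deriving a log transformation and a first difference; the data set WORK.RETAIL and the variable SALES are hypothetical.

   data work.retail2;
      set work.retail;
      logsales = log(sales);    /* log transformation */
      dsales   = dif(sales);    /* first difference   */
   run;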

Base SAS Procedures

Base SAS software includes many useful SAS procedures, which are documented in Base SAS Procedures Guide and Base SAS Procedures Guide: Statistical Procedures. The following is a list of Base SAS procedures you might find useful:

CATALOG      for managing SAS catalogs

CHART        for printing charts and histograms

COMPARE      for comparing SAS data sets

CONTENTS     for displaying the contents of SAS data sets

COPY         for copying SAS data sets

CORR         for computing correlations

CPORT        for moving SAS data libraries between computer systems

DATASETS     for deleting or renaming SAS data sets

FCMP         for compiling functions for use in SAS programs. The SAS Function Compiler Procedure (FCMP) enables you to create, test, and store SAS functions and subroutines before you use them in other SAS procedures. PROC FCMP accepts slight variations of DATA step statements, and most features of the SAS programming language can be used in functions and subroutines that are processed by PROC FCMP.

FREQ         for computing frequency crosstabulations

MEANS        for computing descriptive statistics and summarizing or collapsing data over cross sections

PLOT         for printing scatter plots

PRINT        for printing SAS data sets

PROTO        for accessing external functions from the SAS System. The PROTO procedure enables you to register external functions that are written in the C or C++ programming languages. You can use these functions in SAS as well as in C-language structures and types. After the C-language functions are registered in PROC PROTO, they can be called from any SAS function or subroutine that is declared in the FCMP procedure, as well as from any SAS function, subroutine, or method block that is declared in the COMPILE procedure.

RANK         for computing rankings or order statistics

SORT         for sorting SAS data sets

SQL          for processing SAS data sets with Structured Query Language

STANDARD     for standardizing variables to a fixed mean and variance

TABULATE     for printing descriptive statistics in tabular format

TIMEPLOT     for plotting variables over time

TRANSPOSE    for transposing SAS data sets

UNIVARIATE   for computing descriptive statistics

Global Statements

Global statements can be specified anywhere in your SAS program, and they remain in effect until changed. Global statements are documented in SAS Language Reference: Dictionary. You may find the following SAS global statements useful:

FILENAME     for accessing data files

FOOTNOTE     for printing footnote lines at the bottom of each page

%INCLUDE     for including files of SAS statements

LIBNAME      for accessing SAS data libraries

OPTIONS      for setting various SAS system options

QUIT         for ending an interactive procedure step

RUN          for executing the preceding SAS statements

TITLE        for printing title lines at the top of each page

X            for issuing host operating system commands from within your SAS session
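A few global statements in context; the library path and title text are hypothetical.

   libname mylib 'C:\mydata';          /* point a libref at a directory */
   options nodate linesize=80;         /* set system options            */
   title 'Quarterly Sales Analysis';   /* title for subsequent output   */

   proc print data=mylib.sales;
   run;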

Some Base SAS statements can be used with any SAS procedure, including SAS/ETS procedures. These statements are not global, and they affect only the SAS procedure they are used with. These statements are documented in SAS Language Reference: Dictionary. The following Base SAS statements are useful with SAS/ETS procedures:

BY           for computing separate analyses for groups of observations

FORMAT       for assigning formats to variables

LABEL        for assigning descriptive labels to variables

WHERE        for subsetting data to restrict the range of data processed or to select or exclude observations from the analysis
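A sketch of BY and WHERE statements used with a SAS/ETS procedure; the data set, BY variable, and model are hypothetical, and the data must be sorted by the BY variable first.

   proc sort data=series;
      by region;
   run;

   proc autoreg data=series;
      by region;                    /* separate analysis per region */
      where year(date) >= 2000;     /* restrict the analysis range  */
      model y = x / nlag=1;
   run;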

SAS Functions

SAS functions can be used in DATA step programs and in the COMPUTAB and MODEL procedures. The following kinds of functions are available:

• character functions for manipulating character strings
• date and time functions for performing date and calendar calculations
• financial functions for performing financial calculations such as depreciation, net present value, periodic savings, and internal rate of return
• lagging and differencing functions for computing lags and differences
• mathematical functions for computing data transformations and other mathematical calculations
• probability functions for computing quantiles of statistical distributions and the significance of test statistics
• random number functions for simulation experiments
• sample statistics functions for computing means, standard deviations, kurtosis, and so forth

SAS functions are documented in SAS Language Reference: Dictionary. Chapter 3, "Working with Time Series Data," discusses the use of date, time, lagging, and differencing functions. Chapter 4, "Date Intervals, Formats, and Functions," contains a reference list of date and time functions.
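A sketch of the date and interval functions: INTNX advances a date by a number of intervals, and INTCK counts intervals between dates.

   data _null_;
      date      = mdy(3, 15, 2008);          /* 15MAR2008             */
      qtr_start = intnx('qtr', date, 0);     /* start of that quarter */
      nmonths   = intck('month', '01jan2008'd, date);
      put date= date9. qtr_start= date9. nmonths=;
   run;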

Formats, Informats, and Time Intervals

Base SAS software provides formats to control the printing of data values, informats to read data values, and time intervals to define the frequency of time series. See Chapter 4, "Date Intervals, Formats, and Functions," for more information.

SAS Forecast Studio

SAS Forecast Studio is part of the SAS Forecast Server product. It provides an interactive environment for modeling and forecasting very large collections of hierarchically organized time series, such as SKUs in product lines and sales regions of a retail business. Forecast Studio greatly extends the capabilities provided by the Time Series Forecasting System included with SAS/ETS and described in Part IV.

Forecast Studio is documented in SAS Forecast Server User's Guide.

SAS High-Performance Forecasting

SAS High-Performance Forecasting (HPF) software provides a system of SAS procedures for large-scale automatic forecasting in business, government, and academic applications. Major uses of High-Performance Forecasting procedures include forecasting, forecast scoring, market response modeling, and time series data mining.

The software implements the following automatic forecasting process:

• accumulates the time-stamped data to form a fixed-interval time series
• diagnoses the time series using time series analysis techniques
• creates a list of candidate model specifications based on the diagnostics
• fits each candidate model specification to the time series
• generates forecasts for each candidate fitted model
• selects the most appropriate model specification based on either in-sample or holdout-sample evaluation using a model selection criterion
• refits the selected model specification to the entire range of the time series
• creates a forecast score from the selected fitted model
• generates forecasts from the forecast score
• evaluates the forecast using in-sample analysis
• provides for out-of-sample forecast performance analysis
• performs top-down, middle-out, or bottom-up reconciliations of forecasts in the hierarchy
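A sketch of automatic forecasting with PROC HPF; the data set SALES and the variables DATE and UNITS are hypothetical.

   /* Accumulate, diagnose, select, fit, and forecast automatically.  */
   proc hpf data=sales out=fcst lead=12;
      id date interval=month accumulate=total;
      forecast units;
   run;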

SAS/GRAPH Software

SAS/GRAPH software includes procedures that create two- and three-dimensional high-resolution color graphics plots and charts. You can generate output that graphs the relationship of data values to one another, enhance existing graphs, or simply create graphics output that is not tied to data. With the addition of ODS Graphics features to SAS/ETS procedures, there is now less need for the use of SAS/GRAPH procedures with SAS/ETS. However, SAS/GRAPH procedures allow you to create additional graphical displays of your results.

SAS/GRAPH software can produce the following types of output:

• charts
• plots
• maps
• text
• three-dimensional graphs

With SAS/GRAPH software you can produce high-resolution color graphics plots of time series data.


SAS/STAT Software

SAS/STAT software is of interest to users of SAS/ETS software because many econometric and other statistical methods not included in SAS/ETS software are provided in SAS/STAT software. SAS/STAT software includes procedures for a wide range of statistical methodologies including the following:

• logistic regression
• censored regression
• principal component analysis
• structural equation models using covariance structure analysis
• factor analysis
• survival analysis
• discriminant analysis
• cluster analysis
• categorical data analysis; log-linear and conditional logistic models
• general linear models
• mixed linear and nonlinear models
• generalized linear models
• response surface analysis
• kernel density estimation
• LOESS regression
• spline regression
• two-dimensional kriging
• multiple imputation for missing values
• survey data analysis


SAS/IML Software

SAS/IML software gives you access to a powerful and flexible programming language (Interactive Matrix Language) in a dynamic, interactive environment. The fundamental object of the language is a data matrix. You can use SAS/IML software interactively (at the statement level) to see results immediately, or you can store statements in a module and execute them later. The programming is dynamic because necessary activities such as memory allocation and dimensioning of matrices are done automatically.

You can access built-in operators and call routines to perform complex tasks such as matrix inversion or eigenvector generation. You can define your own functions and subroutines using SAS/IML modules. You can perform operations on an entire data matrix. You have access to a wide choice of data management commands. You can read, create, and update SAS data sets from inside SAS/IML software without ever using the DATA step.

SAS/IML software is of interest to users of SAS/ETS software because it enables you to program your own econometric and time series methods in the SAS System. It contains subroutines for time series operators and for general function optimization. If you need to perform a statistical calculation not provided as an automated feature by SAS/ETS or other SAS software, you can use SAS/IML software to program the matrix equations for the calculation.

Kalman Filtering and Time Series Analysis in SAS/IML

SAS/IML software includes CALL routines and functions for Kalman filtering and time series analysis, which perform the following:

• generate univariate, multivariate, and fractional time series
• compute likelihood functions of ARMA, VARMA, and ARFIMA models
• compute autocovariance functions of ARMA, VARMA, and ARFIMA models
• check the stationarity of ARMA and VARMA models
• filter and smooth time series models by using the Kalman method
• fit AR, periodic AR, time-varying coefficient AR, VAR, and ARFIMA models
• handle Bayesian seasonal adjustment models
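As a sketch of the likelihood routines, the following evaluates the log likelihood of an ARMA(1,1) model for a toy series with the SAS/IML ARMALIK subroutine; the series values and coefficients are hypothetical.

   proc iml;
      x = {2.1, 1.8, 2.4, 2.6, 2.2, 1.9, 2.5, 2.3};
      phi   = {1 -0.5};     /* AR polynomial (1 - 0.5B) */
      theta = {1  0.3};     /* MA polynomial (1 + 0.3B) */
      call armalik(lnl, resid, std, x, phi, theta);
      print lnl;            /* log likelihood and related quantities */
   quit;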

SAS Stat Studio

SAS Stat Studio is a highly interactive tool for data exploration and analysis. Stat Studio runs on a PC in the Microsoft Windows operating environment. You can use Stat Studio to do the following:

• explore data through graphs linked across multiple windows
• transform data
• subset data
• analyze univariate distributions
• discover structure and features in multivariate data
• fit and evaluate explanatory models
• create your own customized statistical graphics
• add legends, curves, maps, or other custom features to statistical graphics
• develop interactive programs that use dialog boxes
• extend the built-in analyses by calling SAS procedures
• create custom analyses
• repeat an analysis on different data
• extend the results of SAS procedures by using IML
• share analyses with colleagues who also use Stat Studio
• call functions from libraries written in C/C++, FORTRAN, or Java

See the SAS Stat Studio documentation for more information.

SAS/OR Software

SAS/OR software provides SAS procedures for operations research and project planning and includes a menu-driven system for project management. SAS/OR software has features for the following:

• solving transportation problems
• linear, integer, and mixed-integer programming
• nonlinear programming and optimization
• scheduling projects
• plotting Gantt charts
• drawing network diagrams
• solving optimal assignment problems
• network flow programming

SAS/OR software might be of interest to users of SAS/ETS software for its mathematical programming features. In particular, the NLP and OPTMODEL procedures in SAS/OR software solve nonlinear programming problems and can be used for constrained and unconstrained maximization of user-defined likelihood functions. See SAS/OR User's Guide: Mathematical Programming for more information.

SAS/QC Software

SAS/QC software provides a variety of procedures for statistical quality control and quality improvement. SAS/QC software includes procedures for the following:

• Shewhart control charts
• cumulative sum control charts
• moving average control charts
• process capability analysis
• Ishikawa diagrams
• Pareto charts
• experimental design

SAS/QC software also includes the SQC menu system for interactive application of statistical quality control methods and the ADX Interface for experimental design.

MLE for User-Defined Likelihood Functions

Several SAS procedures enable you to do maximum likelihood estimation of parameters in an arbitrary model with a likelihood function that you define: PROC MODEL, PROC NLP, PROC OPTMODEL, and PROC IML.

The MODEL procedure in SAS/ETS software enables you to minimize general log-likelihood functions for the error term of a model.

The NLP and OPTMODEL procedures in SAS/OR software are general nonlinear programming procedures that can maximize a general function subject to linear equality or inequality constraints. You can use PROC NLP or OPTMODEL to maximize a user-defined nonlinear likelihood function.

You can use the IML procedure in SAS/IML software for maximum likelihood problems. The optimization routines used by PROC NLP are available through IML subroutines. You can write the likelihood function in the SAS/IML matrix language and call the constrained and unconstrained nonlinear programming subroutines to maximize the likelihood function with respect to the parameter vector.
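A sketch of the IML approach: maximize a normal log likelihood with the NLPNRA (Newton-Raphson) subroutine. The toy data and starting values are hypothetical.

   proc iml;
      y = {1.2, 0.7, 1.9, 1.1, 0.8};
      start loglik(parm) global(y);
         mu = parm[1];  sigma = parm[2];
         pi = constant('pi');
         return( sum( -0.5*log(2*pi*sigma##2) - (y-mu)##2/(2*sigma##2) ) );
      finish;
      opt = {1 0};                            /* 1 = maximize, no printing */
      call nlpnra(rc, est, "loglik", {0 1}, opt);
      print est;                              /* ML estimates of mu, sigma */
   quit;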


JMP® Software

JMP software uses a flexible graphical interface to display and analyze data. JMP dynamically links statistics and graphics so you can easily explore data, make discoveries, and gain the knowledge you need to make better decisions. JMP provides a comprehensive set of statistical tools as well as design of experiments (DOE) and advanced quality control (QC and SPC) tools for Six Sigma in a single package. JMP is software for interactive statistical graphics and includes:

• a data table window for editing, entering, and manipulating data
• a broad range of graphical and statistical methods for data analysis
• a facility for grouping data and computing summary statistics
• JMP scripting language (JSL)—a scripting language for saving and creating frequently used routines
• JMP automation
• Formula Editor—a formula editor for each table column to compute values as needed
• linear models, correlations, and multivariate methods
• design of experiments module
• options to highlight and display subsets of data
• statistical quality control and variability charts—special plots, charts, and communication capability for quality-improvement techniques
• survival analysis
• time series analysis, which includes the following:
  – Box-Jenkins ARIMA forecasting
  – seasonal ARIMA forecasting
  – transfer function modeling
  – smoothing models: Winters method, single, double, linear, damped trend linear, and seasonal exponential smoothing
  – diagnostic charts (autocorrelation, partial autocorrelation, and variogram) and statistics of fit
  – a model comparison table to compare all forecasts generated
  – spectral density plots and white noise tests
• tools for printing and for moving analysis results between applications


SAS Enterprise Guide

SAS Enterprise Guide has the following features:

•  integration with the SAS9 platform:
   –  open metadata repository (OMR) integration
   –  SAS report integration:
      •  create report interface
      •  ODS support
      •  Web report studio integration
   –  access to information maps
   –  ETL studio impact analysis
   –  ESRI integration within the OLAP analyzer
   –  data mining scoring task
•  the user interface and workflow:
   –  process flow
   –  ability to create stored processes from process flows
   –  SAS folders window
   –  project parameters
   –  query builder interface
   –  code node
   –  OLAP analyzer:
      •  ESRI integration
      •  tree-diagram-based OLAP explorer
      •  SAS report snapshots
      •  SAS Web OLAP viewer for .NET
   –  ability to create EG projects
   –  workspace maximization

With Enterprise Guide, you can perform time series analysis with the following EG procedures:

•  prepare time series data: the Prepare Time Series Data task can be used to make data more suitable for analysis by other time series tasks.
•  create time series data: the Create Time Series Data wizard helps you convert transactional data into fixed-interval time series. Transactional data are time-stamped data collected over time with irregular or varied frequency.
•  ARIMA Modeling and Forecasting task
•  Basic Forecasting task
•  Regression Analysis with Autoregressive Errors
•  Regression Analysis of Panel Data


SAS Add-In for Microsoft Office

The main time series tasks in SAS Add-In for Microsoft Office (AMO) are as follows:

•  Prepare Time Series Data
•  Basic Forecasting
•  ARIMA Modeling and Forecasting
•  Regression Analysis with Autoregressive Errors
•  Regression Analysis of Panel Data
•  Create Time Series Data
•  Forecast Studio Create Project
•  Forecast Studio Open Project
•  Forecast Studio Submit Overrides

SAS Enterprise Miner: Time Series Node

SAS Enterprise Miner is the SAS solution for data mining, streamlining the data mining process to create highly accurate predictive and descriptive models. Enterprise Miner's process flow diagram eliminates the need for manual coding and reduces the model development time for both business analysts and statisticians. The system is customizable and extensible; users can integrate their code and build new nodes for redistribution.

The Time Series node is a method of investigating time series data. It belongs to the Modify category of the SAS SEMMA (sample, explore, modify, model, assess) data mining process. The Time Series node enables you to understand trends and seasonal variation in large amounts of time series and transactional data.

The Time Series node in SAS Enterprise Miner enables you to do the following:

•  perform time series analysis
•  perform forecasting
•  work with transactional data


SAS Risk Products

The SAS Risk products include SAS Risk Dimensions®, SAS Credit Risk Management for Banking, SAS OpRisk VaR, and SAS OpRisk Monitor.

The analytical methods of SAS Risk Dimensions measure market risk and credit risk. SAS Risk Dimensions creates an environment where market and position data are staged for analysis using SAS data access and warehousing methodologies. SAS Risk Dimensions delivers a full range of modern credit, market, and operational risk analysis techniques, including the following:

•  mark-to-market
•  scenario analysis
•  profit/loss curves and surfaces
•  sensitivity analysis
•  delta normal VaR
•  historical simulation VaR
•  Monte Carlo VaR
•  current exposure
•  potential exposure
•  credit VaR
•  optimization

SAS Credit Risk Management for Banking is a complete end-to-end application for measuring, exploring, managing, and reporting credit risk. SAS Credit Risk Management for Banking integrates data access, mapping, enrichment, and aggregation with advanced analytics and flexible reporting, all in an open, extensible, client-server framework. SAS Credit Risk Management for Banking enables you to do the following:

•  access and aggregate credit risk data across disparate operating systems and sources
•  seamlessly integrate credit scoring/internal rating with credit portfolio risk assessment
•  accurately measure, monitor, and report potential credit risk exposures within entities of an organization and aggregated across the entire organization, both on the counterparty level and the portfolio level
•  evaluate alternative strategies for pricing, hedging, or transferring credit risk
•  optimize the allocation of credit risk mitigants or assign the mitigants to lower the regulatory capital requirement
•  optimize the allocation of regulatory capital and economic capital
•  facilitate regulatory compliance and risk disclosure requirements for a wide variety of regulations such as Basel I, Basel II, and the Capital Requirements Directive (CAD III)


Chapter 3

Working with Time Series Data

Contents
Overview . . . 64
Time Series and SAS Data Sets . . . 65
    Introduction . . . 65
    Reading a Simple Time Series . . . 66
Dating Observations . . . 67
    SAS Date, Datetime, and Time Values . . . 68
    Reading Date and Datetime Values with Informats . . . 69
    Formatting Date and Datetime Values . . . 70
    The Variables DATE and DATETIME . . . 72
    Sorting by Time . . . 72
Subsetting Data and Selecting Observations . . . 73
    Subsetting SAS Data Sets . . . 73
    Using the WHERE Statement with SAS Procedures . . . 74
    Using SAS Data Set Options . . . 75
Storing Time Series in a SAS Data Set . . . 75
    Standard Form of a Time Series Data Set . . . 76
    Several Series with Different Ranges . . . 77
    Missing Values and Omitted Observations . . . 78
    Cross-Sectional Dimensions and BY Groups . . . 79
    Interleaved Time Series . . . 80
    Output Data Sets of SAS/ETS Procedures . . . 82
Time Series Periodicity and Time Intervals . . . 84
    Specifying Time Intervals . . . 84
    Using Intervals with SAS/ETS Procedures . . . 85
    Time Intervals, the Time Series Forecasting System, and the Time Series Viewer . . . 86
Plotting Time Series . . . 86
    Using the Time Series Viewer . . . 86
    Using PROC SGPLOT . . . 87
    Using PROC PLOT . . . 92
    Using PROC TIMEPLOT . . . 93
    Using PROC GPLOT . . . 94
Calendar and Time Functions . . . 95
    Computing Dates from Calendar Variables . . . 96
    Computing Calendar Variables from Dates . . . 96
    Converting between Date, Datetime, and Time Values . . . 97
    Computing Datetime Values . . . 97
    Computing Calendar and Time Variables . . . 98
Interval Functions INTNX and INTCK . . . 98
    Incrementing Dates by Intervals . . . 99
    Alignment of SAS Dates . . . 100
    Computing the Width of a Time Interval . . . 101
    Computing the Ceiling of an Interval . . . 102
    Counting Time Intervals . . . 102
    Checking Data Periodicity . . . 103
    Filling In Omitted Observations in a Time Series Data Set . . . 104
    Using Interval Functions for Calendar Calculations . . . 104
Lags, Leads, Differences, and Summations . . . 105
    The LAG and DIF Functions . . . 105
    Multiperiod Lags and Higher-Order Differencing . . . 109
    Percent Change Calculations . . . 110
    Leading Series . . . 112
    Summing Series . . . 113
Transforming Time Series . . . 115
    Log Transformation . . . 115
    Other Transformations . . . 117
    The EXPAND Procedure and Data Transformations . . . 118
Manipulating Time Series Data Sets . . . 118
    Splitting and Merging Data Sets . . . 118
    Transposing Data Sets . . . 119
Time Series Interpolation . . . 124
    Interpolating Missing Values . . . 124
    Interpolating to a Higher or Lower Frequency . . . 125
    Interpolating between Stocks and Flows, Levels and Rates . . . 125
Reading Time Series Data . . . 125
    Reading a Simple List of Values . . . 126
    Reading Fully Described Time Series in Transposed Form . . . 126

Overview

This chapter discusses working with time series data in the SAS System. The following topics are included:

•  dating time series and working with SAS date and datetime values
•  subsetting data and selecting observations
•  storing time series data in SAS data sets
•  specifying time series periodicity and time intervals
•  plotting time series
•  using calendar and time interval functions
•  computing lags and other functions across time
•  transforming time series
•  transposing time series data sets
•  interpolating time series
•  reading time series data recorded in different ways

In general, this chapter focuses on using features of the SAS programming language and not on features of SAS/ETS software. However, since SAS/ETS procedures are used to analyze time series, understanding how to use the SAS programming language to work with time series data is important for the effective use of SAS/ETS software.

You do not need to read this chapter to use SAS/ETS procedures. If you are already familiar with SAS programming you might want to skip this chapter, or you can refer to sections of this chapter for help on specific time series data processing questions.

Time Series and SAS Data Sets

Introduction

To analyze data with the SAS System, data values must be stored in a SAS data set. A SAS data set is a matrix (or table) of data values organized into variables and observations. The variables in a SAS data set label the columns of the data matrix, and the observations in a SAS data set are the rows of the data matrix. You can also think of a SAS data set as a kind of file, with the observations representing records in the file and the variables representing fields in the records. (See SAS Language Reference: Concepts for more information about SAS data sets.)

Usually, each observation represents the measurement of one or more variables for the individual subject or item observed. Often, the values of some of the variables in the data set are used to identify the individual subjects or items that the observations measure. These identifying variables are referred to as ID variables. For many kinds of statistical analysis, only relationships among the variables are of interest, and the identity of the observations does not matter. ID variables might not be relevant in such a case.


However, for time series data the identity and order of the observations are crucial. A time series is a set of observations made at a succession of equally spaced points in time.

For example, if the data are monthly sales of a company's product, the variable measured is sales of the product and the unit observed is the operation of the company during each month. These observations can be identified by year and month. If the data are quarterly gross national product, the variable measured is final goods production and the unit observed is the economy during each quarter. These observations can be identified by year and quarter.

For time series data, the observations are identified and related to each other by their position in time. Since SAS does not assume any particular structure to the observations in a SAS data set, there are some special considerations needed when storing time series in a SAS data set.

The main considerations are how to associate dates with the observations and how to structure the data set so that SAS/ETS procedures and other SAS procedures recognize the observations of the data set as constituting time series. These issues are discussed in following sections.

Reading a Simple Time Series

Time series data can be recorded in many different ways. The section "Reading Time Series Data" on page 125 discusses some of the possibilities. The example below shows a simple case.

The following SAS statements read monthly values of the U.S. Consumer Price Index for June 1990 through July 1991. The data set USCPI is shown in Figure 3.1.

   data uscpi;
      input year month cpi;
      datalines;
   1990  6  129.9
   1990  7  130.4

   ... more lines ...

   proc print data=uscpi;
   run;


Figure 3.1 Time Series Data

   Obs    year    month      cpi

     1    1990       6     129.9
     2    1990       7     130.4
     3    1990       8     131.6
     4    1990       9     132.7
     5    1990      10     133.5
     6    1990      11     133.8
     7    1990      12     133.8
     8    1991       1     134.6
     9    1991       2     134.8
    10    1991       3     135.0
    11    1991       4     135.2
    12    1991       5     135.6
    13    1991       6     136.0
    14    1991       7     136.2

When a time series is stored in the manner shown by this example, the terms series and variable can be used interchangeably. There is one observation per row and one series/variable per column.

Dating Observations

The SAS System supports special date, datetime, and time values, which make it easy to represent dates, perform calendar calculations, and identify the time period of observations in a data set.

The preceding example uses the ID variables YEAR and MONTH to identify the time periods of the observations. For a quarterly data set, you might use YEAR and QTR as ID variables. A daily data set might have the ID variables YEAR, MONTH, and DAY. Clearly, it would be more convenient to have a single ID variable that could be used to identify the time period of observations, regardless of their frequency.

The following section, "SAS Date, Datetime, and Time Values" on page 68, discusses how the SAS System represents dates and times internally and how to specify date, datetime, and time values in a SAS program. The section "Reading Date and Datetime Values with Informats" on page 69 discusses how to read in date and time values from data records and how to control the display of date and datetime values in SAS output. Later sections discuss other issues concerning date and datetime values, specifying time intervals, data periodicity, and calendar calculations.

SAS date and datetime values and the other features discussed in the following sections are also described in SAS Language Reference: Dictionary. Reference documentation on these features is also provided in Chapter 4, "Date Intervals, Formats, and Functions."


SAS Date, Datetime, and Time Values

SAS Date Values

SAS software represents dates as the number of days since a reference date. The reference date, or date zero, used for SAS date values is 1 January 1960. For example, 3 February 1960 is represented by SAS as 33. The SAS date for 17 October 1991 is 11612. SAS software correctly represents dates from the year 1582 to the year 20,000.

Dates represented in this way are called SAS date values. Any numeric variable in a SAS data set whose values represent dates in this way is called a SAS date variable.

Representing dates as the number of days from a reference date makes it easy for the computer to store them and perform calendar calculations, but these numbers are not meaningful to users. However, you never have to use SAS date values directly, since SAS automatically converts between this internal representation and ordinary ways of expressing dates, provided that you indicate the format with which you want the date values to be displayed. (Formatting of date values is explained in the section "Formatting Date and Datetime Values" on page 70.)

Century of Dates Represented with Two-Digit Year Values

SAS software informats, functions, and formats can process dates that are represented with two-digit year values. The century assumed for a two-digit year value can be controlled with the YEARCUTOFF= option in the OPTIONS statement. The YEARCUTOFF= system option controls how dates with two-digit year values are interpreted by specifying the first year of a 100-year span. The default value for the YEARCUTOFF= option is 1920. Thus by default the year '17' is interpreted as 2017, while the year '25' is interpreted as 1925. (See SAS Language Reference: Dictionary for more information about YEARCUTOFF=.)
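For example, the following statements (a small illustrative sketch) show how the YEARCUTOFF= setting determines the century assigned to two-digit years in date constants:

   options yearcutoff=1920;
   data _null_;
      d1 = '17oct17'd;   /* two-digit year 17 falls in the span 1920-2019: 17OCT2017 */
      d2 = '17oct25'd;   /* two-digit year 25 falls in the span 1920-2019: 17OCT1925 */
      put d1= date9. d2= date9.;
   run;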

SAS Date Constants

SAS date values are written in a SAS program by placing the dates in single quotes followed by a D. The date is represented by the day of the month, the three-letter abbreviation of the month name, and the year. For example, SAS reads the value '17OCT1991'D the same as 11612, the SAS date value for 17 October 1991. Thus, the following SAS statements print DATE=11612:

   data _null_;
      date = '17oct1991'd;
      put date=;
   run;

The year value can be given with two or four digits, so ‘17OCT91’D is the same as ‘17OCT1991’D.


SAS Datetime Values and Datetime Constants

To represent both the time of day and the date, SAS uses datetime values. SAS datetime values represent the date and time as the number of seconds the time is from a reference time. The reference time, or time zero, used for SAS datetime values is midnight, 1 January 1960. Thus, for example, the SAS datetime value for 17 October 1991 at 2:45 in the afternoon is 1003329900.

To specify datetime constants in a SAS program, write the date and time in single quotes followed by DT. To write the date and time in a SAS datetime constant, write the date part using the same syntax as for date constants, and follow the date part with the hours, the minutes, and the seconds, separating the parts with colons. The seconds are optional. For example, in a SAS program you would write 17 October 1991 at 2:45 in the afternoon as '17OCT91:14:45'DT. SAS reads this as 1003329900. Table 3.1 shows some other examples of datetime constants.

Table 3.1 Examples of Datetime Constants

   Datetime Constant           Time
   '17OCT1991:14:45:32'DT      32 seconds past 2:45 p.m., 17 October 1991
   '17OCT1991:12:5'DT          12:05 p.m., 17 October 1991
   '17OCT1991:2:0'DT           2:00 a.m., 17 October 1991
   '17OCT1991:0:0'DT           midnight, 17 October 1991

SAS Time Values

The SAS System also supports time values. SAS time values are just like datetime values, except that the date part is not given. To write a time value in a SAS program, write the time the same as for a datetime constant, but use T instead of DT. For example, 2:45:32 p.m. is written '14:45:32'T. Time values are represented by a number of seconds since midnight, so SAS reads '14:45:32'T as 53132.

SAS time values are not very useful for identifying time series, since usually both the date and the time of day are needed. Time values are not discussed further in this book.

Reading Date and Datetime Values with Informats

SAS provides a selection of informats for reading SAS date and datetime values from date and time values recorded in ordinary notations. A SAS informat is an instruction that converts the values from a character-string representation into the internal numerical value of a SAS variable. Date informats convert dates from ordinary notations used to enter them to SAS date values; datetime informats convert date and time from ordinary notation to SAS datetime values.


For example, the following SAS statements read monthly values of the U.S. Consumer Price Index. Since the data are monthly, you could identify the date with the variables YEAR and MONTH, as in the previous example. Instead, in this example the time periods are coded as a three-letter month abbreviation followed by the year. The informat MONYY. is used to read month-year dates coded this way and to express them as SAS date values for the first day of the month, as follows:

   data uscpi;
      input date : monyy7. cpi;
      format date monyy7.;
      label cpi = "US Consumer Price Index";
      datalines;
   jun1990 129.9
   jul1990 130.4

   ... more lines ...

The SAS System provides informats for most common notations for dates and times. See Chapter 4 for more information about the date and datetime informats available.

Formatting Date and Datetime Values

SAS provides formats to convert the internal representation of date and datetime values used by SAS to ordinary notations for dates and times. Several different formats are available for displaying dates and datetime values in most of the commonly used notations.

A SAS format is an instruction that converts the internal numerical value of a SAS variable to a character string that can be printed or displayed. Date formats convert SAS date values to a readable form; datetime formats convert SAS datetime values to a readable form.

In the preceding example, the variable DATE was set to the SAS date value for the first day of the month for each observation. If the data set USCPI were printed or otherwise displayed, the values shown for DATE would be the number of days since 1 January 1960. (See the "DATE with no format" column in Figure 3.2.) To display date values appropriately, use the FORMAT statement.

The following example processes the data set USCPI to make several copies of the variable DATE and uses a FORMAT statement to give different formats to these copies. The format cases shown are the MONYY7. format (for the DATE variable), the DATE9. format (for the DATE1 variable), and no format (for the DATE0 variable). The PROC PRINT output in Figure 3.2 shows the effect of the different formats on how the date values are printed.

   data fmttest;
      set uscpi;
      date0 = date;
      date1 = date;
      label date  = "DATE with MONYY7. format"
            date1 = "DATE with DATE9. format"
            date0 = "DATE with no format";
      format date monyy7. date1 date9.;
   run;

   proc print data=fmttest label;
   run;

Figure 3.2 SAS Date Values Printed with Different Formats

         DATE with         US Consumer    DATE with    DATE with
   Obs   MONYY7. format    Price Index    no format    DATE9. format

     1   JUN1990               129.9        11109      01JUN1990
     2   JUL1990               130.4        11139      01JUL1990
     3   AUG1990               131.6        11170      01AUG1990
     4   SEP1990               132.7        11201      01SEP1990
     5   OCT1990               133.5        11231      01OCT1990
     6   NOV1990               133.8        11262      01NOV1990
     7   DEC1990               133.8        11292      01DEC1990
     8   JAN1991               134.6        11323      01JAN1991
     9   FEB1991               134.8        11354      01FEB1991
    10   MAR1991               135.0        11382      01MAR1991
    11   APR1991               135.2        11413      01APR1991
    12   MAY1991               135.6        11443      01MAY1991
    13   JUN1991               136.0        11474      01JUN1991
    14   JUL1991               136.2        11504      01JUL1991

The appropriate format to use for SAS date or datetime valued ID variables depends on the sampling frequency or periodicity of the time series. Table 3.2 shows recommended formats for common data sampling frequencies and shows how the date '17OCT1991'D or the datetime value '17OCT1991:14:45:32'DT is displayed by these formats.

Table 3.2 Formats for Different Sampling Frequencies

   ID values       Periodicity    FORMAT          Example
   SAS date        annual         YEAR4.          1991
                   quarterly      YYQC6.          1991:4
                   monthly        MONYY7.         OCT1991
                   weekly         WEEKDATX23.     Thursday, 17 Oct 1991
                   daily          DATE9.          17OCT1991
   SAS datetime    hourly         DATETIME10.     17OCT91:14
                   minutes        DATETIME13.     17OCT91:14:45
                   seconds        DATETIME16.     17OCT91:14:45:32

See Chapter 4, “Date Intervals, Formats, and Functions,” for more information about the date and datetime formats available.


The Variables DATE and DATETIME

SAS/ETS procedures enable you to identify time series observations in many different ways to suit your needs. As discussed in preceding sections, you can use a combination of several ID variables, such as YEAR and MONTH for monthly data.

However, using a single SAS date or datetime ID variable is more convenient and enables you to take advantage of some features SAS/ETS procedures provide for processing ID variables. One such feature is automatic extrapolation of the ID variable to identify forecast observations. These features are discussed in following sections.

Thus, it is a good practice to include a SAS date or datetime ID variable in all the time series SAS data sets you create. It is also a good practice to always give the date or datetime ID variable a format appropriate for the data periodicity. (For information about creating SAS date and datetime values from multiple ID variables, see the section "Computing Dates from Calendar Variables" on page 96.)

You can assign a SAS date- or datetime-valued ID variable any name that conforms to SAS variable name requirements. However, you might find working with time series data in SAS easier and less confusing if you adopt the practice of always using the same name for the SAS date or datetime ID variable. This book always names the date- or datetime-valued ID variable DATE if it contains SAS date values or DATETIME if it contains SAS datetime values. This makes it easy to recognize the ID variable and also makes it easy to recognize whether this ID variable uses SAS date or datetime values.

Sorting by Time

Many SAS/ETS procedures assume the data are in chronological order. If the data are not in time order, you can use the SORT procedure to sort the data set. For example:

   proc sort data=a;
      by date;
   run;

There are many ways of coding the time ID variable or variables, and some ways do not sort correctly. If you use SAS date or datetime ID values as suggested in the preceding section, you do not need to be concerned with this issue. But if you encode date values in nonstandard ways, you need to consider whether your ID variables will sort. SAS date and datetime values always sort correctly, as do combinations of numeric variables such as YEAR, MONTH, and DAY used together. Julian dates also sort correctly. (Julian dates are numbers of the form yyddd, where yy is the year and ddd is the day of the year. For example, 17 October 1991 has the Julian date value 91290.)


Calendar dates such as numeric values coded as mmddyy or ddmmyy do not sort correctly. Character variables that contain display values of dates, such as dates in the notation produced by SAS date formats, generally do not sort correctly.
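If your dates are coded in one of these nonsortable forms, one workable approach (sketched below with hypothetical variable and data set names) is to convert them to SAS date values before sorting. Here a numeric variable MMDDYY that holds values such as 101791 is converted with the MMDDYY6. informat:

   data full2;
      set full0;                                  /* FULL0 and MMDDYY are hypothetical names */
      date = input(put(mmddyy, z6.), mmddyy6.);   /* mmddyy number converted to a SAS date value */
      format date date9.;
   run;

   proc sort data=full2;
      by date;
   run;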

Subsetting Data and Selecting Observations

It is often necessary to subset data for analysis. You might need to subset data to do the following:

•  restrict the time range. For example, you want to perform a time series analysis using only recent data and ignoring observations from the distant past.
•  select cross sections of the data. (See the section "Cross-Sectional Dimensions and BY Groups" on page 79.) For example, you have a data set with observations over time for each of several states, and you want to analyze the data for a single state.
•  select particular kinds of time series from an interleaved-form data set. (See the section "Interleaved Time Series" on page 80.) For example, you have an output data set produced by the FORECAST procedure that contains both forecast and confidence limits observations, and you want to extract only the forecast observations.
•  exclude particular observations. For example, you have an outlier in your time series, and you want to exclude this observation from the analysis.

You can subset data either by using the DATA step to create a subset data set or by using a WHERE statement with the SAS procedure that analyzes the data. A typical WHERE statement used in a procedure has the following form:

   proc arima data=full;
      where '31dec1993'd < date < '26mar1994'd;
      identify var=close;
   run;

For complete reference documentation on the WHERE statement, see SAS Language Reference: Dictionary.

Subsetting SAS Data Sets

To create a subset data set, specify the name of the subset data set in the DATA statement, bring in the full data set with a SET statement, and specify the subsetting criteria with either subsetting IF statements or WHERE statements.

For example, suppose you have a data set that contains time series observations for each of several states. The following DATA step uses a WHERE statement to exclude observations with dates before 1970 and uses a subsetting IF statement to select observations for the state NC:


   data subset;
      set full;
      where date >= '1jan1970'd;
      if state = 'NC';
   run;

In this case, it makes no difference logically whether the WHERE statement or the IF statement is used, and you can combine several conditions in one subsetting statement. The following statements produce the same results as the previous example:

   data subset;
      set full;
      if date >= '1jan1970'd & state = 'NC';
   run;

The WHERE statement acts on the input data sets specified in the SET statement before observations are processed by the DATA step program, whereas the IF statement is executed as part of the DATA step program. If the input data set is indexed, using the WHERE statement can be more efficient than using the IF statement. However, the WHERE statement can refer only to variables in the input data set, not to variables computed by the DATA step program.

To subset the variables of a data set, use KEEP or DROP statements or use KEEP= or DROP= data set options. See SAS Language Reference: Dictionary for information about KEEP and DROP statements and SAS data set options. For example, suppose you want to subset the data set as in the preceding example, but you want to include in the subset data set only the variables DATE, X, and Y. You could use the following statements:

   data subset;
      set full;
      if date >= '1jan1970'd & state = 'NC';
      keep date x y;
   run;

Using the WHERE Statement with SAS Procedures

Use the WHERE statement with SAS procedures to process only a subset of the input data set. For example, suppose you have a data set that contains monthly observations for each of several states, and you want to use the AUTOREG procedure to analyze data since 1970 for the state NC. You could use the following statements:

   proc autoreg data=full;
      where date >= '1jan1970'd & state = 'NC';
      ... additional statements ...
   run;

You can specify any number of conditions in the WHERE statement. For example, suppose that a strike created an outlier in May 1975, and you want to exclude that observation. You could use the following statements:

   proc autoreg data=full;
      where date >= '1jan1970'd & state = 'NC'
            & date ^= '1may1975'd;
      ... additional statements ...
   run;

Using SAS Data Set Options

You can use the OBS= and FIRSTOBS= data set options to subset the input data set. For example, the following statements print observations 20 through 25 of the data set FULL:

   proc print data=full(firstobs=20 obs=25);
   run;

Figure 3.3 Partial Listing of Data Set FULL

   Obs    date         state     i        x          y        close

    20    21OCT1993     NC      20     0.44803    0.35302    0.44803
    21    22OCT1993     NC      21     0.03186    1.67414    0.03186
    22    23OCT1993     NC      22    -0.25232   -1.61289   -0.25232
    23    24OCT1993     NC      23     0.42524    0.73112    0.42524
    24    25OCT1993     NC      24     0.05494   -0.88664    0.05494
    25    26OCT1993     NC      25    -0.29096   -1.17275   -0.29096

You can use KEEP= and DROP= data set options to exclude variables from the input data set. See SAS Language Reference: Dictionary for information about SAS data set options.
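For example, the following statement (a minimal sketch) uses the KEEP= data set option so that only the DATE and CLOSE variables of the data set FULL are read and printed:

   proc print data=full(keep=date close);
   run;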

Storing Time Series in a SAS Data Set

This section discusses aspects of storing time series in SAS data sets. The topics discussed are the standard form of a time series data set, storing several series with different time ranges in the same data set, omitted observations, cross-sectional dimensions and BY groups, and interleaved time series.

Any number of time series can be stored in a SAS data set. Normally, each time series is stored in a separate variable. For example, the following statements augment the USCPI data set read in the previous example with values for the producer price index:

   data usprice;
      input date : monyy7. cpi ppi;
      format date monyy7.;
      label cpi = "Consumer Price Index"
            ppi = "Producer Price Index";
      datalines;
   jun1990 129.9 114.3
   jul1990 130.4 114.5

   ... more lines ...

   proc print data=usprice;
   run;

Figure 3.4 Time Series Data Set Containing Two Series

   Obs    date        cpi      ppi

     1    JUN1990    129.9    114.3
     2    JUL1990    130.4    114.5
     3    AUG1990    131.6    116.5
     4    SEP1990    132.7    118.4
     5    OCT1990    133.5    120.8
     6    NOV1990    133.8    120.1
     7    DEC1990    133.8    118.7
     8    JAN1991    134.6    119.0
     9    FEB1991    134.8    117.2
    10    MAR1991    135.0    116.2
    11    APR1991    135.2    116.0
    12    MAY1991    135.6    116.5
    13    JUN1991    136.0    116.3
    14    JUL1991    136.2    116.0

Standard Form of a Time Series Data Set

The simple way the CPI and PPI time series are stored in the USPRICE data set in the preceding example is termed the standard form of a time series data set. A time series data set in standard form has the following characteristics:

•  The data set contains one variable for each time series.
•  The data set contains exactly one observation for each time period.
•  The data set contains an ID variable or variables that identify the time period of each observation.
•  The data set is sorted by the ID variables associated with date time values, so the observations are in time sequence.
•  The data are equally spaced in time. That is, successive observations are a fixed time interval apart, so the data set can be described by a single sampling interval such as hourly, daily, monthly, quarterly, yearly, and so forth. This means that time series with different sampling frequencies are not mixed in the same SAS data set.

Most SAS/ETS procedures that process time series expect the input data set to contain time series in this standard form, and this is the simplest way to store time series in SAS data sets. (The EXPAND and TIMESERIES procedures can be helpful in converting your data to this standard form.)

There are more complex ways to represent time series in SAS data sets. You can incorporate cross-sectional dimensions with BY groups, so that each BY group is like a standard form time series data set. This method is discussed in the section "Cross-Sectional Dimensions and BY Groups" on page 79. You can interleave time series, with several observations for each time period identified by another ID variable. Interleaved time series data sets are used to store several series in the same SAS variable. Interleaved time series data sets are often used to store series of actual values, predicted values, and residuals, or series of forecast values and confidence limits for the forecasts. This is discussed in the section "Interleaved Time Series" on page 80.

Several Series with Different Ranges

Different time series can have values recorded over different time ranges. Since a SAS data set must have the same observations for all variables, when time series with different ranges are stored in the same data set, missing values must be used for the periods in which a series is not available.

Suppose that in the previous example you did not record values for CPI before August 1990 and did not record values for PPI after June 1991. The USPRICE data set could be read with the following statements:

   data usprice;
      input date : monyy7. cpi ppi;
      format date monyy7.;
      datalines;
   jun1990   .    114.3
   jul1990   .    114.5
   aug1990 131.6  116.5
   sep1990 132.7  118.4
   oct1990 133.5  120.8
   nov1990 133.8  120.1
   dec1990 133.8  118.7
   jan1991 134.6  119.0
   feb1991 134.8  117.2
   mar1991 135.0  116.2
   apr1991 135.2  116.0
   may1991 135.6  116.5
   jun1991 136.0  116.3
   jul1991 136.2   .
   ;


The decimal points with no digits in the data records represent missing data and are read by SAS as missing value codes. In this example, the time range of the USPRICE data set is June 1990 through July 1991, but the time range of the CPI variable is August 1990 through July 1991, and the time range of the PPI variable is June 1990 through June 1991. SAS/ETS procedures ignore missing values at the beginning or end of a series. That is, the series is considered to begin with the first nonmissing value and end with the last nonmissing value.

Missing Values and Omitted Observations

Missing data can also occur within a series. Missing values that appear after the beginning of a time series and before the end of the time series are called embedded missing values.

Suppose that in the preceding example you did not record values for CPI for November 1990 and did not record values for PPI for both November 1990 and March 1991. The USPRICE data set could be read with the following statements:

   data usprice;
      input date : monyy. cpi ppi;
      format date monyy.;
      datalines;
   jun1990   .    114.3
   jul1990   .    114.5
   aug1990 131.6  116.5
   sep1990 132.7  118.4
   oct1990 133.5  120.8
   nov1990   .     .
   dec1990 133.8  118.7
   jan1991 134.6  119.0
   feb1991 134.8  117.2
   mar1991 135.0   .
   apr1991 135.2  116.0
   may1991 135.6  116.5
   jun1991 136.0  116.3
   jul1991 136.2   .
   ;

In this example, the series CPI has one embedded missing value, and the series PPI has two embedded missing values. The ranges of the two series are the same as before.

Note that the observation for November 1990 has missing values for both CPI and PPI; there is no data for this period. This is an example of a missing observation.

You might ask why the data record for this period is included in the example at all, since the data record contains no data. However, deleting the data record for November 1990 from the example would cause an omitted observation in the USPRICE data set. SAS/ETS procedures expect input data sets to contain observations for a contiguous time sequence. If you omit observations from a time series data set and then try to analyze the data set with SAS/ETS procedures, the omitted observations will cause errors. When all data are missing for a period, a missing observation should be included in the data set to preserve the time sequence of the series.

If observations are omitted from the data set, the EXPAND procedure can be used to fill in the gaps with missing values (or to interpolate nonmissing values) for the time series variables and with the appropriate date or datetime values for the ID variable.
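For example, the following statements are a minimal sketch of using PROC EXPAND to insert missing observations for any omitted months in the USPRICE data set (METHOD=NONE requests that no interpolation be performed):

   proc expand data=usprice out=usprice2 from=month method=none;
      id date;   /* the SAS date ID variable identifies the omitted periods */
   run;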

Cross-Sectional Dimensions and BY Groups

Often, time series in a collection are related by a cross-sectional dimension. For example, the national average U.S. consumer price index data shown in the previous example can be disaggregated to show price indexes for major cities. In this case, there are several related time series: CPI for New York, CPI for Chicago, CPI for Los Angeles, and so forth. When these time series are considered as one data set, the city whose price level is measured is a cross-sectional dimension of the data.

There are two basic ways to store such related time series in a SAS data set. The first way is to use a standard form time series data set with a different variable for each series. For example, the following statements read CPI series for three major U.S. cities:

   data citycpi;
      input date : monyy7. cpiny cpichi cpila;
      format date monyy7.;
      datalines;
   nov1989 133.200 126.700 130.000
   dec1989 133.300 126.500 130.600

   ... more lines ...

The second way is to store the data in a time series cross-sectional form. In this form, the series for all cross sections are stored in one variable and a cross section ID variable is used to identify observations for the different series. The observations are sorted by the cross section ID variable and by time within each cross section.

The following statements indicate how to read the CPI series for U.S. cities in time series cross-sectional form:

   data cpicity;
      length city $11;
      input city $11. date : monyy. cpi;
      format date monyy.;
      datalines;
   New York    JAN1990 135.100
   New York    FEB1990 135.300

   ... more lines ...


   proc sort data=cpicity;
      by city date;
   run;

When processing a time series cross-sectional form data set with most SAS/ETS procedures, use the cross section ID variable in a BY statement to process the time series separately. The data set must be sorted by the cross section ID variable and sorted by date within each cross section. The PROC SORT step in the preceding example ensures that the CPICITY data set is correctly sorted.

When the cross section ID variable is used in a BY statement, each BY group in the data set is like a standard form time series data set. Thus, SAS/ETS procedures that expect a standard form time series data set can process time series cross-sectional data sets when a BY statement is used, producing an independent analysis for each cross section.

It is also possible to analyze time series cross-sectional data jointly. The PANEL procedure (and the older TSCSREG procedure) expects the input data to be in the time series cross-sectional form described here. See Chapter 19, "The PANEL Procedure," for more information.
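For example, the following statements are a rough sketch of BY-group processing with the CPICITY data set (the intercept-only autoregressive model shown is arbitrary and is chosen only to illustrate the BY statement):

   proc autoreg data=cpicity;
      by city;                /* one independent analysis per city */
      model cpi = / nlag=1;   /* arbitrary illustrative model      */
   run;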

Interleaved Time Series

Normally, a time series data set has only one observation for each time period, or one observation for each time period within a cross section for a time series cross-sectional-form data set. However, it is sometimes useful to store several related time series in the same variable when the different series do not correspond to levels of a cross-sectional dimension of the data. In this case, the different time series can be interleaved. An interleaved time series data set is similar to a time series cross-sectional data set, except that the observations are sorted differently and the ID variable that distinguishes the different time series does not represent a cross-sectional dimension.

Some SAS/ETS procedures produce interleaved output data sets. The interleaved time series form is a convenient way to store procedure output when the results consist of several different kinds of series for each of several input series. (Interleaved time series are also easy to process with plotting procedures. See the section "Plotting Time Series" on page 86.)

For example, the FORECAST procedure fits a model to each input time series and computes predicted values and residuals from the model. The FORECAST procedure then uses the model to compute forecast values beyond the range of the input data and also to compute upper and lower confidence limits for the forecast values. Thus, the output from PROC FORECAST consists of up to five related time series for each variable forecast. The five resulting time series for each input series are stored in a single output variable with the same name as the series that is being forecast. The observations for the five resulting series are identified by values of the variable _TYPE_. These observations are interleaved in the output data set with observations for the same date grouped together.

The following statements show how to use PROC FORECAST to forecast the variable CPI in the USCPI data set. Figure 3.5 shows part of the output data set produced by PROC FORECAST and illustrates the interleaved structure of this data set.


   proc forecast data=uscpi interval=month lead=12
                 out=foreout outfull outresid;
      var cpi;
      id date;
   run;

   proc print data=foreout(obs=6);
   run;

Figure 3.5 Partial Listing of Output Data Set Produced by PROC FORECAST

   Obs    date       _TYPE_      _LEAD_       cpi

    1     JUN1990    ACTUAL         0      129.900
    2     JUN1990    FORECAST       0      130.817
    3     JUN1990    RESIDUAL       0       -0.917
    4     JUL1990    ACTUAL         0      130.400
    5     JUL1990    FORECAST       0      130.678
    6     JUL1990    RESIDUAL       0       -0.278

Observations with _TYPE_=ACTUAL contain the values of CPI read from the input data set. Observations with _TYPE_=FORECAST contain one-step-ahead predicted values for observations with dates in the range of the input series and contain forecast values for observations for dates beyond the range of the input series. Observations with _TYPE_=RESIDUAL contain the difference between the actual and one-step-ahead predicted values. Observations with _TYPE_=U95 and _TYPE_=L95 contain the upper and lower bounds, respectively, of the 95% confidence interval for the forecasts.

Using Interleaved Data Sets as Input to SAS/ETS Procedures

Interleaved time series data sets are not directly accepted as input by SAS/ETS procedures. However, it is easy to use a WHERE statement with any procedure to subset the input data and select one of the interleaved time series as the input.

For example, to analyze the residual series contained in the PROC FORECAST output data set with another SAS/ETS procedure, include a WHERE _TYPE_='RESIDUAL' statement. The following statements perform a spectral analysis of the residuals produced by PROC FORECAST in the preceding example:

   proc spectra data=foreout out=spectout;
      var cpi;
      where _type_='RESIDUAL';
   run;


Combined Cross Sections and Interleaved Time Series Data Sets

Interleaved time series output data sets produced from BY-group processing of time series cross-sectional input data sets have a complex structure that combines a cross-sectional dimension, a time dimension, and the values of the _TYPE_ variable. For example, consider the PROC FORECAST output data set produced by the following statements:

   title "FORECAST Output Data Set with BY Groups";

   proc forecast data=cpicity interval=month method=expo lead=2
                 out=foreout outfull outresid;
      var cpi;
      id date;
      by city;
   run;

   proc print data=foreout(obs=6);
   run;

The output data set FOREOUT contains many different time series in the single variable CPI. (The first few observations of FOREOUT are shown in Figure 3.6.) BY groups that are identified by the variable CITY contain the result series for the different cities. Within each value of CITY, the actual, forecast, residual, and confidence limits series are stored in interleaved form, with the observations for the different series identified by the values of _TYPE_.

Figure 3.6 Combined Cross Sections and Interleaved Time Series Data

   FORECAST Output Data Set with BY Groups

   Obs    city       date     _TYPE_      _LEAD_       cpi

    1     Chicago    JAN90    ACTUAL         0      128.100
    2     Chicago    JAN90    FORECAST       0      128.252
    3     Chicago    JAN90    RESIDUAL       0       -0.152
    4     Chicago    FEB90    ACTUAL         0      129.200
    5     Chicago    FEB90    FORECAST       0      128.896
    6     Chicago    FEB90    RESIDUAL       0        0.304

Output Data Sets of SAS/ETS Procedures

Some SAS/ETS procedures (such as PROC FORECAST) produce interleaved output data sets, and other SAS/ETS procedures produce standard form time series data sets. The form a procedure uses depends on whether the procedure is normally used to produce multiple result series for each of many input series in one step (as PROC FORECAST does).

For example, the ARIMA procedure can output actual series, forecast series, residual series, and confidence limit series just as the FORECAST procedure does. The PROC ARIMA output data set uses the standard form because PROC ARIMA is designed for the detailed analysis of one series at a time and so forecasts only one series at a time.

The following statements show the use of the ARIMA procedure to produce a forecast of the USCPI data set. Figure 3.7 shows part of the output data set that is produced by the ARIMA procedure's FORECAST statement. (The printed output from PROC ARIMA is not shown.) Compare the PROC ARIMA output data set shown in Figure 3.7 with the PROC FORECAST output data set shown in Figure 3.6.

   title "PROC ARIMA Output Data Set";

   proc arima data=uscpi;
      identify var=cpi(1);
      estimate q=1;
      forecast id=date interval=month lead=12 out=arimaout;
   run;

   proc print data=arimaout(obs=6);
   run;

Figure 3.7 Partial Listing of Output Data Set Produced by PROC ARIMA

   PROC ARIMA Output Data Set

   Obs    date        cpi    FORECAST      STD        L95        U95     RESIDUAL

    1     JUN1990    129.9      .           .          .          .          .
    2     JUL1990    130.4    130.368    0.36160    129.660    131.077    0.03168
    3     AUG1990    131.6    130.881    0.36160    130.172    131.590    0.71909
    4     SEP1990    132.7    132.354    0.36160    131.645    133.063    0.34584
    5     OCT1990    133.5    133.306    0.36160    132.597    134.015    0.19421
    6     NOV1990    133.8    134.046    0.36160    133.337    134.754   -0.24552

The output data set produced by the ARIMA procedure's FORECAST statement stores the actual values in a variable with the same name as the response series, stores the forecast series in a variable named FORECAST, stores the residuals in a variable named RESIDUAL, stores the 95% confidence limits in variables named L95 and U95, and stores the standard error of the forecast in the variable STD.

This method of storing several different result series as a standard form time series data set is simple and convenient. However, it works well only for a single input series. The forecast of a single series can be stored in the variable FORECAST. But if two series are forecast, two different FORECAST variables are needed.

The STATESPACE procedure handles this problem by generating forecast variable names FOR1, FOR2, and so forth. The SPECTRA procedure uses a similar method. Names such as FOR1, FOR2, RES1, RES2, and so forth require you to remember the order in which the input series are listed. This is why PROC FORECAST, which is designed to forecast a whole list of input series at once, stores its results in interleaved form.


Other SAS/ETS procedures are often used for a single input series but can also be used to process several series in a single step. Thus, they are not clearly like PROC FORECAST nor clearly like PROC ARIMA in the number of input series they are designed to work with. These procedures use a third method for storing multiple result series in an output data set. These procedures store output time series in standard form (as PROC ARIMA does) but require an OUTPUT statement to give names to the result series.
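As an illustrative sketch of this third pattern (the AUTOREG model shown is arbitrary; P= and R= simply name the predicted value and residual series in the output data set):

   proc autoreg data=uscpi;
      model cpi = / nlag=1;                      /* arbitrary illustrative model        */
      output out=autout p=cpi_pred r=cpi_resid;  /* named predicted and residual series */
   run;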

Time Series Periodicity and Time Intervals

A fundamental characteristic of time series data is how frequently the observations are spaced in time. How often the observations of a time series occur is called the sampling frequency or the periodicity of the series. For example, a time series with one observation each month has a monthly sampling frequency or monthly periodicity and so is called a monthly time series.

In SAS, data periodicity is described by specifying periodic time intervals into which the dates of the observations fall. For example, the SAS time interval MONTH divides time into calendar months.

Many SAS/ETS procedures enable you to specify the periodicity of the input data set with the INTERVAL= option. For example, specifying INTERVAL=MONTH indicates that the procedure should expect the ID variable to contain SAS date values, and that the date value for each observation should fall in a separate calendar month. The EXPAND procedure uses interval name values with the FROM= and TO= options to control the interpolation of time series from one periodicity to another.

SAS also uses time intervals in several other ways. In addition to indicating the periodicity of time series data sets, time intervals are used with the interval functions INTNX and INTCK and for controlling the plot axis and reference lines for plots of data over time.

Specifying Time Intervals

Intervals are specified in SAS by using interval names such as YEAR, QTR, MONTH, DAY, and so forth. Table 3.3 summarizes the basic types of intervals.


Table 3.3 Basic Interval Types

   Name         Periodicity
   YEAR         yearly
   SEMIYEAR     semiannual
   QTR          quarterly
   MONTH        monthly
   SEMIMONTH    1st and 16th of each month
   TENDAY       1st, 11th, and 21st of each month
   WEEK         weekly
   WEEKDAY      daily ignoring weekend days
   DAY          daily
   HOUR         hourly
   MINUTE       every minute
   SECOND       every second

Interval names can be abbreviated in various ways. For example, you could specify monthly intervals as MONTH, MONTHS, MONTHLY, or just MON. SAS accepts all these forms as equivalent.

Interval names can also be qualified with a multiplier to indicate multi-period intervals. For example, biennial intervals are specified as YEAR2.

Interval names can also be qualified with a shift index to indicate intervals with different starting points. For example, fiscal years starting in July are specified as YEAR.7.

Intervals are classified as either date or datetime intervals. Date intervals are used with SAS date values, while datetime intervals are used with SAS datetime values. The interval types YEAR, SEMIYEAR, QTR, MONTH, SEMIMONTH, TENDAY, WEEK, WEEKDAY, and DAY are date intervals. HOUR, MINUTE, and SECOND are datetime intervals. Date intervals can be turned into datetime intervals for use with datetime values by prefixing the interval name with ‘DT’. Thus DTMONTH intervals are like MONTH intervals but are used with datetime ID values instead of date ID values.

See Chapter 4, “Date Intervals, Formats, and Functions,” for more information about specifying time intervals and for a detailed reference to the different kinds of intervals available.
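The following DATA step is a small sketch (the dates used are arbitrary, and INTNX is described later in this chapter) that exercises the multiplier, shift, and DT-prefix forms of interval names:

   data _null_;
      /* YEAR2 = biennial intervals; increment to the start of the next two-year interval */
      d1 = intnx( 'year2', '15mar2007'd, 1 );
      /* YEAR.7 = fiscal years starting in July; 0 gives the start of the current fiscal year */
      d2 = intnx( 'year.7', '15mar2007'd, 0 );
      /* DTMONTH is the datetime version of MONTH, used with datetime values */
      dt = intnx( 'dtmonth', '15mar2007:12:00:00'dt, 1 );
      put d1= date9. d2= date9. dt= datetime18.;
   run;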

Using Intervals with SAS/ETS Procedures

SAS/ETS procedures use the date or datetime interval and the ID variable in the following ways:

- to validate the data periodicity. The ID variable is used to check the data and verify that successive observations have valid ID values that correspond to successive time intervals.

- to check for gaps in the input observations. For example, if INTERVAL=MONTH and an input observation for January 1990 is followed by an observation for April 1990, there is a gap in the input data with two omitted observations.

- to label forecast observations in the output data set. The values of the ID variable for the forecast observations after the end of the input data set are extrapolated according to the frequency specifications of the INTERVAL= option.

Time Intervals, the Time Series Forecasting System, and the Time Series Viewer

Time intervals are used in the Time Series Forecasting System and Time Series Viewer to identify the number of seasonal cycles or seasonality associated with a DATE, DATETIME, or TIME ID variable. For example, monthly time series have a seasonality of 12 because there are 12 months in a year; quarterly time series have a seasonality of 4 because there are four quarters in a year. The seasonality is used to analyze seasonal properties of time series data and to estimate seasonal forecasting methods.

Plotting Time Series

This section discusses SAS procedures that are available for plotting time series data, but it covers only certain aspects of the use of these procedures with time series data.

The Time Series Viewer displays and analyzes time series plots for time series data sets that do not contain cross sections. See Chapter 37, “Getting Started with Time Series Forecasting.”

The SGPLOT procedure produces high-resolution color graphics plots. See the SAS/GRAPH: Statistical Graphics Procedures Guide and SAS/GRAPH Software: Reference for more information.

The PLOT procedure and the TIMEPLOT procedure produce low-resolution line-printer plots. See the Base SAS Procedures Guide for information about these procedures.

Using the Time Series Viewer

The following command starts the Time Series Viewer to display the plot of CPI in the USCPI data set against DATE. (The USCPI data set was shown in the previous example; the time series used in the following example contains more observations than previously shown.)

   tsview data=uscpi var=cpi timeid=date

The TSVIEW DATA= option specifies the data set to be viewed; the VAR= option specifies the variable that contains the time series observations; the TIMEID= option specifies the time series ID variable.


The Time Series Viewer can also be invoked by selecting Solutions -> Analyze -> Time Series Viewer from the menu in the SAS Display Manager.

Using PROC SGPLOT

The following statements use the SGPLOT procedure to plot CPI in the USCPI data set against DATE. (The USCPI data set was shown in a previous example; the data set plotted in the following example contains more observations than shown previously.)

   title "Plot of USCPI Data";
   proc sgplot data=uscpi;
      series x=date y=cpi / markers;
   run;

The plot is shown in Figure 3.8. Figure 3.8 Plot of Monthly CPI Over Time


Controlling the Time Axis: Tick Marks and Reference Lines

It is possible to control the spacing of the tick marks on the time axis. The following statements use the XAXIS statement to tell PROC SGPLOT to mark the axis at the start of each quarter:

   proc sgplot data=uscpi;
      series x=date y=cpi / markers;
      format date yyqc.;
      xaxis values=('1jan90'd to '1jul91'd by qtr);
   run;

The plot is shown in Figure 3.9. Figure 3.9 Plot of Monthly CPI Over Time

Overlay Plots of Different Variables

You can plot two or more series stored in different variables on the same graph by specifying multiple plot requests in one SGPLOT statement. For example, the following statements plot the CPI, FORECAST, L95, and U95 variables produced by PROC ARIMA in a previous example. A reference line is drawn to mark the start of the forecast period. Quarterly tick marks with YYQC-format date values are used.

   title "ARIMA Forecasts of CPI";
   proc arima data=uscpi;
      identify var=cpi(1);
      estimate q=1;
      forecast id=date interval=month lead=12 out=arimaout;
   run;

   title "ARIMA forecasts of CPI";
   proc sgplot data=arimaout noautolegend;
      scatter x=date y=cpi;
      scatter x=date y=forecast / markerattrs=(symbol=asterisk);
      scatter x=date y=l95 / markerattrs=(symbol=asterisk color=green);
      scatter x=date y=u95 / markerattrs=(symbol=asterisk color=green);
      format date yyqc4.;
      xaxis values=('1jan90'd to '1jul92'd by qtr);
      refline '15jul91'd / axis=x;
   run;

The plot is shown in Figure 3.10. Figure 3.10 Plot of ARIMA Forecast


Overlay Plots of Interleaved Series

You can also plot several series on the same graph when the different series are stored in the same variable in interleaved form. To plot interleaved time series, specify the variable that identifies the different series in the GROUP= option of the SCATTER statement.

The following example plots the output data set produced by PROC FORECAST in a previous example. The _TYPE_ variable identifies the different series, so it is specified in the GROUP= option. Because the residual series has a different scale than the other series, it is excluded from the plot with a WHERE statement.

   title "Plot of Forecasts of USCPI Data";
   proc forecast data=uscpi interval=month lead=12
                 out=foreout outfull outresid;
      var cpi;
      id date;
   run;

   proc sgplot data=foreout;
      where _type_ ^= 'RESIDUAL';
      scatter x=date y=cpi / group=_type_ markerattrs=(symbol=asterisk);
      format date yyqc4.;
      xaxis values=('1jan90'd to '1jul92'd by qtr);
      refline '15jul91'd / axis=x;
   run;

The plot is shown in Figure 3.11.


Figure 3.11 Plot of Forecast

Residual Plots

The following example plots the residual series that was excluded from the plot in the previous example. The NEEDLE statement specifies a needle plot, so that each residual point is plotted as a vertical line showing deviation from zero.

   proc sgplot data=foreout;
      where _type_ = 'RESIDUAL';
      needle x=date y=cpi / markers;
      format date yyqc4.;
      xaxis values=('1jan90'd to '1jul91'd by qtr);
   run;

The plot is shown in Figure 3.12.


Figure 3.12 Plot of Residuals

Using PROC PLOT

The following statements use the PLOT procedure in Base SAS to plot CPI in the USCPI data set against DATE. (The data set plotted contains more observations than shown in the previous examples.) The plotting character used is a plus sign (+).

   title "Plot of USCPI Data";
   proc plot data=uscpi;
      plot cpi * date = '+' / vaxis= 129 to 137 by 1;
   run;

The plot is shown in Figure 3.13.


Figure 3.13 Plot of Monthly CPI Over Time

[Line-printer plot of cpi*date titled "Plot of USCPI Data," with the vertical axis labeled "US Consumer Price Index" (129 to 137) and dates from MAY1990 to OCT1991 on the horizontal axis. The plotting symbol is '+'.]

Using PROC TIMEPLOT

The TIMEPLOT procedure in Base SAS plots time series data vertically on the page instead of horizontally across the page as the PLOT procedure does. PROC TIMEPLOT can also print the data values as well as plot them.

The following statements use the TIMEPLOT procedure to plot CPI in the USCPI data set. Only the last 14 observations are included in this example. The plot is shown in Figure 3.14.

   title "Plot of USCPI Data";
   proc timeplot data=uscpi;
      plot cpi;
      id date;
      where date >= '1jun90'd;
   run;


Figure 3.14 Output Produced by PROC TIMEPLOT

                 Plot of USCPI Data

   date       US Consumer Price Index
   JUN1990    129.90
   JUL1990    130.40
   AUG1990    131.60
   SEP1990    132.70
   OCT1990    133.50
   NOV1990    133.80
   DEC1990    133.80
   JAN1991    134.60
   FEB1991    134.80
   MAR1991    135.00
   APR1991    135.20
   MAY1991    135.60
   JUN1991    136.00
   JUL1991    136.20

[Each value is also plotted as a 'c' on a horizontal scale that runs from the minimum (129.9) to the maximum (136.2).]

The TIMEPLOT procedure has several interesting features not discussed here. See “The TIMEPLOT Procedure” in the Base SAS Procedures Guide for more information.

Using PROC GPLOT

The GPLOT procedure in SAS/GRAPH software can also be used to plot time series data, although the newer SGPLOT procedure is easier to use. The following is an example of how GPLOT can be used to produce a plot similar to the graph produced by PROC SGPLOT in the preceding section.

   title "Plot of USCPI Data";
   proc gplot data=uscpi;
      symbol i=spline v=circle h=2;
      plot cpi * date;
   run;

The plot is shown in Figure 3.15.


Figure 3.15 Plot of Monthly CPI Over Time

For more information about the GPLOT procedure, see SAS/GRAPH Software: Reference.

Calendar and Time Functions

Calendar and time functions convert calendar and time variables such as YEAR, MONTH, DAY, and HOUR, MINUTE, SECOND into SAS date or datetime values, and vice versa. The SAS calendar and time functions are DATEJUL, DATEPART, DAY, DHMS, HMS, HOUR, JULDATE, MDY, MINUTE, MONTH, QTR, SECOND, TIMEPART, WEEKDAY, YEAR, and YYQ. See SAS Language Reference: Dictionary for more details about these functions.


Computing Dates from Calendar Variables

The MDY function converts MONTH, DAY, and YEAR values to a SAS date value. For example, MDY(10,17,2010) returns the SAS date value '17OCT2010'D.

The YYQ function computes the SAS date for the first day of a quarter. For example, YYQ(2010,4) returns the SAS date value '1OCT2010'D.

The DATEJUL function computes the SAS date for a Julian date. For example, DATEJUL(2010290) returns the SAS date '17OCT2010'D.

The YYQ and MDY functions are useful for creating SAS date variables when the ID values recorded in the data are year and quarter; year and month; or year, month, and day. For example, the following statements read quarterly data from records in which dates are coded as separate year and quarter values. The YYQ function is used to compute the variable DATE.

   data usecon;
      input year qtr gnp;
      date = yyq( year, qtr );
      format date yyqc.;
   datalines;
   1990 1 5375.4
   1990 2 5443.3

   ... more lines ...

The monthly USCPI data shown in a previous example contained time ID values represented in the MONYY format. If the data records instead contain separate year and month values, the data can be read in and the DATE variable computed with the following statements:

   data uscpi;
      input month year cpi;
      date = mdy( month, 1, year );
      format date monyy.;
   datalines;
   6 90 129.9
   7 90 130.4

   ... more lines ...

Computing Calendar Variables from Dates

The functions YEAR, MONTH, DAY, WEEKDAY, and JULDATE compute calendar variables from SAS date values.


Returning to the example of reading the USCPI data from records that contain date values represented in the MONYY format, you can find the month and year of each observation from the SAS dates of the observations by using the following statements:

   data uscpi;
      input date monyy7. cpi;
      format date monyy7.;
      year = year( date );
      month = month( date );
   datalines;
   jun1990 129.9
   jul1990 130.4

   ... more lines ...

Converting between Date, Datetime, and Time Values

The DATEPART function computes the SAS date value for the date part of a SAS datetime value. The TIMEPART function computes the SAS time value for the time part of a SAS datetime value.

The HMS function computes SAS time values from HOUR, MINUTE, and SECOND time variables. The DHMS function computes a SAS datetime value from a SAS date value and HOUR, MINUTE, and SECOND time variables.

See the section “Date, Time, and Datetime Functions” on page 143 for more information about these functions.

Computing Datetime Values

To compute datetime ID values from calendar and time variables, first compute the date and then compute the datetime with DHMS. For example, suppose you read tri-hourly temperature data with time recorded as YEAR, MONTH, DAY, and HOUR. The following statements show how to compute the ID variable DATETIME:

   data weather;
      input year month day hour temp;
      datetime = dhms( mdy( month, day, year ), hour, 0, 0 );
      format datetime datetime10.;
   datalines;
   91 10 16 21 61
   91 10 17  0 56
   91 10 17  3 53

   ... more lines ...


Computing Calendar and Time Variables

The functions HOUR, MINUTE, and SECOND compute time variables from SAS datetime values. The DATEPART function and the date-to-calendar-variables functions can be combined to compute calendar variables from datetime values.

For example, suppose the date and time of the tri-hourly temperature data in the preceding example were recorded as datetime values in the DATETIME format. The following statements show how to compute the YEAR, MONTH, DAY, and HOUR of each observation and include these variables in the SAS data set:

   data weather;
      input datetime : datetime13. temp;
      format datetime datetime10.;
      hour = hour( datetime );
      date = datepart( datetime );
      year = year( date );
      month = month( date );
      day = day( date );
   datalines;
   16oct91:21:00 61
   17oct91:00:00 56
   17oct91:03:00 53

   ... more lines ...

Interval Functions INTNX and INTCK

The SAS interval functions INTNX and INTCK perform calculations with date values, datetime values, and time intervals. They can be used for calendar calculations with SAS date values to increment date values or datetime values by intervals and to count time intervals between dates.

The INTNX function increments dates by intervals. INTNX computes the date or datetime of the start of the interval a specified number of intervals from the interval that contains a given date or datetime value. The form of the INTNX function is

   INTNX ( interval, from, n < , alignment > ) ;

The arguments to the INTNX function are as follows:


interval
   is a character constant or variable that contains an interval name.

from
   is a SAS date value (for date intervals) or datetime value (for datetime intervals).

n
   is the number of intervals to increment from the interval that contains the from value.

alignment
   controls the alignment of SAS dates, within the interval, used to identify output observations. Allowed values are BEGINNING, MIDDLE, END, and SAMEDAY.

The number of intervals to increment, n, can be positive, negative, or zero. For example, the statement NEXTMON=INTNX('MONTH',DATE,1) assigns to the variable NEXTMON the date of the first day of the month following the month that contains the value of DATE. Thus INTNX('MONTH','21OCT2007'D,1) returns the date 1 November 2007.

The INTCK function counts the number of interval boundaries between two date values or between two datetime values. The form of the INTCK function is

   INTCK ( interval, from, to ) ;

The arguments of the INTCK function are as follows:

interval
   is a character constant or variable that contains an interval name.

from
   is the starting date value (for date intervals) or datetime value (for datetime intervals).

to
   is the ending date value (for date intervals) or datetime value (for datetime intervals).

For example, the statement NEWYEARS=INTCK('YEAR',DATE1,DATE2) assigns to the variable NEWYEARS the number of New Year’s Days between the two dates.
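As a quick sketch of the optional alignment argument, the following DATA step computes differently aligned dates within the month that contains 17 October 1991:

   data _null_;
      d   = '17oct1991'd;
      beg = intnx( 'month', d, 0, 'beginning' ); /* 01OCT1991 */
      mid = intnx( 'month', d, 0, 'middle'    ); /* 16OCT1991 */
      end = intnx( 'month', d, 0, 'end'       ); /* 31OCT1991 */
      sam = intnx( 'month', d, 3, 'sameday'   ); /* same day, three months later */
      put beg= date9. mid= date9. end= date9. sam= date9.;
   run;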

Incrementing Dates by Intervals

Use the INTNX function to increment dates by intervals. For example, suppose you want to know the date of the start of the week that is six weeks from the week of 17 October 1991. The function INTNX('WEEK','17OCT91'D,6) returns the SAS date value '24NOV1991'D.

One practical use of the INTNX function is to generate periodic date values. For example, suppose the monthly U.S. Consumer Price Index data in a previous example were recorded without any time identifier on the data records. Given that you know the first observation is for June 1990, the following statements use the INTNX function to compute the ID variable DATE for each observation:


   data uscpi;
      input cpi;
      date = intnx( 'month', '1jun1990'd, _n_-1 );
      format date monyy7.;
   datalines;
   129.9
   130.4

   ... more lines ...

The automatic variable _N_ counts the number of times the DATA step program has executed; in this case _N_ contains the observation number. Thus _N_–1 is the increment needed from the first observation date. Alternatively, you could increment from the month before the first observation, in which case the INTNX function in this example would be written INTNX(’MONTH’,’1MAY1990’D,_N_).

Alignment of SAS Dates

Any date within the time interval that corresponds to an observation of a periodic time series can serve as an ID value for the observation. For example, the USCPI data in a previous example might have been recorded with dates at the 15th of each month. The person recording the data might reason that since the CPI values are monthly averages, midpoints of the months might be the appropriate ID values.

However, as far as SAS/ETS procedures are concerned, what is important about monthly data is the month of each observation, not the exact date of the ID value. If you indicate that the data are monthly (with an INTERVAL=MONTH option), SAS/ETS procedures ignore the day of the month in processing the ID variable. The MONYY format also ignores the day of the month. Thus, you could read in the monthly USCPI data with mid-month DATE values by using the following statements:

   data uscpi;
      input date : date9. cpi;
      format date monyy7.;
   datalines;
   15jun1990 129.9
   15jul1990 130.4

   ... more lines ...

The results of using this version of the USCPI data set for analysis with SAS/ETS procedures would be the same as with first-of-month values for DATE. Although you can use any date within the interval as an ID value for the interval, you might find working with time series in SAS less confusing if you always use date ID values normalized to the start of the interval.


For some applications it might be preferable to use end-of-period dates, such as 31Jan1994, 28Feb1994, 31Mar1994, . . . , 31Dec1994. For other applications, such as plotting time series, it might be more convenient to use interval midpoint dates to identify the observations. (Some SAS/ETS procedures provide an ALIGN= option to control the alignment of dates for output time series observations. In addition, the INTNX library function supports an optional argument to specify the alignment of the returned date value.)

To normalize date values to the start of intervals, use the INTNX function with a 0 increment. The INTNX function with an increment of 0 computes the date of the first day of the interval (or the first second of the interval for datetime values). For example, INTNX('MONTH','17OCT1991'D,0,'BEG') returns the date '1OCT1991'D.

The following statements show how the preceding example can be changed to normalize the midmonth DATE values to first-of-month and end-of-month values. For exposition, the first-of-month value is transformed back into a middle-of-month value.

   data uscpi;
      input date : date9. cpi;
      format date monyy7.;
      monthbeg = intnx( 'month', date, 0, 'beg' );
      midmonth = intnx( 'month', monthbeg, 0, 'mid' );
      monthend = intnx( 'month', date, 0, 'end' );
   datalines;
   15jun1990 129.9
   15jul1990 130.4

   ... more lines ...

If you want to compute the date of a particular day within an interval, you can use calendar functions, or you can increment the starting date of the interval by a number of days. The following example shows three ways to compute the seventh day of the month:

   data test;
      set uscpi;
      mon07_1 = mdy( month(date), 7, year(date) );
      mon07_2 = intnx( 'month', date, 0, 'beg' ) + 6;
      mon07_3 = intnx( 'day', date, 6 );
   run;

Computing the Width of a Time Interval

To compute the width of a time interval, subtract the ID value of the start of the current interval from the ID value of the start of the next interval. If the ID values are SAS dates, the width is in days. If the ID values are SAS datetime values, the width is in seconds.

For example, the following statements show how to add a variable WIDTH to the USCPI data set that contains the number of days in the month for each observation:

   data uscpi;
      input date : date9. cpi;
      format date monyy7.;
      width = intnx( 'month', date, 1 ) - intnx( 'month', date, 0 );
   datalines;
   15jun1990 129.9
   15jul1990 130.4
   15aug1990 131.6

   ... more lines ...

Computing the Ceiling of an Interval

To shift a date to the start of the next interval if it is not already at the start of an interval, subtract 1 from the date and use INTNX to increment the date by 1 interval.

For example, the following statements add the variable NEWYEAR to the monthly USCPI data set. The variable NEWYEAR contains the date of the next New Year’s Day. NEWYEAR contains the same value as DATE when the DATE value is the start of year and otherwise contains the date of the start of the next year.

   data test;
      set uscpi;
      newyear = intnx( 'year', date - 1, 1 );
      format newyear date.;
   run;

Counting Time Intervals

Use the INTCK function to count the number of interval boundaries between two dates. Note that the INTCK function counts the number of times the beginning of an interval is reached in moving from the first date to the second. It does not count the number of complete intervals between two dates. Following are two examples:

- The function INTCK('MONTH','1JAN1991'D,'31JAN1991'D) returns 0, since the two dates are within the same month.

- The function INTCK('MONTH','31JAN1991'D,'1FEB1991'D) returns 1, since the two dates lie in different months that are one month apart.

When the first date is later than the second date, INTCK returns a negative count. For example, the function INTCK('MONTH','1FEB1991'D,'31JAN1991'D) returns -1.


The following example shows how to use the INTCK function with shifted interval specifications to count the number of Sundays, Mondays, Tuesdays, and so forth, in each month. The variables NSUNDAY, NMONDAY, NTUESDAY, and so forth, are added to the USCPI data set.

   data uscpi;
      set uscpi;
      d0 = intnx( 'month', date, 0 ) - 1;
      d1 = intnx( 'month', date, 1 ) - 1;
      nSunday  = intck( 'week.1', d0, d1 );
      nMonday  = intck( 'week.2', d0, d1 );
      nTuesday = intck( 'week.3', d0, d1 );
      nWedday  = intck( 'week.4', d0, d1 );
      nThurday = intck( 'week.5', d0, d1 );
      nFriday  = intck( 'week.6', d0, d1 );
      nSatday  = intck( 'week.7', d0, d1 );
      drop d0 d1;
   run;

Since the INTCK function counts the number of interval beginning dates between two dates, the number of Sundays is computed by counting the number of week boundaries between the last day of the previous month and the last day of the current month. To count Mondays, Tuesdays, and so forth, shifted week intervals are used. The interval type WEEK.2 specifies weekly intervals starting on Mondays, WEEK.3 specifies weeks starting on Tuesdays, and so forth.

Checking Data Periodicity

Suppose you have a time series data set and you want to verify that the data periodicity is correct, the observations are dated correctly, and the data set is sorted by date. You can use the INTCK function to compare the date of the current observation with the date of the previous observation and verify that the dates fall into consecutive time intervals.

For example, the following statements verify that the data set USCPI is a correctly dated monthly data set. The RETAIN statement is used to hold the date of the previous observation, and the automatic variable _N_ is used to start the verification process with the second observation.

   data _null_;
      set uscpi;
      retain prevdate;
      if _n_ > 1 then
         if intck( 'month', prevdate, date ) ^= 1 then
            put "Bad date sequence at observation number " _n_;
      prevdate = date;
   run;


Filling In Omitted Observations in a Time Series Data Set

Most SAS/ETS procedures expect input data to be in the standard form, with no omitted observations in the sequence of time periods. When data are missing for a time period, the data set should contain a missing observation, in which all variables except the ID variables have missing values. You can replace omitted observations in a time series data set with missing observations by using the EXPAND procedure.

The following statements create a monthly data set, OMITTED, from data lines that contain records for an intermittent sample of months. (Data values are not shown.) The OMITTED data set is sorted to make sure it is in time order.

   data omitted;
      input date : monyy7. x y z;
      format date monyy7.;
   datalines;
   jan1991 ...
   mar1991 ...
   apr1991 ...
   jun1991 ...
   ... etc. ...
   ;

   proc sort data=omitted;
      by date;
   run;

This data set is converted to a standard form time series data set by the following PROC EXPAND step. The TO= option specifies that monthly data is to be output, while the METHOD=NONE option specifies that no interpolation is to be performed, so that the variables X, Y, and Z in the output data set STANDARD will have missing values for the omitted time periods that are filled in by the EXPAND procedure.

   proc expand data=omitted out=standard to=month method=none;
      id date;
   run;

Using Interval Functions for Calendar Calculations

With a little thought, you can come up with a formula that involves INTNX and INTCK functions and different interval types to perform almost any calendar calculation.


For example, suppose you want to know the date of the third Wednesday in the month of October 1991. The answer can be computed as

   intnx( 'week.4', '1oct91'd - 1, 3 )

which returns the SAS date value '16OCT91'D.

Consider this more complex example: how many weekdays are there between 17 October 1991 and the second Friday in November 1991, inclusive? The following formula computes the number of weekdays between the date value contained in the variable DATE and the second Friday of the following month (including the ending dates of this period):

   n = intck( 'weekday', date - 1,
              intnx( 'week.6', intnx( 'month', date, 1 ) - 1, 2 ) + 1 );

Setting DATE to ’17OCT91’D and applying this formula produces the answer, N=17.
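The formula can be checked with a small DATA step; this sketch simply embeds the expression above and prints the result:

   data _null_;
      date = '17oct91'd;
      n = intck( 'weekday', date - 1,
                 intnx( 'week.6', intnx( 'month', date, 1 ) - 1, 2 ) + 1 );
      put n=;   /* prints n=17 */
   run;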

Lags, Leads, Differences, and Summations

When working with time series data, you sometimes need to refer to the values of a series in previous or future periods. For example, the usual interest in the consumer price index series shown in previous examples is how fast the index is changing, rather than the actual level of the index. To compute a percent change, you need both the current and the previous values of the series. When you model a time series, you might want to use the previous values of other series as explanatory variables.

This section discusses how to use the DATA step to perform operations over time: lags, differences, leads, summations over time, and percent changes. The EXPAND procedure can also be used to perform many of these operations; see Chapter 14, “The EXPAND Procedure,” for more information. See also the section “Transforming Time Series” on page 115.

The LAG and DIF Functions

The DATA step provides two functions, LAG and DIF, for accessing previous values of a variable or expression. These functions are useful for computing lags and differences of series.

For example, the following statements add the variables CPILAG and CPIDIF to the USCPI data set. The variable CPILAG contains lagged values of the CPI series. The variable CPIDIF contains the changes of the CPI series from the previous period; that is, CPIDIF is CPI minus CPILAG. The new data set is shown in part in Figure 3.16.


   data uscpi;
      set uscpi;
      cpilag = lag( cpi );
      cpidif = dif( cpi );
   run;

   proc print data=uscpi;
   run;

Figure 3.16 USCPI Data Set with Lagged and Differenced Series

                 Plot of USCPI Data

   Obs    date        cpi     cpilag    cpidif

     1    JUN1990    129.9       .         .
     2    JUL1990    130.4    129.9       0.5
     3    AUG1990    131.6    130.4       1.2
     4    SEP1990    132.7    131.6       1.1
     5    OCT1990    133.5    132.7       0.8
     6    NOV1990    133.8    133.5       0.3
     7    DEC1990    133.8    133.8       0.0
     8    JAN1991    134.6    133.8       0.8
     9    FEB1991    134.8    134.6       0.2
    10    MAR1991    135.0    134.8       0.2
    11    APR1991    135.2    135.0       0.2
    12    MAY1991    135.6    135.2       0.4
    13    JUN1991    136.0    135.6       0.4
    14    JUL1991    136.2    136.0       0.2

Understanding the DATA Step LAG and DIF Functions

When used in this simple way, LAG and DIF act as lag and difference functions. However, it is important to keep in mind that, despite their names, the LAG and DIF functions available in the DATA step are not true lag and difference functions. Rather, LAG and DIF are queuing functions that remember and return argument values from previous calls. The LAG function remembers the value you pass to it and returns as its result the value you passed to it on the previous call. The DIF function works the same way but returns the difference between the current argument and the remembered value. (LAG and DIF return a missing value the first time the function is called.)

A true lag function does not return the value of the argument for the “previous call,” as do the DATA step LAG and DIF functions. Instead, a true lag function returns the value of its argument for the “previous observation,” regardless of the sequence of previous calls to the function. Thus, for a true lag function to be possible, it must be clear what the “previous observation” is.

If the data are sorted chronologically, then LAG and DIF act as true lag and difference functions. If in doubt, use PROC SORT to sort your data before using the LAG and DIF functions. Beware of missing observations, which can cause LAG and DIF to return values that are not the actual lag and difference values.
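The queuing behavior can be demonstrated directly. In the following sketch (an illustration, not one of the original examples), the LAG function executes only on odd-numbered observations, so on the third observation it returns the value from the previous call (observation 1), not the value from the previous observation:

   data lagdemo;
      input x;
      /* LAG is called only when _N_ is odd, so its queue holds   */
      /* values from odd observations only: for observation 3,    */
      /* ODDLAG is 1 (the value passed on the previous call), not 2. */
      if mod( _n_, 2 ) = 1 then oddlag = lag( x );
   datalines;
   1
   2
   3
   4
   5
   ;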


The DATA step is a powerful tool that can read any number of observations from any number of input files or data sets, can create any number of output data sets, and can write any number of output observations to any of the output data sets, all in the same program. Thus, in general, it is not clear what “previous observation” means in a DATA step program. In a DATA step program, the “previous observation” exists only if you write the program in a simple way that makes this concept meaningful. Since, in general, the previous observation is not clearly defined, it is not possible to make true lag or difference functions for the DATA step. Instead, the DATA step provides queuing functions that make it easy to compute lags and differences.

Pitfalls of DATA Step LAG and DIF Functions

The LAG and DIF functions compute lags and differences provided that the sequence of calls to the function corresponds to the sequence of observations in the output data set. However, any complexity in the DATA step that breaks this correspondence causes the LAG and DIF functions to produce unexpected results.

For example, suppose you want to add the variable CPILAG to the USCPI data set, as in the previous example, and you also want to subset the series to 1991 and later years. You might use the following statements:

   data subset;
      set uscpi;
      if date >= '1jan1991'd;
      cpilag = lag( cpi );   /* WRONG PLACEMENT! */
   run;

If the subsetting IF statement comes before the LAG function call, the value of CPILAG will be missing for January 1991, even though a value for December 1990 is available in the USCPI data set. To avoid losing this value, you must rearrange the statements to ensure that the LAG function is actually executed for the December 1990 observation.

   data subset;
      set uscpi;
      cpilag = lag( cpi );
      if date >= '1jan1991'd;
   run;

In other cases, the subsetting statement should come before the LAG and DIF functions. For example, the following statements subset the FOREOUT data set shown in a previous example to select only _TYPE_=RESIDUAL observations and also to compute the variable LAGRESID:

   data residual;
      set foreout;
      if _type_ = "RESIDUAL";
      lagresid = lag( cpi );
   run;


Another pitfall of LAG and DIF functions arises when they are used to process time series cross-sectional data sets. For example, suppose you want to add the variable CPILAG to the CPICITY data set shown in a previous example. You might use the following statements:

   data cpicity;
      set cpicity;
      cpilag = lag( cpi );
   run;

However, these statements do not yield the desired result. In the data set produced by these statements, the value of CPILAG for the first observation for the first city is missing (as it should be), but in the first observation for all later cities, CPILAG contains the last value for the previous city. To correct this, set the lagged variable to missing at the start of each cross section, as follows:

   data cpicity;
      set cpicity;
      by city date;
      cpilag = lag( cpi );
      if first.city then cpilag = .;
   run;

Alternatives to LAG and DIF Functions

You can also use the EXPAND procedure to compute lags and differences. For example, the following statements compute lag and difference variables for CPI:

   proc expand data=uscpi out=uscpi method=none;
      id date;
      convert cpi=cpilag / transform=( lag 1 );
      convert cpi=cpidif / transform=( dif 1 );
   run;

You can also calculate lags and differences in the DATA step without using LAG and DIF functions. For example, the following statements add the variables CPILAG and CPIDIF to the USCPI data set:

   data uscpi;
      set uscpi;
      retain cpilag;
      cpidif = cpi - cpilag;
      output;
      cpilag = cpi;
   run;

The RETAIN statement prevents the DATA step from reinitializing CPILAG to a missing value at the start of each iteration and thus allows CPILAG to retain the value of CPI assigned to it in the last statement. The OUTPUT statement causes the output observation to contain values of the variables before CPILAG is reassigned the current value of CPI in the last statement. This is the approach that must be used if you want to build a variable that is a function of its previous lags.


LAG and DIF Functions in PROC MODEL

The preceding discussion of LAG and DIF functions applies to the LAG and DIF functions available in the DATA step. However, LAG and DIF functions are also used in the MODEL procedure. The MODEL procedure LAG and DIF functions do not work like the DATA step LAG and DIF functions. The LAG and DIF functions supported by PROC MODEL are true lag and difference functions, not queuing functions.

Unlike the DATA step, the MODEL procedure processes observations from a single input data set, so the “previous observation” is always clearly defined in a PROC MODEL program. Therefore, PROC MODEL is able to define LAG and DIF as true lagging functions that operate on values from the previous observation. See Chapter 18, “The MODEL Procedure,” for more information about LAG and DIF functions in the MODEL procedure.
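For example, the following minimal sketch (the model is illustrative, not from the original examples) fits a first-order autoregression of CPI; here LAG(CPI) refers to the value of CPI from the previous observation of the data set:

   proc model data=uscpi;
      parms a b;
      /* lag(cpi) is a true lag: the value of CPI */
      /* from the previous observation            */
      cpi = a + b * lag( cpi );
      fit cpi;
   run;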

Multiperiod Lags and Higher-Order Differencing

To compute lags at a lagging period greater than 1, add the lag length to the end of the LAG keyword to specify the lagging function needed. For example, the LAG2 function returns the value of its argument two calls ago, the LAG3 function returns the value of its argument three calls ago, and so forth.

To compute differences at a lagging period greater than 1, add the lag length to the end of the DIF keyword. For example, the DIF2 function computes the differences between the value of its argument and the value of its argument two calls ago. (The maximum lagging period is 100.)

The following statements add the variables CPILAG12 and CPIDIF12 to the USCPI data set. CPILAG12 contains the value of CPI from the same month one year ago. CPIDIF12 contains the change in CPI from the same month one year ago. (In this case, the first 12 values of CPILAG12 and CPIDIF12 are missing.)

   data uscpi;
      set uscpi;
      cpilag12 = lag12( cpi );
      cpidif12 = dif12( cpi );
   run;

To compute second differences, take the difference of the difference. To compute higher-order differences, nest DIF functions to the order needed. For example, the following statements compute the second difference of CPI:

   data uscpi;
      set uscpi;
      cpi2dif = dif( dif( cpi ) );
   run;

Multiperiod lags and higher-order differencing can be combined. For example, the following statements compute monthly changes in the inflation rate, with the inflation rate computed as percent change in CPI from the same month one year ago:

   data uscpi;
      set uscpi;
      infchng = dif( 100 * dif12( cpi ) / lag12( cpi ) );
   run;

Percent Change Calculations

There are several common ways to compute the percent change in a time series. This section illustrates the use of LAG and DIF functions by showing SAS statements for various kinds of percent change calculations.

Computing Period-to-Period Change

To compute percent change from the previous period, divide the difference of the series by the lagged value of the series and multiply by 100.

   data uscpi;
      set uscpi;
      pctchng = dif( cpi ) / lag( cpi ) * 100;
      label pctchng = "Monthly Percent Change, At Monthly Rates";
   run;

Often, changes from the previous period are expressed at annual rates. This is done by raising the current-to-previous-period ratio to the power of the number of periods in a year and expressing the result as a percent change. For example, the following statements compute the month-over-month change in CPI as a percent change at annual rates:

   data uscpi;
      set uscpi;
      pctchng = ( ( cpi / lag( cpi ) ) ** 12 - 1 ) * 100;
      label pctchng = "Monthly Percent Change, At Annual Rates";
   run;

Computing Year-over-Year Change

To compute percent change from the same period in the previous year, use LAG and DIF functions with a lagging period equal to the number of periods in a year. (For quarterly data, use LAG4 and DIF4. For monthly data, use LAG12 and DIF12.) For example, the following statements compute monthly percent change in CPI from the same month one year ago:

   data uscpi;
      set uscpi;
      pctchng = dif12( cpi ) / lag12( cpi ) * 100;
      label pctchng = "Percent Change from One Year Ago";
   run;

To compute year-over-year percent change measured at a given period within the year, subset the series of percent changes from the same period in the previous year to form a yearly data set. Use an IF or WHERE statement to select observations for the period within each year on which the year-over-year changes are based.

For example, the following statements compute year-over-year percent change in CPI from December of the previous year to December of the current year:

   data annual;
      set uscpi;
      pctchng = dif12( cpi ) / lag12( cpi ) * 100;
      label pctchng = "Percent Change: December to December";
      if month( date ) = 12;
      format date year4.;
   run;

Computing Percent Change in Yearly Averages

To compute changes in yearly averages, first aggregate the series to an annual series by using the EXPAND procedure, and then compute the percent change of the annual series. (See Chapter 14, “The EXPAND Procedure,” for more information about PROC EXPAND.) For example, the following statements compute percent changes in the annual averages of CPI:

   proc expand data=uscpi out=annual from=month to=year;
      convert cpi / observed=average method=aggregate;
   run;

   data annual;
      set annual;
      pctchng = dif( cpi ) / lag( cpi ) * 100;
      label pctchng = "Percent Change in Yearly Averages";
   run;

It is also possible to compute percent change in the average over the most recent yearly span. For example, the following statements compute monthly percent change in the average of CPI over the most recent 12 months from the average over the previous 12 months:

   data uscpi;
      retain sum12 0;
      drop sum12 ave12 cpilag12;
      set uscpi;
      sum12 = sum12 + cpi;
      cpilag12 = lag12( cpi );
      if cpilag12 ^= . then sum12 = sum12 - cpilag12;
      if lag11( cpi ) ^= . then ave12 = sum12 / 12;
      pctchng = dif12( ave12 ) / lag12( ave12 ) * 100;
      label pctchng = "Percent Change in 12 Month Moving Ave.";
   run;

This example is a complex use of LAG and DIF functions that requires care in handling the initialization of the moving-window averaging process. The LAG12 of CPI is checked for missing values to determine when more than 12 values have been accumulated, and older values must be removed from the moving sum. The LAG11 of CPI is checked for missing values to determine when at least 12 values have been accumulated; AVE12 will be missing when LAG11 of CPI is missing. The DROP statement prevents temporary variables from being added to the data set. Note that the DIF and LAG functions must execute for every observation, or the queues of remembered values will not operate correctly. The CPILAG12 calculation must be separate from the IF statement. The PCTCHNG calculation must not be conditional on the IF statement. The EXPAND procedure provides an alternative way to compute moving averages.
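For example, the following sketch uses the MOVAVE operator of PROC EXPAND to compute a 12-month moving average of CPI in one step, as an alternative to the explicit accumulation above (the handling of the first 11 observations may differ from the DATA step version):

   proc expand data=uscpi out=uscpi method=none;
      id date;
      /* 12-term backward moving average of CPI */
      convert cpi=ave12 / transform=( movave 12 );
   run;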

Leading Series

Although the SAS System does not provide a function to look ahead at the “next” value of a series, there are a couple of ways to perform this task. The most direct way to compute leads is to use the EXPAND procedure. For example:

   proc expand data=uscpi out=uscpi method=none;
      id date;
      convert cpi=cpilead1 / transform=( lead 1 );
      convert cpi=cpilead2 / transform=( lead 2 );
   run;

Another way to compute lead series in SAS software is by lagging the time ID variable, renaming the series, and merging the result data set back with the original data set. For example, the following statements add the variable CPILEAD to the USCPI data set. The variable CPILEAD contains the value of CPI in the following month. (The value of CPILEAD is missing for the last observation, of course.)

   data temp;
      set uscpi;
      keep date cpi;
      rename cpi = cpilead;
      date = lag( date );
      if date ^= .;
   run;

   data uscpi;
      merge uscpi temp;
      by date;
   run;


To compute leads at different lead lengths, you must create one temporary data set for each lead length. For example, the following statements compute CPILEAD1 and CPILEAD2, which contain leads of CPI for 1 and 2 periods, respectively:

   data temp1(rename=(cpi=cpilead1))
        temp2(rename=(cpi=cpilead2));
      set uscpi;
      keep date cpi;
      date = lag( date );
      if date ^= . then output temp1;
      date = lag( date );
      if date ^= . then output temp2;
   run;

   data uscpi;
      merge uscpi temp1 temp2;
      by date;
   run;

Summing Series

Simple cumulative sums are easy to compute using SAS sum statements. The following statements show how to compute the running sum of variable X in data set A, adding XSUM to the data set.

   data a;
      set a;
      xsum + x;
   run;

The SAS sum statement automatically retains the variable XSUM and initializes it to 0, and the sum statement treats missing values as 0. The sum statement is equivalent to using a RETAIN statement and the SUM function. The previous example could also be written as follows:

   data a;
      set a;
      retain xsum;
      xsum = sum( xsum, x );
   run;

You can also use the EXPAND procedure to compute summations. For example:

   proc expand data=a out=a method=none;
      convert x=xsum / transform=( sum );
   run;

Like differencing, summation can be done at different lags and can be repeated to produce higher-order sums. To compute sums over observations separated by lags greater than 1, use the LAG and SUM functions together, and use a RETAIN statement that initializes the summation variable to zero.

For example, the following statements add the variable XSUM2 to data set A. XSUM2 contains the sum of every other observation, with even-numbered observations containing a cumulative sum of values of X from even observations, and odd-numbered observations containing a cumulative sum of values of X from odd observations.

   data a;
      set a;
      retain xsum2 0;
      xsum2 = sum( lag( xsum2 ), x );
   run;

Assuming that A is a quarterly data set, the following statements compute running sums of X for each quarter. XSUM4 contains the cumulative sum of X for all observations for the same quarter as the current quarter. Thus, for a first-quarter observation, XSUM4 contains a cumulative sum of current and past first-quarter values.

   data a;
      set a;
      retain xsum4 0;
      xsum4 = sum( lag3( xsum4 ), x );
   run;

To compute higher-order sums, repeat the preceding process and sum the summation variable. For example, the following statements compute the first and second summations of X:

   data a;
      set a;
      xsum + x;
      x2sum + xsum;
   run;

The following statements compute the second-order four-period sum of X:

   data a;
      set a;
      retain xsum4 x2sum4 0;
      xsum4 = sum( lag3( xsum4 ), x );
      x2sum4 = sum( lag3( x2sum4 ), xsum4 );
   run;

You can also use PROC EXPAND to compute cumulative statistics and moving window statistics. See Chapter 14, “The EXPAND Procedure,” for details.
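For example, a brief sketch of the cumulative and moving-window operators (variable names here are illustrative):

   proc expand data=a out=a method=none;
      /* cumulative sum of X (the CUSUM operator) */
      convert x=xcusum / transform=( cusum );
      /* four-period moving sum of X */
      convert x=xmovsum / transform=( movsum 4 );
   run;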


Transforming Time Series

It is often useful to transform time series for analysis or forecasting. Many time series analysis and forecasting methods are most appropriate for time series with an unrestricted range, a linear trend, and a constant variance. Series that do not conform to these assumptions can often be transformed to series for which the methods are appropriate. Transformations can be useful for the following:

- range restrictions. Many time series cannot have negative values or can be limited to a maximum possible value. You can often create a transformed series with an unbounded range.

- nonlinear trends. Many economic time series grow exponentially. Exponential growth corresponds to linear growth in the logarithms of the series.

- series variability that changes over time. Various transformations can be used to stabilize the variance.

- nonstationarity. The %DFTEST macro can be used to test a series for nonstationarity, which can then be removed by differencing. (A usage sketch of %DFTEST follows this list.)
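The following is a minimal usage sketch of the %DFTEST macro, assuming the USCPI data set with the variable CPI; the macro sets the macro variable &DFTEST to the significance probability of the Dickey-Fuller unit root test. (See Chapter 5, “SAS Macros and Functions,” for the full syntax.)

   /* Test CPI in USCPI for a unit root (nonstationarity) */
   %dftest( uscpi, cpi );
   %put Dickey-Fuller p-value: &dftest;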

Log Transformation

The logarithmic transformation is often useful for series that must be greater than zero and that grow exponentially. For example, Figure 3.17 shows a plot of an airline passenger miles series. Notice that the series has exponential growth and the variability of the series increases over time. Airline passenger miles must also be zero or greater.


Figure 3.17 Airline Series

The following statements compute the logarithms of the airline series:

   data lair;
      set sashelp.air;
      logair = log( air );
   run;

Figure 3.18 shows a plot of the log-transformed airline series. Notice that the log series has a linear trend and constant variance.


Figure 3.18 Log Airline Series

The %LOGTEST macro can help you decide if a log transformation is appropriate for a series. See Chapter 5, “SAS Macros and Functions,” for more information about the %LOGTEST macro.
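A minimal usage sketch: the %LOGTEST macro compares a series with its logarithm and sets the macro variable &LOGTEST to indicate whether the log transform is recommended. (The data set reference here assumes the SASHELP.AIR data used above.)

   /* Compare AIR with log(AIR) to decide whether to model the logs */
   %logtest( sashelp.air, air );
   %put Recommended transformation: &logtest;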

Other Transformations

The Box-Cox transformation is a general class of transformations that includes the logarithm as a special case. The %BOXCOXAR macro can be used to find an optimal Box-Cox transformation for a time series. See Chapter 5 for more information about the %BOXCOXAR macro.

The logistic (logit) transformation is useful for variables with both an upper and a lower bound, such as market shares, and for proportions, percent values, relative frequencies, or probabilities. The logit function transforms proportions between 0 and 1 to values with an unbounded range. For example, the following statements transform the variable SHARE from percent values to an unbounded range:

   data a;
      set a;
      lshare = log( share / ( 100 - share ) );
   run;

Many other data transformations can be used. You can create virtually any desired data transformation using DATA step statements.

The EXPAND Procedure and Data Transformations

The EXPAND procedure provides a convenient way to transform series. For example, the following statements add variables for the logarithm of AIR and the logistic of SHARE to data set A:

   proc expand data=a out=a method=none;
      convert air=logair / transform=( log );
      convert share=lshare / transform=( / 100 logit );
   run;

See Table 14.2 in Chapter 14, “The EXPAND Procedure,” for a complete list of transformations supported by PROC EXPAND.

Manipulating Time Series Data Sets

This section discusses merging, splitting, and transposing time series data sets and interpolating time series data to a higher or lower sampling frequency.

Splitting and Merging Data Sets

In some cases, you might want to separate several time series that are contained in one data set into different data sets. In other cases, you might want to combine time series from different data sets into one data set.

To split a time series data set into two or more data sets that contain subsets of the series, use a DATA step to create the new data sets and use the KEEP= data set option to control which series are included in each new data set. The following statements split the USPRICE data set shown in a previous example into two data sets, USCPI and USPPI:

   data uscpi(keep=date cpi)
        usppi(keep=date ppi);
      set usprice;
   run;


If the series have different time ranges, you can subset the time ranges of the output data sets accordingly. For example, if you know that CPI in USPRICE has the range August 1990 through the end of the data set, while PPI has the range from the beginning of the data set through June 1991, you could write the previous example as follows:

   data uscpi(keep=date cpi)
        usppi(keep=date ppi);
      set usprice;
      if date >= '1aug1990'd then output uscpi;
      if date <= '1jun1991'd then output usppi;
   run;

To combine time series from different data sets into one data set, list the data sets to be combined in a MERGE statement and specify the dating variable in a BY statement. The data sets must be sorted by the BY variable. For example, the following statements combine the USCPI and USPPI data sets back into a single data set by merging them by DATE:

   data usprice;
      merge uscpi usppi;
      by date;
   run;

Transposing Data Sets

The TRANSPOSE procedure is used to transpose data sets from one form to another. This section shows how to use PROC TRANSPOSE to convert an interleaved-form time series data set to standard form and to transpose the cross-sectional dimension of a time series cross-sectional data set.

The following statements transpose part of the interleaved-form output data set FOREOUT, produced by PROC FORECAST in a previous example, to a standard form time series data set. Only observations from June 1991 through September 1991 are processed.

   title "Original Data Set";
   proc print data=foreout(obs=10);
      where date > '1may1991'd & date < '1oct1991'd;
   run;

   proc transpose data=foreout out=trans(drop=_name_);
      var cpi;
      id _type_;
      by date;
      where date > '1may1991'd & date < '1oct1991'd;
   run;

   title "Transposed Data Set";
   proc print data=trans(obs=10);
   run;

The TRANSPOSE procedure adds the variables _NAME_ and _LABEL_ to the output data set. These variables contain the names and labels of the variables that were transposed. In this example, there is only one transposed variable, so _NAME_ has the value CPI for all observations. Thus, _NAME_ and _LABEL_ are of no interest and are dropped from the output data set by using the DROP= data set option. (If none of the variables transposed have a label, PROC TRANSPOSE does not output the _LABEL_ variable and the DROP=_LABEL_ option produces a warning message. You can ignore this message, or you can prevent the message by omitting _LABEL_ from the DROP= list.)

The original and transposed data sets are shown in Figure 3.19 and Figure 3.20. (The observation numbers shown for the original data set reflect the operation of the WHERE statement.)

Figure 3.19 Original Data Set

                 Original Data Set

   Obs    date       _TYPE_      _LEAD_       cpi

    37    JUN1991    ACTUAL         0      136.000
    38    JUN1991    FORECAST       0      136.146
    39    JUN1991    RESIDUAL       0       -0.146
    40    JUL1991    ACTUAL         0      136.200
    41    JUL1991    FORECAST       0      136.566
    42    JUL1991    RESIDUAL       0       -0.366
    43    AUG1991    FORECAST       1      136.856
    44    AUG1991    L95            1      135.723
    45    AUG1991    U95            1      137.990
    46    SEP1991    FORECAST       2      137.443

Figure 3.20 Transposed Data Set

                         Transposed Data Set

   Obs   date      _LABEL_                   ACTUAL   FORECAST   RESIDUAL      L95       U95

    1    JUN1991   US Consumer Price Index   136.0    136.146    -0.14616       .         .
    2    JUL1991   US Consumer Price Index   136.2    136.566    -0.36635       .         .
    3    AUG1991   US Consumer Price Index     .      136.856     .         135.723   137.990
    4    SEP1991   US Consumer Price Index     .      137.443     .         136.126   138.761


Transposing Cross-Sectional Dimensions

The following statements transpose the variable CPI in the CPICITY data set shown in a previous example from time series cross-sectional form to a standard form time series data set. (Only a subset of the data shown in the previous example is used here.) Note that the method shown in this example works only for a single variable.

   title "Original Data Set";
   proc print data=cpicity;
   run;

   proc sort data=cpicity out=temp;
      by date city;
   run;

   proc transpose data=temp out=citycpi(drop=_name_);
      var cpi;
      id city;
      by date;
   run;

   title "Transposed Data Set";
   proc print data=citycpi;
   run;

The names of the variables in the transposed data sets are taken from the city names in the ID variable CITY. The original and the transposed data sets are shown in Figure 3.21 and Figure 3.22.


Figure 3.21 Original Data Set

                 Original Data Set

   Obs    city           date     cpi     cpilag

     1    Chicago        JAN90   128.1      .
     2    Chicago        FEB90   129.2    128.1
     3    Chicago        MAR90   129.5    129.2
     4    Chicago        APR90   130.4    129.5
     5    Chicago        MAY90   130.4    130.4
     6    Chicago        JUN90   131.7    130.4
     7    Chicago        JUL90   132.0    131.7
     8    Los Angeles    JAN90   132.1      .
     9    Los Angeles    FEB90   133.6    132.1
    10    Los Angeles    MAR90   134.5    133.6
    11    Los Angeles    APR90   134.2    134.5
    12    Los Angeles    MAY90   134.6    134.2
    13    Los Angeles    JUN90   135.0    134.6
    14    Los Angeles    JUL90   135.6    135.0
    15    New York       JAN90   135.1      .
    16    New York       FEB90   135.3    135.1
    17    New York       MAR90   136.6    135.3
    18    New York       APR90   137.3    136.6
    19    New York       MAY90   137.2    137.3
    20    New York       JUN90   137.1    137.2
    21    New York       JUL90   138.4    137.1

Figure 3.22 Transposed Data Set

                 Transposed Data Set

                           Los_
   Obs    date   Chicago   Angeles   New_York

    1    JAN90    128.1     132.1      135.1
    2    FEB90    129.2     133.6      135.3
    3    MAR90    129.5     134.5      136.6
    4    APR90    130.4     134.2      137.3
    5    MAY90    130.4     134.6      137.2
    6    JUN90    131.7     135.0      137.1
    7    JUL90    132.0     135.6      138.4

The following statements transpose the CITYCPI data set back to the original form of the CPICITY data set. The variable _NAME_ is added to the data set to tell PROC TRANSPOSE the name of the variable in which to store the observations in the transposed data set. (If the (DROP=_NAME_ _LABEL_) option were omitted from the first PROC TRANSPOSE step, this would not be necessary. PROC TRANSPOSE assumes ID _NAME_ by default.)

The NAME=CITY option in the PROC TRANSPOSE statement causes PROC TRANSPOSE to store the names of the transposed variables in the variable CITY. Because PROC TRANSPOSE recodes the values of the CITY variable to create valid SAS variable names in the transposed data set, the values of the variable CITY in the retransposed data set are not the same as in the original.


The retransposed data set is shown in Figure 3.23.

   data temp;
      set citycpi;
      _name_ = 'CPI';
   run;

   proc transpose data=temp out=retrans name=city;
      by date;
   run;

   proc sort data=retrans;
      by city date;
   run;

   title "Retransposed Data Set";
   proc print data=retrans;
   run;

Figure 3.23 Data Set Transposed Back to Original Form

                 Retransposed Data Set

   Obs    date     city            CPI

     1    JAN90    Chicago        128.1
     2    FEB90    Chicago        129.2
     3    MAR90    Chicago        129.5
     4    APR90    Chicago        130.4
     5    MAY90    Chicago        130.4
     6    JUN90    Chicago        131.7
     7    JUL90    Chicago        132.0
     8    JAN90    Los_Angeles    132.1
     9    FEB90    Los_Angeles    133.6
    10    MAR90    Los_Angeles    134.5
    11    APR90    Los_Angeles    134.2
    12    MAY90    Los_Angeles    134.6
    13    JUN90    Los_Angeles    135.0
    14    JUL90    Los_Angeles    135.6
    15    JAN90    New_York       135.1
    16    FEB90    New_York       135.3
    17    MAR90    New_York       136.6
    18    APR90    New_York       137.3
    19    MAY90    New_York       137.2
    20    JUN90    New_York       137.1
    21    JUL90    New_York       138.4


Time Series Interpolation

The EXPAND procedure interpolates time series. This section provides a brief summary of the use of PROC EXPAND for different kinds of time series interpolation problems. Most of the issues discussed in this section are explained in greater detail in Chapter 14.

By default, the EXPAND procedure performs interpolation by first fitting cubic spline curves to the available data and then computing needed interpolating values from the fitted spline curves. Other interpolation methods can be requested.

Note that interpolating values of a time series does not add any real information to the data because the interpolation process is not the same process that generated the other (nonmissing) values in the series. While time series interpolation can sometimes be useful, great care is needed in analyzing time series that contain interpolated values.
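For example, a sketch of requesting an alternative interpolation method with the METHOD= option: METHOD=JOIN fits a piecewise linear curve through the data points instead of a cubic spline (METHOD=STEP similarly fits a step function):

   proc expand data=usprice out=linear from=month method=join;
      id date;
      /* linear interpolation of the monthly average series */
      convert cpi ppi / observed=average;
   run;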

Interpolating Missing Values

To use the EXPAND procedure to interpolate missing values in a time series, specify the input and output data sets in the PROC EXPAND statement, and specify the time ID variable in an ID statement. For example, the following statements cause PROC EXPAND to interpolate values for missing values of all numeric variables in the data set USPRICE:

   proc expand data=usprice out=interpl;
      id date;
   run;

Interpolated values are computed only for embedded missing values in the input time series. Missing values before or after the range of a series are ignored by the EXPAND procedure.

In the preceding example, PROC EXPAND assumes that all series are measured at points in time given by the value of the ID variable. In fact, the series in the USPRICE data set are monthly averages. PROC EXPAND can produce a better interpolation if this is taken into account. The following example uses the FROM=MONTH option to tell PROC EXPAND that the series is monthly and uses the CONVERT statement with the OBSERVED=AVERAGE option to specify that the series values are averages over each month:

   proc expand data=usprice out=interpl from=month;
      id date;
      convert cpi ppi / observed=average;
   run;


Interpolating to a Higher or Lower Frequency

You can use PROC EXPAND to interpolate values of time series at a higher or lower sampling frequency than the input time series. To change the periodicity of time series, specify the time interval of the input data set with the FROM= option, and specify the time interval for the desired output frequency with the TO= option. For example, the following statements compute interpolated weekly values of the monthly CPI and PPI series:

   proc expand data=usprice out=interpl from=month to=week;
      id date;
      convert cpi ppi / observed=average;
   run;

Interpolating between Stocks and Flows, Levels and Rates

A distinction is made between variables that are measured at points in time and variables that represent totals or averages over an interval. Point-in-time values are often called stocks or levels. Variables that represent totals or averages over an interval are often called flows or rates.

For example, the annual series Gross National Product represents the final goods production of the economy over the year and also the yearly average rate of that production. However, the monthly variable Inventory represents the cost of a stock of goods at the end of the month.

The EXPAND procedure can convert between point-in-time values and period average or total values. To convert observation characteristics, specify the input and output characteristics with the OBSERVED= option in the CONVERT statement. For example, the following statements use the monthly average price index values in USPRICE to compute interpolated estimates of the price index levels at the midpoint of each month:

   proc expand data=usprice out=midpoint from=month;
      id date;
      convert cpi ppi / observed=(average,middle);
   run;

Reading Time Series Data

Time series data can be coded in many different ways. The SAS System can read time series data recorded in almost any form. Earlier sections of this chapter show how to read time series data coded in several commonly used ways. This section shows how to read time series data from data records coded in two other commonly used ways not previously introduced.


Several time series databases distributed by major data vendors can be read into SAS data sets by the DATASOURCE procedure. See Chapter 11, “The DATASOURCE Procedure,” for more information. The SASECRSP, SASEFAME, and SASEHAVR interface engines enable SAS users to access and process time series data in CRSPAccess data files, FAME databases, and Haver Analytics Data Link Express (DLX) data bases, respectively. See Chapter 33, “The SASECRSP Interface Engine,” Chapter 34, “The SASEFAME Interface Engine,” and Chapter 35, “The SASEHAVR Interface Engine,” for more details.

Reading a Simple List of Values

Time series data can be coded as a simple list of values without dating information and with an arbitrary number of observations on each data record. In this case, the INPUT statement must use the trailing "@@" option to retain the current data record after reading the values for each observation, and the time ID variable must be generated with programming statements.

For example, the following statements read the USPRICE data set from data records that contain pairs of values for CPI and PPI. This example assumes you know that the first pair of values is for June 1990.

   data usprice;
      input cpi ppi @@;
      date = intnx( 'month', '1jun1990'd, _n_-1 );
      format date monyy7.;
   datalines;
   129.9 114.3   130.4 114.5   131.6 116.5
   132.7 118.4   133.5 120.8   133.8 120.1
   133.8 118.7   134.6 119.0   134.8 117.2
   135.0 116.2   135.2 116.0   135.6 116.0
   136.0 116.5   136.2 116.3
   ;

Reading Fully Described Time Series in Transposed Form

Data for several time series can be coded with separate groups of records for each time series. Data files coded this way are transposed from the form required by SAS procedures. Time series data can also be coded with descriptive information about the series included with the data records.

The following example reads time series data for the USPRICE data set coded with separate groups of records for each series. The data records for each series consist of a series description record and one or more value records. The series description record gives the series name, starting month and year of the series, number of values in the series, and a series label. The value records contain the observations of the time series. The data are first read into a temporary data set that contains one observation for each value of each series.


   data temp;
      length _name_ $8 _label_ $40;
      keep _name_ _label_ date value;
      format date monyy.;
      input _name_ month year nval _label_ &;
      date = mdy( month, 1, year );
      do i = 1 to nval;
         input value @;
         output;
         date = intnx( 'month', date, 1 );
      end;
   datalines;
   cpi 8 90 12 Consumer Price Index
   131.6 132.7 133.5 133.8 133.8 134.6
   134.8 135.0 135.2 135.6 136.0 136.2
   ppi 6 90 13 Producer Price Index
   114.3 114.5 116.5 118.4 120.8 120.1
   118.7 119.0 117.2 116.2 116.0 116.5
   116.3
   ;

The following statements sort the data set by date and series name, and the TRANSPOSE procedure is used to transpose the data into a standard form time series data set.

   proc sort data=temp;
      by date _name_;
   run;

   proc transpose data=temp out=usprice(drop=_name_);
      by date;
      var value;
   run;

   proc contents data=usprice;
   run;

   proc print data=usprice;
   run;

The contents of the final data set are shown in Figure 3.24, and the final data set itself is shown in Figure 3.25.

Figure 3.24 Contents of USPRICE Data Set

                  Retransposed Data Set

                 The CONTENTS Procedure

        Alphabetic List of Variables and Attributes

     #    Variable    Type    Len    Format    Label

     3    cpi         Num       8              Consumer Price Index
     1    date        Num       8    MONYY.
     2    ppi         Num       8              Producer Price Index


Figure 3.25 Listing of USPRICE Data Set

                Retransposed Data Set

       Obs    date       ppi      cpi

         1    JUN90    114.3        .
         2    JUL90    114.5        .
         3    AUG90    116.5    131.6
         4    SEP90    118.4    132.7
         5    OCT90    120.8    133.5
         6    NOV90    120.1    133.8
         7    DEC90    118.7    133.8
         8    JAN91    119.0    134.6
         9    FEB91    117.2    134.8
        10    MAR91    116.2    135.0
        11    APR91    116.0    135.2
        12    MAY91    116.5    135.6
        13    JUN91    116.3    136.0
        14    JUL91      .      136.2

Chapter 4

Date Intervals, Formats, and Functions

Contents
    Overview                                                129
    Time Intervals                                          129
        Constructing Interval Names                         130
        Shifted Intervals                                   131
        Beginning Dates and Datetimes of Intervals          132
        Summary of Interval Types                           132
        Examples of Interval Specifications                 135
    Date and Datetime Informats                             136
    Date, Time, and Datetime Formats                        137
        Date Formats                                        138
        Datetime and Time Formats                           142
    Alignment of SAS Dates                                  142
    Date, Time, and Datetime Functions                      143
        SAS Date, Time, and Datetime Functions              144
    References                                              148

Overview

This chapter summarizes the time intervals, date and datetime informats, date and datetime formats, and date, time, and datetime functions available in SAS software. The use of these features is explained in Chapter 3, "Working with Time Series Data." The material in this chapter is also contained in SAS Language Reference: Concepts and SAS Language Reference: Dictionary. Because these features are useful for work with time series data, documentation of these features is consolidated and repeated here for easy reference.

Time Intervals

This section provides a reference for the different kinds of time intervals supported by SAS but does not cover how they are used. For an introduction to the use of time intervals, see Chapter 3.


Some interval names are used with SAS date values, while other interval names are used with SAS datetime values. The interval names used with SAS date values are YEAR, SEMIYEAR, QTR, MONTH, SEMIMONTH, TENDAY, WEEK, WEEKDAY, DAY, YEARV, R445YR, R454YR, R544YR, R445QTR, R454QTR, R544QTR, R445MON, R454MON, R544MON, and WEEKV. The interval names used with SAS datetime or time values are HOUR, MINUTE, and SECOND. Various abbreviations of these names are also allowed, as described in the section "Summary of Interval Types" on page 132.

Interval names for use with SAS date values can be prefixed with 'DT' to construct interval names for use with SAS datetime values. The interval names DTYEAR, DTSEMIYEAR, DTQTR, DTMONTH, DTSEMIMONTH, DTTENDAY, DTWEEK, DTWEEKDAY, DTDAY, DTYEARV, DTR445YR, DTR454YR, DTR544YR, DTR445QTR, DTR454QTR, DTR544QTR, DTR445MON, DTR454MON, DTR544MON, and DTWEEKV are used with SAS datetime or time values.
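As a minimal sketch of the date/datetime distinction (the specific dates are chosen arbitrarily for illustration), the following statements apply the same interval arithmetic to a date value and, with the DT prefix, to a datetime value:

   data _null_;
      /* MONTH works on date values; DTMONTH works on datetime values */
      m  = intnx( 'month',   '17oct1991'd,           1 );
      dm = intnx( 'dtmonth', '17oct1991:14:25:32'dt, 1 );
      /* each result is the beginning of the next month in its own scale */
      put m= date9. dm= datetime18.;
   run;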

Constructing Interval Names

Multipliers and shift indexes can be used with the basic interval names to construct more complex interval specifications. The general form of an interval name is as follows:

   NAMEn.s

The three parts of the interval name are shown below:

NAME
    the name of the basic interval type. For example, YEAR specifies yearly intervals.

n
    an optional multiplier that specifies that the interval is a multiple of the period of the basic interval type. For example, the interval YEAR2 consists of two-year, or biennial, periods.

s
    an optional starting subperiod index that specifies that the intervals are shifted to later starting points. For example, YEAR.3 specifies yearly periods shifted to start on the first of March of each calendar year and to end in February of the following year.

Both the multiplier n and the shift index s are optional and default to 1. For example, YEAR, YEAR1, YEAR.1, and YEAR1.1 are all equivalent ways of specifying ordinary calendar years.

To test for a valid interval specification, use the INTTEST function:

   interval = 'MONTH3.2';
   valid = INTTEST( interval );
   valid = INTTEST( 'YEAR4' );

INTTEST returns a value of 0 if the argument is not a valid interval specification and 1 if the argument is a valid interval specification. The INTTEST function can also be used in a DATA step to test an interval before calling an interval function:


   valid = INTTEST( interval );
   if ( valid = 1 ) then do;
      end = INTNX( interval, date, 0, 'E' );
   end;

For more information about the INTTEST function, see SAS Language Reference: Dictionary.

Shifted Intervals

Different kinds of intervals are shifted by different subperiods.

   - YEAR, SEMIYEAR, QTR, and MONTH intervals are shifted by calendar months.
   - WEEK and DAY intervals are shifted by days.
   - SEMIMONTH intervals are shifted by semi-monthly periods.
   - TENDAY intervals are shifted by 10-day periods.
   - YEARV intervals are shifted by WEEKV intervals.
   - R445YR, R445QTR, and R445MON intervals are shifted by R445MON intervals.
   - R454YR, R454QTR, and R454MON intervals are shifted by R454MON intervals.
   - R544YR, R544QTR, and R544MON intervals are shifted by R544MON intervals.
   - WEEKV intervals are shifted by days.
   - WEEKDAY intervals are shifted by weekdays.
   - HOUR intervals are shifted by hours.
   - MINUTE intervals are shifted by minutes.
   - SECOND intervals are shifted by seconds.

The INTSHIFT function returns the shift interval:

   interval = 'MONTH3.2';
   shift_interval = INTSHIFT( interval );

In this example, the value of shift_interval is 'MONTH'. For more information about the INTSHIFT function, see SAS Language Reference: Dictionary.

If a subperiod is specified, the shift index cannot be greater than the number of subperiods in the whole interval. For example, you could use YEAR2.24, but YEAR2.25 would be an error because there is no 25th month in a two-year interval. For interval types that shift by subperiods that are the same as the basic interval type, only multiperiod intervals can be shifted.


For example, MONTH type intervals shift by MONTH subintervals; thus, monthly intervals cannot be shifted because there is only one month in MONTH. However, bimonthly intervals can be shifted because there are two MONTH intervals in each MONTH2 interval. The interval name MONTH2.2 specifies bimonthly periods that start on the first day of even-numbered months.
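As a minimal check of this behavior (the date is chosen arbitrarily), the following sketch asks INTNX for the beginning of the MONTH2.2 interval that contains a given date:

   data _null_;
      /* 15 May 1991 falls in the April-May period of MONTH2.2 */
      b = intnx( 'month2.2', '15may1991'd, 0 );
      put b= date9.;   /* expected: 01APR1991 */
   run;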

Beginning Dates and Datetimes of Intervals

Intervals that represent divisions of a year begin with the start of the year (1 January). YEARV, R445YR, R454YR, and R544YR intervals begin with the first week of the International Organization for Standardization (ISO) year, the Monday on or immediately preceding January 4th. R445QTR, R454QTR, and R544QTR intervals begin with the 1st, 14th, 27th, and 40th weeks of the ISO year. MONTH2 periods begin with odd-numbered months (January, March, May, and so on).

Likewise, intervals that represent divisions of a day begin with the start of the day (midnight). Thus, HOUR8.7 intervals divide the day into the periods 06:00 to 14:00, 14:00 to 22:00, and 22:00 to 06:00.

Intervals that do not nest within years or days begin relative to the SAS date or datetime value 0. The arbitrary reference time of midnight on January 1, 1960, is used as the origin for nonshifted intervals, and shifted intervals are defined relative to that reference point. For example, MONTH13 defines the intervals January 1, 1960, February 1, 1961, March 1, 1962, and so forth, and the intervals December 1, 1959, November 1, 1958, and so on before the base date January 1, 1960. Similarly, the WEEK2 interval begins relative to the Sunday of the week of January 1, 1960. The interval specification WEEK6.13 defines six-week periods starting on second Fridays, and the convention of counting relative to the period containing January 1, 1960, indicates the starting date or datetime of the interval closest to January 1, 1960, that corresponds to the second Fridays of six-week intervals.

Intervals always begin on the date or datetime defined by the base interval name, the multiplier, and the shift value. The end of the interval immediately precedes the beginning of the next interval. However, an interval can be identified by any date or datetime value between its starting and ending values, inclusive. See the section "Alignment of SAS Dates" on page 142 for more information about generating identifying dates for intervals.
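The following sketch illustrates the HOUR8.7 boundaries described above (the datetime value is arbitrary):

   data _null_;
      /* 09:30 falls in the 06:00 to 14:00 period of HOUR8.7 */
      shift_start = intnx( 'hour8.7', '01jan2000:09:30:00'dt, 0 );
      put shift_start= datetime18.;   /* expected: 6 a.m. of the same day */
   run;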

Summary of Interval Types

The interval types are summarized as follows.

YEAR
    specifies yearly intervals. Abbreviations are YEAR, YEARS, YEARLY, YR, ANNUAL, ANNUALLY, and ANNUALS. The starting subperiod s is in months (MONTH).

YEARV
    specifies ISO 8601 yearly intervals. The ISO 8601 year starts on the Monday on or immediately preceding January 4th. Note that it is possible for the ISO 8601 year to start in December of the preceding year. Also, some ISO 8601 years contain a leap week. For further discussion of ISO weeks, see Technical Committee ISO/TC 154 (2004). The starting subperiod s is in ISO 8601 weeks (WEEKV).

R445YR
    is the same as YEARV except that the starting subperiod s is in retail 4-4-5 months (R445MON).

R454YR
    is the same as YEARV except that the starting subperiod s is in retail 4-5-4 months (R454MON). For a discussion of the retail 4-5-4 calendar, see National Retail Federation (2007).

R544YR
    is the same as YEARV except that the starting subperiod s is in retail 5-4-4 months (R544MON).

SEMIYEAR
    specifies semiannual intervals (every six months). Abbreviations are SEMIYEAR, SEMIYEARS, SEMIYEARLY, SEMIYR, SEMIANNUAL, and SEMIANN. The starting subperiod s is in months (MONTH). For example, SEMIYEAR.3 intervals are March–August and September–February.

QTR
    specifies quarterly intervals (every three months). Abbreviations are QTR, QUARTER, QUARTERS, QUARTERLY, QTRLY, and QTRS. The starting subperiod s is in months (MONTH).

R445QTR
    specifies retail 4-4-5 quarterly intervals (every 13 ISO 8601 weeks). Some fourth quarters will contain a leap week. The starting subperiod s is in retail 4-4-5 months (R445MON).

R454QTR
    specifies retail 4-5-4 quarterly intervals (every 13 ISO 8601 weeks). Some fourth quarters will contain a leap week. For a discussion of the retail 4-5-4 calendar, see National Retail Federation (2007). The starting subperiod s is in retail 4-5-4 months (R454MON).

R544QTR
    specifies retail 5-4-4 quarterly intervals (every 13 ISO 8601 weeks). Some fourth quarters will contain a leap week. The starting subperiod s is in retail 5-4-4 months (R544MON).

MONTH
    specifies monthly intervals. Abbreviations are MONTH, MONTHS, MONTHLY, and MON. The starting subperiod s is in months (MONTH). For example, MONTH2.2 intervals are February–March, April–May, June–July, August–September, October–November, and December–January of the following year.

R445MON
    specifies retail 4-4-5 monthly intervals. The 3rd, 6th, 9th, and 12th months are five ISO 8601 weeks long with the exception that some 12th months contain leap weeks. All other months are four ISO 8601 weeks long. R445MON intervals begin with the 1st, 5th, 9th, 14th, 18th, 22nd, 27th, 31st, 35th, 40th, 44th, and 48th weeks of the ISO year. The starting subperiod s is in retail 4-4-5 months (R445MON).

R454MON
    specifies retail 4-5-4 monthly intervals. The 2nd, 5th, 8th, and 11th months are five ISO 8601 weeks long. All other months are four ISO 8601 weeks long with the exception that some 12th months contain leap weeks. R454MON intervals begin with the 1st, 5th, 10th, 14th, 18th, 23rd, 27th, 31st, 36th, 40th, 44th, and 49th weeks of the ISO year. For a discussion of the retail 4-5-4 calendar, see National Retail Federation (2007). The starting subperiod s is in retail 4-5-4 months (R454MON).

R544MON
    specifies retail 5-4-4 monthly intervals. The 1st, 4th, 7th, and 10th months are five ISO 8601 weeks long. All other months are four ISO 8601 weeks long with the exception that some 12th months contain leap weeks. R544MON intervals begin with the 1st, 6th, 10th, 14th, 19th, 23rd, 27th, 32nd, 36th, 40th, 45th, and 49th weeks of the ISO year. The starting subperiod s is in retail 5-4-4 months (R544MON).

SEMIMONTH
    specifies semimonthly intervals. SEMIMONTH breaks each month into two periods, starting on the 1st and 16th days. Abbreviations are SEMIMONTH, SEMIMONTHS, SEMIMONTHLY, and SEMIMON. The starting subperiod s is in SEMIMONTH periods. For example, SEMIMONTH2.2 specifies intervals from the 16th of one month through the 15th of the next month.

TENDAY
    specifies 10-day intervals. TENDAY breaks the month into three periods, the 1st through the 10th day of the month, the 11th through the 20th day of the month, and the remainder of the month. (TENDAY is a special interval typically used for reporting automobile sales data.) The starting subperiod s is in TENDAY periods. For example, TENDAY4.2 defines 40-day periods that start at the second TENDAY period.

WEEK
    specifies weekly intervals of seven days. Abbreviations are WEEK, WEEKS, and WEEKLY. The starting subperiod s is in days (DAY), with the days of the week numbered as 1=Sunday, 2=Monday, 3=Tuesday, 4=Wednesday, 5=Thursday, 6=Friday, and 7=Saturday. For example, WEEK.7 means weekly with Saturday as the first day of the week.

WEEKV
    specifies ISO 8601 weekly intervals of seven days. Each week starts on Monday. The starting subperiod s is in days (DAY). Note that WEEKV differs from WEEK in that WEEKV.1 starts on Monday, WEEKV.2 starts on Tuesday, and so forth.

WEEKDAY
WEEKDAYdW
WEEKDAYddW
WEEKDAYdddW
    specifies daily intervals with weekend days included in the preceding weekday. Note that for a five-day work week that starts on Monday, the appropriate interval is WEEKDAY5.2. Abbreviations are WEEKDAY and WEEKDAYS. The starting subperiod s is in weekdays (WEEKDAY).

    The WEEKDAY interval is the same as DAY except that weekend days are absorbed into the preceding weekday. Thus there are five WEEKDAY intervals in a calendar week: Monday, Tuesday, Wednesday, Thursday, and the three-day period Friday-Saturday-Sunday.

    The default weekend days are Saturday and Sunday, but any one to six weekend days can be listed after the WEEKDAY string and followed by a W. Weekend days are specified as '1' for Sunday, '2' for Monday, and so forth. For example, WEEKDAY67W specifies a Friday-Saturday weekend. WEEKDAY1W specifies a six-day work week with a Sunday weekend. WEEKDAY17W is the same as WEEKDAY.

DAY
    specifies daily intervals. Abbreviations are DAY, DAYS, and DAILY. The starting subperiod s is in days (DAY).

HOUR
    specifies hourly intervals. Abbreviations are HOUR, HOURS, HOURLY, and HR. The starting subperiod s is in hours (HOUR).

MINUTE
    specifies minute intervals. Abbreviations are MINUTE, MINUTES, and MIN. The starting subperiod s is in minutes (MINUTE).

SECOND
    specifies second intervals. Abbreviations are SECOND, SECONDS, and SEC. The starting subperiod s is in seconds (SECOND).

Examples of Interval Specifications

Table 4.1 shows examples of different kinds of interval specifications.


Table 4.1 Examples of Intervals

Name           Kind of Interval

YEAR           years that start in January
YEAR.10        fiscal years that start in October
YEAR2.7        biennial intervals that start in July of even years
YEAR2.19       biennial intervals that start in July of odd years
YEAR4.11       four-year intervals that start in November of leap years
               (frequency of U.S. presidential elections)
YEAR4.35       four-year intervals that start in November of even years
               between leap years (frequency of U.S. midterm elections)
WEEK           weekly intervals that start on Sundays
WEEK2          biweekly intervals that start on first Sundays
WEEK1.1        same as WEEK
WEEK.2         weekly intervals that start on Mondays
WEEK6.3        six-week intervals that start on first Tuesdays
WEEK6.11       six-week intervals that start on second Wednesdays
WEEKDAY        daily with Friday-Saturday-Sunday counted as the same day
               (five-day work week with a Saturday-Sunday weekend)
WEEKDAY17W     same as WEEKDAY
WEEKDAY5.2     five weekdays that start on Monday. If WEEKDAY data are
               accumulated into weekly data, the interval of the
               accumulated data is WEEKDAY5.2
WEEKDAY67W     daily with Thursday-Friday-Saturday counted as the same day
               (five-day work week with a Friday-Saturday weekend)
WEEKDAY1W      daily with Saturday-Sunday counted as the same day
               (six-day work week with a Sunday weekend)
WEEKDAY3.2     three-weekday intervals (with Friday-Saturday-Sunday counted
               as one weekday) with the cycle of three-weekday periods
               aligned to Monday, 4 Jan 1960
HOUR8.7        eight-hour intervals that start at 6 a.m., 2 p.m., and
               10 p.m. (might be used for work shifts)

Date and Datetime Informats

Table 4.2 lists some of the SAS date and datetime informats available in SAS to read date, time, and datetime values. See Chapter 3, "Working with Time Series Data," for a discussion of the use of date and datetime informats. See SAS Language Reference: Concepts for a complete description of these informats.

For each informat, Table 4.2 shows an example of a date or datetime value written in the style that the informat is designed to read. The date 17 October 1991 and the time 2:25:32 p.m. are used for the example in all cases. Table 4.2 shows the width range allowed by the informat and the default width.


Table 4.2 Frequently Used SAS Date and Datetime Informats

Informat              Description                               Width    Default
  Example                                                       Range    Width

DATEw.                day, month abbreviation, and year:        7–32       7
  17oct91             ddmonyy

DATETIMEw.d           date and time: ddmonyy:hh:mm:ss           13–40      18
  17oct91:14:45:32

DDMMYYw.              day, month, year: ddmmyy, dd/mm/yy,       6–32       6
  17/10/91            dd-mm-yy, or dd mm yy

JULIANw.              year and day of year (Julian dates):      5–32       5
  91290               yyddd

MMDDYYw.              month, day, year: mmddyy, mm/dd/yy,       6–32       6
  10/17/91            mm-dd-yy, or mm dd yy

MONYYw.               month abbreviation and year               5–32       5
  Oct91

NENGOw.               Japanese Nengo notation                   7–32       10
  H.03/10/17

TIMEw.d               hours, minutes, seconds: hh:mm:ss         5–32       8
  14:45:32            or hours, minutes: hh:mm

WEEKVw.               ISO 8601 year, week, day of week:         3–200      11
  1991-W42-04         yyyy-Www-dd

YYMMDDw.              year, month, day: yymmdd, yy/mm/dd,       6–32       6
  91/10/17            yy-mm-dd, or yy mm dd

YYQw.                 year and quarter of year: yyQq            4–32       4
  91Q4
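As a minimal sketch of how such informats are used (the data set and variable names here are illustrative), the following DATA step reads one raw record with the DATE, TIME, and DATETIME informats:

   data example;
      input d : date9. t : time8. dt : datetime18.;
      format d date9. t time8. dt datetime18.;
   datalines;
   17oct91 14:45:32 17oct91:14:45:32
   ;

   proc print data=example;
   run;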

Date, Time, and Datetime Formats

Some of the SAS date and datetime formats commonly used to write out SAS date and datetime values are listed in Table 4.3 and Table 4.4. A width value can be specified with each format. The tables list the range of width values allowed and the default width value for each format. The notation used by a format is abbreviated in different ways depending on the width option used.


For example, the format MMDDYY8. writes the date 17 October 1991 as 10/17/91, while the format MMDDYY6. writes this date as 101791. In particular, formats that display the year show two-digit or four-digit year values depending on the width option. The examples shown in the tables are for the default width.

The interval function INTFMT returns a recommended format for time ID values based on the interval that describes the frequency of the values. The following example uses INTFMT to select a format to display the quarterly time ID variable qtrDate. This selected format is stored in a macro variable created by the CALL SYMPUT statement. The macro variable &FMT is then used in the FORMAT statement in the PROC PRINT step.

   data b(keep=qtrDate);
      interval = 'QTR';
      form = INTFMT( interval, 'long' );
      call symput( 'fmt', form );
      do i = 1 to 4;
         qtrDate = INTNX( interval, '01jan00'd, i-1 );
         output;
      end;
   run;

   proc print;
      format qtrDate &fmt;
   run;

The second argument to INTFMT is ‘long,’ ‘l,’ ‘short,’ or ‘s’ and controls the width of the year (2 or 4) for date formats. For more information about the INTFMT function, see SAS Language Reference: Dictionary. See SAS Language Reference: Concepts for a complete description of these formats, including the variations of the formats produced by different width options. See Chapter 3, “Working with Time Series Data,” for a discussion of the use of date and datetime formats.

Date Formats

Table 4.3 lists some of the date formats available in SAS. For each format, an example is shown of a date value in the notation produced by the format. The date '17OCT91'D is used as the example.

Table 4.3 Frequently Used SAS Date Formats

Format                Description                               Width    Default
  Example                                                       Range    Width

DATEw.                day, month abbreviation, year:            5–9        7
  17OCT91             ddmonyy

DAYw.                 day of month                              2–32       2
  17

DDMMYYw.              day, month, year: dd/mm/yy                2–8        8
  17/10/91

DOWNAMEw.             name of day of the week                   1–32       9
  Thursday

JULDAYw.              day of year                               3–32       3
  290

JULIANw.              year and day of year: yyddd               5–7        5
  91290

MMDDYYw.              month, day, year: mm/dd/yy                2–8        8
  10/17/91

MMYYw.                month and year: mmMyy                     5–32       7
  10M1991

MMYYCw.               month and year: mm:yy                     5–32       7
  10:1991

MMYYDw.               month and year: mm-yy                     5–32       7
  10-1991

MMYYPw.               month and year: mm.yy                     5–32       7
  10.1991

MMYYSw.               month and year: mm/yy                     5–32       7
  10/1991

MMYYNw.               month and year: mmyy                      5–32       6
  101991

MONNAMEw.             name of month                             1–32       9
  October

MONTHw.               month of year                             1–32       2
  10

MONYYw.               month abbreviation and year: monyy        5–7        5
  OCT91

QTRw.                 quarter of year                           1–32       1
  4

QTRRw.                quarter in roman numerals                 3–32       3
  IV

NENGOw.               Japanese Nengo notation                   2–10       10
  H.03/10/17

WEEKDATEw.            day-of-week, month-name dd, yy            3–37       29
  Thursday, October 17, 1991

WEEKDATXw.            day-of-week, dd month-name yy             3–37       29
  Thursday, 17 October 1991

WEEKDAYw.             day of week                               1–32       1
  5

WEEKVw.               ISO 8601 year, week, day of week:         3–200      11
  1991-W42-04         yyyy-Www-dd

WORDDATEw.            month-name dd, yy                         3–32       18
  October 17, 1991

WORDDATXw.            dd month-name yy                          3–32       18
  17 October 1991

YEARw.                year                                      2–32       4
  1991

YYMMw.                year and month: yyMmm                     5–32       7
  1991M10

YYMMCw.               year and month: yy:mm                     5–32       7
  1991:10

YYMMDw.               year and month: yy-mm                     5–32       7
  1991-10

YYMMPw.               year and month: yy.mm                     5–32       7
  1991.10

YYMMSw.               year and month: yy/mm                     5–32       7
  1991/10

YYMMNw.               year and month: yymm                      5–32       7
  199110

YYMONw.               year and month abbreviation: yyMON        5–32       7
  1991OCT

YYMMDDw.              year, month, day: yy/mm/dd                2–8        8
  91/10/17

YYQw.                 year and quarter: yyQq                    4–6        4
  91Q4

YYQCw.                year and quarter: yy:q                    4–32       6
  1991:4

YYQDw.                year and quarter: yy-q                    4–32       6
  1991-4

YYQPw.                year and quarter: yy.q                    4–32       6
  1991.4

YYQSw.                year and quarter: yy/q                    4–32       6
  1991/4

YYQNw.                year and quarter: yyq                     3–32       5
  19914

YYQRw.                year and quarter in roman numerals:       6–32       8
  1991QIV             yyQrr

YYQRCw.               year and quarter in roman numerals:       6–32       8
  1991:IV             yy:rr

YYQRDw.               year and quarter in roman numerals:       6–32       8
  1991-IV             yy-rr

YYQRPw.               year and quarter in roman numerals:       6–32       8
  1991.IV             yy.rr

YYQRSw.               year and quarter in roman numerals:       6–32       8
  1991/IV             yy/rr

YYQRNw.               year and quarter in roman numerals:       6–32       8
  1991IV              yyrr
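As a minimal sketch (format widths taken from the table's defaults), the following DATA step writes the example date with a few of these formats:

   data _null_;
      d = '17oct1991'd;
      put d date7.       /   /* 17OCT91                   */
          d monyy5.      /   /* OCT91                     */
          d yymms7.      /   /* 1991/10                   */
          d weekdate29.;     /* Thursday, October 17, 1991 */
   run;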

Datetime and Time Formats

Table 4.4 lists some of the datetime and time formats available in the SAS system. For each format, an example is shown of a datetime value in the notation produced by the format. The datetime value '17OCT91:14:25:32'DT is used as the example.

Table 4.4 Frequently Used SAS Datetime and Time Formats

Format                Description                               Width    Default
  Example                                                       Range    Width

DATETIMEw.d           ddMONyy:hh:mm:ss                          7–40       16
  17OCT91:14:25:32

DTWKDATXw.            day-of-week, dd month yyyy                3–37       29
  Thursday, 17 October 1991

HHMMw.d               hour and minute: hh:mm                    2–20       5
  14:25

HOURw.d               hour                                      2–20       2
  14

MMSSw.d               minutes and seconds: mm:ss                2–20       5
  25:32

TIMEw.d               time of day: hh:mm:ss                     2–20       8
  14:25:32

TODw.d                time of day: hh:mm:ss                     2–20       8
  14:25:32
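A corresponding sketch for the datetime and time formats (widths again taken from the table's defaults):

   data _null_;
      dt = '17oct91:14:25:32'dt;
      put dt datetime16.  /   /* 17OCT91:14:25:32 */
          dt hhmm5.       /   /* 14:25            */
          dt tod8.;           /* 14:25:32         */
   run;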

Alignment of SAS Dates

SAS date values used to identify time series observations produced by SAS/ETS and SAS High-Performance Forecasting procedures are normally aligned with the beginning of the time intervals that correspond to the observations. For example, for monthly data for 1994, the date values that identify the observations are 1Jan94, 1Feb94, 1Mar94, ..., 1Dec94. However, for some applications it might be preferable to use end-of-period dates, such as 31Jan94, 28Feb94, 31Mar94, ..., 31Dec94. For other applications, such as plotting time series, it might be more convenient to use interval midpoint dates to identify the observations.

Many SAS/ETS and SAS High-Performance Forecasting procedures provide an ALIGN= option to control the alignment of dates for outputting time series observations. SAS/ETS procedures that support the ALIGN= option are ARIMA, DATASOURCE, ESM, EXPAND, FORECAST, SIMILARITY, TIMESERIES, UCM, and VARMAX. SAS High-Performance Forecasting procedures that support the ALIGN= option are HPFRECONCILE, HPF, HPFDIAGNOSE, HPFENGINE, and HPFEVENTS.

The ALIGN= option can have the following values:

BEGINNING
    specifies that dates be aligned to the start of the interval. This is the default. BEGINNING can be abbreviated as BEGIN, BEG, or B.

MIDDLE
    specifies that dates be aligned to the interval midpoint, the average of the beginning and ending values. MIDDLE can be abbreviated as MID or M.

ENDING
    specifies that dates be aligned to the end of the interval. ENDING can be abbreviated as END or E.

For information about the calculation of the beginning and ending values of intervals, see the section "Beginning Dates and Datetimes of Intervals" on page 132.
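The same three alignments are available directly through the alignment argument of the INTNX function; a minimal sketch (the date is chosen arbitrarily):

   data _null_;
      b = intnx( 'month', '17oct1991'd, 0, 'b' );
      m = intnx( 'month', '17oct1991'd, 0, 'm' );
      e = intnx( 'month', '17oct1991'd, 0, 'e' );
      put b= date9. m= date9. e= date9.;
      /* expected: b=01OCT1991 m=16OCT1991 e=31OCT1991 */
   run;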

Date, Time, and Datetime Functions

SAS provides functions to perform calculations with SAS date, time, and datetime values. SAS date, time, and datetime functions are used to perform the following tasks:

   - compute date, time, and datetime values from calendar and time-of-day values
   - compute calendar and time-of-day values from date and datetime values
   - convert between date, time, and datetime values
   - perform calculations that involve time intervals
   - provide information about time intervals
   - provide information about seasonality

SAS date, time, and datetime functions are listed in alphabetical order in the following section. The intervals and other character arguments for interval functions can be supplied either directly as a quoted string or as a SAS character variable. However, when you use a character variable, you should set the length of the character variable to at least the length of the longest string for that variable used in the DATA step.

Also, to ensure correct results when using interval functions, use date intervals with date values and datetime intervals with datetime values. See SAS Language Reference: Dictionary for a complete description of these functions.
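A minimal sketch of the character-variable length caveat (the intervals and dates are chosen arbitrarily): declaring the variable long enough avoids truncation when a longer interval name is assigned later in the step.

   data _null_;
      length interval $ 10;   /* room for the longest name used below */
      interval = 'WEEK';
      n1 = intck( interval, '01jan1991'd, '17oct1991'd );
      interval = 'SEMIMONTH'; /* would be truncated if the length were 4 */
      n2 = intck( interval, '01jan1991'd, '17oct1991'd );
      put n1= n2=;
   run;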

SAS Date, Time, and Datetime Functions

DATE()
    returns today's date as a SAS date value.

DATEJUL( yyddd )
    returns the SAS date value given the Julian date in yyddd or yyyyddd format. For example, DATE = DATEJUL(99001); assigns the SAS date value '01JAN99'D to DATE, and DATE = DATEJUL(1999365); assigns the SAS date value '31DEC1999'D to DATE.

DATEPART( datetime )
    returns the date part of a SAS datetime value as a date value.

DATETIME()
    returns the current date and time of day as a SAS datetime value.

DAY( date )
    returns the day of the month from a SAS date value.

DHMS( date, hour, minute, second )
    returns a SAS datetime value for date, hour, minute, and second values.

HMS( hour, minute, second )
    returns a SAS time value for hour, minute, and second values.

HOLIDAY( 'holiday', year )
    returns a SAS date value for the holiday and year specified. Valid values for holiday are 'BOXING', 'CANADA', 'CANADAOBSERVED', 'CHRISTMAS', 'COLUMBUS', 'EASTER', 'FATHERS', 'HALLOWEEN', 'LABOR', 'MLK', 'MEMORIAL', 'MOTHERS', 'NEWYEAR', 'THANKSGIVING', 'THANKSGIVINGCANADA', 'USINDEPENDENCE', 'USPRESIDENTS', 'VALENTINES', 'VETERANS', 'VETERANSUSG', 'VETERANSUSPS', and 'VICTORIA'. For example: EASTER2000 = HOLIDAY('EASTER', 2000);

HOUR( datetime )
    returns the hour from a SAS datetime or time value.

INTCINDEX( 'date-interval', date )
INTCINDEX( 'datetime-interval', datetime )
    returns the index of the seasonal cycle given an interval and an appropriate SAS date, datetime, or time value. For example, the seasonal cycle for INTERVAL='DAY' is 'WEEK', so INTCINDEX('DAY','01SEP78'D); returns 35 since September 1, 1978, is the sixth day of the 35th week of the year. For correct results, date intervals should be used with date values, and datetime intervals should be used with datetime values.

INTCK( 'date-interval', date1, date2 )
INTCK( 'datetime-interval', datetime1, datetime2 )
    returns the number of boundaries of intervals of the given kind that lie between the two date or datetime values.

INTCYCLE( 'interval' )
    returns the interval of the seasonal cycle, given a date, time, or datetime interval. For example, INTCYCLE('MONTH') returns 'YEAR' since the months January, February, ..., December constitute a yearly cycle. INTCYCLE('DAY') returns 'WEEK' since Sunday, Monday, ..., Saturday is a weekly cycle.

INTFIT( date1, date2, 'D' )
INTFIT( datetime1, datetime2, 'DT' )
INTFIT( obs1, obs2, 'OBS' )
    returns an interval that fits exactly between two SAS date, datetime, or observation values, in the sense that the second value is one interval from the first when the INTNX function uses SAMEDAY alignment. In the following example, result1 is the same as date1 and result2 is the same as date2.

       FitInterval = INTFIT( date1, date2, 'D' );
       result1 = INTNX( FitInterval, date1, 0, 'SAMEDAY' );
       result2 = INTNX( FitInterval, date1, 1, 'SAMEDAY' );

    More than one interval can fit the preceding definition. For instance, two SAS date values that are seven days apart could be fit with either 'DAY7' or 'WEEK'. The INTFIT algorithm chooses the more common interval, so 'WEEK' is the result when the dates are seven days apart. The INTFIT function can be used to detect the possible frequency of the time series or to analyze frequencies of other events in a time series, such as outliers or missing values.

INTFMT( 'interval', 'size' )
    returns a recommended format, given a date, time, or datetime interval for displaying the time ID values associated with a time series of the given interval. The valid values of size ('long', 'l', 'short', 's') specify whether the returned format uses a two-digit or four-digit year to display the SAS date value.

INTGET( date1, date2, date3 )
INTGET( datetime1, datetime2, datetime3 )
    returns an interval that fits three consecutive SAS date or datetime values. The INTGET function examines two intervals: the first interval between date1 and date2, and the second interval between date2 and date3. In order for an interval to be detected, either the two intervals must be the same or one interval must be an integer multiple of the other interval. That is, INTGET assumes that at least two of the dates are consecutive points in the time series, and that the other two dates are also consecutive or represent the points before and after missing observations. The INTGET algorithm assumes that large values are SAS datetime values, which are measured in seconds, and that smaller values are SAS date values, which are measured in days. The INTGET function can be used to detect the possible frequency of the time series or to analyze frequencies of other events in a time series, such as outliers or missing values.

INTINDEX( 'date-interval', date )
INTINDEX( 'datetime-interval', datetime )
    returns the seasonal index, given a date, time, or datetime interval and an appropriate date, time, or datetime value. The seasonal index is a number that represents the position of the date, time, or datetime value in the seasonal cycle of the specified interval. For example, INTINDEX('MONTH','01DEC2000'D); returns 12 because monthly data is yearly periodic and DECEMBER is the 12th month of the year. However, INTINDEX('DAY','01DEC2000'D); returns 6 because daily data is weekly periodic and December 01, 2000, is a Friday, the sixth day of the week.

    To correctly identify the seasonal index, the interval specification should agree with the date, time, or datetime value. For example, INTINDEX('DTMONTH','01DEC2000'D); and INTINDEX('MONTH','01DEC2000:00:00:00'DT); do not return the expected value of 12. However, both INTINDEX('MONTH','01DEC2000'D); and INTINDEX('DTMONTH','01DEC2000:00:00:00'DT); return the expected value of 12.

INTNX( 'date-interval', date, n < , 'alignment' > )
INTNX( 'datetime-interval', datetime, n < , 'alignment' > )
    returns the date or datetime value of the beginning of the interval that is n intervals from the interval that contains the given date or datetime value. The optional alignment argument specifies that the returned date is aligned to the beginning, middle, or end of the interval. Beginning is the default. In addition, you can specify SAME (or S) alignment. The SAME alignment bases the alignment of the calculated date or datetime value on the alignment of the input date or datetime value. As illustrated in the following example, the SAME alignment can be used to calculate the meaning of "same day next year" or "same day 2 weeks from now."

       nextYear = INTNX( 'YEAR', '15Apr2007'D, 1, 'S' );
       TwoWeeks = INTNX( 'WEEK', '15Apr2007'D, 2, 'S' );

    The preceding example returns '15Apr2008'D for nextYear and '29Apr2007'D for TwoWeeks. For all values of alignment, the number of intervals n between the input date and the resulting date agrees with the input value.

       date2 = INTNX( interval, date1, n1, align );
       n2 = INTCK( interval, date1, date2 );

    The result is always that n2 = n1.

INTSEAS( 'interval' )
    returns the length of the seasonal cycle, given a date, time, or datetime interval. The length of a seasonal cycle is the number of intervals in a seasonal cycle. For example, when the interval for a time series is described as monthly, many procedures use the option INTERVAL=MONTH to indicate that each observation in the data then corresponds to a particular month. Monthly data are considered to be periodic for a one-year seasonal cycle. There are 12 months in one year, so the number of intervals (months) in a seasonal cycle (year) is 12. For quarterly data, there are 4 quarters in one year, so the number of intervals in a seasonal cycle is 4. The periodicity is not always one year. For example, INTERVAL=DAY is considered to have a seasonal cycle of one week, and because there are 7 days in a week, the number of intervals in a seasonal cycle is 7.

INTSHIFT( 'interval' )
    returns the shift interval that applies to the shift index if a subperiod is specified. For example, YEAR intervals are shifted by MONTH, so INTSHIFT('YEAR') returns 'MONTH'.

INTTEST( 'interval' )
    returns 1 if the interval name is a valid interval, 0 otherwise. For example, VALID = INTTEST('MONTH'); should set VALID to 1, while VALID = INTTEST('NOTANINTERVAL'); should set VALID to 0. The INTTEST function can be useful in verifying which values of multiplier n and the shift index s are valid in constructing an interval name.

JULDATE( date )
    returns the Julian date from a SAS date value. The format of the Julian date is either yyddd or yyyyddd depending on the value of the system option YEARCUTOFF=. For example, using the default system option values, JULDATE( '31DEC1999'D ); returns 99365, while JULDATE( '31DEC1899'D ); returns 1899365.

MDY( month, day, year )
    returns a SAS date value for month, day, and year values.

MINUTE( datetime )
    returns the minute from a SAS time or datetime value.

MONTH( date )
    returns the numerical value for the month of the year from a SAS date value. For example, MONTH=MONTH('01JAN2000'D); returns 1, the numerical value for January.

NWKDOM( n, weekday, month, year )
    returns a SAS date value for the nth weekday of the month and year specified. For example, Thanksgiving is always the fourth (n = 4) Thursday (weekday = 5) in November (month = 11). Thus THANKS2000 = NWKDOM( 4, 5, 11, 2000); returns the SAS date value for Thanksgiving in the year 2000. The last weekday of a month can be specified using n = 5. Memorial Day in the United States is the last (n = 5) Monday (weekday = 2) in May (month = 5), and so MEMORIAL2002 = NWKDOM( 5, 2, 5, 2002); returns the SAS date value for Memorial Day in 2002. Because n = 5 always specifies the last occurrence of the month and most months have only 4 instances of each day, the result for n = 5 is often the same as the result for n = 4. NWKDOM is useful for calculating the SAS date values of holidays that are defined in this manner.

QTR( date )
    returns the quarter of the year from a SAS date value.

SECOND( date )
    returns the second from a SAS time or datetime value.

TIME()
    returns the current time of day.

TIMEPART( datetime )
    returns the time part of a SAS datetime value.

TODAY()
    returns the current date as a SAS date value. (TODAY is another name for the DATE function.)

WEEK( date < , 'descriptor' > )
    returns the week of year from a SAS date value. The algorithm used to calculate the week depends on the descriptor. If the descriptor is 'U', weeks start on Sunday and the range is 0 to 53. If weeks 0 and 53 exist, they are only partial weeks. Week 52 can be a partial week. If the descriptor is 'V', the result is equivalent to the ISO 8601 week of year definition. The range is 1 to 53. Week 53 is a leap week. The first week of the year, Week 1, and the last week of the year, Week 52 or 53, can include days in another Gregorian calendar year. If the descriptor is 'W', weeks start on Monday and the range is 0 to 53. If weeks 0 and 53 exist, they are only partial weeks. Week 52 can be a partial week.

WEEKDAY( date )
    returns the day of the week from a SAS date value. For example, WEEKDAY=WEEKDAY('17OCT1991'D); returns 5, the numerical value for Thursday.

YEAR( date )
    returns the year from a SAS date value.

YYQ( year, quarter )
    returns a SAS date value for year and quarter values.
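To tie a few of these functions together, here is a minimal sketch using the chapter's example date (the WEEKV rendering and WEEKDAY value match the tables and descriptions above):

   data _null_;
      d = '17oct1991'd;
      put d weekv11.;        /* 1991-W42-04            */
      dow = weekday( d );    /* 5 = Thursday           */
      wV  = week( d, 'v' );  /* ISO 8601 week: 42      */
      q   = qtr( d );        /* 4 = fourth quarter     */
      put dow= wV= q=;
   run;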

References

National Retail Federation (2007), National Retail Federation 4-5-4 Calendar, Washington, DC: NRF.

Technical Committee ISO/TC 154 (2004), ISO 8601:2004 Data Elements and Interchange Formats–Information Interchange–Representation of Dates and Times, 3rd Edition, Technical report, International Organization for Standardization.

Chapter 5

SAS Macros and Functions

Contents
    SAS Macros                                              149
        BOXCOXAR Macro                                      150
        DFPVALUE Macro                                      153
        DFTEST Macro                                        154
        LOGTEST Macro                                       156
    Functions                                               158
        PROBDF Function for Dickey-Fuller Tests             158
    References                                              163

SAS Macros

This chapter describes several SAS macros and the SAS function PROBDF that are provided with SAS/ETS software. A SAS macro is a program that generates SAS statements. Macros make it easy to produce and execute complex SAS programs that would be time-consuming to write yourself. SAS/ETS software includes the following macros:

%AR
    generates statements to define autoregressive error models for the MODEL procedure.

%BOXCOXAR
    investigates Box-Cox transformations useful for modeling and forecasting a time series.

%DFPVALUE
    computes probabilities for Dickey-Fuller test statistics.

%DFTEST
    performs Dickey-Fuller tests for unit roots in a time series process.

%LOGTEST
    tests to see if a log transformation is appropriate for modeling and forecasting a time series.

%MA
    generates statements to define moving-average error models for the MODEL procedure.

%PDL
    generates statements to define polynomial-distributed lag models for the MODEL procedure.


These macros are part of the SAS AUTOCALL facility and are automatically available for use in your SAS program. See SAS Macro Language: Reference for information about the SAS macro facility. Since the %AR, %MA, and %PDL macros are used only with PROC MODEL, they are documented with the MODEL procedure. See the sections on the %AR, %MA, and %PDL macros in Chapter 18, “The MODEL Procedure,” for more information about these macros. The %BOXCOXAR, %DFPVALUE, %DFTEST, and %LOGTEST macros are described in the following sections.

BOXCOXAR Macro

The %BOXCOXAR macro finds the optimal Box-Cox transformation for a time series.

Transformations of the dependent variable are a useful way of dealing with nonlinear relationships or heteroscedasticity. For example, the logarithmic transformation is often used for modeling and forecasting time series that show exponential growth or that show variability proportional to the level of the series.

The Box-Cox transformation is a general class of power transformations that include the log transformation and no transformation as special cases. The Box-Cox transformation is

$$ Y_t = \begin{cases} \dfrac{(X_t + c)^{\lambda} - 1}{\lambda} & \text{for } \lambda \neq 0 \\[1ex] \ln(X_t + c) & \text{for } \lambda = 0 \end{cases} $$

The parameter λ controls the shape of the transformation. For example, λ = 0 produces a log transformation, while λ = 0.5 results in a square root transformation. When λ = 1, the transformed series differs from the original series by c − 1.

The constant c is optional. It can be used when some $X_t$ values are negative or 0. You choose c so that the series $X_t$ is always greater than $-c$.

The %BOXCOXAR macro tries a range of λ values and reports which of the values tried produces the optimal Box-Cox transformation. To evaluate different λ values, the %BOXCOXAR macro transforms the series with each λ value and fits an autoregressive model to the transformed series. It is assumed that this autoregressive model is a reasonably good approximation to the true time series model appropriate for the transformed series. The likelihood of the data under each autoregressive model is computed, and the λ value that produces the maximum likelihood over the values tried is reported as the optimal Box-Cox transformation for the series.

The %BOXCOXAR macro prints and optionally writes to a SAS data set all of the λ values tried, the corresponding log-likelihood value, and related statistics for the autoregressive model. You can control the range and number of λ values tried. You can also control the order of the autoregressive models fit to the transformed series. You can difference the transformed series before the autoregressive model is fit.

Note that the Box-Cox transformation might be appropriate when the data have a common distribution (apart from heteroscedasticity) but not when groups of observations for the variable are quite different. Thus the %BOXCOXAR macro is more often appropriate for time series data than for cross-sectional data.

Syntax

The form of the %BOXCOXAR macro is

   %BOXCOXAR( SAS-data-set, variable < , options > ) ;

The first argument, SAS-data-set, specifies the name of the SAS data set that contains the time series to be analyzed. The second argument, variable, specifies the time series variable name to be analyzed. The first two arguments are required.

The following options can be used with the %BOXCOXAR macro. Options must follow the required arguments and are separated by commas.

The first argument, SAS-data-set, specifies the name of the SAS data set that contains the time series to be analyzed. The second argument, variable, specifies the time series variable name to be analyzed. The first two arguments are required. The following options can be used with the %BOXCOXAR macro. Options must follow the required arguments and are separated by commas. AR=n

specifies the order of the autoregressive model fit to the transformed series. The default is AR=5. CONST=value

specifies a constant c to be added to the series before transformation. Use the CONST= option when some values of the series are 0 or negative. The default is CONST=0. DIF=( differencing-list )

specifies the degrees of differencing to apply to the transformed series before the autoregressive model is fit. The differencing-list is a list of positive integers separated by commas and enclosed in parentheses. For example, DIF=(1,12) specifies that the transformed series be differenced once at lag 1 and once at lag 12. For more details, see the section “IDENTIFY Statement” on page 228 in Chapter 7, “The ARIMA Procedure.” LAMBDAHI=value

specifies the maximum value of lambda for the grid search. The default is LAMBDAHI=1. A large (in magnitude) LAMBDAHI= value can result in problems with floating point arithmetic. LAMBDALO=value

specifies the minimum value of lambda for the grid search. The default is LAMBDALO=0. A large (in magnitude) LAMBDALO= value can result in problems with floating point arithmetic. NLAMBDA=value

specifies the number of lambda values considered, including the LAMBDALO= and LAMBDAHI= option values. The default is NLAMBDA=2. OUT=SAS-data-set

writes the results to an output data set. The output data set includes the lambda values tried (LAMBDA), and for each lambda value, the log likelihood (LOGLIK), residual mean squared error (RMSE), Akaike Information Criterion (AIC), and Schwarz’s Bayesian Criterion (SBC).

152 F Chapter 5: SAS Macros and Functions

PRINT=YES | NO

specifies whether results are printed. The default is PRINT=YES. The printed output contains the lambda values, log likelihoods, residual mean square errors, Akaike Information Criterion (AIC), and Schwarz’s Bayesian Criterion (SBC).
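As a minimal usage sketch (the data set, variable, and option values here are illustrative, using only arguments documented above):

   /* Search 11 lambda values in [0,1] for the CPI series,
      differencing once at lag 1 before fitting the AR model */
   %boxcoxar( usprice, cpi, dif=(1), lambdalo=0, lambdahi=1, nlambda=11 );
   %put Optimal lambda: &boxcoxar;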

Results

The value of λ that produces the maximum log likelihood is returned in the macro variable &BOXCOXAR. The value of the variable &BOXCOXAR is "ERROR" if the %BOXCOXAR macro is unable to compute the best transformation due to errors. This might be the result of large lambda values. The Box-Cox transformation parameter involves exponentiation of the data, so that large lambda values can cause floating-point overflow.

Results are printed unless the PRINT=NO option is specified. Results are also stored in SAS data sets when the OUT= option is specified.

Details

Assume that the transformed series $Y_t$ is a stationary pth order autoregressive process generated by independent normally distributed innovations:

$$ (1 - \Theta(B))(Y_t - \mu) = \epsilon_t $$

$$ \epsilon_t \sim \mathrm{iid}\; N(0, \sigma^2) $$

Given these assumptions, the log-likelihood function of the transformed data $Y_t$ is

$$ l_Y(\cdot) = -\frac{n}{2}\ln(2\pi) - \frac{1}{2}\ln(|\Sigma|) - \frac{n}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2}(\mathbf{Y} - \mu\mathbf{1})'\,\Sigma^{-1}(\mathbf{Y} - \mu\mathbf{1}) $$

In this equation, n is the number of observations, μ is the mean of $Y_t$, $\mathbf{1}$ is the n-dimensional column vector of 1s, σ² is the innovation variance, $\mathbf{Y} = (Y_1, \cdots, Y_n)'$, and Σ is the covariance matrix of $\mathbf{Y}$.

The log-likelihood function of the original data $X_1, \cdots, X_n$ is

$$ l_X(\cdot) = l_Y(\cdot) + (\lambda - 1)\sum_{t=1}^{n} \ln(X_t + c) $$

where c is the value of the CONST= option.

For each value of λ, the maximum log-likelihood of the original data is obtained from the maximum log-likelihood of the transformed data given the maximum likelihood estimate of the autoregressive model. The maximum log-likelihood values are used to compute the Akaike Information Criterion (AIC) and Schwarz's Bayesian Criterion (SBC) for each λ value. The residual mean squared error based on the maximum likelihood estimator is also produced. To compute the mean squared error, the predicted values from the model are transformed again to the original scale (Pankratz 1983, pp. 256–258, and Taylor 1986).

After differencing as specified by the DIF= option, the process is assumed to be a stationary autoregressive process. You can check for stationarity of the series with the %DFTEST macro. If the process is not stationary, differencing with the DIF= option is recommended. For a process with moving-average terms, a large value for the AR= option might be appropriate.

DFPVALUE Macro

The %DFPVALUE macro computes the significance of the Dickey-Fuller test. The %DFPVALUE macro evaluates the p-value for the Dickey-Fuller test statistic τ for the test of H0: "The time series has a unit root" versus Ha: "The time series is stationary" using tables published by Dickey (1976) and Dickey, Hasza, and Fuller (1984).

The %DFPVALUE macro can compute p-values for tests of a simple unit root with lag 1 or for seasonal unit roots at lags 2, 4, or 12. The %DFPVALUE macro takes into account whether an intercept or deterministic time trend is assumed for the series. The %DFPVALUE macro is used by the %DFTEST macro described later in this chapter.

Note that the %DFPVALUE macro has been superseded by the PROBDF function described later in this chapter. It remains for compatibility with past releases of SAS/ETS.

Syntax

The %DFPVALUE macro has the following form:

   %DFPVALUE( tau, nobs < , options > ) ;

The first argument, tau, specifies the value of the Dickey-Fuller test statistic. The second argument, nobs, specifies the number of observations on which the test statistic is based. The first two arguments are required.

The following options can be used with the %DFPVALUE macro. Options must follow the required arguments and are separated by commas.

The first argument, tau, specifies the value of the Dickey-Fuller test statistic. The second argument, nobs, specifies the number of observations on which the test statistic is based. The first two arguments are required. The following options can be used with the %DFPVALUE macro. Options must follow the required arguments and are separated by commas. DLAG=1 | 2 | 4 | 12

specifies the lag period of the unit root to be tested. DLAG=1 specifies a one-period unit root test. DLAG=2 specifies a test for a seasonal unit root with lag 2. DLAG=4 specifies a test for a seasonal unit root with lag 4. DLAG=12 specifies a test for a seasonal unit root with lag 12. The default is DLAG=1. TREND=0 | 1 | 2

specifies the degree of deterministic time trend included in the model. TREND=0 specifies no trend and assumes the series has a zero mean. TREND=1 includes an intercept term.

154 F Chapter 5: SAS Macros and Functions

TREND=2 specifies both an intercept and a deterministic linear time trend term. The default is TREND=1. TREND=2 is not allowed with DLAG=2, 4, or 12.

Results

The computed p-value is returned in the macro variable &DFPVALUE. If the p-value is less than 0.01 or larger than 0.99, the macro variable &DFPVALUE is set to 0.01 or 0.99, respectively.
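For example, the following sketch evaluates the p-value for a hypothetical test statistic of -3.45 based on 100 observations, for a one-period unit root test with an intercept:

   %dfpvalue( -3.45, 100, dlag=1, trend=1 );
   %put Dickey-Fuller p-value = &dfpvalue;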

Minimum Observations

The minimum number of observations required by the %DFPVALUE macro depends on the value of the DLAG= option. The minimum observations are as follows:

   DLAG=   Minimum Observations
   1       9
   2       6
   4       4
   12      12

DFTEST Macro

The %DFTEST macro performs the Dickey-Fuller unit root test. You can use the %DFTEST macro to decide whether a time series is stationary and to determine the order of differencing required for the time series analysis of a nonstationary series.

Most time series analysis methods require that the series to be analyzed be stationary. However, many economic time series are nonstationary processes. The usual approach to this problem is to difference the series. A time series that can be made stationary by differencing is said to have a unit root. For more information, see the discussion of this issue in the section "Getting Started: ARIMA Procedure" on page 191 of Chapter 7, "The ARIMA Procedure."

The Dickey-Fuller test is a method for testing whether a time series has a unit root. The %DFTEST macro tests the hypothesis H0: "The time series has a unit root" versus Ha: "The time series is stationary," based on tables provided in Dickey (1976) and Dickey, Hasza, and Fuller (1984). The test can be applied for a simple unit root with lag 1, or for seasonal unit roots at lag 2, 4, or 12.

Note that the %DFTEST macro has been superseded by the PROC ARIMA stationarity tests. See Chapter 7, "The ARIMA Procedure," for details.

Syntax

The %DFTEST macro has the following form:

   %DFTEST ( SAS-data-set, variable < , options > ) ;

The first argument, SAS-data-set, specifies the name of the SAS data set that contains the time series variable to be analyzed. The second argument, variable, specifies the time series variable name to be analyzed. The first two arguments are required. The following options can be used with the %DFTEST macro. Options must follow the required arguments and are separated by commas.

AR=n
   specifies the order of autoregressive model fit after any differencing specified by the DIF= and DLAG= options. The default is AR=3.

DIF=( differencing-list )
   specifies the degrees of differencing to be applied to the series. The differencing list is a list of positive integers separated by commas and enclosed in parentheses. For example, DIF=(1,12) specifies that the series be differenced once at lag 1 and once at lag 12. For more details, see the section "IDENTIFY Statement" on page 228 in Chapter 7, "The ARIMA Procedure." If the option DIF=( $d_1, \ldots, d_k$ ) is specified, the series analyzed is $(1 - B^{d_1})\cdots(1 - B^{d_k})Y_t$, where $Y_t$ is the variable specified and $B$ is the backshift operator defined by $BY_t = Y_{t-1}$.

DLAG=1 | 2 | 4 | 12
   specifies the lag to be tested for a unit root. The default is DLAG=1.

OUT=SAS-data-set
   writes residuals to an output data set.

OUTSTAT=SAS-data-set
   writes the test statistic, parameter estimates, and other statistics to an output data set.

TREND=0 | 1 | 2
   specifies the degree of deterministic time trend included in the model. TREND=0 includes no deterministic term and assumes the series has a zero mean. TREND=1 includes an intercept term. TREND=2 specifies an intercept and a linear time trend term. The default is TREND=1. TREND=2 is not allowed with DLAG=2, 4, or 12.

Results

The computed p-value is returned in the macro variable &DFTEST. If the p-value is less than 0.01 or larger than 0.99, the macro variable &DFTEST is set to 0.01 or 0.99, respectively. (The same value is given in the macro variable &DFPVALUE returned by the %DFPVALUE macro, which is used by the %DFTEST macro to compute the p-value.) Results can be stored in SAS data sets with the OUT= and OUTSTAT= options.

Minimum Observations

The minimum number of observations required by the %DFTEST macro depends on the value of the DLAG= option. Let s be the sum of the differencing orders specified by the DIF= option, let t be the value of the TREND= option, and let p be the value of the AR= option. The minimum number of observations required is as follows:

   DLAG=   Minimum Observations
   1       1 + p + s + max(9, p + t + 2)
   2       2 + p + s + max(6, p + t + 2)
   4       4 + p + s + max(4, p + t + 2)
   12      12 + p + s + max(12, p + t + 2)

Observations are not used if they have missing values for the series or for any lag or difference used in the autoregressive model.
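As a concrete sketch (the data set SALES and variable UNITS are hypothetical), the following statements test for a simple unit root in the first-differenced series and save the detailed statistics to a data set:

   %dftest( sales, units, ar=2, dif=(1), outstat=dfstat );
   %put p=&dftest;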

LOGTEST Macro

The %LOGTEST macro tests whether a logarithmic transformation is appropriate for modeling and forecasting a time series. The logarithmic transformation is often used for time series that show exponential growth or variability proportional to the level of the series.

The %LOGTEST macro fits an autoregressive model to a series and fits the same model to the log of the series. Both models are estimated by the maximum likelihood method, and the maximum log-likelihood values for both autoregressive models are computed. These log-likelihood values are then expressed in terms of the original data and compared.

You can control the order of the autoregressive models. You can also difference the series and the log-transformed series before the autoregressive model is fit. You can print the log-likelihood values and related statistics (AIC, SBC, and MSE) for the autoregressive models for the series and the log-transformed series. You can also output these statistics to a SAS data set.

Syntax

The %LOGTEST macro has the following form:

   %LOGTEST ( SAS-data-set, variable, < options > ) ;

The first argument, SAS-data-set, specifies the name of the SAS data set that contains the time series variable to be analyzed. The second argument, variable, specifies the time series variable name to be analyzed. The first two arguments are required. The following options can be used with the %LOGTEST macro. Options must follow the required arguments and are separated by commas.

AR=n
   specifies the order of the autoregressive model fit to the series and the log-transformed series. The default is AR=5.

CONST=value
   specifies a constant to be added to the series before transformation. Use the CONST= option when some values of the series are 0 or negative. The series analyzed must be greater than the negative of the CONST= value. The default is CONST=0.

DIF=( differencing-list )
   specifies the degrees of differencing applied to the original and log-transformed series before fitting the autoregressive model. The differencing-list is a list of positive integers separated by commas and enclosed in parentheses. For example, DIF=(1,12) specifies that the transformed series be differenced once at lag 1 and once at lag 12. For more details, see the section "IDENTIFY Statement" on page 228 in Chapter 7, "The ARIMA Procedure."

OUT=SAS-data-set
   writes the results to an output data set. The output data set includes a variable TRANS that identifies the transformation (LOG or NONE), the log-likelihood value (LOGLIK), residual mean squared error (RMSE), Akaike Information Criterion (AIC), and Schwarz's Bayesian Criterion (SBC) for the log-transformed and untransformed cases.

PRINT=YES | NO
   specifies whether the results are printed. The default is PRINT=NO. The printed output shows the log-likelihood value, residual mean squared error, Akaike Information Criterion (AIC), and Schwarz's Bayesian Criterion (SBC) for the log-transformed and untransformed cases.

Results

The result of the test is returned in the macro variable &LOGTEST. The value of the &LOGTEST variable is 'LOG' if the model fit to the log-transformed data has a larger log likelihood than the model fit to the untransformed series. The value of the &LOGTEST variable is 'NONE' if the model fit to the untransformed data has a larger log likelihood. The variable &LOGTEST is set to 'ERROR' if the %LOGTEST macro is unable to compute the test due to errors. Results are printed when the PRINT=YES option is specified. Results are stored in SAS data sets when the OUT= option is specified.
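For example, the following sketch (the data set AIRLINE and variable PASSENGERS are hypothetical) applies the test to a series differenced at lags 1 and 12 and prints the comparison statistics:

   %logtest( airline, passengers, dif=(1,12), print=yes );
   %put &logtest;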

Details

Assume that a time series $X_t$ is a stationary pth-order autoregressive process with normally distributed white noise innovations. That is,

   $(1 - \Theta(B))(X_t - \mu_x) = \epsilon_t$

where $\mu_x$ is the mean of $X_t$.

The log-likelihood function of $X_t$ is

   $l_1(\cdot) = -\frac{n}{2}\ln(2\pi) - \frac{1}{2}\ln(|\Sigma_{xx}|) - \frac{n}{2}\ln(\sigma_e^2) - \frac{1}{2\sigma_e^2}(\mathbf{X} - \mathbf{1}\mu_x)'\Sigma_{xx}^{-1}(\mathbf{X} - \mathbf{1}\mu_x)$

where n is the number of observations, $\mathbf{1}$ is the n-dimensional column vector of 1s, $\sigma_e^2$ is the variance of the white noise, $\mathbf{X} = (X_1, \ldots, X_n)'$, and $\Sigma_{xx}$ is the covariance matrix of $\mathbf{X}$.

On the other hand, if the log-transformed time series $Y_t = \ln(X_t + c)$ is a stationary pth-order autoregressive process, the log-likelihood function of $X_t$ is

   $l_0(\cdot) = -\frac{n}{2}\ln(2\pi) - \frac{1}{2}\ln(|\Sigma_{yy}|) - \frac{n}{2}\ln(\sigma_e^2) - \frac{1}{2\sigma_e^2}(\mathbf{Y} - \mathbf{1}\mu_y)'\Sigma_{yy}^{-1}(\mathbf{Y} - \mathbf{1}\mu_y) - \sum_{t=1}^{n}\ln(X_t + c)$

where $\mu_y$ is the mean of $Y_t$, $\mathbf{Y} = (Y_1, \ldots, Y_n)'$, and $\Sigma_{yy}$ is the covariance matrix of $\mathbf{Y}$.

The %LOGTEST macro compares the maximum values of $l_1(\cdot)$ and $l_0(\cdot)$ and determines which is larger. The %LOGTEST macro also computes the Akaike Information Criterion (AIC), Schwarz's Bayesian Criterion (SBC), and residual mean squared error based on the maximum likelihood estimator for the autoregressive model. For the mean squared error, retransformation of forecasts is based on Pankratz (1983, pp. 256–258).

After differencing as specified by the DIF= option, the process is assumed to be a stationary autoregressive process. You might want to check for stationarity of the series using the %DFTEST macro. If the process is not stationary, differencing with the DIF= option is recommended. For a process with moving-average terms, a large value for the AR= option might be appropriate.

Functions

PROBDF Function for Dickey-Fuller Tests

The PROBDF function calculates significance probabilities for Dickey-Fuller tests for unit roots in time series. The PROBDF function can be used wherever SAS library functions can be used, including DATA step programs, SCL programs, and PROC MODEL programs.

Syntax

   PROBDF( x, n < , d < , type > > )

x
   is the test statistic.

n
   is the sample size. The minimum value of n allowed depends on the value specified for the third argument d. For d in the set (1,2,4,6,12), n must be an integer greater than or equal to max(2d, 5); for other values of d the minimum value of n is 24.

d
   is an optional integer giving the degree of the unit root tested for. Specify d = 1 for tests of a simple unit root $(1 - B)$. Specify d equal to the seasonal cycle length for tests for a seasonal unit root $(1 - B^d)$. The default value of d is 1; that is, a test for a simple unit root $(1 - B)$ is assumed if d is not specified. The maximum value of d allowed is 12.

type
   is an optional character argument that specifies the type of test statistic used. The values of type are the following:

   SZM   studentized test statistic for the zero mean (no intercept) case
   RZM   regression test statistic for the zero mean (no intercept) case
   SSM   studentized test statistic for the single mean (intercept) case
   RSM   regression test statistic for the single mean (intercept) case
   STR   studentized test statistic for the deterministic time trend case
   RTR   regression test statistic for the deterministic time trend case

   The values STR and RTR are allowed only when d = 1. The default value of type is SZM.
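For example, the following DATA step sketch evaluates the significance probability for a hypothetical studentized single-mean test statistic of -2.9 computed from 100 observations:

   data _null_;
      /* -2.9 and 100 are hypothetical values used for illustration */
      p = probdf( -2.9, 100, 1, "SSM" );
      put p= pvalue5.3;
   run;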

Details

Theoretical Background

When a time series has a unit root, the series is nonstationary and the ordinary least squares (OLS) estimator is not normally distributed. Dickey (1976) and Dickey and Fuller (1979) studied the limiting distribution of the OLS estimator of autoregressive models for time series with a simple unit root. Dickey, Hasza, and Fuller (1984) obtained the limiting distribution for time series with seasonal unit roots.

Consider the (p+1)th-order autoregressive time series

   $Y_t = \alpha_1 Y_{t-1} + \alpha_2 Y_{t-2} + \cdots + \alpha_{p+1} Y_{t-p-1} + e_t$

and its characteristic equation

   $m^{p+1} - \alpha_1 m^p - \alpha_2 m^{p-1} - \cdots - \alpha_{p+1} = 0$

If all the characteristic roots are less than 1 in absolute value, $Y_t$ is stationary. $Y_t$ is nonstationary if there is a unit root. If there is a unit root, the sum of the autoregressive parameters is 1, and hence you can test for a unit root by testing whether the sum of the autoregressive parameters is 1 or not. The no-intercept model is parameterized as

   $\nabla Y_t = \delta Y_{t-1} + \theta_1 \nabla Y_{t-1} + \cdots + \theta_p \nabla Y_{t-p} + e_t$

where $\nabla Y_t = Y_t - Y_{t-1}$ and

   $\delta = \alpha_1 + \cdots + \alpha_{p+1} - 1$

   $\theta_k = -(\alpha_{k+1} + \cdots + \alpha_{p+1})$

The estimators are obtained by regressing $\nabla Y_t$ on $Y_{t-1}, \nabla Y_{t-1}, \ldots, \nabla Y_{t-p}$. The t statistic of the ordinary least squares estimator of $\delta$ is the test statistic for the unit root test.

If the type argument value specifies a test for a nonzero mean (intercept case), the autoregressive model includes a mean term $\alpha_0$. If the type argument value specifies a test for a time trend, the model also includes a time trend term and the model is as follows:

   $\nabla Y_t = \alpha_0 + \gamma t + \delta Y_{t-1} + \theta_1 \nabla Y_{t-1} + \cdots + \theta_p \nabla Y_{t-p} + e_t$

For testing for a seasonal unit root, consider the multiplicative model

   $(1 - \alpha_d B^d)(1 - \theta_1 B - \cdots - \theta_p B^p) Y_t = e_t$

Let $\nabla_d Y_t \equiv Y_t - Y_{t-d}$. The test statistic is calculated in the following steps:

1. Regress $\nabla_d Y_t$ on $\nabla_d Y_{t-1}, \ldots, \nabla_d Y_{t-p}$ to obtain the initial estimators $\hat{\theta}_i$ and compute residuals $\hat{e}_t$. Under the null hypothesis that $\alpha_d = 1$, $\hat{\theta}_i$ are consistent estimators of $\theta_i$.

2. Regress $\hat{e}_t$ on $(1 - \hat{\theta}_1 B - \cdots - \hat{\theta}_p B^p) Y_{t-d}, \nabla_d Y_{t-1}, \ldots, \nabla_d Y_{t-p}$ to obtain estimates of $\delta = \alpha_d - 1$ and $\theta_i - \hat{\theta}_i$.

The t ratio for the estimate of $\delta$ produced by the second step is used as a test statistic for testing for a seasonal unit root. The estimates of $\theta_i$ are obtained by adding the estimates of $\theta_i - \hat{\theta}_i$ from the second step to $\hat{\theta}_i$ from the first step.

The series $(1 - B^d) Y_t$ is assumed to be stationary, where d is the value of the third argument to the PROBDF function.

If the series is an ARMA process, a large value of p might be desirable in order to obtain a reliable test statistic. To determine an appropriate value for p, see Said and Dickey (1984).

Test Statistics

The Dickey-Fuller test is used to test the null hypothesis that the time series exhibits a lag d unit root against the alternative of stationarity. The PROBDF function computes the probability of observing a test statistic more extreme than x under the assumption that the null hypothesis is true. You should reject the unit root hypothesis when PROBDF returns a small (significant) probability value.

There are several different versions of the Dickey-Fuller test. The PROBDF function supports six versions, as selected by the type argument. Specify the type value that corresponds to the way that you calculated the test statistic x.

The last two characters of the type value specify the kind of regression model used to compute the Dickey-Fuller test statistic. The meanings of the last two characters of the type value are as follows:

ZM
   zero mean or no-intercept case. The test statistic x is assumed to be computed from the regression model
      $y_t = \alpha_d y_{t-d} + e_t$

SM
   single mean or intercept case. The test statistic x is assumed to be computed from the regression model
      $y_t = \alpha_0 + \alpha_d y_{t-d} + e_t$

TR
   intercept and deterministic time trend case. The test statistic x is assumed to be computed from the regression model
      $y_t = \alpha_0 + \gamma t + \alpha_1 y_{t-1} + e_t$

The first character of the type value specifies whether the regression test statistic or the studentized test statistic is used. Let $\hat{\alpha}_d$ be the estimated regression coefficient for the dth lag of the series, and let $se_{\hat{\alpha}}$ be the standard error of $\hat{\alpha}_d$. The meanings of the first character of the type value are as follows:

R
   the regression-coefficient-based test statistic. The test statistic is
      $x = n(\hat{\alpha}_d - 1)$

S
   the studentized test statistic. The test statistic is
      $x = \frac{\hat{\alpha}_d - 1}{se_{\hat{\alpha}}}$

See Dickey and Fuller (1979), Dickey, Hasza, and Fuller (1984), and Hamilton (1994) for more information about the Dickey-Fuller test null distribution. The preceding formulas are for the basic Dickey-Fuller test. The PROBDF function can also be used for the augmented Dickey-Fuller test, in which the error term $e_t$ is modeled as an autoregressive process; however, the test statistic is computed somewhat differently for the augmented Dickey-Fuller test. See Dickey, Hasza, and Fuller (1984) and Hamilton (1994) for information about seasonal and nonseasonal augmented Dickey-Fuller tests.

The PROBDF function is calculated from approximating functions fit to empirical quantiles that are produced by a Monte Carlo simulation that employs $10^8$ replications for each simulation. Separate simulations were performed for selected values of n and for d = 1, 2, 4, 6, 12 (where n and d are the second and third arguments to the PROBDF function).

The maximum error of the PROBDF function is approximately $\pm 10^{-3}$ for d in the set (1,2,4,6,12) and can be slightly larger for other d values. (Because the number of simulation replications used to produce the PROBDF function is much greater than the 60,000 replications used by Dickey and Fuller (1979) and Dickey, Hasza, and Fuller (1984), the PROBDF function can be expected to produce results that are substantially more accurate than the critical values reported in those papers.)

Examples

Suppose the data set TEST contains 104 observations of the time series variable Y, and you want to test the null hypothesis that there exists a lag 4 seasonal unit root in the Y series. The following statements illustrate how to perform the single-mean Dickey-Fuller regression coefficient test using PROC REG and PROBDF.


   data test1;
      set test;
      y4 = lag4(y);
   run;

   proc reg data=test1 outest=alpha;
      model y = y4 / noprint;
   run;

   data _null_;
      set alpha;
      x = 100 * ( y4 - 1 );
      p = probdf( x, 100, 4, "RSM" );
      put p= pvalue5.3;
   run;

To perform the augmented Dickey-Fuller test, regress the differences of the series on lagged differences and on the lagged value of the series, and compute the test statistic from the regression coefficient for the lagged series. The following statements illustrate how to perform the single-mean augmented Dickey-Fuller studentized test for a simple unit root using PROC REG and PROBDF:

   data test1;
      set test;
      yl  = lag(y);
      yd  = dif(y);
      yd1 = lag1(yd);
      yd2 = lag2(yd);
      yd3 = lag3(yd);
      yd4 = lag4(yd);
   run;

   proc reg data=test1 outest=alpha covout;
      model yd = yl yd1-yd4 / noprint;
   run;

   data _null_;
      set alpha;
      retain a;
      if _type_ = 'PARMS' then a = yl - 1;
      if _type_ = 'COV' & _NAME_ = 'YL' then do;
         x = a / sqrt(yl);
         p = probdf( x, 99, 1, "SSM" );
         put p= pvalue5.3;
      end;
   run;

The %DFTEST macro provides an easier way to perform Dickey-Fuller tests. The following statements perform the same tests as the preceding example:

   %dftest( test, y, ar=4 );
   %put p=&dftest;


References

Dickey, D. A. (1976), "Estimation and Testing of Nonstationary Time Series," Unpublished Ph.D. Thesis, Iowa State University, Ames.

Dickey, D. A. and Fuller, W. A. (1979), "Distribution of the Estimators for Autoregressive Time Series with a Unit Root," Journal of the American Statistical Association, 74, 427–431.

Dickey, D. A., Hasza, D. P., and Fuller, W. A. (1984), "Testing for Unit Roots in Seasonal Time Series," Journal of the American Statistical Association, 79, 355–367.

Hamilton, J. D. (1994), Time Series Analysis, Princeton, NJ: Princeton University Press.

Microsoft Excel 2000 Online Help, Redmond, WA: Microsoft Corp.

Pankratz, A. (1983), Forecasting with Univariate Box-Jenkins Models: Concepts and Cases, New York: John Wiley & Sons.

Said, S. E. and Dickey, D. A. (1984), "Testing for Unit Roots in ARMA Models of Unknown Order," Biometrika, 71, 599–607.

Taylor, J. M. G. (1986), "The Retransformed Mean After a Fitted Power Transformation," Journal of the American Statistical Association, 81, 114–118.


Chapter 6

Nonlinear Optimization Methods

Contents
   Overview
   Options
   Details of Optimization Algorithms
      Overview
      Choosing an Optimization Algorithm
      Algorithm Descriptions
   Remote Monitoring
   ODS Table Names
   References

Overview

Several SAS/ETS procedures (COUNTREG, ENTROPY, MDC, QLIM, UCM, and VARMAX) use the nonlinear optimization (NLO) subsystem to perform nonlinear optimization. This chapter describes the options of the NLO system and some technical details of the available optimization methods. Note that not all options have been implemented for all procedures that use the NLO subsystem. You should check each procedure chapter for more details about which options are available.

Options

The following table summarizes the options available in the NLO system.

Table 6.1  NLO Options

Option          Description

Optimization Specifications
TECHNIQUE=      minimization technique
UPDATE=         update technique
LINESEARCH=     line-search method
LSPRECISION=    line-search precision
HESCAL=         type of Hessian scaling
INHESSIAN=      start for approximated Hessian
RESTART=        iteration number for update restart

Termination Criteria Specifications
MAXFUNC=        maximum number of function calls
MAXITER=        maximum number of iterations
MINITER=        minimum number of iterations
MAXTIME=        upper limit seconds of CPU time
ABSCONV=        absolute function convergence criterion
ABSFCONV=       absolute function convergence criterion
ABSGCONV=       absolute gradient convergence criterion
ABSXCONV=       absolute parameter convergence criterion
FCONV=          relative function convergence criterion
FCONV2=         relative function convergence criterion
GCONV=          relative gradient convergence criterion
XCONV=          relative parameter convergence criterion
FSIZE=          used in FCONV, GCONV criterion
XSIZE=          used in XCONV criterion

Step Length Options
DAMPSTEP=       damped steps in line search
MAXSTEP=        maximum trust region radius
INSTEP=         initial trust region radius

Printed Output Options
PALL            display (almost) all printed optimization-related output
PHISTORY        display optimization history
PHISTPARMS      display parameter estimates in each iteration
PSHORT          reduce some default optimization-related output
PSUMMARY        reduce most default optimization-related output
NOPRINT         suppress all printed optimization-related output

Remote Monitoring Options
SOCKET=         specify the fileref for remote monitoring

These options are described in alphabetical order.

ABSCONV=r
ABSTOL=r
   specifies an absolute function convergence criterion. For minimization, termination requires $f(\theta^{(k)}) \le r$. The default value of r is the negative square root of the largest double-precision value, which serves only as a protection against overflows.


ABSFCONV=r[n]
ABSFTOL=r[n]
   specifies an absolute function convergence criterion. For all techniques except NMSIMP, termination requires a small change of the function value in successive iterations:

      $|f(\theta^{(k-1)}) - f(\theta^{(k)})| \le r$

   The same formula is used for the NMSIMP technique, but $\theta^{(k)}$ is defined as the vertex with the lowest function value, and $\theta^{(k-1)}$ is defined as the vertex with the highest function value in the simplex. The default value is r = 0. The optional integer value n specifies the number of successive iterations for which the criterion must be satisfied before the process can be terminated.

ABSGCONV=r[n]
ABSGTOL=r[n]
   specifies an absolute gradient convergence criterion. Termination requires the maximum absolute gradient element to be small:

      $\max_j |g_j(\theta^{(k)})| \le r$

   This criterion is not used by the NMSIMP technique. The default value is r = 1E-5. The optional integer value n specifies the number of successive iterations for which the criterion must be satisfied before the process can be terminated.

ABSXCONV=r[n]
ABSXTOL=r[n]
   specifies an absolute parameter convergence criterion. For all techniques except NMSIMP, termination requires a small Euclidean distance between successive parameter vectors,

      $\| \theta^{(k)} - \theta^{(k-1)} \|_2 \le r$

   For the NMSIMP technique, termination requires either a small length $\alpha^{(k)}$ of the vertices of a restart simplex,

      $\alpha^{(k)} \le r$

   or a small simplex size,

      $\delta^{(k)} \le r$

   where the simplex size $\delta^{(k)}$ is defined as the L1 distance from the simplex vertex $\theta^{(k)}$ with the smallest function value to the other n simplex points $\theta_l^{(k)} \ne \theta^{(k)}$:

      $\delta^{(k)} = \sum_{\theta_l \ne \theta^{(k)}} \| \theta_l^{(k)} - \theta^{(k)} \|_1$

   The default is r = 1E-8 for the NMSIMP technique and r = 0 otherwise. The optional integer value n specifies the number of successive iterations for which the criterion must be satisfied before the process can terminate.


DAMPSTEP[=r]
   specifies that the initial step length value $\alpha^{(0)}$ for each line search (used by the QUANEW, HYQUAN, CONGRA, or NEWRAP technique) cannot be larger than r times the step length value used in the former iteration. If the DAMPSTEP option is specified but r is not specified, the default is r = 2. The DAMPSTEP=r option can prevent the line-search algorithm from repeatedly stepping into regions where some objective functions are difficult to compute or where they could lead to floating point overflows during the computation of objective functions and their derivatives. The DAMPSTEP=r option can save time-costly function calls during the line searches of objective functions that result in very small steps.

FCONV=r[n]
FTOL=r[n]
   specifies a relative function convergence criterion. For all techniques except NMSIMP, termination requires a small relative change of the function value in successive iterations,

      $\frac{|f(\theta^{(k)}) - f(\theta^{(k-1)})|}{\max(|f(\theta^{(k-1)})|,\ \mathrm{FSIZE})} \le r$

   where FSIZE is defined by the FSIZE= option. The same formula is used for the NMSIMP technique, but $\theta^{(k)}$ is defined as the vertex with the lowest function value, and $\theta^{(k-1)}$ is defined as the vertex with the highest function value in the simplex. The default value may depend on the procedure. In most cases, you can use the PALL option to find it.

FCONV2=r[n]
FTOL2=r[n]
   specifies another function convergence criterion. For all techniques except NMSIMP, termination requires a small predicted reduction

      $df^{(k)} \approx f(\theta^{(k)}) - f(\theta^{(k)} + s^{(k)})$

   of the objective function. The predicted reduction

      $df^{(k)} = -g^{(k)T} s^{(k)} - \frac{1}{2} s^{(k)T} H^{(k)} s^{(k)} = -\frac{1}{2} s^{(k)T} g^{(k)} \le r$

   is computed by approximating the objective function f by the first two terms of the Taylor series and substituting the Newton step

      $s^{(k)} = -[H^{(k)}]^{-1} g^{(k)}$

   For the NMSIMP technique, termination requires a small standard deviation of the function values of the n+1 simplex vertices $\theta_l^{(k)}$, $l = 0, \ldots, n$,

      $\sqrt{\frac{1}{n+1} \sum_l \left[ f(\theta_l^{(k)}) - \bar{f}(\theta^{(k)}) \right]^2} \le r$

   where $\bar{f}(\theta^{(k)}) = \frac{1}{n+1} \sum_l f(\theta_l^{(k)})$. If there are $n_{act}$ boundary constraints active at $\theta^{(k)}$, the mean and standard deviation are computed only for the $n + 1 - n_{act}$ unconstrained vertices.


   The default value is r = 1E-6 for the NMSIMP technique and r = 0 otherwise. The optional integer value n specifies the number of successive iterations for which the criterion must be satisfied before the process can terminate.

FSIZE=r
   specifies the FSIZE parameter of the relative function and relative gradient termination criteria. The default value is r = 0. For more details, see the FCONV= and GCONV= options.

GCONV=r[n]
GTOL=r[n]
   specifies a relative gradient convergence criterion. For all techniques except CONGRA and NMSIMP, termination requires that the normalized predicted function reduction is small,

      $\frac{g(\theta^{(k)})^T [H^{(k)}]^{-1} g(\theta^{(k)})}{\max(|f(\theta^{(k)})|,\ \mathrm{FSIZE})} \le r$

   where FSIZE is defined by the FSIZE= option. For the CONGRA technique (where a reliable Hessian estimate H is not available), the following criterion is used:

      $\frac{\| g(\theta^{(k)}) \|_2^2 \ \| s(\theta^{(k)}) \|_2}{\| g(\theta^{(k)}) - g(\theta^{(k-1)}) \|_2 \ \max(|f(\theta^{(k)})|,\ \mathrm{FSIZE})} \le r$

   This criterion is not used by the NMSIMP technique. The default value is r = 1E-8. The optional integer value n specifies the number of successive iterations for which the criterion must be satisfied before the process can terminate.

HESCAL=0|1|2|3
HS=0|1|2|3
   specifies the scaling version of the Hessian matrix used in NRRIDG, TRUREG, NEWRAP, or DBLDOG optimization. If HS is not equal to 0, the first iteration and each restart iteration set the diagonal scaling matrix $D^{(0)} = \mathrm{diag}(d_i^{(0)})$:

      $d_i^{(0)} = \sqrt{\max(|H_{i,i}^{(0)}|,\ \epsilon)}$

   where $H_{i,i}^{(0)}$ are the diagonal elements of the Hessian. In every other iteration, the diagonal scaling matrix $D^{(0)} = \mathrm{diag}(d_i^{(0)})$ is updated depending on the HS option:

   HS=0   specifies that no scaling is done.

   HS=1   specifies the Moré (1978) scaling update:

             $d_i^{(k+1)} = \max\left( d_i^{(k)},\ \sqrt{\max(|H_{i,i}^{(k)}|,\ \epsilon)} \right)$

   HS=2   specifies the Dennis, Gay, and Welsch (1981) scaling update:

             $d_i^{(k+1)} = \max\left( 0.6\, d_i^{(k)},\ \sqrt{\max(|H_{i,i}^{(k)}|,\ \epsilon)} \right)$

   HS=3   specifies that $d_i$ is reset in each iteration:

             $d_i^{(k+1)} = \sqrt{\max(|H_{i,i}^{(k)}|,\ \epsilon)}$


   In each scaling update, $\epsilon$ is the relative machine precision. The default value is HS=0. Scaling of the Hessian can be time consuming in the case where general linear constraints are active.

INHESSIAN[=r]
INHESS[=r]
   specifies how the initial estimate of the approximate Hessian is defined for the quasi-Newton techniques QUANEW and DBLDOG. There are two alternatives:

   * If you do not use the r specification, the initial estimate of the approximate Hessian is set to the Hessian at $\theta^{(0)}$.

   * If you do use the r specification, the initial estimate of the approximate Hessian is set to the multiple of the identity matrix rI.

   By default, if you do not specify the option INHESSIAN=r, the initial estimate of the approximate Hessian is set to the multiple of the identity matrix rI, where the scalar r is computed from the magnitude of the initial gradient.

INSTEP=r
   reduces the length of the first trial step during the line search of the first iterations. For highly nonlinear objective functions, such as the EXP function, the default initial radius of the trust-region algorithm TRUREG or DBLDOG or the default step length of the line-search algorithms can result in arithmetic overflows. If this occurs, you should specify decreasing values of 0 < r < 1 such as INSTEP=1E-1, INSTEP=1E-2, INSTEP=1E-4, and so on, until the iteration starts successfully.

   * For trust-region algorithms (TRUREG, DBLDOG), the INSTEP= option specifies a factor r > 0 for the initial radius $\Delta^{(0)}$ of the trust region. The default initial trust-region radius is the length of the scaled gradient. This step corresponds to the default radius factor of r = 1.

   * For line-search algorithms (NEWRAP, CONGRA, QUANEW), the INSTEP= option specifies an upper bound for the initial step length for the line search during the first five iterations. The default initial step length is r = 1.

   * For the Nelder-Mead simplex algorithm, using TECH=NMSIMP, the INSTEP=r option defines the size of the start simplex.

LINESEARCH=i
LIS=i
   specifies the line-search method for the CONGRA, QUANEW, and NEWRAP optimization techniques. Refer to Fletcher (1987) for an introduction to line-search techniques. The value of i can be 1, ..., 8. For CONGRA, QUANEW, and NEWRAP, the default value is i = 2.

   LIS=1   specifies a line-search method that needs the same number of function and gradient calls for cubic interpolation and cubic extrapolation; this method is similar to one used by the Harwell subroutine library.

   LIS=2   specifies a line-search method that needs more function than gradient calls for quadratic and cubic interpolation and cubic extrapolation; this method is implemented as shown in Fletcher (1987) and can be modified to an exact line search by using the LSPRECISION= option.

   LIS=3   specifies a line-search method that needs the same number of function and gradient calls for cubic interpolation and cubic extrapolation; this method is implemented as shown in Fletcher (1987) and can be modified to an exact line search by using the LSPRECISION= option.

   LIS=4   specifies a line-search method that needs the same number of function and gradient calls for stepwise extrapolation and cubic interpolation.

   LIS=5   specifies a line-search method that is a modified version of LIS=4.

   LIS=6   specifies golden section line search (Polak 1971), which uses only function values for linear approximation.

   LIS=7   specifies bisection line search (Polak 1971), which uses only function values for linear approximation.

   LIS=8   specifies the Armijo line-search technique (Polak 1971), which uses only function values for linear approximation.

LSPRECISION=r
LSP=r
   specifies the degree of accuracy that should be obtained by the line-search algorithms LIS=2 and LIS=3. Usually an imprecise line search is inexpensive and successful. For more difficult optimization problems, a more precise and expensive line search may be necessary (Fletcher 1987). The second line-search method (which is the default for the NEWRAP, QUANEW, and CONGRA techniques) and the third line-search method approach exact line search for small LSPRECISION= values. If you have numerical problems, you should try to decrease the LSPRECISION= value to obtain a more precise line search. The default values are shown in the following table.

   Table 6.2  Line Search Precision Defaults

   TECH=     UPDATE=        LSP default
   QUANEW    DBFGS, BFGS    r = 0.4
   QUANEW    DDFP, DFP      r = 0.06
   CONGRA    all            r = 0.1
   NEWRAP    no update      r = 0.9

   For more details, refer to Fletcher (1987).

MAXFUNC=i
MAXFU=i
   specifies the maximum number i of function calls in the optimization process. The default values are

   * TRUREG, NRRIDG, NEWRAP: 125
   * QUANEW, DBLDOG: 500
   * CONGRA: 1000
   * NMSIMP: 3000


   Note that the optimization can terminate only after completing a full iteration. Therefore, the number of function calls that is actually performed can exceed the number that is specified by the MAXFUNC= option.

MAXITER=i
MAXIT=i
   specifies the maximum number i of iterations in the optimization process. The default values are

   * TRUREG, NRRIDG, NEWRAP: 50
   * QUANEW, DBLDOG: 200
   * CONGRA: 400
   * NMSIMP: 1000

   These default values are also valid when i is specified as a missing value.

MAXSTEP=r[n]
   specifies an upper bound for the step length of the line-search algorithms during the first n iterations. By default, r is the largest double-precision value and n is the largest integer available. Setting this option can improve the speed of convergence for the CONGRA, QUANEW, and NEWRAP techniques.

MAXTIME=r
   specifies an upper limit of r seconds of CPU time for the optimization process. The default value is the largest floating-point double representation of your computer. Note that the time specified by the MAXTIME= option is checked only once at the end of each iteration. Therefore, the actual running time can be much longer than that specified by the MAXTIME= option. The actual running time includes the rest of the time needed to finish the iteration and the time needed to generate the output of the results.

MINITER=i
MINIT=i
   specifies the minimum number of iterations. The default value is 0. If you request more iterations than are actually needed for convergence to a stationary point, the optimization algorithms can behave strangely. For example, the effect of rounding errors can prevent the algorithm from continuing for the required number of iterations.

NOPRINT
   suppresses the output. (See procedure documentation for availability of this option.)

PALL
   displays all optional output for optimization. (See procedure documentation for availability of this option.)

PHISTORY
   displays the optimization history. (See procedure documentation for availability of this option.)

PHISTPARMS
   displays parameter estimates in each iteration. (See procedure documentation for availability of this option.)

PINIT
   displays the initial values and derivatives (if available). (See procedure documentation for availability of this option.)

PSHORT
   restricts the amount of default output. (See procedure documentation for availability of this option.)

PSUMMARY
   restricts the amount of default displayed output to a short form of iteration history and notes, warnings, and errors. (See procedure documentation for availability of this option.)

RESTART=i > 0
REST=i > 0

   specifies that the QUANEW or CONGRA algorithm is restarted with a steepest descent/ascent search direction after, at most, i iterations. Default values are as follows:

   * CONGRA with UPDATE=PB: restart is performed automatically, so i is not used.
   * CONGRA with UPDATE not equal to PB: i = min(10n, 80), where n is the number of parameters.
   * QUANEW: i is the largest integer available.

SOCKET=fileref
   specifies the fileref that contains the information needed for remote monitoring. See the section "Remote Monitoring" on page 181 for more details.

TECHNIQUE=value
TECH=value
   specifies the optimization technique. Valid values are as follows:

   * CONGRA performs a conjugate-gradient optimization, which can be more precisely specified with the UPDATE= option and modified with the LINESEARCH= option. When you specify this option, UPDATE=PB by default.

   * DBLDOG performs a version of double-dogleg optimization, which can be more precisely specified with the UPDATE= option. When you specify this option, UPDATE=DBFGS by default.

   * NMSIMP performs a Nelder-Mead simplex optimization.

   * NONE does not perform any optimization. This option can be used as follows:


     - to perform a grid search without optimization
     - to compute estimates and predictions that cannot be obtained efficiently with any of the optimization techniques

   * NEWRAP performs a Newton-Raphson optimization that combines a line-search algorithm with ridging. The line-search algorithm LIS=2 is the default method.

   * NRRIDG performs a Newton-Raphson optimization with ridging.

   * QUANEW performs a quasi-Newton optimization, which can be defined more precisely with the UPDATE= option and modified with the LINESEARCH= option. This is the default estimation method.

   * TRUREG performs a trust region optimization.

UPDATE=method
UPD=method
   specifies the update method for the QUANEW, DBLDOG, or CONGRA optimization technique. Not every update method can be used with each optimizer. Valid methods are as follows:

   * BFGS performs the original Broyden, Fletcher, Goldfarb, and Shanno (BFGS) update of the inverse Hessian matrix.

   * DBFGS performs the dual BFGS update of the Cholesky factor of the Hessian matrix. This is the default update method.

   * DDFP performs the dual Davidon, Fletcher, and Powell (DFP) update of the Cholesky factor of the Hessian matrix.

   * DFP performs the original DFP update of the inverse Hessian matrix.

   * PB performs the automatic restart update method of Powell (1977) and Beale (1972).

   * FR performs the Fletcher-Reeves update (Fletcher 1987).

   * PR performs the Polak-Ribiere update (Fletcher 1987).

   * CD performs a conjugate-descent update of Fletcher (1987).


XCONV=r[n]
XTOL=r[n]
   specifies the relative parameter convergence criterion. For all techniques except NMSIMP, termination requires a small relative parameter change in subsequent iterations,

      $\frac{\max_j |\theta_j^{(k)} - \theta_j^{(k-1)}|}{\max(|\theta_j^{(k)}|,\ |\theta_j^{(k-1)}|,\ \mathrm{XSIZE})} \le r$

   For the NMSIMP technique, the same formula is used, but $\theta_j^{(k)}$ is defined as the vertex with the lowest function value and $\theta_j^{(k-1)}$ is defined as the vertex with the highest function value in the simplex. The default value is r = 1E-8 for the NMSIMP technique and r = 0 otherwise. The optional integer value n specifies the number of successive iterations for which the criterion must be satisfied before the process can be terminated.

XSIZE=r > 0
   specifies the XSIZE parameter of the relative parameter termination criterion. The default value is r = 0. For more detail, see the XCONV= option.
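As an illustration of how these options are supplied in practice, the following sketch (the data set and model are hypothetical) sets the optimization technique, the iteration limit, and the gradient convergence criterion directly on the PROC ENTROPY statement, in the same way as the server-side example in the section "Remote Monitoring" later in this chapter:

   proc entropy data=one tech=congra maxiter=500 gconv=1e-6;
      model y1 = x1 x2 x3;
   run;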

Details of Optimization Algorithms

Overview

There are several optimization techniques available. You can choose a particular optimizer with the TECH=name option in the PROC statement or NLOPTIONS statement.

Table 6.3  Optimization Techniques

Algorithm                                         TECH=
trust region method                               TRUREG
Newton-Raphson method with line search            NEWRAP
Newton-Raphson method with ridging                NRRIDG
quasi-Newton methods (DBFGS, DDFP, BFGS, DFP)     QUANEW
double-dogleg method (DBFGS, DDFP)                DBLDOG
conjugate gradient methods (PB, FR, PR, CD)       CONGRA
Nelder-Mead simplex method                        NMSIMP

No algorithm for optimizing general nonlinear functions exists that always finds the global optimum for a general nonlinear minimization problem in a reasonable amount of time. Since no single optimization technique is invariably superior to others, NLO provides a variety of optimization techniques that work well in various circumstances. However, you can devise problems for which none of the techniques in NLO can find the correct solution. Moreover, nonlinear optimization can be computationally expensive in terms of time and memory, so you must be careful when matching an algorithm to a problem.

All optimization techniques in NLO use O(n²) memory except the conjugate gradient methods, which use only O(n) memory and are designed to optimize problems with many parameters. These iterative techniques require repeated computation of the following:

* the function value (optimization criterion)
* the gradient vector (first-order partial derivatives)
* for some techniques, the (approximate) Hessian matrix (second-order partial derivatives)

However, since each of the optimizers requires different derivatives, some computational efficiencies can be gained. Table 6.4 shows, for each optimization technique, which derivatives are required. (FOD means that first-order derivatives or the gradient is computed; SOD means that second-order derivatives or the Hessian is computed.)

Table 6.4  Optimization Computations

Algorithm   FOD   SOD
TRUREG      x     x
NEWRAP      x     x
NRRIDG      x     x
QUANEW      x     -
DBLDOG      x     -
CONGRA      x     -
NMSIMP      -     -

Each optimization method employs one or more convergence criteria that determine when it has converged. The various termination criteria are listed and described in the previous section. An algorithm is considered to have converged when any one of the convergence criteria is satisfied. For example, under the default settings, the QUANEW algorithm converges if ABSGCONV < 1E-5, FCONV < 10^(-FDIGITS), or GCONV < 1E-8.

Choosing an Optimization Algorithm

The factors that go into choosing a particular optimization technique for a particular problem are complex and might involve trial and error.

For many optimization problems, computing the gradient takes more computer time than computing the function value, and computing the Hessian sometimes takes much more computer time and memory than computing the gradient, especially when there are many decision variables. Unfortunately, optimization techniques that do not use some kind of Hessian approximation usually require many more iterations than techniques that do use a Hessian matrix, and as a result the total run time of these techniques is often longer. Techniques that do not use the Hessian also tend to be less reliable. For example, they can more easily terminate at stationary points rather than at global optima.

A few general remarks about the various optimization techniques follow.

* The second-derivative methods TRUREG, NEWRAP, and NRRIDG are best for small problems where the Hessian matrix is not expensive to compute. Sometimes the NRRIDG algorithm can be faster than the TRUREG algorithm, but TRUREG can be more stable. The NRRIDG algorithm requires only one matrix with n(n+1)/2 double words; TRUREG and NEWRAP require two such matrices.

* The first-derivative methods QUANEW and DBLDOG are best for medium-sized problems where the objective function and the gradient are much faster to evaluate than the Hessian. The QUANEW and DBLDOG algorithms, in general, require more iterations than TRUREG, NRRIDG, and NEWRAP, but each iteration can be much faster. The QUANEW and DBLDOG algorithms require only the gradient to update an approximate Hessian, and they require slightly less memory than TRUREG or NEWRAP (essentially one matrix with n(n+1)/2 double words). QUANEW is the default optimization method.

* The first-derivative method CONGRA is best for large problems where the objective function and the gradient can be computed much faster than the Hessian and where too much memory is required to store the (approximate) Hessian. The CONGRA algorithm, in general, requires more iterations than QUANEW or DBLDOG, but each iteration can be much faster. Since CONGRA requires only a factor of n double-word memory, many large applications can be solved only by CONGRA.

* The no-derivative method NMSIMP is best for small problems where derivatives are not continuous or are very difficult to compute.

Algorithm Descriptions

Some details about the optimization techniques are as follows.

Trust Region Optimization (TRUREG)

The trust region method uses the gradient $g(\theta^{(k)})$ and the Hessian matrix $H(\theta^{(k)})$; thus, it requires that the objective function $f(\theta)$ have continuous first- and second-order derivatives inside the feasible region.

The trust region method iteratively optimizes a quadratic approximation to the nonlinear objective function within a hyperelliptic trust region with radius $\Delta$ that constrains the step size that corresponds to the quality of the quadratic approximation. The trust region method is implemented using Dennis, Gay, and Welsch (1981), Gay (1983), and Moré and Sorensen (1983).

The trust region method performs well for small- to medium-sized problems, and it does not need many function, gradient, and Hessian calls. However, if the computation of the Hessian matrix is computationally expensive, one of the (dual) quasi-Newton or conjugate gradient algorithms may be more efficient.


Newton-Raphson Optimization with Line Search (NEWRAP)

The NEWRAP technique uses the gradient $g(\theta^{(k)})$ and the Hessian matrix $H(\theta^{(k)})$; thus, it requires that the objective function have continuous first- and second-order derivatives inside the feasible region. If second-order derivatives are computed efficiently and precisely, the NEWRAP method can perform well for medium-sized to large problems, and it does not need many function, gradient, and Hessian calls.

This algorithm uses a pure Newton step when the Hessian is positive definite and when the Newton step reduces the value of the objective function successfully. Otherwise, a combination of ridging and line search is performed to compute successful steps. If the Hessian is not positive definite, a multiple of the identity matrix is added to the Hessian matrix to make it positive definite.

In each iteration, a line search is performed along the search direction to find an approximate optimum of the objective function. The default line-search method uses quadratic interpolation and cubic extrapolation (LIS=2).

Newton-Raphson Ridge Optimization (NRRIDG)

The NRRIDG technique uses the gradient $g(\theta^{(k)})$ and the Hessian matrix $H(\theta^{(k)})$; thus, it requires that the objective function have continuous first- and second-order derivatives inside the feasible region.

This algorithm uses a pure Newton step when the Hessian is positive definite and when the Newton step reduces the value of the objective function successfully. If at least one of these two conditions is not satisfied, a multiple of the identity matrix is added to the Hessian matrix.

The NRRIDG method performs well for small- to medium-sized problems, and it does not require many function, gradient, and Hessian calls. However, if the computation of the Hessian matrix is computationally expensive, one of the (dual) quasi-Newton or conjugate gradient algorithms might be more efficient.

Since the NRRIDG technique uses an orthogonal decomposition of the approximate Hessian, each iteration of NRRIDG can be slower than that of the NEWRAP technique, which works with Cholesky decomposition. Usually, however, NRRIDG requires fewer iterations than NEWRAP.

Quasi-Newton Optimization (QUANEW)

The (dual) quasi-Newton method uses the gradient $g(\theta^{(k)})$, and it does not need to compute second-order derivatives since they are approximated. It works well for medium to moderately large optimization problems where the objective function and the gradient are much faster to compute than the Hessian; but, in general, it requires more iterations than the TRUREG, NEWRAP, and NRRIDG techniques, which compute second-order derivatives. QUANEW is the default optimization algorithm because it provides an appropriate balance between the speed and stability required for most nonlinear mixed model applications.

The QUANEW technique is one of the following, depending upon the value of the UPDATE= option.

* the original quasi-Newton algorithm, which updates an approximation of the inverse Hessian


* the dual quasi-Newton algorithm, which updates the Cholesky factor of an approximate Hessian (default)

You can specify four update formulas with the UPDATE= option:

* DBFGS performs the dual Broyden, Fletcher, Goldfarb, and Shanno (BFGS) update of the Cholesky factor of the Hessian matrix. This is the default.

* DDFP performs the dual Davidon, Fletcher, and Powell (DFP) update of the Cholesky factor of the Hessian matrix.

* BFGS performs the original BFGS update of the inverse Hessian matrix.

* DFP performs the original DFP update of the inverse Hessian matrix.

In each iteration, a line search is performed along the search direction to find an approximate optimum. The default line-search method uses quadratic interpolation and cubic extrapolation to obtain a step size $\alpha$ satisfying the Goldstein conditions. One of the Goldstein conditions can be violated if the feasible region defines an upper limit of the step size. Violating the left-side Goldstein condition can affect the positive definiteness of the quasi-Newton update. In that case, either the update is skipped or the iterations are restarted with an identity matrix, resulting in the steepest descent or ascent search direction. You can specify line-search algorithms other than the default with the LIS= option.

The QUANEW algorithm performs its own line-search technique. All options and parameters (except the INSTEP= option) that control the line search in the other algorithms do not apply here. In several applications, large steps in the first iterations are troublesome. You can use the INSTEP= option to impose an upper bound for the step size $\alpha$ during the first five iterations. You can also use the INHESSIAN[=r] option to specify a different starting approximation for the Hessian. If you specify only the INHESSIAN option, the Cholesky factor of a (possibly ridged) finite difference approximation of the Hessian is used to initialize the quasi-Newton update process. The values of the LCSINGULAR=, LCEPSILON=, and LCDEACT= options, which control the processing of linear and boundary constraints, are valid only for the quadratic programming subroutine used in each iteration of the QUANEW algorithm.

Double-Dogleg Optimization (DBLDOG)

The double-dogleg optimization method combines the ideas of the quasi-Newton and trust region methods. In each iteration, the double-dogleg algorithm computes the step $s^{(k)}$ as the linear combination of the steepest descent or ascent search direction $s_1^{(k)}$ and a quasi-Newton search direction $s_2^{(k)}$:

   $s^{(k)} = \alpha_1 s_1^{(k)} + \alpha_2 s_2^{(k)}$

The step is requested to remain within a prespecified trust region radius; see Fletcher (1987, p. 107). Thus, the DBLDOG subroutine uses the dual quasi-Newton update but does not perform a line search. You can specify two update formulas with the UPDATE= option:


* DBFGS performs the dual Broyden, Fletcher, Goldfarb, and Shanno update of the Cholesky factor of the Hessian matrix. This is the default.

* DDFP performs the dual Davidon, Fletcher, and Powell update of the Cholesky factor of the Hessian matrix.

The double-dogleg optimization technique works well for medium to moderately large optimization problems where the objective function and the gradient are much faster to compute than the Hessian. The implementation is based on Dennis and Mei (1979) and Gay (1983), but it is extended for dealing with boundary and linear constraints. The DBLDOG technique generally requires more iterations than the TRUREG, NEWRAP, or NRRIDG technique, which requires second-order derivatives; however, each of the DBLDOG iterations is computationally cheap. Furthermore, the DBLDOG technique requires only gradient calls for the update of the Cholesky factor of an approximate Hessian.

Conjugate Gradient Optimization (CONGRA)

Second-order derivatives are not required by the CONGRA algorithm and are not even approximated. The CONGRA algorithm can be expensive in function and gradient calls, but it requires only O(n) memory for unconstrained optimization. In general, many iterations are required to obtain a precise solution, but each of the CONGRA iterations is computationally cheap. You can specify four different update formulas for generating the conjugate directions by using the UPDATE= option:

* PB performs the automatic restart update method of Powell (1977) and Beale (1972). This is the default.

* FR performs the Fletcher-Reeves update (Fletcher 1987).

* PR performs the Polak-Ribiere update (Fletcher 1987).

* CD performs a conjugate-descent update of Fletcher (1987).

The default, UPDATE=PB, behaved best in most test examples. You are advised to avoid the option UPDATE=CD, which behaved worst in most test examples.

The CONGRA subroutine should be used for optimization problems with large n. For the unconstrained or boundary constrained case, CONGRA requires only O(n) bytes of working memory, whereas all other optimization methods require O(n²) bytes of working memory. During n successive iterations, uninterrupted by restarts or changes in the working set, the conjugate gradient algorithm computes a cycle of n conjugate search directions. In each iteration, a line search is performed along the search direction to find an approximate optimum of the objective function. The default line-search method uses quadratic interpolation and cubic extrapolation to obtain a step size $\alpha$ satisfying the Goldstein conditions. One of the Goldstein conditions can be violated if the feasible region defines an upper limit for the step size. Other line-search algorithms can be specified with the LIS= option.


Nelder-Mead Simplex Optimization (NMSIMP)

The Nelder-Mead simplex method does not use any derivatives and does not assume that the objective function has continuous derivatives. The objective function itself needs to be continuous. This technique is quite expensive in the number of function calls, and it might be unable to generate precise results for n much greater than 40.

The original Nelder-Mead simplex algorithm is implemented and extended to boundary constraints. This algorithm does not compute the objective for infeasible points, but it changes the shape of the simplex by adapting to the nonlinearities of the objective function, which contributes to an increased speed of convergence. It uses special termination criteria.

Remote Monitoring

The SAS/EmMonitor is an application for Windows that enables you to monitor, from your PC, a CPU-intensive application performed by the NLO subsystem on a remote server, and to stop it if necessary.

On the server side, a FILENAME statement assigns a fileref to a SOCKET-type device that defines the IP address of the client and the port number for listening. The fileref is then specified in the SOCKET= option in the PROC statement to control the EmMonitor. The following statements show an example of server-side statements for PROC ENTROPY.

   data one;
      do t = 1 to 10;
         x1 = 5 * ranuni(456);
         x2 = 10 * ranuni( 456);
         x3 = 2 * rannor(1456);
         e1 = rannor(1456);
         e2 = rannor(4560);
         tmp1 = 0.5 * e1 - 0.1 * e2;
         tmp2 = -0.1 * e1 - 0.3 * e2;
         y1 = 7 + 8.5*x1 + 2*x2 + tmp1;
         y2 = -3 + -2*x1 + x2 + 3*x3 + tmp2;
         output;
      end;
   run;

   filename sock socket 'your.pc.address.com:6943';

   proc entropy data=one tech=tr gmenm gconv=2.e-5 socket=sock;
      model y1 = x1 x2 x3;
   run;

On the client side, the EmMonitor application is started with the following syntax:

   EmMonitor options

The options are:


-p port_number    defines the port number
-t title          defines the title of the EmMonitor window
-k                keeps the monitor alive when the iteration is completed
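For example, a hypothetical client-side invocation that listens on the default port, labels the window, and keeps it open after the run completes might look like the following:

   EmMonitor -p 6943 -t "PROC ENTROPY job" -k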

The default port number is 6943. The server does not need to be running when you start the EmMonitor, and you can start or dismiss the server at any time during the iteration process. You only need to remember the port number. Starting the PC client, or closing it prematurely, does not have any effect on the server side. In other words, the iteration process continues until one of the criteria for termination is met.

Figure 6.1 through Figure 6.4 show screenshots of the application on the client side.

Figure 6.1 Graph Tab Group 0

Figure 6.2 Graph Tab Group 1


Figure 6.3 Status Tab

Figure 6.4 Options Tab

ODS Table Names

The NLO subsystem assigns a name to each table it creates. You can use these names when using the Output Delivery System (ODS) to select tables and create output data sets. Not all tables are created by all SAS/ETS procedures that use the NLO subsystem. You should check the procedure chapter for more details. The names are listed in the following table.


Table 6.5 ODS Tables Produced by the NLO Subsystem

ODS Table Name               Description
ConvergenceStatus            Convergence status
InputOptions                 Input options
IterHist                     Iteration history
IterStart                    Iteration start
IterStop                     Iteration stop
Lagrange                     Lagrange multipliers at the solution
LinCon                       Linear constraints
LinConDel                    Deleted linear constraints
LinConSol                    Linear constraints at the solution
ParameterEstimatesResults    Estimates at the results
ParameterEstimatesStart      Estimates at the start of the iterations
ProblemDescription           Problem description
ProjGrad                     Projected gradients
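For example, the following sketch uses the ODS OUTPUT statement to capture the convergence status and the final parameter estimates in data sets; it assumes the PROC ENTROPY setup from the remote-monitoring example:

   ods output ConvergenceStatus=cstatus
              ParameterEstimatesResults=estimates;

   proc entropy data=one tech=tr;
      model y1 = x1 x2 x3;
   run;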


References

Beale, E. M. L. (1972), "A Derivation of Conjugate Gradients," in Numerical Methods for Nonlinear Optimization, ed. F. A. Lootsma, London: Academic Press.

Dennis, J. E., Gay, D. M., and Welsch, R. E. (1981), "An Adaptive Nonlinear Least-Squares Algorithm," ACM Transactions on Mathematical Software, 7, 348-368.

Dennis, J. E. and Mei, H. H. W. (1979), "Two New Unconstrained Optimization Algorithms Which Use Function and Gradient Values," Journal of Optimization Theory and Applications, 28, 453-482.

Dennis, J. E. and Schnabel, R. B. (1983), Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Englewood Cliffs, NJ: Prentice-Hall.

Fletcher, R. (1987), Practical Methods of Optimization, Second Edition, Chichester: John Wiley & Sons.

Gay, D. M. (1983), "Subroutines for Unconstrained Minimization," ACM Transactions on Mathematical Software, 9, 503-524.

Moré, J. J. (1978), "The Levenberg-Marquardt Algorithm: Implementation and Theory," in Lecture Notes in Mathematics 630, ed. G. A. Watson, Berlin-Heidelberg-New York: Springer-Verlag.

Moré, J. J. and Sorensen, D. C. (1983), "Computing a Trust-Region Step," SIAM Journal on Scientific and Statistical Computing, 4, 553-572.

Polak, E. (1971), Computational Methods in Optimization, New York: Academic Press.

Powell, M. J. D. (1977), "Restart Procedures for the Conjugate Gradient Method," Mathematical Programming, 12, 241-254.


Part II

Procedure Reference


Chapter 7

The ARIMA Procedure

Contents
Overview: ARIMA Procedure . . . . . . . . . . . . . . . . . . . . . . .  190
Getting Started: ARIMA Procedure . . . . . . . . . . . . . . . . . . .  191
    The Three Stages of ARIMA Modeling . . . . . . . . . . . . . . . .  191
    Identification Stage . . . . . . . . . . . . . . . . . . . . . . .  192
    Estimation and Diagnostic Checking Stage . . . . . . . . . . . . .  197
    Forecasting Stage . . . . . . . . . . . . . . . . . . . . . . . . .  204
    Using ARIMA Procedure Statements . . . . . . . . . . . . . . . . .  206
    General Notation for ARIMA Models . . . . . . . . . . . . . . . . .  207
    Stationarity . . . . . . . . . . . . . . . . . . . . . . . . . . .  210
    Differencing . . . . . . . . . . . . . . . . . . . . . . . . . . .  210
    Subset, Seasonal, and Factored ARMA Models . . . . . . . . . . . .  211
    Input Variables and Regression with ARMA Errors . . . . . . . . . .  213
    Intervention Models and Interrupted Time Series . . . . . . . . . .  216
    Rational Transfer Functions and Distributed Lag Models . . . . . .  218
    Forecasting with Input Variables . . . . . . . . . . . . . . . . .  219
    Data Requirements . . . . . . . . . . . . . . . . . . . . . . . . .  221
Syntax: ARIMA Procedure . . . . . . . . . . . . . . . . . . . . . . . .  221
    Functional Summary . . . . . . . . . . . . . . . . . . . . . . . .  221
    PROC ARIMA Statement . . . . . . . . . . . . . . . . . . . . . . .  224
    BY Statement . . . . . . . . . . . . . . . . . . . . . . . . . . .  227
    IDENTIFY Statement . . . . . . . . . . . . . . . . . . . . . . . .  228
    ESTIMATE Statement . . . . . . . . . . . . . . . . . . . . . . . .  232
    OUTLIER Statement . . . . . . . . . . . . . . . . . . . . . . . . .  236
    FORECAST Statement . . . . . . . . . . . . . . . . . . . . . . . .  237
Details: ARIMA Procedure . . . . . . . . . . . . . . . . . . . . . . .  239
    The Inverse Autocorrelation Function . . . . . . . . . . . . . . .  239
    The Partial Autocorrelation Function . . . . . . . . . . . . . . .  240
    The Cross-Correlation Function . . . . . . . . . . . . . . . . . .  240
    The ESACF Method . . . . . . . . . . . . . . . . . . . . . . . . .  241
    The MINIC Method . . . . . . . . . . . . . . . . . . . . . . . . .  243
    The SCAN Method . . . . . . . . . . . . . . . . . . . . . . . . . .  244
    Stationarity Tests . . . . . . . . . . . . . . . . . . . . . . . .  246
    Prewhitening . . . . . . . . . . . . . . . . . . . . . . . . . . .  246
    Identifying Transfer Function Models . . . . . . . . . . . . . . .  247


    Missing Values and Autocorrelations . . . . . . . . . . . . . . . .  248
    Estimation Details . . . . . . . . . . . . . . . . . . . . . . . .  248
    Specifying Inputs and Transfer Functions . . . . . . . . . . . . .  253
    Initial Values . . . . . . . . . . . . . . . . . . . . . . . . . .  254
    Stationarity and Invertibility . . . . . . . . . . . . . . . . . .  255
    Naming of Model Parameters . . . . . . . . . . . . . . . . . . . .  256
    Missing Values and Estimation and Forecasting . . . . . . . . . . .  256
    Forecasting Details . . . . . . . . . . . . . . . . . . . . . . . .  257
    Forecasting Log Transformed Data . . . . . . . . . . . . . . . . .  258
    Specifying Series Periodicity . . . . . . . . . . . . . . . . . . .  259
    Detecting Outliers . . . . . . . . . . . . . . . . . . . . . . . .  260
    OUT= Data Set . . . . . . . . . . . . . . . . . . . . . . . . . . .  262
    OUTCOV= Data Set . . . . . . . . . . . . . . . . . . . . . . . . .  263
    OUTEST= Data Set . . . . . . . . . . . . . . . . . . . . . . . . .  264
    OUTMODEL= SAS Data Set . . . . . . . . . . . . . . . . . . . . . .  267
    OUTSTAT= Data Set . . . . . . . . . . . . . . . . . . . . . . . . .  268
    Printed Output . . . . . . . . . . . . . . . . . . . . . . . . . .  269
    ODS Table Names . . . . . . . . . . . . . . . . . . . . . . . . . .  272
    Statistical Graphics . . . . . . . . . . . . . . . . . . . . . . .  273
Examples: ARIMA Procedure . . . . . . . . . . . . . . . . . . . . . . .  277
    Example 7.1: Simulated IMA Model . . . . . . . . . . . . . . . . .  277
    Example 7.2: Seasonal Model for the Airline Series . . . . . . . .  282
    Example 7.3: Model for Series J Data from Box and Jenkins . . . . .  289
    Example 7.4: An Intervention Model for Ozone Data . . . . . . . . .  298
    Example 7.5: Using Diagnostics to Identify ARIMA Models . . . . . .  300
    Example 7.6: Detection of Level Changes in the Nile River Data . .  305
    Example 7.7: Iterative Outlier Detection . . . . . . . . . . . . .  307
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  309

Overview: ARIMA Procedure

The ARIMA procedure analyzes and forecasts equally spaced univariate time series data, transfer function data, and intervention data by using the autoregressive integrated moving-average (ARIMA) or autoregressive moving-average (ARMA) model. An ARIMA model predicts a value in a response time series as a linear combination of its own past values, past errors (also called shocks or innovations), and current and past values of other time series.

The ARIMA approach was first popularized by Box and Jenkins, and ARIMA models are often referred to as Box-Jenkins models. The general transfer function model employed by the ARIMA procedure was discussed by Box and Tiao (1975). When an ARIMA model includes other time series as input variables, the model is sometimes referred to as an ARIMAX model. Pankratz (1991) refers to the ARIMAX model as dynamic regression.


The ARIMA procedure provides a comprehensive set of tools for univariate time series model identification, parameter estimation, and forecasting, and it offers great flexibility in the kinds of ARIMA or ARIMAX models that can be analyzed. The ARIMA procedure supports seasonal, subset, and factored ARIMA models; intervention or interrupted time series models; multiple regression analysis with ARMA errors; and rational transfer function models of any complexity.

The design of PROC ARIMA closely follows the Box-Jenkins strategy for time series modeling with features for the identification, estimation and diagnostic checking, and forecasting steps of the Box-Jenkins method.

Before you use PROC ARIMA, you should be familiar with Box-Jenkins methods, and you should exercise care and judgment when you use the ARIMA procedure. The ARIMA class of time series models is complex and powerful, and some degree of expertise is needed to use them correctly.

Getting Started: ARIMA Procedure

This section outlines the use of the ARIMA procedure and gives a cursory description of the ARIMA modeling process for readers who are less familiar with these methods.

The Three Stages of ARIMA Modeling

The analysis performed by PROC ARIMA is divided into three stages, corresponding to the stages described by Box and Jenkins (1976).

1. In the identification stage, you use the IDENTIFY statement to specify the response series and identify candidate ARIMA models for it. The IDENTIFY statement reads time series that are to be used in later statements, possibly differencing them, and computes autocorrelations, inverse autocorrelations, partial autocorrelations, and cross-correlations. Stationarity tests can be performed to determine if differencing is necessary. The analysis of the IDENTIFY statement output usually suggests one or more ARIMA models that could be fit. Options enable you to test for stationarity and tentative ARMA order identification.

2. In the estimation and diagnostic checking stage, you use the ESTIMATE statement to specify the ARIMA model to fit to the variable specified in the previous IDENTIFY statement and to estimate the parameters of that model. The ESTIMATE statement also produces diagnostic statistics to help you judge the adequacy of the model. Significance tests for parameter estimates indicate whether some terms in the model might be unnecessary. Goodness-of-fit statistics aid in comparing this model to others. Tests for white noise residuals indicate whether the residual series contains additional information that might be used by a more complex model. The OUTLIER statement provides another useful tool to check whether the currently estimated model accounts for all the variation in the series. If the diagnostic tests indicate problems with the model, you try another model and then repeat the estimation and diagnostic checking stage.


3. In the forecasting stage, you use the FORECAST statement to forecast future values of the time series and to generate confidence intervals for these forecasts from the ARIMA model produced by the preceding ESTIMATE statement.

These three steps are explained further and illustrated through an extended example in the following sections.
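The following minimal sketch shows how the three stages map onto PROC ARIMA statements; the data set, variable, and model orders are hypothetical placeholders:

   proc arima data=test;
      /* 1. identification: difference the series and study its correlations */
      identify var=sales(1) nlag=24;
      /* 2. estimation and diagnostic checking: fit a candidate ARMA(1,1) */
      estimate p=1 q=1;
      /* 3. forecasting: 12 periods ahead with confidence limits */
      forecast lead=12 id=date interval=month out=results;
   run;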

Identification Stage

Suppose you have a variable called SALES that you want to forecast. The following example illustrates ARIMA modeling and forecasting by using a simulated data set TEST that contains a time series SALES generated by an ARIMA(1,1,1) model. The output produced by this example is explained in the following sections. The simulated SALES series is shown in Figure 7.1.

   proc sgplot data=test;
      scatter y=sales x=date;
   run;

Figure 7.1 Simulated ARIMA(1,1,1) Series SALES


Using the IDENTIFY Statement

You first specify the input data set in the PROC ARIMA statement. Then, you use an IDENTIFY statement to read in the SALES series and analyze its correlation properties. You do this by using the following statements:

   proc arima data=test;
      identify var=sales nlag=24;
   run;

Descriptive Statistics

The IDENTIFY statement first prints descriptive statistics for the SALES series. This part of the IDENTIFY statement output is shown in Figure 7.2.

Figure 7.2 IDENTIFY Statement Descriptive Statistics Output

                 The ARIMA Procedure

              Name of Variable = sales

   Mean of Working Series        137.3662
   Standard Deviation            17.36385
   Number of Observations             100

Autocorrelation Function Plots

The IDENTIFY statement next produces a panel of plots used for its autocorrelation and trend analysis. The panel contains the following plots:

- the time series plot of the series
- the sample autocorrelation function plot (ACF)
- the sample inverse autocorrelation function plot (IACF)
- the sample partial autocorrelation function plot (PACF)

This correlation analysis panel is shown in Figure 7.3.


Figure 7.3 Correlation Analysis of SALES

These autocorrelation function plots show the degree of correlation with past values of the series as a function of the number of periods in the past (that is, the lag) at which the correlation is computed. The NLAG= option controls the number of lags for which the autocorrelations are shown. By default, the autocorrelation functions are plotted to lag 24. Most books on time series analysis explain how to interpret the autocorrelation and the partial autocorrelation plots. See the section “The Inverse Autocorrelation Function” on page 239 for a discussion of the inverse autocorrelation plots. By examining these plots, you can judge whether the series is stationary or nonstationary. In this case, a visual inspection of the autocorrelation function plot indicates that the SALES series is nonstationary, since the ACF decays very slowly. For more formal stationarity tests, use the STATIONARITY= option. (See the section “Stationarity” on page 210.)
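For a formal check, a sketch such as the following requests augmented Dickey-Fuller tests for the SALES series; the autoregressive orders shown are an assumed choice:

   proc arima data=test;
      identify var=sales stationarity=(adf=(0,1,2));
   run;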

White Noise Test

The last part of the default IDENTIFY statement output is the check for white noise. This is an approximate statistical test of the hypothesis that none of the autocorrelations of the series up to a given lag are significantly different from 0. If this is true for all lags, then there is no information in the series to model, and no ARIMA model is needed for the series. The autocorrelations are checked in groups of six, and the number of lags checked depends on the NLAG= option. The check for white noise output is shown in Figure 7.4.

Figure 7.4 IDENTIFY Statement Check for White Noise

          Autocorrelation Check for White Noise

   To        Chi-              Pr >
   Lag     Square      DF     ChiSq
     6     426.44       6    <.0001
    12     547.82      12    <.0001
    18     554.70      18    <.0001
    24     585.73      24    <.0001

PLOTS< (global-plot-options) > < = plot-request < (options) > >
PLOTS< (global-plot-options) > < = (plot-request < (options) > < ... plot-request < (options) > >) >

controls the plots produced through ODS Graphics. When you specify only one plot request, you can omit the parentheses around the plot request. Here are some examples:

   plots=none
   plots=all
   plots(unpack)=series(corr crosscorr)
   plots(only)=(series(corr crosscorr) residual(normal smooth))


You must enable ODS Graphics before requesting plots, as shown in the following statements. For general information about ODS Graphics, see Chapter 21, "Statistical Graphics Using ODS" (SAS/STAT User's Guide). If you have enabled ODS Graphics but do not specify any specific plot request, then the default plots associated with each of the PROC ARIMA statements used in the program are produced. The old line printer plots are suppressed when ODS Graphics is enabled.

   ods graphics on;

   proc arima;
      identify var=y(1 12);
      estimate q=(1)(12) noint;
   run;

Since no specific plot is requested in this program, the default plots associated with the identification and estimation stages are produced.

Global Plot Options: The global-plot-options apply to all relevant plots generated by the ARIMA procedure. The following global-plot-options are supported:

ONLY
suppresses the default plots. Only the plots specifically requested are produced.

UNPACK
breaks a graphic that is otherwise paneled into individual component plots.

Specific Plot Options: The following list describes the specific plots and their options.

ALL
produces all plots appropriate for the particular analysis.

NONE
suppresses all plots.

SERIES(< series-plot-options >)
produces plots associated with the identification stage of the modeling. The panel plots corresponding to the CORR and CROSSCORR options are produced by default. The following series-plot-options are available:

ACF
produces the plot of autocorrelations.

ALL
produces all the plots associated with the identification stage.


CORR
produces a panel of plots that are useful in the trend and correlation analysis of the series. The panel consists of the following:

- the time series plot
- the series-autocorrelation plot
- the series-partial-autocorrelation plot
- the series-inverse-autocorrelation plot

CROSSCORR
produces panels of cross-correlation plots.

IACF
produces the plot of inverse-autocorrelations.

PACF
produces the plot of partial-autocorrelations.

RESIDUAL(< residual-plot-options >)
produces the residuals plots. The residual correlation and normality diagnostic panels are produced by default. The following residual-plot-options are available:

ACF
produces the plot of residual autocorrelations.

ALL
produces all the residual diagnostics plots appropriate for the particular analysis.

CORR
produces a summary panel of the residual correlation diagnostics that consists of the following:

- the residual-autocorrelation plot
- the residual-partial-autocorrelation plot
- the residual-inverse-autocorrelation plot
- a plot of Ljung-Box white-noise test p-values at different lags

HIST
produces the histogram of the residuals.

IACF
produces the plot of residual inverse-autocorrelations.

NORMAL
produces a summary panel of the residual normality diagnostics that consists of the following:


- histogram of the residuals
- normal quantile plot of the residuals

PACF
produces the plot of residual partial-autocorrelations.

QQ
produces the normal quantile plot of the residuals.

SMOOTH
produces a scatter plot of the residuals against time, which has an overlaid smooth fit.

WN
produces the plot of Ljung-Box white-noise test p-values at different lags.

FORECAST(< forecast-plot-options >)
produces the forecast plots in the forecasting stage. The forecast-only plot that shows the multistep forecasts in the forecast region is produced by default. The following forecast-plot-options are available:

ALL
produces the forecast-only plot as well as the forecast plot.

FORECAST
produces a plot that shows the one-step-ahead forecasts as well as the multistep-ahead forecasts.

FORECASTONLY
produces a plot that shows only the multistep-ahead forecasts in the forecast region.

OUT=SAS-data-set
specifies a SAS data set to which the forecasts are output. If different OUT= specifications appear in the PROC ARIMA and FORECAST statements, the one in the FORECAST statement is used.

BY Statement

BY variables ;

A BY statement can be used in the ARIMA procedure to process a data set in groups of observations defined by the BY variables. Note that all IDENTIFY, ESTIMATE, and FORECAST statements specified are applied to all BY groups.


Because of the need to make data-based model selections, BY-group processing is not usually done with PROC ARIMA. You usually want to use different models for the different series contained in different BY groups, and the PROC ARIMA BY statement does not let you do this.

Using a BY statement imposes certain restrictions. The BY statement must appear before the first RUN statement. If a BY statement is used, the input data must come from the data set specified in the PROC statement; that is, no input data sets can be specified in IDENTIFY statements.

When a BY statement is used with PROC ARIMA, interactive processing applies only to the first BY group. Once the end of the PROC ARIMA step is reached, all ARIMA statements specified are executed again for each of the remaining BY groups in the input data set.
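For example, the following sketch fits one common model to every BY group; the data set SALES_BY_REGION and the variable REGION are hypothetical:

   proc arima data=sales_by_region;
      by region;
      identify var=sales(1);
      estimate p=1;
      forecast lead=12 out=fcst;
   run;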

IDENTIFY Statement

IDENTIFY VAR=variable options ;

The IDENTIFY statement specifies the time series to be modeled, differences the series if desired, and computes statistics to help identify models to fit. Use an IDENTIFY statement for each time series that you want to model. If other time series are to be used as inputs in a subsequent ESTIMATE statement, they must be listed in a CROSSCORR= list in the IDENTIFY statement.

The following options are used in the IDENTIFY statement. The VAR= option is required.

ALPHA=significance-level
The ALPHA= option specifies the significance level for tests in the IDENTIFY statement. The default is 0.05.

CENTER
centers each time series by subtracting its sample mean. The analysis is done on the centered data. Later, when forecasts are generated, the mean is added back. Note that centering is done after differencing. The CENTER option is normally used in conjunction with the NOCONSTANT option of the ESTIMATE statement.

CLEAR
deletes all old models. This option is useful when you want to delete old models so that the input variables are not prewhitened. (See the section "Prewhitening" on page 246 for more information.)

CROSSCORR=variable (d11, d12, ..., d1k)
CROSSCORR=(variable (d11, d12, ..., d1k) ... variable (d21, d22, ..., d2k))
names the variables cross-correlated with the response variable given by the VAR= specification. Each variable name can be followed by a list of differencing lags in parentheses, the same as for the VAR= specification. If differencing is specified for a variable in the CROSSCORR= list, the differenced series is cross-correlated with the VAR= option series, and the differenced series is used when the ESTIMATE statement INPUT= option refers to the variable.
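For example, the following sketch differences the response and two hypothetical candidate inputs before cross-correlating them:

   proc arima data=test;
      identify var=y(1) crosscorr=(x1(1) x2(1)) nlag=12;
   run;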


DATA=SAS-data-set

specifies the input SAS data set that contains the time series. If the DATA= option is omitted, the DATA= data set specified in the PROC ARIMA statement is used; if the DATA= option is omitted from the PROC ARIMA statement as well, the most recently created data set is used.

ESACF
computes the extended sample autocorrelation function and uses these estimates to tentatively identify the autoregressive and moving-average orders of mixed models. The ESACF option generates two tables. The first table displays extended sample autocorrelation estimates, and the second table displays probability values that can be used to test the significance of these estimates. The P=(p_min : p_max) and Q=(q_min : q_max) options determine the size of the table.

The autoregressive and moving-average orders are tentatively identified by finding a triangular pattern in which all values are insignificant. The ARIMA procedure finds these patterns based on the IDENTIFY statement ALPHA= option and displays possible recommendations for the orders. The following code generates an ESACF table with dimensions of p=(0:7) and q=(0:8).

   proc arima data=test;
      identify var=x esacf p=(0:7) q=(0:8);
   run;

See the section "The ESACF Method" on page 241 for more information.

MINIC
uses information criteria or penalty functions to provide tentative ARMA order identification. The MINIC option generates a table that contains the computed information criterion associated with various ARMA model orders. The PERROR=(p_ε,min : p_ε,max) option determines the range of the autoregressive model orders used to estimate the error series. The P=(p_min : p_max) and Q=(q_min : q_max) options determine the size of the table. The ARMA orders are tentatively identified by those orders that minimize the information criterion. The following statements generate a MINIC table with default dimensions of p=(0:5) and q=(0:5) and with the error series estimated by an autoregressive model with an order, p_ε, that minimizes the AIC in the range from 8 to 11.

   proc arima data=test;
      identify var=x minic perror=(8:11);
   run;

See the section "The MINIC Method" on page 243 for more information.

NLAG=number
indicates the number of lags to consider in computing the autocorrelations and cross-correlations. To obtain preliminary estimates of an ARIMA(p, d, q) model, the NLAG= value must be at least p + q + d. The number of observations must be greater than or equal to the NLAG= value. The default value for NLAG= is 24 or one-fourth the number of observations, whichever is less. Even when the NLAG= value is specified, it can be adjusted to conform to the data set.


NOMISS

uses only the first continuous sequence of data with no missing values. By default, all observations are used.

NOPRINT
suppresses the normal printout (including the correlation plots) generated by the IDENTIFY statement.

OUTCOV=SAS-data-set
writes the autocovariances, autocorrelations, inverse autocorrelations, partial autocorrelations, and cross covariances to an output SAS data set. If the OUTCOV= option is not specified, no covariance output data set is created. See the section "OUTCOV= Data Set" on page 263 for more information.

P=(p_min : p_max)
see the ESACF, MINIC, and SCAN options for details.

PERROR=(p_ε,min : p_ε,max)
determines the range of the autoregressive model orders used to estimate the error series in MINIC, a tentative ARMA order identification method. See the section "The MINIC Method" on page 243 for more information. By default p_ε,min is set to p_max and p_ε,max is set to p_max + q_max, where p_max and q_max are the maximum settings of the P= and Q= options on the IDENTIFY statement.

Q=(q_min : q_max)
see the ESACF, MINIC, and SCAN options for details.

SCAN
computes estimates of the squared canonical correlations and uses these estimates to tentatively identify the autoregressive and moving-average orders of mixed models. The SCAN option generates two tables. The first table displays squared canonical correlation estimates, and the second table displays probability values that can be used to test the significance of these estimates. The P=(p_min : p_max) and Q=(q_min : q_max) options determine the size of each table. The autoregressive and moving-average orders are tentatively identified by finding a rectangular pattern in which all values are insignificant. The ARIMA procedure finds these patterns based on the IDENTIFY statement ALPHA= option and displays possible recommendations for the orders. The following code generates a SCAN table with default dimensions of p=(0:5) and q=(0:5). The recommended orders are based on a significance level of 0.1.

   proc arima data=test;
      identify var=x scan alpha=0.1;
   run;

See the section "The SCAN Method" on page 244 for more information.


STATIONARITY=

performs stationarity tests. Stationarity tests can be used to determine whether differencing terms should be included in the model specification. In each stationarity test, the autoregressive orders can be specified by a range, test=ar_max, or as a list of values, test=(ar_1, ..., ar_n), where test is ADF, PP, or RW. The default is (0,1,2). See the section "Stationarity Tests" on page 246 for more information.

STATIONARITY=(ADF= AR orders DLAG=s)
STATIONARITY=(DICKEY= AR orders DLAG=s)
performs augmented Dickey-Fuller tests. If the DLAG=s option is specified with s greater than one, seasonal Dickey-Fuller tests are performed. The maximum allowable value of s is 12. The default value of s is 1. The following code performs augmented Dickey-Fuller tests with autoregressive orders 2 and 5.

   proc arima data=test;
      identify var=x stationarity=(adf=(2,5));
   run;

STATIONARITY=(PP= AR orders)
STATIONARITY=(PHILLIPS= AR orders)
performs Phillips-Perron tests. The following statements perform augmented Phillips-Perron tests with autoregressive orders ranging from 0 to 6.

   proc arima data=test;
      identify var=x stationarity=(pp=6);
   run;

STATIONARITY=(RW= AR orders)
STATIONARITY=(RANDOMWALK= AR orders)
performs random-walk-with-drift tests. The following statements perform random-walk-with-drift tests with autoregressive orders ranging from 0 to 2.

   proc arima data=test;
      identify var=x stationarity=(rw);
   run;

VAR=variable
VAR=variable (d1, d2, ..., dk)

names the variable that contains the time series to analyze. The VAR= option is required.

A list of differencing lags can be placed in parentheses after the variable name to request that the series be differenced at these lags. For example, VAR=X(1) takes the first differences of X. VAR=X(1,1) requests that X be differenced twice, both times with lag 1, producing a second difference series, which is (X_t - X_{t-1}) - (X_{t-1} - X_{t-2}) = X_t - 2X_{t-1} + X_{t-2}. VAR=X(2) differences X once at lag two (X_t - X_{t-2}).

If differencing is specified, it is the differenced series that is processed by any subsequent ESTIMATE statement.
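For example, the following sketch combines a first difference with a seasonal difference at lag 12, a common specification for monthly data; the data set and variable are hypothetical:

   proc arima data=test;
      identify var=sales(1,12);
   run;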


WHITENOISE=ST | IGNOREMISS

specifies the type of test statistic that is used in the white noise test of the series when the series contains missing values. If WHITENOISE=IGNOREMISS, the standard Ljung-Box test statistic is used. If WHITENOISE=ST, a modification of this statistic suggested by Stoffer and Toloi (1992) is used. The default is WHITENOISE=ST.

ESTIMATE Statement

< label: > ESTIMATE options ;

The ESTIMATE statement specifies an ARMA model or transfer function model for the response variable specified in the previous IDENTIFY statement, and produces estimates of its parameters. The ESTIMATE statement also prints diagnostic information by which to check the model. The label in the ESTIMATE statement is optional. Include an ESTIMATE statement for each model that you want to estimate. Options used in the ESTIMATE statement are described in the following sections.

Options for Defining the Model and Controlling Diagnostic Statistics

The following options are used to define the model to be estimated and to control the output that is printed.

ALTPARM
specifies the alternative parameterization of the overall scale of transfer functions in the model. See the section "Alternative Model Parameterization" on page 253 for details.

INPUT=variable
INPUT=( transfer-function variable ... )
specifies input variables and their transfer functions. The variables used on the INPUT= option must be included in the CROSSCORR= list in the previous IDENTIFY statement. If any differencing is specified in the CROSSCORR= list, then the differenced series is used as the input to the transfer function. The transfer function specification for an input variable is optional. If no transfer function is specified, the input variable enters the model as a simple regressor. If specified, the transfer function specification has the following syntax:

   S $ (L1,1, L1,2, ...)(L2,1, ...).../(Lj,1, ...)...

Here, S is a shift or lag of the input variable, the terms before the slash (/) are numerator factors, and the terms after the slash (/) are denominator factors of the transfer function. All three parts are optional. See the section "Specifying Inputs and Transfer Functions" on page 253 for details.
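For example, the following sketch specifies a hypothetical input X that enters the model after a shift of 3 periods with one numerator lag and one denominator lag:

   proc arima data=test;
      identify var=y crosscorr=x;
      estimate p=1 input=( 3$(1)/(1) x );
   run;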

METHOD=value
specifies the estimation method to use. METHOD=ML specifies the maximum likelihood method. METHOD=ULS specifies the unconditional least squares method. METHOD=CLS specifies the conditional least squares method. METHOD=CLS is the default. See the section "Estimation Details" on page 248 for more information.

NOCONSTANT
NOINT

suppresses the fitting of a constant (or intercept) parameter in the model. (That is, the parameter μ is omitted.)

NODF
estimates the variance by dividing the error sum of squares (SSE) by the number of residuals. The default is to divide the SSE by the number of residuals minus the number of free parameters in the model.

NOPRINT
suppresses the normal printout generated by the ESTIMATE statement. If the NOPRINT option is specified for the ESTIMATE statement, then any error and warning messages are printed to the SAS log.

P=order
P=(lag, ..., lag) ... (lag, ..., lag)
specifies the autoregressive part of the model. By default, no autoregressive parameters are fit. P=(l1, l2, ..., lk) defines a model with autoregressive parameters at the specified lags. P=order is equivalent to P=(1, 2, ..., order). A concatenation of parenthesized lists specifies a factored model. For example, P=(1,2,5)(6,12) specifies the autoregressive model

   (1 - \phi_{1,1}B - \phi_{1,2}B^2 - \phi_{1,3}B^5)(1 - \phi_{2,1}B^6 - \phi_{2,2}B^{12})

PLOT
plots the residual autocorrelation functions. The sample autocorrelation, the sample inverse autocorrelation, and the sample partial autocorrelation functions of the model residuals are plotted.

Q=order
Q=(lag, ..., lag) ... (lag, ..., lag)
specifies the moving-average part of the model. By default, no moving-average part is included in the model. Q=(l1, l2, ..., lk) defines a model with moving-average parameters at the specified lags. Q=order is equivalent to Q=(1, 2, ..., order). A concatenation of parenthesized lists specifies a factored model. The interpretation of factors and lags is the same as for the P= option.

WHITENOISE=ST | IGNOREMISS
specifies the type of test statistic that is used in the white noise test of the series when the series contains missing values. If WHITENOISE=IGNOREMISS, the standard Ljung-Box test statistic is used. If WHITENOISE=ST, a modification of this statistic suggested by Stoffer and Toloi (1992) is used. The default is WHITENOISE=ST.


Options for Output Data Sets

The following options are used to store results in SAS data sets:

OUTEST=SAS-data-set
writes the parameter estimates to an output data set. If the OUTCORR or OUTCOV option is used, the correlations or covariances of the estimates are also written to the OUTEST= data set. See the section "OUTEST= Data Set" on page 264 for a description of the OUTEST= output data set.

OUTCORR
writes the correlations of the parameter estimates to the OUTEST= data set.

OUTCOV
writes the covariances of the parameter estimates to the OUTEST= data set.

OUTMODEL=SAS-data-set
writes the model and parameter estimates to an output data set. If OUTMODEL= is not specified, no model output data set is created. See the section "OUTMODEL= SAS Data Set" on page 267 for a description of the OUTMODEL= output data set.

OUTSTAT=SAS-data-set
writes the model diagnostic statistics to an output data set. If OUTSTAT= is not specified, no statistics output data set is created. See the section "OUTSTAT= Data Set" on page 268 for a description of the OUTSTAT= output data set.

Options to Specify Parameter Values

The following options enable you to specify values for the model parameters. These options can provide starting values for the estimation process, or you can specify fixed parameters for use in the FORECAST stage and suppress the estimation process with the NOEST option. By default, the ARIMA procedure finds initial parameter estimates and uses these estimates as starting values in the iterative estimation process.

If values for any parameters are specified, values for all parameters should be given. The number of values given must agree with the model specifications.

AR=value ...
lists starting values for the autoregressive parameters. See the section "Initial Values" on page 254 for more information.

INITVAL=(initializer-spec variable ...)
specifies starting values for the parameters in the transfer function parts of the model. See the section "Initial Values" on page 254 for more information.

MA=value ...
lists starting values for the moving-average parameters. See the section "Initial Values" on page 254 for more information.


MU=value

specifies the MU parameter.

NOEST

uses the values specified with the AR=, MA=, INITVAL=, and MU= options as final parameter values. The estimation process is suppressed except for estimation of the residual variance. The specified parameter values are used directly by the next FORECAST statement. When NOEST is specified, standard errors, t values, and the correlations between estimates are displayed as 0 or missing. (The NOEST option is useful, for example, when you want to generate forecasts that correspond to a published model.)
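For example, the following sketch uses fixed, hypothetical parameter values to generate forecasts that correspond to a published AR(1) model:

   proc arima data=test;
      identify var=sales(1) noprint;
      estimate p=1 ar=0.8 mu=0.5 noest;
      forecast lead=12;
   run;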

Options to Control the Iterative Estimation Process

The following options can be used to control the iterative process of minimizing the error sum of squares or maximizing the log-likelihood function. These tuning options are not usually needed but can be useful if convergence problems arise.

BACKLIM=n
omits the specified number of initial residuals from the sum of squares or likelihood function. Omitting values can be useful for suppressing transients in transfer function models that are sensitive to start-up values.

CONVERGE=value
specifies the convergence criterion. Convergence is assumed when the largest change in the estimate for any parameter is less than the CONVERGE= option value. If the absolute value of the parameter estimate is greater than 0.01, the relative change is used; otherwise, the absolute change in the estimate is used. The default is CONVERGE=0.001.

DELTA=value
specifies the perturbation value for computing numerical derivatives. The default is DELTA=0.001.

GRID
prints the error sum of squares (SSE) or concentrated log-likelihood surface in a small grid of the parameter space around the final estimates. For each pair of parameters, the SSE is printed for the nine parameter-value combinations formed by the grid, with a center at the final estimates and with spacing given by the GRIDVAL= specification. The GRID option can help you judge whether the estimates are truly at the optimum, since the estimation process does not always converge. For models with a large number of parameters, the GRID option produces voluminous output.

GRIDVAL=number
controls the spacing in the grid printed by the GRID option. The default is GRIDVAL=0.005.

MAXITER=n
MAXIT=n
specifies the maximum number of iterations allowed. The default is MAXITER=50.


NOLS

begins the maximum likelihood or unconditional least squares iterations from the preliminary estimates rather than from the conditional least squares estimates that are produced after four iterations. See the section "Estimation Details" on page 248 for more information.

NOSTABLE
specifies that the autoregressive and moving-average parameter estimates for the noise part of the model not be restricted to the stationary and invertible regions, respectively. See the section "Stationarity and Invertibility" on page 255 for more information.

NOTFSTABLE
specifies that the parameter estimates for the denominator polynomial of the transfer function part of the model not be restricted to the stability region. See the section "Stationarity and Invertibility" on page 255 for more information.

PRINTALL
prints preliminary estimation results and the iterations in the final estimation process.

SINGULAR=value
specifies the criterion for checking singularity. If a pivot of a sweep operation is less than the SINGULAR= value, the matrix is deemed singular. Sweep operations are performed on the Jacobian matrix during final estimation and on the covariance matrix when preliminary estimates are obtained. The default is SINGULAR=1E-7.

OUTLIER Statement

OUTLIER options ;

The OUTLIER statement can be used to detect shifts in the level of the response series that are not accounted for by the previously estimated model. An ESTIMATE statement must precede the OUTLIER statement. The following options are used in the OUTLIER statement:

TYPE=ADDITIVE
TYPE=SHIFT
TYPE=TEMP (d1, ..., dk)
TYPE=(< ADDITIVE > < SHIFT > < TEMP (d1, ..., dk) >)

specifies the types of level shifts to search for. The default is TYPE=(ADDITIVE SHIFT), which requests searching for additive outliers and permanent level shifts. The option TEMP(d1, ..., dk) requests searching for temporary changes in the level of durations d1, ..., dk. These options can also be abbreviated as AO, LS, and TC.

ALPHA=significance-level
specifies the significance level for tests in the OUTLIER statement. The default is 0.05.

SIGMA=ROBUST | MSE

specifies the type of error variance estimate to use in the statistical tests performed during the outlier detection. SIGMA=MSE corresponds to the usual mean squared error (MSE) estimate, and SIGMA=ROBUST corresponds to a robust estimate of the error variance. The default is SIGMA=ROBUST.

MAXNUM=number

limits the number of outliers to search for. The default is MAXNUM=5.

MAXPCT=number
limits the number of outliers to search for according to a percentage of the series length. The default is MAXPCT=2. When both the MAXNUM= and MAXPCT= options are specified, the minimum of the two search numbers is used.

ID=Date-Time-ID-variable
specifies a SAS date, time, or datetime identification variable to label the detected outliers. This variable must be present in the input data set.

The following examples illustrate a few possibilities for the OUTLIER statement. The most basic usage, shown as follows, sets all the options to their default values.

   outlier;

That is, it is equivalent to

   outlier type=(ao ls) alpha=0.05 sigma=robust maxnum=5 maxpct=2;

The following statement requests a search for permanent level shifts and for temporary level changes of durations 6 and 12. The search is limited to at most three changes, and the significance level of the underlying tests is 0.001. MSE is used as the estimate of error variance. It also requests labeling of the detected shifts using an ID variable date.

   outlier type=(ls tc(6 12)) alpha=0.001 sigma=mse maxnum=3 ID=date;

FORECAST Statement

FORECAST options ;

The FORECAST statement generates forecast values for a time series by using the parameter estimates produced by the previous ESTIMATE statement. See the section "Forecasting Details" on page 257 for more information about calculating forecasts. The following options can be used in the FORECAST statement:

ALIGN=option

controls the alignment of SAS dates used to identify output observations. The ALIGN= option allows the following values: BEGINNING | BEG | B, MIDDLE | MID | M, and ENDING | END | E. BEGINNING is the default.

ALPHA=n
sets the size of the forecast confidence limits. The ALPHA= value must be between 0 and 1. When you specify ALPHA=α, the upper and lower confidence limits have a 1 - α confidence level. The default is ALPHA=0.05, which produces 95% confidence intervals. ALPHA values are rounded to the nearest hundredth.

BACK=n

specifies the number of observations before the end of the data where the multistep forecasts are to begin. The BACK= option value must be less than or equal to the number of observations minus the number of parameters. The default is BACK=0, which means that the forecast starts at the end of the available data. The end of the data is the last observation for which a noise value can be calculated. If there are no input series, the end of the data is the last nonmissing value of the response time series. If there are input series, this observation can precede the last nonmissing value of the response variable, since there may be missing values for some of the input series.

ID=variable
names a variable in the input data set that identifies the time periods associated with the observations. The ID= variable is used in conjunction with the INTERVAL= option to extrapolate ID values from the end of the input data to identify forecast periods in the OUT= data set. If the INTERVAL= option specifies an interval type, the ID variable must be a SAS date or datetime variable with the spacing between observations indicated by the INTERVAL= value. If the INTERVAL= option is not used, the last input value of the ID= variable is incremented by one for each forecast period to extrapolate the ID values for forecast observations.

INTERVAL=interval
INTERVAL=n
specifies the time interval between observations. See Chapter 4, "Date Intervals, Formats, and Functions," for information about valid INTERVAL= values. The value of the INTERVAL= option is used by PROC ARIMA to extrapolate the ID values for forecast observations and to check that the input data are in order with no missing periods. See the section "Specifying Series Periodicity" on page 259 for more details.

LEAD=n
specifies the number of multistep forecast values to compute. For example, if LEAD=10, PROC ARIMA forecasts for ten periods beginning with the end of the input series (or earlier if BACK= is specified). It is possible to obtain fewer than the requested number of forecasts if a transfer function model is specified and insufficient data are available to compute the forecast. The default is LEAD=24.

NOOUTALL
includes only the final forecast observations in the OUT= output data set, not the one-step forecasts for the data before the forecast period.

NOPRINT
suppresses the normal printout of the forecast and associated values.

OUT=SAS-data-set

writes the forecast (and other values) to an output data set. If OUT= is not specified, the OUT= data set specified in the PROC ARIMA statement is used. If OUT= is also not specified in the PROC ARIMA statement, no output data set is created. See the section "OUT= Data Set" on page 262 for more information.

PRINTALL
prints the FORECAST computation throughout the whole data set. The forecast values for the data before the forecast period (specified by the BACK= option) are one-step forecasts.

SIGSQ=value
specifies the variance term used in the formula for computing forecast standard errors and confidence limits. The default value is the variance estimate computed by the preceding ESTIMATE statement. This option is useful when you wish to generate forecast standard errors and confidence limits based on a published model. It would often be used in conjunction with the NOEST option in the preceding ESTIMATE statement.
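For example, the following sketch combines SIGSQ= with the NOEST option to reproduce forecasts and standard errors from a published model; all parameter values shown are hypothetical:

   proc arima data=test;
      identify var=y(1) noprint;
      estimate p=1 ar=0.9 mu=0 noest;
      forecast lead=12 sigsq=2.5 out=fcst;
   run;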

Details: ARIMA Procedure

The Inverse Autocorrelation Function

The sample inverse autocorrelation function (SIACF) plays much the same role in ARIMA modeling as the sample partial autocorrelation function (SPACF), but it generally indicates subset and seasonal autoregressive models better than the SPACF. Additionally, the SIACF can be useful for detecting over-differencing. If the data come from a nonstationary or nearly nonstationary model, the SIACF has the characteristics of a noninvertible moving average. Likewise, if the data come from a model with a noninvertible moving average, then the SIACF has nonstationary characteristics and therefore decays slowly. In particular, if the data have been over-differenced, the SIACF looks like a SACF from a nonstationary process.

The inverse autocorrelation function is not often discussed in textbooks, so a brief description is given here. More complete discussions can be found in Cleveland (1972), Chatfield (1980), and Priestley (1981).

Let W_t be generated by the ARMA(p, q) process

   \phi(B) W_t = \theta(B) a_t

where a_t is a white noise sequence. If \theta(B) is invertible (that is, if \theta considered as a polynomial in B has no roots less than or equal to 1 in magnitude), then the model

   \theta(B) Z_t = \phi(B) a_t

is also a valid ARMA(q, p) model. This model is sometimes referred to as the dual model. The autocorrelation function (ACF) of this dual model is called the inverse autocorrelation function (IACF) of the original model.


Notice that if the original model is a pure autoregressive model, then the IACF is an ACF that corresponds to a pure moving-average model. Thus, it cuts off sharply when the lag is greater than p; this behavior is similar to the behavior of the partial autocorrelation function (PACF).

The sample inverse autocorrelation function (SIACF) is estimated in the ARIMA procedure by the following steps. A high-order autoregressive model is fit to the data by means of the Yule-Walker equations. The order of the autoregressive model used to calculate the SIACF is the minimum of the NLAG= value and one-half the number of observations after differencing. The SIACF is then calculated as the autocorrelation function that corresponds to this autoregressive operator when treated as a moving-average operator. That is, the autoregressive coefficients are convolved with themselves and treated as autocovariances.

Under certain conditions, the sampling distribution of the SIACF can be approximated by the sampling distribution of the SACF of the dual model (Bhansali 1980). In the plots generated by ARIMA, the confidence limit marks (.) are located at ±2/√n. These limits bound an approximate 95% confidence interval for the hypothesis that the data are from a white noise process.
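The convolution step described above can be sketched in SAS/IML as follows; the fitted AR coefficients are hypothetical values, and PROC ARIMA's internal computation may differ in detail:

   proc iml;
      /* minimal sketch: SIACF from a fitted AR operator, here the
         hypothetical operator (1 - 0.6B + 0.2B^2) treated as an MA operator */
      c = {1, -0.6, 0.2};
      k = nrow(c);
      gamma = j(k, 1, 0);            /* autocovariances of the dual process */
      do h = 0 to k-1;               /* convolve the coefficients with themselves */
         do i = 1 to k-h;
            gamma[h+1] = gamma[h+1] + c[i] * c[i+h];
         end;
      end;
      siacf = gamma / gamma[1];      /* normalize to inverse autocorrelations */
      print siacf;
   quit;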

The Partial Autocorrelation Function

The approximation for a standard error for the estimated partial autocorrelation function at lag k is based on a null hypothesis that a pure autoregressive Gaussian process of order k-1 generated the time series. This standard error is 1/√n and is used to produce the approximate 95% confidence intervals depicted by the dots in the plot.

The Cross-Correlation Function

The autocorrelation and partial and inverse autocorrelation functions described in the preceding sections help when you want to model a series as a function of its past values and past random errors. When you want to include the effects of past and current values of other series in the model, the correlations of the response series and the other series must be considered.

The CROSSCORR= option in the IDENTIFY statement computes cross-correlations of the VAR= series with other series and makes these series available for use as inputs in models specified by later ESTIMATE statements. When the CROSSCORR= option is used, PROC ARIMA prints a plot of the cross-correlation function for each variable in the CROSSCORR= list. This plot is similar in format to the other correlation plots, but it shows the correlation between the two series at both lags and leads. For example,

   identify var=y crosscorr=x ...;

plots the cross-correlation function of Y and X, Cor(y_t, x_{t-s}), for s = -L to L, where L is the value of the NLAG= option. Study of the cross-correlation functions can indicate the transfer functions through which the input series should enter the model for the response series.


The cross-correlation function is computed after any specified differencing has been done. If differencing is specified for the VAR= variable or for a variable in the CROSSCORR= list, it is the differenced series that is cross-correlated (and the differenced series is processed by any following ESTIMATE statement). For example,

   identify var=y(1) crosscorr=x(1);

computes the cross-correlations of the changes in Y with the changes in X. When differencing is specified, the subsequent ESTIMATE statement models changes in the variables rather than the variables themselves.

The ESACF Method

The extended sample autocorrelation function (ESACF) method can tentatively identify the orders of a stationary or nonstationary ARMA process based on iterated least squares estimates of the autoregressive parameters. Tsay and Tiao (1984) proposed the technique, and Choi (1992) provides useful descriptions of the algorithm.

Given a stationary or nonstationary time series {z_t : 1 <= t <= n} with mean-corrected form \tilde{z}_t = z_t - \bar{z}, with a true autoregressive order of p + d, and with a true moving-average order of q, you can use the ESACF method to estimate the unknown orders p + d and q by analyzing the autocorrelation functions associated with filtered series of the form

   w_t^{(m,j)} = \hat{\Phi}^{(m,j)}(B) \tilde{z}_t = \tilde{z}_t - \sum_{i=1}^{m} \hat{\phi}_i^{(m,j)} \tilde{z}_{t-i}

where B represents the backshift operator, where m = p_min, ..., p_max are the autoregressive test orders, where j = q_min + 1, ..., q_max + 1 are the moving-average test orders, and where \hat{\phi}_i^{(m,j)} are the autoregressive parameter estimates under the assumption that the series is an ARMA(m, j) process.

For purely autoregressive models (j = 0), ordinary least squares (OLS) is used to consistently estimate \hat{\phi}_i^{(m,0)}. For ARMA models, consistent estimates are obtained by the iterated least squares recursion formula, which is initiated by the pure autoregressive estimates:

   \hat{\phi}_i^{(m,j)} = \hat{\phi}_i^{(m+1,j-1)} - \hat{\phi}_{i-1}^{(m,j-1)} \frac{\hat{\phi}_{m+1}^{(m+1,j-1)}}{\hat{\phi}_{m}^{(m,j-1)}}

The jth lag of the sample autocorrelation function of the filtered series w_t^{(m,j)} is the extended sample autocorrelation function, and it is denoted as r_j^{(m)} = r_j(w^{(m,j)}).

The standard errors of r_j^{(m)} are computed in the usual way by using Bartlett's approximation of the variance of the sample autocorrelation function,

   var(r_j^{(m)}) \approx (1 + \sum_{t=1}^{j-1} r_t^2(w^{(m,j)}))


If the true model is an ARMA(p + d, q) process, the filtered series w_t^{(p+d,j)} follows an MA(q) model for j >= q so that

   r_j^{(p+d)} \approx 0,   j > q
   r_j^{(p+d)} \neq 0,      j = q

Additionally, Tsay and Tiao (1984) show that the extended sample autocorrelation satisfies

   r_j^{(m)} \approx 0,                       j - q > m - p - d >= 0
   r_j^{(m)} \neq c(m - p - d, j - q),        0 <= j - q <= m - p - d

where c(m - p - d, j - q) is a nonzero constant or a continuous random variable bounded by -1 and 1.

An ESACF table is then constructed by using the r_j^{(m)} for m = p_min, ..., p_max and j = q_min + 1, ..., q_max + 1 to identify the ARMA orders (see Table 7.4). The orders are tentatively identified by finding a right (maximal) triangular pattern with vertices located at (p + d, q) and (p + d, q_max) and in which all elements are insignificant (based on asymptotic normality of the autocorrelation function). The vertex (p + d, q) identifies the order. Table 7.5 depicts the theoretical pattern associated with an ARMA(1,2) series.

Table 7.4 ESACF Table

              MA
AR    0         1         2         3        ...
0     r_1^(0)   r_2^(0)   r_3^(0)   r_4^(0)  ...
1     r_1^(1)   r_2^(1)   r_3^(1)   r_4^(1)  ...
2     r_1^(2)   r_2^(2)   r_3^(2)   r_4^(2)  ...
3     r_1^(3)   r_2^(3)   r_3^(3)   r_4^(3)  ...
.     .         .         .         .
.     .         .         .         .

Table 7.5 Theoretical ESACF Table for an ARMA(1,2) Series

              MA
AR    0   1   2   3   4   5   6   7
0     *   X   X   X   X   X   X   X
1     *   X   0   0   0   0   0   0
2     *   X   X   0   0   0   0   0
3     *   X   X   X   0   0   0   0
4     *   X   X   X   X   0   0   0

X = significant terms
0 = insignificant terms
* = no pattern


The MINIC Method

The minimum information criterion (MINIC) method can tentatively identify the order of a stationary and invertible ARMA process. Note that Hannan and Rissanen (1982) proposed this method, and Box, Jenkins, and Reinsel (1994) and Choi (1992) provide useful descriptions of the algorithm.

Given a stationary and invertible time series {z_t : 1 <= t <= n} with mean-corrected form \tilde{z}_t = z_t - \bar{z}, with a true autoregressive order of p, and with a true moving-average order of q, you can use the MINIC method to compute information criteria (or penalty functions) for various autoregressive and moving-average orders. The following paragraphs provide a brief description of the algorithm.

If the series is a stationary and invertible ARMA(p, q) process of the form

   \Phi_{(p,q)}(B) \tilde{z}_t = \Theta_{(p,q)}(B) \epsilon_t

the error series can be approximated by a high-order AR process

   \hat{\epsilon}_t = \hat{\Phi}_{(p_\epsilon,q)}(B) \tilde{z}_t \approx \epsilon_t

where the parameter estimates \hat{\Phi}_{(p_\epsilon,q)} are obtained from the Yule-Walker estimates. The choice of the autoregressive order p_\epsilon is determined by the order that minimizes the Akaike information criterion (AIC) in the range p_{\epsilon,min} <= p_\epsilon <= p_{\epsilon,max}

   AIC(p_\epsilon, 0) = \ln(\tilde{\sigma}^2_{(p_\epsilon,0)}) + 2(p_\epsilon + 0)/n

where

   \tilde{\sigma}^2_{(p_\epsilon,0)} = \frac{1}{n} \sum_{t=p_\epsilon+1}^{n} \hat{\epsilon}_t^2

Note that Hannan and Rissannen (1982) use the Bayesian information criterion (BIC) to determine the autoregressive order used to estimate the error series. Box, Jenkins, and Reinsel (1994) and Choi (1992) recommend the AIC. Once the error series has been estimated for autoregressive test order m D pmi n ; : : :; pmax and for O .m;j / and ‚ O .m;j / are commoving-average test order j D qmi n ; : : :; qmax , the OLS estimates ˆ puted from the regression model zQ t D

m X

.m;j /

i

zQ t

i

i D1

C

j X

.m;j /

k

Ot

k

C error

kD1

From the preceding parameter estimates, the BIC is then computed 2 BIC.m; j / D ln.Q .m;j / / C 2.m C j /ln.n/=n

where 2 Q .m;j /

0 n X 1 @zQ t D n t Dt 0

m X i D1

.m;j /

i

zQ t

i

C

j X kD1

1 .m;j /

k

Ot

k

A


where $t_0 = p_\epsilon + \max(m, j)$.

A MINIC table is then constructed using $\mathrm{BIC}(m,j)$; see Table 7.6. If $p_{max} > p_{\epsilon,min}$, the preceding regression might fail due to linear dependence on the estimated error series and the mean-corrected series. Values of $\mathrm{BIC}(m,j)$ that cannot be computed are set to missing. For large autoregressive and moving-average test orders with relatively few observations, a nearly perfect fit can result. This condition can be identified by a large negative $\mathrm{BIC}(m,j)$ value.

Table 7.6  MINIC Table

             MA
   AR    0           1           2           3          ...
   0     BIC(0,0)    BIC(0,1)    BIC(0,2)    BIC(0,3)   ...
   1     BIC(1,0)    BIC(1,1)    BIC(1,2)    BIC(1,3)   ...
   2     BIC(2,0)    BIC(2,1)    BIC(2,2)    BIC(2,3)   ...
   3     BIC(3,0)    BIC(3,1)    BIC(3,2)    BIC(3,3)   ...
   ...
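In PROC ARIMA itself, a hedged sketch of requesting a MINIC table follows; the data set name, series name, the order ranges, and the PERROR= range used for the high-order AR error approximation are illustrative assumptions:

   proc arima data=test;
      /* BIC(m,j) table for m = 0..5, j = 0..5;
         error series estimated from AR fits of orders 8 through 11 */
      identify var=y minic p=(0:5) q=(0:5) perror=(8:11);
   run;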

The SCAN Method

The smallest canonical (SCAN) correlation method can tentatively identify the orders of a stationary or nonstationary ARMA process. Tsay and Tiao (1985) proposed the technique, and Box, Jenkins, and Reinsel (1994) and Choi (1992) provide useful descriptions of the algorithm.

Given a stationary or nonstationary time series $\{z_t : 1 \leq t \leq n\}$ with mean-corrected form $\tilde{z}_t = z_t - \mu_z$, with a true autoregressive order of $p + d$, and with a true moving-average order of q, you can use the SCAN method to analyze eigenvalues of the correlation matrix of the ARMA process. The following paragraphs provide a brief description of the algorithm.

For autoregressive test order $m = p_{min}, \ldots, p_{max}$ and for moving-average test order $j = q_{min}, \ldots, q_{max}$, perform the following steps.

1. Let $Y_{m,t} = (\tilde{z}_t, \tilde{z}_{t-1}, \ldots, \tilde{z}_{t-m})'$. Compute the following $(m+1) \times (m+1)$ matrix

$$\hat{\beta}(m, j+1) = \Big(\sum_t Y_{m,t-j-1}Y'_{m,t-j-1}\Big)^{-1}\Big(\sum_t Y_{m,t-j-1}Y'_{m,t}\Big)$$

$$\hat{\beta}^*(m, j+1) = \Big(\sum_t Y_{m,t}Y'_{m,t}\Big)^{-1}\Big(\sum_t Y_{m,t}Y'_{m,t-j-1}\Big)$$

$$\hat{A}^*(m,j) = \hat{\beta}^*(m, j+1)\,\hat{\beta}(m, j+1)$$

where t ranges from $j + m + 2$ to n.

2. Find the smallest eigenvalue, $\hat{\lambda}^*(m,j)$, of $\hat{A}^*(m,j)$ and its corresponding normalized eigenvector, $\Phi_{m,j} = (1, -\phi_1^{(m,j)}, -\phi_2^{(m,j)}, \ldots, -\phi_m^{(m,j)})$. The squared canonical correlation estimate is $\hat{\lambda}^*(m,j)$.


3. Using the $\Phi_{m,j}$ as AR(m) coefficients, obtain the residuals for $t = j + m + 1$ to n, by following the formula: $w_t^{(m,j)} = \tilde{z}_t - \phi_1^{(m,j)}\tilde{z}_{t-1} - \phi_2^{(m,j)}\tilde{z}_{t-2} - \ldots - \phi_m^{(m,j)}\tilde{z}_{t-m}$.

4. From the sample autocorrelations of the residuals, $r_k(w)$, approximate the standard error of the squared canonical correlation estimate by

$$\mathrm{var}(\hat{\lambda}^*(m,j)^{1/2}) \approx d(m,j)/(n - m - j)$$

where $d(m,j) = \big(1 + 2\sum_{i=1}^{j-1} r_i^2(w^{(m,j)})\big)$.

The test statistic to be used as an identification criterion is

$$c(m,j) = -(n - m - j)\,\ln\!\big(1 - \hat{\lambda}^*(m,j)/d(m,j)\big)$$

which is asymptotically $\chi_1^2$ if $m = p + d$ and $j \geq q$ or if $m \geq p + d$ and $j = q$. For $m > p$ and $j < q$, there is more than one theoretical zero canonical correlation between $Y_{m,t}$ and $Y_{m,t-j-1}$. Since the $\hat{\lambda}^*(m,j)$ are the smallest canonical correlations for each $(m,j)$, the percentiles of $c(m,j)$ are less than those of a $\chi_1^2$; therefore, Tsay and Tiao (1985) state that it is safe to assume a $\chi_1^2$. For $m < p$ and $j < q$, no conclusions about the distribution of $c(m,j)$ are made.

A SCAN table is then constructed using $c(m,j)$ to determine which of the $\hat{\lambda}^*(m,j)$ are significantly different from zero (see Table 7.7). The ARMA orders are tentatively identified by finding a (maximal) rectangular pattern in which the $\hat{\lambda}^*(m,j)$ are insignificant for all test orders $m \geq p + d$ and $j \geq q$. There may be more than one pair of values $(p+d, q)$ that permit such a rectangular pattern. In this case, parsimony and the number of insignificant items in the rectangular pattern should help determine the model order. Table 7.8 depicts the theoretical pattern associated with an ARMA(2,2) series.

Table 7.7  SCAN Table

             MA
   AR    0         1         2         3        ...
   0     c(0,0)    c(0,1)    c(0,2)    c(0,3)   ...
   1     c(1,0)    c(1,1)    c(1,2)    c(1,3)   ...
   2     c(2,0)    c(2,1)    c(2,2)    c(2,3)   ...
   3     c(3,0)    c(3,1)    c(3,2)    c(3,3)   ...
   ...


Table 7.8  Theoretical SCAN Table for an ARMA(2,2) Series

             MA
   AR    0  1  2  3  4  5  6  7
   0     *  X  X  X  X  X  X  X
   1     *  X  X  X  X  X  X  X
   2     *  X  0  0  0  0  0  0
   3     *  X  0  0  0  0  0  0
   4     *  X  0  0  0  0  0  0

   X = significant terms, 0 = insignificant terms, * = no pattern
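A corresponding hedged usage sketch follows; the names and test-order ranges are hypothetical:

   proc arima data=test;
      /* SCAN table of c(m,j) for m = 0..5, j = 0..5 */
      identify var=y scan p=(0:5) q=(0:5);
   run;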

Stationarity Tests

When a time series has a unit root, the series is nonstationary and the ordinary least squares (OLS) estimator is not normally distributed. Dickey (1976) and Dickey and Fuller (1979) studied the limiting distribution of the OLS estimator of autoregressive models for time series with a simple unit root. Dickey, Hasza, and Fuller (1984) obtained the limiting distribution for time series with seasonal unit roots. Hamilton (1994) discusses the various types of unit root testing.

For a description of Dickey-Fuller tests, see the section “PROBDF Function for Dickey-Fuller Tests” on page 158 in Chapter 5. See Chapter 8, “The AUTOREG Procedure,” for a description of Phillips-Perron tests.

The random-walk-with-drift test recommends whether or not an integrated time series has a drift term. Hamilton (1994) discusses this test.

Prewhitening

If, as is usually the case, an input series is autocorrelated, the direct cross-correlation function between the input and response series gives a misleading indication of the relation between the input and response series. One solution to this problem is called prewhitening. You first fit an ARIMA model for the input series sufficient to reduce the residuals to white noise; then, filter the input series with this model to get the white noise residual series. You then filter the response series with the same model and cross-correlate the filtered response with the filtered input series.

The ARIMA procedure performs this prewhitening process automatically when you precede the IDENTIFY statement for the response series with IDENTIFY and ESTIMATE statements to fit a model for the input series. If a model with no inputs was previously fit to a variable specified by the CROSSCORR= option, then that model is used to prewhiten both the input series and the response series before the cross-correlations are computed for the input series.


For example,

   proc arima data=in;
      identify var=x;
      estimate p=1 q=1;
      identify var=y crosscorr=x;
   run;

Both X and Y are filtered by the ARMA(1,1) model fit to X before the cross-correlations are computed. Note that prewhitening is done to estimate the cross-correlation function; the unfiltered series are used in any subsequent ESTIMATE or FORECAST statements, and the correlation functions of Y with its own lags are computed from the unfiltered Y series. But initial values in the ESTIMATE statement are obtained with prewhitened data; therefore, the result with prewhitening can be different from the result without prewhitening. To suppress prewhitening for all input variables, use the CLEAR option in the IDENTIFY statement to make PROC ARIMA disregard all previous models.
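As a minimal hedged sketch of suppressing the prewhitening filter (the data set and variable names are hypothetical):

   proc arima data=in;
      identify var=x;
      estimate p=1 q=1;
      /* CLEAR discards previous models, so Y and X are cross-correlated unfiltered */
      identify var=y crosscorr=x clear;
   run;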

Prewhitening and Differencing

If the VAR= and CROSSCORR= options specify differencing, the series are differenced before the prewhitening filter is applied. When the differencing lists specified in the VAR= option for an input and in the CROSSCORR= option for that input are not the same, PROC ARIMA combines the two lists so that the differencing operators used for prewhitening include all differences in either list (in the least common multiple sense).

Identifying Transfer Function Models

When identifying a transfer function model with multiple input variables, the cross-correlation functions can be misleading if the input series are correlated with each other. Any dependencies among two or more input series will confound their cross-correlations with the response series.

The prewhitening technique assumes that the input variables do not depend on past values of the response variable. If there is feedback from the response variable to an input variable, as evidenced by significant cross-correlation at negative lags, both the input and the response variables need to be prewhitened before meaningful cross-correlations can be computed.

PROC ARIMA cannot handle feedback models. The STATESPACE and VARMAX procedures are more appropriate for models with feedback.


Missing Values and Autocorrelations

To compute the sample autocorrelation function when missing values are present, PROC ARIMA uses only crossproducts that do not involve missing values and employs divisors that reflect the number of crossproducts used rather than the total length of the series. Sample partial autocorrelations and inverse autocorrelations are then computed by using the sample autocorrelation function. If necessary, a taper is employed to transform the sample autocorrelations into a positive definite sequence before calculating the partial autocorrelation and inverse correlation functions. The confidence intervals produced for these functions might not be valid when there are missing values. The distributional properties for sample correlation functions are not clear for finite samples. See Dunsmuir (1984) for some asymptotic properties of the sample correlation functions.

Estimation Details

The ARIMA procedure primarily uses the computational methods outlined by Box and Jenkins. Marquardt's method is used for the nonlinear least squares iterations. Numerical approximations of the derivatives of the sum-of-squares function are taken by using a fixed delta (controlled by the DELTA= option). The methods do not always converge successfully for a given set of data, particularly if the starting values for the parameters are not close to the least squares estimates.

Back-Forecasting

The unconditional sum of squares is computed exactly; thus, back-forecasting is not performed. Early versions of SAS/ETS software used the back-forecasting approximation and allowed a positive value of the BACKLIM= option to control the extent of the back-forecasting. In the current version, requesting a positive number of back-forecasting steps with the BACKLIM= option has no effect.

Preliminary Estimation

If an autoregressive or moving-average operator is specified with no missing lags, preliminary estimates of the parameters are computed by using the autocorrelations computed in the IDENTIFY stage. Otherwise, the preliminary estimates are arbitrarily set to values that produce stable polynomials.

When preliminary estimation is not performed by PROC ARIMA, then initial values of the coefficients for any given autoregressive or moving-average factor are set to 0.1 if the degree of the polynomial associated with the factor is 9 or less. Otherwise, the coefficients are determined by expanding the polynomial $(1 - 0.1B)$ to an appropriate power by using a recursive algorithm.


These preliminary estimates are the starting values in an iterative algorithm to compute estimates of the parameters.

Estimation Methods

Maximum Likelihood

The METHOD=ML option produces maximum likelihood estimates. The likelihood function is maximized via nonlinear least squares using Marquardt's method. Maximum likelihood estimates are more expensive to compute than the conditional least squares estimates; however, they may be preferable in some cases (Ansley and Newbold 1980; Davidson 1981).

The maximum likelihood estimates are computed as follows. Let the univariate ARMA model be

$$\phi(B)(W_t - \mu_t) = \theta(B)a_t$$

where $a_t$ is an independent sequence of normally distributed innovations with mean 0 and variance $\sigma^2$. Here $\mu_t$ is the mean parameter $\mu$ plus the transfer function inputs. The log-likelihood function can be written as follows:

$$-\frac{1}{2\sigma^2}\,\mathbf{x}'\boldsymbol{\Omega}^{-1}\mathbf{x} - \frac{1}{2}\ln(|\boldsymbol{\Omega}|) - \frac{n}{2}\ln(\sigma^2)$$

In this equation, n is the number of observations, $\sigma^2\boldsymbol{\Omega}$ is the variance of $\mathbf{x}$ as a function of the $\phi$ and $\theta$ parameters, and $|\cdot|$ denotes the determinant. The vector $\mathbf{x}$ is the time series $W_t$ minus the structural part of the model $\mu_t$, written as a column vector, as follows:

$$\mathbf{x} = \begin{bmatrix} W_1 \\ W_2 \\ \vdots \\ W_n \end{bmatrix} - \begin{bmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_n \end{bmatrix}$$

The maximum likelihood estimate (MLE) of $\sigma^2$ is

$$s^2 = \frac{1}{n}\,\mathbf{x}'\boldsymbol{\Omega}^{-1}\mathbf{x}$$

Note that the default estimator of the variance divides by $n - r$, where r is the number of parameters in the model, instead of by n. Specifying the NODF option causes a divisor of n to be used.

The log-likelihood concentrated with respect to $\sigma^2$ can be taken up to additive constants as

$$-\frac{n}{2}\ln(\mathbf{x}'\boldsymbol{\Omega}^{-1}\mathbf{x}) - \frac{1}{2}\ln(|\boldsymbol{\Omega}|)$$

Let $\mathbf{H}$ be the lower triangular matrix with positive elements on the diagonal such that $\mathbf{H}\mathbf{H}' = \boldsymbol{\Omega}$. Let $\mathbf{e}$ be the vector $\mathbf{H}^{-1}\mathbf{x}$. The concentrated log-likelihood with respect to $\sigma^2$ can now be written as

$$-\frac{n}{2}\ln(\mathbf{e}'\mathbf{e}) - \ln(|\mathbf{H}|)$$


or

$$-\frac{n}{2}\ln\!\big(|\mathbf{H}|^{1/n}\,\mathbf{e}'\mathbf{e}\,|\mathbf{H}|^{1/n}\big)$$

The MLE is produced by using a Marquardt algorithm to minimize the following sum of squares:

$$|\mathbf{H}|^{1/n}\,\mathbf{e}'\mathbf{e}\,|\mathbf{H}|^{1/n}$$

The subsequent analysis of the residuals is done by using $\mathbf{e}$ as the vector of residuals.

Unconditional Least Squares

The METHOD=ULS option produces unconditional least squares estimates. The ULS method is also referred to as the exact least squares (ELS) method. For METHOD=ULS, the estimates minimize

$$\sum_{t=1}^{n}\tilde{a}_t^2 = \sum_{t=1}^{n}\big(x_t - \mathbf{C}_t\mathbf{V}_t^{-1}(x_1, \cdots, x_{t-1})'\big)^2$$

where $\mathbf{C}_t$ is the covariance matrix of $x_t$ and $(x_1, \cdots, x_{t-1})$, and $\mathbf{V}_t$ is the variance matrix of $(x_1, \cdots, x_{t-1})$. In fact, $\sum_{t=1}^{n}\tilde{a}_t^2$ is the same as $\mathbf{x}'\boldsymbol{\Omega}^{-1}\mathbf{x}$, and hence $\mathbf{e}'\mathbf{e}$. Therefore, the unconditional least squares estimates are obtained by minimizing the sum of squared residuals rather than using the log-likelihood as the criterion function.

Conditional Least Squares

The METHOD=CLS option produces conditional least squares estimates. The CLS estimates are conditional on the assumption that the past unobserved errors are equal to 0. The series $x_t$ can be represented in terms of the previous observations, as follows:

$$x_t = a_t + \sum_{i=1}^{\infty}\pi_i x_{t-i}$$

The $\pi$ weights are computed from the ratio of the $\phi$ and $\theta$ polynomials, as follows:

$$\frac{\phi(B)}{\theta(B)} = 1 - \sum_{i=1}^{\infty}\pi_i B^i$$

The CLS method produces estimates minimizing

$$\sum_{t=1}^{n}\hat{a}_t^2 = \sum_{t=1}^{n}\Big(x_t - \sum_{i=1}^{\infty}\hat{\pi}_i x_{t-i}\Big)^2$$

where the unobserved past values of $x_t$ are set to 0 and $\hat{\pi}_i$ are computed from the estimates of $\phi$ and $\theta$ at each iteration.

For METHOD=ULS and METHOD=ML, initial estimates are computed using the METHOD=CLS algorithm.
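The estimation method is selected with the METHOD= option in the ESTIMATE statement. A brief hedged sketch (the data set and series names are hypothetical) that fits the same ARMA(1,1) model under each method:

   proc arima data=series;
      identify var=y;
      estimate p=1 q=1 method=cls;   /* conditional least squares (the default) */
      estimate p=1 q=1 method=uls;   /* unconditional (exact) least squares     */
      estimate p=1 q=1 method=ml;    /* maximum likelihood                      */
   run;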


Start-up for Transfer Functions

When computing the noise series for transfer function and intervention models, the start-up for the transferred variable is done by assuming that past values of the input series are equal to the first value of the series. The estimates are then obtained by applying least squares or maximum likelihood to the noise series. Thus, for transfer function models, the ML option does not generate the full (multivariate ARMA) maximum likelihood estimates, but it uses only the univariate likelihood function applied to the noise series.

Because PROC ARIMA uses all of the available data for the input series to generate the noise series, other start-up options for the transferred series can be implemented by prefixing an observation to the beginning of the real data. For example, if you fit a transfer function model to the variable Y with the single input X, then you can employ a start-up using 0 for the past values by prefixing to the actual data an observation with a missing value for Y and a value of 0 for X.

Information Criteria

PROC ARIMA computes and prints two information criteria, Akaike's information criterion (AIC) (Akaike 1974; Harvey 1981) and Schwarz's Bayesian criterion (SBC) (Schwarz 1978). The AIC and SBC are used to compare competing models fit to the same series. The model with the smaller information criteria is said to fit the data better.

The AIC is computed as

$$-2\ln(L) + 2k$$

where L is the likelihood function and k is the number of free parameters. The SBC is computed as

$$-2\ln(L) + \ln(n)\,k$$

where n is the number of residuals that can be computed for the time series. Sometimes Schwarz's Bayesian criterion is called the Bayesian information criterion (BIC).

If METHOD=CLS is used to do the estimation, an approximate value of L is used, where L is based on the conditional sum of squares instead of the exact sum of squares, and a Jacobian factor is left out.
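To make the arithmetic concrete, the following minimal DATA step sketch computes both criteria from an assumed log-likelihood; the values of loglik, k, and n are invented for illustration:

   data _null_;
      loglik = -512.3;               /* assumed log-likelihood ln(L) */
      k      = 3;                    /* number of free parameters    */
      n      = 100;                  /* number of residuals          */
      aic = -2*loglik + 2*k;         /* AIC = -2 ln(L) + 2k          */
      sbc = -2*loglik + log(n)*k;    /* SBC = -2 ln(L) + ln(n) k     */
      put aic= sbc=;
   run;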

Tests of Residuals

A table of test statistics for the hypothesis that the model residuals are white noise is printed as part of the ESTIMATE statement output. The chi-square statistics used in the test for lack of fit are computed using the Ljung-Box formula

$$\chi_m^2 = n(n+2)\sum_{k=1}^{m}\frac{r_k^2}{n-k}$$

where

$$r_k = \frac{\sum_{t=1}^{n-k}a_t a_{t+k}}{\sum_{t=1}^{n}a_t^2}$$


and $a_t$ is the residual series. This formula has been suggested by Ljung and Box (1978) as yielding a better fit to the asymptotic chi-square distribution than the Box-Pierce Q statistic. Some simulation studies of the finite sample properties of this statistic are given by Davies, Triggs, and Newbold (1977) and by Ljung and Box (1978). When the time series has missing values, Stoffer and Toloi (1992) suggest a modification of this test statistic that has improved distributional properties over the standard Ljung-Box formula given above. When the series contains missing values, this modified test statistic is used by default.

Each chi-square statistic is computed for all lags up to the indicated lag value and is not independent of the preceding chi-square values. The null hypothesis tested is that the current set of autocorrelations is white noise.

t-values

The t values reported in the table of parameter estimates are approximations whose accuracy depends on the validity of the model, the nature of the model, and the length of the observed series. When the length of the observed series is short and the number of estimated parameters is large with respect to the series length, the t approximation is usually poor. Probability values that correspond to a t distribution should be interpreted carefully because they may be misleading.

Cautions during Estimation

The ARIMA procedure uses a general nonlinear least squares estimation method that can yield problematic results if your data do not fit the model. Output should be examined carefully. The GRID option can be used to ensure the validity and quality of the results. Problems you might encounter include the following:

• Preliminary moving-average estimates might not converge. If this occurs, preliminary estimates are derived as described previously in “Preliminary Estimation” on page 248. You can supply your own preliminary estimates with the ESTIMATE statement options.

• The estimates can lead to an unstable time series process, which can cause extreme forecast values or overflows in the forecast.

• The Jacobian matrix of partial derivatives might be singular; usually, this happens because not all the parameters are identifiable. Removing some of the parameters or using a longer time series might help.

• The iterative process might not converge. PROC ARIMA's estimation method stops after n iterations, where n is the value of the MAXITER= option. If an iteration does not improve the SSE, the Marquardt parameter is increased by a factor of ten until parameters that have a smaller SSE are obtained or until the limit value of the Marquardt parameter is exceeded.

• For METHOD=CLS, the estimates might converge but not to least squares estimates. The estimates might converge to a local minimum, the numerical calculations might be distorted by data whose sum-of-squares surface is not smooth, or the minimum might lie outside the region of invertibility or stationarity.


• If the data are differenced and a moving-average model is fit, the parameter estimates might try to converge exactly on the invertibility boundary. In this case, the standard error estimates that are based on derivatives might be inaccurate.

Specifying Inputs and Transfer Functions

Input variables and transfer functions for them can be specified using the INPUT= option in the ESTIMATE statement. The variables used in the INPUT= option must be included in the CROSSCORR= list in the previous IDENTIFY statement. If any differencing is specified in the CROSSCORR= list, then the differenced variable is used as the input to the transfer function.

General Syntax of the INPUT= Option

The general syntax of the INPUT= option is

   ESTIMATE ... INPUT=( transfer-function variable ... )

The transfer function for an input variable is optional. The name of a variable by itself can be used to specify a pure regression term for the variable.

If specified, the syntax of the transfer function is

   S $ (L(1,1), L(1,2), ...)(L(2,1), ...)... / (L(i,1), L(i,2), ...)(L(i+1,1), ...)...

S is the number of periods of time delay (lag) for this input series. Each term in parentheses specifies a polynomial factor with parameters at the lags specified by the $L_{i,j}$ values. The terms before the slash (/) are numerator factors. The terms after the slash (/) are denominator factors. All three parts are optional.

Commas can optionally be used between input specifications to make the INPUT= option more readable. The $ sign after the shift is also optional.

Except for the first numerator factor, each of the terms $L_{i,1}, L_{i,2}, \ldots, L_{i,k}$ indicates a factor of the form

$$(1 - \omega_{i,1}B^{L_{i,1}} - \omega_{i,2}B^{L_{i,2}} - \ldots - \omega_{i,k}B^{L_{i,k}})$$

The form of the first numerator factor depends on the ALTPARM option. By default, the constant 1 in the first numerator factor is replaced with a free parameter $\omega_0$.

Alternative Model Parameterization

When the ALTPARM option is specified, the $\omega_0$ parameter is factored out so that it multiplies the entire transfer function, and the first numerator factor has the same form as the other factors.


The ALTPARM option does not materially affect the results; it just presents the results differently. Some people prefer to see the model written one way, while others prefer the alternative representation. Table 7.9 illustrates the effect of the ALTPARM option.

Table 7.9  The ALTPARM Option

   INPUT= Option             ALTPARM   Model
   INPUT=((1 2)(12)/(1)X);   No        $(\omega_0 - \omega_1 B - \omega_2 B^2)(1 - \omega_3 B^{12})/(1 - \delta_1 B)\,X_t$
                             Yes       $\omega_0(1 - \omega_1 B - \omega_2 B^2)(1 - \omega_3 B^{12})/(1 - \delta_1 B)\,X_t$

Differencing and Input Variables

If you difference the response series and use input variables, take care that the differencing operations do not change the meaning of the model. For example, if you want to fit the model

$$Y_t = \frac{\omega_0}{(1 - \delta_1 B)}X_t + \frac{(1 - \theta_1 B)}{(1-B)(1-B^{12})}a_t$$

then the IDENTIFY statement must read

   identify var=y(1,12) crosscorr=x(1,12);
   estimate q=1 input=(/(1)x) noconstant;

If instead you specify the differencing as

   identify var=y(1,12) crosscorr=x;
   estimate q=1 input=(/(1)x) noconstant;

then the model being requested is

$$Y_t = \frac{\omega_0}{(1 - \delta_1 B)(1-B)(1-B^{12})}X_t + \frac{(1 - \theta_1 B)}{(1-B)(1-B^{12})}a_t$$

which is a very different model.

The point to remember is that a differencing operation requested for the response variable specified by the VAR= option is applied only to that variable and not to the noise term of the model.

Initial Values

The syntax for giving initial values to transfer function parameters in the INITVAL= option parallels the syntax of the INPUT= option. For each transfer function in the INPUT= option, the INITVAL= option should give an initialization specification followed by the input series name. The initialization specification for each transfer function has the form

   C $ (V(1,1), V(1,2), ...)(V(2,1), ...)... / (V(i,1), ...)...


where C is the lag 0 term in the first numerator factor of the transfer function (or the overall scale factor if the ALTPARM option is specified) and $V_{i,j}$ is the coefficient of the $L_{i,j}$ element in the transfer function.

To illustrate, suppose you want to fit the model

$$Y_t = \mu + \frac{(\omega_0 - \omega_1 B - \omega_2 B^2)}{(1 - \delta_1 B - \delta_2 B^2 - \delta_3 B^3)}X_{t-3} + \frac{1}{(1 - \phi_1 B - \phi_2 B^3)}a_t$$

and start the estimation process with the initial values $\mu$=10, $\omega_0$=1, $\omega_1$=0.5, $\omega_2$=0.03, $\delta_1$=0.8, $\delta_2$=–0.1, $\delta_3$=0.002, $\phi_1$=0.1, $\phi_2$=0.01. (These are arbitrary values for illustration only.) You would use the following statements:

   identify var=y crosscorr=x;
   estimate p=(1,3)
            input=(3$(1,2)/(1,2,3)x)
            mu=10 ar=.1 .01
            initval=(1$(.5,.03)/(.8,-.1,.002)x);

Note that the lags specified for a particular factor are sorted, so initial values should be given in sorted order. For example, if the P= option had been entered as P=(3,1) instead of P=(1,3), the model would be the same and so would the AR= option. Sorting is done within all factors, including transfer function factors, so initial values should always be given in order of increasing lags.

Here is another illustration, showing initialization for a factored model with multiple inputs. The model is

$$Y_t = \mu + \frac{\omega_{1,0}}{(1 - \delta_{1,1}B)}W_t + (\omega_{2,0} - \omega_{2,1}B)X_{t-3} + \frac{1}{(1 - \phi_1 B)(1 - \phi_2 B^6 - \phi_3 B^{12})}a_t$$

and the initial values are $\mu$=10, $\omega_{1,0}$=5, $\delta_{1,1}$=0.8, $\omega_{2,0}$=1, $\omega_{2,1}$=0.5, $\phi_1$=0.1, $\phi_2$=0.05, and $\phi_3$=0.01. You would use the following statements:

   identify var=y crosscorr=(w x);
   estimate p=(1)(6,12)
            input=(/(1)w, 3$(1)x)
            mu=10 ar=.1 .05 .01
            initval=(5$/(.8)w 1$(.5)x);

Stationarity and Invertibility

By default, PROC ARIMA requires that the parameter estimates for the AR and MA parts of the model always remain in the stationary and invertible regions, respectively. The NOSTABLE option removes this restriction and for high-order models can save some computer time. Note that using the NOSTABLE option does not necessarily result in an unstable model being fit, since the estimates can leave the stable region for some iterations but still ultimately converge to stable values.

Similarly, by default, the parameter estimates for the denominator polynomial of the transfer function part of the model are also restricted to be stable. The NOTFSTABLE option can be used to remove this restriction.
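A hedged sketch of relaxing both restrictions for a hypothetical high-order transfer function model follows; whether doing so is advisable depends on the application:

   proc arima data=series;
      identify var=y crosscorr=x;
      /* NOSTABLE frees the AR/MA estimates; NOTFSTABLE frees the denominator */
      estimate p=5 q=5 input=(/(1)x) nostable notfstable;
   run;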


Naming of Model Parameters

In the table of parameter estimates produced by the ESTIMATE statement, model parameters are referred to by using the naming convention described in this section.

The parameters in the noise part of the model are named as ARi,j or MAi,j, where AR refers to autoregressive parameters and MA to moving-average parameters. The subscript i refers to the particular polynomial factor, and the subscript j refers to the jth term within the ith factor. These terms are sorted in order of increasing lag within factors, so the subscript j refers to the jth term after sorting.

When inputs are used in the model, the parameters of each transfer function are named NUMi,j and DENi,j. The jth term in the ith factor of a numerator polynomial is named NUMi,j. The jth term in the ith factor of a denominator polynomial is named DENi,j. This naming process is repeated for each input variable, so if there are multiple inputs, parameters in transfer functions for different input series have the same name. The table of parameter estimates shows in the “Variable” column the input with which each parameter is associated. The parameter name shown in the “Parameter” column and the input variable name shown in the “Variable” column must be combined to fully identify transfer function parameters.

The lag 0 parameter in the first numerator factor for the first input variable is named NUM1. For subsequent input variables, the lag 0 parameter in the first numerator factor is named NUMk, where k is the position of the input variable in the INPUT= option list. If the ALTPARM option is specified, the NUMk parameter is replaced by an overall scale parameter named SCALEk.

For the mean and noise process parameters, the response series name is shown in the “Variable” column. The lag and shift for each parameter are also shown in the table of parameter estimates when inputs are used.
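As a hedged illustration of this convention, consider the following hypothetical specification:

   estimate p=(1)(6,12) q=(1) input=((1)/(1)x);

Under the rules above, the noise parameters would be reported as AR1,1 (AR factor 1, lag 1), AR2,1 and AR2,2 (AR factor 2, lags 6 and 12), and MA1,1; the transfer function parameters for X would be NUM1 (the lag 0 term of the first numerator factor), NUM1,1 (the lag 1 term of that factor), and DEN1,1 (the lag 1 term of the first denominator factor).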

Missing Values and Estimation and Forecasting

Estimation and forecasting are carried out in the presence of missing values by forecasting the missing values with the current set of parameter estimates. The maximum likelihood algorithm employed was suggested by Jones (1980) and is used for both unconditional least squares (ULS) and maximum likelihood (ML) estimation.

The CLS algorithm simply fills in missing values with infinite memory forecast values, computed by forecasting ahead from the nonmissing past values as far as required by the structure of the missing values. These artificial values are then employed in the nonmissing value CLS algorithm. Artificial values are updated at each iteration along with parameter estimates.

For models with input variables, embedded missing values (that is, missing values other than at the beginning or end of the series) are not generally supported. Embedded missing values in input variables are supported for the special case of a multiple regression model that has ARIMA errors. A multiple regression model is specified by an INPUT= option that simply lists the input variables


(possibly with lag shifts) without any numerator or denominator transfer function factors. One-step-ahead forecasts are not available for the response variable when one or more of the input variables have missing values.

When embedded missing values are present for a model with complex transfer functions, PROC ARIMA uses the first continuous nonmissing piece of each series to do the analysis. That is, PROC ARIMA skips observations at the beginning of each series until it encounters a nonmissing value and then uses the data from there until it encounters another missing value or until the end of the data is reached. This makes the current version of PROC ARIMA compatible with earlier releases that did not allow embedded missing values.

Forecasting Details

If the model has input variables, a forecast beyond the end of the data for the input variables is possible only if univariate ARIMA models have previously been fit to the input variables or future values for the input variables are included in the DATA= data set.

If input variables are used, the forecast standard errors and confidence limits of the response depend on the estimated forecast error variance of the predicted inputs. If several input series are used, the forecast errors for the inputs should be independent; otherwise, the standard errors and confidence limits for the response series will not be accurate. If future values for the input variables are included in the DATA= data set, the standard errors of the forecasts will be underestimated since these values are assumed to be known with certainty.

The forecasts are generated using forecasting equations consistent with the method used to estimate the model parameters. Thus, the estimation method specified in the ESTIMATE statement also controls the way forecasts are produced by the FORECAST statement. If METHOD=CLS is used, the forecasts are infinite memory forecasts, also called conditional forecasts. If METHOD=ULS or METHOD=ML, the forecasts are finite memory forecasts, also called unconditional forecasts.

A complete description of the steps to produce the series forecasts and their standard errors by using either of these methods is quite involved, and only a brief explanation of the algorithm is given in the next two sections. Additional details about the finite and infinite memory forecasts can be found in Brockwell and Davis (1991). The prediction of stationary ARMA processes is explained in Chapter 5, and the prediction of nonstationary ARMA processes is given in Chapter 9 of Brockwell and Davis (1991).

Infinite Memory Forecasts

If METHOD=CLS is used, the forecasts are infinite memory forecasts, also called conditional forecasts. The term conditional is used because the forecasts are computed by assuming that the unknown values of the response series before the start of the data are equal to the mean of the series. Thus, the forecasts are conditional on this assumption.


The series $x_t$ can be represented as

$$x_t = a_t + \sum_{i=1}^{\infty}\pi_i x_{t-i}$$

where $\phi(B)/\theta(B) = 1 - \sum_{i=1}^{\infty}\pi_i B^i$.

The k-step forecast of $x_{t+k}$ is computed as

$$\hat{x}_{t+k} = \sum_{i=1}^{k-1}\hat{\pi}_i\hat{x}_{t+k-i} + \sum_{i=k}^{\infty}\hat{\pi}_i x_{t+k-i}$$

where unobserved past values of $x_t$ are set to zero and $\hat{\pi}_i$ is obtained from the estimated parameters $\hat{\phi}$ and $\hat{\theta}$.

Finite Memory Forecasts

For METHOD=ULS or METHOD=ML, the forecasts are finite memory forecasts, also called unconditional forecasts. For finite memory forecasts, the covariance function of the ARMA model is used to derive the best linear prediction equation. That is, the k-step forecast of $x_{t+k}$, given $(x_1, \cdots, x_{t-1})$, is

$$\tilde{x}_{t+k} = \mathbf{C}_{k,t}\mathbf{V}_t^{-1}(x_1, \cdots, x_{t-1})'$$

where $\mathbf{C}_{k,t}$ is the covariance of $x_{t+k}$ and $(x_1, \cdots, x_{t-1})$ and $\mathbf{V}_t$ is the covariance matrix of the vector $(x_1, \cdots, x_{t-1})$. $\mathbf{C}_{k,t}$ and $\mathbf{V}_t$ are derived from the estimated parameters.

Finite memory forecasts minimize the mean squared error of prediction if the parameters of the ARMA model are known exactly. (In most cases, the parameters of the ARMA model are estimated, so the predictors are not true best linear forecasts.)

If the response series is differenced, the final forecast is produced by summing the forecast of the differenced series. This summation and the forecast are conditional on the initial values of the series. Thus, when the response series is differenced, the final forecasts are not true finite memory forecasts because they are derived by assuming that the differenced series begins in a steady-state condition. Thus, they fall somewhere between finite memory and infinite memory forecasts. In practice, there is seldom any practical difference between these forecasts and true finite memory forecasts.

Forecasting Log Transformed Data

The log transformation is often used to convert time series that are nonstationary with respect to the innovation variance into stationary time series. The usual approach is to take the log of the series in a DATA step and then apply PROC ARIMA to the transformed data. A DATA step is then used to transform the forecasts of the logs back to the original units of measurement. The confidence limits are also transformed by using the exponential function.


As one alternative, you can simply exponentiate the forecast series. This procedure gives a forecast for the median of the series, but the antilog of the forecast log series underpredicts the mean of the original series. If you want to predict the expected value of the series, you need to take into account the standard error of the forecast, as shown in the following example, which uses an AR(2) model to forecast the log of a series Y:

   data in;
      set in;
      ylog = log( y );
   run;

   proc arima data=in;
      identify var=ylog;
      estimate p=2;
      forecast lead=10 out=out;
   run;

   data out;
      set out;
      y        = exp( ylog );
      l95      = exp( l95 );
      u95      = exp( u95 );
      forecast = exp( forecast + std*std/2 );
   run;

Specifying Series Periodicity

The INTERVAL= option is used together with the ID= variable to describe the observations that make up the time series. For example, INTERVAL=MONTH specifies a monthly time series in which each observation represents one month. See Chapter 4, “Date Intervals, Formats, and Functions,” for details about the interval values supported.

The variable specified by the ID= option in the PROC ARIMA statement identifies the time periods associated with the observations. Usually, SAS date, time, or datetime values are used for this variable. PROC ARIMA uses the ID= variable in the following ways:

• to validate the data periodicity. When the INTERVAL= option is specified, PROC ARIMA uses the ID variable to check the data and verify that successive observations have valid ID values that correspond to successive time intervals. When the INTERVAL= option is not used, PROC ARIMA verifies that the ID values are nonmissing and in ascending order.

• to check for gaps in the input observations. For example, if INTERVAL=MONTH and an input observation for April 1970 follows an observation for January 1970, there is a gap in the input data with two omitted observations (namely February and March 1970). A warning message is printed when a gap in the input data is found.

• to label the forecast observations in the output data set. PROC ARIMA extrapolates the values of the ID variable for the forecast observations from the ID value at the end of the input data


according to the frequency specifications of the INTERVAL= option. If the INTERVAL= option is not specified, PROC ARIMA extrapolates the ID variable by incrementing the ID variable value for the last observation in the input data by 1 for each forecast period. Values of the ID variable over the range of the input data are copied to the output data set. The ALIGN= option is used to align the ID variable to the beginning, middle, or end of the time ID interval specified by the INTERVAL= option.
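For instance, a minimal hedged sketch of a monthly series follows; the data set, variables, and model are hypothetical:

   proc arima data=sales;
      identify var=units;
      estimate p=1;
      forecast lead=12 id=date interval=month align=beginning out=fcast;
   run;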

Detecting Outliers

You can use the OUTLIER statement to detect changes in the level of the response series that are not accounted for by the estimated model. The types of changes considered are additive outliers (AO), level shifts (LS), and temporary changes (TC).

Let $\xi_t$ be a regression variable that describes some type of change in the mean response. In time series literature $\xi_t$ is called a shock signature. An additive outlier at some time point s corresponds to a shock signature $\xi_t$ such that $\xi_s = 1.0$ and $\xi_t$ is 0.0 at all other points. Similarly a permanent level shift that originates at time s has a shock signature such that $\xi_t$ is 0.0 for $t < s$ and 1.0 for $t \geq s$. A temporary level shift of duration d that originates at time s has $\xi_t$ equal to 1.0 between s and $s + d$ and 0.0 otherwise.

Suppose that you are estimating the ARIMA model

$$D(B)Y_t = \mu_t + \frac{\theta(B)}{\phi(B)}a_t$$

where $Y_t$ is the response series, $D(B)$ is the differencing polynomial in the backward shift operator B (possibly identity), $\mu_t$ is the transfer function input, $\phi(B)$ and $\theta(B)$ are the AR and MA polynomials, respectively, and $a_t$ is the Gaussian white noise series.

The problem of detection of level shifts in the OUTLIER statement is formulated as a problem of sequential selection of shock signatures that improve the model in the ESTIMATE statement. This is similar to the forward selection process in the stepwise regression procedure. The selection process starts with considering shock signatures of the type specified in the TYPE= option, originating at each nonmissing measurement. This involves testing $H_0: \beta = 0$ versus $H_a: \beta \neq 0$ in the model

$$D(B)(Y_t - \beta\xi_t) = \mu_t + \frac{\theta(B)}{\phi(B)}a_t$$

for each of these shock signatures. The most significant shock signature, if it also satisfies the significance criterion in the ALPHA= option, is included in the model. If no significant shock signature is found, then the outlier detection process stops; otherwise this augmented model, which incorporates the selected shock signature in its transfer function input, becomes the null model for the subsequent selection process. This iterative process stops if at any stage no more significant shock signatures are found or if the number of iterations exceeds the maximum search number that results due to the MAXNUM= and MAXPCT= settings. In all these iterations, the parameters of the ARIMA model in the ESTIMATE statement are held fixed.
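A minimal hedged sketch of this usage follows; the data set, series, and model orders are hypothetical:

   proc arima data=series;
      identify var=y(1);
      estimate q=1 method=ml;
      /* search for up to 3 additive outliers or level shifts at the 1% level */
      outlier type=(ao ls) alpha=0.01 maxnum=3;
   run;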


The precise details of the testing procedure for a given shock signature $\xi_t$ are as follows:

The preceding testing problem is equivalent to testing $H_0: \beta = 0$ versus $H_a: \beta \neq 0$ in the following “regression with ARMA errors” model

$$N_t = \beta\zeta_t + \frac{\theta(B)}{\phi(B)}a_t$$

where $N_t = (D(B)Y_t - \mu_t)$ is the “noise” process and $\zeta_t = D(B)\xi_t$ is the “effective” shock signature.

In this setting, under $H_0$, $\mathbf{N} = (N_1, N_2, \ldots, N_n)^T$ is a mean zero Gaussian vector with variance-covariance matrix $\sigma^2\boldsymbol{\Omega}$. Here $\sigma^2$ is the variance of the white noise process $a_t$ and $\boldsymbol{\Omega}$ is the variance-covariance matrix associated with the ARMA model. Moreover, under $H_a$, $\mathbf{N}$ has $\beta\boldsymbol{\zeta}$ as the mean vector where $\boldsymbol{\zeta} = (\zeta_1, \zeta_2, \ldots, \zeta_n)^T$. Additionally, the generalized least squares estimate of $\beta$ and its variance is given by

$$\hat{\beta} = \delta/\kappa$$
$$\mathrm{Var}(\hat{\beta}) = \sigma^2/\kappa$$

where $\delta = \boldsymbol{\zeta}^T\boldsymbol{\Omega}^{-1}\mathbf{N}$ and $\kappa = \boldsymbol{\zeta}^T\boldsymbol{\Omega}^{-1}\boldsymbol{\zeta}$. The test statistic $\tau^2 = \delta^2/(\sigma^2\kappa)$ is used to test the significance of $\beta$, which has an approximate chi-squared distribution with 1 degree of freedom under $H_0$.

The type of estimate of $\sigma^2$ used in the calculation of $\tau^2$ can be specified by the SIGMA= option. The default setting is SIGMA=ROBUST, which corresponds to a robust estimate suggested in an outlier detection procedure in X-12-ARIMA, the Census Bureau's time series analysis program; see Findley et al. (1998) for additional information. The robust estimate of $\sigma^2$ is computed by the formula

$$\hat{\sigma}^2 = (1.49 \times \mathrm{Median}(|\hat{a}_t|))^2$$

where $\hat{a}_t$ are the standardized residuals of the null ARIMA model. The setting SIGMA=MSE corresponds to the usual mean squared error estimate (MSE) computed the same way as in the ESTIMATE statement with the NODF option.

The quantities $\delta$ and $\kappa$ are efficiently computed by a method described in de Jong and Penzer (1998); see also Kohn and Ansley (1985).

Modeling in the Presence of Outliers

In practice, modeling and forecasting time series data in the presence of outliers is a difficult problem for several reasons. The presence of outliers can adversely affect the model identification and estimation steps. Their presence close to the end of the observation period can have a serious impact on the forecasting performance of the model. In some cases, level shifts are associated with changes in the mechanism that drives the observation process, and separate models might be appropriate to different sections of the data. In view of all these difficulties, diagnostic tools such as outlier detection and residual analysis are essential in any modeling process.

The following modeling strategy, which incorporates level shift detection in the familiar Box-Jenkins modeling methodology, seems to work in many cases:


1. Proceed with model identification and estimation as usual. Suppose this results in a tentative ARIMA model, say M.

2. Check for additive and permanent level shifts unaccounted for by the model M by using the OUTLIER statement. In this step, unless there is evidence to justify it, the number of level shifts searched should be kept small.

3. Augment the original data set with the regression variables that correspond to the detected outliers.

4. Include the first few of these regression variables in M, and call this model M1. Reestimate all the parameters of M1. It is important not to include too many of these outlier variables in the model in order to avoid the danger of over-fitting.

5. Check the adequacy of M1 by examining the parameter estimates, residual analysis, and outlier detection. Refine it more if necessary.
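The following hedged sketch illustrates steps 1 through 4; the data set name, series name, model orders, and the detected shift point (observation 57) are all hypothetical:

   /* Steps 1-2: fit a tentative model M and search for level shifts */
   proc arima data=series;
      identify var=y(1);
      estimate p=1 method=ml;
      outlier type=(ao ls) maxnum=2;
   run;

   /* Step 3: suppose an LS outlier was reported at observation 57 */
   data series2;
      set series;
      ls57 = ( _n_ >= 57 );
   run;

   /* Step 4: refit with the outlier regressor included (model M1) */
   proc arima data=series2;
      identify var=y(1) crosscorr=(ls57(1));
      estimate p=1 input=(ls57) method=ml;
   run;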

OUT= Data Set

The output data set produced by the OUT= option of the PROC ARIMA or FORECAST statements contains the following:

• the BY variables

• the ID variable

• the variable specified by the VAR= option in the IDENTIFY statement, which contains the actual values of the response series

• FORECAST, a numeric variable that contains the one-step-ahead predicted values and the multistep forecasts

• STD, a numeric variable that contains the standard errors of the forecasts

• a numeric variable that contains the lower confidence limits of the forecast. This variable is named L95 by default but has a different name if the ALPHA= option specifies a different size for the confidence limits.

• RESIDUAL, a numeric variable that contains the differences between actual and forecast values

• a numeric variable that contains the upper confidence limits of the forecast. This variable is named U95 by default but has a different name if the ALPHA= option specifies a different size for the confidence limits.

The ID variable, the BY variables, and the response variable are the only ones copied from the input to the output data set. In particular, the input variables are not copied to the OUT= data set.


Unless the NOOUTALL option is specified, the data set contains the whole time series. The FORECAST variable has the one-step forecasts (predicted values) for the input periods, followed by n forecast values, where n is the LEAD= value. The actual and RESIDUAL values are missing beyond the end of the series. If you specify the same OUT= data set in different FORECAST statements, the latter FORECAST statements overwrite the output from the previous FORECAST statements. If you want to combine the forecasts from different FORECAST statements in the same output data set, specify the OUT= option once in the PROC ARIMA statement and omit the OUT= option in the FORECAST statements. When a global output data set is created by the OUT= option in the PROC ARIMA statement, the variables in the OUT= data set are defined by the first FORECAST statement that is executed. The results of subsequent FORECAST statements are vertically concatenated onto the OUT= data set. Thus, if no ID variable is specified in the first FORECAST statement that is executed, no ID variable appears in the output data set, even if one is specified in a later FORECAST statement. If an ID variable is specified in the first FORECAST statement that is executed but not in a later FORECAST statement, the value of the ID variable is the same as the last value processed for the ID variable for all observations created by the later FORECAST statement. Furthermore, even if the response variable changes in subsequent FORECAST statements, the response variable name in the output data set is that of the first response variable analyzed.
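A hedged sketch of this global OUT= usage follows (the data set and variable names are hypothetical); forecasts from both FORECAST statements are concatenated into ALLFCST:

   proc arima data=series out=allfcst;
      identify var=y1;
      estimate p=1;
      forecast lead=6 id=date interval=month;
      identify var=y2;
      estimate q=1;
      forecast lead=6 id=date interval=month;
   run;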

OUTCOV= Data Set

The output data set produced by the OUTCOV= option of the IDENTIFY statement contains the following variables:

• LAG, a numeric variable that contains the lags that correspond to the values of the covariance variables. The values of LAG range from 0 to N for covariance functions and from –N to N for cross-covariance functions, where N is the value of the NLAG= option.

• VAR, a character variable that contains the name of the variable specified by the VAR= option.

• CROSSVAR, a character variable that contains the name of the variable specified in the CROSSCORR= option, which labels the different cross-covariance functions. The CROSSVAR variable is blank for the autocovariance observations. When there is no CROSSCORR= option, this variable is not created.

• N, a numeric variable that contains the number of observations used to calculate the current value of the covariance or cross-covariance function.

• COV, a numeric variable that contains the autocovariance or cross-covariance function values. COV contains the autocovariances of the VAR= variable when the value of the CROSSVAR variable is blank. Otherwise COV contains the cross covariances between the VAR= variable and the variable named by the CROSSVAR variable.


• CORR, a numeric variable that contains the autocorrelation or cross-correlation function values. CORR contains the autocorrelations of the VAR= variable when the value of the CROSSVAR variable is blank. Otherwise CORR contains the cross-correlations between the VAR= variable and the variable named by the CROSSVAR variable.

• STDERR, a numeric variable that contains the standard errors of the autocorrelations. The standard error estimate is based on the hypothesis that the process that generates the time series is a pure moving-average process of order LAG–1. For the cross-correlations, STDERR contains the value $1/\sqrt{n}$, which approximates the standard error under the hypothesis that the two series are uncorrelated.

• INVCORR, a numeric variable that contains the inverse autocorrelation function values of the VAR= variable. For cross-correlation observations (that is, when the value of the CROSSVAR variable is not blank), INVCORR contains missing values.

• PARTCORR, a numeric variable that contains the partial autocorrelation function values of the VAR= variable. For cross-correlation observations (that is, when the value of the CROSSVAR variable is not blank), PARTCORR contains missing values.

OUTEST= Data Set

PROC ARIMA writes the parameter estimates for a model to an output data set when the OUTEST= option is specified in the ESTIMATE statement. The OUTEST= data set contains the following:

• the BY variables

• _MODLABEL_, a character variable that contains the model label, if it is provided by using the label option in the ESTIMATE statement (otherwise this variable is not created)

• _NAME_, a character variable that contains the name of the parameter for the covariance or correlation observations or is blank for the observations that contain the parameter estimates. (This variable is not created if neither OUTCOV nor OUTCORR is specified.)

• _TYPE_, a character variable that identifies the type of observation. A description of the _TYPE_ variable values is given below.

• variables for model parameters

The variables for the model parameters are named as follows:

ERRORVAR    This numeric variable contains the variance estimate. The _TYPE_=EST observation for this variable contains the estimated error variance, and the remaining observations are missing.

MU          This numeric variable contains values for the mean parameter for the model. (This variable is not created if NOCONSTANT is specified.)

MAj_k       These numeric variables contain values for the moving-average parameters. The variables for moving-average parameters are named MAj_k, where j is the factor number and k is the index of the parameter within a factor.

ARj_k       These numeric variables contain values for the autoregressive parameters. The variables for autoregressive parameters are named ARj_k, where j is the factor number and k is the index of the parameter within a factor.

Ij_k        These variables contain values for the transfer function parameters. Variables for transfer function parameters are named Ij_k, where j is the number of the INPUT variable associated with the transfer function component and k is the number of the parameter for the particular INPUT variable. INPUT variables are numbered according to the order in which they appear in the INPUT= list.

_STATUS_    This variable describes the convergence status of the model. A value of 0_CONVERGED indicates that the model converged.

The value of the _TYPE_ variable for each observation indicates the kind of value contained in the variables for model parameters for the observation. The OUTEST= data set contains observations with the following _TYPE_ values:

EST       The observation contains parameter estimates.

STD       The observation contains approximate standard errors of the estimates.

CORR      The observation contains correlations of the estimates. OUTCORR must be specified to get these observations.

COV       The observation contains covariances of the estimates. OUTCOV must be specified to get these observations.

FACTOR    The observation contains values that identify for each parameter the factor that contains it. Negative values indicate denominator factors in transfer function models.

LAG       The observation contains values that identify the lag associated with each parameter.

SHIFT     The observation contains values that identify the shift associated with the input series for the parameter.

The values given for _TYPE_=FACTOR, _TYPE_=LAG, or _TYPE_=SHIFT observations enable you to reconstruct the model employed when provided with only the OUTEST= data set.

OUTEST= Examples

This section clarifies how model parameters are stored in the OUTEST= data set with two examples.

Consider the following example:

   proc arima data=input;
      identify var=y cross=(x1 x2);
      estimate p=(1)(6) q=(1,3)(12) input=(x1 x2) outest=est;
   run;

   proc print data=est;
   run;

The model specified by these statements is

$$Y_t = \mu + \omega_{1,0}X_{1,t} + \omega_{2,0}X_{2,t} + \frac{(1 - \theta_{11}B - \theta_{12}B^3)(1 - \theta_{21}B^{12})}{(1 - \phi_{11}B)(1 - \phi_{21}B^6)}a_t$$

The OUTEST= data set contains the values shown in Table 7.10.

Table 7.10  OUTEST= Data Set for First Example

   Obs  _TYPE_   Y     MU     MA1_1    MA1_2    MA2_1    AR1_1    AR2_1    I1_1      I2_1
   1    EST      σ²    μ      θ11      θ12      θ21      φ11      φ21      ω1,0      ω2,0
   2    STD      .     se μ   se θ11   se θ12   se θ21   se φ11   se φ21   se ω1,0   se ω2,0
   3    FACTOR   .     0      1        1        2        1        2        1         1
   4    LAG      .     0      1        3        12       1        6        0         0
   5    SHIFT    .     0      0        0        0        0        0        0         0

Note that the symbols in the rows for _TYPE_=EST and _TYPE_=STD in Table 7.10 would be numeric values in a real data set.

Next, consider the following example:

   proc arima data=input;
      identify var=y cross=(x1 x2);
      estimate p=1 q=1
               input=(2 $ (1)/(1,2)x1  1 $ /(1)x2)
               outest=est;
   run;

   proc print data=est;
   run;

The model specified by these statements is

$$Y_t = \mu + \frac{(\omega_{10} - \omega_{11}B)}{(1 - \delta_{11}B - \delta_{12}B^2)}X_{1,t-2} + \frac{\omega_{20}}{(1 - \delta_{21}B)}X_{2,t-1} + \frac{(1 - \theta_1 B)}{(1 - \phi_1 B)}a_t$$

The OUTEST= data set contains the values shown in Table 7.11.

Table 7.11  OUTEST= Data Set for Second Example

   Obs  _TYPE_   Y     MU     MA1_1   AR1_1   I1_1     I1_2     I1_3     I1_4     I2_1     I2_2
   1    EST      σ²    μ      θ1      φ1      ω10      ω11      δ11      δ12      ω20      δ21
   2    STD      .     se μ   se θ1   se φ1   se ω10   se ω11   se δ11   se δ12   se ω20   se δ21
   3    FACTOR   .     0      1       1       1        1        -1       -1       1        -1
   4    LAG      .     0      1       1       0        1        1        2        0        1
   5    SHIFT    .     0      0       0       2        2        2        2        1        1


OUTMODEL= SAS Data Set

The OUTMODEL= option in the ESTIMATE statement writes an output data set that enables you to reconstruct the model. The OUTMODEL= data set contains much the same information as the OUTEST= data set but in a transposed form that might be more useful for some purposes. In addition, the OUTMODEL= data set includes the differencing operators.

The OUTMODEL data set contains the following:

• the BY variables

• _MODLABEL_, a character variable that contains the model label, if it is provided by using the label option in the ESTIMATE statement (otherwise this variable is not created)

• _NAME_, a character variable that contains the name of the response or input variable for the observation

• _TYPE_, a character variable that contains the estimation method that was employed. The value of _TYPE_ can be CLS, ULS, or ML.

• _STATUS_, a character variable that describes the convergence status of the model. A value of 0_CONVERGED indicates that the model converged.

• _PARM_, a character variable that contains the name of the parameter given by the observation. _PARM_ takes on the values ERRORVAR, MU, AR, MA, NUM, DEN, and DIF.

• _VALUE_, a numeric variable that contains the value of the estimate defined by the _PARM_ variable.

• _STD_, a numeric variable that contains the standard error of the estimate.

• _FACTOR_, a numeric variable that indicates the number of the factor to which the parameter belongs.

• _LAG_, a numeric variable that contains the number of the term within the factor that contains the parameter.

• _SHIFT_, a numeric variable that contains the shift value for the input variable associated with the current parameter.

The values of _FACTOR_ and _LAG_ identify which particular MA, AR, NUM, or DEN parameter estimate is given by the _VALUE_ variable. The _NAME_ variable contains the response variable name for the MU, AR, or MA parameters. Otherwise, _NAME_ contains the input variable name associated with NUM or DEN parameter estimates. The _NAME_ variable contains the appropriate variable name associated with the current DIF observation as well. The _VALUE_ variable is 1 for all DIF observations, and the _LAG_ variable indicates the degree of differencing employed.

The observations contained in the OUTMODEL= data set are identified by the _PARM_ variable. A description of the values of the _PARM_ variable follows:


NUMRESID    _VALUE_ contains the number of residuals.

NPARMS      _VALUE_ contains the number of parameters in the model.

NDIFS       _VALUE_ contains the sum of the differencing lags employed for the response variable.

ERRORVAR    _VALUE_ contains the estimate of the innovation variance.

MU          _VALUE_ contains the estimate of the mean term.

AR          _VALUE_ contains the estimate of the autoregressive parameter indexed by the _FACTOR_ and _LAG_ variable values.

MA          _VALUE_ contains the estimate of a moving-average parameter indexed by the _FACTOR_ and _LAG_ variable values.

NUM         _VALUE_ contains the estimate of the parameter in the numerator factor of the transfer function of the input variable indexed by the _FACTOR_, _LAG_, and _SHIFT_ variable values.

DEN         _VALUE_ contains the estimate of the parameter in the denominator factor of the transfer function of the input variable indexed by the _FACTOR_, _LAG_, and _SHIFT_ variable values.

DIF         _VALUE_ contains the difference operator defined by the difference lag given by the value in the _LAG_ variable.
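For illustration, the following statements are a minimal sketch of writing and listing an OUTMODEL= data set; the data set sales, the series y, and the MA(1) specification are hypothetical:

proc arima data=sales;
   identify var=y(1) noprint;
   estimate q=1 method=ml outmodel=model1;
run;
quit;

proc print data=model1;
run;

Each observation of model1 carries one _PARM_ value; the DIF observation records the first difference applied in the IDENTIFY statement.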

OUTSTAT= Data Set

PROC ARIMA writes the diagnostic statistics for a model to an output data set when the OUTSTAT= option is specified in the ESTIMATE statement. The OUTSTAT data set contains the following:

- the BY variables
- _MODLABEL_, a character variable that contains the model label, if it is provided by using the label option in the ESTIMATE statement (otherwise this variable is not created)
- _TYPE_, a character variable that contains the estimation method used. _TYPE_ can have the value CLS, ULS, or ML.
- _STAT_, a character variable that contains the name of the statistic given by the _VALUE_ variable in this observation. _STAT_ takes on the values AIC, SBC, LOGLIK, SSE, NUMRESID, NPARMS, NDIFS, ERRORVAR, MU, CONV, and NITER.
- _VALUE_, a numeric variable that contains the value of the statistic named by the _STAT_ variable

The observations contained in the OUTSTAT= data set are identified by the _STAT_ variable. A description of the values of the _STAT_ variable follows:


AIC         Akaike's information criterion

SBC         Schwarz's Bayesian criterion

LOGLIK      the log-likelihood, if METHOD=ML or METHOD=ULS is specified

SSE         the sum of the squared residuals

NUMRESID    the number of residuals

NPARMS      the number of parameters in the model

NDIFS       the sum of the differencing lags employed for the response variable

ERRORVAR    the estimate of the innovation variance

MU          the estimate of the mean term

CONV        tells if the estimation converged. The value of 0 signifies that estimation converged. Nonzero values reflect convergence problems.

NITER       the number of iterations

Remark. CONV takes an integer value that corresponds to the error condition of the parameter estimation process. The value of 0 signifies that the estimation process has converged. Higher values signify convergence problems of increasing severity. Specifically:

- CONV = 0 indicates that the estimation process has converged.
- CONV = 1 or 2 indicates that the estimation process has run into numerical problems (such as encountering an unstable model or a ridge) during the iterations.
- CONV >= 3 indicates that the estimation process has failed to converge.
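As a brief sketch (the data set sales, series y, and MA(1) model are hypothetical), the following statements write the diagnostic statistics to an OUTSTAT= data set and check the CONV value in a DATA step:

proc arima data=sales;
   identify var=y(1) noprint;
   estimate q=1 method=ml outstat=stats;
run;
quit;

data _null_;
   set stats;
   if _STAT_ = 'CONV' then do;
      if _VALUE_ = 0 then put 'Estimation converged.';
      else put 'Convergence problem: CONV = ' _value_;
   end;
run;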

Printed Output

The ARIMA procedure produces printed output for each of the IDENTIFY, ESTIMATE, and FORECAST statements. The output produced by each ARIMA statement is described in the following sections. If ODS Graphics is enabled, the line printer plots mentioned below are replaced by the corresponding ODS plots.

IDENTIFY Statement Printed Output

The printed output of the IDENTIFY statement consists of the following:

- a table of summary statistics, including the name of the response variable, any specified periods of differencing, the mean and standard deviation of the response series after differencing, and the number of observations after differencing
- a plot of the sample autocorrelation function for lags up to and including the NLAG= option value. Standard errors of the autocorrelations also appear to the right of the autocorrelation plot if the value of the LINESIZE= option is sufficiently large. The standard errors are derived using Bartlett's approximation (Box and Jenkins 1976, p. 177). The approximation for a standard error for the estimated autocorrelation function at lag k is based on a null hypothesis that a pure moving-average Gaussian process of order k-1 generated the time series. The relative position of an approximate 95% confidence interval under this null hypothesis is indicated by the dots in the plot, while the asterisks represent the relative magnitude of the autocorrelation value.
- a plot of the sample inverse autocorrelation function. See the section "The Inverse Autocorrelation Function" on page 239 for more information about the inverse autocorrelation function.
- a plot of the sample partial autocorrelation function
- a table of test statistics for the hypothesis that the series is white noise. These test statistics are the same as the tests for white noise residuals produced by the ESTIMATE statement and are described in the section "Estimation Details" on page 248.
- a plot of the sample cross-correlation function for each series specified in the CROSSCORR= option. If a model was previously estimated for a variable in the CROSSCORR= list, the cross-correlations for that series are computed for the prewhitened input and response series. For each input variable with a prewhitening filter, the cross-correlation report for the input series includes the following:
  - a table of test statistics for the hypothesis of no cross-correlation between the input and response series
  - the prewhitening filter used for the prewhitening transformation of the predictor and response variables
- ESACF tables if the ESACF option is used
- MINIC table if the MINIC option is used
- SCAN table if the SCAN option is used
- STATIONARITY test results if the STATIONARITY option is used

ESTIMATE Statement Printed Output

The printed output of the ESTIMATE statement consists of the following:

- if the PRINTALL option is specified, the preliminary parameter estimates and an iteration history that shows the sequence of parameter estimates tried during the fitting process
- a table of parameter estimates that shows the following for each parameter: the parameter name, the parameter estimate, the approximate standard error, t value, approximate probability (Pr > |t|), the lag for the parameter, the input variable name for the parameter, and the lag or "Shift" for the input variable
- the estimates of the constant term, the innovation variance (Variance Estimate), the innovation standard deviation (Std Error Estimate), Akaike's information criterion (AIC), Schwarz's Bayesian criterion (SBC), and the number of residuals
- the correlation matrix of the parameter estimates
- a table of test statistics for the hypothesis that the residuals of the model are white noise. The table is titled "Autocorrelation Check of Residuals."
- if the PLOT option is specified, autocorrelation, inverse autocorrelation, and partial autocorrelation function plots of the residuals
- if an INPUT variable has been modeled in such a way that prewhitening is performed in the IDENTIFY step, a table of test statistics titled "Crosscorrelation Check of Residuals." The test statistic is based on the chi-square approximation suggested by Box and Jenkins (1976, pp. 395–396). The cross-correlation function is computed by using the residuals from the model as one series and the prewhitened input variable as the other series.
- if the GRID option is specified, the sum-of-squares or likelihood surface over a grid of parameter values near the final estimates
- a summary of the estimated model that shows the autoregressive factors, moving-average factors, and transfer function factors in backshift notation with the estimated parameter values

OUTLIER Statement Printed Output

The printed output of the OUTLIER statement consists of the following:

- a summary that contains information about the maximum number of outliers searched, the number of outliers actually detected, and the significance level used in the outlier detection
- a table that contains the results of the outlier detection process. The outliers are listed in the order in which they are found. This table contains the following columns:
  - The Obs column contains the observation number of the start of the level shift.
  - If an ID= option is specified, then the Time ID column contains the time identification labels of the start of the outlier.
  - The Type column lists the type of the outlier.
  - The Estimate column contains $\hat{\beta}$, the estimate of the regression coefficient of the shock signature.
  - The Chi-Square column lists the value of the chi-square test statistic.
  - The Approx Prob > ChiSq column lists the approximate p-value of the test statistic.

FORECAST Statement Printed Output

The printed output of the FORECAST statement consists of the following:

- a summary of the estimated model
- a table of forecasts with the following columns:
  - The Obs column contains the observation number.
  - The Forecast column contains the forecast values.
  - The Std Error column contains the forecast standard errors.
  - The Lower and Upper columns contain the approximate 95% confidence limits. The ALPHA= option can be used to change the confidence interval for forecasts.
  - If the PRINTALL option is specified, the forecast table also includes columns for the actual values of the response series (Actual) and the residual values (Residual).

ODS Table Names

PROC ARIMA assigns a name to each table it creates. You can use these names to reference the table when you use the Output Delivery System (ODS) to select tables and create output data sets. These names are listed in Table 7.12; a brief usage sketch follows the table.

Table 7.12  ODS Tables Produced by PROC ARIMA

ODS Table Name       Description                                           Statement   Option
ChiSqAuto            chi-square statistics table for autocorrelation       IDENTIFY
ChiSqCross           chi-square statistics table for cross-correlations    IDENTIFY    CROSSCORR
CorrGraph            Correlations graph                                    IDENTIFY
DescStats            Descriptive statistics                                IDENTIFY
ESACF                Extended sample autocorrelation function              IDENTIFY    ESACF
ESACFPValues         ESACF probability values                              IDENTIFY    ESACF
IACFGraph            Inverse autocorrelations graph                        IDENTIFY
InputDescStats       Input descriptive statistics                          IDENTIFY
MINIC                Minimum information criterion                         IDENTIFY    MINIC
PACFGraph            Partial autocorrelations graph                        IDENTIFY
SCAN                 Squared canonical correlation estimates               IDENTIFY    SCAN
SCANPValues          SCAN chi-square probability values                    IDENTIFY    SCAN
StationarityTests    Stationarity tests                                    IDENTIFY    STATIONARITY
TentativeOrders      Tentative order selection                             IDENTIFY    MINIC, ESACF, or SCAN
ARPolynomial         Filter equations                                      ESTIMATE
ChiSqAuto            chi-square statistics table for autocorrelation       ESTIMATE
ChiSqCross           chi-square statistics table for cross-correlations    ESTIMATE
CorrB                Correlations of the estimates                         ESTIMATE
DenPolynomial        Filter equations                                      ESTIMATE
FitStatistics        Fit statistics                                        ESTIMATE
IterHistory          Iteration history                                     ESTIMATE    PRINTALL
InitialAREstimates   Initial autoregressive parameter estimates            ESTIMATE
InitialMAEstimates   Initial moving-average parameter estimates            ESTIMATE
InputDescription     Input description                                     ESTIMATE
MAPolynomial         Filter equations                                      ESTIMATE
ModelDescription     Model description                                     ESTIMATE
NumPolynomial        Filter equations                                      ESTIMATE
ParameterEstimates   Parameter estimates                                   ESTIMATE
PrelimEstimates      Preliminary estimates                                 ESTIMATE
ObjectiveGrid        Objective function grid matrix                        ESTIMATE    GRID
OptSummary           ARIMA estimation optimization                         ESTIMATE    PRINTALL
OutlierDetails       Detected outliers                                     OUTLIER
Forecasts            Forecast                                              FORECAST
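For example, the following statements are a minimal sketch (the data set and model are hypothetical) of using the ODS OUTPUT statement with two of the table names in Table 7.12 to capture ESTIMATE statement results as data sets:

ods output ParameterEstimates=parms FitStatistics=fitstats;
proc arima data=sales;
   identify var=y(1) noprint;
   estimate q=1 method=ml;
run;
quit;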

Statistical Graphics

This section provides information about the basic ODS statistical graphics produced by the ARIMA procedure. To request graphics with PROC ARIMA, you must first enable ODS Graphics by specifying the ODS GRAPHICS ON; statement. See Chapter 21, "Statistical Graphics Using ODS" (SAS/STAT User's Guide), for more information. The main types of plots available are as follows:

- plots useful in the trend and correlation analysis of the dependent and input series
- plots useful for the residual analysis of an estimated model
- forecast plots


You can obtain most plots relevant to the specified model by default if ODS Graphics is enabled. For finer control of the graphics, you can use the PLOTS= option in the PROC ARIMA statement. The following example is a simple illustration of how to use the PLOTS= option.

Airline Series: Illustration of ODS Graphics

The series in this example, the monthly airline passenger series, is also discussed later, in Example 7.2. The following statements fit an ARIMA(0,1,1)(0,1,1)12 model without a mean term to the logarithms of the airline passenger series, xlog. Notice the use of the global plot option ONLY in the PLOTS= option of the PROC ARIMA statement. It suppresses the production of default graphics and produces only the plots specified by the subsequent RESIDUAL and FORECAST plot options. The RESIDUAL(SMOOTH) plot specification produces a time series plot of residuals that has an overlaid loess fit; see Figure 7.21. The FORECAST(FORECASTS) option produces a plot that shows the one-step-ahead forecasts, as well as the multistep-ahead forecasts; see Figure 7.22.

proc arima data=seriesg plots(only)=(residual(smooth) forecast(forecasts));
   identify var=xlog(1,12);
   estimate q=(1)(12) noint method=ml;
   forecast id=date interval=month;
run;


Figure 7.21 Residual Plot of the Airline Model


Figure 7.22 Forecast Plot of the Airline Model

ODS Graph Names

PROC ARIMA assigns a name to each graph it creates by using ODS. You can use these names to reference the graphs when you use ODS. The names are listed in Table 7.13; a brief sketch of unpacking the default panels follows the table.

Table 7.13  ODS Graphics Produced by PROC ARIMA

ODS Graph Name      Plot Description                                              Option
SeriesPlot          Time series plot of the dependent series                      PLOTS(UNPACK)
SeriesACFPlot       Autocorrelation plot of the dependent series                  PLOTS(UNPACK)
SeriesPACFPlot      Partial-autocorrelation plot of the dependent series          PLOTS(UNPACK)
SeriesIACFPlot      Inverse-autocorrelation plot of the dependent series          PLOTS(UNPACK)
SeriesCorrPanel     Series trend and correlation analysis panel                   Default
CrossCorrPanel      Cross-correlation plots, either individual or paneled;        Default
                    they are numbered 1, 2, and so on as needed
ResidualACFPlot     Residual-autocorrelation plot                                 PLOTS(UNPACK)
ResidualPACFPlot    Residual-partial-autocorrelation plot                         PLOTS(UNPACK)
ResidualIACFPlot    Residual-inverse-autocorrelation plot                         PLOTS(UNPACK)
ResidualWNPlot      Residual-white-noise-probability plot                         PLOTS(UNPACK)
ResidualHistogram   Residual histogram                                            PLOTS(UNPACK)
ResidualQQPlot      Residual normal Q-Q plot                                      PLOTS(UNPACK)
ResidualPlot        Time series plot of residuals with a superimposed smoother    PLOTS=RESIDUAL(SMOOTH)
ForecastsOnlyPlot   Time series plot of multistep forecasts                       Default
ForecastsPlot       Time series plot of one-step-ahead as well as multistep       PLOTS=FORECAST(FORECASTS)
                    forecasts
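As a sketch (data set and model hypothetical), the PLOTS(UNPACK) global option requests the individual plots listed in Table 7.13 in place of the default panels:

ods graphics on;
proc arima data=sales plots(unpack);
   identify var=y(1);
   estimate q=1 method=ml;
   forecast lead=12;
run;
quit;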

Examples: ARIMA Procedure

Example 7.1: Simulated IMA Model

This example illustrates the ARIMA procedure results for a case where the true model is known. An integrated moving-average model is used for this illustration. The following DATA step generates a pseudo-random sample of 100 periods from the ARIMA(0,1,1) process $u_t = u_{t-1} + a_t - 0.8 a_{t-1}$, $a_t \sim \text{iid } N(0,1)$:

title1 'Simulated IMA(1,1) Series';
data a;
   u1 = 0.9;
   a1 = 0;
   do i = -50 to 100;
      a = rannor( 32565 );
      u = u1 + a - .8 * a1;
      if i > 0 then output;
      a1 = a;
      u1 = u;
   end;
run;

The following ARIMA procedure statements identify and estimate the model:

/*-- Simulated IMA Model --*/
proc arima data=a;
   identify var=u;
   run;
   identify var=u(1);
   run;
   estimate q=1;
   run;
quit;

The graphical series correlation analysis output of the first IDENTIFY statement is shown in Output 7.1.1. The output shows the behavior of the sample autocorrelation function when the process is nonstationary. Note that in this case the estimated autocorrelations are not very high, even at small lags. Nonstationarity is reflected in a pattern of significant autocorrelations that do not decline quickly with increasing lag, not in the size of the autocorrelations.


Output 7.1.1 Correlation Analysis from the First IDENTIFY Statement

The second IDENTIFY statement differences the series. The results of the second IDENTIFY statement are shown in Output 7.1.2. This output shows autocorrelation, inverse autocorrelation, and partial autocorrelation functions typical of MA(1) processes.


Output 7.1.2 Correlation Analysis from the Second IDENTIFY Statement

The ESTIMATE statement fits an ARIMA(0,1,1) model to the simulated data. Note that in this case the parameter estimates are reasonably close to the values used to generate the simulated data ($\mu = 0$, $\hat{\mu} = 0.02$; $\theta_1 = 0.8$, $\hat{\theta}_1 = 0.79$; $\sigma^2 = 1$, $\hat{\sigma}^2 = 0.82$). Moreover, the graphical analysis of the residuals shows no model inadequacies (see Output 7.1.4 and Output 7.1.5). The ESTIMATE statement results are shown in Output 7.1.3.

Output 7.1.3  Output from Fitting ARIMA(0,1,1) Model

Conditional Least Squares Estimation

Parameter   Estimate   Standard Error   t Value   Approx Pr > |t|   Lag
MU          0.02056    0.01972          1.04      0.2997            0
MA1,1       0.79142    0.06474          12.22     <.0001            1


Generalized Durbin-Watson Tests

Vinod (1973) generalized the Durbin-Watson statistic:

$$d_j = \frac{\sum_{t=j+1}^{N} (\hat{\nu}_t - \hat{\nu}_{t-j})^2}{\sum_{t=1}^{N} \hat{\nu}_t^2}$$

where $\hat{\nu}$ are OLS residuals. Using matrix notation,

$$d_j = \frac{\mathbf{Y}'\mathbf{M}\mathbf{A}_j'\mathbf{A}_j\mathbf{M}\mathbf{Y}}{\mathbf{Y}'\mathbf{M}\mathbf{Y}}$$

where $\mathbf{M} = \mathbf{I}_N - \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'$ and $\mathbf{A}_j$ is an $(N-j) \times N$ matrix:

$$\mathbf{A}_j = \begin{bmatrix} -1 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 0 & \cdots & 0 & 1 & 0 & \cdots \\ \vdots & & \ddots & & & \ddots & & \vdots \\ 0 & \cdots & 0 & -1 & 0 & \cdots & 0 & 1 \end{bmatrix}$$

and there are $j-1$ zeros between $-1$ and $1$ in each row of the matrix $\mathbf{A}_j$.

The QR factorization of the design matrix $\mathbf{X}$ yields an $N \times N$ orthogonal matrix $\mathbf{Q}$:

$$\mathbf{X} = \mathbf{Q}\mathbf{R}$$

where $\mathbf{R}$ is an $N \times k$ upper triangular matrix. There exists an $N \times (N-k)$ submatrix $\mathbf{Q}_1$ of $\mathbf{Q}$ such that $\mathbf{Q}_1\mathbf{Q}_1' = \mathbf{M}$ and $\mathbf{Q}_1'\mathbf{Q}_1 = \mathbf{I}_{N-k}$. Consequently, the generalized Durbin-Watson statistic is stated as a ratio of two quadratic forms:

$$d_j = \frac{\sum_{l=1}^{n} \lambda_{jl}\,\xi_l^2}{\sum_{l=1}^{n} \xi_l^2}$$

where $\lambda_{j1}, \ldots, \lambda_{jn}$ are the upper $n$ eigenvalues of $\mathbf{M}\mathbf{A}_j'\mathbf{A}_j\mathbf{M}$, $\xi_l$ is a standard normal variate, and $n = \min(N-k, N-j)$. These eigenvalues are obtained by a singular value decomposition of $\mathbf{Q}_1'\mathbf{A}_j'$ (Golub and Van Loan 1989; Savin and White 1978).

The marginal probability (or p-value) for $d_j$ given $c_0$ is

$$\mathrm{Prob}\left(\frac{\sum_{l=1}^{n} \lambda_{jl}\,\xi_l^2}{\sum_{l=1}^{n} \xi_l^2} < c_0\right) = \mathrm{Prob}(q_j < 0)$$

where

$$q_j = \sum_{l=1}^{n} (\lambda_{jl} - c_0)\,\xi_l^2$$

When the null hypothesis $H_0\colon \varphi_j = 0$ holds, the quadratic form $q_j$ has the characteristic function

$$\phi_j(t) = \prod_{l=1}^{n} \left(1 - 2(\lambda_{jl} - c_0)it\right)^{-1/2}$$

The distribution function is uniquely determined by this characteristic function:

$$F(x) = \frac{1}{2} + \frac{1}{2\pi}\int_0^\infty \frac{e^{itx}\phi_j(-t) - e^{-itx}\phi_j(t)}{it}\, dt$$

For example, to test $H_0\colon \varphi_4 = 0$ given $\varphi_1 = \varphi_2 = \varphi_3 = 0$ against $H_1\colon \varphi_4 > 0$, the marginal probability (p-value) can be used:

$$F(0) = \frac{1}{2} + \frac{1}{2\pi}\int_0^\infty \frac{\phi_4(-t) - \phi_4(t)}{it}\, dt$$

where

$$\phi_4(t) = \prod_{l=1}^{n} \left(1 - 2(\lambda_{4l} - \hat{d}_4)it\right)^{-1/2}$$

and $\hat{d}_4$ is the calculated value of the fourth-order Durbin-Watson statistic.

In the Durbin-Watson test, the marginal probability indicates positive autocorrelation ($\varphi_j > 0$) if it is less than the level of significance ($\alpha$), while you can conclude that a negative autocorrelation ($\varphi_j < 0$) exists if the marginal probability based on the computed Durbin-Watson statistic is greater than $1 - \alpha$. Wallis (1972) presented tables for bounds tests of fourth-order autocorrelation, and Vinod (1973) has given tables for a 5% significance level for orders two to four. Using the AUTOREG procedure, you can calculate the exact p-values for the general order of Durbin-Watson test statistics. Tests for the absence of autocorrelation of order p can be performed sequentially; at the jth step, test $H_0\colon \varphi_j = 0$ given $\varphi_1 = \cdots = \varphi_{j-1} = 0$ against $\varphi_j \neq 0$. However, the size of the sequential test is not known.

The Durbin-Watson statistic is computed from the OLS residuals, while that of the autoregressive error model uses residuals that are the difference between the predicted values and the actual values. When you use the Durbin-Watson test from the residuals of the autoregressive error model, you must be aware that this test is only an approximation. See "Autoregressive Error Model" on page 357 earlier in this chapter. If there are missing values, the Durbin-Watson statistic is computed using all the nonmissing values and ignoring the gaps caused by missing residuals. This does not affect the significance level of the resulting test, although the power of the test against certain alternatives may be adversely affected. Savin and White (1978) have examined the use of the Durbin-Watson statistic with missing values.
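For example, the following statements, a sketch with a hypothetical data set and regressor, request generalized Durbin-Watson statistics up to order 4 together with their exact p-values by using the DW= and DWPROB options:

proc autoreg data=a;
   model y = x / dw=4 dwprob;
run;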

Enhanced Durbin-Watson Probability Computation

The Durbin-Watson probability calculations have been enhanced to compute the p-value of the generalized Durbin-Watson statistic for large sample sizes. Previously, the Durbin-Watson probabilities were calculated only for small sample sizes.

Consider the following linear regression model:

$$\mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{u}$$
$$u_t + \varphi_j u_{t-j} = \nu_t, \qquad t = 1, \ldots, N$$

where $\mathbf{X}$ is an $N \times k$ data matrix, $\boldsymbol{\beta}$ is a $k \times 1$ coefficient vector, $\mathbf{u}$ is an $N \times 1$ disturbance vector, and $\nu_t$ is a sequence of independent normal error terms with mean 0 and variance $\sigma^2$. The generalized Durbin-Watson statistic is written as

$$DW_j = \frac{\hat{\mathbf{u}}'\mathbf{A}_j'\mathbf{A}_j\hat{\mathbf{u}}}{\hat{\mathbf{u}}'\hat{\mathbf{u}}}$$

where $\hat{\mathbf{u}}$ is a vector of OLS residuals and $\mathbf{A}_j$ is a $(T-j) \times T$ matrix. The generalized Durbin-Watson statistic $DW_j$ can be rewritten as

$$DW_j = \frac{\mathbf{Y}'\mathbf{M}\mathbf{A}_j'\mathbf{A}_j\mathbf{M}\mathbf{Y}}{\mathbf{Y}'\mathbf{M}\mathbf{Y}} = \frac{\boldsymbol{\eta}'(\mathbf{Q}_1'\mathbf{A}_j'\mathbf{A}_j\mathbf{Q}_1)\boldsymbol{\eta}}{\boldsymbol{\eta}'\boldsymbol{\eta}}$$

where $\mathbf{Q}_1'\mathbf{Q}_1 = \mathbf{I}_{T-k}$, $\mathbf{Q}_1'\mathbf{X} = \mathbf{0}$, and $\boldsymbol{\eta} = \mathbf{Q}_1'\mathbf{u}$.

The marginal probability for the Durbin-Watson statistic is

$$\Pr(DW_j < c) = \Pr(h < 0)$$

where $h = \boldsymbol{\eta}'(\mathbf{Q}_1'\mathbf{A}_j'\mathbf{A}_j\mathbf{Q}_1 - c\mathbf{I})\boldsymbol{\eta}$.

The p-value or the marginal probability for the generalized Durbin-Watson statistic is computed by numerical inversion of the characteristic function $\phi(u)$ of the quadratic form $h$. The trapezoidal rule approximation to the marginal probability $\Pr(h < 0)$ is

$$\Pr(h < 0) = \frac{1}{2} - \sum_{k=0}^{K} \frac{\mathrm{Im}\left[\phi\left((k + \tfrac{1}{2})\Delta\right)\right]}{\pi\left(k + \tfrac{1}{2}\right)} + E_I(\Delta) + E_T(K)$$

where $\mathrm{Im}[\phi(\cdot)]$ is the imaginary part of the characteristic function, and $E_I(\Delta)$ and $E_T(K)$ are integration and truncation errors, respectively. Refer to Davies (1973) for numerical inversion of the characteristic function.

Ansley, Kohn, and Shively (1992) proposed a numerically efficient algorithm that requires O(N) operations for evaluation of the characteristic function $\phi(u)$. The characteristic function is denoted as

$$\phi(u) = \left|\mathbf{I} - 2iu\left(\mathbf{Q}_1'\mathbf{A}_j'\mathbf{A}_j\mathbf{Q}_1 - c\mathbf{I}_{N-k}\right)\right|^{-1/2} = |\mathbf{V}|^{-1/2}\left|\mathbf{X}'\mathbf{V}^{-1}\mathbf{X}\right|^{-1/2}\left|\mathbf{X}'\mathbf{X}\right|^{1/2}$$

where $\mathbf{V} = (1 + 2iuc)\mathbf{I} - 2iu\mathbf{A}_j'\mathbf{A}_j$ and $i = \sqrt{-1}$. By applying the Cholesky decomposition to the complex matrix $\mathbf{V}$, you can obtain the lower triangular matrix $\mathbf{G}$ that satisfies $\mathbf{V} = \mathbf{G}\mathbf{G}'$. Therefore, the characteristic function can be evaluated in O(N) operations by using the following formula:

$$\phi(u) = |\mathbf{G}|^{-1}\left|\mathbf{X}^{*\prime}\mathbf{X}^{*}\right|^{-1/2}\left|\mathbf{X}'\mathbf{X}\right|^{1/2}$$

where $\mathbf{X}^{*} = \mathbf{G}^{-1}\mathbf{X}$. Refer to Ansley, Kohn, and Shively (1992) for more information on evaluation of the characteristic function.

Tests for Serial Correlation with Lagged Dependent Variables

When regressors contain lagged dependent variables, the Durbin-Watson statistic ($d_1$) for the first-order autocorrelation is biased toward 2 and has reduced power. Wallis (1972) shows that the bias in the Durbin-Watson statistic ($d_4$) for the fourth-order autocorrelation is smaller than the bias in $d_1$ in the presence of a first-order lagged dependent variable. Durbin (1970) proposed two alternative statistics (Durbin h and t) that are asymptotically equivalent. The h statistic is written as

$$h = \hat{\rho}\sqrt{N/(1 - N\hat{V})}$$

where $\hat{\rho} = \sum_{t=2}^{N}\hat{\nu}_t\hat{\nu}_{t-1} \big/ \sum_{t=1}^{N}\hat{\nu}_t^2$ and $\hat{V}$ is the least squares variance estimate for the coefficient of the lagged dependent variable. Durbin's t test consists of regressing the OLS residuals $\hat{\nu}_t$ on explanatory variables and $\hat{\nu}_{t-1}$ and testing the significance of the estimate for the coefficient of $\hat{\nu}_{t-1}$. Inder (1984) shows that the Durbin-Watson test for the absence of first-order autocorrelation is generally more powerful than the h test in finite samples. Refer to Inder (1986) and King and Wu (1991) for the Durbin-Watson test in the presence of lagged dependent variables.
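As a sketch (the variable ylag is assumed to be the lagged dependent variable created in an earlier DATA step), the LAGDEP= option requests Durbin's h statistic in place of the biased Durbin-Watson $d_1$:

proc autoreg data=a;
   model y = ylag x / lagdep=ylag;
run;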


Heteroscedasticity and Normality Tests

Portmanteau Q Test

For nonlinear time series models, the portmanteau test statistic based on squared residuals is used to test for independence of the series (McLeod and Li 1983):

$$Q(q) = N(N+2)\sum_{i=1}^{q} \frac{r(i;\, \hat{\nu}_t^2)}{N - i}$$

where

$$r(i;\, \hat{\nu}_t^2) = \frac{\sum_{t=i+1}^{N}(\hat{\nu}_t^2 - \hat{\sigma}^2)(\hat{\nu}_{t-i}^2 - \hat{\sigma}^2)}{\sum_{t=1}^{N}(\hat{\nu}_t^2 - \hat{\sigma}^2)^2}, \qquad \hat{\sigma}^2 = \frac{1}{N}\sum_{t=1}^{N}\hat{\nu}_t^2$$

This Q statistic is used to test the nonlinear effects (for example, GARCH effects) present in the residuals. The GARCH(p, q) process can be considered as an ARMA(max(p, q), p) process. See the section "Predicting the Conditional Variance" on page 383 later in this chapter. Therefore, the Q statistic calculated from the squared residuals can be used to identify the order of the GARCH process.

Lagrange Multiplier Test for ARCH Disturbances

Engle (1982) proposed a Lagrange multiplier test for ARCH disturbances. The test statistic is asymptotically equivalent to the test used by Breusch and Pagan (1979). Engle's Lagrange multiplier test for the qth order ARCH process is written

$$LM(q) = \frac{N\,\mathbf{W}'\mathbf{Z}(\mathbf{Z}'\mathbf{Z})^{-1}\mathbf{Z}'\mathbf{W}}{\mathbf{W}'\mathbf{W}}$$

where

$$\mathbf{W} = \left(\frac{\hat{\nu}_1^2}{\hat{\sigma}^2}, \ldots, \frac{\hat{\nu}_N^2}{\hat{\sigma}^2}\right)'$$

and

$$\mathbf{Z} = \begin{bmatrix} 1 & \hat{\nu}_0^2 & \cdots & \hat{\nu}_{-q+1}^2 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & \hat{\nu}_{N-1}^2 & \cdots & \hat{\nu}_{N-q}^2 \end{bmatrix}$$

The presample values ($\nu_0^2, \ldots, \nu_{-q+1}^2$) have been set to 0. Note that the LM(q) tests may have different finite-sample properties depending on the presample values, though they are asymptotically equivalent regardless of the presample values. The LM and Q statistics are computed from the OLS residuals assuming that disturbances are white noise. The Q and LM statistics have an approximate $\chi^2_{(q)}$ distribution under the white-noise null hypothesis.
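For example, the following statements, a sketch with a hypothetical data set and series, request both the Q and LM statistics described above by using the ARCHTEST option:

proc autoreg data=returns;
   model r = / archtest;
run;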

Normality Test

Based on skewness and kurtosis, Jarque and Bera (1980) calculated the test statistic

$$T_N = \frac{N}{6}b_1 + \frac{N}{24}(b_2 - 3)^2$$

where

$$\sqrt{b_1} = \frac{\sqrt{N}\sum_{t=1}^{N}\hat{u}_t^3}{\left(\sum_{t=1}^{N}\hat{u}_t^2\right)^{3/2}}, \qquad b_2 = \frac{N\sum_{t=1}^{N}\hat{u}_t^4}{\left(\sum_{t=1}^{N}\hat{u}_t^2\right)^{2}}$$

The $\chi^2(2)$ distribution gives an approximation to the normality test $T_N$. When the GARCH model is estimated, the normality test is obtained using the standardized residuals $\hat{u}_t = \hat{\epsilon}_t/\sqrt{h_t}$. The normality test can be used to detect misspecification of the family of ARCH models.
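A minimal sketch of requesting the Jarque-Bera statistic (data set and model hypothetical):

proc autoreg data=a;
   model y = x / nlag=1 method=ml normal;
run;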

Computation of the Chow Test

Consider the linear regression model

$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{u}$$

where the parameter vector $\boldsymbol{\beta}$ contains $k$ elements. Split the observations for this model into two subsets at the break point specified by the CHOW= option, so that $\mathbf{y} = (\mathbf{y}_1', \mathbf{y}_2')'$, $\mathbf{X} = (\mathbf{X}_1', \mathbf{X}_2')'$, and $\mathbf{u} = (\mathbf{u}_1', \mathbf{u}_2')'$. Now consider the two linear regressions for the two subsets of the data modeled separately,

$$\mathbf{y}_1 = \mathbf{X}_1\boldsymbol{\beta}_1 + \mathbf{u}_1$$
$$\mathbf{y}_2 = \mathbf{X}_2\boldsymbol{\beta}_2 + \mathbf{u}_2$$

where the number of observations from the first set is $n_1$ and the number of observations from the second set is $n_2$. The Chow test statistic is used to test the null hypothesis $H_0\colon \boldsymbol{\beta}_1 = \boldsymbol{\beta}_2$ conditional on the same error variance $V(\mathbf{u}_1) = V(\mathbf{u}_2)$. The Chow test is computed using three sums of squared errors:

$$F_{\mathrm{chow}} = \frac{(\hat{\mathbf{u}}'\hat{\mathbf{u}} - \hat{\mathbf{u}}_1'\hat{\mathbf{u}}_1 - \hat{\mathbf{u}}_2'\hat{\mathbf{u}}_2)/k}{(\hat{\mathbf{u}}_1'\hat{\mathbf{u}}_1 + \hat{\mathbf{u}}_2'\hat{\mathbf{u}}_2)/(n_1 + n_2 - 2k)}$$

where $\hat{\mathbf{u}}$ is the regression residual vector from the full set model, $\hat{\mathbf{u}}_1$ is the regression residual vector from the first set model, and $\hat{\mathbf{u}}_2$ is the regression residual vector from the second set model. Under the null hypothesis, the Chow test statistic has an F distribution with $k$ and $(n_1 + n_2 - 2k)$ degrees of freedom, where $k$ is the number of elements in $\boldsymbol{\beta}$. Chow (1960) suggested another test statistic that tests the hypothesis that the mean of prediction errors is 0. The predictive Chow test can also be used when $n_2 < k$. The PCHOW= option computes the predictive Chow test statistic

$$F_{\mathrm{pchow}} = \frac{(\hat{\mathbf{u}}'\hat{\mathbf{u}} - \hat{\mathbf{u}}_1'\hat{\mathbf{u}}_1)/n_2}{\hat{\mathbf{u}}_1'\hat{\mathbf{u}}_1/(n_1 - k)}$$

The predictive Chow test has an F distribution with $n_2$ and $(n_1 - k)$ degrees of freedom.
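For example, the following statements, in which the break point at observation 40 is hypothetical, request both tests:

proc autoreg data=a;
   model y = x / chow=(40) pchow=(40);
run;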

Phillips-Perron Unit Root and Cointegration Testing

Consider the random walk process

$$y_t = y_{t-1} + u_t$$

where the disturbances might be serially correlated with possible heteroscedasticity. Phillips and Perron (1988) proposed the unit root test of the OLS regression model,

$$y_t = \rho y_{t-1} + u_t$$

Let $s^2 = \frac{1}{T-k}\sum_{t=1}^{T}\hat{u}_t^2$ and let $\hat{\sigma}^2$ be the variance estimate of the OLS estimator $\hat{\rho}$, where $\hat{u}_t$ is the OLS residual. You can estimate the asymptotic variance of $\frac{1}{T}\sum_{t=1}^{T}\hat{u}_t^2$ by using the truncation lag $l$:

$$\hat{\lambda} = \sum_{j=0}^{l}\kappa_j\left[1 - j/(l+1)\right]\hat{\gamma}_j$$

where $\kappa_0 = 1$, $\kappa_j = 2$ for $j > 0$, and $\hat{\gamma}_j = \frac{1}{T}\sum_{t=j+1}^{T}\hat{u}_t\hat{u}_{t-j}$.

Then the Phillips-Perron $\hat{Z}_\alpha$ (defined here as $\hat{Z}_\rho$) test (zero mean case) is written

$$\hat{Z}_\rho = T(\hat{\rho} - 1) - \frac{1}{2}T^2\hat{\sigma}^2(\hat{\lambda} - \hat{\gamma}_0)/s^2$$

and has the following limiting distribution:

$$\frac{\frac{1}{2}\left\{B(1)^2 - 1\right\}}{\int_0^1 [B(x)]^2\, dx}$$

where B(·) is a standard Brownian motion. Note that the realization z from the stochastic process B(x) is distributed as N(0, x) and thus $B(1)^2 \sim \chi_1^2$. Note that $P(\hat{\rho} < 1) \approx 0.68$ as $T \to \infty$, which shows that the limiting distribution is skewed to the left.

Let $\hat{\tau}$ be the $\tau$ statistic for $\hat{\rho}$. The Phillips-Perron $\hat{Z}_t$ (defined here as $\hat{Z}_\tau$) test is written

$$\hat{Z}_\tau = (\hat{\gamma}_0/\hat{\lambda})^{1/2}\,\hat{\tau} - \frac{1}{2}T\hat{\sigma}(\hat{\lambda} - \hat{\gamma}_0)/(s\,\hat{\lambda}^{1/2})$$

and its limiting distribution is derived as

$$\frac{\frac{1}{2}\left\{[B(1)]^2 - 1\right\}}{\left\{\int_0^1 [B(x)]^2\, dx\right\}^{1/2}}$$

When you test the regression model $y_t = \mu + \rho y_{t-1} + u_t$ for the true random walk process (single mean case), the limiting distribution of the statistic $\hat{Z}_\rho$ is written

$$\frac{\frac{1}{2}\left\{[B(1)]^2 - 1\right\} - B(1)\int_0^1 B(x)\, dx}{\int_0^1 [B(x)]^2\, dx - \left[\int_0^1 B(x)\, dx\right]^2}$$

while the limiting distribution of the statistic $\hat{Z}_\tau$ is given by

$$\frac{\frac{1}{2}\left\{[B(1)]^2 - 1\right\} - B(1)\int_0^1 B(x)\, dx}{\left\{\int_0^1 [B(x)]^2\, dx - \left[\int_0^1 B(x)\, dx\right]^2\right\}^{1/2}}$$

Finally, the limiting distribution of the Phillips-Perron test for the random walk with drift process $y_t = \mu + \rho y_{t-1} + u_t$ (trend case) can be derived as

$$\begin{bmatrix} 0 & c & 0 \end{bmatrix}\mathbf{V}^{-1}\begin{bmatrix} B(1) \\ \frac{B(1)^2 - 1}{2} \\ B(1) - \int_0^1 B(x)\, dx \end{bmatrix}$$

where $c = 1$ for $\hat{Z}_\rho$ and $c = \frac{1}{\sqrt{Q}}$ for $\hat{Z}_\tau$,

$$\mathbf{V} = \begin{bmatrix} 1 & \int_0^1 B(x)\, dx & \frac{1}{2} \\ \int_0^1 B(x)\, dx & \int_0^1 B(x)^2\, dx & \int_0^1 xB(x)\, dx \\ \frac{1}{2} & \int_0^1 xB(x)\, dx & \frac{1}{3} \end{bmatrix}, \qquad Q = \begin{bmatrix} 0 & c & 0 \end{bmatrix}\mathbf{V}^{-1}\begin{bmatrix} 0 \\ c \\ 0 \end{bmatrix}$$

When several variables $\mathbf{z}_t = (z_{1t}, \ldots, z_{kt})'$ are cointegrated, there exists a $(k \times 1)$ cointegrating vector $\mathbf{c}$ such that $\mathbf{c}'\mathbf{z}_t$ is stationary and $\mathbf{c}$ is a nonzero vector. The residual-based cointegration test assumes the following regression model:

$$y_t = \beta_1 + \mathbf{x}_t'\boldsymbol{\beta} + u_t$$

where $y_t = z_{1t}$, $\mathbf{x}_t = (z_{2t}, \ldots, z_{kt})'$, and $\boldsymbol{\beta} = (\beta_2, \ldots, \beta_k)'$. You can estimate the consistent cointegrating vector by using OLS if all variables are difference stationary — that is, I(1). The Phillips-Ouliaris test is computed using the OLS residuals from the preceding regression model, and it performs the test for the null hypothesis of no cointegration. The estimated cointegrating vector is $\hat{\mathbf{c}} = (1, -\hat{\beta}_2, \ldots, -\hat{\beta}_k)'$.

You need to refer to the tables by Phillips and Ouliaris (1990) to obtain the p-value of the cointegration test. Before you apply the cointegration test, you may want to perform the unit root test for each variable (see the option STATIONARITY=(PHILLIPS)).
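As a sketch (data set and variables hypothetical), the first MODEL statement requests the Phillips-Perron unit root test, and the second, because regressors are present, requests the Phillips-Ouliaris cointegration test:

proc autoreg data=a;
   model y = / stationarity=(phillips);
   model y = x1 x2 / stationarity=(phillips);
run;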

Kwiatkowski, Phillips, Schmidt, and Shin (KPSS) Unit Root Test

The KPSS test was introduced in Kwiatkowski et al. (1992) to test the null hypothesis that an observable series is stationary around a deterministic trend. Note that, for consistency, the notation used here differs from the notation used in the original paper. The setup of the problem is as follows: it is assumed that the series is expressed as the sum of the deterministic trend, a random walk $r_t$, and a stationary error $u_t$; that is,

$$y_t = \mu + \delta t + r_t + u_t$$

with $r_t = r_{t-1} + e_t$, $e_t \sim \text{iid}(0, \sigma_e^2)$, and an intercept $\mu$ (in the original paper, the authors use $r_0$ instead of $\mu$). Under the stronger assumptions of normality and iid of $u_t$ and $e_t$, a one-sided LM test of the null hypothesis that there is no random walk ($e_t = 0$ for all $t$) can be constructed as follows:

$$\widehat{LM} = \frac{1}{T^2}\sum_{t=1}^{T}\frac{S_t^2}{s^2(l)}$$

$$s^2(l) = \frac{1}{T}\sum_{t=1}^{T}u_t^2 + \frac{2}{T}\sum_{s=1}^{l}w(s, l)\sum_{t=s+1}^{T}u_t u_{t-s}$$

$$S_t = \sum_{\tau=1}^{t}u_\tau$$

Following the original work of Kwiatkowski, Phillips, Schmidt, and Shin, under the null hypothesis ($\sigma_e^2 = 0$) the LM statistic converges asymptotically to three different distributions, depending on whether the model is trend-stationary, level-stationary ($\delta = 0$), or zero-mean stationary ($\delta = 0$, $\mu = 0$). The trend-stationary model is denoted by subscript $\tau$ and the level-stationary model is denoted by subscript $\mu$. The case with no trend and zero intercept is denoted 0; this last case is considered in Hobijn, Franses, and Ooms (2004).

$$y_t = u_t\colon \qquad \widehat{LM}_0 \xrightarrow{D} \int_0^1 B^2(r)\, dr$$
$$y_t = \mu + u_t\colon \qquad \widehat{LM}_\mu \xrightarrow{D} \int_0^1 V^2(r)\, dr$$
$$y_t = \mu + \delta t + u_t\colon \qquad \widehat{LM}_\tau \xrightarrow{D} \int_0^1 V_2^2(r)\, dr$$

with

$$V(r) = B(r) - rB(1)$$
$$V_2(r) = B(r) + (2r - 3r^2)B(1) + (-6r + 6r^2)\int_0^1 B(s)\, ds$$

where $V(r)$ is a standard Brownian bridge, $V_2(r)$ is a second-level Brownian bridge, $B(r)$ is a Brownian motion (Wiener process), and $\xrightarrow{D}$ is convergence in distribution.

Using the notation of Kwiatkowski et al. (1992), the LM statistic is named $\hat{\eta}$. This test depends on the computational method used to compute the long-run variance $s(l)$ — that is, the window width $l$ and the kernel type $w(\cdot)$. You can specify the kernel used in the test with the KERNEL option:

- Newey-West/Bartlett (KERNEL=NW | BART), default: $w(s, l) = 1 - \frac{s}{l+1}$
- Quadratic spectral (KERNEL=QS): $w(s/l) = w(x) = \frac{25}{12\pi^2 x^2}\left(\frac{\sin(6\pi x/5)}{6\pi x/5} - \cos(6\pi x/5)\right)$

You can specify the number of lags, $l$, in three different ways:

- Schwert (SCHW=c) (default for NW, c=4): $l = \mathrm{floor}\left\{c\left(\frac{T}{100}\right)^{1/4}\right\}$
- Manual (LAG=l)
- Automatic selection (AUTO) (default for QS), following Hobijn, Franses, and Ooms (2004)

The last option (AUTO) needs more explanation. For the NW kernel, $l = \min(T, \mathrm{floor}(\hat{\gamma}T^{1/3}))$ with $\hat{\gamma} = 1.1447\left(\frac{\hat{s}^{(1)}}{\hat{s}^{(0)}}\right)^{2/3}$ and $n = \mathrm{floor}(T^{2/9})$; for the QS kernel, $l = \min(T, \mathrm{floor}(\hat{\gamma}T^{1/5}))$ with $\hat{\gamma} = 1.3221\left(\frac{\hat{s}^{(2)}}{\hat{s}^{(0)}}\right)^{2/5}$ and $n = \mathrm{floor}(T^{2/25})$, where $T$ is the number of observations and

$$\hat{s}^{(j)} = \delta_{0,j}\hat{\gamma}_0 + 2\sum_{i=1}^{n} i^j\hat{\gamma}_i, \qquad \delta_{0,j} = \begin{cases}1 & j = 0\\ 0 & \text{otherwise}\end{cases}, \qquad \hat{\gamma}_i = \frac{1}{T}\sum_{t=1}^{T-i}u_t u_{t+i}$$
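A minimal sketch of requesting the KPSS test (data set and series hypothetical); the kernel and lag choices described above are specified as suboptions of the test:

proc autoreg data=a;
   model y = / stationarity=(kpss);
run;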

Ramsey's Reset Test

Ramsey's RESET test is a misspecification test associated with the functional form of models; it checks whether power transforms need to be added to a model. The original linear model, henceforth called the restricted model, is

$$y_t = \mathbf{x}_t\boldsymbol{\beta} + u_t$$

To test for misspecification in the functional form, the unrestricted model is

$$y_t = \mathbf{x}_t\boldsymbol{\beta} + \sum_{j=2}^{p}\phi_j\hat{y}_t^{\,j} + u_t$$

where $\hat{y}_t$ is the predicted value from the linear model and p is the power of $\hat{y}_t$ in the unrestricted model equation, starting from 2. The number of higher-order terms to be chosen depends on the discretion of the analyst. The RESET option produces test results for p = 2, 3, and 4.

The RESET test is an F statistic for testing $H_0\colon \phi_j = 0$ for all $j = 2, \ldots, p$, against $H_1\colon \phi_j \neq 0$ for at least one $j = 2, \ldots, p$ in the unrestricted model, and it is computed as follows:

$$F_{(p-1,\; n-k-p+1)} = \frac{(SSE_R - SSE_U)/(p-1)}{SSE_U/(n-k-p+1)}$$

where $SSE_R$ is the sum of squared errors due to the restricted model, $SSE_U$ is the sum of squared errors due to the unrestricted model, n is the total number of observations, and k is the number of parameters in the original linear model. Ramsey's test can be viewed as a linearity test that checks whether any nonlinear transformation of the specified independent variables has been omitted, but it need not help in identifying a new relevant variable other than those already specified in the current model.
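For example (hypothetical data set and regressor):

proc autoreg data=a;
   model y = x / reset;
run;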


Predicted Values

The AUTOREG procedure can produce two kinds of predicted values for the response series and corresponding residuals and confidence limits. The residuals in both cases are computed as the actual value minus the predicted value. In addition, when GARCH models are estimated, the AUTOREG procedure can output predictions of the conditional error variance.

Predicting the Unconditional Mean

The first type of predicted value is obtained from only the structural part of the model, $\mathbf{x}_t'\mathbf{b}$. These are useful in predicting values of new response time series, which are assumed to be described by the same model as the current response time series. The predicted values, residuals, and upper and lower confidence limits for the structural predictions are requested by specifying the PREDICTEDM=, RESIDUALM=, UCLM=, or LCLM= option in the OUTPUT statement. The ALPHACLM= option controls the confidence level for UCLM= and LCLM=. These confidence limits are for estimation of the mean of the dependent variable, $\mathbf{x}_t'\mathbf{b}$, where $\mathbf{x}_t$ is the column vector of independent variables at observation t.

The predicted values are computed as

$$\hat{y}_t = \mathbf{x}_t'\mathbf{b}$$

and the upper and lower confidence limits as

$$\hat{u}_t = \hat{y}_t + t_{\alpha/2}\,v$$
$$\hat{l}_t = \hat{y}_t - t_{\alpha/2}\,v$$

where $v^2$ is an estimate of the variance of $\hat{y}_t$ and $t_{\alpha/2}$ is the upper $\alpha/2$ percentage point of the t distribution:

$$\mathrm{Prob}(T > t_{\alpha/2}) = \alpha/2$$

where T is an observation from a t distribution with q degrees of freedom. The value of $\alpha$ can be set with the ALPHACLM= option. The degrees-of-freedom parameter, q, is taken to be the number of observations minus the number of free parameters in the regression and autoregression parts of the model. For the YW estimation method, the value of v is calculated as

$$v = \sqrt{s^2\,\mathbf{x}_t'(\mathbf{X}'\mathbf{V}^{-1}\mathbf{X})^{-1}\mathbf{x}_t}$$

where $s^2$ is the error sum of squares divided by q. For the ULS and ML methods, it is calculated as

$$v = \sqrt{s^2\,\mathbf{x}_t'\mathbf{W}\mathbf{x}_t}$$

where $\mathbf{W}$ is the $k \times k$ submatrix of $(\mathbf{J}'\mathbf{J})^{-1}$ that corresponds to the regression parameters. For details, see the section "Computational Methods" on page 359 earlier in this chapter.
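As a sketch (names hypothetical), the following statements request the structural predictions and their confidence limits:

proc autoreg data=a;
   model y = x / nlag=2;
   output out=means predictedm=ym lclm=ylo uclm=yhi alphaclm=.05;
run;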


Predicting Future Series Realizations

The other predicted values use both the structural part of the model and the predicted values of the error process. These conditional mean values are useful in predicting future values of the current response time series. The predicted values, residuals, and upper and lower confidence limits for future observations conditional on past values are requested by the PREDICTED=, RESIDUAL=, UCL=, or LCL= option in the OUTPUT statement. The ALPHACLI= option controls the confidence level for UCL= and LCL=. These confidence limits are for the predicted value,

$$\tilde{y}_t = \mathbf{x}_t'\mathbf{b} + \nu_{t|t-1}$$

where $\mathbf{x}_t$ is the vector of independent variables and $\nu_{t|t-1}$ is the minimum variance linear predictor of the error term given the available past values of $\nu_{t-j}$, $j = 1, 2, \ldots, t-1$, and the autoregressive model for $\nu_t$. If the m previous values of the structural residuals are available, then

$$\nu_{t|t-1} = -\hat{\varphi}_1\nu_{t-1} - \cdots - \hat{\varphi}_m\nu_{t-m}$$

where $\hat{\varphi}_1, \ldots, \hat{\varphi}_m$ are the estimated AR parameters. The upper and lower confidence limits are computed as

$$\tilde{u}_t = \tilde{y}_t + t_{\alpha/2}\,v$$
$$\tilde{l}_t = \tilde{y}_t - t_{\alpha/2}\,v$$

where v, in this case, is computed as

$$v = \sqrt{s^2\left(\mathbf{x}_t'(\mathbf{X}'\mathbf{V}^{-1}\mathbf{X})^{-1}\mathbf{x}_t + r\right)}$$

where the value $rs^2$ is the estimate of the variance of $\nu_{t|t-1}$. At the start of the series, and after missing values, r is generally greater than 1. See the section "Predicting the Conditional Variance" on page 383 for computational details of r. The plot of residuals and confidence limits in Example 8.4 later in this chapter illustrates this behavior.

Except to adjust the degrees of freedom for the error sum of squares, the preceding formulas do not account for the fact that the autoregressive parameters are estimated. In particular, the confidence limits are likely to be somewhat too narrow. In large samples, this is probably not an important effect, but it may be appreciable in small samples. Refer to Harvey (1981) for some discussion of this problem for AR(1) models.

Note that at the beginning of the series (the first m observations, where m is the value of the NLAG= option) and after missing values, these residuals do not match the residuals obtained by using OLS on the transformed variables. This is because, in these cases, the predicted noise values must be based on less than a complete set of past noise values and, thus, have larger variance. The GLS transformation for these observations includes a scale factor as well as a linear combination of past values. Put another way, the $\mathbf{L}^{-1}$ matrix defined in the section "Computational Methods" on page 359 has the value 1 along the diagonal, except for the first m observations and after missing values.
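A parallel sketch (names hypothetical) for the conditional predictions and their limits:

proc autoreg data=a;
   model y = x / nlag=2;
   output out=fore predicted=yhat residual=resid lcl=ylo ucl=yhi alphacli=.01;
run;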


Predicting the Conditional Variance

The GARCH process can be written

$$\epsilon_t^2 = \omega + \sum_{i=1}^{n}(\alpha_i + \gamma_i)\epsilon_{t-i}^2 - \sum_{j=1}^{p}\gamma_j\eta_{t-j} + \eta_t$$

where $\eta_t = \epsilon_t^2 - h_t$ and $n = \max(p, q)$. This representation shows that the squared residual $\epsilon_t^2$ follows an ARMA$(n, p)$ process. Then for any $d > 0$, the conditional expectations are as follows:

$$E(\epsilon_{t+d}^2 \mid \Psi_t) = \omega + \sum_{i=1}^{n}(\alpha_i + \gamma_i)E(\epsilon_{t+d-i}^2 \mid \Psi_t) - \sum_{j=1}^{d-1}\gamma_j E(\eta_{t+d-j} \mid \Psi_t)$$

The d-step-ahead prediction error, $\epsilon_{t+d} = y_{t+d} - y_{t+d|t}$, has the conditional variance

$$V(\epsilon_{t+d} \mid \Psi_t) = \sum_{j=0}^{d-1} g_j^2\,\sigma_{t+d-j|t}^2$$

where

$$\sigma_{t+d-j|t}^2 = E(\epsilon_{t+d-j}^2 \mid \Psi_t)$$

Coefficients in the conditional d-step prediction error variance are calculated recursively using the following formula:

$$g_j = -\varphi_1 g_{j-1} - \cdots - \varphi_m g_{j-m}$$

where $g_0 = 1$ and $g_j = 0$ if $j < 0$; $\varphi_1, \ldots, \varphi_m$ are autoregressive parameters. Since the parameters are not known, the conditional variance is computed using the estimated autoregressive parameters. The d-step-ahead prediction error variance is simplified when there are no autoregressive terms:

$$V(\epsilon_{t+d} \mid \Psi_t) = \sigma_{t+d|t}^2$$

Therefore, the one-step-ahead prediction error variance is equivalent to the conditional error variance defined in the GARCH process:

$$h_t = E(\epsilon_t^2 \mid \Psi_{t-1}) = \sigma_{t|t-1}^2$$

Note that the conditional prediction error variance of the EGARCH and GARCH-M models cannot be calculated using the preceding formula. Therefore, the confidence intervals for the predicted values are computed assuming the homoscedastic conditional error variance. That is, the conditional prediction error variance is identical to the unconditional prediction error variance:

$$V(\epsilon_{t+d} \mid \Psi_t) = V(\epsilon_{t+d}) = \sigma^2\sum_{j=0}^{d-1}g_j^2$$

since $\sigma_{t+d-j|t}^2 = \sigma^2$. You can compute $s^2 r$, which is the second term of the variance for the predicted value $\tilde{y}_t$ explained previously in the section "Predicting Future Series Realizations" on page 382, by using the formula $\sigma^2\sum_{j=0}^{d-1}g_j^2$; r is estimated from $\sum_{j=0}^{d-1}\hat{g}_j^2$ by using the estimated autoregressive parameters. Consider the following conditional prediction error variance:

$$V(\epsilon_{t+d} \mid \Psi_t) = \sigma^2\sum_{j=0}^{d-1}g_j^2 + \sum_{j=0}^{d-1}g_j^2\left(\sigma_{t+d-j|t}^2 - \sigma^2\right)$$

The second term in the preceding equation can be interpreted as the noise from using the homoscedastic conditional variance when the errors follow the GARCH process. However, it is expected that if the GARCH process is covariance stationary, the difference between the conditional prediction error variance and the unconditional prediction error variance disappears as the forecast horizon d increases.
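As a sketch (data set and series hypothetical), the HT= and CPEV= options of the OUTPUT statement write the conditional error variance and the conditional prediction error variance discussed above:

proc autoreg data=returns;
   model r = / garch=(p=1,q=1);
   output out=gout ht=condvar cpev=vhat;
run;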

OUT= Data Set

The output SAS data set produced by the OUTPUT statement contains all the variables in the input data set and the new variables specified by the OUTPUT statement options. See the section "OUTPUT Statement" on page 354 earlier in this chapter for information on the output variables that can be created. The output data set contains one observation for each observation in the input data set.

OUTEST= Data Set

The OUTEST= data set contains all the variables used in any MODEL statement. Each regressor variable contains the estimate for the corresponding regression parameter in the corresponding model. In addition, the OUTEST= data set contains the following variables:

_A_i        the ith order autoregressive parameter estimate. There are m such variables _A_1 through _A_m, where m is the value of the NLAG= option.

_AH_i       the ith order ARCH parameter estimate, if the GARCH= option is specified. There are q such variables _AH_1 through _AH_q, where q is the value of the Q= option. The variable _AH_0 contains the estimate of ω.

_DELTA_     the estimated mean parameter for the GARCH-M model, if a GARCH-in-mean model is specified

_DEPVAR_    the name of the dependent variable

_GH_i       the ith order GARCH parameter estimate, if the GARCH= option is specified. There are p such variables _GH_1 through _GH_p, where p is the value of the P= option.

INTERCEPT   the intercept estimate. INTERCEPT contains a missing value for models for which the NOINT option is specified.

_METHOD_    the estimation method that is specified in the METHOD= option

_MODEL_     the label of the MODEL statement if one is given, or blank otherwise

_MSE_       the value of the mean square error for the model

_NAME_      the name of the row of the covariance matrix for the parameter estimate, if the COVOUT option is specified

_LIKLHD_    the log-likelihood value of the GARCH model

_SSE_       the value of the error sum of squares

_STATUS_    the optimization status. _STATUS_ = 0 indicates that there were no errors during the optimization and the algorithm converged. _STATUS_ = 1 indicates that the optimization could not improve the function value and means that the results should be interpreted with caution. _STATUS_ = 2 indicates that the optimization failed because the number of iterations exceeded either the maximum default or the specified number of iterations, or the number of function calls allowed. _STATUS_ = 3 indicates that an error occurred during the optimization process. For example, this error message is obtained when a function or its derivatives cannot be calculated at the initial values or during the iteration process, when an optimization step is outside of the feasible region, or when active constraints are linearly dependent.

_STDERR_    standard error of the parameter estimate, if the COVOUT option is specified

_THETA_     the estimate of the θ parameter in the EGARCH model, if an EGARCH model is specified

_TYPE_      OLS for observations containing parameter estimates, or COV for observations containing covariance matrix elements

The OUTEST= data set contains one observation for each MODEL statement giving the parameter estimates for that model. If the COVOUT option is specified, the OUTEST= data set includes additional observations for each MODEL statement giving the rows of the covariance of parameter estimates matrix. For covariance observations, the value of the _TYPE_ variable is COV, and the _NAME_ variable identifies the parameter associated with that row of the covariance matrix.
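For example (hypothetical names), the following statements write the estimates and their covariance matrix to an OUTEST= data set:

proc autoreg data=a outest=est covout;
   model y = x / nlag=2;
run;

proc print data=est;
run;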

Printed Output

The AUTOREG procedure prints the following items:

1. the name of the dependent variable
2. the ordinary least squares estimates
3. estimates of autocorrelations, which include the estimates of the autocovariances, the autocorrelations, and (if there is sufficient space) a graph of the autocorrelation at each LAG
4. if the PARTIAL option is specified, the partial autocorrelations
5. the preliminary MSE, which results from solving the Yule-Walker equations. This is an estimate of the final MSE.
6. the estimates of the autoregressive parameters (Coefficient), their standard errors (Std Error), and the ratio of estimate to standard error (t Ratio)
7. the statistics of fit for the final model. These include the error sum of squares (SSE), the degrees of freedom for error (DFE), the mean square error (MSE), the mean absolute error (MAE), the mean absolute percentage error (MAPE), the root mean square error (Root MSE), the Schwarz information criterion (SBC), the Akaike information criterion (AIC), the corrected Akaike information criterion (AICC), the regression R² (Reg Rsq), and the total R² (Total Rsq). For GARCH models, the following additional items are printed:
   - the value of the log-likelihood function
   - the number of observations that are used in estimation (OBS)
   - the unconditional variance (UVAR)
   - the normality test statistic and its p-value
8. the parameter estimates for the structural model (B Value), a standard error estimate (Std Error), the ratio of estimate to standard error (t Ratio), and an approximation to the significance probability for the parameter being 0 (Approx Prob)
9. if the NLAG= option is specified with METHOD=ULS or METHOD=ML, the regression parameter estimates, printed again assuming that the autoregressive parameter estimates are known. In this case, the Std Error and related statistics for the regression estimates will, in general, be different from the case when they are estimated. Note that from a standpoint of estimation, the Yule-Walker and iterated Yule-Walker methods (NLAG= with METHOD=YW, ITYW) generate only one table, assuming AR parameters are given.
10. if you specify the NORMAL option, the Bera-Jarque normality test statistics. If you specify the LAGDEP option, Durbin's h or Durbin's t is printed.

ODS Table Names

PROC AUTOREG assigns a name to each table it creates. You can use these names to reference the table when using the Output Delivery System (ODS) to select tables and create output data sets. These names are listed in Table 8.2; a brief usage sketch follows the table.

Table 8.2  ODS Tables Produced in PROC AUTOREG

ODS Table Name              Description                                             Option

ODS Tables Created by the MODEL Statement
FitSummary                  Summary of regression                                   default
SummaryDepVarCen            Summary of regression (centered dependent var)         CENTER
SummaryNoIntercept          Summary of regression (no intercept)                   NOINT
YWIterSSE                   Yule-Walker iteration sum of squared error             METHOD=ITYW
PreMSE                      Preliminary MSE                                        NLAG=
Dependent                   Dependent variable                                     default
DependenceEquations         Linear dependence equation
ARCHTest                    Q and LM tests for ARCH disturbances                   ARCHTEST
ChowTest                    Chow test and predictive Chow test                     CHOW= PCHOW=
Godfrey                     Godfrey's serial correlation test                      GODFREY
PhilPerron                  Phillips-Perron unit root test                         STATIONARITY=(PHILLIPS) (no regressor)
PhilOul                     Phillips-Ouliaris cointegration test                   STATIONARITY=(PHILLIPS) (has regressor)
KPSS                        Kwiatkowski, Phillips, Schmidt, and Shin test          STATIONARITY=(KPSS)
ResetTest                   Ramsey's RESET test                                    RESET
ARParameterEstimates        Estimates of autoregressive parameters                 NLAG=
CorrGraph                   Estimates of autocorrelations                          NLAG=
BackStep                    Backward elimination of autoregressive terms           BACKSTEP
ExpAutocorr                 Expected autocorrelations                              NLAG=
IterHistory                 Iteration history                                      ITPRINT
ParameterEstimates          Parameter estimates                                    default
ParameterEstimatesGivenAR   Parameter estimates assuming AR parameters are given   NLAG=, METHOD=ULS | ML
PartialAutoCorr             Partial autocorrelation                                PARTIAL
CovB                        Covariance of parameter estimates                      COVB
CorrB                       Correlation of parameter estimates                     CORRB
CholeskyFactor              Cholesky root of gamma                                 ALL
Coefficients                Coefficients for first NLAG observations               COEF
GammaInverse                Gamma inverse                                          GINV
ConvergenceStatus           Convergence status table                               default
MiscStat                    Durbin t or Durbin h, Bera-Jarque normality test       LAGDEP=; NORMAL
DWTest                      Durbin-Watson statistics                               DW=

ODS Tables Created by the RESTRICT Statement
Restrict                    Restriction table                                      default

ODS Tables Created by the TEST Statement
FTest                       F test                                                 default
WaldTest                    Wald test                                              TYPE=WALD
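A brief sketch (hypothetical data set and model) that captures two of the tables in Table 8.2 as data sets:

ods output FitSummary=fitsum ParameterEstimates=parms;
proc autoreg data=a;
   model y = x / nlag=2 method=ml;
run;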

ODS Graphics

This section describes the use of ODS for creating graphics with the AUTOREG procedure. To request these graphs, you must specify the ODS GRAPHICS statement. By default, only the residual, predicted versus actual, and autocorrelation of residuals plots are produced. If, in addition to the ODS GRAPHICS statement, you also specify the ALL option in either the PROC AUTOREG statement or MODEL statement, all plots are created. For HETERO, GARCH, and AR models, studentized residuals are replaced by standardized residuals. For the autoregressive models, the conditional variance of the residuals is computed as described in the section "Predicting Future Series Realizations" on page 382. For the GARCH and HETERO models, residuals are assumed to have the conditional variance h_t invoked by the HT= option of the OUTPUT statement. For all these cases, the Cook's D plot is not produced.

ODS Graph Names

PROC AUTOREG assigns a name to each graph it creates using ODS. You can use these names to reference the graphs when using ODS. The names are listed in Table 8.3.

Table 8.3  ODS Graphics Produced by PROC AUTOREG

ODS Graph Name          Plot Description                         Option
ACFPlot                 Autocorrelation of residuals             ACF
FitPlot                 Predicted versus actual plot             Default
CooksD                  Cook's D plot                            ALL (no NLAG=)
IACFPlot                Inverse autocorrelation of residuals     ALL
QQPlot                  Q-Q plot of residuals                    ALL
PACFPlot                Partial autocorrelation of residuals     ALL
ResidualHistogram       Histogram of the residuals               ALL
ResidualPlot            Residual plot                            Default
StudentResidualPlot     Studentized residual plot                ALL (no NLAG=/HETERO=/GARCH=)
StandardResidualPlot    Standardized residual plot               ALL
WhiteNoiseLogProbPlot   Tests for white noise residuals          ALL


Examples: AUTOREG Procedure

Example 8.1: Analysis of Real Output Series

In this example, the annual real output series is analyzed over the period 1901 to 1983 (Balke and Gordon 1986, pp. 581–583). With the following DATA step, the original data are transformed using the natural logarithm, and the differenced series DY is created for further analysis. The log of real output is plotted in Output 8.1.1.

title 'Analysis of Real GNP';
data gnp;
   date = intnx( 'year', '01jan1901'd, _n_-1 );
   format date year4.;
   input x @@;
   y = log(x);
   dy = dif(y);
   t = _n_;
   label y = 'Real GNP'
         dy = 'First Difference of Y'
         t = 'Time Trend';
datalines;

   ... more lines ...

proc sgplot data=gnp noautolegend;
   scatter x=date y=y;
   xaxis grid values=('01jan1901'd '01jan1911'd '01jan1921'd '01jan1931'd
                      '01jan1941'd '01jan1951'd '01jan1961'd '01jan1971'd
                      '01jan1981'd '01jan1991'd);
run;


Output 8.1.1 Real Output Series: 1901 – 1983

The (linear) trend-stationary process is estimated using the following form:

$$y_t = \beta_0 + \beta_1 t + \nu_t$$

where

$$\nu_t = \varepsilon_t - \varphi_1\nu_{t-1} - \varphi_2\nu_{t-2}$$
$$\varepsilon_t \sim \mathrm{IN}(0, \sigma^2)$$

The preceding trend-stationary model assumes that uncertainty over future horizons is bounded since the error term, $\nu_t$, has a finite variance. The maximum likelihood AR estimates from the statements that follow are shown in Output 8.1.2:

proc autoreg data=gnp;
   model y = t / nlag=2 method=ml;
run;


Output 8.1.2  Estimating the Linear Trend Model

Analysis of Real GNP
The AUTOREG Procedure

Maximum Likelihood Estimates

SSE            0.23954331    DFE                 79
MSE            0.00303       Root MSE            0.05507
SBC            -230.39355    AIC                 -240.06891
MAE            0.04016596    AICC                -239.55609
MAPE           0.69458594    Regress R-Square    0.8645
Durbin-Watson  1.9935        Total R-Square      0.9947

Variable    DF   Estimate   Standard Error   t Value   Approx Pr > |t|
Intercept   1    4.8206     0.0661           72.88     <.0001
t           1    0.0302     0.001346         22.45     <.0001
AR1         1    -1.2041    0.1040           -11.58    <.0001
AR2         1    0.3748     0.1039           3.61      0.0005

Variable    DF   Estimate   Standard Error   t Value   Approx Pr > |t|   Variable Label
Intercept   1    -18.6582   34.8101          -0.54     0.5993
gef         1    0.0339     0.0179           1.89      0.0769            Lagged Value of GE shares
gec         1    0.1369     0.0449           3.05      0.0076            Lagged Capital Stock GE
AR1         1               0.2592           -1.93     0.0718

Autoregressive parameters assumed given.

Variable    DF   Estimate   Standard Error   t Value   Approx Pr > |t|   Variable Label
Intercept   1    -18.6582   33.7567          -0.55     0.5881
gef         1    0.0339     0.0159           2.13      0.0486            Lagged Value of GE shares
gec         1    0.1369     0.0404           3.39      0.0037            Lagged Capital Stock GE

Output 8.2.4  Regression Results Using Maximum Likelihood Method

Estimates of Autoregressive Parameters

Lag   Coefficient   Standard Error   t Value
1     -0.460867     0.221867         -2.08

Algorithm converged.

Grunfeld's Investment Models Fit with Autoregressive Errors
The AUTOREG Procedure

Maximum Likelihood Estimates

SSE            10229.2303    DFE                 16
MSE            639.32689     Root MSE            25.28491
SBC            193.738877    AIC                 189.755947
MAE            18.0892426    AICC                192.422614
MAPE           21.0978407    Regress R-Square    0.5656
Durbin-Watson  1.3385        Total R-Square      0.7719

Variable    DF   Estimate   Standard Error   t Value   Approx Pr > |t|   Variable Label
Intercept   1    -18.3751   34.5941          -0.53     0.6026
gef         1    0.0334     0.0179           1.87      0.0799            Lagged Value of GE shares
gec         1    0.1385     0.0428           3.23      0.0052            Lagged Capital Stock GE
AR1         1    -0.4728    0.2582           -1.83     0.0858

Autoregressive parameters assumed given.

Variable    DF   Estimate   Standard Error   t Value   Approx Pr > |t|   Variable Label
Intercept   1    -18.3751   33.3931          -0.55     0.5897
gef         1    0.0334     0.0158           2.11      0.0512            Lagged Value of GE shares
gec         1    0.1385     0.0389           3.56      0.0026            Lagged Capital Stock GE

Example 8.3: Lack-of-Fit Study

Many time series exhibit high positive autocorrelation, having the smooth appearance of a random walk. This behavior can be explained by the partial adjustment and adaptive expectation hypotheses. Short-term forecasting applications often use autoregressive models because these models absorb the behavior of this kind of data. In the case of a first-order AR process where the autoregressive parameter is exactly 1 (a random walk), the best prediction of the future is the immediate past.

PROC AUTOREG can often greatly improve the fit of models, not only by adding additional parameters but also by capturing the random walk tendencies. Thus, PROC AUTOREG can be expected to provide good short-term forecast predictions. However, good forecasts do not necessarily mean that your structural model contributes anything worthwhile to the fit. In the following example, random noise is fit to part of a sine wave. Notice that the structural model does not fit at all, but the autoregressive process does quite well and is very nearly a first difference (AR(1) = 0.976).

The DATA step, PROC AUTOREG step, and PROC SGPLOT step follow:

title1 'Lack of Fit Study';
title2 'Fitting White Noise Plus Autoregressive Errors to a Sine Wave';

data a;
   pi=3.14159;
   do time = 1 to 75;
      if time > 75 then y = .;
      else y = sin( pi * ( time / 50 ) );
      x = ranuni( 1234567 );
      output;
   end;
run;

398 F Chapter 8: The AUTOREG Procedure

proc autoreg data=a plots; model y = x / nlag=1; output out=b p=pred pm=xbeta; run; proc sgplot data=b; scatter y=y x=time / markerattrs=(color=black); series y=pred x=time / lineattrs=(color=blue); series y=xbeta x=time / lineattrs=(color=red); run;

The printed output produced by PROC AUTOREG is shown in Output 8.3.1 and Output 8.3.2. Plots of observed and predicted values are shown in Output 8.3.3 and Output 8.3.4. Note: the plot Output 8.3.3 can be viewed in the Autoreg.Model.FitDiagnosticPlots category by selecting ViewIResults. Output 8.3.1 Results of OLS Analysis: No Autoregressive Model Fit Lack of Fit Study Fitting White Noise Plus Autoregressive Errors to a Sine Wave The AUTOREG Procedure Dependent Variable

y

Lack of Fit Study
Fitting White Noise Plus Autoregressive Errors to a Sine Wave
The AUTOREG Procedure
Ordinary Least Squares Estimates

SSE              34.8061005    DFE                       73
MSE                 0.47680    Root MSE             0.69050
SBC              163.898598    AIC               159.263622
MAE              0.59112447    AICC              159.430289
MAPE             117894.045    Regress R-Square      0.0008
Durbin-Watson        0.0057    Total R-Square        0.0008

Variable     DF    Estimate    Standard Error    t Value    Approx Pr > |t|
Intercept     1      0.2383            0.1584       1.50             0.1367
x             1     -0.0665            0.2771      -0.24             0.8109

Estimates of Autocorrelations

Lag    Covariance    Correlation
  0        0.4641       1.000000
  1        0.4531       0.976386

Preliminary MSE    0.0217

Output 8.3.2 Regression Results with AR(1) Error Correction

Estimates of Autoregressive Parameters

Lag    Coefficient    Standard Error    t Value
  1      -0.976386          0.025460     -38.35

Lack of Fit Study
Fitting White Noise Plus Autoregressive Errors to a Sine Wave
The AUTOREG Procedure
Yule-Walker Estimates

SSE              0.18304264    DFE                       72
MSE                 0.00254    Root MSE             0.05042
SBC              -222.30643    AIC                -229.2589
MAE              0.04551667    AICC              -228.92087
MAPE             29145.3526    Regress R-Square      0.0001
Durbin-Watson        0.0942    Total R-Square        0.9947

Variable     DF    Estimate    Standard Error    t Value    Approx Pr > |t|
Intercept     1     -0.1473            0.1702      -0.87             0.3898
x             1   -0.001219            0.0141      -0.09             0.9315

Output 8.3.3 Diagnostics Plots

Output 8.3.4 Plot of Autoregressive Prediction

Example 8.4: Missing Values

In this example, a pure autoregressive error model with no regressors is used to generate 50 values of a time series. Approximately 15% of the values are randomly chosen and set to missing. The following statements generate the data:

title 'Simulated Time Series with Roots:';
title2 ' (X-1.25)(X**4-1.25)';
title3 'With 15% Missing Values';
data ar;
   do i=1 to 550;
      e = rannor(12345);
      n = sum( e, .8*n1, .8*n4, -.64*n5 );   /* ar process  */
      y = n;
      if ranuni(12345) > .85 then y = .;     /* 15% missing */
      n5=n4; n4=n3; n3=n2; n2=n1; n1=n;      /* set lags    */
      if i>500 then output;
   end;
run;

The model is estimated using maximum likelihood, and the residuals are plotted with 99% confidence limits. The PARTIAL option prints the partial autocorrelations. The following statements fit the model:

proc autoreg data=ar partial;
   model y = / nlag=(1 4 5) method=ml;
   output out=a predicted=p residual=r ucl=u lcl=l alphacli=.01;
run;

The printed output produced by the AUTOREG procedure is shown in Output 8.4.1 and Output 8.4.2. Note: the plot Output 8.4.2 can be viewed in the Autoreg.Model.FitDiagnosticPlots category by selecting View > Results.

Output 8.4.1 Autocorrelation-Corrected Regression Results

Simulated Time Series with Roots:
(X-1.25)(X**4-1.25)
With 15% Missing Values
The AUTOREG Procedure
Dependent Variable    y

Ordinary Least Squares Estimates

SSE              182.972379    DFE                       40
MSE                 4.57431    Root MSE             2.13876
SBC               181.39282    AIC               179.679248
MAE              1.80469152    AICC              179.781813
MAPE             270.104379    Regress R-Square      0.0000
Durbin-Watson        1.3962    Total R-Square        0.0000

Variable     DF    Estimate    Standard Error    t Value    Approx Pr > |t|
Intercept     1     -2.2387            0.3340      -6.70             <.0001

Maximum Likelihood Estimates

Variable     DF    Estimate    Standard Error    t Value    Approx Pr > |t|
Intercept     1     -2.2370            0.5239      -4.27             0.0001
AR1           1     -0.6201            0.1129      -5.49             <.0001
AR4           1     -0.7237            0.0914      -7.92             <.0001
AR5           1      0.6550            0.1202       5.45             <.0001


Using the substitution $\alpha = 1/\theta$ ($\alpha > 0$), the negative binomial distribution can then be rewritten as

$$f(y_i \mid x_i) = \frac{\Gamma(y_i + \alpha^{-1})}{y_i!\,\Gamma(\alpha^{-1})} \left( \frac{\alpha^{-1}}{\alpha^{-1} + \mu_i} \right)^{\alpha^{-1}} \left( \frac{\mu_i}{\alpha^{-1} + \mu_i} \right)^{y_i}, \quad y_i = 0, 1, 2, \ldots$$

Thus, the negative binomial distribution is derived as a gamma mixture of Poisson random variables. It has conditional mean

$$E(y_i \mid x_i) = \mu_i = e^{x_i'\beta}$$

and conditional variance

$$V(y_i \mid x_i) = \mu_i \left[ 1 + \alpha \mu_i \right] > E(y_i \mid x_i)$$

The conditional variance of the negative binomial distribution exceeds the conditional mean. Overdispersion results from neglected unobserved heterogeneity. The negative binomial model with variance function $V(y_i \mid x_i) = \mu_i + \alpha \mu_i^2$, which is quadratic in the mean, is referred to as the NEGBIN2 model (Cameron and Trivedi 1986). To estimate this model, specify DIST=NEGBIN(p=2) in the MODEL statement. The Poisson distribution is a special case of the negative binomial distribution where $\alpha = 0$. A test of the Poisson distribution can be carried out by testing the hypothesis that $\alpha = \theta^{-1} = 0$. A Wald test of this hypothesis is provided (it is the reported t statistic for the estimated $\alpha$ in the negative binomial model).

The log-likelihood function of the negative binomial regression model (NEGBIN2) is given by

$$\mathcal{L} = \sum_{i=1}^{N} \left\{ \sum_{j=0}^{y_i - 1} \ln(j + \alpha^{-1}) - \ln(y_i!) - (y_i + \alpha^{-1}) \ln\left(1 + \alpha \exp(x_i'\beta)\right) + y_i \ln(\alpha) + y_i x_i'\beta \right\}$$

where use of the following fact is made:

$$\Gamma(y + a) / \Gamma(a) = \prod_{j=0}^{y-1} (j + a)$$

if $y$ is an integer. The gradient is

$$\frac{\partial \mathcal{L}}{\partial \beta} = \sum_{i=1}^{N} \frac{y_i - \mu_i}{1 + \alpha \mu_i}\, x_i$$

and

$$\frac{\partial \mathcal{L}}{\partial \alpha} = \sum_{i=1}^{N} \left\{ -\alpha^{-2} \sum_{j=0}^{y_i - 1} \frac{1}{j + \alpha^{-1}} + \alpha^{-2} \ln(1 + \alpha \mu_i) + \frac{y_i - \mu_i}{\alpha (1 + \alpha \mu_i)} \right\}$$

Cameron and Trivedi (1986) consider a general class of negative binomial models with mean $\mu_i$ and variance function $\mu_i + \alpha \mu_i^p$. The NEGBIN2 model, with $p = 2$, is the standard formulation of the negative binomial model. Models with other values of $p$, $-\infty < p < \infty$, have the same density $f(y_i \mid x_i)$ except that $\alpha^{-1}$ is replaced everywhere by $\alpha^{-1}\mu^{2-p}$. The negative binomial model NEGBIN1, which sets $p = 1$, has variance function $V(y_i \mid x_i) = \mu_i + \alpha \mu_i$, which is linear in the mean. To estimate this model, specify DIST=NEGBIN(p=1) in the MODEL statement.

The log-likelihood function of the NEGBIN1 regression model is given by

$$\mathcal{L} = \sum_{i=1}^{N} \left\{ \sum_{j=0}^{y_i - 1} \ln\left(j + \alpha^{-1} \exp(x_i'\beta)\right) - \ln(y_i!) - \left(y_i + \alpha^{-1} \exp(x_i'\beta)\right) \ln(1 + \alpha) + y_i \ln(\alpha) \right\}$$

The gradient is

$$\frac{\partial \mathcal{L}}{\partial \beta} = \sum_{i=1}^{N} \left\{ \left( \sum_{j=0}^{y_i - 1} \frac{\mu_i}{j\alpha + \mu_i} \right) x_i - \alpha^{-1} \ln(1 + \alpha)\, \mu_i x_i \right\}$$

and

$$\frac{\partial \mathcal{L}}{\partial \alpha} = \sum_{i=1}^{N} \left\{ -\left( \sum_{j=0}^{y_i - 1} \frac{\alpha^{-1} \mu_i}{j\alpha + \mu_i} \right) + \alpha^{-2} \mu_i \ln(1 + \alpha) - \frac{y_i + \alpha^{-1}\mu_i}{1 + \alpha} + \frac{y_i}{\alpha} \right\}$$
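To make the two parameterizations concrete, the following sketch selects each variance function with the DIST= option described above; the data set mydata and the variables y, x1, and x2 are hypothetical placeholders:

/* NEGBIN2: quadratic variance function, V(y|x) = mu + alpha*mu**2 */
proc countreg data=mydata;
   model y = x1 x2 / dist=negbin(p=2);
run;

/* NEGBIN1: linear variance function, V(y|x) = mu + alpha*mu */
proc countreg data=mydata;
   model y = x1 x2 / dist=negbin(p=1);
run;

In both cases the reported t statistic for _Alpha provides the Wald test of the Poisson restriction described earlier.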

Zero-Inflated Count Regression Overview

The main motivation for zero-inflated count models is that real-life data frequently display overdispersion and excess zeros. Zero-inflated count models provide a way of modeling the excess zeros as well as allowing for overdispersion. In particular, for each observation, there are two possible data generation processes. The result of a Bernoulli trial is used to determine which of the two processes is used. For observation $i$, Process 1 is chosen with probability $\varphi_i$ and Process 2 with probability $1 - \varphi_i$. Process 1 generates only zero counts. Process 2 generates counts from either a Poisson or a negative binomial model. In general,

$$y_i \sim \begin{cases} 0 & \text{with probability } \varphi_i \\ g(y_i) & \text{with probability } 1 - \varphi_i \end{cases}$$

Therefore, the probability of $\{Y_i = y_i\}$ can be described as

$$P(y_i = 0 \mid x_i) = \varphi_i + (1 - \varphi_i)\, g(0)$$
$$P(y_i \mid x_i) = (1 - \varphi_i)\, g(y_i), \quad y_i > 0$$

where $g(y_i)$ follows either the Poisson or the negative binomial distribution. When the probability $\varphi_i$ depends on the characteristics of observation $i$, $\varphi_i$ is written as a function of $z_i'\gamma$, where $z_i'$ is the $1 \times (q+1)$ vector of zero-inflated covariates and $\gamma$ is the $(q+1) \times 1$ vector of zero-inflated coefficients to be estimated. (The zero-inflated intercept is $\gamma_0$; the coefficients for the $q$ zero-inflated covariates are $\gamma_1, \ldots, \gamma_q$.) The function $F$ relating the product $z_i'\gamma$ (which is a scalar) to the probability $\varphi_i$ is called the zero-inflated link function,

$$\varphi_i = F_i = F(z_i'\gamma)$$

In the COUNTREG procedure, the zero-inflated covariates are indicated in the ZEROMODEL statement. Furthermore, the zero-inflated link function $F$ can be specified as either the logistic function,

$$F(z_i'\gamma) = \Lambda(z_i'\gamma) = \frac{\exp(z_i'\gamma)}{1 + \exp(z_i'\gamma)}$$

or the standard normal cumulative distribution function (also called the probit function),

$$F(z_i'\gamma) = \Phi(z_i'\gamma) = \int_{-\infty}^{z_i'\gamma} \frac{1}{\sqrt{2\pi}} \exp(-u^2/2)\, du$$

The zero-inflated link function is indicated in the ZEROMODEL statement, using the LINK= option. The default ZI link function is the logistic function.
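A minimal sketch of selecting the link function follows; the data set mydata and the zero-inflation covariates z1 and z2 are hypothetical, and the LINK= values LOGISTIC and NORMAL are assumed to correspond to the two link functions just described:

proc countreg data=mydata;
   model y = x1 x2 / dist=zip;
   zeromodel y ~ z1 z2 / link=normal;  /* probit link; omit LINK= for the default logistic */
run;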

Zero-Inflated Poisson Regression

In the zero-inflated Poisson (ZIP) regression model, the data generation process referred to earlier as Process 2 is

$$g(y_i) = \frac{\exp(-\mu_i)\,\mu_i^{y_i}}{y_i!}$$

where $\mu_i = e^{x_i'\beta}$. Thus the ZIP model is defined as

$$P(y_i = 0 \mid x_i, z_i) = F_i + (1 - F_i) \exp(-\mu_i)$$
$$P(y_i \mid x_i, z_i) = (1 - F_i)\, \frac{\exp(-\mu_i)\,\mu_i^{y_i}}{y_i!}, \quad y_i > 0$$

The conditional expectation and conditional variance of $y_i$ are given by

$$E(y_i \mid x_i, z_i) = \mu_i (1 - F_i)$$
$$V(y_i \mid x_i, z_i) = E(y_i \mid x_i, z_i)\,(1 + \mu_i F_i)$$

Note that the ZIP model (as well as the ZINB model) exhibits overdispersion, since $V(y_i \mid x_i, z_i) > E(y_i \mid x_i, z_i)$. In general, the log-likelihood function of the ZIP model is

$$\mathcal{L} = \sum_{i=1}^{N} \ln\left[ P(y_i \mid x_i, z_i) \right]$$

Once a specific link function (either logistic or standard normal) for the probability $\varphi_i$ is chosen, it is possible to write the exact expressions for the log-likelihood function and the gradient.

ZIP Model with Logistic Link Function

First, consider the ZIP model in which the probability $\varphi_i$ is expressed with a logistic link function, namely

$$\varphi_i = \frac{\exp(z_i'\gamma)}{1 + \exp(z_i'\gamma)}$$

The log-likelihood function is

$$\mathcal{L} = \sum_{\{i:\, y_i = 0\}} \ln\left[ \exp(z_i'\gamma) + \exp(-\exp(x_i'\beta)) \right] + \sum_{\{i:\, y_i > 0\}} \left[ y_i x_i'\beta - \exp(x_i'\beta) - \sum_{k=2}^{y_i} \ln(k) \right] - \sum_{i=1}^{N} \ln\left[ 1 + \exp(z_i'\gamma) \right]$$

The gradient for this model is given by

$$\frac{\partial \mathcal{L}}{\partial \gamma} = \sum_{\{i:\, y_i = 0\}} \left[ \frac{\exp(z_i'\gamma)}{\exp(z_i'\gamma) + \exp(-\exp(x_i'\beta))} \right] z_i - \sum_{i=1}^{N} \left[ \frac{\exp(z_i'\gamma)}{1 + \exp(z_i'\gamma)} \right] z_i$$

$$\frac{\partial \mathcal{L}}{\partial \beta} = \sum_{\{i:\, y_i = 0\}} \left[ \frac{-\exp(x_i'\beta)\, \exp(-\exp(x_i'\beta))}{\exp(z_i'\gamma) + \exp(-\exp(x_i'\beta))} \right] x_i + \sum_{\{i:\, y_i > 0\}} \left[ y_i - \exp(x_i'\beta) \right] x_i$$

ZIP Model with Standard Normal Link Function

Next, consider the ZIP model in which the probability $\varphi_i$ is expressed with a standard normal link function: $\varphi_i = \Phi(z_i'\gamma)$. The log-likelihood function is

$$\mathcal{L} = \sum_{\{i:\, y_i = 0\}} \ln\left\{ \Phi(z_i'\gamma) + \left[ 1 - \Phi(z_i'\gamma) \right] \exp(-\exp(x_i'\beta)) \right\} + \sum_{\{i:\, y_i > 0\}} \left\{ \ln\left[ 1 - \Phi(z_i'\gamma) \right] - \exp(x_i'\beta) + y_i x_i'\beta - \sum_{k=2}^{y_i} \ln(k) \right\}$$

The gradient for this model is given by

$$\frac{\partial \mathcal{L}}{\partial \gamma} = \sum_{\{i:\, y_i = 0\}} \frac{\varphi(z_i'\gamma) \left[ 1 - \exp(-\exp(x_i'\beta)) \right]}{\Phi(z_i'\gamma) + \left[ 1 - \Phi(z_i'\gamma) \right] \exp(-\exp(x_i'\beta))}\, z_i - \sum_{\{i:\, y_i > 0\}} \frac{\varphi(z_i'\gamma)}{1 - \Phi(z_i'\gamma)}\, z_i$$

$$\frac{\partial \mathcal{L}}{\partial \beta} = \sum_{\{i:\, y_i = 0\}} \frac{-\left[ 1 - \Phi(z_i'\gamma) \right] \exp(x_i'\beta)\, \exp(-\exp(x_i'\beta))}{\Phi(z_i'\gamma) + \left[ 1 - \Phi(z_i'\gamma) \right] \exp(-\exp(x_i'\beta))}\, x_i + \sum_{\{i:\, y_i > 0\}} \left[ y_i - \exp(x_i'\beta) \right] x_i$$

where $\varphi(\cdot)$ denotes the standard normal density function.

Zero-Inflated Negative Binomial Regression

The zero-inflated negative binomial (ZINB) model in PROC COUNTREG is based on the negative binomial model with quadratic variance function (p=2). The ZINB model is obtained by specifying a negative binomial distribution for the data generation process referred to earlier as Process 2:

$$g(y_i) = \frac{\Gamma(y_i + \alpha^{-1})}{y_i!\,\Gamma(\alpha^{-1})} \left( \frac{\alpha^{-1}}{\alpha^{-1} + \mu_i} \right)^{\alpha^{-1}} \left( \frac{\mu_i}{\alpha^{-1} + \mu_i} \right)^{y_i}$$

Thus the ZINB model is defined to be

$$P(y_i = 0 \mid x_i, z_i) = F_i + (1 - F_i)\,(1 + \alpha \mu_i)^{-\alpha^{-1}}$$
$$P(y_i \mid x_i, z_i) = (1 - F_i)\, \frac{\Gamma(y_i + \alpha^{-1})}{y_i!\,\Gamma(\alpha^{-1})} \left( \frac{\alpha^{-1}}{\alpha^{-1} + \mu_i} \right)^{\alpha^{-1}} \left( \frac{\mu_i}{\alpha^{-1} + \mu_i} \right)^{y_i}, \quad y_i > 0$$

In this case, the conditional expectation and conditional variance of $y_i$ are

$$E(y_i \mid x_i, z_i) = \mu_i (1 - F_i)$$
$$V(y_i \mid x_i, z_i) = E(y_i \mid x_i, z_i) \left[ 1 + \mu_i (F_i + \alpha) \right]$$

As with the ZIP model, the ZINB model exhibits overdispersion because the conditional variance exceeds the conditional mean.

ZINB Model with Logistic Link Function

In this model, the probability $\varphi_i$ is given by the logistic function, namely

$$\varphi_i = \frac{\exp(z_i'\gamma)}{1 + \exp(z_i'\gamma)}$$

The log-likelihood function is

$$\mathcal{L} = \sum_{\{i:\, y_i = 0\}} \ln\left[ \exp(z_i'\gamma) + \left(1 + \alpha \exp(x_i'\beta)\right)^{-\alpha^{-1}} \right] + \sum_{\{i:\, y_i > 0\}} \sum_{j=0}^{y_i - 1} \ln(j + \alpha^{-1})$$
$$\quad + \sum_{\{i:\, y_i > 0\}} \left\{ -\ln(y_i!) - (y_i + \alpha^{-1}) \ln\left(1 + \alpha \exp(x_i'\beta)\right) + y_i \ln(\alpha) + y_i x_i'\beta \right\} - \sum_{i=1}^{N} \ln\left[ 1 + \exp(z_i'\gamma) \right]$$

The gradient for this model is given by

$$\frac{\partial \mathcal{L}}{\partial \gamma} = \sum_{\{i:\, y_i = 0\}} \left[ \frac{\exp(z_i'\gamma)}{\exp(z_i'\gamma) + \left(1 + \alpha \exp(x_i'\beta)\right)^{-\alpha^{-1}}} \right] z_i - \sum_{i=1}^{N} \left[ \frac{\exp(z_i'\gamma)}{1 + \exp(z_i'\gamma)} \right] z_i$$

$$\frac{\partial \mathcal{L}}{\partial \beta} = \sum_{\{i:\, y_i = 0\}} \left[ \frac{-\exp(x_i'\beta) \left(1 + \alpha \exp(x_i'\beta)\right)^{-\alpha^{-1} - 1}}{\exp(z_i'\gamma) + \left(1 + \alpha \exp(x_i'\beta)\right)^{-\alpha^{-1}}} \right] x_i + \sum_{\{i:\, y_i > 0\}} \left[ \frac{y_i - \exp(x_i'\beta)}{1 + \alpha \exp(x_i'\beta)} \right] x_i$$

$$\frac{\partial \mathcal{L}}{\partial \alpha} = \sum_{\{i:\, y_i = 0\}} \frac{\alpha^{-2} \left[ \left(1 + \alpha \exp(x_i'\beta)\right) \ln\left(1 + \alpha \exp(x_i'\beta)\right) - \alpha \exp(x_i'\beta) \right]}{\exp(z_i'\gamma)\left(1 + \alpha \exp(x_i'\beta)\right)^{(1+\alpha)/\alpha} + \left(1 + \alpha \exp(x_i'\beta)\right)}$$
$$\quad + \sum_{\{i:\, y_i > 0\}} \left\{ -\alpha^{-2} \sum_{j=0}^{y_i - 1} \frac{1}{j + \alpha^{-1}} + \alpha^{-2} \ln\left(1 + \alpha \exp(x_i'\beta)\right) + \frac{y_i - \exp(x_i'\beta)}{\alpha \left(1 + \alpha \exp(x_i'\beta)\right)} \right\}$$

ZINB Model with Standard Normal Link Function

For this model, the probability $\varphi_i$ is specified with the standard normal distribution function: $\varphi_i = \Phi(z_i'\gamma)$. The log-likelihood function is

$$\mathcal{L} = \sum_{\{i:\, y_i = 0\}} \ln\left\{ \Phi(z_i'\gamma) + \left[ 1 - \Phi(z_i'\gamma) \right] \left(1 + \alpha \exp(x_i'\beta)\right)^{-\alpha^{-1}} \right\} + \sum_{\{i:\, y_i > 0\}} \ln\left[ 1 - \Phi(z_i'\gamma) \right]$$
$$\quad + \sum_{\{i:\, y_i > 0\}} \sum_{j=0}^{y_i - 1} \ln(j + \alpha^{-1}) - \sum_{\{i:\, y_i > 0\}} \ln(y_i!) - \sum_{\{i:\, y_i > 0\}} (y_i + \alpha^{-1}) \ln\left(1 + \alpha \exp(x_i'\beta)\right)$$
$$\quad + \sum_{\{i:\, y_i > 0\}} y_i \ln(\alpha) + \sum_{\{i:\, y_i > 0\}} y_i x_i'\beta$$

The gradient for this model is given by

$$\frac{\partial \mathcal{L}}{\partial \gamma} = \sum_{\{i:\, y_i = 0\}} \frac{\varphi(z_i'\gamma) \left[ 1 - \left(1 + \alpha \exp(x_i'\beta)\right)^{-\alpha^{-1}} \right]}{\Phi(z_i'\gamma) + \left[ 1 - \Phi(z_i'\gamma) \right] \left(1 + \alpha \exp(x_i'\beta)\right)^{-\alpha^{-1}}}\, z_i - \sum_{\{i:\, y_i > 0\}} \frac{\varphi(z_i'\gamma)}{1 - \Phi(z_i'\gamma)}\, z_i$$

$$\frac{\partial \mathcal{L}}{\partial \beta} = \sum_{\{i:\, y_i = 0\}} \frac{-\left[ 1 - \Phi(z_i'\gamma) \right] \exp(x_i'\beta) \left(1 + \alpha \exp(x_i'\beta)\right)^{-(1+\alpha)/\alpha}}{\Phi(z_i'\gamma) + \left[ 1 - \Phi(z_i'\gamma) \right] \left(1 + \alpha \exp(x_i'\beta)\right)^{-\alpha^{-1}}}\, x_i + \sum_{\{i:\, y_i > 0\}} \left[ \frac{y_i - \exp(x_i'\beta)}{1 + \alpha \exp(x_i'\beta)} \right] x_i$$

$$\frac{\partial \mathcal{L}}{\partial \alpha} = \sum_{\{i:\, y_i = 0\}} \frac{\left[ 1 - \Phi(z_i'\gamma) \right] \alpha^{-2} \left[ \left(1 + \alpha \exp(x_i'\beta)\right) \ln\left(1 + \alpha \exp(x_i'\beta)\right) - \alpha \exp(x_i'\beta) \right]}{\Phi(z_i'\gamma)\left(1 + \alpha \exp(x_i'\beta)\right)^{(1+\alpha)/\alpha} + \left[ 1 - \Phi(z_i'\gamma) \right] \left(1 + \alpha \exp(x_i'\beta)\right)}$$
$$\quad + \sum_{\{i:\, y_i > 0\}} \left\{ -\alpha^{-2} \sum_{j=0}^{y_i - 1} \frac{1}{j + \alpha^{-1}} + \alpha^{-2} \ln\left(1 + \alpha \exp(x_i'\beta)\right) + \frac{y_i - \exp(x_i'\beta)}{\alpha \left(1 + \alpha \exp(x_i'\beta)\right)} \right\}$$
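As a sketch tying the preceding formulas to syntax (the data set and variable names are hypothetical), a ZINB model with a logistic zero-inflation link can be requested as follows:

proc countreg data=mydata;
   model y = x1 x2 / dist=zinb;        /* negative binomial (p=2) count process */
   zeromodel y ~ z1 / link=logistic;   /* Process 1 probability, logistic link  */
run;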

Computational Resources

The time and memory required by PROC COUNTREG are proportional to the number of parameters in the model and the number of observations in the data set being analyzed. Less time and memory are required for smaller models and fewer observations. Also affecting these resources are the method chosen to calculate the variance-covariance matrix and the optimization method. All optimization methods available through the METHOD= option have similar memory use requirements. The processing time might differ for each method depending on the number of iterations and function calls needed. The data set is read into memory to save processing time. If there is not enough memory available to hold the data, the COUNTREG procedure stores the data in a utility file on disk and rereads the data as needed from this file. When this occurs, the execution time of the procedure increases substantially. The gradient and the variance-covariance matrix must be held in memory. If the model has p parameters including the intercept, then at least 8 × (p + p × (p + 1)/2) bytes are needed. Time is also a function of the number of iterations needed to converge to a solution for the model parameters. The number of iterations needed cannot be known in advance. The MAXITER= option can be used to limit the number of iterations that PROC COUNTREG does. The convergence criteria can be altered by nonlinear optimization options available in the PROC COUNTREG statement. For a list of all the nonlinear optimization options, see Chapter 6, "Nonlinear Optimization Methods."

Nonlinear Optimization Options

PROC COUNTREG uses the nonlinear optimization (NLO) subsystem to perform nonlinear optimization tasks. In the PROC COUNTREG statement, you can specify nonlinear optimization options that are then passed to the NLO subsystem. For a list of all the nonlinear optimization options, see Chapter 6, "Nonlinear Optimization Methods."

Covariance Matrix Types

The COUNTREG procedure enables you to specify the estimation method for the covariance matrix. The COVEST=HESSIAN option estimates the covariance matrix based on the inverse of the Hessian matrix, COVEST=OP uses the outer product of gradients, and COVEST=QML produces the covariance matrix based on both the Hessian and outer product matrices. Although all three methods produce asymptotically equivalent results, they differ in computational intensity and produce results that might differ in finite samples. The COVEST=OP option provides the covariance matrix that is typically the easiest to compute. In some cases, the OP approximation is considered more efficient than the Hessian or QML approximations because it contains fewer random elements. The QML approximation is computationally the most complex because both the outer product of gradients and the Hessian matrix are required. In most cases, OP or Hessian approximations are preferred to QML. The need for the QML approximation arises in some cases when the model is misspecified and the information matrix equality does not hold. The default is COVEST=HESSIAN.
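A sketch of selecting an estimator (the data set and model are hypothetical, and COVEST= is assumed to be given in the PROC COUNTREG statement):

proc countreg data=mydata covest=qml;   /* robust to information matrix inequality */
   model y = x1 x2 / dist=poisson;
run;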

Displayed Output

PROC COUNTREG produces the following displayed output.

Iteration History for Parameter Estimates

If you specify the ITPRINT or PRINTALL options in the PROC COUNTREG statement, PROC COUNTREG displays a table containing the following for each iteration. Note that some information is specific to the model-fitting procedure chosen (for example, Newton-Raphson, trust region, quasi-Newton).

• iteration number
• number of restarts since the fitting began
• number of function calls
• number of active constraints at the current solution
• value of the objective function (negative one times the log-likelihood value) at the current solution
• change in the objective function from previous iteration
• value of the maximum absolute gradient element
• step size (for Newton-Raphson and quasi-Newton methods)
• slope of the current search direction (for Newton-Raphson and quasi-Newton methods)
• lambda (for trust region method)
• radius value at current iteration (for trust region method)

Model Fit Summary

The Model Fit Summary output contains the following information:

• dependent (count) variable name
• number of observations used
• number of missing values in data set, if any
• data set name
• type of model that was fit
• offset variable name, if any
• zero-inflated link function, if any
• zero-inflated offset variable name, if any
• log-likelihood value at solution
• maximum absolute gradient at solution
• number of iterations
• AIC value at solution (a smaller value indicates better fit)
• SBC value at solution (a smaller value indicates better fit)

Under the Model Fit Summary is a statement about whether the algorithm successfully converged.

Parameter Estimates

The "Parameter Estimates" table in the displayed output gives the estimates for the ZI intercept and ZI explanatory variables; they are labeled with the prefix "Inf_". For example, the ZI intercept is labeled "Inf_intercept". If you specify "Age" (a variable in your data set) as a ZI explanatory variable, then the "Parameter Estimates" table labels the corresponding parameter estimate "Inf_Age". If you do not list any ZI explanatory variables (in the ZEROMODEL statement), then only the ZI intercept term is estimated. "_Alpha" is the negative binomial dispersion parameter. The t statistic given for "_Alpha" is a test of overdispersion.

Last Evaluation of the Gradient

If you specify the model option ITPRINT, the COUNTREG procedure displays the last evaluation of the gradient vector.

Covariance of Parameter Estimates

If you specify the COVB option in the MODEL statement, the COUNTREG procedure displays the estimated covariance matrix, defined as the inverse of the information matrix at the final iteration.

Correlation of Parameter Estimates

If you specify the CORRB option in the MODEL statement, PROC COUNTREG displays the estimated correlation matrix. It is based on the Hessian matrix used at the final iteration.

OUTPUT OUT= Data Set

The OUTPUT statement creates a new SAS data set that contains all the variables in the input data set and, optionally, the estimates of $x_i'\beta$, the expected value of the response variable, and the probability of the response variable taking on the current value. Furthermore, if a zero-inflated model was fit, you can request that the output data set contain the estimates of $z_i'\gamma$ and the probability that the response is zero as a result of the zero-generating process. These statistics can be computed for all observations in which the regressors are not missing, even if the response is missing. By adding observations with missing response values to the input data set, you can compute these statistics for new observations or for settings of the regressors not present in the data without affecting the model fit.
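For illustration, the following sketch requests these statistics for a zero-inflated model; the data set, variable, and output names are hypothetical, and the statistic keywords XBETA=, PRED=, PROB=, and PROBZERO= are assumptions here rather than a definitive list:

proc countreg data=mydata;
   model y = x1 x2 / dist=zip;
   zeromodel y ~ z1;
   output out=pred xbeta=xb       /* estimate of x'beta                      */
                   pred=mu        /* expected value of the response          */
                   prob=p         /* probability of the observed count       */
                   probzero=p0;   /* probability of zero from the zero process */
run;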


OUTEST= Data Set

The OUTEST= data set is made up of one row (with _TYPE_='PARM') that contains each of the parameter estimates in the model. The second row (with _TYPE_='STD') contains the standard errors for the parameter estimates in the model. If you use the COVOUT option in the PROC COUNTREG statement, the OUTEST= data set also contains the covariance matrix for the parameter estimates. The covariance matrix appears in the observations with _TYPE_='COV', and the _NAME_ variable labels the rows with the parameter names.
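For example (names hypothetical), the following statements write the estimates, standard errors, and covariance matrix to a data set and then display it:

proc countreg data=mydata outest=est covout;
   model y = x1 x2 / dist=poisson;
run;

proc print data=est;
run;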

ODS Table Names

PROC COUNTREG assigns a name to each table it creates. You can use these names to denote the table when using the Output Delivery System (ODS) to select tables and create output data sets. These names are listed in Table 10.2.

Table 10.2 ODS Tables Produced in PROC COUNTREG

ODS Table Name               Description                          Option
ODS Tables Created by the MODEL Statement
FitSummary                   Summary of Nonlinear Estimation      default
ConvergenceStatus            Convergence Status                   default
ParameterEstimates           Parameter Estimates                  default
CovB                         Covariance of Parameter Estimates    COVB
CorrB                        Correlation of Parameter Estimates   CORRB
InputOptions                 Input Options                        ITPRINT
IterStart                    Optimization Start                   ITPRINT
IterHist                     Iteration History                    ITPRINT
IterStop                     Optimization Results                 ITPRINT
ParameterEstimatesResults    Parameter Estimates                  ITPRINT
ParameterEstimatesStart      Parameter Estimates                  ITPRINT
ProblemDescription           Problem Description                  ITPRINT
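These names can be used in an ODS OUTPUT statement to capture tables as data sets; a minimal sketch with a hypothetical data set and model:

ods output FitSummary=fit ParameterEstimates=pe;
proc countreg data=mydata;
   model y = x1 x2 / dist=poisson;
run;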

Examples: COUNTREG Procedure

Example 10.1: Basic Models

Data Description and Objective

The data set docvisit contains information for approximately 5,000 Australian individuals about the number and possible determinants of doctor visits that were made during a two-week interval. This data set contains a subset of variables taken from the Racd3 data set used by Cameron and Trivedi (1998). The variable doctorco represents doctor visits. Additional variables in the data set that you want to evaluate as determinants of doctor visits include sex (coded 0=male, 1=female), age (age in years divided by 100, with more than 70 coded as 72), illness (number of illnesses during the two-week interval, with five or more coded as five), income (annual income in Australian dollars divided by 1,000), and hscore (a general health questionnaire score, where a high score indicates bad health). Summary statistics for these variables are computed in the following statements and presented in Output 10.1.1. In the rest of this example some possible applications of the COUNTREG procedure in this context are presented.

proc means data=docvisit;
   var doctorco sex age illness income hscore;
run;

Output 10.1.1 Summary Statistics

The MEANS Procedure

Variable        N          Mean       Std Dev       Minimum       Maximum
--------------------------------------------------------------------------
doctorco     5190     0.3017341     0.7981338             0     9.0000000
sex          5190     0.5206166     0.4996229             0     1.0000000
age          5190     0.4063854     0.2047818     0.1900000     0.7200000
illness      5190     1.4319846     1.3841524             0     5.0000000
income       5190     0.5831599     0.3689067             0     1.5000000
hscore       5190     1.2175337     2.1242665             0    12.0000000
--------------------------------------------------------------------------

Poisson Models

These statements fit a Poisson model to the data by using the covariates SEX, ILLNESS, INCOME, and HSCORE:

/*-- Poisson Model --*/
proc countreg data=docvisit;
   model doctorco=sex illness income hscore / dist=poisson printall;
run;

In this example, the DIST= option in the MODEL statement specifies the POISSON distribution. In addition, the PRINTALL option displays the correlation and covariance matrices for the parameters, log-likelihood values, and convergence information in addition to the parameter estimates. The parameter estimates for this model are shown in Output 10.1.2.

Output 10.1.2 Parameter Estimates of Poisson Model

The COUNTREG Procedure
Parameter Estimates

Parameter    DF     Estimate    Standard Error    t Value    Approx Pr > |t|
Intercept     1    -1.855552          0.074545     -24.89             <.0001
sex           1     0.235583          0.054362       4.33             <.0001
illness       1     0.270326          0.017080      15.83             <.0001
income        1    -0.242095          0.077829      -3.11             0.0019
hscore        1     0.096313          0.009089      10.60             <.0001

ACF2STD

$$\mathrm{Flag}(\hat{\rho}(h)) = \begin{cases} 1 & \hat{\rho}(h) > 2\,\mathrm{Std}(\hat{\rho}(h)) \\ 0 & -2\,\mathrm{Std}(\hat{\rho}(h)) < \hat{\rho}(h) < 2\,\mathrm{Std}(\hat{\rho}(h)) \\ -1 & \hat{\rho}(h) < -2\,\mathrm{Std}(\hat{\rho}(h)) \end{cases}$$

White Noise Statistics

WN         $Q(h) = T(T+2) \sum_{j=1}^{h} \hat{\rho}(j)^2 / (T - j)$
WN         $Q(h) = \sum_{j=1}^{h} N_j\, \hat{\rho}(j)^2$ when embedded missing values are present
WNPROB     $\mathrm{Prob}(Q(h))$ computed from a chi-square distribution with $\max(1,\, h - p)$ degrees of freedom
WNLPROB    $\mathrm{LogProb}(Q(h)) = -\log_{10}(\mathrm{Prob}(Q(h)))$

Cross-Correlation Analysis

Cross-correlation analysis can be performed on the working series by specifying the OUTCROSSCORR= option or one of the CROSSPLOTS= options that are associated with cross-correlation. The CROSSCORR statement enables you to specify options that are related to cross-correlation analysis.

Cross-Correlation Statistics

LAGS       $h \in \{0, \ldots, H\}$
N          $N_h$ is the number of observed products at lag $h$, ignoring missing values
CCOV       $\hat{\gamma}_{xy}(h) = \frac{1}{T} \sum_{t=h+1}^{T} (x_t - \bar{x})(y_{t-h} - \bar{y})$
CCOV       $\hat{\gamma}_{xy}(h) = \frac{1}{N_h} \sum_{t=h+1}^{T} (x_t - \bar{x})(y_{t-h} - \bar{y})$ when embedded missing values are present
CCF        $\hat{\rho}_{xy}(h) = \hat{\gamma}_{xy}(h) / \sqrt{\hat{\gamma}_x(0)\,\hat{\gamma}_y(0)}$
CCFSTD     $\mathrm{Std}(\hat{\rho}_{xy}(h)) = 1/\sqrt{N_0}$
CCFNORM    $\mathrm{Norm}(\hat{\rho}_{xy}(h)) = \hat{\rho}_{xy}(h) / \mathrm{Std}(\hat{\rho}_{xy}(h))$
CCFPROB    $\mathrm{Prob}(\hat{\rho}_{xy}(h)) = 2\left(1 - \Phi(|\mathrm{Norm}(\hat{\rho}_{xy}(h))|)\right)$
CCFLPROB   $\mathrm{LogProb}(\hat{\rho}_{xy}(h)) = -\log_{10}(\mathrm{Prob}(\hat{\rho}_{xy}(h)))$
CCF2STD    $\mathrm{Flag}(\hat{\rho}_{xy}(h)) = \begin{cases} 1 & \hat{\rho}_{xy}(h) > 2\,\mathrm{Std}(\hat{\rho}_{xy}(h)) \\ 0 & -2\,\mathrm{Std}(\hat{\rho}_{xy}(h)) < \hat{\rho}_{xy}(h) < 2\,\mathrm{Std}(\hat{\rho}_{xy}(h)) \\ -1 & \hat{\rho}_{xy}(h) < -2\,\mathrm{Std}(\hat{\rho}_{xy}(h)) \end{cases}$

Data Set Output

The TIMESERIES procedure can create the OUT=, OUTCORR=, OUTCROSSCORR=, OUTDECOMP=, OUTSEASON=, OUTSUM=, and OUTTREND= data sets. In general, these data sets contain the variables listed in the BY statement. If an analysis step related to an output data step fails, the values of this step are not recorded or are set to missing in the related output data set, and appropriate error or warning messages are recorded in the log.

OUT= Data Set

The OUT= data set contains the variables specified in the BY, ID, VAR, and CROSSVAR statements. If the ID statement is specified, the ID variable values are aligned and extended based on the ALIGN= and INTERVAL= options. The values of the variables specified in the VAR and CROSSVAR statements are accumulated based on the ACCUMULATE= option, and missing values are interpreted based on the SETMISSING= option.
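As a sketch of producing these data sets (data set and variable names hypothetical), the following statements accumulate two series to a monthly interval, write the accumulated series to the OUT= data set, and write cross-correlation statistics to the OUTCROSSCORR= data set:

proc timeseries data=mydata out=accum outcrosscorr=ccorr;
   id date interval=month accumulate=total;
   var x;
   crossvar y;
   crosscorr ccf ccfprob / nlag=12;
run;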

OUTCORR= Data Set

The OUTCORR= data set contains the variables specified in the BY statement as well as the variables listed below. The OUTCORR= data set records the correlations for each variable specified in a VAR statement (not the CROSSVAR statement).

When the CORR statement TRANSPOSE=NO option is omitted or specified explicitly, the variable names are related to correlation statistics specified in the CORR statement options and the variable values are related to the NLAG= or LAGS= option.

_NAME_       variable name
LAG          time lag
N            number of variance products
ACOV         autocovariances
ACF          autocorrelations
ACFSTD       autocorrelation standard errors
ACF2STD      two standard errors beyond autocorrelation
ACFNORM      normalized autocorrelations
ACFPROB      autocorrelation probabilities
ACFLPROB     autocorrelation log probabilities
PACF         partial autocorrelations
PACFSTD      partial autocorrelation standard errors
PACF2STD     two standard errors beyond partial autocorrelation
PACFNORM     partial normalized autocorrelations
PACFPROB     partial autocorrelation probabilities
PACFLPROB    partial autocorrelation log probabilities
IACF         inverse autocorrelations
IACFSTD      inverse autocorrelation standard errors
IACF2STD     two standard errors beyond inverse autocorrelation
IACFNORM     normalized inverse autocorrelations
IACFPROB     inverse autocorrelation probabilities
IACFLPROB    inverse autocorrelation log probabilities
WN           white noise test statistics
WNPROB       white noise test probabilities
WNLPROB      white noise test log probabilities

The preceding correlation statistics are computed for each specified time lag.

When the CORR statement TRANSPOSE=YES option is specified, the variable values are related to correlation statistics specified in the CORR statement and the variable names are related to the NLAG= or LAGS= options.

_NAME_       variable name
_STAT_       correlation statistic name
_LABEL_      correlation statistic label
LAGh         correlation statistics for lag h
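For example, the following sketch (hypothetical data set) writes selected autocorrelation statistics in transposed form, so that each statistic occupies one row with LAG0, LAG1, and subsequent columns:

proc timeseries data=mydata outcorr=corr;
   id date interval=month;
   var x;
   corr acf acfstd wn / nlag=24 transpose=yes;
run;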

OUTCROSSCORR= Data Set

The OUTCROSSCORR= data set contains the variables specified in the BY statement as well as the variables listed below. The OUTCROSSCORR= data set records the cross-correlations for each variable specified in a VAR and the CROSSVAR statements.

When the CROSSCORR statement TRANSPOSE=NO option is omitted or specified explicitly, the variable names are related to cross-correlation statistics specified in the CROSSCORR statement options and the variable values are related to the NLAG= or LAGS= option.

_NAME_       variable name
_CROSS_      cross variable name
LAG          time lag
N            number of variance products
CCOV         cross-covariances
CCF          cross-correlations
CCFSTD       cross-correlation standard errors
CCF2STD      two standard errors beyond cross-correlation
CCFNORM      normalized cross-correlations
CCFPROB      cross-correlation probabilities
CCFLPROB     cross-correlation log probabilities

The preceding cross-correlation statistics are computed for each specified time lag.

When the CROSSCORR statement TRANSPOSE=YES option is specified, the variable values are related to cross-correlation statistics specified in the CROSSCORR statement and the variable names are related to the NLAG= or LAGS= options.

_NAME_       variable name
_CROSS_      cross variable name
_STAT_       cross-correlation statistic name
_LABEL_      cross-correlation statistic label
LAGh         cross-correlation statistics for lag h

OUTDECOMP= Data Set

The OUTDECOMP= data set contains the variables specified in the BY statement as well as the variables listed below. The OUTDECOMP= data set records the seasonal decomposition/adjustments for each variable specified in a VAR statement (not the CROSSVAR statement).

When the DECOMP statement TRANSPOSE=NO option is omitted or specified explicitly, the variable names are related to decomposition/adjustments specified in the DECOMP statement and the variable values are related to the ID statement INTERVAL= option and the PROC TIMESERIES statement SEASONALITY= option.

_NAME_       variable name
_MODE_       mode of decomposition
_TIMEID_     time ID values
_SEASON_     seasonal index
ORIGINAL     original series values
TCC          trend-cycle component
SIC          seasonal-irregular component
SC           seasonal component
SCSTD        seasonal component standard errors
TCS          trend-cycle-seasonal component
IC           irregular component
SA           seasonally adjusted series
PCSA         percent change seasonally adjusted series
TC           trend component
CC           cycle component

The preceding decomposition components are computed for each time period.

When the DECOMP statement TRANSPOSE=YES option is specified, the variable values are related to decomposition/adjustments specified in the DECOMP statement and the variable names are related to the ID statement INTERVAL= option, the PROC TIMESERIES statement SEASONALITY= option, and the DECOMP statement NPERIODS= option.

_NAME_       variable name
_MODE_       mode of decomposition name
_COMP_       decomposition component name
_LABEL_      decomposition component label
PERIODt      decomposition component value for time period t

OUTSEASON= Data Set

The OUTSEASON= data set contains the variables specified in the BY statement as well as the variables listed below. The OUTSEASON= data set records the seasonal statistics for each variable specified in a VAR statement (not the CROSSVAR statement).

When the SEASON statement TRANSPOSE=NO option is omitted or specified explicitly, the variable names are related to seasonal statistics specified in the SEASON statement and the variable values are related to the ID statement INTERVAL= option or the PROC TIMESERIES statement SEASONALITY= option.

_NAME_       variable name
_TIMEID_     time ID values
_SEASON_     seasonal index
NOBS         number of observations
N            number of nonmissing observations
NMISS        number of missing observations
MINIMUM      minimum value
MAXIMUM      maximum value
RANGE        range value
SUM          summation value
MEAN         mean value
STDDEV       standard deviation
CSS          corrected sum of squares
USS          uncorrected sum of squares
MEDIAN       median value

The preceding statistics are computed for each season.

When the SEASON statement TRANSPOSE=YES option is specified, the variable values are related to seasonal statistics specified in the SEASON statement and the variable names are related to the ID statement INTERVAL= option or the PROC TIMESERIES statement SEASONALITY= option.

_NAME_       variable name
_STAT_       season statistic name
_LABEL_      season statistic label
SEASONs      season statistic value for season s

OUTSUM= Data Set

The OUTSUM= data set contains the variables specified in the BY statement as well as the variables listed below. The OUTSUM= data set records the descriptive statistics for each variable specified in a VAR statement (not the CROSSVAR statement).

Variables related to descriptive statistics are based on the ACCUMULATE= and SETMISSING= options in the ID and VAR statements:

_NAME_       variable name
_STATUS_     status flag that indicates whether the requested analyses were successful
NOBS         number of observations
N            number of nonmissing observations
NMISS        number of missing observations
MINIMUM      minimum value
MAXIMUM      maximum value
AVG          average value
STDDEV       standard deviation

The OUTSUM= data set contains the descriptive statistics of the (accumulated) time series.

OUTTREND= Data Set

The OUTTREND= data set contains the variables specified in the BY statement as well as the variables listed below. The OUTTREND= data set records the trend statistics for each variable specified in a VAR statement (not the CROSSVAR statement).

When the TREND statement TRANSPOSE=NO option is omitted or explicitly specified, the variable names are related to trend statistics specified in the TREND statement and the variable values are related to the ID statement INTERVAL= option or the PROC TIMESERIES statement SEASONALITY= option.

_NAME_       variable name
_TIMEID_     time ID values
_SEASON_     seasonal index
NOBS         number of observations
N            number of nonmissing observations
NMISS        number of missing observations
MINIMUM      minimum value
MAXIMUM      maximum value
RANGE        range value
SUM          summation value
MEAN         mean value
STDDEV       standard deviation
CSS          corrected sum of squares
USS          uncorrected sum of squares
MEDIAN       median value

The preceding statistics are computed for each time period.

When the TREND statement TRANSPOSE=YES option is specified, the variable values are related to trend statistics specified in the TREND statement and the variable names are related to the ID statement INTERVAL= option, the PROC TIMESERIES statement SEASONALITY= option, and the TREND statement NPERIODS= option.

_NAME_       variable name
_STAT_       trend statistic name
_LABEL_      trend statistic label
PERIODt      trend statistic value for time period t

_STATUS_ Variable Values

The _STATUS_ variable contains a code that specifies whether the analysis has been successful or not. The _STATUS_ variable can take the following values:

0        success
1000     transactional trend statistics failure
2000     transactional seasonal statistics failure
3000     accumulation failure
4000     missing value interpretation failure
6000     series is all missing
7000     transformation failure
8000     differencing failure
9000     unable to compute descriptive statistics
10000    seasonal decomposition failure
11000    correlation analysis failure
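Because _STATUS_ is recorded in the output data sets, these codes can be used to screen for failed analyses; a minimal sketch with hypothetical names:

proc timeseries data=mydata outsum=sum;
   by store;
   id date interval=month;
   var sales;
run;

data failed;
   set sum;
   where _STATUS_ ne 0;   /* keep only series whose analyses failed */
run;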

Printed Output

The TIMESERIES procedure optionally produces printed output by using the Output Delivery System (ODS). By default, the procedure produces no printed output. All output is controlled by the PRINT= and PRINTDETAILS options associated with the PROC TIMESERIES statement. In general, if an analysis step related to printed output fails, the values of this step are not printed and appropriate error or warning messages are recorded in the log. The printed output is similar to the output data sets. The printed output produced by different printing option values is described as follows:

PRINT=DECOMP       prints the seasonal decomposition similar to the OUTDECOMP= data set.
PRINT=DESCSTATS    prints a table of descriptive statistics for each variable.
PRINT=SEASONS      prints the seasonal statistics similar to the OUTSEASON= data set.
PRINT=SUMMARY      prints the summary statistics similar to the OUTSUM= data set.
PRINT=TRENDS       prints the trend statistics similar to the OUTTREND= data set.
PRINTDETAILS       prints each table with greater detail. If PRINT=SEASONS and the PRINTDETAILS options are both specified, all seasonal statistics are printed.
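For instance (hypothetical data set), descriptive statistics and detailed seasonal statistics could be printed together:

proc timeseries data=mydata print=(descstats seasons) printdetails;
   id date interval=qtr accumulate=avg;
   var x;
run;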

ODS Table Names

Table 27.3 relates the PRINT= options to ODS tables:

Table 27.3 ODS Tables Produced in PROC TIMESERIES

ODS Table Name           Description               Statement    Option
SeasonalDecomposition    seasonal decomposition    PRINT        DECOMP
DescStats                descriptive statistics    PRINT        DESCSTATS
GlobalStatistics         global statistics         PRINT        SEASONS
SeasonStatistics         season statistics         PRINT        SEASONS
StatisticsSummary        statistics summary        PRINT        SUMMARY
TrendStatistics          trend statistics          PRINT        TRENDS
GlobalStatistics         global statistics         PRINT        TRENDS

The tables are related to a single series within a BY group.

ODS Graphics Names

This section describes the graphical output produced by the TIMESERIES procedure. To request these graphs, you must specify the ODS GRAPHICS ON; statement in your SAS program before the PROC TIMESERIES step, and you must specify the PLOTS= or CROSSPLOTS= option in the PROC TIMESERIES statement. PROC TIMESERIES assigns a name to each graph it creates. These names are listed in Table 27.4.

Table 27.4 ODS Graphics Produced by PROC TIMESERIES

ODS Graph Name                    Plot Description                                Statement     Option
ACFPlot                           autocorrelation function                        PLOTS         ACF
ACFNORMPlot                       normalized autocorrelation function             PLOTS         ACF
CCFNORMPlot                       normalized cross-correlation function           CROSSPLOTS    CCF
CCFPlot                           cross-correlation function                      CROSSPLOTS    CCF
CorrelationPlots                  correlation graphics panel                      PLOTS         CORR
CrossSeriesPlot                   cross series plot                               CROSSPLOTS    SERIES
CycleComponentPlot                cycle component                                 PLOTS         CC
DecompositionPlots                decomposition graphics panel                    PLOTS         DECOMP
IACFPlot                          inverse autocorrelation function                PLOTS         IACF
IACFNORMPlot                      normalized inverse autocorrelation function     PLOTS         IACF
IrregularComponentPlot            irregular component                             PLOTS         IC
PACFPlot                          partial autocorrelation function                PLOTS         PACF
PACFNORMPlot                      standardized partial autocorrelation function   PLOTS         PACF
PercentChangeAdjustedPlot         percent-change seasonally adjusted              PLOTS         PCSA
ResidualPlot                      residual time series plot                       PLOTS         RESIDUAL
SeasonallyAdjustedPlot            seasonally adjusted                             PLOTS         SA
SeasonalComponentPlot             seasonal component                              PLOTS         SC
SeasonalCyclePlot                 seasonal cycles plot                            PLOTS         CYCLES
SeasonalIrregularComponentPlot    seasonal-irregular component                    PLOTS         SIC
SeriesPlot                        time series plot                                PLOTS         SERIES
TrendComponentPlot                trend component                                 PLOTS         TC
TrendCycleComponentPlot           trend-cycle component                           PLOTS         TCC
TrendCycleSeasonalPlot            trend-cycle-seasonal component                  PLOTS         TCS
WhiteNoiseLogProbabilityPlot      white noise log probability                     PLOTS         WN
WhiteNoiseProbabilityPlot         white noise probability                         PLOTS         WN

Examples: TIMESERIES Procedure

Example 27.1: Accumulating Transactional Data into Time Series Data

This example illustrates using the TIMESERIES procedure to accumulate time-stamped transactional data that has been recorded at no particular frequency into time series data at a specific frequency. After the time series is created, the various SAS/ETS procedures related to time series analysis, seasonal adjustment/decomposition, modeling, and forecasting can be used to further analyze the time series data.

Suppose that the input data set WORK.RETAIL contains variables STORE and TIMESTAMP and numerous other numeric transaction variables. The BY variable STORE contains values that break up the transactions into groups (BY groups). The time ID variable TIMESTAMP contains SAS date values recorded at no particular frequency. The other data set variables contain the numeric transaction values to be analyzed. It is further assumed that the input data set is sorted by the variables STORE and TIMESTAMP.

The following statements form monthly time series from the transactional data based on the median value (ACCUMULATE=MEDIAN) of the transactions recorded within each time period. Also, the accumulated time series values for time periods with no transactions are set to zero instead of to missing (SETMISS=0), and only transactions recorded between the first day of 1998 (START='01JAN1998'D) and the last day of 2000 (END='31DEC2000'D) are considered and, if needed, extended to include this range.

proc timeseries data=retail out=mseries;
   by store;
   id timestamp interval=month
                accumulate=median
                setmiss=0
                start='01jan1998'd
                end  ='31dec2000'd;
   var _numeric_;
run;

The monthly time series data are stored in the data set WORK.MSERIES. Each BY group associated with the BY variable STORE contains an observation for each of the 36 months associated with the years 1998, 1999, and 2000. Each observation contains the variable STORE, TIMESTAMP, and each of the analysis variables in the input data set.

After each set of transactions has been accumulated to form corresponding time series, the accumulated time series can be analyzed using various time series analysis techniques. For example, exponentially weighted moving averages can be used to smooth each series. The following statements use the EXPAND procedure to smooth the analysis variable named STOREITEM:

proc expand data=mseries out=smoothed from=month;
   by store;
   id timestamp;
   convert storeitem=smooth / transform=(ewma 0.1);
run;

The smoothed series are stored in the data set WORK.SMOOTHED. The variable SMOOTH contains the smoothed series.

If the time ID variable TIMESTAMP contains SAS datetime values instead of SAS date values, the INTERVAL=, START=, and END= options must be changed accordingly, and the following statements could be used:

proc timeseries data=retail out=tseries;
   by store;
   id timestamp interval=dtmonth
                accumulate=median
                setmiss=0
                start='01jan1998:00:00:00'dt
                end  ='31dec2000:00:00:00'dt;
   var _numeric_;
run;

The monthly time series data are stored in the data set WORK.TSERIES, and the time ID values use a SAS datetime representation.

Example 27.2: Trend and Seasonal Analysis

This example illustrates using the TIMESERIES procedure for trend and seasonal analysis of time-stamped transactional data.

Suppose that the data set SASHELP.AIR contains two variables: DATE and AIR. The variable DATE contains sorted SAS date values recorded at no particular frequency. The variable AIR contains the transaction values to be analyzed.

The following statements accumulate the transactional data on an average basis to form a quarterly time series and perform trend and seasonal analysis on the transactions:

proc timeseries data=sashelp.air
                out=series
                outtrend=trend
                outseason=season
                print=seasons;
   id date interval=qtr accumulate=avg;
   var air;
run;

The time series is stored in the data set WORK.SERIES, the trend statistics are stored in the data set WORK.TREND, and the seasonal statistics are stored in the data set WORK.SEASON. Additionally, the seasonal statistics are printed (PRINT=SEASONS) and the results of the seasonal analysis are shown in Output 27.2.1.

Output 27.2.1 Seasonal Statistics Table

The TIMESERIES Procedure
Season Statistics for Variable AIR

Season                                                                Standard
 Index     N     Minimum     Maximum         Sum        Mean        Deviation
     1    36    112.0000    419.0000     8963.00    248.9722         95.65189
     2    36    121.0000    535.0000    10207.00    283.5278        117.61839
     3    36    136.0000    622.0000    12058.00    334.9444        143.97935
     4    36    104.0000    461.0000     9135.00    253.7500        101.34732

Using the trend statistics stored in the WORK.TREND data set, the following statements plot various trend statistics associated with each time period over time:

title1 "Trend Statistics";
proc sgplot data=trend;
   series x=date y=max / lineattrs=(pattern=solid);
   series x=date y=mean / lineattrs=(pattern=solid);
   series x=date y=min / lineattrs=(pattern=solid);
   yaxis display=(nolabel);
   format date year4.;
run;

The results of this trend analysis are shown in Output 27.2.2.

Output 27.2.2 Trend Statistics Plot

Using the trend statistics stored in the WORK.TREND data set, the following statements chart the sum of the transactions associated with each time period for the second season over time:

title1 "Trend Statistics for 2nd Season";
proc sgplot data=trend;
   where _season_ = 2;
   vbar date / freq=sum;
   format date year4.;
   yaxis label='Sum';
run;

The results of this trend analysis are shown in Output 27.2.3.

Output 27.2.3 Trend Statistics Bar Chart

Using the trend statistics stored in the WORK.TREND data set, the following statements plot the mean of the transactions associated with each time period by each year over time:

data trend;
   set trend;
   year = year(date);
run;

title1 "Trend Statistics by Year";
proc sgplot data=trend;
   series x=_season_ y=mean / group=year lineattrs=(pattern=solid);
   xaxis values=(1 to 4 by 1);
run;

The results of this trend analysis are shown in Output 27.2.4.

Output 27.2.4 Trend Statistics

Using the season statistics stored in the WORK.SEASON data set, the following statements plot various season statistics for each season:

title1 "Seasonal Statistics";
proc sgplot data=season;
   series x=_season_ y=max / lineattrs=(pattern=solid);
   series x=_season_ y=mean / lineattrs=(pattern=solid);
   series x=_season_ y=min / lineattrs=(pattern=solid);
   yaxis display=(nolabel);
   xaxis values=(1 to 4 by 1);
run;

The results of this seasonal analysis are shown in Output 27.2.5.

Output 27.2.5 Seasonal Statistics Plot

Example 27.3: Illustration of ODS Graphics

This example illustrates the use of ODS graphics. The following statements use the SASHELP.WORKERS data set to study the time series of electrical workers and its interaction with the series of masonry workers. The series plot, the correlation panel, the seasonal adjustment panel, and all cross-series plots are requested. Output 27.3.1 through Output 27.3.4 show a selection of the plots created.

The graphical displays are requested by specifying the ODS GRAPHICS statement and the PLOTS= or CROSSPLOTS= options in the PROC TIMESERIES statement. For information about the graphics available in the TIMESERIES procedure, see the section "ODS Graphics Names" on page 1714.

title "Illustration of ODS Graphics";
proc timeseries data=sashelp.workers out=_null_
                plots=(series corr decomp)
                crossplots=all;
   id date interval=month;
   var electric;
   crossvar masonry;
run;

Output 27.3.1 Series Plot

Output 27.3.2 Correlation Panel

Output 27.3.3 Seasonal Decomposition Panel

Output 27.3.4 Cross-Correlation Plot


Chapter 28
The TSCSREG Procedure

Contents
Overview: The TSCSREG Procedure . . . . . . . . . . . . . . . . . . . 1727
Getting Started: The TSCSREG Procedure . . . . . . . . . . . . . . . . 1728
    Specifying the Input Data . . . . . . . . . . . . . . . . . . . . 1728
    Unbalanced Data . . . . . . . . . . . . . . . . . . . . . . . . . 1728
    Specifying the Regression Model . . . . . . . . . . . . . . . . . 1729
    Estimation Techniques . . . . . . . . . . . . . . . . . . . . . . 1730
    Introductory Example . . . . . . . . . . . . . . . . . . . . . .  1731
Syntax: The TSCSREG Procedure . . . . . . . . . . . . . . . . . . . . 1733
    Functional Summary . . . . . . . . . . . . . . . . . . . . . . .  1733
    PROC TSCSREG Statement . . . . . . . . . . . . . . . . . . . . .  1734
    BY Statement . . . . . . . . . . . . . . . . . . . . . . . . . .  1735
    ID Statement . . . . . . . . . . . . . . . . . . . . . . . . . .  1735
    MODEL Statement . . . . . . . . . . . . . . . . . . . . . . . . . 1736
    TEST Statement . . . . . . . . . . . . . . . . . . . . . . . . .  1737
Details: The TSCSREG Procedure . . . . . . . . . . . . . . . . . . . . 1738
    ODS Table Names . . . . . . . . . . . . . . . . . . . . . . . . . 1738
Examples: The TSCSREG Procedure . . . . . . . . . . . . . . . . . . . 1739
Acknowledgments: TSCSREG Procedure . . . . . . . . . . . . . . . . . . 1739
References: TSCSREG Procedure . . . . . . . . . . . . . . . . . . . .  1740

Overview: The TSCSREG Procedure

The TSCSREG (time series cross section regression) procedure analyzes a class of linear econometric models that commonly arise when time series and cross-sectional data are combined. The TSCSREG procedure deals with panel data sets that consist of time series observations on each of several cross-sectional units.

The TSCSREG procedure is very similar to the PANEL procedure; for a full description, syntax details, models, and estimation methods, see Chapter 19, "The PANEL Procedure." The TSCSREG procedure is no longer being updated, and it shares the code base with the PANEL procedure.

Getting Started: The TSCSREG Procedure

Specifying the Input Data

The input data set used by the TSCSREG procedure must be sorted by cross section and by time within each cross section. Therefore, the first step in using PROC TSCSREG is to make sure that the input data set is sorted. Normally, the input data set contains a variable that identifies the cross section for each observation and a variable that identifies the time period for each observation.

To illustrate, suppose that you have a data set A that contains data over time for each of several states. You want to regress the variable Y on regressors X1 and X2. Cross sections are identified by the variable STATE, and time periods are identified by the variable DATE. The following statements sort the data set A appropriately:

proc sort data=a;
   by state date;
run;

The next step is to invoke the TSCSREG procedure and specify the cross section and time series variables in an ID statement. List the variables in the ID statement exactly as they are listed in the BY statement:

proc tscsreg data=a;
   id state date;

Alternatively, you can omit the ID statement and use the CS= and TS= options in the PROC TSCSREG statement to specify the number of cross sections in the data set and the number of time series observations in each cross section.

Unbalanced Data

In the case of fixed-effects and random-effects models, the TSCSREG procedure is capable of processing data with different numbers of time series observations across different cross sections. You must specify the ID statement to estimate models that use unbalanced data. The missing time series observations are recognized by the absence of time series ID variable values in some of the cross sections in the input data set. Moreover, if an observation with a particular time series ID value and cross-sectional ID value is present in the input data set, but one or more of the model variables are missing, that time series point is treated as missing for that cross section.

Specifying the Regression Model

Next, specify the linear regression model with a MODEL statement, as shown in the following statements:

proc tscsreg data=a;
   id state date;
   model y = x1 x2;
run;

The MODEL statement in PROC TSCSREG is specified like the MODEL statement in other SAS regression procedures: the dependent variable is listed first, followed by an equal sign, followed by the list of regressor variables.

The reason for using PROC TSCSREG instead of other SAS regression procedures is that you can incorporate a model for the structure of the random errors. It is important to consider what kind of error structure model is appropriate for your data and to specify the corresponding option in the MODEL statement.

The error structure options supported by the TSCSREG procedure are FIXONE, FIXTWO, RANONE, RANTWO, FULLER, PARKS, and DASILVA. See "Details: The TSCSREG Procedure" on page 1738 for more information about these methods and the error structures they assume.

By default, the two-way random-effects error model structure is used, while the Fuller-Battese and Wansbeek-Kapteyn methods are used for the estimation of variance components in balanced data and unbalanced data, respectively. Thus, the preceding example is the same as specifying the RANTWO option, as shown in the following statements:

proc tscsreg data=a;
   id state date;
   model y = x1 x2 / rantwo;
run;

You can specify more than one error structure option in the MODEL statement; the analysis is repeated using each method specified, as the sketch below illustrates. You can use any number of MODEL statements to estimate different regression models or estimate the same model by using different options.

In order to aid in model specification within this class of models, the procedure provides two specification test statistics. The first is an F statistic that tests the null hypothesis that the fixed-effects parameters are all zero. The second is a Hausman m-statistic that provides information about the appropriateness of the random-effects specification. It is based on the idea that, under the null hypothesis of no correlation between the effects variables and the regressors, OLS and GLS are consistent, but OLS is inefficient. Hence, a test can be based on the result that the covariance of an efficient estimator with its difference from an inefficient estimator is zero. Rejection of the null hypothesis might suggest that the fixed-effects model is more appropriate.

The procedure also provides the Buse R-square measure, which is the most appropriate goodness-of-fit measure for models estimated by using GLS. This number is interpreted as a measure of the proportion of the transformed sum of squares of the dependent variable that is attributable to the influence of the independent variables. In the case of OLS estimation, the Buse R-square measure is equivalent to the usual R-square measure.
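A sketch of both practices, reusing the data set and variables from the preceding statements:

proc tscsreg data=a;
   id state date;
   /* one MODEL statement with two error structures: the model is estimated once per option */
   model y = x1 x2 / fixone rantwo;
   /* a second MODEL statement estimating a different regression */
   model y = x1 / parks;
run;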

Estimation Techniques

If the effects are fixed, the models are essentially regression models with dummy variables that correspond to the specified effects. For fixed-effects models, ordinary least squares (OLS) estimation is equivalent to best linear unbiased estimation. The output from TSCSREG is identical to what one would obtain from creating dummy variables to represent the cross-sectional and time (fixed) effects. The output is presented in this manner to facilitate comparisons to the least squares dummy variables estimator (LSDV). As such, the inclusion of an intercept term implies that one dummy variable must be dropped. The actual estimation of the fixed-effects models is not LSDV, which is much too cumbersome to implement. Instead, TSCSREG operates in a two-step fashion. In the first step, the data are transformed as follows:

- One-way fixed-effects model: the data are transformed by removing the cross-sectional means from the dependent and independent variables:

  $\tilde{y}_{it} = y_{it} - \bar{y}_{i\cdot}$
  $\tilde{x}_{it} = x_{it} - \bar{x}_{i\cdot}$

- Two-way fixed-effects model: the data are transformed by removing the cross-sectional and time means and adding back the overall means:

  $\tilde{y}_{it} = y_{it} - \bar{y}_{i\cdot} - \bar{y}_{\cdot t} + \bar{\bar{y}}$
  $\tilde{x}_{it} = x_{it} - \bar{x}_{i\cdot} - \bar{x}_{\cdot t} + \bar{\bar{x}}$

where $y_{it}$ and $x_{it}$ are the dependent variable (a scalar) and the explanatory variables (a vector whose columns are the explanatory variables, not including a constant), respectively; $\bar{y}_{i\cdot}$ and $\bar{x}_{i\cdot}$ are cross section means; $\bar{y}_{\cdot t}$ and $\bar{x}_{\cdot t}$ are time means; and $\bar{\bar{y}}$ and $\bar{\bar{x}}$ are the overall means.

The second step consists of running OLS on the properly demeaned series, provided that the data are balanced. The unbalanced case is slightly more difficult, because the structure of the missing data must be retained. For this case, PROC TSCSREG uses a slight specialization on Wansbeek and Kapteyn.
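The one-way within transformation can also be carried out directly in SAS. The following is a minimal sketch, assuming a hypothetical balanced panel data set a with cross section variable csid, dependent variable y, and a single regressor x1; OLS on the demeaned series reproduces the FIXONE slope estimate, although the degrees of freedom reported by PROC REG (and hence the standard errors) are not adjusted for the estimated cross-sectional means:

proc sort data=a;
   by csid;
run;

/* compute the cross-sectional means of y and x1 */
proc means data=a noprint nway;
   class csid;
   var y x1;
   output out=csmeans mean=ybar x1bar;
run;

/* remove the cross-sectional means from each series */
data within;
   merge a csmeans;
   by csid;
   y_w  = y  - ybar;
   x1_w = x1 - x1bar;
run;

/* OLS on the demeaned data; no intercept is needed */
proc reg data=within;
   model y_w = x1_w / noint;
run;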


The other alternative is to assume that the effects are random. In the one-way case, $E(v_i) = 0$, $E(v_i^2) = \sigma_v^2$, and $E(v_i v_j) = 0$ for $i \neq j$, and $v_i$ is uncorrelated with $\epsilon_{it}$ for all $i$ and $t$. In the two-way case, in addition to all of the preceding, $E(e_t) = 0$, $E(e_t^2) = \sigma_e^2$, and $E(e_t e_s) = 0$ for $t \neq s$, and the $e_t$ are uncorrelated with the $v_i$ and the $\epsilon_{it}$ for all $i$ and $t$. Thus, the model is a variance components model, with the variance components $\sigma_v^2$, $\sigma_e^2$, and $\sigma_\epsilon^2$ to be estimated. A crucial implication of such a specification is that the effects are independent of the regressors. For random-effects models, the estimation method is an estimated generalized least squares (EGLS) procedure that involves estimating the variance components in the first stage and using the estimated variance covariance matrix thus obtained to apply generalized least squares (GLS) to the data.

Introductory Example

The following example uses the cost function data from Greene (1990) to estimate the variance components model. The variable OUTPUT is the log of output in millions of kilowatt-hours, and COST is the log of cost in millions of dollars. Refer to Greene (1990) for details.

title1;
data greene;
   input firm year output cost @@;
   df1 = firm = 1;
   df2 = firm = 2;
   df3 = firm = 3;
   df4 = firm = 4;
   df5 = firm = 5;
   d60 = year = 1960;
   d65 = year = 1965;
   d70 = year = 1970;
datalines;

   ... more lines ...

Usually you cannot explicitly specify all the explanatory variables that affect the dependent variable. The omitted or unobservable variables are summarized in the error disturbances. The TSCSREG procedure used with the RANTWO option specifies the two-way random-effects error model where the variance components are estimated by the Fuller-Battese method, because the data are balanced and the parameters are efficiently estimated by using the GLS method. The variance components model used by the Fuller-Battese method is

$y_{it} = \sum_{k=1}^{K} X_{itk} \beta_k + v_i + e_t + \epsilon_{it} \qquad i = 1, \ldots, N; \; t = 1, \ldots, T$

The following statements fit this model:

proc sort data=greene;
   by firm year;
run;


proc tscsreg data=greene;
   model cost = output / rantwo;
   id firm year;
run;

The TSCSREG procedure output is shown in Figure 28.1. A model description is printed first; it reports the estimation method used and the number of cross sections and time periods. The variance components estimates are printed next. Finally, the table of regression parameter estimates shows the estimates, standard errors, and t tests.

Figure 28.1 The Variance Components Estimates

                     The TSCSREG Procedure
          Fuller and Battese Variance Components (RanTwo)

                    Dependent Variable: cost

                       Model Description
   Estimation Method                                 RanTwo
   Number of Cross Sections                               6
   Time Series Length                                     4

                        Fit Statistics
   SSE            0.3481          DFE                    22
   MSE            0.0158          Root MSE           0.1258
   R-Square       0.8136

                 Variance Component Estimates
   Variance Component for Cross Sections           0.046907
   Variance Component for Time Series               0.00906
   Variance Component for Error                     0.008749

              Hausman Test for Random Effects
   DF          m Value          Pr > m
    1            26.46          <.0001

                      Parameter Estimates
   Variable    DF    Estimate    Standard Error    t Value    Pr > |t|
   Intercept    1    -2.99992            0.6478      -4.63      0.0001
   output       1    0.746596            0.0762       9.80      <.0001

Functional Summary

The statements and options used with the TSCSREG procedure are summarized in the following table.

Table 28.1 Functional Summary

Description                                          Statement    Option

Data Set Options
specify the input data set                           TSCSREG      DATA=
write parameter estimates to an output data set      TSCSREG      OUTEST=
include correlations in the OUTEST= data set         TSCSREG      CORROUT
include covariances in the OUTEST= data set          TSCSREG      COVOUT
specify number of time series observations           TSCSREG      TS=
specify number of cross sections                     TSCSREG      CS=

Declaring the Role of Variables
specify BY-group processing                          BY
specify the cross section and time ID variables      ID

Printing Control Options
print correlations of the estimates                  MODEL        CORRB
print covariances of the estimates                   MODEL        COVB
suppress printed output                              MODEL        NOPRINT
perform tests of linear hypotheses                   TEST

Model Estimation Options
specify the one-way fixed-effects model              MODEL        FIXONE
specify the two-way fixed-effects model              MODEL        FIXTWO
specify the one-way random-effects model             MODEL        RANONE
specify the two-way random-effects model             MODEL        RANTWO
specify the Fuller-Battese method                    MODEL        FULLER
specify the Parks method                             MODEL        PARKS
specify the Da Silva method                          MODEL        DASILVA
specify order of the moving-average error
   process for the Da Silva method                   MODEL        M=
print the Phi matrix for the Parks method            MODEL        PHI
print autocorrelation coefficients for the
   Parks method                                      MODEL        RHO
suppress the intercept term                          MODEL        NOINT
control the check for singularity                    MODEL        SINGULAR=

PROC TSCSREG Statement

PROC TSCSREG options ;

The following options can be specified in the PROC TSCSREG statement.

DATA=SAS-data-set
names the input data set. The input data set must be sorted by cross section and by time period within cross section. If you omit the DATA= option, the most recently created SAS data set is used.

TS=number
specifies the number of observations in the time series for each cross section. The TS= option value must be greater than 1. The TS= option is required unless an ID statement is used. Note that the number of observations for each time series must be the same for each cross section and must cover the same time period.

CS=number
specifies the number of cross sections. The CS= option value must be greater than 1. The CS= option is required unless an ID statement is used.

OUTEST=SAS-data-set
names an output data set to contain the parameter estimates. When the OUTEST= option is not specified, the OUTEST= data set is not created.

OUTCOV
COVOUT
writes the covariance matrix of the parameter estimates to the OUTEST= data set.

OUTCORR
CORROUT
writes the correlation matrix of the parameter estimates to the OUTEST= data set.

In addition, any of the following MODEL statement options can be specified in the PROC TSCSREG statement: CORRB, COVB, FIXONE, FIXTWO, RANONE, RANTWO, FULLER, PARKS, DASILVA, NOINT, NOPRINT, M=, PHI, RHO, and SINGULAR=. When specified in the PROC TSCSREG statement, these options are equivalent to specifying the options for every MODEL statement.
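For example, the following sketch specifies the panel structure and an estimation option entirely in the PROC TSCSREG statement, assuming a hypothetical data set b that contains 20 cross sections of 10 time periods each, sorted by cross section and by time within cross section; because no ID statement is used, the CS= and TS= options are required:

proc tscsreg data=b cs=20 ts=10 outest=est fixone;
   model y = x1 x2;
run;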

BY Statement

BY variables ;

A BY statement can be used with PROC TSCSREG to obtain separate analyses on observations in groups defined by the BY variables. When a BY statement appears, the input data set must be sorted by the BY variables as well as by cross section and time period within the BY groups.

When both an ID statement and a BY statement are specified, the input data set must be sorted first with respect to BY variables and then with respect to the cross section and time series ID variables. For example:

proc sort data=a;
   by byvar1 byvar2 csid tsid;
run;

proc tscsreg data=a;
   by byvar1 byvar2;
   id csid tsid;
   ...
run;

When both a BY statement and an ID statement are used, the data set might have a different number of cross sections or a different number of time periods in each BY group. If no ID statement is used, the CS=N and TS=T options must be specified and each BY group must contain N × T observations.

ID Statement

ID cross-section-id-variable time-series-id-variable ;

The ID statement is used to specify variables in the input data set that identify the cross section and time period for each observation. When an ID statement is used, the TSCSREG procedure verifies that the input data set is sorted by the cross section ID variable and by the time series ID variable within each cross section. The TSCSREG procedure also verifies that the time series ID values are the same for all cross sections.

To make sure the input data set is correctly sorted, use PROC SORT with a BY statement with the variables listed exactly as they are listed in the ID statement to sort the input data set. For example:

proc sort data=a;
   by csid tsid;
run;

proc tscsreg data=a;
   id csid tsid;
   ... etc. ...
run;

If the ID statement is not used, the TS= and CS= options must be specified on the PROC TSCSREG statement. Note that the input data must be sorted by time within cross section, regardless of whether the cross section structure is given by an ID statement or by the options TS= and CS=. If an ID statement is specified, the time series length T is set to the minimum number of observations for any cross section, and only the first T observations in each cross section are used. If both the ID statement and the TS= and CS= options are specified, the TS= and CS= options are ignored.

MODEL Statement

MODEL response = regressors / options ;

The MODEL statement specifies the regression model and the error structure assumed for the regression residuals. The response variable on the left side of the equal sign is regressed on the independent variables listed after the equal sign. Any number of MODEL statements can be used. For each MODEL statement, only one response variable can be specified on the left side of the equal sign.

The error structure is specified by the FIXONE, FIXTWO, RANONE, RANTWO, FULLER, PARKS, and DASILVA options. More than one of these options can be used, in which case the analysis is repeated for each error structure model specified.

Models can be given labels up to 32 characters in length. Model labels are used in the printed output to identify the results for different models. If no label is specified, the response variable name is used as the label for the model. The model label is specified as follows:

label: MODEL response = regressors / options ;

The following options can be specified on the MODEL statement after a slash (/).

CORRB
CORR
prints the matrix of estimated correlations between the parameter estimates.

COVB
VAR
prints the matrix of estimated covariances between the parameter estimates.

FIXONE
specifies that a one-way fixed-effects model be estimated, with the one-way model corresponding to group effects only.

FIXTWO
specifies that a two-way fixed-effects model be estimated.

RANONE
specifies that a one-way random-effects model be estimated.

RANTWO
specifies that a two-way random-effects model be estimated.

FULLER
specifies that the model be estimated by using the Fuller-Battese method, which assumes a variance components model for the error structure.

PARKS
specifies that the model be estimated by using the Parks method, which assumes a first-order autoregressive model for the error structure.

DASILVA
specifies that the model be estimated by using the Da Silva method, which assumes a mixed variance-component moving-average model for the error structure.

M=number
specifies the order of the moving-average process in the Da Silva method. The M= value must be less than T − 1. The default is M=1.

PHI
prints the Φ matrix of estimated covariances of the observations for the Parks method. The PHI option is relevant only when the PARKS option is used.

RHO
prints the estimated autocorrelation coefficients for the Parks method.

NOINT
NOMEAN
suppresses the intercept parameter from the model.

NOPRINT
suppresses the normal printed output.

SINGULAR=number
specifies a singularity criterion for the inversion of the matrix. The default depends on the precision of the computer system.

TEST Statement

TEST equation < , equation . . . > < / options > ;

The TEST statement performs F tests of linear hypotheses about the regression parameters in the preceding MODEL statement. Each equation specifies a linear hypothesis to be tested. All hypotheses in one TEST statement are tested jointly. Variable names in the equations must correspond to regressors in the preceding MODEL statement, and each name represents the coefficient of the corresponding regressor. The keyword INTERCEPT refers to the coefficient of the intercept.

The following statements illustrate the use of the TEST statement:

proc tscsreg;
   model y = x1 x2 x3;
   test x1 = 0, x2 * .5 + 2 * x3 = 0;
   test_int: test intercept = 0, x3 = 0;

Note that a test of the following form is not permitted:

test_bad: test x2 / 2 + 2 * x3 = 0;

Do not use the division sign in TEST statements.
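A hypothesis that involves division can always be restated by multiplying through; for example, the disallowed test above is equivalent to the following sketch, which multiplies the coefficient of x2 by 0.5 instead of dividing it by 2:

test_ok: test x2 * 0.5 + 2 * x3 = 0;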

Details: The TSCSREG Procedure

Models, estimators, and methods are covered in detail in Chapter 19, "The PANEL Procedure."

ODS Table Names

PROC TSCSREG assigns a name to each table it creates. You can use these names to reference the table when you use the Output Delivery System (ODS) to select tables and create output data sets. These names are listed in the following table.

Table 28.2 ODS Tables Produced in PROC TSCSREG

ODS Table Name             Description                               Option

ODS Tables Created by the MODEL Statement
ModelDescription           Model description                         default
FitStatistics              Fit statistics                            default
FixedEffectsTest           F test for no fixed effects               FIXONE, FIXTWO,
                                                                     RANONE, RANTWO
ParameterEstimates         Parameter estimates                       default
CovB                       Covariance of parameter estimates         COVB
CorrB                      Correlations of parameter estimates       CORRB
VarianceComponents         Variance component estimates              FULLER, DASILVA, M=,
                                                                     RANONE, RANTWO
RandomEffectsTest          Hausman test for random effects           FULLER, DASILVA, M=,
                                                                     RANONE, RANTWO
AR1Estimates               First order autoregressive parameter      PARKS, RHO
                           estimates
EstimatedPhiMatrix         Estimated phi matrix                      PARKS
EstimatedAutocovariances   Estimates of autocovariances              DASILVA, M=

ODS Tables Created by the TEST Statement
TestResults                Test results

Examples: The TSCSREG Procedure

For examples of analysis of panel data, see Chapter 19, "The PANEL Procedure."

Acknowledgments: TSCSREG Procedure

The original TSCSREG procedure was developed by Douglas J. Drummond and A. Ronald Gallant, and contributed to the Version 5 SUGI Supplemental Library in 1979. The original code was changed substantially over the years. Additional new methods as well as other new features are currently included in the PANEL procedure. SAS Institute would like to thank Dr. Drummond and Dr. Gallant for their contribution of the original version of the TSCSREG procedure.


References: TSCSREG Procedure

Greene, W. H. (1990), Econometric Analysis, First Edition, New York: Macmillan Publishing Company.

Chapter 29

The UCM Procedure

Contents
Overview: UCM Procedure . . . 1742
Getting Started: UCM Procedure . . . 1743
   A Seasonal Series with Linear Trend . . . 1743
Syntax: UCM Procedure . . . 1750
   Functional Summary . . . 1751
   PROC UCM Statement . . . 1753
   AUTOREG Statement . . . 1757
   BLOCKSEASON Statement . . . 1758
   BY Statement . . . 1760
   CYCLE Statement . . . 1760
   DEPLAG Statement . . . 1762
   ESTIMATE Statement . . . 1763
   FORECAST Statement . . . 1765
   ID Statement . . . 1767
   IRREGULAR Statement . . . 1768
   LEVEL Statement . . . 1771
   MODEL Statement . . . 1772
   NLOPTIONS Statement . . . 1772
   OUTLIER Statement . . . 1773
   RANDOMREG Statement . . . 1773
   SEASON Statement . . . 1774
   SLOPE Statement . . . 1777
   SPLINEREG Statement . . . 1778
   SPLINESEASON Statement . . . 1779
Details: UCM Procedure . . . 1781
   An Introduction to Unobserved Component Models . . . 1781
   The UCMs as State Space Models . . . 1787
   Outlier Detection . . . 1796
   Missing Values . . . 1797
   Parameter Estimation . . . 1797
   Computational Issues . . . 1799
   Displayed Output . . . 1800
   Statistical Graphics . . . 1800
   ODS Table Names . . . 1810
   ODS Graph Names . . . 1813
   OUTFOR= Data Set . . . 1817
   OUTEST= Data Set . . . 1818
   Statistics of Fit . . . 1819
Examples: UCM Procedure . . . 1820
   Example 29.1: The Airline Series Revisited . . . 1820
   Example 29.2: Variable Star Data . . . 1826
   Example 29.3: Modeling Long Seasonal Patterns . . . 1829
   Example 29.4: Modeling Time-Varying Regression Effects Using the RANDOMREG Statement . . . 1833
   Example 29.5: Trend Removal Using the Hodrick-Prescott Filter . . . 1839
   Example 29.6: Using Splines to Incorporate Nonlinear Effects . . . 1841
   Example 29.7: Detection of Level Shift . . . 1846
   Example 29.8: ARIMA Modeling (Experimental) . . . 1849
References . . . 1853

Overview: UCM Procedure

The UCM procedure analyzes and forecasts equally spaced univariate time series data by using an unobserved components model (UCM). The UCMs are also called structural models in the time series literature. A UCM decomposes the response series into components such as trend, seasonals, cycles, and the regression effects due to predictor series. The components in the model are supposed to capture the salient features of the series that are useful in explaining and predicting its behavior. Harvey (1989) is a good reference for time series modeling that uses the UCMs. Harvey calls the components in a UCM the "stylized facts" about the series under consideration. Traditionally, the ARIMA models and, to some limited extent, the exponential smoothing models have been the main tools in the analysis of this type of time series data. It is fair to say that the UCMs capture the versatility of the ARIMA models while possessing the interpretability of the smoothing models. A thorough discussion of the correspondence between the ARIMA models and the UCMs, and the relative merits of UCM and ARIMA modeling, is given in Harvey (1989). The UCMs are also very similar to another set of models, called the dynamic models, that are popular in the Bayesian time series literature (West and Harrison 1999). In SAS/ETS you can use PROC ARIMA for ARIMA modeling (see Chapter 7, "The ARIMA Procedure"), PROC ESM for exponential smoothing modeling (see Chapter 13, "The ESM Procedure"), and the Time Series Forecasting System for a point-and-click interface to ARIMA and exponential smoothing modeling.

You can use the UCM procedure to fit a wide range of UCMs that can incorporate complex trend, seasonal, and cyclical patterns and can include multiple predictors. It provides a variety of diagnostic tools to assess the fitted model and to suggest the possible extensions or modifications. The components in the UCM provide a succinct description of the underlying mechanism governing the series. You can print, save, or plot the estimates of these component series. Along with the standard forecast and residual plots, the study of these component plots is an essential part of time series analysis using the UCMs. Once a suitable UCM is found for the series under consideration, it can be used for a variety of purposes. For example, it can be used for the following:

- forecasting the values of the response series and the component series in the model
- obtaining a model-based seasonal decomposition of the series
- obtaining a "denoised" version and interpolating the missing values of the response series in the historical period
- obtaining the full sample or "smoothed" estimates of the component series in the model

Getting Started: UCM Procedure

The analysis of time series using the UCMs involves recognizing the salient features present in the series and modeling them suitably. The UCM procedure provides a variety of models for estimating and forecasting the commonly observed features in time series. These models are discussed in detail later in the section "An Introduction to Unobserved Component Models" on page 1781. First the procedure is illustrated using an example.

A Seasonal Series with Linear Trend

The airline passenger series, given as Series G in Box and Jenkins (1976), is often used in time series literature as an example of a nonstationary seasonal time series. This series is a monthly series consisting of the number of airline passengers who traveled during the years 1949 to 1960. Its main features are a steady rise in the number of passengers from year to year and the seasonal variation in the numbers during any given year. It also exhibits an increase in variability around the trend. A log transformation is used to stabilize this variability. The following DATA step prepares the log-transformed passenger series analyzed in this example:

data seriesG;
   set sashelp.air;
   logair = log( air );
run;

The following statements produce a time series plot of the series by using the TIMESERIES procedure (see Chapter 27, "The TIMESERIES Procedure"). The trend and seasonal features of the series are apparent in the plot in Figure 29.1.

proc timeseries data=seriesG plot=series;
   id date interval=month;
   var logair;
run;


Figure 29.1 Series Plot of Log-Transformed Airline Passenger Series

In this example this series is modeled using an unobserved component model called the basic structural model (BSM). The BSM models a time series as a sum of three stochastic components: a trend component $\mu_t$, a seasonal component $\gamma_t$, and random error $\epsilon_t$. Formally, a BSM for a response series $y_t$ can be described as

$y_t = \mu_t + \gamma_t + \epsilon_t$

Each of the stochastic components in the model is modeled separately. The random error $\epsilon_t$, also called the irregular component, is modeled simply as a sequence of independent, identically distributed (i.i.d.) zero-mean Gaussian random variables. The trend and the seasonal components can be modeled in a few different ways. The model for trend used here is called a locally linear time trend. This trend model can be written as follows:

$\mu_t = \mu_{t-1} + \beta_{t-1} + \eta_t, \qquad \eta_t \sim \text{i.i.d. } N(0, \sigma_\eta^2)$
$\beta_t = \beta_{t-1} + \xi_t, \qquad \xi_t \sim \text{i.i.d. } N(0, \sigma_\xi^2)$

These equations specify a trend where the level $\mu_t$ as well as the slope $\beta_t$ is allowed to vary over time. This variation in slope and level is governed by the variances of the disturbance terms $\eta_t$ and $\xi_t$ in their respective equations. Some interesting special cases of this model arise when you manipulate these disturbance variances. For example, if the variance of $\xi_t$ is zero, the slope will be constant (equal to $\beta_0$); if the variance of $\eta_t$ is also zero, $\mu_t$ will be a deterministic trend given by the line $\mu_0 + \beta_0 t$. The seasonal model used in this example is called a trigonometric seasonal. The stochastic equations governing a trigonometric seasonal are explained later (see the section "Modeling Seasons" on page 1783). However, it is interesting to note here that this seasonal model reduces to the familiar regression with deterministic seasonal dummies if the variance of the disturbance terms in its equations is equal to zero. The following statements specify a BSM with these three components:

proc ucm data=seriesG;
   id date interval=month;
   model logair;
   irregular;
   level;
   slope;
   season length=12 type=trig print=smooth;
   estimate;
   forecast lead=24 print=decomp;
run;

The PROC UCM statement signifies the start of the UCM procedure, and the input data set, seriesG, containing the dependent series is specified there. The optional ID statement is used to specify a date, datetime, or time identification variable, date in this example, to label the observations. The INTERVAL=MONTH option in the ID statement indicates that the measurements were collected on a monthly basis. The model specification begins with the MODEL statement, where the response series is specified (logair in this case). After this the components in the model are specified using separate statements that enable you to control their individual properties. The irregular component $\epsilon_t$ is specified using the IRREGULAR statement and the trend component $\mu_t$ is specified using the LEVEL and SLOPE statements. The seasonal component $\gamma_t$ is specified using the SEASON statement. The specifics of the seasonal characteristics such as the season length, its stochastic evolution properties, etc., are specified using the options in the SEASON statement. The seasonal component used in this example has a season length of 12, corresponding to the monthly seasonality, and is of the trigonometric type. Different types of seasonals are explained later (see the section "Modeling Seasons" on page 1783).

The parameters of this model are the variances of the disturbance terms in the evolution equations of $\mu_t$, $\beta_t$, and $\gamma_t$ and the variance of the irregular component $\epsilon_t$. These parameters are estimated by maximizing the likelihood of the data. The ESTIMATE statement options can be used to specify the span of data used in parameter estimation and to display and save the results of the estimation step and the model diagnostics. You can use the estimated model to obtain the forecasts of the series as well as the components. The options in the individual component statements can be used to display the component forecasts—for example, the PRINT=SMOOTH option in the SEASON statement requests the displaying of smoothed forecasts of the seasonal component $\gamma_t$. The series forecasts and forecasts of the sum of components can be requested using the FORECAST statement. The option PRINT=DECOMP in the FORECAST statement requests the printing of the smoothed trend $\mu_t$ and the trend plus seasonal component ($\mu_t + \gamma_t$). The parameter estimates for this model are displayed in Figure 29.2.


Figure 29.2 BSM for the Logair Series

                          The UCM Procedure
                Final Estimates of the Free Parameters

                                                  Approx                Approx
Component    Parameter         Estimate        Std Error    t Value    Pr > |t|
Irregular    Error Variance    0.00023436      0.0001079       2.17      0.0298
Level        Error Variance    0.00029828      0.0001057       2.82      0.0048
Slope        Error Variance    8.47911E-13    6.2271E-10       0.00      0.9989
Season       Error Variance    0.00000356     1.32347E-6       2.69      0.0072

The estimates suggest that except for the slope component, the disturbance variances of all the components are significant—that is, all these components are stochastic. The slope component, however, appears to be deterministic because its error variance is quite insignificant. It might then be useful to check if the slope component can be dropped from the model—that is, if $\beta_0 = 0$. This can be checked by examining the significance analysis table of the components given in Figure 29.3.

Figure 29.3 Component Significance Analysis for the Logair Series

          Significance Analysis of Components
                (Based on the Final State)

Component    DF    Chi-Square    Pr > ChiSq
Irregular     1          0.08        0.7747
Level         1        117867        <.0001
Slope         1         43.78        <.0001
Season       11        507.75        <.0001

Syntax: UCM Procedure

The UCM procedure uses the following statements:

PROC UCM < options > ;
   AUTOREG < options > ;
   BLOCKSEASON options ;
   BY variables ;
   CYCLE < options > ;
   DEPLAG options ;
   ESTIMATE < options > ;
   FORECAST < options > ;
   ID variable options ;
   IRREGULAR < options > ;
   LEVEL < options > ;
   MODEL dependent variable < = regressors > ;
   NLOPTIONS options ;
   OUTLIER options ;
   RANDOMREG regressors < / options > ;
   SEASON options ;
   SLOPE < options > ;
   SPLINEREG regressor < options > ;
   SPLINESEASON options ;

The PROC UCM and MODEL statements are required. In addition, the model must contain at least one component with nonzero disturbance variance.

Functional Summary

The statements and options controlling the UCM procedure are summarized in the following table. Most commonly needed scenarios are listed; see the individual statements for additional details. You can use the PRINT= and PLOT= options in the individual component statements for printing and plotting the corresponding component forecasts.

Table 29.1 Functional Summary

Description                                                 Statement      Option

Data Set Options
specify the input data set                                  PROC UCM       DATA=
write parameter estimates to an output data set             ESTIMATE       OUTEST=
write series and component forecasts to an output
   data set                                                 FORECAST       OUTFOR=

Model Specification
specify the dependent variable and simple predictors        MODEL
specify predictors with time-varying coefficients           RANDOMREG
specify a nonlinear predictor                               SPLINEREG
specify the irregular component                             IRREGULAR
specify the random walk trend                               LEVEL
specify the locally linear trend                            LEVEL and SLOPE
specify a cycle component                                   CYCLE
specify a dummy seasonal component                          SEASON         TYPE=DUMMY
specify a trigonometric seasonal component                  SEASON         TYPE=TRIG
drop some harmonics from a trigonometric seasonal
   component                                                SEASON         DROPH=
specify a list of harmonics to keep in a trigonometric
   seasonal component                                       SEASON         KEEPH=
specify a spline-season component                           SPLINESEASON
specify a block-season component                            BLOCKSEASON
specify an autoreg component                                AUTOREG
specify the lags of the dependent variable                  DEPLAG

Controlling the Likelihood Optimization Process
request optimization of the profile likelihood              ESTIMATE       PROFILE
request optimization of the usual likelihood                ESTIMATE       NOPROFILE
specify the optimization technique                          NLOPTIONS      TECH=
limit the number of iterations                              NLOPTIONS      MAXITER=

Outlier Detection
turn on the search for additive outliers                    OUTLIER        Default
turn on the search for level shifts                         LEVEL          CHECKBREAK
specify the significance level for outlier tests            OUTLIER        ALPHA=
limit the number of outliers                                OUTLIER        MAXNUM=
limit the number of outliers to a percentage of the
   series length                                            OUTLIER        MAXPCT=

Controlling the Series Span
exclude some initial observations from analysis during
   the parameter estimation                                 ESTIMATE       SKIPFIRST=
exclude some observations at the end from analysis
   during the parameter estimation                          ESTIMATE       BACK=
exclude some initial observations from analysis during
   forecasting                                              FORECAST       SKIPFIRST=
exclude some observations at the end from analysis
   during forecasting                                       FORECAST       BACK=

Graphical Residual Analysis
get a panel of plots consisting of residual
   autocorrelation plots and residual normality plots       ESTIMATE       PLOT=PANEL
get the residual CUSUM plot                                 ESTIMATE       PLOT=CUSUM
get the residual cumulative sum of squares plot             ESTIMATE       PLOT=CUSUMSQ
get a plot of p-values for the portmanteau white
   noise test                                               ESTIMATE       PLOT=WN
get a time series plot of residuals with overlaid
   LOESS smoother                                           ESTIMATE       PLOT=LOESS

Series Decomposition and Forecasting
specify the number of periods to forecast in the future     FORECAST       LEAD=
specify the significance level of the forecast
   confidence interval                                      FORECAST       ALPHA=
request printing of smoothed series decomposition           FORECAST       PRINT=DECOMP
request printing of one-step-ahead and multistep-ahead
   forecasts                                                FORECAST       PRINT=FORECASTS
request plotting of smoothed series decomposition           FORECAST       PLOT=DECOMP
request plotting of one-step-ahead and multistep-ahead
   forecasts                                                FORECAST       PLOT=FORECASTS

BY Groups
specify BY group processing                                 BY

Global Printing and Plotting Options
turn off all the printing for the procedure                 PROC UCM       NOPRINT
turn on all the printing options for the procedure          PROC UCM       PRINTALL
turn off all the plotting for the procedure                 PROC UCM       PLOTS=NONE
turn on all the plotting options for the procedure          PROC UCM       PLOTS=ALL
turn on a variety of plotting options for the procedure     PROC UCM       PLOTS=

ID
specify a variable that provides the time index for
   the series values                                        ID

PROC UCM Statement

PROC UCM < options > ;


The PROC UCM statement is required. The following options can be used in the PROC UCM statement:

DATA=SAS-data-set
specifies the name of the SAS data set containing the time series. If the DATA= option is not specified in the PROC UCM statement, the most recently created SAS data set is used.

NOPRINT
turns off all the printing for the procedure. The subsequent print options in the procedure are ignored.

PLOTS< (global-plot-options) > < = plot-request < (options) > >
PLOTS< (global-plot-options) > < = (plot-request < (options) > < ... plot-request < (options) > >) >
controls the plots produced with ODS Graphics. When you specify only one plot request, you can omit the parentheses around the plot request. Here are some examples:

plots=none
plots=all
plots=residuals(acf loess)
plots(noclm)=(smooth(decomp) residual(panel loess))

You must enable ODS Graphics before requesting plots, as shown in the following example. For general information about ODS Graphics, see Chapter 21, "Statistical Graphics Using ODS" (SAS/STAT User's Guide).

ods graphics on;

proc ucm;
   model y = x;
   irregular;
   level;
run;

proc ucm plots=all;
   model y = x;
   irregular;
   level;
run;

The first PROC UCM step does not specify the PLOTS= option, so the default plot that displays the series forecasts in the forecast region is produced. The PLOTS=ALL option in the second PROC UCM step produces all the plots that are appropriate for the specified model. In addition to the PLOTS= option in the PROC UCM statement, you can request plots by using the PLOT= option in other statements of the UCM procedure. This way of requesting plots provides finer control over the plot production. If you have enabled ODS Graphics but do not specify any specific plot request, then PROC UCM produces the plot of series forecasts in the forecast horizon by default.


Global Plot Options: The global-plot-options apply to all relevant plots generated by the UCM procedure. The following global-plot-option is supported:

NOCLM
suppresses the confidence limits in all the component and forecast plots.

Specific Plot Options: The following list describes the specific plots and their options:

ALL
produces all plots appropriate for the particular analysis.

NONE
suppresses all plots.

FILTER (< filter-plot-options >)
produces time series plots of the filtered component estimates. The following filter-plot-options are available:

   ALL
   produces all the filtered component estimate plots appropriate for the particular analysis.

   LEVEL
   produces a time series plot of the filtered level component estimate, provided the model contains the level component.

   SLOPE
   produces a time series plot of the filtered slope component estimate, provided the model contains the slope component.

   CYCLE
   produces time series plots of the filtered cycle component estimates for all cycle components in the model, if there are any.

   SEASON
   produces time series plots of the filtered season component estimates for all seasonal components in the model, if there are any.

   DECOMP
   produces time series plots of the filtered estimates of the series decomposition.

RESIDUAL (< residual-plot-options >)
produces the residuals plots. The following residual-plot-options are available:

   ALL
   produces all the residual diagnostics plots appropriate for the particular analysis.

   ACF
   produces the autocorrelation plot of residuals.

   CUSUM
   produces the plot of cumulative residuals against time.

   CUSUMSQ
   produces the plot of cumulative squared residuals against time.

   HISTOGRAM
   produces the histogram of residuals.

   LOESS
   produces a scatter plot of residuals against time, which has an overlaid loess fit.

   PACF
   produces the partial-autocorrelation plot of residuals.

   PANEL
   produces a summary panel of the residual diagnostics consisting of the following:
   - histogram of residuals
   - normal quantile plot of residuals
   - the residual autocorrelation plot
   - the residual partial-autocorrelation plot

   QQ
   produces a normal quantile plot of residuals.

   RESIDUAL
   produces a needle plot of residuals against time.

   WN
   produces the plot of Ljung-Box white-noise test p-values at different lags (in log scale).

SMOOTH (< smooth-plot-options >)
produces time series plots of the smoothed component estimates. The following smooth-plot-options are available:

   ALL
   produces all the smoothed component estimate plots appropriate for the particular analysis.

   LEVEL
   produces a time series plot of the smoothed level component estimate, provided the model contains the level component.

   SLOPE
   produces a time series plot of the smoothed slope component estimate, provided the model contains the slope component.

   CYCLE
   produces time series plots of the smoothed cycle component estimates for all cycle components in the model, if there are any.

   SEASON
   produces time series plots of the smoothed season component estimates for all season components in the model, if there are any.

   DECOMP
   produces time series plots of the smoothed estimates of the series decomposition.

PRINTALL
turns on all the printing options for the procedure. The subsequent NOPRINT options in the procedure are ignored.

AUTOREG Statement

AUTOREG < options > ;

The AUTOREG statement specifies an autoregressive component in the model. An autoregressive component is a special case of cycle that corresponds to the frequency of zero or $\pi$. It is modeled separately for easier interpretation. A stochastic equation for an autoregressive component $r_t$ can be written as follows:

$r_t = \rho r_{t-1} + \nu_t, \qquad \nu_t \sim \text{i.i.d. } N(0, \sigma_\nu^2)$

The damping factor $\rho$ can take any value in the interval (–1, 1), including –1 but excluding 1. If $\rho = 1$, the autoregressive component cannot be distinguished from the random walk level component. If $\rho = -1$, the autoregressive component corresponds to a seasonal component with a season length of 2, or a nonstationary cycle with period 2. If $|\rho| < 1$, then the autoregressive component is stationary.

The following example illustrates the AUTOREG statement. This statement includes an autoregressive component in the model. The damping factor $\rho$ and the disturbance variance $\sigma_\nu^2$ are estimated from the data.

autoreg;

NOEST=RHO
NOEST=VARIANCE
NOEST=(RHO VARIANCE)
fixes the values of $\rho$ and $\sigma_\nu^2$ to those specified in the RHO= and VARIANCE= options.

PLOT=FILTER
PLOT=SMOOTH
PLOT=( < FILTER > < SMOOTH > )
requests plotting of the filtered or smoothed estimate of the autoreg component.

PRINT=FILTER
PRINT=SMOOTH
PRINT=( < FILTER > < SMOOTH > )
requests printing of the filtered or smoothed estimate of the autoreg component.

RHO=value
specifies an initial value for the damping factor $\rho$ during the parameter estimation process. The value of $\rho$ must be in the interval (–1, 1), including –1 but excluding 1.

VARIANCE=value
specifies an initial value for the disturbance variance $\sigma_\nu^2$ during the parameter estimation process. Any nonnegative value, including zero, is an acceptable starting value.
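As a sketch of these options, the following statement fixes the damping factor at the hypothetical value 0.8 while the disturbance variance is estimated from the data, starting from the value 2:

autoreg rho=0.8 variance=2 noest=rho;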

BLOCKSEASON Statement

BLOCKSEASON NBLOCKS = integer BLOCKSIZE = integer < options > ;

The BLOCKSEASON or BLOCKSEASONAL statement is used to specify a seasonal component $\gamma_t$ that has a special block structure. The seasonal $\gamma_t$ is called a block seasonal of block size $m$ and number of blocks $k$ if its season length, $s$, can be factored as $s = m \times k$ and its seasonal effects have a block form—that is, the first $m$ seasonal effects are all equal to some number $\tau_1$, the next $m$ effects are all equal to some number $\tau_2$, and so on.

This type of seasonal structure can be appropriate in some cases; for example, consider a series that is recorded on an hourly basis. Further assume that, in this particular case, the hour-of-the-day effect and the day-of-the-week effect are additive. In this situation the hour-of-the-week seasonality, having a season length of 168, can be modeled as a sum of two components. The hour-of-the-day effect is modeled using a simple seasonal of season length 24, while the day-of-the-week is modeled as a block seasonal component that has the days of the week as blocks. This day-of-the-week block seasonal component has seven blocks, each of size 24.

A block seasonal specification requires, at the minimum, the block size $m$ and the number of blocks in the seasonal, $k$. These are specified using the BLOCKSIZE= and NBLOCKS= options, respectively. In addition, you might need to specify the position of the first observation of the series by using the OFFSET= option if it is not at the beginning of one of the blocks. In the example just considered, this corresponds to a situation where the first series measurement is not at the start of the day. Suppose that the first measurement of the series corresponds to the hour between 6:00 and 7:00 a.m., which is the seventh hour within that day or at the seventh position within that block. This is specified as OFFSET=7.

The other options in this statement are very similar to the options in the SEASON statement; for example, a block seasonal can also be of one of the two types, DUMMY and TRIG. There can be more than one block seasonal component in the model, each specified using a separate BLOCKSEASON statement. No two block seasonals in the model can have the same NBLOCKS= and BLOCKSIZE= specifications. The following example illustrates the use of the BLOCKSEASON statement to specify the additive, hour-of-the-week seasonal model:

season length=24 type=trig;
blockseason nblocks=7 blocksize=24;

BLOCKSIZE=integer
specifies the block size, $m$. This is a required option in this statement. The block size can be any integer larger than or equal to two. Typical examples of block sizes are 24, corresponding to the hours of the day when a day is being used as a block in hourly data, or 60, corresponding to the minutes in an hour when an hour is being used as a block in data recorded by minutes, etc.

NBLOCKS=integer
specifies the number of blocks, $k$. This is a required option in this statement. The number of blocks can be any integer greater than or equal to two.

NOEST
fixes the value of the disturbance variance parameter to the value specified in the VARIANCE= option.

OFFSET=integer
specifies the position of the first measurement within the block, if the first measurement is not at the start of a block. The OFFSET= value must be between one and the block size. The default value is one. The first measurement refers to the start of the estimation span and the forecast span. If these spans differ, their starting measurements must be separated by an integer multiple of the block size.

PLOT=FILTER
PLOT=SMOOTH
PLOT=F_ANNUAL
PLOT=S_ANNUAL
PLOT=( < plot request > . . . < plot request > )
requests plots of the season component. When you specify only one plot request, you can omit the parentheses around the plot request. You can use the FILTER and SMOOTH options to plot the filtered and smoothed estimates of the season component $\gamma_t$. You can use the F_ANNUAL and S_ANNUAL options to get the plots of "annual" variation in the filtered and smoothed estimates of $\gamma_t$. The annual plots are useful to see the change in the contribution of a particular month over the span of years. Here "month" and "year" are generic terms that change appropriately with the interval type being used to label the observations and the season length. For example, for monthly data with a season length of 12, the usual meaning applies, while for daily data with a season length of 7, the days of the week serve as months and the weeks serve as years. The first period in each block is plotted over the years.

PRINT=FILTER
PRINT=SMOOTH
PRINT=( < FILTER > < SMOOTH > )
requests the printing of the filtered or smoothed estimate of the block seasonal component $\gamma_t$.

TYPE=DUMMY | TRIG
specifies the type of the block seasonal component. The default type is DUMMY.

VARIANCE=value
specifies an initial value for the disturbance variance, $\sigma_\omega^2$, in the $\gamma_t$ equation at the start of the parameter estimation process. Any nonnegative value, including zero, is an acceptable starting value.
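Continuing the hour-of-the-week illustration, the following sketch extends the earlier specification for the case in which the first measurement falls in the seventh hour of the day:

season length=24 type=trig;
blockseason nblocks=7 blocksize=24 offset=7;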

BY Statement

BY variables ;

A BY statement can be used in the UCM procedure to process a data set in groups of observations defined by the BY variables. The model specified using the MODEL and other component statements is applied to all the groups defined by the BY variables. When a BY statement appears, the procedure expects the input data set to be sorted in order of the BY variables. The variables are one or more variables in the input data set.
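The following statements are a minimal sketch of BY-group processing, assuming a hypothetical data set sales with a grouping variable region, a monthly date variable date, and a response variable salesAmt; the same model is fit separately to each region:

proc sort data=sales;
   by region date;
run;

proc ucm data=sales;
   by region;
   id date interval=month;
   model salesAmt;
   irregular;
   level;
run;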

CYCLE Statement

CYCLE < options > ;

The CYCLE statement is used to specify a cycle component, $\psi_t$, in the model. The stochastic equation governing a cycle component of period $p$ and damping factor $\rho$ is as follows:

$\begin{bmatrix} \psi_t \\ \psi_t^* \end{bmatrix} = \rho \begin{bmatrix} \cos\lambda & \sin\lambda \\ -\sin\lambda & \cos\lambda \end{bmatrix} \begin{bmatrix} \psi_{t-1} \\ \psi_{t-1}^* \end{bmatrix} + \begin{bmatrix} \nu_t \\ \nu_t^* \end{bmatrix}$

where $\nu_t$ and $\nu_t^*$ are independent, zero-mean, Gaussian disturbances with variance $\sigma_\nu^2$ and $\lambda = 2\pi/p$ is the angular frequency of the cycle. Any $p$ strictly greater than two is an admissible value for the period, and the damping factor $\rho$ can be any value in the interval (0, 1), including one but excluding zero. The cycles with frequency zero and $\pi$, which correspond to the periods equal to infinity and two, respectively, can be specified using the AUTOREG statement. The values of $\rho$ less than one give rise to a stationary cycle, while $\rho = 1$ gives rise to a nonstationary cycle. As a default, values of $\rho$, $p$, and $\sigma_\nu^2$ are estimated from the data. However, if necessary, you can fix the values of some or all of these parameters. There can be multiple cycles in a model, each specified using a separate CYCLE statement. The examples that follow illustrate the use of the CYCLE statement.

The following statements request including two cycles in the model. The parameters of each of these cycles are estimated from the data.

cycle;
cycle;

The following statement requests inclusion of a nonstationary cycle in the model. The cycle period $p$ and the disturbance variance $\sigma_\nu^2$ are estimated from the data.

cycle rho=1 noest=rho;

In the following statement a nonstationary cycle with a fixed period of 12 is specified. Moreover, a starting value is supplied for $\sigma_\nu^2$.

cycle period=12 rho=1 variance=4 noest=(rho period);

NOEST=PERIOD
NOEST=RHO
NOEST=VARIANCE
NOEST=( < RHO > < PERIOD > < VARIANCE > )
fixes the values of the component parameters to those specified in the RHO=, PERIOD=, and VARIANCE= options. This option enables you to fix any combination of parameter values.

PERIOD=value
specifies an initial value for the cycle period during the parameter estimation process. The period value must be strictly greater than 2.

PLOT=FILTER
PLOT=SMOOTH
PLOT=( < FILTER > < SMOOTH > )
requests plotting of the filtered or smoothed estimate of the cycle component.

PRINT=FILTER
PRINT=SMOOTH
PRINT=( < FILTER > < SMOOTH > )
requests the printing of a filtered or smoothed estimate of the cycle component $\psi_t$.

RHO=value
specifies an initial value for the damping factor in this component during the parameter estimation process. Any value in the interval (0, 1), including one but excluding zero, is an acceptable initial value for the damping factor.

VARIANCE=value
specifies an initial value for the disturbance variance parameter, $\sigma_\nu^2$, to be used during the parameter estimation process. Any nonnegative value, including zero, is an acceptable starting value.


DEPLAG Statement

DEPLAG LAGS = order < PHI = value . . . > < NOEST > ;

The DEPLAG statement is used to specify the lags of the dependent variable to be included as predictors in the model. The following examples illustrate the use of the DEPLAG statement.

If the dependent series is denoted by $y_t$, the following statement specifies the inclusion of $\phi_1 y_{t-1} + \phi_2 y_{t-2}$ in the model. The parameters $\phi_1$ and $\phi_2$ are estimated from the data.

deplag lags=2;

The following statement requests including $\phi_1 y_{t-1} + \phi_2 y_{t-4} - \phi_1 \phi_2 y_{t-5}$ in the model. The values of $\phi_1$ and $\phi_2$ are fixed at 0.8 and –1.2.

deplag lags=(1)(4) phi=0.8 -1.2 noest;

The dependent lag parameters are not constrained to lie in any particular region. In particular, this implies that a UCM that contains only an irregular component and dependent lags, resulting in a traditional autoregressive model, is not constrained to be a stationary model. In the DEPLAG statement, if an initial value is supplied for any one of the parameters, the initial values must also be supplied for all other parameters.

LAGS=order
LAGS=(lag, . . . , lag ) . . . (lag, . . . , lag )
is a required option in this statement. LAGS=(l1, l2, . . . , lk) defines a model with specified lags of the dependent variable included as predictors. LAGS=order is equivalent to LAGS=(1, 2, . . . , order).

A concatenation of parenthesized lists specifies a factored model. For example, LAGS=(1)(12) specifies that the lag values 1, 12, and 13, corresponding to the following polynomial in the backward shift operator, be included in the model:

$(1 - \phi_{1,1} B)(1 - \phi_{2,1} B^{12})$

Note that, in this case, the coefficient of the thirteenth lag is constrained to be the product of the coefficients of the first and twelfth lags.

NOEST
fixes the values of the parameters to those specified in the PHI= option.

PHI=value . . .
lists starting values for the coefficients of the lagged dependent variable. The order of the values listed corresponds with the order of the lags specified in the LAGS= option.
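As a sketch, the factored specification that corresponds to the polynomial above, with both coefficients estimated from the data, is the following:

deplag lags=(1)(12);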


ESTIMATE Statement

ESTIMATE < options > ;

The ESTIMATE statement is an optional statement used to control the overall model-fitting environment. Using this statement, you can control the span of observations used to fit the model by using the SKIPFIRST= and BACK= options. This can be useful in model diagnostics. You can request a variety of goodness-of-fit statistics and other model diagnostic information including different residual diagnostic plots. Note that the ESTIMATE statement is not used to control the nonlinear optimization process itself. That is done using the NLOPTIONS statement, where you can control the number of iterations, choose between the different optimization techniques, and so on. You can save the estimated parameters and other related information in a data set by using the OUTEST= option. You can request the optimization of the profile likelihood, the likelihood obtained by concentrating out a disturbance variance, for parameter estimation by using the PROFILE option. The following example illustrates the use of this statement:

estimate skipfirst=12 back=24;

This statement requests that the initial 12 measurements and the last 24 measurements be excluded during the model-fitting process. The actual observation span used to fit the model is decided as follows: Suppose that $n_0$ and $n_1$ are the observation numbers of the first and the last nonmissing values of the response variable, respectively. As a result of SKIPFIRST=12 and BACK=24, the measurements between observation numbers $n_0 + 12$ and $n_1 - 24$ form the estimation span. Of course, the model fitting might not take place if there are insufficient data in the resulting span. The model fitting does not take place if there are regressors in the model that have missing values in the estimation span.

BACK=integer
SKIPLAST=integer
indicates that some ending part of the data needs to be ignored during the parameter estimation. This can be useful when you want to study the forecasting performance of a model on the observed data. BACK=10 results in skipping the last 10 measurements of the response series during the parameter estimation. The default is BACK=0.

EXTRADIFFUSE=k
enables continuation of the diffuse filtering iterations for k additional iterations beyond the first instance where the initialization of the diffuse state would have otherwise taken place. If the specified k is larger than the sample size, the diffuse iterations continue until the end of the sample. Note that one-step-ahead residuals are produced only after the diffuse state is initialized. Delaying the initialization leads to a reduction in the number of one-step-ahead residuals available for computing the residual diagnostic measures. This option is useful when you want to ignore the first few one-step-ahead residuals that often have large variance.

NOPROFILE
requests that the usual likelihood be optimized for parameter estimation. For more information, see the section "Parameter Estimation by Profile Likelihood Optimization" on page 1798.


OUTEST=SAS-data-set
specifies an output data set for the estimated parameters.

In the ESTIMATE statement, the PLOT= option is used to obtain different residual diagnostic plots. The different possibilities are as follows:

PLOT=ACF
PLOT=CUSUM
PLOT=CUSUMSQ
PLOT=MODEL
PLOT=HISTOGRAM
PLOT=LOESS
PLOT=PACF
PLOT=PANEL
PLOT=QQ
PLOT=RESIDUAL
PLOT=WN
PLOT=( < plot request > . . . < plot request > )
requests different residual diagnostic plots. The different options are as follows:

   ACF
   produces the residual-autocorrelation plot.

   CUSUM
   produces the plot of cumulative residuals against time.

   CUSUMSQ
   produces the plot of cumulative squared residuals against time.

   MODEL
   produces the plot of one-step-ahead forecasts in the estimation span.

   HISTOGRAM
   produces the histogram of residuals.

   LOESS
   produces a scatter plot of residuals against time, which has an overlaid loess fit.

   PACF
   produces the residual-partial-autocorrelation plot.

   PANEL
   produces a summary panel of the residual diagnostics consisting of
   - histogram of residuals
   - normal quantile plot of residuals
   - the residual-autocorrelation plot
   - the residual-partial-autocorrelation plot

   QQ
   produces a normal quantile plot of residuals.

   RESIDUAL
   produces a needle plot of residuals against time.

   WN
   produces a plot of p-values, in log scale, at different lags for the Ljung-Box portmanteau white noise test statistics.

PRINT=NONE
suppresses all the printed output related to the model fitting, such as the parameter estimates, the goodness-of-fit statistics, and so on.

PROFILE
requests that the profile likelihood, obtained by concentrating out one of the disturbance variances from the likelihood, be optimized for parameter estimation. By default, the profile likelihood is not optimized if any of the disturbance variance parameters is held fixed to a nonzero value. For more information see the section "Parameter Estimation by Profile Likelihood Optimization" on page 1798.

SKIPFIRST=integer
indicates that some early part of the data needs to be ignored during the parameter estimation. This can be useful if there is a reason to believe that the model being estimated is not appropriate for this portion of the data. SKIPFIRST=10 results in skipping the first 10 measurements of the response series during the parameter estimation. The default is SKIPFIRST=0.
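The following statement is a sketch that combines several of these options: the last 24 measurements are held back from the fit, a residual diagnostics panel and the white noise test plot are requested, and the parameter estimates are saved in a hypothetical data set named param_est:

estimate back=24 plot=(panel wn) outest=param_est;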

FORECAST Statement FORECAST < options > ;

The FORECAST statement is an optional statement that is used to specify the overall forecasting environment for the specified model. It can be used to specify the span of observations, called the historical period, that is used to compute the forecasts of the future observations. This is done by using the SKIPFIRST= and BACK= options. The number of periods to forecast beyond the historical period and the significance level of the forecast confidence interval are specified by using the LEAD= and ALPHA= options. You can request one-step-ahead series and component forecasts by using the PRINT= option. You can save the series forecasts, and the model-based decomposition of the series, in a data set by using the OUTFOR= option. The following example illustrates the use of this statement:

forecast skipfirst=12 back=24 lead=30;

This statement requests that the initial 12 and the last 24 response values be excluded during the forecast computations. The forecast horizon, specified using the LEAD= option, is 30 periods; that is, multistep forecasting begins at the end of the historical period and continues for 30 periods. The actual observation span used to compute the multistep forecasting is decided as follows: Suppose that $n_0$ and $n_1$ are the observation numbers of the first and the last nonmissing values of the response variable, respectively. As a result of SKIPFIRST=12 and BACK=24, the historical period, or the forecast span, begins at $n_0 + 12$ and ends at $n_1 - 24$. Multistep forecasts are produced for the next 30 periods—that is, for the observation numbers $n_1 - 23$ to $n_1 + 6$. Of course, the forecast computations can fail if the model has regressor variables that have missing values in the forecast span. If the regressors contain missing values in the forecast horizon—that is, between the observations $n_1 - 23$ and $n_1 + 6$—the forecast horizon is reduced accordingly.

ALPHA=value

specifies the significance level of the forecast confidence intervals; for example, ALPHA=0.05, which is the default, results in a 95% confidence interval.

BACK=integer
SKIPLAST=integer

specifies the holdout sample for the evaluation of the forecasting performance of the model. For example, BACK=10 results in treating the last 10 observed values of the response series as unobserved. A post-sample-prediction-analysis table is produced for comparing the predicted values with the actual values in the holdout period. The default is BACK=0.

EXTRADIFFUSE=k

enables continuation of the diffuse filtering iterations for k additional iterations beyond the first instance where the initialization of the diffuse state would have otherwise taken place. If the specified k is larger than the sample size, the diffuse iterations continue until the end of the sample. Note that one-step-ahead forecasts are produced only after the diffuse state is initialized. Delaying the initialization leads to a reduction in the number of one-step-ahead forecasts. This option is useful when you want to ignore the first few one-step-ahead forecasts that often have large variance.

LEAD=integer

specifies the number of periods to forecast beyond the historical period defined by the SKIPFIRST= and BACK= options; for example, LEAD=10 results in the forecasting of 10 future values of the response series. The default is LEAD=12.

OUTFOR=SAS-data-set

specifies an output data set for the forecasts. The output data set contains the ID variable (if specified), the response and predictor series, the one-step-ahead and out-of-sample response series forecasts, the forecast confidence intervals, the smoothed values of the response series, and the smoothed forecasts produced as a result of the model-based decomposition of the series.

PLOT=DECOMP
PLOT=DECOMPVAR
PLOT=FDECOMP
PLOT=FDECOMPVAR
PLOT=FORECASTS
PLOT=TREND
PLOT=( < plot request > . . . < plot request > )

requests forecast and model decomposition plots. The FORECASTS option provides the plot of the series forecasts, the TREND and DECOMP options provide the plots of the smoothed trend and other decompositions, the DECOMPVAR option can be used to plot the variance of these components, and the FDECOMP and FDECOMPVAR options provide the same plots for the filtered decomposition estimates and their variances.

PRINT=DECOMP
PRINT=FDECOMP
PRINT=FORECASTS
PRINT=NONE
PRINT=( < print request > . . . < print request > )

controls the printing of the series forecasts and the printing of smoothed model decomposition estimates. By default, the series forecasts are printed only for the forecast horizon specified by the LEAD= option; that is, the one-step-ahead predicted values are not printed. You can request forecasts for the entire forecast span by specifying the PRINT=FORECASTS option. Using PRINT=DECOMP, you can get smoothed estimates of the following effects: trend, trend plus regression, trend plus regression plus cycle, and sum of all components except the irregular. If some of these effects are absent in the model, then they are ignored. Similarly you can get filtered estimates of these effects by using PRINT=FDECOMP. You can use PRINT=NONE to suppress the printing of all the forecast output.

SKIPFIRST=integer

indicates that some early part of the data needs to be ignored during the forecasting calculations. This can be useful if there is a reason to believe that the model being used for forecasting is not appropriate for this portion of the data. SKIPFIRST=10 results in skipping the first 10 measurements of the response series during the forecast calculations. The default is SKIPFIRST=0.
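As a further illustration, the following FORECAST statement (a sketch; the output data set name ucmfor is arbitrary) produces forecasts for 24 periods beyond the historical period with 90% confidence intervals, prints the forecasts for the entire forecast span, and saves the forecasts and the model-based decomposition of the series:

forecast lead=24 alpha=0.1 print=forecasts outfor=ucmfor;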

ID Statement

ID variable INTERVAL=value < ALIGN=value > ;

The ID statement names a numeric variable that identifies observations in the input and output data sets. The ID variable’s values are assumed to be SAS date, time, or datetime values. In addition, the ID statement specifies the frequency associated with the time series. The ID statement options also specify how the observations are aligned to form the time series. If the ID statement is specified, the INTERVAL= option must also be specified. If the ID statement is not specified, the observation number, with respect to the BY group, is used as the time ID. The values of the ID variable are extrapolated for the forecast observations based on the values of the INTERVAL= option.

ALIGN=value

controls the alignment of SAS dates used to identify output observations. The ALIGN= option has the following possible values: BEGINNING | BEG | B, MIDDLE | MID | M, and ENDING | END | E. The default is BEGINNING. The ALIGN= option is used to align the ID variable with the beginning, middle, or end of the time ID interval specified by the INTERVAL= option.

INTERVAL=value

specifies the time interval between observations. This option is required in the ID statement.


INTERVAL=value is used in conjunction with the ID variable to check that the input data are in order and have no gaps. The INTERVAL= option is also used to extrapolate the ID values past the end of the input data. For a complete discussion of the intervals supported, please see Chapter 4, “Date Intervals, Formats, and Functions.”
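For example, the following ID statement (a sketch that assumes a SAS date variable named date in the input data set) declares a monthly series whose output observations are aligned to the beginning of each month:

id date interval=month align=beg;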

IRREGULAR Statement

IRREGULAR < options > ;

The IRREGULAR statement is used to include an irregular component in the model. There can be at most one IRREGULAR statement in the model specification. The irregular component corresponds to the overall random error, $\epsilon_t$, in the model. By default the irregular component is modeled as white noise—that is, as a sequence of independent, identically distributed, zero-mean, Gaussian random variables. However, as an experimental feature in this release of the UCM procedure, you can also model it as an autoregressive moving-average (ARMA) process. The options for specifying an ARMA model for the irregular component are given in a separate subsection: “ARMA Specification (Experimental)” on page 1769. The options in this statement enable you to specify the value of $\sigma_\epsilon^2$ and to output the forecasts of $\epsilon_t$. As a default, $\sigma_\epsilon^2$ is estimated using the data. Two examples of the IRREGULAR statement are given next. In the first example the statement is in its simplest form, resulting in the inclusion of an irregular component that is white noise with unknown variance:

irregular;

The following statement provides a starting value for $\sigma_\epsilon^2$, to be used in the nonlinear parameter estimation process. It also requests the printing of smoothed predictions of $\epsilon_t$. The smoothed irregulars are useful in model diagnostics.

irregular variance=4 print=smooth;

NOEST

fixes the value of $\sigma_\epsilon^2$ to the value specified in the VARIANCE= option.

PLOT=FILTER
PLOT=SMOOTH
PLOT=( < FILTER > < SMOOTH > )

requests plotting of the filtered or smoothed estimate of the irregular component.

PRINT=FILTER
PRINT=SMOOTH
PRINT=( < FILTER > < SMOOTH > )

requests printing of the filtered or smoothed estimate of the irregular component.

VARIANCE=value

specifies an initial value for $\sigma_\epsilon^2$ during the parameter estimation process. Any nonnegative value, including zero, is an acceptable starting value.


ARMA Specification (Experimental)

This section details the options for specifying an ARMA model for the irregular component. The specification of ARMA models requires some notation, which is explained first.

Let $B$ denote the backshift operator—that is, for any sequence $\zeta_t$, $B\zeta_t = \zeta_{t-1}$. The higher powers of $B$ represent larger shifts (for example, $B^3 \zeta_t = \zeta_{t-3}$). A random sequence $\epsilon_t$ follows a zero-mean ARMA(p,q)(P,Q)$_s$ model with nonseasonal autoregressive order $p$, seasonal autoregressive order $P$, nonseasonal moving-average order $q$, and seasonal moving-average order $Q$, if it satisfies the following difference equation specified in terms of the polynomials in the backshift operator:

$$\phi(B)\Phi(B^s)\epsilon_t = \theta(B)\Theta(B^s)a_t$$

where $a_t$ is a white noise sequence and $s$ is the season length. The polynomials $\phi, \Phi, \theta,$ and $\Theta$ are of orders $p$, $P$, $q$, and $Q$, respectively, which can be any nonnegative integers. The season length $s$ must be a positive integer. For example, $\epsilon_t$ satisfies an ARMA(1,1) model—that is, $p = 1, q = 1, P = 0,$ and $Q = 0$—if

$$\epsilon_t = \phi_1 \epsilon_{t-1} + a_t - \theta_1 a_{t-1}$$

for some coefficients $\phi_1$ and $\theta_1$ and a white noise sequence $a_t$. Similarly $\epsilon_t$ satisfies an ARMA(1,1)(1,1)$_{12}$ model if

$$\epsilon_t = \phi_1 \epsilon_{t-1} + \Phi_1 \epsilon_{t-12} - \phi_1 \Phi_1 \epsilon_{t-13} + a_t - \theta_1 a_{t-1} - \Theta_1 a_{t-12} + \theta_1 \Theta_1 a_{t-13}$$

for some coefficients $\phi_1, \Phi_1, \theta_1,$ and $\Theta_1$ and a white noise sequence $a_t$. The ARMA process is stationary and invertible if the defining polynomials $\phi, \Phi, \theta,$ and $\Theta$ have all their roots outside the unit circle—that is, their absolute values are strictly larger than 1.0. It is assumed that the ARMA model specified for the irregular component is stationary and invertible—that is, the coefficients of the polynomials $\phi, \Phi, \theta,$ and $\Theta$ are constrained so that the stationarity and invertibility conditions are satisfied. The unknown coefficients of these polynomials become part of the model parameter vector that is estimated using the data.

The notation for a closely related class of models, autoregressive integrated moving-average (ARIMA) models, is also given here. A random sequence $y_t$ is said to follow an ARIMA(p,d,q)(P,D,Q)$_s$ model if, for some nonnegative integers $d$ and $D$, the differenced series $\epsilon_t = (1 - B)^d (1 - B^s)^D y_t$ follows an ARMA(p,q)(P,Q)$_s$ model. The integers $d$ and $D$ are called nonseasonal and seasonal differencing orders, respectively. You can specify ARIMA models by using the DEPLAG statement for specifying the differencing orders and by using the IRREGULAR statement for the ARMA specification. See Example 29.8 for an example of ARIMA(0,1,1)(0,1,1)$_{12}$ model specification. Brockwell and Davis (1991) can be consulted for additional information about ARIMA models.

You can use options of the IRREGULAR statement to specify the desired ARMA model and to request printed and graphical output. Several examples of the IRREGULAR statement are given next. The following statement specifies an irregular component that is modeled as an ARMA(1,1) process. It also requests plotting its smoothed estimate.

irregular p=1 q=1 plot=smooth;


The following statement specifies an ARMA(1,1)(1,1)$_{12}$ model. It also fixes the coefficient of the first-order seasonal moving-average polynomial to 0.1. The other coefficients and the white noise variance are estimated using the data.

irregular p=1 sp=1 q=1 sq=1 s=12 sma=0.1 noest=(sma);

AR=$\phi_1$ $\phi_2$ . . . $\phi_p$

lists the starting values of the coefficients of the nonseasonal autoregressive polynomial:

$$\phi(B) = 1 - \phi_1 B - \cdots - \phi_p B^p$$

where the order $p$ is specified in the P= option. The coefficients $\phi_i$ must define a stationary autoregressive polynomial.

MA=$\theta_1$ $\theta_2$ . . . $\theta_q$

lists the starting values of the coefficients of the nonseasonal moving-average polynomial:

$$\theta(B) = 1 - \theta_1 B - \cdots - \theta_q B^q$$

where the order $q$ is specified in the Q= option. The coefficients $\theta_i$ must define an invertible moving-average polynomial.

NOEST=( < AR > < SAR > < MA > < SMA > < VARIANCE > )

fixes the values of the ARMA parameters and the value of the white noise variance to those specified in the AR=, SAR=, MA=, SMA=, or VARIANCE= options.

P=integer

specifies the order of the nonseasonal autoregressive polynomial. The order can be any nonnegative integer; the default value is 0. In practice the order is a small integer such as 1, 2, or 3.

Q=integer

specifies the order of the nonseasonal moving-average polynomial. The order can be any nonnegative integer; the default value is 0. In practice the order is a small integer such as 1, 2, or 3.

S=integer

specifies the season length used during the specification of the seasonal autoregressive or seasonal moving-average polynomial. The season length can be any positive integer; for example, S=4 might be an appropriate value for a quarterly series. The default value is S=1.

SAR=$\Phi_1$ $\Phi_2$ . . . $\Phi_P$

lists the starting values of the coefficients of the seasonal autoregressive polynomial:

$$\Phi(B^s) = 1 - \Phi_1 B^s - \cdots - \Phi_P B^{sP}$$

where the order $P$ is specified in the SP= option and the season length $s$ is specified in the S= option. The coefficients $\Phi_i$ must define a stationary autoregressive polynomial.


SMA=$\Theta_1$ $\Theta_2$ . . . $\Theta_Q$

lists the starting values of the coefficients of the seasonal moving-average polynomial:

$$\Theta(B^s) = 1 - \Theta_1 B^s - \cdots - \Theta_Q B^{sQ}$$

where the order $Q$ is specified in the SQ= option and the season length $s$ is specified in the S= option. The coefficients $\Theta_i$ must define an invertible moving-average polynomial.

SP=integer

specifies the order of the seasonal autoregressive polynomial. The order can be any nonnegative integer; the default value is 0. In practice the order is a small integer such as 1 or 2.

SQ=integer

specifies the order of the seasonal moving-average polynomial. The order can be any nonnegative integer; the default value is 0. In practice the order is a small integer such as 1 or 2.
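To illustrate several of the preceding options together, the following statement (an illustrative sketch; the starting values 0.5, 0.3, and 1 are arbitrary) specifies an ARMA(1,1) model for the irregular component and supplies starting values for the autoregressive coefficient, the moving-average coefficient, and the white noise variance:

irregular p=1 q=1 ar=0.5 ma=0.3 variance=1;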

LEVEL Statement

LEVEL < options > ;

The LEVEL statement is used to include a level component in the model. The level component, either by itself or together with a slope component (see the SLOPE statement), forms the trend component, $\mu_t$, of the model. If the slope component is absent, the resulting trend is a random walk (RW) specified by the following equations:

$$\mu_t = \mu_{t-1} + \eta_t, \qquad \eta_t \sim \mathrm{i.i.d.}\; N(0, \sigma_\eta^2)$$

If the slope component is present, signified by the presence of a SLOPE statement, a locally linear trend (LLT) is obtained. The equations of LLT are as follows:

$$\mu_t = \mu_{t-1} + \beta_{t-1} + \eta_t, \qquad \eta_t \sim \mathrm{i.i.d.}\; N(0, \sigma_\eta^2)$$
$$\beta_t = \beta_{t-1} + \xi_t, \qquad \xi_t \sim \mathrm{i.i.d.}\; N(0, \sigma_\xi^2)$$

In either case, the options in the LEVEL statement are used to specify the value of $\sigma_\eta^2$ and to request forecasts of $\mu_t$. The SLOPE statement is used for similar purposes in the case of the slope $\beta_t$. The following examples illustrate the use of the LEVEL statement. Assuming that a SLOPE statement is not added subsequently, a simple random walk trend is specified by the following statement:

level;

The following statements specify a locally linear trend with the value of $\sigma_\eta^2$ fixed at 4. They also request printing of the filtered values of $\mu_t$. The value of $\sigma_\xi^2$, the disturbance variance in the slope equation, is estimated from the data.


level variance=4 noest print=filter;
slope;

CHECKBREAK

turns on the checking of breaks in the level component.

NOEST

fixes the value of $\sigma_\eta^2$ to the value specified in the VARIANCE= option.

PLOT=FILTER
PLOT=SMOOTH
PLOT=( < FILTER > < SMOOTH > )

requests plotting of the filtered or smoothed estimate of the level component.

PRINT=FILTER
PRINT=SMOOTH
PRINT=( < FILTER > < SMOOTH > )

requests printing of the filtered or smoothed estimate of the level component.

VARIANCE=value

specifies an initial value for $\sigma_\eta^2$, the disturbance variance in the $\mu_t$ equation, at the start of the parameter estimation process. Any nonnegative value, including zero, is an acceptable starting value.

MODEL Statement

MODEL dependent < = regressors > ;

The MODEL statement specifies the response variable and, optionally, the predictor or regressor variables for the UCM model. This is a required statement in the UCM procedure. The predictors specified in the MODEL statement are assumed to have a linear and time-invariant relationship with the response. The predictors that have time-varying regression coefficients are specified separately in the RANDOMREG statement. Similarly, the predictors that have a nonlinear effect on the response variable are specified separately in the SPLINEREG statement. Only one MODEL statement can be specified.

NLOPTIONS Statement

NLOPTIONS < options > ;

PROC UCM uses the nonlinear optimization (NLO) subsystem to perform the nonlinear optimization of the likelihood function during the estimation of model parameters. You can use the NLOPTIONS statement to control different aspects of this optimization process. For most problems the default settings of the optimization process are adequate. However, in some cases it might be useful to change the optimization technique or to change the maximum number of iterations. This can be done by using the TECH= and MAXITER= options in the NLOPTIONS statement as follows:

nloptions tech=dbldog maxiter=200;

This sets the maximum number of iterations to 200 and changes the optimization technique to DBLDOG rather than the default technique, TRUREG, used in PROC UCM. A discussion of the full range of options that can be used with the NLOPTIONS statement is given in Chapter 6, “Nonlinear Optimization Methods.” In PROC UCM all these options are available except the options related to the printing of the optimization history. In this version of PROC UCM all the printed output from the NLO subsystem is suppressed.

OUTLIER Statement

OUTLIER < options > ;

The OUTLIER statement enables you to control the reporting of the additive outliers (AO) and level shifts (LS) in the response series. The AOs are searched by default. You can turn on the search for LSs by using the CHECKBREAK option in the LEVEL statement.

ALPHA=significance-level

specifies the significance level for reporting the outliers. The default is 0.05.

MAXNUM=number

limits the number of outliers to search. The default is MAXNUM=5.

MAXPCT=number

is similar to the MAXNUM= option. In the MAXPCT= option you can limit the number of outliers to search for according to a percentage of the series length. The default is MAXPCT=1. When both of these options are specified, the minimum of the two search numbers is used.

PRINT=SHORT | DETAIL

enables you to control the printed output of the outlier search. The PRINT=SHORT option, which is the default, produces an outlier summary table containing the most significant outliers, either AO or LS, discovered in the outlier search. The PRINT=DETAIL option produces, in addition to the outlier summary table, separate tables containing the AO and LS structural break chi-square statistics computed at each time point in the estimation span.
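For example, the following statement (an illustrative sketch) searches for up to 10 outliers, reports them at the 0.01 significance level, and prints the detailed chi-square statistics:

outlier alpha=0.01 maxnum=10 print=detail;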

RANDOMREG Statement

RANDOMREG regressors < / options > ;


The RANDOMREG statement is used to specify regressors with time-varying regression coefficients. Each regression coefficient—say, $\beta_t$—is assumed to evolve as a random walk:

$$\beta_t = \beta_{t-1} + \eta_t, \qquad \eta_t \sim \mathrm{i.i.d.}\; N(0, \sigma^2)$$

Of course, if the random walk disturbance variance $\sigma^2$ is zero, then the regression coefficient is not time varying, and it reduces to the standard regression setting. There can be multiple RANDOMREG statements, and each statement can contain one or more regressors. The regressors in a given RANDOMREG statement form a group that is assumed to share the same disturbance variance parameter. The random walks associated with different regressors are assumed to be independent. For an example of using this statement see Example 29.4. See the section “Reporting Parameter Estimates for Random Regressors” on page 1794 for additional information about the way parameter estimates are reported for this type of regressors.

NOEST

fixes the value of $\sigma^2$ to the value specified in the VARIANCE= option.

PLOT=FILTER
PLOT=SMOOTH
PLOT=( < FILTER > < SMOOTH > )

requests plotting of the filtered or smoothed estimate of the time-varying regression coefficient.

PRINT=FILTER
PRINT=SMOOTH
PRINT=( < FILTER > < SMOOTH > )

requests printing of the filtered or smoothed estimate of the time-varying regression coefficient.

VARIANCE=value

specifies an initial value for $\sigma^2$ during the parameter estimation process. Any nonnegative value, including zero, is an acceptable starting value.
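For example, the following statement (a sketch that assumes regressors x1 and x2 in the input data set) specifies two time-varying regression coefficients that form a group with a common disturbance variance and plots their smoothed estimates:

randomreg x1 x2 / plot=smooth;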

SEASON Statement

SEASON LENGTH = integer < options > ;

The SEASON or SEASONAL statement is used to specify a seasonal component, $\gamma_t$, in the model. A seasonal component can be one of the two types, DUMMY or TRIG. A DUMMY seasonal with season length $s$ satisfies the following stochastic equation:

$$\sum_{i=0}^{s-1} \gamma_{t-i} = \omega_t, \qquad \omega_t \sim \mathrm{i.i.d.}\; N(0, \sigma_\omega^2)$$

The equations for a TRIG (short for trigonometric) seasonal component are as follows:

$$\gamma_t = \sum_{j=1}^{[s/2]} \gamma_{j,t}$$

where $[s/2]$ equals $s/2$ if $s$ is even and $(s-1)/2$ if it is odd. The sinusoids, also called harmonics, $\gamma_{j,t}$ have frequencies $\lambda_j = 2\pi j / s$ and are specified by the matrix equation

$$\begin{bmatrix} \gamma_{j,t} \\ \gamma_{j,t}^* \end{bmatrix} = \begin{bmatrix} \cos \lambda_j & \sin \lambda_j \\ -\sin \lambda_j & \cos \lambda_j \end{bmatrix} \begin{bmatrix} \gamma_{j,t-1} \\ \gamma_{j,t-1}^* \end{bmatrix} + \begin{bmatrix} \omega_{j,t} \\ \omega_{j,t}^* \end{bmatrix}$$

where the disturbances $\omega_{j,t}$ and $\omega_{j,t}^*$ are assumed to be independent and, for fixed $j$, $\omega_{j,t}$ and $\omega_{j,t}^* \sim N(0, \sigma_\omega^2)$. If $s$ is even, then the equation for $\gamma_{s/2,t}^*$ is not needed and $\gamma_{s/2,t}$ is given by

$$\gamma_{s/2,t} = -\gamma_{s/2,t-1} + \omega_{s/2,t}$$

In the TRIG seasonal case, the option KEEPH= or DROPH= can be used to obtain subset trigonometric seasonals that contain only a subset of the full set of harmonics $\gamma_{j,t}$, $j = 1, 2, \ldots, [s/2]$. This is particularly useful when the season length $s$ is large and the seasonal pattern is relatively smooth.

Note that whether the seasonal type is DUMMY or TRIG, there is only one parameter, the disturbance variance $\sigma_\omega^2$, in the seasonal model. There can be more than one seasonal component in the model, necessarily with different season lengths if the seasons are full. You can have multiple subset season components with the same season length, if you need to use separate disturbance variances for different sets of harmonics. Each seasonal component is specified using a separate SEASON statement. A model with multiple seasonal components can easily become quite complex and might need a large amount of data and computing resources for its estimation and forecasting. The examples that follow illustrate the use of the SEASON statement.

The following statement specifies a DUMMY type (default) seasonal component with a season length of four, corresponding to the quarterly seasonality. The disturbance variance $\sigma_\omega^2$ is estimated from the data.

season length=4;

The following statement specifies a trigonometric seasonal with monthly seasonality. It also provides a starting value for $\sigma_\omega^2$.

season length=12 type=trig variance=4;

DROPHARMONICS|DROPH=number-list | n TO m BY p

enables you to drop some harmonics $\gamma_{j,t}$ from the full set of harmonics used to obtain a trigonometric seasonal. The drop list can include any integer between 1 and $[s/2]$, $s$ being the season length. For example, the following specification results in a trigonometric seasonal with a season length of 12 that consists of only the first four harmonics $\gamma_{j,t}$, $j = 1, 2, 3, 4$:

season length=12 type=trig DROPH=5 6;

The last two high-frequency harmonics are dropped. The DROPH= option cannot be used with the KEEPH= option.


KEEPHARMONICS|KEEPH=number-list | n TO m BY p

enables you to keep only the harmonics $\gamma_{j,t}$ listed in the option to obtain a trigonometric seasonal. The keep list can include any integer between 1 and $[s/2]$, $s$ being the season length. For example, the following specification results in a trigonometric seasonal with a season length of 12 that consists of all six harmonics $\gamma_{j,t}$, $j = 1, \ldots, 6$:

season length=12 type=trig KEEPH=1 to 3;
season length=12 type=trig KEEPH=4 to 6;

However, these six harmonics are grouped into two groups, each having its own disturbance variance parameter. The DROPH= option cannot be used with the KEEPH= option.

LENGTH=integer

specifies the season length, $s$. This is a required option in this statement. The season length can be any integer greater than or equal to 2. Typical examples of season lengths are 12, corresponding to the monthly seasonality, or 4, corresponding to the quarterly seasonality.

NOEST

fixes the value of the disturbance variance parameter to the value specified in the VARIANCE= option.

PLOT=FILTER
PLOT=SMOOTH
PLOT=F_ANNUAL
PLOT=S_ANNUAL
PLOT=( . . . )

requests plots of the season component. When you specify only one plot request, you can omit the parentheses around the plot request. You can use the FILTER and SMOOTH options to plot the filtered and smoothed estimates of the season component $\gamma_t$. You can use the F_ANNUAL and S_ANNUAL options to get the plots of “annual” variation in the filtered and smoothed estimates of $\gamma_t$. The annual plots are useful to see the change in the contribution of a particular month over the span of years. Here “month” and “year” are generic terms that change appropriately with the interval type being used to label the observations and the season length. For example, for monthly data with a season length of 12, the usual meaning applies, while for daily data with a season length of 7, the days of the week serve as months and the weeks serve as years.

PRINT=HARMONICS

requests printing of the summary of harmonics present in the seasonal component. This option is valid only for the trigonometric seasonal component.

PRINT=FILTER
PRINT=SMOOTH
PRINT=( < print request > . . . < print request > )

requests printing of the filtered or smoothed estimate of the seasonal component $\gamma_t$.

TYPE=DUMMY | TRIG

specifies the type of the seasonal component. The default type is DUMMY.


VARIANCE=value

specifies an initial value for the disturbance variance, $\sigma_\omega^2$, in the $\gamma_t$ equation at the start of the parameter estimation process. Any nonnegative value, including zero, is an acceptable starting value.

SLOPE Statement

SLOPE < options > ;

The SLOPE statement is used to include a slope component in the model. The slope component cannot be used without the level component (see the LEVEL statement). The level and slope specifications jointly define the trend component of the model. A SLOPE statement without the accompanying LEVEL statement is ignored. The equations of the trend, defined jointly by the level $\mu_t$ and slope $\beta_t$, are as follows:

$$\mu_t = \mu_{t-1} + \beta_{t-1} + \eta_t, \qquad \eta_t \sim \mathrm{i.i.d.}\; N(0, \sigma_\eta^2)$$
$$\beta_t = \beta_{t-1} + \xi_t, \qquad \xi_t \sim \mathrm{i.i.d.}\; N(0, \sigma_\xi^2)$$

The SLOPE statement is used to specify the value of the disturbance variance, $\sigma_\xi^2$, in the slope equation, and to request forecasts of $\beta_t$. The following examples illustrate this statement:

level;
slope;

The preceding statements fit a model with a locally linear trend. The disturbance variances $\sigma_\eta^2$ and $\sigma_\xi^2$ are estimated from the data. You can request a locally linear trend with fixed slope by using the following statements:

level;
slope variance=0 noest;

NOEST

fixes the value of the disturbance variance, $\sigma_\xi^2$, to the value specified in the VARIANCE= option.

PLOT=FILTER
PLOT=SMOOTH
PLOT=( < FILTER > < SMOOTH > )

requests plotting of the filtered or smoothed estimate of the slope component.

PRINT=FILTER
PRINT=SMOOTH
PRINT=( < FILTER > < SMOOTH > )

requests printing of the filtered or smoothed estimate of the slope component $\beta_t$.


VARIANCE=value

specifies an initial value for the disturbance variance, $\sigma_\xi^2$, in the $\beta_t$ equation at the start of the parameter estimation process. Any nonnegative value, including zero, is an acceptable starting value.

SPLINEREG Statement

SPLINEREG regressor < options > ;

The SPLINEREG statement is used to specify a regressor that has a nonlinear relationship with the dependent series that can be approximated by a given B-spline. If the specified spline has degree $d$ and is based on $n$ internal knots, then it is known that it can be written as a linear combination of $(n + d + 1)$ regressors that are derived from the original regressor. The span of these $(n + d + 1)$ derived regressors includes the constant; therefore, to avoid multicollinearity with the level component, one of these regressors is dropped. Specifying the SPLINEREG statement is equivalent to specifying a RANDOMREG statement with these derived regressors. There can be multiple SPLINEREG statements. You must specify at least one interior knot, either by using the NKNOTS= option or the KNOTS= option. For additional information about splines, see Chapter 90, “The TRANSREG Procedure” (SAS/STAT User’s Guide). For an example of using this statement, see Example 29.6. See the section “Reporting Parameter Estimates for Random Regressors” on page 1794 for additional information about the way parameter estimates are reported for this type of regressors.

DEGREE=integer

specifies the degree of the spline. It can be any integer larger than or equal to zero. The default value is 3. The polynomial degree should be a small integer, usually 0, 1, 2, or 3. Larger values are rarely useful. If you have any doubt as to what degree to specify, use the default.

KNOTS=number-list | n TO m BY p

specifies the interior knots or break points. The values in the knot list must be nondecreasing and must lie between the minimum and the maximum of the spline regressor values in the input data set. The first time you specify a value in the knot list, it indicates a discontinuity in the $n$th (from DEGREE=$n$) derivative of the transformation function at the value of the knot. The second mention of a value indicates a discontinuity in the $(n-1)$th derivative of the transformation function at the value of the knot. Knots can be repeated any number of times to decrease the smoothness at the break points, but the values in the knot list can never decrease. You cannot use the KNOTS= option with the NKNOTS= option. You should keep the number of knots small.

NKNOTS=m

creates $m$ knots, the first at the $100/(m+1)$ percentile, the second at the $200/(m+1)$ percentile, and so on. Knots are always placed at data values; there is no interpolation. For example, if NKNOTS=3, knots are placed at the 25th percentile, the median, and the 75th percentile. The value specified for the NKNOTS= option must be greater than or equal to 1. You cannot use the NKNOTS= option with the KNOTS= option.


NOTE: Specifying knots by using the NKNOTS= option can result in different sets of knots in the estimation and forecast stages if the distributions of regressor values in the estimation and forecast spans differ. The estimation span is based on the BACK= and SKIPFIRST= options in the ESTIMATE statement, and the forecast span is based on the BACK= and SKIPFIRST= options in the FORECAST statement.

NOEST

fixes the value of the regression coefficient random walk disturbance variance to the value specified in the VARIANCE= option.

PLOT=FILTER
PLOT=SMOOTH
PLOT=( < FILTER > < SMOOTH > )

requests plotting of the filtered or smoothed estimate of the time-varying regression coefficient.

PRINT=FILTER
PRINT=SMOOTH
PRINT=( < FILTER > < SMOOTH > )

requests printing of the filtered or smoothed estimate of the time-varying regression coefficient.

VARIANCE=value

specifies an initial value for the regression coefficient random walk disturbance variance during the parameter estimation process. Any nonnegative value, including zero, is an acceptable starting value.
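For example, the following statement (a sketch that assumes a regressor named temp in the input data set) approximates a nonlinear effect of temp by a cubic spline with four automatically placed interior knots:

splinereg temp nknots=4 degree=3;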

SPLINESEASON Statement

SPLINESEASON LENGTH = integer KNOTS= integer1 integer2 . . . < options > ;

The SPLINESEASON statement is used to specify a seasonal pattern that is to be approximated by a given B-spline. If the specified spline has degree $d$ and is based on $n$ internal knots, then it can be written as a linear combination of $(n + d)$ regressors that are derived from the seasonal dummy regressors. The SPLINESEASON specification is equivalent to specifying a RANDOMREG specification with these derived regressors. Such an approximation is useful only if the season length is relatively large, at least larger than $(n + d)$. For additional information about splines, see Chapter 90, “The TRANSREG Procedure” (SAS/STAT User’s Guide). For an example of using this statement, see Example 29.3.

DEGREE=integer

specifies the degree of the spline. It can be any integer greater than or equal to zero. The default value is 3.

KNOTS=integer1 integer2 . . .

lists the internal knots. This list of values must be a nondecreasing sequence of integers within the range of 2 to $(s-1)$, where $s$ is the season length specified in the LENGTH= option. This is a required option in this statement.


LENGTH=integer

specifies the season length, $s$. This is a required option in this statement. The length can be any integer greater than or equal to three.

NOEST

fixes the value of the regression coefficient random walk disturbance variance to the value specified in the VARIANCE= option.

OFFSET=integer

specifies the position of the first measurement within the season, if the first measurement is not at the start of the season. The OFFSET= value must be between one and the season length. The default value is one. The first measurement refers to the start of the estimation span and the forecast span. If these spans differ, their starting measurements must be separated by an integer multiple of the season length.

PLOT=FILTER
PLOT=SMOOTH
PLOT=( < FILTER > < SMOOTH > )

requests plots of the season component. When you specify only one plot request, you can omit the parentheses around the plot request. You can use the FILTER and SMOOTH options to plot the filtered and smoothed estimates of the season component.

PRINT=FILTER
PRINT=SMOOTH
PRINT=( < FILTER > < SMOOTH > )

requests the printing of the filtered or smoothed estimate of the spline season component.

RKNOTS=(knot, . . . , knot ) . . . (knot, . . . , knot ) (Experimental)

specifies a grouping of knots such that the knots within the same group have identical seasonal values. The knots specified in this option must already be present in the list specified by the KNOTS= option. The knot groups must be non-overlapping and without any repeated knots.

specifies an initial value for the regression coefficient random walk disturbance variance during the parameter estimation process. Any nonnegative value, including zero, is an acceptable starting value.
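For example, the following statement (an illustrative sketch) approximates a seasonal pattern of season length 24 by a cubic spline with interior knots at 6, 12, and 18:

splineseason length=24 knots=6 12 18 degree=3;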


Details: UCM Procedure

An Introduction to Unobserved Component Models

A UCM decomposes the response series into components such as trend, seasons, cycles, and the regression effects due to predictor series. The following model shows a possible scenario:

$$y_t = \mu_t + \gamma_t + \psi_t + \sum_{j=1}^{m} \beta_j x_{jt} + \epsilon_t, \qquad \epsilon_t \sim \mathrm{i.i.d.}\; N(0, \sigma_\epsilon^2)$$

The terms $\mu_t$, $\gamma_t$, and $\psi_t$ represent the trend, seasonal, and cyclical components, respectively. In fact the model can contain multiple seasons and cycles, and the seasons can be of different types. For simplicity of discussion the preceding model contains only one of each of these components. The regression term, $\sum_{j=1}^{m} \beta_j x_{jt}$, includes the contribution of regression variables with fixed regression coefficients. A model can also contain regression variables that have time-varying regression coefficients or that have a nonlinear relationship with the dependent series (see “Incorporating Predictors of Different Kinds” on page 1793). The disturbance term $\epsilon_t$, also called the irregular component, is usually assumed to be Gaussian white noise. In some cases it is useful to model the irregular component as a stationary ARMA process. See the section “Modeling the Irregular Component” on page 1785 for additional information.

By controlling the presence or absence of various terms and by choosing the proper flavor of the included terms, the UCMs can generate a rich variety of time series patterns. A UCM can be applied to variables after transforming them by transforms such as log and difference.

The components $\mu_t$, $\gamma_t$, and $\psi_t$ model structurally different aspects of the time series. For example, the trend $\mu_t$ models the natural tendency of the series in the absence of any other perturbing effects such as seasonality, cyclical components, and the effects of exogenous variables, while the seasonal component $\gamma_t$ models the correction to the level due to the seasonal effects. These components are assumed to be statistically independent of each other and independent of the irregular component. All of the component models can be thought of as stochastic generalizations of the relevant deterministic patterns in time. This way the deterministic cases emerge as special cases of the stochastic models. The different models available for these unobserved components are discussed next.

Modeling the Trend

As mentioned earlier, the trend in a series can be loosely defined as the natural tendency of the series in the absence of any other perturbing effects. The UCM procedure offers two ways to model the trend component $\mu_t$. The first model, called the random walk (RW) model, implies that the trend remains roughly constant throughout the life of the series without any persistent upward or downward drift. In the second model the trend is modeled as a locally linear time trend (LLT). The RW model can be described as

$$\mu_t = \mu_{t-1} + \eta_t, \qquad \eta_t \sim \mathrm{i.i.d.}\; N(0, \sigma_\eta^2)$$

Note that if $\sigma_\eta^2 = 0$, then the model becomes $\mu_t = \text{constant}$. In the LLT model the trend is locally linear, consisting of both the level and slope. The LLT model is

$$\mu_t = \mu_{t-1} + \beta_{t-1} + \eta_t, \qquad \eta_t \sim \mathrm{i.i.d.}\; N(0, \sigma_\eta^2)$$
$$\beta_t = \beta_{t-1} + \xi_t, \qquad \xi_t \sim \mathrm{i.i.d.}\; N(0, \sigma_\xi^2)$$

The disturbances $\eta_t$ and $\xi_t$ are assumed to be independent. There are some interesting special cases of this model obtained by setting one or both of the disturbance variances $\sigma_\eta^2$ and $\sigma_\xi^2$ equal to zero. If $\sigma_\xi^2$ is set equal to zero, then you get a linear trend model with fixed slope. If $\sigma_\eta^2$ is set to zero, then the resulting model usually has a smoother trend. If both the variances are set to zero, then the resulting model is the deterministic linear time trend: $\mu_t = \mu_0 + \beta_0 t$. You can incorporate these trend patterns in your model by using the LEVEL and SLOPE statements.
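For example, the following statements (a minimal sketch) specify the smoother trend obtained by fixing $\sigma_\eta^2 = 0$; only the slope disturbance variance $\sigma_\xi^2$ is estimated from the data:

level variance=0 noest;
slope;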

Modeling a Cycle

A deterministic cycle $\psi_t$ with frequency $\lambda$, $0 < \lambda < \pi$, can be written as

$$\psi_t = \alpha \cos(\lambda t) + \beta \sin(\lambda t)$$

If the argument $t$ is measured on a continuous scale, then $\psi_t$ is a periodic function with period $2\pi/\lambda$, amplitude $A = (\alpha^2 + \beta^2)^{1/2}$, and phase $\phi = \tan^{-1}(\beta/\alpha)$. Equivalently, the cycle can be written in terms of the amplitude and phase as

$$\psi_t = A \cos(\lambda t - \phi)$$

Note that when $\psi_t$ is measured only at the integer values, it is not exactly periodic, unless $\lambda = (2\pi j)/k$ for some integers $j$ and $k$. The cycles in their pure form are not used very often in practice. However, they are very useful as building blocks for more complex periodic patterns. It is well known that the periodic pattern of any complexity can be written as a sum of pure cycles of different frequencies and amplitudes. In time series situations it is useful to generalize this simple cyclical pattern to a stochastic cycle that has a fixed period but time-varying amplitude and phase. The stochastic cycle considered here is motivated by the following recursive formula for computing $\psi_t$:

$$\begin{bmatrix} \psi_t \\ \psi_t^* \end{bmatrix} = \begin{bmatrix} \cos \lambda & \sin \lambda \\ -\sin \lambda & \cos \lambda \end{bmatrix} \begin{bmatrix} \psi_{t-1} \\ \psi_{t-1}^* \end{bmatrix}$$

starting with $\psi_0 = \alpha$ and $\psi_0^* = \beta$. Note that for all $t$, $\psi_t$ and $\psi_t^*$ satisfy the relation

$$\psi_t^2 + \psi_t^{*2} = \alpha^2 + \beta^2$$

A stochastic generalization of the cycle $\psi_t$ can be obtained by adding random noise to this recursion and by introducing a damping factor, $\rho$, for additional modeling flexibility. This model can be described as follows:

$$\begin{bmatrix} \psi_t \\ \psi_t^* \end{bmatrix} = \rho \begin{bmatrix} \cos \lambda & \sin \lambda \\ -\sin \lambda & \cos \lambda \end{bmatrix} \begin{bmatrix} \psi_{t-1} \\ \psi_{t-1}^* \end{bmatrix} + \begin{bmatrix} \nu_t \\ \nu_t^* \end{bmatrix}$$

where $0 \le \rho \le 1$, and the disturbances $\nu_t$ and $\nu_t^*$ are independent $N(0, \sigma_\nu^2)$ variables. The resulting stochastic cycle has a fixed period but time-varying amplitude and phase. The stationarity properties of the random sequence $\psi_t$ depend on the damping factor $\rho$. If $\rho < 1$, $\psi_t$ has a stationary distribution with mean zero and variance $\sigma_\nu^2 / (1 - \rho^2)$. If $\rho = 1$, $\psi_t$ is nonstationary.

You can incorporate a cycle in a UCM by specifying a CYCLE statement (a sketch is shown below). You can include multiple cycles in the model by using separate CYCLE statements for each included cycle. As mentioned before, the cycles are very useful as building blocks for constructing more complex periodic patterns. Periodic patterns of almost any complexity can be created by superimposing cycles of different periods and amplitudes. In particular, the seasonal patterns, general periodic patterns with integer periods, can be constructed as sums of cycles. This important topic of modeling the seasonal components is considered next.
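The following statement is an illustrative sketch; it assumes the CYCLE statement's PERIOD= and RHO= options, which supply starting values for the cycle period and the damping factor. It adds a damped stochastic cycle with a period of 60 observations to the model:

cycle period=60 rho=0.8;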

Modeling Seasons

The seasonal fluctuations are a common source of variation in time series data. These fluctuations arise because of the regular changes in seasons or some other periodic events. The seasonal effects are regarded as corrections to the general trend of the series due to the seasonal variations, and these effects sum to zero when summed over the full season cycle. Therefore the seasonal component $\gamma_t$ is modeled as a stochastic periodic pattern of an integer period $s$ such that the sum $\sum_{i=0}^{s-1} \gamma_{t-i}$ is always zero in the mean. The period $s$ is called the season length. Two different models for the seasonal component are considered here. The first model is called the dummy variable form of the seasonal component. It is described by the equation

$$\sum_{i=0}^{s-1} \gamma_{t-i} = \omega_t, \qquad \omega_t \sim \mathrm{i.i.d.}\; N(0, \sigma_\omega^2)$$

The other model is called the trigonometric form of the seasonal component. In this case $\gamma_t$ is modeled as a sum of cycles of different frequencies. This model is given as follows:

$$\gamma_t = \sum_{j=1}^{[s/2]} \gamma_{j,t}$$

where $[s/2]$ equals $s/2$ if $s$ is even and $(s-1)/2$ if it is odd. The cycles $\gamma_{j,t}$ have frequencies $\lambda_j = 2\pi j / s$ and are specified by the matrix equation

$$\begin{bmatrix} \gamma_{j,t} \\ \gamma_{j,t}^* \end{bmatrix} = \begin{bmatrix} \cos \lambda_j & \sin \lambda_j \\ -\sin \lambda_j & \cos \lambda_j \end{bmatrix} \begin{bmatrix} \gamma_{j,t-1} \\ \gamma_{j,t-1}^* \end{bmatrix} + \begin{bmatrix} \omega_{j,t} \\ \omega_{j,t}^* \end{bmatrix}$$

where the disturbances $\omega_{j,t}$ and $\omega_{j,t}^*$ are assumed to be independent and, for fixed $j$, $\omega_{j,t}$ and $\omega_{j,t}^* \sim N(0, \sigma_\omega^2)$. If $s$ is even, then the equation for $\gamma_{s/2,t}^*$ is not needed and $\gamma_{s/2,t}$ is given by

$$\gamma_{s/2,t} = -\gamma_{s/2,t-1} + \omega_{s/2,t}$$

The cycles $\gamma_{j,t}$ are called harmonics. If the seasonal component is deterministic, the decomposition of the seasonal effects into these harmonics is identical to its Fourier decomposition. In this case the sum of squares of the seasonal factors equals the sum of squares of the amplitudes of these harmonics. In many practical situations, the contribution of the high-frequency harmonics is negligible and can be ignored, giving rise to a simpler description of the seasonal. In the case of stochastic seasonals, the situation might not be so transparent; however, similar considerations still apply. Note that if the disturbance variance $\sigma_\omega^2 = 0$, then both the dummy and the trigonometric forms of seasonal components reduce to constant seasonal effects. That is, the seasonal component reduces to a deterministic function that is completely determined by its first $s - 1$ values.

In the UCM procedure you can specify a seasonal component in a variety of ways, the SEASON statement being the simplest of these. The dummy and the trigonometric seasonal components discussed so far can be considered as saturated seasonal components that put no restrictions on the $s - 1$ seasonal values. In some cases a more parsimonious representation of the seasonal might be more appropriate. This is particularly useful for seasonal components with large season lengths. In the UCM procedure you can obtain parsimonious representations of the seasonal components in one of the following ways:

• Use a subset trigonometric seasonal component obtained by deleting a few of the $[s/2]$ harmonics used in its sum. For example, a slightly smoother seasonal component of length 12, corresponding to the monthly seasonality, can be obtained by deleting the highest-frequency harmonic of period 2. That is, such a seasonal component will be a sum of five stochastic cycles that have periods 12, 6, 4, 3, and 2.4. You can specify such subset seasonal components by using the KEEPH= or DROPH= option in the SEASON statement.

• Approximate the seasonal pattern by a suitable spline approximation. You can do this by using the SPLINESEASON statement.

• A block-seasonal pattern is a seasonal pattern where the pattern is divided into a few blocks of equal length such that the season values within a block are the same—for example, a monthly seasonal pattern that has only four different values, one for each quarter. In some situations a long seasonal pattern can be approximated by the sum of a block season and a simple season, the length of the simple season being equal to the block length of the block season. You can obtain such an approximation by using a combination of the BLOCKSEASON and SEASON statements.

• Consider a seasonal component of a large season length as a sum of two or more seasonal components that are each of much smaller season lengths. This can be done by specifying more than one SEASON statement.

Note that the preceding techniques of obtaining parsimonious seasonal components can also enable you to specify seasonal components that are more general than the simple saturated seasonal components. For example, you can specify a saturated trigonometric seasonal component that has some of its harmonics evolving according to one disturbance variance parameter while the others evolve with another disturbance variance parameter.


Modeling an Autoregression

An autoregression of order one can be thought of as a special case of a cycle when the frequency $\lambda$ is either 0 or $\pi$. Modeling this special case separately helps interpretation and parameter estimation. The autoregression component $r_t$ is modeled as follows:

$$r_t = \rho r_{t-1} + \nu_t, \qquad \nu_t \sim \mathrm{i.i.d.}\; N(0, \sigma_\nu^2)$$

where $-1 \le \rho < 1$. An autoregression can also provide an alternative to the IRREGULAR component when the model errors show some autocorrelation. You can incorporate an autoregression in your model by using the AUTOREG statement.
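For example, the following statement (an illustrative sketch that assumes the AUTOREG statement's RHO= option, which supplies a starting value for the damping coefficient $\rho$) adds an order-one autoregression component to the model:

autoreg rho=0.5;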

Modeling Regression Effects

A predictor variable can affect the response variable in a variety of ways. The UCM procedure enables you to model several different types of predictor-response relationships:

• The predictor-response relationship is linear, and the regression coefficient does not change with time. This is the simplest kind of relationship and such predictors are specified in the MODEL statement.

• The predictor-response relationship is linear, but the regression coefficient does change with time. Such predictors are specified in the RANDOMREG statement. Here the regression coefficient is assumed to evolve as a random walk.

• The predictor-response relationship is nonlinear and the relationship can change with time. This type of relationship can be approximated by an appropriate time-varying spline. Such predictors are specified in the SPLINEREG statement.

A response variable can depend on its own past values—that is, lagged dependent values. Such a relationship can be specified in the DEPLAG statement.

Modeling the Irregular Component

The components—such as trend, seasonal and regression effects, and nonstationary cycles—are used to capture the structural dynamics of a response series. In contrast, the stationary cycles and the autoregression are used to capture the transient aspects of the response series that are important for its short-range prediction but have little impact on its long-term forecasts. The irregular component represents the residual variation remaining in the response series that is modeled using an appropriate selection of structural and transient effects. In most cases, the irregular component can be assumed to be simply Gaussian white noise. In some other cases, however, the residual variation can be more complicated. In such situations, it might be necessary to model the irregular component as a stationary ARMA process. Moreover, you can use the ARMA irregular component together with the dependent lag specification (see the DEPLAG statement) to specify an ARIMA(p,d,q)(P,D,Q)$_s$ model for the response series. See the IRREGULAR statement for the explanation of the ARIMA notation. See Example 29.8 for an example of modeling a series by using an ARIMA(0,1,1)(0,1,1)$_{12}$ model.


The Model Parameters

The parameter vector in a UCM consists of the variances of the disturbance terms of the unobserved components, the damping coefficients and frequencies in the cycles, the damping coefficient in the autoregression, and the regression coefficients in the regression terms. These parameters are estimated by maximizing the likelihood. It is possible to restrict the values of the model parameters to user-specified values.

Model Specification

A UCM is specified by describing the components in the model. For example, consider the model

$$y_t = \mu_t + \gamma_t + \epsilon_t$$

consisting of the irregular, level, slope, and seasonal components. This model is called the basic structural model (BSM) by Harvey (1989). The syntax for a BSM with monthly seasonality of trigonometric type is as follows:

model y;
irregular;
level;
slope;
season length=12 type=trig;

Similarly the following syntax specifies a BSM with a response variable y, a regressor x, and dummy-type monthly seasonality:

model y = x;
irregular;
level;
slope variance=0 noest;
season length=12 type=dummy;

Moreover, the disturbance variance of the slope component is restricted to zero, giving rise to a local linear trend with fixed slope. A model can contain multiple cycle and seasonal components. In such cases the model syntax contains a separate statement for each of these multiple cycle or seasonal components; for example, the syntax for a model containing irregular and level components along with two cycle components could be as follows:

model y = x;
irregular;
level;
cycle;
cycle;


The UCMs as State Space Models

The UCMs considered in PROC UCM can be thought of as special cases of more general models, called (linear) Gaussian state space models (GSSM). A GSSM can be described as follows:

$$y_t = Z_t \alpha_t$$
$$\alpha_{t+1} = T_t \alpha_t + \zeta_{t+1}, \qquad \zeta_t \sim N(0, Q_t)$$
$$\alpha_1 \sim N(0, P)$$

The first equation, called the observation equation, relates the response series $y_t$ to a state vector $\alpha_t$ that is usually unobserved. The second equation, called the state equation, describes the evolution of the state vector in time. The system matrices $Z_t$ and $T_t$ are of appropriate dimensions and are known, except possibly for some unknown elements that become part of the parameter vector of the model. The noise series $\zeta_t$ consists of independent, zero-mean, Gaussian vectors with covariance matrices $Q_t$. For most of the UCMs considered here, the system matrices $Z_t$ and $T_t$, and the noise covariances $Q_t$, are time invariant—that is, they do not depend on time. In a few cases, however, some or all of them can depend on time. The initial state vector $\alpha_1$ is assumed to be independent of the noise series, and its covariance matrix $P$ can be partially diffuse. A random vector has a partially diffuse covariance matrix if it can be partitioned such that one part of the vector has a properly defined probability distribution, while the covariance matrix of the other part is infinite—that is, you have no prior information about this part of the vector. The covariance of the initial state $\alpha_1$ is assumed to have the following form:

$$P = P_* + \kappa P_\infty$$

where $P_*$ and $P_\infty$ are nonnegative definite, symmetric matrices and $\kappa$ is a constant that is assumed to be close to $\infty$. In the case of UCMs considered here, $P_\infty$ is always a diagonal matrix that consists of zeros and ones, and, if a particular diagonal element of $P_\infty$ is one, then the corresponding row and column in $P_*$ are zero.

The state space formulation of a UCM has many computational advantages. In this formulation there are convenient algorithms for estimating and forecasting the unobserved states $\{\alpha_t\}$ by using the observed series $\{y_t\}$. These algorithms also yield the in-sample and out-of-sample forecasts and the likelihood of $\{y_t\}$. The state space representation of a UCM does not need to be unique. In the representation used here, the unobserved components in the UCM often appear as elements of the state vector. This makes the elements of the state interpretable and, more important, the sample estimates and forecasts of these unobserved components are easily obtained. For additional information about the computational aspects of the state space modeling, see Durbin and Koopman (2001). Next, some notation is developed to describe the essential quantities computed during the analysis of the state space models.

Let $\{y_t,\; t = 1, \ldots, n\}$ be the observed sample from a series that satisfies a state space model. Next, for $1 \le t \le n$, let the one-step-ahead forecasts of the series, the states, and their variances be defined as follows, using the usual notation to denote the conditional expectation and conditional variance:

$$\hat{\alpha}_t = E(\alpha_t \mid y_1, y_2, \ldots, y_{t-1})$$
$$\Gamma_t = \mathrm{Var}(\alpha_t \mid y_1, y_2, \ldots, y_{t-1})$$
$$\hat{y}_t = E(y_t \mid y_1, y_2, \ldots, y_{t-1})$$
$$F_t = \mathrm{Var}(y_t \mid y_1, y_2, \ldots, y_{t-1})$$

These are also called the filtered estimates of the series and the states. Similarly, for $t \ge 1$, let the following denote the full-sample estimates of the series and the state values at time $t$:

$$\tilde{\alpha}_t = E(\alpha_t \mid y_1, y_2, \ldots, y_n)$$
$$\tilde{\Gamma}_t = \mathrm{Var}(\alpha_t \mid y_1, y_2, \ldots, y_n)$$
$$\tilde{y}_t = E(y_t \mid y_1, y_2, \ldots, y_n)$$
$$G_t = \mathrm{Var}(y_t \mid y_1, y_2, \ldots, y_n)$$

If the time $t$ is in the historical period—that is, if $1 \le t \le n$—then the full-sample estimates are called the smoothed estimates, and if $t$ lies in the future then they are called out-of-sample forecasts. Note that if $1 \le t \le n$, then $\tilde{y}_t = y_t$ and $G_t = 0$, unless $y_t$ is missing. All the filtered and smoothed estimates ($\hat{\alpha}_t, \tilde{\alpha}_t, \ldots, G_t$, and so on) are computed by using the Kalman filtering and smoothing (KFS) algorithm, which is an iterative process. If the initial state is diffuse, as is often the case for the UCMs, its treatment requires modification of the traditional KFS, which is called the diffuse KFS (DKFS). The details of the DKFS implemented in the UCM procedure can be found in de Jong and Chu-Chun-Lin (2003). Additional information on the state space models can be found in Durbin and Koopman (2001). The likelihood formulas described in this section are taken from the latter reference.

In the case of a diffuse initial condition, the effect of the improper prior distribution of $\alpha_1$ manifests itself in the first few filtering iterations. During these initial filtering iterations the distribution of the filtered quantities remains diffuse; that is, during these iterations the one-step-ahead series and state forecast variances $F_t$ and $\Gamma_t$ have the following form:

$$F_t = F_{*t} + \kappa F_{\infty t}$$
$$\Gamma_t = \Gamma_{*t} + \kappa \Gamma_{\infty t}$$

The actual number of iterations—say, I — affected by this improper prior depends on the nature of the vectors Zt , the number of nonzero diagonal elements of P1 , and the pattern of missing values in the dependent series. After I iterations, €1t and F1t become zero and the one-step-ahead series and state forecasts have proper distributions. These first I iterations constitute the initialization phase of the DKFS algorithm. The post-initialization phase of the DKFS and the traditional KFS is the same. In the state space modeling literature the pre-initialization and post-initialization phases are some times called pre-collapse and post-collapse phases of the diffuse Kalman filtering. In certain missing value patterns it is possible for I to exceed the sample size; that is, the sample information can be insufficient to create a proper prior for the filtering process. In these cases, parameter estimation and forecasting is done on the basis of this improper prior, and some or all

The UCMs as State Space Models F 1789

of the series and component forecasts can have infinite variances (or zero precision). The forecasts that have infinite variance are set to missing. The same situation can occur if the specified model contains components that are essentially multicollinear. In these situations no residual analysis is possible; in particular, no residuals-based goodness-of-fit statistics are produced. The log likelihood of the sample (L1 ), which takes account of this diffuse initialization step, is computed by using the one-step-ahead series forecasts as follows: .n

L1 .y1 ; : : : ; yn / D

d/ 2

I

log 2

1X wt 2 t D1

n 1 X 2 .log Ft C t / 2 Ft t DI C1

where d is the number of diffuse elements in the initial state ˛1 , t D yt ahead residuals, and wt

D log F1t D log Ft C

Zt ˛O t are the one-step-

if F1t > 0 t2

if F1t D 0

Ft

If $y_t$ is missing at some time $t$, then the corresponding summand in the log likelihood expression is deleted, and the constant term is adjusted suitably. Moreover, if the initialization step does not complete, that is, if $I$ exceeds the sample size, then the value of $d$ is reduced to the number of diffuse states that are successfully initialized. The portion of the log likelihood that corresponds to the post-initialization period is called the nondiffuse log likelihood ($L_0$). The nondiffuse log likelihood is given by

$$L_0(y_1, \ldots, y_n) = -\frac{1}{2}\sum_{t=I+1}^{n}\left(\log F_t + \frac{\nu_t^2}{F_t}\right)$$

In the case of the UCMs considered in PROC UCM, it often happens that the diffuse part of the likelihood, $\sum_{t=1}^{I} w_t$, does not depend on the model parameters, and in these cases the maximization of the nondiffuse and diffuse likelihoods is equivalent. However, in some cases, such as when the model consists of dependent lags, the diffuse part does depend on the model parameters. In these cases the maximization of the diffuse and nondiffuse likelihood can produce different parameter estimates.

In some situations it is convenient to reparameterize the nondiffuse initial state covariance $P_*$ as $\sigma^2 P_*$ and the state noise covariance $Q_t$ as $\sigma^2 Q_t$ for some common scalar parameter $\sigma^2$. In this case the preceding log-likelihood expression, up to a constant, can be written as

$$L_\infty(y_1, \ldots, y_n) = -\frac{1}{2}\sum_{t=1}^{I} w_t - \frac{1}{2}\sum_{t=I+1}^{n}\log F_t - \frac{1}{2\sigma^2}\sum_{t=I+1}^{n}\frac{\nu_t^2}{F_t} - \frac{(n-d)}{2}\log \sigma^2$$

Solving analytically for the optimum, the maximum likelihood estimate of $\sigma^2$ can be shown to be

$$\hat{\sigma}^2 = \frac{1}{(n-d)}\sum_{t=I+1}^{n}\frac{\nu_t^2}{F_t}$$


When this expression of $\sigma^2$ is substituted back into the likelihood formula, an expression called the profile likelihood ($L_{profile}$) of the data is obtained:

$$-2 L_{profile}(y_1, \ldots, y_n) = \sum_{t=1}^{I} w_t + \sum_{t=I+1}^{n}\log F_t + (n-d)\log\left(\sum_{t=I+1}^{n}\frac{\nu_t^2}{F_t}\right)$$

In some situations the parameter estimation is done by optimizing the profile likelihood (see the section "Parameter Estimation by Profile Likelihood Optimization" on page 1798 and the PROFILE option in the ESTIMATE statement).

In the remainder of this section the state space formulation of UCMs is further explained by using some particular UCMs as examples. The examples show that the state space formulation of the UCMs depends on the components in the model in a simple fashion; for example, the system matrix $T$ is usually a block diagonal matrix with blocks that correspond to the components in the model. The only exception to this pattern is the UCMs that consist of the lags of the dependent variable. This case is considered at the end of the section. In what follows, $\mathrm{Diag}[a, b, \ldots]$ denotes a diagonal matrix with diagonal entries $[a, b, \ldots]$, and the transpose of a matrix $T$ is denoted as $T'$.

Local Level Model

Recall that the dynamics of a local level model are

$$y_t = \mu_t + \epsilon_t$$
$$\mu_t = \mu_{t-1} + \beta_{t-1} + \eta_t$$
$$\beta_t = \beta_{t-1} + \xi_t$$

Here $y_t$ is the response series and $\epsilon_t$, $\eta_t$, and $\xi_t$ are independent, zero-mean Gaussian disturbance sequences with variances $\sigma_\epsilon^2$, $\sigma_\eta^2$, and $\sigma_\xi^2$, respectively. This model can be formulated as a state space model where the state vector $\alpha_t = [\,\epsilon_t\ \mu_t\ \beta_t\,]'$ and the state noise $\zeta_t = [\,\epsilon_t\ \eta_t\ \xi_t\,]'$. Note that the elements of the state vector are precisely the unobserved components in the model. The system matrices $T$ and $Z$ and the noise covariance $Q$ corresponding to this choice of state and state noise vectors can be seen to be time invariant and are given by

$$Z = [\,1\ 1\ 0\,], \quad T = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}, \quad Q = \mathrm{Diag}\left[\sigma_\epsilon^2,\ \sigma_\eta^2,\ \sigma_\xi^2\right]$$

The distribution of the initial state vector $\alpha_1$ is diffuse, with $P_* = \mathrm{Diag}\left[\sigma_\epsilon^2, 0, 0\right]$ and $P_\infty = \mathrm{Diag}[0, 1, 1]$. The parameter vector $\theta$ consists of all the disturbance variances; that is, $\theta = (\sigma_\epsilon^2, \sigma_\eta^2, \sigma_\xi^2)$.
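This model maps directly onto PROC UCM component statements, each of which contributes its block to the state vector. The following is a minimal sketch; the data set work.series and the response variable y are hypothetical placeholders:

proc ucm data=work.series;
   model y;
   irregular;   /* the irregular component epsilon */
   level;       /* the level component mu */
   slope;       /* the slope component beta */
   estimate;
run;

The three disturbance variances that make up the parameter vector above are the quantities estimated by the ESTIMATE statement.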

Basic Structural Model

The basic structural model (BSM) is obtained by adding a seasonal component, $\gamma_t$, to the local level model. In order to economize on space, the state space formulation of a BSM with a relatively short season length, season length = 4 (quarterly seasonality), is considered here. The pattern for longer season lengths such as 12 (monthly) and 52 (weekly) is easy to see.

Let us first consider the dummy form of seasonality. In this case the state and state noise vectors are $\alpha_t = [\,\epsilon_t\ \mu_t\ \beta_t\ \gamma_{1,t}\ \gamma_{2,t}\ \gamma_{3,t}\,]'$ and $\zeta_t = [\,\epsilon_t\ \eta_t\ \xi_t\ \omega_t\ 0\ 0\,]'$, respectively. The first three elements of the state vector are the irregular, level, and slope components, respectively. The remaining elements, $\gamma_{i,t}$, are lagged versions of the seasonal component $\gamma_t$: $\gamma_{1,t}$ corresponds to lag zero (that is, the same as $\gamma_t$), $\gamma_{2,t}$ to lag 1, and $\gamma_{3,t}$ to lag 2. The system matrices are

$$Z = [\,1\ 1\ 0\ 1\ 0\ 0\,], \quad T = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & -1 & -1 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix}$$

and $Q = \mathrm{Diag}\left[\sigma_\epsilon^2,\ \sigma_\eta^2,\ \sigma_\xi^2,\ \sigma_\omega^2,\ 0,\ 0\right]$. The distribution of the initial state vector $\alpha_1$ is diffuse, with $P_* = \mathrm{Diag}\left[\sigma_\epsilon^2, 0, 0, 0, 0, 0\right]$ and $P_\infty = \mathrm{Diag}[0, 1, 1, 1, 1, 1]$.

In the case of the trigonometric type of seasonality, $\alpha_t = [\,\epsilon_t\ \mu_t\ \beta_t\ \gamma_{1,t}\ \gamma^*_{1,t}\ \gamma_{2,t}\,]'$ and $\zeta_t = [\,\epsilon_t\ \eta_t\ \xi_t\ \omega_{1,t}\ \omega^*_{1,t}\ \omega_{2,t}\,]'$. The disturbance sequences $\omega_{j,t}$, $1 \le j \le 2$, and $\omega^*_{1,t}$ are independent, zero-mean, Gaussian sequences with variance $\sigma_\omega^2$. The system matrices are

$$Z = [\,1\ 1\ 0\ 1\ 0\ 1\,], \quad T = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & \cos\lambda_1 & \sin\lambda_1 & 0 \\ 0 & 0 & 0 & -\sin\lambda_1 & \cos\lambda_1 & 0 \\ 0 & 0 & 0 & 0 & 0 & \cos\lambda_2 \end{bmatrix}$$

and $Q = \mathrm{Diag}\left[\sigma_\epsilon^2,\ \sigma_\eta^2,\ \sigma_\xi^2,\ \sigma_\omega^2,\ \sigma_\omega^2,\ \sigma_\omega^2\right]$. Here $\lambda_j = (2\pi j)/4$. The distribution of the initial state vector $\alpha_1$ is diffuse, with $P_* = \mathrm{Diag}\left[\sigma_\epsilon^2, 0, 0, 0, 0, 0\right]$ and $P_\infty = \mathrm{Diag}[0, 1, 1, 1, 1, 1]$. The parameter vector in both the cases is $\theta = (\sigma_\epsilon^2, \sigma_\eta^2, \sigma_\xi^2, \sigma_\omega^2)$.
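The corresponding BSM is obtained by adding a SEASON statement to the local level specification shown earlier. A sketch for the quarterly case, again with hypothetical data set and variable names:

proc ucm data=work.quarterly;
   model y;
   irregular;
   level;
   slope;
   season length=4 type=dummy;   /* use type=trig for the trigonometric form */
   estimate;
run;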

Seasons with Blocked Seasonal Values

Block seasonals are special seasonal components that impose a special block structure on the seasonal effects. Let us consider a BSM with monthly seasonality that has a quarterly block structure; that is, months within the same quarter are assumed to have identical effects except for some random perturbation. Such a seasonal component is a block seasonal with block size $m$ equal to 3 and the number of blocks $k$ equal to 4. The state space structure for such a model with dummy-type seasonality is as follows: The state and state noise vectors are $\alpha_t = [\,\epsilon_t\ \mu_t\ \beta_t\ \gamma_{1,t}\ \gamma_{2,t}\ \gamma_{3,t}\,]'$ and $\zeta_t = [\,\epsilon_t\ \eta_t\ \xi_t\ \omega_t\ 0\ 0\,]'$, respectively. The first three elements of the state vector are the irregular, level, and slope components, respectively. The remaining elements, $\gamma_{i,t}$, are lagged versions of the seasonal component $\gamma_t$: $\gamma_{1,t}$ corresponds to lag zero (that is, the same as $\gamma_t$), $\gamma_{2,t}$ to lag $m$, and $\gamma_{3,t}$ to lag $2m$. All the system matrices are time invariant, except the matrix $T$. They can be seen to be $Z = [\,1\ 1\ 0\ 1\ 0\ 0\,]$, $Q = \mathrm{Diag}\left[\sigma_\epsilon^2,\ \sigma_\eta^2,\ \sigma_\xi^2,\ \sigma_\omega^2,\ 0,\ 0\right]$, and

$$T_t = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & -1 & -1 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix}$$

when $t$ is a multiple of the block size $m$, and

$$T_t = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$

otherwise. Note that when $t$ is not a multiple of $m$, the portion of the $T_t$ matrix corresponding to the seasonal is identity. The distribution of the initial state vector $\alpha_1$ is diffuse, with $P_* = \mathrm{Diag}\left[\sigma_\epsilon^2, 0, 0, 0, 0, 0\right]$ and $P_\infty = \mathrm{Diag}[0, 1, 1, 1, 1, 1]$.

Similarly, in the case of the trigonometric form of seasonality, $\alpha_t = [\,\epsilon_t\ \mu_t\ \beta_t\ \gamma_{1,t}\ \gamma^*_{1,t}\ \gamma_{2,t}\,]'$ and $\zeta_t = [\,\epsilon_t\ \eta_t\ \xi_t\ \omega_{1,t}\ \omega^*_{1,t}\ \omega_{2,t}\,]'$. The disturbance sequences $\omega_{j,t}$, $1 \le j \le 2$, and $\omega^*_{1,t}$ are independent, zero-mean, Gaussian sequences with variance $\sigma_\omega^2$. $Z = [\,1\ 1\ 0\ 1\ 0\ 1\,]$, $Q = \mathrm{Diag}\left[\sigma_\epsilon^2,\ \sigma_\eta^2,\ \sigma_\xi^2,\ \sigma_\omega^2,\ \sigma_\omega^2,\ \sigma_\omega^2\right]$, and

$$T_t = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & \cos\lambda_1 & \sin\lambda_1 & 0 \\ 0 & 0 & 0 & -\sin\lambda_1 & \cos\lambda_1 & 0 \\ 0 & 0 & 0 & 0 & 0 & \cos\lambda_2 \end{bmatrix}$$

when $t$ is a multiple of the block size $m$, and $T_t$ has the identity-seasonal form shown above otherwise. As before, when $t$ is not a multiple of $m$, the portion of the $T_t$ matrix corresponding to the seasonal is identity. Here $\lambda_j = (2\pi j)/4$. The distribution of the initial state vector $\alpha_1$ is diffuse, with $P_* = \mathrm{Diag}\left[\sigma_\epsilon^2, 0, 0, 0, 0, 0\right]$ and $P_\infty = \mathrm{Diag}[0, 1, 1, 1, 1, 1]$. The parameter vector in both the cases is $\theta = (\sigma_\epsilon^2, \sigma_\eta^2, \sigma_\xi^2, \sigma_\omega^2)$.


Cycles and Autoregression

The preceding examples have illustrated how to build a state space model corresponding to a UCM that includes components such as irregular, trend, and seasonal. There you can see that the state vector and the system matrices have a simple block structure with blocks corresponding to the components in the model. Therefore, here only a simple model consisting of a single cycle and an irregular component is considered. The state space form for more complex UCMs consisting of multiple cycles and other components can be easily deduced from this example.

Recall that a stochastic cycle $\psi_t$ with frequency $\lambda$, $0 < \lambda < \pi$, and damping coefficient $\rho$ can be modeled as

$$\begin{bmatrix} \psi_t \\ \psi^*_t \end{bmatrix} = \rho \begin{bmatrix} \cos\lambda & \sin\lambda \\ -\sin\lambda & \cos\lambda \end{bmatrix} \begin{bmatrix} \psi_{t-1} \\ \psi^*_{t-1} \end{bmatrix} + \begin{bmatrix} \nu_t \\ \nu^*_t \end{bmatrix}$$

where $\nu_t$ and $\nu^*_t$ are independent, zero-mean, Gaussian disturbances with variance $\sigma_\nu^2$. In what follows, a state space form for a model consisting of such a stochastic cycle and an irregular component is given.

The state vector $\alpha_t = [\,\epsilon_t\ \psi_t\ \psi^*_t\,]'$, and the state noise vector $\zeta_t = [\,\epsilon_t\ \nu_t\ \nu^*_t\,]'$. The system matrices are

$$Z = [\,1\ 1\ 0\,], \quad T = \begin{bmatrix} 0 & 0 & 0 \\ 0 & \rho\cos\lambda & \rho\sin\lambda \\ 0 & -\rho\sin\lambda & \rho\cos\lambda \end{bmatrix}, \quad Q = \mathrm{Diag}\left[\sigma_\epsilon^2,\ \sigma_\nu^2,\ \sigma_\nu^2\right]$$

The distribution of the initial state vector $\alpha_1$ is proper, with $P = \mathrm{Diag}\left[\sigma_\epsilon^2,\ \sigma_\psi^2,\ \sigma_\psi^2\right]$, where $\sigma_\psi^2 = \sigma_\nu^2 (1 - \rho^2)^{-1}$. The parameter vector $\theta = (\sigma_\epsilon^2, \rho, \lambda, \sigma_\nu^2)$.

An autoregression $r_t$ can be considered as a special case of a cycle with frequency $\lambda$ equal to 0 or $\pi$. In this case the equation for $\psi^*_t$ is not needed. Therefore, for a UCM consisting of an autoregressive component and an irregular component, the state space model simplifies to the following form.

The state vector $\alpha_t = [\,\epsilon_t\ r_t\,]'$, and the state noise vector $\zeta_t = [\,\epsilon_t\ \nu_t\,]'$. The system matrices are

$$Z = [\,1\ 1\,], \quad T = \begin{bmatrix} 0 & 0 \\ 0 & \rho \end{bmatrix}, \quad Q = \mathrm{Diag}\left[\sigma_\epsilon^2,\ \sigma_\nu^2\right]$$

The distribution of the initial state vector $\alpha_1$ is proper, with $P = \mathrm{Diag}\left[\sigma_\epsilon^2,\ \sigma_r^2\right]$, where $\sigma_r^2 = \sigma_\nu^2 (1 - \rho^2)^{-1}$. The parameter vector $\theta = (\sigma_\epsilon^2, \rho, \sigma_\nu^2)$.
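The following sketch fits the cycle-plus-irregular model just described; replacing the CYCLE statement with an AUTOREG statement gives the autoregression model. The data set and variable names are hypothetical placeholders:

proc ucm data=work.series;
   model y;
   irregular;
   cycle;      /* rho, lambda, and the cycle disturbance variance are estimated */
   estimate;
run;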

Incorporating Predictors of Different Kinds

In the UCM procedure, predictors can be incorporated in a UCM in a variety of ways: simple time-invariant linear predictors are specified in the MODEL statement, predictors with time-varying coefficients can be specified in the RANDOMREG statement, and predictors that have a nonlinear relationship with the response variable can be specified in the SPLINEREG statement. As with earlier examples, how to obtain a state space form of a UCM consisting of such a variety of predictors is illustrated using a simple special case. Consider a random walk trend model with predictors $x$, $u_1$, $u_2$, and $v$. Let us assume that $x$ is a simple regressor specified in the MODEL statement, $u_1$ and $u_2$ are random regressors with time-varying regression coefficients that are specified in the same RANDOMREG statement, and $v$ is a nonlinear regressor specified on a SPLINEREG statement. Let us further assume that the spline associated with $v$ has degree one and is based on two internal knots. As explained in the section "SPLINEREG Statement" on page 1778, using $v$ is equivalent to using $(nknots + degree) = (2 + 1) = 3$ derived (random) regressors: say, $s_1, s_2, s_3$. In all there are $(1 + 2 + 3) = 6$ regressors, the first one being a simple regressor and the others being time-varying coefficient regressors. The time-varying regressors are in two groups, the first consisting of $u_1$ and $u_2$ and the other consisting of $s_1$, $s_2$, and $s_3$. The dynamics of this model are as follows (the coefficient and disturbance symbols below are chosen for readability):

$$y_t = \mu_t + \beta x_t + \kappa_{1t} u_{1t} + \kappa_{2t} u_{2t} + \sum_{i=1}^{3} \xi_{it} s_{it} + \epsilon_t$$
$$\mu_t = \mu_{t-1} + \eta_t$$
$$\kappa_{1t} = \kappa_{1(t-1)} + \nu_{1t}$$
$$\kappa_{2t} = \kappa_{2(t-1)} + \nu_{2t}$$
$$\xi_{1t} = \xi_{1(t-1)} + \tau_{1t}$$
$$\xi_{2t} = \xi_{2(t-1)} + \tau_{2t}$$
$$\xi_{3t} = \xi_{3(t-1)} + \tau_{3t}$$

All the disturbances $\epsilon_t$, $\eta_t$, $\nu_{1t}$, $\nu_{2t}$, $\tau_{1t}$, $\tau_{2t}$, and $\tau_{3t}$ are independent, zero-mean, Gaussian variables, where $\nu_{1t}, \nu_{2t}$ share a common variance parameter $\sigma_\nu^2$ and $\tau_{1t}, \tau_{2t}, \tau_{3t}$ share a common variance $\sigma_\tau^2$. These dynamics can be captured in the state space form by taking state $\alpha_t = [\,\epsilon_t\ \mu_t\ \beta\ \kappa_{1t}\ \kappa_{2t}\ \xi_{1t}\ \xi_{2t}\ \xi_{3t}\,]'$, state disturbance $\zeta_t = [\,\epsilon_t\ \eta_t\ 0\ \nu_{1t}\ \nu_{2t}\ \tau_{1t}\ \tau_{2t}\ \tau_{3t}\,]'$, and the system matrices

$$Z_t = [\,1\ 1\ x_t\ u_{1t}\ u_{2t}\ s_{1t}\ s_{2t}\ s_{3t}\,]$$
$$T = \mathrm{Diag}[0, 1, 1, 1, 1, 1, 1, 1]$$
$$Q = \mathrm{Diag}\left[\sigma_\epsilon^2,\ \sigma_\eta^2,\ 0,\ \sigma_\nu^2,\ \sigma_\nu^2,\ \sigma_\tau^2,\ \sigma_\tau^2,\ \sigma_\tau^2\right]$$

Note that the regression coefficients are elements of the state vector and that the system vector $Z_t$ is not time invariant. The distribution of the initial state vector $\alpha_1$ is diffuse, with $P_* = \mathrm{Diag}\left[\sigma_\epsilon^2, 0, 0, 0, 0, 0, 0, 0\right]$ and $P_\infty = \mathrm{Diag}[0, 1, 1, 1, 1, 1, 1, 1]$. The parameters of this model are the disturbance variances $\sigma_\epsilon^2$, $\sigma_\eta^2$, $\sigma_\nu^2$, and $\sigma_\tau^2$, which get estimated by maximizing the likelihood. The regression coefficients, time-invariant $\beta$ and time-varying $\kappa_{1t}, \kappa_{2t}, \xi_{1t}, \xi_{2t}$, and $\xi_{3t}$, get implicitly estimated during the state estimation (smoothing).
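The model just described corresponds to a specification along the following lines. This is a sketch: the data set and variable names are hypothetical, and the DEGREE= and NKNOTS= options are assumed forms of the SPLINEREG statement options documented in the syntax section:

proc ucm data=work.series;
   model y = x;                       /* x: simple time-invariant regressor */
   irregular;
   level;                             /* random walk trend */
   randomreg u1 u2;                   /* time-varying coefficients, common variance */
   splinereg v / degree=1 nknots=2;   /* three derived random regressors */
   estimate;
run;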

Reporting Parameter Estimates for Random Regressors

If the random walk disturbance variance associated with a random regressor is held fixed at zero, then its coefficient is no longer time-varying. In the UCM procedure the random regressor parameter estimates are reported differently if the random walk disturbance variance associated with a random regressor is held fixed at zero. The following points explain how the parameter estimates are reported in the parameter estimates table and in the OUTEST= data set.


• If the random walk disturbance variance associated with a random regressor is not held fixed, then its estimate is reported in the parameter estimates table and in the OUTEST= data set.

• If more than one random regressor is specified in a RANDOMREG statement, then the first regressor in the list is used as a representative of the list while reporting the corresponding common variance parameter estimate.

• If the random walk disturbance variance is held fixed at zero, then the parameter estimates table and the OUTEST= data set contain the corresponding regression parameter estimate rather than the variance parameter estimate.

• Similar considerations apply in the case of the derived random regressors associated with a spline-regressor.

ARMA Irregular Component (Experimental)

The state space form for the irregular component that follows an ARMA(p,q)(P,Q)_s model is described in this section. The notation for ARMA models is explained in the IRREGULAR statement. A number of alternate state space forms are possible in this case; the one given here is based on Jones (1980).

With a slight abuse of notation, let $p^* = p + sP$ denote the effective autoregressive order and $q^* = q + sQ$ denote the effective moving-average order of the model. Similarly, let $\phi$ be the effective autoregressive polynomial and $\theta$ be the effective moving-average polynomial in the backshift operator with coefficients $\phi_1, \ldots, \phi_{p^*}$ and $\theta_1, \ldots, \theta_{q^*}$, obtained by multiplying the respective nonseasonal and seasonal factors. Then, a random sequence $\epsilon_t$ that follows an ARMA(p,q)(P,Q)_s model with a white noise sequence $a_t$ has a state space form with state vector of size $m = \max(p^*, q^* + 1)$. The system matrices, which are time invariant, are as follows: $Z = [\,1\ 0\ \ldots\ 0\,]$. The state transition matrix $T$, in a blocked form, is given by

$$T = \begin{bmatrix} 0 & I_{m-1} \\ \phi_m & \phi_{m-1}\ \cdots\ \phi_1 \end{bmatrix}$$

where $\phi_i = 0$ if $i > p^*$ and $I_{m-1}$ is an $(m-1)$ dimensional identity matrix. The covariance of the state disturbance matrix $Q = \sigma^2 \psi \psi'$, where $\sigma^2$ is the variance of the white noise sequence $a_t$ and the vector $\psi = [\,\psi_0\ \ldots\ \psi_{m-1}\,]'$ contains the first $m$ values of the impulse response function, that is, the first $m$ coefficients in the expansion of the ratio $\theta/\phi$. Since $\epsilon_t$ is a stationary sequence, the initial state is nondiffuse and $P_\infty = 0$. The description of $P_*$, the covariance matrix of the initial state, is a little involved; the details are given in Jones (1980).
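As a rough illustration, an ARMA irregular component might be requested through the ARMA options of the IRREGULAR statement; the P= and Q= options used below are assumptions that should be checked against the IRREGULAR statement syntax, and the data set and variable names are hypothetical:

proc ucm data=work.series;
   model y;
   irregular p=1 q=1;   /* ARMA(1,1) irregular component */
   level;
   estimate;
run;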

Models with Dependent Lags

The state space form of a UCM consisting of the lags of the dependent variable is quite different from the state space forms considered so far. Let us consider an example to illustrate this situation. Consider a model that has a random walk trend, two simple time-invariant regressors, and that also includes a few, say $k$, lags of the dependent variable. That is,

$$y_t = \sum_{i=1}^{k} \phi_i y_{t-i} + \mu_t + \beta_1 x_{1t} + \beta_2 x_{2t} + \epsilon_t$$
$$\mu_t = \mu_{t-1} + \eta_t$$

The state space form of this augmented model can be described in terms of the state space form of a model that has a random walk trend with two simple time-invariant regressors. A superscript dagger ($\dagger$) has been added to distinguish the augmented model state space entities from the corresponding entities of the state space form of the random walk with predictors model. With this notation, the state vector of the augmented model is $\alpha^\dagger_t = [\,\alpha_t'\ y_t\ y_{t-1}\ \ldots\ y_{t-k+1}\,]'$ and the new state noise vector is $\zeta^\dagger_t = [\,\zeta_t'\ u_t\ 0\ \ldots\ 0\,]'$, where $u_t$ is the matrix product $Z_t \zeta_t$. Note that the length of the new state vector is $k + \mathrm{length}(\alpha_t) = k + 4$. The new system matrices, in block form, are

$$Z^\dagger_t = [\,0\ 0\ 0\ 0\ 1\ \ldots\ 0\,], \quad T^\dagger_t = \begin{bmatrix} T_t & 0 & \cdots & 0 \\ Z_{t+1} T_t & \phi_1 & \cdots & \phi_k \\ 0 & I_{k-1,k-1} & & 0 \end{bmatrix}$$

where $I_{k-1,k-1}$ is the $k-1$ dimensional identity matrix and

$$Q^\dagger_t = \begin{bmatrix} Q_t & Q_t Z_t' & 0 \\ Z_t Q_t & Z_t Q_t Z_t' & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

Note that the $T$ and $Q$ matrices of the random walk with predictors model are time invariant, and in the expressions above their time indices are kept because they illustrate the pattern for more general models. The initial state vector is diffuse, with

$$P^\dagger_* = \begin{bmatrix} P_* & 0 \\ 0 & 0 \end{bmatrix}, \quad P^\dagger_\infty = \begin{bmatrix} P_\infty & 0 \\ 0 & I_{k,k} \end{bmatrix}$$

The parameters of this model are the disturbance variances $\sigma_\epsilon^2$ and $\sigma_\eta^2$, the lag coefficients $\phi_1, \phi_2, \ldots, \phi_k$, and the regression coefficients $\beta_1$ and $\beta_2$. As before, the regression coefficients get estimated during the state smoothing, and the other parameters are estimated by maximizing the likelihood.
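In PROC UCM the lags of the dependent variable are introduced with the DEPLAG statement. A sketch of the model above with k = 2 lags follows; the parenthesized LAGS= value list is the assumed form of that option, and the data set and variable names are hypothetical:

proc ucm data=work.series;
   model y = x1 x2;
   irregular;
   level;
   deplag lags=(1 2);   /* phi1 and phi2 are estimated by nondiffuse likelihood */
   estimate;
run;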

Outlier Detection

In time series analysis it is often useful to detect changes over time in the characteristics of the response series. In the UCM procedure you can search for two types of changes, additive outliers (AO) and level shifts (LS). An additive outlier is an unusual value in the series, the cause of which might be a data recording error or a temporary shock to the series generation process. A level shift represents a permanent shift, either up or down, in the level of the series. You can control different aspects of the outlier search, such as the significance level of the reported outliers, by choosing different options in the OUTLIER statement. The search for AOs is done by default, whereas the CHECKBREAK option in the LEVEL statement must be used to turn on the search for LSs.

The outlier detection process implemented in the UCM procedure is based on de Jong and Penzer (1998). In this approach the fitted model is taken to be the null model, and the series values and level shifts that are not adequately accounted for by the null model are flagged as outliers. The unusualness of a response series value at a particular time point $t_0$, with respect to the fitted model, can be judged by estimating its value based on the rest of the data (that is, the series obtained by deleting the series value at $t_0$) and comparing the estimated value to the observed value. If the difference between the estimated and observed values is statistically significant, then such a value can be regarded as an AO. Note that this difference between the estimated and observed values is also the regression coefficient of a dummy regressor that takes the value 1.0 at $t_0$ and is 0.0 elsewhere, assuming such a regressor is added to the null model. In this way the series value at $t_0$ is regarded as an AO if the regression coefficient of this dummy regressor is significant. Similarly, you can say that a level shift has occurred at a time point $t_0$ if the regression coefficient of a regressor, which is 0.0 before $t_0$ and 1.0 at $t_0$ and thereafter, is statistically significant. De Jong and Penzer (1998) provide an efficient way to compute such AO and LS regression coefficients and their standard errors at all time points in the series. The outlier summary table, which is produced by default, simply lists the most statistically significant candidates among these.
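A sketch of an outlier search follows; the ALPHA= and MAXNUM= options shown are assumed forms of the OUTLIER statement options that control the significance level and the number of reported outliers, and the data set and variable names are hypothetical:

proc ucm data=work.series;
   model y;
   irregular;
   level checkbreak;              /* turn on the search for level shifts */
   estimate;
   outlier alpha=0.01 maxnum=5;
run;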

Missing Values

Embedded missing values in the dependent variable usually cause no problems in UCM modeling. However, no embedded missing values are allowed in the predictor variables. Certain patterns of missing values in the dependent variable can lead to failure of the initialization step of the diffuse Kalman filtering for some models. For example, if in a monthly series all values are missing for a certain month, say May, then a BSM with monthly seasonality leads to such a situation. However, in this case the initialization step can complete successfully for a nonseasonal model such as the local linear model.

Parameter Estimation

The parameter vector in a UCM consists of the variances of the disturbance terms of the unobserved components, the damping coefficients and frequencies in the cycles, the damping coefficient in the autoregression, the lag coefficients of the dependent lags, and the regression coefficients in the regression terms. The regression coefficients are always part of the state vector and are estimated by state smoothing. The remaining parameters are estimated by maximizing either the full diffuse likelihood or the nondiffuse likelihood. The decision to use the full diffuse likelihood or the nondiffuse likelihood depends on the presence or absence of the dependent lag coefficients in the parameter vector. If the parameter vector does not contain any dependent lag coefficients, then the full diffuse likelihood is used. If, on the other hand, the parameter vector does contain some dependent lag coefficients, then the parameters are estimated by maximizing the nondiffuse likelihood. The optimization of the full diffuse likelihood is often unstable when the parameter vector contains dependent lag coefficients. In this sense, when the parameter vector contains dependent lag coefficients, the parameter estimates are not true maximum likelihood estimates.

The optimization of the likelihood, either full or nondiffuse, is carried out using one of several nonlinear optimization algorithms. The user can control many aspects of the optimization process by using the NLOPTIONS statement and by providing the starting values of the parameters while specifying the corresponding components. However, in most cases the default settings work quite well. The optimization process is not guaranteed to converge to a maximum likelihood estimate. In most cases the difficulties in parameter estimation are associated with the specification of a model that is not appropriate for the series being modeled.

Parameter Estimation by Profile Likelihood Optimization

If a disturbance variance, such as the disturbance variance of the irregular component, is a part of the UCM and is a free parameter, then it can be profiled out of the likelihood. This means solving analytically for its optimum and plugging this expression back into the likelihood formula, giving rise to the so-called profile likelihood. The expression of the profile likelihood and the MLE of the profiled variance are given earlier in the section "The UCMs as State Space Models" on page 1787, where the computation of the likelihood of the state space model is also discussed.

In some situations the optimization of the profile likelihood can be more efficient because the number of parameters to optimize is reduced by one; however, for a variety of reasons such gains might not always be observed. Moreover, in theory the estimates obtained by optimizing the profile likelihood and the usual likelihood should be the same, but in practice this might not hold because of numerical rounding and other conditions.

In the UCM procedure, by default the usual likelihood is optimized if any of the disturbance variance parameters is held fixed to a nonzero value by using the NOEST option in the corresponding component statement. In other cases the decision whether to optimize the profile likelihood or the usual likelihood is based on several factors that are difficult to document. You can choose which likelihood to optimize during parameter estimation by specifying the PROFILE option for the profile likelihood optimization or the NOPROFILE option for the usual likelihood optimization. In the presence of the PROFILE option, the disturbance variance to profile is checked in a specific order, so that if the irregular component disturbance variance is free then it is always chosen. The situation in other cases is more complicated.
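The following sketch shows profile likelihood optimization combined with fixed variance values; as explained in the next subsection, under the PROFILE option the fixed values act as ratios relative to the profiled variance. Here the setup mimics the Hodrick-Prescott filter ratio 1/1600 of Example 29.5; the data set and variable names are hypothetical:

proc ucm data=work.gdp;
   model y;
   irregular;                       /* free variance; it is profiled out */
   level variance=0 noest;
   slope variance=0.000625 noest;   /* 0.000625 = 1/1600, a variance ratio */
   estimate profile;
run;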

Profiling in the Presence of Fixed Variance Parameters

Note that when the parameter estimation is done by optimizing the profile likelihood, the interpretation of the variance parameters that are held fixed to nonzero values changes. In the presence of the PROFILE option, the disturbance variances that are held at a fixed value by using the NOEST option in their respective component statements are interpreted as being restricted to be that fixed multiple of the profiled variance rather than being fixed at that nominal value. That is, implicitly, the parameter estimation is done under the restriction of holding the disturbance variance ratio fixed at a given value rather than the disturbance variance itself. See Example 29.5 for an example of this type of restriction to obtain a UC model that is equivalent to the famous Hodrick-Prescott filter.

t values

The t values reported in the table of parameter estimates are approximations whose accuracy depends on the validity of the model, the nature of the model, and the length of the observed series. The distributional properties of the maximum likelihood estimates of general unobserved components models have not been explored fully; therefore the probability values that correspond to a t distribution should be interpreted carefully, as they can be misleading. This is particularly true if the parameters in question are close to the boundary of the parameter space. The two sources by Harvey (1989, 2001) are good references for information about this topic. For some parameters, such as the cycle period, the reported t values are uninformative because comparison of the estimated parameter with zero is never needed. In such cases the t values and the corresponding probability values should be ignored.

Computational Issues

Convergence Problems

As explained in the section "Parameter Estimation" on page 1797, the model parameters are estimated by nonlinear optimization of the likelihood. This process is not guaranteed to succeed. For some data sets, the optimization algorithm can fail to converge. Nonconvergence can result from a number of causes, including flat or ridged likelihood surfaces and ill-conditioned data. It is also possible for the algorithm to converge to a point that is not the global optimum of the likelihood. If you experience convergence problems, the following points might be helpful:

• Data that are extremely large or extremely small can adversely affect results because of the internal tolerances used during the filtering steps of the likelihood calculation. Rescaling the data can improve stability.

• Examine your model for redundancies in the included components and regressors. If some of the included components or regressors are nearly collinear to each other, then the optimization process can become unstable.

• Experimenting with different options offered by the NLOPTIONS statement can help (see the sketch after this list).

• Lack of convergence can indicate model misspecification or a violation of the normality assumption.
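For instance, you might switch the optimization technique or raise the iteration limit; the TECH= and MAXITER= values below are illustrative choices among the standard nonlinear optimization options, and the data set and variable names are hypothetical:

proc ucm data=work.series;
   model y;
   irregular;
   level;
   slope;
   nloptions tech=dbldog maxiter=200;
   estimate;
run;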

Computer Resource Requirements

The computing resources required for the UCM procedure depend on several factors. The memory requirement for the procedure is largely dependent on the number of observations to be processed and the size of the state vector underlying the specified model. If $n$ denotes the sample size and $m$ denotes the size of the state vector, the memory requirement for the smoothing stage of the Kalman filter is of the order of $6 \times 8 \times n \times m^2$ bytes, ignoring the lower-order terms. If the smoothed component estimates are not needed then the memory requirement is of the order of $6 \times 8 \times (m^2 + n)$ bytes. Besides $m$ and $n$, the computing time for the parameter estimation depends on the type of components included in the model. For example, the parameter estimation is usually faster if the model parameter vector consists only of disturbance variances, because in this case there is an efficient way to compute the likelihood gradient.


Displayed Output

The default printed output produced by the UCM procedure is described in the following list:

• brief information about the input data set, including the data set name and label, and the name of the ID variable specified in the ID statement

• summary statistics for the data in the estimation and forecast spans, including the names of the variables in the model, their categorization as dependent or predictor, the index of the beginning and ending observations in the spans, the total number of observations and the number of missing observations, the smallest and largest measurements, and the mean and standard deviation

• information about the model parameters at the start of the model-fitting stage, including the fixed parameters in the model and the initial estimates of the free parameters in the model

• convergence status of the likelihood optimization process if any parameter estimation is done

• estimates of the free parameters at the end of the model-fitting stage, including the parameter estimates, their approximate standard errors, t statistics, and the approximate p-values

• the likelihood-based goodness-of-fit statistics, including the full likelihood, the portion of the likelihood corresponding to the diffuse initialization, the sum of squares of residuals normalized by their standard errors, and the information criteria: AIC, AICC, HQIC, BIC, and CAIC

• the fit statistics that are based on the raw residuals (observed minus predicted), including the mean squared error (MSE), the root mean squared error (RMSE), the mean absolute percentage error (MAPE), the maximum percentage error (MAXPE), the R square, the adjusted R square, the random walk R square, and Amemiya's R square

• the significance analysis of the components included in the model that is based on the estimation span

• brief information about the components included in the model

• additive outliers in the series, if any are detected

• the multistep series forecasts

• the post-sample-prediction analysis table that compares the multistep forecasts with the observed series values, if the BACK= option is used in the FORECAST statement

Statistical Graphics

This section provides information about the basic ODS statistical graphics produced by the UCM procedure. To request graphics with PROC UCM, you must first enable ODS Graphics by specifying the ODS GRAPHICS ON; statement. See Chapter 21, "Statistical Graphics Using ODS" (SAS/STAT User's Guide), for more information.


You can obtain most plots relevant to the specified model by using the global PLOTS= option in the PROC UCM statement. The plot of series forecasts in the forecast horizon is produced by default. You can further control the production of individual plots by using the PLOT= options in the different statements. The main types of plots available are as follows:

• Time series plots of the component estimates, either filtered or smoothed, can be requested by using the PLOT= option in the respective component statements. For example, the use of the PLOT=SMOOTH option in a CYCLE statement produces a plot of the smoothed estimate of that cycle.

• Residual plots for model diagnostics can be obtained by using the PLOT= option in the ESTIMATE statement.

• Plots of series forecasts and model decompositions can be obtained by using the PLOT= option in the FORECAST statement.

The following example is a simple illustration of the available plot options.

Analysis of Sunspot Data: Illustration of ODS Graphics

In this example a well-known series, Wolfer's sunspot data (Anderson 1971), is considered. The data consist of yearly sunspot numbers recorded from 1749 to 1924. These sunspot numbers are known to have a cyclical pattern with a period of about eleven years. The following DATA step creates the input data set:

data sunspot;
   input year wolfer @@;
   year = mdy(1,1, year);
   format year year4.;
datalines;
1749  809 1750  834 1751  477 1752  478 1753  307 1754  122 1755   96
1756  102 1757  324 1758  476 1759  540 1760  629 1761  859 1762  612

   ... more lines ...

The following statements specify a UCM that includes a cycle component and a random walk trend component:

ods graphics on;

proc ucm data=sunspot;
   id year interval=year;
   model wolfer;
   irregular;
   level;
   cycle plot=(filter smooth);
   estimate back=24 plot=(loess panel cusum wn);
   forecast back=24 lead=24 plot=(forecasts decomp);
run;

The following subsections explain the graphics produced by the above statements.

Component Plots

The plots in Figure 29.8 and Figure 29.9, produced by specifying PLOT=(FILTER SMOOTH) in the CYCLE statement, show the filtered and smoothed estimates, respectively, of the cycle component in the model. Note that the smoothed estimate appears smoother than the filtered estimate. This is always true because the filtered estimate of a component at time $t$ is based on the observations prior to time $t$; that is, it uses measurements from the first observation up to the $(t-1)$th observation. On the other hand, the corresponding smoothed estimate uses all the available observations, that is, all the measurements from the first observation to the last. This makes the smoothed estimate of the component more precise than the filtered estimate for the time points within the historical period. In the forecast horizon, both filtered and smoothed estimates are identical, being based on the same set of observations.

Figure 29.8 Sunspots Series: Filtered Cycle


Figure 29.9 Sunspots Series: Smoothed Cycle

Residual Diagnostics

If the fitted model is appropriate for the given data, then the corresponding one-step-ahead residuals should be approximately white—that is, uncorrelated—and approximately normal. Moreover, the residuals should not display any discernible pattern. You can detect departures from these conditions graphically. Different residual diagnostic plots can be requested by using the PLOT= option in the ESTIMATE statement. The normality can be checked by examining the histogram and the normal quantile plot of residuals. The whiteness can be checked by examining the ACF and PACF plots that show the sample autocorrelation and sample partial-autocorrelation at different lags. The diagnostic panel shown in Figure 29.10, produced by specifying PLOT=PANEL, contains these four plots.


Figure 29.10 Sunspots Series: Residual Diagnostics

The residual histogram and Q-Q plot show no serious violation of normality. The histogram appears reasonably symmetric and follows the overlaid normal density curve reasonably closely. Similarly, in the Q-Q plot the residuals follow the reference line fairly closely. The ACF and PACF plots also do not exhibit any violation of the whiteness assumption; the correlations at all nonzero lags seem to be insignificant.

The residual whiteness can also be formally tested by using the Ljung-Box portmanteau test. The plot in Figure 29.11, produced by specifying PLOT=WN, shows the p-values of the Ljung-Box test statistics at different lags. In these plots the p-values for the first few lags, equal to the number of estimated parameters in the model, are not shown because they are always missing. This portion of the plot is shaded blue to indicate this fact. In the case of this model, five parameters are estimated so the p-values for the first five lags are not shown. The p-values are displayed on a log scale in such a way that higher bars imply more extreme test statistics. In this plot some early p-values appear extreme. However, these p-values are based on large sample theory, which suggests that these statistics should be examined for lags larger than the square root of the sample size. In this example it means that the p-values for the first $\sqrt{154} \approx 12$ lags can be ignored. With this consideration, the plot shows no violation of whiteness since the p-values after the 12th lag do not appear extreme.


Figure 29.11 Sunspots Series: Ljung-Box Portmanteau Test

The plot in Figure 29.12, produced by specifying PLOT=LOESS, shows the residuals plotted against time with an overlaid LOESS curve. This plot is useful for checking whether any discernible pattern remains in the residuals. Here again, no significant pattern appears to be present.


Figure 29.12 Sunspots Series: Residual Loess Plot

The plot in Figure 29.13, produced by specifying PLOT=CUSUM, shows the cumulative residuals plotted against time. This plot is useful for checking for structural breaks. Here, there appears to be no evidence of a structural break since the cumulative residuals remain within the confidence band throughout the sample period. Similarly, you can request a plot of the squared cumulative residuals by specifying PLOT=CUSUMSQ.


Figure 29.13 Sunspots Series: CUSUM Plot

Brockwell and Davis (1991) can be consulted for additional information on diagnosing residuals. For more information on CUSUM and CUSUMSQ plots, you can consult Harvey (1989).

Forecast and Series Decomposition Plots

You can use the PLOT= option in the FORECAST statement to obtain the series forecast plot and the series decomposition plots. The series decomposition plots show the result of successively adding different components in the model starting with the trend component. The IRREGULAR component is left out of this process. The following two plots, produced by specifying PLOT=DECOMP, show the results of successive component addition for this example. The first plot, shown in Figure 29.14, shows the smoothed trend component and the second plot, shown in Figure 29.15, shows the sum of smoothed trend and cycle.


Figure 29.14 Sunspots Series: Smoothed Trend


Figure 29.15 Sunspots Series: Smoothed Trend plus Cycle

Finally, Figure 29.16 shows the forecast plot.


Figure 29.16 Sunspots Series: Series Forecasts

ODS Table Names

The UCM procedure assigns a name to each table it creates. You can use these names to reference the table when using the Output Delivery System (ODS) to select tables and create output data sets. These names are listed in Table 29.2.

Table 29.2  ODS Tables Produced by PROC UCM

ODS Table Name | Description | Statement | Option

Tables Summarizing the Estimation and Forecast Spans
EstimationSpan | Estimation span summary information | | default
ForecastSpan | Forecast span summary information | | default

Tables Related to Model Parameters
ConvergenceStatus | Convergence status of the estimation process | | default
FixedParameters | Fixed parameters in the model | | default
InitialParameters | Initial estimates of the free parameters | | default
ParameterEstimates | Final estimates of the free parameters | | default

Tables Related to Model Information and Diagnostics
BlockSeasonDescription | Information about the block seasonals in the model | | default
ComponentSignificance | Significance analysis of the components in the model | | default
CycleDescription | Information about the cycles in the model | | default
FitStatistics | Fit statistics based on the one-step-ahead predictions | | default
FitSummary | Likelihood-based fit statistics | | default
OutlierSummary | Summary table of the detected outliers | | default
SeasonDescription | Information about the seasonals in the model | | default
SeasonHarmonics | Summary of harmonics in a trigonometric seasonal component | SEASON | PRINT=HARMONICS
SplineSeasonDescription | Information about the spline-seasonals in the model | | default
TrendInformation | Summary information of the level and slope components | | default

Tables Related to Filtered Component Estimates
FilteredAutoReg | Filtered estimate of an autoreg component | AUTOREG | PRINT=FILTER
FilteredBlockSeason | Filtered estimate of a block seasonal component | BLOCKSEASON | PRINT=FILTER
FilteredCycle | Filtered estimate of a cycle component | CYCLE | PRINT=FILTER
FilteredIrregular | Filtered estimate of the irregular component | IRREGULAR | PRINT=FILTER
FilteredLevel | Filtered estimate of the level component | LEVEL | PRINT=FILTER
FilteredRandomReg | Filtered estimate of the time-varying random-regression coefficient | RANDOMREG | PRINT=FILTER
FilteredSeason | Filtered estimate of a seasonal component | SEASON | PRINT=FILTER
FilteredSlope | Filtered estimate of the slope component | SLOPE | PRINT=FILTER
FilteredSplineReg | Filtered estimate of the time-varying spline-regression coefficient | SPLINEREG | PRINT=FILTER
FilteredSplineSeason | Filtered estimate of a spline-seasonal component | SPLINESEASON | PRINT=FILTER

Tables Related to Smoothed Component Estimates
SmoothedAutoReg | Smoothed estimate of an autoreg component | AUTOREG | PRINT=SMOOTH
SmoothedBlockSeason | Smoothed estimate of a block seasonal component | BLOCKSEASON | PRINT=SMOOTH
SmoothedCycle | Smoothed estimate of the cycle component | CYCLE | PRINT=SMOOTH
SmoothedIrregular | Smoothed estimate of the irregular component | IRREGULAR | PRINT=SMOOTH
SmoothedLevel | Smoothed estimate of the level component | LEVEL | PRINT=SMOOTH
SmoothedRandomReg | Smoothed estimate of the time-varying random-regression coefficient | RANDOMREG | PRINT=SMOOTH
SmoothedSeason | Smoothed estimate of a seasonal component | SEASON | PRINT=SMOOTH
SmoothedSlope | Smoothed estimate of the slope component | SLOPE | PRINT=SMOOTH
SmoothedSplineReg | Smoothed estimate of the time-varying spline-regression coefficient | SPLINEREG | PRINT=SMOOTH
SmoothedSplineSeason | Smoothed estimate of a spline-seasonal component | SPLINESEASON | PRINT=SMOOTH

Tables Related to Series Decomposition and Forecasting
FilteredAllExceptIrreg | Filtered estimate of the sum of all components except the irregular component | FORECAST | PRINT=FDECOMP
FilteredTrend | Filtered estimate of trend | FORECAST | PRINT=FDECOMP
FilteredTrendReg | Filtered estimate of trend plus regression | FORECAST | PRINT=FDECOMP
FilteredTrendRegCyc | Filtered estimate of trend plus regression plus cycles and autoreg | FORECAST | PRINT=FDECOMP
Forecasts | Dependent series forecasts | FORECAST | default
PostSamplePrediction | Forecasting performance in the holdout period | FORECAST | BACK=
SmoothedAllExceptIrreg | Smoothed estimate of the sum of all components except the irregular component | FORECAST | PRINT=DECOMP
SmoothedTrend | Smoothed estimate of trend | FORECAST | PRINT=DECOMP
SmoothedTrendReg | Smoothed estimate of trend plus regression | FORECAST | PRINT=DECOMP
SmoothedTrendRegCyc | Smoothed estimate of trend plus regression plus cycles and autoreg | FORECAST | PRINT=DECOMP

NOTE: The tables are related to a single series within a BY group. In the case of models that contain multiple cycles, seasonal components, or block seasonal components, the corresponding component estimate tables are sequentially numbered. For example, if a model contains two cycles and a seasonal component and the PRINT=SMOOTH option is used for each of them, the ODS tables containing the smoothed estimates will be named SmoothedCycle1, SmoothedCycle2, and SmoothedSeason. Note that the seasonal table is not numbered because there is only one seasonal component.
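For example, the table names can be used in an ODS OUTPUT statement to capture the printed tables as data sets. A minimal sketch, with hypothetical data set and variable names:

ods output ParameterEstimates=parms FitStatistics=fitstats;

proc ucm data=work.series;
   model y;
   irregular;
   level;
   estimate;
run;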

ODS Graph Names

To request graphics with PROC UCM, you must first enable ODS Graphics by specifying the ODS GRAPHICS ON; statement. See Chapter 21, "Statistical Graphics Using ODS" (SAS/STAT User's Guide), for more information. You can reference every graph produced through ODS Graphics with a name. The names of the graphs that PROC UCM generates are listed in Table 29.3, along with the required statements and options.

Table 29.3  ODS Graphics Produced by PROC UCM

ODS Graph Name | Description | Statement | Option

Plots Related to Residual Analysis
ErrorACFPlot | Prediction error autocorrelation plot | ESTIMATE | PLOT=ACF
ErrorPACFPlot | Prediction error partial-autocorrelation plot | ESTIMATE | PLOT=PACF
ErrorHistogram | Prediction error histogram | ESTIMATE | PLOT=NORMAL
ErrorQQPlot | Prediction error normal quantile plot | ESTIMATE | PLOT=QQ
ErrorPlot | Plot of prediction errors | ESTIMATE | PLOT=RESIDUAL
ErrorWhiteNoiseLogProbPlot | Plot of p-values at different lags for the Ljung-Box portmanteau white noise test statistics | ESTIMATE | PLOT=WN
CUSUMPlot | Plot of cumulative residuals | ESTIMATE | PLOT=CUSUM
CUSUMSQPlot | Plot of cumulative squared residuals | ESTIMATE | PLOT=CUSUMSQ
ModelPlot | Plot of one-step-ahead forecasts in the estimation span | ESTIMATE | PLOT=MODEL
PanelResidualPlot | Panel of residual diagnostic plots | ESTIMATE | PLOT=PANEL
ResidualLoessPlot | Time series plot of residuals with superimposed LOESS smoother | ESTIMATE | PLOT=LOESS

Plots Related to Filtered Component Estimates
FilteredAutoregPlot | Plot of filtered autoreg component | AUTOREG | PLOT=FILTER
FilteredBlockSeasonPlot | Plot of filtered block season component | BLOCKSEASON | PLOT=FILTER
FilteredCyclePlot | Plot of filtered cycle component | CYCLE | PLOT=FILTER
FilteredIrregularPlot | Plot of filtered irregular component | IRREGULAR | PLOT=FILTER
FilteredLevelPlot | Plot of filtered level component | LEVEL | PLOT=FILTER
FilteredRandomRegPlot | Plot of filtered time-varying regression coefficient | RANDOMREG | PLOT=FILTER
FilteredSeasonPlot | Plot of filtered season component | SEASON | PLOT=FILTER
FilteredSlopePlot | Plot of filtered slope component | SLOPE | PLOT=FILTER
FilteredSplineRegPlot | Plot of filtered time-varying regression coefficient | SPLINEREG | PLOT=FILTER
FilteredSplineSeasonPlot | Plot of filtered spline-season component | SPLINESEASON | PLOT=FILTER
AnnualSeasonPlot | Plot of annual variation in the filtered season component | SEASON | PLOT=F_ANNUAL

Plots Related to Smoothed Component Estimates
SmoothedAutoregPlot | Plot of smoothed autoreg component | AUTOREG | PLOT=SMOOTH
SmoothedBlockSeasonPlot | Plot of smoothed block season component | BLOCKSEASON | PLOT=SMOOTH
SmoothedCyclePlot | Plot of smoothed cycle component | CYCLE | PLOT=SMOOTH
SmoothedIrregularPlot | Plot of smoothed irregular component | IRREGULAR | PLOT=SMOOTH
SmoothedLevelPlot | Plot of smoothed level component | LEVEL | PLOT=SMOOTH
SmoothedRandomRegPlot | Plot of smoothed time-varying regression coefficient | RANDOMREG | PLOT=SMOOTH
SmoothedSeasonPlot | Plot of smoothed season component | SEASON | PLOT=SMOOTH
SmoothedSlopePlot | Plot of smoothed slope component | SLOPE | PLOT=SMOOTH
SmoothedSplineRegPlot | Plot of smoothed time-varying regression coefficient | SPLINEREG | PLOT=SMOOTH
SmoothedSplineSeasonPlot | Plot of smoothed spline-season component | SPLINESEASON | PLOT=SMOOTH
AnnualSeasonPlot | Plot of annual variation in the smoothed season component | SEASON | PLOT=S_ANNUAL

Plots Related to Series Decomposition and Forecasting
ForecastsOnlyPlot | Series forecasts beyond the historical period | FORECAST | default
ForecastsPlot | One-step-ahead as well as multistep-ahead forecasts | FORECAST | PLOT=FORECASTS
FilteredAllExceptIrregPlot | Plot of the sum of all filtered components except the irregular component | FORECAST | PLOT=FDECOMP
FilteredTrendPlot | Plot of filtered trend | FORECAST | PLOT=FDECOMP
FilteredTrendRegPlot | Plot of filtered trend plus regression effects | FORECAST | PLOT=FDECOMP
FilteredTrendRegCycPlot | Plot of the sum of filtered trend, cycles, and regression effects | FORECAST | PLOT=FDECOMP
SmoothedAllExceptIrregPlot | Plot of the sum of all smoothed components except the irregular component | FORECAST | PLOT=DECOMP
SmoothedTrendPlot | Plot of smoothed trend | FORECAST | PLOT=TREND
SmoothedTrendRegPlot | Plot of smoothed trend plus regression effects | FORECAST | PLOT=DECOMP
SmoothedTrendRegCycPlot | Plot of the sum of smoothed trend, cycles, and regression effects | FORECAST | PLOT=DECOMP
FilteredAllExceptIrregVarPlot | Plot of the standard error of the sum of all filtered components except the irregular | FORECAST | PLOT=FDECOMPVAR
FilteredTrendVarPlot | Plot of the standard error of filtered trend | FORECAST | PLOT=FDECOMPVAR
FilteredTrendRegVarPlot | Plot of the standard error of filtered trend plus regression effects | FORECAST | PLOT=FDECOMPVAR
FilteredTrendRegCycVarPlot | Plot of the standard error of filtered trend, cycles, and regression effects | FORECAST | PLOT=FDECOMPVAR
SmoothedAllExceptIrregVarPlot | Plot of the standard error of the sum of all smoothed components except the irregular | FORECAST | PLOT=DECOMPVAR
SmoothedTrendVarPlot | Plot of the standard error of smoothed trend | FORECAST | PLOT=DECOMPVAR
SmoothedTrendRegVarPlot | Plot of the standard error of smoothed trend plus regression effects | FORECAST | PLOT=DECOMPVAR
SmoothedTrendRegCycVarPlot | Plot of the standard error of smoothed trend, cycles, and regression effects | FORECAST | PLOT=DECOMPVAR

OUTFOR= Data Set

You can use the OUTFOR= option in the FORECAST statement to store the series and component forecasts produced by the procedure. This data set contains the following columns:

• the BY variables

• the ID variable. If an ID variable is not specified, then a numerical variable, _ID_, is created that contains the observation numbers from the input data set.

• the dependent series and the predictor series

• FORECAST, a numerical variable containing the one-step-ahead predicted values and the multistep forecasts

• RESIDUAL, a numerical variable containing the difference between the actual and forecast values

• STD, a numerical variable containing the standard error of prediction

• LCL and UCL, numerical variables containing the lower and upper forecast confidence limits

• S_SERIES and VS_SERIES, numerical variables containing the smoothed values of the dependent series and their variances

• S_IRREG and VS_IRREG, numerical variables containing the smoothed values of the irregular component and their variances. These variables are present only if the model has an irregular component.

• F_LEVEL, VF_LEVEL, S_LEVEL, and VS_LEVEL, numerical variables containing the filtered and smoothed values of the level component and the respective variances. These variables are present only if the model has a level component.

• F_SLOPE, VF_SLOPE, S_SLOPE, and VS_SLOPE, numerical variables containing the filtered and smoothed values of the slope component and the respective variances. These variables are present only if the model has a slope component.

• F_AUTOREG, VF_AUTOREG, S_AUTOREG, and VS_AUTOREG, numerical variables containing the filtered and smoothed values of the autoreg component and the respective variances. These variables are present only if the model has an autoreg component.

• F_CYCLE, VF_CYCLE, S_CYCLE, and VS_CYCLE, numerical variables containing the filtered and smoothed values of the cycle component and the respective variances. If there are multiple cycles in the model, these variables are sequentially numbered as F_CYCLE1, F_CYCLE2, etc. These variables are present only if the model has at least one cycle component.

• F_SEASON, VF_SEASON, S_SEASON, and VS_SEASON, numerical variables containing the filtered and smoothed values of the season component and the respective variances. If there are multiple seasons in the model, these variables are sequentially numbered as F_SEASON1, F_SEASON2, etc. These variables are present only if the model has at least one season component.

• F_BLKSEAS, VF_BLKSEAS, S_BLKSEAS, and VS_BLKSEAS, numerical variables containing the filtered and smoothed values of the blockseason component and the respective variances. If there are multiple block seasons in the model, these variables are sequentially numbered as F_BLKSEAS1, F_BLKSEAS2, etc.

• F_SPLSEAS, VF_SPLSEAS, S_SPLSEAS, and VS_SPLSEAS, numerical variables containing the filtered and smoothed values of the splineseason component and the respective variances. If there are multiple spline seasons in the model, these variables are sequentially numbered as F_SPLSEAS1, F_SPLSEAS2, etc. These variables are present only if the model has at least one splineseason component.

• filtered and smoothed estimates, and their variances, of the time-varying regression coefficients of the variables specified in the RANDOMREG and SPLINEREG statements. A variable is not included if its coefficient is time-invariant, that is, if the associated disturbance variance is zero.

• S_TREG and VS_TREG, numerical variables containing the smoothed values of the level plus regression component and their variances. These variables are present only if the model has at least one predictor variable or has dependent lags.

• S_TREGCYC and VS_TREGCYC, numerical variables containing the smoothed values of the level plus regression plus cycle component and their variances. These variables are present only if the model has at least one cycle or an autoreg component.

• S_NOIRREG and VS_NOIRREG, numerical variables containing the smoothed values of the sum of all components except the irregular component and their variances. These variables are present only if the model has at least one seasonal or block seasonal component.
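A minimal sketch of producing and inspecting this data set, with hypothetical data set and variable names:

proc ucm data=work.series;
   id date interval=month;
   model y;
   irregular;
   level;
   estimate;
   forecast lead=12 outfor=ucmfor;
run;

proc print data=ucmfor(obs=5);
run;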

OUTEST= Data Set

You can use the OUTEST= option in the ESTIMATE statement to store the model parameters and the related estimation details. This data set contains the following columns:

• the BY variables

• COMPONENT, a character variable containing the name of the component corresponding to the parameter being described

• PARAMETER, a character variable containing the parameter name

• TYPE, a character variable indicating whether the parameter value was fixed by the user or estimated

• _STATUS_, a character variable indicating whether the parameter estimation process converged, failed, or encountered an error of some other kind

• ESTIMATE, a numerical variable containing the parameter estimate

• STD, a numerical variable containing the standard error of the parameter estimate. This has a missing value if the parameter value is fixed.

• TVALUE, a numerical variable containing the t statistic. This has a missing value if the parameter value is fixed.

• PVALUE, a numerical variable containing the p-value. This has a missing value if the parameter value is fixed.
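A minimal sketch of producing this data set, with hypothetical data set and variable names:

proc ucm data=work.series;
   model y;
   irregular;
   level;
   estimate outest=ucmest;
run;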

Statistics of Fit

This section explains the goodness-of-fit statistics reported to measure how well the specified model fits the data.

First the various statistics of fit that are computed using the prediction errors, y_t − ŷ_t, are considered. In these formulas, n is the number of nonmissing prediction errors and k is the number of fitted parameters in the model. Moreover, the sum of squared errors is SSE = Σ(y_t − ŷ_t)², and the total sum of squares for the series corrected for the mean is SST = Σ(y_t − ȳ)², where ȳ is the series mean; the sums are over all the nonmissing prediction errors.

Mean Squared Error
The mean squared prediction error, MSE = (1/n) SSE.

Root Mean Squared Error
The root mean square error, RMSE = √MSE.

Mean Absolute Percent Error
The mean absolute percent prediction error, MAPE = (100/n) Σ_{t=1}^{n} |(y_t − ŷ_t)/y_t|. The summation ignores observations where y_t = 0.

R-square
The R-square statistic, R² = 1 − SSE/SST. If the model fits the series badly, the model error sum of squares, SSE, might be larger than SST and the R-square statistic will be negative.

Adjusted R-square
The adjusted R-square statistic, 1 − ((n−1)/(n−k))(1 − R²).

Amemiya's Adjusted R-square
Amemiya's adjusted R-square, 1 − ((n+k)/(n−k))(1 − R²).

Random Walk R-square
The random walk R-square statistic (Harvey's R-square statistic that uses the random walk model for comparison), 1 − ((n−1)/n) SSE/RWSSE, where RWSSE = Σ_{t=2}^{n} (y_t − y_{t−1} − μ)² and μ = (1/(n−1)) Σ_{t=2}^{n} (y_t − y_{t−1}).

Maximum Percent Error
The largest percent prediction error, 100 max((y_t − ŷ_t)/y_t). In this computation the observations where y_t = 0 are ignored.
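As an illustrative sketch (not a feature of the procedure itself), several of these statistics can be computed directly from a data set of actuals and one-step-ahead predictions. The data set preds and its columns y and yhat are hypothetical; for simplicity, observations with y = 0 are excluded from all of the statistics, although the definitions above exclude them only from MAPE and the maximum percent error:

proc sql;
   create table fitstats as
   select count(*)                        as n,
          mean((y - yhat)**2)             as mse,      /* MSE = (1/n) SSE        */
          sqrt(calculated mse)            as rmse,     /* RMSE = sqrt(MSE)       */
          100 * mean(abs((y - yhat)/y))   as mape,     /* MAPE                   */
          1 - sum((y - yhat)**2) / css(y) as rsquare   /* R-square = 1 - SSE/SST */
   from preds
   where y is not null and yhat is not null and y ne 0;
quit;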

The likelihood-based fit statistics are reported separately (see the section "The UCMs as State Space Models" on page 1787). They include the full log likelihood (L_∞), the diffuse part of the log likelihood, the normalized residual sum of squares, and several information criteria: AIC, AICC, HQIC, BIC, and CAIC. Let q denote the number of estimated parameters, n be the number of nonmissing measurements in the estimation span, and d be the number of diffuse elements in the initial state vector that are successfully initialized during the Kalman filtering process. Moreover, let n* = (n − d). The reported information criteria, all in smaller-is-better form, are described in Table 29.4:

Table 29.4  Information Criteria

Criterion   Formula                             Reference
AIC         −2L_∞ + 2q                          Akaike (1974)
AICC        −2L_∞ + 2q n*/(n* − q − 1)          Hurvich and Tsai (1989), Burnham and Anderson (1998)
HQIC        −2L_∞ + 2q log log(n*)              Hannan and Quinn (1979)
BIC         −2L_∞ + q log(n*)                   Schwarz (1978)
CAIC        −2L_∞ + q (log(n*) + 1)             Bozdogan (1987)
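As a check on these formulas, the following sketch evaluates them in a DATA step with the values reported later in Output 29.1.6 (full log likelihood 180.63, q = 3, n* = 120 − 13 = 107); the computed values agree with that output:

data ic;
   L_inf = 180.63;        /* full log likelihood            */
   q     = 3;             /* number of estimated parameters */
   nstar = 120 - 13;      /* nstar = n - d                  */
   aic   = -2*L_inf + 2*q;                        /* -355.3 */
   aicc  = -2*L_inf + 2*q*nstar/(nstar - q - 1);  /* -355.0 */
   hqic  = -2*L_inf + 2*q*log(log(nstar));        /* -352.0 */
   bic   = -2*L_inf + q*log(nstar);               /* -347.2 */
   caic  = -2*L_inf + q*(log(nstar) + 1);         /* -344.2 */
run;

proc print data=ic;
run;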

Examples: UCM Procedure

Example 29.1: The Airline Series Revisited

The series in this example, the monthly airline passenger series, has already been discussed earlier; see the section "A Seasonal Series with Linear Trend" on page 1743. Recall that the series consists of monthly numbers of international airline travelers (from January 1949 to December 1960). Here additional output features of the UCM procedure are illustrated, such as how to use the ESTIMATE and FORECAST statements to limit the span of the data used in parameter estimation and forecasting.

The following statements fit a BSM to the logarithm of the airline passenger numbers. The disturbance variance for the slope component is held fixed at value 0; that is, the trend is locally linear with constant slope. In order to evaluate the performance of the fitted model on observed data, some of the observed data are withheld during parameter estimation and forecast computations. The observations in the last two years, 1959 and 1960, are not used in parameter estimation, while the observations in the last year, 1960, are not used in the forecasting computations. This is done using the BACK= option in the ESTIMATE and FORECAST statements. In addition, a panel of residual diagnostic plots is obtained using the PLOT=PANEL option in the ESTIMATE statement.

data seriesG;
   set sashelp.air;
   logair = log(air);
run;

proc ucm data=seriesG;
   id date interval=month;
   model logair;
   irregular;
   level;
   slope var=0 noest;
   season length=12 type=trig;
   estimate back=24 plot=panel;
   forecast back=12 lead=24 print=forecasts;
run;

The following tables display the summary of data used in estimation and forecasting (Output 29.1.1 and Output 29.1.2). These tables provide simple summary statistics for the estimation and forecast spans; they include useful information such as the beginning and ending dates of the span, the number of nonmissing values, etc.

Output 29.1.1 Observation Span Used in Parameter Estimation (partial output)

Variable   Type        First     Last      Nobs   Mean
logair     Dependent   JAN1949   DEC1958   120    5.43035

Output 29.1.2 Observation Span Used in Forecasting (partial output)

Variable   Type        First     Last      Nobs   Mean
logair     Dependent   JAN1949   DEC1959   132    5.48654

The following tables display the fixed parameters in the model, the preliminary estimates of the free parameters, and the final estimates of the free parameters (Output 29.1.3, Output 29.1.4, and Output 29.1.5).


Output 29.1.3 Fixed Parameters in the Model

The UCM Procedure
Fixed Parameters in the Model

Component   Parameter        Value
Slope       Error Variance   0

Output 29.1.4 Starting Values for the Parameters to Be Estimated

Preliminary Estimates of the Free Parameters

Component   Parameter        Estimate
Irregular   Error Variance   6.64120
Level       Error Variance   2.49045
Season      Error Variance   1.26676

Output 29.1.5 Maximum Likelihood Estimates of the Free Parameters

Final Estimates of the Free Parameters

Component   Parameter        Estimate     Approx Std Error   t Value   Approx Pr > |t|
Irregular   Error Variance   0.00018686   0.0001212          1.54      0.1233
Level       Error Variance   0.00040314   0.0001566          2.57      0.0100
Season      Error Variance   0.00000350   1.66319E-6         2.10      0.0354

Two types of goodness-of-fit statistics are reported after a model is fit to the series (see Output 29.1.6 and Output 29.1.7). The first type is the likelihood-based goodness-of-fit statistics, which include the full likelihood of the data, the diffuse portion of the likelihood (see the section "Details: UCM Procedure" on page 1781), and the information criteria. The second type of statistics is based on the raw residuals, residual = observed − predicted. If the model is nonstationary, then one-step-ahead predictions are not available for some initial observations, and the number of values used in computing these fit statistics will be different from the number used in computing the likelihood-based test statistics.


Output 29.1.6 Likelihood-Based Fit Statistics for the Airline Data

Likelihood Based Fit Statistics

Statistic                            Value
Full Log Likelihood                  180.63
Diffuse Part of Log Likelihood       -13.93
Non-Missing Observations Used        120
Estimated Parameters                 3
Initialized Diffuse State Elements   13
Normalized Residual Sum of Squares   107
AIC (smaller is better)              -355.3
BIC (smaller is better)              -347.2
AICC (smaller is better)             -355
HQIC (smaller is better)             -352
CAIC (smaller is better)             -344.2

Output 29.1.7 Residuals-Based Fit Statistics for the Airline Data

Fit Statistics Based on Residuals

Mean Squared Error               0.00156
Root Mean Squared Error          0.03944
Mean Absolute Percentage Error   0.57677
Maximum Percent Error            2.19396
R-Square                         0.98705
Adjusted R-Square                0.98680
Random Walk R-Square             0.86370
Amemiya's Adjusted R-Square      0.98630

Number of non-missing residuals used for computing the fit statistics = 107

The diagnostic plots based on the one-step-ahead residuals are shown in Output 29.1.8. The residual histogram and the Q-Q plot show no reasons to question the approximate normality of the residual distribution. The remaining plots check for the whiteness of the residuals. The sample correlation plots, the autocorrelation function (ACF) and the partial autocorrelation function (PACF), also do not show any significant violations of the whiteness of the residuals. Therefore, on the whole, the model seems to fit the data well.


Output 29.1.8 Residual Diagnostics for the Airline Series Using a BSM

The forecasts are given in Output 29.1.9. To save space, the upper and lower confidence limit columns are dropped from the output, and only the rows corresponding to the year 1960 are shown. Recall that the actual measurements in the years 1959 and 1960 were withheld during the parameter estimation, and the ones in 1960 were not used in the forecast computations.


Output 29.1.9 Forecasts for the Airline Data

Obs   date    Forecast   StdErr   logair   Residual
133   JAN60   6.050      0.038    6.033    -0.017
134   FEB60   5.996      0.044    5.969    -0.027
135   MAR60   6.156      0.049    6.038    -0.118
136   APR60   6.124      0.053    6.133     0.010
137   MAY60   6.168      0.058    6.157    -0.011
138   JUN60   6.303      0.061    6.282    -0.021
139   JUL60   6.435      0.065    6.433    -0.002
140   AUG60   6.450      0.068    6.407    -0.043
141   SEP60   6.265      0.071    6.230    -0.035
142   OCT60   6.138      0.073    6.133    -0.005
143   NOV60   6.015      0.075    5.966    -0.049
144   DEC60   6.121      0.077    6.068    -0.053

Output 29.1.10 shows the forecast plot. The forecasts in the year 1960 show that the model predictions were quite good.

Output 29.1.10 Forecast Plot of the Airline Series Using a BSM


Example 29.2: Variable Star Data

The series in this example is studied in detail in Bloomfield (2000). This series consists of brightness measurements (magnitude) of a variable star taken at midnight for 600 consecutive days. The data can be downloaded from a time series archive maintained by the University of York, England (http://www.york.ac.uk/depts/maths/data/ts/welcome.htm (series number 26)). The following DATA step statements read the data in a SAS data set.

data star;
   input magnitude @@;
   day = _n_;
datalines;
25 28 31 32 33 33 32 31 28 25 22 18
14 10  7  4  2  0  0  0  2  4  8 11
15 19 23 26 29 32 33 34 33 32 30 27
24 20 17 13 10  7  5  3  3  3  4  5
 7 10 13 16 19 22 24 26 27 28 29 28
27 25 24 21 19 17 15 13 12 11 11 10

   ... more lines ...

The following statements use the TIMESERIES procedure to get a time series plot of the series (see Output 29.2.1).

proc timeseries data=star plot=series;
   var magnitude;
run;


Output 29.2.1 Plot of Star Brightness on Successive Days

The plot clearly shows the cyclic nature of the series. Bloomfield shows that the series is very well explained by a model that includes two deterministic cycles that have periods 29.0003 and 24.0001 days, a constant term, and a simple error term. He also mentions the difficulty involved in estimating the periods from the data (see Bloomfield 2000, Chapter 3). In his case the cycle periods are estimated by least squares, and the sum of squares surface has multiple local optima and ridges. The following statements show how to use the UCM procedure to fit this two-cycle model to the series. The constant term in the model is specified by holding the variance parameter of the level component to zero.

proc ucm data=star;
   model magnitude;
   irregular;
   level var=0 noest;
   cycle;
   cycle;
   estimate;
run;


The final parameter estimates and the goodness-of-fit statistics are shown in Output 29.2.2 and Output 29.2.3, respectively. The model fit appears to be good.

Output 29.2.2 Two-Cycle Model: Parameter Estimates

The UCM Procedure
Final Estimates of the Free Parameters

Component   Parameter        Estimate     Approx Std Error   t Value   Approx Pr > |t|
Irregular   Error Variance   0.09257      0.0053845          17.19     <.0001
Cycle_1     Damping Factor   1.00000      1.81175E-7         5519514   <.0001
Cycle_1     Period           29.00036     0.0022709          12770.4   <.0001
Cycle_1     Error Variance   0.00000882   5.27213E-6         1.67      0.0949
Cycle_2     Damping Factor   1.00000      2.11939E-7         4718334   <.0001
Cycle_2     Period           24.00011     0.0019128          12547.2   <.0001
Cycle_2     Error Variance   0.00000535   3.56374E-6         1.50      0.1336

Syntax: VARMAX Procedure

PROC VARMAX options ;
   BY variables ;
   CAUSAL GROUP1=(variables) GROUP2=(variables) ;
   COINTEG RANK=number < H=(matrix) > < J=(matrix) > < EXOGENEITY > < NORMALIZE=variable > ;
   ID variable INTERVAL=value < ALIGN=value > ;
   MODEL dependents < = regressors > < , dependents < = regressors > . . . > < / options > ;
   GARCH options ;
   NLOPTIONS options ;
   OUTPUT < options > ;
   RESTRICT restrictions ;
   TEST restrictions ;


Functional Summary

The statements and options used with the VARMAX procedure are summarized in the following table:

Table 30.1  VARMAX Functional Summary

Description                                                 Statement   Option

Data Set Options
specify the input data set                                  VARMAX      DATA=
write parameter estimates to an output data set             VARMAX      OUTEST=
include covariances in the OUTEST= data set                 VARMAX      OUTCOV
write the diagnostic checking tests for a model and
   the cointegration test results to an output data set     VARMAX      OUTSTAT=
write actuals, predictions, residuals, and confidence
   limits to an output data set                             OUTPUT      OUT=
write the conditional covariance matrix to an output
   data set                                                 GARCH       OUTHT=

BY Groups
specify BY-group processing                                 BY

ID Variable
specify identifying variable                                ID
specify the time interval between observations              ID          INTERVAL=
control the alignment of SAS Date values                    ID          ALIGN=

Options to Control the Optimization Process
specify the optimization options                            NLOPTIONS

Printing Control Options
specify how many lags to print results                      MODEL       LAGMAX=
suppress the printed output                                 MODEL       NOPRINT
request all printing options                                MODEL       PRINTALL
request the printing format                                 MODEL       PRINTFORM=
control plots produced through ODS GRAPHICS                 VARMAX      PLOTS=

PRINT= Option
print the correlation matrix of parameter estimates         MODEL       CORRB
print the cross-correlation matrices of independent
   variables                                                MODEL       CORRX
print the cross-correlation matrices of dependent
   variables                                                MODEL       CORRY
print the covariance matrices of prediction errors          MODEL       COVPE
print the cross-covariance matrices of the independent
   variables                                                MODEL       COVX


Table 30.1  continued

Description                                                 Statement   Option

print the cross-covariance matrices of the dependent
   variables                                                MODEL       COVY
print the covariance matrix of parameter estimates          MODEL       COVB
print the decomposition of the prediction error
   covariance matrix                                        MODEL       DECOMPOSE
print the residual diagnostics                              MODEL       DIAGNOSE
print the contemporaneous relationships among the
   components of the vector time series                     MODEL       DYNAMIC
print the parameter estimates                               MODEL       ESTIMATES
print the infinite order AR representation                  MODEL       IARR
print the impulse response function                         MODEL       IMPULSE=
print the impulse response function in the transfer
   function                                                 MODEL       IMPULSX=
print the partial autoregressive coefficient matrices      MODEL       PARCOEF
print the partial canonical correlation matrices            MODEL       PCANCORR
print the partial correlation matrices                      MODEL       PCORR
print the eigenvalues of the companion matrix               MODEL       ROOTS
print the Yule-Walker estimates                             MODEL       YW

Model Estimation and Order Selection Options
center the dependent variables                              MODEL       CENTER
specify the degrees of differencing for the specified
   model variables                                          MODEL       DIF=
specify the degrees of differencing for all independent
   variables                                                MODEL       DIFX=
specify the degrees of differencing for all dependent
   variables                                                MODEL       DIFY=
specify the vector error correction model                   MODEL       ECM=
specify the estimation method                               MODEL       METHOD=
select the tentative order                                  MODEL       MINIC=
suppress the current values of independent variables        MODEL       NOCURRENTX
suppress the intercept parameters                           MODEL       NOINT
specify the number of seasonal periods                      MODEL       NSEASON=
specify the order of autoregressive polynomial              MODEL       P=
specify the Bayesian prior model                            MODEL       PRIOR=
specify the order of moving-average polynomial              MODEL       Q=
center the seasonal dummies                                 MODEL       SCENTER
specify the degree of time trend polynomial                 MODEL       TREND=
specify the denominator for error covariance matrix
   estimates                                                MODEL       VARDEF=
specify the lag order of independent variables              MODEL       XLAG=


Table 30.1  continued

Description                                                 Statement   Option

GARCH Related Options
specify the GARCH-type model                                GARCH       FORM=
specify the order of the GARCH polynomial                   GARCH       P=
specify the order of the ARCH polynomial                    GARCH       Q=

Cointegration Related Options
print the results from the weak exogeneity test of the
   long-run parameters                                      COINTEG     EXOGENEITY
specify the restriction on the cointegrated coefficient
   matrix                                                   COINTEG     H=
specify the restriction on the adjustment coefficient
   matrix                                                   COINTEG     J=
specify the variable name whose cointegrating vectors
   are normalized                                           COINTEG     NORMALIZE=
specify a cointegration rank                                COINTEG     RANK=
print the Johansen cointegration rank test                  MODEL       COINTTEST=(JOHANSEN=)
print the Stock-Watson common trends test                   MODEL       COINTTEST=(SW=)
print the Dickey-Fuller unit root test                      MODEL       DFTEST=

Tests and Restrictions on Parameters
test the Granger causality                                  CAUSAL      GROUP1= GROUP2=
place and test restrictions on parameter estimates          RESTRICT
test hypotheses on parameter estimates                      TEST

Forecasting Control Options
specify the size of confidence limits for forecasting       OUTPUT      ALPHA=
start forecasting before end of the input data              OUTPUT      BACK=
specify how many periods to forecast                        OUTPUT      LEAD=
suppress the printed forecasts                              OUTPUT      NOPRINT

PROC VARMAX Statement

PROC VARMAX options ;

The following options can be used in the PROC VARMAX statement:


DATA=SAS-data-set

specifies the input SAS data set. If the DATA= option is not specified, the PROC VARMAX statement uses the most recently created SAS data set.

OUTEST=SAS-data-set

writes the parameter estimates to the output data set.

COVOUT
OUTCOV

writes the covariance matrix for the parameter estimates to the OUTEST= data set. This option is valid only if the OUTEST= option is specified.

OUTSTAT=SAS-data-set

writes residual diagnostic results to an output data set. If the COINTTEST=(JOHANSEN) option is specified, the results of this option are also written to the output data set.

The following statements are examples of these options in the PROC VARMAX statement:

proc varmax data=one outest=est outcov outstat=stat;
   model y1-y3 / p=1;
run;

proc varmax data=one outest=est outstat=stat;
   model y1-y3 / p=1 cointtest=(johansen);
run;

PLOTS< (global-plot-option) > = plot-request-option < (options) >
PLOTS< (global-plot-option) > = ( plot-request-option < (options) > ... plot-request-option < (options) > )

controls the plots produced through ODS Graphics. When you specify only one plot, you can omit the parentheses around the plot request. Some examples follow:

plots=none
plots=all
plots(unpack)=residual(residual normal)
plots=(forecasts model)

You must enable ODS Graphics before requesting plots, as shown in the following example. For general information about ODS Graphics, see Chapter 21, "Statistical Graphics Using ODS" (SAS/STAT User's Guide).

ods graphics on;

proc varmax data=one plots=impulse(simple);
   model y1-y3 / p=1;
run;

proc varmax data=one plots=(model residual);
   model y1-y3 / p=1;
run;


proc varmax data=one plots=forecasts;
   model y1-y3 / p=1;
   output lead=12;
run;

The first VARMAX program produces the simple impulse response plots. The second VARMAX program produces the plots associated with the model and the prediction errors; the plots associated with prediction errors are the ACF, PACF, IACF, distribution, white-noise, and normal quantile plots and the prediction error plot. The third VARMAX program produces the FORECASTS and FORECASTSONLY plots.

The global-plot-option applies to the impulse and prediction error analysis plots generated by the VARMAX procedure. The following global-plot-option is available:

UNPACK

breaks a graphic that is otherwise paneled into individual component plots.

The following plot-request-options are available:

ALL

produces all plots appropriate for the particular analysis.

FORECASTS < (forecasts-plot-options) >

produces plots of the forecasts. The forecastsonly plot that shows the multistep forecasts in the forecast region is produced by default. The following forecasts-plot-options are available:

ALL

produces the FORECASTSONLY and the FORECASTS plots. This is the default.

FORECASTS

produces a plot that shows the one-step-ahead as well as the multistep forecasts.

FORECASTSONLY

produces a plot that shows only the multistep forecasts.

IMPULSE < (impulse-plot-options) >

produces the plots of the impulse response function and the impulse response of the transfer function. The following impulse-plot-options are available:

ALL

produces all impulse plots. This is the default.

ACCUM

produces the accumulated impulse plot.

ORTH

produces the orthogonalized impulse plot.

SIMPLE

produces the simple impulse plot.

MODEL

produces plots of the dependent variables listed in the MODEL statement and plots of the one-step-ahead predicted values for each dependent variable.

NONE

suppresses all plots.

RESIDUAL < (residual-plot-options) >

produces plots associated with the prediction errors obtained after modeling the data. The following residual-plot-options are available:

ALL

produces all plots associated with the analysis of the prediction errors. This is the default.

RESIDUAL

produces the prediction error plot.


DIAGNOSTICS

produces a panel of plots useful in assessing the autocorrelations and white noise of the prediction errors. The panel consists of the following:

- the autocorrelation plot of the prediction errors
- the partial autocorrelation plot of the prediction errors
- the inverse autocorrelation plot of the prediction errors
- the log scaled white noise plot of the prediction errors

NORMAL

produces a panel of plots useful in assessing normality of the prediction errors. The panel consists of the following:

- the distribution of the prediction errors with the normal curve overlaid
- the normal quantile plot of the prediction errors
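For example, a minimal sketch that requests only these two residual panels (the data set name one is illustrative):

proc varmax data=one plots=residual(diagnostics normal);
   model y1-y3 / p=1;   /* residual panels are based on the prediction errors of this model */
run;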

Other Options

In addition, any of the following MODEL statement options can be specified in the PROC VARMAX statement, which is equivalent to specifying the option for every MODEL statement: CENTER, DFTEST=, DIF=, DIFX=, DIFY=, LAGMAX=, METHOD=, MINIC=, NOCURRENTX, NOINT, NOPRINT, NSEASON=, P=, PRINT=, PRINTALL, PRINTFORM=, Q=, SCENTER, TREND=, VARDEF=, and XLAG= options. The following is an example of the options in the PROC VARMAX statement:

proc varmax data=one lagmax=3 method=ml;
   model y1-y3 / p=1;
run;

BY Statement

BY variables ;

A BY statement can be used with PROC VARMAX to obtain separate analyses on observations in groups defined by the BY variables. When a BY statement appears, the procedure expects the input data set to be sorted in order of the BY variables. If your input data set is not sorted in ascending order, use one of the following alternatives:

- Sort the data using the SORT procedure with a similar BY statement.


- Specify the BY statement option NOTSORTED or DESCENDING in the BY statement for the VARMAX procedure. The NOTSORTED option does not mean that the data are unsorted but rather that the data are arranged in groups (according to values of the BY variables) and that these groups are not necessarily in alphabetical or increasing numeric order.

- Create an index on the BY variables using the DATASETS procedure.

For more information about the BY statement, see SAS Language Reference: Concepts. For more information about the DATASETS procedure, see the discussion in the Base SAS Procedures Guide. The following is an example of the BY statement:

proc varmax data=one;
   by region;
   model y1-y3 / p=1;
run;
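As a sketch of the first alternative above (the data set name one and the BY variable region are illustrative):

proc sort data=one;
   by region;    /* sort into BY groups before calling PROC VARMAX */
run;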

CAUSAL Statement

CAUSAL GROUP1=(variables) GROUP2=(variables) ;

A CAUSAL statement prints the Granger causality test by fitting the VAR(p) model by using all variables defined in GROUP1 and GROUP2. Any number of CAUSAL statements can be specified. The CAUSAL statement is used in conjunction with the MODEL statement and uses the variables and the autoregressive order, p, specified in the MODEL statement. Variables in the GROUP1= and GROUP2= options should be defined in the MODEL statement. If the P=0 option is specified in the MODEL statement, the CAUSAL statement is not applicable.

The null hypothesis of the Granger causality test is that GROUP1 is influenced only by itself, and not by GROUP2. If the hypothesis test fails to reject the null, then the variables listed in GROUP1 might be considered as independent variables. See the section "VAR and VARX Modeling" on page 1941 for details.

The following is an example of the CAUSAL statement. You specify the CAUSAL statement with the GROUP1= and GROUP2= options.

proc varmax data=one;
   model y1-y3 = x1 / p=1;
   causal group1=(x1) group2=(y1-y3);
   causal group1=(y2) group2=(y1 y3);
run;

The first CAUSAL statement fits the VAR(1) model by using the variables y1, y2, y3, and x1 and tests the null hypothesis that x1 causes the other variables, y1, y2, and y3, but the other variables do not cause x1. The second CAUSAL statement fits the VAR(1) model by using the variables y1, y3, and y2 and tests the null hypothesis that y2 causes the other variables, y1 and y3, but the other variables do not cause y2.


COINTEG Statement

COINTEG RANK=number < H=(matrix) > < J=(matrix) > < EXOGENEITY > < NORMALIZE=variable > ;

The COINTEG statement fits the vector error correction model to the data, tests the restrictions of the long-run parameters and the adjustment parameters, and tests for weak exogeneity in the long-run parameters. The cointegrated system uses the maximum likelihood analysis proposed by Johansen and Juselius (1990) and Johansen (1995a, 1995b). Only one COINTEG statement is allowed.

You specify the ECM= option in the MODEL statement or the COINTEG statement to fit the VECM(p). The P= option in the MODEL statement is used to specify the autoregressive order of the VECM. The following statements are equivalent for fitting a VECM(2):

proc varmax data=one;
   model y1-y3 / p=2 ecm=(rank=1);
run;

proc varmax data=one;
   model y1-y3 / p=2;
   cointeg rank=1;
run;

To test restrictions on either α or β or both, you specify the J= or H= option or both, respectively. You specify the EXOGENEITY option in the COINTEG statement for tests of weak exogeneity in the long-run parameters. The following is an example of the COINTEG statement:

proc varmax data=one;
   model y1-y3 / p=2;
   cointeg rank=1 h=(1 0, -1 0, 0 1) j=(1 0, 0 0, 0 1) exogeneity;
run;

The following options can be used in the COINTEG statement:

EXOGENEITY

formulates the likelihood ratio tests for testing weak exogeneity in the long-run parameters. The null hypothesis is that one variable is weakly exogenous for the others.

H=(matrix)

specifies the restrictions H on the k × r or (k+1) × r cointegrated coefficient matrix β such that β = Hφ, where H is known and φ is unknown. If the VECM(p) is specified with the COINTEG statement or with the ECM= option in the MODEL statement and the ECTREND option is not included with the ECM= specification, then the H matrix has dimension k × m. If the VECM(p) is specified with the COINTEG statement or with the ECM= option in the MODEL statement and the ECTREND option is also used, then the H matrix has dimension (k+1) × m. Here k is the number of dependent variables, and m satisfies r ≤ m < k, where r is defined with the RANK=r option.

For example, consider a system that contains four variables and the RANK=1 option with β = (β_1, β_2, β_3, β_4)′. The restriction matrix for the test of β_1 + β_2 = 0 can be specified as

cointeg rank=1 h=(1 0 0, -1 0 0, 0 1 0, 0 0 1);

Here the matrix H is 4 × 3, where k = 4 and m = 3, and each row of the matrix H is separated by commas. When the series has no separate deterministic trend, the constant term should be restricted by α′⊥ δ = 0. In the preceding example, β can be either β = (β_1, β_2, β_3, β_4, 1)′ or β = (β_1, β_2, β_3, β_4, t)′. You can specify the restriction matrix for the previous test of β_1 + β_2 = 0 as follows:

cointeg rank=1 h=(1 0 0 0, -1 0 0 0, 0 1 0 0, 0 0 1 0, 0 0 0 1);

When the cointegrated system contains three dependent variables and the RANK=2 option is used, you can specify the restriction matrix for the test of β_{1j} = β_{2j} for j = 1, 2 as follows:

cointeg rank=2 h=(1 0, -1 0, 0 1);

J=(matrix)

specifies the restrictions J on the k × r adjustment matrix α such that α = Jψ, where J is known and ψ is unknown. The k × m matrix J is specified by using this option, where k is the number of dependent variables, m satisfies r ≤ m < k, and r is defined with the RANK=r option. For example, when the system contains four variables and the RANK=1 option is used, you can specify the restriction matrix for the test of α_j = 0 for j = 2, 3, 4 as follows:

cointeg rank=1 j=(1, 0, 0, 0);

When the system contains three variables and the RANK=2 option is used, you can specify the restriction matrix for the test of α_{2j} = 0 for j = 1, 2 as follows:

cointeg rank=2 j=(1 0, 0 0, 0 1);

NORMALIZE=variable

specifies a single dependent (endogenous) variable name whose cointegrating vectors are normalized. If the variable name is different from that specified in the COINTTEST=(JOHANSEN= ) or ECM= option in the MODEL statement, the variable name specified in the COINTEG statement is used. If the normalized variable is not specified, cointegrating vectors are not normalized.


RANK=number

specifies the cointegration rank of the cointegrated system. This option is required in the COINTEG statement. The rank of cointegration should be greater than zero and less than the number of dependent (endogenous) variables. If the value of the RANK= option in the COINTEG statement is different from that specified in the ECM= option, the rank specified in the COINTEG statement is used.

ID Statement

ID variable INTERVAL=value < ALIGN=value > ;

The ID statement specifies a variable that identifies observations in the input data set. The datetime variable specified in the ID statement is included in the OUT= data set if the OUTPUT statement is specified. Note that the ID variable is usually a SAS datetime variable. The values of the ID variable are extrapolated for the forecast observations based on the value of the INTERVAL= option.

ALIGN=value

controls the alignment of SAS dates used to identify output observations. The ALIGN= option allows the following values: BEGINNING | BEG | B, MIDDLE | MID | M, and ENDING | END | E. The default is BEGINNING. The ALIGN= option is used to align the ID variable to the beginning, middle, or end of the time ID interval specified by the INTERVAL= option.

INTERVAL=value

specifies the time interval between observations. This option is required in the ID statement. The INTERVAL= option is used in conjunction with the ID variable to check that the input data are in order and have no missing periods. The INTERVAL= option is also used to extrapolate the ID values past the end of the input data when the OUTPUT statement is specified.

The following is an example of the ID statement:

proc varmax data=one;
   id date interval=qtr align=mid;
   model y1-y3 / p=1;
run;

MODEL Statement

MODEL dependents < = regressors > < , dependents < = regressors > . . . > < / options > ;

The MODEL statement specifies dependent (endogenous) variables and independent (exogenous) variables for the VARMAX model. The multivariate model can have the same or different independent variables corresponding to the dependent variables. As a special case, the VARMAX procedure allows you to analyze one dependent variable. Only one MODEL statement is allowed.


For example, the following statements are equivalent ways of specifying the multivariate model for the vector (y1, y2, y3):

model y1 y2 y3 ;
model y1-y3 ;

The following statements are equivalent ways of specifying the multivariate model with independent variables, where y1, y2, y3, and y4 are the dependent variables and x1, x2, x3, x4, and x5 are the independent variables:

model y1 y2 y3 y4 = x1 x2 x3 x4 x5 ;
model y1 y2 y3 y4 = x1-x5 ;
model y1 = x1-x5, y2 = x1-x5, y3 y4 = x1-x5 ;
model y1-y4 = x1-x5 ;

When the multivariate model has different independent variables that correspond to each of the dependent variables, equations are separated by commas (,) and the model can be specified as illustrated by the following MODEL statement:

model y1 = x1-x3, y2 = x3-x5, y3 y4 = x1-x5 ;

The following options can be used in the MODEL statement after a forward slash (/):

CENTER

centers the dependent (endogenous) variables by subtracting their means. Note that centering is done after differencing when the DIF= or DIFY= option is specified. If there are exogenous (independent) variables, this option is not applicable.

model y1 y2 / p=1 center;

DIF(variable (number-list) < ... variable (number-list) >)
DIF=(variable (number-list) < ... variable (number-list) >)

specifies the degrees of differencing to be applied to the specified dependent or independent variables. The number-list must contain one or more numbers, each of which should be greater than zero. The differencing can be the same for all variables, or it can vary among variables. For example, the DIF=(y1 (1,4) y3 (1) x2 (2)) option specifies that the series y1 is differenced at lag 1 and at lag 4, which is

(1 − B⁴)(1 − B) y1_t = (y1_t − y1_{t−1}) − (y1_{t−4} − y1_{t−5})

the series y3 is differenced at lag 1, which is (y3_t − y3_{t−1}); and the series x2 is differenced at lag 2, which is (x2_t − x2_{t−2}).

The following statement uses the data dy1, y2, x1, and dx2, where dy1 = (1 − B) y1_t and dx2 = (1 − B)² x2_t:

model y1 y2 = x1 x2 / p=1 dif=(y1(1) x2(2));


DIFX(number-list)
DIFX=(number-list)

specifies the degrees of differencing to be applied to all independent variables. The number-list must contain one or more numbers, each of which should be greater than zero. For example, the DIFX=(1) option specifies that all of the independent series are differenced once at lag 1. The DIFX=(1,4) option specifies that all of the independent series are differenced at lag 1 and at lag 4. If independent variables are specified in the DIF= option, then the DIFX= option is ignored.

The following statement uses the data y1, y2, dx1, and dx2, where dx1 = (1 − B) x1_t and dx2 = (1 − B) x2_t:

model y1 y2 = x1 x2 / p=1 difx(1);

DIFY(number-list)
DIFY=(number-list)

specifies the degrees of differencing to be applied to all dependent (endogenous) variables. The number-list must contain one or more numbers, each of which should be greater than zero. For details, see the DIFX= option. If dependent variables are specified in the DIF= option, then the DIFY= option is ignored.

model y1 y2 / p=1 dify(1);

METHOD=value

requests the type of estimates to be computed. The possible values of the METHOD= option are as follows:

LS

specifies least squares estimates.

ML

specifies maximum likelihood estimates.

When any of the ECM=, PRIOR=, or Q= options or the GARCH statement is specified, the ML method is used regardless of the method given by the METHOD= option.

model y1 y2 / p=1 method=ml;

NOCURRENTX

suppresses the current values x_t of the independent variables. In general, the VARX(p,s) model is

y_t = δ + Σ_{i=1}^{p} Φ_i y_{t−i} + Σ_{i=0}^{s} Θ_i x_{t−i} + ε_t

where p is the number of lags of the dependent variables included in the model, and s is the number of lags of the independent variables included in the model, including the contemporaneous values of x_t.

A VARX(1,2) model can be specified as:

model y1 y2 = x1 x2 / p=1 xlag=2;


If the NOCURRENTX option is specified, it suppresses the current values x_t and starts with x_{t−1}. The VARX(p,s) model is redefined as:

y_t = δ + Σ_{i=1}^{p} Φ_i y_{t−i} + Σ_{i=1}^{s} Θ_i x_{t−i} + ε_t

This model with p = 1 and s = 2 can be specified as:

model y1 y2 = x1 x2 / p=1 xlag=2 nocurrentx;

NOINT

suppresses the intercept parameter δ.

model y1 y2 / p=1 noint;

NSEASON=number

specifies the number of seasonal periods. When the NSEASON=number option is specified, (number − 1) seasonal dummies are added to the regressors. If the NOINT option is specified, the NSEASON= option is not applicable.

model y1 y2 / p=1 nseason=4;

SCENTER

centers the seasonal dummies specified by the NSEASON= option. The centered seasonal dummies are generated by c − (1/s), where c is a seasonal dummy generated by the NSEASON=s option.

model y1 y2 / p=1 nseason=4 scenter;

TREND=value

specifies the degree of deterministic time trend included in the model. Valid values are as follows:

LINEAR

includes a linear time trend as a regressor.

QUAD

includes linear and quadratic time trends as regressors.

The TREND=QUAD option is not applicable for a cointegration analysis.

model y1 y2 / p=1 trend=linear;

VARDEF=value

corrects for the degrees of freedom of the denominator for computing an error covariance matrix for the METHOD=LS option. If the METHOD=ML option is specified, the VARDEF=N option is always used. Valid values are as follows:

DF

specifies that the number of nonmissing observations minus the number of regressors be used.

N

specifies that the number of nonmissing observations be used.

model y1 y2 / p=1 vardef=n;


Printing Control Options

LAGMAX=number

specifies the maximum number of lags for which results are computed and displayed by the PRINT=(CORRX CORRY COVX COVY IARR IMPULSE= IMPULSX= PARCOEF PCANCORR PCORR) options. This option is also used to limit the printed results for the cross covariances and cross-correlations of residuals. The default is LAGMAX=min(12, T − 2), where T is the number of nonmissing observations.

model y1 y2 / p=1 lagmax=6;

NOPRINT

suppresses all printed output.

model y1 y2 / p=1 noprint;

PRINTALL

requests all printing control options. The options set by the option PRINTALL are DFTEST=, MINIC=, PRINTFORM=BOTH, and PRINT=(CORRB CORRX CORRY COVB COVPE COVX COVY DECOMPOSE DYNAMIC IARR IMPULSE=(ALL) IMPULSX=(ALL) PARCOEF PCANCORR PCORR ROOTS YW). You can also specify this option as the option ALL.

model y1 y2 / p=1 printall;

PRINTFORM=value

requests the printing format of the output generated by the PRINT= option and cross covariances and cross-correlations of residuals. Valid values are as follows:

BOTH

prints output in both MATRIX and UNIVARIATE forms.

MATRIX

prints output in matrix form. This is the default.

UNIVARIATE

prints output by variables.

model y1 y2 / p=1 print=(impulse) printform=univariate;

Printing Options

PRINT=(options)

The following options can be used in the PRINT=( ) option. The options are listed within parentheses. If a number in parentheses follows an option listed below, then the option prints the number of lags specified by number in parentheses. The default is the number of lags specified by the LAGMAX=number option.

CORRB

prints the estimated correlations of the parameter estimates.


CORRX
CORRX(number)

prints the cross-correlation matrices of exogenous (independent) variables. The number should be greater than zero.

CORRY
CORRY(number)

prints the cross-correlation matrices of dependent (endogenous) variables. The number should be greater than zero.

COVB

prints the estimated covariances of the parameter estimates.

COVPE
COVPE(number)

prints the covariance matrices of number-ahead prediction errors for the VARMAX(p,q,s) model. The number should be greater than zero. If the DIF= or DIFY= option is specified, the covariance matrices of multistep prediction errors are computed based on the differenced data. This option is not applicable when the PRIOR= option is specified. See the section "Forecasting" on page 1930 for details.

COVX
COVX(number)

prints the cross-covariance matrices of exogenous (independent) variables. The number should be greater than zero.

COVY
COVY(number)

prints the cross-covariance matrices of dependent (endogenous) variables. The number should be greater than zero.

DECOMPOSE
DECOMPOSE(number)

prints the decomposition of the prediction error covariances using up to the number of lags specified by number in parentheses for the VARMA(p,q) model. The number should be greater than zero. It can be interpreted as the contribution of innovations in one variable to the mean squared error of the multistep forecast of another variable. The DECOMPOSE option also prints proportions of the forecast error variance. If the DIF= or DIFY= option is specified, the covariance matrices of multistep prediction errors are computed based on the differenced data. This option is not applicable when the PRIOR= option is specified. See the section "Forecasting" on page 1930 for details.

DIAGNOSE

prints the residual diagnostics and model diagnostics.

DYNAMIC

prints the contemporaneous relationships among the components of the vector time series.

prints the contemporaneous relationships among the components of the vector time series.


ESTIMATES

prints the coefficient estimates and a schematic representation of the significance and sign of the parameter estimates.

IARR
IARR(number)

prints the infinite order AR representation of a VARMA process. The number should be greater than zero. If the ECM= option and the COINTEG statement are specified, then the reparameterized AR coefficient matrices are printed.

IMPULSE
IMPULSE(number)
IMPULSE=(SIMPLE ACCUM ORTH STDERR ALL)
IMPULSE(number)=(SIMPLE ACCUM ORTH STDERR ALL)

prints the impulse response function. The number should be greater than zero. It investigates the response of one variable to an impulse in another variable in a system that involves a number of other variables as well. It is an infinite order MA representation of a VARMA process. See the section "Impulse Response Function" on page 1919 for details.

The following options can be used in the IMPULSE=( ) option. The options are specified within parentheses.

ACCUM

prints the accumulated impulse response function.

ALL

is equivalent to specifying all of SIMPLE, ACCUM, ORTH, and STDERR.

ORTH

prints the orthogonalized impulse response function.

SIMPLE

prints the impulse response function. This is the default.

STDERR

prints the standard errors of the impulse response function, the accumulated impulse response function, or the orthogonalized impulse response function.

If the exogenous variables are used to fit the model, then the STDERR option is ignored.

IMPULSX
IMPULSX(number)
IMPULSX=(SIMPLE ACCUM ALL)
IMPULSX(number)=(SIMPLE ACCUM ALL)

prints the impulse response function related to exogenous (independent) variables. The number should be greater than zero. See the section "Impulse Response Function" on page 1919 for details.

The following options can be used in the IMPULSX=( ) option. The options are specified within parentheses.

ACCUM

prints the accumulated impulse response matrices for the transfer function.

ALL

is equivalent to specifying both SIMPLE and ACCUM.


SIMPLE

prints the impulse response matrices for the transfer function. This is the default.

PARCOEF PARCOEF(number )

prints the partial autoregression coefficient matrices, Φ_{mm}, up to the lag number. The number should be greater than zero. With a VAR process, this option is useful for the identification of the order, since the Φ_{mm} have the property that they equal zero for m > p under the hypothetical assumption of a VAR(p) model. See the section "Tentative Order Selection" on page 1935 for details.

PCANCORR
PCANCORR(number)

prints the partial canonical correlations of the process at lag m and the test for testing Φ_m = 0 for m > p up to the lag number. The number should be greater than zero. The lag m partial canonical correlations are the canonical correlations between y_t and y_{t−m}, after adjustment for the dependence of these variables on the intervening values y_{t−1}, ..., y_{t−m+1}. See the section "Tentative Order Selection" on page 1935 for details.

PCORR
PCORR(number)

prints the partial correlation matrices. The number should be greater than zero. With a VAR process, this option is useful for a tentative order selection by the same property as the partial autoregression coefficient matrices, as described in the PRINT=(PARCOEF) option. See the section "Tentative Order Selection" on page 1935 for details.

ROOTS

prints the eigenvalues of the kp × kp companion matrix associated with the AR characteristic function Φ(B), where k is the number of dependent (endogenous) variables and Φ(B) is the finite order matrix polynomial in the backshift operator B, such that B^i y_t = y_{t−i}. These eigenvalues indicate the stationary condition of the process, since the stationary condition on the roots of |Φ(B)| = 0 in the VAR(p) model is equivalent to the condition in the corresponding VAR(1) representation that all eigenvalues of the companion matrix be less than one in absolute value. Similarly, you can use this option to check the invertibility of the MA process. In addition, when the GARCH statement is specified, this option prints the roots of the GARCH characteristic polynomials to check covariance stationarity for the GARCH process.

YW

prints Yule-Walker estimates of the preliminary autoregressive model for the dependent (endogenous) variables. The coefficient matrices are printed using the maximum order of the autoregressive process. Some examples of the PRINT= option are as follows:

model y1 y2 / p=1 print=(covy(10) corry(10));
model y1 y2 / p=1 print=(parcoef pcancorr pcorr);
model y1 y2 / p=1 print=(impulse(8) decompose(6) covpe(6));
model y1 y2 / p=1 print=(dynamic roots yw);


Lag Specification Options

P=number
P=(number-list)

specifies the order of the vector autoregressive process. Subset models of vector autoregressive orders can be specified by listing the desired set of lags. For example, you can specify the P=(1,3,4) option. The P=3 option is equivalent to the P=(1,2,3) option. The default is P=0.

If P=0 and there are no exogenous (independent) variables, then the AR polynomial order is automatically determined by minimizing an information criterion. If P=0 and the PRIOR= or ECM= option or both are specified, then the AR polynomial order is determined automatically.

If the ECM= option is specified, then subset models of vector autoregressive orders are not allowed and the AR maximum order specified is used.

Examples illustrating the P= option follow:

model y1 y2 / p=3;
model y1 y2 / p=(1,3);
model y1 y2 / p=(1,3) prior;

Q=number
Q=(number-list)

specifies the order of the moving-average error process. Subset models of moving-average orders can be specified by listing the desired set of lags. For example, you can specify the Q=(1,5) option. The default is Q=0.

model y1 y2 / p=1 q=1;
model y1 y2 / q=(2);

XLAG=number
XLAG=(number-list)

specifies the lags of exogenous (independent) variables. Subset models of distributed lags can be specified by listing the desired set of lags. For example, XLAG=(2) selects only a lag 2 of the exogenous variables. The default is XLAG=0. To exclude the present values of exogenous variables from the model, the NOCURRENTX option must be used.

model y1 y2 = x1-x3 / xlag=2 nocurrentx;
model y1 y2 = x1-x3 / p=1 xlag=(2);

Tentative Order Selection Options

MINIC
MINIC=(TYPE=value P=number Q=number PERROR=number)

prints the information criterion for the appropriate AR and MA tentative order selection and for the diagnostic checks of the fitted model.


If the MINIC= option is not specified, all types of information criteria are printed for diagnostic checks of the fitted model.

The following options can be used in the MINIC=( ) option. The options are specified within parentheses.

P=number
P=(p_min : p_max)

specifies the range of AR orders to be considered in the tentative order selection. The default is P=(0:5). The P=3 option is equivalent to the P=(0:3) option.

PERROR=number
PERROR=(p_{ε,min} : p_{ε,max})

specifies the range of AR orders for obtaining the error series. The default is PERROR=(p_max : p_max + q_max).

Q=number
Q=(q_min : q_max)

specifies the range of MA orders to be considered in the tentative order selection. The default is Q=(0:5).

TYPE=value

specifies the criterion for the model order selection. Valid criteria are as follows:

AIC

specifies the Akaike information criterion.

AICC

specifies the corrected Akaike information criterion. This is the default criterion.

FPE

specifies the final prediction error criterion.

HQC

specifies the Hannan-Quinn criterion.

SBC

specifies the Schwarz Bayesian criterion. You can also specify this value as TYPE=BIC.

model y1 y2 / minic;
model y1 y2 / minic=(type=aic p=5);

Cointegration Related Options

Two options are related to integrated time series; one is the DFTEST option to test for a unit root, and the other is the COINTTEST option to test for cointegration.

DFTEST
DFTEST=(DLAG=number)
DFTEST=(DLAG=(number) . . . (number))

prints the Dickey-Fuller unit root tests. The DLAG=(number) . . . (number) option specifies the regular or seasonal unit root test. Supported values of number are 1, 2, 4, and 12. If the number is greater than one, a seasonal Dickey-Fuller test is performed. If the TREND= option is specified, the seasonal unit root test is not available. The default is DLAG=1.


For example, the DFTEST=(DLAG=(1)(12)) option produces two tables: the Dickey-Fuller regular unit root test and the seasonal unit root test.

Some examples of the DFTEST= option follow:

model y1 y2 / p=2 dftest;
model y1 y2 / p=2 dftest=(dlag=4);
model y1 y2 / p=2 dftest=(dlag=(1)(12));
model y1 y2 / p=2 dftest cointtest;

COINTTEST
COINTTEST=(JOHANSEN < (=options) > SW < (=options) > SIGLEVEL=number)

The following options can be used with the COINTTEST=( ) option. The options are specified within parentheses.

JOHANSEN
JOHANSEN=(TYPE=value IORDER=number NORMALIZE=variable)

prints the cointegration rank test for multivariate time series based on Johansen's method. This test is provided when the number of dependent (endogenous) variables is less than or equal to 11. See the section "Vector Error Correction Modeling" on page 1962 for details.

The VARX(p,s) model can be written as the error correction model

Δy_t = Π y_{t−1} + Σ_{i=1}^{p−1} Φ*_i Δy_{t−i} + A D_t + Σ_{i=0}^{s} Θ_i x_{t−i} + ε_t

where Π, Φ*_i, A, and Θ_i are coefficient parameters and D_t is a deterministic term such as a constant, a linear trend, or seasonal dummies.

The I(1) model is defined by one reduced-rank condition. If the cointegration rank is r < k, then there exist k × r matrices α and β of rank r such that Π = αβ′.

The I(1) model is rewritten as the I(2) model

Δ²y_t = Π y_{t−1} − Ψ Δy_{t−1} + Σ_{i=1}^{p−2} Ψ_i Δ²y_{t−i} + A D_t + Σ_{i=0}^{s} Θ_i x_{t−i} + ε_t

where Ψ = I_k − Σ_{i=1}^{p−1} Φ*_i and Ψ_i = −Σ_{j=i+1}^{p−1} Φ*_j.

The I(2) model is defined by two reduced-rank conditions. One is that Π = αβ′, where α and β are k × r matrices of full rank r. The other is that α′⊥ Ψ β⊥ = ξη′, where ξ and η are (k − r) × s matrices with s ≤ k − r; α⊥ and β⊥ are k × (k − r) matrices of full rank k − r such that α′α⊥ = 0 and β′β⊥ = 0.

The following options can be used in the JOHANSEN=( ) option. The options are specified within parentheses.

IORDER=number

specifies the integrated order.


IORDER=1

prints the cointegration rank test for an integrated order 1 and prints the long-run parameter, β, and the adjustment coefficient, α. This is the default. If the IORDER=1 option is specified, then the AR order should be greater than or equal to 1. When the P=0 option is specified, the value of P is set to 1 for the Johansen test.

IORDER=2

prints the cointegration rank test for integrated orders 1 and 2. If the IORDER=2 option is specified, then the AR order should be greater than or equal to 2. If the P=1 option is specified with the IORDER=2 option, then the value of IORDER is set to 1; if the P=0 option is specified with the IORDER=2 option, then the value of P is set to 2.

NORMALIZE=variable

specifies the dependent (endogenous) variable name whose cointegration vectors are to be normalized. If the normalized variable is different from that specified in the ECM= option or the COINTEG statement, then the value specified in the COINTEG statement is used.

TYPE=value

specifies the type of cointegration rank test to be printed. Valid values are as follows:

MAX

prints the cointegration maximum eigenvalue test.

TRACE

prints the cointegration trace test. This is the default.

If the NOINT option is not specified, the procedure prints two different cointegration rank tests in the presence of the unrestricted and restricted deterministic terms (constant or linear trend) models. If the IORDER=2 option is specified, the procedure automatically uses the TYPE=TRACE option.

Some examples illustrating the COINTTEST= option follow:

model y1 y2 / p=2 cointtest=(johansen=(type=max normalize=y1));
model y1 y2 / p=2 cointtest=(johansen=(iorder=2 normalize=y1));

SIGLEVEL=value

sets the size of cointegration rank tests and common trends tests. The SIGLEVEL=value can be set to 0.1, 0.05, or 0.01. The default is SIGLEVEL=0.05.

model y1 y2 / p=2 cointtest=(johansen siglevel=0.1);
model y1 y2 / p=2 cointtest=(sw siglevel=0.1);

SW
SW=(TYPE=value LAG=number)

prints common trends tests for a multivariate time series based on the Stock-Watson method. This test is provided when the number of dependent (endogenous) variables is less than or equal to 6. See the section “Common Trends” on page 1960 for details. The following options can be used in the SW=( ) option. The options are listed within parentheses.


LAG=number

specifies the number of lags. The default is LAG=max(1,p) for the TYPE=FILTDIF or TYPE=FILTRES option, where p is the AR maximum order specified by the P= option; the default is LAG=T^{1/4} for the TYPE=KERNEL option, where T is the number of nonmissing observations. If the specified LAG=number exceeds the default, then it is replaced by the default.

TYPE=value

specifies the type of common trends test to be printed. Valid values are as follows:

FILTDIF

prints the common trends test based on the filtering method applied to the differenced series. This is the default.

FILTRES

prints the common trends test based on the filtering method applied to the residual series.

KERNEL

prints the common trends test based on the kernel method.

model y1 y2 / p=2 cointtest=(sw);
model y1 y2 / p=2 cointtest=(sw=(type=kernel));
model y1 y2 / p=2 cointtest=(sw=(type=kernel lag=3));

Bayesian VARX Estimation Options

PRIOR
PRIOR=(prior-options)

specifies the prior value of parameters for the BVARX(p,s) model. The BVARX model allows for a subset model specification. If the ECM= option is specified with the PRIOR option, the BVECMX(p,s) form is fitted. To compute the standard errors of the forecasts, a bootstrap procedure is used. See the section "Bayesian VAR and VARX Modeling" on page 1947 for details.

The following options can be used with the PRIOR=(prior-options) option. The prior-options are listed within parentheses.

IVAR
IVAR=(variables)

specifies an integrated BVAR(p) model. The variables should be specified in the MODEL statement as dependent variables. If you use the IVAR option without variables, then it sets the overall prior mean of the first lag of each variable equal to one in its own equation and sets all other coefficients to zero. If variables are specified, it sets the prior mean of the first lag of the specified variables equal to one in its own equation and sets all other coefficients to zero.

When the series y_t = (y_1, y_2)′ follows a bivariate BVAR(2) process, the IVAR or IVAR=(y1 y2) option is equivalent to specifying MEAN=(1 0 0 0 0 1 0 0). If the PRIOR=(MEAN=) or ECM= option is specified, the IVAR= option is ignored.

LAMBDA=value

specifies the prior standard deviation of the AR coefficient parameter matrices. It should be a positive number. The default is LAMBDA=1. As the value of the LAMBDA= option is increased, the BVAR(p) model becomes closer to a VAR(p) model.


MEAN=(vector )

specifies the mean vector in the prior distribution for the AR coefficients. If the vector is not specified, the prior value is assumed to be a zero vector. See the section "Bayesian VAR and VARX Modeling" on page 1947 for details.

You can specify the mean vector by order of the equation. Let (δ, Φ_1, ..., Φ_p) be the parameter sets to be estimated and Φ = (Φ_1, ..., Φ_p) be the AR parameter sets. The mean vector is specified row-wise from Φ; that is, the MEAN=(vec(Φ′)) option. For the PRIOR=(MEAN=) option in the BVAR(2),

   Φ = [ φ_{1,11} φ_{1,12} φ_{2,11} φ_{2,12} ]  =  [ 2    0.1   1    0 ]
       [ φ_{1,21} φ_{1,22} φ_{2,21} φ_{2,22} ]     [ 0.5  3     0   −1 ]

where φ_{l,ij} is an element of Φ, l is a lag, i is associated with the first dependent variable, and j is associated with the second dependent variable.

model y1 y2 / p=2 prior=(mean=(2 0.1 1 0 0.5 3 0 -1));

The deterministic terms and exogenous variables are considered to shrink toward zero; you must omit prior means of exogenous variables and deterministic terms such as a constant, seasonal dummies, or trends.

For a Bayesian error correction model estimated when both the ECM= and PRIOR= options are used, a mean vector for only the lagged AR coefficients, Φ*_i, in terms of the regressors Δy_{t−i}, for i = 1, ..., (p − 1), is used in the VECM(p) representation. The diffuse prior variance of α is used, since β is replaced by β̂ estimated in a nonconstrained VECM(p) form.

Δy_t = α z_{t−1} + Σ_{i=1}^{p−1} Φ*_i Δy_{t−i} + A D_t + Σ_{i=0}^{s} Θ_i x_{t−i} + ε_t

where z_t = β′ y_t.

For example, in the case of a bivariate (k = 2) BVECM(2) form, the option MEAN=(φ*_{1,11} φ*_{1,12} φ*_{1,21} φ*_{1,22}) is used, where φ*_{1,ij} is the (i,j)th element of the matrix Φ*_1.

NREP=number

specifies the number of periods to compute the measure of forecast accuracy. The default is NREP=0.5T, where T is the number of observations.

THETA=value

specifies the prior standard deviation of the AR coefficient parameter matrices. The value is in the interval (0,1). The default is THETA=0.1. As the value of the THETA= option approaches 1, the specified BVAR(p) model approaches a VAR(p) model. Some examples of the PRIOR= option follow:


   model y1 y2 / p=2 prior;
   model y1 y2 / p=2 prior=(theta=0.2 lambda=5);
   model y1 y2 = x1 / p=2 prior=(theta=0.2 lambda=5);
   model y1 y2 = x1 / p=2
                 prior=(theta=0.2 lambda=5 mean=(2 0.1 1 0 0.5 3 0 -1));

See the section “Bayesian VAR and VARX Modeling” on page 1947 for details.

Vector Error Correction Model Options

ECM=(RANK=number NORMALIZE=variable ECTREND)

specifies a vector error correction model. The following options can be used in the ECM=( ) option. The options are specified within parentheses.

NORMALIZE=variable

specifies a single dependent variable name whose cointegrating vectors are normalized. If the variable name is different from that specified in the COINTEG statement, then the value specified in the COINTEG statement is used.

RANK=number

specifies the cointegration rank. This option is required in the ECM= option. The value of the RANK= option should be greater than zero and less than or equal to the number of dependent (endogenous) variables, k. If the rank is different from that specified in the COINTEG statement, then the value specified in the COINTEG statement is used.

ECTREND

specifies the restriction on the drift in the VECM(p) form.

- There is no separate drift in the VECM(p) form, but a constant enters only through the error correction term.

$$\Delta y_t = \alpha(\beta', \beta_0)(y'_{t-1}, 1)' + \sum_{i=1}^{p-1} \Phi_i^* \Delta y_{t-i} + \epsilon_t$$

An example of the ECTREND option follows:

   model y1 y2 / p=2 ecm=(rank=1 ectrend);

- There is a separate drift and no separate linear trend in the VECM(p) form, but a linear trend enters only through the error correction term.

$$\Delta y_t = \alpha(\beta', \beta_1)(y'_{t-1}, t)' + \sum_{i=1}^{p-1} \Phi_i^* \Delta y_{t-i} + \delta_0 + \epsilon_t$$

An example of the ECTREND option with the TREND= option follows:

   model y1 y2 / p=2 ecm=(rank=1 ectrend) trend=linear;


If the ECTREND option is used together with the NSEASON option, the NSEASON option is ignored; if the NOINT option is specified, the ECTREND option is ignored. Some examples of the ECM= option follow:

   model y1 y2 / p=2 ecm=(rank=1 normalize=y1);
   model y1 y2 / p=2 ecm=(rank=1 ectrend) trend=linear;

See the section “Vector Error Correction Modeling” on page 1962 for details.

GARCH Statement

GARCH options ;

The GARCH statement specifies a GARCH-type multivariate conditional heteroscedasticity model. The following options can be used in the GARCH statement.

FORM=value

specifies the representation for a GARCH model. Valid values are as follows:

BEKK    specifies a BEKK representation. This is the default.
CCC     specifies a constant conditional correlation representation.

OUTHT=SAS-data-set

writes the conditional covariance matrix to an output data set.

P=number
P=(number-list)

specifies the order of the process or the subset of GARCH terms to be fitted. For example, you can specify the P=(1,3) option. The P=3 option is equivalent to the P=(1,2,3) option. The default is P=0.

Q=number
Q=(number-list)

specifies the order of the process or the subset of ARCH terms to be fitted. This option is required in the GARCH statement. For example, you can specify the Q=(2) option. The Q=2 option is equivalent to the Q=(1,2) option.

For the VAR(1)–ARCH(1) model,

   model y1 y2 / p=1;
   garch q=1 form=bekk;

For the multivariate GARCH(1,1) model,

   model y1 y2;
   garch q=1 p=1 form=ccc;


Other multivariate GARCH-type models are

   model y1 y2 = x1 / xlag=1;
   garch q=1;

   model y1 y2 / q=1;
   garch q=1 p=1;
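As an additional sketch, the OUTHT= option described above can be combined with these models to save the fitted conditional covariances (the output data set name hcov is arbitrary):

   proc varmax data=one;
      model y1 y2;
      garch p=1 q=1 form=ccc outht=hcov;
   run;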

See the section “Multivariate GARCH Modeling” on page 1981 for details.

NLOPTIONS Statement

NLOPTIONS options ;

The VARMAX procedure uses the nonlinear optimization (NLO) subsystem to perform nonlinear optimization tasks. For a list of all the options of the NLOPTIONS statement, see Chapter 6, “Nonlinear Optimization Methods.” An example of the NLOPTIONS statement follows:

   proc varmax data=one;
      nloptions tech=qn;
      model y1 y2 / p=2;
   run;

The VARMAX procedure uses the dual quasi-Newton optimization method by default when no NLOPTIONS statement is specified. However, it uses Newton-Raphson ridge optimization when the NLOPTIONS statement is specified. The following example uses TECH=QUANEW by default:

   proc varmax data=one;
      model y1 y2 / p=2 method=ml;
   run;

The next example uses TECH=NRRIDG by default:

   proc varmax data=one;
      nloptions maxiter=500 maxfunc=5000;
      model y1 y2 / p=2 method=ml;
   run;

OUTPUT Statement

OUTPUT < options > ;


The OUTPUT statement generates and prints forecasts based on the model estimated in the previous MODEL statement and, optionally, creates an output SAS data set that contains these forecasts. When the GARCH model is estimated, the upper and lower confidence limits of the forecasts are calculated by assuming that the error covariance has a homoscedastic conditional covariance.

ALPHA=number

sets the forecast confidence limit size, where number is between 0 and 1. When you specify the ALPHA=number option, the upper and lower confidence limits define the $100(1-\alpha)\%$ confidence interval. The default is ALPHA=0.05, which produces 95% confidence intervals.

BACK=number

specifies the number of observations before the end of the data at which the multistep forecasts begin. The BACK= option value must be less than or equal to the number of observations minus the number of lagged regressors in the model. The default is BACK=0, which means that the forecasts start at the end of the available data.

LEAD=number

specifies the number of multistep forecast values to compute. The default is LEAD=12.

NOPRINT

suppresses the printed forecast values of each dependent (endogenous) variable.

OUT=SAS-data-set

writes the forecast values to an output data set. Some examples of the OUTPUT statement follow:

   proc varmax data=one;
      model y1 y2 / p=2;
      output lead=6 back=2;
   run;

   proc varmax data=one;
      model y1 y2 / p=2;
      output out=for noprint;
   run;
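As another sketch, combining the options described above, the following statements request eight forecast periods with 99% confidence limits (the option values are arbitrary):

   proc varmax data=one;
      model y1 y2 / p=2;
      output lead=8 alpha=0.01;
   run;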

RESTRICT Statement

RESTRICT restriction, . . . , restriction ;

The RESTRICT statement restricts the specified parameters to the specified values. Only one RESTRICT statement is allowed, but multiple restrictions can be specified in one RESTRICT statement. Each restriction has the form parameter=value, and the restrictions are separated by commas. Parameters are referred to by the following keywords:

- CONST($i$) is the intercept parameter of the $i$th time series $y_{it}$
- AR($l,i,j$) is the autoregressive parameter of the lag $l$ value of the $j$th dependent (endogenous) variable, $y_{j,t-l}$, to the $i$th dependent variable at time $t$, $y_{it}$
- MA($l,i,j$) is the moving-average parameter of the lag $l$ value of the $j$th error process, $\epsilon_{j,t-l}$, to the $i$th dependent variable at time $t$, $y_{it}$
- XL($l,i,j$) is the exogenous parameter of the lag $l$ value of the $j$th exogenous (independent) variable, $x_{j,t-l}$, to the $i$th dependent variable at time $t$, $y_{it}$
- SDUMMY($i,j$) is the $j$th seasonal dummy of the $i$th time series at time $t$, $y_{it}$, where $j = 1, \ldots, (\mathrm{nseason}-1)$ and nseason is based on the NSEASON= option in the MODEL statement
- LTREND($i$) is the linear trend parameter of the current value of the $i$th time series $y_{it}$
- QTREND($i$) is the quadratic trend parameter of the current value of the $i$th time series $y_{it}$

The following keywords are for the fitted GARCH model. The indexes $i$ and $j$ refer to the position of the element in the coefficient matrix.

- GCHC($i,j$) is the constant parameter of the covariance matrix $H_t$; ($i,j$) satisfies $1 \le i = j \le k$ for the CCC representation and $1 \le i \le j \le k$ for the BEKK representation, where $k$ is the number of dependent variables
- ACH($l,i,j$) is the ARCH parameter of the lag $l$ value of $\epsilon_t \epsilon_t'$, where $i,j = 1,\ldots,k$ for the BEKK representation and $i = j = 1,\ldots,k$ for the CCC representation
- GCH($l,i,j$) is the GARCH parameter of the lag $l$ value of the covariance matrix $H_t$, where $i,j = 1,\ldots,k$ for the BEKK representation and $i = j = 1,\ldots,k$ for the CCC representation
- CCC($i,j$) is the constant conditional correlation parameter (CCC representation only); ($i,j$) satisfies $1 \le i < j \le k$

To use the RESTRICT statement, you need to know the form of the model. If the P=, Q=, and XLAG= options are not specified, the RESTRICT statement is not applicable.

Restricted parameter estimates are computed by introducing a Lagrangian parameter for each restriction (Pringle and Rayner 1971). The Lagrangian parameter measures the sensitivity of the sum of squared errors to the restriction. The estimates of these Lagrangian parameters and their significance are printed in the restriction results table.

The following are examples of the RESTRICT statement. The first example fits a bivariate (k=2) VAR(2) model:

   proc varmax data=one;
      model y1 y2 / p=2;
      restrict AR(1,1,2)=0, AR(2,1,2)=0.3;
   run;


The AR(1,1,2) and AR(2,1,2) parameters are fixed as AR(1,1,2)=0 and AR(2,1,2)=0.3, respectively, and the other parameters are to be estimated. The following example shows a bivariate (k=2) VARX(1,1) model with three exogenous variables:

   proc varmax data=two;
      model y1 = x1 x2, y2 = x2 x3 / p=1 xlag=1;
      restrict XL(0,1,1)=-1.2, XL(1,2,3)=0;
   run;

The XL(0,1,1) and XL(1,2,3) parameters are fixed as XL(0,1,1)=–1.2 and XL(1,2,3)=0, respectively, and other parameters are to be estimated.

TEST Statement

TEST restriction, . . . , restriction ;

The TEST statement performs the Wald test for the joint hypothesis specified in the statement. Each restriction has the form parameter=value, and the restrictions are separated by commas. The restrictions are specified in the same manner as in the RESTRICT statement. See the RESTRICT statement for a description of the model parameter naming conventions used by the RESTRICT and TEST statements. Any number of TEST statements can be specified.

To use the TEST statement, you need to know the form of the model. If the P=, Q=, and XLAG= options are not specified, the TEST statement is not applicable. See the section “Granger Causality Test” on page 1944 for the Wald test.

The following is an example of the TEST statement for a bivariate (k=2) VAR(2) model:

   proc varmax data=one;
      model y1 y2 / p=2;
      test AR(1,1,2)=0, AR(2,1,2)=0;
   run;

After estimating the parameters, the TEST statement tests the null hypothesis that AR(1,1,2)=0 and AR(2,1,2)=0.


Details: VARMAX Procedure

Missing Values

The VARMAX procedure currently does not support missing values. The procedure uses the first contiguous group of observations with no missing values for any of the MODEL statement variables. Observations at the beginning of the data set with missing values for any MODEL statement variables are not used or included in the output data set. At the end of the data set, observations can have dependent (endogenous) variables with missing values and independent (exogenous) variables with nonmissing values.

VARMAX Model

The vector autoregressive moving-average model with exogenous variables is called the VARMAX(p,q,s) model. The form of the model can be written as

$$y_t = \sum_{i=1}^{p} \Phi_i y_{t-i} + \sum_{i=0}^{s} \Theta_i^* x_{t-i} + \epsilon_t - \sum_{i=1}^{q} \Theta_i \epsilon_{t-i}$$

where the output variables of interest, $y_t = (y_{1t}, \ldots, y_{kt})'$, can be influenced by other input variables, $x_t = (x_{1t}, \ldots, x_{rt})'$, which are determined outside of the system of interest. The variables $y_t$ are referred to as dependent, response, or endogenous variables, and the variables $x_t$ are referred to as independent, input, predictor, regressor, or exogenous variables. The unobserved noise variables, $\epsilon_t = (\epsilon_{1t}, \ldots, \epsilon_{kt})'$, are a vector white noise process. The VARMAX(p,q,s) model can be written

$$\Phi(B) y_t = \Theta^*(B) x_t + \Theta(B) \epsilon_t$$

where

$$\Phi(B) = I_k - \Phi_1 B - \cdots - \Phi_p B^p$$
$$\Theta^*(B) = \Theta_0^* + \Theta_1^* B + \cdots + \Theta_s^* B^s$$
$$\Theta(B) = I_k - \Theta_1 B - \cdots - \Theta_q B^q$$

are matrix polynomials in the backshift operator $B$, such that $B^i y_t = y_{t-i}$, the $\Phi_i$ and $\Theta_i$ are $k \times k$ matrices, and the $\Theta_i^*$ are $k \times r$ matrices.

The following assumptions are made:

- $E(\epsilon_t) = 0$, $E(\epsilon_t \epsilon_t') = \Sigma$, which is positive-definite, and $E(\epsilon_t \epsilon_s') = 0$ for $t \neq s$.


- For stationarity and invertibility of the VARMAX process, the roots of $|\Phi(z)| = 0$ and $|\Theta(z)| = 0$ are outside the unit circle.
- The exogenous (independent) variables $x_t$ are not correlated with the residuals $\epsilon_t$; that is, $E(x_t \epsilon_t') = 0$. The exogenous variables can be stochastic or nonstochastic. When the exogenous variables are stochastic and their future values are unknown, forecasts of these future values are needed to forecast the future values of the endogenous (dependent) variables. On occasion, future values of the exogenous variables can be assumed to be known because they are deterministic variables. The VARMAX procedure assumes that the exogenous variables are nonstochastic if future values are available in the input data set. Otherwise, the exogenous variables are assumed to be stochastic and their future values are forecasted by assuming that they follow the VARMA(p,q) model, prior to forecasting the endogenous variables, where p and q are the same as in the VARMAX(p,q,s) model.
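As a minimal sketch, a VARMAX(1,1,0) model of this form can be fit with the MODEL statement options described earlier in this chapter (the data set name one is arbitrary):

   proc varmax data=one;
      model y1 y2 = x1 / p=1 q=1 method=ml;
   run;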

State-Space Representation

Another representation of the VARMAX(p,q,s) model is in the form of a state-variable or a state-space model, which consists of a state equation

$$z_t = F z_{t-1} + K x_t + G \epsilon_t$$

and an observation equation

$$y_t = H z_t$$

where the state vector stacks the lagged dependent, exogenous, and noise variables,

$$z_t = (y_t', \ldots, y_{t-p+1}', x_t', \ldots, x_{t-s+1}', \epsilon_t', \ldots, \epsilon_{t-q+1}')'$$

The transition matrix $F$ has a companion form: its first block row contains $(\Phi_1, \ldots, \Phi_p, \Theta_1^*, \ldots, \Theta_s^*, -\Theta_1, \ldots, -\Theta_q)$, and its remaining block rows shift the stacked $y$, $x$, and $\epsilon$ blocks down by one lag. The matrix $K$ consists of $\Theta_0^*$ in the first block row and $I_r$ in the block row that corresponds to $x_t$, with zero blocks elsewhere; the matrix $G$ consists of $I_k$ in the first block row and $I_k$ in the block row that corresponds to $\epsilon_t$, with zero blocks elsewhere. The observation matrix is

$$H = [I_k, 0_{k\times k}, \ldots, 0_{k\times k}, 0_{k\times r}, \ldots, 0_{k\times r}, 0_{k\times k}, \ldots, 0_{k\times k}]$$

On the other hand, it is assumed that $x_t$ follows a VARMA(p,q) model

$$x_t = \sum_{i=1}^{p} A_i x_{t-i} + a_t - \sum_{i=1}^{q} C_i a_{t-i}$$

The model can also be expressed as

$$A(B) x_t = C(B) a_t$$

where $A(B) = I_r - A_1 B - \cdots - A_p B^p$ and $C(B) = I_r - C_1 B - \cdots - C_q B^q$ are matrix polynomials in $B$, and the $A_i$ and $C_i$ are $r \times r$ matrices. Without loss of generality, the AR and MA orders can be taken to be the same as in the VARMAX(p,q,s) model, and $a_t$ and $\epsilon_t$ are independent white noise processes. Under suitable conditions such as stationarity, $x_t$ is represented by an infinite-order moving-average process

$$x_t = A(B)^{-1} C(B) a_t = \Psi^x(B) a_t = \sum_{j=0}^{\infty} \Psi_j^x a_{t-j}$$

where $\Psi^x(B) = A(B)^{-1} C(B) = \sum_{j=0}^{\infty} \Psi_j^x B^j$.

The optimal minimum mean squared error (minimum MSE) $i$-step-ahead forecast of $x_{t+i}$ is

$$x_{t+i|t} = \sum_{j=i}^{\infty} \Psi_j^x a_{t+i-j}$$
$$x_{t+i|t+1} = x_{t+i|t} + \Psi_{i-1}^x a_{t+1}$$

For $i > q$,

$$x_{t+i|t} = \sum_{j=1}^{p} A_j x_{t+i-j|t}$$

The VARMAX(p,q,s) model has an absolutely convergent representation as

$$y_t = \Phi(B)^{-1} \Theta^*(B) x_t + \Phi(B)^{-1} \Theta(B) \epsilon_t = \Psi^*(B) \Psi^x(B) a_t + \Phi(B)^{-1} \Theta(B) \epsilon_t = V(B) a_t + \Psi(B) \epsilon_t$$

or

$$y_t = \sum_{j=0}^{\infty} V_j a_{t-j} + \sum_{j=0}^{\infty} \Psi_j \epsilon_{t-j}$$

where $\Psi(B) = \Phi(B)^{-1} \Theta(B) = \sum_{j=0}^{\infty} \Psi_j B^j$, $\Psi^*(B) = \Phi(B)^{-1} \Theta^*(B)$, and $V(B) = \Psi^*(B) \Psi^x(B) = \sum_{j=0}^{\infty} V_j B^j$.

The optimal (minimum MSE) $i$-step-ahead forecast of $y_{t+i}$ is

$$y_{t+i|t} = \sum_{j=i}^{\infty} V_j a_{t+i-j} + \sum_{j=i}^{\infty} \Psi_j \epsilon_{t+i-j}$$
$$y_{t+i|t+1} = y_{t+i|t} + V_{i-1} a_{t+1} + \Psi_{i-1} \epsilon_{t+1}$$

for $i = 1, \ldots, v$ with $v = \max(p, q+1)$. For $i > q$,

$$\begin{aligned}
y_{t+i|t} &= \sum_{j=1}^{p} \Phi_j y_{t+i-j|t} + \sum_{j=0}^{s} \Theta_j^* x_{t+i-j|t} \\
&= \sum_{j=1}^{p} \Phi_j y_{t+i-j|t} + \Theta_0^* x_{t+i|t} + \sum_{j=1}^{s} \Theta_j^* x_{t+i-j|t} \\
&= \sum_{j=1}^{p} \Phi_j y_{t+i-j|t} + \Theta_0^* \sum_{j=1}^{p} A_j x_{t+i-j|t} + \sum_{j=1}^{s} \Theta_j^* x_{t+i-j|t} \\
&= \sum_{j=1}^{p} \Phi_j y_{t+i-j|t} + \sum_{j=1}^{u} (\Theta_0^* A_j + \Theta_j^*) x_{t+i-j|t}
\end{aligned}$$

where $u = \max(p, s)$. Define $\Pi_j = \Theta_0^* A_j + \Theta_j^*$. For $i = v > q$ with $v = \max(p, q+1)$, you obtain

$$y_{t+v|t} = \sum_{j=1}^{p} \Phi_j y_{t+v-j|t} + \sum_{j=1}^{u} \Pi_j x_{t+v-j|t} \quad \text{for } u \le v$$
$$y_{t+v|t} = \sum_{j=1}^{p} \Phi_j y_{t+v-j|t} + \sum_{j=1}^{v} \Pi_j x_{t+v-j|t} + \sum_{j=v+1}^{u} \Pi_j x_{t+v-j} \quad \text{for } u > v$$

From the preceding relations, a state equation is

$$z_{t+1} = F z_t + K x_t^* + G e_{t+1}$$

and an observation equation is

$$y_t = H z_t$$

where

$$z_t = (y_t', y_{t+1|t}', \ldots, y_{t+v-1|t}', x_{t+1|t}', \ldots, x_{t+v-1|t}')', \quad e_{t+1} = \begin{pmatrix} a_{t+1} \\ \epsilon_{t+1} \end{pmatrix}, \quad x_t^* = \begin{pmatrix} x_{t+v-u} \\ x_{t+v-u+1} \\ \vdots \\ x_{t-1} \end{pmatrix}$$

The transition matrix $F$ is in companion form: the block row that updates $y_{t+v-1|t}$ contains $(\Phi_v, \Phi_{v-1}, \ldots, \Phi_1)$ and $(\Pi_v, \Pi_{v-1}, \ldots, \Pi_1)$, the block row that updates $x_{t+v-1|t}$ contains $(A_v, A_{v-1}, \ldots, A_1)$, and the remaining block rows shift the forecasts forward by one period. The matrix $K$ contains the block row $(\Pi_u, \Pi_{u-1}, \ldots, \Pi_{v+1})$ that applies the observed values in $x_t^*$, with zero blocks elsewhere, and

$$G = \begin{pmatrix} V_0 & I_k \\ V_1 & \Psi_1 \\ \vdots & \vdots \\ V_{v-1} & \Psi_{v-1} \\ I_r & 0_{r\times k} \\ \Psi_1^x & 0_{r\times k} \\ \vdots & \vdots \\ \Psi_{v-1}^x & 0_{r\times k} \end{pmatrix}$$

and

$$H = [I_k, 0_{k\times k}, \ldots, 0_{k\times k}, 0_{k\times r}, \ldots, 0_{k\times r}]$$

Note that the matrix $K$ and the input vector $x_t^*$ are defined only when $u > v$.

Dynamic Simultaneous Equations Modeling

In the econometrics literature, the VARMAX(p,q,s) model is sometimes written in a form that is slightly different than the one shown in the previous section. This alternative form is referred to as a dynamic simultaneous equations model or a dynamic structural equations model.

Since $E(\epsilon_t \epsilon_t') = \Sigma$ is assumed to be positive-definite, there exists a lower triangular matrix $A_0$ with ones on the diagonal such that $A_0 \Sigma A_0' = \Sigma^d$, where $\Sigma^d$ is a diagonal matrix with positive diagonal elements.

$$A_0 y_t = \sum_{i=1}^{p} A_i y_{t-i} + \sum_{i=0}^{s} C_i^* x_{t-i} + C_0 \epsilon_t - \sum_{i=1}^{q} C_i \epsilon_{t-i}$$


where $A_i = A_0 \Phi_i$, $C_i^* = A_0 \Theta_i^*$, $C_0 = A_0$, and $C_i = A_0 \Theta_i$. As an alternative form,

$$A_0 y_t = \sum_{i=1}^{p} A_i y_{t-i} + \sum_{i=0}^{s} C_i^* x_{t-i} + a_t - \sum_{i=1}^{q} C_i a_{t-i}$$

where $A_i = A_0 \Phi_i$, $C_i^* = A_0 \Theta_i^*$, $C_i = A_0 \Theta_i A_0^{-1}$, and $a_t = C_0 \epsilon_t$ has a diagonal covariance matrix $\Sigma^d$. The PRINT=(DYNAMIC) option returns the parameter estimates that result from estimating the model in this form.

A dynamic simultaneous equations model involves a leading (lower triangular) coefficient matrix for $y_t$ at lag 0 or a leading coefficient matrix for $\epsilon_t$ at lag 0. Such a representation of the VARMAX(p,q,s) model can be more useful in certain circumstances than the standard representation. From the linear combination of the dependent variables obtained by $A_0 y_t$, you can easily see the relationship between the dependent variables in the current time.

The following statements provide the dynamic simultaneous equations of the VAR(1) model.

   proc iml;
      sig = {1.0  0.5, 0.5 1.25};
      phi = {1.2 -0.5, 0.6  0.3};
      /* simulate the vector time series */
      call varmasim(y,phi) sigma = sig n = 100 seed = 34657;
      cn = {'y1' 'y2'};
      create simul1 from y[colname=cn];
      append from y;
   quit;

   data simul1;
      set simul1;
      date = intnx( 'year', '01jan1900'd, _n_-1 );
      format date year4.;
   run;

   proc varmax data=simul1;
      model y1 y2 / p=1 noint print=(dynamic);
   run;

This is the same data set and model used in the section “Getting Started: VARMAX Procedure” on page 1858. You can compare the results of the VARMA model form and the dynamic simultaneous equations model form.


Figure 30.25 Dynamic Simultaneous Equations (DYNAMIC Option)

                    The VARMAX Procedure

                 Covariances of Innovations
          Variable          y1          y2
          y1           1.28875     0.00000
          y2           0.00000     1.29578

                             AR
          Lag  Variable        y1          y2
          0    y1         1.00000     0.00000
               y2        -0.30845     1.00000
          1    y1         1.15977    -0.51058
               y2         0.18861     0.54247

                Dynamic Model Parameter Estimates
  Equation  Parameter  Estimate  Standard  t Value  Pr > |t|  Variable
                                    Error
  y1        AR1_1_1     1.15977   0.05508    21.06    0.0001  y1(t-1)
            AR1_1_2    -0.51058   0.07140    -7.15    0.0001  y2(t-1)
  y2        AR0_2_1     0.30845                               y1(t)
            AR1_2_1     0.18861   0.05779     3.26    0.0015  y1(t-1)
            AR1_2_2     0.54247   0.07491     7.24    0.0001  y2(t-1)

In Figure 30.4 in the section “Getting Started: VARMAX Procedure” on page 1858, the covariance of $\epsilon_t$ estimated from the VARMAX model form is

$$\Sigma_\epsilon = \begin{pmatrix} 1.28875 & 0.39751 \\ 0.39751 & 1.41839 \end{pmatrix}$$

Figure 30.25 shows the results from estimating the model as a dynamic simultaneous equations model. By the decomposition of $\Sigma_\epsilon$, you get a diagonal matrix ($\Sigma_a$) and a lower triangular matrix ($A_0$) such that $\Sigma_a = A_0 \Sigma_\epsilon A_0'$, where

$$\Sigma_a = \begin{pmatrix} 1.28875 & 0 \\ 0 & 1.29578 \end{pmatrix} \quad \text{and} \quad A_0 = \begin{pmatrix} 1 & 0 \\ -0.30845 & 1 \end{pmatrix}$$

The lower triangular matrix ($A_0$) is shown on the left side of the simultaneous equations model. The parameter estimates of the equations system are shown on the right side of the two-equation system. The simultaneous equations model is written as

$$\begin{pmatrix} 1 & 0 \\ -0.30845 & 1 \end{pmatrix} y_t = \begin{pmatrix} 1.15977 & -0.51058 \\ 0.18861 & 0.54247 \end{pmatrix} y_{t-1} + a_t$$


The resulting two-equation system can be written as

$$y_{1t} = 1.15977 y_{1,t-1} - 0.51058 y_{2,t-1} + a_{1t}$$
$$y_{2t} = 0.30845 y_{1t} + 0.18861 y_{1,t-1} + 0.54247 y_{2,t-1} + a_{2t}$$

Impulse Response Function

Simple Impulse Response Function (IMPULSE=SIMPLE Option)

The VARMAX(p,q,s) model has a convergent representation

$$y_t = \Psi^*(B) x_t + \Psi(B) \epsilon_t$$

where $\Psi^*(B) = \Phi(B)^{-1} \Theta^*(B) = \sum_{j=0}^{\infty} \Psi_j^* B^j$ and $\Psi(B) = \Phi(B)^{-1} \Theta(B) = \sum_{j=0}^{\infty} \Psi_j B^j$.

The elements of the matrices $\Psi_j$ from the operator $\Psi(B)$, called the impulse response, can be interpreted as the impact that a shock in one variable has on another variable. Let $\psi_{j,in}$ be the $(i,n)$th element of $\Psi_j$ at lag $j$, where $i$ is the index for the impulse variable and $n$ is the index for the response variable (impulse → response). For instance, $\psi_{j,11}$ is an impulse response to $y_{1t}$ → $y_{1t}$, and $\psi_{j,12}$ is an impulse response to $y_{1t}$ → $y_{2t}$.

Accumulated Impulse Response Function (IMPULSE=ACCUM Option)

The accumulated impulse response function is the cumulative sum of the impulse response function, $\Psi_l^a = \sum_{j=0}^{l} \Psi_j$.

Orthogonalized Impulse Response Function (IMPULSE=ORTH Option)

The MA representation of a VARMA(p,q) model with a standardized white noise innovation process offers another way to interpret a VARMA(p,q) model. Since $\Sigma$ is positive-definite, there is a lower triangular matrix $P$ such that $\Sigma = PP'$. The alternate MA representation of a VARMA(p,q) model is written as

$$y_t = \Psi^o(B) u_t$$

where $\Psi^o(B) = \sum_{j=0}^{\infty} \Psi_j^o B^j$, $\Psi_j^o = \Psi_j P$, and $u_t = P^{-1} \epsilon_t$.

The elements of the matrices $\Psi_j^o$, called the orthogonal impulse response, can be interpreted as the effects of the components of the standardized shock process $u_t$ on the process $y_t$ at lag $j$.


Impulse Response of Transfer Function (IMPULSX=SIMPLE Option)

The coefficient matrix $\Psi_j^*$ from the transfer function operator $\Psi^*(B)$ can be interpreted as the effects that changes in the exogenous variables $x_t$ have on the output variable $y_t$ at lag $j$; it is called an impulse response matrix in the transfer function.

Impulse Response of Transfer Function (IMPULSX=ACCUM Option)

The accumulated impulse response in the transfer function is the cumulative sum of the impulse response in the transfer function, $\Psi_l^{*a} = \sum_{j=0}^{l} \Psi_j^*$.

The asymptotic distributions of the impulse functions can be seen in the section “VAR and VARX Modeling” on page 1941.

The following statements provide the impulse response and the accumulated impulse response in the transfer function for a VARX(1,0) model:

   proc varmax data=grunfeld plot=impulse;
      model y1-y3 = x1 x2 / p=1 lagmax=5
                            printform=univariate
                            print=(impulsx=(all) estimates);
   run;

In Figure 30.26, the variables x1 and x2 are impulses and the variables y1, y2, and y3 are responses. You can read the table matching the pairs of impulse → response such as x1 → y1, x1 → y2, x1 → y3, x2 → y1, x2 → y2, and x2 → y3. In the pair of x1 → y1, you can see the long-run responses of y1 to an impulse in x1 (the values are 1.69281, 0.35399, 0.09090, and so on for lag 0, lag 1, lag 2, and so on, respectively).


Figure 30.26 Impulse Response in Transfer Function (IMPULSX= Option)

                  The VARMAX Procedure

     Simple Impulse Response of Transfer Function by Variable
     Variable
     Response\Impulse   Lag        x1          x2
     y1                   0    1.69281    -0.00859
                          1    0.35399     0.01727
                          2    0.09090     0.00714
                          3    0.05136     0.00214
                          4    0.04717     0.00072
                          5    0.04620     0.00040
     y2                   0   -6.09850     2.57980
                          1   -5.15484     0.45445
                          2   -3.04168     0.04391
                          3   -2.23797    -0.01376
                          4   -1.98183    -0.01647
                          5   -1.87415    -0.01453
     y3                   0   -0.02317    -0.01274
                          1    1.57476    -0.01435
                          2    1.80231     0.00398
                          3    1.77024     0.01062
                          4    1.70435     0.01197
                          5    1.63913     0.01187

Figure 30.27 shows the responses of y1, y2, and y3 to a forecast error impulse in x1.


Figure 30.27 Plot of Impulse Response in Transfer Function

Figure 30.28 shows the accumulated impulse response in the transfer function.


Figure 30.28 Accumulated Impulse Response in Transfer Function (IMPULSX= Option)

   Accumulated Impulse Response of Transfer Function by Variable
     Variable
     Response\Impulse   Lag        x1          x2
     y1                   0    1.69281    -0.00859
                          1    2.04680     0.00868
                          2    2.13770     0.01582
                          3    2.18906     0.01796
                          4    2.23623     0.01867
                          5    2.28243     0.01907
     y2                   0   -6.09850     2.57980
                          1  -11.25334     3.03425
                          2  -14.29502     3.07816
                          3  -16.53299     3.06440
                          4  -18.51482     3.04793
                          5  -20.38897     3.03340
     y3                   0   -0.02317    -0.01274
                          1    1.55159    -0.02709
                          2    3.35390    -0.02311
                          3    5.12414    -0.01249
                          4    6.82848    -0.00052
                          5    8.46762     0.01135

Figure 30.29 shows the accumulated responses of y1, y2, and y3 to a forecast error impulse in x1.


Figure 30.29 Plot of Accumulated Impulse Response in Transfer Function

The following statements provide the impulse response function, the accumulated impulse response function, and the orthogonalized impulse response function with their standard errors for a VAR(1) model. Parts of the VARMAX procedure output are shown in Figure 30.30, Figure 30.32, and Figure 30.34.

   proc varmax data=simul1 plot=impulse;
      model y1 y2 / p=1 noint lagmax=5
                    print=(impulse=(all)) printform=univariate;
   run;

Figure 30.30 is the output in a univariate format associated with the PRINT=(IMPULSE=) option for the impulse response function. The keyword STD stands for the standard errors of the elements. The matrix at lag 0 is not printed, since it is the identity. In Figure 30.30, the variables y1 and y2 of the first row are impulses, and the variables y1 and y2 of the first column are responses. You can read the table matching the impulse → response pairs, such as y1 → y1, y1 → y2, y2 → y1, and y2 → y2. For example, in the pair of y1 → y1 at lag 3, the response is 0.8055. This represents the impact on y1 of a one-unit change in y1 after 3 periods. As the lag gets higher, you can see the long-run responses of y1 to an impulse in itself.


Figure 30.30 Impulse Response Function (IMPULSE= Option)

                  The VARMAX Procedure

          Simple Impulse Response by Variable
     Variable
     Response\Impulse   Lag          y1          y2
     y1                   1     1.15977    -0.51058
                          STD   0.05508     0.05898
                          2     1.06612    -0.78872
                          STD   0.10450     0.10702
                          3     0.80555    -0.84798
                          STD   0.14522     0.14121
                          4     0.47097    -0.73776
                          STD   0.17191     0.15864
                          5     0.14315    -0.52450
                          STD   0.18214     0.16115
     y2                   1     0.54634     0.38499
                          STD   0.05779     0.06188
                          2     0.84396    -0.13073
                          STD   0.08481     0.08556
                          3     0.90738    -0.48124
                          STD   0.10307     0.09865
                          4     0.78943    -0.64856
                          STD   0.12318     0.11661
                          5     0.56123    -0.65275
                          STD   0.14236     0.13482

Figure 30.31 shows the responses of y1 and y2 to a forecast error impulse in y1 with two standard errors.


Figure 30.31 Plot of Impulse Response

Figure 30.32 is the output in a univariate format associated with the PRINT=(IMPULSE=) option for the accumulated impulse response function. The matrix at lag 0 is not printed, since it is the identity.


Figure 30.32 Accumulated Impulse Response Function (IMPULSE= Option)

        Accumulated Impulse Response by Variable
     Variable
     Response\Impulse   Lag          y1          y2
     y1                   1     2.15977    -0.51058
                          STD   0.05508     0.05898
                          2     3.22589    -1.29929
                          STD   0.21684     0.22776
                          3     4.03144    -2.14728
                          STD   0.52217     0.53649
                          4     4.50241    -2.88504
                          STD   0.96922     0.97088
                          5     4.64556    -3.40953
                          STD   1.51137     1.47122
     y2                   1     0.54634     1.38499
                          STD   0.05779     0.06188
                          2     1.39030     1.25426
                          STD   0.17614     0.18392
                          3     2.29768     0.77302
                          STD   0.36166     0.36874
                          4     3.08711     0.12447
                          STD   0.65129     0.65333
                          5     3.64834    -0.52829
                          STD   1.07510     1.06309

Figure 30.33 shows the accumulated responses of y1 and y2 to a forecast error impulse in y1 with two standard errors.


Figure 30.33 Plot of Accumulated Impulse Response

Figure 30.34 is the output in a univariate format associated with the PRINT=(IMPULSE=) option for the orthogonalized impulse response function. The two right-hand columns, y1 and y2, represent the y1_innovation and y2_innovation variables. These are the impulse variables. The left-hand column contains the response variables, y1 and y2. You can read the table by matching the impulse → response pairs such as y1_innovation → y1, y1_innovation → y2, y2_innovation → y1, and y2_innovation → y2.


Figure 30.34 Orthogonalized Impulse Response Function (IMPULSE= Option)

       Orthogonalized Impulse Response by Variable
     Variable
     Response\Impulse   Lag          y1          y2
     y1                   0     1.13523     0.00000
                          STD   0.08068     0.00000
                          1     1.13783    -0.58120
                          STD   0.10666     0.14110
                          2     0.93412    -0.89782
                          STD   0.13113     0.16776
                          3     0.61756    -0.96528
                          STD   0.15348     0.18595
                          4     0.27633    -0.83981
                          STD   0.16940     0.19230
                          5    -0.02115    -0.59705
                          STD   0.17432     0.18830
     y2                   0     0.35016     1.13832
                          STD   0.11676     0.08855
                          1     0.75503     0.43824
                          STD   0.06949     0.10937
                          2     0.91231    -0.14881
                          STD   0.10553     0.13565
                          3     0.86158    -0.54780
                          STD   0.12266     0.14825
                          4     0.66909    -0.73827
                          STD   0.13305     0.15846
                          5     0.40856    -0.74304
                          STD   0.14189     0.16765

In Figure 30.4, there is a positive correlation between $\epsilon_{1t}$ and $\epsilon_{2t}$. Therefore, a shock in y1 can be accompanied by a shock in y2 in the same period. For example, in the pair of y1_innovation → y2, you can see the long-run responses of y2 to an impulse in y1_innovation. Figure 30.35 shows the orthogonalized responses of y1 and y2 to a forecast error impulse in y1 with two standard errors.


Figure 30.35 Plot of Orthogonalized Impulse Response

Forecasting

The optimal (minimum MSE) $l$-step-ahead forecast of $y_{t+l}$ is

$$y_{t+l|t} = \sum_{j=1}^{p} \Phi_j y_{t+l-j|t} + \sum_{j=0}^{s} \Theta_j^* x_{t+l-j|t} - \sum_{j=l}^{q} \Theta_j \epsilon_{t+l-j}, \quad l \le q$$
$$y_{t+l|t} = \sum_{j=1}^{p} \Phi_j y_{t+l-j|t} + \sum_{j=0}^{s} \Theta_j^* x_{t+l-j|t}, \quad l > q$$

with $y_{t+l-j|t} = y_{t+l-j}$ and $x_{t+l-j|t} = x_{t+l-j}$ for $l \le j$. For the forecasts $x_{t+l-j|t}$, see the section “State-Space Representation” on page 1913.


Covariance Matrices of Prediction Errors without Exogenous (Independent) Variables

Under the stationarity assumption, the optimal (minimum MSE) $l$-step-ahead forecast of $y_{t+l}$ has an infinite moving-average form, $y_{t+l|t} = \sum_{j=l}^{\infty} \Psi_j \epsilon_{t+l-j}$. The prediction error of the optimal $l$-step-ahead forecast is $e_{t+l|t} = y_{t+l} - y_{t+l|t} = \sum_{j=0}^{l-1} \Psi_j \epsilon_{t+l-j}$, with zero mean and covariance matrix

$$\Sigma(l) = \mathrm{Cov}(e_{t+l|t}) = \sum_{j=0}^{l-1} \Psi_j \Sigma \Psi_j' = \sum_{j=0}^{l-1} \Psi_j^o \Psi_j^{o\prime}$$

where $\Psi_j^o = \Psi_j P$ with a lower triangular matrix $P$ such that $\Sigma = PP'$. Under the assumption of normality of the $\epsilon_t$, the $l$-step-ahead prediction error $e_{t+l|t}$ is also normally distributed as multivariate $N(0, \Sigma(l))$. Hence, it follows that the diagonal elements $\sigma_{ii}^2(l)$ of $\Sigma(l)$ can be used, together with the point forecasts $y_{i,t+l|t}$, to construct $l$-step-ahead prediction intervals of the future values of the component series, $y_{i,t+l}$.

The following statements use the COVPE option to compute the covariance matrices of the prediction errors for a VAR(1) model. Parts of the VARMAX procedure output are shown in Figure 30.36 and Figure 30.37.

   proc varmax data=simul1;
      model y1 y2 / p=1 noint lagmax=5
                    printform=both
                    print=(decompose(5) impulse=(all) covpe(5));
   run;

Figure 30.36 is the output in a matrix format associated with the COVPE option for the prediction error covariance matrices.

Figure 30.36 Covariances of Prediction Errors (COVPE Option)

                  The VARMAX Procedure

              Prediction Error Covariances
          Lead  Variable        y1          y2
          1     y1         1.28875     0.39751
                y2         0.39751     1.41839
          2     y1         2.92119     1.00189
                y2         1.00189     2.18051
          3     y1         4.59984     1.98771
                y2         1.98771     3.03498
          4     y1         5.91299     3.04856
                y2         3.04856     4.07738
          5     y1         6.69463     3.85346
                y2         3.85346     5.07010

Figure 30.37 is the output in a univariate format associated with the COVPE option for the prediction error covariances. This printing format shows the prediction error covariances of each variable more clearly.


Figure 30.37 Covariances of Prediction Errors

       Prediction Error Covariances by Variable
          Variable  Lead        y1          y2
          y1        1      1.28875     0.39751
                    2      2.92119     1.00189
                    3      4.59984     1.98771
                    4      5.91299     3.04856
                    5      6.69463     3.85346
          y2        1      0.39751     1.41839
                    2      1.00189     2.18051
                    3      1.98771     3.03498
                    4      3.04856     4.07738
                    5      3.85346     5.07010

Covariance Matrices of Prediction Errors in the Presence of Exogenous (Independent) Variables

Exogenous variables can be both stochastic and nonstochastic (deterministic) variables. Considering the forecasts in the VARMAX(p,q,s) model, there are two cases.

When exogenous (independent) variables are stochastic (future values not specified): As defined in the section “State-Space Representation” on page 1913, $y_{t+l|t}$ has the representation

$$y_{t+l|t} = \sum_{j=l}^{\infty} V_j a_{t+l-j} + \sum_{j=l}^{\infty} \Psi_j \epsilon_{t+l-j}$$

and hence

$$e_{t+l|t} = \sum_{j=0}^{l-1} V_j a_{t+l-j} + \sum_{j=0}^{l-1} \Psi_j \epsilon_{t+l-j}$$

Therefore, the covariance matrix of the $l$-step-ahead prediction error is given as

$$\Sigma(l) = \mathrm{Cov}(e_{t+l|t}) = \sum_{j=0}^{l-1} V_j \Sigma_a V_j' + \sum_{j=0}^{l-1} \Psi_j \Sigma_\epsilon \Psi_j'$$

where $\Sigma_a$ is the covariance of the white noise series $a_t$, and $a_t$ is the white noise series for the VARMA(p,q) model of the exogenous (independent) variables, which is assumed not to be correlated with $\epsilon_t$ or its lags.

When future exogenous (independent) variables are specified:


The optimal forecast $y_{t+l|t}$ of $y_t$ conditioned on the past information and also on known future values $x_{t+1}, \ldots, x_{t+l}$ can be represented as

$$y_{t+l|t} = \sum_{j=0}^{\infty} \Psi_j^* x_{t+l-j} + \sum_{j=l}^{\infty} \Psi_j \epsilon_{t+l-j}$$

and the forecast error is

$$e_{t+l|t} = \sum_{j=0}^{l-1} \Psi_j \epsilon_{t+l-j}$$

Thus, the covariance matrix of the $l$-step-ahead prediction error is given as

$$\Sigma(l) = \mathrm{Cov}(e_{t+l|t}) = \sum_{j=0}^{l-1} \Psi_j \Sigma_\epsilon \Psi_j'$$

Decomposition of Prediction Error Covariances

In the relation $\Sigma(l) = \sum_{j=0}^{l-1} \Psi_j^o \Psi_j^{o\prime}$, the diagonal elements can be interpreted as providing a decomposition of the $l$-step-ahead prediction error covariance $\sigma_{ii}^2(l)$ for each component series $y_{it}$ into contributions from the components of the standardized innovations $\epsilon_t$. If you denote the $(i,n)$th element of $\Psi_j^o$ by $\psi_{j,in}$, the MSE of $y_{i,t+l|t}$ is

$$\mathrm{MSE}(y_{i,t+l|t}) = E(y_{i,t+l} - y_{i,t+l|t})^2 = \sum_{j=0}^{l-1} \sum_{n=1}^{k} \psi_{j,in}^2$$

Note that $\sum_{j=0}^{l-1} \psi_{j,in}^2$ is interpreted as the contribution of innovations in variable $n$ to the prediction error covariance of the $l$-step-ahead forecast of variable $i$. The proportion, $\omega_{l,in}$, of the $l$-step-ahead forecast error covariance of variable $i$ accounted for by the innovations in variable $n$ is

$$\omega_{l,in} = \sum_{j=0}^{l-1} \psi_{j,in}^2 \Big/ \mathrm{MSE}(y_{i,t+l|t})$$

The following statements use the DECOMPOSE option to compute the decomposition of prediction error covariances and their proportions for a VAR(1) model:

   proc varmax data=simul1;
      model y1 y2 / p=1 noint print=(decompose(15))
                    printform=univariate;
   run;

The proportions of decomposition of prediction error covariances of two variables are given in Figure 30.38. The output explains that about 91.356% of the one-step-ahead prediction error covariances of the variable y2t is accounted for by its own innovations and about 8.644% is accounted for by y1t innovations.


Figure 30.38 Decomposition of Prediction Error Covariances (DECOMPOSE Option)

     Proportions of Prediction Error Covariances by Variable
          Variable  Lead        y1          y2
          y1        1      1.00000     0.00000
                    2      0.88436     0.11564
                    3      0.75132     0.24868
                    4      0.64897     0.35103
                    5      0.58460     0.41540
          y2        1      0.08644     0.91356
                    2      0.31767     0.68233
                    3      0.50247     0.49753
                    4      0.55607     0.44393
                    5      0.53549     0.46451

Forecasting of the Centered Series

If the CENTER option is specified, the sample mean vector is added to the forecast.
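For example, a sketch that forecasts a model fit to the centered series (the option values are arbitrary):

   proc varmax data=simul1;
      model y1 y2 / p=1 center;
      output lead=5;
   run;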

Forecasting of the Differenced Series

If dependent (endogenous) variables are differenced, the final forecasts and their prediction error covariances are produced by integrating those of the differenced series. However, if the PRIOR option is specified, the forecasts and their prediction error variances of the differenced series are produced.

Let $z_t$ be the original series with some appended zero values that correspond to the unobserved past observations. Let $\Delta(B)$ be the $k \times k$ matrix polynomial in the backshift operator that corresponds to the differencing specified by the MODEL statement. The off-diagonal elements of $\Delta_i$ are zero, and the diagonal elements can be different. Then $y_t = \Delta(B) z_t$. This gives the relationship

$$z_t = \Delta^{-1}(B) y_t = \sum_{j=0}^{\infty} \Lambda_j y_{t-j}$$

where $\Delta^{-1}(B) = \sum_{j=0}^{\infty} \Lambda_j B^j$ and $\Lambda_0 = I_k$.
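As a sketch, differencing of the dependent series is requested with the DIF= option in the MODEL statement (the lag values are arbitrary):

   proc varmax data=simul1;
      model y1 y2 / p=1 dif=(y1(1) y2(1));
      output lead=5;
   run;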

The $l$-step-ahead prediction of $z_{t+l}$ is

$$z_{t+l|t} = \sum_{j=0}^{l-1} \Lambda_j y_{t+l-j|t} + \sum_{j=l}^{\infty} \Lambda_j y_{t+l-j}$$


The $l$-step-ahead prediction error of $z_{t+l}$ is

$$\sum_{j=0}^{l-1} \Lambda_j \left( y_{t+l-j} - y_{t+l-j|t} \right) = \sum_{j=0}^{l-1} \left( \sum_{u=0}^{j} \Lambda_u \Psi_{j-u} \right) \epsilon_{t+l-j}$$

Letting $\Sigma_z(0) = 0$, the covariance matrix of the $l$-step-ahead prediction error of $z_{t+l}$, $\Sigma_z(l)$, is

$$\Sigma_z(l) = \sum_{j=0}^{l-1} \left( \sum_{u=0}^{j} \Lambda_u \Psi_{j-u} \right) \Sigma_\epsilon \left( \sum_{u=0}^{j} \Lambda_u \Psi_{j-u} \right)' = \Sigma_z(l-1) + \left( \sum_{j=0}^{l-1} \Lambda_j \Psi_{l-1-j} \right) \Sigma_\epsilon \left( \sum_{j=0}^{l-1} \Lambda_j \Psi_{l-1-j} \right)'$$

If there are stochastic exogenous (independent) variables, the covariance matrix of the $l$-step-ahead prediction error of $z_{t+l}$, $\Sigma_z(l)$, is

$$\Sigma_z(l) = \Sigma_z(l-1) + \left( \sum_{j=0}^{l-1} \Lambda_j \Psi_{l-1-j} \right) \Sigma_\epsilon \left( \sum_{j=0}^{l-1} \Lambda_j \Psi_{l-1-j} \right)' + \left( \sum_{j=0}^{l-1} \Lambda_j V_{l-1-j} \right) \Sigma_a \left( \sum_{j=0}^{l-1} \Lambda_j V_{l-1-j} \right)'$$

Tentative Order Selection

Sample Cross-Covariance and Cross-Correlation Matrices

Given a stationary multivariate time series $y_t$, cross-covariance matrices are

$$\Gamma(l) = E[(y_t - \mu)(y_{t+l} - \mu)']$$

where $\mu = E(y_t)$, and cross-correlation matrices are

$$\rho(l) = D^{-1} \Gamma(l) D^{-1}$$

where $D$ is a diagonal matrix with the standard deviations of the components of $y_t$ on the diagonal.

The sample cross-covariance matrix at lag $l$, denoted as $C(l)$, is computed as

$$\hat\Gamma(l) = C(l) = \frac{1}{T} \sum_{t=1}^{T-l} \tilde{y}_t \tilde{y}_{t+l}'$$

where $\tilde{y}_t$ is the centered data and $T$ is the number of nonmissing observations. Thus, $\hat\Gamma(l)$ has $(i,j)$th element $\hat\gamma_{ij}(l) = c_{ij}(l)$. The sample cross-correlation matrix at lag $l$ is computed as

$$\hat\rho_{ij}(l) = c_{ij}(l) / [c_{ii}(0) c_{jj}(0)]^{1/2}, \quad i,j = 1, \ldots, k$$

The following statements use the CORRY option to compute the sample cross-correlation matrices and their summary indicator plots in terms of +, -, and ., where + indicates significant positive cross-correlations, - indicates significant negative cross-correlations, and . indicates insignificant cross-correlations:

   proc varmax data=simul1;
      model y1 y2 / p=1 noint lagmax=3 print=(corry)
                    printform=univariate;
   run;

Figure 30.39 shows the sample cross-correlation matrices of $y_{1t}$ and $y_{2t}$. As shown, the sample autocorrelation functions for each variable decay quickly, but are significant with respect to two standard errors.

Figure 30.39 Cross-Correlations (CORRY Option)

                  The VARMAX Procedure

     Cross Correlations of Dependent Series by Variable
          Variable  Lag        y1          y2
          y1        0      1.00000     0.67041
                    1      0.83143     0.84330
                    2      0.56094     0.81972
                    3      0.26629     0.66154
          y2        0      0.67041     1.00000
                    1      0.29707     0.77132
                    2     -0.00936     0.48658
                    3     -0.22058     0.22014

          Schematic Representation of Cross Correlations
          Variable/Lag   0   1   2   3
          y1             ++  ++  ++  ++
          y2             ++  ++  .+  -+

          + is > 2*std error,  - is < -2*std error,  . is between


Partial Autoregressive Matrices

For each $m = 1, 2, \ldots, p$ you can define a sequence of matrices $\Phi_{mm}$, which is called the partial autoregression matrix of lag $m$, as the solution for $\Phi_{mm}$ to the Yule-Walker equations of order $m$,

$$\Gamma(l) = \sum_{i=1}^{m} \Gamma(l-i) \Phi_{im}', \quad l = 1, 2, \ldots, m$$

The sequence of the partial autoregression matrices $\Phi_{mm}$ of order $m$ has the characteristic property that if the process follows the AR(p), then $\Phi_{pp} = \Phi_p$ and $\Phi_{mm} = 0$ for $m > p$. Hence, the matrices $\Phi_{mm}$ have the cutoff property for a VAR(p) model, and so they can be useful in the identification of the order of a pure VAR model. The following statements use the PARCOEF option to compute the partial autoregression matrices:

   proc varmax data=simul1;
      model y1 y2 / p=1 noint lagmax=3
                    printform=univariate
                    print=(corry parcoef pcorr
                           pcancorr roots);
   run;

Figure 30.40 shows that the model can be obtained by an AR order $m = 1$ since the partial autoregression matrices are insignificant after lag 1 with respect to two standard errors. The matrix for lag 1 is the same as the Yule-Walker autoregressive matrix.

Figure 30.40 Partial Autoregression Matrices (PARCOEF Option)

                  The VARMAX Procedure

                 Partial Autoregression
          Lag  Variable        y1          y2
          1    y1         1.14844    -0.50954
               y2         0.54985     0.37409
          2    y1        -0.00724     0.05138
               y2         0.02409     0.05909
          3    y1        -0.02578     0.03885
               y2        -0.03720     0.10149

       Schematic Representation of Partial Autoregression
          Variable/Lag   1   2   3
          y1             +-  ..  ..
          y2             ++  ..  ..

          + is > 2*std error,  - is < -2*std error,  . is between


Partial Correlation Matrices

Define the forward autoregression

$$y_t = \sum_{i=1}^{m-1} \Phi_{i,m-1} y_{t-i} + u_{m,t}$$

and the backward autoregression

$$y_{t-m} = \sum_{i=1}^{m-1} \Phi_{i,m-1}^* y_{t-m+i} + u_{m,t-m}^*$$

The matrices $P(m)$ defined by Ansley and Newbold (1979) are given by

$$P(m) = \{\Sigma_{m-1}^*\}^{1/2} \Phi_{mm}' \{\Sigma_{m-1}\}^{-1/2}$$

where

$$\Sigma_{m-1} = \mathrm{Cov}(u_{m,t}) = \Gamma(0) - \sum_{i=1}^{m-1} \Gamma(-i) \Phi_{i,m-1}'$$

and

$$\Sigma_{m-1}^* = \mathrm{Cov}(u_{m,t-m}^*) = \Gamma(0) - \sum_{i=1}^{m-1} \Gamma(m-i) \Phi_{m-i,m-1}^{*\prime}$$

$P(m)$ are the partial cross-correlation matrices at lag $m$ between the elements of $y_t$ and $y_{t-m}$, given $y_{t-1}, \ldots, y_{t-m+1}$. The matrices $P(m)$ have the cutoff property for a VAR(p) model, and so they can be useful in the identification of the order of a pure VAR structure. The following statements use the PCORR option to compute the partial cross-correlation matrices:

   proc varmax data=simul1;
      model y1 y2 / p=1 noint lagmax=3 print=(pcorr)
                    printform=univariate;
   run;

The partial cross-correlation matrices in Figure 30.41 are insignificant after lag 1 with respect to two standard errors. This indicates that an AR order of $m = 1$ can be an appropriate choice.


Figure 30.41 Partial Correlations (PCORR Option)

                  The VARMAX Procedure

          Partial Cross Correlations by Variable
          Variable  Lag        y1          y2
          y1        1      0.80348     0.42672
                    2      0.00276     0.03978
                    3     -0.01091     0.00032
          y2        1     -0.30946     0.71906
                    2      0.04676     0.07045
                    3      0.01993     0.10676

     Schematic Representation of Partial Cross Correlations
          Variable/Lag   1   2   3
          y1             ++  ..  ..
          y2             -+  ..  ..

          + is > 2*std error,  - is < -2*std error,  . is between

Partial Canonical Correlation Matrices

The partial canonical correlations at lag $m$ between the vectors $y_t$ and $y_{t-m}$, given $y_{t-1}, \ldots, y_{t-m+1}$, are $1 \ge \rho_1(m) \ge \rho_2(m) \ge \cdots \ge \rho_k(m)$. The partial canonical correlations are the canonical correlations between the residual series $u_{m,t}$ and $u_{m,t-m}^*$, where $u_{m,t}$ and $u_{m,t-m}^*$ are defined in the previous section. Thus, the squared partial canonical correlations $\rho_i^2(m)$ are the eigenvalues of the matrix

$$\{\mathrm{Cov}(u_{m,t})\}^{-1} E(u_{m,t} u_{m,t-m}^{*\prime}) \{\mathrm{Cov}(u_{m,t-m}^*)\}^{-1} E(u_{m,t-m}^* u_{m,t}') = \Phi_{mm}^{*\prime} \Phi_{mm}'$$

It follows that the test statistic to test for $\Phi_m = 0$ in the VAR model of order $m > p$ is approximately

$$(T - m)\, \mathrm{tr}\{\Phi_{mm}^{*\prime} \Phi_{mm}'\} \approx (T - m) \sum_{i=1}^{k} \rho_i^2(m)$$

and has an asymptotic chi-square distribution with $k^2$ degrees of freedom for $m > p$. The following statements use the PCANCORR option to compute the partial canonical correlations:

   proc varmax data=simul1;
      model y1 y2 / p=1 noint lagmax=3 print=(pcancorr);
   run;

Figure 30.42 shows that the partial canonical correlations $\rho_i(m)$ between $y_t$ and $y_{t-m}$ are {0.918, 0.773}, {0.092, 0.018}, and {0.109, 0.011} for lags $m = 1$ to 3. After lag $m = 1$, the partial canonical correlations are insignificant with respect to the 0.05 significance level, indicating that an AR order of $m = 1$ can be an appropriate choice.

Figure 30.42 Partial Canonical Correlations (PCANCORR Option)

                  The VARMAX Procedure

                Partial Canonical Correlations
     Lag  Correlation1  Correlation2  DF  Chi-Square  Pr > ChiSq
     1         0.91783       0.77335   4      142.61      <.0001
     2         0.09171       0.01816   4        0.86      0.9302
     3         0.10861       0.01078   4        1.16      0.8846

                Model Parameter Estimates

  Equation  Parameter  Estimate  Standard  t Value  Pr > |t|  Variable
                                    Error
  y1        AR1_1_1     1.01809   0.10256     9.93    0.0001  y1(t-1)
            AR1_1_2    -0.38651   0.09643    -4.01    0.0001  y2(t-1)
            MA1_1_1     0.32291   0.14529     2.22    0.0285  e1(t-1)
            MA1_1_2    -0.02153   0.14199    -0.15    0.8798  e2(t-1)
  y2        AR1_2_1     0.39147   0.10062     3.89    0.0002  y1(t-1)
            AR1_2_2     0.55290   0.08421     6.57    0.0001  y2(t-1)
            MA1_2_1    -0.16566   0.15699    -1.06    0.2939  e1(t-1)
            MA1_2_2     0.58612   0.14114     4.15    0.0001  e2(t-1)


The fitted VARMA(1,1) model with estimated standard errors in parentheses is given as

$$y_t = \begin{pmatrix} 1.01809 & -0.38651 \\ (0.10256) & (0.09644) \\ 0.39147 & 0.55290 \\ (0.10062) & (0.08421) \end{pmatrix} y_{t-1} + \epsilon_t - \begin{pmatrix} 0.32291 & -0.02153 \\ (0.14530) & (0.14199) \\ -0.16566 & 0.58613 \\ (0.15699) & (0.14115) \end{pmatrix} \epsilon_{t-1}$$

VARMAX Modeling

A VARMAX(p,q,s) process is written as

$$y_t = \delta + \sum_{i=1}^{p} \Phi_i y_{t-i} + \sum_{i=0}^{s} \Theta_i^* x_{t-i} + \epsilon_t - \sum_{i=1}^{q} \Theta_i \epsilon_{t-i}$$

or

$$\Phi(B) y_t = \delta + \Theta^*(B) x_t + \Theta(B) \epsilon_t$$

where $\Phi(B) = I_k - \sum_{i=1}^{p} \Phi_i B^i$, $\Theta^*(B) = \Theta_0^* + \Theta_1^* B + \cdots + \Theta_s^* B^s$, and $\Theta(B) = I_k - \sum_{i=1}^{q} \Theta_i B^i$.

The dimension of the state-space vector of the Kalman filtering method for the parameter estimation of the VARMAX(p,q,s) model is large, which takes time and memory for computing. For convenience, the parameter estimation of the VARMAX(p,q,s) model uses the two-stage estimation method, which first estimates the deterministic terms and exogenous parameters, and then maximizes the log-likelihood function of a VARMA(p,q) model. Some examples of VARMAX modeling are as follows:

   model y1 y2 = x1 / q=1;
   nloptions tech=qn;

   model y1 y2 = x1 / p=1 q=1 xlag=1 nocurrentx;
   nloptions tech=qn;

Model Diagnostic Checks

Multivariate Model Diagnostic Checks

- Information Criterion: After fitting some candidate models to the data, various model selection criteria (normalized by $T$) can be used to choose the appropriate model. The following list includes the Akaike information criterion (AIC), the corrected Akaike information criterion (AICC), the final prediction error criterion (FPE), the Hannan-Quinn criterion (HQC), and the Schwarz Bayesian criterion (SBC, also referred to as BIC):

$$\mathrm{AIC} = \log(|\tilde\Sigma|) + 2r/T$$
$$\mathrm{AICC} = \log(|\tilde\Sigma|) + 2r/(T - r/k)$$
$$\mathrm{FPE} = \left( \frac{T + r/k}{T - r/k} \right)^k |\tilde\Sigma|$$
$$\mathrm{HQC} = \log(|\tilde\Sigma|) + 2r \log(\log(T))/T$$
$$\mathrm{SBC} = \log(|\tilde\Sigma|) + r \log(T)/T$$

where $r$ denotes the number of parameters estimated, $k$ is the number of dependent variables, $T$ is the number of observations used to estimate the model, and $\tilde\Sigma$ is the maximum likelihood estimate of $\Sigma$. When comparing models, choose the model with the smallest criterion values. An example of the output was displayed in Figure 30.4.

- Portmanteau $Q_s$ statistic: The Portmanteau $Q_s$ statistic is used to test whether correlation remains in the model residuals. The null hypothesis is that the residuals are uncorrelated. Let $C_\epsilon(l)$ be the residual cross-covariance matrices and $\hat\rho_\epsilon(l)$ be the residual cross-correlation matrices, defined as

$$C_\epsilon(l) = T^{-1} \sum_{t=1}^{T-l} \epsilon_t \epsilon_{t+l}'$$

and

$$\hat\rho_\epsilon(l) = \hat{V}^{-1/2} C_\epsilon(l) \hat{V}^{-1/2} \quad \text{and} \quad \hat\rho_\epsilon(-l) = \hat\rho_\epsilon(l)'$$

where $\hat{V} = \mathrm{Diag}(\hat\sigma_{11}^2, \ldots, \hat\sigma_{kk}^2)$ and $\hat\sigma_{ii}^2$ are the diagonal elements of $\tilde\Sigma$. The multivariate portmanteau test defined in Hosking (1980) is

$$Q_s = T^2 \sum_{l=1}^{s} (T-l)^{-1} \mathrm{tr}\{\hat\rho_\epsilon(l) \Sigma^{-1} \hat\rho_\epsilon(-l) \Sigma^{-1}\}$$

The statistic $Q_s$ has approximately the chi-square distribution with $k^2(s - p - q)$ degrees of freedom. An example of the output is displayed in Figure 30.7.
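As a sketch, assuming that the LAGMAX= option sets the number of lags $s$ used for the residual cross-correlations and the portmanteau statistic, the following statements request the diagnostics up to lag 5:

   proc varmax data=simul1;
      model y1 y2 / p=1 noint lagmax=5;
   run;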

Univariate Model Diagnostic Checks

There are various ways to perform diagnostic checks for a univariate model. For details, see the section “Heteroscedasticity and Normality Tests” on page 374 in Chapter 8, “The AUTOREG Procedure.” An example of the output is displayed in Figure 30.8 and Figure 30.9.

- Durbin-Watson (DW) statistics: The DW test statistics test for the first-order autocorrelation in the residuals.


- Jarque-Bera normality test: This test is helpful in determining whether the model residuals represent a white noise process. It tests the null hypothesis that the residuals are normally distributed.
- F tests for autoregressive conditional heteroscedastic (ARCH) disturbances: The F test statistics test for heteroscedastic disturbances in the residuals. They test the null hypothesis that the residuals have equal covariances.
- F tests for AR disturbance: These test statistics are computed from the residuals of the univariate AR(1), AR(1,2), AR(1,2,3), and AR(1,2,3,4) models to test the null hypothesis that the residuals are uncorrelated.

Cointegration

This section briefly introduces the concepts of cointegration (Johansen 1995b).

Definition 1. (Engle and Granger 1987): If a series $y_t$ with no deterministic components can be represented by a stationary and invertible ARMA process after differencing $d$ times, the series is integrated of order $d$; that is, $y_t \sim I(d)$.

Definition 2. (Engle and Granger 1987): If all elements of the vector $y_t$ are $I(d)$ and there exists a cointegrating vector $\beta \neq 0$ such that $\beta' y_t \sim I(d-b)$ for any $b > 0$, the vector process is said to be cointegrated $CI(d,b)$.

A simple example of a cointegrated process is the following bivariate system:

$$y_{1t} = \gamma y_{2t} + \epsilon_{1t}$$
$$y_{2t} = y_{2,t-1} + \epsilon_{2t}$$

with $\epsilon_{1t}$ and $\epsilon_{2t}$ being uncorrelated white noise processes. In the second equation, $y_{2t}$ is a random walk, $\Delta y_{2t} = \epsilon_{2t}$, $\Delta \equiv 1 - B$. Differencing the first equation results in

$$\Delta y_{1t} = \gamma \Delta y_{2t} + \Delta \epsilon_{1t} = \gamma \epsilon_{2t} + \epsilon_{1t} - \epsilon_{1,t-1}$$

Thus, both $y_{1t}$ and $y_{2t}$ are $I(1)$ processes, but the linear combination $y_{1t} - \gamma y_{2t}$ is stationary. Hence $y_t = (y_{1t}, y_{2t})'$ is cointegrated with a cointegrating vector $\beta = (1, -\gamma)'$.

In general, if the vector process $y_t$ has $k$ components, then there can be more than one cointegrating vector $\beta'$. It is assumed that there are $r$ linearly independent cointegrating vectors with $r < k$, which make the $k \times r$ matrix $\beta$. The rank of the matrix $\beta$ is $r$, which is called the cointegration rank of $y_t$.
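As an illustration, the following SAS/IML sketch simulates this bivariate cointegrated system (the value of $\gamma$, the seed, and the data set name are arbitrary choices for this example):

   proc iml;
      call randseed(12345);
      gamma = 2;                      /* cointegration parameter */
      n = 100;
      e = j(n,2,0);
      call randgen(e, 'Normal');      /* white noise innovations */
      y2 = cusum(e[,2]);              /* y2 is a random walk */
      y1 = gamma*y2 + e[,1];          /* y1 - gamma*y2 is stationary */
      y = y1 || y2;
      cn = {'y1' 'y2'};
      create coint from y[colname=cn];
      append from y;
   quit;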


Common Trends

This section briefly discusses the implication of cointegration for the moving-average representation. Let $y_t$ be cointegrated $CI(1,1)$. Then $\Delta y_t$ has the Wold representation

$$\Delta y_t = \delta + \Psi(B) \epsilon_t$$

where $\epsilon_t$ is $iid(0, \Sigma)$, $\Psi(B) = \sum_{j=0}^{\infty} \Psi_j B^j$ with $\Psi_0 = I_k$, and $\sum_{j=0}^{\infty} j \|\Psi_j\| < \infty$.

Assume that $\epsilon_t = 0$ if $t \le 0$ and that $y_0$ is a nonrandom initial value. Then the difference equation implies that

$$y_t = y_0 + \delta t + \Psi(1) \sum_{i=0}^{t} \epsilon_i + \Psi^*(B) \epsilon_t$$

where $\Psi^*(B) = (1-B)^{-1} (\Psi(B) - \Psi(1))$ and $\Psi^*(B)$ is absolutely summable.

Assume that the rank of $\Psi(1)$ is $m = k - r$. When the process $y_t$ is cointegrated, there is a cointegrating $k \times r$ matrix $\beta$ such that $\beta' y_t$ is stationary. Premultiplying $y_t$ by $\beta'$ results in

$$\beta' y_t = \beta' y_0 + \beta' \Psi^*(B) \epsilon_t$$

because $\beta' \Psi(1) = 0$ and $\beta' \delta = 0$.

Stock and Watson (1988) showed that the cointegrated process $y_t$ has a common trends representation derived from the moving-average representation. Since the rank of $\Psi(1)$ is $m = k - r$, there is a $k \times r$ matrix $H_1$ with rank $r$ such that $\Psi(1) H_1 = 0$. Let $H_2$ be a $k \times m$ matrix with rank $m$ such that $H_2' H_1 = 0$; then $A = \Psi(1) H_2$ has rank $m$. The matrix $H = (H_1, H_2)$ has rank $k$. By construction of $H$,

$$\Psi(1) H = [0, A] = A S_m$$

where $S_m = (0_{m \times r}, I_m)$. Since $\beta' \Psi(1) = 0$ and $\beta' \delta = 0$, $\delta$ lies in the column space of $\Psi(1)$ and can be written

$$\delta = \Psi(1) \tilde\delta$$

where $\tilde\delta$ is a $k$-dimensional vector. The common trends representation is written as

$$\begin{aligned}
y_t &= y_0 + \Psi(1) \left[ \tilde\delta t + \sum_{i=0}^{t} \epsilon_i \right] + \Psi^*(B) \epsilon_t \\
&= y_0 + \Psi(1) H \left[ H^{-1} \tilde\delta t + H^{-1} \sum_{i=0}^{t} \epsilon_i \right] + a_t \\
&= y_0 + A \tau_t + a_t
\end{aligned}$$

and

$$\tau_t = \mu + \tau_{t-1} + v_t$$

where $a_t = \Psi^*(B) \epsilon_t$, $\mu = S_m H^{-1} \tilde\delta$, $\tau_t = S_m [H^{-1} \tilde\delta t + H^{-1} \sum_{i=0}^{t} \epsilon_i]$, and $v_t = S_m H^{-1} \epsilon_t$.

Stock and Watson showed that the common trends representation expresses $y_t$ as a linear combination of $m$ random walks ($\tau_t$) with drift $\mu$ plus $I(0)$ components ($a_t$).

Test for the Common Trends

Stock and Watson (1988) proposed statistics for common trends testing. The null hypothesis is that the $k$-dimensional time series $y_t$ has $m$ common stochastic trends, where $m \le k$, and the alternative is that it has $s$ common trends, where $s < m$. The test procedure of $m$ versus $s$ common stochastic trends is performed based on the first-order serial correlation matrix of $y_t$. Let $\beta_\perp$ be a $k \times m$ matrix orthogonal to the cointegrating matrix such that $\beta_\perp' \beta = 0$ and $\beta_\perp' \beta_\perp = I_m$. Let $z_t = \beta' y_t$ and $w_t = \beta_\perp' y_t$. Then

$$w_t = \beta_\perp' y_0 + \beta_\perp' \delta t + \beta_\perp' \Psi(1) \sum_{i=0}^{t} \epsilon_i + \beta_\perp' \Psi^*(B) \epsilon_t$$

Combining the expressions of $z_t$ and $w_t$,

$$\begin{pmatrix} z_t \\ w_t \end{pmatrix} = \begin{pmatrix} \beta' y_0 \\ \beta_\perp' y_0 \end{pmatrix} + \begin{pmatrix} 0 \\ \beta_\perp' \delta \end{pmatrix} t + \begin{pmatrix} 0 \\ \beta_\perp' \Psi(1) \end{pmatrix} \sum_{i=1}^{t} \epsilon_i + \begin{pmatrix} \beta' \Psi^*(B) \\ \beta_\perp' \Psi^*(B) \end{pmatrix} \epsilon_t$$

The Stock-Watson common trends test is performed based on the component $w_t$ by testing whether $\beta_\perp' \Psi(1)$ has rank $m$ against rank $s$. The following statements perform the Stock-Watson test for common trends:

   proc iml;
      sig = 100*i(2);
      phi = {-0.2 0.1, 0.5 0.2, 0.8 0.7, -0.4 0.6};
      call varmasim(y,phi) sigma=sig n=100 initial=0 seed=45876;
      cn = {'y1' 'y2'};
      create simul2 from y[colname=cn];
      append from y;
   quit;

   data simul2;
      set simul2;
      date = intnx( 'year', '01jan1900'd, _n_-1 );
      format date year4. ;
   run;


   proc varmax data=simul2;
      model y1 y2 / p=2 cointtest=(sw);
   run;

In Figure 30.51, the first column is the null hypothesis that $y_t$ has $m \le k$ common trends; the second column is the alternative hypothesis that $y_t$ has $s < m$ common trends; the third column contains the eigenvalues used for the test statistics; the fourth column contains the test statistics using AR(p) filtering of the data. The table shows the output of the case $p = 2$.

Figure 30.51 Common Trends Test (COINTTEST=(SW) Option)

                  The VARMAX Procedure

                    Common Trend Test
                                              5% Critical
  H0: Rank=m  H1: Rank=s  Eigenvalue  Filter        Value  Lag
           1           0    1.000906    0.09       -14.10    2
           2           0    0.996763   -0.32        -8.80
                       1    0.648908  -35.11       -23.00

The test statistic for testing for 2 versus 1 common trends is more negative (–35.1) than the critical value (–23.0). Therefore, the test rejects the null hypothesis, which means that the series has a single common trend.

Vector Error Correction Modeling

This section discusses the implication of cointegration for the autoregressive representation. Assume that the cointegrated series can be represented by a vector error correction model according to the Granger representation theorem (Engle and Granger 1987). Consider the vector autoregressive process with Gaussian errors defined by

$$y_t = \sum_{i=1}^{p} \Phi_i y_{t-i} + \epsilon_t$$

or

$$\Phi(B) y_t = \epsilon_t$$

where the initial values, $y_{-p+1}, \ldots, y_0$, are fixed and $\epsilon_t \sim N(0, \Sigma)$. Since the AR operator $\Phi(B)$ can be re-expressed as $\Phi(B) = \Phi^*(B)(1-B) + \Phi(1)B$, where $\Phi^*(B) = I_k - \sum_{i=1}^{p-1} \Phi_i^* B^i$ with $\Phi_i^* = -\sum_{j=i+1}^{p} \Phi_j$, the vector error correction model is

$$\Phi^*(B)(1-B) y_t = \alpha \beta' y_{t-1} + \epsilon_t$$

or

$$\Delta y_t = \alpha \beta' y_{t-1} + \sum_{i=1}^{p-1} \Phi_i^* \Delta y_{t-i} + \epsilon_t$$

where $\alpha \beta' = -\Phi(1) = -I_k + \Phi_1 + \Phi_2 + \cdots + \Phi_p$.

One motivation for the VECM(p) form is to consider the relation $\beta' y_t = c$ as defining the underlying economic relations and to assume that the agents react to the disequilibrium error $\beta' y_t - c$ through the adjustment coefficient $\alpha$ to restore equilibrium; that is, they satisfy the economic relations. The cointegrating vector $\beta$ is sometimes called the long-run parameters.

You can consider a vector error correction model with a deterministic term. The deterministic term $D_t$ can contain a constant, a linear trend, and seasonal dummy variables. Exogenous variables can also be included in the model.

$$\Delta y_t = \Pi y_{t-1} + \sum_{i=1}^{p-1} \Phi_i^* \Delta y_{t-i} + A D_t + \sum_{i=0}^{s} \Theta_i^* x_{t-i} + \epsilon_t$$

where $\Pi = \alpha \beta'$. The alternative vector error correction representation considers the error correction term at lag $t-p$ and is written as

$$\Delta y_t = \sum_{i=1}^{p-1} \Phi_i^\sharp \Delta y_{t-i} + \Pi^\sharp y_{t-p} + A D_t + \sum_{i=0}^{s} \Theta_i^* x_{t-i} + \epsilon_t$$

If the matrix $\Pi$ has full rank ($r = k$), all components of $y_t$ are $I(0)$. On the other hand, $y_t$ are stationary in difference if $\mathrm{rank}(\Pi) = 0$. When the rank of the matrix $\Pi$ is $r < k$, there are $k - r$ linear combinations that are nonstationary and $r$ stationary cointegrating relations. Note that the linearly independent vector $z_t = \beta' y_t$ is stationary, and this transformation is not unique unless $r = 1$. There does not exist a unique cointegrating matrix $\beta$, since the coefficient matrix $\Pi$ can also be decomposed as

$$\Pi = \alpha M M^{-1} \beta' = \alpha^* \beta^{*\prime}$$

where $M$ is an $r \times r$ nonsingular matrix.

Test for the Cointegration

The cointegration rank test determines the linearly independent columns of $\Pi$. Johansen (1988, 1995a) and Johansen and Juselius (1990) proposed the cointegration rank test by using the reduced rank regression.
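For instance, a sketch that requests the Johansen cointegration rank test:

   proc varmax data=simul2;
      model y1 y2 / p=2 cointtest=(johansen);
   run;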


Different Specifications of Deterministic Trends

When you construct the VECM(p) form from the VAR(p) model, the deterministic terms in the VECM(p) form can differ from those in the VAR(p) model. When there are deterministic cointegrated relationships among variables, deterministic terms in the VAR(p) model are not present in the VECM(p) form. On the other hand, if there are stochastic cointegrated relationships in the VAR(p) model, deterministic terms appear in the VECM(p) form via the error correction term or as an independent term in the VECM(p) form. There are five different specifications of deterministic trends in the VECM(p) form:

• Case 1: There is no separate drift in the VECM(p) form.

$$ \Delta y_t = \alpha\beta' y_{t-1} + \sum_{i=1}^{p-1}\Phi_i^* \Delta y_{t-i} + \epsilon_t $$

• Case 2: There is no separate drift in the VECM(p) form, but a constant enters only via the error correction term.

$$ \Delta y_t = \alpha(\beta',\beta_0)(y_{t-1}', 1)' + \sum_{i=1}^{p-1}\Phi_i^* \Delta y_{t-i} + \epsilon_t $$

• Case 3: There is a separate drift and no separate linear trend in the VECM(p) form.

$$ \Delta y_t = \alpha\beta' y_{t-1} + \sum_{i=1}^{p-1}\Phi_i^* \Delta y_{t-i} + \delta_0 + \epsilon_t $$

• Case 4: There is a separate drift and no separate linear trend in the VECM(p) form, but a linear trend enters only via the error correction term.

$$ \Delta y_t = \alpha(\beta',\beta_1)(y_{t-1}', t)' + \sum_{i=1}^{p-1}\Phi_i^* \Delta y_{t-i} + \delta_0 + \epsilon_t $$

• Case 5: There is a separate linear trend in the VECM(p) form.

$$ \Delta y_t = \alpha\beta' y_{t-1} + \sum_{i=1}^{p-1}\Phi_i^* \Delta y_{t-i} + \delta_0 + \delta_1 t + \epsilon_t $$

First, focus on Cases 1, 3, and 5 to test the null hypothesis that there are at most $r$ cointegrating vectors. Let

$$ Z_{0t} = \Delta y_t $$
$$ Z_{1t} = y_{t-1} $$
$$ Z_{2t} = [\Delta y_{t-1}', \ldots, \Delta y_{t-p+1}', D_t]' $$
$$ Z_0 = [Z_{01}, \ldots, Z_{0T}]' $$
$$ Z_1 = [Z_{11}, \ldots, Z_{1T}]' $$
$$ Z_2 = [Z_{21}, \ldots, Z_{2T}]' $$

where $D_t$ can be empty for Case 1, 1 for Case 3, and $(1,t)$ for Case 5.

In Case 2, $Z_{1t}$ and $Z_{2t}$ are defined as

$$ Z_{1t} = [y_{t-1}', 1]' \qquad Z_{2t} = [\Delta y_{t-1}', \ldots, \Delta y_{t-p+1}']' $$

In Case 4, $Z_{1t}$ and $Z_{2t}$ are defined as

$$ Z_{1t} = [y_{t-1}', t]' \qquad Z_{2t} = [\Delta y_{t-1}', \ldots, \Delta y_{t-p+1}', 1]' $$

Let $\Psi$ be the matrix of parameters consisting of $\Phi_1^*, \ldots, \Phi_{p-1}^*$, $A$, and $\Theta_0^*, \ldots, \Theta_s^*$, where the parameter $A$ corresponds to the regressors $D_t$. Then the VECM(p) form is rewritten in these variables as

$$ Z_{0t} = \alpha\beta' Z_{1t} + \Psi Z_{2t} + \epsilon_t $$

The log-likelihood function is given by

$$ \ell = -\frac{kT}{2}\log 2\pi - \frac{T}{2}\log|\Sigma| - \frac{1}{2}\sum_{t=1}^{T}(Z_{0t} - \alpha\beta' Z_{1t} - \Psi Z_{2t})'\Sigma^{-1}(Z_{0t} - \alpha\beta' Z_{1t} - \Psi Z_{2t}) $$

The residuals, $R_{0t}$ and $R_{1t}$, are obtained by regressing $Z_{0t}$ and $Z_{1t}$ on $Z_{2t}$, respectively. The regression equation of the residuals is

$$ R_{0t} = \alpha\beta' R_{1t} + \hat{\epsilon}_t $$

The crossproducts matrices are computed as

$$ S_{ij} = \frac{1}{T}\sum_{t=1}^{T} R_{it} R_{jt}', \quad i,j = 0,1 $$

Then the maximum likelihood estimator for $\beta$ is obtained from the eigenvectors that correspond to the $r$ largest eigenvalues of the following equation:

$$ |\lambda S_{11} - S_{10} S_{00}^{-1} S_{01}| = 0 $$

The eigenvalues of the preceding equation are the squared canonical correlations between $R_{0t}$ and $R_{1t}$, and the eigenvectors that correspond to the $r$ largest eigenvalues are the $r$ linear combinations of $y_{t-1}$ that have the largest squared partial correlations with the stationary process $\Delta y_t$ after correcting for lags and deterministic terms. Such an analysis calls for a reduced rank regression of $\Delta y_t$ on $y_{t-1}$ corrected for $(\Delta y_{t-1}, \ldots, \Delta y_{t-p+1}, D_t)$, as discussed by Anderson (1951). Johansen (1988) suggests two test statistics to test the null hypothesis that there are at most $r$ cointegrating vectors:

$$ H_0: \lambda_i = 0 \quad \text{for } i = r+1, \ldots, k $$
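In practice, the eigenvalue problem above can be examined directly in SAS/IML with the GENEIG subroutine, which solves the generalized symmetric eigenproblem. The following is a minimal sketch; the crossproduct matrices S00, S01, and S11 are hypothetical values, not quantities produced by PROC VARMAX:

proc iml;
   /* Sketch: solve |lambda*S11 - S10*inv(S00)*S01| = 0.                 */
   /* S00, S01, S11 are hypothetical 2x2 crossproduct matrices.          */
   S00 = {1.0 0.3, 0.3 1.2};
   S01 = {0.5 0.2, 0.1 0.4};
   S11 = {2.0 0.6, 0.6 1.5};
   S10 = t(S01);
   A = S10 * inv(S00) * S01;         /* symmetric positive semidefinite  */
   call geneig(lambda, v, A, S11);   /* solves A*x = lambda*S11*x        */
   T = 100;                                /* hypothetical sample size   */
   trace_r0 = -T * sum(log(1 - lambda));   /* trace statistic for r = 0  */
   print lambda trace_r0;
quit;

The eigenvalues returned are the squared canonical correlations, and the trace statistic defined in the next subsection follows directly from them.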


Trace Test

The trace statistic for testing the null hypothesis that there are at most $r$ cointegrating vectors is as follows:

$$ \lambda_{trace} = -T \sum_{i=r+1}^{k} \log(1-\lambda_i) $$

The asymptotic distribution of this statistic is given by

$$ \mathrm{tr}\left\{ \int_0^1 (dW)\tilde{W}' \left( \int_0^1 \tilde{W}\tilde{W}'\,dr \right)^{-1} \int_0^1 \tilde{W}(dW)' \right\} $$

where $\mathrm{tr}(A)$ is the trace of a matrix $A$, $W$ is the $(k-r)$-dimensional Brownian motion, and $\tilde{W}$ is the Brownian motion itself, or the demeaned or detrended Brownian motion, according to the different specifications of deterministic trends in the vector error correction model.

Maximum Eigenvalue Test

The maximum eigenvalue statistic for testing the null hypothesis that there are at most $r$ cointegrating vectors is as follows:

$$ \lambda_{max} = -T \log(1-\lambda_{r+1}) $$

The asymptotic distribution of this statistic is given by

$$ \lambda_{max}\left\{ \int_0^1 (dW)\tilde{W}' \left( \int_0^1 \tilde{W}\tilde{W}'\,dr \right)^{-1} \int_0^1 \tilde{W}(dW)' \right\} $$

where $\lambda_{max}(A)$ is the maximum eigenvalue of a matrix $A$. Osterwald-Lenum (1992) provided detailed tables of the critical values of these statistics.

The following statements use the JOHANSEN option to compute the Johansen cointegration rank trace test of integrated order 1:

proc varmax data=simul2;
   model y1 y2 / p=2 cointtest=(johansen=(normalize=y1));
run;

Figure 30.52 shows the output. Based on the model specified in the MODEL statement, an intercept term is assumed. In the "Cointegration Rank Test Using Trace" table, the column Drift In ECM means there is no separate drift in the error correction model, and the column Drift In Process means the process has a constant drift before differencing. The "Cointegration Rank Test Using Trace" table shows the trace statistics based on Case 3, and the "Cointegration Rank Test Using Trace under Restriction" table shows the trace statistics based on Case 2. The output indicates that the series are cointegrated with rank 1 because, in both Case 2 and Case 3, the trace statistic for testing rank 0 exceeds its 5% critical value while the trace statistic for testing rank 1 is smaller than its 5% critical value.


Figure 30.52 Cointegration Rank Test (COINTTEST=(JOHANSEN=) Option)

The VARMAX Procedure

Cointegration Rank Test Using Trace

H0:      H1:                             5% Critical   Drift in   Drift in
Rank=r   Rank>r   Eigenvalue    Trace    Value         ECM        Process
0        0        0.4644      61.7522    15.34         Constant   Linear
1        1        0.0056       0.5552     3.84

Cointegration Rank Test Using Trace Under Restriction

H0:      H1:                             5% Critical   Drift in   Drift in
Rank=r   Rank>r   Eigenvalue    Trace    Value         ECM        Process
0        0        0.5209      76.3788    19.99         Constant   Constant
1        1        0.0426       4.2680     9.13

Figure 30.53 shows which result, either Case 2 (the hypothesis H0) or Case 3 (the hypothesis H1), is appropriate depending on the significance level. Since the cointegration rank is chosen to be 1 by the result in Figure 30.52, look at the last row, which corresponds to rank=1. Since the p-value is 0.054, Case 2 cannot be rejected at the 5% significance level, but it can be rejected at the 10% significance level. For the modeling of Case 2 and Case 3, see Figure 30.56 and Figure 30.57.

Figure 30.53 Cointegration Rank Test Continued

Hypothesis of the Restriction

Hypothesis    Drift in ECM   Drift in Process
H0(Case 2)    Constant       Constant
H1(Case 3)    Constant       Linear

Hypothesis Test of the Restriction

Rank   Eigenvalue   Restricted Eigenvalue   DF   Chi-Square   Pr > ChiSq
0      0.4644       0.5209                   2        14.63       0.0007
1      0.0056       0.0426                   1         3.71       0.0540

Figure 30.54 shows the estimates of the long-run parameter (Beta) and the adjustment coefficients (Alpha) based on Case 3.

Figure 30.54 Cointegration Rank Test Continued

Beta

Variable          1          2
y1          1.00000    1.00000
y2         -2.04869   -0.02854

Alpha

Variable          1          2
y1         -0.46421   -0.00502
y2          0.17535   -0.01275

Using the NORMALIZE= option, the first row of the "Beta" table is set to 1. Considering that the cointegration rank is 1, the long-run relationship of the series is

$$ \beta' y_t = \begin{pmatrix} 1 & -2.04869 \end{pmatrix} \begin{pmatrix} y_{1t} \\ y_{2t} \end{pmatrix} = y_{1t} - 2.04869\, y_{2t} $$

$$ y_{1t} = 2.04869\, y_{2t} $$

Figure 30.55 shows the estimates of the long-run parameter (Beta) and the adjustment coefficients (Alpha) based on Case 2.

Figure 30.55 Cointegration Rank Test Continued

Beta Under Restriction

Variable          1           2
y1          1.00000     1.00000
y2         -2.04366    -2.75773
1           6.75919   101.37051

Alpha Under Restriction

Variable          1          2
y1         -0.48015    0.01091
y2          0.12538    0.03722

Considering that the cointegration rank is 1, the long-run relationship of the series is

$$ \beta' y_t = \begin{pmatrix} 1 & -2.04366 & 6.75919 \end{pmatrix} \begin{pmatrix} y_{1t} \\ y_{2t} \\ 1 \end{pmatrix} = y_{1t} - 2.04366\, y_{2t} + 6.75919 $$

$$ y_{1t} = 2.04366\, y_{2t} - 6.75919 $$

Estimation of Vector Error Correction Model

The preceding log-likelihood function is maximized for

$$ \hat{\beta} = S_{11}^{-1/2}[v_1, \ldots, v_r] $$
$$ \hat{\alpha} = S_{01}\hat{\beta}(\hat{\beta}'S_{11}\hat{\beta})^{-1} $$
$$ \hat{\Pi} = \hat{\alpha}\hat{\beta}' $$
$$ \hat{\Psi}' = (Z_2'Z_2)^{-1}Z_2'(Z_0 - Z_1\hat{\Pi}') $$
$$ \hat{\Sigma} = (Z_0 - Z_2\hat{\Psi}' - Z_1\hat{\Pi}')'(Z_0 - Z_2\hat{\Psi}' - Z_1\hat{\Pi}')/T $$

The estimators of the orthogonal complements of $\alpha$ and $\beta$ are

$$ \hat{\beta}_\perp = S_{11}[v_{r+1}, \ldots, v_k] $$

and

$$ \hat{\alpha}_\perp = S_{00}^{-1}S_{01}[v_{r+1}, \ldots, v_k] $$

The ML estimators have the following asymptotic properties:

$$ \sqrt{T}\,\mathrm{vec}([\hat{\Pi}, \hat{\Psi}] - [\Pi, \Psi]) \xrightarrow{d} N(0, \Sigma_{co}) $$

where

$$ \Sigma_{co} = \Sigma \otimes \left( \begin{pmatrix} \beta & 0 \\ 0 & I_k \end{pmatrix} \Omega^{-1} \begin{pmatrix} \beta' & 0 \\ 0 & I_k \end{pmatrix} \right) $$

and

$$ \Omega = \mathrm{plim}\,\frac{1}{T} \begin{pmatrix} \beta'Z_1'Z_1\beta & \beta'Z_1'Z_2 \\ Z_2'Z_1\beta & Z_2'Z_2 \end{pmatrix} $$

The following statements are examples of fitting the five different cases of the vector error correction models mentioned in the previous section.


For fitting Case 1,

   model y1 y2 / p=2 ecm=(rank=1 normalize=y1) noint;

For fitting Case 2,

   model y1 y2 / p=2 ecm=(rank=1 normalize=y1 ectrend);

For fitting Case 3,

   model y1 y2 / p=2 ecm=(rank=1 normalize=y1);

For fitting Case 4,

   model y1 y2 / p=2 ecm=(rank=1 normalize=y1 ectrend) trend=linear;

For fitting Case 5,

   model y1 y2 / p=2 ecm=(rank=1 normalize=y1) trend=linear;

From Figure 30.53, which uses the COINTTEST=(JOHANSEN) option, you can fit the model by using either Case 2 or Case 3 because the test was not significant at the 0.05 level but was significant at the 0.10 level. Here both models are fitted to show the difference in output display. Figure 30.56 is for Case 2, and Figure 30.57 is for Case 3. For Case 2:

proc varmax data=simul2;
   model y1 y2 / p=2 ecm=(rank=1 normalize=y1 ectrend)
                 print=(estimates);
run;


Figure 30.56 Parameter Estimation with the ECTREND Option

The VARMAX Procedure

Parameter Alpha * Beta' Estimates

Variable         y1         y2          1
y1         -0.48015    0.98126   -3.24543
y2          0.12538   -0.25624    0.84748

AR Coefficients of Differenced Lag

DIF Lag   Variable         y1         y2
1         y1         -0.72759   -0.77463
          y2          0.38982   -0.55173

Model Parameter Estimates

Equation  Parameter   Estimate   Standard Error   t Value   Pr > |t|   Variable
D_y1      CONST1      -3.24543   0.33022                               1, EC
          AR1_1_1     -0.48015   0.04886                               y1(t-1)
          AR1_1_2      0.98126   0.09984                               y2(t-1)
          AR2_1_1     -0.72759   0.04623           -15.74    0.0001    D_y1(t-1)
          AR2_1_2     -0.77463   0.04978           -15.56    0.0001    D_y2(t-1)
D_y2      CONST2       0.84748   0.35394                               1, EC
          AR1_2_1      0.12538   0.05236                               y1(t-1)
          AR1_2_2     -0.25624   0.10702                               y2(t-1)
          AR2_2_1      0.38982   0.04955             7.87    0.0001    D_y1(t-1)
          AR2_2_2     -0.55173   0.05336           -10.34    0.0001    D_y2(t-1)

Figure 30.56 can be reported as follows:

$$ \Delta y_t = \begin{pmatrix} -0.48015 & 0.98126 & -3.24543 \\ 0.12538 & -0.25624 & 0.84748 \end{pmatrix} \begin{pmatrix} y_{1,t-1} \\ y_{2,t-1} \\ 1 \end{pmatrix} + \begin{pmatrix} -0.72759 & -0.77463 \\ 0.38982 & -0.55173 \end{pmatrix} \Delta y_{t-1} + \epsilon_t $$

The keyword "EC" in the "Model Parameter Estimates" table means that the ECTREND option is used for fitting the model.

For fitting Case 3:

proc varmax data=simul2;
   model y1 y2 / p=2 ecm=(rank=1 normalize=y1)
                 print=(estimates);
run;


Figure 30.57 Parameter Estimation without the ECTREND Option

The VARMAX Procedure

Parameter Alpha * Beta' Estimates

Variable         y1         y2
y1         -0.46421    0.95103
y2          0.17535   -0.35923

AR Coefficients of Differenced Lag

DIF Lag   Variable         y1         y2
1         y1         -0.74052   -0.76305
          y2          0.34820   -0.51194

Model Parameter Estimates

Equation  Parameter   Estimate   Standard Error   t Value   Pr > |t|   Variable
D_y1      CONST1      -2.60825   1.32398            -1.97    0.0518    1
          AR1_1_1     -0.46421   0.05474                               y1(t-1)
          AR1_1_2      0.95103   0.11215                               y2(t-1)
          AR2_1_1     -0.74052   0.05060           -14.63    0.0001    D_y1(t-1)
          AR2_1_2     -0.76305   0.05352           -14.26    0.0001    D_y2(t-1)
D_y2      CONST2       3.43005   1.39587             2.46    0.0159    1
          AR1_2_1      0.17535   0.05771                               y1(t-1)
          AR1_2_2     -0.35923   0.11824                               y2(t-1)
          AR2_2_1      0.34820   0.05335             6.53    0.0001    D_y1(t-1)
          AR2_2_2     -0.51194   0.05643            -9.07    0.0001    D_y2(t-1)

Figure 30.57 can be reported as follows:

$$ \Delta y_t = \begin{pmatrix} -0.46421 & 0.95103 \\ 0.17535 & -0.35923 \end{pmatrix} y_{t-1} + \begin{pmatrix} -0.74052 & -0.76305 \\ 0.34820 & -0.51194 \end{pmatrix} \Delta y_{t-1} + \begin{pmatrix} -2.60825 \\ 3.43005 \end{pmatrix} + \epsilon_t $$

Test for the Linear Restriction on the Parameters

Consider the example with the variables $m_t$ log real money, $y_t$ log real income, $i_t^d$ deposit interest rate, and $i_t^b$ bond interest rate. It seems a natural hypothesis that in the long-run relation, money and income have equal coefficients with opposite signs. This can be formulated as the hypothesis that the cointegrated relation contains only $m_t$ and $y_t$ through $m_t - y_t$. For the analysis, you can express these restrictions in the parameterization of $H$ such that $\beta = H\phi$, where $H$ is a known $k \times s$ matrix and $\phi$ is the $s \times r$ ($r \le s < k$) parameter matrix to be estimated. For this example, $H$ is given by

$$ H = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} $$

Restriction $H_0: \beta = H\phi$

When the linear restriction $\beta = H\phi$ is given, it implies that the same restrictions are imposed on all cointegrating vectors. You obtain the maximum likelihood estimator of $\beta$ by reduced rank regression of $\Delta y_t$ on $Hy_{t-1}$ corrected for $(\Delta y_{t-1}, \ldots, \Delta y_{t-p+1}, D_t)$, solving the following equation:

$$ |\rho H'S_{11}H - H'S_{10}S_{00}^{-1}S_{01}H| = 0 $$

for the eigenvalues $1 > \rho_1 > \cdots > \rho_s > 0$ and eigenvectors $(v_1, \ldots, v_s)$, $S_{ij}$ given in the preceding section. Then choose $\hat{\phi} = (v_1, \ldots, v_r)$ that corresponds to the $r$ largest eigenvalues, and $\hat{\beta}$ is $H\hat{\phi}$.

The test statistic for $H_0: \beta = H\phi$ is given by

$$ T\sum_{i=1}^{r}\log\{(1-\rho_i)/(1-\lambda_i)\} \xrightarrow{d} \chi^2_{r(k-s)} $$

If the series has no deterministic trend, the constant term should be restricted by $\alpha_\perp'\delta_0 = 0$ as in Case 2. Then $H$ is given by

$$ H = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} $$

The following statements test that $2\beta_1 + \beta_2 = 0$:

proc varmax data=simul2;
   model y1 y2 / p=2 ecm=(rank=1 normalize=y1);
   cointeg rank=1 h=(1,-2);
run;

Figure 30.58 shows the results of testing $H_0: 2\beta_1 + \beta_2 = 0$. The input H matrix is $H = (1\ {-2})'$. The adjustment coefficient is reestimated under the restriction, and the test indicates that you cannot reject the null hypothesis.


Figure 30.58 Testing of Linear Restriction (H= Option)

The VARMAX Procedure

Beta Under Restriction

Variable          1
y1          1.00000
y2         -2.00000

Alpha Under Restriction

Variable          1
y1         -0.47404
y2          0.17534

Hypothesis Test

Index   Eigenvalue   Restricted Eigenvalue   DF   Chi-Square   Pr > ChiSq
1       0.4644       0.4616                   1         0.51       0.4738

Test for the Weak Exogeneity and Restrictions of Alpha

Consider a vector error correction model:

$$ \Delta y_t = \alpha\beta' y_{t-1} + \sum_{i=1}^{p-1}\Phi_i^*\Delta y_{t-i} + AD_t + \epsilon_t $$

Divide the process $y_t$ into $(y_{1t}', y_{2t}')'$ with dimensions $k_1$ and $k_2$, and divide $\Sigma$ into

$$ \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} $$

Similarly, the parameters can be decomposed as follows:

$$ \alpha = \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} \qquad \Phi_i^* = \begin{pmatrix} \Phi_{1i}^* \\ \Phi_{2i}^* \end{pmatrix} \qquad A = \begin{pmatrix} A_1 \\ A_2 \end{pmatrix} $$

Then the VECM(p) form can be rewritten by using the decomposed parameters and processes:

$$ \begin{pmatrix} \Delta y_{1t} \\ \Delta y_{2t} \end{pmatrix} = \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix}\beta' y_{t-1} + \sum_{i=1}^{p-1}\begin{pmatrix} \Phi_{1i}^* \\ \Phi_{2i}^* \end{pmatrix}\Delta y_{t-i} + \begin{pmatrix} A_1 \\ A_2 \end{pmatrix} D_t + \begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \end{pmatrix} $$


The conditional model for $\Delta y_{1t}$ given $\Delta y_{2t}$ is

$$ \Delta y_{1t} = \omega\Delta y_{2t} + (\alpha_1 - \omega\alpha_2)\beta' y_{t-1} + \sum_{i=1}^{p-1}(\Phi_{1i}^* - \omega\Phi_{2i}^*)\Delta y_{t-i} + (A_1 - \omega A_2)D_t + \epsilon_{1t} - \omega\epsilon_{2t} $$

and the marginal model of $\Delta y_{2t}$ is

$$ \Delta y_{2t} = \alpha_2\beta' y_{t-1} + \sum_{i=1}^{p-1}\Phi_{2i}^*\Delta y_{t-i} + A_2 D_t + \epsilon_{2t} $$

where $\omega = \Sigma_{12}\Sigma_{22}^{-1}$.

The test of weak exogeneity of $y_{2t}$ for the parameters $(\alpha_1, \beta)$ determines whether $\alpha_2 = 0$. Weak exogeneity means that there is no information about $\beta$ in the marginal model or that the variables $y_{2t}$ do not react to a disequilibrium.

Restriction $H_0: \alpha = J\psi$

Consider the null hypothesis $H_0: \alpha = J\psi$, where $J$ is a $k \times m$ matrix with $r \le m < k$.

From the previous residual regression equation

$$ R_{0t} = \alpha\beta' R_{1t} + \hat{\epsilon}_t = J\psi\beta' R_{1t} + \hat{\epsilon}_t $$

you can obtain

$$ \bar{J}' R_{0t} = \psi\beta' R_{1t} + \bar{J}'\hat{\epsilon}_t $$
$$ J_\perp' R_{0t} = J_\perp'\hat{\epsilon}_t $$

where $\bar{J} = J(J'J)^{-1}$ and $J_\perp$ is orthogonal to $J$ such that $J_\perp' J = 0$.

Define $\Sigma_{JJ_\perp} = \bar{J}'\Sigma J_\perp$ and $\Sigma_{J_\perp J_\perp} = J_\perp'\Sigma J_\perp$, and let $\omega = \Sigma_{JJ_\perp}\Sigma_{J_\perp J_\perp}^{-1}$. Then $\bar{J}' R_{0t}$ can be written as

$$ \bar{J}' R_{0t} = \psi\beta' R_{1t} + \omega J_\perp' R_{0t} + \bar{J}'\hat{\epsilon}_t - \omega J_\perp'\hat{\epsilon}_t $$

Using the marginal distribution of $J_\perp' R_{0t}$ and the conditional distribution of $\bar{J}' R_{0t}$, the new residuals are computed as

$$ \tilde{R}_{Jt} = \bar{J}' R_{0t} - S_{JJ_\perp} S_{J_\perp J_\perp}^{-1} J_\perp' R_{0t} $$
$$ \tilde{R}_{1t} = R_{1t} - S_{1J_\perp} S_{J_\perp J_\perp}^{-1} J_\perp' R_{0t} $$

where

$$ S_{JJ_\perp} = \bar{J}' S_{00} J_\perp, \quad S_{J_\perp J_\perp} = J_\perp' S_{00} J_\perp, \quad S_{J_\perp 1} = J_\perp' S_{01} $$

In terms of $\tilde{R}_{Jt}$ and $\tilde{R}_{1t}$, the MLE of $\beta$ is computed by using the reduced rank regression. Let

$$ S_{ij.J_\perp} = \frac{1}{T}\sum_{t=1}^{T}\tilde{R}_{it}\tilde{R}_{jt}', \quad \text{for } i,j = 1, J $$

Under the null hypothesis $H_0: \alpha = J\psi$, the MLE $\tilde{\beta}$ is computed by solving the equation

$$ |\rho S_{11.J_\perp} - S_{1J.J_\perp} S_{JJ.J_\perp}^{-1} S_{J1.J_\perp}| = 0 $$

Then $\tilde{\beta} = (v_1, \ldots, v_r)$, where the eigenvectors correspond to the $r$ largest eigenvalues. The likelihood ratio test for $H_0: \alpha = J\psi$ is

$$ T\sum_{i=1}^{r}\log\{(1-\rho_i)/(1-\lambda_i)\} \xrightarrow{d} \chi^2_{r(k-m)} $$

The test of weak exogeneity of $y_{2t}$ is a special case of the test $\alpha = J\psi$, considering $J = (I_{k_1}, 0)'$. Consider the previous example with four variables ($m_t, y_t, i_t^b, i_t^d$). If $r = 1$, you formulate the weak exogeneity of ($y_t, i_t^b, i_t^d$) for $m_t$ as $J = [1,0,0,0]'$ and the weak exogeneity of $i_t^d$ for ($m_t, y_t, i_t^b$) as $J = [I_3, 0]'$.

The following statements test the weak exogeneity of other variables, assuming $r = 1$:

proc varmax data=simul2;
   model y1 y2 / p=2 ecm=(rank=1 normalize=y1);
   cointeg rank=1 exogeneity;
run;

Figure 30.59 shows that each variable is not weakly exogenous with respect to the other variable.

Figure 30.59 Testing of Weak Exogeneity (EXOGENEITY Option)

The VARMAX Procedure

Testing Weak Exogeneity of Each Variables

Variable   DF   Chi-Square   Pr > ChiSq
y1         1         53.46       <.0001
y2         1          8.76       0.0031

Forecasting of the VECM

Consider the cointegrated moving-average representation $\Delta y_t = \delta + \Psi(B)\epsilon_t$. Assume that $y_0 = 0$. The linear process $y_t$ can be written as

$$ y_t = \delta t + \sum_{i=1}^{t}\sum_{j=0}^{t-i}\Psi_j\epsilon_i $$

Therefore, for $l > 0$,

$$ y_{t+l} = \delta(t+l) + \sum_{i=1}^{t}\sum_{j=0}^{t+l-i}\Psi_j\epsilon_i + \sum_{i=1}^{l}\sum_{j=0}^{l-i}\Psi_j\epsilon_{t+i} $$

The $l$-step-ahead forecast is derived from the preceding equation:

$$ y_{t+l|t} = \delta(t+l) + \sum_{i=1}^{t}\sum_{j=0}^{t+l-i}\Psi_j\epsilon_i $$

Note that

$$ \lim_{l\to\infty}\beta' y_{t+l|t} = 0 $$

since $\lim_{l\to\infty}\sum_{j=0}^{t+l-i}\Psi_j = \Psi(1)$ and $\beta'\Psi(1) = 0$. The long-run forecast of the cointegrated system shows that the cointegrated relationship holds, although there might exist some deviations from the equilibrium status in the short run. The covariance matrix of the prediction error $e_{t+l|t} = y_{t+l} - y_{t+l|t}$ is

$$ \Sigma(l) = \sum_{i=1}^{l}\left[\left(\sum_{j=0}^{l-i}\Psi_j\right)\Sigma\left(\sum_{j=0}^{l-i}\Psi_j'\right)\right] $$

When the linear process is represented as a VECM(p) model, you can obtain

$$ \Delta y_t = \Pi y_{t-1} + \sum_{j=1}^{p-1}\Phi_j^*\Delta y_{t-j} + \delta + \epsilon_t $$

The transition equation is defined as

$$ z_t = F z_{t-1} + e_t $$

where $z_t = (y_{t-1}', \Delta y_t', \Delta y_{t-1}', \ldots, \Delta y_{t-p+2}')'$ is a state vector and the transition matrix is

$$ F = \begin{pmatrix} I_k & I_k & 0 & \cdots & 0 \\ \Pi & \Pi + \Phi_1^* & \Phi_2^* & \cdots & \Phi_{p-1}^* \\ 0 & I_k & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & I_k & 0 \end{pmatrix} $$

where $0$ is a $k \times k$ zero matrix. The observation equation can be written as

$$ y_t = \delta t + H z_t $$

where $H = [I_k, I_k, 0, \ldots, 0]$.

The $l$-step-ahead forecast is computed as

$$ y_{t+l|t} = \delta(t+l) + HF^l z_t $$
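To make the state space recursion concrete, the following PROC IML statements sketch the computation of $HF^l z_t$ for a bivariate VECM(2) with $\delta = 0$; the matrices Pi and Phi1 and the state values are hypothetical illustrations, not estimates produced by PROC VARMAX:

proc iml;
   /* Sketch: l-step-ahead forecast y(t+l|t) = H * F**l * z(t) for a       */
   /* bivariate VECM(2) with no deterministic term (delta = 0).            */
   /* Pi, Phi1, yLag, and dy are hypothetical values, not VARMAX output.   */
   Pi   = {-0.46  0.95,
            0.18 -0.36};                  /* assumed Pi = alpha * beta'    */
   Phi1 = {-0.74 -0.76,
            0.35 -0.51};                  /* assumed Phi*_1                */
   Ik = i(2);
   F = (Ik || Ik) // (Pi || (Pi + Phi1)); /* transition matrix for p=2     */
   H = Ik || Ik;                          /* observation matrix            */
   yLag = {-3.4, -0.7};                   /* assumed y(t-1)                */
   dy   = {-1.3, -1.0};                   /* assumed Delta y(t)            */
   z = yLag // dy;                        /* state vector z(t)             */
   l = 5;
   forecast = H * (F**l) * z;             /* five-step-ahead forecast      */
   print forecast;
quit;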

Cointegration with Exogenous Variables

The error correction model with exogenous variables can be written as follows:

$$ \Delta y_t = \alpha\beta' y_{t-1} + \sum_{i=1}^{p-1}\Phi_i^*\Delta y_{t-i} + AD_t + \sum_{i=0}^{s}\Theta_i^* x_{t-i} + \epsilon_t $$

The following statements demonstrate how to fit VECMX($p,s$), where $p = 2$ and $s = 1$, from the P=2 and XLAG=1 options:

proc varmax data=simul3;
   model y1 y2 = x1 / p=2 xlag=1 ecm=(rank=1);
run;

The following statements demonstrate how to fit BVECMX(2,1):

proc varmax data=simul3;
   model y1 y2 = x1 / p=2 xlag=1 ecm=(rank=1)
                      prior=(lambda=0.9 theta=0.1);
run;

I(2) Model

The VARX($p,s$) model can be written in the error correction form:

$$ \Delta y_t = \alpha\beta' y_{t-1} + \sum_{i=1}^{p-1}\Phi_i^*\Delta y_{t-i} + AD_t + \sum_{i=0}^{s}\Theta_i^* x_{t-i} + \epsilon_t $$

Let $\Phi^* = I_k - \sum_{i=1}^{p-1}\Phi_i^*$.

If $\alpha$ and $\beta$ have full rank $r$, and $\mathrm{rank}(\alpha_\perp'\Phi^*\beta_\perp) = k-r$, then $y_t$ is an $I(1)$ process.

If the condition $\mathrm{rank}(\alpha_\perp'\Phi^*\beta_\perp) = k-r$ fails and $\alpha_\perp'\Phi^*\beta_\perp$ has reduced rank $\alpha_\perp'\Phi^*\beta_\perp = \xi\eta'$, where $\xi$ and $\eta$ are $(k-r)\times s$ matrices with $s \le k-r$, then $\alpha_\perp$ and $\beta_\perp$ are defined as $k\times(k-r)$ matrices of full rank such that $\alpha'\alpha_\perp = 0$ and $\beta'\beta_\perp = 0$.

If $\xi$ and $\eta$ have full rank $s$, then the process $y_t$ is $I(2)$, which has the following implication for the moving-average representation:

$$ y_t = B_0 + B_1 t + C_2\sum_{j=1}^{t}\sum_{i=1}^{j}\epsilon_i + C_1\sum_{i=1}^{t}\epsilon_i + C_0(B)\epsilon_t $$

The matrices $C_1$, $C_2$, and $C_0(B)$ are determined by the cointegration properties of the process, and $B_0$ and $B_1$ are determined by the initial values. For details, see Johansen (1995a).

The implication of the $I(2)$ model for the autoregressive representation is given by

$$ \Delta^2 y_t = \Pi y_{t-1} - \Phi^*\Delta y_{t-1} + \sum_{i=1}^{p-2}\Psi_i\Delta^2 y_{t-i} + AD_t + \sum_{i=0}^{s}\Theta_i^* x_{t-i} + \epsilon_t $$

where $\Psi_i = -\sum_{j=i+1}^{p-1}\Phi_j^*$ and $\Phi^* = I_k - \sum_{i=1}^{p-1}\Phi_i^*$.

Test for I(2)

The $I(2)$ cointegrated model is given by the following parameter restrictions:

$$ H_{r,s}: \Pi = \alpha\beta' \quad \text{and} \quad \alpha_\perp'\Phi^*\beta_\perp = \xi\eta' $$

where $\xi$ and $\eta$ are $(k-r)\times s$ matrices with $0 \le s \le k-r$. Let $H_r^0$ represent the $I(1)$ model where $\alpha$ and $\beta$ have full rank $r$, let $H_{r,s}^0$ represent the $I(2)$ model where $\xi$ and $\eta$ have full rank $s$, and let $H_{r,s}$ represent the $I(2)$ model where $\xi$ and $\eta$ have rank $\le s$. The following table shows the relation between the $I(1)$ models and the $I(2)$ models.

Table 30.2 Relation between the I(1) and I(2) Models

                              I(2)                                I(1)
r \ k-r-s    k        k-1      ...   1
0            H_{00}   H_{01}   ...   H_{0,k-1}    H_{0,k}   = H_0^0
1                     H_{10}   ...   H_{1,k-2}    H_{1,k-1} = H_1^0
...                                  ...          ...
k-1                                  H_{k-1,0}    H_{k-1,1} = H_{k-1}^0


Johansen (1995a) proposed the two-step procedure to analyze the $I(2)$ model. In the first step, the values of $(r, \alpha, \beta)$ are estimated using the reduced rank regression analysis, performing the regression analysis of $\Delta^2 y_t$, $\Delta y_{t-1}$, and $y_{t-1}$ on $\Delta^2 y_{t-1}, \ldots, \Delta^2 y_{t-p+2}$, and $D_t$. This gives residuals $R_{0t}$, $R_{1t}$, and $R_{2t}$, and residual product moment matrices

$$ M_{ij} = \frac{1}{T}\sum_{t=1}^{T} R_{it}R_{jt}' \quad \text{for } i,j = 0,1,2 $$

Perform the reduced rank regression analysis of $\Delta^2 y_t$ on $y_{t-1}$ corrected for $\Delta y_{t-1}$, $\Delta^2 y_{t-1}, \ldots, \Delta^2 y_{t-p+2}$, and $D_t$, and solve the eigenvalue problem of the equation

$$ |\lambda M_{22.1} - M_{20.1}M_{00.1}^{-1}M_{02.1}| = 0 $$

where $M_{ij.1} = M_{ij} - M_{i1}M_{11}^{-1}M_{1j}$ for $i,j = 0,2$.

In the second step, if $(r, \alpha, \beta)$ are known, the values of $(s, \xi, \eta)$ are determined by using the reduced rank regression analysis, regressing $\hat{\alpha}_\perp'\Delta^2 y_t$ on $\hat{\beta}_\perp'\Delta y_{t-1}$ corrected for $\Delta^2 y_{t-1}, \ldots, \Delta^2 y_{t-p+2}$, $D_t$, and $\hat{\beta}'\Delta y_{t-1}$. The reduced rank regression analysis reduces to the solution of an eigenvalue problem for the equation

$$ |\rho M_{\beta_\perp\beta_\perp.\beta} - M_{\beta_\perp\alpha_\perp.\beta}M_{\alpha_\perp\alpha_\perp.\beta}^{-1}M_{\alpha_\perp\beta_\perp.\beta}| = 0 $$

where

$$ M_{\beta_\perp\beta_\perp.\beta} = \beta_\perp'(M_{11} - M_{11}\beta(\beta'M_{11}\beta)^{-1}\beta'M_{11})\beta_\perp $$
$$ M_{\beta_\perp\alpha_\perp.\beta}' = M_{\alpha_\perp\beta_\perp.\beta} = \bar{\alpha}_\perp'(M_{01} - M_{01}\beta(\beta'M_{11}\beta)^{-1}\beta'M_{11})\beta_\perp $$
$$ M_{\alpha_\perp\alpha_\perp.\beta} = \bar{\alpha}_\perp'(M_{00} - M_{01}\beta(\beta'M_{11}\beta)^{-1}\beta'M_{10})\bar{\alpha}_\perp $$

where $\bar{\alpha} = \alpha(\alpha'\alpha)^{-1}$.

The solution gives eigenvalues $1 > \rho_1 > \cdots > \rho_s > 0$ and eigenvectors $(v_1, \ldots, v_s)$. Then, the ML estimators are

$$ \hat{\eta} = (v_1, \ldots, v_s) $$
$$ \hat{\xi} = M_{\alpha_\perp\beta_\perp.\beta}\hat{\eta} $$

The likelihood ratio test for the reduced rank model $H_{r,s}$ with rank $\le s$ in the model $H_{r,k-r} = H_r^0$ is given by

$$ Q_{r,s} = -T\sum_{i=s+1}^{k-r}\log(1-\rho_i), \quad s = 0, \ldots, k-r-1 $$

The following statements compute the rank test to test for cointegrated order 2:

proc varmax data=simul2;
   model y1 y2 / p=2 cointtest=(johansen=(iorder=2));
run;

The last two columns in Figure 30.60 explain the cointegration rank test with integrated order 1. The results indicate that the series are cointegrated with cointegration rank 1 at the 0.05 significance level, because the test statistic of 0.5552 is smaller than the critical value of 3.84. Now, look at the row associated with $r = 1$. Compare the test statistic value, 211.84512, to the critical value, 3.84, for the cointegrated order 2. There is no evidence that the series are integrated of order 2 at the 0.05 significance level.

Figure 30.60 Cointegrated I(2) Test (IORDER= Option)

The VARMAX Procedure

Cointegration Rank Test for I(2)

r \ k-r-s            2            1   Trace of I(1)   5% CV of I(1)
0            720.40735    308.69199         61.7522           15.34
1                         211.84512          0.5552            3.84
5% CV I(2)    15.34000      3.84000

Multivariate GARCH Modeling

Stochastic volatility modeling is important in many areas, particularly in finance. To study the volatility of time series, GARCH models are widely used because they provide a good approach to conditional variance modeling.

BEKK Representation

Engle and Kroner (1995) propose a general multivariate GARCH model and call it a BEKK representation. Let $F(t-1)$ be the sigma field generated by the past values of $\epsilon_t$, and let $H_t$ be the conditional covariance matrix of the $k$-dimensional random vector $\epsilon_t$. Let $H_t$ be measurable with respect to $F(t-1)$; then the multivariate GARCH model can be written as

$$ \epsilon_t | F(t-1) \sim N(0, H_t) $$
$$ H_t = C + \sum_{i=1}^{q} A_i'\epsilon_{t-i}\epsilon_{t-i}'A_i + \sum_{i=1}^{p} G_i' H_{t-i} G_i $$

where $C$, $A_i$, and $G_i$ are $k \times k$ parameter matrices.


Consider a bivariate GARCH(1,1) model as follows:

$$ H_t = \begin{pmatrix} c_{11} & c_{12} \\ c_{12} & c_{22} \end{pmatrix} + \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}' \begin{pmatrix} \epsilon_{1,t-1}^2 & \epsilon_{1,t-1}\epsilon_{2,t-1} \\ \epsilon_{2,t-1}\epsilon_{1,t-1} & \epsilon_{2,t-1}^2 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} + \begin{pmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{pmatrix}' H_{t-1} \begin{pmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{pmatrix} $$

or, writing out the univariate equations,

$$ h_{11,t} = c_{11} + a_{11}^2\epsilon_{1,t-1}^2 + 2a_{11}a_{21}\epsilon_{1,t-1}\epsilon_{2,t-1} + a_{21}^2\epsilon_{2,t-1}^2 + g_{11}^2 h_{11,t-1} + 2g_{11}g_{21}h_{12,t-1} + g_{21}^2 h_{22,t-1} $$

$$ h_{12,t} = c_{12} + a_{11}a_{12}\epsilon_{1,t-1}^2 + (a_{21}a_{12} + a_{11}a_{22})\epsilon_{1,t-1}\epsilon_{2,t-1} + a_{21}a_{22}\epsilon_{2,t-1}^2 + g_{11}g_{12}h_{11,t-1} + (g_{21}g_{12} + g_{11}g_{22})h_{12,t-1} + g_{21}g_{22}h_{22,t-1} $$

$$ h_{22,t} = c_{22} + a_{12}^2\epsilon_{1,t-1}^2 + 2a_{12}a_{22}\epsilon_{1,t-1}\epsilon_{2,t-1} + a_{22}^2\epsilon_{2,t-1}^2 + g_{12}^2 h_{11,t-1} + 2g_{12}g_{22}h_{12,t-1} + g_{22}^2 h_{22,t-1} $$

For the BEKK representation of the bivariate GARCH(1,1) model, the SAS statements are:

model y1 y2;
garch q=1 p=1 form=bekk;

CCC Representation

Bollerslev (1990) proposes a multivariate GARCH model with time-varying conditional variances and covariances but constant conditional correlations. The conditional covariance matrix $H_t$ consists of

$$ H_t = D_t \Gamma D_t $$

where $D_t$ is a $k \times k$ stochastic diagonal matrix with elements $\sigma_{it}$ and $\Gamma$ is a $k \times k$ time-invariant matrix with the typical element $\rho_{ij}$. The elements of $H_t$ are

$$ h_{ii,t} = c_i + \sum_{l=1}^{q} a_{ii,l}\epsilon_{i,t-l}^2 + \sum_{l=1}^{p} g_{ii,l} h_{ii,t-l}, \quad i = 1, \ldots, k $$

$$ h_{ij,t} = \rho_{ij}(h_{ii,t}h_{jj,t})^{1/2}, \quad i \ne j, \quad i,j = 1, \ldots, k $$
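For the CCC representation of a bivariate GARCH(1,1) model, the analogous SAS statements use the FORM=CCC option in the GARCH statement (a sketch that parallels the BEKK statements shown earlier):

model y1 y2;
garch q=1 p=1 form=ccc;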


Estimation of GARCH Model

The log-likelihood function of the multivariate GARCH model is written without a constant term as

$$ \ell = -\frac{1}{2}\sum_{t=1}^{T}\left[\log|H_t| + \epsilon_t' H_t^{-1}\epsilon_t\right] $$

The log-likelihood function is maximized by an iterative numerical method such as quasi-Newton optimization. The starting values for the regression parameters are obtained from the least squares estimates. The covariance of $\epsilon_t$ is used as the starting values for the GARCH constant parameters, and the starting value used for the other GARCH parameters is either $10^{-6}$ or $10^{-3}$, depending on the GARCH model's representation. For the identification of the parameters of a BEKK representation GARCH model, the diagonal elements of the GARCH constant, the ARCH, and the GARCH parameters are restricted to be positive.

Covariance Stationarity

Define the multivariate GARCH process as

$$ h_t = \sum_{i=1}^{\infty} G(B)^{i-1}[c + A(B)\eta_t] $$

where $h_t = \mathrm{vec}(H_t)$, $c = \mathrm{vec}(C_0)$, and $\eta_t = \mathrm{vec}(\epsilon_t\epsilon_t')$. This representation is equivalent to a GARCH($p,q$) model by the following algebra:

$$ h_t = c + A(B)\eta_t + \sum_{i=2}^{\infty} G(B)^{i-1}[c + A(B)\eta_t] $$
$$ \;\;\; = c + A(B)\eta_t + G(B)\sum_{i=1}^{\infty} G(B)^{i-1}[c + A(B)\eta_t] $$
$$ \;\;\; = c + A(B)\eta_t + G(B)h_t $$

Defining $A(B) = \sum_{i=1}^{q}(A_i \otimes A_i)'B^i$ and $G(B) = \sum_{i=1}^{p}(G_i \otimes G_i)'B^i$ gives a BEKK representation.

The necessary and sufficient condition for covariance stationarity of the multivariate GARCH process is that all the eigenvalues of $A(1) + G(1)$ are less than one in modulus.
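As an illustration of this check, the following PROC IML statements sketch the eigenvalue computation for the ARCH(1) model fit later in this section (so that $G(1) = 0$); the matrix A1 holds the ARCH estimates reported in Figure 30.63, and the resulting moduli agree with the GARCH characteristic roots reported in Figure 30.65:

proc iml;
   /* Covariance stationarity check: eigenvalues of A(1)+G(1), modulus < 1. */
   /* ARCH(1) only, so G(1) = 0; A1 holds the estimates from Figure 30.63.  */
   A1 = {0.37757 0.05649,
         0.03216 0.71002};
   M  = t(A1 @ A1);                  /* A(1) = (A1 (x) A1)'                  */
   ev = eigval(M);                   /* eigenvalues of a general matrix      */
   if ncol(ev) = 1 then modulus = abs(ev);       /* all eigenvalues real     */
   else modulus = sqrt(ev[,1]##2 + ev[,2]##2);   /* complex pairs            */
   print modulus;                    /* all values < 1: stationary           */
quit;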

An Example of a VAR(1)–ARCH(1) Model

The following DATA step simulates a bivariate vector time series to provide test data for the multivariate GARCH model:

data garch;
   retain seed 16587;
   esq1 = 0; esq2 = 0;
   ly1 = 0; ly2 = 0;
   do i = 1 to 1000;
      ht = 6.25 + 0.5*esq1;
      call rannor(seed,ehat);
      e1 = sqrt(ht)*ehat;
      ht = 1.25 + 0.7*esq2;
      call rannor(seed,ehat);
      e2 = sqrt(ht)*ehat;
      y1 = 2 + 1.2*ly1 - 0.5*ly2 + e1;
      y2 = 4 + 0.6*ly1 + 0.3*ly2 + e2;
      if i>500 then output;
      esq1 = e1*e1; esq2 = e2*e2;
      ly1 = y1; ly2 = y2;
   end;
   keep y1 y2;
run;

The following statements fit a VAR(1)–ARCH(1) model to the data. For a VAR–ARCH model, you specify the order of the autoregressive model with the P=1 option in the MODEL statement and the Q=1 option in the GARCH statement. In order to produce the initial and final values of the parameters, the TECH=QN option is specified in the NLOPTIONS statement.

proc varmax data=garch;
   model y1 y2 / p=1 print=(roots estimates diagnose);
   garch q=1;
   nloptions tech=qn;
run;

Figure 30.61 through Figure 30.65 show the details of this example. Figure 30.61 shows the initial values of parameters.


Figure 30.61 Start Parameter Estimates for the VAR(1)–ARCH(1) Model

The VARMAX Procedure

Optimization Start
Parameter Estimates

 N   Parameter    Estimate   Gradient Objective Function
 1   CONST1       2.249575     5.787988
 2   CONST2       3.902673    -4.856056
 3   AR1_1_1      1.231775   -17.155796
 4   AR1_2_1      0.576890    23.991176
 5   AR1_1_2     -0.528405    14.656979
 6   AR1_2_2      0.343714   -12.763695
 7   GCHC1_1      9.929763    -0.111361
 8   GCHC1_2      0.193163    -0.684986
 9   GCHC2_2      4.063245     0.139403
10   ACH1_1_1     0.001000    -0.668058
11   ACH1_2_1     0           -0.068657
12   ACH1_1_2     0           -0.735896
13   ACH1_2_2     0.001000    -3.126628

Figure 30.62 shows the final parameter estimates.

Figure 30.62 Results of Parameter Estimates for the VAR(1)–ARCH(1) Model

The VARMAX Procedure

Optimization Results
Parameter Estimates

 N   Parameter    Estimate
 1   CONST1       1.943991
 2   CONST2       4.073898
 3   AR1_1_1      1.220945
 4   AR1_2_1      0.608263
 5   AR1_1_2     -0.527121
 6   AR1_2_2      0.303012
 7   GCHC1_1      8.359045
 8   GCHC1_2     -0.182483
 9   GCHC2_2      1.602739
10   ACH1_1_1     0.377569
11   ACH1_2_1     0.032158
12   ACH1_1_2     0.056491
13   ACH1_2_2     0.710023


Figure 30.63 shows the conditional variance using the BEKK representation of the ARCH(1) model. The ARCH parameters are estimated by the vectorized parameter matrices.

$$ \epsilon_t | F(t-1) \sim N(0, H_t) $$

$$ H_t = \begin{pmatrix} 8.35905 & -0.18250 \\ -0.18250 & 1.60275 \end{pmatrix} + \begin{pmatrix} 0.37757 & 0.05649 \\ 0.03216 & 0.71002 \end{pmatrix}' \epsilon_{t-1}\epsilon_{t-1}' \begin{pmatrix} 0.37757 & 0.05649 \\ 0.03216 & 0.71002 \end{pmatrix} $$

Figure 30.63 ARCH(1) Parameter Estimates for the VAR(1)–ARCH(1) Model

The VARMAX Procedure

Type of Model         VAR(1)-ARCH(1)
Estimation Method     Maximum Likelihood Estimation
Representation Type   BEKK

GARCH Model Parameter Estimates

Parameter    Estimate   Standard Error   t Value   Pr > |t|
GCHC1_1       8.35905   0.73116            11.43     0.0001
GCHC1_2      -0.18248   0.21706            -0.84     0.4009
GCHC2_2       1.60274   0.19398             8.26     0.0001
ACH1_1_1      0.37757   0.07470             5.05     0.0001
ACH1_2_1      0.03216   0.06971             0.46     0.6448
ACH1_1_2      0.05649   0.02622             2.15     0.0317
ACH1_2_2      0.71002   0.06844            10.37     0.0001

Figure 30.64 shows the AR parameter estimates and their significance. The fitted VAR(1) model with the previous conditional covariance ARCH model is written as follows:

$$ y_t = \begin{pmatrix} 1.94399 \\ 4.07390 \end{pmatrix} + \begin{pmatrix} 1.22094 & -0.52712 \\ 0.60826 & 0.30301 \end{pmatrix} y_{t-1} + \epsilon_t $$


Figure 30.64 VAR(1) Parameter Estimates for the VAR(1)–ARCH(1) Model

Model Parameter Estimates

Equation  Parameter   Estimate   Standard Error   t Value   Pr > |t|   Variable
y1        CONST1       1.94399   0.21017             9.25     0.0001   1
          AR1_1_1      1.22095   0.02564            47.63     0.0001   y1(t-1)
          AR1_1_2     -0.52712   0.02836           -18.59     0.0001   y2(t-1)
y2        CONST2       4.07390   0.10574            38.53     0.0001   1
          AR1_2_1      0.60826   0.01231            49.42     0.0001   y1(t-1)
          AR1_2_2      0.30301   0.01498            20.23     0.0001   y2(t-1)

1 y1(t-1) y2(t-1) 1 y1(t-1) y2(t-1)

Figure 30.65 shows the roots of the AR and ARCH characteristic polynomials. The eigenvalues have a modulus less than one. Figure 30.65 Roots for the VAR(1)–ARCH(1) Model Roots of AR Characteristic Polynomial Index

Real

Imaginary

Modulus

Radian

Degree

1 2

0.76198 0.76198

0.33163 -0.33163

0.8310 0.8310

0.4105 -0.4105

23.5197 -23.5197

Roots of GARCH Characteristic Polynomial Index

Real

Imaginary

Modulus

Radian

Degree

1 2 3 4

0.51180 0.26627 0.26627 0.13853

0.00000 0.00000 0.00000 0.00000

0.5118 0.2663 0.2663 0.1385

0.0000 0.0000 0.0000 0.0000

0.0000 0.0000 0.0000 0.0000

Output Data Sets

The VARMAX procedure can create the OUT=, OUTEST=, OUTHT=, and OUTSTAT= data sets. In general, if processing fails, the output is not recorded or is set to missing in the relevant output data set, and appropriate error and/or warning messages are recorded in the log.

OUT= Data Set

The OUT= data set contains the forecast values produced by the OUTPUT statement. The following output variables can be created:


• the BY variables
• the ID variable
• the MODEL statement dependent (endogenous) variables. These variables contain the actual values from the input data set.
• FORi, numeric variables that contain the forecasts. The FORi variables contain the forecasts for the ith endogenous variable in the MODEL statement list. Forecasts are one-step-ahead predictions until the end of the data or until the observation specified by the BACK= option. Multistep forecasts can be computed after that point based on the LEAD= option.
• RESi, numeric variables that contain the residual for the forecast of the ith endogenous variable in the MODEL statement list. For multistep forecast observations, the actual values are missing and the RESi variables contain missing values.
• STDi, numeric variables that contain the standard deviation for the forecast of the ith endogenous variable in the MODEL statement list. The values of the STDi variables can be used to construct univariate confidence limits for the corresponding forecasts.
• LCIi, numeric variables that contain the lower confidence limits for the corresponding forecasts of the ith endogenous variable in the MODEL statement list.
• UCIi, numeric variables that contain the upper confidence limits for the corresponding forecasts of the ith endogenous variable in the MODEL statement list.

The OUT= data set contains the values shown in Table 30.3 and Table 30.4 for a bivariate case.

Table 30.3 OUT= Data Set

Obs   ID variable   y1    FOR1   RES1   STD1   LCI1   UCI1
1     date          y11   f11    r11    σ11    l11    u11
2     date          y12   f12    r12    σ11    l12    u12
...

Table 30.4 OUT= Data Set Continued

Obs   y2    FOR2   RES2   STD2   LCI2   UCI2
1     y21   f21    r21    σ22    l21    u21
2     y22   f22    r22    σ22    l22    u22
...

Consider the following example:

proc varmax data=simul1 noprint;
   id date interval=year;
   model y1 y2 / p=1 noint;
   output out=out lead=5;
run;


proc print data=out(firstobs=98);
run;

The output in Figure 30.66 shows part of the results of the OUT= data set for the preceding example.

Figure 30.66 OUT= Data Set

Obs   date   y1         FOR1       RES1       STD1      LCI1       UCI1
 98   1997   -0.58433   -0.13500   -0.44934   1.13523   -2.36001    2.09002
 99   1998   -2.07170   -1.00649   -1.06522   1.13523   -3.23150    1.21853
100   1999   -3.38342   -2.58612   -0.79730   1.13523   -4.81113   -0.36111
101   2000   .          -3.59212   .          1.13523   -5.81713   -1.36711
102   2001   .          -3.09448   .          1.70915   -6.44435    0.25539
103   2002   .          -2.17433   .          2.14472   -6.37792    2.02925
104   2003   .          -1.11395   .          2.43166   -5.87992    3.65203
105   2004   .          -0.14342   .          2.58740   -5.21463    4.92779

Obs   y2         FOR2       RES2      STD2      LCI2       UCI2
 98    0.64397   -0.34932   0.99329   1.19096   -2.68357   1.98492
 99    0.35925   -0.07132   0.43057   1.19096   -2.40557   2.26292
100   -0.64999   -0.99354   0.34355   1.19096   -3.32779   1.34070
101   .          -2.09873   .         1.19096   -4.43298   0.23551
102   .          -2.77050   .         1.47666   -5.66469   0.12369
103   .          -2.75724   .         1.74212   -6.17173   0.65725
104   .          -2.24943   .         2.01925   -6.20709   1.70823
105   .          -1.47460   .         2.25169   -5.88782   2.93863

OUTEST= Data Set

The OUTEST= data set contains estimation results of the fitted model produced by the VARMAX statement. The following output variables can be created:

• the BY variables
• NAME, a character variable that contains the name of endogenous (dependent) variables or the name of the parameters for the covariance of the matrix of the parameter estimates if the OUTCOV option is specified
• TYPE, a character variable that contains the value EST for parameter estimates, the value STD for standard errors of parameter estimates, and the value COV for the covariance of the matrix of the parameter estimates if the OUTCOV option is specified
• CONST, a numeric variable that contains the estimates of constant parameters and their standard errors
• SEASON_i, numeric variables that contain the estimates of seasonal dummy parameters and their standard errors, where i = 1, ..., (nseason-1), and nseason is based on the NSEASON= option
• LTREND, a numeric variable that contains the estimates of linear trend parameters and their standard errors
• QTREND, a numeric variable that contains the estimates of quadratic trend parameters and their standard errors
• XLl_i, numeric variables that contain the estimates of exogenous parameters and their standard errors, where l is the lag lth coefficient matrix and i = 1, ..., r, where r is the number of exogenous variables
• ARl_i, numeric variables that contain the estimates of autoregressive parameters and their standard errors, where l is the lag lth coefficient matrix and i = 1, ..., k, where k is the number of endogenous variables
• MAl_i, numeric variables that contain the estimates of moving-average parameters and their standard errors, where l is the lag lth coefficient matrix and i = 1, ..., k, where k is the number of endogenous variables
• ACHl_i, numeric variables that contain the estimates of the ARCH parameters of the covariance matrix and their standard errors, where l is the lag lth coefficient matrix and i = 1, ..., k for BEKK and CCC representations, where k is the number of endogenous variables
• GCHl_i, numeric variables that contain the estimates of the GARCH parameters of the covariance matrix and their standard errors, where l is the lag lth coefficient matrix and i = 1, ..., k for BEKK and CCC representations, where k is the number of endogenous variables
• GCHC_i, numeric variables that contain the estimates of the constant parameters of the covariance matrix and their standard errors, where i = 1, ..., k for the BEKK representation, k is the number of endogenous variables, and i = 1 for the CCC representation
• CCC_i, numeric variables that contain the estimates of the conditional constant correlation parameters for the CCC representation, where i = 2, ..., k

The OUTEST= data set contains the values shown in Table 30.5 for a bivariate case.

Table 30.5 OUTEST= Data Set

Obs   NAME   TYPE   CONST     AR1_1       AR1_2       AR2_1       AR2_2
1     y1     EST    δ1        φ1,11       φ1,12       φ2,11       φ2,12
2            STD    se(δ1)    se(φ1,11)   se(φ1,12)   se(φ2,11)   se(φ2,12)
3     y2     EST    δ2        φ1,21       φ1,22       φ2,21       φ2,22
4            STD    se(δ2)    se(φ1,21)   se(φ1,22)   se(φ2,21)   se(φ2,22)

Consider the following example:

proc varmax data=simul2 outest=est;
   model y1 y2 / p=2 noint
                 ecm=(rank=1 normalize=y1)
                 noprint;
run;

proc print data=est;
run;

The output in Figure 30.67 shows the results of the OUTEST= data set.

Figure 30.67 OUTEST= Data Set

Obs   NAME   TYPE   AR1_1      AR1_2      AR2_1      AR2_2
1     y1     EST    -0.46680    0.91295   -0.74332   -0.74621
2            STD     0.04786    0.09359    0.04526    0.04769
3     y2     EST     0.10667   -0.20862    0.40493   -0.57157
4            STD     0.05146    0.10064    0.04867    0.05128

OUTHT= Data Set

The OUTHT= data set contains the prediction of the fitted GARCH model produced by the GARCH statement. The following output variables can be created:

• the BY variables
• Hi_j, numeric variables that contain the prediction of covariance, where 1 ≤ i ≤ j ≤ k, where k is the number of dependent variables

The OUTHT= data set contains the values shown in Table 30.6 for a bivariate case.

Table 30.6 OUTHT= Data Set

Obs   H1_1   H1_2   H2_2
1     h111   h121   h221
2     h112   h122   h222
...

Consider the following example of the OUTHT= option:

proc varmax data=garch;
   model y1 y2 / p=1 print=(roots estimates diagnose);
   garch q=1 outht=ht;
run;

proc print data=ht(firstobs=495);
run;


The output in Figure 30.68 shows part of the OUTHT= data set.

Figure 30.68 OUTHT= Data Set

Obs   h1_1      h1_2       h2_2
495   9.36568   -1.10406   2.44644
496   8.46807   -0.17464   1.60330
497   9.19686    0.09762   1.69639
498   8.40787   -0.33463   2.07687
499   8.88429    0.03646   1.69401
500   8.60844   -0.40260   1.79703

OUTSTAT= Data Set

The OUTSTAT= data set contains estimation results of the fitted model produced by the VARMAX statement. The following output variables can be created. The subindex i is 1, ..., k, where k is the number of endogenous variables.

• the BY variables
• NAME, a character variable that contains the name of endogenous (dependent) variables
• SIGMA_i, numeric variables that contain the estimate of the innovation covariance matrix
• AICC, a numeric variable that contains the corrected Akaike information criterion value
• HQC, a numeric variable that contains the Hannan-Quinn information criterion value
• AIC, a numeric variable that contains the Akaike information criterion value
• SBC, a numeric variable that contains the Schwarz Bayesian information criterion value
• FPEC, a numeric variable that contains the final prediction error criterion value
• FValue, a numeric variable that contains the F statistics
• PValue, a numeric variable that contains the p-value for the F statistics

If the JOHANSEN= option is specified, the following items are added:

• Eigenvalue, a numeric variable that contains eigenvalues for the cointegration rank test of integrated order 1
• RestrictedEigenvalue, a numeric variable that contains eigenvalues for the cointegration rank test of integrated order 1 when the NOINT option is not specified
• Beta_i, numeric variables that contain long-run effect parameter estimates, β
• Alpha_i, numeric variables that contain adjustment parameter estimates, α

If the JOHANSEN=(IORDER=2) option is specified, the following items are added:

• EValueI2_i, numeric variables that contain eigenvalues for the cointegration rank test of integrated order 2
• EValueI1, a numeric variable that contains eigenvalues for the cointegration rank test of integrated order 1
• Eta_i, numeric variables that contain the parameter estimates in integrated order 2, η
• Xi_i, numeric variables that contain the parameter estimates in integrated order 2, ξ

The OUTSTAT= data set contains the values shown in Table 30.7 for a bivariate case.

Table 30.7 OUTSTAT= Data Set

Obs   NAME   SIGMA_1   SIGMA_2   AICC   RSquare   FValue   PValue
1     y1     σ11       σ12       aicc   R²1       F1       prob1
2     y2     σ21       σ22       .      R²2       F2       prob2

Obs   EValueI2_1   EValueI2_2   EValueI1   Beta_1   Beta_2
1     e11          e12          e1         β11      β12
2     e21          .            e2         β21      β22

Obs   Alpha_1   Alpha_2   Eta_1   Eta_2   Xi_1   Xi_2
1     α11       α12       η11     η12     ξ11    ξ12
2     α21       α22       η21     η22     ξ21    ξ22

Consider the following example:

proc varmax data=simul2 outstat=stat;
   model y1 y2 / p=2 noint
                 cointtest=(johansen=(iorder=2))
                 ecm=(rank=1 normalize=y1) noprint;
run;

proc print data=stat;
run;

The output in Figure 30.69 shows the results of the OUTSTAT= data set.


Figure 30.69 OUTSTAT= Data Set

Obs   NAME   SIGMA_1   SIGMA_2   AICC      HQC       AIC       SBC       FPEC
1     y1     94.7557     4.527   9.37221   9.43236   9.36834   9.52661   11712.14
2     y2      4.5268   109.570   .         .         .         .         .

Obs   RSquare   FValue    PValue
1     0.93905   482.782   5.9027E-57
2     0.94085   498.423   1.4445E-57

Obs   EValueI2_1   EValueI2_2   EValueI1   Beta_1     Beta_2
1     0.98486      0.95079      0.50864     1.00000    1.00000
2     0.81451      .            0.01108    -1.95575   -1.33622

Obs   Alpha_1    Alpha_2    Eta_1       Eta_2      Xi_1       Xi_2
1     -0.46680   0.007937   -0.012307   0.027030    54.1606   -52.3144
2      0.10667   0.033530    0.015555   0.023086   -79.4240   -18.3308

Printed Output

The default printed output produced by the VARMAX procedure is described in the following list:

• descriptive statistics, which include the number of observations used, the names of the variables, their means and standard deviations (STD), their minimums and maximums, the differencing operations used, and the labels of the variables
• the type of model fit to the data and the estimation method
• a table of parameter estimates that shows the following for each parameter: the variable name for the left-hand side of the equation, the parameter name, the parameter estimate, the approximate standard error, the t value, the approximate probability (Pr > |t|), and the variable name for the right-hand side of the equation in terms of each parameter
• the innovation covariance matrix
• the information criteria

If PRINT=ESTIMATES is specified, the VARMAX procedure prints the following list in addition to the default printed output:

• the estimates of the constant vector (or seasonal constant matrix), the trend vector, the coefficient matrices of the distributed lags, the AR coefficient matrices, and the MA coefficient matrices
• the ALPHA and BETA parameter estimates for the error correction model
• the schematic representation of parameter estimates


If PRINT=DIAGNOSE is specified, the VARMAX procedure prints the following list in addition to the default printed output:

• the cross-covariance and cross-correlation matrices of the residuals
• the tables of test statistics for the hypothesis that the residuals of the model are white noise:
  - Durbin-Watson (DW) statistics
  - F test for autoregressive conditional heteroscedastic (ARCH) disturbances
  - F test for AR disturbance
  - Jarque-Bera normality test
  - Portmanteau test

ODS Table Names

The VARMAX procedure assigns a name to each table it creates. You can use these names to reference the table when using the Output Delivery System (ODS) to select tables and create output data sets. These names are listed in the following table:

Table 30.8 ODS Tables Produced in the VARMAX Procedure

ODS Tables Created by the MODEL Statement

ODS Table Name        Description                                                  Option
AccumImpulse          Accumulated impulse response matrices                        IMPULSE=(ACCUM) or IMPULSE=(ALL)
AccumImpulsebyVar     Accumulated impulse response by variable                     IMPULSE=(ACCUM) or IMPULSE=(ALL)
AccumImpulseX         Accumulated transfer function matrices                       IMPULSX=(ACCUM) or IMPULSX=(ALL)
AccumImpulseXbyVar    Accumulated transfer function by variable                    IMPULSX=(ACCUM) or IMPULSX=(ALL)
Alpha                 α coefficients                                               JOHANSEN=
AlphaInECM            α coefficients when rank=r                                   ECM=
AlphaOnDrift          α coefficients under the restriction of a deterministic term JOHANSEN=
AlphaBetaInECM        Π = αβ' coefficients when rank=r                             ECM=
ANOVA                 Univariate model diagnostic checks for the residuals         PRINT=DIAGNOSE
ARCoef                AR coefficients                                              P=
ARRoots               Roots of AR characteristic polynomial                        ROOTS with P=
Beta                  β coefficients                                               JOHANSEN=
BetaInECM             β coefficients when rank=r                                   ECM=
BetaOnDrift           β coefficients under the restriction of a deterministic term JOHANSEN=
Constant              Constant estimates                                           without NOINT
CorrB                 Correlations of parameter estimates                          CORRB
CorrResiduals         Correlations of residuals                                    PRINT=DIAGNOSE
CorrResidualsbyVar    Correlations of residuals by variable                        PRINT=DIAGNOSE
CorrResidualsGraph    Schematic representation of correlations of residuals        PRINT=DIAGNOSE
CorrXGraph            Schematic representation of sample correlations of independent series   CORRX
CorrYGraph            Schematic representation of sample correlations of dependent series     CORRY
CorrXLags             Correlations of independent series                           CORRX
CorrXbyVar            Correlations of independent series by variable               CORRX
CorrYLags             Correlations of dependent series                             CORRY
CorrYbyVar            Correlations of dependent series by variable                 CORRY
CovB                  Covariances of parameter estimates                           COVB
CovInnovation         Covariances of the innovations                               default
CovPredictError       Covariance matrices of the prediction error                  COVPE
CovPredictErrorbyVar  Covariances of the prediction error by variable              COVPE
CovResiduals          Covariances of residuals                                     PRINT=DIAGNOSE
CovResidualsbyVar     Covariances of residuals by variable                         PRINT=DIAGNOSE
CovXLags              Covariances of independent series                            COVX
CovXbyVar             Covariances of independent series by variable                COVX
CovYLags              Covariances of dependent series                              COVY
CovYbyVar             Covariances of dependent series by variable                  COVY
DecomposeCovPredictError       Decomposition of the prediction error covariances             DECOMPOSE
DecomposeCovPredictErrorbyVar  Decomposition of the prediction error covariances by variable DECOMPOSE
DFTest                Dickey-Fuller test                                           DFTEST
DiagnostAR            Test the AR disturbance for the residuals                    PRINT=DIAGNOSE
DiagnostWN            Test the ARCH disturbance and normality for the residuals    PRINT=DIAGNOSE
DynamicARCoef         AR coefficients of the dynamic model                         DYNAMIC
DynamicConstant       Constant estimates of the dynamic model                      DYNAMIC
DynamicCovInnovation  Covariances of the innovations of the dynamic model          DYNAMIC
DynamicLinearTrend    Linear trend estimates of the dynamic model                  DYNAMIC
DynamicMACoef         MA coefficients of the dynamic model                         DYNAMIC
DynamicSConstant      Seasonal constant estimates of the dynamic model             DYNAMIC
DynamicParameterEstimates   Parameter estimates table of the dynamic model         DYNAMIC
DynamicParameterGraph Schematic representation of the parameters of the dynamic model   DYNAMIC
DynamicQuadTrend      Quadratic trend estimates of the dynamic model               DYNAMIC
DynamicSeasonGraph    Schematic representation of the seasonal dummies of the dynamic model   DYNAMIC
DynamicXLagCoef       Dependent coefficients of the dynamic model                  DYNAMIC
Hypothesis            Hypothesis of different deterministic terms in cointegration rank test   JOHANSEN=
HypothesisTest        Test hypothesis of different deterministic terms in cointegration rank test   JOHANSEN=
EigenvalueI2          Eigenvalues in integrated order 2                            JOHANSEN=(IORDER=2)
Eta                   η coefficients                                               JOHANSEN=(IORDER=2)
InfiniteARRepresent   Infinite order AR representation                             IARR
InfoCriteria          Information criteria                                         default
LinearTrend           Linear trend estimates                                       TREND=
MACoef                MA coefficients                                              Q=
MARoots               Roots of MA characteristic polynomial                        ROOTS with Q=
MaxTest               Cointegration rank test using the maximum eigenvalue         JOHANSEN=(TYPE=MAX)
Minic                 Tentative order selection                                    MINIC or MINIC=
ModelType             Type of model                                                default
NObs                  Number of observations                                       default
OrthoImpulse          Orthogonalized impulse response matrices                     IMPULSE=(ORTH) or IMPULSE=(ALL)
OrthoImpulsebyVar     Orthogonalized impulse response by variable                  IMPULSE=(ORTH) or IMPULSE=(ALL)
ParameterEstimates    Parameter estimates table                                    default
ParameterGraph        Schematic representation of the parameters                   PRINT=ESTIMATES
PartialAR             Partial autoregression matrices                              PARCOEF
PartialARGraph        Schematic representation of partial autoregression           PARCOEF
PartialCanCorr        Partial canonical correlation analysis                       PCANCORR
PartialCorr           Partial cross-correlation matrices                           PCORR
PartialCorrbyVar      Partial cross-correlations by variable                       PCORR
PartialCorrGraph      Schematic representation of partial cross-correlations       PCORR
PortmanteauTest       Chi-square test table for residual cross-correlations        PRINT=DIAGNOSE
ProportionCovPredictError      Proportions of prediction error covariance decomposition             DECOMPOSE
ProportionCovPredictErrorbyVar Proportions of prediction error covariance decomposition by variable DECOMPOSE
RankTestI2            Cointegration rank test in integrated order 2                JOHANSEN=(IORDER=2)
RestrictMaxTest       Cointegration rank test using the maximum eigenvalue under the restriction of a deterministic term   JOHANSEN=(TYPE=MAX) without NOINT
RestrictTraceTest     Cointegration rank test using the trace under the restriction of a deterministic term   JOHANSEN=(TYPE=TRACE) without NOINT
QuadTrend             Quadratic trend estimates                                    TREND=QUAD
SeasonGraph           Schematic representation of the seasonal dummies             PRINT=ESTIMATES
SConstant             Seasonal constant estimates                                  NSEASON=
SimpleImpulse         Impulse response matrices                                    IMPULSE=(SIMPLE) or IMPULSE=(ALL)
SimpleImpulsebyVar    Impulse response by variable                                 IMPULSE=(SIMPLE) or IMPULSE=(ALL)
SimpleImpulseX        Impulse response matrices of transfer function               IMPULSX=(SIMPLE) or IMPULSX=(ALL)
SimpleImpulseXbyVar   Impulse response of transfer function by variable            IMPULSX=(SIMPLE) or IMPULSX=(ALL)
Summary               Simple summary statistics                                    default
SWTest                Common trends test                                           SW=
TraceTest             Cointegration rank test using the trace                      JOHANSEN=(TYPE=TRACE)
Xi                    ξ coefficient matrix                                         JOHANSEN=(IORDER=2)
XLagCoef              Dependent coefficients                                       XLAG=
YWEstimates           Yule-Walker estimates                                        YW

ODS Tables Created by the GARCH Statement

ARCHCoef              ARCH coefficients                                            Q=
GARCHCoef             GARCH coefficients                                           P=
GARCHConstant         GARCH constant estimates                                     PRINT=ESTIMATES
GARCHParameterEstimates   GARCH parameter estimates table                          default
GARCHParameterGraph   Schematic representation of the GARCH parameters             PRINT=ESTIMATES
GARCHRoots            Roots of GARCH characteristic polynomial                     ROOTS

ODS Tables Created by the COINTEG Statement or the ECM Option

AlphaInECM            α coefficients when rank=r                                   PRINT=ESTIMATES
AlphaBetaInECM        Π = αβ' coefficients when rank=r                             PRINT=ESTIMATES
AlphaOnAlpha          α coefficients under the restriction of α                    J=
AlphaOnBeta           α coefficients under the restriction of β                    H=
AlphaTestResults      Hypothesis testing of α                                      J=
BetaInECM             β coefficients when rank=r                                   PRINT=ESTIMATES
BetaOnBeta            β coefficients under the restriction of β                    H=
BetaOnAlpha           β coefficients under the restriction of α                    J=
BetaTestResults       Hypothesis testing of β                                      H=
GrangerRepresent      Coefficient of Granger representation                        PRINT=ESTIMATES
HMatrix               Restriction matrix for β                                     H=
JMatrix               Restriction matrix for α                                     J=
WeakExogeneity        Testing weak exogeneity of each dependent variable with respect to BETA   EXOGENEITY

ODS Tables Created by the CAUSAL Statement

CausalityTest         Granger causality test                                       default
GroupVars             Two groups of variables                                      default

ODS Tables Created by the RESTRICT Statement

Restrict              Restriction table                                            default

ODS Tables Created by the TEST Statement

Test                  Wald test                                                    default

ODS Tables Created by the OUTPUT Statement

Forecasts             Forecasts table                                              without NOPRINT

Note that the ODS table names suffixed by "byVar" can be obtained with the PRINTFORM=UNIVARIATE option.
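For example, the following statements use the standard ODS OUTPUT statement to save the ParameterEstimates table listed in Table 30.8 to a data set; the data set name pest is arbitrary:

ods output ParameterEstimates=pest;

proc varmax data=simul2;
   model y1 y2 / p=2;
run;

proc print data=pest;
run;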


ODS Graphics

This section describes the use of ODS for creating statistical graphs with the VARMAX procedure. To request these graphs, you must specify the ODS GRAPHICS ON statement. When ODS Graphics is in effect, the VARMAX procedure produces a variety of plots for each dependent variable. The plots available are as follows:

• With the PLOT= option in the VARMAX statement, the procedure displays the following plots for each dependent variable in the MODEL statement:
  - impulse response function
  - impulse response of the transfer function
  - time series and predicted series
  - prediction errors
  - distribution of the prediction errors
  - normal quantile plot of the prediction errors
  - ACF of the prediction errors
  - PACF of the prediction errors
  - IACF of the prediction errors
  - log-scaled white noise test of the prediction errors
• With the PLOT= option in the VARMAX statement, the procedure displays forecast plots for each dependent variable in the OUTPUT statement.
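As a minimal sketch of the required setup, the following statements enable ODS Graphics around a PROC VARMAX run so that the plots described above can be produced:

ods graphics on;

proc varmax data=simul2;
   model y1 y2 / p=2;
   output out=fcst lead=5;
run;

ods graphics off;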

ODS Graph Names

The VARMAX procedure assigns a name to each graph it creates by using ODS. You can use these names to reference the graphs when using ODS. The names are listed in Table 30.9.

Table 30.9 ODS Graphics Produced in the VARMAX Procedure

ODS Graph Name          Plot Description                                          Statement
ErrorACFPlot            Autocorrelation function of prediction errors             MODEL
ErrorIACFPlot           Inverse autocorrelation function of prediction errors     MODEL
ErrorPACFPlot           Partial autocorrelation function of prediction errors     MODEL
ErrorDiagnosticsPanel   Diagnostics of prediction errors                          MODEL
ErrorNormalityPanel     Histogram and Q-Q plot of prediction errors               MODEL
ErrorDistribution       Distribution of prediction errors                         MODEL
ErrorQQPlot             Q-Q plot of prediction errors                             MODEL
ErrorWhiteNoisePlot     White noise test of prediction errors                     MODEL
ErrorPlot               Prediction errors                                         MODEL
ModelPlot               Time series and predicted series                          MODEL
AccumulatedIRFPanel     Accumulated impulse response function                     MODEL
AccumulatedIRFXPanel    Accumulated impulse response of transfer function         MODEL
OrthogonalIRFPanel      Orthogonalized impulse response function                  MODEL
SimpleIRFPanel          Simple impulse response function                          MODEL
SimpleIRFXPanel         Simple impulse response of transfer function              MODEL
ModelForecastsPlot      Time series and forecasts                                 OUTPUT
ForecastsOnlyPlot       Forecasts                                                 OUTPUT

Computational Issues

Computational Method

The VARMAX procedure uses numerous linear algebra routines and frequently uses the sweep operator (Goodnight 1979) and the Cholesky root (Golub and Van Loan 1983). In addition, the VARMAX procedure uses the nonlinear optimization (NLO) subsystem to perform nonlinear optimization tasks for the maximum likelihood estimation. The optimization requires intensive computation.

Convergence Problems

For some data sets, the computation algorithm can fail to converge. Nonconvergence can result from a number of causes, including flat or ridged likelihood surfaces and ill-conditioned data. If you experience convergence problems, the following points might be helpful:

• Data that contain extreme values can affect results in PROC VARMAX. Rescaling the data can improve stability.
• Changing the TECH=, MAXITER=, and MAXFUNC= options in the NLOPTIONS statement can improve the stability of the optimization process, as shown in the sketch after this list.
• Specifying a different model that fits the data more closely can improve convergence.
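For instance, the following sketch changes the optimization technique and raises the iteration and function-call limits through the NLOPTIONS statement; the data set name and the limit values are illustrative only:

proc varmax data=mydata;   /* mydata is a hypothetical data set */
   model y1 y2 / p=1;
   garch q=1;
   nloptions tech=qn maxiter=1000 maxfunc=2000;
run;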


Memory

Let $T$ be the length of each series, let $k$ be the number of dependent variables, let $p$ be the order of the autoregressive terms, and let $q$ be the order of the moving-average terms. The number of parameters to estimate for a VARMA($p,q$) model is

$$ k + (p+q)k^2 + k(k+1)/2 $$

As $k$ increases, the number of parameters to estimate increases very quickly. Furthermore, the memory requirement for VARMA($p,q$) increases quadratically as $k$ and $T$ increase.

For a VARMAX($p,q,s$) model and GARCH-type multivariate conditional heteroscedasticity models, the number of parameters to estimate and the memory requirements are considerable.
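As a worked instance of this count, a four-variable VARMA(2,1) model ($k=4$, $p=2$, $q=1$) has

$$ 4 + (2+1)\cdot 4^2 + \frac{4(4+1)}{2} = 4 + 48 + 10 = 62 $$

parameters to estimate, and doubling the number of variables to $k=8$ raises the count to $8 + 3\cdot 64 + 36 = 236$.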

Computing Time

PROC VARMAX is computationally intensive, and execution times can be long. Extensive CPU time is often required to compute the maximum likelihood estimates.

Examples: VARMAX Procedure

Example 30.1: Analysis of U.S. Economic Variables

Consider the following four-dimensional system of U.S. economic variables. Quarterly data for the years 1954 to 1987 are used (Lütkepohl 1993, Table E.3).

   title 'Analysis of U.S. Economic Variables';
   data us_money;
      date=intnx( 'qtr', '01jan54'd, _n_-1 );
      format date yyq. ;
      input y1 y2 y3 y4 @@;
      y1=log(y1);
      y2=log(y2);
      label y1='log(real money stock M1)'
            y2='log(GNP in bil. of 1982 dollars)'
            y3='Discount rate on 91-day T-bills'
            y4='Yield on 20-year Treasury bonds';
   datalines;
   450.9 1406.8 0.010800000 0.026133333
   453.0 1401.2 0.0081333333 0.025233333

   ... more lines ...

The following statements plot the series and proceed with the VARMAX procedure.


   proc timeseries data=us_money vectorplot=series;
      id date interval=qtr;
      var y1 y2;
   run;

Output 30.1.1 shows the plot of the variables y1 and y2.

Output 30.1.1  Plot of Data

The following statements plot the variables y3 and y4.

   proc timeseries data=us_money vectorplot=series;
      id date interval=qtr;
      var y3 y4;
   run;

Output 30.1.2 shows the plot of the variables y3 and y4.


Output 30.1.2 Plot of Data

   proc varmax data=us_money;
      id date interval=qtr;
      model y1-y4 / p=2 lagmax=6 dftest
                    print=(iarr(3) estimates diagnose)
                    cointtest=(johansen=(iorder=2))
                    ecm=(rank=1 normalize=y1);
      cointeg rank=1 normalize=y1 exogeneity;
   run;

This example performs the Dickey-Fuller test for stationarity, the Johansen cointegration rank test for integrated order 2, and the exogeneity test. The VECM(2) is fit to the data. From the output shown in Output 30.1.5, you can see that the series has unit roots and is cointegrated with rank 1 and integrated order 1. The fitted VECM(2) is given as

$$
\Delta \mathbf{y}_t =
\begin{pmatrix} 0.0408 \\ 0.0860 \\ 0.0052 \\ -0.0144 \end{pmatrix}
+
\begin{pmatrix}
-0.0140 &  0.0065 & -0.2026 &  0.1306 \\
-0.0281 &  0.0131 & -0.4080 &  0.2630 \\
-0.0022 &  0.0010 & -0.0312 &  0.0201 \\
 0.0051 & -0.0024 &  0.0741 & -0.0477
\end{pmatrix}
\mathbf{y}_{t-1}
+
\begin{pmatrix}
 0.3460 &  0.0913 & -0.3535 & -0.9690 \\
 0.0994 &  0.0379 &  0.2390 &  0.2866 \\
 0.1812 &  0.0786 &  0.0223 &  0.4051 \\
 0.0322 &  0.0496 & -0.0329 &  0.1857
\end{pmatrix}
\Delta \mathbf{y}_{t-1}
+ \boldsymbol{\epsilon}_t
$$

The Δ prefixed to a variable name implies differencing. Output 30.1.3 through Output 30.1.14 show the details. Output 30.1.3 shows the descriptive statistics.

Output 30.1.3  Descriptive Statistics

                 Analysis of U.S. Economic Variables
                        The VARMAX Procedure

        Number of Observations            136
        Number of Pairwise Missing          0

                       Simple Summary Statistics
                                    Standard
 Variable  Type        N      Mean  Deviation       Min       Max
 y1        Dependent  136  6.21295    0.07924   6.10278   6.45331
 y2        Dependent  136  7.77890    0.30110   7.24508   8.27461
 y3        Dependent  136  0.05608    0.03109   0.00813   0.15087
 y4        Dependent  136  0.06458    0.02927   0.02490   0.13600

                       Simple Summary Statistics
 Variable  Label
 y1        log(real money stock M1)
 y2        log(GNP in bil. of 1982 dollars)
 y3        Discount rate on 91-day T-bills
 y4        Yield on 20-year Treasury bonds

Output 30.1.4 shows the output for the Dickey-Fuller tests for the nonstationarity of each series. The null hypothesis is that the series has a unit root. All series have a unit root.

Output 30.1.4  Unit Root Tests

                           Unit Root Test
 Variable   Type             Rho   Pr < Rho     Tau   Pr < Tau
 y1         Zero Mean       0.05     0.6934    1.14     0.9343
            Single Mean    -2.97     0.6572   -0.76     0.8260
            Trend          -5.91     0.7454   -1.34     0.8725
 y2         Zero Mean       0.13     0.7124    5.14     0.9999
            Single Mean    -0.43     0.9309   -0.79     0.8176
            Trend          -9.21     0.4787   -2.16     0.5063
 y3         Zero Mean      -1.28     0.4255   -0.69     0.4182
            Single Mean    -8.86     0.1700   -2.27     0.1842
            Trend         -18.97     0.0742   -2.86     0.1803
 y4         Zero Mean       0.40     0.7803    0.45     0.8100
            Single Mean    -2.79     0.6790   -1.29     0.6328
            Trend         -12.12     0.2923   -2.33     0.4170


The Johansen cointegration rank test shows whether the series is integrated of order 1 or 2, as shown in Output 30.1.5. The last two columns in Output 30.1.5 report the cointegration rank test for integrated order 1. The results indicate a cointegrating relationship with cointegration rank 1 at the 0.05 significance level, because the test statistic of 20.6542 is smaller than the critical value of 29.38. Now look at the row associated with r = 1. Comparing the test statistic and critical value pairs (219.62395, 29.38), (89.21508, 15.34), and (27.32609, 3.84) shows no evidence that the series are integrated of order 2 at the 0.05 significance level.

Output 30.1.5  Cointegration Rank Test

                       Cointegration Rank Test for I(2)

 r\k-r-s              4          3          2          1   Trace of I(1)   5% CV of I(1)
 0            384.60903  214.37904  107.93782   37.02523         55.9633           47.21
 1                       219.62395   89.21508   27.32609         20.6542           29.38
 2                                   73.61779   22.13279          2.6477           15.34
 3                                              38.29435          0.0149            3.84
 5% CV I(2)    47.21000   29.38000   15.34000    3.84000

Output 30.1.6 shows the estimates of the long-run parameter, β, and the adjustment coefficient, α.

Output 30.1.6  Cointegration Rank Test Continued

                             Beta
 Variable          1          2          3          4
 y1           1.00000    1.00000    1.00000    1.00000
 y2          -0.46458   -0.63174   -0.69996   -0.16140
 y3          14.51619   -1.29864    1.37007   -0.61806
 y4          -9.35520    7.53672    2.47901    1.43731

                             Alpha
 Variable          1          2          3          4
 y1          -0.01396    0.01396   -0.01119    0.00008
 y2          -0.02811   -0.02739   -0.00032    0.00076
 y3          -0.00215   -0.04967   -0.00183   -0.00072
 y4           0.00510   -0.02514   -0.00220    0.00016

Output 30.1.7 shows the estimates η and ξ.

Output 30.1.7  Cointegration Rank Test Continued

                              Eta
 Variable           1            2            3            4
 y1          52.74907     41.74502    -20.80403     55.77415
 y2         -49.10609     -9.40081     98.87199     22.56416
 y3          68.29674   -144.83173    -27.35953     15.51142
 y4         121.25932    271.80496     85.85156   -130.11599

                              Xi
 Variable          1          2          3          4
 y1          -0.00842   -0.00052   -0.00208   -0.00250
 y2           0.00141    0.00213   -0.00736   -0.00058
 y3          -0.00445    0.00541   -0.00150    0.00310
 y4          -0.00211   -0.00064   -0.00130    0.00197

Output 30.1.8 shows that the VECM(2) is fit to the data. The ECM=(RANK=1) option produces the estimates of the long-run parameter, β, and the adjustment coefficient, α.

Output 30.1.8  Parameter Estimates

                 Analysis of U.S. Economic Variables
                        The VARMAX Procedure

 Type of Model           VECM(2)
 Estimation Method       Maximum Likelihood Estimation
 Cointegrated Rank       1

            Beta
 Variable          1
 y1           1.00000
 y2          -0.46458
 y3          14.51619
 y4          -9.35520

            Alpha
 Variable          1
 y1          -0.01396
 y2          -0.02811
 y3          -0.00215
 y4           0.00510

Output 30.1.9 shows the parameter estimates in terms of the constant, the lag one coefficients (y_{t-1}) contained in the αβ′ estimates, and the coefficients associated with the lag one first differences (Δy_{t-1}).


Output 30.1.9  Parameter Estimates Continued

         Constant
 Variable      Constant
 y1             0.04076
 y2             0.08595
 y3             0.00518
 y4            -0.01438

          Parameter Alpha * Beta' Estimates
 Variable        y1         y2         y3         y4
 y1         -0.01396    0.00648   -0.20263    0.13059
 y2         -0.02811    0.01306   -0.40799    0.26294
 y3         -0.00215    0.00100   -0.03121    0.02011
 y4          0.00510   -0.00237    0.07407   -0.04774

            AR Coefficients of Differenced Lag
 DIF Lag   Variable        y1         y2         y3         y4
 1         y1         0.34603    0.09131   -0.35351   -0.96895
           y2         0.09936    0.03791    0.23900    0.28661
           y3         0.18118    0.07859    0.02234    0.40508
           y4         0.03222    0.04961   -0.03292    0.18568

Output 30.1.10 shows the parameter estimates and their significance.


Output 30.1.10  Parameter Estimates Continued

                          Model Parameter Estimates

                                   Standard
 Equation  Parameter   Estimate       Error   t Value  Pr > |t|  Variable
 D_y1      CONST1       0.04076     0.01418      2.87    0.0048  1
           AR1_1_1     -0.01396     0.00495                      y1(t-1)
           AR1_1_2      0.00648     0.00230                      y2(t-1)
           AR1_1_3     -0.20263     0.07191                      y3(t-1)
           AR1_1_4      0.13059     0.04634                      y4(t-1)
           AR2_1_1      0.34603     0.06414      5.39    0.0001  D_y1(t-1)
           AR2_1_2      0.09131     0.07334      1.25    0.2154  D_y2(t-1)
           AR2_1_3     -0.35351     0.11024     -3.21    0.0017  D_y3(t-1)
           AR2_1_4     -0.96895     0.20737     -4.67    0.0001  D_y4(t-1)
 D_y2      CONST2       0.08595     0.01679      5.12    0.0001  1
           AR1_2_1     -0.02811     0.00586                      y1(t-1)
           AR1_2_2      0.01306     0.00272                      y2(t-1)
           AR1_2_3     -0.40799     0.08514                      y3(t-1)
           AR1_2_4      0.26294     0.05487                      y4(t-1)
           AR2_2_1      0.09936     0.07594      1.31    0.1932  D_y1(t-1)
           AR2_2_2      0.03791     0.08683      0.44    0.6632  D_y2(t-1)
           AR2_2_3      0.23900     0.13052      1.83    0.0695  D_y3(t-1)
           AR2_2_4      0.28661     0.24552      1.17    0.2453  D_y4(t-1)
 D_y3      CONST3       0.00518     0.01608      0.32    0.7476  1
           AR1_3_1     -0.00215     0.00562                      y1(t-1)
           AR1_3_2      0.00100     0.00261                      y2(t-1)
           AR1_3_3     -0.03121     0.08151                      y3(t-1)
           AR1_3_4      0.02011     0.05253                      y4(t-1)
           AR2_3_1      0.18118     0.07271      2.49    0.0140  D_y1(t-1)
           AR2_3_2      0.07859     0.08313      0.95    0.3463  D_y2(t-1)
           AR2_3_3      0.02234     0.12496      0.18    0.8584  D_y3(t-1)
           AR2_3_4      0.40508     0.23506      1.72    0.0873  D_y4(t-1)
 D_y4      CONST4      -0.01438     0.00803     -1.79    0.0758  1
           AR1_4_1      0.00510     0.00281                      y1(t-1)
           AR1_4_2     -0.00237     0.00130                      y2(t-1)
           AR1_4_3      0.07407     0.04072                      y3(t-1)
           AR1_4_4     -0.04774     0.02624                      y4(t-1)
           AR2_4_1      0.03222     0.03632      0.89    0.3768  D_y1(t-1)
           AR2_4_2      0.04961     0.04153      1.19    0.2345  D_y2(t-1)
           AR2_4_3     -0.03292     0.06243     -0.53    0.5990  D_y3(t-1)
           AR2_4_4      0.18568     0.11744      1.58    0.1164  D_y4(t-1)

Output 30.1.11 shows the innovation covariance matrix estimates, the various information criteria results, and the tests for white noise residuals. The residuals have significant correlations at lags 2 and 3. The Portmanteau test results are significant. These results suggest that a VECM(3) model might fit the data better than the VECM(2) model does.


Output 30.1.11  Diagnostic Checks

            Covariances of Innovations
 Variable        y1        y2        y3        y4
 y1         0.00005   0.00001  -0.00001  -0.00000
 y2         0.00001   0.00007   0.00002   0.00001
 y3        -0.00001   0.00002   0.00007   0.00002
 y4        -0.00000   0.00001   0.00002   0.00002

     Information Criteria
 AICC         -40.6284
 HQC          -40.4343
 AIC          -40.6452
 SBC          -40.1262
 FPEC         2.23E-18

   Schematic Representation of Cross Correlations of Residuals
 Variable/Lag     0      1      2      3      4      5      6
 y1             ++..   ....   ++..   ....   +...   ..-.   ....
 y2             ++++   ....   ....   ....   ....   ....   ....
 y3             .+++   ....   +.-.   ..++   -...   ....   ....
 y4             .+++   ....   ....   ..+.   ....   ....   ....

 + is > 2*std error, - is < -2*std error, . is between

      Portmanteau Test for Cross Correlations of Residuals
 Up To Lag     DF    Chi-Square    Pr > ChiSq
     3         16         53.90        <.0001
     4         32         74.03        <.0001
     5         48        103.08        <.0001
     6         64        116.94        <.0001

                          Model Parameter Estimates

                                   Standard
 Equation  Parameter   Estimate       Error   t Value  Pr > |t|  Variable
 y1        CONST1      -0.01672     0.01723     -0.97    0.3352  1
           AR1_1_1     -0.31963     0.12546     -2.55    0.0132  y1(t-1)
           AR1_1_2      0.14599     0.54567      0.27    0.7899  y2(t-1)
           AR1_1_3      0.96122     0.66431      1.45    0.1526  y3(t-1)
           AR2_1_1     -0.16055     0.12491     -1.29    0.2032  y1(t-2)
           AR2_1_2      0.11460     0.53457      0.21    0.8309  y2(t-2)
           AR2_1_3      0.93439     0.66510      1.40    0.1647  y3(t-2)
 y2        CONST2       0.01577     0.00437      3.60    0.0006  1
           AR1_2_1      0.04393     0.03186      1.38    0.1726  y1(t-1)
           AR1_2_2     -0.15273     0.13857     -1.10    0.2744  y2(t-1)
           AR1_2_3      0.28850     0.16870      1.71    0.0919  y3(t-1)
           AR2_2_1      0.05003     0.03172      1.58    0.1195  y1(t-2)
           AR2_2_2      0.01917     0.13575      0.14    0.8882  y2(t-2)
           AR2_2_3     -0.01020     0.16890     -0.06    0.9520  y3(t-2)
 y3        CONST3       0.01293     0.00353      3.67    0.0005  1
           AR1_3_1     -0.00242     0.02568     -0.09    0.9251  y1(t-1)
           AR1_3_2      0.22481     0.11168      2.01    0.0482  y2(t-1)
           AR1_3_3     -0.26397     0.13596     -1.94    0.0565  y3(t-1)
           AR2_3_1      0.03388     0.02556      1.33    0.1896  y1(t-2)
           AR2_3_2      0.35491     0.10941      3.24    0.0019  y2(t-2)
           AR2_3_3     -0.02223     0.13612     -0.16    0.8708  y3(t-2)

Output 30.2.4 shows the innovation covariance matrix estimates, the various information criteria results, and the tests for white noise residuals. The residuals are uncorrelated except at lag 3 for the y2 variable.


Output 30.2.4  Diagnostic Checks

     Covariances of Innovations
 Variable        y1        y2        y3
 y1         0.00213   0.00007   0.00012
 y2         0.00007   0.00014   0.00006
 y3         0.00012   0.00006   0.00009

     Information Criteria
 AICC         -24.4884
 HQC          -24.2869
 AIC          -24.5494
 SBC          -23.8905
 FPEC         2.18E-11

          Cross Correlations of Residuals
 Lag   Variable        y1         y2         y3
  0    y1         1.00000    0.13242    0.28275
       y2         0.13242    1.00000    0.55526
       y3         0.28275    0.55526    1.00000
  1    y1         0.01461   -0.00666   -0.02394
       y2        -0.01125   -0.00167   -0.04515
       y3        -0.00993   -0.06780   -0.09593
  2    y1         0.07253   -0.00226   -0.01621
       y2        -0.08096   -0.01066   -0.02047
       y3        -0.02660   -0.01392   -0.02263
  3    y1         0.09915    0.04484    0.05243
       y2        -0.00289    0.14059    0.25984
       y3        -0.03364    0.05374    0.05644

 Schematic Representation of Cross Correlations of Residuals
 Variable/Lag     0      1      2      3
 y1             +.+    ...    ...    ...
 y2             .++    ...    ...    ..+
 y3             +++    ...    ...    ...

 + is > 2*std error, - is < -2*std error, . is between

   Portmanteau Test for Cross Correlations of Residuals
 Up To Lag     DF    Chi-Square    Pr > ChiSq
     3          9          9.69        0.3766


Output 30.2.5 describes how well each univariate equation fits the data. The residuals deviate from normality but show no AR effects. The residuals for the y1 variable show an ARCH effect.

Output 30.2.5  Diagnostic Checks Continued

            Univariate Model ANOVA Diagnostics
                          Standard
 Variable   R-Square     Deviation    F Value    Pr > F
 y1           0.1286       0.04615       1.62    0.1547
 y2           0.1142       0.01172       1.42    0.2210
 y3           0.2513       0.00944       3.69    0.0032

          Univariate Model White Noise Diagnostics
                Durbin        Normality                ARCH
 Variable       Watson    Chi-Square  Pr > ChiSq   F Value   Pr > F
 y1            1.96269         10.22      0.0060
 y2            1.98145         11.98      0.0025
 y3            2.14583         34.25      <.0001

                  Univariate Model AR Diagnostics
               AR1              AR2              AR3              AR4
 Variable  F Value Pr > F  F Value Pr > F  F Value Pr > F  F Value Pr > F
 y1           0.01 0.9029     0.19 0.8291     0.39 0.7624     1.39 0.2481
 y2           0.00 0.9883     0.00 0.9961     0.46 0.7097     0.34 0.8486
 y3           0.68 0.4129     0.38 0.6861     0.30 0.8245     0.21 0.9320

Output 30.2.6 is the output in a matrix format associated with the PRINT=(IMPULSE=) option for the impulse response function and standard errors. The y3 variable in the first row is an impulse variable, and the y1 variable in the first column is a response variable. The numbers 0.96122, 0.41555, and -0.40789 at lags 1 to 3 are decreasing.


Output 30.2.6  Impulse Response Function

           Simple Impulse Response by Variable
 Variable
 Response\Impulse    Lag         y1         y2         y3
 y1                    1   -0.31963    0.14599    0.96122
                     STD    0.12546    0.54567    0.66431
                       2   -0.05430    0.26174    0.41555
                     STD    0.12919    0.54728    0.66311
                       3    0.11904    0.35283   -0.40789
                     STD    0.08362    0.38489    0.47867
 y2                    1    0.04393   -0.15273    0.28850
                     STD    0.03186    0.13857    0.16870
                       2    0.02858    0.11377   -0.08820
                     STD    0.03184    0.13425    0.16250
                       3   -0.00884    0.07147    0.11977
                     STD    0.01583    0.07914    0.09462
 y3                    1   -0.00242    0.22481   -0.26397
                     STD    0.02568    0.11168    0.13596
                       2    0.04517    0.26088    0.10998
                     STD    0.02563    0.10820    0.13101
                       3   -0.00055   -0.09818    0.09096
                     STD    0.01646    0.07823    0.10280

The proportions of the prediction error covariance decomposition for the three variables are given in Output 30.2.7. For the variable y3 (the rows labeled y3), the output shows that about 64.713% of the one-step-ahead prediction error covariance of y3 is accounted for by its own innovations, about 7.995% by y1 innovations, and about 27.292% by y2 innovations.


Output 30.2.7  Proportions of Prediction Error Covariance Decomposition

   Proportions of Prediction Error Covariances by Variable
 Variable   Lead         y1         y2         y3
 y1            1    1.00000    0.00000    0.00000
               2    0.95996    0.01751    0.02253
               3    0.94565    0.02802    0.02633
               4    0.94079    0.02936    0.02985
               5    0.93846    0.03018    0.03136
               6    0.93831    0.03025    0.03145
 y2            1    0.01754    0.98246    0.00000
               2    0.06025    0.90747    0.03228
               3    0.06959    0.89576    0.03465
               4    0.06831    0.89232    0.03937
               5    0.06850    0.89212    0.03938
               6    0.06924    0.89141    0.03935
 y3            1    0.07995    0.27292    0.64713
               2    0.07725    0.27385    0.64890
               3    0.12973    0.33364    0.53663
               4    0.12870    0.33499    0.53631
               5    0.12859    0.33924    0.53217
               6    0.12852    0.33963    0.53185

The table in Output 30.2.8 gives forecasts and their prediction error covariances.

Output 30.2.8  Forecasts

                                Forecasts
                                     Standard         95% Confidence
 Variable   Obs    Time     Forecast    Error             Limits
 y1          77   1979:1     6.54027   0.04615    6.44982    6.63072
             78   1979:2     6.55105   0.05825    6.43688    6.66522
             79   1979:3     6.57217   0.06883    6.43725    6.70708
             80   1979:4     6.58452   0.08021    6.42732    6.74173
             81   1980:1     6.60193   0.09117    6.42324    6.78063
 y2          77   1979:1     7.68473   0.01172    7.66176    7.70770
             78   1979:2     7.70508   0.01691    7.67193    7.73822
             79   1979:3     7.72206   0.02156    7.67980    7.76431
             80   1979:4     7.74266   0.02615    7.69140    7.79392
             81   1980:1     7.76240   0.03005    7.70350    7.82130
 y3          77   1979:1     7.54024   0.00944    7.52172    7.55875
             78   1979:2     7.55489   0.01282    7.52977    7.58001
             79   1979:3     7.57472   0.01808    7.53928    7.61015
             80   1979:4     7.59344   0.02205    7.55022    7.63666
             81   1980:1     7.61232   0.02578    7.56179    7.66286

Output 30.2.9 shows that you cannot reject Granger noncausality from (y2, y3) to y1 using the 0.05 significance level.


Output 30.2.9  Granger Causality Tests

        Granger-Causality Wald Test
 Test     DF    Chi-Square    Pr > ChiSq
    1      4          6.37        0.1734

 Test 1:  Group 1 Variables:  y1
          Group 2 Variables:  y2 y3
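The test in Output 30.2.9 is the kind of test requested with the CAUSAL statement. The following is a minimal sketch of such a request; the MODEL options are assumed to match the VAR(2) fit used earlier in this example:

   proc varmax data=use;
      id date interval=qtr;
      model y1 y2 y3 / p=2 dify=(1);
      /* test Granger noncausality from (y2, y3) to y1 */
      causal group1=(y1) group2=(y2 y3);
   run;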

The following SAS statements treat the variable y1 as an exogenous variable and fit the VARX(2,1) model to the data:

   proc varmax data=use;
      id date interval=qtr;
      model y2 y3 = y1 / p=2 dify=(1) difx=(1) xlag=1 lagmax=3
                         print=(estimates diagnose);
   run;

The fitted VARX(2,1) model is written as

$$
\begin{pmatrix} \Delta y_{2t} \\ \Delta y_{3t} \end{pmatrix}
=
\begin{pmatrix} 0.01542 \\ 0.01319 \end{pmatrix}
+
\begin{pmatrix} 0.02520 \\ 0.05130 \end{pmatrix} \Delta y_{1t}
+
\begin{pmatrix} 0.03870 \\ 0.00363 \end{pmatrix} \Delta y_{1,t-1}
+
\begin{pmatrix} -0.12258 & 0.25811 \\ 0.24367 & -0.31809 \end{pmatrix}
\begin{pmatrix} \Delta y_{2,t-1} \\ \Delta y_{3,t-1} \end{pmatrix}
+
\begin{pmatrix} 0.01651 & 0.03498 \\ 0.34921 & -0.01664 \end{pmatrix}
\begin{pmatrix} \Delta y_{2,t-2} \\ \Delta y_{3,t-2} \end{pmatrix}
+
\begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \end{pmatrix}
$$

The detailed output is shown in Output 30.2.10 through Output 30.2.13. Output 30.2.10 shows the parameter estimates in terms of the constant, the current and lag one coefficients of the exogenous variable, and the lag one and lag two coefficients of the dependent variables.


Output 30.2.10  Parameter Estimates

              Analysis of German Economic Variables
                      The VARMAX Procedure

 Type of Model           VARX(2,1)
 Estimation Method       Least Squares Estimation

         Constant
 Variable      Constant
 y2             0.01542
 y3             0.01319

              XLag
 Lag   Variable        y1
 0     y2         0.02520
       y3         0.05130
 1     y2         0.03870
       y3         0.00363

                AR
 Lag   Variable        y2         y3
 1     y2        -0.12258    0.25811
       y3         0.24367   -0.31809
 2     y2         0.01651    0.03498
       y3         0.34921   -0.01664

Output 30.2.11 shows the parameter estimates and their significance.


Output 30.2.11  Parameter Estimates Continued

                          Model Parameter Estimates

                                   Standard
 Equation  Parameter   Estimate       Error   t Value  Pr > |t|  Variable
 y2        CONST1       0.01542     0.00443      3.48    0.0009  1
           XL0_1_1      0.02520     0.03130      0.81    0.4237  y1(t)
           XL1_1_1      0.03870     0.03252      1.19    0.2383  y1(t-1)
           AR1_1_1     -0.12258     0.13903     -0.88    0.3811  y2(t-1)
           AR1_1_2      0.25811     0.17370      1.49    0.1421  y3(t-1)
           AR2_1_1      0.01651     0.13766      0.12    0.9049  y2(t-2)
           AR2_1_2      0.03498     0.16783      0.21    0.8356  y3(t-2)
 y3        CONST2       0.01319     0.00346      3.81    0.0003  1
           XL0_2_1      0.05130     0.02441      2.10    0.0394  y1(t)
           XL1_2_1      0.00363     0.02536      0.14    0.8868  y1(t-1)
           AR1_2_1      0.24367     0.10842      2.25    0.0280  y2(t-1)
           AR1_2_2     -0.31809     0.13546     -2.35    0.0219  y3(t-1)
           AR2_2_1      0.34921     0.10736      3.25    0.0018  y2(t-2)
           AR2_2_2     -0.01664     0.13088     -0.13    0.8992  y3(t-2)

Output 30.2.12 shows the innovation covariance matrix estimates, the various information criteria results, and the tests for white noise residuals. The residuals are uncorrelated except at lag 3 for the y2 variable.


Output 30.2.12  Diagnostic Checks

  Covariances of Innovations
 Variable        y2        y3
 y2         0.00014   0.00006
 y3         0.00006   0.00009

     Information Criteria
 AICC         -18.3902
 HQC          -18.2558
 AIC          -18.4309
 SBC          -17.9916
 FPEC          9.91E-9

      Cross Correlations of Residuals
 Lag   Variable        y2         y3
  0    y2         1.00000    0.56462
       y3         0.56462    1.00000
  1    y2        -0.02312   -0.05927
       y3        -0.07056   -0.09145
  2    y2        -0.02849   -0.05262
       y3        -0.05804   -0.08567
  3    y2         0.16071    0.29588
       y3         0.10882    0.13002

 Schematic Representation of Cross Correlations of Residuals
 Variable/Lag     0      1      2      3
 y2              ++     ..     ..     .+
 y3              ++     ..     ..     ..

 + is > 2*std error, - is < -2*std error, . is between

   Portmanteau Test for Cross Correlations of Residuals
 Up To Lag     DF    Chi-Square    Pr > ChiSq
     3          4          8.38        0.0787

Output 30.2.13 describes how well each univariate equation fits the data. The residuals deviate from normality but show no ARCH or AR effects.


Output 30.2.13  Diagnostic Checks Continued

            Univariate Model ANOVA Diagnostics
                          Standard
 Variable   R-Square     Deviation    F Value    Pr > F
 y2           0.0897       0.01188       1.08    0.3809
 y3           0.2796       0.00926       4.27    0.0011

          Univariate Model White Noise Diagnostics
                Durbin        Normality                ARCH
 Variable       Watson    Chi-Square  Pr > ChiSq   F Value   Pr > F
 y2            2.02413         14.54      0.0007      0.49   0.4842
 y3            2.13414         32.27      <.0001      0.08   0.7782

                  Univariate Model AR Diagnostics
               AR1              AR2              AR3              AR4
 Variable  F Value Pr > F  F Value Pr > F  F Value Pr > F  F Value Pr > F
 y2           0.04 0.8448     0.04 0.9570     0.62 0.6029     0.42 0.7914
 y3           0.62 0.4343     0.62 0.5383     0.72 0.5452     0.36 0.8379

Example 30.3: Numerous Examples

The following are examples of syntax for model fitting:

   /* Data 'a' Generated Process */
   proc iml;
      sig = {1.0 0.5, 0.5 1.25};
      phi = {1.2 -0.5, 0.6 0.3};
      call varmasim(y,phi) sigma = sig n = 100 seed = 46859;
      cn = {'y1' 'y2'};
      create a from y[colname=cn];
      append from y;
   quit;

   /* when the series has a linear trend */
   proc varmax data=a;
      model y1 y2 / p=1 trend=linear;
   run;

   /* Fit subset of AR order 1 and 3 */
   proc varmax data=a;
      model y1 y2 / p=(1,3);
   run;


   /* Check if the series is nonstationary */
   proc varmax data=a;
      model y1 y2 / p=1 dftest print=(roots);
   run;

   /* Fit VAR(1) in differencing */
   proc varmax data=a;
      model y1 y2 / p=1 print=(roots) dify=(1);
   run;

   /* Fit VAR(1) in seasonal differencing */
   proc varmax data=a;
      model y1 y2 / p=1 dify=(4) lagmax=5;
   run;

   /* Fit VAR(1) in both regular and seasonal differencing */
   proc varmax data=a;
      model y1 y2 / p=1 dify=(1,4) lagmax=5;
   run;

   /* Fit VAR(1) in different differencing */
   proc varmax data=a;
      model y1 y2 / p=1 dif=(y1(1,4) y2(1)) lagmax=5;
   run;

   /* Options related to prediction */
   proc varmax data=a;
      model y1 y2 / p=1 lagmax=3
                    print=(impulse covpe(5) decompose(5));
   run;

   /* Options related to tentative order selection */
   proc varmax data=a;
      model y1 y2 / p=1 lagmax=5 minic
                    print=(parcoef pcancorr pcorr);
   run;

   /* Automatic selection of the AR order */
   proc varmax data=a;
      model y1 y2 / minic=(type=aic p=5);
   run;

   /* Compare results of LS and Yule-Walker Estimators */
   proc varmax data=a;
      model y1 y2 / p=1 print=(yw);
   run;

   /* BVAR(1) of the nonstationary series y1 and y2 */
   proc varmax data=a;
      model y1 y2 / p=1 prior=(lambda=1 theta=0.2 ivar);
   run;


   /* BVAR(1) of the nonstationary series y1 */
   proc varmax data=a;
      model y1 y2 / p=1 prior=(lambda=0.1 theta=0.15 ivar=(y1));
   run;

   /* Data 'b' Generated Process */
   proc iml;
      sig = { 0.5  0.14 -0.08 -0.03,
              0.14 0.71  0.16  0.1,
             -0.08 0.16  0.65  0.23,
             -0.03 0.1   0.23  0.16};
      sig = sig * 0.0001;
      phi = { 1.2 -0.5  0.   0.1,
              0.6  0.3 -0.2  0.5,
              0.4  0.  -0.2  0.1,
             -1.0  0.2  0.7 -0.2};
      call varmasim(y,phi) sigma = sig n = 100 seed = 32567;
      cn = {'y1' 'y2' 'y3' 'y4'};
      create b from y[colname=cn];
      append from y;
   quit;

   /* Cointegration Rank Test using Trace statistics */
   proc varmax data=b;
      model y1-y4 / p=2 lagmax=4 cointtest;
   run;

   /* Cointegration Rank Test using Max statistics */
   proc varmax data=b;
      model y1-y4 / p=2 lagmax=4 cointtest=(johansen=(type=max));
   run;

   /* Common Trends Test using Filter(Differencing) statistics */
   proc varmax data=b;
      model y1-y4 / p=2 lagmax=4 cointtest=(sw);
   run;

   /* Common Trends Test using Filter(Residual) statistics */
   proc varmax data=b;
      model y1-y4 / p=2 lagmax=4 cointtest=(sw=(type=filtres lag=1));
   run;

   /* Common Trends Test using Kernel statistics */
   proc varmax data=b;
      model y1-y4 / p=2 lagmax=4 cointtest=(sw=(type=kernel lag=1));
   run;

   /* Cointegration Rank Test for I(2) */
   proc varmax data=b;
      model y1-y4 / p=2 lagmax=4 cointtest=(johansen=(iorder=2));
   run;

   /* Fit VECM(2) with rank=3 */
   proc varmax data=b;
      model y1-y4 / p=2 lagmax=4 print=(roots iarr)
                    ecm=(rank=3 normalize=y1);


   run;

   /* Weak Exogenous Testing for each variable */
   proc varmax data=b outstat=bbb;
      model y1-y4 / p=2 lagmax=4 ecm=(rank=3 normalize=y1);
      cointeg rank=3 exogeneity;
   run;

   /* Hypotheses Testing for long-run and adjustment parameter */
   proc varmax data=b outstat=bbb;
      model y1-y4 / p=2 lagmax=4 ecm=(rank=3 normalize=y1);
      cointeg rank=3 normalize=y1
              h=(1 0 0, 0 1 0, -1 0 0, 0 0 1)
              j=(1 0 0, 0 1 0, 0 0 1, 0 0 0);
   run;

   /* Ordinary regression model */
   proc varmax data=grunfeld;
      model y1 y2 = x1-x3;
   run;

   /* Ordinary regression model with subset lagged terms */
   proc varmax data=grunfeld;
      model y1 y2 = x1 / xlag=(1,3);
   run;

   /* VARX(1,1) with no current time Exogenous Variables */
   proc varmax data=grunfeld;
      model y1 y2 = x1 / p=1 xlag=1 nocurrentx;
   run;

   /* VARX(1,1) with different Exogenous Variables */
   proc varmax data=grunfeld;
      model y1 = x3, y2 = x1 x2 / p=1 xlag=1;
   run;

   /* VARX(1,2) in difference with current Exogenous Variables */
   proc varmax data=grunfeld;
      model y1 y2 = x1 / p=1 xlag=2 difx=(1) dify=(1);
   run;

Example 30.4: Illustration of ODS Graphics

This example illustrates the use of ODS Graphics. The graphical displays are requested by specifying the ODS GRAPHICS ON statement. For information about the graphics available in the VARMAX procedure, see the section "ODS Graphics" on page 2000.

The following statements use the SASHELP.WORKERS data set to study the time series of electrical workers and its interaction with the series of masonry workers. The series and predicted series plots, the residual plot, and the forecast plot are created in Output 30.4.1 through Output 30.4.3. These are a selection of the plots created by the VARMAX procedure.

   ods graphics on;

   title "Illustration of ODS Graphics";
   proc varmax data=sashelp.workers
               plot(unpack)=(residual model forecasts);
      id date interval=month;
      model electric masonry / dify=(1,12) noint p=1;
      output lead=12;
   run;

Output 30.4.1 Series and Predicted Series Plots


Output 30.4.2 Residual Plot


Output 30.4.3 Series and Forecast Plots

References

Anderson, T. W. (1951), "Estimating Linear Restrictions on Regression Coefficients for Multivariate Normal Distributions," Annals of Mathematical Statistics, 22, 327–351.

Ansley, C. F. and Newbold, P. (1979), "Multivariate Partial Autocorrelations," ASA Proceedings of the Business and Economic Statistics Section, 349–353.

Bollerslev, T. (1990), "Modeling the Coherence in Short-Run Nominal Exchange Rates: A Multivariate Generalized ARCH Model," Review of Economics and Statistics, 72, 498–505.

Engle, R. F. and Granger, C. W. J. (1987), "Co-integration and Error Correction: Representation, Estimation and Testing," Econometrica, 55, 251–276.

Engle, R. F. and Kroner, K. F. (1995), "Multivariate Simultaneous Generalized ARCH," Econometric Theory, 11, 122–150.

Golub, G. H. and Van Loan, C. F. (1983), Matrix Computations, Baltimore and London: Johns Hopkins University Press.


Goodnight, J. H. (1979), "A Tutorial on the SWEEP Operator," The American Statistician, 33, 149–158.

Hosking, J. R. M. (1980), "The Multivariate Portmanteau Statistic," Journal of the American Statistical Association, 75, 602–608.

Johansen, S. (1988), "Statistical Analysis of Cointegration Vectors," Journal of Economic Dynamics and Control, 12, 231–254.

Johansen, S. (1995a), "A Statistical Analysis of Cointegration for I(2) Variables," Econometric Theory, 11, 25–59.

Johansen, S. (1995b), Likelihood-Based Inference in Cointegrated Vector Autoregressive Models, New York: Oxford University Press.

Johansen, S. and Juselius, K. (1990), "Maximum Likelihood Estimation and Inference on Cointegration: With Applications to the Demand for Money," Oxford Bulletin of Economics and Statistics, 52, 169–210.

Koreisha, S. and Pukkila, T. (1989), "Fast Linear Estimation Methods for Vector Autoregressive Moving Average Models," Journal of Time Series Analysis, 10, 325–339.

Litterman, R. B. (1986), "Forecasting with Bayesian Vector Autoregressions: Five Years of Experience," Journal of Business & Economic Statistics, 4, 25–38.

Lütkepohl, H. (1993), Introduction to Multiple Time Series Analysis, Berlin: Springer-Verlag.

Osterwald-Lenum, M. (1992), "A Note with Quantiles of the Asymptotic Distribution of the Maximum Likelihood Cointegration Rank Test Statistics," Oxford Bulletin of Economics and Statistics, 54, 461–472.

Pringle, R. M. and Rayner, D. L. (1971), Generalized Inverse Matrices with Applications to Statistics, Second Edition, New York: McGraw-Hill Inc.

Quinn, B. G. (1980), "Order Determination for a Multivariate Autoregression," Journal of the Royal Statistical Society, B, 42, 182–185.

Reinsel, G. C. (1997), Elements of Multivariate Time Series Analysis, Second Edition, New York: Springer-Verlag.

Spliid, H. (1983), "A Fast Estimation for the Vector Autoregressive Moving Average Models with Exogenous Variables," Journal of the American Statistical Association, 78, 843–849.

Stock, J. H. and Watson, M. W. (1988), "Testing for Common Trends," Journal of the American Statistical Association, 83, 1097–1107.

Chapter 31

The X11 Procedure

Contents
   Overview: X11 Procedure . . . . . . . . . . . . . . . . . . . . . . . . .  2034
   Getting Started: X11 Procedure . . . . . . . . . . . . . . . . . . . . .  2034
      Basic Seasonal Adjustment . . . . . . . . . . . . . . . . . . . . . .  2035
      X-11-ARIMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  2038
   Syntax: X11 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . .  2040
      Functional Summary . . . . . . . . . . . . . . . . . . . . . . . . .  2041
      PROC X11 Statement . . . . . . . . . . . . . . . . . . . . . . . . .  2042
      ARIMA Statement . . . . . . . . . . . . . . . . . . . . . . . . . . .  2043
      BY Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  2046
      ID Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  2046
      MACURVES Statement . . . . . . . . . . . . . . . . . . . . . . . . .  2046
      MONTHLY Statement . . . . . . . . . . . . . . . . . . . . . . . . . .  2047
      OUTPUT Statement . . . . . . . . . . . . . . . . . . . . . . . . . .  2051
      PDWEIGHTS Statement . . . . . . . . . . . . . . . . . . . . . . . . .  2052
      QUARTERLY Statement . . . . . . . . . . . . . . . . . . . . . . . . .  2052
      SSPAN Statement . . . . . . . . . . . . . . . . . . . . . . . . . . .  2055
      TABLES Statement . . . . . . . . . . . . . . . . . . . . . . . . . .  2056
      VAR Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . .  2056
   Details: X11 Procedure . . . . . . . . . . . . . . . . . . . . . . . . .  2056
      Historical Development of X-11 . . . . . . . . . . . . . . . . . . . .  2056
      Implementation of the X-11 Seasonal Adjustment Method . . . . . . . .  2058
      Computational Details for Sliding Spans Analysis . . . . . . . . . . .  2062
      Data Requirements . . . . . . . . . . . . . . . . . . . . . . . . . .  2064
      Missing Values . . . . . . . . . . . . . . . . . . . . . . . . . . . .  2065
      Prior Daily Weights and Trading-Day Regression . . . . . . . . . . . .  2065
      Adjustment for Prior Factors . . . . . . . . . . . . . . . . . . . . .  2066
      The YRAHEADOUT Option . . . . . . . . . . . . . . . . . . . . . . . .  2067
      Effect of Backcast and Forecast Length . . . . . . . . . . . . . . . .  2067
      Details of Model Selection . . . . . . . . . . . . . . . . . . . . . .  2068
      OUT= Data Set . . . . . . . . . . . . . . . . . . . . . . . . . . . .  2071
      The OUTSPAN= Data Set . . . . . . . . . . . . . . . . . . . . . . . .  2071
      OUTSTB= Data Set . . . . . . . . . . . . . . . . . . . . . . . . . .  2071
      OUTTDR= Data Set . . . . . . . . . . . . . . . . . . . . . . . . . .  2072
      Printed Output . . . . . . . . . . . . . . . . . . . . . . . . . . . .  2074
      ODS Table Names . . . . . . . . . . . . . . . . . . . . . . . . . . .  2085
   Examples: X11 Procedure . . . . . . . . . . . . . . . . . . . . . . . . .  2089
      Example 31.1: Component Estimation—Monthly Data . . . . . . . . . . .  2089
      Example 31.2: Components Estimation—Quarterly Data . . . . . . . . . .  2093
      Example 31.3: Outlier Detection and Removal . . . . . . . . . . . . .  2095
   References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  2097

Overview: X11 Procedure

The X11 procedure, an adaptation of the U.S. Bureau of the Census X-11 Seasonal Adjustment program, seasonally adjusts monthly or quarterly time series. The procedure makes additive or multiplicative adjustments and creates an output data set containing the adjusted time series and intermediate calculations.

The X11 procedure also provides the X-11-ARIMA method developed by Statistics Canada. This method fits an ARIMA model to the original series, then uses the model forecast to extend the original series. This extended series is then seasonally adjusted by the standard X-11 seasonal adjustment method. The extension of the series improves the estimation of the seasonal factors and reduces revisions to the seasonally adjusted series as new data become available.

The X11 procedure incorporates sliding spans analysis. This type of analysis provides a diagnostic for determining the suitability of seasonal adjustment for an economic series.

Seasonal adjustment of a series is based on the assumption that seasonal fluctuations can be measured in the original series, O_t, t = 1, ..., n, and separated from trend cycle, trading-day, and irregular fluctuations. The seasonal component of this time series, S_t, is defined as the intrayear variation that is repeated constantly or in an evolving fashion from year to year. The trend cycle component, C_t, includes variation due to the long-term trend, the business cycle, and other long-term cyclical factors. The trading-day component, D_t, is the variation that can be attributed to the composition of the calendar. The irregular component, I_t, is the residual variation. Many economic time series are related in a multiplicative fashion (O_t = S_t C_t D_t I_t). A seasonally adjusted time series, C_t I_t, consists of only the trend cycle and irregular components.

Getting Started: X11 Procedure

The most common use of the X11 procedure is to produce a seasonally adjusted series. Eliminating the seasonal component from an economic series facilitates comparison among consecutive months or quarters. A plot of the seasonally adjusted series is often more informative about trends or location in a business cycle than a plot of the unadjusted series.

The following example shows how to use PROC X11 to produce a seasonally adjusted series, C_t I_t, from an original series O_t = S_t C_t D_t I_t.


In the multiplicative model, the trend cycle component C_t keeps the same scale as the original series O_t, while S_t, D_t, and I_t vary around 1.0. In all printed tables and in the output data set, these latter components are expressed as percentages, and thus will vary around 100.0 (in the additive case, they vary around 0.0).

The naming convention used in PROC X11 for the tables follows the original U.S. Bureau of the Census X-11 Seasonal Adjustment program specification (Shiskin, Young, and Musgrave 1967). Also, see the section "Printed Output" on page 2074. This convention is outlined in Figure 31.1.

The tables corresponding to parts A–C are intermediate calculations. The final estimates of the individual components are found in the D tables: table D10 contains the final seasonal factors, table D12 contains the final trend cycle, and table D13 contains the final irregular series. If you are primarily interested in seasonally adjusting a series without consideration of intermediate calculations or diagnostics, you only need to look at table D11, the final seasonally adjusted series. For further details about the X-11-ARIMA tables, see Ladiray and Quenneville (2001).

Basic Seasonal Adjustment

Suppose you have monthly retail sales data starting in September 1978 in a SAS data set named SALES. At this point you do not suspect that any calendar effects are present, and there are no prior adjustments that need to be made to the data.

In this simplest case, you need only specify the DATE= variable in the MONTHLY statement, which associates a SAS date value to each observation. To see the results of the seasonal adjustment, you must request table D11, the final seasonally adjusted series, in a TABLES statement.

   data sales;
      input sales @@;
      date = intnx( 'month', '01sep1978'd, _n_-1 );
      format date monyy7.;
   datalines;

   ... more lines ...

   /*--- X-11 ARIMA ---*/
   proc x11 data=sales;
      monthly date=date;
      var sales;
      tables d11;
   run;


Figure 31.1  Basic Seasonal Adjustment

                         The X11 Procedure

              X-11 Seasonal Adjustment Program
              U. S. Bureau of the Census
              Economic Research and Analysis Division
              November 1, 1968

 The X-11 program is divided into seven major parts.
 Part  Description
  A.   Prior adjustments, if any
  B.   Preliminary estimates of irregular component weights
       and regression trading day factors
  C.   Final estimates of above
  D.   Final estimates of seasonal, trend-cycle and
       irregular components
  E.   Analytical tables
  F.   Summary measures
  G.   Charts

 Series - sales
 Period covered - 9/1978 to 8/1990

 Type of run: multiplicative seasonal adjustment.
 Selected Tables or Charts.
 Sigma limits for graduating extreme values are 1.5 and 2.5
 Irregular values outside of 2.5-sigma limits are excluded
 from trading day regression


Figure 31.2  Basic Seasonal Adjustment

                            The X11 Procedure
                      Seasonal Adjustment of - sales

                  D11 Final Seasonally Adjusted Series
 Year       JAN      FEB      MAR      APR      MAY      JUN
 1978         .        .        .        .        .        .
 1979   124.935  126.533  125.282  125.650  127.754  129.648
 1980   128.734  139.542  143.726  143.854  148.723  144.530
 1981   176.329  166.264  167.433  167.509  173.573  175.541
 1982   186.747  202.467  192.024  202.761  197.548  206.344
 1983   233.109  223.345  218.179  226.389  224.249  227.700
 1984   238.261  239.698  246.958  242.349  244.665  247.005
 1985   275.766  282.316  294.169  285.034  294.034  296.114
 1986   325.471  332.228  330.401  330.282  333.792  331.349
 1987   363.592  373.118  368.670  377.650  380.316  376.297
 1988   370.966  384.743  386.833  405.209  380.840  389.132
 1989   428.276  418.236  429.409  446.467  437.639  440.832
 1990   480.631  474.669  486.137  483.140  481.111  499.169
 -------------------------------------------------------------
 Avg    277.735  280.263  282.435  286.358  285.354  288.638

                  D11 Final Seasonally Adjusted Series
 Year       JUL      AUG      SEP      OCT      NOV      DEC      Total
 1978         .        .  123.507  125.776  124.735  129.870    503.887
 1979   127.880  129.285  126.562  134.905  133.356  136.117    1547.91
 1980   140.120  153.475  159.281  162.128  168.848  165.159    1798.12
 1981   179.301  182.254  187.448  197.431  184.341  184.304    2141.73
 1982   211.690  213.691  214.204  218.060  228.035  240.347    2513.92
 1983   222.045  222.127  222.835  212.227  230.187  232.827    2695.22
 1984   251.247  253.805  264.924  266.004  265.366  277.025    3037.31
 1985   294.196  309.162  311.539  319.518  318.564  323.921    3604.33
 1986   337.095  341.127  346.173  350.183  360.792  362.333    4081.23
 1987   379.668  375.607  374.257  372.672  368.135  364.150    4474.13
 1988   385.479  377.147  397.404  403.156  413.843  416.142    4710.89
 1989   450.103  454.176  460.601  462.029  427.499  485.113    5340.38
 1990   485.370  485.103        .        .        .        .    3875.33
 ----------------------------------------------------------------------
 Avg    288.683  291.413  265.728  268.674  268.642  276.442

 Total:  40324     Mean:  280.03     S.D.:  111.31

You can compare the original series, table B1, and the final seasonally adjusted series, table D11, by plotting them together. These tables are requested and named in the OUTPUT statement.

   title 'Monthly Retail Sales Data (in $1000)';
   proc x11 data=sales noprint;
      monthly date=date;
      var sales;
      output out=out b1=sales d11=adjusted;
   run;


   proc sgplot data=out;
      series x=date y=sales    / markers
                                 markerattrs=(color=red symbol='asterisk')
                                 lineattrs=(color=red)
                                 legendlabel="original";
      series x=date y=adjusted / markers
                                 markerattrs=(color=blue symbol='circle')
                                 lineattrs=(color=blue)
                                 legendlabel="adjusted";
      yaxis label='Original and Seasonally Adjusted Time Series';
   run;

Figure 31.3 Plot of Original and Seasonally Adjusted Data

X-11-ARIMA

An inherent problem with the X-11 method is the revision of the seasonal factor estimates as new data become available. The X-11 method uses a set of centered moving averages to estimate the seasonal components. These moving averages apply symmetric weights to all observations except those at the beginning and end of the series, where asymmetric weights have to be applied.


These asymmetric weights can cause poor estimates of the seasonal factors, which then can cause large revisions when new data become available. While large revisions to seasonally adjusted values are not common, they can happen. When they do happen, it undermines the credibility of the X-11 seasonal adjustment method.

A method to address this problem was developed at Statistics Canada (Dagum 1980, 1982a). This method, known as X-11-ARIMA, applies an ARIMA model to the original data (after adjustments, if any) to forecast the series one or more years. This extended series is then seasonally adjusted, allowing symmetric weights to be applied to the end of the original data. This method was tested against a large number of Canadian economic series and was found to greatly reduce the amount of revisions as new data were added.

The X-11-ARIMA method is available in PROC X11 through the use of the ARIMA statement. The ARIMA statement extends the original series either with a user-specified ARIMA model or by an automatic selection process in which the best model from a set of five predefined ARIMA models is used.

The following example illustrates the use of the ARIMA statement. The ARIMA statement does not contain a user-specified model, so the best model is chosen by the automatic selection process. Forecasts from this best model are then used to extend the original series by one year. The following partial listing shows parameter estimates and model diagnostics for the ARIMA model chosen by the automatic selection process.

   proc x11 data=sales;
      monthly date=date;
      var sales;
      arima;
   run;

Figure 31.4  X-11-ARIMA Model Selection

                Monthly Retail Sales Data (in $1000)
                         The X11 Procedure
                   Seasonal Adjustment of - sales

              Conditional Least Squares Estimation
                               Approx.
 Parameter     Estimate      Std Error     t Value    Lag
 MU           0.0001728      0.0009596        0.18      0
 MA1,1        0.3739984      0.0893427        4.19      1
 MA1,2        0.0231478      0.0892154        0.26      2
 MA2,1        0.5727914      0.0790835        7.24     12


Figure 31.4  continued

           Conditional Least Squares Estimation
 Variance Estimate     =     0.0014313
 Std Error Estimate    =     0.0378326
 AIC                   =     -482.2412  *
 SBC                   =     -470.7404  *
 Number of Residuals   =           131
 * Does not include log determinant

 Criteria Summary for Model 2: (0,1,2)(0,1,1)s, Log Transform

 Box-Ljung Chi-square: 22.03 with 21 df Prob= 0.40
 (Criteria prob > 0.05)
 Test for over-differencing: sum of MA parameters = 0.57
 (must be < 0.90)
 MAPE - Last Three Years: 2.84 (Must be < 15.00 %)
      - Last Year: 3.04
      - Next to Last Year: 1.96
      - Third from Last Year: 3.51

Table D11 (final seasonally adjusted series) is now constructed using symmetric weights on observations at the end of the actual data. This should result in better estimates of the seasonal factors and, thus, smaller revisions in Table D11 as more data become available.

Syntax: X11 Procedure

The X11 procedure uses the following statements:

   PROC X11 options ;
      ARIMA options ;
      BY variables ;
      ID variables ;
      MACURVES option ;
      MONTHLY options ;
      OUTPUT OUT=dataset options ;
      PDWEIGHTS option ;
      QUARTERLY options ;
      SSPAN options ;
      TABLES tablenames ;
      VAR variables ;

Either the MONTHLY or QUARTERLY statement must be specified, depending on the type of time series data you have. The PDWEIGHTS and MACURVES statements can be used only with the MONTHLY statement. The TABLES statement controls the printing of tables, while the OUTPUT statement controls the creation of the OUT= data set.
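As a minimal sketch of a complete invocation (the data set and variable names here are hypothetical), a quarterly series could be seasonally adjusted as follows:

   /* Hypothetical quarterly data set 'qsales' with SAS date variable 'qdate' */
   proc x11 data=qsales;
      quarterly date=qdate;
      var sales;
      tables d11;   /* print the final seasonally adjusted series */
   run;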


Functional Summary

The statements and options controlling the X11 procedure are summarized in the following table.

 Description                                            Statement    Option

 Data Set Options
 specify input data set                                 PROC X11     DATA=
 write the trading-day regression results to an
   output data set                                      PROC X11     OUTTDR=
 write the stable seasonality test results to an
   output data set                                      PROC X11     OUTSTB=
 write table values to an output data set               OUTPUT       OUT=
 add extrapolated values to the output data set         PROC X11     OUTEX
 add year ahead estimates to the output data set        PROC X11     YRAHEADOUT
 write the sliding spans analysis results to an
   output data set                                      PROC X11     OUTSPAN=

 Printing Control Options
 suppress all printed output                            PROC X11     NOPRINT
 suppress all printed ARIMA output                      ARIMA        NOPRINT
 print all ARIMA output                                 ARIMA        PRINTALL
 print selected tables and charts                       TABLES
 print selected groups of tables                        MONTHLY      PRINTOUT=
                                                        QUARTERLY    PRINTOUT=
 print selected groups of charts                        MONTHLY      CHARTS=
                                                        QUARTERLY    CHARTS=
 print preliminary tables associated with ARIMA
   processing                                           ARIMA        PRINTFP
 specify number of decimals for printed tables          MONTHLY      NDEC=
                                                        QUARTERLY    NDEC=
 suppress all printed SSPAN output                      SSPAN        NOPRINT
 print all SSPAN output                                 SSPAN        PRINTALL

 Date Information Options
 specify a SAS date variable                            MONTHLY      DATE=
                                                        QUARTERLY    DATE=
 specify the beginning date                             MONTHLY      START=
                                                        QUARTERLY    START=
 specify the ending date                                MONTHLY      END=
                                                        QUARTERLY    END=
 specify beginning year for trading-day regression      MONTHLY      TDCOMPUTE=

 Declaring the Role of Variables
 specify BY-group processing                            BY


 Description                                            Statement    Option

 specify the variables to be seasonally adjusted        VAR
 specify identifying variables                          ID
 specify the prior monthly factor                       MONTHLY      PMFACTOR=

 Controlling the Table Computations
 use additive adjustment                                MONTHLY      ADDITIVE
                                                        QUARTERLY    ADDITIVE
 specify seasonal factor moving average length          MACURVES
 specify the extreme value limit for trading-day
   regression                                           MONTHLY      EXCLUDE=
 specify the lower bound for extreme irregulars         MONTHLY      FULLWEIGHT=
                                                        QUARTERLY    FULLWEIGHT=
 specify the upper bound for extreme irregulars         MONTHLY      ZEROWEIGHT=
                                                        QUARTERLY    ZEROWEIGHT=
 include the length-of-month in trading-day
   regression                                           MONTHLY      LENGTH
 specify trading-day regression action                  MONTHLY      TDREGR=
 compute summary measure only                           MONTHLY      SUMMARY
                                                        QUARTERLY    SUMMARY
 modify extreme irregulars prior to trend cycle
   estimation                                           MONTHLY      TRENDADJ
                                                        QUARTERLY    TRENDADJ
 specify moving average length in trend cycle
   estimation                                           MONTHLY      TRENDMA=
                                                        QUARTERLY    TRENDMA=
 specify weights for prior trading-day factors          PDWEIGHTS

PROC X11 Statement

PROC X11 options ;

The following options can appear in the PROC X11 statement:

DATA= SAS-data-set
   specifies the input SAS data set used. If it is omitted, the most recently created SAS data set is used.

OUTEXTRAP
   adds the extra observations used in ARIMA processing to the output data set. When ARIMA forecasting/backcasting is requested, extra observations are appended to the ends of the series, and the calculations are carried out on this extended series. The appended observations are not normally written to the OUT= data set. However, if OUTEXTRAP is specified, these extra observations are written to the output data set. If a DATE= variable is specified in the MONTHLY/QUARTERLY statement, the date variable is extrapolated to identify forecasts/backcasts. The OUTEXTRAP option can be abbreviated as OUTEX.


NOPRINT
   suppresses any printed output. The NOPRINT option overrides any PRINTOUT=, CHARTS=, or TABLES statement and any output associated with the ARIMA statement.

OUTSPAN= SAS-data-set
   specifies the output data set to store the sliding spans analysis results. Tables A1, C18, D10, and D11 for each span are written to this data set. See the section "The OUTSPAN= Data Set" on page 2071 for details.

OUTSTB= SAS-data-set
   specifies the output data set to store the stable seasonality test results (table D8). All the information in the analysis of variance table associated with the stable seasonality test is contained in the variables written to this data set. See the section "OUTSTB= Data Set" on page 2071 for details.

OUTTDR= SAS-data-set
   specifies the output data set to store the trading-day regression results (tables B15 and C15). All the information in the analysis of variance table associated with the trading-day regression is contained in the variables written to this data set. This option is valid only when TDREGR=PRINT, TEST, or ADJUST is specified in the MONTHLY statement. See the section "OUTTDR= Data Set" on page 2072 for details.

YRAHEADOUT
   adds one-year-ahead forecast values to the output data set for tables C16, C18, and D10. The original purpose of this option was to avoid recomputation of the seasonal adjustment factors when new data became available. While computing costs were an important factor when the X-11 method was developed, this is no longer the case and this option is obsolete. See the section "The YRAHEADOUT Option" on page 2067 for details.

ARIMA Statement

ARIMA options ;

The ARIMA statement applies the X-11-ARIMA method to the series specified in the VAR statement. This method uses an ARIMA model estimated from the original data to extend the series one or more years. The ARIMA statement options control the ARIMA model used and the estimation, forecasting, and printing of this model.

There are two ways of obtaining an ARIMA model to extend the series. A model can be given explicitly with the MODEL= and TRANSFORM= options. Alternatively, the best-fitting model from a set of five predefined models is found automatically whenever the MODEL= option is absent. See the section "Details of Model Selection" on page 2068 for details.

BACKCAST= n
   specifies the number of years to backcast the series. The default is BACKCAST=0. See the section "Effect of Backcast and Forecast Length" on page 2067 for details.


CHICR= value
   specifies the criteria for the significance level for the Box-Ljung chi-square test for lack of fit when testing the five predefined models. The default is CHICR=0.05. The CHICR= option values must be between 0.01 and 0.90. The hypothesis being tested is that of model adequacy. Nonrejection of the hypothesis is evidence for an adequate model. Making the CHICR= value smaller makes it easier to accept the model. See the section "Criteria Details" on page 2069 for further details on the CHICR= option.

CONVERGE= value
   specifies the convergence criterion for the estimation of an ARIMA model. The default value is 0.001. The CONVERGE= value must be positive.

FORECAST= n
   specifies the number of years to forecast the series. The default is FORECAST=1. See the section "Effect of Backcast and Forecast Length" on page 2067 for details.

MAPECR= value
   specifies the criteria for the mean absolute percent error (MAPE) when testing the five predefined models. A small MAPE value is evidence for an adequate model; a large MAPE value results in the model being rejected. The MAPECR= value is the boundary for acceptance/rejection. Thus a larger MAPECR= value would make it easier for a model to pass the criteria. The default is MAPECR=15. The MAPECR= option values must be between 1 and 100. See the section "Criteria Details" on page 2069 for further details on the MAPECR= option.

MAXITER= n
   specifies the maximum number of iterations in the estimation process. MAXITER must be between 1 and 60; the default value is 15.

METHOD= CLS
METHOD= ULS
METHOD= ML
   specifies the estimation method. ML requests maximum likelihood, ULS requests unconditional least squares, and CLS requests conditional least squares. METHOD=CLS is the default. The maximum likelihood estimates are more expensive to compute than the conditional least squares estimates. In some cases, however, they can be preferable. For further information on the estimation methods, see "Estimation Details" on page 248 in Chapter 7, "The ARIMA Procedure."
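For example, the following sketch requests maximum likelihood estimation for an explicitly specified model; the orders here echo the (0,1,2)(0,1,1)s log-transformed model selected in Figure 31.4:

   proc x11 data=sales;
      monthly date=date;
      var sales;
      /* METHOD= applies to the estimation of the specified model */
      arima model=( p=0 q=2 sq=1 dif=1 sdif=1 )
            transform=log method=ml;
   run;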

specifies the ARIMA model. The AR and MA orders are given by P=n1 and Q=n2, respectively, while the seasonal AR and MA orders are given by SP=n3 and SQ=n4, respectively. The lag corresponding to seasonality is determined by the MONTHLY or QUARTERLY statement. Similarly, differencing and seasonal differencing are given by DIF=n5 and SDIF=n6, respectively. For example arima model=( p=2 q=1 sp=1 dif=1 sdif=1 );


   specifies a (2,1,1)(1,1,0)s model, where s, the seasonality, is either 12 (monthly) or 4 (quarterly). More examples of the MODEL= syntax are given in the section "Details of Model Selection" on page 2068.

NOINT
   suppresses the fitting of a constant (or intercept) parameter in the model. (That is, the parameter μ is omitted.)

CENTER
   centers each time series by subtracting its sample mean. The analysis is done on the centered data. Later, when forecasts are generated, the mean is added back. Note that centering is done after differencing. The CENTER option is normally used in conjunction with the NOCONSTANT option of the ESTIMATE statement. For example, to fit an AR(1) model on the centered data without an intercept, use the following ARIMA statement:

      arima model=( p=1 center noint );

NOPRINT
   suppresses the normal printout generated by the ARIMA statement. Note that the effect of specifying the NOPRINT option in the ARIMA statement is different from the effect of specifying the NOPRINT in the PROC X11 statement, since the former affects only ARIMA output.

OVDIFCR= value
   specifies the criteria for the over-differencing test when testing the five predefined models. When the MA parameters in one of these models sum to a number close to 1.0, this is an indication of over-parameterization and the model is rejected. The OVDIFCR= value is the boundary for this rejection; values greater than this value fail the over-differencing test. A larger OVDIFCR= value would make it easier for a model to pass the criteria. The default is OVDIFCR=0.90. The OVDIFCR= option values must be between 0.80 and 0.99. See the section "Criteria Details" on page 2069 for further details on the OVDIFCR= option.

PRINTALL
   provides the same output as the default printing for all models fit and, in addition, prints an estimation summary and chi-square statistics for each model fit. See "Printed Output" on page 2074 for details.

PRINTFP
   prints the results for the initial pass of X11 made to exclude trading-day effects. This option has an effect only when the TDREGR= option specifies ADJUST, TEST, or PRINT. In these cases, an initial pass of the standard X11 method is required to get rid of calendar effects before doing any ARIMA estimation. Usually this first pass is not of interest, and by default no tables are printed. However, specifying PRINTFP in the ARIMA statement causes any tables printed in the final pass to also be printed for this initial pass.


TRANSFORM= (LOG) | LOG
TRANSFORM= ( constant ** power )
   The ARIMA statement in PROC X11 allows certain transformations on the series before estimation. The specified transformation is applied only to a user-specified model. If TRANSFORM= is specified and the MODEL= option is not specified, the transformation request is ignored and a warning is printed.

   The LOG transformation requests that the natural log of the series be used for estimation. The resulting forecast values are transformed back to the original scale.

   A general power transformation of the form X_t → (X_t + a)^b is obtained by specifying

      transform= ( a ** b )

   If the constant a is not specified, it is assumed to be zero. The specified ARIMA model is then estimated using the transformed series. The resulting forecast values are transformed back to the original scale.

BY Statement

BY variables ;

A BY statement can be used with PROC X11 to obtain separate analyses on observations in groups defined by the BY variables. When a BY statement appears, the procedure expects the input DATA= data set to be sorted in order of the BY variables.

ID Statement

ID variables ;

If you are creating an output data set, use the ID statement to put values of the ID variables, in addition to the table values, into the output data set. The ID statement has no effect when an output data set is not created. If the DATE= variable is specified in the MONTHLY or QUARTERLY statement, this variable is included automatically in the OUTPUT data set. If no DATE= variable is specified, the variable _DATE_ is added. The date variable (or _DATE_) values outside the range of the actual data (from ARIMA forecasting or backcasting, or from YRAHEADOUT) are extrapolated, while all other ID variables are missing.

MACURVES Statement

MACURVES month=option . . . ;


The MACURVES statement specifies the length of the moving-average curves for estimating the seasonal factors for any month. This statement can be used only with monthly time series data. The month=option specifications consist of the month name (or the first three letters of the month name), an equal sign, and one of the following option values:

'3'

specifies a three-term moving average for the month

'3X3'

specifies a three-by-three moving average

'3X5'

specifies a three-by-five moving average

'3X9'

specifies a three-by-nine moving average

STABLE

specifies a stable seasonal factor (average of all values for the month)

For example, the statement

   macurves jan='3' feb='3x3' march='3x5' april='3x9';

uses a three-term moving average to estimate seasonal factors for January, a 3×3 (a three-term moving average of a three-term moving average) for February, a 3×5 (a three-term moving average of a five-term moving average) for March, and a 3×9 (a three-term moving average of a nine-term moving average) for April. The numeric values used for the weights of the various moving averages and a discussion of the derivation of these weights are given in Shiskin, Young, and Musgrave (1967). A general discussion of moving average weights is given in Dagum (1985). If the specification for a month is omitted, the X11 procedure uses a three-by-three moving average for the first estimate of each iteration and a three-by-five average for the second estimate.

MONTHLY Statement

MONTHLY options ;

The MONTHLY statement must be used when the input data to PROC X11 are a monthly time series. The MONTHLY statement specifies options that determine the computations performed by PROC X11 and what is included in its output. Either the DATE= or the START= option must be used. The following options can appear in the MONTHLY statement.

ADDITIVE

performs additive adjustments. If the ADDITIVE option is omitted, PROC X11 performs multiplicative adjustments.


CHARTS= STANDARD
CHARTS= FULL
CHARTS= NONE

specifies the charts produced by the procedure. The default is CHARTS=STANDARD, which specifies 12 monthly seasonal charts and a trend cycle chart. If you specify CHARTS=FULL (or CHARTS=ALL), the procedure prints additional charts of irregular and seasonal factors. To print no charts, specify CHARTS=NONE.

The TABLES statement can also be used to specify particular monthly charts to be printed. If no CHARTS= option is given and a TABLES statement is given, the TABLES statement overrides the default value of CHARTS=STANDARD; that is, no charts (or tables) are printed except those specified in the TABLES statement. However, if both the CHARTS= option and a TABLES statement are given, the charts corresponding to the CHARTS= option and those requested by the TABLES statement are printed. For example, suppose you wanted only charts G1, the final seasonally adjusted series and trend cycle, and G4, the final irregular and final modified irregular series. You would specify the following statements:

   monthly date=date;
   tables g1 g4;

DATE= variable

specifies a variable that gives the date for each observation. The starting and ending dates are obtained from the first and last values of the DATE= variable, which must contain SAS date values. The procedure checks values of the DATE= variable to ensure that the input observations are sequenced correctly. This variable is automatically added to the OUTPUT= data set if one is requested and extrapolated if necessary. If the DATE= option is not specified, the START= option must be specified.

The DATE= option and the START= and END= options can be used in combination to subset a series for processing. For example, suppose you have 12 years of monthly data (144 observations, no missing values) beginning in January 1970 and ending in December 1981, and you wanted to seasonally adjust only six years beginning in January 1974. Specifying

   monthly date=date start=jan1974 end=dec1979;

would seasonally adjust only this subset of the data. If instead you wanted to adjust the last eight years of data, only the START= option is needed:

   monthly date=date start=jan1974;

END= mmmyyyy

specifies that only the part of the input series ending with the month and year given be adjusted (for example, END=DEC1970). See the DATE= variable option for using the START= and END= options to subset a series for processing.

EXCLUDE= value

excludes from the trading-day regression any irregular values that are more than value standard deviations from the mean. The EXCLUDE=value must be between 0.1 and 9.9, with the default value being 2.5.
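For example, the following sketch excludes from the trading-day regression any irregular values that are more than 3.0 standard deviations from the mean (the data set and variable names are illustrative):

   proc x11 data=sales;
      monthly date=date tdregr=adjust exclude=3.0;
      var sales;
   run;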


FULLWEIGHT= value

assigns weights to irregular values based on their distance from the mean in standard deviation units. The weights are used for estimating seasonal and trend cycle components. Irregular values less than the FULLWEIGHT= value (in standard deviation units) are assigned full weights of 1, values that fall between the ZEROWEIGHT= and FULLWEIGHT= limits are assigned weights linearly graduated between 0 and 1, and values greater than the ZEROWEIGHT= limit are assigned a weight of 0. For example, if ZEROWEIGHT=2 and FULLWEIGHT=1, a value 1.3 standard deviations from the mean would be assigned a graduated weight. The FULLWEIGHT= value must be between 0.1 and 9.9 but must be less than the ZEROWEIGHT= value. The default is FULLWEIGHT=1.5.
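This linear graduation implies (although the formula is not stated explicitly here) that a value $|z|$ standard deviations from the mean, with $\text{FULLWEIGHT} < |z| < \text{ZEROWEIGHT}$, receives the weight

$$w = \frac{\text{ZEROWEIGHT} - |z|}{\text{ZEROWEIGHT} - \text{FULLWEIGHT}}$$

so in the preceding example the value 1.3 standard deviations from the mean would receive the weight $(2 - 1.3)/(2 - 1) = 0.7$.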

LENGTH

includes the length-of-month allowance in computing trading-day factors. If this option is omitted, length-of-month allowances are included with the seasonal factors.

NDEC= n

specifies the number of decimal places shown in the printed tables in the listing. This option has no effect on the precision of the variable values in the output data set.

PMFACTOR= variable

specifies a variable containing the prior monthly factors. Use this option if you have previous knowledge of monthly adjustment factors. The PMFACTOR= option can be used to make the following adjustments:

- adjust the level of all or part of a series with discontinuities
- adjust for the influence of holidays that fall on different dates from year to year, such as the effect of Easter on certain retail sales
- adjust for unreasonable weather influence on series, such as housing starts
- adjust for changing starting dates of fiscal years (for budget series) or model years (for automobiles)
- adjust for temporary dislocating events, such as strikes

See the section “Prior Daily Weights and Trading-Day Regression” on page 2065 for details and examples using the PMFACTOR= option.

PRINTOUT= STANDARD | LONG | FULL | NONE

specifies the tables to be printed by the procedure. If the PRINTOUT=STANDARD option is specified, between 17 and 27 tables are printed, depending on the other options that are specified. PRINTOUT=LONG prints between 27 and 39 tables, and PRINTOUT=FULL prints between 44 and 59 tables. Specifying PRINTOUT=NONE results in no tables being printed; however, charts are still printed. The default is PRINTOUT=STANDARD.

The TABLES statement can also be used to specify particular monthly tables to be printed. If no PRINTOUT= option is specified and a TABLES statement is given, the TABLES statement overrides the default value of PRINTOUT=STANDARD; that is, no tables (or charts) are printed except those given in the TABLES statement. However, if both the PRINTOUT= option and a TABLES statement are specified, the tables corresponding to the PRINTOUT= option and those requested by the TABLES statement are printed.

START= mmmyyyy

adjusts only the part of the input series starting with the specified month and year. When the DATE= option is not used, the START= option gives the year and month of the first input observation (for example, START=JAN1966). START= must be specified if DATE= is not given. If START= is specified (and no DATE= option is given) and an OUT= data set is requested, a variable named _DATE_ is added to the data set, giving the date value for each observation. See the DATE= variable option for using the START= and END= options to subset a series.

SUMMARY

specifies that the data are already seasonally adjusted and the procedure is to produce summary measures. If the SUMMARY option is omitted, the X11 procedure performs seasonal adjustment of the input data before calculating summary measures.

TDCOMPUTE= year

uses the part of the input series beginning with January of the specified year to derive trading-day weights. If this option is omitted, the entire series is used.

TDREGR= NONE | PRINT | ADJUST | TEST

specifies the treatment of trading-day regression. TDREGR=NONE omits the computation of the trading-day regression. TDREGR=PRINT computes and prints the trading-day regression but does not adjust the series. TDREGR=ADJUST computes and prints the trading-day regression and adjusts the irregular components to obtain preliminary weights. TDREGR=TEST adjusts the final series if the trading-day regression estimates explain significant variation on the basis of an F test (or residual trading-day variation if prior weights are used). The default is TDREGR=NONE. See the section “Prior Daily Weights and Trading-Day Regression” on page 2065 for details and examples using the TDREGR= option.

If ARIMA processing is requested, any value of TDREGR= other than the default TDREGR=NONE causes PROC X11 to perform an initial pass (see the section “Details: X11 Procedure” on page 2056 and the PRINTFP option).

The significance level reported in Table C15 should be viewed with caution. The dependent variable in the trading-day regression is the irregular component formed by an averaging operation. This induces a correlation in the dependent variable and hence in the residuals from which the F test is computed. Hence the distribution of the trading-day regression F statistic differs from an exact F; see Cleveland and Devlin (1980) for details.

TRENDADJ

modifies extreme irregular values prior to computing the trend cycle estimates in the first iteration. If the TRENDADJ option is omitted, the trend cycle is computed without modifications for extremes.

TRENDMA= 9 | 13 | 23

specifies the number of terms in the moving average to be used by the procedure in estimating the variable trend cycle component. The value of the TRENDMA= option must be 9, 13, or 23. If the TRENDMA= option is omitted, the procedure selects an appropriate moving average. For information about the number of terms in the moving average, see Shiskin, Young, and Musgrave (1967).

ZEROWEIGHT= value

assigns weights to irregular values based on their distance from the mean in standard deviation units. The weights are used for estimating seasonal and trend cycle components. Irregular values beyond the standard deviation limit specified in the ZEROWEIGHT= option are assigned zero weights. Values that fall between the two limits (ZEROWEIGHT= and FULLWEIGHT=) are assigned weights linearly graduated between 0 and 1. For example, if ZEROWEIGHT=2 and FULLWEIGHT=1, a value 1.3 standard deviations from the mean would be assigned a graduated weight. The ZEROWEIGHT= value must be between 0.1 and 9.9 but must be greater than the FULLWEIGHT= value. The default is ZEROWEIGHT=2.5. The ZEROWEIGHT= option can be used in conjunction with the FULLWEIGHT= option to adjust outliers from a monthly or quarterly series. See Example 31.3 later in this chapter for an illustration of this use.

OUTPUT Statement

OUTPUT OUT= SAS-data-set tablename=var1 var2 . . . ;

The OUTPUT statement creates an output data set containing specified tables. The data set is named by the OUT= option.

OUT= SAS-data-set

If OUT= is omitted, the SAS System names the new data set by using the DATAn convention.

For each table to be included in the output data set, write the X11 table identification keyword, an equal sign, and a list of new variable names:

   tablename = var1 var2 ...

The tablename keywords that can be used in the OUTPUT statement are listed in the section “Printed Output” on page 2074. The following is an example of a VAR statement and an OUTPUT statement:

   var z1 z2 z3;
   output out=out_x11 b1=s d11=w x y;

The variable s contains the table B1 values for the variable z1, while the table D11 values for variables z1, z2, and z3 are contained in variables w, x, and y, respectively. As this example shows, the list of variables following a tablename= keyword can be shorter than the VAR variable list.

In addition to the variables named by tablename=var1 var2 . . . , the ID variables, and BY variables, the output data set contains a date identifier variable. If the DATE= option is given in the MONTHLY or QUARTERLY statement, the DATE= variable is the date identifier. If no DATE= option is given, a variable named _DATE_ is the date identifier.

PDWEIGHTS Statement

PDWEIGHTS day=w . . . ;

The PDWEIGHTS statement can be used to specify one to seven daily weights. The statement can be used only with monthly series that are seasonally adjusted using the multiplicative model. These weights are used to compute prior trading-day factors, which are then used to adjust the original series prior to the seasonal adjustment process. Only relative weights are needed; the X11 procedure adjusts the weights so that they sum to 7.0. The weights can also be corrected by the procedure on the basis of estimates of trading-day variation from the input data. See the section “Prior Daily Weights and Trading-Day Regression” on page 2065 for details and examples using the PDWEIGHTS statement.

Each day=w option specifies a weight (w) for the named day. The day can be any day, Sunday through Saturday. The day keyword can be the full spelling of the day or the three-letter abbreviation. For example, SATURDAY=1.0 and SAT=1.0 are both valid. Each weight w must be a numeric value between 0.0 and 10.0. The following is an example of a PDWEIGHTS statement:

   pdweights sun=.2 mon=.9 tue=1 wed=1 thu=1 fri=.8 sat=.3;

Any number of days can be specified with one PDWEIGHTS statement. The default weight value for any day that is not specified is 0. If you do not use a PDWEIGHTS statement, the program computes daily weights if TDREGR=ADJUST is specified. See Shiskin, Young, and Musgrave (1967) for details.

QUARTERLY Statement

QUARTERLY options ;

The QUARTERLY statement must be used when the input data are quarterly time series. This statement includes options that determine the computations performed by the procedure and what is in the printed output. The DATE= option or the START= option must be used. The following options can appear in the QUARTERLY statement.

ADDITIVE

performs additive adjustments. If this option is omitted, the procedure performs multiplicative adjustments.


CHARTS= STANDARD
CHARTS= FULL
CHARTS= NONE

specifies the charts to be produced by the procedure. The default value is CHARTS=STANDARD, which specifies four quarterly seasonal charts and a trend cycle chart. If you specify CHARTS=FULL (or CHARTS=ALL), the procedure prints additional charts of irregular and seasonal factors. To print no charts, specify CHARTS=NONE.

The TABLES statement can also be used to specify particular charts to be printed. The presence of a TABLES statement overrides the default value of CHARTS=STANDARD; that is, if a TABLES statement is specified and no CHARTS= option is specified, no charts (or tables) are printed except those given in the TABLES statement. However, if both the CHARTS= option and a TABLES statement are given, the charts corresponding to the CHARTS= option and those requested by the TABLES statement are printed. For example, suppose you wanted only charts G1, the final seasonally adjusted series and trend cycle, and G4, the final irregular and final modified irregular series. This is accomplished by specifying the following statements:

   quarterly date=date;
   tables g1 g4;

DATE= variable

specifies a variable that gives the date for each observation. The starting and ending dates are obtained from the first and last values of the DATE= variable, which must contain SAS date values. The procedure checks values of the DATE= variable to ensure that the input observations are sequenced correctly. This variable is automatically added to the OUTPUT= data set if one is requested, and extrapolated if necessary. If the DATE= option is not specified, the START= option must be specified.

The DATE= option and the START= and END= options can be used in combination to subset a series for processing. For example, suppose you have a series with 10 years of quarterly data (40 observations, no missing values) beginning in '1970Q1' and ending in '1979Q4', and you want to seasonally adjust only four years beginning in '1974Q1' and ending in '1977Q4'. Specifying

   quarterly date=variable start='1974q1' end='1977q4';

seasonally adjusts only this subset of the data. If instead you wanted to adjust the last six years of data, only the START= option is needed:

   quarterly date=variable start='1974q1';

END= 'yyyyQq'

specifies that only the part of the input series ending with the quarter and year given be adjusted (for example, END='1973Q4'). The specification must be enclosed in quotes, and q must be 1, 2, 3, or 4. See the DATE= variable option for using the START= and END= options to subset a series.

FULLWEIGHT= value

assigns weights to irregular values based on their distance from the mean in standard deviation units. The weights are used for estimating seasonal and trend cycle components. Irregular values less than the FULLWEIGHT= value (in standard deviation units) are assigned full weights of 1, values that fall between the ZEROWEIGHT= and FULLWEIGHT= limits are assigned weights linearly graduated between 0 and 1, and values greater than the ZEROWEIGHT= limit are assigned a weight of 0. For example, if ZEROWEIGHT=2 and FULLWEIGHT=1, a value 1.3 standard deviations from the mean would be assigned a graduated weight. The default is FULLWEIGHT=1.5.

NDEC= n

specifies the number of decimal places shown on the output tables. This option has no effect on the precision of the variables in the output data set.

PRINTOUT= STANDARD
PRINTOUT= LONG
PRINTOUT= FULL
PRINTOUT= NONE

specifies the tables to print. If PRINTOUT=STANDARD is specified, between 17 and 27 tables are printed, depending on the other options that are specified. PRINTOUT=LONG prints between 27 and 39 tables, and PRINTOUT=FULL prints between 44 and 59 tables. Specifying PRINTOUT=NONE results in no tables being printed. The default is PRINTOUT=STANDARD.

The TABLES statement can also specify particular quarterly tables to be printed. If no PRINTOUT= is given and a TABLES statement is given, the TABLES statement overrides the default value of PRINTOUT=STANDARD; that is, no tables (or charts) are printed except those given in the TABLES statement. However, if both the PRINTOUT= option and a TABLES statement are given, the tables corresponding to the PRINTOUT= option and those requested by the TABLES statement are printed.

START= 'yyyyQq'

adjusts only the part of the input series starting with the quarter and year given. When the DATE= option is not used, the START= option gives the year and quarter of the first input observation (for example, START='1967Q1'). The specification must be enclosed in quotes, and q must be 1, 2, 3, or 4. START= must be specified if the DATE= option is not given. If START= is specified (and no DATE= is given) and an OUTPUT= data set is requested, a variable named _DATE_ is added to the data set, giving the date value for a given observation. See the DATE= option for using the START= and END= options to subset a series.

SUMMARY

specifies that the input is already seasonally adjusted and that the procedure is to produce summary measures. If this option is omitted, the procedure performs seasonal adjustment of the input data before calculating summary measures.

TRENDADJ

modifies extreme irregular values prior to computing the trend cycle estimates. If this option is omitted, the trend cycle is computed without modification for extremes.


ZEROWEIGHT= value

assigns weights to irregular values based on their distance from the mean in standard deviation units. The weights are used for estimating seasonal and trend cycle components. Irregular values beyond the standard deviation limit specified in the ZEROWEIGHT= option are assigned zero weights. Values that fall between the two limits (ZEROWEIGHT= and FULLWEIGHT=) are assigned weights linearly graduated between 0 and 1. For example, if ZEROWEIGHT=2 and FULLWEIGHT=1, a value 1.3 standard deviations from the mean would be assigned a graduated weight. The default is ZEROWEIGHT=2.5. The ZEROWEIGHT= option can be used in conjunction with the FULLWEIGHT= option to adjust outliers from a monthly or quarterly series. See Example 31.3 later in this chapter for an illustration of this use.

SSPAN Statement

SSPAN options ;

The SSPAN statement applies sliding spans analysis to determine the suitability of seasonal adjustment for an economic series. The following options can appear in the SSPAN statement:

NDEC= n

specifies the number of decimal places shown on selected sliding span reports. This option has no effect on the precision of the variable values in the OUTSPAN output data set.

CUTOFF= value

gives the percentage value for determining an excessive difference within a span for the seasonal factors, the seasonally adjusted series, and month-to-month and year-to-year differences in the seasonally adjusted series. The default value is 3.0. The use of the CUTOFF= value in determining the maximum percent difference (MPD) is described in the section “Computational Details for Sliding Spans Analysis” on page 2062. Caution should be used in changing the default CUTOFF= value. The empirical threshold ranges found by the U.S. Census Bureau no longer apply when the value is changed.

TDCUTOFF= value

gives the percentage value for determining an excessive difference within a span for the trading-day factors. The default value is 2.0. The use of the TDCUTOFF= value in determining the maximum percent difference (MPD) is described in the section “Computational Details for Sliding Spans Analysis” on page 2062. Caution should be used in changing the default TDCUTOFF= value. The empirical threshold ranges found by the U.S. Census Bureau no longer apply when the value is changed.

NOPRINT

suppresses all sliding span reports. See “Computational Details for Sliding Spans Analysis” on page 2062 for more details on sliding span reports.


PRINT

prints the summary sliding span reports S 0 through S 6.E.

PRINTALL

prints the summary sliding span reports S 0 through S 6.E, along with detail reports S 7.A through S 7.E.

TABLES Statement

TABLES tablenames ;

The TABLES statement prints the tables specified in addition to the tables that are printed as a result of the PRINTOUT= option in the MONTHLY or QUARTERLY statement. Table names are listed in Table 31.4 later in this chapter. To print only selected tables, omit the PRINTOUT= option in the MONTHLY or QUARTERLY statement and list the tables to be printed in the TABLES statement. For example, to print only the final seasonal factors and final seasonally adjusted series, use the statement

   tables d10 d11;

VAR Statement

VAR variables ;

The VAR statement is used to specify the variables in the input data set that are to be analyzed by the procedure. Only numeric variables can be specified. If the VAR statement is omitted, all numeric variables are analyzed except those appearing in a BY or ID statement or the variable named in the DATE= option in the MONTHLY or QUARTERLY statement.

Details: X11 Procedure

Historical Development of X-11

This section briefly describes the historical development of the standard X-11 seasonal adjustment method and the later development of the X-11-ARIMA method. Most of the following discussion is based on a comprehensive article by Bell and Hillmer (1984), which describes the history of X-11 and the justification of using seasonal adjustment methods, such as X-11, given the current availability of time series software. For further discussions about statistical problems associated with the X-11 method, see Ghysels (1990).

Seasonal adjustment methods began to be developed in the 1920s and 1930s, before there were suitable analytic models available and before electronic computing devices were in existence. The lack of any suitable model led to methods that worked the same for any series — that is, methods that were not model-based and that could be applied to any series. Experience with economic series had shown that a given mathematical form could adequately represent a time series only for a fixed length; as more data were added, the model became inadequate. This suggested an approach that used moving averages. For further analysis of the properties of X-11 moving averages, see Cleveland and Tiao (1976).

The basic method was to break up an economic time series into long-term trend, long-term cyclical movements, seasonal movements, and irregular fluctuations. Early investigators found that it was not possible to uniquely decompose the trend and cycle components. Thus, these two were grouped together; the resulting component is usually referred to as the “trend cycle component.” It was also found that estimating seasonal components in the presence of trend produced biased estimates of the seasonal components, but, at the same time, estimating trend in the presence of seasonality was difficult. This eventually led to the iterative approach used in the X-11 method.

Two other problems were encountered by early investigators. First, some economic series appear to have changing or evolving seasonality. Second, moving averages were very sensitive to extreme values. The estimation method used in the X-11 method allows for evolving seasonal components. For the second problem, the X-11 method uses repeated adjustment of extreme values.

All of these problems encountered in the early investigation of seasonal adjustment methods suggested the use of moving averages in estimating components. Even with the use of moving averages instead of a model-based method, massive amounts of hand calculations were required. Only a small number of series could be adjusted, and little experimentation could be done to evaluate variations on the method.

With the advent of electronic computing in the 1950s, work on seasonal adjustment methods proceeded rapidly. These methods still used the framework previously described; variants of these basic methods could now be easily tested against a large number of series. Much of the work was done by Julian Shiskin and others at the U.S. Bureau of the Census beginning in 1954 and culminating after a number of variants into the X-11 Variant of the Census Method II Seasonal Adjustment Program, which PROC X11 implements.

References for this work during this period include Shiskin and Eisenpress (1957), Shiskin (1958), and Marris (1961). The authoritative documentation for the X-11 Variant is in Shiskin, Young, and Musgrave (1967). This document is not equivalent to a program specification; however, the FORTRAN code that implements the X-11 Variant is in the public domain. A less detailed description of the X-11 Variant is given in U.S. Bureau of the Census (1969).


Development of the X-11-ARIMA Method

The X-11 method uses symmetric moving averages in estimating the various components. At the end of the series, however, these symmetric weights cannot be applied. Either asymmetric weights have to be used, or some method of extending the series must be found. While various methods of extending a series have been proposed, the most important method to date has been the X-11-ARIMA method developed at Statistics Canada. This method uses Box-Jenkins ARIMA models to extend the series.

The Time Series Research and Analysis Division of Statistics Canada investigated 174 Canadian economic series and found five ARIMA models out of twelve that fit the majority of series well and reduced revisions for the most recent months. References that give details of various aspects of the X-11-ARIMA methodology include Dagum (1980, 1982a, c, 1983, 1988), Laniel (1985), Lothian and Morry (1978a), and Huot et al. (1986).

Differences between X11ARIMA/88 and PROC X11

The original implementation of the X-11-ARIMA method was by Statistics Canada in 1980 (Dagum 1980), with later changes and enhancements made in 1988 (Dagum 1988). The calculations performed by PROC X11 differ from those in X11ARIMA/88, which will result in differences in the final component estimates provided by these implementations.

There are three areas where Statistics Canada made changes to the original X-11 seasonal adjustment method in developing X11ARIMA/80 (Monsell 1984). These are (a) selection of extreme values, (b) replacement of extreme values, and (c) generation of seasonal and trend cycle weights. These changes have not been implemented in the current version of PROC X11. Thus the procedure produces results identical to those from previous versions of PROC X11 in the absence of an ARIMA statement.

Additional differences can result from the ARIMA estimation. X11ARIMA/88 uses conditional least squares (CLS), while CLS, unconditional least squares (ULS), and maximum likelihood (ML) are all available in PROC X11 by using the METHOD= option in the ARIMA statement. Generally, parameter estimates will differ for the different methods.
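For example, the following sketch fits an airline-type model by maximum likelihood instead of the default CLS (the data set and variable names are illustrative):

   proc x11 data=sales;
      monthly date=date;
      var sales;
      arima model=( q=1 sq=1 dif=1 sdif=1 ) method=ml;
   run;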

Implementation of the X-11 Seasonal Adjustment Method

The following steps describe the analysis of a monthly time series using multiplicative seasonal adjustment. Additional steps used by the X-11-ARIMA method are also indicated. Equivalent descriptions apply for an additive model if you replace divide with subtract where applicable.

In the multiplicative adjustment, the original series $O_t$ is assumed to be of the form

$$O_t = C_t S_t I_t P_t D_t$$

where $C_t$ is the trend cycle component, $S_t$ is the seasonal component, $I_t$ is the irregular component, $P_t$ is the prior monthly factors component, and $D_t$ is the trading-day component.


The trading-day component can be further factored as

$$D_t = D_{r,t} D_{tr,t}$$

where $D_{tr,t}$ are the trading-day factors derived from the prior daily weights, and $D_{r,t}$ are the residual trading-day factors estimated from the trading-day regression. For further information about estimating trading-day variation, see Young (1965).

Additional Steps When Using the X-11-ARIMA Method

The X-11-ARIMA method consists of extending a given series by an ARIMA model and applying the usual X-11 seasonal adjustment method to this extended series. Thus in the simplest case, in which there are no prior factors or calendar effects in the series, the ARIMA model selection, estimation, and forecasting are performed first, and the resulting extended series goes through the standard X-11 steps described in the next section.

If prior factor or calendar effects are present, they must be eliminated from the series before the ARIMA estimation is done because these effects are not stochastic. Prior factors, if present, are removed first. Calendar effects represented by prior daily weights are then removed. If there are no further calendar effects, the adjusted series is extended by the ARIMA model, and this extended series goes through the standard X-11 steps without repeating the removal of prior factors and calendar effects from prior daily weights.

If further calendar effects are present, a trading-day regression must be performed. In this case it is necessary to go through an initial pass of the X-11 steps to obtain a final trading-day adjustment. In this initial pass, the series, adjusted for prior factors and prior daily weights, goes through the standard X-11 steps. At the conclusion of these steps, a final series adjusted for prior factors and all calendar effects is available. This adjusted series is then extended by the ARIMA model, and this extended series goes through the standard X-11 steps again, without repeating the removal of prior factors and calendar effects from prior daily weights and trading-day regression.

The Standard X-11 Seasonal Adjustment Method

The standard X-11 seasonal adjustment method consists of the following steps. These steps are applied to the original data or to the original data extended by an ARIMA model.

1. In step 1, the data are read, ignoring missing values until the first nonmissing value is found. If prior monthly factors are present, the procedure reads the prior monthly factors $P_t$ and divides them into the original series to obtain $O_t / P_t = C_t S_t I_t D_{tr,t} D_{r,t}$. Seven daily weights can be specified to develop monthly factors to adjust the series for trading-day variation, $D_{tr,t}$; these factors are then divided into the original or prior-adjusted series to obtain $C_t S_t I_t D_{r,t}$.

2. In steps 2, 3, and 4, three iterations are performed, each of which provides estimates of the seasonal $S_t$, trading-day $D_{r,t}$, trend cycle $C_t$, and irregular components $I_t$. Each iteration refines estimates of the extreme values in the irregular components. After extreme values are identified and modified, final estimates of the seasonal component, seasonally adjusted series, trend cycle, and irregular components are produced. Step 2 consists of three substeps:

   a) During the first iteration, a centered, 12-term moving average is applied to the original series $O_t$ to provide a preliminary estimate $\hat{C}_t$ of the trend cycle curve $C_t$. This moving average combines 13 (a 2-term moving average of a 12-term moving average) consecutive monthly values, removing the $S_t$ and $I_t$. Next, it obtains a preliminary estimate $\widehat{S_t I_t}$ by

      $$\widehat{S_t I_t} = \frac{O_t}{\hat{C}_t}$$

   b) A moving average is then applied to the $\widehat{S_t I_t}$ to obtain an estimate $\hat{S}_t$ of the seasonal factors. $\widehat{S_t I_t}$ is then divided by this estimate to obtain an estimate $\hat{I}_t$ of the irregular component. Next, a moving standard deviation is calculated from the irregular component and is used in assigning a weight to each monthly value for measuring its degree of extremeness. These weights are used to modify extreme values in $\widehat{S_t I_t}$. New seasonal factors are estimated by applying a moving average to the modified value of $\widehat{S_t I_t}$. A preliminary seasonally adjusted series is obtained by dividing the original series by these new seasonal factors. A second estimate of the trend cycle is obtained by applying a weighted moving average to this seasonally adjusted series.

   c) The same process is used to obtain second estimates of the seasonally adjusted series and improved estimates of the irregular component. This irregular component is again modified for extreme values and then used to provide estimates of trading-day factors and refined weights for the identification of extreme values.

3. Using the same computations, a second iteration is performed on the original series that has been adjusted by the trading-day factors and irregular weights developed in the first iteration. The second iteration produces final estimates of the trading-day factors and irregular weights.

4. A third and final iteration is performed using the original series that has been adjusted for trading-day factors and irregular weights computed during the second iteration. During the third iteration, PROC X11 develops final estimates of seasonal factors, the seasonally adjusted series, the trend cycle, and the irregular components. The procedure computes summary measures of variation and produces a moving average of the final adjusted series.

Sliding Spans Analysis

The motivation for sliding spans analysis is to answer the question, when is an economic series unsuitable for seasonal adjustment? There have been a number of past attempts to answer this question: the stable seasonality F test, the moving seasonality F test, Q statistics, and others. Sliding spans analysis attempts to quantify the stability of the seasonal adjustment process, and hence to quantify the suitability of seasonal adjustment for a given series.

It is based on a very simple idea: for a stable series, deleting a small number of observations should not result in greatly different component estimates compared with the original, full series. Conversely, if deleting a small number of observations results in drastically different estimates, the series is unstable. For example, a drastic difference in the seasonal factors (Table D10) might result from a dominating irregular component or sudden changes in the seasonal component. When the seasonal component estimates of a series are unstable in this manner, they have little meaning and the series is likely to be unsuitable for seasonal adjustment.

Sliding spans analysis, developed at the Statistical Research Division of the U.S. Census Bureau (Findley et al. 1990; Findley and Monsell 1986), performs a repeated seasonal adjustment on subsets or spans of the full series. In particular, an initial span of the data, typically eight years in length, is seasonally adjusted, and the Tables C18, the trading-day factors (if a trading-day regression is performed), D10, the seasonal factors, and D11, the seasonally adjusted series, are retained for further processing. Next, one year of data is deleted from the beginning of the initial span and one year of data is added. This new span is seasonally adjusted as before, with the same tables retained. This process continues until the end of the data is reached. The beginning and ending dates of the spans are such that the last observation in the original data is also the last observation in the last span. This is discussed in more detail in the following paragraphs.

The following notation for the components or differences computed in the sliding spans analysis follows Findley et al. (1990). The symbol $X_t(k)$ denotes component $X$ in month (or quarter) $t$, computed from data in the $k$th span. These components are now defined.

- Seasonal Factors (Table D10): $S_t(k)$
- Trading-Day Factors (Table C18): $TD_t(k)$
- Seasonally Adjusted Data (Table D11): $SA_t(k)$
- Month-to-Month Changes in the Seasonally Adjusted Data: $MM_t(k)$
- Year-to-Year Changes in the Seasonally Adjusted Data: $YY_t(k)$

The key measure is the maximum percent difference across spans. For example, consider a series that begins in January 1972, ends in December 1984, and has four spans, each of length 8 years (see Figure 1 in Findley et al. (1990), p. 346). Consider $S_t(k)$, the seasonal factor (Table D10) for month $t$ from span $k$, and let $N_t$ denote the set of spans containing month $t$; that is,

$$N_t = \{k : \text{span } k \text{ contains month } t\}$$

In the middle years of the series there is overlap of all four spans, and $N_t$ contains all four spans. The last year of the series will have only one span, while the beginning can have one or zero spans depending on the original length.

Since we are interested in how much the seasonal factors vary for a given month across the spans, a natural quantity to consider is

$$\max_{k \in N_t} S_t(k) - \min_{k \in N_t} S_t(k)$$

In the case of the multiplicative model, it is useful to compute a percentage difference; define the maximum percentage difference (MPD) at time $t$ as

$$MPD_t = \frac{\max_{k \in N_t} S_t(k) - \min_{k \in N_t} S_t(k)}{\min_{k \in N_t} S_t(k)}$$


The seasonal factor for month $t$ is then unreliable if $MPD_t$ is large. While no exact significance level can be computed for this statistic, empirical levels have been established by considering over 500 economic series (Findley et al. 1990; Findley and Monsell 1986). For these series it was found that for four spans, stable series typically had less than 15% of the MPD values exceeding 3.0%, while in marginally stable series, between 15% and 25% of the MPD values exceeded 3.0%. A series in which 25% or more of the MPD values exceeded 3.0% is almost always unstable. While these empirical values cannot be considered an exact significance level, they provide a useful empirical basis for deciding if a series is suitable for seasonal adjustment. These percentage values are shifted down when fewer than four spans are used.
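As a worked illustration (with made-up numbers), suppose the four spans yield seasonal factors for a given January of 1.020, 1.031, 1.046, and 1.024. Then

$$MPD_t = \frac{1.046 - 1.020}{1.020} \approx 0.0255$$

that is, about 2.5%, which falls below the 3.0% level used in the empirical thresholds above.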

Computational Details for Sliding Spans Analysis

Length and Number of Spans

The algorithm for determining the length and number of spans for a given series was developed at the U.S. Bureau of the Census, Statistical Research Division. A summary of this algorithm is as follows.

First, an initial length based on the MACURVES month=option specification is determined, and then the maximum number of spans possible using this length is determined. If this maximum number exceeds four, the number of spans is set to four. If this maximum number is one or zero, there are not enough observations to perform the sliding spans analysis. In this case a note is written to the log and the sliding spans analysis is skipped for this variable. If the maximum number of spans is two or three, the actual number of spans used is set equal to this maximum. Finally, the length is adjusted so that the spans begin in January (or the first quarter) of the beginning year of the span.

The remainder of this section gives the computational formulas for the maximum percentage difference (MPD) calculations along with the threshold regions.

Seasonal Factors (Table D10)

For the additive model, the MPD is defined as

$$MPD_t = \max_{k \in N_t} S_t(k) - \min_{k \in N_t} S_t(k)$$

For the multiplicative model, the MPD is

$$MPD_t = \frac{\max_{k \in N_t} S_t(k) - \min_{k \in N_t} S_t(k)}{\min_{k \in N_t} S_t(k)}$$

A series for which less than 15% of the MPD values of D10 exceed 3.0% is stable; between 15% and 25% is marginally stable; and greater than 25% is unstable. Span reports S 2.A through S 2.C give the various breakdowns for the number of times the MPD exceeded these levels.


Trading-Day Factors (Table C18)

For the additive model, the MPD is defined as

$$MPD_t = \max_{k \in N_t} TD_t(k) - \min_{k \in N_t} TD_t(k)$$

For the multiplicative model, the MPD is

$$MPD_t = \frac{\max_{k \in N_t} TD_t(k) - \min_{k \in N_t} TD_t(k)}{\min_{k \in N_t} TD_t(k)}$$

The U.S. Census Bureau currently gives no recommendation concerning MPD thresholds for the trading-day factors. Span reports S 3.A through S 3.C give the various breakdowns for MPD thresholds. When TDREGR=NONE is specified, no trading-day computations are done, and this table is skipped.

Seasonally Adjusted Data (Table D11)

For the additive model, the MPD is defined as

$$MPD_t = \max_{k \in N_t} SA_t(k) - \min_{k \in N_t} SA_t(k)$$

For the multiplicative model, the MPD is

$$MPD_t = \frac{\max_{k \in N_t} SA_t(k) - \min_{k \in N_t} SA_t(k)}{\min_{k \in N_t} SA_t(k)}$$

A series for which less than 15% of the MPD values of D11 exceed 3.0% is stable; between 15% and 25% is marginally stable; and greater than 25% is unstable. Span reports S 4.A through S 4.C give the various breakdowns for the number of times the MPD exceeded these levels.

Month-to-Month Changes in the Seasonally Adjusted Data

Some additional notation is needed for the month-to-month and year-to-year differences. Define $N1_t$ as

$$N1_t = \{k : \text{span } k \text{ contains months } t \text{ and } t-1\}$$

For the additive model, the month-to-month change for span $k$ is defined as

$$MM_t(k) = SA_t - SA_{t-1}$$

while for the multiplicative model

$$MM_t(k) = \frac{SA_t - SA_{t-1}}{SA_{t-1}}$$


Since this quantity is already in percentage form, the MPD for both the additive and multiplicative model is defined as

$$MPD_t = \max_{k \in N1_t} MM_t(k) - \min_{k \in N1_t} MM_t(k)$$

The current recommendation of the U.S. Census Bureau is that if 35% or more of the MPD values of the month-to-month differences of D11 exceed 3.0%, then the series is usually not stable; 40% exceeding this level clearly marks an unstable series. Span reports S 5.A.1 through S 5.C give the various breakdowns for the number of times the MPD exceeds these levels.

Year-to-Year Changes in the Seasonally Adjusted Data

First define $N12_t$ as

$$N12_t = \{k : \text{span } k \text{ contains months } t \text{ and } t-12\}$$

(Appropriate changes in notation for a quarterly series are obvious.) For the additive model, the year-to-year change for span $k$ is defined as

$$YY_t(k) = SA_t - SA_{t-12}$$

while for the multiplicative model

$$YY_t(k) = \frac{SA_t - SA_{t-12}}{SA_{t-12}}$$

Since this quantity is already in percentage form, the MPD for both the additive and multiplicative model is defined as

$$MPD_t = \max_{k \in N12_t} YY_t(k) - \min_{k \in N12_t} YY_t(k)$$

The current recommendation of the U.S. Census Bureau is that if 10% or more of the MPD values of the year-to-year differences of D11 exceed 3.0%, then the series is usually not stable. Span reports S 6.A through S 6.C give the various breakdowns for the number of times the MPD exceeds these levels.

Data Requirements

The input data set must contain either quarterly or monthly time series, and the data must be in chronological order. For the standard X-11 method, there must be at least three years of observations (12 for quarterly time series or 36 for monthly) in the input data sets or in each BY group in the input data set if a BY statement is used. For the X-11-ARIMA method, there must be at least five years of observations (20 for quarterly time series or 60 for monthly) in the input data sets or in each BY group in the input data set if a BY statement is used.


Missing Values

Missing values at the beginning of a series to be adjusted are skipped. Processing starts with the first nonmissing value and continues until the end of the series or until another missing value is found.

Missing values are not allowed for the DATE= variable. The procedure terminates if missing values are found for this variable.

Missing values found in the PMFACTOR= variable are replaced by 100 for the multiplicative model (default) and by 0 for the additive model.

Missing values can occur in the output data set. If the time series specified in the OUTPUT statement is not computed by the procedure, the values of the corresponding variable are missing. If the time series specified in the OUTPUT statement is a moving average, the values of the corresponding variable are missing for the first n and last n observations, where n depends on the length of the moving average. Additionally, if the time series specified is an irregular component modified for extremes, only the modified values are given, and the remaining values are missing.

Prior Daily Weights and Trading-Day Regression

Suppose that a detailed examination of retail sales at ZXY Company indicates that certain days of the week have higher amounts of sales. In particular, Thursday, Friday, and Saturday have approximately twice the amount of sales as Monday, Tuesday, and Wednesday, and no sales occur on Sunday. This means that months with five Saturdays would have higher amounts of sales than months with only four Saturdays.

This phenomenon is called a calendar effect; it can be handled in PROC X11 by using the PDWEIGHTS (prior daily weights) statement or the TDREGR= option (trading-day regression). The PDWEIGHTS statement and the TDREGR= option can be used separately or together.

If the relative weights are known (as in the preceding example), it is appropriate to use the PDWEIGHTS statement. If further residual calendar variation is present, TDREGR=ADJUST should also be used. If you know that a calendar effect is present but know nothing about the relative weights, use TDREGR=ADJUST without a PDWEIGHTS statement.

In this example, it is assumed that the calendar variation is due to both prior daily weights and residual variation. Thus both a PDWEIGHTS statement and TDREGR=ADJUST are specified. Note that only the relative weights are needed; in the actual computations, PROC X11 normalizes the weights to sum to 7.0. If a day of the week is not present in the PDWEIGHTS statement, it is given a value of zero. Thus “sun=0” is not needed.

   proc x11 data=sales;
      monthly date=date tdregr=adjust;
      var sales;
      tables a1 a4 b15 b16 c14 c15 c18 d11;
      pdweights mon=1 tue=1 wed=1 thu=2 fri=2 sat=2;
      output out=x11out a1=a1 a4=a4 b1=b1 c14=c14
                        c16=c16 c18=c18 d11=d11;
   run;

Tables of interest include A1, A4, B15, B16, C14, C15, C18, and D11. Table A4 contains the adjustment factors derived from the prior daily weights; Table C14 contains the extreme irregular values excluded from trading-day regression; Table C15 contains the trading-day-regression results; Table C16 contains the monthly factors derived from the trading-day regression; and Table C18 contains the final trading-day factors derived from the combined daily weights. Finally, Table D11 contains the final seasonally adjusted series.

Adjustment for Prior Factors

Suppose now that a strike at ZXY Company during July and August of 1988 caused sales to decrease an estimated 50%. Since this is a one-time event with a known cause, it is appropriate to prior adjust the data to reflect the effects of the strike. This is done in PROC X11 through the use of PMFACTOR=varname (prior monthly factor) in the MONTHLY statement.

In the following example, the PMFACTOR variable is named PMF. Since the estimate of the decrease in sales is 50%, PMF has a value of 50.0 for the observations corresponding to July and August 1988, and a value of 100.0 for the remaining observations. This prior adjustment on SALES is performed by replacing SALES with the calculated value (SALES/PMF) * 100.0. A value of 100.0 for PMF leaves SALES unchanged, while a value of 50.0 for PMF doubles SALES. This value is the estimate of what SALES would have been without the strike. The following example shows how this prior adjustment is accomplished.

   data sales2;
      set sales;
      if '01jul1988'd . . .

B=(value . . . )

specifies initial or fixed values for the EVENT parameters. For details about the B= option, see B=(value . . . ) in the section “REGRESSION Statement” on page 2124.

USERTYPE=AO
USERTYPE=CONSTANT
USERTYPE=EASTER
USERTYPE=HOLIDAY
USERTYPE=LABOR
USERTYPE=LOM
USERTYPE=LOMSTOCK
USERTYPE=LOQ
USERTYPE=LPYEAR
USERTYPE=LS
USERTYPE=RP
USERTYPE=SCEASTER
USERTYPE=SEASONAL
USERTYPE=TC
USERTYPE=TD
USERTYPE=TDSTOCK
USERTYPE=THANKS
USERTYPE=USER

For details about the USERTYPE= option, see the USERTYPE= option in the section “REGRESSION Statement” on page 2124.

INPUT Statement

INPUT variables < / options > ;

The INPUT statement specifies variables in the PROC X12 DATA= data set that are to be used as regressors in the regression portion of the regARIMA model. The variables in the data set should contain the values for each observation that define the regressor. Future values of regression variables should also be included in the DATA= data set if the time series listed in the VAR statement is to be extended with regARIMA forecasts. Multiple INPUT statements can be specified. If a MDLINFOIN= data set is not specified, then all variables listed in the INPUT statements are applied to all BY groups and all time series that are processed. If a MDLINFOIN= data set is specified, then the INPUT statements apply only if no regression information for the BY group and series is available in the MDLINFOIN= data set.

The following options can appear in the INPUT statement.

B=(value . . . )

specifies initial or fixed values for the INPUT variable parameters. For details about the B= option, see the B=(value . . . ) option in the section “REGRESSION Statement” on page 2124.

USERTYPE=AO
USERTYPE=CONSTANT
USERTYPE=EASTER
USERTYPE=HOLIDAY
USERTYPE=LABOR
USERTYPE=LOM
USERTYPE=LOMSTOCK
USERTYPE=LOQ
USERTYPE=LPYEAR
USERTYPE=LS
USERTYPE=RP
USERTYPE=SCEASTER
USERTYPE=SEASONAL
USERTYPE=TC
USERTYPE=TD
USERTYPE=TDSTOCK
USERTYPE=THANKS
USERTYPE=USER

For details about the USERTYPE= option, see the USERTYPE= option in the section “REGRESSION Statement” on page 2124.
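As a sketch (the data set, date variable, and regressor variable promo are illustrative, not taken from this documentation), an INPUT statement might be used as follows:

   proc x12 data=sales date=date;
      var sales;
      input promo / usertype=user;
      arima model=((0 1 1)(0 1 1));
      estimate;
   run;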

ADJUST Statement

ADJUST options ;

The ADJUST statement adjusts the series for leap year and length-of-period factors prior to estimating a regARIMA model. The “Prior Adjustment Factors” table is associated with the ADJUST statement. The following option can appear in the ADJUST statement.

PREDEFINED=LOM
PREDEFINED=LOQ
PREDEFINED=LPYEAR

specifies length-of-month adjustment, length-of-quarter adjustment, or leap year adjustment. PREDEFINED=LOM and PREDEFINED=LOQ are equivalent; the actual adjustment is determined by the interval of the time series. Also, since leap year adjustment is a limited form of length-of-period adjustment, only one type of predefined adjustment can be specified. The PREDEFINED= option should not be used in conjunction with PREDEFINED=TD or PREDEFINED=TD1COEF in the REGRESSION statement or MODE=ADD or MODE=PSEUDOADD in the X11 statement. PREDEFINED=LPYEAR cannot be specified unless the series is log transformed.

If the series is to be transformed by using a Box-Cox or logistic transformation, the series is first adjusted according to the ADJUST statement, then transformed.

In the case of a length-of-month adjustment for the series with observations $Y_t$, each observation is first divided by the number of days in that month, $m_t$, and then multiplied by the average length of month (30.4375), resulting in $(30.4375 \times Y_t)/m_t$. Length-of-quarter adjustments are performed in a similar manner, resulting in $(91.3125 \times Y_t)/q_t$, where $q_t$ is the length in days of quarter $t$.

ARIMA Statement F 2115

Forecasts of the transformed and adjusted data are transformed and adjusted back to the original scale for output.
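For instance, a January value of 310 would be adjusted to $(30.4375 \times 310)/31 \approx 304.4$. The following sketch requests a leap year adjustment, which requires that the series be log transformed (the data set and variable names are illustrative, and the TRANSFORM statement is assumed to supply the required log transformation):

   proc x12 data=sales date=date;
      var sales;
      transform function=log;
      adjust predefined=lpyear;
      arima model=((0 1 1)(0 1 1));
      estimate;
   run;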

ARIMA Statement

ARIMA options ;

The ARIMA statement specifies the ARIMA part of the regARIMA model. This statement defines a pure ARIMA model if no REGRESSION statements, INPUT statements, or EVENT statements are specified. The ARIMA part of the model can include multiplicative seasonal factors. The following option can appear in the ARIMA statement.

MODEL=((p d q) (P D Q)s)

specifies the ARIMA model. The format follows standard Box-Jenkins notation (Box, Jenkins, and Reinsel 1994). The nonseasonal AR and MA orders are given by p and q, respectively, while the seasonal AR and MA orders are given by P and Q. The number of differences and seasonal differences are given by d and D, respectively. The notation (p d q) and (P D Q) can also be specified as (p, d, q) and (P, D, Q). The maximum lag of any AR or MA parameter is 36. The maximum value of a difference order, d or D, is 144. All values for p, d, q, P, D, and Q should be nonnegative integers. The lag that corresponds to seasonality is s; s should be a positive integer. If s is omitted, it is set equal to the value used in the SEASONS= option in the PROC X12 statement.

For example, the following statements specify an ARIMA $(2,1,1)(1,1,0)_{12}$ model:

   proc x12 data=ICMETI seasons=12 start=jan1968;
      arima model=((2,1,1)(1,1,0));

ESTIMATE Statement

ESTIMATE options ;

The ESTIMATE statement estimates the regARIMA model. The regARIMA model is specified by the REGRESSION, INPUT, EVENT, and ARIMA statements or by the MDLINFOIN= data set. Estimation output includes point estimates and standard errors for all estimated AR, MA, and regression parameters; the maximum likelihood estimate of the variance $\sigma^2$; t statistics for individual regression parameters; $\chi^2$ statistics for assessing the joint significance of the parameters associated with certain regression effects (if included in the model); and likelihood-based model selection statistics (if the exact likelihood function is used). The regression effects for which $\chi^2$ statistics are produced are fixed seasonal effects.

Tables displayed in the output associated with estimation are “Exact ARMA Likelihood Estimation Iteration Tolerances,” “Average Absolute Percentage Error in within-Sample Forecasts,” “ARMA Iteration History,” “AR/MA Roots,” “Exact ARMA Likelihood Estimation Iteration Summary,” “Regression Model Parameter Estimates,” “Chi-Squared Tests for Groups of Regressors,” “Exact ARMA Maximum Likelihood Estimation,” and “Estimation Summary.”

The following options can appear in the ESTIMATE statement.

MAXITER=value

specifies the maximum number of iterations allowed for estimating the AR and MA parameters. For models with regression variables, this limit applies to the total number of ARMA iterations over all iterations of the iterative generalized least squares (IGLS) algorithm. For models without regression variables, this is the maximum number of iterations allowed for the set of ARMA iterations. The default is MAXITER=200.

TOL=value

specifies the convergence tolerance for the nonlinear estimation. Absolute changes in the log-likelihood are compared to the TOL= value to check convergence of the estimation iterations. For models with regression variables, the TOL= value is used to check convergence of the IGLS iterations (where the regression parameters are reestimated for each new set of AR and MA parameters). For models without regression variables, there are no IGLS iterations, and the TOL= value is then used to check convergence of the nonlinear iterations used to estimate the AR and MA parameters. The default value is TOL=0.00001. The minimum tolerance value is a positive value based on the machine precision and the length of the series. If a tolerance less than the minimum supported value is specified, an error message is displayed and the series is not processed.

ITPRINT

specifies that the “Iterations History” table be displayed. This includes detailed output for estimation iterations (including log-likelihood values and parameters) and counts of function evaluations and iterations. It is useful to examine the “Iterations History” table when errors occur within estimation iterations. By default, only successful iterations are displayed, unless the PRINTERR option is specified. An unsuccessful iteration is an iteration that is restarted due to a problem such as a root inside the unit circle. Successful iterations have a status of 0. If restarted iterations are displayed, a note at the end of the table gives definitions for status codes that indicate a restarted iteration. For restarted iterations, the number of function evaluations and the number of iterations will be –1, which is displayed as missing. If regression parameters are included in the model, then both IGLS and ARMA iterations are included in the table. The number of function evaluations is a cumulative total.

PRINTERR

causes restarted iterations to be included in the “Iterations History” table (if ITPRINT is specified) or creates the “Restarted Iterations” table (if ITPRINT is not specified). Whether or not PRINTERR is specified, a WARNING message is printed to the log file if any iteration is restarted during estimation.
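The following sketch illustrates how these estimation options might be combined. The data set SALES, its analysis variable sales, the date variable date, and the airline model shown are hypothetical choices for illustration only:

   proc x12 data=sales date=date;
      var sales;
      arima model=((0,1,1)(0,1,1));
      /* raise the iteration limit, tighten the tolerance, and
         display the detailed "Iterations History" table */
      estimate maxiter=500 tol=0.000001 itprint;
   run;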

FORECAST Statement

FORECAST options ;


The FORECAST statement uses the estimated model to forecast the time series. The output contains point forecasts and forecast statistics for the transformed and original series.

The following option can appear in the FORECAST statement.

LEAD=value

specifies the number of periods ahead to forecast. The default is the number of periods in a year (4 or 12), and the maximum is 60. Tables that contain forecasts, standard errors, and confidence limits are displayed in association with the FORECAST statement. If the data is transformed, then two tables are displayed: one table for the original data, and one table for the transformed data.
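For example, the following sketch requests forecasts 24 periods past the end of the series (two years, assuming monthly data); the data set, variable names, and model are hypothetical:

   proc x12 data=sales date=date;
      var sales;
      arima model=((0,1,1)(0,1,1));
      estimate;
      /* forecast 24 periods beyond the end of the series */
      forecast lead=24;
   run;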

IDENTIFY Statement

IDENTIFY options ;

The IDENTIFY statement is used to produce plots of the sample autocorrelation function (ACF) and partial autocorrelation function (PACF) for identifying the ARIMA part of a regARIMA model. The sample ACF and PACF are produced for all combinations of the nonseasonal and seasonal differences of the data specified by the DIFF= and SDIFF= options. If the model includes a regression component (specified using the REGRESSION, INPUT, and EVENT statements or the MDLINFOIN= data set), then the ACFs and PACFs are calculated for the specified differences of the regression residuals. If the model does not include a regression component, then the ACFs and PACFs are calculated for the specified differences of the original data.

Tables displayed in association with identification are “Autocorrelation of Model Residuals” and “Partial Autocorrelation of Model Residuals.” If the model includes a regression component (specified using the REGRESSION, INPUT, and EVENT statements or the MDLINFOIN= data set), then the “Regression Model Parameter Estimates” table is also available.

The following options can appear in the IDENTIFY statement.

DIFF=(order, order, order)

specifies orders of nonseasonal differencing to use in model identification. The value 0 specifies no differencing; the value 1 specifies one nonseasonal difference (1 - B); the value 2 specifies two nonseasonal differences (1 - B)^2; and so forth. The ACFs and PACFs are produced for all orders of nonseasonal differencing specified, in combination with all orders of seasonal differencing specified in the SDIFF= option. The default is DIFF=(0). You can specify up to three values for nonseasonal differences.

SDIFF=(order, order, order)

specifies orders of seasonal differencing to use in model identification. The value 0 specifies no seasonal differencing; the value 1 specifies one seasonal difference (1 - B^s); the value 2 specifies two seasonal differences (1 - B^s)^2; and so forth. Here the value for s corresponds to the period specified in the SEASONS= option in the PROC X12 statement. The value of the SEASONS= option is supplied explicitly or is implicitly supplied through the INTERVAL= option or the values of the DATE= variable. The ACFs and PACFs are produced for all orders of seasonal differencing specified, in combination with all orders of nonseasonal differencing specified in the DIFF= option. The default is SDIFF=(0). You can specify up to three values for seasonal differences.

For example, the following statement produces ACFs and PACFs for two levels of differencing, (1 - B) and (1 - B)(1 - B^s):

   identify diff=(1) sdiff=(0, 1);

PRINTREG

causes the “Regression Model Parameter Estimates” table to be printed if the REGRESSION statement is present. By default, the table is not printed.
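As an illustration, the following sketch removes a hypothetical trading day effect before computing the ACFs and PACFs for several combinations of differencing, and prints the regression estimates; all data set and variable names are assumptions:

   proc x12 data=sales date=date;
      var sales;
      regression predefined=td;
      /* ACFs/PACFs for the (0,0), (0,1), (1,0), and (1,1)
         combinations of nonseasonal and seasonal differencing */
      identify diff=(0,1) sdiff=(0,1) printreg;
   run;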

AUTOMDL Statement

AUTOMDL options ;

The AUTOMDL statement is used to invoke the automatic model selection procedure of the X-12-ARIMA method. This method is based largely on the TRAMO (time series regression with ARIMA noise, missing values, and outliers) method by Gomez and Maravall (1997a, b). If the AUTOMDL statement is used without the OUTLIER statement, then only missing values regressors are included in the regARIMA model. If the AUTOMDL and the OUTLIER statements are used, then both missing values regressors and regressors for automatically identified outliers are included in the regARIMA model.

If both the AUTOMDL statement and the ARIMA statement are present, the ARIMA statement is ignored. The ARIMA statement specifies the model, while the AUTOMDL statement allows the X12 procedure to select the model. If the AUTOMDL statement is specified and a data set is specified in the MDLINFOIN= option of the PROC X12 statement, then the AUTOMDL statement is ignored if the specified data set contains a model specification for the series. If no model for the series is specified in the data set specified in the MDLINFOIN= option, then the AUTOMDL (or ARIMA) statement is used to determine the model. Thus, it is possible to give a specific model for some series and automatically identify the model for other series by using both the MDLINFOIN= option and the AUTOMDL statement.

When AUTOMDL is specified, the X12 procedure compares a model selected using a TRAMO method to a default model. The TRAMO method is implemented first, and involves two parts: identifying the orders of differencing and identifying the ARIMA model. The table “ARIMA Estimates for Unit Root Identification” provides details about the identification of the orders of differencing, while the table “Results of Unit Root Test for Identifying Orders of Differencing” shows the orders of differencing selected by TRAMO. The table “Models Estimated by Automatic ARIMA Model Selection Procedure” provides details regarding the TRAMO automatic model selection, and the table “Best Five ARIMA Models Chosen by Automatic Modeling” ranks the best five models estimated using the TRAMO method. The next available table, “Comparison of Automatically Selected Model and Default Model,” compares the model selected by the TRAMO method to a default model. At this point in the processing, if the default model is selected over the TRAMO model, then PROC X12 displays a note. No note is displayed if the TRAMO model is selected. PROC X12 then performs checks for unit roots, over-differencing, and insignificant ARMA coefficients. If the model is changed due to any of these tests, a note is displayed. The last table, “Final Automatic Model Selection,” shows the results of the automatic model selection.

The following options can appear in the AUTOMDL statement:

MAXORDER=(nonseasonal order, seasonal order)

specifies the maximum orders of nonseasonal and seasonal ARMA polynomials for the automatic ARIMA model identification procedure. The maximum order for the nonseasonal ARMA parameters should be between 1 and 4; the maximum order for the seasonal ARMA should be 1 or 2.

DIFFORDER=(nonseasonal order, seasonal order)

specifies the fixed orders of differencing to be used in the automatic ARIMA model identification procedure. When the DIFFORDER= option is used, only the AR and MA orders are automatically identified. Acceptable values for the regular differencing orders are 0, 1, and 2; acceptable values for the seasonal differencing orders are 0 and 1. If the MAXDIFF= option is also specified, then the DIFFORDER= option is ignored. There are no default values for DIFFORDER=. If neither the DIFFORDER= option nor the MAXDIFF= option is specified, then the default is MAXDIFF=(2,1).

MAXDIFF=(nonseasonal order, seasonal order)

specifies the maximum orders of regular and seasonal differencing for the automatic identification of differencing orders. When MAXDIFF is specified, the differencing orders are identified first, and then the AR and MA orders are identified. Acceptable values for the regular differencing orders are 1 and 2; the only acceptable value for the seasonal differencing order is 1. If both the MAXDIFF= option and the DIFFORDER= option are specified, then the DIFFORDER= option is ignored. If neither the DIFFORDER= nor the MAXDIFF= option is specified, the default is MAXDIFF=(2,1).

NOINT

suppresses the fitting of a constant (or intercept) parameter in the model. (That is, the parameter μ is omitted.)

PRINT=UNITROOTTEST
PRINT=AUTOCHOICE
PRINT=UNITROOTTESTMDL
PRINT=AUTOCHOICEMDL
PRINT=BEST5MODEL

lists the tables to be displayed in the output.

PRINT=AUTOCHOICE displays the tables titled “Comparison of Automatically Selected Model and Default Model” and “Final Automatic Model Selection.” The “Comparison of Automatically Selected Model and Default Model” table compares a default model to the model chosen by the TRAMO-based automatic modeling method. The “Final Automatic Model Selection” table indicates which model has been chosen automatically. If the PRINT= option is not specified, then PRINT=AUTOCHOICE is displayed by default.

PRINT=UNITROOTTEST causes the table titled “Results of Unit Root Test for Identifying Orders of Differencing” to be printed. This table displays the orders that were automatically selected by AUTOMDL. Unless the nonseasonal and seasonal differences are specified using the DIFFORDER= option, AUTOMDL automatically identifies the orders of differencing.

PRINT=UNITROOTTESTMDL displays the table titled “ARIMA Estimates for Unit Root Identification.” This table summarizes the various models that were considered by the TRAMO automatic selection method while identifying the orders of differencing and the statistics associated with those models. The unit root identification method first attempts to obtain the coefficients by using the Hannan-Rissanen method. If Hannan-Rissanen estimation cannot be performed, the algorithm attempts to obtain the coefficients by using conditional likelihood estimation.

PRINT=AUTOCHOICEMDL displays the table “Models Estimated by Automatic ARIMA Model Selection Procedure.” This table summarizes the various models that were considered by the TRAMO automatic model selection method and their measures of fit.

PRINT=BEST5MODEL displays the table “Best Five ARIMA Models Chosen by Automatic Modeling.” This table ranks the five best models that were considered by the TRAMO automatic modeling method.

BALANCED

specifies that the automatic modeling procedure prefer balanced models over unbalanced models. A balanced model is one in which the sum of the AR, differencing, and seasonal differencing orders is equal to the sum of the MA and seasonal MA orders. Specifying BALANCED gives the same preference as the TRAMO program. If BALANCED is not specified, all models are given equal consideration.

HRINITIAL

specifies that Hannan-Rissanen estimation be done before exact maximum likelihood estimation to provide initial values. If HRINITIAL is specified, then models for which the Hannan-Rissanen estimation has an unacceptable coefficient are rejected.

ACCEPTDEFAULT

specifies that the default model be chosen if its Ljung-Box Q is acceptable.

LJUNGBOXLIMIT=value

specifies the acceptance criterion for the confidence coefficient of the Ljung-Box Q statistic. If the Ljung-Box Q for a final model is greater than this value, the model is rejected, the outlier critical value is reduced, and outlier identification is redone with the reduced value (see the REDUCECV= option). The value specified must be greater than 0 and less than 1. The default value is 0.95.

REDUCECV=value

specifies the percentage by which the outlier critical value is reduced when a final model is found to have an unacceptable confidence coefficient for the Ljung-Box Q statistic. This value should be between 0 and 1. The default value is 0.14286.

ARMACV=value

specifies the threshold value for the t statistics associated with the highest order ARMA coefficients. As a check of model parsimony, the parameter estimates and t statistics of the highest order ARMA coefficients are examined to determine whether the coefficient is insignificant. An ARMA coefficient is considered to be insignificant if the absolute value of the parameter estimate is below 0.15 (for 150 or fewer observations) or below 0.1 (for more than 150 observations) and the t value (displayed in the table “Exact ARMA Maximum Likelihood Estimation”) is below the value specified in the ARMACV= option. If the highest order ARMA coefficient is found to be insignificant, then the order of the ARMA model is reduced. For example, if AUTOMDL identifies a (3 1 1)(0 0 1) model and the parameter estimate of the seasonal MA lag of order 1 is –0.9 and its t value is –0.55, then the ARIMA model is reduced to at least (3 1 1)(0 0 0). After the model is reestimated, the check for insignificant coefficients is performed again. If ARMACV=0.54 is specified in the preceding example, then the coefficient is not found to be insignificant and the model is not reduced.

If a constant regressor is allowed in the model and if the t value (displayed in the table “Regression Model Parameter Estimates”) is below the ARMACV= critical value, then the constant regressor is considered to be insignificant and is removed. Note that if a constant regressor is added to or removed from the model and the ARIMA model then changes, the t statistic for the constant regressor also changes. Thus, changing the ARMACV= value does not necessarily add or remove a constant term from the model.

The value specified in the ARMACV= option should be greater than zero. The default value is 1.0.
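The following sketch shows one way these automatic modeling options might be combined; the data set and variable names are hypothetical, and the option values are illustrative:

   proc x12 data=sales date=date;
      var sales;
      /* let PROC X12 pick the differencing and ARMA orders,
         preferring balanced models as the TRAMO program does */
      automdl maxorder=(2,1) maxdiff=(2,1) balanced;
      estimate;
   run;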

OUTPUT Statement

OUTPUT OUT= SAS-data-set tablename1 tablename2 . . . ;

The OUTPUT statement creates an output data set that contains specified tables. The data set is named by the OUT= option.

OUT=SAS-data-set

names the data set to contain the specified tables. If the OUT= option is omitted, the SAS System names the new data set by using the default DATAn convention.

For each table to be included in the output data set, you must specify the X12 tablename keyword. The keyword corresponds to the title label used by the Census Bureau X12-ARIMA software. Currently available tables are A1, A2, A6, A7, A8, A8AO, A8LS, A8TC, A9, A10, B1, C17, C20, D1, D7, D8, D9, D10, D10B, D10D, D11, D11A, D11F, D11R, D12, D13, D16, D16B, D18, E5, E6, E6A, E6R, E7, and MV1. If no table is specified in the OUTPUT statement, Table A1 is output to the OUT= data set by default. The tablename keywords that can be used in the OUTPUT statement are listed in the section “Displayed Output/ODS Table Names/OUTPUT Tablename Keywords” on page 2139.

The following is an example of a VAR statement and an OUTPUT statement:

   var sales costs;
   output out=out_x12 b1 d11;

Note that the default variable name used in the output data set is the input variable name followed by an underscore and the corresponding table name. The variable sales_B1 contains the Table B1 values for the variable sales, the variable costs_B1 contains the Table B1 values for costs, while the Table D11 values for sales are contained in the variable sales_D11, and the variable costs_D11 contains the Table D11 values for costs. If necessary, the variable name is shortened so that the table name can be added. If the DATE= variable is specified in the PROC X12 statement, then that variable is included in the output data set; otherwise, a variable named _DATE_ is written to the OUT= data set as the date identifier.
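As a minimal follow-up sketch, the output data set from the preceding example could be examined as follows (this assumes no DATE= variable was specified, so the date identifier is _DATE_):

   proc print data=out_x12(obs=5);
      /* variable names follow the input-name_table-name convention */
      var _date_ sales_B1 sales_D11 costs_B1 costs_D11;
   run;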

OUTLIER Statement

OUTLIER options ;

The OUTLIER statement specifies that the X12 procedure perform automatic detection of additive (point) outliers, temporary change outliers, level shifts, or any combination of the three when using the specified model. After outliers are identified, the appropriate regression variables are incorporated into the model as “Automatically Identified Outliers,” and the model is reestimated. This procedure is repeated until no additional outliers are found. The OUTLIER statement also identifies potential outliers and lists them in the table “Potential Outliers” in the displayed output. Potential outliers are identified by decreasing the critical value by 0.5.

In the output, the default initial critical values used for outlier detection in a given analysis are displayed in the table “Critical Values to Use in Outlier Detection.” Outliers that are detected and incorporated into the model are displayed in the output in the table “Regression Model Parameter Estimates,” where the regression variable is listed as “Automatically Identified.”

The following options can appear in the OUTLIER statement.

SPAN=(mmmyy,mmmyy)
SPAN=('yyQq','yyQq')

gives the dates of the first and last observations to define a subset for searching for outliers. A single date in parentheses is interpreted to be the starting date of the subset. To specify only the ending date, use SPAN=(,mmmyy) or SPAN=(,'yyQq'). If the starting or ending date is omitted, then the first or last date, respectively, of the input data set is assumed. A four-digit year can be specified; if a two-digit year is specified, the value specified in the YEARCUTOFF= SAS system option applies.

TYPE=NONE
TYPE=(outlier types)

lists the outlier types to be detected by the automatic outlier identification method. TYPE=NONE turns off outlier detection. The valid outlier types are AO, LS, and TC. The default is TYPE=(AO LS).

CV=value

specifies an initial critical value to use for detection of all types of outliers. The absolute value of the t statistic associated with an outlier parameter estimate is compared with the critical value to determine the significance of the outlier. If the CV= option is not specified, then the default initial critical value is computed using a formula presented by Ljung (1993), which is based on the number of observations or model span used in the analysis. Table 32.2 gives default critical values for various series lengths. Increasing the critical value decreases the sensitivity of the outlier detection routine and can reduce the number of observations treated as outliers. The automatic model identification process might lower the critical value by a certain percentage if it fails to identify an acceptable model.

Table 32.2   Default Critical Values for Outlier Identification

   Number of Observations   Outlier Critical Value
                        1                     1.96
                        2                     2.24
                        3                     2.44
                        4                     2.62
                        5                     2.74
                        6                     2.84
                        7                     2.92
                        8                     2.99
                        9                     3.04
                       10                     3.09
                       11                     3.13
                       12                     3.16
                       24                     3.42
                       36                     3.55
                       48                     3.63
                       72                     3.73
                       96                     3.80
                      120                     3.85
                      144                     3.89
                      168                     3.92
                      192                     3.95
                      216                     3.97
                      240                     3.99
                      264                     4.01
                      288                     4.03
                      312                     4.04
                      336                     4.05
                      360                     4.07

AOCV=value

specifies a critical value to use for additive (point) outliers. If AOCV is specified, this value overrides any default critical value for AO outliers. See the CV= option for more details.

LSCV=value

specifies a critical value to use for level shift outliers. If LSCV is specified, this value overrides any default critical value for LS outliers. See the CV= option for more details.


TCCV=value

specifies a critical value to use for temporary change outliers. If TCCV is specified, this value overrides any default critical value for TC outliers. See the CV= option for more details.
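The sketch below combines several of these options; the data set, variable names, and model are hypothetical, and the critical value and span are illustrative:

   proc x12 data=sales date=date;
      var sales;
      arima model=((0,1,1)(0,1,1));
      /* search jan1990-dec1999 for all three outlier types,
         using a common initial critical value of 4.0 */
      outlier span=(jan1990,dec1999) type=(ao ls tc) cv=4.0;
      estimate;
   run;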

REGRESSION Statement

REGRESSION PREDEFINED= variables < / options > ;
REGRESSION USERVAR= variables < / options > ;

The REGRESSION statement includes regression variables in a regARIMA model or specifies regression variables whose effects are to be removed by the IDENTIFY statement to aid in ARIMA model identification. Predefined regression variables are selected with the PREDEFINED= option. User-defined regression variables are specified with the USERVAR= option. The currently available predefined variables are listed below in Table 32.3.

Table A6 in the displayed output generated by the X12 procedure provides information related to trading day effects. Table A7 provides information related to holiday effects. Tables A8, A8AO, A8LS, and A8TC provide information related to outlier factors. Ramps and level shifts are combined in the A8LS table. The A8AO, A8LS, and A8TC tables are available only when more than one outlier type is present in the model. Table A9 provides information about user-defined regression effects. Table A10 provides information about the user-defined seasonal component.

Missing values in the span of an input series automatically create missing value regressors. See the NOTRIMMISS option of the PROC X12 statement and the section “Missing Values” on page 2136 for further details about missing values. Combining your model with additional predefined regression variables can result in a singularity problem. If a singularity occurs, then you might need to alter either the model or the choices of the predefined regressors in order to successfully perform the regression.

In order to seasonally adjust a series that uses a regARIMA model, the factors derived from the regression coefficients must be the same type as the factors generated by the seasonal adjustment procedure, so that combined adjustment factors can be derived and adjustment diagnostics can be generated. If the regARIMA model is applied to a log-transformed series, the regression factors are expressed in the form of ratios, which match seasonal factors generated by the multiplicative (or log-additive) adjustment modes. Conversely, if the regARIMA model is fit to the original series, the regression factors are measured on the same scale as the original series, which match seasonal factors generated by the additive adjustment mode. Note that the default transformation (no transformation) and the default seasonal adjustment mode (multiplicative) are in conflict. Thus, when you specify the X11 statement and any of the REGRESSION, INPUT, or EVENT statements, it is necessary to also specify either a transform option (using the TRANSFORM statement) or a mode (using the MODE= option of the X11 statement) in order to seasonally adjust the data that uses the regARIMA model, as illustrated in the sketch below.

According to Ladiray and Quenneville (2001), “X-12-ARIMA is based on the same principle [as the X-11 method] but proposes, in addition, a complete module, called Reg-ARIMA, that allows for the initial series to be corrected for all sorts of undesirable effects. These effects are estimated using regression models with ARIMA errors (Findley et al. [23]).” In order to correct the series for effects in this manner, the REGRESSION statement must be specified. The effects that can be corrected in this manner are listed in the PREDEFINED= option below.
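The following sketch shows one way to resolve the transformation/mode conflict just described when regression effects are present; the data set, variable names, and model are hypothetical:

   proc x12 data=sales date=date;
      var sales;
      /* a log transform makes the regression factors ratios,
         consistent with multiplicative seasonal adjustment */
      transform function=log;
      regression predefined=td;
      arima model=((0,1,1)(0,1,1));
      estimate;
      x11 mode=mult;
   run;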

REGRESSION Statement F 2125

Either the PREDEFINED= option or the USERVAR= option can be specified in a single REGRESSION statement, but not both. Multiple REGRESSION statements can be used.

The following options can appear in the REGRESSION statement.

PREDEFINED=CONSTANT < / B= >
PREDEFINED=LOM
PREDEFINED=LOMSTOCK
PREDEFINED=LOQ
PREDEFINED=LPYEAR
PREDEFINED=SEASONAL
PREDEFINED=TD
PREDEFINED=TDNOLPYEAR
PREDEFINED=TD1COEF
PREDEFINED=TD1NOLPYEAR
PREDEFINED=EASTER(value)
PREDEFINED=SCEASTER(value)
PREDEFINED=LABOR(value)
PREDEFINED=THANK(value)
PREDEFINED=TDSTOCK(value)
PREDEFINED=SINCOS(value . . . )

lists the predefined regression variables to be included in the model. Data values for these variables are calculated by the program, mostly as functions of the calendar. Table 32.3 gives definitions for the available predefined variables. The values LOM and LOQ are actually equivalent: the actual regression is controlled by the PROC X12 SEASONS= option. Multiple predefined regression variables can be used. The syntax for using both a length-of-month and a seasonal regression can be in one of the following forms:

   regression predefined=lom seasonal;
   regression predefined=(lom seasonal);
   regression predefined=lom predefined=seasonal;

Certain restrictions apply when you use more than one predefined regression variable. Only one of TD, TDNOLPYEAR, TD1COEF, or TD1NOLPYEAR can be specified. LPYEAR cannot be used with TD, TD1COEF, LOM, LOMSTOCK, or LOQ. LOM or LOQ cannot be used with TD or TD1COEF. The following restriction also applies to the SINCOS predefined regression variable. If SINCOS is specified, then the INTERVAL= option or the SEASONS= option must also be specified because there are restrictions to this regression variable based on the frequency of the data.

The predefined regression variables TDSTOCK, SCEASTER, EASTER, LABOR, THANK, and SINCOS require extra parameters. Only one TDSTOCK regressor can be implemented in the regression model. If multiple TDSTOCK variables are specified, PROC X12 uses the last TDSTOCK variable specified. For SCEASTER, EASTER, LABOR, THANK, and SINCOS, multiple regressors can be implemented in the model by specifying the variables with different parameters. The syntax for specifying two EASTER regressors with widths 7 and 14 would be:

   regression predefined=easter(7) easter(14);

For SINCOS, specifying a parameter includes both the sine and the cosine regressor except for the highest order allowed (2 for quarterly data and 6 for monthly data). The most common use of the SINCOS variable for quarterly data would be

   regression predefined=sincos(1,2);

and for monthly data would be

   regression predefined=sincos(1,2,3,4,5,6);

These statements include 3 and 11 regressors in the model, respectively.

Table 32.3   Predefined Regression Variables in X-12-ARIMA

Regression Effect                      Variable Definitions

trend constant (CONSTANT)              (1 - B)^(-d) (1 - B^s)^(-D) I(t >= 1),
                                       where I(t >= 1) = 1 for t >= 1 and
                                       I(t >= 1) = 0 for t < 1

length-of-month (monthly flow) (LOM)   m_t - m̄, where m_t = length of month t
                                       (in days) and m̄ = 30.4375 (average
                                       length of month)

length-of-month stock (LOMSTOCK)       SLOM_t = m_t - m̄^(l) for t = 1, and
                                       SLOM_t = SLOM_(t-1) + m_t - m̄
                                       otherwise, where m̄ and m_t are defined
                                       in LOM and m̄^(l) = 0.375 when the 1st
                                       February in the series is a leap
                                       year, ...