SPSS for psychologists [Seventh ed.] 9781352009941, 1352009943


Table of contents:
Contents
Preface
How to use this book
Chapter 1
Chapters 2–4
Chapters 5–9
Chapters 10–13
Chapter 14
Differences between versions of SPSS
Acknowledgements
1: Introduction
Section 1: PSYCHOLOGICAL RESEARCH AND SPSS
But I am studying psychology, not statistics – why do I need to learn to use SPSS?
Asking questions and collecting data
Levels of measurement
Hypotheses
Operationalisation
Types of study design
Correlational designs
Experimental designs
Quasi-experimental designs
Related and unrelated designs in psychological research
Populations and samples
Parameters and statistics
Descriptive statistics
Confidence intervals and point estimates
Bootstrapping
Inferential statistics and probability
Adjusting p values for one- and two-tailed hypotheses
Exact and asymptotic significance
Confidence intervals and statistical inference
Effect size
Statistical power
Practical equivalence of two samples
Statistical power and SPSS
Section 2: GUIDE TO THE STATISTICAL TESTS COVERED
Choosing the correct statistical procedures
Section 3: WORKING WITH SPSS
Data analysis using SPSS
The different types of window used in SPSS
The Data Editor window
The Viewer window
Some other windows used in SPSS
Section 4: STARTING SPSS
The menu and toolbars of the Data Editor window
Section 5: HOW TO EXIT FROM SPSS
2: Data entry in SPSS
Section 1: THE DATA EDITOR WINDOW
What is the Data Editor window?
The arrangement of the data in the Data Editor window
Section 2: DEFINING A VARIABLE IN SPSS
The Data View and Variable View
Setting up your variables
Variable name
Variable type
Variable width and decimal places
Variable label
Value labels
Missing values
Column format
Column alignment
Measure
Role
Check your settings
Copying variable settings
Section 3: ENTERING DATA
A first data entry exercise
Moving around the Data Editor window
The value labels button
Section 4: SAVING A DATA FILE
To save the data to a file
Section 5: OPENING A DATA FILE
Section 6: DATA ENTRY EXERCISES
Data from an unrelated (independent groups) design
Data from a related (repeated measures) design
Section 7: ANSWERS TO DATA ENTRY EXERCISES
A data file for an unrelated (independent groups) design
The data file for a related (repeated measures) design
Section 8: CHECKING AND CLEANING DATA FILES
3: Exploring, cleaning and graphing data in SPSS
Section 1: DESCRIPTIVE STATISTICS
Section 2: THE DESCRIPTIVES COMMAND
To obtain output from the Descriptives command
Section 3: THE VIEWER WINDOW
Section 4: THE FREQUENCIES COMMAND
To obtain a Frequencies output
The output produced by the Frequencies command
Section 5: THE EXPLORE COMMAND
Using the Explore command to analyse data from an independent groups design
The output produced by the Explore command for an independent groups design
Using the Explore command to analyse data from a repeated measures design
The boxplots produced by the Explore command for a repeated measures design
Section 6: USING DESCRIPTIVE STATISTICS TO CHECK YOUR DATA
Checking variables in Variable View
Using descriptive statistics to check the data
Checking scale variables
Finish cleaning the file
Section 7: INTRODUCING GRAPHING IN SPSS
Producing graphs in SPSS
Graph types
Boxplots
Histogram
Bar charts
Error bar chart
Line charts
Scatterplot
Section 8: CHART BUILDER
To use Chart Builder
Editing charts in the Chart Editor window
Section 9: GRAPHBOARD TEMPLATE CHOOSER
To use Graphboard Template Chooser
The Graphboard Editor window
4: Data handling
Section 1: AN INTRODUCTION TO DATA HANDLING
An example data file
Section 2: SORTING A FILE
The Sort Cases command
Section 3: SPLITTING A FILE
Options
Output
Unsplitting a file
Section 4: SELECTING CASES
Comparing the Select Cases and Split File commands
The Select Cases command
Selection rules
Selection methods
Reselecting all cases
Section 5: RECODING VALUES
Recode into Different Variables
Specifying the values to be recoded
Recode into Same Variables
Conditional recode
Section 6: COMPUTING NEW VARIABLES
Compute and Missing Values
Section 7: COUNTING VALUES
Conditional Count
Section 8: RANKING CASES
Ranking tied values
Ranking within categories
Section 9: DATA TRANSFORMATION
Log transformation of decision latency data
Section 10: DATA FILE FOR SCALES OR QUESTIONNAIRES
A simple check on data entry
Reversals
5: Tests of difference for one- and two-sample designs
Section 1: AN INTRODUCTION TO THE t-TEST
Section 2: THE ONE-SAMPLE t-TEST
Example study: assessing memory
To perform a one-sample t-test
SPSS output for one-sample t-test
Obtained using menu items: Compare Means > One-Sample T Test
Reporting the results
Section 3: THE INDEPENDENT t-TEST
Example study: the memory experiment
To perform an independent t-test
SPSS output for independent groups t-test
Obtained using menu items: Compare Means > Independent-Samples T Test
Measure of effect size
Reporting the results
Creating an error bar graph: independent groups design
To obtain an error bar graph using Chart Builder
SPSS output for bar chart
Section 4: THE PAIRED t-TEST
Example study: the mental imagery experiment
To perform a paired t-test
SPSS output for paired (or related) t-test
Obtained using menu items: Compare Means > Paired-Samples T Test
Footnotes
Measure of effect size
Reporting the results
Section 5: AN INTRODUCTION TO NONPARAMETRIC TESTS OF DIFFERENCE
Section 6: THE MANN–WHITNEY TEST
Example study: sex differences and emphasis on physical attractiveness
How to do it
SPSS output for Mann–Whitney U test
Obtained using menu items: Nonparametric Tests > Legacy Dialogs > 2 Independent Samples
Reporting the results
Section 7: THE WILCOXON TEST
Example study: quality of E-FIT images
How to do it
SPSS output for Wilcoxon matched-pairs signed-ranks test
Obtained using menu items: Nonparametric Tests > Legacy Dialogs > 2 Related Samples
Reporting the results
6: Tests of correlation and bivariate regression
Section 1: AN INTRODUCTION TO TESTS OF CORRELATION
Section 2: PRODUCING A SCATTERPLOT
Example study: relationship between age and CFF
How to obtain a scatterplot using Legacy Dialogs
Producing a scatterplot with a regression line using Chart Builder
How to add a regression equation to the scatterplot
How to obtain a scatterplot using Graphboard Template Chooser
Section 3: PEARSON’S r: PARAMETRIC TEST OF CORRELATION
Example study: critical flicker frequency and age
How to perform a Pearson’s r
SPSS output for Pearson’s r
Obtained using menu item: Correlate > Bivariate
Effect sizes in correlation
Reporting the results
Section 4: SPEARMAN’S rs: NONPARAMETRIC TEST OF CORRELATION
Example study: the relationships between attractiveness, believability and confidence
How to perform Spearman’s rs
SPSS output for Spearman’s rs
Obtained using menu item: Correlate > Bivariate
Reporting the results
How to perform Kendall’s tau-b
Section 5: PARTIAL CORRELATIONS
How to perform a partial correlation
SPSS output
Obtained using: Correlate > Partial
Reporting the results
Section 6: COMPARING THE STRENGTH OF CORRELATION COEFFICIENTS
Using equations
Reporting the results
Section 7: BRIEF INTRODUCTION TO REGRESSION
Regression as a model
Section 8: BIVARIATE REGRESSION
From bivariate correlation to bivariate regression
The bivariate regression equation
Residuals
Proportion of variance explained
How to perform bivariate regression in SPSS
SPSS output
Obtained using: Analyze, Regression, Curve Estimation
Using the procedure to predict Y for new cases
7: Tests for nominal data
Section 1: NOMINAL DATA AND DICHOTOMOUS VARIABLES
Descriptives for nominal data
Entering nominal data into SPSS
Section 2: CHI-SQUARE TEST VERSUS THE CHI-SQUARE DISTRIBUTION
Section 3: THE GOODNESS OF FIT CHI-SQUARE
To perform the goodness of fit chi-square test
Section 4: THE MULTIDIMENSIONAL CHI-SQUARE
General issues for chi-square
Causal relationships
Type of data
The N*N contingency table
Rationale for chi-square test
Example study: investigating tendency towards anorexia
To perform the multidimensional chi-square test
SPSS output for chi-square without using Exact option
Obtained using menu items: Descriptive Statistics > Crosstabs
Output for first chi-square: tendency towards anorexia * employment (a variable with two levels against a variable with three levels)
Reporting the results
Output for second chi-square: tendency towards anorexia* education (two variables each with two levels)
Interpreting and reporting results from chi-square
Reporting the results
Use of exact tests in chi-square
Output for third chi-square: tendency towards anorexia* cultural background (a variable with two levels against a variable with three levels)
Using the Exact option for chi-square
Output for third chi-square (2*3) with Exact option
Reporting the results
Output for a 2*2 chi-square with Exact option
Performing a chi-square using the Weight Cases option
Section 5: THE MCNEMAR TEST FOR REPEATED MEASURES
Example study: gender and handwriting
How to perform the McNemar test
McNemar test and Crosstabs command
SPSS output for the McNemar test
Obtained using menu items: Descriptive Statistics > Crosstabs
How to obtain a bar chart using Chart Builder
Reporting the results
How to obtain a bar chart using Graphboard Template Chooser
Note about causation and the McNemar test
8: One-way analysis of variance
Section 1: AN INTRODUCTION TO ONE-WAY ANALYSIS OF VARIANCE (ANOVA)
When can we use One-way ANOVA?
How does it work?
How do we find out if the F-ratio is significant?
What about degrees of freedom?
What terms are used with One-way ANOVA?
Factors
Levels of factors
Between-subjects factors
Within-subjects factors
Main effect
How is the F-ratio calculated?
Between-subjects One-way ANOVA design
Within-subjects One-way ANOVA design
Using SPSS to calculate the F-ratio
Effect size and ANOVA
Planned and unplanned comparisons
Section 2: ONE-WAY BETWEEN-SUBJECTS ANOVA, PLANNED AND UNPLANNED COMPARISONS, AND NONPARAMETRIC EQUIVALENT
Example study: the effects of witness masking
How to do it: using General Linear Model command
SPSS output for One-way between-subjects ANOVA
Obtained using menu items: General Linear Model > Univariate
Calculating eta squared: one measure of effect size
Reporting the results
How to do it: using One-way ANOVA command
SPSS output for One-way between-subjects ANOVA
Obtained using menu items: Compare Means > One-way ANOVA
Calculating eta squared: one measure of effect size
Reporting the results
Planned and unplanned comparisons
Unplanned (post-hoc) comparisons in SPSS
Using General Linear Model command
Using One-way ANOVA command
SPSS output for post-hoc tests
Reporting the results
Planned comparisons in SPSS
Using One-way ANOVA command
SPSS output for contrasts
Obtained using menu items: Compare Means > One-way ANOVA
Reporting the results
Using General Linear Model command
The Kruskal–Wallis test
How to do it
SPSS output for Kruskal–Wallis test
Obtained by using menu items: Nonparametric Tests > K Independent Samples
Reporting the results
Section 3: ONE-WAY WITHIN-SUBJECTS ANOVA, PLANNED AND UNPLANNED COMPARISONS AND NONPARAMETRIC EQUIVALENT
Example study: the Stroop effect
Understanding the output
How to do it
SPSS output for One-way within-subjects ANOVA
Obtained using menu items: General Linear Model > Repeated Measures
Calculating eta squared: one measure of effect size
Reporting the results
Planned comparisons: more contrasts for within-subjects factor
Unplanned comparisons for within-subjects factor ANOVA
The Friedman test
How to do it
SPSS output for Friedman test
Obtained by using menu items: Nonparametric Tests > K Related Samples
Reporting the results
9: Factorial analysis of variance
Section 1: AN INTRODUCTION TO FACTORIAL ANALYSIS OF VARIANCE (ANOVA)
Different types of Factorial ANOVA?
Between-subjects ANOVA
Within-subjects ANOVA
Mixed ANOVA
Factorial ANOVAs
Main effects and interactions
Interactions and moderation
Understanding ANOVA output
Section 2: TWO-WAY BETWEEN-SUBJECTS ANOVA
Example study: the effect of defendant’s attractiveness and sex on sentencing
How to do it
SPSS output for two-way between-subjects ANOVA
Obtained using menu items: General Linear Model > Univariate
Calculating eta squared: one measure of effect size
Reporting the results
Section 3: TWO-WAY WITHIN-SUBJECTS ANOVA
Example study: the effects of two memory tasks on finger tapping performance
Labelling within-subjects factors
How to do it
How to obtain an interaction graph
SPSS output for two-way within-subjects ANOVA
Obtained using menu items: General Linear Model > Repeated Measures
Calculating eta squared: one measure of effect size
Reporting the results
Section 4: MIXED ANOVA
Example study: perceptual expertise in the own-age bias
How to do it
SPSS output for three-way mixed ANOVA
Obtained using menu items: General Linear Model > Repeated Measures
Reporting the results
10: Multiple regression
Section 1: AN INTRODUCTION TO MULTIPLE REGRESSION
From bivariate to multiple
An example
How does multiple regression relate to analysis of variance?
Causation
When should I use multiple regression?
The multiple regression equation
Regression coefficients: B (unstandardised) and beta (standardised)
R, R-squared and adjusted R-squared
Regression methods
Unique and shared variance
Simultaneous or standard method
Sequential or hierarchical method
Statistical (or stepwise) methods
Validating results from statistical regression methods
Section 2: STANDARD OR SIMULTANEOUS METHOD OF MULTIPLE REGRESSION
Example study: state anxiety
SPSS output for standard multiple regression
Obtained using menu items: Regression > Linear (method = enter)
Reporting the results
Section 3: SEQUENTIAL OR HIERARCHICAL METHOD OF MULTIPLE REGRESSION
SPSS output for sequential multiple regression
Obtained using menu items: Regression > Linear (method = enter; three blocks have been entered)
A note on sequential method and regression coefficients
Reporting the results
Section 4: STATISTICAL METHODS OF MULTIPLE REGRESSION
How to perform multiple regression using the stepwise method
SPSS output for a statistical multiple regression
Obtained using menu items: Regression > Linear (method = stepwise)
Reporting the results
11: Analysis of covariance and multivariate analysis of variance
Section 1: AN INTRODUCTION TO ANALYSIS OF COVARIANCE
What does ANCOVA do?
When should I use ANCOVA?
An example
Checklist for choosing one or more covariates
Section 2: PERFORMING ANALYSIS OF COVARIANCE IN SPSS
Example study: exposure to low levels of organophosphates
How to check for homogeneity of regression slopes
SPSS output from procedure to check for homogeneity of regression slopes
How to check for linear relationship between covariate and dependent variable
SPSS output for graph
How to perform ANCOVA
SPSS output for ANCOVA
Reporting the results
Section 3: AN INTRODUCTION TO MULTIVARIATE ANALYSIS OF VARIANCE
An example
What does MANOVA do?
Following up a significant result
When should I use MANOVA?
Checklist for using MANOVA
Section 4: PERFORMING MULTIVARIATE ANALYSIS OF VARIANCE IN SPSS
Example study: exposure to low levels of organophosphates
How to perform MANOVA
SPSS output for MANOVA
Reporting the results
A note on within-subjects designs
12: Discriminant analysis and logistic regression
Section 1: DISCRIMINANT ANALYSIS AND LOGISTIC REGRESSION
An example
Similarities and differences between discriminant analysis and logistic regression
Section 2: AN INTRODUCTION TO DISCRIMINANT ANALYSIS
An example
Two steps in discriminant analysis
Assumptions
Methods in discriminant analysis
Choosing a method to adopt
What does each method tell us?
How does each method work?
Section 3: PERFORMING DISCRIMINANT ANALYSIS IN SPSS
Example study: reconviction among offenders
To perform a simultaneous (enter method) discriminant analysis
SPSS output for discriminant analysis using enter method
Obtained using menu items: Classify > Discriminant (enter independents together)
Reporting the results
To perform a stepwise (or statistical) discriminant analysis
Using discriminant function analysis to predict group membership
Section 4: AN INTRODUCTION TO LOGISTIC REGRESSION
Section 5: PERFORMING LOGISTIC REGRESSION ON SPSS
Example study: reconviction among offenders
To perform a binary logistic regression
SPSS output for logistic regression using Enter method
Obtained using menu items: Regression > Binary Logistic
Reporting the results
Using logistic regression to predict group membership
13: Factor analysis, and reliability and dimensionality of scales
Section 1: AN INTRODUCTION TO FACTOR ANALYSIS
How this chapter is organised
How factor analysis relates to other statistical tests
Correlation and covariance
Multiple regression
Analysis of variance
Discriminant analysis and logistic regression
Correlation matrix and other matrices in factor analysis
Pearson’s r output
Correlation matrix from factor analysis output
Partial correlations from factor analysis output
Reproduced correlations and residuals from factor analysis output
When should I use factor analysis?
Usefulness/validity of a factor analysis
Terminology
Component and factor
Extraction
Communality
Eigenvalue
Scree plot
Factor loadings
Rotation
Section 2: PERFORMING A BASIC FACTOR ANALYSIS USING SPSS
Hypothetical study
How to perform the analysis
Output from factor analysis using principal component extraction and direct oblimin rotation
Obtained using menu items: Analyze > Dimension Reduction > Factor
Considering the results
Reporting the results
Section 3: OTHER ASPECTS OF FACTOR ANALYSIS
Other options from the Factor Analysis: Descriptives dialogue box
Determinant
Inverse
Other options from the Factor Analysis: Extraction dialogue box
Method
Extract options
Other options from the Factor Analysis: Rotation dialogue box
Method
Display
Factor Analysis: Options dialogue box
Coefficient Display Format
Negative and positive factor loadings
R factor analysis
Section 4: RELIABILITY ANALYSIS FOR SCALES AND QUESTIONNAIRES
Internal consistency
How to perform a reliability analysis
SPSS output for reliability analysis with Cronbach’s alpha
Obtained using menu item: Scale > Reliability Analysis (model = alpha)
Acting on the results
SPSS output for reliability analysis with split-half
Obtained using menu item: Scale > Reliability Analysis (model = split-half)
Section 5: DIMENSIONALITY OF SCALES AND QUESTIONNAIRES
To identify those items that load on a single component
Acting on the results
To assess the structure of items within a scale
Acting on the results
Important
14: Using syntax and other useful features of SPSS
Section 1: THE SYNTAX WINDOW
An example of a syntax command
The Paste button and the Syntax window
1. Duplicating actions
2. Keeping a record of your analysis and repeating an analysis
3. Describing the analysis you have undertaken
4. Tweaking the parameters of a command
The Syntax window
Basic rules of syntax
Saving syntax files
Executing syntax commands
Syntax errors
Selecting the correct data file
Section 2: SYNTAX EXAMPLES
Comparing correlation coefficients
System variables
Section 3: GETTING HELP IN SPSS
The Help button in dialogue boxes
The Help menu
What’s this?
Section 4: OPTION SETTINGS IN SPSS
Changing option settings
Some useful option settings
General tab
Syntax Editor tab
Viewer tab
Data tab
Output tab
File Locations tab
Section 5: PRINTING FROM SPSS
Printing output from the Output viewer window
Printing data and syntax files
Special output options for pivot tables
Section 6: INCORPORATING SPSS OUTPUT INTO OTHER DOCUMENTS
Exporting SPSS output
Section 7: SPSS AND EXCEL: IMPORTING AND EXPORTING DATA FILES
Import: opening an Excel file in SPSS
Export: saving an SPSS file to Excel
Appendix
Data for data handling exercises: Chapter 4, Sections 1–9
Data for data handling exercises: Chapter 4, Section 10
Data for one-sample t-test: Chapter 5, Section 2
Data for independent t-test: Chapter 5, Section 3
Data for paired t-test: Chapter 5, Section 4
Data for Mann–Whitney U test: Chapter 5, Section 6
Data for Wilcoxon matched-pairs signed-ranks test: Chapter 5, Section 7
Data for Pearson’s r correlation: Chapter 6, Section 3
Data for Spearman’s rho correlation: Chapter 6, Section 4
Data for chi-square test: Chapter 7, Section 4
Data for McNemar test: Chapter 7, Section 5
Glossary
References
Index


SPSS FOR PSYCHOLOGISTS VIRGINIA HARRISON, RICHARD KEMP, NICOLA BRACE AND ROSEMARY SNELGAR

SEVENTH EDITION

SPSS for Psychologists

SPSS for Psychologists
Seventh Edition

Virginia Harrison, Senior Lecturer in Psychology, The Open University, UK
Richard Kemp, Professor, School of Psychology, University of New South Wales, Australia
Nicola Brace, Formerly Senior Lecturer in Psychology, The Open University, UK
Rosemary Snelgar, Formerly Principal Lecturer in Psychology, University of Westminster, UK

Screen images from IBM ® SPSS ® Statistics software (“SPSS”) throughout this book are reprinted courtesy of International Business Machines Corporation, © International Business Machines Corporation. SPSS Inc. was acquired by IBM in October, 2009. IBM, the IBM logo, ibm.com, and SPSS are trademarks or registered trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at “IBM Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml.

© Virginia Harrison, Richard Kemp, Nicola Brace and Rosemary Snelgar, under exclusive licence to Macmillan Education Limited 2021 © Nicola Brace, Richard Kemp and Rosemary Snelgar 2000, 2003, 2006, 2009, 2012, 2016 All rights reserved. No reproduction, copy or transmission of this publication may be made without written permission. No portion of this publication may be reproduced, copied or transmitted save with written permission or in accordance with the provisions of the Copyright, Designs and Patents Act 1988, or under the terms of any licence permitting limited copying issued by the Copyright Licensing Agency, Saffron House, 6–10 Kirby Street, London EC1N 8TS. Any person who does any unauthorized act in relation to this publication may be liable to criminal prosecution and civil claims for damages. The authors have asserted their rights to be identified as the authors of this work in accordance with the Copyright, Designs and Patents Act 1988. This edition published 2021 by RED GLOBE PRESS Previous editions published under the imprint PALGRAVE Red Globe Press in the UK is an imprint of Macmillan Education Limited, registered in England, company number 01755588, of 4 Crinan Street, London, N1 9XW. Red Globe Press® is a registered trademark in the United States, the United Kingdom, Europe and other countries. ISBN 978-1-352-00994-1 paperback This book is printed on paper suitable for recycling and made from fully managed and sustained forest sources. Logging, pulping and manufacturing processes are expected to conform to the environmental regulations of the country of origin. A catalogue record for this book is available from the British Library. A catalog record for this book is available from the Library of Congress. The icons that appear throughout this book are printed courtesy of The Noun Project, https://thenounproject.com/

To Emily Rae – my little statistical anomaly!

The Select Cases: If dialogue box also contains useful functions you can use in constructing your selection rule. The functions are organised into groups; for example, there is a statistical group containing various functions to calculate the mean, standard deviation and related values. Click on a Function group to see the available functions. If you click on a function name, a description will appear in the box next to the functions.

Here we have constructed a complex selection rule using the Sum function. Using this selection rule only participants who have some experience of adoption, and whose responses to questions 1, 2 and 3 add up to more than 6, will be selected.


Selection methods
The Select Cases dialogue box offers a total of four methods of selecting cases (see step 3 above). These are:
1. The default, If Condition is satisfied method, is the one you will probably use most often.
2. The Random sample of cases method allows you to sample cases at random from your data file, and SPSS allows you to specify either the number or percentage of cases to be selected. This can be useful in some more advanced statistical operations.
3. The Based on time or case range method allows you to select cases either by case number or on the basis of a time or date range.
4. In the Use filter variable method, a case is selected if the value of the chosen variable is not zero (and is not missing) – this option can be useful if you have a yes/no variable coded as 1/0. Using this method, you could easily select only the ‘yes’ responses.
It is useful to note that a line of text at the bottom of the Select Cases dialogue box indicates the current selection rule. Finally, remember to reselect All cases after you have completed your analysis of the selected cases.

Reselecting all cases
The Select Cases function can be very useful, but it is important to remember that it is semi-permanent. Select Cases will stay in force until you either make some other selection or choose the All cases option in the Select Cases dialogue box.

Once you have finished working with your subset of cases, remember to select All cases so that subsequent analyses are based on your entire data set.
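If you prefer to work in syntax (introduced in Chapter 14), the Select Cases dialogue box pastes commands much like the sketch below. This is only an illustrative sketch: the filter variable name filter_$ is the one SPSS normally generates, but the assumption that a value of ‘adopted’ greater than 0 means ‘some experience of adoption’ is ours, not something stated in the example.

* Keep only cases with some experience of adoption whose responses to q1 to q3 sum to more than 6.
USE ALL.
COMPUTE filter_$ = (adopted > 0 & SUM(q1, q2, q3) > 6).
FILTER BY filter_$.
EXECUTE.

* When you have finished, switch the filter off so that all cases are analysed again.
FILTER OFF.
USE ALL.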


Section 5: RECODING VALUES
There are many occasions when you need to recode some of your data. This might be because you made an error when entering the data, but it is more likely that you will want to recode your data in light of results from a preliminary analysis, or to allow you to undertake an additional analysis. For example, early analysis of our adoption survey might reveal that we have very few participants who report experience of adoption through either ‘immediate family’ or ‘other family’. In light of this, we might decide to collapse these two categories together into one new category. We could do this manually, but for a large data set that would be time-consuming. SPSS provides the Recode command for this purpose.
SPSS offers two Recode options. We can either change the values in the existing variable, or we can create a new variable for our recoded data. These two options are called Recode into Same Variables and Recode into Different Variables, respectively. It is usually safer to recode into a different (new) variable rather than overwriting the original data – that way, if you make a mistake, you will be able to go back to the original values and try again. To recode a variable, follow the steps outlined below.

Recode into Different Variables 1. Click on the menu item Transform.

2. Choose which type of recode you want to perform – Recode into Different Variables is the safest.


3. Select the variable to be recoded and either click the arrow button or drag and drop it into the Numeric Variable -> Output Variable box.

4. Enter the name for the new variable for the recoded values. We have used the name ‘NewAdopt’. Add a label to help you remember the change.

5. Click the Change button.

6. Click here to bring up the Old and New Values dialogue box (see below).

7. Select one of the options to describe the values you want to recode. We have used the Range option and have entered the values 2 and 3 – the codes for the two groups we wish to collapse together. See below for a description of the other ways of specifying the values to be recoded.

8. Enter the new value that you want to use to replace existing value(s). In this case, we have decided to use the value 2 to represent our new combined category.

9. Click on the Add button to record this recode.

10. Next we need to specify what to do with all other values of the variable ‘adopted’.

11. We can specify how we want each value modified, or we can set all other values to missing, or we can duplicate the old values into our new variable. We have selected this last option.

12. Click on the Add button to record this choice.

13. Click on Continue to return to the previous dialogue box, then click on OK to execute the recode command. The new variable will be added to your data file.
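For reference, the Paste button in this dialogue box produces syntax along the lines of the sketch below, using the variable names from our example (‘adopted’ and ‘NewAdopt’):

* Collapse codes 2 and 3 into the single code 2 in a new variable, copying all other values unchanged.
RECODE adopted (2 THRU 3 = 2) (ELSE = COPY) INTO NewAdopt.
EXECUTE.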

Specifying the values to be recoded
In the Recode into Different Variables: Old and New Values dialogue box (see step 7 above), you are offered seven different methods of specifying the values you want to recode, and you can use a combination of these methods if required.
The Range, LOWEST through value: and the Range, value through HIGHEST: options are often very useful – for example, if you want to collapse together all categories 6 and above, you could use the Range, value through HIGHEST: option, entering the value 6 in the box. When specifying the values to be recoded, you should give careful consideration to your missing values. If, for example, you used 9 as the missing value, then recoding using the Range, value through HIGHEST: option would result in the missing observation being included in the new category, which you may not want.
The Value: option allows you to specify a single value that you want to recode. The All other values option translates as ‘and for everything I haven’t yet specified’ and allows you to tell SPSS how to recode all the values not covered by one of the previous recode instructions.
The System-missing and the System- or user-missing options are very powerful. System-missing values are rather like user-missing values (which, in Chapter 2, we simply called missing values). Both are used to indicate that there is no valid value for a variable. However, a system-missing value indicates that SPSS rather than you (the ‘user’) has declared a value non-valid – perhaps, for example, because, for this participant, it is not possible to calculate a valid value for the variable. These two options allow you to recode these two types of missing values but, before you use them, think carefully about the implications of your actions.


Note that if the value you enter into the New Value box (see step 8 above) is one that has previously been specified as a missing value, you can effectively remove a range of values from an analysis by recoding valid responses into missing values. Similarly, by clicking on the System-missing option in the New Value box, you can instruct SPSS to regard any value or range of values as system-missing from this point onwards.

When using the Recode command, think carefully about what will happen to any missing values.

Don’t forget to tell SPSS how to deal with all the other values from the original variable. The Copy old value(s) option is very useful, as it allows you to tell SPSS that any values which you haven’t otherwise specified should just be copied over into the new variable. If you forget to do this, all these unspecified values will be set as system-missing in your new variable.

Remember: the big advantage of using Recode into Different Variables (rather than Recode into Same Variables described below) is that you do not lose anything. If you make an error, the original data are still available in the old variable and you can simply try again.

Recode into Same Variables
If you are certain that you know what you are doing, and you have a backup of your data file, you might decide that you can overwrite the existing data rather than create a new variable. To do this, select Recode into Same Variables at step 2 above. From this point onwards, the procedure is very similar except that you omit steps 4 and 5 as there is no new variable to name. The results of this recode will overwrite the original data in the data file.

Conditional recode
On some occasions, you might want to recode a variable only if a particular condition is satisfied for that participant. For example, you might want to perform the recode described above, but only for the female participants. This can be achieved by using the If button, which appears in both the Recode into Different Variables and the Recode into Same Variables dialogue boxes. Follow the procedure described above up to step 13. Now follow the steps described below.

1. Click the If button. 2. Click here to indicate that you want to perform a conditional recode. 3. Select the name of the variable around which you are going to build your conditional rule, then click the arrow (or drag and drop) to move this variable into the box (as shown here).

4. Now complete your conditional statement using either your computer keyboard or the keypad, or the functions listed in the dialogue box.

5. Once you have written your conditional statement, click on Continue to return to the previous dialogue box (see below).

6. Note that your conditional rule is listed here.

7. Click OK to complete the conditional recode.


8. This variable (‘NewAdopt’) has been added to the data file by the Recode into Different Variables command. All the values have been copied from the old variable ‘adopted’ except for the values of ‘2’ or ‘3’ which have been converted to ‘2’.

9. And this variable (‘NewAdoptFam’) was the one we created using the conditional recode. In this case the recode has only been applied to female respondents. For male respondents the values are set as system-missing (represented by a dot in the cell).

The rules for constructing the conditional statement (or logical expression) are the same as in the Select If command described earlier. You can construct quite complex logical expressions by using a combination of the functions provided and the operators (add, subtract etc.) available on the calculator-style buttons. Some of the less obvious buttons are listed below:

**   Raise to the power (for example, ‘3**2’ is equivalent to 3² = 9)
<=   Less than or equal to
>=   Greater than or equal to
~=   Not equal to
&    AND
|    OR
~    NOT
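In syntax, a conditional recode is simply a RECODE wrapped in a DO IF ... END IF block, and the condition can use any of the operators listed above. The sketch below is illustrative only: it assumes that female respondents are coded 2 on the variable ‘sex’, which is not stated in the example.

* Recode 'adopted' into 'NewAdoptFam' for female respondents only (assuming females are coded 2).
DO IF (sex = 2).
RECODE adopted (2 THRU 3 = 2) (ELSE = COPY) INTO NewAdoptFam.
END IF.
EXECUTE.

* Conditions can combine the operators above, for example: (sex = 2 & (q1 + q2 + q3) >= 6) | adopted ~= 1.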


Section 6: COMPUTING NEW VARIABLES
Sometimes we need to calculate a new variable based on the values of existing variables. For example, you may have entered the response given by each participant to each question in a questionnaire. You could now use SPSS to calculate the overall score for the questionnaire or several separate scores for subscales within the questionnaire. In our fictitious survey of attitudes to adoption, we administered a 10-item questionnaire, which was made up of two subscales. We therefore need to sum the responses to the items that contribute to each of the subscales. SPSS can do this for us using the Compute Variable command.

1. Click on menu item Transform.

2. Select Compute Variable.

3. Type in the name of the new variable. We have called it ‘TransNatAdopt’, as it is a measure of attitudes to transnational adoption. The Type & Label button can be used to add labels and set the type for the new variable.

4. Enter the formula for the computation of the new variable here. You can type variable names, use the arrow button, or drag and drop them from the list on the left. Use the keyboard or the screen buttons to add the other symbols you need. You can also use the functions.

5. It is possible to perform a conditional compute.

6. Click on OK.


When entering the name of the new variable (see step 3 above), it is possible to enter a variable label to act as a reminder of what the new variable means. Do this by clicking on the Type & Label button. You can then either type in a text label, or, by selecting the Use expression as label option, you can ask SPSS to use your numeric expression as the variable label. In this case, the label would be ‘COMPUTE TransNatAdopt=(q1+q3+q4+q8+q10) / 5’.

7. This is our new variable ‘TransNatAdopt’ – the average of the scores on items 1, 3, 4, 8 and 10.

8. Note that SPSS has put a dot in this cell to represent a system-missing value. SPSS cannot calculate a valid value (see step 9).

9. This participant did not answer question 1, so we entered a missing value (9). As a result, SPSS can’t calculate a value for our new variable for this participant.
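The equivalent syntax uses exactly the expression shown in the variable label above; pasting the command from the dialogue box produces something very similar to this minimal sketch:

* Mean of items 1, 3, 4, 8 and 10 as a measure of attitudes to transnational adoption.
COMPUTE TransNatAdopt = (q1 + q3 + q4 + q8 + q10) / 5.
EXECUTE.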


Compute and Missing Values
When using Compute Variable, it is important to think carefully about missing values. SPSS will not be able to compute the value of the new variable if any of the values for the variables involved in the compute statement are missing. In the example above, participant 15 had not answered question 1 (q1) and we had entered a missing value (9) in this cell of the data table. As a result, SPSS is unable to compute a value for the new variable ‘TransNatAdopt’ for this participant. With complex compute statements involving lots of variables, this can be a major problem. One way round this is to make use of some of the functions built into SPSS, such as Mean, which make automatic allowances for missing values. This is illustrated below.

1. Select the Statistical function group. 2. Then select the required function (Mean in this case).

3. Click here to use the function.

4. Some useful information about this function, including how it handles missing values.

5. Note that for this function the arguments (variable names) must be enclosed in brackets and separated by commas.

6. Click OK.

7. This new variable has a valid value for this participant despite the missing value for q1. The computed value is the mean of the non-missing values.


It is important to consider whether the new values computed using functions in this way are legitimate. For example, if you use the Sum function, then in cases where one or more values are missing, the new value will be calculated based only on the non-missing values. This may or may not be what you want. Before undertaking a Compute Variable command, think carefully about what will happen in cases with missing values.
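In syntax, the Mean function version of the computation looks like the sketch below. The optional ‘.n’ suffix shown in the second command, which requires a minimum number of valid values before a result is returned, is a standard SPSS facility we add here as a safeguard; it is not part of the worked example above.

* Mean of the five items, calculated from whichever values are not missing.
COMPUTE TransNatAdopt = MEAN(q1, q3, q4, q8, q10).
* Alternative: require at least 4 of the 5 items to be valid, otherwise the result is system-missing.
COMPUTE TransNatAdopt = MEAN.4(q1, q3, q4, q8, q10).
EXECUTE.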

Section 7: COUNTING VALUES
Sometimes it is useful to be able to count for each participant how many times a particular value occurs over a range of variables. If, as in our example data set, you have a series of variables that represent the responses to questionnaire items, you might want to find out how many times each participant has answered ‘Strongly Agree’. You could do this by asking SPSS to count the number of times the value 1 (the value used to code the response ‘Strongly Agree’) has occurred in variables ‘q1’ to ‘q10’. Using Count Values within Cases, SPSS will create a new variable that will contain a value representing the number of times the value 1 occurs in variables ‘q1’ to ‘q10’.

1. Click on menu item Transform. 2. Select Count Values within Cases.

3. Enter a name for the new variable (the Target Variable). You can also add a variable label to remind you what this variable means.

4. Select the variable(s) that you want to be included in the count (in this case, ‘q1’ to ‘q10’), then either click the arrow button or drag and drop to move the variable(s) into the Numeric Variables box.

5. Now click the Define Values button to specify the value(s) you want to be counted.

When selecting more than one variable – as in step 4 above – you can select them all in one go by clicking on the first and then holding down the Shift key while you click on the last of the variables. You can then use the mouse to drag and drop the variables as a block.

6. Enter the value (or values) you want to be counted. The options are the same as for Recode (see Section 5). 7. Click on the Add button to include the values you specified in step 6. 8. Click on Continue to return to the previous dialogue box, then click on OK to execute the command.


9. This is the new variable. We can see that participant 1 never responded ‘Strongly Agree’, while participant 2 used this response a total of 3 times across variables ‘q1’ to ‘q10’.

Conditional Count
It is possible to perform a conditional count – where we only count the occurrences of a value(s) for participants who satisfy some particular criterion. This is done by clicking on the If button after step 5 above. This will bring up a dialogue box almost identical to the one we used for the conditional recode described in Section 5. You can now specify your conditional rule and then click on the Continue button.
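In syntax the Count command looks like the sketch below; the target variable name ‘StrongAgree’ is our own choice, and the conditional version assumes a grouping variable named ‘sex’ purely for illustration.

* Count how many times the value 1 ('Strongly Agree') occurs across q1 to q10.
COUNT StrongAgree = q1 TO q10 (1).
EXECUTE.

* Conditional count: only carried out for cases satisfying the DO IF condition.
DO IF (sex = 1).
COUNT StrongAgree = q1 TO q10 (1).
END IF.
EXECUTE.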

Section 8: RANKING CASES
Sometimes it is useful to convert interval or ratio scores (scale data) into ordinal scores. We might, for example, want to convert the variable ‘TransNatAdopt’, which we calculated using Compute in Section 6, into a rank score. That is, we might want to rank all our participants on the basis of their score on this variable. The participant who had the highest overall ‘TransNatAdopt’ score would be given a rank of 1, the next highest a rank of 2 and so on. The Rank Cases command calculates the ranks for us and generates a new variable to contain the ranks. We can rank in either ascending or descending order, and can even rank on the basis of more than one variable.


1. Click on the menu item Transform.

2. Select Rank Cases.

3. Select the name of the variable you want to rank and move it into the Variable(s): box.

4a. Several different types of ranking are available – but these are unlikely to be of interest to new users.

4b. Click here to determine how the ranking process will deal with tied values (see Rank Cases: Ties dialogue box on next page).

6. Choose one of these two options; Smallest value corresponds to an ascending order.

5. It is possible to request that values are ranked within some other grouping variable (see text on ‘Ranking within categories’ at the end of this section).


7. You will probably want tied values represented by the Mean rank (see text below for a discussion of the alternatives). 8. Click on the Continue button to return to the previous dialogue box. Then click OK to perform the ranking.

9. This is the new variable containing each participant’s rank on the variable ‘TransNatAdopt’. SPSS has created a new variable name by prefixing the first few characters of the old name with the letter ‘R’ (for rank). You should amend this or add a variable label to remind you what it means.

10. Participants 12 and 13 both had a score of 3.0 on ‘TransNatAdopt’ so have been given the mean rank of 9.5.

11. Note that, if the value of the original variable is missing, then SPSS will assign a systemmissing value to the rank.

Ranking tied values
SPSS provides four alternative methods of dealing with tied values:
1. The default, Mean method, gives the tied values the mean of the available ranks. You can see this option in operation above where participants 12 and 13 both scored 3.0 on ‘TransNatAdopt’ and were given a rank of 9.5 – the mean of ranks 9 and 10. This is the ranking method described in most introductory statistics books.
2. The Low option assigns the tied participants the lowest of the available ranks – so in this case both would have been ranked 9.
3. The High option would award both participants the highest of the available ranks – 10 in this case.
4. The Sequential ranks to unique values option would assign a rank of 10 to both participants 12 and 13, but would then assign a rank of 11 to the next highest-scoring participant, thus ensuring that all the sequential ranks are awarded – this means that the highest rank will not be equal to the number of valid cases (as it would for the other three methods).

Ranking within categories
By specifying a second variable in the By: box (see step 5 above), it is possible to request SPSS to rank the scores on the first variable within categories formed by the second variable. For example, if we specified the variable ‘sex’ in this box, then SPSS would first rank all the male participants and then rank all the female participants. Thus, in this case, we would have two participants (one male and one female) with each rank.
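The Rank Cases dialogue box pastes a RANK command; a sketch of the two variants described above is shown below. We use descending order, (D), so that the highest ‘TransNatAdopt’ score receives rank 1, as in the description at the start of this section.

* Rank all cases on TransNatAdopt; tied values receive the mean of the available ranks.
RANK VARIABLES=TransNatAdopt (D)
  /RANK
  /TIES=MEAN
  /PRINT=YES.

* Rank within categories: males and females are ranked separately.
RANK VARIABLES=TransNatAdopt (D) BY sex
  /RANK
  /TIES=MEAN.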

Section 9: DATA TRANSFORMATION
Data transformation involves applying a mathematical function to every value in a data set to systematically change the values. At its simplest, data transformation could involve adding a constant, or converting from a raw score into a percentage score. For example, we might give students a simple statistics test scored out of 11. We could then transform these raw scores into percentage values by applying the formula PercentScore = (RawScore/11)*100.
One of the most common data transformations used in psychology is the log transformation. Log transformations (or log transforms) involve replacing every value in a data set with the log of the value. You will remember from your high school mathematics classes that the logarithm of a value is the exponent to which the base must be raised to produce that number. Common logs (written as log10) use a base of 10, while natural logs (loge, sometimes written as ln) use the mathematical constant e (approximately 2.71828) as their base. For example, because 10² = 100, the common log of 100 is 2. This is written as log10(100) = 2. The natural log of 100 is approximately 4.605. That is, loge(100) = 4.605, and e^4.605 = 100.
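Either kind of transformation can be carried out with the Compute Variable command described in Section 6. As a simple first example, the percentage conversion described above could be written in syntax as follows (variable names taken from the formula in the text):

* Convert a raw test score out of 11 into a percentage.
COMPUTE PercentScore = (RawScore / 11) * 100.
EXECUTE.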


The table below shows the relationship between raw values (x), and the log10 and loge of these values.

x          1       10      100     1000    10000    100000    1000000
Log10 (x)  0       1       2       3       4        5         6
Loge (x)   0       2.30    4.61    6.91    9.21     11.51     13.82

Looking at this table, we can see one useful characteristic of logs. With each order of magnitude increase in the value x, the log of this value increases by a much smaller constant amount (in the case of common logs increasing by just 1). We can use this characteristic of logs to our advantage to rescale certain types of data. Often data are not normally distributed, showing either a positive or a negative skew. For example, a data set in which most values are close to the mean value, with a few much higher values, is said to have a ‘positive skew’ and when plotted in a histogram will appear asymmetric around the mean, with a longer tail to the right than the left. Skewed data such as these can invalidate certain inferential statistical tests. A possible solution is to undertake a log transform of the data by calculating the log of each raw value. This is often enough to reduce the skew in the data, allowing us to proceed with our analysis using the transformed data. We demonstrate this process below.

Log transformation of decision latency data
To illustrate the use of log transformations, we will work with a file containing decision latency data from 129 trials in which participants were required to decide whether two photographs were of the same or different people. The time taken to reach this decision was recorded. Time-based data such as these are often positively skewed because there is no upper limit to how long participants can take to respond. Participants need a finite minimum amount of time to respond to a stimulus, but they could lose concentration and take seconds, minutes or even hours to respond. As a result, decision time data typically have a strong positive skew.
To follow this example, download the data file called ‘LogTransformDemo.sav’ from macmillanihe.com/harrison-spss-7e. First, we will use the Explore command to check whether the raw data are normally distributed.

1. Select Analyze. 2. Select Descriptive Statistics. 3. Then select Explore.

4. Move the variable ‘DecisionLatency’ into the Dependent List box. 5. Click on Plots.

6. Select Histogram. 7. Select Normality plots with tests. 8. Click Continue, then OK. The output is shown below.

This table reports the Skewness statistic. The value of 0.618 tells us that the data are positively skewed and there is a moderate degree of skew (a value greater than 1 indicates a high degree of skew, a value of zero indicates no skew).

This table reports two tests of the ‘normality’ of the distribution (the Kolmogorov–Smirnov and Shapiro–Wilk tests). Both are significant (Sig value less than .05), indicating that this distribution is significantly different from a normal distribution.
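If you run this check from syntax, the Explore dialogue box pastes an EXAMINE command; a minimal sketch of the request used here is shown below (variable name taken from the data file described above):

* Descriptives, histogram and normality tests for the raw decision latencies.
EXAMINE VARIABLES=DecisionLatency
  /PLOT HISTOGRAM NPPLOT
  /STATISTICS DESCRIPTIVES
  /MISSING LISTWISE.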


The problem is evident from this histogram. The positive skew is seen as a longer tail to the right than to the left of the distribution. We need to transform these data. This process is described next.

1. Select Transform, then Compute Variable.

2. Give the new variable a suitable name – choose one which indicates this is a log transform of the raw data. 3. Select the Arithmetic function group.

4. Select either Lg10 for common log or Ln for natural log (it doesn’t matter which you select), and move this into the Numeric Expression box.

5. Edit the numeric expression (replace the question mark with the name of the variable to be transformed).

6. Click OK to compute the log transformed variable.

The new variable is added to the data file. To illustrate this process, we have computed two new variables: the first is a log10 transform, the other a loge transform. Note that these two transformations have different values, but as we will see shortly, they produce similar-shaped distributions.
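The same two transformed variables can also be created directly in syntax using the Lg10 and Ln functions; the new variable names below are our own choice:

* Common (base 10) log transform of the decision latencies.
COMPUTE LatencyLog10 = LG10(DecisionLatency).
* Natural log transform of the same variable.
COMPUTE LatencyLn = LN(DecisionLatency).
EXECUTE.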


Now repeat the Explore analysis described above. This time include both the original variable and the transformed variable(s).

Repeat the Explore analysis with the new transformed variable(s) included in the Dependent List.

Sections of the output of the Explore analysis are shown below.

The Skewness value of the original variable was 0.618, indicating a moderate degree of positive skew. Compare this to...

… the Skewness value of the transformed variable. The value is now 0.225, indicating a greatly reduced degree of skew.


The Kolmogorov–Smirnov and Shapiro–Wilk tests show no significant deviation from normality for the two transformed variables.

This is the histogram of the raw decision latency data we saw earlier. Note the positive skew.

This is the histogram of the data after a log10 transformation. It is apparent that the distribution is now much more symmetric around the mean.


Section 10: DATA FILE FOR SCALES OR QUESTIONNAIRES
In this section, we demonstrate how SPSS can be used to help you handle data obtained using scales or questionnaires. We describe a simple data check, and how to recode responses from reversed items. Checking the reliability and dimensionality of a scale is covered in Chapter 13, Sections 4 and 5.
We have entered data obtained with Larsen’s (1995) Attitudes Toward Recycling (ATR) scale, used for a research methods exercise with first-year psychology students at the University of Westminster. There are 20 items, used in the order they are printed in Larsen (1995, Table 1). Two changes were made, as Larsen developed his scale in the USA: ‘styrofoam’ was replaced with ‘polystyrene’ and ‘sorting garbage’ was replaced with ‘sorting rubbish into different containers’. We used a Likert-type scale with responses from 1 (strongly agree) to 5 (strongly disagree). The data file ‘ScaleV1.sav’ (available from the Appendix or macmillanihe.com/harrison-spss-7e) includes data from 50 cases, which we will use to demonstrate some issues around the use of scales in psychology. Normally one would need many more cases.
Open the data file and note that:
1. ‘Qnum’ (Questionnaire Number) is the identification number we wrote on the paper copies of the questionnaires as we entered the data. We have included it so we can check our data file against our paper records, even if the order of cases in the data file is changed by sorting the file (see Section 2).
2. The responses to each item have been entered separately: ‘q_a’ to ‘q_t’. Students often prefer to calculate the total or mean score by hand and then enter these values only. However, it is better practice to enter the response to each item. This allows us to undertake various checks on the data before using Compute to calculate the total or mean response for each participant.
3. In this data set we have used 9 as the missing value for each of the scale items.

A simple check on data entry
Using the process described in Chapter 3, Section 6, we will check the data using the Descriptives command. Follow the steps below.


Click Analyze ⇒ Descriptive Statistics ⇒ Descriptives.

Move the variables for item q_a through to q_t into the Variable(s) box. Click OK.
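The same check in syntax is a one-line DESCRIPTIVES command over the item variables named above:

* Minimum and maximum values are a quick way to spot out-of-range codes.
DESCRIPTIVES VARIABLES=q_a TO q_t
  /STATISTICS=MEAN STDDEV MIN MAX.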

This is a section of the output. Note that the maximum value for the variable q_c is 6 but all these variables were coded 1 to 5 so this must be an error.


We now need to find and correct this error in the variable q_c.

Click here on the header for the variable q_c to select the column.

Click Edit ⇒ Find.

Enter the value we want to find (6).

Click Find Next.

SPSS finds any instances of the value we have searched for and highlights them – like this. We can now replace this with the correct value. This should have been ‘5’.

Now save the corrected data file. Call it ‘ScaleV2.sav’, so that you can use it in the next exercise.

Reversals
Good scales or questionnaires often include items that are reverse coded to avoid a participant response bias. We can use SPSS to reverse the coding of these items. First consider what you want a high score to indicate. For example, consider these two items from a library satisfaction questionnaire:
1. The university library is an excellent place to make notes for coursework.
2. I find it very difficult to study in the university library.
With responses on a scale of 1 (strongly agree) to 5 (strongly disagree), an individual with a strong positive view of the library should respond in opposite directions to those two items. Do you want your final score to represent overall satisfaction with the library or overall dissatisfaction? This will determine which items you reverse.


This type of data can be scored in either direction. Use the variable label to keep a note of the direction of the scoring.

In the case of the ‘ScaleV2.sav’ data file, we want a high score to indicate that the participant has positive attitudes towards recycling and related environmental issues, so items b, g, h, i, k, l, m, p, q and r need to be reversed. We can do this using the Recode command. Normally it is safer to use Recode into Different Variables because the original variables are preserved in case we make a mistake. However, when we need to recode lots of variables in the same way, as here, it is much quicker to use Recode into Same Variables, so that is what we will do here. Remember to save a backup copy of the data file before making these changes in case of errors.

Click Transform ⇒ Recode into Same Variables.

Move the variables to be recoded into this box. Click on the Old and New Values button.

Enter the Old and New values. Note that in addition to reversing the scale we have also recoded System- or user-missing values into system-missing values.

Click here to add the last recode.

Click Continue then OK.
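A sketch of the equivalent syntax for reversing these items in place (item names as listed above) is:

* Reverse-code items b, g, h, i, k, l, m, p, q and r of the ATR scale.
* User-missing values are converted to system-missing, as in the dialogue box settings above.
RECODE q_b q_g q_h q_i q_k q_l q_m q_p q_q q_r
  (1=5) (2=4) (3=3) (4=2) (5=1) (MISSING=SYSMIS).
EXECUTE.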

The relevant variables should now be reverse coded. The first few lines of the data file should look like this.

Resave the data file as ‘ScaleV3.sav’. The file is now ready to use in other exercises on assessing the reliability and dimensionality of the scale (Chapter 13, Sections 4 and 5).

Summary
■■ This chapter demonstrated a series of useful commands that allow us to manipulate the contents of the data file.
■■ Using these commands, it is possible to change the coding of variables, to compute new variables, or to select only certain cases for subsequent analysis.
■■ These commands are particularly valuable when managing large data sets such as those produced in survey research.
■■ The compute command can also be used to transform data so these meet the assumptions of inferential tests. One of the most common transformations, the log transform, was demonstrated, and we saw how this normalised a set of previously skewed data.
■■ Section 10 illustrated the use of some of these commands in a survey study.


CHAPTER 5

Tests of difference for one- and two-sample designs

In this chapter:
●● An introduction to the t-tests
●● The one-sample t-test
●● The independent t-test
●● The paired t-test
●● An introduction to nonparametric tests of difference
●● The Mann–Whitney test
●● The Wilcoxon test

SPSS for Psychologists online Visit macmillanihe.com/harrison-spss-7e for data sets, online tutorials and exercises.

Section 1: AN INTRODUCTION TO THE t-TEST

■■ The t-test is used to determine whether two means are significantly different from one another. There are three types of t-test:
●● The one-sample t-test, which is the simplest, determines whether the observed mean is different from a particular value (such as a population mean).
●● The independent t-test is used when comparing means from two independent groups of individuals.
●● The paired t-test is used when comparing the means of two sets of observations from the same individuals (e.g. repeated measures design) or from pairs of individuals (e.g. when using a matched-subjects design).
■■ All forms of the t-test are parametric tests and make certain assumptions about the data: that they are measured at interval or ratio level, meet the assumption of homogeneity of variance and are drawn from a population that has a normal distribution (see Chapter 1, Section 2).
■■ When reporting descriptive statistics to accompany the results of the t-test, you should give the mean and standard deviation as the measures of central tendency and dispersion.
■■ In some textbooks you might find this test referred to as the Student's t-test. This is because William Gossett, who devised the test, worked for the Guinness Brewing Company, which did not permit him to publish under his own name, so he wrote under the pseudonym of 'Student'.

Section 2: THE ONE-SAMPLE t-TEST We mentioned in Section 1 that the t-test is used to determine whether two means are significantly different from one another. The one-sample t-test allows us to compare the sample mean against a specific score. For example, we might want to see whether the mean from a group of participants differs significantly from chance performance, or from some other predefined value, such as the population mean IQ score, or an expected value based on previous research. This test should be used when the data meet the assumptions for a parametric test.

Example study: assessing memory

For the purposes of demonstrating how to perform the one-sample t-test, we will use fictitious data from a made-up study. An enthusiastic teacher, who also happens to be a psychology graduate, noticed that some of the children in her final-year primary school class consistently forgot to bring in their sports kit, their lunch box or their homework, and frequently forgot the instructions given to them in class. She wondered if this was because they were generally more forgetful due to poorer memory skills. She therefore asked permission to administer a standardised test of long-term memory, which had a published norm of 75% for children of this age group. She performed the one-sample t-test on the data she collected to test the hypothesis that the mean of her sample differed from the expected value of 75%. The results showed that her sample mean was significantly lower than the expected value. If you use this fictitious data set and follow the instructions given next, you will be able to compare the output you produce with the annotated output we give at the end of this section. (These data are available in the Appendix or from macmillanihe.com/harrison-spss-7e.)


To perform a one-sample t-test 1. Click on the word Analyze. 2. Click on Compare Means.

3. Click on One-Sample T Test.

4. Click on the DV and move it to the Test Variable(s) box, by clicking on the arrow button.

5. Enter the expected value in the Test Value box.

6. Click on Options.

95% is the default setting for the Confidence Interval Percentage; other common choices are 90% and 99% (see tip box below). 7. Click on Continue.

8. Finally, click on OK in the One-Sample T Test dialogue box. The output for the t-test will appear in the Output window, and we show this with annotations below. We also show you how you would describe the results of the test, were you to write a report on this study.

Confidence intervals describe the limits within which the population mean is likely to fall (see Chapter 1, Section 1). The default, 95%, indicates that there is a 95% probability of the population mean falling between the limits shown in the output. SPSS allows us to make these confidence intervals narrower or wider. For example, if we are conducting a study with important clinical implications, where we want to estimate the population mean with a higher degree of precision, we might choose 99% confidence intervals.
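The same analysis can also be run from a syntax window. A minimal sketch is shown below; the variable name memory is just a placeholder for whatever your memory-score variable is called in the data file.

    * One-sample t-test comparing the sample mean against the published norm of 75.
    T-TEST
      /TESTVAL=75
      /VARIABLES=memory
      /CRITERIA=CI(.95).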

SPSS output for one-sample t-test
Obtained using menu items: Compare Means > One-Sample T Test

Useful descriptive statistics showing the number of children and their mean score (M) and standard deviation (SD) on the memory test.

So, t = 6.27. Note that we ignore the minus sign. The degrees of freedom (df) = 19. Sig. (2-tailed) is the p-value for a two-tailed test. This needs to be smaller than .05 for the result to be significant. While this value is displayed as .000, a p-value can never equal zero. SPSS rounds p-values to three decimal places, so here p must be less than .0005 (see tip box below on reporting p-values).




Reporting the results



In a report you might write: Performance on a memory test by a group of forgetful children was on average lower (M = 67.9%, SD = 5.07) than the expected value of 75%. A one-sample t-test showed that this difference was significant (t = 6.27, df = 19, p < .001).

Guidance provided by the American Psychological Association recommends that we report statistics such as t to two decimal places, and p-values less than .001 as p < .001. If p-values are greater than .001, the exact value should be reported up to two or three decimal places. Note: A leading zero is not used when reporting values that can never be greater than 1, such as p-values.

Section 3: THE INDEPENDENT t-TEST The independent t-test compares the performance of the participants in one group with the performance of the participants in a different group. This test should be used when the data meet the assumptions for a parametric test and are obtained using an independent groups design. These two groups could constitute a male and a female group if you wanted to examine sex differences, or they could constitute two groups of participants who undergo different drug conditions: one a low-dose drug condition and one a high-dose drug condition. This type of t-test is often also called an unrelated t-test.

Example study: the memory experiment In the example shown next, we use the data from the memory experiment used in the data entry exercise in Chapter 2, Section 6. It was hypothesised that the group receiving mnemonic instructions would remember more words than the group who did not receive any specific mnemonic instructions. As this hypothesis is directional, it is a one-tailed hypothesis. If you use these data and follow the instructions given next, you will be able to compare the output you produce with the annotated output we give at the end of this section.

To perform an independent t-test 1. Click on the word Analyze. 2. Click on Compare Means.

3. Click on IndependentSamples T Test.

4. You will now be presented with the Independent-Samples T Test dialogue box (see below). As is typical in SPSS, the box on the left lists all the variables in your data file. Click on the name of the dependent variable (DV) in your analysis and then click on the arrow button to move this variable name into the box marked Test Variable(s). 5. Now click on the name of the independent variable (IV) and then click on the arrow button to move this into the box marked Grouping Variable. Once you have entered the dependent and independent variables into their appropriate boxes, the dialogue box will look like this. 4. Click on the DV and move to Test Variable(s) box.

5. Click on the IV and move to Grouping Variable box.

6. Click on the Define Groups button – this will bring up the Define Groups dialogue box.


6. Click on the Define Groups button to bring up the Define Groups dialogue box (see below). This dialogue box is used to specify which two groups you are comparing. For example, if your independent variable is ‘sex’, which you have coded as 1 = Male and 2 = Female, then you need to enter the values 1 and 2 into the boxes marked Group 1 and Group 2, respectively. This might seem rather pointless, but you might not always be comparing groups that you had coded as 1 and 2. For example, you might want to compare two groups who were defined on the basis of their religious belief (Atheists and Christians, who could be coded as 0 and 2, respectively – see Chapter 2, Section 2, on value labels). In this case, we would enter the values 0 and 2 into the two boxes in this dialogue box. (We will not be describing the use of the Cut point option here.)

7. Once these two values have been entered, press the Continue button to return to the IndependentSamples T Test dialogue box.

6a. In this box, enter the value used to code the first of the two groups you wish to compare. For this data set, we used ‘1’.

6b. Then enter the value used to code the second group in this box. For this data set, we used ‘2’.

7. Clicking on the Continue button in the Define Groups dialogue box will return you to the Independent-Samples T Test dialogue box. You will see that your two values have been entered into the brackets following the name of your independent variable (you may have noticed that previously there were question marks inside these brackets). 8. Finally, click on OK in the Independent-Samples T Test dialogue box. The output of the t-test will appear in the Output window. The output from this independent t-test is shown, with annotations, below. We also show you how you would describe the results of the test, were you to write a report on this experiment.
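If you would rather run the analysis from syntax, something like the sketch below should reproduce these steps. The names condition and words are placeholders for the grouping variable and the dependent variable in the memory experiment data file, and the values 1 and 2 are the group codes entered in the Define Groups dialogue box.

    * Independent t-test comparing the two groups coded 1 and 2 on the DV.
    T-TEST GROUPS=condition(1 2)
      /VARIABLES=words
      /CRITERIA=CI(.95).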

SPSS output for independent groups t-test
Obtained using menu items: Compare Means > Independent-Samples T Test

T-Test

Useful descriptive statistics showing that those in the Mnemonic condition remembered the most words. You can calculate the effect size from these descriptives (see next page).

These lower and upper confidence limits enclose the confidence interval. There is a 95% probability that the difference between the means for the population will fall between 0.68 and 6.57. Note that 0 is not within those limits: remember that this indicates a significant difference (Chapter 1, Section 1).

If Levene’s p > .05, then there is equality of variance, use the top row of values for t. If Levene’s p < .05, then there is not equality of variance, use the bottom row of values for t. See tip box below.

This is the p-value for twotailed tests, but this hypothesis was one-tailed, so divide by 2.

For these results, there is equality of variance, so t = 2.58, df = 19, p = .009, one-tailed.

Equality (or at least similarity) of variance is one of the requirements for using parametric statistical tests. SPSS, however, carries out two versions of the independent groups t-test: the top row for when there is equality of variance and the bottom row for when the variances are unequal. If you use the latter in a report, you must note that fact.

Measure of effect size

The output for the independent groups t-test does not include an estimate of the size of the effect, and this is not an option on SPSS that you can select. However, from the output you can calculate this, and we show you how to do so next. Cohen's d (see Chapter 1, Section 1) is a measure of effect size that is frequently reported in journal articles when reporting t-tests (Fritz, Morris and Richler, 2012). Essentially, this involves working out the difference between the two means from each condition and dividing this by the two standard deviations (SD) combined. You use the following formula to calculate d when the number of participants in each condition (N) is identical or similar. Consult a statistics text on an alternative way of combining standard deviations if there is a large discrepancy in N in your data.

d = (x1 – x2) / mean SD


This formula requires you to carry out the following steps:
1. Look at the SPSS output and identify the mean and standard deviation for each condition, provided in a table called Group Statistics. For the data set used in this section: the mean for the mnemonic condition, x1, = 17.73 and the SD = 2.87; the mean for the no mnemonic condition, x2, = 14.10 and the SD = 3.57.
2. Take the mean of one condition from the mean of the other condition (it is not important which mean to take from which, so you can ignore the sign). For the data set in this section: 17.73 – 14.10 = 3.63.
3. Find the mean SD by adding the SD for each condition together and dividing by two: (SD of condition 1 + SD of condition 2) / 2. For the data set in this section: (2.87 + 3.57) / 2 = 3.22.
4. Use the formula to calculate d. For the data set in this section: d = (17.73 – 14.10) / 3.22 = 3.63 / 3.22 = 1.13.

This would be considered to be a large effect size. Cohen (1988) provided the following guidance on how to interpret d:
■■ Small effect size: 0.2 (or more)
■■ Moderate effect size: 0.5 (or more)
■■ Large effect size: 0.8 (or more).



Reporting the results



In a report you might write: More words were recalled in the Mnemonic condition (M = 17.73 words) than in the Non-mnemonic condition (M = 14.10 words). An independent t-test showed that the difference between conditions was significant, and the size of this effect was large (t = 2.58, df = 19, p = .009, one-tailed, d = 1.13).

It would be helpful in your report to include more than just the descriptive statistics that are automatically generated as part of the independent t-test output. In Chapter 3, Section 5 we showed you how to use the Explore command, and the output using this command with the data from the memory experiment is included at the end of that section. This output provides you with the 95% confidence interval for the mean of each condition. Our estimate of the population mean for the Mnemonic condition (based on our sample) is 17.73, and the confidence intervals tell us that there is a 95% probability that the true value will fall between 15.80 and 19.65. Our estimate of the population mean for the Non-mnemonic condition is 14.10, and the confidence intervals tell us that there is a 95% probability that the true value will fall between 11.54 and 16.66. A useful way of presenting this information in your report is to show it graphically using an error bar graph. Next, we show you how to create such a graph using SPSS.

Creating an error bar graph: independent groups design

To obtain an error bar graph using Chart Builder
1. On the menu bar, click on Graphs.
2. Click on Chart Builder.
3. Close the top Chart Builder dialogue box, which reminds you that measurement level should be set properly for each variable in your chart (see Chapter 3, Section 8). The Chart Builder dialogue box below will be visible.

A range of bar graphs are shown in the bottom part of the box. To see other types of graph, select from the left-hand list.

4. This is a simple error bar graph. Click and drag it to the box above.


We are showing you how to create an error bar graph where the error bars are selected to represent the 95% confidence intervals. The confidence interval tells us the limits within which the population mean (the true value of the mean) is likely to fall. SPSS allows alternative options where the bars represent a measure of dispersion such as the standard deviation (see below).

5. Drag the IV into the blue X-Axis? box and the DV into the blue Y-Axis? box as we have done here.

7. Click on OK. The error bar chart will appear at the end of your SPSS output.

6. The Element Properties dialogue box appears to the right of the Chart Builder dialogue box. By default, the Statistic option is set to the mean – leave it on this setting. Check that Confidence intervals is selected and the Level (%) is set to 95. (Alternatively, you could set the bars to represent SE or SD. However, if using SD, the normal multiplier is 1 rather than the SPSS default of 2.)
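The equivalent legacy syntax for this chart is short. In the sketch below, words and condition are again placeholder names for the dependent and grouping variables, and CI 95 asks for bars representing the 95% confidence interval.

    * Simple error bar chart: mean of the DV for each group, with 95% CI bars.
    GRAPH
      /ERRORBAR(CI 95)=words BY condition.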

SPSS output for error bar chart

There is some overlap between the error bars for the two groups, although the extent of the overlap is quite small. If the sample means of two groups are so different as to suggest that they came from different populations – that is, the bars do not overlap – then we can infer that our experimental manipulation brought about a difference between the samples. So, little or no overlap is indicative of a significant difference between groups.

Section 4: THE PAIRED t-TEST In the repeated measures design, data are collected from each participant in all conditions (or levels) of the independent variable. For example, we might compare participant 1’s memory performance under noisy conditions with participant 1’s memory performance under quiet conditions. In this situation, it is likely that the data from participants will be correlated; for example, if participant A has a good memory, then their scores on a memory test will be high regardless of condition. It is for this reason that a repeated measures t-test is sometimes called a correlated t-test. With a repeated measures design, it is essential that the data are kept in the correct order, so that participant 1’s data on variable A are indeed compared with participant 1’s data on variable B. The test itself considers pairs of data together, and for this reason this test is also known as a paired t-test.


Example study: the mental imagery experiment

To demonstrate the use of the paired t-test, we are going to analyse the data from the mental imagery experiment, the second data entry exercise in Chapter 2, Section 6. It was hypothesised that, as participants would compare their mental images of the two animals to determine which was the larger, their decision times for the small-size-difference trials would be longer than for the large-size-difference trials. A paired t-test is conducted to test this hypothesis.

To perform a paired t-test 1. Click on the word Analyze.

2. Click on Compare Means.

3. Click on the words PairedSamples T Test.

4. You will now see the Paired-Samples T Test dialogue box (see below). You need to choose the names of the two variables you want to compare. As before, all the variables in your data file are listed in the left-hand box. Click on each of the two variables you want to compare. These variable names will now be highlighted.

4. Click on the names of the two variables you wish to compare; ‘large size difference’ and ‘small size difference’.

5. Now click on this arrow button to move the selected variables into the box marked Paired Variables.


This box will show the names of the selected variables once you’ve moved them across; more than one pair can be selected.

6. Click on OK to run the analysis.

SPSS will perform the paired t-test. The annotated output is shown below.
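As with the other t-tests, the paired analysis can also be pasted or typed as syntax. In the sketch below, large and small stand for whatever the two decision-time variables are called in your data file.

    * Paired t-test comparing the two repeated measures conditions.
    T-TEST PAIRS=large WITH small (PAIRED)
      /CRITERIA=CI(.95).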

SPSS output for paired (or related) t-test
Obtained using menu items: Compare Means > Paired-Samples T Test

You can do several paired t-tests at once, so SPSS labels each pair. Next, it displays the variable labels, if you entered them; if not, it displays the variable names. Useful descriptives showing that the mean decision time was fastest for large-difference trials. It is worth noting the much larger SD for the 'small size difference' condition. A correlation analysis is also carried out (see Footnote 1). Note the upper and lower Confidence Interval values do not span zero (see Chapter 1, Section 1).

The p-value (see Footnote 3). Degrees of freedom. t = 4.46 (see Footnote 2).


Footnotes 1. SPSS performs a Pearson’s correlation (see Chapter 6, Section 3), but you can ignore it if you only want a t-test. A significant positive correlation tells you that participants who were fast on large-size-difference trials were also fast on small-size-­ difference trials. It does not mean that the scores are significantly different. 2. The minus sign tells you that the mean value for the first variable name in the Paired Variables box is lower on average than the mean value for the second variable name. 3. A p-value can never equal zero. SPSS rounds p-values to three decimal places, so p must be less than .001 or it would appear as .001. In a report, put p < .001 if the hypothesis was two-tailed. Here, the hypothesis was one-tailed, so divide by 2, which gives p < .0005. However, as we report p-values less than .001 as p < .001, we would write: p < .001, one-tailed.

Measure of effect size

As with the independent t-test, the output for the paired t-test does not include an estimate of the size of the effect and this is not an option on SPSS that you can select. However, from the output, you can calculate Cohen's d, which we described in Section 3, following the steps below and using the formula: d = (x1 – x2) / mean SD.
1. Look at the SPSS output and identify the mean and standard deviation (SD) for each condition, which are provided in a table called Paired Sample Statistics. For the data set used in this section: the mean for the large size difference condition, x1, = 1156.69 and the SD = 290.05; the mean for the small size difference condition, x2, = 1462.44 and the SD = 500.50.
2. Take the mean of one condition from the mean of the other condition (it is not important which mean to take from which, so you can ignore the sign). For the data set used in this section: 1156.69 – 1462.44 = 305.75.
3. Find the mean SD by adding the SD for each condition together and dividing by two: (SD of condition 1 + SD of condition 2) / 2. For the data set used in this section: (290.05 + 500.50) / 2 = 395.28.
4. Use the formula to calculate d. For the data set used in this section: d = (1156.69 – 1462.44) / 395.28 = 305.75 / 395.28 = 0.77.

This is a moderate to large effect size. Cohen (1988) provided the following guidance on how to interpret d:
■■ Small effect size: 0.2 (or more)
■■ Moderate effect size: 0.5 (or more)
■■ Large effect size: 0.8 (or more).

Reporting the results

In a report you might write: The average time to decide which of the pair of animals was larger was greater for small-size-difference trials than for large-size-difference trials (1462.44 ms and 1156.69 ms, respectively). A paired t-test showed that the difference between conditions was significant, and the size of this effect was moderate to large (t = 4.46, df = 15, p < .001, one-tailed, d = 0.77).

We will not show you how to create an error bar graph with SPSS for the data analysed with the paired t-test. Unfortunately, SPSS does not allow Chart Builder to create error bar graphs for data from a repeated measures design. With a repeated measures design the same participants complete both conditions, so this must be taken into account. Brysbaert (2011, 233–4) explains how the confidence intervals from experiments using such a design can be ‘corrected’ to allow them to be interpreted appropriately, in line with an independent groups design.

Section 5: AN INTRODUCTION TO NONPARAMETRIC TESTS OF DIFFERENCE

■■ The Mann–Whitney test and the Wilcoxon matched-pairs signed-ranks test are nonparametric tests of difference and are used to explore whether two data samples are different.
■■ The Wilcoxon test is the nonparametric equivalent of the paired t-test, and is used for data gathered in experiments involving repeated measures and matched-pairs designs.
■■ The Mann–Whitney test is the nonparametric equivalent of the independent t-test, and is used to compare data collected in an experiment involving an independent groups design.


■■ These nonparametric tests should be used in preference to the equivalent t-tests when data are only of ordinal level of measurement or do not meet the other assumptions required for parametric tests.
■■ Both tests involve ranking the data, and the calculations are carried out on the ranks. In the annotated output pages for these tests, there is a brief explanation of how each test is performed.
■■ When reporting descriptive statistics to accompany the results of a nonparametric test of difference, such as the Mann–Whitney or Wilcoxon test, you should normally give the median and range (not the mean and standard deviation) as the measures of central tendency and dispersion. The median and range are more appropriate descriptives for nonparametric tests because these are distribution-free tests and do not assume a normal distribution.

Section 6: THE MANN–WHITNEY TEST

Example study: sex differences and emphasis on physical attractiveness

To demonstrate how to perform the Mann–Whitney test, we shall use the data from an experiment based on research conducted by one of our past students, which was designed to determine whether males and females differ in the emphasis they place on the importance of the physical attractiveness of their partner. Previous research has reported that men are more concerned than women about the physical attractiveness of their heterosexual partner. However, current advertising trends and societal pressure may have altered the emphasis placed on physical attractiveness and, more specifically, the importance they attach to 'body' or physique compared with other characteristics of their ideal partner. The hypothesis tested is two-tailed: that men and women will differ in the importance they attach to physique. The design employed was an independent groups design. The independent variable was whether the participant was male or female, operationalised by asking equal numbers of males and females to take part in the experiment (only one partner from a relationship participated). The dependent variable was the importance attached to body shape, operationalised by asking participants to rank order 10 characteristics of an ideal partner, one of these being body shape. (These data are available in the Appendix or from macmillanihe.com/harrison-spss-7e.)

How to do it 1. Click on the word Analyze.

2. Click on Nonparametric Tests.

3. Click on Legacy Dialogs.

We recommend you use Legacy Dialogs as this gives you more information in the output compared with the new alternative.


4. Click on 2 Independent Samples.

The data have been entered with the variable names 'sex' and 'rating'. Follow steps 5 to 11, shown below, then click on OK. The SPSS output, which will appear after a short delay, is shown on the following page with explanatory comments.


5. Highlight the relevant variables by clicking on them.

6. Click on this arrow to move across any highlighted variable into the relevant box.

7. Test Variable is the dependent variable, so move the ‘Rating of …’ into this box.

8. Grouping Variable is the independent variable, so move the ‘Sex of subject [sex]’ into this box.

9. After step 8, this box will become bold: click on it, and the dialogue box below will appear.

10. Enter the codes used in the SPSS data file; in this example, 1 (male) for Group 1 and 2 (female) for Group 2. Then click on Continue to return to the main dialogue box.

11. Make sure that only the Mann–Whitney U has a tick in the small box. You are unlikely to require the other tests. Click on OK.
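The Legacy Dialogs route can also be pasted as syntax. Using the variable names given above ('rating' and 'sex', with the groups coded 1 and 2), the command would look something like this sketch.

    * Mann-Whitney U test comparing ratings across the two sex groups.
    NPAR TESTS
      /M-W= rating BY sex(1 2).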

SPSS output for Mann–Whitney U test
Obtained using menu items: Nonparametric Tests > Legacy Dialogs > 2 Independent Samples

You do not need this part of the output for your Results section, but it gives you some information about the calculations for the Mann–Whitney U test: first, all the data from both groups combined are assigned ranks from the lowest to the highest; then, the ranks given to one group are compared with the ranks given to the other group; the mean ranks shown here indicate whether there are more high ranks in one group than in the other.

The calculated value of U for the Mann–Whitney U Test. You will report this value in your results section. SPSS also gives the calculated value of an independent groups version of the Wilcoxon test. You do not need this for writing your results section.

Depending on the size of your data file, the output may or may not include the Exact Sig. (NB: The value shown here is not from the Exact option button, but produced automatically by SPSS.)




Reporting the results



In a report you might write: There was no significant difference between men and women in the importance they attached to body shape in a partner (U = 147.50, N1 = 20, N2 = 20, p = .157, two-tailed).

If you are reporting descriptive statistics, you should avoid reporting the mean rank or sum of ranks given in this SPSS output and that for the Wilcoxon test described next. Instead, obtain the median and range for the two conditions by using the Explore command (Chapter 3, Section 5).

Section 7: THE WILCOXON TEST

Example study: quality of E-FIT images The police frequently use a computerised facial composite system to help eyewitnesses recall the face of a perpetrator. One such system is E-FIT (Electronic Facial Identification Technique). In a study by Newlands (1997), participants were shown a short video clip of a mock crime scenario depicting an instance of petty theft. Participants were then asked to generate an E-FIT composite of the perpetrator. On completion, they were asked to rate the likeness of their E-FIT image to the person they remember seeing in the video. They were then shown a photo of the perpetrator and again asked to rate the likeness of their E-FIT to that person. The hypothesis tested was one-tailed: that the likeness ratings of the E-FIT to the perpetrator would be more favourable when recalling the perpetrator from memory than when seeing a photograph of the perpetrator. The design employed was a repeated measures design. The independent variable was the presence or absence of a photograph of the perpetrator, operationalised by asking participants to rate the likeness of their E-FIT, first to their recall of the perpetrator and then to a photo of the perpetrator. The dependent variable was measured on an ordinal scale and was the likeness rating, operationalised by the response on a seven-point scale, where point 1 was ‘very good likeness’ and point 7 ‘no likeness’. For the purposes of this book, we have created a data file that will reproduce some of the findings of this study. (These data are available in the Appendix or from macmillanihe.com/harrison-spss-7e.)

How to do it 1. Click on the word Analyze.

2. Click on Nonparametric Tests.

3. Click on Legacy Dialogs.

We recommend you use Legacy Dialogs as this gives you more information in the output compared with the new alternative.

4. Click on 2 Related Samples.

The dialogue box shown below will appear. The variable labels, and the variable names ('mem' and 'photo'), used in the data file appear in the left-hand box. Follow steps 5 to 7, shown below, then click on OK. The SPSS output, which will appear after a short delay, is shown below with explanatory comments.


5. The list of variable labels from the data file appears here. Highlight the two variables you want to enter into the analysis by clicking on them.

6. Click on this arrow to move the highlighted variables across to the Test Pairs box.

7. Ensure that there is a tick in the Wilcoxon box, and that the other boxes are blank. Click on OK.
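The equivalent syntax, using the variable names 'mem' and 'photo' from this data file, is a one-command sketch like the one below.

    * Wilcoxon matched-pairs signed-ranks test on the two likeness ratings.
    NPAR TESTS
      /WILCOXON=mem WITH photo (PAIRED).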

SPSS output for Wilcoxon matched-pairs signed-ranks test
Obtained using menu items: Nonparametric Tests > Legacy Dialogs > 2 Related Samples

These are the variable labels. If you did not enter variable labels, then the variable names will appear here.

This row shows the p-value. The hypothesis for this example was one-tailed, so divide the p-value by 2 to give p = .472. (That is still much greater than .05.)

This section gives you some information about the calculations for the Wilcoxon test. You don’t need this for your Results section; but, first, for each subject the score for one level of the independent variable is subtracted from the score for the other level; then those differences are ranked, ignoring the sign (whether negative or positive) and omitting ties. Once ranking is completed, the signs are reattached. The mean ranks indicate whether there are more high ranks for the positive differences, or for the negative differences, or whether there is a fairly equal spread of ranks, as in this example.

SPSS gives the z value, shown in this row, not the T or W value shown in most textbooks. As Howell (2002) explains, z should always be used for large samples but it involves extra calculations. SPSS does that automatically. The negative sign can be ignored (as for the t-test).




Reporting the results



In a report you might write: There was no significant difference between the likeness ratings of the E-FITs that were made with a photo of the perpetrator visible and those that were made from memory (z = 0.07, N – Ties = 39, p = .472, one-tailed).

Summary

■■ This chapter introduced you to statistical tests that will tell you if there is a significant difference between two means.
■■ This could involve comparing the performance of a single group of participants with a specific score, or comparing the performance of two groups of participants with one another (such as in an independent groups design), or comparing the performance of one group of participants who perform in two conditions (as in a repeated measures design).
■■ Your choice of which test to use will depend on the design of your experiment, and whether the data are parametric.
■■ Remember our advice on the appropriate descriptive statistics to accompany the results of these tests. See Chapter 3 for guidance on obtaining these.
■■ For guidance on incorporating SPSS output into a report, or on printing the output, see Chapter 14.
■■ Chapter 8 introduces you to Analysis of Variance (ANOVA), a test of difference that is appropriate for designs that involve more than two groups or conditions, or more than one independent variable.

CHAPTER 6

Tests of correlation and bivariate regression

In this chapter:
●● An introduction to tests of correlation
●● Producing a scatterplot
●● Pearson's r: parametric test of correlation
●● Spearman's rs: nonparametric test of correlation
●● Partial correlations
●● Comparing the strength of correlation coefficients
●● Brief introduction to regression
●● Bivariate regression

SPSS for Psychologists online Visit macmillanihe.com/harrison-spss-7e for data sets, online tutorials and exercises.

Section 1: AN INTRODUCTION TO TESTS OF CORRELATION

■■ Researchers often wish to measure the strength of the relationship between two variables. For example, there is likely to be a relationship between age and reading ability in children. A test of correlation will provide you with a measure of the strength and direction of such a relationship.
■■ In a correlation, there is no independent variable: you simply measure two variables. So, if someone wished to investigate the relationship between smoking and respiratory function, they could measure how many cigarettes people smoke and their respiratory function, and then test for a correlation between these two variables.
■■ Correlation does not imply causation. In any correlation, there could be a third variable which explains the association between the two variables you measured. For example, there may be a correlation between the number of ice creams sold and the number of people who drown. Here, temperature is the third variable, which could explain the relationship between the measured variables. Even when there seems to be a clear cause-and-effect relationship, a correlation alone is not sufficient evidence to prove this causal relationship.


■■ Francis Galton carried out early work on correlation and one of his colleagues, Pearson, developed a method of calculating correlation coefficients for parametric data: Pearson's product moment correlation coefficient (Pearson's r). If the data are not parametric, or if the relationship is not linear, a nonparametric test of correlation, such as Spearman's rs, should be used.
■■ Note that, for a correlation coefficient to be reliable, one should normally make at least 100 observations; otherwise a small number of extreme scores could skew the data and either mask a real relationship or give the appearance of a correlation that does not really exist. The scatterplot is a useful tool for checking such eventualities and for checking that the relationship is linear.

Section 2: PRODUCING A SCATTERPLOT

A scatterplot (or scattergram) will give a good indication of whether two items are related in a linear fashion. Figure 6.1 shows a hypothetical example. Each point on the scatterplot represents the age and the reading ability of one child. The line running through the data points is called a 'regression line'. It represents the 'best fit' of a straight line to the data points. The line in Figure 6.1 slopes upwards from left to right: as one variable increases in value, the other variable also increases in value, and this is called a positive correlation. The closer the points are to being on the line itself, the stronger the correlation. If all the points fall along the straight line, then it is said to be a perfect correlation. The scatterplot will also show you any outliers. It is often the case that as one variable increases in value, the other variable decreases in value: this is called a negative correlation. In this case, the pattern of points on the scatterplot will slope downwards from top left to bottom right.

[Figure 6.1 shows a scatterplot with reading ability (0–25) on the y-axis and age in years (3–10) on the x-axis, with an upward-sloping regression line through the points.]

Figure 6.1  Scatterplot illustrating a positive correlation: hypothetical data for the relationship between age and reading ability in children

As well as telling you the direction of the relationship between two variables (i.e. positive or negative), a correlation can also tell you how strong the relationship between your variables is. In this way, it's very similar to the concept of effect size which you came across in Chapter 1 (and, in fact, correlation coefficients are sometimes used as a measure of effect size). In this case:
■■ a value of 1 shows a perfect positive correlation
■■ a value of –1 shows a perfect negative correlation
■■ a value of zero shows that there is no relationship between the two variables.

In reality, correlation coefficients are rarely exactly 1, –1 or 0, but instead fall somewhere in between. As with effect size, you can use a general rule of thumb to help interpret the strength of these correlations. We suggest using the following as a guide:
■■ 0.7 to 1 shows a strong relationship between the variables
■■ 0.3 to 0.6 suggests a moderate relationship
■■ 0 to 0.2 indicates any relationship is weak.

The size (or magnitude) of the correlation coefficient is directly related to how linear (or line-like) your data points appear on a scatterplot. If the points form a perfectly straight line, then the correlation coefficient will be either 1 or –1, depending on which direction it is sloping in. However, if the points are completely random, so that they appear distributed almost equally across the scatterplot, then the correlation coefficient will be nearer 0. This is illustrated in Figure 6.2.

[Figure 6.2 shows five example scatterplots: strong positive (r = 0.9), moderate positive (r = 0.5), moderate negative (r = –0.5), no correlation (r = 0) and strong negative (r = –0.9).]

Figure 6.2  Scatterplots illustrating different directions and strengths of correlations between two variables


Figure 6.3  Scatterplot showing two variables with an inverted U-shape relationship

In this book we will only show procedures for dealing with linear relationships, like that shown in Figure 6.1, and also the relationship in the example study below. Sometimes relationships are nonlinear, for example, an inverted U-shape relationship, illustrated in Figure 6.3, which might be found between two variables such as stress and exam performance. For procedures to deal with nonlinear relationships, read the appropriate textbooks.

Example study: relationship between age and CFF A paper by Mason et al. (1982) described an investigation of (among other things) whether the negative correlation between age and critical flicker frequency (CFF) is different for people with multiple sclerosis than for control participants. For this example, we have created a data file that will reproduce some of the findings for the control participants. CFF can be described briefly and somewhat simplistically as follows: If a light is flickering on and off at a low frequency, most people can detect the flicker; if the frequency of flicker is increased, eventually it looks like a steady light. The frequency at which someone can no longer perceive the flicker is called the critical flicker frequency (CFF). (These data are available in the Appendix or from macmillanihe.com/harrison-spss-7e.)

How to obtain a scatterplot using Legacy Dialogs
1. On the menu bar, click on Graphs.
2. Click on Legacy Dialogs.
3. Click on Scatter/Dot, and the Scatter/Dot dialogue box will appear.

4. Select the Simple Scatter option, then click on Define. The Simple Scatterplot dialogue box will appear.


5. Move the appropriate variable into the X Axis box, as we have done here. Then move the other variable into the Y Axis box. 6. When you have finished, click on OK.

The scatterplot will appear in the SPSS output window. You can edit it, in the way we describe below. However, using a different menu option to produce your graph will allow you to directly add a regression line to your scatterplot, and give you a number of other options in the dialogue box. We will describe this in the next section.
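A basic scatterplot like this can also be requested with the legacy GRAPH syntax. The sketch below assumes the variables are named 'age' and 'cff', as in the example data file; the regression line would still be added afterwards in the Chart Editor.

    * Simple scatterplot with age on the X axis and cff on the Y axis.
    GRAPH
      /SCATTERPLOT(BIVAR)=age WITH cff.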

Producing a scatterplot with a regression line using Chart Builder
1. On the menu bar, click on Graphs.
2. Click on Chart Builder.
3. A Chart Builder dialogue box will remind you that measurement level should be set properly for each variable in your chart (see Chapter 3, Section 8). If you are sure that they are set correctly, just click on OK. (If you want to add a regression line to the scatterplot, then both variables must be set at Scale in the Measure column of Variable View.) The Chart Builder dialogue box shown below will now be visible.


4. Click on Scatter/Dot to see the range of graphs associated with this option.

5. This is a simple scatter graph. Click and drag to the box above. This will open up the Element Properties dialogue box; as you don’t need to alter this, click on Close.

6. Drag the variable ‘age’ into the blue X-Axis? box and the variable ‘cff’ into the blue Y-Axis? box. A preview of the chart appears, but this does not represent the real data.

8. Click on OK to produce the scatterplot in your output.

7. To produce a regression line, select the Total option under Linear Fit Lines.


How to add a regression equation to the scatterplot

To add the regression equation to your scatterplot, you have to edit the graph SPSS produces. You can also use this method to add a regression line to a scatterplot produced using Legacy Dialogs. Start by double-clicking in the scatterplot, and the SPSS Chart Editor window, shown below, will appear.

1. Put the cursor anywhere inside the scatterplot, and double-click to open the SPSS Chart Editor window.

2. Double-click on the regression line to highlight it and open the Properties dialogue box (shown below).


If you produced your scatterplot using the Legacy Dialogs, you can add a regression line here by selecting Elements and then Fit Line at Total. If this option is not available, check that in your data file both variables are set to Scale in the Measure column, then start again.

This dialogue box will open at the Fit Line tab, which is what we want. You could explore the other tabs on another occasion, to see what they do.

3. Check that the Linear option button is selected. As you can see, there are various other methods of fitting a line through data. Those other methods can be used, if appropriate for your data, in more advanced stages. 4. To add the regression equation as a label on the graph, select this option; we describe this briefly in the text below. Deselect here if you wish to show your data more clearly. 5. Apply the changes you have made, and close this dialogue box to return to the Chart Editor window, which will now display the regression equation. 6. Close the Chart Editor window to return to the Output window.

You can copy the scatterplot and paste it into a report, adding a suitable figure legend. For example, see Figure 6.4.

Figure legends should be suitable for the work into which you are incorporating the figure. The legend to Figure 6.4 might be suitable for a report about the study into age and CFF. The legends to Figures 6.1, 6.2 and 6.3, however, are intended to help you follow the explanation in this book, and would not be suitable for a report.

In addition to adding the regression line, you can edit other elements of the chart to improve appearance. For example, SPSS charts are usually rather large. If you leave them large, the report will be spread over more pages than necessary, which can hinder the ease with which the reader follows your argument. You can shrink charts easily in

a program such as MS Word, but it is much better to change the size in Chart Editor, because then the font and symbol size will automatically be adjusted for legibility. We show you how to do this in Chapter 3, Section 8. Editing would also be useful when a number of cases all fall at the same point. The data we use to illustrate the use of Spearman's rs (Section 4) demonstrate that situation. To clearly illustrate the data you can edit the data symbols in Chart Editor, so that they vary in size according to the number of cases at each point. Guidelines on the appearance of figures are given in the Publication Manual of the American Psychological Association (APA, 2019).

This value appears when you request the regression line; we comment on it in the text below. This label, the regression equation (discussed in Section 7), is also produced when you request the regression line, unless you unselect it (see annotated point 4 above). It can obscure part of your graph, so you could produce the graph again, without it.

Figure 6.4  Critical flicker frequency (in Hz) plotted against participant’s age (in years)

A scatterplot is a descriptive graph that illustrates the data, and can be used to check whether the data are suitable for analysis using a test of correlation. For example, if there are a few cases in one corner of the scatterplot, and most of the other cases are clustered together at the opposite end of the regression line, those outliers may produce a significant correlation even though there is no real relationship. A scatterplot would also indicate if there is a relationship but it is nonlinear; for example, if the relationship is U-shaped. If there does appear to be a linear relationship (and Pearson's r makes the assumption that any relationship will be linear), we can find out whether or not it is significant with an inferential statistical test of correlation (as outlined above). Note the R2 Linear value that appears next to the scatterplot above. This is not the correlation coefficient itself; it is the square of Pearson's r and is itself a useful statistic (described in Section 3). You can remove the R2 legend if you wish: in the Chart Editor window, double-click on the legend so that it is selected, then press the delete key.


Next, we show you how to obtain a scatterplot using Graphboard Template Chooser, which has been available since Version 17. It is not yet possible to add a regression line using this relatively new SPSS graphing option or to edit the size of the graph.

How to obtain a scatterplot using Graphboard Template Chooser

Graphboard Template Chooser was described in Chapter 3, Section 9.
1. On the menu bar, click on Graphs.
2. Click on Graphboard Template Chooser.
3. In the dialogue box, ensure that the variables you want to plot are indicated as ordinal or scale. If they are not, go to your data file and set them.

4. Select the two variables you want. The top variable will be plotted on the X-axis. To reorder, use the up or down arrow. Remember to select both variables again.

5. Select the Scatterplot icon at the bottom. You may have to scroll down to see it.

6. Click on OK.

Here we have shown you how to produce the scatterplot using the Basic tab. You could instead use the Detailed tab, described in Chapter 3, Section 9. You would select Scatterplot from the Choose list, and then set the X and Y variables.

The graph will appear in the Viewer window. Unlike other scatterplot commands, Graphboard does not add R2 Linear. Double-click in the scatterplot if you wish to edit it, and the Graphboard Editor window will appear. However, you cannot add a regression line using this method.


Section 3: PEARSON’S r : PARAMETRIC TEST OF CORRELATION

Example study: critical flicker frequency and age To illustrate how to carry out this parametric test of correlation, we will continue using the CFF and age data. Note that the data do not meet the guidelines for correlation of a sample size of around 100. The hypothesis tested was that there would be a negative correlation between CFF and age. The study employed a correlational design. Two variables were measured. The first was age, operationalised by recruiting volunteer participants who ranged in age from 25 to 66 years. The second variable was CFF, operationalised by using a flicker generator to measure CFF for each participant; six measures were made, and the mean taken to give a single CFF score for each participant.

How to perform a Pearson’s r

1. Click on Analyze.

2. Click on Correlate.

3. Click on Bivariate. The Bivariate Correlations dialogue box will appear (see below).


SPSS will correlate each variable you include with every other variable you include. Thus, if you included three variables A, B and C, it will calculate the correlation coefficient for A * B, A * C and B * C. In the Pearson’s r example we have just two variables, but in the Spearman’s rs example (in Section 4), we include three variables so that you can see what a larger correlation matrix looks like.

4. Highlight the two variables you wish to test for correlation. NB: Highlight more than one by clicking on the first, then holding down the shift key while you click on others.

6. Select the correlation coefficient you require. 9. Check that Flag significant correlations is ticked. Then click on OK.

5. Click here to move the highlighted names into the Variables box.

7. Choose to apply either a Two-tailed or One-tailed test, as appropriate for the hypothesis.

The Style button allows you to specify the format of any tables.

8. If you click on Options, another dialogue box will appear, in which you can request means and standard deviations.


In the Bivariate Correlations dialogue box, you have the option of choosing either a one- or two-tailed test, and SPSS will then print the appropriate value of p. Here, we selected two-tailed. In the statistical tests we have covered previously, SPSS will only print the two-tailed p-value, and if you have a one-tailed hypothesis, you halve that value to give the one-tailed p-value.
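The corresponding syntax is a single CORRELATIONS command. The sketch below uses the 'age' and 'cff' variable names from this data file and requests two-tailed p-values and descriptive statistics, matching the dialogue box settings described above.

    * Pearson's r between age and cff, with means and SDs.
    CORRELATIONS
      /VARIABLES=age cff
      /PRINT=TWOTAIL NOSIG
      /STATISTICS DESCRIPTIVES.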

The annotated output for Pearson’s r is shown below.

SPSS output for Pearson’s r Obtained using menu item: Correlate > Bivariate CORRELATIONS

Useful descriptives obtained by using the Options button in the Bivariate Correlations dialogue box.

The Pearson’s correlation coefficient or Pearson’s r.

The p-value. (See Chapter 5, Section 4, Footnote 3.)

N, the number of cases.

A complete matrix is printed.

In addition to the p-values in the matrix, SPSS prints this message. Significant correlations are flagged by asterisks; this is particularly useful if you have entered several variables and so have a large correlation matrix.

Two of the cells are for each variable with itself (for these cells, p is not calculated). The other two cells contain the same information about the correlation between the two variables.


What you might write in a report is given below, after we tell you about effect sizes in correlations. For correlations, the sign of the coefficient indicates whether the correlation is positive or negative, so you must report it (unlike the sign in a t-test analysis).

Effect sizes in correlation

The value of r indicates the strength of the correlation, and it is a measure of effect size (see Chapter 1, Section 1). As a rule of thumb, r values of 0 to .2 are generally considered weak, .3 to .6 moderate and .7 to 1 strong. The strength of the correlation alone is not necessarily an indication of whether it is an important correlation: normally, the significance value should also be considered. With small sample sizes this is crucial, as strong correlations may easily occur by chance. With large to very large sample sizes, however, even a small correlation can be highly statistically significant. To illustrate this, look at a table of the critical values of r (in the back of most statistics textbooks). For example, if you carry out a correlational study with a sample of 100 and obtain r of .20, it is significant at the .05 level, two-tailed. Yet .2 is only a weak correlation. Thus, we recommend you report the effect size, the statistical significance and the proportion of variation, which we explain next. The concept of 'proportion of variation explained' is described in Chapter 8, Section 1. Briefly, a correlation coefficient allows us to estimate the proportion of variation within our data that is explained by the relationship between the two variables. (The remaining variation is down to extraneous variables, the situation and participants.) The proportion of variation explained is given by r2. Thus, for the age and CFF example, in which r = .78, r2 = (.78 x .78) = 0.6084. Multiplying r2 by 100 allows us to turn this into a percentage, and we can say that 60.84% of the variation in the CFF data can be attributed to age. Note that, logically, we can just as easily say that 60.84% of the variation in the age data can be attributed to CFF. The latter statement should make it clear that we are not implying a causal relationship: we cannot do so with correlation. The important practical point is that the two variables have quite a lot of variation in common, and one could use a person's age to predict what their CFF might be. If their measured CFF is outside the lower confidence limit for their age, we could investigate further. Note that the proportion of variation explained does not have to be large to be important. How important it is may depend on the purpose of the study (see Howell, 2013, 312–13). We will come back to this concept in Section 5 of this chapter, and in Chapter 10, which covers multiple regression.

Reporting the results

In a report you might write:

There was a significant negative correlation between age and CFF (r = –.78, N = 20, p < .001, one-tailed). It is a fairly strong correlation: 60.84% of the variation is explained. The scatterplot (Figure 6.3) shows that the data points are reasonably well distributed along the regression line, in a linear relationship with no outliers.

Section 4: SPEARMAN’S rs: NONPARAMETRIC TEST OF CORRELATION

When the data for one or both of the variables are not parametric, for example they are measured at ordinal level, or if the scatterplot suggests that the relationship between the two variables is not linear, then we use a nonparametric measure of correlation. (See Chapter 1 for more about choosing the correct statistical procedure, and information about levels of measurement and parametric tests.) Here, we describe two such tests, Spearman’s rs and Kendall’s tau-b. The s on Spearman’s rs is to distinguish it from Pearson’s r. This test was originally called Spearman’s ρ (the Greek letter rho), and SPSS still calls the output Spearman’s rho.

Example study: the relationships between attractiveness, believability and confidence

Previous research using mock juries has shown that attractive defendants are less likely to be found guilty than unattractive defendants, and that attractive individuals are frequently rated more highly on other desirable traits, such as intelligence. In a study undertaken by one of our students, participants saw the testimony of a woman in a case of alleged assault. They were asked to rate her, on a scale of 1–7, in terms of how much confidence they placed in her testimony, how believable she was and how attractive she was. (These data are available in the Appendix or from macmillanihe.com/harrison-spss-7e.) The design employed was correlational, with three variables each measured on a seven-point scale. Although it is often accepted that such data could be considered interval in nature (see Chapter 1, Section 1), for the purpose of this section we will consider it as ordinal data. The hypotheses tested were that:

1. There would be a positive relationship between attractiveness and confidence placed in testimony.
2. There would be a positive relationship between attractiveness and believability.
3. There would be a positive relationship between confidence placed in testimony and believability.

We are using this study to illustrate the use of Spearman’s rs and some other aspects of correlation. However, multiple regression (Chapter 10) would usually be more appropriate for three or more variables in a correlational design.


How to perform Spearman’s rs

Carry out steps 1 to 5 as for Pearson’s r (previous section). At step 6, select Spearman instead of Pearson (see Bivariate Correlations dialogue box below). This example also illustrates the fact that you can carry out more than one correlation at once. There are three variables, and we want to investigate the relationship between each variable and each of the other two. To do this, you simply highlight all three variable names and move them all into the Variables box.

We have moved the three variables into here from the left-hand box.

The Style button allows you to specify the format of any tables.

Pearson is the default, so unselect that, and select Spearman instead.

The SPSS output for Spearman’s rs is shown on the next page.

SPSS output for Spearman’s rs
Obtained using menu item: Correlate > Bivariate

NONPARAMETRIC CORRELATIONS

This cell contains the values for the correlation between the variables ‘confdt’ and ‘believ’: .372 is rs, .000 is p, and 89 is the number of cases.

Look for the diagonal: the output for each bivariate correlation is given twice, once below the diagonal and once above it.

As in the Pearson’s output, a complete matrix is printed, but this matrix is larger because three variables were entered.

To find the correlation between two variables (for example, confdt and believ), you move down the column that represents your first variable (confdt), and move across the row that represents your second variable (believ). The cell where they intersect contains the relevant correlation statistics.


Reporting the results

When reporting the outcome for each correlation, at the appropriate points, you would write:

There was a significant positive correlation between confidence in testimony and believability (rs = .37, N = 89, p < .001, two-tailed). There was no significant correlation between confidence in testimony and attractiveness (rs = .16, N = 89, p = .143, two-tailed). There was a significant positive correlation between attractiveness and believability (rs = .36, N = 89, p = .001, two-tailed).

You could illustrate each pair of variables in a scatterplot (see Section 1). Note that the R2 Linear value, given in the scatterplot when you add a regression line, is the square of Pearson’s r (r2) and not the square of Spearman’s rs. As described in Section 2, r2 indicates the proportion of variation explained, but this may not be appropriate for ordinal data.


How to perform Kendall’s tau-b

Some researchers prefer to use Kendall’s tau instead of Spearman’s rs. To undertake a Kendall’s tau-b, follow the same steps as for Pearson’s r, but at step 6 select Kendall’s tau-b. The output takes the same form as that for Spearman’s rs. Kendall’s tau-b takes ties into account. Kendall’s tau-c, which ignores ties, is available in Crosstabs (see Chapter 7, Section 4).
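If you ever want to cross-check these nonparametric coefficients outside SPSS, both are available in Python’s scipy library. The sketch below uses two short, hypothetical rating variables purely to illustrate the calls; the values are not the jury data from this study, and this is not part of the SPSS procedure described above.

```python
# Sketch: computing Spearman's rs and Kendall's tau-b outside SPSS.
# The two rating variables below are hypothetical, for illustration only.
from scipy.stats import spearmanr, kendalltau

attract = [3, 5, 2, 6, 4, 7, 1, 5]   # hypothetical 1-7 ratings of attractiveness
believ = [4, 6, 3, 5, 4, 7, 2, 6]    # hypothetical 1-7 ratings of believability

rs, p_rs = spearmanr(attract, believ)      # Spearman's rs and its p-value
tau, p_tau = kendalltau(attract, believ)   # Kendall's tau-b (ties are handled)

print(f"Spearman's rs = {rs:.3f}, p = {p_rs:.3f}")
print(f"Kendall's tau-b = {tau:.3f}, p = {p_tau:.3f}")
```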

Section 5: PARTIAL CORRELATIONS

When carrying out research, there may be times when you want to examine the unique relationship between two variables, excluding any potential influence of a third variable.

Example study: academic achievement

Imagine you are a researcher interested in exploring the relationship between how much revision your students do, and the score they get on their final exam. Obviously, a correlation would be able to tell you the strength and direction of the relationship between participants’ exam scores and how hard they revised (e.g. time spent revising in hours). However, there are other variables that might be influencing this relationship. For example, how much your students enjoy their subject is likely to be related to both how well they do in that subject, and how much they engage with revision. So, if you wanted to explore the direct relationship between revision time and exam outcome, you might want to partial out the influence of subject enjoyment.

To do this, you can use a partial correlation. This allows you to look at the strength and direction of the relationship between two variables while controlling for the influence of a third variable. This process involves looking at the proportion of variation explained by the different variables (see Figure 6.5) and effectively removing the influence of the third variable, producing a correlation between your two variables of interest, while the effects of the third variable are held constant.

As with Pearson’s correlation, partial correlations require that all variables are measured at the interval level. In this example, both exam score and time spent revising naturally meet this criterion. However, for a partial correlation to be valid, you would also need to find a measure of subject enjoyment that is appropriate. As simply asking students how much they enjoy their subject on a scale of 1–5 would only produce ordinal data, you would instead need to find a standardised and validated measure that produces data that can be treated as an interval scale. In this case, we will use the (fictional) Subject Enjoyment Questionnaire, which contains a number of different items and produces a score on a scale from 0–50 (with larger scores indicating greater subject enjoyment).

The question of whether standardised scales can be assumed to be interval (rather than ordinal) data is still hotly debated in the literature. However, much psychology research assumes that this is a legitimate claim, and uses parametric tests to analyse the data produced by them.

[Figure 6.5 (Venn diagrams): the circles represent Subject Enjoyment, Exam Score and Revision Intensity. The overlap between Exam Score and Revision Intensity is the variance accounted for by the relationship between them; this is what a bivariate correlation is tapping into. The non-overlapping regions are variation in exam score that has nothing to do with revision intensity, and variation in revision intensity that has nothing to do with exam score. Partial correlation removes the influence of a third variable, and looks at what is left.]

Figure 6.5  Venn diagrams illustrating the role of shared variance in partial correlations

How to perform a partial correlation

1. Click on Analyze.

2. Click on Correlate.

3. Click on Partial. The Partial Correlations dialogue box will appear (see below).


4. Move the two variables you want to know the relationship between into the Variables box.

5. Move the variable you want to partial out into the Controlling for box.

6. Choose either a Two-tailed or One-tailed test, as appropriate for the hypothesis, and then click OK to produce the output.

SPSS output
Obtained using: Correlate > Partial

PARTIAL CORRELATIONS

Here you can see the values for the partial correlation between Exam score and Hours spent revising, taking into account Enjoyment of subject: .461 is the partial correlation coefficient (r), .012 is p, and 27 is the degrees of freedom (N minus 3 when one variable is controlled for).

Reporting the results

When reporting the outcome for a partial correlation, at the appropriate points, you would write:

There was a significant moderate positive correlation between revision time and exam score whilst controlling for subject enjoyment (r = .46, N = 30, p = .012, two-tailed).

If you wanted to see how much influence subject enjoyment had on the relationship between these two variables, you could also carry out a standard Pearson correlation between revision time and exam score and compare the two correlations and the proportion of variance they explain. If we did this following the steps in Section 3, we would find the following:

There was a significant moderate positive correlation between revision time and exam score (r = .57, N = 30, p = .001, two-tailed).

So, controlling for subject enjoyment reduced the correlation coefficient (although not by a huge amount, as both correlations are still moderate and significant). Looking at the variance explained can help to interpret this further. While the bivariate (or zero order) correlation accounts for 32% (.57 x .57) of the variance in the data, the partial correlation only accounts for 21% (.46 x .46) of the variation. This tells us that, while revision intensity alone does explain some of the variance in exam scores (and vice versa), there is some influence of subject enjoyment on this relationship.
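A quick way to see what partialling out subject enjoyment has cost in explanatory terms is to square the two coefficients and compare them. The sketch below, in Python, simply performs the arithmetic described above using the two coefficients reported in the text; SPSS does not produce this comparison for you.

```python
# Sketch: comparing the variance explained by the zero-order and partial
# correlations between revision time and exam score (coefficients from the text).

r_zero_order = 0.57   # Pearson's r, ignoring subject enjoyment
r_partial = 0.46      # partial r, controlling for subject enjoyment

var_zero_order = r_zero_order ** 2   # about .32, i.e. 32% of the variance
var_partial = r_partial ** 2         # about .21, i.e. 21% of the variance

print(f"Zero-order correlation: {var_zero_order:.0%} of the variance explained")
print(f"Partial correlation:    {var_partial:.0%} of the variance explained")
print(f"Difference:             {var_zero_order - var_partial:.0%}")
```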

Partial correlation is very closely related to multiple regression (which you will cover in Chapter 10). As multiple regression allows you to do more with your data, and explore the relationship between your variables in more depth, it is often the preferred analysis.

Section 6: COMPARING THE STRENGTH OF CORRELATION COEFFICIENTS

Sometimes you may have data on the same variables for participants from each of two populations. We may have reason to hypothesise that the correlation between two variables will differ between the two populations. To illustrate this, we will use a study we carried out with student participants as part of a module. It is in the area of environmental psychology, and for this purpose we will use three of the variables we recorded. These were: ‘threat’, the response to a question about the perceived threat from environmental problems to the participant’s own health and wellbeing; ‘recycling score’, the mean of self-reported recycling rates for a range of materials; and ‘gender’. Below, we show the SPSS output for the correlation between the two variables for all participants, for the female group and for the male group.


For all participants. Note: the male N and female N do not sum to the total N as two participants did not record their gender on the questionnaire.

For female participants.

For male participants.

You should have a rationale for any comparison of correlation coefficients you make. For this example, differences between the genders are often reported in the area of environmental psychology. The differences may, however, be due to differences in other variables such as age, social class, type of residence area, level of education and so on. Thus, if you do find a difference between correlation coefficients for samples that are not matched on other variables, you should check whether your two samples differ on other variables which might explain your finding.

A test that Fisher devised (e.g. see Howell, 2013, 284–5) allows use of the z tables to assess whether the difference between two r values is significant. There are three stages to applying this test:

1. Transform each r value to a value called r′ (pronounced ‘r prime’). This is required because the distribution of the difference between two r values is used to assess whether a particular difference is significant; that distribution can be skewed, and then use of z would not be valid.

2. Use the r′ values to calculate the z value.

3. Use z to determine whether or not there is a significant difference between the two r values.

You can do this yourself using the formulae shown here, or by an SPSS syntax procedure, described in Chapter 14, Section 2.

Using equations

Equation 1: r′ = 0.5 × loge[(1 + r) / (1 − r)]

Equation 2: z = (r1′ − r2′) / √[1/(N1 − 3) + 1/(N2 − 3)]

First, calculate r′ from each r:

For women, r1 = .279:
r1′ = 0.5 × loge(1.279 / 0.721) = 0.5 × loge(1.774) = 0.5 × 0.573 = 0.287

For men, r2 = .452:
r2′ = 0.5 × loge(1.452 / 0.548) = 0.5 × loge(2.649) = 0.5 × 0.974 = 0.487


Second, calculate z:

z = (r1′ − r2′) / √[1/(N1 − 3) + 1/(N2 − 3)]
z = (.287 − .487) / √[1/(105 − 3) + 1/(112 − 3)]
z = −.200 / √(0.010 + 0.009)
z = −.200 / 0.138
z = −1.451

Note that the sign of the z simply indicates whether r1′ is larger or smaller than r2′ . Third, compare our observed z with the critical z of 1.96. We use the absolute value (that is, ignore the sign). If our observed z is less than 1.96, the difference between the two r values is not significant (p > .05); whereas if the observed z is greater than 1.96, the difference is significant (p < .05). In our example, the absolute value of the observed z is 1.451, and therefore p > .05.
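If you would rather not work through the formulae by hand, or use the SPSS syntax procedure in Chapter 14, Section 2, the same three stages can be scripted. The sketch below, in Python, uses only the standard library and the example figures from the text; treat it as an illustrative cross-check rather than the SPSS procedure, and expect small differences from the hand calculation above because it does not round the intermediate values.

```python
# Sketch: Fisher's test for the difference between two independent r values,
# using the example figures from the text (women: r = .279, N = 105;
# men: r = .452, N = 112).
import math
from statistics import NormalDist

def compare_correlations(r1, n1, r2, n2):
    """Return (z, two-tailed p) for the difference between two r values."""
    r1_prime = 0.5 * math.log((1 + r1) / (1 - r1))   # Equation 1, sample 1
    r2_prime = 0.5 * math.log((1 + r2) / (1 - r2))   # Equation 1, sample 2
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))      # denominator of Equation 2
    z = (r1_prime - r2_prime) / se                   # Equation 2
    p = 2 * (1 - NormalDist().cdf(abs(z)))           # two-tailed p from the z distribution
    return z, p

z, p = compare_correlations(r1=0.279, n1=105, r2=0.452, n2=112)
print(f"z = {z:.2f}, p = {p:.3f}")
# Agrees with the hand calculation above to within rounding (|z| is about 1.45).
```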


Reporting the results

When reporting the outcome of the comparison, if you have only used the critical value of z, you would write:

There was no significant difference between the correlation coefficients of .287 for women and .487 for men (z = 1.45, p > .05).

If you used z tables to find p more specifically, you would report:

There was no significant difference between the correlation coefficients of .287 for women and .487 for men (z = 1.45, p ~ .147).

Section 7: BRIEF INTRODUCTION TO REGRESSION

■■

Like correlational analysis, regression is also concerned with the relationship between variables. But while correlation is just used to describe the relationship between two variables (i.e. description), regression can be used for prediction. Regression is a statistical technique that allows us to predict someone’s score on one variable from their scores on one or more other variables. This is particularly helpful when we want to make predictions for cases beyond those in our current data set (although, as noted later in this chapter, you should not extrapolate far beyond the range of values you measured).

■■

Unlike the situation with correlation, in regression we attempt to specify the variables in terms of being dependent or independent variables.

■■

Regression involves one dependent variable which we want to predict, known as the ‘outcome’ or ‘criterion’ variable, and one or more independent variables, which we refer to as the ‘predictor variables’.

■■

When we only have one predictor variable, we have a bivariate regression; in contrast, multiple regression (see Chapter 10) involves two or more predictor variables.

■■

The predictor variable can be measured using a range of scales, although ideally at interval or ratio level, but the criterion variable must be measured using a ratio or interval scale.

■■

Human behaviour is inherently noisy, and therefore it is not possible to produce totally accurate predictions; however, regression allows us to assess how useful a predictor variable is in estimating a participant’s likely score on the criterion variable.

■■

As with bivariate correlation, bivariate regression does not imply causal relationships unless a variable has been manipulated.

Regression as a model

As we discuss in Chapter 8, Section 1, human behaviour is variable and therefore difficult to predict. A model is an attempt to explain and simplify data we have measured in a way that allows prediction of future cases. Say, for example, that we have measured how confident each student feels about how to use SPSS after they have completed a module. The simplest model is the mean; if the mean confidence of students is 33.58 (on a standardised confidence scale from 1 to 50), then we can predict that other students who complete the module the following year will have a confidence score of 33.58. However, there will be much error! In other words, the difference between each observed value and the predicted value (the mean) will be large for many students.

Of course, students vary in other ways, and other variables can affect their confidence in using SPSS; for example, how much time they have spent practising with SPSS. If we measure that as well as confidence, we can use the technique of regression to model the relationship between those two variables. We can then predict how confident a student will be from how much time they have spent practising. There will still be error, but it will be less than in the situation when we only used the mean student confidence. Figure 6.6 illustrates how well (or not!) using the mean as a model works for these data, compared with using the regression line (or the line of best fit). The amount of error in the model is indicated by the residuals (the difference between the observed value and predicted value for each case). We describe residuals in more detail below.


Horizontal line represents the mean, which does not fit the data very well.

Data are a better fit when represented by the regression line (or line of best fit).

Figure 6.6  Comparing the mean and regression line as models

Section 8: BIVARIATE REGRESSION

From bivariate correlation to bivariate regression

Earlier sections covered correlation between two variables. Here, we use the term ‘bivariate’, which indicates that only two variables are involved, to distinguish from ‘multiple’, which indicates that there are more than two variables. To illustrate this section, we will use the example previously used when obtaining a scatterplot with a regression line, and also for Pearson’s correlation coefficient. Age and CFF (critical flicker frequency: the frequency at which someone can no longer perceive a light flickering) were measured; the study is described in Section 2.

In bivariate correlation, we consider the strength of association between two variables, and do not consider whether one might be independent and the other a dependent variable. The technique of regression, however, allows prediction of one variable from another; thus we need to distinguish between the two variables. Some authorities use the terms ‘independent’ and ‘dependent’ variable; however, prediction does not necessarily mean direct causation, and other authorities use a different labelling system, which we prefer. In this system, the terms are the ‘predictor variable’ and the ‘criterion variable’. The criterion variable is said to be predicted by the predictor variable. Another labelling system, used in equations and graphs, is X and Y. The predictor variable is denoted X, and the criterion variable is denoted Y.

A scatterplot with regression line for the example data is shown again below. Age is classified as the predictor, or X, variable, and CFF as the criterion, or Y, variable. It is that way round because we assume that something to do with the ageing process affects CFF, rather than that CFF affects age or ageing. There can often be a logical reason such as that for classifying the variables. Mathematically, however, regression works equally well in the opposite direction; if we used CFF as the predictor and age as the criterion, we can predict someone’s age from their CFF with the same degree of accuracy.

See Section 2 for how this scatterplot was produced and how the regression line was added.

The criterion variable, denoted as Y in the regression equation, and plotted on the Y-axis.

The predictor variable, denoted as X in the regression equation, and plotted on the X-axis.

The bivariate regression equation

In Section 2 we introduced the regression line that can be added to the scatterplot (using the Fit Line at Total option), and we will now explain the equation underlying the regression line. The relationship between any two variables that have a linear relationship is given by the equation for a straight line:

Y = a + bX

■■

Y is the criterion variable. When we use the equation to predict values of Y from observed values of X, we can use the symbol Y′ (pronounced Y prime).

■■

X is the predictor variable.

■■

a is the intercept; it is the value of Y′ when X = 0; when the regression line is added to the scatterplot, a is the value of Y at the point where the line intercepts the Y-axis (if the X-axis starts at 0). Thus, in our example, a is the CFF in Hz for someone who is aged 0 years. For purposes of prediction, you should not try to extrapolate much beyond the range of values you measured. Nonetheless, for purposes of the equation, a is the value of Y′ when X = 0. Remember that, as default, SPSS applies axis scales around the values occurring in the data as in the example above, so scatterplot axes may not include 0. In such cases, a cannot be read from the graph; however, SPSS will provide the value of a, and also of b, as we show below.

177

CHAPTER 6

Tests of correlation and bivariate regression

CHAPTER 6

178

SPSS for Psychologists

■■

b is the slope of the line, known as the ‘regression coefficient’ (or regression weight); it is the value by which Y′ will change when X is changed by one unit. So, it is the difference in predicted CFF between two people who differ in age by one year. We return to regression coefficients in Chapter 10, for multiple regression.

The regression process involves finding a solution to the equation (that is, identifying values of a and b) for which the residuals are at a minimum, as described next.

Residuals

A residual is the difference between the observed value of Y for a participant and the value predicted for them by the regression equation (Y′). This section of the scatterplot illustrates that.

This participant’s observed CFF, their Y value, was 42.2 Hz.

This vertical line indicates the difference between Y and Y′– the residual.

This line indicates their predicted CFF, their Y′. We will calculate that value below.

If the two variables are perfectly correlated, then all the points will fall exactly along the straight line and the residuals will all equal zero. It is unlikely, however, that two variables measured in psychological research will be perfectly correlated; normally, there will be a difference between most Y values and their Y′. The difference, Y – Y′, is considered error, and known as the ‘residual’ for each case. Note that residuals can be negative or positive, so they are usually squared when used (see next paragraph).

There is error for a number of reasons. There is always error in measurement; this may be due to error in the scale (for example, questionnaires are unlikely ever to be perfect measures of a construct), but it is also due to extraneous individual or situational variables. In addition, the criterion variable is likely to be affected not just by the predictor variable but also by variables we have not measured.

The best solution to the regression equation will involve values of a and b that minimise residuals – that is, the predicted values are as close as possible to the observed values, on average. The least squares criterion is most commonly used to find the best solution; in this criterion Σ(Y – Y′)² (the sum of the squared differences between observed and predicted scores) is at a minimum.
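To make the least squares idea concrete, the sketch below (in Python, with a few made-up X and Y values used purely for illustration, not the age and CFF data) fits a line using numpy and shows that the sum of squared residuals around the fitted line is smaller than the sum of squared differences around the mean, which is the comparison illustrated in Figure 6.6.

```python
# Sketch: least squares fitting and residuals, using hypothetical data.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])    # hypothetical predictor (X) values
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.8])    # hypothetical criterion (Y) values

b, a = np.polyfit(x, y, deg=1)   # slope (b) and intercept (a) by least squares
y_pred = a + b * x               # predicted values, Y'
residuals = y - y_pred           # residuals, Y - Y'

ss_residuals = np.sum(residuals ** 2)          # sum of squared residuals around the line
ss_around_mean = np.sum((y - y.mean()) ** 2)   # error if the mean were used as the model

print(f"a (intercept) = {a:.2f}, b (slope) = {b:.2f}")
print(f"Sum of squared residuals around the regression line: {ss_residuals:.3f}")
print(f"Sum of squares around the mean:                      {ss_around_mean:.3f}")
# The least squares solution makes the first of these sums as small as possible,
# and it can never be larger than the sum of squares around the mean.
```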

Proportion of variance explained

In regression, we wish to explain the degree of variation or dispersion in our data, usually referred to as the variance; we can ask: ‘How much of the variance in the criterion variable is explained, or accounted for, by the predictor variable?’ For bivariate relationships, r2 gives the proportion of variance explained as described in Section 3. We will return to the concept of variance explained when describing ANOVA in Chapter 8, Section 1, and for multiple regression in Chapter 10.

How to perform bivariate regression in SPSS

For this purpose, we will continue to use the age and CFF data. Click on Analyze, Regression, Curve Estimation. You will be presented with the Curve Estimation dialogue box (shown below). You now select Y, the criterion (dependent) variable, and X, the predictor (independent) variable. SPSS allows you to add more than one Y variable, but we currently wish to predict a single Y (‘cff’) from the X (‘age’).

Select the criterion (or dependent) variable and click here to move it into the Dependent box.

Select the predictor (or independent) variable and move it into Variable in the Independent area.

You can enter two or more Y variables into the Dependent box, and then SPSS will carry out the bivariate regression procedure separately for each Y with the X. If you do have more than one Y, to obtain all the output for all the Ys, you must select Display ANOVA table in the dialogue box.


We leave the other settings in the Curve Estimation dialogue box as they are. In the Models area of the dialogue box, the default is Linear, which applies a bivariate linear regression model and is appropriate for the example we are using. In the future, if you have data with nonlinear relationships, you could explore the other types of curve. Next click on the Save button (top right) to obtain the Curve Estimation: Save dialogue box (shown below). SPSS calculates certain values for each case, and the Save command allows you to save these to your data file. We have selected Predicted values and Residuals.

Select Predicted values and Residuals to add these to your data file.

Selecting Prediction intervals in the Curve Estimation: Save dialogue box will give you two other variables, the upper and lower values for confidence intervals of the predicted values. If you select this, the % Confidence interval box will no longer be greyed out, and you can change from 95% to 90% or 99% if you wish. See Chapter 5 for information about why you might change the confidence interval.

Click on Continue to return to the Curve Estimation dialogue box, then click on OK. You will be reminded that you have asked to add variables to your data file – if you are sure all is well, then click OK. (If you inadvertently added unwanted variables to your data file, you can delete them or just not save the amended file; SPSS does not automatically save the data file.) The Output window will contain information and a graph (described below). The new variables will be in the Data file (as shown below), and described in the annotations.

The new variable ‘FIT_1’ holds the predicted values, Y’. Thus, for participant 1, who is 41 years old, CFF is predicted from the regression equation to be 32.5 Hz.

The second new variable ‘ERR_1’ holds the residual (Y, the observed value, minus Y’, the predicted value). See text below for explanation.

If you selected Prediction intervals in the Curve Estimation: Save dialogue box, the upper and lower values for confidence intervals of the predicted values will also be added to the data file. We show these on the last page of this chapter.

The residual is equal to Y (the observed value) minus Y′ (the predicted value). It is sometimes called the ‘error’ (indicated by the SPSS variable name ‘ERR_1’) as it can be considered the amount of error in the prediction. The residual values can be interesting, as we will explain now, but they are not necessarily useful for our purposes. In this example, the residual ‘ERR_1’ = ‘cff’ – ‘FIT_1’. For participant 1, the predicted value is 2.4 Hz less than their observed value of 34.9.

If you scan the ‘ERR_1’ column, you will see that the absolute value (that is, ignoring whether they are negative or positive) of the residuals varies from effectively 0 to nearly 6. The mean of the absolute residuals is indicative of the strength of the relationship between the two variables, but residuals are unstandardised. That is, they are on the same scale as the original data and not standardised (z-scores are an example of a standardised measure, as described in Chapter 3). Thus, residuals can be difficult to interpret because you must consider the scale on which the variable is measured.

As mentioned above, you can enter more than one Y in the Curve Estimation dialogue box, in order to carry out separate bivariate regressions of X with each Y. For example, in addition to CFF, you may have a memory score for the same participants. If you do enter two Ys, then the new variables for the second Y will be called ‘FIT_2’ and ‘ERR_2’. In addition to the new variables in the data file, SPSS will also give output with information about the bivariate regression, and the output obtained for the bivariate regression of CFF with age is shown below.


SPSS output
Obtained using: Analyze, Regression, Curve Estimation

Curve Fit

This table gives information about the model you requested.

The Variable Processing Summary table shows information about cases for each variable.

This is r2, the square of r, which we have seen before. SPSS adds r2 to the scatterplot when we request a regression line (see Chapter 6, Section 2).

Parameter Estimates give values from the regression equation for these data: the constant is a, the intercept; b1 is b, the slope. See text on next page.

Whether r2 is significantly different from 0 is tested using F. This is rarely used for bivariate regression, as we simply use the correlation. We explain the use of F in Chapter 10.

The scatterplot is very similar to that shown at the start of this section, which was produced using the Graphs menu. Note that Curve Estimation automatically includes the regression line; we don’t request it separately. The main difference in appearance is that the Y-axis label (Critical flicker frequency) is in the middle at the top, which could be confusing. By double-clicking on the graph, it can be edited in Chart Editor (see Chapter 6, Section 2). In Chart Editor, select the label and then change the format to left justified, so that the label begins above the Y-axis.

The Parameter Estimates section of the Model Summary and Parameter Estimates table (on the previous page) shows:

1. The constant or a, the intercept. Thus, when the modelling process was applied to the values in the data set, the CFF value for someone who is 0 years old was estimated to be 44.21 Hz.

2. b1 or b, the slope or the regression coefficient. For these data b is negative; thus, if age increases by one year, the CFF reduces by 0.285 Hz. More usefully for a report, we could instead say that when people are 10 years older, their CFF would be expected to have reduced by 2.85 Hz. This is, of course, the typical reduction predicted from these data.

Thus, the straight line equation for these data is: CFF = 44.21 + (−.285 × age), or CFF = 44.21 − .285 × age
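Once you have the values of a and b, prediction is a matter of substituting an age into the equation. The sketch below does this in Python for participant 1 (age 41, observed CFF 34.9 Hz) and for a new case aged 32, using the rounded coefficients above; small differences from the FIT_1 and ERR_1 values that SPSS saves are simply due to rounding.

```python
# Sketch: using the fitted equation CFF = 44.21 - 0.285 * age to predict CFF
# and calculate a residual (coefficients rounded from the SPSS output).

a = 44.21    # intercept (constant) from the Parameter Estimates table
b = -0.285   # slope (regression coefficient) from the Parameter Estimates table

def predict_cff(age):
    """Predicted CFF (Y') for a given age, from the straight line equation."""
    return a + b * age

age_p1, observed_cff_p1 = 41, 34.9            # participant 1's values in the data file
predicted_p1 = predict_cff(age_p1)            # roughly 32.5 Hz, as in FIT_1
residual_p1 = observed_cff_p1 - predicted_p1  # roughly 2.4 Hz, as in ERR_1

print(f"Participant 1: predicted CFF = {predicted_p1:.1f} Hz, "
      f"residual = {residual_p1:.1f} Hz")
print(f"New case aged 32: predicted CFF = {predict_cff(32):.1f} Hz")
```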


Using the procedure to predict Y for new cases

If you wish to predict Y for new cases without using the equation yourself, simply enter the new case/s into the data file and run the procedure again. As an example, we added two hypothetical new cases, with ‘age’ data only, to the data file and ran the procedure again. In this run, we also selected Prediction intervals to obtain 95% confidence intervals. The section of the data file with these new cases is shown below. An inspection of the output viewer will show you that the two new cases, without observed values for the Y variable, affect the excluded cases and missing values, but not the other output.

The new case aged 32 years has predicted CFF (shown in ‘FIT_1’) of 35.1 Hz, while the case aged 62 years has predicted CFF of 26.6 Hz.

The last two columns, ‘LCL_1’ and ‘UCL_1’, give the lower and upper limits for the confidence interval requested (95% in this example). For case 20, aged 49, the limits are 23.9 and 36.6. These values can be used as a guide to how serious a difference between an observed value and the predicted value is.

Remember that for the purposes of prediction, you should not try to extrapolate much beyond the range of values that you measured. The two new cases we have added are within the age range of the participants in the study.

Summary

■■

This chapter introduced you to statistical tests of correlation that will tell you if there is a significant relationship between two variables, and provide you with information on the strength and nature of this relationship.

■■

Your choice of test of correlation will depend on whether the data are parametric and whether the relationship between the variables is linear.

■■

You should first obtain a scatterplot to observe any general trend in your data and to see if any relationship is linear.

■■

You can add a regression line to the scatterplot.

■■

The value of the correlation coefficient indicates the strength of the correlation, and is a measure of effect size.

■■

The sign of the correlation coefficient is important as it indicates whether the relationship is positive or negative.

■■

It is not possible to infer a causal relationship from a correlation.

■■

Partial correlations were introduced. These allow you to test the strength and direction of a relationship between two variables whilst controlling for the effect of one or more other variables.

■■

We also introduced how to compare the strength of two correlation coefficients.

■■

Comparing the strength of two correlation coefficients can be useful if there are grounds to hypothesise that the correlation between two variables will differ for two groups (such as men and women).

■■

When comparing correlations, you should check whether the groups differ on variables other than the grouping variable (gender, in our example). If they do differ on other variables, you will not know if any difference between the correlations is due to gender or those other variables.

■■

We also introduced bivariate regression, which allows us to predict one variable (the criterion) from another (the predictor).

■■

In Chapter 10, we discuss multiple regression, which allows use of more than one predictor variable, in the same model, to predict the criterion variable.

■■

For guidance on incorporating SPSS output into a report, or on printing the output, see Chapter 14.


CHAPTER 7

Tests for nominal data

In this chapter
●● Nominal data and dichotomous variables
●● Chi-square test versus the chi-square distribution
●● The goodness of fit chi-square
●● The multidimensional chi-square
●● The McNemar test for repeated measures

SPSS for Psychologists online
Visit macmillanihe.com/harrison-spss-7e for data sets, online tutorials and exercises.

Section 1: NOMINAL DATA AND DICHOTOMOUS VARIABLES


■■

Nominal data, also referred to as ‘categorical data’, are data measured using scales that categorise the observations or responses, for example being male or female.

■■

For convenience, each category is allocated a number when entering data into SPSS. These are numbers that cannot be put into any meaningful order; if they could, they would be ordinal data.

■■

With nominal data, the numbers only represent the category to which the participant belongs. That is why nominal data are sometimes called ‘qualitative data’, and, by contrast, the ordinal, interval and ratio levels of measurement are called ‘quantitative data’. The use of these terms in this way is different from the use in qualitative and quantitative research. Quantitative research can include all levels of measurement, including nominal.

■■

In experimental studies the independent variable is a nominal variable, while the dependent variable is often interval or ratio (although it may sometimes be ordinal). However, when both of the variables we are interested in are nominal, we need to use a different type of test devised specifically for nominal data.

■■

Some nominal variables can only have two values – for example, whether or not someone has ever had a head injury. These are known as ‘dichotomous variables’.

■■

A dichotomous variable is a nominal variable that can only take one of two values. For example, if you classify smoking status as smoker or non-smoker, then someone who smokes only very occasionally would be classified as a smoker, whereas an ex-chain smoker would be classified as a non-smoker.

■■

Some nominal variables can have more than two values. For example, if you recorded the smoking status of your participants, you could use three categories: smoker, never smoked and ex-smoker. If you collected data from an online survey and recorded nationality, you might have a great many different categories among your participants.


Descriptives for nominal data

It is important to consider which summary descriptive statistics are appropriate for use with nominal data. If you have recorded your participants’ sex, then calculating values such as the mean sex or median sex is meaningless. The only summary descriptives suitable for nominal data are counts, or frequencies, and percentages. Thus, we could say that, of 20 participants, 15 (75%) are female and 5 (25%) are male. We can display those counts and percentages in a table, and can illustrate them using a bar chart. We show both of these options in this chapter. Note that histograms should be used for displaying data of at least ordinal level of measurement, and not for nominal data.

Entering nominal data into SPSS

Data entry for nominal variables is easy. For any variable measured on a nominal scale, make sure that you assign value labels to help you remember what the different values mean (see Chapter 2), then simply enter the number chosen to represent the category. As with most data in psychology, the rows in the data file represent participants, and the columns are our variables. For example, we might have a column for the variable ‘Sex’, in which we enter a value of either 1 or 2 (male or female), and a column for Smoking status, which is coded as 1 for ‘smoker’, 2 for ‘never smoked’ or 3 for ‘ex-smoker’.


Section 2: CHI-SQUARE TEST VERSUS THE CHI-SQUARE DISTRIBUTION

■■

The chi-square test is one of the tests specifically designed to handle nominal data. There are two different forms of the chi-square test: the goodness of fit chi-square test and the multidimensional chi-square test.

■■

The goodness of fit chi-square is used to test whether an observed pattern of events differs significantly from what would be expected by chance alone.

■■

The multidimensional chi-square test can be used either as a test of association or as a test of difference between nominal variables.

The chi-square test makes use of the chi-square distribution to test for significance. The distinction between ‘test’ and ‘distribution’ is explained by Howell (2013). In this book we only cover the chi-square inferential test, and not the chi-square distribution. However, it is worth noting that, for some other statistical tests, including some for data with a level of measurement other than nominal, a chi-square distribution is used to test for significance; for example, the Kruskal–Wallis and Friedman tests covered in Chapter 8, Sections 2 and 3, respectively. Also note that the chi-square distribution is not used to test for significance in all inferential tests that are applied to nominal data. For example, for the McNemar test (Section 5), SPSS uses the binomial distribution.

Section 3: THE GOODNESS OF FIT CHI-SQUARE

In this type of chi-square test we are testing whether the observed pattern of events (or scores) differs significantly from what we might have expected to occur by chance. For example, we might ask whether a group of smokers choose brand A cigarettes more often than brand B. Here, we are effectively asking the question: ‘Do significantly more than 50% of our smokers choose one brand over the other brand?’

In practice, this form of the chi-square test is not often used in psychology. This example of cigarette brands relates to one of the few times the authors have ever used this form of the test. An undergraduate student undertook a project examining the effect of cigarette advertising on cigarette choice. As part of this project, she listed a series of personality characteristics that were implied by cigarette adverts. For example, some cigarette advertisements might imply a sophisticated personality. These personality statements were then presented to smokers who were asked to indicate to which of five brands of cigarettes they thought the statement best applied. The responses for each statement were analysed using the chi-square goodness of fit test to compare the observed distribution against that predicted by the null hypothesis (that the five brands would be equally often selected). This is an interesting, but rare, example of the use of this form of the chi-square test in psychology. Much more common is the second form of this test, described in the next section, which allows us to consider whether two variables are independent of one another (or conversely, whether they are associated).

To perform the goodness of fit chi-square test

Note that this type of chi-square test is accessed via the chi-square command that can be found under Nonparametric tests in the Analyze menu. However, as this form of the test is used infrequently in psychology, we will not be demonstrating it here.

A common error for novice users is to select the wrong form of the chi-square test. Make sure you know which version you want and select appropriately.

Section 4: THE MULTIDIMENSIONAL CHI-SQUARE

The multidimensional chi-square test can be thought of in two ways: as a test of association or as a test of difference between independent groups. It can be thought of as a test of association because it allows us to test whether two variables are associated or whether they are independent of each other. For example, let’s modify our cigarette example so that 50 smokers and 50 non-smokers are asked to choose which of two cigarette adverts they preferred. The multidimensional chi-square test would allow us to ask the question: ‘Is the pattern of brand choice independent of whether the participant was a smoker or not?’ Another example would be to determine whether receiving or not receiving a particular medical treatment was associated with living or dying. Yet another might be to see whether a person’s sex was independent of their choice of favourite colour.

In psychology, we often need to test whether nominal variables are, or are not, independent of each other. The experimental hypothesis would be that the two variables are not independent of each other; for example, we could hypothesise that people receiving a particular treatment are less likely to die than those not receiving the treatment. Note that another way of phrasing that last hypothesis is that there will be differences between the number of people who die under each treatment condition. Thus, the multidimensional chi-square can be thought of as both a test of association, and a test of difference. Whichever way you think of it, the type of data and the way chi-square assesses those data are the same.

General issues for chi-square

Causal relationships

Remember, even if you find there is a significant association between your nominal variables, you cannot infer a causal relationship between them unless you have directly manipulated one of them. For example, unless you randomly allocated patients to either receive the treatment or not, then you cannot claim that the treatment reduces the rate of death. It may be that the hospital only offered the treatment to the patients who were strongest, and it may be this which gives rise to the results


you see. The only conclusion we can draw is that there is an association between the treatment a patient received and the outcome. If we wanted to test whether there was a causal relationship, we would need to undertake a randomised control trial in which patients were randomly allocated to treatment and the outcome assessed. It’s important to remember this when describing the results of a chi-square test.

Type of data

In order to use chi-square, our data must satisfy the following criteria:

1. The variables must be measured on a nominal level of measurement, giving frequency data. In other words, our data must be able to tell us the number of times some event has occurred. We can, of course, convert other types of data into nominal data. For example, suppose we have recorded participants’ annual income – we could recode these data, scoring each participant as either ‘High income’ or ‘Low income’ (above or below the median income for the group). We could then count to give frequency data – the number of high- and low-income participants we have observed. (See Chapter 4, Section 5 for information on how to recode in this way.)

2. For multidimensional chi-square, we must have collected data of this sort on at least two variables. For example, in addition to the high-low-income data above, we might also know whether each of these participants is a smoker or not.

3. The categorisations of each of the variables must be mutually exclusive. In other words, each participant must be either a smoker or a non-smoker, and either high or low income. In other words, each participant must fall into one and only one of the cells of the contingency table (see below).

4. Every observation must be independent of every other observation. This will not be the case if you have more than one observation per participant. (Nominal data from a repeated measures design can be analysed by means of the McNemar test, see Section 5.)

The N*N contingency table

When we have nominal data of this form, we can best display the frequencies or counts in a contingency table. If we have two variables, each with two levels (as in the example above), then we draw what is called a 2*2 (pronounced ‘two-by-two’) contingency table. So, if we had 100 participants in our example data set, the contingency table might look like Table 7.1.

Table 7.1  An example of a 2*2 contingency table

                  High income    Low income    Row totals
Smokers                10             20            30
Non-smokers            35             35            70
Column totals          45             55           100 (Grand total)

The numbers in this table represent the numbers of participants who fall into each cell of the table (and remember that each participant can be in only one cell). So we can see that, of the 30 smokers in our study, 10 are high income and 20 are low income. Similarly, we can see that, of the low-income group, 20 are smokers and 35 are non-smokers. Thus, the contingency table is a useful summary of the data.

Rationale for chi-square test

If there was no association between smoking and income, then we would expect the proportion of smokers in the high-income group to be the same as the proportion in the total sample. That is, we would expect 45/100 or 45% of the smokers to be high income. As there were 30 smokers in total, we would expect (45% of 30) = 13.5 of the smokers to be in the high-income group. In this way, we can work out the expected frequencies for each cell. The general formula is:

expected frequency = (row total × column total) / grand total

The chi-square test calculates the expected frequency for each cell and then compares the expected frequencies with the observed frequencies. If the observed and expected frequencies are significantly different, it would appear that the distribution of observations across the cells is not random. We can, therefore, conclude that there is a significant association between the two variables: Income and smoking behaviour are not independent for our (fictitious) sample of participants. Chi-square will actually allow us to calculate whether more than two variables are independent of each other. However, as we add variables, it fast becomes very difficult to interpret the results of such an analysis, so we would recommend that you resist the temptation to add extra variables unless you are sure you know what you are doing. It is, however, perfectly reasonable to have more than two categories of each variable – for example a 3*3 chi-square is quite acceptable.
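If you want to see the expected frequencies and the chi-square calculation worked through outside SPSS, the sketch below does this in Python for the fictitious smoking and income data in Table 7.1. The first part applies the row total × column total / grand total formula; the scipy call is only a cross-check, not the SPSS Crosstabs procedure described later in this chapter.

```python
# Sketch: expected frequencies and chi-square for the 2*2 table in Table 7.1.
import numpy as np
from scipy.stats import chi2_contingency

# Observed counts: rows = smokers / non-smokers, columns = high / low income
observed = np.array([[10, 20],
                     [35, 35]])

row_totals = observed.sum(axis=1)   # [30, 70]
col_totals = observed.sum(axis=0)   # [45, 55]
grand_total = observed.sum()        # 100

# Expected frequency for each cell: (row total * column total) / grand total
expected = np.outer(row_totals, col_totals) / grand_total
print(expected)   # the top-left cell is 30 * 45 / 100 = 13.5 expected high-income smokers

# Cross-check with scipy; correction=False gives the uncorrected Pearson chi-square
chi2, p, df, expected_again = chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.2f}, df = {df}, p = {p:.3f}")
```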

Example study: investigating tendency towards anorexia

To illustrate the use of chi-square, we will use some fictitious data based on research conducted by one of our past students. Eighty young women completed an eating questionnaire, which allowed them to be classified as either high or low anorexia (participants with high scores are at greater risk of developing anorexia). In addition, the questionnaire asked for the employment status of the women’s mother (full-time, part-time or unemployed) and their cultural background (Caucasian, Asian or other) and type of school they attended (private or state). Previous research has suggested that the incidence of anorexia is higher among girls attending private schools than state schools, and higher among girls whose mothers are not in full-time employment. In addition, the incidence seems to be higher in Caucasian girls than non-Caucasian girls. We therefore hypothesised that there would be an association between these factors and the classification on the eating questionnaire. To test this hypothesis, we conducted a series of chi-square analyses. (These data are available in the Appendix or from macmillanihe.com/harrison-spss-7e.)


To perform the multidimensional chi-square test

The multidimensional chi-square is accessed under the Crosstabs command. Crosstabs draws up contingency tables, and chi-square is an optional inferential statistic within this command.

1. Click on Analyze.

2. Select Descriptive Statistics.

3. Select Crosstabs.

NB: Do NOT use the chi-square command available under Nonparametric Tests. That is for the goodness of fit chi-square.

4. Select the variable you want to form the rows of the contingency table and move it here.

5. And put the variable for the columns of the table here.

6. Click here for useful chart options.

7. Click on the Statistics button.

8. Select the Chi-square option.

9. These options provide measures of the strength of the association, or effect size.

10. Click Continue.


11. Click Cells.

12. These options control the information included in the cells of the contingency table. Select Observed and Expected.

13. And Row, Column and Total.

14. Click Continue, then OK.


Two sets of annotated output are given below. The first is from the 2*3 chi-square exploring the association between the tendency towards anorexia and mother’s employment status (the one shown above in the Crosstabs dialogue box). The second is from the 2*2 chi-square exploring the association between tendency towards anorexia and type of school attended.

SPSS output for chi-square without using Exact option
Obtained using menu items: Descriptive Statistics > Crosstabs
Output for first chi-square: tendency towards anorexia * employment (a variable with two levels against a variable with three levels)

CROSSTABS

This first table tells us which variables have been entered into the analysis.

This is our row variable. It has two levels (high; low).

And this is the column variable. It has three levels (f/t; none; p/t) so there is a column for each level.

If you selected the options we suggested, then each cell of the table will contain five values. These are explained in the text below.


Within each cell of the contingency table you will see five values reported:

Count: This is the number of cases falling into the cell. For example, in the top left cell this is the number of cases who scored high on the anorexia scale and whose mother is in full-time employment.

Expected Count: This is the number we would expect to see in this cell if there was no association between the two variables.

% within tendency toward anorexia: This is the number of cases in the cell expressed as a percentage of the row total. For example, 36.8% of girls who score high on the anorexia scale have a parent in full-time employment.

% within type of employment: This is the number of cases in the cell expressed as a percentage of the column total. For example, 45.2% of the cases with a parent in full-time employment scored high on anorexia.

% of Total: This is the cases in this cell as a percentage of the total number of cases.

This table reports the results of the chi-square test. If either of your variables had more than 2 levels (as here), then SPSS reports three results.

Pearson’s chi-square is used most often, so you will probably report this top row.

If you used the Exact option, available on the Crosstabs dialogue box, the table above would have three extra columns: we will describe these later in this section.

This chart is produced by selecting Display clustered bar charts in the Crosstabs dialogue box. It might be useful to help illustrate your results.


Reporting the results


In a report you might write:

There was no relationship between tendency towards anorexia and the employment status of the mother: χ2(2, N = 80) = 0.29, p = .862.

We will discuss reporting the outcome of chi-square in more detail, but first we explain the output for the second chi-square, which explores the association between tendency towards anorexia and type of school attended.

Output for second chi-square: tendency towards anorexia * education (two variables each with two levels)

Each row of this table gives information about each level of the column variable; e.g. this row gives information for girls in the 'high' group. It shows the figures for those in a state school separately from those attending a private school. The total is for all girls in the 'high' group. We can see that, of the total of 38 'high' girls, 4 were in a state school compared with 34 in a private school.

Within each cell, we are given:

Count = the observed frequency. The number of participants falling into the cell; i.e. the number of girls who are 'low' and attended a state school.

Expected Count: the number expected for this cell assuming no association (see text).

% within tendency toward anorexia: the cases in this cell as a % of the row total; i.e. the % of 'low' girls who attend a state school.

% within type of school: the cases in this cell as a % of the column total; i.e. the % of girls who attend a state school who are 'low'.

% of Total: the cases in this cell as a % of the total number of participants.

The columns give information about each level of the row variable. This column shows figures for girls in private schools. Figures are given separately for those in the 'high' and 'low' groups and for the total for all girls in private education. Of the total of 47 girls in private education, 34 of them were 'high' and 13 of them were 'low'.


If each variable has only two levels (a 2*2 chi-square), SPSS reports five tests in the Chi-Square Tests table. Continuity correction is the Yates's corrected chi-square (see text below).

You can ignore Fisher’s Exact Test, unless any cells have an expected count of less than five (see text below).

If you used the Exact option, available on the Crosstabs dialogue box, the table above would have one extra column: we will describe it later in this section.

Interpreting and reporting results from chi-square

SPSS reports several different measures of p. It is probably best to use Pearson's (the chi-square test was developed by Karl Pearson). Note that, for chi-square, the value of df (degrees of freedom) is not related to the number of participants. It is the number of levels in each variable minus one, multiplied together. So, for a 2*2 chi-square, df = (2–1)(2–1), which equals 1. When reporting the outcome, we also provide the value of N – the number of valid cases in our contingency table.

For a 2*2 table, SPSS also calculates the result with and without the continuity correction, or Yates's correction. This is a statistical correction used in analyses with relatively few participants or when you have reason to believe that your sample is not a very good approximation to the total population. There is disagreement about whether to use it, but the Exact test, described below, can be used instead for small samples.

It is important to understand that the chi-square result on its own cannot tell you about the pattern of your results. For that, you have to look at the contingency table. For example, when reporting the results of the second chi-square (above), you might write: 'Only a minority (12%) of those girls attending a state school scored high on the scale, whereas the majority (72%) attending a private school scored high on the scale.'

If you made a specific one-tailed prediction about the direction of the relationship between the two variables (here we predicted that there would be a higher tendency towards anorexia in the private school pupils) and the pattern of results revealed by the contingency table is compatible with this prediction (as here), then you can use the chi-square results to assess whether this particular association is significant.
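SPSS does all of this for you, but the same chi-square is easy to cross-check outside SPSS. The short Python sketch below (SciPy assumed; not part of the SPSS workflow described in this book) uses the observed counts for the school analysis, which are set out again in Table 7.2 later in this chapter:

import numpy as np
from scipy.stats import chi2_contingency

# rows: private, state; columns: high anorexia, low anorexia
observed = np.array([[34, 13],
                     [ 4, 29]])

# Pearson's chi-square (no correction) - the top row of SPSS's Chi-Square Tests table
chi2, p, df, expected = chi2_contingency(observed, correction=False)
print(chi2, df, p)                  # approx. 28.19 with df = 1, p < .001

# With Yates's continuity correction - SPSS's 'Continuity Correction' row
chi2_yates, p_yates, _, _ = chi2_contingency(observed, correction=True)
print(chi2_yates, p_yates)

# Expected counts under no association: row total * column total / N
print(expected)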

The strength of the association between the two variables can also be obtained in the Crosstabs: Statistics dialogue box by selecting Phi and Cramer’s V. The Symmetric Measures table will appear after the Chi-Square Tests table.

The value of Phi indicates the magnitude of the association. It can be considered equivalent to Pearson’s r (see Chapter 6, Section 3).

Just as we can square r to give an estimate of the proportion of variation that is common to the two variables, so we can square Phi. For these data Phi = –.594. Squaring this gives us a value of .35, or 35%. This tells us that 35% of the variation in the tendency towards anorexia score is accounted for by the type of school attended. We must remember that, because none of the variables in this study was manipulated, a significant chi-square result does not imply a causal relationship between the variables.
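For a 2*2 table, Phi can be recovered directly from the chi-square statistic as the square root of χ2/N, so the values quoted above are easy to verify with a couple of lines of Python:

import math

chi2, n = 28.19, 80
phi = math.sqrt(chi2 / n)                  # approx. 0.594 (SPSS also reports the sign of the association)
print(round(phi, 3), round(phi ** 2, 2))   # 0.594 and 0.35, i.e. about 35% shared variation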



Reporting the results

In a report you might write: The relationship between tendency towards anorexia and the type of school attended was significant: χ2(1, N = 80) = 28.19, p < .001. The association was of moderate strength: Phi = –.59, and the type of school attended accounted for 35% of the variance in the score on the tendency towards anorexia scale.

You could also include a table of counts and expected frequencies, bar charts and other information as appropriate.

Use of exact tests in chi-square

Look at the chi-square tables in the preceding pages. There is a note at the bottom of each, informing you of the number of cells with expected frequencies (what SPSS calls expected counts) of less than 5. It is important that you always check this note. For both chi-square analyses above, there are no cells with this problem. However, if you do undertake a chi-square analysis and SPSS reports that one or more cells has an expected frequency of less than 5, then you must take some action.

If you are doing a 2*2 chi-square, you can make use of Fisher's Exact test, which SPSS reports. This test can be used when cells have low expected frequencies (Siegel and Castellan, 1988, 103–11). However, this test is only available for 2*2 tables. If you are performing something other than a 2*2 chi-square and encounter this problem, then you can use the Exact option.
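For reference, Fisher's Exact Test can also be run outside SPSS. A minimal Python sketch with SciPy, using made-up counts for a hypothetical small sample (not the data analysed above):

from scipy.stats import fisher_exact

table = [[8, 2],
         [3, 7]]
odds_ratio, p = fisher_exact(table, alternative='two-sided')
print(odds_ratio, p)    # the two-sided exact p value for this 2*2 table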


To demonstrate this for you, we have undertaken a further chi-square analysis, exploring a possible association between cultural background and tendency towards anorexia. In the third SPSS output (see below), two cells have an expected frequency of less than 5.

Output for third chi-square: tendency towards anorexia * cultural background (a variable with two levels against a variable with three levels)

A chi-square is not valid if any of the expected frequencies are less than 5. In this case two cells have this characteristic. If SPSS prints a message like this you should consider using the Exact option (see text below).

As this was a 2*3 chi-square, there is no Fisher’s Exact test. Instead, we use the Exact button on the Crosstabs dialogue box.

Using the Exact option for chi-square

If you click on the Exact button in the Crosstabs dialogue box, then the Exact Tests dialogue box will appear (see below).

Click on the Exact button in the Crosstabs dialogue box.

Asymptotic only is the default setting which we used to produce the output shown above.

Click here to select Exact tests. This will generate the output shown below.

Output for third chi-square (2*3) with Exact option

These three extra columns are printed when you select the Exact option. You can now report Pearson's chi-square, as we show next.

Reporting the results

In a report you might write:

Analysis showed that two cells had an expected frequency of less than 5, so an exact significance test was selected for Pearson's chi-square. There was a relationship between tendency towards anorexia and cultural background: χ2(2, N = 80) = 7.87, exact p = .015.


Output for a 2*2 chi-square with Exact option

These two columns always appear for a 2*2 chi-square. However, without the Exact option the only p-value reported was for Fisher's Exact Test. With the Exact option, p-values for other tests, including Pearson's chi-square, have been calculated and are reported here.

Performing a chi-square using the Weight Cases option

Imagine that you have an eager research assistant who has collected the data for you and diligently calculated the observed frequencies and set these out in a contingency table. Instead of entering each data point into SPSS to perform the chi-square test, you can instead enter just these observed frequencies and make use of the Weight Cases option. This is illustrated below. Table 7.2 gives the observed frequencies for the data we analysed previously in this chapter. The 80 young women who completed an eating questionnaire were classified as either high or low on the anorexia scale, and the type of school they attended was recorded as private or state.

Table 7.2  The observed frequencies

            High anorexia    Low anorexia
Private     34               13
State       4                29

In order to enter these frequencies, you first need to set up a data file as shown below, creating three variables.

The variable 'School' has two levels (1 = Private; 2 = State).

The variable 'AnorexiaScale' has two levels (0 = Low; 1 = High).

In Data View, enter the appropriate values for ‘school’ and ‘anorexia’, along with the corresponding observed frequency, as shown here.


Select Data.

Select Weight Cases.

Select the Weight cases by option and move the weighting variable ('Frequency') into this box.

Although no output is produced, and there is no change to the data table, you can now undertake the chi-square analysis as before. Click on Analyze ⇒ Descriptive Statistics ⇒ Crosstabs to bring up the Crosstabs dialogue box (shown below). Move the variables 'anorexia' and 'school' into the Row(s) and Column(s) boxes. Click on the Statistics button and the Cells button, and set the options as before.

Click OK.

The output produced in this way will be identical to that shown earlier in the chapter. This is a quick and easy way to enter frequency data so it can be analysed using chi-square.
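The same shortcut works outside SPSS: a table of observed frequencies can be analysed directly, or expanded back into one row per case, and both routes give the same chi-square. A Python sketch (pandas and SciPy assumed; the labels mirror the SPSS file described above):

import pandas as pd
from scipy.stats import chi2_contingency

freq = pd.DataFrame({'school':    ['Private', 'Private', 'State', 'State'],
                     'anorexia':  ['High',    'Low',     'High',  'Low'],
                     'frequency': [34,         13,        4,       29]})

# (a) chi-square straight from the frequency table
table = freq.pivot(index='school', columns='anorexia', values='frequency')
chi2_a, _, _, _ = chi2_contingency(table, correction=False)

# (b) expand the frequencies into one row per participant (what the raw data file looks like)
raw = freq.loc[freq.index.repeat(freq['frequency']), ['school', 'anorexia']]
chi2_b, _, _, _ = chi2_contingency(pd.crosstab(raw['school'], raw['anorexia']), correction=False)

print(chi2_a, chi2_b)   # identical values, just as with the Weight Cases route in SPSS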

Section 5: THE MCNEMAR TEST FOR REPEATED MEASURES

■■ The McNemar test is used to analyse data obtained by measuring a dichotomous variable for related designs. The difference between this and the 2*2 chi-square test is that the chi-square test is for independent groups designs.

■■ Another difference is that, while the chi-square test can be used to test for an association between two variables, the McNemar test can only be used as a test of difference.

■■ Remember that a dichotomous variable, by definition, can only take one of two values (e.g. yes or no). Thus, the McNemar test is for a situation where you measure the same thing twice; for example, a 'before treatment' yes/no response, and an 'after treatment' yes/no response.


■■ If you measure more than two values (e.g. yes/uncertain/no), SPSS will automatically apply the McNemar–Bowker test instead. We will not cover that test here.

Example study: gender and handwriting

To illustrate the use of the McNemar test, we will describe an experiment we carried out with students. It has been found that roughly two-thirds of handwriting samples can be correctly categorised as being written by a man or a woman. This is significantly above the chance level performance of 50% correct. A possible implication is that some (but not all) men and women tend to write in a gender-stereotyped manner. Very briefly, male handwriting tends to be irregular and untidy, whereas female handwriting is more rounded and neat. If this is the case, do people have a choice in their writing style? Hartley (1991) investigated this by asking children to try to imitate the handwriting of the opposite sex.

We carried out a similar study, but with first-year psychology students. The experimental hypothesis was that, when participants are asked to judge the sex of the author of a sample of handwriting, they will perform more accurately when the authors were using their normal writing style compared with when they were trying to emulate the other sex. The independent variable was handwriting style, with two levels: one level was the students' normal handwriting (before they knew the hypothesis) and the other level was their writing when trying to emulate the opposite sex. Participants then tried to categorise each of the two samples as either male or female. The design was, therefore, repeated measures. The order of presentation of the handwriting samples was counterbalanced across participants. The dependent variable was whether the participant's judgement of the writer's sex was correct or incorrect. Hypothetical data are available in the Appendix or from macmillanihe.com/harrison-spss-7e. Note that the writer's sex is not recorded in these data; we simply recorded, for each handwriting sample, whether their sex was judged correctly or not.

How to perform the McNemar test

There are two different ways to perform the McNemar test in SPSS, but we are going to describe the best of these, which involves accessing the test from the Crosstabs command.

McNemar test and Crosstabs command

Crosstabs draws up contingency tables, and the McNemar test is an optional inferential statistic within this command. Follow steps 1 to 6 in the instructions on performing a multidimensional chi-square (Section 4). The Crosstabs: Statistics dialogue box will then appear (see below).


Select the McNemar option.

Click on Continue then OK.

SPSS output for the McNemar test

Obtained using menu items: Descriptive Statistics > Crosstabs

This is the output of the McNemar test. SPSS calculates the statistic using the binomial distribution, and gives the value of p and N only.
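If you want to check this result outside SPSS, the statsmodels library offers the same exact (binomial) McNemar test. The sketch below is illustrative only; the paired cell counts are the ones described in the 'Reporting the results' passage later in this section.

from statsmodels.stats.contingency_tables import mcnemar

#                     opposite correct | opposite incorrect
paired_counts = [[17, 16],   # normal handwriting judged correctly
                 [ 2, 14]]   # normal handwriting judged incorrectly

result = mcnemar(paired_counts, exact=True)   # exact binomial test on the discordant cells (16 vs 2)
print(result.pvalue)                          # approx. .001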

It would be useful to illustrate the results. For data from a related design, rather than using Clustered bar charts in the Crosstabs dialogue box, it is probably better to plot two separate bar charts using the Graphs menu.


How to obtain a bar chart using Chart Builder

Click on Graphs then Chart Builder.

This dialogue box reminds you that you must set the measurement level for each of the variables in your chart. We have already done this. Click OK.

Select this chart type and drag it into the box above.

Drag the variable ‘normal handwriting’ into the X-Axis of the chart

Click OK.

Now repeat this process, this time dragging the second variable ‘handwriting as if opposite sex’ into the X-Axis of the Chart Builder. You will now have two charts in your output. We have reproduced these alongside each other below.

There is a problem with these two charts – they are plotted using different y-axis scales. We can easily correct this by editing the first one. Double-click on the chart in the Output window. This will open the Chart Editor. Click on the Y icon to open the Properties dialogue box.

Change the Maximum range value to 40 to match the other chart.

Click Apply.

The two charts will now be plotted on the same y-axis scale and are ready to be used in our report.
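If you ever produce the equivalent charts outside SPSS, the same 'match the y-axis scales' step applies when the charts are drawn. A Python sketch with matplotlib; the two variables are rebuilt here from the counts reported in the next subsection, purely for illustration:

import pandas as pd
import matplotlib.pyplot as plt

normal   = pd.Series(['correct'] * 33 + ['incorrect'] * 16)   # judgements of the normal samples
opposite = pd.Series(['correct'] * 19 + ['incorrect'] * 30)   # judgements of the 'opposite-sex' samples

fig, axes = plt.subplots(1, 2, figsize=(8, 4), sharey=True)   # sharey keeps the scales identical
for ax, (label, series) in zip(axes, [('Normal handwriting', normal),
                                      ('Writing as if opposite sex', opposite)]):
    counts = series.value_counts().reindex(['correct', 'incorrect'])
    ax.bar(counts.index, counts.values)
    ax.set_title(label)
axes[0].set_ylabel('Count')
axes[0].set_ylim(0, 40)   # both panels now run from 0 to 40, like the edited SPSS charts
plt.show()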


Reporting the results

In a report you might write: The McNemar test using the binomial distribution showed a significant difference in the number of correct judgements between the two conditions of handwriting style (N = 49, exact p = .001).



It would also be useful to explain the pattern of results in the following way: Of the 49 participants, 33 (67%) correctly identified the handwriter's sex for the normal handwriting. Of those 33, 17 correctly identified the handwriter's sex for the 'opposite handwriting' and 16 incorrectly identified it. Of the 16 (33%) who were incorrect for the normal handwriting, 2 of them correctly identified the handwriter's sex for the 'opposite handwriting' and 14 of them incorrectly identified it. In brief, there were more incorrect responses when the writer had written as if they were of the opposite sex. This pattern of results is illustrated in Figure 7.1.

Figure 7.1  The pattern of correct and incorrect identification of handwriter’s sex, when the writing was their normal handwriting and when they mimicked the writing of the opposite sex.

Next, we show you how to create a bar chart using Graphboard Template Chooser.

How to obtain a bar chart using Graphboard Template Chooser

Select Graphs then Graphboard Template Chooser.

Select variable you want to plot.

Select the Bar of Counts icon.

Click OK.

Repeat this process for the second variable. Graphboard distinguishes between a 'bar of counts', suitable for nominal data, and a 'bar', which plots a summary descriptive statistic for a variable of at least ordinal level of measurement.


The two charts will be in the output window. You can edit these as required by double-clicking. Double-click on the second chart and set the y-axis scale to a maximum value of 40.

Double-click on the chart in the output window to open the Graphboard Editor.

Click on one of the y-axis values.

Click on the Scale tab.

Set the Max value to 40.

The two charts are now plotted on the same y-axis scale.

Note about causation and the McNemar test

In related designs, you manipulate an independent variable and either collect data for both of its levels from the same participants (repeated measures) or collect data from matched participants (matched pairs). If you have carried out the normal controls for the design, then you can draw conclusions about causation from the McNemar test output.

Summary

■■ This chapter introduced you to nominal data and the tests that can be performed on nominal data.

■■ A contingency table is the best way of displaying nominal data.

■■ Remember our advice about descriptive statistics for nominal data; calculating the mean or median is inappropriate, as is any measure of dispersion.

■■ The chi-square test is a nonparametric test often used to analyse nominal data.

■■ The multidimensional chi-square nevertheless requires that the data satisfy certain criteria. The observations must be independent, so each participant should contribute only one data point.

■■ The McNemar test is used to analyse data obtained by measuring a dichotomous variable for related designs.

■■ For guidance on recoding values, see Chapter 4, Section 5.

■■ For guidance on incorporating SPSS output into a report, or on printing the output, see Chapter 14.


CHAPTER 8: One-way analysis of variance

In this chapter:
●● An introduction to one-way analysis of variance (ANOVA)
●● One-way between-subjects ANOVA, planned and unplanned comparisons, and nonparametric equivalent
●● One-way within-subjects ANOVA, planned and unplanned comparisons, and nonparametric equivalent

SPSS for Psychologists online: Visit macmillanihe.com/harrison-spss-7e for data sets, online tutorials and exercises.

Section 1: AN INTRODUCTION TO ONE-WAY ANALYSIS OF VARIANCE (ANOVA)


■■ ANOVA is an enormously useful statistical procedure that is widely used to test for differences in a number of different experimental designs. One-way ANOVA is used when you have one independent variable (IV) with more than two groups or conditions; and Factorial ANOVA (which we will cover in Chapter 9) allows you to analyse data collected from designs with more than one IV. This chapter will focus on One-way ANOVA.

■■ One-way ANOVA is a parametric test that we can use to establish whether the means of our experimental conditions are different or not. In this way it can tell us whether our experimental manipulation has had an effect on the dependent variable (DV – the thing we are measuring). The term 'one-way' simply refers to the fact that we use it to analyse data from experiments that have one IV. It may be easier to think about One-way ANOVA as an extension of the t-test, rather than something completely different.

■■ While t-tests are only able to test for differences between two groups or conditions, One-way ANOVA can allow us to compare participants' performance across multiple groups or conditions. However, while One-way ANOVA will tell us whether the scores significantly vary across our conditions (i.e. whether there has been a significant effect of our IV on participants' scores), it won't tell us precisely which pairs of conditions are significantly different from one another (e.g. whether condition 1 is significantly different from condition 2, whether condition 2 is significantly different from condition 3, or whether condition 1 is significantly different from condition 3). Such comparisons require some additional statistical procedures called planned and unplanned comparisons.

When can we use One-way ANOVA?

As ANOVA is a parametric test, check that:

1. The dependent variable comprises data measured at interval or ratio level.
2. The data are drawn from a population that is normally distributed.
3. There is homogeneity of variance; that is, the samples being compared are drawn from populations that have similar variances.
4. In the case of independent groups designs, independent random samples must have been taken from each population.

It is not essential to have equal numbers of observations from each group or condition you are comparing across the independent variable.

How does it work?

We all know that humans vary in performance, both between individuals and within individuals over time. For these reasons, if we conduct a simple experiment comparing, say, the time it takes to learn a list of short words, medium length words and long words, we would not expect all the participants within a condition to take the same amount of time. We naturally accept that some participants will be faster than others (i.e. there will be variation between individuals). We also know that any one participant might take less or more time on one occasion than on other occasions (i.e. there will be variation within individuals). Remember that we can measure the amount of variation within a set of scores with measures of dispersion, such as the standard deviation or the variance.

Now let us imagine for a moment that we were robopsychologists; that is, we were interested in the psychology of robots (rather than robots interested in psychology). If we repeated our learning experiment with a group of R2D2 robots, we would expect all the robots in one condition to react in exactly the same way. Table 8.1 shows some hypothetical data for robots and for humans.

Let us first consider the data from the humans shown in this table. If we asked you to 'eyeball' the raw data and guess whether there was a difference in learning times for the three lists, you would probably have no problem saying that the difference did appear to be significant. In making this judgement, you are actually doing something quite sophisticated. What you are doing is deciding whether the natural variation between individuals within the conditions is large or small compared with the variation between individuals across the different conditions. That is, you are asking: 'OK, so not all the participants in the List A condition took the same time, and not all the participants in the List B or List C condition took the same time, but is this natural variation (or noise) large or small compared with the difference in times between the three conditions?'


Table 8.1  Time (in seconds) taken to learn three different lists of words for a group of robot and human participants

ROBOTS

List A       List B       List C
10           20           30
10           20           30
10           20           30
10           20           30
10           20           30
10           20           30
10           20           30
10           20           30
Mean = 10    Mean = 20    Mean = 30
Grand mean = 20

HUMANS

List A         List B         List C
30             54             68
40             58             75
35             45             80
45             60             75
38             52             85
42             56             90
36             65             75
25             52             88
Mean = 36.38   Mean = 55.25   Mean = 79.50
Grand mean = 57.04

In this case, participants within each condition might vary from each other by several seconds, but this is small compared with the larger differences between the times produced under the three different list conditions.

Let us look at the robots' data. Robots perform identically under identical conditions (or at least our robots do), so within each condition every robot has exactly the same learning time. Thus, the variance within each condition is zero. But if we compare the performance between the three conditions, it is clear that all the robots were fastest at learning the short words and all took longest to learn the long words. You might conclude that you want to switch from psychology to robopsychology, but there is also a more important point here. What we want to do is make our human participants' data more like the robots' data; that is, we want to reduce the variance within conditions down towards zero. In fact, all the practices of good experimental design, such as giving all participants the same instructions and testing under identical conditions, are designed to do just this – reduce the variance within each condition. This good experimental practice will reduce the variance but will not eliminate it – our participants will never behave exactly like the robots.

So, if we cannot eliminate the variance, perhaps we can account for it. What we need is a statistical procedure that takes account of the variance within the conditions and compares this to the variance between conditions. If the variance between conditions is much larger than the variance within conditions, surely we can say that the IV is having a larger effect on the scores than the individual differences are. Clearly, for the robots, the variance within the conditions is zero, and the variance between the conditions is quite large. For our humans, the situation is not quite so clear-cut, but if we calculate the variances, we will find the same basic pattern applies:

Variance between conditions > variance within conditions

This concept of calculating the variance due to nuisance factors such as individual differences and comparing it to the variance due to our manipulation of the IV is central to ANOVA. Exactly how we calculate these variances can get rather complex for some designs, but this does not alter the basic principle that we simply want to ask whether or not the variance in the data brought about by our manipulation of the IV is larger than that brought about by the other nuisance factors such as individual differences. The variance brought about by these nuisance variables is usually referred to as the error variance, so we ask whether the error variance is less than the variance due to the manipulation of the IV. A convenient way of expressing this is to calculate the ratio of the variance due to our manipulation of the IV and the error variance. This ratio is known as the F-ratio (named after Fisher). The F-ratio is:

F = Variance due to manipulation of IV / Error variance

If the error variance is small compared with the variance due to the IV, then the F-ratio will be a number greater than 1 (a large number divided by a smaller number always gives rise to a number greater than 1). If, on the other hand, the effect of the IV is small, and/or the error variance is large (perhaps because our participants varied considerably or because we did not adequately control the experiment), then the F-ratio will be a number less than 1 (a small number divided by a larger number will always result in a number less than 1). Thus, we can now say that the effect of the IV is definitely not significant if the F-ratio is less than 1. This is because the error variance is actually larger than the variance caused by our manipulation of the IV.

So, the F-ratio is simply the ratio of these two estimates of variance. The larger the F-ratio, the greater the effect of the IV compared with the 'noise' (error variance) in the data.


An F-ratio equal to or less than 1 indicates a non-significant result, as it shows that the scores were affected at least as much by the nuisance variables (such as individual differences) as by the manipulation of the IV.

How do we find out if the F-ratio is significant?

Once we have calculated the value of the F-ratio and found it is larger than 1, we need to determine whether it is large enough to be regarded as significant. That is, we ask whether the effect of the IV is sufficiently larger than the effect of the nuisance variables to regard the result as significant. When calculating the F-ratio with a calculator, we consult F tables to discover, given the number of observations we made, what value F had to exceed to be considered as significant. When using SPSS to perform ANOVA, the output reports the exact p value for that particular F-ratio. This p value is the probability of getting this F-ratio by chance alone and it needs to be less than .05 for the F-ratio to be regarded as significant.
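For the curious, the p value is simply the tail area of the F distribution beyond the observed F-ratio, given the two degrees of freedom. A minimal Python sketch with SciPy, using the F-ratio reported for the masking experiment later in this chapter:

from scipy.stats import f

F, df_factor, df_error = 15.31, 3, 36
p = f.sf(F, df_factor, df_error)    # survival function = probability of an F this large by chance alone
print(p)                            # well below .05, so the effect would be reported as significant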

What about degrees of freedom?

You will remember from performing a t-test, another test of difference, that we need to calculate and report the degrees of freedom associated with our analysis. One complication with ANOVA is that, for each F value, we must report two sets of degrees of freedom. This is because we need to remember how many observations went into our calculation of the error variance and also how many went into our calculation of the variance due to the manipulation of the IV. As these are the bottom and top halves of the F-ratio equation, these are sometimes referred to as the 'denominator' and 'numerator' degrees of freedom, respectively. A good statistics text will explain the calculation of degrees of freedom in detail, but as SPSS calculates and reports these for you, all you need to know is to expect two values for each F-ratio. We will show you how to report these degrees of freedom and the F-ratio in subsequent sections.

What terms are used with One-way ANOVA?

Different textbooks tend to use slightly different terminologies to describe One-way ANOVA. To avoid the problems this can create, we are going to use what we consider to be the simplest terminology.

Factors

Factors are the independent variables, but as there may well be more than one of them per study, it makes sense to call them factors from now on.

Levels of factors

Levels of factors are similar to conditions. In the experiments we considered earlier, we had a single IV, which was manipulated to create two conditions. We would now describe this as a single factor with two levels. In ANOVA designs, a factor can have as many levels as we like. For example, we might have a factor of 'Drug dose', which might be manipulated to create four levels of 0 mg, 10 mg, 20 mg and 30 mg.

Between-subjects factors

Between-subjects factors are factors whose levels vary between participants, so that each participant will experience only one level of a factor. For example, a participant can be administered either 0 mg, 10 mg, 20 mg or 30 mg. This is a factor that is manipulated using an independent groups design, which we will now refer to as a between-subjects design.

Within-subjects factors

Within-subjects factors are factors whose levels vary within a participant, so that each participant will experience two or more levels of a factor. For example, a participant might be administered all four different drug dosages. This is a factor that is manipulated using a repeated-measures design, which we will now refer to as a within-subjects design.

Main effect

The term 'main effect' is used to describe the effect a single independent variable has on a dependent variable. If our One-way ANOVA is significant, we can say that there was a significant main effect of our IV on our DV.

How is the F-ratio calculated?

You do not need to know how to calculate the F-ratio, as SPSS will do this for you. However, to fully appreciate the output that SPSS generates, it would be helpful to read this section and realise why the calculation is dependent on the type of factor manipulated. There are two different types of One-way ANOVA, and how they work depends on the type of experimental design you have.

Between-subjects One-way ANOVA design

Let us go back to our word list learning experiment (see Table 8.2). This time, imagine that there are different humans taking part in each condition; that eight participants were asked to learn list A, another eight to learn list B and another eight to learn list C. There are two sources of variance of interest here:

1. How do the scores in one group vary from those in the other groups? We can look at how the mean of each column deviates from the grand mean. This provides us with a measure of the variance due to the factor.
2. How do the scores vary within each group? We can look at how each score within a column deviates from the mean for that condition. This provides us with a measure of noise.

Together, these two sources of variance must add up to the total variance (the variance between each single score and the grand mean). That is:

Var(Total) = Var(Between Groups) + Var(Within Groups)


Table 8.2  Time (in seconds) taken to learn three different lists of words by three groups of human participants in a between-subjects design

HUMANS

List A         List B         List C
30             54             68
40             58             75
35             45             80
45             60             75
38             52             85
42             56             90
36             65             75
25             52             88
Mean = 36.38   Mean = 55.25   Mean = 79.50
Grand mean = 57.04
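The partition described above can be checked directly on the Table 8.2 data. The sketch below (Python, with NumPy and SciPy assumed) computes the between-groups, within-groups and total sums of squares, confirms that the first two add up to the third, and then obtains the corresponding F-ratio:

import numpy as np
from scipy.stats import f_oneway

list_a = np.array([30, 40, 35, 45, 38, 42, 36, 25])
list_b = np.array([54, 58, 45, 60, 52, 56, 65, 52])
list_c = np.array([68, 75, 80, 75, 85, 90, 75, 88])
groups = [list_a, list_b, list_c]

grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within  = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_total   = ((np.concatenate(groups) - grand_mean) ** 2).sum()
print(np.isclose(ss_total, ss_between + ss_within))   # True: the two sources add up to the total

F, p = f_oneway(list_a, list_b, list_c)                # the between-subjects One-way ANOVA
print(F, p)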

Within-subjects One-way ANOVA design

Imagine that in our learning experiment, eight participants took part and each performed in each level of the factor. We would be able to calculate a mean score for each list and a mean score for each participant; see Table 8.3.

Table 8.3  Time (in seconds) taken to learn three different lists of words for the group of human participants in a within-subjects design

HUMANS

Participant number   List A         List B         List C        Participant mean
1                    35             42             64            47
2                    48             60             90            66
3                    36             65             75            58.67
4                    40             55             70            55
5                    38             52             85            58.33
6                    25             42             58            41.67
7                    30             42             60            44
8                    42             60             90            64
                     Mean = 36.75   Mean = 52.25   Mean = 74.0   Grand mean = 54.33

The calculation of F for the within-subjects design is more complicated than for the between-subjects design. Again, we want to determine the sources of variance. However, with this design, we have repeated observations of each participant as every person performs in every level of the factor. This allows us to distinguish between variation caused by individual differences and variation caused by different participants performing differently across the different conditions, and therefore separate out participant variance from error variance. So, we have three sources of variance, and we can ask:

1. How do the scores in one condition vary from those in the other? We can compare overall differences between the three lists. As before, we can look at how the mean of each column deviates from the grand mean. This provides us with a measure of the variance due to our manipulation of the factor.
2. How do participants vary in their average scores? We can get an indication of how much individuals differ from each other by looking at how much each participant's average score deviates from the grand mean. This provides us with a measure of participant variance.
3. How much error variance is there? We can work this out by looking at the extent to which each score is not what we would predict from the row and column means. You can also think of this as the variance resulting from different participants responding differently to the change in the factor.

For example, with regard to the score for participant 1 in list A, we know that their mean time is 47 seconds. Participant 1 is, on average, 7.33 seconds faster compared with the overall grand mean of 54.33 seconds. The mean for the list A column is 36.75 seconds, so participants are, on average, 17.58 seconds faster at learning list A than the overall grand mean of 54.33 seconds. So, altogether, we would expect participant 1 to be 17.58 + 7.33 seconds faster than the grand mean of 54.33 seconds at learning list A, giving an expected time of 29.42 seconds. The observed score is 35 seconds, which is slower than we would expect. (Looking at participant 1's scores, we can see that they are relatively faster with lists B and C compared with list A.)

With regard to participant 2's score in the list A condition, we know that their row mean is 66 seconds, which is 11.67 seconds slower than the grand mean of 54.33 seconds. So, we would expect participant 2 to be 17.58 seconds faster at learning list A, but 11.67 seconds slower because this participant is slower on average. Thus, we expect a time of 54.33 – 17.58 + 11.67 = 48.42 seconds. The observed score is 48 seconds – close to what we would expect.

The extent to which the observed scores vary from the expected scores reflects the extent to which participants are inconsistent and, as illustrated above, provides us with a measure of error variance.
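The 'expected' scores and the leftover error described above take only a few lines to compute. A Python sketch for the Table 8.3 data; it illustrates the logic rather than reproducing the full within-subjects ANOVA that SPSS carries out:

import numpy as np

# rows = participants 1 to 8, columns = lists A, B, C (from Table 8.3)
scores = np.array([[35, 42, 64],
                   [48, 60, 90],
                   [36, 65, 75],
                   [40, 55, 70],
                   [38, 52, 85],
                   [25, 42, 58],
                   [30, 42, 60],
                   [42, 60, 90]], dtype=float)

grand_mean = scores.mean()
participant_effect = scores.mean(axis=1, keepdims=True) - grand_mean   # how fast each person is overall
list_effect        = scores.mean(axis=0, keepdims=True) - grand_mean   # how hard each list is overall

expected  = grand_mean + participant_effect + list_effect
residuals = scores - expected            # e.g. participant 1, list A: 35 - 29.42 = +5.58
ss_error  = (residuals ** 2).sum()       # the inconsistency that forms the error variance
print(expected.round(2)[0, 0], round(ss_error, 2))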

Using SPSS to calculate the F-ratio

The calculation of the F-ratio for a within-subjects factor is tricky, and, as you will see, the SPSS output is rather more complex for a within-subjects factor compared with a between-subjects factor. Furthermore, because SPSS uses a procedure called the General Linear Model, it will give you much more information than just the F-ratio statistic. You will see both ANOVA and multiple regression statistics in the SPSS output, as ANOVA can be considered to be a special case of multiple linear regression (see Chapter 10), which itself is a special case of the General Linear Model.

Effect size and ANOVA

We mentioned in Chapter 1 that it is usual to report effect size alongside the statement of whether the result is significant. This provides information about the magnitude of the finding and also may draw attention to the influence of sample size. For example, if our results show a marginal but non-significant result and a moderate or large effect size, power may be an issue, and it may be appropriate to consider a follow-up study with a larger sample.

In Chapter 5, where we covered tests of differences for two samples, we introduced you to one measure of effect size, namely Cohen's d, which was calculated from the descriptive statistics provided by SPSS. You may remember that this involved calculating the size of the difference between the means relative to the standard deviation of the scores. The effect size estimates that are calculated for ANOVA are different, in that they tend to describe the proportion of the variability accounted for by each factor (or combination of factors) included in the ANOVA design (in that sense, they are similar to r2 described in Chapter 6, which also is a measure of the proportion of variance accounted for). These effect size measures include eta squared, partial eta squared, generalized eta squared, associated omega squared and also correlational measures. Fritz, Morris and Richler (2012) explain that one can distinguish between:

1. Estimates that describe the effect size in the observed sample but do not consider the population from which the sample was drawn; this is the case with eta squared and partial eta squared.
2. Estimates of effect size, such as omega squared measures, which try to estimate the variability in the sampled population rather than just in the observed sample, and which are therefore less likely to be inflated by chance factors.

We recommend that you consult your statistics text, or Fritz et al. (2012), for advice on how to calculate these different estimates of effect size. Field (2005) demonstrates how to calculate an omega squared measure (which involves a different equation depending on whether a between-subjects or within-subjects ANOVA is used). Mulhern and Greer (2011) explain how to calculate eta squared and recommend against using partial eta squared, which is the only measure of effect size that SPSS will calculate for you. They explain that partial eta squared is not easy to interpret as it is an adjusted measure: the variance explained by one factor after taking into account the variance explained by the other factor(s). Fritz et al. (2012) suggest that partial eta squared is limited in terms of its usefulness and may only be helpful if making cross-study comparisons with identical designs. Eta squared, on the other hand, is relatively straightforward to calculate by hand and will tell you the proportion of the total variability accounted for by each factor in your design. We will demonstrate how to do this in Section 2. Whichever effect size measure you decide on, do report this for significant and non-significant effects, and identify which statistic is used.

Planned and unplanned comparisons

Imagine you have a design involving one factor with three conditions: A, B and C. If ANOVA reveals a significant effect of this factor, this suggests a difference between the conditions. However, to find out exactly where this difference is, we need to carry out further tests that compare the pairs of conditions. There are three possibilities:

1. All three possible comparisons are significant, so conditions A and B are significantly different from each other, as are conditions B and C, and conditions A and C.
2. Only two of the three comparisons are significant, for example only conditions A and C and conditions B and C differ significantly.
3. Only one of the three comparisons is significant, for example only conditions A and C are significantly different.

There are two types of comparisons that we can conduct:

1. Planned (a priori) comparisons. These are decided on before the data were collected. The researcher has predicted which means will differ significantly from each other.
2. Unplanned (a posteriori or post-hoc) comparisons. Here, differences among means are explored after the data have been collected.

Why should this matter? We need to use different tests for these two kinds of comparison because the probability of a Type I error is smaller when the comparisons are planned in advance. A Type I error involves incorrectly rejecting a null hypothesis, thus concluding that there is a real effect when, in fact, the means differ due to chance. When making multiple comparisons, there is an increased risk of Type I errors. Howell (2013) gives the following example: assume that we give a group of males and a group of females 50 words and ask them to give us as many associations to these words as possible in one minute. For each word, we then test whether there is a significant difference in the number of associations given by male and female participants. We could run 50 more or less independent t tests, but we would run the risk that 2.5 of these (50*.05) will be declared 'significant' by chance.

Why is there a greater risk of making a Type I error when carrying out unplanned comparisons? Consider the following: Imagine an experiment to look at the effect of five different levels of noise on memory that employed a One-way ANOVA design. You will have five means (one for each condition) and could do a total of ten comparisons (you could compare mean 1 to mean 2; mean 1 to mean 3; mean 1 to mean 4; etc.). Assume that the null hypothesis is true, and that noise does not affect memory, but that, by chance, two of the means are far enough apart to lead us erroneously to reject the null hypothesis, thus giving rise to a Type I error. If you had planned your single comparison in advance, you would have a probability of 0.1 of hitting on the one comparison out of 10 that would lead us to make a Type I error. But if you look at the data before deciding which comparisons to make, you are certain to make a Type I error, since you are likely to test the largest difference you can observe.

As you will see, it is possible to ask SPSS to carry out planned and unplanned comparisons. Unplanned or post-hoc comparisons are easy to perform using SPSS. However, there is no need to perform these if the factor has only two levels (the main effect is sufficient) or if the main effect is not significant. Planned comparisons, on the other hand, are less easy to perform. We demonstrate how to perform both types of comparisons in Sections 2 and 3.
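As a quick illustration of the problem, both the expected number of false positives and the chance of making at least one Type I error grow rapidly with the number of comparisons. A small Python sketch of the arithmetic behind Howell's example above:

alpha = 0.05
for k in (1, 3, 10, 50):
    expected_false_positives = k * alpha               # 2.5 for the 50-word example above
    p_at_least_one = 1 - (1 - alpha) ** k              # assuming the comparisons are independent
    print(k, round(expected_false_positives, 2), round(p_at_least_one, 2))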

Section 2: ONE-WAY BETWEEN-SUBJECTS ANOVA, PLANNED AND UNPLANNED COMPARISONS, AND NONPARAMETRIC EQUIVALENT

Example study: the effects of witness masking

To practise the use of the One-way between-subjects ANOVA, we shall consider an applied experiment which looked at the effects of masking the face of a witness. There is growing awareness that the identity of witnesses in sensitive cases should be protected, especially in light of the move towards televising live court cases. The technology to mask a witness's face is available and has been used in America. Towell, Kemp and Pike (1996) reported the results of a study investigating the effect that masking might have on jurors' memory for a witness's testimony and on jurors' perceptions of the witness's credibility. The testimony of an alleged victim of rape presented in a televised trial in America was shown to participants.

The design employed was a One-way between-subjects ANOVA design. The between-subjects factor, presentation condition, had four levels: unmasked, grey blob, pixelation and negation. These were operationalised by showing some participants the witness unmasked, so that her face was fully visible, some with her face masked by a grey blob, some with her face masked by pixelation and some with her face negated (white and black were reversed). One of the dependent variables was the percentage of facts from the testimony correctly remembered by the participants. The hypothesis was that there would be a negative effect of masking on memory. Results revealed that participants' memory for the victim's testimony was affected by presentation condition: while negating the face did not lower memory compared with the unmasked condition, masking with a grey blob and pixelation both impaired memory. For the purposes of this book, we have created a data file that will reproduce some of these findings. (These data are available from macmillanihe.com/harrison-spss-7e.)

SPSS provides two ways of carrying out a One-way between-subjects ANOVA, one using the General Linear Model command and one using the One-way ANOVA command. The first command can also be used to perform a multi-between-subjects ANOVA, as you will see in Section 3, and also has the option of including in the output the measure of effect size called partial eta squared. In Section 1, we indicated that this particular measure is not that useful; however, when the design involves just one factor, there is no difference between partial eta squared and eta squared. The second command will only permit analysis of a One-way ANOVA design, but has the advantage of a much simpler output and of providing alternative Fs should the assumption of homogeneity of variance be violated. Both ways allow you to do planned and unplanned comparisons to evaluate the differences between pairs of group means. See Section 1 for general guidance on these comparisons.

We will first describe how to use the General Linear Model command, followed by the One-way ANOVA command. We will then demonstrate how to carry out planned and unplanned comparisons and finish this section by describing a nonparametric equivalent test.

How to do it: using General Linear Model command

1. Click on Analyze.

2. Click on General Linear Model.

3. Click on Univariate. The Univariate dialogue box will appear (see below).

4. Select the dependent variable 'memory' and move it into the Dependent Variable box.

5. Select the grouping variable (i.e. the between-subjects factor) 'presentation condition' and move it into the Fixed Factor(s) box (see tip box below).

6. Click on the Options button to obtain descriptive statistics. The Univariate Options dialogue box will appear (see next page).


With a fixed factor, data have been collected from all the levels of the factor that are of interest to the researcher. An alternative is to choose the levels by a random procedure, but this rarely happens in psychological research.

7. Click here to obtain mean and standard deviation for each level of the factor.

8. Click here to check whether the assumption of equality of variance has been violated.

Click here if you want to obtain partial eta squared, a measure of effect size (see guidance in Section 1).

9. Click on the Continue button to return to the Univariate dialogue box.

It is often a good idea to ask SPSS to illustrate your data, so you have a graph you can use when it comes to writing up your results. SPSS allows you to do this directly from the main Univariate dialogue box. To produce a graph, select the Plots… option to open up the Univariate: Profile Plots dialogue box.

10. Select the independent variable 'prescond' and move it into the Horizontal Axis box.

11. Click Add to tell SPSS that you want to produce a graph for this variable.


Notice that the IV (prescond) has been added to the Plots box.

12. Decide which type of graph you would like to produce; for One-way ANOVA a Bar Chart is the most commonly used.

13. Select the Include Error bars option, and choose the type of error bars you want to include, in this case 95% confidence intervals.

14. Click on the Continue button to return to the Univariate dialogue box.

Finally, click on the OK button, and SPSS will calculate the test for you. See below for an example of the output using the Univariate command via General Linear Model, which includes the means, standard deviations and N (number of scores) obtained by clicking on Descriptive statistics in the Univariate: Options dialogue box.


SPSS output for One-way between-subjects ANOVA

Obtained using menu items: General Linear Model > Univariate

SPSS reminds you of the factor you are analysing, what the levels of that factor are, and the number of participants in each level.

This table will appear if you requested Descriptive statistics in the Univariate: Options dialogue box. The table gives the mean and the standard deviation (SD) for each level of the factor. The bottom row gives the total mean and SD; that is, for all participants regardless of which condition they were in.

SPSS tests the assumption of equality of variances in a number of different ways. In this case, we are interested in the statistic Based on Mean. If Levene's test is significant, this suggests that the group variances are significantly different and therefore that the assumption of homogeneity of variance has been violated. Here, it is not significant, so the assumption has been met.

The Sum of Squares for each source of variance.

Each row shows information for each source of variance listed underneath this heading.

The Mean Square for each source of variance.

The degrees of freedom for each source of variance.

This row contains the values of F, df and p for the factor ‘prescond’ that you will report.


The p value for each explained source of variance.

The F-ratio for each explained source of variance.

You also need the df value from the Error row.

This is the graph requested in the Univariate: Profile Plots dialogue box, which shows you participant performance for the four different conditions. When error bars do not overlap, it is likely that the conditions are significantly different – although you need to do post-hoc tests or planned comparisons to confirm this.


Calculating eta squared: one measure of effect size

In Section 1 we provided guidance on reporting effect size when carrying out ANOVAs. Although SPSS will calculate partial eta squared, this is not easy to interpret as it is an adjusted measure: the variance explained by one factor after taking into account the variance explained by the other factor(s). Instead, Mulhern and Greer (2011) recommend eta squared, which is relatively straightforward to calculate by hand and will tell you the proportion of the total variability accounted for by each factor and interaction in your design. We will demonstrate how to do this now. Note, however, that there are alternative measures which may be preferable, as they attempt to estimate the effect size in the population rather than just in the sample (see Fritz, Morris and Richler, 2012 for a review and guidance on how to calculate these, and/or consult your statistics text).

Eta squared (η2) can be calculated from the ANOVA output. It is the sum of squares for your IV (prescond) divided by the corrected total sum of squares, and these values can be found in the Tests of Between-Subjects Effects table.

■■ For 'prescond' (η2) = 1071.875 / 1911.775 = .56

This suggests the factor ‘prescond’ accounts for approximately 56% of the variance in the dependent variable.

As mentioned above, SPSS can calculate a different effect size measure for you: partial eta squared. This isn’t the most helpful measure (see Section 1 for guidance), so we generally recommend using eta squared instead. However, when you have a One-way ANOVA there is actually no difference between partial eta squared and eta squared. This is because partial eta squared calculates the proportion of variance a factor (or variable) explains that is not explained by other variables in the analysis. When there are no other factors in the analysis (as is the case with One-way ANOVA), the calculation is the same as eta squared. As such, when you have a One-way ANOVA, rather than calculating eta squared by hand, you can just use the SPSS ‘Estimate Effect Size’ option in the Options dialogue box. Try doing this now, by downloading the dataset used in this example, and see if you get the same number as the one we just calculated by hand.
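If you prefer to script the hand calculation, it is a single division of the two sums of squares read off the Tests of Between-Subjects Effects table. A trivial Python sketch using the values shown above:

def eta_squared(ss_factor, ss_corrected_total):
    """Proportion of the total variability accounted for by the factor."""
    return ss_factor / ss_corrected_total

print(round(eta_squared(1071.875, 1911.775), 2))   # 0.56, matching the value calculated above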



Reporting the results



In a report you might write:


A One-way between-subjects ANOVA was conducted to examine the effect of presentation condition on participants' recall of the witness testimony. This revealed a significant effect of presentation condition: F(3,36) = 15.31, p < .001, η2 = .56.

You would also want to report your descriptive statistics (i.e. means and standard deviations) and provide information regarding the confidence intervals for each condition. To identify which pair(s) of conditions significantly differed, you would carry out planned or unplanned comparisons as appropriate, and these are demonstrated later in this section.

How to do it: using One-way ANOVA command

As stated earlier, One-way between-subjects ANOVA can be carried out in two different ways in SPSS. This is the second way, which provides a simpler output and also offers alternative Fs should the variances in the groups not be equal.

1. Click on Analyze.

2. Click on Compare Means.

3. Click on One-way ANOVA. The One-way ANOVA dialogue box will appear (see below).


4. Select the dependent variable 'memory' and move it to the Dependent List box.

5. Select the grouping variable (i.e. the between-subjects factor) 'presentation condition' and move it into the Factor box.

6. Click on Options. The One-way ANOVA: Options dialogue box will appear (see below).

7. Select Descriptive. This will give you a number of additional statistics, including 95% confidence intervals for each condition.

8. Select these three options to check that the groups have similar variances, and to see two alternative Fs, should the assumption of homogeneity of variance be violated.

9. Click on the Continue button to return to the One-way ANOVA dialogue box.

Finally, click on the OK button. See below for annotated output.

SPSS output for One-way between-subjects ANOVA

Obtained using menu items: Compare Means > One-way ANOVA

Oneway

This table is produced by selecting Descriptive in the One-way ANOVA: Options dialogue box.


This table is produced by selecting Homogeneity of variance test in the One-way ANOVA: Options dialogue box. If Levene's test is significant, this suggests that the group variances are significantly different and therefore that the assumption of homogeneity of variance has been violated. Here, it is not significant, so the assumption has been met.

This table shows the outcome of the analysis of variance. Each row shows information for a source of variance.

This row contains the values of F, df and p for the factor ‘prescond’ that you will report.

You also need the df value from the Within Groups row (the Error variance row).

This table is produced by selecting Brown-Forsythe and Welch in the One-way ANOVA: Options dialogue box. These are alternative Fs, which can be used should Levene’s test be significant and hence the assumption of homogeneity of variance violated.


Calculating eta squared: one measure of effect size

As above, effect size can be calculated using the main ANOVA table. To calculate eta squared (η2), again you need to take the sum of squares for your IV (in the 'Between Groups' row) and divide it by the corrected total sum of squares (in the 'Total' row):

■■ For 'prescond' (η2) = 1071.875 / 1911.775 = .56



Reporting the results

In a report you might write: A One-way between-subjects ANOVA was conducted to examine the effect of presentation condition on participants' recall of the witness testimony. This revealed a significant effect of presentation condition: F(3,36) = 15.31, p < .001, η2 = .56.

You would also want to report in your results your descriptive statistics (i.e. means and standard deviations) and information regarding the confidence intervals for each condition. As graphs cannot be produced from the ANOVA dialogue boxes using this method, you may want to refer to Chapter 5, Section 3 for guidance on how to obtain an error bar chart. To identify which pair(s) of conditions significantly differed, you would carry out planned or unplanned comparisons as appropriate, and these are demonstrated next.

Planned and unplanned comparisons

The One-way between-subjects ANOVA identified a significant effect of presentation condition, but we need to carry out further analysis to compare the pairs of conditions to pinpoint the source of this effect. In Section 1 we explained that there are two main types of comparisons: planned and unplanned. For demonstration purposes, we carry out both of these on the same data set; however, normally you would only perform one type.

Unplanned (post-hoc) comparisons in SPSS

There are a range of post-hoc tests to choose from, and you will need to consult your statistics text to select the one most suitable for your data. Which test you choose will depend on whether the assumption of homogeneity of variance has been met or not, and whether you have equal sample sizes or not (Field, 2013). We demonstrate how to select these tests for each of the two ways of conducting a One-way between-subjects ANOVA. Either way, the output is the same and is shown below.


Using General Linear Model command

Click on Analyze ⇒ General Linear Model ⇒ Univariate.

1. Click on the Post Hoc button to perform unplanned comparisons; this will bring up the Post Hoc Multiple Comparisons for Observed Means dialogue box (on the right).

2. Move the relevant factor into the Post Hoc Tests for: box and select the post-hoc test you want to perform. As Levene’s test was not significant, we can assume the variances are equal. As such, we can select a test from the Equal Variances Assumed section. We have chosen Bonferroni. If Levene’s test had been significant, you would instead choose a test from the Equal Variances Not Assumed section at the bottom of the box, such as Games-Howell. Next, click on the Continue button to return to the Univariate dialogue box and click on OK.
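A syntax sketch for this route is given below; it assumes the same variable names as before and is only one way of writing the command:

* One-way ANOVA via the General Linear Model, with Bonferroni post-hoc tests.
* Swap BONFERRONI for GH (Games-Howell) if Levene's test is significant.
UNIANOVA memory BY prescond
  /POSTHOC=prescond(BONFERRONI)
  /PRINT=DESCRIPTIVE HOMOGENEITY
  /DESIGN=prescond.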

Using One-way ANOVA command

Click on Analyze ⇒ Compare Means ⇒ One-way ANOVA.

1. Click on the Post Hoc button to perform unplanned comparisons; this will bring up the One-way ANOVA: Post Hoc Multiple Comparisons dialogue box (on the right).

2. Select the post-hoc test you want to perform; we’ve selected Bonferroni.

3. Click on the Continue button to return to the Oneway ANOVA dialogue box and click on OK.
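The corresponding syntax sketch for the One-way ANOVA route, again assuming the chapter’s variable names:

* Bonferroni post-hoc comparisons following the One-way ANOVA.
ONEWAY memory BY prescond
  /STATISTICS DESCRIPTIVES HOMOGENEITY
  /POSTHOC=BONFERRONI ALPHA(0.05)
  /MISSING ANALYSIS.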


SPSS output for post-hoc tests

POST HOC TESTS: this heading and the Multiple Comparisons table will appear after all the other tables in the ANOVA output.

SPSS prints a complete matrix (as it does for correlations). You have to pick out the comparisons required, and ignore the repetitions.



As our factor had four levels, there are six possible comparisons. Their p values are highlighted in this column.

Reporting the results

In a report you might write:

Employing the Bonferroni post-hoc test, significant differences were found between the unmasked and greyblob conditions (p < .001), between the unmasked and pixelated conditions (p = .001), between the greyblob and negated conditions (p < .001) and between the pixelated and negated conditions (p = .001). There was no significant difference between the unmasked and negated conditions (p = 1) or between the greyblob and pixelated conditions (p = 1).



Or, to abbreviate: There was no significant difference between the unmasked and negated conditions or between the greyblob and pixelated conditions (for both, p = 1). The greyblob and pixelated conditions were each significantly different from each of the unmasked and negated conditions (p ≤ .001).

Planned comparisons in SPSS

Generally, for planned comparisons, the technique of linear contrasts is used, which allows us to compare one level or set of levels with another level or set of levels. The simplest way of doing this is to assign weights to each of them. These weights are known as ‘coefficients’. This technique is available on SPSS, which uses the t statistic to test specific contrasts. Indeed, the printout will give you two t values, one for ‘assume equal variances’ and one for ‘does not assume equal variances’. Since the variances of the groups being compared should be broadly similar (otherwise you should not be using ANOVA), you can ‘assume equal variances’, but check both values and their significance. A point to note here is that the overall main effect does not have to be significant for you to test for specific differences using planned comparisons.

By assigning weights (or coefficients), we can make three sorts of comparison:
1. We can compare one condition with one other condition.
2. We can compare one condition with the mean of two or more other conditions.
3. We can compare the mean of one set of conditions with the mean of another set of conditions.

In all three cases, we assign a weight of zero to a condition (or conditions) that we do not want to be included in the comparison. Conditions (or groups of conditions) that are to be compared with each other are assigned opposite signs (positive or negative). In all cases, the sum of the weights must be zero. So, suppose you had four conditions, C1, C2, C3 and C4:
■■ If you wanted to compare only conditions 1 and 3, you could assign the weights: 1, 0, –1, 0.
■■ If you wanted to compare the average of the first two conditions with the third condition, you could assign the weights: 1, 1, –2, 0.
■■ If you wanted to compare the mean of the first two groups with the mean of the last two groups, you could use the weights: 1, 1, –1, –1.

If you wish to perform more than one planned comparison on the same data set, you need to check that the comparisons are independent of one another, that they are non-overlapping – these are called orthogonal comparisons. You can do this by taking each pair of comparisons and checking that the products of the coefficients assigned to each level sum to zero (see any good statistics text, for example Howell, 2013). For example, the weights –3, 1, 1, 1 and 0, 1, –1, 0 are orthogonal because (–3 × 0) + (1 × 1) + (1 × –1) + (1 × 0) = 0.

If you perform a planned comparison using the One-way ANOVA command, then you can design your own contrasts (as shown above), and enter the weights (coefficients) into a dialogue box. We show you how to do this next.


Using One-way ANOVA command

Click on Analyze ⇒ Compare Means ⇒ One-way ANOVA.

1. Click on the Contrasts button to perform planned comparisons; this will bring up the dialogue box below.

2. Type in here the coefficient of the first group or condition and click on Add. Repeat this for each of the conditions or groups.

3. The weights will appear in this box. The control condition (coefficient = –3) is being compared to the three experimental conditions (coefficient = 1). To double check you’ve entered the weightings correctly, the Coefficient Total should be 0.000.

The dialogue box above shows a planned comparison, where the control group (who were shown the witness giving evidence with her face visible) is compared with the three experimental groups (who were all shown the witness giving evidence with her face masked). A linear contrast is requested, and the coefficients have been entered, first for group 1, then for groups 2, 3 and 4. The output below shows that this comparison is significant.
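The same contrast can be requested in syntax. The sketch below assumes the chapter’s variable names and the weights shown in the dialogue box (–3 for the control group, which is entered first, then 1 for each of the three masking groups):

* Planned comparison: control (unmasked) group versus the three masking groups.
ONEWAY memory BY prescond
  /CONTRAST=-3 1 1 1
  /STATISTICS DESCRIPTIVES
  /MISSING ANALYSIS.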

SPSS output for contrasts
Obtained using menu items: Compare Means > One-way ANOVA

This row contains the values of t, df and p for the contrast you requested, and assuming equal variances.



Reporting the results



In a report you might write:

A planned comparison revealed that participants who saw the witness’s face unmasked remembered significantly more of her testimony than the participants in the three masking conditions (t = 3.69, df = 36, p = .001).

Note that the contrast test can tell you whether the conditions you compared are significantly different or not, but nothing about the direction of the difference(s). In order to fully interpret the result, you will need to inspect the descriptive statistics for the conditions or groups being compared.


Using General Linear Model command

If you use the General Linear Model method to perform the One-way ANOVA, you have the option of choosing between a range of preset contrasts. Click on Analyze ⇒ General Linear Model ⇒ Univariate.

1. Click on the Contrasts button to obtain the Univariate: Contrasts dialogue box (overlaid here).

2. Click here to view the list of contrasts available. Select the one appropriate for your planned comparison (these are described below). This will then appear in the SPSS output.

For some contrasts, you can decide whether the Reference Category is set to Last or First.

1. A Deviation contrast compares the effect for each level of the factor, except the reference category, with the overall effect. Thus, if there are three levels of the factor, two comparisons will be carried out. If the reference category is set to last, then levels 1 and 2 will be compared with the overall effect (of levels 1, 2 and 3 combined). If you select the reference category to be the first, then levels 2 and 3 will be compared with the overall effect of all three.
2. In a Simple contrast, each level of the factor is compared with the reference level, which can be set to either the first or last level. For example, if there are three levels and the reference category is set to the first level, then the second and the third levels will each be compared with the first level.
3. In a Difference contrast, each level of the factor except the first is compared with the mean effect of all previous levels. Thus, if there are three levels, level 3 is compared with the combined effects of level 2 and level 1, and level 2 is compared with level 1.
4. A Helmert contrast is the reverse of the Difference contrast, and the effect of each level of the factor except the last is compared with the mean effect of subsequent levels. If there are three levels, level 1 is compared with levels 2 and 3 combined, and level 2 is compared with level 3.
5. In a Repeated contrast, each level is compared with the previous level, so if there are three levels, level 2 is compared with level 1, and level 3 with level 2.
6. The Polynomial contrast should be used to look for trends in the data; for example, it can be used to look for a linear trend.
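These preset contrasts can also be requested through the /CONTRAST subcommand in Univariate syntax. A sketch, assuming the factor name used in this chapter (the other keywords are DEVIATION, SIMPLE, DIFFERENCE, REPEATED and POLYNOMIAL):

* Helmert contrasts: each level is compared with the mean of the subsequent levels.
UNIANOVA memory BY prescond
  /CONTRAST(prescond)=HELMERT
  /PRINT=DESCRIPTIVE
  /DESIGN=prescond.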

The Kruskal–Wallis test

To end this section, we introduce you to the nonparametric equivalent of the One-way between-subjects ANOVA, which can be used if your data fail to meet the assumptions for the ANOVA. For demonstration purposes, we perform the Kruskal–Wallis test on the same data used throughout this chapter.

How to do it

1. Click on the word Analyze.

2. Click on Nonparametric Tests.

3. Click on Legacy Dialogs.

4. Click on K Independent Samples. The Tests for Several Independent Samples dialogue box will appear (see below).

We recommend you use Legacy Dialogs as this gives you more information in the output compared with the new alternative.

5. Select the dependent variable ‘memory’ and move this to the Test Variable List box.

6. Select the grouping variable ‘prescond’ and move this to the Grouping Variable box.

7. Click on the Define Range button which will bring up the dialogue box below.


8. Enter the minimum and maximum values in the appropriate box and click on the Continue button to return to the Tests for Several Independent Samples dialogue box.

9. Click on the Options button in the Tests for Several Independent Samples dialogue box to bring up this dialogue box. We have selected Descriptive, but Quartiles may be more appropriate for your data.

10. Click on Continue to return to the Tests for Several Independent Samples dialogue box. Finally, click on OK.
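The equivalent Legacy Dialogs syntax is sketched below; it assumes the chapter’s variable names and that the group codes for ‘prescond’ run from 1 to 4:

* Kruskal-Wallis test on 'memory' across the four presentation conditions.
NPAR TESTS
  /K-W=memory BY prescond(1 4)
  /STATISTICS DESCRIPTIVES
  /MISSING ANALYSIS.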

SPSS output for Kruskal–Wallis test
Obtained by using menu items: Nonparametric Tests > K Independent Samples

This table was obtained via the Options button. The first row shows the amount participants correctly recalled of the witness testimony; however, it is for all 40 participants, regardless of the between-subjects condition they were in. For a report, descriptives for each group separately, obtained using Explore, would be much more useful (see Chapter 3, Section 4).

This table provides information about the calculations for the Kruskal–Wallis, which can be considered as an extension of the Mann–Whitney U test. Therefore, you might like to refer to the annotated output of that test for an explanation (Chapter 5, Section 5).

The calculated value for the Kruskal–Wallis, which is assessed for significance using the χ2 distribution.

The degrees of freedom = k-1, where k is the number of levels of the factor.

The p value.



Reporting the results

In a report you might write:

A Kruskal–Wallis test was conducted to examine the effect of presentation condition on participants’ recall of the witness testimony. This revealed a significant effect of presentation condition: χ2(3, N = 40) = 21.96, p < .001.

Section 3: ONE-WAY WITHIN-SUBJECTS ANOVA, PLANNED AND UNPLANNED COMPARISONS AND NONPARAMETRIC EQUIVALENT

Example study: the Stroop effect Many experiments have been conducted to investigate the Stroop effect (Stroop, 1935). The most common way of demonstrating this effect is to show participants the names of colours printed in an incongruous colour (e.g. the word ‘red’ written in green ink) and ask them to name the colour of the ink. Results show that this is not an easy task because of our tendency to read the word, which then interferes with the task of naming the colour of the ink. In one experiment conducted with undergraduate students, we devised three lists: One list was incongruent and contained four words with strong colour associations (grass, coal, blood and sky) repeated three times in a random order, each time in a different incongruent colour ink (e.g. ‘grass’ printed in black, red and blue ink). The second list was congruent and contained the same four words repeated three times in a random order, each time in their congruent colour ink (e.g. ‘grass’ printed in green ink). The third list was neutral and contained four new words, matched in word length to the original words, and repeated three times. These words were not associated with any particular colour and were printed in one of the four different colour inks (e.g. ‘table’ written in green). These three lists constituted the different experimental conditions, and all participants completed all three lists. The order of the lists was counterbalanced across participants. The design employed was a One-way within-subjects ANOVA design. The withinsubjects factor, the type of list, had three levels: incongruent, congruent and neutral.


The dependent variable was the total time taken in seconds to name the colour of the ink of the 12 words in the list. The hypothesis was that there would be an effect of type of list on performance, with the shortest naming time for the congruent list and the longest naming time for the incongruent list. (These data are available from macmillanihe.com/harrison-spss-7e.)

Understanding the output

The within-subjects ANOVA output contains several additional sections not seen previously in the between-subjects output. We describe these here and draw attention to them again when working through the output:

1. Mauchly’s test of sphericity: this is important. If you have two or more levels of a within-subjects factor, SPSS will produce a test called the Mauchly’s test of sphericity. The Mauchly’s test of sphericity is a statistical test to determine whether the data entered into the within-subjects ANOVA meet certain assumptions, rather like the Levene’s equality of variance test. The assumption of sphericity is that the variances of the differences between all possible pairs of levels of the factor are equal. Because the same participants are performing in each level of the within-subjects factor, you would assume that the correlations between all possible combinations of levels are roughly the same. If there are only two levels, there will be only one correlation, so this test is only valuable when there are three or more levels. A chi value is estimated to test the significance of the Mauchly’s test of sphericity procedure (hence the output reports ‘Approx. Chi-square’). The significance of this value of chi is reported. If it is significant (i.e. less than .05), then the assumption of sphericity has been violated. When this occurs there are two things you can do: corrections using epsilon or multivariate tests, described next.

2. Corrections using epsilon: SPSS provides three estimates of a statistic called epsilon that can be used to correct for a violation revealed by Mauchly’s test of sphericity. The greater the violation, the smaller the value of epsilon. All you need to do is decide which of the three estimates of epsilon to use. Greenhouse–Geisser epsilon is probably the most appropriate value to use, but if you have relatively few participants, this can be rather too conservative (i.e. its use will decrease the chances of finding a significant result) – in these cases the Huynh–Feldt epsilon may be preferable. The third estimate, the lower-bound epsilon, is a minimum value for epsilon that will give the most conservative correction. SPSS gives corrected values in the table. When reporting any result, make it clear which one you have used.

3. Multivariate tests: these make fewer assumptions about the data and hence are more appropriate when the Mauchly’s test of sphericity is significant. In the Multivariate Tests table, SPSS reports four different multivariate statistics: Pillai’s Trace, Wilks’ Lambda, Hotelling’s Trace and Roy’s Largest Root. Each of these tests reports a value of F with associated degrees of freedom and a significance value. You will probably find that there is little difference between the significance of F reported by these four procedures, so we suggest you pick one of them and report it. The multivariate values of F are always lower than the univariate values; hence, if a result is not significant by the univariate method, it cannot be significant by the multivariate method. For this reason, SPSS does not report the multivariate estimates when the univariate test is non-significant.

4. Finally, it is worth mentioning the Test of between-subjects effects table that appears towards the end of the output even when your design does not include a between-subjects factor. This can be ignored when you only have within-subjects factors in your design, as is the case here with the One-way within-subjects ANOVA. What SPSS is doing is assuming that ‘participant’ is an additional between-subjects factor in the analysis. One way to think of this is to say that the part of the output reporting the between-subjects effects is asking ‘Did all participants perform the same?’ It is in the nature of psychology that participants are variable in almost all tasks, and hence you will find that the F-ratio is invariably very high and highly significant. As we are not normally interested in this question of whether the participants are all performing in the same way (we usually want to know about general trends across groups of participants), we can ignore this section of the output. Indeed, you will rarely see this result reported in psychology papers.

How to do it

1. Click on the word Analyze.

2. Click on General Linear Model.

3. Click on Repeated Measures. The Repeated Measures Define Factor(s) dialogue box will appear (see below).


4. Replace the factor name suggested by SPSS by highlighting factor1 and typing the word ‘list’. ‘list’ represents your IV: the type of list, the factor manipulated in this Stroop study, which was the lists of words in different colour inks.

5. There were three different lists in the study, so type the number ‘3’ in the Number of Levels box and click on the Add button. The dialogue box will now show list(3) (see right).

6. Click on the Define button; the Repeated Measures dialogue box will appear (see below).

7. Move the variables into the Within-Subjects Variables box in a sensible order (see tip box below).

8. Click on the Options button to obtain descriptive statistics. The Repeated Measures: Options dialogue box will appear (see below).

As SPSS does a trend test, we entered the variables in line with the hypothesis: first, congruent list, then neutral list and then incongruent list. We hypothesised that the time taken to name the ink colour would be shortest for the congruent list, longer for the neutral list and longest for the incongruent list.

9. Click here to obtain the mean and standard deviation for each level of the factor. Click here if you wish to obtain partial eta squared (which is the same as eta squared for a One-way ANOVA). If you click here, SPSS will tell you it is ignoring this because there are no between-subjects factors.

10. Click on the Continue button to return to the Repeated Measures dialogue box.

Next, we can ask SPSS to produce a graph of the data using the Plots… button in the main Repeated Measures dialogue box. This opens up the Repeated Measures: Profile Plots dialogue box.


11. Select the independent variable ‘list’ and move it into the Horizontal Axis box.

12. Click Add – SPSS will only produce graphs of variables in the Plots box.

Notice that the IV (list) has been added to the Plots box.

13. Decide which type of graph you would like to produce; for One-way ANOVA a Bar Chart is the most commonly used.

14. Select the Include Error bars option, and choose the type of error bars you want to include, in this case 95% confidence intervals.

15. Click on the Continue button to return to the Repeated Measures dialogue box.

Click on OK to obtain the SPSS output shown below. You will find that there is a significant effect of type of list, and you may wish to include in your results section a table displaying the mean for each condition and the 95% confidence intervals (see Chapter 3, Section 5).
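A syntax sketch of this analysis is shown below; it assumes the three columns in the data file are named ‘cong’, ‘neutral’ and ‘incong’ and are entered in that order (the error bar settings for the plot may still need to be chosen through the dialogue box, depending on your version of SPSS):

* One-way within-subjects ANOVA on the factor 'list' (three levels).
* Polynomial contrasts give the linear and quadratic trend tests shown in the output.
GLM cong neutral incong
  /WSFACTOR=list 3 Polynomial
  /PLOT=PROFILE(list)
  /PRINT=DESCRIPTIVE
  /WSDESIGN=list.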

SPSS output for One-way within-subjects ANOVA
Obtained using menu items: General Linear Model > Repeated Measures

These are the names for each level of the factor ‘list’. If these were entered into the Repeated Measures dialogue box in a meaningful order, they will be displayed here in that order, and the table Tests of Within-Subjects Contrasts (shown below) will also be relevant.

Useful descriptives that you can incorporate into your report, obtained by ticking Descriptive statistics in the Repeated Measures: Options dialogue box.

If Mauchly’s Test of Sphericity (shown below) is significant, you might consider using one of the four multivariate statistics shown in this table, as these make fewer assumptions about the data. Guidance on these statistics was given towards the start of this section.


Remember that, should Mauchly’s test be significant, there are two options: either adopt a multivariate approach and report one of the four statistics given in the Multivariate Tests table above, or use the values for your chosen epsilon from the Tests of Within-Subjects Effects table (shown below).

Mauchly’s Test of Sphericity determines whether the data meet the assumption that the correlations between all the levels of the factor ‘list’ are roughly the same. Here, it is not significant, so this assumption has not been violated.

You also need the df for the Error term.

This row is the one you use when Mauchly’s test is not significant. It gives the values for the within-subjects factor ‘list’ that has three levels (the three types of list). Underneath are the three estimates of epsilon. Because Mauchly’s test was not significant, no correction is needed and the four different entries for F in this table are identical. For more information, see the start of this section.


This table shows the outcome of two trend tests: Linear and Quadratic.

This row is for the Linear trend test.

As mentioned earlier, you can ignore this table.

For these data, as we predicted, there is a significant linear trend, F (1,9) =102.74, p < .001. The means suggest that participants took the shortest time to name the ink colour for the congruent list and took the longest time to name the colour of the ink in the incongruent list, with the mean for the neutral list falling between these two values. Furthermore, for these data, there is no significant quadratic trend, F (1,9) = 2.66, p = .137. A linear trend test is used to see if the points tend to fall onto a straight line (as here). A quadratic trend test looks for a U-shaped or inverted U-shaped trend. If you entered the three levels in the order ‘cong’, ‘incong’ and ‘neutral’, then the quadratic trend would be significant. You might like to try this.


This is the graph requested in the Profile Plots dialogue box, which illustrates participant performance in the three different conditions. When error bars do not overlap, it is likely that the conditions are significantly different – although you need to do post-hoc tests or planned comparisons to confirm this.

Calculating eta squared: one measure of effect size

As with One-way between-subjects ANOVA, we can calculate eta squared to measure effect size. Again, eta squared (η2) is calculated by dividing the sum of squares for the IV (list) by the total sum of squares. As the total sum of squares isn’t given in the Tests of Within-Subjects Effects, we need to calculate it by hand by simply adding the IV sum of squares to the Error sum of squares.

■■ For ‘list’: η2 = 153.267 / (153.267 + 20.067) = 153.267 / 173.334 = .88



Reporting the results



In a report you might write:


A One-way within-subjects ANOVA was conducted on response times to a Stroop task. There was a significant effect of the type of list: F (2,18) = 68.74, p < .001, η2 = .88. A significant linear trend emerged, F (1,9) = 102.74, p < .001, with naming times increasing across congruent, neutral and incongruent lists.

You would also want to report in your results section your descriptive statistics and information regarding the confidence intervals for each condition.

Note that the Tests of Within-Subjects Contrasts table shows only whether a trend is significant or not. It is not a test of whether the individual conditions significantly differ from one another. However, ANOVA commands via the General Linear Model allow you to choose between a range of preset contrasts, described in Section 2. It is also possible to carry out some unplanned (post-hoc) comparisons. We illustrate how to do each of these next for a within-subjects factor, using the data analysed above. Consult Section 1 for general guidance on planned versus unplanned comparisons.

Planned comparisons: more contrasts for within-subjects factor

Click on Analyze ⇒ General Linear Model ⇒ Repeated Measures. Click on the Contrasts button in the Repeated Measures dialogue box.

Click here to view the list of contrasts available. The default is Polynomial contrasts, which we saw in the output described above, showing the linear and quadratic trends.


An alternative is the Repeated contrast, where each level is compared with the previous level (see Section 2 for a description of each type of contrast). The output obtained shows the differences between the two pairs of levels to be significant (see below).

After selecting the contrast you want, click on the Change button. Then click on the Continue button to return to the Repeated Measures dialogue box.

The Tests of Within-Subjects Contrasts table will appear in the output. This shows the contrasts to be significant.
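In syntax, choosing a different preset contrast simply changes the keyword on the WSFACTOR subcommand. A sketch, assuming the same column names as before:

* Repeated contrasts: level 2 versus level 1, and level 3 versus level 2.
GLM cong neutral incong
  /WSFACTOR=list 3 Repeated
  /PRINT=DESCRIPTIVE
  /WSDESIGN=list.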

Unplanned comparisons for within-subjects factor ANOVA

In terms of unplanned (post-hoc) comparisons, the choice for a within-subjects factor is more limited than for a between-subjects factor. If you click on the Post Hoc button in the Repeated Measures dialogue box, you will not see the within-subjects factor listed. However, a smaller selection of post-hoc tests is available if you select the EM Means… option in the Repeated Measures dialogue box and open up the Estimated Marginal Means box.

We have moved ‘list’ into the Display Means for: box, and in doing so, the Compare main effects box became active.

Click on the Continue button to return to the Repeated Measures dialogue box.

If you click here, the Confidence interval adjustment box becomes active. Click on the arrow to view all three options. These vary in terms of how conservative (cautious) they are. The default, LSD(none), is not recommended, and Sidak is less conservative than Bonferroni (Field, 2005).

The Pairwise Comparisons table will appear in the output. This shows the differences between all of the pairs to be significant.
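The same pairwise comparisons can be requested with the EMMEANS subcommand. A sketch, again assuming the chapter’s column names, with the Sidak adjustment (replace SIDAK with BONFERRONI for the more conservative option):

* Pairwise comparisons between the three levels of 'list', with Sidak adjustment.
GLM cong neutral incong
  /WSFACTOR=list 3 Polynomial
  /EMMEANS=TABLES(list) COMPARE ADJ(SIDAK)
  /WSDESIGN=list.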

Finally, it is worth noting that, as with the One-way between-subjects ANOVA, there is a nonparametric version of the One-way within-subjects ANOVA, and this is described in the last part of this section.

The Friedman test

The Friedman test is the nonparametric equivalent of the One-way within-subjects analysis of variance. Confusingly, the Friedman test is sometimes referred to as the ‘Friedman two-way ANOVA’ (this is because for a within-subjects analysis of variance, the participants are also considered to be a factor). For demonstration purposes, we perform this test using the same data as for the One-way within-subjects ANOVA.


How to do it

1. Click on the word Analyze.

2. Click on Nonparametric Tests.

3. Click on Legacy Dialogs.

4. Click on K Related Samples. The Tests for Several Related Samples dialogue box will appear (see below).

We recommend you use Legacy Dialogs as this gives you more information in the output compared with the new alternative.

5. Select the levels of the within-subjects factor you would like to compare (here the naming times for the three lists) and move these to the Test Variables box.

6. Click on Statistics to obtain the dialogue box below.

7. Select Descriptive or Quartiles, depending on what is most appropriate for your data. We demonstrate the second option.

8. Click on the Continue button to return to the Tests for Several Related Samples dialogue box. Finally, click on OK.
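The equivalent Legacy Dialogs syntax is sketched below, assuming the same three column names as for the within-subjects ANOVA:

* Friedman test across the three list conditions, with quartiles as the descriptives.
NPAR TESTS
  /FRIEDMAN=cong neutral incong
  /STATISTICS QUARTILES
  /MISSING LISTWISE.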

SPSS output for Friedman test
Obtained by using menu items: Nonparametric Tests > K Related Samples

Descriptives for each level of the within-subjects factor, here in the form of quartiles, were obtained via the Statistics button in the dialogue box.

In the Friedman test, for each participant, the scores for each level of the factor are put into rank order, with the faster (smaller) response times allocated lower (smaller) ranks. SPSS has calculated the mean rank for each level of the within-subjects factor. (Here, the three mean ranks are integers; this is unlikely to be the case in most studies, and occurred here because all participants were fastest with the congruent list and slowest with the incongruent list.)

The calculated value for the Friedman test, which is assessed for significance using the χ2 distribution.

The degrees of freedom = k-1, where k is the number of levels of the factor.

The p value.






Reporting the results

In a report you might write:

A Friedman test revealed that response times to a Stroop task varied significantly for incongruent, congruent and neutral lists: χ2 (2, N = 10) = 20.00, p < .001.

Summary

■■ This chapter introduced you to One-way ANOVA, planned and unplanned comparisons and nonparametric equivalents of the One-way ANOVA.
■■ These tests of differences are used for experimental designs involving more than two groups or conditions.
■■ Effect size (eta squared) can be calculated by hand by dividing the sum of squares for the IV by the total sum of squares. For One-way ANOVA, this can also be produced by selecting ‘Estimates of effect size’ in the Options dialogue box.
■■ Appropriate descriptive statistics, the mean and standard deviation, can be obtained either by following the advice in Chapter 3 or by selecting the appropriate options on the ANOVA Options dialogue box.
■■ Error bar charts are often used to display statistically significant findings, and can be produced using the ANOVA options in some cases.
■■ If your dependent variable is a total score calculated from several raw scores, which have already been entered into your data file, then see Chapter 4 for guidance on how to create such a total score in SPSS.
■■ For guidance on incorporating SPSS output into a report, or on printing the output, see Chapter 14.

9: Factorial analysis of variance

In this chapter
●● An introduction to factorial analysis of variance (ANOVA)
●● Two-way between-subjects ANOVA
●● Two-way within-subjects ANOVA
●● Mixed ANOVA

SPSS for Psychologists online
Visit macmillanihe.com/harrison-spss-7e for data sets, online tutorials and exercises

Section 1: AN INTRODUCTION TO FACTORIAL ANALYSIS OF VARIANCE (ANOVA)

■■ The previous chapter introduced you to the simplest type of ANOVA: the One-way ANOVA. While this allows you to analyse data collected from experimental designs with more than two conditions, it only allows you to explore the influence of one independent variable at a time. However, human beings are complicated creatures, and psychologists often want to investigate how multiple factors (or variables) can impact people’s behaviour. In these situations, researchers can use what is called a factorial design to investigate their hypotheses or research questions. Factorial simply means that the experimental design has more than one IV.

■■ Factorial ANOVA allows us to analyse the data from these types of factorial design, by allowing us to investigate the effect of more than one IV at once. For example, we could examine the effect of participants’ sex as well as their age on their memory for a list of words. Here, we have two IVs (sex and age) and one dependent variable (DV) (memory score). A single ANOVA test will allow us to examine simultaneously the effect of these two IVs. In fact, ANOVA can handle any number of IVs in a single experiment, but, in practice, we rarely include more than three or four for reasons that will become apparent shortly.


■■ A major advantage of ANOVA is that it allows us to investigate how these IVs combine to affect the DV. For example, we can ask questions about how the sex and the age of a participant combine to affect memory score – it might be that male participants’ performances decline with age but that female participants’ performances improve with age. Such an interaction between these two variables is of theoretical importance, and it is only by investigating both variables in one design that we can discover this interaction.

■■ For more information about the assumptions of ANOVA, and how it works, please refer back to Chapter 8. The basic principles underlying One-way and Factorial ANOVA are the same.

Different types of Factorial ANOVA

As with One-way ANOVAs and t-tests, Factorial ANOVAs come in a number of different varieties depending on your experimental design. These are:

Between-subjects ANOVA

Between-subjects (or Independent) ANOVA is used when all of your factors (or independent variables) are between subjects. That is, when different participants take part in each of the different levels (or conditions).

Within-subjects ANOVA

Within-subjects (or Repeated Measures) ANOVA is used when all of your factors (or independent variables) are within subjects. In this case, this means that the same participants take part in all of the conditions in your experiment.

Mixed ANOVA

Mixed ANOVA is used when you have an experimental design that includes one or more within-subjects factor(s) and one or more between-subjects factor(s).

Factorial ANOVAs

Exactly which type of Factorial ANOVA you use will depend on the experimental design you have (i.e. between-subjects, within-subjects or mixed), but also on the number of factors you have in your study. When describing an ANOVA design, we need to specify three things:
1. The number of factors involved in the design.
2. How many levels there are of each factor.
3. Whether each factor is a within- or between-subjects factor.

Just as a one-way ANOVA has one factor, we can use the number of factors to describe a two-way ANOVA (which has two factors), or a three-way ANOVA (which has three factors) and so on (e.g. a six-way ANOVA would have six factors). What this does not tell you is how many levels (or conditions) each factor has. You could describe this in longhand, but there is an easier convention. For example, a three-way ANOVA, in which the first factor, ‘Sex’, had two levels, the second factor, ‘Age’, had three levels and the third factor, ‘Drug dose’, had five levels, could be described more simply as a 2*3*5 ANOVA design. Note that, in this terminology, the number of numerals (three in this case) describes the number of factors, and the values of the numerals indicate the number of levels of each of these factors. Using this terminology, we just need to make it clear whether the factors were within- or between-subjects factors. We could do this by writing: A 2*3*5 (Sex*Age*Drug dose) mixed ANOVA design was employed, where Sex and Age were between-subjects factors and Drug dose was a within-subjects factor.

Main effects and interactions

Using ANOVA, we can analyse data from studies that incorporate more than one factor. We can assess the effect of each of these factors on their own and the interaction between the factors. The term ‘main effect’ is used to describe the independent effect of a factor. For example, in the 2*3*5 ANOVA described above, three main effects will be reported. The main effect of ‘Sex’ will tell us whether men performed significantly differently from women, irrespective of their age or drug dose. The main effect of ‘Age’ will tell us whether age affects performance, irrespective of sex or drug dose. Finally, the main effect of ‘Drug dose’ will tell us whether drug dosage affects performance, irrespective of the sex or age of the participants. These main effects simply compare the mean for one level of a factor with the mean of the other level(s) of that factor; for example, comparing mean male performance levels to mean female performance levels.

Interactions, on the other hand, assess the combined effect of the factors. An interaction that assesses how two factors combine to affect performance is called a two-way interaction. When three factors are involved, the interaction is known as a three-way interaction.

Interactions and moderation

Interactions are important, as they can tell us how one variable might moderate the effects of another. Moderation is an important statistical concept in psychological research, as psychologists are often interested in how the influence of an IV on a DV might be moderated by a third variable. When we carry out a standard Factorial ANOVA, we are usually just looking to see whether two variables (A and B) interact in terms of their effects on a criterion variable C. We don’t usually make any distinction in the role of A and B; they are both just considered to be independent variables (or factors). If an interaction occurs, this means that the effect of A on C is different at different levels of B; and likewise, the effect of B on C is different at different levels of A.

In contrast, if you are interested in the moderating effects of a specific variable, then you assign your factors differently: One remains an IV and the other becomes the Moderator. In this case, you would specifically investigate the effect the Moderator, B, has on the relationship between A and C. Mathematically, there is no difference between interaction and moderation, and they are analysed in the same way. However, the concept of moderation affects the way you interpret the results.


Understanding ANOVA output

When attempting to understand the output from the ANOVA command in SPSS, it is helpful if you know in advance how many results you are looking for:
1. A one-way ANOVA, where the single factor is called A, will give rise to just a single main effect of A.
2. A two-way ANOVA, where the factors are called A and B, will give rise to two main effects (main effect of A and main effect of B), and a single two-way interaction (A*B). This is a total of three results (3 F-ratios).
3. A three-way ANOVA, where the factors are called A, B and C, will give rise to three main effects (main effect of A, main effect of B and main effect of C), three two-way interactions (A*B, A*C and B*C) and a single three-way interaction (A*B*C). This is a total of seven results.
4. A four-way ANOVA, where the factors are called A, B, C and D, will give rise to four main effects (main effect of A, main effect of B, main effect of C and main effect of D), six two-way interactions (A*B, A*C, A*D, B*C, B*D and C*D), four three-way interactions (A*B*C, A*B*D, A*C*D and B*C*D), and a single four-way interaction (A*B*C*D). This is a total of 15 results.

You can now see why it is unusual to include more than four factors in a design. The number of possible interactions rises steeply as the number of factors increases. Furthermore, it is unlikely that you hypothesised about the shape of these higher-level interactions, and if they are significant, they can be hard to describe and/or explain. Using SPSS, it is easy to undertake a four- or even five-way ANOVA, but rather more difficult to explain the results. Our advice is to try to limit yourself to a maximum of three factors.

The next three sections will walk you through how to carry out between-subjects, within-subjects and mixed ANOVA using SPSS.

Section 2: TWO-WAY BETWEEN-SUBJECTS ANOVA

Example study: the effect of defendant’s attractiveness and sex on sentencing

To practise how to analyse data with a two-way between-subjects ANOVA design, we will consider the possible effects of attractiveness and also the gender of the defendant in a mock trial. In the study described here, conducted by one of our past students, the testimony of a hypothetical defendant describing a murder and admitting guilt was presented as written text to 60 participants: 20 participants simply received the written text with no photo attached, 20 participants received the text and a photo of an attractive defendant, and 20 participants received the text and a photo of an unattractive defendant. The photo was of either a man or a woman. Participants were asked to indicate how many years in jail the defendant should receive as punishment.


The design employed was a 3*2 between-subjects ANOVA design. The first between-subjects factor was the knowledge about attractiveness, which had three levels; the factor is operationalised as showing no photo of the defendant (so no knowledge about attractiveness is provided, a control condition), a photo of an attractive defendant or a photo of an unattractive defendant. The second between-subjects factor was same or different sex, operationalised by showing a photo of the defendant of the same or opposite sex as the participant. Sex of the defendant was stated in the written text for the participants who received no photo. The dependent variable was the sentence given, operationalised as how many years the defendant should spend in prison, ranging from a minimum of 3 to a maximum of 25. The hypothesis tested was that the unattractive defendant would be sentenced more harshly and that the length of sentence given might also depend on the sex of the participant. (These data are available from macmillanihe.com/harrison-spss-7e.)

How to do it

1. Click on the word Analyze.

2. Click on General Linear Model.

3. Click on Univariate. The Univariate dialogue box will appear.

4. Select the dependent variable ‘Sentence awarded in years’ and move it into the Dependent Variable box.

5. Select the two between-subjects factors, ‘Same or different sex’ and ‘Attractiveness of defendant’, and move them into the Fixed Factor(s) box.

6. Click on Options to obtain descriptive statistics. The Univariate: Options dialogue box will appear (see below).


7. Click here to check for the assumption of homogeneity of variance.

8. Click here to obtain the mean and standard deviation for each level of the factor.

9. Click here if you want to obtain partial eta squared, a measure of effect size, but see guidance provided in Chapter 8.

10. Click on the Continue button to return to the Univariate dialogue box.

11. Click on Post Hoc… to obtain some unplanned comparisons. Note – you can also obtain post-hoc tests from the EM Means… option, as illustrated in Chapter 8.


12. You only need to do post-hoc tests for factors (or variables) that have more than two levels. As sexdiff only has two levels, we have only moved attract across to the Post Hoc Tests for: box. Doing this makes a number of post-hoc options below become active. In this case, we have selected Bonferroni (a relatively conservative test), but consult your statistics textbook for more about the other options.

13. Click on Continue to return to the Univariate dialogue box.

14. Click on Plots to obtain an interaction graph, which is useful should there be an interaction between the two between-subjects factors.


15. Select each between-subjects factor, moving ‘attract’ into the Horizontal Axis box and ‘sexdiff’ into the Separate Lines box. (As a general rule, the factor with the fewest levels should go in the Separate Lines box.) The Add button will become active. Click on this to add the graph to the Plots: box.

16. Decide which type of graph you would like to produce; for Factorial ANOVA a Line Chart can be really helpful when interpreting your interaction. Select the Include Error bars option, and choose the type of error bars you want to include, in this case 95% confidence intervals.

17. Click on Continue to return to the Univariate dialogue box.

Finally, click on the OK button in the Univariate dialogue box to display the output. The annotated output is shown below.
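For reference, a syntax sketch of the whole analysis is given below. The factor names ‘sexdiff’ and ‘attract’ are those used in this chapter, but the dependent variable name ‘sentence’ is only an assumption, so substitute whatever name you gave that column in your own data file:

* 3*2 between-subjects ANOVA with descriptives, Levene's test and partial eta squared.
* Bonferroni post-hoc tests are requested only for 'attract', the factor with more than two levels.
UNIANOVA sentence BY sexdiff attract
  /POSTHOC=attract(BONFERRONI)
  /PLOT=PROFILE(attract*sexdiff)
  /PRINT=DESCRIPTIVE HOMOGENEITY ETASQ
  /DESIGN=sexdiff attract sexdiff*attract.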


SPSS output for two-way between-subjects ANOVA
Obtained using menu items: General Linear Model > Univariate

Univariate Analysis of Variance

SPSS reminds you of the factors you are analysing, what the levels of each factor are and the number of participants in each level.

This table was produced by requesting Descriptive statistics in the Univariate: Options dialogue box.

These three rows show descriptives for each level of the factor ‘attract’, collapsing across the levels of the other factor, ‘sexdiff’.

These two Total rows show descriptives for each level of the factor ‘sexdiff’, collapsing across the levels of the other factor, ‘attract’.

Each of these six rows shows the descriptives for one of the conditions of the study. Thus, the first row is for participants who were given a photo of an attractive defendant and were the same sex as that defendant.


SPSS tests the assumption of equality of variances in a number of different ways. In this case, we are interested in the statistic based on the mean. If Levene’s test is significant, this suggests that the group variances are likely to be significantly different, suggesting that the assumption of homogeneity of variance has been violated. Here, it is not significant, so the assumption has been met.

This table shows the outcome of the analysis of variance. Each row shows information for a source of variance.

This row shows information about the main effect of the factor ‘sexdiff’.

This row shows information about the main effect of the factor ‘attract’.

This row shows information about the interaction between the factors ‘sexdiff’ and ‘attract’.


SPSS prints out a matrix (as it does for correlations), and you have to pick out the three possible comparisons.

As our factor ‘attract’ had three levels, there are three possible comparisons. Their p values are highlighted.

This is the interaction graph requested in the Univariate: Profile Plots dialogue box. The graph suggests that the effect of ‘attract’ is similar across both levels of ‘sexdiff’ – the two lines are nearly parallel, and the error bars at each point overlap.


Calculating eta squared: one measure of effect size

In Chapter 8, Section 1 we provided guidance on reporting effect size when carrying out ANOVAs. Although SPSS will calculate partial eta squared, this is not easy to interpret as it is an adjusted measure: the variance explained by one factor after taking into account the variance explained by the other factor(s). Instead, Mulhern and Greer (2011) recommend eta squared, which is relatively straightforward to calculate by hand and will tell you the proportion of the total variability accounted for by each factor and interaction in your design. We will demonstrate how to do this now. Note, however, that there are alternative measures which may be preferable, as they attempt to estimate the effect size in the population rather than just in the sample (see Fritz, Morris and Richler, 2012 for a review and guidance on how to calculate these, and/or consult your statistics text).

Eta squared (η2) can be calculated from the ANOVA output. It is the sum of squares for the factor or interaction divided by the corrected total sum of squares, and these values can be found in the Tests of Between-Subjects Effects table.

■■ For the factor ‘sexdiff’: η2 = 6.02 / 637.25 = .009
■■ For the factor ‘attract’: η2 = 422.50 / 637.25 = .663
■■ For the interaction ‘sexdiff*attract’: η2 = 3.03 / 637.25 = .005

The factor ‘attract’ accounts for approximately 66% of the variance in the dependent variable, whereas the other factor, ‘sexdiff’, and the interaction account for only a tiny proportion. The remainder of the variance is accounted for by error (205.7 / 637.25 = .32).

Reporting the results


In a report you might write:

A two-way between-subjects ANOVA was conducted on sentencing judgements. Whether the sex of the defendant was the same as or different from the sex of the participant did not affect the length of sentence given (F (1,54) = 1.58, p = .214, η2 = .009). However, information about the attractiveness of the defendant did influence sentencing judgements (F (2,54) = 55.46, p < .001, η2 = .663), see graph below. There was no significant interaction between these two factors (F (2,54) = 0.40, p = .674, η2 = .005).

Bonferroni post-hoc comparisons showed significant differences between no picture and attractive conditions (p < .001), no picture and unattractive conditions (p < .001), and attractive and unattractive conditions (p < .001).

You may choose to display an illustration of the significant main effect in an error bar graph. See Chapter 5, Section 3 for guidance on how to obtain such a graph. Here, there is no overlap between the bars for each level of ‘attract’, which is indicative of the significant differences revealed by the pairwise post-hoc comparisons.


Section 3: TWO-WAY WITHIN-SUBJECTS ANOVA

Example study: the effects of two memory tasks on finger tapping performance

To practise a two-way within-subjects ANOVA, we shall look at an experiment examining the effects of two memory tasks on tapping performance. Research has identified that right index finger tapping is largely controlled by the left hemisphere, and left index finger tapping by the right hemisphere. If a cognitive task is performed at the same time as this finger tapping task, then the way in which the cognitive task interferes with such tapping could reflect the extent to which either hemisphere is involved in controlling the cognitive task. Many studies that required participants to tap as fast as possible with their index finger while also performing a verbal task found that right-hand tapping was disrupted more than left-hand tapping. This result is compatible with the notion that the left side of the brain, more so than the right side, is involved in controlling both right-hand tapping and many verbal tasks.

In a study by Towell, Burton and Burton (1994), participants were asked to tap with each hand while memorising either the words presented to them on a screen (a verbal memory task) or the position of the words on the screen (a visuospatial memory task). As before, memorising the words should disrupt right-hand tapping more than left-hand tapping due to the language processing involved. However, as the right side of the brain is thought to be involved in visuospatial tasks, memorising the positions of words should disrupt left-hand tapping more.

The design employed was a 2*2 within-subjects ANOVA. Each factor had two levels: the first was the tapping hand (left or right hand) and the second was the memory task (memorising the words or memorising the positions). All participants were tested under each possible combination of the two factors, making it a within-subjects design. The dependent variable was a percentage change score, showing the extent to which tapping is slowed down by the concurrent performance of the memory task. The hypothesis tested was that there would be an interaction between tapping hand and memory task. This hypothesis was supported, and for the purposes of this book, we have created a data file that will reproduce some of the findings reported by Towell et al. (1994). (These data are available from macmillanihe.com/harrison-spss-7e.)

Labelling within-subjects factors

Consider the factors and levels in this example; they could be set out as in Table 9.1. As each factor has two levels, there are four conditions in total, each with one level of one factor and one level of the other factor. When entering data from a within-subjects ANOVA, you will need to have a column for each condition that each participant takes part in. In this case, a 2*2 within-subjects ANOVA has 2 x 2 = 4 conditions (just as a 2*3 within-subjects ANOVA will have 6 conditions, or a 2*2*3 within-subjects ANOVA will have 12).


Table 9.1  The numbering system for within-subjects factors

Factor 1                                  Tapping hand
Levels                                    Right                       Left
Factor 2                                  Memory task                 Memory task
Levels                                    Words       Position        Words       Position
Column name, SPSS data
file, for conditions                      h1s1        h1s2            h2s1        h2s2

In this example, the name that was given in the SPSS data file to each column containing the data for each condition incorporates a number for each level of each factor, as shown in the bottom row of Table 9.1. For example:
■■ ‘h1s1’ means tapping hand 1 (right) and stimulus for task 1 (memorising words).
■■ ‘h2s2’ means tapping hand 2 (left) and stimulus for task 2 (memorising positions).

You should jot down a rough table such as this before entering the data for any design with two or more within-subjects factors. This will help you when you define the within-subjects factors, because you will find that the numbers you have used for the column names will match the numbers that SPSS uses when requesting variable selection.

How to do it

1. Click on the word Analyze.

2. Click on General Linear Model.

3. Click on Repeated Measures. The Repeated Measures Define Factor(s) dialogue box will appear (see below).


4. Change the factor name suggested by SPSS by highlighting factor1 and typing the word that represents the first factor, ‘hand’.

5. Type the number ‘2’ in the Number of Levels box and click on the Add button; hand(2) will appear in the box next to Add (see below).

6. Repeat steps 4 and 5 for the second factor, i.e. type in the name that represents the second factor ‘task’ in the Within-Subject Factor Name: box and type ‘2’ in the Number of Levels box and click on the Add button.

7. Click on the Define button; the Repeated Measures dialogue box will appear (see below).

This key helps explain the numbers in the brackets below: ‘hand’ refers to the first number and ‘task’ to the second number. This will help you enter the variables in the correct order into the Within-Subjects Variables box. The variable names are given in brackets after the variable labels.

8. Select 'Rt hand and word [h1s1]', which is the condition where the level 'hand 1' (the right hand) is combined with the level 'stimulus 1' (memorising words); see Table 9.1.

9. Click on the arrow button to add it to the list in the Within-Subjects Variables box, where ‘Rt hand and word [h1s1] ’ should appear next to the slot (1,1).


10. Repeat steps 8 and 9 for the remaining variables. 'Rt hand and position [h1s2]' should be moved into the slot next to (1,2), 'Left hand and word [h2s1]' into the slot next to (2,1) and 'Left hand and position [h2s2]' into the slot next to (2,2).

11. Click on the EM Means… button to obtain descriptive statistics. If you have a factor with more than two levels, you can also carry out unplanned comparisons through this option (see Chapter 8, Section 3). The Post Hoc… option does not work for repeated measures variables.

12. Move ‘hand’, ‘task’ and ‘hand*task’ into the box labelled Display Means for.

13. Click on the Continue button to return to the Repeated Measures dialogue box.

Click on OK and SPSS will perform the calculations. You may wish to obtain an interaction graph should the analysis reveal a significant interaction – this is an option available on the Repeated Measures dialogue box; see the steps outlined below.


How to obtain an interaction graph

1. Click on the Plots button. This will bring up the Repeated Measures: Profile Plots dialogue box (see below).

2. Select one factor to move into the Horizontal Axis: box and one factor to move into the Separate Lines: box.

3. Click on the Add button, to add it to the Plots box.

4. Decide which type of graph you would like to produce; for Factorial ANOVA a Line Chart can be really helpful in terms of interpreting your interaction. Select the Include Error bars option and choose the type of error bars you want to include, in this case 95% confidence intervals.

5. Click on the Continue button to return to the Repeated Measures dialogue box.

Click on OK to obtain the ANOVA output, shown below, with the interaction graph at the end of the output.

The output for a within-subjects ANOVA is lengthier than that for a between-subjects ANOVA, and we explain the purpose of some of the tables at the start of Chapter 8, Section 3. If you have not read this already, now would be a good time to do so.

SPSS output for two-way within-subjects ANOVA
Obtained using menu items: General Linear Model > Repeated Measures

This table is important when there are more than two levels of a within-subjects factor and when Mauchly's Test of Sphericity (next table) is significant. See the guidance provided at the start of Chapter 8, Section 3 on understanding the output of within-subjects ANOVAs.

This table is important when there are more than two levels. Here, both of the withinsubjects factors have only two levels, so we don’t need to use it.


This row gives the values for the factor 'hand', the within-subjects factor with two levels (right and left).

This row gives the values for the factor 'task', the within-subjects factor with two levels (memorising the word or memorising its position).

This table shows the outcome of the analysis of variance. For information about the nonhighlighted rows, see Chapter 8, Section 3.

This row gives the values for the interaction between the factor ‘hand’ and the factor ‘task’.

You also need the degrees of freedom for the error associated with each main effect or interaction. In this example, the error df is the same for all three. This is not always the case.

This table shows the outcome of trend tests. Each factor only has two levels, and so: 1. Only linear tests can be carried out, and not quadratic. 2. The values are simply those for the analysis of variance. If, however, you have at least one factor with three or more levels, this table would be useful, as shown in the one-way within-subjects ANOVA example in Chapter 8.

You can ignore this table; see guidance in Chapter 8, Section 3.

These three tables give the descriptives requested in the EM Means… box: ‘hand’, ‘task’ and ‘hand*task’ were moved into the box labelled Display Means for.

This table shows descriptives for each level of the factor ‘hand’, collapsed across the other factor ‘task’. We used the code 1 = right and 2 = left, so the first row is for the right hand, and the second row is for the left hand.

This table shows descriptives for each level of the factor ‘task’ collapsed across the two levels of the factor ‘hand’.

This table shows descriptives for each of the conditions of the study. Thus, the first row gives details of performance when participants were tapping with their right hand while memorising words. The bottom row gives details of performance when participants were tapping with their left hand while memorising the positions of the words.


The interaction graph shown below was obtained by clicking on the Plots button at the bottom of the Repeated Measures dialogue box. The labels and title are not helpful, so double-click on the graph in the SPSS output to go into the Chart Editor dialogue box. Now double-click on the labels and title to edit them.

The lines on the graph can help you interpret your interaction. If the lines are converging or diverging (as is the case in this example), it suggests there may be an interaction between the factors. This is because it illustrates that there is a different pattern of scores across the levels of one IV for each level of another (you will need to look at the ANOVA statistics to find out whether this interaction is significant or not). If the lines are parallel, it is unlikely that the factors are interacting. When reporting a significant interaction, try to put what you see in the graph into words, linking the patterns you see in the different DV scores to the different factors and levels. For example, start by looking at the lines. Each line represents one condition of one IV (in this case, task type). If a line is sloping upwards, it means that the scores in that condition are increasing across the levels of the other IV (in this case, increasing across the right and left hand conditions); if a line is sloping downwards, then the opposite is true. As increased scores mean more disruption to tapping, in this case we could say something like:

■ Right-hand tapping (hand 1) was more disrupted by memorising words (task 1) than was left-hand tapping (hand 2).

■ Left-hand tapping was more disrupted by memorising the position of the words than was right-hand tapping.

Next, look at the groups of points on the graph: are they closer together for some conditions than others? The degree of overlap between the error bars gives an impression of how different (or not) participant performance was on tasks 1 and 2 (i.e. remembering the word vs the position) according to the different levels of the second variable (i.e. whether they were tapping with their right or left hand). This time we could say something like:

■ Right-hand tapping (hand 1) was more disrupted by memorising words (task 1) than by remembering their position (task 2).

■ Left-hand tapping was similarly disrupted by both tasks.

Calculating eta squared: one measure of effect size

As with one-way within-subjects ANOVA, we can calculate eta squared to measure effect size. We need to do this for each factor, and for the interaction term. Again, eta squared (η2) is calculated by dividing the sum of squares for the IV (or interaction) by the total sum of squares. However, as the total sum of squares isn't given in the Tests of Within-Subjects Effects table, we need to calculate it by hand for each IV and interaction by adding the appropriate sum of squares to the related Error sum of squares.

■ For 'hand': η2 = 2.295 / (2.295 + 398.267) = 2.295 / 400.562 = .006

■ For 'task': η2 = 70.906 / (70.906 + 834.213) = 70.906 / 905.119 = .08

■ For 'hand * task': η2 = 21.441 / (21.441 + 102.585) = 21.441 / 124.026 = .17
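If you prefer not to do this arithmetic by hand, it is easily scripted. Below is a minimal Python sketch that reproduces the three values above; the sums of squares are simply copied from the Tests of Within-Subjects Effects table, and nothing here is an SPSS feature.

```python
def eta_squared(ss_effect, ss_error):
    """Eta squared for a within-subjects effect: SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

# Sums of squares taken from the Tests of Within-Subjects Effects table above.
effects = {
    "hand":        (2.295, 398.267),
    "task":        (70.906, 834.213),
    "hand * task": (21.441, 102.585),
}

for name, (ss_effect, ss_error) in effects.items():
    print(f"{name}: eta squared = {eta_squared(ss_effect, ss_error):.3f}")
# hand: eta squared = 0.006
# task: eta squared = 0.078
# hand * task: eta squared = 0.173
```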


 Reporting the results

In a report you might write:

A two-way within-subjects ANOVA was conducted on decrement of finger tapping speed. The main effect of tapping hand was not significant: F (1,23) = 0.13, p = .719, η2 = .006. The main effect of type of task was not significant: F (1,23) = 1.96, p = .175, η2 = .08. There was a significant interaction between tapping hand and type of task: F (1,23) = 4.81, p = .039, η2 = .17. This interaction is displayed in the graph above, suggesting memorising words disrupted right-hand tapping more than left-hand tapping. The opposite pattern was observed when participants memorised the position of the words. While memorising the words was generally more disruptive to hand tapping than memorising position, this was more pronounced in the right hand condition.

You would also want to report your descriptive statistics (e.g. means and standard deviations) and information regarding the confidence intervals for each condition in your results section. If you wanted to break down the interaction to find out exactly where the significant differences lie, you could do this using follow-up paired t-tests (see Chapter 5). To find out whether performance on the two tasks was significantly different for the right-hand tapping condition, you would need to compare conditions h1s1 and h1s2. You could then do the same for the left hand (h2s1 vs h2s2). You could also explore whether there was a difference in the performance of the right and left hand for (a) the memorising word condition (h1s1 vs h2s1), and (b) the memorising position task (h1s2 vs h2s2). However, when reporting multiple follow-up t-tests you would need to carry out a correction to your alpha level (i.e. the level at which we accept significance, in this case p < .05) to avoid a Type I error. The simplest way to do this is to manually apply a Bonferroni correction by dividing the alpha level by the number of comparisons you are carrying out (.05/4 = .0125). However, there are other methods that may be more suitable, so consult your statistics textbook about how best to achieve this.
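As a rough illustration of the follow-up comparisons and Bonferroni correction described above (this is not an SPSS procedure; the file name is a placeholder, the column names follow Table 9.1, and pandas/scipy are assumed to be available — in practice you would run the paired t-tests in SPSS as described in Chapter 5):

```python
from scipy import stats
import pandas as pd

# Assumes the data have been exported with one column per condition,
# named as in Table 9.1 (h1s1, h1s2, h2s1, h2s2). The file name is illustrative.
data = pd.read_csv("tapping.csv")

comparisons = [("h1s1", "h1s2"),   # right hand: words vs position
               ("h2s1", "h2s2"),   # left hand: words vs position
               ("h1s1", "h2s1"),   # words: right vs left hand
               ("h1s2", "h2s2")]   # position: right vs left hand

alpha = 0.05 / len(comparisons)    # Bonferroni-corrected alpha = .0125
print(f"Corrected alpha: {alpha}")

for a, b in comparisons:
    t, p = stats.ttest_rel(data[a], data[b])   # paired (related) t-test
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}, significant = {p < alpha}")
```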

Section 4: MIXED ANOVA

Here, we show you how to perform an ANOVA that involves between- and within-subjects factors in the same experiment. To do this, we will use a study employing a three-way mixed design, which will also give you an idea of how to interpret any ANOVA with more than two IVs.

Example study: perceptual expertise in the own-age bias

As adults, we are experts at face recognition. However, there are some faces which we are more expert at recognising than others. For example, over 50 years of research has demonstrated that we are better at recognising faces belonging to our own racial or ethnic group compared with those of a different, less familiar race or ethnicity. More recently, research has suggested that we may also display an 'own-age bias'; being better at recognising faces belonging to our own age group compared with others. Why these biases occur is unclear, but there is some evidence to suggest differential contact may have a role to play. In other words, it may be that we are better at recognising own-group faces as a result of the increased experience we tend to have with those faces. Exactly what perceptual/cognitive mechanisms may be responsible for producing such an effect is unclear; however, one prominent explanation suggests our perceptual face-processing mechanisms may become better tuned for the groups of faces with which we have more experience. Specifically, it has been suggested that this increased in-group contact may lead to an increased ability to extract the configural (or holistic) information from in-group faces, while out-group faces are processed in a more piecemeal, featural manner.

The following example explores the role of contact and perceptual expertise in the own-age bias using a paradigm that has been repeatedly used to illustrate the relationship between configural processing and expertise: the inversion effect. That is, the finding that the more expertise we have with a group of stimuli, the more inversion (turning them upside-down) disrupts our ability to process those stimuli configurally (or 'expertly'). Therefore, if the own-age bias is the result of greater expertise with own-age faces, one would predict a greater disruption of own-age compared with other-age face recognition following inversion.

In the following study, Harrison (2011) investigated the role of expertise in the own-age bias by looking at the effect of inversion on adult faces and children's faces in three groups of participants: children (child face experts, adult novices), 'low contact' young adults (adult experts, child face novices) and 'high contact' teachers (adult and child face experts). Specifically, they showed participants 64 faces (32 own-age; 32 other-age) and measured how accurate participants were at recognising them. Half were presented upright, and half were upside-down. If perceptual expertise is responsible for own-age bias, one would expect novices (in this case, children and 'low contact' adults) to display the classic own-age bias for upright faces, but for this difference to disappear when the faces were inverted. The inversion cost (decrease in accuracy following inversion) would be expected to affect own-age faces more than other-age faces. For the teacher 'high contact' expert group, no own-age bias would be expected in upright faces (as they are experts with both own-age and children's faces), and inversion should affect both groups of faces equally.

A 3*2*2 mixed design was used, with one between-subjects factor: group (three levels: children, 'low contact' adults and 'high contact' teachers); and two repeated measures factors: face age in photograph (two levels: child, 8–11 years old, and young adult, 18–25 years old) and orientation (two levels: upright and inverted faces).
The dependent variable was percentage accuracy (measuring how many faces participants correctly recognised).


Table 9.2  The numbering system for within-subjects factors

Factor 1 (Face age), levels:                 Adult                     Child
Factor 2 (Orientation), levels:         Upright    Inverted      Upright    Inverted
Column name, SPSS data file:            fa1o1      fa1o2         fa2o1      fa2o2

For the purposes of this book, we have created a data file that will reproduce some of the findings of this study. The data file is available from macmillanihe.com/harrisonspss-7e. In the data file, the columns holding the data for the combination of levels of the two within-subjects factors have been named using the numbering systems described in the previous Section (see Table 9.2).

How to do it

1. Click on the word Analyze.

2. Click on General Linear Model.

3. Click on Repeated Measures. The Repeated Measures Define Factor(s) dialogue box will appear (see below).

4. Change the factor name suggested by SPSS by highlighting factor1 and typing the word that represents the first within-subjects factor, ‘faceage’.

5. Type the number '2' in the Number of Levels box and click on the Add button; faceage(2) will appear in the box next to Add (see below).

6. Repeat steps 4 and 5 for the second within-subjects factor, i.e. type in the name that represents the second factor 'orientation' in the Within-Subject Factor Name: box, type '2' in the Number of Levels box and click on the Add button.

7. Click on the Define button; the Repeated Measures dialogue box will appear (see below).


If, when following Step 8, you put one of the levels in the wrong place, you can move it by highlighting it and then clicking on the up or down arrow as appropriate.

This key helps explain the numbers in the brackets below: ‘faceage’ refers to the first number and ‘orientation’ to the second number. This will help you enter the variables in the correct order into the Within-Subjects Variables box.

8. Select the levels of the within-subjects factors in the correct order and enter these into the Within-Subjects Variables box. They were labelled in such a way to make the correct order obvious: 'fa1o1' refers to the combination of the first level of the factor 'faceage' (adult) with the first level of the factor 'orientation' (upright).

9. Select Group and move it to the Between-Subjects Factor(s) box.

10. Click on the EM Means… button to obtain descriptive statistics.

11. Move the factors and interactions into Display Means for: box.

12. Select Continue to return to the Repeated Measures dialogue box.


13. Click on the Options… button to obtain more statistics.

14. Click here to check for assumption of homogeneity of variance.

If you want partial eta squared, a measure of effect size, also select Estimates of effect size.

15. Click on the Continue button to return to the Repeated Measures dialogue box.

Click on OK and SPSS will calculate the ANOVA and produce the output that is explained below.


SPSS output for three-way mixed ANOVA
Obtained using menu items: General Linear Model > Repeated Measures

If you requested Descriptive statistics in the Repeated Measures: Options dialogue box, the Descriptive Statistics table would appear after these first two tables. Instead, we moved all factors and interactions into the box labelled Display Means for in the Repeated Measures Options dialogue box, which produces tables that appear at the end of the output.

This table appears because we selected Homogeneity tests in the Repeated Measures Options dialogue box. This is checking that, for each level of the between-subjects factor, the pattern of correlations among the levels of the within-subjects factors is the same. Check that this is not significant.


For information about these two tables, see Chapter 8, Section 3. The Mauchly's Test of Sphericity is only important if there are more than two levels of either within-subjects factor. If this is not significant, the assumption of sphericity has not been violated.


This table shows the outcome of any part of the mixed ANOVA that incorporates a within-subjects factor. See Chapter 8, Section 3 for an explanation of the 2nd–4th rows in each section.

This table shows the outcome of trend tests (see Chapter 8, Section 3 for more information about these). In this case, as each within-subjects factor only has two levels, only linear tests can be carried out, and not quadratic. If one of your within-subjects factors has three or more levels, quadratic tests could also be done.

This table appears because we selected Homogeneity tests in the Repeated Measures Options dialogue box. If tests based on the mean are non-significant for all levels of the within-subjects factors, the assumption of homogeneity has been met. If any were significant, this would have implications for the accuracy of F for the between-subjects factor.


These tables appear because we moved all factors and interactions into the box labelled Display Means for in the Repeated Measures Options dialogue box. The descriptives in the first three tables are for each level of each factor: ‘group’, ‘faceage’ and ‘orientation’. The next three tables show descriptives for each level of each two-way interaction, i.e. for each combination of levels of two of the factors (ignoring the third factor). Should any main effect or interaction be significant, you would look here at the relevant table to interpret the results.


The final matrix table provides descriptives for each level of the three-way interaction and hence for each of the twelve conditions.

The final section is for our Post Hoc tests. This table displays the Multiple Comparisons matrix table. You have to pick out the comparisons required, and ignore the repetitions.

As our between-subjects factor ('group') had three levels, there are three possible comparisons:

• High Contact vs Low Contact
• High Contact vs Children
• Low Contact vs Children

All comparisons are significant.




Reporting the results

In a report you might write:

A three-way mixed ANOVA was performed on participants' face recognition accuracy scores. While there was no significant main effect of face age (F …

Obtained using menu items: Regression > Linear (method = enter)

This first table is produced by the Descriptives option, and is useful for reporting your results.

Remember that, for any nominal variables (‘psycha’ here), you should not report mean and SD, but counts (or frequencies) instead.

This second table gives details of the correlation between each pair of variables. We do not want strong correlations between the predictor variables. The values here are acceptable.


This third table tells us about the predictor variables and the method used. Here we can see that all our predictor variables were entered simultaneously; that is because we used the Enter option.

This table and the next are important. R2 indicates the overall explanatory power of the model. The Adjusted R Square value tells us that our model accounts for 66.0% of variance in the ‘se’ scores.

This table reports an ANOVA that assesses significance of R2. The first df is the number of predictors (m); the residual df is N-m-1 (here 90-6-1). p < .001, so the model is significant.
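To see how the Adjusted R Square and this F statistic relate to R2, N and the number of predictors, here is a minimal Python sketch of the arithmetic. The R2 value of .683 is an illustrative figure chosen to be consistent with the adjusted R2 and F reported for this example; it is not a number read directly from the output.

```python
def regression_summary(r_squared, n_cases, n_predictors):
    """Adjusted R^2 and the ANOVA F testing R^2, with its degrees of freedom."""
    df_model = n_predictors
    df_residual = n_cases - n_predictors - 1
    adj_r_squared = 1 - (1 - r_squared) * (n_cases - 1) / df_residual
    f_value = (r_squared / df_model) / ((1 - r_squared) / df_residual)
    return adj_r_squared, f_value, df_model, df_residual

# Illustrative values consistent with this example: N = 90 cases, 6 predictors.
adj_r2, f, df1, df2 = regression_summary(r_squared=0.683, n_cases=90, n_predictors=6)
print(f"Adjusted R2 = {adj_r2:.3f}, F({df1},{df2}) = {f:.1f}")
# Adjusted R2 = 0.660, F(6,83) = 29.8 (close to the 29.82 reported for this model)
```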


B (the unstandardised coefficient) for each predictor variable shows the predicted increase in the value of the criterion variable for a 1 unit increase in that predictor (while controlling for the other predictors). The next column gives the standard error of B.

The standardised coefficient, beta (β), gives a measure of the contribution of the variable to the model in terms of standard deviations. β is the predicted change in SD of the criterion variable, for a change of 1 SD in the predictor (while controlling for the other predictors). Thus, if trait test emotionality increased by 1 SD, we can predict that state test emotionality would increase by .37 SD; and if state test worry increased by 1SD, we can predict that state test emotionality would increase by .45 SD.
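The link between B and β can be written down directly: β is B rescaled by the ratio of the predictor's standard deviation to the criterion's standard deviation. A minimal sketch follows; the numbers are placeholders for illustration, not values from this data set.

```python
def standardised_beta(b, sd_predictor, sd_criterion):
    """Convert an unstandardised regression coefficient B into a standardised beta."""
    return b * sd_predictor / sd_criterion

# Placeholder values: B = 0.50, predictor SD = 2.0, criterion SD = 4.0.
print(standardised_beta(0.50, 2.0, 4.0))   # 0.25
```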

The t-test values indicate whether the predictor’s regression coefficient is significant. For the standard method, however, it only tests the unique variance explained by the predictor. Thus, a predictor that is correlated with the criterion variable but shares variance explained with another predictor may have a non-significant β.

If you request Collinearity diagnostics (in the Linear Regression: Statistics dialogue box), two additional columns appear on the right of the Coefficients table. These extra columns are usually sufficient to check whether your data meet this assumption.

The tolerance values are a measure of the correlation between the predictor variables and can vary between 0 and 1. The closer to zero the tolerance value is for a variable, the stronger the relationship between this and the other predictor variables. You should worry about variables that have a very low tolerance. SPSS will not include a predictor variable in a model if it has a tolerance of less than .0001. However, you may want to set your own criteria rather higher, perhaps excluding any variable that has a tolerance level of less than .01.

VIF is an alternative measure of collinearity (in fact, it is the reciprocal of tolerance); a large value indicates a strong relationship between predictor variables.
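If you want to see where tolerance and VIF come from: each predictor is regressed on all the other predictors, tolerance is 1 minus the resulting R2, and VIF is its reciprocal. A minimal numpy sketch, assuming X is a cases-by-predictors array you have already assembled (it is not part of the SPSS output):

```python
import numpy as np

def tolerance_and_vif(X):
    """Tolerance (1 - R^2 of each predictor on the others) and VIF (1/tolerance)."""
    n_cases, n_predictors = X.shape
    tolerances = []
    for j in range(n_predictors):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        # Add an intercept column and fit by ordinary least squares.
        design = np.column_stack([np.ones(n_cases), others])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        predicted = design @ coef
        ss_res = np.sum((y - predicted) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        r_squared = 1 - ss_res / ss_tot
        tolerances.append(1 - r_squared)
    tolerances = np.array(tolerances)
    return tolerances, 1 / tolerances
```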


The extra columns shown above are normally sufficient, but this table also appears if you request Collinearity diagnostics from the Linear Regression: Statistics dialogue box. You can read up about this output if you want to understand it.

The Residuals Statistics table is produced if you request any Plots.

Charts

This graph is produced by the Normal probability plot option. If the points, which represent the cumulative expected and observed probabilities, are reasonably close to the straight line, then you can assume normality of the residuals.


In the scatterplot of *ZRESID (standardised residuals) and *ZPRED (standardised predicted values), if the points form a rectangle across the middle of the graph, you can assume that your data meet assumptions about normality, linearity and homoscedasticity of residuals. This spread of points is not rectangular, but the distribution is not too bad. For more guidance on these issues, see Tabachnick and Fidell (2014).

Next, we give some guidance on reporting a multiple regression.

We haven’t shown this here, but you can save various new variables produced by the multiple regression as we showed you for bivariate regression (Chapter 6 Section 8). Click on Save in the Linear Regression dialogue box to get the Linear Regression: Save dialogue box. You could explore the new variables that are available through the options there.

Reporting the results

When you describe your data analysis, in a methods subsection or in the results section, you should report the assumption checks that you carried out, whether the data met these checks and any transformations you made as a consequence (for example, to correct for skewed data).

When reporting the results of a multiple regression analysis, you should inform the reader about the proportion of variance accounted for by the model, the significance of the model and the significance of the predictor variables.


You should also include a table of the beta values for each predictor. In addition, if the regression coefficient of any predictor is negative (unlike in this example), you must point out the direction of its impact on the criterion variable. Note that some psychological constructs are scored in opposite directions; for example, a high score on a self-esteem questionnaire may indicate either high self-esteem or low self-esteem. In addition to being clear in your methods subsection what a high score indicates for each of your constructs, you also need to think carefully about the direction of the relationship between each predictor and the criterion variable in order to describe these correctly in the results section. You would also include other information, such as summary descriptives for all variables, and the correlation matrix for all the variables you included in the multiple regression analysis.

For the multiple regression itself, in a report you might write:

Those variables that were significantly correlated with the criterion variable, state test emotionality, were entered as predictors into a multiple regression using the standard method. A significant model emerged: F (6,83) = 29.82, p < .001. The model explains 66.0% of the variance in state test emotionality (adjusted R2 = .660). Table 10.1 gives information about regression coefficients for the predictor variables entered into the model. Trait test emotionality, Perceived test difficulty, and State test worry were significant predictors, with a positive relationship to State test emotionality. Psychology 'A' level, Trait test worry, and Statistics course anxiety were not significant predictors.

Table 10.1  The unstandardised and standardised regression coefficients for the variables entered into the model

Variable                          B      SE B    β      p
State test worry                  .61    .11     .45    < .001
Trait test emotionality           .43    .10     .37    < .001
Perceived test difficulty         .30    .11     .21    .007
Psychology A level completed      .07    .13     .04    .582
Stats course anxiety              .05    .13     .03    .704
Trait test worry                  .02    .11     .02    .850


Section 3: SEQUENTIAL OR HIERARCHICAL METHOD OF MULTIPLE REGRESSION

Only use the sequential or hierarchical method if you have a rationale for entering predictors in a particular sequence, as described in Section 1. Remember that first you should run a standard multiple regression, requesting collinearity diagnostics and residual plots in order to check those assumptions.

Click on Analyze⇒Regression⇒Linear.

In the Linear Regression dialogue box, we enter predictors in blocks (as shown below).


Note this is Block 1 of 1.

We are only entering psycha (‘Psychology A level completed’) as a predictor in block 1. This is because it is the only predictor (of those correlated with the criterion) that was established before students began the course.

Note that we are using the Enter option. We will discuss options for each block below.

When you have entered predictors for this block, click on Next.

Note this should say “Block 2 of 2”. This is a bug in our version of SPSS. Note that the Previous button is now available for you to move between blocks.

The second block of predictors are those that were measured early in the course. Now click the Next button again to set up the third block.

We are now in Block 3 of 3. We have moved the third and final block of predictors (that were measured just before or after the test) into the Independents box, and are ready to move on.

Click on the Statistics button, for the next step.

When you have entered two or more blocks of predictors, you can use the Next and Previous buttons to move between blocks to check the predictor variables in each block.

When blocks are entered, SPSS will produce output for different models. Selecting R squared change will test for a significant difference between successive models. We have not selected Collinearity diagnostics because we checked that output from the standard multiple regression (see Section 2).

Click on Continue, then OK.

The output is illustrated below. We have only shown the tables that are different from the output for the standard or simultaneous method shown in Section 2.


SPSS output for sequential multiple regression
Obtained using menu items: Regression > Linear (method = enter; three blocks have been entered)

Each time a block is introduced, a new model is formed, and a multiple regression is conducted on each successive model. Thus, with three blocks, there are three models. This tells us which variables were entered in each block. None were removed because we used Enter for each block.

We can see that Model 1, which included only 'Psychology A level', accounted for just 3.4% of the variance (Adjusted R2 = .034). The inclusion of the second block of predictors into Model 2 resulted in an additional 38% of the variance being explained (R2 change = .380). Model 3 explained an additional 26% of the variance (R2 change = .258), and in total accounts for 66% of the variance (Adjusted R2 = .660).
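The 'R squared change' test that SPSS reports for each successive model (described in the next paragraph) is an F test on the increment in R2. A minimal Python sketch of that calculation follows; the input values are illustrative approximations roughly consistent with this example, not figures copied from the output.

```python
def r_squared_change_test(r2_reduced, r2_full, n_cases, k_full, k_added):
    """F test for the increase in R^2 when k_added predictors join the model."""
    df1 = k_added
    df2 = n_cases - k_full - 1
    f_change = ((r2_full - r2_reduced) / df1) / ((1 - r2_full) / df2)
    return f_change, df1, df2

# Roughly: Model 1 (1 predictor) R2 ~ .045; Model 2 (4 predictors) adds ~.380.
f, df1, df2 = r_squared_change_test(r2_reduced=0.045, r2_full=0.425,
                                    n_cases=90, k_full=4, k_added=3)
print(f"F change({df1},{df2}) = {f:.2f}")   # close to the F(3,85) = 18.74 reported below
```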

SPSS provides a test of whether each model is significantly different from the preceding model in terms of the proportion of variance explained in the criterion variable. For Model 1, this comparison is against a 'model' with no predictors (that is, when R2 = 0), and the F and p values are therefore the same as for the model alone (shown in the next table). The next row indicates that Model 2 explains significantly more variance than Model 1 [F (3,85) = 18.74, p …

Obtained using menu items: Classify > Discriminant (enter independents together)

This table tells you that 100% of the 221 cases in the data file have been included in the analysis. If any case had a missing value for one of the IVs (the predictor variables), the case would have been dropped from the analysis and this would have been reported in this table.


This is the table of means we requested in the Discriminant Analysis: Statistics dialogue box. It gives the mean and SD for each of our IVs broken down by category membership. For example, we can see the mean age of those not reconvicted was 31.99 years compared with 26 years for those who were reconvicted.

This table was produced because we requested Univariate ANOVAs in the Discriminant Analysis: Statistics dialogue box. It shows whether there is a significant effect of category for each of the predictor variables. For example, here we can see that there is a significant difference in the age of those reconvicted and not reconvicted F(1, 219) = 11.818, p = 0.001. In addition, SPSS gives Wilks’ Lambda, a multivariate test of significance, which varies between 0 and 1. Values very close to 1 indicate that the differences are not significant.

Canonical functions are used to discriminate between pairs of categories, or groups. For each possible orthogonal (independent) contrast, a discriminant function is calculated that best discriminates between the categories. The number of canonical functions is either one less than the number of categories, or equal to the number of predictor variables, whichever is the smaller. The following tables give details of each of the discriminant functions calculated. In this example, there are only two categories, so only one function is calculated.


The eigenvalue is a measure of how well the discriminant function discriminates between the categories (the larger the value, the better the discrimination). The % of Variance column allows you to compare the relative success of the functions. When there is only one function (as here), this column and the Cumulative % column tell us nothing useful. Where there are several functions, you will probably find that only the first few usefully discriminate among groups.

This table provides a test of the null hypothesis that the value of the discriminant function is the same for the reconvicted and non-reconvicted cases. As p is less than 0.05, we can reject the null hypothesis.

This table allows you to see the extent to which each of the predictor variables is contributing to the ability to discriminate between the categories. The coefficients have been standardised so that you can compare the contribution of each regardless of the units in which it was measured. Rather like correlation coefficients, the values range from –1 to +1. In this case, ‘age’ and ‘2nd criminal history’ are making a larger contribution than the other predictor variables.

The Structure Matrix table gives a different measure of the contribution that each variable is making to the discriminant function. Here, the variables are ordered by the magnitude of their contribution. The negative value for the variable ‘age’ tells us that age correlates negatively with the value of the function, whereas ‘2nd criminal history’ correlates positively. This is because older prisoners are less likely to be reconvicted, whereas prisoners with a higher criminal history score are more likely to be reconvicted. If you have more than two categories, and hence more than one function calculated, this table also allows you to see which of the functions each variable is contributing most to (for example, is a particular variable helping you predict between membership of category a and b or between b and c?).

This table gives the mean value of the discriminant function for each of the categories. Note that here the mean value of the function is positive for reconvicted prisoners but negative for non-reconvicted prisoners. In this way, the function is discriminating between the two categories of prisoners.

This table informs you of the total number of cases processed, the number excluded and the number used in the output.

Prior probability is the assumed probability that a particular case will belong to a particular category. In this case, in the Discriminant Analysis: Classification dialogue box (see above), under Prior Probabilities, we selected ‘All groups equal’, so the probability is equal for all groups – i.e. 0.5 or 50%.


These are the Separate-Groups Plots we requested in the Discriminant Analysis: Classification dialogue box (see above). Note we have edited the plots so that the Y-axis is drawn to the same scale on both. If the discriminant function is discriminating between the two groups of inmates, then the distribution will be different in these two plots. It is apparent that the distribution for those who were reconvicted is slightly shifted to the right, relative to those who were not reconvicted, but there is a lot of overlap between the two groups, indicating the discrimination is far from perfect.


This is the Summary table we requested in the Discriminant Analysis: Classification dialogue box (see above). It provides a particularly useful summary of the success (or otherwise) of our discriminant function. It shows a cross-tabulation of the category membership (reconvicted or not) against what we would have predicted using our discriminant function. We can see that in 104 cases the discriminant function correctly predicted that the offender would not be reconvicted, and in 48 cases it correctly predicted that they would be reconvicted. Thus, 152 (104 + 48) of our 221 cases were correctly classified – a success rate of 68.8% (as noted in the footnote to the table). However, the table also shows us that 25% of the prisoners we predicted would not be reconvicted were reconvicted, and that 33.8% of the cases we predicted would be reconvicted were not. It is up to you to interpret these failures of prediction; in some cases it may be more important to avoid one type of error than another. For example, here you may feel that it is more important to avoid erroneously predicting that someone will not be reconvicted than erroneously predicting that they will.



Reporting the results In a report you might write:

A discriminant analysis was performed with reconviction as the DV and age, number of previous convictions, and the criminal history and education and employment subscales of the LSI-R as predictor variables. A total of 221 cases were analysed. Univariate ANOVAs revealed that the reconvicted and non-reconvicted prisoners differed significantly on each of the four predictor variables. A single discriminant function was calculated. The value of this function was significantly different for reconvicted and non-reconvicted prisoners (chi-square = 30.42, df = 4, p < .001). The correlations between predictor variables and the discriminant function suggested that age and criminal history were the best predictors of future convictions. Age was negatively correlated with the discriminant function value, suggesting that older prisoners were less likely to be reconvicted. Criminal history was positively correlated with the discriminant function value, suggesting that prisoners with higher numbers of previous convictions were more likely to be reconvicted. Overall, the discriminant function successfully predicted the outcome for 68.8% of cases, with accurate predictions being made for 66.2% of the prisoners who did not go on to be reconvicted and 75% of the prisoners who were reconvicted.


To perform a stepwise (or statistical) discriminant analysis

To perform a stepwise discriminant analysis, follow the procedure outlined for the enter method except that, at step 7 (see above, page 358), select Use stepwise method, then click on the Method button to bring up the Discriminant Analysis: Stepwise Method dialogue box (see below).

Select one of these methods. These control the statistical rules used to determine when a variable is entered into the equation. If in doubt, select Wilks’ lambda, the default setting.

The criteria for entry and removal from the equation can be based on either F values or probability values, and the values for both can be adjusted. Tabachnick and Fidell (2014) suggest that the Entry probability value could be changed from .05 to .15. This is a more liberal rule that will ensure that any important variable gets entered into the equation.

If in doubt, leave the Method set to Wilks’ lambda, and the Criteria set to Use F Value, with the Entry and Removal values set to 3.84 and 2.71, respectively (see above). Then click on the Continue button and complete steps 8–15 (see above, pages 358–359). The output produced by the stepwise discriminant analysis is similar to that for the simultaneous discriminant analysis, and so only tables that differ are shown below. This output was produced using the default settings. This table shows that two variables have been entered. In the first step, the variable ‘2nd criminal history’ was entered. In the second step, ‘age’ was also entered. No variables were removed in this example.

Wilks’ Lambda is a measure of the discrimination between the categories. A smaller value indicates better discrimination. The value of Lambda falls as the second variable, ‘age’, is entered.


This table shows variables included in the analysis at each step. In this example, there are only two steps. The other variables were not added because they did not meet the entry criteria we set in the Discriminant Analysis: Stepwise Method dialogue box (see above).

This table lists all the variables not included in the equation at each step.

This table indicates how successful the discriminant function is at each step. A smaller value of Lambda indicates a better discrimination between categories. Here, the function significantly distinguishes between the two categories at both steps 1 and 2.

This is the Classification Results table for the stepwise discriminant analysis. It is interesting to compare this with the equivalent table produced using the enter method (see above). Which method results in the most successful prediction?


Using discriminant function analysis to predict group membership

So far we have been trying to develop discriminant functions that can predict category membership with a reasonable level of accuracy. Having produced these functions, the next step is to use them to try to make real predictions. For example, if we collect data from another group of prisoners before they are released, we can apply our discriminant function to predict which of these prisoners will be reconvicted following release. To do this we need to compute the discriminant function for the new cases. The easiest way to do this is to add the new cases to the existing data file. Because we do not know whether these individuals will be reconvicted, we will have to enter a missing value for the variable 'recon'. As a result, these new cases will not be included when the discriminant function is calculated, and therefore, the result will be identical to that found before these cases were added. Follow steps 1–15 listed above (pages 358–359), selecting Use stepwise method at step 7, but then click on the Save button in the Discriminant Analysis dialogue box. The Discriminant Analysis: Save dialogue box will be revealed (see below).

Select Predicted group membership. You may also like to select the other two options.

Click on the Continue button to return to the Discriminant Analysis dialogue box.

Now click on the OK button. In addition to the output described above, several new variables will be computed and added to your data file (see below).

This new variable Dis_1 gives the predicted category membership.

This new variable is the predicted probability of being in group 1 (not reconvicted).

This new variable Dis1_1 gives the value of the computed discriminant function. Notice that the value is higher for prisoners who are predicted to be reconvicted.

This new variable gives the predicted probability of being in group 2 (reconvicted).

You may want to rename the new variables to help you remember what they represent.


In this way, we can first compute the best discriminant function and then use this to make real predictions. For example, we can predict that participant 1 will be reconvicted on release, and that our estimate of the probability of reconviction is 71.4%. Based on this relatively high risk of reconviction, we might decide to target some extra resources at this individual prior to their release in the hope of reducing their likelihood of reconviction.

Section 4: AN INTRODUCTION TO LOGISTIC REGRESSION

Logistic regression differs from discriminant analysis in that, whereas discriminant analysis computes a function that best discriminates between two categories, logistic regression computes the log odds that a particular outcome will occur. For example, we can use logistic regression to compute the odds that a particular prisoner will be reconvicted after release. The odds of an event occurring are given by the ratio of the probability of it occurring to the probability of it not occurring. For example, if four horses are running in a race and we pick one of them at random, the odds of our horse winning will be 0.25/(1 – 0.25) = 0.333. The odds for any event lie between the values of 0 and +infinity. This is problematic for the mathematics involved in logistic regression, and to overcome this problem, the log of the odds is calculated (natural log or loge). The log odds of an event will vary between –infinity and +infinity, with a high value indicating an increased probability of occurrence. A positive value indicates that the event is more likely to occur than not (odds are in favour), while a negative value indicates that the event is more likely not to occur (odds are against).

To illustrate the difference between odds and log odds, consider the example of the four-horse race. We have already seen that the odds of us picking the correct horse are 0.333. The odds of us picking the wrong horse are given by 0.75/(1 – 0.75) = 3. If we now take the log of each of these values, we will see that the log odds of us picking the correct horse are loge(0.333) = –1.1 and the log odds of us picking the wrong horse are loge(3) = 1.1. The advantage of log odds over odds is clear from this example: unlike odds, log odds are symmetric about zero. The term 'logistic' in the name logistic regression derives from this use of log odds.

It is possible to use logistic regression in situations when there are two or more categories of the grouping variable. In cases where there are just two categories of the grouping variable, the SPSS binary logistic regression command should be employed. Where there are more than two categories, the multinomial logistic regression command should be used. The multinomial command is not described here, but is similar to the binary logistic regression command. As for discriminant analysis, we will show how the analysis can be used to predict category membership for cases where it is not known.
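The odds and log odds arithmetic above is easy to verify for yourself. A minimal Python sketch (nothing here is SPSS output; it simply reproduces the horse-race example):

```python
import math

def odds(p):
    """Odds of an event with probability p: p / (1 - p)."""
    return p / (1 - p)

p_win = 0.25                      # one horse picked at random from four
print(odds(p_win))                # 0.333... : odds of picking the winner
print(odds(1 - p_win))            # 3.0      : odds of picking a losing horse
print(math.log(odds(p_win)))      # -1.098...: log odds, symmetric about zero
print(math.log(odds(1 - p_win)))  #  1.098...
```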


Section 5: PERFORMING LOGISTIC REGRESSION ON SPSS

Example study: reconviction among offenders

We will demonstrate binary logistic regression using the prisoner data set used to demonstrate discriminant analysis (see Section 3). This will allow us to compare the output of these two commands.

To perform a binary logistic regression

1. Click on Analyze.

2. Select Regression.

3. Select Binary Logistic. The Logistic Regression dialogue box will appear (see below).

4. Select the category variable (DV) and move it into the Dependent box.

5. Select the predictor (IV) variables and move them into the Covariates box.

6. If any of your predictor variables are categorical, click here to bring up the Define Categorical Variables dialogue box and select the categorical variables.

7. It is possible to include interaction terms. Select two or more variables and then click here.

8. Select Options to bring up the Logistic Regression: Options dialogue box (see below).


9. Select Classification plots.

10. Select Iteration history.

11. Select CI for exp(B).

12. Select the Hosmer–Lemeshow goodness-of-fit test; this will help you to evaluate the model.

13. Click Continue to return to the Logistic Regression dialogue box.

Finally, click on the OK button. The annotated output is shown below.

SPSS output for logistic regression using Enter method
Obtained using menu items: Regression > Binary Logistic

This table tells you that 100% of the 221 cases have been processed.

This table tells you how the two outcomes (reconvicted and not reconvicted) have been coded; this is important when interpreting the output.

371

This section of the output headed Block 0: Beginning Block reports the results of the most basic attempt to predict outcome; one in which all cases are predicted to result in the most common outcome.

This shows the iteration history.

This table reports the results of this simple prediction. As most of our prisoners are not reconvicted after release, the predicted outcome for all has been set to ‘not reconvicted’. This crude method results in an accurate prediction for 71% of cases. Hopefully, our logistic regression will do better than this.

These two tables tell us that so far no variables have been entered into the equation. We can ignore both these tables.


Block 1: Method = Enter

This block of output reports the results of the logistic regression analysis. This analysis should result in a more accurate prediction than that reported in Block 0.

Logistic regression employs a process known as iteration. In an iterative process, we attempt to arrive at the best answer to a problem through a series of approximations. Each iteration results in a slightly more accurate approximation than the previous iteration. This table reports this iterative process. The statistic –2 log likelihood is used in logistic regression to measure the success of the model. A high value indicates that the model poorly predicts the outcome. With each iteration we can see the value falling; however, the benefit derived at each iteration decreases until after four iterations there is no change in the value and SPSS terminates this process. You can also see how the coefficients of each of the predictor variables are adjusted at each iteration.

Omnibus tests are general tests of how well the model performs. In a report, we can use either this or the Hosmer–Lemeshow test (shown below). When the Enter method has been employed (as here), there is only one step and so the Step, Block and Model rows in this table will be identical.


This table gives useful statistics that are equivalent to R2 in multiple regression. It is not possible to compute an exact R2 in logistic regression, but these two statistics are useful approximations. Here, we can see that our model accounts for between 13.9% and 19.8% of the variance.
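For interest, both pseudo-R2 statistics can be reproduced from the model chi-square and the baseline -2 log likelihood. A minimal Python sketch follows; the 157/64 split of not-reconvicted/reconvicted cases used for the baseline is an assumption inferred from the percentages quoted in this example, not a figure copied from the output, and the model chi-square of 32.99 is the omnibus value given in the example write-up at the end of this section.

```python
import math

def pseudo_r_squared(model_chi_square, n_cases, n_group1, n_group2):
    """Cox & Snell and Nagelkerke pseudo-R^2 for a binary logistic regression."""
    # Baseline -2 log likelihood: every case predicted at the overall base rate.
    p1, p2 = n_group1 / n_cases, n_group2 / n_cases
    baseline_neg2ll = -2 * (n_group1 * math.log(p1) + n_group2 * math.log(p2))
    cox_snell = 1 - math.exp(-model_chi_square / n_cases)
    nagelkerke = cox_snell / (1 - math.exp(-baseline_neg2ll / n_cases))
    return cox_snell, nagelkerke

# Assumed split: 157 not reconvicted, 64 reconvicted, out of 221 cases.
cs, nk = pseudo_r_squared(32.99, 221, 157, 64)
print(f"Cox & Snell = {cs:.3f}, Nagelkerke = {nk:.3f}")  # ~0.139 and ~0.198
```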

This table gives the results of the Hosmer– Lemeshow test we requested. It gives a measure of the agreement between the observed outcomes and the predicted outcomes. This statistic is a test of the null hypothesis that the model is good, hence a good model is indicated by a high p value, as in this example, where p = .661. If the p value is less than .05, then the model does not adequately fit the data.

This table is used in the calculation of the Hosmer–Lemeshow statistic reported in the previous table. The cases are ranked by estimated probability on the criterion variable and then divided into 10 ‘deciles of risk’. Within each decile, the numbers of observed and expected positive (reconviction) and negative (no reconviction) outcomes are calculated. The Hosmer–Lemeshow statistic (above) is then calculated from this 10*2 contingency table. Note that a high proportion of the participants in decile 1 are not reconvicted, whereas the majority of those in decile 10 are reconvicted. This pattern indicates that our model is good.

This table summarises the results of our prediction and should be compared with the equivalent table in Block 0. Our model correctly predicts the outcome for 71.5% of cases. Although, overall, this is not much better than the situation reported in Block 0, in this new classification table we correctly predict reconviction in 31.3% of prisoners who are reconvicted. Compare this with the outcome of the discriminant analysis reported earlier in this chapter.


This table contains some of the most critical information. The first column gives the coefficients for each predictor variable in the model. The negative coefficient for ‘age’ indicates that the odds of reconviction declines with increasing age. The Wald statistic and associated Sig. values indicate how useful each predictor variable is. In this case, only ‘age’ and ‘crimhis2’ are significant; the analysis could be rerun with only these variables included. The Exp(B) column gives an indication of the change in the predicted odds of reconviction for each unit change in the predictor variable. Values less than 1 indicate that an increase in the value of the predictor variable is associated with a decrease in the odds of the event. Thus, for every extra year of age, the odds of the prisoner being reconvicted on release decrease by a factor of 0.944.
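To make the Exp(B) interpretation concrete, here is a minimal Python sketch of how a coefficient translates into a change in odds. The B value for age is an approximation implied by the Exp(B) of 0.944 quoted above, not a figure copied from the table.

```python
import math

b_age = math.log(0.944)     # coefficient implied by Exp(B) = 0.944
print(math.exp(b_age))      # 0.944: the odds are multiplied by this per extra year of age

# Over, say, 10 extra years of age the predicted odds of reconviction are
# multiplied by 0.944 ** 10, i.e. reduced to roughly 56% of their original value.
print(0.944 ** 10)          # ~0.56
```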

 Reporting the results

In a report you might write:

A logistic regression analysis was performed with reconviction as the DV, and age, number of previous convictions, and the LSI-R criminal history, education and employment subscales as predictor variables. A total of 221 cases were analysed, and the full model significantly predicted reconviction status (omnibus chi-square = 32.99, df = 4, p …

Obtained using menu items: Dimension Reduction > Factor

This table is produced by the Univariate Descriptives option in the Factor Analysis: Descriptives dialogue box.

Remember that, in a real factor analysis, you should aim to test a minimum of 200 participants.

The descriptives table above is the only part of the output with a note of the number of participants you entered into the factor analysis, so we recommend that you always request this table, for reference purposes.


The contents of this table are produced by the Coefficients and Significance levels options in the Factor Analysis: Descriptives dialogue box. Here, the whole table is shown shrunk to fit. Next, there is an annotated section of the upper matrix.

A section of the matrix of correlation coefficients (from the table above): these are the observed correlations, which will later be compared with the reproduced correlations.

Above the diagonal, and mirrored below it, are the observed correlations. Some of the correlations are quite large, indicating that the variables have factorability (.3 is normally considered to be the lower cut-off). If you do the analysis, you will be able to see all correlation coefficients in the complete matrix.

If you selected the Inverse option in the Factor Analysis: Descriptives dialogue box, then that table would appear here. It is not required for principal components analysis, so we will show it in Section 3.


This table is produced by the KMO and Bartlett’s Test of Sphericity option in the Factor Analysis: Descriptives dialogue box. These tests give some information about the factorability of your data, in addition to the information from the various matrices as seen above and described in Section 1.

The KMO measure of sampling adequacy is a test of the amount of variance within the data that could be explained by factors. As a measure of factorability, a KMO value of .5 is poor; .6 is acceptable; a value closer to 1 is better. The KMO value here is the mean of individual values shown in the next table.
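For readers who want the formula, a common way the overall KMO statistic is defined in the factor-analysis literature (this is the standard textbook definition, not something specific to the SPSS output) is shown below, where \(r_{jk}\) are the observed correlations and \(q_{jk}\) the partial (anti-image) correlations between pairs of variables:

\[
\mathrm{KMO} = \frac{\sum_{j \neq k} r_{jk}^{2}}{\sum_{j \neq k} r_{jk}^{2} + \sum_{j \neq k} q_{jk}^{2}}
\]

When the partial correlations are small relative to the ordinary correlations, the ratio approaches 1, which is why larger KMO values indicate better factorability.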

Bartlett’s test indicates that the data are probably factorable if p < .05; however, it is considered to be a sensitive test, so it is better to use it in reverse: if p > .05, do not continue; but if p < .05, check other indicators of factorability before proceeding.

This table is produced by the Anti-image option in the Factor Analysis: Descriptives dialogue box. The upper matrix contains negative partial covariances, and the lower matrix contains negative partial correlations. This is the whole table shrunk to fit. Below is an annotated section of the lower matrix.


A section of anti-image correlation matrix.

Above the diagonal, and mirrored below it, are the negative partial correlations. Many of these partial correlation values are small, indicative of a factor structure underlying the variables.

The on-diagonal values are the KMO values for each variable, indicating the factorability. Thus, for the variable ‘primrose’, KMO = .660. If any variable has a KMO value of less than .5, consider dropping it from the analysis. If you carry out the analysis, you will see that the KMO for ‘primrose’ is the smallest, so none of the variables need be dropped. The single KMO value, in the KMO and Bartlett’s Test table above, is the mean of all these KMO values.

The matrices above were all described in Section 1. The remaining tables in this factor analysis output are mostly new to you. The communalities indicate how much variance in each variable is explained by the analysis.

In a principal component analysis, the initial communalities are calculated using all possible components, and are always equal to 1.

The extraction communalities are calculated using the extracted factors only, so these are the useful values. For ‘bluebell’, 85% of the variance is explained by the extracted factors. If a particular variable has a low communality, consider dropping it from the analysis.

We give a note on the calculation of extraction communalities on the Component Matrix table, further down.

SPSS reminds you of the extraction method you used, under all the tables whose values differ depending on the method.


This table summarises the total variance explained by the solution to the factor analysis. Here, it is shrunk to fit; each third of it is reproduced below, and annotated. This is the first part of the output that gives a clear indication of the solution, in terms of how many factors explain how much variance. The previous tables and matrices are important, though, for indicating whether the solution is likely to be a good one.

In this type of table, the rows relate not to variables but to factors/components.

Below are the three sections of the Total Variance Explained table.

The left section contains initial eigenvalues: the eigenvalues for all possible components, ranked in order of how much variance each accounts for. There are 15 possible components: the same as the number of variables entered into the analysis, but that does not mean that each variable is a component. For each component, the total variance that it explains on its own (its eigenvalue) is followed by the variance that it explains expressed as a percentage of all the variance, then by the cumulative percentage.
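As a quick check on how these percentages are derived (our own arithmetic, not part of the SPSS output): when the analysis is run on the correlation matrix, each standardised variable contributes 1 unit of variance, so the total variance equals the number of variables.

\[
\%\ \text{variance for component } i = \frac{\lambda_i}{p} \times 100, \qquad p = 15 \text{ in this example}
\]

For example, since the three retained components together explain 77.2% of the variance, their eigenvalues must sum to roughly \(0.772 \times 15 \approx 11.6\).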

The middle section contains information for those components with eigenvalue > 1.0: in this example there are three such components.

The three extracted components together explain 77.2% of variance.


These values are called extraction values, because they are calculated after extraction of components. Note that, in principal component analysis, these values are the initial values (the first three rows above).

The right section of the output table shows the values for the extracted components after rotation has been carried out. (This section only appears if you request rotation, and may differ depending on the rotation method.) Note that the eigenvalues have changed. Relative percentages and cumulative percentage of variance explained are not given for this rotation method.

a. When components are correlated, sums of squared loadings cannot be added to obtain a total variance.


The fact that three components have been extracted is neat; to that extent, the factor analysis might support our hypothetical aesthete’s view. We do not yet know, however, what the factors represent. For example, they could represent not the type of flower but the colour of the flowers (pink, blue and yellow). Nor do we yet know which variables will be associated with which of the factors. So, rejoicing should be postponed.

This graph is produced by the Scree plot option in the Factor Analysis: Extraction dialogue box. It can be used as an alternative to eigenvalues > 1.0, to decide which components should be extracted.

In the scree plot, the eigenvalues are plotted in decreasing order. It is called a scree plot because the shape of the curve is reminiscent of the profile of rock scree that accumulates at the foot of steep hills.

This dotted line, which we have superimposed, indicates the approximate point at which the scree plot ‘breaks’ between the steep and shallow parts of the slope: components above that point would be chosen. In this example, three factors are indicated, which concurs with the choice made on the basis of selecting factors with eigenvalues greater than 1. Some authors suggest that the next factor should also be included (giving four in this example): see Cooper (2010, 287–8) for discussion.


This is a table of the factor loadings before the rotation is carried out. Each column shows the loading of each variable on that component. The loading can be thought of as the correlation between the component and the variable: thus, the larger the number, the more likely it is that the component underlies that variable. Note that some loadings are positive and some negative: we will mention this in Section 3.

The variable ‘poppy’ has: quite a strong loading (.798) on component 1; a medium loading (–.315) on component 2; and a medium loading (–.319) on component 3. These loadings may be useful for seeing the pattern of which variables load most strongly onto which factors: almost always a rotation will be used, however, and then the pattern becomes clearer. In particular, the negative loadings here may be an artefact of the method of calculation (Kline, 1994, 39).

The extraction communalities, in the Communalities table above, are calculated using the formula Σx², where x is the factor loadings in this table. Thus, for ‘lavender’: (.774)² + (–.316)² + (–.403)² = .862. As stated above, the size of the communality indicates how much of that variable’s variance is explained by the solution to the factor analysis.


This table is produced by the Reproduced option in the Factor Analysis: Descriptives dialogue box. The upper matrix contains the reproduced correlations and the lower matrix contains the residuals. This is the whole table shrunk to fit. Underneath is an annotated section of each matrix.

This is part of the reproduced correlations matrix. Compare this value of .711, for ‘primrose’ with ‘celandine’, with the observed correlation of .566 between those two variables (see Correlation Matrix table above). The values are not very similar; note also that the reproduced correlation suggests a stronger relationship between the two variables than was actually observed.

The diagonal holds the reproduced communalities. They are the same values as the extraction communalities in the Communalities table above.

Rather than make the comparison suggested above, one can simply inspect the residuals. The residual for ‘primrose’ with ‘celandine’ is –.145 (the negative sign indicates that the reproduced correlation is the stronger, as mentioned above). This is a fairly large residual, but inspect the whole matrix on your own screen and you will see that the other residuals are mostly small. That most residuals are small is another indication of factorability, and it is also an indication of a good factor analysis solution.
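The residual is simply the observed correlation minus the reproduced correlation, so the value quoted above can be verified directly:

\[
\text{residual} = r_{\text{observed}} - r_{\text{reproduced}} = .566 - .711 = -.145
\]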


Note that the contents of the Reproduced Correlations table are calculated after the factor extraction has been carried out, so the values will vary depending on how many factors were extracted.

This is the Pattern Matrix table of factor loadings after the rotation is carried out. (The pattern matrix is easier to interpret than the structure matrix, shown below.) If you compare these values with those in the Component Matrix table above, you will see that the factor loading values have changed. At the foot of the table, SPSS notes the number of iterations required for the rotation.

The variable ‘poppy’ now has a strong loading (–.804) on component 3 and low loadings on components 1 (.214) and 2 (–.028). Whether a value is negative or positive is not relevant to the strength of the loading (as with r in correlation). However, if the loadings on one factor are not all in the same direction, there may be a problem; for example, if scores for reversed items were not corrected before the analysis. Note that components 1, 2 and 3 after rotation need not necessarily be the same as components 1, 2 and 3 before rotation; they are just listed in a convenient order in each table.

For each variable, we have highlighted its strongest loading: thus, the highlights indicate which variables load most strongly on which component, as specified in the annotations below.

The variables ‘cornflower’, ‘poppy’, ‘sweetpea’ and ‘lavender’ load most strongly onto component 3. The variables ‘celandine’, ‘primrose’, ‘bluebell’, ‘buttercup’, ‘daisy’ and ‘speedwell’ load most strongly onto component 2.

The variables ‘rose’, ‘wisteria’, ‘delphinium’, ‘aster’ and ‘hellebore’ load most strongly onto component 1.


We have shown you the Pattern Matrix table in the layout above to demonstrate all the factor loadings. SPSS will, however, display output in a more easily interpretable layout. See description of the Options dialogue box in Section 3.

The Structure Matrix table gives the correlations between factors and variables; whereas the Pattern Matrix table (above) gives the unique relationship between each factor and each variable excluding the overlap between factors. The Pattern Matrix table is easier to interpret in terms of which variables load onto which factors.

This table gives the factor transformation matrix that was used to carry out the rotation used to derive the rotated factors. See texts recommended in Section 1.

Under this and the two previous tables, SPSS reminds you of the rotation method that you used.

Considering the results

Once you have inspected the Pattern Matrix table (above) to see which variables load on which factors, you need to think about what the variables have in common. This will help you to identify what the underlying potential factor (or psychological construct) might be, and decide on a suitable name to describe each factor. The extracted factors may, or may not, be the same as the factors you were expecting. If we now compare the pattern of loadings in the table with the aesthete’s initial suggestion, we can see that there is close agreement, with just one variable on a different factor. Use caution, however; the dimensions that were suggested (liking for types of flower) may not be the true factors. To be more certain, you should carefully check the characteristics of the questions/items that measure your variables, and consider all possibilities and alternative explanations.

In the results section of a report, you could write about the outcome of the analysis as shown below. You could incorporate more tables (e.g. the pattern matrix) or values (e.g. the factor loadings; KMO) to illustrate your points as necessary. If you have used principal component analysis, then do ensure that you refer to that in your report, and not factor analysis.



Reporting the results



In a report you might write: The data were analysed by means of a principal component analysis, with direct oblimin rotation. The various indicators of factorability were good, and the residuals indicate that the solution was a good one. Three components with an eigenvalue greater than 1.0 were found; the scree plot also indicated three components. The components can be thought of as representing liking for different types of flowers: component 1 – formal garden flowers; component 2 – wild flowers; component 3 – cottage garden flowers. The component loadings are shown in Table 13.1.

Table 13.1  The components found by principal component analysis, and the variables that load on them

Component 1          Component 2          Component 3
delphinium    .95    bluebell      .89    cornflower   –.97
hellebore     .82    celandine     .87    lavender     –.88
rose          .65    primrose      .82    poppy        –.80
aster         .63    speedwell     .81    sweetpea     –.65
wisteria      .63    buttercup     .81
                     daisy         .80


Section 3: OTHER ASPECTS OF FACTOR ANALYSIS

Other options from the Factor Analysis: Descriptives dialogue box

Determinant
The Determinant option is in the Correlation Matrix section of the Factor Analysis: Descriptives dialogue box. If you select it, the value of the determinant will be printed underneath the Correlation Matrix table (as shown below). Its value is an indication of whether factor analysis methods other than principal component analysis can be used. Its value must not be zero. If it is zero, the correlation matrix cannot be inverted (see below).

The determinant will be printed at the bottom left of the Correlation Matrix table. NB: The determinant may be printed to three decimal places, instead of in exponential format. If so, this determinant would appear as .000 (see tip box).

Whether the determinant is printed to three decimal places or in exponential format depends on a setting on the Edit, Options, General tab. On that tab, in the Output area, the No scientific notation for small numbers in tables option controls this; for numbers with three or more zeros following the decimal point, however, exponential format is more useful.

Inverse
The Inverse option produces the Inverse of Correlation Matrix, a complete matrix for all the variables. A section of it is shown below.

The values in this table (obtained by inverting the correlation matrix) are used in the calculations of many methods of factor extraction other than principal component analysis. If this table cannot be obtained (see above), then those methods cannot be applied.

The details of matrix determinants and inverses are not covered in this book. See Tabachnick and Fidell (2014, 23–32).

Other options from the Factor Analysis: Extraction dialogue box

Method
You can choose from the various methods of extracting factors that SPSS allows. To make a sensible choice, you will need to read up about each of the methods. Here we make just a few points. Principal component analysis, shown in Section 2, tends to be the most robust method. Remember, though, that components are distinct from factors (Kline, 1994) and that principal component analysis and factor analysis are somewhat different things. Nonetheless, with large matrices, there is little difference between the solutions from different methods (Kline, 1994). Furthermore, if a factor analysis solution is stable, then you should obtain similar results regardless of the extraction method used (Tabachnick and Fidell, 2014). Cooper (2010), however, stresses that, because principal component analysis always gives larger loadings than other methods, some rules of thumb may be misleading with this method.

The SPSS output from each of the other methods looks similar to the output from principal component analysis. The values in the particular tables that show the results of the factor extraction will, of course, differ at least slightly between methods. Some other differences and similarities are:

1. The word ‘component’ is replaced by the word ‘factor’ for all other methods.
2. The same tables are produced, except that for both generalised least squares and maximum likelihood methods, SPSS prints a Goodness-of-fit Test table that can be used to test hypotheses about the number of factors. See Kline (1994) for information about this use of the goodness-of-fit test. If you change the number of factors to be extracted, you may also need to increase the Maximum Iterations for Convergence in the Factor Analysis: Extraction dialogue box to allow the Goodness-of-fit Test table to be produced.
3. Communalities table:
   a. Initial communalities have a value of 1.0 in principal component analysis. In all other methods they are less than 1.0, because those methods explain the variance that is shared between all the variables (common variance) and attempt to eliminate other variance (error variance and unique variance).
   b. Extraction communalities are calculated from the factor matrix in the same way as they are from the component matrix, except for generalised least squares.
4. Total Variance Explained table:
   a. The Initial part of the table will hold the same values for all methods of extraction because it shows how all the variance could be explained.
   b. In the Extraction Sums of Squared Loadings part of the table, for principal component analysis the row values are identical to the Initial Eigenvalues row values for each component, because that method explains all the variance for the extracted factors. For the other methods, the row values here differ from the Initial Eigenvalues row values because these methods explain common variance and eliminate other variance.

Extract options
The lower half of the Factor Analysis: Extraction dialogue box presents two mutually exclusive options, described next, by which you can affect how many factors will be extracted. Either of these options may be useful for your final analysis or for obtaining information on more of the initial factors. You will need to use one of those options if you wish to inspect ‘extracted factor’ information (e.g. the factor loadings) for more factors than may be extracted using the default values. That information may also be useful if you want to compare factors from your data with those obtained in previous research on the same variables. In any report, you should comment on how the number of factors to be extracted was decided upon.

The first option is to change the minimum eigenvalue, normally set at 1. The eigenvalue varies according to the number of variables entered, so it is not a robust guide for the number of factors to extract. The second option is to set the number of factors (you could do that after inspecting the scree plot).

Other options from the Factor Analysis: Rotation dialogue box

Method
The default setting is None, for no rotation. The technique of rotation, however, was devised in order to simplify the solution to the factor analysis. Thus, you would normally select one of the rotation methods that SPSS allows, from the Factor Analysis: Rotation dialogue box. SPSS provides orthogonal rotation methods (Varimax, Equamax and Quartimax) and oblique rotation methods (Direct Oblimin and Promax). As Kline (1994) pointed out, psychological constructs are likely to be correlated with one another. Of the oblique rotation methods, he recommends using Direct Oblimin, and we used this for the analysis described in Section 2. If you require an orthogonal method, Varimax is normally thought to give rotated factors that are the easiest to interpret. The Varimax output differs somewhat from that for Direct Oblimin, in the following ways:

1. In the Total Variance Explained table, the Rotation section has three columns, as percent and cumulative percent of variance are included.
2. There is a single Rotated Component Matrix table in place of the Pattern Matrix and the Structure Matrix tables.

Display
If you select a rotation method, then Rotated solution is also selected; don’t unselect it, as it provides the tables that contain the solution.

If you select Loading plot(s), SPSS draws a graph of the variables on the factor axes (up to a maximum of three factors). If you have requested a rotation, then the axes represent the rotated factors.

Factor Analysis: Options dialogue box
As mentioned in Section 2, the Factor Analysis: Options dialogue box allows you to specify missing values, and control the appearance of part of the output, as described next.

Selecting either or both of these options will help you to interpret the output: see text.

Coefficient Display Format
Two useful options here can make the output much easier to interpret. If you select Sorted by size, then, in the relevant tables (Component Matrix, Pattern and Structure Matrix), the variables will be sorted according to their factor loading size for each factor. If you select Suppress absolute values less than, then small factor loadings will be omitted from those tables. Once you have selected that option, the number can be changed, but the default of .10 seems sensible. You could try those options, and compare your output with the tables we have shown in Section 2.
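If you prefer to work from syntax (see Chapter 14), the dialogue-box choices described in this chapter map onto subcommands of the FACTOR command. The block below is a minimal sketch of the Section 2 analysis, assuming the 15 flower variables are named as in the text; it is an illustration of where each option lives, not a definitive reproduction of the pasted syntax.

FACTOR
  /VARIABLES rose wisteria delphinium aster hellebore celandine primrose
     bluebell buttercup daisy speedwell cornflower poppy sweetpea lavender
  /PRINT UNIVARIATE INITIAL CORRELATION SIG KMO AIC REPR EXTRACTION ROTATION
  /FORMAT SORT BLANK(.10)
  /PLOT EIGEN
  /CRITERIA MINEIGEN(1) ITERATE(25)
  /EXTRACTION PC
  /ROTATION OBLIMIN.

Here the /CRITERIA subcommand corresponds to the Extract options, /FORMAT to the Coefficient Display Format options, and /PLOT EIGEN requests the scree plot.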

Negative and positive factor loadings
Just as bivariate correlation coefficients can be positive or negative, so can factor loadings. A trivial example is when an item that taps a particular psychological construct is reversed (to avoid participant response bias). Such items should be reversed before analysis (see Chapter 4, Section 10). For factor loadings after rotation, you should note whether they are negative or positive. Negative loadings prior to rotation may be an artefact of the method of calculation used to extract factors (Kline, 1994, 39).

R factor analysis
R factor analysis is regular factor analysis, carried out on correlations between variables, as we have described in this chapter. There are other types of factor analysis in which other things are factored. For example, Q factor analysis uses correlations between participants (the rows and columns in the data file have to be reversed). More information on this and other types of factor analysis can be found in Kline (1994).

Section 4: RELIABILITY ANALYSIS FOR SCALES AND QUESTIONNAIRES

Anyone can produce a scale, and if the guidelines on writing items available in the literature are followed, then the individual items should be acceptable. There are many existing psychological scales, however, so it is likely that you can find one that assesses the construct/s in which you are interested. Moreover, it is considered better to use an existing scale than to produce another, as researchers can then compare findings from different samples and situations. For many existing scales, reliability information has been published. Nonetheless, cultural differences, or changes in language over time, or simply sample and situation differences, may affect a scale. Thus, whether you are constructing a scale or using an existing one, it is good practice to analyse the data for reliability and dimensionality.

Many issues surrounding the use of scales are beyond the scope of this book. We recommend: Cooper (2010, particularly Chapters 15 and 18); Fife-Schaw (2012); Rust (2012); and John and Benet-Martinez (2014). Here, we only consider reliability, and dimensionality (Section 5).

Test–retest reliability involves testing the same participants with the same scale on two separate occasions, and calculating the correlation between the two sets of scores. It assesses the stability of a scale across time. Parallel forms, or parallel tests, describe the situation in which more than one version of a scale is available, designed to measure the same construct. To assess how similar they are, one would administer them to the same participants at the same time and correlate the scores. Internal consistency is the type of reliability we are concerned with here.

Internal consistency
If items within a scale are intended to measure aspects of the same construct, then they should all be fairly strongly correlated with each other. One way of assessing this is to correlate every item with each of the other items and inspect the matrix. Measures of internal consistency have been developed, however, which greatly simplify this process. Split-half reliability is an early measure, in which responses for two halves of the items are summed and then the correlation between them is calculated. Which items should go in which half, however, is a matter of debate. Cronbach’s alpha, also called ‘coefficient alpha’, became easy to obtain with increasing computer power. It is related to the mean correlation between each pair of items and the number of items in the scale. It is the most commonly reported measure, with a rule of thumb that a scale should have a minimum Cronbach’s alpha value of .7. It does have drawbacks, however. For example, even if a scale does have a high alpha, some individual items may be poorly correlated with the others. Thus, we should also inspect other information in the SPSS output. In the annotations below, we describe how to use three particular values, which are provided for each item:

1. The part–whole correlation (or item–total correlation), which is the correlation between each item and the sum of the other items.
2. The squared multiple correlation for each item; that is, the R² obtained if the item is entered into a multiple regression as the criterion variable with all the other items as predictor variables.
3. The value of Cronbach’s alpha for the scale if a particular item is deleted.

In addition to reliability, we should also assess whether a scale is unidimensional (see Section 5).
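For readers who want the formula behind the statistic, Cronbach’s alpha is conventionally defined as follows; this is the standard textbook definition rather than anything specific to SPSS. Here k is the number of items, \(s_i^2\) the variance of item i, \(s_T^2\) the variance of the total score, and \(\bar{r}\) the mean inter-item correlation.

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} s_i^2}{s_T^2}\right), \qquad
\alpha_{\text{standardised}} = \frac{k\,\bar{r}}{1 + (k-1)\,\bar{r}}
\]

As an illustration of the second form (with hypothetical numbers, not taken from the ATR data): with k = 20 items and a mean inter-item correlation of .30, \(\alpha_{\text{standardised}} = 20(.30)/(1 + 19 \times .30) \approx .90\), which shows how a modest average correlation can still produce a high alpha when there are many items.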

How to perform a reliability analysis
Before you start on this section, you should run through Chapter 4, Section 10. You will need data file ‘ScaleV3.sav’, based on Larsen’s (1995) Attitudes Toward Recycling (ATR) scale, saved in the last exercise in that section. Remember that normally you would need many more cases. Below we demonstrate two of the ways of measuring reliability. Start by clicking on Analyze > Scale > Reliability Analysis. The Reliability Analysis dialogue box will appear (as shown below). Follow the instructions.

Move all of the scale items into the Items area.

The default reliability model is Cronbach’s alpha. We show you the output for that first, followed by the output for Split-half.

Click on the Statistics button for the Reliability Analysis: Statistics dialogue box (see below).


The Reliability Analysis: Statistics dialogue box has a number of useful options for the purpose of checking scales. Otherwise, SPSS only gives output for the reliability measure that you request under Model.

Select the options shown. NB: From Inter-Item, we have selected Correlations, as this generates useful output in addition to the correlation matrix itself. We will not show the matrix.

Click on Continue, then on OK. The output is shown below.


SPSS output for reliability analysis with Cronbach’s alpha
Obtained using menu item: Scale > Reliability Analysis (model = alpha)

This summary table shows the valid cases. Any cases with missing data are excluded listwise from the analysis.

Cronbach’s alpha for the scale is .899. This suggests that the scale has good reliability overall, but we still need to check individual items (see next page).

This measure is based on the assumption that the variances of each item are equal; not necessarily a valid assumption. It is printed if you request any Inter-Item options in the Reliability Analysis: Statistics dialogue box.

The subsequent tables are obtained from options in the Reliability Analysis: Statistics dialogue box.

This table (a section only shown here) is from the Item option in Descriptives for.

We requested Correlations from Inter-Item in the Reliability Analysis: Statistics dialogue box above. A table called the Inter-Item Correlation Matrix is produced, and would normally appear here. We have omitted it, but you could inspect those correlations.


This table is from the Scale if item deleted option in Descriptives for, except for the Squared Multiple Correlation column, which is produced if you also select any Inter-Item option. The part–whole correlation is the correlation between each item and the sum of the rest of the items. The Squared Multiple Correlation is the R², which would be obtained by entering the item as the criterion variable in a multiple regression with all the other items as predictor variables.

Cronbach’s alpha obtained if that item is omitted from the calculation.

For item ‘q_n’, there are strong indications that it is not a consistent part of the scale: the part–whole correlation is very low (.062), R² is also low (.38), and Cronbach’s alpha is increased to .91 when this item is deleted. Thus, this item would be a strong candidate for either being deleted from the scale, or being rewritten. NB: Cronbach’s alpha including ‘q_n’ is .899 (shown in output above). That is well above the rule of thumb of .7 for a reliable scale. Thus, this example is a good illustration of the point that Cronbach’s alpha alone may be insufficient to be sure of the reliability of all items in the scale.

This table of descriptives, for the sum of responses for the whole scale for each participant, is from the Scale option in Descriptives for.

Acting on the results
If this was the final scale, you could write: ‘Cronbach’s alpha for the ATR scale from the current sample was .90’. If you were assessing the scale, you would also describe other attributes from the SPSS output. For these results, however, you would consider deleting or rewriting item ‘q_n’, probably item ‘q_s’ and possibly others. If you delete items, you must repeat the reliability analyses on the remaining items. Additionally, you should consider dimensionality of the scale (Section 5).

SPSS output for reliability analysis with split-half
Obtained using menu item: Scale > Reliability Analysis (model = split-half)

In the Reliability Analysis dialogue box, select Split-half instead of Alpha from Model. Only those tables that differ from the output previously illustrated are shown here.

The items in each half are shown beneath this table. The first 10 variables you moved into the Items box of the Reliability Analysis dialogue box are put into the first half, and the remaining 10 into the second half.

The first and third rows show the Cronbach’s alpha for each half.

The correlation between the sums of items in each half.

You can control which items go into which half by, in the Reliability Analysis dialogue box, moving the variable names across one at a time in the order you wish. This procedure can be useful if you wish to compare different combinations of items. When assessing scale reliability, however, Cronbach’s alpha is more often used than split-half reliability.


The Scale Statistics table holds descriptives for the sum of responses from each half, in addition to those for the whole scale.
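For completeness, both analyses in this section can also be run from syntax (see Chapter 14). The block below is a minimal sketch, assuming the 20 ATR items are stored as consecutive variables in the data file and can therefore be referred to with the TO keyword; the names ‘q_a’ and ‘q_t’ are hypothetical stand-ins for the first and last items.

* Cronbach's alpha with item and scale statistics.
RELIABILITY
  /VARIABLES=q_a TO q_t
  /SCALE('ATR') ALL
  /MODEL=ALPHA
  /STATISTICS=DESCRIPTIVE SCALE
  /SUMMARY=TOTAL.

* Split-half reliability on the same items.
RELIABILITY
  /VARIABLES=q_a TO q_t
  /SCALE('ATR') ALL
  /MODEL=SPLIT.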

Section 5: DIMENSIONALITY OF SCALES AND QUESTIONNAIRES

We will describe two ways in which you can use an analysis of the dimensionality of a scale. Relevant references are given at the beginning of Section 4. First, if you want a scale in which all items assess a single construct, you can assess how strongly each item loads onto a single component. Weakly loading items would be discarded or rewritten. Second, to assess whether there is more than one construct underlying the scale, you can carry out a component or factor analysis to determine the structure. You could then use the items that load strongly on separate components as subscales. (You would need to assess the reliability of each subscale separately.) Ensuring that a scale is unidimensional, or that subscales are identified, is an aspect of construct validity.

Before you work through this section, you should be familiar with the content of Sections 1–3. Also, if you have not yet done so, you should run through Chapter 4, Section 10, as you will need data file ‘ScaleV3.sav’, from the last exercise in that section. We will show you the procedure with principal component analysis, but it may be more appropriate to use a factor analysis; for example, alpha factoring. Remember that in reality you would normally need many more cases than in this example, and with the data file that we are using, alpha factoring does not converge (does not produce a solution) when one factor is requested.

To identify those items that load on a single component
Enter the items into a principal component analysis (Section 2). For this purpose, the only settings you need to make are as follows:

1. In the Factor Analysis: Extraction dialogue box, select the option Fixed number of factors, then type 1 in the Factors to extract field.
2. In the Factor Analysis: Options dialogue box, select Sorted by size.

For this purpose, two tables are important; extracts of both are shown below.
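The equivalent syntax, sketched below under the same assumption as in Section 4 that the 20 items run from ‘q_a’ to ‘q_t’ in the data file, forces a single component and sorts the loadings:

FACTOR
  /VARIABLES=q_a TO q_t
  /CRITERIA FACTORS(1) ITERATE(25)
  /EXTRACTION PC
  /FORMAT SORT
  /ROTATION NOROTATE.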


Note: the largest component explains only 38% of the variance.

The rule of thumb is that items with loadings of less than .4 should be discarded. In this case, we can discard these questions manually – but you can also ask SPSS to do this for you using the Factor Analysis: Options box, selecting the Suppress small coefficients option, and typing 0.4 into the Absolute value below: box.


Acting on the results
From this analysis, we would either discard items ‘q_s’ and ‘q_n’, or rewrite them. Note that these items were also identified in the reliability analysis (Item–Total Statistics table above). If items are discarded (i.e. removed from the data file or questionnaire), another analysis should be carried out on the data for the remaining items, as factor loadings will change somewhat. For these data, you will find that the variance explained by the largest component increases to 42%, and the loadings of the remaining 18 items are still all above .5. You could then use the scale data (for the 18 items) in other analyses, as long as you cite the reliability and dimensionality results in your report.

You must be aware, however, that another sample may give different reliability and dimensionality results. If you were constructing a scale, you might decide to rewrite items instead of just deleting them; then data from a new sample must be collected with the new version of the scale. Data from the new version must then be subjected to reliability and dimensionality analyses. Thus, scale construction/modification is an iterative process.

To assess the structure of items within a scale
Enter all 20 items into a principal component analysis or factor analysis. For this purpose, we carried out the analysis in the way described in Section 2, except that:

■■ In the Factor Analysis: Options dialogue box, we selected Sorted by size.
■■ We selected Promax in the Factor Analysis: Rotation dialogue box, as the Direct Oblimin rotation failed to converge in 25 iterations. If you wish, you could try that.

Only some of the tables from this analysis are shown here; see Section 2 for a description of all the output. Check the indicators of factorability (described in Section 2). Some of them are reasonable. The KMO value, however, is only .59. Also, some of the individual KMO values, in the diagonal of the Anti-Image Correlation matrix, are well below .5, indicating that those individual items perform poorly. The section of the Total Variance Explained table below shows that there were six components with eigenvalue greater than one.


The Scree Plot, however, shows just two components above the break between the steep and shallow parts of the curve.

This dotted line, which we have superimposed, indicates the approximate point at which the scree plot ‘breaks’ between the steep and shallow parts of the slope.


In the Pattern Matrix table below (produced by rerunning the analysis, and selecting the Based on Eigenvalue option in the Factor Analysis: Extraction dialogue) the heaviest loadings for each item are highlighted. Thirteen of the 20 items load most strongly on one of the first two components, with four items on the third. The last three components each have only one item with heaviest loading; note that items ‘q_n’ and ‘q_s’ are here.

Acting on the results
If you obtained results like those above with a sufficient sample size, you would need to decide which option makes the most sense in terms of factor extraction… as the different methods give you the option of two, three or six factors. Unlike other statistics in this book, which tend to be more ‘black and white’ when it comes to interpreting what they mean, interpreting the outcome of a factor analysis or PCA is greyer. In fact, at this point things become a bit more qualitative than quantitative in their nature. While SPSS can tell you how the different variables group together, to decide how many factors to extract, you need to look at the nature and content of the items themselves to see which groupings make the most sense in real terms. Your understanding of your research question and your data should help you decide which factors to keep, and which to disregard, when your eigenvalues and scree plots don’t give you a clear, single answer. The important thing is to try to account for as much variance in your data as possible, using as few factors as possible, but which group together in a meaningful way.

In the above example, you might start by checking whether a two-factor solution makes the most sense. You could do this by checking the content of the items loading onto the first two components and assessing whether they represent two distinct factors (or subscales). If you find support for the two-factor option, you might then want to collect data from a new sample to assess whether the results are replicable. You could then use exploratory factor analysis as described above, or you could use confirmatory factor analysis to test hypotheses about your measurement model. However, that is outside the scope of this book.

Important
As we have stated, a sample of 50 is small for reliability and dimensionality analyses. So if you collect data with the ATR scale (Larsen, 1995), you will probably get different results. Indeed, a different sample we analysed gave different reliability and dimensionality results from those shown above. Thus, in this and the previous section, we have shown you how to use these analyses; we have not given definitive results for the ATR scale.

Summary

■■ This chapter introduced you to factor analysis, and how to check the reliability and dimensionality of scales.
■■ Factor analysis allows you to investigate whether a factor structure underlies a set of variables.
■■ We have explained the output that SPSS generates for principal component analysis, which is the simplest type of factor analysis.
■■ Remember that, to be valid, factor analysis requires a large number of observations.
■■ When constructing a new scale, or using an existing scale in a new population, it is sensible to check its reliability and dimensionality.
■■ For guidance on incorporating SPSS output into a report, or on printing the output, see Chapter 14.


14: Using syntax and other useful features of SPSS

In this chapter
●● The Syntax window
●● Syntax examples
●● Getting help in SPSS
●● Option settings in SPSS
●● Printing from SPSS
●● Incorporating SPSS output into other documents
●● SPSS and Excel: importing and exporting data files

SPSS for Psychologists online
Visit macmillanihe.com/harrison-spss-7e for data sets, online tutorials and exercises.

Section 1: THE SYNTAX WINDOW


■■ The dialogue boxes we have been using to control SPSS form an interface that allows the user to specify the analysis they require. When you click on the OK button, this interface translates all your selections into a series of commands telling SPSS what to do. It is possible to control SPSS using these commands directly. The language in which these commands are expressed is called the SPSS syntax.

■■ For the more advanced user, being able to use syntax commands will significantly increase your efficiency, allowing you to repeat complex series of analyses and to undertake analyses that would otherwise be impractical.

■■ A useful analogy is to think about how you record a TV programme at home. Most of the time you probably just select the programme you want from the electronic programme guide and accept all the default settings. Sometimes, however, you will want to do something a bit different, perhaps recording only the second half of a programme or deliberately overrunning the scheduled end of the programme to allow for delays in broadcast. In this situation, you will want to talk directly to the recorder and independently set the start and stop times. In the same way, using syntax commands allows you to be more flexible in your use of SPSS.

■■ In this section, we describe how to control SPSS directly using syntax.


An example of a syntax command
You will already have seen some SPSS syntax. Unless you have opted to suppress it, the syntax required for each command is included at the top of the output for that command (to make the output easier to follow in previous chapters we have sometimes omitted the syntax). For example, look back at the output we produced when using the Descriptives command in Chapter 3, Section 2 (reproduced below). You will see that the syntax for the command, including all the options you selected, appears at the start of the output. These commands have a precise structure, but it is not difficult to understand what each line does.

These three lines of text are the syntax commands that instruct SPSS to undertake the Descriptives analysis. The second and third lines relate to the options we selected. If this is the first analysis you have undertaken since opening the data file, you will have some additional lines of syntax above those shown here. These lines tell you which data file is in use.

The Descriptives syntax command is made up of three lines of text. The first specifies the Descriptives command and lists the variables to be included in the analysis. Note that the second and third lines are indented and start with a slash (/) to indicate that they are continuations of the Descriptives command. The second line (“/SAVE”) is related to the option Save standardized values as variables which we selected (See Chapter 3, Section 2). The third line lists the options we selected for this command and ends with a full stop indicating the end of the command.

The Paste button and the Syntax window
You may have noticed that the dialogue boxes used to execute an analysis (those that include the OK button) also contain a button marked Paste. If you click on the Paste button, the analysis is not executed, and no output is produced. Instead, control is switched to a new window called the Syntax Editor window (or ‘Syntax window’), and the command lines needed to execute your analysis are pasted into this window. You can now specify a second analysis and click on the Paste button again. The syntax for the new command will be added to the Syntax window. In this way, you can build up a sequence of commands in the Syntax window, without executing any of them. Finally, when you have selected all the analyses you require, you can execute or run the commands. This might seem like an odd thing to want to do, but there are at least four good reasons for working in this way.

1. Duplicating actions
You may choose to work in the Syntax Editor window because you need to repeat a complex command several times. For example, when analysing the data from the adoption survey described in Chapter 4, we might need to compute 10 new variables, each of which is the mean of 10 existing variables. This would be a tedious procedure if we used the dialogue boxes, but would be easy to perform using syntax commands. This example is demonstrated later in this section.

2. Keeping a record of your analysis and repeating an analysis
Sometimes, you may want to repeat a complex series of analyses after updating your data file. This is easy to do if you saved the lines of syntax needed to execute your analysis. Alternatively, if you are working with a large, complex data set, then it is common to make errors. For example, you might undertake the wrong analysis, or use the correct command but forget to select the options you need. As a result, the Output window will fill up with erroneous output. One solution is to use the Syntax window as a notepad in which to record the details of the successful analyses. When you have the analysis working the way you want, you can save the details to the Syntax window by clicking the Paste button. In this way, you can build up a permanent record of the analysis, which you can then run to produce a ‘clean’ set of output. Some researchers always save the syntax of the final analysis they undertake before submitting a report or paper for publication. This means that even months later they will be able to quickly repeat an analysis and make changes if these are requested by a reviewer.

3. Describing the analysis you have undertaken
Sometimes, you will want to be able to accurately and succinctly describe exactly how you analysed a data set; for example, when emailing a colleague or in an appendix to a publication or report. Syntax is perfect in this situation as it is easy to write and is concise and precise.

4. Tweaking the parameters of a command
Another reason for choosing to work in the Syntax window is that some of the options or parameters associated with certain commands can only be accessed using the syntax commands. In order to keep the number of buttons and options on the dialogue boxes manageable, the SPSS programmers have preset certain features of the commands. Occasionally, experienced users may want to alter one of these default settings. This can only be done using syntax. Details of the additional features of a command which can be accessed only via syntax are described in special help files available from the button on the dialogue box. See Section 3 for an example.


The Syntax window
The Syntax window is used to build up the syntax commands and execute them. In fact, syntax files are nothing more than simple text files, so you could use a text editor or word processing application to write these files, but the Syntax window incorporates several useful features to help you edit the syntax. The Edit menu in the Syntax window provides access to all the normal text editing functions such as Copy, Cut, Paste and Find & Replace. Using these functions, it is possible to copy a section of syntax produced by SPSS and edit it, for example changing variable names. In this way, we can quickly produce the syntax to undertake a long sequence of commands; something that would be tedious to do using the normal dialogue boxes. The toolbar across the top of the Syntax window includes a number of useful buttons to help us construct syntax files.

This button inserts an asterisk (*) to turn a line of text into a comment which will be ignored by SPSS. Comments can be used as notes, for example to remind you what the syntax does.

This is the Run Selection button, which runs (executes) any selected sections of syntax.

Click this button to access help about the syntax for this command.

Use this navigation pane of the window to move to a particular block of syntax.

Standardised colour coding helps to distinguish between command names (blue), subcommands (green), keywords (red / maroon) and variable names (black).

Once you have constructed a block of syntax commands, select the lines you want to execute, then click on the Run Selection button (the large green arrow on the menu bar).

Another useful feature available in the Syntax window is that it auto-completes the names of commands and offers you lists of options to choose from. To see this in operation, open a new Syntax window (File > New > Syntax). Now start typing the command Oneway. After you have typed the first few letters, SPSS offers you a list of commands to choose from. This is illustrated in the screenshot below.


This is a good example of the use of comments to annotate the syntax file.

Auto-complete in action: when you start a new line with a slash character (/), SPSS will offer you the appropriate list of options for this command. Here, if you choose STATISTICS, you will be offered a list of statistics to choose from. In this way, SPSS helps you to write the syntax correctly.

Basic rules of syntax
There are some important characteristics of syntax to note:

1. Each new command must start on a new line. In practice, leaving several blank lines between commands makes the syntax easier to understand.
2. Each command must end with a full stop or period mark (.).
3. Subcommands or options are usually separated by the forward slash mark (/). It is a good idea (but not essential) to start each subcommand on a new line.
4. You can split a command over several lines; it is safest to break the line at the start of a new subcommand.
5. A command or block of commands must be followed by the ‘Execute’ command.
6. It is useful to include notes to help you remember what the syntax means. These notes are called ‘comments’. A comment must start with an asterisk (*) and must end with a full stop. A comment can be split over more than one line of text. SPSS will not try to interpret comments.
7. Make sure that you spell your variable names correctly (i.e. exactly as they appear in the Data Editor window). Misspelling a variable name is one of the most common sources of errors when running syntax commands.

These rules are illustrated in the short example below.
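This fragment is a minimal sketch showing a comment, a command split over two lines with a subcommand, the terminating full stop and an Execute command. The variable names ‘age’ and ‘score’ are hypothetical and are only there to make the example self-contained.

* Descriptive statistics for two hypothetical variables.
DESCRIPTIVES VARIABLES=age score
  /STATISTICS=MEAN STDDEV MIN MAX.
EXECUTE.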

In practice, it is quite rare to write a piece of syntax ‘from scratch’. It is more usual to use the dialogue boxes to select an analysis and set the options, and then to paste the syntax for this command into the Syntax window using the Paste button. This syntax can then be copied and edited as required before being executed. Using this approach, you can be sure that the syntax and spelling will be correct. By careful use of the Find and Replace commands (available from the Edit menu), you can copy the syntax of a command, and change the variable(s) quickly and accurately to build up a series of analyses. An example is given below.

In this example, we are seeking to compute 10 new variables. Each of these variables is the mean of a block of 10 questionnaire responses. The original variables were given names that reflect the block and question number. For example, ‘b1q3’ is the third question in the first block, while ‘b9q8’ is the eighth question in the ninth block. We could use the dialogue boxes to perform all these Compute commands, but this would be a laborious task. It is much easier to produce a series of syntax commands to do the work for us, and this approach is less likely to result in errors, as we can carefully check the syntax before running it.

Here are the steps we could use to build up the syntax we need to perform these Compute commands:

1. Using the Compute Variable dialogue box, enter the details needed to compute the new variable ‘b1mean’ for the block 1 mean (see Chapter 4, Section 6, for details of the Compute command).
2. Click on the Paste button to paste the syntax commands into the Syntax Editor window (see next page).
3. Select this first block of text, copy it and then paste a copy of this block below the first. Leave a few blank lines between the two blocks.
4. Move the cursor to the start of the second block and use the Replace function to change all instances of the string ‘b1’ to ‘b2’; click on the Find Next button, and then click the Replace button repeatedly until all the changes are made. Do not use the Replace All button, as this will replace all instances, including those in the first block.
5. Now repeat steps 3 and 4 until you have 10 blocks of syntax, each instructing SPSS to compute the mean of the 10 variables that make up that block. Block 1 will compute the variable ‘b1mean’, block 2 will compute the variable ‘b2mean’ and so on.
6. Carefully check the syntax for the Compute commands. Make sure you have changed all variable names systematically and that you have a full stop at the end of each line.
7. Make sure that you have an Execute command (with a full stop) after the last Compute.
8. Select all 10 Compute commands and click on the Run Selection button. The 10 new variables will be computed and appended to your data file.

These steps are illustrated below.
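To give a flavour of what the first two pasted-and-edited blocks might look like, here is a sketch. It assumes that the 10 items in each block are named ‘b1q1’ to ‘b1q10’, ‘b2q1’ to ‘b2q10’ and so on, extrapolating from the names ‘b1q3’ and ‘b9q8’ mentioned above; the syntax you paste from the Compute Variable dialogue box may differ in detail.

* Block 1: mean of the ten block 1 items.
COMPUTE b1mean = MEAN(b1q1, b1q2, b1q3, b1q4, b1q5, b1q6, b1q7, b1q8, b1q9, b1q10).

* Block 2: a copy of block 1 with 'b1' replaced by 'b2'.
COMPUTE b2mean = MEAN(b2q1, b2q2, b2q3, b2q4, b2q5, b2q6, b2q7, b2q8, b2q9, b2q10).

EXECUTE.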


This first block of syntax was produced using the Paste button on the Compute dialogue box.

This second block is a copy of the first which was edited to replace all instances of ‘b1’ with ‘b2’.

Note the blank line we have inserted between blocks.

The third block is being edited in the same way.

We are about to change ‘b1q3’ into ‘b3q3’ by clicking the Replace button. Don’t use the Replace All button, as this will change every instance of ‘b1’ into ‘b3’, including those in the earlier blocks.

Saving syntax files
Once completed, a syntax file can be saved. If the Syntax window is the active window (i.e. if you are currently working in this window), you can simply save the contents of the window as a syntax file by selecting Save As from the File menu. SPSS will automatically add the suffix ‘.sps’ to the end of the file name. We strongly recommend that you accept this default suffix.

It is a good idea to use the same root name for all the files relating to one project. For example, in the case of the adoption survey described in Chapter 4, the data file might be called ‘Adopt.sav’. The output files produced from the analysis of this file might be saved as ‘Adopt1.spv’, ‘Adopt2.spv’ etc., and a syntax file for this research might be called ‘Adopt.sps’. In this way, it is easy to see which files relate to each other. These files could then all be saved in a common folder in your file system, perhaps called ‘Adoption Project Files’ or something similar.

Executing syntax commands
Once you have written and saved your syntax commands, you can execute, or run, them, either by highlighting the lines of syntax you want to execute and clicking on the Run Selection button, or by choosing one of the options from the Run menu. It is safer to highlight the commands and use the Run Selection button.

We recommend that you get into the habit of executing syntax by highlighting the commands you want to run and clicking on the Run Selection button. This way you will only execute the lines you want.

Syntax errors

Once the syntax has been executed, any errors will be reported at the bottom of the Syntax window. These warnings are reasonably clear, and include a note of the line number at which the error occurs, so you shouldn’t have too much problem tracking down the error in your file. If you have more than one error, it’s a good idea to correct these one at a time, rerunning the syntax file after each error is corrected, as it is possible one error will generate several problems in the file. The screenshot of the Syntax window shown below illustrates an example of an error. In the second Frequencies command, we have deliberately misspelt the name of the variable (‘seex’ rather than ‘sex’) so that you can see what the error report looks like. SPSS reports that the error is on line 5 (which is also marked with an arrow) and explains that the variable name hasn’t been defined. It even recommends you check the spelling – good advice in this case! This warning also appears in the Output window.

Selecting the correct data file

Another common error when performing data analyses using SPSS syntax is to run your syntax against the wrong data file. It is possible to have several different data files open in SPSS at the same time. This isn't something novice users are likely to do, but as you become more experienced, you may find yourself working on projects where you have several related data files, a number of which may be open simultaneously. In this situation, one of the first things you should do before running a syntax file is to check which data file is active. The active data file can be selected from a drop-down list in the Syntax window (see below), or it can be specified in a syntax command, which has the form DATASET ACTIVATE Dataset1. If you run your syntax file on the wrong data file, SPSS will normally report errors when it encounters undefined variables. However, if you have two data files that have common variable names, SPSS may not detect any errors and your mistake could go unnoticed. Be extra careful in this situation and consider including the DATASET ACTIVATE syntax line at the start of each block of syntax to reduce the risk of analysing the wrong data set.
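For example, a minimal sketch of a block of syntax that names its data set before running an analysis might look like this. The data set name 'Dataset1' is the default name SPSS gives to the first open data file (check the name shown in the title bar of the Data Editor window and use that name instead if it differs); 'sex' is the variable from the example above.

* Make sure the intended data file is the active data set before the analysis runs.
DATASET ACTIVATE Dataset1.
FREQUENCIES VARIABLES=sex
  /ORDER=ANALYSIS.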


Note we have taken the sensible precaution of inserting a copy of the DATASET ACTIVATE command immediately prior to the syntax for our Frequencies analysis. This reduces the risk of analysing the wrong data set. Remember to highlight this line along with the rest of the syntax when executing the commands.

You can also select the active data set from this drop-down list.

This is the error report produced by SPSS due to our misspelling of the variable ‘sex’ in the Frequencies command. The error report tells us the error occurs in line 3 as part of the Frequencies command, and that the problem is an undefined variable name. We are advised to check the spelling of the variable name.

Section 2: SYNTAX EXAMPLES

One of the big advantages of being able to use SPSS syntax is that it enables you to benefit from the enormous number of helpful websites that provide short pieces of syntax to perform statistical analyses which cannot be undertaken using the dialogue boxes alone. One useful resource is the website of IBM, the company that now owns and markets SPSS. The IBM Knowledge Center for SPSS provides lots of examples of syntax routines (to find this, search for 'IBM SPSS Knowledge Center'). To illustrate how useful this resource is, we will now use syntax obtained from this site to compare correlation coefficients.

Comparing correlation coefficients

In Chapter 6, Section 6, we demonstrated how to compare two independent correlation coefficients in order to determine whether they were significantly different from one another. This required us to use formulae to compute Fisher's z-scores, which could then be compared against published tables of critical values. A much easier alternative is to use syntax to do these calculations for us. The section of syntax below is adapted from the IBM Knowledge Center (https://www.ibm.com/support/pages/differences-between-correlations).

*testing equality of independent correlations.
*H0: R1 = R2; r1 & r2 are sample corr of x,y for groups 1 & 2 .
*n1 and n2 are sample sizes for groups 1 and 2.
Compute z1 = .5*ln((1+r1)/(1-r1)).
Compute z2 = .5*ln((1+r2)/(1-r2)).
Compute sezdiff = sqrt(1/(n1 - 3) + 1/(n2-3)).
Compute ztest = (z1 - z2)/sezdiff.
Compute alpha = 2*(1 - cdf.normal(abs(ztest),0,1)).
Formats z1 to alpha (f8.3).
List z1 to alpha.
Execute.

The first three lines of this syntax program are comments to help us understand the calculation. The next four lines are Compute statements, and these are exactly equivalent to the formulae given in Chapter 6. The fifth Compute command calculates the alpha value (the p value) for the calculated z-score, thus avoiding the need to use tables to look up the critical value. The Formats command ensures that the new variables are displayed to three decimal places. The List command displays the values for the new variables. Before we can run this syntax, we need a simple data file. The file will consist of just one row of data containing four variables: the correlation coefficients for the two groups to be compared, and the sample size (n) for each of these two groups. The data file is shown below.

This simple data file contains only four values: the correlation coefficients for the two groups (r1 and r2) and the sample sizes (n1 and n2).

Set up this simple data file and save it, then type the syntax commands into a Syntax Editor window. Make sure you include the Execute command at the end of the block of commands, highlight the lines of syntax and select Run. You should see the output given below.
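If you prefer to stay entirely in syntax, a small sketch like the one below could create the same one-row data file. The correlation and sample-size values shown are invented purely for illustration; substitute your own before running the Compute commands above.

* Create one row of data holding the two correlations and the two sample sizes.
DATA LIST FREE / r1 r2 n1 n2.
BEGIN DATA
.55 .31 40 45
END DATA.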


The output of the List command shows the values of the five new variables we have computed, including the value of Fisher's z-test ('ztest') and the significance value ('alpha'). Compare these two values with those calculated by hand in Chapter 6 (any minor differences are due to rounding errors).

The syntax file we created to undertake this analysis demonstrates the use of the keyword TO in SPSS syntax. This keyword allows us to specify a list of consecutive variables. In the example above, we computed a total of five new variables, ‘z1’, ‘z2’, ‘sezdiff’, ‘ztest’ and ‘alpha’. Because these are consecutive variables, occurring next to each other in the data file, we can specify all five with the syntax ‘z1 to alpha’. Thus, the syntax ‘List z1 to alpha’ lists all five variables, as seen in the output above. This can be a useful way of specifying a large number of variables very simply.
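As a further illustration, the same keyword works in most commands that accept a variable list. The variable names q1 to q10 below are hypothetical, but a single line such as this would produce descriptive statistics for ten consecutive questionnaire variables:

* q1 TO q10 expands to all variables lying between q1 and q10 in the data file.
DESCRIPTIVES VARIABLES=q1 TO q10.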

System variables

SPSS reserves several special variables, called 'system variables', for its own use. You rarely see these, but they can be useful when writing syntax. System variable names always commence with the special character '$'. One useful system variable is $casenum, which is the case number – the row number in your data file. You can use this system variable in Compute commands. For example, the following lines of syntax will create a new variable 'ParticipantNum' and set it equal to the case number.



Compute ParticipantNum = $casenum.
EXECUTE.



Type these lines of syntax into a Syntax window (don't forget the full stops) and then highlight and run them. You will find that the new variable has been added to the active data file. Note, however, that $casenum is always equal to the current case number, so it is important to run this syntax to record the case numbers before doing anything that might change the order of the rows in the data file (for example, using the Sort command reorders the cases; see Chapter 4). Another useful system variable is $sysmis, which SPSS uses to indicate a system missing value. System missing values appear as a dot in the appropriate cell of the data file. An example of the use of $sysmis is when creating a new variable. It is sometimes useful to initialise a new variable as missing, and then subsequently change this default value using Recode commands if certain conditions are met. An easy way to do this is with the system variable $sysmis. The syntax is shown below. If you run this syntax, you will see the new variable is created and all values are set to system missing.



Compute NewVariable = $sysmis.
EXECUTE.
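As a sketch of the pattern described above, the lines below initialise a new variable as system missing and then fill it in only where a condition can be evaluated. The variable names 'score' and 'HighScorer', and the cut-off of 20, are invented for illustration, and we use IF commands here, although Recode can achieve the same result.

* Initialise the new variable as system missing.
COMPUTE HighScorer = $sysmis.
* Fill in values only where the condition can be evaluated.
IF (score > 20) HighScorer = 1.
IF (score <= 20) HighScorer = 0.
EXECUTE.

Cases with a missing 'score' keep the system missing value, which is usually exactly what we want.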



Section 3: GETTING HELP IN SPSS

It might seem odd to wait until the last chapter of this book to describe how to use the SPSS help system, but we hope that our instructions have provided all the assistance you have required so far. However, you will need to make use of the help files provided with SPSS when trying to use functions or commands not covered in this book.

The Help button in dialogue boxes

Each dialogue box includes a Help button. Click on this button to open a page of help in your internet browser. This contains a mass of useful information, including a more detailed description of the statistical procedure and the options available for the command. The page includes a number of links to related content, including a link to the syntax for the current command.


This is the information provided when you click on the Help button in the Paired-Samples T Test dialogue box. Hover your mouse over this arrow to access a table of contents for this help.

The bottom of this page of help includes some useful links. Click here to access details of the syntax for this command.

Click this link to access more information about the Pairs Subcommand.

This page provides a summary of the syntax for the T-Test Command.

The Pairs Subcommand is an example of an additional feature only available through the use of syntax. If we undertake a Paired-Samples T-Test using the SPSS dialogue boxes (as described in Chapter 6), it is necessary to specify each pair of variables to be analysed. This can be time-consuming. Using syntax, it is possible to quickly specify a large number of pairings, for example pairing the first variable in a list with every other variable in the list, or pairing each variable in one list with each variable in another list. See below for details of these options.
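As a rough sketch of the kind of thing the Pairs Subcommand allows (the variable names pre1, post1, post2 and post3 are invented for illustration; check the syntax help page for the exact options), a single PAIRS specification using the keyword WITH can generate several pairings at once:

* Pair the baseline measure with each of the three follow-up measures.
T-TEST PAIRS=pre1 WITH post1 post2 post3.

Written this way, SPSS runs a paired-samples t-test of pre1 against post1, pre1 against post2 and pre1 against post3, without each pair having to be specified separately in the dialogue box.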


This page describes how to use the Pairs Subcommand.

The Help menu

You can also access help from the Help menu on each window. This gives access to several sources of information. To search for help on a particular topic, select Topics, and then enter a few keywords into the search box on the left of the window. For example, if you search for 'chi-square', SPSS will find a long list of related help files, which it will list in the left-hand pane of the window. Select from this list to read the help information (see below).


Click Help then select Topics. This gives access to the help system shown below.

Start to enter a search term here, select from the alternatives and click Go.


We selected Chi-square Test Options.

Other options in the Help menu include Command Syntax Reference, which provides access to a PDF file containing further details of the syntax for each command. You might like to explore all these forms of help.

What’s this? When viewing output in the Output window, double-click on a table to select it. If the pivot table opens in a Pivot Table Editor window, you can right-click on a heading and select What’s This? from the drop-down menu. This will provide a short but useful description of the item. The example below shows the description of the Levene’s Test for Equality of Variance in the Pivot Table for the Independent Samples Test.


Section 4: OPTION SETTINGS IN SPSS

There are a number of options that can be set in SPSS. These control such things as the appearance of the various windows, the way variables are listed in dialogue boxes, the appearance of output, and the location of files. Here, we describe how to access these options and highlight a few you might like to alter. If your screen looks different from the screenshots included in this book, this may be because some of these option settings are different. In particular, if your variables are always listed differently from ours, it may be that the Variable Lists options in your copy of SPSS are set differently from ours (see below).

Changing option settings

The option settings can be accessed from any of the SPSS windows. Select Edit > Options. This will bring up the Options dialogue box (shown below). This dialogue box has a series of tabs across the top. Click on a tab to see that set of options.

Click on one of these tabs to view a group of options. Here, we are looking at the General tab.

If you make any changes to the options, click on the Apply button. Then click on the OK button.

Some useful option settings

General tab

One of the most useful options in the General tab allows you to control how variables are listed in the dialogue boxes. By default, SPSS is set to the Display labels option. In this setting, SPSS lists variables by their labels (with the variable name given in brackets). When working with complex data sets, the alternative Display names option is often better; this forces SPSS to list variables by name. Other options on the General tab allow you to control whether the variables are listed in Alphabetical order or in the order they are listed in the data file (File) or grouped by Measurement level. (Note that you can also choose between these options when working in an SPSS dialogue box – just right-click on the box that lists all the variables and select the options you require.)

Syntax Editor tab

In the Syntax Editor tab, you can change the colours applied to command names, subcommands and so on in the Syntax Editor window, and switch auto-complete on or off. You can also change where syntax is pasted into the Syntax window when you click the Paste button on a dialogue box (at the current cursor location or after the last command in the window).

Viewer tab

The tick box in the bottom left-hand corner of the Viewer tab controls whether the command syntax is written to the output file.

Data tab

The Display Format for New Numeric Variables section of the Data tab allows you to alter the default settings of the width and number of decimal places for a new variable. It might be useful to change this setting if you need to create a large number of variables using the same settings. Remember, this setting alters only the way the number is displayed on screen, not the number of decimal places used when performing calculations.

Output tab

From the Outline Labeling section of this tab, you can select whether you want variable labels, variable names or both variable labels and variable names to appear in output. Similarly, you can choose to display either value labels, values or both value labels and values.

File Locations tab

On the File Locations tab, you can control the default locations for data files and other files in SPSS. You can also control the settings for the Journal file. This is a file that SPSS uses to keep a record of the syntax of all the operations you have undertaken in the current session. The Journal file can actually be really helpful if you carry out lots of analysis, and then accidentally forget to save your output or syntax file. (Something VH has done a number of times!) Because SPSS automatically keeps a running log of all of the analysis you have done (whether through the dialogue boxes or syntax), if you accidentally lose your work, you can simply look up the Journal location under the File Locations tab, navigate to the file location, and open it directly as a syntax file. Then all you need to do is highlight the section that relates to the analysis you have lost, and rerun it. Phew!

Section 5: PRINTING FROM SPSS

Here we provide some information on how to print output, data and syntax files.

Printing output from the Output viewer window

To print from the Output viewer window, either click on the printer icon at the top of the window, or select Print from the File menu. The option All visible output prints any output you could see by scrolling up or down in the Output window (i.e. not hidden output). Alternatively, you can print just the parts of the output you are interested in, using the Selected output option. The Page Setup and Page Attributes options under the File menu allow you to control the paper size and orientation and the margins, and to add footers and headers to your pages.

Printing data and syntax files

To obtain a printed copy of your data or syntax file, select Print from the File menu of the appropriate window.

The Fonts option under the View menu allows you to change the size and appearance of the font used to display and print the data.

Special output options for pivot tables

Most of the tables of results that appear in the Output viewer window are pivot tables. Double-clicking on a pivot table will select it, and either open a Formatting toolbar or, in the case of more complex tables, open the table in a new window. The tools in the Formatting toolbar can be used to adjust the appearance of the table prior to printing it. A huge number of options are available, including rotating the table (swapping rows and columns), adding or removing grid lines and scaling the table to fit the size of paper being used. Below, we describe a few of the most useful actions available from the Pivot Table window menu bar:

1. From the Pivot menu, select Transpose Rows and Columns to swap the rows and columns of a table.
2. From the Format menu, select Table Properties. The tab-style dialogue box displayed will allow you to alter the appearance of the table. The Printing tab contains two useful options (Rescale wide table to fit page and Rescale long table to fit page), which force SPSS to automatically adjust the size of print so that the table will fit the page without being split.
3. Alternatively, select TableLooks from the Format menu and change the overall style of the table using one of a number of predesigned styles. See below for an example.


Here we are going to edit the pivot table from the output of an independent samples t-test. First, double-click on the pivot table. This will open the table in the Pivot Table editor window (see below).

This is the Pivot Table editor window, which provides a number of tools to help change the appearance of the table. For example, click on Format and select TableLooks.

In the TableLooks window, you can select from a number of different table styles. The Academic style and the two APA styles are particularly useful. Select your preferred style and then click OK. The pivot table in the output file will be reformatted to match the new style. You can then copy and paste the reformatted table into your report.

4. From the Format menu, select Autofit. This will resize the columns and rows of the table to a size that is appropriate for their contents. This usually makes the table slightly smaller and much neater.

Before using either of the rescale options (described in point 2 above), you could apply the Autofit option. This will remove any redundant spaces from the table before it is rescaled.

5. From the Insert menu, select Caption. This will allow you to insert a text caption inside the table.
6. Select a set of table cells by clicking and dragging over them. From the Format menu, select Cell Properties. Using the options under the three tabs of this dialogue box, you can set the size and colour of the font and background of the cells, how the content of the cells is aligned, and the format used to display the contents. Also under the Format menu, the Set Data Cell Widths menu item can be used to set the width of the cells.
7. Double-click on any text in the pivot table, including the table title or the row or column labels, to edit the text. It is also possible to edit the contents of the cells in this way.
8. Close the Pivot Table Editor window to apply the changes you have made. The pivot table in the Output window will be updated to reflect these changes.

Once a pivot table is selected, it is possible to adjust the width of a column by clicking on and dragging the grid line dividing the columns. Double-clicking on a cell allows you to change the cell contents.

Section 6: INCORPORATING SPSS OUTPUT INTO OTHER DOCUMENTS

Once you have reformatted the pivot tables in your output, you will want to incorporate them into your word-processed research report. If you are using Microsoft Word, this is simply a matter of copying and pasting the sections of the output. Follow the steps below.


Click on the section of output you want to copy. It will become highlighted.

Right-click on the selected table and choose Copy.

Switch to your Word document and move the cursor to the desired location.

Right-click and select Paste. Then select one of the three Paste Options. Hover the cursor over each option for a preview. The first option (keep source formatting) is likely to give the best result.

Exporting SPSS output

It is possible to export the SPSS output in one of several widely adopted file formats. For example, you can save the output as a Rich Text file, which can then be read into a word processor. This might be particularly useful if you need to send the output to someone who does not have access to a copy of SPSS. To do this, first select File > Export.

Click here to select the file format for the export.

Specify the name and location of the export file.

Click OK to export the output in the new format.

Section 7: SPSS AND EXCEL: IMPORTING AND EXPORTING DATA FILES

Many researchers use a spreadsheet program such as Excel to initially collate and tabulate their data. One reason for doing this is that most people will have access to an Excel-compatible spreadsheet, whereas not everyone will be able to access SPSS. A further advantage is that it is sometimes easier to pre-process a large data file in a spreadsheet program than it is to use SPSS. Fortunately, it is easy to open Excel files in SPSS. Similarly, it is quite straightforward to save an SPSS file in an Excel file format. These import and export operations are illustrated below.

Import: opening an Excel file in SPSS

To illustrate how to open an Excel file in SPSS, we have created a simple Excel spreadsheet, shown below. The file contains some simple demographics and two variables, 'RT1' and 'RT2'. Note that in this case we have used the spreadsheet to compute 'MeanRT', the mean of 'RT1' and 'RT2'.

This is a simple Excel file to demonstrate how to import into SPSS. Note that the cells in column F (MeanRT) contain the formula to calculate the mean of RT1 and RT2.

We can now open this Excel file in SPSS. First, switch to SPSS and click on the menu item File then follow the steps below.

Select File.

Select Import Data.

Select Excel.


Select the Spreadsheet file then click Open.

Ensure the Excel file type is selected.

If the first row of your spreadsheet file was used for variable names, select this option, then click OK. If the spreadsheet doesn’t contain variable names, you can add them later. You are provided with a helpful preview of what your SPSS data file will look like.


The file has opened in SPSS. Note that SPSS has used the text from the first row of the Excel spreadsheet as variable names. Note also that the calculated values of the variable ‘MeanRT’ have been read into the data file.

You should check the Variable View of your new file and add labels and missing values before saving the file.
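The same import can also be driven from syntax. Below is a minimal sketch; the file name and path are invented for illustration, and the exact subcommands available can vary between SPSS versions, so check the Command Syntax Reference for your version before relying on it.

* Read an Excel workbook, taking variable names from the first row of the sheet.
GET DATA
  /TYPE=XLSX
  /FILE='C:\MyProject\ReactionTimes.xlsx'
  /SHEET=NAME 'Sheet1'
  /READNAMES=ON.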

Export: saving an SPSS file to Excel

It is also easy to save an SPSS data file as an Excel file.

Select File > Export > Excel…


Check that the file type is set to Excel and give the file a suitable file name then click Save.

You can specify which variables to export to the Excel file.

Variable names can be written to the first row of data.

The file will be saved in Excel format, and by default the variable names will be written to the first row of the spreadsheet. In this way, it is easy to move files between SPSS and Excel.
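Again, there is a syntax equivalent. A rough sketch is given below; the output file name is invented, and the VERSION number depends on the Excel format you want, so check the Command Syntax Reference before relying on these particular subcommands.

* Save the active data file as an Excel workbook, writing variable names to the first row.
SAVE TRANSLATE OUTFILE='C:\MyProject\ReactionTimes_export.xlsx'
  /TYPE=XLS
  /VERSION=12
  /FIELDNAMES
  /REPLACE.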

Summary

■ This chapter introduced SPSS syntax. We demonstrated how to use syntax files to control SPSS, and described some of the advantages of using syntax.

■ Using syntax to control SPSS can enhance your productivity and gives you an invaluable record of your analysis. Syntax can also be used to repeat a set of analyses, for example after updating a data file. Using syntax to control your analysis can be particularly useful when working with large, complex data sets.

■ This chapter also showed you how to access help files, including syntax help files.

■ We also demonstrated some useful option settings in SPSS.

■ The final sections of the chapter showed you how to print and export SPSS files, and how to incorporate the output into Microsoft Word documents.

■ Finally, we demonstrated how to move data files between a spreadsheet program, such as Excel, and SPSS. This enables you to benefit from the best features of both programs.

Appendix

All data files are available to download from macmillanihe.com/harrison-spss-7e. We recommend that you enter the first few data files yourself to become skilled at entering data, and we therefore include the data for Chapters 4–7 here.


Data files for                                    Included here    Available from website
Data handling exercises                           ✓                ✓
One-sample t-test                                 ✓                ✓
Independent t-test                                ✓                ✓
Paired t-test                                     ✓                ✓
Mann–Whitney U test                               ✓                ✓
Wilcoxon matched-pairs signed-ranks test          ✓                ✓
Pearson's r correlation                           ✓                ✓
Spearman's rho correlation                        ✓                ✓
Chi-square test                                   ✓                ✓
McNemar test                                      ✓                ✓
One-way between-subjects ANOVA                                     ✓
Two-way between-subjects ANOVA                                     ✓
One-way within-subjects ANOVA                                      ✓
Two-way within-subjects ANOVA                                      ✓
Three-way mixed ANOVA                                              ✓
Kruskal–Wallis test and Friedman test                              ✓
Multiple regression                                                ✓
ANCOVA and MANOVA                                                  ✓
Discriminant analysis and logistic regression                      ✓
Factor analysis                                                    ✓
Log transformation exercise                                        ✓
Scale exercise                                                     ✓

DATA FOR DATA HANDLING EXERCISES: CHAPTER 4, SECTIONS 1–9

id  sex  ethnicity  religion  adopted  q1  q2  q3  q4  q5  q6  q7  q8  q9  q10
1   2    2          3         1        4   4   4   5   2   4   3   5   2   2
2   1    2          3         0        4   1   5   5   1   3   2   3   1   5
3   2    4          2         1        2   3   2   2   1   2   1   1   1   1
4   2    3          2         0        2   1   1   3   3   3   3   1   1   1
5   2    2          6         0        4   5   3   4   4   2   4   5   4   3
6   2    5          5         0        2   1   9   1   2   1   1   1   1   1
7   1    4          3         4        1   1   1   2   1   1   1   3   2   5
8   1    1          6         4        1   2   2   1   1   1   2   2   3   1
9   2    2          3         2        1   1   2   1   1   2   1   1   1   1
10  2    3          3         3        2   2   1   2   3   2   2   3   2   2
11  2    4          5         0        5   4   5   5   5   5   5   5   5   5
12  1    4          5         1        4   4   3   2   4   3   4   3   4   3
13  2    4          3         1        3   3   3   3   2   3   3   2   3   4
14  1    1          6         2        2   3   5   4   2   3   4   1   3   5
15  1    3          3         1        9   1   2   1   1   1   1   2   3   2
16  2    4          1         4        1   2   1   2   2   2   1   2   2   2
17  2    5          2         0        5   4   4   4   3   4   3   4   5   5
18  1    1          3         2        5   4   9   4   4   3   3   2   4   4
19  2    1          2         4        1   1   1   2   1   2   2   1   1   1
20  2    2          3         9        1   2   3   2   3   2   2   2   3   3


DATA FOR DATA HANDLING EXERCISES: CHAPTER 4, SECTION 10

Qnum  q_a  q_b  q_c  q_d  q_e  q_f  q_g  q_h  q_i  q_j  q_k  q_l  q_m  q_n  q_o  q_p  q_q  q_r  q_s  q_t

1

4

2

3

4

4

4

2

2

2

4

2

2

3

5

4

3

5

3

3

5

2

2

2

6

3

4

4

1

2

3

4

2

2

4

4

5

3

3

3

3

2

3

4

2

4

4

4

5

2

2

2

5

3

2

3

2

5

2

3

3

3

4

4

4

2

4

5

5

5

1

1

2

5

2

1

1

3

5

1

2

2

3

5

5

4

2

5

4

5

4

1

2

2

2

3

2

2

5

4

2

2

2

4

4

6

4

2

3

3

4

3

2

3

2

4

3

2

3

2

4

3

3

3

4

4

7

5

1

5

4

5

5

1

2

2

5

2

1

2

4

5

2

2

2

4

4

8

5

2

5

5

4

4

2

2

3

5

3

2

2

5

5

1

3

3

3

4

9

4

1

5

5

4

4

2

1

2

5

1

1

3

5

5

2

4

1

1

5

10

5

2

5

5

5

5

1

2

2

4

2

2

3

1

5

3

3

3

3

4

11

4

2

3

4

5

5

1

2

2

4

2

2

2

4

4

2

2

2

4

4

12

4

1

4

4

4

4

1

2

2

5

2

1

2

4

4

2

2

2

4

4

13

4

1

3

4

5

5

2

1

1

5

1

3

2

2

4

2

2

3

5

5

14

4

1

3

4

4

4

1

2

2

4

1

1

2

2

3

1

4

3

3

4

15

4

3

3

4

4

4

1

3

2

5

2

2

3

4

4

2

3

3

3

4

16

5

2

5

5

5

5

1

2

2

5

2

1

2

5

4

1

3

1

4

4

17

4

3

4

4

5

2

2

3

3

4

5

2

5

4

4

2

4

3

3

3

18

2

2

3

3

5

4

1

3

2

4

4

3

4

2

5

2

2

4

3

2

19

4

2

5

5

5

3

1

3

3

5

1

2

3

2

4

1

3

1

4

4

20

4

2

5

5

5

5

1

2

2

5

2

1

3

5

5

2

2

2

5

5

21

4

1

4

4

5

4

2

2

2

5

1

3

2

3

4

2

3

2

3

3

22

4

2

3

4

4

3

2

2

2

4

3

4

3

4

4

3

3

3

2

4

23

4

2

5

5

5

5

1

2

1

5

1

1

2

5

5

2

2

2

3

5

24

5

1

4

5

5

5

1

1

1

5

1

1

1

3

4

2

3

3

3

3

25

5

1

5

5

5

5

1

3

3

4

3

1

5

2

5

2

4

2

4

4

26

3

2

3

4

4

4

2

3

2

4

2

2

3

4

2

2

3

3

3

3


27

2

4

2

2

2

1

4

4

5

1

5

5

5

4

2

5

5

5

2

1

28

3

2

3

4

4

4

3

3

4

3

3

4

5

3

3

4

4

3

2

2

29

4

2

4

5

5

4

2

2

2

4

2

3

2

5

4

3

2

3

4

9

30

3

2

4

4

4

3

2

2

2

4

2

2

4

4

4

2

3

2

4

4

31

4

2

5

5

5

5

1

2

2

5

3

3

4

5

5

1

2

3

4

5

32

4

3

5

5

5

9

3

3

4

4

4

4

4

3

3

4

4

4

2

4

33

3

2

3

5

4

4

1

2

2

2

5

2

2

4

5

2

3

3

5

5

34

4

2

2

5

5

4

2

2

3

5

3

4

5

4

4

2

5

3

4

4

35

4

2

5

4

4

4

2

2

4

4

2

2

4

4

4

2

2

4

4

4

36

4

3

4

5

5

5

1

2

2

5

1

1

2

5

5

3

4

2

3

4

37

4

2

5

4

5

3

2

2

3

5

2

2

4

4

5

3

3

3

3

4

38

5

2

3

4

4

4

2

2

2

4

2

3

2

3

3

2

3

2

3

4

39

5

1

4

5

5

5

1

1

1

5

1

4

3

2

4

1

1

2

3

5

40

4

1

4

4

4

3

2

2

2

4

2

2

3

4

4

2

3

2

4

4

41

4

4

4

4

4

2

4

4

4

4

4

2

4

3

4

2

5

3

3

5

42

5

1

4

5

5

5

1

3

5

5

1

1

1

5

4

1

3

1

3

1

43

5

1

5

5

5

5

1

1

1

5

1

1

2

5

5

1

2

1

3

5

44

4

2

3

4

4

5

2

2

2

5

3

1

3

4

4

2

2

3

5

5

45

2

4

2

4

4

3

2

3

3

4

3

3

3

4

3

2

4

2

3

3

46

5

4

5

5

5

5

1

2

3

4

4

2

4

5

5

2

2

3

2

5

47

3

2

5

3

5

4

2

2

2

5

2

2

3

3

5

1

2

2

4

2

48

4

2

2

4

4

5

1

2

3

5

3

1

3

3

5

2

3

2

3

5

49

3

2

2

4

3

3

2

2

2

4

3

4

2

3

4

2

3

3

3

3

50

5

5

5

5

5

1

2

4

5

4

4

2

5

5

4

2

5

2

4

4


DATA FOR ONE-SAMPLE T-TEST: CHAPTER 5, SECTION 2

Score: 68.00 62.00 58.00 67.00 65.00 69.00 72.00 76.00 62.00 64.00 69.00 70.00 71.00 66.00 68.00 67.00 61.00 72.00 73.00 78.00

DATA FOR INDEPENDENT T-TEST: CHAPTER 5, SECTION 3
GROUP: 1 = mnemonic condition, 2 = no mnemonic condition

GROUP  SCORE
1      20
1      18
1      14
1      18
1      17
1      11
1      20
1      18
1      20
1      19
1      20
2      10
2      20
2      12
2      9
2      14
2      15
2      16
2      14
2      19
2      12


DATA FOR PAIRED T-TEST: CHAPTER 5, SECTION 4

LARGE SIZE DIFFERENCE  SMALL SIZE DIFFERENCE
936                    878
923                    1005
896                    1010
1241                   1365
1278                   1422
871                    1198
1360                   1576
733                    896
941                    1573
1077                   1261
1438                   2237
1099                   1325
1253                   1591
1930                   2742
1260                   1357
1271                   1963

DATA FOR MANN–WHITNEY U TEST: CHAPTER 5, SECTION 6
SEX: 1 = male, 2 = female

SEX  RATING    SEX  RATING
1    4         2    4
1    6         2    2
1    5         2    7
1    8         2    4
1    5         2    6
1    2         2    7
1    4         2    5
1    4         2    2
1    5         2    6
1    7         2    6
1    5         2    6
1    4         2    6
1    3         2    3
1    3         2    5
1    5         2    7
1    3         2    4
1    3         2    6
1    8         2    6
1    6         2    7
1    4         2    8


DATA FOR WILCOXON MATCHED-PAIRS SIGNED-RANKS TEST: CHAPTER 5, SECTION 7

E-FIT RATING 1   E-FIT RATING 2      E-FIT RATING 1   E-FIT RATING 2
(from memory)    (from photograph)   (from memory)    (from photograph)
3                6                   4                4
3                4                   4                2
3                5                   4                5
5                6                   3                3
2                3                   5                3
4                3                   4                3
5                3                   3                2
5                3                   3                3
4                3                   6                4
3                3                   3                3
2                3                   3                2
6                6                   3                3
5                3                   2                4
4                3                   2                5
3                3                   5                6
3                5                   3                5
4                5                   6                4
3                2                   2                3
4                5                   5                5
3                5                   4                2
3                2                   3                5
5                6                   3                2
3                4                   5                6
4                3                   4                2

DATA FOR PEARSON’S R CORRELATION: CHAPTER 6, SECTION 3 AGE (in years)

CFF

41

34.9

43

30.5

25

35.75

42

32.3

51

28.0

27

42.2

27

35.1

48

33.5

58

25.0

52

31.0

58

23.2

50

26.8

44

32.0

53

29.3

26

35.9

65

30.5

35

31.9

29

32.0

25

39.9

49

33.0


DATA FOR SPEARMAN’S RHO CORRELATION: CHAPTER 6, SECTION 4 CONFI­ DENCE

BELIEV­ ABILITY

ATTRACT­I­ VENESS

CONFI­ DENCE

BELIEV­ ABILITY

ATTRACT­I­ VENESS

4 4 4 4 4 4 4 5 4 6 4 4 4 6 5 4 2 6 3 3 2 5 5 5 4 5 5 4 4 3 5 5 1 5 5 5 5 4 4 4 5 4 5 4 5

4 3 6 6 4 4 3 5 4 5 6 5 4 5 5 5 4 4 5 3 5 5 6 4 5 5 5 4 4 5 6 5 5 5 5 6 5 5 5 5 5 5 5 4 6

2 3 4 4 3 4 2 4 3 4 4 5 3 4 4 3 3 4 3 3 5 4 4 4 4 4 4 5 4 4 4 4 3 4 4 4 5 4 4 4 4 4 3 2 5

4 6 3 3 4 4 6 3 4 5 3 5 5 4 4 6 5 5 6 5 6 5 2 3 3 4 4 5 5 3 2 6 4 5 4 4 4 4 4 4 5 4 5 6

5 5 5 5 4 3 6 5 4 5 1 6 5 5 4 1 5 5 6 5 6 5 4 4 4 4 5 5 5 4 3 5 5 4 5 5 4 4 5 5 4 6 5 5

3 2 4 4 3 3 4 2 3 4 3 4 4 4 4 1 4 4 5 3 5 2 4 4 4 4 4 4 3 4 5 5 3 4 4 4 4 4 4 5 4 4 3 4

DATA FOR CHI-SQUARE TEST: CHAPTER 7, SECTION 4

BACKGROUND 1 = Asian 2 = Caucasian 3 = other

MOTHER’S EMPLOYMENT 1 = full time 2 = none 3 = part time

SCHOOL 1= comprehensive 2 = private

TENDENCY TO ANOREXIA 1 = high 2 = low

2

1

1

1

2

1

1

1

2

1

1

1

2

3

1

1

2

3

2

1

2

3

2

1

2

2

2

1

2

2

2

1

2

2

2

1

2

1

2

1

2

1

2

1

2

1

2

1

2

3

2

1

2

3

2

1

2

3

2

1

2

3

2

1

2

2

2

1

2

2

2

1

2

2

2

1

2

2

2

1

2

3

2

1

2

3

2

1

1

3

2

1

1

1

2

1

1

1

2

1


1

1

2

1

3

2

2

1

3

3

2

1

3

2

2

1

3

2

2

1

3

1

2

1

3

1

2

1

3

1

2

1

3

1

2

1

3

1

2

1

3

2

2

1

3

2

2

1

3

2

2

1

2

2

1

2

2

1

1

2

2

1

1

2

2

3

1

2

2

3

1

2

2

2

1

2

2

2

1

2

2

2

1

2

2

2

1

2

2

3

1

2

2

3

1

2

2

3

1

2

2

3

1

2

2

3

1

2

2

3

1

2


2

3

1

2

2

3

1

2

2

2

1

2

2

2

1

2

2

2

1

2

2

3

1

2

2

3

1

2

2

2

1

2

2

2

1

2

2

2

1

2

2

2

1

2

2

3

1

2

2

1

1

2

2

1

1

2

2

1

2

2

2

1

2

2

2

1

2

2

2

1

2

2

2

1

2

2

2

1

2

2

2

1

2

2

1

1

2

2

1

1

2

2

3

1

2

2

3

1

2

2

3

1

2

2

3

1

2

2


DATA FOR MCNEMAR TEST: CHAPTER 7, SECTION 5

NORMAL HANDWRITING 1 = correct 2 = incorrect

HANDWRITING AS IF OPPOSITE SEX 1 = correct 2 = incorrect

NORMAL HANDWRITING 1 = correct 2 = incorrect

HANDWRITING AS IF OPPOSITE SEX 1 = correct 2 = incorrect

2

2

1

2

1

1

1

2

1

1

2

1

1

2

1

2

1

1

1

1

2

2

1

1

1

1

1

2

1

2

1

2

2

2

2

2

1

1

1

1

2

2

1

1

1

2

1

1

2

2

2

2

2

2

2

1

1

1

1

2

2

2

2

2

1

1

1

1

2

2

1

2

1

2

1

1

2

2

1

1

1

2

2

2

1

2

1

1

1

2

1

2

1

2

2

2

1

1

Glossary

This glossary provides an explanation of many of the terms used in the book. We have used italics to indicate terms that have their own entry in the glossary. For further information about statistical or experimental design concepts, we encourage you to consult a statistics text.

ANCOVA (analysis of covariance) An extension of analysis of variance (ANOVA) in which at least one covariate is included in the design. In ANCOVA, a covariate is a variable that has a statistical association with the dependent variable. ANCOVA allows us to estimate the impact of the independent variable/s after allowing for the influence of the covariates.

ANOVA (analysis of variance) An inferential statistical test that allows analysis of data from designs with more than two experimental conditions and/or with more than one factor (or independent variable). ANOVA is appropriate for use with parametric data. The absence, however, of nonparametric equivalents for two or more factor designs means that ANOVA is often used in such circumstances. Fortunately, it is said to be fairly robust to violations of the assumptions for parametric tests, provided that the cell sizes are equal.
Introduction to ANOVA
One-way between-subjects ANOVA (see also Kruskal–Wallis)
One-way within-subjects ANOVA (see also Friedman)
Factorial between-subjects ANOVA
Factorial within-subjects ANOVA
Factorial mixed ANOVA

Association See correlation and chi-square.

Asymptotic significance The p value calculated under the assumption that the sample is large and has an asymptotic distribution. For the vast majority of inferential statistical tests, the p value given by SPSS is the asymptotic significance. For many tests, SPSS now gives an option to also calculate the exact significance.

Bar chart A graph used to display summary statistics such as the mean (in the case of a scale variable) or the frequency (in the case of a nominal variable). See also chart.


Between-participants design See between-subjects design.

Between-subjects design An experimental design in which all factors are between-subjects factors; that is, each participant contributes data to only one level of a factor (each participant only experiences one condition). See also independent groups design and natural independent groups design.

Binary logistic regression See logistic regression.

Bivariate An analysis involving two variables, for example, correlation. See also univariate and multivariate.

Bivariate regression An inferential statistical procedure used to investigate a linear relationship between two variables. It can indicate the extent to which one variable can be explained or predicted by the other variable. See also regression.

Case A case is the unit of analysis. In psychology, this is normally the data deriving from a single participant. An exception is in a matched-subjects design, when the pair of matched participants form the case. Each case should be entered into a separate row in the SPSS Data window. In some research, the cases will not be people. For example, we may be interested in the average academic attainment for pupils from different schools. Here, the cases would be the schools.

Cell An element in the Data Editor window table, into which a value is entered. In ANOVA and chi-square, the combination of one level of one factor and one level of another factor. The cell size is the number of cases (normally participants) that fall into that cell.

Chart The name that SPSS gives to a graph. A wide range of graph types are available from the Graphs menu item. In addition, the output of some statistical procedures includes optional charts.

Chi-square An inferential statistical test that is used to analyse frequencies of nominal data. Chi-square allows comparison between the observed frequencies and the pattern that would be expected by chance. In psychology, the multidimensional chi-square is most often used. It can be thought of as a test of association between two variables, or as a test of difference between two independent groups. The chi-square distribution is used to assess significance for some other statistical tests, for example Friedman and Kruskal–Wallis.

Cleaning See data cleaning.

Compute An SPSS procedure that allows us to compute (calculate) a new variable based on one or more existing variables. The new variable is added to the data file.

Condition See level.

Confidence interval A pair of values that define a range within which we expect the population parameter, such as the mean, to fall. In the case of the 95% confidence interval, these values define the range within which there is a 95% probability that the parameter will fall.

Confounding variable Any uncontrolled variable that changes systematically across the levels of the independent variable or factor. If a study includes a confounding variable, it is not possible to determine whether the results of an experiment are due to the independent variable alone, to the confounding variable alone or to some interaction between those two variables.

Contingency table Displays the frequencies or counts for the levels of one or more variables.

Correlation Describes a linear relationship, or association, between two variables (measured on ordinal, interval or ratio level of measurement). Pearson’s r, Spearman’s rho and Kendall’s tau are inferential statistical tests of correlation. See also scatterplot.


Count An SPSS procedure that allows us to count the number of times a particular value occurs, in one or more variables.

Covariate See ANCOVA.

Criterion variable The variable that is explained by the predictor variable/s in a regression analysis. Some sources use the term ‘dependent variable’ or ‘outcome variable’ instead of criterion variable.

Data A set of values. A data set is typically made up of a number of variables. In quantitative research, data are numeric.

Data cleaning The process of checking the accuracy of a data file and correcting any errors that are found.

Data Editor window The SPSS window in which data are entered and edited. It has the appearance, but not the functionality, of a spreadsheet window.

Data file SPSS records a set of data in a data file. Data files in SPSS usually have the file extension ‘.sav’.

Data handling A range of operations performed on the data after they have been entered into SPSS. The different types of data handling are accessed through the menu items Data and Transform. See also compute, count, rank cases, recode, select cases, sort cases, split.

Data transformation In data transformation, each data value is replaced by a new value that is computed from the original value. A common data transformation is the logarithmic transformation.

Data View In SPSS, the Data Editor window has two display settings. The Data View shows the data table, with the variables in columns and the cases in rows. See also Variable View.

Degrees of freedom A value related to the number of participants who took part in an experiment (t-test, ANOVA) or to the number of factors (independent variables) in an experiment (ANOVA, chi-square). The degrees of freedom are required when using statistical tables of significance. Although SPSS gives the exact p value, degrees of freedom should still be reported as shown on the annotated output pages of those inferential statistical tests.

Dependent variable The variable that is measured in an experiment, and whose values are said to depend on those of the independent variable (or factor).

Descriptive statistics Procedures that allow you to describe data by summarising, displaying or illustrating them. Often used as a general term for summary descriptive statistics: measures of central tendency and measures of dispersion. Graphs (see chart) are descriptive statistics used to illustrate the data.

Dialogue box A box that appears on the computer screen, normally after you have clicked on a sequence of menu items. SPSS uses dialogue boxes to allow you to control the details of a statistical procedure. Each chapter includes illustrations showing the dialogue boxes and explaining how to complete them.

Dichotomous variable A variable that can only take one of two values, for example indicating the presence or absence of something.

Discriminant analysis An inferential statistical procedure used to determine which variables predict membership of (or discriminate between) different categories of another variable.

Effect size A measure of the magnitude of an effect. Can be expressed in the units used to measure the dependent variable, or in standardised units such as Cohen’s d.


Equality of variance See homogeneity of variance.

Error bar graph A graph in which the mean of each condition is plotted with a vertical bar that provides an indication of the magnitude of the error associated with the measurement of the mean, and hence an indication of how accurate the measurement is likely to be. In SPSS, we can use error bars to indicate several different measures of error, including standard error, standard deviation and confidence interval.

Exact significance The p value calculated on the assumption that our data are a small sample of the population and/or do not have an asymptotic distribution. For many tests, SPSS now gives an option to calculate the exact significance in addition to the default asymptotic significance. Fisher's Exact test, an alternative in chi-square, only gives the exact p value.

Experimental design Describes specific methods by which experiments are carried out and which are intended to prevent participant irrelevant variables from confounding the experiment; for example, within-subjects and between-subjects designs. One or more variables are systematically manipulated to see whether this impacts another variable. This allows researchers to establish cause and effect. Basic designs are described in Chapter 1, and other designs are described where relevant for particular statistical tests.

Experimental hypothesis See hypothesis.

Execute Part of SPSS syntax. The execute command instructs SPSS to perform the command specified in the preceding lines of syntax.

Factor In ANOVA: another term for independent variable. The term ‘factor’ is used particularly when discussing ANOVA statistical tests and designs, whereas the term ‘independent variable’ is used more often for two-sample designs. See also betweensubjects design and within-subjects design. In factor analysis: a dimension (or a psychological construct) that underlies several measured variables.

Factor analysis Statistical procedure used to identify whether a factor structure underlies correlations between a number of variables.

Factorial ANOVA A type of ANOVA (Analysis of Variance) that includes more than one independent variable (or factor).

Factorability Indicators of whether it is likely that there are any factors underlying a set of variables that are entered into a factor analysis.

F-ratio The statistic obtained in ANOVA calculations. It can be described as the variance due to manipulation of the factor divided by the variance due to error.

Frequency/ies The number of times a particular event or value occurs. Also an SPSS command available from the menu item Analyze, which will produce tables of frequencies showing the number of times a particular value occurs in each variable. See also bar chart.

Friedman A nonparametric equivalent of the one-way within-subjects ANOVA.

Graph See chart.

Graph Board Template Chooser A system that helps the user construct graphs by picking from a large collection of templates that can then be modified.

Grouping variable SPSS uses this term for the variable that specifies two or more groups of cases to be compared. For independent groups design, the grouping variable is the independent variable. For example, in an experiment to compare the performance of participants in either quiet or noisy conditions, the grouping variable will be the independent variable, noise level. In other cases, the grouping variable is an independent variable that is not manipulated by the experimenter. For example, if we wish to compare the performance of men and women, the independent variable will be sex. Grouping variables are nominal variables. See levels of measurement.


Help There are a number of different sources of help available within SPSS, including the Help menu and the Help button on most dialogue boxes.

Homogeneity of regression slopes An assumption of ANCOVA that the relationship between the dependent variable and the covariate is similar across all levels of the independent variable.

Homogeneity of variance Also referred to as equality of variance. One of the requirements for using parametric statistical tests: the variance of the data for one group or condition should be relatively similar to that of the other groups or conditions being analysed, even when they come from populations with different means.

Homogeneity of variance–covariance matrices An assumption of multivariate analysis of variance (MANOVA), which is an extension of the assumption of homogeneity of variance.

Hypothesis A prediction about the outcome of a study. The experimental or research hypothesis predicts that a difference between conditions will occur, that a relationship will be found or that an interaction will occur. The null hypothesis predicts that there will be no difference between conditions, that a relationship will not be found or that an interaction will not occur.

Independent groups design An experimental design in which each participant experiences only one level of the independent variable. See also natural independent groups design and between-subjects designs.

Independent variable A variable either that is systematically manipulated by the experimenter to have different values (true experiments), or the values of which are chosen by the experimenter (natural independent groups designs). Each value of the independent variable is called a level. See also factor.

Inferential statistical tests Procedures that allow you to draw inferences from the data collected. The outcome of an inferential statistical test estimates the probability of obtaining these data if the null hypothesis was true. If that probability is sufficiently small (p ≤ .05 in psychology), the null hypothesis is rejected; otherwise it is retained. Various inferential statistics are covered in this book.

Interaction An interaction is present between two variables when the impact of one variable depends on the level of the other variable. For example, a high dose of drug A may impair performance more than a low dose, but drugs A and B may interact such that the effect of drug A is reversed in the presence of drug B.

Interaction graph A line graph showing each level of two factors. The dependent variable is on the y-axis and the levels of one factor on the x-axis; the levels of a second factor are indicated by separate lines on the graph. See also chart.

Interval data Data that have been measured at the interval level. Also referred to as continuous data. These are data that are measured along a scale, where the differences (or intervals) between adjacent points on the scale are the same. SPSS refers to this data type as 'scale'.

Irrelevant variable Any variable other than the independent variable (or factor) and the dependent variable/s. Good experimental design should ensure that irrelevant variables are controlled so that they do not become confounding variables.

Kendall’s tau An inferential statistical test of correlation used to analyse nonparametric data.

Kruskal–Wallis A nonparametric equivalent of the one-way between-subjects ANOVA.

Level An independent variable or factor will have two or more levels. For example, the variable temperature may have two levels: hot and cold. If there is only one factor, then its levels are equivalent to the conditions of the experiment. When there are two or more factors, the conditions are defined by the particular combination of the levels of the factor (for example, male participants tested under hot conditions).

Levels of measurement The type of scale used to measure variables. Usually, four levels of measurement are described: nominal, ordinal, interval and ratio. The first two are classified as nonparametric levels of measurement, and the last two as parametric levels of measurement. SPSS uses the term scale to describe interval and ratio levels of measurement. See also measure.


Linear relationship A relationship between two variables is said to be linear if the data points follow a straight line (or close to a straight line) when plotted in a scatterplot.

Linear trend Describes the situation in which the mean values from a factor with three or more levels follow a straight line; this can be assessed for significance. See also quadratic trend.

Line graph A graph in which the points plotted are joined by a line. The points could each represent the mean of one sample, or they could represent the frequency of particular values in an SPSS variable. See also interaction graph and chart.

Logarithmic transformation A data transformation in which each point in a data set is replaced with its logarithmic (either natural or common) value. Logarithmic transformations are typically applied to a data set to reduce the influence of outliers, so the data more closely match the assumptions of a statistical procedure.

Logistic regression A statistical procedure that examines the impact of a number of predictor variables on a categorical (or nominal) dependent variable. SPSS distinguishes between binary logistic regression, in which the dependent variable has two categories, and multinomial logistic regression, in which the dependent variable can have more than two categories.

Log transformation See logarithmic transformation.

Mann–Whitney An inferential statistical test used to analyse nonparametric data from two-sample independent groups designs.

Matched-subjects design An experimental design in which each participant is matched closely with another participant, to give a participant pair. Each member of the pair is then allocated, by a random process, to different levels of the independent variable. Also called matched pairs design. It is a type of related design.

McNemar An inferential statistical test used to analyse nominal data obtained by measuring a dichotomous variable for a two-sample related design.

Mean (M) A measure of central tendency: the scores are summed and the total is divided by the number of scores.

Measure SPSS uses the term measure, in the Variable View, to refer to the level of measurement for a variable. See levels of measurement.

Measure of central tendency The average or typical score for a sample. See mean, median and mode.

Measure of dispersion A measure of variability within a sample. See range, standard deviation, standard error and variance.

Median A measure of central tendency: the scores are put into rank order and the middle score is the median.

Menu items In SPSS, the menu items are the words in the highlighted bar across the top of the window, which give the user access to drop-down lists of options.

Mixed design A design in which at least one factor is between-subjects and at least one is withinsubjects. This is an aspect of the terminology used to describe ANOVA designs.

Missing values A data set may be incomplete, for example, if some observations or measurements failed or if participants didn’t respond to some questions. It is important to distinguish these missing data points from valid data. Missing values are the values SPSS has reserved for each variable to indicate that a data point is missing. These missing values can either be specified by the user (user missing) or automatically set by SPSS (system missing).


Mode The most common value in a sample of scores: a measure of central tendency. If a sample of scores has more than one mode, SPSS shows the lowest value.

Multinomial logistic regression See logistic regression.

Multiple regression An inferential statistical procedure used to investigate linear relationships between three or more variables. Multiple regression can indicate the extent to which one variable can be explained or predicted by one or more of the other variables. See also regression.

Multivariate Analysis of data in which either two or more dependent variables are measured while one or more factors are manipulated, for example, multivariate analysis of variance; or three or more variables are measured, for example multiple regression. See also univariate, bivariate and dependent variable.

Multivariate analysis of variance An inferential statistical procedure used to analyse data collected using designs in which two or more dependent variables are measured, while one or more factors are manipulated. See also ANOVA.

N In statistics, the character ‘N’ (uppercase) is typically used to indicate the size of a population, while ‘n’ (lowercase) is used to indicate the sample size. SPSS uses ‘N’ to refer to the number of cases being analysed.

Natural independent groups design An independent groups design or between-subjects design in which the groups are chosen by the experimenter from pre-existing (natural) groups. For example: male and female; smoker, ex-smoker and non-smoker. The results of natural groups studies cannot be used to draw conclusions about cause-and-effect relationships, but only about differences or associations that might be a result of variables other than those used to define the natural groups. Studies using a natural independent groups design are sometimes called 'quasi-experimental studies'.

Nominal data Data collected at the nominal level of measurement (nominal just means ‘named’), also referred to as ‘categorical data’, where a value is simply a label and does not imply any quantity or order; for example, 1 = male and 2 = female.

Nonparametric A term used to denote:
1. nominal and ordinal levels of measurement
2. data that may be measured on ratio or interval scales but do not meet the other assumptions (homogeneity of variance and normality of distribution) underlying parametric statistical tests
3. the inferential statistical tests used to analyse nonparametric data.
Nonparametric statistics make use of rank order, either of scores or the differences between scores, unlike parametric statistical tests. Because nonparametric tests make no assumptions about normality of distribution in the data, they are sometimes called ‘distribution-free tests’.

Null hypothesis The opposite of your research hypothesis. It states that there is no effect or relationship between your variables in the population.

One-tailed test You use a one-tailed test when you have a directional hypothesis; that is, when your hypothesis predicts the direction of the relationship between your variables or conditions (i.e. if you predict that one group will score higher than another; or that scores will increase on one variable as they decrease on another).

Options Options in dialogue boxes can be set to request additional statistics or control the appearance of charts. Also, selecting Options from the Edit menu item allows you to set options that will be applied more generally.

Ordinal data Ordinal data are data measured on a scale that only indicates rank and not absolute value. An example of an ordinal scale is academic performance measured by class rank. See also levels of measurement.

Outlier A value recorded for a variable that is far from the majority of values.

Output window See Viewer window.

P value The p value is the probability of obtaining the observed results if the null hypothesis were true. In psychology, it is conventional to declare a result to be statistically significant if the p value is less than .05. See also significance level.

Parameters Characteristics of an entire population, such as the mean. Normally, we estimate a population parameter based on the statistics of a sample.

Parametric A term used to denote:
1. ratio and interval levels of measurement
2. data that are measured on one of those scales and also meet the two other requirements (homogeneity of variance and normality of distribution) for parametric statistical tests
3. the inferential statistical tests used to analyse parametric data.
Parametric statistics make use of the actual values of scores in each sample, unlike nonparametric statistical tests.

Paste In SPSS, the Paste button pastes the lines of syntax required to perform a command into the Syntax Editor window.

Participant A person who takes part in an experiment or research study. Previously, the word ‘subject’ was used, and still is in many statistics books. The word ‘subject’ is also often still used to describe ANOVA experimental designs and analyses (e.g. ‘2*2 within-subjects design’), as in this book.

Participant irrelevant variable Any irrelevant variable that is a property of the participants in an experiment. The term ‘subject irrelevant variables’ is sometimes still used.

Pearson’s r An inferential statistical test of correlation used to analyse parametric data.

Pivot table The name that SPSS gives to a table of results displayed in the Output viewer window. The appearance of a pivot table can be altered for the purposes of presentation in a report.

Planned comparisons A group of statistical tests used to compare particular conditions from ANOVA designs. Planned comparisons must be specified before data are collected. If used inappropriately, the Type 1 error rate will be inflated. See also unplanned comparisons.

Population The total set of all possible scores for a particular variable. See also N.

Power The power of an inferential statistical procedure is the probability that it will detect a genuine effect; that is, the probability of obtaining a statistically significant result when the null hypothesis is false.

Predictor variable The variable/s used in a regression analysis to explain another variable (the criterion variable). Some sources use the term independent variable/s instead of predictor variable/s.

Principal components analysis (PCA) PCA is a statistical technique that aims to identify patterns in a large data set based on the correlations between the variables. It aims to reduce a set of variables by looking for clusters of variables that appear to be related to one another (and therefore may be tapping into the same underlying factor). As such, PCA is primarily concerned with identifying variables that share variance with one another.

Print The content, or a selection, of all SPSS windows can be printed by selecting Print from the File menu item while the appropriate window is open.

Quadratic trend Describes the situation in which the mean values from a factor with three or more levels follow a curved line (for example, a U shape); this can be assessed for significance. See also linear trend.

Quantitative data A term used to describe numeric data measured on any of the four levels of measurement. Sometimes, though, the term ‘qualitative data’ is used to describe data measured on nominal scales.

Quantitative research In psychology, describes research that involves the analysis of numeric or quantitative data measured on any of the four levels of measurement. In contrast, qualitative research often involves the detailed analysis of non-numeric data, collected from small, non-random samples. SPSS is designed for use in quantitative research.

Range A measure of dispersion: the scores are put into rank order and the lowest score is subtracted from the highest score.

Rank cases An SPSS procedure that converts interval or ratio data into ordinal data. A new variable is added to the data file that gives the rank of one of the existing variables.
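As a minimal sketch of the syntax this procedure generates (the variable names score and rscore are hypothetical), ranking in ascending order might look like:

    * Rank score in ascending order and store the ranks in a new variable rscore.
    RANK VARIABLES=score (A)
      /RANK INTO rscore.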

Ratio data Ratio data have all the properties of interval data but also have an absolute zero. That is, a value of 0 means that there is none of the variable being measured, which gives extra meaning to the relationship between different measurements on a ratio scale. For example, because weight is a ratio variable, a weight of 6 g is twice as heavy as 3 g. By contrast, a temperature of 40°C is not twice as hot as 20°C: Celsius is an interval scale but not a ratio scale, because it has no absolute zero (0°C does not represent an absence of temperature).

Recode An SPSS procedure that allows the user to systematically change a particular value or range of values in a variable.
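As a minimal sketch (the variable names score and scoregroup and the cut-off values are hypothetical), recoding into a new variable might be written in syntax as:

    * Recode raw scores into a new two-category variable, leaving the original intact.
    RECODE score (0 thru 49=1) (50 thru 100=2) INTO scoregroup.
    EXECUTE.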

Regression If two variables have been measured, bivariate regression can be used to allow prediction of a participant’s score on one variable from their score on the other variable. If three or more variables have been measured, then multiple regression can be used to analyse the data. A regression line is the line drawn using the regression formula, and represents the ‘best fit’ to the data points in a scatterplot.

Related designs A term that includes repeated measures and matched-subjects designs. See also within-subjects designs.

Repeated measures design An experimental design in which each participant experiences every level of the independent variable. It is a type of related design.

Sample A subset of a population: a smaller set of scores that we hope is representative of the population. See also N.

Scale In SPSS, describes interval and ratio levels of measurement.

Scatterplot Sometimes called a ‘scattergram’ or ‘scattergraph’. Two variables are plotted, one on the x-axis and the other on the y-axis, and a point is plotted for each case. Used to display the results of a correlation analysis. See also chart.

Select cases SPSS procedure that allows the user to select a subsample of the cases on the data file. Cases can be selected on the basis of the values of a particular variable. Subsequent analyses will only be performed on the selected cases.
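As a simplified sketch of the syntax this procedure typically pastes (the variable name group and the value 1 are hypothetical), filtering rather than deleting cases might look like:

    * Keep only the cases where group equals 1; unselected cases are filtered out, not deleted.
    USE ALL.
    COMPUTE filter_$=(group = 1).
    FILTER BY filter_$.
    EXECUTE.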

Sig SPSS uses the shorthand ‘sig’ to indicate a p value. See also significance level.

Significance level The outcome of an inferential statistical test estimates the probability of obtaining the observed results assuming the null hypothesis is true. If that probability is equal to or less than the significance level, the null hypothesis is rejected; otherwise it is retained. By convention in psychology, the significance level is set to .05.

Situational irrelevant variable Any irrelevant variable that relates to the situation in which an experiment is carried out.

Skewed data If a data sample is not normally distributed but has a ‘tail’ of cases that are either particularly low or particularly high compared with most of the scores, the sample is said to be skewed. Such a sample does not meet the assumption of normality. See parametric.

Sort cases SPSS procedure by which the cases in the Data Editor window can be sorted into a desired order based on the values of one or more variables.
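As a minimal sketch (the variable names group and score are hypothetical), sorting by one variable in ascending order and another in descending order might be written as:

    * Sort by group (ascending), then by score (descending) within each group.
    SORT CASES BY group (A) score (D).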

Spearman’s rho An inferential statistical test of correlation used to analyse nonparametric data.

Sphericity An assumption when performing a within-subjects or mixed ANOVA that the variances of the differences between all possible pairs of levels of a within-subjects factor are equal.

Split SPSS procedure that allows the user to split the cases into two or more groups based on the values of a grouping variable; subsequent analyses will be performed separately for each group or the groups will be compared.
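As a minimal sketch (the variable name gender is hypothetical), splitting and later unsplitting a file might be written in syntax as:

    * The file must be sorted by the grouping variable before splitting.
    SORT CASES BY gender.
    SPLIT FILE LAYERED BY gender.
    * Subsequent analyses are now reported separately for each group.
    SPLIT FILE OFF.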

Standard deviation (SD) A measure of dispersion that gives an indication of the average difference from the mean. SPSS uses the formula that is designed to estimate the standard deviation of a population based on a sample (i.e. N – 1 is used as the denominator rather than N).
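In formula terms, the estimate SPSS reports is SD = √( Σ(x − M)² / (N − 1) ), where M is the sample mean and N the number of scores.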

Standard error (SE) A measure of dispersion that is equal to the standard deviation divided by the square root of N. The full name is ‘standard error of the mean’. (The ‘standard error of differences between means’ is obtained as part of calculations for the t-test; the ‘standard error of the estimate’ is used in regression.)
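In formula terms, SE = SD / √N; for example, a sample of 25 scores with SD = 10 has SE = 10 / √25 = 2.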

Statistics A general term for procedures for summarising or displaying data (descriptive statistics) and for analysing data (inferential statistical tests). Also, a characteristic of a sample, such as the mean, used to estimate the corresponding population parameter.

Syntax The program language that can be used to directly control SPSS. For more experienced users, controlling SPSS in this way can sometimes be a useful alternative to using the dialogue boxes. Syntax commands may appear in the Output window. Syntax commands can be pasted into and edited in the Syntax window.
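For illustration, pressing Paste in the Descriptives dialogue produces syntax of roughly this form (the variable names age and score are hypothetical):

    * Request basic descriptive statistics for two variables.
    DESCRIPTIVES VARIABLES=age score
      /STATISTICS=MEAN STDDEV MIN MAX.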

Syntax window The Syntax Editor window can be used to write syntax files to control an analysis. Beginners will not need to use the Syntax window.

System missing A missing value automatically assigned by SPSS. System missing values are shown as dots in the relevant data cells. See also user missing.

System variables Variables reserved by SPSS. System variables have names starting with the ‘$’ symbol.

t-test An inferential statistical test used to analyse parametric data. The one-sample t-test compares the mean of a single sample with a known population mean or fixed value, for example the average IQ score. For two-sample designs, there are two versions, both comparing two means: the independent t-test for independent groups designs, and the paired t-test for related designs.

Transformation See data transformation; logarithmic transformation.

Two-sample designs Experimental designs with two levels of one independent variable. See also independent groups design and related designs.

Two-tailed test You use a two-tailed test when your hypothesis predicts a difference between groups/conditions (or a relationship between variables), but it makes no reference to the direction of the effect. If your hypothesis is directional, then you should use a one-tailed test.

Type 1 error A Type 1 error occurs when the null hypothesis is rejected in error (i.e. when we accept that there is an effect, when in reality there isn’t). This is a false positive. If the significance level is set at .05 (the convention in psychology), we will make a Type 1 error, on average, on 1 in 20 of the occasions on which the null hypothesis is true. If the significance level is reduced, the chance of Type 1 errors will fall, but the Type 2 error rate will increase. Repeated, unplanned testing of a data set can also increase the Type 1 error rate. See also planned and unplanned comparisons.

Type 2 error The situation in which the null hypothesis is retained in error (i.e. incorrectly accepting the null hypothesis, when in reality there is a genuine effect in the population). This is a false negative. The Type 2 error rate depends partly on the significance level and partly on the power of the inferential statistical test.

Univariate An analysis involving just one dependent variable, for example, Mann–Whitney, paired t-test and ANOVA. See also bivariate and multivariate.

Unplanned comparisons A group of inferential statistical tests that may be used to make all the possible comparisons between conditions in ANOVA designs, because they correct for the increased chance of obtaining Type 1 errors that comes with making multiple comparisons. See also planned comparisons.

User missing A missing value assigned by the user. See also system missing.

Value label The label assigned to a particular value of a variable in an SPSS data file. Value labels are particularly useful in the case of nominal variables. Value labels are included in the SPSS output and will help you to interpret your analysis.

Variable In experimental design, anything that varies that can have different values at different times or for different cases. See also confounding variable, dependent variable, independent variable and irrelevant variable. A variable in SPSS is represented by a column in the Data Editor window.

Variable label Explanatory label that you can give to an SPSS variable when you define it. The variable label is printed in the SPSS output and often is shown in dialogue boxes.

Variable name Name given to an SPSS variable when it is defined. The variable name will appear at the top of the column in the Data Editor window, and may appear in the SPSS output.

Variable View In SPSS, the Data Editor window has two display settings. The Variable View shows details of the settings for each variable in the data file. See also Data View.

Variance A measure of dispersion equal to the square of the standard deviation. Homogeneity of variance between the samples is one of the requirements for using parametric statistical tests. SPSS will test for equality of variance (e.g. when performing the independent t-test). A rule of thumb is that the larger variance should be no greater than three times the smaller variance.
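In formula terms, variance = SD², so a sample with SD = 4 has a variance of 16. Applying the rule of thumb above, variances of 16 and 40 would be acceptable (40 / 16 = 2.5), whereas 16 and 60 would not (60 / 16 = 3.75).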

Viewer window Displays the output of statistical procedures in SPSS. Also referred to as the Output window.

Wilcoxon matched-pairs signed-ranks test An inferential statistical test used to analyse nonparametric data from two-sample related designs.

Within-participants design See within-subjects design.

Within-subjects design A design in which each participant experiences every level of every factor (or if there are matched subjects, in which each pair experiences every level of every factor). See also repeated measures design and related designs.

z-score A way of expressing a score in terms of its relationship to the mean and standard deviation of the sample. A z-score of –2.5 represents a number that is 2.5 standard deviations below the mean.
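In formula terms, z = (x − M) / SD, where M and SD are the mean and standard deviation of the sample. As a worked example (the figures are invented), if M = 100 and SD = 10, a score of 75 gives z = (75 − 100) / 10 = −2.5, matching the example above.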

Index

In this Index, numerals in italic format are page references for a glossary entry for the term.

A alternative hypothesis see hypothesis analysis of covariance (ANCOVA)  13, 326–350, 448 check for homogeneity of regression  330–332 check linear relationship between covariate and dependent variable  333–334 effect size  339 analysis of variance (ANOVA)  13, 463 effect size  222 factorial ANOVA  259–295 F-ratio 217–218 main effect and interactions  261 mixed ANOVA  282–295 one-way ANOVA  214–258 a posteriori comparisons see unplanned comparisons a priori comparisons see planned comparisons association see correlation, chi-square asymptotic significance  12, 200, 448

B bar chart  77, 196, 206–212, 448 beta  300–301, 310 between-participants design see between-subjects design between-subjects design  260, 464 see also independent groups design, independent t-test bivariate  161–164, 166–167, 464 bivariate regression  176–184, 464 residuals 178 boxplot  69–71, 77–84

C case  27, 40, 66, 464 see also missing cases categorical see nominal data categorical variables  4, 31 causal relationship  151, 164, 189–190, 298 cell  29, 31, 464 chart  464 chart builder  78–84 chart type see graph type

editing charts  82–84, 86, 157–160, 209, 212, 333 chart editor window  20, 82 see also bar chart, error bar graph, graph, graphboard, histogram, line graph, scattergram, stem and leaf plots chi-square distribution  188 chi-square test  188–205, 465 effect size  193 goodness of fit  188–189 multi-dimensional 189–205 variance explained  179 collinearity see multicollinearity column alignment  35 column format  34–35 component see factor analysis compute  105–108, 116–117, 425, 465 condition  465 confidence intervals  9–10, 13, 67, 129, 133, 135, 465 confirmatory factor analysis  379 confounding variable  327, 465 construct  378, 387, 408 construct validity  414 construct reliability  408–409 contingency table  190–195, 465 copying variable settings  38 correlation  18, 151–152, 161–168, 465 comparing correlations  171–174 effect size  164 multiple see multiple regression variance explained  164 see also Pearson’s r, scattergram, Spearman’s rs correlational designs  6–7 count 108–109, 466 covariate 326–328 criterion variable  175–178, 296–298, 466 Cronbach’s alpha  408–412 crosstabs see chi-square, McNemar test

D data  466 see also levels of measurement data cleaning  51, 71–76, 466

data editor window  25–27, 38–39, 466 data entry  38–41 data file  466 opening data file  43–45 printing data file  439 saving data file  42–43 data handling  88–125, 466 data reduction  378 data transformation  113–119, 466 data view  22, 25–28, 467 degrees of freedom (df)  467 dependent variable  467 descriptive see descriptive statistics descriptives command  53–57 descriptive statistics  9, 53–57, 467 see also Frequencies command, Explore command design correlated, related, uncorrelated, unrelated  8 see also between-subjects, independent groups, matched subjects, repeated measures, within-subjects dialogue box  21, 467 dichotomous variable  186–188, 205, 299, 467 dimensionality of scales and questionnaires  414–418 discriminant analysis  19, 351–376, 467

E effect size  14, 15, 467 see also individual statistical tests Enter method in discriminant analysis  354–363 in logistic regression  369–374 in multiple regression  302, 305–314 Epsilon 244 equality of variance  133, 468 error bar chart/graph  77, 135–137, 468 exact option in chi-square  199–201 in McNemar 207 exact significance  12–13, 468 in chi-square  202 in Mann–Whitney  145 in McNemar 207 exit from SPSS  23 experimental design  7–8, 468 experimental hypothesis see hypothesis exploratory factor analysis see factor analysis explore command  65–71 extract options  406–407

F factor  48, 468 in ANOVA between-subjects factors  261 within-subjects factors  261 in factor analysis  377–378, 380, 385 factor analysis  19, 377–408, 469 Bartlett’s test  394, 395 communality  386, 395, 399 component 386 correlation matrix  380–382 determinant 404–405 eigenvalue  386–387, 397, 398 extraction  386, 389, 395–400, 405–406 factorability  381–384, 394, 395, 400, 469 factor loading  387, 398, 401, 402, 407 inverse 404 KMO measure of sampling adequacy  383, 394, 395 number of factors extracted  405–406 principal component analysis  386, 389–403 Q factor analysis  408 R factor analysis  408 rotation  387–388, 391, 406 scree plot  387, 398 variance explained  386–387, 395–396, 408 factorial ANOVA mixed ANOVA  282–295 two-way between-subjects  262–271 two-way within-subjects  272–282 figure see chart Fisher’s Exact Test  198–200 F-ratio 469 see also ANOVA frequency/ies 469 Frequencies command  61–64 Friedman test  19, 255–258, 469

G general linear model (GLM)  221 goodness of fit in factor analysis  406 in logistic regression  370 see also chi-square graph, graphing  78–87 graph types  77–78 graphboard graphboard editor  87 graphboard template chooser  84–86, 469 see also chart grouping variable  48, 469

H help in SPSS  431–436, 470 homogeneity of regression  329, 470 homogeneity of variance  215, 470 histogram  62, 64, 77, 85–86 hypothesis 5, 470 one-tailed  5, 12 two-tailed  5, 12

I independent groups design  46, 48–49, 470 see also between-subjects design, chi-square (multidimensional), independent t-test, Mann–Whitney independent t-test 18, 130–133 confidence interval  133, 135 effect size  133–134 independent variable  468 inferential statistics  11–19, 470 interaction  260, 261, 471 interaction graph  265–266, 269, 471 interval data  3–4, 34–35, 110 irrelevant variable  471

K Kendall’s tau  165, 168, 471 Kruskal–Wallis test  19, 241–243, 471

L level of independent variable or factor  471 levels of measurement  3–5, 35, 187, 471 Levene’s test  133, 228, 232–234 linear relationship  154, 159, 177, 471 linear trend  251–252, 471 line chart/graph  77, 471 logarithmic (log) transformation  113–119, 472 logistic regression  19, 351–352, 368–376, 472 binary 368–376 multinomial 368

M Mann–Whitney U test  18, 142–146, 472 matched-subjects design/matched pairs design  126, 472 Mauchly’s test of sphericity  244–245, 249–250 McNemar test  205–207, 473 mean  9, 52, 55, 58, 62, 63, 67, 472 measure of central tendency  9, 52, 473 measure of dispersion  52, 473 median  52, 62, 63, 67, 68, 70, 473 menu items  22, 473

missing values  32–34, 101–102, 107–108, 125, 473 mixed ANOVA  282–295 mixed design  282, 473 mode  52, 62, 63, 474 multicollinearity  299, 342 multiple regression  19, 296–325, 474 beta (β)  300–301, 311, 314, 319, 320, 324 Enter method see standard method R 301 R², adjusted R²  301, 310, 314, 318, 320, 323, 324 regression coefficient  300–301, 314 sequential/hierarchical method  303–304, 315–321 simultaneous/standard method  302–303, 305–314 statistical methods  304, 321–324 stepwise method  304, 321–324 variance explained  302 multivariate  474 multivariate tests in univariate ANOVA  244 multivariate analysis of variance (MANOVA)  19, 339–350, 474 homogeneity of variance-covariance matrices  342, 470 effect size  348

N N  474 natural independent groups design  474 nominal data  4, 31, 35, 186–187, 474 nonparametric  16, 141, 165, 241, 255, 475 null hypothesis  5, 188, 223, 475

O observed power  15 one-tailed test  5, 12, 162 one-way ANOVA one-way between-subjects  224–234 one-way within-subjects  243–253 opening a data file  43–45 operationalisation 5–6 options 437–441, 475 ordinal data  4, 35–36, 475 orthogonal comparisons  237 orthogonal rotation see factor analysis (rotation) outlier  68, 159, 475 output window  20, 475 printing 439 incorporating output into other documents 441–443 see also viewer window

P p value  475 paired t-test  18, 126, 137–141 confidence interval  139 effect size  140–141 parameter(s) 9, 476 parameter estimation  183 parametric  16, 126–127, 133, 152, 215, 476 participant 8, 476 participant irrelevant variable  476 Pearson’s r  18, 152, 159, 161–164, 381, 476 effect size  164 pivot table  20, 59, 436, 476 planned comparisons  223–224, 237–240, 253–254, 337, 476 point estimate  9–10 point probability  202 population  10–12, 16, 477 post-hoc comparisons see unplanned comparisons power  14–15, 222, 310, 320, 327, 341, 477 practical equivalence  15 predictor variable  175–179, 296–304, 351–355 principal components analysis see factor analysis printing/print 439, 477 probability  11–12, 15, 218, 223, 352

Q quadratic trend  251, 477 qualitative 186 quantitative  3, 186 quantitative research  477 quasi-experimental designs  7–8 question 3–5 questionnaire  31, 38, 88, 120–125 see also dimensionality, reliability

R range  52, 55, 66, 72, 477 rank cases  110–113, 478 ratio data  3–5, 36, 110 recode  99–104, 124–125, 478 regression  478 see also bivariate regression, logistic regression, multiple regression regression line  152, 157–160, 167, 177 related design  8, 205, 478 see also matched-subjects design, repeated measures design reliability of scales and questionnaires  131, 133, 383, 414, 448–458 repeated measures design  46–50, 478

see also within-subjects design, McNemar test, paired t-test, Wilcoxon research hypothesis see hypothesis research question see question reversals 123

S sample  8–13, 15–19, 478 saving a data file  42 scale (level of measurement)  3–5, 16, 35–36, 76, 77, 85, 108, 155, 157, 160, 175, 478 scales and questionnaires  120–125, 377–419 see also dimensionality, reliability scatterplot  78, 151–176, 478 in analysis of covariance  333–334 select cases  23, 61–67, 304, 479 significance 11–15, 479 significance level  479 sig/sig value  12, 479 situational irrelevant variable  178, 479 skewed data  114–116, 125, 173, 305, 313, 388, 479 sort cases  89–90, 479 Spearman’s rs / rho  18, 152, 159, 162, 165–168, 479 Sphericity  244, 249–246, 277, 289, 394, 479 split (file)  23, 61, 89, 91–93, 480 split-half reliability  355, 408–409, 413 standard deviation  9, 52, 55, 57, 62, 64, 67, 480 standard error  311, 480 statistic  9, 14–15, 480 statistical inference  13 statistical method in discriminant analysis  354–355 in logistic regression  370–372 in multiple regression  304–305, 321, 324 statistical power see power statistical significance see significance stem and leaf plots  68, 70 stepwise method in discriminant analysis  354, 364, 366 in multiple regression  321, 323, 354 syntax  20, 60, 173, 420–434, 436–439, 446, 480 syntax files  20, 423–424, 426–427, 439, 447 syntax window  420–427, 431, 438, 480 system missing values  73–74, 101–102, 106, 125, 431, 480 system variables  430, 480

T toolbar  22, 23, 40, 49, 82, 423, 439 t-test  12, 15, 126–142, 186, 218, 298–299, 432–433, 440, 481 see also independent t-test, paired t-test

test/retest reliability  355, 408 two-tailed  5, 12, 127, 129–130, 140, 142, 146, 163, 164, 167 two-sample designs  481 two-way ANOVA see factorial ANOVA Type I error  12, 223, 340–341, 476, 481 Type II error  481

U univariate  225–227, 235, 235, 244–246, 263–269, 330–331, 335–339, 341–342, 347, 350, 357, 359, 363, 390, 392, 481 unplanned comparisons  215, 223–224, 234, 253–235, 238, 243, 253–255, 274, 335–336, 348, 482 unrelated design  8

V value label  31–34, 36, 38–42, 49, 50, 67, 72, 79, 85, 132, 203, 482 value labels button  40–42, 49 variable 3–18, 482 defining a variable in SPSS  25–38

variable label  30–31, 36–38, 46, 48, 54, 87, 105, 108, 123, 139, 147–148, 226, 274, 438, 482 variable name  28, 31, 36–38, 50, 112, 121, 138, 423–425, 427–428, 430, 437–438, 446–447, 482 variable type  29–30 variable view  22, 25–38, 72, 78, 78, 155, 482 variable width and decimal places  30, 48–49 variance  10, 11, 16, 52, 67, 126, 133, 164, 179, 199, 482 error variance  217–218, 221, 326–327, 405 viewer window  20, 53, 56–60, 66, 87, 160, 212, 333, 439, 483 see also output window

W Wilcoxon matched-pairs signed-ranks test  141, 146, 149, 483 within subjects design  219–221, 348, 483 see also matched subjects design, related designs, repeated measures design

Z z score  55, 57, 181, 428–429, 483