APPLIED MULTIVARIATE STATISTICS

FOR THE SOCIAL SCIENCES

Now in its 6th edition, the authoritative textbook Applied Multivariate Statistics for the Social Sciences continues to provide advanced students with a practical and conceptual understanding of statistical procedures through examples and data sets from actual research studies. With the added expertise of co-author Keenan Pituch (University of Texas at Austin), this 6th edition retains many key features of the previous editions, including its breadth and depth of coverage, a review chapter on matrix algebra, applied coverage of MANOVA, and an emphasis on statistical power. In this new edition, the authors continue to provide practical guidelines for checking the data, assessing assumptions, and interpreting and reporting the results to help students analyze data from their own research confidently and professionally.

Features new to this edition include:

• NEW chapter on Logistic Regression (Ch. 11) that helps readers understand and use this very flexible and widely used procedure
• NEW chapter on Multivariate Multilevel Modeling (Ch. 14) that helps readers understand the benefits of this "newer" procedure and how it can be used in conventional and multilevel settings
• NEW Example Results Section write-ups that illustrate how results should be presented in research papers and journal articles
• NEW coverage of missing data (Ch. 1) to help students understand and address problems associated with incomplete data
• Completely rewritten chapters on Exploratory Factor Analysis (Ch. 9), Hierarchical Linear Modeling (Ch. 13), and Structural Equation Modeling (Ch. 16) with an increased focus on understanding models and interpreting results
• NEW analysis summaries, inclusion of more syntax explanations, and a reduction in the number of SPSS/SAS dialogue boxes to guide students through data analysis in a more streamlined and direct approach
• Updated syntax to reflect the newest versions of IBM SPSS (21) and SAS (9.3)
• A free online resources site, www.routledge.com/9780415836661, with data sets and syntax from the text, additional data sets, and instructor's resources (including PowerPoint lecture slides for select chapters, a conversion guide for 5th edition adopters, and answers to exercises)

Ideal for advanced graduate-level courses in education, psychology, and other social sciences in which multivariate statistics, advanced statistics, or quantitative techniques are taught, this book also appeals to practicing researchers as a valuable reference. Prerequisites include a course covering factorial ANOVA and ANCOVA; however, a working knowledge of matrix algebra is not assumed.

Keenan Pituch is Associate Professor in the Quantitative Methods Area of the Department of Educational Psychology at the University of Texas at Austin.

James P. Stevens is Professor Emeritus at the University of Cincinnati.

APPLIED MULTIVARIATE

STATISTICS FOR THE

SOCIAL SCIENCES

Analyses with SAS and

IBM's SPSS

Sixth edition

Keenan A. Pituch and James P. Stevens

Sixth edition published 2016

by Routledge

711 Third Avenue, New York, NY 10017

and by Routledge

2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2016 Taylor & Francis

The right of Keenan A. Pituch and James P. Stevens to be identified as authors of this work has been asserted by them in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form

or by any electronic, mechanical, or other means, now known or hereafter invented, including

photocopying and recording, or in any information storage or retrieval system, without permission

in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are

used only for identification and explanation without intent to infringe.

Fifth edition published by Routledge 2009

Library of Congress Cataloging-in-Publication Data

Pituch, Keenan A.
  Applied multivariate statistics for the social sciences / Keenan A. Pituch and James P. Stevens. –– 6th edition.
    pages cm
  Previous edition by James P. Stevens.
  Includes index.
  1. Multivariate analysis. 2. Social sciences––Statistical methods. I. Stevens, James (James Paul) II. Title.
  QA278.S74 2015
  519.5'350243––dc23
  2015017536

ISBN 13: 978-0-415-83666-1 (pbk)

ISBN 13: 978-0-415-83665-4 (hbk)

ISBN 13: 978-1-315-81491-9 (ebk)

Typeset in Times New Roman

by Apex CoVantage, LLC

Commissioning Editor: Debra Riegert

Textbook Development Manager: Rebecca Pearce

Project Manager: Sheri Sipka

Production Editor: Alf Symons

Cover Design: Nigel Turner

Companion Website Manager: Natalya Dyer

Copyeditor: Apex CoVantage, LLC

Keenan would like to dedicate this:

To his Wife: Elizabeth and

To his Children: Joseph and Alexis

Jim would like to dedicate this:

To his Grandsons: Henry and Killian and

To his Granddaughter: Fallon


CONTENTS

Preface    xv

1. Introduction    1
   1.1 Introduction    1
   1.2 Type I Error, Type II Error, and Power    3
   1.3 Multiple Statistical Tests and the Probability of Spurious Results    6
   1.4 Statistical Significance Versus Practical Importance    10
   1.5 Outliers    12
   1.6 Missing Data    18
   1.7 Unit or Participant Nonresponse    31
   1.8 Research Examples for Some Analyses Considered in This Text    32
   1.9 The SAS and SPSS Statistical Packages    35
   1.10 SAS and SPSS Syntax    35
   1.11 SAS and SPSS Syntax and Data Sets on the Internet    36
   1.12 Some Issues Unique to Multivariate Analysis    36
   1.13 Data Collection and Integrity    37
   1.14 Internal and External Validity    39
   1.15 Conflict of Interest    40
   1.16 Summary    40
   1.17 Exercises    41

2. Matrix Algebra    44
   2.1 Introduction    44
   2.2 Addition, Subtraction, and Multiplication of a Matrix by a Scalar    47
   2.3 Obtaining the Matrix of Variances and Covariances    50
   2.4 Determinant of a Matrix    52
   2.5 Inverse of a Matrix    55
   2.6 SPSS Matrix Procedure    58
   2.7 SAS IML Procedure    60
   2.8 Summary    61
   2.9 Exercises    61

3. Multiple Regression for Prediction    65
   3.1 Introduction    65
   3.2 Simple Regression    67
   3.3 Multiple Regression for Two Predictors: Matrix Formulation    69
   3.4 Mathematical Maximization Nature of Least Squares Regression    72
   3.5 Breakdown of Sum of Squares and F Test for Multiple Correlation    73
   3.6 Relationship of Simple Correlations to Multiple Correlation    75
   3.7 Multicollinearity    75
   3.8 Model Selection    77
   3.9 Two Computer Examples    82
   3.10 Checking Assumptions for the Regression Model    93
   3.11 Model Validation    96
   3.12 Importance of the Order of the Predictors    101
   3.13 Other Important Issues    104
   3.14 Outliers and Influential Data Points    107
   3.15 Further Discussion of the Two Computer Examples    116
   3.16 Sample Size Determination for a Reliable Prediction Equation    121
   3.17 Other Types of Regression Analysis    124
   3.18 Multivariate Regression    124
   3.19 Summary    128
   3.20 Exercises    129

4. Two-Group Multivariate Analysis of Variance    142
   4.1 Introduction    142
   4.2 Four Statistical Reasons for Preferring a Multivariate Analysis    143
   4.3 The Multivariate Test Statistic as a Generalization of the Univariate t Test    144
   4.4 Numerical Calculations for a Two-Group Problem    146
   4.5 Three Post Hoc Procedures    150
   4.6 SAS and SPSS Control Lines for Sample Problem and Selected Output    152
   4.7 Multivariate Significance but No Univariate Significance    156
   4.8 Multivariate Regression Analysis for the Sample Problem    156
   4.9 Power Analysis    161
   4.10 Ways of Improving Power    163
   4.11 A Priori Power Estimation for a Two-Group MANOVA    165
   4.12 Summary    169
   4.13 Exercises    170

5. K-Group MANOVA: A Priori and Post Hoc Procedures    175
   5.1 Introduction    175
   5.2 Multivariate Regression Analysis for a Sample Problem    176
   5.3 Traditional Multivariate Analysis of Variance    177
   5.4 Multivariate Analysis of Variance for Sample Data    179
   5.5 Post Hoc Procedures    184
   5.6 The Tukey Procedure    187
   5.7 Planned Comparisons    193
   5.8 Test Statistics for Planned Comparisons    196
   5.9 Multivariate Planned Comparisons on SPSS MANOVA    198
   5.10 Correlated Contrasts    204
   5.11 Studies Using Multivariate Planned Comparisons    208
   5.12 Other Multivariate Test Statistics    210
   5.13 How Many Dependent Variables for a MANOVA?    211
   5.14 Power Analysis—A Priori Determination of Sample Size    211
   5.15 Summary    213
   5.16 Exercises    214

6. Assumptions in MANOVA    219
   6.1 Introduction    219
   6.2 ANOVA and MANOVA Assumptions    220
   6.3 Independence Assumption    220
   6.4 What Should Be Done With Correlated Observations?    222
   6.5 Normality Assumption    224
   6.6 Multivariate Normality    225
   6.7 Assessing the Normality Assumption    226
   6.8 Homogeneity of Variance Assumption    232
   6.9 Homogeneity of the Covariance Matrices    233
   6.10 Summary    240
   6.11 Complete Three-Group MANOVA Example    242
   6.12 Example Results Section for One-Way MANOVA    249
   6.13 Analysis Summary    250
   Appendix 6.1 Analyzing Correlated Observations    255
   Appendix 6.2 Multivariate Test Statistics for Unequal Covariance Matrices    259
   6.14 Exercises    262

7. Factorial ANOVA and MANOVA    265
   7.1 Introduction    265
   7.2 Advantages of a Two-Way Design    266
   7.3 Univariate Factorial Analysis    268
   7.4 Factorial Multivariate Analysis of Variance    277
   7.5 Weighting of the Cell Means    280
   7.6 Analysis Procedures for Two-Way MANOVA    280
   7.7 Factorial MANOVA With SeniorWISE Data    281
   7.8 Example Results Section for Factorial MANOVA With SeniorWISE Data    290
   7.9 Three-Way MANOVA    292
   7.10 Factorial Descriptive Discriminant Analysis    294
   7.11 Summary    298
   7.12 Exercises    299

8. Analysis of Covariance    301
   8.1 Introduction    301
   8.2 Purposes of ANCOVA    302
   8.3 Adjustment of Posttest Means and Reduction of Error Variance    303
   8.4 Choice of Covariates    307
   8.5 Assumptions in Analysis of Covariance    308
   8.6 Use of ANCOVA With Intact Groups    311
   8.7 Alternative Analyses for Pretest–Posttest Designs    312
   8.8 Error Reduction and Adjustment of Posttest Means for Several Covariates    314
   8.9 MANCOVA—Several Dependent Variables and Several Covariates    315
   8.10 Testing the Assumption of Homogeneous Hyperplanes on SPSS    316
   8.11 Effect Size Measures for Group Comparisons in MANCOVA/ANCOVA    317
   8.12 Two Computer Examples    318
   8.13 Note on Post Hoc Procedures    329
   8.14 Note on the Use of MVMM    330
   8.15 Example Results Section for MANCOVA    330
   8.16 Summary    332
   8.17 Analysis Summary    333
   8.18 Exercises    335

9. Exploratory Factor Analysis    339
   9.1 Introduction    339
   9.2 The Principal Components Method    340
   9.3 Criteria for Determining How Many Factors to Retain Using Principal Components Extraction    342
   9.4 Increasing Interpretability of Factors by Rotation    344
   9.5 What Coefficients Should Be Used for Interpretation?    346
   9.6 Sample Size and Reliable Factors    347
   9.7 Some Simple Factor Analyses Using Principal Components Extraction    347
   9.8 The Communality Issue    359
   9.9 The Factor Analysis Model    360
   9.10 Assumptions for Common Factor Analysis    362
   9.11 Determining How Many Factors Are Present With Principal Axis Factoring    364
   9.12 Exploratory Factor Analysis Example With Principal Axis Factoring    365
   9.13 Factor Scores    373
   9.14 Using SPSS in Factor Analysis    376
   9.15 Using SAS in Factor Analysis    378
   9.16 Exploratory and Confirmatory Factor Analysis    382
   9.17 Example Results Section for EFA of Reactions-to-Tests Scale    383
   9.18 Summary    385
   9.19 Exercises    387

10. Discriminant Analysis    391
   10.1 Introduction    391
   10.2 Descriptive Discriminant Analysis    392
   10.3 Dimension Reduction Analysis    393
   10.4 Interpreting the Discriminant Functions    395
   10.5 Minimum Sample Size    396
   10.6 Graphing the Groups in the Discriminant Plane    397
   10.7 Example With SeniorWISE Data    398
   10.8 National Merit Scholar Example    409
   10.9 Rotation of the Discriminant Functions    415
   10.10 Stepwise Discriminant Analysis    415
   10.11 The Classification Problem    416
   10.12 Linear Versus Quadratic Classification Rule    425
   10.13 Characteristics of a Good Classification Procedure    425
   10.14 Analysis Summary of Descriptive Discriminant Analysis    426
   10.15 Example Results Section for Discriminant Analysis of the National Merit Scholar Example    427
   10.16 Summary    429
   10.17 Exercises    429

11. Binary Logistic Regression    434
   11.1 Introduction    434
   11.2 The Research Example    435
   11.3 Problems With Linear Regression Analysis    436
   11.4 Transformations and the Odds Ratio With a Dichotomous Explanatory Variable    438
   11.5 The Logistic Regression Equation With a Single Dichotomous Explanatory Variable    442
   11.6 The Logistic Regression Equation With a Single Continuous Explanatory Variable    443
   11.7 Logistic Regression as a Generalized Linear Model    444
   11.8 Parameter Estimation    445
   11.9 Significance Test for the Entire Model and Sets of Variables    447
   11.10 McFadden's Pseudo R-Square for Strength of Association    448
   11.11 Significance Tests and Confidence Intervals for Single Variables    450
   11.12 Preliminary Analysis    451
   11.13 Residuals and Influence    451
   11.14 Assumptions    453
   11.15 Other Data Issues    457
   11.16 Classification    458
   11.17 Using SAS and SPSS for Multiple Logistic Regression    461
   11.18 Using SAS and SPSS to Implement the Box–Tidwell Procedure    463
   11.19 Example Results Section for Logistic Regression With Diabetes Prevention Study    465
   11.20 Analysis Summary    466
   11.21 Exercises    468

12. Repeated-Measures Analysis    471
   12.1 Introduction    471
   12.2 Single-Group Repeated Measures    475
   12.3 The Multivariate Test Statistic for Repeated Measures    477
   12.4 Assumptions in Repeated-Measures Analysis    480
   12.5 Computer Analysis of the Drug Data    482
   12.6 Post Hoc Procedures in Repeated-Measures Analysis    487
   12.7 Should We Use the Univariate or Multivariate Approach?    488
   12.8 One-Way Repeated Measures—A Trend Analysis    489
   12.9 Sample Size for Power = .80 in Single-Sample Case    494
   12.10 Multivariate Matched-Pairs Analysis    496
   12.11 One-Between and One-Within Design    497
   12.12 Post Hoc Procedures for the One-Between and One-Within Design    505
   12.13 One-Between and Two-Within Factors    511
   12.14 Two-Between and One-Within Factors    515
   12.15 Two-Between and Two-Within Factors    517
   12.16 Totally Within Designs    518
   12.17 Planned Comparisons in Repeated-Measures Designs    520
   12.18 Profile Analysis    524
   12.19 Doubly Multivariate Repeated-Measures Designs    528
   12.20 Summary    529
   12.21 Exercises    530

13. Hierarchical Linear Modeling    537
   13.1 Introduction    537
   13.2 Problems Using Single-Level Analyses of Multilevel Data    539
   13.3 Formulation of the Multilevel Model    541
   13.4 Two-Level Model—General Formation    541
   13.5 Example 1: Examining School Differences in Mathematics    545
   13.6 Centering Predictor Variables    563
   13.7 Sample Size    568
   13.8 Example 2: Evaluating the Efficacy of a Treatment    569
   13.9 Summary    576

14. Multivariate Multilevel Modeling    578
   14.1 Introduction    578
   14.2 Benefits of Conducting a Multivariate Multilevel Analysis    579
   14.3 Research Example    580
   14.4 Preparing a Data Set for MVMM Using SAS and SPSS    581
   14.5 Incorporating Multiple Outcomes in the Level-1 Model    584
   14.6 Example 1: Using SAS and SPSS to Conduct Two-Level Multivariate Analysis    585
   14.7 Example 2: Using SAS and SPSS to Conduct Three-Level Multivariate Analysis    595
   14.8 Summary    614
   14.9 SAS and SPSS Commands Used to Estimate All Models in the Chapter    615

15. Canonical Correlation    618
   15.1 Introduction    618
   15.2 The Nature of Canonical Correlation    619
   15.3 Significance Tests    620
   15.4 Interpreting the Canonical Variates    621
   15.5 Computer Example Using SAS CANCORR    623
   15.6 A Study That Used Canonical Correlation    625
   15.7 Using SAS for Canonical Correlation on Two Sets of Factor Scores    628
   15.8 The Redundancy Index of Stewart and Love    630
   15.9 Rotation of Canonical Variates    631
   15.10 Obtaining More Reliable Canonical Variates    632
   15.11 Summary    632
   15.12 Exercises    634

16. Structural Equation Modeling    639
   16.1 Introduction    639
   16.2 Notation, Terminology, and Software    639
   16.3 Causal Inference    642
   16.4 Fundamental Topics in SEM    643
   16.5 Three Principal SEM Techniques    663
   16.6 Observed Variable Path Analysis    663
   16.7 Observed Variable Path Analysis With the Mueller Study    668
   16.8 Confirmatory Factor Analysis    689
   16.9 CFA With Reactions-to-Tests Data    691
   16.10 Latent Variable Path Analysis    707
   16.11 Latent Variable Path Analysis With Exercise Behavior Study    711
   16.12 SEM Considerations    719
   16.13 Additional Models in SEM    724
   16.14 Final Thoughts    726
   Appendix 16.1 Abbreviated SAS Output for Final Observed Variable Path Model    734
   Appendix 16.2 Abbreviated SAS Output for the Final Latent Variable Path Model for Exercise Behavior    736

Appendix A: Statistical Tables    747
Appendix B: Obtaining Nonorthogonal Contrasts in Repeated Measures Designs    763
Detailed Answers    771
Index    785

PREFACE

The first five editions of this text have been received warmly, and we are grateful for

that.

This edition, like previous editions, is written for those who use, rather than develop,

advanced statistical methods. The focus is on conceptual understanding rather than

proving results. The narrative and many examples are there to promote understanding,

and a chapter on matrix algebra is included for those who need the extra help. Throughout the book, you will find output from SPSS (version 21) and SAS (version 9.3) with

interpretations. These interpretations are intended to demonstrate what analysis results

mean in the context of a research example and to help you interpret analysis results

properly. In addition to demonstrating how to use the statistical programs effectively,

our goal is to show you the importance of examining data, assessing statistical assumptions, and attending to sample size issues so that the results are generalizable. The

text also includes end-of-chapter exercises for many chapters, which are intended to promote better understanding of concepts and provide additional practice in conducting analyses and interpreting results. Detailed answers to the odd-numbered exercises are included in the back of the book so you can check your work.

NEW TO THIS EDITION

Many changes were made in this edition of the text, including a new lead author. In 2012, Dr. Keenan Pituch of the University of Texas at Austin, along with Dr. James Stevens, developed a plan to revise this edition and began work. The goals in revising the text were to provide more guidance on practical matters related to data analysis, to update the statistical procedures used, and to firmly align those procedures with findings from methodological research.

Key changes to this edition are:

• Inclusion of analysis summaries and example results sections
• Focus on just two software programs (SPSS version 21 and SAS version 9.3)
• New chapters on Binary Logistic Regression (Chapter 11) and Multivariate Multilevel Modeling (Chapter 14)
• Completely rewritten chapters on structural equation modeling (SEM), exploratory factor analysis, and hierarchical linear modeling

ANALYSIS SUMMARIES AND EXAMPLE RESULTS SECTIONS

The analysis summaries provide a convenient guide for the analysis activities we generally recommend you use when conducting data analysis. Of course, to carry out these

activities in a meaningful way, you have to understand the underlying statistical concepts—something that we continue to promote in this edition. The analysis summaries and example results sections will also help you tie together the analysis activities

involved for a given procedure and illustrate how you may effectively communicate

analysis results.

The analysis summaries and example results sections are provided for several techniques. Specifically, they are provided and applied to examples for the following procedures: one-way MANOVA (sections 6.11–6.13), two-way MANOVA (sections 7.6–7.8), one-way MANCOVA (example 8.4 and sections 8.15 and 8.17), exploratory factor analysis (sections 9.12, 9.17, and 9.18), discriminant analysis (sections 10.7.1, 10.7.2, 10.8, 10.14, and 10.15), and binary logistic regression (sections 11.19 and 11.20).

FOCUS ON SPSS AND SAS

Another change implemented throughout the text is to focus the use of software on two programs: SPSS (version 21) and SAS (version 9.3). Previous editions of this text, particularly for hierarchical linear modeling (HLM) and structural equation modeling applications, introduced additional programs for these purposes. In this edition, however, we use only SPSS and SAS, because these programs have improved capability to model data from more complex designs and because reviewers of this edition expressed a preference for maintaining software continuity throughout the text. This continuity essentially eliminates the need to learn (and/or teach) additional software programs (although we note there are many other excellent programs available). Note, though, that for the structural equation modeling chapter SAS is used exclusively, as SPSS requires users to obtain a separate add-on module (AMOS) for such analyses. In addition, SPSS and SAS syntax and output have been updated as needed throughout the text.

NEW CHAPTERS

Chapter 11 on binary logistic regression is new to this edition. We included the chapter on logistic regression, a technique that Alan Agresti has called the "most important model for categorical response data," because of the widespread use of this procedure in the social sciences and its ability to readily incorporate categorical and continuous predictors in modeling a categorical response. Logistic regression can be used for explanation and classification, with each of these uses illustrated in the chapter. With the inclusion of this new chapter, the former chapter on Categorical Data Analysis: The Log Linear Model has been moved to the website for this text.

Chapter 14 on multivariate multilevel modeling is another new chapter for this edition. This chapter is included because this modeling procedure has several advantages over the traditional MANOVA procedures that appear in Chapters 4–6 and provides another alternative for analyzing data from a design that has a grouping variable and several continuous outcomes (with discriminant analysis providing yet another alternative). The advantages of multivariate multilevel modeling are presented in Chapter 14, where we also show that the newer modeling procedure can replicate the results of traditional MANOVA. Given that we introduce this additional and flexible modeling procedure for examining multivariate group differences, we have eliminated the chapter on stepdown analysis from the text but make it available on the web.

REWRITTEN AND IMPROVED CHAPTERS

In addition, the chapter on structural equation modeling has been completely rewritten by Dr. Tiffany Whittaker of the University of Texas at Austin. Dr. Whittaker has taught a structural equation modeling course for many years and is an active methodological researcher in this area. In this chapter, she presents the three major applications of SEM: observed variable path analysis, confirmatory factor analysis, and latent variable path analysis. Note that the placement of confirmatory factor analysis in the SEM chapter is new to this edition; it was done to allow for more extensive coverage of the common factor model in Chapter 9 and because confirmatory factor analysis is inherently a SEM technique.

Chapter 9 is one of two chapters that have been extensively revised (along with Chapter 13). The major changes to Chapter 9 include the addition of parallel analysis to help determine the number of factors present, an updated section on sample size, an overall focus on the common factor model, a section (9.7) providing a student- and teacher-friendly introduction to factor analysis, a new section on creating factor scores, and the new example results and analysis summary sections. The research examples used here are also new for exploratory factor analysis, and recall that coverage of confirmatory factor analysis is now found in Chapter 16.

Major revisions have been made to Chapter 13, Hierarchical Linear Modeling. Section 13.1 has been revised to provide a discussion of fixed and random factors to help you recognize when hierarchical linear modeling may be needed. Section 13.2 uses a different example than the fifth edition and describes three types of widely used models. Given the use of SPSS and SAS for HLM in this edition and a new example used in section 13.5, the remainder of the chapter is essentially new material. Section 13.7 provides updated information on sample size, and we would especially like to draw your attention to section 13.6, which is a new section on the centering of predictor variables, a critical concern for this form of modeling.

KEY CHAPTER-BY-CHAPTER REVISIONS

There are also many new sections and important revisions in this edition. Here, we discuss the major changes by chapter.

• Chapter 1 (section 1.6) now includes a discussion of issues related to missing data. Included here are missing data mechanisms, missing data treatments, and illustrative analyses showing how you can select and implement a missing data treatment.
• The post hoc procedures have been revised for Chapters 4 and 5 to largely reflect prevailing practices in applied research.
• Chapter 6 adds more information on the use of skewness and kurtosis to evaluate the normality assumption, as well as the new example results and analysis summary sections for one-way MANOVA. In Chapter 6, we also introduce a new data set (which we call the SeniorWISE data set, modeled after an applied study) that appears in several chapters of the text.
• Chapter 7 has been retitled (somewhat), and in addition to including the example results and analysis summary sections for two-way MANOVA, it includes a new section on factorial descriptive discriminant analysis.
• Chapter 8, in addition to the example results and analysis summary sections, includes a new section on effect size measures for group comparisons in ANCOVA/MANCOVA, revised post hoc procedures for MANCOVA, and a new section that briefly describes a benefit of using multivariate multilevel modeling that is particularly relevant for MANCOVA.
• The introduction to Chapter 10 is revised, and recommendations are updated in section 10.4 for the use of coefficients to interpret discriminant functions. Section 10.7 includes a new research example for discriminant analysis, and section 10.7.5 is particularly important in that we provide recommendations for selecting among traditional MANOVA, discriminant analysis, and multivariate multilevel modeling procedures. This chapter includes the new example results and analysis summary sections for descriptive discriminant analysis and applies these procedures in sections 10.7 and 10.8.
• In Chapter 12, the major changes include an update of the post hoc procedures (section 12.6), a new section on one-way trend analysis (section 12.8), and a revised example and a more extensive discussion of post hoc procedures for the one-between and one-within subjects factors design (sections 12.11 and 12.12).


ONLINE RESOURCES FOR TEXT

The book's website, www.routledge.com/9780415836661, contains the data sets from the text, SPSS and SAS syntax from the text, and additional data sets (in SPSS and SAS) that can be used for assignments and extra practice. For instructors, the site hosts a conversion guide for users of the previous editions, PowerPoint lecture slides providing a detailed walk-through for key examples from the text, detailed answers for all exercises from the text, and downloadable PDFs of Chapters 10 and 14 from the 5th edition of the text for instructors who wish to continue assigning this content.

INTENDED AUDIENCE

As in previous editions, this book is intended for courses on multivariate statistics

found in psychology, social science, education, and business departments, but the

book also appeals to practicing researchers with little or no training in multivariate

methods.

A word on the prerequisites students should have before using this book: they should have a minimum of two quarter courses in statistics (covering factorial ANOVA and ANCOVA). A two-semester sequence of courses in statistics is preferable, as is prior exposure to multiple regression. The book does not assume a working knowledge of matrix algebra.

In closing, we hope you find that this edition is interesting to read, informative, and

provides useful guidance when you analyze data for your research projects.

ACKNOWLEDGMENTS

We wish to thank Dr. Tiffany Whittaker of the University of Texas at Austin for her valuable contribution to this edition. We would also like to thank Dr. Wanchen Chang, formerly a graduate student at the University of Texas at Austin and now a faculty member at Boise State University, for assisting us with the SPSS and SAS syntax that is included in Chapter 14. Dr. Pituch would also like to thank his major professor, Dr. Richard Tate, for his useful advice throughout the years and his exemplary approach to teaching statistics courses.

Also, we would like to say a big thanks to the many reviewers (anonymous and otherwise) who provided many helpful suggestions for this text: Debbie Hahs-Vaughn

(University of Central Florida), Dennis Jackson (University of Windsor), Karin

Schermelleh-Engel (Goethe University), Robert Triscari (Florida Gulf Coast University), Dale Berger (Claremont Graduate University–Claremont McKenna College),

Namok Choi (University of Louisville), Joseph Wu (City University of Hong Kong),

Jorge Tendeiro (Groningen University), Ralph Rippe (Leiden University), and Philip Schatz (Saint Joseph's University). We attended to these suggestions whenever possible.

Dr. Pituch also wishes to thank commissioning editor Debra Riegert and Dr. Stevens for inviting him to work on this edition and for their patience as he worked through the revisions. We would also like to thank development editor Rebecca Pearce for assisting us in many ways with this text, and the production staff at Routledge for bringing this edition to completion.

Chapter 1

INTRODUCTION

1.1 INTRODUCTION

Studies in the social sciences comparing two or more groups very often measure their

participants on several criterion variables. The following are some examples:

1. A researcher is comparing two methods of teaching second-grade reading. On a

posttest the researcher measures the participants on the following basic elements

related to reading: syllabication, blending, sound discrimination, reading rate, and

comprehension.

2. A social psychologist is testing the relative efficacy of three treatments on self-concept, and measures participants on academic, emotional, and social aspects of self-concept.

3. Two different approaches to stress management are being compared. The investigator employs a couple of paper-and-pencil measures of anxiety (say, the State-Trait Scale and the Subjective Stress Scale) and some physiological measures.

4. A researcher is comparing two types of counseling (Rogerian and Adlerian) on client satisfaction and client self-acceptance.

A major part of this book involves the statistical analysis of several groups on a set of

criterion measures simultaneously, that is, multivariate analysis of variance, the multivariate referring to the multiple dependent variables.

Cronbach and Snow (1977), writing on aptitude–treatment interaction research, echoed the need for multiple criterion measures:

Learning is multivariate, however. Within any one task a person’s performance

at a point in time can be represented by a set of scores describing aspects of the

performance ... even in laboratory research on rote learning, performance can

be assessed by multiple indices: errors, latencies and resistance to extinction, for


example. These are only moderately correlated, and do not necessarily develop at the same rate. In the paired-associates task, subskills have to be acquired: discriminating among and becoming familiar with the stimulus terms, being able to produce the response terms, and tying response to stimulus. If these attainments were separately measured, each would generate a learning curve, and there is no reason to think that the curves would echo each other. (p. 116)

There are three good reasons that the use of multiple criterion measures in a study

comparing treatments (such as teaching methods, counseling methods, types of reinforcement, diets, etc.) is very sensible:

1. Any worthwhile treatment will affect the participants in more than one way.

Hence, the problem for the investigator is to determine in which specific ways the

participants will be affected, and then find sensitive measurement techniques for

those variables.

2. Through the use of multiple criterion measures we can obtain a more complete and

detailed description of the phenomenon under investigation, whether it is teacher

method effectiveness, counselor effectiveness, diet effectiveness, stress management technique effectiveness, and so on.

3. Treatments can be expensive to implement, while the cost of obtaining data on

several dependent variables is relatively small and maximizes information gain.

Because we define a multivariate study as one with several dependent variables, multiple regression (where there is only one dependent variable) and principal components

analysis would not be considered multivariate techniques. However, our distinction is

more semantic than substantive. Therefore, because regression and component analysis are so important and frequently used in social science research, we include them in this text.

We have five major objectives for the remainder of this chapter:

1. To review some basic concepts (e.g., type I error and power) and some issues associated with univariate analysis that are equally important in multivariate analysis.

2. To discuss the importance of identifying outliers, that is, points that split off from

the rest of the data, and deciding what to do about them. We give some examples to show the considerable impact outliers can have on the results in univariate

analysis.

3. To discuss the issue of missing data and describe some recommended missing data

treatments.

4. To give research examples of some of the multivariate analyses to be covered later

in the text and to indicate how these analyses involve generalizations of what the

student has previously learned.

5. To briefly introduce the Statistical Analysis System (SAS) and the IBM Statistical

Package for the Social Sciences (SPSS), whose outputs are discussed throughout

the text.


1.2 TYPE I ERROR, TYPE II ERROR, AND POWER

Suppose we have randomly assigned 15 participants to a treatment group and another

15 participants to a control group, and we are comparing them on a single measure of

task performance (a univariate study, because there is a single dependent variable).

You may recall that the t test for independent samples is appropriate here. We wish to

determine whether the difference in the sample means is large enough, given sampling

error, to suggest that the underlying population means are different. Because the sample means estimate the population means, they will generally be in error (i.e., they will

not hit the population values right “on the nose”), and this is called sampling error. We

wish to test the null hypothesis (H0) that the population means are equal:

H0: μ1 = μ2

It is called the null hypothesis because saying the population means are equal is equivalent to saying that the difference in the means is 0, that is, μ1 − μ2 = 0, or that the difference is null.

Now, statisticians have determined that, given the assumptions of the procedure are

satisfied, if we had populations with equal means and drew samples of size 15 repeatedly and computed a t statistic each time, then 95% of the time we would obtain t

values in the range −2.048 to 2.048. The so-called sampling distribution of t under H0

would look like this:

[Figure: sampling distribution of t under H0, with 95% of the t values falling between −2.048 and 2.048]

This sampling distribution is extremely important, for it gives us a frame of reference

for judging what is a large value of t. Thus, if our t value was 2.56, it would be very

plausible to reject the H0, since obtaining such a large t value is very unlikely when

H0 is true. Note, however, that if we do so there is a chance we have made an error,

because it is possible (although very improbable) to obtain such a large value for t,

even when the population means are equal. In practice, one must decide how much of

a risk of making this type of error (called a type I error) one wishes to take. Of course, one would want that risk to be small, and many have decided a 5% risk is small. This is formalized in hypothesis testing by saying that we set our level of significance (α) at the .05 level. That is, we are willing to take a 5% chance of making a type I error. In other words, type I error (level of significance) is the probability of rejecting the null hypothesis when it is true.
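The 5% type I error rate can be checked empirically. The short simulation below is our own sketch, not part of the text: it repeatedly draws two samples of 15 from the same normal population (so H0 is true), computes the independent-samples t statistic, and counts how often |t| exceeds the critical value 2.048.

```python
import math
import random

def t_statistic(a, b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)            # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

random.seed(1)
reps = 10_000
rejections = 0
for _ in range(reps):
    g1 = [random.gauss(50, 10) for _ in range(15)]   # same population:
    g2 = [random.gauss(50, 10) for _ in range(15)]   # H0 is true
    if abs(t_statistic(g1, g2)) > 2.048:             # critical value, df = 28
        rejections += 1

print(rejections / reps)   # should be close to .05
```

The population mean (50) and standard deviation (10) are arbitrary choices; the rejection rate depends only on the critical value and sample sizes.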


Recall that the formula for degrees of freedom for the t test is (n1 + n2 − 2); hence,

for this problem df = 28. If we had set α = .05, then reference to Appendix A.2 of this

book shows that the critical values are −2.048 and 2.048. They are called critical values because they are critical to the decision we will make on H0. These critical values

define critical regions in the sampling distribution. If the value of t falls in the critical

region we reject H0; otherwise we fail to reject:

[Figure: t distribution under H0 for df = 28, with rejection regions beyond the critical values −2.048 and 2.048]

Type I error is equivalent to saying the groups differ when in fact they do not. The α level set by the investigator is a subjective decision, but is usually set at .05 or .01 by most researchers. There are situations, however, when it makes sense to use α levels other than .05 or .01. For example, if making a type I error will not have serious substantive consequences, or if sample size is small, setting α = .10 or .15 is quite reasonable. Why this is reasonable for small sample size will be made clear shortly.

On the other hand, suppose we are in a medical situation where the null hypothesis is equivalent to saying a drug is unsafe, and the alternative is that the drug is safe. Here, making a type I error could be quite serious, for we would be declaring the drug safe when it is not safe. This could cause some people to be permanently damaged or perhaps even killed. In this case it would make sense to use a very small α,

perhaps .001.

Another type of error that can be made in conducting a statistical test is called a type II

error. The type II error rate, denoted by β, is the probability of accepting H0 when it is

false. Thus, a type II error, in this case, is saying the groups don’t differ when they do.

Now, not only can either type of error occur, but in addition, they are inversely related

(when other factors, e.g., sample size and effect size, affecting these probabilities are

held constant). Thus, holding these factors constant, as we tighten control over type I error, type

II error increases. This is illustrated here for a two-group problem with 30 participants

per group where the population effect size d (defined later) is .5:

α      β      1 − β
.10    .37    .63
.05    .52    .48
.01    .78    .22


Notice that, with sample and effect size held constant, as we exert more stringent control over α (from .10 to .01), the type II error rate increases fairly sharply (from .37 to

.78). Therefore, the problem for the experimental planner is achieving an appropriate

balance between the two types of errors. While we do not intend to minimize the seriousness of making a type I error, we hope to convince you throughout the course of

this text that more attention should be paid to type II error. Now, the quantity in the

last column of the preceding table (1 − β) is the power of a statistical test, which is the

probability of rejecting the null hypothesis when it is false. Thus, power is the probability of making a correct decision, or of saying the groups differ when in fact they do.

Notice from the table that as the α level decreases, power also decreases (given that

effect and sample size are held constant). The diagram in Figure 1.1 should help to

make clear why this happens.

The power of a statistical test is dependent on three factors:

1. The α level set by the experimenter

2. Sample size

3. Effect size—How much of a difference the treatments make, or the extent to which

the groups differ in the population on the dependent variable(s).

Figure 1.1 has already demonstrated that power is directly dependent on the α level.

Power is heavily dependent on sample size. Consider a two-tailed test at the .05 level for the t test for independent samples. An effect size for the t test, as defined by Cohen (1988), is estimated as d̂ = (x̄1 − x̄2)/s, where s is the standard deviation. That is, effect size expresses the difference between the means in standard deviation units. Thus, if x̄1 = 6 and x̄2 = 3 and s = 6, then d̂ = (6 − 3)/6 = .5, or the means differ by 1/2 standard deviation. Suppose for the preceding problem we have an effect size of .5 standard deviations. Holding α (.05) and effect size constant, power increases dramatically as sample size increases (power values from Cohen, 1988):

n (participants per group)    Power
10                            .18
20                            .33
50                            .70
100                           .94

As the table suggests, given this effect size and α, when sample size is large (say, 100

or more participants per group), power is not an issue. In general, it is an issue when

one is conducting a study where group sizes will be small (n ≤ 20), or when one is

evaluating a completed study that had small group size. Then, it is imperative to be

very sensitive to the possibility of poor power (or conversely, a high type II error rate).
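The power values tabled in this section can be reproduced to a close approximation with a normal approximation to the two-sample t test. The sketch below is our own illustration: Cohen's tables use the exact noncentral t distribution, so the approximate values differ slightly in the second decimal, especially at small α.

```python
from statistics import NormalDist

def approx_power(d, n, alpha):
    """Approximate power of a two-tailed two-sample t test (normal
    approximation): delta = d * sqrt(n/2) is the noncentrality and z
    is the two-tailed critical value from the standard normal."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    delta = d * (n / 2) ** 0.5
    return nd.cdf(delta - z) + nd.cdf(-delta - z)

d = (6 - 3) / 6          # Cohen's d-hat from the worked example: .5

# n = 30 per group: power falls as alpha is made more stringent
for a in (.10, .05, .01):
    print(a, round(approx_power(d, 30, a), 2))

# alpha = .05: power climbs sharply with group size
for n in (10, 20, 50, 100):
    print(n, round(approx_power(d, n, .05), 2))
```

Compare the printed values with the two tables above: the approximation tracks Cohen's exact values to within a few hundredths.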

Thus, in studies with small group size, it can make sense to test at a more liberal level (.10 or .15) to improve power, because (as mentioned earlier) power is directly related to the α level. We explore the power issue in considerably more detail in Chapter 4.

Figure 1.1: Graph of the F distribution under H0 and under H0 false, showing the direct relationship between type I error and power. Since type I error is the probability of rejecting H0 when true, it is the area under the F distribution (H0 true) in the critical region; power is the probability of rejecting H0 when false, so it is the area under the F distribution (H0 false) in the critical region. [The figure marks the rejection cutoffs for α = .05 and α = .01, with the corresponding power and type I error areas.]

1.3 MULTIPLE STATISTICAL TESTS AND THE PROBABILITY OF SPURIOUS RESULTS

If a researcher sets α = .05 in conducting a single statistical test (say, a t test), then,

if statistical assumptions associated with the procedure are satisfied, the probability

of rejecting falsely (a spurious result) is under control. Now consider a five-group

problem in which the researcher wishes to determine whether the groups differ significantly on some dependent variable. You may recall from a previous statistics course

that a one-way analysis of variance (ANOVA) is appropriate here. But suppose our

researcher is unaware of ANOVA and decides to do 10 t tests, each at the .05 level,

comparing each pair of groups. The probability of a false rejection is no longer under

control for the set of 10 t tests. We define the overall α for a set of tests as the probability of at least one false rejection when the null hypothesis is true. There is an important

inequality called the Bonferroni inequality, which gives an upper bound on overall α:

Overall α ≤ .05 + .05 + ⋯ + .05 = 10(.05) = .50


Thus, the probability of a few false rejections here could easily be 30 or 35%, that is,

much too high.

In general then, if we are testing k hypotheses at the α1, α2, …, αk levels, the Bonferroni

inequality guarantees that

Overall α ≤ α1 + α2 + ⋯ + αk

If the hypotheses are each tested at the same alpha level, say α′, then the Bonferroni

upper bound becomes

Overall α ≤ kα′

This Bonferroni upper bound is conservative, and how to obtain a sharper (tighter)

upper bound is discussed next.

If the tests are independent, then an exact calculation for overall α is available. First,

(1 − α1) is the probability of no type I error for the first comparison. Similarly, (1 − α2) is the probability of no type I error for the second, (1 − α3) the probability of no type I error for the third, and so on. If the tests are independent, then we can multiply probabilities. Therefore, (1 − α1)(1 − α2) ⋯ (1 − αk) is the probability of no type I errors for all k tests. Thus,

Overall α = 1 − (1 − α1)(1 − α2) ⋯ (1 − αk)

is the probability of at least one type I error. If the tests are not independent, then overall α will still be less than given here, although it is very difficult to calculate. If we set

the alpha levels equal, say to α′ for each test, then this expression becomes

Overall α = 1 − (1 − α′)(1 − α′) ⋯ (1 − α′) = 1 − (1 − α′)^k

This expression, 1 − (1 − α′)^k, is approximately equal to kα′ for small α′. The next table compares the two for α′ = .05, .01, and .001 for numbers of tests ranging from 5 to 100.

                α′ = .05                 α′ = .01                 α′ = .001
No. of tests    1 − (1 − α′)^k   kα′     1 − (1 − α′)^k   kα′     1 − (1 − α′)^k   kα′
5               .226             .25     .049             .05     .00499           .005
10              .401             .50     .096             .10     .00990           .010
15              .537             .75     .140             .15     .0149            .015
30              .785            1.50     .260             .30     .0296            .030
50              .923            2.50     .395             .50     .0488            .050
100             .994            5.00     .634            1.00     .0952            .100

First, the numbers greater than 1 in the table don't represent probabilities, because a probability can't be greater than 1. Second, note that if we are testing each of a large number of hypotheses at the .001 level, the difference between 1 − (1 − α′)^k and the Bonferroni upper bound of kα′ is very small and of no practical consequence. The differences between 1 − (1 − α′)^k and kα′ when testing at α′ = .01 are also small for up to about 30 tests; for more than about 30 tests, 1 − (1 − α′)^k provides a tighter bound and should be used. When testing at the α′ = .05 level, kα′ is adequate for up to about 10 tests, but beyond that 1 − (1 − α′)^k is much tighter and should be used.
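Both quantities in the table are one-line computations; a quick sketch (our own, not from the text):

```python
def overall_alpha_exact(alpha, k):
    """Probability of at least one type I error in k independent tests,
    each conducted at level alpha."""
    return 1 - (1 - alpha) ** k

def bonferroni_bound(alpha, k):
    """Bonferroni upper bound; can exceed 1, so it is not a probability."""
    return k * alpha

# 10 t tests at the .05 level, as in the five-group example
print(round(overall_alpha_exact(.05, 10), 3))   # 0.401
print(round(bonferroni_bound(.05, 10), 3))      # 0.5
```

Evaluating these functions for k = 5, 10, 15, 30, 50, 100 reproduces the table above.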

You may have been alert to the possibility of spurious results in the preceding example with multiple t tests, because this problem is pointed out in texts on intermediate

statistical methods. Another frequently occurring example of multiple t tests where

overall α gets completely out of control is in comparing two groups on each item of a

scale (test); for example, comparing males and females on each of 30 items, doing 30

t tests, each at the .05 level.

Multiple statistical tests also arise in various other contexts in which you may not readily recognize that the same problem of spurious results exists. In addition, the fact that

the researcher may be using a more sophisticated design or more complex statistical

tests doesn’t mitigate the problem.

As our first illustration, consider a researcher who runs a four-way ANOVA (A × B ×

C × D). Then 15 statistical tests are being done, one for each effect in the design: A, B, C,

and D main effects, and AB, AC, AD, BC, BD, CD, ABC, ABD, ACD, BCD, and

ABCD interactions. If each of these effects is tested at the .05 level, then all we know from the Bonferroni inequality is that overall α ≤ 15(.05) = .75, which is not very reassuring. Hence, two or three significant results from such a study (if they were not predicted ahead of time) could very well be type I errors, that is, spurious results.

Let us take another common example. Suppose an investigator has a two-way ANOVA

design (A × B) with seven dependent variables. Then, there are three effects being

tested for significance: A main effect, B main effect, and the A × B interaction. The

investigator does separate two-way ANOVAs for each dependent variable. Therefore,

the investigator has done a total of 21 statistical tests, and if each of them was conducted at the .05 level, then the overall α has gotten completely out of control. This

type of thing is done very frequently in the literature, and you should be aware of it in

interpreting the results of such studies. Little faith should be placed in scattered significant results from these studies.


A third example comes from survey research, where investigators are often interested

in relating demographic characteristics of the participants (sex, age, religion, socioeconomic status, etc.) to responses to items on a questionnaire. A statistical test for relating

each demographic characteristic to responses on each item is a two-way χ2. Often in

such studies 20 or 30 (or many more) two-way χ2 tests are run (and it is so easy to run

them on SPSS). The investigators often seem to be able to explain the frequent small

number of significant results perfectly, although seldom have the significant results

been predicted a priori.

A fourth fairly common example of multiple statistical tests is in examining the elements of a correlation matrix for significance. Suppose there were 10 variables in one

set being related to 15 variables in another set. In this case, there are 150 between

correlations, and if each of these is tested for significance at the .05 level, then

150(.05) = 7.5, or about eight significant results could be expected by chance. Thus, if 10 or 12 of the between correlations are significant, most of them could be chance results, and it is very difficult to separate out the chance effects from the real associations. A way of circumventing this problem is to simply test each correlation for significance at a much more stringent level, say α = .001. Then, by the Bonferroni inequality, overall α ≤ 150(.001) = .15. Naturally, this will cause a power problem (unless n is

large), and only those associations that are quite strong will be declared significant. Of

course, one could argue that it is only such strong associations that may be of practical

importance anyway.

A fifth case of multiple statistical tests occurs when comparing the results of many

studies in a given content area. Suppose, for example, that 20 studies have been

reviewed in the area of programmed instruction and its effect on math achievement

in the elementary grades, and that only five studies show significance. Since at least

20 statistical tests were done (there would be more if there were more than a single

criterion variable in some of the studies), most of these significant results could be

spurious, that is, type I errors.

A sixth case of multiple statistical tests occurs when an investigator(s) selects

a small set of dependent variables from a much larger set (you don’t know this

has been done—this is an example of selection bias). The much smaller set is

chosen because all of the significance occurs here. This is particularly insidious.

Let us illustrate. Suppose the investigator has a three-way design and originally 15 dependent variables. Then 105 = 15 × 7 tests have been done (seven effects for each of the 15 variables). If each test is done at the .05 level, then the Bonferroni inequality guarantees only that overall alpha is less than 105(.05) = 5.25. So, if seven significant results are found, the Bonferroni procedure suggests that most (or all) of the results could be spurious. If all the significance is confined to three of the variables, and those are the variables selected (without your knowing this), then overall alpha ≤ 21(.05) = 1.05, and this conveys a very different impression. Now, the conclusion is that perhaps a few of the significant results are spurious.


1.4 STATISTICAL SIGNIFICANCE VERSUS PRACTICAL IMPORTANCE

You have probably been exposed to the statistical significance versus practical importance issue in a previous course in statistics, but it is sufficiently important to have us

review it here. Recall from our earlier discussion of power (probability of rejecting the

null hypothesis when it is false) that power is heavily dependent on sample size. Thus,

given very large sample size (say, group sizes > 200), most effects will be declared

statistically significant at the .05 level. If significance is found, often researchers seek

to determine whether the difference in means is large enough to be of practical importance. There are several ways of getting at practical importance; among them are

1. Confidence intervals

2. Effect size measures

3. Measures of association (variance accounted for).

Suppose you are comparing two teaching methods and decide ahead of time that the

achievement for one method must be at least 5 points higher on average for practical

importance. The results are significant, but the 95% confidence interval for the difference in the population means is (1.61, 9.45). You do not have practical importance,

because, although the difference could be as large as 9 or slightly more, it could also

be less than 2.

You can calculate an effect size measure and see if the effect is large relative to what

others have found in the same area of research. As a simple example, recall that the

Cohen effect size measure for two groups is d = (x̄1 − x̄2)/s; that is, it indicates how

many standard deviations the groups differ by. Suppose your t test was significant

and the estimated effect size measure was d = .63 (in the medium range according

to Cohen’s rough characterization). If this is large relative to what others have found,

then it probably is of practical importance. As Light, Singer, and Willett indicated in

their excellent text By Design (1990), “because practical significance depends upon

the research context, only you can judge if an effect is large enough to be important”

(p. 195).

Measures of association or strength of relationship, such as Hays' ω̂², can also be used

to assess practical importance because they are essentially independent of sample size.

However, there are limitations associated with these measures, as O’Grady (1982)

pointed out in an excellent review on measures of explained variance. He discussed

three basic reasons that such measures should be interpreted with caution: measurement, methodological, and theoretical. We limit ourselves here to a theoretical point

O’Grady mentioned that should be kept in mind before casting aspersions on a “low”

amount of variance accounted. The point is that most behaviors have multiple causes,

and hence it will be difficult in these cases to account for a large amount of variance

with just a single cause such as treatments. We give an example in Chapter 4 to show


that treatments accounting for only 10% of the variance on the dependent variable can

indeed be practically significant.

Sometimes practical importance can be judged by simply looking at the means and

thinking about the range of possible values. Consider the following example.

1.4.1 Example

A survey researcher compares four geographic regions on their attitude toward education. The survey is sent out and 800 responses are obtained. Ten items, Likert scaled

from 1 to 5, are used to assess attitude. The group sizes, along with the means and

standard deviations for the total score scale, are given here:

        West    North   East    South
n       238     182     130     250
x̄       32.0    33.1    34.0    31.0
s       7.09    7.62    7.80    7.49

An analysis of variance on these groups yields F = 5.61, which is significant at the .001

level. Examining the p value suggests that results are “highly significant,” but are the

results practically important? Very probably not. Look at the size of the mean differences for a scale that has a range from 10 to 50. The mean differences for all pairs of

groups, except for East and South, are about 2 or less. These are trivial differences on

a scale with a range of 40.
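The reported F is recoverable from the summary table alone. The sketch below (our own code, not from the text) computes the one-way ANOVA F statistic from the group sizes, means, and standard deviations:

```python
def anova_f(ns, means, sds):
    """One-way ANOVA F statistic computed from group summary statistics."""
    k = len(ns)
    n_total = sum(ns)
    grand_mean = sum(n * m for n, m in zip(ns, means)) / n_total
    ss_between = sum(n * (m - grand_mean) ** 2 for n, m in zip(ns, means))
    ss_within = sum((n - 1) * s ** 2 for n, s in zip(ns, sds))
    ms_between = ss_between / (k - 1)          # df_between = k - 1
    ms_within = ss_within / (n_total - k)      # df_within = N - k
    return ms_between / ms_within

# West, North, East, South
f = anova_f([238, 182, 130, 250],
            [32.0, 33.1, 34.0, 31.0],
            [7.09, 7.62, 7.80, 7.49])
print(round(f, 2))   # 5.61, matching the reported F
```

Note how a highly significant F coexists with trivially small mean differences, which is the point of the example.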

Now recall from our earlier discussion of power the problem of finding statistical significance with small sample size. That is, results in the literature that are not significant

may be simply due to poor or inadequate power, whereas results that are significant,

but have been obtained with huge sample sizes, may not be practically significant. We

illustrate this statement with two examples.

First, consider a two-group study with eight participants per group and an effect

size of .8 standard deviations. This is, in general, a large effect size (Cohen, 1988),

and most researchers would consider this result to be practically significant. However, if testing for significance at the .05 level (two-tailed test), then the chances

of finding significance are only about 1 in 3 (.31 from Cohen’s power tables).

The danger of not being sensitive to the power problem in such a study is that a

researcher may abort a promising line of research, perhaps an effective diet or type

of psychotherapy, because significance is not found. And it may also discourage

other researchers.


On the other hand, now consider a two-group study with 300 participants per group

and an effect size of .20 standard deviations. In this case, when testing at the .05 level,

the researcher is likely to find significance (power = .70 from Cohen's tables). To use

a domestic analogy, this is like using a sledgehammer to “pound out” significance. Yet

the effect size here may not be considered practically significant in most cases. Based

on these results, for example, a school system may decide to implement an expensive

program that may yield only very small gains in achievement.

For further perspective on the practical importance issue, there is a nice article by

Haase, Ellis, and Ladany (1989). Although that article is in the Journal of Counseling

Psychology, the implications are much broader. They suggest five different ways of

assessing the practical or clinical significance of findings:

1. Reference to previous research—the importance of context in determining whether

a result is practically important.

2. Conventional definitions of magnitude of effect—Cohen’s (1988) definitions of

small, medium, and large effect size.

3. Normative definitions of clinical significance—here they reference a special issue

of Behavioral Assessment (Jacobson, 1988) that should be of considerable interest

to clinicians.

4. Cost-benefit analysis.

5. The good-enough principle—here the idea is to posit a form of the null hypothesis

that is more difficult to reject: for example, rather than testing whether two population means are equal, testing whether the difference between them is at least 3.

Note that many of these ideas are considered in detail in Grissom and Kim (2012).

Finally, although in a somewhat different vein, with various multivariate procedures

we consider in this text (such as discriminant analysis), unless sample size is large relative to the number of variables, the results will not be reliable—that is, they will not

generalize. A major point of the discussion in this section is that it is critically important to take sample size into account in interpreting results in the literature.

1.5 OUTLIERS

Outliers are data points that split off or are very different from the rest of the data. Specific examples of outliers would be an IQ of 160, or a weight of 350 lbs. in a group for

which the median weight is 180 lbs. Outliers can occur for two fundamental reasons:

(1) a data recording or entry error was made, or (2) the participants are simply different

from the rest. The first type of outlier can be identified by always listing the data and

checking to make sure the data have been read in accurately.

The importance of listing the data was brought home to Dr. Stevens many years ago as a graduate student. A regression problem with five predictors, one of which was a set


of random scores, was run without checking the data. This was a textbook problem to

show students that the random number predictor would not be related to the dependent variable. However, the random number predictor was significant and accounted

for a fairly large part of the variance on y. This happened simply because one of the

scores for the random number predictor was incorrectly entered as a 300 rather than

as a 3. In this case it was obvious that something was wrong. But with large data sets

the situation will not be so transparent, and the results of an analysis could be completely thrown off by 1 or 2 errant points. The amount of time it takes to list and check

the data for accuracy (even if there are 1,000 or 2,000 participants) is well worth the

effort.

Statistical procedures in general can be quite sensitive to outliers. This is particularly

true for the multivariate procedures that will be considered in this text. It is very important to be able to identify such outliers and then decide what to do about them. Why?

Because we want the results of our statistical analysis to reflect most of the data, and

not to be highly influenced by just 1 or 2 errant data points.

In small data sets with just one or two variables, such outliers can be relatively easy to

identify. We now consider some examples.

Example 1.1

Consider the following small data set with two variables:

Case number    x1     x2
1              111    68
2               92    46
3               90    50
4              107    59
5               98    50
6              150    66
7              118    54
8              110    51
9              117    59
10              94    97

Cases 6 and 10 are both outliers, but for different reasons. Case 6 is an outlier because

the score for case 6 on x1 (150) is deviant, while case 10 is an outlier because the score

for that subject on x2 (97) splits off from the other scores on x2. The graphical split-off

of cases 6 and 10 is quite vivid and is given in Figure 1.2.
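One simple way to see the split-off numerically, one variable at a time, is to standardize each variable and flag large |z| values. The sketch below is our own illustration; the |z| > 2 cutoff is an arbitrary screening choice, not a rule from the text.

```python
def zscores(xs):
    """Standardize a variable using the sample standard deviation."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5
    return [(x - mean) / sd for x in xs]

x1 = [111, 92, 90, 107, 98, 150, 118, 110, 117, 94]
x2 = [68, 46, 50, 59, 50, 66, 54, 51, 59, 97]

flagged = set()
for var in (x1, x2):
    for case, z in enumerate(zscores(var), start=1):
        if abs(z) > 2:          # screening cutoff (our choice)
            flagged.add(case)

print(sorted(flagged))   # [6, 10]: case 6 on x1 (z ≈ 2.3), case 10 on x2 (z ≈ 2.5)
```

This catches cases that are deviant on a single variable; it will not catch the multivariate kind of outlier illustrated in the next example.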

Example 1.2

In large data sets having many variables, some outliers are not so easy to spot

and could go easily undetected unless care is taken. Here, we give an example


Figure 1.2: Plot of outliers for two-variable example. [Scatterplot of x2 against x1, with the means located at (108.7, 60); case 6 splits off on x1 and case 10 splits off on x2.]

of a somewhat more subtle outlier. Consider the following data set on four

variables:

Case number    x1     x2    x3    x4
1              111    68    17     81
2               92    46    28     67
3               90    50    19     83
4              107    59    25     71
5               98    50    13     92
6              150    66    20     90
7              118    54    11    101
8              110    51    26     82
9              117    59    18     87
10              94    67    12     69
11             130    57    16     97
12             118    51    19     78
13             155    40     9     58
14             118    61    20    103
15             109    66    13     88

The somewhat subtle outlier here is case 13. Notice that none of case 13's scores splits off dramatically from the other participants' scores on any single variable. Yet the scores tend to be low on x2, x3, and x4 and high on x1, and the cumulative effect of all this is to isolate case 13 from the rest of the cases. We indicate shortly a statistic that is quite useful in detecting multivariate outliers and pursue outliers in more detail in Chapter 3.

Now let us consider three more examples, involving material learned in previous statistics courses, to show the effect outliers can have on some simple statistics.


Example 1.3

Consider the following small set of data: 2, 3, 5, 6, 44. The last number, 44, is an

obvious outlier; that is, it splits off sharply from the rest of the data. If we were to

use the mean of 12 as the measure of central tendency for these data, it would be quite

misleading, as there are no scores around 12. That is why you were told to use the

median as the measure of central tendency when there are extreme values (outliers in

our terminology), because the median is unaffected by outliers. That is, it is a robust

measure of central tendency.
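The robustness of the median in this small data set is easy to verify directly. The check below uses Python's standard library (the point is independent of any particular statistics package):

```python
from statistics import mean, median

scores = [2, 3, 5, 6, 44]  # 44 is the obvious outlier

print(mean(scores))    # mean is 12, yet no score is anywhere near 12
print(median(scores))  # median is 5, unaffected by the outlier

# Removing the outlier barely moves the median but drops the mean sharply
print(mean(scores[:-1]))    # mean falls to 4
print(median(scores[:-1]))  # median is 4.0
```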

Example 1.4

To show the dramatic effect an outlier can have on a correlation, consider the two scatterplots in Figure 1.3. Notice how the inclusion of the outlier in each case drastically changes the interpretation of the results. For case A there is no relationship without the outlier but there is a strong relationship with the outlier, whereas for case B the relationship changes from strong (without the outlier) to weak when the outlier is included.

Example 1.5

As our final example, consider the following data:

Group 1          Group 2          Group 3
y1     y2        y1     y2        y1     y2
15     21        17     36         6     26
18     27        22     41         9     31
12     32        15     31        12     38
12     29        12     28        11     24
 9     18        20     47        11     35
10     34        14     29         8     29
12     18        15     33        13     30
20     36        20     38        30     16
                 21     25         7     23

For now, ignore variable y2, and suppose we run a one-way ANOVA for y1. The score of 30 in group 3 is an outlier. With that case included in the ANOVA we do not find significance (F = 2.61, p = .095) at the .05 level, while with the case deleted we do find significance well beyond the .01 level (F = 11.18, p = .0004). Deleting the case has the effect of

producing greater separation among the three means, because the means with the case

included are 13.5, 17.33, and 11.89, but with the case deleted the means are 13.5,

17.33, and 9.63. It also has the effect of reducing the within variability in group 3

substantially, and hence the pooled within variability (error term for ANOVA) will be

much smaller.
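The two F statistics reported above can be reproduced with a short hand computation of the one-way ANOVA on y1 (no statistics package assumed), once with the group 3 outlier included and once with it removed:

```python
def one_way_anova_F(groups):
    """Return the F statistic for a one-way ANOVA on the given groups."""
    all_scores = [s for g in groups for s in g]
    N, k = len(all_scores), len(groups)
    grand_mean = sum(all_scores) / N
    # Between-groups sum of squares
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups (error) sum of squares
    ssw = sum((s - sum(g) / len(g)) ** 2 for g in groups for s in g)
    return (ssb / (k - 1)) / (ssw / (N - k))

g1 = [15, 18, 12, 12, 9, 10, 12, 20]
g2 = [17, 22, 15, 12, 20, 14, 15, 20, 21]
g3 = [6, 9, 12, 11, 11, 8, 13, 30, 7]  # 30 is the outlier

print(round(one_way_anova_F([g1, g2, g3]), 2))  # 2.61

g3_trimmed = [s for s in g3 if s != 30]
print(round(one_way_anova_F([g1, g2, g3_trimmed]), 2))  # 11.18
```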


Figure 1.3: The effect of an outlier on a correlation coefficient.

Case A data (x, y): (6, 8), (7, 6), (7, 11), (8, 4), (8, 6), (9, 10), (10, 4), (10, 8), (11, 11), (12, 6), (13, 9), (20, 18). With the outlier (20, 18) included, rxy = .67; without it, rxy = .086.

Case B data (x, y): (2, 3), (3, 6), (4, 8), (6, 4), (7, 10), (8, 14), (9, 8), (10, 12), (11, 14), (12, 12), (13, 16), (24, 5). Without the outlier (24, 5), rxy = .84; with it, rxy = .23.
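The correlations reported for case A are easy to verify. Below is a small Pearson correlation function applied to the case A data from Figure 1.3:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

# Case A data; the last pair (20, 18) is the outlier
x = [6, 7, 7, 8, 8, 9, 10, 10, 11, 12, 13, 20]
y = [8, 6, 11, 4, 6, 10, 4, 8, 11, 6, 9, 18]

print(round(pearson_r(x, y), 2))            # 0.67 with the outlier
print(round(pearson_r(x[:-1], y[:-1]), 3))  # 0.087 without it
```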

1.5.1 Detecting Outliers

If a variable is approximately normally distributed, then z scores around 3 in absolute value should be considered as potential outliers. Why? Because, in an approximate normal distribution, about 99% of the scores should lie within three standard


deviations of the mean. Therefore, any z value > 3 indicates a value very unlikely to

occur. Of course, if n is large, say > 100, then simply by chance we might expect a

few participants to have z scores > 3 and this should be kept in mind. However, even

for any type of distribution this rule is reasonable, although we might consider extending the rule to z > 4. Chebyshev's inequality, established many years ago, guarantees that regardless of how the data are distributed, the percentage of observations contained within k standard deviations of the mean must be at least (1 − 1/k²) × 100%. This holds only for k > 1 and yields the following percentages for k = 2 through 5:

Number of standard deviations    Percentage of observations
2                                at least 75%
3                                at least 88.89%
4                                at least 93.75%
5                                at least 96%
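These lower bounds follow directly from the formula; a quick check:

```python
# Chebyshev bound: at least (1 - 1/k**2) * 100% of any distribution's
# observations lie within k standard deviations of the mean (k > 1)
for k in range(2, 6):
    print(k, round((1 - 1 / k**2) * 100, 2))
# prints: 2 75.0 / 3 88.89 / 4 93.75 / 5 96.0
```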

Shiffler (1988) showed that the largest possible z value in a data set of size n is bounded by (n − 1)/√n. This means for n = 10 the largest possible z is 2.846 and for n = 11 the largest possible z is 3.015. Thus, for small sample sizes, any data point with a z around 2.5 should be seriously considered as a possible outlier.
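Shiffler's bound is easy to confirm numerically:

```python
from math import sqrt

def max_possible_z(n):
    """Largest z score attainable in a sample of size n (Shiffler, 1988)."""
    return (n - 1) / sqrt(n)

print(round(max_possible_z(10), 3))  # 2.846
print(round(max_possible_z(11), 3))  # 3.015
```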

After the outliers are identified, what should be done with them? The action to be

taken is not to automatically drop the outlier(s) from the analysis. If one finds after

further investigation of the outlying points that an outlier was due to a recording or

entry error, then of course one would correct the data value and redo the analysis.

Or, if it is found that the errant data value is due to an instrumentation error or that

the process that generated the data for that subject was different, then it is legitimate

to drop the outlier. If, however, none of these appears to be the case, then there are

different schools of thought on what should be done. Some argue that such outliers should not be dropped from the analysis; rather, two analyses should be reported (one including the outlier and the other excluding it). Another school of thought is that it

is reasonable to remove these outliers. Judd, McClelland, and Carey (2009) state the

following:

In fact, we would argue that it is unethical to include clearly outlying observations

that “grab” a reported analysis, so that the resulting conclusions misrepresent the

majority of the observations in a dataset. The task of data analysis is to build a

story of what the data have to tell. If that story really derives from only a few

overly influential observations, largely ignoring most of the other observations,

then that story is a misrepresentation. (p. 306)

Also, outliers should not necessarily be regarded as “bad.” In fact, it has been argued

that outliers can provide some of the most interesting cases for further study.


1.6 MISSING DATA

It is not uncommon for researchers to have missing data, that is, incomplete responses

from some participants. There are many reasons why missing data may occur. Participants, for example, may refuse to answer “sensitive” questions (e.g., questions about

sexual activity, illegal drug use, income), may lose motivation in responding to questionnaire items and quit answering questions, may drop out of a longitudinal study, or

may be asked not to respond to a specific item by the researcher (e.g., skip this question

if you are not married). In addition, data collection or recording equipment may fail. If

not handled properly, missing data may result in poor (biased) estimates of parameters

as well as reduced statistical power. As such, how you treat missing data can threaten

or help preserve the validity of study conclusions.

In this section, we first describe general reasons (mechanisms) for the occurrence of

missing data. As we explain, the performance of different missing data treatments

depends on the presumed reason for the occurrence of missing data. Second, we will

briefly review various missing data treatments, illustrate how you may examine your

data to determine if there appears to be a random or systematic process for the occurrence of missing data, and show that modern methods of treating missing data generally provide for improved parameter estimates compared to other methods. As this is

a survey text on multivariate methods, we can only devote so much space to coverage

of missing data treatments. Since the presence of missing data may require the use of

fairly complex methods, we encourage you to consult in-depth treatments on missing

data (e.g., Allison, 2001; Enders, 2010).

We should also point out that not all types of missing data require sophisticated treatment. For example, suppose we ask respondents whether they are employed or not,

and, if so, to indicate their degree of satisfaction with their current employer. Those

employed may answer both questions, but the second question is not relevant to those

unemployed. In this case, it is a simple matter to discard the unemployed participants

when we conduct analyses on employee satisfaction. So, if we were to use regression

analysis to predict whether one is employed or not, we could use data from all respondents. However, if we then wish to use regression analysis to predict employee satisfaction, we would exclude those not employed from this analysis, instead of, for example,

attempting to impute their satisfaction with their employer had they been employed,

which seems like a meaningless endeavor.

This simple example highlights the challenges in missing data analysis, in that there

is not one “correct” way to handle all missing data. Rather, deciding how to deal with

missing data in a general sense involves a consideration of study variables and analysis

goals. On the other hand, when a survey question is such that a participant is expected

to respond but does not, then you need to consider whether the missing data appears to

be a random event or is predictable. This concern leads us to consider what are known

as missing data mechanisms.


1.6.1 Missing Data Mechanisms

There are three common missing data mechanisms discussed in the literature, two of

which have similar labels but have a critical difference. The first mechanism we consider is referred to as Missing Completely at Random (or MCAR). MCAR describes

the condition where data are missing for purely random reasons, which could happen,

for example, if a data recording device malfunctions for no apparent reason. As such,

if we were to remove all cases having any missing data, the resulting subsample can be

considered a simple random sample from the larger set of cases. More specifically, data

are said to be MCAR if the presence of missing data on a given variable is not related

to any variable in your analysis model of interest or to the variable itself. Regarding the last stipulation, that the presence of missing data is not related to the variable itself, Allison (2001) notes that we are not able to confirm that data are

MCAR, because the data we need to assess this condition are missing. As such, we

are only able to determine if the presence of missing data on a given variable is or is

not related to other variables in the data set. We will illustrate how one may assess

this later, but note that even if you find no such associations in your data set, it is still

possible that the MCAR assumption is violated.

We now consider two examples of MCAR violations. First, suppose that respondents

are asked to indicate their annual income and age, and that older workers tend to leave

the income question blank. In this example, missingness on income is predictable by

age and the cases with complete data are not a simple random sample of the larger data

set. As a result, running an analysis using just those participants with complete data

would likely introduce bias because the results would be based primarily on younger

workers. As a second example of a violation of MCAR, suppose that the presence

of missing data on income was not related to age or other variables at hand, but that

individuals with greater incomes chose not to report income. In this case, missingness

on income is related to income itself, but you could not determine this because these

income data are missing. If you were to use just those cases that reported income, mean

income and its variance would be underestimated in this example due to nonrandom

missingness, which is a form of self-censoring or selection bias. Associations between

variables and income may well be attenuated due to the restriction in range in the

income variable, given that the larger values for income are missing.
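This self-censoring effect is easy to simulate. The sketch below generates hypothetical income scores (the distribution and censoring rule are invented for illustration), drops most of the high incomes, and shows that the complete-case mean and standard deviation are both underestimated:

```python
import random
from statistics import mean, stdev

random.seed(1)

# Hypothetical full sample of incomes (in $1,000s)
income = [random.gauss(60, 15) for _ in range(1000)]

# Self-censoring: high earners (here, above 75) usually do not report income
observed = [y for y in income if not (y > 75 and random.random() < 0.8)]

# Complete-case mean and SD are both smaller than the full-sample values,
# because the larger incomes are systematically missing
print(round(mean(income), 1), round(stdev(income), 1))
print(round(mean(observed), 1), round(stdev(observed), 1))
```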

A second mechanism for missing data is known as Missing at Random (MAR), which

is a less stringent condition than MCAR and is a frequently invoked assumption for

missing data. MAR means that the presence of missing data is predictable from other

study variables and after taking these associations into account, missingness for a specific variable is not related to the variable itself. Using the previous example, the MAR

assumption would hold if missingness on income were predictable by age (because

older participants tended not to report income) or other study variables, but was not

related to income itself. If, on the other hand, missingness on income was due to those

with greater (or lesser) income not reporting income, then MAR would not hold. As

such, unless you have the missing data at hand (which you would not), you cannot


fully verify this assumption. Note though that the most commonly recommended procedures for treating missing data—use of maximum likelihood estimation and multiple

imputation—assume a MAR mechanism.

A third missing data mechanism is Missing Not at Random (MNAR). Data are MNAR

when the presence of missing data for a given variable is related to that variable itself

even after predicting missingness with the other variables in the data set. With our running example, if missingness on income is related to income itself (e.g., those with greater

income do not report income) even after using study variables to account for missingness

on income, the missing mechanism is MNAR. While this missing mechanism is the

most problematic, note that methods that are used when MAR is assumed (maximum

likelihood and multiple imputation) can provide for improved parameter estimates when

the MNAR assumption holds. Further, by collecting data from participants on variables

that may be related to missingness for variables in your study, you can potentially turn

an MNAR mechanism into an MAR mechanism. Thus, in the planning stages of a study,

it may be helpful to consider including variables that, although not of substantive interest, may explain missingness for the variables in your data set. These variables are known as auxiliary variables, and software programs that include the generally accepted missing data treatments can make use of such variables to provide improved parameter estimates and perhaps greatly reduce problems associated with missing data.

1.6.2 Deletion Strategies for Missing Data

This section, focusing on deletion methods, and three sections that follow present various missing data treatments suitable for the MCAR or MAR mechanisms or both.

Missing data treatments for the MNAR condition are discussed in the literature (e.g.,

Allison, 2001; Enders, 2010). The methods considered in these sections include traditionally used methods that may often be problematic and two generally recommended

missing data treatments.

A commonly used and easily implemented deletion strategy is listwise deletion, which

is not recommended for widespread use. With listwise deletion, which is the default

method for treating missing data in many software programs, cases that have any missing data are removed or deleted from the analysis. The primary advantages of listwise

deletion are that it is easy to implement and its use results in a single set of cases that

can be used for all study analyses. A primary disadvantage of listwise deletion is that

it generally requires that data are MCAR. If data are not MCAR, then parameter estimates and their standard errors using just those cases having complete data are generally biased. Further, even when data are MCAR, using listwise deletion may severely

reduce statistical power if many cases are missing data on one or more variables, as

such cases are removed from the analysis.

There are, however, situations where listwise deletion is recommended. When missing data are minimal, so that only a small percentage of cases (perhaps 5% to 10%) would be removed, this method is a reasonable choice.


In addition, listwise deletion is a recommended missing data treatment for regression analysis under any missing data mechanism (even MNAR) if a certain condition is satisfied. That is, if scores for the variables used in a regression analysis are missing only as a function of the predictors (and not the outcome), the use of listwise deletion can outperform the two more generally recommended missing data treatments (i.e., maximum likelihood and multiple imputation).

Another deletion strategy used is pairwise deletion. With this strategy, cases with incomplete data are not excluded entirely from the analysis. Rather, with pairwise deletion,

a given case with missing data is excluded only from those analyses that involve variables for which the case has missing data. For example, if you wanted to report correlations for three variables, using the pairwise deletion method, you would compute the

correlation for variables 1 and 2 using all cases having scores for these variables (even

if such a case had missing data for variable 3). Similarly, the correlation for variables

1 and 3 would be computed for all cases having scores for these two variables (even if

a given case had missing data for variable 2) and so on. Thus, unlike listwise deletion,

pairwise deletion uses as much data as possible for cases having incomplete data. As a

result, different sets of cases are used to compute, in this case, the correlation matrix.

Pairwise deletion is not generally recommended for treating missing data, as its

advantages are outweighed by its disadvantages. On the positive side, pairwise deletion is easy to implement (as it is often included in software programs) and can

produce approximately unbiased parameter estimates when data are MCAR. However, when the missing data mechanism is MAR or MNAR, parameter estimates are

biased with the use of pairwise deletion. In addition, using different subsets of cases,

as in the earlier correlation example, can result in correlation or covariance matrices

that are not positive definite. Such matrices would not allow for the computation,

for example, of regression coefficients or other parameters of interest. Also, computing accurate standard errors with pairwise deletion is not straightforward because a

common sample size is not used for all variables in the analysis.
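The difference between the two strategies can be sketched with a toy data set (the values are invented; None marks a missing score):

```python
# Toy data: each row is a case with scores on (x1, x2, x3)
rows = [
    (10, 20, 30),
    (11, None, 31),
    (12, 22, None),
    (13, 23, 33),
    (None, 24, 34),
]

# Listwise deletion: keep only fully complete cases for every analysis
listwise = [r for r in rows if None not in r]
print(len(listwise))  # 2 cases survive

# Pairwise deletion: each pair of variables keeps its own set of cases
def pairwise_cases(i, j):
    return [r for r in rows if r[i] is not None and r[j] is not None]

print(len(pairwise_cases(0, 1)))  # 3 cases for the (x1, x2) correlation
print(len(pairwise_cases(0, 2)))  # 3 cases for the (x1, x3) correlation
print(len(pairwise_cases(1, 2)))  # 3 cases for the (x2, x3) correlation
```

Note that each correlation is based on a different subset of three cases, which is exactly why the resulting correlation matrix need not be positive definite.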

1.6.3 Single Imputation Strategies for Missing Data

Imputing data involves replacing missing data with score values, which are (hopefully) reasonable values to use. In general, imputation methods are attractive because

once the data are imputed, analyses can proceed with a “complete” set of data. Single

imputation strategies replace missing data with just a single value, whereas multiple

imputation, as we will see, provides multiple replacement values. Different methods

can be used to assign or impute score values. As is often the case with missing data

treatments, the simpler methods are generally more problematic than more sophisticated treatments. However, use of statistical software (e.g., SAS, SPSS) greatly simplifies the task of imputing data.

A relatively easy but generally unsatisfactory method of imputing data is to replace

missing values with the mean of the available scores for a given variable, referred to


as mean substitution. This method assumes that the missing mechanism is MCAR, but

even in this case, mean substitution can produce biased estimates. The main problem

with this procedure is that it assumes that all cases having missing data for a given

variable score only at the mean of the variable in question. This replacement strategy,

then, can greatly underestimate the variance (and standard deviation) of the imputed

variable. Also, given that variances are underestimated with mean substitution, covariances and correlations will also be attenuated. As such, missing data experts often

suggest not using mean substitution as a missing data treatment.
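The variance shrinkage caused by mean substitution is easy to demonstrate with invented scores:

```python
from statistics import mean, pvariance

observed = [40, 45, 50, 55, 60, 65, 70]  # available scores for a variable
n_missing = 3                            # cases with no score

# Mean substitution: every missing case receives the observed mean (55)
imputed = observed + [mean(observed)] * n_missing

print(pvariance(observed))  # population variance of the observed scores: 100
print(pvariance(imputed))   # shrinks to 70 after mean substitution
```

The sum of squared deviations is unchanged, but it is divided over more cases, so the variance (and every covariance involving this variable) is attenuated.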

Another imputation method involves using a multiple regression equation to replace

missing values, a procedure known as regression substitution or regression imputation.

With this procedure, a given variable with missing data serves as the dependent variable

and is regressed on the other variables in the data set. Note that only those cases having

complete data are typically used in this procedure. Once the regression estimates (i.e.,

intercept and slope values) are obtained, we can then use the equation to predict or

impute scores for individuals having missing data by plugging into this equation their

scores on the equation predictors. A complete set of scores is then obtained for all participants. Although regression imputation is an improvement over mean substitution,

this procedure is also not recommended because it can produce attenuated estimates

of variable variances and covariances, due to the lack of variability that is inherent in

using the predicted scores from the regression equation as the replacement values.

An improved missing data replacement procedure uses this same regression idea, but

adds random variability to the predicted scores. This procedure is known as stochastic

regression imputation, where the term stochastic refers to the additional random component that is used in imputing scores. The procedure is similar to that described for

regression imputation but now includes a residual term, scores for which are included

when generating imputed values. Scores for this residual are obtained by sampling

from a population having certain characteristics, such as being normally distributed

with a mean of zero and a variance that is equal to the residual variance estimated from

the regression equation used to impute the scores.
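A sketch of stochastic regression imputation for one incomplete variable follows. The data are invented for illustration; the regression is fit on complete cases only, and a normally distributed residual is added to each predicted value:

```python
import random
from math import sqrt

random.seed(7)

# Complete cases (predictor x, outcome y), plus cases where y is missing
complete = [(2, 5), (4, 9), (5, 12), (7, 14), (8, 18), (10, 21)]
x_missing = [3, 6, 9]

# Fit y = a + b*x by least squares on the complete cases
n = len(complete)
mx = sum(x for x, _ in complete) / n
my = sum(y for _, y in complete) / n
b = (sum((x - mx) * (y - my) for x, y in complete) /
     sum((x - mx) ** 2 for x, _ in complete))
a = my - b * mx

# Residual variance from the fitted equation (n - 2 for two coefficients)
resid_var = sum((y - (a + b * x)) ** 2 for x, y in complete) / (n - 2)

# Stochastic imputation: predicted value plus a random normal residual
imputed = [a + b * x + random.gauss(0, sqrt(resid_var)) for x in x_missing]
print([round(v, 1) for v in imputed])
```

Without the `random.gauss` term this reduces to ordinary regression imputation, whose imputed values all fall exactly on the regression line and so understate variability.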

Stochastic single regression imputation overcomes some of the limitations of the

other single imputation methods but still has one major shortcoming. On the positive

side, point estimates obtained with analyses that use such imputed data are unbiased

for MAR data. However, standard errors estimated when analyses are run using data

imputed by stochastic regression are negatively biased, leading to inflated test statistics

and an inflated type I error rate. This misestimation also occurs for the other single

imputation methods mentioned earlier. Improved estimates of standard errors can be

obtained by generating several such imputed data sets and incorporating variability

across the imputed data sets into the standard error estimates.

The last single imputation method considered here is a maximum likelihood approach

known as expectation maximization (EM). The EM algorithm uses two steps to estimate parameters (e.g., means, variances, and covariances) that may be of interest

by themselves or can be used as input for other analyses (e.g., exploratory factor


analysis). In the first step of the algorithm, the means and variance-covariance matrix

for the set of variables are estimated using the available (i.e., nonmissing) data. In the

second step, regression equations are obtained using these means and variances, with

the regression equations used (as in stochastic regression) to then obtain estimates for

the missing data. With these newly estimated values, the procedure then reestimates

the variable means and covariances, which are used again to obtain the regression

equations to provide new estimates for the missing data. This two-step process continues until the means and covariances are essentially the same from one iteration to

the next.
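The two-step cycle just described can be sketched for the simplest case: two variables, with missing values on only one of them. The data and starting values below are invented for illustration; the E-step fills in expected values for y (and, for the second moments, adds the residual variance), and the M-step re-estimates the moments involving y:

```python
# Hypothetical bivariate data; None marks a missing y score
pairs = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1), (5, None),
         (6, 12.0), (7, None), (8, 15.8), (9, None), (10, 19.9)]

n = len(pairs)
xs = [x for x, _ in pairs]
mx = sum(xs) / n
vx = sum((x - mx) ** 2 for x in xs) / n

# Starting values from the cases with y observed
obs = [(x, y) for x, y in pairs if y is not None]
my = sum(y for _, y in obs) / len(obs)
vy = sum((y - my) ** 2 for _, y in obs) / len(obs)
cxy = sum((x - mx) * (y - my) for x, y in obs) / len(obs)

for _ in range(200):
    b = cxy / vx                   # slope for the regression of y on x
    a = my - b * mx                # intercept
    rvar = max(vy - b * cxy, 0.0)  # residual variance of y given x
    # E-step: expected y and y^2 for every case
    ey = [y if y is not None else a + b * x for x, y in pairs]
    ey2 = [y ** 2 if y is not None else (a + b * x) ** 2 + rvar
           for x, y in pairs]
    # M-step: re-estimate the moments that involve y
    new_my = sum(ey) / n
    new_vy = sum(ey2) / n - new_my ** 2
    new_cxy = sum(x * e for x, e in zip(xs, ey)) / n - mx * new_my
    converged = abs(new_my - my) < 1e-10
    my, vy, cxy = new_my, new_vy, new_cxy
    if converged:
        break

print(round(my, 2), round(cxy / vx, 2))  # EM mean of y and implied slope
```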

Of the single imputation methods discussed here, use of the EM algorithm is considered to be superior and provides unbiased parameter estimates (i.e., the means and

covariances). However, like the other single-imputation procedures, the standard errors

estimated from analyses using the EM-obtained means and covariances are underestimated. As such, this procedure is not recommended for analyses where standard errors

and associated statistical tests are used, as type I error rates would be inflated. For

procedures that do not require statistical inference (principal component or principal

axis factor analysis), use of the EM procedure is recommended. The full information

maximum likelihood procedure described in section 1.6.5 is an improved maximum

likelihood approach that can obtain proper estimates of standard errors.

1.6.4 Multiple Imputation

Multiple imputation (MI) is one of two procedures that are widely recommended for

dealing with missing data. MI involves three main steps. In the first step, the imputation phase, missing data are imputed using a version of stochastic regression imputation, except now this procedure is done several times, so that multiple “complete” data

sets are created. Given that a random procedure is included when imputing scores, the

imputed score for a given case for a given variable will differ across the multiple data

sets. Also, note that while the default in statistical software is often to impute a total of

five data sets, current thinking is that this number is generally too small, as improved

standard error estimates and statistical test results are obtained with a larger number

of imputed data sets. Allison (personal communication, November 8, 2013) has suggested that 100 may be regarded as the maximum number of imputed data sets needed.

The second and third steps of this procedure involve analyzing the imputed data sets

and obtaining a final set of parameter estimates. In the second step, the analysis stage,

the primary analysis of interest is conducted with each of the imputed data sets. So, if

100 data sets were imputed, 100 sets of parameter estimates would be obtained. In the

final stage, the pooling phase, a final set of parameter estimates is obtained by combining the parameter estimates across the analyzed data sets. If the procedure is carried

out properly, parameter estimates and standard errors are unbiased when the missing

data mechanism is MCAR or MAR.
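The pooling phase follows Rubin's rules: the pooled estimate is the average of the per-data-set estimates, and its variance adds the average within-imputation variance W to the between-imputation variance B, inflated by 1/m. A sketch with made-up numbers:

```python
from math import sqrt

# Hypothetical regression slope estimates and their squared standard
# errors from m = 5 imputed data sets
estimates = [0.48, 0.52, 0.50, 0.55, 0.45]
variances = [0.0040, 0.0044, 0.0042, 0.0047, 0.0041]

m = len(estimates)
pooled = sum(estimates) / m
W = sum(variances) / m                                   # within-imputation
B = sum((e - pooled) ** 2 for e in estimates) / (m - 1)  # between-imputation
total_var = W + (1 + 1 / m) * B                          # Rubin's rule

print(round(pooled, 3), round(sqrt(total_var), 4))  # 0.5 0.0776
```

The (1 + 1/m)B term is what the single imputation methods lack; omitting it is exactly why their standard errors are too small.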

There are advantages and disadvantages to using MI as a missing data treatment.

The main advantages are that MI provides for unbiased parameter estimates when


the missing data mechanism is MCAR and MAR, and multiple imputation has great

flexibility in that it can be applied to a variety of analysis models. One main disadvantage of the procedure is that it can be relatively complicated to implement. As Allison

(2012) points out, users must make at least seven decisions when implementing this

procedure, and it may be difficult for the user to determine the proper set of choices

that should be made.

Another disadvantage of MI is that it is always possible that the imputation and analysis model differ, and such a difference may result in biased parameter estimation even

when the data follow an MCAR mechanism. As an example, the analysis model may

include interactions or nonlinearities among study variables. However, if such terms

were excluded from the imputation model, such interactions and nonlinear associations may not be found in the analysis model. While this problem can be avoided

by making sure that the imputation model matches or includes more terms than the

analysis model, Allison (2012) notes that in practice it is easy to make this mistake.

These latter difficulties can be overcome with the use of another widely recommended

missing data treatment, full information maximum likelihood estimation.

1.6.5 Full Information Maximum Likelihood Estimation

Full information maximum likelihood, or FIML (also known as direct maximum likelihood or maximum likelihood), is another widely recommended procedure for treating missing data. When the missing mechanism is MAR, FIML provides for unbiased

parameter estimation as well as accurate estimates of standard errors. When data are

MCAR, FIML also provides for accurate estimation and can provide for more power

than listwise deletion. For sample data, use of maximum likelihood estimation yields

parameter estimates that maximize the probability for obtaining the data at hand. Or,

as stated by Enders (2010), FIML tries out or “auditions” various parameter values

and finds those values that are most consistent with or provide the best fit to the

data. While the computational details are best left to missing data textbooks (e.g.,

Allison, 2001; Enders, 2010), FIML estimates model parameters, in the presence of

missing data, by using all available data as well as the implied values of the missing

data, given the observed data and assumed probability distribution (e.g., multivariate

normal).

Unlike other missing data treatments, FIML estimates parameters directly for the analysis model of substantive interest. Thus, unlike multiple imputation, there are no separate imputation and analysis models, as model parameters are estimated in the presence

of incomplete data in one step, that is, without imputing data sets. Allison (2012)

regards this simultaneous missing data treatment and estimation of model parameters

as a key advantage of FIML over multiple imputation. A key disadvantage of FIML is

that its implementation typically requires specialized software, in particular, software

used for structural equation modeling (e.g., LISREL, Mplus). SAS, however, includes

such capability, and we briefly illustrate how FIML can be implemented using SAS in

the illustration to which we now turn.


1.6.6 Illustrative Example: Inspecting Data for Missingness and Mechanism

This section and the next fulfill several purposes. First, using a small data set with missing data, we illustrate how you can assess, using relevant statistics, if the missing mechanism is consistent with the MCAR mechanism or not. Recall that some missing data

treatments require MCAR. As such, determining that the data are not MCAR would

suggest using a missing data treatment that does not require that mechanism. Second,

we show the computer code needed to implement FIML using SAS (as SPSS does not

offer this option) and MI in SAS and SPSS. Third, we compare the performance of

different missing data treatments for our small data set. This comparison is possible

because while we work with a data set having incomplete data, we have the full set of

scores or parent data set, from which the data set with missing values was obtained. As

such, we can determine how closely the parameters estimated by using various missing

data treatments approximate the parameters estimated for the parent data set.

The hypothetical example considered here includes data collected from 300 adolescents

on three variables. The outcome variable is apathy, and the researchers, we assume, intend

to use multiple regression to determine if apathy is predicted by a participant’s perception of family dysfunction and sense of social isolation. Note that higher scores for each

variable indicate greater apathy, poorer family functioning, and greater isolation. While

we generated a complete set of scores for each variable, we subsequently created a data

set having missing values for some variables. In particular, there are no missing scores

for the outcome, apathy, but data are missing on the predictors. These missing data were

created by randomly removing some scores for dysfunction and isolation, but for only

those participants whose apathy score was above the mean. Thus, the missing data mechanism is MAR, as whether data are missing or not for dysfunction and isolation depends

on apathy, where only those with greater apathy have missing data on the predictors.

We first show how you can examine data to determine the extent of missing data

as well as assess whether the data may be consistent with the MCAR mechanism.

Table 1.1 shows relevant output for some initial missing data analysis, which may be obtained from the following SPSS commands:


MVA VARIABLES=apathy dysfunction isolation

/TTEST

/TPATTERN DESCRIBE=apathy dysfunction isolation

/EM.

Note that some of this output can also be obtained in SAS by the commands shown in

section 1.6.7.

In the top display of Table 1.1, the means, standard deviations, and the number and percent of cases with missing data are shown. There are no missing data for apathy, but 20%

of the 300 cases did not report a score for dysfunction, and 30% of the sample did not


provide a score for isolation. Information in the second display in Table 1.1 (Separate

Variance t Tests) can be used to assess whether the missing data are consistent with the

MCAR mechanism. This display reports separate variance t tests that test for a difference

in means between cases with and without missing data on a given variable on other study

variables. If mean differences are present, this suggests that cases with missing data differ

from other cases, discrediting the MCAR mechanism as an explanation for the missing

data. In this display, the second column (Apathy) compares mean apathy scores for cases

with and without scores for dysfunction and then for isolation. In that column, we see that

the 60 cases with missing data on dysfunction have much greater mean apathy (60.64)

than the other 240 cases (50.73), and that the 90 cases with missing data on isolation have

greater mean apathy (60.74) than the other 210 cases (49.27). The t test values, well above

a magnitude of 2, also suggest that cases with missing data on dysfunction and isolation

are different from cases (i.e., more apathetic) having no missing data on these predictors.

Further, the standard deviation for apathy (from the EM estimate obtained via the SPSS

syntax just mentioned) is about 10.2. Thus, the mean apathy differences are equivalent to

about 1 standard deviation, which is generally considered to be a large difference.
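As a quick check on this claim, the standardized difference can be computed directly from the reported summary statistics. The sketch below (in Python, purely for illustration; the book's analyses use SPSS and SAS) uses the apathy means just discussed and the EM-based standard deviation of about 10.2:

```python
# Standardized mean difference for apathy between cases with and
# without missing dysfunction scores, using the summary statistics
# reported in the text (means 60.6388 vs. 50.7283, SD about 10.2).
mean_missing = 60.6388   # mean apathy for the 60 cases missing dysfunction
mean_present = 50.7283   # mean apathy for the other 240 cases
sd_apathy = 10.2         # EM estimate of the apathy standard deviation

smd = (mean_missing - mean_present) / sd_apathy
print(round(smd, 2))  # 0.97, i.e., roughly 1 standard deviation
```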

Table 1.1: Statistics Used to Describe Missing Data

                                                 Missing
              N      Mean      Std. deviation    Count    Percent
Apathy        300    52.7104   10.21125          0        .0
Dysfunction   240    53.7802   10.12854          60       20.0
Isolation     210    52.9647   10.10549          90       30.0

Separate Variance t Tests(a)

                               Apathy     Dysfunction   Isolation
Dysfunction   t                −9.6       .             −2.1
              df               146.1      .             27.8
              # Present        240        240           189
              # Missing        60         0             21
              Mean (present)   50.7283    53.7802       52.5622
              Mean (missing)   60.6388    .             56.5877
Isolation     t                −12.0      −2.9          .
              df               239.1      91.1          .
              # Present        210        189           210
              # Missing        90         51            0
              Mean (present)   49.2673    52.8906       52.9647
              Mean (missing)   60.7442    57.0770       .

For each quantitative variable, pairs of groups are formed by indicator variables (present, missing).
(a) Indicator variables with less than 5.0% missing are not displayed.


Tabulated Patterns

           Missing patterns(a)
Number of                                   Complete
cases      Apathy  Dysfunction  Isolation   if ...(b)   Apathy(c)   Dysfunction(c)   Isolation(c)
189                                         189         48.0361     52.8906          52.5622
51                              X           240         60.7054     57.0770          .
39                 X            X           300         60.7950     .                .
21                 X                        210         60.3486     .                56.5877

Patterns with less than 1.0% of cases (3 or fewer) are not displayed.
(a) Variables are sorted on missing patterns.
(b) Number of complete cases if variables missing in that pattern (marked with X) are not used.
(c) Means at each unique pattern.

The other columns in this output table (headed by dysfunction and isolation) indicate

that cases having missing data on isolation have greater mean dysfunction and those

with missing data on dysfunction have greater mean isolation. Thus, these statistics

suggest that the MCAR mechanism is not a reasonable explanation for the missing

data. As such, missing data treatments that assume MCAR should not be used with

these data, as they would be expected to produce biased parameter estimates.

Before considering the third display in TableÂ€1.1, we discuss other procedures that can

be used to assess the MCAR mechanism. First, Little’s MCAR test is an omnibus test

that may be used to assess whether all mean differences, like those shown in Table 1.1,

are consistent with the MCAR mechanism (large p value) or not consistent with the

MCAR mechanism (small p value). For the example at hand, the chi-square test statistic for Little's test, obtained with the SPSS syntax just mentioned, is 107.775 (df = 5) and statistically significant (p < .001). Given that the null hypothesis for Little's test is

that the data are MCAR, the conclusion from this test result is that the data do not

follow an MCAR mechanism. While Little’s test may be helpful, Enders (2010) notes

that it does not indicate which particular variables are associated with missingness and

prefers examining standardized group-mean differences as discussed earlier for this

purpose. Identifying such variables is important because they can be included in the

missing data treatment, as auxiliary variables, to improve parameter estimates.

A third procedure that can be used to assess the MCAR mechanism is logistic regression. With this procedure, you first create a dummy-coded variable for each variable

in the data set that indicates whether a given case has missing data for this variable or

not. (Note that this same thing is done in the t-test procedure earlier but is entirely automated by SPSS.) Then, for each variable with missing data (perhaps with a minimum

of 5% to 10% missing), you can use logistic regression with the missingness indicator

for a given variable as the outcome and other study variables as predictors. By doing

this, you can learn which study variables are uniquely associated with missingness.
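The indicator-coding step of this procedure is mechanical and easy to sketch. The fragment below (Python, shown only to make the logic concrete; the records and values are hypothetical) builds a 0/1 missingness indicator for each variable, which would then serve as the outcome in the logistic regressions:

```python
# Hypothetical records; None stands in for a missing score.
records = [
    {"apathy": 48.0, "dysfunction": 51.2, "isolation": 50.3},
    {"apathy": 61.5, "dysfunction": None, "isolation": 55.0},
    {"apathy": 63.2, "dysfunction": 58.4, "isolation": None},
]

def missing_indicator(records, variable):
    """Return 0/1 flags: 1 if a case is missing the given variable."""
    return [1 if r[variable] is None else 0 for r in records]

dys_missing = missing_indicator(records, "dysfunction")  # [0, 1, 0]
iso_missing = missing_indicator(records, "isolation")    # [0, 0, 1]
```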


If any are, this suggests that missing data are not MCAR and also identifies variables

that need to be used, for example, in the imputation model, to provide for improved (or

hopefully unbiased) parameter estimates.

For the example at hand, given that there is a substantial proportion of missing data

for dysfunction and isolation, we created a missingness indicator variable first for dysfunction and ran a logistic regression equation with this indicator as the outcome and

apathy and isolation as the predictors. We then created a missingness indicator for

isolation and used this indicator as the outcome in a second logistic regression with

predictors apathy and dysfunction. While the odds ratios obtained with the logistic

regressions should be examined, we simply note here that, for each equation, the only

significant predictor was apathy. This finding provides further evidence against the

MCAR assumption and suggests that the only study variable responsible for missingness is apathy (which in this case is consistent with how the missing data were

obtained).

To complete the description of missing data, we examine the third output selection

shown in TableÂ€1.1, labeled Tabulated Patterns. This output provides the number of

cases for each missing data pattern, sorted by the number of cases in each pattern, as

well as relevant group means. For the apathy data, note that there are four missing

data patterns shown in the Tabulated Patterns table. The first pattern, consisting of 189

cases, consists of cases that provided complete data on all study variables. The three

columns on the right side of the output show the means for each study variable for

these 189 cases. The second missing data pattern includes the 51 cases that provided

complete data on all variables except for isolation. Here, we can see that this group had

much greater mean apathy than those who provided complete scores for all variables

and somewhat higher mean dysfunction, again, discrediting the MCAR mechanism.

The next group includes those cases (n = 39) that had missing data for both dysfunction

and isolation. Note, then, that the Tabulated Patterns table provides additional information beyond that provided by the Separate Variance t Tests table, in that now we can identify

the number of cases that have missing data on more than one variable. The final group

in this table (n = 21) consists of those who have missing data on the isolation variable

only. Inspecting the means for the three groups with missing data indicates that each of

these groups has much greater apathy, in particular, than do cases with complete data,

again suggesting the data are not MCAR.

1.6.7 Applying FIML and MI to the Apathy Data

We now use the results from the previous section to select a missing data treatment.

Given that the earlier analyses indicated that the data are not MCAR, this suggests

that listwise deletion, which could be used in some situations, should not be used

here. Rather, of the methods we have discussed, full information maximum likelihood

estimation and multiple imputation are the best choices. If we assume that the three

study variables approximately follow a multivariate normal distribution, FIML, due

to its ease of use and because it provides optimal parameter estimates when data are


MAR, would be the most reasonable choice. We provide SAS and SPSS code that can

be used to implement these missing data treatments for our example data set and show

how these methods perform compared to the use of more conventional missing data

treatments.

Although SPSS has the capacity for some missing data treatments, it currently cannot implement a maximum likelihood approach (outside of the effective but limited mixed modeling procedure discussed in Chapter 14, which cannot handle

missingness in predictors, except for using listwise deletion for such cases). As

such, we use SAS to implement FIML with the relevant code for our example as

follows:

PROC CALIS DATA=apathy METHOD=fiml;

PATH apathy <- dysfunction isolation;

RUN;

PROC CALIS (Covariance Analysis of Linear Structural Equations) is capable of

implementing FIML. Note that after indicating the data set, you simply write fiml

following METHOD. Note that SAS assumes that a dot or period ( . ) represents missing data in your data set. On the second line, the dependent variable (here, apathy) for our regression equation of interest immediately follows PATH, with the remaining predictors placed after the <- symbol. Assuming that we do not have auxiliary variables (which we do not here), the code is complete. We will present relevant

results later in this section.


Both SAS and SPSS can implement multiple imputation, assuming that you have

the Missing Values Analysis module in SPSS. Table 1.2 presents SAS and SPSS

code that can be used to implement MI for the apathy data. Be aware that both sets

of code, with the exception of the number of imputations, tacitly accept the default

choices that are embedded in each of the software programs. You should examine

SAS and SPSS documentation to see what these default options are and whether they

are reasonable for your particular set of circumstances. Note that SAS code follows

the three MI phases (imputation, analysis, and pooling of results). In the first line of

code in Table 1.2, you write after the OUT command the name of the data set that

will contain the imputed data sets (apout, here). The NIMPUTE command is used

to specify the number of imputed data sets you wish to have (here, 100 such data

sets). The variables used in the imputation phase appear in the second line of code.

The PROC REG command, leading off the second block of code (corresponding

to the analysis phase), is used because the primary analysis of interest is multiple

regression. Note that regression analysis is applied to each of the 100 imputed data

sets (stored in the file apout), and the resulting 100 sets of parameter estimates are

output to another data file we call est. The final block of SAS code (corresponding

to the pooling phase) is used to combine the parameter estimates across the imputed

data sets and yields a final single set of parameter estimates, which is then used to

interpret the regression results.
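The pooling phase follows Rubin's rules: the pooled coefficient is the mean of the per-imputation estimates, and its total sampling variance combines the average within-imputation variance W with the between-imputation variance B as W + (1 + 1/m)B. A minimal sketch of this arithmetic (Python, with made-up estimates rather than output from the apathy data):

```python
import statistics

def pool_estimates(estimates, variances):
    """Pool m imputation-specific estimates via Rubin's rules.

    Returns (pooled estimate, total variance), where the total
    variance is W + (1 + 1/m) * B: W is the mean within-imputation
    variance and B is the between-imputation variance.
    """
    m = len(estimates)
    pooled = statistics.mean(estimates)
    W = statistics.mean(variances)      # average within-imputation variance
    B = statistics.variance(estimates)  # between-imputation variance
    return pooled, W + (1 + 1 / m) * B

# Hypothetical slope estimates and variances from m = 5 imputed data sets
pooled, total_var = pool_estimates(
    [0.30, 0.31, 0.29, 0.32, 0.30],
    [0.0045, 0.0046, 0.0044, 0.0047, 0.0045],
)
print(round(pooled, 3))  # 0.304
```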


Table 1.2: SAS and SPSS Code for Multiple Imputation With the Apathy Data

SAS Code

PROC MI DATA=apathy OUT=apout NIMPUTE=100;
VAR apathy dysfunction isolation;
RUN;

PROC REG DATA=apout OUTEST=est COVOUT;
MODEL apathy=dysfunction isolation;
BY _Imputation_;
RUN;

PROC MIANALYZE DATA=est;
MODELEFFECTS INTERCEPT dysfunction isolation;

RUN;

SPSS Code

MULTIPLE IMPUTATION apathy dysfunction isolation

/IMPUTE METHOD=AUTO NIMPUTATIONS=100

/IMPUTATIONSUMMARIES MODELS

/OUTFILE IMPUTATIONS=impute.

REGRESSION

/STATISTICS COEFF OUTS R ANOVA

/DEPENDENT apathy

/METHOD=ENTER dysfunction isolation.

The SPSS syntax needed to implement MI for the apathy data is shown in the lower

half of TableÂ€1.2. In the first block of commands, MULTIPLE IMPUTATION is used

to create the imputed sets using the three variables appearing in that line. Note

that the second line of SPSS code requests 100 such imputed data sets, and the last

line in that first block outputs a data file that we named impute that has all 100

imputed data sets. With that data file active, the second block of SPSS code conducts the regression analysis of interest on each of the 100 data sets and produces a

final combined set of regression estimates used for interpretation. Note that if you

close the imputed data file and reopen it at some later time for analysis, you would

first need to click on View (in the Data Editor) and Mark Imputed Data prior to

running the regression analysis. If this step is not done, SPSS will treat the data in

the imputed data file as if they were from one data set, instead of, in this case, 100

imputed data sets. Results using MI for the apathy data are very similar for SAS and

SPSS, as would be expected. Thus, we report the final regression results as obtained

from SPSS.

Table 1.3 provides parameter estimates obtained by applying a variety of missing data

treatments to the apathy data as well as the estimates obtained from the parent data

set that had no missing observations. Note that the percent bias columns in Table 1.3

are calculated as the difference between the respective regression coefficient obtained


Table 1.3: Parameter Estimates for Dysfunction (β1) and Isolation (β2) Under Various Missing Data Methods

Method              β1             β2             t (β1)   t (β2)   % Bias for β1   % Bias for β2
No missing data     .289 (.058)    .280 (.067)    4.98     4.18     –               –
Listwise            .245 (.067)    .202 (.067)    3.66     3.01     −15.2           −27.9
Pairwise            .307 (.076)    .226 (.076)    4.04     2.97     6.2             −19.3
Mean substitution   .334 (.067)    .199 (.072)    4.99     2.76     15.6            −28.9
FIML                .300 (.068)    .247 (.071)    4.41     3.48     3.8             −11.8
MI                  .303 (.074)    .242 (.078)    4.09     3.10     4.8             −13.6

Note: Standard errors are in parentheses.

from the missing data treatment to that obtained by the complete or parent data set,

divided by the latter estimate, and then multiplied by 100 to obtain the percent. For

coefficient β1, we see that FIML and MI yielded estimates that are closest to the values

from the parent data set, as these estimates are less than 5% higher. Listwise deletion

and mean substitution produced the worst estimates for both regression coefficients,

and pairwise deletion also exhibited poorer performance than MI or FIML. In line with

the literature, FIML provided the most accurate estimates and resulted in more power

(exhibited by the t tests) than MI. Note, though, that with the greater amount of missing data for isolation (30%), the estimates for FIML and MI are more than 10% lower

than the estimate for the parent set. Thus, although FIML and MI are the best missing

data treatments for this situation (i.e., given that the data are MAR), no missing data is

the best kind of missing data to have.
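The percent bias calculation described above is easy to verify against Table 1.3; for instance, using the parent, FIML, and listwise estimates of the dysfunction coefficient (a sketch in Python):

```python
def percent_bias(estimate, parent_estimate):
    """Percent bias of an estimate relative to the parent (complete-data) value."""
    return (estimate - parent_estimate) / parent_estimate * 100

parent = 0.289    # beta1 with no missing data (Table 1.3)
fiml = 0.300      # beta1 under FIML
listwise = 0.245  # beta1 under listwise deletion

print(round(percent_bias(fiml, parent), 1))      # 3.8, as in Table 1.3
print(round(percent_bias(listwise, parent), 1))  # -15.2, as in Table 1.3
```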

1.6.8 Missing Data Summary

You should always determine and report the extent of missing data for your study

variables. Further, you should attempt to identify the most plausible mechanism for

missing data. Sections 1.6.6 and 1.6.7 provided some procedures you can use for these purposes

and illustrated the selection of a missing data treatment given this preliminary analysis.

The two most widely recommended procedures are full information maximum likelihood and multiple imputation, although listwise deletion can be used in some circumstances (i.e., minimal amount of missing data and data MCAR). Also, to reduce the

amount of missing data, it is important to minimize the effort required by participants

to provide data (e.g., use short questionnaires, provide incentives for responding).

However, given that missing data are inevitable despite your best efforts, you should

consider collecting data on variables that may predict missingness for the study variables of interest. Incorporating such auxiliary variables in your missing data treatment

can provide for improved parameter estimates.

1.7 UNIT OR PARTICIPANT NONRESPONSE

Section 1.6 discussed the situation where data were collected from each respondent but some cases may not have provided a complete set of responses, resulting in


incomplete or missing data. A different type of missingness occurs when no data are

collected from some respondents, as when an individual refuses to participate in

a survey. This nonparticipation, called unit or participant nonresponse, happens regularly in survey research and can be problematic because nonrespondents and respondents may differ in important ways. For example, suppose 1,000 questionnaires are sent

out and only 200 are returned. Of the 200 returned, 130 are in favor of some issue at

hand and 70 are opposed. As such, it appears that most of the people favor the issue.

But 800 surveys were not returned. Further, suppose that 55% of the nonrespondents

are opposed and 45% are in favor. Then, 440 of the nonrespondents are opposed and

360 are in favor. For all 1,000 individuals, we now have 510 opposed and 490 in favor.

What looked like an overwhelming majority in favor with the 200 respondents is now

evenly split among the 1,000 cases.
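The arithmetic in this example is worth making explicit (a short Python sketch using the figures above):

```python
# Observed responses among the 200 returned questionnaires
favor_resp, oppose_resp = 130, 70

# Assumed split among the 800 nonrespondents: 45% in favor, 55% opposed
nonresp = 800
favor_nonresp = round(nonresp * 0.45)   # 360
oppose_nonresp = round(nonresp * 0.55)  # 440

favor_total = favor_resp + favor_nonresp     # 490
oppose_total = oppose_resp + oppose_nonresp  # 510
print(favor_total, oppose_total)  # 490 510
```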

It is sometimes suggested, if one anticipates a low response rate and wants a certain

number of questionnaires returned, that the sample size should be simply increased.

For example, if one wishes 400 returned and a response rate of 20% is anticipated,

send out 2,000. This can be a dangerous and misleading practice. Let us illustrate.

Suppose 2,000 are sent out and 400 are returned. Of these, 300 are in favor and 100 are

opposed. It appears there is an overwhelming majority in favor, and this is true for the

respondents. But 1,600 did NOT respond. Suppose that 60% of the nonrespondents (a

distinct possibility) are opposed and 40% are in favor. Then, 960 of the nonrespondents are opposed and 640 are in favor. Again, what appeared to be an overwhelming

majority in favor is actually a majority opposed (1,060 vs. 940) among ALL participants.

Groves et al. (2009) discuss a variety of methods that can be used to reduce unit nonresponse. In addition, they discuss a weighting approach that can be used to adjust

parameter estimates for such nonresponse when analyzing data with unit nonresponse.

Note that the methods described in section 1.6 for treating missing data, such as multiple imputation, are not relevant for unit nonresponse if there is a complete absence of

data from nonrespondents.

1.8 RESEARCH EXAMPLES FOR SOME ANALYSES CONSIDERED IN THIS TEXT

To give you something of a feel for several of the statistical analyses considered in

succeeding chapters, we present the objectives in doing a multiple logistic regression

analysis, a multivariate analysis of variance and covariance, and an exploratory factor analysis, along with illustrative studies from the literature that use each of these

analyses.

1.8.1 Logistic Regression

In a previous course you have taken, simple linear regression was covered, where a

dependent variable (say chemistry achievement) is predicted from just one predictor,


such as IQ. It is certainly reasonable that other variables would also be related to

chemistry achievement and that we could obtain better prediction by making use of

these variables, such as previous average grade in science courses, attitude toward

education, and math ability. In addition, in some studies, a binary outcome (success

or failure) is of interest, and researchers are interested in variables that are related to

this outcome. When the outcome variable is binary (i.e., pass/fail), though, standard

regression analysis is not appropriate. Instead, in this case, logistic regression is often

used. Thus, the objective in multiple logistic regression (called multiple because we

have multiple predictors) is:

Objective: Predict a binary dependent variable from a set of independent variables.

Example

Reingle Gonzalez and Connell (2014) were interested in determining which of several

predictors were related to medication continuity among a nationally representative

sample of US prisoners. A prisoner was said to have experienced medication continuity if that individual had been taking prescribed medication at intake into prison and

continued to take such medication after admission into prison. The logistic regression analysis indicated that, after controlling for other predictors, prisoners were more

likely to experience medication continuity if they were diagnosed with schizophrenia,

saw a health care professional in prison, were black, were older, and had served less

time than other prisoners.

1.8.2 One-Way Multivariate Analysis of Variance

In univariate analysis of variance, several groups of participants are compared to determine whether mean differences are present for a single dependent variable. But, as was

mentioned earlier in this chapter, any good treatment(s) generally affects participants

in several ways. Hence, it makes sense to collect data from participants on multiple

outcomes and then test whether the groups differ, on average, on the set of outcomes.

This provides for a more complete assessment of the efficacy of the treatments. Thus,

the objective in multivariate analysis of variance is:

Objective: Determine whether mean differences are present across several groups for

a set of dependent variables.

Example

McCrudden, Schraw, and Hartley (2006) conducted an educational experiment to determine if college students exhibited improved learning relative to controls after they had

received general prereading relevance instructions. The researchers were interested in

determining if those receiving such instruction differed from control students for a set

of various learning outcomes, as well as a measure of learning effort (reading time).

The multivariate analysis indicated that the two groups had different means on the

set of outcomes. Follow-up testing revealed that students who received the relevance

instructions had higher mean scores on measures of factual and conceptual learning as


well as the number of claims made in an essay item and the essay item score. The two

groups did not differ, on average, on total reading time, suggesting that the relevance

instructions facilitated learning while not requiring greater effort.

1.8.3 Multivariate Analysis of Covariance

Objective: Determine whether several groups differ on a set of dependent variables

after the posttest means have been adjusted for any initial differences on the covariates

(which are often pretests).

Example

Friedman, Lehrer, and Stevens (1983) examined the effect of two stress management

strategies, directed lecture discussion and self-directed, and the locus of control of

teachers on their scores on the State-Trait Anxiety Inventory and on the Subjective

Stress Scale. Eighty-five teachers were pretested and posttested on these measures,

with the treatment extending to 5 weeks. Teachers who received the stress management programs reduced their stress and anxiety more than those in a control group.

However, teachers who were in a stress management program compatible with their

locus of control (i.e., externals with lectures and internals with the self-directed) did

not reduce stress significantly more than participants in the unmatched stress management groups.

1.8.4 Exploratory Factor Analysis

As you know, a bivariate correlation coefficient describes the degree of linear association between two variables, such as anxiety and performance. However, in many

situations, researchers collect data on many variables, which are correlated, and they

wish to determine if there are fewer constructs or dimensions that underlie responses

to these variables. Finding support for a smaller number of constructs than observed

variables provides for a more parsimonious description of results and may lead to identifying new theoretical constructs that may be the focus of future research. Exploratory

factor analysis is a procedure that can be used to determine the number and nature of

such constructs. Thus, the general objective in exploratory factor analysis is:

Objective: Determine the number and nature of constructs that underlie responses to

a set of observed variables.

Example

Wong, Pituch, and Rochlen (2006) were interested in determining if specific

emotion-related variables were predictive of men’s restrictive emotionality, where this

latter concept refers to having difficulty or fears about expressing or talking about one’s

emotions. As part of this study, the researchers wished to identify whether a smaller

number of constructs underlie responses to the Restrictive Emotionality scale and

eight other measures of emotion. Results from an exploratory factor analysis suggested

that three factors underlie responses to the nine measures. The researchers labeled the


constructs or factors as (1) Difficulty With Emotional Communication (which was

related to restrictive emotionality), (2) Negative Beliefs About Emotional Expression,

and (3) Fear of Emotions, and suggested that these constructs may be useful for future

research on men’s emotional behavior.

1.9 THE SAS AND SPSS STATISTICAL PACKAGES

As you have seen already, SAS and SPSS are selected for use in this text for several

reasons:

1. They are very widely distributed and used.
2. They are easy to use.
3. They do a very wide range of analyses—from simple descriptive statistics to various analyses of variance designs to all kinds of complex multivariate analyses (factor analysis, multivariate analysis of variance, discriminant analysis, logistic multiple regression, etc.).
4. They are well documented, having been in development for decades.

In this edition of the text, we assume that instructors are familiar with one of these two

statistical programs. Thus, we do not cover the basics of working with these programs,

such as reading in a data set and/or entering data. Instead, we show, throughout the

text, how these programs can be used to run the analyses that are discussed in the relevant chapters. The versions of the software programs used in this text are SAS version

9.3 and SPSS version 21. Note that user’s guides for SAS and SPSS are available at

http://support.sas.com/documentation/cdl/en/statug/63962/HTML/default/viewer.htm

#titlepage.htm and http://www-01.ibm.com/support/docview.wss?uid=swg27024972,

respectively.

1.10 SAS AND SPSS SYNTAX

We nearly always use syntax, instead of dialogue boxes, to show how analyses can

be conducted throughout the text. While both SAS and SPSS offer dialogue boxes to

ease obtaining analysis results, we feel that providing syntax is preferred for several

reasons. First, using dialogue boxes for SAS and SPSS would “clutter up” the text

with pages of screenshots that would be needed to show how to conduct analyses. In

contrast, using syntax is a much more efficient way to show how analysis results may

be obtained. Second, with the use of the Internet, there is no longer any need for users

of this text to do much if any typing of commands, which is often dreaded by students.

Instead, you can simply download the syntax and related data sets and use these files

to run analyses that are in the textbook. That is about as easy as it gets! If you wish

to conduct analysis with your own data sets, it is a simple matter of using your own

data files and, for the most part, simply changing the variable names that appear in the

online syntax.


Third, instructors may not wish to devote much time to showing how analyses can

be obtained via statistical software and instead focus on understanding which analysis should be used for a given situation, the specific analysis steps that should be

taken (e.g., search for outliers, assess assumptions, the statistical tests and effect size

measures that are to be used), and how analysis results are to be interpreted. For these

instructors, then, it is a simple matter of ignoring the relatively short sections of the

text that discuss and present software commands. Also, for students, if this is the case

and you still wish to know what specific sections of code are doing, we provide

relevant descriptions along the way to help you out.

Fourth, there may be occasions where you wish to keep a copy of the commands that

implemented your analysis. You could not easily do this if you exclusively use dialogue boxes, but your syntax file will contain the commands you used for analyses.

Fifth, implementing some analysis techniques requires use of commands, as not all

procedures can be obtained with the dialogue boxes. A relevant example occurs with

exploratory factor analysis (Chapter 9), where parallel analysis can be implemented

only with commands. Sixth, as you continue to learn more advanced techniques (such

as multilevel and structural equation modeling), you will encounter other software programs (e.g., Mplus) that use only code to run analyses. Becoming familiar with using

code will better prepare you for this eventuality. Finally, while we anticipate this will not be the case, if SAS or SPSS commands were to change before a subsequent edition of this text appears, we can simply update the syntax file online to handle recent updates to the programming code.

1.11 SAS AND SPSS SYNTAX AND DATA SETS ON THE INTERNET

Syntax and data files needed to replicate the analysis discussed throughout the text

are available on the Internet for both SAS and SPSS (www.psypress.com/books/

details/9780415836661/). You must, of course, open the SAS and SPSS programs on

your computer as well as the respective syntax and data files to run the analysis. If you

do not know how to do this, your instructor can help you.

1.12 SOME ISSUES UNIQUE TO MULTIVARIATE ANALYSIS

Many of the techniques discussed in this text are mathematical maximization procedures, and hence there is great opportunity for capitalization on chance. Often, analysis

results that “look great” on a given sample may not translate well to other samples.

Thus, the results are sample specific and of limited scientific utility. Reliability of

results, then, is a real concern.

The notion of a linear combination of variables is fundamental to all the types of analysis we discuss. A general linear combination for p variables is given by:


y = a1x1 + a2x2 + a3x3 + ⋯ + apxp,

where a1, a2, a3, …, ap are the coefficients for the variables. This definition is abstract;

however, we give some simple examples of linear combinations that you are probably

already familiar with.

Suppose we have a treatment versus control group design with participants pretested

and posttested on some variable. Then, sometimes analysis is done on the difference

scores (gain scores), that is, posttest − pretest. If we denote the pretest variable by x1 and
the posttest variable by x2, then the difference variable y = x2 − x1 is a simple linear
combination where a1 = −1 and a2 = 1.

As another example of a simple linear combination, suppose we wished to sum three

subtest scores on a test (x1, x2, and x3). Then the newly created sum variable y = x1 + x2 + x3
is a linear combination where a1 = a2 = a3 = 1.

Still another example of linear combinations that you may have encountered in an
intermediate statistics course is that of contrasts among means, as when planned comparisons are used. Consider the following four-group ANOVA, where T3 is a combination treatment and T4 is a control group:

T1   T2   T3   T4
μ1   μ2   μ3   μ4

Then the following meaningful contrast

L1 = (μ1 + μ2)/2 − μ3

is a linear combination, where a1 = a2 = 1/2 and a3 = −1, while the following contrast
among means,

L2 = (μ1 + μ2 + μ3)/3 − μ4,

is also a linear combination, where a1 = a2 = a3 = 1/3 and a4 = −1. The notions of
mathematical maximization and linear combinations are combined in many of the
multivariate procedures. For example, in multiple regression we talk about the linear
combination of the predictors that is maximally correlated with the dependent variable, and in principal components analysis the linear combinations of the variables that
account for maximum portions of the total variance are considered.
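Each of these linear combinations is simply a dot product of a coefficient vector with a vector of scores or means. A minimal sketch in Python with NumPy (the numeric values are hypothetical, chosen only to illustrate the arithmetic):

```python
import numpy as np

# Gain score y = x2 - x1: coefficients a = (-1, 1) applied to (pretest, posttest)
a = np.array([-1.0, 1.0])
x = np.array([40.0, 55.0])               # hypothetical pretest and posttest scores
gain = a @ x                             # (-1)(40) + (1)(55) = 15.0

# Sum of three subtests: all coefficients equal to 1
subtests = np.array([10.0, 12.0, 9.0])
total = np.ones(3) @ subtests            # 31.0

# Contrast L1 = (mu1 + mu2)/2 - mu3: coefficients (1/2, 1/2, -1)
means = np.array([20.0, 24.0, 25.0])     # hypothetical group means
L1 = np.array([0.5, 0.5, -1.0]) @ means  # (20 + 24)/2 - 25 = -3.0
print(gain, total, L1)
```

The same dot-product pattern underlies the maximization procedures mentioned above; only the choice of coefficients changes.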

1.13 DATA COLLECTION AND INTEGRITY

Although in this text we minimize discussion of issues related to data collection and

measurement of variables, as this text focuses on analysis, you are forewarned that


these are critical issues. No analysis, no matter how sophisticated, can compensate

for poor data collection and measurement problems. Iverson and Gergen (1997), in
chapter 14 of their text on statistics, hit on some key issues. First, they discussed the

issue of obtaining a random sample, so that one can generalize to some population of

interest. They noted:

We believe that researchers are aware of the need for randomness, but achieving

it is another matter. In many studies, the condition of randomness is almost never

truly satisfied. A majority of psychological studies, for example, rely on college

students for their research results. (Critics have suggested that modern psychology

should be called the psychology of the college sophomore.) Are college students

a random sample of the adult population or even the adolescent population? Not

likely. (p. 627)

Then they turned their attention to problems in survey research, and noted:

In interview studies, for example, differences in responses have been found

depending on whether the interviewer seems to be similar or different from the

respondent in such aspects as gender, ethnicity, and personal preferences. . . .
The place of the interview is also important. . . . Contextual effects cannot be
overcome totally and must be accepted as a facet of the data collection process.
(pp. 628–629)

Another point they mentioned is that what people say and what they do often do not correspond. They noted, “a study that asked about toothbrushing habits found that on the

basis of what people said they did, the toothpaste consumption in this country should

have been three times larger than the amount that is actually sold” (pp. 630–631).

Another problem, endemic in psychology, is using college freshmen or sophomores.

This raises issues of data integrity. A student, visiting Dr. Stevens and expecting advice
on multivariate analysis, had collected data from college freshmen. Dr. Stevens raised

concerns about the integrity of the data, worrying that for most 18- or 19-year-olds

concentration lapses after 5 or 10 minutes. Such lapses would compromise the integrity of the data, which no analysis could remedy. Many freshmen may be thinking about

the next party or social event, and filling out the questionnaire is far from the most

important thing in their minds.

In ending this section, we wish to point out that many mail questionnaires and telephone interviews may be much too long. Mail questionnaires, for the most part, can

be limited to two pages, and telephone interviews to 5 to 10 minutes. If you think

about it, most if not all relevant questions can be asked within 5 minutes. It is always

a balance between information obtained and participant fatigue, but unless participants are very motivated, they may have too many other things going on in their lives

to spend the time filling out a 10-page questionnaire or to spend 20 minutes on the

telephone.


1.14 INTERNAL AND EXTERNAL VALIDITY

Although this is a book on statistical analysis, the design you set up is critical. In a

course on research methods, you learn of internal and external validity, and of the

threats to each. If you have designed an experimental study, then internal validity

refers to the confidence you have that the treatment(s) are responsible for the posttest

group differences. There are various threats to internal validity (e.g., history, maturation, selection, regression toward the mean). In setting up a design, you want to be

confident that the treatment caused the difference, and not one of the threats. Random

assignment of participants to groups controls most of the threats to internal validity,

and for this reason it is often referred to as the “gold standard.” It is the best way of

assuring, within sampling error, that the groups are “equal” on all variables prior to

treatment implementation. However, if there is a variable (we will use gender and two

groups to illustrate) that is related to the dependent variable, then one should stratify

on that variable and then randomly assign within each stratum. For example, if there

were 36 females and 24 males, we would randomly assign 18 females and 12 males to

each group. By doing this, we ensure an equal number of males and females in each

group, rather than leaving this to chance. It is extremely important to understand that

good research design is essential. Light, Singer, and Willett (1990), in the preface of

their book, summed it up best by stating bluntly, “you can’t fix by analysis what you

bungled by design” (p. viii).

Treatment, as stated earlier, is generic and could refer to teaching methods, counseling

methods, drugs, diets, and so on. It is dangerous to assume that the treatment(s) will be

implemented as you planned, and hence it is very important to monitor the treatment

to help ensure that it is implemented as intended. If the planned and implemented treatments differ, it may not be clear what is responsible for the obtained group differences.

Further, posttest differences may not appear if the treatments are not implemented as

intended.

Now let us turn our attention to external validity. External validity refers to the generalizability of results. That is, to what population(s) of participants, settings, and times

can we generalize our results? A good book on external validity is by Shadish, Cook,

and Campbell (2002).

Two excellent books on research design are the aforementioned By Design by Light,

Singer, and Willett (which Dr. Stevens used for many years) and a book by Alan Kazdin entitled Research Design in Clinical Psychology (2003). Both of these books

require, in our opinion, that students have at least two courses in statistics and a course

on research methods.

Before leaving this section, a word of warning on ratings as the dependent variable.

Often you will hear of training raters so that raters agree. This is fine. However, it does

not go far enough. There is still the issue of bias with the raters, and this can be very


problematic if the rater has a vested interest in the outcome. Dr. Stevens has seen too

many dissertations where the person writing it is one of the raters.

1.15 CONFLICT OF INTEREST

Kazdin notes that conflict of interest can occur in many different ways (2003, p. 537).

One way is through a conflict between the scientific responsibility of the investigator(s) and a vested financial interest. We illustrate this with a medical example. In the

introduction to Overdosed America (2004), Abramson gives the following medical

conflict:

The second part, “The Commercialization of American Medicine,” presents a

brief history of the commercial takeover of medical knowledge and the techniques

used to manipulate doctors’ and the public’s understanding of new developments

in medical science and health care. One example of the depth of the problem was

presented in a 2002 article in the Journal of the American Medical Association,

which showed that 59% of the experts who write the clinical guidelines that define

good medical care have direct financial ties to the companies whose products are

being evaluated. (p. xvii)

Kazdin (2003) gives examples that hit closer to home, that is, from psychology and

education:

In psychological research and perhaps specifically in clinical, counseling and educational psychology, it is easy to envision conflict of interest. Researchers may

own stock in companies that in some way are relevant to their research and their

findings. Also, a researcher may serve as a consultant to a company (e.g., that

develops software or psychological tests or that publishes books) and receive

generous consultation fees for serving as a resource for the company. Serving as

someone who gains financially from a company and who conducts research with

products that the company may sell could be a conflict of interest or perceived as

a conflict. (p. 539)

The example we gave earlier of someone serving as a rater for their dissertation is a

potential conflict of interest. That individual has a vested interest in the results, and it
is questionable whether he or she can remain objective in doing the ratings.

1.16 SUMMARY

This chapter reviewed type I error, type II error, and power. It indicated that power

is dependent on the alpha level, sample size, and effect size. The problem of multiple statistical tests appearing in various situations was discussed. The important issue

of statistical versus practical importance was discussed, and some ways of assessing


practical importance (confidence intervals, effect sizes, and measures of association)

were mentioned. The importance of identifying outliers (e.g., participants who are 3 or

more standard deviations from the mean) was emphasized. We also considered at some

length issues related to missing data, discussed factors involved in selecting a missing

data treatment, and illustrated with a small data set how you can select and implement

a missing data treatment. We also showed that conventional missing data treatments

can produce relatively poor parameter estimates with MAR data. We also briefly discussed participant or unit nonresponse. SAS and SPSS syntax files and accompanying

data sets for the examples used in this text are available on the Internet, and these files

allow you to easily replicate analysis results shown in this text. Regarding data integrity, what people say and what they do often do not correspond. The critical importance

of a good design was also emphasized. Finally, it is important to keep in mind that

conflict of interest can undermine the integrity of results.

1.17 EXERCISES

1. Consider a two-group independent-samples t test with a treatment group

(treatment is generic and could be intervention, diet, drug, counseling method,

etc.) and a control group. The null hypothesis is that the population means are

equal. What are the consequences of making a type I error? What are the consequences of making a type II error?

2. This question is concerned with power.

(a) Suppose a clinical study (10 participants in each of two groups) does not

find significance at the .05 level, but there is a medium effect size (which is

judged to be of practical importance). What should the investigator do in a

future replication study?

(b) It has been mentioned that there can be “too much power” in some studies. What is meant by this? Relate this to the “sledgehammer effect” mentioned in the chapter.

3. This question is concerned with multiple statistical tests.

(a) Consider a two-way ANOVA (A × B) with six dependent variables. If a univariate analysis is done at α = .05 on each dependent variable, then how

many tests have been done? What is the Bonferroni upper bound on overall alpha? Compute the tighter bound.

(b) Now consider a three-way ANOVA (A × B × C) with four dependent variables. If a univariate analysis is done at α = .05 on each dependent variable, then how many tests have been done? What is the Bonferroni upper

bound on overall alpha? Compute the tighter upper bound.
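The two bounds asked about in exercise 3 can be computed in a line each. A sketch (the values of alpha and k below are illustrative, not the exercise's answers):

```python
# Bonferroni upper bound on overall alpha for k tests at level alpha: k * alpha.
# The tighter bound is 1 - (1 - alpha)**k, the exact overall type I error rate
# when the k tests are independent.
alpha, k = 0.05, 10                  # illustrative values only
bonferroni = k * alpha               # 0.5
tighter = 1 - (1 - alpha) ** k       # about 0.401, always <= k * alpha
print(bonferroni, round(tighter, 3))
```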

4. This question is concerned with statistical versus practical importance: A survey researcher compares four regions of the country on their attitude toward

education. To this survey, 800 participants respond. Ten items, Likert scaled


from 1 to 5, are used to assess attitude. A higher positive score indicates a
more positive attitude. Group sizes and the means are given next.

      North   South   East   West
N      238     182    130    250
x̄     32.0    33.1   34.0   31.0

An analysis of variance on these four groups yielded F = 5.61, which is significant at the .001 level. Discuss the practical importance issue.

5. This question concerns outliers: Suppose 150 participants are measured on

four variables. Why could a subject not be an outlier on any of the four variables and yet be an outlier when the four variables are considered jointly?

Suppose a Mahalanobis distance is computed for each subject (checking for

multivariate outliers). Why might it be advisable to do each test at the .001

level?

6. Suppose you have a data set where some participants have missing data on

income. Further, suppose you use the methods described in section 1.6.6 to
assess whether the missing data appear to be MCAR and find that missingness
on income is not related to any of your study variables. Does that mean
the data are MCAR? Why or why not?

7. If data are MCAR and a very small proportion of data is missing, would listwise

deletion, maximum likelihood estimation, and multiple imputation all be good

missing data treatments to use? Why or why not?

REFERENCES

Abramson, J. (2004). Overdosed America: The broken promise of American medicine. New York, NY: Harper Collins.
Allison, P. D. (2001). Missing data. Newbury Park, CA: Sage.
Allison, P. D. (2012). Handling missing data by maximum likelihood. Unpublished manuscript. Retrieved from http://www.statisticalhorizons.com/resources/unpublished-papers
Cohen, J. (1988). Statistical power analysis for the social sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Cronbach, L., & Snow, R. (1977). Aptitudes and instructional methods: A handbook for research on interactions. New York, NY: Irvington.
Enders, C. K. (2010). Applied missing data analysis. New York, NY: Guilford Press.
Friedman, G., Lehrer, B., & Stevens, J. (1983). The effectiveness of self-directed and lecture/discussion stress management approaches and the locus of control of teachers. American Educational Research Journal, 20, 563–580.
Grissom, R. J., & Kim, J. J. (2012). Effect sizes for research: Univariate and multivariate applications (2nd ed.). New York, NY: Routledge.
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology (2nd ed.). Hoboken, NJ: Wiley & Sons.
Haase, R., Ellis, M., & Ladany, N. (1989). Multiple criteria for evaluating the magnitude of experimental effects. Journal of Consulting Psychology, 36, 511–516.
Iverson, G., & Gergen, M. (1997). Statistics: A conceptual approach. New York, NY: Springer-Verlag.
Jacobson, N. S. (Ed.). (1988). Defining clinically significant change [Special issue]. Behavioral Assessment, 10(2).
Judd, C. M., McClelland, G. H., & Ryan, C. S. (2009). Data analysis: A model comparison approach (2nd ed.). New York, NY: Routledge.
Kazdin, A. (2003). Research design in clinical psychology. Boston, MA: Allyn & Bacon.
Light, R., Singer, J., & Willett, J. (1990). By design. Cambridge, MA: Harvard University Press.
McCrudden, M. T., Schraw, G., & Hartley, K. (2006). The effect of general relevance instructions on shallow and deeper learning and reading time. Journal of Experimental Education, 74, 291–310. doi:10.3200/JEXE.74.4.291-310
O’Grady, K. (1982). Measures of explained variation: Cautions and limitations. Psychological Bulletin, 92, 766–777.
Reingle Gonzalez, J. M., & Connell, N. M. (2014). Mental health of prisoners: Identifying barriers to mental health treatment and medication continuity. American Journal of Public Health, 104, 2328–2333. doi:10.2105/AJPH.2014.302043
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
Shiffler, R. (1988). Maximum z scores and outliers. American Statistician, 42, 79–80.
Wong, Y. L., Pituch, K. A., & Rochlen, A. R. (2006). Men’s restrictive emotionality: An investigation of associations with other emotion-related constructs, anxiety, and underlying dimensions. Psychology of Men and Masculinity, 7, 113–126. doi:10.1037/1524-9220.7.2.113

Chapter 2

MATRIX ALGEBRA

2.1 INTRODUCTION

This chapter introduces matrices and vectors and covers some of the basic matrix

operations used in multivariate statistics. The matrix operations included are by

no means intended to be exhaustive. Instead, we present some important tools that

will help you better understand multivariate analysis. Understanding matrix algebra

is important, as the values of multivariate test statistics (e.g., Hotelling’s T² and
Wilks’ lambda), effect size measures (D² and multivariate eta square), and outlier

indicators (e.g., the Mahalanobis distance) are obtained with matrix algebra. We

assume here that you have no previous exposure to matrix operations. Also, while it

is helpful, at times, to compute matrix operations by hand (particularly for smaller

matrices), we include SPSS and SAS commands that can be used to perform matrix

operations.

A matrix is simply a rectangular array of elements. The following are examples of

matrices:

1 2 3 4      1  2  1      1 2
4 5 6 9      2  3  5      2 4
             5  6  8
  2×4        1  4 10      2×2

               4×3

The numbers underneath each matrix are the dimensions of the matrix, and indicate

the size of the matrix. The first number is the number of rows and the second number the number of columns. Thus, the first matrix is a 2 × 4 since it has 2 rows and

4 columns.

A familiar matrix in educational research is the score matrix. For example, suppose

we had measured six subjects on three variables. We could represent all the scores as

a matrix:


               Variables
             1    2    3
         1  10    4   18
         2  12    6   21
Subjects 3  13    2   20
         4  16    8   16
         5  12    3   14
         6  15    9   13

This is a 6 × 3 matrix. More generally, we can represent the scores of N participants on

p variables in an N × p matrix as follows:

               Variables
             1     2     3    …    p
         1  x11   x12   x13   …   x1p
Subjects 2  x21   x22   x23   …   x2p
         ⋮
         N  xN1   xN2   xN3   …   xNp

The first subscript indicates the row and the second subscript the column. Thus, x12

represents the score of participant 1 on variable 2 and x2p represents the score of participant 2 on variable p.

The transpose A′ of a matrix A is simply the matrix obtained by interchanging rows

and columns.

Example 2.1

     2 3 6           2 5
A =  5 4 8     A′ =  3 4
                     6 8

The first row of A has become the first column of A′ and the second row of A has
become the second column of A′.

     3  4
     5  6           3 5 1 2
B =  1  3     B′ =  4 6 3 8
     2  8

In general, if a matrix A has dimensions r × s, then the dimensions of the transpose
are s × r.

A matrix with a single row is called a row vector, and a matrix with a single column

is called a column vector. While matrices are written in bold uppercase letters, as we

45

46

â†œæ¸€å±®

â†œæ¸€å±®

MATRIX ALGEBRA

have seen, vectors are always indicated by bold lowercase letters. Also, a row vector is

indicated by a transpose, for example, x′, y′, and so on.

Example 2.2

x′ = (1, 2, 3)     a 1 × 3 row vector

     4
     6
y =  8     a 4 × 1 column vector
     7

A row vector that is of particular interest to us later is the vector of means for a group

of participants on several variables. For example, suppose we have measured 100 participants on the California Psychological Inventory and have obtained their average

scores on five of the subscales. The five means would be represented as the following

row vector x′:

x′ = (24, 31, 22, 27, 30)

The elements on the diagonal running from upper left to lower right are said to be on

the main diagonal of a matrix. A matrix A is said to be symmetric if the elements below
the main diagonal are a mirror reflection of the corresponding elements above the main
diagonal. This is saying a12 = a21, a13 = a31, and a23 = a32 for a 3 × 3 matrix, since these
are the corresponding pairs. This is illustrated by:

4  6  8
6  3  7
8  7  1

The main diagonal contains 4, 3, and 1, and the corresponding pairs are a12 = a21 = 6,
a13 = a31 = 8, and a23 = a32 = 7.

In general, a matrix A is symmetric if aij = aji, i ≠ j, that is, if all corresponding pairs of

elements above and below the main diagonal are equal.

An example of a symmetric matrix that is frequently encountered in statistical work is

that of a correlation matrix. For example, here is the matrix of intercorrelations for four

subtests of the Differential Aptitude Test for boys:

                  VR     NA    Cler.   Mech.
Verbal reas.     1.00    .70    .19     .55
Numerical abil.   .70   1.00    .36     .50
Clerical speed    .19    .36   1.00     .16
Mechan. reas.     .55    .50    .16    1.00

Chapter 2

â†œæ¸€å±®

â†œæ¸€å±®

This matrix is symmetric because, for example, the correlation between VR and NA is

the same as the correlation between NA and VR.
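Symmetry is easy to verify numerically: a correlation matrix must equal its own transpose. A short sketch with the intercorrelations above:

```python
import numpy as np

# Intercorrelations of the four DAT subtests from the text
R = np.array([[1.00, 0.70, 0.19, 0.55],
              [0.70, 1.00, 0.36, 0.50],
              [0.19, 0.36, 1.00, 0.16],
              [0.55, 0.50, 0.16, 1.00]])
print(np.array_equal(R, R.T))   # symmetric: R[i, j] equals R[j, i] everywhere
```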

Two matrices A and B are equal if and only if all corresponding elements are equal.

That is to say, two matrices are equal only if they are identical.

2.2 ADDITION, SUBTRACTION, AND MULTIPLICATION OF A MATRIX BY A SCALAR

You add two matrices A and B by summing the corresponding elements.

Example 2.3

     2 3        6 2
A =  3 4    B = 2 5

        2 + 6   3 + 2     8 5
A + B = 3 + 2   4 + 5  =  5 9

Notice the elements in the (1, 1) positions, that is, 2 and 6, have been added, and so on.

Only matrices of the same dimensions can be added. Thus, addition would not be

defined for these matrices:

2 3 1     1 4
1 4 6  +  5 6     not defined

If two matrices are of the same dimension, you can then subtract one matrix from

another by subtracting corresponding elements.

  A            B           A − B

1 4 2      −1  3 −3       2 1 5
3 2 6   −   1  2  5   =   2 0 1

You multiply a matrix or a vector by a scalar (number) by multiplying each element of

the matrix or vector by the scalar.

Example 2.4

2 (3, 1, 4) = (6, 2, 8)

    2 1     8  4
4   1 5  =  4 20
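Addition, subtraction, and scalar multiplication are exactly the elementwise operations of array arithmetic; a NumPy sketch using the A and B of Example 2.3:

```python
import numpy as np

A = np.array([[2, 3],
              [3, 4]])
B = np.array([[6, 2],
              [2, 5]])
print(A + B)                     # corresponding elements summed
print(A - B)                     # corresponding elements subtracted
print(2 * np.array([3, 1, 4]))   # each element multiplied by the scalar
```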

47

48

â†œæ¸€å±®

â†œæ¸€å±®

MATRIX ALGEBRA

2.2.1 Multiplication of Matrices

There is a restriction as to when two matrices can be multiplied. Consider the product

AB. To multiply these matrices, the number of columns in A must equal the number

of rows in B. For example, if A is 2 × 3, then B must have 3 rows, although B could

have any number of columns. If two matrices can be multiplied they are said to be

conformable. The dimensions of the product matrix, call it C, are simply the number

of rows of A by the number of columns of B. In the earlier example, if B were 3 × 4,

then C would be a 2 × 4 matrix. In general then, if A is an r × s matrix and B is an s × t

matrix, then the dimensions of the product AB are r × t.

Example 2.5

   A           B             C
2 1 3        1  0        c11  c12
4 5 6        2  4    =   c21  c22
            −1  5
 2×3         3×2           2×2

Note first that A and B can be multiplied because the number of columns in A is 3,

which is equal to the number of rows in B. The product matrix C is a 2 × 2, that is,

the outer dimensions of A and B. To obtain the element c11 (in the first row and first

column), we multiply corresponding elements of the first row of A by the elements of

the first column of B. Then, we simply sum the products. To obtain c12 we take the sum

of products of the corresponding elements of the first row of A by the second column

of B. This procedure is presented next for all four elements of C:

c11 = (2, 1, 3) · (1, 2, −1) = 2(1) + 1(2) + 3(−1) = 1
c12 = (2, 1, 3) · (0, 4, 5) = 2(0) + 1(4) + 3(5) = 19
c21 = (4, 5, 6) · (1, 2, −1) = 4(1) + 5(2) + 6(−1) = 8
c22 = (4, 5, 6) · (0, 4, 5) = 4(0) + 5(4) + 6(5) = 50

Chapter 2

â†œæ¸€å±®

â†œæ¸€å±®

Therefore, the product matrix C is:

     1 19
C =  8 50
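The `@` operator carries out exactly this row-by-column procedure; a sketch reproducing Example 2.5:

```python
import numpy as np

A = np.array([[2, 1, 3],
              [4, 5, 6]])    # 2 x 3
B = np.array([[ 1, 0],
              [ 2, 4],
              [-1, 5]])      # 3 x 2: inner dimensions (3) match
C = A @ B                    # result has the outer dimensions, 2 x 2
print(C)
```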

We now multiply two more matrices to illustrate an important property concerning

matrix multiplication.

Example 2.6

 A      B          AB
2 1    3 5     2⋅3 + 1⋅5   2⋅5 + 1⋅6     11 16
1 4    5 6  =  1⋅3 + 4⋅5   1⋅5 + 4⋅6  =  23 29

 B      A          BA
3 5    2 1     3⋅2 + 5⋅1   3⋅1 + 5⋅4     11 23
5 6    1 4  =  5⋅2 + 6⋅1   5⋅1 + 6⋅4  =  16 29

Notice that AB ≠ BA; that is, the order in which matrices are multiplied makes a difference. The mathematical statement of this is to say that multiplication of matrices

is not commutative. Multiplying matrices in two different orders (assuming they are

conformable both ways) in general yields different results.
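A quick numerical check of Example 2.6 confirms that the two orders of multiplication give different matrices:

```python
import numpy as np

A = np.array([[2, 1],
              [1, 4]])
B = np.array([[3, 5],
              [5, 6]])
print(A @ B)   # [[11 16] [23 29]]
print(B @ A)   # [[11 23] [16 29]] -- not the same matrix as AB
```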

Example 2.7

   A         x       Ax
3 1 2        2       18
1 4 5        6   =   41
2 5 2        3       40
(3 × 3)   (3 × 1)  (3 × 1)

Note that multiplying a matrix on the right by a column vector takes the matrix into a
column vector.

        3 1
(2, 5)  1 4  = (11, 22)

Multiplying a matrix on the left by a row vector results in a row vector. If we are

multiplying more than two matrices, then we may group at will. The mathematical

statement of this is that multiplication of matrices is associative. Thus, if we are considering the matrix product ABC, we get the same result if we multiply A and B first

(and then the result of that by C) as if we multiply B and C first (and then the result of

that by A), that is,

ABC = (AB)C = A(BC)
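Associativity holds for any conformable matrices; a sketch checking it on random matrices (any seed would do):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3))
B = rng.normal(size=(3, 4))
C = rng.normal(size=(4, 2))
# Grouping does not matter, up to floating-point rounding:
print(np.allclose((A @ B) @ C, A @ (B @ C)))
```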

49

50

â†œæ¸€å±®

â†œæ¸€å±®

MATRIX ALGEBRA

A matrix product that is of particular interest to us in Chapter 4 is of the following form:

  x′       S       x
(1 × p) (p × p) (p × 1)

Note that this product yields a number; that is, the product matrix is 1 × 1, a single number.
The multivariate test statistic for two groups, Hotelling’s T², is of this form (except for
a scalar constant in front). Other multivariate statistics that are computed
in a similar way are, for example, the Mahalanobis distance (section 3.14.6) and the multivariate
effect size measure D² (section 4.11).

Example 2.8

x′ S x = (x′S) x

        10 3    4
(4, 2)   3 4    2   = (46, 20) (4, 2)′ = 184 + 40 = 224
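In code the whole product collapses to a single number, matching the hand computation of Example 2.8:

```python
import numpy as np

x = np.array([4, 2])
S = np.array([[10, 3],
              [ 3, 4]])
q = x @ S @ x    # x'Sx: first (46, 20), then (46)(4) + (20)(2) = 224
print(q)
```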

2.3 OBTAINING THE MATRIX OF VARIANCES AND COVARIANCES

Now, we show how various matrix operations introduced thus far can be used to obtain
two very important matrices in multivariate statistics, that is, the sums of squares and
cross products (SSCP) matrix (which is computed as part of the Wilks’ lambda test)
and the matrix of variances and covariances for a set of variables (which is computed
as part of Hotelling’s T² test). Consider the following set of data:

x1   x2
 1    1
 3    4
 2    7

x̄1 = 2     x̄2 = 4

First, we form the matrix Xd of deviation scores, that is, how much each score deviates

from the mean on that variable:

      X        X̄
     1 1      2 4      −1 −3
Xd = 3 4  −   2 4  =    1  0
     2 7      2 4       0  3

Next we take the transpose of Xd:

       −1 1 0
X′d =  −3 0 3

Chapter 2

â†œæ¸€å±®

â†œæ¸€å±®

Now we obtain the matrix of sums of squares and cross products (SSCP) as the product of X′d and Xd:

        −1 1 0     −1 −3      ss1   ss12
SSCP =  −3 0 3      1  0   =  ss21  ss2
                    0  3

The diagonal elements are just sums of squares:

ss1 = (−1)² + 1² + 0² = 2
ss2 = (−3)² + 0² + 3² = 18

Notice that these deviation sums of squares are the numerators of the variances for the
variables, because the variance for a variable is

s² = Σi (xi − x̄)² / (n − 1).

The sum of deviation cross products (ss12) for the two variables is

ss12 = ss21 = (−1)(−3) + 1(0) + (0)(3) = 3.

This is just the numerator for the covariance for the two variables, because the definitional formula for covariance is given by:

s12 = [Σi=1..n (xi1 − x̄1)(xi2 − x̄2)] / (n − 1),

where (xi1 − x̄1) is the deviation score for the ith case on x1 and (xi2 − x̄2) is the deviation score for the ith case on x2.

Finally, the matrix of variances and covariances S is obtained from the SSCP matrix
by multiplying by a constant, namely, 1/(n − 1):

S = SSCP / (n − 1)

    1    2  3        1   1.5
S = — ·  3 18   =   1.5   9
    2

where 1 and 9 are the variances for variables 1 and 2, respectively, and 1.5 is the
covariance.

Thus, in obtaining S we have done the following:

1. Represented the scores on several variables as a matrix.
2. Illustrated subtraction of matrices, to get Xd.
3. Illustrated the transpose of a matrix, to get X′d.
4. Illustrated multiplication of matrices, that is, X′d Xd, to get SSCP.
5. Illustrated multiplication of a matrix by a scalar, that is, by 1/(n − 1), to obtain S.
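The five steps just listed fit in a few lines; a sketch that reproduces S for the small data set above and checks it against NumPy's own covariance routine:

```python
import numpy as np

X = np.array([[1.0, 1.0],
              [3.0, 4.0],
              [2.0, 7.0]])    # 3 subjects measured on 2 variables
Xd = X - X.mean(axis=0)       # deviation scores
SSCP = Xd.T @ Xd              # [[2, 3], [3, 18]]
S = SSCP / (X.shape[0] - 1)   # [[1, 1.5], [1.5, 9]]
print(S)
print(np.allclose(S, np.cov(X, rowvar=False)))   # same matrix
```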

2.4 DETERMINANT OF A MATRIX

The determinant of a matrix A, denoted by |A|, is a unique number associated with each
square matrix. There are two interrelated reasons that consideration of determinants is
quite important for multivariate statistical analysis. First, the determinant of a covariance matrix represents the generalized variance for several variables. That is, it is one
way to characterize in a single number how much variability remains for the set of
variables after removing the shared variance among the variables. Second, because the
determinant is a measure of variance for a set of variables, it is intimately involved in
several multivariate test statistics. For example, in Chapter 3 on regression analysis,
we use a test statistic called Wilks’ Λ that involves a ratio of two determinants. Also,
in k-group multivariate analysis of variance (Chapter 5) the following form of Wilks’
Λ (Λ = |W| / |T|) is the most widely used test statistic for determining whether several
groups differ on a set of variables. The W and T matrices are SSCP matrices, which are
multivariate generalizations of SSw (sum of squares within) and SSt (sum of squares total)
from univariate ANOVA, and are defined and described in detail in Chapters 4 and 5.

There is a formal definition for finding the determinant of a matrix, but it is complicated, and we do not present it. There are other ways of finding the determinant, and

a convenient method for smaller matrices (4 × 4 or less) is the method of cofactors.

For a 2 × 2 matrix, the determinant could be evaluated by the method of cofactors;

however, it is evaluated more quickly as simply the difference in the products of the

diagonal elements.

Example 2.9

A = [4  1]
    [1  2]

|A| = 4 ⋅ 2 − 1 ⋅ 1 = 7

In general, for a 2 × 2 matrix A = [a  b; c  d], |A| = ad − bc.

To evaluate the determinant of a 3 × 3 matrix we need the method of cofactors and the

following definition.

Definition: The minor of an element aij is the determinant of the matrix formed by

deleting the ith row and the jth column.

Example 2.10

Consider the following matrix:

        a12 a13
         ↓   ↓
    [1   2   3]
A = [2   2   1]
    [3   1   4]

The minor of a12 (with this element equal to 2 in the matrix) is the determinant of the

matrix [2  1; 3  4] obtained by deleting the first row and the second column. Therefore,

the minor of a12 is |2  1; 3  4| = 8 − 3 = 5.

The minor of a13 (with this element equal to 3) is the determinant of the matrix [2  2; 3  1]

obtained by deleting the first row and the third column. Thus, the minor of a13 is

|2  2; 3  1| = 2 − 6 = −4.

3 1

Definition: The cofactor of aij = (−1)^(i+j) × minor.

Thus, the cofactor of an element will differ at most from its minor by sign. We now

evaluate (−1)^(i+j) for the first three elements of the A matrix given:

a11: (−1)^(1+1) = 1
a12: (−1)^(1+2) = −1
a13: (−1)^(1+3) = 1

Notice that the signs for the elements in the first row alternate, and this pattern continues for all the elements in a 3 × 3 matrix. Thus, when evaluating the determinant for a

3 × 3 matrix it will be convenient to write down the pattern of signs and use it, rather

than figuring out what (−1)^(i+j) is for each element. That pattern of signs is:

+ − +

− + −

+ − +

We denote the matrix of cofactors C as follows:

    [c11  c12  c13]
C = [c21  c22  c23]
    [c31  c32  c33]


Now, the determinant is obtained by expanding along any row or column of the matrix

of cofactors. Thus, for example, the determinant of A would be given by

|A| = a11c11 + a12c12 + a13c13    (expanding along the first row)

or by

|A| = a12c12 + a22c22 + a32c32    (expanding along the second column)

We now find the determinant of A by expanding along the first row:

Element    Minor               Cofactor    Element × cofactor
a11 = 1    |2  1; 1  4| = 7       7            7
a12 = 2    |2  1; 3  4| = 5      −5          −10
a13 = 3    |2  2; 3  1| = −4     −4          −12

Therefore, |A| = 7 + (−10) + (−12) = −15.
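The cofactor expansion can be checked numerically. The text uses SPSS MATRIX and SAS IML; as a sketch, the same computation in NumPy might look as follows (the helper name det_cofactor is ours, not from the text):

```python
import numpy as np

# Determinant by cofactor expansion along the first row, as in the worked
# example above. det_cofactor is a hypothetical helper for illustration.
def det_cofactor(M):
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 1 and column j+1 (0-based row 0, column j)
        minor = np.delete(np.delete(M, 0, axis=0), j, axis=1)
        # Cofactor sign (-1)^(1 + (j+1)) reduces to (-1)^j in 0-based indexing
        total += (-1) ** j * M[0, j] * det_cofactor(minor)
    return total

A = np.array([[1, 2, 3],
              [2, 2, 1],
              [3, 1, 4]])
print(det_cofactor(A))    # -15, matching the expansion above
print(np.linalg.det(A))   # same value from the built-in routine
```

For matrices larger than about 4 × 4 the cofactor route is tedious (and slow); library routines such as np.linalg.det use an LU decomposition instead.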

For a 4 × 4 matrix the pattern of signs is given by:

+ − + −

− + − +

+ − + −

− + − +

and the determinant is again evaluated by expanding along any row or column. However, in this case the minors are determinants of 3 × 3 matrices, and the procedure

becomes quite tedious. Thus, we do not pursue it any further here.

In the example in 2.3, we obtained the following covariance matrix:

S = [1.0  1.5]
    [1.5  9.0]

We also indicated at the beginning of this section that the determinant of S can be

interpreted as the generalized variance for a set of variables.


Now, the generalized variance for the two-variable example is just |S| = (1 × 9) − (1.5 × 1.5) = 6.75. Because for this example there is a nonzero covariance, the generalized variance is reduced by this. That is, some of the variance of variable 2 is shared

by variable 1. On the other hand, if the variables were uncorrelated (covariance = 0),

then we would expect the generalized variance to be larger (because there is no shared

variance between variables), and this is indeed the case:

|S| = |1  0; 0  9| = 9

Thus, in representing the variance for a set of variables this measure takes into account

all the variances and covariances.
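The comparison just made is easy to reproduce. Here is a minimal NumPy sketch (an alternative to the SPSS/SAS procedures used later in the chapter) contrasting the generalized variance with and without the covariance:

```python
import numpy as np

# Generalized variance = determinant of the covariance matrix.
S_corr = np.array([[1.0, 1.5],
                   [1.5, 9.0]])    # covariance matrix from section 2.3
S_uncorr = np.array([[1.0, 0.0],
                     [0.0, 9.0]])  # same variances, zero covariance

print(np.linalg.det(S_corr))    # 6.75, reduced by the shared variance
print(np.linalg.det(S_uncorr))  # 9.0, no shared variance
```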

In addition, the meaning of the generalized variance is easy to see when we consider

the determinant of a 2 × 2 correlation matrix. Given the following correlation matrix

R = [1    r12]
    [r21  1  ],

the determinant of R is |R| = 1 − r². Of course, since we know that r² can be interpreted as the proportion of variation shared, or in common, between variables, the

determinant of this matrix represents the variation remaining in this pair of variables

after removing the shared variation among the variables. This concept also applies to

larger matrices where the generalized variance represents the variation remaining in

the set of variables after we account for the associations among the variables. While

there are other ways to describe the variance of a set of variables, this conceptualization appears in the commonly used Wilks’ Λ test statistic.

2.5 INVERSE OF A MATRIX

The inverse of a square matrix A is a matrix A−1 that satisfies the following equation:

AA−1 = A−1A = In,

where In is the identity matrix of order n. The identity matrix is simply a matrix with

1s on the main diagonal and 0s elsewhere.

I2 = [1  0]      I3 = [1  0  0]
     [0  1]           [0  1  0]
                      [0  0  1]

Why is finding inverses important in statistical work? Because we do not literally have

division with matrices, multiplying one matrix by the inverse of another is the analogue of division for numbers. This is why finding an inverse is so important. An analogy with univariate ANOVA may be helpful here. In univariate ANOVA, recall that

the test statistic F = MSb / MSw = MSb(MSw)−1, that is, a ratio of between- to within-


variability. The analogue of this test statistic in multivariate analysis of variance is

BW−1, where B is a matrix that is the multivariate generalization of SSb (sum of squares

between); that is, it is a measure of how differential the effects of treatments have been

on the set of dependent variables. In the multivariate case, we also want to “divide” the

between-variability by the within-variability, but we don’t have division per se. However, multiplying the B matrix by W−1 accomplishes this for us, because, again, multiplying a matrix by an inverse of a matrix is the analogue of division. Also, as shown in

the next chapter, to obtain the regression coefficients for a multiple regression analysis,

it is necessary to find the inverse of a matrix product involving the predictors.

2.5.1 Procedure for Finding the Inverse of a Matrix

1. Replace each element of the matrix A by its minor.

2. Form the matrix of cofactors, attaching the appropriate signs as illustrated later.

3. Take the transpose of the matrix of cofactors, forming what is called the adjoint.

4. Divide each element of the adjoint by the determinant of A.

For symmetric matrices (with which this text deals almost exclusively), taking the

transpose is not necessary, and hence, when finding the inverse of a symmetric matrix,

Step 3 is omitted.

We apply this procedure first to the simplest case, finding the inverse of a 2 × 2 matrix.

Example 2.11

D = [4  2]
    [2  6]

The minor of 4 is the determinant of the matrix obtained by deleting the first row and

the first column. What is left is simply the number 6, and the determinant of a number

is that number. Thus we obtain the following matrix of minors:

[6  2]
[2  4]

Now for a 2 × 2 matrix we attach the proper signs by multiplying each diagonal element

by 1 and each off-diagonal element by −1, yielding the matrix of cofactors, which is

[ 6  −2]
[−2   4].

The determinant of D is |D| = 4(6) − 2(2) = 20.

Finally then, the inverse of D is obtained by dividing the matrix of cofactors by the

determinant, obtaining

       [ 6/20  −2/20]
D−1 =  [−2/20   4/20]


To check that D−1 is indeed the inverse of D, note that

D D−1 = [4  2] [ 6/20  −2/20] = [1  0] = I2,    and D−1 D = [ 6/20  −2/20] [4  2] = [1  0] = I2.
        [2  6] [−2/20   4/20]   [0  1]                      [−2/20   4/20] [2  6]   [0  1]
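The adjoint route for this 2 × 2 case can be sketched in NumPy as follows (NumPy is our substitute for the SPSS/SAS matrix procedures the text uses):

```python
import numpy as np

# Inverse of the symmetric 2 x 2 matrix D by the cofactor/adjoint route of
# section 2.5.1 (Step 3, the transpose, is skipped because D is symmetric).
D = np.array([[4.0, 2.0],
              [2.0, 6.0]])
cofactors = np.array([[ 6.0, -2.0],
                      [-2.0,  4.0]])   # minors with the +/- sign pattern attached
D_inv = cofactors / np.linalg.det(D)   # divide each element by |D| = 20

print(D @ D_inv)   # identity matrix, confirming the inverse
```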

Example 2.12

Let us find the inverse for the 3 × 3 A matrix that we found the determinant for in the

previous section. Because A is a symmetric matrix, it is not necessary to find nine

minors, but only six, since the inverse of a symmetric matrix is symmetric. Thus we

just find the minors for the elements on and above the main diagonal.

    [1  2  3]
A = [2  2  1]    Recall again that the minor of an element is the determinant of the
    [3  1  4]    matrix obtained by deleting the row and column that the element is in.

Element    Matrix         Minor
a11 = 1    [2  1; 1  4]   2 × 4 − 1 × 1 = 7
a12 = 2    [2  1; 3  4]   2 × 4 − 1 × 3 = 5
a13 = 3    [2  2; 3  1]   2 × 1 − 2 × 3 = −4
a22 = 2    [1  3; 3  4]   1 × 4 − 3 × 3 = −5
a23 = 1    [1  2; 3  1]   1 × 1 − 2 × 3 = −5
a33 = 4    [1  2; 2  2]   1 × 2 − 2 × 2 = −2

Therefore, the matrix of minors for A is

[ 7   5  −4]
[ 5  −5  −5].
[−4  −5  −2]

Recall that the pattern of signs is


+ − +

− + − .

+ − +

Thus, attaching the appropriate sign to each element in the matrix of minors and completing Step 2 of finding the inverse we obtain:

[ 7  −5  −4]
[−5  −5   5].
[−4   5  −2]

Now the determinant of A was found to be −15. Therefore, to complete the final step

in finding the inverse we simply divide the preceding matrix by −15, and the inverse

of A is

       [−7/15    1/3    4/15]
A−1 =  [  1/3    1/3   −1/3 ].
       [ 4/15   −1/3    2/15]

Again, we can check that this is indeed the inverse by multiplying it by A to see if the

result is the identity matrix.
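That check is quick to carry out numerically; a minimal NumPy sketch:

```python
import numpy as np

# Check the 3 x 3 inverse from Example 2.12: A times A^-1 should be I.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 2.0, 1.0],
              [3.0, 1.0, 4.0]])
A_inv = np.array([[-7/15,  1/3,  4/15],
                  [ 1/3,   1/3, -1/3 ],
                  [ 4/15, -1/3,  2/15]])

print(np.allclose(A @ A_inv, np.eye(3)))   # True
```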

Note that for the inverse of a matrix to exist, the determinant of the matrix must not

be equal to 0. This is because in obtaining the inverse each element is divided by the

determinant, and division by 0 is not defined. If the determinant of a matrix B = 0, we

say B is singular. If |B| ≠ 0, we say B is nonsingular, and its inverse does exist.
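A quick numerical illustration of singularity (the matrix B below is our example, not one from the text): when one row is a multiple of another, the determinant is 0 and inversion fails.

```python
import numpy as np

# Row 2 is twice row 1, so |B| = 0 and B has no inverse.
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(B))   # 0.0

try:
    np.linalg.inv(B)
    invertible = True
except np.linalg.LinAlgError:
    invertible = False
print(invertible)   # False: B is singular
```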

2.6 SPSS MATRIX PROCEDURE

The SPSS matrix procedure was developed at the University of Wisconsin at Madison.

It is described in some detail in SPSS Advanced Statistics 7.5. Various matrix operations can be performed using the procedure, including multiplying matrices, finding

the determinant of a matrix, finding the inverse of a matrix, and so on. To indicate a

matrix you must: (1) enclose the matrix in braces, (2) separate the elements of each

row by commas, and (3) separate the rows by semicolons.

The matrix procedure must be run from the syntax window. To get to the syntax window, click on FILE, then click on NEW, and finally click on SYNTAX. Every matrix

program must begin with MATRIX. and end with END MATRIX. The periods are crucial, as each command must end with a period. To create a matrix A, use the following:

COMPUTE A = {2, 4, 1; 3, −2, 5}.

Chapter 2

â†œæ¸€å±®

â†œæ¸€å±®

Note that this is a 2 × 3 matrix. The use of the COMPUTE command to create a matrix

is not intuitive. However, at present, that is the way the procedure is set up. In the next

program we create matrices A, B, and E, multiply A and B, find the determinant and

inverse for E, and print out all matrices.

MATRIX.
COMPUTE A = {2, 4, 1; 3, −2, 5}.
COMPUTE B = {1, 2; 2, 1; 3, 4}.
COMPUTE C = A*B.
COMPUTE E = {1, −1, 2; −1, 3, 1; 2, 1, 10}.
COMPUTE DETE = DET(E).
COMPUTE EINV = INV(E).
PRINT A.
PRINT B.
PRINT C.
PRINT E.
PRINT DETE.
PRINT EINV.
END MATRIX.

The A, B, and E matrices are taken from the exercises at the end of the chapter. Note in

the preceding program that all commands in SPSS must end with a period. Also, note

that each matrix is enclosed in braces, and rows are separated by semicolons. Finally,

a separate PRINT command is required to print out each matrix.

To run (or EXECUTE) this program, click on RUN and then click on ALL from the

dropdown menu. When you do, the output shown in Table 2.1 is obtained.

Table 2.1: Output From SPSS Matrix Procedure

Run MATRIX procedure:

A
   2   4   1
   3  −2   5

B
   1   2
   2   1
   3   4

C
  13  12
  14  24

E
   1  −1   2
  −1   3   1
   2   1  10

DETE
   3

EINV
   9.666666667   4.000000000  −2.333333333
   4.000000000   2.000000000  −1.000000000
  −2.333333333  −1.000000000    .666666667

----End Matrix----
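The same operations can be reproduced outside SPSS or SAS; here is an equivalent NumPy sketch (NumPy is our substitute and is not covered in the text):

```python
import numpy as np

# The same matrices and operations as the SPSS MATRIX program above.
A = np.array([[2,  4, 1],
              [3, -2, 5]])
B = np.array([[1, 2],
              [2, 1],
              [3, 4]])
C = A @ B                    # matrix product (2 x 3 times 3 x 2 gives 2 x 2)
E = np.array([[ 1, -1,  2],
              [-1,  3,  1],
              [ 2,  1, 10]])
dete = np.linalg.det(E)      # determinant of E
einv = np.linalg.inv(E)      # inverse of E

print(C)       # [[13 12] [14 24]], matching Table 2.1
print(dete)    # 3 (up to rounding)
print(einv)
```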

2.7 SAS IML PROCEDURE

The SAS IML procedure replaced the older PROC MATRIX procedure that was used

in version 5 of SAS. SAS IML is documented thoroughly in SAS/IML: Usage and Reference, Version 6 (1990). There are several features that are very nice about SAS IML,

and these are described on pages 2 and 3 of the manual. We mention just three features:

1. SAS/IML is a programming language.

2. SAS/IML software uses operators that apply to entire matrices.

3. SAS/IML software is interactive.

IML is an acronym for Interactive Matrix Language. You can execute a command as

soon as you enter it. We do not illustrate this feature, as we wish to compare it with

the SPSS Matrix procedure. So, we collect the SAS IML commands in a file and run

it that way.

To indicate a matrix, you (1) enclose the matrix in braces, (2) separate the elements of

each row by a blank(s), and (3) separate the rows by commas.

To illustrate use of the SAS IML procedure, we create the same matrices as we did

with the SPSS matrix procedure and do the same operations and print all matrices. The

syntax is shown here, and the output appears in TableÂ€2.2.

proc iml;
a = {2 4 1, 3 -2 5};
b = {1 2, 2 1, 3 4};
c = a*b;
e = {1 -1 2, -1 3 1, 2 1 10};
dete = det(e);
einv = inv(e);
print a b c e dete einv;


Table 2.2: Output From SAS IML Procedure

A                B         C
2   4   1        1   2     13  12
3  −2   5        2   1     14  24
                 3   4

E                DETE      EINV
 1  −1   2       3          9.6666667    4           −2.333333
−1   3   1                  4            2           −1
 2   1  10                 −2.333333    −1            0.6666667

2.8 SUMMARY

Matrix algebra is important in multivariate analysis for several reasons. For example,

data come in the form of a matrix when N participants are measured on p variables,

multivariate test statistics and effect size measures are computed using matrix operations, and statistics describing multivariate outliers also use matrix algebra. Although

addition and subtraction of matrices is easy, multiplication of matrices is more difficult and nonintuitive. Finding the determinant and inverse for 3 × 3 or larger square

matrices is quite tedious. Finding the determinant is important because the determinant

of a covariance matrix represents the generalized variance for a set of variables, that

is, the variance that remains in a set of variables after accounting for the associations

among the variables. Finding the inverse of a matrix is important since multiplying a

matrix by the inverse of a matrix is the analogue of division for numbers. Fortunately,

SPSS MATRIX and SAS IML will do various matrix operations, including finding the

determinant and inverse.

2.9 EXERCISES

1. Given:

A = [2   4  1]     B = [1  2]     C = [1  3  5]
    [3  −2  5]         [2  1]         [6  2  1]
                       [3  4]

D = [4  2]     E = [ 1  −1   2]     X = [1  2]
    [2  6]         [−1   3   1]         [3  1]
                   [ 2   1  10]         [4  6]
                                        [5  7]

u′ = (1, 3),    v = [2]
                    [7]


Find, where meaningful, each of the following:

(a) A + C
(b) A + B
(c) AB
(d) AC
(e) u′Du
(f) u′v
(g) (A + C)′
(h) 3C
(i) |D|
(j) D−1
(k) |E|
(l) E−1
(m) u′D−1u
(n) BA (compare this result with [c])
(o) X′X

2. In Chapter 3, we are interested in predicting each person's score on a dependent variable y from a linear combination of their scores on several predictors
(xi's). If there were two predictors, then the equations for N cases would look
like this:

y1 = e1 + b0 + b1x11 + b2x12
y2 = e2 + b0 + b1x21 + b2x22
y3 = e3 + b0 + b1x31 + b2x32
⋮
yN = eN + b0 + b1xN1 + b2xN2

Note: Each ei represents the portion of y not predicted by the xs, and each b
is a regression coefficient. Express this set of prediction equations as a single matrix equation. Hint: The right-hand portion of the equation will be of
the form:

vector + matrix times vector

3. Using the approach detailed in section 2.3, find the matrix of variances and
covariances for the following data:

x1    x2    x3
 4     3    10
 5     2    11
 8     6    15
 9     6     9
10     8     5


4. Consider the following two situations:

(a) s1 = 10, s2 = 7, r12 = .80

(b) s1 = 9, s2 = 6, r12 = .20

Compute the variance-covariance matrix for (a) and (b) and compute the determinant of each variance-covariance matrix. For which situation is the generalized variance larger? Does this surprise you?

5. Calculate the determinant for

    [9  2  1]
A = [2  4  5].
    [1  5  3]

Could A be a covariance matrix for a set of variables? Explain.

6. Using SPSS MATRIX or SAS IML, find the inverse for the following 4 × 4
symmetric matrix:

[6  8  7  6]
[8  9  2  3]
[7  2  5  2]
[6  3  2  1]

7. Run the following SPSS MATRIX program and show that the output yields the

matrix, determinant, and inverse.

MATRIX.
COMPUTE A={6, 2, 4; 2, 3, 1; 4, 1, 5}.
COMPUTE DETA=DET(A).
COMPUTE AINV=INV(A).
PRINT A.
PRINT DETA.
PRINT AINV.
END MATRIX.

8. Consider the following two matrices:

A = [2  3]     B = [1  0]
    [3  6]         [0  1]

Calculate the following products: AB and BA.

What do you get in each case? Do you see now why B is called the identity

matrix?


9. Consider the following covariance matrix:

    [4  3  1]
S = [3  9  2]
    [1  2  1]

(a) Use the SPSS MATRIX procedure to print S and find and print the determinant.

(b) Statistically, what does the determinant represent?

REFERENCES

SAS Institute. (1990). SAS/IML: Usage and Reference, Version 6. Cary, NC: Author.

SPSS, Inc. (1997). SPSS Advanced Statistics 7.5 (pp. 469–512). Chicago: Author.

Chapter 3

MULTIPLE REGRESSION FOR

PREDICTION

3.1 INTRODUCTION

In multiple regression we are interested in predicting a dependent variable from a set

of predictors. In a previous course in statistics, you probably studied simple regression, predicting a dependent variable from a single predictor. An example would be

predicting college GPA from high school GPA. Because human behavior is complex

and influenced by many factors, such single-predictor studies are necessarily limited

in their predictive power. For example, in a college GPA study, we are able to improve

prediction of college GPA by considering other predictors such as scores on standardized tests (verbal, quantitative), and some noncognitive variables, such as study habits

and attitude toward education. That is, we look to other predictors (often test scores)

that tap other aspects of criterion behavior.

Consider two other examples of multiple regression studies:

1. Feshbach, Adelman, and Fuller (1977) conducted a study of 850 middle-class

children. The children were measured in kindergarten on a battery of variables: the Wechsler Preschool and Primary Scale of Intelligence (WPPSI), the

deHirsch–Jansky Index (assessing various linguistic and perceptual motor skills),

the Bender Motor Gestalt, and a Student Rating Scale developed by the authors

that measures various cognitive and affective behaviors and skills. These measures were used to predict reading achievement for these same children in grades 1,

2, and 3.

2. Crystal (1988) attempted to predict chief executive officer (CEO) pay for the top

100 of last year’s Fortune 500 and the 100 top entries from last year’s Service 500.

He used the following predictors: company size, company performance, company

risk, government regulation, tenure, location, directors, ownership, and age. He

found that only about 39% of the variance in CEO pay can be accounted for by

these factors.

In modeling the relationship between y and the xs, we are assuming that a linear model

is appropriate. Of course, it is possible that a more complex model (curvilinear) may


be necessary to predict y accurately. Polynomial regression may be appropriate, or if

there is nonlinearity in the parameters, then nonlinear procedures in SPSS (e.g., NLR)

or SAS can be used to fit a model.

This is a long chapter with many sections, not all of which are equally important.

The three most fundamental sections are on model selection (3.8), checking assumptions underlying the linear regression model (3.10), and model validation (3.11).

The other sections should be thought of as supportive of these. We discuss several

ways of selecting a “good” set of predictors, and illustrate these with two computer

examples.

A theme throughout the book is determining whether the assumptions underlying a

given analysis are tenable. This chapter initiates that theme, and we can see that there

are various graphical plots available for assessing assumptions underlying the regression model. Another very important theme throughout this book is the mathematical

maximization nature of many advanced statistical procedures, and the concomitant

possibility of results looking very good on the sample on which they were derived

(because of capitalization on chance), but not generalizing to a population. Thus, it

becomes extremely important to validate the results on an independent sample(s) of

data, or at least to obtain an estimate of the generalizability of the results. Section 3.11

illustrates both of the aforementioned ways of checking the validity of a given regression model.

A final pedagogical point on reading this chapter: Section 3.14 deals with outliers and

influential data points. We already indicated in Chapter 1, with several examples, the

dramatic effect an outlier(s) can have on the results of any statistical analysis. Section 3.14 is rather lengthy, however, and the applied researcher may not want to plow

through all the details. Recognizing this, we begin that section with a brief overview

discussion of statistics for assessing outliers and influential data points, with prescriptive advice on how to flag such cases from computer output.

We wish to emphasize that our focus in this chapter is on the use of multiple regression for prediction. Another broad related area is the use of regression for explanation.

Cohen, Cohen, West, and Aiken (2003) and Pedhazur (1982) have excellent, extended

discussions of the use of regression for explanation. Note that Chapter 16 in this text

includes the use of structural equation models, which is a more comprehensive analysis approach for explanation.

There have been innumerable books written on regression analysis. In our opinion,

books by Cohen et al. (2003), Pedhazur (1982), Myers (1990), Weisberg (1985), Belsley, Kuh, and Welsch (1980), and Draper and Smith (1981) are worthy of special attention. The first two books are written for individuals in the social sciences and have very

good narrative discussions. The Myers and Weisberg books are excellent in terms of

the modern approach to regression analysis, and have especially good treatments of

Chapter 3

â†œæ¸€å±®

â†œæ¸€å±®

regression diagnostics. The Draper and Smith book is one of the classic texts, generally used for a more mathematical treatment, with most of its examples geared toward

the physical sciences.

We start this chapter with a brief discussion of simple regression, which most readers

likely encountered in a previous statistics course.

3.2 SIMPLE REGRESSION

For one predictor, the simple linear regression modelÂ€is

yi = β0 + β1xi + ei,    i = 1, 2, …, n,

where β0 and β1 are parameters to be estimated. The ei are the errors of prediction,

and are assumed to be independent, with constant variance and normally distributed

with a mean of 0. If these assumptions are valid for a given set of data, then the sample

prediction errors (e^ i ) should have similar properties. For example, the e^ i should be

normally distributed, or at least approximately normally distributed. This is considered

further in sectionÂ€3.9. The e^ i are called the residuals. How do we estimate the parameters? The least squares criterion is used; that is, the sum of the squared estimated errors

of prediction is minimized:

ê1² + ê2² + ⋯ + ên² = Σ(i = 1 to n) êi² = min

Of course, êi = yi − ŷi, where yi is the actual score on the dependent variable and ŷi

is the estimated score for the ith subject.

The scores for each subject (xi, yi) define a point in the plane. What the least squares

criterion does is find the line that best fits the points. Geometrically, this corresponds to

minimizing the sum of the squared vertical distances (êi²) of each person's score from

their estimated y score. This is illustrated in Figure 3.1.
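The least squares fit is easy to carry out numerically. The sketch below uses NumPy on a tiny made-up data set (the x, y values are ours, chosen only to illustrate; they are not from the text):

```python
import numpy as np

# Least squares line for a small illustrative data set.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 2.5, 4.0, 4.5, 6.0])

# Design matrix: a column of 1s for the intercept, then the predictor.
X = np.column_stack([np.ones_like(x), x])
(b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - (b0 + b1 * x)

print(b0, b1)              # 0.8 and 1.0 for these data
print(np.sum(resid**2))    # 0.3, the minimum attainable sum of squared residuals
```

Any other choice of intercept and slope would produce a larger sum of squared vertical distances, which is exactly the criterion pictured in Figure 3.1.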

Example 3.1

To illustrate simple regression we use part of the Sesame Street database from Glasnapp

and Poggio (1985), who present data on many variables, including 12 background variables and 8 achievement variables for 240 participants. Sesame Street was developed

as a television series aimed mainly at teaching preschool skills to 3- to 5-year-old

children. Data were collected on many achievement variables both before (pretest) and

after (posttest) viewing of the series. We consider here only one of the achievement

variables, knowledge of body parts.

SPSS syntax for running the simple regression is given in Table 3.1, along with

annotation. Figure 3.2 presents a scatterplot of the variables, along with selected


Figure 3.1: Geometrical representation of least squares criterion. [Figure: data points scattered about a fitted line, with the annotation: Least squares minimizes the sum of these squared vertical distances, i.e., it finds the line that best fits the points.]

Table 3.1: SPSS Syntax for Simple Regression

TITLE 'SIMPLE LINEAR REGRESSION ON SESAME DATA.'
DATA LIST FREE/PREBODY POSTBODY.
BEGIN DATA.
DATA LINES
END DATA.
LIST.
(1) REGRESSION DESCRIPTIVES = DEFAULT/
    VARIABLES = PREBODY POSTBODY/
    DEPENDENT = POSTBODY/
    METHOD = ENTER/
(2) SCATTERPLOT (POSTBODY, PREBODY)/
(3) RESIDUALS = HISTOGRAM(ZRESID)/.

(1) The DESCRIPTIVES = DEFAULT subcommand yields the means, standard deviations, and the correlation matrix for the variables.
(2) This SCATTERPLOT subcommand yields a scatterplot for the variables.
(3) This RESIDUALS subcommand yields a histogram of the standardized residuals.

output. Inspecting the scatterplot suggests there is a positive association between

the variables, reflecting a correlation of .65. Note that in the Model Summary table

of Figure 3.2, the multiple correlation (R) is also .65, since there is only one predictor in the equation. In the Coefficients table of Figure 3.2, the coefficients are

provided for the regression equation. The equation for the predicted outcome scores

is then POSTBODY = 13.475 + .551 PREBODY. Table 3.2 shows a histogram

of the standardized residuals, which suggests a fair approximation to a normal

distribution.


Figure 3.2: Scatterplot and selected output for simple linear regression.

[Scatterplot — Dependent Variable: POSTBODY; POSTBODY (5 to 35) plotted against PREBODY (10 to 35)]

Variables Entered/Removed(a)
Model   Variables Entered   Variables Removed   Method
1       PREBODY(b)                              Enter
a. Dependent Variable: POSTBODY
b. All requested variables entered.

Model Summary(b)
Model   R         R Square   Adjusted R Square   Std. Error of the Estimate
1       0.650(a)  0.423      0.421               4.119
a. Predictors: (Constant), PREBODY

Coefficients(a)
                   Unstandardized Coefficients   Standardized Coefficients
Model              B         Std. Error          Beta                        t        Sig.
1   (Constant)     13.475    0.931                                           14.473   0.000
    PREBODY        0.551     0.042               0.650                       13.211   0.000
a. Dependent Variable: POSTBODY

3.3 MULTIPLE REGRESSION FOR TWO PREDICTORS: MATRIX

FORMULATION

The linear model for two predictors is a simple extension of what we had for one

predictor:

yi = β0 + β1x1 + β2x2 + ei,

where β0 (the regression constant), β1, and β2 are the parameters to be estimated,

and e is error of prediction. We consider a small data set to illustrate the estimation

process.


Table 3.2: Histogram of Standardized Residuals

[Histogram — Dependent Variable: POSTBODY; Regression Standardized Residual (−4 to 2) on the x-axis, Frequency on the y-axis; Mean = 4.16E-16, Std. Dev. = 0.996, N = 240]

y    x1   x2
3    2    1
2    3    5
4    5    3
5    7    6
8    8    7

We model each subject's y score as a linear function of the βs:

y1 = 3 = 1 × β0 + 2 × β1 + 1 × β2 + e1
y2 = 2 = 1 × β0 + 3 × β1 + 5 × β2 + e2
y3 = 4 = 1 × β0 + 5 × β1 + 3 × β2 + e3
y4 = 5 = 1 × β0 + 7 × β1 + 6 × β2 + e4
y5 = 8 = 1 × β0 + 8 × β1 + 7 × β2 + e5

This series of equations can be expressed as a single matrix equation:

        y       X              β        e
      [3]     [1  2  1]               [e1]
      [2]     [1  3  5]    [β0]       [e2]
  y = [4]  =  [1  5  3]    [β1]   +   [e3]
      [5]     [1  7  6]    [β2]       [e4]
      [8]     [1  8  7]               [e5]


It is pretty clear that the y scores and the e define column vectors, while not so clear is

how the boxed-in area can be represented as the product of two matrices, Xβ.

The first column of 1s is used to obtain the regression constant. The remaining two

columns contain the scores for the subjects on the two predictors. Thus, the classic

matrix equation for multiple regression is:

y = Xβ + e    (1)

Now, it can be shown using the calculus that the least square estimates of the βs are

given by:

β̂ = (X′X)−1 X′y    (2)

Thus, for our data the estimated regression coefficients would be

β̂ = (X′X)−1 X′y,

              [1  1  1  1  1]
where  X′ =   [2  3  5  7  8]    and y = (3, 2, 4, 5, 8)′.
              [1  5  3  6  7]

Let us do this in pieces. First,

        [ 5   25   22]                [ 22]
X′X =   [25  151  130]    and X′y =   [131].
        [22  130  120]                [111]

Furthermore, you should show that

                     [1220  −140   −72]
(X′X)−1 = (1/1016)   [−140   116  −100],
                     [ −72  −100   130]

130

where 1016 is the determinant of X′X. Thus, the estimated regression coefficients are

givenÂ€by

1220 −140 −72 22 .50

1

β=

−140 116 −100 131 = 1 .

1016

−72 −100 130 111 −.25

^

Therefore, the regression (prediction) equation is


ŷi = .50 + x1 − .25x2.

To illustrate the use of this equation, we find the predicted score for case 3 and the

residual for that case:

ŷ3 = .5 + 5 − .25(3) = 4.75

ê3 = y3 − ŷ3 = 4 − 4.75 = −.75

Note that if you find yourself struggling with this matrix presentation, be assured that

you can still learn to use multiple regression properly and understand regression results.
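The matrix computation above can be verified step by step in NumPy (our substitute for the SPSS/SAS procedures of Chapter 2):

```python
import numpy as np

# beta-hat = (X'X)^-1 X'y for the small two-predictor data set of section 3.3.
y = np.array([3.0, 2.0, 4.0, 5.0, 8.0])
X = np.array([[1, 2, 1],
              [1, 3, 5],
              [1, 5, 3],
              [1, 7, 6],
              [1, 8, 7]], dtype=float)

XtX = X.T @ X
beta = np.linalg.inv(XtX) @ X.T @ y

print(np.linalg.det(XtX))   # 1016, the determinant used above
print(beta)                 # [0.5, 1.0, -0.25]
print(X[2] @ beta)          # predicted score for case 3: 4.75
```

In practice np.linalg.lstsq (or a regression routine) would be used rather than forming the inverse explicitly, but the explicit route mirrors equation (2).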

3.4 MATHEMATICAL MAXIMIZATION NATURE OF LEAST

SQUARES REGRESSION

In general, then, in multiple regression the linear combination of the xs that is maximally correlated with y is sought. Minimizing the sum of squared errors of prediction is equivalent to maximizing the correlation between the observed and predicted y

scores. This maximized Pearson correlation is called the multiple correlation, shown

as R = r(yi, ŷi). Nunnally (1978, p. 164) characterized the procedure as “wringing out

the last ounce of predictive power” (obtained from the linear combination of xs, that

is, from the regression equation). Because the correlation is maximum for the sample

from which it is derived, when the regression equation is applied to an independent

sample from the same population (i.e., cross-validated), the predictive power drops

off. If the predictive power drops off sharply, then the equation is of limited utility.

That is, it has no generalizability, and hence is of limited scientific value. After all, we

derive the prediction equation for the purpose of predicting with it on future (other)

samples. If the equation does not predict well on other samples, then it is not fulfilling

the purpose for which it was designed.

Sample size (n) and the number of predictors (k) are two crucial factors that determine

how well a given equation will cross-validate (i.e., generalize). In particular, the n/k

ratio is crucial. For small ratios (5:1 or less), the shrinkage in predictive power can

be substantial. AÂ€study by Guttman (1941) illustrates this point. He had 136 subjects

and 84 predictors, and found the multiple correlation on the original sample to be .73.

However, when the prediction equation was applied to an independent sample, the

new correlation was only .04. In other words, the good predictive power on the original sample was due to capitalization on chance, and the prediction equation had no

generalizability.
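Guttman's result is easy to reproduce in spirit with simulated data. The following sketch (made-up data, not Guttman's) fits least squares to pure noise with a small n/k ratio: the derivation-sample R² is substantial even though the population multiple correlation is exactly zero, and the equation predicts essentially nothing on a fresh sample.

```python
import random

def fit_ols(X, y):
    # Solve the normal equations (X'X)b = X'y by Gaussian elimination.
    k = len(X[0])
    A = [[sum(row[r] * row[c] for row in X) for c in range(k)] for r in range(k)]
    g = [sum(row[r] * yi for row, yi in zip(X, y)) for r in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p], g[c], g[p] = A[p], A[c], g[p], g[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [ar - f * ac for ar, ac in zip(A[r], A[c])]
            g[r] -= f * g[c]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (g[r] - sum(A[r][c] * b[c] for c in range(r + 1, k))) / A[r][r]
    return b

def r_squared(X, y, b):
    yhat = [sum(bj * xj for bj, xj in zip(b, row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

random.seed(1)
def sample(n=50, k=10):
    # y is unrelated to the predictors, so the population R is zero.
    X = [[1.0] + [random.gauss(0, 1) for _ in range(k)] for _ in range(n)]
    y = [random.gauss(0, 1) for _ in range(n)]
    return X, y

X_tr, y_tr = sample()
b = fit_ols(X_tr, y_tr)
X_te, y_te = sample()
r2_train = r_squared(X_tr, y_tr, b)   # inflated by capitalization on chance
r2_test = r_squared(X_te, y_te, b)    # drops sharply on an independent sample
print(round(r2_train, 2), round(r2_test, 2))
```

With an n/k ratio of only 5:1 and no real relationship, the training R² is clearly positive while the cross-validated value hovers at or below zero, the same pattern Guttman observed on real data.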

We return to the cross-validation issue in more detail later in this chapter, where we

show that as a rough guide for social science research, about 15 subjects per predictor

are needed for a reliable equation, that is, for an equation that will cross-validate with

little loss in predictive power.

Chapter 3


3.5 BREAKDOWN OF SUM OF SQUARES AND F TEST FOR MULTIPLE CORRELATION

In analysis of variance we broke down variability around the grand mean into between- and within-variability. In regression analysis, variability around the mean is broken

down into variability due to regression (i.e., variation of the predicted values) and

variability of the observed scores around the predicted values (i.e., variation of the

residuals). To get at the breakdown, we note that the variation of the residuals may be

expressed as the following identity:

yi − ŷi = (yi − ȳ) − (ŷi − ȳ)

Now we square both sides, obtaining

(yi − ŷi)² = [(yi − ȳ) − (ŷi − ȳ)]².

Then we sum over the subjects, from 1 to n:

Σi=1..n (yi − ŷi)² = Σi=1..n [(yi − ȳ) − (ŷi − ȳ)]².

By algebraic manipulation (see Draper & Smith, 1981, pp. 17–18), this can be rewritten as:

Σ(yi − ȳ)²  =  Σ(yi − ŷi)²  +  Σ(ŷi − ȳ)²

sum of squares   =  sum of squares    +  sum of squares
around the mean     of the residuals     due to regression

SStot  =  SSres  +  SSreg
df: n − 1  =  (n − k − 1)  +  k    (df = degrees of freedom)    (3)

This results in the following analysis of variance table and the F test for determining whether the population multiple correlation is different from 0.

Analysis of Variance Table for Regression

Source              SS      df           MS                     F
Regression          SSreg   k            SSreg / k              MSreg / MSres
Residual (error)    SSres   n − k − 1    SSres / (n − k − 1)
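The sum-of-squares breakdown in equation 3 can be verified numerically. The following sketch (made-up toy data) fits a one-predictor least squares line and checks that SStot = SSres + SSreg:

```python
# Toy data (made up for illustration); simple one-predictor regression
x = [1, 2, 3, 4, 5]
y = [2, 3, 5, 4, 6]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# Least squares slope and intercept
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar
yhat = [a + b * xi for xi in x]

ss_tot = sum((yi - ybar) ** 2 for yi in y)                 # around the mean
ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))    # of the residuals
ss_reg = sum((yh - ybar) ** 2 for yh in yhat)              # due to regression
print(round(ss_tot, 10), round(ss_res + ss_reg, 10))  # the two agree, per equation 3
print(round(ss_reg / ss_tot, 2))                      # 0.81, the proportion due to regression
```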

Recall that since the residual for each subject is êi = yi − ŷi, the mean square error term can be written as MSres = Σêi² / (n − k − 1). Now, R² (squared multiple correlation) is given by


MULTIPLE REGRESSION FOR PREDICTION

R² = (sum of squares due to regression) / (sum of squares about the mean) = Σ(ŷi − ȳ)² / Σ(yi − ȳ)² = SSreg / SStot.

Thus, R² measures the proportion of total variance on y that is accounted for by the set of predictors. By simple algebra, then, we can rewrite the F test in terms of R² as follows:

F = (R² / k) / [(1 − R²) / (n − k − 1)],  with k and (n − k − 1) df    (4)

We feel this test is of limited utility when prediction is the research goal, because a significant result does not necessarily imply that the equation will cross-validate well, and cross-validation is the crucial issue in regression analysis for prediction.

Example 3.2

An investigator obtains R² = .50 on a sample of 50 participants with 10 predictors. Do we reject the null hypothesis that the population multiple correlation = 0?

F = (.50 / 10) / [(1 − .50) / (50 − 10 − 1)] = 3.9, with 10 and 39 df

This is significant at the .01 level, since the critical value is 2.8.

However, because the n/k ratio is only 5/1, the prediction equation will probably not

predict well on other samples and is therefore of questionable utility.
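The arithmetic of Example 3.2 can be scripted directly from equation 4 (a minimal sketch):

```python
def f_for_r2(r2, n, k):
    # F = (R^2 / k) / ((1 - R^2) / (n - k - 1)), with k and n - k - 1 df (equation 4)
    return (r2 / k) / ((1 - r2) / (n - k - 1))

print(round(f_for_r2(0.50, 50, 10), 1))  # 3.9, as in Example 3.2
```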

Myers’ (1990) response to the question of what constitutes an acceptable value for R2

is illuminating:

This is a difficult question to answer, and, in truth, what is acceptable depends on the scientific field from which the data were taken. A chemist, charged with doing a linear calibration on a high precision piece of equipment, certainly expects to experience a very high R² value (perhaps exceeding .99), while a behavioral scientist, dealing in data reflecting human behavior, may feel fortunate to observe an R² as high as .70. An experienced model fitter senses when the value of R² is large enough, given the situation confronted. Clearly, some scientific phenomena lend themselves to modeling with considerably more accuracy than others. (p. 37)

His point is that how well one can predict depends on context. In the physical sciences,

generally quite accurate prediction is possible. In the social sciences, where we are

attempting to predict human behavior (which can be influenced by many systematic

and some idiosyncratic factors), prediction is much more difficult.


3.6 RELATIONSHIP OF SIMPLE CORRELATIONS TO MULTIPLE CORRELATION

The ideal situation, in terms of obtaining a high R, would be to have each of the predictors significantly correlated with the dependent variable and for the predictors to be

uncorrelated with each other, so that they measure different constructs and are able to

predict different parts of the variance on y. Of course, in practice we will not find this,

because almost all variables are correlated to some degree. A good situation in practice, then, would be one in which most of our predictors correlate significantly with

y and the predictors have relatively low correlations among themselves. To illustrate

these points further, consider the following three patterns of correlations among three

predictors and an outcome.

(1)       Y     X1    X2
    X1   .20
    X2   .10   .50
    X3   .30   .40   .60

(2)       Y     X1    X2
    X1   .60
    X2   .50   .20
    X3   .70   .30   .20

(3)       Y     X1    X2
    X1   .60
    X2   .70   .70
    X3   .70   .60   .80

In which of these cases would you expect the multiple correlation to be the largest

and the smallest, respectively? Here it is quite clear that R will be smallest for Case 1, because the highest correlation of any of the predictors with y is .30, whereas for the other two patterns at least one of the predictors has a correlation of .70 with y. Thus, we know that R will be at least .70 for Cases 2 and 3, whereas for Case 1 we know only that R will be at least .30. Furthermore, there is no chance that R for Case 1 might become larger than that for Cases 2 and 3, because the intercorrelations among the predictors for Case 1 are approximately as large as or larger than those for the other two cases.

We would expect R to be largest for Case 2 because each of the predictors is moderately to strongly tied to y and there are low intercorrelations (i.e., little redundancy)

among the predictors—exactly the kind of situation we would hope to find in practice. We would expect R to be greater in Case 2 than in Case 3, because in Case 3

there is considerable redundancy among the predictors. Although the correlations

of the predictors with y are slightly higher in Case 3 (.60, .70, .70) than in Case 2

(.60, .50, .70), the much higher intercorrelations among the predictors for Case 3

will severely limit the ability of X2 and X3 to predict additional variance beyond that of X1 (and hence to increase R), whereas this will not be true for Case 2.
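These expectations can be checked exactly. For standardized variables, R² = r′Rxx⁻¹r, where r holds the predictor-criterion correlations and Rxx the predictor intercorrelations; the sketch below applies this to the three patterns above:

```python
def solve3(A, b):
    # Cramer's rule for a 3x3 system A x = b.
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    xs = []
    for col in range(3):
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = b[r]
        xs.append(det(M) / d)
    return xs

def multiple_r(Rxx, r_y):
    # R^2 = r_y' * Rxx^{-1} * r_y with correlation-metric inputs; return R.
    beta = solve3(Rxx, r_y)
    return sum(b * r for b, r in zip(beta, r_y)) ** 0.5

cases = {
    1: ([[1, .5, .4], [.5, 1, .6], [.4, .6, 1]], [.2, .1, .3]),
    2: ([[1, .2, .3], [.2, 1, .2], [.3, .2, 1]], [.6, .5, .7]),
    3: ([[1, .7, .6], [.7, 1, .8], [.6, .8, 1]], [.6, .7, .7]),
}
R = {c: multiple_r(Rxx, ry) for c, (Rxx, ry) in cases.items()}
for c in (1, 2, 3):
    print(f"Case {c}: R = {R[c]:.2f}")
```

Case 2 indeed comes out largest (R about .87), Case 3 next (about .75), and Case 1 smallest (about .34), matching the reasoning above.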

3.7 MULTICOLLINEARITY

When there are moderate to high intercorrelations among the predictors, as is the case

when several cognitive measures are used as predictors, the problem is referred to as


multicollinearity. Multicollinearity poses a real problem for the researcher using multiple regression for three reasons:

1. It severely limits the size of R, because the predictors are going after much of the

same variance on y. A study by Dizney and Gromen (1967) illustrates very nicely

how multicollinearity among the predictors limits the size of R. They studied how

well reading proficiency (x1) and writing proficiency (x2) would predict course

grades in college German. The following correlation matrix resulted:

        x1      x2      y
x1    1.00
x2     .58    1.00
y      .33     .45    1.00

Note the multicollinearity for x1 and x2 (r(x1, x2) = .58), and also that x2 has a simple correlation of .45 with y. The multiple correlation R was only .46. Thus, the relatively high correlation between reading and writing severely limited the ability of reading to add anything (only .01) to the prediction of a German grade above and beyond that of writing.
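The .46 can be computed directly with the standard two-predictor formula for R (a sketch; the formula follows from the normal equations and is not derived in this section):

```python
def multiple_r_two(r1, r2, r12):
    # R for two predictors from simple correlations:
    # R^2 = (r1^2 + r2^2 - 2 r1 r2 r12) / (1 - r12^2)
    r_sq = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return r_sq ** 0.5

# Dizney and Gromen (1967): r(x1, y) = .33, r(x2, y) = .45, r(x1, x2) = .58
print(round(multiple_r_two(.33, .45, .58), 2))  # 0.46, matching the reported R
```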

2. Multicollinearity makes determining the importance of a given predictor difficult because the effects of the predictors are confounded due to the correlations among them.

3. Multicollinearity increases the variances of the regression coefficients. The greater these variances, the more unstable the prediction equation will be.

The following are two methods for diagnosing multicollinearity:

1. Examine the simple correlations among the predictors from the correlation matrix. These are easy to understand and should be inspected, but be warned that they do not always indicate the extent of multicollinearity. More subtle forms of multicollinearity may exist. One such form is discussed next.

2. Examine the variance inflation factors for the predictors.

The quantity 1/(1 − Rj²) is called the jth variance inflation factor, where Rj² is the

squared multiple correlation for predicting the jth predictor from all other predictors.

The variance inflation factor for a predictor indicates whether there is a strong linear

association between it and all the remaining predictors. It is distinctly possible for a

predictor to have only moderate or relatively weak associations with the other predictors in terms of simple correlations, and yet to have a quite high R when regressed on

all the other predictors. When is the value for a variance inflation factor large enough

to cause concern? Myers (1990) offered the following suggestion:

Though no rule of thumb on numerical values is foolproof, it is generally believed

that if any VIF exceeds 10, there is reason for at least some concern; then one


should consider variable deletion or an alternative to least squares estimation to combat the problem. (p. 369)

The variance inflation factors are easily obtained from SAS and SPSS (see Table 3.6 for SAS and exercise 10 for SPSS).
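They can also be computed by hand when each predictor is regressed on only two others, using the two-predictor R² formula. The sketch below does this for the three-predictor model of the upcoming Morrison MBA example and reproduces, up to rounding of the input correlations, the Tolerance/VIF values SPSS reports in Table 3.4 (1.616, 1.112, 1.724):

```python
def vif_from_correlations(r1, r2, r12):
    # VIF_j = 1 / (1 - R_j^2), where R_j^2 is the squared multiple correlation
    # of predictor j regressed on the other two (two-predictor formula).
    r2j = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return 1.0 / (1.0 - r2j)

# Correlations from the Morrison MBA data (Table 3.3):
# clarity-interest = .200, clarity-stimul = .617, interest-stimul = .317
print(round(vif_from_correlations(.200, .617, .317), 3))  # CLARITY:  1.615
print(round(vif_from_correlations(.200, .317, .617), 3))  # INTEREST: 1.112
print(round(vif_from_correlations(.617, .317, .200), 3))  # STIMUL:   1.723
```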

There are at least three ways of combating multicollinearity. One way is to combine

predictors that are highly correlated. For example, if there are three measures having

similar variability relating to a single construct that have intercorrelations of about .80

or larger, then add them to form a single measure.

A second way, if one has initially a fairly large set of predictors, is to consider doing a

principal components or factor analysis to reduce to a much smaller set of predictors.

For example, if there are 30 predictors, we are undoubtedly not measuring 30 different constructs. A factor analysis will suggest the number of constructs we are actually measuring. The factors become the new predictors, and because the factors are uncorrelated by construction, we eliminate the multicollinearity problem. Principal components and factor analysis are discussed in Chapter 9. In that chapter we also show how

to use SAS and SPSS to obtain factor scores, which can then be used in subsequent analyses, for example as predictors in a multiple regression.

A third way of combating multicollinearity is to use a technique called ridge regression. This approach is beyond the scope of this text, although Myers (1990) has a nice

discussion for those who are interested.

3.8 MODEL SELECTION

Various methods are available for selecting a good set of predictors:

1. Substantive Knowledge. As Weisberg (1985) noted, “the single most important tool in selecting a subset of variables for use in a model is the analyst’s knowledge of the substantive area under study” (p. 210). It is important for the investigator to be judicious in his or her selection of predictors. Far too many investigators have abused multiple regression by throwing everything in the hopper, often merely because the variables are available. Cohen (1990), among others, commented on the indiscriminate use of variables: there have been too many studies with prodigious numbers of dependent variables, or with what seemed to be far too many independent variables, or (heaven help us) both.

It is generally better to work with a small number of predictors because it is consistent with the scientific principle of parsimony and improves the n/k ratio, which helps

cross-validation prospects. Further, note the following from Lord and Novick (1968):

Experience in psychology and in many other fields of application has shown that

it is seldom worthwhile to include very many predictor variables in a regression


equation, for the incremental validity of new variables, after a certain point, is usually very low. This is true because tests tend to overlap in content and consequently the addition of a fifth or sixth test may add little that is new to the battery and still relevant to the criterion. (p. 274)

Or consider the following from Ramsey and Schafer (1997):

There are two good reasons for paring down a large number of explanatory variables to a smaller set. The first reason is somewhat philosophical: simplicity is preferable to complexity. Thus, redundant and unnecessary variables should be excluded on principle. The second reason is more concrete: unnecessary terms in the model yield less precise inferences. (p. 325)

2. Sequential Methods. These are the forward, stepwise, and backward selection procedures that are popular with many researchers. All these procedures involve a

partialing-out process; that is, they look at the contribution of a predictor with the

effects of the other predictors partialed out, or held constant. Many of you may

have already encountered the notion of a partial correlation in a previous statistics

course, but a review is nevertheless in order.

The partial correlation between variables 1 and 2 with variable 3 partialed from both 1

and 2 is the correlation with variable 3 held constant, as you may recall. The formula

for the partial correlation is given by:

r12.3 = (r12 − r13 r23) / [√(1 − r13²) √(1 − r23²)]    (5)

Let us put this in the context of multiple regression. Suppose we wish to know what

the partial correlation of y (dependent variable) is with predictor 2 with predictor 1

partialed out. The formula would be, following what we have earlier:

ry2.1 = (ry2 − ry1 r21) / [√(1 − ry1²) √(1 − r21²)]    (6)

We apply this formula to show how SPSS obtains the partial correlation of .528 for INTEREST in Table 3.4 under EXCLUDED VARIABLES in the first upcoming computer example. In this example CLARITY (abbreviated as clr) entered first, having a correlation of .862 with dependent variable INSTEVAL (abbreviated as inst). The following correlations are taken from the correlation matrix, given near the beginning of Table 3.4.

r(inst int.clr) = [.435 − (.862)(.20)] / [√(1 − .862²) √(1 − .20²)] = .528

The correlation between the two predictors is .20, as shown.
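The same computation can be scripted directly from equation 5 (a minimal sketch; SPSS's .528 is based on unrounded data, so the two-decimal inputs here land a hair higher):

```python
def partial_r(r12, r13, r23):
    # Partial correlation r_{12.3} (equation 5)
    return (r12 - r13 * r23) / ((1 - r13**2) ** 0.5 * (1 - r23**2) ** 0.5)

# INTEREST with INSTEVAL, partialing out CLARITY (correlations from Table 3.3)
r = partial_r(.435, .862, .200)
print(round(r, 3))  # 0.529; SPSS reports .528 from the unrounded correlations
```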

We now give a brief description of the forward, stepwise, and backward selection

procedures.


FORWARD—The first predictor that has an opportunity to enter the equation is the

one with the largest simple correlation with y. If this predictor is significant, then

the predictor with the largest partial correlation with y is considered, and so on.

At some stage a given predictor will not make a significant contribution and the

procedure terminates. It is important to remember that with this procedure, once a

predictor gets into the equation, it stays.

STEPWISE—This is basically a variation on the forward selection procedure.

However, at each stage of the procedure, a test is made of the least useful

predictor. The importance of each predictor is constantly reassessed. Thus,

a predictor that may have been the best entry candidate earlier may now be

superfluous.

BACKWARD—The steps are as follows: (1) An equation is computed with ALL the predictors. (2) The partial F is calculated for every predictor, treated as though it were the last predictor to enter the equation. (3) The smallest partial F value, say F1, is compared with a preselected significance value, say F0. If F1 < F0, remove that predictor, reestimate the equation with the remaining variables, and return to step (2).
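The greedy logic these sequential procedures share can be sketched in a few lines. The following illustration (our simplification, driven purely by R² increments with no significance testing) runs forward selection on the Morrison MBA correlations from the upcoming Table 3.3 and reproduces the entry order of the SPSS stepwise run:

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for A x = b.
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [ar - f * ac for ar, ac in zip(A[r], A[c])]
            b[r] -= f * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

names = ["CLARITY", "STIMUL", "KNOWLEDG", "INTEREST", "COUEVAL"]
r_y = [.862, .739, .282, .435, .738]       # correlations with INSTEVAL
Rxx = [[1.000, .617, .057, .200, .651],    # predictor intercorrelations
       [.617, 1.000, .078, .317, .523],
       [.057, .078, 1.000, .583, .041],
       [.200, .317, .583, 1.000, .448],
       [.651, .523, .041, .448, 1.000]]

def r2(subset):
    # R^2 = r' Rxx^{-1} r restricted to the chosen predictors.
    A = [[Rxx[i][j] for j in subset] for i in subset]
    ry = [r_y[i] for i in subset]
    return sum(b * r for b, r in zip(solve(A, ry), ry))

chosen, order = [], []
for _ in range(3):  # first three forward steps
    best = max((i for i in range(5) if i not in chosen),
               key=lambda i: r2(chosen + [i]))
    chosen.append(best)
    order.append(names[best])
print(order)  # ['CLARITY', 'INTEREST', 'STIMUL'], the stepwise entry order
```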

3. Mallows’ Cp. Before we introduce Mallows’ Cp, it is important to consider the consequences of underfitting (important variables are left out of the model) and overfitting (having variables in the model that make essentially no contribution or are marginal). Myers (1990, pp. 178–180) has an excellent discussion of the impact of underfitting and overfitting, and notes that “a model that is too simple may suffer from biased coefficients and biased prediction, while an overly complicated model can result in large variances, both in the coefficients and in the prediction.”

This measure was introduced by C. L. Mallows (1973) as a criterion for selecting a model. It measures total squared error, and Mallows recommended choosing the model(s) where Cp ≈ p. For these models, the amount of underfitting or overfitting is minimized. Mallows’ criterion may be written as

Cp = p + (s² − σ̂²)(N − p) / σ̂²,  where p = k + 1,    (7)

where s² is the residual variance for the model being evaluated, and σ̂² is an estimate of the residual variance that is usually based on the full model. Note that if the residual variance of the model being evaluated, s², is much larger than σ̂², Cp increases, suggesting that important variables have been left out of the model.
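Equation 7 is simple to script. A sketch, with illustrative values only: s² = .126 and σ̂² = .102 are borrowed from the residual mean squares in the upcoming Table 3.4, with N = 32 and a three-predictor subset (p = 4), purely to show the arithmetic; this is not a comparison the text itself carries out.

```python
def mallows_cp(s2, sigma2_full, n, p):
    # C_p = p + (s^2 - sigma_hat^2)(N - p) / sigma_hat^2, with p = k + 1 (equation 7)
    return p + (s2 - sigma2_full) * (n - p) / sigma2_full

# Hypothetical subset model: residual variance .126 vs full-model estimate .102
print(round(mallows_cp(.126, .102, 32, 4), 2))  # well above p = 4, flagging underfit
```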

4. Use of MAXR Procedure from SAS. There are many methods of model selection

in the SAS REG program, MAXR being one of them. This procedure produces


several models: the best one-variable model, the best two-variable model, and so

on. Here is the description of the procedure from the SAS/STAT manual:

The MAXR method begins by finding the one-variable model producing the highest R². Then another variable, the one that yields the greatest increase in R², is added. Once the two-variable model is obtained, each of the variables in the model is compared to each variable not in the model. For each comparison, MAXR determines if removing one variable and replacing it with the other variable increases R². After comparing all possible switches, MAXR makes the switch that produces the largest increase in R². Comparisons begin again, and the process continues until MAXR finds that no switch could increase R². . . . Another variable is then added to the model, and the comparing and switching process is repeated to find the best three-variable model. (p. 1398)

5. All Possible Regressions. If you wish to follow this route, then the SAS REG

program should be considered. The number of regressions increases quite sharply as k increases; however, the program will efficiently identify good subsets. Good subsets are those that have the smallest Mallows’ Cp values. We have illustrated this in Table 3.6. This pool of candidate models can then be examined further using regression diagnostics and cross-validity criteria to be mentioned later.

Use of one or more of these methods will often yield a number of models of roughly

equal efficacy. As Myers (1990) noted:

The successful model builder will eventually understand that with many data sets,

several models can be fit that would be of nearly equal effectiveness. Thus the

problem that one deals with is the selection of one model from a pool of candidate models. (p. 164)

One of the problems with the stepwise methods, which are very frequently used, is

that they have led many investigators to conclude that they have found the best model,

when in fact there may be some better models or several other models that are about

as good. As Huberty (1989) noted, “and one or more of these subsets may be more

interesting or relevant in a substantive sense” (p. 46).

In addition to the procedures just described, there are three other important criteria to

consider when selecting a prediction equation. These criteria all relate to the generalizability of the equation, that is, how well the equation will predict on independent samples of data. The three methods of model validation, which are discussed in detail in section 3.11, are:

1. Data splitting—Randomly split the data, obtain a prediction equation on one half

of the random split, and then check its predictive power (cross-validate) on the

other sample.

2. Use of the PRESS statistic (R²Press), which is an external validation method particularly useful for small samples.


3. Obtain an estimate of the average predictive power of the equation on many other

samples from the same population, using a formula due to Stein (Herzberg, 1969).

The SPSS application guides comment on overfitting and the use of several models. There is no one test to determine the dimensionality of the best submodel. Some researchers find it tempting to include too many variables in the model, which is called overfitting. Such a model will perform badly when applied to a new sample from the same population (cross-validation). Automatic stepwise procedures cannot do all the work for you. Use them as a tool to determine roughly the number of predictors needed (for example, you might find three to five variables). If you try several methods of selection, you may identify candidate predictors that are not included by any method. Ignore them, and fit models with, say, three to five variables, selecting alternative subsets from among the better candidates. You may find several subsets that perform equally well. Then, knowledge of the subject matter, how accurately individual variables are measured, and what a variable “communicates” may guide selection of the model to report.

We don’t disagree with these comments; however, we would favor the model that

cross-validates best. If two models cross-validate about the same, then we would favor

the model that makes most substantive sense.

3.8.1 Semipartial Correlations

We consider a procedure that, for a given ordering of the predictors, will enable us to

determine the unique contribution each predictor is making in accounting for variance

on y. This procedure, which uses semipartial correlations, will disentangle the correlations among the predictors.

The partial correlation between variables 1 and 2 with variable 3 partialed from both 1

and 2 is the correlation with variable 3 held constant, as you may recall. The formula

for the partial correlation is given by

r12.3 = (r12 − r13 r23) / [√(1 − r13²) √(1 − r23²)].

We presented the partial correlation first for two reasons: (1) the semipartial correlation

is a variant of the partial correlation, and (2) the partial correlation will be involved in

computing more complicated semipartial correlations.

For breaking down R², we will want to work with the semipartial, sometimes called part, correlation. The formula for the semipartial correlation is

r12.3(s) = (r12 − r13 r23) / √(1 − r23²).

The only difference between this equation and the previous one is that the denominator

here doesn’t contain the standard deviation of the partialed scores for variable 1.


In multiple correlation we wish to partial the independent variables (the predictors) from one another, but not from the dependent variable. We wish to leave the dependent variable intact and not partial out any variance attributable to the predictors. Let R²y.12...k denote the squared multiple correlation for the k predictors, where the predictors appear after the dot. Consider the case of one dependent variable and three predictors. It can be shown that:

R²y.123 = r²y1 + r²y2.1(s) + r²y3.12(s),    (8)

where

ry2.1(s) = (ry2 − ry1 r21) / √(1 − r21²)    (9)

is the semipartial correlation between y and variable 2, with variable 1 partialed only from variable 2, and ry3.12(s) is the semipartial correlation between y and variable 3 with variables 1 and 2 partialed only from variable 3:

ry3.12(s) = (ry3.1(s) − ry2.1(s) r23.1) / √(1 − r23.1²)    (10)

Thus, through the use of semipartial correlations, we disentangle the correlations

among the predictors and determine how much unique variance on each predictor is

related to variance on y.
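Equations 8–10 can be checked against the Morrison MBA data of the upcoming computer example. With y = INSTEVAL, x1 = CLARITY, x2 = INTEREST, and x3 = STIMUL (a sketch using the correlations in Table 3.3), the decomposition recovers the R² = .856 that SPSS reports for the three-predictor model:

```python
def semipartial(r_y2, r_y1, r_12):
    # r_{y2.1(s)}: correlation of y with x2, x1 partialed only from x2 (equation 9)
    return (r_y2 - r_y1 * r_12) / (1 - r_12**2) ** 0.5

def partial(r_12, r_13, r_23):
    # Partial correlation r_{12.3} (equation 5)
    return (r_12 - r_13 * r_23) / ((1 - r_13**2) ** 0.5 * (1 - r_23**2) ** 0.5)

# Morrison MBA correlations (Table 3.3)
ry1, ry2, ry3 = .862, .435, .739   # CLARITY, INTEREST, STIMUL with INSTEVAL
r12, r13, r23 = .200, .617, .317   # intercorrelations among the three predictors

sp2 = semipartial(ry2, ry1, r12)        # INTEREST beyond CLARITY
sp3_y = semipartial(ry3, ry1, r13)      # r_{y3.1(s)}
r23_1 = partial(r23, r12, r13)          # r_{23.1}
sp3 = (sp3_y - sp2 * r23_1) / (1 - r23_1**2) ** 0.5   # equation 10

r2_total = ry1**2 + sp2**2 + sp3**2     # equation 8
print(round(r2_total, 3))  # 0.856, matching Model 3's R Square in Table 3.4
```

Note that the first two terms alone give .815, the Model 2 R Square, so the decomposition tracks the stepwise output term by term.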

3.9 TWO COMPUTER EXAMPLES

To illustrate the use of several of the aforementioned model selection methods, we

consider two computer examples. The first example illustrates the SPSS REGRESSION program, and uses data from Morrison (1983) on 32 students enrolled in an

MBA course. We predict instructor course evaluation from five predictors. The second

example illustrates SAS REG on quality ratings of 46 research doctorate programs in

psychology, where we are attempting to predict quality ratings from factors such as

number of program graduates, percentage of graduates who received fellowships or

grant support, and so on (Singer & Willett, 1988).

Example 3.3: SPSS Regression on Morrison MBA Data

The data for this problem are from Morrison (1983). The dependent variable is instructor course evaluation in an MBA course, with the five predictors being clarity, stimulation, knowledge, interest, and course evaluation. We illustrate two of the sequential

procedures, stepwise and backward selection, using SPSS. Syntax for running the

analyses, along with the correlation matrix, are given in Table 3.3.

Table 3.3: SPSS Syntax for Stepwise and Backward Selection Runs on the Morrison MBA Data and the Correlation Matrix

TITLE ‘MORRISON MBA DATA’.
DATA LIST FREE/INSTEVAL CLARITY STIMUL KNOWLEDG INTEREST COUEVAL.
BEGIN DATA.
1 1 2 1 1 2    1 2 2 1 1 1    1 1 1 1 1 2    1 1 2 1 1 2
2 1 3 2 2 2    2 2 4 1 1 2    2 3 3 1 1 2    2 3 4 1 2 3
2 2 3 1 3 3    2 2 2 2 2 2    2 2 3 2 1 2    2 2 2 3 3 2
2 2 2 1 1 2    2 2 4 2 2 2    2 3 3 1 1 3    2 3 4 1 1 2
2 3 2 1 1 2    3 4 4 3 2 2    3 4 3 1 1 4    3 4 3 1 2 3
3 4 3 2 2 3    3 3 4 2 3 3    3 3 4 2 3 3    3 4 3 1 1 2
3 4 5 1 1 3    3 3 5 1 2 3    3 4 4 1 2 3    3 4 4 1 1 3
3 3 3 2 1 3    3 3 5 1 1 2    4 5 5 2 3 4    4 4 5 2 3 4
END DATA.
REGRESSION DESCRIPTIVES = DEFAULT/        (1)
  VARIABLES = INSTEVAL TO COUEVAL/
  STATISTICS = DEFAULTS TOL SELECTION/    (2)
  DEPENDENT = INSTEVAL/
  METHOD = STEPWISE/                      (3)
  SAVE COOK LEVER SRESID/                 (4)
  SCATTERPLOT(*SRESID, *ZPRED).           (5)

CORRELATION MATRIX

            INSTEVAL  CLARITY  STIMUL  KNOWLEDGE  INTEREST  COUEVAL
Insteval      1.000     .862    .739      .282      .435      .738
Clarity        .862    1.000    .617      .057      .200      .651
Stimul         .739     .617   1.000      .078      .317      .523
Knowledge      .282     .057    .078     1.000      .583      .041
Interest       .435     .200    .317      .583     1.000      .448
Coueval        .738     .651    .523      .041      .448     1.000

(1) The DESCRIPTIVES = DEFAULT subcommand yields the means, standard deviations, and the correlation matrix for the variables.
(2) The DEFAULTS part of the STATISTICS subcommand yields, among other things, the ANOVA table for each step, R, R², and adjusted R².
(3) To obtain the backward selection procedure, we would simply put METHOD = BACKWARD/.
(4) The SAVE subcommand places into the data set Cook's distance (for identifying influential data points), centered leverage values (for identifying outliers on the predictors), and studentized residuals (for identifying outliers on y).
(5) This SCATTERPLOT subcommand yields the plot of the studentized residuals vs. the standardized predicted values, which is very useful for determining whether any of the assumptions underlying the linear regression model may be violated.


SPSS has “p values,” denoted by PIN and POUT, which govern whether a predictor will enter the equation and whether it will be deleted. The default values are PIN = .05 and POUT = .10. In other words, a predictor must be “significant” at the .05 level to enter, and a predictor already in the equation is deleted if it is no longer significant at the .10 level.

First, we discuss the stepwise procedure results. Examination of the correlation matrix in Table 3.3 reveals that three of the predictors (CLARITY, STIMUL, and COUEVAL) are strongly related to INSTEVAL (simple correlations of .862, .739, and .738, respectively). Because CLARITY has the highest correlation, it will enter the equation first. Superficially, it might appear that STIMUL or COUEVAL would enter next; however, we must take into account how these predictors are correlated with CLARITY, and indeed both have fairly high correlations with CLARITY (.617 and .651, respectively). Thus, they will not account for as much unique variance on INSTEVAL, above and beyond that of CLARITY, as first appeared. On the other hand, INTEREST, which has a considerably lower correlation with INSTEVAL (.435), is correlated only .20 with CLARITY. Thus, the variance on INSTEVAL it accounts for is relatively independent of the variance CLARITY accounted for. And, as seen in Table 3.4, it is INTEREST that enters the regression equation second. STIMUL is the third and final predictor to enter, because its p value (.0086) is less than the default value of .05. Finally, the other predictors (KNOWLEDGE and COUEVAL) don't enter because their p values (.0989 and .1288) are greater than .05.

Table 3.4: Selected Results From the SPSS Stepwise Regression Run on the Morrison MBA Data

Descriptive Statistics

            Mean     Std. Deviation   N
INSTEVAL   2.4063        .7976        32
CLARITY    2.8438       1.0809        32
STIMUL     3.3125       1.0906        32
KNOWLEDG   1.4375        .6189        32
INTEREST   1.6563        .7874        32
COUEVAL    2.5313        .7177        32

Correlations (Pearson)

            INSTEVAL  CLARITY  STIMUL  KNOWLEDG  INTEREST  COUEVAL
INSTEVAL      1.000     .862    .739     .282      .435      .738
CLARITY        .862    1.000    .617     .057      .200      .651
STIMUL         .739     .617   1.000     .078      .317      .523
KNOWLEDG       .282     .057    .078    1.000      .583      .041
INTEREST       .435     .200    .317     .583     1.000      .448
COUEVAL        .738     .651    .523     .041      .448     1.000

Variables Entered/Removed(a)

Model   Variables Entered   Variables Removed   Method
1       CLARITY             (none)              Stepwise (Criteria: Probability-of-F-to-enter <= .050, Probability-of-F-to-remove >= .100).
2       INTEREST            (none)              Stepwise (same criteria).
3       STIMUL              (none)              Stepwise (same criteria).

(a) Dependent Variable: INSTEVAL

CLARITY enters the equation first, since it has the highest simple correlation (.862) with the dependent variable INSTEVAL. INTEREST then has the opportunity to enter, since it has the largest partial correlation, .528 (see the EXCLUDED VARIABLES box), and does enter since its p value (.002) is less than the default entry value of .05. STIMUL has the strongest tie to INSTEVAL after the effects of CLARITY and INTEREST are partialed out, so it gets the opportunity to enter next; it does enter, since its p value (.009) is less than .05.

Model Summary(d)

Model   R         R Square   Adjusted R Square   Std. Error of the Estimate
1       .862(a)     .743          .734                   .4112
2       .903(b)     .815          .802                   .3551
3       .925(c)     .856          .840                   .3189

Selection Criteria

Model   Akaike Information   Amemiya Prediction   Mallows' Prediction   Schwarz Bayesian
        Criterion            Criterion            Criterion             Criterion
1          −54.936               .292                 35.297               −52.004
2          −63.405               .224                 19.635               −59.008
3          −69.426               .186                 11.517               −63.563

(a) Predictors: (Constant), CLARITY
(b) Predictors: (Constant), CLARITY, INTEREST
(c) Predictors: (Constant), CLARITY, INTEREST, STIMUL
(d) Dependent Variable: INSTEVAL

With just CLARITY in the equation we account for 74.3% of the variance; adding INTEREST increases the variance accounted for to 81.5%; and finally with 3 predictors (STIMUL added) we account for 85.6% of the variance in this sample.


ANOVA(d)

Model          Sum of Squares   df   Mean Square    F        Sig.
1  Regression      14.645        1     14.645      86.602   .000(a)
   Residual         5.073       30       .169
   Total           19.719       31
2  Regression      16.061        2      8.031      63.670   .000(b)
   Residual         3.658       29       .126
   Total           19.719       31
3  Regression      16.872        3      5.624      55.316   .000(c)
   Residual         2.847       28       .102
   Total           19.719       31

(a) Predictors: (Constant), CLARITY
(b) Predictors: (Constant), CLARITY, INTEREST
(c) Predictors: (Constant), CLARITY, INTEREST, STIMUL
(d) Dependent Variable: INSTEVAL

Coefficients(a)

                Unstandardized   Standardized                   Collinearity
                Coefficients     Coefficients                   Statistics
Model           B      Std.      Beta           t      Sig.     Tolerance  VIF
                       Error
1  (Constant)   .598   .207                     2.882  .007
   CLARITY      .636   .068      .862           9.306  .000     1.000      1.000
2  (Constant)   .254   .207                     1.230  .229
   CLARITY      .596   .060      .807           9.887  .000     .960       1.042
   INTEREST     .277   .083      .273           3.350  .002     .960       1.042
3  (Constant)   .021   .203                     .105   .917
   CLARITY      .482   .067      .653           7.158  .000     .619       1.616
   INTEREST     .223   .077      .220           2.904  .007     .900       1.112
   STIMUL       .195   .069      .266           2.824  .009     .580       1.724

(a) Dependent Variable: INSTEVAL

These are the raw regression coefficients that define the prediction equation, i.e., INSTEVAL = .482 CLARITY + .223 INTEREST + .195 STIMUL + .021. The coefficient of .482 for CLARITY means that for every unit change on CLARITY there is a predicted change of .482 units on INSTEVAL, holding the other predictors constant. The coefficient of .223 for INTEREST means that for every unit change on INTEREST there is a predicted change of .223 units on INSTEVAL, holding the other predictors constant. Note that the Beta column contains the estimates of the regression coefficients when all variables are in z-score form. Thus, the value of .653 for CLARITY means that for every standard deviation change in CLARITY there is a predicted change of .653 standard deviations on INSTEVAL, holding constant the other predictors.

Chapter 3

Excluded Variables(d)

                                                            Collinearity Statistics
Model          Beta In  t      Sig.  Partial      Tolerance  VIF    Minimum
                                     Correlation                    Tolerance
1  STIMUL      .335(a)  3.274  .003  .520         .619       1.616  .619
   KNOWLEDG    .233(a)  2.783  .009  .459         .997       1.003  .997
   INTEREST    .273(a)  3.350  .002  .528         .960       1.042  .960
   COUEVAL     .307(a)  2.784  .009  .459         .576       1.736  .576
2  STIMUL      .266(b)  2.824  .009  .471         .580       1.724  .580
   KNOWLEDG    .116(b)  1.183  .247  .218         .656       1.524  .632
   COUEVAL     .191(b)  1.692  .102  .305         .471       2.122  .471
3  KNOWLEDG    .148(c)  1.709  .099  .312         .647       1.546  .572
   COUEVAL     .161(c)  1.567  .129  .289         .466       2.148  .451

(a) Predictors in the Model: (Constant), CLARITY
(b) Predictors in the Model: (Constant), CLARITY, INTEREST
(c) Predictors in the Model: (Constant), CLARITY, INTEREST, STIMUL
(d) Dependent Variable: INSTEVAL

Since neither of the step 3 p values (.099 for KNOWLEDG and .129 for COUEVAL) is less than .05, no other predictors can enter, and the procedure terminates.
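The entry logic SPSS applies here (compute an F-to-enter for each candidate predictor, admit the one with the smallest p value as long as it clears the entry threshold) can be sketched in a few lines. This is our own illustration with generic names, not SPSS's implementation, and it omits the removal step:

```python
import numpy as np
from scipy import stats

def forward_stepwise(X, y, p_enter=0.05):
    """Forward selection: at each step, add the candidate predictor with
    the smallest F-to-enter p value, while that p value is <= p_enter
    (the SPSS default entry criterion of .05)."""
    n, k = X.shape
    included = []

    def sse(cols):
        # SSE of the least-squares fit with an intercept and the given columns
        Xd = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
        b, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        return np.sum((y - Xd @ b) ** 2)

    while len(included) < k:
        best = None
        for j in (j for j in range(k) if j not in included):
            sse_full = sse(included + [j])
            df2 = n - len(included) - 2          # n - (predictors + 1) - 1
            F = (sse(included) - sse_full) / (sse_full / df2)
            p = stats.f.sf(F, 1, df2)
            if best is None or p < best[0]:
                best = (p, j)
        if best[0] > p_enter:
            break                                # nothing else can enter
        included.append(best[1])
    return included
```

For the Morrison data this mirrors the table above: the predictor with the largest partial F (equivalently, the largest partial correlation) enters at each step until no candidate's p value clears .05.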

Selected output from the backward selection procedure appears in Table 3.5. First, all of the predictors are put into the equation. Then, the procedure determines which of the predictors makes the least contribution when entered last in the equation. That predictor is INTEREST, and since its p value is .9097, it is deleted from the equation. None of the other predictors is deleted, because their p values are less than .10. Interestingly, note that two different sets of predictors emerge from the two sequential selection procedures. The stepwise procedure yields the set (CLARITY, INTEREST, and STIMUL), whereas the backward procedure yields (COUEVAL, KNOWLEDG, STIMUL, and CLARITY). However, CLARITY and STIMUL are common to both sets. On the grounds of parsimony, we might prefer the set (CLARITY, INTEREST, and STIMUL), especially because the adjusted R2 values for the two sets are quite close (.84 and .87). Note that the adjusted R2 is generally preferred over R2 as a measure of the proportion of y variability due to the model, although we will see later that adjusted R2 does not work particularly well in assessing the cross-validity predictive power of an equation.

Three other things should be checked out before settling on this as our chosen model:

1. We need to determine if the assumptions of the linear regression model are tenable.

2. We need an estimate of the cross-validity power of the equation.

3. We need to check for the existence of outliers and/or influential data points.


MULTIPLE REGRESSION FOR PREDICTION

Table 3.5: Selected Printout From SPSS Regression for Backward Selection on the Morrison MBA Data

Model Summary(c)

                                                     Selection Criteria
Model  R        R Square  Adjusted  Std. Error  Akaike       Amemiya     Mallows'    Schwarz
                          R Square  of the      Information  Prediction  Prediction  Bayesian
                                    Estimate    Criterion    Criterion   Criterion   Criterion
1      .946(a)  .894      .874      .2831       −75.407      .154        6.000       −66.613
2      .946(b)  .894      .879      .2779       −77.391      .145        4.013       −70.062

(a) Predictors: (Constant), COUEVAL, KNOWLEDG, STIMUL, INTEREST, CLARITY
(b) Predictors: (Constant), COUEVAL, KNOWLEDG, STIMUL, CLARITY
(c) Dependent Variable: INSTEVAL

Coefficients(a)

                Unstandardized   Standardized                    Collinearity
                Coefficients     Coefficients                    Statistics
Model           B      Std.      Beta           t       Sig.     Tolerance  VIF
                       Error
1  (Constant)   −.443  .235                     −1.886  .070
   CLARITY      .386   .071      .523           5.415   .000     .436       2.293
   STIMUL       .197   .062      .269           3.186   .004     .569       1.759
   KNOWLEDG     .277   .108      .215           2.561   .017     .579       1.728
   INTEREST     .011   .097      .011           .115    .910     .441       2.266
   COUEVAL      .270   .110      .243           2.459   .021     .416       2.401
2  (Constant)   −.450  .222                     −2.027  .053
   CLARITY      .384   .067      .520           5.698   .000     .471       2.125
   STIMUL       .198   .059      .271           3.335   .002     .592       1.690
   KNOWLEDG     .285   .081      .221           3.518   .002     .994       1.006
   COUEVAL      .276   .094      .249           2.953   .006     .553       1.810

(a) Dependent Variable: INSTEVAL

Figure 3.4 shows a plot of the studentized residuals versus the predicted values from SPSS. This plot shows essentially random variation of the points about the horizontal line of 0, indicating no violations of assumptions.

The issues of cross-validity power and outliers are considered later in this chapter, and are applied to this problem in section 3.15, after both topics have been covered.

Example 3.4: SAS REG on Doctoral Programs in Psychology

The data for this example come from a National Academy of Sciences report (1982) that, among other things, provided ratings on the quality of 46 research doctoral programs in psychology. The six variables used to predict quality are:


NFACULTY—number of faculty members in the program as of December 1980

NGRADS—number of program graduates from 1975 through 1980

PCTSUPP—percentage of program graduates from 1975–1979 who received fellowships or training grant support during their graduate education

PCTGRANT—percentage of faculty members holding research grants from the Alcohol, Drug Abuse, and Mental Health Administration, the National Institutes of Health, or the National Science Foundation at any time during 1978–1980

NARTICLE—number of published articles attributed to program faculty members from 1978–1980

PCTPUB—percentage of faculty with one or more published articles from 1978–1980

Both the stepwise and the MAXR procedures were used on these data to generate several regression models. SAS syntax for doing this, along with the correlation matrix, is given in Table 3.6.

Table 3.6: SAS Syntax for Stepwise and MAXR Runs on the National Academy of Sciences Data and the Correlation Matrix

DATA SINGER;
INPUT QUALITY NFACUL NGRADS PCTSUPP PCTGRT NARTIC PCTPUB;
LINES;
DATA LINES
(1) PROC REG SIMPLE CORR;
(2) MODEL QUALITY = NFACUL NGRADS PCTSUPP PCTGRT NARTIC PCTPUB/
        SELECTION = STEPWISE VIF R INFLUENCE;
RUN;
    MODEL QUALITY = NFACUL NGRADS PCTSUPP PCTGRT NARTIC PCTPUB/
        SELECTION = MAXR VIF R INFLUENCE;

(1) SIMPLE is needed to obtain descriptive statistics (means, variances, etc.) for all variables. CORR is needed to obtain the correlation matrix for the variables.

(2) In this MODEL statement, the dependent variable goes on the left and all predictors to the right of the equals sign. SELECTION is where we indicate which of the procedures we wish to use. There is a wide variety of other information we can get printed out. Here we have selected VIF (variance inflation factors), R (analysis of residuals, hat elements, Cook's D), and INFLUENCE (influence diagnostics).

Note that there are two separate MODEL statements for the two regression procedures being requested. Although multiple procedures can be obtained in one run, you must have a separate MODEL statement for each procedure.

CORRELATION MATRIX

           NFACUL  NGRADS  PCTSUPP  PCTGRT  NARTIC  PCTPUB  QUALITY
           1       2       3        4       5       6       7
NFACUL     1.000
NGRADS     0.692   1.000
PCTSUPP    0.395   0.337   1.000
PCTGRT     0.162   0.071   0.351    1.000
NARTIC     0.755   0.646   0.366    0.436   1.000
PCTPUB     0.205   0.171   0.347    0.490   0.593   1.000
QUALITY    0.622   0.418   0.582    0.700   0.762   0.585   1.000

One very nice feature of SAS REG is that Mallows’ Cp is given for each model. The

stepwise procedure terminated after four predictors entered. Here is the summary

table, exactly as it appears in the output:

Summary of Stepwise Procedure for Dependent Variable QUALITY

Step  Variable  Variable  Partial  Model   C(p)     F        Prob > F
      Entered   Removed   R**2     R**2
1     NARTIC              0.5809   0.5809  55.1185  60.9861  0.0001
2     PCTGRT              0.1668   0.7477  18.4760  28.4156  0.0001
3     PCTSUPP             0.0569   0.8045  7.2970   12.2197  0.0011
4     NFACUL              0.0176   0.8221  5.2161   4.0595   0.0505

This four-predictor model appears to be a reasonably good one. First, Mallows' Cp is very close to p (recall p = k + 1), that is, 5.216 ≈ 5, indicating that there is not much bias in the model. Second, R2 = .8221, indicating that we can predict quality quite well from the four predictors. Although this R2 is not adjusted, the adjusted value will not differ much because we have not selected from a large pool of predictors.
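Given a candidate model's SSE and the full model's MSE, Cp is a one-line computation. The helper below is our own sketch (argument names are ours, not SAS's):

```python
def mallows_cp(sse_p, mse_full, n, p):
    """Mallows' Cp for a candidate model.

    sse_p    -- error sum of squares of the candidate model
    mse_full -- mean square error of the model with all predictors
    n        -- sample size
    p        -- number of parameters in the candidate model
                (k + 1, counting the intercept, as the text recalls)
    """
    return sse_p / mse_full - (n - 2 * p)

# Sanity check: for the full model itself, SSE = MSE * (n - p),
# so Cp reduces to exactly p.
```

A Cp near p signals little bias, which is the comparison made for the 5.216 ≈ 5 result above.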

Selected output from the MAXR procedure run appears in Table 3.7. From Table 3.7 we can construct the following results:

BEST MODEL       VARIABLE(S)                      MALLOWS Cp
for 1 variable   NARTIC                           55.118
for 2 variables  PCTGRT, NFACUL                   16.859
for 3 variables  PCTPUB, PCTGRT, NFACUL           9.147
for 4 variables  NFACUL, PCTSUPP, PCTGRT, NARTIC  5.216

In this case, the same four-predictor model is selected by the MAXR procedure that

was selected by the stepwise procedure.


Table 3.7: Selected Results From the MAXR Run on the National Academy of Sciences Data

Maximum R-Square Improvement of Dependent Variable QUALITY

Step 1  Variable NARTIC Entered                           R-square = 0.5809  C(p) = 55.1185
        The above model is the best 1-variable model found.
Step 2  Variable PCTGRT Entered                           R-square = 0.7477  C(p) = 18.4760
Step 3  Variable NARTIC Removed, Variable NFACUL Entered  R-square = 0.7546  C(p) = 16.8597
        The above model is the best 2-variable model found.
Step 4  Variable PCTPUB Entered                           R-square = 0.7965  C(p) = 9.1472
        The above model is the best 3-variable model found.
Step 5  Variable PCTSUPP Entered                          R-square = 0.8191  C(p) = 5.9230
Step 6  Variable PCTPUB Removed, Variable NARTIC Entered  R-square = 0.8221  C(p) = 5.2161

            DF  Sum of Squares  Mean Square  F      Prob > F
Regression  4   3752.82299      938.20575    47.38  0.0001
Error       41  811.894403      19.80230
Total       45  4564.71739

Variable  Parameter  Standard  Type II Sum  F      Prob > F
          Estimate   Error     of Squares
INTERCEP  9.06133    1.64473   601.05272    30.35  0.0001
NFACUL    0.13330    0.06616   80.38802     4.06   0.0505
PCTSUPP   0.094530   0.03237   168.91498    8.53   0.0057
PCTGRT    0.24645    0.04414   617.20528    31.17  0.0001
NARTIC    0.05455    0.01955   154.24692    7.79   0.0079

3.9.1 Caveat on p Values for the "Significance" of Predictors

The p values that are given by SPSS and SAS for the "significance" of each predictor at each step for stepwise or the forward selection procedures should be treated tenuously, especially if your initial pool of predictors is moderate (15) or large (30). The reason is that the ordinary F distribution is not appropriate here, because the largest F is being selected out of all Fs available. Thus, the appropriate critical value will be larger (and can be considerably larger) than would be obtained from the ordinary null F distribution. Draper and Smith (1981) noted, "studies have shown, for example, that in some cases where an entry F test was made at the α level, the appropriate probability was qα, where there were q entry candidates at that stage" (p. 311). This is saying, for example, that an experimenter may think his or her probability of erroneously including a predictor is .05, when in fact the actual probability of erroneously including the predictor is .50 (if there were 10 entry candidates at that point).


Thus, the F tests are positively biased, and the greater the number of predictors, the larger the bias. Hence, these F tests should be used only as rough guides

to the usefulness of the predictors chosen. The acid test is how well the predictors

do under cross-validation. It can be unwise to use any of the stepwise procedures

with 20 or 30 predictors and only 100 subjects, because capitalization on chance

is great, and the results may well not cross-validate. To find an equation that probably

will have generalizability, it is best to carefully select (using substantive knowledge or

any previous related literature) a small or relatively small set of predictors.

Ramsey and Schafer (1997) comment on this issue:

The cutoff value of 4 for the F-statistic (or 2 for the magnitude of the t-statistic) corresponds roughly to a two-sided p-value of less than .05. The notion of "significance" cannot be taken seriously, however, because sequential variable selection is a form of data snooping.

At step 1 of a forward selection, the cutoff of F = 4 corresponds to a hypothesis test for a single coefficient. But the actual statistic considered is the largest of several F-statistics, whose sampling distribution under the null hypothesis differs sharply from an F-distribution.

To demonstrate this, suppose that a model contained ten explanatory variables and a single response, with a sample size of n = 100. The F-statistic for a single variable at step 1 would be compared to an F-distribution with 1 and 98 degrees of freedom, where only 4.8% of the F-ratios exceed 4. But suppose further that all eleven variables were generated completely at random (and independently of each other), from a standard normal distribution. What should be expected of the largest F-to-enter? This random generation process was simulated 500 times on a computer. The following display shows a histogram of the largest among ten F-to-enter values, along with the theoretical F-distribution. The two distributions are very different. At least one F-to-enter was larger than 4 in 38% of the simulated trials, even though none of the explanatory variables was associated with the response. (p. 93)

[Figure: Simulated distribution of the largest of 10 F-statistics — the theoretical F-distribution with 1 and 98 df (curve) plotted against a histogram of the largest of 10 F-to-enter values from 500 simulations, over F-statistic values from 1 to 15.]
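The quoted simulation is easy to re-run. The sketch below (our own code, using the same n = 100, 10 predictors, and 500 trials) relies on the fact that the step-1 F-to-enter for a single predictor equals the F test of its simple regression slope:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 100, 10, 500  # matches the quoted setup

def largest_f_to_enter(rng):
    """Largest of the k step-1 F-to-enter values when the response is
    pure noise.  For one predictor, F = (n - 2) r^2 / (1 - r^2),
    with 1 and n - 2 degrees of freedom."""
    X = rng.standard_normal((n, k))
    y = rng.standard_normal(n)
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(k)])
    return np.max((n - 2) * r**2 / (1 - r**2))

exceed = np.mean([largest_f_to_enter(rng) > 4 for _ in range(trials)])
print(exceed)  # close to the 38% reported in the quotation
```

Even with no real association, the largest of ten F-to-enter values clears the nominal cutoff of 4 far more often than the nominal 4.8% rate for a single F(1, 98) statistic.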


3.10 CHECKING ASSUMPTIONS FOR THE REGRESSION MODEL

Recall that in the linear regression model it is assumed that the errors are independent and follow a normal distribution with constant variance. The normality assumption can be checked through the use of the histogram of the standardized or studentized residuals, as we did in Table 3.2 for the simple regression example. The independence assumption implies that the subjects are responding independently of one another. This is an important assumption. We show in Chapter 6, in the context of analysis of variance, that if independence is violated only mildly, then the probability of a type I error may be several times greater than the level the experimenter thinks he or she is working at. Thus, instead of rejecting falsely 5% of the time, the experimenter may be rejecting falsely 25% or 30% of the time.

We now consider an example where this assumption was violated. Suppose researchers had asked each of 22 college freshmen to write four in-class essays in two 1-hour sessions, separated by a span of several months. Then, suppose a subsequent regression analysis were conducted to predict quality of essay response using an n of 88. Here, however, the responses for each subject on the four essays are obviously going to be correlated, so that there are not 88 independent observations, but only 22.

3.10.1 Residual Plots

Various types of plots are available for assessing potential problems with the regression model (Draper & Smith, 1981; Weisberg, 1985). One of the most useful plots graphs the studentized residuals (ri) versus the predicted values (ŷi). If the assumptions of the linear regression model are tenable, then these residuals should scatter randomly about a horizontal line defined by ri = 0, as shown in Figure 3.3a. Any systematic pattern or clustering of the residuals suggests a model violation(s). Three such systematic patterns are indicated in Figure 3.3. Figure 3.3b shows a systematic quadratic (second-degree equation) clustering of the residuals. For Figure 3.3c, the variability of the residuals increases systematically as the predicted values increase, suggesting a violation of the constant variance assumption.

It is important to note that the plots in Figure 3.3 are somewhat idealized, constructed to be clear violations. As Weisberg (1985) stated, "unfortunately, these idealized plots cover up one very important point; in real data sets, the true state of affairs is rarely this clear" (p. 131).

In FigureÂ€3.4 we present residual plots for three real data sets. The first plot is for the

Morrison data (the first computer example), and shows essentially random scatter of

the residuals, suggesting no violations of assumptions. The remaining two plots are

from a study by a statistician who analyzed the salaries of over 260 major league baseball hitters, using predictors such as career batting average, career home runs per time

at bat, years in the major leagues, and so on. These plots are from Moore and McCabe

(1989) and are used with permission. FigureÂ€ 3.4b, which plots the residuals versus


Figure 3.3: Residual plots of studentized residuals vs. predicted values.
[Panels, each plotting ri against ŷi: (a) plot when model is correct; (b) model violation: nonlinearity; (c) model violation: nonconstant variance; (d) model violation: nonlinearity and nonconstant variance.]

predicted salaries, shows a clear violation of the constant variance assumption. For lower predicted salaries there is little variability about 0, but for the high salaries there is considerable variability of the residuals. The implication of this is that the model will predict lower salaries quite accurately, but not so for the higher salaries.

Figure 3.4c plots the residuals versus number of years in the major leagues. This plot shows a clear curvilinear clustering, that is, quadratic. The implication of this curvilinear trend is that the regression model will tend to overestimate the salaries of players who have been in the majors only a few years or over 15 years, and it will underestimate the salaries of players who have been in the majors about five to nine years.

In concluding this section, note that if nonlinearity or nonconstant variance is found, there are various remedies. For nonlinearity, perhaps a polynomial model is needed. Or sometimes a transformation of the data will enable a nonlinear model to be approximated by a linear one. For nonconstant variance, weighted least squares is one possibility, or more commonly, a variance-stabilizing transformation (such as square root or log) may be used. We refer you to Weisberg (1985, chapter 6) for an excellent discussion of remedies for regression model violations.

Figure 3.4: Residual plots for three real data sets suggesting no violations, heterogeneous variance, and curvilinearity.
[Panel (a): SPSS scatterplot of regression studentized residuals vs. regression standardized predicted values, dependent variable INSTEVAL; random scatter. Panel (b): residuals vs. predicted value (salary), from −250 to 1250 (legend: A = 1 obs, B = 2 obs, C = 3 obs); the spread of the residuals increases with the predicted value. Panel (c): residuals vs. number of years in the majors, 1 to 24 (legend: A = 1 obs through F = 6 obs); curvilinear clustering.]

3.11 MODEL VALIDATION

We indicated earlier that it was crucial for the researcher to obtain some measure of how well the regression equation will predict on an independent sample(s) of data. That is, it was important to determine whether the equation had generalizability. We discuss here three forms of model validation, two being empirical and the other involving an estimate of average predictive power on other samples. First, we give a brief description of each form, and then elaborate on each form of validation.

1. Data splitting. Here the sample is randomly split in half. It does not have to be split evenly, but we use this for illustration. The regression equation is found on the so-called derivation sample (also called the screening sample, or the sample that "gave birth" to the prediction equation by Tukey). This prediction equation is then applied to the other sample (called validation or calibration) to see how well it predicts the y scores there.

2. Compute an adjusted R2. There are various adjusted R2 measures, or measures of shrinkage in predictive power, but they do not all estimate the same thing. The one most commonly used, and that which is printed out by both major statistical packages, is due to Wherry (1931). It is very important to note here that the Wherry formula estimates how much variance on y would be accounted for if we had derived the prediction equation in the population from which the sample was drawn. The Wherry formula does not indicate how well the derived equation will predict on other samples from the same population. A formula due to Stein (1960) does estimate average cross-validation predictive power. As of this writing it is not printed out by any of the three major packages. The formulas due to Wherry and Stein are presented shortly.

3. Use the PRESS statistic. As pointed out by several authors, in many instances one does not have enough data to be randomly splitting it. One can obtain a good measure of external predictive power by use of the PRESS statistic. In this approach the y value for each subject is set aside and a prediction equation derived on the remaining data. Thus, n prediction equations are derived and n true prediction errors are found. To be very specific, the prediction error for subject 1 is computed from the equation derived on the remaining (n − 1) data points, the prediction error for subject 2 is computed from the equation derived on the other (n − 1) data points, and so on. As Myers (1990) put it, "PRESS is important in that one has information in the form of n validations in which the fitting sample for each is of size n − 1" (p. 171).

3.11.1 Data Splitting

Recall that the sample is randomly split. The regression equation is found on the derivation sample and then is applied to the other sample (validation) to determine how well it will predict y there. Next, we give a hypothetical example, randomly splitting 100 subjects.

Derivation Sample (n = 50) → Prediction Equation: ŷi = 4 + .3x1 + .7x2

Validation Sample (n = 50):

y      x1     x2
6      1      .5
4.5    2      .3
. . .
7      5      .2

Now, using this prediction equation, we predict the y scores in the validation sample:

ŷ1 = 4 + .3(1) + .7(.5) = 4.65
ŷ2 = 4 + .3(2) + .7(.3) = 4.81
. . .
ŷ50 = 4 + .3(5) + .7(.2) = 5.64

The cross-validated R then is the correlation for the following set of scores:

y      ŷi
6      4.65
4.5    4.81
. . .
7      5.64


Random splitting and cross-validation can be easily done using SPSS and the filter

case function.
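The same data-splitting recipe can be sketched outside SPSS as well. The code below is our own illustration on simulated data (the variable names and coefficients are hypothetical, echoing the example above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data standing in for a real sample of 100 subjects
n = 100
X = rng.standard_normal((n, 2))
y = 4 + 0.3 * X[:, 0] + 0.7 * X[:, 1] + rng.standard_normal(n)

# Randomly split into derivation and validation halves
idx = rng.permutation(n)
derive, validate = idx[:50], idx[50:]

# Fit by least squares on the derivation sample only
Xd = np.column_stack([np.ones(50), X[derive]])
b, *_ = np.linalg.lstsq(Xd, y[derive], rcond=None)

# Apply the derivation-sample equation to the validation sample
Xv = np.column_stack([np.ones(50), X[validate]])
y_hat = Xv @ b

# Cross-validated R: correlation of observed and predicted y
r_cv = np.corrcoef(y[validate], y_hat)[0, 1]
```

Because the validation cases played no role in estimating the coefficients, r_cv is an honest estimate of predictive power, and it will typically fall below the maximized R in the derivation sample.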

3.11.2 Cross-Validation With SPSS

To illustrate cross-validation with SPSS, we use the Agresti data that appears on this book's accompanying website. Recall that the sample size here was 93. First, we randomly select a sample and do a stepwise regression on this random sample. We have selected an approximate random sample of 60%. It turns out that n = 60 in our random sample. This is done by clicking on DATA, choosing SELECT CASES from the dropdown menu, then choosing RANDOM SAMPLE and finally selecting a random sample of approximately 60%. When this is done a FILTER_$ variable is created, with value = 1 for those cases included in the sample and value = 0 for those cases not included in the sample. When the stepwise regression was done, the variables SIZE, NOBATH, and NEW were included as predictors and the coefficients, and so on, are given here for that run:

Coefficients(a)

                Unstandardized       Standardized
                Coefficients         Coefficients
Model           B        Std. Error  Beta           t       Sig.
1  (Constant)   −28.948  8.209                      −3.526  .001
   SIZE         78.353   4.692       .910           16.700  .000
2  (Constant)   −62.848  10.939                     −5.745  .000
   SIZE         62.156   5.701       .722           10.902  .000
   NOBATH       30.334   7.322       .274           4.143   .000
3  (Constant)   −62.519  9.976                      −6.267  .000
   SIZE         59.931   5.237       .696           11.444  .000
   NOBATH       29.436   6.682       .266           4.405   .000
   NEW          17.146   4.842       .159           3.541   .001

(a) Dependent Variable: PRICE

The next step in the cross-validation is to use the COMPUTE statement to compute the predicted values for the dependent variable. This COMPUTE statement is obtained by clicking on TRANSFORM and then selecting COMPUTE from the dropdown menu. When this is done the screen in Figure 3.5 appears.

Using the coefficients obtained from the regression we have:

PRED = −62.519 + 59.931*SIZE + 29.436*NOBATH + 17.146*NEW

We wish to correlate the predicted values in the other part of the sample with the y values there to obtain the cross-validated value. We click on DATA again, and use SELECT IF FILTER_$ = 0. That is, we select those cases in the other part of the sample. There are 33 cases in the other part of the random sample. When this is done all


Figure 3.5: SPSS screen that can be used to compute the predicted values for cross-validation.

the cases with FILTER_$ = 1 are selected, and a partial listing of the data appears as follows:

   price    size   nobed  nobath  new    filter_$  pred
1  48.50    1.10   3.00   1.00    .00    0         32.84
2  55.00    1.01   3.00   2.00    .00    0         56.88
3  68.00    1.45   3.00   2.00    .00    1         83.25
4  137.00   2.40   3.00   3.00    .00    0         169.62
5  309.40   3.30   4.00   3.00    1.00   0         240.71
6  17.50    .40    1.00   1.00    .00    1         −9.11
7  19.60    1.28   3.00   1.00    .00    0         43.63
8  24.50    .74    3.00   1.00    .00    0         11.27

Finally, we use the CORRELATION program to obtain the bivariate correlation between

PRED and PRICE (the dependent variable) in this sample of 33. That correlation is

.878, which is a drop from the maximized correlation of .944 in the derivation sample.
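As a check, the PRED column above can be reproduced directly from the fitted coefficients; a short sketch (our own code) for three of the listed cases:

```python
import numpy as np

# Coefficients from the derivation-sample regression (model 3 above)
b0, b_size, b_nobath, b_new = -62.519, 59.931, 29.436, 17.146

# (price, size, nobath, new) for three of the cases listed above
cases = np.array([
    [48.50,  1.10, 1.00, 0.00],
    [55.00,  1.01, 2.00, 0.00],
    [137.00, 2.40, 3.00, 0.00],
])

price = cases[:, 0]
pred = b0 + b_size * cases[:, 1] + b_nobath * cases[:, 2] + b_new * cases[:, 3]
print(np.round(pred, 2))  # matches the PRED column: 32.84, 56.88, 169.62
# With all 33 held-out cases, the cross-validated correlation is
# np.corrcoef(price, pred)[0, 1], reported as .878 in the text.
```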

3.11.3 Adjusted R2

Herzberg (1969) presented a discussion of various formulas that have been used to estimate the amount of shrinkage found in R2. As mentioned earlier, the one most commonly used, and due to Wherry, is given by

ρ̂² = 1 − [(n − 1) / (n − k − 1)] (1 − R²),  (11)

where ρ̂ is the estimate of ρ, the population multiple correlation coefficient. This is the adjusted R2 printed out by SAS and SPSS. Draper and Smith (1981) commented on Equation 11:

A related statistic . . . is the so-called adjusted R2 (Ra2), the idea being that the statistic Ra2 can be used to compare equations fitted not only to a specific set of data but also to two or more entirely different sets of data. The value of this statistic for the latter purpose is, in our opinion, not high. (p. 92)

Herzberg noted:

In applications, the population regression function can never be known and one is more interested in how effective the sample regression function is in other samples. A measure of this effectiveness is rc, the sample cross-validity. For any given regression function rc will vary from validation sample to validation sample. The average value of rc will be approximately equal to the correlation, in the population, of the sample regression function with the criterion. This correlation is the population cross-validity, ρc. Wherry's formula estimates ρ rather than ρc. (p. 4)

There are two possible models for the predictors: (1) regression—the values of the predictors are fixed, that is, we study y only for certain values of x, and (2) correlation—the predictors are random variables—this is a much more reasonable model for social science research. Herzberg presented the following formula for estimating ρc² under the correlation model:

ρ̂c² = 1 − [(n − 1) / (n − k − 1)] [(n − 2) / (n − k − 2)] [(n + 1) / n] (1 − R²),  (12)

where n is sample size and k is the number of predictors. It can be shown that ρc < ρ. If you are interested in cross-validity predictive power, then the Stein formula (Equation 12) should be used. As an example, suppose n = 50, k = 10 and R2 = .50. If you used the Wherry formula (Equation 11), then your estimate is

ρ̂² = 1 − (49/39)(.50) = .372,

whereas with the proper Stein formula you would obtain

ρ̂c² = 1 − (49/39)(48/38)(51/50)(.50) = .191.

In other words, use of the Wherry formula would give a misleadingly positive impression of the cross-validity predictive power of the equation. Table 3.8 shows how the estimated predictive power drops off using the Stein formula (Equation 12) for small to fairly large subject/variable ratios when R2 = .50, .75, and .85.
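Both formulas are one-liners in any language; the sketch below (function names are ours) reproduces the n = 50, k = 10, R2 = .50 example:

```python
def wherry_adjusted_r2(r2, n, k):
    """Equation 11: Wherry's adjusted R^2, an estimate of the
    population rho^2 (printed by SPSS and SAS)."""
    return 1 - (n - 1) / (n - k - 1) * (1 - r2)

def stein_cross_validity(r2, n, k):
    """Equation 12: Stein's estimate of average cross-validity
    predictive power under the correlation model."""
    return 1 - ((n - 1) / (n - k - 1)) * ((n - 2) / (n - k - 2)) \
             * ((n + 1) / n) * (1 - r2)

print(round(wherry_adjusted_r2(.50, 50, 10), 3))   # 0.372
print(round(stein_cross_validity(.50, 50, 10), 3)) # 0.191
```

The gap between the two estimates is exactly the misleading optimism the text warns about.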

3.11.4 PRESS Statistic

The PRESS approach is important in that one has n validations, each based on (n − 1)

observations. Thus, each validation is based on essentially the entire sample. This is

very important when one does not have large n, for in this situation data splitting is

really not practical. For example, if n = 60 and we have six predictors, randomly splitting the sample involves obtaining a prediction equation on only 30 subjects.

Chapter 3


Table 3.8: Estimated Cross-Validity Predictive Power for Stein Formula^a

Subject/variable ratio                                   Stein estimate
Small (5:1)            N = 50, k = 10, R² = .50          .191^b
                       N = 50, k = 10, R² = .75          .595
                       N = 50, k = 10, R² = .85          .757
Moderate (10:1)        N = 100, k = 10, R² = .50         .374
                       N = 100, k = 10, R² = .75         .690
Fairly large (15:1)    N = 150, k = 10, R² = .50         .421

^a If there is selection of predictors from a larger set, then the median should be used as the k. For example, if four predictors were selected from 30 by, say, stepwise regression, then the median between 4 and 30 (i.e., 17) should be the k used in the Stein formula.

^b If we were to apply the prediction equation to many other samples from the same population, then on the average we would account for 19.1% of the variance on y.

Recall that in deriving the prediction (via the least squares approach), the sum of the

squared errors is minimized. The PRESS residuals, on the other hand, are true prediction errors, because the y value for each subject was not simultaneously used for fit and

model assessment. Let us denote the predicted value for subject i, where that subject was not used in developing the prediction equation, by ŷ(−i). Then the PRESS residual for each subject is given by

ê(−i) = yi − ŷ(−i),

and the PRESS sum of squared residuals is given by

PRESS = ∑ ê²(−i). (13)

Therefore, one might prefer the model with the smallest PRESS value. The preceding PRESS value can be used to calculate an R²-like statistic that more accurately reflects the generalizability of the model. It is given by

R²PRESS = 1 − PRESS / ∑(yi − ȳ)². (14)

Importantly, the SAS REG program routinely prints out PRESS, although it is called PREDICTED RESID SS (PRESS). Given this value, it is a simple matter to calculate the R²PRESS statistic, because the variance of y is s²y = ∑(yi − ȳ)² / (n − 1).
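A convenient identity (standard in the regression literature, though not shown in the text) is that each PRESS residual equals the ordinary residual divided by 1 − hii, so the n leave-one-out fits never have to be run explicitly. A sketch with simulated data, checking the identity against explicit refitting:

```python
import numpy as np

# PRESS two ways: explicit leave-one-out refits versus the hat-matrix
# identity e_(-i) = e_i / (1 - h_ii). Data are simulated for illustration.

rng = np.random.default_rng(1)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = X @ np.array([2.0, 1.0, 0.5, -1.0]) + rng.normal(size=n)

# Full-sample fit and the hat-matrix shortcut
b = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)
press_hat = np.sum((e / (1 - h)) ** 2)

# Explicit leave-one-out refits (Equation 13)
press_loo = 0.0
for i in range(n):
    keep = np.arange(n) != i
    bi = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    press_loo += (y[i] - X[i] @ bi) ** 2

r2_press = 1 - press_hat / np.sum((y - y.mean()) ** 2)  # Equation 14
print(np.isclose(press_hat, press_loo))  # True
```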

3.12 IMPORTANCE OF THE ORDER OF THE PREDICTORS

The order in which the predictors enter a regression equation can make a great deal

of difference with respect to how much variance on y they account for, especially

for moderate or highly correlated predictors. Only for uncorrelated predictors (which


MULTIPLE REGRESSION FOR PREDICTION

would rarely occur in practice) does the order not make a difference. We give two

examples to illustrate.

Example 3.5

A dissertation by Crowder (1975) attempted to predict ratings of individuals having trainable mental retardation (TMs) using IQ (x2) and scores from a Test of Social Inference (TSI; x1). He was especially interested in showing that the TSI had incremental predictive validity. The criterion was the average of the ratings by two individuals in charge of the TMs. The intercorrelations among the variables were:

rx1x2 = .59, ryx2 = .54, ryx1 = .566

Now, consider two orderings for the predictors, one where TSI is entered first, and the

other ordering where IQ is entered first.

First ordering    % of variance
TSI               32.04
IQ                6.52

Second ordering   % of variance
IQ                29.16
TSI               9.40

The first ordering conveys an overly optimistic view of the utility of the TSI scale.

Because we know that IQ will predict ratings, it should be entered first in the equation

(as a control variable), and then TSI to see what its incremental validity is—that is,

how much it adds to predicting ratings above and beyond what IQ does. Because of

the moderate correlation between IQ and TSI, the amount of variance accounted for by

TSI differs considerably when entered first versus second (32.04 vs. 9.4).

The 9.4% of variance accounted for by TSI when entered second is obtained through

the use of the semipartial correlation previously introduced:

ry1·2(s) = [.566 − .54(.59)] / √(1 − .59²) = .306  ⇒  r²y1·2(s) = .094
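The same computation can be sketched in a few lines (the function is ours; the correlations are those reported above):

```python
from math import sqrt

# First-order semipartial correlation of y with x1, partialing x2 out of x1 only.
def semipartial(ry1, ry2, r12):
    return (ry1 - ry2 * r12) / sqrt(1 - r12 ** 2)

sr = semipartial(0.566, 0.54, 0.59)     # TSI entered after IQ
print(round(sr, 3), round(sr ** 2, 3))  # 0.306 0.094
```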

Example 3.6

Consider the following correlations among three predictors and an outcome:

       x1     x2     x3
y     .60    .70    .70
x1           .70    .60
x2                  .80

Notice that the predictors are strongly intercorrelated.

How much variance in y will x3 account for if entered first? If entered last?

If x3 is entered first, then it will account for (.7)² × 100, or 49%, of the variance on y—a sizable amount.


To determine how much variance x3 will account for if entered last, we need to compute the following second-order semipartial correlation:

ry3·12(s) = [ry3·1(s) − ry2·1(s) r23·1] / √(1 − r²23·1)

We show the details next for obtaining ry3·12(s):

ry2·1(s) = (ry2 − ry1 r21) / √(1 − r²21) = [.70 − (.6)(.7)] / √(1 − .49) = .28 / .714 = .392

ry3·1(s) = (ry3 − ry1 r31) / √(1 − r²31) = [.70 − (.6)(.6)] / √(1 − .36) = .34 / .80 = .425

r23·1 = (r23 − r21 r31) / [√(1 − r²21) √(1 − r²31)] = [.80 − (.7)(.6)] / [√(1 − .49) √(1 − .36)] = .665

ry3·12(s) = [.425 − .392(.665)] / √(1 − .665²) = .164 / .746 = .22

r²y3·12(s) = (.22)² = .048

Thus, when x3 enters last it accounts for only 4.8% of the variance on y. This is a tremendous drop from the 49% it accounted for when entered first. Because the three predictors are so highly correlated, most of the variance on y that x3 could have accounted

for has already been accounted for by x1 and x2.
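The whole chain of calculations above can be verified numerically; a sketch using the given correlations:

```python
from math import sqrt

# Second-order semipartial correlation of y with x3, controlling x1 and x2.
ry1, ry2, ry3 = 0.60, 0.70, 0.70
r12, r13, r23 = 0.70, 0.60, 0.80

ry2_1s = (ry2 - ry1 * r12) / sqrt(1 - r12 ** 2)   # about .392
ry3_1s = (ry3 - ry1 * r13) / sqrt(1 - r13 ** 2)   # about .425
r23_1 = (r23 - r12 * r13) / (sqrt(1 - r12 ** 2) * sqrt(1 - r13 ** 2))  # about .665
ry3_12s = (ry3_1s - ry2_1s * r23_1) / sqrt(1 - r23_1 ** 2)

print(round(ry3_12s, 2), round(ry3_12s ** 2, 3))  # 0.22 0.048
```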

3.12.1 Controlling the Order of Predictors in the Equation

With the forward and stepwise selection procedures, the order of entry of predictors

into the regression equation is determined via a mathematical maximization procedure.

That is, the first predictor to enter is the one with the largest (maximized) correlation

with y, the second to enter is the predictor with the largest partial correlation, and so

on. However, there are situations where you may not want the mathematics to determine the order of entry of predictors. For example, suppose we have a five-predictor

problem, with two proven predictors from previous research. The other three predictors are included to see if they have any incremental validity. In this case we would

want to enter the two proven predictors in the equation first (as control variables), and

then let the remaining three predictors “fight it out” to determine whether any of them

add anything significant to predicting y above and beyond the proven predictors.

With SPSS REGRESSION or SAS REG we can control the order of predictors, and in

particular, we can force predictors into the equation. In Table 3.9 we illustrate how this

is done for SPSS and SAS for the five-predictor situation.


Table 3.9: Controlling the Order of Predictors and Forcing Predictors Into the Equation With SPSS REGRESSION and SAS REG

SPSS REGRESSION

TITLE 'FORCING X3 AND X4 & USING STEPWISE SELECTION FOR OTHERS'.
DATA LIST FREE/Y X1 X2 X3 X4 X5.
BEGIN DATA.
DATA LINES
END DATA.
LIST.
REGRESSION VARIABLES = Y X1 X2 X3 X4 X5
  /DEPENDENT = Y
(1)  /METHOD = ENTER X3 X4
  /METHOD = STEPWISE X1 X2 X5.

SAS REG

DATA FORCEPR;
INPUT Y X1 X2 X3 X4 X5;
LINES;
DATA LINES
PROC REG SIMPLE CORR;
(2)  MODEL Y = X3 X4 X1 X2 X5 / INCLUDE = 2 SELECTION = STEPWISE;

(1) The METHOD = ENTER subcommand forces variables X3 and X4 into the equation, and the METHOD = STEPWISE subcommand will determine whether any of the remaining predictors (X1, X2, or X5) have semipartial correlations large enough to be "significant." If we wished to force in predictors X1, X3, and X4 and then use STEPWISE, the subcommands are /METHOD = ENTER X1 X3 X4 /METHOD = STEPWISE X2 X5.

(2) The INCLUDE = 2 option forces the first 2 predictors listed in the MODEL statement into the prediction equation. Thus, if we wish to force X3 and X4, we must list them first in the MODEL statement.
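Outside SPSS and SAS, forced entry followed by testing the remaining predictors amounts to comparing R² for nested models. A minimal sketch with simulated (purely hypothetical) data, forcing X3 and X4 in first and then checking how much R² each remaining predictor adds:

```python
import numpy as np

# Hierarchical (forced) entry sketch: X3 and X4 go in first; X1, X2, X5
# are then judged by their increments to R^2. Data are simulated.

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 5))   # columns 0..4 correspond to X1..X5
y = 0.5 * X[:, 2] + 0.5 * X[:, 3] + 0.4 * X[:, 0] + rng.normal(size=n)

def r2(cols):
    """R^2 for the model using the listed predictor columns plus an intercept."""
    Z = np.column_stack([np.ones(n), X[:, cols]])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

base = r2([2, 3])             # forced block: X3, X4
for c in (0, 1, 4):           # candidates: X1, X2, X5
    print(f"X{c + 1} adds {r2([2, 3, c]) - base:.3f} to R^2")
```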

3.13 OTHER IMPORTANT ISSUES

3.13.1 Preselection of Predictors

An industrial psychologist hears about the predictive power of multiple regression and

is excited. He wants to predict success on the job, and gathers data for 20 potential

predictors on 70 subjects. He obtains the correlation matrix for the variables and then

picks out six predictors that correlate significantly with success on the job and that

have low intercorrelations among themselves. The analysis is run, and the R2 is highly

significant. Furthermore, he is able to explain 52% of the variance on y (more than

other investigators have been able to do). Are these results generalizable? Probably

not, since what he did involves a double capitalization on chance:

1. In preselecting the predictors from a larger set, he is capitalizing on chance. Some

of these variables would have high correlations with y because of sampling error,

and consequently their correlations would tend to be lower in another sample.

2. The mathematical maximization involved in obtaining the multiple correlation

involves capitalizing on chance.


Preselection of predictors is common among many researchers who are unaware of

the fact that this tends to make their results sample specific. Nunnally (1978) had a

nice discussion of the preselection problem, and Wilkinson (1979) showed the considerable positive bias preselection can have on the test of significance of R2 in forward

selection. The following example from his tables illustrates. The critical value for a

four-predictor problem (n = 35) at the .05 level is .26, and the appropriate critical value for the same n and α level, when preselecting four predictors from a set of 20 predictors, is .51. Unawareness of the positive bias has led to many results in the literature that are

not replicable, for as Wilkinson noted:

A computer assisted search for articles in psychology using stepwise regression

from 1969 to 1977 located 71 articles. Out of these articles, 66 forward selection analyses reported as significant by the usual F tests were found. Of these 66 analyses, 19 were not significant by [his] Table 1. (p. 172)

It is important to note that neither the Wherry nor the Stein formula takes preselection into account. Hence, the following from Cohen and Cohen (1983) should be seriously considered: "A more realistic estimate of the shrinkage is obtained by substituting for k the total number of predictors from which the selection was made" (p. 107). In other words, they are saying that if four predictors were selected out of 15, use k = 15 in the Stein formula (Equation 12). While this may be conservative, using four will certainly lead to a positive bias. Probably a median value between 4 and 15 would be closer to the mark, although this needs further investigation.

3.13.2 Positive Bias of R²

A study of California principals and superintendents illustrates how capitalization on

chance in multiple regression (if the researcher is unaware of it) can lead to misleading conclusions. Here, the interest was in validating a contingency theory of leadership, that is, that success in administering schools calls for different personality

styles depending on the social setting of the school. The theory seems plausible, and

in what follows we are not criticizing the theory per se, but the empirical validation

of it. The procedure that was used to validate the theory involved establishing a relationship between various personality attributes (24 predictors) and several measures

of administrative success in heterogeneous samples with respect to social setting

using multiple regression, that is, finding the multiple R for each measure of success

on 24 predictors. Then, it was shown that the magnitude of the relationships was

greater for subsamples homogeneous with respect to social setting. The problem

was that the sample sizes were much too small for a reliable prediction equation. Here

we present the total sample sizes and the subsamples homogeneous with respect to

social setting:

                   Total      Subsample(s)
Superintendents    n = 77     n = 29
Principals         n = 147    n1 = 35, n2 = 61, n3 = 36


Indeed, in the homogeneous samples, the Rs were on the average .34 greater than in

the total samples; however, this was an artifact of the multiple regression procedure in

this case. As one proceeds from the total to the subsamples the number of predictors

(k) approaches sample size (n). For this situation the multiple correlation increases to 1

regardless of whether there is any relationship between y and the set of predictors. And

in three of four subsamples the n/k ratios are very close to 1. In particular, it is the case

that E(R²) = k / (n − 1) when the population multiple correlation is 0 (Morrison, 1976). To dramatize this, consider Subsample 1 for the principals. Then E(R²) = 24 / 34 = .706, even when there is no relationship between y and the set of predictors. The F critical value required just for statistical significance of R at .05 is 2.74, which implies R² = .868, just to be confident that the population multiple correlation is different from 0.
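The null expectation E(R²) = k / (n − 1) is easy to confirm by simulation; a sketch using the principals' first-subsample sizes, with y generated independently of the predictors so that any observed R² is pure capitalization on chance:

```python
import numpy as np

# Average R^2 when the population multiple correlation is exactly 0.
# With k = 24 predictors and n = 35, E(R^2) = 24/34 = .706.

rng = np.random.default_rng(42)
n, k, reps = 35, 24, 500
r2s = []
for _ in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
    y = rng.normal(size=n)                    # unrelated to the predictors
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    r2s.append(1 - resid @ resid / ((y - y.mean()) @ (y - y.mean())))

print(np.mean(r2s))   # should be near 24/34 = .706
```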

3.13.3 Suppressor Variables

Lord and Novick (1968) stated the following two rules of thumb for the selection of

predictor variables:

1. Choose variables that correlate highly with the criterion but that have low

intercorrelations.

2. To these variables add other variables that have low correlations with the criterion

but that have high correlations with the other predictors. (p. 271)

At first blush, the second rule of thumb may not seem to make sense, but what they

are talking about is suppressor variables. To illustrate specifically why a suppressor

variable can help in prediction, we consider a hypothetical example.

Example 3.7

Consider a two-predictor problem with the following correlations among the variables:

ryx1 = .60, ryx2 = 0, and rx1x2 = .50.

Note that x1 by itself accounts for (.6)² = .36, or 36%, of the variance on y. Now consider entering x2 into the regression equation first. It will of course account for no variance on y, and it may seem that we have gained nothing. But if we now enter x1 into the equation (after x2), its predictive power is enhanced. This is because there is irrelevant variance on x1 (i.e., variance that does not relate to y) that is related to x2. In this case that irrelevant variance is (.5)² = .25, or 25%. When this irrelevant variance is partialed out (or suppressed), the remaining variance on x1 is more strongly tied to y. Calculation of the semipartial correlation shows this:

ry1·2(s) = (ryx1 − ryx2 rx1x2) / √(1 − r²x1x2) = (.60 − 0) / √(1 − .5²) = .693


Thus, r²y1·2(s) = .48, and the predictive power of x1 has increased from accounting for 36% to accounting for 48% of the variance on y.

3.14 OUTLIERS AND INFLUENTIAL DATA POINTS

Because multiple regression is a mathematical maximization procedure, it can be very

sensitive to data points that “split off” or are different from the rest of the points, that

is, to outliers. Just one or two such points can affect the interpretation of results, and

it is certainly moot as to whether one or two points should be permitted to have such

a profound influence. Therefore, it is important to be able to detect outliers and influential points. There is a distinction between the two because a point that is an outlier

(either on y or for the predictors) will not necessarily be influential in affecting the

regression equation.

The fact that a simple examination of summary statistics can result in misleading

interpretations was illustrated by Anscombe (1973). He presented four data sets that

yielded the same summary statistics (i.e., the same regression coefficients and the same r² = .667).

In one case, linear regression was perfectly appropriate. In the second case, however,

a scatterplot showed that curvilinear regression was appropriate. In the third case, linear regression was appropriate for 10 of 11 points, but the other point was an outlier

and possibly should have been excluded from the analysis. In the fourth data set, the

regression line was completely determined by one observation, which if removed,

would not allow for an estimate of the slope.

Two basic approaches can be used in dealing with outliers and influential points. We

consider the approach of having an arsenal of tools for isolating these important points

for further study, with the possibility of deleting some or all of the points from the

analysis. The other approach is to develop procedures that are relatively insensitive to

wild points (i.e., robust regression techniques). (Some pertinent references for robust

regression are Hogg, 1979; Huber, 1977; Mosteller & Tukey, 1977.) It is important to

note that even robust regression may be ineffective when there are outliers in the space

of the predictors (Huber, 1977). Thus, even in robust regression there is a need for case

analysis. Also, a modification of robust regression (bounded-influence regression) has

been developed by Krasker and Welsch (1979).

3.14.1 Data Editing

Outliers and influential cases can occur because of recording errors. Consequently,

researchers should give more consideration to the data editing phase of the data analysis process (i.e., always listing the data and examining the list for possible errors).

There are many possible sources of error from the initial data collection to the final

data entry. First, some of the data may have been recorded incorrectly. Second, even

if recorded correctly, when all of the data are transferred to a single sheet or a few

sheets in preparation for data entry, errors may be made. Finally, even if no errors are


made in these first two steps, errors could still be made in entering the data into the

computer.

There are various statistics for identifying outliers on y and on the set of predictors, as

well as for identifying influential data points. We discuss first, in brief form, a statistic

for each, with advice on how to interpret that statistic. Equations for the statistics are

given later in the section, along with a more extensive and somewhat technical discussion for those who are interested.

3.14.2 Measuring Outliers on y

For finding participants whose predicted scores are quite different from their actual y

scores (i.e., they do not fit the model well), the studentized residuals (ri) can be used.

If the model is correct, then they have a normal distribution with a mean of 0 and a

standard deviation of 1. Thus, about 95% of the ri should lie within two standard deviations of the mean and about 99% within three standard deviations. Therefore, any

studentized residual greater than about 3 in absolute value is unusual and should be

carefully examined.

3.14.3 Measuring Outliers on Set of Predictors

The hat elements (hii) or leverage values can be used here. It can be shown that the

hat elements lie between 0 and 1, and that the average hat element is p / n, where

p = k + 1. Because of this, Hoaglin and Welsch (1978) suggested that 2p / n may be

considered large. However, this can lead to more points than we really would want to

examine, and you should consider using 3p / n. For example, with six predictors and

100 subjects, any hat element, or leverage value, greater than 3(7) / 100 = .21 should

be carefully examined. This is a very simple and useful rule for quickly identifying

participants who are very different from the rest of the sample on the set of predictors.

Note that instead of leverage SPSS reports a centered leverage value. For this statistic,

the earlier guidelines for identifying outlying values are now 2k / n (instead of 2p / n)

and 3k / n (instead of 3p / n).

3.14.4 Measuring Influential Data Points

An influential data point is one that when deleted produces a substantial change in at

least one of the regression coefficients. That is, the prediction equations with and without the influential point are quite different. Cook’s distance (Cook, 1977) is very useful for identifying influential points. It measures the combined influence of the case’s

being an outlier on y and on the set of predictors. Cook and Weisberg (1982) indicated

that a Cook's distance greater than 1 would generally be considered large. This provides a "red flag" when examining computer output for identifying influential points.

All of these diagnostic measures are easily obtained from SPSS REGRESSION (see

Table 3.3) or SAS REG (see Table 3.6).


3.14.5 Measuring Outliers on y

The raw residuals, êi = yi − ŷi, in linear regression are assumed to be independent, to have a mean of 0, to have constant variance, and to follow a normal distribution. However, because the n residuals have only n − k degrees of freedom (k degrees of freedom were lost in estimating the regression parameters), they can't be independent. If n is large relative to k, however, then the êi are essentially independent. Also, the residuals have different variances. It can be shown (Draper & Smith, 1981, p. 144) that the variance for the ith residual is given by:

s²êi = σ̂²(1 − hii), (15)

where σ̂² is the estimate of variance not predictable from the regression (MSres), and

hii is the ith diagonal element of the hat matrix X(X′X)−1X′. Recall that X is the score

matrix for the predictors. The hii play a key role in determining the predicted values for

the subjects. Recall that

β̂ = (X′X)−1X′y and ŷ = Xβ̂.

Therefore, ŷ = X(X′X)−1 X′y by simple substitution. Thus, the predicted values for

y are obtained by postmultiplying the hat matrix by the column vector of observed

scores on y.

Because the predicted values (ŷi) and the residuals are related by êi = yi − ŷi, it should not be surprising, in view of the foregoing, that the variability of the êi would be affected by the hii.

Because the residuals have different variances, we need to properly scale the residuals

so that we can meaningfully compare them. This is completely analogous to what is

done in comparing raw scores from distributions with different variances and different

means. There, one means of standardizing was to convert to z scores, using zi = (xi − x̄) / s. Here we also subtract off the mean (which is 0 and hence has no effect) and then divide by the standard deviation, which is the square root of Equation 15. Thus, the studentized residual is then

ri = (êi − 0) / (σ̂√(1 − hii)) = êi / (σ̂√(1 − hii)). (16)

Because the ri are assumed to have a normal distribution with a mean of 0 (if the

model is correct), then about 99% of the ri should lie within three standard deviations

of the mean.
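These quantities come straight from the design matrix; a sketch with simulated data (one property worth noting, though not stated in the text: the leverages hii sum to p = k + 1, which is why their average is p/n):

```python
import numpy as np

# Leverages (hat diagonals) and internally studentized residuals (Equation 16).
rng = np.random.default_rng(3)
n, k = 40, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # p = k + 1 columns
y = X @ rng.normal(size=k + 1) + rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T    # hat matrix
h = np.diag(H)                          # leverages h_ii
e = y - H @ y                           # raw residuals
ms_res = (e @ e) / (n - (k + 1))        # sigma-hat squared (MS_res)
r = e / np.sqrt(ms_res * (1 - h))       # studentized residuals

print(round(h.sum(), 6))                # 5.0 (trace of H equals p)
```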

3.14.6 Measuring Outliers on the Predictors

The hii are one measure of the extent to which the ith observation is an outlier for the

predictors. The hii are important because they can play a key role in determining the

predicted values for the subjects. Recall that


β̂ = (X′X)−1X′y and ŷ = Xβ̂.

Therefore, ŷ = X(X′X)−1X′y by simple substitution.

Thus, the predicted values for y are obtained by postmultiplying the hat matrix by the

column vector of observed scores on y. It can be shown that the hii lie between 0 and 1, and that the average value of hii is p / n, where p = k + 1. From Equation 15 it can be seen that when hii is large (i.e., near 1), then the variance for the ith residual is near 0. This means that ŷi ≈ yi; that is, an observation may fit the linear model well and yet be an influential data point. This second diagnostic, then, is "flagging" observations that need to be examined carefully because they may have an unusually large influence on the regression coefficients.

What is a significant value for the hii? Hoaglin and Welsch (1978) suggested that 2p / n may be considered large. Belsley et al. (1980, pp. 67–68) showed that when the set of predictors is multivariate normal, then (n − p)[hii − 1/n] / [(1 − hii)(p − 1)] is distributed as F with (p − 1) and (n − p) degrees of freedom. Rather than computing this F and comparing it against a critical value, however, Hoaglin and Welsch suggested 2p / n as a rough guide for a large hii.

An important point to remember concerning the hat elements is that the points they

identify will not necessarily be influential in affecting the regression coefficients.

A second measure for identifying outliers on the predictors is Mahalanobis' (1936) distance for case i (D²i). This measure indicates how far a case is from the centroid of all cases for the predictors. A large distance indicates an observation that is an outlier for the predictors. The Mahalanobis distance can be written in terms of the covariance matrix S as

D²i = (xi − x̄)′S−1(xi − x̄), (17)

where xi is the vector of the data for case i and x̄ is the vector of means (centroid) for the predictors.

For a better understanding of D²i, consider two small data sets. The first set has two predictors. In Table 3.10, the data are presented, as well as the D²i and the descriptive statistics (including S). The D²i for Cases 6 and 10 are large because the score for Case 6 on x1 (150) was deviant, whereas for Case 10 the score on x2 (97) was very deviant. The graphical split-off of Cases 6 and 10 is quite vivid and was displayed in Figure 1.2 in Chapter 1.

In the previous example, because the numbers of predictors and participants were

few, it would have been fairly easy to spot the outliers even without the Mahalanobis


distance. However, in practical problems with 200 or 300 cases and 10 predictors,

outliers are not always easy to spot and can occur in more subtle ways. For example,

a case may have a large distance because there are moderate to fairly large differences

on many of the predictors. The second small data set, with four predictors and N = 15, in Table 3.10 illustrates this latter point. The D²i for Case 13 is quite large (7.97) even though the scores for that subject do not split off in a striking fashion for any of the predictors. Rather, it is a cumulative effect that produces the separation.

Table 3.10: Raw Data and Mahalanobis Distances for Two Small Data Sets

Case    Y      X1     X2     X3     X4     D²i
1       476    111    68     17     81     0.30
2       457    92     46     28     67     1.55
3       540    90     50     19     83     1.47
4       551    107    59     25     71     0.01
5       575    98     50     13     92     0.76
6       698    150    66     20     90     5.48
7       545    118    54     11     101    0.47
8       574    110    51     26     82     0.38
9       645    117    59     18     87     0.23
10      556    94     97     12     69     7.24
11      634    130    57     16     97
12      637    118    51     19     78
13      390    91     44     14     64
14      562    118    61     20     103
15      560    109    66     13     88

Summary statistics (first data set, Cases 1–10):

        Y            X1           X2
M       561.70000    108.70000    60.00000
SD      70.74846     17.73289     14.84737

(1) S = | 314.455    19.483 |
        |  19.483   220.444 |

Note: The first data set consists of Cases 1–10 on Y, X1, and X2, with the corresponding D²i shown. The 10 case numbers having the largest D²i for the four-predictor data set are: 10, 10.859; 13, 7.977; 6, 7.223; 2, 5.048; 14, 4.874; 7, 3.514; 5, 3.177; 3, 2.616; 8, 2.561; 4, 2.404.

(1) Calculation of D²i for Case 6:

D²6 = (41.3, 6) S−1 (41.3, 6)′, with

S−1 = |  .00320   −.00029 |
      | −.00029    .00456 | ,

so D²6 = 5.484.


How large must D²i be before you can say that case i is significantly separated from the rest of the data? Johnson and Wichern (2007) note that these distances, if multivariate normality holds, approximately follow a chi-square distribution with degrees of freedom equal to the number of predictors (k), with this approximation improving for larger samples. A common practice is to consider a multivariate outlier to be present when an obtained Mahalanobis distance exceeds a chi-square critical value at a conservative alpha level (e.g., .001) with k degrees of freedom. Referring back to the example with two predictors, if we assume multivariate normality, then neither Case 6 (D²i = 5.48) nor Case 10 (D²i = 7.24) would be considered a multivariate outlier at the .001 level, as the chi-square critical value is 13.815.
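Equation 17 applied to the first (two-predictor, 10-case) data set of Table 3.10 reproduces these distances; a sketch using only numpy (the .001 chi-square critical value of 13.815 for 2 df is taken from standard tables, not computed here):

```python
import numpy as np

# Mahalanobis distances (Equation 17) for the 10-case, two-predictor data set.
x1 = np.array([111, 92, 90, 107, 98, 150, 118, 110, 117, 94], float)
x2 = np.array([68, 46, 50, 59, 50, 66, 54, 51, 59, 97], float)
X = np.column_stack([x1, x2])

S_inv = np.linalg.inv(np.cov(X, rowvar=False))   # inverse covariance matrix
d = X - X.mean(axis=0)                           # deviations from the centroid
D2 = np.einsum('ij,jk,ik->i', d, S_inv, d)       # D_i^2 for each case

print(np.round(D2[[5, 9]], 2))   # Cases 6 and 10: about 5.48 and 7.24
print(bool(D2.max() < 13.815))   # True: no outlier at the .001 level
```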

3.14.7 Measures for Influential Data Points

3.14.7.1 Cook’s Distance

Cook's distance (CD) is a measure of the change in the regression coefficients that would occur if a case were omitted, thus revealing which cases are most influential in affecting the regression equation. It is affected by the case's being an outlier both on y and on the set of predictors. Cook's distance is given by

CDi = (β̂ − β̂(−i))′X′X(β̂ − β̂(−i)) / [(k + 1) MSres], (18)

where β̂(−i) is the vector of estimated regression coefficients with the ith data point deleted, k is the number of predictors, and MSres is the residual (error) variance for the full data set.

Removing the ith data point should keep β̂(−i) close to β̂ unless the ith observation is an outlier. Cook and Weisberg (1982, p. 118) indicated that a CDi > 1 would generally be considered large. Cook's distance can be written in an alternative, revealing form:

CDi = [r²i / (k + 1)] [hii / (1 − hii)], (19)

where ri is the studentized residual and hii is the hat element. Thus, Cook’s distance

measures the joint (combined) influence of the case being an outlier on y and on the

set of predictors. A case may be influential because it is a significant outlier only on y,

for example,

k = 5, n = 40, ri = 4, hii = .3: CDi > 1,

or because it is a significant outlier only on the set of predictors, for example,

k = 5, n = 40, ri = 2, hii = .7: CDi > 1.

Note, however, that a case may not be a significant outlier on either y or on the set of

predictors, but may still be influential, as in the following:


k = 3, n = 20, hii = .4, ri = 2.5: CDi > 1.
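Equation 19 makes these three illustrations easy to check (the function name is ours):

```python
# Cook's distance from a studentized residual and a leverage (Equation 19).
def cooks_d(r, h, k):
    return r ** 2 / (k + 1) * (h / (1 - h))

print(round(cooks_d(4.0, 0.3, 5), 2))   # 1.14: outlier on y only
print(round(cooks_d(2.0, 0.7, 5), 2))   # 1.56: outlier on the predictors only
print(round(cooks_d(2.5, 0.4, 3), 2))   # 1.04: influential without extreme r or h
```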

3.14.7.2 Dffits

This statistic (Belsley et al., 1980) indicates how much the ith fitted value will change if the ith observation is deleted. It is given by

DFFITSi = (ŷi − ŷi(−i)) / (s(−i)√hii), (20)

where ŷi(−i) is the fitted value for case i when that case is deleted, and s(−i) is the estimate of σ with the ith case deleted. The numerator simply expresses the difference between the fitted values, with the ith point in and with it deleted. The denominator provides a measure of variability, since s²ŷi = σ²hii. Therefore, DFFITS indicates the number of estimated standard errors that the fitted value changes when the ith point is deleted.

3.14.7.3 Dfbetas

These are very useful in detecting how much each regression coefficient will change if the ith observation is deleted. They are given by

DFBETAij = (bj − bj(−i)) / SE(bj(−i)), (21)

where bj(−i) is the jth regression coefficient when the ith case is deleted.

Each DFBETA therefore indicates the number of standard errors a given coefficient

changes when the ith point is deleted. DFBETAS are available on SAS and SPSS, with

SPSS referring to these as standardized DFBETAS. Any DFBETA greater than 2 in absolute value indicates a sizable change and should be investigated. Thus, although Cook's distance

is a composite measure of influence, the DFBETAS indicate which specific coefficients are being most affected.

It was mentioned earlier that a data point that is an outlier either on y or on the set of

predictors will not necessarily be an influential point. FigureÂ€3.6 illustrates how this

can happen. In this simplified example with just one predictor, both points A and B are

outliers on x. Point B is influential, and to accommodate it, the least squares regression

line will be pulled downward toward the point. However, Point A is not influential

because this point closely follows the trend of the rest of the data.

3.14.8 Summary

In summary, then, studentized residuals can be inspected to identify y outliers, and the

leverage values (or centered leverage values in SPSS) or the Mahalanobis distances

can be used to detect outliers on the predictors. Such outliers will not necessarily be

influential points. To determine which outliers are influential, find those whose Cook’s

distances are > 1. Those points that are flagged as influential by Cook’s distance need

to be examined carefully to determine whether they should be deleted from the analysis. If there is a reason to believe that these cases arise from a process different from

MULTIPLE REGRESSION FOR PREDICTION

Figure 3.6: Examples of two outliers on the predictors: one influential and the other not influential.

that for the rest of the data, then the cases should be deleted. For example, the failure

of a measuring instrument, a power failure, or the occurrence of an unusual event (perhaps inexplicable) would be instances of a different process.

If a point is a significant outlier on y, but its Cook’s distance is < 1, there is no real need to delete the point because it does not have a large effect on the regression analysis. However, one should still be interested in studying such points further to understand why they did not fit the model. After all, the purpose of any study is to understand the data. In particular, you would want to know if there are any commonalities among the cases corresponding to such outliers, suggesting that perhaps these cases come from a different population. For an excellent, readable, and extended discussion of outliers and influential points, their identification, and remedies, see Weisberg (1980, chapters 5 and 6).

In concluding this summary, the following from Belsley et al. (1980) is appropriate:

A word of warning is in order here, for it is obvious that there is room for misuse of the above procedures. High-influence data points could conceivably be removed solely to effect a desired change in a particular estimated coefficient, its t value, or some other regression output. While this danger exists, it is an unavoidable consequence of a procedure that successfully highlights such points . . . the benefits obtained from information on influential points far outweigh any potential danger. (pp. 15–16)

Example 3.8

We now consider the data in Table 3.10 with four predictors (n = 15). These data were run on SPSS REGRESSION. The regression with all four predictors is significant at the .05 level (F = 3.94, p = .036). However, we wish to focus our attention on the outlier analysis, a summary of which is given in Table 3.11. Examination of the studentized residuals shows no significant outliers on y. To determine whether there are any significant outliers on the set of predictors, we examine the Mahalanobis distances. No cases


are outliers on the xs, since the chi-square critical value (α = .001, df = 4) is 18.465. However, note that Cook’s distances reveal that both Cases 10 and 13 are influential data points, since the distances are > 1. Note that Cases 10 and 13 are influential observations even though they were not considered as outliers on either y or on the set of predictors. We indicated that this is possible, and indeed it has occurred here. This is the more subtle type of influential point that Cook’s distance brings to our attention.

In Table 3.12 we present the regression coefficients that resulted when Cases 10 and 13 were deleted. There is a fairly dramatic shift in the coefficients in each case. For Case 10 a dramatic shift occurs for x2, where the coefficient changes from 1.27 (for all data points) to −1.48 (with Case 10 deleted). This is a shift of just over two standard errors (the standard error for x2 on the output is 1.34). For Case 13 the coefficients change in sign for three of the four predictors (x2, x3, and x4).

Table 3.11: Selected Output for Sample Problem on Outliers and Influential Points

Case Summaries (a)

Case    Studentized Residual   Mahalanobis Distance   Cook’s Distance
1       −1.69609                 .57237                 .06934
2        −.72075                5.04841                 .07751
3         .93397                2.61611                 .05925
4         .08216                2.40401                 .00042
5        1.19324                3.17728                 .11837
6         .09408                7.22347                 .00247
7        −.89911                3.51446                 .07528
8         .21033                2.56197                 .00294
9        1.09324                 .17583                 .02057
10       1.15951               10.85912                1.43639
11        .09041                1.89225                 .00041
12       1.39104                2.02284                 .10359
13      −1.73853                7.97770                1.05851
14      −1.26662                4.87493                 .22751
15       −.04619                1.07926                 .00007
Total N      15                     15                      15

a. Limited to first 100 cases.

Table 3.12: Selected Output for Sample Problem on Outliers and Influential Points

Model Summary

Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .782a   .612       .456                57.57994

a. Predictors: (Constant), X4, X2, X3, X1

ANOVA (a)

Model 1      Sum of Squares   df   Mean Square   F       Sig.
Regression   52231.502         4   13057.876     3.938   .036b
Residual     33154.498        10    3315.450
Total        85386.000        14

a. Dependent Variable: Y
b. Predictors: (Constant), X4, X2, X3, X1

Coefficients (a)

Model 1      B        Std. Error   Beta   t       Sig.
(Constant)   15.859   180.298              .088   .932
X1            2.803     1.266      .586   2.215   .051
X2            1.270     1.344      .210    .945   .367
X3            2.017     3.559      .134    .567   .583
X4            1.488     1.785      .232    .834   .424

a. Dependent Variable: Y

Regression Coefficients With Case 10 Deleted

Variable     B
(Constant)   23.362
X1            3.529
X2           −1.481
X3            2.751
X4            2.078

Regression Coefficients With Case 13 Deleted

Variable     B
(Constant)   410.457
X1             3.415
X2             −.708
X3            −3.456
X4            −1.339

3.15 FURTHER DISCUSSION OF THE TWO COMPUTER EXAMPLES

3.15.1 Morrison Data

Recall that for the Morrison data the stepwise procedure yielded the more parsimonious model involving three predictors: CLARITY, INTEREST, and STIMUL. If we were interested in an estimate of the predictive power in the population, then the Wherry estimate given by Equation 11 is appropriate. This is given under STEP NUMBER 3 on the SPSS output in Table 3.4, which shows that the ADJUSTED R SQUARE is .840. Here the estimate is used in a descriptive sense: to describe the relationship in the population. However, if we are interested in the cross-validity predictive power, then the Stein estimate (Equation 12) should be used. The Stein adjusted R² in this case is

ρ̂²c = 1 − (31/28)(30/27)(33/32)(1 − .856) = .82.

This estimates that if we were to cross-validate the prediction equation on many other

samples from the same population, then on the average we would account for about

82% of the variance on the dependent variable. In this instance the estimated drop-off

in predictive power is very little from the maximized value of 85.6%. The reason is

that the association between the dependent variable and the set of predictors is very

strong. Thus, we can have confidence in the future predictive power of the equation.
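The Stein computation is easily scripted. The sketch below is our own, not the book's; it simply reproduces the value above from n = 32 cases, k = 3 predictors, and R² = .856:

```python
def stein_rho2(n, k, r2):
    """Stein estimate (Equation 12) of the cross-validity squared correlation."""
    return 1 - ((n - 1) / (n - k - 1)) * ((n - 2) / (n - k - 2)) * ((n + 1) / n) * (1 - r2)

# Morrison data: n = 32 cases, k = 3 predictors, maximized R^2 = .856
print(round(stein_rho2(32, 3, .856), 2))  # → 0.82
```

The same function with n = 46, k = 4, and R² = .822 gives the .7804 reported later for the National Academy of Sciences data.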

It is also important to examine the regression diagnostics to check for any outliers or influential data points. Table 3.13 presents the appropriate statistics, as discussed in section 3.13, for identifying outliers on the dependent variable (studentized residuals), outliers on the set of predictors (the centered leverage values), and influential data points (Cook’s distance).

First, we would expect only about 5% of the studentized residuals to be > |2| if the linear model is appropriate. From Table 3.13 we see that two of the studentized residuals are > |2|, and we would expect about 32(.05) = 1.6, so nothing seems to be awry here. Next, we check for outliers on the set of predictors. Since we have centered leverage values, the rough “critical value” here is 3k/n = 3(3)/32 = .281. Because no centered leverage value in Table 3.13 exceeds this value, we have no outliers on the set of predictors. Finally, and perhaps most importantly, we check for the existence of influential data points using Cook’s distance. Recall that Cook and Weisberg (1982) suggested if D > 1, then the point is influential. All the Cook’s distance values in Table 3.13 are far less than 1, so we have no influential data points.

Table 3.13: Regression Diagnostics (Studentized Residuals, Centered Leverage Values, and Cook’s Distance) for Morrison MBA Data

Case Summaries (a)

Case    Studentized Residual   Centered Leverage Value   Cook’s Distance
1       −.38956                .10214                    .00584
2       −1.96017               .05411                    .08965
3        .27488                .15413                    .00430
4       −.38956                .10214                    .00584
5       1.60373                .13489                    .12811
6        .04353                .12181                    .00009
7       −.88786                .02794                    .01240
8       −2.22576               .01798                    .06413
9       −.81838                .13807                    .03413
10       .59436                .07080                    .01004
11       .67575                .04119                    .00892
12      −.15444                .20318                    .00183
13      1.31912                .05411                    .04060
14      −.70076                .08630                    .01635
15      −.88786                .02794                    .01240
16      −1.53907               .05409                    .05525
17      −.26796                .09531                    .00260
18      −.56629                .03889                    .00605
19       .82049                .10392                    .02630
20       .06913                .09329                    .00017
21       .06913                .09329                    .00017
22       .28668                .09755                    .00304
23       .28668                .09755                    .00304
24       .82049                .10392                    .02630
25      −.50388                .14084                    .01319
26       .38362                .11157                    .00613
27      −.56629                .03889                    .00605
28       .16113                .07561                    .00078
29      2.34549                .02794                    .08652
30      1.18159                .17378                    .09002
31      −.26103                .18595                    .00473
32      1.39951                .13088                    .09475
Total N      32                    32                        32

a. Limited to first 100 cases.

In summary, then, the linear regression model is quite appropriate for the Morrison

data. The estimated cross-validity power is excellent, and there are no outliers or influential data points.
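The three screening rules just applied are mechanical enough to collect into a small helper. This is our own sketch (not the book's code); the cut-offs are the ones given in the text:

```python
def screen_cases(stud_resid, leverage, cooks_d, k, n):
    """Flag cases by the chapter's rough cut-offs: |studentized residual| > 2,
    centered leverage > 3k/n, and Cook's distance > 1."""
    flags = {}
    for case, (r, h, d) in enumerate(zip(stud_resid, leverage, cooks_d), start=1):
        reasons = []
        if abs(r) > 2:
            reasons.append("outlier on y")
        if h > 3 * k / n:
            reasons.append("outlier on predictors")
        if d > 1:
            reasons.append("influential (Cook's D > 1)")
        if reasons:
            flags[case] = reasons
    return flags
```

Applied to the Morrison values, this flags only the two cases whose studentized residuals exceed |2|; no leverage or Cook's distance cut-off is tripped.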

3.15.2 National Academy of Sciences Data

Recall that both the stepwise procedure and the MAXR procedure yielded the same “best” four-predictor set: NFACUL, PCTSUPP, PCTGRT, and NARTIC. The maximized R² = .8221, indicating that 82.21% of the variance in quality can be accounted for by these four predictors in this sample. Now we obtain two measures of the cross-validity power of the equation. First, SAS REG indicated for this example the PREDICTED RESID SS (PRESS) = 1350.33. Furthermore, the sum of squares for QUALITY is 4564.71. From these numbers we can use Equation 14 to compute

R²_PRESS = 1 − 1350.33 / 4564.71 = .7042.

This is a good measure of the external predictive power of the equation, where we have

n validations, each based on (n − 1) observations.

The Stein estimate of how much variance on the average we would account for if the equation were applied to many other samples is

ρ̂²c = 1 − (45/41)(44/40)(47/46)(1 − .822) = .7804.
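PRESS itself needs no refitting of the model: the deleted residual for case i equals e_i / (1 − h_ii), where e_i is the ordinary residual. Below is a sketch of Equation 14 built on that shortcut (our own illustration; NumPy is assumed), along with the arithmetic for the values SAS reported:

```python
import numpy as np

def r2_press(X, y):
    """R^2_PRESS = 1 - PRESS / SS_y, with PRESS from the shortcut e_i / (1 - h_ii)."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])
    H = Xd @ np.linalg.inv(Xd.T @ Xd) @ Xd.T        # hat matrix
    e = y - H @ y                                   # ordinary residuals
    press = np.sum((e / (1 - np.diag(H))) ** 2)     # sum of squared deleted residuals
    ss_y = np.sum((y - y.mean()) ** 2)
    return 1 - press / ss_y

# With the PRESS and SS values SAS reported for the NAS data:
print(round(1 - 1350.33 / 4564.71, 4))  # → 0.7042
```

Because each residual is inflated by 1/(1 − h_ii), R²_PRESS is always below the ordinary R², which is what makes it an honest measure of external predictive power.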

Now we turn to the regression diagnostics from SAS REG, which are presented in Table 3.14. In terms of the studentized residuals for y (under the Student Residual column), two stand out (−2.756 and 2.376 for observations 25 and 44). These are for the University of Michigan and Virginia Polytech. In terms of outliers on the set of predictors, using 3p/n to identify large leverage values [3(5)/46 = .326] suggests that there is one unusual case: observation 25 (University of Michigan). Note that leverage is referred to as Hat Diag H in SAS.

Table 3.14: Regression Diagnostics (Studentized Residuals, Cook’s Distance, and Hat Elements) for National Academy of Science Data

Obs   Student residual   Cook’s D   Hat diag H
1     −0.708             0.007      0.0684
2     −0.0779            0.000      0.1064
3      0.403             0.003      0.0807
4      0.424             0.009      0.1951
5      0.800             0.012      0.0870
6     −1.447             0.034      0.0742
7      1.085             0.038      0.1386
8     −0.300             0.002      0.1057
9     −0.460             0.010      0.1865
10     1.694             0.048      0.0765
11    −0.694             0.004      0.0433
12    −0.870             0.016      0.0956
13    −0.732             0.007      0.0652
14     0.359             0.003      0.0885
15    −0.942             0.054      0.2328
16     1.282             0.063      0.1613
17     0.424             0.001      0.0297
18     0.227             0.001      0.1196
19     0.877             0.007      0.0464
20     0.643             0.004      0.0456
21    −0.417             0.002      0.0429
22     0.193             0.001      0.0696
23     0.490             0.002      0.0460
24     0.357             0.001      0.0503
25    −2.756             2.292      0.6014
26    −1.370             0.068      0.1533
27    −0.799             0.017      0.1186
28     0.165             0.000      0.0573
29     0.995             0.018      0.0844
30    −1.786             0.241      0.2737
31    −1.171             0.018      0.0613
32    −0.994             0.017      0.0796
33     1.394             0.037      0.0859
34     1.568             0.051      0.0937
35    −0.622             0.006      0.0714
36     0.282             0.002      0.1066
37    −0.831             0.009      0.0643
38     1.516             0.039      0.0789
39     1.492             0.081      0.1539
40     0.314             0.001      0.0638
41    −0.977             0.016      0.0793
42    −0.581             0.006      0.0847
43     0.0591            0.000      0.0877
44     2.376             0.164      0.1265
45    −0.508             0.003      0.0592
46    −1.505             0.085      0.1583

Using the criterion of Cook’s D > 1, there is one influential data point, observation 25 (University of Michigan). Recall that whether a point will be influential is a joint function of being an outlier on y and on the set of predictors. In this case, the University of Michigan definitely doesn’t fit the model, and it differs dramatically from the other psychology departments on the set of predictors. A check of the DFBETAS reveals that it is very different in terms of number of faculty (DFBETA = −2.7653), and a scan of the raw data shows the number of faculty at 111, whereas the average number of faculty members for all the departments is only 29.5. The question needs to be raised as to whether the University of Michigan is “counting” faculty members in a different way from the rest of the schools. For example, are they including part-time and adjunct faculty, and if so, is the number of these quite large?

For comparison purposes, the analysis was also run with the University of Michigan deleted. Interestingly, the same four predictors emerge from the stepwise procedure, although the results are better in some ways. For example, Mallows’ Cp is now 4.5248, whereas for the full data set it was 5.216. Also, the PRESS residual sum of squares is now only 899.92, whereas for the full data set it was 1350.33.

3.16 SAMPLE SIZE DETERMINATION FOR A RELIABLE PREDICTION EQUATION

In power analysis, you are interested in determining a priori how many subjects are needed per group to have, say, power = .80 at the .05 level. Thus, planning is done ahead of time to ensure that one has a good chance of detecting an effect of a given magnitude. Now, in multiple regression for prediction, the focus is different, and the concern, or at least one very important concern, is development of a prediction equation that has generalizability. A study by Park and Dudycha (1974) provided several tables that, given certain input parameters, enable one to determine how many subjects will be needed for a reliable prediction equation. They considered from 3 to 25 random-variable predictors, and found that with about 15 subjects per predictor the amount of shrinkage is small (< .05) with high probability (.90) if the squared population multiple correlation (ρ²) is .50. In Table 3.15 we present selected results from the Park and Dudycha study for 3, 4, 8, and 15 predictors.

Table 3.15: Sample Size Such That the Difference Between the Squared Multiple Correlation and Squared Cross-Validated Correlation Is Arbitrarily Small With Given Probability

Three predictors (γ)
ρ²     ε      .99    .95    .90    .80    .60    .40
.05    .01    858    554    421    290    158     81
       .03    269    166    123     79     39     18
.10    .01    825    535    410    285    160     88
       .03    271    174    133     91     50     27
       .05    159    100     75     51     27     14
.25    .01    693    451    347    243    139     79
       .03    232    151    117     81     48     27
       .05    140     91     71     50     29     17
       .10     70     46     36     25     15      7
       .20     34     22     17     12      8      6
.50    .01    464    304    234    165     96     55
       .03    157    104     80     57     34     21
       .05     96     64     50     36     22     14
       .10     50     34     27     20     13      9
       .20     27     19     15     12      9      7
.75    .01    235    155    120     85     50     30
       .03     85     55     43     31     20     13
       .05     51     35     28     21     14     10
       .10     28     20     16     13      9      7
       .20     16     12     10      9      7      6
.98    .01     23     17     14     11      9      7
       .03     11      9      8      7      6      6
       .05      9      7      7      6      6      5
       .10      7      6      6      6      5      5
       .20      6      6      5      5      5      5

Four predictors (γ)
ρ²     ε      .99    .95    .90    .80    .60    .40
.05    .01   1041    707    559    406    245    144
       .03    312    201    152    103     54     27
.10    .01   1006    691    550    405    253    155
       .03    326    220    173    125     74     43
       .05    186    123     95     67     38     22
.25    .01    853    587    470    348    221    140
       .03    283    195    156    116     73     46
       .05    168    117     93     69     43     28
       .10     84     58     46     34     20     14
       .20     38     26     20     15     10      7
.50    .01    573    396    317    236    152     97
       .03    193    134    108     81     53     35
       .05    117     82     66     50     33     23
       .10     60     43     35     27     19     13
       .20     32     23     19     15     11      9
.75    .01    290    201    162    121     78     52
       .03    100     70     57     44     30     21
       .05     62     44     37     28     20     15
       .10     34     25     21     17     13     11
       .20     19     15     13     11      9      7
.98    .01     29     22     19     15     12     10
       .03     14     11     10      9      8      7
       .05     10      9      8      8      7      7
       .10      8      8      7      7      7      6
       .20      7      7      7      6      6      6

Eight predictors (γ)
ρ²     ε      .99    .95    .90    .80    .60    .40
.05    .01   1640   1226   1031    821    585    418
       .03    447    313    251    187    116     71
.10    .01   1616   1220   1036    837    611    450
       .03    503    373    311    246    172    121
       .05    281    202    166    128     85     55
.25    .01   1376   1047    893    727    538    404
       .03    453    344    292    237    174    129
       .05    267    202    171    138    101     74
       .10    128     95     80     63     45     33
       .20     52     37     30     24     17     12
.50    .01    927    707    605    494    368    279
       .03    312    238    204    167    125     96
       .05    188    144    124    103     77     59
       .10     96     74     64     53     40     31
       .20     49     38     33     28     22     18
.75    .01    470    360    308    253    190    150
       .03    162    125    108     90     69     54
       .05    100     78     68     57     44     35
       .10     54     43     38     32     26     22
       .20     31     25     23     20     17     15
.98    .01     47     38     34     29     24     21
       .03     22     19     18     16     15     14
       .05     17     16     15     14     13     12
       .10     14     13     12     12     11     11
       .20     12     11     11     11     11     10

Fifteen predictors (γ)
ρ²     ε      .99    .95    .90    .80    .60    .40
.05    .01   2523   2007   1760   1486   1161    918
       .03    640    474    398    316    222    156
.10    .01   2519   2029   1794   1532   1220    987
       .03    762    600    524    438    337    263
       .05    403    309    265    216    159    119
.25    .01   2163   1754   1557   1339   1079    884
       .03    705    569    504    431    345    280
       .05    413    331    292    249    198    159
       .10    191    151    132    111     87     69
       .20     76     58     49     40     30     24
.50    .01   1461   1188   1057    911    738    608
       .03    489    399    355    306    249    205
       .05    295    261    214    185    151    125
       .10    149    122    109     94     77     64
       .20     75     62     55     48     40     34
.75    .01    741    605    539    466    380    315
       .03    255    210    188    164    135    113
       .05    158    131    118    103     86     73
       .10     85     72     65     58     49     43
       .20     49     42     39     35     31     28
.98    .01     75     64     59     53     46     41
       .03     36     33     31     29     27     25
       .05     28     26     25     24     23     22
       .10     23     21     21     20     20     19
       .20     20     19     19     19     18     18

Note: Entries in the body of the table are the sample size such that P(ρ² − ρ²c < ε) = γ, where ρ is the population multiple correlation, ε is some tolerance, and γ is the probability.

To use Table 3.15 we need an estimate of ρ², that is, the squared population multiple correlation. Unless an investigator has a good estimate from a previous study that used similar subjects and predictors, we feel taking ρ² = .50 is a reasonable guess for social science research. In the physical sciences, estimates > .75 are quite reasonable. If we set ρ² = .50 and want the loss in predictive power to be less than .05 with probability = .90, then the required sample sizes are as follows:

ρ² = .50, ε = .05

Number of predictors    3      4      8      15
N                       50     66     124    214
n/k ratio               16.7   16.5   15.5   14.3

The n/k ratios in all 4 cases are around 15/1.

We had indicated earlier that, as a rough guide, generally about 15 subjects per predictor are needed for a reliable regression equation in the social sciences, that is, an

equation that will cross-validate well. Three converging lines of evidence support this

conclusion:

1. The Stein formula for estimated shrinkage (see results in Table 3.8).

2. Personal experience.

3. The results just presented from the Park and Dudycha study.

However, the Park and Dudycha study (see Table 3.15) clearly shows that the magnitude of ρ (the population multiple correlation) strongly affects how many subjects will be needed for a reliable regression equation. For example, if ρ² = .75, then for three predictors only 28 subjects are needed (assuming ε = .05, with probability = .90), whereas 50 subjects are needed for the same case when ρ² = .50. Also, from the Stein formula (Equation 12), you will see if you plug in .40 for R² that more than 15 subjects per predictor will be needed to keep the shrinkage fairly small, whereas if you insert .70 for R², significantly fewer than 15 will be needed.
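The claim about R² = .40 versus R² = .70 is easy to check numerically with the Stein formula. In this sketch (ours, not the book's) we hold the subject-to-predictor ratio at 15/1 with k = 3 predictors:

```python
def stein_shrinkage(n, k, r2):
    """Estimated shrinkage R^2 - rho_c^2 under the Stein formula (Equation 12)."""
    rho_c2 = 1 - ((n - 1) / (n - k - 1)) * ((n - 2) / (n - k - 2)) * ((n + 1) / n) * (1 - r2)
    return r2 - rho_c2

# n = 45 subjects, k = 3 predictors: a 15/1 ratio
print(round(stein_shrinkage(45, 3, .40), 3))  # → 0.108
print(round(stein_shrinkage(45, 3, .70), 3))  # → 0.054
```

At the same n/k ratio, the weaker association loses about twice as much predictive power, which is the point being made in the text.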


3.17 OTHER TYPES OF REGRESSION ANALYSIS

Least squares regression is only one (although the most prevalent) way of conducting a regression analysis. The least squares estimator has two desirable statistical properties; that is, it is an unbiased, minimum variance estimator. Mathematically, unbiased means that E(β̂) = β: the expected value of the vector of estimated regression coefficients is the vector of population regression coefficients. To elaborate on this a bit, unbiased means that the estimate of the population coefficients will not be consistently high or low, but will “bounce around” the population values. And, if we were to average the estimates from many repeated samplings, the averages would be very close to the population values.

The minimum variance notion can be misleading. It does not mean that the variance of the coefficients for the least squares estimator is small per se, but that among the class of unbiased estimators β̂ has the minimum variance. The fact that the variance of β̂ can be quite large led Hoerl and Kennard (1970a, 1970b) to consider a biased estimator of β that has considerably less variance, and to the development of their ridge regression technique. Although ridge regression has been strongly endorsed by some, it has also been criticized (Draper & Smith, 1981; Morris, 1982; Smith & Campbell, 1980). Morris, for example, found that ridge regression never cross-validated better than other types of regression (least squares, equal weighting of predictors, reduced rank) for a set of data situations.
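As a sketch of the idea (our own illustration, not a full treatment; the predictors would normally be centered and standardized first, and NumPy is assumed), the ridge estimator simply adds a constant λ to the diagonal of X'X before solving the normal equations:

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimator (X'X + lam*I)^(-1) X'y; lam = 0 recovers least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

As λ grows the coefficients are shrunk toward zero: bias is introduced, but the variance of the estimator drops, which is the trade Hoerl and Kennard proposed for highly collinear predictors.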

Another class of estimators is the James-Stein (1961) estimators. Regarding the utility of these, the following from Weisberg (1980) is relevant: “The improvement over least squares will be very small whenever the parameter β is well estimated, i.e., collinearity is not a problem and β is not too close to 0” (p. 258).

Since, as we have indicated earlier, least squares regression can be quite sensitive to outliers, some researchers prefer regression techniques that are relatively insensitive to outliers, that is, robust regression techniques. Since the early 1970s, the literature on these techniques has grown considerably (Hogg, 1979; Huber, 1977; Mosteller & Tukey, 1977). Although these techniques have merit, we believe that use of least squares, along with the appropriate identification of outliers and influential points, is a quite adequate procedure.

3.18 MULTIVARIATE REGRESSION

In multivariate regression we are interested in predicting several dependent variables

from a set of predictors. The dependent variables might be differentiated aspects of

some variable. For example, Finn (1974) broke grade point average (GPA) up into GPA

required and GPA elective, and considered predicting these two dependent variables


from high school GPA, a general knowledge test score, and attitude toward education.

Or, one might measure “success as a professor” by considering various aspects of

success such as: rank (assistant, associate, full), rating of institution working at, salary,

rating by experts in the field, and number of articles published. These would constitute

the multiple dependent variables.

3.18.1 Mathematical Model

In multiple regression (one dependent variable), the model was

y = Xβ + e,

where y was the vector of scores for the subjects on the dependent variable, X was the matrix with the scores for the subjects on the predictors, e was the vector of errors, and β was the vector of regression coefficients.

In multivariate regression the y, β, and e vectors become matrices, which we denote by Y, B, and E:

Y = XB + E

         Y                       X                     B                      E
⎡y11  y12  …  y1p⎤    ⎡1  x11  …  x1k⎤    ⎡b01  b02  …  b0p⎤    ⎡e11  e12  …  e1p⎤
⎢y21  y22  …  y2p⎥ =  ⎢1  x21  …  x2k⎥    ⎢b11  b12  …  b1p⎥ +  ⎢e21  e22  …  e2p⎥
⎢ ⋮            ⋮ ⎥    ⎢⋮            ⋮ ⎥    ⎢ ⋮            ⋮ ⎥    ⎢ ⋮            ⋮ ⎥
⎣yn1  yn2  …  ynp⎦    ⎣1  xn1  …  xnk⎦    ⎣bk1  bk2  …  bkp⎦    ⎣en1  en2  …  enp⎦

The first column of Y gives the scores for the subjects on the first dependent variable,

the second column the scores on the second dependent variable, and so on. The first

column of B gives the set of regression coefficients for the first dependent variable,

the second column the regression coefficients for the second dependent variable, and

so on.
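Estimating B is ordinary least squares done one column at a time: solving X'X B = X'Y in one shot gives exactly the coefficients of p separate univariate regressions. A quick numerical check of this (our own sketch; NumPy and the simulated data are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, p = 30, 3, 2                       # 30 cases, 3 predictors, 2 dependent variables
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # intercept + predictors
Y = rng.normal(size=(n, p))              # the two dependent-variable columns

# One solve for the whole (k + 1) x p coefficient matrix B
B = np.linalg.solve(X.T @ X, X.T @ Y)

# Column 1 of B is exactly the univariate regression of Y[:, 0] on X
b0 = np.linalg.solve(X.T @ X, X.T @ Y[:, 0])
print(np.allclose(B[:, 0], b0))  # → True
```

This is why the multivariate tests, not the coefficients, are where the correlations among the dependent variables come into play, a point made again below.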

Example 3.11

As an example of multivariate regression, we consider part of a data set from Timm (1975). The dependent variables are the Peabody Picture Vocabulary Test score and the Raven Progressive Matrices Test score. The predictors were scores from different types of paired-associate learning tasks, called “named still (ns),” “named action (na),” and “sentence still (ss).” SPSS syntax for running the analysis using the SPSS MANOVA procedure is given in Table 3.16, along with annotation. Selected output


from the multivariate regression analysis run is given in Table 3.17. The multivariate test determines whether there is a significant relationship between the two sets of variables, that is, the two dependent variables and the three predictors. At this point, you should focus on Wilks’ Λ, the most commonly used multivariate test statistic. We have more to say about the other multivariate tests in Chapter 5. Wilks’ Λ here is given by:

Λ = |SS_resid| / |SS_tot| = |SS_resid| / |SS_reg + SS_resid|,  0 ≤ Λ ≤ 1

Recall from the matrix algebra chapter that the determinant of a matrix served as a multivariate generalization for the variance of a set of variables. Thus, |SSresid| indicates the

amount of variability for the set of two dependent variables that is not accounted for by

Table 3.16: SPSS Syntax for Multivariate Regression Analysis of Timm Data—Two Dependent Variables and Three Predictors

TITLE ‘MULT. REGRESS. – 2 DEP. VARS AND 3 PREDS’.
DATA LIST FREE/PEVOCAB RAVEN NS NA SS.
BEGIN DATA.
48 8 6 12 16
76 13 14 30 27
40 13 21 16 16
52 9 5 17 8
63 15 11 26 17
82 14 21 34 25
71 21 20 23 18
68 8 10 19 14
74 11 7 16 13
70 15 21 26 25
70 15 15 35 24
61 11 7 15 14
54 12 13 27 21
55 13 12 20 17
54 10 20 26 22
40 14 5 14 8
66 13 21 35 27
54 10 6 14 16
64 14 19 27 26
47 16 15 18 10
48 16 9 14 18
52 14 20 26 26
74 19 14 23 23
57 12 4 11 8
57 10 16 15 17
80 11 18 28 21
78 13 19 34 23
70 16 9 23 11
47 14 7 12 8
94 19 28 32 32
63 11 5 25 14
76 16 18 29 21
59 11 10 23 24
55 8 14 19 12
74 14 10 18 18
71 17 23 31 26
54 14 6 15 14
END DATA.
LIST.
MANOVA PEVOCAB RAVEN WITH NS NA SS/
PRINT = CELLINFO(MEANS, COR).

(1) The variables are separated by blanks; they could also have been separated by commas.
(2) The LIST command is to get a listing of the data.
(3) The data is preceded by the BEGIN DATA command and followed by the END DATA command.
(4) The predictors follow the keyword WITH in the MANOVA command.


Table 3.17: Multivariate and Univariate Tests of Significance and Regression Coefficients for Timm Data

EFFECT.. WITHIN CELLS REGRESSION
MULTIVARIATE TESTS OF SIGNIFICANCE (S = 2, M = 0, N = 15)

TEST NAME    VALUE     APPROX. F   HYPOTH. DF   ERROR DF   SIG. OF F
PILLAIS      .57254    4.41203     6.00         66.00      .001
HOTELLINGS   1.00976   5.21709     6.00         62.00      .000
WILKS        .47428    4.82197     6.00         64.00      .000
ROYS         .47371

This test indicates there is a significant (at α = .05) regression of the set of 2 dependent variables on the three predictors.

UNIVARIATE F-TESTS WITH (3,33) D.F.

VARIABLE   SQ. MUL. R   MUL. R   ADJ. R-SQ   F             SIG. OF F
PEVOCAB    .46345       .68077   .41467      9.50121 (1)   .000
RAVEN      .19429       .44078   .12104      2.65250       .065

These results show there is a significant regression for PEVOCAB, but RAVEN is not significantly related to the three predictors at .05, since .065 > .05.

DEPENDENT VARIABLE.. PEVOCAB

COVARIATE   B               BETA           STD. ERR.   T-VALUE   SIG. OF T
NS          −.2056372599    −.1043054487   .40797      −.50405   .618
NA (2)      1.01272293634    .5856100072   .37685      2.68737   .011
SS           .3977340740     .2022598804   .47010       .84606   .404

DEPENDENT VARIABLE.. RAVEN

COVARIATE   B               BETA           STD. ERR.   T-VALUE   SIG. OF T
NS           .2026184278     .4159658338   .12352      1.64038   .110
NA           .0302663367     .0708355423   .11410       .26527   .792
SS          −.0174928333    −.0360039904   .14233      −.12290   .903

(1) Using Equation 4, F = (R²/k) / [(1 − R²)/(n − k − 1)] = (.46345/3) / (.53655/(37 − 3 − 1)) = 9.501.
(2) These are the raw regression coefficients for predicting PEVOCAB from the three predictors, excluding the regression constant.

regression, and |SS_tot| gives the total variability for the two dependent variables around their means. The sampling distribution of Wilks’ Λ is quite complicated; however, there is an excellent F approximation (due to Rao), which is what appears in Table 3.17. Note that the multivariate F = 4.82, p < .001, which indicates a significant relationship between the dependent variables and the three predictors beyond the .01 level.
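Λ can be computed directly from the residual and total SSCP matrices. In this sketch (our own, not the book's code; NumPy assumed), note that with a single dependent variable the determinants are scalars, so Λ reduces to SS_res/SS_tot = 1 − R²:

```python
import numpy as np

def wilks_lambda(X, Y):
    """Wilks' Lambda = |SS_resid| / |SS_reg + SS_resid| for multivariate regression."""
    n = len(Y)
    Xd = np.column_stack([np.ones(n), X])
    B = np.linalg.solve(Xd.T @ Xd, Xd.T @ Y)
    E = Y - Xd @ B
    ss_resid = E.T @ E                    # residual SSCP matrix
    Yc = Y - Y.mean(axis=0)
    ss_total = Yc.T @ Yc                  # total SSCP matrix = SS_reg + SS_resid
    return np.linalg.det(ss_resid) / np.linalg.det(ss_total)
```

Smaller values of Λ indicate a stronger association: Λ near 1 means the predictors leave almost all of the generalized variance of the dependent variables unexplained.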


The univariate Fs are the tests for the significance of the regression of each dependent variable separately. They indicate that PEVOCAB is significantly related to the set of predictors at the .05 level (F = 9.501, p < .001), while RAVEN is not significantly related at the .05 level (F = 2.652, p = .065). Thus, the overall multivariate significance is primarily attributable to PEVOCAB’s relationship with the three predictors.

It is important for you to realize that, although the multivariate tests take into account the correlations among the dependent variables, the regression equations that appear at the bottom of Table 3.17 are those that would be obtained if each dependent variable were regressed separately on the set of predictors. That is, in deriving the regression equations, the correlations among the dependent variables are ignored, or not taken into account. If you wished to take such correlations into account, multivariate multilevel modeling, described in Chapter 14, can be used. Note that taking these correlations into account is generally desired and may lead to different results than obtained by using univariate regression analysis.

We indicated earlier in this chapter that an R² value around .50 occurs quite often with educational and psychological data, and this is precisely what has occurred here with the PEVOCAB variable (R² = .463). Also, we can be fairly confident that the prediction equation for PEVOCAB will cross-validate, since the n/k ratio is 12.33, which is close to the ratio we indicated is necessary.

3.19 SUMMARY

1. A particularly good situation for multiple regression is where each of the predictors is correlated with y and the predictors have low intercorrelations, for then each of the predictors is accounting for a relatively distinct part of the variance on y.

2. Moderate to high correlation among the predictors (multicollinearity) creates three

problems: (1) it severely limits the size of R, (2) it makes determining the importance of given predictor difficult, and (3) it increases the variance of regression coefficients, making for an unstable prediction equation. There are at least three ways

of combating this problem. One way is to combine into a single measure a set of

predictors that are highly correlated. AÂ€second way is to consider the use of principal

components or factor analysis to reduce the number of predictors. Because such

components are uncorrelated, we have eliminated multicollinearity. AÂ€third way is

through the use of ridge regression. This technique is beyond the scope of thisÂ€book.

3. Preselecting a small set of predictors by examining a correlation matrix from a

large initial set, or by using one of the stepwise procedures (forward, stepwise,

backward) to select a small set, is likely to produce an equation that is sample

specific. If one insists on doing this, and we do not recommend it, then the onus is

on the investigator to demonstrate that the equation has adequate predictive power

beyond the derivation sample.

4. Mallows' Cp was presented as a measure that minimizes the effects of underfitting (important predictors left out of the model) and overfitting (having predictors in the model that make essentially no contribution or are marginal). This will be the case if one chooses models for which Cp ≈ p.
5. With many data sets, more than one model will provide a good fit to the data. Thus, one deals with selecting a model from a pool of candidate models.
6. There are various graphical plots for assessing how well the model fits the assumptions underlying linear regression. One of the most useful graphs plots the studentized residuals (y-axis) versus the predicted values (x-axis). If the assumptions are tenable, then you should observe that the residuals appear to be approximately normally distributed around their predicted values and have similar variance across the range of the predicted values. Any systematic clustering of the residuals indicates a model violation(s).
7. It is crucial to validate the model(s) by randomly splitting the sample and cross-validating, by using the PRESS statistic, or by obtaining the Stein estimate of the average predictive power of the equation on other samples from the same population. Studies in the literature that have not cross-validated should be checked with the Stein estimate to assess the generalizability of the prediction equation(s) presented.
8. Results from the Park and Dudycha study indicate that the magnitude of the population multiple correlation strongly affects how many subjects will be needed for a reliable prediction equation. If your estimate of the squared population value is .50, then about 15 subjects per predictor are needed. On the other hand, if your estimate of the squared population value is substantially larger than .50, then far fewer than 15 subjects per predictor will be needed.
9. Influential data points, that is, points that strongly affect the prediction equation, can be identified by finding those cases having Cook's distances > 1. These points need to be examined very carefully. If such a point is due to a recording error, then one would simply correct it and redo the analysis. Or, if it is found that the influential point is due to an instrumentation error or that the process that generated the data for that subject was different, then it is legitimate to drop the case from the analysis. If, however, none of these appears to be the case, then one strategy is to report the results of several analyses: one analysis with all the data and an additional analysis (or analyses) with the influential point(s) deleted.
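The residual and influence diagnostics summarized above (studentized residuals, the PRESS statistic, and Cook's distance) can all be obtained from the hat matrix of a fitted regression. A minimal NumPy sketch, using a small made-up data set (none of these numbers come from the chapter's examples):

```python
import numpy as np

# Illustrative data (made up for this sketch): one predictor plus an intercept.
X = np.array([[1, 2.0], [1, 3.0], [1, 5.0], [1, 6.0], [1, 8.0], [1, 9.0]])
y = np.array([3.0, 5.0, 6.0, 9.0, 10.0, 13.0])
n, p = X.shape                      # p counts the intercept column

# Least-squares fit and hat matrix H = X (X'X)^{-1} X'
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
H = X @ XtX_inv @ X.T
h = np.diag(H)                      # leverages
e = y - X @ b                       # raw residuals
s2 = e @ e / (n - p)                # residual variance estimate

# Studentized residuals: each residual scaled by its standard error
stud = e / np.sqrt(s2 * (1 - h))

# PRESS: sum of squared leave-one-out (deleted) residuals e_i / (1 - h_ii)
press = np.sum((e / (1 - h)) ** 2)

# Cook's distance for each case; values above 1 flag influential points
cook = (e ** 2 / (p * s2)) * (h / (1 - h) ** 2)

print(np.round(stud, 2), round(float(press), 3), np.round(cook, 2))
```

Cases with Cook's distance above 1 would then warrant the case-by-case scrutiny described above.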

3.20 EXERCISES

1. Consider this set of data:

   X     Y
   2     3
   3     6
   4     8
   6     4
   7    10
   8    14
   9     8
  10    12
  11    14
  12    12
  13    16

(a) Run a regression analysis with these data in SPSS and request a plot of the studentized residuals (SRESID) by the standardized predicted values (ZPRED).
(b) Do you see any pattern in the plot of the residuals? What does this suggest? Does your inspection of the plot suggest that there are any outliers on Y?
(c) Interpret the slope.
(d) Interpret the adjusted R square.

2. Consider the following small set of data:

  PREDX   DEP
    0      1
    1      4
    2      6
    3      8
    4      9
    5     10
    6     10
    7      8
    8      7
    9      6
   10      5

(a) Run a regression analysis with these data in SPSS and obtain a plot of the

residuals (SRESID by ZPRED).

(b) Do you see any pattern in the plot of the residuals? What does this suggest?

(c) Inspect a scatter plot of DEP by PREDX. What type of relationship exists

between the two variables?

3. Consider the following correlation matrix:

          y     x1     x2
   y    1.00   .60    .50
   x1    .60  1.00    .80
   x2    .50   .80   1.00

(a) How much variance on y will x1 account for if entered first?
(b) How much variance on y will x1 account for if entered second?
(c) What, if anything, do these results have to do with the multicollinearity problem?
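One way to explore parts (a) and (b) numerically: a squared multiple correlation can be computed from the correlations alone as R² = r′R⁻¹r, where r holds the predictor-criterion correlations and R the predictor intercorrelations. A sketch using the matrix above:

```python
import numpy as np

# Correlations from the exercise's matrix.
r = np.array([0.60, 0.50])                  # r(y, x1), r(y, x2)
Rxx = np.array([[1.0, 0.8], [0.8, 1.0]])    # r(x1, x2) = .80

R2_both = r @ np.linalg.inv(Rxx) @ r        # R^2 with both predictors in
var_first = 0.60 ** 2                       # x1 entered first: simply r^2
var_second = R2_both - 0.50 ** 2            # x1 entered second: increment over x2
print(round(float(R2_both), 4), var_first, round(float(var_second), 4))
```

The gap between the two increments illustrates how the high predictor intercorrelation (.80) shrinks a predictor's unique contribution.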

4. A medical school admissions official has two proven predictors (x1 and x2) of

success in medical school. There are two other predictors under consideration

(x3 and x4), from which just one will be selected that will add the most (beyond

what x1 and x2 already predict) to predicting success. Here are the correlations

among the predictors and the outcome gathered on a sample of 100 medical

students:

          y     x1     x2     x3
   x1    .60
   x2    .55   .60
   x3    .70   .80    .20
   x4    .60   .46    .30    .60

(a) What procedure would be used to determine which predictor has the greater incremental validity? Do not go into any numerical details; just indicate the general procedure. Also, what is your educated guess as to which predictor (x3 or x4) will probably have the greater incremental validity?
(b) Suppose the investigator selects the third predictor, runs the regression, and finds R = .76. Apply the Stein formula, Equation 12 (using k = 3), and tell exactly what the resulting number represents.

5. This exercise has you calculate an F statistic to test the proportion of variance

explained by a set of predictors and also an F statistic to test the additional

proportion of variance explained by adding a set of predictors to a model that

already contains other predictors. Suppose we were interested in predicting

the IQs of 3-year-old children from four measures of socioeconomic status

(SES) and six environmental process variables (as assessed by a HOME inventory instrument) and had a total sample size of 105. Further, suppose we were

interested in determining whether the prediction varied depending on sex and

on race and that the following analyses were done:

To examine the relations among SES, environmental process, and IQ, two

regression analyses were done for each of five samples: total group, males,

females, whites, and blacks. First, four SES variables were used in the regression analysis. Then, the six environmental process variables (the six HOME

inventory subscales) were added to the regression equation. For each analysis,

IQ was used as the criterion variable.

The following table reports 10 multiple correlations:

Multiple Correlations Between Measures of Environmental Quality and IQ

Measure                   Males    Females  Whites   Blacks   Total
                          (n=57)   (n=48)   (n=37)   (n=68)   (N=105)
SES (A)                   .555     .636     .582     .346     .556
SES and HOME (A and B)    .682     .825     .683     .614     .765

(a) Suppose that all of the multiple correlations are statistically significant (.05 level) except for the .346 obtained for blacks with the SES variables. Show that .346 is not significant at the .05 level. Note that the critical value is F(.05; 4, 63) = 2.52.
(b) For males, does the addition of the HOME inventory variables to the prediction equation significantly increase predictive power beyond that of the SES variables? Note that the critical value is F(.05; 6, 46) = 2.30.

Note that the following F statistic is appropriate for determining whether a set of variables B significantly adds to the prediction beyond what set A contributes:

F = [(R²y.AB − R²y.A) / kB] / [(1 − R²y.AB) / (n − kA − kB − 1)], with kB and (n − kA − kB − 1) df,

where kA and kB represent the number of predictors in sets A and B, respectively.
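As a concrete check for part (b), this sketch applies the formula to the males column of the table (R for SES alone = .555; R for SES and HOME = .682; n = 57, kA = 4, kB = 6):

```python
# F statistic for the added contribution of predictor set B beyond set A.
def f_change(r2_ab, r2_a, n, k_a, k_b):
    """F with k_b and (n - k_a - k_b - 1) degrees of freedom."""
    return ((r2_ab - r2_a) / k_b) / ((1 - r2_ab) / (n - k_a - k_b - 1))

F = f_change(r2_ab=0.682 ** 2, r2_a=0.555 ** 2, n=57, k_a=4, k_b=6)
print(round(F, 2))  # about 2.25, which falls short of the critical 2.30
```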

6. Plante and Goldfarb (1984) predicted social adjustment from Cattell's 16 personality factors. There were 114 subjects, consisting of students and employees from two large manufacturing companies. They stated in their RESULTS section:

Stepwise multiple regression was performed. . . . The index of social adjustment significantly correlated with 6 of the primary factors of the 16 PF. . . . Multiple regression analysis resulted in a multiple correlation of R = .41 accounting for 17% of the variance with these 6 factors. The multiple R obtained while utilizing all 16 factors was R = .57, thus accounting for 33% of the variance. (p. 1217)

(a) Would you have much faith in the reliability of either of these regression equations?
(b) Apply the Stein formula (Equation 12) for random predictors to the 16-variable equation to estimate how much variance on the average we could expect to account for if the equation were cross-validated on many other random samples.

7. Consider the following data for 15 subjects with two predictors. The dependent variable, MARK, is the total score for a subject on an examination. The first predictor, COMP, is the score for the subject on a so-called compulsory paper. The other predictor, CERTIF, is the score for the subject on a previous exam.

Candidate  MARK  COMP  CERTIF      Candidate  MARK  COMP  CERTIF
1          111   68    476          9         117   59    645
2           92   46    457         10          94   97    556
3           90   50    540         11         130   57    634
4          107   59    551         12         118   51    637
5           98   50    575         13          91   44    390
6          150   66    698         14         118   61    562
7          118   54    545         15         109   66    560
8          110   51    574

(a) Run a stepwise regression on these data.
(b) Does CERTIF add anything to predicting MARK, above and beyond that of COMP?

(c) Write out the prediction equation.

8. A statistician wishes to know the sample size needed in a multiple regression study. She has four predictors and can tolerate at most a .10 drop-off in predictive power, but she wants this to be the case with .95 probability. From previous related research the estimated squared population multiple correlation is .62. How many subjects are needed?

9. Recall that earlier in the chapter we mentioned a study in which each of 22 college freshmen wrote four essays and a stepwise regression analysis was then applied to these data to predict quality of essay response. It has already been mentioned that the n of 88 used in the study is incorrect, since there are only 22 independent responses. Now let us concentrate on a different aspect of the study. Suppose there were 17 predictors and that 5 of them were found to be "significant," accounting for 42.3% of the variance in quality. Using a median value between 5 and 17 and the proper sample size of 22, apply the Stein formula to estimate the cross-validity predictive power of the equation. What do you conclude?
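Equation 12 is not reproduced in this excerpt, so the sketch below uses the Stein estimate in the form it is commonly given (treat the formula as an assumption here), with the exercise's values: R² = .423, k = 11 (the median of 5 and 17), and n = 22.

```python
# Stein estimate of the cross-validated squared multiple correlation,
# in its commonly given form (an assumption, since Equation 12 is not
# reproduced in this excerpt).
def stein(r2, n, k):
    shrink = ((n - 1) / (n - k - 1)) * ((n - 2) / (n - k - 2)) * ((n + 1) / n)
    return 1 - shrink * (1 - r2)

est = stein(r2=0.423, n=22, k=11)
print(round(est, 3))  # strongly negative: essentially zero cross-validity
```

A negative estimate is read as zero expected predictive power on new samples, which is the conclusion the exercise is driving at.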

10. A regression analysis was run on the Sesame Street (n = 240) data set, predicting postbody from the following five pretest measures: prebody, prelet, preform, prenumb, and prerelat. The SPSS syntax for conducting a stepwise regression is given next. Note that this analysis obtains (in addition to other output): (1) variance inflation factors, (2) a list of all cases having a studentized residual greater than 2 in magnitude, (3) the smallest and largest values for the studentized residuals, Cook's distance, and centered leverage, (4) a histogram of the standardized residuals, and (5) a plot of the studentized residuals versus the standardized predicted y values.

regression descriptives=default/
  variables = prebody to prerelat postbody/
  statistics = defaults tol/
  dependent = postbody/
  method = stepwise/
  residuals = histogram(zresid) outliers(sresid, lever, cook)/
  casewise plot(zresid) outliers(2)/
  scatterplot (*sresid, *zpred).

Selected results from SPSS appear in Table 3.18. Answer the following questions.

Table 3.18: SPSS Results for Exercise 10

Regression

Descriptive Statistics

            Mean    Std. Deviation    N
PREBODY     21.40    6.391           240
PRELET      15.94    8.536           240
PREFORM      9.92    3.737           240
PRENUMB     20.90   10.685           240
PRERELAT     9.94    3.074           240
POSTBODY    25.26    5.412           240

Correlations

           PREBODY  PRELET  PREFORM  PRENUMB  PRERELAT  POSTBODY
PREBODY     1.000    .453    .680     .698     .623      .650
PRELET       .453   1.000    .506     .717     .471      .371
PREFORM      .680    .506   1.000     .673     .596      .551
PRENUMB      .698    .717    .673    1.000     .718      .527
PRERELAT     .623    .471    .596     .718    1.000      .449
POSTBODY     .650    .371    .551     .527     .449     1.000

Variables Entered/Removed(a)

Model  Variables Entered  Variables Removed  Method
1      PREBODY            .                  Stepwise (Criteria: Probability-of-F-to-enter <= .050, Probability-of-F-to-remove >= .100).
2      PREFORM            .                  Stepwise (Criteria: Probability-of-F-to-enter <= .050, Probability-of-F-to-remove >= .100).

a. Dependent Variable: POSTBODY

Model Summary(c)

Model  R      R Square  Adjusted R Square  Std. Error of the Estimate
1      .650a  .423      .421               4.119
2      .667b  .445      .440               4.049

a. Predictors: (Constant), PREBODY
b. Predictors: (Constant), PREBODY, PREFORM
c. Dependent Variable: POSTBODY

ANOVA(a)

Model            Sum of Squares   df    Mean Square   F         Sig.
1  Regression    2961.602           1   2961.602      174.520   .000b
   Residual      4038.860         238     16.970
   Total         7000.462         239
2  Regression    3114.883           2   1557.441       94.996   .000c
   Residual      3885.580         237     16.395
   Total         7000.462         239

a. Dependent Variable: POSTBODY
b. Predictors: (Constant), PREBODY
c. Predictors: (Constant), PREBODY, PREFORM

Coefficients(a)

                  Unstandardized      Standardized                       Collinearity Statistics
Model             B        Std. Error Beta           t        Sig.      Tolerance   VIF
1  (Constant)     13.475   .931                      14.473   .000
   PREBODY          .551   .042       .650           13.211   .000      1.000       1.000
2  (Constant)     13.062   .925                      14.120   .000
   PREBODY          .435   .056       .513            7.777   .000       .538       1.860
   PREFORM          .292   .096       .202            3.058   .002       .538       1.860

a. Dependent Variable: POSTBODY

Excluded Variables(a)

Model          Beta In   t      Sig.   Partial Correlation   Tolerance   VIF     Minimum Tolerance
1  PRELET      .096b     1.742  .083   .112                  .795        1.258   .795
   PREFORM     .202b     3.058  .002   .195                  .538        1.860   .538
   PRENUMB     .143b     2.091  .038   .135                  .513        1.950   .513
   PRERELAT    .072b     1.152  .250   .075                  .612        1.634   .612
2  PRELET      .050c      .881  .379   .057                  .722        1.385   .489
   PRENUMB     .075c     1.031  .304   .067                  .439        2.277   .432
   PRERELAT    .017c      .264  .792   .017                  .557        1.796   .464

a. Dependent Variable: POSTBODY
b. Predictors in the Model: (Constant), PREBODY
c. Predictors in the Model: (Constant), PREBODY, PREFORM

Casewise Diagnostics(a)

Case Number   Stud. Residual   POSTBODY   Predicted Value   Residual
36             2.120           29         20.47              8.534
38            −2.115           12         20.47             −8.473
39            −2.653           21         31.65            −10.646
40            −2.322           21         30.33             −9.335
125           −2.912           11         22.63            −11.631
135            2.210           32         23.08              8.919
139           −3.068           11         23.37            −12.373
147            2.506           32         21.91             10.088
155           −2.767           17         28.16            −11.162
168           −2.106           13         21.48             −8.477
210           −2.354           13         22.50             −9.497
219            3.176           31         18.29             12.707

a. Dependent Variable: POSTBODY

Outlier Statistics(a) (10 Cases Shown)

Stud. Residual            Case Number   Statistic
1                         219            3.176
2                         139           −3.068
3                         125           −2.912
4                         155           −2.767
5                          39           −2.653
6                         147            2.506
7                         210           −2.354
8                          40           −2.322
9                         135            2.210
10                         36            2.120

Cook's Distance           Case Number   Statistic   Sig. F
1                         219           .081        .970
2                         125           .078        .972
3                          39           .042        .988
4                          38           .032        .992
5                          40           .025        .995
6                         139           .025        .995
7                         147           .025        .995
8                         177           .023        .995
9                         140           .022        .996
10                         13           .020        .996

Centered Leverage Value   Case Number   Statistic
1                         140           .047
2                          32           .036
3                          23           .030
4                         114           .028
5                         167           .026
6                          52           .026
7                         233           .025
8                           8           .025
9                         236           .023
10                        161           .023

a. Dependent Variable: POSTBODY

[Histogram of the regression standardized residuals. Dependent Variable: POSTBODY. Mean = 4.16E−16, Std. Dev. = 0.996, N = 240]

[Scatterplot of the regression studentized residuals versus the regression standardized predicted values. Dependent Variable: POSTBODY]

(a) Why did PREBODY enter the prediction equation first?

(b) Why did PREFORM enter the prediction equation second?

(c) Write the prediction equation, rounding off to three decimals.

(d) Is multicollinearity present? Explain.

(e) Compute the Stein estimate and indicate in words exactly what it represents.

(f) Show by using the appropriate correlations from the correlation matrix

how the R-square change of .0219 can be calculated.

(g) Refer to the studentized residuals. Is the number of these greater than |2| about what you would expect if the model is appropriate? Why, or why not?
(h) Are there any outliers on the set of predictors?
(i) Are there any influential data points? Explain.
(j) From examination of the residual plot, does it appear there may be some model violation(s)? Why or why not?
(k) From the histogram of residuals, does it appear that the normality assumption is reasonable?
(l) Interpret the regression coefficient for PREFORM.

11. Consider the following data:

   X1    X2
   14    21
   17    23
   36    10
   32    18
   25    12

Find the Mahalanobis distance for case 4.
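A sketch of the computation, assuming the five cases pair up as (14, 21), (17, 23), (36, 10), (32, 18), (25, 12) (the pairing is an assumption about the table's layout): the squared Mahalanobis distance of a case is D² = (x − x̄)′S⁻¹(x − x̄), where S is the sample covariance matrix.

```python
import numpy as np

# Data as reconstructed from the exercise (the row layout is an assumption).
X = np.array([[14, 21], [17, 23], [36, 10], [32, 18], [25, 12]], dtype=float)

xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)        # unbiased sample covariance matrix
d = X[3] - xbar                    # case 4 (index 3)
D2 = d @ np.linalg.inv(S) @ d      # squared Mahalanobis distance
print(round(float(D2), 3))
```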

12. Using SPSS, run backward selection on the National Academy of Sciences data. What model is selected?
13. From one of the better journals in your content area within the last 5 years, find an article that used multiple regression. Answer the following questions:

(a) Did the authors discuss checking the assumptions for regression?

(b) Did the authors report an adjusted squared multiple correlation?

(c) Did the authors discuss checking for outliers and/or influential observations?

(d) Did the authors say anything about validating their equation?

REFERENCES

Anscombe, F. J. (1973). Graphs in statistical analysis. American Statistician, 27, 17–21.
Belsley, D. A., Kuh, E., & Welsch, R. (1980). Regression diagnostics: Identifying influential data and sources of collinearity. New York, NY: Wiley.
Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 1304–1312.
Cohen, J., & Cohen, P. (1983). Applied multiple regression/correlation analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum.
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation for the behavioral sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Cook, R. D. (1977). Detection of influential observations in linear regression. Technometrics, 19, 15–18.
Cook, R. D., & Weisberg, S. (1982). Residuals and influence in regression. New York, NY: Chapman & Hall.
Crowder, R. (1975). An investigation of the relationship between social I.Q. and vocational evaluation ratings with an adult trainable mental retardate work activity center population. Unpublished doctoral dissertation, University of Cincinnati, OH.
Crystal, G. (1988). The wacky, wacky world of CEO pay. Fortune, 117, 68–78.
Dizney, H., & Gromen, L. (1967). Predictive validity and differential achievement on three MLA Comparative Foreign Language tests. Educational and Psychological Measurement, 27, 1127–1130.
Draper, N. R., & Smith, H. (1981). Applied regression analysis. New York, NY: Wiley.
Feshbach, S., Adelman, H., & Fuller, W. (1977). Prediction of reading and related academic problems. Journal of Educational Psychology, 69, 299–308.
Finn, J. (1974). A general model for multivariate analysis. New York, NY: Holt, Rinehart & Winston.
Glasnapp, D., & Poggio, J. (1985). Essentials of statistical analysis for the behavioral sciences. Columbus, OH: Charles Merrill.
Guttman, L. (1941). Mathematical and tabulation techniques. Supplementary study B. In P. Horst (Ed.), Prediction of personnel adjustment (pp. 251–364). New York, NY: Social Science Research Council.
Herzberg, P. A. (1969). The parameters of cross-validation (Psychometric Monograph No. 16). Richmond, VA: Psychometric Society. Retrieved from http://www.psychometrika.org/journal/online/MN16.pdf
Hoaglin, D., & Welsch, R. (1978). The hat matrix in regression and ANOVA. American Statistician, 32, 17–22.
Hoerl, A. E., & Kennard, W. (1970a). Ridge regression: Biased estimation for non-orthogonal problems. Technometrics, 12, 55–67.
Hoerl, A. E., & Kennard, W. (1970b). Ridge regression: Applications to non-orthogonal problems. Technometrics, 12, 69–82.
Hogg, R. V. (1979). Statistical robustness: One view of its use in application today. American Statistician, 33, 108–115.
Huber, P. (1977). Robust statistical procedures (No. 27, Regional conference series in applied mathematics). Philadelphia, PA: SIAM.
Huberty, C. J. (1989). Problems with stepwise methods—better alternatives. In B. Thompson (Ed.), Advances in social science methodology (Vol. 1, pp. 43–70). Stamford, CT: JAI.
Johnson, R. A., & Wichern, D. W. (2007). Applied multivariate statistical analysis (6th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Jones, L. V., Lindzey, G., & Coggeshall, P. E. (Eds.). (1982). An assessment of research-doctorate programs in the United States: Social & behavioral sciences. Washington, DC: National Academies Press.
Krasker, W. S., & Welsch, R. E. (1979). Efficient bounded-influence regression estimation using alternative definitions of sensitivity (Technical Report No. 3). Center for Computational Research in Economics and Management Science, Massachusetts Institute of Technology, Cambridge, MA.
Lord, R., & Novick, M. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley.
Mahalanobis, P. C. (1936). On the generalized distance in statistics. Proceedings of the National Institute of Science of India, 12, 49–55.
Mallows, C. L. (1973). Some comments on Cp. Technometrics, 15, 661–676.
Moore, D., & McCabe, G. (1989). Introduction to the practice of statistics. New York, NY: Freeman.
Morris, J. D. (1982). Ridge regression and some alternative weighting techniques: A comment on Darlington. Psychological Bulletin, 91, 203–210.
Morrison, D. F. (1983). Applied linear statistical methods. Englewood Cliffs, NJ: Prentice Hall.
Mosteller, F., & Tukey, J. W. (1977). Data analysis and regression. Reading, MA: Addison-Wesley.
Myers, R. (1990). Classical and modern regression with applications (2nd ed.). Boston, MA: Duxbury.
Nunnally, J. (1978). Psychometric theory. New York, NY: McGraw-Hill.
Park, C., & Dudycha, A. (1974). A cross validation approach to sample size determination for regression models. Journal of the American Statistical Association, 69, 214–218.
Pedhazur, E. (1982). Multiple regression in behavioral research (2nd ed.). New York, NY: Holt, Rinehart & Winston.
Plante, T., & Goldfarb, L. (1984). Concurrent validity for an activity vector analysis index of social adjustment. Journal of Clinical Psychology, 40, 1215–1218.
Ramsey, F., & Schafer, D. (1997). The statistical sleuth. Belmont, CA: Duxbury.
SAS Institute. (1990). SAS/STAT user's guide (Vol. 2). Cary, NC: Author.
Singer, J., & Willett, J. (1988, April). Opening up the black box of recipe statistics: Putting the data back into data analysis. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.
Smith, G., & Campbell, F. (1980). A critique of some ridge regression methods. Journal of the American Statistical Association, 75, 74–81.
Stein, C. (1960). Multiple regression. In I. Olkin (Ed.), Contributions to probability and statistics: Essays in honor of Harold Hotelling (pp. 424–443). Stanford, CA: Stanford University Press.
Timm, N. H. (1975). Multivariate analysis with applications in education and psychology. Monterey, CA: Brooks-Cole.
Weisberg, S. (1980). Applied linear regression. New York, NY: Wiley.
Weisberg, S. (1985). Applied linear regression (2nd ed.). New York, NY: Wiley.
Wherry, R. J. (1931). A new formula for predicting the shrinkage of the coefficient of multiple correlation. Annals of Mathematical Statistics, 2, 440–457.
Wilkinson, L. (1979). Tests of significance in stepwise regression. Psychological Bulletin, 86, 168–174.


Chapter 4

TWO-GROUP MULTIVARIATE

ANALYSIS OF VARIANCE

4.1 INTRODUCTION

In this chapter we consider the statistical analysis of two groups of participants on several dependent variables simultaneously, focusing on cases where the variables are correlated and share a common conceptual meaning. That is, the dependent variables considered together make sense as a group. For example, they may be different dimensions of self-concept (physical, social, emotional, academic), teacher effectiveness, speaker credibility, or reading (blending, syllabication, comprehension, etc.). We consider the multivariate tests along with their univariate counterparts and show that the multivariate two-group test (Hotelling's T²) is a natural generalization of the univariate t test. We initially present the traditional analysis of variance approach for the two-group multivariate problem, and then later briefly present and compare a regression analysis of the same data. In the next chapter, studies with more than two groups are considered, where multivariate tests are employed that are generalizations of Fisher's F found in a univariate one-way ANOVA. The last part of this chapter (sections 4.9–4.12) presents a fairly extensive discussion of power, including introduction of a multivariate effect size measure and the use of SPSS MANOVA for estimating power.

There are two reasons one should be interested in using more than one dependent variable when comparing two treatments:

1. Any treatment “worth its salt” will affect participants in more than one way—hence

the need for several criterion measures.

2. Through the use of several criterion measures we can obtain a more complete and detailed description of the phenomenon under investigation, whether it is reading achievement, math achievement, self-concept, physiological stress, or teacher or counselor effectiveness.

If we were comparing two methods of teaching second-grade reading, we would obtain a more detailed and informative breakdown of the differential effects of the methods if reading achievement were split into its subcomponents: syllabication, blending, sound discrimination, vocabulary, comprehension, and reading rate. Comparing the two methods only on total reading achievement might yield no significant difference; however, the methods may be making a difference. The differences may be confined to only the more basic elements of blending and syllabication. Similarly, if two methods of teaching sixth-grade mathematics were being compared, it would be more informative to compare them on various levels of mathematics achievement (computations, concepts, and applications).

4.2 FOUR STATISTICAL REASONS FOR PREFERRING A MULTIVARIATE ANALYSIS

1. The use of fragmented univariate tests leads to a greatly inflated overall type I error rate, that is, the probability of at least one false rejection. Consider a two-group problem with 10 dependent variables. What is the probability of one or more spurious results if we do 10 t tests, each at the .05 level of significance? If we assume the tests are independent as an approximation (because the tests are not independent), then the probability of no type I errors is:

(.95)(.95) · · · (.95) = (.95)^10 ≈ .60
        (10 times)

because the probability of not making a type I error for each test is .95, and with the independence assumption we can multiply probabilities. Therefore, the probability of at least one false rejection is 1 − .60 = .40, which is unacceptably high. Thus, with the univariate approach, not only does overall α become too high, but we can't even accurately estimate it.

2. The univariate tests ignore important information, namely, the correlations among

the variables. The multivariate test incorporates the correlations (via the covariance matrix) right into the test statistic, as is shown in the next section.

3. Although the groups may not be significantly different on any of the variables

individually, jointly the set of variables may reliably differentiate the groups.

That is, small differences on several of the variables may combine to produce a

reliable overall difference. Thus, the multivariate test will be more powerful in

this case.

4. It is sometimes argued that the groups should be compared on total test score first to see if there is a difference. If so, then compare the groups further on subtest scores to locate the sources responsible for the global difference. On the other hand, if there is no total test score difference, then stop. This procedure could definitely be misleading. Suppose, for example, that the total test scores were not significantly different, but that on subtest 1 group 1 was quite superior, on subtest 2 group 1 was somewhat superior, on subtest 3 there was no difference, and on subtest 4 group 2 was quite superior. Then it would be clear why the univariate analysis of total test score found nothing—because of a canceling-out effect. But the two groups do differ substantially on two of the four subtests, and to some extent on a third. A multivariate analysis of the subtests reflects these differences and would show a significant difference.
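The error-rate arithmetic in the first reason can be verified directly:

```python
# Probability of at least one type I error across p independent tests,
# each conducted at the .05 level (the approximation used in reason 1).
alpha, p = 0.05, 10
overall = 1 - (1 - alpha) ** p
print(round(overall, 3))  # 0.401, close to the .40 quoted in the text
```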

Many investigators, especially when they first hear about multivariate analysis of variance (MANOVA), will lump all the dependent variables in a single analysis. This is

not necessarily a good idea. If several of the variables have been included without

any strong rationale (empirical or theoretical), then small or negligible differences on

these variables may obscure a real difference(s) on some of the other variables. That

is, the multivariate test statistic detects mainly error in the system (i.e., in the set of

variables), and therefore declares no reliable overall difference. In a situation such as

this, what is called for are two separate multivariate analyses, one for the variables for

which there is solid support, and a separate one for the variables that are being tested

on a heuristic basis.

4.3 THE MULTIVARIATE TEST STATISTIC AS A GENERALIZATION OF THE UNIVARIATE t TEST

For the univariate t test the null hypothesis is:

H0: μ1 = μ2 (population means are equal)

In the multivariate case the null hypothesis is:

H0: [μ11, μ21, …, μp1]′ = [μ12, μ22, …, μp2]′ (population mean vectors are equal)

Saying that the vectors are equal implies that the population means for the two groups on variable 1 are equal (i.e., μ11 = μ12), the population group means on variable 2 are equal (μ21 = μ22), and so on for each of the p dependent variables. The first part of the subscript refers to the variable and the second part to the group. Thus, μ21 refers to the population mean for variable 2 in group 1.

Now, for the univariate t test, you may recall that there are three assumptions involved:

(1) independence of the observations, (2) normality, and (3) equality of the population

variances (homogeneity of variance). In testing the multivariate null hypothesis the

corresponding assumptions are: (1) independence of the observations, (2) multivariate

normality on the dependent variables in each population, and (3) equality of the covariance matrices. The latter two multivariate assumptions are much more stringent than

the corresponding univariate assumptions. For example, saying that two covariance

matrices are equal for four variables implies that the variances are equal for each of the

Chapter 4

variables and that the six covariances for each of the groups are equal. Consequences of violating the multivariate assumptions are discussed in detail in Chapter 6.

We now show how the multivariate test statistic arises naturally from the univariate t by replacing scalars (numbers) with vectors and matrices. The univariate t is given by:

t = \frac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}, \quad (1)

where s_1^2 and s_2^2 are the sample variances for groups 1 and 2, respectively. The quantity under the radical, excluding the sum of the reciprocals, is the pooled estimate of the assumed common within-population variance; call it s². Now, replacing that quantity by s² and squaring both sides, we obtain:

t^2 = \frac{(\bar{y}_1 - \bar{y}_2)^2}{s^2\left(\frac{1}{n_1} + \frac{1}{n_2}\right)} = (\bar{y}_1 - \bar{y}_2)\left[s^2\left(\frac{1}{n_1} + \frac{1}{n_2}\right)\right]^{-1}(\bar{y}_1 - \bar{y}_2)

= (\bar{y}_1 - \bar{y}_2)\left[s^2\,\frac{n_1 + n_2}{n_1 n_2}\right]^{-1}(\bar{y}_1 - \bar{y}_2)

t^2 = \frac{n_1 n_2}{n_1 + n_2}\,(\bar{y}_1 - \bar{y}_2)\,(s^2)^{-1}\,(\bar{y}_1 - \bar{y}_2)

Hotelling's T² is obtained by replacing the means on each variable by the vectors of means in each group, and by replacing the univariate measure of within variability s² by its multivariate generalization S (the estimate of the assumed common population covariance matrix). Thus we obtain:

T^2 = \frac{n_1 n_2}{n_1 + n_2}\,(\bar{\mathbf{y}}_1 - \bar{\mathbf{y}}_2)'\, S^{-1}\, (\bar{\mathbf{y}}_1 - \bar{\mathbf{y}}_2) \quad (2)

Recall that the matrix analogue of division is inversion; thus (s²)⁻¹ is replaced by the inverse of S.

Hotelling (1931) showed that the following transformation of T² yields an exact F distribution:

F = \frac{n_1 + n_2 - p - 1}{(n_1 + n_2 - 2)\,p}\; T^2 \quad (3)


with p and (N − p − 1) degrees of freedom, where p is the number of dependent variables and N = n1 + n2, that is, the total number of subjects.

We can rewrite T² as:

T^2 = k\,\mathbf{d}'\,S^{-1}\,\mathbf{d},

where k is a constant involving the group sizes, d is the vector of mean differences,

and S is the covariance matrix. Thus, what we have reflected in T² is a comparison of between-variability (given by the d vector) to within-variability (given by S). This may not be obvious, because we are not literally dividing between by within as in the univariate case (i.e., F = MS_h / MS_w). However, recall that inversion is the matrix analogue of division, so that multiplying by S⁻¹ is in effect "dividing" by the multivariate measure of within variability.
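As a quick illustration of Equations 2 and 3, here is a short Python sketch (using numpy and scipy; the function name and packaging are ours, not part of the chapter, which works in SAS/SPSS):

```python
import numpy as np
from scipy.stats import f

def hotelling_t2(X1, X2):
    """Two-group Hotelling's T^2 and its exact F transformation.

    X1, X2: (n_i x p) arrays of scores for groups 1 and 2.
    Returns (T2, F, df1, df2, p_value).
    """
    n1, p = X1.shape
    n2 = X2.shape[0]
    d = X1.mean(axis=0) - X2.mean(axis=0)                    # vector of mean differences
    W1 = (X1 - X1.mean(axis=0)).T @ (X1 - X1.mean(axis=0))   # within SSCP, group 1
    W2 = (X2 - X2.mean(axis=0)).T @ (X2 - X2.mean(axis=0))   # within SSCP, group 2
    S = (W1 + W2) / (n1 + n2 - 2)                            # pooled covariance matrix
    T2 = n1 * n2 / (n1 + n2) * d @ np.linalg.solve(S, d)     # Equation 2
    df1, df2 = p, n1 + n2 - p - 1
    F_stat = df2 / ((n1 + n2 - 2) * p) * T2                  # Equation 3
    return T2, F_stat, df1, df2, f.sf(F_stat, df1, df2)
```

Note that multiplying by S⁻¹ is done here with `np.linalg.solve`, the numerically preferred way of "dividing" by S.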

4.4 NUMERICAL CALCULATIONS FOR A TWO-GROUP PROBLEM

We now consider a small example to illustrate the calculations associated with Hotelling's T². The fictitious data shown next represent scores on two measures of counselor effectiveness, client satisfaction (SA) and client self-acceptance

(CSA). Six participants were originally randomly assigned to counselors who

used either a behavior modification or cognitive method; however, three in the

behavior modification group were unable to continue for reasons unrelated to the

treatment.

Behavior modification            Cognitive
  SA     CSA                       SA     CSA
   1      3                         4      6
   3      7                         6      8
   2      2                         6      8
                                    5     10
ȳ11 = 2   ȳ21 = 4                   5     10
                                    4      6

                                 ȳ12 = 5   ȳ22 = 8

Recall again that the first part of the subscript denotes the variable and the second part the group; that is, ȳ12 is the mean for variable 1 in group 2.

In words, our multivariate null hypothesis is: “There are no mean differences between

the behavior modification and cognitive groups when they are compared simultaneously on client satisfaction and client self-acceptance.” Let client satisfaction be


variable 1 and client self-acceptance be variable 2. Then the multivariate null hypothesis in symbols is:

H_0: \begin{pmatrix} \mu_{11} \\ \mu_{21} \end{pmatrix} = \begin{pmatrix} \mu_{12} \\ \mu_{22} \end{pmatrix}

That is, we wish to determine whether it is tenable that the population means are equal for variable 1 (μ11 = μ12) and that the population means for variable 2 are equal (μ21 = μ22). To test the multivariate null hypothesis we need to calculate F in Equation 3. But to obtain this we first need T², and the tedious part of calculating T² is in obtaining S, which is our pooled estimate of within-group variability on the set of two variables, that is, our estimate of error. Before we begin calculating S it will be helpful to go back to the univariate t test (Equation 1) and recall how the estimate of error variance was obtained there. The estimate of the assumed common within-population variance (σ²) (i.e., error variance) is given by

s^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2} = \frac{ss_{g1} + ss_{g2}}{n_1 + n_2 - 2}, \quad (4)

(the first expression follows from Equation 1, the second from the definition of variance)

where ssg1 and ssg2 are the within sums of squares for groups 1 and 2. In the multivariate case (i.e., in obtaining S) we replace the univariate measures of within-group

variability (ssg1 and ssg2) by their matrix multivariate generalizations, which we call

W1 and W2.

W1 will be our estimate of within variability on the two dependent variables in group 1.

Because we have two variables, there is variability on each, which we denote by ss1 and

ss2, and covariability, which we denote by ss12. Thus, the matrix W1 will look as follows:

W_1 = \begin{pmatrix} ss_1 & ss_{12} \\ ss_{21} & ss_2 \end{pmatrix}

Similarly, W2 will be our estimate of within variability (error) on variables in group 2.

After W1 and W2 have been calculated, we will pool them (i.e., add them) and divide by the degrees of freedom, as was done in the univariate case (see Equation 4), to obtain our multivariate error term, the covariance matrix S. Table 4.1 shows schematically the procedure for obtaining the pooled error terms for both the univariate t test and for Hotelling's T².

4.4.1 Calculation of the Multivariate Error Term S

First we calculate W1, the estimate of within variability for group 1. Now, ss1 and

ss2 are just the sum of the squared deviations about the means for variables 1 and 2,

respectively. Thus,


Table 4.1: Estimation of Error Term for t Test and Hotelling's T²

                                      t test (univariate)                    T² (multivariate)
Assumption                            Within-group population variances      Within-group population covariance
                                      are equal, i.e., σ1² = σ2².            matrices are equal, Σ1 = Σ2.
                                      Call the common value σ².              Call the common value Σ.

To estimate these assumed common population values we employ the three steps indicated next:

Calculate the within-group            ssg1 and ssg2                          W1 and W2
measures of variability.
Pool these estimates.                 ssg1 + ssg2                            W1 + W2
Divide by the degrees of freedom.     (ssg1 + ssg2)/(n1 + n2 − 2) = σ̂²       (W1 + W2)/(n1 + n2 − 2) = Σ̂ = S

Note: The rationale for pooling is that if we are measuring the same variability in each group (which is the assumption), then we obtain a better estimate of this variability by combining our estimates.

ss_1 = \sum_{i=1}^{3} (y_{1(i)} - \bar{y}_{11})^2 = (1-2)^2 + (3-2)^2 + (2-2)^2 = 2

(y_{1(i)} denotes the score for the ith subject on variable 1)

and

ss_2 = \sum_{i=1}^{3} (y_{2(i)} - \bar{y}_{21})^2 = (3-4)^2 + (7-4)^2 + (2-4)^2 = 14

Finally, ss12 is just the sum of deviation cross-products:

ss_{12} = \sum_{i=1}^{3} (y_{1(i)} - 2)(y_{2(i)} - 4) = (1-2)(3-4) + (3-2)(7-4) + (2-2)(2-4) = 4

Therefore, the within SSCP matrix for group 1 is

W_1 = \begin{pmatrix} 2 & 4 \\ 4 & 14 \end{pmatrix}.

Similarly, as we leave for you to show, the within matrix for group 2 is

W_2 = \begin{pmatrix} 4 & 4 \\ 4 & 16 \end{pmatrix}.


Thus, the multivariate error term (i.e., the pooled within covariance matrix) is calculated as:

S = \frac{W_1 + W_2}{n_1 + n_2 - 2} = \frac{1}{7}\left[\begin{pmatrix} 2 & 4 \\ 4 & 14 \end{pmatrix} + \begin{pmatrix} 4 & 4 \\ 4 & 16 \end{pmatrix}\right] = \begin{pmatrix} 6/7 & 8/7 \\ 8/7 & 30/7 \end{pmatrix}.

Note that 6/7 is just the sample variance for variable 1, 30/7 is the sample variance for

variable 2, and 8/7 is the sample covariance.

4.4.2 Calculation of the Multivariate Test Statistic

To obtain Hotelling's T² we need the inverse of S, as follows:

S^{-1} = \begin{pmatrix} 1.810 & -.483 \\ -.483 & .362 \end{pmatrix}

From Equation 2, then, Hotelling's T² is

T^2 = \frac{n_1 n_2}{n_1 + n_2}\,(\bar{\mathbf{y}}_1 - \bar{\mathbf{y}}_2)'\, S^{-1}\, (\bar{\mathbf{y}}_1 - \bar{\mathbf{y}}_2)

T^2 = \frac{3(6)}{3 + 6}\,(2 - 5,\; 4 - 8) \begin{pmatrix} 1.810 & -.483 \\ -.483 & .362 \end{pmatrix} \begin{pmatrix} 2 - 5 \\ 4 - 8 \end{pmatrix}

T^2 = 2\,(-3, -4) \begin{pmatrix} -3.501 \\ .001 \end{pmatrix} = 21

The exact F transformation of T² is then

F = \frac{n_1 + n_2 - p - 1}{(n_1 + n_2 - 2)\,p}\; T^2 = \frac{9 - 2 - 1}{7(2)}\,(21) = 9,

where F has 2 and 6 degrees of freedom (cf. Equation 3).

If we were testing the multivariate null hypothesis at the .05 level, then we would reject this hypothesis (because the critical value = 5.14) and conclude that the two groups differ on the set of two variables.
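The hand calculations in this section can be checked with a short Python sketch (numpy/scipy; illustrative only, not from the chapter):

```python
import numpy as np
from scipy.stats import f

# Fictitious SA and CSA scores from section 4.4
g1 = np.array([[1, 3], [3, 7], [2, 2]], float)                             # behavior modification
g2 = np.array([[4, 6], [6, 8], [6, 8], [5, 10], [5, 10], [4, 6]], float)   # cognitive

W1 = (g1 - g1.mean(0)).T @ (g1 - g1.mean(0))   # [[2, 4], [4, 14]]
W2 = (g2 - g2.mean(0)).T @ (g2 - g2.mean(0))   # [[4, 4], [4, 16]]
S = (W1 + W2) / (3 + 6 - 2)                    # pooled covariance matrix [[6/7, 8/7], [8/7, 30/7]]

d = g1.mean(0) - g2.mean(0)                    # (-3, -4)
T2 = 3 * 6 / (3 + 6) * d @ np.linalg.inv(S) @ d
F_stat = (3 + 6 - 2 - 1) / ((3 + 6 - 2) * 2) * T2

print(round(T2, 3), round(F_stat, 3), round(f.sf(F_stat, 2, 6), 4))  # 21.0 9.0 0.0156
```

The p value of .0156 agrees with the SAS output shown later in Table 4.3.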

After finding that the groups differ, we would like to determine which of the variables

are contributing to the overall difference; that is, a post hoc procedure is needed. This

is similar to the procedure followed in a one-way ANOVA, where first an overall F test

is done. If F is significant, then a post hoc technique (such as Tukey’s) is used to determine which specific groups differed, and thus contributed to the overall difference.

Here, instead of groups, we wish to know which variables contributed to the overall

multivariate significance.


Now, multivariate significance implies there is a linear combination of the dependent

variables (the discriminant function) that is significantly separating the groups. We

defer presentation of discriminant analysis (DA) to ChapterÂ€10. You may see discussions in the literature where DA is preferred over the much more commonly used procedures discussed in sectionÂ€4.5 because the linear combinations in DA may suggest

new “constructs” that a researcher may not have expected, and that DA makes use of

the correlations among outcomes throughout the analysis procedure. While we agree

that discriminant analysis can be of value, there are at least three factors that can mitigate its usefulness in many instances:

1. There is no guarantee that the linear combination (the discriminant function) will

be a meaningful variate, that is, that it will make substantive or conceptual sense.

2. Sample size must be considerably larger than many investigators realize in order

to have the results of a discriminant analysis be reliable. More details on this later.

3. The investigator may be more interested in identifying whether group differences are present for each specific variable, rather than on some combination of them.

4.5 THREE POST HOC PROCEDURES

We now consider three possible post hoc approaches. One approach is to use the

Roy–Bose simultaneous confidence intervals. These are a generalization of the Scheffé

intervals, and are illustrated in Morrison (1976) and in Johnson and Wichern (1982).

The intervals are nice in that we not only can determine whether a pair of means is

different, but in addition can obtain a range of values within which the population

mean differences probably lie. Unfortunately, however, the procedure is extremely conservative (Hummel & Sligo, 1971), and this will hurt power (sensitivity for detecting differences). Thus, we cannot recommend this procedure for general use.

As Bock (1975) noted, "their [Roy–Bose intervals] use at the conventional 90% confidence level will lead the investigator to overlook many differences that should be interpreted and defeat the purposes of an exploratory comparative study" (p. 422).

What Bock says applies with particularly great force to a very large number of studies

in social science research where the group or effect sizes are small or moderate. In

these studies, power will be poor or not adequate to begin with. To be more specific,

consider the power table from Cohen (1988) for a two-tailed t test at the .05 level of

significance. For group sizes ≤ 20 and small or medium effect sizes through .60 standard deviations, which is a quite common class of situations, the largest power is .45.

The use of the Roy–Bose intervals will dilute the power even further to extremely low

levels.

A second widely used but also potentially problematic post hoc procedure we consider

is to follow up a significant multivariate test at the .05 level with univariate tests, each

at the .05 level. On the positive side, this procedure has the greatest power of the three

methods considered here for detecting differences, and provides accurate type I error


control when two dependent variables are included in the design. However, the overall type I error rate increases when more than two dependent variables appear in the design. For example, this rate may be as high as .10 for three dependent variables and .15 with four dependent variables, and it continues to increase with more dependent variables. As such, we cannot recommend this procedure if more than three dependent variables are included in your design. Further, if you plan to use confidence intervals to estimate mean differences, this procedure cannot be recommended because confidence interval coverage (i.e., the proportion of intervals that are expected to capture the true mean differences) is lower than desired and becomes worse as the number of dependent variables increases.

The third and generally recommended post hoc procedure is to follow a significant multivariate result with univariate t tests, but to do each t test at the α/p level of significance. Thus, if there were five dependent variables and we wished to have an overall α of .05, then we would simply compare our obtained p value for the t (or F) test to α = .05/5 = .01. By this procedure, we are assured by the Bonferroni inequality that the overall type I error rate for the set of t tests will be less than α. In addition, this Bonferroni procedure provides generally accurate confidence interval coverage for the set of mean differences, and so is the preferred procedure when confidence intervals are used. One weakness of the Bonferroni-adjusted procedure is that power will be severely attenuated if the number of dependent variables is even moderately large (say > 7). For example, if p = 15 and we wish to set overall α = .05, then each univariate test would be done at the .05/15 = .0033 level of significance.
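A sketch of this Bonferroni follow-up in Python (scipy; the function name and packaging are our own, not the chapter's):

```python
from scipy.stats import ttest_ind

def bonferroni_followup(group1_vars, group2_vars, alpha=0.05):
    """After a significant multivariate test, run a two-sample t test on each
    of the p dependent variables, judging each at the alpha/p level.

    group1_vars, group2_vars: lists of score sequences, one per dependent variable.
    Returns a list of (p_value, significant) pairs, one per variable.
    """
    per_test_alpha = alpha / len(group1_vars)   # Bonferroni-adjusted level
    results = []
    for y1, y2 in zip(group1_vars, group2_vars):
        p_value = ttest_ind(y1, y2).pvalue
        results.append((p_value, p_value < per_test_alpha))
    return results
```

With the two outcomes of the section 4.4 example and overall α = .05, each t test is judged at .05/2 = .025; the SA difference (p ≈ .003) passes, while the CSA difference (p ≈ .029) does not.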

There are two things we may do to improve power for the t tests and yet provide reasonably good protection against type I errors. First, there are several reasons (which we detail in Chapter 5) for generally preferring to work with a relatively small number of dependent variables (say ≤ 10). Second, in many cases, it may be possible to divide the dependent variables into two or three of the following categories: (1) those variables likely to show a difference, (2) those variables (based on past research) that may show a difference, and (3) those variables that are being tested on a heuristic basis. To illustrate, suppose we conduct a study limiting the number of variables to eight. There is fairly solid evidence from the literature that three of the variables should show a difference, while the other five are being tested on a heuristic basis. In this situation, as indicated in section 4.2, two multivariate tests should be done. If the multivariate test is significant for the fairly solid variables, then we would test each of the individual variables at the .05 level. Here we are not as concerned about type I errors in the follow-up phase, because there is prior reason to believe differences are present, and recall that some type I error protection is provided by use of the multivariate test. Then, a separate multivariate test is done for the five heuristic variables. If this is significant, we can then use the Bonferroni-adjusted t test approach, but perhaps set overall α somewhat higher for better power (especially if sample size is small or moderate). For example, we could set overall α = .15, and thus test each variable for significance at the .15/5 = .03 level.


4.6 SAS AND SPSS CONTROL LINES FOR SAMPLE PROBLEM AND SELECTED OUTPUT

Table 4.2 presents SAS and SPSS commands for running the two-group sample MANOVA problem. Tables 4.3 and 4.4 show selected SAS output, and Table 4.5 shows selected output from SPSS. Note that both SAS and SPSS give all four multivariate test statistics, although in different orders. Recall from earlier in the chapter that for two groups the various tests are equivalent, and therefore the multivariate F is the same for all four test statistics.

Table 4.2: SAS and SPSS GLM Control Lines for Two-Group MANOVA Sample Problem

SAS

        TITLE 'MANOVA';
        DATA twogp;
        INPUT gp y1 y2 @@;
        LINES;
        1 1 3 1 3 7 1 2 2
        2 4 6 2 6 8 2 6 8
        2 5 10 2 5 10 2 4 6
    (1) PROC GLM;
    (2) CLASS gp;
    (3) MODEL y1 y2 = gp;
    (4) MANOVA H = gp/PRINTE PRINTH;
    (5) MEANS gp;
        RUN;

SPSS

        TITLE 'MANOVA'.
        DATA LIST FREE/gp y1 y2.
        BEGIN DATA.
    (6) 1 1 3 1 3 7 1 2 2
        2 4 6 2 6 8 2 6 8
        2 5 10 2 5 10 2 4 6
        END DATA.
    (7) GLM y1 y2 BY gp
    (8)   /PRINT=DESCRIPTIVE ETASQ TEST(SSCP)
          /DESIGN= gp.

(1) The GENERAL LINEAR MODEL procedure is called.
(2) The CLASS statement tells SAS which variable is the grouping variable (gp, here).
(3) In the MODEL statement the dependent variables are put on the left-hand side and the grouping variable(s) on the right-hand side.
(4) You need to identify the effect to be used as the hypothesis matrix, which here by default is gp. After the slash a wide variety of optional output is available. We have selected PRINTE (prints the error SSCP matrix) and PRINTH (prints the matrix associated with the effect, which here is group).
(5) MEANS gp requests the means and standard deviations for each group.
(6) The first number in each triplet is the group identification, with the remaining two numbers the scores on the dependent variables.
(7) The general form for the GLM command is dependent variables BY grouping variables.
(8) This PRINT subcommand yields descriptive statistics for the groups (means and standard deviations), proportion-of-variance-explained statistics via ETASQ, and the error and between-group SSCP matrices.


Table 4.3: SAS Output for the Two-Group MANOVA Showing SSCP Matrices and Multivariate Tests

E = Error SSCP Matrix
          Y1      Y2
    Y1     6       8
    Y2     8      30

In section 4.4, under Calculation of the Multivariate Error Term S, we computed the separate W1 and W2 matrices (the within sums of squares and cross-products matrices), and then pooled or added them to obtain the covariance matrix S. What SAS is outputting here is this pooled W1 + W2 matrix.

H = Type III SSCP Matrix for GP
          Y1      Y2
    Y1    18      24
    Y2    24      32

Note that the diagonal elements of this hypothesis or between-group SSCP matrix are just the between-group sums of squares for the univariate F tests.

MANOVA Test Criteria and Exact F Statistics for the Hypothesis of No Overall GP Effect
H = Type III SSCP Matrix for GP   E = Error SSCP Matrix
S=1  M=0  N=2

Statistic                 Value        F Value   Num DF   Den DF   Pr > F
Wilks' Lambda             0.25000000   9.00      2        6        0.0156
Pillai's Trace            0.75000000   9.00      2        6        0.0156
Hotelling-Lawley Trace    3.00000000   9.00      2        6        0.0156
Roy's Greatest Root       3.00000000   9.00      2        6        0.0156

In Table 4.3, the within-group (or error) SSCP and between-group SSCP matrices are shown along with the multivariate test results. Note that the multivariate F of 9 (which is equal to the F calculated in section 4.4.2) is statistically significant (p < .05), suggesting that group differences are present for at least one dependent variable. The univariate F tests, shown in Table 4.4, using an unadjusted alpha of .05, indicate that group differences are present for each outcome, as each p value (.003, .029) is less than .05. Note that these Fs are equivalent to squared t values, as F = t² for two groups. Given the group means shown in Table 4.4, we can then conclude that the population means for group 2 are greater than those for group 1 for both outcomes. Note that if you wished to implement the Bonferroni approach for these univariate tests (which is not necessary here for type I error control, given that we


Table 4.4: SAS Output for the Two-Group MANOVA Showing Univariate Results

Dependent Variable: Y1
Source            DF   Sum of Squares   Mean Square   F Value   Pr > F
Model              1   18.00000000      18.00000000   21.00     0.0025
Error              7    6.00000000       0.85714286
Corrected Total    8   24.00000000

R-Square   Coeff Var   Root MSE   Y1 Mean
0.750000   23.14550    0.925820   4.000000

Dependent Variable: Y2
Source            DF   Sum of Squares   Mean Square   F Value   Pr > F
Model              1   32.00000000      32.00000000   7.47      0.0292
Error              7   30.00000000       4.28571429
Corrected Total    8   62.00000000

R-Square   Coeff Var   Root MSE   Y2 Mean
0.516129   31.05295    2.070197   6.666667

                        Y1                           Y2
Level of GP   N    Mean         Std Dev         Mean         Std Dev
1             3    2.00000000   1.00000000      4.00000000   2.64575131
2             6    5.00000000   0.89442719      8.00000000   1.78885438

have 2 dependent variables), you would simply compare the obtained p values to an alpha of .05/2 or .025. You can also see that Table 4.5, showing selected SPSS output, provides similar information, with descriptive statistics, followed by the multivariate test results, univariate test results, and then the between- and within-group SSCP matrices. Note that a multivariate effect size measure (multivariate partial eta square) appears in the Multivariate Tests output selection. This effect size measure is discussed in Chapter 5. Also, univariate partial eta squares are shown in the output table Tests of Between-Subjects Effects. This effect size measure is discussed in section 4.8. Although the results indicate that group differences are present for each dependent variable, we emphasize that because the univariate Fs ignore how a given variable is correlated with the others in the set, they do not give an indication of the relative importance of that variable to group differentiation. A technique for determining the relative importance of each variable to group separation is discriminant analysis, which will be discussed in Chapter 10. To obtain reliable results with discriminant analysis, however, a large subject-to-variable ratio is needed; that is, about 20 subjects per variable are required.

Table 4.5: Selected SPSS Output for the Two-Group MANOVA

Descriptive Statistics
       GP      Mean     Std. Deviation   N
Y1     1.00    2.0000   1.00000          3
       2.00    5.0000    .89443          6
       Total   4.0000   1.73205          9
Y2     1.00    4.0000   2.64575          3
       2.00    8.0000   1.78885          6
       Total   6.6667   2.78388          9

Multivariate Tests(a)
Effect: GP           Value   F        Hypothesis df   Error df   Sig.   Partial Eta Squared
Pillai's Trace       .750    9.000b   2.000           6.000      .016   .750
Wilks' Lambda        .250    9.000b   2.000           6.000      .016   .750
Hotelling's Trace    3.000   9.000b   2.000           6.000      .016   .750
Roy's Largest Root   3.000   9.000b   2.000           6.000      .016   .750
a. Design: Intercept + GP
b. Exact statistic

Tests of Between-Subjects Effects
Source      Dependent   Type III Sum   Df   Mean     F        Sig.   Partial Eta
            Variable    of Squares          Square                   Squared
GP          Y1          18.000          1   18.000   21.000   .003   .750
            Y2          32.000          1   32.000   7.467    .029   .516
Error       Y1           6.000          7     .857
            Y2          30.000          7    4.286
Corrected   Y1          24.000          8
Total       Y2          62.000          8

Between-Subjects SSCP Matrix
                            Y1        Y2
Hypothesis   GP        Y1   18.000    24.000
                       Y2   24.000    32.000
Error                  Y1    6.000     8.000
                       Y2    8.000    30.000
Based on Type III Sum of Squares

Note: Some nonessential output has been removed from the SPSS tables.


4.7 MULTIVARIATE SIGNIFICANCE BUT NO UNIVARIATE SIGNIFICANCE

If the multivariate null hypothesis is rejected, then generally at least one of the univariate ts will be significant, as in our previous example. This will not always be the case.

It is possible to reject the multivariate null hypothesis and yet for none of the univariate ts to be significant. As Timm (1975) pointed out, "furthermore, rejection of the multivariate test does not guarantee that there exists at least one significant univariate F ratio. For a given set of data, the significant comparison may involve some linear combination of the variables" (p. 166). This is analogous to what happens occasionally in univariate analysis of variance.

The overall F is significant, but when, say, the Tukey procedure is used to determine which pairs of groups are significantly different, none is found. Again, all that the significant F guarantees is that there is at least one comparison among the group means that is significant at or beyond the same α level: the particular comparison may be a complex one, and may or may not be a meaningful one.

One way of seeing that there will be no necessary relationship between multivariate significance and univariate significance is to observe that the tests make use of different information. For example, the multivariate test takes into account the correlations among the variables, whereas the univariate tests do not. Also, the multivariate test considers the differences on all variables jointly, whereas the univariate tests consider the difference on each variable separately.

4.8 MULTIVARIATE REGRESSION ANALYSIS FOR THE SAMPLE PROBLEM

This section is presented to show that ANOVA and MANOVA are special cases of

regression analysis, that is, of the so-called general linear model. Cohen’s (1968)

seminal article was primarily responsible for bringing the general linear model to

the attention of social science researchers. The regression approach to MANOVA

is accomplished by dummy coding group membership. This can be done, for the two-group problem, by coding the participants in group 1 as 1, and the participants in group 2 as 0 (or vice versa). Thus, the data for our sample problem would look like this:

y1   y2   x
 1    3   1
 3    7   1   group 1
 2    2   1
 4    6   0
 6    8   0
 6    8   0   group 2
 5   10   0
 5   10   0
 4    6   0

In a typical regression problem, as considered in the previous chapters, the predictors

have been continuous variables. Here, for MANOVA, the predictor is a categorical or

nominal variable, and is used to determine how much of the variance in the dependent

variables is accounted for by group membership.

The setup of the two-group MANOVA as a multivariate regression may seem somewhat

strange since there are two dependent variables and only one predictor. In the previous

chapters there has been either one dependent variable and several predictors, or several

dependent variables and several predictors. However, the examination of the association

is done in the same way. Recall that Wilks’ Λ is the statistic for determining whether

there is a significant association between the dependent variables and the predictor(s):

\Lambda = \frac{|S_e|}{|S_e + S_r|},

where Se is the error SSCP matrix, that is, the sums of squares and cross products not

due to regression (or the residual), and Sr is the regression SSCP matrix, that is, an

index of how much variability in the dependent variables is due to regression. In this

case, variability due to regression is variability in the dependent variables due to group

membership, because the predictor is group membership.

Part of the output from SPSS for the two-group MANOVA, set up and run as a regression, is presented in Table 4.6. The error matrix Se is called adjusted within-cells sum of squares and cross products, and the regression SSCP matrix is called adjusted hypothesis sum of squares and cross products. Using these matrices, we can form Wilks' Λ (and see how the value of .25 is obtained):

\Lambda = \frac{|S_e|}{|S_e + S_r|} = \frac{\begin{vmatrix} 6 & 8 \\ 8 & 30 \end{vmatrix}}{\begin{vmatrix} 6 + 18 & 8 + 24 \\ 8 + 24 & 30 + 32 \end{vmatrix}} = \frac{\begin{vmatrix} 6 & 8 \\ 8 & 30 \end{vmatrix}}{\begin{vmatrix} 24 & 32 \\ 32 & 62 \end{vmatrix}} = \frac{116}{464} = .25
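The determinant arithmetic can be confirmed with a few lines of numpy (an illustrative sketch, not part of the original text):

```python
import numpy as np

Se = np.array([[6, 8], [8, 30]], float)      # error SSCP matrix
Sr = np.array([[18, 24], [24, 32]], float)   # regression (between-group) SSCP matrix

# Wilks' lambda = |Se| / |Se + Sr| = 116 / 464
wilks = np.linalg.det(Se) / np.linalg.det(Se + Sr)
print(round(wilks, 4))  # 0.25
```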


Table 4.6: Selected SPSS Output for Regression Analysis on Two-Group MANOVA with Group Membership as Predictor

Effect: GP           Value   F        Hypothesis df   Error df   Sig.
Pillai's Trace       .750    9.000a   2.000           6.000      .016
Wilks' Lambda        .250    9.000a   2.000           6.000      .016
Hotelling's Trace    3.000   9.000a   2.000           6.000      .016
Roy's Largest Root   3.000   9.000a   2.000           6.000      .016
a. Exact statistic

Source            Dependent   Type III Sum   df   Mean      F         Sig.
                  Variable    of Squares          Square
Corrected Model   Y1           18.000         1    18.000    21.000   .003
                  Y2           32.000         1    32.000     7.467   .029
Intercept         Y1           98.000         1    98.000   114.333   .000
                  Y2          288.000         1   288.000    67.200   .000
GP                Y1           18.000         1    18.000    21.000   .003
                  Y2           32.000         1    32.000     7.467   .029
Error             Y1            6.000         7      .857
                  Y2           30.000         7     4.286

Between-Subjects SSCP Matrix
                              Y1        Y2
Hypothesis   Intercept   Y1    98.000   168.000
                         Y2   168.000   288.000
             GP          Y1    18.000    24.000
                         Y2    24.000    32.000
Error                    Y1     6.000     8.000
                         Y2     8.000    30.000
Based on Type III Sum of Squares

Note first that the multivariate Fs are identical for Table 4.5 and Table 4.6; thus, significant separation of the group mean vectors is equivalent to significant association between group membership (dummy coded) and the set of dependent variables.

The univariate Fs are also the same for both analyses, although it may not be clear to you why this is so. In traditional ANOVA, the total sum of squares (ss_t) is partitioned as:

ss_t = ss_b + ss_w

whereas in regression analysis the total sum of squares is partitioned as follows:

ss_t = ss_reg + ss_resid

The corresponding F ratios, for determining whether there is significant group separation and for determining whether there is a significant regression, are:

F = \frac{ss_b / df_b}{ss_w / df_w} \quad \text{and} \quad F = \frac{ss_{reg} / df_{reg}}{ss_{resid} / df_{resid}}


To see that these F ratios are equivalent, note that because the predictor variable is

group membership, ssreg is just the amount of variability between groups or ssb, and

ssresid is just the amount of variability not accounted for by group membership, or the

variability of the scores within each group (i.e., ssw).
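A small numpy sketch makes the equivalence concrete for dependent variable 1 of the sample problem (ss_b = ss_reg = 18 and ss_w = ss_resid = 6, so both Fs equal 21); the code is illustrative, not from the chapter:

```python
import numpy as np

# Scores on y1 with dummy-coded group membership (1 = group 1, 0 = group 2)
y = np.array([1, 3, 2, 4, 6, 6, 5, 5, 4], float)
x = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0], float)
grand = y.mean()

# ANOVA partition: ss_t = ss_b + ss_w
ss_b = sum(len(y[x == g]) * (y[x == g].mean() - grand) ** 2 for g in (0, 1))
ss_w = sum(((y[x == g] - y[x == g].mean()) ** 2).sum() for g in (0, 1))
F_anova = (ss_b / 1) / (ss_w / 7)

# Regression partition: ss_t = ss_reg + ss_resid (intercept plus dummy predictor)
X = np.column_stack([np.ones_like(x), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
ss_resid = ((y - X @ b) ** 2).sum()
ss_reg = ((y - grand) ** 2).sum() - ss_resid
F_reg = (ss_reg / 1) / (ss_resid / 7)

print(round(F_anova, 3), round(F_reg, 3))  # 21.0 21.0
```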

The regression output also gives information that was obtained by the commands in Table 4.2 for traditional MANOVA: the squared multiple Rs for each dependent variable (labeled as partial eta square in Table 4.5). Because in this case there is just one predictor, these multiple Rs are just squared Pearson correlations. In particular, they are squared point-biserial correlations, because one of the variables is dichotomous (dummy-coded group membership). The relationship between the point-biserial correlation and the F statistic is given by Welkowitz, Ewen, and Cohen (1982):

r_{pb} = \sqrt{\frac{F}{F + df_w}}, \qquad r_{pb}^2 = \frac{F}{F + df_w}

Thus, for dependent variable 1, we have

r_{pb}^2 = \frac{21}{21 + 7} = .75.

This squared correlation (also known as eta square) has a very meaningful and important interpretation. It tells us that 75% of the variance in the dependent variable is

accounted for by group membership. Thus, we not only have a statistically significant

relationship, as indicated by the F ratio, but in addition, the relationship is very strong.

It should be recalled that it is important to have a measure of strength of relationship

along with a test of significance, as significance resulting from large sample size might

indicate a very weak relationship, and therefore one that may be of little practical

importance.
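The conversion from F to this squared correlation is straightforward to compute directly. A minimal sketch in plain Python (the function name r2_pb is ours, for illustration only):

```python
def r2_pb(F, df_within):
    """Squared point-biserial correlation (eta squared) recovered
    from a one-predictor F statistic: r^2 = F / (F + df_w)."""
    return F / (F + df_within)

# Dependent variable 1 from the example: F = 21 with df_w = 7
print(round(r2_pb(21, 7), 2))  # 0.75
```

Note that with a large df_w, even a clearly significant F can correspond to a tiny r², which is exactly the point about large samples producing significant but weak relationships.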

Various textbook authors have recommended measures of association or strength of

relationship measures (e.g., Cohen & Cohen, 1975; Grissom & Kim, 2012; Hays,

1981). We also believe that they can be useful, but you should be aware that they have

limitations.

For example, simply because a strength-of-relationship measure indicates that, say, only 10% of variance is accounted for does not necessarily imply that the result has no practical importance, as O’Grady (1982) indicated in an excellent review on measures of

association. There are several factors that affect such measures. One very important

factor is context: 10% of variance accounted for in certain research areas may indeed

be practically significant.

TWO-GROUP MANOVA

A good example illustrating this point is provided by Rosenthal and Rosnow (1984).

They consider the comparison of a treatment and control group where the dependent

variable is dichotomous, whether the subjects survive or die. The following table is

presented:

Treatment outcome

          Treatment    Control
Alive        66           34       100
Dead         34           66       100
            100          100

Because both variables are dichotomous, the phi coefficient—a special case of the Pearson correlation for two dichotomous variables (Glass & Hopkins, 1984)—measures the relationship between them:

φ = (34² − 66²) / √[(100)(100)(100)(100)] = −.32,   φ² = .10
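The phi computation for this table can be sketched in a few lines of plain Python; the helper name phi_2x2 is ours, and the sign of phi depends only on how the cells are ordered:

```python
import math

def phi_2x2(a, b, c, d):
    """Phi coefficient for a 2x2 table [[a, b], [c, d]]:
    phi = (ad - bc) / sqrt(row1 * row2 * col1 * col2)."""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Survival table: treatment (66 alive, 34 dead), control (34 alive, 66 dead)
phi = phi_2x2(66, 34, 34, 66)
print(round(phi, 2), round(phi ** 2, 2))  # 0.32 0.1
```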

Thus, even though the treatment-control distinction accounts for “only” 10% of the

variance in the outcome, it increases the survival rate from 34% to 66%—far from

trivial. The same type of interpretation would hold if we considered some less dramatic type of outcome like improvement versus no improvement, where treatment

was a type of psychotherapy. Also, the interpretation is not confined to a dichotomous

outcome measure. Another factor to consider is the design of the study. As O’Grady

(1982) noted:

Thus, true experiments will frequently produce smaller measures of explained

variance than will correlational studies. At the least this implies that consideration

should be given to whether an investigation involves a true experiment or a correlational approach in deciding whether an effect is weak or strong. (p. 771)

Another point to keep in mind is that, because most behaviors have multiple causes,

it will be difficult in these cases to account for a large percent of variance with just a

single cause (say treatments). Still another factor is the homogeneity of the population

sampled. Because measures of association are correlational-type measures, the more

homogeneous the population, the smaller the correlation will tend to be, and therefore the smaller the percent of variance accounted for can potentially be (this is the

restriction-of-range phenomenon).

Finally, we focus on a topic that is important in the planning phase of a study: estimation of power for the overall multivariate test. We start at a basic level, reviewing what

power is, factors affecting power, and reasons that estimation of power is important.

Then the notion of effect size for the univariate t test is given, followed by the multivariate effect size concept for Hotelling’s T².


4.9 POWER ANALYSIS*

Type I error, or the level of significance (α), is familiar to all readers. This is the

probability of rejecting the null hypothesis when it is true, that is, saying the groups

differ when in fact they do not. The α level set by the experimenter is a subjective decision, but is usually set at .05 or .01 by most researchers to minimize the

probability of making this kind of error. There is, however, another type of error

that one can make in conducting a statistical test, and this is called a type II error.

Type II error, denoted by β, is the probability of retaining H0 when it is false, that

is, saying the groups do not differ when they do. Now, not only can either of these

errors occur, but in addition they are inversely related. That is, when we hold effect

and group size constant, reducing our nominal type I error rate increases our type II error

rate. We illustrate this for a two-group problem with a group size of 30 and effect

size d = .5:

α      β      1 − β
.10    .37    .63
.05    .52    .48
.01    .78    .22

Notice that as we control the type I error rate more severely (from .10 to .01), type II

error increases fairly sharply (from .37 to .78), holding sample and effect size constant. Therefore, the problem for the experimental planner is achieving an appropriate

balance between the two types of errors. Although we do not intend to minimize the

seriousness of making a type I error, we hope to convince you that more attention

should be paid to type II error. Now, the quantity in the last column is the power of a

statistical test, which is the probability of rejecting the null hypothesis when it is false.

Thus, power is the probability of making a correct decision when, for example, group

mean differences are present. In the preceding example, if we are willing to take a 10%

chance of rejecting H0 falsely, then we have a 63% chance of finding a difference of a

specified magnitude in the population (here, an effect size of .5 standard deviations).

On the other hand, if we insist on only a 1% chance of rejecting H0 falsely, then we

have only about 2 chances out of 10 of declaring a mean difference is present. This

example with small sample size suggests that in this case it might be prudent to abandon the traditional α levels of .01 or .05 in favor of a more liberal α level to improve power

sharply. Of course, one does not get something for nothing. We are taking a greater

risk of rejecting falsely, but that increased risk is more than balanced by the increase

in power.
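The α/power tradeoff illustrated above is easy to reproduce approximately. The sketch below uses a normal approximation to the power of a two-tailed, two-sample t test, so its values differ slightly from exact t-based tables such as Cohen’s; the function names and the hard-coded critical values are our own:

```python
import math

def norm_cdf(x):
    # Standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Two-tailed critical z values for common alpha levels
Z_CRIT = {0.10: 1.645, 0.05: 1.960, 0.01: 2.576}

def approx_power(d, n_per_group, alpha):
    """Normal approximation to two-sample t test power for
    effect size d with n subjects per group."""
    delta = d * math.sqrt(n_per_group / 2.0)  # noncentrality parameter
    return norm_cdf(delta - Z_CRIT[alpha])

for alpha in (0.10, 0.05, 0.01):
    print(alpha, round(approx_power(0.5, 30, alpha), 2))
```

For d = .5 and n = 30 this gives roughly .61, .49, and .26—close to the exact values of .63, .48, and .22 quoted in the text.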

There are two types of power estimation, a priori and post hoc, and very good

reasons why each of them should be considered seriously. If a researcher is going

* Much of the material in this section is identical to that presented in section 1.2; however, it was believed to be worth repeating in this more extensive discussion of power.


to invest a great amount of time and money in carrying out a study, then he or

she would certainly want to have a 70% or 80% chance (i.e., power of .70 or

.80) of finding a difference if one is there. Thus, the a priori estimation of power

will alert the researcher to how many participants per group will be needed for

adequate power. Later on we consider an example of how this is done in the

multivariate case.

The post hoc estimation of power is important in terms of how one interprets the

results of completed studies. Researchers not sufficiently sensitive to power may interpret nonsignificant results from studies as demonstrating that treatments made no difference. In fact, it may be that treatments did make a difference but that the researchers

had poor power for detecting the difference. The poor power may result from small

sample size or effect size. The following example shows how important an awareness

of power can be. Cronbach and Snow had written a report on aptitude-treatment interaction research, not being fully cognizant of power. By the publication of their text

Aptitudes and Instructional Methods (1977) on the same topic, they acknowledged

the importance of power, stating in the preface, “[we] . . . became aware of the critical relevance of statistical power, and consequently changed our interpretations of

individual studies and sometimes of whole bodies of literature” (p. ix). Why would

they change their interpretation of a whole body of literature? Because, prior to being

sensitive to power when they found most studies in a given body of literature had nonsignificant results, they concluded no effect existed. However, after being sensitized to

power, they took into account the sample sizes in the studies, and also the magnitude

of the effects. If the sample sizes were small in most of the studies with nonsignificant

results, then the lack of significance may be due to poor power. In other words, several low-power studies that report nonsignificant results in the same direction can constitute evidence for an effect.

The power of a statistical test is dependent on three factors:

1. The α level set by the experimenter

2. Sample size

3. Effect size—How much of a difference the treatments make, or the extent to which

the groups differ in the population on the dependent variable(s).

For the univariate independent samples t test, Cohen (1988) defined the population effect size, as we used earlier, as d = (μ1 − μ2)/σ, where σ is the assumed common population standard deviation. Thus, in this situation, the effect size measure simply indicates how many standard deviation units separate the group means.
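The sample estimate of d can be computed directly from raw scores. A minimal sketch (the two samples are made up for illustration and are not from the text):

```python
import math

def cohens_d(sample1, sample2):
    """d-hat = (mean1 - mean2) / s_pooled, where s_pooled is the
    pooled within-group standard deviation."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    ss1 = sum((x - m1) ** 2 for x in sample1)
    ss2 = sum((x - m2) ** 2 for x in sample2)
    s_pooled = math.sqrt((ss1 + ss2) / (n1 + n2 - 2))
    return (m1 - m2) / s_pooled

# Hypothetical data, for illustration only
g1 = [10, 12, 11, 14, 13]
g2 = [9, 10, 8, 11, 12]
print(round(cohens_d(g1, g2), 2))  # 1.26
```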

Power is heavily dependent on sample size. Consider a two-tailed test at the .05 level

for the t test for independent samples. Suppose we have an effect size of .5 standard deviations. The next table shows how power changes dramatically as sample size

increases.

n (Subjects per group)    Power
10                        .18
20                        .33
50                        .70
100                       .94

As this example suggests, when sample size is large (say 100 or more subjects per

group) power is not an issue. It is when you are conducting a study where group sizes

are small (n ≤ 20), or when you are evaluating a completed study that had a small

group size, that it is imperative to be very sensitive to the possibility of poor power (or

equivalently, a type II error).

We have indicated that power is also influenced by effect size. For the t test, Cohen

(1988) suggested as a rough guide that an effect size around .20 is small, an effect size

around .50 is medium, and an effect size > .80 is large. The difference in the mean IQs

between PhDs and the typical college freshmen is an example of a large effect size

(about .8 of a standard deviation).

Cohen and many others have noted that small and medium effect sizes are very common in social science research. Light and Pillemer (1984) commented on the fact that

most evaluations find small effects in reviews of the literature on programs of various

types (social, educational, etc.): “Review after review confirms it and drives it home.

Its importance comes from having managers understand that they should not expect

large, positive findings to emerge routinely from a single study of a new program”

(pp. 153–154). Results from Becker (1987) of effect sizes for three sets of studies (on

teacher expectancy, desegregation, and gender influenceability) showed only three large

effect sizes out of 40. Also, Light, Singer, and Willett (1990) noted that “meta-analyses

often reveal a sobering fact: Effect sizes are not nearly as large as we all might hope”

(p. 195). To illustrate, they present average effect sizes from six meta-analyses in different areas that yielded .13, .25, .27, .38, .43, and .49—all in the small to medium range.

4.10 WAYS OF IMPROVING POWER

Given how poor power generally is with fewer than 20 subjects per group, the following four methods of improving power should be seriously considered:

1. Adopt a more lenient α level, perhaps α = .10 or α = .15.

2. Use one-tailed tests where the literature supports a directional hypothesis. This

option is not available for the multivariate tests because they are inherently

two-tailed.

3. Consider ways of reducing within-group variability, so that one has a more sensitive design. One way is through sample selection; more homogeneous subjects

tend to vary less on the dependent variable(s). For example, use just males, rather


than males and females, or use only 6- and 7-year-old children rather than 6- through 9-year-old children. A second way is through the use of factorial designs, which we consider in Chapter 7. A third way of reducing within-group variability is through the use of analysis of covariance, which we consider in Chapter 8.

Covariates that have low correlations with each other are particularly helpful

because then each is removing a somewhat different part of the within-group

(error) variance. A fourth means is through the use of repeated-measures designs. These designs are particularly helpful because all variability due to the average response of subjects is removed from the error term, and individual differences are the main reason for within-group variability.

4. Make sure there is a strong linkage between the treatments and the dependent

variable(s), and that the treatments extend over a long enough period of time to

produce a large—or at least fairly large—effect size.

Using these methods in combination can make a considerable difference in effective

power. To illustrate, we consider a two-group situation with 18 participants per group

and one dependent variable. Suppose a two-tailed test was done at the .05 level, and

that the obtained effect size was

d̂ = (x̄1 − x̄2)/s = (8 − 4)/10 = .40,

where s is the pooled within-group standard deviation. Then, from Cohen (1988), power = .21, which is very poor.

Now, suppose that through the use of two good covariates we are able to reduce pooled within-group variability (s²) by 60%, from 100 (as earlier) to 40. This is a realistic possibility in practice. Then our new estimated effect size would be d̂ ≈ 4/√40 = .63. Suppose in addition that a one-tailed test was really appropriate, and that we also take a somewhat greater risk of a type I error, i.e., α = .10. Then, our new estimated power changes dramatically to .69 (Cohen, 1988).
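The arithmetic of this covariate example is simple to reproduce (this is a sketch of the effect-size calculation only; the power values themselves come from Cohen’s tables):

```python
import math

# From the text: mean difference of 4 with pooled within variance 100
mean_diff = 4.0
s2 = 100.0
d_before = mean_diff / math.sqrt(s2)

# Two good covariates reduce within-group variance by 60% (100 -> 40)
s2_reduced = s2 * (1 - 0.60)
d_after = mean_diff / math.sqrt(s2_reduced)

print(round(d_before, 2), round(d_after, 2))  # 0.4 0.63
```

Because d has the standard deviation (not the variance) in its denominator, cutting within-group variance by 60% inflates d by a factor of 1/√.40 ≈ 1.58.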

Before leaving this section, it needs to be emphasized that how far one “pushes” the

power issue depends on the consequences of making a type I error. We give three

examples to illustrate. First, suppose that in a medical study examining the safety of a

drug we have the following null and alternative hypotheses:

H0 : The drug is unsafe.

H1 : The drug is safe.

Here making a type I error (rejecting H0 when true) is concluding that the drug is safe when in fact it is unsafe. This is a situation where we would want a type I error to be very small, because making a type I error could harm or possibly kill some people.

As a second example, suppose we are comparing two teaching methods, where method

A is several times more expensive than method B to implement. If we conclude that


method A is more effective (when in fact it is not), this will be a very costly mistake

for a school district.

Finally, a classic example of the relative consequences of type I and type II errors can

be taken from our judicial system, under which a defendant is innocent until proven

guilty. Thus, we could formulate the following null and alternative hypotheses:

H0 : The defendant is innocent.

H1 : The defendant is guilty.

If we make a type I error, we conclude that the defendant is guilty when actually innocent. Concluding that the defendant is innocent when actually guilty is a type II error. Most would probably agree that the type I error is by far the more serious here, and thus we would want a type I error to be very small.

4.11 A PRIORI POWER ESTIMATION FOR A TWO-GROUP MANOVA

Stevens (1980) discussed estimation of power in MANOVA at some length, and in

what follows we borrow heavily from his work. Next, we present the univariate and

multivariate measures of effect size for the two-group problem. Recall that the univariate measure was presented earlier.

Measures of effect size

Univariate:

d = (μ1 − μ2)/σ        d̂ = (ȳ1 − ȳ2)/s

Multivariate:

D² = (μ1 − μ2)′Σ⁻¹(μ1 − μ2)        D̂² = (ȳ1 − ȳ2)′S⁻¹(ȳ1 − ȳ2)

In each pair, the first expression is the population measure and the second is the sample estimate used for your study. Notice that the multivariate measure D̂² is Hotelling’s T² without the sample sizes (see Equation 2); that is, it is a measure of separation of the groups that is independent of sample size. D² is called in the literature the Mahalanobis distance. Note also that the multivariate measure D̂² is a natural squared generalization of the univariate measure d, where the means have been replaced by mean vectors and s (the standard deviation) has been replaced by its squared multivariate generalization of within-group variability, the sample covariance matrix S.

Table 4.7 from Stevens (1980) provides power values for two-group MANOVA for

two through seven variables, with group size varying from small (15) to large (100),


and with effect size varying from small (D² = .25) to very large (D² = 2.25). Earlier, we indicated that small or moderate group and effect sizes produce inadequate power for the univariate t test. Inspection of Table 4.7 shows that a similar situation exists for MANOVA. The following from Stevens (1980) provides a summary of the results in Table 4.7:

For values of D² ≤ .64 and n ≤ 25, . . . power is generally poor (< .45) and never really adequate (i.e., > .70) for α = .05. Adequate power (at α = .10) for two through seven variables at a moderate overall effect size of .64 would require about 30 subjects per group. When the overall effect size is large (D ≥ 1), then 15 or more subjects per group is sufficient to yield power values ≥ .60 for two through seven variables at α = .10. (p. 731)

In section 4.11.2, we show how you can use Table 4.7 to estimate the sample size

needed for a simple two-group MANOVA, but first we show how this table can be used

to estimate post hoc power.

Table 4.7: Power of Hotelling’s T² at α = .05 and .10 for Small Through Large Overall Effect and Group Sizes

                            D²**
Number of
variables    n*     .25        .64        1          2.25
2            15     26 (32)    44 (60)    65 (77)    95***
2            25     33 (47)    66 (80)    86         97
2            50     60 (77)    95         1          1
2            100    90         1          1          1
3            15     23 (29)    37 (55)    58 (72)    91
3            25     28 (41)    58 (74)    80         95
3            50     54 (65)    93 (98)    1          1
3            100    86         1          1          1
5            15     21 (25)    32 (47)    42 (66)    83
5            25     26 (35)    42 (68)    72         96
5            50     44 (59)    88         1          1
5            100    78         1          1          1
7            15     18 (22)    27 (42)    37 (59)    77
7            25     22 (31)    38 (62)    64 (81)    94
7            50     40 (52)    82         97         1
7            100    72         1          1          1

Note: Power values at α = .10 are in parentheses.
* Equal group sizes are assumed.
** D² = (μ1 − μ2)′Σ⁻¹(μ1 − μ2)
*** Decimal points have been omitted. Thus, 95 means a power of .95. Also, a value of 1 means the power is approximately equal to 1.


4.11.1 Post Hoc Estimation of Power

Suppose you wish to evaluate the power of a two-group MANOVA reported in a journal in your content area. Here, Table 4.7 can be used, assuming the number of dependent variables in the study is between two and seven. Actually, with a slight amount of extrapolation, the table will yield a reasonable approximation for eight or nine variables. For example, for D² = .64, five variables, and n = 25, power = .42 at the .05 level. For the same situation, but with seven variables, power = .38. Therefore, a reasonable estimate of power for nine variables is about .34.

Now, to use Table 4.7, the value of D² is needed, and this almost certainly will not be reported. Very probably then, a couple of steps will be required to obtain D². The investigator(s) will probably report the multivariate F. From this, one obtains T² by reexpressing Equation 3, which we illustrate in Example 4.2. Then, D² is obtained using Equation 2. Because the right-hand side of Equation 2 without the sample sizes is D², it follows that T² = [n1n2/(n1 + n2)]D², or D² = [(n1 + n2)/(n1n2)]T².

We now consider two examples to illustrate how to use Table 4.7 to estimate power for studies in the literature when (1) the number of dependent variables is not explicitly given in Table 4.7, and (2) the group sizes are not equal.

Example 4.2

Consider a two-group study in the literature with 25 participants per group that used four dependent variables and reports a multivariate F = 2.81. What is the estimated power at the .05 level? First, we convert F to the corresponding T² value:

F = [(N − p − 1)/((N − 2)p)]T²   or   T² = (N − 2)pF/(N − p − 1)

Thus, T² = 48(4)(2.81)/45 = 11.99. Now, because D² = NT²/(n1n2), we have D² = 50(11.99)/625 = .96. This is a large multivariate effect size. Table 4.7 does not have power for four variables, but we can interpolate between three and five variables to approximate power. Using D² = 1 in the table we find that:

Number of variables    n     D² = 1
3                      25    .80
5                      25    .72

Thus, a good approximation to power is .76, which is adequate power for a large effect

size. Here, as in univariate analysis, with a large effect size, not many participants are

needed per group to have adequate power.
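The two conversions used in Example 4.2 are easy to package as small helper functions (the function names are ours):

```python
def t2_from_f(F, N, p):
    """Hotelling's T^2 recovered from a reported multivariate F:
    T^2 = (N - 2) * p * F / (N - p - 1)."""
    return (N - 2) * p * F / (N - p - 1)

def d2_from_t2(T2, n1, n2):
    """Mahalanobis D^2 from T^2: D^2 = [(n1 + n2)/(n1 * n2)] * T^2."""
    return (n1 + n2) * T2 / (n1 * n2)

# Example 4.2: 25 per group, four dependent variables, F = 2.81
T2 = t2_from_f(2.81, N=50, p=4)
D2 = d2_from_t2(T2, 25, 25)
print(round(T2, 2), round(D2, 2))  # 11.99 0.96
```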

Example 4.3

Now consider an article in the literature that is a two-group MANOVA with five

dependent variables, having 22 participants in one group and 32 in the other. The


investigators obtain a multivariate F = 1.61, which is not significant at the .05 level (critical value = 2.42). Calculate power at the .05 level and comment on the size of the multivariate effect measure. Here the number of dependent variables (five) is given in the table, but the group sizes are unequal. Following Cohen (1988), we use the harmonic mean as the n with which to enter the table. The harmonic mean for two groups is ñ = 2n1n2/(n1 + n2). Thus, for this case we have ñ = 2(22)(32)/54 = 26.07. Now, to get D² we first obtain T²:

T² = (N − 2)pF/(N − p − 1) = 52(5)(1.61)/48 = 8.72

Now, D² = NT²/(n1n2) = 54(8.72)/[22(32)] = .67. Using n = 25 and D² = .64 to enter Table 4.7, we see that power = .42. Actually, power is slightly greater than .42 because n = 26 and D² = .67, but it would still not reach even .50. Thus, given this effect size, power is definitely inadequate here, even though a medium multivariate effect size was obtained in the sample that may be practically important.
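Example 4.3 can be reproduced the same way, adding the harmonic mean for the unequal group sizes (function names are ours):

```python
def harmonic_mean_n(n1, n2):
    """Harmonic mean of two group sizes (Cohen, 1988), used as the
    n with which to enter the power table when groups are unequal."""
    return 2 * n1 * n2 / (n1 + n2)

def t2_from_f(F, N, p):
    """T^2 = (N - 2) * p * F / (N - p - 1)."""
    return (N - 2) * p * F / (N - p - 1)

# Example 4.3: n1 = 22, n2 = 32, five dependent variables, F = 1.61
n1, n2 = 22, 32
n_tilde = harmonic_mean_n(n1, n2)
T2 = t2_from_f(1.61, N=n1 + n2, p=5)
D2 = (n1 + n2) * T2 / (n1 * n2)
print(round(n_tilde, 2), round(T2, 2), round(D2, 2))  # 26.07 8.72 0.67
```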

4.11.2 A Priori Estimation of Sample Size

Suppose that from a pilot study or from a previous study that used the same kind of

participants, an investigator had obtained the following pooled within-group covariance matrix for three variables:

      16    6    1.6
S =    6    9     .9
     1.6   .9    1

Recall that the elements on the main diagonal of S are the variances for the variables:

16 is the variance for variable 1, and so on.

To complete the estimate of D², the difference in the mean vectors must be estimated;

this amounts to estimating the mean difference expected for each variable. Suppose

that on the basis of previous literature, the investigator hypothesizes that the mean differences on variables 1 and 2 will be 2 and 1.5. Thus, they will correspond to moderate

effect sizes of .5 standard deviations. Why? (Use the variances on the within-group

covariance matrix to check this.) The investigator further expects the mean difference

on variable 3 will be .2, that is, .2 of a standard deviation, or a small effect size. What

is the minimum number of participants needed, at αÂ€=Â€.10, to have a power of .70 for

the test of the multivariate null hypothesis?

To answer this question we first need to estimate D²:

                      .0917   −.0511   −.1008      2.0
D̂² = (2, 1.5, .2)    −.0511    .1505   −.0538      1.5    = .3347
                     −.1008   −.0538   1.2100       .2


The middle matrix is the inverse of S. Because moderate and small univariate effect sizes produced this D̂² value of .3347, such a numerical value for D² would probably occur fairly frequently in social science research. To determine the n required for power = .70, we enter Table 4.7 for three variables and use the values in parentheses. For n = 50 and three variables, note that power = .65 for D² = .25 and power = .98 for D² = .64. Therefore, interpolating linearly, we have

Power(D² = .33) ≈ Power(D² = .25) + [.08/.39](.98 − .65) = .65 + .07 = .72.

Thus, about 50 participants per group are needed.
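Both the D̂² computation and the interpolation in Table 4.7 can be checked numerically. The sketch below solves S x = d rather than inverting S explicitly; the helper names are ours:

```python
def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Gaussian elimination
    with partial pivoting."""
    n = 3
    M = [A[i][:] + [b[i]] for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# Pooled within-group covariance matrix and hypothesized mean differences
S = [[16.0, 6.0, 1.6],
     [6.0, 9.0, 0.9],
     [1.6, 0.9, 1.0]]
d = [2.0, 1.5, 0.2]

# D^2 = d' S^{-1} d, obtained from the solution of S x = d
x = solve3(S, d)
D2 = sum(di * xi for di, xi in zip(d, x))
print(round(D2, 4))  # 0.3347

# Linear interpolation in Table 4.7 (three variables, n = 50, alpha = .10):
# power = .65 at D^2 = .25 and .98 at D^2 = .64
power = 0.65 + (D2 - 0.25) / (0.64 - 0.25) * (0.98 - 0.65)
print(round(power, 2))  # 0.72
```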

4.12 SUMMARY

In this chapter we have considered the statistical analysis of two groups on several

dependent variables simultaneously. Among the reasons for preferring a MANOVA

over separate univariate analyses were (1) MANOVA takes into account important

information, that is, the intercorrelations among the variables, (2) MANOVA keeps the

overall α level under control, and (3) MANOVA has greater sensitivity for detecting

differences in certain situations. It was shown how the multivariate test (Hotelling’s T²) arises naturally from the univariate t by replacing the means with mean vectors and by replacing the pooled within-variance by the covariance matrix. An example indicated the numerical details associated with calculating T².

Three post hoc procedures for determining which of the variables contributed to the

overall multivariate significance were considered. The Roy–Bose simultaneous confidence interval approach cannot be recommended because it is extremely conservative, and hence has poor power for detecting differences. The Bonferroni approach

of testing each variable at the α/p level of significance is generally recommended,

especially if the number of variables is not too large. Another approach we considered that does not use any alpha adjustment for the post hoc tests is potentially problematic because the overall type I error rate can become unacceptably high as the number of dependent variables increases. As such, we recommend this unadjusted t test procedure only for analyses having two or three dependent variables. This relatively

small number of variables in the analysis may arise in designs where you have collected just that number of outcomes or when you have a larger set of outcomes but

where you have firm support for expecting group mean differences for two or three

dependent variables.

Group membership for a sample problem was dummy coded, and it was run as a

regression analysis. This yielded the same multivariate and univariate results as

when the problem was run as a traditional MANOVA. This was done to show that

MANOVA is a special case of regression analysis, that is, of the general linear model.

In this context, we also discussed the effect size measure R² (equivalent to eta square

and partial eta square for the one-factor design). We advised against concluding


that a result is of little practical importance simply because the R² value is small

(say .10). Several reasons were given for this, one of the most important being context. Thus, 10% variance accounted for in some research areas may indeed be of

practical importance.

Power analysis was considered in some detail. It was noted that small and medium

effect sizes are very common in social science research. The Mahalanobis D² was presented as a two-group multivariate effect size measure, with the following guidelines for interpretation: D² = .25 small effect, D² = .50 medium effect, and D² > 1 large effect. We showed how you can compute D² using data from a previous study to determine a priori the sample size needed for a two-group MANOVA, using a table from

Stevens (1980).

4.13 EXERCISES

1. Which of the following are multivariate studies, that is, involve several correlated dependent variables?

(a) An investigator classifies high school freshmen by sex, socioeconomic status, and teaching method, and then compares them on total test score on the Lankton algebra test.

(b) A treatment and control group are compared on measures of reading speed and reading comprehension.

(c) An investigator is predicting success on the job from high school GPA and a battery of personality variables.

2. An investigator has a 50-item scale and wishes to compare two groups of participants on the item scores. He has heard about MANOVA, and realizes that

the items will be correlated. Therefore, he decides to do a two-group MANOVA

with each item serving as a dependent variable. The scale is administered to 45

participants, and the investigator attempts to conduct the analysis. However,

the computer software aborts the analysis. Why? What might the investigator

consider doing before running the analysis?

3. Suppose you come across a journal article where the investigators have a

three-way design and five correlated dependent variables. They report the

results in five tables, having done a univariate analysis on each of the five

variables. They find four significant results at the .05 level. Would you be

impressed with these results? Why or why not? Would you have more confidence if the significant results had been hypothesized a priori? What else could

they have done that would have given you more confidence in their significant

results?

4. Consider the following data for a two-group, two-dependent-variable

problem:

     T1            T2
  y1     y2     y1     y2
  1      2      4      6
  2      9      5      8
  3      3      4      6
  5      4      5      7

(a) Compute W, the pooled within-SSCP matrix.
(b) Find the pooled within-covariance matrix, and indicate what each of the elements in the matrix represents.
(c) Find Hotelling’s T².
(d) What is the multivariate null hypothesis in symbolic form?
(e) Test the null hypothesis at the .05 level. What is your decision?

5. An investigator has an estimate of D² = .61 from a previous study that used the same four dependent variables on a similar group of participants. How many subjects per group are needed to have power = .70 at α = .10?

6. From a pilot study, a researcher has the following pooled within-covariance

matrix for two variables:

      8.6    10.4
S =
     10.4    21.3

From previous research a moderate effect size of .5 standard deviations on

variable 1 and a small effect size of 1/3 standard deviations on variable 2 are

anticipated. For the researcher’s main study, how many participants per group

are needed for power = .70 at the .05 level? At the .10 level?

7. Ambrose (1985) compared elementary school children who received instruction on the clarinet via programmed instruction (experimental group) versus

those who received instruction via traditional classroom instruction on the

following six performance aspects: interpretation (interp), tone, rhythm, intonation (inton), tempo (tem), and articulation (artic). The data, representing the

average of two judges’ ratings, are listed here, with GPID = 1 referring to the experimental group and GPID = 2 referring to the control group:

(a) Run the two-group MANOVA on these data using SAS or SPSS. Is the multivariate null hypothesis rejected at the .05 level?
(b) What is the value of the Mahalanobis D²? How would you characterize the magnitude of this effect size? Given this, is it surprising that the null hypothesis was rejected?
(c) Setting overall α = .05 and using the Bonferroni inequality approach, which of the individual variables are significant, and hence contributing to the overall multivariate significance?


GP    INT    TONE    RHY    INTON    TEM    ARTIC
1     4.2    4.1     3.2    4.2      2.8    3.5
1     4.1    4.1     3.7    3.9      3.1    3.2
1     4.9    4.7     4.7    5.0      2.9    4.5
1     4.4    4.1     4.1    3.5      2.8    4.0
1     3.7    2.0     2.4    3.4      2.8    2.3
1     3.9    3.2     2.7    3.1      2.7    3.6
1     3.8    3.5     3.4    4.0      2.7    3.2
1     4.2    4.1     4.1    4.2      3.7    2.8
1     3.6    3.8     4.2    3.4      4.2    3.0
1     2.6    3.2     1.9    3.5      3.7    3.1
1     3.0    2.5     2.9    3.2      3.3    3.1
1     2.9    3.3     3.5    3.1      3.6    3.4
2     2.1    1.8     1.7    1.7      2.8    1.5
2     4.8    4.0     3.5    1.8      3.1    2.2
2     4.2    2.9     4.0    1.8      3.1    2.2
2     3.7    1.9     1.7    1.6      3.1    1.6
2     3.7    2.1     2.2    3.1      2.8    1.7
2     3.8    2.1     3.0    3.3      3.0    1.7
2     2.1    2.0     2.2    1.8      2.6    1.5
2     2.2    1.9     2.2    3.4      4.2    2.7
2     3.3    3.6     2.3    4.3      4.0    3.8
2     2.6    1.5     1.3    2.5      3.5    1.9
2     2.5    1.7     1.7    2.8      3.3    3.1

8. We consider the Pope, Lehrer, and Stevens (1980) data. Children in kindergarten were measured on various instruments to determine whether they could be classified as low risk or high risk with respect to having reading problems later on in school. The variables considered are word identification (WI), word comprehension (WC), and passage comprehension (PC).

     GP     WI      WC      PC
 1   1.00    5.80    9.70    8.90
 2   1.00   10.60   10.90   11.00
 3   1.00    8.60    7.20    8.70
 4   1.00    4.80    4.60    6.20
 5   1.00    8.30   10.60    7.80
 6   1.00    4.60    3.30    4.70
 7   1.00    4.80    3.70    6.40
 8   1.00    6.70    6.00    7.20
 9   1.00    6.90    9.70    7.20
10   1.00    5.60    4.10    4.30
11   1.00    4.80    3.80    5.30
12   1.00    2.90    3.70    4.20
13   2.00    2.40    2.10    2.40
14   2.00    3.50    1.80    3.90
15   2.00    6.70    3.60    5.90
16   2.00    5.30    3.30    6.10
17   2.00    5.20    4.10    6.40
18   2.00    3.20    2.70    4.00
19   2.00    4.50    4.90    5.70
20   2.00    3.90    4.70    4.70
21   2.00    4.00    3.60    2.90
22   2.00    5.70    5.50    6.20
23   2.00    2.40    2.90    3.20
24   2.00    2.70    2.60    4.10

(a) Run the two-group MANOVA using computer software. Is the multivariate test significant at the .05 level?
(b) Are any of the univariate Fs significant at the .05 level?

9. The correlations among the dependent variables are embedded in the covariance matrix S. Why is this true?

REFERENCES

Ambrose, A. (1985). The development and experimental application of programmed materials for teaching clarinet performance skills in college woodwind techniques courses. Unpublished doctoral dissertation, University of Cincinnati, OH.
Becker, B. (1987). Applying tests of combined significance in meta-analysis. Psychological Bulletin, 102, 164–171.
Bock, R. D. (1975). Multivariate statistical methods in behavioral research. New York, NY: McGraw-Hill.
Cohen, J. (1968). Multiple regression as a general data-analytic system. Psychological Bulletin, 70, 426–443.
Cohen, J. (1988). Statistical power analysis for the social sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Cohen, J., & Cohen, P. (1975). Applied multiple regression/correlation analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum.
Cronbach, L., & Snow, R. (1977). Aptitudes and instructional methods: A handbook for research on interactions. New York, NY: Irvington.
Glass, G. C., & Hopkins, K. (1984). Statistical methods in education and psychology. Englewood Cliffs, NJ: Prentice-Hall.


Grissom, R. J., & Kim, J. J. (2012). Effect sizes for research: Univariate and multivariate applications (2nd ed.). New York, NY: Routledge.
Hays, W. L. (1981). Statistics (3rd ed.). New York, NY: Holt, Rinehart & Winston.
Hotelling, H. (1931). The generalization of Student's ratio. Annals of Mathematical Statistics, 2(3), 360–378.
Hummel, T. J., & Sligo, J. (1971). Empirical comparison of univariate and multivariate analysis of variance procedures. Psychological Bulletin, 76, 49–57.
Johnson, N., & Wichern, D. (1982). Applied multivariate statistical analysis. Englewood Cliffs, NJ: Prentice Hall.
Light, R., & Pillemer, D. (1984). Summing up: The science of reviewing research. Cambridge, MA: Harvard University Press.
Light, R., Singer, J., & Willett, J. (1990). By design. Cambridge, MA: Harvard University Press.
Morrison, D. F. (1976). Multivariate statistical methods. New York, NY: McGraw-Hill.
O'Grady, K. (1982). Measures of explained variation: Cautions and limitations. Psychological Bulletin, 92, 766–777.
Pope, J., Lehrer, B., & Stevens, J. P. (1980). A multiphasic reading screening procedure. Journal of Learning Disabilities, 13, 98–102.
Rosenthal, R., & Rosnow, R. (1984). Essentials of behavioral research. New York, NY: McGraw-Hill.
Stevens, J. P. (1980). Power of the multivariate analysis of variance tests. Psychological Bulletin, 88, 728–737.
Timm, N. H. (1975). Multivariate analysis with applications in education and psychology. Monterey, CA: Brooks-Cole.
Welkowitz, J., Ewen, R. B., & Cohen, J. (1982). Introductory statistics for the behavioral sciences. New York, NY: Academic Press.

Chapter 5

K-GROUP MANOVA

A Priori and Post Hoc Procedures

5.1 INTRODUCTION

In this chapter we consider the case where more than two groups of participants are being compared on several dependent variables simultaneously. We first briefly show how the MANOVA can be done within the regression model by dummy-coding group membership for a small sample problem and using it as a nominal predictor. In doing this, we build on the multivariate regression analysis of two-group MANOVA that was presented in the last chapter. (Note that section 5.2 can be skipped if you prefer a traditional presentation of MANOVA.) Then we consider traditional multivariate analysis of variance, or MANOVA, introducing the most familiar multivariate test statistic, Wilks' Λ. Two fairly similar post hoc procedures for examining group differences on the dependent variables are discussed next. Each procedure employs univariate ANOVAs for each outcome and applies the Tukey procedure for pairwise comparisons. The procedures differ in that one provides stricter type I error control and better confidence interval coverage, while the other seeks to strike a balance between type I error and power. This latter approach is most suitable for designs having a small number of outcomes and groups (i.e., 2 or 3).

Next, we consider a different approach to the k-group problem, that of using planned comparisons rather than an omnibus F test. Hays (1981) gave an excellent discussion of this approach for univariate ANOVA. Our discussion of multivariate planned comparisons is extensive and is made quite concrete through the use of several examples, including two studies from the literature. The setup of multivariate contrasts in SPSS MANOVA is illustrated and selected output is discussed.

We then consider the important problem of a priori determination of sample size for 3-, 4-, 5-, and 6-group MANOVA with the number of dependent variables ranging from 2 to 15, using extensive tables developed by Lauter (1978). Finally, the chapter concludes with a discussion of some considerations that generally mitigate against the use of a large number of criterion variables in MANOVA.


5.2 MULTIVARIATE REGRESSION ANALYSIS FOR A SAMPLE PROBLEM

In the previous chapter we indicated how analysis of variance can be incorporated within the regression model by dummy-coding group membership and using it as a nominal predictor. For the two-group case, just one dummy variable (predictor) was needed, which took on a value of 1 for participants in group 1 and 0 for participants in the other group. For our three-group example, we need two dummy variables (predictors) to identify group membership. The first dummy variable (x1) is 1 for all subjects in Group 1 and 0 for all other subjects. The other dummy variable (x2) is 1 for all subjects in Group 2 and 0 for all other subjects. A third dummy variable is not needed because the participants in Group 3 are identified by 0s on both x1 and x2; that is, they are in neither Group 1 nor Group 2, so by default they must be in Group 3. In general, for k groups, the number of dummy variables needed is (k − 1), corresponding to the between degrees of freedom.
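The dummy-coding rule just described is easy to sketch in code. The following is an illustrative reconstruction (not from the text); the group vector mirrors the 12-participant sample problem below, and numpy is assumed:

```python
import numpy as np

# Group membership for the 12 participants in the three-group sample problem
gps = np.array([1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3])

# k - 1 = 2 dummy predictors: x1 flags Group 1, x2 flags Group 2.
# Group 3 participants get 0 on both dummies, identifying them by default.
x1 = (gps == 1).astype(int)
x2 = (gps == 2).astype(int)
X = np.column_stack([x1, x2])
print(X.sum(axis=0))  # [4 3]: four participants in Group 1, three in Group 2
```

Regressing the two dependent variables on these two columns reproduces the MANOVA, as the chapter goes on to show.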

The data for our two-dependent-variable, three-group problem are presented here:

y1   y2   x1   x2
 2    3    1    0    Group 1
 3    4    1    0
 5    4    1    0
 2    5    1    0
 4    8    0    1    Group 2
 5    6    0    1
 6    7    0    1
 7    6    0    0    Group 3
 8    7    0    0
10    8    0    0
 9    5    0    0
 7    6    0    0

Thus, cast in a regression mold, we are relating two sets of variables, the two dependent variables, and the two predictors (dummy variables). The regression analysis will

then determine how much of the variance on the dependent variables is accounted for

by the predictors, that is, by group membership.

In Table 5.1 we present the control lines for running the sample problem as a multivariate regression on SPSS MANOVA, and the lines for running the problem as a traditional MANOVA (using GLM). By running both analyses, you can verify that the multivariate Fs for the regression analysis are identical to those obtained from the MANOVA run.


Table 5.1: SPSS Syntax for Running Sample Problem as Multivariate Regression and as MANOVA

(1) Multivariate regression run:

TITLE 'THREE GROUP MANOVA RUN AS MULTIVARIATE REGRESSION'.
DATA LIST FREE/x1 x2 y1 y2.
BEGIN DATA.
1 0 2 3
1 0 3 4
1 0 5 4
1 0 2 5
0 1 4 8
0 1 5 6
0 1 6 7
0 0 7 6
0 0 8 7
0 0 10 8
0 0 9 5
0 0 7 6
END DATA.
LIST.
MANOVA y1 y2 WITH x1 x2.

(2) Traditional MANOVA run:

TITLE 'MANOVA RUN ON SAMPLE PROBLEM'.
DATA LIST FREE/gps y1 y2.
BEGIN DATA.
1 2 3
1 3 4
1 5 4
1 2 5
2 4 8
2 5 6
2 6 7
3 7 6
3 8 7
3 10 8
3 9 5
3 7 6
END DATA.
LIST.
GLM y1 y2 BY gps
 /PRINT=DESCRIPTIVE
 /DESIGN=gps.

(1) The first two columns of data are for the dummy variables x1 and x2, which identify group membership (cf. the data display in section 5.2).
(2) The first column of data identifies group membership; again, compare the data display in section 5.2.

5.3 TRADITIONAL MULTIVARIATE ANALYSIS OF VARIANCE

In the k-group MANOVA case we are comparing the groups on p dependent variables simultaneously. For the univariate case, the null hypothesis is:

H0: µ1 = µ2 = ··· = µk (population means are equal)

whereas for MANOVA the null hypothesis is

H0: µ1 = µ2 = ··· = µk (population mean vectors are equal)

For univariate analysis of variance the F statistic (F = MSb / MSw) is used for testing the tenability of H0. What statistic do we use for testing the multivariate null hypothesis? There is no single answer, as several test statistics are available. The one that is most widely known is Wilks' Λ, where Λ is given by:

Λ = |W| / |T| = |W| / |B + W|, where 0 ≤ Λ ≤ 1


|W| and |T| are the determinants of the within-group and total sum of squares and cross-products matrices. W has already been defined for the two-group case, where the observations in each group are deviated about the individual group means. Thus W is a measure of within-group variability and is a multivariate generalization of the univariate sum of squares within (SSw). In T the observations in each group are deviated about the grand mean for each variable. B is the between-group sum of squares and cross-products matrix, and is the multivariate generalization of the univariate sum of squares between (SSb). Thus, B is a measure of how differential the effect of treatments has been on the set of dependent variables. We define the elements of B shortly. We need matrices to define within, between, and total variability in the multivariate case because there is variability on each variable (these variabilities appear on the main diagonals of the W, B, and T matrices) as well as covariability for each pair of variables (these are the off-diagonal elements of the matrices).

Because Wilks' Λ is defined in terms of the determinants of W and T, it is important to recall from the matrix algebra chapter (Chapter 2) that the determinant of a covariance matrix is called the generalized variance for a set of variables. Now, because W and T differ from their corresponding covariance matrices only by a scalar, we can think of |W| and |T| in the same basic way. Thus, the determinant neatly characterizes within and total variability in terms of single numbers. It may also be helpful to recall that the generalized variance may be thought of as the variation in a set of outcomes that is unique to the set, that is, the variance that is not shared by the variables in the set. Also, for one variable, variance indicates how much scatter there is about the mean on a line, that is, in one dimension. For two variables, each participant's scores on the variables define a point in the plane, and thus generalized variance indicates how much the points (participants) scatter in the plane in two dimensions. For three variables, the scores for the participants define points in three-dimensional space, and hence generalized variance shows how much the subjects scatter (vary) in three dimensions. An excellent extended discussion of generalized variance for the more mathematically inclined is provided in Johnson and Wichern (1982, pp. 103–112).

For univariate ANOVA you may recall that

SSt = SSb + SSw,

where SSt is the total sum of squares. For MANOVA the corresponding matrix analogue holds:

T = B + W
(Total SSCP matrix = Between SSCP matrix + Within SSCP matrix)

Notice that Wilks' Λ is an inverse criterion: the smaller the value of Λ, the more evidence for treatment effects (between-group association). If there were no treatment effect, then B = 0 and Λ = |W| / |0 + W| = 1, whereas if B were very large relative to W, then Λ would approach 0.

The sampling distribution of Λ is somewhat complicated, and generally an approximation is necessary. Two approximations are available: (1) Bartlett's χ² and (2) Rao's F. Bartlett's χ² is given by:

χ² = −[(N − 1) − .5(p + k)] ln Λ, with p(k − 1) df,

where N is total sample size, p is the number of dependent variables, and k is the number of groups. Bartlett's χ² is a good approximation for moderate to large sample sizes. For smaller sample sizes, Rao's F is a better approximation (Lohnes, 1961), although generally the two statistics will lead to the same decision on H0. The multivariate F given by SPSS is the Rao F. The formula for Rao's F is complicated and is presented later. We point out now, however, that the degrees of freedom for error with Rao's F can be noninteger, so you should not be alarmed if this appears on the computer printout.

As alluded to earlier, there are certain values of p and k for which a function of Λ is exactly distributed as an F ratio (for example, k = 2 or 3 and any p; see Tatsuoka, 1971, p. 89).

5.4 MULTIVARIATE ANALYSIS OF VARIANCE FOR SAMPLE DATA

We now consider the MANOVA of the data given earlier. For convenience, we present the data again here, with the means for the participants on the two dependent variables in each group:

          G1                G2                G3
     y1       y2       y1       y2       y1       y2
      2        3        4        8        7        6
      3        4        5        6        8        7
      5        4        6        7       10        8
      2        5                          9        5
                                          7        6
ȳ11 = 3   ȳ21 = 4   ȳ12 = 5   ȳ22 = 7   ȳ13 = 8.2  ȳ23 = 6.4

We wish to test the multivariate null hypothesis with the χ² approximation for Wilks' Λ. Recall that Λ = |W| / |T|, so W and T are needed. W is the pooled estimate of within variability on the set of variables, that is, our multivariate error term.


5.4.1 Calculation of W

Calculation of W proceeds in exactly the same way as we obtained W for Hotelling's T² in the two-group MANOVA case in Chapter 4. That is, we determine how much the participants' scores vary on the dependent variables within each group, and then pool (add) these together. Symbolically, then,

W = W1 + W2 + W3,

where W1, W2, and W3 are the within sums of squares and cross-products matrices for Groups 1, 2, and 3. As in Chapter 4, we denote the elements of W1 by ss1 and ss2 (measuring the variability on the variables within Group 1) and ss12 (measuring the covariability of the variables in Group 1):

W1 = [ ss1   ss12 ]
     [ ss21  ss2  ]

Then, for Group 1, we have

ss1 = Σj=1..4 (y1(j) − ȳ11)² = (2 − 3)² + (3 − 3)² + (5 − 3)² + (2 − 3)² = 6

ss2 = Σj=1..4 (y2(j) − ȳ21)² = (3 − 4)² + (4 − 4)² + (4 − 4)² + (5 − 4)² = 2

ss12 = ss21 = Σj=1..4 (y1(j) − ȳ11)(y2(j) − ȳ21)
     = (2 − 3)(3 − 4) + (3 − 3)(4 − 4) + (5 − 3)(4 − 4) + (2 − 3)(5 − 4) = 0

Thus, the matrix that measures within variability on the two variables in Group 1 is given by:

W1 = [ 6   0 ]
     [ 0   2 ]

In exactly the same way, the within SSCP matrices for Groups 2 and 3 can be shown to be:

W2 = [  2   −1 ]      W3 = [ 6.8   2.6 ]
     [ −1    2 ]           [ 2.6   5.2 ]

Therefore, the pooled estimate of within variability on the set of variables is given by:

W = W1 + W2 + W3 = [ 14.8   1.6 ]
                   [  1.6   9.2 ]

5.4.2 Calculation of T

Recall, from earlier in this chapter, that T = B + W. We find the B (between) matrix, and then obtain the elements of T by adding the elements of B to the elements of W. The diagonal elements of B are defined as follows:

bii = Σj=1..k nj (ȳij − ȳi)²,

where nj is the number of subjects in group j, ȳij is the mean for variable i in group j, and ȳi is the grand mean for variable i. Notice that for any particular variable, say variable 1, b11 is simply the between-group sum of squares for a univariate analysis of variance on that variable. The off-diagonal elements of B are defined as follows:

bim = bmi = Σj=1..k nj (ȳij − ȳi)(ȳmj − ȳm)

To find the elements of B we need the grand means on the two variables. These are obtained by simply adding up all the scores on each variable and then dividing by the total number of scores. Thus ȳ1 = 68 / 12 = 5.67, and ȳ2 = 69 / 12 = 5.75.

Now we find the elements of the B (between) matrix:

b11 = Σj=1..3 nj (ȳ1j − ȳ1)², where ȳ1j is the mean of variable 1 in group j
    = 4(3 − 5.67)² + 3(5 − 5.67)² + 5(8.2 − 5.67)² = 61.87

b22 = Σj=1..3 nj (ȳ2j − ȳ2)²
    = 4(4 − 5.75)² + 3(7 − 5.75)² + 5(6.4 − 5.75)² = 19.05

b12 = b21 = Σj=1..3 nj (ȳ1j − ȳ1)(ȳ2j − ȳ2)
    = 4(3 − 5.67)(4 − 5.75) + 3(5 − 5.67)(7 − 5.75) + 5(8.2 − 5.67)(6.4 − 5.75) = 24.40


Therefore, the B matrix is

B = [ 61.87   24.40 ]
    [ 24.40   19.05 ]

and the diagonal elements 61.87 and 19.05 represent the between-group sums of squares that would be obtained if separate univariate analyses had been done on variables 1 and 2. Because T = B + W, we have

T = [ 61.87   24.40 ] + [ 14.80   1.60 ] = [ 76.67   26.00 ]
    [ 24.40   19.05 ]   [  1.60   9.20 ]   [ 26.00   28.25 ]

5.4.3 Calculation of Wilks' Λ and the Chi-Square Approximation

Now we can obtain Wilks' Λ:

Λ = |W| / |T| = [14.8(9.2) − (1.6)²] / [76.67(28.25) − (26)²] = 133.6 / 1489.9 = .0897

Finally, we can compute the chi-square test statistic:

χ² = −[(N − 1) − .5(p + k)] ln Λ, with p(k − 1) df
χ² = −[(12 − 1) − .5(2 + 3)] ln(.0897)
χ² = −8.5(−2.4116) = 20.4987, with 2(3 − 1) = 4 df
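The hand computations in this section can be checked numerically. The following sketch (an illustration using numpy, not part of the original text) pools the within-group SSCP matrices, forms B by subtraction from T, and applies Bartlett's approximation:

```python
import numpy as np

# Scores on (y1, y2) for the three groups of the sample problem
groups = [
    np.array([[2, 3], [3, 4], [5, 4], [2, 5]], dtype=float),           # Group 1
    np.array([[4, 8], [5, 6], [6, 7]], dtype=float),                   # Group 2
    np.array([[7, 6], [8, 7], [10, 8], [9, 5], [7, 6]], dtype=float),  # Group 3
]
scores = np.vstack(groups)
N, p, k = len(scores), scores.shape[1], len(groups)

# W: pool the within-group SSCP matrices (deviations about group means)
W = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)

# T: total SSCP (deviations about the grand mean); then B = T - W
dev = scores - scores.mean(axis=0)
T = dev.T @ dev
B = T - W

wilks = np.linalg.det(W) / np.linalg.det(T)
chi2 = -((N - 1) - 0.5 * (p + k)) * np.log(wilks)  # Bartlett, p(k - 1) df

print(np.round(W, 2))   # [[14.8  1.6] [ 1.6  9.2]]
print(round(wilks, 4))  # 0.0897
print(round(chi2, 2))   # 20.5
```

The printed values agree with the hand calculations above within rounding.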

The multivariate null hypothesis here is:

H0: [ µ11 ]   [ µ12 ]   [ µ13 ]
    [ µ21 ] = [ µ22 ] = [ µ23 ]

That is, the population means in the three groups on variable 1 are equal, and similarly the population means on variable 2 are equal. Because the critical value at .05 is 9.49, we reject the multivariate null hypothesis and conclude that the three groups differ overall on the set of two variables. Table 5.2 gives the multivariate Fs and the univariate Fs from the SPSS run on the sample problem, presents the formula for Rao's F approximation, and relates some of the output from the univariate Fs to the B and W matrices that we computed. After overall multivariate significance is attained, one often would like to find out which of the outcome variables differed across groups. When such a difference is found, we would then like to describe how the groups differed on the given variable. This is considered next.


Table 5.2: Multivariate Fs and Univariate Fs for Sample Problem From SPSS MANOVA

Multivariate Tests

Effect: gps
                     Value     F        Hypothesis df  Error df  Sig.
Pillai's Trace       1.302      8.390   4.000          18.000    .001
Wilks' Lambda         .090      9.358   4.000          16.000    .000
Hotelling's Trace    5.786     10.126   4.000          14.000    .000
Roy's Largest Root   4.894     22.024   2.000           9.000    .000

F = [(1 − Λ^(1/s)) / Λ^(1/s)] · [ms − p(k − 1)/2 + 1] / [p(k − 1)],

where m = N − 1 − (p + k)/2 and

s = sqrt{ [p²(k − 1)² − 4] / [p² + (k − 1)² − 5] },

is approximately distributed as F with p(k − 1) and ms − p(k − 1)/2 + 1 degrees of freedom. Here Wilks' Λ = .08967, p = 2, k = 3, and N = 12. Thus, we have m = 12 − 1 − (2 + 3)/2 = 8.5 and

s = sqrt{ [4(3 − 1)² − 4] / [4 + (3 − 1)² − 5] } = sqrt(12 / 3) = 2,

and

F = [(1 − .08967^(1/2)) / .08967^(1/2)] · [8.5(2) − 2(3 − 1)/2 + 1] / [2(3 − 1)]
  = [(1 − .29945) / .29945] · (16 / 4) = 9.358,

as given on the printout, within rounding. The pair of degrees of freedom is p(k − 1) = 2(3 − 1) = 4 and ms − p(k − 1)/2 + 1 = 8.5(2) − 2(3 − 1)/2 + 1 = 16.
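The worked Rao approximation can be reproduced in a few lines; this is an illustrative sketch with the sample problem's values plugged in:

```python
import math

lam, p, k, N = 0.08967, 2, 3, 12   # Wilks' lambda and design from the example

m = N - 1 - (p + k) / 2                                            # 8.5
s = math.sqrt((p**2 * (k - 1)**2 - 4) / (p**2 + (k - 1)**2 - 5))   # 2.0
df1 = p * (k - 1)                                                  # 4
df2 = m * s - p * (k - 1) / 2 + 1                                  # 16.0

# Rao's F: the lambda transform times the ratio of degrees of freedom
F = (1 - lam**(1 / s)) / lam**(1 / s) * df2 / df1
print(round(F, 3))  # 9.358, matching the SPSS multivariate F for Wilks' lambda
```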

Tests of Between-Subjects Effects

Source  Dependent Variable  Type III Sum of Squares  df  Mean Square  F       Sig.
gps     y1                  (1) 61.867               2   30.933       18.811  .001
        y2                      19.050               2    9.525        9.318  .006
Error   y1                  (2) 14.800               9    1.644
        y2                       9.200               9    1.022

(1) These are the diagonal elements of the B (between) matrix we computed in the example:

B = [ 61.87   24.40 ]
    [ 24.40   19.05 ]

(2) Recall that the pooled within matrix computed in the example was

W = [ 14.8   1.6 ]
    [  1.6   9.2 ]

and these are the diagonal elements of W. The univariate F ratios are formed from the elements on the main diagonals of B and W. Dividing the elements of B by the hypothesis degrees of freedom gives the hypothesis mean squares, while dividing the elements of W by the error degrees of freedom gives the error mean squares. Then, dividing hypothesis mean squares by error mean squares yields the F ratios. Thus, for y1 we have

F = 30.933 / 1.644 = 18.81.

5.5 POST HOC PROCEDURES

In general, when the multivariate null hypothesis is rejected, several follow-up procedures can be used. By far, the most commonly used method in practice is to conduct a series of one-way ANOVAs for each outcome to identify whether group differences are present for a given dependent variable. This analysis implies that you are interested in identifying whether there are group differences present for each of the correlated but distinct outcomes. The purpose of using Wilks' Λ prior to conducting these univariate tests is to provide for accurate type I error control. Note that if one were interested in learning whether linear combinations of dependent variables (instead of individual dependent variables) distinguish groups, discriminant analysis (see Chapter 10) would be used instead of these procedures.

In addition, another procedure that may be used following rejection of the overall multivariate null hypothesis is step down analysis. This analysis requires that you establish an a priori ordering of the dependent variables (from most important to least) based on theory, empirical evidence, and/or reasoning. In many investigations, this may be difficult to do, and study results depend on this ordering. As such, it is difficult to find applications of this procedure in the literature. Previous editions of this text contained a chapter on step down analysis. However, given its limited utility, this chapter has been removed from the text, although it is available on the web.

Another analysis procedure that may be used when the focus is on individual dependent variables (and not linear combinations) is multivariate multilevel modeling (MVMM). This technique is covered in Chapter 14, which includes a discussion of the benefits of this procedure. Most relevant for the follow-up procedures is that MVMM can be used to test whether group differences are the same or differ across multiple outcomes, when the outcomes are similarly scaled. Thus, instead of finding, as with the use of more traditional procedures, that an intervention impacts, for example, three outcomes, investigators may find that the effects of an intervention are stronger for some outcomes than others. In addition, this procedure offers improved treatment of missing data over the traditional approach discussed here.

The focus for the remainder of this section and the next is on the use of a series of ANOVAs as follow-up tests given a significant overall multivariate test result. There are different variations of this procedure that can be used, depending on the balance of type I error rate and power desired, as well as confidence interval accuracy. We present two such procedures here. SAS and SPSS commands for the follow-up procedures are shown in section 5.6 as we work through an applied example. Note also that one may not wish to conduct pairwise comparisons as we do here, but instead focus on a more limited number of meaningful comparisons as suggested by theory and/or empirical work. Such planned comparisons are discussed in sections 5.7–5.11.

5.5.1 Procedure 1—ANOVAs and Tukey Comparisons With Alpha Adjustment

With this procedure, a significant multivariate test result is followed up with one-way ANOVAs for each outcome, with a Bonferroni-adjusted alpha used for the univariate tests. So if there are p outcomes, the alpha used for each ANOVA is the experiment-wise nominal alpha divided by p, or α / p. You can implement this procedure by simply comparing the p value obtained for the ANOVA F test to this adjusted alpha level. For example, if the experiment-wise type I error rate were set at .05 and 5 dependent variables were included, the alpha used for each one-way ANOVA would be .05 / 5 = .01. And, if the p value for an ANOVA F test were smaller than .01, this indicates that group differences are present for that dependent variable. If group differences are found for a given dependent variable and the design includes three or more groups, then pairwise comparisons can be made for that variable using the Tukey procedure, as described in the next section, with this same alpha level (e.g., .01 for the five-dependent-variable example). This generally recommended procedure thus provides strict control of the experiment-wise type I error rate for all possible pairwise comparisons and also provides good confidence interval coverage. That is, with this procedure, we can be 95% confident that all intervals capture the true difference in means for the set of pairwise comparisons. While this procedure has good type I error control and confidence interval coverage, its potential weakness is statistical power, which may drop to low levels, particularly for the pairwise comparisons, especially as the number of dependent variables increases. One possibility, then, is to select a higher level than .05 (e.g., .10) for the experiment-wise error rate. In this case, with five dependent variables, the alpha level used for each of the ANOVAs is .10 / 5, or .02, with this same alpha level also used for the pairwise comparisons. Also, when the number of dependent variables and groups is small (i.e., two or perhaps three), procedure 2 can be considered.
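The Bonferroni screening step of procedure 1 amounts to one comparison per outcome; a small sketch follows (the p values are hypothetical, chosen only to illustrate the decision rule):

```python
alpha_ew = 0.05                                      # experiment-wise alpha
p_values = {"y1": 0.001, "y2": 0.006, "y3": 0.030}   # hypothetical ANOVA p values

# Each univariate ANOVA is tested at alpha / p, here .05 / 3 = .0167
alpha_per_test = alpha_ew / len(p_values)
significant = [dv for dv, p in p_values.items() if p < alpha_per_test]
print(significant)  # ['y1', 'y2']; y3 (p = .030) fails the adjusted criterion
```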

5.5.2 Procedure 2—ANOVAs With No Alpha Adjustment and Tukey Comparisons

With this procedure, a significant overall multivariate test result is followed up with separate ANOVAs for each outcome with no alpha adjustment (e.g., α = .05). Again, if group differences are present for a given dependent variable, the Tukey procedure is used for pairwise comparisons using this same alpha level (i.e., .05). As such, this procedure relies more heavily on the use of Wilks' Λ as a protected test. That is, the one-way ANOVAs will be considered only if Wilks' Λ indicates that group differences are present on the set of outcomes. Given no alpha adjustment, this procedure is more powerful than the previous procedure but can provide poor control of the experiment-wise type I error rate when the number of outcomes is greater than two or three and/or when the number of groups increases (thus increasing the number of pairwise comparisons). As such, we would generally not recommend this procedure with more than three outcomes and more than three groups. Similarly, this procedure does not maintain proper confidence interval coverage for the entire set of pairwise comparisons. Thus, if you wish to have, for example, 95% coverage for this entire set of comparisons, or strict control of the family-wise error rate throughout the testing procedure, the procedure in section 5.5.1 should be used.

You may wonder why this procedure may work well when the number of outcomes and groups is small. In section 4.2, we mentioned that use of univariate ANOVAs with no alpha adjustment for each of several dependent variables is not a good idea because the experiment-wise type I error rate can increase to unacceptable levels. The same applies here, except that the use of Wilks' Λ provides us with some protection that is not present when we proceed directly to univariate ANOVAs. To illustrate, when the study design has just two dependent variables and two groups, the use of Wilks' Λ provides for strict control of the experiment-wise type I error rate even when no alpha adjustment is used for the univariate ANOVAs, as noted by Levin, Serlin, and Seaman (1994). Here is how this works. Given two outcomes, there are three possibilities for the univariate ANOVAs. One possibility is that there are no group differences for either of the two dependent variables. If that is the case, use of Wilks' Λ at an alpha of .05 provides for strict type I error control. That is, if we reject the multivariate null hypothesis when no group differences are present, we have made a type I error, and the expected rate of doing this is .05. So, for this case, use of Wilks' Λ provides for proper control of the experiment-wise type I error rate.

We now consider a second possibility: here, the overall multivariate null hypothesis is false and there is a group difference for just one of the outcomes. In this case, we cannot make a type I error with the use of Wilks' Λ, since the multivariate null hypothesis is false. However, we can certainly make a type I error when we consider the univariate tests. In this case, with only one true null hypothesis, we can make a type I error for only one of the univariate F tests. Thus, if we use an unadjusted alpha for these tests (i.e., .05), then the probability of making a type I error in the set of univariate tests (i.e., the two separate ANOVAs) is .05. Again, the experiment-wise type I error rate is properly controlled for the univariate ANOVAs. The third possibility is that there are group differences present on each outcome. In this case, it is not possible to make a type I error for the multivariate test or the univariate F tests. Of course, even in this latter case, when you have more than two groups, making type I errors is possible for the pairwise comparisons, where some null group differences may be present. The use of the Tukey procedure, then, provides some type I error protection for the pairwise tests, but as noted, this protection generally weakens as the number of groups increases.


Thus, similar to our discussion in Chapter 4, we recommend use of this procedure for analyses involving up to three dependent variables and three groups. Note that with three dependent variables, the maximum type I error rate for the ANOVA F tests is expected to be .10. In addition, this situation, three or fewer outcomes and groups, may be encountered more frequently than you might at first think. It may come about because, in the most obvious case, your research design includes three variables with three groups. However, it is also possible that you collected data for eight outcome variables from participants in each of three groups. Suppose, though, as discussed in Chapter 4, that there is fairly solid evidence from the literature that group mean differences are expected for two or perhaps three of the variables, while the others are being tested on a heuristic basis. In this case, a separate multivariate test could be used for the variables that are expected to show a difference. If the multivariate test is significant, procedure 2, with no alpha adjustment for the univariate F tests, can be used. For the more exploratory set of variables, a separate significant multivariate test would then be followed up by use of procedure 1, which uses the Bonferroni-adjusted F tests.

The point we are making here is that you may not wish to treat all dependent variables the same in the analysis. Substantive knowledge and previous empirical research suggesting group mean differences can and should be taken into account in the analysis. This may help you strike a reasonable balance between type I error control and power. As Keppel and Wickens (2004) state, the "heedless choice of the most stringent error correction can exact unacceptable costs in power" (p. 264). They advise that you need to be flexible when selecting a strategy to control type I error so that power is not sacrificed.

5.6 THE TUKEY PROCEDURE

As used in the procedures just mentioned, the Tukey procedure enables us to examine

all pairwise group differences on a variable with experiment-wise error rate held in

check. The studentized range statistic (which we denote by q) is used in the procedure,

and the critical values for it are in Table A.4 of the statistical tables in Appendix A.

If there are k groups and the total sample size is N, then any two means are declared

significantly different at the .05 level if the following inequality holds:

| ȳ_i − ȳ_j | > q_{.05; k, N−k} √(MS_w / n),

where MS_w is the error term for a one-way ANOVA and n is the common group size.

Alternatively, one could compute a standard t test for a pairwise difference and compare that t ratio to a Tukey-based critical value of q/√2, which allows for direct comparison to the t test. Equivalently, and somewhat more informatively, we can infer that the population means for groups i and j (μ_i and μ_j) differ if the following confidence interval does not include 0:

ȳ_i − ȳ_j ± q_{.05; k, N−k} √(MS_w / n),

that is,

ȳ_i − ȳ_j − q_{.05; k, N−k} √(MS_w / n) < μ_i − μ_j < ȳ_i − ȳ_j + q_{.05; k, N−k} √(MS_w / n).

If the confidence interval includes 0, we conclude that the population means are not significantly different. Why? Because if the interval includes 0, that suggests 0 is a likely value for the true difference in means, which is to say it is reasonable to act as if μ_i = μ_j.

The Tukey procedure assumes that the variances are homogeneous and it also assumes equal group sizes. If group sizes are unequal, even very sharply unequal, then various studies (e.g., Dunnett, 1980; Keselman, Murray, & Rogan, 1976) indicate that the procedure is still appropriate provided that n is replaced by the harmonic mean for each pair of groups and provided that the variances are homogeneous. Thus, for groups i and j with sample sizes n_i and n_j, we replace n by

2 / (1/n_i + 1/n_j)

The studies cited earlier showed that under the conditions given, the type I error rate for the Tukey procedure is kept very close to the nominal alpha, and always less than nominal alpha (within .01 for alpha = .05 from the Dunnett study). Later we show how the Tukey procedure may be obtained via SAS and SPSS and also show a hand calculation for one of the confidence intervals.
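The interval arithmetic is easy to script. Below is a minimal Python sketch (the function name and the idea of passing the tabled critical value q as an argument are ours, not the book's); for unequal group sizes it substitutes the harmonic mean of the pair, as described above:

```python
from math import sqrt

def tukey_ci(mean_i, mean_j, ms_w, n_i, n_j, q_crit):
    """Tukey confidence interval for mu_i - mu_j.

    q_crit is the studentized range critical value q_{alpha; k, N-k},
    read from a table such as Table A.4.
    """
    n_h = 2.0 / (1.0 / n_i + 1.0 / n_j)  # harmonic mean; equals n when n_i == n_j
    margin = q_crit * sqrt(ms_w / n_h)
    diff = mean_i - mean_j
    return diff - margin, diff + margin

# Anxiety, rehearsal (mean 4.27) vs. control (mean 5.45), MS_w = .394, n = 11:
lo, hi = tukey_ci(4.27, 5.45, 0.394, 11, 11, 4.16)
# lo, hi is approximately (-1.97, -.39); 0 is excluded, so the means differ
```

Because the upper limit is below zero, we would declare the pair significantly different, exactly as in the hand calculation shown later in Example 5.2.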

Example 5.1 Using SAS and SPSS for Post Hoc Procedures

The selection and use of a post hoc procedure is illustrated with data collected by Novince (1977). She was interested in improving the social skills of college females and reducing their anxiety in heterosexual encounters. There were three groups in the study: a control group, a behavioral rehearsal group, and a behavioral rehearsal + cognitive restructuring group. We consider the analysis on the following set of dependent variables: (1) anxiety—physiological anxiety in a series of heterosexual encounters, (2) a measure of social skills in social interactions, and (3) assertiveness.

Given the outcomes are considered to be conceptually distinct (i.e., not measures of a single underlying construct), use of MANOVA is a reasonable choice. Because we do not have strong support to expect group mean differences and wish to have strict control of the family-wise error rate, we use procedure 1. Thus, for the separate ANOVAs, we will use α/p or .05/3 = .0167 to test for group differences for each outcome. This corresponds to a confidence level of 1 − .0167 or 98.33%. Use of this confidence level along with the Tukey procedure means that there is a 95% probability that all of the confidence intervals in the set will capture the respective true difference in means.

Table 5.3 shows the raw data and the SAS and SPSS commands needed to obtain the results of interest. Tables 5.4 and 5.5 show the results for the multivariate test (i.e.,

Table 5.3: SAS and SPSS Control Lines for MANOVA, Univariate F Tests, and Pairwise Comparisons Using the Tukey Procedure

SAS

DATA novince;
INPUT gpid anx socskls assert @@;
LINES;
1 5 3 3   1 5 4 3   1 4 5 4   1 4 5 4
1 3 5 5   1 4 5 4   1 4 5 5   1 4 4 4
1 5 4 3   1 5 4 3   1 4 4 4
2 6 2 1   2 6 2 2   2 5 2 3   2 6 2 2
2 4 4 4   2 7 1 1   2 5 4 3   2 5 2 3
2 5 3 3   2 5 4 3   2 6 2 3
3 4 4 4   3 4 3 3   3 4 4 4   3 4 5 5
3 4 5 5   3 4 4 4   3 4 5 4   3 4 6 5
3 4 4 4   3 5 3 3   3 4 4 4
PROC PRINT;
PROC GLM;
CLASS gpid;
MODEL anx socskls assert=gpid;
MANOVA H = gpid;
(1) MEANS gpid/ ALPHA = .0167 CLDIFF TUKEY;

SPSS

TITLE 'SPSS with novince data'.
DATA LIST FREE/gpid anx socskls assert.
BEGIN DATA.
1 5 3 3   1 5 4 3   1 4 5 4   1 4 5 4
1 3 5 5   1 4 5 4   1 4 5 5   1 4 4 4
1 5 4 3   1 5 4 3   1 4 4 4
2 6 2 1   2 6 2 2   2 5 2 3   2 6 2 2
2 4 4 4   2 7 1 1   2 5 4 3   2 5 2 3
2 5 3 3   2 5 4 3   2 6 2 3
3 4 4 4   3 4 3 3   3 4 4 4   3 4 5 5
3 4 5 5   3 4 4 4   3 4 5 4   3 4 6 5
3 4 4 4   3 5 3 3   3 4 4 4
END DATA.
LIST.
GLM anx socskls assert BY gpid
(2) /POSTHOC=gpid(TUKEY)
/PRINT=DESCRIPTIVE
(3) /CRITERIA=ALPHA(.0167)
/DESIGN= gpid.

(1) CLDIFF requests confidence intervals for the pairwise comparisons, TUKEY requests use of the Tukey procedure, and ALPHA directs that these comparisons be made at the α/p or .05/3 = .0167 level. If desired, the pairwise comparisons for Procedure 2 can be implemented by specifying the desired alpha (e.g., .05).
(2) Requests the use of the Tukey procedure for the pairwise comparisons.
(3) The alpha used for the pairwise comparisons is α/p or .05/3 = .0167. If desired, the pairwise comparisons for Procedure 2 can be implemented by specifying the desired alpha (e.g., .05).


Table 5.4: SAS Output for Procedure 1

SAS RESULTS

MANOVA Test Criteria and F Approximations for the Hypothesis of No Overall gpid Effect
H = Type III SSCP Matrix for gpid
E = Error SSCP Matrix
S=2 M=0 N=13

Statistic               Value        F Value  Num DF  Den DF  Pr > F
Wilks' Lambda           0.41825036    5.10    6       56      0.0003
Pillai's Trace          0.62208904    4.36    6       58      0.0011
Hotelling-Lawley Trace  1.29446446    5.94    6       35.61   0.0002
Roy's Greatest Root     1.21508924   11.75    3       29      <.0001

Note: F Statistic for Roy's Greatest Root is an upper bound.
Note: F Statistic for Wilks' Lambda is exact.

Dependent Variable: anx
Source           DF  Sum of Squares  Mean Square  F Value  Pr > F
Model             2  12.06060606     6.03030303   15.31    <.0001
Error            30  11.81818182     0.39393939
Corrected Total  32  23.87878788

Dependent Variable: socskls
Source           DF  Sum of Squares  Mean Square  F Value  Pr > F
Model             2  23.09090909     11.54545455  14.77    <.0001
Error            30  23.45454545     0.78181818
Corrected Total  32  46.54545455

Dependent Variable: assert
Source           DF  Sum of Squares  Mean Square  F Value  Pr > F
Model             2  14.96969697     7.48484848   11.65    0.0002
Error            30  19.27272727     0.64242424
Corrected Total  32  34.24242424

Wilks' Λ) and the follow-up ANOVAs for SAS and SPSS, respectively, but do not show the results for the pairwise comparisons (although these results are produced by the commands). To ease reading, we present the results for the pairwise comparisons in Table 5.6.

The outputs in Tables 5.4 and 5.5 indicate that the overall multivariate null hypothesis of no group differences on all outcomes is to be rejected (Wilks' Λ = .418, F = 5.10,

Table 5.5: SPSS Output for Procedure 1

SPSS RESULTS

Multivariate Tests(a)

Effect: gpid
                     Value   F          Hypothesis df  Error df  Sig.
Pillai's Trace       .622     4.364     6.000          58.000    .001
Wilks' Lambda        .418     5.098(b)  6.000          56.000    .000
Hotelling's Trace    1.294    5.825     6.000          54.000    .000
Roy's Largest Root   1.215   11.746(c)  3.000          29.000    .000

a. Design: Intercept + gpid
b. Exact statistic
c. The statistic is an upper bound on F that yields a lower bound on the significance level.

Tests of Between-Subjects Effects
Source  Dependent Variable  Type III Sum of Squares  df  Mean Square  F       Sig.
gpid    anx                 12.061                    2   6.030       15.308  .000
        socskls             23.091                    2  11.545       14.767  .000
        assert              14.970                    2   7.485       11.651  .000
Error   anx                 11.818                   30    .394
        socskls             23.455                   30    .782
        assert              19.273                   30    .642

Non-essential rows were removed from the SPSS tables.

Table 5.6: Pairwise Comparisons for Each Outcome Using the Tukey Procedure

Contrast                   Estimate  SE    98.33% confidence interval for the mean difference
Anxiety
Rehearsal vs. Cognitive     0.18     0.27  −.61, .97
Rehearsal vs. Control      −1.18*    0.27  −1.97, −.39
Cognitive vs. Control      −1.36*    0.27  −2.15, −.58
Social Skills
Rehearsal vs. Cognitive     0.09     0.38  −1.02, 1.20
Rehearsal vs. Control       1.82*    0.38  .71, 2.93
Cognitive vs. Control       1.73*    0.38  .62, 2.84
Assertiveness
Rehearsal vs. Cognitive    −0.27     0.34  −1.28, .73
Rehearsal vs. Control       1.27*    0.34  .27, 2.28
Cognitive vs. Control       1.55*    0.34  .54, 2.55

* Significant at the .0167 level using the Tukey HSD procedure.


p < .05). Further, inspection of the ANOVAs indicates that there are mean differences for anxiety (F = 15.31, p < .0167), social skills (F = 14.77, p < .0167), and assertiveness (F = 11.65, p < .0167). Table 5.6 indicates that at posttest each of the treatment groups had, on average, reduced anxiety compared to the control group (as the respective intervals do not include zero). Further, each of the treatment groups had greater mean social skills and assertiveness scores than the control group. The results in Table 5.6 do not suggest mean differences are present for the two treatment groups for any dependent variable (as each such interval includes zero). Note that in addition to using confidence intervals to merely indicate the presence or absence of a mean difference in the population, we can also use them to describe the size of the difference, which we do in the next section.

Example 5.2 Illustrating Hand Calculation of the Tukey-Based Confidence Interval

To illustrate numerically the Tukey procedure as well as an assessment of the importance of a group difference, we obtain a confidence interval for the anxiety (ANX) variable for the data shown in Table 5.3. In particular, we compute an interval with the Tukey procedure using the 1 − .05/3 level, that is, a 98.33% confidence interval, for groups 1 (Behavioral Rehearsal) and 2 (Control). With this 98.33% confidence level, this procedure provides us with 95% confidence that all the intervals in the set will include the respective population mean difference. The sample mean difference, as shown in Table 5.6, is −1.18. Recall that the common group size in this study is n = 11. The MS_w, the mean square error, as shown in the outputs in Tables 5.4 and 5.5, is .394 for ANX. While Table A.4 provides critical values for this procedure, it does not do so for the 98.33rd (1 − .0167) percentile. Here, we simply indicate that the critical value for the studentized range statistic is q_{.0167; 3, 30} = 4.16. Thus, the confidence interval is given by

−1.18 − 4.16 √(.394/11) < μ1 − μ2 < −1.18 + 4.16 √(.394/11)

−1.97 < μ1 − μ2 < −.39.

Because this interval does not include 0, we conclude, as before, that the rehearsal group population mean for anxiety is different from (i.e., lower than) the control population mean. Why is the confidence interval approach more informative, as indicated earlier, than simply testing whether the means are different? Because the confidence interval not only tells us whether the means differ, but it also gives us a range of values within which the mean difference is likely contained. This tells us the precision with which we have captured the mean difference and can be used in judging the practical importance of the difference. For example, given this interval, it is reasonable to believe that the mean difference for the two groups in the population lies in the range from −1.97 to −.39. If an investigator had decided on some grounds that a difference of at least 1 point indicated a meaningful difference between groups, the investigator, while concluding that group means differ in the population (i.e., the interval does not


include zero), would not be confident that an important difference is present (because

the entire interval does not exceed a magnitude of 1).

5.7 PLANNED COMPARISONS

One approach to the analysis of data is to first demonstrate overall significance, and then follow this up to assess the subsources of variation (i.e., which dependent variables have group differences). Two procedures using ANOVAs and pairwise comparisons have been presented. That approach is appropriate in exploratory studies where the investigator first has to establish that an effect exists. However, in many instances, there is more of an empirical or theoretical base and the investigator is conducting a confirmatory study. Here the existence of an effect can be taken for granted, and the investigator has specific questions he or she wishes to ask of the data. Thus, rather than examining all 10 pairwise comparisons for a five-group problem, there may be only three or four comparisons (that may or may not be paired comparisons) of interest. It is important to use planned comparisons when the situation justifies them, because performing a small number of statistical tests cuts down on the probability of spurious results (type I errors), which can occur much more readily when a large number of tests are done.

Hays (1981) showed in univariate ANOVA that more powerful tests can be conducted when comparisons are planned. This carries over to MANOVA and is a very important factor weighing in favor of planned comparisons. Many studies in educational research have only 10 to 20 participants per group. With these sample sizes, power is generally going to be poor unless the treatment effect is large (Cohen, 1988). If we plan a small or moderate number of contrasts that we wish to test, then power can be improved considerably, whereas control on overall α can be maintained through the use of the Bonferroni inequality. Recall this inequality states that if k hypotheses (k planned comparisons here) are tested separately with type I error rates of α1, α2, . . ., αk, then

overall α ≤ α1 + α2 + ··· + αk,

where overall α is the probability of one or more type I errors when all the hypotheses are true. Therefore, if three planned comparisons were tested each at α = .01, then the probability of one or more spurious results can be no greater than .03 for the set of three tests.
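That arithmetic can be checked in a line or two of Python; the comparison with the exact rate below assumes the three tests are independent, an assumption made only for this illustration:

```python
# Bonferroni bound for three planned comparisons, each tested at alpha = .01
alphas = [0.01, 0.01, 0.01]
bound = sum(alphas)  # overall alpha can be no greater than .03

# If the three tests were independent and all nulls true, the exact overall
# type I error rate would be 1 - (1 - .01)^3, slightly under the bound.
exact_if_independent = 1 - (1 - 0.01) ** 3  # about .0297
```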

Let us now consider two situations where planned comparisons would be appropriate:

1. Suppose an investigator wishes to determine whether each of two drugs produces a differential effect on three measures of task performance over a placebo. Then, if we denote the placebo as group 2, the following set of planned comparisons would answer the investigator's questions:

ψ1 = μ1 − μ2 and ψ2 = μ2 − μ3


2. Second, consider the following four-group schematic design:

Groups:  Control  T1 & T2 combined  T1  T2
Means:   μ1       μ2                μ3  μ4

Note: T1 and T2 represent two treatments.

As outlined, this could represent the format for a variety of studies (e.g., if T1 and T2 were two methods of teaching reading, or if T1 and T2 were two counseling approaches). Then the three most relevant questions the investigator wishes to answer are given by the following planned, so-called Helmert, contrasts:

1. Do the treatments as a set make a difference?

ψ1 = μ1 − (μ2 + μ3 + μ4)/3

2. Is the combination of treatments more effective than either treatment alone?

ψ2 = μ2 − (μ3 + μ4)/2

3. Is one treatment more effective than the other treatment?

ψ3 = μ3 − μ4

Assuming equal n per group, these two situations represent dependent versus independent planned comparisons. Two comparisons among means are independent if the sum of the products of their coefficients is 0. We represent the contrasts for Situation 1 as follows:

       Groups
       1    2    3
Ψ1     1   −1    0
Ψ2     0    1   −1

These contrasts are dependent because the sum of the products of the coefficients ≠ 0, as shown:

Sum of products = 1(0) + (−1)(1) + 0(−1) = −1


Now consider the contrasts from Situation 2:

       Groups
       1      2      3      4
Ψ1     1    −1/3   −1/3   −1/3
Ψ2     0      1    −1/2   −1/2
Ψ3     0      0      1     −1

Next we show that these contrasts are pairwise independent by demonstrating that the sum of the products of the coefficients in each case = 0:

ψ1 and ψ2: 1(0) + (−1/3)(1) + (−1/3)(−1/2) + (−1/3)(−1/2) = 0
ψ1 and ψ3: 1(0) + (−1/3)(0) + (−1/3)(1) + (−1/3)(−1) = 0
ψ2 and ψ3: 0(0) + (1)(0) + (−1/2)(1) + (−1/2)(−1) = 0

Now consider two general contrasts for k groups:

Ψ1 = c11μ1 + c12μ2 + ··· + c1kμk
Ψ2 = c21μ1 + c22μ2 + ··· + c2kμk

The first part of the c subscript refers to the contrast number and the second part to the group. The condition for independence in symbols then is:

c11c21 + c12c22 + ··· + c1kc2k = Σ_{j=1}^{k} c1j c2j = 0

If the sample sizes are not equal, then the condition for independence is more complicated and becomes:

c11c21/n1 + c12c22/n2 + ··· + c1kc2k/nk = 0
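Both independence conditions can be verified numerically. The sketch below (the helper names are ours) checks the Situation 1 and Situation 2 contrasts:

```python
def dot(c1, c2):
    # Equal-n condition: the sum of products of coefficients must be 0
    return sum(a * b for a, b in zip(c1, c2))

def weighted_dot(c1, c2, ns):
    # Unequal-n condition: the sum of c1j * c2j / nj must be 0
    return sum(a * b / n for a, b, n in zip(c1, c2, ns))

# Situation 1: psi1 = mu1 - mu2 and psi2 = mu2 - mu3 are dependent
assert dot([1, -1, 0], [0, 1, -1]) == -1  # not 0

# Situation 2: the three Helmert contrasts are pairwise orthogonal
h1 = [1, -1/3, -1/3, -1/3]
h2 = [0, 1, -1/2, -1/2]
h3 = [0, 0, 1, -1]
for a, b in [(h1, h2), (h1, h3), (h2, h3)]:
    assert abs(dot(a, b)) < 1e-12
```

With unequal group sizes the same Helmert coefficients generally fail the weighted condition; for example, weighted_dot(h1, h2, [5, 10, 5, 5]) is nonzero.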

It is desirable, both statistically and substantively, to have orthogonal multivariate

planned comparisons. Because the comparisons are uncorrelated, we obtain a nice additive partitioning of the total between-group association (Stevens, 1972). You may recall

that in univariate ANOVA the between sum of squares is split into additive portions by a


set of orthogonal planned comparisons (see Hays, 1981, chap. 14). Exactly the same type

of thing is accomplished in the multivariate case; however, now the between matrix is

split into additive portions that yield nonoverlapping pieces of information. Because the

orthogonal comparisons are uncorrelated, the interpretation is clear and straightforward.

Although it is desirable to have orthogonal comparisons, the set to impose depends

on the questions that are of primary interest to the investigator. The first example we

gave of planned comparisons was not orthogonal, but corresponded to the important

questions the investigator wanted answered. The interpretation of correlated contrasts

requires some care, however, and we consider these in more detail later on in this chapter.

5.8 TEST STATISTICS FOR PLANNED COMPARISONS

5.8.1 Univariate Case

You may have been exposed to planned comparisons for a single dependent variable, the univariate case. For k groups, with population means μ1, μ2, . . ., μk, a contrast among the population means is given by

Ψ = c1μ1 + c2μ2 + ··· + ckμk,

where the sum of the coefficients (c_i) must equal 0.

This contrast is estimated by replacing the population means by the sample means, yielding

Ψ̂ = c1 x̄1 + c2 x̄2 + ··· + ck x̄k

To test whether a given contrast is significantly different from 0, that is, to test

H0: Ψ = 0 vs. H1: Ψ ≠ 0,

we need an expression for the standard error of a contrast. It can be shown that the variance of an estimated contrast is given by

σ̂²_Ψ̂ = MS_w · Σ_{i=1}^{k} c_i²/n_i,  (1)

where MS_w is the error term from all the groups (the denominator of the F test) and the n_i are the group sizes. Thus, the standard error of a contrast is simply the square root of Equation 1, and the following t statistic can be used to determine whether a contrast is significantly different from 0:

t = Ψ̂ / √( MS_w · Σ_{i=1}^{k} c_i²/n_i )
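As a numerical illustration of this t statistic (the helper below is ours; the means 5.2, 2.8, 4.6, n = 5, and MS_w = .90 are taken from the Helmert example appearing later in Tables 5.8 and 5.9):

```python
from math import sqrt

def contrast_t(means, ns, coeffs, ms_w):
    # Contrast estimate divided by the square root of Equation 1
    est = sum(c * m for c, m in zip(coeffs, means))
    se = sqrt(ms_w * sum(c * c / n for c, n in zip(coeffs, ns)))
    return est, est / se

# psi = mu1 - (mu2 + mu3)/2 applied to the y1 group means
est, t = contrast_t([5.2, 2.8, 4.6], [5, 5, 5], [1, -0.5, -0.5], 0.90)
# est is 1.5, and t**2 equals the univariate F of 8.333 reported in Table 5.9
```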


SPSS MANOVA reports the univariate results for contrasts as F values. Recall that because F = t², the following F test with 1 and N − k degrees of freedom is equivalent to a two-tailed t test at the same level of significance:

F = Ψ̂² / ( MS_w · Σ_{i=1}^{k} c_i²/n_i )

If we rewrite this as

F = ( Ψ̂² / Σ_{i=1}^{k} c_i²/n_i ) / MS_w,  (2)

we can think of the numerator of Equation 2 as the sum of squares for a contrast, and this will appear as the hypothesis sum of squares (HYPOTH. SS specifically) on the SPSS printout. MS_w will appear under the heading ERROR MS.

Let us consider a special case of Equation 2. Suppose the group sizes are equal and we are making a simple paired comparison. Then the coefficient for one mean will be 1 and the coefficient for the other mean will be −1, and the F statistic can be written as

F = Ψ̂² / (2MS_w/n) = (n/2) Ψ̂ (MS_w)⁻¹ Ψ̂.  (3)

We have rewritten the test statistic in the form on the extreme right because we will be able to relate it more easily to the multivariate test statistic for a two-group planned comparison.

5.8.2 Multivariate Case

All contrasts, whether univariate or multivariate, can be thought of as fundamentally "two-group" comparisons: we are literally comparing two groups, or we are comparing one set of means versus another set of means. In the multivariate case this means that Hotelling's T² will be appropriate for testing the multivariate contrasts for significance. We now have a contrast among the population mean vectors μ1, μ2, . . ., μk, given by

Ψ = c1μ1 + c2μ2 + ··· + ckμk.

This contrast is estimated by replacing the population mean vectors by the sample mean vectors:

Ψ̂ = c1 x̄1 + c2 x̄2 + ··· + ck x̄k


We wish to test that the contrast among the population mean vectors is the null vector:

H0: Ψ = 0

Our estimate of error is S, the estimate of the assumed common within-group population covariance matrix Σ, and the general test statistic is

T² = ( Σ_{i=1}^{k} c_i²/n_i )⁻¹ Ψ̂′ S⁻¹ Ψ̂,  (4)

where, as in the univariate case, the n_i refer to the group sizes. Suppose we wish to contrast group 1 against the average of groups 2 and 3. If the group sizes are 20, 15, and 12, then the term in parentheses would be evaluated as [1²/20 + (−.5)²/15 + (−.5)²/12]. Complete evaluation of a multivariate contrast is given later in Table 5.10. Note that the first part of Equation 4, involving the summation, is exactly the same as in the univariate case (see Equation 2). Now, however, there are matrices instead of scalars. For example, the univariate error term MS_w has been replaced by the matrix S.

Again, as in the two-group MANOVA chapter, we have an exact F transformation of T², which is given by

F = [(n_e − p + 1) / (n_e p)] T², with p and (n_e − p + 1) degrees of freedom.  (5)

In Equation 5, n_e = N − k, that is, the degrees of freedom for estimating the pooled within covariance matrix. Note that for k = 2, Equation 5 reduces to Equation 3 in Chapter 4.

For equal n per group and a simple paired comparison, observe that Equation 4 can be written as

T² = (n/2) Ψ̂′ S⁻¹ Ψ̂.  (6)

Note the analogy with the univariate case in Equation 3, except that now we have matrices instead of scalars. The estimated contrast has been replaced by the estimated mean vector contrast (Ψ̂), and the univariate error term (MS_w) has been replaced by the corresponding multivariate error term S.
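To make Equations 4 and 5 concrete, the pure-Python sketch below (2 × 2 matrix arithmetic written out by hand; the variable names are ours) reproduces the first multivariate Helmert contrast of the example appearing later in this chapter, using the means and pooled covariance matrix of Table 5.8:

```python
# Pooled within-group covariance matrix S from Table 5.8
S = [[0.900, 1.150],
     [1.150, 1.933333]]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
S_inv = [[ S[1][1] / det, -S[0][1] / det],
         [-S[1][0] / det,  S[0][0] / det]]

# Estimated contrast: xbar1 - (xbar2 + xbar3)/2 for (y1, y2)
psi = [5.2 - (2.8 + 4.6) / 2, 5.8 - (2.4 + 4.6) / 2]  # [1.5, 2.3]
sum_c2_over_n = (1**2 + 0.5**2 + 0.5**2) / 5  # equal n = 5 per group

# Equation 4: T^2 = (sum c_i^2/n_i)^(-1) * psi' S^(-1) psi
quad = sum(psi[i] * S_inv[i][j] * psi[j] for i in range(2) for j in range(2))
T2 = quad / sum_c2_over_n

# Equation 5: exact F with p and (ne - p + 1) df, where ne = N - k = 12, p = 2
n_e, p = 12, 2
F = (n_e - p + 1) / (n_e * p) * T2  # about 4.30, matching Table 5.9
```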

5.9 MULTIVARIATE PLANNED COMPARISONS ON SPSS MANOVA

SPSS MANOVA is set up very nicely for running multivariate planned comparisons. The following types of contrasts are automatically generated by the program: Helmert (which we have discussed), Simple, Repeated (comparing adjacent levels of a factor), Deviation, and Polynomial. Thus, if we wish Helmert contrasts, it is not necessary to set up the coefficients; the program does this automatically. All we need do is give the following CONTRAST subcommand:

CONTRAST(FACTORNAME) = HELMERT/

We remind you that all subcommands are indented at least one column, begin with a keyword (in this case CONTRAST) followed by an equals sign and then the specifications, and are terminated by a slash.

An example of where Helmert contrasts are very meaningful has already been given. Simple contrasts involve comparing each group against the last group. A situation where this set of contrasts would make sense is if we were mainly interested in comparing each of several treatment groups against a control group (labeled as the last group). Repeated contrasts might be of considerable interest in a repeated-measures design where a single group of subjects is measured at, say, five points in time (a longitudinal study). We might be particularly interested in differences at adjacent points in time. For example, a group of elementary school children is measured on a standardized achievement test in grades 1, 3, 5, 7, and 8. We wish to know the extent of change from grade 1 to grade 3, from grade 3 to grade 5, from grade 5 to grade 7, and from grade 7 to grade 8. The coefficients for the contrasts would be as follows:

       Grade
       1    3    5    7    8
       1   −1    0    0    0
       0    1   −1    0    0
       0    0    1   −1    0
       0    0    0    1   −1
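These adjacent-level coefficients are mechanical to generate; a small sketch (the function name is ours):

```python
def repeated_contrasts(levels):
    """Build the adjacent-level ("repeated") contrast coefficients.

    Row i compares level i with level i + 1, as in the grade table above.
    """
    k = len(levels)
    rows = []
    for i in range(k - 1):
        row = [0] * k
        row[i], row[i + 1] = 1, -1
        rows.append(row)
    return rows

coeffs = repeated_contrasts([1, 3, 5, 7, 8])
# coeffs[0] is [1, -1, 0, 0, 0]: the grade 1 vs. grade 3 comparison
```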

Polynomial contrasts are useful in trend analysis, where we wish to determine whether

there is a linear, quadratic, cubic, or other trend in the data. Again, these contrasts

can be of great interest in repeated measures designs in growth curve analysis, where

we wish to model the mathematical form of the growth. To reconsider the previous

example, some investigators may be more interested in whether the growth in some

basic skills areas such as reading and mathematics is linear (proportional) during the

elementary years, or perhaps curvilinear. For example, maybe growth is linear for a

while and then somewhat levels off, suggesting an overall curvilinear trend.

If none of these automatically generated contrasts answers the research questions of

interest, then one can set up contrasts using SPECIAL as the code name. Special contrasts are “tailor-made” comparisons for the group comparisons suggested by your

hypotheses. In setting these up, however, remember that for k groups there are only


(k − 1) between degrees of freedom, so that only (k − 1) nonredundant contrasts can be run. The coefficients for the contrasts are enclosed in parentheses after SPECIAL:

CONTRAST(FACTORNAME) = SPECIAL(1, 1, . . ., 1 coefficients for contrasts)/

There must first be as many 1s as there are groups. We give an example illustrating special contrasts shortly.

Example 5.3: Helmert Contrasts

An investigator has a three-group, two-dependent-variable problem with five participants per group. The first is a control group, and the remaining two groups are treatment groups. The Helmert contrasts test each level (group) against the average of the remaining levels. In this case the two single degree of freedom Helmert contrasts, corresponding to the two between degrees of freedom, are very meaningful. The first tests whether the control group differs from the average of the treatment groups on the set of variables. The second Helmert contrast tests whether the treatments are differentially effective. In Table 5.7 we present the control lines, along with the data as part of the command file, for running the contrasts. Recall that when the data is part of the command file it is preceded by the BEGIN DATA command and followed by the END DATA command.

The means, standard deviations, and pooled within-covariance matrix S are presented in Table 5.8, where we also calculate S⁻¹, which will serve as the error term for the multivariate contrasts (see Equation 4). Table 5.9 presents the output for the multivariate

Table 5.7: SPSS MANOVA Control Lines for Multivariate Helmert Contrasts

TITLE 'HELMERT CONTRASTS'.
DATA LIST FREE/gps y1 y2.
BEGIN DATA.
1 5 6
1 6 7
1 6 7
1 4 5
1 5 4
2 2 2
2 3 3
2 4 4
2 3 2
2 2 1
3 4 3
3 6 7
3 3 3
3 5 5
3 5 5
END DATA.
LIST.
MANOVA y1 y2 BY gps(1,3)
 /CONTRAST(gps) = HELMERT
(1) /PARTITION(gps)
(2) /DESIGN = gps(1), gps(2)
 /PRINT = CELLINFO(MEANS, COV).

(1) In general, for k groups, the between degrees of freedom could be partitioned in various ways. If we wish all single degree of freedom contrasts, as here, then we could put PARTITION(gps) = (1, 1)/. Or, this can be abbreviated to PARTITION(gps)/.
(2) This DESIGN subcommand specifies the effects we are testing for significance, in this case the two single degree of freedom multivariate contrasts. The numbers in parentheses refer to the parts of the partition. Thus, gps(1) refers to the first part of the partition (i.e., the first Helmert contrast) and gps(2) refers to the second part of the partition (i.e., the second Helmert contrast).


Table 5.8: Means, Standard Deviations, and Pooled Within Covariance Matrix for Helmert Contrast Example

Cell Means and Standard Deviations

Variable .. y1
FACTOR             CODE  Mean    Std. Dev.
gps                1     5.200    .837
gps                2     2.800    .837
gps                3     4.600   1.140
For entire sample        4.200   1.373

Variable .. y2
FACTOR             CODE  Mean    Std. Dev.
gps                1     5.800   1.304
gps                2     2.400   1.140
gps                3     4.600   1.673
For entire sample        4.267   1.944

Pooled within-cells Variance-Covariance matrix
       y1     y2
y1     .900
y2    1.150  1.933

Determinant of pooled Covariance matrix of dependent vars. = .41750

To compute the multivariate test statistic for the contrasts we need the inverse of the above covariance matrix S, as shown in Equation 4. The procedure for finding the inverse of a matrix was given in section 2.5. We obtain the matrix of cofactors and then divide by the determinant. Thus, here we have

S⁻¹ = (1/.4175) |  1.933  −1.15  |  =  |  4.631  −2.755 |
                | −1.15     .900 |     | −2.755   2.156 |

and univariate Helmert contrasts comparing the treatment groups against the control group. The multivariate contrast is significant at the .05 level (F = 4.303, p = .042), indicating that something is better than nothing. Note also that the Fs for all the multivariate tests are the same, since this is a single degree of freedom comparison and thus effectively a two-group comparison. The univariate results show that there are group differences on each of the two variables (i.e., p = .014 and .011). We also show in Table 5.9 how the hypothesis sum of squares is obtained for the first univariate Helmert contrast (i.e., for y1).

In Table 5.10 we present the multivariate and univariate Helmert contrasts comparing the two treatment groups. As the annotation indicates, both the multivariate and univariate contrasts are significant at the .05 level. Thus, the treatment groups differ on the set of variables, and the groups differ on each dependent variable.


Table 5.9  Multivariate and Univariate Tests for Helmert Contrast Comparing the Control Group Against the Two Treatment Groups

EFFECT.. gps (1)
Multivariate Tests of Significance (S = 1, M = 0, N = 4 1/2)

Test Name     Value     Exact F    Hypoth. DF   Error DF   Sig. of F
Pillais       .43897    4.30339    2.00         11.00      .042
Hotellings    .78244    4.30339    2.00         11.00      .042
Wilks         .56103    4.30339    2.00         11.00      .042
Roys          .43897
Note.. F statistics are exact.

EFFECT.. gps (1) (Cont.)
Univariate F-tests with (1, 12) D. F.

Variable   Hypoth. SS   Error SS    Hypoth. MS   Error MS   F         Sig. of F
y1          7.50000     10.80000     7.50000      .90000    8.33333   .014
y2         17.63333     23.20000    17.63333     1.93333    9.12069   .011

The univariate contrast for y1 is given by ψ1 = μ1 − (μ2 + μ3)/2.

Using the means of Table 5.8, we obtain the following estimate for the contrast:

ψ̂1 = 5.2 − (2.8 + 4.6)/2 = 1.5.

Recall from Equation 2 that the hypothesis sum of squares is given by ψ̂² / Σᵢ₌₁ᵏ (cᵢ²/nᵢ). For equal group sizes, as here, this becomes nψ̂² / Σᵢ₌₁ᵏ cᵢ². Thus,

HYPOTH SS = 5(1.5)² / [1² + (−.5)² + (−.5)²] = 11.25/1.5 = 7.5.

The error term for the contrast, MSw, appears under ERROR MS and is .900. Thus, the F ratio for y1 is 7.5/.90 = 8.333. Notice that both variables are significant at the .05 level.

This indicates that the multivariate contrast ψ1 = μ1 − (μ2 + μ3)/2 is significant at the .05 level (because .042 < .05). That is, the control group differs significantly from the average of the two treatment groups on the set of two variables.
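The arithmetic in this annotation is small enough to reproduce directly (a sketch; the y1 means and MSw = .900 are taken from Tables 5.8 and 5.9):

```python
# Helmert contrast for y1: control group vs. the average of the two
# treatment groups, with equal group sizes of n = 5.
means = [5.2, 2.8, 4.6]          # y1 group means from Table 5.8
c = [1, -0.5, -0.5]              # contrast coefficients
n = 5

psi = sum(ci * m for ci, m in zip(c, means))    # estimated contrast, 1.5
ss_hyp = n * psi**2 / sum(ci**2 for ci in c)    # 5(1.5)^2 / 1.5 = 7.5
f = ss_hyp / 0.900                              # F ratio, 7.5/.90 = 8.333

print(round(psi, 3), round(ss_hyp, 3), round(f, 3))
```

The same three lines, with the y2 means and MSw = 1.93333, reproduce the second univariate row of Table 5.9.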

In Table 5.10 we also show in detail how the F value for the multivariate Helmert contrast is arrived at.

Example 5.4: Special Contrasts

We indicated earlier that researchers can set up their own contrasts on MANOVA. We now illustrate this for a four-group, five-dependent-variable example. There are two control groups, one of which is a Hawthorne control, and two treatment groups. Three very meaningful contrasts are indicated schematically:

        T1 (control)   T2 (Hawthorne)   T3     T4
ψ1         −.5             −.5           .5     .5
ψ2           0               1          −.5    −.5
ψ3           0               0            1    −1

Chapter 5


Table 5.10  Multivariate and Univariate Tests for Helmert Contrast for the Two Treatment Groups

EFFECT.. gps(2)
Multivariate Tests of Significance (S = 1, M = 0, N = 4 1/2) (1)

Test Name     Value     Exact F    Hypoth. DF   Error DF   Sig. of F
Pillais       .43003    4.14970    2.00         11.00      .045
Hotellings    .75449    4.14970    2.00         11.00      .045
Wilks         .56997    4.14970    2.00         11.00      .045
Roys          .43003
Note.. F statistics are exact.

Recall from Table 5.8 that the inverse of the pooled within covariance matrix is

S⁻¹ = [  4.631  −2.755 ]
      [ −2.755   2.156 ]

Since this is a simple contrast with equal n, we can use Equation 6:

T² = (n ψ̂' S⁻¹ ψ̂)/2 = (n/2)(x̄₂ − x̄₃)' S⁻¹ (x̄₂ − x̄₃),

where x̄₂ − x̄₃ = (2.8 − 4.6, 2.4 − 4.6)' = (−1.8, −2.2)'. Thus

T² = (5/2) [−1.8  −2.2] [  4.631  −2.755 ] [ −1.8 ]
                        [ −2.755   2.156 ] [ −2.2 ]  = 9.0535

To obtain the value of HOTELLINGS given on the printout above we simply divide by error df, i.e., 9.0535/12 = .75446.

To obtain the F we use Equation 5:

F = [(nₑ − p + 1)/(nₑ p)] T² = [(12 − 2 + 1)/(12(2))] (9.0535) = 4.1495,

with degrees of freedom p = 2 and (nₑ − p + 1) = 11.

EFFECT.. gps(2) (Cont.)
Univariate F-tests with (1, 12) D. F. (2)

Variable   Hypoth. SS   Error SS    Hypoth. MS   Error MS   F         Sig. of F
y1          8.10000     10.80000     8.10000      .90000    9.00000   .011
y2         12.10000     23.20000    12.10000     1.93333    6.25862   .028

(1) This multivariate test indicates that the treatment groups differ significantly at the .05 level (because .045 < .05) on the set of two variables.
(2) These results indicate that both univariate contrasts are significant at the .05 level, i.e., the treatment groups differ on each variable.
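The footnote computations for Table 5.10 can be reproduced end to end (a sketch; S⁻¹ is rebuilt from the Table 5.8 covariances rather than from its rounded printed entries, so the printout figures are matched to three or four decimals):

```python
# T^2, the HOTELLINGS value, and the exact F for the Helmert contrast
# comparing the two treatment groups (Equations 5 and 6 of the chapter).
s11, s12, s22 = 0.900, 1.150, 1.93333      # pooled within-cells covariances
det = s11 * s22 - s12 * s12
sinv = [[s22 / det, -s12 / det],
        [-s12 / det, s11 / det]]           # S^-1 by the cofactor method

d = [2.8 - 4.6, 2.4 - 4.6]                 # mean vector difference x2 - x3
quad = (sinv[0][0] * d[0] ** 2 + 2 * sinv[0][1] * d[0] * d[1]
        + sinv[1][1] * d[1] ** 2)          # quadratic form d' S^-1 d

n, ne, p = 5, 12, 2
t2 = n * quad / 2                          # Equation 6: about 9.054
hotellings = t2 / ne                       # divide by error df: about .7545
f = (ne - p + 1) / (ne * p) * t2           # Equation 5: about 4.1497

print(round(t2, 3), round(hotellings, 4), round(f, 3))
```

The small discrepancies in the third or fourth decimal against the printout come only from rounding in the printed covariances.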

The control lines for running these contrasts on SPSS MANOVA are presented in Table 5.11. (In this case we have just put in some data schematically and have used column input, simply to illustrate it.) As indicated earlier, note that the first four numbers in the CONTRAST subcommand are 1s, corresponding to the number of groups. The next four numbers define the first contrast, where we are comparing the control groups against the treatment groups. The following four numbers define the second contrast, and the last four numbers define the third contrast.


Table 5.11  SPSS MANOVA Control Lines for Special Multivariate Contrasts

TITLE 'SPECIAL MULTIVARIATE CONTRASTS'.
DATA LIST FREE/gps 1 y1 3-4 y2 6-7(1) y3 9-11(2)
  y4 13-15 y5 17-18.
BEGIN DATA.
1 28 13 476 215 74
. . . . . .
4 24 31 668 355 56
END DATA.
LIST.
MANOVA y1 TO y5 BY gps(1, 4)
 /CONTRAST(gps) = SPECIAL (1 1 1 1  -.5 -.5 .5 .5
   0 1 -.5 -.5  0 0 1 -1)
 /PARTITION(gps)
 /DESIGN = gps(1), gps(2), gps(3)
 /PRINT = CELLINFO(MEAN, COV, COR).

5.10  CORRELATED CONTRASTS

The Helmert contrasts we considered in Example 5.3 are, for equal n, uncorrelated. This is important in terms of clarity of interpretation, because significance on one Helmert contrast implies nothing about significance on a different Helmert contrast. For correlated contrasts this is not true. To determine the unique contribution a given contrast is making, we need to partial out its correlations with the other contrasts. We illustrate how this is done on MANOVA.

Correlated contrasts can arise in two ways: (1) the sum of products of the coefficients ≠ 0 for the contrasts, or (2) the sum of products of coefficients = 0, but the group sizes are not equal.
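Both conditions are easy to check numerically, since the covariance between two contrast estimates is proportional to Σᵢ c1ᵢ c2ᵢ/nᵢ (a sketch with hypothetical coefficient vectors and group sizes, not the Example 5.5 values):

```python
# Covariance (up to sigma^2) between two contrast estimates:
# sum of c1_i * c2_i / n_i over the groups.
def contrast_cov(c1, c2, n):
    return sum(a * b / m for a, b, m in zip(c1, c2, n))

# (1) Sum of products of coefficients != 0: correlated for ANY group sizes.
c1, c2 = [1, -1, 0, 0], [1, 0, -1, 0]
print(sum(a * b for a, b in zip(c1, c2)))                 # 1 (nonzero)

# (2) Sum of products = 0, yet unequal n can still leave the contrast
#     estimates correlated.
c3, c4 = [1, 1, -1, -1], [1, -1, 1, -1]
print(sum(a * b for a, b in zip(c3, c4)))                 # 0
print(round(contrast_cov(c3, c4, [10, 20, 20, 10]), 3))   # 0.1 (nonzero)
```

With equal n the weighted sum is just the ordinary sum of products divided by n, which is why equal-n Helmert contrasts are uncorrelated.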

Example 5.5: Correlated Contrasts

We consider an example with four groups and two dependent variables. The contrasts are indicated schematically here, with the group sizes in parentheses:

       T1 & T2 (12)   Hawthorne (14)
       combined       control          T1 (11)   T2 (8)
ψ1         0              1              −1         0
ψ2         0              1              −.5       −.5
ψ3         1              0               0        −1

Notice that ψ1 and ψ2, as well as ψ2 and ψ3, are correlated because the sum of products of coefficients in each case ≠ 0. However, ψ1 and ψ3 are also correlated since group sizes are unequal. The data for this problem are given next.

GP1            GP2            GP3            GP4
y1    y2       y1    y2       y1    y2       y1    y2
18     5       18     9       17     5       13     3
13     6       20     5       22     7        9     3
20     4       17    10       22     5        9     3
22     8       24     4       13     9       15     5
21     9       19     4       13     5       13     4
19     0       18     4       11     5       12     4
12     6       15     7       12     6       13     5
10     5       16     7       23     3       12     3
15     4       16     5       17     7
15     5       14     3       18     7
14     0       18     2       13     3
12     6       14     4
               19     6
               23     2

1. We used the default method (UNIQUE SUM OF SQUARES, as of Release 2.1). This gives the unique contribution of the contrast to between-group variation; that is, each contrast is adjusted for its correlations with the other contrasts.

2. We used the SEQUENTIAL sum of squares option. This is obtained by putting the following subcommand right after the MANOVA statement:

METHOD = SEQUENTIAL/

With this option each contrast is adjusted only for all contrasts to the left of it in the DESIGN subcommand. Thus, if our DESIGN subcommand is

DESIGN = gps(1), gps(2), gps(3)/

then the last contrast, denoted by gps(3), is adjusted for all other contrasts, and the value of the multivariate test statistics for gps(3) will be the same as we obtained for the default method (unique sum of squares). However, the values of the test statistics for gps(2) and gps(1) will differ from those obtained using the unique sum of squares, since gps(2) is adjusted only for gps(1), and gps(1) is not adjusted for either of the other two contrasts.

The multivariate test statistics for the contrasts using the unique decomposition are presented in Table 5.12, whereas the statistics for the hierarchical decomposition are given in Table 5.13. As explained earlier, the results for ψ3 are identical for both approaches, and indicate significance at the .05 level (F = 3.499, p = .040). That is, the combination of treatments differs from T2 alone. The results for the other two contrasts, however, are quite different for the two approaches. The unique breakdown indicates that ψ2 is significant at .05 (the treatments differ from the Hawthorne control) and ψ1 is not significant (T1 is not different from the Hawthorne control). The results in Table 5.13 for the hierarchical approach yield a different conclusion for ψ2. Obviously, the conclusions one draws in this study would depend on which approach was used to test the contrasts for significance. We express a preference in general for the unique approach.

It should be noted that the unique contribution of each contrast can be obtained using the hierarchical approach; however, in this case three DESIGN

Table 5.12  Multivariate Tests for Unique Contribution of Each Correlated Contrast to Between Variation*

EFFECT.. gps (3)
Multivariate Tests of Significance (S = 1, M = 0, N = 19)

Test Name     Value     Exact F    Hypoth. DF   Error DF   Sig. of F
Pillais       .14891    3.49930    2.00         40.00      .040
Hotellings    .17496    3.49930    2.00         40.00      .040
Wilks         .85109    3.49930    2.00         40.00      .040
Roys          .14891
Note.. F statistics are exact.

EFFECT.. gps (2)
Multivariate Tests of Significance (S = 1, M = 0, N = 19)

Test Name     Value     Exact F    Hypoth. DF   Error DF   Sig. of F
Pillais       .18228    4.45832    2.00         40.00      .018
Hotellings    .22292    4.45832    2.00         40.00      .018
Wilks         .81772    4.45832    2.00         40.00      .018
Roys          .18228
Note.. F statistics are exact.

EFFECT.. gps (1)
Multivariate Tests of Significance (S = 1, M = 0, N = 19)

Test Name     Value     Exact F    Hypoth. DF   Error DF   Sig. of F
Pillais       .03233     .66813    2.00         40.00      .518
Hotellings    .03341     .66813    2.00         40.00      .518
Wilks         .96767     .66813    2.00         40.00      .518
Roys          .03233
Note.. F statistics are exact.

* Each contrast is adjusted for its correlations with the other contrasts.


Table 5.13  Multivariate Tests of Correlated Contrasts for Hierarchical Option of SPSS MANOVA

EFFECT.. gps (3)
Multivariate Tests of Significance (S = 1, M = 0, N = 19)

Test Name     Value     Exact F    Hypoth. DF   Error DF   Sig. of F
Pillais       .14891    3.49930    2.00         40.00      .040
Hotellings    .17496    3.49930    2.00         40.00      .040
Wilks         .85109    3.49930    2.00         40.00      .040
Roys          .14891
Note.. F statistics are exact.

EFFECT.. gps (2)
Multivariate Tests of Significance (S = 1, M = 0, N = 19)

Test Name     Value     Exact F    Hypoth. DF   Error DF   Sig. of F
Pillais       .10542    2.35677    2.00         40.00      .108
Hotellings    .11784    2.35677    2.00         40.00      .108
Wilks         .89458    2.35677    2.00         40.00      .108
Roys          .10542
Note.. F statistics are exact.

EFFECT.. gps (1)
Multivariate Tests of Significance (S = 1, M = 0, N = 19)

Test Name     Value     Exact F    Hypoth. DF   Error DF   Sig. of F
Pillais       .13641    3.15905    2.00         40.00      .053
Hotellings    .15795    3.15905    2.00         40.00      .053
Wilks         .86359    3.15905    2.00         40.00      .053
Roys          .13641
Note.. F statistics are exact.

Note: Each contrast is adjusted only for all contrasts to the left of it in the DESIGN subcommand.

subcommands would be required, with each of the contrasts ordered last in one of the subcommands:

DESIGN = gps(1), gps(2), gps(3)/
DESIGN = gps(2), gps(3), gps(1)/
DESIGN = gps(3), gps(1), gps(2)/

All three orderings can be done in a single run.


5.11  STUDIES USING MULTIVARIATE PLANNED COMPARISONS

Clifford (1972) was interested in the effect of competition as a motivational technique in the classroom. The participants were fifth graders, with the group about evenly divided between girls and boys. A 2-week vocabulary learning task was given under three conditions:

1. Control—a noncompetitive atmosphere in which no score comparisons among classmates were made.
2. Reward Treatment—comparisons among relatively homogeneous participants were made and accentuated by the rewarding of candy to high-scoring participants.
3. Game Treatment—again, comparisons were made among relatively homogeneous participants and accentuated in a follow-up game activity. Here high-scoring participants received an advantage in a game that was played immediately after the vocabulary task was scored.

The three dependent variables were performance, interest, and retention. The retention measure was given 2 weeks after the completion of treatments. Clifford had the following two planned comparisons:

1. Competition is more effective than noncompetition. Thus, she was testing the following contrast for significance:

Ψ1 = (µ2 + µ3)/2 − µ1

2. Game competition is as effective as reward with respect to performance on the dependent variables. Thus, she was predicting the following contrast would not be significant:

Ψ2 = µ2 − µ3

Clifford's results are presented in Table 5.14. As predicted, competition was more effective than noncompetition for the set of three dependent variables. Examination of the univariate results in Table 5.14 shows that the groups differed only on the interest variable. Clifford's second prediction was also confirmed: there was no difference in the relative effectiveness of the reward versus game treatments (F = .84, p = .47).


Table 5.14  Means and Multivariate and Univariate Results for Two Planned Comparisons in Clifford Study

                         df      MS      F       P
1st planned comparison (control vs. reward and game)
  Multivariate test      3/61           10.04    .0001
  Univariate tests
    Performance          1/63    .54      .64    .43
    Interest             1/63   4.70    29.24    .0001
    Retention            1/63   4.01      .18    .67
2nd planned comparison (reward vs. game)
  Multivariate test      3/61             .84    .47
  Univariate tests
    Performance          1/63   .002     .003    .96
    Interest             1/63    .37     2.32    .13
    Retention            1/63   1.47      .07    .80

Means for the groups

Variable       Control   Reward   Games
Performance      5.72     5.92     5.90
Interest         2.41     2.63     2.57
Retention       30.85    31.55    31.19

1. Group 1 (parents' education eighth grade or less) versus group 2 (parents both high school graduates).
2. Groups 1 and 2 (no college) versus groups 3 and 4 (college for both parents).
3. Group 3 (both parents attended college) versus group 4 (both parents at least one college degree).

This set of comparisons corresponds to a very meaningful set of questions: Are differences in children's personality characteristics related to differences in parental degree of education?

Another set of orthogonal contrasts that could have been of interest in this study looks like this schematically:

        Groups
        1      2      3      4
ψ1      1    −.33   −.33   −.33
ψ2      0      0      1     −1
ψ3      0      1    −.50   −.50

This would have resulted in a different meaningful, additive breakdown of the between association. However, one set of orthogonal contrasts does not have an empirical superiority over another (after all, they both additively partition the between association). In terms of choosing one set over the other, it is a matter of which set best answers your research hypotheses.
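With 75 participants per group (equal n), orthogonality reduces to a zero sum of coefficient products for each pair, which can be verified directly (a sketch; the rounded −.33 entries are written as −1/3):

```python
# Pairwise sums of coefficient products for the alternative contrast set.
psi1 = [1, -1/3, -1/3, -1/3]     # group 1 vs. the remaining groups
psi2 = [0, 0, 1, -1]             # group 3 vs. group 4
psi3 = [0, 1, -0.5, -0.5]        # group 2 vs. groups 3 and 4

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

for a, b in [(psi1, psi2), (psi1, psi3), (psi2, psi3)]:
    print(abs(dot(a, b)) < 1e-12)     # True for every pair: orthogonal
```

The same check applied to the first (Stevens) set of comparisons would likewise return zero for every pair.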


5.12  OTHER MULTIVARIATE TEST STATISTICS

In addition to Wilks' Λ, three other multivariate test statistics are in use and are printed out by the packages:

1. Roy's largest root (eigenvalue) of BW⁻¹.
2. The Hotelling–Lawley trace, the sum of the eigenvalues of BW⁻¹.
3. The Pillai–Bartlett trace, the sum of the eigenvalues of BT⁻¹.

Notice that the Roy and Hotelling–Lawley multivariate statistics are natural generalizations of the univariate F statistic. In univariate ANOVA the test statistic is F = MSb/MSw, a measure of between- to within-group association. The multivariate analogue of this is BW⁻¹, which is a "ratio" of between- to within-group association. With matrices there is no division, so we don't literally divide the between by the within as in the univariate case; however, the matrix analogue of division is inversion.

Because Wilks' Λ can be expressed as a product of eigenvalues of WT⁻¹, we see that all four of the multivariate test statistics are some function (sum or product) of eigenvalues. Thus, eigenvalues are fundamental to the multivariate problem. We will show in Chapter 10 on discriminant analysis that there are quantities corresponding to the eigenvalues (the discriminant functions) that are linear combinations of the dependent variables and that characterize major differences among the groups.
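The connection can be made concrete with the single nonzero eigenvalue λ = .78244 of BW⁻¹ from Table 5.9 (a sketch; note that SPSS MANOVA reports Roy's statistic in the form λ/(1 + λ)):

```python
# The four multivariate test statistics as functions of the eigenvalues
# of B W^-1. With one nonzero root (s = 1), all four are simple functions
# of that single root.
eigs = [0.78244]                 # eigenvalue(s) of B W^-1, from Table 5.9

hotelling = sum(eigs)                                   # .78244
pillai = sum(lam / (1 + lam) for lam in eigs)           # .43897
wilks = 1.0
for lam in eigs:
    wilks *= 1 / (1 + lam)                              # .56103
roy = max(eigs) / (1 + max(eigs))                       # .43897, as printed

print(round(hotelling, 5), round(pillai, 5),
      round(wilks, 5), round(roy, 5))
```

These reproduce all four Table 5.9 entries, which also makes plain why the four tests coincide whenever there is only one nonzero discriminant dimension.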

You might well ask at this point, "Which of these four multivariate test statistics should be used in practice?" This is a somewhat complicated question that, for full understanding, requires a knowledge of discriminant analysis and of the robustness of the four statistics to the assumptions in MANOVA. Nevertheless, the following will provide guidelines for the researcher. In terms of robustness with respect to type I error for the homogeneity of covariance matrices assumption, Stevens (1979) found that any of the following three can be used: Pillai–Bartlett trace, Hotelling–Lawley trace, or Wilks' Λ. For subgroup variance differences likely to be encountered in social science research, these three are equally quite robust, provided the group sizes are equal or approximately equal (largest/smallest < 1.5). In terms of power, no one of the four statistics is always most powerful; which is most powerful depends on how the null hypothesis is false. Importantly, however, Olson (1973) found that power differences among the four multivariate test statistics are generally quite small (< .06). So as a general rule, it won't make that much of a difference which of the statistics is used. But if the differences among the groups are concentrated on the first discriminant function, which does occur in practice, then Roy's statistic technically would be preferred, since it is most powerful. However, Roy's statistic should be used in this case only if there is evidence to suggest that the homogeneity of covariance matrices assumption is tenable. Finally, when the differences among the groups involve two or more discriminant functions, the Pillai–Bartlett trace is most powerful, although its power advantage tends to be slight.


5.13  HOW MANY DEPENDENT VARIABLES FOR A MANOVA?

Of course, there is no simple answer to this question. However, the following considerations generally militate against the use of a large number of criterion variables:

1. If a large number of dependent variables are included without any strong rationale (empirical or theoretical), then small or negligible differences on most of them may obscure a real difference(s) on a few of them. That is, the multivariate test detects mainly error in the system (that is, in the set of variables) and therefore declares no reliable overall difference.
2. The power of the multivariate tests generally declines as the number of dependent variables is increased (DasGupta & Perlman, 1974).
3. The reliability of variables can be a problem in behavioral science work. Thus, given a large number of criterion variables, it probably will be wise to combine (usually add) highly similar response measures, particularly when the basic measurements tend individually to be quite unreliable (Pruzek, 1971). As Pruzek stated, one should always consider the possibility that his variables include errors of measurement that may attenuate F ratios and generally confound interpretations of experimental effects. Especially when there are several dependent variables whose reliabilities and mutual intercorrelations vary widely, inferences based on fallible data may be quite misleading (Pruzek, 1971, p. 187).
4. Based on his Monte Carlo results, Olson had some comments on the design of multivariate experiments that are worth remembering: For example, one generally will not do worse by making the dimensionality p smaller, insofar as it is under experimenter control. Variates should not be thoughtlessly included in an analysis just because the data are available. Besides aiding robustness, a small value of p is apt to facilitate interpretation (Olson, 1973, p. 906).
5. Given a large number of variables, one should always consider the possibility that there is a much smaller number of underlying constructs that will account for most of the variance on the original set of variables. Thus, the use of exploratory factor analysis as a preliminary data reduction scheme before the use of MANOVA should be contemplated.

5.14  POWER ANALYSIS—A PRIORI DETERMINATION OF SAMPLE SIZE

Several studies have dealt with power in MANOVA (e.g., Ito, 1962; Lauter, 1978; Olson, 1974; Pillai & Jayachandran, 1967). Olson examined power for small and moderate sample size, but expressed the noncentrality parameter (which measures the extent of deviation from the null hypothesis) in terms of eigenvalues. Also, there were many gaps in his tables: no power values for 4, 5, 7, 8, and 9 variables or 4 or 5 groups. The Lauter study is much more comprehensive, giving sample size tables for a very wide range of situations:

1. For α = .05 or .01.
2. For 2, 3, 4, 5, 6, 8, 10, 15, 20, 30, 50, and 100 variables.


3. For 2, 3, 4, 5, 6, 8, and 10 groups.

4. For power = .70, .80, .90, and .95.

His tables are specifically for the Hotelling–Lawley trace criterion, and this might seem to limit their utility. However, as Morrison (1967) noted for large sample size, and as Olson (1974) showed for small and moderate sample size, the power differences among the four main multivariate test statistics are generally quite small. Thus, the sample size requirements for Wilks' Λ, the Pillai–Bartlett trace, and Roy's largest root will be very similar to those for the Hotelling–Lawley trace for the vast majority of situations.

Lauter’s tables are set up in terms of a certain minimum deviation from the multivariate

null hypothesis, which can be expressed in the following three forms:

j

1

µ ij − µ i ≥ q 2 , where μi is the total

1. There exists a variable i such that 2

σ j =1 j =1

mean and σ2 is variance.

∑(

)

2. There exists a variable i such that 1 / σ i µ ij1 − µ ij 2 ≥ d for two groups j1 and j2.

3. There exists a variable i such that for all pairs of groups 1 and m we have

1 / σ i µ il − µ il > c.

In Table A.5 of Appendix A of this text we present selected situations and power values that it is believed would be of most value to social science researchers: for 2, 3, 4, 5, 6, 8, 10, and 15 variables, with 3, 4, 5, and 6 groups, and for power = .70, .80, and .90. We have also characterized the four different minimum deviation patterns as very large, large, moderate, and small effect sizes. Although the characterizations may be somewhat rough, they are reasonable in the following senses: They agree with Cohen's definitions of large, medium, and small effect sizes for one variable (Lauter included the univariate case in his tables), and with Stevens' (1980) definitions of large, medium, and small effect sizes for the two-group MANOVA case.

It is important to note that there could be several ways, other than that specified by Lauter, in which a large, moderate, or small multivariate effect size could occur. But the essential point is how many participants will be needed for a given effect size, regardless of the combination of differences on the variables that produced the specific effect size. Thus, the tables do have broad applicability. We consider shortly a few specific examples of the use of the tables, but first we present a compact table that should be of great interest to applied researchers:

                            Groups
Effect size        3         4         5         6
Very large       12–16     14–18     15–19     16–21
Large            25–32     28–36     31–40     33–44
Medium           42–54     48–62     54–70     58–76
Small            92–120   105–140   120–155   130–170


This table gives the range of sample sizes needed per group for adequate power (.70) at α = .05 when there are from three to six variables.

Thus, if we expect a large effect size and have four groups, 28 participants per group are needed for power = .70 with three variables, whereas 36 participants per group are required if there were six dependent variables.
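For quick reference, the compact table can be carried as a lookup (a sketch; each entry is a (three variables, six variables) pair of per-group sample sizes for power .70 at α = .05):

```python
# Per-group sample size ranges from the compact table above: the low end
# applies with three dependent variables, the high end with six.
n_per_group = {
    ("very large", 3): (12, 16), ("very large", 4): (14, 18),
    ("very large", 5): (15, 19), ("very large", 6): (16, 21),
    ("large", 3): (25, 32),      ("large", 4): (28, 36),
    ("large", 5): (31, 40),      ("large", 6): (33, 44),
    ("medium", 3): (42, 54),     ("medium", 4): (48, 62),
    ("medium", 5): (54, 70),     ("medium", 6): (58, 76),
    ("small", 3): (92, 120),     ("small", 4): (105, 140),
    ("small", 5): (120, 155),    ("small", 6): (130, 170),
}

# Large effect size with four groups: 28 per group with three variables,
# 36 per group with six.
print(n_per_group[("large", 4)])     # (28, 36)
```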

Now we consider two examples to illustrate the use of the Lauter sample size tables

in the appendix.

Example 5.6

An investigator has a four-group MANOVA with five dependent variables. He wishes power = .80 at α = .05. From previous research and his knowledge of the nature of the treatments, he anticipates a moderate effect size. How many participants per group will he need? Reference to Table A.5 (for four groups) indicates that 70 participants per group are required.

Example 5.7

A team of researchers has a five-group, seven-dependent-variable MANOVA. They wish power = .70 at α = .05. From previous research they anticipate a large effect size. How many participants per group are needed? Interpolating in Table A.5 (for five groups) between six and eight variables, we see that 43 participants per group are needed, or a total of 215 participants.

5.15  SUMMARY

Cohen's (1968) seminal article showed social science researchers that univariate ANOVA could be considered a special case of regression, by dummy-coding group membership. In this chapter we have pointed out that MANOVA can also be considered a special case of regression analysis, except that for MANOVA it is multivariate regression, because there are several dependent variables being predicted from the dummy variables. That is, separation of the mean vectors is equivalent to demonstrating that the dummy variables (predictors) significantly predict the scores on the dependent variables.

For exploratory research where the focus is on individual dependent variables (and not linear combinations of these variables), two post hoc procedures were given for examining group differences for the outcome variables. Each procedure followed up a significant multivariate test result with univariate ANOVAs for each outcome. If an F test were significant for a given outcome and more than two groups were present, pairwise comparisons were conducted using the Tukey procedure. The two procedures differ in that one procedure used a Bonferroni-adjusted alpha for the univariate F tests and pairwise comparisons while the other did not. Of the two procedures, the more widely recommended procedure is to use the Bonferroni-adjusted alpha for the univariate ANOVAs and the Tukey procedure, as this procedure provides for greater control of the overall type I error rate and a more accurate set of confidence intervals


(in terms of coverage). The procedure that uses no such alpha adjustment should be considered only when the number of outcomes and groups is small (i.e., two or three).

For confirmatory research, planned comparisons were discussed. The setup of multivariate contrasts on SPSS MANOVA was illustrated. Although uncorrelated contrasts

are desirable because of ease of interpretation and the nice additive partitioning they

yield, it was noted that often the important questions an investigator has will yield

correlated contrasts. The use of SPSS MANOVA to obtain the unique contribution of

each correlated contrast was illustrated.

It was noted that the Roy and Hotelling–Lawley statistics are natural generalizations of

the univariate F ratio. In terms of which of the four multivariate test statistics to use in

practice, two criteria can be used: robustness and power. Wilks’ Λ, the Pillai–Bartlett

trace, and Hotelling–Lawley statistics are equally robust (for equal or approximately

equal group sizes) with respect to the homogeneity of covariance matrices assumption,

and therefore any one of them can be used. The power differences among the four statistics are in general quite small (< .06), so that there is no strong basis for preferring

any one of them over the others on power considerations.

The important problem, in terms of experimental planning, of a priori determination

of sample size was considered for three-, four-, five-, and six-group MANOVA for the

number of dependent variables ranging from 2 to 15.

5.16 EXERCISES

1. Consider the following data for a three-group, three-dependent-variable

problem:

Group 1             Group 2             Group 3
y1    y2    y3      y1    y2    y3      y1    y2    y3
2.0   1.0   1.0     3.5   4.0   2.5     2.0   1.0   1.5
1.5   2.5   2.5     1.0   3.5   3.0     2.0   2.5   1.0
2.0   2.0   2.5     2.0   4.5   4.5     2.5   3.0   2.0
2.5   3.0   1.5     1.5   3.0   3.5     2.0   3.0   2.5
1.0   4.0   2.5     1.0   4.5   3.0     1.0   2.5   2.5
1.5   2.0   3.0     3.0   4.5   3.5     1.0   1.0   2.5
4.0   3.5   1.0     4.5   4.0   5.0     2.0   1.5   1.0
3.0   3.0   2.5     1.5   4.0   1.0     2.0   3.5   1.5
3.5   4.0   3.0     2.5   5.0   1.0     2.0   1.0   2.5
1.0   3.5   3.5     3.0   2.5   1.5


Use SAS or SPSS to run a one-way MANOVA. Use procedure 1 (with the Bonferroni-adjusted F tests) to do the follow-up tests.

(a) What is the multivariate null hypothesis? Do you reject it at α = .05?
(b) If you reject in part (a), then for which outcomes are there group differences at the .05 level?
(c) For any ANOVAs that are significant, use the post hoc tests to describe group differences. Be sure to rank order group performance based on the statistical test results.

2. Consider the following data from Wilkinson (1975):

Group A: 5 6 6 4 5 6 7 7 5 4
Group B: 4 5 3 5 2 2 3 4 3 2 2 3 4 2 1
Group C: 7 5 6 4 4 4 6 3 5 5 3 7 3 5 5 4 5 5 5 4

Run a one-way MANOVA on SAS or SPSS. Do the various multivariate test

statistics agree in a decision on H0?

3. This table shows analysis results for 12 separate ANOVAs. The researchers

were examining differences among three groups for outpatient therapy, using

symptoms reported on the Symptom Checklist 90–Revised.

SCL 90–R Group Main Effects

                                  Group 1   Group 2   Group 3
                                  N = 48    N = 60    N = 57
Dimension                         x̄         x̄         x̄        F      df      Significance
Somatization                      53.7      53.2      53.7     .03    2,141   ns
Obsessive-compulsive              48.7      53.9      52.2    2.75    2,141   ns
Interpersonal sensitivity         47.3      51.3      52.9    4.84    2,141   p < .01
Depression                        47.5      53.5      53.9    5.44    2,141   p < .01
Anxiety                           48.5      52.9      52.2    1.86    2,141   ns
Hostility                         48.1      54.6      52.4    3.82    2,141   p < .03
Phobic anxiety                    49.8      54.2      51.8    2.08    2,141   ns
Paranoid ideation                 51.4      54.7      54.0    1.38    2,141   ns
Psychoticism                      52.4      54.6      54.2     .37    2,141   ns
Global Severity Index             49.7      54.4      54.0    2.55    2,141   ns
Positive Symptom Distress Index   49.3      55.8      53.2    3.39    2,141   p < .04
Positive Symptom Total            50.2      52.9      54.4    1.96    2,141   ns

(a) Could we be confident that these results would replicate? Explain.

(b) In this study, the authors did not a priori hypothesize differences on the

specific variables for which significance was found. Given that, what would

have been a better method of analysis?

4. A researcher is testing the efficacy of four drugs in inhibiting undesirable responses in patients. Drugs A and B are similar in composition, whereas drugs C and D are distinctly different in composition from A and B, although similar in their basic ingredients. He takes 100 patients and randomly assigns them to five groups: Gp 1—control, Gp 2—drug A, Gp 3—drug B, Gp 4—drug C, and Gp 5—drug D. The following would be four very relevant planned comparisons to test:

            Contrasts
            1      2      3      4
Control     1      0      0      0
Drug A    −.25     1      1      0
Drug B    −.25     1     −1      0
Drug C    −.25    −1      0      1
Drug D    −.25    −1      0     −1

(a) Show that these contrasts are orthogonal.

Now, consider the following set of contrasts, which might also be of interest in the preceding study:

Contrasts

            Control   Drug A   Drug B   Drug C   Drug D
1              1       −.25     −.25     −.25     −.25
2              1       −.5      −.5       0        0
3              1        0        0       −.5      −.5
4              0        1        1       −1       −1

(b) Show that these contrasts are not orthogonal.

(c) Because neither of these two sets of contrasts is one of the standard sets

that come out of SPSS MANOVA, it would be necessary to use the special

contrast feature to test each set. Show the control lines for doing this for

each set. Assume four criterion measures.
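The checks asked for in parts (a) and (b) can be verified numerically: a set of contrasts is mutually orthogonal when every pairwise dot product of the coefficient vectors is zero. The sketch below is our own illustration (not part of the exercise), with the coefficient rows reconstructed from the two tables above.

```python
import numpy as np

# Columns: Control, Drug A, Drug B, Drug C, Drug D
set1 = np.array([
    [1, -0.25, -0.25, -0.25, -0.25],  # control vs. average of the four drugs
    [0,  1.00,  1.00, -1.00, -1.00],  # drugs A, B vs. drugs C, D
    [0,  1.00, -1.00,  0.00,  0.00],  # drug A vs. drug B
    [0,  0.00,  0.00,  1.00, -1.00],  # drug C vs. drug D
])
set2 = np.array([
    [1, -0.25, -0.25, -0.25, -0.25],  # control vs. average of the four drugs
    [1, -0.50, -0.50,  0.00,  0.00],  # control vs. drugs A, B
    [1,  0.00,  0.00, -0.50, -0.50],  # control vs. drugs C, D
    [0,  1.00,  1.00, -1.00, -1.00],  # drugs A, B vs. drugs C, D
])

def mutually_orthogonal(contrasts):
    """True if every pair of contrast rows has a zero dot product."""
    gram = contrasts @ contrasts.T          # all pairwise dot products
    off_diagonal = gram - np.diag(np.diag(gram))
    return np.allclose(off_diagonal, 0)

print(mutually_orthogonal(set1))  # True
print(mutually_orthogonal(set2))  # False
```

For example, in the second set the dot product of contrasts 2 and 3 is (1)(1) = 1, which alone breaks orthogonality.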


5. Find an article in one of the better journals in your content area from within the

last 5 years that used primarily MANOVA. Answer the following questions:

(a) How many statistical tests (univariate or multivariate or both) were done?

Were the authors aware of this, and did they adjust in any way?

(b) Was power an issue in this study? Explain.

(c) Did the authors address practical importance in ANY way? Explain.

REFERENCES

Clifford, M. M. (1972). Effects of competition as a motivational technique in the classroom. American Educational Research Journal, 9, 123–134.

Cohen, J. (1968). Multiple regression as a general data-analytic system. Psychological Bulletin, 70, 426–443.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

DasGupta, S., & Perlman, M. D. (1974). Power of the noncentral F-test: Effect of additional variates on Hotelling's T²-test. Journal of the American Statistical Association, 69, 174–180.

Dunnett, C. W. (1980). Pairwise multiple comparisons in the homogeneous variance, unequal sample size cases. Journal of the American Statistical Association, 75, 789–795.

Hays, W. L. (1981). Statistics (3rd ed.). New York, NY: Holt, Rinehart & Winston.

Ito, K. (1962). A comparison of the powers of two MANOVA tests. Biometrika, 49, 455–462.

Johnson, N., & Wichern, D. (1982). Applied multivariate statistical analysis. Englewood Cliffs, NJ: Prentice Hall.

Keppel, G., & Wickens, T. D. (2004). Design and analysis: A researcher's handbook (4th ed.). Upper Saddle River, NJ: Prentice Hall.

Keselman, H. J., Murray, R., & Rogan, J. (1976). Effect of very unequal group sizes on Tukey's multiple comparison test. Educational and Psychological Measurement, 36, 263–270.

Lauter, J. (1978). Sample size requirements for the T² test of MANOVA (tables for one-way classification). Biometrical Journal, 20, 389–406.

Levin, J. R., Serlin, R. C., & Seaman, M. A. (1994). A controlled, powerful multiple-comparison strategy for several situations. Psychological Bulletin, 115, 153–159.

Lohnes, P. R. (1961). Test space and discriminant space classification models and related significance tests. Educational and Psychological Measurement, 21, 559–574.

Morrison, D. F. (1967). Multivariate statistical methods. New York, NY: McGraw-Hill.

Novince, L. (1977). The contribution of cognitive restructuring to the effectiveness of behavior rehearsal in modifying social inhibition in females. Unpublished doctoral dissertation, University of Cincinnati, OH.

Olson, C. L. (1973). A Monte Carlo investigation of the robustness of multivariate analysis of variance. Dissertation Abstracts International, 35, 6106B.

Olson, C. L. (1974). Comparative robustness of six tests in multivariate analysis of variance. Journal of the American Statistical Association, 69, 894–908.

Pillai, K., & Jayachandran, K. (1967). Power comparisons of tests of two multivariate hypotheses based on four criteria. Biometrika, 54, 195–210.

Pruzek, R. M. (1971). Methods and problems in the analysis of multivariate data. Review of Educational Research, 41, 163–190.

Stevens, J. P. (1972). Four methods of analyzing between variation for the k-group MANOVA problem. Multivariate Behavioral Research, 7, 499–522.

Stevens, J. P. (1979). Comment on Olson: Choosing a test statistic in multivariate analysis of variance. Psychological Bulletin, 86, 355–360.

Stevens, J. P. (1980). Power of the multivariate analysis of variance tests. Psychological Bulletin, 88, 728–737.

Tatsuoka, M. M. (1971). Multivariate analysis: Techniques for educational and psychological research. New York, NY: Wiley.

Wilkinson, L. (1975). Response variable hypotheses in the multivariate analysis of variance. Psychological Bulletin, 82, 408–412.

Chapter 6

ASSUMPTIONS IN MANOVA

6.1 INTRODUCTION

You may recall that one of the assumptions in analysis of variance is normality; that is, the scores for the subjects in each group are normally distributed. Why should we be interested in studying assumptions in ANOVA and MANOVA? Because, in ANOVA and MANOVA, we set up a mathematical model based on these assumptions, and all mathematical models are approximations to reality. Therefore, violations of the assumptions are inevitable. The salient question becomes: How radically must a given assumption be violated before it has a serious effect on type I and type II error rates? Thus, we may set our α = .05 and think we are rejecting falsely 5% of the time, but if a given assumption is violated, we may be rejecting falsely 10% of the time, or if another assumption is violated, 40% of the time. For these kinds of situations, we would certainly want to be able to detect such violations and take some corrective action. Not all violations of assumptions are serious, however, and hence it is crucial to know which assumptions to be particularly concerned about, and under what conditions.

In this chapter, we consider in detail what effect violating assumptions has on type I error and power. There has been plenty of research on violations of assumptions in ANOVA, and a fair amount for MANOVA, on which to base our conclusions. First, we remind you of some basic terminology needed to discuss the results of simulation (i.e., Monte Carlo) studies, whether univariate or multivariate. The nominal α (level of significance) is the α level set by the experimenter, and is the proportion of time one is rejecting falsely when all assumptions are met. The actual α is the proportion of time one is rejecting falsely if one or more of the assumptions is violated. We say the F statistic is robust when the actual α is very close to the nominal α. For example, the actual αs for some very skewed (nonnormal) populations may be only .055 or .06, very minor deviations from the level of significance of .05.


6.2 ANOVA AND MANOVA ASSUMPTIONS

The three statistical assumptions for univariate ANOVA are:

1. The observations are independent. (violation very serious)

2. The observations are normally distributed on the dependent variable in each group. (robust with respect to type I error; skewness has generally very little effect on power, while platykurtosis attenuates power)

3. The population variances for the groups are equal, often referred to as the homogeneity of variance assumption. (conditionally robust—robust if group sizes are equal or approximately equal, largest/smallest < 1.5)

The assumptions for MANOVA are as follows:

1. The observations are independent. (violation very serious)

2. The observations on the dependent variables follow a multivariate normal distribution in each group. (robust with respect to type I error; no studies on the effect of skewness on power, but platykurtosis attenuates power)

3. The population covariance matrices for the p dependent variables are equal. (conditionally robust—robust if the group sizes are equal or approximately equal, largest/smallest < 1.5)
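The univariate side of assumption 3 can be screened directly. The sketch below uses made-up data and Levene's test, a common choice for testing equal variances (the text's multivariate analogue is the Box test discussed later in the chapter); it also checks the group-size condition just stated.

```python
import numpy as np
from scipy.stats import levene

# Made-up scores for three groups (illustration only).
rng = np.random.default_rng(7)
groups = [rng.normal(loc=m, scale=10, size=30) for m in (50, 52, 49)]

# Screen for the "conditionally robust" condition: largest/smallest n < 1.5.
sizes = [len(g) for g in groups]
balanced = max(sizes) / min(sizes) < 1.5

# Levene's test (H0: the population variances are equal).
stat, p = levene(*groups)
print(balanced, round(stat, 3), round(p, 3))
```

With roughly equal group sizes, a nonsignificant Levene result gives some reassurance that the conditionally robust case applies.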

6.3 INDEPENDENCE ASSUMPTION

Note that independence of observations is an assumption for both ANOVA and

MANOVA. We have listed this assumption first and are emphasizing it for three

reasons:

1. A violation of this assumption is very serious.

2. Dependent observations do occur fairly often in social science research.

3. Some statistics books do not mention this assumption, and in some cases where

they do, misleading statements are made (e.g., that dependent observations occur

only infrequently, that random assignment of subjects to groups will eliminate the

problem, or that this assumption is usually satisfied by using a random sample).

Now let us consider several situations in social science research where dependence

among the observations will be present. Cooperative learning has become very popular

since the early 1980s. In this method, students work in small groups, interacting with

each other and helping each other learn the lesson. In fact, the evaluation of the success

of the group is dependent on the individual success of its members. Many studies have

compared cooperative learning versus individualistic learning. It was once common for such data to be analyzed improperly (Hykle, Stevens, & Markle, 1993). That is,

analyses would be conducted using individual scores while not taking into account the

dependence among the observations. With the increasing use of multilevel modeling,

such analyses are likely not as common.

Teaching methods studies constitute another broad class of situations where dependence of observations is undoubtedly present. For example, a few troublemakers in a

classroom would have a detrimental effect on the achievement of many children in

the classroom. Thus, their posttest achievement would be at least partially dependent

on the disruptive classroom atmosphere. On the other hand, even with a favorable

classroom atmosphere, dependence is introduced, because the achievement of many

of the children will be enhanced by the positive learning situation. Therefore, in either

case (positive or negative classroom atmosphere), the achievement of each child is not

independent of the other children in the classroom.

Another situation in which observations would be dependent is a study comparing

the achievement of students working in pairs at computers versus students working

in groups of three. Here, if Bill and John, say, are working at the same computer, then

obviously Bill’s achievement is partially influenced by John. If individual scores were

to be used in the analysis, clustering effects, due to working at the same computer,

need to be accounted for in the analysis.

Glass and Hopkins (1984) made the following statement concerning situations where

independence may or may not be tenable: “Whenever the treatment is individually

administered, observations are independent. But where treatments involve interaction

among persons, such as discussion method or group counseling, the observations may

influence each other” (p. 353).

6.3.1 Effect of Correlated Observations

We indicated earlier that a violation of the independence of observations assumption is very serious. We now elaborate on this assertion. Just a small amount of dependence among the observations causes the actual α to be several times greater than the level of significance. Dependence among the observations is measured by the intraclass correlation (ICC), where:

ICC = (MSb − MSw) / [MSb + (n − 1)MSw]

MSb and MSw are the numerator and denominator of the F statistic, and n is the number of participants in each group.
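The formula translates directly into code. This is a minimal sketch (the function name is ours):

```python
def intraclass_corr(ms_between, ms_within, n):
    """Intraclass correlation from one-way ANOVA mean squares.

    ms_between and ms_within are the numerator and denominator of the
    F statistic; n is the common group size.
    """
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)

# When MSb equals MSw (F = 1), there is no dependence:
print(intraclass_corr(5.0, 5.0, 10))  # 0.0
```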

Table 6.1, from Scariano and Davenport (1987), shows precisely how dramatic an effect dependence has on type I error. For example, for the three-group case with 10 participants per group and moderate dependence (ICC = .30), the actual α is .54. Also, for three groups with 30 participants per group and small dependence (ICC = .10), the actual α is .49, almost 10 times the level of significance. Notice, also, from the table, that for a fixed value of the intraclass correlation, the situation does not improve with larger sample size, but gets far worse.

Table 6.1: Actual Type I Error Rates for Correlated Observations in a One-Way ANOVA

Number of   Group                             Intraclass correlation
groups      size     .00     .01     .10     .30     .50     .70     .90     .95     .99
 2            3     .0500   .0522   .0740   .1402   .2374   .3819   .6275   .7339   .8800
 2           10     .0500   .0606   .1654   .3729   .5344   .6752   .8282   .8809   .9475
 2           30     .0500   .0848   .3402   .5928   .7205   .8131   .9036   .9335   .9708
 2          100     .0500   .1658   .5716   .7662   .8446   .8976   .9477   .9640   .9842
 3            3     .0500   .0529   .0837   .1866   .3430   .5585   .8367   .9163   .9829
 3           10     .0500   .0641   .2227   .5379   .7397   .8718   .9639   .9826   .9966
 3           30     .0500   .0985   .4917   .7999   .9049   .9573   .9886   .9946   .9990
 3          100     .0500   .2236   .7791   .9333   .9705   .9872   .9966   .9984   .9997
 5            3     .0500   .0540   .0997   .2684   .5149   .7808   .9704   .9923   .9997
 5           10     .0500   .0692   .3151   .7446   .9175   .9798   .9984   .9996  1.0000
 5           30     .0500   .1192   .6908   .9506   .9888   .9977   .9998  1.0000  1.0000
 5          100     .0500   .3147   .9397   .9945   .9989   .9998  1.0000  1.0000  1.0000
10            3     .0500   .0560   .1323   .4396   .7837   .9664   .9997  1.0000  1.0000
10           10     .0500   .0783   .4945   .9439   .9957   .9998  1.0000  1.0000  1.0000
10           30     .0500   .1594   .9119   .9986  1.0000  1.0000  1.0000  1.0000  1.0000
10          100     .0500   .4892   .9978  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000
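This inflation is easy to reproduce by simulation. The sketch below is our own illustration (not from the text): it generates clustered data under a true null hypothesis, inducing dependence through a shared group effect whose variance equals the intraclass correlation, and counts how often the ANOVA F test rejects at the nominal .05 level.

```python
import numpy as np
from scipy.stats import f_oneway

def actual_alpha(k=3, n=10, icc=0.30, reps=2000, alpha=0.05, seed=0):
    """Estimate the actual type I error of one-way ANOVA with clustered data.

    Each group's scores share a random group effect with variance `icc`
    plus individual error with variance 1 - icc, so the intraclass
    correlation equals `icc` while all population means are equal (H0 true).
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        groups = [
            rng.normal(0.0, np.sqrt(icc)) + rng.normal(0.0, np.sqrt(1.0 - icc), n)
            for _ in range(k)
        ]
        rejections += f_oneway(*groups).pvalue < alpha
    return rejections / reps

# Three groups of 10 with ICC = .30: the estimate should land near the
# tabled value of .54 rather than the nominal .05.
print(actual_alpha())
```

Setting `icc=0.0` recovers an estimate near .05, matching the first column of Table 6.1.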

6.4 WHAT SHOULD BE DONE WITH CORRELATED OBSERVATIONS?

Given the results in Table 6.1 for a positive intraclass correlation, one route investigators could take if they suspect that the nature of their study will lead to correlated observations is to test at a more stringent level of significance. For the three- and five-group cases in Table 6.1, with 10 observations per group and intraclass correlation = .10, the error rates are five to six times greater than the assumed level of significance of .05. Thus, for this type of situation, it would be wise to test at α = .01, realizing that the actual error rate will be about .05 or somewhat greater. For the three- and five-group cases in Table 6.1 with 30 observations per group and intraclass correlation = .10, the error rates are about 10 times greater than .05. Here, it would be advisable to either test at .01, realizing that the actual α will be about .10, or test at an even more stringent α level.

If several small groups (counseling, social interaction, etc.) are involved in each treatment, and there are clear reasons to suspect that observations will be correlated within


the groups but uncorrelated across groups, then consider using the group mean as the

unit of analysis. Of course, this will reduce the effective sample size considerably;

however, this will not cause as drastic a drop in power as some have feared. The reason

is that the means are much more stable than individual observations and, hence, the

within-group variability will be far less.

Table 6.2, from Barcikowski (1981), shows that if the effect size is medium or large, then the number of groups needed per treatment for power .80 doesn't have to be that large. For example, at α = .10, intraclass correlation = .10, and medium effect size, 10 groups (of 10 subjects each) are needed per treatment. For power .70 (which we consider adequate) at α = .15, one probably could get by with about six groups of 10 per treatment. This is a rough estimate, because it involves double extrapolation.

A third and much more commonly used method of analysis is one that directly adjusts parameter estimates for the degree of clustering. Multilevel modeling is a procedure that accommodates various forms of clustering. Chapter 13 covers fundamental concepts and applications, while Chapter 14 covers multivariate extensions of this procedure.

Table 6.2: Number of Groups per Treatment Necessary for Power > .80 in a Two-Treatment-Level Design

                         Intraclass correlation = .10    Intraclass correlation = .20
                               Effect size(a)                  Effect size(a)
α level   Group size      .20      .50      .80            .20      .50      .80
.05          10            73       13       6             107       18       8
.05          15            62       11       5              97       17       8
.05          20            56       10       5              92       16       7
.05          25            53       10       5              89       16       7
.05          30            51        9       5              87       15       7
.05          35            49        9       5              86       15       7
.05          40            48        9       5              85       15       7
.10          10            57       10       5              83       14       7
.10          15            48        9       4              76       13       6
.10          20            44        8       4              72       13       6
.10          25            41        8       4              69       12       6
.10          30            39        7       4              68       12       6
.10          35            38        7       4              67       12       5
.10          40            37        7       4              66       12       5

(a) .20 = small effect size; .50 = medium effect size; .80 = large effect size.

Before we leave the topic of correlated observations, we wish to mention an interesting paper by Kenny and Judd (1986), who discussed how nonindependent observations can arise because of several factors, grouping being one of them. The following quote from their paper is important to keep in mind for applied researchers:

Throughout this article we have treated nonindependence as a statistical nuisance, to be avoided because of the bias it introduces. . . . There are, however, many occasions when nonindependence is the substantive problem that we are trying to understand in psychological research. For instance, in developmental psychology, a frequently asked question concerns the development of social interaction. Developmental researchers study the content and rate of vocalization from infants for cues about the onset of interaction. Social interaction implies nonindependence between the vocalizations of interacting individuals. To study interaction developmentally, then, we should be interested in nonindependence not solely as a statistical problem, but also a substantive focus in itself. . . . In social psychology, one of the fundamental questions concerns how individual behavior is modified by group contexts. (p. 431)

6.5 NORMALITY ASSUMPTION

Recall that the second assumption for ANOVA is that the observations are normally

distributed in each group. What are the consequences of violating this assumption? An

excellent early review regarding violations of assumptions in ANOVA was done by

Glass, Peckham, and Sanders (1972). This review concluded that the ANOVA F test is

largely robust to normality violations. In particular, they found that skewness has only

a slight effect (generally only a few hundredths) on the alpha level or power associated

with the F test. The effects of kurtosis on level of significance, although greater, also

tend to be slight.

You may be puzzled as to how this can be. The basic reason is the Central Limit

Theorem, which states that the sum of independent observations having any distribution whatsoever approaches a normal distribution as the number of observations

increases. To be somewhat more specific, Bock (1975) noted, “even for distributions which depart markedly from normality, sums of 50 or more observations approximate to normality. For moderately nonnormal distributions the approximation is good with as few as 10 to 20 observations” (p. 111). Because the sums of independent observations approach normality rapidly, so do the means, and the sampling distribution of F is based on means. Thus, the sampling distribution of F is only slightly affected, and therefore the critical values when sampling from normal and nonnormal distributions will not differ by much.

With respect to power, a platykurtic distribution (a flattened distribution with thinner

tails relative to the normal distribution indicated by a negative kurtosis value) does

attenuate power. Note also that more recently, Wilcox (2012) pointed that the ANOVA

Chapter 6

â†œæ¸€å±®

â†œæ¸€å±®

F test is not robust to certain violations of normality, which if present may inflate

the type IÂ€error rate to unacceptable levels. However, it appears that data have to be

very nonnormal for problems to arise, and these arise primarily when group sizes are

unequal. For example, in a meta analysis reported by Lix, Keselman, and Keselman

(1996), when skewÂ€=Â€2 and kurtosisÂ€=Â€6, the type IÂ€error rate for the ANOVA F test

remains close to its nominal value of .05 (mean alpha reported under nonnormality as

.059 with a standard deviation of .026). For unequal group size with the same degree

of nonnormality, type IÂ€error rates can be somewhat inflated (mean alphaÂ€=Â€.069 with

a standard deviation of .048). Thus, while the ANOVA F test appears to be largely

robust under normality violations, it is important to assess normality and take some

corrective steps when gross departures are found especially when group sizes are

unequal.

6.6 MULTIVARIATE NORMALITY

The multivariate normality assumption is a much more stringent assumption than the

corresponding assumption of normality on a single variable in ANOVA. Although it

is difficult to completely characterize multivariate normality, normality on each of the

variables separately is a necessary, but not sufficient, condition for multivariate normality to hold. That is, each of the individual variables must be normally distributed

for the variables to follow a multivariate normal distribution. Two other properties

of a multivariate normal distribution are: (1) any linear combination of the variables

is normally distributed, and (2) all subsets of the set of variables have multivariate

normal distributions. This latter property implies, among other things, that all pairs

of variables must be bivariate normal. Bivariate normality, for correlated variables,

implies that the scatterplots for each pair of variables will be elliptical; the higher the

correlation, the thinner the ellipse. Thus, as a partial check on multivariate normality,

one could obtain the scatterplots for pairs of variables from SPSS or SAS and see if

they are approximately elliptical.

6.6.1 Effect of Nonmultivariate Normality on Type I Error and Power

Results from various studies that considered up to 10 variables and small or moderate

sample sizes (Everitt, 1979; Hopkins & Clay, 1963; Mardia, 1971; Olson, 1973) indicate that deviation from multivariate normality has only a small effect on type I error.

In almost all cases in these studies, the actual α was within .02 of the level of significance for levels of .05 and .10.

Olson found, however, that platykurtosis does have an effect on power, and the severity of the effect increases as platykurtosis spreads from one to all groups. For example,

in one specific instance, power was close to 1 under no violation. With kurtosis present

in just one group, the power dropped to about .90. When kurtosis was present in all

three groups, the power dropped substantially, to .55.


You should note that what has been found in MANOVA is consistent with what was found in univariate ANOVA, in which the F statistic is often robust with respect to type I error against nonnormality, making it plausible that this robustness might extend to the multivariate case; this, indeed, is what has been found. Incidentally, there is a multivariate extension of the Central Limit Theorem, which also makes the multivariate results not entirely surprising. Second, Olson's result, that platykurtosis has a substantial effect on power, should not be surprising, given that platykurtosis had been shown in univariate ANOVA to have a substantial effect on power for small n's (Glass et al., 1972).

With respect to skewness, the Glass et al. (1972) review again suggests that distortions of power values are rarely greater than a few hundredths for univariate ANOVA, even with considerably skewed distributions. Thus, it could well be the case that multivariate skewness also has a negligible effect on power, although we have not located any studies bearing on this issue.

6.7 ASSESSING THE NORMALITY ASSUMPTION

If a set of variables follows a multivariate normal distribution, each of the variables

must be normally distributed. Therefore, it is often recommended that before other

procedures are used, you check to see if the scores for each variable appear to approximate a normal distribution. If univariate normality does not appear to hold, we know

then that the multivariate normality assumption is violated. There are two other reasons it makes sense to assess univariate normality:

1. As Gnanadesikan (1977) has stated, “in practice, except for rare or pathological examples, the presence of joint (multivariate) normality is likely to be detected quite often by methods directed at studying the marginal (univariate) normality of the observations on each variable” (p. 168). Johnson and Wichern (2007) made essentially the same point: “Moreover, for most practical work, one-dimensional and two-dimensional investigations are ordinarily sufficient. Fortunately, pathological data sets that are normal in lower dimensional representations but nonnormal in higher dimensions are not frequently encountered in practice” (p. 177).

2. Because the Box test for the homogeneity of covariance matrices assumption is quite sensitive to nonnormality, we wish to detect nonnormality on the individual variables and transform to normality to bring the joint distribution much closer to multivariate normality so that the Box test is not unduly affected. With respect to transformations, Figure 6.1 should be quite helpful.

6.7.1 Assessing Univariate Normality

There are several ways to assess univariate normality. First, for each group, you can

examine values of skewness and kurtosis for your data. Briefly, skewness refers to lack

of symmetry in a score distribution, whereas kurtosis refers to how peaked a distribution is and the degree to which the tails of the distribution are light or heavy relative to the normal distribution. The formulas for these indicators as used by SAS and SPSS are such that if scores are normally distributed, skewness and kurtosis will each have a value of zero.

Figure 6.1: Distributional transformations (from Rummel, 1970). [The figure pairs raw-score distribution shapes with normalizing transformations of Xj, including √Xj, log Xj, arcsin √Xj, log[Xj/(1 − Xj)], and ½ log[(1 + Xj)/(1 − Xj)].]

There are two ways that skewness and kurtosis measures are used to evaluate the normality assumption. A simple rule is to compare each group's skewness and kurtosis values to a magnitude of 2 (although values of 1 or 3 are sometimes used). Then, if

the values of skewness and kurtosis are each smaller in magnitude than 2, you would

conclude that the distribution does not depart greatly from a normal distribution, or is

reasonably consistent with the normal distribution. The second way these measures

are sometimes used is to consider a score distribution to be approximately normal if

the sample values of skewness and kurtosis each lie within ±2 standard errors of the

respective measure. So, for example, suppose that the standard error for skewness

(as obtained by SAS or SPSS) were .75 and the standard error for kurtosis were .60.

Then, the scores would be considered to reasonably approximate a normal distribution if the sample skewness value were within the span of −1.5 to 1.5 (±2 × .75) and

the sample kurtosis value were within the span of −1.2 to 1.2 (±2 × .60). Note that

this latter procedure approximates a z test for skewness and kurtosis assuming an

alpha of .05. Like any statistical test, then, this procedure will be sensitive to sample

size, providing generally lower power for smaller n and greater power for larger n.
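Both rules can be sketched with SciPy for a single group's scores. This is our own illustration; note that the large-sample approximations used below for the standard errors, √(6/n) and √(24/n), differ slightly from the exact formulas SPSS and SAS report.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def normality_screen(scores):
    """Apply the two skewness/kurtosis rules described above to one group."""
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    sk = skew(scores)            # 0 for a normal distribution
    ku = kurtosis(scores)        # excess kurtosis; 0 for a normal distribution
    se_sk = np.sqrt(6.0 / n)     # large-sample approximation of SE(skewness)
    se_ku = np.sqrt(24.0 / n)    # large-sample approximation of SE(kurtosis)
    return {
        "skewness": sk,
        "kurtosis": ku,
        "within_magnitude_2": abs(sk) < 2 and abs(ku) < 2,
        "within_2_se": abs(sk) < 2 * se_sk and abs(ku) < 2 * se_ku,
    }
```

For a strongly skewed variable, the second, sample-size-sensitive rule will typically flag nonnormality even when the blunt magnitude-2 rule does not.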

A second method of assessing univariate normality is to examine plots for each group.

Commonly used plots include a histogram, stem and leaf plot, box plot, and Q-Q plot.

The latter plot shows observations arranged in increasing order of magnitude and then

plotted against the expected normal distribution values. This plot should resemble a

straight line if normality is tenable. These plots are available on SAS and SPSS. Note

that with a small or moderate group size, it may be difficult to discern whether nonnormality is real or apparent, because of considerable sampling error. As such, the

skewness and kurtosis values may be examined, as mentioned, and statistical tests of normality may be conducted, which we consider next.

A third method of assessing univariate normality is to use omnibus statistical tests for normality. These tests include the chi-square goodness of fit, Kolmogorov–Smirnov, Shapiro–Wilk, and the z test approximations for skewness and kurtosis

discussed earlier. The chi-square test suffers from the defect of depending on the

number of intervals used for the grouping, whereas the Kolmogorov–Smirnov test

was shown not to be as powerful as the Shapiro–Wilk test or the combination of

using the skewness and kurtosis coefficients in an extensive Monte Carlo study by

Wilk, Shapiro, and Chen (1968). These investigators studied 44 different distributions, with sample sizes ranging from 10 to 50, and found that the combination of

skewness and kurtosis coefficients and the Shapiro–Wilk test were the most powerful in detecting departures from normality. They also found that extreme nonnormality can be detected with sample sizes of less than 20 by using sensitive procedures

(like the two just mentioned). This is important, because for many practical problems, group sizes are small. Note though that with large group sizes, these tests may

be quite powerful. As such it is a good idea to use test results along with examining

plots and the skewness and kurtosis descriptive statistics to get a sense of the degree

of departure from normality.

For univariate tests, we prefer the Shapiro–Wilk statistic due to its superior performance for small samples. Note that the null hypothesis for this test is that the variable being tested is normally distributed. Thus, a small p value (i.e., < .05) indicates a

violation of the normality assumption. This test statistic is easily obtained with the

EXAMINE procedure in SPSS. This procedure also yields the skewness and kurtosis

coefficients, along with their standard errors, and various plots. All of this information

is useful in determining whether there is a significant departure from normality, and

whether skewness or kurtosis is primarily responsible.
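The same test is also available in Python as scipy.stats.shapiro. The sketch below uses made-up, deliberately skewed scores; in practice you would run the test separately for each group and variable.

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
scores = rng.lognormal(mean=3.5, sigma=0.6, size=40)  # made-up, skewed data

w, p = shapiro(scores)
if p < 0.05:
    print(f"W = {w:.3f}, p = {p:.4f}: significant departure from normality")
else:
    print(f"W = {w:.3f}, p = {p:.4f}: normality is tenable")
```

Remember that with large samples even trivial departures can yield small p values, so pair the test with the plots and descriptive measures above.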

6.7.2 Assessing Multivariate Normality

Several methods can be used to assess the multivariate normality assumption. First, as

noted, checking to see if univariate normality is tenable provides a check on the multivariate normality assumption because if univariate normality is not present, neither

is multivariate normality. Note though that multivariate normality may not hold even

if univariate normality does. As noted earlier, assessing univariate normality is often

sufficient in practice to detect serious violations of the multivariate normality assumption, especially when combined with checking for bivariate normality. The latter can

be done by examining all possible bivariate scatter plots (although this becomes less

practical when many variables and many groups are present). Thus, for this edition

of the text (as in the previous edition), we will continue to focus on the use of these

methods to assess normality. We will, though, describe some multivariate methods for

assessing the multivariate normality assumption as these methods are beginning to

become available in general purpose software programs, such as SAS and SPSS.

Two different multivariate methods are available to assess whether the multivariate normality assumption is tenable. First, many different multivariate test statistics have been

developed to assess multivariate normality, including, for example, Mardia’s (1970) test

of multivariate skewness and kurtosis, Small’s (1980) omnibus test of multivariate normality, and the Henze–Zirkler (1990) test of multivariate normality. While there appears

to be limited evaluation of the performance of these multivariate tests, Looney (1995)

reports some simulation evidence suggesting that Small’s test has better performance

than some other tests, and Mecklin and Mundfrom (2003) found that the Henze–Zirkler

test is the best performing test of multivariate normality of the methods they examined.

As of this edition of the text, SPSS does not include any tests of multivariate normality

in its procedures. However, DeCarlo (1997) has developed a macro that can be used

with SPSS (which is freely available at http://www.columbia.edu/~ld208/). This macro

implements a variety of tests for multivariate normality, including Small’s omnibus

test mentioned previously. SAS now includes multivariate normality tests in the PROC

MODEL procedure via the fit option, which includes the Henze–Zirkler test (as well as

other normality tests).
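For readers working outside SAS and SPSS, Mardia's (1970) skewness and kurtosis statistics are straightforward to compute directly. The following Python function is an illustrative sketch of the standard formulas, not the DeCarlo macro or the SAS implementation; it uses the usual chi-square and normal reference distributions.

```python
# Illustrative NumPy implementation of Mardia's (1970) multivariate skewness
# and kurtosis tests; a sketch, not the DeCarlo macro or SAS PROC MODEL code.
import numpy as np
from scipy import stats

def mardia(X):
    """Return p-values for Mardia's multivariate skewness and kurtosis."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = (Xc.T @ Xc) / n                      # ML covariance estimate
    G = Xc @ np.linalg.inv(S) @ Xc.T         # (x_i - xbar)' S^-1 (x_j - xbar)
    b1 = (G ** 3).sum() / n ** 2             # multivariate skewness
    b2 = (np.diag(G) ** 2).mean()            # multivariate kurtosis
    df = p * (p + 1) * (p + 2) / 6
    p_skew = stats.chi2.sf(n * b1 / 6, df)   # chi-square approximation
    z = (b2 - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)
    p_kurt = 2 * stats.norm.sf(abs(z))       # two-sided normal approximation
    return p_skew, p_kurt

rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(3), np.eye(3), size=200)
p_skew, p_kurt = mardia(X)
```

Small p values on either statistic would suggest a departure from multivariate normality.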

The second multivariate procedure that is available to assess multivariate normality is

a graphical assessment procedure. This graph compares the squared Mahalanobis distances associated with the dependent variables to the values expected if multivariate

normality holds (analogous to the univariate Q-Q plot). Often, the expected values are


obtained from a chi-square distribution. Note though that Rencher and Christensen

(2012) state that the chi-square approximation often used in this plot can be poor and do

not recommend it for assessing multivariate normality. They discuss an alternative plot

in their text.
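As a sketch of this graphical check, the following Python code computes the ordered squared Mahalanobis distances and the corresponding chi-square quantiles; plotting one against the other gives the plot described above. The data here are simulated, and the plotting call is only suggested in a comment.

```python
# Sketch of the chi-square Q-Q check: ordered squared Mahalanobis distances
# paired with chi-square quantiles (data simulated for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
X = rng.multivariate_normal(np.zeros(3), np.eye(3), size=100)

n, p = X.shape
Xc = X - X.mean(axis=0)
Sinv = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.sort(np.einsum("ij,jk,ik->i", Xc, Sinv, Xc))       # squared distances
q = stats.chi2.ppf((np.arange(1, n + 1) - 0.5) / n, df=p)  # expected values

# e.g. with matplotlib: plt.scatter(q, d2); plt.plot(q, q) for the reference line
```

Under multivariate normality the points should fall near the 45-degree line, with the caveat from Rencher and Christensen (2012) that the chi-square approximation can be poor.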

6.7.3 Assessing Univariate Normality Using SPSS

We now show how you can use some of these procedures to assess normality. Our

example comes from a study on the cost of transporting milk from farms to dairy plants.

Example 6.1

From a survey, cost data on Y1 = fuel, Y2 = repair, and Y3 = capital (all measured on a per-mile basis) were obtained for two types of trucks, gasoline and diesel. Thus, we

have a two-group MANOVA, with three dependent variables. First, we ran this data

through the SPSS DESCRIPTIVES program. The complete lines for doing so are presented in Table 6.3. This was done to obtain the z scores for the variables within each

group. Converting to z scores makes it much easier to identify potential outliers. Any

variables with z values substantially greater than 2.5 or so (in absolute value) need to

be examined carefully. When we examined the z scores, we found three observations

with z scores greater than 2.5, all of which occurred for Y1. These scores were found

for case 9, z = 3.52, case 21, z = 2.91 (both in group 1), and case 52, z = 2.77 (in group

2). These cases, then, would need to be carefully examined to make sure data entry is

accurate and to make sure these scores are valid.
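The within-group z-score screening can be sketched as follows in Python; the scores below are made up for illustration, with one deliberately extreme value standing in for a case like those flagged above.

```python
# Sketch of the within-group z-score screening; these scores are made up,
# with 30.0 playing the role of a suspect entry like those flagged above.
import numpy as np

y1 = np.array([10.0, 11.0, 9.5, 10.2, 10.8, 9.9, 10.5,
               11.2, 9.7, 10.1, 10.6, 30.0])          # one group's scores
z = (y1 - y1.mean()) / y1.std(ddof=1)                 # z scores within the group
suspect = np.where(np.abs(z) > 2.5)[0]                # cases to examine carefully
```

The same computation would be repeated within each group, which is what SPLIT FILE accomplishes in the SPSS run.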

Next, we used the SPSS EXAMINE procedure with these data to obtain, among other

things, the Shapiro–Wilk test for normality for each variable in each group and the

group skewness and kurtosis values. The commands for doing this appear in Table 6.4.

The test results for the three variables in each group are shown next. If we were testing for normality in each case at the .05 level, then only variable Y1 deviates from

normality in just group 1, as the p value for the Shapiro–Wilk statistic is smaller

Table 6.3: Control Lines for SPSS Descriptives for Three Variables in Two-Group MANOVA

TITLE ‘SPLIT FILE FOR MILK DATA’.

DATA LIST FREE/gp y1 y2 y3.

BEGIN DATA.

DATA LINES (raw data are on-line)

END DATA.

SPLIT FILE BY gp.

DESCRIPTIVES VARIABLES=y1 y2 y3

/SAVE

/STATISTICS=MEAN STDDEV MIN MAX.


Table 6.4: SPSS Commands for the EXAMINE Procedure for the Two-Group MANOVA

TITLE ‘TWO GROUP MANOVA — 3 DEPENDENT VARIABLES’.
DATA LIST FREE/gp y1 y2 y3.
BEGIN DATA.
DATA LINES (data are on-line)
END DATA.
(1) EXAMINE VARIABLES = y1 y2 y3 BY gp
(2)  /PLOT = STEMLEAF NPPLOT.

(1) The BY keyword will yield a variety of descriptive statistics for each group: mean, median, skewness, kurtosis, etc.
(2) STEMLEAF will yield a stem-and-leaf plot for each variable in each group. NPPLOT yields normal probability plots, as well as the Shapiro–Wilk and Kolmogorov–Smirnov statistical tests for normality for each variable in each group.

than .05. In addition, while all other skewness and kurtosis values are smaller than

2, the skewness and kurtosis values for Y1 in group 1 are 1.87 and 4.88. Thus, both

the statistical test result and the kurtosis value indicate a violation of normality for

Y1 in group 1. Note that given the positive value for kurtosis, we would not expect

this departure from normality to have much of an effect on power, and hence we

would not be very concerned. We would have been concerned if we had found

deviation from normality on two or more variables, and this deviation was due

to platykurtosis (indicated by a negative kurtosis value). In this case, we would

have applied the last transformation in Figure 6.1: .5 log[(1 + X)/(1 − X)]. Note

also that the outliers found for group 1 greatly affect the assessment of normality.

If these values were judged not to be valid and removed from the analysis, the

resulting assessment would have indicated no violations of normality.

This highlights the value of attending to outliers prior to engaging in other analysis

activities.

Tests of normality

              Kolmogorov-Smirnov(a)          Shapiro-Wilk
      Gp      Statistic   df    Sig.         Statistic   df    Sig.
y1    1.00    .157        36    .026         .837        36    .000
      2.00    .091        23    .200*        .962        23    .512
y2    1.00    .125        36    .171         .963        36    .262
      2.00    .118        23    .200*        .962        23    .500
y3    1.00    .073        36    .200*        .971        36    .453
      2.00    .111        23    .200*        .969        23    .658

* This is a lower bound of the true significance.
a Lilliefors Significance Correction


6.8 HOMOGENEITY OF VARIANCE ASSUMPTION

Recall that the third assumption for ANOVA is that of equal population variances.

It is widely known that the ANOVA F test is not robust when unequal group sizes are combined with unequal variances. In particular, when group sizes are sharply unequal (largest/smallest > 1.5) and the population variances differ, then if the larger groups have smaller variances the F statistic is liberal. A liberal test result means we are rejecting falsely too often; that is, actual α > nominal level of significance.

Thus, you may think you are rejecting falsely 5% of the time, but the true rejection

rate (actual α) may be 11%. When the larger groups have larger variances, then the

F statistic is conservative. This means actual α < nominal level of significance. At

first glance, this may not appear to be a problem, but note that the smaller α will

cause a decrease in power, and in many studies, one can ill afford to have power

further attenuated.

When group sizes are equal or approximately equal (largest/smallest < 1.5), the ANOVA F test is often robust to violations of equal group variance. In fact, early research into this issue, such as that reported in Glass et al. (1972), indicated that the ANOVA F test is robust to such violations provided that groups are of equal size. More recently, though, research described in Coombs, Algina, and Oltman (1996) has shown that the ANOVA F test, even when group sizes are equal, is not robust when group variances differ greatly. For example, as reported in Coombs et al., if the common group size is 11 and the variances are in the ratio of 16:1:1:1, then the type I error rate

associated with the F test is .109. While the ANOVA F test, then, is not completely

robust to unequal variances even when group sizes are the same, this research suggests that the variances must differ substantially for this problem to arise. Further,

the robustness of the ANOVA F test improves in this situation when the equal group

size is larger.

It is important to note that many of the frequently used tests for homogeneity of variance, such as Bartlett's, Cochran's, and Hartley's Fmax, are quite sensitive to nonnormality. That is, with these tests, one may reject and erroneously conclude that the population variances are different when, in fact, the rejection was due to nonnormality in the underlying populations. Fortunately, Levene's test is more robust against nonnormality. This test is available in the EXAMINE procedure in SPSS. The test statistic is formed by deviating the scores for the subjects in each group from the group mean and then taking the absolute values. Thus, zij = |xij − x̄j|, where x̄j represents the mean for the jth group. An ANOVA is then done on the zij. Although the Levene test is somewhat more robust, an extensive Monte Carlo study by Conover, Johnson, and Johnson (1981) showed that if considerable skewness is present, a modification of the Levene test is necessary for it to remain robust: the mean for each group is replaced by the median, and an ANOVA is done on the absolute deviations from the group medians. This modification produces a more robust test with good power. It is available in SAS and SPSS.
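Both forms of the test are also available in SciPy: center="mean" gives the original Levene test, and center="median" gives the robust modification (often called the Brown–Forsythe test). A brief sketch with simulated groups:

```python
# Levene's test and its median-based (Brown-Forsythe) modification in SciPy.
# Group sizes mirror the milk data (36 and 23); the scores are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
g1 = rng.normal(0.0, 1.0, size=36)
g2 = rng.normal(0.0, 3.0, size=23)   # deliberately more variable group

stat_mean, p_mean = stats.levene(g1, g2, center="mean")     # original test
stat_med, p_med = stats.levene(g1, g2, center="median")     # robust version
```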

Chapter 6

â†œæ¸€å±®

â†œæ¸€å±®

6.9 HOMOGENEITY OF THE COVARIANCE MATRICES*

The assumption of equal (homogeneous) covariance matrices is a very restrictive one.

Recall from the matrix algebra chapter (Chapter 2) that two matrices are equal only

if all corresponding elements are equal. Let us consider a two-group problem with

five dependent variables. All corresponding elements in the two matrices being equal

implies, first, that the corresponding diagonal elements are equal. This means that the

five population variances in group 1 are equal to their counterparts in group 2. But all

nondiagonal elements must also be equal for the matrices to be equal, and this implies

that all covariances are equal. Because for five variables there are 10 covariances, this

means that the 10 population covariances in group 1 are equal to their counterpart covariances in group 2. Thus, for only five variables, the equal covariance matrices assumption requires that 15 elements of group 1 be equal to their counterparts in group 2.

For eight variables, the assumption implies that the eight population variances in group

1 are equal to their counterparts in group 2 and that the 28 corresponding covariances

for the two groups are equal. The restrictiveness of the assumption becomes more

strikingly apparent when we realize that the corresponding assumption for the univariate t test is that the variances on only one variable be equal.
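The counts above follow from a simple formula: a p × p covariance matrix has p variances plus p(p − 1)/2 covariances, or p(p + 1)/2 distinct elements in all. A tiny Python check:

```python
# Counting the distinct elements of a p x p covariance matrix:
# p variances plus p(p - 1)/2 covariances, i.e. p(p + 1)/2 in all.
def distinct_cov_elements(p: int) -> int:
    return p * (p + 1) // 2

five = distinct_cov_elements(5)   # 5 variances + 10 covariances
eight = distinct_cov_elements(8)  # 8 variances + 28 covariances
```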

Hence, it is very unlikely that the equal covariance matrices assumption would ever

literally be satisfied in practice. The relevant question is: Will the very plausible violations of this assumption that occur in practice have much of an effect on power?

6.9.1 Effect of Heterogeneous Covariance Matrices on Type I Error

Three major Monte Carlo studies have examined the effect of unequal covariance

matrices on error rates: Holloway and Dunn (1967) and Hakstian, Roed, and Lind

(1979) for the two-group case, and Olson (1974) for the k-group case. Holloway

and Dunn considered both equal and unequal group sizes and modeled moderate

to extreme heterogeneity. A representative sampling of their results, presented in Table 6.5, shows that equal ns keep the actual α very close to the level of significance (within a few percentage points) for all but the extreme cases. With moderate heterogeneity, sharply unequal group sizes, with the larger group having the smaller variability, produce a liberal test. In fact, the test can become very liberal (cf., three variables, N1 = 35, N2 = 15, actual α = .175). When larger groups have larger variability, this produces a conservative test.

Hakstian et al. (1979) modeled heterogeneity that was milder and, we believe, somewhat more representative of what is encountered in practice than that considered in the

Holloway and Dunn study. They also considered more disparate group sizes (up to a

ratio of 5 to 1) for the 2-, 6-, and 10-variable cases. The following three heterogeneity

conditions were examined:

* Appendix 6.2 discusses multivariate test statistics for unequal covariance matrices.


Table 6.5: Effect of Heterogeneous Covariance Matrices on Type I Error for Hotelling's T² (1)

                      Number of observations     Degree of heterogeneity
                      per group
Number of variables   N1     N2 (2)              D = 3 (3)      D = 10
                                                 (Moderate)     (Very large)
3                     15     35                  .015           0
3                     20     30                  .03            .02
3                     25     25                  .055           .07
3                     30     20                  .09            .15
3                     35     15                  .175           .28
7                     15     35                  .01            0
7                     20     30                  .03            .02
7                     25     25                  .06            .08
7                     30     20                  .13            .27
7                     35     15                  .24            .40
10                    15     35                  .01            0
10                    20     30                  .03            .03
10                    25     25                  .08            .12
10                    30     20                  .17            .33
10                    35     15                  .31            .40

(1) Nominal α = .05.
(2) Group 2 is more variable.
(3) D = 3 means that the population variances for all variables in Group 2 are 3 times as large as the population variances for those variables in Group 1.
Source: Data from Holloway and Dunn (1967).

1. The population variances for the variables in Population 2 are only 1.44 times as great as those for the variables in Population 1.
2. The Population 2 variances and covariances are 2.25 times as great as those for all variables in Population 1.
3. The Population 2 variances and covariances are 2.25 times as great as those for Population 1 for only half the variables.

The results in Table 6.6 for the six-variable case are representative of what Hakstian et al. found. Their results are consistent with the Holloway and Dunn findings, but they extend them in two ways. First, even for milder heterogeneity, sharply unequal group sizes can produce sizable distortions in the type I error rate (cf., 24:12, Heterogeneity 2 (negative): actual α = .127 vs. level of significance = .05). Second, severely unequal group sizes can produce sizable distortions in type I error rates, even for very mild heterogeneity (cf., 30:6, Heterogeneity 1 (negative): actual α = .117 vs. level of significance = .05).

Olson (1974) considered only equal ns and warned, on the basis of the Holloway and

Dunn results and some preliminary findings of his own, that researchers would be well


Table 6.6: Effect of Heterogeneous Covariance Matrices with Six Variables on Type I Error for Hotelling's T²

                          Heterog. 1           Heterog. 2           Heterog. 3
N1:N2 (1)   Nominal α     POS. (2)   NEG. (3)  POS.       NEG.      POS.       NEG.
18:18       .01           .006       .011      .005       .012      .006       .018
            .05           .048       .057      .021       .064      .028       .076
            .10           .099       .109      .051       .114      .072       .158
24:12       .01           .007       .020      .000       .043      .003       .046
            .05           .035       .088      .004       .127      .022       .145
            .10           .068       .155      .012       .214      .046       .231
30:6        .01           .004       .036                 .103
            .05           .018       .117                 .249
            .10           .045       .202                 .358

(1) Ratio of the group sizes.
(2) Condition in which the larger group has the larger generalized variance.
(3) Condition in which the larger group has the smaller generalized variance.
Source: Data from Hakstian, Roed, and Lind (1979).

advised to strive to attain equal group sizes in the k-group case. The results of Olson’s

study should be interpreted with care, because he modeled primarily extreme heterogeneity (i.e., cases where the population variances of all variables in one group were 36

times as great as the variances of those variables in all the other groups).

6.9.2 Testing Homogeneity of Covariance Matrices: The BoxÂ€Test

Box (1949) developed a test, a generalization of Bartlett's univariate homogeneity of variance test, for determining whether the covariance matrices are equal. The test

uses the generalized variances; that is, the determinants of the within-covariance matrices. It is very sensitive to nonnormality. Thus, one may reject with the Box test because

of a lack of multivariate normality, not because the covariance matrices are unequal.

Therefore, before employing the Box test, it is important to see whether the multivariate normality assumption is reasonable. As suggested earlier in this chapter, a check of

marginal normality for the individual variables is probably sufficient (inspecting plots,

examining values for skewness and kurtosis, and using the Shapiro–Wilk test). Where

there is a departure from normality, use a suitable transformation (see FigureÂ€6.1).

Box has given a χ2 approximation and an F approximation for his test statistic, both of which appear on the SPSS MANOVA output, as an upcoming example in this section shows. To decide to which of these one should pay more attention, the following rule is helpful: when all group sizes exceed 20 and the number of dependent variables is fewer than six, the χ2 approximation is fine. Otherwise, the F approximation is more accurate and should be used.
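For readers who want to see the mechanics, the following Python sketch computes Box's M and its χ2 approximation from a list of group data matrices. This is an illustrative implementation of the standard formulas, not SPSS's code; the F approximation is more involved and omitted, and the data are simulated with the milk-data group sizes.

```python
# Illustrative implementation of Box's M with its chi-square approximation,
# following the standard formulas (not SPSS's internal code).
import numpy as np
from scipy import stats

def box_m(groups):
    """groups: list of (n_j x p) data arrays. Returns (M, chi2, df, p)."""
    k = len(groups)
    p = groups[0].shape[1]
    ns = np.array([g.shape[0] for g in groups])
    covs = [np.cov(g, rowvar=False) for g in groups]          # S_j
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (ns.sum() - k)
    M = (ns.sum() - k) * np.log(np.linalg.det(pooled)) \
        - sum((n - 1) * np.log(np.linalg.det(S)) for n, S in zip(ns, covs))
    c = (np.sum(1.0 / (ns - 1)) - 1.0 / (ns.sum() - k)) \
        * (2 * p ** 2 + 3 * p - 1) / (6.0 * (p + 1) * (k - 1))
    chi2 = (1 - c) * M                        # chi-square approximation
    df = p * (p + 1) * (k - 1) / 2
    return M, chi2, df, stats.chi2.sf(chi2, df)

rng = np.random.default_rng(4)
g1 = rng.multivariate_normal(np.zeros(3), np.eye(3), size=36)
g2 = rng.multivariate_normal(np.zeros(3), np.eye(3), size=23)
M, chi2, df, pval = box_m([g1, g2])
```

Note that M is built from the log determinants of the group and pooled covariance matrices, which is why the determinants are useful to inspect when the test is significant.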


Example 6.2

To illustrate the use of SPSS MANOVA for assessing homogeneity of the covariance matrices, we consider again the data from Example 6.1. Note that we use the SPSS MANOVA procedure instead of GLM in order to obtain the natural log of the determinants, as discussed later. Recall that this example involved two types of trucks (gasoline and diesel), with measurements on three variables: Y1 = fuel, Y2 = repair, and Y3 = capital. The raw data were provided in the syntax online. Recall that there were 36 gasoline trucks and 23 diesel trucks, so we have sharply unequal group sizes. Thus, if the Box test is significant here, the multivariate test statistics may be biased, and we would need to be concerned.

The commands for running the MANOVA, along with getting the Box test and some selected output, are presented in Table 6.7. It is in the PRINT subcommand that we obtain the multivariate (Box test) and univariate tests of homogeneity of variance. Note in Table 6.7 (center) that the Box test is significant well beyond the .01 level (F = 5.088, p = .000, approximately). We wish to determine whether the multivariate test statistics will be liberal or conservative. To do this, we examine the determinants of the covariance matrices. Remember that the determinant of the covariance matrix is the generalized variance; that is, it is the multivariate measure of within-group variability for a set of variables. In this case, the larger group (group 1) has the smaller generalized variance (i.e., 3,172). The effect of this is to produce positively biased (liberal) multivariate test statistics. Also, although this is not presented in Table 6.7, the group effect is quite significant (F = 16.375, p = .000, approximately). It is possible, then, that this significant group effect may be mainly due to the positive bias present.

Table 6.7: SPSS MANOVA and EXAMINE Control Lines for Milk Data and Selected Output

TITLE ‘MILK DATA’.
DATA LIST FREE/gp y1 y2 y3.
BEGIN DATA.
DATA LINES (raw data are on-line)
END DATA.
MANOVA y1 y2 y3 BY gp(1,2)
 /PRINT = HOMOGENEITY(COCHRAN, BOXM).
EXAMINE VARIABLES = y1 y2 y3 BY gp
 /PLOT = SPREADLEVEL.

Cell Number.. 1
Determinant of Covariance matrix of dependent variables =    3172.91372
LOG (Determinant) =                                             8.06241
Cell Number.. 2
Determinant of Covariance matrix of dependent variables =    4860.31030
LOG (Determinant) =                                             8.48886
Determinant of pooled Covariance matrix of dependent vars. = 6619.49636
LOG (Determinant) =                                             8.79777

Multivariate test for Homogeneity of Dispersion matrices
Boxs M =                  32.53409
F WITH (6,14625) DF =      5.08834, P = .000 (Approx.)
Chi-Square with 6 DF =    30.54336, P = .000 (Approx.)

Test of Homogeneity of Variance
                       Levene Statistic   df1   df2   Sig.
y1   Based on Mean     5.071              1     57    .028
y2   Based on Mean      .961              1     57    .331
y3   Based on Mean     6.361              1     57    .014

To see whether this is the case, we look for variance-stabilizing transformations that, hopefully, will make the Box test not significant, and then check to see whether the group effect is still significant. Note in Table 6.7 that Levene's tests of equal variance suggest there are significant variance differences for Y1 and Y3.

The EXAMINE procedure was also run, and it indicated that the following new variables will have approximately equal variances: NEWY1 = Y1**(−1.678) and NEWY3 = Y3**(.395). When these new variables, along with Y2, were run in a MANOVA (see Table 6.8), the Box test was not significant at the .05 level (F = 1.79, p = .097), but the group effect was still significant well beyond the .01 level (F = 13.785, p < .001, approximately).

We now consider two variations of this result. In the first, a violation would not be of

concern. If the Box test had been significant and the larger group had the larger generalized variance, then the multivariate statistics would be conservative. In that case,

we would not be concerned, for we would have found significance at an even more

stringent level had the assumption been satisfied.

A second variation on the example results that would have been of concern is if the larger group had the larger generalized variance and the group effect was not significant. Then, it would not be clear whether the lack of significance was due to the conservativeness of the test statistic. In this case, we could simply test at a somewhat more liberal level, once again realizing that the effective alpha level will probably be around .05. Or, we could again seek variance-stabilizing transformations.

With respect to transformations, there are two possible approaches. If there is a known

relationship between the means and variances, then the following two transformations are


Table 6.8: SPSS MANOVA and EXAMINE Commands for Milk Data Using Two Transformed Variables and Selected Output

TITLE ‘MILK DATA – Y1 AND Y3 TRANSFORMED’.
DATA LIST FREE/gp y1 y2 y3.
BEGIN DATA.
DATA LINES
END DATA.
LIST.
COMPUTE NEWy1 = y1**(−1.678).
COMPUTE NEWy3 = y3**.395.
MANOVA NEWy1 y2 NEWy3 BY gp(1,2)
 /PRINT = CELLINFO(MEANS) HOMOGENEITY(BOXM, COCHRAN).
EXAMINE VARIABLES = NEWy1 y2 NEWy3 BY gp
 /PLOT = SPREADLEVEL.

Multivariate test for Homogeneity of Dispersion matrices
Boxs M =                  11.44292
F WITH (6,14625) DF =      1.78967, P = .097 (Approx.)
Chi-Square with 6 DF =    10.74274, P = .097 (Approx.)

EFFECT .. GP
Multivariate Tests of Significance (S = 1, M = 1/2, N = 26 1/2)
Test Name     Value      Exact F     Hypoth. DF   Error DF   Sig. of F
Pillais       .42920     13.78512    3.00         55.00      .000
Hotellings    .75192     13.78512    3.00         55.00      .000
Wilks         .57080     13.78512    3.00         55.00      .000
Roys          .42920
Note .. F statistics are exact.

Test of Homogeneity of Variance
                         Levene Statistic   df1   df2   Sig.
NEWy1   Based on Mean    1.008              1     57    .320
Y2      Based on Mean     .961              1     57    .331
NEWy3   Based on Mean     .451              1     57    .505

helpful. The square root transformation, where the original scores are replaced by √yij, will stabilize the variances if the means and variances are proportional for each group. This can happen when the data are in the form of frequency counts. If the scores are proportions, then the means and variances are related as follows: σi² = μi(1 − μi). This is true because, with proportions, we have a binomial variable, and for a binomial variable the variance is this function of its mean. The arcsine transformation, where the original scores are replaced by arcsin √yij, will also stabilize the variances in this case.

If the relationship between the means and the variances is not known, then one can let

the data decide on an appropriate transformation (as in the previous example).
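The two known-relationship transformations just described are one-liners in Python; the count and proportion scores below are made up for illustration.

```python
# The square root and arcsine transformations, applied to made-up data.
import numpy as np

counts = np.array([3.0, 7.0, 12.0, 1.0, 5.0])        # frequency-count scores
props = np.array([0.10, 0.35, 0.62, 0.48, 0.90])     # proportion scores

y_sqrt = np.sqrt(counts)             # square root transformation
y_asin = np.arcsin(np.sqrt(props))   # arcsine transformation
```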

We now consider an example that illustrates the first approach, that of using a known

relationship between the means and variances to stabilize the variances.

Example 6.3

            Group 1             Group 2             Group 3
            Y1      Y2          Y1      Y2          Y1      Y2
             .3      5           5       4          14       5
            1.1      4           5       4           9      10
            5.1      8          12       6          20       2
            1.9      6           8       3          16       6
            4.3      4          13       4          23       9
            3.5      4           9       5          18       8
            4.3      7          11       6          21       2
            1.9      7           5       3          12       2
            2.7      4          10       4          15       4
            5.9      7           7       2          12       5

MEANS       Y1 = 3.1  Y2 = 5.6  Y1 = 8.5  Y2 = 4.1  Y1 = 16  Y2 = 5.3
VARIANCES   3.31      2.49      8.94      1.66      20       8.68

Notice that for Y1, as the means increase (from group 1 to group 3) the variances also increase. Also, the ratio of variance to mean is approximately the same for the three groups: 3.31 / 3.1 = 1.068, 8.94 / 8.5 = 1.052, and 20 / 16 = 1.25. Further, the variances for Y2 differ by a fair amount. Thus, it is likely here that the homogeneity of covariance matrices assumption is not tenable. Indeed, when the MANOVA was run on SPSS, the Box test was significant at the .05 level (F = 2.821, p = .010), and the Cochran univariate tests for both variables were also significant at the .05 level (Y1: p = .047; Y2: p = .014).

Because the means and variances for Y1 are approximately proportional, as mentioned earlier, a square-root transformation will stabilize the variances. The commands for running SPSS MANOVA, with the square-root transformation on Y1, are given in Table 6.9, along with selected output. A few comments on the commands: it is in the COMPUTE command that we do the transformation, calling the transformed variable RTY1. We then use the transformed variable RTY1, along with Y2, in the MANOVA command for the analysis. Note the stabilizing effect of the square root transformation on Y1; the standard deviations are now approximately equal (.587, .522, and .568). Also, Box's test is no longer significant (F = 1.73, p = .109).


Table 6.9: SPSS Commands for Three-Group MANOVA with Unequal Variances (Illustrating Square-Root Transformation)

TITLE ‘THREE GROUP MANOVA – TRANSFORMING y1’.
DATA LIST FREE/gp y1 y2.
BEGIN DATA.
DATA LINES
END DATA.
COMPUTE RTy1 = SQRT(y1).
MANOVA RTy1 y2 BY gp(1,3)
 /PRINT = CELLINFO(MEANS) HOMOGENEITY(COCHRAN, BOXM).

Cell Means and Standard Deviations
Variable .. RTy1
FACTOR              CODE    Mean     Std. Dev.
gp                  1       1.670    .587
gp                  2       2.873    .522
gp                  3       3.964    .568
For entire sample           2.836    1.095

Variable .. y2
FACTOR              CODE    Mean     Std. Dev.
gp                  1       5.600    1.578
gp                  2       4.100    1.287
gp                  3       5.300    2.946
For entire sample           5.000    2.101

Univariate Homogeneity of Variance Tests
Variable .. RTy1
   Cochrans C(9,3) =          .36712, P = 1.000 (approx.)
   Bartlett-Box F(2,1640) =   .06176, P =  .940
Variable .. y2
   Cochrans C(9,3) =          .67678, P =  .014 (approx.)
   Bartlett-Box F(2,1640) =  3.35877, P =  .035

Multivariate test for Homogeneity of Dispersion matrices
Boxs M =                  11.65338
F WITH (6,18168) DF =      1.73378, P = .109 (Approx.)
Chi-Square with 6 DF =    10.40652, P = .109 (Approx.)

6.10 SUMMARY

We have considered each of the assumptions in MANOVA in some detail individually.

We now tie together these pieces of information into an overall strategy for assessing

assumptions in a practical problem.


1. Check to determine whether it is reasonable to assume the participants are responding independently; a violation of this assumption is very serious. Logically, from

the context in which the participants are receiving treatments, one should be able

to make a judgment. Empirically, the intraclass correlation is a measure of the

degree of dependence. Perhaps the most flexible analysis approach for correlated

observations is multilevel modeling. This method is statistically correct for situations in which individual observations are correlated within clusters, and multilevel models allow for inclusion of predictors at the participant and cluster level,

as discussed in Chapter 13. As a second possibility, if several groups are involved

for each treatment condition, consider using the group mean as the unit of analysis, instead of the individual outcome scores.

2. Check to see whether multivariate normality is reasonable. In this regard, checking

the marginal (univariate) normality for each variable should be adequate. The EXAMINE procedure from SPSS is very helpful. If departure from normality is found,

consider transforming the variable(s). Figure 6.1 can be helpful. This comment from Johnson and Wichern (1982) should be kept in mind: “Deviations from normality are often due to one or more unusual observations (outliers)” (p. 163). Once again, we

see the importance of screening the data initially and converting to z scores.

3. Apply Box’s test to check the assumption of homogeneity of the covariance matrices. If normality has been achieved in Step 2 on all or most of the variables, then

Box’s test should be a fairly clean test of variance differences, although keep in

mind that this test can be very powerful when sample size is large. If the Box test

is not significant, then all is fine.

4. If the Box test is significant with equal ns, then, although the type I error rate will

be only slightly affected, power will be attenuated to some extent. Hence, look for

transformations on the variables that are causing the covariance matrices to differ.

5. If the Box test is significant with sharply unequal ns for two groups, compare the

determinants of S1 and S2 (i.e., the generalized variances for the two groups). If the

larger group has the smaller generalized variance, T² will be liberal. If the larger group has the larger generalized variance, T² will be conservative.

6. For the k-group case, if the Box test is significant, examine the |Si| for the groups.

If the groups with larger sample sizes have smaller generalized variances, then

the multivariate statistics will be liberal. If the groups with the larger sample sizes

have larger generalized variances, then the statistics will be conservative.

It is possible for the k-group case that neither of these two conditions hold. For example, for three groups, it could happen that the two groups with the smallest and the

largest sample sizes have large generalized variances, and the remaining group has a

variance somewhat smaller. In this case, however, the effect of heterogeneity should

not be serious, because the coexisting liberal and conservative tendencies should cancel each other out somewhat.

Finally, because there are several test statistics in the k-group MANOVA case, their

relative robustness in the presence of violations of assumptions could be a criterion

for preferring one over the others. In this regard, Olson (1976) argued in favor of the


Pillai–Bartlett trace, because of its presumed greater robustness against heterogeneous

covariance matrices. For variance differences likely to occur in practice, however,

Stevens (1979) found that the Pillai–Bartlett trace, Wilks’ Λ, and the Hotelling–Lawley trace are essentially equally robust.

6.11 COMPLETE THREE-GROUP MANOVA EXAMPLE

In this section, we illustrate a complete set of analysis procedures for one-way

MANOVA with a new data set. The data set, available online, is called SeniorWISE,

because the example used is adapted from the SeniorWISE (Wisdom Is Simply Exploration) study (McDougall et al., 2010a, 2010b). In the example used here, we assume

that individuals 65 or older were randomly assigned to receive (1) memory training,

which was designed to help adults maintain and/or improve their memory-related abilities; (2) a health intervention condition, which did not include memory training but is

included in the study to determine if those receiving memory training would have better memory performance than those receiving an active intervention, albeit unrelated

to memory; or (3) a wait-list control condition. The active treatments were individually administered and posttest intervention measures were completed individually.

Further, we have data (computer generated) for three outcomes, the scores for which

are expected to be approximately normally distributed. The outcomes are thought to tap

distinct constructs but are expected to be positively correlated. The first outcome, self-efficacy, is a measure of the degree to which individuals feel strong and confident about performing everyday memory-related tasks. The second outcome is a measure that assesses

aspects of verbal memory performance, particularly verbal recall and recognition abilities. For the final outcome measure, the investigators used a measure of daily functioning

that assesses participant ability to successfully use recall to perform tasks related to, for

example, communication skills, shopping, and eating. We refer to this outcome as DAFS,

because it is based on the Direct Assessment of Functional Status. Higher scores on each

of these measures represent a greater (and preferred) level of performance.

To summarize, we have individuals assigned to one of three treatment conditions

(memory training, health training, or control) and have collected posttest data on memory self-efficacy, verbal memory performance, and daily functioning skills (or DAFS).

Our research hypothesis is that individuals in the memory training condition will have

higher average posttest scores on each of the outcomes compared to control participants. On the other hand, it is not clear how participants in the health training condition will do relative to the other groups, as it is possible this intervention will have no

impact on memory but also possible that the act of providing an active treatment may

result in improved memory self-efficacy and performance.

6.11.1 Sample Size Determination

We first illustrate a priori sample size determination for this study. We use Table A.5

in Appendix A, which requires us to provide a general magnitude for the effect size

Chapter 6


threshold, which we select as moderate, the number of groups (three), the number of

dependent variables (three), power (.80), and alpha (.05) used for the test of the overall

multivariate null hypothesis. With these values, Table A.5 indicates that 52 participants

are needed for each of the groups. We assume that the study has a funding source, and

investigators were able to randomly assign 100 participants to each group. Note that

obtaining a larger number of participants than “required” will provide for additional

power for the overall test, and will help provide for improved power and confidence

interval precision (narrower limits) for the pairwise comparisons.

6.11.2 Preliminary Analysis

With the intervention and data collection completed, we screen the data to identify outliers, assess assumptions, and determine whether the standard MANOVA analysis is supported. Table 6.10 shows the SPSS commands for the entire analysis. Selected results are shown in Tables 6.11 and 6.12. Table 6.11 shows that there are no missing data, that the means for the memory training group are greater than those for the other groups, and that variability is fairly similar for each outcome across the three treatment groups. The bivariate pooled within-group correlations (not shown) among the outcomes support the use of MANOVA, as each correlation is of moderate strength and, as expected, positive (correlations are .342, .337, and .451).

Table 6.10: SPSS Commands for the Three-Group MANOVA Example

SORT CASES BY Group.
SPLIT FILE LAYERED BY Group.
FREQUENCIES VARIABLES=Self_Efficacy Verbal DAFS
  /FORMAT=NOTABLE
  /STATISTICS=STDDEV MINIMUM MAXIMUM MEAN MEDIAN SKEWNESS SESKEW KURTOSIS SEKURT
  /HISTOGRAM NORMAL
  /ORDER=ANALYSIS.
DESCRIPTIVES VARIABLES=Self_Efficacy Verbal DAFS
  /SAVE
  /STATISTICS=MEAN STDDEV MIN MAX.
REGRESSION
  /STATISTICS COEFF
  /DEPENDENT CASE
  /METHOD=ENTER Self_Efficacy Verbal DAFS
  /SAVE MAHAL.
SPLIT FILE OFF.
EXAMINE VARIABLES=Self_Efficacy Verbal DAFS BY group
  /PLOT=STEMLEAF NPPLOT.
MANOVA Self_Efficacy Verbal DAFS BY Group(1,3)
  /print=error (stddev cor).
DESCRIPTIVES VARIABLES=ZSelf_Efficacy ZVerbal ZDAFS
  /STATISTICS=MEAN STDDEV MIN MAX.
GLM Self_Efficacy Verbal DAFS BY Group
  /POSTHOC=Group(TUKEY)
  /PRINT=DESCRIPTIVE ETASQ HOMOGENEITY
  /CRITERIA=ALPHA(.0167).

Table 6.11: Selected SPSS Output for Data Screening for the Three-Group MANOVA Example

Statistics

GROUP: Memory Training
                          Self_Efficacy   Verbal      DAFS
N Valid                   100             100         100
N Missing                 0               0           0
Mean                      58.5053         60.2273     59.1516
Median                    58.0215         61.5921     58.9151
Std. Deviation            9.19920         9.65827     9.74461
Skewness                  .052            -.082       .006
Std. Error of Skewness    .241            .241        .241
Kurtosis                  -.594           .002        -.034
Std. Error of Kurtosis    .478            .478        .478
Minimum                   35.62           32.39       36.77
Maximum                   80.13           82.27       84.17

GROUP: Health Training
                          Self_Efficacy   Verbal      DAFS
N Valid                   100             100         100
N Missing                 0               0           0
Mean                      50.6494         50.8429     52.4093
Median                    51.3928         52.3650     53.3766
Std. Deviation            8.33143         9.34031     10.27314
Skewness                  .186            -.412       -.187
Std. Error of Skewness    .241            .241        .241
Kurtosis                  .037            .233        -.478
Std. Error of Kurtosis    .478            .478        .478
Minimum                   31.74           21.84       27.20
Maximum                   75.85           70.07       75.10

GROUP: Control
                          Self_Efficacy   Verbal      DAFS
N Valid                   100             100         100
N Missing                 0               0           0
Mean                      48.9764         52.8810     51.2481
Median                    47.7576         52.7982     51.1623
Std. Deviation            10.42036        9.64866     8.55991
Skewness                  .107            -.211       -.371
Std. Error of Skewness    .241            .241        .241
Kurtosis                  .245            -.138       .469
Std. Error of Kurtosis    .478            .478        .478
Minimum                   19.37           29.89       28.44
Maximum                   73.64           76.53       69.01

[Histogram of Verbal scores for the health training group (Mean = 50.84, Std. Dev. = 9.34, N = 100), showing the low outlying value]

Inspection of the within-group histograms and z scores for each outcome suggests the presence of an outlying value in the health training group for self-efficacy (z = 3.0) and verbal performance (z = −3.1). The outlying value for verbal performance can be seen in the histogram in Table 6.11. Note, though, that when each of the outlying cases is temporarily removed, there is little impact on study results, as the means for the health training group for self-efficacy and verbal performance change by less than 0.3 points. In addition, none of the statistical inference decisions (i.e., reject or retain the null) is changed by inclusion or exclusion of these cases. So, these two cases are retained for the entire analysis.
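This kind of within-group z-score screening and sensitivity check is easy to script outside of SPSS. A rough Python/NumPy sketch, using simulated scores with one planted low value rather than the actual SeniorWISE data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Made-up verbal scores for one group, with a planted low value to mimic
# the outlying case described in the text (illustrative data only)
verbal = rng.normal(50.8, 9.3, size=100)
verbal[0] = 21.8

# Within-group z scores; flag |z| > 2.5 as potential outliers
z = (verbal - verbal.mean()) / verbal.std(ddof=1)
flagged = np.flatnonzero(np.abs(z) > 2.5)

# Sensitivity check: how much does the group mean move without flagged cases?
mean_shift = verbal.mean() - np.delete(verbal, flagged).mean()
```

With only a few mild outliers among 100 cases, the shift in the group mean is small, mirroring the "less than 0.3 points" result reported above.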

We also checked for the presence of multivariate outliers by obtaining the within-group Mahalanobis distance for each participant. These distances are obtained by the REGRESSION procedure shown in Table 6.10. Note here that “case id” serves as the dependent variable (which is of no consequence), and the three predictor variables in this equation are the three dependent variables appearing in the MANOVA. Johnson and Wichern (2007) note that these distances, if multivariate normality holds, approximately follow a chi-square distribution with degrees of freedom equal to, in this context, the number of dependent variables (p), with this approximation improving for larger samples. A common guide, then, is to consider a multivariate outlier to be present when an obtained Mahalanobis distance exceeds a chi-square critical value at a conservative alpha (.001) with p degrees of freedom. For this example, the chi-square critical value (.001, 3) = 16.268, as obtained from Appendix A, Table A.1. From our regression results, we ignore everything in this analysis except for the Mahalanobis distances. The largest such value obtained, 11.36, does not exceed the critical value of 16.268. Thus, no multivariate outliers are indicated.
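Readers without SPSS can compute within-group Mahalanobis distances directly. The Python/SciPy sketch below follows the same flagging rule (chi-square critical value at alpha = .001 with p = 3 df) on simulated data; the means and covariances used are loosely patterned on the example but are not the SeniorWISE scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated within-group data: n = 100 cases on p = 3 outcomes (illustrative)
X = rng.multivariate_normal(mean=[58.5, 60.2, 59.2],
                            cov=[[85, 30, 30], [30, 93, 35], [30, 35, 95]],
                            size=100)

# Squared Mahalanobis distance of each case from its group centroid
diffs = X - X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', diffs, S_inv, diffs)

# Chi-square critical value at alpha = .001 with p = 3 df (about 16.27)
critical = stats.chi2.isf(0.001, df=3)
outliers = d2 > critical   # boolean flags, one per case
```

A useful side fact for checking such code: with the sample mean and sample covariance (ddof = 1), the squared distances always average exactly (n − 1)p / n, here 2.97.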

The formal assumptions for the MANOVA procedure also seem to be satisfied. The values for skewness and kurtosis, which are all close to zero as shown in Table 6.11, as well as inspection of each of the nine histograms (not shown), do not suggest substantial departures from univariate normality. We also used the Shapiro–Wilk statistic to test the normality assumption. Using a Bonferroni adjustment for the nine tests yields an alpha level of about .0056, and as each p value from these tests exceeded this alpha level, there is no reason to believe that the normality assumption is violated.
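The nine Shapiro–Wilk tests with a Bonferroni-adjusted alpha can be scripted as follows. This Python/SciPy sketch uses simulated normal data in place of the real scores; only the adjustment logic is taken from the text:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated scores: 3 groups x 3 outcomes, each drawn from a normal model
groups = {g: rng.normal(55, 9, size=(100, 3))
          for g in ("memory", "health", "control")}

# Bonferroni-adjusted alpha for the nine Shapiro-Wilk tests: .05 / 9
alpha = 0.05 / 9          # about .0056
violations = []
for name, data in groups.items():
    for j in range(data.shape[1]):       # one test per group-outcome cell
        w, p = stats.shapiro(data[:, j])
        if p < alpha:
            violations.append((name, j))
```
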

We previously noted that group variability is similar for each outcome, and the result of Box's M test (p = .054) for equal variance-covariance matrices, as shown in Table 6.12, does not indicate a violation of this assumption. Note, though, that because of the relatively large sample size (N = 300), this test is quite powerful. As such, it is often recommended that an alpha of .01 be used for this test when large sample sizes are present. In addition, Levene's test for equal group variances for each variable considered separately does not indicate a violation for any of the outcomes (the smallest p value is .118 for DAFS). Further, the study design, as described, does not suggest any violations of the independence assumption, in part because treatments were individually administered to participants, who also completed posttest measures individually.

6.11.3 Primary Analysis

Table 6.12 shows the SPSS GLM results for the MANOVA. The overall multivariate null hypothesis is rejected at the .05 level, Wilks' Λ, F(6, 590) = 14.79, p < .001, indicating the presence of group differences. The multivariate effect size measure, eta square, indicates that the proportion of variance between groups on the set of outcomes is .13. Univariate F tests for each dependent variable, conducted using an alpha level of .05 / 3, or .0167, show that group differences are present for self-efficacy (F[2, 297] = 29.57, p < .001), verbal performance (F[2, 297] = 26.71, p < .001), and DAFS (F[2, 297] = 19.96, p < .001). Further, the univariate effect size measure, eta square, shown in Table 6.12, indicates that the proportion of variance explained by the treatment is .17 for self-efficacy, .15 for verbal performance, and .12 for DAFS.
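These F ratios and eta-squared values can be recovered directly from the sums of squares reported in Table 6.12. A small Python check, with the values keyed in from the output:

```python
# Reproducing the univariate F ratios and eta-squared values from the sums
# of squares in Table 6.12: outcome -> (SS_group, df_group, SS_error, df_error)
ss = {
    "self_efficacy": (5177.087, 2, 25999.549, 297),
    "verbal":        (4872.957, 2, 27088.399, 297),
    "dafs":          (3642.365, 2, 27102.923, 297),
}

results = {}
for outcome, (ssg, dfg, sse, dfe) in ss.items():
    f = (ssg / dfg) / (sse / dfe)   # F = MS_group / MS_error
    eta_sq = ssg / (ssg + sse)      # proportion of variance explained
    results[outcome] = (round(f, 2), round(eta_sq, 3))
```

This reproduces F = 29.57, 26.71, and 19.96, with eta squares of .166, .152, and .118, matching the GLM output.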

We then use the Tukey procedure to conduct pairwise comparisons using an alpha of .0167 for each outcome. For each dependent variable, there is no statistically significant difference in means between the health training and control groups. Further, the memory training group has higher population means than each of the other groups for all outcomes. For self-efficacy, the confidence intervals for the difference in means indicate that the memory training group population mean is about 4.20 to 11.51 points greater than the mean for the health training group and about 5.87 to 13.19 points greater than the control group mean. For verbal performance, the intervals indicate that the memory training group mean is about 5.65 to 13.12 points greater than the mean

Table 6.12: Selected SPSS GLM Output for the Three-Group MANOVA Example

Box's Test of Equality of Covariance Matrices(a)
Box's M    21.047
F          1.728
df1        12
df2        427474.385
Sig.       .054
Tests the null hypothesis that the observed covariance matrices of the dependent variables are equal across groups.
a. Design: Intercept + GROUP

Levene's Test of Equality of Error Variances(a)
                F       df1   df2   Sig.
Self_Efficacy   1.935   2     297   .146
Verbal          .115    2     297   .892
DAFS            2.148   2     297   .118
Tests the null hypothesis that the error variance of the dependent variable is equal across groups.
a. Design: Intercept + GROUP

Multivariate Tests(a)
Effect: GROUP            Value   F         Hypothesis df   Error df   Sig.   Partial Eta Squared
Pillai's Trace           .250    14.096    6.000           592.000    .000   .125
Wilks' Lambda            .756    14.791b   6.000           590.000    .000   .131
Hotelling's Trace        .316    15.486    6.000           588.000    .000   .136
Roy's Largest Root       .290    28.660c   3.000           296.000    .000   .225
a. Design: Intercept + GROUP
b. Exact statistic
c. The statistic is an upper bound on F that yields a lower bound on the significance level.

Tests of Between-Subjects Effects
Source   Dependent Variable   Type III Sum of Squares   df    Mean Square   F        Sig.   Partial Eta Squared
GROUP    Self_Efficacy        5177.087                  2     2588.543      29.570   .000   .166
         Verbal               4872.957                  2     2436.478      26.714   .000   .152
         DAFS                 3642.365                  2     1821.183      19.957   .000   .118
Error    Self_Efficacy        25999.549                 297   87.541
         Verbal               27088.399                 297   91.207
         DAFS                 27102.923                 297   91.256

Multiple Comparisons
Tukey HSD                                                                                    98.33% Confidence Interval
Dependent Variable   (I) GROUP         (J) GROUP         Mean Difference (I-J)   Std. Error   Sig.   Lower Bound   Upper Bound
Self_Efficacy        Memory Training   Control           9.5289*                 1.32318      .000   5.8727        13.1850
                     Health Training   Control           1.6730                  1.32318      .417   -1.9831       5.3291
Verbal               Memory Training   Health Training   9.3844*                 1.35061      .000   5.6525        13.1163
                     Memory Training   Control           7.3463*                 1.35061      .000   3.6144        11.0782
                     Health Training   Control           -2.0381                 1.35061      .288   -5.7700       1.6938
DAFS                 Memory Training   Health Training   6.7423*                 1.35097      .000   3.0094        10.4752
                     Memory Training   Control           7.9034*                 1.35097      .000   4.1705        11.6363
                     Health Training   Control           1.1612                  1.35097      .666   -2.5717       4.8940
Based on observed means.
The error term is Mean Square(Error) = 91.256.
* The mean difference is significant at the .0167 level.

for the health training group and about 3.61 to 11.08 points greater than the control

group mean. For DAFS, the intervals indicate that the memory training group mean

is about 3.01 to 10.48 points greater than the mean for the health training group and

about 4.17 to 11.64 points greater than the control group mean. Thus, across all outcomes, the lower limits of the confidence intervals suggest that individuals assigned

to the memory training group score, on average, at least 3 points greater than the other

groups in the population.
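The Tukey intervals reported by SPSS can be approximated from the studentized range distribution. A hedged Python/SciPy sketch for one contrast (memory training vs. control, self-efficacy), with the mean difference, MSE, group size, and error df keyed in from the GLM output; scipy.stats.studentized_range requires SciPy 1.7 or later:

```python
import math
from scipy.stats import studentized_range

# Values taken from the GLM output: mean difference, mean square error,
# per-group n, number of groups, and error degrees of freedom
diff, mse, n, k, df_error = 9.5289, 87.541, 100, 3, 297

# Critical value of the studentized range at alpha = .0167
q_crit = studentized_range.isf(0.0167, k, df_error)

# Tukey interval for equal group sizes: difference +/- q * sqrt(MSE / n)
half_width = q_crit * math.sqrt(mse / n)
ci = (diff - half_width, diff + half_width)   # close to the reported (5.87, 13.19)
```
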

Note that if you wish to report the Cohen's d effect size measure, you need to compute these manually. Remember that the formula for Cohen's d is the raw score difference in means between two groups divided by the square root of the mean square error from the one-way ANOVA table for a given outcome. To illustrate two such calculations, consider the contrast between the memory and health training groups for self-efficacy. The Cohen's d for this difference is 7.8559 / √87.541 = 0.84, indicating that this difference in means is .84 standard deviations (conventionally considered a large effect). For the second example, Cohen's d for the difference in verbal performance means between the memory and health training groups is 9.3844 / √91.207 = 0.98, again indicative of a large effect by conventional standards.
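These two hand calculations are easy to verify in a few lines of Python:

```python
import math

# Cohen's d = raw mean difference / sqrt(MSE), using the values reported above
d_self_efficacy = 7.8559 / math.sqrt(87.541)   # memory vs. health, self-efficacy
d_verbal = 9.3844 / math.sqrt(91.207)          # memory vs. health, verbal
```

Rounding to two decimals gives 0.84 and 0.98, matching the text.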

Having completed this example, we now present an example results section from this

analysis, followed by an analysis summary for one-way MANOVA where the focus is

on examining effects for each dependent variable.


6.12 EXAMPLE RESULTS SECTION FOR ONE-WAY MANOVA

The goal of this study was to determine if at-risk older adults who were randomly

assigned to receive memory training have greater mean posttest scores on memory

self-efficacy, verbal memory performance, and daily functional status than individuals who were randomly assigned to receive a health intervention or a wait-list

control condition. A one-way multivariate analysis of variance (MANOVA) was

conducted for three dependent variables (i.e., memory self-efficacy, verbal performance, and functional status) with type of training (memory, health, and none)

serving as the independent variable. Prior to conducting the formal MANOVA procedures, the data were examined for univariate and multivariate outliers. Two such

observations were found, but they did not impact study results. We determined this

by recomputing group means after temporarily removing each outlying observation

and found small differences between these means and the means based on the entire

sample (less than three-tenths of a point for each mean). Similarly, temporarily

removing each outlier and rerunning the MANOVA indicated that neither observation changed study findings. Thus, we retained all 300 observations throughout the

analyses.

We also assessed whether the MANOVA assumptions seemed tenable. Inspecting histograms, skewness and kurtosis values, and Shapiro–Wilk test results did not indicate any material violations of the normality assumption. Further, Box's test provided support for the equality of covariance matrices assumption (i.e., p = .054). Similarly, examining the results of Levene's test for equality of variance provided support that the dispersion of scores for self-efficacy (p = .15), verbal performance (p = .89), and functional status (p = .12) was similar across the three groups. Finally, we did not consider there to be any violations of the independence assumption because the treatments were individually administered and participants responded to the outcome measures on an individual basis.

Table 1 displays the means for each of the treatment groups, which shows that participants in the memory training group scored, on average, highest on each dependent variable, with much lower mean scores observed in the health training and control groups. Group means differed on the set of dependent variables, Λ = .756, F(6, 590) = 14.79, p < .001. Given the interest in examining treatment effects for each outcome (as opposed to attempting to establish composite variables), we conducted a series of one-way ANOVAs for each outcome at the .05 / 3 (or .0167) alpha level. Group mean differences are present for self-efficacy (F[2, 297] = 29.6, p < .001), verbal performance (F[2, 297] = 26.7, p < .001), and functional status (F[2, 297] = 20.0, p < .001). Further, the values of eta square for each outcome suggest that treatment effects for self-efficacy (η² = .17), verbal performance (η² = .15), and functional status (η² = .12) are generally strong.

Table 2 presents information on the pairwise contrasts of interest. Comparisons of treatment means were conducted using the Tukey HSD approach, with an alpha of


Table 1: Group Means (SD) for the Dependent Variables (n = 100)

Group              Self-efficacy   Verbal performance   Functional status
Memory training    58.5 (9.2)      60.2 (9.7)           59.2 (9.7)
Health training    50.6 (8.3)      50.8 (9.3)           52.4 (10.3)
Control            49.0 (10.4)     52.9 (9.6)           51.2 (8.6)

Table 2: Pairwise Contrasts for the Dependent Variables

Dependent variable    Contrast             Difference in means (SE)   98.33% C.I.(a)
Self-efficacy         Memory vs. health    7.9* (1.32)                4.2, 11.5
                      Memory vs. control   9.5* (1.32)                5.9, 13.2
                      Health vs. control   1.7 (1.32)                 −2.0, 5.3
Verbal performance    Memory vs. health    9.4* (1.35)                5.7, 13.1
                      Memory vs. control   7.3* (1.35)                3.6, 11.1
                      Health vs. control   −2.0 (1.35)                −5.8, 1.7
Functional status     Memory vs. health    6.7* (1.35)                3.0, 10.5
                      Memory vs. control   7.9* (1.35)                4.2, 11.6
                      Health vs. control   1.2 (1.35)                 −2.6, 4.9

a. C.I. represents the confidence interval for the difference in means.
Note: * indicates a statistically significant difference (p < .0167) using the Tukey HSD procedure.

.0167 used for these contrasts. Table 2 shows that participants in the memory training group scored significantly higher, on average, than participants in both the health training and control groups for each outcome. No statistically significant mean differences were observed between the health training and control groups. Further, given that a raw score difference of 3 points on each of the similarly scaled variables represents the threshold between negligible and important mean differences, the confidence intervals indicate that, when differences are present, population differences are meaningful, as the lower bounds of all such intervals exceed 3. Thus, after receiving memory training, individuals, on average, have much greater self-efficacy, verbal performance, and daily functional status than those in the health training and control groups.

6.13 ANALYSIS SUMMARY

One-way MANOVA can be used to describe differences in means for multiple dependent variables among multiple groups. The design has one factor that represents group membership and two or more continuous dependent measures. MANOVA is used instead of multiple ANOVAs to provide better protection against the inflation of the overall type I error rate and may provide more power than a series of ANOVAs. The primary steps in a MANOVA analysis are:


I. Preliminary Analysis
A. Conduct an initial screening of the data.
1) Purpose: Determine if the summary measures seem reasonable and support the use of MANOVA. Also, identify the presence and pattern (if any) of missing data.
2) Procedure: Compute various descriptive measures for each group (e.g., means, standard deviations, medians, skewness, kurtosis, frequencies) on each of the dependent variables. Compute the bivariate correlations for the outcomes. If there are missing data, conduct a missing data analysis.
3) Decision/action: If the values of the descriptive statistics do not make sense, check data entry for accuracy. If all of the correlations are near zero, consider using a series of ANOVAs. If one or more correlations are very high (e.g., .8, .9), consider forming one or more composite variables. If there are missing data, consider strategies to address the missing data.

B. Conduct case analysis.
1) Purpose: Identify any problematic individual observations.
2) Procedure:
i) Inspect the distribution of each dependent variable within each group (e.g., via histograms) and identify apparent outliers. Scatterplots may also be inspected to examine linearity and bivariate outliers.
ii) Inspect z scores and Mahalanobis distances for each variable within each group. For the z scores, absolute values larger than perhaps 2.5 or 3, along with a judgment that a given value is distinct from the bulk of the scores, indicate an outlying value. Multivariate outliers are indicated when the Mahalanobis distance exceeds the corresponding critical value.
iii) If any potential outliers are identified, conduct a sensitivity study to determine the impact of one or more outliers on major study results.
3) Decision/action: If there are no outliers with excessive influence, continue with the analysis. If there are one or more observations with excessive influence, determine if there is a legitimate reason to discard the observations. If so, discard the observation(s) (documenting the reason) and continue with the analysis. If not, consider the use of variable transformations to attempt to minimize the effects of one or more outliers. If necessary, discuss any ambiguous conclusions in the report.

C. Assess the validity of the MANOVA assumptions.
1) Purpose: Determine if the standard MANOVA procedure is valid for the analysis of the data.
2) Some procedures:
i) Independence: Consider the sampling design and study circumstances to identify any possible violations.
ii) Multivariate normality: Inspect the distribution of each dependent variable in each group (via histograms) and inspect values for skewness and kurtosis for each group. The Shapiro–Wilk test statistic can also be used to test for nonnormality.
iii) Equal covariance matrices: Examine the standard deviations for each group as a preliminary assessment. Use Box's M test to assess if this assumption is tenable, keeping in mind that it requires the assumption of multivariate normality to be satisfied and, with large samples, may be an overpowered test of the assumption. If significant, examine Levene's test for equality of variance for each outcome to identify problematic dependent variables (which should also be conducted if univariate ANOVAs are the follow-up test to a significant MANOVA).
3) Decision/action:
i) Any nonnormal distributions and/or inequality of covariance matrices may be of substantive interest in their own right and should be reported and/or further investigated. If needed, consider the use of variable transformations to address these problems.
ii) Continue with the standard MANOVA analysis when there is no evidence of violations of any assumption or when there is evidence of a specific violation but the technique is known to be robust to the existing violation. If the technique is not robust to an existing violation that cannot be remedied with variable transformations, use an alternative analysis technique.

D. Test any preplanned contrasts.
1) Purpose: Test any strong a priori research hypotheses with maximum power.
2) Procedure: If there is rationale supporting group mean differences on two or three outcomes, test the overall multivariate null hypothesis for these outcomes using Wilks' Λ. If significant, use an ANOVA F test for each outcome with no alpha adjustment. For any significant ANOVAs, follow up (if more than two groups are present) with tests and interval estimates for all pairwise contrasts using the Tukey procedure.

II. Primary Analysis
A. Test the overall multivariate null hypothesis.
1) Purpose: Provide “protected testing” to help control the inflation of the overall type I error rate.
2) Procedure: Examine the test result for Wilks' Λ.
3) Decision/action: If the p-value associated with this test is sufficiently small, continue with further tests of specific contrasts. If the p-value is not small, do not continue with any further testing of specific contrasts.
B. If the overall null hypothesis has been rejected, test and estimate all post hoc contrasts of interest.
1) Purpose: Describe the differences among the groups for each of the dependent variables, while controlling the overall error rate.
2) Procedures:
i) Test the overall ANOVA null hypothesis for each dependent variable using a Bonferroni-adjusted alpha. (A conventional unadjusted alpha can be considered when the number of outcomes is relatively small, such as two or three.)


ii) For each dependent variable for which the overall univariate null hypothesis is rejected, follow up (if more than two groups are present) with tests and interval estimates for all pairwise contrasts using the Tukey procedure.
C. Report and interpret at least one of the following effect size measures.
1) Purpose: Indicate the strength of the relationship between the dependent variable(s) and the factor (i.e., group membership).
2) Procedure: Raw score differences in means should be reported. Other possibilities include (a) the proportion of generalized total variation explained by group membership for the set of dependent variables (multivariate eta square), (b) the proportion of variation explained by group membership for each dependent variable (univariate eta square), and/or (c) Cohen's d for two-group contrasts.

REFERENCES

Barcikowski, R.â•›S. (1981). Statistical power with group mean as the unit of analysis. Journal

of Educational Statistics, 6, 267–285.

Bock, R.â•›D. (1975). Multivariate statistical methods in behavioral research. New York, NY:

McGraw-Hill.

Box, G.E.P. (1949). AÂ€general distribution theory for a class of likelihood criteria. Biometrika,

36, 317–346.

Burstein, L. (1980). The analysis of multilevel data in educational research and evaluation.

Review of Research in Education, 8, 158–233.

Christensen, W.,Â€& Rencher, A. (1995, August). A comparison of Type IÂ€error rates and power

levels for seven solutions to the multivariate Behrens-Fisher problem. Paper presented at

the meeting of the American Statistical Association, Orlando,Â€FL.

Conover, W.â•›J., Johnson, M.â•›E.,Â€& Johnson, M.â•›M. (1981). Composite study of tests for homogeneity of variances with applications to the outer continental shelf bidding data. Technometrics, 23, 351–361.

Coombs, W., Algina, J.,Â€& Oltman, D. (1996). Univariate and multivariate omnibus hypothesis tests selected to control Type IÂ€error rates when population variances are not necessarily equal. Review of Educational Research, 66, 137–179.

DeCarlo, L.â•›T. (1997). On the meaning and use of kurtosis. Psychological Methods, 2, 292–307.

Everitt, B.â•›S. (1979). AÂ€Monte Carlo investigation of the robustness of Hotelling’s one and two

sample T2 tests. Journal of the American Statistical Association, 74, 48–51.

Glass, G.â•›C.,Â€& Hopkins, K. (1984). Statistical methods in education and psychology. Englewood Cliffs, NJ: Prentice-Hall.

Glass, G., Peckham, P.,Â€& Sanders, J. (1972). Consequences of failure to meet assumptions

underlying the fixed effects analysis of variance and covariance. Review of Educational

Research, 42, 237–288.

Glass, G.,Â€& Stanley, J. (1970). Statistical methods in education and psychology. Englewood

Cliffs, NJ: Prentice-Hall.

253

254

â†œæ¸€å±®

â†œæ¸€å±®

Assumptions in MANOVA

Gnanadesikan, R. (1977). Methods for statistical analysis of multivariate observations. New

York, NY: Wiley.

Hakstian, A.â•›R., Roed, J.â•›C.,Â€& Lind, J.â•›C. (1979). Two-sample T–2 procedure and the assumption of homogeneous covariance matrices. Psychological Bulletin, 86, 1255–1263.

Hays, W. (1963). Statistics for psychologists. New York, NY: Holt, RinehartÂ€& Winston.

Hedges, L. (2007). Correcting a statistical test for clustering. Journal of Educational and

Behavioral Statistics, 32, 151–179.

Henze, N., & Zirkler, B. (1990). A class of invariant consistent tests for multivariate normality. Communications in Statistics: Theory and Methods, 19, 3595–3618.

Holloway, L. N., & Dunn, O. J. (1967). The robustness of Hotelling's T². Journal of the American Statistical Association, 62(317), 124–136.

Hopkins, J. W., & Clay, P. P. F. (1963). Some empirical distributions of bivariate T² and homoscedasticity criterion M under unequal variance and leptokurtosis. Journal of the American Statistical Association, 58, 1048–1053.

Hykle, J., Stevens, J. P., & Markle, G. (1993, April). Examining the statistical validity of studies comparing cooperative learning versus individualistic learning. Paper presented at the annual meeting of the American Educational Research Association, Atlanta, GA.

Johnson, R. A., & Wichern, D. W. (1982). Applied multivariate statistical analysis. Englewood Cliffs, NJ: Prentice Hall.

Johnson, R. A., & Wichern, D. W. (2007). Applied multivariate statistical analysis (6th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

Kenny, D., & Judd, C. (1986). Consequences of violating the independence assumption in analysis of variance. Psychological Bulletin, 99, 422–431.

Kreft, I., & de Leeuw, J. (1998). Introducing multilevel modeling. Thousand Oaks, CA: Sage.

Lix, L. M., Keselman, J. C., & Keselman, H. J. (1996). Consequences of assumption violations revisited: A quantitative review of alternatives to the one-way analysis of variance. Review of Educational Research, 66, 579–619.

Looney, S. W. (1995). How to use tests for univariate normality to assess multivariate normality. American Statistician, 49, 64–70.

Mardia, K. V. (1970). Measures of multivariate skewness and kurtosis with applications. Biometrika, 57, 519–530.

Mardia, K. V. (1971). The effect of non-normality on some multivariate tests and robustness to nonnormality in the linear model. Biometrika, 58, 105–121.

Maxwell, S. E., & Delaney, H. D. (2004). Designing experiments and analyzing data: A model comparison perspective (2nd ed.). Mahwah, NJ: Lawrence Erlbaum.

McDougall, G. J., Becker, H., Pituch, K., Acee, T. W., Vaughan, P. W., & Delville, C. (2010a). Differential benefits of memory training for minority older adults. Gerontologist, 50, 632–645.

McDougall, G. J., Becker, H., Pituch, K., Acee, T. W., Vaughan, P. W., & Delville, C. (2010b). The SeniorWISE study: Improving everyday memory in older adults. Archives of Psychiatric Nursing, 24, 291–306.

Mecklin, C. J., & Mundfrom, D. J. (2003). On using asymptotic critical values in testing for multivariate normality. InterStat, available online at http://interstat.stat.vt.edu/InterStat/ARTICLES/2003/articles/J03001.pdf

Nel, D. G., & van der Merwe, C. A. (1986). A solution to the multivariate Behrens-Fisher problem. Communications in Statistics: Theory and Methods, 15, 3719–3735.

Chapter 6


Olson, C. L. (1973). A Monte Carlo investigation of the robustness of multivariate analysis of variance. Dissertation Abstracts International, 35, 6106B.

Olson, C. L. (1974). Comparative robustness of six tests in multivariate analysis of variance. Journal of the American Statistical Association, 69, 894–908.

Olson, C. L. (1976). On choosing a test statistic in MANOVA. Psychological Bulletin, 83, 579–586.

Rencher, A. C., & Christensen, W. F. (2012). Methods of multivariate analysis (3rd ed.). Hoboken, NJ: John Wiley & Sons.

Rummel, R. J. (1970). Applied factor analysis. Evanston, IL: Northwestern University Press.

Scariano, S., & Davenport, J. (1987). The effects of violations of the independence assumption in the one way ANOVA. American Statistician, 41, 123–129.

Scheffe, H. (1959). The analysis of variance. New York, NY: Wiley.

Small, N. J. H. (1980). Marginal skewness and kurtosis in testing multivariate normality. Applied Statistics, 29, 85–87.

Snijders, T., & Bosker, R. (1999). Multilevel analysis. Thousand Oaks, CA: Sage.

Stevens, J. P. (1979). Comment on Olson: Choosing a test statistic in multivariate analysis of variance. Psychological Bulletin, 86, 355–360.

Wilcox, R. R. (2012). Introduction to robust estimation and hypothesis testing (3rd ed.). Waltham, MA: Elsevier.

Wilk, M. B., Shapiro, S. S., & Chen, H. J. (1968). A comparative study of various tests of normality. Journal of the American Statistical Association, 63, 1343–1372.

Zwick, R. (1985). Nonparametric one-way multivariate analysis of variance: A computational approach based on the Pillai-Bartlett trace. Psychological Bulletin, 97, 148–152.

APPENDIX 6.1

Analyzing Correlated Observations*

Much has been written about correlated observations and about the fact that INDEPENDENCE of observations is an assumption for ANOVA and regression analysis. What is not apparent from reading most statistics books is how critical an assumption it is. Hays (1963) indicated over 40 years ago that violation of the independence assumption is very serious. Glass and Stanley (1970), in their textbook, discussed the critical importance of this assumption. Barcikowski (1981) showed that even a SMALL violation of the independence assumption can cause the actual alpha level to be several times greater than the nominal level. Kreft and de Leeuw (1998) note: "This means that if intraclass correlation is present, as it may be when we are dealing with clustered data, the assumption of independent observations in the traditional linear model is violated" (p. 9). The Scariano and Davenport (1987) table (Table 6.1) shows the dramatic effect dependence can have on the type I error rate. The problem, as Burstein (1980) pointed out more than 25 years ago, is that "most of what goes on in education occurs within some group context" (p. 158). This gives rise to nested data and hence correlated observations.* More generally, nested data occurs quite frequently in social science research. Social psychology often is focused on groups. In clinical psychology, if we are dealing with different types of psychotherapy, groups are involved. The hierarchical, or multilevel, linear model (Chapters 13 and 14) is a commonly used method for dealing with correlated observations.

* The authoritative book on ANOVA (Scheffe, 1959) states that one of the assumptions in ANOVA is statistical independence of the errors. But this is equivalent to independence of the observations (Maxwell & Delaney, 2004, p. 110).

Let us first turn to a simpler analysis, which makes practical sense if the effect anticipated (from previous research) or desired is at least MODERATE. With correlated data, we first compute the mean for each cluster, and then do the analysis on the means. Table 6.2, from Barcikowski (1981), shows that if the effect is moderate, then about 10 groups per treatment are necessary at the .10 alpha level for power = .80 when there are 10 participants per group. This implies that about eight or nine groups per treatment would be needed for power = .70. For a large effect size, only five groups per treatment are needed for power = .80. For a SMALL effect size, the number of groups per treatment for adequate power is much too large and impractical.
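The cluster-means analysis just described can be sketched in a few lines of Python (a hypothetical illustration we add here; the data values below are invented for demonstration and are not from the text):

```python
from statistics import mean, stdev
from math import sqrt

def cluster_means_t(groups_a, groups_b):
    """Two-sample t test carried out on cluster means.

    groups_a, groups_b: lists of clusters, each cluster a list of scores.
    The degrees of freedom are based on the number of clusters, not the
    number of participants, which is the point of the procedure.
    """
    ma = [mean(c) for c in groups_a]   # one mean per cluster
    mb = [mean(c) for c in groups_b]
    na, nb = len(ma), len(mb)
    sp2 = ((na - 1) * stdev(ma) ** 2 + (nb - 1) * stdev(mb) ** 2) / (na + nb - 2)
    t = (mean(ma) - mean(mb)) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical data: three clusters of participants per treatment
treat = [[12, 14, 13], [15, 16, 14], [11, 13, 12]]
control = [[10, 9, 11], [12, 11, 10], [9, 10, 8]]
t, df = cluster_means_t(treat, control)
```

Note that with three clusters per treatment the test has only 4 degrees of freedom, which is why the group counts discussed above matter so much for power.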

Now we consider a very important paper by Hedges (2007). The title of the paper is quite revealing: "Correcting a Significance Test for Clustering." He develops a correction for the t test in the context of randomly assigning intact groups to treatments. But the results have broader implications. Here we present modified information from his study, involving some results in the paper and some results not in the paper, but which were received from Dr. Hedges (nominal alpha = .05):

m (clusters)   n (S's per cluster)   Intraclass correlation   Actual rejection rate
 2             100                   .05                      .511
 2             100                   .10                      .626
 2             100                   .20                      .732
 2             100                   .30                      .784
 2              30                   .05                      .214
 2              30                   .10                      .330
 2              30                   .20                      .470
 2              30                   .30                      .553
 5              10                   .05                      .104
 5              10                   .10                      .157
 5              10                   .20                      .246
 5              10                   .30                      .316
10               5                   .05                      .074
10               5                   .10                      .098
10               5                   .20                      .145
10               5                   .30                      .189

In this table, we have m clusters assigned to each treatment and an assumed alpha level of .05. Note that it is n (the number of participants in each cluster), not m, that causes the alpha rate to skyrocket. Compare the actual alpha levels for intraclass correlation fixed at .10 as n varies from 100 to 5 (.626, .330, .157, and .098).
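The inflation shown in the table can be reproduced with a small Monte Carlo sketch (an illustration we add here, not part of the Hedges study). With 2 clusters of 30 per group, no true effect, and an intraclass correlation of .20, a naive two-sample t test rejects far more often than the nominal .05 (the table entry is .470):

```python
import random
from math import sqrt
from statistics import mean, stdev

def naive_rejection_rate(m, n, icc, reps=2000, crit=1.98, seed=1):
    """Rate at which an ordinary two-sample t test (nominal alpha = .05,
    crit is the two-tailed .05 cutoff for the relevant df) rejects when
    there is NO treatment effect but observations are clustered: each
    cluster shares a random effect with variance icc, residual variance
    is 1 - icc, so the total variance is 1."""
    rng = random.Random(seed)
    def group():
        scores = []
        for _ in range(m):
            b = rng.gauss(0, sqrt(icc))        # effect shared by the cluster
            scores += [b + rng.gauss(0, sqrt(1 - icc)) for _ in range(n)]
        return scores
    hits = 0
    for _ in range(reps):
        g1, g2 = group(), group()
        n1, n2 = len(g1), len(g2)
        sp2 = ((n1 - 1) * stdev(g1) ** 2 + (n2 - 1) * stdev(g2) ** 2) / (n1 + n2 - 2)
        t = (mean(g1) - mean(g2)) / sqrt(sp2 * (1 / n1 + 1 / n2))
        if abs(t) > crit:
            hits += 1
    return hits / reps

rate = naive_rejection_rate(m=2, n=30, icc=0.20)
```

The simulated rate lands in the neighborhood of the tabled .470, roughly nine times the nominal level.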

For equal cluster size (n), Hedges derives the following relationship between t (uncorrected for the cluster effect) and tA, the t corrected for the cluster effect:

tA = ct, with h degrees of freedom.

The correction factor is

c = √{[(N − 2) − 2(n − 1)ρ] / [(N − 2)(1 + (n − 1)ρ)]},

where ρ represents the intraclass correlation, and h = (N − 2) / [1 + (n − 1)ρ] (a good approximation).

To see the difference the correction factor and the reduced df can make, we consider an example. Suppose we have three groups of 10 participants in each of two treatment groups and that ρ = .10. The uncorrected t = 2.72 with df = 58, and this is significant at the .01 level for a two-tailed test. The corrected t = 1.94 with h = 30.5 df, and this is NOT even significant at the .05 level for a two-tailed test.
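The correction is easy to verify numerically; a minimal sketch reproducing the worked example above:

```python
from math import sqrt

def hedges_correction(t, N, n, rho):
    """Hedges' (2007) cluster correction for an uncorrected t statistic.

    N: total sample size, n: cluster size, rho: intraclass correlation.
    Returns the corrected statistic tA = c*t and the approximate df h.
    """
    c = sqrt(((N - 2) - 2 * (n - 1) * rho) / ((N - 2) * (1 + (n - 1) * rho)))
    h = (N - 2) / (1 + (n - 1) * rho)
    return c * t, h

# The example in the text: 3 clusters of 10 in each of two treatments,
# so N = 60, with rho = .10 and uncorrected t = 2.72 (df = 58)
tA, h = hedges_correction(t=2.72, N=60, n=10, rho=0.10)  # tA ~ 1.94, h ~ 30.5
```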

We now consider two practical situations where the results from the Hedges study can be useful. First, the study of teaching methods is a big area of concern in education. If we are considering two teaching methods, then we will have about 30 students in each class. Obviously, just two classes per method will yield inadequate power, but the modified information from the Hedges study shows that with just two classes per method and n = 30, the actual type I error rate is .33 for intraclass correlation = .10. So, for more than two classes per method, the situation will just get worse in terms of type I error. Now, suppose we wish to compare two types of counseling or psychotherapy. If we assign five groups of 10 participants each to each of the two types and intraclass correlation = .10 (and it could be larger), then the actual type I error is .157, not .05 as we thought. The modified information also covers the situation where the group size is smaller and more groups are assigned to each type. Now, consider the case where 10 groups of size n = 5 are assigned to each type. If intraclass correlation = .10, then actual type I error = .098. If intraclass correlation = .20, then actual type I error = .145, almost three times what we want it to be.

Hedges (2007) has compared the power of clustered means analysis to the power of

his adjusted t test when the effect is quite LARGE (one standard deviation). Here are

some results from his comparison:

                          Power
          n     m     Adjusted t     Cluster means
p = .10   10    2        .607            .265
          25    2        .765            .336
          10    3        .788            .566
          25    3        .909            .703
          10    4        .893            .771
          25    4        .968            .889

p = .20   10    2        .449            .201
          25    2        .533            .230
          10    3        .620            .424
          25    3        .710            .490
          10    4        .748            .609
          25    4        .829            .689

These results show that the power of cluster means analysis does not fare well when there are three or fewer means per treatment group, and this is for a large effect size (which is NOT representative of what one will generally encounter in practice). For a medium effect size (.5 SD), Barcikowski (1981) shows that for power > .80 you will need nine groups per treatment if group size is 30, for intraclass correlation = .10 at the .05 level.

So, the bottom line is that correlated observations occur very frequently in social science research, and researchers must take this into account in their analysis. The intraclass correlation is an index of how much the observations correlate, and an estimate of it (or at least an upper bound for it) needs to be obtained, so that the type I error rate is kept under control. If one is going to consider a cluster means analysis, then a table from Barcikowski (1981) indicates that one should have at least seven groups per treatment (with 30 observations per group) for power = .80 at the .10 level. One could probably get by with five or six groups for power = .70. The same table from Barcikowski shows that if group size is 10, then at least 10 groups per counseling method are needed for power = .80 at the .10 level. One could probably get by with eight groups per method for power = .70. Both of these situations assume we wish to detect at least a moderate effect size. Hedges' adjusted t has some potential advantages. For p = .10, his power analysis (presumably at the .05 level) shows that probably four groups of 30 in each treatment will yield adequate power (> .70). The reason we say "probably" is that power for a very large effect size is .968, and n = 25. The question is, for a medium effect size at the .10 level, will power be adequate? For p = .20, we believe we would need five groups per treatment.
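The intraclass correlation mentioned above can be estimated from a one-way ANOVA on the clustering variable. A sketch we add for illustration, using the standard ANOVA-based estimator for equal cluster sizes, ρ̂ = (MSB − MSW) / [MSB + (n − 1)MSW] (the data below are invented):

```python
from statistics import mean

def icc_anova(clusters):
    """ANOVA estimator of the intraclass correlation, equal cluster sizes.

    clusters: list of equally sized lists of scores.
    Returns (MSB - MSW) / (MSB + (n - 1) * MSW).
    """
    k = len(clusters)                 # number of clusters
    n = len(clusters[0])              # participants per cluster
    grand = mean(x for c in clusters for x in c)
    msb = n * sum((mean(c) - grand) ** 2 for c in clusters) / (k - 1)
    msw = sum((x - mean(c)) ** 2 for c in clusters for x in c) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Hypothetical classroom data: scores cluster strongly within classes,
# so the estimate should be close to 1
rho = icc_anova([[10, 11, 12], [20, 21, 22], [30, 31, 32]])
```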

Barcikowski (1981) has indicated that intraclass correlations for teaching various subjects are generally in the .10 to .15 range. It seems to us that, for counseling or psychotherapy methods, assuming an intraclass correlation of .20 is prudent. Snijders and Bosker (1999) indicated that in the social sciences intraclass correlations are generally in the 0 to .4 range, and often narrower bounds can be found.


In finishing this appendix, we think it is appropriate to quote from Hedges' (2007) conclusion:

Cluster randomized trials are increasingly important in education and the social and policy sciences. However, these trials are often improperly analyzed by ignoring the effects of clustering on significance tests. . . . This article considered only t tests under a sampling model with one level of clustering. The generalization of the methods used in this article to designs with additional levels of clustering and more complex analyses would be desirable. (p. 173)

APPENDIX 6.2

Multivariate Test Statistics for Unequal Covariance Matrices

The two-group test statistic that should be used when the population covariance matrices are not equal, especially with sharply unequal group sizes, is

T*² = (ȳ₁ − ȳ₂)′ [S₁/n₁ + S₂/n₂]⁻¹ (ȳ₁ − ȳ₂).

This statistic must be transformed, and various critical values have been proposed (see Coombs et al., 1996). An important Monte Carlo study comparing seven solutions to the multivariate Behrens–Fisher problem is by Christensen and Rencher (1995). They considered 2, 5, and 10 variables (p), and the data were generated such that the population covariance matrix for group 2 was d times the covariance matrix for group 1 (d was set at 3 and 9). The sample sizes for the different p values are given here:

           p = 2     p = 5     p = 10
n1 > n2    10:5      20:10     30:20
n1 = n2    10:10     20:20     30:30
n1 < n2    10:20     20:40     30:60

Figure 6.2 shows important results from their study. They recommended the Kim and the Nel and van der Merwe procedures because they are conservative and have good power relative to the other procedures. To this writer, the Yao procedure is also fairly good, although slightly liberal. Importantly, however, all the highest error rates for the Yao procedure (including the three outliers) occurred when the variables were uncorrelated. This implies that the adjusted power of the Yao procedure (which is somewhat low for n1 > n2) would be better for correlated variables. Finally, for test statistics for the k-group MANOVA case, see Coombs et al. (1996) for appropriate references.


Figure 6.2  Results from a simulation study comparing the performance of methods when unequal covariance matrices are present (from Christensen and Rencher, 1995). [Figure not reproduced: the upper panel gives box-and-whisker plots of type I error rates for the Hotelling, Bennett, James, Yao, Johansen, Nel and van der Merwe, Hwang and Paulson, and Kim procedures; the lower panel gives average alpha-adjusted power for each procedure under the n1 = n2, n1 > n2, and n1 < n2 conditions.]

The approximate test by Nel and van der Merwe (1986) uses T*², which is approximately distributed as T²(p, ν), with

ν = {tr(Se²) + [tr(Se)]²} / {(n₁ − 1)⁻¹[tr(V₁²) + (tr V₁)²] + (n₂ − 1)⁻¹[tr(V₂²) + (tr V₂)²]},

where Se = V₁ + V₂.

SPSS Matrix Procedure Program for Calculating Hotelling's T² and ν (KNU) for the Nel and van der Merwe Modification, and Selected Output

MATRIX.
COMPUTE S1 = {23.013, 12.366, 2.907; 12.366, 17.544, 4.773; 2.907, 4.773, 13.963}.
COMPUTE S2 = {4.362, .760, 2.362; .760, 25.851, 7.686; 2.362, 7.686, 46.654}.
COMPUTE V1 = S1/36.
COMPUTE V2 = S2/23.
COMPUTE TRACEV1 = TRACE(V1).
COMPUTE SQTRV1 = TRACEV1*TRACEV1.
COMPUTE TRACEV2 = TRACE(V2).
COMPUTE SQTRV2 = TRACEV2*TRACEV2.
COMPUTE V1SQ = V1*V1.
COMPUTE V2SQ = V2*V2.
COMPUTE TRV1SQ = TRACE(V1SQ).
COMPUTE TRV2SQ = TRACE(V2SQ).
COMPUTE SE = V1 + V2.
COMPUTE SESQ = SE*SE.
COMPUTE TRACESE = TRACE(SE).
COMPUTE SQTRSE = TRACESE*TRACESE.
COMPUTE TRSESQ = TRACE(SESQ).
COMPUTE SEINV = INV(SE).
COMPUTE DIFFM = {2.113, -2.649, -8.578}.
COMPUTE TDIFFM = T(DIFFM).
COMPUTE HOTL = DIFFM*SEINV*TDIFFM.
COMPUTE KNU = (TRSESQ + SQTRSE)/(1/36*(TRV1SQ + SQTRV1) + 1/23*(TRV2SQ + SQTRV2)).
PRINT S1.
PRINT S2.
PRINT HOTL.
PRINT KNU.
END MATRIX.

Matrix

Run MATRIX procedure

S1
 23.01300000   12.36600000    2.90700000
 12.36600000   17.54400000    4.77300000
  2.90700000    4.77300000   13.96300000

S2
  4.36200000     .76000000    2.36200000
   .76000000   25.85100000    7.68600000
  2.36200000    7.68600000   46.65400000

HOTL
 43.17860426

KNU
 40.57627238

END MATRIX
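The same computation can be sketched in Python with NumPy (our translation of the matrix program above, assuming NumPy is available); it mirrors the program line for line and reproduces the printed HOTL and KNU values:

```python
import numpy as np

S1 = np.array([[23.013, 12.366,  2.907],
               [12.366, 17.544,  4.773],
               [ 2.907,  4.773, 13.963]])
S2 = np.array([[ 4.362,  0.760,  2.362],
               [ 0.760, 25.851,  7.686],
               [ 2.362,  7.686, 46.654]])
d = np.array([2.113, -2.649, -8.578])   # mean difference vector

V1, V2 = S1 / 36, S2 / 23               # same divisors as the matrix program
Se = V1 + V2

# Hotelling's T*^2 for unequal covariance matrices (HOTL)
T2 = d @ np.linalg.inv(Se) @ d

# Nel and van der Merwe's approximate degrees of freedom (KNU)
def tr2(M):
    """tr(M^2) + [tr(M)]^2, the quantity appearing in the nu formula."""
    return np.trace(M @ M) + np.trace(M) ** 2

nu = tr2(Se) / (tr2(V1) / 36 + tr2(V2) / 23)
```

Running this gives T2 ≈ 43.1786 and nu ≈ 40.5763, matching the selected output.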


6.14 EXERCISES

1. Describe a situation or class of situations where dependence of the observations would be present.

2. An investigator has a treatment versus control group design with 30 participants per group. The intraclass correlation is calculated and found to be .20. If testing for significance at .05, estimate what the actual type I error rate is.

3. Consider a four-group study with three dependent variables. What does the homogeneity of covariance matrices assumption imply in this case?

4. Consider the following three MANOVA situations. Indicate whether you would be concerned in each case with the type I error rate associated with the overall multivariate test of mean differences. Suppose that for each case the p value for the multivariate test for homogeneity of dispersion matrices is smaller than the nominal alpha of .05.

(a)
      Gp 1           Gp 2           Gp 3
      n1 = 15        n2 = 15        n3 = 15
      |S1| = 4.4     |S2| = 7.6     |S3| = 5.9

(b)
      Gp 1           Gp 2
      n1 = 21        n2 = 57
      |S1| = 14.6    |S2| = 2.4

(c)
      Gp 1           Gp 2           Gp 3           Gp 4
      n1 = 20        n2 = 15        n3 = 40        n4 = 29
      |S1| = 42.8    |S2| = 20.1    |S3| = 50.2    |S4| = 15.6

5. Zwick (1985) collected data on incoming clients at a mental health center who

were randomly assigned to either an oriented group, which saw a videotape

describing the goals and processes of psychotherapy, or a control group. She

presented the following data on measures of anxiety, depression, and anger

that were collected in a 1-month follow-up:

   Oriented group (n1 = 20)              Control group (n2 = 26)
Anxiety  Depression  Anger          Anxiety  Depression  Anger
  285       325       165             168       190       160
   23        45        15             277       230        63
   40        85        18             153        80        29
  215       307        60             306       440       105
  110       110        50             252       350       175
   65       105        24             143       205        42
   43       160        44              69        55        10
  120       180        80             177       195        75
  250       335       185              73        57        32
   14        20         3              81       120         7
    0        15         5              63        63         0
    5        23        12              64        53        35
   75       303        95              88       125        21
   27       113        40             132       225         9
   30        25        28             122        60        38
  183       175       100             309       355       135
   47       117        46             147       135        83
  385       520        23             223       300        30
   83        95        26             217       235       130
   87        27         2              74        67        20
                                      258       185       115
                                      239       445       145
                                       78        40        48
                                       70        50        55
                                      188       165        87
                                      157       330        67

(a) Run the EXAMINE procedure on this data. Focusing on the Shapiro–Wilk test and doing each test at the .025 level, does there appear to be a problem with the normality assumption?

(b) Now, recall the statement in the chapter by Johnson and Wichern that lack of normality can be due to one or more outliers. Obtain the z scores for the variables in each group. Identify any cases having a z score greater than |2.5|.

(c) Which cases have z above this magnitude? For which variables do they occur? Remove any case from the Zwick data set having a z score greater than |2.5| and rerun the EXAMINE procedure. Is there still a problem with lack of normality?

(d) Look at the stem-and-leaf plots for the variables. What transformation(s) from Figure 6.1 might be helpful here? Apply the transformation(s) to the variables and rerun the EXAMINE procedure one more time. How many of the Shapiro–Wilk tests are now significant at the .025 level?


6. In Appendix 6.1 we illustrated what a difference Hedges' correction factor, a correction for clustering, can make for t and its reduced degrees of freedom. We illustrated this for ρ = .10. Show that, if ρ = .20, the effect is even more dramatic.

7. Consider Table 6.6. Show that the value of .035 for N1:N2 = 24:12 for nominal α = .05 for the positive condition makes sense. Also, show that the value = .076 for the negative condition makes sense.

Chapter 7

FACTORIAL ANOVA AND MANOVA

7.1 INTRODUCTION

In this chapter we consider the effect of two or more independent or classification variables (e.g., sex, social class, treatments) on a set of dependent variables. Four schematic two-way designs, where just the classification variables are shown, are given here:

Treatments
Gender         1      2      3
Male
Female

Teaching methods
Aptitude       1      2
Low
Average
High

Drugs
Diagnosis      1      2      3      4
Schizop.
Depressives

Stimulus complexity
Intelligence   Easy   Average   Hard
Average
Super

We first indicate what the advantages of a factorial design are over a one-way design.

We also remind you what an interaction means, and distinguish between two types of

interactions (ordinal and disordinal). The univariate equal cell size (balanced design)

situation is discussed first, after which we tackle the much more difficult disproportional (non-orthogonal or unbalanced) case. Three different ways of handling the

unequal n case are considered; it is indicated why we feel one of these methods is

generally superior. After this review of univariate ANOVA, we then discuss a multivariate factorial design, provide an analysis guide for factorial MANOVA, and apply

these analysis procedures to a fairly large data set (as most of the data sets provided

in the chapter serve instructional purposes and have very small sample sizes). We


also provide an example results section for factorial MANOVA and briefly discuss

three-way MANOVA, focusing on the three-way interaction. We conclude the chapter

by showing how discriminant analysis can be used in the context of a multivariate

factorial design. Syntax for running various analyses is provided along the way, and

selected output from SPSS is discussed.

7.2 ADVANTAGES OF A TWO-WAY DESIGN

1. A two-way design enables us to examine the joint effect of the independent variables on the dependent variable(s). We cannot get this information by running two

separate one-way analyses, one for each of the independent variables. If one of

the independent variables is treatments and the other some individual difference

characteristic (sex, IQ, locus of control, age, etc.), then a significant interaction

tells us that the superiority of one treatment over another depends on or is moderated by the individual difference characteristic. (An interaction means that the

effect one independent variable has on a dependent variable is not the same for

all levels of the other independent variable.) This moderating effect can take two

forms:

(a) The degree of superiority changes, but one subgroup always does better than another. To illustrate this, consider this ability by teaching methods design:

                    Teaching method
                  T1      T2      T3
    High ability  85      80      76
    Low ability   60      63      68

While the superiority of the high-ability students drops from 25 for T1 (i.e., 85 − 60) to 8 for T3 (76 − 68), high-ability students always do better than low-ability students. Because the order of superiority is maintained, in this example, with respect to ability, this is called an ordinal interaction. (Note that this does not hold for the treatment, as T1 works better for high-ability students but T3 is better for low-ability students, leading to the next point.)

(b) The superiority reverses; that is, one treatment is best with one group, but another treatment is better for a different group. A study by Daniels and Stevens (1976) provides an illustration of a disordinal interaction. For a group of college undergraduates, they considered two types of instruction: (1) a traditional, teacher-controlled (lecture) type and (2) a contract for grade plan. The students were classified as internally or externally controlled, using Rotter's scale. An internal orientation means that those individuals perceive that positive events occur as a consequence of their actions (i.e., they are in control), whereas external participants feel that positive and/or negative events occur more because of powerful others, or due to chance or fate. The design and the means for the participants on an achievement posttest in psychology are given here:

                               Instruction
Locus of control    Contract for grade    Teacher controlled
Internal                  50.52                 38.01
External                  36.33                 46.22

The moderator variable in this case is locus of control, and it has a substantial effect on the efficacy of an instructional method. That is, the contract for grade method works better when participants have an internal locus of control, but, in a reversal, the teacher controlled method works better for those with an external locus of control. As such, when participant locus of control is matched to the teaching method (internals with contract for grade and externals with teacher controlled), they do quite well in terms of achievement; where there is a mismatch, achievement suffers.

This study also illustrates how a one-way design can lead to quite misleading results. Suppose Daniels and Stevens had just compared the two methods, ignoring locus of control. The means for achievement for the contract for grade plan and for teacher controlled are 43.42 and 42.11, nowhere near a significant difference. The conclusion would have been that teaching methods do not make a difference. The factorial study shows, however, that methods definitely do make a difference: a quite positive difference if participant locus of control is matched to teaching method, and an undesirable effect if there is a mismatch.

The general area of matching treatments to individual difference characteristics of participants is an interesting and important one, and is called aptitude–treatment interaction research. A classic text in this area is Aptitudes and Instructional Methods by Cronbach and Snow (1977).
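The masking effect described above is simple arithmetic on the cell means; a quick check (our illustration, using the cell means quoted from the study) shows how averaging over locus of control hides the crossover:

```python
# Cell means from the Daniels and Stevens (1976) example
means = {("internal", "contract"): 50.52, ("internal", "teacher"): 38.01,
         ("external", "contract"): 36.33, ("external", "teacher"): 46.22}

# Marginal (one-way) means for each instruction method
contract = (means[("internal", "contract")] + means[("external", "contract")]) / 2
teacher = (means[("internal", "teacher")] + means[("external", "teacher")]) / 2

# Simple effects of method within each locus group; opposite signs
# indicate the disordinal (crossover) interaction
internal_diff = means[("internal", "contract")] - means[("internal", "teacher")]
external_diff = means[("external", "contract")] - means[("external", "teacher")]
```

The marginal means come out 43.425 and 42.115 (nearly equal), while the simple effects are +12.51 and −9.89: the one-way comparison hides two substantial effects in opposite directions.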

2. In addition to allowing you to detect the presence of interactions, a second advantage of factorial designs is that they can lead to more powerful tests by reducing

error (within-cell) variance. If performance on the dependent variable is related

to the individual difference characteristic (i.e., the blocking variable), then the

reduction in error variance can be substantial. We consider a hypothetical sex ×

treatment design to illustrate:

                   T1                           T2
Males     18, 19, 21, 20, 22 (2.5)    17, 16, 16, 18, 15 (1.3)
Females   11, 12, 11, 13, 14 (1.7)     9,  9, 11,  8,  7 (2.2)


Notice that within each cell there is very little variability. The within-cell variances

quantify this, and are given in parentheses. The pooled within-cell error term for

the factorial analysis is quite small, 1.925. On the other hand, if this had been

considered as a two-group design (i.e., without gender), the variability would be

much greater, as evidenced by the within-group (treatment) variances for T1 and

T2 of 18.766 and 17.6, leading to a pooled error term for the F test of the treatment

effect of 18.18.
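The error-variance reduction can be checked directly from the cell data; a small sketch we add here, which reproduces the cell, group, and pooled variances quoted above:

```python
from statistics import variance  # sample variance, n - 1 denominator

cells = {
    ("M", "T1"): [18, 19, 21, 20, 22], ("M", "T2"): [17, 16, 16, 18, 15],
    ("F", "T1"): [11, 12, 11, 13, 14], ("F", "T2"): [9, 9, 11, 8, 7],
}

# Pooled within-cell variance for the factorial design (equal cell sizes,
# so pooling is a simple average of the four cell variances)
pooled_cell = sum(variance(v) for v in cells.values()) / len(cells)

# Pooled within-group variance if gender is ignored (two-group design)
t1 = cells[("M", "T1")] + cells[("F", "T1")]
t2 = cells[("M", "T2")] + cells[("F", "T2")]
pooled_group = ((len(t1) - 1) * variance(t1) + (len(t2) - 1) * variance(t2)) \
    / (len(t1) + len(t2) - 2)
```

This gives 1.925 for the factorial error term versus about 18.18 when gender is ignored, the roughly ninefold difference noted in the text.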

7.3 UNIVARIATE FACTORIAL ANALYSIS

7.3.1 Equal Cell n (Orthogonal) Case

When there is an equal number of participants in each cell of a factorial design, the sums of squares for the different effects (main and interaction) are uncorrelated (orthogonal). This is helpful when interpreting results, because significance for one effect implies nothing about significance for another. This provides for a clean and clear interpretation of results. It puts us in the same nice situation we had with uncorrelated planned comparisons, which we discussed in Chapter 5.

Overall and Spiegel (1969), in a classic paper on analyzing factorial designs, discussed three basic methods of analysis:

Method 1: Adjust each effect for all other effects in the design to obtain its unique contribution (regression approach), which is referred to as type III sums of squares in SAS and SPSS.

Method 2: Estimate the main effects ignoring the interaction, but estimate the interaction effect adjusting for the main effects (experimental method), which is referred to as type II sums of squares.

Method 3: Based on theory or previous research, establish an ordering for the effects, and then adjust each effect only for those effects preceding it in the ordering (hierarchical approach), which is referred to as type I sums of squares.

Note that the default method in SPSS is to provide type III (Method 1) sums of squares, whereas SAS, by default, provides both type III (Method 1) and type I (Method 3) sums of squares.

For equal cell size designs, all three of these methods yield the same results, that is, the same F tests. Therefore, it will not make any difference, in terms of the conclusions a researcher draws, which of these methods is used. For unequal cell sizes, however, these methods can yield quite different results, and this is what we consider shortly. First, however, we consider an example with equal cell sizes to show two things: (a) that the methods do indeed yield the same results, and (b) that the effects are uncorrelated, which we demonstrate using effect coding for the factors and running the problem as a regression analysis.


Example 7.1: Two-Way Equal Cell n

Consider the following 2 × 3 factorial data set:

                    B
A         1           2           3
1      3, 5, 6     2, 4, 8     11, 7, 8
2      9, 14, 5    6, 7, 7     9, 8, 10

In Table 7.1 we give SPSS syntax for running the analysis. In the general linear model commands, we indicate the factors after the keyword BY. Method 3, the hierarchical approach, means that a given effect is adjusted for all effects to its left in the ordering. The effects here would go in the following order: FACA (factor A), FACB (factor B), FACA by FACB. Thus, the A main effect is not adjusted for anything. The B main effect is adjusted for the A main effect, and the interaction is adjusted for both main effects.

Table 7.1  SPSS Syntax and Selected Output for Two-Way Equal Cell n ANOVA

TITLE 'TWO WAY ANOVA EQUAL N'.
DATA LIST FREE/FACA FACB DEP.
BEGIN DATA.
1 1 3   1 1 5   1 1 6
1 2 2   1 2 4   1 2 8
1 3 11  1 3 7   1 3 8
2 1 9   2 1 14  2 1 5
2 2 6   2 2 7   2 2 7
2 3 9   2 3 8   2 3 10
END DATA.
LIST.
GLM DEP BY FACA FACB
 /PRINT = DESCRIPTIVES.

Tests of Significance for DEP using UNIQUE sums of squares (known as type III sums of squares)

Tests of Between-Subjects Effects
Dependent Variable: DEP

Source            Type III Sum of Squares   df   Mean Square        F     Sig.
Corrected Model          69.167a             5       13.833      2.204    .122
Intercept               924.500              1      924.500    147.265    .000
FACA                     24.500              1       24.500      3.903    .072
FACB                     30.333              2       15.167      2.416    .131
FACA * FACB              14.333              2        7.167      1.142    .352
Error                    75.333             12        6.278
Total                  1069.000             18
Corrected Total         144.500             17

a. R Squared = .479 (Adjusted R Squared = .261)

Tests of Significance for DEP using SEQUENTIAL sums of squares (known as type I sums of squares)

Tests of Between-Subjects Effects
Dependent Variable: DEP

Source            Type I Sum of Squares     df   Mean Square        F     Sig.
Corrected Model          69.167a             5       13.833      2.204    .122
Intercept               924.500              1      924.500    147.265    .000
FACA                     24.500              1       24.500      3.903    .072
FACB                     30.333              2       15.167      2.416    .131
FACA * FACB              14.333              2        7.167      1.142    .352
Error                    75.333             12        6.278
Total                  1069.000             18
Corrected Total         144.500             17

a. R Squared = .479 (Adjusted R Squared = .261)
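As a check on the output in Table 7.1, the sums of squares for this equal cell size design can be computed by hand; a short Python sketch we add for illustration:

```python
from statistics import mean

# Cell data for the 2 x 3 design: (A level, B level) -> scores
cells = {(1, 1): [3, 5, 6], (1, 2): [2, 4, 8], (1, 3): [11, 7, 8],
         (2, 1): [9, 14, 5], (2, 2): [6, 7, 7], (2, 3): [9, 8, 10]}

scores = [y for v in cells.values() for y in v]
gm = mean(scores)                                # grand mean

# Marginal means for each level of A and of B
a_mean = {a: mean(y for (aa, b), v in cells.items() if aa == a for y in v)
          for a in (1, 2)}
b_mean = {b: mean(y for (a, bb), v in cells.items() if bb == b for y in v)
          for b in (1, 2, 3)}

ss_a = 9 * sum((m - gm) ** 2 for m in a_mean.values())      # 9 scores per A level
ss_b = 6 * sum((m - gm) ** 2 for m in b_mean.values())      # 6 scores per B level
ss_cells = 3 * sum((mean(v) - gm) ** 2 for v in cells.values())
ss_ab = ss_cells - ss_a - ss_b                              # interaction
ss_error = sum((y - mean(v)) ** 2 for v in cells.values() for y in v)
```

This reproduces SSA = 24.500, SSB = 30.333, SSAB = 14.333, and SSE = 75.333 from the table.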

The default in SPSS is to use Method 1 (type III sums of squares), which is obtained by the syntax shown in Table 7.1. Recall that this method obtains the unique contribution of each effect, adjusting for all other effects. Method 3 (type I sums of squares) is implemented in SPSS by inserting the line /METHOD = SSTYPE(1) immediately below the GLM line appearing in Table 7.1. Note, however, that the F ratios for Methods 1 and 3 are identical (see Table 7.1). Why? Because the effects are uncorrelated due to the equal cell sizes, and therefore no adjustment takes place. Thus, the F test for an effect "adjusted" is the same as for the effect unadjusted. To show that the effects are indeed uncorrelated, we used effect coding as described in Table 7.2 and ran the problem as a regression analysis. The coding scheme is explained there.

Table 7.2: Regression Analysis of Two-Way Equal n ANOVA With Effect Coding and Correlation Matrix for the Effects

TITLE 'EFFECT CODING FOR EQUAL CELL SIZE 2-WAY ANOVA'.
DATA LIST FREE/Y A1 B1 B2 A1B1 A1B2.
BEGIN DATA.
3 1 1 0 1 0
5 1 1 0 1 0
6 1 1 0 1 0
2 1 0 1 0 1
4 1 0 1 0 1
8 1 0 1 0 1
11 1 -1 -1 -1 -1
7 1 -1 -1 -1 -1
8 1 -1 -1 -1 -1
9 -1 1 0 -1 0
14 -1 1 0 -1 0
5 -1 1 0 -1 0
6 -1 0 1 0 -1
7 -1 0 1 0 -1
7 -1 0 1 0 -1
9 -1 -1 -1 1 1
8 -1 -1 -1 1 1
10 -1 -1 -1 1 1
END DATA.
LIST.
REGRESSION DESCRIPTIVES = DEFAULT
 /VARIABLES = Y TO A1B2
 /DEPENDENT = Y
 /METHOD = ENTER.

Y       A1      (1) B1   B2      A1B1    A1B2
3.00    1.00    1.00     .00     1.00    .00
5.00    1.00    1.00     .00     1.00    .00
6.00    1.00    1.00     .00     1.00    .00
2.00    1.00    .00      1.00    .00     1.00
4.00    1.00    .00      1.00    .00     1.00
8.00    1.00    .00      1.00    .00     1.00
11.00   1.00    -1.00    -1.00   -1.00   -1.00
7.00    1.00    -1.00    -1.00   -1.00   -1.00
8.00    1.00    -1.00    -1.00   -1.00   -1.00
9.00    -1.00   1.00     .00     -1.00   .00
14.00   -1.00   1.00     .00     -1.00   .00
5.00    -1.00   1.00     .00     -1.00   .00
6.00    -1.00   .00      1.00    .00     -1.00
7.00    -1.00   .00      1.00    .00     -1.00
7.00    -1.00   .00      1.00    .00     -1.00
9.00    -1.00   -1.00    -1.00   1.00    1.00
8.00    -1.00   -1.00    -1.00   1.00    1.00
10.00   -1.00   -1.00    -1.00   1.00    1.00

Correlations

        Y       A1
Y       1.000   -.412
A1      -.412   1.000
B1      -.264   .000
B2      -.456   .000
A1B1    -.312   .000
A1B2    -.120   .000

(Continued)


Table 7.2: (Continued)

Correlations

        B1      B2          A1B1    A1B2
Y       -.264   -.456 (2)   -.312   -.120
A1      .000    .000        .000    .000
B1      1.000   .500        .000    .000
B2      .500    1.000       .000    .000
A1B1    .000    .000        1.000   .500
A1B2    .000    .000        .500    1.000

(1) For the first effect-coded variable (A1), the subjects in the first level of A are coded 1, with the subjects in the last level coded -1. Since there are 3 levels of B, two effect-coded variables are needed. The subjects in the first level of B are coded as 1s for variable B1, with the subjects in all other levels of B, except the last, coded as 0s. The subjects in the last level of B are coded as -1s. Similarly, the subjects in the second level of B are coded as 1s on the second effect-coded variable (B2 here), with the subjects in all other levels of B, except the last, coded as 0s. Again, the subjects in the last level of B are coded as -1s for B2. To obtain the variables needed to represent the interaction, that is, A1B1 and A1B2, multiply the corresponding coded variables (i.e., A1 × B1, A1 × B2).

(2) Note that the correlations between variables representing different effects are all 0. The only nonzero correlations are for the two variables that jointly represent the B main effect (B1 and B2) and for the two variables (A1B1 and A1B2) that jointly represent the AB interaction effect.

Predictor A1 represents factor A, predictors B1 and B2 represent factor B, and predictors A1B1 and A1B2 are the variables needed to represent the interaction between factors A and B. In the regression framework, we are using these predictors to explain variation on y. Note that the correlations between predictors representing different effects are all 0. This means that those effects are accounting for distinct parts of the variation on y; that is, we have an orthogonal partitioning of the y variation.
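This orthogonality can be checked directly from the codes. The following is a minimal sketch in plain Python (our own illustration, not part of the original SPSS analysis) that builds the effect-coded predictors of Table 7.2 and computes their correlations:

```python
from statistics import mean

def corr(x, z):
    # Pearson correlation from deviation cross products
    mx, mz = mean(x), mean(z)
    sxz = sum((a - mx) * (b - mz) for a, b in zip(x, z))
    sxx = sum((a - mx) ** 2 for a in x)
    szz = sum((b - mz) ** 2 for b in z)
    return sxz / (sxx * szz) ** 0.5

# Effect codes (A1, B1, B2) per cell of the 2 x 3 design,
# each repeated for the 3 cases in that cell (as in Table 7.2).
cells = [(1, 1, 0), (1, 0, 1), (1, -1, -1),
         (-1, 1, 0), (-1, 0, 1), (-1, -1, -1)]
A1 = [a for a, b1, b2 in cells for _ in range(3)]
B1 = [b1 for a, b1, b2 in cells for _ in range(3)]
B2 = [b2 for a, b1, b2 in cells for _ in range(3)]
A1B1 = [a * b for a, b in zip(A1, B1)]   # interaction codes
A1B2 = [a * b for a, b in zip(A1, B2)]

# Predictors representing different effects are uncorrelated
# under equal cell sizes ...
print(corr(A1, B1), corr(A1, A1B1), corr(B1, A1B2))   # 0.0 0.0 0.0
# ... while the two predictors for the same effect correlate .50.
print(corr(B1, B2), corr(A1B1, A1B2))                 # 0.5 0.5
```

With unequal cell sizes, the same computation would produce nonzero cross-effect correlations, which is exactly the confounding discussed for Example 7.2.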

In Table 7.3 we present sequential regression results that add one predictor variable at a time in the order indicated in the table. There, we explain how the sum of squares obtained for each effect is exactly the same as was obtained when the problem was run as a traditional ANOVA in Table 7.1.

Example 7.2: Two-Way Disproportional Cell Size

The data for our disproportional cell size example are given in Table 7.4, along with the effect coding for the predictors and the correlation matrix for the effects. Here there definitely are correlations among the effects. For example, the correlations between A1 (representing the A main effect) and B1 and B2 (representing the B main effect) are -.163 and -.275. This contrasts with the equal cell n case, where the correlations among the different effects were all 0 (Table 7.2). Thus, for disproportional cell sizes the sources of variation are confounded (mixed together). To determine how much unique variation on y a given effect accounts for, we must adjust or partial out how

Table 7.3: Sequential Regression Results for Two-Way Equal n ANOVA With Effect Coding

Model No. 1   Variable Entered: A1

  Analysis of Variance   Sum of Squares    DF   Mean Square   F Ratio
  Regression             24.500             1   24.500        3.267
  Residual               120.000           16   7.500

Model No. 2   Variable Added: B2

  Analysis of Variance   Sum of Squares    DF   Mean Square   F Ratio
  Regression             54.583             2   27.292        4.553
  Residual               89.917            15   5.994

Model No. 3   Variable Added: B1

  Analysis of Variance   Sum of Squares    DF   Mean Square   F Ratio
  Regression             54.833             3   18.278        2.854
  Residual               89.667            14   6.405

Model No. 4   Variable Added: A1B1

  Analysis of Variance   Sum of Squares    DF   Mean Square   F Ratio
  Regression             68.917             4   17.229        2.963
  Residual               75.583            13   5.814

Model No. 5   Variable Added: A1B2

  Analysis of Variance   Sum of Squares    DF   Mean Square   F Ratio
  Regression             69.167             5   13.833        2.204
  Residual               75.333            12   6.278

Note: The sum of squares (SS) for regression for A1, representing the A main effect, is the same as the SS for FACA in Table 7.1. Also, the additional SS for B1 and B2, representing the B main effect, is 54.833 - 24.5 = 30.333, the same as the SS for FACB in Table 7.1. Finally, the additional SS for A1B1 and A1B2, representing the AB interaction, is 69.167 - 54.833 = 14.334, the same as the SS for FACA by FACB in Table 7.1.
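The arithmetic in this note can be reproduced outside SPSS. The Python sketch below (our own illustration) fits the nested effect-coded regression models for the Table 7.2 data by solving the normal equations and recovers the sequential sums of squares:

```python
from statistics import mean

# Effect-coded data from Table 7.2: (y, A1, B1, B2);
# interaction codes are products of the main-effect codes.
rows = [(3, 1, 1, 0), (5, 1, 1, 0), (6, 1, 1, 0), (2, 1, 0, 1),
        (4, 1, 0, 1), (8, 1, 0, 1), (11, 1, -1, -1), (7, 1, -1, -1),
        (8, 1, -1, -1), (9, -1, 1, 0), (14, -1, 1, 0), (5, -1, 1, 0),
        (6, -1, 0, 1), (7, -1, 0, 1), (7, -1, 0, 1), (9, -1, -1, -1),
        (8, -1, -1, -1), (10, -1, -1, -1)]
y = [float(r[0]) for r in rows]
A1 = [r[1] for r in rows]
B1 = [r[2] for r in rows]
B2 = [r[3] for r in rows]
A1B1 = [a * b for a, b in zip(A1, B1)]
A1B2 = [a * b for a, b in zip(A1, B2)]

def regression_ss(cols):
    """SS regression (about the mean) for y on intercept + predictors,
    via the normal equations solved by Gauss-Jordan elimination."""
    X = [[1.0] + [c[i] for c in cols] for i in range(len(y))]
    k = len(X[0])
    M = [[sum(X[i][r] * X[i][c] for i in range(len(y))) for c in range(k)]
         + [sum(X[i][r] * y[i] for i in range(len(y)))] for r in range(k)]
    for p in range(k):
        for r in range(k):
            if r != p:
                f = M[r][p] / M[p][p]
                M[r] = [u - f * v for u, v in zip(M[r], M[p])]
    beta = [M[r][-1] / M[r][r] for r in range(k)]
    gm = mean(y)
    return sum((sum(bj * xj for bj, xj in zip(beta, X[i])) - gm) ** 2
               for i in range(len(y)))

ss_a = regression_ss([A1])
ss_ab = regression_ss([A1, B1, B2])
ss_full = regression_ss([A1, B1, B2, A1B1, A1B2])
print(round(ss_a, 3), round(ss_ab - ss_a, 3), round(ss_full - ss_ab, 3))
# sequential SS for A, B, and AB: 24.5 30.333 14.333
```

Because the predictors for different effects are orthogonal here, the order of entry does not matter; the increments match the ANOVA sums of squares in Table 7.1.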


much of that variation is explainable because of the effect's correlations with the other effects in the design. Recall that in Chapter 5 the same procedure was employed to determine the unique amount of between variation a given planned comparison accounts for in a set of correlated planned comparisons.

In Table 7.5 we present the control lines for running the disproportional cell size example, along with the Method 3 (type I sum of squares) and Method 1 (type III sum of squares) results. The F ratios for the interaction effect are the same, but the F ratios for the main effects are quite different. For example, had we used Method 3, we would have declared a significant B main effect at the .05 level, but with Method 1 (unique decomposition) the B main effect is not significant at the .05 level. Therefore, with unequal n designs the method used can clearly make a difference in the conclusions reached in the study. This raises the question of which of the three methods should be used for disproportional cell size factorial designs.
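To make the contrast concrete, the sketch below (again our own Python illustration) fits the effect-coded regression models for the disproportional data of Table 7.4. With effect (sum-to-zero) coding, the unique (type III) SS for an effect equals the full-model regression SS minus the SS of the model omitting that effect's predictors, while the sequential (type I) SS uses the order of entry; the interaction, entered last, has the same SS either way:

```python
from statistics import mean

# Cell data from Table 7.4: (A level, B level, scores).
cells = [(1, 1, [3, 5, 6]), (1, 2, [2, 4, 8]), (1, 3, [11, 7, 8, 6, 9]),
         (2, 1, [9, 14, 5, 11]), (2, 2, [6, 7, 7, 8, 10, 5, 6]),
         (2, 3, [9, 8, 10])]
y, A1, B1, B2 = [], [], [], []
for a, b, scores in cells:
    for s in scores:
        y.append(float(s))
        A1.append(1.0 if a == 1 else -1.0)
        B1.append({1: 1.0, 2: 0.0, 3: -1.0}[b])
        B2.append({1: 0.0, 2: 1.0, 3: -1.0}[b])
A1B1 = [a * b for a, b in zip(A1, B1)]
A1B2 = [a * b for a, b in zip(A1, B2)]

def regression_ss(cols):
    # SS regression (about the mean) via the normal equations
    X = [[1.0] + [c[i] for c in cols] for i in range(len(y))]
    k = len(X[0])
    M = [[sum(X[i][r] * X[i][c] for i in range(len(y))) for c in range(k)]
         + [sum(X[i][r] * y[i] for i in range(len(y)))] for r in range(k)]
    for p in range(k):
        for r in range(k):
            if r != p:
                f = M[r][p] / M[p][p]
                M[r] = [u - f * v for u, v in zip(M[r], M[p])]
    beta = [M[r][-1] / M[r][r] for r in range(k)]
    gm = mean(y)
    return sum((sum(bj * xj for bj, xj in zip(beta, X[i])) - gm) ** 2
               for i in range(len(y)))

full = regression_ss([A1, B1, B2, A1B1, A1B2])
typeI_A = regression_ss([A1])                        # A entered first
typeI_AB = full - regression_ss([A1, B1, B2])        # AB entered last
typeIII_A = full - regression_ss([B1, B2, A1B1, A1B2])
typeIII_AB = full - regression_ss([A1, B1, B2])
print(round(typeI_A, 3), round(typeIII_A, 3))
# sequential vs. unique SS for factor A (cf. FACA in Table 7.5)
```

The two decompositions for factor A differ sharply (23.221 sequential versus 42.385 unique in Table 7.5), while the interaction SS agrees, which is exactly the pattern discussed above.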

Table 7.4: Effect Coding of the Predictors for the Disproportional Cell n ANOVA and Correlation Matrix for the Variables

Design

                  B1                     B2                B3
A1             3, 5, 6                2, 4, 8        11, 7, 8, 6, 9
A2           9, 14, 5, 11     6, 7, 7, 8, 10, 5, 6      9, 8, 10

A1       B1       B2       A1B1     A1B2     Y
1.00     1.00     .00      1.00     .00      3.00
1.00     1.00     .00      1.00     .00      5.00
1.00     1.00     .00      1.00     .00      6.00
1.00     .00      1.00     .00      1.00     2.00
1.00     .00      1.00     .00      1.00     4.00
1.00     .00      1.00     .00      1.00     8.00
1.00     -1.00    -1.00    -1.00    -1.00    11.00
1.00     -1.00    -1.00    -1.00    -1.00    7.00
1.00     -1.00    -1.00    -1.00    -1.00    8.00
1.00     -1.00    -1.00    -1.00    -1.00    6.00
1.00     -1.00    -1.00    -1.00    -1.00    9.00
-1.00    1.00     .00      -1.00    .00      9.00
-1.00    1.00     .00      -1.00    .00      14.00
-1.00    1.00     .00      -1.00    .00      5.00
-1.00    1.00     .00      -1.00    .00      11.00
-1.00    .00      1.00     .00      -1.00    6.00
-1.00    .00      1.00     .00      -1.00    7.00
-1.00    .00      1.00     .00      -1.00    7.00
-1.00    .00      1.00     .00      -1.00    8.00
-1.00    .00      1.00     .00      -1.00    10.00
-1.00    .00      1.00     .00      -1.00    5.00
-1.00    .00      1.00     .00      -1.00    6.00
-1.00    -1.00    -1.00    1.00     1.00     9.00
-1.00    -1.00    -1.00    1.00     1.00     8.00
-1.00    -1.00    -1.00    1.00     1.00     10.00

Correlation (A1 represents the A main effect; B1 and B2 the B main effect; A1B1 and A1B2 the AB interaction effect):

        A1      B1      B2      A1B1    A1B2    Y
A1      1.000   -.163   -.275   -.072   .063    -.361
B1      -.163   1.000   .495    .059    .112    -.148
B2      -.275   .495    1.000   .139    -.088   -.350
A1B1    -.072   .059    .139    1.000   .468    -.332
A1B2    .063    .112    -.088   .468    1.000   -.089
Y       -.361   -.148   -.350   -.332   -.089   1.000

Note: Compare the correlations between variables representing different effects to those for the equal cell size situation, as presented in Table 7.2.

Table 7.5: SPSS Syntax for Two-Way Disproportional Cell n ANOVA With the Sequential and Unique Sum of Squares F Ratios

TITLE 'TWO WAY UNEQUAL N'.
DATA LIST FREE/FACA FACB DEP.
BEGIN DATA.
1 1 3
1 1 5
1 1 6
1 2 2
1 2 4
1 2 8
1 3 11
1 3 7
1 3 8
1 3 6
1 3 9
2 1 9
2 1 14
2 1 5
2 1 11
2 2 6
2 2 7
2 2 7
2 2 8
2 2 10
2 2 5
2 2 6
2 3 9
2 3 8
2 3 10
END DATA.
LIST.
UNIANOVA DEP BY FACA FACB
 /METHOD = SSTYPE(1)
 /PRINT = DESCRIPTIVES.

(Continued)


Table 7.5: (Continued)

Tests of Between-Subjects Effects
Dependent Variable: DEP

Source            Type I Sum of Squares   df   Mean Square   F         Sig.
Corrected Model   78.877a                  5   15.775        3.031     .035
Intercept         1354.240                 1   1354.240      260.211   .000
FACA              23.221                   1   23.221        4.462     .048
FACB              38.878                   2   19.439        3.735     .043
FACA * FACB       16.778                   2   8.389         1.612     .226
Error             98.883                  19   5.204
Total             1532.000                25
Corrected Total   177.760                 24

Tests of Between-Subjects Effects
Dependent Variable: DEP

Source            Type III Sum of Squares   df   Mean Square   F         Sig.
Corrected Model   78.877a                    5   15.775        3.031     .035
Intercept         1176.155                   1   1176.155      225.993   .000
FACA              42.385                     1   42.385        8.144     .010
FACB              30.352                     2   15.176        2.916     .079
FACA * FACB       16.778                     2   8.389         1.612     .226
Error             98.883                    19   5.204
Total             1532.000                  25
Corrected Total   177.760                   24

a. R Squared = .444 (Adjusted R Squared = .297)

7.3.2 Which Method Should Be Used?

Overall and Spiegel (1969) recommended Method 2 as generally being most appropriate. However, most believe that Method 2 is rarely the method of choice, since it estimates the main effects ignoring the interaction. Carlson and Timm's (1974) comment is appropriate here: "We find it hard to believe that a researcher would consciously design a factorial experiment and then ignore the factorial nature of the data in testing the main effects" (p. 156).

We feel that Method 1, where we are obtaining the unique contribution of each effect, is generally more appropriate and is also widely used. This is what Carlson and Timm (1974) recommended, and what Myers (1979) recommended for experimental studies (random assignment involved), or as he put it, "whenever variations in cell frequencies can reasonably be assumed due to chance" (p. 403).

When an a priori ordering of the effects can be established (Overall & Spiegel, 1969, give a nice psychiatric example), Method 3 makes sense. This is analogous to establishing an a priori ordering of the predictors in multiple regression. To illustrate, we adapt an example given in Cohen, Cohen, Aiken, and West (2003), where the research goal is to predict university faculty salary. Using 2 predictors, sex and number of publications, a presumed causal ordering is sex and then number of publications. The reasoning would be that sex can impact number of publications, but number of publications cannot impact sex.

7.4 FACTORIAL MULTIVARIATE ANALYSIS OF VARIANCE

Here, we are considering the effect of two or more independent variables on a set of dependent variables. To illustrate factorial MANOVA, we use an example from Barcikowski (1983). Sixth-grade students were classified as being of high, average, or low aptitude and then, within each of these aptitudes, were randomly assigned to one of five methods of teaching social studies. The dependent variables were measures of attitude and achievement. These data, with the scores for attitude and achievement appearing in each cell, are:

Method of instruction

            1         2         3         4         5
High      15, 11    19, 11    14, 13    19, 14    14, 16
           9, 7     12, 9      9, 9      7, 8     14, 8
                    12, 6     14, 15     6, 6     18, 16
Average   18, 13    25, 24    29, 23    11, 14    18, 17
           8, 11    24, 23    28, 26    14, 10    11, 13
           6, 6     26, 19               8, 7
Low       11, 9     13, 11    17, 10    15, 9     17, 12
          16, 15    10, 11     7, 9     13, 13    13, 15
                               7, 9      7, 7      9, 12

Of the 45 subjects who started the study, five were lost for various reasons. This resulted in a disproportional factorial design. To obtain the unique contribution of each effect, the unique sum of squares decomposition was obtained. The syntax for doing so is given in Table 7.6, along with syntax for simple effects analyses, where the latter is used to explore the interaction between method of instruction and aptitude. The results of the multivariate and univariate tests of the effects are presented in Table 7.7. All of the multivariate effects are significant at the .05 level. We use the F ratios associated with Wilks' lambda to illustrate (aptitude by method: F = 2.19, p = .018; method: F = 2.46, p = .025; and aptitude: F = 5.92, p = .001). Because the interaction is significant, we focus our interpretation on it. The univariate tests for this effect on attitude and achievement are also both significant at the .05 level. Focusing on simple treatment effects for each level of aptitude, inspection of means and simple effects testing (not shown) indicated that treatment effects were present only for those of average aptitude. For these students, treatments 2 and 3 were generally more effective than the other treatments for each dependent variable, as indicated by pairwise comparisons using a Bonferroni adjustment. This adjustment is used to provide greater control of the family-wise type I error rate for the 10 pairwise comparisons involving method of instruction for those of average aptitude.
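The count of 10 comparisons follows from choosing 2 of the 5 methods, and the Bonferroni adjustment divides the family-wise rate accordingly. A quick Python check (our illustration; the conventional family-wise alpha of .05 is assumed):

```python
from itertools import combinations
from math import comb

methods = [1, 2, 3, 4, 5]
pairs = list(combinations(methods, 2))   # all pairwise comparisons
assert len(pairs) == comb(5, 2) == 10

alpha = .05
per_comparison = alpha / len(pairs)      # Bonferroni per-comparison level
print(per_comparison)                    # 0.005
```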

Table 7.6: Syntax for Factorial MANOVA on SPSS and Simple Effects Analyses

TITLE 'TWO WAY MANOVA'.
DATA LIST FREE/FACA FACB ATTIT ACHIEV.
BEGIN DATA.
1 1 15 11
1 1 9 7
1 2 19 11
1 2 12 9
1 2 12 6
1 3 14 13
1 3 9 9
1 3 14 15
1 4 19 14
1 4 7 8
1 4 6 6
1 5 14 16
1 5 14 8
1 5 18 16
2 1 18 13
2 1 8 11
2 1 6 6
2 2 25 24
2 2 24 23
2 2 26 19
2 3 29 23
2 3 28 26
2 4 11 14
2 4 14 10
2 4 8 7
2 5 18 17
2 5 11 13
3 1 11 9
3 1 16 15
3 2 13 11
3 2 10 11
3 3 17 10
3 3 7 9
3 3 7 9
3 4 15 9
3 4 13 13
3 4 7 7
3 5 17 12
3 5 13 15
3 5 9 12
END DATA.
LIST.
GLM ATTIT ACHIEV BY FACA FACB
 /PRINT = DESCRIPTIVES.

Simple Effects Analyses

GLM
 ATTIT BY FACA FACB
 /PLOT = PROFILE (FACA*FACB)
 /EMMEANS = TABLES(FACB) COMPARE ADJ(BONFERRONI)
 /EMMEANS = TABLES (FACA*FACB) COMPARE (FACB) ADJ(BONFERRONI).
GLM
 ACHIEV BY FACA FACB
 /PLOT = PROFILE (FACA*FACB)
 /EMMEANS = TABLES(FACB) COMPARE ADJ(BONFERRONI)
 /EMMEANS = TABLES (FACA*FACB) COMPARE (FACB) ADJ(BONFERRONI).

Table 7.7: Selected Results From Factorial MANOVA

Multivariate Testsa

Effect                            Value    F           Hypothesis df   Error df   Sig.
Intercept    Pillai's Trace       .965     329.152      2.000          24.000     .000
             Wilks' Lambda        .035     329.152b     2.000          24.000     .000
             Hotelling's Trace    27.429   329.152b     2.000          24.000     .000
             Roy's Largest Root   27.429   329.152b     2.000          24.000     .000
FACA         Pillai's Trace       .574     5.031        4.000          50.000     .002
             Wilks' Lambda        .449     5.917b       4.000          48.000     .001
             Hotelling's Trace    1.179    6.780        4.000          46.000     .000
             Roy's Largest Root   1.135    14.187c      2.000          25.000     .000
FACB         Pillai's Trace       .534     2.278        8.000          50.000     .037
             Wilks' Lambda        .503     2.463b       8.000          48.000     .025
             Hotelling's Trace    .916     2.633        8.000          46.000     .018
             Roy's Largest Root   .827     5.167c       4.000          25.000     .004
FACA * FACB  Pillai's Trace       .757     1.905       16.000          50.000     .042
             Wilks' Lambda        .333     2.196b      16.000          48.000     .018
             Hotelling's Trace    1.727    2.482       16.000          46.000     .008
             Roy's Largest Root   1.551    4.847c       8.000          25.000     .001

a. Design: Intercept + FACA + FACB + FACA * FACB
b. Exact statistic
c. The statistic is an upper bound on F that yields a lower bound on the significance level.

Tests of Between-Subjects Effects

Source            Dependent Variable   Type III Sum of Squares   df   Mean Square   F         Sig.
Corrected Model   ATTIT                972.108a                  14   69.436        3.768     .002
                  ACHIEV               764.608b                  14   54.615        5.757     .000
Intercept         ATTIT                7875.219                   1   7875.219      427.382   .000
                  ACHIEV               6156.043                   1   6156.043      648.915   .000
FACA              ATTIT                256.508                    2   128.254       6.960     .004
                  ACHIEV               267.558                    2   133.779       14.102    .000
FACB              ATTIT                237.906                    4   59.477        3.228     .029
                  ACHIEV               189.881                    4   47.470        5.004     .004
FACA * FACB       ATTIT                503.321                    8   62.915        3.414     .009
                  ACHIEV               343.112                    8   42.889        4.521     .002
Error             ATTIT                460.667                   25   18.427
                  ACHIEV               237.167                   25   9.487
Total             ATTIT                9357.000                  40
                  ACHIEV               7177.000                  40
Corrected Total   ATTIT                1432.775                  39
                  ACHIEV               1001.775                  39

a. R Squared = .678 (Adjusted R Squared = .498)
b. R Squared = .763 (Adjusted R Squared = .631)


7.5 WEIGHTING OF THE CELL MEANS

In experimental studies that wind up with unequal cell sizes, it is reasonable to assume equal population sizes, and equal cell weighting is appropriate in estimating the grand mean. However, when sampling from intact groups (sex, age, race, socioeconomic status [SES], religions) in nonexperimental studies, the populations may well differ in size, and the sizes of the samples may reflect the different population sizes. In such cases, equally weighting the subgroup means will not provide an unbiased estimate of the combined (grand) mean, whereas weighting the means by sample size will. In some situations, you may wish to use both weighted and unweighted cell means in a single factorial design, that is, in a semi-experimental design. In such designs one of the factors is an attribute factor (sex, SES, ethnicity, etc.) and the other factor is treatments.

Suppose for a given situation it is reasonable to assume there are twice as many middle SES cases in a population as lower SES, and that two treatments are involved. Forty lower SES participants are sampled and randomly assigned to treatments, and 80 middle SES participants are selected and assigned to treatments. Schematically then, the setup of the weighted treatment (column) means and unweighted SES (row) means is:

                  T1                                T2                                Unweighted means
SES   Lower       n11 = 20                          n12 = 20                          (μ11 + μ12) / 2
      Middle      n21 = 40                          n22 = 40                          (μ21 + μ22) / 2
Weighted means    (n11μ11 + n21μ21) / (n11 + n21)   (n12μ12 + n22μ22) / (n12 + n22)

Note that Method 3 (type I sum of squares), the sequential or hierarchical approach described in section 7.3, can be used to provide a partitioning of variance that implements a weighted means solution.
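The schematic above can be made numerical. The short Python sketch below uses the cell sizes given in the text together with hypothetical cell means (the μ values are ours, chosen only for illustration) to contrast the two estimates:

```python
# Cell sizes from the text; the cell means are hypothetical.
n = {("lower", "T1"): 20, ("lower", "T2"): 20,
     ("middle", "T1"): 40, ("middle", "T2"): 40}
mu = {("lower", "T1"): 10.0, ("lower", "T2"): 12.0,
      ("middle", "T1"): 16.0, ("middle", "T2"): 18.0}

# Unweighted SES (row) mean: simple average across treatments.
unweighted_lower = (mu[("lower", "T1")] + mu[("lower", "T2")]) / 2

# Weighted treatment (column) mean: cells weighted by sample size,
# reflecting the assumed 2:1 middle-to-lower population ratio.
num = (n[("lower", "T1")] * mu[("lower", "T1")]
       + n[("middle", "T1")] * mu[("middle", "T1")])
w_T1 = num / (n[("lower", "T1")] + n[("middle", "T1")])

# Equal weighting of the same two T1 cell means would give 13.0;
# the size-weighted estimate is pulled toward the larger group.
print(unweighted_lower, w_T1)   # 11.0 14.0
```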

7.6 ANALYSIS PROCEDURES FOR TWO-WAY MANOVA

In this section, we summarize the analysis steps that provide a general guide for you to follow in conducting a two-way MANOVA where the focus is on examining effects for each of several outcomes. Section 7.7 applies the procedures to a fairly large data set, and section 7.8 presents an example results section. Note that preliminary analysis activities for the two-way design are the same as for the one-way MANOVA as summarized in section 6.11, except that these activities apply to the cells of the two-way design. For example, for a 2 × 2 factorial design, the scores are assumed to follow a multivariate normal distribution with equal variance-covariance matrices across each of the 4 cells. Since preliminary analysis for the two-factor design is similar to the one-factor design, we focus our summary of the analysis procedures on primary analysis.

7.6.1 Primary Analysis

1. Examine the Wilks' lambda test for the multivariate interaction.
   A. If this test is statistically significant, examine the F test of the two-way interaction for each dependent variable, using a Bonferroni correction unless the number of dependent variables is small (i.e., 2 or 3).
   B. If an interaction is present for a given dependent variable, use simple effects analyses for that variable to interpret the interaction.
2. If a given univariate interaction is not statistically significant (or sufficiently strong) OR if the Wilks' lambda test for the multivariate interaction is not statistically significant, examine the multivariate tests for the main effects.
   A. If the multivariate test of a given main effect is statistically significant, examine the F test for the corresponding main effect (i.e., factor A or factor B) for each dependent variable, using a Bonferroni adjustment (unless the number of outcomes is small). Note that the main effect for any dependent variable for which an interaction was present may not be of interest due to the qualified nature of the simple effect description.
   B. If the univariate F test is significant for a given dependent variable, use pairwise comparisons (if more than 2 groups are present) to describe the main effect. Use a Bonferroni adjustment for the pairwise comparisons to provide protection against inflation of the type I error rate.
   C. If no multivariate main effects are significant, do not proceed to the univariate tests of main effects. If a given univariate main effect is not significant, do not conduct further testing (i.e., pairwise comparisons) for that main effect.
3. Use one or more effect size measures to describe the strength of the effects and/or the differences in the means of interest. Commonly used effect size measures include multivariate partial eta square, univariate partial eta square, and/or raw score differences in means for specific comparisons of interest.
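One way to see the flow of steps 1 and 2 is as a small decision helper. The Python sketch below is purely illustrative (the function and its thresholds are ours, following the guidelines above), applied here to p values like those reported in section 7.7:

```python
def manova_followups(p_mv_interaction, p_uni_interaction, alpha=.05):
    """Suggest follow-up analyses per the two-way MANOVA guidelines:
    simple effects where a univariate interaction holds, otherwise
    examination of main effects. The univariate interaction tests are
    Bonferroni-corrected when more than 3 DVs are involved."""
    n_dvs = len(p_uni_interaction)
    crit = alpha / n_dvs if n_dvs > 3 else alpha
    if p_mv_interaction < alpha:
        return {dv: ("simple effects" if p < crit else "examine main effect")
                for dv, p in p_uni_interaction.items()}
    return {dv: "examine main effect" for dv in p_uni_interaction}

# Univariate interaction p values from Table 7.11; the multivariate
# interaction p value (.013) is from Table 7.10.
print(manova_followups(.013, {"self_efficacy": .203,
                              "verbal": .692, "dafs": .002}))
# dafs -> simple effects; the other two -> examine main effect
```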

7.7 FACTORIAL MANOVA WITH SENIORWISE DATA

In this section, we illustrate application of the analysis procedures for two-way MANOVA using the SeniorWISE data set used in section 6.11, except that these data now include a second factor of gender (i.e., female, male). So, we now assume that the investigators recruited 150 females and 150 males, with each being at least 65 years old. Then, within each of these groups, the participants were randomly assigned to receive (a) memory training, which was designed to help adults maintain and/or improve their memory related abilities, (b) a health intervention condition, which did not include memory training, or (c) a wait-list control condition. The active treatments were individually administered, and posttest intervention measures were completed individually. The dependent variables are the same as in section 6.11 and include memory self-efficacy (self-efficacy), verbal memory performance (verbal), and daily functioning skills (DAFS). Higher scores on these measures represent a greater (and preferred) level of performance. Thus, we have a 3 (treatment levels) by 2 (gender groups) multivariate design with 50 participants in each of 6 cells.

7.7.1 Preliminary Analysis

The preliminary analysis activities for factorial MANOVA are the same as with one-way MANOVA except, of course, the relevant groups now are the six cells formed by the crossing of the two factors. As such, the scores in each cell (in the population) must be multivariate normal, have equal variance-covariance matrices, and be independent. To facilitate examining the degree to which the assumptions are satisfied and to readily enable other preliminary analysis activities, Table 7.8 shows SPSS syntax for creating a cell membership variable for this data set. Also, the syntax shows how Mahalanobis distance values may be obtained for each case within each of the 6 cells, as such values are then used to identify multivariate outliers.

For this data set, there are no missing data, as each of the 300 participants has a score for each of the study variables. There are no multivariate outliers, as the largest within-cell

Table 7.8: SPSS Syntax for Creating a Cell Variable and Obtaining Mahalanobis Distance Values

*/ Creating Cell Variable.
IF (Group = 1 and Gender = 0) Cell = 1.
IF (Group = 2 and Gender = 0) Cell = 2.
IF (Group = 3 and Gender = 0) Cell = 3.
IF (Group = 1 and Gender = 1) Cell = 4.
IF (Group = 2 and Gender = 1) Cell = 5.
IF (Group = 3 and Gender = 1) Cell = 6.
EXECUTE.

*/ Organizing Output By Cell.
SORT CASES BY Cell.
SPLIT FILE SEPARATE BY Cell.

*/ Requesting within-cell Mahalanobis distances for each case.
REGRESSION
 /STATISTICS COEFF ANOVA
 /DEPENDENT Case
 /METHOD=ENTER Self_Efficacy Verbal Dafs
 /SAVE MAHAL.

*/ REMOVING SPLIT FILE.
SPLIT FILE OFF.


Mahalanobis distance value, 10.61, is smaller than the chi-square critical value of 16.27 (α = .001; df = 3 for the 3 dependent variables). Similarly, we did not detect any univariate outliers, as no within-cell z score exceeded a magnitude of 3. Also, inspection of the 18 histograms (6 cells by 3 outcomes) did not suggest the presence of any extreme scores. Further, examining the pooled within-cell correlations provided support for using the multivariate procedure, as the three correlations ranged from .31 to .47.

In addition, there are no serious departures from the statistical assumptions associated with factorial MANOVA. Inspecting the 18 histograms did not suggest any substantial departures from univariate normality. Further, no kurtosis or skewness value in any cell for any outcome exceeded a magnitude of .97, again suggesting no substantial departure from normality. For the assumption of equal variance-covariance matrices, we note that the cell standard deviations (not shown) were fairly similar for each outcome. Also, Box's M test (M = 30.53, p = .503) did not suggest a violation. Similarly, examining the results of Levene's test for equality of variance (not shown) provided support that the dispersion of scores for self-efficacy (p = .47), verbal performance (p = .78), and functional status (p = .33) was similar across the six cells. For the independence assumption, the study design, as described in section 6.11, does not suggest any violation, in part because treatments were individually administered to participants who also completed posttest measures individually.
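The within-cell outlier screen can also be sketched without SPSS. The pure-Python example below is our own illustration: the cell scores are made up, while the 16.27 cutoff is the chi-square critical value used above. It computes the squared Mahalanobis distance for each case in one hypothetical cell:

```python
# Hypothetical scores for one cell (n = 6 cases, p = 3 outcomes);
# the 16.27 cutoff is the chi-square critical value cited in the text.
cell = [[52.0, 48.0, 50.0], [55.0, 51.0, 49.0], [47.0, 46.0, 52.0],
        [50.0, 53.0, 55.0], [58.0, 50.0, 47.0], [49.0, 55.0, 51.0]]
n, p = len(cell), len(cell[0])
means = [sum(row[j] for row in cell) / n for j in range(p)]

# Sample covariance matrix (denominator n - 1).
S = [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in cell) / (n - 1)
      for j in range(p)] for i in range(p)]

# Invert S by Gauss-Jordan elimination on the augmented matrix [S | I].
M = [list(S[i]) + [float(i == j) for j in range(p)] for i in range(p)]
for c in range(p):
    piv = M[c][c]
    M[c] = [v / piv for v in M[c]]
    for r in range(p):
        if r != c:
            f = M[r][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
Sinv = [row[p:] for row in M]

# Squared Mahalanobis distance for each case in the cell.
d2 = [sum((row[i] - means[i]) * Sinv[i][j] * (row[j] - means[j])
          for i in range(p) for j in range(p)) for row in cell]
print(max(d2) < 16.27)   # True: no case flagged as a multivariate outlier
```

A useful check on the computation: with the sample mean and (n - 1)-denominator covariance, the squared distances must sum to (n - 1)p, here 15, so no single case can exceed the cutoff in a cell this small.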

7.7.2 Primary Analysis

Table 7.9 shows the syntax used for the primary analysis, and Tables 7.10 and 7.11 show the overall multivariate and univariate test results. Inspecting Table 7.10 indicates that an overall group-by-gender interaction is present in the set of outcomes, Wilks' lambda = .946, F(6, 584) = 2.72, p = .013. Examining the univariate test results for the group-by-gender interaction in Table 7.11 suggests that this interaction is present for DAFS, F(2, 294) = 6.174, p = .002, but not for self-efficacy, F(2, 294) = 1.603, p = .203, or verbal, F(2, 294) = .369, p = .692. Thus, we will focus on examining simple effects associated with the treatment for DAFS but not for the other outcomes. Of course, main effects may be present for the set of outcomes as well. The multivariate test results in Table 7.10 indicate that a main effect in the set of outcomes is present for both group, Wilks' lambda = .748, F(6, 584) = 15.170, p < .001, and gender, Wilks' lambda = .923, F(3, 292) = 8.154, p < .001, although we will focus on describing treatment effects, not gender differences, from this point on. The univariate test results in Table 7.11 indicate that a main effect of the treatment is present for self-efficacy, F(2, 294) = 29.931, p < .001, and verbal, F(2, 294) = 26.514, p < .001. Note that a main effect is present also for DAFS, but the interaction just noted suggests we may not wish to describe the main effect for that outcome. So, for self-efficacy and verbal, we will examine pairwise comparisons to examine treatment effects pooling across the gender groups.


Table 7.9: SPSS Syntax for Factorial MANOVA With SeniorWISE Data

GLM Self_Efficacy Verbal Dafs BY Group Gender
 /SAVE=ZRESID
 /EMMEANS=TABLES(Group)
 /EMMEANS=TABLES(Gender)
 /EMMEANS=TABLES(Gender*Group)
 /PLOT=PROFILE(GROUP*GENDER GENDER*GROUP)
 /PRINT=DESCRIPTIVE ETASQ HOMOGENEITY.

* Follow-up univariates for Self_Efficacy and Verbal to obtain pairwise
  comparisons; the Bonferroni method is used to maintain consistency
  with the simple effects analyses (for Dafs).
UNIANOVA Self_Efficacy BY Gender Group
 /EMMEANS=TABLES(Group)
 /POSTHOC=Group(BONFERRONI).
UNIANOVA Verbal BY Gender Group
 /EMMEANS=TABLES(Group)
 /POSTHOC=Group(BONFERRONI).

* Follow-up simple effects analyses for Dafs with the Bonferroni method.
GLM
 Dafs BY Gender Group
 /EMMEANS = TABLES (Gender*Group) COMPARE (Group) ADJ(Bonferroni).

Table 7.10: SPSS Results of the Overall Multivariate Tests

Multivariate Testsa

Effect                               Value    F           Hypothesis df   Error df   Sig.   Partial Eta Squared
Intercept        Pillai's Trace      .983     5678.271b    3.000          292.000    .000   .983
                 Wilks' Lambda       .017     5678.271b    3.000          292.000    .000   .983
                 Hotelling's Trace   58.338   5678.271b    3.000          292.000    .000   .983
                 Roy's Largest Root  58.338   5678.271b    3.000          292.000    .000   .983
GROUP            Pillai's Trace      .258     14.441       6.000          586.000    .000   .129
                 Wilks' Lambda       .748     15.170b      6.000          584.000    .000   .135
                 Hotelling's Trace   .328     15.900       6.000          582.000    .000   .141
                 Roy's Largest Root  .301     29.361c      3.000          293.000    .000   .231
GENDER           Pillai's Trace      .077     8.154b       3.000          292.000    .000   .077
                 Wilks' Lambda       .923     8.154b       3.000          292.000    .000   .077
                 Hotelling's Trace   .084     8.154b       3.000          292.000    .000   .077
                 Roy's Largest Root  .084     8.154b       3.000          292.000    .000   .077
GROUP * GENDER   Pillai's Trace      .054     2.698        6.000          586.000    .014   .027
                 Wilks' Lambda       .946     2.720b       6.000          584.000    .013   .027
                 Hotelling's Trace   .057     2.743        6.000          582.000    .012   .027
                 Roy's Largest Root  .054     5.290c       3.000          293.000    .001   .051

a. Design: Intercept + GROUP + GENDER + GROUP * GENDER
b. Exact statistic
c. The statistic is an upper bound on F that yields a lower bound on the significance level.

Table 7.11: SPSS Results of the Overall Univariate Tests

Tests of Between-Subjects Effects

Source            Dependent Variable   Type III Sum of Squares    df    Mean Square   F           Sig.   Partial Eta Squared
Corrected Model   Self_Efficacy        5750.604a                    5   1150.121      13.299      .000   .184
                  Verbal               4944.027b                    5   988.805       10.760      .000   .155
                  DAFS                 6120.099c                    5   1224.020      14.614      .000   .199
Intercept         Self_Efficacy        833515.776                   1   833515.776    9637.904    .000   .970
                  Verbal               896000.120                   1   896000.120    9750.188    .000   .971
                  DAFS                 883559.339                   1   883559.339    10548.810   .000   .973
GROUP             Self_Efficacy        5177.087                     2   2588.543      29.931      .000   .169
                  Verbal               4872.957                     2   2436.478      26.514      .000   .153
                  DAFS                 3642.365                     2   1821.183      21.743      .000   .129
GENDER            Self_Efficacy        296.178                      1   296.178       3.425       .065   .012
                  Verbal               3.229                        1   3.229         .035        .851   .000
                  DAFS                 1443.514                     1   1443.514      17.234      .000   .055
GROUP * GENDER    Self_Efficacy        277.339                      2   138.669       1.603       .203   .011
                  Verbal               67.842                       2   33.921        .369        .692   .003
                  DAFS                 1034.220                     2   517.110       6.174       .002   .040
Error             Self_Efficacy        25426.031                  294   86.483
                  Verbal               27017.328                  294   91.896
                  DAFS                 24625.189                  294   83.759
Total             Self_Efficacy        864692.411                 300
                  Verbal               927961.475                 300
                  DAFS                 914304.627                 300
Corrected Total   Self_Efficacy        31176.635                  299
                  Verbal               31961.355                  299
                  DAFS                 30745.288                  299

a. R Squared = .184 (Adjusted R Squared = .171)
b. R Squared = .155 (Adjusted R Squared = .140)
c. R Squared = .199 (Adjusted R Squared = .185)

Table 7.12 shows results for the simple effects analyses for DAFS focusing on the impact of the treatments. Examining the means suggests that group differences for females are not particularly large, but the treatment means for males appear quite different, especially for the memory training condition. This strong effect of the memory training condition for males is also evident in the plot in Table 7.12. For females, the F test for treatment mean differences, shown near the bottom of Table 7.12, suggests that no differences are present in the population, F(2, 294) = 2.405, p = .092. For males, on the other hand, treatment group mean differences are present, F(2, 294) = 25.512, p < .001. Pairwise comparisons for males, using Bonferroni-adjusted p values, indicate that participants in the memory training condition outscored, on average, those in the health training (p < .001) and control conditions (p < .001). The difference in means between the health training and control condition is not statistically significant (p = 1.00).
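These adjusted p values can be reproduced from the t statistics reported in the results write-up in section 7.8; a minimal sketch, using the large-sample normal approximation to the t distribution (adequate for 294 degrees of freedom):

```python
from math import erfc, sqrt

def bonferroni_p(t, n_comparisons):
    """Two-tailed p value for a t statistic (normal approximation,
    fine for large df), multiplied by the number of pairwise
    comparisons and capped at 1, as in SPSS's Bonferroni adjustment."""
    p_raw = erfc(abs(t) / sqrt(2))
    return min(1.0, n_comparisons * p_raw)

# Health training vs. control for males: t(294) = 0.79, with three
# pairwise comparisons among the treatment groups.
print(bonferroni_p(0.79, 3))  # 1.0, matching the reported p = 1.00
```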

Table 7.13 and Table 7.14 show the results of Bonferroni-adjusted pairwise comparisons of treatment group means (pooling across gender) for the dependent variables self-efficacy and verbal performance. The results in Table 7.13 indicate that the large difference in means between the memory training and health training conditions is statistically significant (p < .001), as is the difference between the memory

Table 7.12: SPSS Results of the Simple Effects Analyses for DAFS

Estimated Marginal Means GENDER * GROUP

Estimates
Dependent Variable: DAFS
                                                       95% Confidence Interval
GENDER   GROUP             Mean     Std. Error   Lower Bound   Upper Bound
FEMALE   Memory Training   54.337   1.294        51.790        56.884
         Health Training   51.388   1.294        48.840        53.935
         Control           50.504   1.294        47.956        53.051
MALE     Memory Training   63.966   1.294        61.419        66.513
         Health Training   53.431   1.294        50.884        55.978
         Control           51.993   1.294        49.445        54.540

Pairwise Comparisons
Dependent Variable: DAFS
                                                    Mean                                  95% Confidence Interval for Difference b
GENDER   (I) GROUP         (J) GROUP          Difference (I-J)   Std. Error   Sig.b   Lower Bound   Upper Bound
FEMALE   Memory Training   Health Training    2.950              1.830        .324    -1.458        7.357
                           Control            3.833              1.830        .111    -.574         8.241
         Health Training   Memory Training    -2.950             1.830        .324    -7.357        1.458
                           Control            .884               1.830        1.000   -3.523        5.291
         Control           Memory Training    -3.833             1.830        .111    -8.241        .574
                           Health Training    -.884              1.830        1.000   -5.291        3.523
MALE     Memory Training   Health Training    10.535*            1.830        .000    6.128         14.942
                           Control            11.973*            1.830        .000    7.566         16.381
         Health Training   Memory Training    -10.535*           1.830        .000    -14.942       -6.128

(Continued)

Table 7.12: (Continued)

Pairwise Comparisons
Dependent Variable: DAFS
                                                   Mean                                  95% Confidence Interval for Difference b
GENDER   (I) GROUP         (J) GROUP          Difference (I-J)   Std. Error   Sig.b   Lower Bound   Upper Bound
MALE     Health Training   Control            1.438              1.830        1.000   -2.969        5.846
         Control           Memory Training    -11.973*           1.830        .000    -16.381       -7.566
                           Health Training    -1.438             1.830        1.000   -5.846        2.969

Based on estimated marginal means
* The mean difference is significant at the .050 level.
b. Adjustment for multiple comparisons: Bonferroni.

Univariate Tests
Dependent Variable: DAFS
GENDER              Sum of Squares   df    Mean Square   F        Sig.
FEMALE   Contrast   402.939          2     201.469       2.405    .092
         Error      24625.189        294   83.759
MALE     Contrast   4273.646         2     2136.823      25.512   .000
         Error      24625.189        294   83.759

Each F tests the simple effects of GROUP within each level combination of the other effects shown. These tests are based on the linearly independent pairwise comparisons among the estimated marginal means.

[Plot shown in Table 7.12: estimated marginal means of DAFS (vertical axis, roughly 50.00 to 62.50) for females and males, with separate lines for the Memory Training, Health Training, and Control groups.]

Table 7.13: SPSS Results of Pairwise Comparisons for Self-Efficacy

Estimated Marginal Means GROUP
Dependent Variable: Self_Efficacy
                                             95% Confidence Interval
GROUP             Mean     Std. Error   Lower Bound   Upper Bound
Memory Training   58.505   .930         56.675        60.336
Health Training   50.649   .930         48.819        52.480
Control           48.976   .930         47.146        50.807

Post Hoc Tests GROUP
Dependent Variable: Self_Efficacy
Bonferroni
                                         Mean                                 95% Confidence Interval
(I) GROUP         (J) GROUP          Difference (I-J)   Std. Error   Sig.   Lower Bound   Upper Bound
Memory Training   Health Training    7.856*             1.315        .000   4.689         11.022
                  Control            9.529*             1.315        .000   6.362         12.695
Health Training   Memory Training    -7.856*            1.315        .000   -11.022       -4.689
                  Control            1.673              1.315        .613   -1.494        4.840
Control           Memory Training    -9.529*            1.315        .000   -12.695       -6.362
                  Health Training    -1.673             1.315        .613   -4.840        1.494

Based on observed means.
The error term is Mean Square(Error) = 86.483.
* The mean difference is significant at the .050 level.

Table 7.14: SPSS Results of Pairwise Comparisons for Verbal Performance

Estimated Marginal Means GROUP
Dependent Variable: Verbal
                                             95% Confidence Interval
GROUP             Mean     Std. Error   Lower Bound   Upper Bound
Memory Training   60.227   .959         58.341        62.114
Health Training   50.843   .959         48.956        52.730
Control           52.881   .959         50.994        54.768

(Continued)


Table 7.14: (Continued)

Post Hoc Tests GROUP
Multiple Comparisons
Dependent Variable: Verbal
Bonferroni
                                         Mean                                 95% Confidence Interval
(I) GROUP         (J) GROUP          Difference (I-J)   Std. Error   Sig.   Lower Bound   Upper Bound
Memory Training   Health Training    9.384*             1.356        .000   6.120         12.649
                  Control            7.346*             1.356        .000   4.082         10.610
Health Training   Memory Training    -9.384*            1.356        .000   -12.649       -6.120
                  Control            -2.038             1.356        .401   -5.302        1.226
Control           Memory Training    -7.346*            1.356        .000   -10.610       -4.082
                  Health Training    2.038              1.356        .401   -1.226        5.302

Based on observed means.
The error term is Mean Square(Error) = 91.896.
* The mean difference is significant at the .050 level.

training and control groups (p < .001). The smaller difference in means between the health intervention and control condition is not statistically significant (p = .613).

Inspecting Table 7.14 indicates a similar pattern for verbal performance, where those receiving memory training have better average performance than participants receiving health training (p < .001) and those in the control group (p < .001). The small difference between the latter two conditions is not statistically significant (p = .401).

7.8 EXAMPLE RESULTS SECTION FOR FACTORIAL

MANOVA WITH SENIORWISE DATA

The goal of this study was to determine if at-risk older males and females obtain similar or different benefits of training designed to help memory functioning across a

set of memory-related variables. As such, 150 males and 150 females were randomly


assigned to memory training, a health intervention, or a wait-list control condition. A two-way (treatment by gender) multivariate analysis of variance (MANOVA) was conducted with three memory-related dependent variables—memory self-efficacy, verbal memory performance, and daily functional status (DAFS)—all of which were collected following the intervention.

Prior to conducting the factorial MANOVA, the data were examined to identify the degree of missing data, the presence of outliers and influential observations, and the degree to which the outcomes were correlated. There were no missing data. No multivariate outliers were indicated, as the largest within-cell Mahalanobis distance (10.61) was smaller than the chi-square critical value of 16.27 (.001, 3). Also, no univariate outliers were suggested, as all within-cell univariate z scores were smaller than |3|. Further, examining the pooled within-cell correlations suggested that the outcomes are moderately and positively correlated, as these three correlations ranged from .31 to .47.
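The cutoff of 16.27 is the .999 quantile of the chi-square distribution with 3 degrees of freedom (that is, alpha = .001 with three dependent variables); a quick check using the closed-form chi-square CDF for 3 df, which needs only the standard library:

```python
import math

def chi2_cdf_df3(x):
    """CDF of the chi-square distribution with 3 degrees of freedom
    (a closed form is available for odd df)."""
    return math.erf(math.sqrt(x / 2)) - math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

# 16.27 sits at the .999 quantile, so Mahalanobis distances are being
# screened at alpha = .001; the largest observed distance (10.61) is
# well below this cutoff.
print(round(chi2_cdf_df3(16.27), 3))  # 0.999
```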

We also assessed whether the MANOVA assumptions seemed tenable. Inspecting histograms for each group for each dependent variable, as well as the corresponding values for skew and kurtosis (all of which were smaller than |1|), did not indicate any material violations of the normality assumption. For the assumption of equal variance-covariance matrices, the cell standard deviations were fairly similar for each outcome, and Box's M test (M = 30.53, p = .503) did not suggest a violation. In addition, examining the results of Levene's test for equality of variance provided support that the dispersion of scores for self-efficacy (p = .47), verbal performance (p = .78), and functional status (p = .33) was similar across cells. For the independence assumption, the study design did not suggest any violation, in part because treatments were individually administered to participants, who also completed posttest measures individually.

Table 1 displays the means for each cell for each outcome. Inspecting these means suggests that participants in the memory training group generally had higher mean posttest scores than the other treatment conditions across each outcome. However, a significant multivariate test of the treatment-by-gender interaction, Wilks' lambda = .946, F(6, 584) = 2.72, p = .013, suggested that treatment effects were different for females and males. Univariate tests for each outcome indicated that the two-way interaction is present for DAFS, F(2, 294) = 6.174, p = .002, but not for self-efficacy, F(2, 294) = 1.603, p = .203, or verbal performance, F(2, 294) = .369, p = .692. Simple effects analyses for DAFS indicated that treatment group differences were present for males, F(2, 294) = 25.512, p < .001, but not females, F(2, 294) = 2.405, p = .092. Pairwise comparisons for males, using Bonferroni-adjusted p values, indicate that participants in the memory training condition outscored, on average, those in the health training, t(294) = 5.76, p < .001, and control conditions, t(294) = 6.54, p < .001. The difference in means between the health training and control condition is not statistically significant, t(294) = 0.79, p = 1.00.
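The simple-effects F statistics reported here can be recovered directly from the sums of squares and degrees of freedom shown in Table 7.12; for example, for the male contrast:

```python
# F = MS_contrast / MS_error, with each mean square equal to SS / df.
ss_contrast, df_contrast = 4273.646, 2    # male contrast, Table 7.12
ss_error, df_error = 24625.189, 294       # within-cell error

f_stat = (ss_contrast / df_contrast) / (ss_error / df_error)
print(round(f_stat, 3))  # 25.512
```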


Table 1: Treatment by Gender Means (SD) for Each Dependent Variable

                              Treatment condition a
                          Memory training   Health training   Control
Self-efficacy
  Females                 56.15 (9.01)      50.33 (7.91)      48.67 (9.93)
  Males                   60.86 (8.86)      50.97 (8.80)      49.29 (10.98)
Verbal performance
  Females                 60.08 (9.41)      50.53 (8.54)      53.65 (8.96)
  Males                   60.37 (9.99)      51.16 (10.16)     52.11 (10.32)
Daily functional skills
  Females                 54.34 (9.16)      51.39 (10.61)     50.50 (8.29)
  Males                   63.97 (7.78)      53.43 (9.92)      51.99 (8.84)

a. n = 50 per cell.

In addition, the multivariate tests for main effects indicated that main effects were present for the set of outcomes for treatment condition, Wilks' lambda = .748, F(6, 584) = 15.170, p < .001, and gender, Wilks' lambda = .923, F(3, 292) = 8.154, p < .001, although we focus here on treatment differences. The univariate F tests indicated that a main effect of the treatment was present for self-efficacy, F(2, 294) = 29.931, p < .001, and verbal performance, F(2, 294) = 26.514, p < .001. For self-efficacy, pairwise comparisons (pooling across gender), using a Bonferroni adjustment, indicated that participants in the memory training condition had higher posttest scores, on average, than those in the health training, t(294) = 5.97, p < .001, and control groups, t(294) = 7.25, p < .001, with no support for a mean difference between the latter two conditions (p = .613). A similar pattern was present for verbal performance, where those receiving memory training had better average performance than participants receiving health training, t(294) = 6.92, p < .001, and those in the control group, t(294) = 5.42, p < .001. The small difference between the latter two conditions was not statistically significant, t(294) = −1.50, p = .401.

7.9 THREE-WAY MANOVA

This section is included to show how to set up SPSS syntax for running a three-way MANOVA, and to indicate a procedure for interpreting a three-way interaction. We take the aptitude by method example presented in section 7.4 and add sex as an additional factor. Then, assuming we will use the same two dependent variables, the only change that is required for the syntax to run the factorial MANOVA as presented in Table 7.6 is that the GLM command becomes:

GLM ATTIT ACHIEV BY FACA FACB SEX

We wish to focus our attention on the interpretation of a three-way interaction, if it were significant in such a design. First, what does a significant three-way interaction mean in the context of a single outcome variable? If the three factors are denoted by A, B, and C, then a significant ABC interaction implies that the two-way interaction profiles for the different levels of the third factor are different. A nonsignificant three-way interaction means that the two-way profiles are the same; that is, the differences can be attributed to sampling error.

Example 7.3

Consider a sex by treatment by school grade design. Suppose that the two-way design (collapsed on grade) looked like this:

                Treatments
              1          2
Males        60         50
Females      40         42

This profile suggests a significant sex main effect and a significant ordinal interaction with respect to sex (because the male average is greater than the female average for each treatment, and, of course, much greater under treatment 1). But it does not tell the whole story. Let us examine the profiles for grades 6 and 7 separately (assuming equal cell n):

Grade 6                      Grade 7
      T1     T2                    T1     T2
M     65     50              M     55     50
F     40     47              F     40     37

We see that for grade 6 the same type of interaction is present as before, whereas for grade 7 students there appears to be no interaction effect, as the difference in means between males and females is similar across treatments (15 points vs. 13 points). The two profiles are distinctly different. The point is, school grade further moderates the sex-by-treatment interaction.
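With equal cell n, the collapsed table is simply the average of the grade 6 and grade 7 profiles, and an interaction contrast (the male-female difference under treatment 1 minus that under treatment 2) quantifies what the profiles show; a small sketch of that check:

```python
# Cell means as (T1, T2) for each sex, taken from the profiles above.
grade6 = {"M": (65, 50), "F": (40, 47)}
grade7 = {"M": (55, 50), "F": (40, 37)}

# Averaging the two grades (equal cell n) recovers the collapsed table.
collapsed = {
    sex: tuple((a + b) / 2 for a, b in zip(grade6[sex], grade7[sex]))
    for sex in ("M", "F")
}
print(collapsed)  # {'M': (60.0, 50.0), 'F': (40.0, 42.0)}

def interaction_contrast(profile):
    """(M - F difference under T1) minus (M - F difference under T2)."""
    (m1, m2), (f1, f2) = profile["M"], profile["F"]
    return (m1 - f1) - (m2 - f2)

print(interaction_contrast(grade6))  # 22 -> clear interaction
print(interaction_contrast(grade7))  # 2  -> essentially none
```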

In the context of aptitude–treatment interaction (ATI) research, Cronbach (1975) had an interesting way of characterizing higher order interactions:

When ATIs are present, a general statement about a treatment effect is misleading because the effect will come or go depending on the kind of person treated. . . . An ATI result can be taken as a general conclusion only if it is not in turn moderated by further variables. If Aptitude × Treatment × Sex interact, for example, then the Aptitude × Treatment effect does not tell the story. Once we attend to interactions, we enter a hall of mirrors that extends to infinity. (p. 119)


Thus, to examine the nature of a significant three-way multivariate interaction, one

might first determine which of the individual variables are significant (by examining

the univariate F’s for the three-way interaction). If any three-way interactions are present for a given dependent variable, we would then consider the two-way profiles to see

how they differ for those outcomes that are significant.

7.10 FACTORIAL DESCRIPTIVE DISCRIMINANT ANALYSIS

In this section, we present a discriminant analysis approach to describe multivariate

effects that are statistically significant in a factorial MANOVA. Unlike the traditional

MANOVA approach presented previously in this chapter, where univariate follow-up

tests were used to describe statistically significant multivariate interactions and main

effects, the approach described in this section uses linear combinations of variables to

describe such effects. In contrast to that approach, discriminant analysis uses the correlations among the discriminating variables to create composite variables that separate groups. When such composites are formed, you need to interpret the composites and use them to describe group differences. If you have not already read Chapter 10, which introduces discriminant analysis in the context of a simpler single-factor design, you should read that chapter before taking on the factorial presentation here.

We use the same SeniorWISE data set used in section 7.7. So, for this example, the two factors are treatment (with 3 levels) and gender (with 2 levels). The dependent variables are self-efficacy, verbal performance, and DAFS. As in a traditional two-way MANOVA, there will be overall multivariate tests for the two-way interaction and for the two main effects. If the interaction is significant, you can then conduct a simple effects analysis by running separate one-way descriptive discriminant analyses for each level of a factor of interest. Given the interest in examining treatment effects with the SeniorWISE data, we would run a one-way discriminant analysis for females and then a separate one-way discriminant analysis for males, with treatment as the single factor. According to Warner (2012), such an analysis, for this example, allows us to examine the composite variables that best separate treatment groups for females and those that best separate treatment groups for males.

In addition to the multivariate test for the interaction, you should also examine

the multivariate tests for main effects and identify the composite variables associated with such effects, since the composite variables may be different from those

involved in the interaction. Also, of course, if the multivariate test for the interaction

is not significant, you would also examine the multivariate tests for the main effects.

If the multivariate main effect were significant, you can identify the composite variables involved in the effect by running a single-factor descriptive discriminant analysis pooling across (or ignoring) the other factor. So, for example, if there were a

significant multivariate main effect for the treatment, you could run a descriptive

Chapter 7

â†œæ¸€å±®

â†œæ¸€å±®

discriminant analysis with treatment as the single factor with all cases included.

Such an analysis was done in section 10.7. If a multivariate main effect for gender

were significant, you could run a descriptive discriminant analysis with gender as

the single factor.

We now illustrate these analyses for the SeniorWISE data. Note that the preliminary analysis for the factorial descriptive discriminant analysis is identical to that described in section 7.7.1, so we do not describe it any further here. Also, in section 7.7.2, we reported that the multivariate test for the overall group-by-gender interaction indicated that this effect was statistically significant, Wilks' lambda = .946, F(6, 584) = 2.72, p = .013. In addition, the multivariate test results indicated a statistically significant main effect for treatment group, Wilks' lambda = .748, F(6, 584) = 15.170, p < .001, and gender, Wilks' lambda = .923, F(3, 292) = 8.154, p < .001. Given the interest in describing treatment effects for these data, we focus the follow-up analysis on treatment effects.

To describe the multivariate gender-by-group interaction, we ran a descriptive discriminant analysis for females and a separate analysis for males. Table 7.15 provides the syntax for this simple effects analysis, and Tables 7.16 and 7.17 provide the discriminant analysis results for females and males, respectively. For females, Table 7.16 indicates that one linear combination of variables separates the treatment groups, Wilks' lambda = .776, chi-square (6) = 37.10, p < .001. In addition, the square of the canonical correlation (.440²) for this function, when converted to a percent, indicates that about 19% of the variation for the first function is between treatment groups. Inspecting the standardized coefficients suggests that this linear combination is dominated by verbal performance and that high scores for this function correspond to high verbal performance scores. In addition, examining the group centroids suggests that, for females, the memory training group has much higher verbal performance scores, on average, than the other treatment groups, which have similar means for this composite variable.
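The statistics in these summary tables are connected by standard identities: the canonical correlation for a function is sqrt(eigenvalue / (1 + eigenvalue)), and Wilks' lambda for a set of functions is the product of 1 / (1 + eigenvalue) over those functions. A quick check against the values reported for females in Table 7.16:

```python
import math

eigenvalues = [0.240, 0.040]  # functions 1 and 2 for females

# Canonical correlation and between-group share for function 1.
r1 = math.sqrt(eigenvalues[0] / (1 + eigenvalues[0]))
print(f"{r1:.3f}")       # 0.440
print(f"{r1 ** 2:.2f}")  # 0.19 -> about 19% between groups

# Wilks' lambda for functions 1 through 2.
wilks = 1.0
for ev in eigenvalues:
    wilks *= 1 / (1 + ev)
print(f"{wilks:.3f}")    # 0.775 (reported .776; the eigenvalues are rounded)
```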

Table 7.15: SPSS Syntax for Simple Effects Analysis Using Discriminant Analysis

* The first set of commands requests analysis results separately for each group (females, then

males).

SORT CASES BY Gender.

SPLIT FILE SEPARATE BY Gender.

* The following commands are the typical discriminant analysis syntax.

DISCRIMINANT

/GROUPS=Group(1 3)

/VARIABLES=Self_Efficacy Verbal Dafs

/ANALYSIS = ALL

/STATISTICS=MEAN STDDEV UNIVF.

Table 7.16: SPSS Discriminant Analysis Results for Females

Summary of Canonical Discriminant Functions

Eigenvalues a
Function   Eigenvalue   % of Variance   Cumulative %   Canonical Correlation
1          .240         85.9            85.9           .440
2          .040b        14.1            100.0          .195
a. GENDER = FEMALE
b. First 2 canonical discriminant functions were used in the analysis.

Wilks' Lambda a
Test of Function(s)   Wilks' Lambda   Chi-square   df   Sig.
1 through 2           .776            37.100       6    .000
2                     .962            5.658        2    .059
a. GENDER = FEMALE

Standardized Canonical Discriminant Function Coefficients a
                     Function
                 1        2
Self_Efficacy    .452     .850
Verbal           .847     -.791
DAFS             -.218    .434
a. GENDER = FEMALE

Structure Matrix a
                     Function
                 1        2
Verbal           .905*    -.293
Self_Efficacy    .675     .721*
DAFS             .328     .359*
Pooled within-groups correlations between discriminating variables and standardized canonical discriminant functions. Variables ordered by absolute size of correlation within function.
* Largest absolute correlation between each variable and any discriminant function
a. GENDER = FEMALE

Functions at Group Centroids a
                       Function
GROUP              1        2
Memory Training    .673     .054
Health Training    -.452    .209
Control            -.221    -.263
Unstandardized canonical discriminant functions evaluated at group means.
a. GENDER = FEMALE


For males, Table 7.17 indicates that one linear combination of variables separates the treatment groups, Wilks' lambda = .653, chi-square (6) = 62.251, p < .001. In addition, the square of the canonical correlation (.583²) for this composite, when converted to a percent, indicates that about 34% of the composite score variation is between treatment groups. Inspecting the standardized coefficients indicates that self-efficacy and DAFS are the important variables that comprise the composite. Examining the group centroids indicates that, for males, the memory group has much greater self-efficacy and daily functional skills (DAFS) than the other treatment groups, which have similar means for this composite.

Summarizing the simple effects analysis following the statistically significant multivariate test of the gender-by-group interaction, we conclude that females assigned to the memory training group had much higher verbal performance than the other treatment groups, whereas males assigned to the memory training group had much higher self-efficacy and daily functioning skills. There appear to be trivial differences between the health intervention and control groups.

Table 7.17: SPSS Discriminant Analysis Results for Males

Summary of Canonical Discriminant Functions

Eigenvalues a
Function   Eigenvalue   % of Variance   Cumulative %   Canonical Correlation
1          .516         98.0            98.0           .583
2          .011b        2.0             100.0          .103
a. GENDER = MALE
b. First 2 canonical discriminant functions were used in the analysis.

Wilks' Lambda a
Test of Function(s)   Wilks' Lambda   Chi-square   df   Sig.
1 through 2           .653            62.251       6    .000
2                     .989            1.546        2    .462
a. GENDER = MALE

Standardized Canonical Discriminant Function Coefficients a
                     Function
                 1        2
Self_Efficacy    .545     -.386
Verbal           .050     1.171
DAFS             .668     -.436
a. GENDER = MALE

Structure Matrix a
                     Function
                 1        2
DAFS             .844*    .025
Self_Efficacy    .748*    -.107
Verbal           .561     .828*
Pooled within-groups correlations between discriminating variables and standardized canonical discriminant functions. Variables ordered by absolute size of correlation within function.
* Largest absolute correlation between each variable and any discriminant function.
a. GENDER = MALE

Functions at Group Centroids a
                       Function
GROUP              1        2
Memory Training    .999     .017
Health Training    -.400    -.133
Control            -.599    .116
Unstandardized canonical discriminant functions evaluated at group means
a. GENDER = MALE

Also, as noted, the multivariate main effect of the treatment was statistically significant. The follow-up analysis for this effect, which is the same as reported in Chapter 10 (section 10.7.2), indicates that the treatment groups differed on two composite variables. The first of these composites is composed of self-efficacy and verbal performance, while the second composite is primarily verbal performance. However, with the factorial analysis of the data, we learned that treatment group differences related to these composite variables are different for females and males. Thus, we would not use results involving the treatment main effects to describe treatment group differences.

7.11 SUMMARY

The advantages of a factorial over a one-way design are discussed. For equal cell n, all three methods that Overall and Spiegel (1969) mention yield the same F tests. For unequal cell n (which usually occurs in practice), the three methods can yield quite different results. The reason for this is that for unequal cell n the effects are correlated. There is a consensus among experts that for unequal cell sizes the regression approach (which yields the UNIQUE contribution of each effect) is generally preferable. In SPSS and SAS, the Type III sum of squares is this unique sum of squares. A traditional MANOVA approach for factorial designs is provided where the focus is on examining each outcome that is involved in the main effects and interaction. In addition, a discriminant


analysis approach for multivariate factorial designs is illustrated; it can be used when you are interested in identifying whether there are meaningful composite variables involved in the main effects and interactions.

7.12 EXERCISES

1. Consider the following 2 × 4 equal cell size MANOVA data set (two dependent

variables, Y1 and Y2, and factors FACA and FACB):

                                  B
                  1           2           3           4
A   1        6, 10       7, 8        9, 9       11, 8
             7, 6        10, 5       13, 16     11, 15
             17, 18      9, 11       8, 8       14, 9
    2        21, 19      18, 15      16, 13     10, 12
             11, 13      14, 10      4, 12      10, 8
             11, 13      11, 10      9, 8       8, 15

(a) Run the factorial MANOVA with SPSS using the commands: GLM Y1 Y2 BY FACA FACB.
(b) Which of the multivariate tests for the three different effects is (are) significant at the .05 level?
(c) For the effect(s) that show multivariate significance, which of the individual variables (at the .025 level) are contributing to the multivariate significance?
(d) Run the data with SPSS using the commands:
GLM Y1 Y2 BY FACA FACB /METHOD=SSTYPE(1).
Recall that SSTYPE(1) requests the sequential sum of squares associated with Method 3 as described in section 7.3. Are the results different? Explain.

2. An investigator has the following 2 × 4 MANOVA data set for two dependent

variables:

                                  B
A        7, 8        11, 8       7, 6       10, 5
         6, 12       9, 7        11, 14     13, 16
         11, 15      17, 18      9, 11      8, 8
         14, 9       13, 11      21, 19     18, 15
         16, 13      10, 12      11, 13     14, 10
         14, 12      10, 8       11, 13     11, 10
         9, 8        8, 15       17, 12     13, 14

299

300

â†œæ¸€å±®

â†œæ¸€å±®

Factorial ANOVA and MANOVA

(a) Run the factorial MANOVA on SPSS using the commands:
GLM Y1 Y2 BY FACA FACB
/EMMEANS=TABLES(FACA)
/EMMEANS=TABLES(FACB)
/EMMEANS=TABLES(FACA*FACB)
/PRINT=HOMOGENEITY.
(b) Which of the multivariate tests for the three effects are significant at the .05 level?
(c) For the effect(s) that show multivariate significance, which of the individual variables contribute to the multivariate significance at the .025 level?
(d) Is the homogeneity of the covariance matrices assumption for the cells tenable at the .05 level?
(e) Run the factorial MANOVA on the data set using the sequential sum of squares (Type I) option of SPSS. Are the univariate F ratios different? Explain.

REFERENCES

Barcikowski, R. S. (1983). Computer packages and research design, Vol. 3: SPSS and SPSSX. Washington, DC: University Press of America.
Carlson, J. E., & Timm, N. H. (1974). Analysis of non-orthogonal fixed effect designs. Psychological Bulletin, 81, 563–570.
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation for the behavioral sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116–127.
Cronbach, L., & Snow, R. (1977). Aptitudes and instructional methods: A handbook for research on interactions. New York, NY: Irvington.
Daniels, R. L., & Stevens, J. P. (1976). The interaction between the internal-external locus of control and two methods of college instruction. American Educational Research Journal, 13, 103–113.
Myers, J. L. (1979). Fundamentals of experimental design. Boston, MA: Allyn & Bacon.
Overall, J. E., & Spiegel, D. K. (1969). Concerning least squares analysis of experimental data. Psychological Bulletin, 72, 311–322.
Warner, R. M. (2012). Applied statistics: From bivariate through multivariate techniques (2nd ed.). Thousand Oaks, CA: Sage.

Chapter 8

ANALYSIS OF COVARIANCE

8.1 INTRODUCTION

Analysis of covariance (ANCOVA) is a statistical technique that combines regression analysis and analysis of variance. It can be helpful in drawing more accurate conclusions in nonrandomized studies. However, precautions have to be taken; otherwise, analysis of covariance can be misleading in some cases. In this chapter we indicate

what the purposes of ANCOVA are, when it is most effective, when the interpretation

of results from ANCOVA is “cleanest,” and when ANCOVA should not be used. We

start with the simplest case, one dependent variable and one covariate, with which

many readers may be somewhat familiar. Then we consider one dependent variable

and several covariates, where our previous study of multiple regression is helpful.

Multivariate analysis of covariance (MANCOVA) is then considered, where there are

several dependent variables and several covariates. We show how to run MANCOVA

on SAS and SPSS, interpret analysis results, and provide a guide for analysis.

8.1.1 Examples of Univariate and Multivariate Analysis of Covariance

What is a covariate? A potential covariate is any variable that is significantly correlated with the dependent variable. That is, we assume a linear relationship between the covariate (x) and the dependent variable (y). Consider now two typical univariate ANCOVAs with one covariate. In a two-group pretest–posttest design, the pretest

is often used as a covariate, because how the participants score before treatments is

generally correlated with how they score after treatments. Or, suppose three groups

are compared on some measure of achievement. In this situation IQ may be used as a

covariate, because IQ is usually at least moderately correlated with achievement.

You should recall that the null hypothesis being tested in ANCOVA is that the adjusted

population means are equal. Since a linear relationship is assumed between the covariate and the dependent variable, the means are adjusted in a linear fashion. We consider this in detail shortly. Thus, in interpreting output, for either univariate


or MANCOVA, it is the adjusted means that need to be examined. It is important to

note that SPSS and SAS do not automatically provide the adjusted means; they must

be requested.

Now consider two situations where MANCOVA would be appropriate. A counselor wishes to examine the effect of two different counseling approaches on several personality variables. The subjects are pretested on these variables and then posttested 2 months later. The pretest scores are the covariates and the posttest scores are the dependent variables. Second, a teacher wishes to determine the relative efficacy of two different methods of teaching 12th-grade mathematics. He uses three subtest scores of achievement on a posttest as the dependent variables. A plausible set of covariates here would be grade in math 11, an IQ measure, and, say, attitude toward education. The null hypothesis that

is tested in MANCOVA is that the adjusted population mean vectors are equal. Recall

that the null hypothesis for MANOVA was that the population mean vectors are equal.

Four excellent references for further study of ANCOVA/MANCOVA are available: an elementary introduction (Huck, Cormier, & Bounds, 1974), two good classic review articles (Cochran, 1957; Elashoff, 1969), and especially a very comprehensive and

thorough text by Huitema (2011).

8.2 PURPOSES OF ANCOVA

ANCOVA is linked to the following two basic objectives in experimental design:

1. Elimination of systematic bias

2. Reduction of within-group or error variance.

The best way of dealing with systematic bias (e.g., intact groups that differ systematically on several variables) is through random assignment of participants to groups,

thus equating the groups on all variables within sampling error. If random assignment

is not possible, however, then ANCOVA can be helpful in reducing bias.

Within-group variability, which is primarily due to individual differences among the

participants, can be dealt with in several ways: sample selection (participants who are

more homogeneous will vary less on the criterion measure), factorial designs (blocking), repeated-measures analysis, and ANCOVA. Precisely how covariance reduces

error will be considered soon. Because ANCOVA is linked to both of the basic objectives of experimental design, it certainly is a useful tool if properly used and interpreted.

In an experimental study (random assignment of participants to groups) the main purpose of covariance is to reduce error variance, because there will be no systematic bias.

However, if only a small number of participants can be assigned to each group, then chance differences are more likely, and covariance is useful in adjusting the posttest

means for the chance differences.


In a nonexperimental study the main purpose of covariance is to adjust the posttest

means for initial differences among the groups that are very likely with intact groups.

It should be emphasized, however, that even the use of several covariates does not

equate intact groups, that is, does not eliminate bias. Nevertheless, the use of two or

three appropriate covariates can make for a fairer comparison.

We now give two examples to illustrate how initial differences (systematic bias) on

a key variable between treatment groups can confound the interpretation of results.

Suppose an experimental psychologist wished to determine the effect of three methods of extinction on some kind of learned response. There are three intact groups to

which the methods are applied, and it is found that the average number of trials to

extinguish the response is least for Method 2. Now, it may be that Method 2 is more

effective, or it may be that the participants in Method 2 didn’t have the response as

thoroughly ingrained as the participants in the other two groups. In the latter case, the

response would be easier to extinguish, and it wouldn’t be clear whether it was the

method that made the difference or the fact that the response was easier to extinguish

that made Method 2 look better. The effects of the two are confounded, or mixed

together. What is needed here is a measure of degree of learning at the start of the

extinction trials (covariate). Then, if there are initial differences between the groups,

the posttest means will be adjusted to take this into account. That is, covariance will

adjust the posttest means to what they would be if all groups had started out equally

on the covariate.

As another example, suppose we are comparing the effect of two different teaching

methods on academic achievement for two different groups of students. Suppose

we learn that prior to implementing the treatment methods, the groups differed on

motivation to learn. Thus, if the academic performance of the group with greater

initial motivation was better than the other group at posttest, we would not know if

the performance differences were due to the teaching method or due to this initial

difference on motivation. Use of ANCOVA may provide for a fairer comparison

because it compares posttest performance assuming that the groups had the same

initial motivation.

8.3 ADJUSTMENT OF POSTTEST MEANS AND REDUCTION OF ERROR VARIANCE

As mentioned earlier, ANCOVA adjusts the posttest means to what they would be if

all groups started out equally on the covariate, at the grand mean. In this section we

derive the general equation for linearly adjusting the posttest means for one covariate.

Before we do that, however, it is important to discuss one of the assumptions underlying the analysis of covariance. That assumption for one covariate requires equal

within-group population regression slopes. Consider a three-group situation, with 15

participants per group. Suppose that the scatterplots for the three groups looked as

given in Figure 8.1.


Figure 8.1: Scatterplots of y and x for three groups (three panels, one per group, each plotting y against x).

Recall from beginning statistics that the x and y scores for each participant determine

a point in the plane. Requiring that the slopes be equal is equivalent to saying that the

nature of the linear relationship is the same for all groups, or that the rate of change

in y as a function of x is the same for all groups. For these scatterplots the slopes are

different, with the slope being the largest for group 2 and smallest for group 3. But the

issue is whether the population slopes are different and whether the sample slopes differ sufficiently to conclude that the population values are different. With small sample

sizes as in these scatterplots, it is dangerous to rely on visual inspection to determine

whether the population values are equal, because of considerable sampling error. Fortunately, there is a statistic for this, and later we indicate how to obtain it on SAS and

SPSS. In deriving the equation for the adjusted means we are going to assume the

slopes are equal. What if the slopes are not equal? Then ANCOVA is not appropriate,

and we indicate alternatives later in the chapter.

The details of obtaining the adjusted mean for the ith group (i.e., any group) are given in Figure 8.2. The general equation follows from the definition for the slope of a straight line and some basic algebra. In Figure 8.3 we show the adjusted means geometrically for a hypothetical three-group data set. A positive correlation is assumed

between the covariate and the dependent variable, so that a higher mean on x implies

a higher mean on y. Note that because group 3 scored below the grand mean on the

covariate, its mean is adjusted upward. On the other hand, because the mean for group

2 on the covariate is above the grand mean, covariance estimates that it would have

scored lower on y if its mean on the covariate was lower (at grand mean), and therefore

the mean for group 2 is adjusted downward.
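To make the adjustment concrete, the following Python sketch applies the one-covariate formula (adjusted mean = group mean on y minus b times the group's deviation from the grand mean on x) to hypothetical three-group summary statistics; the group means and the common slope b are invented purely for illustration:

```python
def adjusted_mean(y_mean, x_mean, grand_mean_x, b):
    """Adjusted posttest mean for one group with one covariate."""
    return y_mean - b * (x_mean - grand_mean_x)

# Hypothetical summary statistics for three groups
groups = {
    "group 1": {"y_mean": 50.0, "x_mean": 10.0},  # at the grand mean on x
    "group 2": {"y_mean": 60.0, "x_mean": 14.0},  # above the grand mean on x
    "group 3": {"y_mean": 45.0, "x_mean": 6.0},   # below the grand mean on x
}
b = 1.5  # assumed common within-group slope (positive x-y relationship)
grand_x = sum(g["x_mean"] for g in groups.values()) / len(groups)  # 10.0

adj = {name: adjusted_mean(g["y_mean"], g["x_mean"], grand_x, b)
       for name, g in groups.items()}
# Group 2 sat above the grand mean on the covariate, so its posttest mean
# is adjusted downward (60 -> 54); group 3 sat below, so its mean is
# adjusted upward (45 -> 51); group 1 is unchanged.
```

The direction of each adjustment mirrors the geometric description given in the text: groups above the grand mean on the covariate are adjusted down, groups below it are adjusted up.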

8.3.1 Reduction of Error Variance

Consider a teaching methods study where the dependent variable is chemistry achievement and the covariate is IQ. Then, within each teaching method there will be considerable variability on chemistry achievement due to individual differences among the students in terms of ability, background, attitude, and so on. A sizable portion of this within-variability, we assume, is due to differences in IQ. That is, chemistry


Figure 8.2: Deriving the general equation for the adjusted means in covariance. The figure shows a within-group regression line with slope b passing through the group-mean point (x̄_i, ȳ_i) and the point (x̄, ȳ_i*) at the grand mean on the covariate. From the definition of the slope of a straight line:

b = (change in y) / (change in x)

b = (ȳ_i* − ȳ_i) / (x̄ − x̄_i)

b(x̄ − x̄_i) = ȳ_i* − ȳ_i

ȳ_i* = ȳ_i + b(x̄ − x̄_i)

ȳ_i* = ȳ_i − b(x̄_i − x̄)

achievement scores differ partly because the students differ in IQ. If we can statistically remove this part of the within-variability, a smaller error term results, and hence a more powerful test of group posttest differences can be obtained. We denote the correlation between IQ and chemistry achievement by r_xy. Recall that the square of a correlation can be interpreted as “variance accounted for.” Thus, for example, if r_xy = .71, then (.71)² = .50, or 50% of the within-group variability on chemistry achievement can be accounted for by variability on IQ.

We denote the within-group variability of chemistry achievement by MS_w, the usual error term for ANOVA. Now, symbolically, the part of MS_w that is accounted for by IQ is MS_w·r_xy². Thus, the within-group variability that is left after the portion due to the covariate is removed is

MS_w − MS_w·r_xy² = MS_w(1 − r_xy²),  (1)

and this becomes our new error term for analysis of covariance, which we denote by MS_w*. Technically, there is an additional factor involved,


Figure 8.3: Regression lines and adjusted means for a three-group analysis of covariance. Notes: (a) a positive correlation is assumed between x and y; (b) arrows on the regression lines indicate that the adjusted means can be obtained by sliding the mean up (or down) the regression line until it hits the line for the grand mean; (c) ȳ_2 is the actual mean for group 2 and ȳ_2* represents the adjusted mean.

MS_w* = MS_w(1 − r_xy²){1 + 1/(f_e − 2)},  (2)

where f_e is the error degrees of freedom. However, the effect of this additional factor is slight as long as N ≥ 50.
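Equation 2 is easy to check numerically. The sketch below implements it in Python using hypothetical values (MS_w = 100, r_xy = .71, and an assumed error df of 56, as for three groups of 20 with one covariate), and shows that the factor 1 + 1/(f_e − 2) is indeed slight:

```python
def ancova_error_ms(ms_w, r_xy, f_e):
    """Equation 2: ANCOVA error mean square, including the small-sample
    factor 1 + 1/(f_e - 2), where f_e is the error degrees of freedom."""
    return ms_w * (1 - r_xy ** 2) * (1 + 1 / (f_e - 2))

# Hypothetical values: MS_w = 100, r_xy = .71, f_e = 56
simple = 100 * (1 - 0.71 ** 2)         # Equation 1 value, about 49.6
full = ancova_error_ms(100, 0.71, 56)  # about 50.5
# The two differ by under 2%, so the extra factor is negligible here.
```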

To show how much of a difference a covariate can make in increasing the sensitivity

of an experiment, we consider a hypothetical study. An investigator runs a one-way

ANOVA (three groups with 20 participants per group), and obtains F = 200/100 = 2,

which is not significant, because the critical value at .05 is 3.18. He had pretested the

subjects, but did not use the pretest as a covariate because the groups didn’t differ

significantly on the pretest (even though the correlation between pretest and posttest

was .71). This is a common mistake made by some researchers who are unaware of an

important purpose of covariance, that of reducing error variance. The analysis is redone

by another investigator using ANCOVA. Using the equation that we just derived for

the new error term for ANCOVA she finds:


MS_w* ≈ 100[1 − (.71)²] = 50

Thus, the error term for ANCOVA is only half as large as the error term for ANOVA! It is also necessary to obtain a new MS_b for ANCOVA; call it MS_b*. Because the formula for MS_b* is complicated, we do not pursue it. Let us assume the investigator obtains the following F ratio for covariance analysis:

F* = 190/50 = 3.8

This is significant at the .05 level. Therefore, the use of covariance can make the difference between not finding significance and finding significance due to the reduced

error term and the subsequent increase in power. Finally, we wish to note that MS_b* can be smaller or larger than MS_b, although in a randomized study the expected values

of the two are equal.

8.4 CHOICE OF COVARIATES

In general, any variables that theoretically should correlate with the dependent variable, or variables that have been shown to correlate for similar types of participants,

should be considered as possible covariates. The ideal is to choose as covariates variables that of course are significantly correlated with the dependent variable and that

have low correlations among themselves. If two covariates are highly correlated (say

.80), then they are removing much of the same error variance from y; use of x2 will

not offer much additional power. On the other hand, if two covariates (x1 and x2) have

a low correlation (say .20), then they are removing relatively distinct pieces of the

error variance from y, and we will obtain a much greater total error reduction. This

is illustrated in Figure 8.4 with Venn diagrams, where the circle represents error variance on y.

The shaded portion in each case represents the additional error reduction due to adding x2 to the model that already contains x1, that is, the part of error variance on y it

removes that x1 did not. Note that this shaded area is much smaller when x1 and x2 are

highly correlated.

Figure 8.4: Venn diagrams with solid lines representing the part of variance on y that x1 accounts for and dashed lines representing the variance on y that x2 accounts for. (Left panel: x1 and x2 with a low correlation; right panel: x1 and x2 with a high correlation.)


If the dependent variable is achievement in some content area, then one should always

consider the possibility of at least three covariates:

1. A measure of ability in that specific content area

2. A measure of general ability (IQ measure)

3. One or two relevant noncognitive measures (e.g., attitude toward education, study habits, etc.).

An example of this was given earlier, where we considered the effect of two different

teaching methods on 12th-grade mathematics achievement. We indicated that a plausible set of covariates would be grade in math 11 (a previous measure of ability in mathematics), an IQ measure, and attitude toward education (a noncognitive measure).

In studies with small or relatively small group sizes, it is particularly imperative to

consider the use of two or three covariates. Why? Because for small or medium effect

sizes, which are very common in social science research, power for the test of a treatment will be poor for small group size. Thus, one should attempt to reduce the error

variance as much as possible to obtain a more sensitive (powerful) test.

Huitema (2011, p. 231) recommended limiting the number of covariates so that the ratio

(C + (J − 1)) / N < .10,  (3)

where C is the number of covariates, J is the number of groups, and N is total sample size.

Thus, if we had a three-group problem with a total of 60 participants, then (C + 2) / 60 < .10

or C < 4. We should use fewer than four covariates. If this ratio is > .10, then the estimates

of the adjusted means are likely to be unstable. That is, if the study were replicated, it

could be expected that the equation used to estimate the adjusted means in the original

study would yield very different estimates for another sample from the same population.
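Huitema's rule of thumb is easy to automate. The helper below (an illustration for this chapter, not part of any statistical package) returns the largest number of covariates C satisfying (C + J − 1)/N < .10:

```python
import math

def max_covariates(n_groups, total_n, ratio=0.10):
    """Largest covariate count C with (C + J - 1) / N < ratio."""
    bound = ratio * total_n - (n_groups - 1)  # need C < bound
    return max(math.ceil(bound) - 1, 0)

# Three groups with N = 60: C must be less than 4, so at most 3 covariates.
```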

8.4.1 Importance of Covariates Being Measured Before Treatments

To avoid confounding (mixing together) of the treatment effect with a change on the

covariate, one should use information from only those covariates gathered before treatments are administered. If a covariate that was measured after treatments is used and

that variable was affected by treatments, then the change on the covariate may be correlated with change on the dependent variable. Thus, when the covariate adjustment is

made, you will remove part of the treatment effect.

8.5 ASSUMPTIONS IN ANALYSIS OF COVARIANCE

Analysis of covariance rests on the same assumptions as analysis of variance. Note that

when assessing assumptions, you should obtain the model residuals, as we show later,


and not the within-group outcome scores (where the latter may be used in ANOVA).

Three additional assumptions are a part of ANCOVA. That is, ANCOVA also assumes:

1. A linear relationship between the dependent variable and the covariate(s).*

2. Homogeneity of the regression slopes (for one covariate), that is, that the slope of

the regression line is the same in each group. For two covariates the assumption is

parallelism of the regression planes, and for more than two covariates the assumption is known as homogeneity of the regression hyperplanes.

3. The covariate is measured without error.

Because covariance rests partly on the same assumptions as ANOVA, any violations

that are serious in ANOVA (such as the independence assumption) are also serious

in ANCOVA. Violation of all three of the remaining assumptions of covariance may

be serious. For example, if the relationship between the covariate and the dependent

variable is curvilinear, then the adjustment of the means will be improper. In this case,

two possible courses of action are:

1. Seek a transformation of the data that linearizes the relationship. This is possible if the relationship between the covariate and the dependent variable is monotonic.

2. Fit a polynomial ANCOVA model to the data.

There is always measurement error for the variables that are typically used as covariates in social science research, and measurement error causes problems in both randomized and nonrandomized designs, but is more serious in nonrandomized designs. As

Huitema (2011) notes, in randomized experimental designs, the power of ANCOVA

is reduced when measurement error is present but treatment effect estimates are not

biased, provided that the treatment does not impact the covariate.

When measurement error is present on the covariate, then treatment effects can be

seriously biased in nonrandomized designs. In Figure 8.5 we illustrate the effect measurement error can have when comparing two different populations with analysis of

covariance. In the hypothetical example, with no measurement error we would conclude that group 1 is superior to group 2, whereas with considerable measurement error

the opposite conclusion is drawn. This example shows that if the covariate means are

not equal, then the difference between the adjusted means is partly a function of the

reliability of the covariate. Now, this problem would not be of particular concern if

we had a very reliable covariate such as IQ or other cognitive variables from a good

standardized test. If, on the other hand, the covariate is a noncognitive variable, or a

variable derived from a nonstandardized instrument (which might well be of questionable reliability), then concern would definitely be justified.

A violation of the homogeneity of regression slopes can also yield misleading results

if ANCOVA is used. To illustrate this, we present in Figure 8.6 a situation where the

* Nonlinear analysis of covariance is possible (cf., Huitema, 2011, chap. 12), but is rarely done.


Figure 8.5: Effect of measurement error on covariance results when comparing subjects from two different populations. The figure shows the regression lines for the two groups with no measurement error, and the regression line for each group with considerable measurement error. With no measurement error, group 1 is declared superior to group 2; with considerable measurement error, group 2 is declared superior to group 1.

Figure 8.6: Effect of heterogeneous slopes on interpretation in ANCOVA. Three panels are shown. Equal slopes: the vertical distance between the parallel regression lines at the grand mean (the difference in adjusted means) gives the superiority of group 1 over group 2, as estimated by covariance. Heterogeneous slopes, case 1: for x = a, the superiority of group 1 is overestimated by covariance, while for x = b, the superiority of group 1 is underestimated. Heterogeneous slopes, case 2: covariance estimates no difference between the groups, but for x = c, group 2 is superior, while for x = d, group 1 is superior.


assumption is met and two situations where the assumption is violated. Notice that

with homogeneous slopes the estimated superiority of group 1 at the grand mean is an

accurate estimate of group 1’s superiority for all levels of the covariate, since the lines

are parallel. On the other hand, for case 1 of heterogeneous slopes, the superiority of

group 1 (as estimated by ANCOVA) is not an accurate estimate of group 1’s superiority

for other values of the covariate. For x = a, group 1 is only slightly better than group 2, whereas for x = b, the superiority of group 1 is seriously underestimated by covariance.

The point is, when the slopes are unequal there is a covariate by treatment interaction.

That is, how much better group 1 is depends on which value of the covariate we specify.

For case 2 of heterogeneous slopes, the use of covariance would be totally misleading. Covariance estimates no difference between the groups, while for x = c, group 2 is quite superior to group 1. For x = d, group 1 is superior to group 2. We indicate later in the chapter, in detail, how the assumption of equal slopes is tested on SPSS.
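In SPSS and SAS the formal check adds a covariate-by-treatment interaction term to the model, as described later in the chapter. Purely as a descriptive sketch of what is being compared, the Python code below (hypothetical data) computes the separate within-group slopes whose equality is at issue:

```python
def slope(xs, ys):
    """Least-squares slope of y on x for one group."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Hypothetical data: group B's slope is roughly double group A's, the kind
# of pattern that would produce a covariate-by-treatment interaction.
xs = [1, 2, 3, 4, 5]
ys_a = [2.0, 3.1, 3.9, 5.2, 6.0]
ys_b = [1.0, 3.0, 5.1, 6.9, 9.0]
b_a = slope(xs, ys_a)  # about 1.01
b_b = slope(xs, ys_b)  # about 1.99
```

With small samples such as these, of course, only the formal test (not visual comparison of sample slopes) can tell whether the population slopes differ.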

8.6 USE OF ANCOVA WITH INTACT GROUPS

It should be noted that some researchers (Anderson, 1963; Lord, 1969) have argued

strongly against using ANCOVA with intact groups. Although we do not take this

position, it is important that you be aware of the several limitations or possible dangers when using ANCOVA with intact groups. First, even the use of several covariates

will not equate intact groups, and one should never be deluded into thinking it can.

The groups may still differ on some unknown important variable(s). Also, note that

equating groups on one variable may result in accentuating their differences on other

variables.

Second, recall that ANCOVA adjusts the posttest means to what they would be if all

the groups had started out equal on the covariate(s). You then need to consider whether

groups that are equal on the covariate would ever exist in the real world. Elashoff

(1969) gave the following example:

Teaching methods A and B are being compared. The class using A is composed

of high-ability students, whereas the class using B is composed of low-ability

students. A covariance analysis can be done on the posttest achievement scores

holding ability constant, as if A and B had been used on classes of equal and average ability.Â€.Â€.Â€. It may make no sense to think about comparing methods A and

B for students of average ability, perhaps each has been designed specifically for

the ability level it was used with, or neither method will, in the future, be used for

students of average ability. (p.Â€387)

Third, the assumptions of linearity and homogeneity of regression slopes need to be

satisfied for ANCOVA to be appropriate.


A fourth issue that can confound the interpretation of results is differential growth of

participants in intact or self-selected groups on some dependent variable. If the natural

growth is much greater in one group (treatment) than for the control group and covariance finds a significant difference after adjusting for any pretest differences, then it is not clear whether the difference is due to treatment, differential growth, or part of each. Bryk and Weisberg (1977) discussed this issue in detail and proposed an alternative approach for such growth models.

A fifth problem is that of measurement error. Of course, this same problem is present

in randomized studies. But there the effect is merely to attenuate power. In nonrandomized studies measurement error can seriously bias the treatment effect. Reichardt

(1979), in an extended discussion on measurement error in ANCOVA, stated:

Measurement error in the pretest can therefore produce spurious treatment effects

when none exist. But it can also result in a finding of no intercept difference when

a true treatment effect exists, or it can produce an estimate of the treatment effect

which is in the opposite direction of the true effect. (p. 164)

It is no wonder then that Pedhazur (1982), in discussing the effect of measurement

error when comparing intact groups, said:

The purpose of the discussion here was only to alert you to the problem in the hope

that you will reach two obvious conclusions: (1) that efforts should be directed to

construct measures of the covariates that have very high reliabilities and (2) that

ignoring the problem, as is unfortunately done in most applications of ANCOVA,

will not make it disappear. (p. 524)

Huitema (2011) discusses various strategies that can be used for nonrandomized

designs having covariates.

Given all of these problems, you may well wonder whether we should abandon the

use of ANCOVA when comparing intact groups. But other statistical methods for

analyzing this kind of data (such as matched samples, gain score ANOVA) suffer

from many of the same problems, such as seriously biased treatment effects. The

fact is that inferring cause–effect from intact groups is treacherous, regardless of the

type of statistical analysis. Therefore, the task is to do the best we can and exercise

considerable caution, or as Pedhazur (1982) put it, “the conduct of such research,

indeed all scientific research, requires sound theoretical thinking, constant vigilance,

and a thorough understanding of the potential and limitations of the methods being

used” (p. 525).

8.7 ALTERNATIVE ANALYSES FOR PRETEST–POSTTEST DESIGNS

When comparing two or more groups with pretest and posttest data, the following

three other modes of analysis are possible:


1. An ANOVA is done on the difference or gain scores (posttest–pretest).

2. A two-way repeated-measures ANOVA (this will be covered in Chapter 12)

is done. This is called a one between (the grouping variable) and one within

(pretest–posttest part) factor ANOVA.

3. An ANOVA is done on residual scores. That is, the dependent variable is regressed

on the covariate. Predicted scores are then subtracted from observed dependent

scores, yielding residual scores (ê_i). An ordinary one-way ANOVA is then performed on these residual scores. Although some individuals feel this approach is

equivalent to ANCOVA, Maxwell, Delaney, and Manheimer (1985) showed the

two methods are not the same and that analysis on residuals should be avoided.

The first two methods are used quite frequently. Huck and McLean (1975) and Jennings (1988) compared these two methods, along with the use of

ANCOVA for the pretest–posttest control group design, and concluded that ANCOVA

is the preferred method of analysis. Several comments from the Huck and McLean article are worth mentioning. First, they noted that with the repeated-measures approach

it is the interaction F that is indicating whether the treatments had a differential effect,

and not the treatment main effect. We consider two patterns of means to illustrate the

interaction of interest.

Situation 1

            Pretest    Posttest
Treatment     70          80
Control       60          70

Situation 2

            Pretest    Posttest
Treatment     65          80
Control       60          68

In Situation 1 the treatment main effect would probably be significant, because there

is a difference of 10 in the row means. However, the difference of 10 on the posttest

just transferred from an initial difference of 10 on the pretest. The interaction would

not be significant here, as there is no differential change in the treatment and control groups. Of course, in a randomized study, we should not observe such

between-group differences on the pretest. On the other hand, in Situation 2, even

though the treatment group scored somewhat higher on the pretest, it increased 15

points from pretest to posttest, whereas the control group increased just 8 points. That

is, there was a differential change in performance in the two groups, and this differential change is the interaction that is being tested in repeated measures ANOVA.

One way of thinking of an interaction effect is as a “difference in the differences.”

This is exactly what we have in Situation 2, hence a significant interaction effect.
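The interaction contrast, the "difference in the differences," can be computed directly from the cell means. A minimal Python sketch using the two situations above:

```python
def interaction_contrast(pre_t, post_t, pre_c, post_c):
    """Treatment gain minus control gain: the 'difference in the differences'."""
    return (post_t - pre_t) - (post_c - pre_c)

# Situation 1: both groups gain 10 points, so there is no interaction.
sit1 = interaction_contrast(70, 80, 60, 70)  # 0
# Situation 2: treatment gains 15, control gains 8, an interaction of 7.
sit2 = interaction_contrast(65, 80, 60, 68)  # 7
```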

Second, Huck and McLean (1975) noted that the interaction F from the repeated-measures ANOVA is identical to the F ratio one would obtain from an ANOVA on the

gain (difference) scores. Finally, whenever the regression coefficient is not equal to

1 (generally the case), the error term for ANCOVA will be smaller than for the gain

score analysis and hence the ANCOVA will be a more sensitive or powerful analysis.


Although not discussed in the Huck and McLean paper, we would like to add a caution concerning the use of gain scores. It is a fairly well-known measurement fact that

the reliability of gain (difference) scores is generally not good. To be more specific,

as the correlation between the pretest and posttest scores approaches the reliability

of the test, the reliability of the difference scores goes to 0. The following table from

Thorndike and Hagen (1977) quantifies things:

                 Average reliability of two tests
Correlation
between tests    .50    .60    .70    .80    .90    .95
  .00            .50    .60    .70    .80    .90    .95
  .40            .17    .33    .50    .67    .83    .92
  .50            .00    .20    .40    .60    .80    .90
  .60                   .00    .25    .50    .75    .88
  .70                          .00    .33    .67    .83
  .80                                 .00    .50    .75
  .90                                        .00    .50
  .95                                               .00

If our dependent variable is some noncognitive measure, or a variable derived from a

nonstandardized test (which could well be of questionable reliability), then a reliability

of about .60 or so is a definite possibility. In this case, if the correlation between pretest

and posttest is .50 (a realistic possibility), the reliability of the difference scores is only

.20. On the other hand, this table also shows that if our measure is quite reliable (say

.90), then the difference scores will be reliable provided that the correlation is not too

high. For example, for reliability = .90 and pre–post correlation = .50, the reliability of the difference scores is .80.
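The entries in the table follow from the standard formula for the reliability of difference scores when the two tests have equal variances: (average reliability − pre–post correlation) / (1 − pre–post correlation). A short Python sketch:

```python
def diff_score_reliability(avg_rel, r12):
    """Reliability of difference scores: (mean reliability - correlation)
    divided by (1 - correlation), assuming equal test variances."""
    return (avg_rel - r12) / (1 - r12)

# Reproduces the cases discussed in the text:
low = diff_score_reliability(0.60, 0.50)   # 0.20
high = diff_score_reliability(0.90, 0.50)  # 0.80
```

Note how the reliability collapses to zero as the pre–post correlation approaches the average reliability, exactly as the table shows.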

8.8 ERROR REDUCTION AND ADJUSTMENT OF POSTTEST MEANS FOR SEVERAL COVARIATES

What is the rationale for using several covariates? First, the use of several covariates

may result in greater error reduction than can be obtained with just one covariate. The

error reduction will be substantially greater if the covariates have relatively low intercorrelations among themselves (say < .40). Second, with several covariates, we can

make a better adjustment for initial differences between intact groups.

For one covariate, the amount of error reduction is governed primarily by the magnitude

of the correlation between the covariate and the dependent variable (see Equation 2).

For several covariates, the amount of error reduction is determined by the magnitude

of the multiple correlation between the dependent variable and the set of covariates

(predictors). This is why we indicated earlier that it is desirable to have covariates

with low intercorrelations among themselves, for then the multiple correlation will

Chapter 8


be larger, and we will achieve greater error reduction. Also, because R² has a
variance-accounted-for interpretation, we can speak of the percentage of within variability on
the dependent variable that is accounted for by the set of covariates.

Recall that the equation for the adjusted posttest mean for one covariate was given by:

ȳi* = ȳi − b(x̄i − x̄),  (4)

where b is the estimated common regression slope.

With several covariates (x1, x2, . . ., xk), we are simply regressing y on the set of xs, and
the adjusted equation becomes an extension:

ȳj* = ȳj − b1(x̄1j − x̄1) − b2(x̄2j − x̄2) − ··· − bk(x̄kj − x̄k),  (5)

where the bi are the regression coefficients, x̄1j is the mean for covariate 1 in group
j, x̄2j is the mean for covariate 2 in group j, and so on, and the x̄i are the grand means
for the covariates.

for the covariates. We next illustrate the use of this equation on a sample MANCOVA

problem.
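Equation 5 can be sketched directly: the adjusted mean subtracts, for each covariate, the slope-weighted deviation of the group's covariate mean from the grand mean (variable and function names are ours):

```python
def adjusted_mean(group_mean_y, slopes, group_cov_means, grand_cov_means):
    """Equation 5: y*_j = ybar_j - sum_i b_i * (xbar_ij - xbar_i)."""
    adj = group_mean_y
    for b, xbar_j, xbar in zip(slopes, group_cov_means, grand_cov_means):
        adj -= b * (xbar_j - xbar)
    return adj

# With one covariate this reduces to Equation 4: 30 - 0.5 * (12 - 10)
print(adjusted_mean(30.0, [0.5], [12.0], [10.0]))  # prints 29.0
```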

8.9 MANCOVA—SEVERAL DEPENDENT VARIABLES AND SEVERAL COVARIATES

In MANCOVA we are assuming there is a significant relationship between the set of

dependent variables and the set of covariates, or that there is a significant regression

of the ys on the xs. This is tested through the use of Wilks’ Λ. We are also assuming,

for more than two covariates, homogeneity of the regression hyperplanes. The null

hypothesis that is being tested in MANCOVA is that the adjusted population mean

vectors are equal:

H0: μ1adj = μ2adj = μ3adj = ··· = μJadj

In testing the null hypothesis in MANCOVA, adjusted W and T matrices are needed;

we denote these by W* and T*. In MANOVA, recall that the null hypothesis was

tested using Wilks' Λ. Thus, we have:

                 MANOVA             MANCOVA
Test statistic   Λ = |W| / |T|      Λ* = |W*| / |T*|

The calculation of W* and T* involves considerable matrix algebra, which we wish

to avoid. For those who are interested in the details, however, Finn (1974) has a nicely

worked out example.


Analysis of Covariance

In examining the output from statistical packages it is important to first make two

checks to determine whether MANCOVA is appropriate:

1. Check to see that there is a significant relationship between the dependent variables and the covariates.

2. Check to determine that the homogeneity of the regression hyperplanes is satisfied.

If either of these is not satisfied, then covariance is not appropriate. In particular, if

condition 2 is not met, then one should consider using the Johnson–Neyman technique,

which determines a region of nonsignificance, that is, a set of x values for which the

groups do not differ, and hence for values of x outside this region one group is superior

to the other. The Johnson–Neyman technique is described by Huitema (2011), and

extended discussion is provided in Rogosa (1977, 1980).

Incidentally, if the homogeneity of regression slopes is rejected for several groups,

it does not automatically follow that the slopes for all groups differ. In this case, one

might follow up the overall test with additional homogeneity tests on all combinations

of pairs of slopes. Often, the slopes will be homogeneous for many of the groups. In

this case one can apply ANCOVA to the groups that have homogeneous slopes, and

apply the Johnson–Neyman technique to the groups with heterogeneous slopes. At

present, neither SAS nor SPSS offers the Johnson–Neyman technique.

8.10 TESTING THE ASSUMPTION OF HOMOGENEOUS HYPERPLANES ON SPSS

Neither SAS nor SPSS automatically provides the test of the homogeneity of the

regression hyperplanes. Recall that, for one covariate, this is the assumption of equal

regression slopes in the groups, and that for two covariates it is the assumption of

parallel regression planes. To set up the syntax to test this assumption, it is necessary

to understand what a violation of the assumption means. As we indicated earlier (and

displayed in Figure 8.4), a violation means there is a covariate-by-treatment interaction. Evidence that the assumption is met means the interaction is not present, which is

consistent with the use of MANCOVA.

Thus, what is done on SPSS is to set up an effect involving the interaction (for a given
covariate), and then test whether this effect is significant. If it is, the assumption is not tenable. This is one of those cases where researchers typically do not want
significance, because a nonsignificant interaction means the assumption is tenable and covariance is appropriate. With

the SPSS GLM procedure, the interaction can be tested for each covariate across the

multiple outcomes simultaneously.

Example 8.1: Two Dependent Variables and One Covariate

We call the grouping variable TREATS, and denote the dependent variables by

Y1 and Y2, and the covariate by X1. Then, the key parts of the GLM syntax that


produce a test of the assumption of no treatment-covariate interaction for any of the

outcomes are

GLM Y1 Y2 BY TREATS WITH X1

/DESIGN=TREATS X1 TREATS*X1.

Example 8.2: Three Dependent Variables and Two Covariates

We denote the dependent variables by Y1, Y2, and Y3, and the covariates by X1 and X2.

Then, the relevant syntax is

GLM Y1 Y2 Y3 BY TREATS WITH X1 X2

/DESIGN=TREATS X1 X2 TREATS*X1 TREATS*X2.

These two syntax lines will be embedded in others when running a MANCOVA on

SPSS, as you can see in a computer example we consider later. With the previous two

examples and the computer examples, you should be able to generalize the setup of the

control lines for testing homogeneity of regression hyperplanes for any combination of

dependent variables and covariates.

8.11 EFFECT SIZE MEASURES FOR GROUP COMPARISONS IN MANCOVA/ANCOVA

A variety of effect size measures are available to describe the differences in adjusted

means. AÂ€raw score (unstandardized) difference in adjusted means should be reported

and may be sufficient if the scale of the dependent variable is well known and easily

understood. In addition, as discussed in Olejnik and Algina (2000), a standardized difference in adjusted means between two groups (essentially a Cohen's d measure) may
be computed as

d = (ȳadj1 − ȳadj2) / MSW^(1/2),

where MSW is the pooled mean squared error from a one-way ANOVA that includes

the treatment as the only explanatory variable (thus excluding any covariates). This

effect size measure, among other things, assumes that (1) the covariates are participant

attribute variables (or more properly variables whose variability is intrinsic to the population of interest, as explained in Olejnik and Algina, 2000) and (2) the homogeneity

of variance assumption for the outcome is satisfied.

In addition, one may also use proportion of variance explained effect size measures
for treatment group differences in MANCOVA/ANCOVA. For example, for a given

outcome, the proportion of variance explained by treatment group differences may be

computed as

η² = SSeffect / SStotal ,


where SSeffect is the sum of squares due to the treatment from the ANCOVA and SStotal is
the total sum of squares for a given dependent variable. Note that computer software
commonly reports partial η², which is not the effect size discussed here and which
removes variation due to the covariate from SStotal. Conceptually, η² describes the
strength of the treatment effect for the general population, whereas partial η² describes
the strength of the treatment for participants having the same values on the covariates
(i.e., holding scores constant on all covariates). In addition, an overall multivariate
strength of association, multivariate eta square (also called tau square), can be computed and is

η²multivariate = 1 − Λ^(1/r),

where Λ is Wilks' lambda and r is the smaller of (p, q), where p is the number of
dependent variables and q is the degrees of freedom for the treatment effect. This
effect size is interpreted as the proportion of generalized variance in the set of outcomes that is due to the treatment. Use of these effect size measures is illustrated in
Example 8.4.
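The three effect size measures just described can be sketched as follows (function names are ours; for d, MSW is assumed to come from an ANOVA with the treatment as the only explanatory variable, as described above):

```python
def cohens_d_adjusted(adj_mean_1, adj_mean_2, ms_within):
    """Standardized difference in adjusted means: d = (y1* - y2*) / MSW**0.5."""
    return (adj_mean_1 - adj_mean_2) / ms_within ** 0.5

def eta_squared(ss_effect, ss_total):
    """Proportion of total variance in an outcome due to the treatment."""
    return ss_effect / ss_total

def multivariate_eta_squared(wilks_lambda, p, q):
    """Multivariate eta square: 1 - Lambda**(1/r), with r = min(p, q)."""
    r = min(p, q)
    return 1 - wilks_lambda ** (1 / r)
```

For instance, with the Wilks' Λ of .519 from Example 8.4 and r = 2, `multivariate_eta_squared(0.519, 2, 2)` gives about .28, matching the value reported there.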

8.12 TWO COMPUTER EXAMPLES

We now consider two examples to illustrate (1) how to set up syntax to run MANCOVA on SAS GLM and then SPSS GLM, and (2) how to interpret the output, including determining whether use of covariates is appropriate. The first example uses

artificial data and is simpler, having just two dependent variables and one covariate,

whereas the second example uses data from an actual study and is a bit more complex,

involving two dependent variables and two covariates. We also conduct some preliminary analysis activities (checking for outliers, assessing assumptions) with the second

example.

Example 8.3: MANCOVA on SAS GLM

This example has two groups, with 15 participants in group 1 and 14 participants in

group 2. There are two dependent variables, denoted by POSTCOMP and POSTHIOR

in the SAS GLM syntax and on the printout, and one covariate (denoted by PRECOMP). The syntax for running the MANCOVA analysis is given in Table 8.1, along
with annotation.

Table 8.2 presents two multivariate tests for determining whether MANCOVA is
appropriate, that is, whether there is a significant relationship between the two dependent variables and the covariate, and whether there is no covariate-by-group interaction.
The multivariate test at the top of Table 8.2 indicates there is a significant relationship
between the covariate and the set of outcomes (F = 21.46, p = .0001). Also, the multivariate test in the middle of the table shows there is not a covariate-by-group interaction effect (F = 1.90, p = .1707). This supports the decision to use MANCOVA.


Table 8.1: SAS GLM Syntax for Two-Group MANCOVA: Two Dependent Variables and One Covariate

TITLE 'MULTIVARIATE ANALYSIS OF COVARIANCE';
DATA COMP;
INPUT GPID PRECOMP POSTCOMP POSTHIOR @@;
LINES;
1 15 17 3    1 10 6 3    1 13 13 1    1 14 14 8
1 12 12 3    1 10 9 9    1 12 12 3    1 8 9 12
1 12 15 3    1 8 10 8    1 12 13 1    1 7 11 10
1 12 16 1    1 9 12 2    1 12 14 8
2 9 9 3      2 13 19 5   2 13 16 11   2 6 7 18
2 10 11 15   2 6 9 9     2 16 20 8    2 9 15 6
2 10 8 9     2 8 10 3    2 13 16 12   2 12 17 20
2 11 18 12   2 14 18 16
;
PROC PRINT;
PROC REG;
MODEL POSTCOMP POSTHIOR = PRECOMP;
MTEST;
PROC GLM;
CLASS GPID;
MODEL POSTCOMP POSTHIOR = PRECOMP GPID PRECOMP*GPID;
MANOVA H = PRECOMP*GPID;
PROC GLM;
CLASS GPID;
MODEL POSTCOMP POSTHIOR = PRECOMP GPID;
MANOVA H = GPID;
LSMEANS GPID/PDIFF;
RUN;

Notes: PROC REG is used to examine the relationship between the two dependent variables and the covariate; the MTEST statement is needed to obtain the multivariate test. The first PROC GLM run uses the MANOVA statement to obtain the multivariate test of no overall PRECOMP BY GPID interaction effect. The second PROC GLM run, along with its MANOVA statement, tests whether the adjusted population mean vectors are equal. The LSMEANS statement is needed to obtain the adjusted means.

The multivariate null hypothesis tested in MANCOVA is that the adjusted population
mean vectors are equal, that is,

H0: (μ*11, μ*21)′ = (μ*12, μ*22)′.


Table 8.2: Multivariate Tests for Significant Regression, Covariate-by-Treatment Interaction, and Group Differences

Multivariate Test: Multivariate Statistics and Exact F Statistics
S = 1   M = 0   N = 12

Statistic                 Value        F       Num DF   Den DF   Pr > F
Wilks' Lambda             0.37722383   21.46   2        26       0.0001
Pillai's Trace            0.62277617   21.46   2        26       0.0001
Hotelling-Lawley Trace    1.65094597   21.46   2        26       0.0001
Roy's Greatest Root       1.65094597   21.46   2        26       0.0001

MANOVA Test Criteria and Exact F Statistics for the Hypothesis of no Overall PRECOMP*GPID Effect
H = Type III SS&CP Matrix for PRECOMP*GPID   E = Error SS&CP Matrix
S = 1   M = 0   N = 11

Statistic                 Value        F       Num DF   Den DF   Pr > F
Wilks' Lambda             0.86301048   1.90    2        24       0.1707
Pillai's Trace            0.13698952   1.90    2        24       0.1707
Hotelling-Lawley Trace    0.15873448   1.90    2        24       0.1707
Roy's Greatest Root       0.15873448   1.90    2        24       0.1707

MANOVA Test Criteria and Exact F Statistics for the Hypothesis of no Overall GPID Effect
H = Type III SS&CP Matrix for GPID   E = Error SS&CP Matrix
S = 1   M = 0   N = 11.5

Statistic                 Value        F       Num DF   Den DF   Pr > F
Wilks' Lambda             0.64891393   6.76    2        25       0.0045
Pillai's Trace            0.35108107   6.76    2        25       0.0045
Hotelling-Lawley Trace    0.54102455   6.76    2        25       0.0045
Roy's Greatest Root       0.54102455   6.76    2        25       0.0045

The multivariate test at the bottom of Table 8.2 (F = 6.76, p = .0045) shows that
we reject the multivariate null hypothesis at the .05 level, and hence conclude that
the groups differ on the set of adjusted means. The univariate ANCOVA follow-up F
tests in Table 8.3 (F = 5.26 for POSTCOMP, p = .03, and F = 9.84 for POSTHIOR,
p = .004) indicate that adjusted means differ for each of the dependent variables. The
adjusted means for the variables are also given in Table 8.3.

Can we have confidence in the reliability of the adjusted means? From Huitema's
inequality we need (C + (J − 1)) / N < .10. Because here J = 2 and N = 29, we obtain


Table 8.3: Univariate Tests for Group Differences and Adjusted Means

POSTCOMP
Source     DF   Type I SS     Mean Square   F Value   Pr > F
PRECOMP    1    237.6895679   237.6895679   43.90     <0.001
GPID       1    28.4986009    28.4986009    5.26      0.0301

Source     DF   Type III SS   Mean Square   F Value   Pr > F
PRECOMP    1    247.9797944   247.9797944   45.80     <0.001
GPID       1    28.4986009    28.4986009    5.26      0.0301

POSTHIOR
Source     DF   Type I SS     Mean Square   F Value   Pr > F
PRECOMP    1    17.6622124    17.6622124    0.82      0.3732
GPID       1    211.5902344   211.5902344   9.84      0.0042

Source     DF   Type III SS   Mean Square   F Value   Pr > F
PRECOMP    1    10.2007226    10.2007226    0.47      0.4972
GPID       1    211.5902344   211.5902344   9.84      0.0042

General Linear Models Procedure Least Squares Means

GPID   POSTCOMP LSMEAN   Pr > |T| H0: LSMEAN1 = LSMEAN2
1      12.0055476
2      13.9940562        0.0301

GPID   POSTHIOR LSMEAN   Pr > |T| H0: LSMEAN1 = LSMEAN2
1      5.0394385
2      10.4577444        0.0042

(C + 1) / 29 < .10 or C < 1.9. Thus, we should use fewer than two covariates for reliable

results, and we have used just one covariate.
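Huitema's inequality, (C + (J − 1))/N < .10, can be rearranged to give the largest defensible number of covariates for a design (a small sketch; the function name is ours):

```python
def huitema_max_covariates(n_groups, n_total):
    """Largest C satisfying (C + (J - 1)) / N < .10 (Huitema's inequality)."""
    return 0.10 * n_total - (n_groups - 1)

# Example 8.3: J = 2 groups, N = 29 -> C < 1.9, so one covariate is defensible
print(round(huitema_max_covariates(2, 29), 2))  # prints 1.9
```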

Example 8.4: MANCOVA on SPSS GLM

Next, we consider a social psychological study by Novince (1977) that examined the

effect of behavioral rehearsal (group 1) and of behavioral rehearsal plus cognitive

restructuring (combination treatment, group 3) on reducing anxiety (NEGEVAL) and

facilitating social skills (AVOID) for female college freshmen. There was also a control group (group 2), with 11 participants in each group. The participants were pretested and posttested on four measures, thus the pretests were the covariates.

For this example we use only two of the measures: avoidance and negative evaluation. In Table 8.4 we present syntax for running the MANCOVA, along with annotation explaining what some key subcommands are doing. Table 8.5 presents syntax
for obtaining within-group Mahalanobis distance values that can be used to identify
multivariate outliers among the variables. Tables 8.6, 8.7, 8.8, 8.9, and 8.10 present
selected analysis results. Specifically, Table 8.6 presents descriptive statistics for
the study variables, Table 8.7 presents results for tests of the homogeneity of the


regression planes, and Table 8.8 shows tests for homogeneity of variance. Table 8.9
provides the overall multivariate tests as well as follow-up univariate tests for the
MANCOVA, and Table 8.10 presents the adjusted means and Bonferroni-adjusted
comparisons for adjusted mean differences. As in one-way MANOVA, the Bonferroni adjustments guard against type I error inflation due to the number of pairwise
comparisons.

Before we use the MANCOVA procedure, we examine the data for potential outliers,
examine the shape of the distributions of the covariates and outcomes, and inspect
descriptive statistics. Using the syntax in Table 8.5, we obtain the Mahalanobis distances for each case to identify if multivariate outliers are present on the set of dependent variables and covariates. The largest obtained distance is 7.79, which does not
exceed the chi-square critical value (.001, 4) of 18.47. Thus, no multivariate outliers

Table 8.4: SPSS GLM Syntax for Three-Group Example: Two Dependent Variables and Two Covariates

TITLE 'NOVINCE DATA — 3 GP ANCOVA-2 DEP VARS AND 2 COVS'.
DATA LIST FREE/GPID AVOID NEGEVAL PREAVOID PRENEG.
BEGIN DATA.
1  91  81  70 102   1 107 132 121  71   1 121  97  89  76   1  86  88  80  85
1 137 119 123 117   1 138 132 112 106   1 133 116 126  97   1 114  72 112  76
1 127 101 121  85   1 114 138  80 105   1 118 121 101 113
2 107  88 116  97   2  76  95  77  64   2 116  87 111  86   2 126 112 121 106
2 104 107 105 113   2  96  84  97  92   2 127  88 132 104   2  99 101  98  81
2  94  87  85  96   2  92  80  82  88   2 128 109 112 118
3 121 134  96  96   3 140 130 120 110   3 148 123 130 111   3 147 155 145 118
3 139 124 122 105   3 121 123 119 122   3 141 155 104 139   3 143 131 121 103
3 120 123  80  77   3 140 140 121 121   3  95 103  92  94
END DATA.
LIST.
GLM AVOID NEGEVAL BY GPID WITH PREAVOID PRENEG
 /PRINT=DESCRIPTIVE ETASQ
 /DESIGN=GPID PREAVOID PRENEG GPID*PREAVOID GPID*PRENEG.
GLM AVOID NEGEVAL BY GPID WITH PREAVOID PRENEG
 /EMMEANS=TABLES(GPID) COMPARE ADJ(BONFERRONI)
 /PLOT=RESIDUALS
 /SAVE=RESID ZRESID
 /PRINT=DESCRIPTIVE ETASQ HOMOGENEITY
 /DESIGN=PREAVOID PRENEG GPID.

Notes: With the first set of GLM commands, the DESIGN subcommand requests a test of the equality of regression planes assumption for each outcome. In particular, GPID*PREAVOID GPID*PRENEG creates the product variables needed to test the interactions of interest. The second set of GLM commands produces the standard MANCOVA results; the EMMEANS subcommand requests comparisons of adjusted means using the Bonferroni procedure.


Table 8.5: SPSS Syntax for Obtaining Within-Group Mahalanobis Distance Values

SORT CASES BY gpid(A).
SPLIT FILE BY gpid.
REGRESSION
 /STATISTICS COEFF OUTS R ANOVA
 /DEPENDENT case
 /METHOD=ENTER avoid negeval preavoid preneg
 /SAVE MAHAL.
EXECUTE.
SPLIT FILE OFF.

Notes: To obtain the Mahalanobis distances within groups, cases must first be sorted by the grouping variable; the SPLIT FILE command is needed to obtain the distances for each group separately. The REGRESSION procedure computes the distances. Note that case (which is the case ID) is the dependent variable, which is irrelevant here because the procedure uses information from the "predictors" only in computing the distance values. The "predictor" variables here are the dependent variables and covariates used in the MANCOVA, which are entered with the METHOD subcommand.

are indicated. We also computed within-group z scores for each of the variables separately and did not find any observation lying more than 2.5 standard deviations from

the respective group mean, suggesting no univariate outliers are present. In addition,

examining histograms of each of the variables as well as scatterplots of each outcome

and each covariate for each group did not suggest any unusual values and suggested

that the distributions of each variable appear to be roughly symmetrical. Further,

examining the scatterplots suggested that each covariate is linearly related to each of

the outcome variables, supporting the linearity assumption.
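The screening idea can be sketched with NumPy (an illustration of the computation, not the SPSS procedure; the data here are a hypothetical two-variable group). Each case's squared Mahalanobis distance from its group centroid is compared against a chi-square critical value: 18.47 for .001 with four variables as in the text, or 13.82 for .001 with the two variables used here:

```python
import numpy as np

def mahalanobis_sq(data):
    """Squared Mahalanobis distance of each row from the sample centroid."""
    data = np.asarray(data, dtype=float)
    centered = data - data.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(data, rowvar=False))
    return np.einsum("ij,jk,ik->i", centered, inv_cov, centered)

# Hypothetical two-variable group; chi-square(.001, 2) critical value is 13.82
group = [[91, 102], [137, 117], [127, 85], [107, 97], [104, 113], [94, 96]]
d2 = mahalanobis_sq(group)
print((d2 > 13.82).any())  # prints False: no multivariate outliers flagged
```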

Table 8.6 shows the means and standard deviations for each of the study variables
by treatment group (GPID). Examining the group means for the outcomes (AVOID,
NEGEVAL) indicates that Group 3 has the highest means for each outcome and Group
2 has the lowest. For the covariates, Group 3 has the highest mean and the means for
Groups 2 and 1 are fairly similar. Given that random assignment has been properly
done, use of MANCOVA (or ANCOVA) is preferable to MANOVA (or ANOVA) for
the situation where covariate means appear to differ across groups because use of the
covariates properly adjusts for the differences in the covariates across groups. See
Huitema (2011, pp. 202–208) for a discussion of this issue.

Having some assurance that there are no outliers present, the shapes of the distributions

are fairly symmetrical, and linear relationships are present between the covariates and

the outcomes, we now examine the formal assumptions associated with the procedure.

(Note though that the linearity assumption has already been assessed.) First, Table 8.7

provides the results for the test of the assumption that there is no treatment-covariate

interaction for the set of outcomes, which the GLM procedure performs separately for


Table 8.6: Descriptive Statistics for the Study Variables by Group

GPID                       AVOID      NEGEVAL    PREAVOID   PRENEG
1.00   Mean                116.9091   108.8182   103.1818   93.9091
       N                   11         11         11         11
       Std. deviation      17.23052   22.34645   20.21296   16.02158
2.00   Mean                105.9091   94.3636    103.2727   95.0000
       N                   11         11         11         11
       Std. deviation      16.78961   11.10201   17.27478   15.34927
3.00   Mean                132.2727   131.0000   113.6364   108.7273
       N                   11         11         11         11
       Std. deviation      16.16843   15.05988   18.71509   16.63785

each covariate. The results suggest that there is no interaction between the treatment
and PREAVOID for any outcome, multivariate F = .277, p = .892 (corresponding to
Wilks' Λ), and no interaction between the treatment and PRENEG for any outcome,
multivariate F = .275, p = .892. In addition, Box's M test, M = 6.689, p = .418, does
not indicate that the variance-covariance matrices of the dependent variables differ across
groups. Note that Box's M does not test the assumption that the variance-covariance
matrices of the residuals are similar across groups. However, Levene's test assesses
whether the residuals for a given outcome have the same variance across groups. The
results of these tests, shown in Table 8.8, provide support that this assumption is not
violated for the AVOID outcome, F = 1.184, p = .320, and for the NEGEVAL outcome,
F = 1.620, p = .215. Further, Table 8.9 shows that PREAVOID is related to the set of
outcomes, multivariate F = 17.659, p < .001, as is PRENEG, multivariate F = 4.379,
p = .023.

Having now learned that there is no interaction between the treatment and covariates for any outcome, that the residual variance is similar across groups for each
outcome, and that each covariate is related to the set of outcomes, we attend to
the assumption that the residuals from the MANCOVA procedure are independently
distributed and follow a multivariate normal distribution in each of the treatment
populations. Given that the treatments were individually administered and individuals completed the assessments on an individual basis, we have no reason to suspect that the independence assumption is violated. To assess normality, we examine
graphs and compute skewness and kurtosis of the residuals. The syntax in Table 8.4
obtains the residuals from the MANCOVA procedure for the two outcomes for each
group. Inspecting the histograms does not suggest a serious departure from normality, which is supported by the skewness and kurtosis values, none of which exceeds
a magnitude of 1.5.


Table 8.7: Multivariate Tests for No Treatment-Covariate Interactions

Multivariate Tests(a)

Effect                               Value   F          Hypothesis df   Error df   Sig.   Partial eta squared
Intercept        Pillai's Trace      .200    2.866(b)   2.000           23.000     .077   .200
                 Wilks' Lambda       .800    2.866(b)   2.000           23.000     .077   .200
                 Hotelling's Trace   .249    2.866(b)   2.000           23.000     .077   .200
                 Roy's Largest Root  .249    2.866(b)   2.000           23.000     .077   .200
GPID             Pillai's Trace      .143    .922       4.000           48.000     .459   .071
                 Wilks' Lambda       .862    .889(b)    4.000           46.000     .478   .072
                 Hotelling's Trace   .156    .856       4.000           44.000     .498   .072
                 Roy's Largest Root  .111    1.334(c)   2.000           24.000     .282   .100
PREAVOID         Pillai's Trace      .553    14.248(b)  2.000           23.000     .000   .553
                 Wilks' Lambda       .447    14.248(b)  2.000           23.000     .000   .553
                 Hotelling's Trace   1.239   14.248(b)  2.000           23.000     .000   .553
                 Roy's Largest Root  1.239   14.248(b)  2.000           23.000     .000   .553
PRENEG           Pillai's Trace      .235    3.529(b)   2.000           23.000     .046   .235
                 Wilks' Lambda       .765    3.529(b)   2.000           23.000     .046   .235
                 Hotelling's Trace   .307    3.529(b)   2.000           23.000     .046   .235
                 Roy's Largest Root  .307    3.529(b)   2.000           23.000     .046   .235
GPID * PREAVOID  Pillai's Trace      .047    .287       4.000           48.000     .885   .023
                 Wilks' Lambda       .954    .277(b)    4.000           46.000     .892   .023
                 Hotelling's Trace   .048    .266       4.000           44.000     .898   .024
                 Roy's Largest Root  .040    .485(c)    2.000           24.000     .622   .039
GPID * PRENEG    Pillai's Trace      .047    .287       4.000           48.000     .885   .023
                 Wilks' Lambda       .954    .275(b)    4.000           46.000     .892   .023
                 Hotelling's Trace   .048    .264       4.000           44.000     .900   .023
                 Roy's Largest Root  .035    .415(c)    2.000           24.000     .665   .033

a Design: Intercept + GPID + PREAVOID + PRENEG + GPID * PREAVOID + GPID * PRENEG
b Exact statistic
c The statistic is an upper bound on F that yields a lower bound on the significance level.

Table 8.8: Homogeneity of Variance Tests for MANCOVA

Box's test of equality of covariance matrices(a)
Box's M   F       df1   df2         Sig.
6.689     1.007   6     22430.769   .418

Tests the null hypothesis that the observed covariance matrices of the dependent variables are equal across groups.
a Design: Intercept + PREAVOID + PRENEG + GPID

Levene's test of equality of error variances(a)
          F       df1   df2   Sig.
AVOID     1.184   2     30    .320
NEGEVAL   1.620   2     30    .215

Tests the null hypothesis that the error variance of the dependent variable is equal across groups.
a Design: Intercept + PREAVOID + PRENEG + GPID

Table 8.9: MANCOVA and ANCOVA Test Results

Multivariate tests(a)

Effect                         Value   F          Hypothesis df   Error df   Sig.   Partial eta squared
Intercept  Pillai's Trace      .219    3.783(b)   2.000           27.000     .036   .219
           Wilks' Lambda       .781    3.783(b)   2.000           27.000     .036   .219
           Hotelling's Trace   .280    3.783(b)   2.000           27.000     .036   .219
           Roy's Largest Root  .280    3.783(b)   2.000           27.000     .036   .219
PREAVOID   Pillai's Trace      .567    17.659(b)  2.000           27.000     .000   .567
           Wilks' Lambda       .433    17.659(b)  2.000           27.000     .000   .567
           Hotelling's Trace   1.308   17.659(b)  2.000           27.000     .000   .567
           Roy's Largest Root  1.308   17.659(b)  2.000           27.000     .000   .567
PRENEG     Pillai's Trace      .245    4.379(b)   2.000           27.000     .023   .245
           Wilks' Lambda       .755    4.379(b)   2.000           27.000     .023   .245
           Hotelling's Trace   .324    4.379(b)   2.000           27.000     .023   .245
           Roy's Largest Root  .324    4.379(b)   2.000           27.000     .023   .245
GPID       Pillai's Trace      .491    4.555      4.000           56.000     .003   .246
           Wilks' Lambda       .519    5.246(b)   4.000           54.000     .001   .280
           Hotelling's Trace   .910    5.913      4.000           52.000     .001   .313
           Roy's Largest Root  .889    12.443(c)  2.000           28.000     .000   .471

a Design: Intercept + PREAVOID + PRENEG + GPID
b Exact statistic
c The statistic is an upper bound on F that yields a lower bound on the significance level.

Tests of between-subjects effects

Source            Dependent variable   Type III sum of squares   df   Mean square   F        Sig.   Partial eta squared
Corrected model   AVOID                9620.404(a)               4    2405.101      25.516   .000   .785
                  NEGEVAL              9648.883(b)               4    2412.221      10.658   .000   .604
Intercept         AVOID                321.661                   1    321.661       3.413    .075   .109
                  NEGEVAL              1479.664                  1    1479.664      6.538    .016   .189
PREAVOID          AVOID                3402.401                  1    3402.401      36.097   .000   .563
                  NEGEVAL              262.041                   1    262.041       1.158    .291   .040
PRENEG            AVOID                600.646                   1    600.646       6.372    .018   .185
                  NEGEVAL              1215.510                  1    1215.510      5.371    .028   .161
GPID              AVOID                1365.612                  2    682.806       7.244    .003   .341
                  NEGEVAL              4088.115                  2    2044.057      9.032    .001   .392
Error             AVOID                2639.232                  28   94.258
                  NEGEVAL              6336.995                  28   226.321
Total             AVOID                474588.000                33
                  NEGEVAL              425470.000                33
Corrected Total   AVOID                12259.636                 32
                  NEGEVAL              15985.879                 32

a R Squared = .785 (Adjusted R Squared = .754)
b R Squared = .604 (Adjusted R Squared = .547)

Having found sufficient support for using MANCOVA, we now focus on the primary
test of interest, which assesses whether or not there is a difference in adjusted means
for the set of outcomes. The multivariate F = 5.246 (p = .001), shown in the first output
selection of Table 8.9, indicates that the adjusted means differ in the population for
the set of outcomes, with η²multivariate = 1 − .519^(1/2) = .28. The univariate ANCOVAs in
the bottom part of Table 8.9 suggest that the adjusted means differ across groups for
AVOID, F = 7.24, p = .003, with η² = 1365.61 / 12259.64 = .11, and for NEGEVAL,
F = 9.02, p = .001, with η² = 4088.12 / 15985.88 = .26. Note that we ignore the partial
eta squares that are in the table.
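These effect sizes can be verified by quick arithmetic from the Wilks' Λ and sums of squares reported in Table 8.9:

```python
# Multivariate eta squared from Wilks' lambda (r = 2 here)
eta2_mult = 1 - 0.519 ** (1 / 2)

# Univariate eta squared: SS_effect / SS_total, from Table 8.9
eta2_avoid = 1365.612 / 12259.636
eta2_negeval = 4088.115 / 15985.879

print(round(eta2_mult, 2), round(eta2_avoid, 2), round(eta2_negeval, 2))
# prints 0.28 0.11 0.26
```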

Since group differences on the adjusted means are present for both outcomes, we
consider the adjusted means and associated pairwise comparisons for each outcome,
which are shown in Table 8.10. Considering the social skills measure first (AVOID),
examining the adjusted means indicates that the combination treatment (Group 3)
has the greatest mean social skills, after adjusting for the covariates, compared to the
other groups, and that the control group (Group 2) has the lowest social skills. The
results of the pairwise comparisons, using a Bonferroni-adjusted alpha (i.e., .05 / 3),
indicate that the two treatment groups (Groups 1 and 3) have similar adjusted mean
social skills and that each of the treatment groups has greater adjusted mean social
skills than the control group. Thus, for this outcome, behavioral rehearsal seems to


be an effective way to improve social skills, but the addition of cognitive restructuring does not seem to further improve these skills. The d effect size measure, using
MSW of 280.067 (MSW^1/2 = 16.74) with no covariates in the analysis model, is 0.27
for Group 3 versus Group 1, 0.68 for Group 1 versus Group 2, and 0.95 for Group
3 versus Group 2.
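These d values follow directly from the adjusted AVOID means reported in Table 8.10 and MSW = 280.067 (a quick arithmetic check):

```python
ms_within = 280.067        # MSW from one-way ANOVA on AVOID, no covariates
sd = ms_within ** 0.5      # about 16.74

adj = {1: 120.631, 2: 109.250, 3: 125.210}  # adjusted AVOID means, Table 8.10
d_3_vs_1 = (adj[3] - adj[1]) / sd
d_1_vs_2 = (adj[1] - adj[2]) / sd
d_3_vs_2 = (adj[3] - adj[2]) / sd
print(round(d_3_vs_1, 2), round(d_1_vs_2, 2), round(d_3_vs_2, 2))
# prints 0.27 0.68 0.95
```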

For the anxiety outcome (NEGEVAL), where higher scores indicate less anxiety,
inspecting the adjusted means at the top part of Table 8.10 suggests a similar pattern. However, the error variance is much greater for this outcome, as evidenced
by the larger standard errors shown in Table 8.10. As such, the only difference in
adjusted means present in the population for NEGEVAL is between Group 3 and
the control, where d = 29.045 / 16.83 = 1.73 (with MSW = 283.14). Here, then, the
behavioral rehearsal plus cognitive restructuring treatment shows promise, as this

Table 8.10:â•‡ Adjusted Means and Bonferroni-Adjusted Pairwise Comparisons

Estimates

95% Confidence interval

Dependent variable

GPID

Mean

AVOID

1.00

2.00

3.00

1.00

2.00

3.00

120.631

109.250a

125.210a

111.668a

96.734a

125.779a

NEGEVAL

a

Std. error

Lower bound

Upper bound

2.988

2.969

3.125

4.631

4.600

4.843

114.510

103.168

118.808

102.183

87.310

115.860

126.753

115.331

131.612

121.154

106.158

135.699

a

Covariates appearing in the model are evaluated at the following values: PREAVOIDÂ€=Â€106.6970,

PRENEG = 99.2121.

Pairwise comparisons

                                Mean                            95% Confidence interval
                                difference                      for differenceb
Dependent variable  (I) GPID  (J) GPID  (I-J)      Std. error  Sig.b  Lower bound  Upper bound
AVOID               1.00      2.00       11.382*   4.142       .031      .835       21.928
                              3.00       −4.578    4.474       .945   −15.970        6.813
                    2.00      1.00      −11.382*   4.142       .031   −21.928        −.835
                              3.00      −15.960*   4.434       .004   −27.252       −4.668
                    3.00      1.00        4.578    4.474       .945    −6.813       15.970
                              2.00       15.960*   4.434       .004     4.668       27.252

Chapter 8


Pairwise comparisons (continued)

                                Mean                            95% Confidence interval
                                difference                      for differenceb
Dependent variable  (I) GPID  (J) GPID  (I-J)      Std. error  Sig.b  Lower bound  Upper bound
NEGEVAL             1.00      2.00       14.934    6.418       .082    −1.408       31.277
                              3.00      −14.111    6.932       .154   −31.763        3.541
                    2.00      1.00      −14.934    6.418       .082   −31.277        1.408
                              3.00      −29.045*   6.871       .001   −46.543      −11.548
                    3.00      1.00       14.111    6.932       .154    −3.541       31.763
                              2.00       29.045*   6.871       .001    11.548       46.543

Based on estimated marginal means
* The mean difference is significant at the .050 level.
b Adjustment for multiple comparisons: Bonferroni.

group had much less mean anxiety, after adjusting for the covariates, than the control group.

Can we have confidence in the reliability of the adjusted means for this study? Huitema's inequality suggests we should be somewhat cautious, because the inequality suggests we should just use one covariate, as the ratio (C + (J − 1)) / N in this example is (2 + (3 − 1)) / 33 = .12, which is larger than the recommended value of .10. Thus, replication of this study using a larger sample size would provide for more confidence in the results.
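The ratio check is simple to script. A minimal sketch (the function name is ours; the values are those of the example):

```python
def covariate_ratio(n_covariates, n_groups, total_n):
    """Huitema's rule of thumb: (C + (J - 1)) / N should stay below .10
    for stable estimates of the adjusted means."""
    return (n_covariates + (n_groups - 1)) / total_n

# Two covariates, three groups, N = 33 -> .12, above the .10 guideline.
ratio = covariate_ratio(2, 3, 33)

# Dropping to one covariate brings the ratio under .10 for this N.
ratio_one = covariate_ratio(1, 3, 33)
```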

8.13 NOTE ON POST HOC PROCEDURES

Note that in previous editions of this text, the Bryant-Paulson (1976) procedure was

used to conduct inferences for pairwise differences among groups in MANCOVA (or

ANCOVA). This procedure was used, instead of the Tukey (or Tukey–Kramer) procedure because the covariate(s) used in social science research are essentially always

random, and it was thought to be important that this information be incorporated into

the post hoc procedures, which the Tukey procedure does not. Huitema (2011), however, notes that Hochberg and Varon-Salomon (1984) found that the Tukey procedure

adequately controls for the inflation of the family-wise type I error rate for pairwise comparisons when a covariate is random and has greater power (and provides narrower intervals) than other methods. As such, Huitema (2011, chaps. 9–10) recommends use of the Tukey procedure to obtain simultaneous confidence intervals for pairwise


comparisons. However, at present, SPSS does not incorporate this procedure for

MANCOVA (or ANCOVA). Readers interested in using the Tukey procedure may

consult Huitema. We used the Bonferroni procedure because it can be readily obtained

with SAS and SPSS, but note that this procedure is somewhat less powerful than the

Tukey approach.

8.14 NOTE ON THE USE OF MVMM

An alternative to traditional MANCOVA is available with multivariate multilevel

modeling (MVMM; see Chapter 14). In addition to the advantages associated with

MVMM discussed there, MVMM also allows for different covariates to be used for

each outcome. The more traditional general linear model (GLM) procedure, as implemented in this chapter with SPSS and SAS, requires that any covariate that appears in

the model be included as an explanatory variable for every dependent variable, even

if a given covariate were not related to a given outcome. Thus, MVMM, in addition

to other benefits, allows for more flexible use of covariates for multiple analysis of

covariance models.

8.15 EXAMPLE RESULTS SECTION FOR MANCOVA

For the example results section, we use the study discussed in Example 8.4.

The goal of this study was to determine whether female college freshmen randomly

assigned to either behavioral rehearsal or behavioral rehearsal plus cognitive restructuring (called combined treatment) have better social skills and reduced anxiety after

treatment compared to participants in a control condition. A one-way multivariate

analysis of covariance (MANCOVA) was conducted with two dependent variables,

social skills and anxiety, where higher scores on these variables reflect greater social

skills and less anxiety. Given the small group size available (n = 11), we administered pretest measures of each outcome, which we call pre-skills and pre-anxiety, to

allow for greater power in the analysis. Each participant reported complete data for

all measures.

Prior to conducting MANCOVA, the data were examined for univariate and multivariate outliers, with no such observations found. We also assessed whether the MANCOVA

assumptions seemed tenable. First, tests of the homogeneity of regression assumption

indicated that there was no interaction between treatment and pre-skills, Λ = .954, F(4, 46) = .277, p = .892, and between treatment and pre-anxiety, Λ = .954, F(4, 46) = .275, p = .892, for any outcome. In addition, no violation of the variance-covariance matrices assumption was indicated (Box's M = 6.689, p = .418), and the variance of the residuals was not different across groups for social skills, Levene's F(2, 30) = 1.184, p = .320, and anxiety, F(2, 30) = 1.620, p = .215. Further,

there were no substantial departures from normality, as suggested by inspection of


histograms of the residuals for each group and that all values for skewness and kurtosis

of the residuals were smaller than |1.5|. Further, examining scatterplots suggested that

each covariate is positively and linearly related to each of the outcome variables. Test

results from the MANCOVA indicated that pre-skills is related to the set of outcomes,

Λ = .433, F(2, 27) = 17.66, p < .001, as is pre-anxiety, Λ = .755, F(2, 27) = 4.38, p = .023. Finally, we did not consider there to be any violations of the independence

assumption because the treatments were individually administered and participants

responded to the measures on an individual basis.

Table 1 displays the group means, which show that participants in the combined treatment had greater posttest mean scores for social skills and anxiety (less anxiety) than those in the other groups, and performance in the control condition was worst. Note that while sample pretest means differ somewhat, use of covariance analysis provides proper adjustments for these preexisting differences, with these adjusted means shown in Table 1. MANCOVA results indicated that the adjusted group means differ on the set of outcomes, Λ = .519, F(4, 54) = 5.23, p = .001. Univariate ANCOVAs indicated that group adjusted mean differences are present for social skills, F(2, 28) = 7.24, p = .003, and anxiety, F(2, 28) = 9.03, p = .001.

Table 1: Observed (SD) and Adjusted Means for the Analysis Variables (n = 11)

Group                 Pre-skills    Social skills  Social skills1  Pre-anxiety   Anxiety       Anxiety1
Combined              113.6 (18.7)  132.3 (16.2)   125.2           108.7 (16.6)  131.0 (15.1)  125.8
Behavioral Rehearsal  103.2 (20.2)  116.9 (17.2)   120.6            93.9 (16.0)  108.8 (22.2)  111.7
Control               103.3 (17.3)  105.9 (16.8)   109.3            95.0 (15.3)   94.4 (11.1)   96.7

1 This column shows the adjusted group means.

Table 2 presents information on the pairwise contrasts. Comparisons of adjusted means were conducted using the Bonferroni approach to provide type I error control for the number of pairwise comparisons. Table 2 shows that adjusted mean

social skills are greater in the combined treatment and behavioral rehearsal group

compared to the control group. The contrast between the two intervention groups

is not statistically significant. For social skills, Cohen’s d values indicate the presence of fairly large effects associated with the interventions, relative to the control

group.

For anxiety, the only difference in adjusted means present in the population is between

the combined treatment and control condition. Cohen’s d for this contrast indicates

that this mean difference is quite large relative to the other effects in this study.


Table 2: Pairwise Contrasts for the Adjusted Means

Outcome        Contrast                           Contrast (SE)  Cohen's d
Social skills  Combined vs. control               15.96* (4.43)  0.95
               Behavioral rehearsal vs. control   11.38* (4.14)  0.68
               Combined vs. behavioral rehearsal   4.58 (4.47)   0.27
Anxiety        Combined vs. control               29.05* (6.87)  1.73
               Behavioral rehearsal vs. control   14.93 (6.42)   0.89
               Combined vs. behavioral rehearsal  14.11 (6.93)   0.84

Note: * indicates a statistically significant contrast (p < .05) using the Bonferroni procedure.

8.16 SUMMARY

The numbered list below highlights the main points of the chapter.

1. In analysis of covariance a linear relationship is assumed between the dependent

variable(s) and the covariate(s).

2. Analysis of covariance is directly related to the two basic objectives in experimental design of (1) eliminating systematic bias and (2) reducing error variance. Although ANCOVA does not eliminate bias, it can reduce bias. This can be

helpful in nonexperimental studies comparing intact groups. The bias is reduced

by adjusting the posttest means to what they would be if all groups had started out

equally on the covariate(s), that is, at the grand mean(s). There is disagreement

among statisticians about the use of ANCOVA with intact groups, and several

precautions were mentioned in section 8.6.

3. The main reason for using ANCOVA in an experimental study (random assignment

of participants to groups) is to reduce error variance, yielding a more powerful test

of group differences. When using several covariates, greater error reduction may

occur when the covariates have low intercorrelations among themselves.

4. Limit the number of covariates (C) so that

(C + (J − 1)) / N < .10,

where J is the number of groups and N is total sample size, so that stable estimates of the adjusted means are obtained.

5. In examining output from the statistical packages, make two checks to determine

whether MANCOVA is appropriate: (1) Check that there is a significant relationship between the dependent variables and the covariates, and (2) check that the

homogeneity of the regression hyperplanes assumption is tenable. If either of

these is not satisfied, then MANCOVA is not appropriate. In particular, if (2) is not

satisfied, then the Johnson–Neyman technique may provide for a better analysis.

6. Measurement error for covariates causes loss of power in randomized designs, and

can lead to seriously biased treatment effects in nonrandomized designs. Thus, if


one has a covariate of low or questionable reliability, then true score ANCOVA

should be considered.

7. With three or more groups, use the Tukey or Bonferroni procedure to obtain confidence intervals for pairwise differences.

8.17 ANALYSIS SUMMARY

The key analysis procedures for one-way MANCOVA are:

I. Preliminary Analysis

A. Conduct an initial screening of the data.

1) Purpose: Determine if the summary measures seem reasonable and support the use of MANCOVA. Also, identify the presence and pattern (if any) of missing data.

2) Procedure: Compute various descriptive measures for each group (e.g.,

means, standard deviations, medians, skewness, kurtosis, frequencies) for

the covariate(s) and dependent variables. If there is missing data, conduct

missing data analysis.

B. Conduct a case analysis.

1) Purpose: Identify any problematic individual observations that may

change important study results.

2) Procedure:

i) Inspect bivariate scatterplots of each covariate and outcome for

each group to identify apparent outliers. Compute and inspect

within-group Mahalanobis distances for the covariate(s) and outcome(s) and within-group z-scores for each variable. From the final

analysis model, obtain standardized residuals. Note that absolute values larger than 2.5 or 3 for these residuals indicate outlying values.

ii) If any potential outliers are identified, consider doing a sensitivity

study to determine the impact of one or more outliers on major study

results.

C. Assess the validity of the statistical assumptions.

1) Purpose: Determine if the standard MANCOVA procedure is valid for the

analysis of the data.

2) Some procedures:

i) Homogeneity of regression: Test treatment-covariate interactions.

A nonsignificant test result supports the use of MANCOVA.

ii) Linearity of regression: Inspect the scatterplot of each covariate and

each outcome within each group to assess linearity. If the association

appears to be linear, test the association between the covariate(s) and

the set of outcomes to assess if the covariates should be included in

the final analysis model.

iii) Independence assumption: Consider study circumstances to identify

possible violations.


iv) Equality of covariance matrices assumption of the residuals: Levene’s test can be used to identify if the residual variation is the same

across groups for each outcome. Note that Box’s M test assesses if

the covariance matrices of outcome scores (not residuals) are equal

across groups.

v) Multivariate normality: Inspect the distribution of the residuals for

each group. Compute within-group skewness and kurtosis values,

with values exceeding |2| indicative of nonnormality.

vi) Each covariate is measured with perfect reliability: Report a measure

of reliability for the covariate scores (e.g., Cronbach’s alpha). Consider using an alternate technique (e.g., structural equation modeling)

when low reliability is combined with a decision to retain the null

hypothesis of no treatment effects.

3) Decision/action: Continue with the standard MANCOVA when there is

(a) no evidence of violations of any assumptions or (b) there is evidence

of a specific violation but the technique is known to be robust to an existing violation. If the technique is not robust to an existing violation, use an

alternative analysis technique.

II. Primary Analysis

A. Test the overall multivariate null hypothesis of no difference in adjusted

means for the set of outcomes.

1) Purpose: Provide “protected testing” to help control the inflation of the

overall type I error rate.

2) Procedure: Examine the results of the Wilks’ lambda test associated with

the treatment.

3) Decision/action: If the p-value associated with this test is sufficiently

small, continue with further testing as described later. If the p-value is not

small, do not continue with any further testing.

B. If the multivariate null hypothesis has been rejected, test for group differences

on each dependent variable.

1) Purpose: Describe the adjusted mean outcome differences among the

groups for each of the dependent variables.

2) Procedures:

i) Test the overall ANCOVA null hypothesis for each dependent variable using a conventional alpha (e.g., .05) that provides for greater

power when the number of outcomes is relatively small (i.e., two or

three) or with a Bonferroni adjustment for a larger number of outcomes or whenever there is great concern about committing type I errors.

ii) For each dependent variable for which the overall univariate null

hypothesis is rejected, follow up (if more than two groups are

present) with tests and interval estimates for all pairwise contrasts

using a Bonferroni adjustment for the number of pairwise comparisons.
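The Bonferroni adjustment described above amounts to dividing the familywise alpha by the number of pairwise comparisons. A minimal sketch (function names ours):

```python
def n_pairwise(n_groups):
    """Number of pairwise comparisons among J groups: J(J - 1)/2."""
    return n_groups * (n_groups - 1) // 2

def bonferroni_alpha(familywise_alpha, n_groups):
    """Per-comparison alpha under a Bonferroni adjustment."""
    return familywise_alpha / n_pairwise(n_groups)

# Three groups -> three pairwise comparisons, each tested at .05/3 ~ .0167.
per_test = bonferroni_alpha(0.05, 3)
```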


C. Report and interpret at least one of the following effect size measures.

1) Purpose: Indicate the strength of the relationship between the dependent

variable(s) and the factor (i.e., group membership).

2) Procedure: Adjusted means and their differences should be reported. Other possibilities include (a) the proportion of generalized total variation

explained by group membership for the set of dependent variables (multivariate eta square), (b) the proportion of variation explained by group

membership for each dependent variable (univariate eta square), and/or

(c) Cohen’s d for two-group contrasts.

8.18 EXERCISES

1. Consider the following data from a two-group MANCOVA with two dependent

variables (Y1 and Y2) and one covariate (X):

GPID   X      Y1     Y2
1.00   12.00  13.00   3.00
1.00   10.00   6.00   5.00
1.00   11.00  17.00   2.00
1.00   14.00  14.00   8.00
1.00   13.00  12.00   6.00
1.00   10.00   6.00   8.00
1.00    8.00  12.00   3.00
1.00    8.00   6.00  12.00
1.00   12.00  12.00   7.00
1.00   10.00  12.00   8.00
1.00   12.00  13.00   2.00
1.00    7.00  14.00  10.00
1.00   12.00  16.00   1.00
1.00    9.00   9.00   2.00
1.00   12.00  14.00  10.00
2.00    9.00  10.00   6.00
2.00   16.00  16.00   8.00
2.00   11.00  17.00   8.00
2.00    8.00  16.00  21.00
2.00   10.00  14.00  15.00
2.00    7.00  18.00  12.00
2.00   16.00  20.00   7.00
2.00    9.00  12.00   9.00
2.00   10.00  11.00   7.00
2.00    8.00  13.00   4.00
2.00   16.00  19.00   6.00
2.00   12.00  15.00  20.00
2.00   15.00  17.00   7.00
2.00   12.00  21.00  14.00


Run a MANCOVA using SAS or SPSS.

(a) Is MANCOVA appropriate? Explain.

(b) If MANCOVA is appropriate, then are the adjusted mean vectors significantly different at the .05 level?

(c) Are adjusted group mean differences present for both variables?

(d) What are the adjusted means? Which group has better performance?

(e) Compute Cohen’s d for the two contrasts (which requires a MANOVA to

obtain the relevant MSW for each outcome).

2. Consider a three-group study (randomized) with 24 participants per group. The

correlation between the covariate and the dependent variable is .25, which is

statistically significant at the .05 level. Is ANCOVA going to be very useful in

this study? Explain.

3. Suppose we were comparing two different teaching methods and that the

covariate was IQ. The homogeneity of regression slopes is tested and rejected,

implying a covariate-by-treatment interaction. Relate this to what we would

have found had we blocked (or formed groups) on IQ and run a factorial

ANOVA (IQ by methods) on achievement.

4. In this example, three tasks were employed to ascertain differences between

good and poor undergraduate writers on recall and manipulation of information: an ordered letters task, an iconic memory task, and a letter reordering

task. In the following table are means and standard deviations for the percentage of correct letters recalled on the three dependent variables. There were 15

participants in each group.

                   Good writers     Poor writers
Task               M       SD       M       SD
Ordered letters    57.79   12.96    49.71   21.79
Iconic memory      49.78   14.59    45.63   13.09
Letter reordering  71.00    4.80    63.18    7.03

Consider this results section:

The data were analyzed via a multivariate analysis of covariance using the

background variables (English usage ACT subtest, composite ACT, and grade

point average) as covariates, writing ability as the independent variable, and

task scores (correct recall in the ordered letters task, correct recall in the iconic

memory task, and correct recall in the letter reordering task) as the dependent

variables. The global test was significant, F(3, 23) = 5.43, p < .001. To control the experiment-wise type I error rate at .05, each of the three univariate analyses


was conducted at a per comparison rate of .017. No significant difference was

observed between groups on the ordered letters task, univariate F(1, 25) = 1.92,

p > .10. Similarly, no significant difference was observed between groups on

the iconic memory task, univariate F < 1. However, good writers obtained significantly higher scores on the letter reordering task than the poor writers,

univariate F(1, 25) = 15.02, p < .001.

(a) From what was said here, can we be confident that covariance is appropriate here?

(b) The "global" multivariate test referred to is not identified as to whether it is Wilks' Λ, Roy's largest root, and so on. Would it make a difference as to which multivariate test was employed in this case?

(c) The results mention controlling the experiment-wise error rate at .05 by conducting each test at the .017 level of significance. Which post hoc procedure is being used here?

(d) Is there a sufficient number of participants for us to have confidence in the reliability of the adjusted means?

5. What is the main reason for using covariance analysis in a randomized

study?

REFERENCES

Anderson, N. H. (1963). Comparison of different populations: Resistance to extinction and transfer. Psychological Bulletin, 70, 162–179.

Bryant, J. L., & Paulson, A. S. (1976). An extension of Tukey's method of multiple comparisons to experimental design with random concomitant variables. Biometrika, 63(3), 631–638.

Bryk, A. D., & Weisberg, H. I. (1977). Use of the nonequivalent control group design when subjects are growing. Psychological Bulletin, 85, 950–962.

Cochran, W. G. (1957). Analysis of covariance: Its nature and uses. Biometrics, 13, 261–281.

Elashoff, J. D. (1969). Analysis of covariance: A delicate instrument. American Educational Research Journal, 6, 383–401.

Finn, J. (1974). A general model for multivariate analysis. New York, NY: Holt, Rinehart & Winston.

Hochberg, Y., & Varon-Salomon, Y. (1984). On simultaneous pairwise comparisons in analysis of covariance. Journal of the American Statistical Association, 79, 863–866.

Huck, S., Cormier, W., & Bounds, W. (1974). Reading statistics and research. New York, NY: Harper & Row.

Huck, S., & McLean, R. (1975). Using a repeated measures ANOVA to analyze the data from a pretest–posttest design: A potentially confusing task. Psychological Bulletin, 82, 511–518.

Huitema, B. E. (2011). The analysis of covariance and alternatives: Statistical methods for experiments (2nd ed.). Hoboken, NJ: Wiley.


Jennings, E. (1988). Models for pretest-posttest data: Repeated measures ANOVA revisited. Journal of Educational Statistics, 13, 273–280.

Lord, F. (1969). Statistical adjustments when comparing pre-existing groups. Psychological Bulletin, 70, 162–179.

Maxwell, S., Delaney, H. D., & Manheimer, J. (1985). ANOVA of residuals and ANCOVA: Correcting an illusion by using model comparisons and graphs. Journal of Educational Statistics, 95, 136–147.

Novince, L. (1977). The contribution of cognitive restructuring to the effectiveness of behavior rehearsal in modifying social inhibition in females. Unpublished doctoral dissertation, University of Cincinnati, OH.

Olejnik, S., & Algina, J. (2000). Measures of effect size for comparative studies: Applications, interpretations, and limitations. Contemporary Educational Psychology, 25, 241–286.

Pedhazur, E. (1982). Multiple regression in behavioral research (2nd ed.). New York, NY: Holt, Rinehart & Winston.

Reichardt, C. (1979). The statistical analysis of data from nonequivalent group designs. In T. Cook & D. Campbell (Eds.), Quasi-experimentation: Design and analysis issues for field settings (pp. 147–206). Chicago, IL: Rand McNally.

Rogosa, D. (1977). Some results for the Johnson-Neyman technique. Unpublished doctoral dissertation, Stanford University, CA.

Rogosa, D. (1980). Comparing non-parallel regression lines. Psychological Bulletin, 88, 307–321.

Thorndike, R., & Hagen, E. (1977). Measurement and evaluation in psychology and education. New York, NY: Wiley.

Chapter 9

EXPLORATORY FACTOR

ANALYSIS

9.1 INTRODUCTION

Consider the following two common classes of research situations:

1. Exploratory regression analysis: An experimenter has gathered a moderate to large

number of predictors (say 15 to 40) to predict some dependent variable.

2. Scale development: An investigator has assembled a set of items (say 20 to 50)

designed to measure some construct(s) (e.g., attitude toward education, anxiety,

sociability). Here we think of the items as the variables.

In both of these situations the number of simple correlations among the variables is

very large, and it is quite difficult to summarize by inspection precisely what the pattern of correlations represents. For example, with 30 items, there are 435 simple correlations. Some way is needed to determine if there is a small number of underlying

constructs that might account for the main sources of variation in such a complex set

of correlations.
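The count of simple correlations follows directly from p(p − 1)/2; a one-line check in Python:

```python
def n_correlations(p):
    """Number of distinct simple correlations among p variables."""
    return p * (p - 1) // 2

print(n_correlations(30))  # 435 correlations for 30 items
```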

Furthermore, if there are 30 items, we are undoubtedly not measuring 30 different constructs; hence, it makes sense to use a variable reduction procedure that will indicate

how the variables cluster or hang together. Now, if sample size is not large enough

(how large N needs to be is discussed in section 9.6), then we need to resort to a logical

clustering (grouping) based on theoretical or substantive grounds. On the other hand,

with adequate sample size an empirical approach is preferable. Two basic empirical

approaches are (1) principal components analysis for variable reduction, and (2) factor

analysis for identifying underlying factors or constructs. In both approaches, the basic

idea is to find a smaller number of entities (components or factors) that account for

most of the variation or the pattern of correlations. In factor analysis a mathematical

model is set up and factor scores may be estimated, whereas in components analysis

we are simply transforming the original variables into a new set of linear combinations

(the principal components).


In this edition of the text, we focus this chapter on exploratory factor analysis (not principal components analysis) because researchers in psychology, education, and the social

sciences in general are much more likely to use exploratory factor analysis, particularly

as it is used to develop and help validate measuring instruments. We do, though, begin the

chapter with the principal components method. This method has been commonly used

to extract factors in factor analysis (and remains the default method in SPSS and SAS).

Even when different extraction methods, such as principal axis factoring, are used in

factor analysis, the principal components method is often used in the initial stages of

exploratory factor analysis. Thus, having an initial exposure to principal components

will allow you to make an easy transition to principal axis factoring, which is presented

later in the chapter, and will also allow you to readily see some underlying differences

between these two procedures. Note that confirmatory factor analysis, covered in this

chapter in previous editions of the text, is now covered in Chapter 16.

9.2 THE PRINCIPAL COMPONENTS METHOD

If we have a single group of participants measured on a set of variables, then principal

components partitions the total variance (i.e., the sum of the variances for the original

variables) by first finding the linear combination of the variables that accounts for the

maximum amount of variance:

y1 = a11x1 + a12x2 + ... + a1pxp,

where y1 is called the first principal component, and if the coefficients are scaled such that a1′a1 = 1 [where a1′ = (a11, a12, . . ., a1p)], then the variance of y1 is equal to the largest eigenvalue of the sample covariance matrix (Morrison, 1967, p. 224). The coefficients of the principal component are the elements of the eigenvector corresponding to the largest eigenvalue.

Then the procedure finds a second linear combination, uncorrelated with the first component, such that it accounts for the next largest amount of variance (after the variance

attributable to the first component has been removed) in the system. This second component, y2,Â€is

y2 = a21x1 + a22x2 + ... + a2pxp,

and the coefficients are scaled so that a2′a2 = 1, as for the first component. The fact that

the two components are constructed to be uncorrelated means that the Pearson correlation between y1 and y2 is 0. The coefficients of the second component are the elements

of the eigenvector associated with the second largest eigenvalue of the covariance

matrix, and the sample variance of y2 is equal to the second largest eigenvalue.

The third principal component is constructed to be uncorrelated with the first two, and

accounts for the third largest amount of variance in the system, and so on. The principal


components method is therefore still another example of a mathematical maximization

procedure, where each successive component accounts for the maximum amount of

the variance in the original variables that is left.
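The maximization just described can be mimicked numerically with an eigendecomposition of the sample covariance matrix. A NumPy-based sketch using synthetic data (the data and variable names are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic correlated data: 200 cases on 4 variables.
X = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))

S = np.cov(X, rowvar=False)              # sample covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(S)
order = np.argsort(eigenvalues)[::-1]    # sort largest eigenvalue first
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]

# Component scores: linear combinations of the (centered) variables.
components = (X - X.mean(axis=0)) @ eigenvectors

# The variance of each component equals its eigenvalue, and the
# components together partition the total variance of the variables.
component_variances = np.var(components, axis=0, ddof=1)
```

Here the first column of `eigenvectors` holds the coefficients of the first component (scaled so a1′a1 = 1), and distinct components are uncorrelated.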

Thus, through the use of principal components, a set of correlated variables is transformed into a set of uncorrelated variables (the components). The goal of such an analysis is to obtain a relatively small number of components that account for a significant

proportion of variance in the original set of variables. When this method is used to

extract factors in factor analysis, you may also wish to make sense of or interpret the

factors.

The factors are interpreted by using coefficients that describe the association between a

given factor and observed variable (called factor or component loadings) that are sufficiently large in absolute magnitude. For example, if the first factor loaded high and

positive on variables 1, 3, 5, and 6, then we could interpret that factor by attempting

to determine what those four variables have in common. The analysis procedure has

empirically clustered the four variables, and the psychologist may then wish to give a

name to the factor to make sense of the composite variable.

In the preceding example we assumed that the loadings were all in the same direction

(all positive for a given component). Of course, it is possible to have a mixture of high

positive and negative loadings on a particular component. In this case we have what

is called a bipolar component. For example, in factor analyses of IQ tests, the second

factor may be bipolar contrasting verbal abilities against spatial-perceptual abilities.

Social science researchers often extract factors from a correlation matrix. The reason

for this standardization is that scales for tests used in educational, sociological, and

psychological research are usually arbitrary. If, however, the scales are reasonably

commensurable, performing a factor analysis on the covariance matrix is preferable

for statistical reasons (Morrison, 1967, p. 222). The components obtained from the

correlation and covariance matrices are, in general, not the same. The option of doing

factor analysis on either the correlation or covariance matrix is available on SAS and

SPSS. Note though that it is common practice to conduct factor analysis using a correlation matrix, which software programs will compute behind the scenes from raw data

prior to conducting the analysis.

A precaution that researchers contemplating a factor analysis with a small sample

size (certainly any N less than 100) should take, especially if most of the elements in

the sample correlation matrix are small (< |.30|), is to apply Bartlett’s sphericity test

(Cooley & Lohnes, 1971, p. 103). This procedure tests the null hypothesis that the variables in the population correlation matrix are uncorrelated. If one fails to reject with

this test, then there is no reason to do the factor analysis because we cannot conclude

that the variables are correlated. Logically speaking, if observed variables do not “hang

together,” an analysis that attempts to cluster variables based on their associations does

not make sense. The sphericity test is available on both the SAS and SPSS packages.
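One common large-sample form of Bartlett's sphericity statistic is χ² = −[n − 1 − (2p + 5)/6] ln|R| with p(p − 1)/2 degrees of freedom, where R is the sample correlation matrix. A sketch under that assumption (this is not the SPSS/SAS output itself, and the function name is ours):

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(R, n):
    """Test H0: the population correlation matrix is an identity
    matrix, i.e., the variables are uncorrelated."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, df)

# An identity correlation matrix gives chi2 = 0 and p = 1: no evidence
# the variables "hang together," so factoring is not warranted.
chi2_null, p_null = bartlett_sphericity(np.eye(3), n=50)

# A strongly correlated pair gives a tiny p-value.
R = np.array([[1.0, 0.6], [0.6, 1.0]])
chi2_corr, p_corr = bartlett_sphericity(R, n=100)
```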


Also, when using principal components extraction in factor analysis, the composite variables are sometimes referred to as components. However, since we are using principal

components simply as a factor extraction method, we will refer to the entities obtained

as factors. Note that when principal axis factoring is used, as it is later in the chapter, the

entities extracted in that procedure are, by convention, referred to as factors.

9.3 CRITERIA FOR DETERMINING HOW MANY FACTORS TO RETAIN USING PRINCIPAL COMPONENTS EXTRACTION

Perhaps the most difficult decision in factor analysis is to determine the number of factors that should be retained. When the principal components method is used to extract

factors, several methods can be used to decide how many factors to retain.

1. A widely used criterion is that of Kaiser (1960): Retain only those factors having eigenvalues greater than 1. Although using this rule generally will result in retention of only the most important factors, blind use could lead to retaining factors that may have no practical importance (in terms of percent of variance accounted for).

Studies by Cattell and Jaspers (1967), Browne (1968), and Linn (1968) evaluated the

accuracy of the eigenvalue > 1 criterion. In all three studies, the authors determined

how often the criterion would identify the correct number of factors from matrices

with a known number of factors. The number of variables in the studies ranged from

10 to 40. Generally, the criterion was accurate to fairly accurate, with gross overestimation occurring only with a large number of variables (40) and low communalities

(around .40). Note that the communality of a variable is the amount of variance for a

variable accounted for by the set of factors. The criterion is more accurate when the

number of variables is small (10 to 15) or moderate (20 to 30) and the communalities

are high (> .70). Subsequent studies (e.g., Zwick & Velicer, 1982, 1986) have shown that while use of this rule can lead to uncovering too many factors, it may also lead to identifying too few factors.

2. A graphical method called the scree test has been proposed by Cattell (1966). In

this method the magnitude of the eigenvalues (vertical axis) is plotted against their

ordinal numbers (whether it was the first eigenvalue, the second, etc.). Generally

what happens is that the magnitude of successive eigenvalues drops off sharply

(steep descent) and then tends to level off. The recommendation is to retain all

eigenvalues (and hence factors) in the sharp descent before the first one on the line

where they start to level off. This method will generally retain factors that account

for large or fairly large and distinct amounts of variances (e.g., 31%, 20%, 13%,

and 9%). However, blind use might lead to not retaining factors that, although they

account for a smaller amount of variance, might be meaningful. Several studies

(CattellÂ€& Jaspers, 1967; Hakstian, Rogers,Â€& Cattell, 1982; Tucker, Koopman,Â€&

Linn, 1969) support the general accuracy of the scree procedure. Hakstian etÂ€al.

note that for N > 250 and a mean communality > .60, either the Kaiser or scree


rules will yield an accurate estimate for the number of true factors. They add that such an estimate will be all the more credible if the Q / P ratio is < .30 (P is the number of variables and Q is the number of factors). With a mean communality of .30 or Q / P > .3, the Kaiser rule is less accurate and the scree rule much less accurate. A primary concern associated with the use of the scree plot is that it requires subjective judgment in determining the number of factors present, unlike the numerical criterion provided by Kaiser's rule and parallel analysis, discussed next.

3. A procedure that is becoming more widely used is parallel analysis (Horn, 1965),

where a “parallel” set of eigenvalues is created from random data and compared to

eigenvalues from the original data set. Specifically, a random data set having the

same number of cases and variables is generated by computer. Then, factor analysis

is applied to these data and eigenvalues are obtained. This process of generating random data and factor analyzing them is repeated many times. Traditionally, you then

compare the average of the “random” eigenvalues (for a given factor across these

replicated data sets) to the eigenvalue obtained from the original data set for the

corresponding factor. The rule for retaining factors is to retain a factor in the original

data set only if its eigenvalue is greater than the average eigenvalue for its random

counterpart. Alternatively, instead of using the average of the eigenvalues for a given

factor, the 95th percentile of these replicated values can be used as the comparison

value, which provides a somewhat more stringent test of factor importance.

Fabrigar and Wegener (2012, p. 60) note that while the performance of parallel analysis has not been investigated exhaustively, studies to date have shown it to perform fairly well in detecting the proper number of factors, although the procedure may suggest the presence of too many factors at times. Nevertheless, given that none of these methods performs perfectly under all conditions, the use of parallel analysis has been widely recommended for factor analysis. While not available at present in SAS or SPSS, parallel analysis can be implemented using syntax available at the website of the publisher of Fabrigar and Wegener's text. We illustrate the use of this procedure in section 9.12.

4. There is a statistical significance test for the number of factors to retain that was

developed by Lawley (1940). However, as with all statistical tests, it is influenced

by sample size, and large sample size may lead to the retention of too many factors.

5. Retain as many factors as will account for a specified amount of total variance. Generally, one would want to account for a large proportion of the total variance. In some

cases, the investigator may not be satisfied unless 80–85% of the variance is accounted

for. Extracting factors using this method, though, may lead to the retention of factors

that are essentially variable specific, that is, load highly on only a single variable,

which is not desirable in factor analysis. Note also that in some applications, the actual

amount of variance accounted for by meaningful factors may be 50% or lower.

6. Factor meaningfulness is an important consideration in deciding on the number of factors that should be retained in the model. The other criteria are generally mathematical, and their use may not always yield meaningful or interpretable factors. In exploratory factor analysis, your knowledge of the research area plays an important role in interpreting factors and deciding if a factor solution is worthwhile. Also, it is


not uncommon that use of the different methods shown will suggest different numbers of factors present. In this case, the meaningfulness of different factor solutions

takes precedence in deciding among solutions with empirical support.

So what criterion should be used in deciding how many factors to retain? Since the

methods look at the issue from different perspectives and have certain strengths and

limitations, multiple criteria should be used. Since the Kaiser criterion has been shown

to be reasonably accurate when the number of variables is < 30 and the communalities are > .70, or when N > 250 and the mean communality is > .60, we would use

it under these circumstances. For other situations, use of the scree test with an N >

200 will probably not lead us too far astray, provided that most of the communalities

are reasonably large. We also recommend general use of parallel analysis as it has

performed well in simulation studies. Note that these methods can be relied upon to a

lesser extent when researchers have some sense of the number of factors that may be

present. In addition, when the methods conflict in the number of factors that should be

retained, you can conduct multiple factor analyses directing your software program to

retain different numbers of factors. Given that the goal is to arrive at a coherent final

model, the solution that seems most interpretable (most meaningful) can be reported.
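Although the chapter implements parallel analysis via published SPSS/SAS syntax in section 9.12, the logic is easy to sketch in code. The following Python/numpy sketch (an illustration under our own assumptions, not the syntax the text refers to) generates the replicated random data sets and records the mean and 95th-percentile eigenvalues against which the observed eigenvalues would be compared:

```python
import numpy as np

rng = np.random.default_rng(12345)

def parallel_analysis(n_cases, n_vars, n_reps=100, percentile=95):
    """Eigenvalues expected when factoring purely random (uncorrelated) data."""
    eigs = np.empty((n_reps, n_vars))
    for i in range(n_reps):
        X = rng.standard_normal((n_cases, n_vars))      # one random data set
        R = np.corrcoef(X, rowvar=False)                # its correlation matrix
        eigs[i] = np.sort(np.linalg.eigvalsh(R))[::-1]  # eigenvalues, descending
    return eigs.mean(axis=0), np.percentile(eigs, percentile, axis=0)

# For, say, 300 cases and 6 variables: retain factor j only when the observed
# eigenvalue exceeds mean_eigs[j] (or, more stringently, p95_eigs[j]).
mean_eigs, p95_eigs = parallel_analysis(n_cases=300, n_vars=6)
```

Because sampling error alone pushes the first eigenvalues of random data above 1, this comparison is typically more conservative than Kaiser's rule for the leading factors.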

9.4 INCREASING INTERPRETABILITY OF FACTORS BY ROTATION

Although a few factors may, as desired, account for most of the variance in a large set

of variables, often the factors are not easily interpretable. The factors are derived not to

provide interpretability but to maximize variance accounted for. Transformation of the

factors, typically referred to as rotation, often provides for much improved interpretability. Also, as noted by Fabrigar and Wegener (2012, p.Â€79), it is important to know

that rotation does not change key statistics associated with model fit, including (1) the

total amount (and proportion) of variance explained by all factors and (2) the values of

the communalities. In other words, unrotated and rotated factor solutions have the same

mathematical fit to the data, regardless of rotation method used. As such, it makes sense

to use analysis results based on those factor loadings that facilitate factor interpretation.

Two major classes of rotations are available:

1. Orthogonal (rigid) rotations—Here the new factors obtained by rotation are still

uncorrelated, as were the initially obtained factors.

2. Oblique rotations—Here the new factors are allowed to be correlated.

9.4.1 Orthogonal Rotations

We discuss two such rotations:

1. Quartimax—Here the idea is to clean up the variables. That is, the rotation is

done so that each variable loads mainly on one factor. Then that variable can be


considered to be a relatively pure measure of the factor. The problem with this

approach is that most of the variables tend to load on a single factor (producing the

“g factor” in analyses of IQ tests), making interpretation of the factor difficult.

2. Varimax—Kaiser (1960) took a different tack. He designed a rotation to clean

up the factors. That is, with his rotation, each factor has high correlations with a

smaller number of variables and low or very low correlations with the other variables. This will generally make interpretation of the resulting factors easier. The

varimax rotation is available in SPSS and SAS.

It should be mentioned that when rotation is done, the maximum variance property

of the originally obtained factors is destroyed. The rotation essentially reallocates the

loadings. Thus, the first rotated factor will no longer necessarily account for the maximum amount of variance. The amount of variance accounted for by each rotated factor

has to be recalculated.

9.4.2 Oblique Rotations

Numerous oblique rotations have been proposed: for example, oblimax, quartimin,

maxplane, orthoblique (Harris–Kaiser), promax, and oblimin. Promax and oblimin are

available on SPSS and SAS.

Many have argued that correlated factors are much more reasonable to assume in

most cases (Cliff, 1987; Fabrigar & Wegener, 2012; Pedhazur & Schmelkin, 1991; Preacher & MacCallum, 2003), and therefore oblique rotations are generally preferred.

The following from Pedhazur and Schmelkin (1991) is interesting:

From the perspective of construct validation, the decision whether to rotate factors orthogonally or obliquely reflects one’s conception regarding the structure of

the construct under consideration. It boils down to the question: Are aspects of a

postulated multidimensional construct intercorrelated? The answer to this question is relegated to the status of an assumption when an orthogonal rotation is employed. . . . The preferred course of action is, in our opinion, to rotate both orthogonally and obliquely. When, on the basis of the latter, it is concluded that the correlations among the factors are negligible, the interpretation of the simpler orthogonal solution becomes tenable. (p. 615)

You should know, though, that when using an oblique solution, interpretation of the

factors becomes somewhat more complicated, as the associations between variables

and factors are provided in two matrices:

1. Factor pattern matrix—The elements here, called pattern coefficients, are analogous to standardized partial regression coefficients from a multiple regression analysis. From a factor analysis perspective, a given coefficient indicates the unique

importance of a factor to a variable, holding constant the other factors in the model.

2. Factor structure matrix—The elements here, known as structure coefficients, are

the simple correlations of the variables with the factors.


For orthogonal rotations or completely orthogonal factors these two matrices are

identical.
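A useful way to see the connection between the two matrices: the structure matrix equals the pattern matrix post-multiplied by the factor correlation matrix, so the two coincide whenever the factors are uncorrelated. A small numeric sketch (hypothetical pattern coefficients and factor correlation; Python/numpy):

```python
import numpy as np

# Hypothetical pattern coefficients for four variables on two factors
P = np.array([[0.80, 0.05],
              [0.75, 0.10],
              [0.02, 0.70],
              [0.08, 0.65]])

# Hypothetical factor correlation matrix (factors correlate .40)
Phi = np.array([[1.00, 0.40],
                [0.40, 1.00]])

S = P @ Phi             # structure coefficients: correlations of variables with factors
S_orth = P @ np.eye(2)  # with uncorrelated factors (Phi = I), S equals P
```

With correlated factors, even a variable with a near-zero pattern coefficient on a factor can show a nontrivial structure coefficient, which is why the two matrices must be distinguished.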

9.5 WHAT COEFFICIENTS SHOULD BE USED FOR INTERPRETATION?

Two issues arise in deciding which coefficients are to be used to interpret factors.

The first issue has to do with the type of rotation used: orthogonal or oblique. When

an orthogonal rotation is used, interpretations are based on the structure coefficients

(as the structure and pattern coefficients are identical). When using an oblique rotation, as mentioned, two sets of coefficients are obtained. While it is reasonable to

examine structure coefficients, Fabrigar and Wegener (2012) argue that using pattern

coefficients is more consistent with the use of oblique rotation because the pattern

coefficients take into account the correlation between factors and are parameters of a

correlated-factor model, whereas the structure coefficients are not. As such, they state

that focusing exclusively on the structure coefficients in the presence of an oblique

rotation is “inherently inconsistent with the primary goals of oblique rotation” (p. 81).

Given that we have selected the type of coefficient (structure or pattern), the second

issue pertains to which observed variables should be used to interpret a given factor.

While there is no universal standard available to make this decision, the idea is to use

only those variables that have a strong association with the factor. A threshold value that can be used for a structure or pattern coefficient is one that is equal to or greater than a magnitude of .40. For structure coefficients, using a value of |.40| would imply that an observed variable shares more than 15% of its variance (.40² = .16) with the factor that it is going to be used to help name. Other threshold values that are used are .32 (because it corresponds to approximately 10% variance explained) and .50, a stricter standard corresponding to 25% variance explained. This more stringent value seems sensible

to use when sample size is relatively small and may also be used if it improves factor

interpretability. For pattern coefficients, although a given coefficient cannot be squared

to obtain the proportion of shared variance between an observed variable and factor,

these different threshold values are generally considered to represent a reasonably strong

association for standardized partial regression coefficients in general (e.g., Kline, 2005,

p. 122). To interpret what the variables with high loadings have in common, that is, to

name the component, a researcher with expertise in the content area is typically needed.
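Applying such a cutoff is mechanical; a minimal sketch with hypothetical coefficients (illustrative values only, not from the chapter's data):

```python
# Hypothetical structure coefficients for one factor (illustrative values only)
coefficients = {"item1": 0.55, "item2": 0.38, "item3": -0.47, "item4": 0.12}

threshold = 0.40  # the |.40| cutoff; .32 and .50 are common alternatives
salient = {item: c for item, c in coefficients.items() if abs(c) >= threshold}
shared_variance = {item: round(c ** 2, 4) for item, c in salient.items()}
# Only item1 and item3 pass the cutoff; squaring a structure coefficient gives
# the proportion of the variable's variance shared with the factor.
```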

Also, we should point out that standard errors associated with factor loadings are not

available for some commonly used factor analysis methods, including principal component and principal axis factoring. As such, statistical tests for the loadings are not

available with these methods. One exception involves the use of maximum likelihood

estimation to extract factors. This method, however, assumes the observed variables follow a multivariate normal distribution in the population; when that assumption is not met, a user must rely on the procedure being robust to the violation. We do not cover maximum likelihood factor extraction in this chapter, but interested readers can consult Fabrigar and

Wegener (2012), who recommend use of this procedure.


9.6 SAMPLE SIZE AND RELIABLE FACTORS

Various rules have been suggested in terms of the sample size required for reliable

factors. Many of the popular rules suggest that sample size be determined as a function

of the number of variables being analyzed, ranging anywhere from two participants

per variable to 20 participants per variable. Indeed, a previous edition of this text suggested a minimum of five participants per variable. However, a

Monte Carlo study by Guadagnoli and Velicer (1988) indicated, contrary to the popular

rules, that the most important factors are factor saturation (the absolute magnitude of

the loadings) and absolute sample size. Also, the number of variables per factor is somewhat important. Subsequent research (MacCallum, Widaman, Zhang, & Hong, 1999; MacCallum, Widaman, Preacher, & Hong, 2001; Velicer & Fava, 1998) has highlighted

the importance of communalities along with the number and size of loadings.

Fabrigar and Wegener (2012) discuss this research and minimal sample size requirements as related to communalities and the number of strong factor loadings. We summarize the minimal sample size requirements they suggest as follows:

1. When the average communality is .70 or greater, good estimates can be obtained

with sample sizes as low as 100 (and possibly lower) provided that there are at

least three substantial loadings per factor.

2. When communalities range from .40 to .70 and there are at least three strong loadings per factor, good estimates may be obtained with a sample size of about 200.

3. When communalities are small (< .40) and when there are only two substantial

loadings on some factors, sample sizes of 400 or greater may be needed.

These suggestions are useful in establishing at least some empirical basis, rather than

a seat-of-the-pants judgment, for assessing what factors we can have confidence in.

Note though that they cover only a certain set of situations, and it may be difficult in

the planning stages of a study to have a good idea of what communalities and loadings

may actually be obtained. If that is the case, Fabrigar and Wegener (2012) suggest

planning on “moderate” conditions to hold in your study, as described by the earlier

second point, which implies a minimal sample size of 200.

9.7 SOME SIMPLE FACTOR ANALYSES USING PRINCIPAL COMPONENTS EXTRACTION

We provide a simple hypothetical factor analysis example with a small number of

observed variables (items) to help you get a better handle on the basics of factor analysis. We use a small number of variables in this section to enable better understanding of

key concepts. Section 9.12 provides an example using real data where a larger number

of observed variables are involved. That section also includes extraction of factors

with principal axis factoring.

For the example in this section, we assume investigators are developing a scale to

measure the construct of meaningful professional work, have written six items related


to meaningful professional work, and would like to identify the number of constructs underlying these items. Further, we suppose that the researchers have carefully reviewed relevant literature and have decided that the concept of meaningful

professional work involves two interrelated constructs: work that one personally finds

engaging and work that one feels is valued by one’s workplace. As such, they have

written three items intended to reflect engagement with work and another three items

designed to reflect the idea of feeling valued in the workplace. The engagement items,

we suppose, ask workers to indicate the degree to which they find work stimulating

(item 1), challenging (item 2), and interesting (item 3). The “feeling valued” concept

is thought to be adequately indicated by responses to items asking workers to indicate

the degree to which they feel recognized for effort (item 4), appreciated for good work

(item 5), and fairly compensated (item 6). Responses to these six items—stimulate,

challenge, interest, recognize, appreciate, and compensate—have been collected, we suppose, from a sample of 300 employees who each provided responses for all six

items. Also, higher scores for each item reflect greater properties of the attribute being

measured (e.g., more stimulating, more challenging work, and so on).

9.7.1 Principal Component Extraction With Three Items

For instructional purposes, we initially use responses from just three items: stimulate,

challenge, and interest. The correlations between these items are shown in Table 9.1. Note

each correlation is positive and indicates a fairly strong relationship between variables.

Thus, the items seem to share something in common, lending support for a factor analysis.

In conducting this analysis, the researchers wish to answer the following research

questions:

1. How many factors account for meaningful variation in the item scores?

2. Which items are strongly related to any resultant factors?

3. What is the meaning of any resultant factor(s)?

To address the first research question about the number of factors that are present, we

apply multiple criteria including, for the time being, inspecting eigenvalues, examining

a scree plot, and considering the meaningfulness of any obtained factors. Note that we

add to this list the use of parallel analysis in section 9.12. An eigenvalue indicates the

strength of relationship between a given factor and the set of observed variables. As we

know, the strength of relationship between two variables is often summarized by a correlation coefficient, with values larger in magnitude reflecting a stronger association.

Table 9.1: Bivariate Correlations for Three Work-Related Items

Correlation matrix

              Stimulate   Challenge   Interest
Stimulate       1.000        .659       .596
Challenge        .659       1.000      .628
Interest         .596        .628     1.000


Another way to describe the strength of association between two variables is to square

the value of the correlation coefficient. When the correlation is squared, this measure

may be interpreted as the proportion of variance in one variable that is explained by

another. For example, if the correlation between a factor and an observed variable is

.5, the proportion of variance in the variable that is explained by the factor is .25. In

factor analysis, we are looking for factors that are not just associated with one variable

but are strongly related to, or explain the variance of, a set of variables (here, items).

With principal components extraction, an eigenvalue is the amount of variance in the

set of observed variables that is explained by a given factor and is also the variance of

the factor. As you will see, this amount of variance can be converted to a proportion of

explained variance. In brief, a larger eigenvalue for a factor means that it explains more variance in the set of variables and is indicative of a more important factor.

Table 9.2 shows selected summary results using principal components extraction with

the three work-related items. In the Component column of the first table, note that three

factors (equal to the number of variables) are formed in the initial solution. In the Total

column, the eigenvalue for the first factor is 2.256, which represents the total amount

of variance in the three items that is explained by this factor. Recall that the total maximum variance in a set of variables is equal to the number of variables. So, the total

amount of variance that could have been explained is three. Therefore, the proportion

of the total variance that is accounted for by the first factor is 2.256 / 3, which is about .75 or 75%, and is shown in the third (and fourth) column for the initial solution. The remaining factors have eigenvalues well below 1 and, under Kaiser's rule, would not be considered important in explaining the remaining item variance. Therefore, applying Kaiser's rule suggests that one factor explains important variation in the set of items.

Further, examining the scree plot shown in Table 9.2 provides support for the

single-factor solution. Recall that the scree plot displays the eigenvalues as a function

of the factor number. Notice that the plot levels off or is horizontal at factor two. Since

the procedure is to retain only those factors that appear before the leveling off occurs,

only one factor is to be retained for this solution. Thus, applying Kaiser’s rule and

inspecting the scree plot provides empirical support for a single-factor solution.

To address the second research question—which items are related to the factor—we

examine the factor loadings. When one factor has been extracted (as in this example),

a factor loading is the correlation between a given factor and variable (also called a

structure coefficient). Here, in determining if a given item is related to the factor, we

use a loading of .40 in magnitude or greater. The loadings for each item are displayed

in the component matrix of Table 9.2, with each loading exceeding .80. Thus, each

item is related to the single factor and should be used for interpretation.
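These results can be reproduced directly from the correlations in Table 9.1. A brief Python/numpy sketch (our own check, not the chapter's SPSS/SAS output; values agree with the reported results up to rounding of the correlations):

```python
import numpy as np

# Correlation matrix for stimulate, challenge, and interest (Table 9.1)
R = np.array([[1.000, 0.659, 0.596],
              [0.659, 1.000, 0.628],
              [0.596, 0.628, 1.000]])

values, vectors = np.linalg.eigh(R)               # eigh returns ascending order
values, vectors = values[::-1], vectors[:, ::-1]  # reorder to descending
proportions = values / R.shape[0]                 # variance explained per component
# values ≈ [2.256, .409, .336]; proportions[0] ≈ .752, i.e., about 75% of the
# total variance, and only the first eigenvalue exceeds 1 (Kaiser's rule).

loadings = np.sqrt(values[0]) * vectors[:, 0]     # loadings on the first component
loadings = loadings * np.sign(loadings.sum())     # eigenvector sign is arbitrary
# loadings ≈ [.867, .881, .852] for stimulate, challenge, and interest
```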

Now that we have determined that each item is important in defining the factor, we

can attempt to label the factor to address the final research question. Generally, since

the factor loadings are all positive, we can say that employees with high scores on

the factor also have high scores on stimulate, challenge, and interest (with an analogous statement holding for low scores on the factor). Of course, in this example,

Table 9.2: Selected Results Using Principal Components Extraction for Three Items

Total variance explained

             Initial eigenvalues                     Extraction sums of squared loadings
Component    Total    % of Variance   Cumulative %   Total    % of Variance   Cumulative %
1            2.256    75.189           75.189        2.256    75.189           75.189
2             .409    13.618           88.807
3             .336    11.193          100.000

Extraction Method: Principal Component Analysis.

Component matrix

             Component 1
Stimulate       .867
Challenge       .881
Interest        .852

Extraction Method: Principal Component Analysis.

[Scree plot: eigenvalues (2.256, .409, .336) plotted against factor number; the curve drops sharply after the first factor and levels off at factor two.]


the researchers have deliberately constructed the items so that they reflect the idea of

engaging professional work. This analysis, then, provides empirical support for referring to this construct as engagement with work.

9.7.2 A Two-Factor Orthogonal Model Using Principal Components Extraction

A second illustrative factor analysis using principal components extraction includes all

six work-related variables and will produce uncorrelated factors (due to an orthogonal

rotation selected). We present an analysis that constrains the factors to be uncorrelated because this approach is sometimes used in factor analysis. In section 9.7.4, we present an analysis that allows the factors to be correlated. We will also examine how

the results of the orthogonal and oblique factor solutions differ.

For this example, factor analysis with principal components extraction is used to

address the following research questions:

1. How many factors account for substantial variation in the set of items?

2. Is each item strongly related to factors that are obtained?

3. What is the meaning of any resultant factor(s)?

When the number of observed variables is fairly small and/or the pattern of their correlations is revealing, inspecting the bivariate correlations may provide an initial indication

of the number of factors that are present. Table 9.3 shows the correlations among the six items. Note that the first three items are strongly correlated with each other and the last three items are strongly correlated with each other. However, items from one of these two apparent sets are, perhaps, only moderately correlated with items from the other. Thus, this “eyeball” analysis suggests that two distinct factors are associated with the six items.

Selected results of a factor analysis using principal components extraction for these

data are shown in Table 9.4. Inspecting the initial eigenvalues suggests, applying Kaiser’s rule, that two factors are related to the set of items, as the eigenvalues for the first

two factors are greater than 1. In addition, the scree plot provides additional support

for the two-factor solution, as the plot levels off after the second factor. Note that the

two factors account for 76% of the variance in the six items.

Table 9.3: Bivariate Correlations for Six Work-Related Items

Variable          1       2       3       4       5       6
1 Stimulate     1.000
2 Challenge      .659   1.000
3 Interest       .596    .628   1.000
4 Recognize      .177    .111    .107   1.000
5 Appreciate     .112    .109    .116    .701   1.000
6 Compensate     .140    .104    .096    .619    .673   1.000
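The reported eigenvalues can be verified from the correlations in Table 9.3; a brief Python/numpy check (our own verification, not the chapter's SPSS/SAS output, with agreement up to rounding of the correlations):

```python
import numpy as np

# Correlation matrix for the six work-related items (Table 9.3)
R = np.array([[1.000, 0.659, 0.596, 0.177, 0.112, 0.140],
              [0.659, 1.000, 0.628, 0.111, 0.109, 0.104],
              [0.596, 0.628, 1.000, 0.107, 0.116, 0.096],
              [0.177, 0.111, 0.107, 1.000, 0.701, 0.619],
              [0.112, 0.109, 0.116, 0.701, 1.000, 0.673],
              [0.140, 0.104, 0.096, 0.619, 0.673, 1.000]])

eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
# eigenvalues ≈ [2.652, 1.934, ...]; only the first two exceed 1 (Kaiser's
# rule), and together they account for (2.652 + 1.934) / 6 ≈ 76% of the
# variance in the six items.
```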


Table 9.4: Selected Results From a Principal Components Extraction Using an Orthogonal Rotation

Total variance explained

             Initial eigenvalues            Extraction sums of             Rotation sums of
                                            squared loadings               squared loadings
Component    Total   % of Var.   Cum. %     Total   % of Var.   Cum. %     Total   % of Var.   Cum. %
1            2.652   44.205      44.205     2.652   44.205      44.205     2.330   38.838      38.838
2            1.934   32.239      76.445     1.934   32.239      76.445     2.256   37.607      76.445
3             .417    6.954      83.398
4             .384    6.408      89.806
5             .341    5.683      95.489
6             .271    4.511     100.000

Extraction Method: Principal Component Analysis.

Component matrix (2 components extracted)

             Component 1   Component 2
Stimulate       .651          .572
Challenge       .628          .619
Interest        .609          .598
Recognize       .706         -.521
Appreciate      .706         -.559
Compensate      .682         -.532

Extraction Method: Principal Component Analysis.

Rotated component matrix (rotation converged in 3 iterations)

             Component 1   Component 2
Stimulate       .100          .861
Challenge       .052          .880
Interest        .052          .851
Recognize       .874          .086
Appreciate      .899          .058
Compensate      .863          .062

Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.

[Scree plot: eigenvalues (2.652, 1.934, .417, .384, .341, .271) plotted against factor number; the curve levels off after the second factor.]


With a multifactor solution, factor interpretability is often more readily accomplished

with factor rotation. This analysis was conducted with varimax rotation, an orthogonal

rotation that assumes factors are uncorrelated (and keeps them that way). When factors

are extracted and rotated, the variance accounted for by a given factor is referred to as

the sum of squared loadings. These sums, following rotation, are shown in Table 9.4,

under the heading Rotation Sums of Squared Loadings. Note that the sums of squared

loadings for each factor are more alike following rotation. Also, while the sums of

squared loadings (after rotation) are numerically different from the original eigenvalues,

the total amount and percent of variance accounted for by the set of factors is the same

pre- and post-rotation. Note in this example that after rotation, each factor accounts

for about the same percent of item variance (e.g., 39%, 38%). Note, then, that if you

want to compare the relative importance of the factors, it makes more sense to use the

sum of squared loadings following rotation, because the factor extraction procedure is

designed to yield initial (unrotated) factors that have descending values for the amount

of variance explained (i.e., the first factor will have the largest eigenvalue, and so on).

Further, following rotation, the factor loadings will generally be quite different from

the unrotated loadings. The unrotated factor loadings (corresponding to the initially

extracted factors) are shown in the component matrix in TableÂ€ 9.4. Note that these

structure coefficients are difficult to interpret as the loadings for the first factor are all

positive and fairly strong whereas the loadings for the second factor are positive and

negative. This pattern is fairly common in multifactor solutions.
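The claim that rotation reallocates variance among factors without changing the communalities or the total explained variance can be checked numerically. The sketch below (ours, not part of the original analysis) assumes NumPy, uses the rotated loadings reported in Table 9.4, and applies an arbitrary illustrative rotation angle:

```python
import numpy as np

# Rotated loadings from Table 9.4 (rows: items, columns: factors)
L = np.array([
    [.100, .861],  # Stimulate
    [.052, .880],  # Challenge
    [.052, .851],  # Interest
    [.874, .086],  # Recognize
    [.899, .058],  # Appreciate
    [.863, .062],  # Compensate
])

theta = 0.7  # arbitrary rotation angle in radians (illustrative only)
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # orthogonal rotation matrix
L_rot = L @ T

# Per-factor sums of squared loadings change with rotation...
ssl_before = (L ** 2).sum(axis=0)
ssl_after = (L_rot ** 2).sum(axis=0)

# ...but communalities (row sums of squares) and the total do not
h2_before = (L ** 2).sum(axis=1)
h2_after = (L_rot ** 2).sum(axis=1)
print(np.allclose(h2_before, h2_after))               # True
print(np.isclose(ssl_before.sum(), ssl_after.sum()))  # True
```

Any orthogonal T preserves the row norms of the loading matrix, which is exactly why communalities and the total explained variance are rotation-invariant.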

A much more interpretable solution, consistent in this case with the item correlations, is achieved after factor rotation. The Rotated Component Matrix displays the rotated loadings, and with an orthogonal rotation, the loadings still represent the correlation between a component and a given item. Thus, using the criterion from the previous section (loadings greater than .40 represent a strong association), we see that the variables stimulate, challenge, and interest load highly only on factor two and that the variables recognize, appreciate, and compensate load highly only on the other factor. As such, it is clear that factor two represents the engagement factor, as labeled previously. Factor one is composed of items for which, given the positive loadings, high scores on the factor indicate employees with high scores on the items recognize, appreciate, and compensate. Therefore, there is empirical support for believing that these items tap the degree to which employees feel valued at the workplace. Thus, answering the research questions posed at the beginning of this section: two factors (an engagement and a valued factor) are strongly related to the set of items, and each item is related to only one of the factors.

Note also that the values of the structure coefficients in the Rotated Component Matrix of Table 9.4 are characteristic of a specific form of what is called simple structure. Simple structure is characterized by (1) each observed variable having high loadings on only one or a few components and (2) each component having multiple high loadings, with the rest of the loadings near zero. Thus, in a multifactor model, each factor is defined by a subset of variables, and each observed variable is related to at least one factor. With these data, rotation achieves a very interpretable pattern in which each item loads on only one factor. Note that while this is often desired because it eases interpretation, simple structure does not require that a given variable load on only one factor. For example, consider a math problem-solving item on a standardized test. Performance on such an item may be related to both examinee reading and math ability. Such cross-loading associations may be expected in a factor analysis and may represent a reasonable and interpretable finding. Often, though, in instrument development, such items are considered undesirable and are removed from the measuring instrument.

9.7.3 Calculating the Sum of the Squared Loadings and Communalities From Factor Loadings

Before we consider results from the use of an oblique rotation, we show how the sums of squared loadings and communalities can be calculated given the item-factor correlations. The sum of squared loadings for a given factor can be computed by squaring the item-factor correlations for that factor and summing these squared values down the column. Table 9.5 shows the rotated loadings from the two-factor solution with an orthogonal (varimax) rotation from the previous section, with the squares to be taken for each value. As shown at the bottom of each factor column, this value is the sum of squared loadings for the factor and represents the amount of variance explained by that factor for the set of items. As such, it is an aggregate measure of the strength of association between a given factor and the set of observed variables. Here it is useful to think of a given factor as an independent variable explaining variation in a set of responses. Thus, a low value for the sum of squared loadings suggests that a factor is not strongly related to the set of observed variables. Recall that such factors (factors 3–6) have already been removed from this analysis because their eigenvalues were each smaller than 1.

The communalities, on the other hand, as shown in Table 9.5, are computed by summing the squared loadings across each of the factors for a given observed variable and represent the proportion of variance in a variable that is due to the factors. Thus,

Table 9.5: Variance Calculations From the Two-Factor Orthogonal Model

Items          Value factor    Engagement factor    Sum across the row (communality)
Stimulate      .100²           .861²                .100² + .861² = .75
Challenge      .052²           .880²                .052² + .880² = .78
Interest       .052²           .851²                .052² + .851² = .73
Recognize      .874²           .086²                .874² + .086² = .77
Appreciate     .899²           .058²                .899² + .062² ... [see note] .899² + .058² = .81
Compensate     .863²           .062²                .863² + .062² = .75

Sum down the column (amount of variance; sum of squared loadings):
Value factor: .100² + .052² + .052² + .874² + .899² + .863² = 2.33
Engagement factor: .861² + .880² + .851² + .086² + .058² + .062² = 2.25


a communality is an aggregate measure of the strength of association between a given

variable and a set of factors. Here, it is useful to think of a given observed variable as

a dependent variable and the factors as independent variables, with the communality

representing the r-square (or squared multiple correlation) from a regression equation with the factors as predictors. Thus, a low communality value suggests that an

observed variable is not related to any of the factors. Here, all variables are strongly

related to the factors, as the factors account for (minimally) 73% of the variance in the

Interest variable and up to 81% of the variance in the Appreciate variable.

Variables with low communalities would likely not be related to any factor (which

would also then be evident in the loading matrix), and on that basis would not be used

to interpret a factor. In instrument development, such items would likely be dropped

from the instrument and/or undergo subsequent revision. Note also that communalities do not change with rotation; that is, these values are the same pre- and post-rotation. Thus, while rotation changes the values of the factor loadings (to improve interpretability) and the amount of variance attributable to a given factor, it does not change the values of the communalities or the total amount of variance explained by the factors together.
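The arithmetic in Table 9.5 can be scripted directly; a minimal sketch (NumPy assumed; variable names are ours):

```python
import numpy as np

# Varimax-rotated loadings (Table 9.5): columns are the value and
# engagement factors; rows are the six items
loadings = np.array([
    [.100, .861],  # Stimulate
    [.052, .880],  # Challenge
    [.052, .851],  # Interest
    [.874, .086],  # Recognize
    [.899, .058],  # Appreciate
    [.863, .062],  # Compensate
])

sq = loadings ** 2
ssl = sq.sum(axis=0)           # sum down each column: variance per factor
communality = sq.sum(axis=1)   # sum across each row: variance per item

print(ssl.round(2))            # [2.33 2.25]
print(communality.round(2))    # [0.75 0.78 0.73 0.77 0.81 0.75]
```

The printed values reproduce the column and row totals reported in Table 9.5.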

9.7.4 A Two-Factor Correlated Model Using Principal Components Extraction

The final illustrative analysis we consider with the work-related variables presents results for a two-factor model that allows the factors to be correlated. By using a varimax rotation, as we did in section 9.7.2, the factors were constrained to be completely uncorrelated. However, assuming the labels we have given to the factors (engagement and feeling valued in the workplace) are reasonable, you might believe that these factors are positively related. If you wish to estimate the correlation among factors, an oblique type of rotation must be used. Note that using an oblique rotation does not force the factors to be correlated but rather allows this correlation to be nonzero. The oblique rotation used here is direct quartimin, a commonly used method that is equivalent to direct oblimin when the parameter controlling the degree of correlation (called delta or tau) is at its recommended default value of zero. For this example, the same three research questions that appeared in section 9.7.2 are of interest, but we now add a fourth: What is the direction and magnitude of the factor correlation?

Selected analysis results for these data are shown in Table 9.6. As in section 9.7.2, applying Kaiser's rule to the initial eigenvalues suggests that two factors are strongly related to the set of items, and inspecting the scree plot provides support for this solution. Note, though, that the values in Table 9.6 under the headings Initial Eigenvalues and Extraction Sums of Squared Loadings are identical to those provided in Table 9.4 even though we have requested an oblique rotation. The reason for these identical results is that the values shown under these headings are from a model in which the factors have not been rotated. Thus, the choice of rotation method does not affect the eigenvalues that are used to determine the number of important factors present in the data. After the factors have been rotated, the sums of squared loadings associated with each factor are located under the heading Rotation Sums of Squared Loadings. As with the orthogonal solution, these sums of squared loadings are more similar for the two factors than before rotation. However, unlike the values for the orthogonal rotation, these post-rotation values cannot be meaningfully summed to provide a total amount of variance explained by the two factors, as the total obtained by this sum would now include the overlapping (shared) variance between factors. (Note that the total amount of variance explained by the two factors remains 2.652 + 1.934 = 4.586.) The post-rotation sums of squared loadings can be used, though,

Table 9.6: Selected Analysis Results Using Oblique Rotation and Principal Components Extraction

Total variance explained

             Initial eigenvalues                  Extraction sums of squared loadings   Rotation sums of
                                                                                        squared loadings(a)
Component    Total   % of Variance  Cumulative %  Total   % of Variance  Cumulative %   Total
1            2.652   44.205          44.205       2.652   44.205          44.205        2.385
2            1.934   32.239          76.445       1.934   32.239          76.445        2.312
3             .417    6.954          83.398
4             .384    6.408          89.806
5             .341    5.683          95.489
6             .271    4.511         100.000

Extraction Method: Principal Component Analysis.
a. When components are correlated, sums of squared loadings cannot be added to obtain a total variance.

[Scree plot: eigenvalues (0.0 to 3.0) plotted against component number (1 to 6).]


Table 9.6: (Continued)

Pattern matrix

               Component
               1        2
Stimulate    .033     .861
Challenge   −.017     .884
Interest    −.015     .855
Recognize    .875     .018
Appreciate   .902    −.012
Compensate   .866    −.005

Structure matrix

               Component
               1        2
Stimulate    .167     .867
Challenge    .120     .882
Interest     .118     .853
Recognize    .878     .154
Appreciate   .900     .128
Compensate   .865     .129

Extraction Method: Principal Component Analysis.
Rotation Method: Oblimin with Kaiser Normalization.

Component correlation matrix

Component       1        2
1           1.000     .155
2            .155    1.000

to make a rough comparison of the relative importance of each factor. Here, given that three items load on each factor and that each factor has similarly large (and similarly small) loadings, the two factors are about equal in importance.

As noted in section 9.4.2, with an oblique rotation, two types of matrices are provided. Table 9.6 shows the pattern matrix, containing values analogous to standardized partial regression coefficients, and the structure matrix, containing the correlations between the factors and each observed variable. As noted, using pattern coefficients to interpret factors is more consistent with an oblique rotation. Further, using the pattern coefficients often provides a clearer picture of the factors (enhanced simple structure). In this case, using either matrix leads to the same conclusion: the items stimulate, challenge, and interest are strongly related (pattern coefficient > .40) to only one factor, and the items recognize, appreciate, and compensate are strongly related only to the other factor.
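The two matrices are linked algebraically: with factor correlation matrix Phi, the structure matrix equals the pattern matrix post-multiplied by Phi. A sketch (NumPy assumed) that reproduces the Table 9.6 structure coefficients to within rounding:

```python
import numpy as np

# Pattern coefficients from Table 9.6 (rows: items, columns: components)
P = np.array([
    [.033, .861],   # Stimulate
    [-.017, .884],  # Challenge
    [-.015, .855],  # Interest
    [.875, .018],   # Recognize
    [.902, -.012],  # Appreciate
    [.866, -.005],  # Compensate
])

# Component correlation matrix from Table 9.6
Phi = np.array([[1.000, .155],
                [.155, 1.000]])

# Structure matrix: correlations between items and components
S = P @ Phi
print(S.round(3))
```

Note that when the factors are uncorrelated (Phi is the identity matrix), the pattern and structure matrices coincide, which is why an orthogonal solution reports only one loading matrix.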

Another new piece of information provided by the oblique rotation is the factor correlation. Here, Table 9.6 indicates a positive but not strong association of .155. Given that the factors are not highly correlated, the orthogonal solution would not be unreasonable to use here, although a correlation of this magnitude (i.e., > |.10|) can be considered small but meaningful (Cohen, 1988).


Thus, addressing the research questions for this example, two factors are important

in explaining variation in the set of items. Further, inspecting the values of the pattern coefficients suggested that each item is related to only its hypothesized factor.

Using an oblique rotation to estimate the correlation between factors, we found that

the two factors—engagement and feeling valued in the workplace—are positively but

modestly correlated at 0.16.

Before proceeding to the next section, we wish to make a few additional points. Nunnally (1978, pp. 433–436) indicated several ways in which one can be fooled by factor analysis. One point he made that we wish to comment on is that of ignoring the simple correlations among the variables after the factors have been derived; that is, not checking the correlations among the variables that have been used to define a factor to see if there is communality among them in the simple sense. As Nunnally noted, in some cases, variables used to define a factor may have simple correlations near zero. For our example this is not the case. Examination of the simple correlations in Table 9.3 for the three variables used to define factor 1 shows that these correlations are fairly strong. The same is true for the observed variables used to define factor 2.

9.8 THE COMMUNALITY ISSUE

With principal components extraction, we simply transform the original variables into linear combinations of those variables, and often a limited number of these combinations (i.e., the components or factors) account for most of the total variance. Also, we use 1s in the diagonal of the correlation matrix. Factor analysis using other extraction methods differs from principal components extraction in two ways: (1) the hypothetical factors derived in pure or common factor analysis can only be estimated from the original variables, whereas with principal components extraction, because the components are specific linear combinations, no estimation is involved; and (2) in common factor analysis, numbers less than 1 (the communalities) are put in the main diagonal of the correlation matrix. A relevant question is: Will different factors emerge if communalities (e.g., the squared multiple correlation of each variable with all the others) are placed in the main diagonal?

The following quotes from five different sources give a pretty good sense of what might be expected in practice under some conditions. Cliff (1987) noted that “the choice of common factors or components methods often makes virtually no difference to the conclusions of a study” (p. 349). Guadagnoli and Velicer (1988) cited several studies by Velicer et al. that “have demonstrated that principal components solutions differ little from the solutions generated from factor analysis methods” (p. 266). Harman (1967) stated, “as a saving grace, there is much evidence in the literature that for all but very small data sets of variables, the resulting factorial solutions are little affected by the particular choice of communalities in the principal diagonal of the correlation matrix” (p. 83). Nunnally (1978) noted, “it is very safe to say that if there are as many as 20 variables in the analysis, as there are in nearly all exploratory factor analysis, then it does not matter what one puts in the diagonal spaces” (p. 418). Gorsuch (1983) took a somewhat more conservative position: “If communalities are reasonably high (e.g., .7 and up), even unities are probably adequate communality estimates in a problem with more than 35 variables” (p. 108). A general, somewhat conservative conclusion from these is that when the number of variables is moderately large (say > 30), and the analysis contains virtually no variables expected to have low communalities (e.g., .4), then practically any of the factor procedures will lead to the same interpretations.

On the other hand, principal components and common factor analysis may provide different results when the number of variables is fairly small (< 20) and some communalities are low. Further, Fabrigar and Wegener (2012) state, despite the Nunnally assertion described earlier, that these conditions (relatively low communalities and a small number of observed variables or loadings on a factor) are not that unusual in social science research. For this reason alone, you may wish to use a common factor analysis method instead of, or along with, principal components extraction. Further, as we discuss later, the common factor analysis method is conceptually more appropriate when you hypothesize that latent variables are present. As such, we now consider the common factor analysis model.

9.9 THE FACTOR ANALYSIS MODEL

As mentioned, factor analysis using principal components extraction may provide results similar to those of other factor extraction methods. However, the principal components and common factor models, discussed later, have some fundamental differences and may at times lead to different results. We briefly highlight the key differences between the principal components and common factor models and point out general conditions under which use of the common factor model may have greater appeal. A key difference between the two models has to do with the goal of the analysis. The goal of principal components analysis is to obtain a relatively small number of variates (linear combinations of variables) that account for as much variance in the set of variables as possible. In contrast, the goal of common factor analysis is to obtain a relatively small number of latent variables that account for the maximum amount of covariation in a set of observed variables. The classic example of this latter situation is when you are developing an instrument to measure a psychological attribute (e.g., motivation) and you write items that are intended to tap the unobservable latent variable (motivation). The common factor analysis model assumes that respondents with an underlying high level of motivation will respond similarly across the set of motivation items (e.g., have high scores across such items) because responses to these items are caused by a common factor (here, motivation). Similarly, respondents who have low motivation will provide generally low responses across the same set of items, again due to the common underlying factor. Thus, if you assume that unobserved latent variables are causing individuals to respond in predictable ways across a set of items (or observed variables), then the common factor model is conceptually better suited for this purpose than principal components.


A visual display of the key elements of a factor analysis model helps further highlight differences between the common factor and principal components models. Figure 9.1 is a hypothesized factor analysis model using the previous example involving the work-related variables. The ovals at the bottom of Figure 9.1 represent the hypothesized “engagement” and “valued” constructs or latent variables. As shown by the single-headed arrows, these constructs, which are unobservable, are hypothesized to cause responses in the indicators (the engagement items, E1–E3, and the value items, V1–V3). Note that in exploratory factor analysis, contrary to what is shown in Figure 9.1, each construct is assumed to linearly impact each of the observed variables. That is, arrows would also appear from the engagement oval to each of the value variables (V1–V3) and from the valued oval to variables E1–E3; in exploratory factor analysis such cross-loadings cannot be set a priori to zero. So, the depiction in Figure 9.1 represents the researcher's hypothesis of interest. If this hypothesis is correct, these undepicted cross-loadings would be essentially zero. Note also that the double-headed curved arrow linking the two constructs at the bottom of Figure 9.1 means that the constructs are assumed to be correlated. As such, an oblique rotation that allows for such a correlation would be used. Further, the ovals at the top of Figure 9.1 represent unique variance that affects each observed variable. This unique variance, according to the factor analysis model, is composed of two parts: systematic variance that is inherent or specific to a given indicator (e.g., due to the way an item is presented or written) and variance due to random measurement error. Note that in the factor analysis model this unique variance (composed of these two parts) is removed from the observed variables (i.e., removed from E1, E2, and so on).

Figure 9.1: A hypothesized factor analysis model with three engagement items (E1–E3) and three value items (V1–V3). [Path diagram: unique variances Ue1–Ue3 and Uv1–Uv3 each point to their observed item (E1–E3, V1–V3); the correlated latent variables Engagement and Valued point to the E and V items, respectively.]


So, how does this depiction compare to the principal components model? First, the principal components model does not acknowledge the presence of the unique variances associated with the items, as shown in the top of Figure 9.1. Thus, the principal components model assumes that there is no random measurement error and no specific variance associated with each indicator. As such, in the principal components model, all variation associated with a given variable is included in the analysis. In contrast, in common factor analysis, only the variation that is shared, or in common, among the indicators (assumed to be in common because of the underlying factors) can be impacted by the latent variables. This common variance is referred to as the communality, which is often measured initially by the proportion of variance in an observed variable that is common to all of the other observed variables. When the unique variance is small (or the communalities are high), as noted, factor analysis and the principal components method may well lead to the same analysis results because they are both analyzing essentially the same variance (i.e., the common variance and the total variable variance are almost the same).
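This variance partitioning can be written compactly: the common factor model implies R = L·Phi·L' + Psi, where L holds the loadings, Phi the factor correlations, and Psi the diagonal matrix of unique variances. A sketch (NumPy assumed; the pattern loadings and factor correlation from the earlier principal components solution are borrowed purely for illustration, not as fitted common factor estimates):

```python
import numpy as np

# Illustrative loadings and factor correlation (from the earlier example)
Lam = np.array([
    [.033, .861], [-.017, .884], [-.015, .855],
    [.875, .018], [.902, -.012], [.866, -.005],
])
Phi = np.array([[1.0, .155],
                [.155, 1.0]])

common = Lam @ Phi @ Lam.T       # variance/covariance due to the factors
psi = 1.0 - np.diag(common)      # unique variances: whatever is left over
R_implied = common + np.diag(psi)

# Each standardized variable's variance of 1 splits into its
# communality (diagonal of `common`) plus its unique variance
print(np.diag(common).round(2))  # [0.75 0.78 0.73 0.77 0.81 0.75]
```

The printed communalities match those computed from the orthogonal solution in Table 9.5, illustrating that the common part of each variable's variance is the same under either rotation.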

Another primary difference between factor and principal components analysis is the assumed presence of latent variables in the factor analysis model. In principal components analysis, the composite variables (the components) are linear combinations of observed variables and are not considered to be latent variables; they are simply weighted sums of the observed variables. In contrast, the factor analysis model, with the removal of the unique variance, assumes that latent variables are present and underlie (as depicted in Figure 9.1) responses to the indicators. Thus, if you are attempting to identify whether latent variables underlie responses to observed variables and explain why observed variables are associated (as when developing a measuring instrument or attempting to develop theoretical constructs from a set of observed variables), the exploratory factor analysis model is consistent with the latent variable hypothesis and is, theoretically, the more suitable analysis model.

Also, a practical difference between principal components and common factor analysis is that in common factor analysis unreasonable parameter estimates may be obtained (such as communalities estimated to be one or greater). Such occurrences, called Heywood cases, may be indicative of a grossly misspecified factor analysis model (too many or too few factors), an overly small sample size, or data that are inconsistent with the assumptions of exploratory factor analysis. On the positive side, then, Heywood cases can have value in alerting you to potential problems with the model or data.

9.10 ASSUMPTIONS FOR COMMON FACTOR ANALYSIS

Now that we have discussed the factor analysis model, we should make explicit the assumptions underlying the use of common factor analysis. First, as suggested by Figure 9.1, the factors are presumed to underlie or cause responses among the observed variables. The observed variables are then said to be “effect” or “reflective” indicators.


This presumed causal direction is the reason why observed variables that are caused by a common factor should be fairly highly correlated. While this causal direction may be reasonable for many situations, indicators are sometimes thought to cause changes in the factors. For example, it may be reasonable to assume that indicators of socioeconomic status (SES, such as income or salary) are causally related to an SES factor, with increases in the observed indicators (e.g., inheriting a fortune) causing an increase in the SES construct. Common factor analysis is not intended for such causal- or formative-indicator models. Having a good conceptual understanding of the variables being studied will help you determine whether it is believable that the observed variables are effect indicators. Further discussion of causal-indicator and other related models can be found in, for example, Bollen and Bauldry (2011).

A second assumption of the factor analysis model is that the factors and observed variables are linearly related. Given that factors are unobservable, this assumption is difficult to assess. However, there are two things worth keeping in mind regarding the linearity assumption. First, factor analysis results that are not meaningful (i.e., uninterpretable factors) may be due to nonlinearity. In such a case, if other potential causes are ruled out, such as extracting too few factors, it may be possible to use data transformations on the observed variables to obtain more sensible results. Second, considering the measurement properties of the observed variables can help us determine whether linearity is reasonable to assume. For example, observed variables that are categorical or strictly ordinal in nature are generally problematic for standard factor analysis because linearity presumes an approximately equal interval between scale values. That is, the interpretation of a factor loading (a 1-unit change in the factor producing a certain unit change in a given observed variable) is not meaningful without, at least, an approximately equal interval. With such data, factor analysis is often implemented with structural equation modeling or other specialized software to take these measurement properties into account. Note that Likert-scaled items are often considered to operate in the gray area between the ordinal and interval properties and are sometimes said to have a quasi-interval property. Floyd and Widaman (1995) state that standard factor analysis often performs well with such scales, especially those having five to seven response options.

Another assumption for common factor analysis is that there is no perfect multicollinearity among the observed variables. This situation is fairly straightforward to diagnose with the collinearity diagnostics discussed in this book. Note that this assumption implies that a given observed variable is not a linear sum or composite of other variables involved in the factor analysis. If a composite variable (e.g., y3 = y1 + y2) and its determinants (y1 and y2) were included in the analysis, software programs would typically provide an error message indicating that the correlation matrix is not positive definite, with no further results being provided.

An assumption that is NOT made when principal components or principal axis factoring is used is multivariate or univariate normality. Given that these two procedures are almost always implemented without estimating standard errors or using statistical inference, there is no assumption that the observed variables follow a multivariate or univariate normal distribution. Note, though, that related to the quasi-interval measurement property just discussed, more replicable factor analysis results are generally obtained when scores do not deviate grossly from a normal distribution.

One important nonstatistical consideration has to do with the variables included in the factor analysis. The variables selected should be driven by the constructs one hypothesizes to be present. If constructs are poorly defined, or if the observed variables poorly represent a construct of interest, then the factor analysis results may not be meaningful. For example, hypothesized factors may not emerge or may be defined by only a single indicator.

9.11 DETERMINING HOW MANY FACTORS ARE PRESENT WITH PRINCIPAL AXIS FACTORING

In this section, we discuss criteria for determining the number of factors in exploratory factor analysis given the use of principal axis factoring. Principal axis factoring is a factor extraction method suitable for the common factor model. While there are several methods that can be used to extract factors in exploratory factor analysis, some of which are somewhat better at approximating the observed variable correlations, principal axis factoring is a readily understood and commonly used method. Further, principal axis factoring has some advantages relative to other extraction methods, as it does not assume multivariate normality and is not as likely to run into estimation problems as is, for example, maximum likelihood extraction (Fabrigar & Wegener, 2012). As mentioned, the key mathematical difference between principal components and principal axis factoring is that in the latter, estimates of the communalities replace the 1s in the diagonal of the correlation matrix. This altered correlation matrix, with estimated communalities in the diagonal, is often referred to as the reduced correlation matrix.
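One common set of initial communality estimates is each variable's squared multiple correlation (SMC) with all remaining variables, obtainable from the inverse of the correlation matrix as 1 − 1/r^ii. A sketch of building the reduced correlation matrix (NumPy assumed; the three-variable correlation matrix is invented for illustration):

```python
import numpy as np

def reduced_correlation(R):
    """Replace the unit diagonal of R with SMC communality estimates."""
    # SMC for variable i: 1 - 1 / (i-th diagonal element of R-inverse)
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    R_reduced = R.copy()
    np.fill_diagonal(R_reduced, smc)
    return R_reduced

# Invented correlation matrix for three observed variables
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])

Rr = reduced_correlation(R)
# Off-diagonal correlations are untouched; only the diagonal changes
print(np.diag(Rr))  # SMCs: each strictly between 0 and 1
```

The eigenvalues of this reduced matrix are the ones examined in the scree plot and parallel analysis discussed below.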

While the analysis procedures associated with principal axis factoring are very similar to those used with principal components extraction, the use of the reduced correlation matrix somewhat complicates the issue of using empirical indicators to determine the number of factors present. Recall that with principal components extraction, the use of Kaiser's rule (eigenvalues > 1) to identify the number of factors is based on the idea that a given factor, if important, ought to account for at least as much variance as a single observed variable. However, in common factor analysis, the variance of the observed variables that is used in the analysis excludes variance unique to each variable. As such, the observed variable variance included in the analysis is smaller when principal axis factoring is used. Kaiser's rule, as applied to the reduced correlation matrix, is therefore overly stringent and may lead to overlooking important factors. Further, no comparable cutoff (eigenvalues > 1) is generally used for the eigenvalues from the reduced correlation matrix.


Thus, the multiple criteria used in factor analysis to identify the number of important

factors are somewhat different from those used with the principal components method.

Following the suggestion in Preacher and MacCallum (2003), we will still rely on

Kaiser’s rule except that this rule will be applied to the matrix used with principal components extraction, the unreduced correlation matrix—that is, the unaltered or conventional correlation matrix of the observed variables with 1s on the diagonal. Even

though this correlation matrix is not used in common factor analysis, Preacher and

MacCallum note that this procedure may identify the proper number of factors (especially when communalities are high) and is reasonable to use provided other criteria

are used to identify the number of factors. Second, although the application of Kaiser’s

rule is not appropriate for the reduced correlation matrix, you can still examine the

scree plot of the eigenvalues obtained from use of the reduced correlation matrix to

identify the number of factors. Third, parallel analysis based on the reduced correlation

matrix can be used to identify the number of factors. Note that the eigenvalues from

the reduced correlation matrix are readily obtained with the use of SAS software, but

SPSS does not, at present, provide these eigenvalues. Further, neither software program provides the eigenvalues for the parallel analysis procedure. The eigenvalues

from the reduced correlation matrix as well as the eigenvalues produced via the parallel analysis procedure can be obtained using syntax found in Fabrigar and Wegener

(2012). Further, the publisher’s website for that text currently provides the needed

syntax in electronic form, which makes it easier to implement these procedures. Also,

as before, we will consider the meaningfulness of any retained factors as an important

criterion. This interpretation depends on the pattern of factor loadings as well as, for

an oblique rotation, the correlation among the factors.
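The reduced-matrix eigenvalues discussed above can be computed directly. The following Python/NumPy sketch (an illustration, not the supplemental syntax the chapter cites from Fabrigar and Wegener) builds the reduced correlation matrix by placing squared multiple correlations (the usual initial communality estimates) on the diagonal of a hypothetical correlation matrix, and then extracts its eigenvalues.

```python
import numpy as np

def reduced_matrix_eigenvalues(R):
    """Eigenvalues of the reduced correlation matrix: each diagonal 1 of R
    is replaced by an initial communality estimate, the squared multiple
    correlation (SMC) of that variable with the remaining variables,
    computed from the inverse of R."""
    R = np.asarray(R, dtype=float)
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))  # SMC for each variable
    R_reduced = R.copy()
    np.fill_diagonal(R_reduced, smc)             # reduced correlation matrix
    # The reduced matrix is symmetric but not positive definite,
    # so some of its eigenvalues are typically negative.
    return np.sort(np.linalg.eigvalsh(R_reduced))[::-1]

# Hypothetical example: two correlated pairs of variables
R = np.array([[1.0, 0.6, 0.1, 0.1],
              [0.6, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.5],
              [0.1, 0.1, 0.5, 1.0]])
vals = reduced_matrix_eigenvalues(R)
```

Because the diagonal entries are less than 1, these eigenvalues are smaller than those of the unreduced matrix (and some are typically negative), which is why Kaiser's rule is too stringent here.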

9.12 EXPLORATORY FACTOR ANALYSIS EXAMPLE WITH PRINCIPAL AXIS FACTORING

We now present an example using exploratory factor analysis with principal axis factoring. In this example, the observed variables are items from a measure of test anxiety

known as the Reactions to Tests (RTT) scale. The RTT questionnaire was developed

by Sarason (1984) to measure the four hypothesized dimensions of worry, tension,

test-irrelevant thinking, and bodily symptoms. The summary data (i.e., correlations)

used here are drawn from a study of the scale by Benson and Bandalos (1992), who

used confirmatory factor analysis procedures. Here, we suppose that there has been

no prior factor analytic work with this scale (as when the scale is initially developed),

which makes exploratory factor analysis a sensible choice. For simplicity, only three

items from each scale are used. Each item has the same four Likert-type response

options, with larger score values indicating greater tension, worry, and so on. In this

example, data are collected from 318 participants, which, assuming at least moderate

communalities, is a sufficiently large sample size.

The hypothesized factor model is shown in Figure 9.2. As can be seen from the figure,

each of the three items for each scale is hypothesized to load only on the scale it was


written to measure (which is typical for an instrument development context), and the

factors are hypothesized to correlate with each other. The 12 ovals in the top of the

figure (denoted by U1, U2, and so on) represent the unique variance associated with

each indicator, that is, the variance in a given item that is not due to the presumed

underlying latent variable. Given the interest in determining if latent variables underlie

responses to the 12 items (i.e., account for correlations across items), a common factor

analysis is suitable.

9.12.1 Preliminary Analysis

Table 9.7 shows the correlations for the 12 items. Examining these correlations suggests that the associations are fairly strong within each set and weaker across the sets

(e.g., strong correlations among the tension items, among the worry items, and so on,

but not across the different sets). The exception occurs with the body items, which

have reasonably strong correlations with other body items but appear to be somewhat

similarly correlated with the tension items. The correlation matrix suggests multiple

factors are present but it is not clear if four distinct factors are present. Also, note that

correlations are positive as expected, and several correlations exceed a magnitude of

.30, which supports the use of factor analysis.

We do not have the raw data and so cannot carry out the other preliminary analysis activities here, but we can describe the key ones. Histograms (or other plots) of the scores

of each item and associated z-scores should be examined to search for outlying

values. Item means and standard deviations should also be computed and examined

to see if they are reasonable values. All possible bivariate scatterplots could also

be examined to check for bivariate outliers, although this becomes less practical as

the number of items increases. Further, the data set should be inspected for missing

data. Note that if it were reasonable to assume that data are missing at random,

use of the Expectation Maximization (EM) algorithm can be an effective missing

data analysis strategy, given that no hypothesis testing is conducted (i.e., no standard errors need to be estimated). Further, multicollinearity diagnostics should be

Figure 9.2: Four-factor test anxiety model with three indicators per factor.

[Figure 9.2 is a path diagram in which each of the 12 items (Ten1-Ten3, Wor1-Wor3, Tirt1-Tirt3, Body1-Body3) loads on its own factor (Tension, Worry, Test-irrelevant thinking, Bodily symptoms), a unique variance term (U1-U12) is attached to each item, and the four factors are allowed to correlate.]


Table 9.7: Item Correlations for the Reactions-to-Tests Scale

         1     2     3     4     5     6     7     8     9    10    11    12
Ten1   1.000
Ten2    .657 1.000
Ten3    .652  .660 1.000
Wor1    .279  .338  .300 1.000
Wor2    .290  .330  .350  .644 1.000
Wor3    .358  .462  .440  .659  .566 1.000
Tirt1   .076  .093  .120  .317  .313  .367 1.000
Tirt2   .003  .035  .097  .308  .305  .329  .612 1.000
Tirt3   .026  .100  .097  .305  .339  .313  .674  .695 1.000
Body1   .287  .312  .459  .271  .307  .351  .122  .137  .185 1.000
Body2   .355  .377  .489  .261  .277  .369  .196  .191  .197  .367 1.000
Body3   .441  .414  .522  .320  .275  .383  .170  .156  .101  .460  .476 1.000

examined, as estimation with principal axis factoring will fail if there is perfect

multicollinearity.

Note also that you can readily obtain values for the Mahalanobis distance (to identify

potential multivariate outliers) and variance inflation factors (to determine if excessive

multicollinearity is present) from SPSS and SAS. To do this, use the regression package of the respective software program and regress ID or case number (a meaningless

dependent variable) on the variables used in the factor analysis. Be sure to request

values for the Mahalanobis distance and variance inflation factors, as these values will

then appear in your data set and/or output. See sections 3.7 and 3.14.6 for a discussion of the Mahalanobis distance and variance inflation factors, respectively. Sections 9.14

and 9.15 present SPSS and SAS instructions for factor analysis.
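If you prefer to compute these two diagnostics directly rather than through the regression trick described above, both have simple closed forms. The NumPy sketch below is an illustration using hypothetical data and the standard definitions (the VIF for variable j is the jth diagonal element of the inverse correlation matrix); it is not the SPSS/SAS output itself.

```python
import numpy as np

def mahalanobis_distances(X):
    """Squared Mahalanobis distance of each row of X (n x p) from the
    vector of column means, using the sample covariance matrix."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    # diff_i' S^-1 diff_i for every row i
    return np.einsum('ij,jk,ik->i', diff, S_inv, diff)

def variance_inflation_factors(X):
    """VIF_j = 1 / (1 - R_j^2), where R_j^2 is the squared multiple
    correlation of variable j with the remaining variables; this equals
    the jth diagonal element of the inverse correlation matrix."""
    R = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    return np.diag(np.linalg.inv(R))

# Hypothetical data: 100 cases, 4 variables
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
d = mahalanobis_distances(X)
vif = variance_inflation_factors(X)
```

Large values of `d` flag potential multivariate outliers, and VIFs well above 1 signal multicollinearity, mirroring the quantities the regression procedure would place in your data set.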

9.12.2 Primary Analysis

Given that a four-factor model is hypothesized, we begin by requesting a four-factor

solution from SPSS (and SAS) using principal axis factoring with an oblique rotation,

which allows us to estimate the correlations among the factors. The oblique rotation

we selected is the commonly used direct quartimin. The software output shown next is

mostly from SPSS, which we present here because of the somewhat more complicated

nature of results obtained with the use of this program. Table 9.8 presents the eigenvalues SPSS provides when running this factor analysis. The initial eigenvalues on the left side of Table 9.8 are those obtained with a principal components solution (because that

is what SPSS reports), that is, from the correlation matrix with 1s in the diagonal. While

this is not the most desirable matrix to use when using principal axis factoring (which

uses communalities on the diagonal), we noted previously that Kaiser’s rule, which

should not be applied to the reduced matrix, can at times identify the correct number

of factors when applied to the standard correlation matrix. Here, applying Kaiser’s rule


suggests the presence of three factors, as the eigenvalues associated with factors 1–3

are each larger than 1. Note though that under the heading Extraction Sums of Squared

Loadings four factors are extracted, as we asked SPSS to disregard Kaiser’s rule and

simply extract four factors. Note that the values under the extracted heading are the

sum of squared loadings obtained via principal axis factoring prior to rotation, while

those in the final column are the sums of squared loadings after factor rotation. Neither of these latter sets of estimates contains the initial or preliminary eigenvalues from the reduced correlation matrix that are used to identify (or help validate) the number of factors present.

In addition to Kaiser’s rule, a second criterion we use to identify if the hypothesized

four-factor model is empirically supported is to obtain the initial eigenvalues from the

reduced matrix and examine a scree plot associated with these values. In SAS, these

values would be obtained when you request extraction with principal axis factoring.

With SPSS, these initial eigenvalues are currently not part of the standard output, as

we have just seen, but can be obtained by using syntax mentioned previously and provided in Fabrigar and Wegener (2012). These eigenvalues are shown on the left side

of Table 9.9. Note that, as discussed previously, it is not appropriate to apply Kaiser's

Table 9.8: Eigenvalues and Sum-of-Squared Loadings Obtained From SPSS for the Four-Factor Model

Total variance explained

                Initial eigenvalues                Extraction sums of squared loadings    Rotation sums of
                                                                                          squared loadings(a)
Factor   Total    % of Variance   Cumulative %    Total    % of Variance   Cumulative %   Total
 1       4.698    39.149           39.149         4.317    35.972           35.972        3.169
 2       2.241    18.674           57.823         1.905    15.875           51.848        2.610
 3       1.066     8.886           66.709          .720     5.997           57.845        3.175
 4        .850     7.083           73.792          .399     3.322           61.168        3.079
 5        .620     5.167           78.959
 6        .526     4.381           83.339
 7        .436     3.636           86.975
 8        .385     3.210           90.186
 9        .331     2.762           92.947
10        .326     2.715           95.662
11        .278     2.314           97.976
12        .243     2.024          100.000

Extraction Method: Principal Axis Factoring.
(a) When factors are correlated, sums of squared loadings cannot be added to obtain a total variance.


rule to these eigenvalues, but it is appropriate to inspect a scree plot of these values,

which is shown in Table 9.9. Note that this scree plot of initial eigenvalues from the

reduced correlation matrix would not be produced by SPSS, as it produces a scree plot

of the eigenvalues associated with the standard or unreduced correlation matrix. So,

with SPSS, you need to request a scatterplot (i.e., outside of the factor analysis procedure) with the initial eigenvalues from the reduced correlation matrix appearing on

the vertical axis and the factor number on the horizontal axis. This scatterplot, shown

in Table 9.9, appears to indicate the presence of at least two factors and possibly up to

Table 9.9: Eigenvalues From Principal Axis Extraction and Parallel Analysis

Raw Data Eigenvalues From        Random Data Eigenvalues From
Reduced Correlation Matrix       Parallel Analysis
Root    Eigen.                   Root    Means        Prcntyle
 1      4.208154                  1       .377128      .464317
 2      1.790534                  2       .288660      .356453
 3       .577539                  3       .212321      .260249
 4       .295796                  4       .150293      .188854
 5       .014010                  5       .098706      .138206
 6      -.043518                  6       .046678      .082671
 7      -.064631                  7      -.006177      .031410
 8      -.087582                  8      -.050018     -.015718
 9      -.095449                  9      -.099865     -.064311
10      -.156846                 10      -.143988     -.112703
11      -.169946                 11      -.194469     -.160554
12      -.215252                 12      -.249398     -.207357

[Scree plot omitted: the eigenvalues from the reduced correlation matrix (vertical axis, roughly 5.0 down to -1.0) plotted against factor number 1-12 (horizontal axis).]


four factors, as there is a bit of a drop after the fourth factor, with the plot essentially leveling off from that point on. As mentioned previously, use of this

plot can be somewhat subjective.

A third criterion that we use to help us assess if the four-factor model is reasonable

involves parallel analysis. Recall that with use of parallel analysis a set of eigenvalues

is obtained from replicated random datasets (100 used in this example), and these

values are compared to the eigenvalues from the data set being analyzed (here, the

initial eigenvalues from the reduced matrix). Table 9.9 shows the eigenvalues from

the reduced matrix on the left side. The right side of the table shows the mean eigenvalue as well as the value at the 95th percentile for each factor from the 100 replicated

random data sets. Note that the first eigenvalue from the analyzed factor model (4.21)

is greater than the mean eigenvalue (.377), as well as the value at the 95th percentile

(.464) for the corresponding factor from parallel analysis. The same holds for factors two through four but not for factor five. Thus, use of parallel analysis supports

a four-factor solution, as the variation associated with the first four factors is greater

than variation expected by chance. (Note that it is common to obtain negative eigenvalues when the reduced correlation matrix is analyzed. The factors associated with such

eigenvalues are obviously not important.)
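A minimal sketch of this parallel analysis procedure in Python (an illustration under simplifying assumptions, not the Fabrigar and Wegener macro): eigenvalues from the reduced correlation matrix of the data are compared, factor by factor, with the 95th percentile of the corresponding eigenvalues from randomly generated data of the same dimensions.

```python
import numpy as np

def smc_reduced_eigs(R):
    """Eigenvalues of R after replacing the diagonal with squared
    multiple correlations (initial communality estimates)."""
    Rr = R.copy()
    np.fill_diagonal(Rr, 1.0 - 1.0 / np.diag(np.linalg.inv(R)))
    return np.sort(np.linalg.eigvalsh(Rr))[::-1]

def parallel_analysis(X, n_reps=100, seed=0):
    """Retain factors (from the first) whose reduced-matrix eigenvalue
    exceeds the 95th percentile of eigenvalues from random normal data
    of the same shape; stop at the first factor that fails."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = smc_reduced_eigs(np.corrcoef(X, rowvar=False))
    rand = np.array([
        smc_reduced_eigs(np.corrcoef(rng.normal(size=(n, p)), rowvar=False))
        for _ in range(n_reps)
    ])
    p95 = np.percentile(rand, 95, axis=0)
    k = 0
    for o, c in zip(obs, p95):
        if o > c:
            k += 1
        else:
            break
    return obs, rand.mean(axis=0), p95, k

# Simulated (hypothetical) data with two strong factors, 3 items each
rng = np.random.default_rng(42)
n = 300
f1, f2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack(
    [f1 + 0.6 * rng.normal(size=n) for _ in range(3)]
    + [f2 + 0.6 * rng.normal(size=n) for _ in range(3)])
obs, means, p95, k = parallel_analysis(X)
```

With data of this kind, the first two observed eigenvalues far exceed their random-data benchmarks, so at least two factors are retained.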

Before we consider the factor loadings and correlations to see if the four factors are

interpreted as hypothesized, we consider the estimated communalities, which are

shown in Table 9.10, and indicate the percent of item variance explained by the four

extracted factors. An initial communality (a best guess) is the squared multiple correlation between a given item and all other items, whereas the values in the extraction column represent the proportion of variance in each item that is due to the four extracted

factors obtained from the factor model. Inspecting the extracted communalities suggests that each item is at least moderately related to the set of factors. As such, we

would expect that each item will have reasonably high loadings on at least one factor.

Although we do not show the communalities as obtained via SAS, note that SPSS and

SAS provide identical values for the communalities. Note also, as shown in the seventh

column of Table 9.8, that the four factors explain 61% of the variance in the items.

Table 9.11 shows the pattern coefficients, structure coefficients, and the estimated correlations among the four factors. Recall that the pattern coefficients are preferred over

the structure coefficients for making factor interpretations given the use of an oblique

rotation. Inspecting the pattern coefficients and applying a value of |.40| to identify

important item-factor associations lends support to the hypothesized four-factor solution. That is, factor 1 is defined by the body items, factor 2 by the test-irrelevant thinking items, factor 3 by the worry items, and factor 4 by the tension items. Note the

inverse association among the items and factors for factors 3 and 4. For example, for

factor 3, participants scoring higher on the worry items (greater worry) have lower

scores on the factor. Thus, higher scores on factor 3 reflect reduced anxiety (less worry) related to tests, whereas higher scores on factor 2 are suggestive of greater anxiety (greater test-irrelevant thinking).
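Applying the |.40| screening rule is mechanical. The sketch below runs it over the first six rows of the pattern matrix from Table 9.11 (values taken from that table; the remaining rows behave analogously), recovering the tension and worry groupings.

```python
import numpy as np

# First six rows of the pattern matrix in Table 9.11
pattern = np.array([
    [ .022, -.027, -.003, -.783],   # TEN1
    [-.053,  .005, -.090, -.824],   # TEN2
    [ .361,  .005,  .041, -.588],   # TEN3
    [-.028, -.056, -.951,  .052],   # WOR1
    [ .028,  .071, -.659, -.049],   # WOR2
    [ .110,  .087, -.605, -.137],   # WOR3
])
items = ["TEN1", "TEN2", "TEN3", "WOR1", "WOR2", "WOR3"]

# For each factor (column), list the items whose pattern coefficient
# has magnitude at least .40
defining = {f: [items[i] for i in range(len(items))
                if abs(pattern[i, f]) >= 0.40]
            for f in range(pattern.shape[1])}
# Factor 4 (index 3) is defined by the tension items and factor 3
# (index 2) by the worry items; note TEN3's .361 on factor 1 falls
# just short of the cutoff.
```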

Table 9.10: Item Communalities for the Four-Factor Model

Communalities
          Initial   Extraction
TEN1      .532      .636
TEN2      .561      .693
TEN3      .615      .721
WOR1      .552      .795
WOR2      .482      .537
WOR3      .565      .620
IRTHK1    .520      .597
IRTHK2    .542      .638
IRTHK3    .606      .767
BODY1     .322      .389
BODY2     .337      .405
BODY3     .418      .543
Extraction Method: Principal Axis Factoring.

Table 9.11: Pattern, Structure, and Correlation Matrices From the Four-Factor Model

Pattern matrix(a)
                     Factor
            1        2        3        4
TEN1       .022    -.027    -.003    -.783
TEN2      -.053     .005    -.090    -.824
TEN3       .361     .005     .041    -.588
WOR1      -.028    -.056    -.951     .052
WOR2       .028     .071    -.659    -.049
WOR3       .110     .087    -.605    -.137
IRTHK1    -.025     .757    -.039    -.033
IRTHK2     .066     .776    -.021     .094
IRTHK3    -.038     .897     .030    -.027
BODY1      .630    -.014    -.068     .061
BODY2      .549     .094     .027    -.102
BODY3      .710    -.034    -.014    -.042
Extraction Method: Principal Axis Factoring.
Rotation Method: Oblimin with Kaiser Normalization.
(a) Rotation converged in 8 iterations.
(Continued)


Table 9.11: (Continued)

Structure matrix
                     Factor
            1        2        3        4
TEN1       .529     .047    -.345    -.797
TEN2       .532     .102    -.426    -.829
TEN3       .727     .135    -.400    -.806
WOR1       .400     .376    -.888    -.341
WOR2       .411     .390    -.728    -.362
WOR3       .527     .411    -.761    -.481
IRTHK1     .227     .771    -.394    -.097
IRTHK2     .230     .796    -.375    -.023
IRTHK3     .214     .875    -.381    -.064
BODY1      .621     .187    -.351    -.380
BODY2      .628     .242    -.337    -.457
BODY3      .735     .173    -.373    -.510
Extraction Method: Principal Axis Factoring.
Rotation Method: Oblimin with Kaiser Normalization.

Factor correlation matrix
Factor      1        2        3        4
1         1.000     .278    -.502    -.654
2          .278    1.000    -.466    -.084
3         -.502    -.466    1.000     .437
4         -.654    -.084     .437    1.000
Extraction Method: Principal Axis Factoring.
Rotation Method: Oblimin with Kaiser Normalization.

These factor interpretations are important when you examine factor correlations,

which are shown in Table 9.11. Given these interpretations, the factor correlations

seem sensible and indicate that factors are in general moderately correlated. One

exception to this pattern is the correlation between factors 2 (test irrelevant thinking)

and 4 (tension), where the correlation is near zero. We note that the near-zero correlation between test-irrelevant thinking and tension is not surprising, as other studies

have found the test-irrelevant thinking factor to be the most distinct of the four factors.

A second possible exception to the moderate correlation pattern is the fairly strong

association between factors 1 (body) and 4 (tension). This fairly high correlation might

suggest to some that these factors may not be that distinct. Recall that we made a similar observation when we examined the correlations among the observed variables, and

that applying Kaiser’s rule supported the presence of three factors.


Thus, to explore this issue further, you could estimate a factor model requesting software to extract three factors. You can then inspect the pattern coefficients and factor

correlations to determine if the three-factor solution seems meaningful. When we did

this, we found that the body and tension items loaded only on one factor, the worry

items loaded only on a second factor, and the test-irrelevant thinking items loaded only

on a third factor. Further, the factor correlations were reasonable. Thus, assuming that

it is reasonable to consider that the tension and body items reflect a single factor, there

is support for both the three- and four-factor models. In such a case, a researcher might

present both sets of results and/or offer arguments for why one solution would be

preferred over another. For example, Sarason (1984, p. 937) preferred the four-factor

model stating that it allowed for a more fine-grained analysis of test anxiety. Alternatively, such a finding, especially in the initial stages of instrument development, might

compel researchers to reexamine the tension and body items and possibly rewrite them

to make them more distinct. It is also possible that this finding is sample specific and

would not appear in a subsequent study. Further research with this scale may then be

needed to resolve this issue.

Before proceeding to the next section, we show some selected output for this same

example that is obtained from SAS software. The top part of Table 9.12 shows the

preliminary eigenvalues for the four-factor model. These values are the same as those

shown in Table 9.9, which were obtained with SPSS by use of a specialized macro.

A scree plot of these values, which can readily be obtained in SAS, would essentially be the same plot as shown in Table 9.9. Note that inspecting the eigenvalues in Table 9.12 indicates a large drop-off after factor 2 and somewhat of a drop-off after factor 4, possibly then supporting the four-factor solution. The middle part of Table 9.12 shows the

pattern coefficients for this solution. These values have the same magnitude as those

shown in Table 9.11, but note that the signs for the defining coefficients (i.e., pattern

coefficients > |.40|) here are all positive, suggesting that the signs of the coefficients are

somewhat arbitrary and indeed are done for computational convenience by software.

(In fact, within a given column of pattern or structure coefficients, you can reverse all

of the signs if that eases interpretation.) In this case, the positive signs ease factor interpretation because higher scores on each of the four anxiety components reflect greater

anxiety (i.e., greater tension, worrying, test-irrelevant thinking, and bodily symptoms).

Accordingly, all factor correlations, as shown in Table 9.12, are positive.
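This sign-reflection can be verified numerically. The sketch below (an illustration, not SAS's internal computation) reflects factors 3 and 4 of the SPSS solution; the resulting factor correlation matrix is all positive and contains, up to factor ordering, the same values as the SAS output in Table 9.12.

```python
import numpy as np

def reflect_factor(pattern, phi, j):
    """Reverse the signs of factor j: flip column j of the pattern
    matrix and row/column j of the factor correlation matrix."""
    P = pattern.copy()
    P[:, j] *= -1
    F = phi.copy()
    F[j, :] *= -1
    F[:, j] *= -1   # the diagonal entry flips twice, staying +1
    return P, F

# Factor correlations from the SPSS solution (Table 9.11)
phi = np.array([[1.000,  .278, -.502, -.654],
                [ .278, 1.000, -.466, -.084],
                [-.502, -.466, 1.000,  .437],
                [-.654, -.084,  .437, 1.000]])
pattern = np.zeros((12, 4))  # placeholder; only phi matters here

P, F = reflect_factor(pattern, phi, 2)  # reflect factor 3 (worry)
P, F = reflect_factor(P, F, 3)          # reflect factor 4 (tension)
# F is now all positive, matching the SAS correlations in Table 9.12
# (same values, with the factors ordered differently by SAS).
```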

9.13 FACTOR SCORES

In some research situations, you may, after you have achieved a meaningful factor

solution, wish to estimate factor scores, which are considered as estimates of the true

underlying factor scores, to use for subsequent analyses. Factor scores can be used as

predictors in a regression analysis, dependent variables in a MANOVA or ANOVA, and

so on. For example, after arriving at the four-factor model, Sarason (1984) obtained

scale scores and computed correlations for each of the four subscales of the RTT questionnaire (discussed in section 9.12) and a measure of “cognitive interference” in order


Table 9.12: Selected SAS Output for the Four-Factor Model

Preliminary Eigenvalues: Total = 6.0528105 Average = 0.50440088

      Eigenvalue     Difference     Proportion   Cumulative
 1    4.20815429     2.41762010      0.6952       0.6952
 2    1.79053420     1.21299481      0.2958       0.9911
 3    0.57753939     0.28174300      0.0954       1.0865
 4    0.29579639     0.28178614      0.0489       1.1353
 5    0.01401025     0.05752851      0.0023       1.1377
 6    -.04351826     0.02111277     -0.0072       1.1305
 7    -.06463103     0.02295100     -0.0107       1.1198
 8    -.08758203     0.00786743     -0.0145       1.1053
 9    -.09544946     0.06139632     -0.0158       1.0896
10    -.15684578     0.01309984     -0.0259       1.0636
11    -.16994562     0.04530621     -0.0281       1.0356
12    -.21525183                    -0.0356       1.0000

Rotated Factor Pattern (Standardized Regression Coefficients)

         Factor1     Factor2     Factor3     Factor4
TEN1    -0.02649     0.00338     0.78325     0.02214
TEN2     0.00510     0.09039     0.82371    -0.05340
TEN3     0.00509    -0.04110     0.58802     0.36089
WOR1    -0.05582     0.95101    -0.05231    -0.02737
WOR2     0.07103     0.65931     0.04864     0.02843
WOR3     0.08702     0.60490     0.13716     0.10978
TIRT1    0.75715     0.03878     0.03331    -0.02514
TIRT2    0.77575     0.02069    -0.09425     0.06618
TIRT3    0.89694    -0.02998     0.02734    -0.03847
BODY1   -0.01432     0.06826    -0.06017     0.62960
BODY2    0.09430    -0.02697     0.10260     0.54821
BODY3   -0.03364     0.01416     0.04294     0.70957

Inter-Factor Correlations

          Factor1     Factor2     Factor3     Factor4
Factor1   1.00000     0.46647     0.08405     0.27760
Factor2   0.46647     1.00000     0.43746     0.50162
Factor3   0.08405     0.43746     1.00000     0.65411
Factor4   0.27760     0.50162     0.65411     1.00000


to obtain additional evidence for the validity of the scales. Sarason hypothesized (and

found) that the worry subscale of the RTT is most highly correlated with cognitive

interference.

While several different methods of estimating factor scores are available, two are commonly

used. One method is to estimate factor scores using a regression method. In this method,

regression weights (not the factor loadings) are obtained and factor scores are created by

multiplying each weight by the respective observed variable, which is in z-score form.

For example, for the six work-related variables that appeared in section 9.7, Table 9.13

shows the regression weights that are obtained when you use principal axis factoring

(weights can also be obtained for the principal components method). With these weights,

scores for the first factor (engagement) are formed as follows: Engagement = .028 × z_stimulate + .000 × z_challenge + .001 × z_interest + .329 × z_recognize + .463 × z_appreciate + .251 × z_compensate.

Scores for the second factor are computed in a similar way by using the weights in the

next column of that table. Note that SPSS and SAS can do these calculations for you and

place the factor scores in your data set (so no manual calculation is required).
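As an illustration of the regression method (the data here are hypothetical; SPSS and SAS perform this computation for you), the factor 1 weights from Table 9.13 can be applied as follows.

```python
import numpy as np

# Factor 1 (engagement) regression weights from Table 9.13, in the
# order stimulate, challenge, interest, recognize, appreciate, compensate
weights = np.array([0.028, 0.000, 0.001, 0.329, 0.463, 0.251])

def factor_scores(X, w):
    """X is an n x p raw data matrix. Each variable is placed in z-score
    form first, and the factor score is the weighted sum of z-scores,
    mirroring the regression-method formula in the text."""
    X = np.asarray(X, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # z-score columns
    return Z @ w

# Hypothetical responses for 50 people on the six variables
rng = np.random.default_rng(0)
X = rng.normal(loc=3.0, scale=1.0, size=(50, 6))
scores = factor_scores(X, weights)
```

Because the scores are weighted sums of z-scored variables, they have mean zero in the sample.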

A second, and simpler, method to estimate factor scores, one especially relevant for scale construction, is to sum or average scores across the observed variables that load highly

on a given factor as observed in the pattern matrix. This method is known as unit

weighting because values of 1 are used to weight important variables as opposed to the

exact regression weights used in the foregoing procedure. To illustrate, consider estimating factor or scale scores for the RTT example in section 9.12. For the first factor,

inspecting the pattern coefficients in Table 9.11 indicated that only the bodily symptom

items (Body1, Body2, Body3) are strongly related to that factor. Thus, scores for this

factor can be estimated as Bodily Symptoms = 1 × Body1 + 1 × Body2 + 1 × Body3,

which, of course, is the same thing as summing across the three items. When variables

Table 9.13: Factor Score Regression Weights for the Six Work-Related Variables

Factor score coefficient matrix
                   Factor
                1        2
Stimulate     .028     .338
Challenge     .000     .431
Interest      .001     .278
Recognize     .329     .021
Appreciate    .463     .006
Compensate    .251     .005
Extraction Method: Principal Axis Factoring.
Rotation Method: Oblimin with Kaiser Normalization.
Factor Scores Method: Regression.


are on the same scale, as they often are in an instrument development context, averaging the scores can be used as well. Averaging is appealing because it keeps the factor scores on the same scale as the observed item scores, which aids interpretation. Note that if the observed variables are not on the same scale, the

observed variables could first be placed in z-score form and then summed (or averaged).

Note that for some factors the defining coefficients may all be negative. In this case,

negative signs can be used to obtain factor scores. For example, scores for factor 4 in

the RTT example, given the signs of the pattern coefficients in Table 9.11, can be computed as Tension = −1 × Ten1 − 1 × Ten2 − 1 × Ten3. However, scale scores in this case

will be negative, which is probably not desired. Given that the coefficients are each

negative, here, a more sensible alternative is to simply sum scores across the tension

items ignoring the negative signs. (Remember that it is appropriate to change signs of

the factor loadings within a given column provided that all signs are changed within

that column.) When that is done, higher scale scores reflect greater tension, which is

consistent with the item scores (and with the output produced by SAS in Table 9.12).

Be aware that the signs of the correlations between this factor and all other factors will

be reversed. For example, in Table 9.11, the correlation between factors 1 (body) and 4

(tension) is negative. If scores for the tension items were simply summed or averaged

(as recommended here), this correlation of scale scores would then be positive, indicating that those reporting greater bodily symptoms also report greater tension. For this

example, then, summing or averaging raw scores across items for each subscale would

produce all positive correlations between factors, as likely desired.
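A brief sketch of unit weighting with hypothetical Likert responses (the RTT response scale is 1-4) for two of the RTT subscales:

```python
import numpy as np

# Hypothetical 1-4 Likert responses for 10 people on six items:
# columns 0-2 are Body1-Body3, columns 3-5 are Ten1-Ten3.
rng = np.random.default_rng(7)
items = rng.integers(1, 5, size=(10, 6)).astype(float)

# Unit-weighted scale scores: average the raw item scores within each
# subscale (the items loading >= |.40| on that factor), ignoring the
# arbitrary signs of the loadings, so higher scores mean more of the
# construct.
body = items[:, :3].mean(axis=1)     # Body1-Body3
tension = items[:, 3:].mean(axis=1)  # Ten1-Ten3
```

Because the scores are averages of 1-4 responses, they stay on the original item scale, which is what makes averaging attractive for interpretation.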

You should know that use of different methods to estimate factor scores, including

these two methods, will not produce the same factor scores (which is referred to as factor score indeterminacy), although such scores may be highly correlated. Also, when

factor scores are estimated, the magnitude of the factor correlations as obtained in the

factor analysis (i.e., like those in Table 9.11) will not be the same as those obtained

if you were to compute factor scores and then compute correlations associated with

these estimated scores. One advantage with regression weighting is that its use maximizes the correlation between the underlying factors and the estimated factor scores.

However, use of regression weights to produce factor scores, while optimal for the sample at hand, does not tend to hold up well in independent samples. As such, simple

unit weighting is often recommended, and studies examining the performance of unit

weighting support its use in estimating factor scores (Fava & Velicer, 1992; Grice,

2001; Nunnally, 1978). Note also that this issue does not arise with the principal components method. That is, with principal components extraction and when the regression method is used to obtain factor (or component) scores, the obtained factor score

correlations will match those found via the use of principal components extraction.

9.14 USING SPSS IN FACTOR ANALYSIS

This section presents SPSS syntax that can be used to conduct a factor analysis with

principal axis extraction. Note that SPSS can use raw data or just the correlations


obtained from raw data to implement a factor analysis. Typically, you will have raw

data available and will use that in conducting a factor analysis. Table 9.14 shows syntax needed assuming you are using raw data, and Table 9.15 shows syntax that can be used when you only have a correlation matrix available. Syntax in Table 9.15 will allow you to duplicate analysis results presented in section 9.12. Section 9.15 presents

the corresponding SAS syntax.

The first set of syntax in Table 9.14 (under Using Kaiser's Rule) can be used when you wish to apply

Kaiser’s rule to help determine the number of factors present. Note that in the second line of the syntax, where the VARIABLES subcommand appears, you simply

list the observed variables from your study (generically listed here as var1, var2,

and so on). For the ANALYSIS subcommand in the next line, you can list the

same variables again or, as shown here, list the first variable name followed by the

word “to” and then the last variable name, assuming that the variables 1–6 appear

in that order in your data set. Further, SPSS will apply Kaiser’s rule to eigenvalues obtained from the unreduced correlation matrix when you specify MINEIGEN(1) after the CRITERIA subcommand in line 5 of the code. As discussed in

sections 9.11 and 9.12, you must use supplemental syntax to obtain eigenvalues

from the reduced correlation matrix, which is desirable when using principal axis

factoring.

In the next line, following the EXTRACTION subcommand, PAF requests SPSS

to use principal axis factoring. (If PC were used instead of PAF, all analysis results

would be based on the use of principal components extraction). After the ROTATION

command, OBLIMIN requests SPSS to use the oblique rotation procedure known as

direct quartimin. Note that replacing OBLIMIN with VARIMAX would direct SPSS

to use the orthogonal rotation varimax. The SAVE subcommand is an optional line that

requests the estimation of factor scores using the regression procedure discussed in

Table 9.14: SPSS Syntax for Factor Analysis With Principal Axis Extraction Using Raw Data

Using Kaiser's Rule

FACTOR
/VARIABLES var1 var2 var3 var4 var5 var6
/ANALYSIS var1 to var6
/PRINT INITIAL EXTRACTION ROTATION
/CRITERIA MINEIGEN(1) ITERATE(25)
/EXTRACTION PAF
/CRITERIA ITERATE(25)
/ROTATION OBLIMIN
/SAVE REG(ALL)
/METHOD=CORRELATION.

Requesting Specific Number of Factors

FACTOR
/VARIABLES var1 var2 var3 var4 var5 var6
/ANALYSIS var1 to var6
/PRINT INITIAL EXTRACTION ROTATION
/CRITERIA FACTORS(2) ITERATE(25)
/EXTRACTION PAF
/CRITERIA ITERATE(25)
/ROTATION OBLIMIN
/SAVE REG(ALL)
/METHOD=CORRELATION.

377

378

â†œæ¸€å±®

â†œæ¸€å±®

Exploratory Factor Analysis

sectionÂ€9.13. The last line directs SPSS to use correlations (as opposed to covariances)

in conducting the factor analysis.
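The choice between the two matters because a correlation matrix is simply a covariance matrix rescaled so that every variable has variance 1, with each covariance divided by the product of the two standard deviations. A small sketch of that rescaling (the covariance values are made up for illustration):

```python
import math

def cov_to_corr(cov):
    """Rescale a covariance matrix to a correlation matrix:
    r_ij = s_ij / (sd_i * sd_j)."""
    n = len(cov)
    sds = [math.sqrt(cov[i][i]) for i in range(n)]
    return [[cov[i][j] / (sds[i] * sds[j]) for j in range(n)]
            for i in range(n)]

# Hypothetical covariances between two items (illustration only)
cov = [[4.0, 3.0],
       [3.0, 9.0]]
corr = cov_to_corr(cov)  # diagonal becomes 1.0; off-diagonal 3/(2*3) = 0.5
```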

The right side of Table 9.14 shows syntax that can be used when you wish to extract a specific number of factors (as done in section 9.12) instead of relying, for example, on Kaiser's rule to determine the number of factors to be extracted. Note that the syntax on the right side of the table is identical to that listed on the left except for line 5 of the syntax. Here, the previously used MINEIGEN(1) has been replaced with the statement FACTORS(2). The FACTORS(2) statement in line 5 directs SPSS to extract two factors, regardless of their eigenvalues. FACTORS(3) would direct SPSS to extract three factors, and so on. Note that neither set of syntax requests a scree plot of eigenvalues. As described, with SPSS, such a plot would use eigenvalues from the unreduced correlation matrix. When principal axis factoring is used, it is generally preferred to obtain a plot of the eigenvalues from the reduced correlation matrix.
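The difference between the two matrices can be made concrete with a toy example. For a 3 x 3 matrix with a common value d on the diagonal and a common correlation r everywhere else, the eigenvalues have a closed form (d + 2r, and d - r twice), so the sketch below needs no general eigensolver. The matrix is made up for illustration and is not the book's data:

```python
def equicorr_eigenvalues(diag, r):
    """Eigenvalues of a 3x3 matrix with `diag` on the diagonal and r
    everywhere else: diag + 2r, and diag - r (multiplicity 2)."""
    return [diag + 2 * r, diag - r, diag - r]

r = 0.5
# Unreduced correlation matrix: 1s on the diagonal
unreduced = equicorr_eigenvalues(1.0, r)   # [2.0, 0.5, 0.5]
# Reduced matrix: diagonal replaced by communality estimates. For this
# equicorrelation structure, the squared multiple correlation (SMC) of
# each variable with the other two is 2*r**2 / (1 + r).
smc = 2 * r**2 / (1 + r)                   # = 1/3
reduced = equicorr_eigenvalues(smc, r)     # [4/3, -1/6, -1/6]
```

Note that the reduced matrix can have negative eigenvalues, which is one reason a scree plot based on the reduced matrix looks different from the default SPSS plot.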

Table 9.15 shows SPSS syntax that was used to obtain the four-factor solution presented in section 9.12.2. The first line is an optional title line. The second line, following the required phrase MATRIX DATA VARIABLES=, lists the 12 observed variables used in the analysis. The phrase N_SCALAR CORR, after the CONTENTS subcommand, informs SPSS that a correlation matrix is used as input and that the sample size (N) will be specified prior to the correlation matrix. After the BEGIN DATA command, you must indicate the sample size for your data (here, 318), and then input the correlation matrix. After the END DATA code just below the correlation matrix, FACTOR MATRIX IN(COR=*) informs SPSS that the factor analysis will use as data the correlation matrix entered earlier. Note that PAF is the extraction method requested, and that SPSS will extract four factors regardless of the size of the eigenvalues. Also, note that the direct quartimin rotation procedure is requested, given that the factors are hypothesized to be correlated.

9.15 USING SAS IN FACTOR ANALYSIS

This section presents SAS syntax that can be used to conduct factor analysis using

principal axis extraction. Like SPSS, SAS can implement a factor analysis with raw

data or using just the correlations obtained from raw data. Table 9.16 shows syntax assuming you are using raw data, and Table 9.17 shows syntax that can be used when you only have a correlation matrix available. Syntax in Table 9.17 will allow you to duplicate analysis results presented in section 9.12.

The left side of Table 9.16 shows syntax that can be used when you wish to apply Kaiser's rule to help determine the number of factors present. The syntax assumes that the data set named my_data is the active data set in SAS. The first line initiates the factor analysis procedure in SAS, where you must indicate the data set that is being used (simply called my_data here). The second line of the syntax directs SAS to apply Kaiser's rule to extract factors having an eigenvalue larger than 1, and the code METHOD=prin with PRIORS=one directs SAS to use principal components extraction with values of 1 on the diagonal of the correlation matrix. Thus, this line requests SAS to use the unreduced correlation matrix and extract any factors whose corresponding eigenvalue is greater than 1. If you wanted to use principal components extraction only for the purpose of applying Kaiser's rule (as is done in section 9.12), you would disregard output from this analysis except for the initial eigenvalues.

Table 9.15: SPSS Syntax for Factor Analysis With Principal Axis Extraction Using Correlation Input

TITLE PAF WITH REACTIONS-TO-TEST DATA.
MATRIX DATA VARIABLES=TEN1 TEN2 TEN3 WOR1 WOR2 WOR3 IRTHK1 IRTHK2 IRTHK3 BODY1 BODY2 BODY3
/CONTENTS=N_SCALAR CORR.
BEGIN DATA.
318
1.0
.6568918 1.0
.6521357 .6596083 1.0
.2793569 .3381683 .3001235 1.0
.2904172 .3298699 .3498588 .6440856 1.0
.3582053 .4621011 .4395827 .6592054 .5655221 1.0
.0759346 .0926851 .1199888 .3173348 .3125095 .3670677 1.0
.0033904 .0347001 .0969222 .3080404 .3054771 .3286743 .6118786 1.0
.0261352 .1002678 .0967460 .3048249 .3388673 .3130671 .6735513 .6951704 1.0
.2866741 .3120085 .4591803 .2706903 .3068059 .3512327 .1221421 .1374586 .1854188 1.0
.3547974 .3772598 .4888384 .2609631 .2767733 .3692361 .1955429 .1913100 .1966969 .3665290 1.0
.4409109 .4144444 .5217488 .3203353 .2749568 .3834183 .1703754 .1557804 .1011165 .4602662 .4760684 1.0
END DATA.
FACTOR MATRIX IN(COR=*)
/PRINT INITIAL EXTRACTION CORRELATION ROTATION
/CRITERIA FACTORS(4) ITERATE(25)
/EXTRACTION PAF
/CRITERIA ITERATE(25)
/ROTATION OBLIMIN.

Table 9.16: SAS Syntax for Factor Analysis Using Raw Data

Using Kaiser's Rule With PC

PROC FACTOR DATA = my_data
MINEIGEN = 1.0 METHOD=prin
PRIORS=one ROTATE=oblimin;
VAR var1 var2 var3 var4 var5 var6;
RUN;

Requesting Specific Number of Factors With PAF

PROC FACTOR DATA = my_data
PRIORS=smc NFACTORS=2 METHOD=prinit
ROTATE=oblimin SCREE SCORE OUTSTAT=fact;
VAR var1 var2 var3 var4 var5 var6;
RUN;
PROC SCORE DATA = my_data SCORE = fact
OUT=scores;
RUN;

To complete the explanation of the syntax, ROTATE=oblimin directs SAS to use the

oblique rotation method direct quartimin. The orthogonal rotation method varimax

can be implemented by replacing the ROTATE=oblimin with ROTATE=varimax.

After the VAR command on the fourth line, you must indicate the variables being used in the analysis, which are generically named here var1, var2, and so on. RUN requests

the procedure to be implemented.

The right side of Table 9.16 shows syntax that uses principal axis factoring, requests a specific number of factors be extracted (as done in section 9.12), and

obtains factor scores (which are optional). The code also requests a scree plot

of the eigenvalues using the reduced correlation matrix. The second line of the

code directs SAS to use, initially, squared multiple correlations (smc) as estimates of variable communalities (instead of the 1s used in principal components

extraction). Also, NFACTORS=2 in this same line directs SAS to extract two

factors while METHOD=prinit directs SAS to use an iterative method to obtain

parameter estimates (which is done by default in SPSS). Thus, use of the code

PRIORS=smc and METHOD=prinit requests an iterative principal axis factoring

solution (identical to that used by SPSS earlier). Again, the ROTATE=oblimin

directs SAS to employ the direct quartimin rotation, and SCREE instructs SAS

to produce a scree plot of the eigenvalues from the reduced correlation matrix

(unlike SPSS, which would provide a SCREE plot of the eigenvalues associated

with the unreduced correlation matrix). The SCORE OUTSTAT code is optional and requests that the factor score coefficients, used in creating factor scores (as well as other output), be placed in a file called here fact. Following the variable line, the next couple of lines are optional and are used to create factor scores using the regression method and to have these scores placed in a data file called here scores. RUN executes the code.

Table 9.17: SAS Syntax for Factor Analysis With Principal Axis Extraction Using Correlation Input

TITLE 'paf with reaction to tests scale';
DATA rtsitems (TYPE=CORR);
INFILE CARDS MISSOVER;
_TYPE_ = 'CORR';
INPUT _NAME_ $ ten1 ten2 ten3 wor1 wor2 wor3 tirt1 tirt2 tirt3 body1 body2 body3;
DATALINES;
ten1 1.0
ten2 .6568918 1.0
ten3 .6521357 .6596083 1.0
wor1 .2793569 .3381683 .3001235 1.0
wor2 .2904172 .3298699 .3498588 .6440856 1.0
wor3 .3582053 .4621011 .4395827 .6592054 .5655221 1.0
tirt1 .0759346 .0926851 .1199888 .3173348 .3125095 .3670677 1.0
tirt2 .0033904 .0347001 .0969222 .3080404 .3054771 .3286743 .6118786 1.0
tirt3 .0261352 .1002678 .0967460 .3048249 .3388673 .3130671 .6735513 .6951704 1.0
body1 .2866741 .3120085 .4591803 .2706903 .3068059 .3512327 .1221421 .1374586 .1854188 1.0
body2 .3547974 .3772598 .4888384 .2609631 .2767733 .3692361 .1955429 .1913100 .1966969 .3665290 1.0
body3 .4409109 .4144444 .5217488 .3203353 .2749568 .3834183 .1703754 .1557804 .1011165 .4602662 .4760684 1.0
;
PROC FACTOR PRIORS=smc NFACTORS=4 METHOD=prinit ROTATE=oblimin NOBS=318 SCREE;
RUN;
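The iterative principal axis idea behind METHOD=prinit can be sketched for the simplest possible case: one factor for three equally correlated variables. For that structure, the largest eigenvalue of the reduced matrix is h + 2r with a uniform eigenvector, so each communality update has a closed form. This is a toy illustration of the principle, not SAS's actual implementation:

```python
def iterative_paf_equicorr(r, tol=1e-10, max_iter=200):
    """Toy iterative principal axis factoring for a 3x3 equicorrelation
    matrix (all off-diagonal correlations equal to r), one factor.
    For this structure, the largest eigenvalue of the reduced matrix is
    h + 2r with eigenvector (1, 1, 1)/sqrt(3), so the loading on each
    variable is sqrt((h + 2r)/3) and the updated communality is the
    squared loading, (h + 2r)/3."""
    h = 2 * r**2 / (1 + r)  # start from the SMC, as PRIORS=smc does
    for _ in range(max_iter):
        h_new = (h + 2 * r) / 3  # updated communality estimate
        if abs(h_new - h) < tol:
            break
        h = h_new
    return h

h = iterative_paf_equicorr(0.5)  # converges to h = r = 0.5
```

The iteration converges to h = r, which is exactly the communality implied by a one-factor model with equal loadings of sqrt(r), since sqrt(r) * sqrt(r) = r reproduces every off-diagonal correlation.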

Table 9.17 shows SAS syntax that was used to obtain the four-factor solution presented in section 9.12.2. Line 1 is an optional title line. In lines 2–4, all of the code shown is required when you are using a correlation matrix as input; the only user option is to name the data set (here called rtsitems). In line 5, the required elements include everything up to and including the dollar sign ($). The remainder of that line includes the variable names that appear in the study. After the DATALINES command, which is required, you then provide the correlation matrix with the variable names placed in the first column. After the correlation matrix is entered, the rest of the code is similar to that used when raw data are input. The exception is NOBS=318, which you use to tell SAS how large the sample size is; for this example, the number of observations (NOBS) is 318.

9.16 EXPLORATORY AND CONFIRMATORY FACTOR ANALYSIS

This chapter has focused on exploratory factor analysis (EFA). The purpose of EFA is

to identify the factor structure for a set of variables. This often involves determining

how many factors exist, as well as the pattern of the factor loadings. Although most

EFA programs allow for the number of factors to be specified in advance, it is not possible in these programs to force variables to load only on certain factors. EFA is generally considered to be more of a theory-generating than a theory-testing procedure. In

contrast, confirmatory factor analysis (CFA), which is covered in Chapter 16, is generally based on a strong theoretical or empirical foundation that allows you to specify

an exact factor model in advance. This model usually specifies which variables will

load on which factors, as well as such things as which factors are correlated. It is more

of a theory-testing procedure than is EFA. Although, in practice, studies may contain aspects of both exploratory and confirmatory analyses, it is useful to distinguish

between the two techniques in terms of the situations in which they are commonly

used. Table 9.18 displays some of the general differences between the two approaches.

Let us consider an example of an EFA. Suppose a researcher is developing a scale to

measure self-concept. The researcher does not conceptualize specific self-concept factors in advance and simply writes a variety of items designed to tap into various aspects

of self-concept. An EFA of these items may yield three factors that the researcher then

identifies as physical, social, and academic self-concept. The researcher notes that items with large loadings on one of the three factors tend to have very small loadings on the other two, and interprets this as further support for the presence of three distinct underlying dimensions of self-concept. In the early stages of scale development, when the factor structure has not yet been established, EFA is often considered to be the better choice.


Table 9.18: Comparison of Exploratory and Confirmatory Factor Analysis

Exploratory—theory generating              Confirmatory—theory testing
Heuristic—weak literature base             Strong theory or strong empirical base
Determine the number of factors            Number of factors fixed a priori
Determine whether the factors are          Factors fixed a priori as correlated
  correlated or uncorrelated                 or uncorrelated
Variables free to load on all factors      Variables fixed to load on a specific
                                             factor or factors

Continuing the scale development example, as researchers continue to work with this scale in future research, CFA becomes a viable option. With CFA, you can specify not only the number of factors hypothesized to be present (e.g., the three self-concept factors) but also which items belong to a given dimension. This latter option is not possible in EFA. In addition, CFA is part of a broader modeling framework known as structural equation modeling (SEM), which allows for the estimation of more sophisticated

models. For example, the self-concept dimensions in SEM could serve as predictors,

dependent variables, or intervening variables in a larger analysis model. As noted in

Fabrigar and Wegener (2012), the associations between these dimensions and other

variables can then be obtained in SEM without computing factor scores.

9.17 EXAMPLE RESULTS SECTION FOR EFA OF REACTIONS-TO-TESTS SCALE

The following results section is based on the example that appeared in section 9.12. In that analysis, 12 items measuring test anxiety were administered to college students at a major research university, with 318 respondents completing all items. Note that most of the next paragraph would probably appear in the method section of a paper.

The goal of this study was to identify the dimensions of test anxiety, as measured by the newly developed RTT measure. EFA using principal axis factoring (PAF) was used for this purpose with the oblique rotation method direct quartimin. To determine the number of factors present, we considered several criteria. These include the number of factors that (1) had eigenvalues greater than 1 when the unreduced correlation matrix was used (i.e., with 1s on the diagonal of the matrix), (2) were suggested by inspecting a scree plot of eigenvalues from the reduced correlation matrix (with estimates of communalities in the diagonal of the correlation matrix), which is consistent with PAF, (3) had eigenvalues larger than expected by chance as obtained via parallel analysis, and (4) were conceptually coherent when all factor analysis results were examined. The 12 items on the scale represented possible dimensions of anxiety as suggested in the relevant literature. Three items were written for each of the hypothesized dimensions, which represented tension, worry, test-irrelevant thinking, and bodily symptoms. For each item, a 4-point response scale was used (from "not typical of me" to "very typical of me").


Table 1 reports the correlations among the 12 items. (Note that we list generic item names here. You should provide some descriptive information about the content of the items or perhaps list each of the items, if possible.) In general, inspecting the correlations appears to provide support for the four hypothesized dimensions, as correlations are mostly greater within each dimension than across the assumed dimensions. Examination of Mahalanobis distance values, variance inflation factors, and histograms associated with the items did not suggest the presence of outlying values or excessive multicollinearity. Also, scores for most items were roughly symmetrically distributed.
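A univariate screen of the kind described here can be sketched in a few lines of Python (the scores below are hypothetical, with one deliberately implausible value standing in for a data-entry error; the actual RTT item scores are not reproduced in this chapter):

```python
import statistics

def flag_outliers(scores, cutoff=3.0):
    """Flag values whose standardized score |z| exceeds the cutoff
    (e.g., 2.5 or 3), as a first pass at outlier detection."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return [x for x in scores if abs((x - mean) / sd) > cutoff]

# Hypothetical item scores on a 4-point scale, plus one entry error (44)
scores = [1, 2, 2, 3, 2, 4, 3, 2, 1, 3, 2, 44]
outliers = flag_outliers(scores)  # flags the implausible 44
```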

Table 1: Item Correlations (N = 318)

        1     2     3     4     5     6     7     8     9     10    11    12
Ten1   1.000
Ten2    .657 1.000
Ten3    .652  .660 1.000
Wor1    .279  .338  .300 1.000
Wor2    .290  .330  .350  .644 1.000
Wor3    .358  .462  .440  .659  .566 1.000
Tirt1   .076  .093  .120  .317  .313  .367 1.000
Tirt2   .003  .035  .097  .308  .305  .329  .612 1.000
Tirt3   .026  .100  .097  .305  .339  .313  .674  .695 1.000
Body1   .287  .312  .459  .271  .307  .351  .122  .137  .185 1.000
Body2   .355  .377  .489  .261  .277  .369  .196  .191  .197  .367 1.000
Body3   .441  .414  .522  .320  .275  .383  .170  .156  .101  .460  .476 1.000

To initiate the exploratory factor analysis, we requested a four-factor solution, given that we selected items from four possibly distinct dimensions. While application of Kaiser's rule (to eigenvalues from the unreduced correlation matrix) suggested the presence of three factors, parallel analysis indicated four factors, and inspecting the scree

plot suggested the possibility of four factors. Given that these criteria differed on the

number of possible factors present, we also examined the results from a three-factor

solution, but found that the four-factor solution was more meaningful.

Table 2 shows the communalities, pattern coefficients, and sums of squared loadings for each factor, all of which are shown after factor rotation. The communalities range from .39 to .80, suggesting that each item is at least moderately and in some cases strongly related to the set of factors. Inspecting the pattern coefficients shown in Table 2 and using a magnitude of at least .4 to indicate a nontrivial pattern coefficient, we found that the test-irrelevant thinking items load only on factor 1, the worry items load only on factor 2, the tension items load only on factor 3, and the bodily symptom items load only on factor 4. Thus, there is support that the items thought to be reflective of the same factor are related only to the hypothesized factor. The sums of squared loadings suggest that the factors are fairly similar in importance. As a whole, the four factors explained 61% of the variation in the item scores. Further, the factors, as expected, are positively and mostly moderately correlated, as indicated in Table 3. In sum, the factor analysis provides support for the four hypothesized dimensions underlying test anxiety.


Table 2: Selected Factor Analysis Results for the Reactions-to-Tests Scale

                               Factors
Item        Test-irrelevant   Worry    Tension   Bodily      Communality
            thinking                             symptoms
Tension1        −0.03          0.00      0.78      0.02         0.64
Tension2         0.01          0.09      0.82     −0.05         0.69
Tension3         0.01         −0.04      0.59      0.36         0.72
Worry1          −0.06          0.95     −0.05     −0.03         0.80
Worry2           0.07          0.66      0.05      0.03         0.54
Worry3           0.09          0.60      0.14      0.11         0.62
TIRT1            0.76          0.04      0.03     −0.03         0.60
TIRT2            0.78          0.02     −0.09      0.07         0.64
TIRT3            0.90         −0.03      0.03     −0.04         0.77
Body1           −0.01          0.07     −0.06      0.63         0.39
Body2            0.09         −0.03      0.10      0.55         0.41
Body3           −0.03          0.01      0.04      0.71         0.54
Sum of squared
loadings         2.61          3.17      3.08      3.17

TIRT = test-irrelevant thinking.

Table 3: Factor Correlations

                             1      2      3      4
1 Test-irrelevant thinking   1.00
2 Worry                      0.47   1.00
3 Tension                    0.08   0.44   1.00
4 Bodily symptoms            0.28   0.50   0.65   1.00
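For an obliquely rotated solution, a communality in Table 2 is not the simple sum of an item's squared pattern coefficients; it is the quadratic form h² = λ′Φλ, where λ holds the item's pattern coefficients and Φ is the factor correlation matrix. As a rough check using the rounded values reported above for the Worry1 item (and reading the factor correlations as arranged in Table 3), the result comes out close to the reported communality of .80:

```python
def communality(loadings, phi):
    """h^2 = lambda' * Phi * lambda for an obliquely rotated solution."""
    n = len(loadings)
    return sum(loadings[i] * phi[i][j] * loadings[j]
               for i in range(n) for j in range(n))

# Pattern coefficients for Worry1 (TIRT, Worry, Tension, Body), Table 2
lam = [-0.06, 0.95, -0.05, -0.03]
# Factor correlations as read from Table 3 (rounded to two decimals)
phi = [[1.00, 0.47, 0.08, 0.28],
       [0.47, 1.00, 0.44, 0.50],
       [0.08, 0.44, 1.00, 0.65],
       [0.28, 0.50, 0.65, 1.00]]
h2 = communality(lam, phi)  # roughly 0.79, near the reported .80
```

The small gap from the reported value reflects the rounding of the tabled coefficients to two decimals.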

9.18 SUMMARY

Exploratory factor analysis can be used when you assume that latent variables underlie responses to observed variables and you wish to find a relatively small number of

underlying factors that account for relationships among the larger set of variables. The

procedure can help obtain new theoretical constructs and/or provide initial validation

for the items on a measuring instrument. Scores for the observed variables should be

at least moderately correlated and have an approximate interval level of measurement.

Further, unless communalities are expected to be generally high (> .7), a minimal sample size of 200 should be used. The key analysis steps are highlighted next.

I. Preliminary Analysis

A. Conduct case analysis.

1) Purpose: Identify any problematic individual observations and determine

if scores appear to be reasonable.


2) Procedure:

i) Inspect the distribution of each observed variable (e.g., via histograms) and identify apparent outliers. Scatterplots may also be inspected to examine linearity and bivariate outliers. Examine descriptive statistics (e.g., means, standard deviations) for each variable to assess whether the scores appear to be reasonable for the sample at hand.
ii) Inspect the z-scores for each variable, with absolute values larger than perhaps 2.5 or 3, along with a judgment that a given value is distinct from the bulk of the scores, indicating an outlying value. Examine Mahalanobis distance values to identify multivariate outliers.

iii) If any potential outliers or score abnormalities are identified, check

for data entry errors. If needed, conduct a sensitivity study to determine the impact of one or more outliers on major study results.

Consider use of variable transformations or case removal to attempt

to minimize the effects of one or more outliers.

B. Check to see that data are suitable for factor analysis.

1) Purpose: Determine if the data support the use of exploratory factor analysis. Also, identify the presence and pattern (if any) of missingÂ€data.

2) Procedure: Compute and inspect the correlation matrix for the observed

variables to make sure that correlations (e