The SAGE Encyclopedia of Communication Research Methods [1 ed.] 9781483381435

Communication research is evolving and changing in a world of online journals, open-access, and new ways of obtaining data.

English | 2,064 pages | 2017


Table of contents :
THE SAGE ENCYCLOPEDIA OF COMMUNICATION RESEARCH METHODS- FRONT COVER
THE SAGE ENCYCLOPEDIA OF COMMUNICATION RESEARCH METHODS
COPYRIGHT
CONTENTS
LIST OF ENTRIES
READER’S GUIDE
ABOUT THE EDITOR
CONTRIBUTORS
INTRODUCTION
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
Z
APPENDIX A: MODERN HISTORY OF THE DISCIPLINE OF COMMUNICATION
APPENDIX B: RESOURCE GUIDE
INDEX


The SAGE Encyclopedia of Communication Research Methods

Editorial Board

Editor
Mike Allen, University of Wisconsin–Milwaukee

Editorial Board
Nancy A. Burrell, University of Wisconsin–Milwaukee
Mark Hamilton, University of Connecticut
Dale Hample, University of Maryland
Raymond Preiss, University of Maryland–Overseas

The SAGE Encyclopedia of Communication Research Methods

Edited by Mike Allen, University of Wisconsin–Milwaukee

Volume 1

FOR INFORMATION:

Copyright © 2017 by SAGE Publications, Inc.

SAGE Publications, Inc.
2455 Teller Road
Thousand Oaks, California 91320
E-mail: [email protected]

SAGE Publications Ltd.
1 Oliver’s Yard
55 City Road
London EC1Y 1SP
United Kingdom

SAGE Publications India Pvt. Ltd.
B 1/I 1 Mohan Cooperative Industrial Area
Mathura Road, New Delhi 110 044
India

SAGE Publications Asia-Pacific Pte. Ltd.
3 Church Street
#10-04 Samsung Hub
Singapore 049483

All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.

Printed in the United States of America

ISBN 978-1-4833-8143-5

Acquisitions Editor: Andrew Boney
Developmental Editor: Carole Maurer
Reference Systems Manager: Leticia Gutierrez
Production Editor: Andrew Olson
Copy Editor: C&M Digitals (P) Ltd.
Typesetter: C&M Digitals (P) Ltd.
Proofreaders: Theresa Kay, Sally Jaskold, Eleni Georgiou, Susan Schon
Indexer: Joan Shapiro
Cover Designer: Candice Harman
Marketing Manager: Leah Watson

This book is printed on acid-free paper.

17 18 19 20 21 10 9 8 7 6 5 4 3 2 1

Contents

Volume 1
List of Entries   vii
Reader’s Guide   xv
About the Editor   xxiii
Contributors   xxiv
Introduction   xxxv
Entries: A 1 · B 83 · C 117 · D 331 · E 403

Volume 2
List of Entries   vi
Entries: F 487 · G 603 · H 639 · I 681 · J 817 · K 829 · L 835 · M 897

Volume 3
List of Entries   vi
Entries: M (Cont.) 1030 · N 1069 · O 1107 · P 1175 · Q 1375 · R 1397

Volume 4
List of Entries   vi
Entries: S 1523 · T 1745 · U 1807 · V 1817 · W 1875 · Z 1899

Appendices
Appendix A: Modern History of the Discipline of Communication   1903
Appendix B: Resource Guide   1905
Index   1909

List of Entries

Abstract or Executive Summary Academic Journal Structure Academic Journals Acculturation Acknowledging the Contribution of Others Activism and Social Justice African American Communication and Culture Agenda Setting Alternative Conference Presentation Formats Alternative News Media American Psychological Association (APA) Style Analysis of Covariance (ANCOVA) Analysis of Ranks Analysis of Residuals Analysis of Variance (ANOVA) Analytic Induction Anonymous Source of Data Applied Communication Archival Analysis Archive Searching for Research Archiving Data Argumentation Theory Artifact Selection Asian/Pacific American Communication Studies Association of Internet Researchers Attenuation. See Errors of Measurement: Attenuation Authoring: Telling a Research Story Authorship Bias Authorship Credit Autoethnography Autoregressive, Integrative, Moving Average (ARIMA) Models Average Effect, Estimation of. See Meta-Analysis: Estimation of Average Effect Axial Coding

Between-Subjects Design Bibliographic Research Binomial Effect Size Display Bivariate Statistics Block Analysis. See Multiple Regression: Block Analysis Blocking Variable Blogs and Microblogs. See Social Media: Blogs, Microblogs, and Twitter Blogs and Research Body Image and Eating Disorders Bonferroni Correction Bootstrapping Burkean Analysis Business Communication Case Study Categorization Causality Ceiling and Floor Effects. See Errors of Measurement: Ceiling and Floor Effects Chat Rooms Chicago Style Chi-Square Citation Analyses Citations to Research Close Reading Cloze Procedure Cluster Analysis Coding, Fixed Coding, Flexible Coding of Data Cohen’s Kappa. See Intercoder Reliability Techniques: Cohen’s Kappa Cohort Communication and Aging Research Communication and Culture Communication and Evolution Communication and Future Studies

Bad News, Communication of Basic Course in Communication

Communication and Human Biology Communication and Technology Communication Apprehension Communication Assessment Communication Competence Communication Education Communication Ethics Communication History Communication Journals Communication Privacy Management Theory Communication Skills Communication Theory Computer-Assisted Qualitative Data Analysis Software Computer-Mediated Communication Confederates Confidence Interval Confidentiality and Anonymity of Participants Conflict, Mediation, and Negotiation Conflict of Interest in Research Conjoint Analysis Content Analysis: Advantages and Disadvantages Content Analysis, Definition of Content Analysis, Process of Contrast Analysis Control Groups Controversial Experiments Conversation Analysis Copyright Issues in Research Corporate Communication Correlation, Pearson Correlation, Point-Biserial Correlation, Spearman Correspondence Analysis Counterbalancing Covariance/Variance Matrix Covariate Covert Observation Cramér’s V Crisis Communication Critical Analysis Critical Ethnography Critical Incident Method Critical Race Theory Critical Theory. See Critical Analysis Cronbach’s Alpha. See Reliability: Cronbach’s Alpha Cross Validation Cross-Cultural Communication

Cross-Lagged Panel Analysis Cross-Sectional Design Cultural Sensitivity in Research Cultural Studies and Communication Curvilinear Relationship Cutoff Scores Cyberchondria Dark Side of Communication Data Data Cleaning Data Reduction Data Security Data Transformation Data Trimming Databases, Academic Debate and Forensics Debriefing of Participants Deception in Research Decomposing Sums of Squares Degrees of Freedom Delayed Measurement Demand Characteristics Development of Communication in Children Diaries. See Journals Diaspora Dichotomization of Continuous Variable. See Errors of Measurement: Dichotomization of a Continuous Variable Digital Media and Race Digital Natives Dime Dating Disability and Communication Discourse Analysis Discriminant Analysis Discussion Section. See Writing a Discussion Section Distance Learning Duncan Test. See Post Hoc Tests: Duncan Multiple Range Test Educational Technology Effect Sizes Emergency Communication Empathic Listening English as a Second Language Environmental Communication Error Term Errors of Measurement Errors of Measurement: Attenuation


Errors of Measurement: Ceiling and Floor Effects Errors of Measurement: Dichotomization of a Continuous Variable Errors of Measurement: Range Restriction Errors of Measurement: Regression Toward the Mean Eta Squared Ethical Issues, International Research Ethics Codes and Guidelines Ethnographic Interview Ethnography Ethnomethodology Evidence-Based Policy Making Ex Post Facto Designs Experience Sampling Method Experimental Manipulation Experiments and Experimental Design External Validity Extraneous Variables, Control of Facial Action Coding System Factor, Crossed Factor, Fixed Factor, Nested Factor, Random Factor Analysis Factor Analysis: Confirmatory Factor Analysis: Evolutionary Factor Analysis: Exploratory Factor Analysis: Internal Consistency Factor Analysis: Oblique Rotation Factor Analysis: Parallelism Test Factor Analysis: Rotated Matrix Factor Analysis: Varimax Rotation Factorial Analysis of Variance Factorial Designs False Negative False Positive Family Communication Fantasy Theme Analysis Feminist Analysis Feminist Communication Studies Field Experiments Field Notes Film Studies Financial Communication First-Wave Feminism Fisher Narrative Paradigm Fixed Effect Analysis. See Meta-Analysis: Fixed Effect Analysis

Fleiss System. See Intercoder Reliability Techniques: Fleiss System Focus Groups Foundation and Government Research Collections Frame Analysis Fraudulent and Misleading Data Freedom of Expression Frequency Distributions Funding of Research Game Studies Garfinkeling Gender and Communication Gender-Specific Language Generalization GeoMedia GLBT Communication Studies GLBT People and Social Media Grounded Theory Group Communication Health Care Disparities Health Communication Health Literacy Hermeneutics Heterogeneity of Variance Heteroskedasticity Hierarchical Linear Modeling Hierarchical Model Historical Analysis Holsti Method. See Intercoder Reliability Techniques: Holsti Method Homogeneity of Variance Human Subjects, Treatment of Human–Computer Interaction Hypothesis Formulation Hypothesis Testing, Logic of Ideographs Imagined Interactions Implicit Measures Individual Difference Induction Informant Interview Informants Informed Consent Institutional Review Board Instructional Communication Interaction Analysis, Qualitative


Interaction Analysis, Quantitative Intercoder Reliability Intercoder Reliability Coefficients, Comparison of Intercoder Reliability Standards: Reproducibility Intercoder Reliability Standards: Stability Intercoder Reliability Techniques: Cohen’s Kappa Intercoder Reliability Techniques: Fleiss System Intercoder Reliability Techniques: Holsti Method Intercoder Reliability Techniques: Krippendorff’s Alpha Intercoder Reliability Techniques: Percent Agreement Intercoder Reliability Techniques: Scott’s Pi Intercultural Communication Interdisciplinary Journals Intergenerational Communication Intergroup Communication Internal Validity International Communication International Film Internet as Cultural Context Internet Research, Privacy of Participants Internet Research and Ethical Decision Making Interpersonal Communication Interpretative Research Interviewees Interviews, Recording and Transcribing Interviews for Data Gathering Intraclass Correlation Intrapersonal Communication Invited Publication Jealousy Journalism Journals Kendall’s Tau Krippendorff’s Alpha. See Intercoder Reliability Techniques: Krippendorff’s Alpha Kruskal–Wallis Test Kuder-Richardson Formula. See Reliability: Kuder-Richardson Formula Laboratory Experiments Lag Sequential Analysis Lambda Language and Social Interaction Latin Square Design Latina/o Communication Leadership

Least Significant Difference. See Post Hoc Tests: Least Significant Difference Legal Communication Library Research Limitations of Research Linear Regression Linear Versus Nonlinear Relationships Literature, Determining Quality of Literature, Determining Relevance of Literature Review. See Writing a Literature Review Literature Review, The Literature Reviews, Foundational Literature Reviews, Resources for Literature Reviews, Strategies for Literature Search Issues. See Meta-Analysis: Literature Search Issues Literature Sources, Skeptical and Critical Stance Toward Log-Linear Analysis Logistic Analysis Longitudinal Design Managerial Communication Manipulation Check Margin of Error Markov Analysis Marxist Analysis Mass Communication Massive Multiplayer Online Games Massive Open Online Courses Matched Groups Matched Individuals Maximum Likelihood Estimation McNemar Test Mean, Arithmetic Mean, Geometric Mean, Harmonic Measurement Levels Measurement Levels, Interval Measurement Levels, Nominal/Categorical Measurement Levels, Ordinal Measurement Levels, Ratio Measures of Central Tendency Measures of Variability Media and Technology Studies Media Diffusion Media Effects Research Media Literacy Median


Median Split of Sample Message Production Meta-Analysis Meta-Analysis: Estimation of Average Effect Meta-Analysis: Fixed-Effects Analysis Meta-Analysis: Literature Search Issues Meta-Analysis: Model Testing Meta-Analysis: Random Effects Analysis Meta-Analysis: Statistical Conversion to Common Metric Meta-Ethnography Metaphor Analysis Methodology, Selection of Methods Section. See Writing a Methods Section Metrics for Analysis, Selection of Mixed Level Design Mode Model Testing. See Meta-Analysis: Model Testing Modern Language Association (MLA) Style Mortality in Sample Multicollinearity Multiplatform Journalism Multiple Regression Multiple Regression: Block Analysis Multiple Regression: Covariates in Multiple Regression Multiple Regression: Multiple R Multiple Regression: Standardized and Raw Coefficients Multitrial Design Multivariate Analysis of Variance (MANOVA) Multivariate Statistics Narrative Analysis Narrative Interviewing Narrative Literature Review Native American or Indigenous Peoples Communication Naturalistic Observation Negative Case Analysis Neo-Aristotelian Criticism New Media Analysis New Media and Participant Observation News Media, Writing for Nonverbal Communication Normal Curve Distribution Null Hypothesis Observational Measurement: Face Features Observational Measurement: Proxemics and Touch

Observational Measurement: Vocal Qualities Observational Research, Advantages and Disadvantages Observational Research Methods Observer Reliability Odds Ratio One-Group Pretest–Posttest Design One-Tailed Test One-Way Analysis of Variance Online and Offline Data, Comparison of Online Communities Online Data, Collection and Interpretation of Online Data, Documentation of Online Data, Hacking of Online Interviews Online Learning. See Distance Learning Online Social Worlds Opinion Polling Ordinary Least Squares Organizational Communication Organizational Ethics Organizational Identification Orthogonality Outlier Analysis Overidentified Model p value Panel Presentations and Discussions Parasocial Communication Parsimony Partial Correlation Participant Observation Passing Theory Path Analysis Patient-Centered Communication Pay to Review and/or Publish Peace Studies Peer Review Peer Reviewed Publication Pentadic Analysis Percent Agreement. See Intercoder Reliability Techniques: Percent Agreement Performance Research Performance Studies Personal Relationship Studies Persuasion Phenomenological Traditions Phi Coefficient Philosophy of Communication Physiological Measurement


Physiological Measurement: Blood Pressure Physiological Measurement: Genital Blood Volume Physiological Measurement: Heart Rate Physiological Measurement: Pupillary Response Physiological Measurement: Skin Conductance Pilot Study Plagiarism Plagiarism, Self- Poetic Analysis Politeness Theory Political Communication Political Debates Political Economy of Media Popular Communication Population/Sample Pornography and Research Postcolonial Communication Post Hoc Tests Post Hoc Tests: Duncan Multiple Range Test Post Hoc Tests: Least Significant Difference Post Hoc Tests: Scheffe Test Post Hoc Tests: Student–Newman–Keuls Test Post Hoc Tests: Tukey Honestly Significant Difference Test Poster Presentation of Research Power Curves Power in Language Primary Data Analysis Privacy of Information Privacy of Participants Probit Analysis Professional Communication Organizations Program Assessment Pronomial Use-Solidarity Propaganda Psychoanalytic Approaches to Rhetoric Public Address Public Behavior, Recording of Public Memory Public Relations Publication, Politics of Publication Style Guides Publications, Open-Access Publications, Scholarly Publishing a Book Publishing Journal Articles Qualitative Data Quantitative Research, Purpose of

Quantitative Research, Steps for Quasi-Experimental Design Quasi-F Queer Methods Queer Theory Random Assignment Random Assignment of Participants Random Effects Analysis. See Meta-Analysis: Random Effects Analysis Range Raw Score Reaction Time Reality Television Regression to the Mean. See Errors of Measurement: Regression Toward the Mean Relational Dialectics Theory Relationships Between Variables Reliability, Cronbach’s Alpha Reliability, Kuder-Richardson Formula Reliability, Split-half Reliability, Unitizing Reliability of Measurement Religious Communication Repeated Measures Replication Reproducibility. See Intercoder Reliability Standards: Reproducibility Research, Inspirations for Research Ethics and Social Values Research Ideas, Sources of Research Project, Planning of Research Proposal Research Question Formulation Research Reports, Objective Research Reports, Organization of Research Reports, Subjective Research Topic, Definition of Researcher–Participant Relationships Researcher–Participant Relationships in Observational Research Respondent Interviews Respondents Response Style Restriction in Range. See Errors of Measurement: Range Restriction Results Section. See Writing a Results Section Rhetoric Rhetoric, Aristotle’s: Ethos Rhetoric, Aristotle’s: Logos


Rhetoric, Aristotle’s: Pathos Rhetoric, Isocrates’ Rhetoric as Epistemic Rhetorical and Dramatism Analysis Rhetorical Artifact Rhetorical Genre Rhetorical Leadership Rhetorical Method Rhetorical Theory Rigor Risk Communication Robotic Communication Sample Versus Population Sampling, Determining Size Sampling, Internet Sampling, Methodological Issues in Sampling, Multistage Sampling, Nonprobability Sampling, Probability Sampling, Random Sampling, Special Population Sampling Decisions Sampling Frames Sampling Theory Scales, Forced Choice Scales, Likert Statement Scales, Open-Ended Scales, Rank Order Scales, Semantic Differential Scales, True/False Scaling, Guttman Scheffe Test. See Post Hoc Tests: Scheffe Scholarship of Teaching and Learning Science Communication Scott’s Pi. See Intercoder Reliability Techniques: Scott’s Pi Search Engines for Literature Search Secondary Data Second-Wave Feminism Selective Exposure Semiotics Semi-Partial r Sensitivity Analysis Service Learning Significance Test Simple Bivariate Correlation Simple Descriptive Statistics Skewness Small Group Communication

Snowball Subject Recruitment Sobel Test Social Cognition Social Constructionism Social Implications of Research Social Media: Blogs, Microblogs, and Twitter Social Network Analysis Social Network Systems Social Networks, Online Social Presence Social Relationships Solomon Four-Group Design Spam Spirituality and Communication Spontaneous Decision Making Sports Communication Stability. See Intercoder Reliability Standards: Stability Standard Deviation/Variance Standard Error Standard Error, Mean Standard Score Statistical Conversion to Common Metric. See Meta-Analysis: Statistical Conversion to Common Metric Statistical Power Analysis Stimulus Pretest Strategic Communication Structural Equation Modeling Structuration Theory Student-Newman-Keuls Method. See Post Hoc Tests: Student-Newman-Keuls Method Submission of Research to a Convention Submission of Research to a Journal Surrogacy Survey: Contrast Questions Survey: Demographic Questions Survey: Dichotomous Questions Survey: Filter Questions Survey: Follow-up Questions Survey: Leading Questions Survey: Multiple-Choice Questions Survey: Negative-Wording Questions Survey: Open-Ended Questions Survey: Questionnaire Survey: Sampling Issues Survey: Structural Questions Survey Instructions Survey Questions, Writing and Phrasing of Survey Response Rates


Survey Wording Surveys, Advantages and Disadvantages of Surveys, Using Others’ Symbolic Interactionism Synecdoche Terministic Screens Terrorism Testability Textual Analysis Thematic Analysis Theoretical Traditions Third-Wave Feminism Time-Series Analysis Time-Series Notation Title of Manuscript, Selection of Training and Development in Organizations Transcription Systems Treatment Groups Triangulation True Score t-Test t-Test, Independent Samples t-Test, One Sample t-Test, Paired Samples Tukey Test. See Post Hoc Tests: Tukey Turning Point Analysis Two-Group Pretest–Posttest Design Two-Group Random Assignment Pretest–Posttest Design Type I Error Type II Error Underrepresented Groups Univariate Statistics Unobtrusive Analysis Unobtrusive Measurement

Validity, Concurrent Validity, Construct Validity, Face and Content Validity, Halo Effect Validity, Measurement of Validity, Predictive Variables, Categorical Variables, Conceptualization Variables, Continuous Variables, Control Variables, Defining Variables, Dependent Variables, Independent Variables, Interaction of Variables, Latent Variables, Marker Variables, Mediating Types Variables, Moderating Types Variables, Operationalization Video Games Visual Communication Studies Visual Images as Data Within Qualitative Research Visual Materials, Analysis of Vote Counting Literature Review Methods Vulnerable Groups Wartime Communication Within-Subjects Design Writer’s Block Writing a Discussion Section Writing a Literature Review Writing a Methods Section Writing a Results Section Writing Process, The Z score Z Transformation

Reader’s Guide

The Reader’s Guide is provided to assist readers in locating articles on related topics. It classifies articles into five main general topical categories: Creating and Conducting Research; Designing the Empirical Inquiry; Qualitatively Examining Information; Statistically Analyzing Data; and Understanding the Scope of Communication Research. Entries may be listed under more than one topic.

Creating and Conducting Research

Creation of Research Project

Authoring: Telling a Research Story Body Image and Eating Disorders Hypothesis Formulation Methodology, Selection of Program Assessment Research, Inspirations for Research Ideas, Sources of Research Project, Planning of Research Question Formulation Research Topic, Definition of Social Media: Blogs, Microblogs, and Twitter Testability

Ethics

Acknowledging the Contribution of Others Activism and Social Justice Anonymous Source of Data Authorship Bias Authorship Credit Confidentiality and Anonymity of Participants Conflict of Interest in Research Controversial Experiments Copyright Issues in Research Cultural Sensitivity in Research Data Security Debriefing of Participants Deception in Research Ethical Issues, International Research Ethics Codes and Guidelines Fraudulent and Misleading Data Funding of Research Health Care Disparities Human Subjects, Treatment of Informed Consent Institutional Review Board Organizational Ethics Peer Review Plagiarism Plagiarism, Self- Privacy of Information Privacy of Participants Public Behavior, Recording of Reliability, Unitizing Research Ethics and Social Values Researcher–Participant Relationships Social Implications of Research

Literature Reviews

Archive Searching for Research Bibliographic Research Databases, Academic Foundation and Government Research Collections Library Research Literature, Determining Quality of Literature, Determining Relevance of Literature Review, The Literature Reviews, Foundational Literature Reviews, Resources for Literature Reviews, Strategies for Literature Sources, Skeptical and Critical Stance Toward


Meta-Analysis Publications, Scholarly Search Engines for Literature Search Vote Counting Literature Review Methods

Writing and Publishing Research

Abstract or Executive Summary Academic Journals Alternative Conference Presentation Formats American Psychological Association (APA) Style Archiving Data Blogs and Research Chicago Style Citations to Research Evidence-Based Policy Making Invited Publication Limitations of Research Modern Language Association (MLA) Style Narrative Literature Review New Media Analysis News Media, Writing for Panel Presentations and Discussions Pay to Review and/or Publish Peer Reviewed Publication Poster Presentation of Research Primary Data Analysis Publication, Politics of Publication Style Guides Publications, Open-Access Publishing a Book Publishing Journal Articles Research Reports, Organization of Research Reports, Objective Research Reports, Subjective Scholarship of Teaching and Learning Secondary Data Submission of Research to a Convention Submission of Research to a Journal Title of Manuscript, Selection of Visual Images as Data Within Qualitative Research Writer’s Block Writing a Discussion Section Writing a Literature Review Writing a Methods Section Writing a Results Section Writing Process, The

Designing the Empirical Inquiry

Content Analysis

Coding of Data Content Analysis: Advantages and Disadvantages Content Analysis, Definition of Content Analysis, Process of Conversation Analysis Critical Analysis Discourse Analysis Interaction Analysis, Quantitative Intercoder Reliability Intercoder Reliability Coefficients, Comparison of Intercoder Reliability Standards: Reproducibility Intercoder Reliability Standards: Stability Intercoder Reliability Techniques: Cohen’s Kappa Intercoder Reliability Techniques: Fleiss System Intercoder Reliability Techniques: Holsti Method Intercoder Reliability Techniques: Krippendorff’s Alpha Intercoder Reliability Techniques: Percent Agreement Intercoder Reliability Techniques: Scott’s Pi Metrics for Analysis, Selection of Narrative Analysis Observational Research, Advantages and Disadvantages Observational Research Methods Observer Reliability Rhetorical and Dramatism Analysis Semiotics Unobtrusive Analysis

Internet Inquiry

Association of Internet Researchers (AoIR) Chat Rooms Computer-Mediated Communication (CMC) Internet as Cultural Context Internet Research and Ethical Decision Making Internet Research, Privacy of Participants Online and Offline Data, Comparison of Online Communities Online Data, Collection and Interpretation of Online Data, Documentation of Online Data, Hacking of Online Interviews Online Social Worlds Social Networks, Online Spam


Measurement

Correspondence Analysis Cutoff Scores Data Cleaning Data Reduction Data Trimming Facial Action Coding System Factor Analysis Factor Analysis: Confirmatory Factor Analysis: Evolutionary Factor Analysis: Exploratory Factor Analysis: Internal Consistency Factor Analysis: Oblique Rotation Factor Analysis: Parallelism Test Factor Analysis: Rotated Matrix Factor Analysis: Varimax Rotation Implicit Measures Measurement Levels Measurement Levels, Interval Measurement Levels, Nominal/Categorical Measurement Levels, Ordinal Measurement Levels, Ratio Observational Measurement: Face Features Observational Measurement: Proxemics and Touch Observational Measurement: Vocal Qualities Organizational Identification Outlier Analysis Parsimony Physiological Measurement Physiological Measurement: Blood Pressure Physiological Measurement: Genital Blood Volume Physiological Measurement: Heart Rate Physiological Measurement: Pupillary Response Physiological Measurement: Skin Conductance Range Raw Score Reaction Time Reliability, Cronbach’s Alpha Reliability, Kuder-Richardson Formula Reliability, Split-half Reliability of Measurement Scales, Forced Choice Scales, Likert Statement Scales, Open-Ended Scales, Rank Order Scales, Semantic Differential Scales, True/False Scaling, Guttman Standard Score

Time-Series Notation True Score Validity, Concurrent Validity, Construct Validity, Face and Content Validity, Halo Effect Validity, Measurement of Validity, Predictive Variables, Conceptualization Variables, Operationalization Z Transformation

Research Subjects/Participants

Cohort Confederates Generalization Imagined Interactions Informants Interviewees Matched Groups Matched Individuals Random Assignment of Participants Respondents Response Style Treatment Groups Vulnerable Groups

Sampling

Experience Sampling Method Sample Versus Population Sampling, Internet Sampling, Methodological Issues in Sampling, Multistage Sampling, Nonprobability Sampling, Probability Sampling, Special Population Sampling Decisions Sampling Frames

Survey Research

Opinion Polling Sampling, Random Survey: Contrast Questions Survey: Demographic Questions Survey: Dichotomous Questions Survey: Filter Questions Survey: Follow-up Questions


Survey: Leading Questions Survey: Multiple-Choice Questions Survey: Negative-Wording Questions Survey: Open-Ended Questions Survey: Questionnaire Survey: Sampling Issues Survey: Structural Questions Survey Instructions Survey Questions, Writing and Phrasing of Survey Response Rates Survey Wording Surveys, Advantages and Disadvantages of Surveys, Using Others’ Underrepresented Groups

Qualitatively Examining Information

Qualitative Concepts and Techniques

Alternative News Media Analytic Induction Archival Analysis Artifact Selection Autoethnography Axial Coding Burkean Analysis Case Study Close Reading Coding, Fixed Coding, Flexible Computer-Assisted Qualitative Data Analysis Software (CAQDAS) Covert Observation Critical Ethnography Critical Incident Method Critical Race Theory Cultural Studies and Communication Demand Characteristics Ethnographic Interview Ethnography Ethnomethodology Fantasy Theme Analysis Feminist Analysis Field Notes First-Wave Feminism Fisher Narrative Paradigm Focus Groups Frame Analysis Garfinkeling Gender-Specific Language

Grounded Theory Hermeneutics Historical Analysis Ideographs Induction Informant Interview Interaction Analysis, Qualitative Interpretative Research Interviews for Data Gathering Interviews, Recording and Transcribing Journals Marxist Analysis Meta-Ethnography Metaphor Analysis Narrative Interviewing Naturalistic Observation Negative Case Analysis Neo-Aristotelian Criticism New Media and Participant Observation Participant Observation Pentadic Analysis Performance Research Phenomenological Traditions Poetic Analysis Postcolonial Communication Power in Language Pronomial Use-Solidarity Psychoanalytic Approaches to Rhetoric Public Memory Qualitative Data Queer Methods Queer Theory Researcher–Participant Relationships in Observational Research Respondent Interviews Rhetoric, Aristotle’s: Ethos Rhetoric, Aristotle’s: Logos Rhetoric, Aristotle’s: Pathos Rhetoric, Isocrates’ Rhetoric as Epistemic Rhetorical Artifact Rhetorical Method Rhetorical Theory Second-Wave Feminism Snowball Subject Recruitment Social Constructionism Social Network Analysis Spontaneous Decision Making Symbolic Interactionism Synecdoche


Terministic Screens Textual Analysis Thematic Analysis Theoretical Traditions Third-Wave Feminism Transcription Systems Triangulation Turning Point Analysis Unobtrusive Measurement Visual Materials, Analysis of

Statistically Analyzing Data

Experimental Design Issues

Between-Subjects Design Blocking Variable Causality Control Groups Counterbalancing Cross-Sectional Design Data Degrees of Freedom Delayed Measurement Ex Post Facto Designs Experimental Manipulation Experiments and Experimental Design External Validity Extraneous Variables, Control of Factor, Crossed Factor, Fixed Factor, Nested Factor, Random Factorial Designs False Negative False Positive Field Experiments Hierarchical Model Individual Difference Internal Validity Laboratory Experiments Latin Square Design Longitudinal Design Manipulation Check Measures of Variability Median Split of Sample Mixed Level Design Multitrial Design Null Hypothesis One-Group Pretest–Posttest Design

Orthogonality Overidentified Model p value Pilot Study Population/Sample Power Curves Quantitative Research, Purpose of Quantitative Research, Steps for Quasi-Experimental Design Random Assignment Replication Research Proposal Rigor Sampling, Determining Size Sampling Theory Solomon Four-Group Design Stimulus Pretest Two-Group Pretest–Posttest Design Two-Group Random Assignment Pretest–Posttest Design Variables, Control Variables, Dependent Variables, Independent Variables, Latent Variables, Marker Variables, Mediating Types Variables, Moderating Types Within-Subjects Design

Analysis of Variance Approaches

Analysis of Covariance (ANCOVA)
Analysis of Ranks
Analysis of Variance (ANOVA)
Bonferroni Correction
Chi-Square
Decomposing Sums of Squares
Error Term
Eta Squared
Factorial Analysis of Variance
McNemar Test
One-Tailed Test
One-Way Analysis of Variance
Post Hoc Tests
Post Hoc Tests: Duncan Multiple Range Test
Post Hoc Tests: Least Significant Difference
Post Hoc Tests: Scheffe Test
Post Hoc Tests: Student-Newman-Keuls Test
Post Hoc Tests: Tukey Honestly Significant Difference Test

Repeated Measures
t-Test
t-Test, Independent Samples
t-Test, One Sample
t-Test, Paired Samples

Linear Approaches to Statistics

Analysis of Residuals
Bivariate Statistics
Bootstrapping
Confidence Interval
Conjoint Analysis
Contrast Analysis
Correlation, Pearson
Correlation, Point-Biserial
Correlation, Spearman
Covariance/Variance Matrix
Covariate
Cramér’s V
Discriminant Analysis
Kendall’s Tau
Kruskal-Wallis Test
Linear Regression
Linear Versus Nonlinear Relationships
Multicollinearity
Multiple Regression
Multiple Regression: Block Analysis
Multiple Regression: Covariates in Multiple Regression
Multiple Regression: Multiple R
Multiple Regression: Standardized Regression Coefficient
Partial Correlation
Phi Coefficient
Semi-Partial r
Simple Bivariate Correlation

Statistical Models

Autoregressive, Integrative, Moving Average (ARIMA) Models
Binomial Effect Size Display
Cloze Procedure
Cross Validation
Cross-Lagged Panel Analysis
Curvilinear Relationship
Effect Sizes
Hierarchical Linear Modeling
Lag Sequential Analysis

Lambda
Log-Linear Analysis
Logistic Analysis
Margin of Error
Markov Analysis
Maximum Likelihood Estimation
Meta-Analysis: Estimation of Average Effect
Meta-Analysis: Fixed Effects Analysis
Meta-Analysis: Literature Search Issues
Meta-Analysis: Model Testing
Meta-Analysis: Random Effects Analysis
Meta-Analysis: Statistical Conversion to Common Metric
Multivariate Analysis of Variance (MANOVA)
Multivariate Statistics
Odds Ratio
Ordinary Least Squares
Path Analysis
Probit Analysis
Quasi-F
Sobel Test
Structural Equation Modeling
Time-Series Analysis

Statistical Measurement Issues

Categorization
Cluster Analysis
Data Transformation
Errors of Measurement
Errors of Measurement: Attenuation
Errors of Measurement: Ceiling and Floor Effects
Errors of Measurement: Dichotomization of a Continuous Variable
Errors of Measurement: Range Restriction
Errors of Measurement: Regression Toward the Mean
Frequency Distributions
Heterogeneity of Variance
Heteroskedasticity
Homogeneity of Variance
Hypothesis Testing, Logic of
Intraclass Correlation
Mean, Arithmetic
Mean, Geometric
Mean, Harmonic
Measures of Central Tendency
Median
Mode
Mortality in Sample

Normal Curve Distribution
Relationships Between Variables
Sampling, Probability
Sensitivity Analysis
Significance Test
Simple Descriptive Statistics
Skewness
Standard Deviation and Variance
Standard Error
Standard Error, Mean
Statistical Power Analysis
Type I error
Type II error
Univariate Statistics
Variables, Categorical
Variables, Continuous
Variables, Defining
Variables, Interaction of
Z Score

Understanding the Scope of Communication Research

Areas of Inquiry

Acculturation
African American Communication and Culture
Agenda Setting
Applied Communication
Argumentation Theory
Asian/Pacific American Communication Studies
Bad News, Communication of
Basic Course in Communication
Business Communication
Communication and Aging Research
Communication and Culture
Communication and Evolution
Communication and Human Biology
Communication and Future Studies
Communication and Technology
Communication Apprehension
Communication Assessment
Communication Competence
Communication Education
Communication Ethics
Communication History
Communication Privacy Management Theory
Communication Skills
Communication Theory
Conflict, Mediation, and Negotiation

Corporate Communication
Crisis Communication
Cross-Cultural Communication
Cultural Studies and Communication
Cyberchondria
Dark Side of Communication
Debate and Forensics
Development of Communication in Children
Diaspora
Digital Media and Race
Digital Natives
Dime Dating
Disability and Communication
Distance Learning
Educational Technology
Emergency Communication
Empathic Listening
English as a Second Language
Environmental Communication
Family Communication
Feminist Communication Studies
Film Studies
Financial Communication
Freedom of Expression
Game Studies
Gender and Communication
GeoMedia
GLBT Communication Studies
GLBT Social Media
Group Communication
Health Communication
Health Literacy
Human-Computer Interaction
Instructional Communication
Intercultural Communication
Intergenerational Communication
Intergroup Communication
International Communication
International Film
Interpersonal Communication
Intrapersonal Communication
Jealousy
Journalism
Language and Social Interaction
Latino Communication
Leadership
Legal Communication
Managerial Communication
Mass Communication
Massive Multiplayer Online Games

Massive Open Online Courses
Media and Technology Studies
Media Diffusion
Media Effects Research
Media Literacy
Message Production
Multiplatform Journalism
Native American or Indigenous Peoples Communication
Nonverbal Communication
Organizational Communication
Parasocial Communication
Passing
Patient-Centered Communication
Peace Studies
Performance Studies
Personal Relationship Studies
Persuasion
Philosophy of Communication
Politeness
Political Communication
Political Economy of Media
Political Debates
Popular Communication
Pornography and Research
Propaganda
Public Address
Public Relations
Reality Television
Relational Dialectics Theory
Religious Communication
Rhetoric
Rhetorical Genre
Risk Communication
Robotic Communication
Science Communication
Selective Exposure

Service Learning
Small Group Communication
Social Cognition
Social Network Systems
Social Presence
Social Relationships
Spirituality and Communication
Sports Communication
Strategic Communication
Structuration Theory
Surrogacy
Terrorism
Training and Development in Organizations
Video Games
Visual Communication Studies
Wartime Communication

Structure of Research Community

Academic Journal Structure
Citation Analyses
Communication Journals
Interdisciplinary Journals
Professional Communication Organizations (NCA, ICA, Central, etc.)

Appendices

Modern History of the Discipline of Communication
Resource Guide
1. Associations
2. Books
3. Journals

About the Editor

Mike Allen (PhD, Michigan State University, 1987) has co-authored two books and edited five on quantitative methods and on meta-analysis, including the 2009 SAGE title Quantitative Research in Communication. He has served as editor for the journals Communication Monographs and Communication Studies. He has published more than 250 articles and books over the past three decades and has worked with over 300 co-authors in published works. In 2011, he was awarded the Central States Communication Association Federation Research Prize, and he has been ranked in the top 25 of active research career scholars based on the number of published works and in the top 25 most cited of doctoral faculty in communication.

Contributors

Tony E. Adams Northeastern Illinois University

Steven A. Beebe Texas State University

Roger C. Aden Ohio University

Ruth Beerman Bloomsburg University

Tamara D. Afifi The University of Iowa

Mark L. Berenson Montclair State University

Dalal Albudaiwi University of Wisconsin–Milwaukee

Suzanne V. L. Berg Newman University

Cassandra Alexopoulos University of California, Davis

Daniel Bergan Michigan State University

Mike Allen University of Wisconsin–Milwaukee

Mara K. Berkland North Central College

Peter A. Andersen San Diego State University

Lawrance M. Bernabo University of Minnesota, Duluth

Christopher J. E. Anderson University of Wisconsin–Milwaukee

Elena Bessarabova University of Oklahoma

Colleen E. Arendt Fairfield University

Elisabeth Bigsby University of Illinois

Alphonso R. Atkins Ivy Tech Community College of Indiana

David Blakesley Clemson University

Anita Atwell Seate University of Maryland

Martin Bland University of York

Ciano Aydin University of Twente

Kevin L. Blankenship Iowa State University

Benjamin M. A. Baker University of Wisconsin–Milwaukee

Maria D. Blevins Utah Valley University

Robert L. Ballard Pepperdine University

Kristen C. Blinne State University of New York College at Oneonta

John Banas University of Oklahoma

Samuel Boerboom Montana State University–Billings

Fatima Abdul-Rahman Barakji Wayne State University

Galina B. Bolden Rutgers University

Dana Battaglia Adelphi University

Derek M. Bolen Angelo State University xxiv

Belinda Boon Kent State University

Christopher J. Carpenter Western Illinois University

Carl H. Botan George Mason University

G.W. Carpenter University of the Pacific

John Bourhis Missouri State University

D. Jasun Carr Idaho State University

Nicholas David Bowman West Virginia University

Amy E. Chadwick Ohio University

Cheryl Campanella Bracken Cleveland State University

Chin-Chung (Joy) Chao University of Nebraska at Omaha

Lisa Bradford University of Wisconsin–Milwaukee

Laura L. Chapdelaine Montclair State University

Jaclyn Brandhorst University of Missouri

Stellina Marie Aubuchon Chapman Ohio University

Maria Brann Indiana University–Purdue University Indianapolis

April Chatham-Carpenter University of Arkansas at Little Rock

Camille Brisset Université de Bordeaux

Maura R. Cherney University of Wisconsin–Milwaukee

Nicholas Brody University of Puget Sound

Mike W.-L. Cheung National University of Singapore

Christopher Brown Minnesota State University, Mankato

Jeffrey T. Child Kent State University

Nancy Brule Bethel University

Terrence L. Chmielewski University of Wisconsin–Eau Claire

Barry S. Brummett University of Texas at Austin

Ioana A. Cionea University of Oklahoma

Claudia Bucciferro Gonzaga University

Jessica E. Clements Whitworth University

Ann Burnett North Dakota State University

Rebecca J. Cline Kent State University (retired)

Nancy A. Burrell University of Wisconsin–Milwaukee

Tina A. Coffelt Iowa State University

Jennifer A. Butler University of Wisconsin–La Crosse

Andrew William Cole Waukesha County Technical College

Patrice M. Buzzanell Purdue University

D’Lane R. Compton The University of New Orleans

Michael A. Cacciatore University of Georgia

Timothy Coombs University of Central Florida

Heather E. Canary University of Utah

Erica F. Cooper Roanoke College

Richard C. Cante University of North Carolina at Chapel Hill

Elena F. Corriero Wayne State University

Heather J. Carmack James Madison University

Emily Cramer North Central College

Gregory A. Cranmer Columbus State University

Melissa A. Dobosh University of Northern Iowa

Valerie Cronin-Fisher University of Wisconsin–Milwaukee

Catherine A. Dobris Indiana University–Purdue University Indianapolis

Christopher L. Cummings Nanyang Technical University

Ryan Cummings Brian Lamb School of Communication

Anne Marie Czerwinski University of Pittsburgh at Greensburg

Dave D’Alessio University of Connecticut–Stamford

Suzy D’Enbeau Kent State University

Feng Dai Yale University

Elise J. Dallimore Northeastern University

Mary M. Dalton Wake Forest University

Cori Dauber University of North Carolina at Chapel Hill

Rachel Davidson University of Wisconsin–Milwaukee

Shardé M. Davis University of Iowa

Deborah DeCloedt Pinçon University of Wisconsin–Milwaukee

Jonathan Bryce Dellinger University of Wisconsin–Milwaukee

Kevin Michael DeLuca University of Utah

Nathalie Desrayaud Florida International University

Tony Docan-Morgan University of Wisconsin, La Crosse

Elizabeth Dorrance Hall Purdue University

Edward Downs University of Minnesota Duluth

Richard Draeger Jr. University of Wisconsin–Milwaukee

Amy Dryden Northern Arizona University

Levent Dumenci Virginia Commonwealth University

Norah E. Dunbar University of California, Santa Barbara

Jessica J. Eckstein Western Connecticut State University

Jen Eden Marist College

Autumn Edwards Western Michigan University

Chad C. Edwards Western Michigan University

Laura L. Ellingson Santa Clara University

Tara Emmers-Sommer University of Nevada, Las Vegas

Andreas M. Fahr Fribourg University

Shari L. DeVeney University of Nebraska Omaha

Thomas Hugh Feeley University at Buffalo, The State University of New York

Aron Elizabeth DiBacco Indiana University–Purdue University Indianapolis

Hairong Feng University of Wisconsin–Milwaukee

Linda B. Dickmeyer University of Wisconsin–La Crosse

Kimberly Field-Springer Berry College

Keith E. Dilbeck Erasmus University–Rotterdam

Edward L. Fink University of Maryland

James DiSanza Idaho State University

Douglas Fraleigh California State University, Fresno

Lawrence R. Frey University of Colorado, Boulder

Alice Hall University of Missouri–St. Louis

Jeremy P. Fyke Marquette University

Jeffrey A. Hall University of Kansas

Elena Gabor Bradley University

Mark Hamilton University of Connecticut

Adam J. Gaffey Winona State University

Janice Hamlet Northern Illinois University

Robert N. Gaines The University of Alabama

Lisa K. Hanasono Bowling Green State University

Adolfo J. Garcia State University of New York at New Paltz

Lindsey M. Harness University of Wisconsin–Milwaukee

Johny T. Garner Texas Christian University

Leslie J. Harris University of Wisconsin–Milwaukee

Barbara Mae Gayle University of Maryland University College Europe

Amanda M. Harsin Indiana University–Purdue University Indianapolis

Zac Gershberg Idaho State University

Marouf Hasian University of Utah

Patricia Gettings Purdue University

Matthias R. Hastall Technical University of Dortmund

Jannath Ghaznavi University of California, Davis

Jennifer Morey Hawkins University of Wisconsin–Milwaukee

Michael A. Gilbert York University

Katharine J. Head Indiana University–Purdue University Indianapolis

Matthew J. Gill Eastern Illinois University

Elizabeth M. Goering Indiana University–Purdue University Indianapolis

Michael L. Hecht Pennsylvania State University

E. C. Hedberg Arizona State University

Monica L. Gracyalny California Lutheran University

Veronica Hefner Georgia College & State University

Katherine Grasso University of California, Davis

Alan D. Heisel University of Missouri–St. Louis

John O. Greene Purdue University

David Dryden Henningsen Northern Illinois University

Peter B. Gregg University of St. Thomas

Mary Lynn Miller Henningsen Northern Illinois University

Clare Gross University of Wisconsin–Milwaukee

Lori Henson Indiana State University

Laura Guerrero California State University, Fullerton

Leandra H. Hernandez National University

Lisa M. Guntzviller University of Illinois at Urbana–Champaign

Anna R. Herrman St. Norbert College

Jon A. Hess University of Dayton

Stephanie Kelly North Carolina A&T State University

Colin Hesse Oregon State University

Tyler Kendall University of Oregon

Mary F. Hoffman University of Wisconsin–Eau Claire

Michael L. Kent University of Tennessee, Knoxville

Shannon L. Holland Southwestern University

Jarim Kim Kookmin University

Matthew Hollander University of Wisconsin–Madison

Jihyun Kim Kent State University

Amanda Holman Creighton University

Sang-Yeon Kim University of Wisconsin–Milwaukee

Allen I. Huffcutt Bradley University

Etsuko Kinefuchi University of North Carolina at Greensboro

Timothy Huffman Loyola Marymount University

Russell J. Kivatisky University of Southern Maine

Dena M. Huisman University of Wisconsin–La Crosse

Boenell J. Kline Bloomsburg University

Amanda L. Irions University of Maryland

Julia Kneer Erasmus University–Rotterdam

James D. Ivory Virginia Tech

Ascan F. Koerner University of Minnesota–Twin Cities

Mariko Izumi Columbus State University

Shana Kopaczewski Indiana State University

Bruce E. Johansen University of Nebraska at Omaha

Nicole B. Koppel Montclair State University

Amy Janan Johnson University of Oklahoma

Michael W. Kramer University of Oklahoma

J. David Johnson University of Kentucky

Gary L. Kreps George Mason University

Malynnda Johnson University of Mount Union

Klaus Krippendorff University of Pennsylvania

Tanya Joosten University of Wisconsin–Milwaukee

Kimberly L. Kulovitz Carthage College

Nick Joyce University of Maryland

Angela G. La Valley Bloomsburg University

Adam S. Kahn California State University, Long Beach

Sara LaBelle Chapman University

Falon Kartch California State University, Fresno

Kenneth A. Lachlan University of Connecticut

Michael W. Kearney University of Kansas

Megan M. Lambertz-Berndt University of Wisconsin–Milwaukee

Peter M. Kellett University of North Carolina at Greensboro

Mark C. Lashley La Salle University

Aimee Lau Wisconsin Lutheran College

David P. MacNeil George Mason University

Sean Lawson University of Utah

Kelly Madden Daily La Salle University

Yvan Leanza Université Laval

Annette Madlock Gatison Southern Connecticut State University

Sun Kyong Lee University of Oklahoma

Craig T. Maier Duquesne University

Mark A. Leeman Northern Kentucky University

Melissa A. Maier Missouri State University

Leah LeFebvre University of Wyoming

Colleen Carol Malachowski Regis College

Luke LeFebvre Texas Tech University

Jimmie Manning Northern Illinois University

James L. Leighter Creighton University

Yuping Mao Erasmus University–Rotterdam

John Leustek Rutgers University

Katherine B. Martin University of Miami

Jay Phil Lim Rutgers University

Amanda R. Martinez Davidson College

Anthony M. Limperos University of Kentucky

Lourdes S. Martinez Michigan State University

Mei-Chen Lin Kent State University

Mridula Mascarenhas Hanover College

Jeremy Harris Lipschultz University of Nebraska Omaha Social Media Lab

Jonathan Matusitz University of Central Florida

Meina Liu George Washington University

Amy May University of Alaska–Fairbanks

Xun Liu California State University, Stanislaus

Matthew S. May Texas A&M University

Matthew Lombard Temple University

Steve May University of North Carolina at Chapel Hill

Tony P. Love The University of Kentucky

Molly A. Mayhead Western Oregon University

Yu Lu Pennsylvania State University

M. Chad McBride Creighton University

Kristen Lucas University of Louisville

Jennifer McCullough Kent State University

Christian O. Lundberg University of North Carolina at Chapel Hill

Bree McEwan Western Illinois University

Jenny Lye University of Melbourne, Australia

Danielle McGeough University of Northern Iowa

Jennifer A. Machiorlatti Western Michigan University

Charlton McIlwain New York University

Timothy McKenna-Buchanan Manchester University

Scott A. Myers West Virginia University

Christopher J. McKinley Montclair State University

Jacob R. Neiheisel University at Buffalo, The State University of New York

Mitchell S. McKinney University of Missouri

Jenna McNallie West Virginia Wesleyan College

Susan Mello Northeastern University

Christine E. Meltzer University of Mainz

Andrea L. Meluch Kent State University

Anne F. Merrill Pennsylvania State University

Daniel S. Messinger University of Miami

Nathan Miczo Western Illinois University

Josh Miller University of Wisconsin–Milwaukee

Vernon Miller Michigan State University

Patricia Pizzano Miraglia University of Nevada, Reno

Rahul Mitra Wayne State University

Laura L. Nelson University of Wisconsin–La Crosse

James W. Neuliep St. Norbert College

Frank Nevius Western Oregon University

Joyce Neys Erasmus University–Rotterdam

Kristine M. Nicolini University of Wisconsin–Milwaukee

Charissa K. Niedzwiecki University of Wisconsin–La Crosse

Adriani Nikolakopoulou University of Ioannina, School of Medicine

Shahrokh Nikou Åbo Akademi University

Stephanie Norander University of North Carolina at Charlotte

David R. Novak DePaul University

Audra K. Nuru Fairfield University

Kaori Miyawaki University of Wisconsin–Milwaukee

Anne Oeldorf-Hirsch University of Connecticut

Pamela L. Morris University of Wisconsin–La Crosse

Jennifer E. Ohs Saint Louis University

Jennifer Ann Morrow University of Tennessee

Michele Olson University of Wisconsin–Milwaukee

Laura Motel University of Wisconsin–Milwaukee

James O. Olufowote The University of Oklahoma

Giovanni Motta Columbia University

Kim Omachinski University of Illinois–Springfield

Star A. Muir George Mason University

Leah M. Omilion-Hodges Western Michigan University

Rebecca R. Mullane Moraine Park Technical College

Kikuko Omori St. Cloud State University

Elizabeth A. Munz West Chester University

Terry Ownby Idaho State University

Nicholas A. Palomares University of California, Davis

Kelly Quinn University of Illinois at Chicago

John Parrish-Sprowl Indiana University–Purdue University Indianapolis

Katherine A. Rafferty Iowa State University

Michael M. Parsons Rhode Island College

James Donald Ragsdale Sam Houston State University

Sarah T. Partlow Lefevre Idaho State University

Artemio Ramirez University of South Florida

Emily A. Paskewitz University of Tennessee

Rana Rashid Rehman National University of Computer & Emerging Sciences

Jessica A. Pauly Purdue University

Charles Pavitt University of Delaware

Brittnie S. Peck University of Wisconsin–Milwaukee

Joshua R. Pederson University of Alabama

Xaquín Perez Sindin Ph.D. candidate at University of A Coruña

Evan K. Perrault University of Wisconsin–Eau Claire

Deborah Petersen-Perlman University of Minnesota–Duluth

Sandra Petronio Indiana University–Purdue University Indianapolis

Louise Phillips University of Roskilde

Cameron Piercy University of Oklahoma

Hilary A. Rasmussen University of Wisconsin–Milwaukee

Terra Rasmussen Lenox University of Wisconsin–Milwaukee

Torsten Reimer Purdue University

Amber Marie Reinhart University of Missouri, St. Louis

Adam S. Richards Texas Christian University

David J. Roaché University of Illinois at Urbana–Champaign

Rhéa Rocque Université Laval

Michael E. Roloff Northwestern University

Sarah F. Rosaen University of Michigan–Flint

Stephen Pihlaja Newman University

Sonny Rosenthal Nanyang Technological University, Singapore

Nicole Ploeger-Lyons University of Wisconsin–La Crosse

Racheal A. Ruble Iowa State University

Denise Polk West Chester University

Clariza Ruiz De Castilla California State University, Long Beach

James Ponder Kent State University

Erin K. Ruppel University of Wisconsin–Milwaukee

Suzy Prentiss The University of Tennessee

Tillman Russell Purdue University

DeAnne Priddis University of Wisconsin–Milwaukee

Christina M. Sabee San Francisco State University

Emily B. Prince University of Miami

Erin Sahlstein Parcell University of Wisconsin–Milwaukee

Jessica Marie Samens Bethel University

Nathaniel Simmons Western Governors University

Jennifer A. Sandoval University of Central Florida

Jonathan R. Slater State University of New York at Plattsburgh

Matthew Savage University of Kentucky

Kevin Smets University of Antwerp

Chris R. Sawyer Texas Christian University

Kim K. Smith University of Central Florida

Christoph Scheepers University of Glasgow

Peter B. Smith University of Sussex

Edward Schiappa Massachusetts Institute of Technology

Jennifer Snyder-Duch Carlow University

Anna Schnauber University of Mainz

Hayeon Song University of Wisconsin–Milwaukee

Maureen Schriner University of Wisconsin–Eau Claire

Patric R. Spence University of Kentucky

Chris Segrin University of Arizona

Matthew L. Spialek University of Missouri

James P. Selig University of Arkansas for Medical Sciences

John S. W. Spinda Clemson University

Deanna D. Sellnow University of Kentucky

Brian H. Spitzberg San Diego State University

Samantha Senda-Cook Creighton University

Lara C. Stache Governors State University

Ariana F. Shahnazi The University of Iowa

Glen H. Stamp Ball State University

Sonia Jawaid Shaikh University of Southern California

Heather M. Stassen-Ferrara Cazenovia College

Leonard Shedletsky University of Southern Maine

Thomas M. Steinfatt University of Miami

Pavica Sheldon University of Alabama Huntsville

Helen Sterk Western Kentucky University

John C. Sherblom University of Maine

Arrington Stoll University of Wisconsin–Milwaukee

Julie Delaney Shields University of New Mexico

Brett Stoll Cornell University

Inyoung Shin Rutgers University

Mary E. Stuckey Georgia State University

YoungJu Shin Arizona State University

Robert Sullivan Ithaca College

Carolyn K. Shue Ball State University

Erin M. Sumner Trinity University

Lisa Silvestri Gonzaga University

Ye Sun University of Utah

Melissa Ann Tafoya La Sierra University

Laramie D. Taylor University of California, Davis

Kelly E. Tenzek University at Buffalo, The State University of New York

Stephanie Tikkanen Ohio University

Michael M. Tollefson University of Wisconsin–La Crosse

William C. Trapani Florida Atlantic University

J. David Trebing Kent State University

April R. Trees Saint Louis University

Haijing Tu Indiana State University

Katie L. Turkiewicz University of Wisconsin–Waukesha

Lynn H. Turner Marquette University

Jason Turowetz University of Wisconsin–Madison

Stacy Tye-Williams Iowa State University

Adam W. Tyma University of Nebraska at Omaha

Laura R. Umphrey Northern Arizona University

Kathleen S. Valde Northern Illinois University

Stephanie K. Van Stee University of Missouri–St. Louis

Shawn VanCour University of California, Los Angeles

Shannon Borke VanHorn Valley City State University

C. Arthur VanLear University of Connecticut

Alice E. Veksler Christopher Newport University

Andrea J. Vickery Louisiana State University

Richard C. Vincent Indiana State University

Ron Von Burg Wake Forest University

Shawn T. Wahl Missouri State University

Ningxin Wang University of Illinois at Urbana–Champaign

Benjamin R. Warner University of Missouri

Nathan G. Webb Belmont University

René Weber UC Santa Barbara

Markus Weidler Columbus State University

David Weiss University of New Mexico

Benjamin Wiedmaier Arizona State University

Kristi Wilkum University of Wisconsin–Fond du Lac

Karina L. Willes University of Wisconsin–Milwaukee

Mark A. E. Williams California State University, Sacramento

Steven R. Wilson Purdue University

Luke Winslow San Diego State University

Paul L. Witt Texas Christian University

Gwen M. Wittenbaum Michigan State University

Kevin Wombacher University of Kentucky

Thomas S. Wright Temple University

Chong Xing University of Kansas

Philip A. Yates Saint Michael’s College

Narine S. Yegiyan University of California, Davis

Christina Yoshimura The University of Montana

Sara K. Yeo University of Utah

Anna M. Young Pacific Lutheran University

Gamze Yilmaz University of Massachusetts, Boston

Anne N. Zmyslinski-Seelig University of Wisconsin–Milwaukee

Eunkyong L. Yook George Mason University

Lara Zwarun University of Missouri–St. Louis

Introduction

The Field of Communication Studies

Communication in various incarnations represents one of the oldest sources of academic interest, teaching, and study. From the time of ancient Greece, when Aristotle wrote about rhetoric and Socrates exchanged views with the sophists, to Cicero's view of oratory, advice about communication practice embodies a long history. The continuing technological revolution extends the practice of communication beyond its origins in public discourse to mass-mediated forms, demonstrating the continued vitality of the discipline.

Communication departments in universities historically contained elements of speech audiology and pathology, theater, advertising, mass communication, film, journalism, public relations, and library sciences. Some departments were affiliated with educational programs due to the need to certify teachers as qualified to teach public speaking, theater, and forensics. Over the past century, however, many departments have been restructured or reconfigured with regard to content and methodology, and the tension between the study of communication as a practical discipline focused on the development of personal and professional skills and the study of communication as an abstract discipline remains an element of ongoing discussion among communication scholars.

Some university communication departments with a large focus on media may teach interviewing, script writing, operation of video recording/editing equipment, as well as writing copy for newspapers, blogs, or advertisements. Understanding technological tools related to web design and layout may also be an important element of this focus. At the same time, in the same department, other scholars may focus on critical, rhetorical, and social science approaches or work to create applications in pursuit of social justice.

Each area of focus generates the need for the development of appropriate tools of analysis. The result has been a process whereby many divergent elements exist within the same department, eventually creating the need for separate and more in-depth study, which in turn required some elements to separate from the main department. Fifty years ago, for example, departments such as Speech Communication and Theatre (SCaT) combined these elements. Due to their distinct interests and applications, however, most combined departments have now separated; SCaT, for example, has split into a Communication department and a Theatre department. Even when separate theater programs exist, some communication departments may still include oral interpretation of literature or considerations of text performance, resulting in overlap among departments. The development of communication departments and the boundaries of inclusion and exclusion reflect individual institutional development as well as perceptions of the appropriateness of particular disciplinary boundaries.

Regardless of the elements and areas of study included in communication departments, the fundamental core tenet involves an examination of how humans exchange messages to manage relationships and obtain information. While the introduction and improvement of technology changes certain practices of communication, the motivations and processes of communicating with other persons remain much the same.

Research in Communication

Any volume dealing with research in the discipline of communication needs to consider the boundaries of what most communication programs teach and research. Most communication research

involves efforts to improve the understanding of communication practices in order to provide a basis for improving the content of what is taught in courses. Research involves generating information about, and understanding of, elements of communication in order to improve practice. Ultimately, research in communication generates a means of understanding the process of message sending and receiving. The challenge for the communication scholar becomes selecting a communication situation or artifact, combined with an appropriate methodology (a tool or means), to provide a basis for making claims. The scholar's goal involves employing a tool that permits him or her to make claims based on the observed communication. The following sections consider the general issues of (a) what to study, (b) how to organize thinking about communication, (c) how to study communication, and (d) how to make methodological choices.

What to Study?

Defining the boundaries of what a communication scholar studies often becomes difficult. One position holds that a person "cannot not communicate," which means that all human actions, behaviors, and artifacts provide legitimate grounds for being called communication. A more restrictive view holds that communication involves the use of some type of symbol system with intended or ascribed meaning. Such a view, while more restrictive, is very common in the field of communication studies. Several of the entries in this encyclopedia provide a brief overview of the content areas involved in the study of communication. One of the best places to learn about the scope of communication scholarship is the set of divisions and interest groups of established communication organizations (National Communication Association, International Communication Association, World Communication Association, Central States Communication Association, Eastern States Communication Association, Southern States Communication Association, Western Communication Association). The range of topics and titles provides a means of evaluating the interests and orientations of the membership. The academy, as well as any discipline, is made up of many different communities that share an interest in some type of content and/or

method. Part of establishing a professional identity is the voluntary affiliation with members of the same community. Several of the entries in this encyclopedia describe professional organizations and the structure of the profession. These entries provide an understanding of the options that exist for scholars and often provide a website to permit contact (as does Appendix B: Resource Guide). Professional organizations typically organize conventions and publish journals. How to Organize Thinking About Communication

A theory is often defined as a set of statements that organizes relationships among conceptual elements. A theory specifies what elements are important and begins to define the world in terms of what a scholar should pay attention to when seeking to understand the process of communication. A study and theory of communication may be very specific; for example, a scholar may spend his or her entire career studying the effect of television violence on children. The theoretical tools that address this issue are less likely to explain how a person diagnosed with a disease seeks emotional social support from family members. Both questions represent important issues, but the theoretical positions underlying each probably share little in common. The decision of which methods to employ for any investigation requires an understanding of how to define the situation. A theory, particularly a good theory, articulates which elements are important considerations in representing the situation. Each theory should articulate which elements of the situation (variables) play an important role in understanding the communication process under scrutiny.

How to Study Communication

The entire variety of methods used to study humanity is available to the communication scholar. The range of approaches includes critical, rhetorical, quantitative, and qualitative approaches. Some scholars focus on political and social movements and advocate that the scholar become part of social activism. Other scholars operate as traditional scientists and use instruments to record physiological changes, as well as to track the potential contribution of genetic elements to communication practices. Still other scholars use artistic approaches such as performance art. The goal of the scholar may be public policy evaluation, social change, or simply an exhibition of critical understanding and representation of culture. The methods represented in this encyclopedia are not exhaustive but instead should be interpreted as indicative. Often a single entry in this encyclopedia about an element of a particular method could be studied or examined as the main topic of a number of textbooks. Many of the entries could be topics for further in-depth study with a variety of resources. The encyclopedia entries are intended to provide a definition and short description of various methods to assist the reader in determining whether or not the tool may be appropriate for use in his or her research. Other entries elaborate on a specific term or concept in order to increase the reader's understanding of it with regard to communication studies. Often, when considering research methodologies and making a decision regarding data analysis, one needs to understand what kinds of options exist. Thus, part of understanding a method involves understanding what is required to employ the method. Every method establishes the elements that are necessary to collect data (information) when undertaking the technique. For example, a Markov analysis requires that the data involve measurements over multiple time periods.

How to Make Methodological Choices

Scholars are expected and often required to generate original scholarship, not only to earn a doctoral degree but also ultimately in review for tenure and promotion. Each scholar is required to choose a content area as well as a method of collecting and analyzing data. Entries in this encyclopedia address a variety of methods for collecting and analyzing data. A scholar's choice of a content area indicates some level of personal preference and interest. For example, suppose a scholar wishes to study how family members communicate with each other. This topic indicates an important area for study with a number of important implications and considerations. But providing a focus of study does not mandate or require the use of any particular methodology. Scholars can examine communication issues related to the family using critical, quantitative, qualitative, and rhetorical methods. All the various methods could be employed to generate useful understanding of family communication. The choice of method requires an examination of the nature of the claim or conclusion the scholar wants to advance. A methodology provides a systematic set of approaches designed to guide the scholar in the search for information and the means to evaluate the results of that search. Each method involves a set of procedures and assumptions designed to rigorously test or evaluate the conclusions reached by the scholar. Scientific methods often involve reporting observations and typically generate descriptions of discovery. The scientist seeks to test whether a particular set of descriptions (theoretical statements) maintains accuracy when compared to the observed or empirical world. The tools of quantitative scientists involve statistical procedures that are capable of objective replication. Essentially, if conducted correctly, the results of any investigation would not depend on which scientist conducted the analysis. The process of science emphasizes the role of the process rather than the expertise or knowledge of a particular scientist. More humanistic methods involve the description of creation. Often the scholarly position is described as the development of a perspective. The goal of many scholarly efforts is the development of a theoretical position that generates an understanding of how a message becomes capable of interpretation. Humanistic methods, like rhetoric, do not require or expect that two individuals working with the same artifact necessarily generate the same conclusion.
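The point made earlier, that every method dictates its own data requirements (for example, that a Markov analysis needs observations coded over multiple time periods), can be sketched concretely. The coding scheme and sequence below are hypothetical, invented purely for illustration; this is a minimal sketch, not a full Markov modeling procedure.

```python
from collections import Counter, defaultdict

def transition_matrix(sequence):
    """Estimate first-order Markov transition probabilities
    from a time-ordered sequence of coded observations."""
    counts = defaultdict(Counter)
    for prev, curr in zip(sequence, sequence[1:]):
        counts[prev][curr] += 1  # tally each observed transition
    return {
        state: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
        for state, nexts in counts.items()
    }

# Hypothetical utterance codes over nine turns of an interaction:
# Q = question, A = answer, S = statement
codes = ["Q", "A", "S", "Q", "A", "Q", "A", "S", "S"]
probs = transition_matrix(codes)
# e.g., probs["Q"]["A"] == 1.0: every question here was followed by an answer
```

Without the time ordering of the observations, the transition structure could not be recovered at all, which is exactly the kind of data requirement a method imposes on the researcher.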

The Rationale for This Encyclopedia

Communication, as an academic discipline, incorporates a wide-ranging set of content as well as methods. Studying communication involves the examination of a process of human interaction that continues to evolve with technology and the development of organizations and groups. Attempts at understanding the process of human interaction require the inclusion of information about language, culture, history, sociology, and psychology. A successful line of research provides an interesting set of perspectives on the process of development, maintenance, organization, and dissolution of personal and organizational relationships. The study of methodology represents a constantly emerging area of definition and application. Methodology, whether critical, rhetorical, qualitative, or quantitative (or some combination), continues to change. While some elements, like the calculation of the Pearson correlation coefficient, have remained the same for over 100 years, the understanding of that calculation and its implications continues to receive attention. The boundaries of the discipline remain difficult to define, given the ubiquitous nature of communication combined with its overwhelming variety. The continuation of scholarly pursuits in what amounts to a vital area of understanding promises further development of this area. Rather than generating difficulty, the variability and openness of the system provide opportunity for innovation. Communication scholars study the process of human interaction, something that dates back to Greek philosophers such as Aristotle and Socrates. The arguments made in classical rhetoric or by the Sophists represented the method of rationalism, whereby the great philosophers relied on rational argument and experience to draw conclusions. While current scholarship would still represent arguments as rational and incorporate experience, the application of most methodologies requires more than a great thinker simply outlining ideas. Much as communication between persons represents a dynamic process, the academic study of it and its methods reflect that same dynamic process.
This encyclopedia provides some insight and understanding of the possible content and methods of such investigations and a first step to understanding many of those approaches.
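As one illustration of a long-stable calculation of the kind mentioned above, the Pearson correlation coefficient divides the covariance of two variables by the product of their standard deviations. The data below are hypothetical, chosen only to show the computation.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient:
    covariance of x and y over the product of their standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear positive relationship yields r = 1.0
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```

The formula itself is unchanged since Pearson's era; what continues to evolve, as the text notes, is the interpretation of the coefficient and of its limits.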

Organizing the Encyclopedia

The encyclopedia is arranged alphabetically and includes entries from "Agenda Setting" to "Media Diffusion" to "Z-score." To guide readers in understanding communication research and communication research methods, the Reader's Guide organizes entries into five categories ("Creating and Conducting Research"; "Designing the Empirical Inquiry"; "Qualitatively Examining Information"; "Statistically Analyzing Data"; and "Understanding the Scope of Communication Research"), each of which is further organized into more specific subcategories. For example, "Designing the Empirical Inquiry" includes entries arranged in the following subcategories: Content Analysis, Internet Inquiry, Measurement, Research Subjects/Participants, Sampling, and Survey Research. In general, the encyclopedia includes content that addresses the topics or areas of research that interest communication scholars, the methods communication scholars employ, and the professional structure and practice of communication scholarship. Each entry provides important information necessary for using and understanding research in communication. Entries in this encyclopedia that discuss topics or areas of communication research provide an overview of what defines that area of research. The entries also describe some of the enduring research questions or theories employed in each area and comment on the nature of the methodologies and data collection procedures used. The design of these entries involves answering the questions "What does X area study?" and "How do scholars conduct scholarship in this area?" The first step in scholarship is an important question: "What do you want to know?" Understanding the nature of the claim a scholar wants to make (e.g., empirical, ethical, advisory) provides the basis for the investigation. After one defines the central question of what knowledge or understanding is sought, the next step is methodological. Methodological concerns deal with the acquisition and analysis of information using some structured system to justify a claim. Some scholarly communities seek basic knowledge (see, e.g., the entry "Organizational Communication"), whereas other scholarly communities seek social change (see "Activism and Social Justice").
These entries provide understanding of the various communities that make up the discipline. Understanding what topics scholars in communication studies examine and what theories scholars employ provides a basis for choosing a particular methodology. Scholarly communities establish common means of working toward generating claims. Therefore, several entries in the encyclopedia provide the underlying information necessary to ultimately understand how to select a particular method by supplying information about the scope of the communities. The methods usually employed by communication scholars fall into three key areas: (a) rhetorical or critical, (b) qualitative, and (c) quantitative. Each method provides a means of assembling information used to make an evaluation that leads to a claim. Entries in this encyclopedia examine the central issues for each method and the procedures involved in the various programs of assembling and evaluating information using the particular method. Rhetorical or critical approaches to scholarship involve the application of a perspective or theory to the particulars of a discourse or artifact. The encyclopedia includes entries that examine many of the major approaches to this form of scholarship (e.g., Aristotelian, Burkean, psychotherapeutic, feminist, historical, dramatistic). Each approach provides a different perspective on the basis of the communication encounter that structures and provides meaning. These entries examine the basis of each of the methods and the approaches to employing that method. Qualitative scholarship typically employs methods dealing with critical discourse or language-based approaches. The encyclopedia considers each of the various elements or approaches and what is necessary for its successful application. Often critical methods employ the perspective of a particular identity (e.g., race, ethnicity, gender, sexual orientation) as a basis for understanding communication. The use of various language devices (e.g., metaphors, themes) provides a means for analysis. The encyclopedia contains entries that examine the various elements of and approaches to qualitative methods.
The encyclopedia also includes entries that examine quantitative methods both in terms of design (e.g., survey methods, experimental design, content analysis, and nonexperimental designs) and analytic devices using various statistical procedures (e.g., analysis of variance, causal models, stochastic models). The issues and approaches of sampling techniques also receive attention. The content addressing quantitative approaches provides access to the different standards and options that exist for this form of scholarship. The internal validity of a design, the generalization or external validity of an analysis, and the reliability of measurement (scale or interrater reliability) each receive separate attention. The encyclopedia intends to define most of the terms used and applied in the understanding of quantitative research. Many of the terms involve technical and precise definition and application in communication scholarship, so the entries in this encyclopedia provide a means to demonstrate the techniques as applied by communication scholars. In addition, the encyclopedia provides content to inform understanding about the structure of the academic profession involving communication as well as the process of communication scholarship. Like most academic communities, scholars in the communication discipline belong to professional organizations that support the research, teaching, and service efforts of faculty. Thus, several entries in this encyclopedia describe various professional organizations, including their scope and the publications they support, with the intention of identifying likely sources of support for scholars. Because the practice of scholarship involves creating an idea, vision, or question to guide the process of knowledge generation and distribution, a set of entries in the encyclopedia discusses the process of submitting papers to conventions, articles to journals, and book-length manuscripts to publishers. The process of writing, editing, and reviewing forms the backbone of scholarly publication, so this encyclopedia provides a series of entries devoted to understanding how academic papers are written and eventually chosen for publication and presentation at a convention.
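As a concrete instance of the interrater reliability statistics mentioned in this section, Cohen's kappa corrects two coders' raw agreement for the agreement expected by chance. The codings below are hypothetical, and this is only an illustrative sketch of one common reliability index, not a recommendation over alternatives such as Krippendorff's alpha.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: (observed agreement - chance agreement)
    divided by (1 - chance agreement)."""
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # chance agreement from each coder's marginal category proportions
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in set(c1) | set(c2))
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical coders rating the same ten messages
r1 = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "pos"]
r2 = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "pos", "pos"]
kappa = cohens_kappa(r1, r2)  # raw agreement 0.8; chance-corrected kappa is lower
```

The gap between raw agreement and kappa shows why chance correction matters: two coders using the same categories heavily will agree often even when coding at random.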

Acknowledgments

The encyclopedia represents the largest single project I have undertaken in my career. A lot of moving parts and pieces needed to be done in a hurry. A special thanks to my editorial board: Raymond Preiss, Dale Hample, Nancy A. Burrell, and Mark Hamilton. The SAGE editorial and support team was very gracious and directive during the entire process. Finally, I owe a big thanks to the hundreds of scholars who gave of themselves in support of this project.

Mike Allen


A

Abstract or Executive Summary

An abstract is a brief summary of a document, such as a journal article. An executive summary is a summary of a longer document. At times, an executive summary and an abstract are used interchangeably; however, there exist differences between the two. This entry describes the standard elements of an abstract and discusses how an executive summary and an abstract differ.

Abstract

In academia, an abstract is usually required when submitting journal articles, conference papers, and book proposals, as well as when applying for research grants. The length of an abstract varies depending on the publication, but 100–250 words is standard for journals in the communication field. An abstract is not an excerpt from the main text but an original document that is self-sufficient without reference to the main text. Thus, copying from the main text is considered unacceptable, as readers will notice it once they read the main document. At the same time, including key phrases and terms that readers might look for is necessary so that readers can easily find the article in online searches. An abstract is not an evaluation of the main text either. Rather, it is a condensed version of the main text that includes its main points. The role of an abstract has become more important with the popularity of online searching, because often only the abstract is accessible to everyone online for free. Thus, an abstract works as a point of entry for many readers, whereby they can evaluate whether the main document is useful for their purposes.

When submitting a document for publication, the abstract is usually placed at the beginning of the document, followed by a set of keywords. However, writing the abstract after completing the document is recommended to ensure a thorough and accurate abstract. Along with the abstract, keywords need to be selected carefully, as they will assist readers in finding the document via online searches. Including key information about the main document in a precise manner is important to help readers decide whether to read or use the document.

Standard components of an abstract include (1) the research focus and a problem statement, (2) the method, (3) the major findings or results of the study, and (4) the conclusion. Each section usually contains one or two sentences. Because of the limitations on the length of an abstract, each sentence should convey as much information as possible, using optimal wording and expressions. An abstract is usually written in a mix of past and present tense: in general, the method and results sections are written in past tense, whereas the conclusion section is written in present tense. In addition, writing an abstract in active voice tends to make it more powerful and dynamic in conveying the contents. Thus, a good abstract is one that provides accurate information to readers in a concise manner. Next, this entry explains each part of an abstract in detail.

Research Focus and Problem Statement

First and foremost, explaining the purpose and scope of the research is crucial in the abstract. When explaining the purpose, providing a bit of background helps readers realize the value of the document. If the document is a replication or part of a larger research project, it is important to state this clearly by citing the previous research and the larger research project. In addition, the approach and the theoretical framework used are important for some readers to see how the research proceeded.

Research Method

For the method, clarifying the size of the data set, its characteristics, and/or demographic information about the participants is important. In addition, the types of analyses and sometimes the types of tools the researchers used are important information for certain readers deciding whether they should read the main document.

Major Findings and Results

In this section, the major findings are listed and briefly explained. If there are several findings, one or two major findings can be highlighted. A good abstract provides readers with accurate information without using ambiguous words or phrases. If the investigation used a quantitative research method, including numerical information, such as effect sizes, significance levels, and confidence intervals, along with the method used, is important.

Conclusion

What is the author's conclusion based on the results? As in the conclusion section of the main text, the author cannot speculate or conclude something that the data did not support. Providing accurate and precise interpretation of the data is important. In addition, the impact and implications of the research can be included in this section.

Executive Summary

An executive summary is used more frequently in business contexts than in academia; thus, there are several differences between an executive summary and an abstract. First, the purpose is different. An executive summary is commonly used in business contexts to provide a shorter version of a longer report or document for business executives who may be too busy to read the entire document, whereas an abstract is used to invite readers to use or read the main text. Second, unlike an abstract describing research, an executive summary often avoids technical terms because the readers may be non-experts in the field. Third, an executive summary is much longer than an abstract. A typical executive summary is about 5–10% of the length of the main document but usually does not exceed 10 pages; using subheadings is thus advised to increase readability. In general, communication journals and conference papers require authors to include an abstract, not an executive summary.

Kikuko Omori

See also Academic Journal Structure; Publication Style Guides; Publications, Scholarly; Publishing Journal Articles

Further Readings

American Psychological Association. (2009). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author.
Bem, D. (1987). Writing the empirical journal article. In M. P. Zanna & J. M. Darley (Eds.), The compleat academic: A practical guide for the beginning social scientist (pp. 171–204). Mahwah, NJ: Erlbaum.
Hofmann, A. (2013). Scientific writing and communication: Papers, proposals, and presentations. New York, NY: Oxford University Press.
Koopman, P. (1997). How to write an abstract. Retrieved May 17, 2015, from https://spie.org/Documents/Publications/How%20to%20Write%20an%20Abstract.pdf
Rubin, R. B., Rubin, A. M., & Haridakis, P. (2009). Communication research: Strategies and sources. Belmont, CA: Wadsworth.

Academic Journal Structure

The topic of academic journal structure refers to the content of, in this case, communication studies journals. More simply, it refers to what types of research one is likely to find in particular journals. This topic is relevant not only to the student but also to the scholar seeking to publish his or her original research. In this entry, most of the principal journals in communication studies are characterized as a guide for the reader, beginning with the most general publications and moving to the most specialized ones.

The intent of most people who enter the profession of college or university professor is to teach. However, it does not take long for the newly minted professor to learn that progress toward higher ranks in the profession and the award of tenure depend not only on teaching ability but also on service to the college or university and the profession and, preeminently, published research. Most academic institutions have specific expectations for how much published research a professor should produce in particular periods and some sense of the quality of the academic journals in which it appears. These journals are similar to popular, commercial magazines, except that they are usually published by professional academic organizations and contain the texts of research articles. They also typically appear only a few times per year rather than weekly or monthly.

The field of communication studies has changed dramatically since the early 1960s. At that time, there were many fewer journals in this field but also many fewer publishing scholars. Expectations were also lower, with some institutions not looking for any publication at all from their professors. Today, there are multiple publication outlets, many more practitioners, and heightened expectations. Where the National Communication Association (NCA) then had only three journal publications, today it has 11. The International Communication Association (ICA) publishes five journals, and each of the four regional communication associations publishes at least one.
Some subareas of communication studies, such as rhetoric and religious communication, have formed their own professional organizations and publish such journals as the Rhetoric Society Quarterly and the Journal of Communication and Religion. There are now also numerous international communication journals, such as the European Journal of Communication and the Chinese Journal of Communication. Several journals that are not strictly communication oriented, such as the Journal of Social and Personal Relationships and Personal Relationships, nonetheless publish a large number of studies involving communication.

It would appear that any scholar seeking to publish research findings today would have a relatively easy time doing so. This is not the case, however, since the competition for publication has become fiercer. During James Donald Ragsdale's 3-year tenure as editor of the Southern Communication Journal (volumes 77–79), submissions came in at the rate of 10 or so a month, while acceptances were about 20% of that. Acceptance for publication depends not only on the quality of the manuscript but also on the journal's level of financial support. Publication is gratis, although in some fields, especially scientific ones such as crop science, scholars may be asked to pay the cost of publication when a journal issue exceeds its normal page limitations.

In the field of communication studies, as in other academic areas, the focus of some journals is general. The standard is the quality of the research, not the subject matter or the research methodology. The ICA's Journal of Communication is a leading example. It also publishes book reviews, which are not so often found in other journals in communication studies. The Southern Communication Journal is also a general publication that carries book reviews. Communication Quarterly, one of three journals published by the Eastern Communication Association (ECA), the Western Journal of Communication, and Communication Studies, the latter being the journal of the Central States Communication Association, are also general journals.

Many other journals, however, focus on specific subject areas, research methodologies, or both. Communication Monographs, for example, primarily publishes longer or monograph-length studies in the area of communication theory that utilize quantitative methodologies.
ICA publishes a similar journal called Human Communication Research. NCA’s Quarterly Journal of Speech emphasizes studies in rhetorical theory and public address. The Rhetoric Society of America offers the Rhetoric Society Quarterly, and devotees of literary theorist Kenneth Burke, through the Kenneth Burke Society, produce the Kenneth Burke Journal.


Returning to NCA journals, Communication Education publishes studies of instructional methodologies and techniques, and Communication Teacher features pieces on K–12 as well as college classroom activities, assessment, and original teaching methods. Other specialized publications of NCA include the online-only Review of Communication, which features cross-disciplinary studies of subareas of communication; the Journal of International and Intercultural Communication; Critical Studies in Media Communication; and the Journal of Applied Communication Research. Comparable journals published by ICA are Communication, Culture, and Critique and the Journal of Computer-Mediated Communication.

Outside of the publications of the international, national, and regional professional organizations in communication studies, several other organizations are responsible for journals frequently consulted by communication studies scholars. Among these are the Journal of Social and Personal Relationships and Personal Relationships, which contain research, normally empirical, relevant to the large subarea of interpersonal communication. The Journal of Family Communication publishes research on communication within families regardless of whether the methodology is quantitative or qualitative.

Finally, there are a few quite specialized journals, which offer the scholar rather unique sources of information or outlets for publication. There is, for example, the Review of Communication Research, a publication devoted entirely to reviews of literature in areas of communication studies. The Western States Communication Association offers Communication Reports, which carries short data- or text-based studies. Similarly, the ECA offers Communication Research Reports, with brief empirical articles of 10 or fewer typewritten pages. The latter two publications provide outlets for research that is not yet sufficiently well developed for a longer monograph-length manuscript.
ECA also publishes an annual called Qualitative Research Reports in Communication. The number and variety of journals in communication studies offer the scholar ample outlets for the publication of his or her research.

James Donald Ragsdale

See also Academic Journals; Communication Journals; Interdisciplinary Journals; Pay to Review and/or Publish; Publishing Journal Articles; Submission of Research to a Journal

Websites

Central States Communication Association: http://www.csca-net.org/aws/CSCA/pt/sp/home_page
Eastern Communication Association: http://www.ecasite.org/aws/ECA/pt/sp/p_Home_Page
International Communication Association: https://www.icahdq.org/
National Communication Association: https://www.natcom.org/
Western States Communication Association: http://www.westcomm.org/

Academic Journals

Academic journals serve a vital function in furthering the discipline of communication studies by showcasing the latest research on a variety of topics in different contexts (e.g., at home, the workplace, policy, mass media, social groups). Articles appearing in such journals are typically peer-reviewed by other scholars before appearing in print, so they often enjoy greater legitimacy compared to self-published manuscripts or books. Journals are usually ranked on the basis of impact factor, which measures the number of times their articles are cited in recent years. Many university departments and schools of communication thus base their hiring, promotion, and tenure decisions on candidates' records of publishing in peer-reviewed journals. This entry explains the various types of communication journals and then addresses several issues of concern.
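The conventional two-year impact factor mentioned above is a single ratio: citations received in a given year to a journal's items from the previous two years, divided by the number of citable items published in those two years. The journal figures below are invented purely for illustration.

```python
def two_year_impact_factor(citations, items):
    """citations: cites received this year to each of the two prior
    publication-year cohorts; items: citable items in those cohorts."""
    return sum(citations.values()) / sum(items.values())

# Hypothetical journal, computed for 2016:
jif = two_year_impact_factor(
    citations={2014: 180, 2015: 120},  # cites in 2016 to 2014/2015 articles
    items={2014: 40, 2015: 60},        # citable items published in 2014/2015
)
# 300 citations / 100 items gives an impact factor of 3.0
```

Note that the ratio says nothing about the citation pattern of any individual article, which is one reason the measure is debated as a proxy for research quality.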

Types of Communication Journals

The major professional associations of the field (e.g., National Communication Association, or NCA; International Communication Association, or ICA) have partnered with publishers to produce a suite of communication journals, on topics such as applied communication, critical cultural studies, media and political communication, and rhetoric.


While NCA and ICA journals are usually ranked highly by rating agencies such as Thomson Reuters (previously the Institute for Scientific Information, or ISI) according to impact factor (led by NCA’s Communication Monographs and ICA’s Journal of Communication), some journals produced by regional communication associations (e.g., Central States Communication Association), by individual divisions of NCA or ICA, or even journals unaffiliated with any association remain highly respected venues for scholarly work. In line with the ubiquity of communicative processes in general, and the history of communication studies as a field with interdisciplinary roots, communication scholars also publish in interdisciplinary, special-topic journals that enjoy high impact factors and researcher credibility (e.g., New Media & Society, Public Relations Review).

Since the late 1990s, two important trends have surfaced. First, as journal publications have gradually become expected of doctoral students even before they graduate, several journals catering exclusively or largely to graduate student research have emerged (e.g., Kaleidoscope). Many of these student-centered journals adopt an open-access online format and are backed by credible editorial boards from well-known universities. Second, broader consumer shifts toward online rather than print formats mean not only that some venerable journals plan eventually to transition online (most notably ICA; its Journal of Computer-Mediated Communication is already an online-only publication), but also that several new journals are purely online (e.g., International Journal of Communication). While there has been much consternation in academe about the validity and rigor of online academic journals, many of these ventures in communication studies are backed by well-known scholars and institutions, although some of them may lack impact factor ratings. Moreover, online journals typically feature more progressive copyright policies than traditional print journals, permitting authors to reuse or distribute their work in broader forums.

Contemporary Issues

Published research in academic journals often sets the benchmark for top-notch scholarship in communication, as in other scholarly fields, and helps organize a knowledge-sharing network for scholars. This network helps categorize not just key theoretical concepts and methods but also the academics who draw on those concepts and use those methods. This can be a source of friction, however, given that most highly ranked journals publish functionalist and quantitative research rather than interpretive, rhetorical, critical, or feminist work. For instance, critics have averred that high-ranking journals’ standards of scientific rigor and “quality research” are often used to dismiss work emphasizing humanistic and emotive/evocative elements (e.g., studies using autoethnography, or critical discourse analysis using smaller samples). To counter these perceptions, and to help encourage “nontraditional” scholarship, several new journals have been instituted (e.g., NCA’s Journal of International & Intercultural Communication, ICA’s Communication, Culture, and Critique). Editors of established journals frequently reiterate the need to publish “alternative” research and issue calls for special issues on niche topics to address some of these disparities. While such measures have been received favorably, ensuring adequate representation for scholars and scholarship remains an abiding concern.

Another persistent concern is the relatively low impact factors of communication journals compared to those in other social sciences, such as sociology, psychology, and organization studies. A possible reason is that while communication journals frequently cite research from other disciplines, few articles in those fields cite studies published in communication journals. Moreover, very few communication scholars publish in journals from other fields, and even fewer in high-profile outlets like Science or Nature.
Scholars remain divided as to how to rectify this situation. Some argue that researchers should showcase the communicative contributions to knowledge generation at large by emphasizing communication as a defining characteristic of human behavior; others assert that researchers ought to downplay the distinctiveness of communication research and ally more broadly with other fields.

Finally, the evidence on the inclusivity of communication journals is mixed. The gender gap appears to be closing, with more female-authored articles being published, a welcome sign for a discipline in which women constitute the numerical majority. More collaborative research is also appearing in print, by scholars from different (national and international) institutions and from different disciplines. Most published manuscripts are contributed by assistant professors, although full professors remain the most productive group, indicating the need for more effective professional development for assistant and associate professors. Importantly, research from North American (and European) contexts dominates communication journals, with relatively few studies based in Asia, Africa, or South America. Thus, while there is evidence that academic journal publishing in communication is gradually becoming less insular, there is much room for further inclusion.

Rahul Mitra

See also Communication Journals; Interdisciplinary Journals; Peer Review; Peer-Reviewed Publication; Publications, Open-Access; Publication, Politics of; Publications, Scholarly; Rigor

Further Readings

Bunz, U. (2005). Publish or perish: A limited author analysis of ICA and NCA journals. Journal of Communication, 55, 703–720. doi:10.1111/j.1460-2466.2005.tb03018.x
Kramer, M. W., Hess, J. A., & Reid, L. D. (2007). Trends in communication scholarship: An analysis of four representative NCA and ICA journals over the last 70 years. Review of Communication, 7, 229–240. doi:10.1080/15358590701482024
Levine, T. R. (2010). Rankings and trends in citation patterns of communication journals. Communication Education, 59, 41–51. doi:10.1080/03634520903296825
Poor, N. (2009). Copyright notices in traditional and new media journals: Lies, damned lies, and copyright notices. Journal of Computer-Mediated Communication, 14, 101–126. doi:10.1111/j.1083-6101.2008.01433.x

Acculturation

Acculturation is most commonly described as adjusting to a new environment or culture. The process of acculturation can happen to anyone who travels, works, or lives in a new place, and it includes both short-term and long-term experiences in a new environment. Acculturation encompasses the psychological, social, and physiological aspects of adaptation to a culture. To acculturate, one must observe the new culture to learn its patterns of behavior; an individual can then modify his or her own behavior to “fit” the new environment. Not everyone acculturates in the same way, and some do not acculturate at all, resulting in acculturative stress. This stress mentally affects individuals who are unable to adjust to the new environment. The differences between the home culture and the host culture vary, but adapting to them is part of the acculturation process; they include food, clothing, social behaviors, communication styles, and more. Socialization, learning the way individuals interact with one another in the host environment, is also part of acculturation. Understanding the acculturation process is important for communication studies researchers, whether they are specifically studying the assimilation effect or simply have subjects within their group of respondents undergoing acculturation. In the remainder of this entry, various processes, timing, and examples of acculturation, including reentry, are examined, as are forms of assistance in the assimilation process.

Processes

The process of cultural change, acculturation, happens in several forms. Social aspects of a new culture influence how one adapts. Through observation and awareness, it is possible to learn the rules of the new culture. For example, how people enter and exit public transportation, shop at a grocery store, or greet one another at a party are all observable behaviors. Once these behaviors are learned, it is possible to employ them to gain a sense of belonging in the new culture. For some, adjustment takes little to no time; for others, it takes longer. This can depend on personality and background, as well as prior personal experience with travel and adjustment. It is also possible to experience confusion, anxiety, and depression in a new environment, as well as a feeling of alienation. Being culturally aware and having social support can help ease the transition from the home culture to the host culture so that adaptation, or acculturation, occurs.


Lack of appropriate training or orientation hinders the ability to adapt and to understand why certain behaviors occur in the host culture. Preparation through research, reading, training, and orientation readies individuals who plan to reside in a host culture for short-term or long-term stays. Training can be provided by professionals, such as study abroad advisors for students studying abroad or cross-cultural trainers for expatriates moving overseas on assignment. Even traveling abroad on vacation calls for researching online, reading travel books, and consulting travel agents to understand the host environment and to prepare in advance for anything that may seem shocking or strange compared to the home culture.

Synonyms and Timing of Acculturation

Researchers have used a variety of terms that are synonymous with acculturation. Cultural adjustment, cultural adaptation, assimilation, and culture shock have all been used interchangeably with acculturation, or as ways to describe the changes people experience in moving from one cultural environment to another. This transitional process is described as culturally adjusting to the host society, whether the move involves an organizational change, study abroad, relocation, or an expatriate assignment. There is no set time for acculturation. It may happen quickly for individuals who have been through cultural change before, or take longer for someone who has rarely left the home environment. Negotiating the host environment helps with the emotional component of acculturation, easing the tension, confusion, and anxiety caused by cultural change. Negative experiences in the host environment can have a lasting impact and can delay or even halt the acculturation process. For example, someone who is robbed on the first day in a foreign country may develop terrible feelings about that country or city. If anger, hostility, or prejudice is experienced at the start of acculturation, emotional attitudes and feelings will be altered and frustration can build. Local people, or hosts, have a lasting impact on how individuals acculturate and adjust to the host environment. If the initial experiences are positive, acculturation is likely to occur more quickly. However, if the initial experiences are negative, they may mentally scar individuals and prevent them from acculturating to the foreign culture.

Examples of Acculturation

The period of transition and adaptation to a new culture happens at any age and in any place. For university students, acculturation most commonly takes place through the transition to college. Moving out on their own for the first time means new routines, new social support, and living without family and close friends, each of which has an extraordinary impact on the move toward adulthood and maturity. This transition is acculturation to living on one’s own for the first time and learning how to make choices for oneself as an adult. In addition, while in college, another way to experience acculturation is through study abroad. Students prepare themselves for the “unknown” but are not able to truly experience the change until they are in the new country. Observing new behaviors and differences from the home country can create feelings of anxiety and uncertainty, making it difficult to acculturate. Other students embrace culture shock and immerse themselves in the new society to learn and adapt.

Expatriates experience acculturation through cultural adaptation in the workplace. Work norms and ethics may differ greatly from one office to another, whether in the same country or outside of the home country. Not all expatriates receive appropriate training in advance to learn the differences between the home and host cultures, causing hardship once abroad. This can be difficult not only for the expatriate but for the host country employees as well. Acculturation embodies the ability to adjust to the new environment and to adapt socially to the surroundings. Expectations may differ from what was anticipated, especially when working in a new office, and seeking out clarification and asking questions can ease the transition and minimize stress in a new position. Acculturation also takes place when relocating to a new city.
No matter the reason for relocation, individuals need to prepare themselves for changes in cost of living, available amenities, transportation options, and socialization. Moving from one part of a country to another may cause culture shock. It can be daunting to interpret how people treat one another differently or are more or less assertive in the host environment. Recognizing the changes in the host environment helps those relocating to cope with the situation and acculturate to the new area.

Reentry Shock

Although acculturation typically refers to the initial process of relocation or transition, it also takes place when returning to the home environment. During a long-term assignment abroad, individuals typically adapt to the new culture and take on new behaviors and norms. After becoming comfortable in the host environment, it may be difficult to transition back to the home environment after study abroad or living overseas. Individuals re-adapt to the home culture and may not want to resume previous behaviors. This reentry shock can be as simple as noticing that the recycling and composting some cultures provide are not available in the home community. The process of acculturation begins again upon re-entering the home environment.

Acculturation may be easier for those who are more flexible, confident, mature, and optimistic. Finding locals in the host environment who can explain differences in observed behaviors is beneficial when adjusting to a new culture. This support enhances a foreign experience and helps individuals acquire the social skills needed to adapt to the host culture. Host nationals are some of the best people to informally educate visitors on the culture, easing the transition and providing a more meaningful experience abroad.

Kim Omachinski

See also Cultural Sensitivity in Research; Cultural Studies and Communication

Further Readings

Berry, J. W., & Sam, D. L. (1997). Acculturation and adaptation. In J. W. Berry, M. H. Segall, & C. Kagitcibasi (Eds.), Handbook of cross-cultural psychology, Vol. 3: Social behavior and applications (2nd ed., pp. 291–326). Boston, MA: Allyn & Bacon.
Kim, Y. Y. (1988). Communication and cross-cultural adaptation: An integrative theory. Philadelphia, PA: Multilingual Matters.
Ward, C., & Kennedy, A. (1994). Acculturation strategies, psychological adjustment, and sociocultural competence during cross-cultural transitions. International Journal of Intercultural Relations, 18(3), 329–343.

Acknowledging the Contribution of Others

Few research works are solely the product of the persons listed as the authors. At a minimum, the need to refer to the writings of others indicates how ideas or data are used to support the argument or conclusion of the writing. In addition to the contributions of previous research or writing, there may be persons or institutions that contributed to the generation of the final product; those contributions require acknowledgment. Ideas very seldom originate without some basis in previous reading; at a minimum, scholars often want to connect current ideas to the previous generation of scholarship and scholars. Acknowledging the contribution of others involves both intellectual acknowledgment and recognition of those persons who contributed materially to the text under discussion. The question of what contribution qualifies a person for inclusion as an author is not subject to clear and efficient standards. Little agreement exists over the particulars of the criteria for attribution, although some professional organizations have set out standards to articulate the relevant requirements and obligations. This entry explores standards for authorship and acknowledgment, examines the issues of plagiarism and self-plagiarism, and discusses other ethical considerations with regard to research and publication.

Standards for Authorship

A central question in group research involving a shared data set is what contribution qualifies, or requires, the recognition of a person as an “author” of the project. The usual statement is that, to be an author, a person must make a “substantial” contribution to the manuscript. What constitutes a contribution, and whether any such contribution is substantial, remains a topic for development. Clearly, all persons making a substantial contribution deserve authorship, or at least should be considered and given the opportunity to be listed as an author. Failure to include such persons, or to provide that opportunity, creates the risk of an accusation of “stealing,” or taking credit for the work of others, a relatively serious charge in the academic community. The work of a scholar usually implies effort not directly paid for; instead, the expectation that scholars publish research becomes part of the yearly expectations for productivity. The question can plague a research team in which one of the persons involved is the chair of the department or the academic advisor of a graduate student. The lower-status person may feel required to include the higher-status person, or the higher-status person may “request” to be included as an author despite minimal or no contribution to the project.

Given that few social science publications carry the potential for profits (unlike an innovation that is patented), the question of acknowledgment carries ethical considerations rather than a contractual obligation or ownership in a strict sense. This means that such violations or problems usually do not involve typical legal standards, because no financial implications exist. The matter instead is addressed by professional academic organizations, with the potential for institutions to become involved if the failures are considered serious or the behavior unprofessional. No real accepted set of standards exists for assigning value to a contribution in these circumstances. An open, complete, and honest discussion at the beginning of the project about responsibilities, expectations, and any future obligations when using the shared information is often advised for those involved in a joint project.

Standards for Acknowledgment

The question of acknowledging contributions to a project is less well defined in terms of what is expected or required. The general statement is that a person must make a substantial contribution to the production of the outcome. Such contributions can include library research, data collection, data analysis, writing or editing the manuscript, as well as brainstorming or contributing to the intellectual development of the project. The problem is that the standards are not stated in a manner whereby one can simply consult a checklist and determine whether they have been fulfilled. This lack of clarity means that any person can make a determination of what actions constitute a “substantial” contribution to the development and completion of the project. The question of acknowledgment arises whenever a group of persons works on a project that is published. Should all members of the work group be included as authors? In what order should the members of the project be listed? In the case of a large data set with multiple projects, what future obligations or expectations are created in terms of recognition? Does a person collecting data have permanent authorship of all subsequent manuscripts that involve that data set?

Standards for Plagiarism

The standards for plagiarism are relatively well established and usually capable of clear explanation. An accusation of plagiarism is a claim that the material published was written by someone other than the author and was not correctly credited as such; essentially, a person claims credit for the work of another person. Generally, when accusations of plagiarism are made, a formal process follows, with a hearing and evidence presented about the truth of the claim. Because the issue of plagiarism involves a previously published work that is copied, the usual determination examines the earlier work for similarity to the later published work, using the standards for academic writing relevant to the particular discipline. The question becomes whether the later work unfairly uses material from the earlier work without attribution.

Self-Plagiarism

If an author uses his or her own previously published material, the charge is not technically plagiarism but rather copyright infringement. Publications often require an author to declare that the material published represents original work not published elsewhere. A violation of this requirement is therefore not an accusation of plagiarism but a violation of ethics, subject to whatever consequences the relevant professional organizations decide are appropriate. If the publication in question has generated income (in the form of royalties), then some economic considerations and complications may arise. Authors will often provide a note and reference to previously used data, methods, or material to permit persons to make comparisons. In the case of quantitative data, recognition of the multiple uses of the same data permits reviewers not to over-represent the data in a summary analysis. If an author uses the same words as in a prior publication, representing the material as a quotation with an appropriate reference is generally considered sufficient recognition to avoid a charge of plagiarism or copyright infringement. At issue is the degree of similarity and overlap of content with the previous work. Various editors, journals, and disciplines may apply separate sets of standards, so what might constitute a clear violation in one discipline might be considered normal, expected practice in another. The lack of universal agreement on the standards for correct attribution makes any reuse of material subject to claims of some level of unprofessional or unethical conduct.

Technically, a scholar cannot plagiarize work written by himself or herself. However, a scholar can use material or data in one publication and then reuse that information in another publication. In such cases, the violation is not considered plagiarism but may involve copyright violation, because for most published works, authors assign copyright to the publisher. Republishing such work may violate a contract or signed agreement as well as professional standards. Republication, or reprinting, can occur if appropriate permissions are granted. When permission is granted, the prior work should be listed and represented as a reprint within the work. A clear listing provides clarity and identifies the work as a reprinted prior work. Under those conditions, self-plagiarism is not considered to exist because the work was simply republished, not represented as a unique and separate effort. The situation becomes more complicated if the original work undergoes editing or other changes so that the work is no longer simply the same, but the level of change fails to create a truly distinct and unique work.

Ethics of Research and Publication

The challenge of acknowledgment is often an ethical matter rather than one governed by strict rules. Every discipline and branch of a discipline develops standards and guidelines to establish which persons are entitled to authorship. For communication scholars, the webpages of the National Communication Association, the International Communication Association, and the Central, Western, Eastern, and Southern regional communication associations can provide guidance. Other than plagiarism, which carries the potential for formal sanctions, the usual standard involves ethical or professional conduct. Such issues carry implications for the professional reputation of a scholar and the willingness of others to collaborate, and thus should be carefully evaluated. One recommendation that can provide for smoother working professional relationships is establishing the groundwork for authorship and recognition at the beginning of the project. Understanding the expected benefits and obligations of a project can create clarity and more harmonious social relationships among professionals. Generally, a sound rule of thumb is that including someone carries less risk than removing or failing to include someone. The future of the data sets, and the subsequent work coming from additional analyses, should also be discussed at the beginning of the project to clarify expectations and permit the entire project to move forward.

Christopher J. E. Anderson and Mike Allen

See also Abstract or Executive Summary; Authorship Credit; Ethics Codes and Guidelines; Plagiarism, Self; Publications, Scholarly; Publication Style Guides; Publishing a Book; Publishing Journal Articles; Writing Process, The

Further Readings

American Psychological Association. (2015). Publication practices & responsible authorship. Retrieved May 12, 2015, from http://www.apa.org/research/responsible/publication/
Committee on Science, Engineering, and Public Policy; Institute of Medicine; Policy and Global Affairs. (2009). On being a scientist: A guide to responsible conduct in research (3rd ed.). Washington, DC: National Academies Press. Retrieved May 12, 2015, from http://www.nap.edu/catalog/12192/on-being-a-scientist-a-guide-to-responsible-conduct-in

Activism and Social Justice

The National Communication Association’s (NCA) Activism and Social Justice Division promotes scholarship (both research and teaching) that explores relationships among communication, activism, and social justice. Activism means engaging in direct, vigorous action to support or oppose one side of a controversial issue. In this case, that activism is directed toward social justice, in which people have their human rights and freedoms respected, receive equitable treatment with regard to opportunities and resources, and are not discriminated against because of their class, gender, race, sexual orientation, and similar identity markers. Communication activism for social justice, thus, means engaging in communicative practices to promote social justice; accordingly, activism and social justice communication scholarship involves researching and teaching social justice activist communicative practices. This entry provides an overview of activism and social justice communication scholarship.

Types of Activism and Social Justice Communication Scholarship

There are two types of activism and social justice communication scholarship. The first type analyzes communication engaged in by activist individuals, groups, and organizations to promote social justice. The second type involves researcher-activists and teacher-activists employing their communication knowledge and skills, in collaboration with oppressed and marginalized community members and with social justice groups and organizations, to intervene into and reconstruct unjust discourses in more just ways, with scholars documenting their purposes, practices, processes, and effects. Each type of scholarship is explained herein.

Analyses of Activist Communication

Many activism and social justice communication scholars study and teach how activists employ communication to promote social justice. Such scholarship describes, interprets, and explains social justice activists’ communicative practices. Scholars also often critique those practices and, sometimes, offer recommendations about how those practices can accomplish the goal of social justice more effectively.

There is a long research tradition across many academic disciplines of studying activism; indeed, thousands of scholarly books, chapters, and journal articles have been written about activist individuals, groups, organizations, communities, governments, and networks on every continent, as well as across continents and countries (i.e., global/transnational activism). The communication discipline also has a history of studying social justice activists’ communication. The earliest critical mass of studies about contemporary activists (as opposed to historical studies of social movements that occurred much earlier in time) focused on the social justice movements of the 1960s and early 1970s, such as the civil rights movement, the peace (anti–Vietnam War) movement, and the women’s movement. Those studies used primarily rhetorical methods to analyze and critique communication strategies (e.g., protests) employed by leaders and members of those social justice movements. Scholars conducting those studies typically were not involved personally (as leaders or as members) in the social justice movements that they studied (or, at least, their involvement was not reported in their scholarly reports); instead, scholars functioned as observers of those social movements.

Rhetorical analyses of a wide variety of social justice groups and movements continue to this day (e.g., animal rights; Arab Spring; environmental justice; lesbian, gay, bisexual, and transgender [LGBT] rights; labor; and Occupy movements). However, as qualitative methods, especially ethnography, became known and accepted in the communication discipline, scholars employed those methods to study activists’ communication, including researchers participating (typically as participant-observers) in the social justice groups and movements being studied and conducting in-depth interviews with activists. Additionally, given the recent rise and diffusion of many new communication technologies (from cell phones to the Internet), many scholars study activists’ use of those technologies (especially social media) to promote social justice (e.g., “cyberactivism”).

Analyses of activist communication have yielded a wealth of information about the nature and effects of using face-to-face and mediated communication to promote social change. Perhaps most important, that scholarship established the communication nature of activism—that, fundamentally, activism is a communication practice and process of organizing people to create social (justice) change.

Communication Activism Social Justice Scholarship

Some communication scholars go beyond description, interpretation, explanation, criticism, and offering recommendations regarding activists’ communicative practices to intervene, with affected community members and with social justice groups and organizations, into unjust discourses to make them more just. Those scholars use their communication resources (e.g., their theories, methods, pedagogies, and other practices) to actually engage in social justice activism and, simultaneously, to study and report their activism. Such “first-person-perspective” communication activism social justice research complements but extends in significant ways the “third-person-perspective” research that characterizes analyses of activists’ communication. For instance, whereas research about activists’ communication might produce social justice only if researchers make recommendations and activists enact them successfully, communication activism social justice researchers intervene to do something about injustice directly and, therefore, seek to make a difference through their research itself. Moreover, going beyond teaching students about activists’ communication, communication activism for social justice educators provide opportunities for students to work with affected community members and with social justice organizations, using the communication knowledge and skills that they have learned in communication courses to intervene to promote social justice.

Social justice communication activism research was enabled by some important research, in addition to analyses of activists’ communication, that preceded it. For instance, although most scholars who conduct applied communication research, which seeks to provide answers or solutions to real, pragmatic human communication issues or problems, employ a third-person observer perspective, some applied communication researchers (although they were not necessarily called that at the time, as the notion of “applied communication research” did not emerge until the 1970s) have intervened and studied their interventions. Examples of applied communication intervention research include experimental studies conducted during the 1920s and 1930s on the effects of scholars’ speech training interventions, the research program conducted during the 1970s on scholars’ interventions to minimize people’s communication apprehension (fear of speaking, especially public speaking), and various health intervention campaigns (e.g., to stop smoking tobacco and to practice safe sex) that communication scholars contributed to and studied beginning in the 1990s.

Communication activism social justice research also emerged from the work of rhetoricians and of media, cultural, feminist, and organizational communication scholars who employed critical theory to analyze and critique dominant cultural, institutional, and organizational communicative practices that excluded and marginalized people with regard to important political, economic, and social issues. Some rhetoricians even argued for an “activist” turn in rhetoric that privileged ideological or critical rhetorical research. Although such research tended to remain at the conceptual level of critique rather than the pragmatic level of intervention, it provided important grounds for a communication approach to social justice.
A third line of research that spurred communication activism social justice research was the general call for “engaged research” that connected researchers and community members in a reciprocal relationship to address pressing social problems in ways that benefited both partners. Although engaged research is a relatively diffuse term, it highlighted the need for and led to the acceptance or institutionalization of community-based research, although such research was directed, primarily, toward civic rather than social justice issues (e.g., studying communication strategies for getting more people to vote in elections rather than getting more people to vote for just policies). That research also led, relatively recently, to a critical mass of scholars from virtually every academic discipline in the humanities, physical sciences, and social sciences who now associate the terms activism and research.

During the mid-1990s, those lines of research led to the articulation of a communication and social justice perspective. That perspective argued that communication scholars were well situated (both conceptually and pragmatically) to engage with and advocate for those who are economically, politically, socially, and/or culturally oppressed, marginalized, and under-resourced. From a conceptual perspective, injustice occurs when people do not receive equitable treatment with regard to rights, opportunities, and resources, because they possess particular identity characteristics that dominant societal members (i.e., those in power) do not respect. As an example, people who are excluded from getting married on the basis of their sexual orientation experience social injustice because they are being treated in an inequitable manner compared to other societal members; similarly, women who do not receive equal pay to that of men for equal work experience social injustice. Social justice communication scholars view societal conditions or arrangements, such as marriage and pay, as “discourses” from which particular people are excluded. From a pragmatic perspective, the only way to change those unjust discourses is by communicating about them (and using communication to organize people to communicate about those unjust discourses).
Hence, social justice communication researchers are in a unique position, relative to other academic researchers, to use their communication knowledge and skills (and to teach others how to use their communication skills and knowledge) to intervene into unjust discourses and attempt to reconstruct them in more just ways, and, simultaneously, study that intervention process. The communication and social justice perspective, thus, privileges communication scholars engaging in activism to change dominant discursive structures and institutions that create and maintain social injustice. Because of its emphasis on such activism, that perspective has come to be called “communication activism social justice scholarship” or, for short, “communication activism scholarship.” That scholarship includes both communication activism research and communication activism teaching.

Communication Activism Research

Communication activism research has been and is directed toward a wide range of social justice issues, including eliminating racist practices, ending the death penalty, preventing human trafficking, stopping genocide, securing LGBT rights, and shutting down concentrated animal feeding operations (or “factory farms”), to name but a few. Although the goal of such research, certainly, is to accomplish structural changes that are needed to prevent injustice, that research is conducted at the microlevel in local sites and with particular individuals or groups (e.g., working with a local anti–death penalty organization to stop the execution of a particular person in a particular state).

Interventions that communication activism scholars employ include offering communication skills (e.g., public speaking) training, leading small group planning sessions, facilitating public dialogue, constructing public service announcements, making visual and audio documentaries, and creating and staging performances. Those communication interventions are informed by a wide variety of communication theories, from (post)positivistic to interpretive to critical communication theories, with the interventions representing tests of those theories and/or resulting in grounded theory being advanced.

The research methods employed to study those interventions (especially with regard to their effects) range from quantitative (e.g., experimental, survey, and content analysis) to qualitative (e.g., critical ethnography, discourse analysis, ethnography, performance studies, and rhetorical criticism) methods, with many studies being multimethodological to capture the complexity of attempting to enact such large-scale societal change. Regardless of the methodology employed, communication activism scholars approach research as a collaborative process that involves and benefits all parties involved; hence, it constitutes some form of participatory (action) research. The reports that emerge from that process, typically, consist of detailed case studies of communication scholars’ interventions to promote social justice. Most important, as this relatively new form of research has matured, it has grown from scholars’ post hoc accounts of communication activism that occurred in the past, without being conceived and conducted as a research project, to a planned research project that is conducted, from its start to (as much as possible) its completion, as a systematic study that documents, as fully as possible, the purposes, practices, processes, and products (effects) of that communication activism.

Communication Activism Teaching

Communication activism teaching involves communication educators instructing students studying communication how to use their communication knowledge and skills with community members experiencing social injustice and with social justice groups and organizations to promote social justice. Such teaching involves, for instance, offering communication activism courses (e.g., on environmental communication activism) in which students conduct social justice projects (e.g., administering citizen-scientist “groundtruthing” environmental surveys, the results of which are reported to public policymakers to prevent timber sales in U.S. old-growth national forests), or incorporating social justice activism service-learning (as opposed to other forms of service-learning, such as civic service-learning, and experiential education, more generally) into traditional communication courses (e.g., public speaking). Communication activism teaching also extends to sites and populations other than the traditional college classroom, such as teaching in prisons and confronting salient social justice issues that are embedded there. Communication activism teaching, thus, stands in sharp contrast to the corporate-oriented emphasis that characterizes much of contemporary education, including communication education. Moreover, communication activism teaching complements but extends significantly allied forms of teaching, such as critical (communication) pedagogy, which seeks to raise students’ critical awareness of social injustice, by offering students direct embodied experiences of attempting to promote social justice. Additionally, when that communication activism teaching is studied and reported in a rigorous and systematic manner, it produces communication activism pedagogy research that then is published in communication journals and reported at communication conventions.

National Communication Association’s Activism and Social Justice Division

The NCA’s Activism and Social Justice Division, thus, provides an intellectual home for communication scholars to focus on the important relationships among communication, activism, and the promotion of social justice. The division offers members opportunities to engage in lively debate, dialogue, and discussion about those important relationships, and to showcase their activism and social justice communication research and teaching. The division supports a broad and inclusive membership that is open to diverse philosophies, theories, methods, and practices, from descriptive research and teaching about communicative practices of activist individuals, groups, organizations, and movements to intervention research and teaching conducted by communication scholar-activists. What unites that body of scholarship is its ultimate goal: to study and teach the transformative power of communication to make an important difference in creating and sustaining a more socially just world.

Lawrence R. Frey and Kristen C. Blinne

See also Applied Communication; Communication Education; Critical Ethnography; Cultural Studies and Communication; Researcher–Participant Relationships; Service Learning; Social Implications of Research

Further Readings

Anderson, G. L., & Herr, K. G. (Eds.). (2007). Encyclopedia of activism and social justice. Thousand Oaks, CA: Sage.

Del Gandio, J., & Nocella, A. J., II. (2014). Educating for action: Strategies to ignite social justice. Gabriola Island, Canada: New Society.

Frey, L. R., & Carragee, K. M. (Eds.). (2007/2012). Communication activism (3 vols.). New York, NY/Cresskill, NJ: Hampton Press.

Frey, L. R., & Palmer, D. L. (Eds.). (2014). Teaching communication activism: Communication education for social justice. New York, NY: Hampton Press.

Frey, L. R., Pearce, W. B., Pollock, M. A., Artz, L., & Murphy, B. A. O. (1996). Looking for justice in all the wrong places: On a communication approach to social justice. Communication Studies, 47, 110–127. doi:10.1080/10510979609368467

Hartnett, S. J. (2010). Communication, social justice, and joyful commitment. Western Journal of Communication, 74, 63–93. doi:10.1080/10570310903463778

Kahn, S., & Lee, J. (Eds.). (2011). Activism and rhetoric: Theory and contexts for political engagement. New York, NY: Routledge.

McHale, J. (2004). Communicating for change: Strategies of social and political advocates. Lanham, MD: Rowman & Littlefield.

Swartz, O. (Ed.). (2006). Social justice and communication scholarship. Mahwah, NJ: Erlbaum.

Yep, G. A. (2008). The dialectics of intervention: Toward a reconceptualization of the theory/activism divide in communication scholarship and beyond. In O. Swartz (Ed.), Transformative communication studies: Culture, hierarchy and the human condition (pp. 191–207). Leicester, UK: Troubador.

African American Communication and Culture

African Americans are not a monolithic group, and acknowledging that African American communication and culture differ from those of other races and ethnicities is only one aspect of the study of Black communication and culture, which has many layers. To explore both the differences and similarities in Black culture, one must define who is “African American” or “Black,” which can be complex because the terms are at times used interchangeably and can include many ethnic groups, as population shifts and immigration have increased the Black population in the United States. The term Black itself can mean any person of African descent representing the larger African diaspora throughout the world. In the United States, however, the Census Bureau defines “Black or African American” as any person having origins in any of the Black racial groups of Sub-Saharan Africa. The ability of African Americans to name themselves is a relatively new phenomenon. Racial identity is a social construct, and in previous decades the terms negro and colored were used to name the descendants of slaves born in the United States.

These layers also include the historical narratives around the plantation experience, which continue to influence political and societal perceptions of individuals and of an entire group of people. From the earliest days of the plantation experience, slave owners sought to exercise control over their human property by attempting to strip them of their African culture. This was done through brute force, physical isolation, and societal marginalization of African slaves and their free descendants. However, this actually facilitated the retention of significant elements of traditional African culture and communication among Africans in the United States.

Most previous and some current research into African American communicative competence, language style, and relationship formation and maintenance is conducted from Eurocentric perspectives, which negate the Black cultural experience. The importance of personal narratives, oral traditions, language, social and cultural identities, family dynamics, religion and spirituality, sociopolitical frames, and a host of other traditions and practices unique to this diverse group must be a part of any analysis, which should also take place at the intersection not only of race but also of class, gender, sexuality, and age.

This entry reviews three critical variables to be considered in both qualitative and quantitative research designs and examines some theoretical constructs and ideologies that are important to consider when studying African American communication and culture.

Variables Defined

The following are just some of the more salient variables that must be considered in both qualitative and quantitative research design in any analysis of African American communication and culture. These are only representative and not all-inclusive.


Language

Orality is an African ancestral tradition that has accompanied African American communication and culture throughout its history in the United States. Oral tradition is the vast field of knowledge through which cultural information and messages are transmitted verbally from one generation to another. It is a means of recalling the past and telling about one’s own lived experience. Orality has given African Americans the creative ability to manipulate and transform the standard English language into unique speech codes. Historically, modifications to African patterns of language and culture took place while adopting some Eurocentric communication and cultural patterns, thus creating uniqueness in speech, language creation, and verbal communication practices. Some qualitative research methods tend to value oral traditions as a way to add depth of understanding that mere numbers and numerical measurement cannot provide.

Identity

Most of the existing research on identity comes from the field of psychology, although some research on African American identity has been conducted in the discipline of communication. From the field of psychology, two prominent theories used for identity analysis in African Americans are reference group theory and situated identity theory. From the field of communication come cultural contracts theory and identity negotiation. Identity development is widely understood as a process by which an individual establishes a relationship with a reference group. African Americans vary widely in their attitudes about race and ethnicity; however, categories such as skin color, socioeconomic status, a common history of oppression, and ancestry are some of the elements used to define the self. Individuals vary in the degree to which they self-identify along what is known as the continuum of Blackness within the African American cultural group.

Spirituality, Religion, and the Black Church

Spirituality and religion are salient in African American cultural dynamics, and when preparing a research design, the operational definitions of spirituality and religion must be clear: spirituality is personal, whereas religion is an institutional mechanism. Spirituality concerns itself with the desire for a connection with the sacred, nature, the unseen, or the supernatural. For African Americans, spirituality is not merely a system of religious beliefs; it is another aspect of African ancestral tradition that has maintained significance in Black culture. Spirituality comprises articles of faith that provide a conceptual framework for living everyday life.

The institutional mechanism of religion is a tool that is used by some African Americans to practice their spirituality. An example of this would be the Black or African American church. From the beginning of U.S. history, religious preferences and racial segregation have encouraged the development of separate Black church denominations, as well as Black churches within White denominations. African American churches have long been the centers of strength and solidarity in their communities. When compared to American churches as a whole, Black churches tend to focus more on social issues such as poverty, education, gang violence, drug use, prison, and other inequities. They have served as school sites in the early years after the Civil War and as sites for charter (secular) schools in contemporary times. African American churches also take on social welfare responsibilities, such as providing for the poor, homeless, and sick. They have also been known to establish religious schools, orphanages, and prison outreaches. As a result, Black churches have fostered strong community organizations and provided spiritual and political leadership, as evidenced during the Black Liberation movement, the Civil Rights movement, and the Black Lives Matter movement that emerged in 2013. The Black church continues to be a source of support for members of the African American community.

Theoretical Constructs and Ideologies

Applying Eurocentric communication patterns and cultural norms leads to misinformation and erroneous research results. The study of African American communication and culture should strive to use theoretical constructs and ideology that place African American communication and cultural dynamics at the center of the study.


Afrocentricity

To study African American communication and culture with an Afrocentric lens requires that questions about location, place, orientation, and perspective be considered. Afrocentricity is a paradigm that promotes a perspective allowing African people to be the subject of historical and contemporary experiences rather than objects on the fringes of Europe and whiteness. The Afrocentric idea has been framed as the most revolutionary challenge to White supremacy and is not merely a Black version of Eurocentricity, which attempts to protect White privilege and the corresponding advantages in education, economics, and politics. To date, an Afrocentric paradigm does the following: questions the imposition of the White supremacist view as universal and/or classical, demonstrates the indefensibility of racist theories that assault multiculturalism and pluralism, and protects a humanistic and pluralistic viewpoint by articulating Afrocentricity as a valid, nonhegemonic perspective.

Since its inception and use as a methodological tool, however, Molefi Kete Asante’s Afrocentric paradigm has faced harsh criticism. Critics of Afrocentricity fall into five broad categories. First are those who challenge the African essence that undergirds the notion of “center,” arguing that, in its search for Africanness, Afrocentricity does not allow for cultural change. On this view, Afrocentricity is unable to deal adequately with cultural change, which prevents it from understanding that being African in the 21st century also means being at least partly European as a result of the plantation experience, colonization, and Westernization. This results in Afrocentricity being perceived as too restrictive and incapable of grasping the dialectical complexity of modern African identities.
Second, Black feminists who advocate for gender equity and Black neo-Marxists who advocate for social class and economic equity take issue with Afrocentricity’s insistence that race and culture remain the foremost socially relevant categories in American society.

The third critique stems from claims surrounding the origins of Egyptian and Greek intellectualism. Many European classicists support the Greek Miracle theory and are disturbed by two related developments associated with the use and acceptance of Afrocentricity. First, credit was being taken away from Europe for the great civilizations of the Nile Valley, specifically Egypt. Second, the original intellectual achievements of Greece were revisited and diminished, as it was pointed out that many Greek philosophers had studied for long periods of time in ancient Africa and were therefore indebted to their African teachers for many of their ideas. This resulted in many scholars in the United States and Europe refuting those “Afrocentric” claims. It must be noted that the debate over the racial identity of the early Egyptians predates the emergence of Afrocentricity by several decades and is not a salient issue to Afrocentricity; within the context of Diopian historiography, Egypt is at the beginning, both chronologically and conceptually, of African civilization.

The final two critiques, concerning biological determinants and arguments (i.e., melanin) and false accusations of hegemony, are questionable, as the issues they address are clearly not part of the premise of Afrocentricity. Afrocentricity is fundamentally nonhegemonic and welcomes the existence of a multiplicity of cultural centers.

Black Feminist Thought and Womanism

The Black Feminist Movement formed in response to the needs of Black women who felt racially oppressed by the Women’s movement (i.e., suffragettes, White feminism), and sexually oppressed by the Black Liberation Movement (i.e., Afrocentricity). Black feminist scholars assert that African American women are doubly disadvantaged in the social, economic, and political domains because they face discrimination on the basis of both race and gender. Black women felt their needs were being ignored by both movements and struggled to identify with either based on race or gender. The Black feminist agenda seeks to refocus intellectual, political, social, sexual, spiritual, and economic issues toward those that are most applicable to African American women.

Womanism is a feminist term coined by Alice Walker and is a form of Black feminism. Womanism emerged as an epistemological standpoint to provide space and voice to African American women who seek social equity for their entire community. Womanism provides a space that recognizes a woman who loves women and appreciates women’s culture and power as something that is incorporated and integral to the world as a whole. Like its sister Black feminism, womanism addresses issues of race and class that have been minimized or forgotten by White feminism, and actively opposes separatist ideologies. Uniquely, womanism honors the strength and experiences of African American women and recognizes that African American women are survivors in a world that is oppressive on multiple platforms. Womanist ideology recognizes that Black men are an integral part of Black women’s lives as their sons, family members, or lovers. It makes room for the ways in which African American women support and empower Black men, while recognizing a history of patriarchy, oppression, and sexual violence. This perspective is often used as a means for analyzing Black women’s lived experience, as it marks the place where race, class, gender, and sexuality intersect and provides a way to view and understand that an African American woman’s relationship to men is intricate. Womanism seeks to celebrate the ways in which women negotiate these oppressions in their individual lives.

Cultural Contracts Theory

In the study of African American communication and culture, cultural contracts theory can be a starting point for explaining, through the examination of cultural identity, what happens when a part of one’s identity or worldview is compromised in some way. Cultural contracts are used every day, every time an individual interacts with someone with differing norms, beliefs, values, and traditions within and outside African American culture. There are three types of contracts: (1) ready-to-sign (assimilation expected), (2) quasi-completed (occasional accommodation expected), and (3) co-created (mutual respect expected). Cultural contracts can be exchanged instantaneously, but contract negotiation can take years, or never be completed. Cultural contracts theory recognizes that everyone is born into a specific culture with norms, beliefs, values, and traditions. These cultures represent contracts, and when these are interrupted via interaction with others having different contracts or expectations, participants need to figure out a way to relationally coordinate actions.

Complicity Theory

Complicity theory assumes that everyone is complicit in systems of domination. The concept of complicity as a theory of negative difference draws on critical race theory and cultural studies that explore how discourse in opposition to certain groups contributes to the negative social construction of difference as well as identity. Language used as a social construction of reality reinforces oppression. When one looks at the historical and social issues that situate African American communication and culture on the margins and in opposition to European (i.e., majority) cultural influences, complicity theory assumes that it is not possible to move beyond negative difference and complicity. African American communication and culture historically have been rhetorically framed as negative.

Narrative Analysis

Narrative analysis is a qualitative analysis tool that values the stories and lived experiences of the participants in order to understand the phenomenon under investigation. Narrative analysis can be broken down into four types: (1) thematic, (2) structural, (3) dialogic/performance, and (4) visual. Narratives may be described as stories that possess some explanatory and exemplary value. Narratives can take the form of an extended answer in story form to a single or multiple research questions and visual media such as photographs, collages, paintings, and video diaries. African American communication and culture takes on various narrative forms, from the orality of traditional stories and spoken word poetry to literature, music, and visual art. Narrative analysis would benefit any research design on its own or in combination with quantitative methods.

African American communication and culture developed separately from mainstream American culture because of segregation, Jim Crow laws, and systemic racism in the United States. It also grew out of African Americans’ desire to practice their own traditions to reinforce distinct cultural norms, values, and communicative practices. African American communication and culture is distinctive and has become a significant part of popular culture globally, as evidenced by the co-opting of Black culture through linguistics, media, and the arts.

Annette Madlock Gatison

See also Critical Ethnography; Critical Race Theory; Cultural Sensitivity in Research; Spirituality and Communication

Further Readings

Asante, M. K. (1980/2003). Afrocentricity: The theory of social change. Chicago, IL: African American Images.

Collins, P. H. (2000). Black feminist thought: Knowledge, consciousness, and the politics of empowerment. New York, NY: Routledge.

Hecht, M., Jackson, R. L., II, & Ribeau, S. A. (2003). African American communication: Exploring identity and culture. Mahwah, NJ: Erlbaum.

Jackson, R. L., II. (2004). African American communication and identities: Essential readings. Thousand Oaks, CA: Sage.

McPhail, M. (1991). Complicity: The theory of negative difference. Howard Journal of Communications, 3, 1–13.

Phillips, L. (2006). The womanist reader. New York, NY: Routledge.

Agenda Setting

Agenda setting is the media’s ability to tell audiences what to think about, rather than telling them what to think. Editors and other journalists make decisions about what stories get covered and what stories don’t get covered, thereby identifying what matters. Choosing some topics over others has the potential to influence audience members to see those topics as having more significance than others. This entry reviews what came before agenda setting, the articulation of the agenda setting concept, how researchers have added to and altered the concept, as well as criticism of agenda setting and the role of agenda setting in the modern media environment.

Historical Background

Historians acknowledge the role of journalism in facilitating the very development of the United States in the 19th century, primarily by assisting the growing number of immigrants in acquiring literacy and a sense of what it meant to belong to this country. Investigative journalism by intrepid reporters such as Nellie Bly brought critical issues to the attention of the newspaper-reading public. These were the issues people learned to care about and respond to. In the first part of the 20th century, journalists such as Ida Tarbell and Lincoln Steffens used their influence to alert the newspaper-reading public to financial and government improprieties, which led to changes in law, policy, and newspaper sales. Coverage of events in Europe, the buildup to U.S. involvement in World War I, and the propaganda campaigns enacted by the government to drum up support for U.S. involvement in the war encouraged the citizenry to carry out their patriotic duties. The power of mass communication to mold and sway public opinion caught the attention of and generated concern among critics and scholars.

Renowned U.S. journalist Walter Lippmann was a keen observer of the impact of mass media on the public. He wrote that what people thought about, that is, the pictures inside people’s heads, did not necessarily reflect the real world. Over the course of the 20th century, scholars heeded Lippmann’s words suggesting that those pictures inside people’s heads are influenced by what they encounter in the mass media.

For the first few decades of the 20th century, critics and scholars saw the media as having powerful effects. The audience was perceived as gullible and easily manipulated; in other words, the media could indeed tell people what to think. But by the end of the 1930s and the beginning of the 1940s, scholars came to think that the media had more limited effects. This transition occurred following two studies of mass communication events: the analysis of the “War of the Worlds” panic phenomenon in 1938 and a study of voter decision making in the 1940 presidential election.
While many who listened to the Mercury Theatre broadcast of the H. G. Wells story "The War of the Worlds" panicked at the "Martian landing" in Grover's Mill, New Jersey, not everyone did. Among those who did listen, there was no uniform response; instead, people responded in idiosyncratic ways. Scholars concluded that those responses were a function of listeners' individual differences.

20   Agenda Setting

A second phenomenon led scholars to further question the role of the media in influencing public opinion in a powerful, direct, and uniform way. The two-step flow hypothesis was introduced by findings from Paul Lazarsfeld's Erie County study of opinion formation over the course of the 1940 presidential election. Opinion leaders were respected individuals within a community. These were the people who read newspapers and listened to the radio and subsequently shared their ideas about what was happening in the world with others. Rather than average people accepting media messages directly, scholars suggested, they took their lead from opinion leaders, which meant the influence of the media on the general population was indirect and diluted rather than direct and powerful.

David Manning White introduced the element of gatekeeping as part of that two-step flow. White's study introduced "Mr. Gates," a 40-year-old newspaper wire editor entrusted with the responsibility of determining which stories were "in" and which stories were "out" of the daily newspaper. Mr. Gates's news judgment proved to be subjective, reflecting, among other things, his distaste for suicide stories and for stories that were "too Red." The subtext of the study was that what ends up in the media affects readers' notions of what is important, and that the people deciding which stories made it into the newspaper had influence over that determination.

Agenda Setting Emerges

Bernard Cohen, writing about the press and foreign policy, said that the press "may not be successful much of the time in telling the public what to think, but it is stunningly successful in telling its readers what to think about" (1963, p. 13). This statement came to be known as the agenda-setting hypothesis, and the premise was echoed in the work of numerous other scholars, most notably Maxwell McCombs and Donald L. Shaw. McCombs and Shaw empirically confirmed agenda setting in their study of the 1968 presidential election. Using a combination of content analysis of media and survey analysis of public opinion, they tested whether an agenda-setting phenomenon could be observed as members of the public negotiated their way through political elections. They found that the agenda set by the media correlated with the topics audience members perceived to be important. Shanto Iyengar and Donald Kinder later conducted an experimental study of agenda setting and priming that established causality.

Related Concepts

From the 1970s to the 1990s, ideas about agenda setting shifted. Researchers conceived of a number of related concepts such as agenda building, agenda framing, issue priming, and first- and second-level agenda setting. Agenda building refers to how news organizations can reflect and shape the policy priorities of the government and other elites. Priming refers to the way media coverage of particular issues creates top-of-mind awareness among audience members who lack deep knowledge of those topics. Agenda framing focuses particular attention on certain aspects of reality—for example, framing gun control proposals in terms of constitutional rights. First-level agenda setting looks at how much the media cover a particular issue; second-level agenda setting addresses how the issue is defined.

Criticisms and the New Media Environment

Some critics have asked whether agenda setting is actually a theory. Other critics focus on the limits of the primary methodologies used to measure agenda setting, particularly in terms of reliability and validity. Agenda-setting research emerged in the United States and has focused extensively on news use in developed countries, which may limit the generalizability of agenda-setting theory. One critic suggested that the directionality of effect is an issue; in other words, who or what influenced whom or what?

The issue of directionality of effect is particularly relevant in the modern media era. The changing media landscape offers new complications for the scholarly understanding of the power of the press to influence audience determinations of issue significance. Members of the public encounter a myriad of sources of information about the world outside. From Facebook news feeds to Twitter posts, audiences in the 21st century have very different conceptualizations of what news is from those of previous generations, and therefore different ways of prioritizing the issues of modern political agendas. This broader, more diverse media culture offers more opportunities for individualistic selection and definition of news. Instead of relying on three or four sources of television news, one or two newspapers, and four or five radio stations, audiences now have literally thousands of sources available online. In some ways, this is a truly democratized media world. Audience members do not have to consume news passively; many generate news themselves by way of blogs, Instagram, and Twitter messages, among other modes.

The economic structure of the media environment has also affected how news is defined. Rather than the hard news policy orientation journalists assumed throughout most of the 20th century, 21st-century consumer interests drive what gets covered in both traditional and new media. In an effort to compete in the new media environment and economy, news organizations supply audience members with news "they can use," such as stories on health, nutrition, the environment, homes, religion, and travel. News is softening. The watchdog function of the news, whereby journalists pay close attention to those in power and share what they learn, may take on a meaning different from its traditional governmental oversight role. The big investigative stories that were so significant in the 20th century (the Teapot Dome scandal, McCarthyism, the Vietnam War, and Watergate, among many others) may become victims of the new media economy. Furthermore, the audience skepticism and mistrust that developed as a result of the scandals revealed in those signature stories now extend beyond "the government" to the big news media corporations themselves. Journalism in the United States serves a vital democratic function.
By providing information about what is going on in the world, and about how elected public officials are meeting (or failing to meet) their campaign promises, journalism helps citizens become informed members of a democracy. If consumer choice dictates a larger share of news media content, the quality of the information the citizenry employs in democratic decision making is bound to change.

Deborah Petersen-Perlman

See also Causality; Propaganda; Surveys: Advantages and Disadvantages of

Further Readings

Birkland, T. A., & Lawrence, R. G. (2009). Media framing and policy change after Columbine. American Behavioral Scientist, 32(10), 1405–1425. doi:10.1177/0002764209332555
Cohen, B. C. (1963). The press and foreign policy. Princeton, NJ: Princeton University Press.
Iyengar, S., & Kinder, D. (1987). News that matters: Television and American opinion (updated ed., 2010). Chicago, IL: University of Chicago Press.
Lippmann, W. (1922). Public opinion. New York, NY: Harcourt, Brace and Co.
McCombs, M. E., & Shaw, D. L. (1972). The agenda-setting function of mass media. Public Opinion Quarterly, 36(Summer), 176–187. doi:10.1086/267990

Alternative Conference Presentation Formats

A completed research project is typically shared with an academic audience for review. In most cases, students, faculty, and practitioners who engage in the research process hope to have their work published in an academic journal. However, another common way to share research is through academic and/or professional conference venues. In the field of communication studies, conference opportunities abound through professional organizations at international (e.g., International Communication Association), national (e.g., National Communication Association), regional (e.g., Central, Western, Southern, and Eastern associations), state, and local levels. The audiences for conferences may include all communication contexts and interest areas, or they may target specific specialty areas and scholars (e.g., the Aspen Conference for organizational communication scholars; undergraduate research conferences).

Conference participants learn whether their work has been accepted for a conference through a submission and review process that is typically quite competitive, with acceptance rates generally falling below 50%. An individual may submit a completed paper, abstract, or discussion/panel idea to program planners affiliated with the conference, at which point the submission is sent to reviewers. Often, the feedback received from colleagues at conferences helps scholars further edit their work in preparation for submitting it for publication. Conference presentation formats vary greatly, with participants submitting their work according to the format that best suits their personal preference and the state of the completed research or draft. The alternative formats described in this entry illustrate the variety of opportunities to share and discuss trends and topics in communication teaching and research.

Oral Presentation

Individual Paper Panel

When a completed paper is accepted upon review, the author may be placed on a panel with other papers that are similar in theme or topic. Conference program planners for specific units or divisions of the conference organize the paper panel. Program planners may also assign a chair (i.e., an individual who introduces each participant and monitors the time allotted for each presentation) and a respondent (i.e., an individual who provides insight, criticism, and feedback on the papers, individually and/or collectively). During the panel, authors generally receive 10–20 minutes (determined by conference/panel facilitators and the number of papers presented on the panel) to share their paper via an oral presentation. Traditionally, all formats allow for audience input via question/answer or participation during a discussion panel.

Paper Session

When a group of papers is similar in theme or topic, the authors will sometimes petition the conference organizers on the grounds that the papers, collectively, would make a compelling panel. An individual submits the idea on behalf of the group and may even include a chair and/or respondent. Although it is not unheard of to submit completed papers for the panel, it is more likely with this format that individual paper abstracts are submitted, with the understanding that complete papers will be available and presented at the time of the conference. During the conference panel, individual authors have 10–20 minutes to share their paper via an oral presentation.

Discussion Panel

In a discussion panel, a submitter develops an idea that will bring colleagues together to discuss a relevant research topic or disciplinary issue. Panel topics may vary greatly and focus on subtopics such as pedagogy, research, and disciplinary trends. The presenters may have different perspectives on a topic and typically present their thoughts in a prepared, yet extemporaneous, presentation that allows for connections and discussion among the presenters.

Poster Presentation

Poster sessions allow a large number of participants to share their work through a visual display during a set time period at the conference location. The format and selection process vary according to the conference and venue. In most cases, authors submit their research idea or abstract for review, knowing it will be presented in the form of a poster. Posters are professional in design and content, often printed from software such as Microsoft PowerPoint or Adobe Photoshop templates. Content commonly includes the research purpose, key sources, methods, and findings. Poster participants stand next to their poster and interact with conference attendees as they approach.

Performance Session

Performance studies encompasses research, criticism, and teaching of individual and collective performance in many forms. Scholars pursue performance as inquiry, so when submitting to a conference, participants may do so in the formats identified herein. An alternative form for scholars in this area is to submit solo performances and/or productions that may be original to the conference or previously staged elsewhere. Another option is to present a series of shorter performances with colleagues who focus on a particular theme.


Additional Formats

The presentation types described herein are the most common options found at professional conferences. However, there are other ways for communication scholars and teachers to creatively share their work and for participants to engage in professional development. The options presented in this section may involve a competitive selection process, but they may also depend on invitation and program planner creativity.

Scholar-to-Scholar Presentation

A scholar-to-scholar presentation allows participants to interact and share in a creative format. For the National Communication Association annual conference, participants may choose to use a poster, but they are also encouraged to consider media, slides, or other visual displays that best highlight the author's work. The session is considered more contemporary than a traditional poster session, yet the interaction and format are similar for participants.

Seminars

A seminar is an extended session, often a half- or full-day, designed for scholars, teachers, and/or students to engage in discussion on a particular topic or perspective. Seminar participants may include conference attendees who are interested in joining the discussion, or they may need to apply in order to participate. Seminars may be part of the regular conference programming or they may be included as a pre-conference (early) option.

Short Courses

A short course is an extended session during the conference that allows teachers and scholars to present information to interested participants on an array of topics, including teaching/classroom units or courses, research methodology, or technology. The short course instructor may have submitted the idea for the course to conference planners, or he or she may have been invited to lead a course due to expertise or credibility in the area. Short course participants often pay an additional fee to attend these sessions.

Linda B. Dickmeyer

See also Panel Presentations and Discussion; Peer Review; Poster Presentation of Research; Professional Communication Organizations; Publications, Scholarly; Submission of Research to a Convention

Further Readings

Alexander, B. K., Anderson, G. L., & Gallegos, B. P. (2005). Performance theories in education: Power, pedagogy and politics of identity. Mahwah, NJ: Erlbaum.
Esterberg, K. E. (2002). Qualitative methods in social research. Boston, MA: McGraw-Hill.
Keyton, J. (2014). Communication research: Asking questions, finding answers. New York, NY: McGraw-Hill.
National Communication Association. (n.d.). Convention resource library. Retrieved from https://www.natcom.org/conventionresources/
Rubin, R. B., Rubin, A. M., & Haridakis, P. (2009). Communication research: Strategies and sources (7th ed.). Belmont, CA: Wadsworth.
Taylor, B. C., & Trujillo, N. (2001). Qualitative research methods. In F. M. Jablin & L. L. Putnam (Eds.), The new handbook of organizational communication (pp. 161–196). Thousand Oaks, CA: Sage.

Alternative News Media

While alternative news media has existed in print form since the 17th century, the economic barriers long associated with production and distribution were largely alleviated in the 21st century by the advent of electronic technologies, enabling less well-funded alternative news media to expand in prominence and reach. Broadly defined, the term alternative news media refers to a heterogeneous range of media and journalistic practices identified by how their structures, functions, and processes differ from those of traditional news media. Alternative news media cannot be understood or analyzed separately from traditional news media, as alternative news media exists, in part, to compensate for the shortcomings of traditional news sources and coverage. The following sections detail the similarities and differences in content and journalistic practices employed by alternative news media.


Traditional Media

The political independence of traditional news media—colloquially referred to as the fourth estate—has been credited with facilitating democracy. Yet even before mass media achieved commercial success through the growth of advertising in the 1850s, elite influence on the interpretation of events of political and economic import in traditional media was acknowledged. Traditional news media's power to encourage public action during the period of nationalist consolidation was understood as crucial in directing and prodding the masses into action, but such action remained subordinated to the agenda of the elite whose opinions informed the interpretation of events. The historical struggle for press freedom was not synonymous with a mandate for economic or popular freedom; rather, grand narratives featured in traditional media legitimized the notion of great leaders as the engines of progress and downplayed economic and systemic inequalities. By uncritically emphasizing the primacy of consensus and downplaying the conflicts and coercive structures against which people struggled, traditional news media failed to challenge the commercial media paradigm.

Today, concentrated corporate ownership of traditional media reflects a dominant hegemonic discourse associated with fiscal dependence on the business model of news production. By employing a dichotomous rhetorical framework (covering a story from two diametrically opposed perspectives), traditional news media offer a superficial "balance" that obscures diversity of opinion by failing to explore other perspectives or possible commonalities. Consistent with a top-down, vertical structure, stories in traditional media privilege the voices of those in power—the politicians, corporations, and police—who become the primary definers of an event, and whose interpretation of the story creates the frame and limits discussion of others' perspectives of the event.
Decisions on which stories, texts, and images to present, or not to present, produce a normative influence, whereby primary definers decide (a) how to frame problems, (b) which views to legitimize as "expert," and (c) which potential solutions are, or are not, examined. As a result of this financial dependence, traditional news media have responded to the recent fiscal crisis by deemphasizing investigative journalism, resulting in greater focus on immediate news pegs (a short reference in a story to a larger, more in-depth treatment) rather than sustained coverage of public policy issues.

Alternative Media

Alternative news media exists, in part, to challenge the dominant hegemonic discourse of traditional news media. By covering stories in far greater depth, and by contesting the codes, identities, and institutionalized relationships depicted in the public policy pedagogy of corporate news media, alternative news media seeks to empower an otherwise marginalized public. Encouraging citizens to engage critically with the output of news media, keeping otherwise extinguished issues alive, and challenging a singular view of normality require both critical content and critical producers. By providing critical content, alternative news media explicitly examines issues of social inequality, questions what society has failed to become, and advances the interests of social transformation. Rather than prominently featuring the perspectives of primary definers, alternative news media producers frequently rely on nonexpert eyewitnesses and extend news coverage far beyond the bounds of the dominant ideological frame. Furthermore, a more horizontal relationship exists between alternative news media producers and their sources, where the lines between producer and source often become blurred.

The alternative news media audience remains small compared to that of traditional news media, and because of reliance on noncommercial financing (e.g., donations) to retain independence, gaining visibility remains challenging. As a result of limited resources, alternative news media often cannot cover the entire range of issues addressed by traditional news media, but instead typically focuses on a smaller set of issues associated with the interests of a specific audience.

Assessing Credibility of Traditional and Alternative News Media

Though the two are frequently conflated, a lack of ideological bias in news media is not synonymous with credibility. As a common idiom states, "History is written by the victors." The idiom refers to the prevalence and inevitability of bias in the historical accounting of events, which results from determinations of which events are deemed significant to larger national narratives. Whether intentional or unintentional, implicit or explicit, bias in news media can be identified by virtue of which stories are covered, which stories are not covered, and how a story or event is framed. For traditional news media producers, leaving implicit bias unacknowledged is de rigueur, as acknowledging it would contradict the perception of balance and objectivity they seek to instill. In this sense, alternative news media producers exist in a more enviable position; sans expectations of "balance," producers frequently communicate their raison d'être and associated bias directly and explicitly to audiences, and encourage the audience to critically analyze media content without fear that illuminating inevitable bias will negatively affect assessments of credibility. Ultimately, the perception of media credibility depends more on the public's perceptions of the veracity of information presented as fact, on the reputation of the source, and on how closely media content resonates with the public's preexisting ideological predispositions than on analysis of the presence or absence of bias.

While alternative news media content often serves to critique the normative discourse of traditional media coverage, the limited reach of alternative news media constrains its transformative potential, whereas traditional news media holds the power to challenge the credibility of alternative news media through disciplinary rhetoric.
By defining credibility as consonant only with the practices they employ, traditional news media can challenge the journalistic value or legitimacy of alternative news media sources, question the credibility or professionalism of alternative media producers, or rhetorically instill skepticism of alternative news media content. As a result, producers of alternative news media must exercise diligence in documenting and supporting claims to establish the veracity of the information provided.

Research involving the credibility of alternative news media sources often relies on the concept of source credibility, but that concept fails to account for a networked understanding of credibility akin to the trust established in traditional interpersonal encounters. Because of the collaborative knowledge building that occurs among producers, sources, and consumers of alternative news media, and the online and offline communication these parties engage in with others, the network of trust and credibility that forms around alternative news media content cannot be accounted for or fully understood using present models for assessing the credibility of traditional news media.

Ethical Considerations

Alternative news media content remains an ideal source of research data, provided certain ethical considerations are taken into account. Even when a website remains open to the public without special access codes, most Institutional Review Boards now require that researchers obtain written permission from the owner or producer of an alternative news media website before a study can be approved. Furthermore, because the general public often can post comments or add content to alternative news media websites, researchers must protect the identities of those whose stories or comments serve as a basis for analysis.

Deborah DeCloedt Pinçon

See also Internet as Cultural Context; Qualitative Data

Further Readings

Finn, J., & Gil de Zuniga, H. (2011, October). Online credibility and community among blog users. In Proceedings of the American Society for Information Science and Technology, New Orleans, LA.
Habermas, J. (1992). Further reflections on the public sphere. In C. Calhoun (Ed.), Habermas and the public sphere (pp. 421–461). Cambridge, MA: MIT Press.
Hall, S. (1999). Encoding, decoding. In S. During (Ed.), The cultural studies reader (pp. 507–517). London and New York: Routledge.
Hall, S., Critcher, C., Jefferson, T., Clarke, J., & Roberts, B. (1978). Policing the crisis: Mugging, the state, law and order. London, UK: Macmillan.
Hamilton, J., & Atton, C. (2001). Theorizing Anglo-American alternative media: Toward a contextual history and analysis of US and UK scholarship. Media History, 7, 119–135. doi:10.1080/13688800120092200
Harcup, T. (2003). "The unspoken – said": The journalism of alternative media. Journalism, 4, 356–376. doi:10.1177/14648849030043006
Harcup, T. (2005). "I'm doing this to change the world": Journalism in alternative and mainstream media. Journalism Studies, 6, 361–374. doi:10.1080/14616700500132016
Jordan, J. W. (2007). Disciplining the virtual home front: Mainstream news and the web during the war in Iraq. Communication and Critical/Cultural Studies, 4, 276–302. doi:10.1080/14791420701459707
Kelly, D. M. (2011). The public policy pedagogy of corporate and alternative news media. Studies in Philosophy and Education, 30, 185–198. doi:10.1007/s11217-011-9222-2
Sandoval, M., & Fuchs, C. (2010). Towards a critical theory of alternative media. Telematics and Informatics, 27, 141–150.

American Psychological Association (APA) Style

The style of the American Psychological Association (APA) is set out in the Publication Manual of the American Psychological Association. The APA manual details the definitive style for ethics, research and writing mechanics, and citation standards in social and behavioral research. The APA sixth edition is notable for its inclusion of disciplines outside psychology; thus, APA continues to be the primary citation format for quantitative and qualitative social research scholars in the communication field. This entry includes a discussion of APA goals and corresponding standards, particularly as they differ from those of other common writing formats and style guides. This is followed by a brief presentation of APA history and the controversies that surround the sixth edition of the APA manual.

APA Goals, Standards, and Characteristics

Goals and Standards

APA style is maintained with the goal of upholding scientific rigor in the research studies of social scientists (and, to a lesser extent, humanities scholars). A desire to advance the field of knowledge through ethical research that is simple (for readers), transparent, and replicable has resulted in a standardized system for researching, writing, reporting, and presenting different types of research. To meet these goals, APA style requires that researchers use common standards at all stages of the research and writing processes.

APA-Specific Characteristics

There are a number of key differences between APA and other research styles. First, APA style encompasses information beyond the written word. The manual includes guidelines for presenting technological, visual, and audio information. Keeping pace with technological trends in which data are no longer solely textual, APA formatting has particularly standardized the reporting of various forms of social media.

Next, the sixth edition of the APA manual reinforces previous APA goals to focus on the voice of the researcher. For example, in contrast to Modern Language Association (MLA) style, used in English and other humanities disciplines, which focuses on the importance of the original author's language and authorship, APA is more concerned with the ideas of previous works (and when they occurred) and how those ideas are integrated by the present author. As a result, APA writers are encouraged to use their own words to paraphrase others' works whenever possible. The APA manual urges authors to avoid substantial use of direct quotations when possible; this is one reason why APA style reports in-text page numbers only for direct quotes and standardizes reporting of the publication year instead (as opposed to MLA style, in which page numbers are reported even for paraphrased ideas).

Another APA standard modified over time has been the reconsideration of language bias in writing and reporting research. For example, previous versions of APA challenged the use of "he" to include people of all sexes and genders. The sixth edition continues in this vein and considers unique racial, ethnic, gendered, sexed, and other subcultural values by discouraging use of generic labels to describe specific groups. Keeping with this standard, the sixth edition of the APA manual specifically instructs writers on how they should address inappropriate (e.g., racist, sexist) or nonstandard (e.g., online shorthand or slang) language when reporting such works of others.


A third potential writing bias acknowledged in the sixth edition of the APA manual concerns authorship. By acknowledging that research bias even exists, and thus prioritizing first-person author language (e.g., "We conducted . . .") over pseudo-objectivist tendencies (e.g., "The researcher conducted . . ."), APA reflects a larger research paradigm shift in the social sciences from positivist, empirical approaches toward approaches that acknowledge subjectivity.

APA style also prioritizes streamlined citations throughout a paper. This abbreviated approach balances ease of comprehension in the main text with a reader's ability to quickly locate any originally cited works. For example, the Chicago Manual of Style (CMS), used by historians and other humanities disciplines, requires readers to jump to different locations in a paper for further information while reading. In contrast, APA style discourages significant use of footnotes and endnotes. APA style also tends to minimize space by listing full references only once, in a final References section. It denotes use of author initials instead of full first names (e.g., as used by MLA and CMS) and includes an abbreviated style for commonly referenced source types such as periodicals. APA style further minimizes page space through in-text shorthand to denote multiple authors (et al.). Even the overall manual was significantly condensed from the fifth edition to the sixth through use of corresponding online resources.

Finally, APA includes—via print and online versions of the manual—substantial consideration of how to ethically and clearly use, reference, and research Internet and other technologies (e.g., imaging, electrophysiological measures). Ease of use has been amplified by the APA sixth edition standard of including digital object identifiers (DOIs), often hyperlinked, which allow readers to directly access articles online. Ethical standards are identified for correctly attributing works and referencing actual authors in the face of free online access and non-reviewed information distribution (e.g., Wikipedia).

APA History and Controversies

The APA manual—in any edition—typically reflects historically informed research trends and changing research goals and methods, sometimes informed by the politics or community standards of a particular time or practice. Originally created in article format in 1929 by a group of interdisciplinary researchers, the APA manual in its sixth edition was revised under the oversight of a six-person committee to address needed changes in ethics; language bias; reporting standards for academic journals; writing style; and presentation of graphics, references, and statistics. As with many APA editions over the years, these changes were inspired by current policies, research approaches, and standards, as well as by changing technologies.

A main source of criticism surrounding the sixth edition was the presence of errors in the first printing of the new manual in July 2009. The APA organization as a whole, held up as the definitive source for identifying the formatting flaws of individual researchers, was subject to intense condemnation for failing to hold its own work accountable to those same standards; it was also criticized for refusing to issue replacement copies or refunds to those who had purchased the incorrect manual. Issuing an apology, the APA quickly published a supplemental list of the errors on its website, followed by a reprinting of the corrected manual in October 2009.

Other controversies surrounding the APA sixth edition involved challenges to the standards contained therein. Although generally seen as an improvement on the limited statistics recommendations provided in the fifth edition, the sixth edition has been criticized for overstepping a formatting role (i.e., how to report) by issuing content-related advice (i.e., what should be reported). However, the introduction to the sixth edition APA manual acknowledges that subfields should not necessarily adopt recommendations as requirements without consideration of how they convey axiology or research values (i.e., descriptive versus prescriptive standards).
Further, the APA sixth edition concludes by emphasizing the publication process generally, as opposed to previously conveyed APA-specific policies and standards that may have applied only to psychologists. As a result of the attention paid to these issues, and because of updated formatting standards in keeping with new forms of research, the APA sixth edition is a preferred style manual for researchers conducting research in the communication field.

Jessica J. Eckstein

See also Acknowledging the Contribution of Others; Authorship Bias; Authorship Credit; Ethics Codes and Guidelines; Gender-Specific Language; Plagiarism; Publication Style Guides; Research Reports, Organization of; Social Implications of Research

Further Readings

American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: American Psychological Association.
Joyce, N. R., & Rankin, T. J. (2010). The lessons of the development of the first APA ethics code: Blending science, practice, and politics. Ethics & Behavior, 20(6), 466–481. doi:10.1080/10508422.2010.521448
Kelley, D. L. (2008). Doing meaningful research: From no duh to aha! (A personal record). Journal of Family Communication, 8(1), 1–18. doi:10.1080/15267430701573573
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. B. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. doi:10.1037/0021-9010.88.5.879

Analysis of Covariance (ANCOVA)

Analysis of covariance (ANCOVA) is a handy, powerful, and versatile statistical technique. It is a cousin of analysis of variance (ANOVA). Both ANOVA and ANCOVA, like all other inferential statistics, attempt to explain the nonrandom association between two or more variables. ANCOVA can be used in all forms of ANOVA with one, two, three, or even four independent variables. Likewise, several covariates can be employed simultaneously to control for extraneous variation. Thus, many ANOVA designs can be improved with ANCOVA. This entry discusses the use, application, and benefits of ANCOVA in communication research, and explains the process for selecting covariates.

Use in Communication Research

Communication researchers, like other scientific researchers, assume that the universe is not random.

Stuff is associated with other stuff. Rainfall is associated with humidity. Earthquakes are associated with fault lines between tectonic plates. Income is associated with education. In communication, stuff just doesn’t happen; the world is not a random place. Communication apprehension is associated with poor public speaking performance. Interpersonal intimacy is associated with relational satisfaction. Affectionate behavior is associated with health. Source credibility is associated with increased persuasion. Fear appeals are associated with behavior change. The goal of all scientists, including communication scientists, is to explain and understand the seemingly random universe. All inferential statistics are based on removing unexplained or error variance from the denominator of a statistical test and moving it to the numerator, where it is explained or systematic variation. Understanding communication, like understanding the subject material of any other science, requires explaining the variation in stuff, showing that relations among variables are nonrandom. This is the essence of scientific understanding. This is clearly the case with ANOVA or F ratios, where the numerator of an F ratio is the variance that can be accounted for and the denominator is unexplained variation, called error variance. For example, we might want to know what makes a good public speaker. Many things are involved in public speaking skill, but we want to test whether public speaking skill increases after a public speaking class. We randomly assign college freshmen to take either a public speaking class or some other class (say, chemistry), and at the end of the semester we compute an ANOVA to see whether there is a statistically significant difference in public speaking ability (the outcome variable) between the students who took the chemistry class and the students who took the public speaking class. We could use an ANOVA to do that.
ANOVA examines whether there are statistically significant differences among groups. These can be preexisting groups such as males and females or Democrats and Republicans, or they can be experimental groups such as people receiving three kinds of public speaking training or three different types of persuasive health communication messages.


ANCOVA is the same as ANOVA except it uses an extra variable or variables to control for distracting, interfering, or confounding variables that may distort the real relationship between an independent variable and an outcome variable.

Applications and Benefits

Let’s discuss a couple of examples of important applications of ANCOVA. First, let’s go back to a version of the public speaking class example discussed earlier. We want to see if several kinds of public speaking training increase public speaking ability. The gold standard is an experiment, so we create three classes: one using relaxation therapy, one using traditional public speaking training, and one unrelated to communication, say chemistry. We randomly assign students to each of these kinds of classes, so that all types of students of different races, genders, backgrounds, home cities, etc., have an equal chance of getting into one of the three kinds of classes. We now assume, with at least some justification, that the students in the three kinds of classes are very similar to one another—at least not systematically biased. At the end of the semester, we test each student for her or his public speaking ability (the dependent or outcome variable) and use an ANOVA to test whether students in one group are significantly better speakers than students in the other groups. But we can do better! We know that all the students were really not the same (though we are assuming and expecting that students in all three types of classes were very similar). We know that extraverts tend to be better speakers than introverts, and people with better vocabularies tend to be better speakers than people with more limited vocabularies. We could use ANCOVA to statistically readjust people’s public speaking scores by controlling for extraversion and vocabulary. We statistically adjust each person’s public speaking (outcome) score as though each student had the same extraversion/introversion level and extent of vocabulary. Now we have removed variance associated with two extraneous variables that impacted the students’ public speaking scores and have produced a better test of the association between the three types of classes and public speaking ability.
Let’s take another example. Suppose we conducted a health communication study that tested

three types of persuasive messages (the independent variable) to attempt to persuade college students to use sunscreen (the outcome variable) on spring break. We randomly assign people to hear and view one of three types of persuasive health messages. However, different groups of people may have different attitudes about sunscreen. Very light-skinned people who worry more about sunburns may already use more sunscreen and also might think the message is directed at them and be more attentive to it than dark-skinned people. So we could use race or skin type as a covariate, taking out or controlling for sunscreen use that is attributable to skin color, thereby more effectively testing the effect of the persuasive sun safety message on use of sunscreen. ANCOVA would adjust each person’s sunscreen use score based on skin type and then test the impact of the three types of persuasive messages on sunscreen use. In the two previous examples, we used ANCOVA to adjust an outcome variable based on some other variables (the covariates) known to be associated with the outcome variable. Thus, ANCOVA is a handy statistical maneuver since we can statistically control for one or more of these distracting or confounding variables while comparing our groups in a true experimental design. ANCOVA has another important application. Sometimes, when the differences among groups are naturally occurring, a true experiment is impossible because it requires random assignment to groups. For example, listening ability, gender, or political party affiliation cannot be randomly assigned, only measured. ANCOVA can still be used to take out extraneous variance to provide a better test of the association of the independent variable and the outcome variable even though the independent variable is not manipulated. Consider the following example: We are determining the association of taking a class in nonverbal communication with nonverbal receiving ability.
Prior research consistently shows that women have better nonverbal receiving ability than men. So we can use gender or biological sex as a covariate to adjust the nonverbal receiving ability scores and get a truer association of the benefits of a class on nonverbal communication with nonverbal receiving ability.
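The adjustment logic can be sketched directly. A minimal Python sketch, with invented extraversion scores (the covariate) and speaking scores for the three hypothetical class types: it computes the pooled within-group slope of the outcome on the covariate and uses it to adjust each class mean to the grand covariate mean, one classical way of obtaining ANCOVA-adjusted means.

```python
import statistics

def pooled_within_slope(groups):
    # Pooled within-group regression slope of outcome on covariate:
    # sum of within-group cross-products / sum of within-group covariate SS.
    sp = ss = 0.0
    for cov, out in groups:
        mc, mo = statistics.fmean(cov), statistics.fmean(out)
        sp += sum((c - mc) * (o - mo) for c, o in zip(cov, out))
        ss += sum((c - mc) ** 2 for c in cov)
    return sp / ss

def ancova_adjusted_means(groups):
    # Adjust each group's outcome mean to the grand covariate mean.
    b = pooled_within_slope(groups)
    grand = statistics.fmean([c for cov, _ in groups for c in cov])
    return [statistics.fmean(out) - b * (statistics.fmean(cov) - grand)
            for cov, out in groups]

# Hypothetical (extraversion, speaking score) data per class type.
relaxation = ([3, 5, 4, 6], [70, 78, 74, 80])
training   = ([4, 6, 5, 7], [75, 83, 80, 86])
chemistry  = ([5, 4, 6, 5], [68, 66, 72, 69])
print(ancova_adjusted_means([relaxation, training, chemistry]))
```

The adjusted means are the group means the classes would be expected to show if every student had the same (average) extraversion level; an ANOVA on the adjusted scores is then a cleaner test of the class effect.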


ANCOVA has an additional benefit. Remember this entry started by noting that science seeks to increase explained variance (in the numerator) and reduce unexplained or error variance (in the denominator). All inferential statistics employ some form of this equation. Because ANCOVA can account for and remove some error variance, it lowers that denominator and provides a stronger test of the relationship between an independent variable and an outcome variable.

Covariate Selection

So how should one select covariates? First, a covariate should have a statistically significant relationship with the outcome variable; if the covariate has no such relationship, it cannot extract much variation and would be statistically irrelevant. Second, the covariate should not be associated with the independent variable; if it were, the covariate would extract real variance and mask the association between the independent variable and the outcome variable. Third, if more than one covariate is used, they should not be multicollinear; that is, the covariates should not be highly associated with one another. If they are multicollinear, they will not extract unique variance from the outcome variable and are largely redundant. Finally, ANCOVA must adhere to certain requirements of ANOVA. Ideally, all the variables, including independent and outcome variables, are measured reliably. Likewise, any covariate should be linearly related to the outcome variable, and the ANOVA assumptions of homogeneity of variance and normality should be adhered to.

Peter A. Andersen

See also Analysis of Variance (ANOVA); Experiments and Experimental Design

Further Readings

Kirk, R. E. (1968). Experimental design: Procedures for the behavioral sciences. Belmont, CA: Brooks/Cole.
Tabachnick, B. G., & Fidell, L. S. (1989). Using multivariate statistics (2nd ed.). New York, NY: Harper Collins.
Winer, B. J. (1971). Statistical principles in experimental design (2nd ed.). New York, NY: McGraw-Hill.

Analysis of Ranks

One method of providing an evaluation is the rank ordering of some unit. For example, the ranking of college football teams provides an annual exercise in the comparison of units. Rank ordering provides a relative placement of units without regard to the distance between the evaluations. For example, if teams were rated on a 1–100 basis, it is possible that two teams could receive a rating of 95, a tie for first place. However, such an evaluation does not mean that the next team will receive a 94; the next team could be evaluated and given a score of 85, for example. We could say that there are two great college football teams and the rest are simply not nearly as good. Rank ordering places the teams relative to each other without regard to how much better one team is than another. For example, the first place team may be considered much better than the second place team, but the rankings would still be 1 and 2. The third place team may be considered only slightly less skilled than the second place team but would still receive a ranking of 3. The rank order indicates only the relative evaluation: the distance between adjacent ranks is always the same numerically, but that uniform interval fails to represent the true level of difference between units. Special statistics are used to evaluate data generated in these circumstances. The analysis of ranks can provide the same kinds of outcomes as when the desire is to compare means, calculate correlations, or even perform more complicated statistical analyses (ordinal multiple regression or ordinal analysis of variance). The question is how to generate the underlying tests that can be used to perform an analysis of ranks. Like all statistics, the issue is the ability to generate an expectation of what the distribution should look like if the ranks were randomly distributed.
Suppose we take a simple example of a comparison of whether or not two systems of ranking produce the same or different results. Consider the issue of football rankings; we have two groups do the rankings: (a) coaches and (b) fans. The question is whether the two groups largely agree or not on the rank order of the teams.


A central and simple question considers whether or not the two rankings produce the same results. One possibility involves the use of Spearman’s rank correlation coefficient. The goal is to consider to what degree the two systems of ranking correspond with each other. Similar to Pearson’s correlation (used on interval or ratio data), the values run from the perfect levels of 1.00 (as one value increases, the value on the other ranking also increases) to −1.00 (as one value increases, the other decreases). A correlation of 0.00 indicates no predictability or correspondence between the ranking methods. The generated value expresses the degree of accuracy in predicting the value of the second ranking when the value of the first ranking is known. With a perfect correlation, knowing that a team was rated first by the fans would correspond to a ranking of first by the coaches. The higher the correlation, the more accurate the prediction. The comparison is, in many respects, mathematically the same as any analysis that assumes that the distance between the numbers (1, 2, 3, etc.) is the same. What is generated is a pair of distributions and a comparison of the discrepancies between them. The sum of the discrepancies creates the ability to generate a significance test that checks whether the discrepancies are greater than expected due to random chance. The expectation is not that the rank orders will match exactly but that they will be very similar, assuming the two rank order methods produce similar sets of results. There exist a number of challenges to the assumptions used to compare rank order data. For example, the question of whether a normal-distribution approach that treats the distances as the same would produce better or more accurate results receives a lot of attention.
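A minimal Python sketch of Spearman’s rank correlation applied to the coaches-versus-fans example (the rankings are invented): tied values receive the average of the ranks they span, and rho is simply the Pearson correlation computed on the two sets of ranks.

```python
import math
import statistics

def ranks(values):
    # Ranks 1..n; tied values receive the average of the ranks they span.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def spearman_rho(x, y):
    # Spearman's rho is the Pearson correlation of the two sets of ranks.
    return pearson(ranks(x), ranks(y))

coaches = [1, 2, 3, 4, 5, 6]   # hypothetical team rankings
fans    = [2, 1, 3, 4, 6, 5]
print(round(spearman_rho(coaches, fans), 3))   # → 0.886
```

With no ties this matches the familiar shortcut rho = 1 − 6Σd²/(n(n² − 1)), where d is the difference between a team’s two ranks.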
A great deal of consensus, based on many different mathematical simulations, holds that parametric procedures (Student’s t-test, Pearson’s correlation) actually generate outcomes that are more accurate and less subject to Type I (false positive) and Type II (false negative) errors. One of the more standard characteristics of many nonparametric statistics (the kind of statistics used or recommended for rank order data) is that they assume no particular underlying distribution, but the significance test then often involves conversion to a z score. What this means is that the standard metric of comparison becomes related to an interval-level conversion. Some scholars suggest that rather than performing the statistical conversion, a more efficient procedure involves simply using what would typically be considered parametric statistics from the beginning. The implication is that despite the perceived mathematical advantage of nonparametric statistics, the parametric statistics are often considered more accurate. However, there exists disagreement among scholars about the implications of the choice of statistics. The question of whether or not a particular choice is more warranted or beneficial may be illusory. Very often, the application of a rank order procedure to the analysis of rank order data generates the same conclusion as the application of parametric statistics to the same data.

Another element in the comparison of statistics involves the inclusion of rank order data in a meta-analysis. Often the rank order data become converted to a z score (a parametric measure) or are simply interpreted as a parametric statistic (like Spearman’s rho, the rank order correlation). So, even if the investigator chooses to use a nonparametric statistic, the data are still treated in a meta-analysis as a parametric statistic for inclusion in the analysis.

Mike Allen

See also Correlation, Spearman; Cramer’s V; Kendall’s Tau; Kruskal-Wallis Test; Measurement Levels; Measurement Levels, Ordinal

Further Readings

Siegel, S., & Castellan, N. J. (1988). Nonparametric statistics for the behavioral sciences (2nd ed.). New York, NY: McGraw-Hill.
Williams, F., & Monge, P. (2000). Reasoning with statistics: How to read quantitative research (5th ed.). Boston, MA: Cengage Learning.

Analysis of Residuals

A residual is something left over after a primary analysis has been conducted. The concept is most often applied to multiple regression.


Essentially, multiple regression is an attempt to use predictor variables in combination to predict a dependent variable. The following equation provides a general view of this process:

Y = C + b1X1 + b2X2 + b3X3 + . . . + biXi + e

This equation indicates that the value of the dependent variable (Y) is based on a constant number (C) and a combination of predictor variables (X) that are weighted by coefficients (b). The last term in the equation represents the “error” term (e). What is meant by “error” in this case is that all elements not part of the prediction constitute error, which should be a random and unpredictable part of the equation necessary to make the two sides of the equation balance. The sum of residuals (error) in the entire set of data should equal zero. Thus, if one considers the score for the dependent variable Y and the amount of error (either positive or negative) for each case, one can plot the amount of error for each value on a diagram. The assumption that the error is random means that when examining this distribution, no discernible pattern should exist. The analysis of residuals refers to an attempt to determine whether or not a pattern exists and why.
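The zero-sum property of residuals can be verified directly. A minimal Python sketch with a single predictor and invented data: fit a least squares line, compute the residuals, and confirm that they sum to (floating-point) zero.

```python
import statistics

def fit_line(x, y):
    # Ordinary least squares slope and intercept for one predictor.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = (sum((a - mx) * (c - my) for a, c in zip(x, y)) /
         sum((a - mx) ** 2 for a in x))
    return my - b * mx, b

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
c, b = fit_line(x, y)
residuals = [yi - (c + b * xi) for xi, yi in zip(x, y)]
# The residuals sum to zero (up to floating-point noise), as the entry notes.
print(sum(residuals))
```

Plotting these residuals against x (or against the predicted values) is the usual next step for spotting a pattern.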

Plotting Residuals

The typical technique involves the production of a scatterplot permitting a visual examination in addition to more formal statistical analysis. Various possibilities exist that may require attention. Each separate test or assessment requires some effort. The possibility exists that the explanation for the residuals requires a combination of multiple explanations and consideration of several different possible sources. The first option suggests that the residual pattern is random and within expected tolerance limits for the overall pattern and individual predictions. Under these conditions, the multiple regression equation provides a set of predictions that work without any of the problems identifiable on the basis of residuals. The errors that exist are random, based on the normal assumptions related to sampling error.

The first consideration is the existence of outliers. An outlier is a single value whose error is so great that it distorts the estimation process. Consider a set of values where the error value for one case is 100 and the next largest positive error value is 10. Remember that the sum of the errors should be zero. The level of error for this single case becomes so large that the rest of the values are, in a sense, distorted, because almost all of the error becomes associated with this single case. The most frequent remedy is removal of the identified case. The complication with a random sample is that the extreme value may simply reflect a randomly drawn outlier. In theory, 50% of the errors should be positive and 50% should be negative. Often this percentage is distorted by an outlier, but if the percentage departs from an even split by a significant amount, some adjustment may be required. The assumption is that error should operate as a normal distribution around a mean of zero, and a departure from that indicates some element that may require adjustment or consideration. An extreme outlier in the analysis of residuals indicates the source of a significant potential departure from normal curve considerations. The hallmark of a random extreme outlier is that when the case is removed, the mean of the sample adjusts only slightly while the variance (variability) drops tremendously. Under those conditions, the outlier often is considered simply the result of a random chance factor associated with sampling. The second consideration requires an examination of the pattern of residuals formed by the scatterplot. Evidence of a pattern, like a nonlinear curve, indicates the existence of some type of variable or metric that creates the ability to predict the level of error.
Any ability to predict the level of error in the set of estimates runs counter to the assumption that the amount of error should be random and unpredictable. If such a pattern is detectable (formal statistical tests are available to assess this), then there may exist a missing variable or some interaction of predictors that may need to be included as part of the analysis. The case for residuals operating in some nonrandom manner becomes easy to assess; a detectable pattern indicates that the error no longer operates as random.
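One simple formal test of the kind alluded to here is the Wald–Wolfowitz runs test on the signs of the residuals: if the errors are randomly ordered, runs of positive and negative values should be neither too few (long same-sign blocks, suggesting a curve or drift) nor too many (systematic alternation). A minimal sketch with invented residuals:

```python
import math

def runs_z(residuals):
    # Runs test on residual signs: a z far from 0 suggests the errors
    # are not randomly ordered, i.e., a pattern exists.
    signs = [r > 0 for r in residuals if r != 0]
    n1 = sum(signs)               # count of positive residuals
    n2 = len(signs) - n1          # count of negative residuals
    n = n1 + n2
    runs = 1 + sum(signs[i] != signs[i - 1] for i in range(1, len(signs)))
    mean = 2 * n1 * n2 / n + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return (runs - mean) / math.sqrt(var)

z_alt   = runs_z([1, -1, 2, -2, 1, -1, 2, -2])   # alternating: many runs
z_block = runs_z([1, 2, 3, 4, -1, -2, -3, -4])   # blocked: too few runs
print(z_alt, z_block)
```

The normal approximation used for the mean and variance of the run count is standard but rough for very small samples; it is shown here only to make the idea of "testing for a pattern" concrete.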


Determining the Cause of a Residual Pattern

If the analysis of residuals reveals a pattern, the goal of adequate statistical analysis requires or expects some identification of the source of this pattern and its inclusion in the multiple regression equation. The assumption of multiple regression is that the distribution of error is random; when the error is not random, the equation is functioning with reduced efficiency and effectiveness. The challenge becomes adapting or adjusting the equation so that the predictability operates at a maximum. Several routes can be considered, such as a possible covariate that becomes multiplied through to adjust the scores and remove the systematic element in the observed residual. For example, if some communication variable functions as a result of experience (age), the use of age as a covariate in the analysis could essentially remove the influence of age and restore the error term to randomness. Another possible cause of departure from a random pattern could be either a set of causal dependencies unidentified in the analysis or some combinatorial interaction. The result of either set of conditions may become evident in some pattern formed by the residuals when analyzed. The correct solution in this instance would be the identification of this feature and its inclusion in the analysis to account for the pattern. The problem with any recognition of a nonrandom pattern in residuals becomes the difficulty of identifying and explaining that event. The challenge of an unknown and potentially unidentified element may be both methodologically and theoretically unsolvable in the immediate investigation. However, the case for future research and the identification and solution of this phenomenon should serve to increase the attention and need for additional efforts.

Mike Allen

See also Analysis of Covariance (ANCOVA); Multiple Regression; Multiple Regression: Standardize and Raw Coefficients

Further Readings

Rosopa, P. J., Schaffer, M. M., & Schroeder, A. N. (2013). Managing heteroscedasticity in general linear models. Psychological Methods, 18(3), 335–351. doi:10.1037/a0032553

Analysis of Variance (ANOVA)

In many social science disciplines such as communication and media studies, researchers wish to compare group averages on a dependent variable across different levels of an independent variable. Analysis of variance (ANOVA) is a collection of inferential statistical tests belonging to the general linear model (GLM) family that examine whether two or more levels (e.g., conditions) of a categorical independent variable have an influence on a dependent variable. As with most inferential tests, the purpose of ANOVA is to test the likelihood that the results observed are due to chance differences between the groups. This entry provides a general overview of ANOVA, including a discussion of the assumptions underlying the tests, a comparison with t-tests, and the different forms of ANOVA, and provides two examples of ANOVA designs.

Assumptions Underlying ANOVA Tests

ANOVA belongs to the family of parametric inferential tests; therefore, a number of requirements related to the variables and the population of interest must be met or else the assumptions underlying the tests’ mathematical properties will be violated. One requirement is that independent variables should be measured on a nominal scale, such that the variables’ conditions vary qualitatively (e.g., presence or absence of a variable) but not quantitatively. Another requirement is that the variance associated with the populations from which the independent variable was sampled should be equal (i.e., homogeneity of variance). Dependent variables should be quantifiable on at least an interval scale (e.g., extent of agreement with a statement; amount of satisfaction with a relationship) and should also be normally distributed. These assumptions also highlight ANOVA’s roots in experimental design, where researchers have a significant amount of control over the manipulation and measurement of the variables of interest. However, it is possible to utilize ANOVA in quasi-experimental and some correlational designs where random assignment of participants to conditions is too costly or not possible (e.g., gender). Violation of these assumptions can affect the interpretation of the results from these tests; under these conditions nonparametric tests might better serve the researcher.

Comparison to t-Test

To better understand and appreciate the utility of ANOVA tests, it is useful to compare them with t-test types of analyses. Both tests share similar assumptions and are applicable for designs where the different levels of a condition can be independent (i.e., participants are exposed to only one level of the independent variable) or dependent (i.e., participants are exposed to all levels of the independent variable). Just like t-tests, ANOVA partitions out the variability attributed to the difference between levels of the independent variable (i.e., difference between groups, or treatment variance) and variability in the research context (e.g., naturally occurring variability, or error variance). However, two important differences between t-tests and the F tests derived from ANOVA are that ANOVA can accommodate research designs that utilize (a) more than two levels of an independent variable and (b) multiple independent variables.
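The kinship between the two tests can be shown numerically: with exactly two groups, the one-way ANOVA F value equals the square of the pooled-variance independent samples t. A minimal Python sketch with invented scores:

```python
import math
import statistics

def independent_t(a, b):
    # Independent samples t assuming equal variances (pooled estimate).
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) +
           (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return ((statistics.fmean(a) - statistics.fmean(b)) /
            math.sqrt(sp2 * (1 / na + 1 / nb)))

def one_way_f(groups):
    # F = between-groups mean square / within-groups mean square.
    grand = statistics.fmean([x for g in groups for x in g])
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2
                     for g in groups)
    ss_within = sum((x - statistics.fmean(g)) ** 2
                    for g in groups for x in g)
    df_b = len(groups) - 1
    df_w = sum(len(g) for g in groups) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

a = [4, 5, 6, 5]
b = [7, 8, 6, 9]
t = independent_t(a, b)
f = one_way_f([a, b])
print(t * t, f)   # identical for two groups
```

The same `one_way_f` function extends directly to three or more groups, which is exactly the flexibility the entry credits to ANOVA.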

Types of ANOVA Designs

One-Way ANOVA

To illustrate the different types of variables in a research context, consider the following example involving message source effects in persuasion— common variables of interest in communication research. Imagine that a researcher is interested in the effect of source credibility on the persuasiveness of a message advocating the reduction of greenhouse gases. In this case, credibility is the independent variable and persuasion is the dependent variable. The researcher hypothesizes that participants exposed to the message from the high-credibility source will be more persuaded than those who are exposed to the same message from either the low-credibility source or where no source information is mentioned. To test this, three conditions are created: a high-credibility condition, a low-credibility condition, and a control condition where credibility-related information is absent. Following exposure to the independent variable and the message, participants

report their opinion toward the reduction of greenhouse gases.

Examination of participants’ opinions will likely yield that the amount of persuasion varies across participants: some may be very persuaded, some very little, and some moderately so. ANOVA-based tests will partition that variability into two general types: variability attributed to the independent variable (i.e., treatment variance, credibility) and variability that is left over in the experimental context. This latter form of variability is called error variance, and can originate from many different aspects of the experimental setting not under the control of the experimenter, such as characteristics the participants bring into the context (e.g., personality) and the context itself (e.g., room temperature). These other sources of variability are combined into error variance, which is essentially all of the variability in a dependent variable not attributed to the independent variable. The value calculated by the ANOVA test, the F value, is the ratio of these two types of variability (hence the term analysis of variance). Specifically, an F value results from dividing the treatment variance by the error variance. The larger the value, the smaller the likelihood that chance played a role in the differences observed between levels of the independent variable. In traditional null hypothesis testing terms, a significant F value is one that has a less than 5% probability of occurring by chance (p < .05).

In the bullying intervention example, students who received the program reported more supportive behaviors for bullying victims, compared to students who did not receive the intervention program (M = 3.25, SD = 8, N = 85). The graph in Figure 2 depicts the results. However, what if researchers wanted to compare change in the intervention group from before the program to its end? A paired samples t-test would need to be used.
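The paired logic can be sketched in a few lines of Python (all numbers are hypothetical, not the entry’s data): the t statistic is the mean of the difference scores divided by the standard error of the differences.

```python
import math
import statistics

def paired_t(before, after):
    # t = mean of the difference scores / (sd of differences / sqrt(n)).
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    return statistics.fmean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Hypothetical bullying scores for four students, before and after.
before = [5.7, 6.2, 4.8, 5.1]
after  = [3.9, 4.2, 4.5, 5.6]
print(round(paired_t(before, after), 2))   # positive t: bullying decreased
```

Each difference score compares a student only to himself or herself, which is what makes the paired design sensitive to change over time.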
A paired sample t-test is conducted when there are different scores for the same subjects, or when data come in pairs; it is commonly used in studies measuring data at different time points. Consider the previous example of a bullying intervention program, with researchers also wanting to measure a decrease in bullying. The paired sample test generates the t statistic by calculating the mean difference of the scores and dividing by the standard error of the difference. The following formula is used:

t = x̄diff / (sd / √n)

where x̄diff = the mean difference between the paired scores, sd = the standard deviation of the difference scores, and n = the number of difference scores. Consider three sets of mean scores of students reporting on the magnitude of classroom bullying before the intervention
(A) and after the intervention (B), with column D recording the mean differences:

A − B = D
5.7 − 3.9 = 1.8
6.2 − 4.2 = 2.0

Averaging the differences, the difference between the means is 2. Next, the standard deviation of the difference scores is calculated by taking the square root of their variance. Assume a standard deviation of 1.9 is calculated. Knowing n = 3, we can now find the t value:

t = 2 / (1.9 / √3) = 2 / (1.9 / 1.73) = 2 / 1.1 = 1.82

Looking up the critical value of the t score with 2 degrees of freedom (df = n − 1) for a two-tailed t-test, we can see the t of 1.82 is far from significant. In this case, the findings would suggest participants did not experience a significant reduction in bullying behaviors as a result of the intervention.

Figure 2  Example of Results of Independent Samples t-Test [t distribution with two-tailed rejection regions of .025 beyond the critical values of ±1.97 and a central area of .95; the observed t of 2.70 falls in the upper rejection region]

Benefits and Drawbacks/Conclusion

t-Tests provide a picture of how well data from a sample truly represents an actual population, especially in research with smaller sample sizes. Potential drawbacks to t-tests are that only two groups of dependent variables can be compared, and the relationship with only one independent variable can be found.

Nancy A. Burrell and Clare Gross

See also Normal Curve Distribution; Standard Deviation and Variance; Standard Error; Z Score; Z Transformation

Further Readings

Cohen, B. H. (2001). Explaining psychological statistics (2nd ed.). New York, NY: Wiley.
Mendenhall, W., Beaver, R. J., & Beaver, B. M. (1999). Introduction to probability and statistics (10th ed.). Pacific Grove, CA: Duxbury Press.
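The paired samples arithmetic above can be checked with a few lines of code. This Python sketch is illustrative only (the function name is mine; the summary values come from the worked bullying example) and reproduces the t of roughly 1.82:

```python
import math

def paired_t_from_summary(mean_diff, sd_diff, n):
    """Paired samples t: mean difference / (SD of differences / sqrt(n))."""
    standard_error = sd_diff / math.sqrt(n)
    return mean_diff / standard_error

# Summary values from the worked example: mean difference = 2,
# standard deviation of the difference scores = 1.9, n = 3 difference scores
t = paired_t_from_summary(mean_diff=2.0, sd_diff=1.9, n=3)
print(round(t, 2))  # 1.82
```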
t-Test, Independent Samples

The t-test is used when researchers want to compare the means of two groups to determine if the groups are statistically different from one another. The t-test, in general, is a type of parametric test, which is a test used to make assumptions about the larger population based on data gathered from a sample of that population. It can also be classified as an inferential statistic, as one is trying to “infer” something about the population based on a sample. In other words, in the case of t-tests, researchers use sample data to make claims about the larger population. The t-test statistic has variations in and of itself depending on the nature of the two means the researcher wishes to compare. The three types of t-tests include the independent samples t-test, the paired samples t-test, and the one-sample t-test. The t-test described in this entry is the independent samples t-test. The independent samples t-test is an important, basic statistic for communication researchers. It is one of the simplest and most frequently used statistical measures in communication research. It is a straightforward test in which researchers compare the difference between means of two independent groups. This entry offers a definition of this type of statistical measure, a description of
when it ought to be used, the assumptions of the test, as well as case examples to illustrate appropriate uses of the test.

The Independent Samples t-Test

As noted, the independent samples t-test is widely used by communication researchers. The purpose of this test is to determine whether the means from two independent groups are significantly different from each other. It is important to remember that if this test is used, the two groups must be independent from one another. Groups can be considered independent when there is no overlap between the groups. For example, if one has a control group and a treatment group, participants cannot be in the control group at the same time that they are in the treatment group. Or, if we are grouping based on age, a participant cannot be 20 at the same time that she is 28. Groups can also be considered independent when we are collecting data from and comparing two different populations. For example, say that a researcher is hoping to compare the friendliness of current Wisconsin residents to the friendliness of current Oklahoma residents. Here, the participants are from two different populations (i.e., a participant is either a current Wisconsin resident or a current Oklahoma resident), and the score for a participant from one group is not related in any way to the score for a participant from another group. In sum, for two groups to be considered independent—and thus, to be able to use the independent samples t-test—there must not be any naturally occurring or planned relationship between the members of the two groups. The participants within the groups are not matched in advance and there is no “before and after” type of testing used on just one group. When the above conditions are satisfied, we can assume the value for one group’s mean is not influenced by or related to the value of the other group’s mean. Thus, the groups are independent.

When to Use the Independent Samples t-Test

Now that the independent samples t-test has been defined and now that it is clear whether two groups are independent, it is important to explain when this particular test can be used. It is helpful

to ask oneself questions when determining which type of statistical measure is right for the research one is attempting to analyze. First, to determine if the t-test is the appropriate test, ask: Am I trying to determine if the means of two groups are statistically different from each other? If the answer is “yes,” then proceed to determine which particular t-test is the appropriate one for your analysis. In the case of the independent samples t-test, ask: Am I trying to determine if the means for two independent groups are significantly different from one another? Am I trying to determine if two separate groups have different average values on some variable? Am I trying to determine whether the mean value of some variable for one group differs significantly from the mean value of that variable for a second group? If the answer to these questions is yes, proceed to use the independent samples t-test. If the answer to the above questions is no, it is likely that you ought to consider one of the other two types of t-tests. For example, say that a researcher is conducting an experiment, and is comparing a “before and after” value for one group of participants. The mean value of the test variable is collected at Time One, then an intervention occurs, and then the mean value of the test variable is collected again for the same sample at Time Two. Here, the two “groups” are actually the before and after scores; these two means are considered dependent on each other, and the paired samples t-test ought to be used. Or, if a researcher wishes to compare a sample mean to a known population mean, a one-sample t-test would be the correct statistical measure to use. The analysis of variance (ANOVA) is yet another statistical measure related to the t-test, and would be used if a researcher wants to compare the means of more than two groups. The following brief scenarios would each allow the use of an independent samples t-test. 
An organizational communication researcher wants to compare the communication competence levels of managers with less than one year of managerial experience to those of managers with more than one year of managerial experience.

An instructional communication researcher hopes to determine whether there are differences in levels of public speaking apprehension between students who took an accelerated, two-week public speaking course and students who took a 16-week public speaking course.

An interpersonal communication scholar is interested in determining whether individuals in long-distance romantic relationships and individuals in geographically close romantic relationships report different levels of relational closeness.

Assumptions of the Independent Samples t-Test

Each statistical measure carries a set of assumptions that must be satisfied in order to consider the results of the test to be valid. If these assumptions are violated (or not met) in some way, one cannot consider the results to be “true.” There are four basic assumptions of the independent samples t-test. First, this test assumes that the cases represent a random sample from the defined population about which we are hoping to make claims, and that the scores are independent of each other. Second, the independent samples t-test requires consideration of the scales of measurement for the variables in the t-test. The independent variable (also known as the grouping variable) must be nominal, either naturally or through a median split. The dependent variable (also known as the test variable) has to be an interval or ratio type of variable. Third, the independent samples t-test also assumes the scores on the testing variable are normally distributed. The final assumption of this test is that the population variances are equal.
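Given the equal-variance assumption just listed, the independent samples t statistic pools the two sample variances. The following Python sketch is illustrative only (made-up scores and a hypothetical function name, not part of the original entry):

```python
import math

def independent_t(group1, group2):
    """Pooled-variance independent samples t statistic."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    # Sample variances (n - 1 denominators)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    # Pooling assumes the two population variances are equal
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))

t = independent_t([5, 6, 7], [1, 2, 3])
print(round(t, 2))  # 4.9
```

Here the two group means differ by 4 points relative to a pooled standard error of about 0.82, giving t ≈ 4.9.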

Case Example to Illustrate the Independent Samples t-Test

Anndrea is interested in determining if employees of small, niche women’s clothing boutiques identify more strongly with their organization than employees working in the women’s clothing section of large department stores. To test her hypothesis, she obtains a random sample of 100 full-time boutique workers and 100 full-time department store workers. Each of her 200 participants completes a brief, six-item questionnaire designed to measure participants’ feelings of organizational identification, or oneness with or belongingness to their organization. Anndrea ends up with two variables to analyze—her grouping variable and her testing variable. Her

grouping variable (or independent variable) is nominal in nature; each participant is either an employee of a boutique or a department store. Boutique employees are coded as 1 on the grouping variable, while department store employees are coded as 2. Her testing variable (or dependent variable)—the score on the organizational identification questionnaire—is interval in nature, with possible scores ranging from 1 to 5. Consistent with most other social science research, she sets her significance level/alpha level ahead of time at .05. If her t-value is significant at less than or equal to .05, Anndrea can say that there is indeed a statistically significant difference in boutique workers’ and department store workers’ reported levels of organizational identification, and that these differences did not occur due to chance.

Nicole Ploeger-Lyons

See also Analysis of Variance (ANOVA); Measurement Levels; Random Assignment; t-Test; t-Test, One Sample; t-Test, Paired Samples

Further Readings

Bostrom, R. N. (1998). Communication research. Prospect Heights, IL: Waveland.
Green, S. B., & Salkind, N. J. (2011). Using SPSS for Windows and Macintosh: Analyzing and understanding data (6th ed.). Boston, MA: Pearson.
Kachigan, S. K. (1986). Statistical analysis: An interdisciplinary introduction to univariate and multivariate methods. New York, NY: Radius Press.
Keyton, J. (2014). Communication research: Asking questions, finding answers (4th ed.). Boston, MA: McGraw-Hill.
Reinard, J. C. (2007). Introduction to communication research (4th ed.). Boston, MA: McGraw-Hill.

t-Test, One Sample

Broadly speaking, the t-test is a statistical procedure used when one hopes to compare the means of two groups. The t-test is considered a type of inferential statistic because, with it, one is attempting to infer something about the population from the sample data. In other words, one takes the knowledge about the sample and makes claims
about the population. The t-test can also be classified as a parametric test. To be more specific, there are three main types of t-tests, each of which is an important communication research measure. They include the one-sample t-test, the independent samples t-test, and the paired samples t-test. The focus of this entry is the one-sample t-test. This test is often considered to be the “classic” form of the t-test. The one-sample t-test is used when researchers hope to compare a sample mean to a population mean or to some other specified test value. This entry offers a more detailed definition and explanation of the one-sample t-test, describes when to use this statistical measure, identifies the major assumptions of this test, and provides a case example to help illustrate a proper use of this procedure in communication research.

The One-Sample t-Test

The ability to correctly use the one-sample t-test is an important tool in any communication researcher’s toolbox. As briefly noted, this test ought to be used when a researcher collects data on a single sample from a defined population when the population variance is unknown. Essentially, the researcher would be comparing the sample mean to what is known about the larger population in order to determine whether the sample is significantly different from the population. Or, the researcher could compare the sample mean to another specified test value.

When to Use the One-Sample t-Test

Now that a basic definition of the one-sample t-test is understood, the next step is to determine when to use this test. An easy way to determine if the t-test in general—and here, the one-sample t-test—is the correct statistical procedure to use is to ask a series of yes/no questions. First, to determine if a t-test is the appropriate test, ask: Am I comparing two means to determine if they are statistically different from each other? If the answer is “yes,” then it is time to figure out which of the three t-tests to use. Since the one-sample t-test is the focus here, start with that; if the answer to any of the following questions is “yes,” proceed with the one-sample t-test. Am I trying to determine if a sample

mean is statistically different from a population mean? Am I trying to determine if a sample mean is statistically different from the midpoint of my test variable? Am I trying to determine if a sample mean is statistically different from a chance level of performance on the test variable? If a researcher is comparing two means, but answers “no” to all of the immediately preceding questions, it is likely that one of the other two types of t-tests will be appropriate to test the hypothesis or research question. For example, a scholar is interested in determining whether communication majors and mathematics majors have different mean extroversion scores. Since these are two independent groups, the independent samples t-test would be the best statistical test. Or, for example, a scholar is interested in voters’ likelihood to vote for a candidate before and after a television promotion. This “before and after” value for the same group of participants indicates that a paired samples t-test would be an appropriate test.

Determining the Test Value

Determining the test value is perhaps the biggest decision to be made when using the one-sample t-test. When trying to determine what the test value will be (usually considered to be a neutral point in some way), there are some common options. First, one could compare a sample mean to a known mean from the larger population. For example, if it is known from past research that the national mean value for job satisfaction is 6.2 on a 10-point scale, one could adopt the previously established mean. If one is interested in determining whether a specific company’s employees report higher or lower job satisfaction than the national average, the test value of 6.2 would be chosen. What if the national job satisfaction mean is unknown? Another option for determining the test value could be to use the midpoint of the job satisfaction scale. Here, one would use a test value of 5.0, as that is the exact midpoint of the 10-point scale. Lastly, if one is using a test variable that involves some sort of performance, an option for the test value would be the chance level of performance on that test variable. For example, say that a basic communication course director is interested to
know if students across sections of the course can identify the use of pathos within a presentation. He or she has a random group of students from multiple sections read a persuasive presentation manuscript. Then, in each of the five multiple-choice questions, they are presented with four lines from the presentation and are asked to identify which one line of the four demonstrates the speaker’s use of pathos (appeal to listeners’ emotions). Because students have a one in four chance of getting each item correct, and because the student has five opportunities to obtain a correct answer, chance level on this task is 1.25 correct answers (5 × 0.25). A score less than 1.25 would indicate students did worse than chance, while a score greater than 1.25 would indicate students performed better than chance.

Assumptions of the One-Sample t-Test

Just like other statistical measures, the one-sample t-test carries with it a set of assumptions about the data that need to be met in order to ensure appropriate use of the test. The first assumption of the one-sample t-test is that the sample cases represent a random sampling from the defined population. Second, the test variable ought to be classified as either an interval- or ratio-level variable. Lastly, the test variable should be normally distributed within the population.
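The one-sample t statistic divides the difference between the sample mean and the chosen test value by the estimated standard error. A minimal Python sketch follows (all scores are invented; the test value of 73 echoes the case example later in this entry):

```python
import math

def one_sample_t(scores, test_value):
    """t = (sample mean - test value) / (s / sqrt(n))."""
    n = len(scores)
    mean = sum(scores) / n
    # Sample standard deviation (n - 1 denominator)
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
    return (mean - test_value) / (sd / math.sqrt(n))

# Five hypothetical argumentativeness totals compared with a test value of 73
t = one_sample_t([75, 78, 72, 80, 70], test_value=73)
print(round(t, 2))  # 1.08
```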

Case Example to Illustrate the One-Sample t-Test

Levi hopes to investigate how student leaders in his home state compare in argumentativeness to national student leaders. Specifically, Levi wants to determine whether undergraduate student government leaders from the private and public universities in the state of Kansas are more or less argumentative than undergraduate student government leaders from private and public universities across the United States. Levi collects a random sample of 50 Kansan student government leaders and has them complete Infante and Rancer’s Argumentativeness Scale, which asks participants to answer a series of 20 questions. Participants report on a scale from 1 to 5 how true each item is for them personally, where 1 = almost never true for you and 5 = almost always true for you. The responses for each

item are summed to give a total score out of 100. A total score of 73-100 indicates high argumentativeness, while a score of 56-72 indicates moderate argumentativeness. Low argumentativeness is indicated by a score of 20-55. Levi knows—from past research—that the population mean for undergraduate student leaders from public and private universities across the United States is 73, which indicates a relatively high level of argumentativeness. Therefore, he conducts a one-sample t-test on the 50 Kansans’ scores to determine if the mean of his sample is statistically different from 73. He chose to set his test value at 73 because a sample mean with a value greater than 73 indicates Kansan student government leaders are more argumentative than the student government leaders from across the nation, while a value less than 73 indicates student government leaders from Kansas are less argumentative. Consistent with most social science research, Levi sets his significance level/alpha level ahead of time at .05. If his t-value is significant at less than or equal to .05, Levi can say with 95% confidence that there is indeed a statistically significant difference (and that these differences did not occur due to chance) in argumentativeness between student government leaders in his state and across the nation.

Nicole Ploeger-Lyons

See also Analysis of Variance (ANOVA); Measurement Levels; Random Assignment; t-Test; t-Test, Independent Samples; t-Test, Paired Samples

Further Readings

Bostrom, R. N. (1998). Communication research. Prospect Heights, IL: Waveland.
Green, S. B., & Salkind, N. J. (2011). Using SPSS for Windows and Macintosh: Analyzing and understanding data (6th ed.). Boston, MA: Pearson.
Kachigan, S. K. (1986). Statistical analysis: An interdisciplinary introduction to univariate and multivariate methods. New York, NY: Radius Press.
Keyton, J. (2014). Communication research: Asking questions, finding answers (4th ed.). Boston, MA: McGraw-Hill.
Reinard, J. C. (2007). Introduction to communication research (4th ed.). Boston, MA: McGraw-Hill.

t-Test, Paired Samples

Before describing a paired samples t-test, it is helpful to offer a brief reminder of the t-test itself. The t-test statistical procedure is used when a researcher wants to know whether two group means are different and whether that difference can be shown to be statistically significant. Statistical significance indicates that the outcome of the statistical procedure is rare enough to confirm that the differences observed would not have occurred solely by chance. For communication research (and many other social sciences), a one in 20 chance is usually considered sufficiently rare. Thus, the alpha (or significance) level for most t-tests is set at .05. The paired samples t-test is one of three main types of t-tests, the other two being the one-sample t-test and the independent samples t-test. Determining which t-test to use depends on the nature of the data itself and the hypothesis or research question that is posed. In short, what is the nature of the two means one wishes to compare? If the means are derived from matched pairs (or matched subjects) or if they are derived from a repeated measures design (similar to “before and after” scores), then the paired samples t-test is the correct choice. This entry offers a definition of the paired samples t-test, a detailed explanation of the types of study design that may require the use of this test, the assumptions associated with this statistical procedure, as well as a case example to illustrate an appropriate use of the test.

The Paired Samples t-Test

Understanding when and how to perform a paired samples t-test is valuable for any communication researcher. Many communication studies use a repeated measures experimental design (with or without an intervention), or involve researcher-produced participant pairs or naturally occurring participant pairs. In any of these instances, the paired samples t-test would be the correct statistical choice to compare means from the two groups. In short, the paired samples t-test is a statistical procedure employed to compare the means of two groups that are matched, dependent on one

another, influenced by each other, or linked to each other in some way. The paired samples t-test is also referred to as the dependent samples t-test; remembering the word “dependent” is a helpful tip to understand this type of test. Dependent samples are contrasted with the independent samples t-test, in which there is no overlap between the two groups and the value for one group’s mean is not influenced by the value of the other group’s mean. The primary question being asked when using a paired samples t-test is: Do mean differences in scores between the two groups (or conditions) differ significantly from zero?

When to Use the Paired Samples t-Test

It is important to remember that if the paired samples t-test is used, the two group means one wishes to compare must be dependent on each other in some way. How does one know if groups are dependent on each other? The two most common possibilities that would lead to dependent groups include researcher-produced pairs and the use of a repeated measures design.

Researcher-Produced Pairs

Oftentimes, researchers choose to pair participants in two groups. Also known as matched pairs, this phrase means that a person in one group is somehow paired to or matched with a participant in another group. A common use of this type of sophisticated pairing, in which two groups of participants are matched individually by the researcher, is to pair based on one or more demographic characteristics. For instance, if a researcher is studying college students’ public speaking apprehension and public speaking ability, the researcher’s population is college students in general. Purposeful pairing allows the researcher to control for some extraneous demographic variable. To control for year in college, the researcher could choose to place one freshman in one group and another freshman in the other group, a sophomore in one group and another sophomore in the other group, a junior in one group and another junior in the other group, and so on. Depending on the nature of the study, one group could be exposed to a treatment of some sort (experimental
group) and the other group would be considered the control group. Lastly, a researcher may choose to account for naturally occurring pairs, like close friends, twins, or spouses. The researcher may wish to individually split these pairs into two groups, as their scores are likely influenced by one another.

Repeated Measures Design

Perhaps the most common use of the paired samples t-test in communication research is found in experiments that involve a repeated measures design. In this type of study design, the researcher compares pairs of scores, not pairs of participants. A score (or observation) from one group is compared to a related observation in another group. Each participant is in both “groups”; they are assessed on two occasions or under two conditions with one measure. If a participant is assessed on two occasions, the researcher may be doing a “before and after” type of study. For example, say that a researcher wishes to study whether a series of trainings affects part-time employees’ job commitment scores. Before the trainings, each participant in the sample takes a brief job commitment questionnaire (score at time one). Then, each participant is exposed to weekly trainings for two months. At the conclusion of the trainings, each participant takes the same questionnaire (score at time two). In a paired sample t-test, the mean of the time one scores is compared to the mean of the time two scores to determine whether there is a statistically significant difference that could be attributed to the training. Or, for example, consider a situation in which an organizational communication researcher wants to investigate whether employees are, on average, more concerned with their relationship with their supervisor or with their salary level. The paired samples t-test would be an appropriate choice because everyone in the one group is asked about both concern for supervisor relationship and concern for salary level.

Assumptions of the Paired Samples t-Test

Like all other statistical measures, the paired samples t-test carries a set of assumptions. There are three in particular relevant to this test. Like the

other t-tests, the paired samples t-test assumes random sampling from the defined population. Also, the grouping variable ought to be nominal in nature (either naturally or through a median split), while the test variable must be an interval- or ratio-level variable. Lastly, it is assumed that the population difference scores (essentially mean two minus mean one) are normally distributed.
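A before-and-after design like those described in this entry can be sketched end to end in Python: compute the difference scores, form the t statistic, and compare it with the two-tailed critical value for df = n − 1. All scores below are invented, and the critical value of 2.262 (two-tailed, df = 9, α = .05) comes from a standard t table:

```python
import math

def paired_t(before, after):
    """Paired samples t statistic computed from raw before/after scores."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    # Sample standard deviation of the difference scores (n - 1 denominator)
    sd_diff = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    return mean_diff / (sd_diff / math.sqrt(n))

before = [3, 3, 2, 4, 3, 3, 2, 4, 3, 3]  # hypothetical Time One scores
after = [4, 4, 3, 5, 3, 4, 3, 5, 4, 4]   # hypothetical Time Two scores
t = paired_t(before, after)
critical = 2.262  # two-tailed critical value, df = 9, alpha = .05
print(round(t, 2), abs(t) > critical)  # 9.0 True
```

Because the observed t exceeds the critical value, the mean difference here would be judged significantly different from zero at the .05 level.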

Case Example to Illustrate the Paired Samples t-Test

Dotty and Greg are communication researchers and practitioners who specialize in leadership training and development. They have been hired by an organization to conduct a series of leadership training seminars for 50 middle-level managers. The purpose of their seminars is to increase the quality of leadership communication among the managers. In order for Dotty and Greg to demonstrate to the individuals who hired them—and for them to know themselves—whether their trainings were effective, Dotty and Greg decide to do a brief repeated measures study. Before the trainings begin, Dotty and Greg send each of the 50 participants the Perceived Leadership Communication Questionnaire. The questionnaire is designed to measure how leaders perceive their own communication with their followers. Each participant is to complete it and bring it with him or her to the first training. Dotty and Greg proceed to conduct six half-day trainings over the next three months. Two weeks after the last training, they send the Perceived Leadership Communication Questionnaire to the participants one more time. Using a paired samples t-test, Dotty and Greg will compute the mean difference in scores from before the trainings and after the trainings. To determine whether this difference is significantly different from zero (using an alpha level of .05), they examine the p value. If the p value is less than .05, they can conclude that their trainings made a statistically significant difference in how these middle-level managers perceive their communication with their followers.

Nicole Ploeger-Lyons

See also Analysis of Variance (ANOVA); Measurement Levels; Random Assignment; t-Test; t-Test, Independent Samples; t-Test, One Sample

Further Readings

Bostrom, R. N. (1998). Communication research. Prospect Heights, IL: Waveland.
Green, S. B., & Salkind, N. J. (2011). Using SPSS for Windows and Macintosh: Analyzing and understanding data (6th ed.). Boston, MA: Pearson.
Kachigan, S. K. (1986). Statistical analysis: An interdisciplinary introduction to univariate and multivariate methods. New York, NY: Radius Press.
Keyton, J. (2014). Communication research: Asking questions, finding answers (4th ed.). Boston, MA: McGraw-Hill.
Reinard, J. C. (2007). Introduction to communication research (4th ed.). Boston, MA: McGraw-Hill.

Tukey Test

See Post Hoc Tests: Tukey

Turning Point Analysis

Turning point analysis is the study of moments of change in a relationship. Turning point analysis recognizes that relationships have the potential to change in positive or negative ways at any given point. The concept of studying these pivotal moments of change received some sporadic attention in the area of interpersonal communication beginning with Charles Bolton’s framing of relationships in 1961. However, it was the work of Leslie Baxter and Connie Bullis in 1986 that was its own “turning point” that transformed the topic from an often-ignored feature of relationships to a focus of relational analysis. Baxter and Bullis popularized the view that studying turning points not only offered evidence of various trajectories for relationship development, but also could yield important insight into the types of meaning being created within relationships. This entry introduces turning point analysis, paying specific attention to the meaning and process of relationships, retrospective interview techniques, and the application of turning point analysis in different contexts.

The Meaning of Relationships

What events are steeped in meaning for relational partners? Some of this is idiosyncratic. However,

Catherine Surra (along with her colleagues) has identified four major categories of turning points in romantic relationships. First, there are interpersonal/normative turning points. These events lead relational partners to evaluate their relationship against those of other couples or of a societal standard. This may happen once a couple hits their one-year anniversary, for example. Second, there are dyadic turning points. These moments reflect the specific interactions relational partners experience, such as their first fight or their first sexual encounter. Third, there are social network turning points. These are moments of change involving people outside of the dyad, such as a trip to meet a partner’s parents. Finally, there are circumstantial turning points. The locus of these points is outside of the couple, but it reverberates within the relationship. Circumstantial turning points might include a change in employment status or witnessing a disaster like 9/11. Other researchers have sought out specific turning points that apply in romantic relationships as well as in other types of relationships. These turning point categories help us to understand how meaning is created within interpersonal relationships, as well as offer heuristic value. Which life events cause us to re-evaluate our relationships? How do these moments form patterns in the process of relating?

The Process of Relationships

It is notable that turning point analysis reflects a process view of relationships that differs significantly from stage theories of relationships. Whereas some relationship researchers believe that people pass through different thresholds or steps in a linear and incremental process, turning points are more erratic and less predictable. They reflect a relational view that emphasizes intermittent moments of change in relationships. These changes can occur in multiple variables associated with relational quality, such as intimacy, commitment, or satisfaction. This series of shifts up or down can be looked at together to paint a picture of a couple's relational path. Although it is impossible to review all possible trajectories for interpersonal relationships across all relational outcome variables, there are a few general patterns worth noting. First, there are turbulent relationships, in which the amplitude of change is high, meaning that movement at turning points is substantial. There are also relationships in which the amount of change at each turning point is relatively low, or stagnant. There are relationships that are punctuated often by turning points, and others in which partners report turning points occurring infrequently. There are relationships with a high ratio of positive turning points to negative turning points, and there are relationships in which the converse is true. You can imagine how relational life might feel and proceed differently in relationships with each of these types of turning point patterns.

Retrospective Interview Technique

Key to investigating the moments of change in relationships is identifying what turning points occur and when they happen. The most common methodological tool for accomplishing this is the retrospective interview technique (RIT). This practice has been refined through the work of many researchers, including Surra, Baxter, Sally Lloyd, and Rodney Cate. Researchers utilizing the RIT ask their participants to focus upon a particular relationship and a particular relational variable (commonly commitment, satisfaction, or intimacy). They then guide participants in looking backward over the history of that relationship to generate a list of turning points where the relational variable changed either positively or negatively. Once the list has been generated, participants are asked to plot these points on a graph, where one axis represents chronological time since the start of the relationship and the other represents the relational variable. Once researchers have a list of each of the major turning points in the relationship, and a point on the graph indicating when each happened and what level of the relational variable each is associated with, they may then ask their participant about the context of the turning points. They may ask questions such as what sense the participant made of the turning point, or how it helped to create a relational identity for the dyad.

Many researchers then engage in inductive coding to group like turning points together into categories. When two or more researchers group the turning points in the same way with a high degree of consistency, they create a coding scheme. They can then evaluate the frequency with which a turning point code occurs within a certain population (such as dating partners or married couples). Researchers who utilize the RIT often try to build upon one another's coding schemes in order to increase the ability to compare turning points across relationship types. The data points on the RIT graphs can be used to look for patterns of turning points over the course of an individual relationship, as well as to compare the turning point patterns of one group with another. Depending upon the relational outcome variable and relationship type being researched, the pattern may reflect various trajectories, such as the progress of developing and maintaining commitment, the ebb and flow of intimacy within long-term relationships, or even the effect of various life events upon relationship satisfaction.
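As a concrete illustration of how RIT data of this kind might be handled once collected, the sketch below represents one participant's turning points and tallies code frequencies. All names, numbers, and category labels here are hypothetical; the RIT itself prescribes no particular software or data format.

```python
from collections import Counter

# Hypothetical RIT record for one participant: each turning point is
# (months since the relationship began, commitment level 0-100, code).
turning_points = [
    (1, 40, "dyadic"),                    # first date
    (6, 55, "social network"),            # met partner's parents
    (12, 70, "interpersonal/normative"),  # one-year anniversary
    (14, 50, "dyadic"),                   # first major fight
    (20, 65, "circumstantial"),           # partner's new job
]

# Frequency of each coded category for this participant; summed across a
# sample, such counts allow comparisons between relationship types.
frequencies = Counter(code for _, _, code in turning_points)

# Net trajectory: change in commitment from the first to the last
# reported turning point.
net_change = turning_points[-1][1] - turning_points[0][1]

print(frequencies.most_common(1))  # [('dyadic', 2)]
print(net_change)                  # 25
```

The tuples mirror the RIT graph: the first element is the time axis, the second the relational variable, and the third the inductively assigned code.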

Turning Point Analysis Application

In relationship research, the first applications of turning point theory focused squarely upon romantic dyads. In the 1990s and early 2000s, the scope of turning point analysis was expanded to begin investigating other types of relationships as well. Researchers have now extended turning point analysis to such varied relational types as post-dissolution relationships (such as exes and divorced/blended families), grandparent/parent relationships, and various types of friendships (intercultural friendships, proximal friendships, long-distance friendships). Some scholars have even extended turning point analysis outside of the personal relationship realm into dyads with a role relationship, such as teacher/student relationships, and into small groups or organizational teams. Of course, the idea of turning points has also found some traction outside of relationship research. For instance, the concept of analyzing turning points that mark significant change has been used in studying various topics in rhetoric and argumentation as well as in the development of communication technologies. The idea has also been used to track advantageous or disastrous moments in the stock market. Although the term turning point means roughly the same thing in each of these circumstances, the focus on the process and meaning encapsulated within turning points has been especially emphasized by relational scholars.

Christina Yoshimura

See also Family Communication; Informant Interview; Interpersonal Communication; Interviews for Data Gathering; Personal Relationship Studies; Respondent Interviews; Social Relationships

Further Readings

Baxter, L. A., Braithwaite, D. O., & Nicholson, J. H. (1999). Turning points in the development of blended families. Journal of Social and Personal Relationships, 16(3), 291–313.
Baxter, L. A., & Bullis, C. (1986). Turning points in developing romantic relationships. Human Communication Research, 12, 469–493.
Becker, J. A. H., Johnson, A. J., Craig, E. A., Gilchrist, E. S., Haigh, M. M., & Lane, L. T. (2009). Friendships are flexible, not fragile: Turning points in geographically close and long distance friendships. Journal of Social and Personal Relationships, 26(4), 347–369.
Bolton, C. D. (1961). Mate selection as the development of a relationship. Marriage and Family Living, 23(3), 234–240.
Dun, T. (2010). Turning points in parent-grandparent relationships during the start of a new generation. Journal of Family Communication, 10(3), 194–210.
Erbert, L. A., Mearns, G. M., & Dena, S. (2005). Perceptions of turning points and dialectical interpretations in organizational team development. Small Group Research, 36(1), 21–58.
Golish, T. D. (2000). Changes in closeness between adult children and parents: A turning point analysis. Communication Reports, 13(2), 79–97.
Kellas, J. K., Bean, D., Cunningham, C., & Cheng, K. Y. (2006). The ex-files: Trajectories, turning points, and adjustment in the development of post-dissolutional relationships. Journal of Social and Personal Relationships, 25(1), 23–50.
Lee, P. W. (2006). Bridging cultures: Understanding the construction of relational identity in intercultural friendship. Journal of Intercultural Communication Research, 35(1), 3–22.
Lloyd, S. A., & Cate, R. M. (1985). Attributions associated with significant turning points in premarital relationship development and dissolution. Journal of Social and Personal Relationships, 2, 419–436.
Surra, C. A. (1987). Reasons for changes in commitment: Variations by courtship style. Journal of Social and Personal Relationships, 4, 17–33.

Two-Group Pretest–Posttest Design

A two-group pretest–posttest design is an experimental design that compares the change that occurs within two different groups on some dependent variable (the outcome) by measuring that variable at two time periods, before and after introducing or changing an independent variable (the experimental manipulation or intervention). This entry describes the purpose and setup of this specific type of experimental design, compares it to similar designs, explores its relative advantages as well as potential pitfalls, and finally suggests some improvements for increased internal validity.

Design Setup

Experiments are a style of research that attempts to establish causality (the idea that changes in independent variable A are the cause of changes in dependent variable B). In order to establish causality, researchers must create designs with internal validity, or assurance that no other explanation exists for the relationship between variables A and B. In addition, researchers strive to achieve external validity, or the likelihood that their results will generalize beyond the laboratory to the real world. There are many different forms an experiment can take. Depending on the number of groups, exposure to manipulations, and the number and timing of measurements, these types of experiments take on different names and have various advantages and disadvantages. The setup of a two-group pretest–posttest design has two essential components that contribute to its unique research advantages: measurement of the dependent variable and the groups. In this particular design, measurement of the dependent variable occurs before and after the intervention. The pretest refers to a measurement made prior to the intervention, which serves as a baseline to compare against a measurement taken after the intervention, the posttest. In a medical drug trial, researchers may compare a patient's blood pressure before and after administering a drug to see if the drug had any effect on that patient. However, these effects may have been caused by any number of things other than the drug itself; for example, the doctor may have been soothing, or the patient may have changed his or her diet between the two tests. In order to come closer to establishing the drug as the reason for any positive health changes, researchers must demonstrate its effects, specifically, and rule out other possible causes. This is solved by allowing for a comparison between groups. In this design, the "two groups" are typically referred to as the treatment group and the control group. The treatment group is a subset of the sample that receives an experimental manipulation in which the researchers are interested. The control group is ideally identical to the treatment group in every way, but does not receive the experimental manipulation, and serves as a way to measure what might have happened without any intervention. For example, in the previous case of a medical drug trial, individuals in the control group would be given a placebo (a pill with no known effects), while those in the treatment group would receive the actual blood pressure drug. Researchers would then test to see if there were significant changes in the blood pressure of patients in the treatment group as compared to the control group. By comparing two groups that have been otherwise exposed to identical conditions (e.g., the same doctor and diet), researchers can be more certain that changes were due to the drug and begin to rule out other possible causes.
The advantages of a two-group pretest–posttest design lie in the combination of these two components. By measuring twice, the researcher can establish an effect within one group that may result from exposure to a manipulation. By having a control group that does not receive the manipulation but still has any changes recorded, the researcher can compare the groups and determine that any changes that occurred in the treatment group were likely due to the manipulation, as all other influences are held constant.
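The logic of comparing change scores across the two groups can be sketched in a short simulation. The numbers below (baseline blood pressure, shared drift, drug effect) are invented for illustration and do not come from any actual trial.

```python
import random

random.seed(2)  # fixed seed so the illustration is reproducible

# Hypothetical blood pressure study. Each group is measured before
# (pretest) and after (posttest) the intervention period.
def simulate_group(n, drug_effect):
    pre = [random.gauss(150, 10) for _ in range(n)]
    # Everyone drifts slightly (diet changes, a soothing doctor, etc.);
    # only the treatment group also receives the drug effect.
    post = [x - random.gauss(2, 3) - drug_effect for x in pre]
    return pre, post

treat_pre, treat_post = simulate_group(100, drug_effect=8)
ctrl_pre, ctrl_post = simulate_group(100, drug_effect=0)

def mean_change(pre, post):
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Within-group change confounds the drug effect with the shared drift;
# subtracting the control group's change isolates the drug effect
# (roughly -8 mmHg here).
effect_estimate = mean_change(treat_pre, treat_post) - mean_change(ctrl_pre, ctrl_post)
print(round(effect_estimate, 1))
```

The key line is the final subtraction: the control group's pre-to-post change estimates "what would have happened anyway," so the difference between the two groups' changes is attributable to the manipulation.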

Similar Designs

The two-group pretest–posttest design stands in direct comparison to a few other possible designs. The most basic example is a one-group posttest-only design, in which a manipulation is administered and measurements are taken afterward. The two obvious flaws in this design are the lack of a pretest and of a control group. With nothing to compare the measurement to, either before the manipulation or against a second group, any changes in the dependent variable may be due to a number of other causes. An improvement upon the one-group posttest-only design is the two-group posttest-only design. This design incorporates a control group and a treatment group, but does not include a pretest. This is sometimes the result of natural occurrences of some variable that cannot be controlled (e.g., the general mental health of residents of a town that experienced a wildfire versus residents of a similar town that did not). The researcher can compare the two groups to see if there is a difference between them, and surmise that it may be due to the independent variable. However, causality still cannot be fully determined because the researcher does not know whether the difference existed prior to the manipulation. The two-group pretest–posttest design improves upon this design by allowing researchers not only to measure changes within each group, but also to compare across the two groups at both time periods.

Some Problems With the Design and Possible Solutions

Although the two-group pretest–posttest design has a variety of advantages, it does contain two important threats to internal and external validity: a lack of random assignment and possible pretest effects. Random assignment is the process by which individuals are assigned to either treatment or control groups. When participants are truly randomly assigned, they have an equal chance of being in either group. When samples are large enough, random assignment theoretically creates groups that are equivalent; any possible variations are evenly distributed between the two groups and not concentrated in one. Alternatively, nonrandom assignment results in biases. For example, imagine that two teachers are asked to participate in a study. A female teacher is trained to coach her students on a new reading comprehension program, while a male teacher is not. The two classes of students are then tested on their reading comprehension at the end of the program. If the female teacher's students do better on the test, was it due to the program? Or was it the fact that she was female, or a superior teacher, or had gifted students? Because the students were not randomly assigned to the reading program condition, several possible differences between the two groups may account for the differences in scores beyond the effects of the program. In a two-group random assignment pretest–posttest design, researchers can be more confident that differences between the two groups are due to the manipulation rather than to some inherent bias in the assignment of the groups. Finally, two-group pretest–posttest designs also introduce the possibility of a pretest effect, which occurs when exposure to pretesting somehow impacts outcomes on a dependent variable outside the effects of the manipulation. For example, if a pretest asks participants to list their gender, it may prime them to act in accordance with particular gender stereotypes; consequently, their scores on the posttest may be a result of these gender expectations rather than the manipulation itself. In this way, external validity is threatened because participants may act in a way that reflects their awareness of being researched, rather than how they would act in a more realistic setting. Pretest effects can be mitigated by using a more advanced experimental design, the Solomon four-group design. In this design, a two-group pretest–posttest design is conducted alongside a two-group posttest-only design.
This results in four groups, which allows researchers to make three important comparisons: (1) within the two pretest–posttest groups, to see if there was any effect on the dependent variable; (2) between groups that received the manipulation and those that did not, to test for differences in the dependent variable; and (3) between pretest–posttest and posttest-only groups, which either both received the manipulation or both did not, to see if the pretest had any significant effect. By accounting for all of these differences, researchers can increase internal validity by ruling out possible alternative explanations.

Stephanie Tikkanen

See also Causality; Control Groups; Experimental Manipulation; Experiments and Experimental Design; Random Assignment; Random Assignment of Participants; Treatment Groups; Two-Group Random Assignment Pretest–Posttest Design

Further Readings

Lana, R. E. (2009). Pretest sensitization. In R. Rosenthal & R. L. Rosnow (Eds.), Artifacts in behavioral research: Robert Rosenthal and Ralph L. Rosnow's classic books (pp. 93–109). New York, NY: Oxford University Press.
Ong-Dean, C., Hofstetter, C. H., & Strick, B. R. (2011). Challenges and dilemmas in implementing random assignment in educational research. American Journal of Evaluation, 32, 29–49. doi:10.1177/1098214010376532
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. New York, NY: Houghton Mifflin.
Singleton, R. A., & Straits, B. C. (2010). Approaches to social research (5th ed.). New York, NY: Oxford University Press.

Two-Group Random Assignment Pretest–Posttest Design

A two-group random assignment pretest–posttest design is an experimental design that compares measures of a dependent variable (outcome) before and after the introduction of an independent variable (some experimental manipulation or intervention) between two groups to which participants have been randomly assigned. This design is often used in studies in which researchers are interested in establishing a causal connection between the manipulation and the outcome. This entry first introduces the concept of experimental research and underscores the importance of internal validity, then explains this particular design and how it contributes to the internal validity of experiments, explores the types of conclusions researchers using this method are able to draw, and finally offers some problems and solutions related to the method.

Experimental Research and Internal Validity

Experimental research is a type of research that strives to explain causal relationships between variables. Although many forms of experimental design exist, most share common elements, including measurement of both independent and dependent variables and the introduction of a treatment or manipulation. Often, these factors are combined with placement into different groups, which allows researchers to make comparisons across the groups and determine the different effects of the treatment on each. Researchers who seek to explain how independent variable A affects dependent variable B must carefully choose a design that enables them to rule out any other possible explanations for the changes observed in B. The assurance that no other possible causes exist is known as internal validity. This is particularly important in designs with more than one condition. For example, if a researcher wanted to study the effects of room color on mood, the rooms to which participants are exposed should be identically furnished, heated, lighted, and so on; otherwise, any changes in mood may be due to comfort, temperature, or lighting. By keeping all other elements identical, or constant, the researcher can be more certain that effects on the dependent variable (mood) are due only to changes in the independent variable (room color), thus increasing internal validity. Another important way to increase internal validity in designs with multiple groups is random assignment, which is the process of assigning participants to treatment conditions without any possible bias. This is achieved through any process of random selection, such as flipping a coin and having those whose coin lands on heads be in one group and those whose coin lands on tails in the other. If participants are truly randomly assigned, each individual has an equal chance of being in any condition. This ensures that any bias in the population (e.g., similar tendencies due to shared characteristics) does not impact the results of a study. For example, imagine a researcher studying the effects of music on test-taking anxiety. He or she creates two groups, one which is exposed to music while taking a test and another which is not, and compares the two to determine which group experiences more anxiety. If he or she assigns all female participants to one group and all male participants to the other, any difference in anxiety between the groups may be due to gender rather than exposure to music. By randomly assigning participants to groups, the researcher should have an equal number of each gender in each group and can be more certain that any difference between them is a direct result of the manipulation alone. Random assignment works with other personality and physical differences as well; ideally, any differences among participants (e.g., intelligent vs. unintelligent, right-handed vs. left-handed, or introverted vs. extroverted) will be randomly distributed across conditions and create only random variation within groups.
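The coin-flip logic above can be sketched programmatically. This is a minimal illustration assuming a simple shuffle-and-split procedure; the participant labels are placeholders, not part of any actual study.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# A hypothetical pool of 20 participants.
participants = [f"P{i:02d}" for i in range(20)]

# Random assignment: shuffle the pool and split it in half, giving every
# participant an equal chance of landing in either condition.
pool = list(participants)
random.shuffle(pool)
treatment_group, control_group = pool[:10], pool[10:]

print(len(treatment_group), len(control_group))  # 10 10
```

Because the shuffle is uniform, any participant characteristic (gender, handedness, introversion) is expected to be balanced across the two halves in a sufficiently large sample.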

Two-Group Pretest–Posttest Design and the Role of Random Assignment

The two-group pretest–posttest design has two essential components that allow researchers to determine the effects of an independent variable upon a dependent variable: the measurement of the dependent variable and the groups. The dependent variable is measured twice, both before the introduction of the experimental manipulation (the pretest) and after the manipulation (the posttest). By measuring at both times, a researcher is able to identify any changes that occur in the dependent variable. However, measuring before and after introducing some intervention does not ensure internal validity; the change that occurred may be due to the treatment, but it may also be due to some other influence the researcher cannot account for. Consequently, the strength of the two-group pretest–posttest design lies in the inclusion of a second group. In this design, the groups are referred to as the treatment group and the control group. The treatment group receives whatever experimental manipulation the researcher is testing, whereas the control group does not receive it. Importantly, all other elements are held constant between these groups. Then, the two groups can be compared to identify any differences that may represent effects of the manipulation upon the treatment group. When researchers measure the dependent variable within two groups before and after the manipulation, they can compare the two groups. Since all other conditions should be equal, any change seen in one group and not the other is likely due to the manipulation. If there is some other cause, both groups should have been exposed to it, and the researcher would see no difference between them. Importantly, there is a distinction between conducting this study with or without random assignment. The two-group pretest–posttest design is most effective in increasing internal validity and determining causality when paired with random assignment. If participants are randomly assigned, the two groups should be identical in every fashion except exposure to the treatment. Consequently, a researcher can claim that any significant difference between these groups after exposure to the manipulation is due to the treatment, ruling out any possible biases from the assignment of the sample. Often, researchers are unable to enforce random assignment, such as with established groups (e.g., classes in a school) or when they cannot ethically assign someone to a group (e.g., pregnant versus nonpregnant women). In these cases, the lack of random assignment is appropriate, but it must be noted as a limitation of the study.

A Precaution for Use of Two-Group Pretest–Posttest Designs

One other potential limitation that researchers using a two-group random assignment pretest–posttest design should be aware of is the pretest effect. The pretest effect occurs when the participants in a study are given a pretest before the manipulation that in some way impacts the way they respond to the manipulation. This may be either a conscious or a subconscious reaction. For example, if individuals are asked questions about their attitudes toward rape before being exposed to a pornographic video, they may unintentionally view the content of the video more critically than if they had not been primed to think about sexual violence. Alternatively, participants may use the pretest to guess what the purpose of the study is and answer in ways that they think the researcher wants them to respond, or they may maliciously answer the opposite way. In either situation, the pretest has had some impact upon measurements of the dependent variable outside of the effects of the independent variable alone, which decreases validity.

As such, while the two-group pretest–posttest design has many advantages, it does not allow researchers to rule out the effects of a pretest. A more advanced design, the Solomon four-group design, pairs a two-group pretest–posttest design with a second set of groups that do not receive the pretest but still include one treatment and one control group. In such a design, researchers are able to determine not only the effects of the manipulation but also any effects of the pretest, with three important comparisons: (1) within the two pretest–posttest groups, to see if there was any effect on the dependent variable; (2) between groups that received the manipulation and those that did not, to test for differences in the dependent variable; and (3) between pretest–posttest and posttest-only groups, which either both received the manipulation or both did not, to see if the pretest had any significant effect.

Stephanie Tikkanen

See also Causality; Control Groups; Experimental Manipulation; Experiments and Experimental Design; Random Assignment; Random Assignment of Participants; Treatment Groups; Two-Group Pretest–Posttest Design

Further Readings

Lana, R. E. (2009). Pretest sensitization. In R. Rosenthal & R. L. Rosnow (Eds.), Artifacts in behavioral research: Robert Rosenthal and Ralph L. Rosnow's classic books (pp. 93–109). New York, NY: Oxford University Press.
Ong-Dean, C., Hofstetter, C. H., & Strick, B. R. (2011). Challenges and dilemmas in implementing random assignment in educational research. American Journal of Evaluation, 32, 29–49. doi:10.1177/1098214010376532
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. New York, NY: Houghton Mifflin.
Shadish, W. R., & Ragsdale, K. (1996). Random versus nonrandom assignment in controlled experiments: Do you get the same answer? Journal of Consulting and Clinical Psychology, 64, 1290–1305. doi:10.1037/0022-006X.64.6.1290
Singleton, R. A., & Straits, B. C. (2010). Approaches to social research (5th ed.). New York, NY: Oxford University Press.


Type I Error

A Type I error is an incorrect rejection of a true null hypothesis. In reality, the two variables in a research hypothesis (alternative hypothesis) are independent of each other, so there is no association between them; however, researchers mistakenly conclude that the variables are related to one another. Simply put, a Type I error can be understood as a false positive. This entry provides an explanation of Type I error, offers an example, and discusses how to reduce Type I error rates.

When Type I Error Occurs

Statistical testing is based on probability, using data from a sample rather than from a population. Thus, even when the selected sample represents the population well, errors might occur. That is, the decision researchers make from statistical testing could potentially be in error. In particular, a Type I error occurs when researchers decide to reject the null hypothesis when it is true and should not be rejected. The following example offers further insight into Type I error. A research hypothesis predicts that there is a sex difference in self-disclosure between men and women. The truth in reality is that men and women are not different in terms of self-disclosure, but a researcher incorrectly concludes that there is a difference. That is, the null hypothesis is true and should not be rejected. However, by random chance, results from statistical testing indicate that the null hypothesis is not true. So, the researcher rejects the null hypothesis and argues that men and women differ in self-disclosure when there is no difference in reality. This is a Type I error. The threshold for rejecting a null hypothesis is called the significance level. When conducting statistical testing, researchers choose the significance level, so the level of Type I error can be controlled by researchers. Conventionally, the significance level is set at .05, which indicates that there is a 5% chance that a null hypothesis is erroneously rejected when it should not be. Because the significance level is sometimes called alpha (α), the probability of committing a Type I error can be called the alpha (α) level. In statistics, multiple comparisons potentially increase the chance of committing a Type I error. Suppose a researcher hypothesizes that teacher self-disclosure of personal information in an online class influences class satisfaction, and the researcher wants to compare three conditions: (a) high self-disclosure, (b) low self-disclosure, and (c) no self-disclosure. To identify differences in class satisfaction, three comparisons must be performed: high versus low, high versus none, and low versus none. This means the chance of making a Type I error increases up to threefold, as each comparison carries a 5% chance of committing a Type I error. With this example of three comparisons, the estimated chance of making a Type I error would be at most 3 × .05 (the significance level, or alpha) = .15, so up to a 15% chance of making a Type I error would be expected. Thus, multiple comparisons can result in a high chance of committing a Type I error.
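The inflation just described can be computed directly. The 3 × .05 shortcut is an upper bound; a small sketch, assuming the comparisons are statistically independent, gives the exact family-wise rate:

```python
# Family-wise Type I error for m independent comparisons at level alpha.
# The m * alpha figure is an upper bound; under independence, the exact
# probability of at least one false rejection is 1 - (1 - alpha)**m.
def familywise_error(alpha: float, m: int) -> float:
    return 1 - (1 - alpha) ** m

alpha = 0.05
for m in (1, 2, 3):
    print(m, round(m * alpha, 4), round(familywise_error(alpha, m), 4))
# For m = 3, the upper bound is .15 and the exact rate is about .1426.
```

Either way, the chance of at least one false positive grows quickly with the number of comparisons, which motivates the corrections discussed next.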

How to Reduce a Type I Error Rate When making a decision on whether to reject or not to reject a null hypothesis, making a correct decision and reducing a chance of committing errors is important. For example, in the study of medicine, researchers would not claim that a newly developed treatment for cancer is working unless they have high confidence about the effectiveness of the treatment. Because it could cause serious problems if the treatment does not properly work on a patient, the researchers would want to find a way to test the certainty of the treatment’s effectiveness not to make errors. Researchers do not know whether or not they are committing an error while conducting a study. As such, the best way to prevent the potential error is to be cautious of the possibility. Because the significance level, which is directly related to Type I error, can be controlled by researchers, the solution to reduce the chance of committing Type I error is to lower the significance level. As mentioned, a conventionally accepted significance level is .05. When researchers want to lower the Type I error rate, they would set a lower significance level such as at .01. Since this level gives a 1% chance of Type I error, the probability of

committing Type I error becomes smaller compared to when using a .05 level.

For multiple comparisons, one of the commonly used methods for reducing the Type I error rate is to divide the significance level by the number of comparisons involved (the Bonferroni correction). For example, if an analysis involves two comparisons, the significance level would be set at .05/2 (the number of comparisons) = .025. By reducing the significance level, the chance of making Type I error is expected to decrease.

Type I error can cause serious problems, so it is sometimes necessary to lower the conventionally accepted significance level to reduce the Type I error rate. However, when researchers try to lower Type I error, doing so eventually increases another type of error (Type II error), since Type I and Type II errors are interrelated. Thus, researchers are advised to consider a variety of possibilities and decide which error is more important to prevent in a given study.

Statistical testing is conducted with data from a sample, not from a population. So, there is always a possibility that the findings from the sample might not hold in the population even though researchers claim that the findings are true. That is, errors might occur when researchers make decisions about whether to reject a null hypothesis.

Jihyun Kim

See also Hypothesis Formulation; Hypothesis Testing, Logic of; Null Hypothesis; Type II Error; p value

Further Readings

Lachlan, K., & Spence, P. R. (2005). Corrections for type I error in social science research: A disconnect between theory and practice. Journal of Modern Applied Statistical Methods, 5, 490–494.
Levine, T. R. (2011). Statistical conclusions validity basics: Probability and how type 1 and type 2 errors obscure the interpretation of findings in communication research literatures. Communication Research Reports, 28, 115–119. doi:10.1080/08824096.2011.541369
Smith, R. A., Levine, T. R., Lachlan, K. A., & Fediuk, T. A. (2002). The high cost of complexity in experimental design and data analysis: Type I and Type II error rates in multiway ANOVA. Human Communication Research, 28, 515–530. doi:10.1111/j.1468-2958.2002.tb00821.x
Steinfatt, T. M. (1979). The alpha percentage and experimentwise error rates in communication research. Human Communication Research, 5, 366–374. doi:10.1111/j.1468-2958.1979.tb00650.x

Type II Error

Type II error is the failure to reject a false null hypothesis (i.e., a null hypothesis that is not true and should be rejected). In some cases, researchers erroneously decide that such a hypothesis should not be rejected. Simply put, this error is a false negative. This entry provides a description of Type II error and of the relationship between Type I and Type II errors.

When Does Type II Error Occur?

When researchers make decisions about statistical testing results based on a p value, there is a chance they might make errors such as Type II error by accepting a false null hypothesis. That is, the truth in the population is that the research hypothesis (alternative hypothesis), which predicts a relationship between an independent and a dependent variable, is true. However, data from a sample yield a p value greater than the conventionally accepted level, so the researchers conclude that the two variables in the research hypothesis are not related. By erroneously accepting an inaccurate null hypothesis and simultaneously failing to support a true research hypothesis, researchers make a Type II error. The probability of committing Type II error is called beta (β).

As an example, consider a situation in which a researcher hypothesizes that the amount of time spent on Facebook is related to friendship satisfaction. Consequently, the null hypothesis is that there is no relationship between the two variables. As hypothesized, the truth in the population is that there is a relationship between time spent on Facebook and friendship satisfaction, so the null hypothesis should be rejected. However, by chance, the level of significance from the selected data might not meet the conventionally accepted level of a p value. Based on this finding, the researcher

would conclude that time spent on Facebook has nothing to do with friendship satisfaction when the two are in fact related. In this case, the researcher would end up committing Type II error by failing to reject a false null hypothesis.

Theoretically, the probability of committing Type II error can be reduced through statistical power. Statistical power is the likelihood of rejecting a null hypothesis when it is false and should be rejected. Thus, if statistical power is strong, the probability of avoiding Type II error becomes high. Power can be assessed as 1 − beta (β), and it can be improved by increasing the sample size: A larger sample size leads to stronger power, and ultimately the likelihood of committing the error can be reduced.
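The relationship among power, sample size, and the significance level can be sketched with a normal-approximation power calculation for a two-group comparison. This is a simplified z-approximation, not a formula given in this entry; `d` stands for a standardized effect size (e.g., Cohen's d), and the function name is illustrative:

```python
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power (1 - beta) of a two-sided, two-sample comparison,
    using the normal approximation to the t-test."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = d * (n_per_group / 2) ** 0.5
    return NormalDist().cdf(noncentrality - z_crit)

# Power grows with sample size (d = 0.5, a medium effect):
for n in (20, 64, 128):
    print(n, round(approx_power(0.5, n), 2))  # roughly 0.35, 0.81, 0.98
```

The output illustrates the entry's point: holding the effect size and alpha constant, a larger sample size yields stronger power and thus a smaller β.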

Relationship Between Type I and Type II Errors

Type II error is interrelated with Type I error (the other type of error in statistical testing). The more researchers try to reduce the likelihood of committing Type I error, the greater the probability of making Type II error becomes, and vice versa. More specifically, when trying to reduce Type I error, researchers would decide to lower the significance level. It then naturally becomes more difficult to reject a null hypothesis, which leads to a lower chance of committing Type I error. However, a smaller significance level also makes it harder to reject a null hypothesis even when the null hypothesis is false and should be rejected: a Type II error.

Given that the significance level is related to the likelihood of committing both Type I and Type II errors, researchers need to decide what level of significance should be set for statistical testing. There is no definite answer, and the decision depends on researchers' careful discretion about which error is more important to avoid in a given study. For example, if it is more important to avoid Type I error, a smaller significance level should be set.

Although a variety of factors should be considered when deciding which type of error should be more strongly prevented, one of the most important considerations is the topic of the study. For example, in a medical situation, Type II error (false negative) might be more dangerous than Type I error (false positive). If a

doctor diagnosed that a patient had cancer when the patient did not (Type I error), the test results would cause anxiety and fear for the patient at the moment. However, this would allow the doctor and patient to do follow-up tests to double-check, which would eventually reveal that the initial result was not true. In this case, the error would not threaten the patient's life because a proper procedure would follow the false initial result. However, if a doctor diagnosed that a patient does not have cancer when the patient does (Type II error), the outcome of this diagnosis would eventually cause serious problems because the patient would not get any treatment for cancer. In this particular case, Type II error would be more serious than Type I error.

A criminal trial might be another good example. Let's say that the null hypothesis is that Person A is not guilty of the crime. Here, a Type II error would mean that Person A is actually guilty but is found to be innocent, so the guilty Person A goes free. A Type I error occurs when Person A is found guilty despite not having committed the crime. From the standpoint of justice, Type II error might be more serious than Type I error; from Person A's perspective, however, Type I error is the much more serious problem. As these examples illustrate, whether Type I error or Type II error is more serious depends on a variety of factors. Thus, researchers should be mindful of those factors when considering how to control errors.

Type II error occurs when a null hypothesis is false and should be rejected but is claimed to be true. As discussed, Type II error is interrelated with Type I error. Thus, careful discretion is needed to assess which type of error should be more strongly prevented in a given study.

Jihyun Kim

See also Hypothesis Formulation; Hypothesis Testing, Logic of; Null Hypothesis; Statistical Power Analysis; Type I Error

Further Readings

Freiman, J. A., Chalmers, T. C., Smith, H., Jr., & Kuebler, R. R. (1978). The importance of beta, the type II error and sample size in the design and interpretation of the randomized control trial. Survey of 71 "negative" trials. The New England Journal of Medicine, 299(13), 690–694. doi:10.1056/NEJM197809282991304
Levine, T. R. (2011). Statistical conclusions validity basics: Probability and how type 1 and type 2 errors obscure the interpretation of findings in communication research literatures. Communication Research Reports, 28, 115–119. doi:10.1080/08824096.2011.541369
Smith, R. A., Levine, T. R., Lachlan, K. A., & Fediuk, T. A. (2002). The high cost of complexity in experimental design and data analysis: Type I and Type II error rates in multiway ANOVA. Human Communication Research, 28, 515–530. doi:10.1111/j.1468-2958.2002.tb00821.x

U

Underrepresented Groups

Underrepresented groups are nondominant groups such as people of color; people with disabilities; people from a lower socioeconomic status; people who are gay, lesbian, bisexual, and transgendered; people of a nondominant religion; and retirees. These groups signify a distinct area of research, including in communication studies, both as subjects of investigation and as creators of innovative research perspectives and methodological approaches. Such methodologies and theoretical approaches were prompted by challenges to the traditional research paradigms, texts, and theories that are used to explain the experiences of people of color or that regard all human experiences as universal. Among the methodologies and theoretical approaches for studying underrepresented groups discussed in this entry are critical race methodology, cultural studies, Afrocentricity, postcolonialism, ethnography, feminism, and phenomenological approaches.

Critical Race Methodology

Critical race methodology is a theoretically grounded approach to research that foregrounds race and racism in all aspects of the research process. It also challenges the separate discourses on race, gender, and class by showing how these three elements intersect to affect the experiences of people of color. Critical race theory is the work of progressive legal scholars of color who are

attempting to develop a jurisprudence that accounts for the role of racism in U.S. law and that works toward the elimination of racism as part of a larger goal of eliminating all forms of subordination. Critical race methodology attempts to operationalize critical race theory, which is a critical examination of society and culture focusing on the intersection of race, law, and power. Critical race theory recognizes that racism is ingrained in American society.

This approach to communication research resonates strongly among communication scholars because of its focus on the discourse and images that create, maintain, and transform cultural relations of meaning and power as racial phenomena. Analyses consider the subtle and often contradictory processes by which symbolic codes are organized and activated in the production and reception of cultural performances and texts.

Cultural Studies

Cultural studies draws on methods and theories from communication, sociology, cultural anthropology, and other disciplines to investigate the ways in which culture creates and transforms individual experiences, everyday life, social relations, and power. The relations in culture are understood through human expression and symbolic activities, and cultures are understood as distinctive ways of life. Within communication, cultural studies was initially adopted as a critique of quantitative studies of media institutions and their effects. It has now extended to offer

communication scholars humanistic resources for analyzing media and for incorporating the reflective voices of audience members. Cultural studies considers texts as complex artifacts that operate to shape ideologies.

Afrocentric Methodology

An Afrocentric methodology is a form of cultural criticism concerned with establishing a worldview about the writing and speaking of oppressed people, one that places people of African descent at the center of analyses about them rather than at the margins, where they are viewed as objects of curiosity. Afrocentric ideas regarding texts and creative productions, then, reflect African or African American cultural responses to life.

Postcolonialism

Although postcolonialism entered the communication discipline through rhetorical theory, it has long been evident in intercultural communication research and has most recently converged around critical-cultural communication as well as organizational communication research. The implications of postcolonialism for communication researchers include a heightened reflexivity concerning the potential of any methodology to restore and reproduce oppression. Postcolonial researchers subsequently commit to conceptualizing their sites through the lens of historical and cultural struggle.

Ethnographic Research

As both a process and an outcome of research, ethnography is a way of studying a culture-sharing group as well as the final, written product of that research. Ethnographers study the meaning of the behavior, the language, and the interaction among members of the culture-sharing group. There are many forms of ethnography; however, three that have become popular among communication scholars—especially scholars of color and feminists—are autoethnography, feminist ethnography, and critical ethnography.

Autoethnography is an approach to research and writing that seeks to describe and systematically analyze personal experience in order to understand

cultural experiences. Feminist ethnography is part of the larger challenge that feminism poses to positivist social research. Social scientists use feminist ethnography to uncover how gender operates within different societies. Critical ethnographers advocate for the emancipation of groups marginalized in society.

Feminist Research

Feminist researchers establish their legitimacy through a variety of methodological strategies, including constructing questions that directly address gender concerns, sampling subjects so as to include women, anticipating the influence of actors' gender identities on their participation in events, interpreting data to preserve the affective dimensions of personal narratives, and promoting research to influence policymaking that affects women's lives. Feminist research begins by questioning and critiquing androcentric bias within the disciplines, challenging traditional researchers to include gender as a category of analysis.

Feminist research takes many different modes of inquiry. For example, feminists of color have revealed to White middle-class feminists the extent of their own racism through the development and articulation of Black feminist thought and womanism. These feminist approaches focus on the everyday lives of Black women and other women of color. At the core of Black feminist/womanist thought lie theories created by African American women who clarify a Black woman's standpoint, an interpretation of Black women's experiences and ideas by those who participate in them. Through these feminist approaches, the important interconnections among categories of difference in terms of gender, ethnicity, and class are acknowledged.

Phenomenological Approaches

A phenomenological approach to communication research about underrepresented groups represents a methodology that focuses on lived experiences. This methodological approach reflects a significant change from traditional empirical research, in which objective examination of nonrelational others is valued. Phenomenological

methodology involves a three-step process of discovery that includes descriptions of lived experiences, knowledge categorized into essential themes, and hermeneutic interpretation of themes. This reflective process encourages researchers to acknowledge persons as multidimensional and complex beings with particular social, cultural, and historical life circumstances. Each person is viewed as the expert on his or her own life. Such an approach is crucial for gaining insight into populations that have been muted within dominant societal structures.

Janice Hamlet

See also Activism and Social Justice; African American Communication and Culture; Asian/Pacific American Communication Studies; Communication and Culture; Communication Theory; Critical Race Theory; Cultural Sensitivity in Research; Cultural Studies and Communication; Diaspora; GLBT Communication Studies; Latina/o Communication; Native American or Indigenous Peoples Communication; Queer Theory

Further Readings

Asante, M. K. (1987). The Afrocentric idea. Philadelphia, PA: Temple University Press.
Grossberg, L., Nelson, C., & Treichler, P. A. (1992). Cultural studies. New York, NY: Routledge.
Hesse-Biber, S. N. (Ed.). (2012). Handbook of feminist research. Thousand Oaks, CA: Sage.
Lindlof, T. R., & Taylor, B. C. (Eds.). (2011). Qualitative communication research methods (3rd ed.). Thousand Oaks, CA: Sage.
Ziegler, D. (Ed.). (1995). Molefi Kete Asante and Afrocentricity: In praise and in criticism. Nashville, TN: James C. Winston.

Univariate Statistics

Univariate statistics refer to all statistical analyses that include a single dependent variable and can include one or more independent variables. Univariate statistics represent some of the most commonly used statistical analyses in communication research. Univariate statistics utilize inferential statistics, which, under ideal conditions, allow researchers to infer a causal relationship

between the independent and dependent variable and generalize the results of their analyses conducted on a smaller sample to a larger population. This entry begins with a discussion of how univariate statistics are used in communication research and concludes with a brief overview of a few frequently used types of parametric and nonparametric univariate methods of analysis.

Using Univariate Statistics in Communication Research

Communication researchers use univariate statistics as a tool to answer questions or test hypotheses about the world around them. For example, a researcher may read about the need for more organ donors and be interested in finding ways to increase the number of people who sign the organ donation registry. The researcher would look over previous research on the topic and on how health messages inspire people to action. Suppose the researcher finds that public service announcements (PSAs) have been shown to increase the number of people who take action on a specific health topic. The researcher may then develop a hypothesis that people who watch a PSA advocating organ donation are more likely to sign the registry than those who do not watch the PSA.

The researcher can then design an experiment to test the hypothesis. In this experiment, the independent variable would be the PSA (with two levels: shown or not shown). The independent variable represents the variable that is manipulated in the experiment to determine whether it has any effect on the dependent variable. To conduct the experiment, the researcher could randomly select 100 people to participate and randomly assign them to two groups. The researcher would show the PSA to half of the sample (Group 1) and not show the PSA to the other half (Group 2). At the end of the study, the researcher would ask participants in both groups to sign the donor registry. The dependent variable in this example would be signing the registry, and it is used to determine whether the independent variable (the PSA) has any effect. The information collected during the experiment on who did or did not sign the registry and which group they were in (watch the PSA/not

watch the PSA) would represent the researcher's experimental data.

But how does the researcher use the data to find support for the hypothesis? This is where statistics become useful. After the researcher reviews the data and confirms that there is a single dependent variable, he or she would use univariate statistics to test the hypothesis. For this example, the researcher would run an independent samples t-test. If the researcher finds significant differences (meaning differences greater than those expected by chance) in who signed the registry between the two groups, those who watched the PSA and those who did not, the researcher has support for the hypothesis and can conclude that using a PSA to attract more donors is a good idea for future health campaigns.

As shown in the previous example, communication researchers use different types of experimental designs to collect data and then analyze the data using specific statistical analyses. These analyses are the tools of the researchers, and their results help researchers answer research questions and potentially lend support to hypotheses. The next question is, how does a researcher know what type of statistical analysis to use? To know what type of analysis is appropriate, it is important to first identify all independent and dependent variables in the study and the types of hypotheses and questions the researcher wants to answer. For a research design with questions or hypotheses comprising one dependent variable, univariate statistics are the correct choice. When a researcher has more than one dependent variable, a multivariate approach may be a better choice. Because there are many types of univariate statistics available, more in-depth planning is needed to choose the univariate statistical analysis that suits specific research questions and hypotheses.
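The analysis step of the PSA example can be sketched in code. The data below are hypothetical (the registry outcome is coded 1 = signed, 0 = did not sign), and the pooled-variance statistic is one common form of the independent samples t-test; in a real study, the p value would come from a t distribution with n1 + n2 − 2 degrees of freedom:

```python
from statistics import mean, variance

def independent_t(group1, group2):
    """Pooled-variance independent samples t statistic."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = mean(group1), mean(group2)
    # Pooled variance weights each group's sample variance by its degrees of freedom.
    sp2 = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    se = (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    return (m1 - m2) / se

# Hypothetical outcomes: 1 = signed the registry, 0 = did not.
watched_psa = [1] * 32 + [0] * 18   # 32 of 50 in Group 1 signed
no_psa      = [1] * 20 + [0] * 30   # 20 of 50 in Group 2 signed
print(round(independent_t(watched_psa, no_psa), 2))  # compare against the critical t for 98 df
```

If the resulting t exceeds the critical value at the chosen significance level, the difference between the groups is greater than would be expected by chance.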

Types of Univariate Statistics

When determining which type of univariate statistic to use, a researcher must first determine the level of measurement of the dependent variable (ratio, interval, ordinal, or nominal). Ratio or interval variables are analyzed with parametric

statistics, and ordinal or nominal variables are analyzed with nonparametric statistics. Briefly described in the following subsections are some of the more widely used parametric and nonparametric univariate analyses.

Parametric Statistics

Once researchers determine their dependent variable to be ratio or interval, they will need to consider the level of measurement of their independent variable and the shape of the data's distribution. Different types of parametric statistics are used depending on the level of measurement (nominal, ordinal, interval, or ratio) of the independent variables.

t-Tests

A t-test is used to determine whether the difference between the means of two groups or conditions is due to the independent variable or simply due to chance. A t-test requires one independent variable with only two levels and one dependent variable. When a researcher has a single independent variable with two levels (also known as a dichotomous variable), an independent samples t-test is the appropriate choice. When a researcher has a single dichotomous independent variable and the means of the two groups are related (also called correlated) with one another, a paired samples t-test should be used. A paired samples t-test is often used in repeated measures designs in which the same scale is given to a participant twice, such as before and after an intervention.

Analysis of Variance

An analysis of variance (ANOVA) is similar to a t-test in that it examines mean differences due to an independent variable on one dependent variable; unlike a t-test, however, the independent variable can have three or more levels, and more than one independent variable can be included. A one-way ANOVA compares levels or groups (three or more) of a single factor on one dependent variable. An example would be comparing the means of dating, married, and engaged couples on their relational satisfaction. A two-way ANOVA compares levels

or groups of two factors on one dependent variable. Using the same example from the one-way ANOVA, a researcher could also include sex (male or female) as a second independent variable alongside relationship status (dating, married, or engaged) on relational satisfaction. A two-way ANOVA design also allows a researcher to examine interactions between the two factors, as well as potential mean differences due to each independent variable separately.

Simple Regression

When a researcher has an independent variable that is ratio or interval, a form of regression is the appropriate method, and when there is one independent variable, simple regression is the correct analysis. Simple linear regression attempts to explain the strength of a relationship between two variables by fitting a linear equation to the observed data. Regression allows the researcher to make predictions based on the data collected. Regression can be used to analyze whether changes in the independent variable (also called X or the explanatory variable) match changes in the dependent variable (also called Y or the outcome variable) in a linear fashion.

Nonparametric Statistics

If researchers have a dependent variable that is ordinal or nominal, they will use nonparametric analyses. Because of the decrease in the amount of information collected, these tests are often less powerful than parametric tests. Again, researchers need to consider the level of measurement of the independent variable to determine what type of nonparametric test to use.

Chi-Square

A chi-square test of independence should be used when a researcher has a dependent variable that is nominal or ordinal and an independent variable that is nominal or ordinal. This test uses a simple contingency table to cross-tabulate the levels of the independent variable with the levels of the dependent variable and compares the observed frequencies (what the researcher collects in the study) with the frequencies expected if the two variables were independent. Chi-square is often called

a goodness-of-fit test because it measures how well the observed data fit the distribution expected if the variables are not associated with one another.

Amber Marie Reinhart

See also Analysis of Variance (ANOVA); Chi-Square; Hypothesis Testing, Logic of; Measurement Levels; Multiple Regression; Multivariate Statistics; Significance Test; t-Test
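The chi-square test of independence described in this entry can be computed directly from a contingency table. The sketch below is minimal (no continuity correction, and the p value is left to a chi-square table); the counts are hypothetical:

```python
def chi_square(table):
    """Chi-square statistic for an r x c contingency table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected frequency under independence of the two variables.
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Rows: watched PSA / did not; columns: signed registry / did not sign.
print(round(chi_square([[32, 18], [20, 30]]), 2))  # 5.77
```

The farther the observed counts fall from the expected counts, the larger the statistic, and the stronger the evidence that the two variables are associated.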

Further Readings

Basil, M. D., Brown, W. J., & Bocarnea, M. C. (2002). Differences in univariate values versus multivariate relationships. Human Communication Research, 28(4), 501–514.
Brown, B. L., Hendrix, S. B., Hedges, D. W., & Smith, T. B. (2011). Multivariate analysis for the biobehavioral and social sciences: A graphical approach (Chapter 1). New York, NY: Wiley.
Morgan, S. E., Reichert, T., & Harrison, T. R. (2002). From numbers to words: Reporting statistical results for the social sciences. Boston, MA: Allyn & Bacon.
O'Rourke, N., Hatcher, L., & Stepanski, E. J. (2005). A step-by-step approach to using SAS for univariate & multivariate statistics. Cary, NC: SAS Institute.
Sheskin, D. J. (2003). Handbook of parametric and nonparametric statistical procedures. Boca Raton, FL: CRC Press.

Unobtrusive Analysis

One of the challenges every social science researcher faces is how to gather and analyze data that reflect true human experience. Although typical research is conducted via observation or questionnaires, human behavior is interrupted by these methods; therefore, the data collected may not be valid. For example, when communication graduate researchers repeatedly survey undergraduate communication students for various research projects, the undergraduate population may grow tired or even resentful of completing the surveys. A survey also interrupts or intrudes upon a student's life; therefore, it may not directly reflect the actual behavior or attitude expected during the natural stream of interactions with that participant. Unobtrusive

measures help to reduce the amount of selection bias and researcher bias throughout a research project. Because the researcher is not interacting directly with the participants, he or she cannot influence the participants' responses or interactions. If a researcher conducts an experiment in which the participants are observed directly, the physical presence of the researcher may have an effect on the behavior of the participants. Unobtrusive measures allow for data collection and analysis to be completed without the researcher intruding in the research context.

This entry discusses methods of indirect data collection before examining the unobtrusive analysis of existing documents and data sources, as well as Internet and social media sources. The entry concludes with a section on ethical considerations of unobtrusive data collection and analysis.

Indirect Data Collection

To avoid the possibility that the researcher's presence interferes with the behavior of the research participants, a number of indirect data collection methods may be employed. For example, if a researcher would like to measure the effectiveness of communication approaches for wait staff within a restaurant, the servers could each be coached in a particular approach. Following the experiment, the amounts tipped to the servers could be calculated and compared.

Another commonly used indirect collection method is using undetectable recording devices and tools to record human interactions, which are later analyzed by the researcher. This approach may be controversial, as it is common practice for all research participants to be willing and aware of their participation in a research experiment. In addition, legal and ethical considerations must be assessed prior to recording any human interaction (this is expanded upon in a later section).

The nature of indirect data collection is to collect data without the participants' knowledge. Although the goal is to obtain data free of the bias introduced by participant knowledge, participants have a right to privacy. Some information may be considered public and therefore not subject to privacy laws or regulations. However, a researcher is often still required to

obtain a signed consent form prior to analyzing or using any data collected during the experiment.

Analyzing Existing Documents

A common approach to unobtrusive research is content analysis. The content used for such analysis comes from existing documents, such as newspaper articles or organizational documents. The content can be analyzed quantitatively or qualitatively through thematic analysis, indexing, quantitative description analysis, and other methods.

As with any other data source used for research, there are pros and cons to using existing documents. One limitation to consider is that the researcher is limited to documents that already exist. For example, if a researcher were conducting research within an organization about how the organization communicates during times of change, the documents that exist might capture the organization's official stance on how things should be communicated. However, the existing documents would not capture how unofficial "water cooler" communication affects how communication actually happens in many cases.
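A quantitative description analysis of existing documents can begin with something as simple as term frequencies. The sketch below is illustrative only; the documents and the coded terms are hypothetical:

```python
from collections import Counter
import re

def term_frequencies(documents, coded_terms):
    """Count occurrences of coded terms across a set of existing documents."""
    counts = Counter()
    for doc in documents:
        words = re.findall(r"[a-z']+", doc.lower())
        counts.update(w for w in words if w in coded_terms)
    return counts

# Hypothetical organizational memos coded for change-related language.
memos = [
    "The reorganization will proceed as planned.",
    "Staff raised concerns about the reorganization timeline.",
    "Leadership will communicate the transition plan next week.",
]
print(term_frequencies(memos, {"reorganization", "transition", "concerns"}))
```

Counts like these could then feed a thematic analysis or be compared across document types, always keeping in mind that only the documents that exist can be counted.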

Analyzing Existing Data Sources

Researchers in the 21st century have ready access to data already collected by organizations such as governments, schools, and businesses. The data collected by these organizations can be used for secondary analysis. Census data, consumer data, crime records, and the like can all provide a wealth of knowledge after analysis. For example, by combining census data with consumer data, a researcher may be able to determine which geographical areas are most in tune with the latest technology (e.g., knowing where people typically purchase the latest technology). Using these types of sources for secondary analysis comes at a cost, both figurative and economic. The databases containing these data sources are enormous: processing them requires not only a significant amount of technical knowledge but also a significant amount of equipment and computer software.
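The kind of database combination described above can be sketched in a few lines of code. The following Python sketch is purely illustrative: the region names, population figures, and purchase records are invented, and a real study would draw on far larger census and consumer databases.

```python
# Hypothetical sketch of combining two existing data sources by a shared
# region key. All names and figures are invented for illustration.

census = {  # census-style data: region -> population
    "north": 120_000,
    "south": 80_000,
}

tech_purchases = [  # consumer-style records: (region, units of new tech sold)
    ("north", 3_000),
    ("north", 1_500),
    ("south", 500),
]

# Aggregate purchases per region, then normalize by population to get a
# rough per-capita adoption rate for each geographic area.
totals = {}
for region, units in tech_purchases:
    totals[region] = totals.get(region, 0) + units

adoption = {region: totals.get(region, 0) / pop for region, pop in census.items()}
print(adoption)  # per-capita purchase rates by region
```

In practice the "join key" (here, a region name) must be consistent across both databases, which is one of the hidden data-quality issues a secondary analyst inherits.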

When using consumer data, there may be a significant cost to obtain a license even to access and use the data. Another consideration when using an existing source of data is that the researcher may not be familiar with existing issues in the data. Some of the data may contain errors or omissions that were not critical for the primary research study but might have significant implications for the current research project. For these reasons, these data are typically used in well-funded, national-level studies. Other sources commonly used for research include existing organizational data or records. One example might be using existing medical records to determine patterns of illness or treatment. Another might be using data collected by the human resources department during exit interviews of departing employees to examine organizational communication issues. Although these and similar sources offer a significant amount of valuable information, ethical considerations must be made to ensure that participants' right to privacy is upheld.

The Internet and Social Media Sources

With the advent of the Internet and social media, there is a wealth of data available for research on just about any topical area imaginable. Although some web-based sources restrict participation to approved members, other sources are considered to be in the public domain. Even so, ethical considerations still apply when such information is used for research purposes.

Ethical Considerations

Some collection methods that may be considered ethically questionable include going through a person's trash to collect data, staging a false door-to-door event to collect data through interaction or observation within a person's home, or working with a place of business to collect information from its customers. However, some ethically questionable data sources may not be so obvious. Concern has been growing over whether
information obtained through public Internet sources can be used without informed consent. Questions regarding this concern include "When is consent required?" and "How can researchers obtain informed consent from participants when the data are collected through secondary sources?" Researchers can turn to their institutional review board (IRB) for help determining whether consent is required and, if so, the best way to obtain it. In some cases, in which a researcher is obtaining data through participation in an online social network, the researcher must include a disclaimer within his or her user profile indicating that he or she is a researcher. This would be required particularly if the researcher joins and participates in a private online social network group for the purpose of conducting research. In other cases, in which the social group is open to the public, such a disclaimer may not be warranted.

Karina L. Willes

See also Ethics Codes and Guidelines; Informed Consent; Online Communities; Online Data, Collection and Interpretation of; Online Social Worlds; Primary Data Analysis; Secondary Data; Social Networks, Online

Further Readings

Kahn, A. S., Ratan, R., & Williams, D. (2014). Why we distort in self-report: Predictors of self-report errors in video game play. Journal of Computer-Mediated Communication, 19(4), 1010–1023. doi:10.1111/jcc4.12056

Kazdin, A. E. (1979). Unobtrusive measures in behavioral assessment. Journal of Applied Behavior Analysis, 12(4), 713–724. doi:10.1901/jaba.1979.12-713

Marrelli, A. F. (2007). Unobtrusive measures. Performance Improvement, 46(9), 43–47.

Unobtrusive Measurement

Many research studies, including those in communication studies, consist of participant interviews or survey questionnaires. Both types of data collection involve the researcher being
physically present during the collection of data, which can alter respondents' responses and behavior and thus provide the researcher with biased results. When completing a questionnaire, respondents may tire of filling out the survey, or they may answer with the responses they believe the researcher wants rather than answering truthfully, all in order to look better in the eyes of the researcher. In an interview, respondents may likewise embellish their answers so as not to appear to lack the positive behavior being studied (e.g., communicating effectively with a colleague). As such, these measurements are obtrusive because they interrupt the natural stream of participant behavior. Unobtrusive measurements reduce the risk of biases that typically result from the intrusion of the researcher and the measurement instrument (i.e., the survey questionnaire or interview procedure). At the same time, unobtrusive measurements reduce the amount of control the researcher has over the type of data that is collected. Depending on the theory or theoretical constructs that guide the research study, unobtrusive measures may or may not benefit the study; for some theories, unobtrusive measurements may not be available at all. The sections that follow detail the different types of unobtrusive measurements available to researchers: indirect measures, content analysis, and secondary analysis of data.

Indirect Measures

A more natural form of unobtrusive measurement is the indirect measure. With indirect measures, the researcher collects data without a formal research procedure or protocol. Indirect measures require the researcher to be creative in thinking about what data to collect and how to collect it; most such forms of data collection are hidden from and unknown to the participants. For instance, if a researcher wants to know how many people frequent a specific exhibit at a museum, the researcher could install electronic counters next to the exhibits of interest, and these counters
would serve as a way to detect and count the patrons who visit them. The patrons would not be aware of the counting, so it does not influence their decision to visit the exhibit being observed. This type of data collection is more unobtrusive than a survey or interview because the researcher does not have to ask participants which exhibits they visited; if asked, participants might not remember which exhibits they visited or might answer incorrectly. Instead, the researcher knows exactly how many people visited each exhibit through the electronic, indirect measurement system. As another example, if a researcher wanted to know which television stations people like to watch, he or she could work with a cable company and have service technicians note which station is on a customer's television during at-home service calls, rather than conducting an obtrusive survey or interview about participants' favorite shows. In both examples, the researcher must be cognizant of the ethics of indirect measurement, because the data are collected without the participants' knowledge. This type of measurement could violate the participants' right to privacy because the researcher is not using any form of informed consent. If the information collected is considered public, such as the station showing on the television when the technician arrives at someone's home, then there may be no invasion of privacy because the data were easily observed.
With all research methods, there may be specific types of research studies that warrant the use of an ethical, readily available unobtrusive measure in the form of an indirect measure. It is important for the researcher to assess the reliability and validity of such an indirect measure in order to verify that the results of the study are methodologically sound. If the researcher collects the same data at two different time points, he or she is assessing the test–retest reliability of the
data. For instance, comparing the observations from two cable house calls assesses the test–retest reliability of the television station measure. If the television stations are the same at both times of data collection, the researcher has greater confidence that the measure is a reliable form of data collection. In these examples, the measurements occurred naturally among the participants, and the data points were gathered without the researcher having to include any formal research protocol.
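The test–retest check described above can be sketched in a few lines of Python. The household identifiers and observed stations below are hypothetical; the point is simply that agreement between two observation waves can be computed directly.

```python
# Hypothetical test-retest reliability check for the cable-TV example:
# the same households are observed at two service calls, and we compute
# the proportion of households whose observed station matches.

visit_1 = {"home_a": "news", "home_b": "sports", "home_c": "movies"}
visit_2 = {"home_a": "news", "home_b": "drama",  "home_c": "movies"}

matches = sum(visit_1[home] == visit_2[home] for home in visit_1)
agreement = matches / len(visit_1)
print(f"test-retest agreement: {agreement:.2f}")  # 2 of 3 homes match
```

Higher agreement across waves supports the claim that the indirect measure is stable rather than an artifact of a single observation.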

Content Analysis

A second form of unobtrusive measurement is content analysis. Both qualitative and quantitative researchers can use content analysis to analyze patterns within text documents. Content analysis includes three different types of research: thematic analysis of text, indexing, and quantitative descriptive analysis. All three types are discussed in the following subsections.

Thematic Analysis

In thematic analysis, the researcher examines a body of text (e.g., documents, news articles, field notes, interviews) and identifies the major themes that emerge within the set. After initial themes begin to emerge, the researcher can continuously review the data sets in order to refine them further (e.g., looking for new ideas that emerge with repeated views of the data set). Repeating this procedure multiple times throughout the analysis helps ensure that all themes are discovered. Constant revisiting of the data over time allows the researcher to conclude that all themes have been uncovered.

Indexing

A second type of content analysis is known as indexing. Indexing is a way to keep track of or itemize all keywords within the body of text. Consider for instance a textbook. An index would help the reader know what keywords are found within
the text and on what pages those keywords can be located. The keyword list is alphabetized for easy viewing and locating. Likewise, if keywords are used within a phrase or a specific context, they are listed as such. For instance, if the term articles was used in multiple forms, such as journal articles, newspaper articles, and electronic articles, then the keyword would be broken down to show all three of the contexts in which it was used within the text.

Quantitative Descriptive Analysis

Quantitative descriptive analysis is similar to indexing but takes a more numerical or quantitative approach. A reader who wants to know which words, keywords, phrases, or ideas were used most frequently in a body of work would use a quantitative descriptive analysis to determine this. With both indexing and quantitative descriptive analysis, computer software programs are often used: the program scans the body of text, indexes all keywords, and quantifies them for research purposes.

Limitations of Content Analysis

There are limitations to all of these forms of data collection and analysis. First, not all research topics are available in the form of text. If a researcher's topic of interest is not covered in text-based media (e.g., newspaper articles, journal articles, magazine articles), the researcher is less likely to find a body of data to analyze. Furthermore, potential data points may be excluded from content analysis because they are not text documents (e.g., videos, films, documentaries); this can bias the information included in the research report. Last, using a computer program to analyze key ideas within the text can also introduce error (e.g., subtleties of meaning, typos), so results should always be checked by the researcher or research team in order to avoid misinterpretation. These limitations aside, content analysis with automated methods is a relatively fast and convenient way to analyze large amounts of text.
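The indexing and quantitative descriptive analysis steps described above can be sketched with a short program. The page texts and keywords here are invented for illustration; real content-analysis software would also handle stemming, stop words, and phrase contexts.

```python
from collections import Counter

# Hypothetical sketch: build a keyword-to-page index (indexing), then
# count keyword frequencies (quantitative descriptive analysis).

pages = {
    1: "journal articles and newspaper articles on communication",
    2: "electronic articles about communication research",
}

index = {}             # keyword -> list of pages where it appears
frequency = Counter()  # keyword -> total number of occurrences
for page, text in pages.items():
    words = text.lower().split()
    frequency.update(words)
    for word in set(words):  # record each page only once per keyword
        index.setdefault(word, []).append(page)

print(sorted(index["articles"]))  # pages containing "articles"
print(frequency.most_common(2))   # most frequent words overall
```

The index supports locating keywords by page, while the frequency table answers the "which terms were used most" question that quantitative descriptive analysis poses.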

Secondary Analysis of Data

The last type of unobtrusive measurement discussed in this entry is secondary analysis of data. This type of measurement is sometimes known as the reanalysis of quantitative data because it makes use of data that already exist in government, business, school, or other organizations' databases. Many such organizations maintain free, publicly accessible electronic databases that researchers can access, download, and analyze. Types of secondary analysis data include consumer data, crime records, census bureau data, economic data, and standardized testing data. These data sets may have taken months or years to collect but may have been examined only once, to serve the original research needs. Vast amounts of publicly accessible data are available, and many researchers combine information from multiple databases to examine their research questions and assess patterns among the data. For example, a researcher may want to pair standardized testing data with census information to assess patterns in students' test scores by geographic location and group. This could tell the researcher or research team a great deal about student behavior and teaching styles among these groups. A few things to keep in mind when using secondary analysis are the efficiency of this type of data and the scope of the study. First, secondary data analysis is cost efficient. These data have already been collected by another research team and are often available to the public at no cost. When doing research on a budget, using secondary analysis of data means the research team need not worry about the costs associated with data collection and participation. Furthermore, secondary data analysis allows the researcher to extend the scope of the study because most publicly accessible databases draw on a national sample. Using this type of data
allows the researcher to work efficiently at a larger, broader scope. Using secondary analysis of data does have some disadvantages. Because researchers did not collect the data themselves, they have to make assumptions about what data to combine and how the data were collected. When researchers are not involved with the data collection, they also do not know what limitations occurred in that process, which could alter or bias the data. If possible, it is good practice to reach out to the original research team with questions and to read the detailed documentation of procedure that often accompanies the data sets. Finally, it is important to give professional credit to the studies that were reanalyzed using secondary data analysis. Secondary data analysis can benefit the social sciences, especially at times like the present when grant-funded research is limited and much government data are free.

Stellina Marie Aubuchon Chapman

See also Content Analysis, Definitions of; Content Analysis, Purposes of; Experiments and Experimental Design; Reliability of Measurement; Secondary Data; Thematic Analysis; Unobtrusive Analysis; Validity, Measurement of

Further Readings

Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches (2nd ed.). Thousand Oaks, CA: Sage.

Fitch, K. (1994). Criteria for evidence in qualitative research. Western Journal of Communication, 58, 32–38.

Leeds-Hurwitz, W. (2005). Ethnography. In K. L. Fitch & R. E. Sanders (Eds.), Handbook of language and social interaction (pp. 327–353). Mahwah, NJ: Erlbaum.

Lindlof, T. R., & Taylor, B. C. (2002). Qualitative communication research methods (2nd ed.). Thousand Oaks, CA: Sage.

V

Validity, Concurrent

Concurrent validity is a logical inference that the results of a new measure or test are comparable to the results of some other, previously validated measure or test. This type of validity testing belongs to the category of criterion validity, in which a measure more or less accurately predicts the same outcome as a previously validated measure or test. The extent to which new test results correspond with results from a previously validated measure is the extent to which concurrent validity may be argued. The higher the degree of similarity in function (e.g., high correlation) between the two tests (one new and one previously validated), the more firmly concurrent validity is established. Concurrent validity, therefore, is the simultaneous accuracy of measurement evidenced by comparison with results from a previously validated test. This entry provides an understanding of concurrent validity by describing the evidence generated through this comparative assessment; explains the place of concurrent validity among alternative forms of validation, such as construct, criterion, and predictive validity; and offers practical examples of appropriate application, followed by a variety of common errors in establishing concurrent validity.

Assessment

Validity is context sensitive, particularly in the domain of the social sciences. As new contexts are
defined, new measures are in constant demand to advance theoretical assumptions or conclusions about predicting outcomes. The new measures must be tested, or validated, for accuracy. A commonly accepted validation process is to compare simultaneous results generated from new measures with results generated from previously validated measures—concurrent validity. The question becomes whether or not the two measures simultaneously predict the same outcomes within various contexts. Consider the following example involving a communication scholar who wants to improve predicting the trustworthiness that employees have toward their managers. Over decades of replication, scholars have accepted the measure of interpersonal similarity as the standard by which to predict the trustworthiness employees have toward managers. That is, the more similarity employees believe to share with managers, the more employees tend to trust the managers. However, the scholar is interested in improving predictability of trustworthiness employees have toward managers and has created a new instrument called the task attraction measure. The new measure claims to predict the same outcome as the standard interpersonal similarity measure, except to generate even more predictive power. The scholar collects data using both the standard interpersonal similarity measure and the new task attraction measure. Data are collected at the same time, using both measures to generate evidence toward predicting employee trust toward managers. The scholar is now ready to test concurrent validity.

The objective is to generate evidence that the new task attraction measure is indeed predicting the same expected outcome as the interpersonal similarity measure, employee trustworthiness toward managers. To generate the evidence, the scholar assesses the concurrent validity of the new measure in relation to the standard interpersonal similarity measure. The process often includes regression analysis to first determine the magnitude of prediction that both measures generate. Given both measures (i.e., the new and the standard) are indeed predictive of employee trustworthiness toward their managers, a typical next step is to test for concurrent validity by determining the correlation the two measures share with each other. The interpersonal similarity measure is already accepted as valid, so given the new task attraction measure shares a high statistically significant correlation with the standard interpersonal similarity measure, the scholar establishes concurrent validity. That is, the scholar has now generated evidence in support of arguing for the validation of a new measure to predict employee trustworthiness toward managers.
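The correlation step in the scenario above can be illustrated with a short Python sketch. The respondent scores below are invented, and the hand-rolled Pearson function stands in for the regression and correlation analyses a scholar would normally run in a statistics package.

```python
def pearson(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical scores from the same respondents at the same time:
task_attraction = [2.0, 3.5, 3.0, 4.5, 5.0]           # new measure
interpersonal_similarity = [2.5, 3.0, 3.5, 4.0, 5.0]  # validated measure

r = pearson(task_attraction, interpersonal_similarity)
print(f"r = {r:.2f}")  # a high positive r supports concurrent validity
```

Because both sets of scores come from a single data-collection session, a strong correlation between them is evidence of concurrent, rather than predictive, validity.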

Levels of Validity

Concurrent validity falls within the criterion types of validity. Whereas construct validity is typically concerned with establishing the soundness of a conceptual framework or theoretical model, criterion validity has more to do with the predictive power of such theoretical constructs. That is, criteria are used in observing specific outcomes as a result of the measurement process in order to determine validity. If the outcomes occur, or are observed as expected, the scholar establishes evidence to argue predictability of some observable operation. As such, two common levels of criterion validity emerge, one predictive and the other concurrent; both apply to communication studies. A major difference between the two levels is the timing of tests or data collection. Predictive validity is concerned with whether an initial test predicts expected outcomes on a later test of the same sample. However, where predictive validity is concerned
with a series of tests, concurrent validity is concerned with simultaneous testing of predictability between two measures taken at the same time. The objective of concurrent validity, then, is to determine predictive power from one-shot test results. Essentially, at the level of predictive validity, researchers are concerned with measurement at different times; at the level of concurrent validity, they are concerned with simultaneous measurement.

Application

Unlike various other assessments of criterion validity, tests for concurrent validity compare results from two or more conceptually similar measurements or tests by collecting data at the same time. The major concern is to compare a new measure with a previously validated measure in an attempt to advance or determine predictive criteria (indicators) of some outcome. Both measures administer conceptually similar criteria and determine predictability simultaneously; hence, concurrent validity. One example of an application in communication studies involves instructor source credibility across cultures. A group of scholars was interested in comparing a well-established measure of instructor source credibility in one culture with translated versions of the measure in different cultures. The instructor source credibility measure focuses on three criteria—competence, trustworthiness, and goodwill/care—to predict an instructor's derived credibility. The scholars collected data using three different translated versions of the same credibility measure, from three different samples (the United States, Spain, Thailand), at the same time, and compared the results. The three samples reported strong positive correlations, with the same predictive criteria, so concurrent validity of instructor source credibility was established across cultures.

Common Errors

Convergent Validity Error

Concurrent validity is not the same as convergent validity. Although operationally similar,
convergent validity focuses more on the conceptual definition of a construct. Generally, the concern is with testing whether two or more instruments measure the same concept under different names. For example, a group of researchers tested whether a measure of communication competence was measuring the same concept as rhetorical sensitivity. Both theories claimed to measure how well individuals adapt to various relationships in various contexts. Because of the nearly identical conceptual definitions, or theoretical descriptions, the scholars tested the strength of the statistical relationship that results from the two measurements shared. The test for convergent validity is therefore a type of construct validity. Instead of testing whether two or more tests define the same concept, concurrent validity focuses on the accuracy of criteria for predicting a specific outcome. Concurrent validity is focused on a comparison of accuracy in prediction between a new and a previously validated test; hence, criterion validity, whereby the concern is more about testing the function of prediction and less about the conceptualization of some construct.

Imperfect Gold Standard

Given that validity is context sensitive, definitions of the standards from which to establish validity readily become ambiguous. The establishment of concurrent validity requires a previously validated measure or test. Over time, often through a long series of replications of predictive power, scholars accept a measure as the gold standard. To establish concurrent validity, the gold standard measure is then used comparatively to generate evidence in support of validation. The gold standard, however, does not always provide correct criteria from which to generate useful predictive evidence. For example, suppose a scholar finds a previously validated measure of communication apprehension with a long record of replication in predicting communicative outcomes, but the behaviors described in the measure have been established with participants recruited over decades in U.S. cultural contexts. When
the scholar applies the same measure in an alternative cultural context, say Thai culture, the same behaviors associated with apprehension in U.S. contexts become more associated with respectfulness and regard for authority. As a result, the gold standard validated in U.S. contexts is invalid for results derived from the same measure in Thai culture. Essentially, the gold standard produces an ethnocentric error when the cultural context changes, and the criteria for establishing concurrent validity become compromised.

Keith E. Dilbeck

See also External Validity; Measurement Levels; Validity, Construct; Validity, Face and Content; Validity, Predictive; Variables, Operationalization

Further Readings

Biocca, F., Harms, C., & Gregg, J. (2001, May). The networked minds measure of social presence: Pilot test of the factor structure and concurrent validity. Paper presented at the 4th Annual International Workshop on Presence, Philadelphia, PA.

Hullman, G. A. (2007). Communicative adaptability scale: Evaluating its use as an "other-report" measure. Communication Reports, 20, 51–74.

Morisky, D. E., Green, L. W., & Levine, D. M. (1986). Concurrent and predictive validity of a self-reported measure of medication adherence. Medical Care, 24, 67–74.

Rubin, R. B., & Martin, M. M. (1994). Development of a measure of interpersonal communication competence. Communication Research Reports, 11, 33–44.

Validity, Construct

Construct validity is a logical inference that some measure or test generates results that sufficiently corroborate a theoretical conceptualization or rationale. Often, the process of construct validation centers on the development of a new measure or test. To validate the new measure, researchers generate empirical evidence to support the argument that the measure is indeed accurately observing what it claims to measure.

Often in communication studies, constructs define motivation- or attitudinal-based communication, such as emotions, cultural values, and perceptions of identity, or performance-based communication, such as skills, knowledge, and communicative outcomes (e.g., the persuasive skills speakers use to influence an audience to act). Consequently, to establish construct validity, researchers assess the accuracy of a new measurement by determining the similarities and/or differences shared among a variety of associated events. This entry provides an understanding of construct validity by describing the evidence generated from observing the relationship between a new concept and independent but related communication activities. It also explains the levels of construct validity with respect to convergent and discriminant validity. Finally, practical examples are included to provide clarity on appropriate application of tests, followed by a variety of common errors in establishing construct validity.

Assessment

Because communication is a branch of the social sciences, its research findings almost always involve social interaction and therefore depend on the context within which communication takes place. As new contexts are defined, new measures are in constant demand to advance theoretical assumptions or conclusions about communication activities. The new measures must be tested, or validated, for accuracy. A common practice of construct validation aims to determine similarities and differences between the new construct and various related measures. At times, researchers are concerned with the similarities a new measure shares with various other measures; tests for similarity commonly include factor analysis, correlation analysis, analysis of variance (ANOVA), and regression analysis. At other times, researchers focus on differences, more commonly determined by various t-tests and discriminant analyses. The question becomes whether the new measure relates as theoretically expected to the other associated measures within some context. As an example, consider a communication scholar who wishes to establish a new theory to
explain why some individuals express dishonest reasons about why they are emotionally upset in conflict situations. The scholar develops a measure to generate evidence for a construct that involves two levels. The primary level includes attributions associated with dishonest reasons for why an individual is upset, while the secondary level includes attributions associated with more honest reasons. To validate the construct, the scholar chooses a measure of conflict competitiveness. The scholar hypothesizes that the more competitive an individual becomes in conflict situations, the more that same person expresses dishonest reasons to explain his or her emotional state. The scholar collects data using both the conflict competitiveness measure and the new measure for primary and secondary emotional expressions. Data are collected at the same time, using both measures to generate evidence for why individuals become more or less honest about why they may be upset in conflict situations. The scholar is now ready to begin determining construct validity. The objective is to generate evidence that the new emotional expression measure is indeed measuring the two levels of expressed emotions (honest and dishonest). To generate the evidence, the scholar assesses the similarity shared between the new measure in relation to the conflict competitiveness measure. The process often includes an initial factor analysis to determine that the levels of the new emotional expression measure do indeed result in two separate factors as theoretically expected. Following the factor analysis, the scholar compares results of the new measure with results of conflict competitiveness measure. Given both measures share a relatively high statistically significant positive correlation, the scholar begins to establish construct validity. The scholar may also include a regression analysis to determine how well the conflict competitiveness measure predicts the new emotional expressiveness measure. 
Validity, Construct   1821

Given that the emotional expression measure results in a two-factor solution (honest and dishonest), shares a relatively high correlation with conflict competitiveness, and is significantly predicted by the conflict competitiveness measure, the scholar claims to have generated evidence in support of arguing for the validation of the new measure. That is, the scholar claims to have validated a new construct that describes various levels of honesty in how individuals express their emotional state during conflict situations.
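The validation sequence just described (factor check, correlation, regression) can be sketched in code. The following Python sketch uses entirely hypothetical simulated item responses, and a simple eigenvalue rule (the Kaiser criterion) stands in for a full factor analysis; it illustrates the workflow rather than reproducing any published procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300

# Hypothetical respondents: a competitiveness trait drives the four
# "dishonest reason" (primary) items; a separate trait drives the four
# "honest reason" (secondary) items.
competitiveness = rng.normal(size=n)
honesty_trait = rng.normal(size=n)
dishonest_items = np.column_stack(
    [competitiveness + rng.normal(scale=0.6, size=n) for _ in range(4)])
honest_items = np.column_stack(
    [honesty_trait + rng.normal(scale=0.6, size=n) for _ in range(4)])
items = np.hstack([dishonest_items, honest_items])

# Step 1 (factor check): eigenvalues of the item correlation matrix
# greater than 1 suggest the number of factors present.
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
n_factors = int((eigvals > 1.0).sum())

# Step 2 (correlation): the dishonest subscale should correlate
# positively with conflict competitiveness.
dishonest_score = dishonest_items.mean(axis=1)
r = np.corrcoef(dishonest_score, competitiveness)[0, 1]

# Step 3 (regression): competitiveness should predict the new measure.
slope, intercept = np.polyfit(competitiveness, dishonest_score, 1)

print(f"factors retained: {n_factors}")
print(f"r(new measure, competitiveness) = {r:.2f}")
print(f"regression slope = {slope:.2f}")
```

With these simulated loadings, the eigenvalue rule retains two factors and the correlation with competitiveness is strongly positive, mirroring the pattern of evidence the scholar needs to argue for construct validity.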

Levels of Validity

Construct validity generally has two sublevels: convergent validity and discriminant validity. Convergent validity is based more on similarities. It is concerned with, for example, how much conceptual overlap Theory A shares with Theory B. Researchers interested in determining convergent validity may propose that two measures are so conceptually similar that they are actually measuring the same event under different titles. A typical statistical analysis used to establish convergent validity is correlation analysis: the more Theory A increases, the more Theory B increases. Basically, researchers concerned with convergent validity seek to determine a parameter from which to gauge the relatedness of at least two theoretical concepts, often evidenced by a high, statistically significant positive correlation.

Discriminant validity is more concerned with determining theoretical differentiation. A researcher interested in determining discriminant validity may propose that Theory A is indeed different from Theory B, often evidenced by a low correlation. Discriminant validity may also seek to determine that a construct describes unrelated factors, evidenced by a discriminant analysis; the discriminant analysis is used to determine the accuracy with which the measure classifies the separate factors. Both of the sublevels, convergent and discriminant validity, are common processes for establishing evidence that a measure is indeed measuring what the theory purports it to measure.
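A minimal numeric illustration of the two sublevels, using hypothetical simulated scores (the measure names and effect sizes are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical scores: measures A and B tap the same underlying concept
# (the convergent pair); measure C taps something unrelated (discriminant).
latent = rng.normal(size=n)
a = latent + rng.normal(scale=0.5, size=n)
b = latent + rng.normal(scale=0.5, size=n)
c = rng.normal(size=n)

r_ab = np.corrcoef(a, b)[0, 1]  # expected high: convergent evidence
r_ac = np.corrcoef(a, c)[0, 1]  # expected near zero: discriminant evidence
print(f"r(A, B) = {r_ab:.2f}")
print(f"r(A, C) = {r_ac:.2f}")
```

The high A–B correlation and near-zero A–C correlation together form the pattern researchers cite as convergent and discriminant evidence, respectively.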

Application

The general purpose of the construct validity process is to generate evidence that a theoretical concept is accurately observed or measured. Often, the validation process centers on a newly developed measure. The evidence commonly involves a variety of tests to determine similarities and differences with other well-established measures. For communication studies, the collection of measures provides a means to observe the degree to which theoretical functions are (un)related within social contexts.

Consider, for example, a group of researchers interested in the process by which individuals communicate to gain social group membership. The researchers believe that a large variety of individuals will establish false identity attributions to gain social group membership without entitlement. That is, individuals may deceive a social group in order to gain acceptance as a group member under false pretenses. The researchers develop a conceptual framework (construct) to determine the degree to which such individuals become motivated to gain social group membership under false identity attributions, and therefore without entitlement. To generate evidence for the theoretical framework, the researchers find a measure that determines the tolerance for disagreement that individuals maintain during communication activities. The researchers theorize that the more tolerance for disagreement individuals maintain, the less motivated those same individuals will become to communicate false identity attributions. Data are collected using both measures, and results are generated using correlation analysis. To establish construct validity, the researchers hypothesize that a statistically significant negative relationship exists between the two measures: the higher individuals score on the tolerance for disagreement measure, the lower they will score on the deception measure. Given that the negative correlation occurs, the researchers begin to establish evidence useful for arguing in support of validating the new construct.

Common Errors

The virtually inexhaustible list of methods used to establish construct validity presents difficulties in the validation process. Although the two types of validity tests, convergent and discriminant, are likely the most common, construct validity remains open to a large variety of alternative tests. The objective then becomes to assemble a collection of tests that builds relatively sufficient conditions to argue for construct validity, which makes the validation process increasingly ambiguous.

Consider, for example, a new construct designed to predict a specific outcome. The researchers compare results from the new construct with another construct that purports to predict the same outcome and discover a statistically significant positive correlation between the two measures. The researchers could argue in support of convergent validity as evidence to claim construct validity of the new measure. However, though the measures share convergent validity, a regression analysis indicates that the new measure is not actually predictive of the theorized outcome. The determination of construct validity, then, becomes ambiguous.

Similarly, a group of researchers could design a new construct to determine whether results from three or more groups of test subjects, under different test conditions, differ significantly according to an analysis of variance (ANOVA). The researchers might claim to have established construct validity; except the omnibus ANOVA fails to identify which of the test groups account for the significant difference. The researchers could then perform a post hoc analysis to identify which of the test groups do in fact account for the largest proportion of variance. However, when post hoc results indicate a high degree of collinearity among the groups, the indication is that the test groups are too correlated to determine significant differences in prediction. Thus, the evidence used to argue in favor of construct validity once again becomes increasingly ambiguous.

Consequently, the determination of construct validity requires sound reasoning about why specific results indicate expected outcomes. Basically, common errors occur when researchers know how to apply various analyses but fail to know how the interpretation of results sufficiently evidences construct validity.

Keith E. Dilbeck

See also Internal Validity; Validity, Concurrent; Validity, Face and Content; Validity, Measurement of; Validity, Predictive

Further Readings

Emmers-Sommer, T. M., & Allen, M. (1999). Surveying the effect of media effects: A meta-analytic summary of the media effects research in human communication research. Human Communication Research, 25, 478–497.
McCroskey, J. C. (1992). Reliability and validity of the willingness to communicate scale. Communication Quarterly, 40, 16–25.
Spitzberg, B. H. (1990). The construct validity of trait-based measures of interpersonal competence. Communication Research Reports, 7, 107–115.

Validity, Face and Content

Validity refers to a condition in which statements or conclusions made about reality are reflective of that reality. A number of forms of validity exist in social science research. One form is generalizability, which represents the extent to which a statement or conclusion applies to populations or settings not included within the context of a specific study. Causal validity is concerned with statements or conclusions drawn about hypothesized effects of variables on outcomes under study. Measurement validity relates to the degree to which measures designed to capture concepts do in fact measure desired concepts. In other words, how accurately a measure in a study assesses what researchers believe the measure captures is reflective of the amount of measurement validity present within the study design. Thus, establishing whether a measure has gauged what it was designed to measure is an essential first step for determining validity of research findings. Measurement validity can be established using four distinct subtypes: face validity, content validity, criterion validity, and construct validity. This entry focuses on the first two subtypes of measurement validity—face validity and content validity—specifically how they relate to one another as well as other forms of measurement validity (criterion validity and construct validity). It also provides examples of various approaches commonly used to assess face validity and content validity. Last, this entry discusses potential challenges faced by researchers interested in determining face validity and content validity of their research.


Measurement Validity

Measurement validity represents the degree to which a measure constructed to assess a specific concept actually measures the concept in question. A valid measure of a concept will be strongly associated with other measures of the concept previously shown to display high validity. Similarly, a valid measure should also significantly correlate with measures of other concepts known to be strongly related to the concept being measured. In addition, a valid measure should not be associated with distinctly unrelated concepts. The validity of a measure is called into question when the measure is not associated with known valid measures of a concept, fails to correlate with measures of related concepts, or is found to be associated with measures of unrelated concepts.

Consider the following example. A researcher is testing the effects of playing violent video games on aggressive behavior among children. The researcher examines the validity of the measure for violent video-game playing and finds that it is associated with previously established measures of violent video-game playing and with measures of other types of violent media consumption, including a measure assessing violent-film exposure. In addition, the researcher considers whether the measure of violent video-game playing is associated with measures of news consumption and confirms that the study's measure is not associated with this unrelated concept. Based on this, the researcher can conclude that the study's measure of violent video-game playing exhibits high measurement validity.

Although the process for establishing the validity of a measure may be straightforward, developing highly valid measures can be a challenging endeavor. When a measure does not meet criteria of validity and thus fails to measure the concept it was designed to measure, a measurement error has occurred. Measurement error may occur as a result of three factors: idiosyncratic individual errors, generic individual errors, or method factors. Idiosyncratic individual errors arise when a relatively small subset of individuals respond to a measure in somewhat unsystematic and unexpected ways. For example, a respondent may fail to understand the meaning of a question about road safety or misinterpret specific wording regarding driving laws, which leads the respondent to offer answers that deviate from how the respondent would answer with a clearer interpretation of the question. Similarly, a research participant may provide answers to questions about road safety immediately following a near collision that are very different from responses to these questions following a less eventful day.

Because the quality of a study's findings is conditional on its use of valid measures, establishing the extent to which appropriate levels of measurement validity have been reached remains an important goal for researchers. Measurement validity can be judged in four ways: criterion validity, construct validity, face validity, and content validity. The first two approaches assess the measurement validity of a measure in the context of other measures and how it relates or compares to them. Specifically, criterion validity uses comparisons to previously validated measures to determine measurement validity, whereas construct validity uses theory to guide assessments of measurement validity. For example, one way to establish the criterion validity of a self-report measure of monthly cigarette use may be to compare its scores with a less obtrusive measure that captures the number of cigarette boxes purchased in the last month. Similarly, construct validity can be determined by comparing scores from a measure of attitude toward quitting smoking and intention to quit smoking, as reasoned-action theories such as the theory of planned behavior suggest that the concepts of attitude and intention are likely to be significantly associated. In contrast, the last two approaches involve assessment of measurement validity through inspection of a measure's individual attributes.
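The three comparison checks described in this section can be sketched as follows. The data are simulated and the variable names are placeholders for the video-game example, not real instruments:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 250

# Hypothetical latent amount of violent video-game play, plus four
# observed scores: the new measure, an established measure of the same
# concept, a related concept (violent-film exposure), and an unrelated
# concept (news consumption).
gaming = rng.normal(size=n)
new_measure = gaming + rng.normal(scale=0.4, size=n)
established = gaming + rng.normal(scale=0.4, size=n)
violent_film = 0.6 * gaming + rng.normal(scale=0.8, size=n)
news = rng.normal(size=n)

def r(x, y):
    """Pearson correlation between two score vectors."""
    return np.corrcoef(x, y)[0, 1]

# The three checks from the text: association with a validated measure,
# correlation with a related concept, no association with an unrelated one.
checks = {
    "established measure (should be high)": r(new_measure, established),
    "related concept (should be clear)": r(new_measure, violent_film),
    "unrelated concept (should be near 0)": r(new_measure, news),
}
for label, value in checks.items():
    print(f"{label}: {value:+.2f}")
```

A new measure that passes all three checks exhibits the pattern the researcher in the example relies on to conclude high measurement validity; a failure on any one check calls the measure into question.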

Face Validity

As the name suggests, face validity requires an examination of a measure, and the items of which it is composed, as sufficient and suitable "on its face" for capturing a concept. A measure with face validity will be visibly relevant to the concept it is intended to measure, and less so to other concepts. For example, a self-report measure of the number of cigarettes smoked in a day would be a measure of daily smoking with strong face validity. As such, face validity is a rapid way to determine, at face value, the appropriateness of using a particular measure to capture a concept, and thus an excellent first step for establishing overall measurement validity.

However, face validity alone cannot provide sufficient support for measurement validity and should always be interpreted in the context of other forms of measurement validity. Returning to the example of a self-report measure of the number of cigarettes smoked in a day: although at face value this measure may be valid in terms of measuring daily smoking, given the rise in stigma related to smoking behavior and the desire among some individuals to provide "the right answer," a question such as "How many cigarettes do you smoke on a typical day?" may lead to underreporting of actual smoking behavior. This would have the effect of reducing measurement validity and weakening the conclusions drawn about smokers.

Content Validity

Content validity focuses on the degree to which a measure captures the full dimension of a particular concept. A measure exhibiting high content validity is one that encompasses the full meaning of the concept it is intended to assess. In order to ensure that a measure is capable of fully gauging a concept, researchers typically consult experts and conduct a thorough review of the extant literature covering various properties and qualities of the concept under study. One example of a measure with content validity is the perceived argument strength scale. The items included in this scale were significantly informed by theory and research suggesting that message factors (e.g., believability, novelty, and personal importance) are important for fully assessing a message's argument strength as perceived by an individual.

At times, however, experts may fail to come to agreement about the content validity of a measure and whether it fully encompasses all dimensions of a specific concept. This represents one of the challenges of using forms of validity that are conditional on subjective inspections of a measure's individual attributes. As such, these forms of validity do not reach the same level as criterion validity and construct validity, which are judged using empirical evaluations.

Different types of validity are concerned with establishing the extent to which observed findings reflect some aspect of reality. Measurement validity is specifically focused on assessing the degree to which measures used in a study accurately gauge what they are intended to measure, which has important implications for the conclusions researchers can draw about research findings. Of the four different forms of measurement validity, content validity and face validity rely on subjective evaluations to make a determination of validity. This reliance on subjective assessment raises challenges that stronger forms of validity based on empirical observations, such as criterion validity and construct validity, do not typically face.

Lourdes S. Martinez

See also Categorization; Coding of Data; Data Transformation; External Validity; Grounded Theory; Internal Validity; Survey: Leading Questions; Survey Questions, Writing and Phrasing of; Validity, Measurement of

Further Readings

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211.
Brewer, J., & Hunter, A. (1989). Multimethod research: A synthesis of styles. Newbury Park, CA: Sage.
Kelly, B. J., Niederdeppe, J., & Hornik, R. C. (2009). Validating measures of scanned information exposure in the context of cancer prevention and screening behaviors. Journal of Health Communication, 14(8), 721–740.
Schutt, R. K. (2011). Investigating the social world: The process and practice of research. Thousand Oaks, CA: Pine Forge Press.
Zhao, X., Strasser, A., Cappella, J. N., Lerman, C., & Fishbein, M. (2011). A measure of perceived argument strength: Reliability and validity. Communication Methods and Measures, 5(1), 48–75.

Validity, Halo Effect

The halo effect refers to the idea that a person or attribute highly (or lowly) valued in one aspect becomes highly (or lowly) valued in some other aspect unrelated to the original assessment. Consider, for example, a person who receives a high (or low) evaluation in some domain, such as the ability to shoot a basketball, based on performance. The "halo" of a positive evaluation as a basketball shooter might then be applied to the ability of the person to drive a motorcycle. The evaluator, without any evidence or performance information regarding motorcycle driving, assigns a high (or low) value to that person based on the person's basketball performance. The justification for the positive (or negative) evaluation of the ability to operate a motorcycle fails to reflect anything other than the positive (or negative) evaluation in the other domain, basketball ability. The "halo" refers to the aura of perception whereby the evaluator selectively interprets information so that all actions of the person become interpreted consistently with the original evaluation: a good person can do no wrong, or a bad person can do no good. Halo effects serve to some extent as the basis of stereotypes, employ selective perception, and often can be viewed as related to attribution theory.

As a research endeavor, any design or analysis using a rater to make multiple evaluations of the same person may reflect a common set of assumptions or valence. An evaluator or observer may provide a positive evaluation on one element, and that evaluation can affect all subsequent assessments. The use of a single person to perform multiple evaluations risks this kind of problem, that is, having one rater provide both sets of evaluations. The solution involves finding a means to generate independent assessments. One outcome of the halo effect is a higher reliability among assessments of the rater across situations. The impact of the rating issue, when it occurs, has the "positive" consequence of raising cross-situational consistency. However, the appearance of gain across situations becomes illusory, an artifact of the measurement, if a comparison to independent evaluations demonstrates significant departure from the shared assessment.

Ratings by Observers

The most frequent application in research involves observer or participant evaluations of multiple aspects or behaviors of a person. The issue of validity, then, involves the ability of the evaluator to make separate, independent judgments, and whether a good (or bad) evaluation made by the rater leads to another evaluation that is more positive (or negative) than deserved. Consider the example of an evaluator ranking someone on a series of tasks. If the evaluator decides that the "person can do no wrong" after one set of assessments, this can create a problem in subsequent considerations. The outcome may generate the perception of a consistent set of evaluations achieved across contexts or applications, when in fact the original evaluation is biasing the subsequent evaluations (halo effect). Importantly, the existence of a halo effect does not assume conscious bias or an attempt to create consistency on the part of the rater. A rater may be entirely unaware of the consistent evaluation or believe that the similar evaluations are warranted and justified.

The concern often becomes an issue when considering the validity of supervisor ratings of employees. A supervisor may form an overall impression of a worker's ability or suitability for a particular task. This predisposition affects all evaluations of the employee (both positive and negative): the ratings of individual characteristics are influenced by the overall evaluation that the supervisor has of the employee rather than by an independent assessment of the specific characteristics. The greater the need for judgment by the raters, the greater the potential for a halo effect, because more subjective judgments provide more room for the halo effect to occur.

The solution to avoid the halo effect is simply to use multiple raters. If multiple raters, either on a single task or across many tasks, agree (assuming the raters and ratings remain independent), this indicates a consistency not influenced by a halo effect. When the ratings demonstrate consistency across situations, regardless of the rater used, the cross-situational consistency reflects an actual consistency unrelated to the use of an individual rater.

Creation of Stereotypes

The development of stereotypes provides one application of a halo effect. Usually, the impact is that the establishment of, or belief in, some aspect of a person is used as proof that all members of the person's class share that same characteristic. For example, believing that all tall persons are good at playing basketball provides an example of a stereotype. While, clearly, members of the National Basketball Association are taller than the average person, the generalization of the attribute of being good at basketball to all tall persons is unreasonable and untrue. A person observing a particular attribute or relationship and then making generalizations provides the basis of a common assessment that could reflect the existence of a halo effect. Applied to an individual person, this would be the case of assuming that a person who achieves well on one task will, by definition, be good at all other tasks; the outcome of one evaluation is applied to all other possible areas of interest. The generation of a stereotype involves the application of some characteristic to all members of the group. Often, stereotypes imply an additional assumption that the group shares an entire set of characteristics in common (e.g., lazy, criminal, worthless, uneducated).

Explanation by Attribution Theory

Attribution theory suggests that an individual will take some particular characteristics of another person and infer from them a host of other characteristics. Basically, the individual attributes to a person an entire set of behaviors and abilities based on a small set of known data. In a similar vein, the generation of an evaluation based on some valence (positive or negative) that is extended to other, generalized evaluations represents a similar process in research. For a rater, the issue operates in the same fashion to explain the rating, or measurement, problem: a single instance or content category becomes the basis for determining or influencing the other ratings made. The important point is that the rater may or may not be consciously aware of the commonality in evaluation or recognize that he or she is using it during the assessment.

Techniques to Avoid a Halo Effect in Research

As mentioned, the simplest solution to the problem of a common rater influence across different evaluations is to use different evaluators. For example, if the need exists to rate both the proficiency of basketball shooting and the ability to drive a motorcycle, then different evaluators can be used to rate basketball shooting and motorcycle driving, respectively. If different evaluators are used, then the rating of each evaluator should be considered independent and unrelated to the other in terms of influence. A person may possess terrible or exceptional skills at both tasks, and the correlation between abilities, when observed by different evaluators unaware of the other rating, should not demonstrate a halo effect. The halo effect requires a common evaluator influenced by one evaluation to make a similar judgment on the next task.

Halo effects become more difficult to avoid in field or applied settings. A common issue in business, as discussed earlier, involves the evaluation of employees. Often the evaluations require assessment across a number of different tasks or situations. A positive evaluation on one element can become a halo that is applied to other elements, to future evaluations, and to alternative tasks and situations. When the evaluation is made based on a previous task or situation, the carryover to the new task or situation may generate an evaluation that reflects not the current situation but instead the past evaluation. When concerns about such a halo effect arise, outside or more objective assessments may be required.

The use of independent evaluators does not by itself provide evidence that the rating system is valid or reliable. Independent assessments provide only a way of handling one source of rating error; they do not establish that the mechanism or particular rating system offers a valid source of measurement. The entire process of evaluation, particularly the tools used, may still lack validity and ultimately reliability.
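The contrast between a single halo-prone rater and independent raters can be illustrated with simulated ratings; all names and numbers here are hypothetical, constructed only to show the diagnostic:

```python
import numpy as np

rng = np.random.default_rng(3)
n_targets = 60

# Hypothetical true abilities on two unrelated tasks.
basketball = rng.normal(size=n_targets)
motorcycle = rng.normal(size=n_targets)

# One rater with a halo: the impression formed while rating task 1
# bleeds into that same rater's task-2 ratings.
impression = basketball + rng.normal(scale=0.5, size=n_targets)
halo_task1 = impression
halo_task2 = (0.7 * impression + 0.3 * motorcycle
              + rng.normal(scale=0.5, size=n_targets))

# Two independent raters, each of whom sees only one task.
indep_task1 = basketball + rng.normal(scale=0.5, size=n_targets)
indep_task2 = motorcycle + rng.normal(scale=0.5, size=n_targets)

r_halo = np.corrcoef(halo_task1, halo_task2)[0, 1]
r_indep = np.corrcoef(indep_task1, indep_task2)[0, 1]
print(f"single rater, cross-task r = {r_halo:.2f}")
print(f"independent raters, cross-task r = {r_indep:.2f}")
```

The single rater shows high cross-task consistency even though the underlying abilities are unrelated, whereas the independent raters do not; the gap between the two correlations is the signature of a halo effect.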
The central concern of the halo effect is how an individual rater may use one evaluation to influence another evaluation of a common unit.

Mike Allen

See also Validity, Concurrent; Validity, Construct; Validity, Face and Content; Validity, Measurement of


Further Readings

Forgas, J. P. (2011). She just doesn't look like a philosopher . . . ? Affective influences on the halo effect in impression formation. European Journal of Social Psychology, 41, 812–817.
Johnson, D. M., & Vidulich, R. N. (1956). Experimental manipulation of the halo effect. Journal of Applied Psychology, 40, 130–134.
Nisbett, R. E., & Wilson, T. D. (1977). The halo effect: Evidence for unconscious alteration of judgments. Journal of Personality and Social Psychology, 35, 250–256.
Sine, W. D., Scott, S., & Gregorio, D. D. (2003). The halo effect and technology licensing: The influence of institutional prestige on the licensing of university inventions. Management Science, 49, 478–496.

Validity, Measurement of

The purpose of this entry is to explain what measurement is and the role that the concept of validity plays in ensuring that measurements are sound. Several ways of assessing validity are presented, and the significance of theory is discussed, given its centrality in evaluating a measure's validity.

What Is Measurement?

We make and use measurements all the time: We measure time with a clock, a stopwatch, or a calendar; we measure distance whenever we drive; we measure weight when we buy products by their weight or whenever we weigh ourselves. Sometimes our measurements are more complicated: At some stores, we learn how much a product costs per pound or per fluid ounce; in these cases, the resulting measure is obtained by division, dividing the overall price by the product's weight or volume. Similarly, the measurement of density is the mass of a substance divided by its volume. A measurement may be an even more complex transformation of several measures. For example, although we can crudely measure the distance that we walk by counting our steps and multiplying by the estimated length of our average step, we cannot measure the distance from the Earth to the Moon or measure the distances within atoms the way we measure the distance that we walk. The process of measuring the distance to the Moon or subatomic distances involves several different measures that are combined using a formula.

In communication research and in other social sciences, researchers often measure "human" quantities, such as intelligence (who was smarter, Einstein or Newton?), beliefs (who do you think was taller, George Washington or Abraham Lincoln?), attitudes (who is liked more, Hillary Clinton or Donald Trump?), behaviors (who texts and chats on cell phones more, men or women?), group climate (which college football team has the most fans, Michigan or Ohio State?), organizational structure (is the number of hierarchical levels in a company proportional to the number of employees?), cultural differences (are Americans more religious than Europeans?), as well as personality characteristics (how shy are you?) and aspects of language (how ambiguous is this sentence?). The number of human quantities appears limitless.

Definitions

Measurement can be defined as the systematic assignment of numbers to objects based on attributes of the objects. Here, systematic means that the assignment follows one or more rules. Assignment of numbers means that the object will be characterized by the number that is assigned. Depending on the kind and precision of measurement, the numbers can be of various kinds: Some measures may be restricted to positive integers {1, 2, 3, . . .} or to only a few integers {0, 1}, to nonnegative integers {0, 1, 2, 3, . . .}, to all integers from negative (e.g., −152) to positive (e.g., +3,456), to all rational numbers (all numbers that can be expressed as a ratio of two integers, such as 1/2, 7/10, −400, 1,234.567), or to rational plus irrational numbers (irrational numbers are those that cannot be expressed as the ratio of two integers, such as √2 or π). (There are other possibilities, such as complex numbers, but we need not discuss these possibilities here.)

Attribute refers to some quality, trait, feature, property, or aspect of an object, and it is usually called a variable: A person can be smart or not so smart (attribute: intelligence); a person can be tall or short (attribute: height); people can prefer Hillary Clinton or Donald Trump (attribute: popularity); men and women can differ in their cell phone use (attribute: use of communication technology); a group can be large or small (attribute: group size); an organization can be hierarchical or level (attribute: equality of power or influence); a culture can be relatively religious or secular (attribute: religiosity). Using some of these examples, one can see how measurement works (Table 1).

Table 1  Examples of Measurement, Organized by Object

| Object | Attribute | Units | Example |
| Person | intelligence | estimated IQ (intelligence quotient) based on the Stanford–Binet Intelligence Scales (5th ed., 2003) | Einstein is estimated to have had an IQ of 160, whereas Newton is estimated to have had an IQ of 190. |
| Person | height | inches, feet, centimeters, meters | George Washington was 6'2" tall, whereas Abraham Lincoln was 6'4" tall. |
| Person | popularity | percent of positive attitudes minus percent of negative attitudes toward a person (August 3, 2015; national poll) | Clinton has a 37% positive rating and a 48% negative rating, for a difference of −11%. Trump has a positive rating of 26% and a negative rating of 56%, for a difference of −30%. |
| Group of people | use of communication technology | minutes texting and chatting on a cell phone (2010) | Women text and chat an average of 1,457 minutes per month, whereas men text and chat an average of 1,114 minutes per month. |
| Group of people | group size | count of number of people in the group (across 210 U.S. media markets, September 2011) | OSU is estimated to have 3,167,263 fans, whereas Michigan is estimated to have 2,921,066 fans. |
| Organization | equality of power or influence | number of levels in the organization | Companies with about 1,000 employees generally have four hierarchical levels, whereas companies with about 3,000 employees generally have about eight hierarchical levels. |
| Culture | religiosity | percent of population that regularly attends religious services | A greater proportion of Americans regularly attend religious services than Europeans do. |

Operationalization

The process by which an attribute is converted to the number that will stand in for it is called operationalization. For example, in Table 1 religiosity is operationalized by conducting a survey in which samples of individuals were asked how often they attend religious services. Individuals' response choices were as follows:


More than once a week
Once a week
Once a month
Christmas/Easter day
Other specific holy days
Once a year
Less often
Never, practically never

Responses were coded, entered into a computer, aggregated (combined), and the results reported. This operationalization has several sources of potential error: (a) survey respondents may not understand the question or the response alternatives; (b) the transcription of the responses by the respondent or by the researcher could be mistaken; (c) the data analysis may be flawed; and (d) the report of the data analysis may be biased or based on misinterpretation or misreading of the computer results. These errors, and others, are all part of the operationalization.

Different operationalizations of the same attribute are likely to have different sources of error. Suppose, for example, that a researcher wishes to measure how popular a person (named Leslie) is. The researcher can ask Leslie about herself (How popular are you? How many friends do you have? How many people speak to you on a typical day? How many people spoke to you yesterday? List your friends.). The researcher can also estimate Leslie's popularity by asking others, such as friends in the same school class as Leslie, about Leslie (How popular is Leslie? How many friends does Leslie have? How many people do you think speak to Leslie on a typical day? How many people do you think spoke to Leslie yesterday? Are you Leslie's friend?). The researcher can also use data such as the number of personal calls on Leslie's cell phone to estimate Leslie's popularity. Each of these ways to assess Leslie's popularity can be incorrect—that is, with errors of some kind—because of the method used. For example, Leslie's friends may overestimate Leslie's popularity because they like her, whereas the use of cell phone data may underestimate Leslie's popularity if Leslie's parents limit Leslie's cell phone use.
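The coding and aggregation steps just described can be sketched in a few lines of code. The numeric codes, the sample of responses, and the cutoff used for "regular" attendance are all hypothetical, chosen only to illustrate the mechanics:

```python
# Hypothetical coding scheme: map each response choice for the religiosity
# item to an integer code for analysis (higher = more frequent attendance).
codes = {
    "More than once a week": 8,
    "Once a week": 7,
    "Once a month": 6,
    "Christmas/Easter day": 5,
    "Other specific holy days": 4,
    "Once a year": 3,
    "Less often": 2,
    "Never, practically never": 1,
}

# Hypothetical raw responses from five survey participants
responses = ["Once a week", "Never, practically never", "Once a month",
             "Once a week", "Once a year"]

coded = [codes[r] for r in responses]
print(coded)  # [7, 1, 6, 7, 3]

# Aggregate: the share of the sample attending at least once a month
regular = sum(1 for c in coded if c >= 6) / len(coded)
print(regular)  # 0.6
```

Each step here is also a place where the errors listed above can enter: a typo in the coding scheme, a mis-transcribed response, or a wrong cutoff in the aggregation would all become part of the operationalization.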

These errors can be thought of as being of two kinds: random and systematic. Random error is error that is unpredictable and due to chance. Because of random error, sometimes a measure may be too high and sometimes too low; but if the error is random, over repeated measurements (made in the same way, under the same conditions, with no change in what is being measured), the random error should average out. Systematic error is error that makes the measure consistently off the mark; in other words, the measure is biased. For example, if a measure of intelligence is administered in English and some respondents are not fluent in English, the results of the IQ test may make those respondents appear unintelligent when, in fact, the test was biased: it produced scores that were too low because those not fluent in English may not have understood the questions or the response alternatives on the test. In this case, the operationalization is flawed.
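The difference between the two kinds of error can be shown with a small simulation. The true score (100), the noise level, and the bias of 15 points are arbitrary values chosen for illustration:

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

TRUE_SCORE = 100  # the attribute's actual value, in arbitrary units

# Random error: unpredictable noise that averages out over repetitions
random_only = [TRUE_SCORE + random.gauss(0, 5) for _ in range(10_000)]

# Systematic error: a constant bias (e.g., a test that is too hard for
# some respondents), which does NOT average out
biased = [TRUE_SCORE - 15 + random.gauss(0, 5) for _ in range(10_000)]

mean_random = sum(random_only) / len(random_only)
mean_biased = sum(biased) / len(biased)

print(round(mean_random))  # close to 100: the random error cancels
print(round(mean_biased))  # close to 85: the bias remains
```

Averaging removes the random component but leaves the systematic component untouched, which is why a biased operationalization cannot be rescued by simply repeating the measurement.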

What Is Validity?

As can be seen from the previous section, all kinds of attributes at various levels (e.g., person, group, organization, culture) can be measured with different types of measures, and each measure may be associated with different types of error, random and systematic. The next question is how the researcher knows or learns whether any of the measures are sensible—in other words, whether the measures are valid. Suppose the researcher asks 100 people to assess their own popularity, giving the researcher 100 numbers. Does this set of numbers reflect the attribute—popularity—that the researcher seeks to measure? This is the meaning of measurement validity: the measure of an attribute reflects that attribute. For a measure to be valid, it should be minimally affected by random error and by systematic error. The following subsections discuss the basic ways of attempting to determine whether a measure is valid.

Qualitative Types of Validity

There are two types of validity that are not associated with a quantitative or numerical result indicating the degree that a measure is valid. The first is called face validity, and the second is called content validity.


Face Validity

A measure has face validity if it appears valid on its face. This judgment—that the measure has face validity—may be made by the researcher choosing to use the measure, by experts in the topic under investigation, or by a sample of people who are similar to those to be used in the research that will subsequently be conducted. Judging a measure to have face validity is especially important at the early stages of research, particularly when a measure will be used that has not been used previously or when the measure is being used on a sample that differs in important ways from the sample used previously. Examining whether a measure has face validity requires a careful examination of the questions or scales or methods that are being considered for research. Because different researchers, different experts, or a different sample of people may come to different conclusions about the face validity of a measure, claiming that a measure has face validity is only the beginning of the process of validating a measure.

Content Validity

A measure has content validity if the researcher can show that the items that constitute the measure—for example, the questions asked to measure popularity—are sampled from and are representative of the domain of content that the measure is supposed to incorporate. Suppose a teacher were preparing an examination for an undergraduate research methods class. The teacher could create the grid shown in Table 2.

Table 2  Creating an Examination for an Undergraduate Research Methods Class by Use of a Grid

Rows (Content Source): Lectures; Discussion; Textbook; Articles
Columns (Kind of Assessment): Definition of Key Concepts; Application of Key Concepts; Analysis of Key Concepts; Comparison of Key Concepts; Integration of Key Concepts
Cells: left blank, to be filled with examination questions

Notice that there are 20 blank cells in this grid. Imagine that the teacher created a test by asking five questions per cell, with each question counting for 1 point, so the test has 100 points. A teacher using this method could say that the test covers the domain of content—the undergraduate research methods taught in the class—and therefore claim that the test has content validity. However, any other teacher of the same course could create a different grid, develop different questions based on that grid, and also claim that the test has content validity. Each teacher may have a valid claim; there is no single grid that establishes content validity. Content validity can be used to assess measures of many types; the measurement of popularity, for example, could be designed so that the measure appears to be content valid, with or without a grid. But for some types of measures, this is unnecessary: Generally, one is not concerned with sampling a domain of content when measuring time, distance, and other attributes that seem unambiguous in that the attribute being measured and its operationalization are closely tied.

Quantitative Types of Validity

There are types of validity that provide statistical or numerical assessments of validity. The statistic most commonly used is the Pearson product–moment correlation coefficient, or, more simply, the correlation or bivariate (i.e., two-variable) correlation. (This statistic is not used for categorical variables with more than two categories, and there are other assumptions associated with this statistic.) The correlation measures the degree of linear relation between the measures of two attributes (variables). The correlation ranges from −1.00 to +1.00, with a positive value indicating the more of one variable, the more of the second variable, and a negative value indicating the more of one variable, the less of the second variable. A correlation of zero indicates that the two variables in question are not linearly related. Quantitative assessments of validity include criterion-related validity (which includes predictive and concurrent validity) and construct validity (which includes convergent and discriminant validity).

Criterion-Related Validity

A measure has criterion-related validity if it is related (typically linearly) to an applied outcome measure. So, if a researcher is attempting to measure an attribute that can indicate success or failure at some endeavor, he or she is interested in criterion-related validity. The object of measures such as the SAT, the GRE (Graduate Record Examination Revised General Test), and the LSAT (Law School Admission Test) is to predict who will do well in college, graduate school, or law school, respectively, and they are often used, in conjunction with other data, to select applicants for admission to these schools. What these measures are supposed to predict is called the criterion, and it is assumed that the criterion is measured with validity. Civil service examinations, driver's license tests, and other such measures are similar in that they are typically used to help screen out those who are viewed as unable to do the job or as lacking the skill that the test is supposed to reflect. If the test (i.e., the measure the researcher is trying to validate) is given at one time and the criterion (i.e., the measure the researcher is trying to predict) is measured at a later time, this type of validity is called predictive criterion-related validity.
If the test is measured close in time to the criterion, this type of validity is called concurrent criterion-related validity. The quantitative assessment of criterion-related validity is the correlation between the measure the researcher is trying to validate and the criterion: Based on this method, the test or measure is considered valid if the correlation between the test and the criterion is large, statistically significant, and, typically, positive (e.g., higher scores on the GRE are associated with better performance in graduate school). For this form of validity, the measure the researcher is trying to validate may indicate the skills or abilities that he or she believes should predict the criterion. For example, a researcher who wants to create a measure that predicts good police performance might think that good police performance involves

• physical strength and stamina;
• knowledge of relevant laws;
• driving skill;
• interpersonal skills for dealing with the public with calmness and without anger or aggression;
• discipline and ability to follow orders;
• knowledge of first aid; and
• courage and clear thinking in the midst of crises.

If the researcher can create a measure that incorporates the factors in this list, and show that this measure does, in fact, predict good police performance, then the researcher can use this measure as a criterion-valid test for selecting recruits for the police force.

Construct Validity, Elaborate Version

A measure has construct validity if it is related (typically linearly) to other measures that are part of a network of theoretical causes and effects. For example, suppose a researcher hypothesizes that popularity is caused by

• birth order (younger children are predicted to be more popular) [X1];
• physical attractiveness [X2]; and
• empathy with others [X3];

and that popularity causes

• frequent selection as a group leader [Y1];
• frequent receipt of help from others [Y2];
• high levels of confidence at work and at home [Y3]; and
• high levels of satisfaction with one's friends [Y4].

The researcher can represent this causal network as shown in Figure 1.


Figure 1  A Causal Network for Construct Validation of a Measure of Popularity

(The figure shows arrows running from each of X1, X2, and X3 to Popularity, and from Popularity to each of Y1, Y2, Y3, and Y4.)
The labels for the measures are from the list of causes and effects. Each arrow is a hypothesized relation based on one or more theories that include popularity. Thus, there are seven hypotheses represented in Figure 1. For every hypothesis (every arrow), the researcher can compute the correlation between the cause and the effect. If the researcher conducts one or more studies to test these hypotheses, using the same measure of popularity for each study, he or she can consider what the results might be:

1. Every hypothesis is supported in that each of the seven correlations is significant, strong, and in the direction hypothesized. In that case, the researcher can say that (a) the measure of popularity is validated, and (b) the theory or theories that generated these hypotheses are supported.

2. Not a single hypothesis is supported. In that case, the researcher can say that (a) the measure of popularity is invalid, (b) the theory or theories that generated these hypotheses are invalid, or (c) both (a) and (b) are true.

3. Some hypotheses are supported and some are not. In this case, the conclusion with regard to the validity of the measure and the status of the theory or theories that generated the hypotheses is not clear, and more research is needed to evaluate the measure of popularity and its theoretical underpinnings.

Evaluating the construct validity of a measure as described here is not substantially different from conducting most scientific investigations, although here the focus is on the assessment of one particular measure.

Construct Validity, Limited Versions

There are two limited versions that are used to assess construct validity. The first, convergent validity, examines whether a single measure is associated, typically linearly, with a measure the researcher wishes to validate. So, if the researcher proposes that an individual's popularity should correlate strongly and positively with the frequency of being selected as a group leader, and finds that the correlation is strong, positive, and statistically significant, the researcher can conclude that the two measures converge as predicted and that they exhibit convergent validity with regard to each other. This result does not indicate whether other variables that are predicted to correlate with popularity would do so. In contrast, suppose the researcher hypothesizes that popularity is unassociated with intelligence. The researcher is predicting that, across a sample of people, an individual's popularity should be uncorrelated with the individual's intelligence. This form of validity is called discriminant validity, and a correlation close to zero indicates that these two traits are not the same, showing that the two measures—the measure of popularity and the measure of intelligence—can be discriminated or differentiated. Thus, in this case, the lack of a significant correlation is consistent with the researcher's predictions, and popularity and intelligence show discriminant validity.

The Role of Theory

Different types of validation seem to be associated with different goals, such as covering a range of topics deemed essential for a classroom test, preparing an examination for selection for a job or program, and developing scientific theory. For every type of validation, a researcher wants to understand whether the measure—the one he or she seeks to validate—does indeed measure that which the researcher wishes to measure. The elaborate form of construct validity requires one or more underlying theories within which the attribute the researcher wishes to measure "resides." Because of this situation, the researcher can make stronger claims for the evidence gained by this validation process: The researcher will have considered several hypotheses with several measures, and the hypotheses and measures may be linked to an even broader set of hypotheses and measures. Furthermore, when a researcher has construct validity, he or she can derive other relations between attributes and derive other measures, both applied and theoretical. So, the long-term goal of the validation process is extending scientific knowledge, which should support both theoretical and applied concerns.

Edward L. Fink

See also Validity, Concurrent; Validity, Construct; Validity, Face and Content; Validity, Halo Effect; Validity, Predictive

Further Readings

Andersen, P. B., Gundelach, P., & Lüchau, P. (2008). Religion in Europe and the United States: Assumptions, survey evidence, and some suggestions. Nordic Journal of Religion and Society, 21, 61–74.

Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2004). The concept of validity. Psychological Review, 111, 1061–1071. doi:10.1017/CBO9780511490026.007

Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81–105. doi:10.1037/h0046016

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302. doi:10.1037/h0040957

Hill, C. W. L., & Jones, G. R. (2009). Essentials of strategic management (2nd ed.). Independence, KY: Cengage Learning.

Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley.

Schweizer, K. (2012). On issues of validity and especially on the misery of convergent validity. European Journal of Psychological Assessment, 28, 249–254. doi:10.1027/1015-5759/a000156

Validity, Predictive

Predictive validity is the logical inference that results from one measure or test will be comparable to results from an alternative measure or test taken at a different time. This type of validity testing belongs to the category of criterion validity, which establishes, more or less accurately, the predictability of an outcome of some other past or future measure or test. The extent to which results derived from the measure in question correspond with results from the alternative test, taken at a different time, is the extent to which predictive validity may be argued. The higher the degree of similarity in function (e.g., high correlation) between the two tests—the one in question and the alternative—the more firmly predictive validity becomes established. Predictive validity, therefore, is the accuracy with which a measure predicts the outcome of an alternatively validated test, taken at a different time, under similar conditions. The aim of this entry is to provide an understanding of predictive validity by describing the evidence generated through comparative assessment. The entry also explains the place of predictive validity in relation to alternative forms of validation, such as construct, criterion, and concurrent validity. Finally, practical examples are included to clarify appropriate applications of such testing, followed by a variety of common errors in establishing predictive validity.

Assessment

Because validity is context sensitive, the various times at which the individual measurements are taken require that each measurement event remain contextually relevant. That is, the independent measures conceptually advance theoretical assumptions or conclusions by generating evidence to predict one another, given the same outcome criteria. A commonly accepted process for testing predictive validity, then, is to compare asynchronous results generated from the measure in question with results generated from an alternative, often previously validated, measure. The question becomes whether or not the two (or more) measures predict one another under the same outcome conditions within similar contexts at different times.

Consider, for example, a group of intercultural communication researchers who are interested in how relationship building takes place while living in a foreign nation. The researchers believe that language is important to building relationships, and thus become interested in generating accurate


predictions about individuals acquiring foreign language skills while living in a foreign nation. The researchers theorize that individuals who already perceive themselves as competent to communicate in their native language are more willing to use, and therefore learn, the foreign language. First, the researchers measure the self-perceived communication competence that individuals report before departing to the foreign nation. A year later, the researchers test how well the same individuals score on a foreign language acquisition examination. If the results from the self-perceived communication measure share a statistically significant relationship (typically assessed through correlation analysis) with the foreign language acquisition examination, predictive validity is established. In the context of intercultural communication and language acquisition, the researchers treat the results as evidence that self-perceived communication competence in a native language is predictive of foreign language acquisition while living in a foreign nation.

The objective is to generate evidence that the initial self-perceived communication competence is indeed predictive of the expected language acquisition outcome. To generate the evidence, the researchers assess the predictive validity of the initial measure in relation to the results of the language acquisition examination, 1 year later. The amount of time between measures is arbitrary, and in the field of communication, the necessary amount of time depends on the context being investigated. Often, the process includes correlation analysis between the initial measure and the later measure. However, regression analysis also helps to determine the magnitude of predictive power that the first measure (self-perceived communication competence) holds for the second measure (language acquisition examination). Given either a high correlation or a high degree of predictive power, or both, the researchers establish the predictive validity of the self-perceived communication competence measure. That is, the researchers have now generated evidence supporting the argument that the initial measure predicts the second, later measure.
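The regression step described above can be sketched as follows. The competence and examination scores are hypothetical, and a real analysis would also examine statistical significance and model fit:

```python
# Earlier measure: self-perceived communication competence (0-100 scale)
competence = [55, 70, 62, 80, 48, 90, 66, 74]
# Later measure, 1 year on: foreign language acquisition exam (0-100 scale)
language_exam = [58, 72, 60, 85, 50, 88, 65, 78]

n = len(competence)
mx = sum(competence) / n
my = sum(language_exam) / n

# Ordinary least-squares fit: exam ≈ a + b * competence
b = (sum((x - mx) * (y - my) for x, y in zip(competence, language_exam))
     / sum((x - mx) ** 2 for x in competence))
a = my - b * mx

print(f"exam ≈ {a:.1f} + {b:.2f} * competence")  # exam ≈ 2.6 + 0.98 * competence
```

A strong slope together with a high correlation would be taken as evidence that the earlier measure predicts the later one; with real data, the researchers would report both statistics along with their significance.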

Levels of Validity

Among the construct and criterion types of validity, predictive validity falls within the criterion types. Whereas construct validity is typically concerned with establishing the soundness of a conceptual framework or theoretical model, criterion validity has more to do with the predictive power of such theoretical constructs. That is, a criterion is used in observing specific outcomes resulting from the measurement process in order to determine validity. If the outcomes occur, or are observed as expected, the researchers establish evidence to argue the predictability of some observable operation. As such, two common levels of criterion validity emerge: one is concurrent and the other is predictive.

Both levels of criterion validity (concurrent and predictive) commonly apply to communication studies. A major difference of application between the two levels is the timing of tests or data collection. Concurrent validity is generally concerned with the relationship shared between two (or more) independent measures, taken at the same time, that predict the same outcome. Predictive validity, in contrast, is concerned with a series of tests, one (or more) after the other, and its objective is to determine the magnitude of predictive power from at least two different tests taken at different times. Essentially, at the level of concurrent validity, researchers are concerned with simultaneous measurement. At the level of predictive validity, researchers are concerned with how one measure taken at one time predicts the measurement results of some other independent measure taken at a later time.

Application

Generally, tests performed to establish predictive validity compare results from two (or more) conceptually similar measurements by collecting data using one measure and then collecting data using a different measure at a different time. Both measures apply the same criteria to conceptually similar contexts. The major concern is to compare one measure with another, later measure in an attempt to advance or determine predictive criteria (indicators) of some outcome.


One example of an application in communication studies comes from health communication, specifically doctor–patient communication. Suppose researchers are interested in patient compliance with pharmaceutical prescriptions: does the way medical doctors communicate with their patients predict whether or not patients will comply with taking their medications correctly? The researchers theorize that the more medical doctors communicate with nonverbal immediacy behaviors when discussing medication, the more patients perceive that the doctor cares about their medical condition. The researchers collect data about doctors' immediacy when communicating with patients, and then later measure the degree to which patients comply with pharmaceutical prescriptions. If the nonverbal immediacy measure shares a statistically significant relationship (e.g., via correlation or regression) with the results from the measure of patient compliance, the researchers have evidence to support an argument for the predictive validity of the nonverbal immediacy measure in health communication.

Common Errors

Concurrent Validity Error

Predictive validity is not the same as concurrent validity. Although the two types of tests are similar, they are procedurally different types of validation. Both concurrent and predictive validity belong to the category of criterion validity; however, the timing of the measurements differs. Whereas tests for concurrent validity are concerned with one-shot measurement, in which data collection takes place all at the same time, predictive validity concerns how one set of test results relates to a different set of test results (from a different independent measure) taken at a different time.

Criteria Over Time

Because validity is context sensitive, definitions of the standards by which to establish validity readily become ambiguous. The establishment of predictive validity requires that the criteria the measure was designed to test be conceptually related. Over time, often through a long series of replications of the resulting predictive power, scholars accept a measure as valid. Then, to establish predictive validity, the primary measure (the measure in question) is used to comparatively generate evidence with some other measure for arguing in support of validation. However, at times the criteria (the predicted measure) may not conceptually relate to the primary measure (the predictive measure). For example, suppose a group of researchers is attempting to develop specific indicators, from past events, that have led to current communication behaviors of individuals alive today. Perhaps the past events are so old that there is no way to test the accuracy of the indicators' design—trying to test psychometric measures on 200 deceased individuals may not work as predicted. Conversely, future events may take too long to occur, thereby invalidating the original conceptualization of the relationship shared between the two (or more) independent measures.

Keith E. Dilbeck

See also Validity, Concurrent; Validity, Construct; Validity, Face and Content; Validity, Halo Effect; Validity, Measurement of

Further Readings

Aune, R. K., & Reynolds, R. A. (1994). The empirical development of the normative message processing scale. Communication Monographs, 61, 135–160.

Kennedy-Lightsey, C., & Martin, M. (2009, November). The predictive validity of the Interpersonal Attraction Scale (IAS) in the instructional context. Paper presented at the National Communication Association 95th Annual Convention, Chicago, IL.

Variables, Categorical

Categorical variables are variables whose values are assigned to distinct and limited groups or categories. A categorical variable takes on values in a set of categories, unlike a continuous variable, which takes on a range of values. Categorical variables are also called discrete or nominal variables. Categorical variables are common in the social and behavioral sciences. Groups or categories often


consist of numeric (e.g., female = 1, male = 0) or alphabetic (e.g., female, male) labels, and generally provide information that is not quantitative by nature. In their simplest form, categorical variables can be binary variables with only two options (e.g., "yes" or "no"). In more complex forms, categorical variables can represent a variety of options belonging to the same group or category, such as marital status, religion, or ethnicity. For example, marital status can be described in a variety of ways, such as single, married, divorced, or widowed.

It is important to note that numeric labels are simply labels and do not indicate that one category is more/less or better/worse than another. Numeric labels are simply codes that allow the researcher to conduct analyses, and do not reflect any rank ordering or quantities. Such codes are referred to as dummy codes, or the quantification of a variable, which allows the researcher to conduct analyses with numeric symbols taking the place of words.

Categorical scales are common in the social sciences for measuring attitudes, opinions, and behaviors, and have two primary levels of measurement: nominal and ordinal. When a variable has categories without a natural ordering (serving as labels only), the variable is nominal. Nominal measurement levels consist only of categorical variables that do not have higher or lower status than one another, so the order of listing is irrelevant. The marital status example described earlier, then, is considered a nominal categorical variable. Categorical variables at the nominal level should have no logical order, and should be exhaustive and mutually exclusive (each case to be categorized in the nominal measure must fall into only one of the categories). Besides gender, other commonly used nominal measures in survey research include race, religious affiliation, and political affiliation.
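The dummy coding described above can be sketched as follows; the marital status data are hypothetical. Each label becomes a 0/1 indicator column, and the numbers carry no rank or quantity:

```python
# Hypothetical nominal data for six cases
marital_status = ["single", "married", "divorced", "married", "widowed", "single"]

categories = sorted(set(marital_status))  # distinct, mutually exclusive labels

# One 0/1 indicator column per category: 1 if the case has that label, else 0
dummies = {
    cat: [1 if value == cat else 0 for value in marital_status]
    for cat in categories
}

print(dummies["married"])  # [0, 1, 0, 1, 0, 0]
print(dummies["single"])   # [1, 0, 0, 0, 0, 1]
```

In a regression model, one indicator column is conventionally dropped and used as the reference category so that the remaining columns are not perfectly collinear.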
Once all cases in the population are examined, cases sharing the same criteria are grouped into the same category and receive the same label (e.g., "female" or "male").

Categories can also be measured at an ordinal level when they can be naturally ranked (e.g., from lowest to highest), such as size or social class. Ordinal variables are ordered, but the distances between the categories are not known. Categorical variables at the ordinal level should also be mutually exclusive, and they should have

a logical order and be scaled according to the amount of the identified characteristic. For example, grades can be ordered A, B, C, D, and F, where an individual who earns an A achieves at a higher level than one who earns a B, and so on. As with all ordinal-level measurement, one should not assume the distance between an A and a B is the same as the distance between a B and a C. Researchers might also use Likert scales, for example, ranging from strongly disagree to strongly agree, indicating more/less agreement with statements. Ordinal categorical variables such as these create ordered comparisons that are meaningful; it makes sense to say that a person who earns an A has achieved at a higher level in a course than a person who earns a B. Categorical variables, though regularly presented in terms of verbal descriptors, can also be visually demonstrated through the use of charts or tables indicating category frequency. Bar and pie charts display data spatially in two different ways. Bar charts use bars of varying lengths that are proportional to the frequencies in each category. Typically, bar charts are constructed with an x-axis representing the category labels and a y-axis representing the frequency of cases in each category. Pie charts show proportionality by displaying categorical data in terms of a percentage or fraction of the total value of the category. Each category is represented by a portion of a circular-shaped graph (or "pie") representing a subset of the total. A contingency table, which utilizes cells containing frequency counts of outcomes for a sample, displays frequencies of characteristics and becomes more complex as more characteristics of the same objects are examined. Categorical variables can be used in statistical analyses as either explanatory (independent) or response (dependent) variables. Regression models can then measure how the mean of a response variable changes across the values of an explanatory variable.
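A contingency table of the kind described above can be built directly from raw observations. This sketch uses hypothetical data on two nominal variables and also computes the chi-square statistic commonly applied to such tables; the sample is invented for illustration:

```python
from collections import Counter

# Hypothetical sample: each tuple is one respondent measured on
# two nominal variables (sex, response).
data = [("female", "yes"), ("female", "no"), ("male", "yes"),
        ("male", "yes"), ("female", "yes"), ("male", "no"),
        ("female", "no"), ("male", "yes")]

counts = Counter(data)
rows = sorted({r for r, _ in data})   # ['female', 'male']
cols = sorted({c for _, c in data})   # ['no', 'yes']

# Contingency table: cells hold frequency counts of each combination.
table = [[counts[(r, c)] for c in cols] for r in rows]

# Chi-square statistic: compare observed cell counts with the counts
# expected if the two variables were independent.
n = sum(sum(row) for row in table)
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
chi_sq = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(len(rows)) for j in range(len(cols))
)
```

The same table drives the bar-chart frequencies discussed above; only the presentation differs.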
At the nominal level, the mode (the most frequently occurring category) is the only measure of central tendency appropriate for categorical variables, and chi-square is an appropriate test for nominal categorical data. At the ordinal level, appropriate correlation coefficients include the biserial correlation (used to calculate correlations between a continuous variable and a dichotomized variable) and Spearman's rho; when both variables are dichotomous, the phi coefficient is appropriate. Categorical variables are often used in t-tests, analysis of variance (ANOVA), multivariate analysis of variance (MANOVA), simple and multiple regression analysis, and discriminant analysis.

Kim K. Smith

See also Measurement Levels; Measurement Levels, Nominal/Categorical; Measurement Levels, Ordinal; Variables, Conceptualization; Variables, Continuous; Variables, Operationalization

Further Readings

Allen, M., Titsworth, S., & Hunt, S. K. (2009). Quantitative research in communication. Los Angeles, CA: Sage.
Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677–680.

Variables, Conceptualization

Conceptualization is the process of defining the meaning of the terms used in a study (i.e., defining them using concepts and words) based on previous scholarship. Utilizing prior research provides a basis for agreement on variable conceptualization within the field. However, when reviewing a scholarly article, it is common (and important) for researchers to critically question how something is conceptualized and how it is operationalized. This entry focuses on conceptualization in the variable construction process:

Conceptualization → Definition → Operationalization → Measurement

Researchers use various terminologies throughout the conceptual, theoretical, and empirical stages of research. Occasionally, these terminologies are used interchangeably; however, there are distinctions. Variables have conceptual definitions. These definitions include the elements of interest to researchers, which are specifically identified in the hypotheses and research questions.

Hypotheses are predictive statements about the relationship between the variables, and research questions are statements about the relationship between the variables without specific predictions (rather, they are more exploratory and descriptive in nature). These elements must be expressible as more than one value or category. For instance, in a study examining parental behaviors of both mothers and fathers, the researcher would have two variable elements. However, exclusively examining mothers does not allow the parental sex to vary and, therefore, it is a constant. If a variable has only one characteristic or a fixed value that does not change, it is referred to as a constant. This entry discusses the differences between conceptualization and concepts. It explains the derivation of concepts in a discussion of both direct and indirect observations. In addition, concepts and constructs are often used synonymously, and this entry clarifies the differences and similarities between those terms. Next, this entry further narrows variable conceptualization through a discussion of indicators, dimensions, and examples. Finally, this entry closes with a discussion of the distinction between inductive and deductive approaches.

Conceptualization and Concepts

Researchers start with concepts, which are mental images comprising observations, feelings, or ideas. These can be represented as objects (e.g., newspaper articles, Snapchat photos, or love letters), events (e.g., relationship turning points, childbirth, or onset of menopause), relationships (e.g., familial bonds, doctor–patient relationships, or Facebook friends), or processes (e.g., presidential speechmaking, CrossFit lifting techniques, or creating grandma's eulogy). Often these conceptions are ambiguous and imprecise notions that must be made more specific and precise. Thus, when a concept is conceptualized, researchers specify what they mean by a term, which makes it possible to communicate a working definition. The process of coming to agreement on what is specified by the terminology used is conceptualization, and the outcome is a concept. Conceptualization is a continual process for researchers. Therefore, the researcher should give focused emphasis on it during the beginning of the
research design, though it should be revisited throughout the data collection and result interpretation processes. Even when researchers begin to standardize a definition, or commonly utilize the identical or unchanged conceptualization, it still must undergo the same rigor. Pre-existing conceptualizations affirmed by the field, extensively tested, and adopted across numerous studies have advantages; nevertheless, nuances can occur and often multiple conceptualizations exist in a variety of contexts. Still, readers should look to the literature, to see how researchers are conceptualizing the variables, whether in disagreement or as standardized. An explicit variable definition should be present upon conceptualization.

Observation and Constructs

Abraham Kaplan identifies three classes of things that can be measured: direct observables, indirect observables, and constructs. The first class, direct observables or manifest constructs, are those things that can be simply and directly measured, such as particular demographics (e.g., height or weight), physical affection (e.g., hugs or kisses), or verbal fillers (e.g., huh, umm, or like). The second class, indirect observables or latent constructs, are subtle and indirectly measured, such as historical texts, diary notations, or secondary observations from interactions. The third class, constructs, consists of theoretical creations (traits, behaviors, or characteristics) that are not based on direct or indirect observation; rather, they are constructed so that they can be measured to test a theory. Examples include uncertainty, apprehension, and love. Although constructs are not observable, they help scientists communicate, organize, and study the world. A concept is a theoretical definition of a construct. Hence, variable conceptualizations comprise direct observations, indirect observations, or constructs. Regardless of how concepts are derived, concepts differ in their level of abstraction. This, in turn, affects the precision that researchers must provide in their conceptualization descriptions. For instance, relational satisfaction is a more abstract concept than relational length. An abstract concept typically will involve a greater degree of explanation to clarify ambiguity and position the
variable. In comparison, relational length is more concrete and relatively straightforward. Thus, it is essential to precisely identify the concepts and then determine appropriate indicators to represent the meaning.

Indicators and Dimensions

The process of conceptualization involves describing indicators, which are identified to mark the presence or absence of a concept. Oftentimes variables and indicators are used interchangeably or synonymously in research methods texts. A variable is often considered a more abstract conceptualization, whereas an indicator is considered more concrete. Both need to have multiple facets; otherwise, they represent a fixed value or constant. However, these are terms that can easily become muddled. After determining the presence or absence of an indicator, researchers determine the dimensions of the variable, which are specifiable aspects of a concept. When defining the variable, researchers must determine whether the variable is unidimensional or multidimensional. Specifying the unique dimensions allows for more complex understanding and refinement. Considering whether a variable should be conceptualized as unidimensional or multidimensional is an important step in defining the variable. For instance, when defining attraction, uncertainty, or health, researchers often view these constructs as multifaceted. A unidimensional variable investigates one comprehensive construct from a set of indicators, whereas multidimensional variables examine multiple facets of a concept. The multidimensional concept must include the description of the subconcepts under the larger definition. These subconcepts are relatively independent. For instance, when examining attraction, a researcher may include diverse dimensions such as emotional attraction, friendship attraction, and physical attraction. A statistical procedure, factor analysis, can help reveal whether a measurement should be represented as a unidimensional or multidimensional construct. The following is an example to help clarify variables, indicators, and dimensions and their nuances. Suppose a researcher is interested in illness uncertainty in cancer bloggers' threads.
The variable is illness uncertainty, and the indicator could be specific linguistic word choices. There are multiple ways to assess illness uncertainty; this is simply one indicator of whether uncertainty in illness is present or absent. Next, the researcher should determine whether to depict uncertainty in illness as unidimensional or multidimensional. Based on previous literature, Merle Mishel described uncertainty in illness as having three sources (i.e., medical, social, and personal). According to this theoretical framework, it may be important to examine the three sources of uncertainty as a multidimensional construct.
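One common heuristic associated with factor analysis, the Kaiser criterion (retain one factor per eigenvalue of the item correlation matrix greater than 1), can be illustrated with a hypothetical correlation matrix in which four items form two clusters. This is a sketch of the dimensionality logic, not a full factor analysis, and the correlations are invented:

```python
import numpy as np

# Hypothetical correlation matrix for four survey items: items 1-2
# correlate strongly with each other, as do items 3-4, suggesting
# two underlying dimensions (e.g., two sources of uncertainty).
R = np.array([
    [1.0, 0.8, 0.1, 0.1],
    [0.8, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.8],
    [0.1, 0.1, 0.8, 1.0],
])

eigenvalues = np.linalg.eigvalsh(R)   # ascending order
# Kaiser criterion: count eigenvalues greater than 1.
n_factors = int(np.sum(eigenvalues > 1.0))
```

With this block structure, two eigenvalues exceed 1, pointing toward a multidimensional (two-factor) representation.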

Inductive and Deductive Distinction

Inductive conceptualization is part of the process used to make sense of related observations. This approach is often utilized when little is known about the topic to be investigated. Induction begins with a systematic observation of the world (gathering data), notes patterns and idiosyncrasies, and develops toward a logical explanation or theoretical framework. A common approach utilizing this conceptualization process is grounded theory, which begins by examining study participants' knowledge and meaning and builds toward a description of the phenomenon under study. Thus, concepts emerge from what has been observed. In contrast, deductive conceptualization helps translate elements of the theoretical framework into specific variables that can be used to test hypotheses. This approach attempts to explain, predict, and control phenomena. Thus, deductive concepts are constructed from what is described in theory or what can be observed. Commonly, deductive conceptualization is conducted through experiments, questionnaires, surveys, interviews, and content analysis. Generally, inductive conceptualization is utilized in qualitative research methods, and deductive conceptualization in quantitative research methods. When reviewing research articles, it is important to delineate the variable formulation and implementation process, because these processes are represented differently in qualitative and quantitative scholarship and alter how variables are conceptualized. In scholarly qualitative-based research articles, the structure for conceptualization
is not formulaic. The process of depicting and refining develops as the participant experience is shown. The conceptualization is an interrelated process with operationalization and interpretation. At times, the purpose of qualitative scholarship may be to determine the conceptualization of a concept. Thus, it is not uncommon to alter variable conceptualizations throughout the process or even as a result of data collected. When reading a scholarly quantitative-based research article, the concepts and constructs (often utilized synonymously) are described in the literature review and are represented formulaically. These two components build understanding of conceptualized variables. Concepts and constructs are presented in competing or original definitions. In the literature review section of an article, researchers make arguments for or against particular conceptualizations. As these concepts and constructs are discussed, they are narrowed and specified for that particular study. For instance, does communication apprehension include fear, anxiety, or uncertainty? Readers then are able to assess the rationale for inclusion or exclusion of particular constructs that are represented as variables. Researchers present the conceptualized variables in their research questions and hypotheses. In the method section of an article, researchers present precise definitions about how the variables were observed and measured. In quantitative research, the researcher determines whether the variable conceptualized is either the independent (the manipulation or cause of change in the dependent variable) or dependent variable (effect or influence of change by the independent variable). Whether a research study is qualitative or quantitative, variable conceptualization begins the variable formation process of enabling data. The variable definition comes next, leading to a discussion of variable operationalization and eventually conducting measurement. 
Variable formulation begins the process of description, measurement, evaluation, and critique.

Leah LeFebvre

See also Factor Analysis; Grounded Theory; Induction; Measurement Levels; Validity, Construct; Variables, Defining; Variables, Dependent; Variables, Independent; Variables, Operationalization


Further Readings

Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth.
Baxter, L. A., & Babbie, E. (2004). The basics of communication research. Boston, MA: Wadsworth.
Kaplan, A. (1964). The conduct of inquiry. San Francisco, CA: Chandler.
Mishel, M. H. (1988). Uncertainty in illness. Journal of Nursing Scholarship, 20, 225–232. doi:10.1111/j.1547-5069.1988.tb00082.x
Mishel, M. H. (1990). Uncertainty in chronic illness. In J. J. Fitzpatrick (Ed.), Annual review of nursing research: Vol. 17 (pp. 269–294). New York, NY: Springer.

Variables, Continuous

Continuous variables are variables that can take on any value within a range. Continuous variables are also considered metric or quantitative variables, where the variable can have an infinite number of values between two given points. A variable is continuous if it is theoretically possible for members of the group to fall anywhere on a spectrum with small amounts of a characteristic on one end and large amounts of a characteristic on the other end. Continuous variables are often measured in infinitely small units. Many physical traits are considered continuous variables, along with psychological traits such as intelligence, extraversion, and creativity. The idea is that, no matter how similar two people are in extraversion, it is theoretically possible for a third person to have a level of extraversion between the other two people. To determine if a variable is continuous, there are two primary questions to ask: (a) If a small difference exists between two people in the group, could a third member of the group be positioned between the first two? (b) If so, could the third member be positioned between the first two no matter how small the difference between the first two? Continuous variables are different from categorical, or discrete, variables. Continuous variables are quantitative in nature, but not all quantitative variables are continuous. For example, if a parent has two children, it is not logically possible for a third child to be "in between" the
first two. However, height is considered a continuous variable. In a group of individuals where not all are equally tall, height is a variable. If two people in the group, Person A and Person B, are nearly the same height, consider whether a third person, Person C, could be taller than one but shorter than the other. Furthermore, could Person C's height fall between Persons A and B regardless of how small the difference between A and B? As long as Person A and Person B are not the same height, the variable is continuous. To illustrate, Person A could be 72.13 inches tall, and Person B could be 72.14 inches tall. Person C's height can take an infinite number of values between 72.13 and 72.14, so theoretically Person C could fit between A and B. Continuous variables are measured on interval or ratio scales (but not all interval or ratio scales are continuous). Interval scale data have equal distances between adjacent scale values but an arbitrary zero point, whereas ratio scale data have equal distances and a true zero point. Some continuous data are treated simply as interval scale variables, whereas others are treated as a continuous ordinal scale. In some cases, continuous data are given discrete values; for example, age is continuous, but age at the most recent birthday is a discrete value. In such situations, it is appropriate to treat the discrete values as continuous. Researchers are limited in their ability to record such values exactly because there are infinite possibilities. As a result, numbers are often rounded off to make the data easier to work with, which means the data are treated as discrete variables. An example of this rounding can be seen in grade point average (GPA). An individual can earn a 3.31 GPA, a 3.32, and so on. These numbers are rounded and assigned to a rank order scale of A, B, C, D, and F.
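The GPA example can be made concrete. The cut points below are hypothetical; the point is that distinct values of a continuous variable collapse into the same discrete category, which is the information loss described here:

```python
def letter_grade(gpa):
    # Hypothetical cut points for discretizing a continuous GPA
    # into a rank-ordered letter grade.
    cutoffs = [(3.5, "A"), (2.5, "B"), (1.5, "C"), (0.5, "D")]
    for cut, grade in cutoffs:
        if gpa >= cut:
            return grade
    return "F"

# 3.31 and 3.32 are distinct points on a continuous scale,
# but both collapse into the same discrete category ("B").
```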
It is important to remember that continuous variables are often reported with numbers containing one or two decimal places, but reporting in this way does not change the fact that these variables are continuous. Deciding how to treat continuous data is important for choosing statistical techniques. When values are treated as continuous, statistical analysis is more powerful because data are lost when continuous data are recorded in a range or rounded. Simple ratios can be treated as continuous. Continuous variables can be used in statistical analyses as either explanatory (independent) or response (dependent) variables. When a researcher
is comparing a continuous dependent variable with a continuous independent variable, linear regression is appropriate. Multiple regression is appropriate when the researcher is comparing two or more continuous independent variables with one continuous dependent variable. Researchers should generally use linear regression or analysis of covariance (ANCOVA) for a mixture of categorical and continuous variables.

Kim K. Smith

See also Measurement Levels, Interval; Measurement Levels, Ratio; Variables, Categorical

Further Readings

Allen, M., Titsworth, S., & Hunt, S. K. (2009). Quantitative research in communication. Los Angeles, CA: Sage.
Wrench, J. S., Thomas-Maddox, C., Richmond, V. P., & McCroskey, J. C. (2012). Quantitative research methods for communication: A hands-on approach (2nd ed.). Oxford, UK: Oxford University Press.

Variables, Control Control variables are the variables (i.e., factors, elements) that researchers seek to keep constant when conducting research. In a typical research design, a researcher measures the effect an independent variable has on a dependent variable. To properly measure the relationship between a dependent variable and an independent variable, other variables, known as extraneous or confounding variables, must be controlled (i.e., neutralized, eliminated, standardized). Although control variables are not the central interest of a researcher, they are paramount to properly understand the relationship between independent and dependent variables. If extraneous variables are not controlled in a research project, they can skew the results of a study. If used properly, control variables can help the researcher accurately test the value of an independent variable on a dependent variable. Therefore, controlling extraneous variables is an important objective of research design.

Control variables are often overlooked in research design, which not only can lead to confounding variables but also can adversely affect the internal and external validity of a study. Researchers should consider control variables as important as independent and dependent variables when designing a study. Without control variables, a researcher cannot make accurate claims about the impact of independent variables. Controlling for extraneous variables is particularly important when conducting research on human subjects. Because humans are complex beings, a number of confounding variables can affect the results of a study. Therefore, a researcher must eliminate any extraneous variables so dependent and independent variables can be isolated and quantified precisely.

Examples of Control Variables

To further explore how control variables can be used in research, three different examples will be examined. Each example is related to communication research in higher education. The first example explores how extraneous variables can be controlled when examining how new technology can affect student learning. If, for example, a researcher is interested in testing the effects of e-readers on reading comprehension in a particular course, the researcher would want to control for any other factors that might affect reading comprehension. In this example, e-readers are the independent variable and reading comprehension is the dependent variable. To accurately understand the effects of an e-reader on comprehension, a researcher might want to control for variables such as content knowledge and intelligence. This would ensure that the researcher knows the extent to which the independent variable, and not confounding variables, affects the dependent variable. A second example examines how control variables can help researchers understand the relationship between cocurricular activities and communication competency. If conducting research on the relationship between a convocation program at a university and intercultural communication competency, a researcher would need to control for variables that could affect communication competency. In this instance, the dependent variable is intercultural communication competency, and the
independent variable is a convocation program. If not controlled, variables such as experience traveling abroad or a course in intercultural communication might confound the results because of their influence on the dependent variable. The third example examines the topic of student retention. In this example, a researcher is exploring how first-year seminars (independent variable) affect student retention (dependent variable). To ensure that this relationship is truly being examined, a researcher would need to control for other factors that might lead to student retention. Therefore, control variables in this experiment would be factors such as ACT/SAT scores, student housing, and involvement in sororities and fraternities.
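One common way to hold an extraneous variable constant statistically is to include it as an additional predictor in a regression model. The sketch below fabricates noise-free data loosely based on the retention example, so the variable names and coefficient values are purely illustrative, not empirical results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
sat = rng.uniform(900, 1500, n)                 # extraneous variable to control
seminar = rng.integers(0, 2, n).astype(float)   # independent variable (0/1)

# Fabricated outcome depending on both predictors (no noise, for illustration):
retention = 1.0 + 0.5 * seminar + 0.001 * sat

# Multiple regression via least squares: [intercept, seminar, sat].
X = np.column_stack([np.ones(n), seminar, sat])
coef, *_ = np.linalg.lstsq(X, retention, rcond=None)
# coef[1] estimates the seminar effect with SAT statistically held constant.
```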

How Variables Are Controlled in Research

A researcher should always keep potential extraneous variables in mind when designing a study and interpreting findings. Controlling for extraneous variables can be accomplished during the research process (experimental control), or it can be accomplished statistically during data analysis (statistical control). There are a variety of ways a researcher can control for extraneous variables while designing and conducting research. First, researchers can control for confounding variables caused by differences among subjects. To ensure that results are not skewed by differences among participants, researchers can ensure that variables (e.g., age, sex, history) are held constant either during the research process or in the statistical analysis. Second, researchers can control for confounding variables during the data collection phase of research. This can be accomplished by providing blind treatment, randomized control groups, and by creating treatment and control groups. These steps ensure that bias or the placebo effect does not affect the independent variable. Third, researchers can control for confounding variables by ensuring that the setting of research remains constant for all participants. This can be accomplished by ensuring that each participant receives the same instructions in the same manner. In addition, situational variables such as temperature and time of day must be considered when administering research. Situational considerations
will vary, depending on the purpose of a research project and the subsequent independent and dependent variables that are being tested. To statistically control variables, a researcher can mathematically isolate, standardize, neutralize, remove, or suppress variables during the analysis stage of research. This type of control can be utilized to supplement other control measures or when controlling variables during the research project is not possible. To conclude, controlling for extraneous variables is central to communication research design. Regardless of the dependent and independent variables, one must ensure that confounding variables are eliminated from data analysis. Whether one controls extraneous variables through experimental control or statistical control, control variables are necessary to accurately understand the relationship between dependent and independent variables.

Nathan G. Webb

See also Control Groups; Experiments and Experimental Design; Internal Validity; Primary Data Analysis; Variables, Dependent; Variables, Independent

Further Readings

Breaugh, J. A. (2006). Rethinking the control of nuisance variables in theory testing. Journal of Business and Psychology, 20, 429–443. doi:10.1007/s10869-005-9009-y
Carlson, K. D., & Wu, J. (2012). The illusion of statistical control: Control variable practice in management research. Organizational Research Methods, 15, 413–435. doi:10.1177/1094428111428817
Spector, P. E., & Brannick, M. T. (2011). Methodological urban legends: The misuse of statistical control variables. Organizational Research Methods, 14, 287–305. doi:10.1177/1094428110369842

Variables, Defining

Variable definitions describe what researchers mean when they identify their variables. These are explicitly and precisely written definitions of variables delineated for readers to pinpoint the construct under consideration in the literature review.


This entry focuses on the definition in the variable construction process:

Conceptualization → Definition → Operationalization → Measurement

To begin, this entry discusses what variable definitions offer to researchers and the importance of establishing well-thought-out definitions. Next, this entry describes the distinctions between real, nominal, and operational definitions. Finally, this entry provides a discussion of exploratory, descriptive, and explanatory studies and how the variable definition process differs for each of these research types.

Overview

Variables must have conceptual or theoretical definitions. These definitions include elements of interest to researchers. A concept, or mental image of a similar set of ideas, observations, or feelings, must be defined. Defining the meaning of a variable connects ideas to concrete observations. Concepts can be utilized in everyday conversation with vague and common agreements about their usage. Frequently, however, concepts may be under debate, may have changed over time, may be undefined, or may have multiple definitions. For all these reasons, it is helpful to define variables to distinguish what a particular study is analyzing. In addition, it is unlikely that readers have the same base knowledge about a topic, share particular or common definitions, or utilize the same contemporary definition. Researchers must take a variable definition beyond laypersons' commonsense view of it. Defining variables requires an explicit definition before the variables are utilized in the research. Explicitly defining variables enables researchers to provide a common base for their argument. Researchers cannot count on readers knowing exactly what they mean. It is important to define concepts clearly, especially when they are abstract, unfamiliar, or new. In their introduction and literature review, researchers need to specifically and clearly define what they mean when they use a concept. To define a variable, an explanation must originate from theory and/or previous definitions
based in prior scholarship first through the conceptual or theoretical definitions. Researchers must determine which definitions are important to the phenomenon being examined and how it relates to their theoretical framework. When defining the variable and the concepts that underlie its foundation, it should be related to the theory as well as to other concepts. The shared understanding of one concept relies on the understanding of other related concepts. Nonetheless, the definition should describe a variable that can be distinguished from things that are and are not related to that particular concept.

Real, Nominal, and Operational Definitions

When conceptualizing, it is important to consider the three types of definitions: real, nominal, and operational. The real definition reflects the actual essence or real manifestation of a construct. For instance, to think of a straight line is to think of an object and what it is. This definition often leads to confusion by mistaking the real entity for a concept of something; thus, nominal definitions are used to specify and help state what a word means. The nominal definition originates from a consensus or common convention (e.g., any dictionary definition) that describes how a concept should be utilized. This definition does not necessarily signify anything meaningful but rather focuses the observation. The operational definition is nominal rather than real; moreover, it provides clarity as to what the variable represents for that study's purpose (e.g., identifying which exact measurement will assess that variable). This is unambiguous and allows readers to understand what specifically is being examined, what results are reported, and how to replicate the study. Operational definitions are made based on the purposes of the study. Researchers rely on operationalization to specify concrete steps or processes for creating and measuring a variable. By specifying what data represent a variable, the criteria are created for inclusion and exclusion of what does and does not represent the variable. In the method section of a research report, researchers should specifically define, or operationalize, the variables present within the study.
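An operational definition can often be written down literally as a measurement procedure. The sketch below treats a hypothetical "communication apprehension" score as the mean of three 5-point Likert items, with the third item reverse-scored; the items and the scoring rule are invented for illustration, not drawn from any published instrument:

```python
def communication_apprehension(item1, item2, item3):
    """Hypothetical operational definition: the mean of three
    5-point Likert items, with item 3 reverse-scored."""
    reversed_item3 = 6 - item3   # reverse-score on a 1-5 scale
    return (item1 + item2 + reversed_item3) / 3
```

Writing the definition as code makes the inclusion/exclusion criteria explicit and the measurement replicable: anyone applying the same function to the same responses obtains the same score.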


Exploratory, Descriptive, and Explanatory Studies

Concepts build the foundation for variable formations that are specifically identified in the research questions or hypotheses, or both. Variables comprise concepts and constructs that must be expressed as more than one value or category and are represented differently in exploratory, descriptive, or explanatory studies. Exploratory studies examine a new area and may not have a prescribed definition that exists in the literature. Therefore, exploratory studies rely on other similar concepts (e.g., phubbing as a term for snubbing someone with the phone). Descriptive studies answer questions about what, when, and how to describe what is observed; these studies necessitate clear and precise definitional articulations. Descriptions alter the examination of the variables, such as describing marital conflict, fighting, or stress. Explanatory studies ask questions about why and seek causality; therefore, these definitions and subsequent results are less ambiguous and often dependent on preestablished literature in testable hypotheses (e.g., women self-disclose more than men). In this explanatory example, self-disclosure has numerous studies that reinforce the definition and provide causality from the variable definition.

Leah LeFebvre

See also Hypothesis Formulation; Measurement Levels; Research Question Formulation; Variables, Conceptualization; Variables, Dependent; Variables, Independent; Variables, Operationalization

Further Readings

Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth.
Shoemaker, P. J., Tankard, J. W., Jr., & Lasorsa, D. L. (2004). How to build social science theories. Thousand Oaks, CA: Sage.

Variables, Dependent

A dependent variable is dependent upon the presence or absence of an independent variable. The dependent variable is what the researcher is trying

to measure or explain, and is the object of the research; thus, it is sometimes called the outcome variable. Dependent variables are most often represented in quantitative research where the focus is on using defined variables to measure outcomes. This entry describes the nature of the relationship between dependent and independent variables, offers examples of such relationships, and indicates types of statistical tests used to examine dependent variables. A researcher's goal is to determine if and to what extent changes in one variable influence certain types of changes in another, the dependent variable. A researcher will manipulate independent variables to see how dependent variables respond; the dependent variable represents the measurable outcome of this manipulation. A researcher will measure the dependent variable to determine whether and how much it changes. The researcher's goal is to accurately predict how the dependent variable will change in the presence of the independent variable. To do so, the researcher must find a significant relationship between independent and dependent variables, represented by correlation or causal claims. In graphical representations of these relationships, the dependent variable is generally put on the y-axis. For example, the amount of sleep an individual gets before a test could influence the grade a student receives on that test. In this case, the researcher is looking to see how much the test grade depends on the amount of sleep. It is important to remember that a variable isolated as dependent in one research study is not necessarily a dependent variable in other research designs. In the example, the test grade was dependent on the amount of sleep. However, if a researcher wanted to know if test grades influenced self-esteem, the test grade would become an independent variable and level of self-esteem dependent on the grade. Research design determines which variables are independent and which variables are dependent.
To measure dependent variables, researchers must determine the type of variable (e.g., ordinal, interval) and determine appropriate statistical tests. Researchers looking for relationships between one independent and one dependent variable, such as the sleep and test score example, have a number of options for testing the significance of the relationship. Some research designs


are more complex and feature more than one independent or dependent variable. For example, some research designs call for a test of the difference in a continuous dependent variable between the two levels of a single categorical independent variable, for which a researcher might use a t-test. Sometimes research designs call for researchers to analyze differences in a dependent variable between two or more groups. To make such comparisons, the researcher would use analysis of variance (ANOVA) procedures, such as one-way ANOVA to analyze one categorical independent variable among three or more groups with one continuous dependent variable. Or, a researcher could use multivariate ANOVA (MANOVA) to determine how one or more independent variables influence one or more dependent variables. Finally, a researcher looking to estimate how much of the variance in a dependent variable is accounted for by variance in a predictor variable would use multiple regression. There are numerous statistical tests a researcher can use to determine the relationship between independent and dependent variables; the appropriate test depends on the specifics of the research design.

Kim K. Smith

See also Multiple Regression; Multivariate Analysis of Variance (MANOVA); One-Way Analysis of Variance; t-Test; Variables, Categorical; Variables, Continuous; Variables, Independent
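As a rough illustration of this test-selection logic, the following sketch computes an independent-samples t statistic for two sleep groups and a one-way ANOVA F statistic for three, using the sleep-and-test-score example. The scores are invented for illustration; a real analysis would also look up p values for these statistics.

```python
# Hypothetical test scores grouped by hours of sleep before the exam.
from statistics import mean, variance

def two_sample_t(a, b):
    """t statistic (pooled variance): a continuous dependent variable
    compared across the two levels of one categorical independent variable."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def one_way_f(*groups):
    """One-way ANOVA F statistic: one categorical independent variable with
    three or more groups, one continuous dependent variable."""
    scores = [x for g in groups for x in g]
    grand = mean(scores)
    k, n = len(groups), len(scores)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

low_sleep = [62, 70, 58, 65, 61]
mid_sleep = [70, 72, 69, 75, 71]
high_sleep = [78, 85, 74, 80, 83]

print(two_sample_t(high_sleep, low_sleep))          # two groups -> t-test
print(one_way_f(low_sleep, mid_sleep, high_sleep))  # three groups -> ANOVA
```

With these invented numbers, both statistics are large, which is what a researcher would expect when sleep strongly predicts the test grade.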

Further Readings

Allen, M., Titsworth, S., & Hunt, S. K. (2009). Quantitative research in communication. Thousand Oaks, CA: Sage.

Variables, Independent

Independent variables influence or cause variation in another variable (referred to as the dependent variable). Independent variables are isolated, manipulated, and controlled by a researcher to see how the manipulation affects the dependent variable. Independent variables themselves are not affected or changed by other variables; their

existence is not dependent on other variables in the proposed relationship. Also referred to as a predictor variable, the independent variable is an essential component of quantitative research designs, as it is the catalyst for the measurable changes in a research study. This entry describes qualities of independent–dependent variable relationships, provides basic examples, and lists common statistical approaches for variable relationship analysis. To test study predictions, a researcher looks to see if a dependent variable changes based on the presence or absence of an independent variable. Most quantitative research studies seek to determine if an independent variable influences, either directly or indirectly, the dependent variable. Independent variables often occur earlier in time than dependent variables and are generally represented graphically on the x-axis. The researcher's goal is to accurately predict how the dependent variable will change in the presence of the independent variable. To do so, the researcher must find a significant relationship between independent and dependent variables, represented in the form of correlation (connection) or cause–effect claims. Commonly cited examples of independent variables include age, sex, race, education, and income; these attributes are measured rather than manipulated by the researcher. For instance, a researcher might look to see if age influences the extent to which an individual uses social media applications; in this case, the independent variable of age is proposed to have an effect on the dependent variable of social media use. Age is isolated by the researcher as a predictor of social media use behaviors, which may vary. A second example could use sex as a predictor variable; a researcher could isolate sex as an independent variable to determine the difference between levels of self-disclosure in men and women. It is important to remember that a dependent variable in one study can serve as an independent variable in other research studies.
For example, a researcher could consider how self-disclosure influences feelings of closeness in a relationship. In this case, self-disclosure is the independent variable that predicts how close individuals feel in a relationship. Research design determines which variables are independent and dependent. To measure relationships between independent and dependent variables, researchers must determine appropriate statistical tests. Researchers


looking for relationships between one independent and one dependent variable, such as the age and social media example, have a number of options for testing the significance of the relationship. Some research designs are more complex and feature more than one independent or dependent variable. For example, some research designs call for a test of the difference in a continuous dependent variable between the two levels of a single categorical independent variable, for which a researcher might use a t-test. Researchers often use analysis of variance (ANOVA) procedures, such as one-way ANOVA, to analyze one categorical independent variable among three or more groups with one continuous dependent variable. Research designs with two or more categorical independent variables are analyzed with a factorial ANOVA, and to determine how one or more independent variables influence one or more dependent variables, multivariate ANOVA (MANOVA) is utilized. Finally, a researcher looking to estimate how much of the variance in a dependent variable is accounted for by variance in an independent variable would use multiple regression. There are numerous statistical tests a researcher can use to determine the relationship between independent and dependent variables; the appropriate test depends on the specifics of the research design.

Kim K. Smith

See also Factorial Analysis of Variance; Multiple Regression; Multivariate Analysis of Variance (MANOVA); One-way Analysis of Variance; t-Test; Variables, Categorical; Variables, Continuous; Variables, Dependent
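The multiple-regression case described in this entry, estimating how much variance in a dependent variable is accounted for by independent variables, can be sketched as follows. The data are simulated, and the variable names (age, education, social media use) are illustrative only.

```python
# Simulated example: regress weekly social media use on age and education,
# then compute R-squared, the proportion of variance explained.
import numpy as np

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(18, 65, n)         # independent variable 1
education = rng.uniform(8, 20, n)    # independent variable 2
# Dependent variable, built so age lowers use and education raises it slightly:
use = 30 - 0.3 * age + 0.5 * education + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), age, education])  # intercept + predictors
coef, *_ = np.linalg.lstsq(X, use, rcond=None)     # ordinary least squares

predicted = X @ coef
ss_res = np.sum((use - predicted) ** 2)
ss_tot = np.sum((use - use.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(coef)       # intercept and slopes for age and education
print(r_squared)  # share of variance in use explained by the predictors
```

Because the data were generated with known slopes, the regression recovers them closely, and R-squared summarizes how much of the outcome's variance the two predictors account for together.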

Further Readings

Allen, M., Titsworth, S., & Hunt, S. K. (2009). Quantitative research in communication. Thousand Oaks, CA: Sage.

Variables, Interaction of

Interaction of variables refers to the interplay of at least two independent variables on a third

dependent, or outcome, variable. Interaction of variables can be considered in the context of different statistical tests including analysis of variance (ANOVA), multivariate analysis of variance (MANOVA), multiple linear regression analysis, and multilevel modeling. When there is an interaction of variables, it is referred to as an interaction effect. In the context of ANOVA, an interaction effect occurs when the effect of one independent variable on one dependent variable differs depending on the levels of a second independent variable. In the context of regression analysis, an interaction effect occurs when the impact of one independent variable varies over the range of the other independent variable. Examining data for interaction effects is important because it allows the researcher to gain a more complex understanding of how the variables relate to one another and to the outcome of interest. Although the presence of interaction effects can make it more challenging to predict outcomes, it allows the researcher to tell a more comprehensive story about what is “happening” in the data. This entry describes interaction of variables (i.e., interaction effects). To accomplish this, it first walks through a simple example of an interaction of variables in ANOVA and provides a visual representation of an interaction effect. Second, this entry introduces interaction in the context of multiple linear regression.

An Example: Interaction of Variables in ANOVA

Imagine that you are interested in determining the best way to do your laundry. You decide that the cleanliness of your clothes is your outcome variable (dependent). Cleanliness is scored on a scale from 1 to 10, where 1 is least clean and 10 is most clean. You have separated your clothing into two different kinds: not very dirty clothing (NVD) that you wore when you were at school/work and very dirty clothing (VD) that you wore to exercise and complete outdoor chores. You also have recently purchased two different types of laundry detergent: Detergent X has few chemicals and Detergent Y has lots of chemicals. You might ask three different questions based on this scenario:


1. Is there a difference in how clean your clothes are depending on whether they are NVD or VD at the start (ignoring type of laundry detergent used)?

2. Is there a difference in how clean your clothes are depending on whether they are washed using Detergent X or Detergent Y (ignoring how dirty the clothes were at the start)?

3. Does the effect of being washed using Detergent X or Detergent Y differ based on whether the clothes were NVD or VD at the start?

If you ask question 1 or question 2, then you are interested in main effects (i.e., the effect of one independent variable on a dependent variable). If you ask question 3, then you are interested in the interaction of variables. In other words, you are asking if the effect of one independent variable (how dirty the clothes are to start: NVD or VD) on a dependent variable (cleanliness of clothes) depends on the level of a second independent variable (type of laundry detergent used). It would be a mistake to only consider each variable individually because it may be a combination of variables that achieves the desired result (in this case, the cleanest clothes). After 200 test washes, you report the data as shown in Table 1, whereby each cell represents the mean cleanliness of clothes based on 50 washes (e.g., the average cleanliness of NVD clothes washed in Detergent X is 7 out of 10).

Table 1  Data for Laundry Experiment

        Detergent X    Detergent Y
NVD          7              10
VD           8               4

It is helpful to plot the means for your data on a line graph to visualize the presence of an interaction (Figure 1). The two lines in Figure 1 are not parallel, in fact they cross, and this indicates an interaction effect (note that an ANOVA would be run in a statistical software package to test for statistical significance). Notice that Detergent X works well for cleaning both NVD and VD clothing. However, Detergent Y does a great job cleaning NVD clothing but a poor job cleaning VD clothing. In other words, the effect of type of laundry detergent on cleanliness depends on how dirty the clothes were at the start (NVD or VD). Interaction of variables in ANOVA is based on categorical independent variables. If you have continuous independent variables, you might use a regression analysis to look for possible interaction of variables.

Figure 1  Interaction of Variables in ANOVA (line graph of mean cleanliness of clothing, scale 0 to 12, by type of detergent, with separate lines for NVD and VD clothing)
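With the cell means from Table 1, the interaction can be checked with a short calculation: in a 2 x 2 design, the interaction is the "difference of differences" between cell means, and a nonzero value corresponds to nonparallel lines. This only describes the pattern; a full ANOVA on the 200 raw washes would still be needed to test significance.

```python
# Cell means from Table 1 of the laundry example.
means = {
    ("NVD", "X"): 7, ("NVD", "Y"): 10,
    ("VD", "X"): 8,  ("VD", "Y"): 4,
}

# Simple effect of detergent within each dirtiness level:
effect_nvd = means[("NVD", "Y")] - means[("NVD", "X")]  # 10 - 7 = +3
effect_vd = means[("VD", "Y")] - means[("VD", "X")]     #  4 - 8 = -4

# Interaction = difference of the simple effects; zero would mean
# parallel lines (no interaction).
interaction = effect_nvd - effect_vd
print(interaction)  # 7, far from zero: the detergent effect depends on dirtiness
```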

Interaction of Variables in Multiple Linear Regression

In multiple linear regression, interaction refers to differences in the magnitude of the influence of one independent variable (X) on a dependent variable (Y) as a function of a second independent variable (Z). Put differently, an interaction effect occurs when the importance of one independent variable varies over the range of the other independent variable. Although it is beyond the scope of this entry to detail the process of examining data for interaction effects in regression, the basic steps include (a) creating an interaction term by multiplying one independent variable (X) by the other independent variable (Z), (b) adding the interaction term to the standard regression equation, (c) plugging specific values of Z into the regression equation and solving, and (d) plotting the resulting regression lines on a graph. The goal of this process is to determine whether the slopes of the lines differ from one another, which will indicate the presence of an interaction effect.

Patricia Gettings
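The four steps just outlined can be sketched with simulated data: create the X*Z product term, add it to the regression, and then solve for the slope of X at chosen values of Z (the simple slopes that would be plotted). All data and coefficient values here are invented for illustration.

```python
# Simulated interaction in regression: the true effect of X on Y is 1 + 2*Z.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(0, 1, n)
z = rng.normal(0, 1, n)
y = 3 + 1 * x + 0.5 * z + 2 * x * z + rng.normal(0, 1, n)

# Steps (a) and (b): build the X*Z term and add it to the regression equation.
design = np.column_stack([np.ones(n), x, z, x * z])
b0, b1, b2, b3 = np.linalg.lstsq(design, y, rcond=None)[0]

# Step (c): plug specific values of Z in; the slope of X given Z is b1 + b3*Z.
for z_val in (-1, 0, 1):
    print(f"Z = {z_val:+d}: slope of X on Y = {b1 + b3 * z_val:.2f}")
# Step (d): plotting these three regression lines would show unequal slopes,
# indicating the presence of an interaction effect.
```

Because the simulated interaction coefficient is 2, the estimated slope of X roughly triples as Z moves from -1 to +1, which is exactly the pattern the plotted lines would reveal.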

See also Factorial Analysis of Variance; Multiple Regression; Variables, Categorical; Variables, Continuous; Variables, Moderating Types

Further Readings

Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Thousand Oaks, CA: Sage.
Bauer, D. J., & Curran, P. J. (2005). Probing interactions in fixed and multilevel regression: Inferential and graphical techniques. Multivariate Behavioral Research, 40, 373–400.
Keppel, G., & Wickens, T. D. (2004). Design and analysis: A researcher's handbook (4th ed.). Upper Saddle River, NJ: Pearson/Prentice Hall.

Variables, Latent

Latent variables are variables that are unobserved—at least not directly. Also sometimes referred to as hidden variables, unmeasured variables, factors, hypothetical variables, or hypothetical constructs, latent variables are measured in practice not by a single variable, but by multiple observed (or manifest) variables. Their use in communication, and in the social sciences more generally, is ubiquitous. For instance, communication researchers are frequently interested in the influence of personality or other psychological traits on the reception of different kinds of messages—a perennial concern for students of attitude change and persuasion. With few exceptions, however, complex psychological or attitudinal traits of the kind that often serve as a filter for messages received from the mass media or through an individual's social environment are not adequately captured by a single, readily observed indicator. To get at these underlying, latent concepts, researchers must engage in efforts to infer their existence from analyses of variables that can be observed—much in the same way as physicists infer the existence of far-away planets by observing the "wobble" induced by the gravitational pull of these unobserved celestial bodies on neighboring stars. In the social sciences, making inferences about the existence of latent constructs often necessitates that

researchers employ large batteries of variables culled from surveys or multiple, imperfectly measured indicators from other sources. Apart from providing a partial solution to the problems inherent in capturing unobserved and perhaps even unobservable concepts, latent variables are also used in a purely predictive capacity and for data reduction—stripping down large data sets for the sake of parsimony and tractability. This entry provides an overview of the different ways in which latent variables have been conceptualized and employed in communication research. It also touches upon some of the philosophical underpinnings of latent variables and briefly introduces several methods for measuring latent variables or statistical techniques that incorporate measures of latent variables into a broader framework.

Latent Variables as Hypothetical Constructs and as Unmeasurable Constructs

Unlike distant planets—entities that are known to exist and are, in principle, observable—there are a number of constructs that may "exist" only in the minds of researchers. Examples of such constructs often include concepts such as self-esteem and self-efficacy. The position that latent variables are not real is most closely associated with constructivism. Realists, on the contrary, do not question the existence of such constructs, but merely view them as unmeasurable. There is a divide within this latter camp between those who view latent constructs as unmeasurable in principle and those who believe them to be unmeasurable at present but, in principle, measurable at a later point in time. While it is not possible to observe distant planets directly at the current juncture, advances in technology may one day endow us with the ability to observe these celestial bodies with our own eyes. Many scholars have noted, though, that the distinction between latent or unobserved variables and observed or manifest variables is not as simple as it might seem at first blush. For instance, although the sex of an individual (whether male or female) would appear to be something that researchers can easily observe, even such a seemingly straightforward concept may carry additional meaning. The real world is complex, and


even variables that many would be tempted to label as a trait that is inherent in an individual—such as sex—might actually be more accurately characterized as a construct that exists only in the mind of the researcher.

Latent Variables as a Means of Data Reduction or Prediction

An alternative view of latent variables is very much an instrumentalist one. That is to say that it conceptualizes latent variables as useful tools for dealing with data, rather than as a means to tap into or operationalize a concept that is unmeasured or perhaps even unmeasurable. In this sense, latent variables are simply a means to a given end. Often that end is greater parsimony and empirical tractability. Researchers are frequently confronted with large amounts of data—surveys that contain hundreds if not thousands of individual variables. Many such variables will purport to measure the same object of interest. Patriotism and nationalism, for instance, are concepts that are often measured using a battery of 120 different questions or items. While each item may contribute some amount of explanatory power to models developed to examine the impact of patriotism and nationalism on a host of outcomes—including one's tolerance toward people from another country—it is simply not practical to include each item as an explanatory variable in such a model. Latent variables offer a way to summarize large volumes of data, effectively reducing the dimensionality of a given data set and aiding the researcher in his or her quest to explain a given outcome with reference to the fewest factors, while preserving as much information as possible. Prediction is often of paramount importance to those adopting the instrumentalist view of latent variables, although such an approach frequently leaves the researcher with little understanding of the mechanisms that might be driving the observed relationship between a latent variable and a particular dependent variable.

Methods of Extracting Latent Variables From Observed Variables

A variety of statistical models can be used to infer the existence of latent variables using items that

can be observed. Researchers commonly employ factor analysis in extracting latent variables from groups of observed variables. Factor analysis belongs to a larger class of what are known as measurement models. Other kinds of measurement models that are often used in extracting latent variables include latent trait models and item response models. The combination of factor analysis with path models—structural equations that describe the relationship between latent variables—is called structural equation modeling.

Jacob R. Neiheisel

See also Errors of Measurement; Factor Analysis; Reliability of Measurement; Structural Equation Modeling
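As a simplified illustration of the extraction idea, the sketch below simulates four survey items driven by a single latent trait and then recovers that trait from the first principal component of the item correlation matrix. This is a bare-bones stand-in for proper factor analysis (which dedicated software would perform), and all loadings and sample sizes are invented.

```python
# Recovering a simulated latent trait from four observed indicators.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
latent = rng.normal(0, 1, n)               # the unobserved trait
loadings = np.array([0.9, 0.8, 0.7, 0.6])  # how strongly each item taps it
# Observed items = loading * trait + item-specific noise:
items = latent[:, None] * loadings + rng.normal(0, 0.5, (n, 4))

# First principal component of the item correlation matrix:
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)    # eigh sorts eigenvalues ascending
first = eigvecs[:, -1] * np.sign(eigvecs[:, -1].sum())  # fix arbitrary sign

scores = items @ first                     # crude factor-score estimate
print(np.corrcoef(scores, latent)[0, 1])   # how well the trait is recovered
```

With four reasonably reliable indicators, the composite score correlates very highly with the simulated trait, which is the logic behind using multiple imperfect indicators to measure one latent variable.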

Further Readings

Bartholomew, D. J., Knott, M., & Moustaki, I. (2011). Latent variable models and factor analysis: A unified approach (3rd ed.). West Sussex, UK: Wiley.
Bollen, K. A. (2002). Latent variables in psychology and the social sciences. Annual Review of Psychology, 53, 605–634.
Borsboom, D. (2008). Latent variable theory. Measurement, 6, 25–53.
Borsboom, D., Mellenbergh, G. J., & Van Heerden, J. (2003). The theoretical status of latent variables. Psychological Review, 110(2), 203–219.

Variables, Marker

Marker variables are variables that are used to indicate some other feature. Often, the variable of interest is not directly observable; instead, a marker believed to indicate the existence or level of the variable is used. For example, biological gender (male and female) is often used to represent differences in socialization that a child experiences. Consider a person stating that he or she has the "flu." If one asks the person, "How do you know that?" he or she may list a number of symptoms (markers)—raised temperature, body aches, fatigue—all of which are indicative of someone infected with an influenza virus. The person has not directly tested or provided direct proof that the condition is the flu, but the set of


markers that exist permit a reasonable inference that the person is suffering from the flu. The reason that biological gender serves as a "marker" variable is that no direct measurement exists for the process of socialization. Instead, the assumption made about gender involves differences between how little girls and little boys are socialized. The belief that the process is different creates an expectation that the mark or measure of the difference can be inferred or realized on the basis of measuring the biology of the person. The assumption of the feminization of girls and the masculinization of boys by the social system serves as the reason for expecting gender differences between men and women. A similar logic applied to the original measurement of human immunodeficiency virus (HIV). The measures for the existence of the virus did not measure the level of virus in the body; instead, the measures tested for the existence of the antibodies to the virus found in the blood. The argument for the use of this measure is that only a person exposed to the virus, with the virus in the blood, would have generated any antibodies to the virus. The use of a marker variable is a means to indicate a feature not capable of direct measurement but believed to exist. According to this theory, the presence of particular antibodies in a person indicates exposure to a particular virus. HIV is a virus for which there is no known cure, so a person is considered actively infected with the virus if antibodies are present. Marker variables also act as a means to make inferences, much like tracking wild animals, whereby a person tracking animals relies on marks left by the animals to determine what kind of animal it is, in which direction it is moving, how recently it moved through the area, how big it is, and other features of the animal and the circumstances. The question is always how indicative and accurate the marks left behind are, as interpreted by the person making the observation.
The determination of whether the marker indicates the outcome remains an important element of examining the underlying relationship. The less accurate the marker, the weaker one's ability to make accurate claims about the relationship that is directly sought for measurement. Marker variables permit the substitution of an available and easy-to-implement measurement

tool for a target variable whose measurement is not easy or simple. For example, a single item asking a person to indicate biological gender as a marker of the socialization process is far simpler than articulating and developing a set of scales designed to assess the process of socialization based on gender. The efficiency of this substitution, combined with easy availability, makes the use of marker variables a procedure often used in research. The prerequisite of an underlying theory about how the process or marks are created provides the basis for understanding how much trust the inference deserves. A challenge to the underlying theory or inference-making process about the connection between the marker variable and the feature supposedly marked by the indicator calls into question the entire line of research using that method. For example, if one supposes that sexual orientation, such as homosexuality (a preference for sex with the same gender), reflects a choice generated from a socialization process, then the ability to reeducate or change the preference exists. The desire or preference expressed by the person creates a marker of some element of the environment that may need to be evaluated. However, if one believes that sexual orientation reflects a biological or genetic basis, then sexual orientation serves as a marker for some existing biological structure. The argument does not consider the marker, the behavior under question; instead, the marker is interpreted based on the theoretical assumption about how the marker becomes generated. If one assumes that sexual orientation is genetic (biological), then the use of any type of behavioral therapy or education would probably generate a low level of success because the underlying mechanism or cause is misunderstood. The controversy or disagreement over what a marker indicates can reflect an underlying theoretical disagreement that requires empirical examination and resolution.
Use of marker variables has come under criticism because the need to eventually test the assumption of connectivity to the underlying cause remains a limitation. The more difficult the true variable under consideration is to measure, the greater the difficulty in assessing the entire body of research that employs a specific marker. One challenge to the process essentially undermines the entire body of


literature because if the validity of the marker becomes questioned as a measure, then any subsequent claims find no empirical support from the data. Marker variables generally are not recommended as a means for measurement of underlying processes unless the marker is established at both a theoretical and empirical level as an indicator of the process. The use of a marker variable generally is employed because the targeted or desired variable remains difficult to measure and the marker provides a simple means of evaluating the presence of that variable.

Mike Allen

See also Variables, Conceptualization; Variables, Defining; Variables, Dependent; Variables, Independent; Variables, Latent; Variables, Operationalization

between the independent and dependent variables would not exist. Mediation analysis is used to better understand why or how the relationship between two variables occurs. For example, suppose the independent variable (X) influences the dependent variable (Y) through a third and mediating variable (M). It could be stated then, that X led to M, which led to Y. Mediator variables are the bridge that connects X with Y. A mediator can be either a categorical (e.g., sex) or a continuous (e.g., IQ) variable. For example, engaging in sexual intercourse can lead to pregnancy, but only if you are a woman. In this scenario, engaging in sex is the independent variable and becoming pregnant is the dependent variable. The relationship between the independent and the dependent variable can only exist with the presence of the mediating variable, which in this example is being a female.

Further Readings Keogh, B. K., Major, S. M., Reid, H. P., Gandara, P., & Omori, H. (1978). Marker variables: A search for comparability and generalizability in the field of learning disabilities. Learning Disability Quarterly, 1, 5–21. Williams, L. J., Hartman, N., & Cavazotte, F. (2013). Method variance and marker variables: A review and comprehensive CFA marker technique. Organizational Research Methods, 13, 477–514. doi:10.1177/ 10944281103666036

Variables, Mediating Types In communication research, a mediating variable is a variable that links the independent and the dependent variables, and whose existence explains the relationship between the other two variables. A mediating variable is also known as a mediator variable or an intervening variable. A mediator variable allows a scientist to hypothesize that the independent variable impacts the mediating variable, which in turn impacts the dependent variable. In other terms, a mediating variable is present when a third variable influences the relationship between the predictor and the criterion variables. Without the mediator variable, the link

When to Expect a Mediating Variable
Before testing for mediating variables, it is imperative to understand where a mediating variable might exist. Although there is no definite sign to look for, there are some clues that can help in determining whether a mediating variable might be present. For instance, if previous literature or theory suggests that there could be a relationship between all three of the study variables, mediation is likely to occur again in a current sample containing those same three variables. Another clue that a mediating variable might exist is that all three variables correlate when a correlation matrix is computed. Some variables might present themselves as moderating variables in certain scenarios. When trying to distinguish between mediating and moderating variables, it is important to remember the necessity of the third variable. For example, if it seems that the relationship between the independent variable and the dependent variable would exist without the presence of the third variable, then the variable is probably a moderating variable. However, if it seems as though the third variable's presence is necessary for the relationship between the other two variables to exist, then the third variable is likely a mediator. A mediating variable is the conduit that creates the connection between two other variables. Without the presence of this mediating variable, the channel linking X to Y would be nonexistent. For example, individuals with eating disorders may also be inclined to share their struggles with close friends. However, if social support is the mediator between engaging in disordered eating and disclosing those behaviors to friends, then the absence of social support from those friends would prevent the link between disordered eating and disclosure from ever occurring.

Ways to Compute
There are several ways to test for mediation. The most common is outlined by Reuben M. Baron and David A. Kenny, who list several requirements that must be met in order for mediation to occur. The first step is to determine whether the independent variable is a predictor of the dependent variable. Once this relationship is established, the second step is to determine whether the independent variable is a predictor of the mediator; the independent variable must predict the mediator at this step in order for mediation to occur. The third part of the process is to ascertain whether the mediator is a predictor of the dependent variable. The independent variable must be statistically controlled in this step to determine the true nature of the relationship between the mediator and the dependent variable. The first two steps laid out by Baron and Kenny each use a simple regression analysis, whereas the third step requires a multiple regression analysis. K. J. Preacher and A. F. Hayes provide another, more recently developed method to test for the presence of mediation, referred to as the bootstrap method. The bootstrap method is a type of nonparametric test. Such tests do not operate under the premise that the data fit a certain distribution, which means that any group of data can be subjected to the test, regardless of any probability distributions related to the sample. The bootstrap method can be computed using a variety of statistical software, such as Statistical Package for the Social Sciences (SPSS) or Statistical Analysis System (SAS).
Veronica Hefner

See also Hierarchical Linear Modeling; Hypothesis Formulation; Interaction Analysis, Quantitative; Multiple Regression: Block Analysis; Multivariate Statistics; Relationships Between Variables; Variables, Moderating Types; Z Transformation

Further Readings
Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173–1182. doi:10.1037/0022-3514.51.6.1173
Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. New York, NY: The Guilford Press.
MacKinnon, D. P. (2008). Introduction to statistical mediation analysis. New York, NY: Taylor & Francis.
Preacher, K. J., & Hayes, A. F. (2004). SPSS and SAS procedures for estimating indirect effects in simple mediation models. Behavior Research Methods, Instruments, & Computers, 36(4), 717–731. doi:10.3758/BF03206553

Variables, Moderating Types
The term moderating variable refers to a variable that can strengthen, diminish, negate, or otherwise alter the association between independent and dependent variables. Moderating variables can also change the direction of this relationship. A moderating variable can be either categorical (e.g., race) or continuous (e.g., weight), and is used exclusively in quantitative, rather than qualitative, research. Moderating variables are useful because they help explain the links between the independent and dependent variables. Also sometimes referred to simply as moderators, these moderating variables provide additional information regarding the association between two variables in quantitative research by explaining what features can make that association stronger, weaker, or even disappear. For example, in experimental studies, X (independent variable) causes Y (dependent variable). A third variable, M (moderating variable), might exist, which would explain additional information about the link between X and Y. For example, using data from a questionnaire, a researcher might hypothesize that the relationship between media exposure and beliefs about romance is moderated by the current relationship status of the participant. In other words, the association between viewing romantic media and reports of romantic beliefs could be different depending on whether the participant is single or in a relationship. If a difference is found, then relationship status would be deemed a moderating variable in the connection between media exposure and beliefs about romance. Regardless of the impact or existence of a moderating variable, the association between the independent and dependent variable will still exist. In other words, if relationship status is the moderating variable, then the correlation of media exposure and romantic beliefs will be stronger or weaker depending on whether the participant is currently in a romantic relationship or single; however, that correlation between media exposure and romantic beliefs will exist for both partnered and single individuals. The moderating variable is not required for a relationship between variables to exist. Instead, a moderating variable simply adds additional explanation for what that already-existing relationship looks like.

When to Expect a Moderating Variable
Before testing for moderating variables, it is imperative to understand where a moderating variable might exist. Although there is no definite sign to look for, there are some clues that can help in determining whether a moderating variable might be present. For instance, if previous literature or theory suggests that there could be a relationship between all three of the study variables, moderation is likely to occur again in a current sample containing those same three variables. Another clue that a moderating variable might exist is that all three variables correlate when a correlation matrix is computed. Some variables might present themselves as mediating variables in certain scenarios. One thing to remember when trying to distinguish between mediating and moderating variables is the necessity of the third variable. For example, if it seems that the relationship between the independent variable and the dependent variable would exist without the presence of the third variable, then the variable is probably a moderating variable. However, if it seems as though the third variable's presence is necessary for the relationship between the other two variables to exist, then the third variable is likely a mediator. Two individuals who see multiple images of ideal body types on social media may both feel negative emotions about their own bodies. However, those emotions may be more negative and painful for the individual who is overweight than for the individual who weighs less. Although both individuals may report negative feelings about their bodies, the variable of weight is a moderator that can make those feelings more negative in certain circumstances.

Ways to Compute
Moderators can be computed in one of two ways: interaction effects or path analysis. To calculate interaction effects using multiple hierarchical linear regression, researchers enter controls simultaneously in the first block, the independent variable and the moderating variable in the second block, and the cross-product of the moderator and the independent variable (i.e., the interaction) in the third block. To reduce problems with multicollinearity (the high correlation of two predictor variables, which can be detrimental to the accuracy of the statistical analysis), researchers transform the variables into z scores prior to calculating the interaction variable. For example, if a researcher predicts that the association between X and Y would be strongest due to M, X and M go in the second block, the cross-product of X and M goes in the third block, and Y is entered as the dependent variable. If the interaction is significant, the moderating variable is influencing the relationship between X and Y; the sign of the beta coefficient (positive or negative) indicates the direction of the impact, while the strength of the influence is indicated by the magnitude of the beta. Using the path analysis method described by A. F. Hayes and K. J. Preacher, evidence that a variable is moderating the relationship between an independent variable and a dependent variable can be evaluated using a variety of statistical software, such as Analysis of Moment Structures (AMOS) or the PROCESS macro in Statistical Package for the Social Sciences (SPSS) or Statistical Analysis System (SAS). This approach addresses limitations of earlier methods, such as distributional violations in the sample or low statistical power. There is evidence of moderation if the confidence interval for the interaction term does not contain zero.
Veronica Hefner
See also Hierarchical Linear Modeling; Hypothesis Formulation; Interaction Analysis, Quantitative; Multiple Regression: Block Analysis; Multivariate Statistics; Relationships Between Variables; Variables, Mediating Types; Z Transformation
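The z-scoring and cross-product procedure can be sketched without any statistics package. The following pure-Python simulation is a minimal illustration (the data and coefficient values are invented, and this is ordinary least squares, not the PROCESS macro): it z-scores the predictors, forms their cross-product, and estimates the interaction coefficient.

```python
import random
import statistics

random.seed(2)

# Synthetic data in which M moderates the X -> Y relationship;
# the sample size and coefficients are invented for illustration.
n = 800
X = [random.gauss(0, 1) for _ in range(n)]
M = [random.gauss(0, 1) for _ in range(n)]
Y = [0.4 * x + 0.2 * m + 0.3 * x * m + random.gauss(0, 1)
     for x, m in zip(X, M)]

def zscore(v):
    mu, sd = statistics.fmean(v), statistics.stdev(v)
    return [(a - mu) / sd for a in v]

# z-score the predictors before forming the cross-product to reduce
# multicollinearity between the main effects and the interaction term
zX, zM = zscore(X), zscore(M)
XM = [a * b for a, b in zip(zX, zM)]

def ols(y, *cols):
    """OLS coefficients (intercept first) via the normal equations."""
    rows = [[1.0, *vals] for vals in zip(*cols)]
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    rhs = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting, then back substitution
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (rhs[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

intercept, bX, bM, bXM = ols(Y, zX, zM, XM)
print(f"interaction beta = {bXM:.2f}")  # nonzero -> evidence of moderation
```

In the entry's block scheme, zX and zM would enter in the second block and XM in the third; here all three predictors are entered together, which yields the same interaction coefficient in the final model.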

Further Readings
Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173–1182. doi:10.1037/0022-3514.51.6.1173
Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. New York, NY: The Guilford Press.
Preacher, K. J., & Hayes, A. F. (2004). SPSS and SAS procedures for estimating indirect effects in simple mediation models. Behavior Research Methods, Instruments, & Computers, 36(4), 717–731. doi:10.3758/BF03206553
Sharma, S., Durand, R. M., & Gur-Arie, O. (1981). Identification and analysis of moderator variables. Journal of Marketing Research, 18(3), 291–300. doi:10.2307/3150970

Variables, Operationalization
Operationalization is an important step in the process of developing methodologically sound study designs. To operationalize a variable under study, a researcher begins with a concept and a conceptualization of that concept that is clearly defined and outlined by a theoretical foundation. This entry focuses on operationalization and how this procedure relates to concepts and conceptualizations in the social scientific process. It also provides examples of various approaches commonly used to move from concepts to operationalizations. Throughout the entry, potential issues and challenges likely to be encountered by researchers are discussed. To more clearly articulate the role of operationalization, this entry begins with an introduction to concepts and conceptualizations prior to discussing operationalization. The entry concludes with a section linking concepts, conceptualizations, and operationalizations.

Concepts and Conceptualizations
A concept is a way of summarizing and classifying ideas and observations. Media use, for example, is a concept designed to capture the idea of how individuals interact with media and all related dimensions that accompany this behavior. To effectively guide research, a concept requires an explicit and consistent definition. The concept of media use could signal a variety of ways in which individuals use media, ranging from less deliberate forms of use (e.g., passively watching a television segment on cancer prevention) to more active forms of media consumption (e.g., actively searching for information about cancer treatments). As such, a researcher interested in studying media use would need to consider whether to use the overarching concept of media use or a more specific subconcept (e.g., video-game playing) for empirical examination. Since concepts are abstract in nature, they require further specification in order to move them from abstract ideas to more concrete terms that can be examined in research. Specifically, concepts need to be expressed more concretely in order to provide context for making sense of observations or testing hypotheses. This means that researchers need to use comparable definitions of concepts across research in order to draw meaningful conclusions about findings and identify common patterns. The process of converting abstract theory into concrete ideas is known as conceptualization. For example, one way to conceptualize a concept such as information seeking is to define it as a behavior characterized by purposeful attempts to procure information about a topic of interest to the information seeker. One question raised with this concept is whether this behavior might also include the acquisition of information using more passive efforts. In the case of information seeking, a helpful conceptualization should delineate which types of behaviors constitute active and passive information seeking. Proper conceptualization thus provides clear boundaries for dimensions that are included within the definition while excluding dimensions that fall beyond its scope. To arrive at a thorough conceptualization of a concept, it is important to identify and include all relevant dimensions of that concept. Challenges arise with concepts that are multifaceted and comprise several dimensions. For example, the concept of media exposure has in recent years been the subject of some debate, as various conceptualizations have at times produced different results in terms of detecting and linking media to significant effects on outcomes of interest. More specifically, the existing body of research on media effects contains conceptualizations ranging from mere exposure (e.g., the number of hours spent listening to the radio) to ones that include dimensions for audience involvement and attention paid to certain types of content (e.g., how closely individuals pay attention to news about climate change). This has made it difficult to summarize the literature on media effects. Efforts to organize and provide clear and consistent definitions for concepts under study can help reduce these obstacles and facilitate steps such as operationalization that follow in the scientific process.

Operationalization
Finding relevant concepts and appropriately defining them through the process of conceptualization are necessary steps prior to operationalization. Operationalization is the process by which concepts are linked to variables. This process involves identifying operations that will showcase values of a variable under study. In other words, operationalization specifies concrete observations that are thought to empirically capture a concept existing in the real world. For example, the concept of information seeking, conceptualized as the active pursuit of information about a specific topic, may be operationalized as the number of sources consulted in the pursuit of information. A researcher may investigate, for instance, how cancer patients move across informational sources (e.g., Internet, social media, television news segments, family members, coworkers) outside clinical encounters with the health care system. Another operationalization of information seeking may be the frequency with which an individual consults sources in an effort to satisfy his or her informational needs. Using this operationalization, a researcher may want to examine how many times cancer patients consult the Internet over the course of their cancer experience. Some concepts are more easily operationalized than others due to differing levels of abstraction. Often, concepts that are challenging to operationalize are referenced using different definitions or with definitions that are inconsistently applied, even when the concepts are clearly outlined by theory. For example, a concept such as social norms may play a distinct role in social-psychological theories designed to predict human behavior; however, the research literature examining social norms uses a variety of subconcepts related to social norms, including injunctive norms, descriptive norms, and subjective norms. Injunctive norms refer to the manner in which a specific behavior is socially judged. This type of norm reflects what individuals perceive should or should not be done in a given situation (e.g., "People should not drive while intoxicated"). Descriptive norms are norms related to perceptions of how commonly others perform a given behavior (e.g., "People like me do not drive while intoxicated"). In contrast, subjective norms capture perceptions of how important others view the behavior in question (e.g., "People who are important to me do not think I should drive if I am intoxicated").
Although the literature on social norms has increasingly refined the distinctions between various subconcepts of social norms, the number of subconcepts that fall under the umbrella concept of social norms leads to a greater number of possible conceptualizations and thus operationalizations. However, a greater challenge exists in operationalizing concepts for which clearly defined concepts and conceptualizations are lacking. For example, public communication campaign exposure represents a concept that could benefit from greater conceptual clarity, the lack of which may lead to inconsistent and interchangeable use of this term. This lack of clarity presents problems for conceptualizing public communication campaign exposure and operationalizing it in a meaningful way to capture some aspect of reality, with implications for linking exposure to campaign effects.

See also Measurement Levels; Scales, Likert Statement; Scales, Semantic Differential; Variable, Conceptualizing

Further Readings

Connecting Concepts, Conceptualizations, and Operationalizations The first step in operationalizing a variable under study is to begin with a clearly defined concept. The concept of binge drinking has been previously defined by public health and medical professionals as the consumption of four or more alcoholic drinks in 2 hours for women, and five or more alcoholic drinks in 2 hours for men. Using this conceptualizing, a researcher interested in examining the effects of alcohol advertising on college students’ binge drinking behavior may design a study to capture their behavior by considering indicators that may point toward case values that represent the binge drinking variable. Several possible indicators exist for capturing values for this variable. One option may be to ask study participants to indicate the number of alcoholic beverages they consume over the course of a typical weekend. Another option could be to directly observe participants’ behavior after exposure to alcohol-related advertising to see if they purchase more alcoholic beverages or redeem coupons for alcohol-related products. Operationalization seeks to translate abstract theoretical ideas into concrete terms that can be tested in a study or used to contextualize empirical observations of reality. The process of operationalization begins with a concept supported with a theoretical foundation and a clearly outlined conceptualization of that concept. Identifying a theoretical concept and using a sound conceptualization are critical steps for implementing effective operationalizations of concepts under study. Several ways exist for a researcher to move from concept, to conceptualization, to operationalization. Operationalization may at times raise challenges for researchers, but with sufficient clarity in conceptualization, many issues can be averted. Lourdes S. Martinez

Annenberg Media Exposure Research Group. (2008). Linking measures of media exposure to sexual cognitions and behaviors: A review. Communication Methods and Measures, 2(1–2), 23–42. Lewis, N., Gray, S. W., Freres, D. R., & Hornik, R. C. (2009). Examining cross-source engagement with cancer-related information and its impact on doctor–patient relations. Health Communication, 24(8), 723–734. Mayer, D. K., Terrin, N. C., Kreps, G. L., Menon, U., McCance, K., Parsons, S. K., & Mooney, K. H. (2007). Cancer survivors information seeking behaviors: A comparison of survivors who do and do not seek information about cancer. Patient Education and Counseling, 65(3), 342–350. National Institute on Alcohol Abuse and Alcoholism. (2015). Drinking levels defined. Retrieved from http://www.niaaa.nih.gov/alcohol-health/overviewalcohol-consumption/moderate-binge-drinking Niederdeppe, J. (2014). Conceptual, empirical, and practical issues in developing valid measures of public communication campaign exposure. Communication Methods and Measures, 8(2), 138–161. Niederdeppe, J., Hornik, R. C., Kelly, B. J., Frosch, D. L., Romantan, A., Stevens, R. S., . . . Schwartz, J. S. (2007). Examining the dimensions of cancer-related information seeking and scanning behavior. Health Communication, 22(2), 153–167. Park, H. S., Klein, K. A., Smith, S., & Martell, D. (2009). Separating subjective norms, university descriptive and injunctive norms, and US descriptive and injunctive norms for drinking behavior intentions. Health Communication, 24(8), 746–751. Park, H. S., & Smith, S. W. (2007). Distinctiveness and influence of subjective norms, personal descriptive and injunctive norms, and societal descriptive and injunctive norms on behavioral intent: A case of two behaviors critical to organ donation. Human Communication Research, 33(2), 194–218. Schutt, R. K. (2011). Investigating the social world: The process and practice of research. Thousand Oaks, CA: Pine Forge Press.
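As a concrete illustration of the binge drinking example above, the threshold-based conceptualization can be operationalized as a simple coding rule applied to survey responses. The function and field names below are hypothetical, invented for this sketch rather than drawn from any real instrument.

```python
# Operationalizing "binge drinking" with the threshold definition cited
# in the entry: 4+ drinks on one occasion (about 2 hours) for women,
# 5+ for men. All names and data here are hypothetical.
def is_binge_episode(drinks: int, sex: str) -> bool:
    """Code one self-reported drinking occasion as binge / not binge."""
    threshold = 4 if sex == "female" else 5
    return drinks >= threshold

# Simulated questionnaire records: drinks consumed in a single occasion
responses = [
    {"id": 1, "sex": "female", "drinks": 4},
    {"id": 2, "sex": "male", "drinks": 4},
    {"id": 3, "sex": "male", "drinks": 6},
]
coded = {r["id"]: is_binge_episode(r["drinks"], r["sex"]) for r in responses}
print(coded)  # {1: True, 2: False, 3: True}
```

A different indicator (e.g., total weekend consumption rather than a single occasion) would yield a different operationalization of the same concept, which is precisely the entry's point about moving from conceptualization to measurement.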


Video Games Video games (alternatively referred to as computer games, digital games, electronic games, and video games, among other terms) are a broad range of interactive electronic entertainment technologies that allow a user to input commands and receive feedback via a visual display. Video games are played on a range of hardware devices including personal computers, dedicated video game consoles connected to television monitors, custom-designed arcade cabinets, mobile phones, and tablet devices, typically using control interfaces such as keyboards, handheld controllers, and cameras. The software for individual games is distributed and accessed by similarly varied means, including optical media, online downloads, and proprietary memory storage cartridges. Even more diverse than the hardware and software formats used by video games are their content and themes; video games provide leisure and entertainment for their users in the form of puzzles, simulations of sport and combat, fantasy narratives, educational exercises, satirical works, and more. Some video games are played by one user, whereas others can be played by two or more users in the same location or by dozens or thousands online. Since the arrival of the first video game prototypes in the mid-20th century, video games have become a substantial element of the entertainment media environment. While video game sales figures fluctuate from year to year (and can also be difficult to calculate depending on what software, hardware, subscription and access costs, and other goods and services are considered relevant), varied estimates have placed total industry sales figures per year by 2015 at more than US$10 billion in the United States and from US$50 billion to more than US$100 billion worldwide. Similarly loose estimates have reckoned the video game audience to number more than 1 billion souls worldwide by 2015, with more than half of the people in the United States playing video games. 
This entry reviews the history of video games and the history of communication research on video games.

History of Video Games
Because the term video game encompasses a broad range of hardware, software, and media content, and because the medium has evolved very rapidly technologically and aesthetically, it is difficult to identify unequivocally what milestone can be described as the advent of video games. One prototype that is often credited as a likely first video game was Tennis for Two, a tennis simulation created in 1958 using oscilloscope screens and computer equipment used for missile tracking by U.S. physicist William "Willy" Higinbotham. Higinbotham, who had worked on the research team that developed the first atomic bomb and subsequently became a prominent advocate of nuclear nonproliferation as a founder and chair of the Federation of American Scientists, created the device to entertain visitors to Brookhaven National Laboratory, where he worked at the time. While Tennis for Two was never commercialized and had a relatively small audience (appearances at two annual "visitor's day" events), it is recognized retrospectively by many as the first video game because of its use of computer technology, user input controls, and dynamic moving feedback on video displays. Other early prototypes that preceded video games are noteworthy for including some but not all of these characteristics, including several computer-based games that either did not include a video display or included a video display that was static and did not feature motion like that of Tennis for Two. Examples include a patent for a "Cathode Ray Tube Amusement Device" issued in 1948 for a device invented by Thomas T. Goldsmith, Jr., and Estle Ray Mann of DuMont Laboratories in the United States; a simple chess simulation programmed in 1951 by Dietrich Prinz as part of a collaboration between the University of Manchester and British engineering firm Ferranti; a draughts (aka checkers) game created by Christopher Strachey at London's National Physical Laboratory beginning in 1951; Nim, a computer game that was exhibited at the 1951 Festival of Britain by the Ferranti company; and OXO, which was created by Alexander "Sandy" Douglas at the University of Cambridge in 1952 and simulated the game "Tic Tac Toe" (aka "Noughts and Crosses"). It is important to note, though, that these and other early developments may be identified as legitimate claimants to the title of "first video game" given the varied criteria that might be applied as necessary conditions for a technology to be considered a video game. Furthermore, different "first" games can be legitimately identified for alternate terms such as digital games or computer games as opposed to video games, given that these terms have similar, but not always precisely identical, defining criteria. The hazy nature of the first video game notwithstanding, other video game development milestones are slightly clearer. The first video game that was distributed to a large audience and remained available beyond an exhibition or two is widely understood to have been Spacewar!, the creation of which was led by three students at the Massachusetts Institute of Technology with contributions from others at later stages. Spacewar! was completed in 1962 and was soon played in both its original version and modified versions on computers in research laboratories across the nation and worldwide. While Spacewar! was not distributed commercially, the first commercial games were two coin-operated Spacewar! adaptations. Galaxy Game, the first coin-operated game, was a unique single arcade machine placed on the Stanford University campus in 1971. Computer Space, also released in 1971, was another coin-operated game, mass produced and released throughout the United States. Neither of these games was as successful as the commercial breakthroughs Pong, a coin-operated game released in 1972 by Atari, Inc., or the Odyssey, a home video console designed by Ralph Baer and released by Magnavox in 1972. Other notable moments in the first two decades of commercial video games include the 1977 release of Atari's Video Computer System home console, the 1980 coin-operated arcade hit Pac-Man from Namco, and Super Mario Brothers, a bestselling game released in 1985 by Nintendo for play on the Nintendo Entertainment System home console.
While commercial milestones for video games played on computers, consoles, and coin-operated arcade machines were being established through the 1970s and 1980s, a somewhat separate series of developments precipitated the development of online games. While popular games from Spacewar! to Pong to Pac-Man to Super Mario Brothers, as well as precursors such as Tennis for Two, were played either by a single player or by multiple people in the same location, Essex University students developed arguably the first online game between 1978 and 1980. Their game, MUD (Multi-User Dungeon), allowed multiple players from distant locations to play together in its fantasy setting using the Internet ancestor ARPANET. MUD, which relied solely on text commands and feedback to allow users to navigate and interact with its environment while reading detailed descriptions of the game's locations, items, and characters, was so popular that it spawned an eponymous genre of scores of games modified to feature a vast range of features, themes, and settings. MUD was not initially commercialized, though a series of notable commercial MUD games were released by others in the early 1980s; most relied on per-minute or per-hour Internet subscription fees as revenue sources through collaboration with Internet service providers. As Internet use became more widespread among consumers, commercial online video games featuring graphics followed, such as Habitat, released online by Lucasfilm in 1986 as possibly the first graphical online game; 1991's Neverwinter Nights from Stormfront Studios and Strategic Simulations, which had a peak subscriber total of more than 100,000 users; 1997's Ultima Online from Origin Systems and Electronic Arts, which doubled that total with a peak of more than 200,000 subscribing players; Lineage, a 1998 online game released by South Korea's NCSoft that remained popular for more than a decade and amassed subscribers numbering more than 2 million; and 1999's EverQuest by Sony's Verant Interactive, which topped out at more than 400,000 subscribers. Perhaps the definitive commercial presence among the plethora of online games launched in the 1990s and 2000s, though, has been Blizzard Entertainment's World of Warcraft, which has been active since its initial release in 2004 and boasted more than 12 million subscribers worldwide at its peak membership.
As audiences for online games have grown, console games have also come to capitalize on the Internet's potential for fostering collaborative and competitive play over long distances by providing multiplayer options with increasing frequency, so much so that most bestselling console games now include multiplayer features.

History of Communication Research on Video Games

The growth of video games as an economic and social presence has been followed by growing attention to video games from researchers in communication and related fields; patterns and trends in the topics and methods used by communication researchers interested in video games have ebbed and flowed over time.

There was little to no research on video games as a communication medium in the 1970s and early 1980s, which mirrors mixed interpretations by U.S. courts during this period as to whether video games included speech acts or even creative intellectual property subject to copyright protection. By 1984, communication journals began publishing articles on video games, and journals in related fields such as social psychology began to feature research on video games at around the same time. Much of this research focused on the potential negative effects of video game use, such as overuse and aggressive behavior, through experiments and surveys. This trend has endured in the years since, with potential negative effects remaining the most common topic of the most prominent social research on video games.

In its focus on video games as a stimulus potentially causing harm, much of the most widely read research on video games in the field of communication through the 1980s and 1990s ignored the social interactions of video game users, whether playing in one location or online through MUDs and other online games. There were some exceptions to this trend, particularly an array of qualitative and critical scholarship dealing in a number of ways with the cultural role of video games. While research focusing on the effects of games, particularly negative effects, has continued to be a dominant topic in game research, one way that scholarship began to address video games as a communication medium by the late 1990s was with content analyses systematically documenting the messages in video games, such as portrayals of characters and the prevalence of violence.
Since 2000, the growth of commercial online games has also been echoed by scholarship exploring the ways in which online game users interact with each other and function within formal and informal groups, their motivations for playing, and the content of their discussions about video games in other online forums and settings. Such research has employed a range of methods, such as surveys, experiments, content analyses, field experiments, secondary analysis of game server data, ethnographies, and interviews. Just as the features of video games and the manner in which players use them have evolved over a period of decades, so have the topics and questions explored by communication scholars researching video games diversified over time.

James D. Ivory

See also Content Analysis, Definition of; Critical Analysis; Cultural Studies and Communication; Critical Theory; Ethnography; Experiments and Experimental Design; Film Studies; Game Studies; Qualitative Data; Textual Analysis

Further Readings

Bartle, R. A. (2010). From MUDs to MMORPGs: The history of virtual worlds. In J. Hunsinger, L. Klastrup, & M. Allen (Eds.), International handbook of internet research (pp. 23–39). Dordrecht, The Netherlands: Springer. doi:10.1007/978-1-4020-9789-8_2

Elson, M., Breuer, J., Ivory, J. D., & Quandt, T. (2014). More than stories with buttons: Narrative, mechanics, and context as determinants of player experience in digital games. Journal of Communication, 64, 521–542. doi:10.1111/jcom.12096

Ivory, J. D. (2013). Video games as a multifaceted medium: A review of quantitative social science research on video games and a typology of video game research approaches. Review of Communication Research, 1(1), 31–68. Retrieved from http://rcommunicationr.org

Kirriemuir, J. (2006). A history of digital games. In J. Rutter & J. Bryce (Eds.), Understanding digital games (pp. 21–35). London, UK: Sage.

Lowood, H. (2006). A brief biography of computer games. In P. Vorderer & J. Bryant (Eds.), Playing video games: Motives, responses, and consequences (pp. 25–41). Mahwah, NJ: Erlbaum.

Williams, D. (2002). Structure and competition in the U.S. home video game industry. Journal of Media Management, 4, 41–54. doi:10.1080/14241270209389979

Visual Communication Studies

The term visual communication studies refers to an interdisciplinary academic field of scholarship that analyzes the composition, effectiveness, and effect of messages that are expressed primarily or in significant ways through image or graphical depiction. While text, often called verbal or linguistic communication, may accompany those messages, in order to be considered visual communication, the objects, artifacts, or symbols that comprise the message must be designed or delivered in ways substantially dependent on the visual attention or vision of audiences.

Establishing a specific date for the emergence of visual communication studies as an area of study is challenging because many of its associated disciplines (e.g., aesthetics, art history, communication studies, graphic design, mass communication, media studies, multimedia and computer animation studies, philosophy of art, popular culture studies, television and cinema studies) exist as independent areas of scholarship and in ways only partially dedicated to concerns addressed by visual communication studies. Beginning roughly in the 1930s and accelerating on par with the growth of mass communication technologies in the 20th century, visual communication studies developed in response to a number of changing political, social, cultural, and economic factors.
These include a steady displacement of linguistic messaging (e.g., by advertisers, politicians, social activists) in favor of image-heavy artifacts, an exponential growth in media consumption, the rise in attention paid to celebrities, the diversification of media platforms and alternatives for message senders, technological innovation (in particular the widespread adoption of television and subsequently the Internet as platforms for accessing news, political discourse, and popular culture), and the rise of identity politics and the "culture wars" (in which the nature and modes of representation increasingly served as the terrain for social, cultural, and political negotiations over the relative levels of equality and opportunity experienced by members of an increasingly diverse society).

While visual communication studies is a relatively new field, conveying messages in visual form has been a common and widespread practice since the first cave paintings some 40,000 years ago. Whether through the literal use of symbols or images to express ideas or in more abstract invitations to audiences to visualize various futures, the reliance on visuality to convey messages is as ancient as human communication itself. Insofar as it relies upon a physical representation of concepts and letters, writing itself, for example, is a form of visual communication that ranges from simple line drawings on a convenient medium (e.g., dirt, rock, papyrus) to complicated hieroglyphics or intricate calligraphy.

Visual communication studies, therefore, examines the abundance of messages and visual sensory experiences that subjects encounter both in their daily lives and in exceptional political moments in order to determine how visuality affects the subjects and their understanding of the world. The scope of the field spans a wide spectrum, from symbolic and cultural conventions (e.g., national symbols, advertising, high art) to the pragmatic (e.g., street signs, maps, currency, clothing) to the most intricate and elaborate performances (e.g., those that express identity of some kind, or intercultural dynamics).

Although the two share a common interest in the form and function of image-based artifacts and draw on similar theoretical and methodological resources, the academic study of visual communication has generally been divided into two broad fields of analysis: visual communication and visual rhetoric. Scholars who study visual communication primarily inquire into the artistry and effectiveness of visual artifacts by employing principles of design, cognition, and aesthetics theory, whereas scholars who examine visual rhetoric tend to ask questions about the societal effect of an image or visual artifact.

Visual Communication

The older of the two branches of visual communication studies, visual communication, intertwines speculation into the nature of perception, biological understanding of the eye, and pragmatic issues of graphic design, color choice, and intent of message to form a field of study principally concerned with understanding the manufacture of images in relation to the processes of vision. As with any academic field of study, there are varying research objectives associated with visual communication, but they are unified in their effort to determine the way in which sight operates as a crucial register of knowledge (about the self, others, and the world at large). The principal methodological avenues for this mission include understanding the mechanics of vision, including the function of the eye and the properties of the objects it apprehends; assessing and employing the principles of graphic design; and attending to the assumptions and practices of media industries and the media operations that shape public perception.

The study of visual artifacts dates back nearly as far as their production, and thinkers of various ancient cultures were fascinated by vision and by the relationship between objects and their perception. Many of the phenomena associated with these early conceptualizations remain principal sources of inquiry today, including the nature of light, the function and physiology of the eye, the play of optical illusions, and the use and mixture of color. At the heart of these investigations was concern for the nature of perception and the way that sensory messages are translated into cognitive thought. One of the earliest theories in this regard was Gestalt theory, which argued that the sensual components of a scene are organized according to particular principles in order to form a cohesive whole. Some scholars challenge this account by arguing for a much more active role of the brain and its processing in the making of meaning from perception, a general approach referred to as the cognitive approach. According to this school of thought, the brain/the self uses any number of mental faculties in order to fashion sense from stimuli. These might include, among others, recall, association, cultural factors, and deferral of difficult or challenging images. Insights of this type into the nature of perception shape the graphic design industry, which seeks pragmatic integration of the mechanics of vision with design technique in order to maximize the appeal and effectiveness of image work in whatever medium is deemed most suitable.
A more critically reflexive vein of graphic design scholarship considers the way cultural values and historical sensibilities come together to form period perspectives. Finally, a substantial body of scholarship examines the way that various media are used to convey messages and shape public opinion. This would include analysis of the conventions, socioeconomics, politics, and ideological effects of any number of mediums, including photography, television, cinema, the Internet, computer animation, and bodily expression. All of these, and more, provide the potential to captivate the eye and to distribute impressions of knowledge and truth that scholars assess and critique.

Visual Rhetoric

The second branch of visual communication studies, visual rhetoric, emerged from the field of rhetorical studies in the 1980s. The foundations of rhetorical studies, in turn, lie in the analysis of speech and date back to the ancient Greek belief that public address is essential to a healthy civil society. This disciplinary tradition plays a significant role in shaping rhetorical studies' examinations of visual rhetoric. Questions of whether methods and theoretical approaches developed for linguistic persuasive efforts translate to image-based forms, therefore, often inform and structure investigations into such artifacts. These include whether images can convey messages without linguistic anchors, whether visual discourses are inherently more truthful (or deceptive), whether images or visual artifacts convey more emotional intensity than linguistic expression, and whether images communicate more universally due to their generality.

The study of visual rhetoric focuses less on the composition of a message for its own artistic sake and instead inquires into the way visuality is employed in the service of persuasion, argument, or the effort to construct a particular social reality. A number of heuristics inform the various approaches to the study of visual artifacts. These include attention to the semiotic composition and symbolic value of the image, consideration of the spectacular nature of contemporary image politics, tracing the emergence of social control by examining the disciplinary nature of various institutions, and applying the principles of psychoanalysis to questions of recognition and the motivation to belong. As with its linguistic counterpart, semiotics is commonly employed in order to determine how sign systems are anchored, signify, and are given value through some visual enterprise.
As with many of the critical approaches to the humanities, the intellectual heritage for these maneuvers can be traced to 1950s' and 1960s' cultural analysis of ritual and everyday life by Continental theorists. Semiotic analysis of photographs, magazines, cookbooks, monuments, political campaigns, and national figures demonstrated that these artifacts were both informed by sign systems and, in turn, naturalized ideological conceptions, ensuring the commodification and continuation of particular forms of life over and against their alternatives.

Consideration of the commodification of culture highlights those moments where, by design or as a result of emergent events, visual culture increasingly appears to turn on the matter of effectively and dramatically garnering public attention. This body of research comprises a number of significant lines of inquiry, including the examination of the agenda-setting function of news coverage. Relatedly, stigmatization and stereotyping are often employed as techniques to generate and shape public interest, resulting in othering representations of nations, cultures, or identity groups that negatively affect public perception of those populations. A third vein of attention-based research interrogates the spectacular nature of visual culture, arguing that audiences are increasingly exposed to more and more dramatic appeals, to the point where the citizenry no longer feels capable of adequately engaging a seemingly impossible series of competing demands, ultimately resulting in less overall civic participation.

Following the work of Michel Foucault and others, many visual rhetoric scholars consider questions of power and resistance, especially as they are made possible through various institutions. Emulating the broad template found in Foucault's analysis of the arrangement of discursive formations such as the clinic, the sanitarium, or the prison, scholars have cast a critical eye on the ways some institutions are uniquely positioned to regulate and discipline bodies in ways that encourage them to adopt more docile political subjectivities.
Often such investigations are paired with concern for larger social transformations that shift additional burdens onto subjects to conform to expectations of control (e.g., through neoliberalization or a trend toward greater acceptance of and dependence on surveillance cultures). This brand of scholarship comprises the lion's share of visual rhetoric studies and ranges across sites as diverse as the body and its presentation, commemorative culture (e.g., museums, memorials, landmarks), and the ways citizens express their citizenship through consumption or charitable efforts.

The desire to be seen as a worthy subject signaled, for many scholars, the need to consider the affective motivation of belonging more directly. Scholars have done this, in part, by considering the way that certain figures are lauded while others are reviled (e.g., with regard to body image and/or performances of gender or sexual orientation). Some have also translated the clinical practice of psychoanalysis into a heuristic for analyzing cultural formations and the subjectivities they encourage. Visual culture is particularly suited to this mode of investigation in that much of psychoanalytic theory addresses individuals' felt need to display themselves in material reality as they imagine themselves in their ideational sense of self.

William C. Trapani

See also Critical Theory; Cultural Studies and Communication; Mass Communication; Persuasion; Rhetoric; Semiotics; Visual Materials, Analysis of

Further Readings

Berger, J. (1972). Ways of seeing. New York, NY: Penguin.

Griffin, M. (2001). Camera as witness, image as sign: The study of visual communication in communication research. In W. Gudykunst (Ed.), Communication yearbook 24 (pp. 433–463). Los Angeles, CA: Sage.

Melville, S., & Readings, B. (1995). Vision and textuality. Durham, NC: Duke University Press.

Mitchell, W. J. T. (1994). Picture theory: Essays on verbal and visual representation. Chicago, IL: University of Chicago Press.

Olson, L. C. (2007). Intellectual and conceptual resources for visual rhetoric: A re-examination of scholarship since 1950. Review of Communication, 7, 1–20. doi:10.1080/15358590701211035

Olson, L. C., Finnegan, C. A., & Hope, D. H. (Eds.). (2008). Visual rhetoric: A reader in communication and American culture. Los Angeles, CA: Sage.

Stephens, M. (1998). The rise of the image, the fall of the word. New York, NY: Oxford University Press.


Visual Images as Data Within Qualitative Research

The term visual images encompasses a variety of artifacts that can be used as data within various qualitative research methodologies. Visual images fall into numerous typologies, and either the researcher or the participants in a research study can create the images. Images can also be found; that is, they exist as images not created by the researcher or participants. Visual images can be categorized as still images (e.g., digital or film photographs, paintings, graphic design illustrations, advertisements) or moving images (e.g., cinematic films, television shows, or home movies).

This entry considers why social (and even humanities) researchers use visual images as data in their studies concerning the human condition. Furthermore, images are never transparent or neutral in content; therefore, consideration is given to how visual images as data can function at higher orders of signification and thus reveal underlying ideologies or social representations. In addition, how visual images fit into various research methodologies is explored. Last, this entry discusses potential weaknesses in using visual images as data in qualitative research.

Visual Images as Data

Photographic images constitute an ever-present visual spectacle in contemporary society. The impact images have on society is remarkable, and they contribute daily to people's interactions with fellow citizens. The prominence of images in modern-day life is important to some visual culture researchers, who claim that images of every sort dominate people's lives and that everything seems to be about the image. Looking deeper into this phenomenon, communication and cultural theorist Sut Jhally explains that our image-based culture thrives within advertising and other mass media, but it also dominates social media. For example, the medium of print advertising (newspapers, magazines, and collateral materials) publishes daily impressions of a contemporary social structure, using visual rhetoric to persuade viewers with its underlying ideology of consumerism. Furthermore, Jhally points to this construct as defining how modern culture conceives the "good life" and society's attempt to enact such a life based on visual models provided in the media. Visual images therefore play a significant role in this social construct, and more so in the representation of individuals or groups, such as stereotyping by race, gender, or ethnicity. In addition, Jhally situates reality as being socially constructed; contemporary society thus tends to comprehend itself through visual images and their usage. He even goes so far as to suggest that images have become the prevailing language that crosses national borders around the globe.

This phenomenon of images is pervasive, or as visual anthropologist Marcus Banks suggests, visual images are ever-present and can be found everywhere in our culture; as such, this phenomenon is sparking new, emergent fields of inquiry such as visual culture. Banks goes on to explain that there are two primary reasons for using images (still and moving) in qualitative research. The first is that images abound within Western society. Given their ubiquity in contemporary culture, visual images can be included in nearly all types of social scientific research. In other words, images are not restricted to the anthropologist or sociologist conducting an ethnographic study; they are also viable for social researchers within areas such as communication, cultural, or media studies. In addition, visual images as data will find fertile soil for visual analyses within overarching paradigms such as critical theory, postmodernism, or feminism. Last, according to British cultural geography professor Gillian Rose, semiotics, psychoanalysis, discourse analysis, compositional interpretation, audience studies, documentary studies, and photo-elicitation also provide rich avenues for visual image usage and analysis.
A second reason for using images manifests in the form of sociological insights that can be obtained only through an analysis of images or through a study that incorporates the creation or collection of images as data. Media relying extensively on visual images, such as television, movies, or video games, have been and continue to be productive ground for social researchers investigating the effects visuals have on various audiences (e.g., children). In addition to image usage in mass media, social media provides researchers with new avenues for social inquiry.

Meaning of Visual Images

Historically, there has been a division of thought concerning the reliability of images, premised on the notion that an artist's painting would be infused with his or her subjectivity. Conversely, a photographic image was viewed as a precise scientific rendering through technology, thus giving it preference based on objectivity. However, images are never transparent. Rose states that images created using visual technologies do not give the viewer a transparent view of their particular social milieu. Rather, images function interpretively, premised on dominant or subordinate coding, depending on who is creating or viewing the visual image. Thus, as Roland Barthes explains, images provide meaning at multiple levels. First, the content within the frame of the visual image functions at the level of denotation; it is the subject matter of the image. Second, the image can function at a higher level known as connotation; this is where researchers explore ideological meanings contained within visual data.

Visual Image Methodologies

Visual methodologies take various approaches to analyzing and interpreting images as data. Images (e.g., photographs, films, video, paintings, animations) as data in social science research fit within two broad types: found images and created images. Found images already exist in society in the form of magazine advertisements, television programs and commercials, news and reporting images, or video games. Researchers may, for example, conduct a content analysis of magazine ads in specific publications to determine gender or racial profiling trends. Another example might be the researcher who performs a semiotic analysis of television commercials in order to comprehend underlying societal ideologies.

Conversely, instead of relying on images found in society, some researchers may choose to work with images created specifically for their project. First, and most typical, are images created by the researchers themselves. Traditionally, this has been in the form of documentary photography or cinematography. While the researcher may include these visual images in the research project, they often perform a secondary role, such as visual support to an ethnographic study. On the contrary, some social researchers prefer to have their research participants create visual images, which are then subjected to analysis by the researcher. This approach could be used in psychoanalysis or in writing an autoethnography. While these two approaches have been typical fare for most social researchers, Banks suggests that some studies are blending the two in collaboration between researcher and research study participants. Furthermore, he explains that this shared approach may utilize both existing images and newly created ones. This entry briefly examines two specific visual methods applicable to this discussion.

Photo-Documentation

Based on prevailing notions of positivism at the time of photography's invention, photographs were assumed to be objective, accurate documents rendered through precise optical technology. As such, researchers use these images as data in their various types of analyses. According to Rose, in order for researchers to use photographs or videos as data, there needs to be an explicit connection to the research questions established at the onset of the research project. Qualitative researchers will typically perform a coding process through inductive research in a two-stage manner: Broad-based open codes are first developed, followed by more focused or specific codes that emerge from the first set. These codes help the researcher to analyze trends within the images he or she created in the study.

Photo-Elicitation

This particular method most often directly involves the research participants by having them create the images used in the final analysis. Researchers from numerous social science specialties utilize this approach in their projects. At times, researchers will use this approach during interviews with participants to elicit comments, spark discussion, or draw on past memories. For example, by engaging the participants in discussing aspects of a photograph, researchers may gain new insight into a specific social phenomenon that may not be obtainable through other means, such as aural or textual data analysis.

Potential Weakness

Using visual images as data for analyses could produce differing and subjective interpretations based on variables beyond the researcher's control. Rarely are visual data methodologies employed as the only method in a research study. Typically, researchers will incorporate visual images as data along with other methods such as quantitative content analysis, ethnographic studies, textual and discourse analyses, or semiotic investigations. Thus, the qualitative researcher should incorporate visual data methods with other standardized methodologies to ensure reliability and trustworthiness in the research project.

Terry Ownby

See also Communication and Culture; Cultural Studies and Communication; Ethnography; Film Studies; Qualitative Data; Semiotics; Visual Communication Studies; Visual Materials, Analysis of

Further Readings

Banks, M. (2007). Using visual data in qualitative research. London, UK: Sage.

Barthes, R. (1972). Mythologies (A. Lavers, Trans.). New York, NY: Hill and Wang. (Original work published 1957)

Jhally, S. (2003). Image-based culture: Advertising and popular culture. In G. Dines & J. M. Humez (Eds.), Gender, race, and class in media: A text-reader (2nd ed., pp. 249–257). Thousand Oaks, CA: Sage.

Manghani, S., Piper, A., & Simons, J. (Eds.). (2006). Images: A reader. London, UK: Sage.

Rogoff, I. (2004). Studying visual culture. In C. Handa (Ed.), Visual rhetoric in a digital world: A critical sourcebook (pp. 381–394). Boston, MA: Bedford/St. Martin's.

Rose, G. (2012). Visual methodologies: An introduction to researching with visual materials (3rd ed.). London, UK: Sage.

Stanczak, G. C. (2007). Visual research methods: Image, society, and representation. Los Angeles, CA: Sage.

Sturken, M., & Cartwright, L. (2009). Practices of looking: An introduction to visual culture (2nd ed.). Oxford, UK: Oxford University Press.

Visual Materials, Analysis of

The discipline of communication grapples with the paradox of simultaneously condemning and celebrating images: privileging words yet seduced by images. As a result, communication engages images erratically. These conflicting motivations only partially explain the trouble with analyzing visual materials. Another major explanation is the changing media matrix. With the emergence of photography and film, and now video, the Internet, and social media, there has been a seismic shift from a culture of words to a culture of images, from reading to viewing. Consequently, ways of studying images have varied greatly. This entry charts five major approaches to engaging images and suggests where the discipline is now. The bias in this entry is toward qualitative approaches, but these orientations are suggestive for quantitative analyses as well.

Since by training and habit scholars tend to be lovers of words and books, they often neglect images or approach images with the mind-set and methods of print, ensuring logocentric readings that turn images into texts. This reflex is common, but a more image-centric approach is emerging.

The traditionally dominant approach to images is one of disciplinary neglect. Communication scholars often work in a universe devoid of images. A paradigmatic example concerns analyses of the U.S. civil rights movement. In rhetoric, the civil rights movement is largely reduced to its linguistic manifestations, primarily the speeches of Martin Luther King Jr. and lesser luminaries, rarely accounting for the force of images of African Americans risking their bodies protesting in the streets. Political communication scholars remain committed to words, deploying computer-assisted word counting even as politicians spend their money on image advertising. Even when studying texts dominated by images, scholars often do not acknowledge the images. This trend is evident in the rhetorical treatment of television. In studies of sitcoms, television news, and commercials, critics engage television as if it were radio, studying transcripts and the ideological content suggested by the words, leaving the images untouched. To ignore images mutates one medium, television, into another medium, speeches or writing.

A second approach acknowledges the force of images but subordinates images to words and deploys linguistic-centric methods. This recognition of images is an important advance, but privileging words and using old methods are limiting. In this approach, images are understood as anchored by words, so scholars focus on slogans and captions to explain the meaning of images. Scholars use familiar methods such as close reading, enthymeme analysis, ideographic analysis, and semiotic analysis. This approach assumes equivalence between words and images and transparency in the communication of meanings. It also assumes that the purpose of images is to represent reality. For example, Ansel Adams's Yosemite landscapes are understood to represent the reality of Yosemite.

A third approach takes a moral stance toward images grounded in iconophobia, or fear of images. This approach subordinates methods to condemnatory moral judgments. In a moralistic approach, scholars are motivated by ideology and/or politics to take a position that condemns images. This approach is common with respect to the study of television and video games, which are often seen as having denigrative effects on people, especially children and teenagers. Many of the studies on violence in television and video games are grounded in moralism. First-person shooter video games such as Halo are condemned for causing violence in real life. This approach often explicitly asserts that images corrupt reality and are a sign of decline from a Golden Age during which print culture reigned supreme. For example, the Lincoln-Douglas debates are held up as the political gold standard by which contemporary image-based presidential campaigns are judged to be debased.

The last two approaches recognize the contemporary world as transformed, sustained, and entertained by jarring juxtapositions of endless images on the public screens of smartphones, computer tablets, gaming devices, laptops, and televisions.
In the ceaseless circulation of images in the media matrix, speed annihilates contemplation, distraction disrupts attention, affect eclipses meaning, and images move from representing reality to constituting it.

In a fourth approach, visual scholars are careful in their work to provide contemporary and historical context in order to make sense of images. This method has yielded rich accounts of images and their milieus. Yet context can slip into understanding, for context is always a fiction of the critic's imagination. A revealing example of the issue of context is the May 1, 2003, "Mission Accomplished" photo of President George W. Bush aboard the aircraft carrier USS Abraham Lincoln. This photograph provoked opposing reactions at the time: some readers saw it as U.S. military propaganda, whereas others read it as a sign of victory. Given the absolute heterogeneity of audiences, context is always utterly undecidable and interpretations infinite. The various audience readings are plausible and suggest that the general context of war cannot determine the meaning of this image. In the ensuing months and years, the troubles in Iraq continued to proliferate the meanings of the image. Historical texts, with their varied and multiple contexts, further amplify undecidability. Critics can look at the multiple partial and contingent contexts of any image, but they cannot use contexts for definitive interpretations.

Poststructuralism has started to influence communication scholars, resulting in a strikingly different approach to engaging images. For poststructuralist thinkers, images are fundamentally incommensurate with words. Instead of being abstractions, like words, that represent reality, images are intractably immanent, absolutely particular things in the world. To push this point, images are events that carry affective force in the world. This fifth approach to images asks different questions. Instead of asking what images mean, it asks what images do in the world. Instead of judging the representational fidelity of images, this approach traces the force of images as they change the world rather than represent it.
When video footage of Staten Island police choking Eric Garner to death as he gasped "I can't breathe" went viral via social media platforms in 2014, the question for communication scholars was not about the video's fidelity or whether the video was in context; instead, the question was about how this video was transforming race relations in the United States. How does this image of Garner dying effectively move people to action? How does this video enable citizens to challenge police authority? How does this video undermine U.S. credibility around the world? What do images do in the world?

Kevin Michael DeLuca

See also Content Analysis, Definition of; Content Analysis, Process of; Critical Analysis; Peace Studies; Performance Studies; Public Behavior, Recording of; Visual Communication Studies

Further Readings

Barthes, R. (1984). Camera lucida: Reflections on photography (R. Howard, Trans.). London, UK: Flamingo/Fontana.
Deleuze, G., & Guattari, F. (1987). A thousand plateaus: Capitalism and schizophrenia. Minneapolis, MN: University of Minnesota Press.
DeLuca, K. M., & Peeples, J. (2002). From public sphere to public screen: Democracy, activism, and the violence of Seattle. Critical Studies in Media Communication, 19(2), 125–151.
Elkins, J. (1998). On pictures and the words that fail them. New York, NY: Cambridge University Press.
Finnegan, C. A. (2003). Picturing poverty: Print culture and FSA photographs. Washington, DC: Smithsonian.
Hariman, R., & Lucaites, J. (2003). Public identity and collective memory in U.S. iconic photography: The image of "Accidental Napalm." Critical Studies in Media Communication, 20, 35–66.
Johnson, D. (2007). Martin Luther King Jr.'s 1963 Birmingham campaign as image event. Rhetoric & Public Affairs, 10(1), 1–25.
Latour, B. (2013). An inquiry into modes of existence. Cambridge, MA: Harvard University Press.
Mitchell, W. J. T. (1995). Picture theory: Essays on verbal and visual representation. Chicago, IL: University of Chicago Press.
Sontag, S. (1977). On photography. New York, NY: Farrar, Straus and Giroux.
Vivian, B. (2007). In the regard of the image. JAC: Rhetoric, Writing, Culture, Politics, 27(3–4), 471–504.

Vote Counting Literature Review Methods

A literature review is an effort to summarize or provide perspective on existing empirical or research manuscripts. A variety of types of reviews can be conducted to examine measurement issues, evaluate a theory, propose a new theory, or summarize the state of current knowledge. One challenge is determining which method should be used to provide a detailed summary of the existing empirical literature. The traditional or historical method of establishing a finding or conclusion from empirical literature relied on a vote counting method of summarizing the results of the significance tests from individual investigations. This entry examines the process of this method and the common error associated with it.

Process

In this method, a scholar proceeds by gathering all the relevant material or investigations considering a particular outcome or relationship. The scholar then makes a chart reflecting the outcome of each investigation as either significant or nonsignificant, based on the results of the statistical test performed. The results are tabulated, and depending on what the majority of the studies indicate, an outcome is established. This process is much like the taking of yeas and nays in a legislative body; the votes in this case, however, are the results of the significance tests. Depending on the outcome of that tally, the relationship is considered to exist or not. The process relies on the assumption that each independent significance test result carries equal value in assessing the existence of the relationship. Much like the process of a democratic election, each finding or outcome is tabulated and contributes one vote toward the conclusion. The hidden assumption is that each investigation provides an independent test of equal value with all the other investigations under consideration. Usually no direct or systematic means is provided for handling issues such as sample size, measurement quality, or any other methodological feature that may affect the significance test.
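As a minimal sketch of the tallying logic just described (the study outcomes here are hypothetical labels, not data from any real review), vote counting amounts to nothing more than counting which label occurs more often:

```python
from collections import Counter

# Hypothetical study outcomes: each entry is the result of one study's
# significance test ("sig" or "ns"), treated as an equal-weight "vote."
outcomes = ["ns", "ns", "ns", "sig", "sig", "sig"]

votes = Counter(outcomes)
# The vote-counting conclusion rests solely on which tally is larger,
# ignoring sample size, measurement quality, and effect magnitude.
if votes["sig"] > votes["ns"]:
    conclusion = "relationship exists"
else:
    conclusion = "inconclusive or absent"
print(votes["sig"], votes["ns"], conclusion)
```

Note that nothing in this procedure weights a study by its sample size or effect size; every test result counts the same, which is precisely the assumption criticized below.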

Common Errors With This Process

The process is problematic at many levels. The assumption that each outcome should be considered equal in value to the others lacks statistical support. Consider the set of six studies displayed in Table 1: three of the outcomes are significant and three are nonsignificant. A scholar representing the results based on the outcomes of the significance tests would find inconsistent or weak support for the conclusion that a relationship exists. The results portrayed by the significance tests demonstrate inconsistency and therefore uncertainty. The challenge then becomes finding why the inconsistency exists, because a consistent relationship apparently fails to exist.

Table 1  Expected Distribution of Findings: Study Outcomes

Data Source    Significance Test Results
Study 1        ns
Study 2        ns
Study 3        ns
Study 4        sig
Study 5        sig
Study 6        sig

The typical solution to this problem is to consider potential sources of moderation that would explain the different outcomes. Table 2 displays how one might examine the six studies in search of an explanation for the observed inconsistency. The examination considers potential methodological features related to measurement, sample, or other conditions under which the data were collected as a basis for explaining the inconsistency. In effect, a vote is taken on the ability of each characteristic of the investigations to "account" for the inconsistency in the reported results. Essentially, the argument is that the source of inconsistency is methodological artifact: differences among the studies create conditions that provide different tests. Identifying those differences permits grouping the studies and uncovering systematic sources of variability that might explain the apparent inconsistency in the outcomes of the significance tests.

Table 2  Coding Study Features

Data Source    Significance Test Results    Sample Gender    Sample Age
Study 1        ns                           Male             Grade School
Study 2        ns                           Female           High School
Study 3        ns                           Mixed            Grade School
Study 4        sig                          Male             College
Study 5        sig                          Female           Grade School
Study 6        sig                          Mixed            High School

The problem with the entire process is that the inconsistency may exist because the significance test is not accurate. Table 3 displays the relationship between the two variables as measured using a correlation coefficient. The display indicates that each study actually observed exactly the same relationship between the two variables of interest. While the magnitude of the relationship was identical, the outcome of the significance test was not. The sample size was smaller for the three studies reporting a nonsignificant relationship; in statistical terms, the tests in those three investigations lacked the statistical power to identify the relationship as significant. What occurred was not an inconsistency in outcome; instead, the apparent inconsistency resulted from reliance on a significance test whose value depends on the size of the sample.

Table 3  Study Outcomes Using a Correlation Coefficient

Data Source    Significance Test Results    Observed Effect (r)    Sample Size
Study 1        ns                           .20                     50
Study 2        ns                           .20                     50
Study 3        ns                           .20                     50
Study 4        sig                          .20                    100
Study 5        sig                          .20                    100
Study 6        sig                          .20                    100
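The dependence of the significance test on sample size can be verified directly. The following sketch (assuming SciPy is available; the r and n values mirror the hypothetical studies in Table 3) converts a correlation into the standard t statistic and a two-tailed p value:

```python
from math import sqrt
from scipy.stats import t as t_dist

def correlation_p_value(r, n):
    """Two-tailed p value for a Pearson correlation of r with sample size n,
    using the t transformation: t = r * sqrt(n - 2) / sqrt(1 - r^2)."""
    t_stat = r * sqrt(n - 2) / sqrt(1 - r**2)
    return 2 * t_dist.sf(abs(t_stat), df=n - 2)

p_small = correlation_p_value(0.20, 50)   # roughly .16, nonsignificant at .05
p_large = correlation_p_value(0.20, 100)  # roughly .046, significant at .05
print(round(p_small, 3), round(p_large, 3))
```

The identical effect (r = .20) is nonsignificant at n = 50 but significant at n = 100, reproducing the pattern in Table 3 without any inconsistency in the underlying relationship.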


The outcome is that three of the studies provide false negatives; they indicate the absence of a statistical relationship when one in fact exists. If one were taking a "vote" on the relationship using the typical method of counting significant versus nonsignificant results, the conclusion would probably be that no reliable relationship is reported in the literature. The argument would be that while some studies report a significant relationship, others do not. Usually, at that point the person conducting the literature review would begin to search for moderator variables. Essentially, a moderator variable provides a characteristic (e.g., an aspect of the measurement, a feature of the sample) that would account for the inconsistency in the findings. In the example provided, however, the size of the relationship remains constant across studies, so no statistical inconsistency exists. The search for a means of explaining the inconsistency begins with a flawed premise, as no real inconsistency exists among the results of the investigations: they all observed the same mathematical relationship without any variability in the findings.

Reducing Random Error

The results of this example represent the common error of most vote counting methods of reviewing the literature. Vote counting depends on the accuracy of each "vote," or study outcome, to draw a conclusion. When the results of a significance test routinely vary based on something like sampling error, reliance on the process to establish or evaluate relationships is problematic at best. The results of any single investigation based on a limited sample carry a potential for inaccuracy. Taken across studies, the set of investigations contains results that are inaccurate for no other reason than random statistical error. The important element to emphasize is that the errors in this case are random; the error in significance test results takes place even if all elements of the design, testing, and analysis are flawless. Random sampling error cannot be eliminated, identified, or reduced without direct reference to, or inclusion of, statistical analysis. The problem, in a very real sense, is a random statistical one, and it requires a statistical means to identify and address the issue. The only way to reduce random sampling error within a single investigation is to increase the sample size.

The process of narrative review cannot really begin to identify conclusions or methodological influences without eliminating, or seriously reducing, this source of random error. The problem with examining the outcomes of individual investigations is that, without considering this factor formally, any argument about the cause of observed variability may be trying to explain an error that is random and not the result of any systematic cause. For example, consider the results of flipping a coin: of 10 flips, five result in heads and five in tails. If asked to make sense of the pattern of heads and tails, a person would likely object because the pattern is random and can only be interpreted as random. Proposing theories about air currents or gravity to explain the inconsistency in the results may provide entertainment, but it does little to address the process, because the outcomes are random. The failure to provide a means of reducing or eliminating random results makes it very difficult to generate results capable of systematic interpretation.
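One statistical means of addressing the problem, sketched here in deliberately simplified form (a sample-size-weighted average of effects in the spirit of the Hunter and Schmidt approach, using the hypothetical values from Table 3; SciPy assumed), pools the effects across studies before testing, so that random sampling error averages toward zero:

```python
from math import sqrt
from scipy.stats import t as t_dist

# (r, n) pairs from the six hypothetical studies in Table 3.
studies = [(0.20, 50)] * 3 + [(0.20, 100)] * 3

total_n = sum(n for _, n in studies)
# Sample-size-weighted mean correlation: larger studies count for more,
# unlike vote counting, where every study gets one equal vote.
r_bar = sum(r * n for r, n in studies) / total_n

# Test the pooled estimate against the combined sample size.
t_stat = r_bar * sqrt(total_n - 2) / sqrt(1 - r_bar**2)
p = 2 * t_dist.sf(abs(t_stat), df=total_n - 2)
print(round(r_bar, 2), p)
```

Pooled across all 450 participants, the same r = .20 that produced three "nonsignificant votes" is unambiguously significant, illustrating why aggregating effects statistically, rather than counting test outcomes, is the appropriate remedy. (A full meta-analysis would also correct for artifacts such as measurement unreliability; this sketch omits those steps.)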

Correcting for Random Error

The inability to eliminate the random effect of sampling error in social science (or natural science) investigations creates a fundamental problem: the impact of random error has to be accounted for in the examination. In most single investigations, the estimation of a confidence interval provides an examination of the magnitude of the impact of sampling error in the estimation of any parameter or effect. The significance test examines the probability of a Type I, or false positive, result due to sampling error. A power analysis examines the probability of a Type II error, or false negative, when estimating a particular size of effect with a given sample size. Basically, at the level of the individual investigation, the process of examination is relatively well established.
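A rough power calculation shows why nonsignificant "votes" occur so often with modest samples. The sketch below (an approximation using the Fisher z transformation of the correlation; SciPy assumed) estimates the probability of detecting r = .20 at the two sample sizes used in the earlier tables:

```python
from math import atanh, sqrt
from scipy.stats import norm

def correlation_power(r, n, alpha=0.05):
    """Approximate power of a two-tailed test to detect a population
    correlation r with sample size n, via the Fisher z transformation."""
    z_effect = atanh(r) * sqrt(n - 3)          # noncentrality of the test
    z_crit = norm.ppf(1 - alpha / 2)           # two-tailed critical value
    # Probability the observed z falls in either rejection region.
    return norm.sf(z_crit - z_effect) + norm.cdf(-z_crit - z_effect)

power_n50 = correlation_power(0.20, 50)    # roughly .28
power_n100 = correlation_power(0.20, 100)  # roughly .51
print(round(power_n50, 2), round(power_n100, 2))
```

Even at n = 100, power is only about one half, consistent with Hedges's estimate that Type II error rates in the social sciences run near 50%; at n = 50, the test misses a true r = .20 roughly 7 times in 10.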


Thus, a systematic means of identifying and accounting for the impact of random error is needed. The mean of any set of numbers, particularly in a normal distribution, is defined as the expected value: the value for which the sum of the random errors is zero. When one speaks of the average, the term expected value indicates that random error has been balanced out, so that the mean represents the best estimate of the value one should expect. The challenge of determining the existence and size of a relationship between variables when reviewing a literature becomes difficult because of the impact of random error on the outcome of a significance test. The level of false positive (Type I) error is expected to be 5%, and the expected level of Type II (false negative) error in the social sciences has been calculated by Larry Hedges to be about 50%. The problem with simple vote counting is the inability to separate the random source of variability given such high levels of error. More studies do not change the percentage; in some respects, a larger number of studies simply adds to the potential for a greater degree of confusion.

Mike Allen

See also Meta-Analysis; Type I Error; Type II Error; Writing a Literature Review

Further Readings

Allen, M., Titsworth, S., & Hunt, S. K. (2009). Quantitative research in communication. Los Angeles, CA: Sage.
Card, N. A. (2012). Applied meta-analysis for social science research. New York, NY: Guilford Press.
Cooper, H., & Hedges, L. V. (1994). Handbook of research synthesis. New York, NY: Russell Sage.
Hedges, L. V. (1987). How hard is hard science, how soft is soft science? The empirical cumulativeness of research. American Psychologist, 42(5), 443–455.
Schmidt, F., & Hunter, J. (2015). Methods of meta-analysis: Correcting error and bias in research findings (3rd ed.). Thousand Oaks, CA: Sage.

Vulnerable Groups

Vulnerable groups involve human samples considered particularly susceptible to coercion or undue influence in a research setting. A vulnerable group includes persons who may be incapable of understanding what it means to participate in research and/or who may not understand what constitutes informed consent. Individuals considered vulnerable may, for various reasons, have a diminished capacity to anticipate, cope with, resist, and/or recover from the impact of a natural or man-made hazard. Vulnerable groups may also consist of individuals who are unable to care for themselves and/or may have an increased chance of suicide, self-harm, or the likelihood of harming others.

Researchers' assignment of vulnerable group status is both dynamic and relative, because the nature of those groupings is culturally dependent and those labeled as vulnerable are perceived as being in danger, at risk, under threat, susceptible to problems, helpless, and/or in need of protection or support. A situation that makes one person vulnerable may not make another person vulnerable. Being identified as a vulnerable group participant may also overlap with being identified as a victim or a troubled or troublesome individual.

Groups considered vulnerable can vary across academic disciplines, based on the frequency of practitioners' or researchers' interactions with the characteristics being studied. However, a nonexhaustive list of groups considered vulnerable across many human-research fields includes children, the elderly, single parents, people with disabilities, ethnic minorities, those who are mentally disabled, asylum seekers and refugees, prisoners, pregnant women and fetuses, addicts, individuals with little social support, patients with an acute illness or chronic pain, victims of intimate and other forms of violence, and those who are homeless, economically disadvantaged, poor, illiterate, or unemployed.
Some scholars argue that even students used as study participants are vulnerable if their research participation in a specific study (without reasonable alternatives) is mandatory for class credit. This entry discusses the types of vulnerable groups, provides examples of inappropriate handling of vulnerable groups in research, and presents guidelines for protecting vulnerable groups during research studies.

Identifying Vulnerable Groups

Clearly, there are many different types of vulnerability that contribute to vulnerable group status. One type is innate/personal vulnerability, defined by characteristics unique to an individual person that may be sensitive. For example, innate/personal vulnerabilities may include psychological issues, such as anxiety disorders, Alzheimer's disease, and autism, or issues affecting self-esteem, such as obesity, illiteracy, or nonstandard appearances. Another type of vulnerability includes structural/contextual/environmental factors, or circumstances that lead to a group status assigned by a culture or society. Being homeless, using particular drugs, being a (legal or illegal) sex worker, or living in a war-torn country are examples of external vulnerabilities determined by the structural norms or environmental factors of a given culture.

Vulnerability may depend not only on how a cultural label is employed but also on how the participants themselves perceive their vulnerable status. Emic vulnerabilities are those for which participants possess particular self-awareness: they are aware that their group status or identity is a stigmatized (or otherwise sensitive) one. Emic vulnerabilities may be based on any number of internal or external contributing factors (e.g., mental illness, intimate violence). Etic vulnerabilities, conversely, are those for which the participants themselves may not be aware of their vulnerable status or may not personally identify with the group label that others consider sensitive. In research contexts, etic vulnerabilities are often based on researchers' identification of a particular demographic variable or group status that has been associated with health problems or social risks in previous research. For example, being a member of a denigrated caste or living below the poverty level are assumed to be stigmatizing situations and, thus, labeled as vulnerable.

Ultimately, vulnerability is shaped by a number of intersecting influences such as individual perceptions, situations, and social, historical, political, and cultural factors. Being considered a vulnerable group can also vary based on which theoretical perspective the researcher privileges. For example, a feminist approach to protecting vulnerable groups would assert that in a primarily patriarchal society, women are more vulnerable than men, whereas a researcher with a Marxist perspective would opine that workers in low-paid jobs are a vulnerable group as a result of capitalism. Clearly identifying what a vulnerable group is can be further complicated by the fact that group members have multiple identities and lack homogeneity, and their membership may be transient.

Vulnerable Groups in Research and Resulting Safeguards

The Tuskegee Case

One well-known example of the inappropriate handling of a vulnerable group is that of the Tuskegee experiments. From 1932 to 1972, the U.S. Public Health Service conducted a study of syphilis among 600 African American males (400 with the disease and 200 without) living in the countryside of Tuskegee in Macon County, Alabama. The researchers informed participants, who were mostly poor and uneducated, that the treatment they were receiving (i.e., aspirin, spinal taps, and tonics known to be ineffective) would help cure their syphilis. However, despite the emergence of a genuinely effective treatment during the study (penicillin, by the mid-1940s), more than 400 men remained intentionally untreated by any effective means. Members of this vulnerable group were not told they were participating in an experiment examining the long-term effects of the disease and were further uninformed that effective treatments were being withheld.

Years later, as a result of the experiment, Senate hearings were held, lawsuits were filed, and new rules for medical and scientific research were implemented. One clear mandate that came out of this new legislation was that human subjects researchers must prioritize protection for vulnerable groups. In 1997, President Bill Clinton apologized to the remaining living men and to the African American community. Nonetheless, "Tuskegee" is still used as a case example for explaining the importance of scientific research review and monitoring, especially when using vulnerable populations.


The Bangladesh Refugee Case

Another case illustrates the importance of researchers' awareness that harm can occur to vulnerable groups at any stage of the research process: in recruitment, during research, after the study is concluded, and when results are published. A study conducted with members of a refugee camp in Bangladesh illustrates this point. In this case, the researchers were seen by the study participants (i.e., the refugees) as having the power to effect change in their political and living status. After the study was completed, a number of aftereffects emerged. Participants approached the researchers seeking help and assistance for families who were struggling within the camp system. The stories participants provided in the study and afterward, describing their neglect and the mismanagement of resources by the very institution that was also protecting them, resulted in official repercussions against this vulnerable group for a perceived breach of confidence after the researchers left. In addition, criminal elements in the camps threatened and punished participants for the apparent favoritism shown them by the researchers.

As a result of this research project, 100 families had to be relocated due to these unforeseen ramifications; the relocation may have benefited those families, but it left others in the camp continuing to suffer because of what had been revealed. This example shows that researchers need to be careful when publishing their data without considering the potential impact on the communities or individuals involved. Vulnerable populations may not possess a full understanding of what informed consent constitutes and may be surprised and displeased to see themselves on film, quoted in articles, or broadcast publicly.

Guidelines for Protection of Vulnerable Group Participants

The U.K. Safeguarding Vulnerable Groups Act 2006 provides clear criteria for what constitutes a vulnerable adult. Anyone may be considered a vulnerable participant if he or she is in receipt of health or social care, lives in sheltered housing, requires assistance in the conduct of his or her affairs, resides in prison or is in contact with probation services, is detained under Immigration Act powers, or is involved in any activities targeting vulnerable adults (e.g., education and training). These criteria were established to protect those who have no other legal access to redress in cases of grievance.

When using a vulnerable group sample, it is important and ethical for a researcher to provide appropriate and additional safeguards for participant well-being. The majority of universities and organizations conducting research have Institutional Review Boards that review and approve research proposals before researchers are allowed to pursue a research project. Because much communication research involving human subjects is considered social science, the communication discipline adopts the safety and research guidelines established by the American Psychological Association, particularly the requirement that all participants possess decisional capability when granting consent to take part in a research study. A member of a vulnerable group must be able to provide informed consent; sometimes the individual is capable of providing this consent, and at other times someone else must provide it on the individual's behalf. When working with children, mentally disabled individuals, or elderly individuals, it is necessary for researchers to understand what this entails. Decisional capability is determined when a person provides evidence that he or she has the ability to (a) understand that he or she has a choice, (b) understand relevant information, (c) appreciate the situation of the study and its likely consequences, and (d) rationally manipulate the information presented to him or her. Highly vulnerable groups, such as persons who are mentally disabled or elderly persons with dementia or Alzheimer's disease, may require a professional assessment of their actual ability to self-consent based on these four criteria. In other cases, less formal procedures may suffice (e.g., a legal guardian or parent grants consent on behalf of another).
When working with the aforementioned vulnerable groups, there are particular steps a researcher should take to gain "true" informed consent. First, an attempt should be made to gain consent directly from the participant. If the participant is unconscious or lacks decision-making capability, the researcher should document his or her observations in the research and medical records and then gain surrogate consent from a guardian. If the researcher finds the participant of questionable decisional capability but not unresponsive, the researcher can describe the research to the participant, document the assessment of the participant's decisional capacity relevant to the information relating to the study, inform the participant of the intent to obtain the surrogate's consent, and gain assent (i.e., approval to participate that is not legally binding "consent") from the participant. If the participant expresses resistance to the intent to get a surrogate's approval or does not assent to participate in the study, the researcher should exclude the participant from the study. Finally, if assent and consent are both obtained, the researcher should document this fact. Following these steps when working with a vulnerable group participant ensures that an ethical research process has been followed.

Children are also considered a highly vulnerable group, and because of the focus on family communication, researchers in the communication discipline often find themselves wanting to use children as participants; therefore, special consideration is necessary when attempting to gain consent. When a researcher approaches children, he or she must first help the child understand, in a developmentally appropriate manner, the purpose of the study. Second, the researcher should disclose the nature of the study and what the child is likely to experience, such as the types of questions to be asked and about what topics. Third, the researcher should assess whether the child understands the information that has been provided and, if so, secure the child's willingness to participate (i.e., assent). Regardless of whether the child chooses to participate, the parent or legal guardian still needs to sign the informed consent form on behalf of the child.
Many researchers find themselves using a vulnerable group population because the specific vulnerability is what requires exploration. For example, research may be needed to understand a vulnerability and to address the treatment of, improve conditions for, or change policies on behalf of those who experience a particular condition. When choosing a vulnerable group to research, the researcher must adequately justify the choice and provide detailed safeguard measures to protect the vulnerable individuals; anticipated benefits that clearly outweigh potential risks must also be clearly documented. All potential risks and costs associated with vulnerable groups should be offset by direct and tangible benefits to those who do choose to participate.

The ethical standards of researchers are even more important when working with a vulnerable group sample than with a nonvulnerable sample. For example, researchers must take even greater care in conducting their professional work with integrity and must respect the rights and dignity of those involved in and affected by their research. Researchers must ensure the physical, social, and psychological well-being of those who take part in the study and must carefully interpret the findings of their research because of the impact the findings may have on the lives of those who will be affected long after the researcher leaves and the work is published. It is important to note that the principal investigator holds the ultimate responsibility for protecting the safety, rights, dignity, and welfare of all research participants.

Nancy Brule and Jessica J. Eckstein

See also Activism and Social Justice; African American Communication and Culture; Authorship Bias; Communication and Culture; Controversial Experiments; Cultural Sensitivity in Research; Cultural Studies and Communication; Disability and Communication; Feminist Analysis; Latina/o Communication; Underrepresented Groups

Further Readings

Brandt, A. M. (1978). Racism and research: The case of the Tuskegee syphilis study. The Hastings Center Report, 8(6), 21–29.
Higgins, Y. (n.d.). Protecting vulnerable populations during research: Guidelines for safeguarding the health and welfare of children. Retrieved June 3, 2015, from http://www.cgirb.com/irb-insights/protectingvulnerable-populations-during-research-guidelines
Larkin, M. (2009). Vulnerable groups in health and social care. Thousand Oaks, CA: Sage.
Pittaway, E., Bartolomei, L., & Hugman, R. (2010). "Stop stealing our stories": The ethics of research with vulnerable groups. Journal of Human Rights Practice, 2(2), 229–251. doi:10.1093/jhuman/huq004
Reverby, S. M. (2009). Examining Tuskegee: The infamous syphilis study and its legacy. Chapel Hill, NC: University of North Carolina Press.

W

Wartime Communication

The term wartime communication suggests a variety of approaches and perspectives. Wartime communication can include functional modes of communication, such as radio signaling or satellite navigation; social dimensions of communication, involving correspondence between the war front and the home front; and informative aspects of communication, such as press coverage and other visual representations of war. Because wartime communication encompasses such a vast landscape, this entry focuses only on American conflicts. The entry expands on these three types of wartime communication and offers brief overviews of three approaches to its study. Research on wartime communication has undergone significant change as widespread personal computer use, Internet accessibility, and the proliferation of social network sites in war zones present communication scholars with new texts and spaces for analysis and, consequently, new methodological challenges and opportunities.

Modes of Communication

War has always been a source of inspiration for advancements in communication technology. For example, the walkie-talkie was developed during the Second World War. During the 20th century, wartime communication predominantly referred to functional modes (connecting planes to the ground, coordinating strikes and ambushes, reporting needs to medics) as well as informational elements (how professional war reporters, photographers, and politicians frame the war effort for public audiences). But the post-9/11 wars in Iraq and Afghanistan introduced a previously overlooked social dimension to wartime communication through the prevalence of mobile, digital communication technology among troops in the field. The 2003 prisoner abuse photos taken at Abu Ghraib, Iraq, and the 2010 WikiLeaks release of thousands of classified documents into the public domain are two examples of the way increased opportunities for sociality (enabled by advancements in mobile, digital, and social media) have led to more porous boundaries between the war front and the home front.

Prior to this millennium, military officials and professional war reporters were the principal arbiters of war's image and representation. As such, much of the existing scholarship on wartime communication evaluates professionally produced texts for their persuasive force or epistemological contribution to civilian, public audiences. By the early 2000s, however, developments in participatory media, which coincided with the wars in Iraq and Afghanistan, enabled troops on the ground to document and disseminate the war for themselves. In other words, just as television defined wartime communication during the Vietnam War and the First Gulf War, the Internet defined wartime communication in the wars in Iraq and Afghanistan.



Outlooks and Approaches

In addition to the three types of wartime communication—functional, social, and informational—there are also three main fields of inquiry within the study of wartime communication. The first critiques public and political rhetoric surrounding war. The second focuses on mass mediated coverage of war. The third examines the documentary efforts of the troops themselves. Each outlook suggests respective texts and methodological approaches.

War and Public Dialogue

The first field of inquiry concerns itself with public discourses about war. This view examines the nature of war rhetoric and its influence on society's view of war. Rhetorical critics analyze everything from the vilification processes that lead to war to the rhetorics of protest and social movement that emerge from war. Networked communication software has opened avenues for connection and organization. As a result, communication scholars interested in 21st-century wartime social movements focus less on individual leaders and orators and more on symbolic action and the dynamic structural factors giving rise to a movement. Wartime communication scholars interested in visual rhetoric consider the extent to which particular photographs from war can prompt important questions and ignite public debate and, conversely, how other visual discourses, such as the "shock and awe" campaign of the Persian Gulf War, might influence audiences to support the war. One point of criticism among communication scholars is that, despite these being labeled the most mediated wars in history (due to the troops' ability to document and share footage online), the public has not engaged in rigorous dialogue about the United States' involvement in Iraq and Afghanistan.

Propaganda and Professional War Reporting

The second field of inquiry explores mass mediated communication about war. The practice of mediating war dates back millennia. From the oldest Stone Age cave paintings to sculptures in ancient Greece to pocket-sized digital cameras on the battlefield, humans have always taken an interest in reporting war. Prior to the wars of this century, mass mediated war coverage came almost exclusively from professional reporters or photojournalists. As such, communication scholarship in this area focuses mostly on mass media (newspapers, radio, TV). Debates animating this strain of scholarship center on matters of authenticity, accuracy, and exposure; the widely held notion is that war reporters have a social obligation to expose the horrors of war in order to deter it. Others interested in this line of inquiry use close textual analysis to identify recurring conventions or media frames associated with broadcast wartime communication. Some, for example, examine the extent to which appeals to patriotism could serve propagandistic ends.

Since 2010, fewer and fewer scholars approach wartime communication through the lens of broadcast media because of the recognition that communication increasingly operates as a network. For example, even professional embedded photojournalists use social network services and mobile apps to capture and share news from the front. What is more, the ability of troops to publicize their own war stories has further destabilized the position of war correspondents within traditional journalism.

Warrior-Produced Media

The third field of inquiry considers content produced by the troops themselves. This area of research has blossomed since the emergence of the infamous Abu Ghraib prison photos in 2004. Prior to this century, warrior-produced content consisted mostly of personal letters, snapshots, and journals preserved in the private space of the home or archived in libraries and museums. The proliferation of Internet-enabled digital communication technology dramatically increased not only the amount of content warriors were able to produce but also the speed with which they could share it. Moreover, the networked communication environment expanded circulation patterns for wartime communication across fronts, beyond close friends and family members to include hundreds of social network site members.

At the start of the wars in Iraq and Afghanistan, military blogging, or "milblogging," was a popular way for service members to share their experiences with broader audiences. By 2005, YouTube began to replace the blog format because video proved to be a more dynamic way to capture and share the war experience. Most video clips posted to YouTube contained footage of combat operations, weaponry, destruction, explosions, and death. Communication researchers note how warrior-produced videos resemble some of the most popular war-themed video games, identifying a growing conflation between war and entertainment. Scholarly conversations about how the wars were playing out on YouTube peaked around 2009. Since then, social network sites (SNS) have proven to be fertile ground for communication researchers to study wartime communication from a boots-on-the-ground perspective. Imagining SNS communication as a situated, localized discourse encourages more ethnographically minded methodologies, such as online and offline fieldwork and interview research.

Lisa Silvestri

See also Alternative News Media; Blogs and Research; Close Reading; Discourse Analysis; Field Notes; Frame Analysis; Internet as Cultural Context; Interviews for Data Gathering

Further Readings

Achter, P. (2010). Unruly bodies: The rhetorical domestication of twenty-first-century veterans of war. Quarterly Journal of Speech, 96, 46–68. doi:10.1080/00335630903512697
Alper, M. (2013). War on Instagram: Framing conflict photojournalism with mobile photography apps. New Media & Society, 1–16. doi:10.1177/1461444813504265
Carruthers, S. L. (2008). No one's looking: The disappearing audience for war. Media, War & Conflict, 1, 70–76. doi:10.1177/1750635207087626
Kellner, D. (2008). War correspondents, the military, and propaganda: Some critical reflections. International Journal of Communication, 2, 297–330.
Silvestri, L. E. (2015). Friended at the front: Social media in the American warzone. Lawrence, KS: University Press of Kansas.
Smith, C. M., & McDonald, K. M. (2011). The mundane to the memorial: Circulating and deliberating the war in Iraq through vernacular soldier-produced videos. Critical Studies in Media Communication, 28, 292–313. doi:10.1080/15295036.2011.589031
Stahl, R. (2006). Have you played the war on terror? Critical Studies in Media Communication, 23, 112–130. doi:10.1080/07393180600714489

Within-Subjects Design

One of the first key steps in a research project is developing the design of the project. While experiments can be conducted in myriad ways, the two primary approaches to project design in communication and other disciplines are within-subjects designs and between-subjects designs. A within-subjects design, the topic of this entry, studies the effects of treatments on every individual within a single group over a period of time. This entry further defines within-subjects design before providing hypothetical situations to highlight the use and effect of this design, along with some modifications. The advantages and disadvantages inherent in within-subjects design and the implications of using that particular design are also considered.

In a within-subjects design (also referred to as a within-group design), repeated measures are used on one group. Clearly, in a world of billions of people, the opportunities for either interpersonal or mass communication are enormous. However, scholars may not have either the resources or the time to conduct large-scale research. For example, should a project be designed to determine who votes for Candidate A or Candidate B in an election, and then those findings be used to make generalizations about a given population? The selection of an intact group, whether the entire population of New York City or a small village in Kansas, might prove problematic for generalization. A well-constructed within-subjects design provides a valid, reliable, and repeatable method that can serve the ends sought by the investigator.

Examples of Within-Subjects Design

Hypothetically, basic combat training in the military provides an example of a within-subjects design. The first characteristic of this event is the


formation of groups of about 40 to 50 individuals. The second characteristic of training is the interaction the participants have with training instructors, who work with the participants on physical training, mental conditioning, military knowledge, and decorum. The goal of basic training involves taking a person at one stage of life and experience and creating a person who acts and thinks like a soldier.

The analysis would probably involve a pretest as well as a posttest, with a definable goal: for the participant to meet or surpass the minimal scores required by the military for the performance of duties. The pretest could provide an initial fitness assessment. During this assessment, each individual participant (or "recruit" in this military scenario) must do as many push-ups and sit-ups as possible in an allotted time and run a given distance while being timed. The expectation is that the person, after completing weeks of training, will be able to meet minimal performance requirements for those tests.

For this example, assume that the group as a whole did poorly on the push-up portion of the initial assessment. The training instructors would receive these data and then decide what program of training offers the most effective means for improving the performance of the recruits. Over the course of the next few weeks, the group engages in designated push-up and strength-training exercises. The belief of the instructors is that, should this training be successful, the group as a whole will demonstrate improved performance on the push-up test at the next assessment. At the conclusion of basic training, the posttest occurs. The posttest constitutes the final physical fitness assessment.
Should the training operate as intended, the repeated measurements should show the group as a whole meeting or surpassing the minimal physical fitness goals, with incremental improvements made from each prior fitness assessment.

The previous example of military basic training provides a description of a within-subjects design, one actually employed by the armed services. The example fits the within-subjects design in several dimensions. The use of a pretest—the initial fitness assessment—provides a beginning or starting place from which to measure change. A within-subjects design ultimately intends to measure a specific change over time. The next element involves the use of training by the instructors. Of course, a control group could have been used, in which one group of recruits would not receive any additional or specific physical training, but that would be a between-subjects design. The posttest includes the final physical fitness test, repeating the original pretest. A researcher could look at the scores of the pretest, follow-up assessments, and posttest to measure any change in the scores over time in the within-subjects design. If the number of recruits who either met or surpassed the minimum standards of the assessments increased from pretest to posttest, one can conclude that the training produced a positive impact. While this example seems obvious and simple, the logic of the within-group design should be considered straightforward.

A perusal of academic journals might reveal that a particular participant group attended a certain academic institution. In this hypothetical investigation, we decided to measure the effectiveness of electronic solicitations in getting students to fill out online course evaluations of the instructor. The subjects of the investigation are students enrolled in a basic communication course. The within-subjects design becomes evident in this example. Much like the recruits in basic combat training, the number of students in this communication course is not likely to change a great deal. To successfully implement a within-subjects design, there needs to be the assumption that the same people can be evaluated at each of the multiple measurements involved. Failure to have the same set of people at each time period seriously restricts the use of this approach. There exists an opportunity for repeated measures to be taken with this group.
For example, the instructor of the course could verbally request that the students participate in the evaluation of the course. This verbal announcement could take many forms. In one week, the instructor could politely ask the students to participate in the activity. During the second week, the instructor could encourage participation via a boost in the students' final grade for the course. Conversely, during the third week the instructor might threaten the final grade of students failing to participate. Given a variety of messages over time, the different and increasing levels of compliance could be assessed. However, one difference exists between this example and the previous example using military recruits. In this example, there exists no real opportunity for a pretest. There are only the various messages and the measurement of successfully soliciting student participation in completing the online instructor evaluation for the course. The design could be further complicated by the use of multiple sections for the same or different instructors, each using a different order of messages to gain student compliance with the request.

Another option is to permit students to take part in several course evaluations throughout the semester. Various systems of messages and reminders could be employed. Students possessing smart devices could receive reminders electronically, with the level of participation examined. The opportunity for repeated measures might involve whether or not a reminder was sent at a particular wave of evaluation. Each separate solicitation could be done electronically, with either a promise of anonymity or a grade incentive provided. The ability exists to generate various options and comparisons that record the level of response.

The main difference between the group of recruits and the group of students is the strength of the manipulation. In the former group, the recruits are largely controlled in where they go and what they do. In the latter group, the students are subject to a variety of influences throughout the week and semester from outside the confines of the classroom. At any rate, the within-subjects design can be modified for groups that are completely controlled and those that are not.
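The pretest–posttest logic of the recruit example can be sketched in a short analysis script. The push-up counts below are hypothetical, and the paired-samples t statistic is computed by hand from the per-recruit gain scores; in practice a statistical package would perform this calculation.

```python
import math
import statistics

# Hypothetical push-up counts for the same ten recruits,
# measured at the pretest and again at the posttest.
pretest = [18, 22, 25, 15, 30, 20, 17, 24, 28, 19]
posttest = [26, 29, 31, 22, 35, 27, 25, 30, 33, 24]

# Within-subjects logic: analyze each recruit's own gain score
# rather than comparing two separate group means.
gains = [post - pre for pre, post in zip(pretest, posttest)]

n = len(gains)
mean_gain = statistics.mean(gains)
sd_gain = statistics.stdev(gains)  # sample SD of the gain scores

# Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n))
t_stat = mean_gain / (sd_gain / math.sqrt(n))

print(f"mean gain = {mean_gain:.1f} push-ups, t({n - 1}) = {t_stat:.2f}")
```

Because each recruit serves as his or her own control, the denominator uses the variability of the gain scores rather than the much larger variability of raw push-up counts across recruits.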

Advantages of Within-Subjects Design

These hypothetical situations illustrate how to design a within-subjects experiment. The within-subjects design carries a number of advantages. First, the number of participants required to obtain an accurate estimate of a statistical parameter is far smaller than in the alternative design using independent groups. The reason is that change is measured not across groups but within a person. In a very real sense, the change within a person between two measurements carries greater statistical accuracy than reliance on between-group changes. The increased accuracy of the estimation provides greater statistical power in an analysis with far fewer participants.

The second advantage is a more direct assessment of change. Most often, experimental designs, or designs used to compare different manipulations or the introduction of different processes, argue for the advantage of one approach over another based on what is perceived to be the change taking place. The within-subjects design measures that change more directly than other designs.

The third advantage is that the researcher need not hope that either the use of intact groups or the random assignment to groups creates equivalency when assessing change. When using groups in an investigation to measure change, particularly for a posttest-only design, there is a belief that each group started with the same value. A comparison of a control group to the experimental group assumes that the difference in the postintervention score represents the difference in experience (control or no action vs. experimental treatment). That assumption works only if each group started with the same score (pretest). The within-subjects design measures the change of each individual separately and does not require assuming equivalency among groups to provide an accurate measure of change. In the military recruit example, the change is measured for each individual soldier and requires no control group.
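The power advantage described above arises because stable person-to-person differences cancel out of each individual's gain score. A small simulation (hypothetical numbers, standard library only) illustrates the point by comparing the spread of paired gain scores with the spread of raw scores that a between-subjects comparison would face.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Hypothetical model: each person has a stable baseline ability
# (large between-person spread) plus small measurement noise,
# and the treatment adds a constant 2-point improvement.
n = 200
baselines = [random.gauss(50, 10) for _ in range(n)]
pre = [b + random.gauss(0, 2) for b in baselines]
post = [b + 2 + random.gauss(0, 2) for b in baselines]

# Within-subjects: the stable baseline cancels out of each
# person's difference, leaving only the effect plus noise.
gains = [q - p for p, q in zip(pre, post)]
within_sd = statistics.stdev(gains)

# Between-subjects: the baseline spread stays in the raw scores,
# so a comparison of group means works against a much larger SD.
between_sd = statistics.stdev(post)

print(f"SD of paired gain scores: {within_sd:.2f}")
print(f"SD of raw posttest scores: {between_sd:.2f}")
```

Under these settings the gain scores vary only with the measurement noise, while the raw scores vary several times as much, which is why far fewer participants are needed to detect the same 2-point effect in a within-subjects analysis.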

Disadvantages of Within-Subjects Design

The disadvantages of the within-subjects design involve limitations or concerns about the nature of the group over time. For example, suppose some people drop out of the military (voluntarily or involuntarily) because they are failing at the physical performance demands. Suppose others are removed from basic training because of health concerns caused by the training. As a result, the gain scores may be biased because the overall means of the gain scores are enhanced by removing people who were failing to adequately perform the training regimen. The results reflect gains, but only for those people who display gains from the training, whereas the people negatively impacted by the training who drop out fail to appear in subsequent measurements.

The within-subjects design can prove difficult to manage in many populations in which guaranteed access to the participants becomes difficult to sustain. The result is that the designs often employ a pretest and then an immediate posttest, but with few or no long-term follow-ups or continued measurements. The results, therefore, are often produced from evaluations at the time of potentially maximum effectiveness. Consider the example of the military recruits. Will the physical training that produced improvements in the assessments be maintained by the recruits 6 months after leaving basic training? Failure to maintain training efforts during those 6 months may mean that the group's status returns to the level of the pretest or even diminishes from pretest levels. The use of immediate posttests provides evidence for an effective approach, but may not provide evidence for an effective permanent or long-term change.

For many reasons, the within-subjects design provides a desirable means of program or intervention evaluation. When the desire of the investigator focuses on the measurement of some effort to create change, the most effective and efficient means of measuring that change may require employing a pretest and posttest to directly assess the degree of difference over time.

Richard Draeger

See also Analysis of Variance (ANOVA); Between-Subjects Design; Repeated Measures; t-Test, Paired Samples; Two-Group Pretest–Posttest Design; Two-Group Random Assignment Pretest–Posttest Design

Further Readings

Allen, M., Titsworth, S., & Hunt, S. K. (2009). Quantitative research in communication. Los Angeles, CA: Sage.
Pedhazur, E. J. (1997). Multiple regression in behavioral research (3rd ed.). New York, NY: Wadsworth.
Raghavarao, D., & Padgett, L. (2014). Repeated measurements and cross-over designs. New York, NY: Wiley.
Winer, B. J. (1971). Statistical principles in experimental design (2nd ed.). New York, NY: McGraw-Hill.

Writer's Block

In broad terms, writer's block can be described as a deficit in the quantity of a person's writing created by causes internal to the writer's mind. This distinguishes writer's block per se from external factors that prevent writing, such as illness or lack of time. The importance of overcoming or avoiding writer's block in academic life should be self-evident, given the importance of publication to academic advancement. In fact, it is almost certain that a number of otherwise qualified scholars have self-selected out of the field because the emphasis on publication produces anxiety, which can lead to writer's block. This entry describes the two primary sources of writer's block before describing how to overcome it. Tips for avoiding writer's block are then provided.

Sources of Writer's Block

Writer's block is generally believed to have two sources, and successfully overcoming it depends on correctly identifying the etiology. The first source is described as mechanical and is created by overly strict adherence to writing guidelines that are intended to be heuristic. An example is a writer who is blocked because he or she has been taught that a thesis should always have three points of support and cannot think of a third point of support for the thesis. The second source of writer's block can be described as affective and subsumes phenomena such as written communication apprehension. In such cases, writers find themselves unable to write because they are concerned that the work will be perceived negatively by themselves or by readers, causing them to become either overly concerned with feelings of inadequacy or obsessed with perfection.


Overcoming Writer's Block

Because the term writer's block is an umbrella term covering blocks generated from a wide variety of sources, there is no universal cure for it. Rather, there is a set of activities a writer can experiment with to overcome the obstacles. The level of severity, classified as mild, moderate, or recalcitrant, is determined by the block's resistance to common methods of overcoming it. Because there are no self-diagnostic tools for the severity of a block, the state of the art is to presuppose that the block is mild and to attempt those methods first. If the methods for mild blocks fail to yield results, more strenuous efforts should be attempted.

Mild Blocks

Basic methods of approaching mild writer's blocks generally involve having a realistic sense of what the writer is doing. In general, blocks occur in writing rather than editing, which means they generally occur during the first draft. Thus, writers are reminded that a first draft is not expected to be perfect, and that problematic sections can be passed over and returned to later—the point being to focus on writing per se rather than on writing the paper in a specific order. Certain sections of an academic paper, such as the one in which the methodology is described, are fairly straightforward, since writers know what steps they have taken; thus, it is sometimes better to skip ahead and begin writing that section. This provides writers with visible indications of progress as well as a chance to work around the block.

In addition, the writer is encouraged to have realistic expectations in terms of productivity. Instead of expecting that the entire paper be drafted in one session, the writer can set an intermediary goal (between several paragraphs and one section) for each writing session, and then rest or perform other tasks before returning to the paper with a fresh mind. This can provide opportunities for reflection and also help to reduce the anxiety that can be created by deadline pressure. Some writers prefer the inspiration created by deadline pressure. As discussed earlier, different people block on their writing for different reasons, so it follows that they will need different techniques to overcome their blocks. Using deadline pressure to overcome writer's block can be problematic in that if the block is not overcome in a timely way, then the deadline will not be met.

Another technique that has proven valuable to some writers is to stop writing and instead thoroughly outline a difficult section. This forces writers to focus on the structure of the arguments underlying their research, with which they should be familiar, rather than on the production of verbiage. Once the section is thoroughly outlined, writers can simply explicate the outline in complete sentences with a degree of confidence that the text will be logical even if not elegant.

A final technique that helps overcome mild blocks is to never stop at a natural stopping point in a manuscript (such as the end of a section or chapter). Instead, the writer continues writing far enough into the next section of the text to reach a point where the writer is confident of being able to proceed after taking a break. Thus, when the writer returns to the work, he or she has a sense of what to say next.

Moderate Blocks

Treatment of moderate blocks generally calls for the writer to take a step back from his or her work in order to get a new sense of perspective on it. This can be effected by the involvement of other people. These can be colleagues, friends, or family members; generally, any person who is willing to give feedback is suitable. It is not necessary that the person be knowledgeable in the writer’s field of study, but rather that the person is simply willing to listen carefully and ask questions if any arise. The writer can then “talk through” the paper, describing not the block but the ideas the writer is attempting to convey. The process of organizing the information logically and then selecting the words to verbalize it can help the writer in those terms: organizing and verbalizing. This can give the writer a new perspective on the writing process, even if it does not elicit useful suggestions from the listener. Obviously, if the listener does have useful feedback, that is an additional benefit. But it is not a critical part of the process. The focus is on the writer’s perspective, not the feedback per se.


Another method of attaining this new perspective is to use techniques of cognitive mapping or nonlinear outlining. The writer sits down with a blank piece of paper and starts to list the logical points of the work in no particular order except as they occur to the writer. As interconnections between the concepts being listed start to become clear to the writer, links between concepts are then added to the outline. Experts suggest that this be done on a sheet of paper oriented horizontally, to emphasize that the process is not, and is not intended to be, linear.

It is particularly useful for writers who are suffering from mechanical blocks to get external feedback, preferably from an experienced writer. In this case, if the writer can describe for the listener what the particular blocking point is, then the writer and the listener together may be able to identify its cause. In general, recognition that heuristics are intended to be useful guidelines rather than absolute requirements is often sufficient to overcome the block once the heuristic at the source of the block is identified.

Recalcitrant Blocks

Recalcitrant blocks have been treated with some success by explicitly using simple behavior modification techniques. Their purpose is not to condition the writer to produce text on demand, but to help instill in the writer the habit of writing regularly, which is one of the means of avoiding future blocks (discussed in the next section). One technique is to sit down and define a specific and measurable goal (e.g., to write five pages in the course of each week). At the end of the week, if the writer has successfully met the goal, a small but desirable reward is given; if the goal has not been met, a minor penalty (e.g., making a small contribution to a charity) is instituted. Making the commitment public in some way is generally considered a useful means of increasing the efficacy of the treatment, as it creates a certain degree of social support and accountability for the writer. Recalcitrant writer's block sometimes requires therapy, preferably with a trained professional, in order to discover the etiology of the block. Most colleges and universities can refer the writer to a qualified therapist, who will additionally take steps to ensure the privacy of the writer needing treatment.

Avoiding Writer's Block

There are no sure ways to avoid writer's block because, as previously described, it has a variety of etiologies. One technique that has shown some value is making writing a habit rather than an extraordinary activity by setting aside time for writing every week. It is not necessary that the writing be academic, although if it is, the writer gains the added benefit of increased productivity. When establishing writing as a habit, though, the writer is cautioned not to develop additional habits associated with it, such as the repeated use of the same writing implement, the need to have certain kinds of sounds (or the lack thereof) in the sensorium, the desire to be sitting in the same chair each time, or the need to follow a specific set of steps before starting. These associated habits can themselves become a source of block if, for any reason, the writer is prevented from going through the routine before starting.

Other methods of preventing the occurrence of writer's block include starting work early (i.e., well before any deadline that might be looming) and working on a paper in small sections rather than trying to complete the entire manuscript in a short period of time. Again, there are those who thrive on deadline pressure; however, these are generally not the individuals who find themselves confronted by writer's block. To the extent that blocks are created by anxiety, and it is clear that anxiety in the form of written communication apprehension can be a source of block, taking steps to reduce anxiety can help decrease the chances of a block occurring.

As described previously, another method of avoiding writer's block is to make public commitments to write and to develop a support group of writers to work with on a regular basis. Public commitments have been demonstrated to be an effective motivator of behavior.
Meeting with a writers group not only helps establish a regular pattern of writing but also provides the writer with a supportive atmosphere in which to write and a willing source of feedback. Being part of a writers group is increasingly easy to accomplish with the proliferation of online writing groups on the Internet generally and through sites such as Facebook specifically, although there are also academic writers groups that are part of many university communities.

Dave D’Alessio

See also Communication Apprehension; Writing Process, The

Further Readings

Huston, P. (1998). Resolving writer’s block. Canadian Family Physician, 44, 92–97.
Leader, Z. (1991). Writer’s block. Baltimore, MD: Johns Hopkins University Press.
Silvia, P. J. (2007). How to write a lot: A practical guide to productive academic writing. Washington, DC: American Psychological Association.
Upper, D. (1974). The unsuccessful self-treatment of a case of “writer’s block.” Journal of Applied Behavior Analysis, 7, 497. doi:10.1901/jaba.1974.7-497a

Writing a Discussion Section

A discussion section is the section of a research paper or journal article in which authors present their interpretation of the major findings of their research. Discussion sections are one of the four major sections (i.e., introduction, method, results, and discussion) of most types of research papers. The discussion section typically includes the following components: (a) the significance of the study, (b) interpretations of the significant results, (c) implications, (d) limitations, (e) future studies, and (f) conclusion. The discussion section is placed at the end of the paper or article; but depending on the format required by a course professor or journal, the conclusion might be a separate section following the discussion section. For many academicians, the discussion section is a central focus of the paper or article, because in it, the authors answer the questions posed in the introduction. Thus, a well-written and well-organized discussion section is crucial, and often an essential criterion for publication acceptance.

In addition, it is important for authors to consider how their results fit into the bigger picture, that is, how their research relates to other research in the field. To see how their findings fit within this larger scope, researchers can ask questions such as, “Are there any trends in the data I have?” or “Are my findings in line with the data other researchers have previously provided?” The remainder of this entry describes how to structure a discussion section with information on key components of the section. The importance of objectivity and scholarly tone and language use when writing the discussion section is also discussed.

Structure

When writing up the results of their research, researchers should make clear and logical arguments that readers can easily follow. To do so, the authors need to be mindful of their organization, their word choices, the level of knowledge of their audience, and possible questions that readers might have. Conventionally, the discussion section starts with the significant findings, followed by the authors’ interpretations of the findings, implications of the study, limitations, future research, and a conclusion. In addition, most discussion sections start with general concepts and proceed to more specific ideas. Subheadings help organize the section by themes or concepts. For research involving multiple experiments, authors usually include a short discussion section for each experiment. A comprehensive general discussion is then included after all the experiments are discussed. In the general discussion, authors clarify the significance of the entire research project and their findings, and compare the results of each experiment. The following subsections discuss in turn each of the important components of a discussion section.

Summary of Significant Findings of the Study

When writing a summary of significant findings of their research, researchers first remind readers of the hypotheses and research questions introduced earlier in the paper, as well as findings of previous studies on the topic, and relate their findings to the hypotheses. For example, if the hypotheses were supported, then the authors explain what that means and why readers need to be concerned about those results. At times, the hypotheses might be partially supported. If so, the authors clarify what was supported and what was not. If a hypothesis was not supported, the authors provide a thorough explanation of why the hypothesis failed to receive support. Authors can also discuss possible alternative explanations and hypotheses. When discussing significant findings, authors avoid speculation and overgeneralizations. Instead, they share their interpretation of the results of their research, basing their interpretation on the relevant findings. Moreover, the discussion relates to the previous sections of the research paper or article. All findings discussed in the discussion section connect back to those mentioned and argued in the introduction, method, and results sections. Findings that have not been mentioned in the previous sections should not be brought up in the discussion section. Similarly, any interpretation included must follow clearly from the methods and results of the study. For example, authors do not generalize the results to other populations or situations unless supported by the findings. However, the purpose of the discussion section is not to repeat what has already been reported in the results section; rather, the discussion section is intended to discuss which hypotheses were supported or not supported and what that means to the field of study.

Interpretation of the Significant Findings

An important portion of a discussion section, perhaps the most important, is the authors’ clear and thorough argument and interpretation of the findings. In this subsection, authors explain the significance of the findings to readers without overstating it. Then, they provide their interpretation of the results and situate them within a broader context. For example, what does it mean that Hypothesis 1 was supported, but Hypothesis 2 was not? Answering that question can help readers understand the importance of the results.

Implications of the Study

Researchers conduct research to be able to apply the findings in real-life situations. Therefore, regardless of the research methods employed, authors include discussion of the implications of their research and the impact the findings might have on everyday situations and on society in the future.

Limitations

Every study has limitations. Such limitations might be related to the sample, the data, or the methods used. For example, if a researcher had used another research method, the results might have been different. Thus, researchers clarify the limitations of their study in the discussion section. Providing a thorough examination of the limitations can help readers avoid overgeneralizing or misinterpreting the results and can help other researchers in designing future studies.

Future Studies

In some cases, future studies may be discussed along with the limitations of the study. Regardless of whether discussion of future studies appears in the limitations subsection or in its own subsection, including suggested avenues for future studies can help readers see the direction of the line of research. It might also help them create and conduct their own research, based on the authors’ insight into future areas of study.

Conclusion

The conclusion is the subsection in which the authors provide any concluding remarks about their research. Some authors may wish to briefly restate the future direction of the research by identifying areas for further exploration. This can help readers consider potential future studies. A well-written conclusion is one that gives readers something to think about regarding the research findings and potential future studies. A poorly written conclusion is one that speculates or overgeneralizes the results.


Objectivity

A discussion section is the only section of a research paper or journal article that is subjective in nature, although authors still need to be as objective as possible. In a discussion section, authors share their interpretation of the results, implications, limitations, and recommendations for future studies. Authors are expected to write with a scholarly tone, to avoid judgmental, biased, or emotional language, and to refrain from attacking other research or researchers. It is also expected that authors will base their arguments and interpretations on their data, not on their opinions. Even though the discussion section is where authors explain their interpretation of the study results, they might cite other sources, for instance, in cases where they wish to compare their results with those of previous research or in cases where they want to reiterate or emphasize the empirical or theoretical backgrounds addressed in the introduction. In addition, they might use citations to introduce new ideas or to support their interpretation of their findings.

Scholarly Tone and Language

Because the primary purpose of a discussion section is to provide readers with an interpretation of the results, the language typically follows the standard set by the discipline. For example, conventionally in communication research, present tense is used to report the significance of the findings and to present authors’ interpretations, whereas past tense is used to summarize the results. Although a study’s findings might be complicated, results are usually explained in simple and clear language, in an easy-to-understand manner. This assists readers in their comprehension of complex results. Similarly, appropriate organization and word choice help readers follow the authors’ argument. Thus, slang, technical jargon, and biased language are typically avoided. It is also important for authors to be mindful of the level of expertise and knowledge of their readers. Although authors are expected to be experts on the topic, the readers may be novices. Thus, the use of technical terms or ambiguous language may make the discussion more difficult for some readers to comprehend.

Using a scholarly tone does not mean using jargon or complex language; rather, it means formal writing that is clear, engaging, and dynamic while still being objective. To avoid misunderstandings and improve the writing of discussion sections, authors can ask themselves the following questions:

• Are the terms used in the discussion section generally known in the field? If not, is it obvious that readers in the intended audience will be able to understand the terms without difficulty?
• Are the ideas presented in the discussion section relatively new? If so, are the arguments easy to follow?

In addition, authors might want to read their discussion section from the viewpoint of prospective readers. This can help authors anticipate the questions readers may have and attempt to answer those questions in the discussion section. For instance, in the case in which a researcher finds unexpected results, he or she might anticipate what questions readers might have about why the results were unexpected and the implications of the unexpected findings.

Kikuko Omori

See also Academic Journal Structure; Authoring: Telling a Research Story; Publication Style Guides; Writing a Results Section; Writing Process, The

Further Readings

American Psychological Association. (2009). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author.
Bem, D. (1987). Writing the empirical journal article. In M. P. Zanna & J. M. Darley (Eds.), The complete academic: A practical guide for the beginning social scientist (pp. 171–204). Mahwah, NJ: Erlbaum.
Holmes, R. (1997). Genre analysis, and the social sciences: An investigation of the structure of research article discussion sections in three disciplines. English for Specific Purposes, 16(4), 321–337.
Hopkins, A., & Dudley-Evans, T. (1988). A genre-based investigation of the discussion sections in articles and dissertations. English for Specific Purposes, 7(2), 113–121.

Pasek, J. (2012). Writing the empirical social science research paper: A guide for the perplexed (Doctoral dissertation). Retrieved from https://www.apa.org/education/undergrad/empirical-social-science.pdf
Samraj, B. (2002). Introductions in research articles: Variations across disciplines. English for Specific Purposes, 21(1), 1–17.
Swales, J. M., & Feak, C. B. (2004). Academic writing for graduate students (2nd ed.). Ann Arbor, MI: University of Michigan Press.

Writing a Literature Review

A literature review (commonly referred to as a lit review) for scholarly research papers is written as a preliminary section immediately preceding the statement of research questions or hypotheses. The main part of the paper prior to a Methods section, a lit review places the study in a larger context by presenting, evaluating, and contextualizing research related to the topic of inquiry. A lit review differs from a review of literature (or a “basic” literature review), which seeks to merely summarize all current knowledge on a topic in order to propose new avenues of research for future scholars. Instead, a research lit review (or “advanced” literature review) incorporates only studies or reports most relevant to the specific study being proposed in the paper. It is how the researchers let the reader know how they got to the point of their own study. With some variations, most research-paper lit reviews are written by including common components. When writing a lit review, these parts are included to address specific reader and researcher goals, are informed by particular ethical and quality standards for which materials to include, and use a standardized style of writing. This entry reviews the purpose and format of a literature review and then highlights several quality, stylistic, and ethical concerns.

Purpose and Format of a Literature Review

Goals and formats of lit reviews differ, but most include an argument or rationale for why the topic should be studied; a paraphrased, integrated coverage of previous literature that has brought the field of research to that current point; and a theory or perspective to guide how and/or why the hypotheses and/or research questions will be examined and interpreted.

Topical Argument or Rationale

Although not often explicitly identified as such, a lit review can be thought of as a persuasive argument to get readers to pay attention, continue reading, and value what the researcher is proposing. A common way to do this is to write using solid arguments supported by credible evidence. A lit review writer is expected to convince the audience that both the overall area (e.g., field or discipline) and the specific topic being studied are valuable to general society, organizational or cultural systems, and/or specific individuals. Essentially, every source cited in the lit review is shown as a piece of a current-day knowledge puzzle, with a defined hole to be filled by a missing piece (i.e., the proposed study). Therefore, to some extent, this part of a lit review and the following (presentation of other works) are largely epistemological because they show the reader what is or is not information, or what counts as knowledge. The lit review also is written in a way that is axiological because it convinces the reader what topics or previous knowledge are worth knowing and knowing more about.

Seminal and Currently Relevant Research

A well-written lit review will help an audience understand how and where the subsequently proposed study fits into the larger scheme of knowledge as currently understood in a particular field. The lit review, and particularly its presentation of previous literature, is how the researcher shows that he or she is up to date and understands the field of study. It also is where the writer demonstrates convincingly that his or her study fits into the next piece of the broader knowledge puzzle. Both contributions and limitations of previous works are discussed and assessed in this section; this allows the reader to see from whence the current study has been drawn and where or how it will address existing problems in the field. Thus, when writing a lit review, other articles should not merely be described in a reporting style. Instead, the writer should evaluate those works for the purpose of advancing the state of knowledge.

Theoretical Background or Lens

Just as the purpose of a theory or paradigm is to help readers understand particular phenomena, lit reviews—which present these ideas—are the section for researchers to show how their work is grounded in a particular way of thinking and to demonstrate that their framework is a credible, systematic way to describe, explain, and/or predict the findings of their study. Although not every lit review incorporates an explicit theory or paradigm, most scholars write them in such a way as to reveal their foundation, approach to, or lens through which they will integrate the current study’s findings. A thorough, clear lit review helps the reader understand why a particular aspect of the topic, as opposed to a different focus, is presently studied.

Quality, Stylistic, and Ethical Concerns

To evaluate the quality of a lit review, and thus by association the credibility of the study and its researchers, there are some common writing standards applied to research papers. Lit reviews in the field of communication typically include detailed examination and integration of the chosen sources. Furthermore, the manner in which the researcher writes or implements those sources can reveal a lot about the potential biases or worldview of the author. Clearly, some of these considerations can be somewhat subjective. Nonetheless, overarching accepted standards across many types of research expose some common misconceptions and quickly distinguish the novice from the experienced lit review writer.

Quality of Source Materials

In a formal research paper, “good” sources are usually those that are scholarly. This means that the writer, to increase credibility, will largely avoid using popular press or unpublished works to support his or her arguments or rationale. Next, a quality writer will emphasize peer-reviewed sources. Being subject to critique by expert researchers before being published provides the material with its own layer of validity that the author can harness for his or her own quality argument. Third, primary sources are essential. Accuracy can be verified by using only works that report directly from the persons who conducted the research. The thorough lit review writer will make sure he or she has read the work before using it; in doing so, he or she avoids plagiarism and misquoting or misattributing ideas. Finally, when choosing which sources to include, a focused writer prioritizes sources specifically related to his or her topic of study. Unless a topic is new to the field, tangentially related sources are not typically incorporated in a lit review.

The lit review writer should expect that his or her statements, arguments, and supporting citations will be checked for corroboration by others, especially when the proposed study is new or controversial. To allay readers’ concerns, the writer demonstrates support by using a variety of authors and publications (i.e., not all by the same author or school of research); once again, exceptions to this include lit reviews for studies that challenge the status quo or are new or cutting edge. Initially, to establish a thorough background on the topic, the original creators of a theory or authors of a finding that guides the field are incorporated. Otherwise, the most recent knowledge on a topic is shown by citing works that are current at the time the study is written.

Stylistic Trends

Language and Grammar

In research that follows the American Psychological Association Publication Manual (APA style manual), lit reviews are expected to focus on paraphrasing, as opposed to directly quoting, other sources as much as possible. Whenever possible, the lit review should be in the words (if not always the grammatical first-person voice) of the writer, supported by parenthetical citations to source the ideas of other authors. Three writing tenses (and personas) may be used in the lit review, depending on its purpose in the paper: present tense and first person when referring to the present study’s proposals or hypotheses; past tense and third person (even when citing personal works) when referring to any research or theory already published; and future tense and first or third person when talking about what has yet to occur in the field or what will be reported in the paper itself. An example is provided below:

In this literature review, I will discuss [first-person present] the results found by Johnson and colleagues, who discovered [third-person past] . . . Combined, literature to date suggests [third-person present] . . . Thus, I proposed [first-person past] the following research question to guide [present] the current study.

Organization

Differing from a review of literature, in which myriad organizational approaches to presenting sources are common (e.g., chronological, thematic, cause–effect, problem–solution), a research lit review focuses more on an inverse pyramid structure whereby broader background themes, topics, and trends are discussed first to form a foundation for the specific literature that directly relates to the detailed study presented. For example, lit reviews are typically written to first define key terms (where relevant) and begin making a case (i.e., rationale) for studying the topic. Subsequently, each section or subsection in a lit review covers what is known to date and how that shows readers the need for further study, all along providing evidence of how the research questions and/or hypotheses the author proposes are necessary and valuable. Novice writers of lit reviews can be identified by a failure to appropriately embed sources within their larger arguments. In other words, they let the literature drive their writing rather than fitting the material to the study’s focus.
Writers who simply summarize articles one-by-one, author-by-author, list facts or findings, or write paper versions of annotated bibliographies are not creating a lit review. Instead, writers should view the lit review as an opportunity to tell a story (with the highs, lows, arcs, and conclusions of a good narrative) that foregrounds their proposed study.

Jessica J. Eckstein

See also Authoring: Telling a Research Story; Ethics Codes and Guidelines; Literature Review, The; Literature Reviews, Resources for; Literature Reviews, Strategies for; Research, Inspiration for; Writing Process, The

Further Readings

Galvan, J. (2012). Writing literature reviews: A guide for students of the social and behavioral sciences (5th ed.). Los Angeles, CA: Pyrczak.
Hynes, M., Sharpe, S., & Greig, A. (2011). Appearing true in the social sciences: Reflections on an academic hoax. Journal of Sociology, 48(3), 287–303. doi:10.1177/1440783311413487
Machi, L. A., & McEvoy, B. T. (2012). The literature review: Six steps to success (2nd ed.). Thousand Oaks, CA: Corwin.
Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837–851. doi:10.1177/1077800410383121
Wallace, J. D. (2008). Accelerated peer-review journal usage technique for undergraduates. Communication Teacher, 22(3), 80–83. doi:10.1080/17404620802154659
Webster, J., & Watson, R. T. (2002). Analyzing the past to prepare for the future: Writing a literature review. Management Information Systems Quarterly, 26(2), 13–23.
Wolfswinkel, J. F., Furtmueller, E., & Wilderom, C. P. M. (2013). Using grounded theory as a method for rigorously reviewing literature. European Journal of Information Systems, 22, 45–55. doi:10.1057/ejis.2011.51

Writing a Methods Section

The methods section of research is argued to be the most important aspect of any report. Fortunately, it is also the most straightforward section to write. Often the challenge of writing such a section is knowing what to include, as well as how much detail is needed. By keeping three distinct goals in mind, scholars will be able to quickly and effectively draft a methods section. First, one must remember that the goal of the methods section is to provide the framework used in the study. Second, it should be expected that the methods section will look different depending on the audience to which the research is being presented. While the same information is required for a journal manuscript and a conference presentation, the amount of detail will vary. Third, this section, while straightforward, is still an argument. It is up to the writer to provide an argument for why methodological choices were made, as well as the evidence to show that the study was done ethically and effectively. With these three goals in mind, this entry first provides an overview of the parts typically included in the methods section, followed by tips for adapting the methods section to three varying forums: course paper or thesis, conference, and journal publication. The entry concludes with additional tips for crafting the methods section.

Parts of the Methods Section

There are some standard parts to a methods section, but it should be noted that these are general categories. Both quantitative and qualitative papers often require the same topics; however, the wording within the sections will vary. The writer should look carefully at examples of previous research to gain the best understanding of what should be included and how to present the steps for a particular method. For example, even within qualitative methods, an ethnographic methods section will not mirror that of a grounded theory study. Therefore, it is imperative that careful research be done within each methodological approach.

All methods sections begin with an overview of the method selected. This short paragraph tells the reader about the method chosen and argues for why this was the approach needed to answer the research question or test the hypothesis. The goal here is to argue why this method was chosen, not to argue why other methods were not selected. This overview should be supported by citing the creators of the method selected and should highlight any important constructs commonly associated with the chosen method.

Participants

Immediately following the overview, the participants of the study are described. The description of the participants in the experiment or research study includes who they were, how many there were, and how they were selected. The accepted term for describing a person who participates in research studies is participant, not subject (Boynton, 1998). This section also includes any major demographics that have an impact on the results of the experiment (e.g., if race is a factor, then a breakdown by race should be provided).

Materials

The next section describes the materials, measures, equipment, or stimuli used in the experiment or research study. This may include testing instruments, technical equipment, or other materials used during the course of research. When conducting experiments, this section is often called apparatus. The apparatus section would describe any equipment used during data collection (e.g., computers or eye-tracking devices). Examples of materials include scripts, surveys, or software used for data collection (not data analysis). Within this section, one might be required to include specific examples of materials or prompts, depending on the nature of the study. For example: “In order to characterize the sample, information about the participants’ fear of death was measured using the Multidimensional Fear of Death Scale (Neimeyer & Moore, 1984) and the Frommelt Attitudes toward Care of the Dying Scale (FATCOD; Frommelt, 1991).”

Design

Next is the description of the type of design used in the study. Specifically for quantitative studies, a definition of the variables as well as the levels of these variables is required. A description of whether the experiment uses a within-groups or between-groups design is also noted. For example: “The experiment used a 3 × 2 between-subjects design. The independent variables were age and understanding of second-order beliefs.” Qualitative studies might discuss whether the interviews were structured or semi-structured, or how communities or populations were selected for observation.

Procedure

The largest part of the methods section is the procedure. This portion details the procedures used; explains what the researcher had participants do, how data were collected, and the order in which steps occurred; and concludes with data analysis. The procedure section plays the key role in allowing the audience to know how a similar study would be conducted. This section is also often where the most scrutiny is placed in terms of proving that a study is valid. Therefore, specific details of how the study was conducted are required. However, the level of detail depends on the form in which the paper is being presented and the audience for which it is intended.

Audiences of the Methods Section

From writing a class report to a journal publication, how one handles the methods section will look and feel a bit different. While the sections will not change, the amount of detail and the level of focus will be greatly modified. As mentioned previously, knowing who the audience is and the purpose of their interaction with the methods should drive how this section is written. Therefore, the following lays out how much detail one should go into while writing for three varying platforms: course paper or thesis, conference paper or presentation, and journal article.

Methods for Course Paper or Thesis

The most common forum for students and those first learning to write methods sections is a classroom setting. Within these settings, the key to the methods section is to provide the fullest picture of the study. Every step should be detailed and supported. Clear justification of the method and the procedure is paramount. Ideally, upon reading this type of methods section, anyone should be able to pick up what was done and replicate the study on his or her own. While complete interview protocols or questionnaires are not required, enough detail on how those mechanisms were constructed, applied, and analyzed is required.

Even more established scholars may want to begin with this fully fleshed out version of the methods section. Doing so allows the writer to see the whole picture, which will then make trimming it back for a more focused presentation or publication easier.

Methods for Conference Paper or Presentation

Once the full manuscript is drafted, the most common next step is presenting the work at a conference. Given the limited time for a presentation, the methods section typically needs to be modified. As such, two primary goals should guide the editing process. First, provide enough detail to tell the whole story. In this situation, an audience is less likely to require a full detailed rationale for the ways the study came together. However, they will want to know why the method was chosen. The second goal, then, is to provide the most important pieces a reader would need to know. Often the greatest amount of detail will be centered on answering who, what, and how: Who were the participants, and what were their demographics? What was done in the study? And finally, how did this study come together? Once these elements are presented, use the remaining time to explain what was found. After all, that is more than likely the best part of the study.

Methods for Journal Article

After the conference presentation, the next step is submission for journal publication. Seeking publication typically requires another transformation of the paper, including the methods section. Most likely it is the methods section that sees the greatest reconfiguring during this process. The primary goal for this forum is to provide a clear and precise description of how the study was performed. This includes the rationale for the methodological choices and the characteristics of the study design. In addition, the study will need to be presented in such a way that the reader gains a full understanding of what would be needed for replication, as well as the ability to judge the validity of the results and conclusions presented. This form of methods section is often considered the most challenging version to craft; however, keeping the goals presented here in mind can help to make the challenge easier.

Key Tips and Resources for Writing the Methods Section

Beyond the aforementioned goals and the varying levels of complexity a methods section can take on, a few additional tips apply. First, the methods section of the paper is always written in the past tense. Remember to use the formatting that the target journal (or course professor) prefers; in communication, this is generally American Psychological Association (APA) style. When writing the methods section, keep the most recent version of the appropriate style guide on hand. The guide provides valuable tips and requirements for how this section should look, as well as what needs to be included. For quantitative methods, the text From Numbers to Words is highly recommended. This text not only identifies what the methods section needs to include for every statistical test, but it also provides word-for-word examples of how data should be reported. Remember that the methods section needs to provide enough detail that another researcher could replicate the experiment, but focus on brevity. Avoid unnecessary detail that is not relevant to the outcome of the experiment. Finally, always read through each section of the paper for agreement with the other sections. If a set of steps and procedures is mentioned in the methods section, those elements should also be present in the results and discussion sections.

Malynnda Johnson

See also Methodology, Selection of; Publication Style Guides; Publications, Scholarly; Reliability of Measurement; Validity, Measurement of; Variables, Conceptualization

Further Readings

Boynton, P. M. (1998). People should participate in, not be subjects of, research. British Medical Journal, 317(7171), 1521.

Miller, J. E. (2007). Preparing and presenting effective research posters. Health Services Research, 42(1, Pt. 1), 311–328.

Morgan, S. E., Reichert, T., & Harrison, T. R. (2002). From numbers to words: Reporting statistical results for the social sciences. Boston, MA: Allyn & Bacon.

Peterson, J. L., Johnson, M. A., Scherr, C., & Halvorsen, B. (2013). Is the classroom experience enough? Nurses' feelings about their death and dying education. Journal of Communication in Healthcare, 6(2), 100–105.

Weissberg, R., & Buker, S. (1990). Writing up research. Englewood Cliffs, NJ: Prentice Hall.

Writing a Results Section

The results section of a research manuscript appears after the methods section and reports the outcomes of the analyses used to evaluate the research questions and/or hypotheses. The results section is critical because the goal of any research project is to present information based on some type of analysis of the empirical world. Applying a system of evaluation and analysis requires presenting the outcomes of that data collection and evaluation in a form that an audience familiar with the rules for inference in that system can interpret. Writing a results section provides an opportunity to present that information and display the evidence for the conclusions ultimately offered. One of the best ways to organize the reporting of results of a quantitative study is to use the numbered research questions and/or hypotheses to report the results of the statistical analyses. The results section for a qualitative study varies depending on the goal and type of the report. This entry reviews in detail the issues of writing a results section first for a quantitative study and then for a qualitative study.

Quantitative Results Section

If one is simply using an existing scale, or if the desire is to report the demographic characteristics of the sample, that information is often better placed in the methods section. The methods section is the place for information that does not bear directly on the evaluation of the hypotheses or research questions but is nevertheless vital or considered important. Demographic information about the profile of the participants may affect the generalizability of the results but does not directly impact the findings.

The results section should provide enough detail to permit someone provided with the data to replicate the findings. The goal is conciseness and clarity in reporting the outcomes of the procedures undertaken. The literature review provides a justification for the entire investigation and usually culminates in a statement about the purpose of the research and the expected outcomes. These statements usually involve two different focal points for the empirical examination: research questions and hypotheses. A research question is usually a nondirectional inquiry; for example, "What is the relationship of communication apprehension to school grades?" is a nondirectional question. The question permits the relationship to be negative (as one variable increases, the other decreases) or positive (as one variable increases, the other increases). The results require a two-tailed test of significance because the value could be either significantly positive or significantly negative. A hypothesis is different because the direction of the outcome is specified. For example, a hypothesis would be something like, "Men will report greater levels of communication apprehension than women." The testable part of the statement provides a basis for expecting one group to generate a score different from the other group in a particular direction (greater or lesser). The appropriate statistical test will then be a one-tailed test. The statistical implication of this shift is an increase in the power of the statistical test to find significant differences between the two groups in this example.
The gain in power occurs because, unlike the two-tailed (nondirectional) test, the one-tailed test does not split the significance level between a high and a low critical value; instead, the entire rejection region lies beyond a single critical value that is either positive or negative, depending on the direction of the hypothesis. What should happen in the results section is a rather journalistic account of the results of the test for each research question and hypothesis. The results section should be structured as a discussion of each research question or hypothesis along with the results of whatever statistical analysis was performed to provide an answer. Some presentations will be relatively simple, such as the results of a t-test. In each case, there exists an appropriate format for the particulars of the statistical test. Often the methods section of the manuscript specifies the particulars of the statistical tests involved; this practice becomes important as the tests become more complicated or precise. The methods section serves as a good place to provide the justification or explanation for the choice of a particular statistical test, whereas the results section simply reports the results of applying that chosen procedure. What usually happens is that a computer program, such as SPSS or Minitab, produces a great deal of output when asked for an analysis. Typically not all of this information is reported, because some of it is unnecessary or redundant. The publication style guide, which in communication research is primarily the Publication Manual of the American Psychological Association (APA), provides details about what information should be reported and the format for reporting it. The rule is that enough information should be reported that someone given the data set could replicate the analyses and findings with relative ease. Each statistic has a set of information that is expected to accompany the overall estimate. For example, when reporting an independent groups t-test, the t value, degrees of freedom, and p value should be available. Generally, one would expect the means and standard deviations to be reported for each of the two groups in the test. Reporting the means and standard deviations permits others to examine and understand the implications of the t statistic provided. In addition to the statistical information, there should be a clear indication of how the research question or hypothesis is answered; was the hypothesis supported, or did it fail to receive support?
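To make these reporting expectations concrete, the following sketch (standard-library Python; the function name and the sample scores are invented for illustration, not drawn from the entry) computes an independent groups t-test assuming equal variances and prints the statistics in the t(df) = x.xx, p = .xxx form that APA style expects:

```python
import math

def t_test_independent(group1, group2):
    """Independent groups t-test (Student's, equal variances assumed).

    Returns the figures a results section is expected to report:
    means, standard deviations, t, df, and one- and two-tailed p values.
    """
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    df = n1 + n2 - 2
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / df
    t = (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))

    # Tail probability: numerically integrate the t density beyond |t|
    # (midpoint rule; the density is negligible past |t| + 60).
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    step, tail, x = 0.001, 0.0, abs(t)
    while x < abs(t) + 60:
        mid = x + step / 2
        tail += c * (1 + mid * mid / df) ** (-(df + 1) / 2) * step
        x += step
    return {"M1": m1, "SD1": math.sqrt(v1), "M2": m2, "SD2": math.sqrt(v2),
            "t": t, "df": df, "p_one": tail, "p_two": 2 * tail}

# Hypothetical communication apprehension scores for two groups
men = [4, 5, 6, 5, 4]
women = [2, 3, 2, 4, 3]
r = t_test_independent(men, women)
print(f"t({r['df']}) = {r['t']:.2f}, p = {r['p_two']:.3f} (two-tailed)")
```

Note that when the observed effect lies in the hypothesized direction, the one-tailed p is half the two-tailed value, which is the gain in power described above for directional hypotheses.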
A significant result for the hypothesis indicates support for the underlying theoretical argument. The author should avoid words or positions indicating that the theory is "proven," because alternatives may exist. The attitude should reflect the position that continued belief in the theory remains warranted. Failure to achieve a significant result does not indicate proof of "no relationship" or that the relationship is "insignificant." The significance test indicates only whether, given the sample size, the observed relationship is distinguishable from zero at a specified probability level. The observed relationship is a sample estimate of the population parameter, and a confidence interval for the relationship is necessary to evaluate the potential size of the population parameter. A complete correlation matrix should be reported for any investigation involving multiple variables, whether independent or dependent. The correlation matrix should be accompanied by a mean, standard deviation, and reliability information (when possible) for each variable. The correlation matrix preserves the entire set of statistical information for use in any subsequent meta-analysis, something typically not available from most statistical procedures (e.g., analysis of variance, multiple regression, multivariate analysis of variance). Many statistical procedures report only part of the available information or report the information in an adjusted form (such as a multiple regression coefficient), which prevents estimation of zero-order relationships.
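As a sketch of these recommendations (standard-library Python; the variable names and scores are invented for illustration), the following computes a zero-order Pearson correlation, the descriptive statistics that should accompany the matrix, and a confidence interval for r via the Fisher z transformation, of the kind needed to evaluate the potential size of the population parameter:

```python
import math

def pearson_r(x, y):
    """Zero-order Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def describe(values):
    """Mean and sample standard deviation, for the matrix margins."""
    n = len(values)
    m = sum(values) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (n - 1))
    return m, sd

def r_confidence_interval(r, n, z_crit=1.96):
    """95% CI for a correlation via the Fisher z transformation."""
    z = math.atanh(r)          # Fisher's z
    se = 1 / math.sqrt(n - 3)  # standard error of z
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Hypothetical scores: communication apprehension and course grades
apprehension = [30, 25, 40, 22, 35, 28, 45, 20]
grades = [3.2, 3.6, 2.5, 3.8, 2.9, 3.4, 2.2, 3.9]
r = pearson_r(apprehension, grades)
lo, hi = r_confidence_interval(r, len(grades))
m, sd = describe(apprehension)
print(f"r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]; M = {m:.1f}, SD = {sd:.1f}")
```

Reporting each r alongside the means, standard deviations, and an interval estimate preserves exactly the zero-order information a later meta-analysis needs, which adjusted coefficients such as regression weights do not.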

Qualitative Results Section

Qualitative methods present a different set of considerations when generating results. Generally, most qualitative methods will not generate a set of research questions or hypotheses that can be answered in a couple of sentences. The methods section often details a process by which the data (e.g., transcripts, written survey answers) were organized, assembled, and put into units for consideration. The difference between grounded approaches, in which no prior theoretical organization exists, and theoretically oriented analyses becomes important. Grounded theory requires a set of readings of the data to develop an organization that becomes the basis of a theory for understanding the set of issues. The researcher must not only organize the responses but also generate a theoretical frame that permits interpretation of any structure imposed to understand the available information. Ethnographic approaches usually reflect the need to provide the "thick description" championed by Clifford Geertz. This approach requires that the scholarship provide examples of

the discourse in order to permit the reader to understand the lived reality of the culture as expressed by the ideas and words used by members of that culture. The challenge in writing results often involves deciding how long to make any entry or set of examples. The advantage of qualitative methods is that they offer the ability to provide examples that characterize and illustrate the underlying themes or conclusions offered by the scholar. A well-chosen example gives form and texture to the description and offers the reader insight into the culture under description. What must be kept in mind is the need to convey the description and sense of the culture without the results becoming simply a set of individual examples. The examples should illustrate and bring the underlying description, theory, or theme to life and provide an understanding not possible through plain denotative scholarly language. Examples should be relatively short; one reason for keeping examples short is the expectation of focus and clarity. Longer examples tend to contain multiple ideas and thus provide less clarity. Too many examples may simply become redundant and may raise multiple issues that distract from the essential point or analytic function that the scholar wants to address. The use of several long examples is typically discouraged because the examples should illustrate the point, and the paper should be more than a set of examples strung together. If the analysis involves discourse, such as critical discourse analysis, the example provides only the actual words, and the context and deeper set of values or assumptions may require some additional development. If cultural power issues (such as those involving gender or ethnicity) exist, the focus might involve what is not said in the example in terms of assumptions or implications.
Often more critically framed analyses focus on unstated or implied elements, and the examples may not require a great deal of length. This keeps the focus on the direct examination of the discourse rather than on the examples themselves. Conversation analysis operates in much the opposite manner by focusing on only the text and not considering or introducing context. Conversation analysis usually takes a small bit of discourse or text and provides an extensive evaluation and analysis of that text. The focus of the analysis is limited to an examination of that text alone. Under this condition, reporting the entire text used for analysis becomes essential for any reader to understand the analysis. The results section is really a means of providing an understanding of the provided discourse and should be interpreted as such. Rhetorical analysis often does not have a results section labeled as such. Usually rhetorical analysis involves the selection of some artifact for analysis. The development of theory and then method is often melded and considered by many inseparable, because the development and employment of the perspective works as a unified, holistic entity. That is not true for all systems; approaches such as neo-Aristotelian analysis more closely resemble empirical approaches, often with regular and expected analytic devices used to permit evaluation of the artifact.

Mike Allen

See also American Psychological Association (APA) Style; Conversation Analysis; Modern Language Association (MLA) Style; Publication Style Guides; Publishing a Book; Publishing Journal Articles; Qualitative Data; Rhetorical Method; Rhetorical Theory

Further Readings

Allen, M., Titsworth, S., & Hunt, S. K. (2009). Quantitative research in communication. Los Angeles, CA: Sage.

Black, E. (1965). Rhetorical criticism. Madison: University of Wisconsin Press.

Ellis, D. G., & Donohue, W. A. (Eds.). (1986). Contemporary issues in language and discourse processes. Hillsdale, NJ: Erlbaum.

Emerson, R. M., Fretz, R. I., & Shaw, L. L. (2011). Writing ethnographic field notes (2nd ed.). Chicago, IL: University of Chicago Press.

Foss, S. K. (2008). Rhetorical criticism: Exploration and practice (4th ed.). Long Grove, IL: Waveland Press.

Geertz, C. (1973). The interpretation of cultures. New York, NY: Basic Books.

Martin, W. E., & Bridgmon, K. D. (2012). Quantitative and statistical research methods: From hypothesis to results. San Francisco, CA: Jossey-Bass.

Writing Process, The

This entry outlines a number of factors to consider when beginning the academic writing process as well as basic elements of good writing that will assist communication students and scholars when undertaking a writing project. This entry first discusses the importance of determining the appropriate audience or outlet for the writing project as well as following the specified format based on the chosen outlet's requirements. Specific elements that will improve an author's writing ability, including style, voice, verb tense and choice, transitions, and citations and attributions, are then discussed.

Audience, Outlet, and Format

When beginning the writing process, it helps to start with a few questions, such as who is the audience and what is the purpose. A term paper written for a professor has a different audience than a research article written for journal reviewers or a thesis or dissertation written for a committee. Likewise, writing for an academic audience differs from writing for a lay audience such as professional organizations or news media. While determining the audience for a course paper is easy, a research-based manuscript often has a few potential venues. For example, a researcher looking to publish in a peer-reviewed journal usually has a handful of journals to choose from and therefore must determine which journal is the best fit. Here are a few criteria a researcher considers when determining a good journal for submitting a manuscript: (a) Has this journal published this topic before? (b) Has this journal published articles using the same methodology? (c) What are the goals of the manuscript? Is the manuscript theoretical or applied? Is it a meta-analysis or review of the literature? (d) What types of articles have been published in the most recent issues? (e) Where did the authors cited in the manuscript publish their research? (f) What is the acceptance rate and/or impact factor of this journal? (g) Is this journal considered rigorous enough to count toward one's institution's research and promotion process?


After considering the audience and outlet, researchers must also consider how they will write up their research. A researcher has certain guidelines to consider, and these guidelines vary depending on the audience or outlet. For example, a manuscript written for a peer-reviewed journal typically has to follow certain requirements for format and citations. Communication research journals typically call for American Psychological Association (APA) style, but they vary on manuscript length and the outline or format of the paper. For example, some journals are stricter in calling for the use of typical manuscript sections (i.e., introduction, literature review, method, results, discussion, and conclusion), whereas other journals allow researchers more flexibility. In addition to determining and adhering to a journal or other outlet's requirements or style guide (e.g., APA), other elements of good writing include style, voice, verb tense and choice, transitions, and citations and attributions. The next sections explain these elements in more detail.

Style

An effective writing style contains many elements. Perhaps the most important stylistic tip is to write as simply and concisely as possible. Many novice writers think that a more formal paper (and, in turn, its author) sounds more intelligent the more elaborate the author's vocabulary is. Instead, remember this rule: The more complex the topic, the simpler the writing needs to be. In other words, research can be complicated enough; writing style should provide clarity, not obfuscate. Following are some ways to write clearly:

• Avoid the use of slang and colloquialisms.
• Similarly, avoid jargon or vocabulary unique to a particular field that may be unfamiliar to readers.
• Be specific and do not assume the reader is aware of the topic and current literature. A writer should assume the reader is intelligent but an outsider to the field.
• Use active voice instead of passive voice. Instead of writing, It has been argued, write Smith argues. Instead of The relationship between violence and video games has been studied, write Researchers have studied the relationship between violence and video games.
• Use gender-neutral language. This means avoiding sexist language. Gender-neutral language also means writing he or she, or him or her. This also means using terms like firefighter, chairperson, or mail carrier instead of fireman, chairman, or mailman. An easy solution if sentences become cumbersome is to use the plural form of a noun and then use their instead of his or her. For example, instead of writing, A researcher must be strategic when deciding where to send his or her manuscript, use the plural version: Researchers must be strategic when deciding where to send their manuscripts. Just remember to be consistent in the use of noun and pronoun (i.e., singular or plural) within a sentence.

These examples lead into two more tips: (a) be consistent in the use of person (i.e., first or third person), tense (i.e., past, present, or future), and singularity versus plurality, and (b) make sure that every pronoun used has a clear antecedent. In other words, a writer must ensure that every pronoun (its, this, those, these, or that) clearly connects to a noun (e.g., These variables, That argument). A reader should never have to ask, "This what? Those what?"

Voice

Together, style and voice are often a combination of an author's writing style and the requirements of an outlet (e.g., a journal vs. an editorial). This entry considers voice to be both the use of first versus third person and passive versus active voice. Researchers often use third person in their articles, as it is considered more traditional and formal. In recent years, however, journal editors have become more accepting of first person. Articles written in third person, though, often lead to overuse of passive voice (e.g., Thirty people were interviewed for this study instead of I interviewed thirty people). Critics of first person say it sounds too informal and less academic. Regardless, good writing uses more active voice than passive voice, even when using third person. For example, compare The three hypotheses were proven correct (Smith, 2015) with Smith (2015) proved her three hypotheses correct.


One problem with passive voice is that it can be unclear who has ownership of actions. For example, when a writer says, It was found, it is unclear who did the finding and when. Other examples indicate the confusion behind passive voice:

• It was discovered . . . [Who discovered it? And in this study or previously?]
• It was discussed . . . [By these authors or others?]
• A study was conducted . . . [When? And by whom?]
• Long-distance relationships were said to be difficult to maintain . . . [Who said that?]
• Surveys were distributed . . . [By whom?]

Most of the above examples can be fixed easily, even when using third person:

• It was discovered . . . → Smith (2015) discovered . . .
• It was discussed . . . → Smith and Doe (2015) discussed . . .
• It has been found . . . → Smith, Doe, and Jones (2015) found . . .
• A study was conducted . . . → Smith (2015) conducted a study . . .
• Long-distance relationships were said to be difficult to maintain . . . → Smith (2015) argued that long-distance relationships are difficult to maintain.

In addition, scholars have noted that the use of third person and passive voice also removes the researcher from his or her own research. The reader knows who the author is at the beginning of the article but then the author “disappears” throughout the rest of the article. Conventionally, scholars use third-person voice to imply formality and objectivity. However, using sentences such as, Ninety people were surveyed, or It was found, removes ownership from the author. Regardless, if a journal calls for third person instead of first person, writers should work on identifying and then avoiding the use of passive voice whenever possible.

Verb Tense and Verb Choice

Style manuals often agree on what verb tense should be used in articles. For example, APA style calls for past tense [Smith (2015) argued] and present perfect tense [Smith (2015) has argued]. Writers should use past tense when referring to studies that have already been conducted [Smith (2015) interviewed 15 people . . . ; Smith (2015) found . . . ]. In other words, anything already published uses past tense. Then, when writing about their own studies, researchers using APA style should shift to present tense, regardless of whether writing in first or third person [Results suggest . . . ; Our findings have theoretical and practical implications.]. A researcher should also switch to future tense when discussing future research [Our/The next study will build on these findings by examining . . . ]. One more note on verb tense: Students writing a research prospectus or proposal should also use future tense in the methods section when discussing what they will do for their study [Third person: Thirty students will be interviewed. First person: I will interview 30 students.].

When it comes to choosing verbs, many writers use too many words and choose too many "weak" verbs. For example, instead of writing They came to the conclusion, a writer should simply write, They concluded. The latter uses fewer words, and concluded is a more active verb than came. Other common examples (with suggested improvements) include the following:

• a conclusion was made [note: this is another example of passive voice] → concluded
• reached a conclusion → concluded
• brings up the argument → argued/contended
• article talked about → article discussed
• surveys were given to students → students completed a survey
• authors backed up their argument by → authors supported their argument by
• researchers looked at the role of → researchers examined
• researchers provided an analysis of → researchers analyzed
• researchers provided an examination of → researchers examined
• the participants were split up by the authors → the authors divided participants
• author put emphasis on → author emphasized
• getting information out → distribute
• did a good job in their explanation → explained clearly/thoroughly
• gave a visual explanation → explained visually
• researchers took a sample → researchers sampled
• the research can make a substantial contribution → the research can contribute substantially

Writers should also try to minimize their use of is/are and was/were. While these verbs are common in everyday conversation, a writer should practice rephrasing sentences to use more varied verbs. For example, In this article, online dating is looked at among college students reads better when rewritten as This article examined online dating among college students. In the next example, the first sentence uses both was and is: The author was clear in describing the methods for studying a crisis where image restoration is involved. A simple change in the verbs results in a sentence with four fewer words and stronger verbs: The author clearly described the methods for studying a crisis involving image restoration.

Transitions

Because many research articles are not only lengthy but also cover complex topics, the writing must be as clear as possible. One way to write for clarity is to pay attention to the way sections connect and also how individual paragraphs connect. The use of transitions and transitional words helps articles flow from section to section and paragraph to paragraph. Even when an article uses headings (e.g., Literature Review and subheadings within the literature review), a writer must still use transitions. For example, suppose an article discussing media coverage of female athletes includes one paragraph on the lack of coverage compared with male athletes and a second paragraph on problems with that coverage. A good transition between the paragraphs could be as follows: Not only does media coverage of male athletes greatly outweigh female athletes' coverage, but when media do cover female athletes, recent research suggests that the coverage differs significantly from how male athletes are portrayed.

In addition to full-sentence transitions, writers cannot forget about transitional words and phrases. Examples include also, additionally, in addition to, similarly, next, first, second, third, finally, therefore, thus, and likewise, among others. When a writer wants to change direction, transitional words or phrases are even more crucial; without them, the reader cannot see that two sentences or ideas do not fit together. Examples include however, on the other hand, in contrast, despite (as in despite these findings . . . ), unfortunately, although, whereas, nevertheless, and regardless.

Citations and Attributions

The last element of good research writing involves avoiding plagiarism, which is taking another's ideas without proper acknowledgment. Ethical writers ensure that they properly attribute others' work by citing it, whether paraphrasing ideas or using direct quotations. When writing research reports or articles, a writer is expected to cite other sources. Writers build their credibility by showing a deep understanding of prior research, how prior research fits together (e.g., what are common themes in research on college students and social support?), and how prior research informs the current study. As mentioned earlier, different outlets have different requirements for how to attribute sources, and the communication field largely uses APA style. Here are a few examples of citations in APA style: Likewise, Smith and Jones (2015) recently identified a strong correlation . . . The same two researchers also discovered in their meta-analysis that the majority of studies focused overwhelmingly on college students (Smith & Jones, 2015). Notice how the two examples above vary in putting the attribution within the sentence or at the end. Writers should similarly vary how they use citations, which includes avoiding an overreliance on "according to" citations: According to Smith and Jones (2015) . . . Instead, here are two alternatives to "according to" citations: Research published by Smith and Jones (2015) found . . . and Smith's (2015) research uncovered . . .

While in-text citations are critical to avoiding plagiarism, writers cannot forget to include references at the end of the paper (APA style uses the term references, not bibliography or works cited). A writer should not consider the references an afterthought to a paper (even if a deadline is looming) but rather an important part of the paper. Every source cited within the paper must be included in the references. During the editing process, the writer must also ensure that any sources cut from the in-text citations are also deleted from the references. In other words, the sources cited within the text should match the references page perfectly. A reader should never wonder from where a writer pulled his or her information.

Colleen E. Arendt and Audra K. Nuru

See also Academic Journal Structure; Acknowledging the Contribution of Others; American Psychological Association (APA) Style; Citations to Research; Communication Journals; Ethics Codes and Guidelines; Peer Review; Research Report, Organization of

Further Readings Alcoff, L. (2008). The problem of speaking for others. In A. M. Jaggar (Ed.), Just methods: An interdisciplinary feminist reader (pp. 484–494). Boulder, CO: Paradigm. Lindlof, T. R., & Taylor, B. C. (2011). Qualitative communication research methods (3rd ed.). Thousand Oaks, CA: Sage. Rubin, H. J., & Rubin, I. S. (2005). Qualitative interviewing: The art of hearing data (2nd ed.). Thousand Oaks, CA: Sage. Rubin, R. B., Rubin, A. M., Haridakis, P. M., & Piele, L. J. (2010). Communication research: Strategies and sources (7th ed.). Boston, MA: Wadsworth. Treadwell, D. (2014). Introducing communication research: Paths of inquiry (2nd ed.). Thousand Oaks, CA: Sage.

Z

Z Score The z score is a standard score widely used in statistics. The normal distribution is central to statistical testing: most statistical tests assume normally distributed sample data, and because the distribution of sample means is roughly normal, this assumption rarely leads to major statistical errors. Raw scores derived from a sample, however, do not enable researchers to conduct all types of statistical analyses; raw scores only allow researchers to compare scores within the same sample. Consider the following scenario: A communication researcher wants to compare communication accommodation between group members and examine whether it differs between individualistic and collectivistic cultures. The researcher collects data from two different samples, one made up of American students and the other of Chinese students. If the researcher wants to compare and contrast communication accommodation scores between the two samples, they need to transform the raw scores to standard scores. There are different types of standard scores, but this entry focuses specifically on z scores. It provides the definition of z scores and explains the formula to calculate z scores using a communication-related example. This entry also summarizes the general importance of z scores and provides examples to illustrate the function of z scores in communication research.

Definition A z score is the deviation of a raw score from the mean, expressed in standard deviation units. The mean of z scores is always 0 and the standard deviation is always 1. z scores above the mean are positive, whereas z scores below the mean are negative. z scores are interpreted using the z-distribution table, and most statistical textbooks include these tables for reference.

Calculation The deviation score is the difference between any raw score (X) and the mean (M). When this deviation score is divided by the standard deviation (SD), the z score of the raw score is obtained. The following formula is used to calculate z scores: Z = (X − M) / SD. The following example illustrates the calculation of a z score using a communication-related example. Suppose that a group of communication researchers measure the organizational identification of newcomers using an organizational identification scale. One newcomer scores 20 on the identification scale. The sample mean is 25 and the standard deviation is 2. Using the z-score formula, one subtracts the mean score of 25 from the individual’s score of 20 and divides by the standard deviation of 2, yielding (20 − 25) / 2 = −2.5. This z score indicates that the newcomer’s raw score falls 2.5 standard deviations below the sample mean. Using this formula, communication researchers can transform raw scores into z scores manually, yet several statistical analysis packages include z-transformation functions for easier automated calculation.
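The calculation just described can be sketched in a few lines of Python using only the standard library; the function names here are illustrative, not drawn from any particular statistics package:

```python
from math import erf, sqrt

def z_score(x, mean, sd):
    """Deviation of a raw score from the mean, in standard deviation units."""
    return (x - mean) / sd

def normal_cdf(z):
    """P(Z <= z) under the standard normal curve, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# The entry's example: identification score of 20, sample mean 25, SD 2.
z = z_score(20, mean=25, sd=2)
print(z)              # -2.5: the newcomer sits 2.5 SDs below the sample mean
print(normal_cdf(z))  # about 0.006: very few scores are expected this low
```

The `normal_cdf` helper anticipates the second use of z scores discussed below: converting a standardized score into the probability of a score falling at or below it in a normal distribution.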

Importance of Z Scores z scores serve two important functions in statistics. These scores are based on a standardized normal distribution. First, using z scores allows communication researchers to make comparisons across data derived from different normally distributed samples. In other words, z scores standardize raw data from two or more samples. Second, z scores enable researchers to calculate the probability of a score in a normal distribution. The following section provides specific examples for each function of z scores.

Application of Z Scores in Communication Research z scores allow researchers to standardize and analyze data from two or more normally distributed samples. Thus, communication researchers should use z scores when they want to compare data collected from two different samples. For instance, suppose that a group of communication researchers examine how teenagers and college students experience communication apprehension differently. They need to collect data from two different samples (e.g., teenagers and college students) on the degree to which each group experiences communication apprehension. As researchers cannot compare raw data from two different samples (e.g., teenagers vs. college students), they need to transform the raw data into z scores to ensure that they make comparisons between two standardized normal distributions. Another example illustrating this function of z scores can be found in research investigating the effects of cyberbullying on high school students coming from different socioeconomic backgrounds. In this case, researchers can

collect data about the effects of cyberbullying from two different school districts, such as low-income and high-income neighborhoods. To investigate how cyberbullying affects high school students, researchers need to compare data collected from two different neighborhoods representing two different samples. In this case, researchers should transform the raw scores into z scores, as z scores allow them to work with a standardized normal distribution across the two samples. In addition to providing statistical rigor by standardizing normally distributed raw data from different samples, z scores help researchers calculate the probability of a score occurring in a given normal distribution. Although descriptive statistics such as the mean or median provide some information about the probability of a score in a normal distribution, z scores allow researchers to make more precise and accurate estimations. Imagine that a communication researcher investigates the effects of verbal immediacy behaviors on performance in small groups. The researcher calculates the performance scores of 80 groups and finds that the mean score of one group is 78 out of 100. If the researcher wants to calculate how well this group with a score of 78 performed compared to the rest of the sample, they need to use z scores. If the researcher merely eyeballs the performance scores in relation to the sample mean, the estimation may be inaccurate, as that judgment does not take into account the variation in scores among the 80 groups. Gamze Yilmaz See also Data Transformation; Normal Curve Distribution; Primary Data Analysis; Sampling, Probability; Secondary Data; Standard Deviation and Variance; Standard Score; Z Transformation

Further Readings Gravetter, F. J., & Wallnau, L. B. (2000). Statistics for the behavioral sciences (5th ed.). Belmont, CA: Wadsworth/Thomson Learning. Salkind, N. J. (Ed.). (2006). Encyclopedia of measurement and statistics. Thousand Oaks, CA: Sage. Thompson, B. (2006). Foundations of behavioral statistics: An insight-based approach. New York, NY: Guilford.


Z Transformation Z transformation is the process of standardization that allows for comparison of scores from disparate distributions. Using a distribution mean and standard deviation, z transformations convert separate distributions into a standardized distribution, allowing for the comparison of dissimilar metrics. The standardized distribution is made up of z scores, hence the term z transformation. Z scores are a special type of standard score in which each unit represents one standard deviation from the mean; z scores always have a distribution mean of 0 and a standard deviation of 1. This entry details a variety of issues central to understanding z transformation. First, standardization and z scores are explained. The formula for z transformation is provided and discussed. Second, an example of comparison across different metrics is provided to demonstrate how z transformation makes such comparisons possible. Two scores are compared and other applications of z scores are previewed.

Standardization and Z Scores The first step in z transformation is to convert the original units to a standard unit. Standardizing is done by subtracting the distribution mean from the original value and then dividing by the standard deviation. The new value is the z score or standardized value, and the formula is z = (x − µ) / σ, where x is an observation, µ is the mean of the distribution, and σ is the standard deviation. A z score describes how many standard deviations an observation falls from the mean, as well as the direction in which it falls. Positive scores indicate the observation is larger than the mean, and negative scores indicate the observation is smaller than the mean. Z scores always have a distribution mean of 0 and a standard deviation of 1, so the numerical value represents the number of standard deviations between x and µ. A z score of −1 describes a data point that is exactly 1 standard deviation below the mean, whereas a z score of +1 is by definition 1 standard deviation above the mean.

Applying Z Scores Although the definition of z transformation may sound technical, conceptually z transformation and z scores themselves are fairly simple and extremely useful. For example, given each distribution’s mean and standard deviation, any two scores can be meaningfully compared. Imagine you want to get into a competitive program in videogame design, but the program is in Russia and you need to demonstrate your proficiency in Russian. You took two tests, the Russian Language Proficiency (RLP) and the Test of Former Soviet Languages (TFSL). Your videogame design program only requires one score, and you want to send your best performance. You scored a 30 on the RLP and a 650 on the TFSL. Which score do you send? Given the different metrics and hence different distributions, the choice is not obvious. However, z transformations will allow you to convert to a common metric to make the comparison simple. In this example, the comparison appears difficult because of the different properties of the two distributions in terms of central tendency and variability. We know the two test scores in their original metrics, but we also know the mean and standard deviation of the two distributions. For the RLP, the mean is 20, and the standard deviation is 8. For the TFSL, the mean is 550, and the standard deviation is 120. That’s all the information we need. Making use of the z score formula, we know the RLP mean is 20, with a standard deviation of 8. Just work the formula: 30 − 20 = 10, and 10/8 = 1.25, so the score is 1.25 standard deviations above the mean. Now we do the same for the TFSL, which has a mean of 550, with a standard deviation of 120. The score 650 minus the mean 550 is 100; 100/120 = .83 standard deviation above the mean. You should send the RLP score if you want to make the best impression. Let us take this example a step further. You have a colleague applying to the same program, but he is being very coy about his scores. 
You did hear him say he scored two standard deviations above the mean on the TFSL. With basic algebra, z scores can be transformed back into raw scores if we know the mean and standard deviation of the distribution in question (which we do in this case). The algebra first: z = (x − µ)/σ → zσ = x − µ → x = zσ + µ. So to figure out your colleague’s score on the TFSL, you just have to work the formula. Two


standard deviations is a z score of 2. The mean of TFSL is 550, and the standard deviation is 120: 2(120) + 550 = 240 + 550 = 790. Your colleague scored 790 (assuming his comment about two standard deviations is correct). Z transformation can be applied to any size distribution. In statistics courses, for example, the heights of the students are transformed to z scores. In a small class with only eight students, the distribution may deviate dramatically from a normal distribution, but z transformation is just a way of changing the way the axis is labeled. The new labels center on 0 and mark standard deviations with intervals of 1. John Banas

See also Normal Curve Distribution; Standard Deviation and Variance; Standard Score; Variables, Defining

Further Readings Heiman, G. (2001). Understanding research methods and statistics: An integrated introduction for psychology. Boston, MA: Houghton Mifflin. Moore, D. S. (2000). The basic practice of statistics (2nd ed.). New York, NY: Freeman. Steiger, J. H. (1980). Tests for comparing elements of a correlation matrix. Psychological Bulletin, 87, 245–251. doi:10.1037/0033-2909.87.2.245

Appendix A Modern History of the Discipline of Communication

1868 Northwestern University teaches first class on elocution in speaking

1910 Eastern Communication Association forms

1914 National Association of Academic Teachers of Public Speaking formed, involving 17 scholars teaching English desiring a separate academic identity focused on their interests

1917 Quarterly Journal of Speech begins publication

1921 First doctoral programs in Speech/Communication formed at the University of Wisconsin–Madison and University of Iowa

1923 National Association of Academic Teachers of Public Speaking changes name to National Association of Teachers of Speech

1929 Western States Communication Association (formerly Western Association of Teachers of Speech) forms

1931 Southern States Communication Association forms; Central States Communication Association (formerly Central States Speech Association) forms

1934 Speech Monographs begins (current title is Communication Monographs)

1935 Alan Monroe publishes book on public speaking dealing with persuasion and the motivated sequence

1946 National Association of Teachers of Speech changes name to Speech Association of America

1949 American Forensic Association (formerly Association of Directors of Speech Activities) forms

1950 International Communication Association forms

1951 Journal of Communication begins

1957 Michigan State University creates first communication doctoral program focused on social science

1962 Everett Rogers publishes the book Diffusion of Innovations, proposing a model of innovation adoption

1970 Speech Association of America changes name to Speech Communication Association; Edwin Black publishes “The Second Persona” in Quarterly Journal of Speech

1973 Journal of Applied Communication Research begins; Religious Communication Association forms

1975 Human Communication Research begins; Charles Berger and Richard Calabrese propose Uncertainty Reduction Theory for the study of relational development; Gerald Miller and Mark Steinberg publish Between People, proposing the developmental theory of relational development

1983 World Communication Association forms

1984 Critical Studies in Mass Communication begins

1989 Text and Performance Quarterly begins

1991 Communication Theory begins

1997 Speech Communication Association changes name to National Communication Association

2005 Communication, Culture, and Critique begins

Appendix B Resource Guide

Associations
American Forensic Association: http://americanforensics.org/history.html
Central States Communication Association: http://csca-net.org/aws/CSCA/pt/sp/history
Eastern Communication Association: http://www.ecasite.org/aws/ECA/pt/sp/history
International Communication Association: http://www.icahdq.org/page/history
Religious Communication Association: http://www.relcomm.org/home.html
National Communication Association: http://www.natcom.org/historyofNCA/
Western States Communication Association: http://www.westcomm.org/?page=History
World Communication Association: http://wcaweb.org/wca-history/

Books
Allen, M., & Preiss, R. (Eds.). (1998). Persuasion: Advances through meta-analysis. Cresskill, NJ: Hampton Press.
Allen, M., Preiss, R., Gayle, B., & Burrell, N. (Eds.). (2002). Interpersonal communication research: Advances through meta-analysis. Mahwah, NJ: Lawrence Erlbaum.
Allen, M., Titsworth, S., & Hunt, S. K. (2009). Quantitative research in communication. Los Angeles, CA: Sage.
Burrell, N. A., Allen, M., Gayle, B. M., & Preiss, R. W. (Eds.). (2014). Managing interpersonal conflict: Advances through meta-analysis. New York, NY: Routledge.

Denzin, N. K., & Lincoln, Y. S. (Eds.). (2011). The SAGE handbook of qualitative research (4th ed.). Beverly Hills, CA: Sage.
Emmers-Sommer, T., & Allen, M. (2005). Safer sex in personal relationships: The role of sexual scripts in HIV infection and prevention. Mahwah, NJ: Lawrence Erlbaum.
Gayle, B., Preiss, R., Burrell, N., & Allen, M. (Eds.). (2006). Classroom communication and instructional processes: Advances through meta-analysis. Mahwah, NJ: Lawrence Erlbaum.
Griffin, E., Ledbetter, A., & Sparks, G. (2014). A first look at communication theory (9th ed.). New York, NY: McGraw-Hill.
Hayes, A. (2015). Statistical methods for communication science. New York, NY: Routledge.
Herrick, J. A. (2015). Argumentation: Understanding and shaping arguments (5th ed.). State College, PA: Strata Publications.
Hunter, J. E., & Schmidt, F. L. (2014). Methods of meta-analysis: Correcting for artifact and bias in research findings (3rd ed.). Beverly Hills, CA: Sage.
Palczewski, C. H., Ivie, R., & Fiske, J. (2012). Rhetoric in civic life. State College, PA: Strata Publications.
Preiss, R., Gayle, B., Burrell, N., Allen, M., & Bryant, J. (Eds.). (2007). Mass media effects research: Advances through meta-analysis. Mahwah, NJ: Lawrence Erlbaum.
Tedford, T. L., & Herbeck, D. A. (2013). Freedom of speech in the United States (7th ed.). State College, PA: Strata Publications.
Williams, F., & Monge, P. R. (2013). Reasoning with statistics: How to read quantitative research (2nd ed.). New York, NY: Routledge.



Journals

National Communication Association Journals

Website http://www.natcom.org/journals.aspx

Journals
Communication and Critical/Cultural Studies
Communication Education
Communication Monographs
Communication Teacher
Critical Studies in Media Communication
First Amendment Studies
Journal of International and Intercultural Communication
Journal of Applied Communication Research
Quarterly Journal of Speech
Review of Communication
Text and Performance Quarterly

International Communication Association Journals

Website https://www.icahdq.org/pubs/journals.asp

Journals
Journal of Communication
Human Communication Research
Communication Theory
Journal of Computer-Mediated Communication
Communication, Culture, & Critique
Annals of the International Communication Association

Regional Association Journals

Central States Communication Association Website http://associationdatabase.com/aws/CSCA/pt/sp/journal

Journal Communication Studies

Eastern Communication Association Website http://www.ecasite.org/aws/ECA/pt/sp/Home_Page

Journals
Communication Quarterly
Communication Research Reports
Qualitative Research Reports in Communication

Southern States Communication Association Website http://www.ssca.net/publications

Journal Southern Communication Journal

Western Communication Association Website http://www.westcomm.org/?page=WJC

Journals
Western Journal of Communication
Communication Reports

Association for Communication Administration Website http://www.unco.edu/aca/Journal.html

Journal Journal of the Association for Communication Administration

American Forensic Association Website http://americanforensics.org/publications.html

Journal Argumentation and Advocacy

World Communication Association Website http://wcaweb.org/publication/

Journal Journal of Intercultural Communication


Religious Communication Association Website http://www.relcomm.org/journal-of-communication-and-religion.html

Journal Journal of Communication and Religion

Organization for Research on Women and Communication Website http://www.orwac.org/aws/ORWAC/pt/sp/journal

Journal Women’s Studies in Communication

Organization Studying Communication Language and Gender Website http://osclg.org/women-language

Journal Women and Language

Rhetoric Society of America Website http://rhetoricsociety.org/aws/RSA/pt/sp/rsq

Journal Rhetoric Society Quarterly

Additional Journals
Communication Research: http://crx.sagepub.com
Health Communication: http://www.tandfonline.com/toc/hhth20/current
Journal of Health Communication: http://www.tandfonline.com/toc/uhcm20/current

Index NOTE: Page references to figures and tables are labeled as (fig.) and (table), respectively. Aakhus, Mark, 2:846 ABBA counterbalancing, 1:278 ABC, 2:617, 3:1039, 3:1092 Abortion, 4:1582 Abstract or executive summary, 1:1–2 abstract, overview, 1:1 executive summary, 1:2 major findings and results, 1:2 research focus and problem statement, 1:2 research method, 1:2 Academic databases. See Databases, academic Academic journals, 1:4–6 contemporary issues of, 1:5–6 types of communication journals, 1:4–5 See also Invited publication; Publishing journal articles Academic journal structure, 1:2–4 examples, 1:3–4 overview, 1:2–3 Academic OneFile, 1:349 Academic Search Complete, 1:349 Accountability, critical ethnography and, 1:298 Accreditation, for communication assessment, 1:182 Accrediting Commission for Community and Junior Colleges/ Western Association of Schools and Colleges (AACJC-WASC), 1:182 Acculturation, 1:6–8 Asian/Pacific American communication studies, 1:61 examples of, 1:7–8 overview, 1:6 processes, 1:6–7 re-entry shock, 1:8 synonyms and timing of, 1:7 Accuracy, of Web resources, 1:94 Accuracy tasks, in group communication, 2:634

Acknowledging the contribution of others, 1:8–11 American Psychological Association style, 1:26–28 archive searching and, 1:49 authorship bias, 1:69–71 authorship credit, 1:71–73 Chicago style, 1:126–128 citations and attributions, 4:1897–1898 citation searching with bibliographic resources, 1:94 copyright issues in research, 1:261–263 data ownership and rights, 1:51 ethics of research and publication, 1:10 Modern Language Association style, 3:1031–1032 overview, 1:8 self-plagiarism, 1:9–10 standards for acknowledgment, 1:9 standards for authorship, 1:8–9 standards for plagiarism, 1:9 See also Plagiarism Acquiescence bias, 4:1556 Acta Diurna, 2:820 Active-Empathic Listening (AEL) Scale, 1:415–416 Activism and social justice activism and research examples, 1:12–14 analyses of, 1:11–12 National Communication Association’s Activism and Social Justice Division, 1:11 teaching, 1:14 types of activism and social justice communication scholarship, 1:11 Activism and Social Justice Division (NCA), 1:11, 1:14 ACTUP, 3:1390 Adam, Paul C., 2:622 Adams, Ansel, 4:1866


Adams, Tony E., 3:1393 Adaptive conjoint analysis (ACA), 1:237, 1:236 Adaptive structuration theory (AST), 1:175–176 Ad baculum argument, 1:54 Adbusters, 1:110 Additive clustering, 1:142 Ad hominem argument, 1:54 Adorno, Theodor W., 1:295, 1:321, 2:568, 2:775, 3:1232, 3:1282 Advancement of Learning (Bacon), 3:1508 Advertising, gender-specific language in, 2:617 Affect, rhetoric and, 3:1347 Affective learning, 2:712 Affiliation, social relationships and, 4:1647 African American communication and culture, 1:15–19 Afrocentricity, 1:17 Black Feminist Movement and Womanism, 1:17–18 complicity theory, 1:18 critical race theory, 1:302–305 cultural contracts theory, 1:18 digital media and race, 1:380–383 identity variables, 1:16 language variables, 1:16 narrative analysis, 1:18–19 overview, 1:15 spirituality, religion, and Black churches, 1:16 theoretical constructs and ideologies, overview, 1:16–17 variables, overview, 1:15 Afrocentricity, 1:17, 4:1808 After-action reports, in emergency communication, 1:410 Against the Sophists (Isocrates), 3:1506 Agency, in cultural studies, 1:322 Agency for Healthcare Research and Quality (AHRQ), 2:640

1910   Index Agenda setting, 1:19–21 agenda building and agenda framing, 1:20 criticism and new media environment, 1:20–21 historical background, 1:19–20 1968 presidential election and, 1:20 in political communication, 3:1268 related concepts, 1:20 Aging research. See Communication and aging research Agresti, Alan, 2:947 Agresti–Caffo interval, 2:905 Agresti–Coull interval, 2:905 Ahrens, Frank, 3:1039 “Ain’t I a Woman?” (Truth), 2:573 Alberic of Monte Cassino, 3:1507 Alcidamas, 3:1486 Alcoholics Anonymous, 2:550, 4:1687 Allen, Brenda, 2:553 Allen, Mike, 1:431, 2:998, 3:1057, 4:1640 Allen, Paula Gunn, 2:554 Allison, Paul D., 2:839 Allport, Gordon, 2:767 Alpha level, 2:677 Alternative conference presentation formats, 1:21–23 courses, short, 1:23 discussion panel, 1:22 oral presentation, 1:22 overview, 1:21–22 performance session, 1:22 poster presentation, 1:22 scholar-to-scholar presentation, 1:23 seminars, 1:23 Alternative news media, 1:23–26 assessing credibility of traditional and alternative news media, 1:24–25 defined, 1:24 ethical considerations, 1:25 overview, 1:23 traditional media, 1:24 Althusser, Louis, 1:321, 2:718, 3:1510 Altman, Rick, 2:568 Amazon, 4:1530, 4:1540 Ambivalence paradigm, intergenerational communication, 2:763–764 American Association for Higher Education and Accreditation (AAHEA), 1:185 American Association for Public Opinion Research, 3:1150 American Association of Retired Persons (AARP), 2:874 American Association of Teachers of Journalism, 2:820

American Equal Rights Association, 2:573 American Journalism Review, 3:1039 American Life Project, 1:176 American Medical Association (AMA), 1:452 American National Election Studies (ANES), 3:1273–1274 American Psychological Association, on vulnerable groups, 4:1872 American Psychological Association style, 1:26–28 APA-specific characteristics, 1:26–27 authorship credit and, 1:71 for bibliographic research, 1:93 Chicago style compared to, 1:126 confidentiality and anonymity of participants, 1:228 on deception in research, 1:361 goals and standards, 1:26 history and controversies, 1:27 interdisciplinary journals, 2:759 for journal articles, overview, 3:1373 partial correlation, 3:1186 publication style guides, overview, 3:1365–1368 research report organization, 3:1458, 3:1459 submission of research to a journal, 4:1692, 4:1693, 4:1694 for writing a methods section, 4:1891 writing individual sections of journal articles, 4:1892, 4:1895, 4:1896, 4:1897 American Society of Newspaper Editors, 1:453 American Splendor (film), 2:568 Americans with Disabilities Act, 1:390 American Woman Suffrage Association (AWSA), 2:573 Amin, Samir, 2:775 Analysis of covariance (ANCOVA), 1:28–30 applications and benefits, 1:29–30 blocking variables compared to, 1:99 covariates, 1:284–285 covariate selection, 1:30 crossed factor analysis, 2:493 eta squared used in, 1:446, 1:447 use in communication research, 1:28–29 Analysis of ranks, 1:30–31 Analysis of residuals, 1:31–33 determining cause of residual pattern, 1:33 overview, 1:31–32 plotting residuals, 1:32 Analysis of variance (ANOVA), 1:33–36

analysis of covariance (ANCOVA) compared to, 1:28–30 ANOVA, defined, 1:34 ANOVA for categorical data, 2:886 assumptions underlying ANOVA tests, 1:33–34 between-subjects design, 1:91 blocking variables and, 1:99 comparison to t test, 1:34 confidence intervals, 1:226 contrast analysis, 1:249 covariance/variance matrix, 1:281 crossed factor analysis, 2:493 data trimming, 1:347 decomposing sum of squares, 1:363, 1:364–366 dichotomizing a continuous variable compared to, 1:435, 1:436 discriminant analysis and, 1:396 effect sizes, 1:407 eta squared used in, 1:446 factorial analysis of variance, 2:533–536 factorial ANOVA designs, 1:35–36 generalized linear models (GLMs), 2:869 heterogeneity of variance, 2:652 intraclass correlation, 2:807, 2:808 lambda for justifying ANOVAs, 2:841 MANOVA versus, 3:1059–1060 media split of sample, 2:974 meta-analysis: model testing, 2:997–998 multiple R relationship to, 3:1051–1052 multivariate analysis of variance, 3:1058–1062 one-way analysis of variance, 1:34–35, 3:1128–1132 quasi-F, 3:1387–1388 random factors, 2:501 univariate statistics, 4:1811–1812 See also individual post hoc test entries Analysis using active appearance models (AAM), 2:491 Analytical journals, 2:826 Analytic induction, 1:36–39 accounting for counterfactuals and outliers, 1:37–39 overview, 1:36–37 qualitative approach, 1:37 quantitative approach, 1:37 Analytic rotation methods, overview, 2:528 Andersen, Janis, 1:193 Anderson, Rob, 1:196 Anderson–Darling test, 4:1606 Ang, Ien, 2:775

Index   1911 Animal studies, 1:424 Annenberg Public Policy Center, 3:1273 Anonymity. See Confidentiality and anonymity of participants Anonymous of Bologna, 3:1507 Anonymous source of data, 1:39–40 Anthony, Susan B., 2:573, 2:665 Antimodels, Marxist analysis, 2:911–912, 3:1030 Anti-Spam Legislation (Canada), 4:1652 Anxiety/Uncertainty Management (AUM) theory, 1:213 Anzaldúa, Gloria, 2:554, 2:851, 3:1390, 3:1391 Aoki, Eric, 3:1492 A posteriori, defined, 3:1132 Appearance code, 3:1099 Apple, 2:914 Applied communication, 1:40–44 defining research and rigor, 1:42–43 development of, as research and discipline, 1:41–42 overview, 1:40–41 role of applied research, 1:43–44 role of theory in, 1:43 Apprehension of communicating. See Communication apprehension Apprehensive subject role, 1:372 A priori hypotheses, defined, 2:509, 3:1132 Archaeology of Knowledge, The (Foucault), 1:391 Archetype, 3:1346 Archival analysis, 1:44–46 archive as memory device, 1:45–46 digital archives, 1:46 foundation and government research collections, 2:581–583 invention and, 1:45 overview, 1:44–45 ArchiveGrid, 1:44 Archive searching for research, 1:46–49 acknowledging archival sources, 1:49 approaches and dimensions of, 1:47 identifying goals for, 1:47 initial inquiries for, 1:47–48 orientation interviews for, 1:48 overview, 1:46 rules of, 1:48 searching collections, 1:48–49 technology and, 1:49 Archiving data, 1:50–52 choosing archives for, 1:51–52 data ownership and rights, 1:51 data security and backup, 1:51 data sharing, 1:50–51 for different time scales, 1:50 metadata, 1:51 preserving data, 1:50

Arendt, Hannah, 3:1228, 3:1233, 3:1234 Arête, 3:1478 Argumentation and Advocacy, 3:1488 Argumentation theory, 1:52–56 dialectic, 1:52–53 disagreement, 1:53 fallacy theory, 1:54 informal logic, 1:53 overview, 1:52 pragma-dialectics, 1:54 rhetorical approach, 1:52, 1:54–55 Aristotle categorization, 1:119 Collection of Arts, 3:1506 on ethics, 1:452 historical analysis and, 2:663 Isocrates’ rhetoric and, 3:1484, 3:1486 neo-Aristotelian criticism, 3:1087–1090 Nicomachean Ethics, 1:195, 3:1479 organizational ethics, 3:1159 path analysis, 3:1194 phenomenological traditions, 3:1229 philosophy of communication, 3:1232 religious communication, 3:1431 rhetoric, ethos, 3:1478–1479 rhetoric, logos, 3:1480–1482 rhetoric, overview, 3:1477 rhetoric, pathos, 3:1482–1483 rhetorical genre, 3:1496–1497 rhetorical theory, 3:1506, 3:1507 robotic communication and, 3:1517 Arithmetic mean. See Mean, arithmetic Arnett, Ronald C., 1:192, 1:196, 3:1232, 3:1234, 3:1235 Arnheim, Rudolf, 2:567 Arte of Rhetorique (Wilson), 3:1508 Artifact selection, 1:56–59 appropriateness of, 1:58 artifact description, research questions, and justification, 1:58–59 artifacts as dynamic, 1:56–57 defining artifacts, 1:56 errors of measurement: range restriction, 1:443–445 ethnography and, 1:459–460 meta-analysis and, 2:986 methodological goals and, 1:59 multiple code systems of artifacts, 1:57 rhetorical artifacts, 3:1494–1497 selection for analysis, overview, 1:57–58 Art of Persuading, The (Pascal), 3:1508 Arts-based research stories, 1:68

Asante, Molefi Kete, 1:17, 3:1510
Asch, Solomon, 1:224
Ascribed identity, 1:60–61
Asian/Pacific American Caucus, 1:60
Asian/Pacific American communication studies, 1:59–63
  ascribed identity, 1:60–61
  diaspora and cultural adaptation, 1:61
  intercultural communication between Asia and U.S., 1:61–62
  intersectionality of identity, 1:61
  methodology in, 1:62
  overview, 1:59–60
  scope of, 1:60
Asimov, Isaac, 3:1517
Assessment. See Communication assessment
Assimilation. See Acculturation
Association, causality and, 1:121–122
Association for American Colleges & Universities (AAC&U), 1:185
Association for Chinese Communication Studies (ACCS), 3:1331
Association for Computing Machinery, 1:177
Association for Education in Journalism and Mass Communication (AEJMC), 2:820, 3:1445
Association for the Assessment of Learning in Higher Education (AALHE), 1:185
Association of American Universities (AAU), 1:185
Association of Internet Researchers, 1:63–65, 1:177
  annual conferences, 1:64
  ethics committee, 1:64
  listserv, 1:64–65
  membership and governance structure, 1:63–64
  overview, 1:63
Association of Public and Land-Grant Universities (APLU), 1:185
Association of Science-Technology Centers, 4:1573
Asymmetry, social network analysis, 4:1634
Asynchronous online interviews, 3:1144
Atari, Inc., 4:1858
Atlas Shrugged (Rand), 1:195
Atomistic fallacy, 2:657–658
Attention
  of digital natives, 1:384
  measures of attention, 2:688
Attenuation. See Errors of measurement: attenuation

Attitude studies, environmental communication, 1:423
Attribution theory, 3:1191, 4:1826
Audience
  argumentation theory and, 1:54–55
  audience reception theories, 2:775–776
  content analysis and defining population for, 1:244
  for invited publication, 2:814
  of performance studies, 3:1218
  target audience research, 4:1680–1681
  title of manuscripts and, 4:1773
  writing methods section for, 4:1890–1891
  writing process and, 4:1894–1895
Audio materials, experimental manipulation, 1:475
Audiovisual mass media, history of, 1:200–201
Augustine, 3:1232
Auld, Frank, 3:1423
Auscultatory measurement, 3:1240–1241
Austin, J. L., 3:1216, 3:1217, 3:1232
Australian and New Zealand Communication Association (ANZCA), 3:1331
Australian Speech Communication Association (ASCA), 3:1331
Author credibility, for literature review, 2:873
Authoring: telling a research story, 1:65–69
  arts-based research stories, 1:68
  critical research stories, 1:67
  ethical considerations, 1:68
  interpretive research stories, 1:67
  overview, 1:65–66
  postmodern research stories, 1:67–68
  postpositivist research stories, 1:66–67
Authoritarianism, 2:773–774
Authority, for evaluating Web resources, 1:93
Authorship bias, 1:69–71
  authorship effect, 1:69
  citation bias, 1:69–70
  conflict of interest, 1:70
  countering, 1:70–71
  gift and ghost authorship, 1:70
Authorship credit
  acknowledging contribution of others, 1:8–10
  overview, 1:71–73
  See also Acknowledging the contribution of others; Plagiarism
Autocorrelation functions (ACFs), 1:77

Autoethnography, 1:73–76
  approaches to, 1:75–76
  elements of research for, 1:74–75
  family communication research, 2:547
  overview, 1:73–74
  performance (auto)ethnography, 3:1217–1218
  queer studies, 3:1393
Automaticity, measures of, 2:690
Autoregressive, integrative, moving average (ARIMA) models, 1:76–79
  bivariate modeling of, 1:78–79
  overview, 1:76–77
  univariate modeling of, 1:77–78
Average effect, estimation of. See Meta-analysis: estimation of average effect
Averages, frequency distributions, 2:597
Axial coding, 1:79–82
  categories and subcategories, 1:81
  controversy of, 1:82
  grounded theory, 2:631–632
  influence of grounded theory on, 1:80–81
  memos and, 1:81–82
  method for analysis, 1:80
  overview, 1:79–80
  using, 1:82
Ayres, Joseph, 3:1057
Babbling, as preverbal behavior, 1:375
“Backstage,” 4:1742
Back-view mirror analysis, 1:168
Backward stepwise discriminant function analysis, 1:397
Bacon, Francis, 3:1508
Bacon, Jen, 2:553
Bacon, Wallace, 3:1217
Baden, Christian, 2:504
Badiou, Alain, 2:698
Bad news, communication of, 1:83–86
  changing health-care relationships and goals, 1:83–84
  commonly adopted models, 1:85–86
  contexts, 1:83
  criticism of bad news guidelines and protocols, 1:86
  principles and skills for, 1:85
  providers and patients: impact and perspectives, 1:84–85
  See also Health communication
Baer, Ralph, 4:1858
Baez, Jillian, 2:553, 2:852
Baile, Walter, 1:85
Bainbridge, Jason, 4:1756
Baird, A. Craig, 3:1088
Bakhtin, Mikhail, 2:718, 3:1232, 3:1510

Bald on record, 3:1265
Bales, Robert, 2:549, 2:636
Bandura, Albert, 1:169, 3:1513
Band wagon (propaganda device), 3:1341, 4:1710
Bangladesh refugee case, 4:1872
Banks, Marcus, 4:1863
Baran, Stanley, 2:649
Barge, J. Kevin, 1:188, 1:189
Barker, Larry, 1:415
Barnett, George, 1:222
Baron, Reuben M., 4:1852
Barthes, Roland, 1:56, 2:568, 3:1232, 3:1510, 4:1864
Bartlett, Maurice Stevenson, 2:508
Bartlett’s test
  exploratory factor analysis and, 2:517
  heterogeneity of variance, 2:653
Basic course in communication, 1:87–90
  foundation of discipline, 1:87–88
  pedagogy, 1:88–90
Baskerville, Barnett, 2:664
Batsell, Jake, 2:819
Baudrillard, Jean, 2:621, 3:1510
Baxter, Leslie A., 2:552, 3:1408, 3:1409, 3:1411, 4:1795, 4:1796
Bayesian model, 2:905, 2:927, 3:1040, 4:1684
Baym, Nancy, 1:221
Bazin, André, 2:568
Beall, Jeffrey, 3:1202
Beatty, Michael, 1:169
Beebe, Steven A., 1:90, 1:188, 1:189
Beecher, Henry Ward, 2:664
Beer, David, 1:46
Before the Rain (film), 2:781
Behavioral learning, 2:712
Behaviorism, development of communication in children and, 1:374
Behe, Michael, 2:698
Bell, Derrick, 1:303
Bell Labs, 1:144
Belmont Report, 1:255, 1:342, 2:670, 2:705–706, 2:709, 3:1324
Beltrán, Luis Ramiro, 2:776
Benchmarks, for communication assessment, 1:185
Beneficence, for human subjects, 2:670
Benini’s β, 2:725, 2:728 (table)
Benjamin, Walter, 1:321, 2:568, 3:1282
Bennet’s S, 2:725, 2:726, 2:728 (table)
Bennett, Jeffrey A., 3:1391
Bennis, Warren, 2:855
Benoit, William, 1:291
Bentham, Jeremy, 1:195, 3:1159

Benzécri, Jean-Paul, 1:276
Berelson, Bernard, 1:149
Berger, Peter L., 2:885, 4:1623
Berke, Philip R. R., 2:734
Bernoulli random variables, 2:905
Berstein, Daniel, 4:1571
Besson, Luc, 2:781
Beta error. See False negative; Type II error
Betweenness centrality, 1:222
Between-groups variance estimate, 3:1129–1130
Between-participants factorial designs, 1:36
Between-subjects design, 1:90–92
  advantages and disadvantages of, 1:91–92
  analysis of, 1:91
  factorial analysis of variance, 2:534
  overview, 1:90
  use of, 1:91
Beullens, Kathleen, 1:153
Beyond Culture (Hall), 1:310, 2:756
“Beyond Persuasion: A Proposal for Invitational Rhetoric” (Foss, Griffin), 2:552
Bhabha, Homi, 1:376, 3:1294
Bias
  critiquing literature sources for, 2:885–886
  in Likert scales, 4:1556–1557
  publication bias, 2:996–997
  sampling theory, 4:1551–1552
  See also Random assignment; Random assignment of participants
Bibliographic research, 1:92–94
  academic bibliographic research tools, 1:92–93
  bibliographic databases, 1:349
  criteria for evaluating Web resources, 1:93–94
  strategies, 1:94
Bibliographies
  Chicago style for, 1:127
  in emergency communication, 1:412
  See also individual style guides
Bicycle Thief, The (film), 2:567
Big Book (Alcoholics Anonymous), 2:550
Big data
  journalism and, 2:822–823
  media and technology studies, 2:959
  online data documentation, 3:1141
  social networks and, 4:1641
Bill and Melinda Gates Foundation, 2:581, 2:601
Billings, Andrew C., 4:1664
Bing, 4:1574

Binomial effect size display, 1:95–96
  BESD, defined, 1:95
  for small effect sizes, 1:95 (table)
  uses of, 1:95–96
Biochemical variation, 3:1237
Birth control, 4:1582
Birth of a Nation, The (film), 2:651
Bitzer, Lloyd, 3:1498
Bivariate analyses, 4:1768
Bivariate statistics, 1:96–98
  examples, 1:96–97
  procedures for conducting bivariate analysis, 1:97–98
  types of, 1:97
Björkin, Mats, 1:153
Black, Edwin, 1:296, 2:664, 3:1090, 3:1498, 3:1509
Black Feminist Movement, 1:17–18
Black Lives Matter, 1:16
Black Panthers, 3:1504
Black propaganda, 3:1340
Blackwell, Henry, 2:573
Black Women Organized for Action (BWOA), 4:1581
Blair, Carole, 2:552, 3:1477, 3:1509
Blair, Hugh, 3:1508
Blair, J. Anthony, 1:53
Bless, Herbert, 4:1694
Blizzard Entertainment, 4:1858
Block analysis. See Multiple regression: block analysis
Blocking variable, 1:98–99
Blogs and research, 1:100–101
  content analysis, 1:100–101
  discourse analysis, 1:101
  Internet research and ethical decision making, 2:787–788
  intrapersonal communication in, 2:812–813
  overview, 1:100
  participant observation, 1:101
  research application, 1:100
  social network analysis, 1:100
Blood pressure. See Physiological measurement: blood pressure
Bloomberg Businessweek, 2:874
Blumer, Herbert, 4:1739, 4:1740
Bly, Nellie, 1:19
Boarding house experiment (Garfinkeling example), 2:610
Bochner, Arthur, 2:790
Bodie, Graham D., 1:413–416, 3:1241
Body image and eating disorders, 1:101–104
  ecological risk and protective theory (ERPT), 1:103–104
  overview, 1:101–102
  reinforcement theory, 1:102–103
  social comparison theory, 1:102

Bogardus social distance scale, 4:1565
Bogenschneider, Karen, 1:103
Boles, Shawn M., 1:482
Bollywood, 2:780
Bolton, Charles, 4:1795
Bona fide group perspective (BFGP), 4:1612
Bond, Michael Harris, 1:160
Bonferroni correction, 1:104–106
  concerns about, 1:105
  Kruskal–Wallis test, 2:832
  least significant difference, 3:1299
  procedures, 1:105
  Type I errors, 1:104
Books, history of print and, 1:200
Boolean operators, 1:350
Bootstrapping, 1:106–108
  application to mediational analysis, 1:108
  origins and execution, 1:106–108
  overview, 1:106
Bormann, Ernest, 2:548, 2:549
Boster, Frank, 3:1225
Boston Women’s Health Collective, 4:1581
Bound and Gagged: Pornography and the Politics of Fantasy in America (Kipnis), 3:1286, 3:1290
Boundary management, 3:1321
Bouvry, Stéphane, 2:747
Bowlby, John, 4:1647
Box, George, 2:668
Box.net, 3:1141
Box’s M, 2:653
boyd, danah, 1:221, 4:1641
Boyer, Ernest, 4:1568
Bradburn, Norman, 4:1552
Bradley, Lawrence W., 3:1079
Bradlow, E. T., 3:1040
Brain. See Communication and human biology
Brambor, T., 3:1037
Brandtzaeg, Petter, 1:153
Braver, Mary, 4:1650, 4:1651
Braver, Sanford, 4:1650, 4:1651
Breaching experiment (Garfinkeling), 2:608–610
Bremond, Emile, 3:1485, 3:1486
Brennan, William J., Jr., 2:590
Brennan–Prediger kappa, 3:1428
Brentano, Franz, 3:1227, 3:1229
Breusch–Pagan test, 2:655
Brigance, William Norwood, 2:664
British Empire Marketing Board, 2:567
British General Post Office, 2:567
Broadrose, Brian T., 3:1079
Brookhaven National Laboratory, 4:1857

Brooks, Garth, 2:911
Brown, Julie R., 2:552
Brown, Michael, 1:158, 2:591, 3:1496
Brown, Penelope, 3:1264–1266
Browne, Kath, 3:1391
Brown–Forsythe test, 2:653, 2:666
Brown v. Board of Education, 1:303
Brummett, Barry, 1:57, 3:1489
Bryant, Anita, 2:624
Buber, Martin, 1:196
Buckman, Robert, 1:85
Budweiser (Anheuser-Busch), 1:265–266
Bullis, Connie, 4:1795
Bullying, 2:628–629
Buñuel, Luis, 2:780
Burke, Kenneth
  on artifact selection, 1:57, 1:58, 1:59
  Burkean analysis, 1:109–112
  fantasy theme analysis, 2:549
  health communication, 2:644
  ideographs, 2:682
  Kenneth Burke Journal, 1:3
  pentadic analysis and, 3:1210, 3:1211
  performance studies, 3:1216
  rhetorical and dramatism analysis, 3:1490–1494
  rhetorical theory, 3:1508
  second wave feminism and, 4:1583
  synecdoche, 4:1744
  “Terministic Screens,” 4:1746
Burkean analysis, 1:109–112
  feminist analysis, 2:554
  future directions of research, 1:110–111
  overview, 1:109
  perspective by incongruity, 1:110
  psychology and form, 1:110–111
  themes and motivations, 1:109–110
  See also Burke, Kenneth
Burkhalter, Byron, 1:381
Burnett, Ann, 2:554
Burrows, Roger, 1:46
Burt, R. S., 4:1635
Bush, George H. W., 3:1275
Bush, George W., 3:1504, 4:1866
Bush, Vannevar, 2:671
Business communication, 1:112–115
  customer satisfaction and, 1:113–114
  employee satisfaction and, 1:114
  listening skills for, 1:113
  multiple methods in, 1:115
  organizational climate and, 1:114
  overview, 1:112–113
  as skill, 1:113
  Test of English for International Communication (TOEIC), 1:418

  work-life balance and, 1:114
  workplace bullying and, 1:114–115
  workplace diversity and inclusion, 1:114
  workplace relationships and, 1:115
  writing skills for, 1:113
Buss, David, 1:165
Butler, Judith, 2:554, 3:1217, 3:1390, 3:1395, 3:1510
Butterworth, Michael, 3:1504
Buzzanell, Patrice, 2:553–554
Cacioppo, John, 3:1224
Cahiers du Cinema, 2:568
Cai, Li, 2:652
Caillois, Roger, 2:604
Calafell, Bernadette M., 2:553, 2:850, 2:851, 2:852, 3:1393
Campbell, Donald T., 1:480, 2:770, 4:1650, 4:1651, 4:1677, 4:1678
Campbell, George, 3:1508
Campbell, Karlyn Kohrs, 2:552, 2:574, 2:665, 3:1497, 3:1498, 4:1583
Canadian Communication Association (CCA), 3:1331
Canadian Radio-television and Telecommunications Commission (CRTC), 4:1653
Canal+Cine+, 2:781
Cancer Health Literacy Study (CHLS), 2:646–648
Canonical correlation, in discriminant analysis, 1:397
CAN-SPAM Act of 2003, 4:1652
Čapek, Karel, 3:1517
Carbasse, R., 3:1039
Carbaugh, Donal, 1:162, 2:846, 4:1663
Carbine, Patricia, 4:1581
Card stacking (propaganda device), 3:1341
Carroll, John M., 2:957
Carter, Jimmy, 3:1271, 3:1275
Case study, 1:117–119
  approaches to communication ethics, 1:197
  freedom of expression example, 2:591–592
  one-shot case study designs, 1:478
  overview, 1:117
  research design, 1:117–118
  research procedures, 1:118
  rigor and research, 1:118
  risk communication, 3:1515–1516
  war on terror (frame analysis case study), 2:585–586
Casillas, Yannina, 2:852
Castañeda, Carlos, 2:852
Castells, Manuel, 2:777
Cate, Rodney, 4:1796

Categorization, 1:119–121
  classical view, 1:119
  conversation analytic view, 1:120–121
  prototype view, 1:119–120
  self-categorization view, 1:120
Catt, Carrie Chapman, 2:574, 2:665
Cattell’s scree test, 2:517, 2:518
Caucus on Gay and Lesbian Concerns (1978), 2:623
Causality, 1:121–124
  association, 1:121–122
  axial coding and, 1:81
  block analysis, 3:1047
  content analysis and, 1:243
  correlation versus, 4:1600–1601
  demonstrating (See individual areas of research)
  in experimental research, 1:124
  meta-analysis: model testing, 2:997–998
  multivariate, 1:122–123
  overview, 1:121
  planning research project, 3:1448
  possible alternate hypotheses and, 1:122
  in survey research, 1:123–124
  temporal order, 1:122
Causal layered analysis, 1:168
Cavalier-Smith, Thomas, 2:661
CBS, 3:1092
Ceiling and floor effects. See Errors of measurement: ceiling and floor effects
Centers for Disease Control and Prevention (CDC), 2:640, 3:1516
Centrality, 1:222
Central States Communication Association (CSCA), 1:3, 1:5, 1:10, 1:21, 2:555
Central States Speech Journal, 3:1488
Central Statistical Office (Poland), 4:1577
Central tendency bias, 4:1556
Centre for Contemporary Cultural Studies, University of Birmingham (UK), 1:295, 1:321
Centroid, 1:276–277, 1:282, 1:397
Cepeda, Maria Elena, 2:852
Certificate of Confidentiality (CoC), 1:343
Chabrol, Claude, 2:568
Chaffee, Steve, 3:1233
Challenger, 3:1503–1504
Chapman, John Jay, 2:664
Channels, 4:1634
Character, factor analysis of, 2:502
Charmaz, Kathy, 2:630
Chase, Stuart, 4:1624

Chasing Papi (film), 2:852
Chatbots, 3:1518–1519
Chat rooms, 1:124–126
  confidentiality and anonymity of participants, 1:229–230
  forums, 1:125
  overview, 1:124–125
  research application, 1:125–126
Chávez, Cesar, 2:851
Chávez, Karma R., 2:851, 2:852
Chen, Victoria, 2:852
Cherokee Phoenix and Indian Advocate, 3:1080
Cherokee Tribal Council, 3:1080
Chevrette, Roberta, 2:553
Chicago style, 1:126–128
  APA compared to, 1:27
  author-date format, 1:128
  Chicago Manual of Style, overview, 1:126
  on electronic sources, 1:128
  general guidelines, 1:126–127
  notes-bibliography guidelines, 1:127
  publication style guides, overview, 3:1365–1368
  research report organization, 3:1458, 3:1459
  submission of research to a journal, 4:1692, 4:1694
Chicago Tribune, 2:823
Child-parent communication
  children as digital natives, 1:383–386
  evolution and, 1:164–165
Children
  consumer socialization of, 2:570
  debriefing of participants, 1:357–358
  development of communication in, 1:373–376
  Facial Action Coding System in infants, 2:489
  gender and communication, 2:613
  generalization in parent-child communication dyads, 2:620
  intergenerational communication, 2:761–766
  path analysis, 3:1193–1197, 3:1195 (fig.)
  as vulnerable group, 4:1873
Chinese Journal of Communication, 1:3
Chi-square, 1:128–132
  assumption and limitations of, 1:131–132
  confirmatory factor analysis, 2:507
  correspondence analysis, 1:377
  Cramér’s V compared to, 1:289
  degrees of freedom, 1:368
  discriminant analysis and, 1:398

  multiple-sample, 1:130–131, 1:130 (table), 1:131 (table)
  overview, 1:128–129
  single-sample, 1:129–130, 1:129 (table)
  structural equation modeling, 4:1683
  univariate statistics, 4:1811–1812
Cho, Hyunyi, 3:1225
Chobham, Thomas, 3:1507
Choice-based conjoint (CBC) analysis, 1:236
Choice dilemma task, 2:634
Chomsky, Noam, 1:311, 1:374
Christians, Clifford, 1:196
Chronemic code, 3:1099
Chyi, Hsiang Iris, 3:1040
Cicchetti, Domenic V., 2:727
Cicero
  on ethics, 1:452
  Isocrates’ rhetoric and, 3:1484, 3:1486
  neo-Aristotelian criticism and, 3:1088, 3:1089
  philosophy of communication, 3:1232
  rhetoric, ethos, 3:1479
  rhetoric, overview, 3:1477
Cinderella effect, 1:164–165
Cinema and Social Change in Latin America: Conversations with Filmmakers (Burton), 2:781
Cisgender, defined, 2:624
Cisneros, Josue David, 2:851
Cissna, Kenneth, 1:196
Citation. See Acknowledging the contribution of others
Citation analyses, 1:132–134
  assessing value of publications, scholars, departments, 1:132
  citation bias, 1:69–70
  comparing indexes, 1:132–133
  limitations to citation counting, 1:133–134
  power ranking, 1:133
  See also Acknowledging the contribution of others
Citations to research, 1:134–136
  critiquing design of investigation, 1:135–136
  methodological issues, 1:135
  overview, 1:134–135
  relevant theoretical literature, 1:135
Citizens United v. Federal Election Commission, 2:589
Civil Rights Act (1964), 1:303, 4:1580
Civil rights movement
  African American communication and culture, 1:16
  rhetorical method and, 3:1504

Cixous, Hélène, 3:1510
Clair, Robin, 3:1316
Clark, Margaret, 4:1647
Classical view, categorization, 1:119
Classification, experimental manipulation, 1:474
Classroom environment, instructional communication, 2:714–715
Clinton, Bill, 2:585, 3:1275, 4:1871
Clinton, Hillary, 1:370, 2:616, 3:1272, 3:1276
Closed archives, 1:50–51
Close reading, 1:136–139
  close textual analysis, overview, 1:136 (See also Textual analysis)
  conducting, 1:138
  role and requirements of critic, 1:136–137
  theories, methods, techniques, 1:137–138
Cloud computing, history of, 1:202
Cloze procedure, 1:139–141
  combining, with other techniques, 1:141
  general prompting strategy, 1:139–140
  measure of readability, 1:140
  overview, 1:139
Cluster analysis, 1:142–145
  additive clustering, 1:142
  challenges to, 1:144–145
  cluster sampling, 3:1400, 4:1525, 4:1539, 4:1550–1551
  divisive clustering, 1:142–143, 1:143 (fig.)
  methods, overview, 1:142
  non-hierarchical clustering, 1:143–144
CNN, 3:1037, 3:1038
Coca-Cola, 2:581
Co-constructed narratives, 1:76
Co-cultural communication theory, 1:163
Code of Federal Regulations (CFR) (FDA), 2:709
Coding, fixed, 1:145–147
  overview, 1:145–146
  unitizing, 1:146
Coding, flexible, 1:147–148
  “multi-valued” data, 1:148
  overview, 1:147
  unitizing and, 1:147
Coding of data, 1:148–151
  communication skills and, 1:211
  computer-assisted, 1:151
  content analysis and, 1:241, 1:244, 1:246–248
  corporate communication and, 1:264–265

  ethnography data analysis, 1:460
  Facial Action Coding System, 2:487–491
  fantasy theme analysis, 2:550–551
  grounded theory, 2:630–632
  intercoder reliability, 2:721–724
  manifest versus latent content coding, 1:149
  Markov analysis, 2:906
  nonverbal communication, 3:1099–1100
  observational measurement of face features, 3:1107–1108
  overview, 1:148
  process, 1:149–151
  in quantitative versus qualitative research, 1:149
  script creation in performance studies, 3:1218–1219
  See also individual names of intercoder reliability techniques
Coefficient of coherence, 4:1768
Coefficient of determination, 1:269–270, 1:444
Cognitive learning, 2:712
Cognitive neuroscience approach, to communication, 1:171–172
Cohen, Bernard, 1:20
Cohen, Jacob, 1:95, 1:408, 1:447, 2:947, 3:1194
Cohen, Patricia, 3:1194
Cohen’s Kappa. See Intercoder reliability techniques: Cohen’s Kappa
Coherent subset, 2:515
Cohort, 1:151–154
  classic research, 1:152
  cohort research designs, future of, 1:154
  cohort research designs, overview, 1:151–152
  major considerations, 1:152–153
  prospective cohort designs in communication, 1:153
  retrospective cohort designs in communication, 1:153–154
Collection of Arts (Aristotle), 3:1506
Collectivistic cultures, 1:310
College students, communication apprehension of, 1:178–179
Collinearity. See Multicollinearity
Collins, Patricia Hill, 2:554, 3:1461
ComAbstracts, 1:349
Combination rules, 4:1684
Combs, Timothy, 1:292
ComIndex, 1:349
Common Core educational standards, 1:191, 1:194
Common Rule, 2:709, 2:710
Common Sense (Paine), 2:820

Communibiological paradigm, 1:169–170
Communication, Culture, and Critique (ICA), 1:4, 1:5
Communication accommodation theory (CAT), 1:161, 2:762–763
Communication Accommodation Theory (Giles), 2:767
Communication Across the Curriculum (CXC), 1:194
Communication and aging research, 1:154–159
  disability and communication, 1:390
  ethics in, 1:158–159
  intergenerational communication, 2:761–766
  overview, 1:154–155
  paradigmatic scope, 1:155
  qualitative research, 1:155–156
  quantitative research, 1:156–158
  topical scope of, 1:155
Communication and Critical/Cultural Studies, 1:204, 2:553
Communication and culture, 1:159–163
  acculturation, 1:6–8
  African American communication and culture, 1:15–19
  Asian/Pacific American communication studies, 1:59–63
  critical approach, 1:162–163
  cross-cultural communication, 1:308–312
  cultural sensitivity in research, 1:317–320
  cultural studies and communication, 1:320–322
  English as a Second Language, 1:418
  ethical issues, international research, 1:449–450
  Facial Action Coding System, 2:489–490
  frame analysis and cultural constraints, 2:585
  game studies, 2:605
  GeoMedia, 2:622
  health communication, 2:643
  Hofstede’s value dimensions, 1:159–161
  intercultural communication, 2:755–758
  intercultural communication competence, 1:188
  international communication theories, 2:775, 2:776
  Internet as cultural context, 2:783–785
  interpretive approach, 1:161–162
  overview, 1:159

  social science approach, overview, 1:159
  sociocultural tradition, 4:1762
  underrepresented groups and, 4:1807–1808
  See also African American communication and culture; Asian/Pacific American communication studies; Ethnicity; Latina/o communication studies; Native American or indigenous peoples communication; Passing theory; Race
Communication and evolution, 1:163–166
  basic principles of evolution, 1:163–164
  communication and human biology, 1:170–171
  family communication, 1:164–165
  overview, 1:163
  persuasive source communication, 1:165
  romantic communication, 1:165
  testing evolutionary explanations, 1:165–166
Communication and future studies, 1:166–169
  back-view mirror analysis, 1:168
  causal layered analysis, 1:168
  content analysis, 1:168
  Delphi method, 1:168
  environmental scanning, 1:168
  futures wheel, 1:168
  futures workshops, 1:168
  game-based analysis, 1:168
  information technology and new media, 1:172
  interdisciplinary areas, 1:167
  overview, 1:166–167
  relevance tree, 1:168–169
  science fiction and, 1:167–168
  trend analysis, 1:169
  visioning, 1:173
Communication and human biology, 1:169–173
  blood pressure, 3:1238 (fig.), 3:1238–1242, 3:1238 (table), 3:1242 (table)
  communication apprehension, 1:180
  evolutionary perspective, 1:170–171
  foundational assumptions, 1:169–170
  genital blood volume, 3:1242–1245
  heart rate, 3:1245–1248
  methodological tools, 1:172–173
  neuroscientific perspective, 1:171–172

  personal relationship studies, 3:1223
  physiological measurement, 3:1236–1238
  pupillary response, 3:1248–1249
  research traditions, 1:170–172
  skin conductance (electrodermal activity), 3:1237, 3:1250–1253
Communication and Mass Media Complete, 1:349
Communication and Organizations: An Interpretive Approach (Putnam, Pacanowsky), 2:794
Communication and technology, 1:173–177
  archive searching for research and, 1:49
  bibliographic research, 1:92–94
  challenges of researching, 1:176–177
  communication and future studies, 1:166–169
  computer-assisted coding of data, 1:151
  computer-mediated communication (CMC), overview, 1:173
  computer-mediated communication (CMC) competence, 1:188–189
  confidentiality and anonymity of participants, 1:229–230
  cyberchondria and, 1:328–330
  database and technological considerations, 2:786–787
  digital age and Internet, 1:201–202
  digital media and race, 1:380–383
  digital natives, 1:383–386
  dime dating, 1:386–387
  educational technology, 1:403–406
  feminist analysis and, 2:553
  group-based approaches, 1:175–176
  health communication sought on Internet, 2:644
  historical perspective, 1:173–174
  human-computer interaction (HCI), 2:671–674
  Internet as cultural context, 2:783–785
  interpersonal communication approaches, 1:174–175
  massive multiplayer online games, 2:917–919
  media and technology studies, 2:957–960
  media diffusion, 2:960–963
  motives-based approaches, 1:176
  multimodality approach, 1:175
  naturalistic observation via Internet, 3:1083–1084
  new media analysis, 3:1090–1093
  online observation, 3:1117–1118

  privacy of information and digital platforms, 3:1322–1323, 3:1325
  See also Media and technology studies; individual Internet topics
Communication apprehension, 1:177–182
  assessment, 1:178, 1:184
  benefits of research, 1:181
  commonality, universality, applicability of, 1:178–179
  early research, 1:178
  factorial analysis of variance example, 2:535 (table)
  methodologies for studying, 1:179–181
  overview, 1:177–178
Communication as design, 2:846
Communication assessment, 1:182–186
  accreditation, 1:182
  improving student learning, 1:185
  learning goals, 1:182–183
  measures for, 1:183–185
Communication competence, 1:186–190
  communication skills compared to, 1:209
  components of, 1:186–187
  errors of measurement: range restriction, 1:443
  importance of, 1:189
  measuring, 1:189
  overview, 1:186
  types of, 1:187–189
Communication Currents, 1:42
Communication disorders, ethnomethodology and, 1:465–466
Communication education, 1:190–194
  debate and forensics, 1:351–354
  debriefing of participants, 1:356
  defined, 1:191
  distance learning and, 1:399–402
  film studies, 2:566–569
  instructional communication, 1:190–191, 1:192–193, 2:711–716
  methods in, 1:191–192
  overview, 1:190
  questions, method, and theory, 1:192–194
Communication Education (NCA), 1:4, 1:204, 3:1445
Communication ethics, 1:194–198
  acknowledging contribution of others, 1:8–10
  confederates and, 1:223–224
  confidentiality and anonymity of participants, 1:227–230

  contexts of, 1:196–197
  deception in research, 1:361–362
  ethics and alternative news media, 1:25
  of field notes, 2:565
  history of theories, 1:195–196
  informants/informant interviews, 2:701–702, 2:704–705
  Internet as cultural context, 2:784
  Internet research and ethical decision making, 2:787–789
  interpretive research, 2:796–797
  laboratory experiments, 2:837–838
  media ethics, 2:822
  of new media and participant observation, 3:1096
  observational research, advantages and disadvantages, 3:1115
  overview, 1:194–195
  philosophical and normative approaches, 1:197–198
  recording of public behavior, 3:1351–1354
  treatment of human subjects, 2:668–670
  using hacked data in research, 3:1143
  See also Ethics codes and guidelines; Institutional review board; Organizational ethics; Research ethics and social values; individual areas of research
Communication history, 1:199–202
  development of writing, 1:199–200
  digital age and Internet, 1:201–202
  explosion of audiovisual mass media, 1:200–201
  origins of speech and language, 1:199
  print revolution, 1:200
Communication journals, 1:202–205
  distinguishing characteristics of, 1:203–205
  overview, 1:202–203
  publication and retrieval, 1:203
  publishing process and peer review, 1:203
Communication Monographs (NCA), 1:3, 1:5, 2:855, 3:1445
Communication predicament model of aging (CPM), 2:763
Communication privacy management theory, 1:205–208
  basics of, 1:206–207
  CPM theory, overview, 1:206
  qualitative and quantitative approaches to, 1:206–207
  unique methods for practitioners, 1:207–208
Communication Quarterly (ECA), 1:3

Communication Reports (Western States Communication Association), 1:4
Communication Research, 2:863, 2:874, 3:1445
Communication Research Measures II: A Sourcebook, 4:1736
Communication Research Reports (ECA), 1:4, 1:204
Communication Review, 2:874
Communication sciences and disorders (CSD), cloze procedure in, 1:139
Communication skills, 1:208–211
  operationalization of, 1:210–211
  overview, 1:208
  questions asked by researchers, 1:209–210
  study of, 1:208–209
Communication Studies (ECA), 1:3, 1:133, 2:858, 2:994
Communication studies, field of, 1:xxxvi
Communication Teacher (NCA), 1:4
Communication theory, 1:211–214
  argumentation theory and, 1:55
  role of, in research, 1:212–213
  theory, defined, 1:211–212
  theory, research, practicality, 1:214
  theory development, 1:213
  theory-driven research, 1:213–214
  theory testing, 1:213
  See also individual areas of research
Communication Theory, 1:204, 3:1445, 4:1761
Communicative Responses to Jealousy (CRJ) scale, 2:817–819
Communist-socialist approach, to international communication, 2:773–774
Communitarian theory, 2:777
Community autoethnographies, 1:76
Community-based participatory research (CBPR), 1:449–450
Comparative fit, 4:1684
Comparative fit index (CFI), 2:507, 2:508
Compensation
  internal validity and, 2:772
  for interviews, 2:805
Compensatory rivalry, 2:772–773
Competency. See Communication competence
Complexity theory, 2:698
Complicity theory, 1:18
Computer-assisted coding of data, 1:151
Computer-assisted qualitative data analysis software, 1:214–218
  benefits of, 1:216–217
  CAQDAS, overview, 1:214

  considerations, 1:217
  data coding, 1:215
  data management, 1:215
  data metrics, 1:216
  data retrieval, 1:215
  functions of, overview, 1:215
Computer-assisted reporting, 2:822
Computer-mediated communication, 1:218–223
  CMC, overview, 1:173
  communication and human biology, 1:171
  communication competence, 1:188–189
  distance learning, 1:399
  experiments, 1:219–220
  for interviewing, 2:801
  online social networks and, 4:1642
  overview, 1:218
  parasocial communication, 3:1179–1182
  social network analysis (SNA), 1:221–222
  surveys, 1:218–219
  See also Communication and technology
Computers are social actors paradigm (CASA), 2:672, 2:673
Concept priming, implicit measures, 2:689
Concept tracking, 3:1092–1093
Confederates, 1:223–225
  benefits and risks of, 1:223–224
  deception in research, 1:361
  experimental manipulation, 1:475
  manipulation check, 2:901
  overview, 1:223
  utility in multiple disciplines, 1:224–225
Confidence interval, 1:225–227
  constructing, 1:225–226
  defined, 2:903
  in meta-analysis, 2:987–988
  overview, 1:225
  significance tests versus, 1:226–227
  using in experimental designs, 1:226
Confidentiality and anonymity of participants, 1:227–230
  anonymity, defined, 1:228
  confidentiality, defined, 1:228
  false consensus effect and, 3:1136
  overview, 1:227
  procedures, 1:228–229
  rationale to protect privacy, 1:228
  unique circumstances, 1:229–230
Confirmation, induction and, 2:696
Confirmatory, defined, 3:1151
Confirmatory factor analysis. See Factor analysis: confirmatory

Conflict, mediation, and negotiation, 1:230–233
  bootstrapping, application to mediational analysis, 1:108
  conflict and environmental communication, 1:423
  domains, 1:231–232
  environmental communication and, 1:423
  face negotiation theory, 1:310
  mediation analysis, 4:1677
  methodologies, 1:232–233
  overview, 1:230–231
  peace studies on topics of, 3:1206
  scope, 1:231
  theories, 1:232
Conflict of interest in research, 1:233–236
  academic COI, 1:234–235
  authorship bias and, 1:70
  coercion COI, 1:234
  COI, overview, 1:233
  commitment COI, 1:234
  conflict of conscience, 1:233–234
  disclosure and management plan, 1:235
  funding and financial COI, 1:234
  intellectual bias, 1:234
  international research, 1:450–451 (See also Ethical issues, international research)
  myth of objectivity, 1:235
Conjoint analysis, 1:236–239
  advantages and disadvantages, 1:238
  attributes and levels of attributes, 1:236–237
  basic steps in, 1:236
  conjoint profile cards and orthogonal design, 1:237
  methodology, 1:237
  overview, 1:236
  reliability and validity, 1:237–238
Connotative meanings, 2:970
Conquergood, Dwight, 2:851, 3:1214, 3:1215, 3:1219
Consciousness-raising (C-R) techniques, 4:1580–1581
Consent. See Informed consent
Consequential ethics, 1:195–196
Construct equivalence, 1:418–419
Consumer socialization, financial communication about, 2:570–571
Contamination effects, control groups and, 1:254
Content analysis, advantages and disadvantages, 1:239–242
  chat rooms research applications, 1:125

Index   1919 for communication and future studies, 1:168 corporate communication, 1:264–265 data, 1:239–240 dime dating, 1:387 in emergency communication, 1:411 Holsti method, 2:740–741 in intergroup communication, 2:769 political debates, 3:1274 process, 1:241–242 risk communication, 3:1516–1517 scope of, 1:239 social network analysis, 4:1634 Content analysis, definition of, 1:242–245 limitations, 1:245 overview, 1:242 process of, 1:244 uses of, 1:243–244 Content analysis, process of, 1:245–248 advantages and disadvantages, 1:241–242 benefits and drawbacks of, 1:248 blogs and research, 1:100–101 chat rooms research applications, 1:125 coding, reliability, and finalizing data, 1:247–248 coding units, coding scheme, code book, 1:246–247, 1:247 (table) for communication and future studies, 1:168 definition and, 1:244 overview, 1:244, 1:245–246 sampling and data types, 1:246 Content Analysis: An Introduction to Its Methodology (Krippendorff), 2:744, 2:747 Content validity, 4:1824 Contextual analysis, public address and, 3:1349–1350 Contextual Factors, of empathic listening, 1:414–415, 1:414 (fig.) Contingency coefficient, 1:390 Continuous partial attention, 1:385 Continuous variables, 2:950 Contrapuntal analysis, 3:1222 Contrast analysis, 1:248–252 analyzing and interpreting, 1:249–251 contrasts in research, 1:251–252 mechanics and assumptions of, 1:249 orthogonality of, 1:251 overview, 1:248–249 Control groups, 1:252–254 historical controls, 1:253 limitations of, 1:253–254 matched, 1:253 nonequivalent groups, 1:252 participant as own control, 1:253

placebo groups, 1:252–253 random assignment, 1:252 waiting list, 1:253 yoked, 1:253 Controversial experiments, 1:254–257 examples, 1:255–257 institutional review boards and ethical guidelines, 1:255 overview, 1:254–255 Convenience sampling, 4:1524–1525, 4:1536–1537 Convergent validity concurrent validity compared to, 4:1818–1819 measurement of validity, overview, 4:1832 Conversational Listening Span (CLS), 1:415 Conversational Skills Rating Scale (National Communication Association), 1:184 Conversation analysis, 1:257–261 building blocks of talk-in-interaction, 1:259–260 CA, overview, 1:257–258 in field of communication, 1:260–261 language and social interaction compared to, 2:845 qualitative interaction analysis, 2:717–718 as theory and method, 1:258–259 Cook, Thomas D., 1:372, 1:480 Cook, Thomas T., 2:770 Cooley, Charles Horton, 4:1739, 4:1740–1741 Cooper, Lane, 3:1480 Coordination tasks, in group communication, 2:634 Cooren, François, 4:1763 Copeland, Gary A., 3:1477 Copyright issues in research, 1:261–263 copyright law, 1:261–262 fair use, 1:262 platforms for publication, 1:262–263 Corbin, Juliet, 2:630 Corporate apologia, 1:291 Corporate communication, 1:263–266 content analysis, 1:264–265 crisis communication and, 1:290–292 definitions, 1:263–264 financial communication and corporate message framing, 2:570 interviews, 1:265 research methods, 1:264

social media research methods, 1:265–266 survey research, 1:265 See also Managerial communication Corporate Communications: An International Journal, 1:264 Corporate social responsibility (CSR), 1:264 Correlation, Pearson, 1:266–270 agreement coefficients, compared, 2:725, 2:728 (table) analysis of ranks, 1:31 assumptions, 1:267–268 bivariate statistics, 1:97 correlation, defined, 1:267 correlation coefficient r, 1:267 correlation computation, 1:268 Cramér’s V compared to, 1:289 Cronbach’s alpha reliability, 3:1420, 3:1421 curvilinear relationship to, 1:324 degrees of freedom, 1:368 eta squared, 1:446 Holsti’s method and, 2:743 intercoder reliability, 2:722 interpreting correlation, 1:268–270, 1:269 (fig.), 1:269 (table) limitations, 1:270 log-linear analysis, 2:886 measurement levels and, 2:947 measures of central tendency, 2:950 meta-analysis, fixed-effects analysis, 2:989 meta-analysis and statistical conversion to common metric, 2:1010 multiple regression, 3:1041 objectives, 1:267 odds ratio, 3:1123 ordinary least squares, 3:1151 overview, 1:266–267 partial correlation, 3:1184, 3:1186 physiological measurement, 3:1319 primary data analysis and, 3:1319 selection of metrics for analysis, 2:1024, 2:1025 sensitivity analysis, 4:1593 significance test, 4:1598, 4:1599 Spearman correlation and, 1:266, 1:274, 1:275 standard deviation and variance, 4:1666 surveys, 4:1830 types of relationships, 1:267 Correlation, point-biserial, 1:270–273 assumptions of, 1:271–272 computation of, 1:272 defined, 1:270 interpreting, 1:272

limitations, 1:272–273 objectives, 1:270–271 overview, 1:270 types of correlation coefficients, 1:271 (table) Correlation, Spearman, 1:273–275 analysis of ranks, 1:31 applications in communication research, 1:274 assumptions, 1:274 benefits of, 1:275 history of Spearman rank-order method of correlation, 1:273–274 Kendall’s tau and, 2:830 limitations, 1:275 measurement levels and, 2:947 overview, 1:273 Pearson correlation and, 1:266, 1:270, 1:271 rank order scales, 4:1560 selection of metrics for analysis, 2:1025 surveys, 4:1837 use of rank method, 1:274–275, 1:274 (tables) Correlation coefficient in meta-analysis, 2:985 meta-analysis and statistical conversion to common metric, 2:1009–1010 Correspondence analysis, 1:275–277 analysis of categorical data, 1:276 application to communication, 1:277 CA, overview, 1:275–276 key concepts, 1:276–277 visual representation of data, 1:276 Cosmopolitan, 2:874 Couch, Carl, 4:1739 Council for Higher Education Accreditation (CHEA), 1:182, 1:184 Counteractions, analytic induction, 1:37–39 Counterbalancing, 1:277–281 application of Latin square to, 1:280 counterbalanced designs, 1:278–280, 1:279 (tables) order and sequence benefits, 1:277–278 overview, 1:277 Counter-Statement (Burke), 1:109, 1:110, 1:111 Court, Roberts, 2:590 Covariance/variance matrix, 1:281–282 additional applications, 1:282 defined, 1:281 describing, 1:281–282, 1:281 (table) factor analysis, 2:502–503

terms associated with matrix use, 1:282 using, 1:282 variance matrix, 2:955–956, 2:956 (table) Covariate, 1:282–285 categorical predictor variables, 1:284 conceptual considerations, 1:285 continuous predictor variables, 1:284–285 defined, 1:282–283 functions of, 1:283–284 statistical considerations, 1:285 Coverage probability, 2:904–905 Covert observation, 1:285–289 investment of time, 1:287–288 key considerations, 1:287 location of, 1:286 overview, 1:285–286, 3:1117 working with institutional review boards, 1:288 Cox, Milton, 4:1570 Craig, Robert T., 4:1761, 4:1763 Cramér, Harald, 1:289–290 Cramér’s V, 1:289–290 Cramér’s phi, 1:290 measures of nominal-variable associations, 1:290 overview, 1:289 skewness, 4:1606 tests of nominal-variable associations, 1:289 Crawford, Kate, 4:1641 Crenshaw, Kimberlé, 1:303 Creswell, John W., 4:1783 Crisis communication, 1:290–293 overview, 1:290–291 qualitative crisis communication studies, 1:291–292 quantitative crisis communication studies, 1:292–293 See also Emergency communication Criterion-referenced evaluation, for cutoff scores, 1:326–327 Critical analysis, 1:294–296 communication and culture, 1:162–163 critical cultural studies, 1:295 critical discourse analysis, 2:718–719 critical qualitative research, 1:296 critical rhetoric, 1:295–296 critiquing literature sources, 2:885 freedom of expression, 2:593 game studies, 2:605 GeoMedia, 2:622 international communication, 2:775 media and technology studies, 2:958–960 overview of, 1:294–295

Critical ethnography, 1:296–299 designing and conducting, 1:297 overview, 1:296–297 political concerns, 1:297–299 Critical incident method, 1:299–302 advantages of, 1:301 disadvantages of, 1:301 ethical considerations, 1:301 overview, 1:299–300 procedures, 1:300–301 Critical Questions (Nothstine, Blair, Copeland), 3:1477 Critical race methodology, 4:1807 Critical race theory, 1:302–305 assumptions, 1:303–304 background and origins, 1:302–303 methodology, 1:304–305 Critical research stories, 1:67 Critical Studies in Mass Communication, 3:1391 Critical Studies in Media Communication (NCA), 1:4, 2:1022 Critical theory. See Critical analysis Cronbach, Lee, 3:1415 Cronbach’s Alpha. See Reliability, Cronbach’s Alpha Cronkhite, Gary, 1:41 Cross-cultural communication, 1:308–312 face negotiation theory, 1:310 high versus low context cultures, 1:310 individualistic and collectivistic cultures, 1:310 overview, 1:308–309 practices, 1:311 research, 1:309 Sapir-Whorf hypothesis, 1:310–311 Crossed factorial design. See Factor, crossed Crossed factors, nested factors versus, 2:496–497, 2:496 (fig.) Cross-lagged panel analysis, 1:312–314 comparing coefficients, 1:314 cross-lagged correlations (CLC), 1:312 directions of causality, 1:312 measurement errors, 1:313–314 omitted variables, 1:314 overview, 1:312 panel models, 1:313, 1:313 (fig.) time frame of effect, 1:314 Cross-sectional design, 1:314–317 characteristics of, 1:315–316 issues with using human respondents in, 1:316–317 overview, 1:314–315

Cross-sectional surveys, 1:410–411, 2:915 Cross-spectral analysis, 4:1768 Cross validation, 1:305–308 justification for, 1:306–307 overview, 1:305–306 theoretical model evaluation, 1:307 value of, 1:307–308 Cruz, Laura, 4:1571 Csikszentmihalyi, Mihaly, 1:471 Cues, cloze procedure for, 1:139–142 Culin, Stewart, 2:603 Cultural adjustment/adaptation. See Acculturation Cultural contracts theory, 1:18 Cultural discourse analysis, 2:846 Cultural imperialism, 2:776 Culturally and Linguistically Diverse (CLD), 1:417 Cultural sensitivity in research, 1:317–320 dimensions of, 1:317–318 heterogeneity in, 1:319–320 overview, 1:317 practicing, by researchers, 1:318–319 Cultural studies and communication, 1:320–322 international communication, 2:775 key concepts, 1:321–322 overview, 1:320 theoretical history, 1:320–321 Culture shock. See Acculturation Cupach, W. R., 1:333 (fig.) Currency, of Web resources, 1:94 Curvilinear relationship, 1:322–325 dark side of communication and, 1:333 general linear model approach, 2:868–869 overview, 1:322–323 statistical analysis of, 1:324–325 types of relationships, 1:323–324, 1:323 (fig.), 1:324 (fig.) Customer satisfaction, 1:113–114 Cutoff scores, 1:325–328 implications of, 1:327–328 limitations of, 1:327 overview, 1:325–326 uses of, 1:326–327 Cyberactivism, 2:628 Cyberbullying, 2:628–629 Cyberchondria, 1:328–330 communication research and, 1:329–330 hypochondria versus, 1:328 overview, 1:328 social implications of, 1:329 Cybernetic tradition, 4:1762 Cyworld (South Korea), 4:1640

Daily Courant, 2:820 Daly, Mary, 2:554 Dance Your Ph.D., 3:1216 Dark side of communication, 1:331–335 axiological approach, 1:332 implications, 1:334 methodological approach, 1:334 nominal approaches, 1:331–332 overview, 1:331 sociological approach, 1:332 Spitzberg and Cupach typology of, 1:333 (fig.) theoretical approach, 1:332–334 Darling, Ann, 4:1568, 4:1570 Darwin, Charles, 1:163, 2:490, 3:1248 Darwin’s Black Box (Behe), 2:698 Data, 1:335–337 archival analysis, 1:44–46 archiving data, 1:50–52 CAQDAS, 1:214–217 case study data analysis, 1:118 confirmatory factor analysis and data collection, 2:506 content analysis of, 1:239–240, 1:246 (See also Content analysis, process of) conversation analysis, 1:257–261 corporate communication campaigns, 1:265–266 correspondence analysis, 1:275–277 cross-sectional design, 1:314–317 cross validation, 1:305–308 emergency communication data-gathering interviews, 1:410 ethnography and data analysis, 1:460 in fantasy theme analysis, 2:550–551 fraudulent and misleading, 2:586–588 grounded theory data collection and analysis, 2:630 interviews for data gathering, 2:803–807 naturalistic observation data collection, 3:1083–1084 online data topics, 3:1137–1143 overview, 1:335 planning research project, 3:1448 political debates and data analytics, 3:1275–1276 primary and secondary, 1:336–338 ranking, with Kendall’s tau, 2:829–922 sources of, 1:335–336 spam as, 4:1653 survey response rates, 4:1728–1730 See also individual names of statistics techniques; individual statistical methods

Data Archive (UK), 1:51 Databases, academic, 1:348–351 bibliographic, 1:93 classification of, 1:348–350 library research, 2:861–862 overview, 1:348 terminology and vocabulary, 1:350 See also Literature review, the Data breach (online), problems of, 3:1141–1143 Data cleaning, 1:337–339 data cleansing process, 1:337–338 data validation versus, 1:338–339 overview, 1:337 Data mining cluster analysis and, 1:144 distance learning and, 1:402 Data reduction, 1:339–341 communication and, 1:339–340 strategies, 1:340–341 Data security, 1:341–344 archiving data, 1:51 legal background, 1:342 overview, 1:341–342 practical guidelines for, 1:342–344 Data transformation, 1:344–346 dichotomizing, 1:345–346 inverse scoring, 1:345 log transformation, 1:346 transformation into standard scores, 1:344–345 Data trimming, 1:346–348 alternative terminology, 1:347 common statistical functions requiring, 1:347 defined, 1:346–347 examples, 1:347–348 overview, 1:346 Winsorizing compared to, 1:347 Date Night (film), 3:1410–1411 Dating. See Dime dating David, Herbert A., 2:932 Da Vinci, Leonardo, 3:1517 Davis, Dennis, 2:649 Davis, Fred D., 2:963 Daw, Walid, 2:908 Day, Angel, 3:1508 Dayan, Daniel, 1:377 Deacon, Terrence, 2:699 Debate and forensics, 1:351–355 areas of research in, 1:352–353 common methods used in research of, 1:353–354 overview, 1:351–352 De Beauvoir, Simone, 3:1229 Debriefing of participants, 1:355–359 debates about, 1:358–359 debriefing in multiple contexts, 1:357–358

debriefing process, 1:355–357 functions of debriefing, 1:357 Deception in research, 1:359–363 confederates and, 1:223–225, 1:361 debriefing of participants and, 1:356 defined, 1:359 ethical considerations, 1:361–362 example, 1:362 facial features in communication of deception, 3:1107–1108 reasons to employ deception, 1:359–360 using, 1:360–361 See also Confederates; Dark side of communication Decision making, organizational identification and, 3:1164 Declamation (debate and forensics), 1:351 Declaration of Helsinki, 1:448–449, 1:453, 2:708 “Declaration of Sentiments” (Stanton), 2:573 Decomposing sums of squares, 1:363–366 in advanced models, 1:365–366 significance test, 1:364–365 sum of squares (SS), defined, 1:363 between sum of squares (BSS), 1:363 within sum of squares (WSS), 1:363–364 total sum of squares, 1:363 variance estimates, 1:364 Deconstruction critical ethnography and, 1:298 phenomenology as deconstruction, 3:1229 Decyk, Betsy Newell, 4:1570 Dedmon, Donald N., 1:89 Deductive hypotheses, 2:676 Deductive reasoning, 3:1481 Deep structure, 1:318 Degree centrality, 1:222 Degrees of freedom, 1:366–368 chi-square test of independence, 1:368 one-sample t-test, 1:367 one-way ANOVA, 1:368, 3:1129 overidentified model, 3:1172–1173 overview, 1:366–367 path analysis, 3:1196 Pearson correlation, 1:368 two independent sample t-test, 1:367–368 De Lauretis, Teresa, 3:1390 Delayed measurement, 1:368–370 difficulties and considerations, 1:369–370 measuring using multiple delayed evaluations, 1:369

overview, 1:368 reasons for, 1:368–369 Deleuze, Gilles, 2:698 Delgado, Fernando, 2:852 Delgado, Richard, 1:303 Deloria, Vine, Jr., 3:1079 Delphi method, 1:168 DeLuca, Kevin, 3:1505 Demand characteristics, 1:370–373 from participants, 1:372 reducing, 1:372–373 in research, 1:371 from researchers, 1:371–372 Demo, Anne, 3:1505 Dendrogram, 1:142, 1:143, 1:143 (fig.) Denotation versus connotation, 4:1588 Denotative meanings, 2:970 Denzin, Norman, 4:1782, 4:1783 Deontological ethics, 1:195 Dependency theory, 2:775–776, 2:777 Depression, Facial Action Coding System and, 2:490 Derrida, Jacques, 3:1217, 3:1229, 3:1232, 3:1294, 3:1510 DerSimonian–Laird estimator, 2:1002 DeSanctis, Gerardine, 1:176 Descartes, René, 4:1739 Description, in field notes, 2:565 Descriptive discriminant analysis, 1:395 Descriptive statistics defined, 3:1318 importance of, 4:1605 See also Simple descriptive statistics Design and Analysis: A Researcher’s Handbook (Keppel, Wickens), 1:366 Desire, rhetoric and, 3:1347 De Sola Pool, Ithiel, 2:773, 2:774 Deterministic processes, 4:1767 De Vaucanson, Jacques, 3:1517 Development media theory, 2:774 Development of communication in children, 1:373–376 commonalities and variations, 1:376 definition of language, 1:373–374 early vocabulary acquisition, 1:374 sequence of, 1:374–376 theoretical accounts, 1:374 Deviation, simple descriptive statistics, 4:1604 Dewey, John, 3:1232, 3:1489, 3:1490, 4:1739, 4:1747 Dialectic communication and culture approaches, 1:161 dialectical tensions, 3:1409 overview, 1:52–53 pragma-dialectics, 1:54

Dialog. See Interaction analysis, qualitative; Interaction analysis, quantitative Dialogic Education (Arnett), 1:192 Dialogic performance, 3:1215 Dialogues on Eloquence (Fénelon), 3:1508 Diana, Princess of Wales, 3:1292 Diaries. See Journals Diary studies, imagined interactions, 2:686 Diaspora, 1:376–380 conceptualizing, 1:376–377 developments in media and diaspora research, 1:377–378 ethical considerations, 1:379–380 operationalizing, 1:378–379 overview, 1:376 Dichotomization of continuous variable. See Errors of measurement: dichotomization of a continuous variable Differences: A Journal of Feminist Cultural Studies, 3:1390 Difference score, 2:692 Diffusion, 2:772 Diffusion of Innovations (Rogers), 2:960 Digital games, game studies scholarship and, 2:604–605 Digital Games Research Association, 2:604 Digital media and race, 1:380–383 inequality and, 1:382 networks, diasporas, and markets (2000-2006), 1:381–382 origins and intellectual foundations, 1:380–381 overview, 1:380 Digital natives, 1:383–386 attention and focus, 1:384 criticism, 1:385 in education system, 1:383 overview, 1:383 social and political engagement, 1:384–385 Dillard, James, 3:1225 Dilthey, Wilhelm, 2:648 Dime dating, 1:386–387 content analysis, 1:387 overview, 1:386 research application, 1:387 traditional dating versus, 1:386–387 Dimension selection, 2:515 “Dimensions of Temporality in Lincoln’s Second Inaugural” (Leff), 3:1505 Directional hypotheses, 2:675–676

Direct measures, for communication assessment, 1:184 Directory of Open Access Journals, 3:1202 Disability and communication, 1:387–391 communication research areas combined with, 1:390 within communication studies, 1:388–389 overview, 1:387–388 research paradigms and, 1:389–390 Disability Issues Caucus (National Communication Association), 1:387–388 Disagreement, 1:53 Discourse analysis, 1:391–394 action-implicative discourse analysis, 2:845 blogs and research, 1:101 commonalities, 1:391–392 contrapuntal analysis, 3:1222 contribution to communication research methods, 1:393–394 critical analysis, 1:294–296 differences between discourse analytic approaches, 1:392–393 overview, 1:391 qualitative interaction analysis, 2:717–718 See also Politeness theory; Rhetoric Discourse Analysis and Social Psychology (Potter, Wetherell), 1:391 Discourse and Social Change (Fairclough), 1:391 Discourse of renewal, 1:292 Discriminant analysis, 1:394–399 analysis, overview, 1:396–397 assumptions, 1:396 canonical analysis, 1:397 discriminant function analysis, defined, 1:394 overview, 1:394–396 testing and interpretation of, 1:398 types of, 1:397–398 Discriminant validity, 4:1832 Discussion panel, 1:22 Discussion section. See Writing a discussion section Dispersion matrix, 1:282 Distance learning, 1:399–402 comparison studies, 1:400 data mining, 1:402 focus on process, 1:401–402 massive open online courses, 2:920–922 overview, 1:399–400 student outcomes, 1:400–401 types of research, overview, 1:400

Divisive clustering, 1:142–143, 1:143 (fig.) Doctor-patient relationships, 2:644 Documentary films, 2:781 Documents ethnography and, 1:459–460 foundation and government research collections, 2:581–583 See also Artifact selection Domain selection, 2:515 Domain-specific academic databases, 1:348–349 Domenico, Mary E., 2:554 Donath, Judith, 4:1641–1642 Donovan, Josephine, 2:554 Dorfman, Ariel, 2:776 Dorgan, Kelly, 2:554 Dorsolateral region of prefrontal cortex (DLPFC), 1:171 Douglas, Alexander “Sandy,” 4:1857 Dow, Bonnie, 3:1495, 4:1583 Downward comparison, 1:102 Dragon NaturallySpeaking, 2:803 Dramatistic pentad, 3:1210, 3:1211 Dramatism. See Rhetorical and dramatism analysis Dramaturgical self, 3:1493 Dramaturgy, 4:1741–1742 Draughts, 4:1857 Dreams (film), 2:780 Dred Scott decision, 3:1293 Dropbox, 3:1141 Duchenne smiles, 2:490 DuMont Laboratories, 4:1857 Duncan, David B., 3:1296 Duncan test. See Post hoc tests: Duncan multiple range test Duniway, Abigail Scott, 2:665 Durbin–Watson test, 3:1044 Durkheim, Émile, 4:1627 Dusen, Raymond Van, 1:88–89 Dyadic data, in personal relationship studies, 3:1223 Dynamic constructivism approach, to communication and culture, 1:161 Earth First, 1:58 Eastern Communication Association (ECA) academic journal structure, 1:3 alternative conference presentation formats, 1:21 applied communication, 1:41 on ethics, 1:10 feminist analysis, 2:555 field experiments, 2:562 limitations of research and, 2:863 Eating disorders, 1:101–104

EBSCO (Elton Bryson Stephens Company), 3:1368 Eco, Umberto, 4:1743 Ecological fallacy, 2:657 Ecological risk and protective theory (ERPT), 1:103–104 Ecological validity, 1:481–482 Economic and Social Research Council (UK), 1:50 Economic issues. See Ethnicity; Health care disparities; Race Economic metaphors, Marxist analysis of, 2:912 E-Discovery, 4:1634 Edison, Thomas, 1:201 Editable websites, 2:787 Edited publications, peer-reviewed versus, 3:1371 Educational technology, 1:403–406 debate about, 1:404–405 defined, 1:403–404 learning about, 1:405–406 overview, 1:403 See also Pedagogy Education of the Orator (Quintilian), 3:1507 Edwards, Janis L., 2:683 Edwards, Jonathan, 2:664 Effects, in political communication, 3:1268 Effect sizes, 1:406–408 calculating, 1:407–408 as central concept of meta-analysis, 2:1008–1009 converting one effect size into another, 2:1010–1011 eta squared as, 1:447 logic of hypothesis testing, 2:679–680 overview, 1:406 small, medium, large, 1:408, 1:408 (table) statistical power analysis and, 4:1676 value of, 1:406–407 See also individual meta-analysis topics Eicker standard error, heteroskedasticity, 2:656 Eisenhauer, Joseph G., 1:366 Eisenstein, Elizabeth, 1:200 Eisenstein, Sergei, 2:567 Ekman, Paul, 2:487, 2:489, 2:490 Elaboration Likelihood Model (ELM), 1:214 Elaboration of latent consequences strategy, 3:1342 Electricity, history of, 1:201 Electrodermal activity (EDA). See Physiological measurement: skin conductance

Electroencephalogram (EEG), 1:172 Electronic Arts, 4:1858 Electronic sources, citing with Chicago style, 1:128 Elevator experiment (Garfinkeling example), 2:610 Elfering, Achim, 3:1240 Elia, John, 3:1391 Eliot, T. S., 1:110 Ellison, Nicole, 1:218–219, 4:1641, 4:1642 El Movimiento (Hammerback, Jensen), 2:851 Elsevier, 1:349 Elving, Wim, 1:264 Emergency communication, 1:408–412 after-action reports, 1:410 content analysis, 1:411 cross-sectional surveys, 1:410–411 data-gathering interviews, 1:410 literature reviews and bibliographies, 1:412 message development and evaluation, 1:409–410 network analysis, 1:411 overview, 1:408, 1:409 qualitative research, 1:411–412 quasi-experimental design, 1:411 Emmons, Sally L.A., 3:1080 Emotion Facial Action Coding System and expression of, 2:489, 2:490 facial features in communication of, 3:1107–1108 Empathic listening, 1:412–417 for business communication, 1:113 defining, 1:413 measurement of, 1:415–416 model for, 1:413–415, 1:414 (fig.) overview, 1:412–413 research directions, 1:416

English as a Second Language, 1:417–421 communication apprehension and, 1:179 cultural considerations, 1:418 equivalence issues of, 1:418–419 linguistic considerations, 1:417–418 overview, 1:417 qualitative research issues, 1:420 quantitative research issues, 1:418–419 related terms, 1:417 English Secretorie (Day), 3:1508 Enjoyment, rhetoric and, 3:1347 Enquiry Concerning Human Understanding (Hume), 2:695 Enron, 2:570, 3:1039, 4:1634 Entertainment communication, political communication and, 3:1270 Entman, Robert, 2:584, 2:585, 2:821 Environmental communication, 1:421–425 alternative forms and partnerships for, 1:424–425 animal studies, 1:424 attitude studies, 1:423 conducting, 1:424 media, 1:422 negotiation and conflict, 1:423–424 overview, 1:421–422 public participation, 1:423 risk communication, 1:423 science communication, 1:423 sense of place, 1:422–423 social movements, 1:422 Environmental manipulation, 2:901 Environmental movement, rhetorical method and, 3:1505 Environmental scanning, 1:168 Environment and artifacts code, 3:1099 Epistemology, feminist, 2:558–559 Equal Credit Opportunities Act (ECOA) (1974), 4:1582 Equal Pay Act (1963), 4:1580 Equal Rights Amendment (ERA), 4:1582 Equifinality, 1:333 Errors of measurement, 1:427–429 impact of, considering statistical applications, 1:429 overview, 1:427 random error and impact, 1:427–428 systematic error and impact, 1:428–429 Errors of measurement: attenuation, 1:429–433 defining attenuation and correction, 1:430–431, 1:430 (fig.) overview, 1:429–430

Errors of measurement: ceiling and floor effects, 1:433–435 causes, 1:433–434 definitions, 1:433 overview, 1:433 solutions, 1:434 Errors of measurement: dichotomization of a continuous variable, 1:435–437 costs of dichotomizing a continuous variable, 1:436–437 data transformation, 1:345–346 justification for dichotomizing a continuous variable, 1:435–436 variable, defined, 1:435 Errors of measurement: range restriction, 1:442–446 basis for measurement error, 1:443–444 impact of research artifacts, 1:445 overview, 1:443 range restriction, defined, 1:444–445 range restriction ratio, 1:445 Errors of measurement: regression toward the mean, 1:437–442 comparing differences from baseline (example), 1:440–441 internal validity, 2:772 one-group pretest-posttest design, 3:1125 overview, 1:437 regression toward mean as frequent phenomenon, 1:442 regression toward the mean process, 1:437–439, 1:437 (fig.), 1:438 (fig.), 1:439 (fig.) relating change to initial value, 1:441–442, 1:441 (table) treatment to reduce high levels of measurement (example), 1:439–440, 1:440 (fig.) Error term, 1:425–427 assumptions, 1:426–427 defined, 1:425–426 overview, 1:425 ESPN, 3:1039 Essays on Moral Development (Kohlberg), 3:1212 Eta squared, 1:446–448 interpreting and calculating, 1:446–447 limitation of, as an effect size indicator, 1:447 overview, 1:446 partial, 1:447 Ethical Decision-Making and Internet Research 2.0: Recommendations from the AoIR Ethics Working Committee, 1:64

Ethical issues, international research, 1:448–451 conflicts of interest, 1:450–451 cultural awareness, 1:449–450 Nuremberg Code and Declaration of Helsinki, 1:448–449 overview, 1:448 research process and, 1:450 Ethics codes and guidelines, 1:451–454 acknowledging contribution of others, 1:8–10 anonymous source of data, 1:39–40 Association of Internet Researchers, ethics committee, 1:64 authoring: telling a research story, 1:68 communication and technology research, 1:177 ethical guidelines, overview, 1:451 Federal Guidelines for the Ethical Conduct of Human Subjects Research (U.S. Department of Health and Human Services), 1:453–454 NCA Credo for Ethical Communication (National Communication Association), 1:452–453 professional codes of ethics, 1:451–452 SPJ Code of Ethics (Society for Professional Journalists), 1:453 Ethnicity communication and culture studies, 1:161–163 critical race theory, 1:303 cultural sensitivity in research, 1:317–320 diaspora and, 1:376–380 health care disparities, 2:639–642 massive multiplayer online games and, 2:919 underrepresented groups, 4:1807–1809 See also Latina/o communication studies; Passing theory Ethnographic interview, 1:454–457 goals and limitations, 1:455–456 interview settings, 2:799–800 origins and applications, 1:454–455 overview, 1:454 rewards and challenges of, 1:456 unique features of, 1:455 Ethnography, 1:457–461 autoethnography, 1:73–76 critical ethnography, 1:296–299 data analysis, 1:460 documents and artifacts, 1:459–460 family communication research, 2:547

field notes, 1:459, 2:564 group representation by, 1:460–461 interviews, 1:454–457, 1:459 meta-ethnography, 2:1012–1017 online ethnography, 1:220–221 overview, 1:457 performance research/performance studies and, 3:1213, 3:1217–1218 research decisions, 1:457–458 research process, 1:458–459 underrepresented groups, 4:1808 verification, 1:461 Ethnomethodology, 1:461–466 communication as accountable, indexical, reflexive, 1:463–464 communication disorders and, 1:465–466 EM, overview, 1:461–462 Garfinkeling, 2:608–610 meta-ethnography, 2:1012–1017 research process, 1:464–465 study of ordinary skills, 1:462–463 symbolic interactionism, 4:1742 Eunoia, 3:1479 Euromines, 4:1578 EuropaCorp, 2:781 European Communication Research and Education Association (ECREA), 3:1331 European Journal of Communication, 1:3 Evans-Pritchard, E. E., 3:1116 EverQuest (video game), 4:1858 Evidence-based policy-making, 1:466–468 barriers to, 1:468 benefits of evidence-based analysis, 1:468 characteristics of evidence, 1:467 overview, 1:466–467 policy cycle, 1:467 role of theory in, 1:467–468 Evil, phenomenology of, 3:1229 Evolution. See Communication and evolution Evolutionary commonalities, 2:511 Evolutionary correlations, 2:511 Evolutionary factor analysis. See Factor analysis: evolutionary Executive summary, 1:2 Existential and hermeneutic phenomenology, 3:1228 Exit questions, for focus groups, 2:579 Expatriates, acculturation and, 1:7 Experience sampling method, 1:471–474 ESM, overview, 1:471

history and evolution of, 1:471–472 limitations, 1:473 phenomenological perspective, 1:472–473 Experimental manipulation, 1:474–477 experimenter effects, 1:476 Hawthorne effects, 1:476–477 manipulating independent variables, 1:475 manipulation checks, 1:475–476 overview, 1:474 threshold effects, 1:476 types of independent variables, 1:474–475 Experimental mortality, 1:369–370 Experiments and experimental design, 1:477–480 overview, 1:477 pre-experimental design, 1:477–478 true experimental design, 1:478–479 Explicit memory tests, 4:1620–1621 Exploration questions, for focus groups, 2:579 Exploratory factor analysis. See Factor analysis: exploratory Ex post facto designs, 1:468–471 experimental research, 1:469 field research, 1:469 limitations of design, 1:469–470 overview, 1:468–469 Expression of the Emotions in Man and Animals, The (Darwin), 2:490 Extemporaneous commentary (debate and forensics), 1:351 External validity, 1:480–483 balancing with internal validity, 1:482–483 ecological validity, 1:481–482 overview, 1:480 replication, 1:482 reporting, 1:482 sampling, 1:480–481 See also individual types of test designs Extraneous variables, control of, 1:483–486 integral variables distinguished from, 1:485–486 overview, 1:483–484 situational control, 1:484 statistical control, 1:485 treatment control, 1:484–485 Eye constriction, in smiling, 2:490 Eye contact dark side of communication and, 1:333 in public speaking, 1:209 Eye tracking, 3:1237

Fabrigar, Leandre R., 2:504 Facebook agenda setting and, 1:20 blogs and research, 1:100 communication and technology, 1:176 computer-mediated communication and surveys, 1:218–219, 1:220 corporate communication and social media research methods, 1:265 covert observation with, 1:286 Facial Action Coding System, 2:489 factor analysis: rotated matrix, 2:530, 2:531 GLBT people and social media, 2:627, 2:628, 2:629 homogeneity of variance, 2:666 Internet as cultural context, 2:784 Internet research and ethical decision making, 2:787, 2:789 for Internet sampling, 4:1530 measurement, 2:950 media and technology studies, 2:958 media literacy, 2:972 multiplatform journalism, 3:1038 new media analysis, 3:1091, 3:1092 online data, 3:1138 online data documentation, 3:1140 online social networks and, 4:1640, 4:1641 as online social world, 3:1148 political debates, 3:1275 social media issues, 4:1630 social network systems, 4:1637, 4:1638, 4:1639 sources for research ideas, 3:1446 surveys, 4:1725, 4:1726 terrorism and, 4:1750 wartime communication, 4:1883

  defining, 2:487
  emotional expression, 2:489
  eye constriction and, 2:490
  in infants, 2:489
  overview, 2:487
  process of, 2:487–488
  reliability of, 2:488–489
Facilitators, for focus groups, 2:580
Fact, Fiction, and Forecast (Goodman), 2:695
Factor, crossed, 2:492–494
  crossed designs, 2:493
  defined, 2:492
  factorial designs, overview, 2:492–493
Factor, fixed, 2:494–496
  analysis, 2:495
  fixed versus random, 2:494–495
  in nonexperimental research, 2:495
  overview, 2:494
Factor, nested, 2:496–499
  advantages of, 2:499
  common mishaps of, 2:498–499
  hierarchically nested design, 2:498
  nested/non-nested models, structural equation modeling and, 4:1684–1685
  notation, 2:497
  overview of crossed versus nested factors, 2:496–497, 2:496 (fig.)
  split-plot design, 2:498
  two-factor design, 2:497–498
Factor, random, 2:499–502
  analysis, 2:501
  debate about, 2:500–501
  in nonexperimental research, 2:501
  overview, 2:499–500
  random factor versus fixed factor, 2:494–495
  when to consider factors as random, 2:500
Factor analysis, 2:502–505
  factor models, 2:502–503
  factors, defined, 2:502
  future challenges, 2:504–505
  methods, 2:504
  overview, 2:502
  principal components analysis, 2:503
  rotation, 2:504
Factor analysis: confirmatory, 2:505–509
  application, 2:508–509
  confirmatory, defined, 3:1151
  confirmatory factor analysis (CFA), 4:1682, 4:1685–1686
  defined, 2:505–506
  exploratory factor analysis (EFA) versus, 2:524–525
  levels of analysis, 2:506–508

  limitations, 2:509
  overview, 2:505
Factor analysis: evolutionary, 2:510–514
  challenges, 2:513
  framework for, 2:514
  logic behind, 2:511–513
  stationary approach in, 2:510
Factor analysis: exploratory, 2:514–519
  confirmatory factor analysis (CFA) versus, 2:524–525
  factor loadings, 2:517
  factor retention, 2:517–518
  factor rotation, 2:518–519
  goals of, 2:515
  item retention criteria, 2:516
  item selection, 2:516
  principal components analysis compared to, 2:515
  rotated matrix, 2:526–531
  sample size in, 2:515–516
  statistical evaluation of items, 2:517
Factor analysis: internal consistency, 2:519–522
  example of application of formulas test, 2:520–521, 2:521 (table)
  reliability of measurement, 3:1427–1428
  structural equation modeling (SEM) for, 2:519–521
  surveys, negative wording questions, 4:1714
  test method, 2:519–520, 2:519 (fig.)
Factor analysis: oblique rotation, 2:524–526
  confirmatory factor analysis (CFA) compared to exploratory factor analysis (EFA), 2:524–525
  factor analysis, defined, 2:524
  oblique rotation, overview, 2:529–531, 2:530 (table)
  orthogonal versus oblique analysis, 2:525
  understanding relationships among factors, 2:525–526
Factor analysis: parallelism test, 2:522–524
  overview, 2:522
  parallelism, defined, 2:522
  relationship between items on separate factors, 2:523 (fig.)
  steps of, 2:522–523
Factor analysis: rotated matrix, 2:526–531
  analytic rotation methods, 2:528
  conceptual issues, 2:527–528
  example, 2:529–531, 2:530 (table)
  graphical rotation, 2:528

  oblique rotation, 2:529
  orthogonal rotation, 2:528–529
  overview, 2:526–527
Factor analysis: varimax rotation, 2:531–533
  application, 2:532
  defining, 2:504, 2:531–532
  limitations, 2:533
  orthogonal rotation, overview, 2:528–529
  statistical process, 2:532
Factorial analysis of variance, 2:533–536
  concepts, 2:533–534
  defined, 2:533
  2 x 2 example, 2:534–536, 2:535 (tables), 2:536 (tables)
Factorial designs, 2:536–539
  advantages of, 2:537
  alternatives to full factorial design, 2:539
  design considerations, 2:538–539
  factorial ANOVA designs, 1:35–36
  notation, 2:537
  overview, 2:536–537
  results, 2:537–538
Factor loadings, 2:510, 2:517
Factors, defined, 2:533–534
Fader, P.S., 3:1040
Fairclough, Norman, 1:294–295, 1:391, 2:718
Fairhurst, Gail, 2:856
Fair use, 1:262
Faithful subject role, 1:372
“Fake news,” 3:1270
Fallacy theories, 1:54, 1:122, 2:657–658
False consensus effect, 3:1136
False feedback, deception and, 1:361
False negative, 2:539–542
  example, 2:541
  minimizing Type II error, 2:540–541
  overview, 2:539–540
  statistical power and Type II error, 2:540
  See also Type II error
False positive, 2:542–544
  controlling rate of Type I errors, 2:543
  examples of, 2:543–544
  logic of research methods, 2:542
  overview, 2:542
  Type I (alpha) error rate, 2:542–543
  See also Type I error
Falsifiability, 3:1435, 4:1752
Family communication, 2:544–548
  communication and evolution, 1:164–165
  critical approaches to study of, 2:547–548

  diaspora and intersecting identities, 1:378–379
  disability and communication, 1:390
  empirical approaches to study of, 2:546–547
  gender and communication, 2:613
  generalization in parent-child communication dyads, 2:620
  health communication and families, 2:644
  history of research in, 2:544–545
  importance of, 2:548
  intergenerational communication, 2:761–766
  interpretive approaches to study of, 2:547
  research paradigms, 2:545–546
Family Medical Leave Act, 2:614
Fan, Weimiao, 4:1732
Fanaticism, 3:1431–1432
Fantasy theme analysis, 2:548–552
  analyzing coded data, 2:551
  application, 2:550
  basic concepts and vocabulary, 2:548–549
  coding to collect data, 2:550–551
  development of, 2:549
  formulating research questions, 2:551
  overview, 2:548
  relationship to communication theory, 2:549
  types of messages, 2:549–550
Farber, Daniel, 1:304
Farrell, Thomas, 3:1489
Fatigue effect, 4:1718
Fear of communicating. See Communication apprehension
Federal Communications Commission, 2:590, 3:1038, 3:1167
Federal Guidelines for the Ethical Conduct of Human Subjects Research (U.S. Department of Health and Human Services), 1:453–454
Federal Highway Administration, 2:822
Federal Policy for the Protection of Human Subjects, 1:342, 2:709
Federal Wide Assurance (FWA), 2:709
Federation of American Scientists, 4:1857
Feinstein, Alvan R., 2:727
Feit, Eleanor McDonnell, 3:1040
Felten, Peter, 4:1569
Feminine Mystique, The (Friedan), 2:611, 4:1580
Feminism. See First wave feminism; Second wave feminism; Third-wave feminism

Feminist analysis
  Black Feminist Movement and Womanism, 1:17–18
  first wave feminism, 2:572–575
  gender and communication studies on waves of feminism, 2:611
  historical analysis and, 2:665
  methods, 2:554
  overview, 2:552
  public address and, 3:1350–1351
  publication outlets, 2:554–555
  queer theory and discourse of, 3:1395
  as theoretical tradition, 4:1763
  topics of, 2:552–554
  trends, 2:555
  underrepresented groups, 4:1808
  See also Pornography and research
Feminist communication studies, 2:557–560
  analysis and presentation of findings, 2:560
  on empowerment, 2:558
  feminist epistemology, 2:558–559
  on gender, 2:557
  on intersectionality, 2:558
  method, 2:559–560
  overview, 2:557
  on reflexivity, 2:558
Fénelon, François, 3:1508
Ferranti, 4:1857
Ferraro, Geraldine, 3:1275
Ferris, Jim, 1:388
Festinger, Leon, 1:102
Feyerabend, Paul, 3:1489
Fidell, Linda, 2:515, 2:653
Field experiments, 2:560–563
  benefits and drawbacks of, 2:561–562
  elements of, 2:561
  overview, 2:560–561
  Robbers Cave Experiment, 2:562–563
Field notes, 2:563–566
  ethical considerations, 2:565
  ethnography, 1:459
  ethnography and participant observation, 2:564
  journals, 2:824–827
  overview, 2:563–564
  writing, 2:564–565
“File drawer effect,” 2:996
Film studies, 2:566–569
  cinematic types, 2:567–568
  history of, 2:566–567
  history of film studies programs, 2:568–569
  international film, 2:779–783
  overview, 2:566

Financial communication, 2:569–572
  consumer socialization, 2:570–571
  corporate message framing, 2:570
  financial literacy, 2:571
  future research, 2:571–572
  methods, 2:571
  overview, 2:569–570
Financial Times Historical Archive, 4:1577
Fincher, David, 2:781
Fine, Mark A., 1:72
Finnegan, Cara, 1:45
First Amendment rights, 2:589–593
First-level agenda framing, 1:20
“First-person-perspective” communication activism social justice research, 1:12
First wave feminism, 2:572–575
  gender and communication studies on waves of feminism, 2:611
  key figures of, 2:573–574
  key issues of, 2:574
  legacy of, 2:574
  overview, 2:572–573
Fisher, B. A., 3:1156
Fisher, Ronald Aylmer, 1:395, 2:666, 2:807, 3:1104, 3:1299
Fisher, Walter, 1:65, 2:575–577
Fisher narrative paradigm, 2:575–578
  criticism and revision, 2:577
  narrative as argument and aesthetic combination, 2:576–577
  overview, 2:575
  rational-world paradigm versus narrative paradigm, 2:575–576
  rhetorical text, 2:577
Fisher’s z, 2:834, 2:839, 2:989
Fiske, John, 1:57, 4:1647
Fit
  path analysis, 3:1196
  structural equation modeling and, 4:1683–1684
Fitch, Kristine, 1:162
Fitts’s Law, 2:672
Fitzgerald, Richard, 1:121
Fixed coding. See Coding, fixed
Fixed effect analysis. See Meta-analysis
Fixed factors. See Factor, fixed
Flaherty, Robert J., 2:567
Flanagan, John, 1:300
Fleiss, Joseph L., 1:242, 2:727, 2:930–931
Fleiss system. See Intercoder reliability techniques: Fleiss system
Flesch Reading Ease test, 3:1225
Flexible coding. See Coding, flexible
Floor effect. See Errors of measurement
Flowers of Rhetoric (Alberic), 3:1507
Floyd, Kory, 1:175, 1:324

Focus groups, 2:578–581
  conducting, 2:579–580
  description and goals of, 2:578
  design considerations for, 2:578–579
  interview settings, 2:799
  mass communication research, 2:916
  media and technology studies, 2:959
  overview, 2:578
  validity concerns of, 2:580–581
Footnote chasing, with bibliographic resources, 1:94
Forbes, 2:874
Ford, Gerald, 3:1271
Ford, Leigh Arden, 2:550
Ford Foundation, 1:404, 2:601
Foreign Service Institute (FSI), 2:756
Forensics. See Debate and forensics
Formal logic, 1:53
Formation of Sacred Sermons (Hyperius), 3:1508
Forward stepwise discriminant function analysis, 1:397
Foss, Karen, 2:550, 2:553, 2:554, 2:790
Foss, Katherine, 4:1756
Foss, Sonja K.
  on artifact selection, 1:56
  feminist analysis, 2:552, 2:553, 2:554
  rhetorical genre, 3:1497
  rhetoric as epistemic, 3:1490
  second wave feminism and, 4:1583
Foucault, Michel
  critical analysis and, 1:295
  discourse analysis, 1:391
  discourse analysis and, 1:392
  philosophy of communication, 3:1232
  postmodern research stories, 1:67
  qualitative interaction analysis, 2:718
  rhetorical theory, 3:1510
  rhetoric as epistemic, 3:1488, 3:1490
  visual communication studies, 4:1862
Foundational reviews of literature. See Literature reviews, foundational
Foundation and government research collections, 2:581–583
  archives and access, 2:583
  charitable foundation research collections, 2:581–582
  government research collections, 2:582–583
  overview, 2:581
Fourier frequencies, 4:1768
Four Theories of the Press, The (Siebert, Peterson, Schramm), 2:773
Fox de Cardona, Elizabeth, 2:776
Fox News, 3:1037
Fragility of Things, The (Connolly), 2:698
Frame analysis, 2:583–586
  frame, defined, 2:583

  health communication message framing, 2:642
  key terms, 2:583–584
  methodological issues in, 2:584–585
  war on terror (case study), 2:585–586
Framing, in political communication, 3:1268
Frandsen, Finn, 1:292
Frandsen, Kenneth D., 1:89
Frank, Thomas, 2:697–698
Frankfurt Institute of Social Research, 1:321
Fraudulent and misleading data, 2:586–588
  consequences of using, 2:587–588
  minimizing use of, 2:588
  overview, 2:586
  use of, 2:586–587
Fraustino, Julia, 4:1665
Freedman, Diane P., 3:1391
Freedom of expression, 2:589–593
  cases and issues of, 2:589
  case study, 2:591–592
  channels and contexts for, 2:590
  constitutional provisions, 2:589–590
  courts or individual justices on, 2:590
  critical analysis, 2:592
  diverse opportunities for scholars in research of, 2:593
  historical analysis, 2:592–593
  international issues of, 2:590–591
  legal analysis, 2:591
  overview, 2:589
  qualitative and quantitative analysis, 2:593
  rhetorical analysis, 2:592
Frequency distributions, 2:594–599
  central tendencies and variance, 2:597
  frequency and relative frequency distribution, 2:594
  frequency distribution tables, 2:594–595, 2:594–595 (tables)
  images to depict statistics, 2:597–599, 2:598 (fig.)
  visual depictions of, 2:595–597, 2:596 (fig.), 2:597 (fig.)
Frequency domain time series analysis, 4:1769
Freud, Sigmund, 1:111, 2:568, 4:1624, 4:1747
Frey, Siegfried, 2:908
Friedan, Betty, 2:611, 4:1580
Friedman, Batya, 1:382
Friedman test, 2:834, 4:1560
Friesen, Wallace, 2:487, 2:489, 2:490
From Numbers to Words (Morgan, Reichert, Harrison), 4:1891

F test
  ANOVA and, 1:34
  decomposing sums of squares and, 1:363
  multiple regression and, 3:1043
  one-way analysis of variance, 3:1128–1132
  split-plot design, 2:499
Full-profile conjoint (FPC) analysis, 1:236, 1:237
Functional magnetic resonance imaging (fMRI), 1:172, 2:964, 3:1237
Funding of research, 2:599–602
  funding drawbacks, 2:600
  funding increasingly complex research models, 2:599–600
  overview, 2:599
  types of funding, 2:600–602
Funt, Allen, 3:1407
Future studies, of communication. See Communication and future studies
Futures wheel, 1:168
Futures workshops, 1:168
Gadamer, Hans-Georg
  health literacy, 2:648
  phenomenological traditions, 3:1229
  philosophy of communication, 3:1232, 3:1233, 3:1234, 3:1235
  rhetorical theory, 3:1510
  rhetoric as epistemic, 3:1490
Gagarin, Michael, 3:1485
Gain frame, 2:642
Galaxy Game (video game), 4:1858
Gallup, 4:1523
Galton, Francis, 1:267, 1:437, 1:438, 2:951
Galtung, Johan, 2:776, 3:1203, 3:1204
Game Informer, 2:874
Games and Culture, 2:604
Game studies, 2:603–607
  communication and future studies, 1:168
  conflict, mediation, and negotiation, 1:232
  critical and cultural studies approaches, 2:605
  digital games and increase in scholarship, 2:604–605
  future of, 2:607
  history of, before digital games, 2:603–604
  interpretive qualitative approaches, 2:605–606
  quantitative scientific approaches, 2:606–607
  term for, 2:603

Game Studies, 2:604
Gaming. See Massive multiplayer online games
Gangotena, Margarita, 2:852
Gardner, Jonathan, 1:154
Garfinkel, Harold, 1:258, 1:461–466, 2:608, 2:609, 2:610, 4:1742
Garfinkeling, 2:608–610
  conducting breaching experiment, 2:609
  examples, 2:609–610
  origins in ethnomethodology, 2:608–609
  overview, 2:608
Garner, Eric, 3:1496, 4:1866
Garrard, Judith, 2:1014
Garrison, William Lloyd, 2:664
Gatekeepers. See Informants
Gayle, Barbara, 4:1570
Gayspeak (Chesebro), 2:623
Gearhart, Sally, 2:554, 3:1391, 4:1583
Gearhart, Sherice, 3:1078, 3:1079
Geertz, Clifford, 1:67, 1:220, 1:320, 4:1893
Gender and communication, 2:611–615
  communication and evolution, 1:165–166
  communication and human biology, 1:170, 1:172
  dime dating and, 1:386–387
  education, 2:614
  family, 2:613
  feminist communication studies on, 2:557
  interpersonal contexts, 2:612–613
  media, 2:614
  nominal/categorical measurement levels in, 2:946
  overview, 2:611
  peace studies on gender topics, 3:1205
  power and violence, 2:614–615
  social relationships and, 4:1647
  theories and methods of studying, 2:612
  waves of feminism, 2:611
  workplace, 2:613–614
  See also Feminist analysis; individual survey topics
Gender-specific language, 2:615–618
  future scholarship, 2:618
  gendered notions in media and society, 2:617–618
  language use and differences, 2:615–617
  overview, 2:615
Gender Stories (Foss, Domenico, Foss), 2:554
Gender Trouble (Butler), 2:554

General intelligence (g factor), 1:274
Generalizability. See Errors of measurement: range restriction
Generalization, 2:618–621
  defined, 2:618
  in different research contexts, 2:620
  making, without random sampling, 2:619–620
  sampling and, 2:619
  targets of, 2:619
Generalized linear model (GLM) approach, 2:869
General linear model approach (curvilinear, quadratic relationships), 2:868–869
Genital blood volume. See Physiological measurement: genital blood volume
Geoffrey of Vinsauf, 3:1508
Geographic Information Systems (GIS), 2:622
GeoMedia, 2:621–623
  critical and cultural studies, 2:622
  field of, 2:621
  Geographic Information Systems (GIS), 2:622
  media ecology, 2:622
  mobilities research, 2:622
  theoretical perspective, 2:621–622
Geometric mean. See Mean, geometric
Geopolitical control, international communication and, 2:774–779
George, E., 3:1039
George C. Marshall Foundation, 2:582
Georgiou, Myria, 1:377
Gestures, symbolic interactionism, 4:1741
Gibson, James W., 1:89, 1:90
Gibson, Katie L., 2:553
Giddens, Anthony, 2:777, 4:1687
Gift/ghost authorship, 1:70
Giles, Howard, 1:161, 2:764, 2:767
Gill, Ann, 3:1490
Gillespie, Marie, 1:377
Gilroy, Paul, 1:376
Gingrich, Newt, 4:1694, 4:1695
Ginsburg, Ruth Bader, 3:1503
Girgis, Afaf, 1:85
“Girl power,” 4:1766
Girl with the Dragon Tattoo, The (film), 2:781
Gist, George (Sequoyah), 3:1080
Gist, Nathaniel, 3:1080
Gladwell, Malcolm, 2:697
Glamour, 2:870
Glaser, Barney, 1:213, 2:630
Glasgow, Russell E., 1:482
Glass, Gene V., 2:998

GLBT communication studies, 2:623–626
  contemporary concerns of, 2:624
  debate and forensics, 1:353
  feminist analysis, 2:553
  GLBT community and surrogacy issues, 4:1696
  historical analysis and presidential discourse, 2:665–666
  history of, 2:623
  important differences in G, L, B, T identities, 2:624
  key research areas of, 2:623–624
  overview, 2:623
  queer studies compared to, 3:1391, 3:1392
  terminology in gender and communication, 2:611
  See also Pornography and research
GLBT people and social media, 2:626–630
  cyberactivism, 2:628
  cyberbullying, 2:628–629
  overview, 2:626
  promoting identity development, 2:627–628
  research methods, 2:626–627
  safe space of social media, 2:627
  self-reflexivity, 2:627
Glittering generality (propaganda device), 3:1341
Global communication, 2:823
Globalization, international communication and, 2:777
Glocalization (global-local), 2:777
Godard, Jean-Luc, 2:568, 2:780
Godschalk, David, 2:734
Goffman, Erving, 1:258, 3:1265, 4:1741–1742, 4:1743
Golden, James, 2:574
Golder, M., 3:1037
Golding, Peter, 2:776, 2:777
Goldman, Emma, 4:1583
Gold Open Access, 3:1369
Goldsmith, Thomas J., Jr., 4:1857
Goldzwig, Steven R., 2:552, 2:553
Gonzales, Amy L., 4:1642
Gonzáles, Rodolfo “Corky,” 2:851
González, Alberto, 2:852
Goodman, Nelson, 2:695–697, 2:696, 2:698
Good subject effect, 1:360
Good subject role, 1:372
Goodwill (eunoia), 3:1479
Good Will Hunting (film), 2:911
Google
  digital natives, 1:383
  Google+, 1:265, 4:1640
  Google Drive, 3:1141

  Google Scholar, 1:132–133, 2:855, 4:1576
  new media analysis, 3:1092, 3:1093
  philosophy of communication, 3:1235
  search engines for a literature search, 4:1574, 4:1575
  social network systems, 4:1639
Gorgias, 3:1484, 3:1486
Gorgias (Plato), 3:1506
Gornick, Janet C., 2:574
Gottman, John M., 2:838, 3:1099, 3:1133
Gouldner, Alvin (Mike), 2:720
Government
  development of writing and, 1:199–200
  evidence-based analysis used by, 1:468 (See also Evidence-based policy-making)
  foundation and government research collections, 2:581–583
  health care disparities and response of, 2:640
Goyette-Cote, Marc-Olivier, 3:1039
Grammar of Motives, A (Burke), 3:1211, 3:1212, 3:1508, 4:1744
Gramsci, Antonio, 1:295, 1:321, 2:718, 2:775, 2:910
Graphical rotation, overview, 2:528
Gray, Freddie, 1:158
Gray, Pamela L., 1:88
Gray propaganda, 3:1340
Great Train Robbery, The (film), 2:567
Grebner, Simone, 3:1240
Green, Paul, 1:236
Greenacre, Michael, 1:276
Greene, John, 4:1622
Greenhouse–Geisser, 3:1433
Green Open Access, 3:1369
Greenpeace, 1:58
Grierson, John, 2:567
Griffin, Cindy L., 2:552, 2:554, 4:1583
Griffin, Rachel Alicia, 2:553
Griffin, Susan, 3:1391
Griffith, D. W., 2:651
Griffiths, Martha, 4:1582
Grimm, Susan A., 3:1241
Gronbeck, Bruce, 3:1510
Grootendorst, Rob, 1:54
Grounded theory, 2:630–633
  axial coding, 2:631–632
  communication theory and, 1:213
  data collection and analysis, 2:630
  open coding, 2:630–631
  overview, 2:630
  saturation, 2:632
  selective coding, 2:632
  writing, 2:632–633

Group-based approaches, to communication and technology, 1:175–176
Group communication, 2:633–637
  accuracy tasks, 2:634
  analyzing group data, 2:637
  communication competence, 1:188
  coordination tasks, 2:634
  curvilinear relationship in, 1:323–324, 1:323 (fig.)
  intergroup communication, 2:766–770
  measuring, 2:636
  measuring group outcomes, overview, 2:633
  measuring group social influence, 2:635–636
  overview, 2:633
  personal growth, 2:634–635
  personal relationship studies, 3:1223
  policy tasks, 2:634
  productivity tasks, 2:633–634
  virtual teams, communication and technology studies, 1:175–176
  See also Small group communication
Group memory, 2:634
Group polarization effect, 2:635
Groupthink, 2:635
Grubbs’ test, 3:1169
Guardian, The, 2:874
Gudykunst, William, 1:213
Guetzkow, Harold, 2:906
Guetzkow’s U, 3:1424
Guilt-purification-redemption cycle, 3:1493
Guinier, Lani, 1:304
Gurwitsch, Aron, 3:1229
Gutenberg, Johannes, 1:200
Gutierrez, José Angel, 2:851
Guttman, Louis, 4:1564
Gwet, Kilem Li, 2:727
Gwet’s Ac, 2:727
Gwet’s AC1, 2:725, 2:728 (table), 2:729
Habermas, Jürgen, 1:295, 2:775, 3:1232, 3:1510
Habitat (video game), 4:1858
Hacking. See Online data, hacking of
Hager, Joseph C., 2:487, 2:489
Hall, Edward T., 1:162, 1:309, 1:310, 2:756, 4:1625
Hall, Stuart, 1:295, 1:321, 1:376, 2:568, 2:775, 3:1282, 3:1510
Hallin, Daniel, 2:774
Halo (video game), 4:1866
Halperin, David, 3:1391
Hamelink, 2:777
Hamilton, Mark, 3:1194
Hamilton, William D., 1:164

Hammerback, John C., 2:851
Hammond, Joyce, 4:1570
Hancock, Jeffrey T., 4:1642
Happ, Mary Beth, 1:153
Haptic code, 3:1099
Haptics, observational measurement, 3:1109–1111
Haraway, Donna, 3:1461
Hard Core: Power, Pleasure and the Frenzy of the Visible (Williams), 3:1286, 3:1290
Harding, Sandra, 3:1461
Hare, R. Dwight, 2:1012, 2:1013, 2:1014, 2:1015, 2:1016
Hargis, Donald E., 1:89
Harman, Gilbert, 2:698
Harmonic mean. See Mean, harmonic
Harmonization, of data, 1:338
Harris, Cheryl, 1:304, 2:777
Harris, Leslie, 1:45
Harris, Phil, 2:776
Harrison, T. R., 4:1891
Harris Poll, 4:1523
Hart, Rod, 3:1509
Hartley’s Fmax, 2:653
Harvard University, 1:302
“Hashjacking,” 1:266
Hasian, Marouf, 3:1295
Haudenosaunee and the Trolls Under the Bridge, The (Broadrose), 3:1079
Hauser, Gerard, 1:45
Hawking, Stephen, 3:1480
Hawthorne effect, 1:476–477, 3:1125
Hayakawa, S.I., 4:1624
Hayano, David, 3:1217–1218
Hayes, Andrew F., 2:652, 4:1852, 4:1853
Haythornthwaite, Caroline, 1:175
HBGary Federal, 3:1143
He, Jia, 3:1476
Health care disparities, 2:639–642
  health communication field, 2:641
  overview, 2:639
  underlying causes of, 2:639–641
Health communication, 2:642–645
  communicating bad news, 1:83–86
  communication competence, 1:188
  communication privacy management theory on, 1:207–208
  creation of health messages, 2:642–643
  cyberchondria and, 1:328–330
  data security issues, 1:342
  on health care disparities, 2:641
  interpretation of health messages, 2:644–645
  management of health messages, 2:643–644

  negotiation of health messages, 2:644
  overview, 2:642
  privacy issues in health research, 3:1325
  responses to health messages, 2:643
Health Communication, 1:204, 2:758
Health Insurance Portability and Accountability Act, 1:342, 3:1325
Health literacy, 2:645–648
  Cancer Health Literacy Study (CHLS), 2:646–648
  continuous versus categorical construct, 2:646
  general versus disease-specific construct, 2:646
  individual- versus organizational-level construct, 2:645–646
  overview, 2:645
Healthy People 2010/2020 (CDC), 2:640
Heart rate, 3:1236
Hecht, Michael, 1:162
Hegde, Radha, 3:1294
Hegde, Shome, 3:1294
Hegel, Georg, 3:1409
Hegemony
  in cultural studies, 1:322
  in discourse analysis, 1:391, 1:392
Hegemony and Socialist Strategy: Towards a Radical Democratic Politics (Laclau, Mouffe), 1:391
Heidegger, Martin, 2:648, 2:696, 3:1228, 3:1229, 3:1232, 3:1510
“Heinz dilemma,” 3:1212–1213
Heisel, Alan, 1:171
Heisel, Beatty, 1:171
Hellmann, Rosemary, 3:1241
Hello, “curse of,” 1:442
Hempel, Carl G., 2:696
Henbest, Ronald J., 3:1198
Henderson, Russell, 3:1492
Hendrix, Katherine, 1:192
Henry, Michel, 3:1229
Hermagoras, 3:1507
Hermeneutics, 2:648–652
  assumptions of, 2:649–650
  critiquing literature sources, 2:886
  existential and hermeneutic phenomenology, 3:1228
  interpretation and communication, 2:650–651
  overview, 2:648–649
Hermogenes, 3:1507
Hernández, David Gonzalez, 2:852
Hernández, Laurie, 2:852
Hernández, Leandra H., 2:852
Herring, Susan, 1:380
Hesiod, 3:1506

Heterogeneity of variance, 2:652–654
  ANOVA, 2:652–653
  MANOVA, 2:653
  overview, 2:652
  t test, 2:652
Heteroskedasticity, 2:654–657
  consequences of, 2:654–655
  detection, 2:655
  overview, 2:654
  solutions, 2:655–656
Hick’s Law, 2:672
Hickson, Mark, 1:41
Hidden Dimension, The (Hall), 2:756
Hidden profile task, 2:634
Hierarchical backward elimination, 2:889
Hierarchical discriminant function analysis, 1:397
Hierarchical linear modeling, 2:657–660
  conceptual reasons for, 2:657–658
  fixed factors in, 2:495
  intraclass correlations (ICC), 2:659
  levels of analysis, 2:659–660
  nonindependent data, 2:658–659
  random factors, 2:501
  statistical reasons for, 2:658
Hierarchically nested factor design, 2:498
Hierarchical model, 2:660–663
  in communication, 2:662
  overview, 2:660
  statistical analysis, 2:661–662
  tree-like structure applications, 2:660–661
High context cultures, 1:310
Higher education, online data hacking and, 3:1142
Higher Learning Commission (HLC), 1:182
Higinbotham, William, 2:604, 4:1857
Hill, The, 2:823
Hillis, Ken, 2:622
Hiltz, Starr Roxanne, 1:173, 1:174
Hirst, Martin, 2:822
“Hispanic,” terminology, 2:849–850
Historical analysis, 2:663–666
  criticism of, 2:664
  defense of, 2:664
  development of, 2:663–664
  feminist studies and, 2:665
  genre studies and, 2:664–665
  overview, 2:663
  presidential discourse and, 2:665–666
Historical controls, 1:253
Historical flaw, control groups and, 1:254
History and Criticism of American Public Address, A (Brigance), 2:664

History effects, 3:1124
Hitchcock, Alfred, 2:568
Hitler, Adolf, 3:1478
Hochmuth, Marie, 2:664
Hofstede, Geert
  cross-cultural communication, 1:309, 1:311
  intercultural communication, 2:755, 2:757–758
  power in language, 3:1315
  response style, 3:1474
  social relationships, 4:1647
  value dimensions approach to communication and culture, 1:159–161
Hoggart, Richard, 1:295, 1:321
Holladay, Sherry, 1:292
Holland, Shannon, 2:617
Holling, Michelle A., 2:554, 2:850, 2:851
Hollywood “remakes,” 2:781–782
Holmes, David, 1:355–356
Holsti, Ole R., 2:906
Holsti method. See Intercoder reliability techniques: Holsti method
Home Depot, 3:1141
Homer, 3:1506
Homogeneity of variance, 2:666–668
  comparing distributions among groups, 2:666–667
  comparing single distribution to hypothetical distribution, 2:667–668
  homoscedasticity, 2:654, 2:666
  implications, 2:668
  measuring group communication, 2:636
  overview, 2:666
Homo Ludens (Huizinga), 2:604
Homoscedasticity, 2:654, 2:866
Honda, 3:1518
hooks, bell, 2:554, 3:1510
Hoover, Herbert, 1:45
Hopf, Theodore, 3:1057
Hopper, Grace, 2:671
Horkheimer, Max, 1:321, 2:568, 2:775, 3:1232, 3:1282
Hormonal variation, 3:1237
Hormones. See Communication and human biology
Horn’s parallel analysis, 2:517, 2:518
Hotelling’s trace, 1:398, 2:653, 3:1061
Housley, Wil, 1:121
Houston, Marsha, 2:852
Houston Ministerial Association, 3:1481
Hovland, Carl, 2:773
Howe, Julia Ward, 2:573
How to Read Donald Duck (Dorfman, Mattelart), 2:776

Hubbard, Rita, 2:550
Huber, Mary, 4:1570
Huber, Peter J., 1:346
Huber standard error, 2:656
Huck, Schuyler, 4:1651
Huckleberry Finn (Twain), 3:1503
Huerta, Dolores, 2:851
Huffington Post, 2:759, 2:823
Huizinga, Johan, 2:604
Hulu, 2:781
Human biology. See Communication and human biology
Human Communication Research (ICA), 1:3, 2:1022, 3:1445
Human-computer interaction, 2:671–674
  as communication, 2:673
  interface design, social computing, and ubiquitous computing, 2:671–672
  media and technology studies, 2:957
  overview, 2:671
  research methods, 2:673–674
  research questions, 2:673
  theoretical foundations of, 2:672–673
Human-robot interaction (HRI), 3:1517
Human subjects, treatment of, 2:668–670
  beneficence, 2:670
  historical overview, 2:668–669
  informed consent for study of human subjects, 2:706
  justice, 2:670
  overview, 2:668
  respect for persons, 2:669–670
  See also Institutional review board
Hume, David, 2:695, 2:696, 2:697
Humorous interpretation (debate and forensics), 1:351–352
Humphreys, Laud, 1:228, 3:1443
Hunter, John E., 1:434, 2:981, 2:986
Husserl, Edmund, 3:1227, 3:1228, 3:1229, 3:1232
Hutchings, Pat, 4:1569, 4:1570
Huynh-Feldt, 3:1433
Hybrid communication courses, 1:90
Hyde, Michael, 1:196, 2:650
Hyperius, Andreas, 3:1508
Hyperpersonal communication, 1:174, 4:1642
Hypochondria, cyberchondria versus, 1:328
Hypotheses, confirmatory factor analysis and. See Factor analysis: confirmatory
Hypothesis formulation, 2:674–677
  formulating a hypothesis, 2:676–677
  hypothesis testing versus model building, 2:889

  log-linear analysis, 2:886–889
  multivariate hypotheses, 3:1065
  negative case analysis and, 3:1084–1087
  one-tailed test, 3:1127
  overview, 2:674, 2:675
  types of, 2:675–676
Hypothesis testing, logic of, 2:677–680
  null hypothesis, 2:677–678
  overview, 2:677
  probability and p values, 2:678–679
  from sample to population, 2:677
  statistical significance and effect size, 2:679–680
Hyves (The Netherlands), 4:1640
I Am a Soldier, Too (Lynch), 2:617
IBM, 1:159–160, 1:310, 2:757, 3:1141–1142
Ibn-Sina, 3:1430
Identity
  African American communication and culture, 1:16
  GLBT people and social media, 2:626–629
  important differences in G, L, B, T identities, 2:624
  queer theory on, 3:1396
Ideographs, 2:681–684
  communication history, 1:199
  definition and functions of, 2:681–682
  diachronic analysis, 2:682–683
  forms of, 2:683
  overview, 2:681
  representational, 2:683
  synchronic analysis, 2:683
  visual, 2:683
Ideology, in cultural studies, 1:321
Ihde, Don, 3:1229
“I-It”/“I-Thou” relationships, 1:196
Iliad, The (Homer), 3:1506
Image restoration theory (IRT), 1:291–292
Imagined interactions, 2:684–687
  areas of research, 2:686–687
  attributes of, 2:685
  functions of, 2:684–685
  II, overview, 2:684
  imagery, 2:685
  methodology, 2:685–686
  online IIs, 2:685
  third-party IIs, 2:685
“Imaging Nature: Watkins, Yosemite, and the Birth of Environmentalism” (DeLuca, Demo), 3:1505
“I”/“Me” concept, 4:1740–1741
Imhof, Margarete, 1:413

Imitation of treatment, 2:772
Implicit Association Test (IAT), 2:690
Implicit measures, 2:687–691
  concept priming, 2:689
  measures of attention, 2:688
  measures of automaticity, 2:690
  measures of memory, 2:688–689
  overview, 2:687–688
  physiological responses, 2:690–691
  priming effects, 2:689
  sequential priming, 2:689–690
Implicit memory tests, 4:1620
Inbalances and inequalities concept, 2:776–777
Inclusive fitness theory, 1:164–165
Independent errors, linear regression, 2:866
Indirect measures, for communication assessment, 1:184
Individual difference, 2:691–694
  in communication research contexts, 2:693–694
  importance of, 2:691–692
  individual decision making, 4:1661
  measurement of, 2:692–693
  overview, 2:691
Individualistic cultures, 1:160, 1:310
Induction, 2:694–700
  overview, 2:694
  philosophical background, 2:694–697
  relevance for social science and communication research, 2:697–699
Inductive hypotheses, 2:676
Industrial revolution, communication history and, 1:201
Industrial robotics, 3:1518
Inferential statistics, defined, 3:1318–1319
Informal logic, 1:53
Informant interview, 2:700–702
  ethical issues, 2:701–702
  locating informants, 2:700
  overview, 2:700
  questioning informants, 2:700–701
Informants, 2:702–705
  ethical considerations, 2:704–705
  informant interviews, 2:700–702
  overview, 2:702
  reflexivity, 2:703–704
  selection process, 2:702–703
  uses of information from, 2:703
Information, Communication & Society, 1:64
Information processing theories, 2:672, 4:1609–1610
Information technology, new media and, 1:168

Informed consent, 2:705–707
  consent issues during interviews, 2:804, 2:805
  history of, 2:705–706
  overview, 2:705
  during research process, 2:706–707
  for study of human subjects, 2:706
  using new media, 3:1095–1096
Insch, Andrea, 1:264
Insiders. See Informants
Instagram
  agenda setting and, 1:21
  blogs and research, 1:100
  corporate communication and social media research methods, 1:265
  Internet research and ethical decision making, 2:787, 2:788
  media literacy, 2:972
  online and offline data, compared, 3:1134
  online social networks and, 4:1640
  social network systems, 4:1638
Institute for Propaganda Analysis (IPA), 3:1341–1342
Institute for Scientific Information (ISI), 1:5
Institute of Diversity and Ethics in Sports (TIDES), 4:1662
Institute of Medicine (IOM), 2:640, 2:645, 2:646, 2:648, 3:1197
Institutes of Oratory (Quintilian), 4:1744
Institutional review board, 2:707–711
  approval for proposed research, 2:709–711
  confidentiality and anonymity of participants, 1:230
  controversial experiments and, 1:255
  covert observation, 1:288
  data security and, 1:341–344
  debriefing of participants, 1:356
  deception in research, 1:361
  ethnography research, 1:458
  Federal Guidelines for the Ethical Conduct of Human Subjects Research (U.S. Department of Health and Human Services), 1:453–454
  historical background, 2:708–709
  informed consent, 2:705–707
  interview protocol, 2:805, 2:806
  overview, 2:707–708
  planning research project, 3:1448
  privacy of participants, 3:1323–1325
  respect for persons, 2:669–670
  vulnerable groups and, 4:1872
Instructional communication, 2:711–716
  classroom environment, 2:714–715
  instructor characteristics, 2:712–713

overview, 1:190–191, 1:192–193, 2:711–712 overview of research, 2:712 research methods, 2:715–716 research questions, 2:715 student characteristics, 2:713–714 Instructional manipulation, 2:901 Instructor characteristics, in instructional communication, 2:712–713 Instrumentation effect, 3:1125 Instrument reactivity, 3:1125 Interaction analysis, qualitative, 2:716–719 conversation analysis, 2:717 critical discourse analysis, 2:718–719 discourse analysis, 2:717–718 interactive interviews, 1:76 overview, 2:716–717 Interaction analysis, quantitative, 2:719–721 ANOVA, 1:35 assumption of structure, 2:719–720 examining sequence of behavior, 2:720–721 overview, 2:719 understanding structure, 2:720 Interaction effects, 2:534, 2:538 Intercoder reliability, 2:721–724 complications and need to increase rigor, 2:723–724 content analysis and, 1:241–242, 1:243 definition and importance, 2:721–722 process, 2:722–723 reliability of measurement and, 3:1428 Intercoder reliability coefficients, comparison of, 2:724–729 comparing seven agreement coefficients, 2:725–727, 2:727 (fig.), 2:728 (table) criteria for reliable data, 2:725 overview, 2:724–725 Intercoder reliability standards: reproducibility, 2:729–732 codebook, 2:730–731 coders, 2:731 data set, 2:730 overview, 2:729–730 Intercoder reliability standards: stability, 2:732–735 across/over time, 2:733–734 coding, 2:732–733 common measures of, 2:734 manifest and latent analysis, 2:733 overview, 2:732 pretests and pilot studies, 2:733 stability, defined, 2:732

Intercoder reliability techniques: Cohen’s Kappa, 2:735–738 agreement coefficients, compared, 2:725, 2:726, 2:728 (table) characterizing K, 2:735–738, 2:736 (table), 2:737 (fig.) coding of data, 1:151 comparison of coefficients, 2:727 content analysis, 1:242, 1:247 defined, 2:735 effect sizes, 1:407, 1:408, 1:408 (table) eta squared, 1:446, 1:447 Fleiss system and, 2:739, 2:740 Holsti’s method and, 2:741, 2:742 intercoder reliability, 2:722 Krippendorff’s alpha and, 2:751 Markov analysis and, 2:906 measuring intercoder reliability, 2:735 meta-analysis calculations, 2:981 normal distribution curve, 3:1101 observational research, 3:1120 reliability of measurement, 3:1428 reproducibility, 2:729 Scott’s pi and, 2:753 Intercoder reliability techniques: Fleiss system, 2:738–740 advantage of, 2:739 agreement coefficients, compared, 2:725, 2:727, 2:728 (table) defining, 2:738–739 Markov analysis and, 2:906 overview, 2:738 Scott’s pi and, 2:753 weaknesses of, 2:740 Intercoder reliability techniques: Holsti method, 2:740–743 application and reporting of, in research, 2:742–743 Holsti’s method formula, 2:741–742, 2:742 (table) overview, 2:722, 2:740–741 strengths and limitations of, 2:742 Intercoder reliability techniques: Krippendorff’s alpha, 2:743–751 agreement coefficients, compared, 2:725, 2:726, 2:727, 2:728 (table) coding of data, overview, 1:151 coincidence matrix representations of reliability data, 2:744–745, 2:745 (table) defined, 2:746 degrees of disagreements, 2:745–746 expected coincidences, 2:745 fixed coding, 1:146 general form of alpha, 2:744 Holsti’s method and, 2:741, 2:742 intercoder reliability, overview, 2:722 Markov analysis and, 2:906

normal distribution curve, 3:1101 numerical examples, 2:746–747, 2:749–750 observer reliability, 3:1120 overview, 2:743–744 reliability data, 2:744, 2:744 (table) reliability of measurement, 3:1428 stability, 2:734 unitizing and coding variable segments of a continuum, 2:747–749 Intercoder reliability techniques: percent agreement, 2:751–752 calculating percent agreement, 2:751 overview, 2:751 Intercoder reliability techniques: Scott’s pi, 2:752–755 agreement coefficients, compared, 2:725, 2:727, 2:728 (table) coding of data, 1:151 content analysis, 1:242 example, 2:754–755 Holsti’s method and, 2:741, 2:742 intercoder reliability, 2:722 observational research, 3:1120 overview, 2:752–753 percent agreement, 2:751 reliability of measurement, 3:1428 strengths and weaknesses of, 2:753–754 uses of, 2:753 Intercultural communication, 2:755–758 defining, 2:755–756 definitions of culture and implications for, 2:756–757 Hofstede’s dimensions, 2:757–758 origins of field of study, 2:756 overview, 2:755 Intercultural communication competence, 1:188 Interdisciplinary journals, 2:758–761 implications for scholars, 2:760 journals within disciplines compared to, 2:758–759 overview, 2:758 strengths and limitations of, 2:759–760 Interface design, for human-computer interaction, 2:671–672 Intergenerational communication, 2:761–765 communication accommodation theory (CAT), 2:762–763 intergenerational communication studies, 1:155 intergenerational solidarity and ambivalence paradigm, 2:763 lifespan approach, 2:761 methodologies, 2:763–765

models of, 2:763 overview, 2:761 social identity theory (SIT), 2:761–762 Intergroup communication, 2:766–770 distinguishing research from other areas of study, 2:766–767 examples, 2:767 “inter,” defined, 2:766 research methods, 2:767–769 Intergroup Contact Theory (Allport), 2:767 Intergroup distance index, 1:394–395 Interlandi, Jeneen, 2:587 Internal consistency. See Factor analysis: internal consistency Internal validity, 2:770–773 balancing with external validity, 1:482–483 compensation, 2:772 compensatory rivalry, 2:772–773 demoralization, 2:773 diffusion or imitation of treatment, 2:772 history, 2:770 instrumentation, 2:771–772 maturation, 2:770–771 mortality or subject attrition, 2:771 one-group pretest-posttest design, 3:1124–1225 overview, 2:770 regression to the mean, 2:772 selection, 2:772 testing, 2:771 See also individual types of test designs International communication, 2:773–779 communication apprehension studies, 1:179 development media theory, 2:774 geopolitical control and empowerment theories, 2:774–778 normative press theories, 2:773–774 overview, 2:773 professional communication organizations, 3:1331 International Communication Association (ICA) academic journal structure, 1:3 alternative conference presentation formats, 1:21 communication journals, 1:205 on ethics, 1:10 feminist analysis, 2:555 game studies, 2:604 limitations of research and, 2:863 panel discussions, 3:1177 sources for research ideas, 3:1445

theoretical traditions, 4:1763 types of academic journals and, 1:4 International English Language Testing System (IELTS), 1:418 International film, 2:779–783 co-productions and Hollywood “remakes,” 2:781–782 film production and national cinemas, 2:779–780 narrative style and cinematography, 2:780–781 overview, 2:779 research frameworks, 2:781–782 See also Film studies International Journal of Communication, 1:5 International Monetary Fund (IMF), 2:777 International Organization for Standardization (ISO) 26000, 1:264 International Trade Union Confederation, 4:1578 Internet, for literature search. See Search engines for a literature search Internet as cultural context, 2:783–785 application considerations, 2:783–784 context, defined, 2:783 ethical considerations, 2:784 Internet and intrapersonal communication research, 2:812–813 overview, 2:783 Internet research, privacy of participants, 2:785–787 database and technological considerations, 2:786–787 overview, 2:785 questionnaire design and privacy of participants in Internet-based research, 2:785–786 Internet research and ethical decision making, 2:787–789 Association of Internet Researchers, ethics committee, 1:64 blogging, 2:787–788 data collection method, 2:788–789 editable websites, 2:787 overview, 2:787 presenting findings, via Internet, 2:789 Internet Research Annual: Selected Papers from the Association of Internet Researchers, 1:64 Internet sampling. See Sampling, Internet Interpersonal communication, 2:789–794 approaches to communication and technology, 1:174–175

communication competence, 1:187–188 concepts, theories, contexts of, 2:789–790 critical research tradition, 2:793 empiricist research tradition, 2:790–792 gender and communication, 2:612–613 health communication and, 2:644 interpretive research tradition, 2:792–793 media and technology studies, 2:958 Interpretance, 4:1589–1590 Interpretive research, 2:794–797 to communication and culture, 1:162–163 ethical challenges, 2:796–797 interpretive research stories, 1:67 “interpretive turn” and communication scholarship, 2:794–796 overview, 2:794 qualitative processes of reliability and validity, 2:796 Inter-quartile range, 3:1401–1402 Interrupted time series design, quasi-experimental design, 3:1386, 3:1386 (table) Intersectionality, feminist communication studies on, 2:558 Intersubjectivity, 1:259 Interval measurement. See Measurement levels, interval Interval/ratio variables, defined, 1:271 (table) Interviewees, 2:797–800 interviewee-researcher relationship, 2:800, 2:804 interview settings, 2:799–800 overview, 2:797–798 recruitment process, 2:798–799 Interviews for communication and aging research, 1:156 conversation analysis, 1:257–261 corporate communication, 1:265 depth interviews for family communication research, 2:547 emergency communication data-gathering interviews, 1:410 ethnographic interviews, 1:454–457, 1:459 focus groups, 2:578–581 informant interviews, 2:700–702 mass communication research, 2:916 narrative interviewing, 3:1072–1077 online, 3:1144–1145

respondent interviews, 3:1470–1472 time for covert observation compared to, 1:287–288 Interviews, recording and transcribing, 2:800–803 interviewing, 2:800–802 recording, 2:802, 2:804, 2:805 transcribing, 2:802–803 Interviews for data gathering, 2:803–807 after the interview, 2:806 compensation, 2:805 conducting interview, 2:805–806 confidentiality, 2:805 consent, 2:805 developing interview protocol, 2:803–804 initial greeting, 2:805 interviewee-researcher relationship, 2:804 preparation for interview, 2:804–805 Intraclass correlation, 2:807–810 extensions, 2:809–810 interclass correlation (ICC) coefficients, defined, 2:807 overview, 2:807–809 uses of ICCs, 2:809 Intraclass correlations (ICC), hierarchical linear modeling, 2:659 Intragenerational communication studies, 1:155 Intrapersonal communication, 2:810–813 Internet and intrapersonal communication research, 2:812–813 overview, 2:810–811 psychological factors compared to, 2:811–812 Intrasubjects, counterbalancing, 1:278 Introduction to Categorical Data Analysis, An (Agresti), 2:947 Inverse scoring, 1:345 Investigative Reporters & Editors (IRE), 2:822 Invited publication, 2:813–815 author selection, 2:814 challenges, 2:814–815 coauthors, 2:814 intended audience and guidelines, 2:814 manuscript’s aim, 2:814 overview, 2:813 timeline and feedback, 2:814 Ipuwer, 2:593 iRobot Corporation, 3:1518 Irreducible complexity, 2:698 Irwin, Steve, 4:1573 Isocrates, 1:452, 3:1483–1488, 3:1506

iTunes (Apple), 2:914 Iyengar, Shanto, 1:20 Jackson, Peter, 2:779 Jackson, Sally, 2:846 Jakobson, Roman, 4:1743 James, William, 3:1489, 3:1490, 4:1739, 4:1747 Jamieson, Kathleen Hall, 2:616, 2:665, 3:1497, 3:1498 Janis, Irving, 3:1156 Jansson, Andre, 2:622 Janusik, Laura A., 1:413–416 Japanese language, Sapir-Whorf hypothesis and, 1:310, 1:311 Japp, Phyllis M., 2:574 Jaspers, Karl, 3:1234 Jay Rosen, 2:819 Jealousy, 2:817–819 conceptualizing responses to, 2:817–818 evolution and, 1:165 measuring and testing responses to, 2:818–819 overview, 2:817 Jefferson, Gail, 1:258, 1:465, 2:717, 4:1778 Jefferson, Thomas, 3:1500 Jeffersonian transcription, 2:802 Jeffreys prior interval, 2:905 Jenner, Caitlin (Bruce), 2:789 Jensen, Richard J., 2:851 Jespersen, Otto, 3:1315 Jesus, 3:1492 Jhally, Sut, 4:1863 Johansen, Winni, 1:292 Johnson, E. Patrick, 3:1390 Johnson, Fern, 2:553, 2:554 Johnson, Malynnda, 1:287 Johnson, Michael, 2:554 Johnson, Ralph H., 1:53 Johnson, Wendell, 4:1624 Jonckheere–Terpstra trend test, 2:833 Jones, Stacy Holman, 3:1393 Joreskog, Karl Gustav, 2:928 Jorns, Ulrich, 2:908 Journal Citation Reports, 2:871 Journal impact factor (JIF), 1:204–205 Journalism, 2:819–824 big data and, 2:822–823 computer-assisted reporting, 2:822 history, 2:820 media ethics, 2:822 media law, 2:822 media literacy and global communication, 2:823 multiplatform journalism, 3:1037–1040 overview, 2:819–820

photography, 2:821–822 privacy issues, 3:1323–1324 research, 2:820–821 Journalism & Communication Monographs, 3:1445 Journalism & Mass Communication Quarterly, 2:821, 3:1445 Journal of Applied Communication Research (JACR), 1:4, 1:41, 2:855, 2:858, 3:1209, 3:1445 Journal of Communication (ICA) academic journals, overview, 1:5 academic journal structure, 1:3 determining relevance of literature, 2:874 game studies, 2:604 limitations of research and, 2:863 selection of methodology, 2:1022 sources for research ideas, 3:1445 Journal of Communication and Religion, 1:3 Journal of Computer-Mediated Communication (ICA), 1:4, 1:5, 2:863, 3:1445 Journal of Family Communication, 1:4, 2:545 Journal of Homosexuality, 3:1392 Journal of Intercultural Communication Research, 1:414 Journal of International and Intercultural Communication (NCA), 1:4, 1:5 Journal of Leadership Education, 2:855 Journal of Leadership & Organizational Studies, 2:858 Journal of Leadership Studies, 2:858 Journal of Politeness Research, 3:1264 Journal of Social and Personal Relationships (JSPR), 1:3, 1:4, 2:758, 2:759 Journals, 2:824–827 analytical journals, 2:826 determining relevance of, 2:874 methodological journals, 2:825 observation journals, 2:824–825 by participants, 2:826–827 personal journals, 2:826 theoretical journals, 2:825–826 Jovanovic, Spoma, 1:196 Joyce, James, 1:111 JPMorgan Chase and Co., 3:1141 JSTOR, 1:349 Julius Caesar (Shakespeare), 1:111 Jussawalla, Meheroo, 2:775 Justice, for human subjects, 2:670 Kaiser, Henry, 2:517 Kaiser–Meyer–Olkin (KMO), 2:508, 2:517

Kaiser rule, 2:517, 2:518 Kaleidoscope, 1:5 Kallemeyn, Rebecca, 2:553 Kant, Immanuel, 1:195, 3:1159 Kaplan, Abraham, 4:1838 Kassing, Jeffrey W., 2:553, 4:1662 Katz, Paul, 2:962 Kay, Alan, 2:671 Keeter, Scott, 3:1149 Kendall, Lori, 1:381 Kendall’s tau, 2:829–831 (a), (b), (c) versions of, 2:830 overview, 2:829 Pearson correlation and, 1:271 significance test for, 2:830–831 terminology for, 2:830 use and applications of, 2:831 Kennedy, John F., 3:1271, 3:1272, 3:1275, 3:1481 Kennedy Commission on the Status of Women, 4:1580 Kenneth Burke Journal, 1:3 Kenneth Burke Society, 1:3 Kenny, David A., 4:1852 Keppel, Geoffrey, 1:366 Kierkegaard, Soren, 3:1232 Kinder, Donald, 1:20 Kinesic code, 3:1099 King, Martin Luther, Jr., 4:1865 Kipnis, Laura, 3:1286, 3:1290, 3:1291 Kirkwood, William, 2:577 Kish, Leslie, 2:809 Kitses, Jim, 2:568 K-means procedure, for clustering, 1:143, 1:144 Knowledge critiquing literature sources, 2:884 knowledge component of communication competence, 1:186 Knowledge: Creation, Diffusion, Utilization, 4:1574 Koch, Gary G., 2:739 Kohlberg, Lawrence, 3:1212 Kolmogorov–Smirnov test, 2:832, 4:1606 Korean American Communication Association (KACA), 3:1331 Korotkoff method, 3:1240 Korzybski, Alfred, 4:1624, 4:1626 Kowol, Sabine, 4:1777 KR-20/KR-21. See Reliability, Kuder-Richardson Kracauer, Siegfried, 2:567 Kraidy, Marwan M., 3:1292 Kramarae, Cheris, 2:552, 2:553, 2:554 Kraut, Robert, 3:1464 Krippendorff, Klaus, 1:146, 1:150, 1:151, 1:246, 2:739, 2:744, 2:747

Krippendorff’s alpha. See Intercoder reliability techniques: Krippendorff’s alpha Kristeva, Julia, 3:1510 Kruskal, William, 2:832 Kruskal–Wallis test, 2:831–834 homogeneity of variance, 2:667 introduction of, 2:832 Likert scales, 4:1555 overview, 2:831–832 practices, 2:833–834 rank order scales, 4:1560 statistical software for, 2:832–833 Krustra, Erika, 4:1568 Kuder-Richardson formula. See Reliability, Kuder-Richardson Kuhn, Thomas, 1:332, 2:696 Kurdek, Lawrence A., 1:72 Kurosawa, Akira, 2:780 Laboratory experiments, 2:834–838 advantages of, 2:836–837 disadvantages of, 2:837 ethical considerations, 2:837–838 overview, 2:835 purpose and function of, 2:835–836 Lacan, Jacques, 2:568, 3:1510 Laclau, Ernesto, 1:391, 1:392, 1:393, 2:718 Ladies Home Journal, 4:1581 Ladri di biciclette (film), 2:567 Ladson-Billings, Gloria, 1:304 Lag sequential analysis, 2:838–841 analyzing, 2:840 examples of, 2:840 overview, 2:838–839 revised, 2:839–840 Lakoff, Robin, 2:616, 3:1315, 3:1316 Lambda, 2:841–846 to consider relationships among dependent variables, 2:842 to generate multivariate vector to classify elements, 2:841–842 for justifying ANOVAs, 2:841 overview, 2:841 Lampe, Cliff, 4:1641, 4:1642 Lamsam, Teresa, 3:1078, 3:1079 La Nana (film), 2:780 Landis, J. Richard, 2:739 Landon, Alf, 4:1523 Lang, Annie, 1:170 Language and Power (Fairclough), 1:391 Language and social interaction African American communication and culture, 1:16 assumptions, terms, concepts, 2:843–844 development of communication in children, 1:373–374

distinctive research perspectives, 2:844–846 English as a Second Language (ESL), 1:417–421 gender-specific, 2:615–618 language expectancy theory and path analysis, 3:1196 LSI, defined, 2:842 origins of, 1:199 overview, 2:842–843 power in, 3:1314–1317 pronominal use and solidarity, 3:1337–1340 relational dialectics theory, 3:1408–1412 Sapir-Whorf hypothesis, 1:310–311 semantic priming, 4:1619 social constructionism and, 4:1624–1625 terministic screens and, 4:1745–1748 transcription systems, 4:1776–1779 See also Critical analysis; Discourse analysis; Nonverbal communication Language and Social Reality: The Case of Telling the Convict Code (Wieder), 1:464–465 Language as Symbolic Action: Essays on Life, Literature, and Method (Burke), 4:1745 Lanigan, Richard L., 3:1233, 3:1235 Lapchick, Richard, 4:1662 La Raza Caucus (NCA), 2:849 Larson, Reed, 1:471 Lasswell, Harold, 2:773 Latent class analysis, 4:1687 Latent content, analysis, 1:243 Latent content coding, 1:149 Latent factor interactions, 4:1687 Latent growth modeling, 4:1687 Latent variables, 2:502 Latina, 2:852 Latina/o communication studies, 2:849–854 critical/(inter)cultural, 2:852 critical race theory, 1:303 interpretive/qualitative, 2:852 key terms, 2:849–850 Latinidad feministe, 2:553 (See also Feminist analysis) “Latin@,” terminology, 2:849–850 Latin@ vernacular discourse, 2:850 overview, 2:849 performance-based methodologies, 2:851–852 quantitative, 2:852–853 rhetoric approach, 2:851

theoretical and methodological research, overview, 2:850–851 trajectories for, 2:853 Latina/o Communication Studies Today (Valdivia), 2:852 Latin square design, 2:846–849 advantages and disadvantages, 2:847–848 counterbalancing application, 1:279–280 example, 2:846–847 overview, 2:846 Layered accounts, 1:75 Lazarsfeld, Elihu, 2:962 Lazarsfeld, Paul, 1:20, 2:773 Leadership, 2:854–858 assessing search credibility, 2:855–856 challenges and opportunities, 2:858 common search terms for, 2:855 conducting ethical research, 2:858 conducting literature review, overview, 2:855 methodology, 2:857 overview, 2:854 publishing findings, 2:858 research process, 2:854–855 research questions/hypotheses, 2:856–857 theories, 2:856 Leadership Quarterly, 2:855, 2:858 League of Women Voters, 2:574 Learning management systems (LMS), 1:401–402, 1:403 Least significant difference. See Post hoc tests: least significant difference Lectures on Rhetoric and Belles Lettres (Blair), 3:1508 Lee, Janet, 2:574 Leff, Michael, 2:665, 3:1505 Legal communication, 2:859–861 overview, 2:859 researching legal doctrine and jurisprudence, 2:859 researching legal institutions, 2:859–860 researching legal processes, 2:860 Legal issues copyright, 1:261–263 critical legal studies (CLS), 1:302 (See also Critical race theory) of data security, 1:342 first wave feminism on, 2:572–575 of freedom of expression, 2:589–593 See also Legal communication; individual names of cases Lehrer, Keith, 2:696 Lenin, Vladimir, 4:1588 Lenneberg, Eric, 1:311

Lerner, Gerda, 2:552 Lesbian Avengers, 2:624 Les Jeux et les Hommes (Caillois), 2:604 Levene, Howard, 2:653 Levene’s test, 2:666 Levinas, Emmanuel, 1:196, 3:1228, 3:1229, 3:1232 Levinson, Stephen C., 3:1264–1266 Lévi-Strauss, Claude, 2:568 Lewin, Kurt, 2:820 Lewins, Ann, 1:217 Lewis, Thomas R., 1:88–89 LexisNexis Academic, 1:349 LGBTQ. See GLBT communication studies Libertarianism, 2:773–774 Library guides (LibGuides), 1:92–93 Library research, 2:861–862 databases for context and sources, 2:861–862 overview, 2:861 topic identification, 2:861 Lifespan approach, to intergenerational communication, 2:653–764, 2:761 Liker, Jeffrey K., 2:839 Likert, Rensis, 4:1555 Likert scales. See Scales, Likert statement Lilliefors, 4:1606 Limitations of research, 2:862–864 addressing interested readers, 2:863–864 addressing peer reviewers, 2:863 addressing writers or researchers, 2:864 overview, 2:862 Limited capacity model of motivated mediated message processing (LC4MP), 1:170 “Limited term” copyright, 1:262 Lincoln, Abraham, 1:351, 2:549, 2:663, 2:665, 3:1392, 3:1505 Lincoln Memorial, 3:1355 Linden Labs, 3:1147 Lindsay, Vachel, 2:566 Lineage (video game), 4:1858 Linear combinations, understanding and interpreting, 3:1064 Linear contrasts, 1:249 Linear regression, 2:864–866 assumptions, 2:866 overview, 2:864 procedures, 2:865–866, 2:865 (fig.) Linear versus nonlinear relationships, 2:866–870 curvilinear and quadratic relationships: general linear model approach, 2:868–869

finding, 2:867–868, 2:868 (fig.) generalized linear model (GLM) approach, 2:869 overview, 2:867 LinkedIn, 3:1083, 4:1638, 4:1640 “Links,” 4:1633–1634 Linnaeus, Carl, 2:661 Lin’s rc, 2:728 (table) Lippmann, Walter, 1:19, 2:820 LISREL (linear structural relations), 2:929, 2:930 Listening, empathy and. See Empathic listening Listening Style Profile instruments, 1:415 Literacy, financial, 2:571 Literacy, health. See Health literacy Literacy, media. See Media literacy “Literary Criticism of Oratory, The” (Wichelns), 3:1349 Literary Digest, 4:1523 Literature, determining quality of, 2:870–872 conducting literature search, 2:871 journals, books, article types, 2:870–871 for meta-analysis, 2:993–997 overview, 2:870 Literature, determining relevance of, 2:873–875 determining author credibility, 2:873 determining publication credibility, overview, 2:873–874 newspapers, 2:874 overview, 2:873 popular magazines, 2:874 questions to ask, 2:875 scholarly or academic journals, 2:874 trade publications, 2:874 Literature review, the, 2:875–877 citation analysis, 1:132–134 citations to research, 1:134–136 defined, 2:875 determining quality of literature, 2:870–872 determining relevance of literature, 2:873–875 emergency communication and, 1:412 foundational reviews, 2:878–880 goals of, 2:876–877 leadership research, 2:855–857 meta-analysis and literature search issues, 2:993–997 narrative literature review, 3:1075–1077 research proposal and, 3:1451 resources for, 2:880–881 search engines for literature research, 4:1574–1577

skeptical and critical stance toward literature sources, 2:884–886 strategies for, 2:881–884 types of, 2:875–876 vote counting literature review methods, 4:1867–1870 writing a literature review, 4:1886–1888 Literature reviews, foundational, 2:877–880 approaches and strategies, 2:878 assumptions, 2:879 credibility, 2:879 goals of, 2:878–879 incorporating, 2:879 overview, 2:877 primary sources, 2:879 traditional literature review compared to, 2:878 Literature reviews, resources for, 2:880–881 finding resources, 2:880–881 resource credibility, 2:880 resource necessity, 2:880 Literature reviews, strategies for, 2:881–884 importance and characteristics of literature reviews, 2:882 rationales compared with literature reviews, 2:881–882 steps for creating literature review, 2:882–884 Literature review section. See Writing a literature review Literature sources, skeptical and critical stance toward, 2:884–886 bias, 2:885–886 developing and applying lens to literature, 2:886 importance of questioning others’ work, 2:884 Littlejohn, Stephen, 2:550, 2:790 Liu, Brooke, 4:1665 Lloyd, Sally, 4:1796 Loftland, John, 1:287 Logarithmic transformation, curvilinear relationship, 1:324 Logic, formal and informal, 1:53 Logical Self-Defense (Johnson, Blair), 1:53 Logic of signifier, 3:1346 Logistic analysis, 2:890–893 additional logistic regression forms, 2:891 application, 2:891–892 odds ratio and logistic regression, 2:890–891 overview, 2:890 Logistic regression, 3:1123

Logit linear model, 2:839 Log-linear analysis, 2:886–890 applicability, 2:887 example, 2:887–889, 2:887 (table), 2:889 (table) hypothesis testing versus model building, 2:889 overview, 2:886 Log odds ratio, 2:1010–1011 Log transformation, 1:346 Lombard, Matthew, 2:734 London, Norman T., 1:89 London’s National Physical Laboratory, 4:1857 Longitudinal design, 2:893–895 accelerated longitudinal studies, 2:894 cohort studies, 2:894 for communication and aging research, 1:157 overview, 2:893 panel studies, 2:894 structure of longitudinal studies, 2:893 surveys, 2:915–916 trend studies, 2:893 Looking glass self, 4:1740 Lord of the Rings (film), 2:779 Loss frame, 2:642 Lovaas, Karen, 3:1391 Lovejoy, Elijah, 2:820 Low context cultures, 1:310 Löwenbräu, 1:261 Lower-order effects, 2:887, 2:889 Lozano, Elizabeth, 2:852 Lucasfilm, 4:1858 Luckmann, Thomas, 2:885, 4:1623 Lucy (film), 2:781 Lull, James, 2:775 Lumière, Auguste, 2:567 Lumière, Louis, 2:567 Lumina Foundation, 1:183 Lurkers, 3:1137 Lynch, Jessica, 2:617 Lyotard, Jean-François, 3:1510 MacBride, Seán, 2:777 MacCallum, Robert C., 2:516 Madison, Soyini, 3:1215, 3:1218 Madonna, 4:1765 Magazines determining relevance of, 2:874 history of print and, 1:200 Magna Carta, 2:820 Magnavox, 4:1858 Magritte, Rene, 4:1746 Mahalanobis, Prasanta Chandra, 1:394 Mailloux, Steven, 2:650 Main effects, 1:35, 2:534, 2:537–538 Major mode, 3:1030

Making strange, 2:610 Man, Play and Games (Caillois), 2:604 Management Communication Quarterly, 2:855, 2:858 Managerial communication, 2:897–900 emerging issues and challenges in, 2:899–900 overview, 2:897 processes, practices, policy implications, 2:897–899 research methods, 2:900 Mancini, Paolo, 2:774 Mandel, Paul, 2:1002 Manifest content coding, 1:149, 1:243 Manipulation check, 2:900–903 manipulation in experimental research, 2:901 overview, 2:900–901 types of manipulation, 2:901–902 Man-made Language (Spender), 2:554 Mann, Estle Ray, 4:1857 Manning, Chelsea, 2:624 Manning, Erin, 3:1489 Mann-Whitney, 4:1555 Manual for Writers, A (Turabian), 1:126 Manual of Preaching for Parish Priests (Surgant), 3:1508 Manuscript decisions, for journal articles, 3:1374 Man with a Movie Camera (film), 2:567 Mapping BRIC’s Media (Thussu, Nordenstreng), 2:778 Marcel, Gabriel, 3:1229 March, James, 3:1156 Marchant, Paul, 1:440 Marconi, Guglielmo, 1:201 Marcuse, Herbert, 1:321, 2:568, 2:775 Margin of error, 2:903–905 confidence interval, 2:903 overview, 2:903 robustness and coverage probability, 2:904–905 standard deviation and standard error of statistic, 2:903–904 Marion, Jean-Luc, 3:1229 Marketplace argument, 1:52 Markman, Barry, 4:1651 Markov, Andrei Andreevich, 2:636 Markov analysis, 2:905–909 applications, 2:908–909 claims and outcomes, 2:907–908 coding a transcript, 2:906 establishing probability matrix, 2:907 evaluation of probabilities, 2:907 overview, 2:636, 2:721, 2:839–840, 2:905–906 unitizing a transcript, 2:906

Marriage, interpersonal communication theories about, 2:790–793 Marsh, Othniel Charles, 3:1079 Marshall, George C., 2:582 Marshall, Thurgood, 2:590 Martin, Matthew, 1:189 Martinez, Amanda R., 2:852, 2:853 Martinez, Jacqueline M., 2:852 Martinez, Katynka Z., 2:852 Martinson, Brian, 2:588 Marx, Karl, 1:295, 2:909, 3:1277, 3:1278, 3:1294, 4:1623 Marxism cultural imperialism, 2:776 induction and, 2:697 See also Marxist analysis Marxist analysis, 2:909–913 conducting, 2:911–912 economic metaphors, 2:910 hegemony, 2:910 ideology, 2:909–910 overview, 2:909 sites of struggle, 2:910–911 Mashable, 2:823 Massachusetts Institute of Technology, 2:604, 4:1858 Mass communication, 2:913–917 changing scope of, 2:913–914 cross-sectional surveys, 2:915 experimental studies, 2:914–915 longitudinal surveys, 2:915–916 mass media effects, 2:957–958 (See also Media and technology studies) open-ended surveys, focus groups, interviews, 2:916 overview, 2:913 quantitative content analysis, 2:916 Massive multiplayer online games, 2:917–920 ethnic inquiry into MMOG conduct, 2:917–918 MMOGs, overview, 2:917 personal nature of, 2:918–919 technology and gaming, 2:919 theoretical approaches, 2:917 Massive open online courses, 2:920–922 course design, 2:921–922 MOOCs, overview, 2:920 student outcomes, 2:920–921 Massumi, Brian, 2:697, 2:699, 3:1489 Mastro, Dana, 2:852, 2:853 Matched control groups, 1:253 Matched groups, 2:922–925 advantages of, 2:924 appropriate use of, 2:922–923 challenges and issues of, 2:924

overview, 2:922 quasi-experiments, 2:923–924 Matched individuals, 2:925–926 advantages of, 2:926 disadvantages of, 2:926 matched pairs design, 2:925–926 overmatching, 2:926 overview, 2:925 Mathet, Yann, 2:747 Mathew of Vendome, 3:1507 Mathieu, Georges, 3:1485, 3:1486 Matrices/matrix operations, 3:1063–1064 Matsuda, Mari J., 1:303 Mattelart, Armand, 2:776 Maturation, 2:770–771 Maturation, control groups and, 1:254 Maturation effect, 3:1124–1125 Mauchly test, 3:1433 Maximum likelihood estimation, 2:927–929 Bayesian and non-Bayesian statistics, 2:927 contrast with ordinary least squares, 2:927–928 MLE, overview, 2:927 outcomes, 2:928–929 Maxwell, Joseph, 1:147 Maynard, Douglas, 1:465–466 McCabe, Jessi, 2:550 McCain, John, 2:576 McCombs, Maxwell, 1:20 McCormick, Colleen, 1:188, 1:189 McCroskey, James C., 1:169, 1:178, 1:186, 1:189, 2:517 McCroskey, Linda, 1:189 McCroskey’s Personal Report of Communication Apprehension–24, 1:431 McDonald’s omega, 2:648 McDougall, William, 2:502 McGee, Michael Calvin, 2:664, 2:681, 3:1509 McGukin, Drew, 1:90 McIntyre, Alisdair, 1:195 McKee, Alan, 4:1754, 4:1755, 4:1756 McKerrow, Raymie, 1:295–296, 3:1510 McKinney, Aaron, 3:1492 McKinney, M. S., 3:1275 McLaughlin, Sarah, 3:1480 McLuhan, Marshall, 1:200, 1:202 McMahan, David, 1:39–40 McNemar, Quinn, 2:929 McNemar test, 2:929–932 application, 2:931–932, 2:931 (table) developing test for significance of changes in related proportions, 2:929–931, 2:930 (table) overview, 2:929

McPhee, Robert D., 3:1154 McRobbie, Angela, 2:568 McVeigh, Timothy, 4:1749 Mead, George Herbert, 4:1623–1624, 4:1739, 4:1740–1741 Mead, Margaret, 3:1116 Mean defined, 2:597, 2:952–953 normal curve distribution and relationship to, 3:1102–1103 simple descriptive statistics, 4:1601–1602 Mean, arithmetic, 2:932–935 compared to other types of mean, 2:934–935 as descriptive statistic, 2:933–934 geometric mean compared to, 2:936, 2:939 harmonic mean compared to, 2:938 importance of concept, 2:934 as inferential statistic, 2:934 overview, 2:932–933 Mean, geometric, 2:935–937 arithmetic mean compared to, 2:935 compared to other types of mean, 2:936–937 in descriptive and inferential statistics, 2:936 example, 2:937 overview, 2:935–936 Mean, harmonic, 2:937–939 arithmetic mean compared to, 2:935 compared to other types of mean, 2:938–939 in descriptive and inferential statistics, 2:938 example, 2:939 geometric mean compared to, 2:936 overview, 2:937–938 Mean, Lindsey, 2:553 Meaning and Identity in Cyberspace (Kendall), 1:381 Mean vector, 1:282 Measurement group communication, 2:635–636 measured score versus true score, 4:1784–1785 measurement validity, 4:1823 (See also individual types of validity) unobtrusive, 4:1813–1816 See also Reliability of measurement Measurement, delayed. See Delayed measurement Measurement, errors of. See Errors of measurement Measurement levels, 2:940–943 commonly used statistics for, 2:942–943 interval, 2:941–942

measurement, overview, 2:940 nominal, 2:940–941 ordinal, 2:941 ratio, 2:942 selection of metrics for analysis, 2:1023–1026 Measurement levels, interval, 2:943–945 difference between ratio data and, 2:944 interval scaling, 2:943–944 overview, 2:941–942, 2:943 use of interval scales in communication research, 2:944–945 Measurement levels, nominal/ categorical, 2:945–947 gender and, 2:946 limits of, 2:946 overview, 2:940–941, 2:945 research applications for, 2:945–946 Measurement levels, ordinal, 2:947–949 common survey questions using, 2:947–948 considerations for application, 2:948 overview, 2:941, 2:947 quasi-interval measures and, 2:948 Measurement levels, ratio, 2:949–951 application, 2:950–951 characteristics, 2:949–950 common examples, 2:950 overview, 2:942, 2:949 Measures of central tendency, 2:951–953 foundations of central tendency, 2:951–952 importance of, 2:953 mean, 2:952–953 median, 2:952 mode, 2:952 overview, 2:951 Measures of variability, 2:953–957 overview, 2:953–954 population versus sample statistics, 2:956–957 value of, 2:954–956, 2:954 (fig.), 2:955 (table), 2:956 (table) Mechanical Turk (Amazon), 4:1540 Media cultural studies of media communication, 1:320–322 developments in media and diaspora research, 1:377–378 emergence of media technologies, 1:199–202 environmental communication, 1:422 financial communication and, 2:569–572 frame analysis and journalistic choices, 2:583–584

Index   1941 gender and communication, 2:614 gender-specific language, 2:617–618 GeoMedia, 2:621–623 GLBT communication studies and representation in, 2:624 individual differences and, 2:691–694 international film, 2:779–782 journalists’ privilege, 2:589 (See also Freedom of expression) media ecology, 2:622 media ethics, 1:196 reality television, 3:1405–1408 second wave feminism and, 4:1581–1582 sports communication, 4:1663–1664 terrorists’ use of media and coverage by, 4:1749–1750 wartime communication, 4:1876–1877 writing for news media, 3:1097–1098 See also Film studies; Journalism; Mass communication; New media analysis; New media and participant observation; Pornography and research Media, political economy of. See Political economy of media Media and technology studies, 2:957–960 critical and cultural studies, 2:958–959 human-computer interaction, 2:957 interpersonal communication, 2:958 mass media effects, 2:957–958 methodological approaches, 2:959–960 organizational communication, 2:958 overview, 2:957 political communication, 2:958 Media diffusion, 2:960–963 adoption of new technologies, 2:962–963 communication networks, 2:962 diffusion of innovations, 2:960–961, 2:961 (fig.) future of, 2:963 new ideas and new media, 2:961–962 Media effects research, 2:963–968 behavioral measures, 2:967 demonstrating causality: internal validity, 2:965 demonstrating relevance: external and ecological validity, overview, 2:965 ethical considerations, 2:967–968 media content as experimental stimuli, 2:965–966 overview, 2:963–964 research methods in, 2:964–965 selective exposure, 2:966

self-reports of media use and consumption, 2:966–967 tension between internal and external validity, 2:967 Media exposure behavior, selective exposure of, 4:1585–1586 Media literacy, 2:968–972 as access, activism, advocacy, 2:972 family tree of, 2:969 (fig.) media messages and construction of meaning, 2:970–971 overview, 2:823, 2:968–969 political and economic media literacy, 2:971 textual analysis and meaning of, 2:970 value of studying, 2:970 Media multiplexity theory, 1:175 Median, 2:972–974 benefits and drawbacks of calculating median, 2:973–974 defined, 2:597, 2:952 defining, 2:972–973 notations and statistical formula, 2:973 simple descriptive statistics, 4:1602 Median split of sample, 2:974–976 advantages of, 2:974–975 considerations for, 2:975–976 disadvantages of, 2:975 overview, 2:974 Media reception/consumption, 2:971 Mediated priming, 4:1620 Media texts, 2:970 Mediation. See Conflict, mediation, and negotiation Medwave Fusion, 3:1241 Mehrabian, Albert, 1:193 Meisenbach, Rebecca, 2:554 Melanchthon, Philip, 3:1508 Méliès, Georges, 2:567 Melody, Bill, 2:776 Membership categorization analysis (MCA), 1:120–121 Memoria del Saqueo (film), 2:781 Memorials, as public memory, 3:1354–1356 Memory, 2:688–689, 4:1620–1621 Menander of Leodicea, 3:1507 Men’s Health, 2:870 Mercury Theater, 1:19 Merleau-Ponty, Maurice, 3:1228–1229, 3:1232 Message clarity, research on, 1:193 Message production, 2:976–979 content analysis and select sample of, 1:244 in emergency communication, 1:409–410

factorial designs, overview, 2:492–493 fantasy theme analysis, 2:549–550 financial communication and corporate message framing, 2:570 health communication messages, 2:642–645 media literacy and, 2:970–971 politeness theory, 3:1264–1267 research and techniques, 2:978–979 SMCR (source, message, channel, receiver) model of communication, 2:884 social cognition and measuring message production, 4:1621–1622 strategic communication and, 4:1679–1682 types of message production studies, 2:976–978 Messinger, Daniel, 2:490, 2:491 Meta-analysis, 2:979–983 attenuated measurement, 1:431–432 coding reports, 2:981 collecting reports, 2:980–981 cross validation, 1:307 fixed factors in, 2:495 identifying statistical relationship between two variables, 2:980 importance of conversion for comparison, 2:1007–1008 interpreting results, 2:983 Kendall’s tau for, 2:830 literature review types, explained, 2:878 narrative analysis for, 3:1076–1077 ordinary least squares, 3:1152 overview, 2:979–980 performing calculations for, 2:981–982 random factors, 2:501 replication, 3:1436 Meta-analysis: estimation of average effect, 2:983–988 artifact adjustments, 2:986 example, 2:987–988 fixed-effect models versus random-effects models, 2:984 independence in, 2:986 inverse variance weights, 2:986–987 outliers, 2:985–986 overview, 2:983–984 transformations, 2:984–985 Meta-analysis: fixed-effects analysis, 2:988–993 calculations of effect sizes and sampling variances, 2:989 fixed-effects models, 2:989–990 multivariate meta-analysis, 2:991, 2:993

overview, 2:988–989 structural equation modeling, 2:991–992 univariate meta-analysis, 2:990–991, 2:992–993 Meta-analysis: literature search issues, 2:993–997 including studies in meta-analysis, 2:995–996 overview, 2:993 publication bias, 2:996–997 relevancy of studies, 2:993–994 searching for studies, 2:994–995 Meta-analysis: model testing, 2:997–1001 ANOVA, 2:998–999 causal modeling/structural equation modeling, 2:1000 multiple regression, 2:999–1000 overview, 2:997 understanding nature of validity, 2:997–998 Meta-analysis: random effects analysis, 2:1001–1007 assumptions, 2:1001–1002 computation of effect sizes, 2:1002–1003 heterogeneity, 2:1002 illustrative examples, 2:1003–1004, 2:1003 (fig.) interpretation of summary effect, 2:1004–1005 overview, 2:1001 subgroup analysis and meta-regression, 2:1005–1006, 2:1006 (fig.) Meta-analysis: statistical conversion to common metric, 2:1007–1012 converting one effect size into another, 2:1010–1011 different effect sizes in meta-analysis, 2:1009–1010 effect size, 2:1008–1009 importance of comparison for meta-analysis, 2:1007–1008 overview, 2:1007 practical considerations, 2:1011 Meta-ethnography, 2:1012–1017 advantages of, 2:1016 caveats, 2:1015–1016 data organization and analysis, 2:1014 development and current state, 2:1012–1013 example, 2:1014–1015 overview, 2:1012 phases of, 2:1013 qualitative syntheses, 2:1012

types of, 2:1013–1014 underlying principles, 2:1013 Metahistory (White), 4:1744 Metaphor analysis, 2:1017–1019 conflict metaphors, 2:1018–1019 identifying metaphors, 2:1017 overview, 2:1017, 2:1018 script/schema theory, 2:1017–1018 Meta-regression, 2:1005–1006, 2:1006 (fig.) Methodological journals, 2:825 Methodology, research proposal and, 3:1451–1452 Methodology, selection of, 2:1019–1023 breadth of communication research, 2:1020–1021 factors influencing choices, 2:1022 mixed method communication research, 2:1022–1023 overview, 2:1019–1020 research questions and methodological choices, 2:1021 Methods section. See Writing a methods section Metrics for analysis, selection of, 2:1023–1026 interval and ratio level, 2:1025–1026 levels of measurement, 2:1023–1024 nominal level, 2:1024 ordinal level, 2:1024–1025 overview, 2:1023 Metz, Christian, 2:568 Meyer, Philip, 2:822 Microsoft, 4:1687 Middle States Commission on Higher Education (MSCHE), 1:182 “Middle way,” 1:161 Mikkelson, Alan, 1:164 Milgram, Stanley, 1:256–257, 1:362, 2:669, 2:1022, 3:1443, 4:1635, 4:1636 Military, gender-specific language about, 2:617 Milk, Harvey, 2:624 Mill, John Stuart, 1:195, 3:1159, 3:1277 Millennial generation, as digital natives, 1:383 Miller, Gary E., 1:89 Miller, Lise, 1:367 Millet, Kate, 4:1581 Mills, C. Wright, 2:820 Minaj, Nicki, 3:1503 Mindfulness, 1:384 Minh-ha, Trinh T., 2:554 Minnick, Wayne C., 1:88–89 Minor mode, 3:1030 Mishel, Merle, 4:1839 Miss America, protest against, 4:1580–1581

MissTravel.com, 1:386 Mixed factorial designs, 1:36, 2:534 Mixed level design, 2:1026–1029 blocking random factor across levels, 2:1028 fixed and random factors, overview, 2:1026 nesting random factor within levels, 2:1028–1029 random factor as constant, 2:1027–1028 randomizing across random factor, 2:1027 types of, 2:1026–1027 Mixed methods communication research, 2:1022–1023 Mixi (Japan), 4:1640 Mobilities research, 2:622 Mode, 3:1030–1031 defined, 2:597, 2:952 definitions, 3:1030 examples, 3:1030–1031 simple descriptive statistics, 4:1602–1603 Model building, hypothesis testing versus, 2:889 “Model minority,” 1:60 Model testing. See Meta-analysis: model testing Moderation indices, 4:1684 Moderators, for focus groups, 2:580 Moderator variables, 1:35–36 Modernization theory, 2:774–775 Modern Language Association (MLA) style, 3:1031–1032 APA compared to, 1:26, 1:27 for bibliographic research, 1:93 Chicago compared to, 1:133 interdisciplinary journals, 2:759 in-text citations format, 3:1032 manuscript format, 3:1032 overview, 3:1031 peer review publication, 3:1209 publication style guides, overview, 3:1365–1368 research report organization, 3:1458, 3:1459 style guide, 3:1031–1032 submission of research to a journal, 4:1692, 4:1694 Works Cited list, 3:1032 Momaday, N. Scott, 3:1080 Monroe, Alan H., 1:88 Monroe, James, 3:1292 Montgomery, Barbara, 3:1408 MOOCs. See Massive open online courses Moore, Julia, 2:554 Moore, Mark L., 4:1744

Moore’s law, 2:671 Moraga, Cherrie, 2:851 Moreman, Shane T., 2:851, 2:852 Moreno, J. L., 4:1636 Morensen, Scott, 3:1390 Morgan, Robin, 4:1581 Morgan, S. E., 4:1891 Morlan, Don B., 1:88 Morley, David, 2:775 Morley, Donald D., 2:839 Morman, Mark, 1:324 Morphology, of language, 1:373–374, 1:375 Morreale, Sherwyn P., 1:90, 4:1569 Morris, Charles E., III, 2:665, 3:1392 Morris, Errol, 2:567 Mortality in sample, 3:1032–1035 defining mortality in an investigation, 3:1032–1033 implications and evaluation of, 3:1033–1034 methods of reducing mortality, 3:1034–1035 overview, 2:771, 3:1032 Mosco, Vincent, 2:776, 3:1277 Mother Teresa, 2:911 Motivated cognitive information processing, 1:170 Motivation drama and, 3:1493 instructional communication and, 2:712, 2:714 media effects research and, 2:967 motivation component of communication competence, 1:187 Motives-based approaches, to communication and technology, 1:176 Mott, Lucretia, 2:572, 2:574, 2:611 Motta, Giovanni, 2:504 Mouffe, Chantal, 1:391, 1:392, 1:393, 2:718 Mowlana, Hamid, 2:773 Ms., 4:1582 MSNBC, 3:1037 MUD (video game), 4:1858 Mulhall, Ann, 1:287 Multicollinearity, 3:1035–1037 context, 3:1035–1036 coping with, 3:1036–1037 linear regression, 2:866 multiple regression and, 3:1044 overview, 3:1035 symptoms and diagnostics, 3:1036 Multidisciplinary academic databases, 1:348–349 Multifinality, 1:333 Multimodal distribution, 3:1030

Multiplatform journalism, 3:1037–1040 case studies, 3:1039 defining, 3:1037–1038 market research methods, 3:1040 overview, 3:1037 research methods, overview, 3:1039 survey research, 3:1039–1040 Multiple methods research (MMR), triangulation compared to, 4:1783 Multiple regression, 3:1040–1045 assessing fit and strength of model and its predictors, 3:1042–1043 assumptions, 3:1043–1044 cross validation and, 1:306 issues of, 3:1044 meta-analysis: model testing, 2:997–998 overview, 3:1040–1041 statistical foundations, 3:1041–1042 uses and advantages of, 3:1041 Multiple regression: block analysis, 3:1045–1048 considerations about, 3:1046–1047 entering a single block, 3:1045–1046 entering multiple blocks, 3:1046 overview, 3:1045 Multiple regression: covariates in multiple regression, 3:1048–1050 defining the covariate, 3:1048–1049 describing multiple regression, 3:1048 overview, 3:1048 using covariates in multiple regression analysis, 3:1049–1050 Multiple regression: multiple R, 3:1050–1053 assumptions of R, 3:1051 defining multiple R, 3:1051 overview, 3:1050–1051 relationship of multiple regression to ANOVA, 3:1051–1052 uses of multiple R, 3:1052 Multiple regression: standardized and raw coefficients, 3:1053–1056 applying ordinary least squares regression, 3:1053–1054 difference between standardized and unstandardized coefficients, 3:1054 interpreting output of, 3:1054 overview, 3:1053 reporting standardized and unstandardized coefficients, 3:1054–1056 Multiple-sample chi-square, 1:130–131, 1:130 (table), 1:131 (table) Multiplexity, 4:1634 Multistage sampling, 3:1400

Multitrial design, 3:1056–1058 multiple trials for participants, 3:1056–1057 theoretical assumptions of sequence, 3:1057–1058 weaknesses of, 3:1058 Multiuser Dungeon/Domains (MUDs), 1:381 Multivariate analysis of variance, 3:1058–1062 ANOVA versus, 3:1059–1060 crossed factor analysis, 2:493 discriminant analysis and MANOVA, 1:395 heterogeneity of variance, 2:652 limitations of, 3:1062 MANOVA, defined, 3:1058–1059 maximum likelihood estimation and, 2:928 multivariate statistics, 3:1065–1066 one-way versus two-way, 3:1062 overview, 3:1060–1062 Multivariate causality, 1:122–123 Multivariate meta-analysis, 2:991, 2:993 Multivariate statistics, 3:1063–1067 important aspects of, 3:1063 matrices and matrix operations, 3:1063–1064 multivariate applications, 3:1065 multivariate hypotheses, 3:1065 overview, 3:1063 relations among variables, 3:1067 sets of dependent variables, 3:1065–1067 understanding and interpreting linear combinations, 3:1064 Mulvey, Laura, 2:568 Muñoz, José Esteban, 3:1390 Münsterberg, Hugo, 2:566–567 Murdoch, Rupert, 2:823 Murdock, Graham, 2:776 Murrow, Edward R., 3:1407 Museums, as public memory, 3:1355 MySpace, 1:221, 4:1640 Nakamura, Lisa, 1:381 Namco, 4:1858 Name calling (propaganda device), 3:1341 Nanook of the North (film), 2:567 Nanus, Burt, 2:855 Narrative analysis, 3:1069–1072 African American communication and culture, 1:18–19 applications, overview, 3:1070 authoring: telling a research story, 1:65–68 autoethnographic approaches, 1:75

co-constructed narratives, 1:76 critical incident method, 1:299–302 for critical race theory, 1:304 dialogic/performance applications, 3:1071–1072 Fisher narrative paradigm, 2:575–578 forms of, 3:1069 functional applications, 3:1071 international film and narrative style, 2:780–781 narrative interviews, 2:799 overview, 3:1069 sources of data for, 3:1069 structural applications, 3:1070–1071 thematic applications, 3:1071 See also Interaction analysis, qualitative; Interaction analysis, quantitative Narrative interviewing, 3:1072–1075 approaches to, 3:1075 interviewing, 3:1073–1074 (See also Interviews) narratives, defined, 3:1073 overview, 3:1072 researcher’s role, 3:1075 Narrative literature review, 3:1075–1077 considerations, 3:1077 describing, 3:1075–1076 overview, 3:1075 strengths of, 3:1076 weaknesses of, 3:1076–1077 Nash, Catherine J., 3:1391 Nass, Clifford, 2:672 National American Woman Suffrage Association (NAWSA), 2:573, 2:574 National Association of Academic Teachers of Public Speaking (NAATPS), 1:87, 4:1611 National Association of Independent Colleges and Universities (NAICU), 1:185 National Black Feminist Organization (NBFO), 4:1581 National Cancer Institute, 2:647, 3:1448 National Center for Health Statistics data, 2:710 National Center for Minority Health and Health Disparities, 2:640 National cinema, 2:779 National Collegiate Athletic Association (NCAA), 2:829, 2:831, 4:1663 National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1:39, 1:342, 2:705, 3:1441

National Communication Association (NCA) academic journal structure, 1:3 Activism and Social Justice Division, 1:11, 1:15 alternative conference presentation formats, 1:21, 1:23 applied communication, 1:41, 1:44 Asian/Pacific American communication studies, 1:60 communication assessment, 1:182, 1:183, 1:184 communication education, 1:190, 1:194 communication journals, 1:204, 1:205 disability and communication, 1:387–388, 1:390 on ethics, 1:10 feminist analysis, 2:555 GLBT communication studies, 2:623 limitations of research and, 2:863 NCA Credo for Ethical Communication, 1:452 panel discussions, 3:1177 peer review publication, 3:1210 sources for research ideas, 3:1445 Spiritual Communication Division, 4:1654, 4:1658 submission of research to a journal, 4:1692 teaching communication, 1:89 types of academic journals and, 1:4 National Council of Teachers of English (NCTE), 1:87 National Film Board of Canada, 2:567 National Football League (NFL), 4:1663, 4:1664 National Geographic, 2:874 National Institute for Computer-Assisted Reporting (NICAR), 2:822 National Institutes of Health (NIH), 1:343, 2:601, 2:708, 2:759 National Organization for Women (NOW), 4:1580 National Research Act of 1974, 2:669, 2:709 National Science Foundation (NSF), 1:50, 2:601, 2:759, 3:1142, 4:1573 National Security Agency, 3:1093 National Woman Suffrage Association (NWSA), 2:573, 2:574 National Women’s Party (NWP), 4:1582 Native American or indigenous peoples communication, 3:1077–1081 community-based research, 3:1078–1079 critical race theory, 1:305 digital media and race, 1:382

Indian human as educational device, 3:1080 Native Hawaiians and cultural adaptation, 1:61 overview, 3:1077–1078 research and revival, 3:1080–1081 Sapir-Whorf hypothesis, 1:310 wampum as written communication, 3:1079–1080 Nativism, development of communication in children and, 1:374 Naturalistic observation, 3:1081–1084 advantages of, 3:1082 challenges of, 3:1082–1083 data collection, 3:1083–1084 online, 3:1083 overview, 3:1081–1082 Natural selection, 1:163–164 Nature, 1:5, 2:588 Nazi War Crime Tribunal, 2:708. See also Nuremberg Code NBC, 2:617, 3:1092 NCA Credo for Ethical Communication (National Communication Association), 1:452–453 NCSoft, 4:1858 Negative case analysis, 3:1084–1087 advantages and disadvantages of, 3:1087 overview, 3:1084–1085 process, 3:1085–1086 qualitative analysis, 3:1085 qualitative rigor in, 3:1086–1087 Negative face, 3:1265 Negativistic subject effect, 1:360 Negativistic subject role, 1:372 NEGOPY network analysis, 4:1635 Negotiation. See Conflict, mediation, and negotiation Neo-Aristotelian criticism, 3:1087–1090 completing analysis with, 3:1088–1089 as controversial method, 3:1089–1090 history and development of, 3:1088 overview, 3:1087–1088 Nested factors. See Factor, nested; Hierarchical linear modeling; Hierarchical model Netflix, 2:781 Network analysis, in emergency communication, 1:411 Network analysis indices, 4:1635 Neuroscientific perspective of communication and human biology, 1:171–172 neuroimaging, 1:172, 2:964, 3:1237

neurostudies, implicit measures, 2:690–691 See also Communication and human biology Neverwinter Nights (video game), 4:1858 New England Association of Schools and Colleges Commission on Institutions of Higher Education (NEASC-CIHE), 1:182 Newman–Keuls test, 3:1132 New media analysis, 3:1090–1093 agenda setting and, 1:20–21 communication and future studies, 1:168 defining new media, 3:1091 media diffusion, 2:961–962 overview, 3:1090–1091 research difficulties of, 3:1093 research in new media, 3:1091–1093 New media and participant observation, 3:1093–1097 capturing lived experience, 3:1094 challenges, 3:1095–1096 efficiency and accuracy, 3:1094–1095 ethical considerations, 3:1096 opportunities of, 3:1094 overview, 3:1093–1094 retrospective analysis, 3:1095 New Media & Society, 1:5, 2:863, 2:874 New Poetics (Geoffrey of Vinsauf), 3:1508 New Rhetoric, The (Perelman, Olbrechts-Tyteca), 1:54, 3:1431, 3:1508 News Corp, 2:823 News media, writing for, 3:1097–1098 alternative news media, 1:23–26 news releases, 3:1097 political communication, 3:1269–1270 summary articles, 3:1097–1098 writing style for, 3:1098 Newspapers determining relevance of, 2:874 history of print and, 1:200 News releases, 3:1097 Newsweek, 2:870, 4:1581 Newton, Isaac, 3:1438 New York Radical Women (NYRW), 4:1580–1581 New York Times, 2:587, 2:874, 3:1150, 3:1286, 4:1749 New York Times v. Sullivan, 2:590 New York Young Lords and the Struggle for Liberation, The (Wanzer-Serrano), 3:1504

Neyman, Jerzy, 3:1104 Neyman–Pearson approach, 3:1104–1105 Nichols, Bill, 2:567 Nicomachean Ethics (Aristotle), 1:195, 3:1479 Nineteenth Amendment, U.S. Constitution, 2:573–574, 2:611 Nintendo, 4:1858 Nisbett, Richard, 1:161 Nissenbaum, Helen, 1:382 Nixon, Richard, 1:20, 2:820, 3:1271, 3:1272, 3:1275, 3:1494, 4:1694, 4:1695 Noblit, George W., 2:1012, 2:1013, 2:1014, 2:1015, 2:1016 Nominal measurement. See Measurement levels, nominal/categorical Nominal variables, defined, 1:271 (table) Nondirectional hypotheses, 2:675–676 Nonequivalent control groups, 1:252 Non-hierarchical clustering, 1:143–144 Nonindependent data, hierarchical linear modeling, 2:658–659 Non-native speakers (NNS), 1:417 Nonparticipant observation, 3:1117 Nonprobability. See Sampling, nonprobability Nonrandom sampling, of propaganda, 3:1343–1344 Nonverbal communication, 3:1098–1101 common approaches, 3:1099–1100 considerations in research of, 3:1100–1101 Facial Action Coding System, 2:487–491 online interviews, 3:1145 study of, 3:1098–1099 Nordenstreng, Kaarle, 2:777, 2:778 Norlin, George, 3:1485 Normal curve distribution, 3:1101–1103 overview, 3:1101–1102, 3:1102 (fig.) relationship between mean, standard deviation, and area under curve, 3:1102–1103 z scores and, 3:1103 Normal distribution of errors, linear regression, 2:866 Normative approach to communication ethics, 1:197 to international communication, 2:773–774 North Atlantic Treaty Organization (NATO), 4:1727 Notes sources, citing with Chicago style, 1:127

Note taking, during interviews, 2:806 Nothstine, William L., 3:1477 Nouvelle Rhétorique, La (Perelman, Olbrechts-Tyteca), 1:54 Null hypothesis, 3:1103–1106 defined, 2:677–678 defining, 3:1104 falsely rejecting the null (See False positive) McNemar test, 2:930 modern controversy of, 3:1104–1105 null hypothesis significance testing (NHST), 1:226–227, 2:542 overview, 3:1103–1104 power curves, 3:1310–1311 p value, 3:1176 Nuremberg Code, 1:448–449, 1:453, 2:669, 2:708 Nussbaum, Martha, 1:195 N.W.A., 3:1503 Obama, Barack Fisher narrative paradigm and, 2:576 historical analysis and, 2:665 ideographs, 2:683 media and technology studies, 2:958 postcolonial communication and, 3:1292 rhetoric, ethos, 3:1479 rhetorical and dramatism analysis, 3:1492, 3:1493 rhetorical artifacts, 3:1495 social media and, 4:1632 social media used by, 1:385 Obergefell v. Hodges, 3:1503 Objectivity, for evaluating Web resources, 1:93 Oblique rotation. See Factor analysis: oblique rotation Observation by applied researchers, 1:43–44 for communication and aging research, 1:156 communication skills and, 1:211 covert, 1:285–289 family communication research, 2:546–547 GLBT people and social media, 2:627–628 inspiration for research, 3:1437–1440 measuring patient-centered communication, 3:1198–1199 naturalistic observation, 3:1081–1084 new media and participant observation, 3:1093–1097

observation journals, 2:824–825 See also Field notes; Participant observation; Researcher-participant relationships in observational research Observational measurement: face features, 3:1107–1109 facial features as indicators of deception, 3:1107 facial features in communication of emotion, 3:1107–1108 overview, 3:1107 See also Facial Action Coding System Observational measurement: proxemics and touch, 3:1109–1111 controlled observation, 3:1109–1110 field and laboratory experiments, 3:1110 naturalistic observation, 3:1109 overview, 3:1109 Observational measurement: vocal qualities, 3:1111–1112 functional applications, 3:1111–1112 methods for observing vocal qualities, 3:1111–1112 observable vocal qualities, 3:1111 overview, 3:1111 Observational research, advantages and disadvantages, 3:1112–1115 advantages of observational research, 3:1113–1114 covert observation, 3:1117 disadvantages of observational research, 3:1114–1115 ethical concerns, 3:1115 nonparticipant observation, 3:1117 observational research, overview, 3:1112–1113 online observation, 3:1117–1118 participant observation, 3:1116–1117 Observational research methods, 3:1115–1118 artifact selection and, 1:56–59 observation research, defined, 3:1115 uses and origins of, 3:1115–1116 See also individual areas of study Observer reliability, 3:1118–1121 context and appropriateness of, 3:1119 features of, 3:1118–1119 maximizing, 3:1119–1120 overview, 3:1118 reliability and validity, 3:1119 Occam’s razor, 3:1182–1184 Occupy Wall Street, 2:591 Ochs, Elinor, 2:846 O’Connell, Daniel, 4:1777

Odds ratio, 3:1121–1123 calculation, 3:1121–1123, 3:1122 (table), 3:1123 (table) in meta-analysis, 2:985 in meta-analysis, and statistical conversion to common metric, 2:1010 odds ratio and logistic regression, 3:1123 OR, defined, 3:1121 Office for Human Research Protection (OHRP), 2:705, 2:706, 2:709 Office of Management and Budget, 2:849 Office of Measurement, 4:1559 Office of Public Health and Science, 2:587 Office of Research Integrity (ORI), 2:587 Office of the Secretary of Health and Human Services, 2:587 Ogan, Christine, 1:377 Olbrechts-Tyteca, Lucie, 1:54, 3:1431, 3:1508 Olsson, Gunnar, 2:621 Omori, Kikuko, 4:1640 On Christian Doctrine (St. Augustine), 3:1507 One-group pretest-posttest design, 3:1124–1126 experimental design, overview, 1:478 Hawthorne effect, 3:1125 history effects, 3:1124 implementation, 3:1124 instrumentation effect, 3:1125 instrument reactivity, 3:1125 maturation effect, 3:1124–1125 overview, 3:1124, 3:1125 (fig.) participant mortality, 3:1125 regression to the mean, 3:1125 threats to internal validity, overview, 3:1124 use of, 3:1126 O’Neill, James M., 1:87 One-tailed test, 3:1126–1128 directional and nondirectional testing, 3:1126–1127 hypotheses or research questions, 3:1127 overview, 3:1126 use of, 3:1127–1128 One-way analysis of variance, 3:1128–1132 assumptions, 3:1131 between-groups variance estimate, 3:1129–1130 F ratio, 3:1130–1131 logic behind F test, 3:1129 measures of effect size, 3:1131–1132

multiple comparisons, 3:1132 overview, 1:34–35, 3:1128 within-groups variance estimate, 3:1129 See also Analysis of variance (ANOVA) Ong, Walter, 1:200 On Ideas of Style (Hermogenes), 3:1507 Online and offline data, comparison of, 3:1132–1135 emergence of online data, 3:1133–1134 overview, 3:1132–1133 Online communities, 3:1135–1137 formation of, 3:1135–1136 overview, 3:1135 sense of community and social capital, 3:1136–1137, 4:1641 theories and methods, 3:1137 See also Social media Online data, collection and interpretation of, 3:1137–1139 interpreting online data, 3:1139 online collection methods, 3:1138–1139 online data, defined, 3:1137–1138 survey research, 3:1138 Online data, documentation of, 3:1140–1141 advantages of, 3:1140 challenges of, 3:1140–1141 social media mining, 3:1141 Online data, hacking of, 3:1141–1143 data breach problems, 3:1141–1142 ethical implications of using hacked data in research, 3:1143 future research directions, 3:1143 in higher education, 3:1142 individual researchers and, 3:1142 overview, 3:1141 securing valuable research data, 3:1142–1143 Online ethnography, 1:220–221 Online imagined interactions (IIs), 2:685 Online interviews, 3:1144–1145 advantages and disadvantages, 3:1144–1145 overview, 3:1144 standardized versus nonstandardized, 3:1144 synchronous versus asynchronous, 3:1144 Online observation, 3:1117–1118 Online social worlds, 3:1146–1149 identity and representation, 3:1146–1147 overview, 3:1146

purpose and interactions, 3:1147–1148 theory and methods in, 3:1148–1149 See also Social media; Video games Ono, Kent, 3:1293 On the Manner of Teaching and Study of Belles Lettres (Rollin), 3:1508 “On Viewing Rhetoric as Epistemic” (Scott), 3:1488 Open-access publications. See Publications, open access Open Access Scholarly Publishers Association, 3:1202 Open coding, 2:630–631 Open-ended surveys, 2:916 Opening statements, for focus groups, 2:580 Opinion polling, 3:1149–1150 overview, 3:1149 random sampling and, 3:1149–1150 See also Surveys Oppositional messages, Marxist analysis, 2:911 Opton’s peak method, 3:1246 Orality, 1:16 Oral presentation, 1:22, 2:686 Orbe, Mark, 1:163 Order, in Markov model, 2:636 Order and sequence effects, 1:277–280 Order of Things, The (Foucault), 3:1490 Ordinal measurement. See Measurement levels, ordinal Ordinal variables, defined, 1:271 (table) Ordinary least squares, 3:1150–1153 application to structural equation modeling, 3:1150–1151 approach to analysis, 3:1150–1151 curvilinear relationship, 1:324 defined, 3:1150 drawback, 3:1151 factor analysis: internal consistency, 2:521 heteroskedasticity, 2:654–656 maximum likelihood estimation compared to, 2:927–929 multiple regression: standardized and raw coefficients, 3:1053–1054 O’Reilly, Bill, 2:788 Organisation for Economic Co-operation and Development (OECD), 4:1577, 4:1578 Organizational communication, 3:1153–1157 container versus constitutive approaches, 3:1154 decision making and, 4:1661 early models of organizations, 3:1154–1155

evolving models of, 3:1155–1156 media and technology studies, 2:958 organizational dramas, 3:1494 overview, 3:1153–1154 privacy issues, 3:1325 processes of, 3:1156–1157 See also Corporate communication Organizational ethics, 3:1158–1161 methodologies and research, 3:1160–1161 overview, 3:1158 tensions in research of, 3:1159–1160 theories of, 3:1159 topics of research, 3:1158–1159 value of, 3:1158 Organizational identification, 3:1161–1165 challenges of overidentification, 3:1164–1165 identification in action, 3:1162 impact on decision making, 3:1164 impact on self-perception, 3:1164 overview, 3:1161–1162 representational methods, 3:1165 role of communication in, 3:1163 role of organizational culture in, 3:1163–1164 role of socialization in, 3:1162–1163 Organization for Research on Women and Communication (ORWAC), 2:555 Organization for the Study of Communication, Language and Gender, 2:555 Orientalism, 3:1294–1295 Origin Systems, 4:1858 Orthogonality, 3:1165–1168 application, 3:1167 of contrast analysis, 1:251 as element in design or analysis, 3:1166 as empirical outcome, 3:1166–1167 in measurement, 3:1166 orthogonal rotation, overview, 2:528–531, 2:530 (table) orthogonal versus oblique analysis, 2:525 overview, 3:1165 in social sciences, 3:1167–1168 Oscillometric measurement, 3:1241 Osgood, C. E., 4:1561 Oster, Harriet, 2:489 Oswald, Ramona, 2:553 Otis, Megan, 4:1570 Ott, Brian L., 3:1492 Outlier analysis, 3:1168–1170 accounting for counterfactuals and outliers, 1:37–39 detection and analysis, 3:1168–1170

in meta-analysis, 2:985–986 overview, 3:1168 Out-of-class communication (OCC), by students, 2:714 Overidentified model, 3:1171–1173 degrees of freedom and, 3:1172–1173 identification roles, 3:1172 organizational identification, 3:1164–1165 organizational identification challenges, 3:1164–1165 overview, 3:1171 in structural equation modeling, 3:1171–1172 in systems of equations, 3:1171 Pacanowsky, Michael P., 2:794 Pacific and Asian Communication Association (PACA), 3:1331 Pac Man (video game), 4:1858 Paine, Thomas, 2:820 Panel presentations and discussions, 3:1177–1179 attending, 3:1177–1178 overview, 3:1177 planning, 3:1178–1179 purposes, 3:1178 Pan’s Labyrinth (film), 2:780 “Paper mills,” 3:1256 Parables for the Virtual (Massumi), 2:697, 2:699 Parabolic transformation, curvilinear relationship, 1:324 Parallelism test. See Factor analysis: parallelism test Parasocial communication, 3:1179–1182 defining, 3:1179–1181 future directions, 3:1181 Park, Han Woo, 1:222 Park, Hee Sun, 2:504 Parks, Malcolm, 1:175 Parsimony, 3:1182–1184 examples, 3:1183 overview, 3:1182 qualitative research, 3:1183 quantitative research, 3:1183 questioning, 3:1183–1184 Partial autocorrelation functions (PACFs), 1:77 Partial correlation, 3:1184–1187 defined, 3:1184 formula, 3:1184–1186 limitations, 3:1186 reporting, 3:1186 Participant observation, 3:1187–1190 benefits of, 3:1187 negotiating access, 3:1188 note taking and writing up data, 3:1188–1189

observational research methods, overview, 3:1116–1117 observer’s critical distance, 3:1189–1190 overview, 3:1187 performance (auto)ethnography, 3:1217–1218 process of, 3:1188 Passing theory, 3:1190–1193 within and across groups, 3:1191–1192 defined, 3:1190–1191 overview, 3:1190 passing as prosocial, 3:1192–1193 prototypical identity, 3:1192 unique function of attribution process, 3:1191 Passwords, data security and, 1:343 Patching, Roger, 2:822 Patents, 1:261 Path analysis, 3:1193–1197 defining, 3:1193–1194 example, 3:1194–1197, 3:1195 (fig.) overview, 3:1193 as recursive model, 3:1194 Patient-centered communication, 3:1197–1200 factors contributing to, 3:1197–1198 measuring, 3:1198–1200 overview, 3:1197 Patient Protection and Affordable Care Act, 2:640 Paul Ekman Group, 2:488 Pay to review and/or to publish, 3:1201–1203 benefits and drawbacks to open access publishing, 3:1201–1202 determining appropriateness of, 3:1202 open access publishing model, 3:1201 traditional publishing model, 3:1201 Peace studies, 3:1203–1206 gender inequality and violence, 3:1205 interpersonal conflict resolution, 3:1206 overview, 3:1203 peacemaking, peacekeeping, peacebuilding, 3:1204 perspectives and methods of, 3:1203–1204 poverty, 3:1205–1206 racism and privilege, 3:1206 topics of, overview, 3:1204–1205 war and terrorism, 3:1205 Pearson, Egon, 3:1104 Pearson, Karl, 1:267, 1:289, 1:394, 1:434, 4:1666 Pearson correlation. See Correlation, Pearson

Pedagogy activism and social justice research, 1:14 basic course in communication, 1:88–90 communication assessment used in, 1:182–186 communication ethics and, 1:198 of digital natives, 1:383–386 distance learning, 1:399–402 educational technology, 1:403–406 gender and communication, 2:614 massive open online courses, 2:920–922 scholarship of teaching and learning, 4:1568–1572 service learning and, 4:1593–1596 See also Communication education; Scholarship of Teaching and Learning Peer review, 3:1206–1208 academic databases, 1:348–350 benefits to writers and publishers, 3:1208 communication journals, overview, 1:203 copyright and, 1:262–263 double-blind peer review, 3:1373–1374 drawbacks, 3:1208 edited publications versus, 3:1371 Gold Open Access, 3:1369 limitations of research, 2:873 overview, 3:1206 pay to review and/or to publish compared to, 3:1201–1203 process, 3:1207 reviewer’s responsibilities, 3:1207–1208 Peer reviewed publication, 3:1208–1210 editors and editorial boards, 3:1209 overview, 3:1208 process, 3:1209–1210 submitting research articles, 3:1208–1209 Peirce, Charles S. Peircean semiotics, 4:1587–1588 phenomenological traditions, 3:1229 philosophy of communication, 3:1232 rhetoric as epistemic, 3:1489 social constructionism, 4:1624 symbolic interactionism, 4:1739 Pelias, Ronald J., 3:1218 Peñaz method, 3:1240 Pentadic analysis, 3:1210–1213 Burkean analysis and, 1:109 dramatism analysis, 3:1492 dramatistic pentad, 3:1211

extended, 3:1213 “Heinz dilemma,” 3:1212–1213 overview, 3:1210 ratios and relations, 3:1211–1212 Percent agreement. See Intercoder reliability techniques: percent agreement Perelman, Chaïm, 1:54, 3:1090, 3:1431, 3:1508 Performance research, 3:1213–1216 dialogic performance, 3:1215 issues in, 3:1215–1216 overview, 3:1213–1214 performing research, 3:1215 reflexivity in performance context, 3:1214–1215 Performance session, 1:22 Performance studies, 3:1216–1220 audience of, 3:1218 challenges and ethical considerations, 3:1219 performance as method of study, 3:1217 performance as object of study, 3:1216–1217 performance (auto)ethnography, 3:1217–1218 performers of, 3:1218 subjects of, 3:1218 Performative writing, 1:76 Permanence and Change: An Anatomy of Purpose (Burke), 1:109, 1:110, 1:112, 4:1745, 4:1746 Perot, Ross, 3:1275 Perotti, Niccolò, 3:1508 “Perpetual foreigner,” 1:60 Persepolis (film), 2:568 Personal computer (PC), history of, 1:202 Personal growth, in group communication, 2:634–635 Personality, factor analysis of, 2:502 Personal journals, 2:826 Personal Relationships, 1:3, 1:4, 1:204, 2:758, 2:759 Personal relationship studies, 3:1220–1224 biological influences, 3:1223 ethical considerations, 3:1223 interdependence and level of analysis, 3:1222–1223 interpretivist lens, 3:1221 overview, 3:1220 post-positivist lens, 3:1220–1221 qualities of personal relationships and research methods, 3:1221 relational history, 3:1222 studying communication in, overview, 3:1220

Personal Report of Communication Anxiety (PRCA) (National Communication Association), 1:178, 1:184, 1:431, 2:974 Persuasion, 3:1224–1227 experimental inductions in research of, 3:1225–1226 measures in, 3:1224–1225 persuasive source communication, 1:165 See also Rhetoric Peters, John Durham, 3:1232 Peterson, Theodore, 2:773 Petroglyphs, 1:199 Petronio, Sandra, 1:205 Petty, Richard, 3:1224 Pew Internet, 1:176, 4:1523 Pew Research Center, 4:1529, 4:1637 PF Chang’s, 3:1141 Phaedrus (Plato), 3:1506 Phase shift, 4:1768 Phenomenological Movement, The (Spiegelberg), 3:1227 Phenomenological traditions, 3:1227–1230 existential and hermeneutic phenomenology, 3:1228 experience sampling method, 1:472–473 intentionality, 3:1227 as movement, 3:1227–1228 phenomenology, defined, 3:1227 phenomenology as deconstruction, 3:1229 phenomenology of body, 3:1228–1229 phenomenology of evil, 3:1229 phenomenology of nothingness, 3:1228 phenomenology of otherness, 3:1228 political phenomenology, 3:1228 postphenomenology, 3:1229 pragmatist phenomenology, 3:1229 selection, 3:1229–1230 as theoretical tradition, 4:1761–1762 underrepresented groups, 4:1808–1809 Phi coefficient, 3:1230–1232 example, 3:1231–1232 interpretation of, 3:1231 setup and calculation of, 3:1230–1231 Philip Melanchthon (Melanchthon), 3:1508 Philip of Macedon, 3:1486 Philipsen, Gerry, 1:162, 2:845–846 Philodemus, 3:1484 Philosophy of communication, 3:1232–1236

constructs and traditions of inquiry, 3:1233–1234 grounding communication as human science, 3:1235 as interpretive inquiry, 3:1232–1233 overview, 3:1232 philosophical approach to communication ethics, 1:197 significance of communication scholarship, 3:1234–1235 Philosophy of Rhetoric (Campbell), 3:1508 Philosophy of Rhetoric (Richards), 3:1509 Phishing, 4:1653 Photo-documentation, 4:1864 Photo-elicitation, 4:1864–1865 Photography, 2:821–822 Phrases, searching for, 1:350 Phronesis, 1:195, 3:1478–1479 Physiological measurement, 3:1236–1238 biochemical and hormonal variation, 3:1237 blood pressure, 3:1236, 3:1238 (fig.), 3:1238–1242, 3:1238 (table) eye tracking and pupillometry, 3:1237 genital blood volume, 3:1242–1245 heart rate, 3:1236 media and technology studies, 2:959 neuroimaging, 1:172, 2:964, 3:1237 overview, 3:1236 physiological responses and implicit measures, 2:690–691 skin conductance, 3:1237, 3:1250–1253 Physiological measurement: blood pressure, 3:1238–1242 auscultatory measurement, 3:1240–1241 avoiding potential confounds with, 3:1241 measuring response, 3:1238 (fig.), 3:1238–1240, 3:1238 (table) oscillometric measurement, 3:1241 overview, 3:1238 Physiological measurement: genital blood volume, 3:1242–1245 applications of measure, 3:1243–1244 overview, 3:1242 significance of, 3:1242–1243 Physiological measurement: heart rate, 3:1245–1248 early studies of, 3:1245–1247 overview, 3:1245 recent advances in, for communication, 3:1247 Physiological measurement: pupillary response, 3:1248–1250

history of research on, 3:1248 overview, 3:1248 research examining pupillary response in communication, 3:1249–1250 understanding change in eye, 3:1248–1249 Physiological measurement: skin conductance, 3:1250–1253 advantages and disadvantages of measuring EDA, 3:1252–1253 applications, 3:1253 biological background, 3:1251–1252 cold sweat in emotionally relevant situations, 3:1251 electrodermal activity (EDA), overview, 3:1237, 3:1250–1251 measurement and parameters of EDA, 3:1252 Piaget, Jean, 2:570 Pickles, John, 2:621 Pictograms, 1:199 Pictographs, 1:199 Pieterse, Nederveen, 2:778 Pillai’s criterion, 1:398, 2:654, 3:1061 Pilot study, 3:1253–1255 approaches to, 3:1254–1255 goals of, 3:1253–1254 overview, 3:1253 Pineda, Richard D., 2:852 Pinterest, 3:1092, 4:1638, 4:1640 Place diaspora and, 1:379 sense of place and environmental communication, 1:422 Placebo control groups, 1:252–253 Plagiarism, 3:1255–1258 acknowledging contribution of others, 1:8–10 citations and attributions, 4:1897–1898 education, 3:1257 history of, 3:1256 impact of technology on, 3:1256 implications for research and education, 3:1256–1257 overview, 3:1255–1256 policy, 3:1257–1258 research integrity, 3:1257 See also Acknowledging the contribution of others Plagiarism, self, 3:1258–1261 causes of, 3:1260 controversies of, 3:1259–1260 features of, 3:1258–1259 overview, 1:9–10, 3:1258 prevention of, 3:1260–1261 Plain folks (propaganda device), 3:1341

Plato categorization, 1:119 critical analysis and, 1:295 on ethics, 1:452 Isocrates’ rhetoric and, 3:1484, 3:1486 philosophy of communication, 3:1232 religious communication, 3:1430 rhetoric, logos, 3:1480 rhetorical theory, 3:1506 Plume, Alex White, 3:1081 Poe, Marshall T., 1:200–201 Poehlman, Eric, 2:587 Poetic analysis, 3:1261–1264 approaches, 3:1263 data analysis, 3:1263–1264 overview, 3:1261–1262 poetic possibilities, 3:1264 poetic sensibilities, 3:1262–1263 Point-biserial correlation. See Correlation, point-biserial Policy tasks, in group communication, 2:634 Politeness: Some Universals in Language Use (Brown, Levinson), 3:1264 Politeness theory, 3:1264–1267 culture and facework, 3:1267 global message evaluations and measures, 3:1266 legacy of Brown and Levinson, 3:1264–1266 message typology, 3:1266 overview, 3:1264 politeness as discursive practice, 3:1266–1267 Political communication, 3:1267–1271 general themes of research, 3:1270–1271 in institutional settings, 3:1268–1269 mediated communication, 3:1269–1270 overview, 3:1267–1268 processes of, 3:1268 Political debates, 3:1271–1276 content of, 3:1274 debate effects, 3:1272–1273 experiments, 3:1273 overview, 3:1271–1272 rhetorical analysis, 3:1275 social media, political debates, and data analytics, 3:1275–1276 surveys, 3:1273–1274 Political economy of media, 3:1276–1280 contemporary varieties of, 3:1277–1278 critiques of, 3:1278–1279

PEM, overview, 3:1276–1277 research areas, 3:1279–1280 roots of, 3:1277 Political issues critical ethnography, 1:297–299 of digital natives, 1:384–385 of historical analysis and presidential discourse, 2:665–666 induction and, 2:697–698 international communication, 2:773–779 political communication, 2:958 political drama, 3:1494 political-economy theory, 2:776 political phenomenology, 3:1228 terrorism and delayed measurement, 1:370 See also individual survey topics Politico, 2:823 “Politics of the Pitch: Claiming and Contesting Democracy Through the Iraqi National Soccer Team, The” (Butterworth), 3:1504 Pong (video game), 4:1858 Poole, Scott, 1:176 Popper, Karl, 2:885, 3:1104, 3:1167, 3:1435 Popular communication, 3:1280–1283 methodology, 3:1282 multidisciplinary appeal, 3:1282–1283 overview, 3:1280–1281 theory, 3:1281–1282 types of, 3:1281 Popular Mechanics, 2:874 Population/sample, 3:1283–1286 defining population, 3:1283–1284 degrees of freedom and, 1:367 diaspora and ethnic labeling, 1:378 errors of measurement: ceiling and floor effects, 1:433–434 errors of measurement: range restriction, 1:444 external validity, 1:480–481 factorial designs and determining sample size, 2:539 generalizability, 3:1285 generalization, 2:618–620 logic of hypothesis testing, 2:677 measures of variability, 2:956–957 mortality in sample, 3:1032–1035 overview, 3:1283 population concept and degrees of freedom, 1:367 population mean, 2:933 recruitment for focus groups, 2:579 representativeness, 3:1284–1285 sample size and statistical power analysis, 4:1676

sample versus population, 4:1523–1526 selecting sample items, 3:1285–1286 simple descriptive statistics, 4:1603–1604 surveys, sampling issues, 4:1718–1721 Pornography and research, 3:1286–1291 “antipornography feminism,” 3:1288–1290 contemporary finding of historical understandings of pornography, 3:1286–1287 humanities-based media studies and sexuality studies, 3:1290–1291 limits of ways that pornography is approached and discussed, 3:1287–1288 overview, 3:1286 social-scientific approaches to media, 3:1288 Porter, Edwin S., 2:567 Portfolios, for communication assessment, 1:184 Positionality, critical ethnography and, 1:298 Positive/direct correlation, 1:267 Positive face, 3:1265 Posner, Richard, 1:304 Postcolonial communication, 3:1291–1295 key terms for analysis, 3:1294–1295 overview, 3:1291–1293 role of postcolonial critics, 3:1293 studying rhetorics of disempowered, 3:1293–1294 underrepresented groups, 4:1808 Poster, Mark, 1:380 Posteriori, defined, 1:335 Poster presentation of research, 3:1308–1310 creating posters, 3:1308–1310 overview, 1:22, 3:1308 sections of posters, 3:1308 Post hoc ergo propter hoc fallacy, 1:122 Post hoc tests, 3:1295–1296 overview, 3:1295 planned contrasts versus, 3:1295–1296 types of, 3:1296 Post hoc tests: Scheffe test, 3:1301–1303 comparing tests, 3:1302–1303 defined, 3:1301–1302 overview, 3:1301 Post hoc tests: Duncan multiple range test, 3:1296–1298 overview, 3:1296–1297 utilizing, 3:1297–1298, 3:1297 (table), 3:1298 (table)

Post hoc tests: least significant difference, 3:1299–1301 describing least significant difference test, 3:1299–1300 example of least significant difference test, 3:1300 overview, 3:1299 Post hoc tests: Student-Newman-Keuls test, 3:1303–1305 conducting, 3:1304–1305, 3:1304 (table), 3:1305 (table) overview, 3:1303 Type I error and, 3:1303–1304 Post hoc tests: Tukey honestly significant difference test, 3:1305–1307 ANOVA results for first step of, 3:1306 (table) overview, 3:1305–1307 Postimperialism concept, 2:776–777 Postman, Neil, 1:201, 1:405–406, 4:1625 Postmodern epistemology, feminist, 2:559 Postmodernism, 2:777 Postmodern research stories, 1:67–68 Postphenomenology, 3:1229 Postpositivist research stories, 1:66–67 Potter, Jonathan, 1:391 Potter, Michael, 4:1568 Powdermaker, Hortense, 2:568 Power curves, 3:1310–1314 determinants of empirical power, 3:1313–1314, 3:1314 (fig.) gender and communication, 2:614–615 mechanics of computing empirical power, 3:1311–1312, 3:1312 (fig.) null hypothesis significance testing, 3:1310–1311 social relationships and, 4:1647 statistical power, overview, 3:1310 statistical power as remedy, 3:1311 theoretical power and optimal sample size, 3:1312–1313 Power Elite, The (Mills), 2:820 Power in language, 3:1314–1317 flexibility and variety in, 3:1317 language differences, 3:1315–1316 overview, 3:1314–1315 power and positioning “others,” 3:1316 power in workplace/organizational studies, 3:1316–1317 Power polynomials, in general linear model approach, 2:868–869 Power ranking, for citation analyses, 1:133

Practical wisdom (phronesis), 3:1478–1479 Pragma-dialectics, 1:54 Pragmatic evidence, 1:467 Pragmatics, of language, 1:373, 1:374 Pragmatist phenomenology, 3:1229 Praxis, 1:198 Preacher, K. J., 4:1852, 4:1853 Predictive discriminant analysis, 1:395 Prensky, Mark, 1:383, 1:385 Presage/Process/Product components, of empathic listening, 1:414–415, 1:414 (fig.) Prescott, Suzanne, 1:471 Pretest-posttest one group, 1:478 quasi-experimental design, 3:1385–1386, 3:1386 (table) reliability of measurement, 3:1426 stimulus pretest, 4:1677–1679 true experimental designs, 1:478 See also One-group pretest-posttest design; Solomon four-group design; Two-group pretestposttest design; Two-group random assignment pretest– posttest design Priest, Susanna Hornig, 4:1574 Primary data analysis, 3:1317–1321 analyzing qualitative analysis, 3:1320 descriptive statistics, 3:1318 evaluating interpretation of qualitative data, 3:1320 inferential statistics, 3:1318–1319 overview, 3:1317–1318 preparing for qualitative analysis, 3:1319–1320 preparing for quantitative data analysis, 3:1318 primary data, overview, 1:335–338 Primetime (ABC), 2:617 Priming, 1:20, 4:1619 Priming, in political communication, 3:1268 Priming effects, implicit measures, 2:689 Principal component analysis (PCA), 1:276, 2:511 exploratory factor analysis compared to, 2:515 factor analysis, 2:503 oblique analysis and, 2:525 Principal Investigators, of IRBs, 2:709–710 Principles and Types of Speech (Monroe), 1:88 Principles for Effective Assessment of Student Achievement, 3:1335

Principles of Letter-Writing (Anonymous of Bologna), 3:1507 Principles of Medical Ethics (American Medical Association), 1:452 Principles of the Art of Preaching (Chobham), 3:1507 Pronomial use and solidarity, 3:1337–1339 Print revolution, history of, 1:200 Prinz, Dietrich, 4:1857 Priori, defined, 1:335 Privacy of information, 3:1321–1323 boundary management, 3:1321 communication privacy management theories, 1:206–208 digital platforms, 3:1322–1323, 3:1325 overview, 3:1321 research with private information, 3:1322 rules for disclosure, 3:1321–1322 of social network systems, 4:1639 See also Institutional review board Privacy of participants, 3:1323–1325 confidentiality and anonymity of participants, 1:227–230 Internet research, privacy of participants, 2:785–787 journalism and research concerns, 3:1323–1324 overview, 3:1323 privacy and institutional review boards, 3:1324–1325 (See also Institutional review board) provisions for digital storytelling, 3:1325 provisions for health research, 3:1325 provisions for organizational consulting, 3:1325 Private spaces, covert observation in, 1:286 Probability. See Sampling, probability Probing questions, for focus groups, 2:579 Probit analysis, 3:1325–1328 assumptions, 3:1326–1327 comparison to other statistics, 3:1327–1328 overview, 3:1325–1326 in social sciences, 3:1326 strengths of analysis, 3:1328 Productivity tasks, in group communication, 2:633–634 Professional communication organizations, 3:1328–1333 description and contact information for, 3:1329–1333 history of, 3:1329 overview, 3:1328–1329

Program assessment, 3:1333–1337 external obligations for, 3:1335–1336 issues of, 3:1336 metrics for, 3:1334 overview, 3:1333–1334 translating goals into outcomes, 3:1334–1335 Projansky, Sarah, 3:1293 Promax rotation method, 2:529 Prompts, cloze procedure for, 1:139–142 Pronomial use and solidarity overview, 3:1337 pronomial usage, 3:1337 solidarity, 3:1337–1339 “Proof and Measurement of Association Between Two Things, The” (Spearman), 1:273 Proof by contradiction, 3:1176–1177 Propaganda, 3:1340–1344 circulation of, 3:1340–1341 devices of, 3:1341–1342 elaboration of latent consequences strategy, 3:1342 measuring presence and extent of, 3:1342–1343 nonrandom sampling of, 3:1343–1344 overview, 3:1340 specific message techniques, 3:1341 wartime communication and, 4:1876 Propositional units, 1:246 Prospective cohort designs in communication, 1:153 Prototype view, categorization, 1:119–120 Prototypical identity, 3:1192 Proxemics observational measurement, 3:1109–1111 in personal relationship studies, 3:1222 proxemic code, 3:1099 Psyche, 3:1345–1346 Psychoanalytic approaches to rhetoric, 3:1344–1347 accounts of rhetoric and psychoanalytic supplements, 3:1345–1347 exploring rhetorical approaches to psychoanalysis, 3:1345 overview, 3:1344–1345 Psychology Today, 2:759, 2:870 PsycINFO, 1:349 Ptacek, John, 1:85 Public address, 3:1347–1351 history of, 3:1348

key figures, 3:1348–1349 overview, 3:1347–1348 public versus private contexts, 3:1349 relationship between text and context, 3:1349 traditional versus nontraditional texts, 3:1349–1350 Publication, politics of, 3:1360–1365 elements of, 3:1361–1362 functions of academic publishing, 3:1362–1363 overview, 3:1360–1361 political controversy in academic publishing, 3:1363–1364 Publication bias, 2:996–997 Publication Manual of the American Psychological Association (APA). See American Psychological Association style Publications, open access, 3:1368–1370 benefits of, 3:1369 copyright and, 1:262–263 databases and, 1:348 Green Open Access and Gold Open Access, 3:1369 open access (OA), overview, 3:1368 open-access publications, defined, 3:1368–1369 Publications, scholarly, 3:1370–1371 abstract or executive summary, 1:1–2 edited publications, 3:1371 peer-reviewed publications, 3:1370–1371 purpose of, 3:1370 Publication style guides, 3:1365–1368 citation system, 3:1366–1367 format of research manuscript, 3:1366 overview, 3:1365 publication styles, 3:1365–1366 research process, 3:1367 using, 3:1367–1368 writing mechanics and style, 3:1366 Public behavior, recording of, 3:1351–1354 defining public behavior, 3:1351 ethical and review board issues, 3:1352–1353 ethics and criminal activity, 3:1353 identification of person in public space, 3:1353 reasons to record and study public behavior, 3:1351–1352 recording of minors, 3:1353–1354 unobtrusive measurement approaches, 3:1352 Public Health Service, Tuskegee Syphilis Experiment and, 1:257

Public memory, 3:1354–1357 overview, 3:1354 understanding, in context, 3:1354–1356 Public policy. See Evidence-based policy-making Public relations, 3:1357–1360 corporate communication and, 1:263 definitions of, 3:1357–1358 eclectic nature of, 3:1358–1359 guiding theories and principles, 3:1359–1360 overview, 3:1357 profession of, 3:1358 value of strong and weak ties, 3:1358 Public Relations Review, 1:5 Public service announcements (PSA), 2:691, 2:692 Public spaces, covert observation in, 1:286 Public speaking cluster analysis, 1:142 communication skills for, 1:209 courses in, 1:90 debate and forensics, 1:351–354 Public speaking, fear of. See Communication apprehension Publishing a book, 3:1371–1372 invited publication, 2:813–815 overview, 3:1371 steps involved, 3:1371–1372 See also American Psychological Association style; Chicago style; Modern Language Association (MLA) style; individual writing topics Publishing journal articles, 3:1372–1374 authorship credit, 1:71 common steps for, 3:1373–1374 feminist analysis and publication outlets, 2:554–555 invited publication, 2:813–815 leadership research, 2:858 overview, 3:1372–1373 publishing process and peer review, 1:203 See also American Psychological Association style; Chicago style; Modern Language Association (MLA) style; individual writing topics PubMed, 1:349 Pupillometry, 3:1237 Purposive sampling, 4:1525, 4:1532–1533, 4:1537 Putnam, Linda L., 2:794

p value, 3:1175–1177 basis of null hypothesis significance testing, 3:1176–1177 computing, 3:1175–1176 defined, 3:1175 example, 3:1175 hypothesis testing, 2:677 indicator of statistically significant finding, 3:1176 statistical decisions based on, 3:1176 QED: A Journal in LGBTQ Worldmaking (NCA), 2:623 Quadratic functions (general linear model approach), 2:868–869 Quadratic mean, 2:935, 2:936, 2:938 Qualified relativist position, 1:311 Qualitative data analytic induction, 1:37 anonymous source of data for, 1:39–40 axial coding, 1:79–82 communication and aging research, 1:155–156 critical incident method, 1:299–301 critical qualitative research, 1:296 document and textual analysis, 3:1376–1377 in emergency communication, 1:411 English as a Second Language, 1:420 experimental manipulation, 1:474 fraudulent and misleading, 2:588 freedom of expression, 2:593 game studies, 2:605–606 holistic approach to random assignment, 3:1400 negative case analysis for, 3:1084–1087 observation, 3:1375–1376 overview, 3:1375 primary data analysis, 3:1317–1320 qualitative methods, overview, 1:xl qualitative research compared to quantitative research, 3:1378–1379 questions and answers for, 3:1376 See also individual areas of research Qualitative interaction analysis. See Interaction analysis, qualitative Qualitative Research Reports, 1:204, 3:1209 Qualitative Research Reports in Communication (ECA), 1:4 Qualtrics, 2:786, 2:791, 3:1138 Quantitative descriptive analysis, 4:1815 Quantitative interaction analysis. See Interaction analysis, quantitative

Quantitative research, purpose of, 3:1377–1380 deciding to use, 3:1380 forms of inquiry, 3:1378 freedom of expression, 2:593 game studies, 2:605–606 generating concepts and supporting theories, 3:1379–1380 overview, 3:1377–1378 qualitative research compared to, 3:1378–1379 See also individual areas of research Quantitative research, steps for, 3:1380–1383 analytic induction, 1:37 anonymous source of data for, 1:40 coding of data, overview, 1:149 communication and aging research, 1:155–156 data collection and analysis, 3:1382–1383 English as a Second Language, 1:418–419 experimental manipulation, 1:474 interpretation of data and reporting findings, 3:1383 measures, 3:1381 overview, 3:1380–1381 research design, 3:1381 research site and respondents, 3:1381–1382 theory and hypothesis, 3:1381 See also individual names of statistical methods Quarterly Journal of Public Speaking, 1:87 Quarterly Journal of Speech (NCA), 1:3, 2:994, 3:1445, 3:1488 Quarterly Journal of Speech Education, 1:87 Quasi-experimental design, 3:1383–1387 in communication research, 3:1383–1384 distinguishing features of, 3:1384, 3:1385 (table) in emergency communication, 1:411 independent variable manipulation, 3:1385 interrupted time series design, 3:1386, 3:1386 (table) matched groups, 2:923–924 number of conditions in experiment, 3:1384 pretest-posttest, 3:1385–1386, 3:1386 (table) random assignment, 3:1384 use of pretests, 3:1384–1385

Quasi-F, 3:1387–1389 defining error or within-group variability, 3:1388 differences, 3:1387 implications of using quasi-F statistics, 3:1388 overview, 3:1387 sources of variability, 3:1387–1388 Quasi-public spaces, covert observation in, 1:286 Qaeda, Al, 4:1749 “Queering,” 2:665–666 Queer methods, 3:1389–1394 activism research, 3:1392–1393 activist approach to research, 3:1390–1391 art-based research, 3:1393 autoethnography, 3:1393 criticism, 3:1391–1392 historical approaches, 3:1392 methodology, 3:1391 overview, 3:1389 queer studies, 3:1389–1390 social scientific approaches, 3:1393 Queer Nation, 3:1390 Queer theory, 3:1394–1396 convergence and divergence from gay and lesbian studies, 3:1395 divergence of terminology, 3:1395 feminist discourse connections to, 3:1395 overview, 3:1394 queer as analytic, 3:1396 queer as descriptor, 3:1396 as resistance against heteronormativity, 3:1394–1395 Queer Theory and Communication (NCA), 2:623 Queer Words, Queer Images (Ringer), 2:623, 3:1391 Quest for Certainty, The (Dewey), 4:1747 Question-order effects, 4:1718 Questions cloze procedure for, 1:139–141 for focus groups, 2:578–581 See also individual survey topics Quine, W. V., 2:696 Quintilian, 1:452, 3:1088, 3:1477, 3:1507, 4:1744 Quota sampling, 4:1525, 4:1537 R, for Kruskal–Wallis test, 2:833 Race communication and culture studies, 1:161–163 critical race theory (CRT), 1:302–305 cultural sensitivity in research, 1:317–320

diaspora and, 1:376–380 digital media and, 1:380–383 feminist analysis and, 2:552 gender and communication, 2:612 health care disparities, 2:639–642 peace studies on racism topics, 3:1205 underrepresented groups, 4:1807–1809 See also Passing theory Race In/For Cyberspace (Nakamura), 1:381 Ragsdale, James Donald, 1:3 Rainbow Coalition, 3:1504 Rakow, Lana, 2:553 Ramasubramanian, Srividya, 2:853 Ramus, Petrus, 4:1744 Rand, Ayn, 1:195 Rand, Erin J., 3:1392 Random assignment, 3:1397–1399 factorial designs and, 2:539 necessitation of randomness, 3:1397–1398 opinion polling and random sampling, 3:1149–1150 overview, 3:1397 practice of, 3:1398–1399 Random assignment of participants, 3:1399–1401 cluster sampling, 3:1400 control groups and, 1:252–254 multistage sampling, 3:1400 overview, 3:1399 qualitative research and, 3:1400 random assignment control groups, 1:252 simple random sampling (SRS), 3:1399–1400 stratified sampling, 3:1400 systematic sampling, 3:1400 Random effects analysis. See Meta-analysis: random effects analysis Random factors. See Factor, random Randomized switching replication design, 1:479 Random sampling. See Sampling, random Range, 3:1401–1402 defined, 2:954–955, 3:1401 inter-quartile range, 3:1401–1402 range rule, 3:1402 reporting, 3:1401 simple descriptive statistics, 4:1604 standard deviation and, 3:1402 Rank ordering, 1:30–31 Rape, 2:614–615 Ratio measurement. See Measurement levels, ratio Rational deletion, 1:140

Rationale, literature review strategies, 2:881–882 Rawls, John, 3:1159 Raw score, 3:1402–1404 in communication research, 3:1402 determining data normality, 3:1402–1403 measuring latent constructs, 3:1403 overview, 3:1402 transforming raw scores, 3:1403 Ray, Angela, 1:45 Reaction time, 3:1404–1405 institutional investigations, 3:1404–1405 overview, 3:1404 in training programs, 3:1404 Reader’s Digest, 2:874 Reader’s Guide, organization of, 1:xxxix Reading Race Online (Burkhalter), 1:381 Reagan, Ronald, 3:1275, 3:1500, 3:1502, 3:1503, 4:1580 Reality, understanding. See Social constructionism Reality television, 3:1405–1408 areas and methods of study, 3:1407–1408 brief history of, 3:1406–1407 categories, 3:1407 contested definitions of, 3:1405–1406 overview, 3:1405 Receptive language, 1:373 Reciprocity, social network analysis, 4:1634 Recorded messages analyzing, 1:240 recording interviews, 2:802, 2:804–805 recording of public behavior, 3:1351–1354 Redstockings, 4:1580 Reductionism, in religious communication, 3:1431 Redundancy effect, 4:1718 Reese, Stephen, 2:583 Reeves, Byron, 2:672 Reference group theory, 1:16 Referential units, 1:246 Reflection/reflexivity critical ethnography and, 1:297–298 ethnomethodology as, 1:463–464 feminist communication studies on, 2:558 GLBT people, social media, and self-reflexivity, 2:627 reflexivity in performance context, 3:1214–1215 reflexivity of informants, 2:703–704

researcher-participant relationships, 3:1467–1468 writing reflective field notes, 2:565 Reflexive ethnographies, 1:75 Regression effect sizes, 1:407 generalized linear models (GLMs), 2:869 logistic analysis, 2:890–893 univariate statistics, 4:1811–1812 See also Errors of measurement: regression toward the mean; Linear regression Regression, cross-lagged. See Cross-lagged panel analysis Regression to the mean. See Errors of measurement: regression toward the mean “Regression Towards Mediocrity in Hereditary Stature” (Galton), 1:437, 1:438 Reichert, T., 4:1891 Reinforcement theory, 1:102–103 Reinsch, N. Lamar, 1:154 Relational dialectics theory, 3:1408–1412 dialectical tensions, 3:1409 dialogue, 3:1409 key concepts, 3:1409 RDT, overview, 3:1408 research and analysis, 3:1411–1412 speech games, 3:1409–1410 utterance chains, 3:1410–1411 Relationships between variables, 3:1412–1414 analyzing, 3:1412–1413 interpreting statistical tests, 3:1413 overview, 3:1412 testing, statistically, 3:1413 types of relationships, 3:1412 Relativism, 1:311 Relevance tree, 1:168–169 Reliability conjoint analysis, 1:237–238 content analysis and coding, 1:247 of negative case analysis, 3:1084 observer reliability, 3:1118–1121 Reliability, Cronbach’s Alpha, 3:1414–1417 advantages and disadvantages, 3:1415–1416 errors of measurement: attenuation, 1:430–431 exploratory factor analysis and, 2:516 factors increasing/decreasing, 3:1416–1417 Fleiss system and, 2:739 health literacy, 2:648 intercoder reliability, 2:722

meaning and measurement, 3:1414–1415 overview, 3:1414 reliability of measurement, 3:1414, 3:1427, 3:1428 reproducibility, 2:729 rigor, 3:1512 surveys, 4:1714, 4:1736 Reliability, Kuder-Richardson, 3:1418–1420 interpretation, 3:1419 KR-20 and KR-21 formulas, explained, 3:1418 KR-20 and KR-21 formulas, alternatives, 3:1420 limitations, 3:1419–1420 overview, 3:1418 reason to use, 3:1419 when to use, 3:1418–1419 Reliability, split-half, 3:1420–1421 overview, 3:1420–1421 reliability, defined, 3:1420 Reliability, unitizing, 3:1421–1424 circumstances when unitizing reliability is not measured, 3:1422 measuring, 3:1423–1424 overview, 3:1421–1422 unitizing verbal messages, 3:1422–1423 Reliability of measurement, 3:1424–1429 intercoder reliability, 3:1428 internal consistency and, 3:1427–1428 methods of assessing reliability, 3:1425–1426 methods of increasing, 3:1428 overview, 3:1424–1425 stability of measurement, change in measurement over time, 3:1426–1427 Religious communication, 3:1429–1432 African American communication and culture, 1:16 communication ethics, 1:196 cultural sensitivity in research, 1:317–320 direct approaches, 3:1430 fanaticism, 3:1431–1432 interaction, 3:1430–1431 overview, 3:1429 physical approaches, 3:1430 reductionism, 3:1431 religion, generally, 3:1429 transcendence, 3:1429–1430 verbal approaches, 3:1430 See also Spirituality and communication; individual survey topics

Relocation, acculturation and, 1:7–8 RenRen (China), 4:1640 Repeated cross-sectional design, 1:315 Repeated measures, 3:1432–1435 advantages, 3:1434–1435 drawbacks, 3:1434 overview, 3:1432–1433 repeated measures design, t-test, 4:1794 statistics and assumptions of, 3:1433–1434 Replication, 3:1435–1437 conditions for accurate replication, 3:1436 external validity, 1:482, 3:1435 history, 3:1435 importance of, 3:1436–1437 meta-analysis, 3:1436 methods, 3:1436 Reporting, external validity, 1:482 Representamen, 4:1589 Representation, in cultural studies, 1:322 Repressors, 3:1245 Reproducibility. See Intercoder reliability standards: reproducibility Request for Comment (RFC) document, spam and, 4:1652 Research applied communication and, 1:42–43 archive searching for, 1:46–49 artifact selection, 1:56–59 authoring: telling a research story, 1:65–69 autoethnography and elements of research, 1:74–75 between-subjects design, 1:90–92 bibliographic research, 1:92–94 blogs and, 1:100–101 causality in, 1:123–124 chat rooms, research applications, 1:125–126 citation analyses, 1:132–134 citations for, 1:134–136 cohort research designs, 1:151–154 confederates, 1:223–225 cultural sensitivity in, 1:318–319 debriefing of participants, 1:355–359 funding of, 2:599–602 informed consent during research process, 2:706–707 interpretive research, 2:794–797 laboratory experiments, overview, 2:835–838 limitations, 2:862–864 matched groups, 2:922–925 pilot studies, 3:1253–1255

  quantitative research steps, 3:1381 (See also Quantitative research, steps for)
  research tools for new media analysis, 3:1093
  treatment groups, 4:1779–1781
  triangulation, 4:1781–1784
  See also Content analysis, advantages and disadvantages; Content analysis, definition of; Content analysis, process of; Data; Deception in research; Experiments and experimental design; Literature review, the; Qualitative data; Quantitative research, steps for
Research, inspirations for, 3:1437–1440
  archivals, 3:1438–1439
  examples of, 3:1439–1440
  experiences, 3:1437–1438
  hypotheticals, 3:1438
  observation as inspiration, 3:1440
  overview, 3:1437
Researcher-participant relationships, 3:1465–1468
  activism and social justice research, 1:11–15
  cross-sectional design and, 1:316–317
  cultural beliefs, 1:319
  demand characteristics, 1:371–373
  ethical considerations, 3:1467
  journals kept by participants, 2:826–827
  online interviews, 3:1145
  overview, 3:1465
  in participatory research, 3:1467
  in qualitative research, 3:1466–1467
  in quantitative research, 3:1465–1466
  reflexivity, 3:1467–1468
Researcher-participant relationships in observational research, 3:1468–1470
  ethics, 3:1470
  overview, 3:1468–1469
  positionality, 3:1469
  researcher roles, 3:1469–1470
Researcher-produced pairs, t-test, 4:1793–1794
Research ethics and social values, 3:1440–1444
  accuracy, 3:1442
  beneficence, 3:1441–1442
  completeness, 3:1442
  ethics connection to value system, 3:1440–1441
  justice, 3:1442
  overview, 3:1440

  professional obligations, 3:1442–1443
  reflections of ethical decisions, 3:1443
  researcher-participant relationships, 3:1467, 3:1470
  respect, 3:1441
  respondents and, 3:1473
  theory, 3:1441
  truthfulness, 3:1442
  See also Communication ethics; Institutional review board
Research ideas, sources of, 3:1444–1447
  everyday life and, 3:1444
  issues, 3:1446–1447
  overview, 3:1444
  previous studies and theories, 3:1444–1446
  “researchable ideas,” 3:1447
Research in communication, overview, 1:xxxvi–xxxix
Research on Language and Social Interaction, 2:1022
Research project, planning of, 3:1447–1450
  assistance needed for, 3:1449
  causality as goal, 3:1448–1449
  data analysis, 3:1449–1450
  finding participants, 3:1449
  human subjects and, 3:1448
  overview, 3:1447–1448
  publishing findings, 3:1450
  research questions, 3:1448
Research proposal, 3:1450–1453
  key elements of research proposal, 3:1451–1453
  overview, 3:1450–1451
Research question formulation, 3:1453–1456
  formulating qualitative questions, 3:1455–1456
  formulating quantitative questions, 3:1454, 3:1455
  overview, 3:1453–1454
Research report, organization of, 3:1458–1460
  effective argument, 3:1459
  overview, 3:1458
  researcher subjectivity, 3:1459–1460
  style guidelines and organization, 3:1458
  See also individual names of style guides
Research reports, objective, 3:1456–1458
  examples, 3:1457
  objective versus subjective, 3:1456–1457
  overview, 3:1456
  writing, 3:1457–1458

Research reports, subjective, 3:1460–1462
  approaches, 3:1460–1461
  cultivating subjectivity in research, 3:1462
  methods, 3:1461–1462
  objective versus subjective, 3:1456–1457
  overview, 3:1460
Research topic, definition of, 3:1462–1465
  choosing topic (step 2), 3:1463–1464
  getting ideas (step 1), 3:1462–1463
  refining research question (step 4), 3:1465
  topic as research question (step 3), 3:1464–1465, 3:1464 (table)
Residuals, 1:31–33
Respect for persons, 2:669–670
Respondent-assisted sampling, 4:1537
Respondent interviews, 3:1470–1472
  structured, 3:1470
  unstructured, 3:1470–1472
Respondents, 3:1472–1473
  ethical considerations, 3:1473
  finding, 3:1472–1473
  overview, 3:1472
  types of, 3:1473
Response priming, 4:1619–1620
Response style, 3:1474–1476
  cultural differences, 3:1474–1475
  overview, 3:1474
  response style corrections, 3:1476
  survey instrument structure, 3:1475–1476
Restriction in range. See Errors of measurement: range restriction
Results section. See Writing a results section
Rethink Robotics, 3:1518
Retrospective cohort designs in communication, 1:153–154
Review of Communication, The (NCA), 1:4
Review of Communication Research, 1:4
Reynolds, Ronald A., 3:1225
Rheingold, Howard, 1:380, 3:1136
Rhetoric
  activism and social justice, 1:11–15
  argumentation theory and, 1:52–55
  conflict, mediation, and negotiation, 1:232–233
  criticism and method, 3:1477–1478
  Fisher narrative paradigm and rhetorical text, 2:577
  freedom of expression, 2:593
  health communication, 2:644–645
  ideographs, 2:681–684

  Latina/o communication studies, 2:851
  media and technology studies, 2:959–960
  overview, 3:1477
  persuasion, 3:1224–1227
  political debates and rhetorical analysis, 3:1275
  propaganda, 3:1340–1344
  psychoanalytic approach to, 3:1344–1347
  terministic screens, 4:1747–1748
  theoretical traditions, 4:1761
  visual rhetoric, 4:1861–1862
  See also Corporate communication; Critical analysis
Rhetoric (Aristotle), 3:1088, 3:1449, 3:1478, 3:1480, 3:1482, 3:1506
Rhetoric (Talon), 2:663, 3:1508
Rhetoric, Aristotle’s: ethos, 3:1478–1480
  applications, 3:1479
  dimensions of, 3:1478–1479
  limits in, 3:1479
  overview, 3:1478
Rhetoric, Aristotle’s: logos, 3:1480–1482
  argument, 3:1480–1481
  overview, 3:1480
  reasoning, 3:1481
  syllogism or enthymeme, 3:1481–1482
Rhetoric, Aristotle’s: pathos, 3:1482–1483
  appeal to emotion, 3:1483
  modern-day applications, 3:1483
  overview, 3:1482
  three-part view, 3:1482–1483
Rhetoric, Isocrates’, 3:1483–1488
  comparative study, 3:1486
  complicating factors for researchers, 3:1484–1485
  critical perspectives, 3:1487
  historical contextualization, 3:1485–1486
  influences, 3:1486–1487
  methods of investigation, 3:1485
  overview, 3:1483–1484
  philological methods, 3:1487–1488
  reception history, 3:1487
Rhetorical and dramatism analysis, 3:1491–1494
  applications, 3:1493–1494
  dramatism, 1:109
  guilt-purification-redemption cycle, 3:1493
  identification, 3:1492–1493
  overview, 3:1491–1492
  pentad, 3:1492

Rhetorical arena theory, 1:292
Rhetorical artifact, 3:1494–1496
  analyzing, 3:1495, 3:1496–1497
  assembling, 3:1495–1496
  defined, 3:1494
  role of artifact in rhetorical criticism, 3:1494–1495
Rhetorical Criticism: A Study in Method (Black), 2:664, 3:1498, 3:1509
Rhetorical genre, 3:1496–1499
  academic work in, 3:1497–1498
  as classification of rhetorical artifacts, 3:1496–1497
  as method of analysis, 3:1497
  overview, 3:1496
  usefulness for undergraduate students, 3:1498–1499
Rhetorical hermeneutics, 2:650
Rhetorical leadership, 3:1499–1502
  as constitutive, 3:1501–1502
  expansions in form, 3:1500
  expansions in frequency, 3:1500
  future study, 3:1502
  in historical comparison, 3:1499
  overview, 3:1499
  as situational judgment, 3:1501
Rhetorical method, 3:1502–1506
  close textual analysis, 3:1505
  critical rhetoric, 3:1504–1505
  metaphoric criticism, 3:1504
  overview, 3:1502–1503
  purpose of, 3:1503
  social movement criticism, 3:1504
  types of, 3:1503–1504
Rhetorical theory, 3:1506–1511
  discourse making, 3:1506–1509
  overview, 3:1506
  rhetoric as criticism of discourse, 3:1510–1511
  rhetoric as discourse, 3:1509–1510
Rhetoric as epistemic, 3:1488–1491
  critiques, 3:1489–1490
  overview, 3:1488
  Scott’s conception of, 3:1489
  socially relevant knowledge, 3:1490–1491
Rhetoric of Motives (Burke), 3:1508
Rhetoric of Religion (Burke), 3:1508
Rhetoric Society Quarterly, 1:3
Rice, Ray, 2:617, 4:1664
Rich, Adrienne, 3:1390
Richards, I. A., 3:1509
Richards, W. D., 4:1635
Richlin, Laurie, 4:1570
Ricoeur, Paul, 2:648, 3:1229, 3:1234
Riessman, Catherine, 4:1777
Rigor, 3:1511–1514
  applied communication and, 1:42–43
  case study research and, 1:118

  defined, 3:1511
  in different research stages, 3:1511–1512
  intercoder reliability, 2:723–724
  of negative case analysis, 3:1086–1087
  in qualitative research, 3:1513–1514
  in quantitative research, 3:1512–1513
  rigorous scientific knowledge, 1:467
Rios, Diana I., 2:852
Risk communication, 3:1514–1517
  case study approach, 3:1515–1516
  content analyses and surveys, 3:1516–1517
  environmental communication, 1:423
  experiments, 3:1515
  mental models approach, 3:1514–1515
  overview, 3:1514
Rivera, Alex, 2:780
Rivera-Servera, Ramón H., 2:852
Rivette, Jacques, 2:568
Robbers Cave Experiment, 2:562–563
Roberts Clark, W., 3:1037
Robert’s Rules of Order, 2:719
Robinson, Piers, 2:821
Robotic communication, 3:1517–1521
  chatbots, 3:1518–1519
  history of robotics, 3:1517–1518
  human-robot interaction (HRI), 3:1517
  industrial robotics, 3:1518
  research methods, 3:1519–1520
  social robotics, 3:1518
  teleoperated robotics, 3:1518
  theoretical foundations of, 3:1519
Rogers, Everett, 2:774, 2:960, 2:961, 2:962, 2:963
Rohmer, Éric, 2:568
Rojecki, Andrew, 2:821
Roles, social relationships and, 4:1647–1648
Rollin, Charles, 3:1508
Romantic communication, 1:165. See also Dime dating; Interpersonal communication; Jealousy
Roosevelt, Eleanor, 4:1580
Roosevelt, Franklin D., 1:45, 4:1523
Roosevelt, Theodore, 2:665
Root mean square error of approximation (RMSEA), 2:507, 3:1196, 4:1683–1684
Rorty, Richard, 3:1232
Rosch, Eleanor, 1:119–120
Rose, Gillian, 4:1863
Rosenthal, Robert, 1:95
Rostow, W. W., 2:775

Rotated matrix. See Factor analysis: rotated matrix
Rotation. See Factor analysis: oblique rotation; Factor analysis: rotated matrix
Rowe, Aimee Carillo, 2:852
Rowland, Robert, 2:577
Roy’s largest root, 1:398, 3:1061
Rubin, Donald B., 1:95
Rubin, Rebecca, 1:189
Rubio, Paulina, 2:851
Rudiments of Grammar (Perotti), 3:1508
Rudolph, Frederick, 1:89
Rufo, Kenneth, 3:1490
Run Lola Run (film), 2:780
Russian Communication Association (RCA), 3:1331
Ryan-Joiner, 4:1606
Sackett, Jim, 2:838
Sackett’s z, 2:839
Sacks, Harvey, 1:120, 1:258, 1:465, 2:717
Sagan, Carl, 4:1573
Said, Edward, 2:775, 3:1294
“Salami slicing,” 3:1259
Saldaña, Johnny, 3:1219
Same-sex marriage, 2:625
Sample. See Population/sample
Sample versus population, 4:1523–1526
  defining population, 4:1524
  overview, 4:1523
  types of sampling, 4:1524–1525
Sampling, determining size, 4:1526–1528
  determining sample size, 4:1527–1528
  populations, 4:1526
  reminders to researchers, 4:1528
  samples, 4:1526–1527
Sampling, Internet, 4:1529–1531
  advantages of, 4:1530
  disadvantages of, 4:1530–1531
  nonprobability sampling on Internet, 4:1530
  opportunities and challenges for, 4:1529–1530
  overview, 4:1529
  types of probability samples, 4:1529
Sampling, methodological issues in, 4:1531–1534
  mixing methods, 4:1533
  overview, 4:1531
  probability sampling, 4:1531–1532
  purposive sampling, 4:1532–1533
Sampling, multistage, 4:1534–1536
  benefits and limitations of, 4:1535–1536
  MSS, overview, 4:1534
  steps and uses of, 4:1534–1535

Sampling, nonprobability, 4:1536–1538
  advantages, 4:1538
  disadvantages, 4:1537–1538
  Internet, 4:1530
  nonprobability sampling, 4:1524–1525, 4:1530
  overview, 4:1524–1525, 4:1536
  types of, 4:1536–1537
Sampling, probability, 4:1538–1541
  advantages of, 4:1539–1540
  basic types of, 4:1538–1539
  contrast analysis, 1:250–251
  Internet, 4:1531–1532
  logic of hypothesis testing, 2:677
  overview, 4:1529, 4:1538
  probability/random sampling, 4:1524–1525
  sampling decisions, 4:1544–1545
Sampling, random, 4:1541–1542
  advantages and disadvantages of, 4:1542
  defining, 4:1541–1542
  overview, 4:1541
  sampling theory, 4:1550
  simple random sampling (SRS), 3:1399–1400, 4:1525, 4:1538
  use of, 4:1542
Sampling, special population, 4:1542–1544
  challenges and important conditions, 4:1544
  obtaining sample of special population, 4:1543
  overview, 4:1542–1543
Sampling decisions, 4:1544–1546
  overview, 4:1544
  probability sampling, 4:1544–1545
  sample size, 4:1545–1546
  sampling and systematic errors, 4:1546
  theoretical sampling, 4:1545
Sampling frames, 4:1546–1549
  challenges and considerations, 4:1548–1549
  overview, 4:1546
  research design, 4:1549
  sample versus population, 4:1524
  sampling frames versus samples, 4:1547
  selecting a sample, 4:1548
  use of, 4:1546–1547
Sampling theory, 4:1549–1552
  cluster sampling, 4:1550–1551
  errors and biases, 4:1551–1552
  overview, 4:1549
  procedures, 4:1550
  simple random and systematic sampling, 4:1550
  stratified sampling, 4:1550

Sanders, Robert, 2:845
San Diego State College, 4:1582–1583
Sandler, Howard, 4:1651
Sanson-Fisher, Rob, 1:85
Sapir, Edward, 1:310–311, 4:1625
Sapir-Whorf hypothesis, 1:310–311
Sarachild, Kathie, 4:1580
Sarbanes-Oxley Act, 2:570
Sarris, Andrew, 2:568
Sartre, Jean-Paul, 3:1228, 3:1229
SAS, for Kruskal–Wallis test, 2:833
Saturation, grounded theory and, 2:632
Saussure, Ferdinand de, 2:568, 4:1587, 4:1588, 4:1589, 4:1624
Saussurean semiotics, 4:1588
Saving Jessica Lynch (NBC), 2:617
Sawilowsky, Shlomo, 4:1651
Sawyer, Diane, 2:617
Sayette, Michael A., 2:489
Scales, forced choice, 4:1552–1554
  advantages of, 4:1553–1554
  check-all-that-apply format versus, 4:1552–1553
  disadvantages of, 4:1553–1554
  overview, 4:1552
Scales, Likert statement, 4:1554–1557
  advantages and disadvantages, 4:1556–1557
  issues in development of, 4:1555–1556
  linear versus nonlinear relationships, 2:867
  overview, 4:1554–1555
  population/sample issues, 3:1285–1286
Scales, open-ended, 4:1557–1559
  defined, 4:1557–1558
  item examples, 4:1558–1559
Scales, rank order, 4:1559–1561
  overview, 4:1559
  rank order expressed by language, 4:1559–1560
  rank order expressed by numerical ranking, 4:1560
  statistical analysis of rank order data, 4:1560–1561
Scales, semantic differential, 4:1561–1563
  origin of, 4:1561
  overview, 4:1561
  performance, 4:1563
  process, 4:1562–1563, 4:1562 (fig.)
  uses of, 4:1561–1562
Scales, true/false, 4:1563–1564
  benefits of, 4:1564
  overview, 4:1563
  use of, 4:1563, 4:1564
  writing items for, 4:1564

Scaling, Guttman, 4:1564–1568
  criticism, 4:1567
  development, 4:1565–1567, 4:1566 (table), 4:1567 (table)
  example, 4:1565
  overview, 4:1564–1565
Scatter plot, 1:268–270, 1:269 (fig.), 2:867
Schank, Roger, 2:1017
Schechner, Richard, 3:1216
Scheffe test. See Post hoc test: Scheffe test
Schegloff, Emanuel, 1:258, 1:465, 2:717
Scheler, Max, 3:1229
Schiappa, Edward, 2:650, 3:1486
Schiller, Herbert, 2:776
Schiller, Max, 4:1623
Schlafly, Phyllis, 4:1582
Schleiermacher, Friedrich, 2:648
Schmidt, Frank L., 1:434, 2:981, 2:986
Scholarly publications. See Peer review; Publications, scholarly
Scholarship of Teaching and Learning, 4:1568–1572
  characteristics of SoTL research, overview, 4:1568–1569
  intention of, 4:1571
  maximizing student learning through pedagogical knowledge, 4:1569
  as public, 4:1570–1571
  rigorous methodology for, 4:1569–1570
  situated in specific classrooms, 4:1569
  SoTL, overview, 4:1568
  students engaged in research process, 4:1570
Scholar-to-scholar presentation, 1:23
Schramm, Wilbur, 2:773, 2:774
Schütz, Alfred, 1:463–464, 3:1229
Schwarz, Norbert, 4:1699
Schwarzkopf, Norman, 3:1504
Science, 1:5
Science and Sanity (Korzybski), 4:1624
Science communication, 1:423, 4:1572–1574
  democratization of science and, 4:1572–1573
  as emerging field of study, 4:1573–1574
  history of, as process, 4:1572
  overview, 4:1572
  popularization of, 4:1573
Science Communication, 4:1572, 4:1574
Science Direct, 1:349
Science fiction, 1:167–168
Scott, Robert L., 3:1477, 3:1488–1491
Scott, William A., 2:753

Scott’s pi. See Intercoder reliability techniques: Scott’s pi
Screen, 2:568
Script creation, performance studies, 3:1218–1219
Search engines for a literature search, 4:1574–1577
  Google Scholar, 4:1576
  Internet resources, 4:1574–1575
  overview, 4:1574
  search strategy, 4:1575–1576
Searle, John R., 3:1216, 3:1232, 4:1626
Seattle Times, 2:822
Sebeok, Thomas, 4:1587
Secondary data, 4:1577–1579
  advantages of, 4:1578
  disadvantages of, 4:1579
  finding, 4:1577–1578
  overview, 1:336–338, 4:1577
Second-level agenda framing, 1:20
Second wave feminism, 4:1579–1583
  gender and communication studies on waves of feminism, 2:611
  historical timeline of, 4:1579
  important events to, 4:1580–1583
  overview, 4:1579–1580
  study of, 4:1583
  See also Feminist analysis
Secret, 3:1083
Sedgwick, Eve Kosofsky, 3:1390, 3:1395
SeekingArrangement.com, 1:386
Seiler, William J., 1:90
Selection, internal validity, 2:772
Selective coding, 2:632
Selective exposure, 4:1584–1587
  biophysiological measures, 4:1586
  forced exposure versus, 4:1584
  measurement approaches, 4:1584–1585
  method combinations, 4:1586
  observations of media exposure behavior, 4:1585–1586
  overview, 4:1584
  self-reports, 4:1585
Self-categorization, 1:120
Self-indication, 4:1741
Self-perceived communication competence, 1:187
Self-perception, organizational identification and, 3:1162–1163
Self-reporting
  defined, 1:335–336
  delayed measurement in, 1:370
  family communication research, 2:546–547
  in patient-centered communication, 3:1200
Semantic development, of language, 1:373, 1:374, 1:375

Semantic priming, 4:1619
Seminars, 1:23
Semiotics, 4:1587–1590
  cultural semiotics, 4:1587–1588
  denotation versus connotation, 4:1588
  interpretance, 4:1589–1590
  object, 4:1589
  overview, 4:1587
  Peircean semiotics, 4:1588–1589
  representamen, 4:1589
  Saussurean semiotics, 4:1588
  semiosphere, 4:1587
  social constructionism and, 4:1624–1625
  theoretical traditions, 4:1761
Semi-partial r, 4:1590–1592
  correlation, 4:1590–1591
  overview, 4:1590
  partial correlation, 4:1591
  semi-partial correlation, 4:1591–1592
Seneca Falls Convention (1848), 2:572–573, 2:574, 2:611
Sensitivity analysis, 4:1592–1593
  modeling and uncertainty, 4:1592–1593
  overview, 4:1592–1593
Sensitization, 2:692
Sensitizers, 3:1245
Sentiment analysis, with new media, 3:1093
Sequential analysis, of group communication, 2:636
Sequential discriminant function analysis, 1:397
Sequential priming, implicit measures, 2:689–690
Sequoyah, 3:1078, 3:1080
Service learning, 4:1593–1596
  assessing impact of, on community, 4:1595–1596
  assessing impact of, on student, 4:1595
  overview, 4:1593–1594
  research, 4:1594–1595
  value of, 4:1594
Sex, gender versus, 2:611
Sexual arousal. See Physiological measurement: genital blood volume
Sexuality, gender versus, 2:611
Sexual Politics (Millett), 4:1581
Shadish, William R., 1:480, 2:618, 2:619
Shaffer, Tracy Stephenson, 3:1218
Shakespeare, William, 1:111
Shang shu (Book of Documents), 2:593
Shannon, Claude, 2:884
Shapiro–Wilk test, 2:832, 4:1606

Shaw, Adrienne, 2:553
Shaw, Donald L., 1:20
Shaw, Susan, 2:574
Sheehan, Megan, 2:840
Shen, Lijiang, 3:1225
Shepard, Matthew, 3:1492
Sherif, Muzafer, 2:562–563
Sherry, John, 1:171
Sherry, Suzanne, 1:304
Shome, Raka, 3:1292
Shugart, Helene, 3:1510
Shulman, Lee, 4:1569, 4:1570
Siebert, Fred, 2:773
Siegel, Deborah, 2:573
Signaling theory, 4:1641–1642
Significance test, 4:1596–1598
  errors in, 4:1598
  null and alternative hypothesis, 4:1596
  one-tailed and two-tailed tests, 4:1596–1597
  overview, 4:1596
  statistical significance, 4:1597
  steps in significance testing, 4:1597–1598
Silent Language, The (Hall), 2:756
Silver, Christina, 1:217
Simmel, Georg, 4:1636
Simon, Herbert A., 3:1155, 3:1156
Simple bivariate correlation, 4:1598–1601
  correlation versus causation, 4:1600–1601
  directional relationship, 4:1599
  interpreting coefficient, 4:1599–1600
  overview, 4:1598
  Pearson product-moment, 4:1599
Simple descriptive statistics, 4:1601–1606
  importance of descriptive statistics, 4:1605
  measures of central tendency, definitions, 4:1601–1603
  measures of variance, 4:1604–1605
  overview, 4:1601
  sample distribution, 4:1603–1604
  using measures of central tendency, 4:1603
Simple random sampling (SRS), 3:1399–1400, 4:1525, 4:1538
Sinclair, Upton, 2:820
Singer, Peter, 1:195
Single-authored publication, 1:71
Single-sample chi-square, 1:129–130, 1:129 (table)
Siochrú, Seán Ó, 2:777
Sisterhood Is Powerful (Morgan), 4:1581

Situated identity theory, 1:16
Situation, argumentation theory and, 1:55
Situational crisis communication theory (SCCT), 1:292–293
68-95-99.7 rule, 4:1667–1668
Skewness, 4:1606–1608
  considerations for researchers, 4:1607–1608
  evaluating and adapting to, 4:1607
  implications of skewed distribution, 4:1606
  overview, 4:1606
  related issues, 4:1607
  testing for, in normal distribution, 4:1606–1607
Skill component, of communication competence, 1:186
Skills, for communicating. See Communication skills
Skin conductance. See Physiological measurement: skin conductance
Skinner, B. F., 1:102–103, 1:374
Skype, 2:801, 2:804, 2:863, 3:1144, 4:1776
Slagle, R. Anthony, 3:1391
Sleep Dealer (film), 2:780
SLIP technique, 2:977
Small group communication, 4:1608–1613
  communication and leadership in groups, 4:1610
  communication competence, 1:188
  communication perspective of small groups, 4:1608–1609
  creativity as communication process, 4:1610–1611
  decision making and, 4:1661
  development and directions of group communication research, 4:1611
  development of group communication theories, 4:1611–1612
  future research, 4:1612
  information processing, 4:1609–1610
  overview, 4:1608
SMCR (source, message, channel, receiver) model of communication, 2:884
Smiling, 2:490
Smith, Adam, 3:1277
Smith, Anthony, 2:776
Smith, Craig, 2:650
Smith, Howard, 4:1580
Smith, Marc, 2:823
Smythe, Dallas, 2:776
Snapchat, 1:265, 2:823, 4:1837
Snowball/network sampling, 4:1525

Snowball subject recruitment, 4:1613–1616
  advantages and disadvantages of, 4:1615
  overview, 4:1613
  quantitative and qualitative, 4:1615
  random versus nonrandom sampling, 4:1613
  types of nonrandom sampling, 4:1613–1614
  use of, 4:1614–1615
Snowden, Edward, 2:591
Snyder v. Phelps, 2:589
Sobel, Michael E., 4:1616
Sobel test, 4:1616–1618
  assumptions, limitations, alternatives, 4:1617–1618
  background and example, 4:1616–1617
  bootstrapping and, 1:106, 1:108
  overview, 4:1616
Sober, Elliott, 2:698
Social capital, online communities and, 3:1136–1137, 4:1641
Social change, critical ethnography and, 1:298–299
Social cognition, 4:1618–1623
  communication and, 4:1621–1622
  critiques of research, 4:1622–1623
  measurement paradigms, 4:1619–1620
  memory, 4:1620–1621
  overview, 4:1618–1619
Social comparison theory, 1:102
Social computing, 2:671–672
Social constructionism, 4:1623–1628
  antecedents of, 4:1623–1624
  critiquing literature sources, 2:885
  key intentions of, 4:1624–1625
  making sense of experience and, 4:1626
  nature of reality and, 4:1625–1626
  overview, 4:1623
  realism versus constructionism, 4:1627–1628
  society and, 4:1626–1627
  understanding reality, 4:1628
Social Construction of Reality, The (Berger & Luckmann), 2:885, 4:1623
Social desirability bias, 4:1556–1557
Social Determinants of Health (SDOH), 2:640
Social distance, 3:1266
Social exchange theory, 1:232
Social Genocide (film), 2:781
Social identity model of deindividuation effects (SIDE), 1:176

Social identity theory (SIT), 1:161, 2:761–762, 2:764
Social implications of research, 4:1628–1630
  overview, 4:1628–1629
  placement within a research report, 4:1629
  practical implications, 4:1629–1630
Social information processing theory (SIPT), 1:174–175
Social interaction theories, 2:672. See also Language and social interaction
Socialization, organizational identification, 3:1162–1163
Social media
  blogs and research, 1:100
  chat rooms and, 1:124–126
  communication history of, 1:202
  computer-mediated communication and surveys, 1:218–219, 1:220
  corporate communication and social media research methods, 1:265–266
  covert observation in, 1:286
  digital natives and, 1:383–386
  GLBT people and, 2:626–629
  Internet research and ethical decision making, 2:787–789
  media literacy, 2:972
  naturalistic observation via, 3:1083
  online social worlds, 3:1146–1149
  political debates and, 3:1275–1276
  social media mining, 3:1141
  term for, 2:627
  unobtrusive analysis, 4:1813
  See also Communication and technology; individual names of social media websites
Social media: blogs, microblogs, and Twitter, 4:1630–1633
  blogging technology, 4:1631
  efficacy, 4:1631
  history of blogs, 4:1630–1631
  identity and networks, 4:1632
  overview, 4:1630
  public discourse, 4:1631–1632
  See also Twitter
Social movements, in environmental communication, 1:422
Social network analysis, 4:1633–1636
  computer-mediated communication and, 1:221–222
  contexts and entities, 4:1633
  “links,” 4:1633–1634
  methods, 4:1634
  network configurations, 4:1635
  pragmatics, 4:1635–1636

Social networks, online, 4:1640–1643
  hyperpersonal model, 4:1642
  issues, 4:1640–1641
  overview, 4:1640
  research, 4:1641
  signaling theory, 4:1641–1642
  social capital, 3:1136–1137, 4:1641
Social network systems, 4:1636–1640
  analysis of, 4:1637–1638
  functions of, 4:1638
  history of, 4:1636–1637
  impact of, 4:1638–1639
  overview, 4:1636
  triadic closure, strong ties, weak ties, 4:1638
Social presence, 4:1643–1645
  causes, effects, and implications of, 4:1644
  conceptualization of, 4:1643–1644
  overview, 4:1643
Social relationships, 4:1645–1649
  attributes of, 4:1646–1648
  as drama, 3:1493–1494
  implications for research, 4:1648
  relevance for communication, 4:1645
  relevance for communication research, 4:1645–1646
Social-responsibility, 2:773–774
Social robotics, 3:1518
Social Sciences and Humanities Research Council (Canada), 1:50
Social values. See Research ethics and social values
Society for Professional Journalists (SPJ), 1:453
Sociolinguistic Archive and Analysis Project, 1:51
Sociology of knowledge, 4:1623
Sociopsychological theory, as tradition, 4:1762
Socrates, 2:884, 3:1481
Solomon, Richard, 4:1650
Solomon four-group design, 4:1649–1652
  advantages and disadvantages, 4:1650
  comparing to posttest-only design, 4:1651
  experimental design, overview, 1:479
  overview, 4:1649
  statistical analysis, 4:1650–1651
  threats to internal and external validity in pretest-posttest design, 4:1649–1650
Solórzano, Daniel, 1:304
Sony, 3:1141, 3:1518, 4:1858
Soup (Austria), 4:1640
Source credibility, 2:508

Southern Association of Colleges and Schools (SACS)/Commission on Colleges, 1:182
Southern Communication Journal, 1:3
Southern Regional Communication Association, 1:10
Southern States Communication Association (SSCA), 1:41, 2:555, 2:863
Space, diaspora and, 1:379
“Space Traders” (Bell), 1:303
Spacewar! (game studies prototype), 2:604, 4:1858
Spam, 4:1652–1654
  as data, 4:1653
  online research as, 4:1653–1654
  overview, 4:1652
  unsolicited and unwanted online communication, 4:1652–1653
Speaking Into the Air (Peters), 3:1232
Speaking turn, 3:1422–1423
Spearman, Charles, 1:273–274, 1:430
Spearman-Brown Prophecy formula, 3:1417, 3:1421
Spearman correlation. See Correlation, Spearman
Spectral analysis, 4:1768
Speech, origins of, 1:199
Speech Association of America Committee, 1:89
Speech codes theory, 2:846
Speech Communication Association of America, 1:89
Speech games, 3:1409–1410
Spencer, Leland G., 2:553
Spender, Dale, 2:554
Sphericity, 3:1433
SPIKES protocol, 1:85–86
Spiritual Communication Division (NCA), 4:1654, 4:1658
Spirituality and communication, 4:1654–1659
  common ground between, 4:1655–1656
  overview, 4:1654–1655
  research methods, 4:1658
  spirituality, defined, 4:1655
  understanding communication from spirituality perspective, 4:1657–1658
  understanding spirituality from communication perspective, 4:1656–1657
Spitzberg, B. H., 1:186, 1:333 (fig.)
Spivak, Gayatri, 3:1292, 3:1294
SPJ Code of Ethics (Society for Professional Journalists), 1:453
Split-plot design, 2:498

Spontaneous decision making, 4:1659–1662
  amount of information and, 4:1660
  decision making, overview, 4:1659
  level of decision, 4:1660–1661
  level of uncertainty, 4:1660
  nature of decision, 4:1660
Sports communication, 4:1662–1665
  media, 4:1663–1664
  organizations, 4:1664–1665
  overview, 4:1662
  participants, 4:1662–1663
  spectators, 4:1663
Sports Illustrated, “curse of,” 1:442
Spradley, James P., 2:700–701
Sprague, Jo, 1:191
SPSS, 1:268, 1:272, 2:515, 2:833
Stability. See Intercoder reliability standards: stability
Stacks, Don, 1:41
Stalin, Joseph, 4:1588
Standard deviation and variance, 4:1665–1668
  calculating standard deviation, 4:1666
  calculating variance, 4:1666–1667
  frequency distributions, 2:597
  normal curve distribution and relationship to, 3:1102–1103
  overview, 4:1665–1666
  range and, 3:1402
  68-95-99.7 rule, 4:1667–1668
  standard deviation, defined, 2:903–904, 2:956
Standard error, 4:1668–1670
  calculating, 4:1668–1669
  overview, 4:1668
  terms frequently confused with standard error of the mean, 4:1669–1670
Standard error, mean, 4:1670–1672
  calculating standard error of the mean, 4:1671
  overview, 4:1670
  significance of standard error of the mean, 4:1671
  standard deviation and standard error of the mean, 4:1670
Standard error of statistic, defined, 2:903–904
Standard score, 4:1672–1675
  calculating, 4:1672
  for comparative purposes and analyses, 4:1673–1674
  normal curve and, 4:1673
  overview, 4:1672
  value of, 4:1672–1673

Standpoint theory, 2:559
Stanford Prison Experiment, 1:67, 1:255–256
Stanley, Julian C., 2:770, 4:1651, 4:1677, 4:1678
Stanton, Elizabeth Cady, 2:572, 2:573, 2:611, 2:665
Starbucks, 3:1214
Starhawk, 2:554
STATA, for Kruskal–Wallis test, 2:833
Stationarity, 2:636
Statistical censoring, 1:347
Statistical conversion to common metric. See Meta-analysis: statistical conversion to common metric
Statistical power analysis, 4:1675–1677
  factors influencing power, 4:1675–1676
  overview, 4:1675
  power, defined, 4:1675
  software packages for, 4:1676–1677
Statues, as public memory, 3:1354–1356
St. Augustine, 3:1507
“Stealing thunder,” 1:293
Steffens, Lincoln, 1:19
Steinem, Gloria, 4:1581
Steinfield, Charles, 4:1641, 4:1642
Stepwise discriminant function analysis, 1:397
Stereotypes, 4:1825–1826
Sterk, Helen, 2:553
Stern, D. M., 2:554
Stevens, Stanley S., 2:940, 4:1559, 4:1561
Stewart, Jon, 3:1270
Stewart, Moira, 3:1198
Stigma, 2:643
Stiles, William, 3:1423
Stimulus pretest, 4:1677–1679
  experimental design, 4:1677–1678
  overview, 4:1677
  strengths and weaknesses of pretest, 4:1678–1679
St. Louis Observer, 2:820
Stochastic process, 4:1767
Stone, Lucy, 2:573, 2:574
Stooge manipulation, 2:901
Stormfront Studios, 4:1858
Story of the Weeping Camel, The (film), 2:781
Storytelling. See Authoring: telling a research story
Stouffer’s z, 4:1651
Strachey, Christopher, 4:1857
“Straight Outta Compton” (N.W.A.), 3:1503

Strategic communication, 4:1679–1682
  communication goals research, 4:1681
  defining, 4:1679–1680
  overview, 4:1679
  research in, 4:1680
  researching means of communication, 4:1681–1682
  target audience research, 4:1680–1681
Strategic maneuvering, 1:54
Strategic Simulations, 4:1858
Stratified sampling, 3:1400, 4:1525, 4:1539, 4:1550
Straubhaar, Joe, 2:778
Strauss, Anselm, 1:213, 2:630
Strength, social network analysis, 4:1634
Stroop effect, 2:688
Structural equation modeling, 4:1682–1687
  confirmatory factor analysis (CFA), 4:1682, 4:1685–1686
  error term, 1:426, 1:427
  extended uses of, 4:1687
  factor analysis: internal consistency, 2:519–521
  general approach, 4:1683
  measurement model, 4:1686
  meta-analysis: fixed-effects analysis, 2:991–992
  meta-analysis: model testing, 2:997–998
  model identification, 4:1685
  modification indices, 4:1684
  nested and non-nested models, 4:1684–1685
  ordinary least squares, 3:1150–1151
  overidentified model in, 3:1171–1172
  path analysis, 4:1686
  sample size, 4:1685
  SEM, overview, 4:1682–1683
  structural model, 4:1686
  tests of model fit, 4:1683–1684
Structural holes, 4:1635
Structural imperialism, 2:776
Structuration theory, 4:1687–1691
  mixed method approaches, 4:1690
  overview, 4:1687–1688
  qualitative approaches, 4:1688–1689
  quantitative approaches, 4:1689–1690
  small group communication, 4:1611
Structure, of focus group interviews, 2:578–579
Stuckey, Mary, 1:45
Student characteristics, instructional communication, 2:713–714
Student motivation, 1:193

Student-Newman-Keuls method. See Post hoc tests: Student-Newman-Keuls test
Student Nonviolent Coordinating Committee (SNCC), 4:1580
Students for a Democratic Society (SDS), 4:1580
Studies in Ethnomethodology (Garfinkel), 1:463
StudiVZ (Germany), 4:1640
Subaltern studies, 3:1293–1294
Subgroup analysis, 2:1005–1006, 2:1006 (fig.)
Subject attrition, 2:771
Subject searching, of databases, 1:349
Submission, of journal articles, 3:1375–1376
Submission of research to a convention, 4:1691–1693
  deadlines and process, 4:1692–1693
  overview, 4:1691–1692
Submission of research to a journal, 4:1693–1695
  journal submission, 4:1693–1694
  overview, 4:1693
  responding to journal decisions, 4:1694–1695
  style requirements, 4:1694
Sudman, Seymour, 4:1552
Suffrage, women’s, 2:572–574
“Sugar babies”/“sugar daddies,” 1:386
Sullivan, Patricia A., 2:552, 2:553
Summary articles, news writing for, 3:1097–1098
“Summer of Mercy,” 2:698
Sum of squares (SS), defined, 1:363. See also Decomposing sums of squares
Super Mario Brothers (video game), 4:1858
Support vector machine (SVM), 2:491
Surface structure, 1:317–318
Surgant, Johannes, 3:1508
Surra, Catherine, 4:1795, 4:1796
Surrogacy, 4:1695–1698
  achieving parenthood, 4:1698
  achieving pregnancy and preparing for birth, 4:1698
  communication research and, 4:1696–1697
  defined, 4:1695
  matching, 4:1697
  negotiated rates, 4:1695–1696
  professional representation, 4:1696
  types of, 4:1695

Index   1963 Survey: contrast questions, 4:1698–1701 assimilation and contrast effects, 4:1699–1700 mental representations and accessible information, 4:1699 overview, 4:1699 suggestions for designing a questionnaire, 4:1700–1701 Survey: demographic questions, 4:1701–1704 overview, 4:1701 placing demographic questions in surveys, 4:1703–1704 purpose of, 4:1701 types of demographic information, 4:1702 writing demographic questions, 4:1702–1703 Survey: dichotomous questions, 4:1704–1706 disadvantages, 4:1705–1706 limitations, 4:1705 overview, 4:1704–1705 Survey: filter questions, 4:1707–1708 formatting, 4:1707–1708 overview, 4:1707 potential issues of, 4:1708 Survey: follow-up questions, 4:1708–1710 branching questions, 4:1709 follow-up studies, 4:1709 overview, 4:1708–1709 probing questions, 4:1709 Survey instructions, 4:1723–1725 components of, 4:1723–1724 overview, 4:1723 Survey: leading questions, 4:1710–1711 avoiding, 4:1711 different types of, 4:1710–1711 ethical considerations, 4:1711 overview, 4:1710 SurveyMonkey, 2:786, 2:791, 3:1138 Survey: multiple-choice questions, 4:1711–1713 behavioral questions, 4:1712 demographic questions, 4:1712 overview, 4:1711–1712 questions about attitudes and beliefs, 4:1713 Survey: negative wording questions, 4:1713–1715 empirical problems, 4:1714 overview, 4:1713–1714 recommendations, 4:1714–1715 Survey: open-ended questions, 4:1715–1717 advantages of, 4:1715–1716 constructing questions, 4:1716 overview, 4:1715

Survey: questionnaire, 4:1717–1718 item selection, 4:1717 organization, 4:1717–1718 overview, 4:1717 Survey questions, writing and phrasing of, 4:1725–1728 effect questions, 4:1726 guidelines, 4:1726–1727 overview, 4:1725–1726 recommendations for specific types of questions, 4:1727–1728 survey wording, 4:1730–1734 Survey response rates, 4:1728–1730 importance of, 4:1728–1730 overview, 4:1728 Surveys causality and, 1:123–124 communication and technology research, 1:174–175 computer-mediated communication, 1:218–219 content analysis of, 1:241 corporate communication, 1:265 cross-sectional surveys, in emergency communication, 1:410–411 English as a Second Language, 1:418, 1:419 instrument structure, 3:1475–1476 intergroup communication, 2:768–769 mass communication research, 2:914–916 media and technology studies, 2:959 multiplatform journalism and survey research, 3:1039–1040 political debates and, 3:1273–1274 questionnaire design and privacy of participants in Internet-based research, 2:785–786 risk communication, 3:1516–1517 Surveys, advantages and disadvantages of, 4:1734–1736 overview, 4:1734 structure and content of surveys, 4:1734 survey advantages, 4:1735 survey distribution, 4:1734–1735 survey limitations and disadvantages, 4:1735 Surveys, using others’, 4:1736–1738 advantages of, 4:1738 evaluating others’ surveys, 4:1736 modifying others’ surveys, 4:1736–1738 overview, 4:1736 Survey: sampling issues, 4:1718–1721 conducting survey, 4:1720 error and bias in, 4:1720 selecting sample, 4:1719–1720 specifying population, 4:1718

Survey: structural questions, 4:1721–1722 advantages of structural questions, 4:1721 forms and examples, 4:1721–1722 structured and unstructured questions, 4:1721 Survey wording, 4:1730–1734 age of participants, 4:1731 elements to avoid or consider, 4:1732–1733 ethical considerations, 4:1733 gender of participants, 4:1731 nature of topic, 4:1731 overview, 4:1730–1731 sensitivity of topic, 4:1731–1732 tone of wording, 4:1732 Survival task, 2:634 Suter, Elizabeth, 2:553 Sutherland, Ivan, 2:671 Swiffer, 2:617 Symbolic convergence theory, 4:1611 Symbolic expression, 2:589 Symbolic interactionism, 4:1738–1743 central ideas, 4:1739–1740 criticisms, 4:1742–1743 dramaturgy, 4:1741–1742 ethnomethodology, 4:1742 gestures and significant gestures, 4:1741 history of, 4:1739 “I”/“Me” concept, 4:1740–1741 looking glass self, 4:1740 overview, 4:1738–1739 self-indication, 4:1741 Symbols, frame analysis and, 2:584–585 Synchronous online interviews, 3:1144 Synecdoche, 4:1743–1744 concepts, 4:1743–1744 overview, 4:1743 Syntactical units, 1:246 Syntax, of language, 1:373, 1:374, 1:375 Synthesis, of literature review, 2:876 SysCloud, 3:1142 Systematic sampling, 3:1400, 4:1525, 4:1539, 4:1550 Systems theory, in group communication, 4:1612 Tabachnick, Barbara, 2:515, 2:653 Tajfel, Henri, 1:120, 1:161, 2:767, 4:1647 Talk-in-interaction, 1:258, 1:259–260 Talon, Omer, 3:1508 Tarbell, Ida, 1:19 Tarde, Gabriel, 2:960 Target audience research, 4:1680–1681 Task/role-play experimental manipulation, 1:475

Teacher immediacy, 1:192–193 “Teamsterville” study, 1:162 “Tearoom sex” study, 3:1443 Technology. See Communication and technology Tehranian, Majid, 2:777 Teleoperated robotics, 3:1518 Telstar, 1:201 Television programs, gender-specific language in, 2:617–618 Temporal order, causality and, 1:122 Tennis for Two (game studies prototype), 2:604, 4:1857 Terministic screens, 4:1745–1748 applications of concept, overview, 4:1746 defining, 4:1745 origin of concept, 4:1745–1746 rhetorical function of, 4:1747–1748 “Terministic Screens” (Burke), 1:109, 4:1745 verbal method of, 4:1747 visual method of, 4:1746–1747 Terminology, for database searching, 1:350 Terrorism, 4:1748–1751 as communicative act, 4:1749 delayed measurement of, 1:370 overview, 4:1748–1749 professional press coverage of terrorists and, 4:1749–1750 terrorists’ use of media, 4:1750–1751 Testability, 4:1751–1753 barriers to, 4:1752 creating testability experiment, 4:1752 creating testable hypothesis, 4:1753 ethical implications of, 4:1752–1753 falsifiability versus, 4:1752 identifying, 4:1751–1752 overview, 4:1751 Testimonial (propaganda device), 3:1341 Testing, internal validity, 2:771 Test of English as a Foreign Language (TOEFL), 1:418 Test of English for International Communication (TOEIC), 1:418 Text and Performance Quarterly, 2:852 “Text recycling,” 3:1258 Textual analysis, 4:1753–1756 close reading for, 1:136–139 conducting, 4:1755–1756 in cultural studies, 1:321 for family communication research, 2:547–548 interpretive nature of, 4:1754–1755

media literacy, 2:970 overview, 4:1753–1754 public address and, 3:1349–1350 of qualitative data, 3:1376–1377 rhetorical method and, 3:1505 types of texts, 4:1754 Thematic analysis, 4:1756–1760 common pitfalls of, 4:1759 contributions, 4:1759–1760 deductive approach, 4:1758 describing themes, 4:1758–1759 inductive approach, 4:1758 interpreting themes, 4:1759 locating themes within data, 4:1757 overview, 4:1756–1757 systematic approach, 4:1757 thematic units, 1:246 Theoretical journals, 2:825–826 Theoretical sampling, 4:1545 Theoretical traditions, 4:1760–1763 critical tradition, 4:1762 cybernetic tradition, 4:1762 emerging traditions, 4:1763 overview, 4:1760 phenomenological tradition, 4:1761–1762 rhetorical tradition, 4:1761 semiotic tradition, 4:1761 sociocultural tradition, 4:1762 sociopsychological tradition, 4:1762 tradition in context, 4:1760–1761 Theory, defined, 2:790 Theory of planned behavior (TPB), 2:676 Theory of Reasoned Action (TRA), 1:214 Third-party imagined interactions (IIs), 2:685 “Third-person-perspective” communication activism social justice research, 1:12 Third-wave feminism, 4:1763–1766 criticism, 4:1765–1766 defining, 4:1764 gender and communication studies on waves of feminism, 2:611 origins, Riot Grrrls, and feminine feminism, 4:1764–1765 overview, 4:1763–1764 Third World Liberation Front, 1:60 Thomson Reuters, 1:5 Thonssen, Lester, 3:1088 Thorndike, Edward, 1:434 Thorson, E., 3:1275 Thought in the Act (Manning, Massumi), 3:1489 Threshold effects, 1:476 Thussu, Daya Kishan, 2:778

Tietjen-Moore test, 3:1169 Tijerina, Reies, 2:851 Time, 2:870, 4:1581 Time domain time series analysis, 4:1767–1768 Time issues of interviews, 2:805–806 of measuring patient-centered communication, 3:1199–1200 of online interviews, 3:1144–1145 of personal relationship studies, 3:1222 time scales and archiving data, 1:50 Time Magazine, 2:874 Time series analysis, 4:1766–1769 across multiple cases, 4:1768–1769 in communication research, 4:1769 defined, 4:1766–1767 frequency domain time series analysis, 4:1769 lag sequential analysis and, 2:838, 2:839 overview, 4:1766 time domain time series analysis, 4:1767–1768 Time series notation, 4:1769–1772 in context, 4:1771–1772 defined, 4:1770–1771 limitations of, 4:1771 TSN, overview, 4:1769–1770 Time Warner Inc., 2:971 Timmerman, David, 3:1486 Ting-Toomey, Stella, 1:310 Tipping Point (Gladwell), 2:697 Title IX, 4:1582–1583 Title of manuscript, selection of, 4:1772–1773 audience for, 4:1773 conventions within communication discipline, 4:1773 overview, 4:1772 topic for, 4:1772 Title VII, Civil Rights Act, 4:1580 Tong, Rosemarie, 2:554 Too, Yun Lee, 3:1486 Toothaker, Larry E., 1:367 Toulmin, Stephen, 1:53, 3:1090, 3:1481, 3:1510 Track records, 2:694–695 Tracy, Karen, 2:846 Trademarks, 1:261 Trade publications, determining relevance of, 2:874 Training and development, 4:1773–1776 general communication education versus communication skills training, 4:1774–1775 new and innovative choices for, 4:1776

overview, 4:1773–1774 prevalence of theory in communication training and development research, 4:1775 process of developing and evaluating communication training, 4:1775–1776 Transactive memory system (TMS), 4:1622 Transcription as Theory (Ochs), 2:846 Transcription systems, 4:1776–1779 example, 4:1778–1779 language, 4:1777 making choices for, 4:1777 Markov analysis, 2:905–909 overview, 4:1776–1777 paralinguistic elements and additional activities, 4:1777–1778 transcribing of interviews, 2:802–803, 2:806–807 Transfer (propaganda device), 3:1341 Transformation, data. See Data transformation Transgender Communication Studies (NCA), 2:623 Translation, English as a Second Language, 1:419 Treatment groups, 4:1779–1781 common problems and solutions, 4:1781 experimental research, 4:1779–1780 overview, 4:1779 use of groups in experimental research, 4:1780–1781 Trend analysis, 1:169 Triangulation, 4:1781–1784 confusion with multiple methods research (MMR), 4:1783 controversy over, 4:1783–1784 overview, 4:1781–1782 types of, 4:1782–1783 uses of, 4:1782 Trimmed estimator. See Data trimming Trimodal distribution, 3:1030 Trip to the Moon, A (film), 2:567 Trochim, William, 3:1420 Trueblood, Thomas C., 1:87 True score, 4:1784–1785 impact of random errors, 4:1785 measured score versus, 4:1784–1785 overview, 4:1784 True zero-point, 2:942 Truffaut, François, 2:568 Trump, Donald, 1:370, 2:618, 3:1272, 3:1276 Truncation/truncated distribution, 1:350 Truth, Sojourner, 2:574

t-test, 4:1785–1788 ANOVA comparison to, 1:34 benefits and drawbacks, 4:1788 degrees of freedom, 1:367–368 dichotomizing a continuous variable compared to, 1:435, 1:436 heterogeneity of variance, 2:652 hierarchical model, 2:661 independent-sample t-test, 4:1786–1788, 4:1788 (fig.) one-sample t-test, 4:1786, 4:1786 (fig.) overview, 4:1785–1786 univariate statistics, 4:1811–1812 t-test, independent samples assumptions of, 4:1790 case example, 4:1790 degrees of freedom, 1:367–368 independent-sample t-test, overview, 4:1786–1789, 4:1788 (fig.) use of, 4:1789–1790 t-test, one sample assumptions, 4:1792 case example, 4:1792 degrees of freedom, 1:367 determining test value, 4:1791–1792 one-sample t-test, overview, 4:1786, 4:1786 (fig.), 4:1790–1791 use of, 4:1791 t-test, paired samples, 4:1793–1795 assumptions, 4:1794 degrees of freedom, 1:367–368 overview, 4:1793 repeated measures design, 4:1794 researcher-produced pairs, 4:1793–1794 Tuenti (Spain), 4:1640 Tukey, John W., 1:346 Tukey test. See Post hoc tests: Tukey honestly significant difference test Tumblr, 4:1640 Turabian, Kate L., 1:126 Turing, Alan, 2:671, 3:1518 Turkle, Sherry, 1:380 Turn-at-talk, 3:1422–1423 Turner, John, 2:767 Turner, Lynn, 2:554 Turner, Stephen, 2:1012 Turner, Victor, 3:1216 Turning point analysis, 4:1795–1797 application, 4:1796–1797 meaning of relationships, 4:1795 overview, 4:1795 process of relationships, 4:1795–1796 retrospective interview technique, 4:1796 Turoff, Murray, 1:173, 1:174 Turowetz, Jason, 1:466 Tuskegee case, 4:1871

Tuskegee Syphilis Experiment, 1:257, 2:669, 2:705, 2:708 Twain, Mark, 3:1503 Twitter agenda setting and, 1:20, 1:21 blogs and research, 1:100 Burkean analysis, 1:111 communication and technology, 1:176 computer-mediated communication and surveys, 1:218 corporate communication and, 1:266 corporate communication and social media research methods, 1:265 covert observation with, 1:286 digital natives, 1:383 GLBT people and social media, 2:627 intercultural communication, 2:756 Internet research and ethical decision making, 2:787, 2:788, 2:789 journalism and, 2:821, 2:822, 2:823 measures of central tendency, 2:950 media and technology studies, 2:958 media literacy, 2:972 multiplatform journalism, 3:1038 new media analysis, 3:1092 online and offline data, compared, 3:1134 online data, 3:1138 online data documentation, 3:1140 online social networks and, 4:1640, 4:1641 political debates, 3:1275, 3:1276 reality television and, 3:1406 rhetoric and, 3:1496 robotic communication, 3:1519 social media: blogs, microblogs, and Twitter, 4:1630–1633 social media issues of blogs, microblogs, and, 4:1630–1633 social network systems, 4:1638 surveys, 4:1737 terrorism and, 4:1750 2 x 2 factorial ANOVA, 2:534–536, 2:535 (tables), 2:536 (tables) Two-factor design, nested factors, 2:497–498 Two-group pretest-posttest design, 4:1797–1799 design setup, 4:1797–1798 experimental design, overview, 1:478–479 similar designs, 4:1798 some problems with design and possible solutions, 4:1798–1799

Two-group random assignment pretest–posttest design, 4:1799–1801 experimental research and internal validity, 4:1800 overview, 4:1799–1800 precaution about, 4:1801 role of random assignment in, 4:1800–1801 Two-tailed test. See McNemar test Type I error, 4:1802–1803 Bonferroni procedures, 1:104–106 effect sizes, 2:507 false positive, 2:542–544 increase in, compared to Type II error, 2:541 logic of hypothesis testing, 2:678–679 in narrative analysis, 3:1076–1077 occurrence of, 4:1802 reducing, 4:1802–1803 relationship between Type I and Type II errors, 4:1804 Student-Newman-Keuls test, 3:1303–1304 Type II error, 4:1803–1805 Bonferroni procedures, 1:105 effect sizes, 2:507 false negative, 2:539–542 logic of hypothesis testing, 2:679 in narrative analysis, 3:1076–1077 occurrence of, 4:1803–1804 relationship between Type I and Type II errors, 4:1804 Tyson, Neil deGrasse, 3:1480, 4:1573 Ubiquitous computing, 2:671–672 Ultima Online (video game), 4:1858 Ulysses (Joyce), 1:111 Uncertainty management theory (UMT), 3:1380 Uncertainty reduction theory (URT), 3:1379 Unconscious racism, 1:303 Underrepresented groups, 4:1807–1809 Afrocentric methodology, 4:1808 critical race methodology, 4:1807 cultural studies, 4:1807–1808 ethnographic research, 4:1808 feminist research, 4:1808 overview, 4:1807 phenomenological approaches, 4:1808–1809 postcolonialism, 4:1808 Unimodal distribution, 3:1030 United Nations (UN) Statistics Division, 4:1577 UNESCO, 2:774 Univariate meta-analysis, 2:990–991, 2:992–993

Univariate statistics, 4:1809–1811 ANOVA, 4:1811–1812 chi-square, 4:1812 in communication research, 4:1809–1810 overview, 4:1809 simple regression, 4:1812 t tests, 4:1811 types of, 4:1811 Universalism, 1:311 Universal Pictures, 2:781 University of Birmingham (UK), 1:295, 1:321 University of California, Los Angeles (UCLA), 1:463 University of Leipzig, 2:835 University of Missouri, 2:819–820 University of North Carolina at Chapel Hill, 1:51 University of Wisconsin, 1:404 Unknown variables, 2:695 Unobtrusive analysis, 4:1811–1813 analyzing existing data sources, 4:1812–1813 analyzing existing documents, 4:1812 ethical considerations, 4:1813 indirect data collection, 4:1812 Internet and social media sources, 4:1813 overview, 4:1811–1812 Unobtrusive measurement, 4:1813–1816 content analysis, limitations, 4:1815 content analysis, overview, 4:1815 indirect measures, 4:1814–1815 overview, 4:1813–1814 quantitative descriptive analysis, 4:1815 secondary analysis of data, 4:1816 thematic analysis, 4:1815 “Up for Whatever” (Budweiser), 1:265–266 Uplike (France), 4:1640 Upward comparison, 1:102 Usability, of Web resources, 1:94 U.S. Air Force, 1:300 USA PATRIOT Act, 2:590 USA Today, 2:855, 2:874 U.S. Census Bureau African American communication and culture, 1:15 IRBs, 2:710 Latina/o communication studies, 2:849 longitudinal design, 2:893 reliability of, 2:870 sample versus population, 4:1523 sampling by, 4:1537 sampling size, 4:1525, 4:1526 secondary data, 4:1577 surveys, 4:1712

U.S. Congress copyright issues in research, 1:262 foundation and government research collections, 2:582 freedom of expression, 2:589 health care disparities, 2:640 institutional review boards, 2:709 interaction analysis example, 2:719 political communication, 3:1268, 3:1269 rhetorical leadership, 3:1500 second wave feminism and, 4:1582 U.S. Constitution, on freedom of expression, 2:589–590 U.S. Declaration of Independence, 2:573 U.S. Department of Defense, 4:1559 U.S. Department of Health, Education, and Welfare, 2:708 U.S. Department of Health and Human Services, 1:342, 1:453, 2:709 U.S. Department of State, 3:1141, 4:1748 U.S. Department of Veterans Affairs, 2:709, 4:1615 U.S. Department of War, 4:1559 Usenet Newsgroups, 1:381 Uses-and-gratifications approach (U&G), 1:176 Uses of Argument, The (Toulmin), 1:53 U.S. Federal Trade Commission (FTC), 4:1653 U.S. Food and Drug Administration (FDA), 1:342, 2:708, 2:709, 4:1582 U.S. House of Representatives, 4:1582 Using Software in Qualitative Research: A Step-by-Step Guide, 2nd Ed (Silver, Lewins), 1:217 U.S. Library of Congress, 1:46, 1:203, 2:582 U.S. National Archives and Records Administration (NARA), 1:45, 2:582 U.S. Postal Service, 1:411 U.S. Public Health Service, 4:1871 U.S. Safeguarding Vulnerable Groups Act, 4:1872 U.S. Senate, 4:1582 U.S. Supreme Court, 2:859, 3:1269, 3:1503, 4:1582. See also individual names of cases Utterance chains, 3:1410–1411 Valdivia, Angharad N., 2:852 Valencic, Kristin, 1:169 Validity conjoint analysis, 1:237–238 of focus groups, 2:580–581 meta-analysis: model testing, 2:997–998 of observer reliability, 3:1119

Validity, concurrent, 4:1817–1819 application, 4:1818–1819 assessment, 4:1817–1818 convergent validity compared to, 4:1819–1820 as imperfect gold standard, 4:1819 levels of validity, 4:1818 overview, 4:1817 Validity, construct, 4:1819–1822 application, 4:1821 assessment, 4:1820–1821 common errors, 4:1821–1822 levels of, 4:1821 overview, 4:1819–1820 Validity, face and content, 4:1822–1824 content validity, 4:1824 face validity, 4:1823–1824 measurement validity, 4:1823 overview, 4:1822 Validity, halo effect, 4:1824–1827 creation of stereotypes, 4:1825–1826 explanation by attribution theory, 4:1826 overview, 4:1824–1825 ratings by observers, 4:1825 techniques to avoid a halo effect in research, 4:1826 Validity, measurement of, 4:1827–1833 construct validity, elaborate version, 4:1831–1832, 4:1832 (fig.) construct validity, limited versions, 4:1832 content validity, 4:1830, 4:1830 (table) criterion-related validity, 4:1831 definitions, 4:1827–1829, 4:1828 (table) face validity, 4:1830 measurement, overview, 4:1827 operationalization, 4:1828–1829 qualitative types of validity, 4:1829 quantitative types of validity, 4:1830–1831 role of theory, 4:1832–1833 validity, defined, 4:1829 Validity, predictive, 4:1833–1835 application, 4:1834–1835 assessment, 4:1833–1834 common errors, 4:1835 levels of, 4:1834 overview, 4:1833 Value-added communication assessment, 1:185 Value-based evidence, 1:467 Value dimensions, Hofstede on, 1:159–161 Values. See Research ethics and social values

Van den Bulck, Jan, 1:153 Van Dijk, Teun, 1:391, 2:718 Van Eemeren, Frans, 1:54 Van Fraassen, Bas, 2:696 Van Hook, La Rue, 3:1485 VanLear, C. Arthur, 2:840 Van Maanen, John, 1:460 Variables control of extraneous variables, 1:483–486 lambda, to consider relationships among dependent variables, 2:842 latent, 2:502 manipulation check, 2:900–902 relationships between, 3:1412–1414 unknown variables, 2:695 See also individual measurement levels Variables, categorical, 4:1835–1837 overview, 4:1835 uses of, 4:1835–1837 Variables, conceptualizing, 4:1837–1840 concepts and, 4:1837–1838 constructs and, 4:1838 indicators and dimensions, 4:1838–1839 inductive and deductive distinction, 4:1839 overview, 4:1837 Variables, continuous, 4:1840–1841 overview, 4:1840 use of, 4:1840–1841 Variables, control, 4:1841–1842 examples, 4:1841–1842 how variables are controlled in research, 4:1842 overview, 4:1841 Variables, defining, 4:1842–1844 exploratory, descriptive, explanatory studies, 4:1844 overview, 1:435, 4:1842–1843 real, nominal, operational definitions, 4:1843 Variables, dependent, 4:1844–1845 to consider relationships among dependent variables, 2:842 defined, 1:477 multivariate statistics and sets of dependent variables, 3:1065–1067 overview, 4:1844 use of, 4:1844–1845 See also Experimental manipulation Variables, independent, 4:1845–1846 defined, 1:477 dependent (DV) and independent (IV), 1:477 independent variable manipulation, 3:1385

measuring relationships between dependent variables and, 4:1845–1846 overview, 4:1845 types of independent variables, 1:474–475 (See also Experimental manipulation) Variables, interaction of, 4:1846–1848 example of, in ANOVA, 4:1846–1847, 4:1847 (fig.), 4:1847 (table) in multiple linear regression, 4:1847 overview, 4:1846 Variables, latent, 4:1848–1849 as hypothetical constructs and unmeasurable constructs, 4:1848–1849 as means of data reduction or prediction, 4:1849 methods of extracting latent variables from observed variables, 4:1849 overview, 4:1848 Variables, marker, 4:1849–1851 example, 4:1850 overview, 4:1849–1850 use of, 4:1850–1851 Variables, mediating types, 4:1851–1852 computation of, 4:1852 expectation for existence of, 4:1851–1852 overview, 4:1851 Variables, moderating types, 4:1852–1854 computation of, 4:1853–1854 expectation for existence of, 4:1853 overview, 4:1852–1853 Variables, operationalization, 4:1854–1856 concepts and conceptualization, 4:1854–1855 connecting concepts, conceptualization, operationalization, 4:1856 operationalization, 4:1855–1856 overview, 4:1854 Variance calculating, 4:1666–1667 simple descriptive statistics, 4:1604 standard deviation and, 4:1665–1668 variance inflation factor (VIF), 3:1036 Variance matrix. See Covariance/variance matrix Varimax rotation method. See Factor analysis: varimax rotation Verant Interactive, 4:1858 Verbal communication religious communication, 3:1430 terministic screens, 4:1747 unitizing reliability in, 3:1421–1423 See also Message production

Verification, for ethnographies, 1:461 Verizon, 3:1142 Versificator’s Art (Matthew of Vendôme), 3:1507 Version control, 1:50 Vertov, Dziga, 2:567 Vico, Giambattista, 4:1744 Video games, 4:1857–1859 digital games and game studies scholarship, 2:604–605 history of, 4:1857–1858 history of communication research on, 4:1858–1859 overview, 4:1857 visual materials, analysis, 4:1866 Video materials, experimental manipulation, 1:475 Vietnam Veterans Memorial, 3:1354 Vijver, Fons van de, 3:1476 Vimeo, 1:265, 2:972 Vincent, Richard, 2:777 Vine, 2:627, 4:1640 Violence gender and communication, 2:614–615 gender-specific language, 2:617 peace studies on topics of, 3:1203–1206 See also Pornography and research Virtual communities, 3:1136 Virtual teams, communication and technology studies, 1:175–176 Virtue (arête), 3:1478 Visioning, 1:169 Visual communication studies, 4:1859–1862 cluster analysis, 1:142 imagined interactions, 2:685 overview, 4:1859–1860 terministic screens, 4:1746–1747 visual artifacts, 4:1860, 4:1861 visual communication, 4:1860–1861 visual ideographs, 2:683 visual rhetoric, 4:1861–1862 Visual images as data within qualitative research, 4:1863–1865 meaning of visual images, 4:1864 methodologies, 4:1864–1865 overview, 4:1863 visual images as data, 4:1863–1864 Visual materials, analysis of, 4:1865–1867 approaches, 4:1865–1866 frequency distributions, 2:594–599, 2:594–595 (tables), 2:596 (fig.), 2:597 (fig.), 2:598 (fig.) overview, 4:1865 VK (Russia), 4:1640 Vlogs, 2:627

Vocabulary for database searching, 1:350 development of communication in children, 1:374 Vocalics observational measurement, 3:1111–1112 vocalic code, 3:1099 Vogt, Thomas M., 1:482 Vote counting literature review methods, 4:1867–1870 common errors with process, 4:1867–1869, 4:1868 (tables) correcting for random error, 4:1869–1870 overview, 4:1867 process, 4:1867 reducing random error, 4:1869 Vulnerable groups, 4:1870–1873 Bangladesh refugee case, 4:1872 guidelines for protection of, 4:1872–1873 identifying, 4:1871 overview, 4:1870–1871 Tuskegee case, 4:1871 Waiting list, control groups and, 1:253 Wakefield, Andrew, 3:1443 Wales, Jimmy, 2:787 Walker, Alice, 1:17 Walker, Robert, 2:840 Wallis, W. Allen, 2:832 Wall Street Journal, 2:870, 2:874, 3:1097 Wal-Mart, 2:581 Walt Disney Company, 2:971, 3:1038 Walter, Otis, 3:1509 Walther, Joseph, 1:174, 1:219, 4:1642 Waltz With Bashir (film), 2:568 Wampum, as written communication, 3:1079–1080 Wang, P., 3:1040 Wanzer-Serrano, Darrel, 3:1504 Warner, Michael, 3:1390 War on terror (case study), 2:585–586 Warrant, in literature review, 2:876–877 Wartime communication, 4:1873–1877 modes of communication, 4:1875 outlooks and approaches, 4:1876–1877 overview, 4:1875 WASC Senior College and University Commission (WASC-SCUC), 1:182 Washington Post, 2:820, 3:1039, 4:1749 Wasko, Janet, 2:776 Watkins, Carleton, 3:1505 Watzlawick, Paul, 4:1624

Weaver, David H., 2:821 Weaver, Warren, 2:884 Weber, Rene, 1:170, 1:171 Weber, Steven J., 1:372 Web of Knowledge, 1:132–133 Web of Science, 1:349 Weick, Karl, 4:1627, 4:1628 Weingartner, Charles, 4:1625 Welch–Satterthwaite t, 2:652 Welch’s F, 2:653 WELL, 3:1136 Wellman, Barry, 1:221 Wells, H. G., 1:19 Wells Fargo, 2:581 West, Kanye, 2:911 Westboro Baptist Church, 2:589 Western Communication Association (WCA), 1:10, 1:21 Western Journal of Communication (ECA), 1:3, 3:1391 Western States Communication Association, 1:4, 2:555 Wetherell, Margaret, 1:391 What’s the Matter with Kansas? (Frank), 2:697–698 WhatsYourPrice.com, 1:386 White, Alice, 3:1423 White, David Manning, 1:20, 2:820 White, Eugene E., 1:88 White, Hayden, 4:1744 White propaganda, 3:1340 White’s test, 2:655, 2:656 Whole-plot design, 2:498 Whorf, Benjamin Lee, 1:310–311, 4:1625, 4:1628 Wichelns, Herbert A., 1:88, 2:663, 3:1087, 3:1088, 3:1509 Wickens, Thomas D., 1:366 Widlöcher, Antoine, 2:747 Wieder, Larry, 1:464–465 Wiemann, John, 1:187, 1:189 WikiLeaks, 3:1143 Wikipedia, 1:27, 1:46, 1:383, 2:787, 2:962, 3:1369 Wilcoxon–Mann–Whitney rank sum test, 2:832 Wildcard, for searching, 1:350 Wilks’ lambda, 1:398, 2:654, 2:841–842, 3:1061 Williams, Angie, 2:764 Williams, Linda, 3:1286, 3:1290, 3:1291 Williams, Patricia, 1:303 Willis-Rivera, Jennifer, 2:852 Wilnat, Lars, 2:821 Wilson, Thomas, 3:1508 Wilson, Woodrow, 3:1500 Wilson score interval, 2:904 Winkler, Carol K., 2:683

Winsor, Charles, 1:347 Winsorizing, 1:347 Winston, Roger, 1:72, 1:73 “Wisconsin’s College of the Air” (University of Wisconsin), 1:404 Wiseman, Frederick, 2:567 Withers, Lesley, 2:840 Within-groups variance estimate, 3:1129 Within-participants factorial designs, 1:36 Within-subjects design, 4:1877–1880 examples, 4:1877–1880 factorial ANOVA, 2:534 individual differences, 2:692 overview, 4:1877 Wittgenstein, Ludwig, 1:119, 4:1627, 4:1746 Wodak, Ruth, 2:718 Womanism, 1:17–18 Women & Language, 2:555, 3:1391 Women’s State Temperance Society, 2:573 Women’s Studies in Communication, 2:555, 4:1583 Wood, Julia, 4:1581 Woodhull, Victoria, 2:665 Work-life balance, 1:114 Workplace business communication in, 1:114–115 gender and communication, 2:613–614 See also Business communication Works and Days (Hesiod), 3:1506 World Anti-Slavery Convention (1840), 2:573 World Bank, 2:777 WorldCat, 1:44 World cinema, 2:779 WorldCom, 2:570 World Health Organization, 4:1578 World Intellectual Property Organization, 4:1578 World Medical Association, 1:448, 2:708 World of Warcraft (video game), 4:1858 World-systems theory, 2:777 World Tourism Organization, 4:1578 Wrage, Ernest, 2:664 Wreen, Michael, 3:1167 Wrench, Jason S., 3:1464 Wright, Sewall, 3:1193, 3:1194 Writer’s block, 4:1880–1883 avoiding, 4:1882–1883 defined, 4:1880 overcoming, 4:1881–1883 sources of, 4:1880

Writing, history and development of, 1:199–200 Writing a discussion section, 4:1883–1886 objectivity, 4:1885 overview, 4:1883 scholarly tone and language, 4:1885 structure, 4:1883–1884 Writing a literature review, 4:1886–1888 overview, 4:1886 purpose and format, 4:1886–1888 Writing a methods section, 4:1888–1891 abstract or executive summary and, 1:1–2 audience, 4:1890–1891 overview, 4:1888–1889 parts of methods section, 4:1889–1890 tips and resources for, 4:1891 Writing a results section, 4:1891–1894 overview, 4:1891 qualitative results section, 4:1893–1894 quantitative results section, 4:1891–1893 Writing process, the, 4:1894–1898 anonymous source of data, 1:39–40 audience, outlet, format, 4:1894–1895 business communication and writing skills, 1:113 for case study, 1:118 citations and attributions, 4:1897–1898 grounded theory, 2:631–632 limitations of research, 2:864 for news media, 3:1097–1098 for participant observation, 3:1188–1189 publication style guides for, 3:1365–1368 style, 4:1895 transitions, 4:1897 verb tense and verb choice, 4:1896–1897 voice, 4:1895–1896 writer’s block, 4:1880–1883 See also Field notes; individual types of writing genres Written materials, experimental manipulation, 1:475 Wundt, Wilhelm, 2:835 Wurtee, 3:1080 Wurzelbacher, Samuel Joseph, 2:576 XML (eXtensible Markup Language), 1:51

Yahoo!, 2:784, 4:1530, 4:1574 Yale University, Milgram experiment at, 1:256–257, 1:362, 2:669, 3:1443 Yan, Zheng, 4:1732 “Yellow peril,” 1:60 Yep, Gust, 3:1391 Yik Yak, 3:1083 Yoked control groups, 1:253 Yosso, Tara, 1:304 Young, Thomas J., 2:517 Young Lords: A Reader, The (Wanzer-Serrano), 3:1504 YouTube coding of data, 1:149 corporate communication and social media research methods, 1:265 GLBT people and social media, 2:627, 2:628 Internet research and ethical decision making, 2:788 mass communication, 2:914 massive multiplayer online games, 2:918 media diffusion, 2:962 media literacy, 2:972 online and offline data, compared, 3:1134 open access publications and, 3:1369 political debates, 3:1275 popular communication, 3:1281 rhetoric and, 3:1496 social network systems, 4:1637 terrorism and, 4:1750 wartime communication, 4:1876, 4:1877 Zaeske, Susan, 2:574 Zarefsky, David, 2:665 Zaug, Pamela, 3:1154 Zavattini, Cesare, 2:567 Zenger, John Peter, 2:593 Zhang-Wei, 2:553 Zhiyi, 3:1430 Zimbardo, Philip, 1:255–256, 2:669 Žižek, Slavoj, 2:698 z score, 4:1899–1901 application, 4:1900 calculation, 4:1899–1900 defined, 4:1899 importance of, 4:1900 normal distribution curve, 3:1103 overview, 4:1899 simple descriptive statistics, 4:1604 z transformation, 4:1901–1902 applying z scores, 4:1901–1902 defined, 4:1901 standardization and z scores, 4:1901